I'm getting the same kind of error as described here. I'm running into the file name limit of Windows in TFS Build. The answers to that question suggest redefining the build agent working directory. I'm not sure if I should do this. I have a few questions...
Does changing the build agent working directory affect me, or everyone using the build agent?
The build agent is not run by me; it's on a different computer controlled by a different department in our company. I can get to the "manage build controller" settings, and I seem to be able to make these changes, I'm just scared to!
There are a couple of options here. To change the build directory on the build agent, in Team Explorer right-click on the "Builds" folder and select "Manage Build Agents". Select your build server(s) and change the build folder to something like "e:\b" (or even "e:\" if that's all you use that drive for). This will change the build working directory for that build server and shave a few characters off the working directory.
In addition to this, you can map the workspace used by the build as far down the tree as possible. This is a good idea even if you're not running out of characters in the path, as TFS uses the workspace to determine which code to get for your build.
e.g. if your workspace is mapped to $/TeamProject = $(SourceDir), TFS will get all of the code in the team project for the build, even if you only want one solution from one branch.
Consider a Team Project set up like this
    $/TeamProject/DevBranch/Docs
                           /Source/Solutions/Solution1
                                            /Solution2
                                            /etc...
                           /More Stuff
                 /MainBranch/[Same As Dev]
                 /HotFixBranch/[Same As Dev]
                 /ReleaseBranch/[Same As Dev]
If your workspace is mapped to $/TeamProject, you're going to get everything from TFS when all you really want is the code in the "Solution2" folder of the dev branch. Change the mapping to $/TeamProject/DevBranch/Source/Solutions/Solution2 and you've just shaved about 60 characters off the length of the path. In addition, you'll speed up the builds, as they will only get the code they need.
The working directory is used by every build that runs on that build agent, so yes, it could affect multiple people. If you're running into a path limitation, you should definitely change the working directory. I would just clear it with the other people in the department before you proceed, to make sure everybody knows what's going to be done.
The fully qualified name must not exceed MAX_PATH, which on Windows is 260 characters. In reality, different tools use different values.
The GAC, for example, rejects file names which exceed 256 characters. "File name" here means the path plus the file name itself.
TFS is built on Windows, hence the same limitations exist for TFS builds. The only feasible solution is to use a workspace directory that is as short as possible to limit the number of issues you will run into. You could try to subst the workspace directory to make it even shorter, but the fundamental file system limitation will not go away.
Yes, Microsoft should solve this issue, but realistically that will not happen soon. Any tool (MS or non-MS) used during the build will run into the same limitations. At best the tools will crash when file names get too long; in the worst case you get corrupt binaries without noticing. Applications written in C/C++ typically use MAX_PATH+1-sized buffers for file names, so even if Windows allowed longer file names (there are ways, but ...), the buffers of nearly all applications would be too small to make use of them.
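From managed code this limit typically surfaces as a PathTooLongException. A minimal C# sketch of that behaviour (assuming the classic .NET Framework, before the opt-in long-path support added in later versions):

    using System;
    using System.IO;

    class MaxPathDemo
    {
        static void Main()
        {
            // Build a path well beyond the classic 260-character MAX_PATH limit.
            string longPath = @"C:\temp\" + new string('a', 300) + @"\file.txt";

            try
            {
                File.WriteAllText(longPath, "test");
            }
            catch (PathTooLongException ex)
            {
                // The classic .NET Framework rejects the path before touching the disk.
                Console.WriteLine("Path too long: " + ex.Message);
            }
        }
    }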
You are not alone; we all suffer. On the bright side, this limitation does prevent nitpicking architects from enforcing naming conventions where the target name alone exceeds MAX_PATH.
Input from someone who had to qualify an application for being a Microsoft Gold Partner would be especially welcome here.
We are running our software through the Microsoft Platform Ready Test Tool. Its installer was made with the latest stable version of WiX. The gist of the test: while the tool is running, you install, run, and uninstall your application, and the Test Tool reports whether it behaved well or not. It fails the test because of leftover files and registry keys, but I haven't a clue how we could possibly clean those up reliably.
EDIT: this is the application in question: https://github.com/modulogrc/modSIC
Our software sets up a Windows service, which is correctly removed on uninstall. The offending leftover files are like this:
Under the logged-in user's Local\Temp directory, four DLLs with 8-character random names (different at each install) and many .tmp files with names starting with "wbk" (and one starting with "vgx")
A lot of .rbf files under C:\Config.Msi
C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\{a very long name}
Another long-named file under the user's AppData\Local\Microsoft\Windows\Caches directory -- and under some other user's, too. (That other user wasn't doing anything while I was testing)
Many under C:\Users\Public\Documents\MPR -- well, those ones are left by the Test Tool itself, and I suppose we can justify that with a waiver when certifying.
The offending registry keys:
Many under HKU{Various IDs} under folders named "MuiCache"
Many under HKU{Various IDs} under folders named "UserAssist"
These intriguing ones, also under HKU{various IDs}\Software\Microsoft\Windows\CurrentVersion:
ImmersiveShell\StateStore\ItemsStateStoreLastWrite
UFH\SHC\8
Explorer\GlobalAssocChangedCounter
Many under HKU\{Some ID}\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags\AllFolders\Shell\Microsoft.Windows.ControlPanel
SessionIdHigh and SessionIdLow, under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing
HKEY_LOCAL_MACHINE\SYSTEM\RNG\Seed
The ones below, and the same thing with CurrentControlSet instead of ControlSet001 (it's a link, I believe)
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\FastCache\Data\Volatile
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\MUI\StringCacheSettings\StringCacheGeneration
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Session
The files left under the MPR folder tell me the Test Tool does complain about things it shouldn't, so I'm inclined to submit the app anyway. But if there's something, anything, I could ameliorate with tweaks to the WiX project's Product.wxs file (some of the other criteria of the MPR test were indeed met by changing it), I'm all ears.
I have a few softlinks, say 1000 images, which I created on a MacBook Pro and which I am using in my iOS apps.
Now I am porting the same app to Windows Phone 8, so I want to reuse the same softlinks in the Windows Phone 8 app as well. How can I do that?
I have tried to open the softlinks on a Windows 8 machine, but it says "File format is not supported".
I have both the original files and the softlinks on my Windows machine.
Is there any other way I can reuse the same softlinks? If not, what is the best approach I can follow?
EDIT
OK, here is some more info on this:
On the MacBook Pro
I have a folder on the desktop that contains the actual images. I created softlinks to them using a script, and these softlinks are placed in a different folder.
I am using these softlinks in my iOS app.
On Windows 8
I copied both the folder with the softlinks and the folder with the actual files from the Mac.
I pasted the actual-files folder on my desktop and the softlinks folder on the D: drive. If I go to the softlinks folder on the D: drive and check those images, they show up blank, because they are not pointing to the actual files.
I have both the actual-files folder and the softlinks folder.
One more point: when you create a softlink on the MacBook Pro, it shows this icon:
But on Windows 8 it's blank, nothing like that.
Your question is missing a couple of details, so I'm going to have to make a guess about your situation. The problem is:
You created some symlinks using OS X on a file system and now you are
having problems accessing those symlinks in Windows.
Unless you did something tricky, like installing 3rd party file system drivers, then the only file system that both Windows and OS X can read/write to natively is FAT based. So I'm guessing your situation is:
You created some symlinks using OS X on a FAT32 file system and now
you are having problems accessing those symlinks in Windows.
Assuming the above situation, the problem is that there are no symlinks in FAT32 because the file system doesn't support them. OS X is tricking you because it "just works". What is really happening is that OS X is creating an ASCII text file that contains the line "XSym" along with the name of the file it is "linking" to, plus some file system information. You can confirm this by opening your softlinks on your Windows system in notepad. Normally you would see binary code if you were opening an actual image in notepad, but instead you should see the text from these fake symlinks.
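If you want to inspect these fake links programmatically rather than in Notepad, something like the following C# sketch can help. It only relies on what is described above - a small text file that starts with an "XSym" marker, with the target name on one of the following lines - so treat the exact layout as an assumption rather than a specification:

    using System;
    using System.IO;
    using System.Linq;

    class XSymInspector
    {
        // Rough check for the "fake symlink" stub files OS X writes on FAT volumes.
        // Assumption: the stub is small and text-based, starts with "XSym", and
        // carries the link target on one of the following lines.
        static void Main(string[] args)
        {
            string path = args[0];

            // Real images are large and binary; these stub files are tiny.
            if (new FileInfo(path).Length > 4096)
            {
                Console.WriteLine("Probably a real file, not a fake symlink.");
                return;
            }

            string[] lines = File.ReadAllLines(path);
            if (lines.Length == 0 || lines[0].Trim() != "XSym")
            {
                Console.WriteLine("No XSym marker found.");
                return;
            }

            Console.WriteLine("Fake symlink detected. Candidate target lines:");
            foreach (string line in lines.Skip(1).Where(l => !string.IsNullOrWhiteSpace(l)))
                Console.WriteLine("  " + line.Trim());
        }
    }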
So, what do you do? I see a couple of options:
You could use a file system that supports soft links. This could mean using HFS+ (the OS X file system), which would require you to install HFS+ drivers on your Windows system so that it can read/write the file system. Or it could mean going in the other direction and using NTFS (the Windows file system), which would require you to install NTFS drivers on your Mac. Note that most recent versions of OS X can read NTFS file systems; they just can't write to them.
You could use the fake symlinks that OS X is creating. This would require writing a parser to interpret the links or finding a library that does this for you. I don't have a copy, but I believe the XSym format is covered in the "OS X Internals" book.
You could rethink the approach to your problem so that it doesn't require you to use symlinks.
If this didn't solve your problem, then please provide more details because I had to make some guesses about your situation.
==EDIT==
Take a look at the subversion documentation on symbolic links here.
The relevant quote from the doc is:
Versioning Symbolic Links
On non-Windows platforms, Subversion is able to version files of the
special type symbolic link (or “symlink”). A symlink is a file that
acts as a sort of transparent reference to some other object in the
filesystem, allowing programs to read and write to those objects
indirectly by way of performing operations on the symlink itself.
When a symlink is committed into a Subversion repository, Subversion
remembers that the file was in fact a symlink, as well as the object
to which the symlink “points.” When that symlink is checked out to
another working copy on a non-Windows system, Subversion reconstructs
a real filesystem-level symbolic link from the versioned symlink. But
that doesn't in any way limit the usability of working copies on
systems such as Windows that do not support symlinks. On such systems,
Subversion simply creates a regular text file whose contents are the
path to which the original symlink pointed. While that file can't
be used as a symlink on a Windows system, it also won't prevent
Windows users from performing their other Subversion-related
activities.
Basically, it says something similar to what I mentioned earlier: symlinks are not supported well, if at all, on Windows systems. Subversion just creates text files with the contents of the link, so you can either figure out how to parse these text files yourself or try to find a library that will parse them for you.
Maybe the problem is that there are so many links in one directory:
There is a maximum of 31 reparse points (and therefore symbolic links)
allowed in a particular path.
See also
Programming Considerations
I know I am late to this, but I hope that others may benefit from my answer, even though the asker may have long since moved on.
Some background
Symbolic link semantics differ considerably between unixoid systems and Windows. As was stated before, Windows uses reparse points to implement symbolic links and junction points (some deduplication features on the Server editions also seem to use them).
Now, a reparse point contains extra data as a hint to the I/O manager and object manager. Essentially, based on the reparse point tag (a 32-bit value; third-party reparse points also carry a GUID), the type of reparse point can be determined, and a file system filter driver then handles the details. You can find a moderately detailed description of this in the 6th edition of "Windows Internals", chapter 9, in a recent Windows Driver Kit, or on MSDN under REPARSE_GUID_DATA_BUFFER (and related topics).
On unixoid systems the file system metadata also contains a clue that the file is a symlink. If you use ls -l, that clue is visible in the form of a leading l, e.g. in:
lrwxrwxrwx 1 user group 38 2015-10-12 11:51
The actual contents of symlinks are system-specific as well, on Linux for example they contain merely the target path.
What the Windows and *nix symlinks share is that the target needn't exist at the time of creation. Also on Windows a symlink can point to a network location, which is special because on Windows network paths differ from local paths.
Possible compatibility
Assuming a symlink was created on the OS X or Linux side, we can imagine certain levels of compatibility. If the file system driver on the Windows side presented symlinks as reparse points, and some party (either said file system driver or a file system filter) handled those reparse points, it would be possible to interpret the target path of a symlink in some way.
Converting forward slashes to backward slashes is the least concern, however.
In this answer I already outlined a few cases where there would be no meaningful translation possible.
Essentially the only type of symlink for which I see a potential for compatibility is the relative symlink. But even for those it is necessary to point out that the target path may not point outside of the folder hierarchy that is visible on the Windows side. That is, if your symlink on the OS X or Linux side resides inside /var/www/html and points to ../../../something, it becomes meaningless in a case where /var is the mounted volume on Windows.
If, however, such a symlink /var/www/html/foobar pointed to ../html1/foo/bar, chances are that if /var was the mounted volume on OS X or Linux and is now the mounted volume on Windows, the relative target path still makes sense (after adjustments such as forward to backward slashes, etc.).
For any absolute target paths, the file system driver or the file system filter driver would have to get some hints on how to translate the source form of a symlink into the target form.
E.g. if a symlink pointed to /home/foo/bar the /home part might translate to a specific mounted volume.
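To make that translation idea concrete, here is a small hand-rolled C# sketch. The prefix-to-drive hints and the Translate method are made up for illustration; they just show the kind of user-supplied mapping described above:

    using System;
    using System.Collections.Generic;
    using System.IO;

    static class SymlinkTargetTranslator
    {
        // Hypothetical user-supplied hints: which *nix prefixes live where on Windows.
        static readonly Dictionary<string, string> PrefixMap = new Dictionary<string, string>
        {
            { "/home", @"D:\Users" },   // purely illustrative
            { "/var",  @"E:\" }         // e.g. /var is the volume mounted as E:
        };

        // Translate a symlink target recorded on OS X/Linux into a Windows path.
        // linkDirectory is the Windows folder containing the (former) symlink,
        // used to resolve relative targets such as "../html1/foo/bar".
        public static string Translate(string unixTarget, string linkDirectory)
        {
            if (!unixTarget.StartsWith("/"))
            {
                // Relative target: flip the slashes and resolve against the link's folder.
                string relative = unixTarget.Replace('/', '\\');
                return Path.GetFullPath(Path.Combine(linkDirectory, relative));
            }

            // Absolute target: needs a hint about where that tree lives on Windows.
            // (Naive prefix match; good enough for a sketch.)
            foreach (var pair in PrefixMap)
            {
                if (unixTarget.StartsWith(pair.Key))
                {
                    string rest = unixTarget.Substring(pair.Key.Length).TrimStart('/');
                    return Path.Combine(pair.Value, rest.Replace('/', '\\'));
                }
            }

            throw new InvalidOperationException("No mapping hint for " + unixTarget);
        }
    }

With those made-up hints, Translate("../html1/foo/bar", @"E:\www\html") yields E:\www\html1\foo\bar, while Translate("/home/foo/bar", @"E:\www\html") maps to D:\Users\foo\bar.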
But you can already see that this requires a lot of user intervention, which is probably why most people would consider it futile to even attempt a meaningful translation.
Possible workaround for SVN
A possible workaround for you could be to use SVN externals. It depends on the exact scenario, but since you are using SVN they come to mind.
You can think of SVN externals as Subversion's native symlinks. I have used them this way and I know of several others who have, but I don't know how widespread that train of thought and subsequent usage is.
Attention: externals pointing to files were only introduced in SVN 1.6, so this may or may not be an issue in your scenario.
SVN externals come in several flavors. You can set them for folders or files (files only with 1.6 and newer).
And an external can point to:
1. an external repo (schema://server/path)
2. relative to the same repo (^/path)
3. relative to the schema (//server/path), or
4. relative to the parent directory
You'll probably want 2 or 4 from that list. Most likely you'll want 4, though, because file externals must point to the same repository.
Long story short
If your images are in a folder such as trunk/images and you have a folder trunk/platforms/windows/images, you can either set the svn:externals property on trunk/platforms/windows to have an external named images pointing to ../../images (i.e. a directory external) or, assuming you want to use a different hierarchy or different names underneath trunk/platforms/windows/images, you can create file externals like so (the images subdirectory must exist in the working copy):
cd trunk/platforms/windows
svn propedit svn:externals images
and add individual externals like this:
../../../images/filename.jpeg other-filename.jpeg
Please note that the target directories need to exist in the repository and the working copy, so for an external like this:
../../../images/filename.jpeg foo/other-filename.jpeg
the subdirectory trunk/platforms/windows/images/foo must exist.
Updating your working copy will cause those externals to manifest as versioned files inside the working copy. So they are a type of symlink that exists in SVN and manifests as proper files in the working copy, which means all platforms can handle them equally.
I'm trying to get one of our internal C# ClickOnce applications into VS Online for source control, to allow access for an external developer.
I think I've got it set up and working in Source Control Explorer, but am having trouble working out how to actually use the setup day to day.
I've got some Git experience but zero TFS experience; I went with the TFS option as I thought it more likely that developers would be familiar with it than with Git.
What I'm trying to achieve is 3 branches; Main/Trunk, Dev and Release and be able to deploy at least Release and Main. Release is for external clients, Main for internal clients.
At the moment my Source Control Explorer looks like;
DefaultCollection
-->Name of project
---->(Branch icon) Dev (created as a Branch from Main)
---->(Branch icon) Main
---->(Branch icon) Release (created as a Branch from Main)
2 things:
In terms of use I'm not really sure how to swap between the branches for coding / making changes? Do I just open the solution file for the branch I want to work on then save all changes as I go, then commit that as a changeset? Or is it a matter of manually checking the file out, working on it, then checking it back in again?
Given it's a ClickOnce app, each branch is deployed to a different IIS site, meaning different app identities, paths, and settings. Am I right in using branches for this, or is there a better way? I'm worried about someone committing the wrong file and causing a mandatory uninstall/reinstall of the app.
Any pointers / docco greatly appreciated; just note I'm using VS2010.
Thanks,
Liam
How do I swap between branches
If you're used to Git, then the 'heavyweight' branching in TFVC can be a bit confusing. There is no real "switching between branches" as you've encountered. You map a branch to a local folder, and by opening the files there you're "working on that branch".
As Lee points out, you can create separate workspaces for each branch, which will isolate the work areas for each. If you're using a Local Workspace, each workspace gets its own "$tf" folder, the TFVC equivalent of the ".git" folder.
There's a couple of documents on MSDN that explain this in a little more detail:
Set up TFVC
Create one or more workspaces
Optimize your workspaces
How do I check in
A changeset in TFVC is the equivalent of a commit in Git: it's a logical set of changed files that is committed/pushed as a whole, or not at all. But just as in Git, you can commit all the changes in your local work area at once, or you can exclude certain changes from the first commit and stick them in a second.
In TFVC you'd normally try to commit a logical set of files that fixes a bug, achieves some goal, etc. Though it's still possible to check out/check in files individually, the chances are much higher that you'll cause the sources in the main repository to end up in an inconsistent state that way.
See
What is a Changeset
Check in your work
Shelving your work
As for your second question
Depending on how far you want to go, you could set up Team Build to actually build the application and take the configuration from a specific location during the build process. That way you wouldn't have to store the configuration for your production environment alongside the development settings. Configuration files can contain sensitive information; you might not want to have them in source control, except for the development versions.
You can also store the config files in a special folder in each branch and make sure that each time you merge them, they're updated accordingly.
And you can, as Lee mentions, look into Config Transformations, which apply an XML transform to your config file in the build process. That way you can have multiple config files stored in each branch, and the selection of your "Configuration" in Visual Studio will define what the final config looks like.
See:
Tricks with app.config files and click once
The _PublishedApplication Nuget package
SlowCheetah
In terms of use I'm not really sure how to swap between the branches for coding / making changes?
I recommend creating separate workspaces for each branch. This way you won't accidentally check in release code when you are trying to check in dev code. Also, when you want to switch which branch of code you are working on, you switch your workspace. This should keep things "cleaner" and easier to work with.
Do I just open the solution file for the branch I want to work on then save all changes as I go, then commit that as a changeset? Or is it a matter of manually checking the file out, working on it, then checking it back in again?
You shouldn't have to manually check it out. If I remember correctly, it will default to auto check-out when you start to make changes. You can check code in, in chunks as big or small as you want. But make sure that if you are checking in changes to ClassA.cs that depend on changes in ClassB.cs, you check those in as well. You don't want to leave the source code in a broken state for the other developers.
If you start working on something and have to suspend that work to do some other task that has risen in importance, shelve your work instead of letting your workspace get cluttered up with half-done work that makes it difficult to manage check-ins.
Given it's a ClickOnce app; each branch is deployed to a different IIS site, meaning diff app identies, paths and settings. Am I right in using branches for this or is there a better way?
I'd look into using web.config transformations for this. You'll still want multiple branches, but to separate tested/completed/in-development code from each other.
So I have been writing to
Environment.SpecialFolder.ApplicationData
this data file that needs to be deleted upon uninstall. I am using Inno Setup to build my installer. It works great for me. My data file hangs out in the above path because when I used to try to write it to
Application.ExecutablePath
certain boxes I tested it on would throw a nasty error when trying to write data there. I did some research; apparently that location is not always writable, and that's how I came up with Environment.SpecialFolder.ApplicationData.
That is why my data file now resides in SpecialFolder.ApplicationData. The trouble is, if the user uninstalls and reinstalls, I need that file gone. It might be a shortcoming of my knowledge of Inno Setup, but I cannot figure out how to tell Inno where that file will be.
So then I thought I had a clever solution: Inno Setup can run a file when it's done uninstalling, so I had my program create a file "uninstallData.bat" that says:
del "the file in my special folder application data path"
and I wrote it out to (drumroll)
Application.ExecutablePath
(Yes, it was a while in development and I had forgotten that wasn't doable.)
So of course I am back to square one. I need to write a file to a path Inno Setup knows about ({app}), and I need it to be able to delete my data file in the special folder... I don't care how I do it, I just need that file gone.
Are there other Environment. or Application. approaches I have missed? Maybe somewhere that is viewable by an uninstaller AND can be written to?
As an aside, I am not sure why the box I develop on can write to the application folder with no issue, but other boxes cannot... weird.
Any input would be great; I'm sort of lost as to how to crack this nut.
The environment location is in the user profile. If there are multiple users on the machine and they all run the application, then a copy of the file will be in each profile.
The path also depends on the OS.
Regardless, the current user's app data location is pointed to by %APPDATA% and %LOCALAPPDATA%. These Windows environment variables should be available within Inno Setup.
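As a sanity check you can print the exact folder the program writes to and compare it with what the uninstaller resolves. A minimal C# sketch; the "MyCompany/MyApp/data.dat" names are placeholders for whatever your application actually writes:

    using System;
    using System.IO;

    class DataFileLocation
    {
        static void Main()
        {
            // Roaming app data, i.e. what %APPDATA% points to for the current user.
            string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

            // Placeholder names - substitute your real folder and file names.
            string dataFile = Path.Combine(appData, "MyCompany", "MyApp", "data.dat");

            Console.WriteLine(dataFile);
            // e.g. C:\Users\<name>\AppData\Roaming\MyCompany\MyApp\data.dat
        }
    }

On the Inno Setup side, the {userappdata} constant resolves to that same roaming folder, so (for example) an [UninstallDelete] entry can point at the file without the .bat workaround.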
Application.ExecutablePath is not writable by standard conventions - the Program Files folder should never be manipulated by running applications. There are a number of special folders for that. Nice that you finally found... what has been properly documented by Microsoft for a LONG time now (at least 10 years).
I suggest you get a proper installer - WiX comes to mind. Your problem is totally unrelated to C# - it seems to be entirely a "crappy installer" issue. Or provide a PROGRAM (not a .bat file) to run at uninstall. What exactly is your problem there?
For testing not-very-complex WPF applications I often don't make an installer - after building the project, I just copy the contents of the Bin\Debug folder of the VS2008 project to the hard drive of the user's computer and put an icon on the desktop. No entries in the Windows registry.
Are there any drawbacks to using Windows applications this way for a testing period?
There's nothing wrong with this approach at all - it's what's called xcopy deployment. There are a few things you don't get doing it this way:
an entry in the add/remove programs for users to uninstall with
the ability to add shortcuts to desktop/start menu/quick launch
any changes to the registry for settings etc...
Another benefit is that a user can get your application onto a computer without needing administrative privileges to install it.
It really comes down to your requirements. If you don't need any features of an installer, then just copying the files is a good approach.
I'd agree with the other comments about using a release build though - especially if you are deploying for real use and not just testing.
The only change you might want to make is to build the app in Release rather than Debug and take the files from the Bin\Release folder.
more info: http://haacked.com/archive/2004/02/14/difference-between-debug-vs-release-build.aspx
When you have multiple files to deploy along with your exe, DLLs to register, or file associations to set up, an installer is a neat way to deliver all of that in a reliable manner. If you don't do this with an installer, the user could easily screw things up.
In addition to that, the installer is sometimes used as a means to ensure the computer is truly ready for the application. For example, the installers I've written check to ensure the proper version of .NET is installed, and will download & install it if necessary.
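One documented way to do that sort of prerequisite check from code (not necessarily how the installers mentioned above do it) is to read the "Release" value that .NET Framework 4.5 and later write to the registry. A minimal C# sketch:

    using System;
    using Microsoft.Win32;

    class DotNetCheck
    {
        static void Main()
        {
            // .NET 4.5+ records a "Release" DWORD under this key; 378389 is the documented value for 4.5.
            const string key = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
            object release = Registry.GetValue(key, "Release", null);

            if (release != null && (int)release >= 378389)
                Console.WriteLine(".NET Framework 4.5 or later is installed.");
            else
                Console.WriteLine(".NET Framework 4.5 is missing - run the prerequisite installer.");
        }
    }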
However, there are many times when these characteristics are simply not worth it, and deploying a standalone application in a single exe is perfectly acceptable. Simple applications that don't need to store a lot of settings on your computer and don't have a lot of prerequisites are perfect examples. The first things that come to mind are the utilities from Sysinternals.
I see only one potential drawback. There is nothing wrong with your approach as long as you don't have more than 1 to 3 users and changes during the test session are not frequent.
When changes are frequent and you have to copy the binaries to more than 3 users (hosts, etc.), the drawback I mean is maintenance time. I know what I'm talking about, because at the place where I work we have exactly this issue.
Lately I've been spending more time maintaining our application and copying files from one host to another than coding. :(
In my honest opinion, sometimes it is better to invest your time at the beginning and write an installer than to have a lot of maintenance and copying later.
To solve the number-of-users problem, there's a very simple solution that does not require setting up a full installer.
Basic setup for multi-user xcopy/.bat deployment:
A shared drive, with one folder for the .bat files and one for the binaries.
Upload the binaries to the shared drive and update the install script if needed.
Have every user run the install script.
By the way, some very complex information systems are ENTIRELY deployed by .bat files (even when not testing!).