Environment.SpecialFolder.CommonDocuments - c#

When I execute this statement:
string folderPath =
Environment.GetFolderPath(Environment.SpecialFolder.CommonDocuments);
folderPath is set to C:\ProgramData.
When I execute this statement in the Immediate Window:
Environment.GetFolderPath(Environment.SpecialFolder.CommonDocuments);
C:\Users\Public\Documents is displayed (which is what I expected).
Any thoughts on the difference?
UPDATE 7/6/12:
I’m getting different results in different classes in the same exe.
I have one class that lives in a library, and one that’s linked directly into the app.
The library class returns “C:\ProgramData”.
The linked code returns “C:\Users\Public\Documents”.
Further, the library code returns “C:\ProgramData” for both
“Environment.SpecialFolder.CommonDocuments” and
“Environment.SpecialFolder.ApplicationData”.
The linked code returns “C:\Users\Public\Documents” for “Environment.SpecialFolder.CommonDocuments” and "C:\Users\Me\AppData\Roaming" for “Environment.SpecialFolder.ApplicationData”.
I’m baffled.

This could happen if your program is 64-bit. Since Visual Studio is 32-bit, when you execute Environment.GetFolderPath(Environment.SpecialFolder.CommonDocuments); in the Immediate Window, it runs inside Visual Studio's 32-bit process and looks up the 32-bit registry view, whereas your 64-bit program would look up the 64-bit view. And it is possible that the CommonDocuments folder has been moved, a change that would only be registered in the 64-bit view.
This is a Windows bug as defined here
EDIT Your update says that it is happening in two classes within the same EXE. Since a process can only be 32-bit or 64-bit (not both), this would indicate the above bug does not apply to you (assuming normal comms between the assemblies, not COM with a wrapper for example). Are you able to work it into a suitable test that you can post?
As a quick confirmation, it might also be worthwhile including the following code in each to be doubly sure they are both running in the same process:
Console.WriteLine("{0} Process {1} is {2}bit", GetType().ToString(), System.Diagnostics.Process.GetCurrentProcess().Id, IntPtr.Size * 8);

C:\Users\Public\Documents is the right path:
Per Machine “Documents”
“Document” type files that users create/open/close/save in the application that are used across users. These are usually template or public documents.
Example: MyTemplate.dot
Windows 7: C:\Users\Public
Vista: %SystemDrive%\Users\Public
XP: %ALLUSERSPROFILE%\Documents
Environment Variable: Vista/Win7: %PUBLIC% Note: Does not exist on XP
Known Folder ID: FOLDERID_PublicDocuments
System.Environment.SpecialFolder: System.Environment.SpecialFolder.CommonDocuments
CSIDL: CSIDL_COMMON_DOCUMENTS
It’s obvious after looking at all these locations that deciding where to store your files can be challenging if you are targeting multiple OS versions. The best guidance is to use APIs to find the special folder path. The APIs will return the appropriate location for the target OS.
source: http://blogs.msdn.com/b/patricka/archive/2010/03/18/where-should-i-store-my-data-and-configuration-files-if-i-target-multiple-os-versions.aspx
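As a small illustration of that guidance, here is a minimal diagnostic sketch using only the managed Environment API. The paths in the comments are what you would typically expect on Windows 7; CommonApplicationData is included because C:\ProgramData is normally what that folder, not CommonDocuments, maps to:
using System;

class SpecialFolderDiagnostic
{
    static void Main()
    {
        // Let the API resolve the locations; the values in the comments are typical for Windows 7.
        Console.WriteLine("Process is {0}-bit", IntPtr.Size * 8);
        Console.WriteLine("CommonDocuments       : {0}",
            Environment.GetFolderPath(Environment.SpecialFolder.CommonDocuments));        // C:\Users\Public\Documents
        Console.WriteLine("ApplicationData       : {0}",
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData));        // C:\Users\<user>\AppData\Roaming
        Console.WriteLine("CommonApplicationData : {0}",
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData));  // C:\ProgramData
    }
}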

Related

AppDomainSetup.PrivateBinPath vs Environment.SetEnvironmentVariable

I just need my app to know where to look for some unmanaged dlls. I am using SetEnvironmentVariable and it is working great. I know that there is also a property AppDomainSetup.PrivateBinPath. What is the practical difference between them?
Currently I am doing it as below:
var dllDirectory = @"C:\some\path";
Environment.SetEnvironmentVariable("PATH", Environment.GetEnvironmentVariable("PATH") + ";" + dllDirectory);
Edit:
I noticed that Environment.SetEnvironmentVariable does not actually change the machine-wide PATH variable; it seems to affect only the process that called it.
PrivateBinPath is where the CLR will look for assemblies.
That is not where Windows looks for DLLs; the Windows loader knows nothing about CLR configuration. It uses the regular Windows search rules, which usually behave like this:
same directory as where the EXE is stored
the directory specified in a Set/AddDllDirectory() call, if any
the Windows system directory (normally c:\windows\system32)
the Windows install directory (normally c:\windows)
the current default directory (Environment.CurrentDirectory)
the directories listed in the PATH environment variable.
There are several quirks to this; it has been tinkered with a lot. Bullet 5 in particular is a security problem and can be abused to get a program to load a rogue DLL. But this is close enough to what you can expect in the wild.
Setting the PATH environment variable in your code is okayish, but not exactly reliable. It being at the bottom of the list is of course an issue, you might get the wrong DLL loaded. And the PATH environment variable itself is troublesome, it can easily be corrupted on a machine and may already be too long to allow you to append another directory to it. Very hard to diagnose problems.
You should always, always, always favor bullet 1. Simply copy the native DLLs into the same directory as your EXE. Always works, always reliable, never a surprise, no special config needed. Nobody cares that this directory is a bit full, not your customer, not the file system, not the operating system.
If you have to then always favor bullet 2, pinvoke SetDllDirectory(). It is not completely reliable, you'll have trouble if one of the DLLs you load is using it too. But you quickly find that out. Using AddDllDirectory() is better but it isn't supported on enough Windows versions yet to be relied upon.
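For reference, a minimal P/Invoke sketch of that bullet-2 approach might look like this (the directory path is a placeholder; call it early, before the first call into the native DLL triggers the load):
using System;
using System.Runtime.InteropServices;

static class NativeDllPath
{
    // Adds one directory to the native DLL search path for this process.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool SetDllDirectory(string lpPathName);

    public static void UseUnmanagedDllsFrom(string directory)
    {
        if (!SetDllDirectory(directory))
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
    }
}

// Usage, e.g. at application startup (path is a placeholder):
// NativeDllPath.UseUnmanagedDllsFrom(@"C:\some\path");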
AppDomainSetup.PrivateBinPath is a set of folders under the application base directory that are probed for private assemblies during appdomain setup. The PATH environment variable will not necessarily point to folders under the application base directory; it can contain arbitrary folder paths.
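To make the contrast concrete, here is a rough sketch of what PrivateBinPath actually affects: managed assembly probing in an AppDomain, not the native DLL search (the folder names are made up for illustration):
using System;

class PrivateBinPathSketch
{
    static void PrintProbingPath()
    {
        // Runs inside the child AppDomain.
        Console.WriteLine(AppDomain.CurrentDomain.SetupInformation.PrivateBinPath);
    }

    static void Main()
    {
        // Probe "libs" and "plugins" under the application base for *managed* assemblies.
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            PrivateBinPath = "libs;plugins"   // relative to ApplicationBase; illustrative names
        };

        AppDomain child = AppDomain.CreateDomain("child", null, setup);
        try
        {
            // Managed assemblies loaded in 'child' are resolved from ApplicationBase + PrivateBinPath.
            // Native DLLs loaded via [DllImport] still follow the Windows search order described above.
            child.DoCallBack(PrintProbingPath);
        }
        finally
        {
            AppDomain.Unload(child);
        }
    }
}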

How to use reuse softlinks created on Mac in Windows 8

I have a few softlinks, say 1000 images, which I created on a MacBook Pro and which I am using in my iOS apps.
Now I am porting the same app to Windows Phone 8, so I want to reuse the same softlinks in the Windows Phone 8 app. How can I do that?
I have tried to open a softlink on a Windows 8 machine, but it says "File format is not supported".
I have both the original files and the softlinks on my Windows machine.
Is there any other way that I can reuse the same softlinks? If NOT, what is the best approach that I can follow?
EDIT
Ok, here is some more info on this:
In MacBook Pro
I have a folder on the desktop which has the physical paths (the actual images); I created softlinks to them using a script, and these softlinks are placed in a different folder.
Now I am using these softlinks in my iOS app.
In Windows 8
I copied both the folder that has the softlinks and the folder that has the actual files from the Mac.
I pasted the actual-files folder on my desktop and the softlinks folder on the D: drive; now if I go to the softlinks folder on the D: drive and check those images, they show up blank, because they are not pointing to the actual files.
I have both the actual-files folder and the softlinks folder.
One more point is that when you create a softlink, on a MacBook Pro it shows this icon:
But on Windows 8 it's blank, nothing like that.
Your question is missing a couple of details so I'm going to have to make a guess about your situation. The problem is:
You created some symlinks using OS X on a file system and now you are
having problems accessing those symlinks in Windows.
Unless you did something tricky, like installing 3rd party file system drivers, then the only file system that both Windows and OS X can read/write to natively is FAT based. So I'm guessing your situation is:
You created some symlinks using OS X on a FAT32 file system and now
you are having problems accessing those symlinks in Windows.
Assuming the above situation, the problem is that there are no symlinks in FAT32 because the file system doesn't support them. OS X is tricking you because it "just works". What is really happening is that OS X is creating an ASCII text file that contains the line "XSym" along with the name of the file it is "linking" to, plus some file system information. You can confirm this by opening your softlinks on your Windows system in notepad. Normally you would see binary code if you were opening an actual image in notepad, but instead you should see the text from these fake symlinks.
So, what do you do? I see a couple of options:
You could use a file system that supports soft links. This could mean using HFS+ (OS X file system) which would require you to install HFS+ drivers on your Windows system so that it can read/write to the file system. Or it could mean going in the other direction and using NTFS (Windows file system) which would require you to install NTFS drivers on your Mac. Note that most recent versions of OS X can read NTFS file systems, they just can't write to them.
You could use the fake symlinks that OS X is creating. This would require writing a parser to interpret the links or finding a library that does this for you (a rough parsing sketch follows these options). I don't have a copy, but I believe the XSym format is covered in the "OS X Internals" book.
You could rethink the approach to your problem so that it doesn't require you to use symlinks.
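If you want to experiment with option 2, a rough C# sketch of such a parser is below. It only relies on what is described above: the fake link is a small text file whose first line is "XSym" and which carries the target name on a later line (typically after a length and a checksum line; verify that layout against your own files before trusting it):
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

static class XSymReader
{
    // Tries to extract the link target from an OS X "fake symlink" stored on FAT.
    // Returns null if the file does not look like one.
    public static string TryGetTarget(string path)
    {
        // These fake links are tiny text files; real images are much larger and binary.
        var info = new FileInfo(path);
        if (info.Length > 4096)
            return null;

        string[] lines = File.ReadAllLines(path);
        if (lines.Length < 2 || lines[0].Trim() != "XSym")
            return null;

        // Skip bookkeeping lines (a numeric length, a 32-character hex checksum) and
        // take the first remaining non-empty line as the target path.
        return lines.Skip(1)
                    .Select(l => l.Trim())
                    .FirstOrDefault(l => l.Length > 0
                                      && !Regex.IsMatch(l, @"^\d+$")
                                      && !Regex.IsMatch(l, @"^[0-9a-fA-F]{32}$"));
    }
}

// Usage sketch (path is a placeholder):
// string target = XSymReader.TryGetTarget(@"D:\softlinks\image001.jpg");
// if (target != null) { /* resolve it relative to the links folder and load the real file */ }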
If this didn't solve your problem, then please provide more details because I had to make some guesses about your situation.
==EDIT==
Take a look at the subversion documentation on symbolic links here.
The relevant quote from the doc is:
Versioning Symbolic Links
On non-Windows platforms, Subversion is able to version files of the
special type symbolic link (or “symlink”). A symlink is a file that
acts as a sort of transparent reference to some other object in the
filesystem, allowing programs to read and write to those objects
indirectly by way of performing operations on the symlink itself.
When a symlink is committed into a Subversion repository, Subversion
remembers that the file was in fact a symlink, as well as the object
to which the symlink “points.” When that symlink is checked out to
another working copy on a non-Windows system, Subversion reconstructs
a real filesystem-level symbolic link from the versioned symlink. But
that doesn't in any way limit the usability of working copies on
systems such as Windows that do not support symlinks. On such systems,
Subversion simply creates a regular text file whose contents are the
path to which the original symlink pointed. While that file can't
be used as a symlink on a Windows system, it also won't prevent
Windows users from performing their other Subversion-related
activities.
Basically, it says something similar to what I mentioned earlier, which is that symlinks are not supported that well if at all on Windows systems. Subversion just creates text files with the contents of the link so you can choose to either figure out how to parse these text files yourself or try to find a library that will parse them for you.
Maybe the problem is that there are so many links in one directory
There is a maximum of 31 reparse points (and therefore symbolic links)
allowed in a particular path.
See also
Programming Considerations
I know I am late in this, but I hope that others may benefit from my answer, even though the asker may long have moved on.
Some background
Symbolic link semantics differ considerably between unixoid systems and Windows. As was stated before, Windows uses reparse points to implement symbolic links and junction points (some deduplication features on the Server editions also seem to use them).
Now, a reparse point contains extra data as a hint to the I/O manager and object manager. Essentially, based on a reparse point tag (a GUID) the type of reparse point can be determined and then a file system filter driver handles the details. You can find a moderately detailed description of this in the 6th edition of "Windows Internals" in chapter 9 or in a recent Windows Driver Kit or on MSDN under REPARSE_GUID_DATA_BUFFER (and related topics).
On unixoid systems the file system metadata also contains a clue that the file is a symlink. If you use ls -l that clue is visible in the form of a leading l, e.g. in:
lrwxrwxrwx 1 user group 38 2015-10-12 11:51
The actual contents of symlinks are system-specific as well, on Linux for example they contain merely the target path.
What the Windows and *nix symlinks share is that the target needn't exist at the time of creation. Also on Windows a symlink can point to a network location, which is special because on Windows network paths differ from local paths.
Possible compatibility
Assuming a symlink was created on the OSX or Linux side, we can imagine certain levels of compatibility. If the file system driver on the Windows side would now present symlinks as reparse points and some party (either said file system driver or a file system filter) would handle these reparse points, it would be possible to interpret the target path of a symlink in some way.
Converting forward slashes to backward slashes is the least concern, however.
In this answer I already outlined a few cases where there would be no meaningful translation possible.
Essentially the only type of symlinks for which I would see a potential for compatibility are relative symlinks. But even for those it is necessary to point out that the target path may not point outside of the folder hierarchy that is visible on the Windows side. That is, if your symlink on the OSX or Linux side resides inside /var/www/html and points to ../../../something it becomes meaningless in a case where /var is the mounted volume on Windows.
If, however, such a symlink /var/www/html/foobar pointed to ../html1/foo/bar, chances are that if /var was the mounted volume on OSX or Linux and is now the mounted volume on Windows, the relative target path still makes sense (after adjustments such as forward to backward slashes etc.).
For any absolute target paths, the file system driver or the file system filter driver would have to get some hints on how to translate the source form of a symlink into the target form.
E.g. if a symlink pointed to /home/foo/bar the /home part might translate to a specific mounted volume.
But you can already see that this requires a lot of user intervention, which is probably why most people would consider it futile to even attempt a meaningful translation.
Possible workaround for SVN
A possible workaround for you could be to use SVN externals. It depends on the exact scenario, but since you are using SVN they come to mind.
You can think of SVN externals as Subversion's native symlinks. I have used them this way and I know of several others who have, but I don't know how widespread that train of thought and subsequent usage is.
Attention: externals pointing to files were only introduced in SVN 1.6, so this may or may not be an issue in your scenario.
SVN externals come in several flavors. You can set them for folders or files (files only with 1.6 and newer).
And an external can point to:
an external repo (schema://server/path)
relative to the same repo (^/path)
relative to the schema (//server/path) or
relative to the parent directory
You'll probably want 2 or 4 from that list. Most likely you'll want 4, though, because file externals must point to the same repository.
Long story short
If your images are in a folder such as trunk/images and you have a folder trunk/platforms/windows/images, you can either set the svn:externals property on trunk/platforms/windows to have an external named images pointing to ../../images (i.e. a directory external) or, assuming you wanted to use a different hierarchy or different names underneath trunk/platforms/windows/images, you could create file externals like so (the images subdirectory must exist in the WC):
cd trunk/platforms/windows
svn propedit svn:externals images
and add individual externals like this:
../../../images/filename.jpeg other-filename.jpeg
Please note that the target directories need to exist in the repository and the working copy, so for an external like this:
../../../images/filename.jpeg foo/other-filename.jpeg
the subdirectory trunk/platforms/windows/images/foo must exist.
Updating your working copy will result in those externals manifesting as versioned files inside the working copy. So they are a type of symlink that exists in SVN and manifests as proper files in the working copy, which means all platforms can handle them equally.

C# application says "No" when executed

I have developed a small-ish C# console application (TextMatcher.exe) on my local development machine and now need to deploy it to the live environment. It references another class library I developed that contains generic functions, which I intend to use and improve in future console applications.
Ultimately this specific application will be executed from within an SSIS package, but for now I'm just trying to run it from cmd.
I kid you not that this is the actual output from the program:
E:/TextMatcher>TextMatcher.exe
No
E:/TextMatcher>
The computer literally says "No" and gives no further information. Nowhere in the program have I included anything to output the word "No", on any failure or otherwise.
Of course, it runs fine locally. I made sure I included the dll of the utility class library too. I have read other questions (here, here) about how to deploy console apps correctly, and have followed the advice.
NB: This is also proving to be quite hard to Google because of the use of the word "No" being fundamental to the problem...
EDIT - It seems to be working now... I simply copied over the files again from my local machine to the remote machine... I am trying to get it to break again so that I can figure out what on Earth happened, and until I do, I will not accept an answer so that people could maybe shed some more light onto it. Either way it is quite baffling.
There's a chance that someone has modified the Image File Execution Options registry setting on the server to launch a debugger automatically.
In short, examine the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\currentversion\image file execution options and see if there's an entry that matches the name of your executable.
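If you prefer to check from code rather than regedit, a minimal sketch along those lines follows (it assumes the executable name is TextMatcher.exe; note that on a 64-bit OS a 32-bit process may be redirected to the Wow6432Node view of this key):
using System;
using Microsoft.Win32;

class IfeoCheck
{
    static void Main()
    {
        const string ifeo =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\TextMatcher.exe";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(ifeo))
        {
            if (key == null)
            {
                Console.WriteLine("No IFEO entry for TextMatcher.exe");
            }
            else
            {
                // If a "Debugger" value is present, Windows launches that program
                // instead of (or wrapping) the real executable.
                Console.WriteLine("IFEO entry found, Debugger = {0}",
                    key.GetValue("Debugger") ?? "(not set)");
            }
        }
    }
}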
Check whether you have installed the necessary Framework components,
i.e. the matching .NET Framework version. An application built against 4.0
on a machine with only 3.5 installed can cause very strange behaviour.
Try writing a very simple windowed application and start it on the deployment machine
(this will probably give you more hints about what is missing).
Check whether the needed DLLs (that you developed, e.g. your class library) are reachable for the console application, and check whether the right version of your DLL is matched.
Check the platform settings in your console application. Do they match
the machine where you want to run your application? (win64 and win32 mismatch)
If none of those help, try inspecting your executable on the deployment machine,
for example with depends.net, checkasm, or similar.
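To double-check the framework and platform points above on the deployment machine itself, a tiny diagnostic along these lines could help (it uses only plain Environment calls, nothing specific to your application):
using System;

class DeploymentCheck
{
    static void Main()
    {
        Console.WriteLine("CLR version     : {0}", Environment.Version);
        Console.WriteLine("Process bitness : {0}-bit", IntPtr.Size * 8);
        Console.WriteLine("OS version      : {0}", Environment.OSVersion);
        Console.WriteLine("Base directory  : {0}", AppDomain.CurrentDomain.BaseDirectory);
    }
}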
Does your production environment have AppLocker running? I don't know if its output can be customized to output "No" on a command line. AppLocker comes to mind because you can use it to restrict a system to run only signed executables. If your TextMatcher executable is unsigned, it may get shut down immediately. If you have the ability, I'd be curious to see if signing your binary changes the behavior.

SharePoint fails to load a C++ DLL on Windows 2008

I have a SharePoint DLL that does some licensing things, and as part of the code it uses an external C++ DLL to get the serial number of the hard disk.
When I run this application on Windows Server 2003 it works fine, but on Windows Server 2008 the whole site (loaded on load) crashes and resets continually. This is not Windows Server 2008 R2, and it is the same in 64-bit or 32-bit.
If I put a Debugger.Break before the DLL execution then I see the code get to the point of the break and then never come back into the DLL again. I do get some debug assertion warnings from within the function, again only in Windows Server 2008, but I'm not sure this is related.
I created a console application that runs the C# DLL, which in turn loads the C++ DLL, and this works perfectly on Windows Server 2008 (although it does show the assertion errors, but I have suppressed these now).
The assertion errors are not in my code but within ICtypes.c, and not something I can debug.
If I put a breakpoint in the DLL it is never hit, and when I try to debug into the DLL using Visual Studio the compiler says:
"step in: Stepping over non user code"
I have tried wrapping the code used to call the DLL in:
SPSecurity.RunWithElevatedPrivileges(delegate()
But this also does not help.
I have the source code for this DLL so that is not a problem.
If I delete the DLL from the directory I get an error about a missing DLL. If I replace it, I'm back to no error or warning, just a complete failure.
If I replace this code with a hardcoded string the whole application works fine.
Any advice would be much appreciated, I can't understand why it works as a console application, yet not when run by SharePoint. This is with the same user account, on the same machine...
This is the code used to call the DLL:
[DllImport("idDll.dll", EntryPoint = "GetMachineId", SetLastError = true)]
extern static string GetComponentId([MarshalAs(UnmanagedType.LPStr)]String s);
public static string GetComponentId()
{
Debugger.Break();
if (_machine == string.Empty)
{
string temp = "";
id= ComponentId.GetComponentId(temp);
}
return id;
}
This could be security related:
An important point is that it works in a console app.
In a console app RunWithElevatedPrivileges has no effect since it emulates the app pool user for your worker process, a user that should have no special rights on the box itself.
In contrast a console app runs in context of the logged in user.
Try emulating a user with rights, like when you run the console application, as specified here (with Undo() inside try/finally, mind you!). When obtaining the token you can create an SPUserToken and establish site context using the SPSite constructor that takes a GUID and an SPUserToken.
There are several examples out there documenting this approach, here for example; a rough sketch of the idea follows below.
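This sketch assumes the usual SharePoint server object model; the account name, password handling, site GUID and user lookup are placeholders you would need to adapt, and the user must already exist on the site for the AllUsers lookup to succeed:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Security.Principal;
using Microsoft.SharePoint;

static class ElevatedCall
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string user, string domain, string password,
                                 int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    const int LOGON32_LOGON_INTERACTIVE = 2;
    const int LOGON32_PROVIDER_DEFAULT = 0;

    // Runs the native call while impersonating a user that has rights on the box,
    // then re-opens the site under that user's SharePoint token.
    public static string GetSerialAs(string domain, string user, string password, Guid siteId)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
            throw new Win32Exception(Marshal.GetLastWin32Error());

        WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate();
        try
        {
            string serial = ComponentId.GetComponentId();   // the P/Invoke wrapper from the question

            using (SPSite probe = new SPSite(siteId))
            {
                // Assumes the account is already a user on the site.
                SPUserToken spToken = probe.RootWeb.AllUsers[domain + "\\" + user].UserToken;
                using (SPSite site = new SPSite(siteId, spToken))
                {
                    // do any SharePoint work in the context of that user here
                }
            }
            return serial;
        }
        finally
        {
            ctx.Undo();          // always undo the impersonation
            CloseHandle(token);
        }
    }
}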
EDIT: oh and the reason it worked on 2003 could be that your app pool account had way too many rights ;-)
Why not use WMI to get the serial number of the hard disk, thus avoiding execution of unmanaged code? See this sample: How to Retrieve the REAL Hard Drive Serial Number
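For reference, a managed sketch along those lines using System.Management (add a reference to System.Management.dll; which WMI class and property carries a usable value can vary per machine, so treat this as a starting point rather than a substitute for the sample linked above):
using System;
using System.Management;

class DiskSerial
{
    static void Main()
    {
        // Win32_PhysicalMedia exposes the manufacturer serial number of each physical disk.
        using (var searcher = new ManagementObjectSearcher(
                   "SELECT Tag, SerialNumber FROM Win32_PhysicalMedia"))
        {
            foreach (ManagementObject disk in searcher.Get())
            {
                Console.WriteLine("{0} -> {1}",
                    disk["Tag"], (disk["SerialNumber"] as string ?? "").Trim());
            }
        }
    }
}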
That non-deterministic crashing behavior is often seen with memory overwrites/corruption; sometimes it matters (crash), sometimes you get lucky.
You might want to look into getting a crash dump and analyzing it with WinDbg. Since you have the source you could re-build it with the various stack, heap memory protection and warning systems enabled (depending on your compiler) and see what you get.
I'd find out whether it is a User Account Control related problem; you can try to disable it.
2003 doesn't have UAC.
Your app pool account might not have the right to retrieve this information?
In Visual Studio, go into the properties of your executable assembly and, under the Debug tab, check the option to enable unmanaged code debugging.
If the method you are importing belongs to a class, you need to add the mangled C++ name (e.g. 2#MyClass#MyMethod?zii) as an entry point to the DllImport attribute (run Depends on the native DLL to get it).
You do not need C++ for that: http://www.codeproject.com/KB/cs/hard_disk_serialno.aspx
If i put a breakpoint in the DLL it is
never hit and the compiler says :
"step in: Stepping over non user code"
That's the debugger, not the compiler, and if you configured it properly it wouldn't do that. Look for the options called "Use native debugging" and "Just my code". The first one should be on and the second one off.
This problem may happen for one of the reasons listed below:
the web part may not have the right permissions to call the DLL, or
you may not have set the appropriate trust level for your SharePoint site.
For the permissions you can use impersonation, and for the trust level the site below can help you.
http://msdn.microsoft.com/en-us/library/dd583158(office.11).aspx
I made a new C++ DLL from scratch which works fine when referenced from a console application on Windows Server 2003 and Windows Server 2008, but as soon as I reference it from the DLL in SharePoint the same thing happens and it won't run.
It does find the DLL, but I think it has no permissions to execute it, even if I put it into the My Documents section and reference it directly!

Windows seems to lose track of .NET application

We have a .NET application that we distribute to our users via an MSI installer package. We have C++ applications that run each morning to see if the user's copy of the application is out of date, and if so, we pull down the new MSI and install it. If the application is running, we need to take it down so we can perform the update.
Our problem is that every once in a while it seems like Windows "loses" our application. It will not report that the process is running - though it is. It will allow us to overwrite, or even delete, the in-use executable file without taking down the application.
Maybe this is something that is common -- but we can't figure out what is going on! Does anyone have any insight into this situation?
It seems like a temporary copy of our application is getting created, and the program is getting run from that. But if that is the case, why doesn't it happen all the time?
EDIT:
In our program, we are using the "EnumProcesses" function from the Platform SDK, PSAPI.dll, to enumerate all of the running processes.
Could it be that either the script or the application runs as a 64-bit program, and the other as a 32-bit program? If so, then on 64-bit machines the update check could be looking in the wrong location for an existing application and thus reporting it as missing?
What mechanism are you using to check to see if the process is running or not?
Try using something like process explorer to see what path the executable image is loaded from - it should be listed in the modules section.
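As a quick managed companion to Process Explorer, a sketch like the one below (MyApp is a placeholder name; run it compiled as both 32-bit and 64-bit if you suspect a bitness mismatch) lists the image path each matching process was actually started from:
using System;
using System.Diagnostics;

class WhereIsMyApp
{
    static void Main()
    {
        foreach (Process p in Process.GetProcessesByName("MyApp"))   // placeholder process name
        {
            try
            {
                // MainModule.FileName is the path the executable image was loaded from.
                Console.WriteLine("PID {0}: {1}", p.Id, p.MainModule.FileName);
            }
            catch (Exception ex)
            {
                // Reading MainModule can fail across 32/64-bit boundaries or due to permissions,
                // which is itself a useful hint.
                Console.WriteLine("PID {0}: <cannot read modules: {1}>", p.Id, ex.Message);
            }
        }
    }
}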
