I have an odd bug where my code throws a file-not-found exception even though the file seems to be exactly where it should be. My project has some code to run a system cmdlet and look for the results of the cmdlet in an XML output file. We tell the cmdlet to put this output XML in a custom subdir of the system TEMP dir, e.g., C:\WINDOWS\TEMP\SomeFolder\output.xml. We then use the .NET XmlDocument class to open and parse the XML file.
On WinXP, this works. On my dev box, this works. On a clean Win7 test machine, it does not.
My first thought was that I'm running into Vista/Win7 File Virtualization, but our application manifest specifies that our app run as Admin -- and from what I've read, that should bypass file virtualization.
The other wrinkle is that our code likes to use UNC file paths, even if the file is local to the machine. (We have a requirement that the code in question may need to run the cmdlet on a remote machine, and therefore the output XML could be on a remote machine too.) So we try to open the XML file via \\MATT-WIN7\C$\WINDOWS\TEMP\SomeFolder\output.xml rather than C:\WINDOWS\TEMP\SomeFolder\output.xml.
But I removed the UNC path code temporarily and a simple call to File.Exists() still says the XML file is not there, when Windows Explorer shows the file sitting exactly where I think it should be.
Is there some nuance of file virtualization that I have not read about yet?
My workaround is to move the output XML file somewhere else, but that will potentially break the "portability" of our code when it needs to run on a remote machine, because the %TEMP% location is one that can be resolved for remote computers pretty easily (via a remote registry call to find the system environment variable).
I would prefer to leave the file where it is, and fix our code so it actually finds the file!
There is a user-specific override for the %TEMP% environment variable that points to %USERPROFILE%\AppData\Local\Temp, not to %SYSTEMROOT%\Temp. Make sure your code is looking in the temp folder you expect it to look at.
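A quick way to see which temp folder your code is actually resolving (just an illustrative snippet, not your resolution logic):

using System;
using System.IO;

// Per-user temp -- what %TEMP% usually resolves to for an interactive user,
// e.g. C:\Users\<name>\AppData\Local\Temp\
string userTemp = Path.GetTempPath();

// Machine-wide TEMP from the system environment block,
// typically C:\WINDOWS\TEMP
string systemTemp = Environment.GetEnvironmentVariable(
    "TEMP", EnvironmentVariableTarget.Machine);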
Update: Based on your comments, it seems that the problem is that your app is not actually being elevated on the test machine, but is elevated on your dev machine. I suspect the following:
You either have UAC disabled on your dev machine or you are running VS as administrator. Big no-no on both. :-)
Your binary is not code-signed and is not in one of the two trusted locations - %SystemRoot%\system32 or %ProgramFiles%. For security reasons, UAC does not even prompt the user for elevation for apps that have an elevation manifest but are not code-signed or in a trusted location.
You can create a self-signed certificate to code-sign your binary and add that certificate to the test machines, to get the UAC prompt. Once you've confirmed that your app is properly being elevated, your code to access the system %TEMP% folder should work correctly.
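One way to confirm elevation at runtime is the standard WindowsPrincipal check (nothing app-specific here, just a sketch):

using System.Security.Principal;

// True only if the current process token actually has the Administrators
// role (i.e. is elevated), regardless of what the manifest requests.
WindowsIdentity identity = WindowsIdentity.GetCurrent();
WindowsPrincipal principal = new WindowsPrincipal(identity);
bool isElevated = principal.IsInRole(WindowsBuiltInRole.Administrator);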
Related
The documentation doesn't explain the behavior when passing in a path such as "myFile_temp.jpg", but I would assume that it would save to the application directory, because this is a relative path, relative to the application we are currently running.
I think that the problem can be solved by prepending the current application directory to my temp file name using
string appPath = Path.GetDirectoryName(Application.ExecutablePath);
string tempFile = Path.Combine(appPath, "myFile_temp.jpg");
Sure there are lots of ways to do it, but this should work.
My issue is I'd like to know why this is happening rather than just throw a patch on it and ship it back out to the users.
Code is WPF, a C# project compiled with .NET 4.0 and Visual Studio 2010, and it runs on a lot of different machines. Mostly 32-bit XP, while the dev machine is 64-bit Windows 7.
Can anyone explain this behavior and why it's occurring?
Edit
The files will on occasion be saved to the directory the user selected the files from to manipulate. They resize them, and the program keeps track of the size percent for each of the file paths. When the user is finished they click Done and the program goes through each of the file paths, creates a copy, resizes the image and then saves it with a _temp on the end.
Take note that it doesn't always do it, and when it does, it doesn't do it for all the files they touched.
It works as expected. You just didn't expect valid behavior. Let's assume that your app is placed in c:/superapps/myapp.exe. You opened a command line and you're in C:\, which means that this is your current working directory.
You can still run your app via ./superapps/myapp, but your working directory is still C:\. And this will be the working directory of your app in this case, not the directory where you placed the binaries.
That is why it may not have permission, or may save data in some location you didn't expect. You should always assume that your app could be run just like any other command, such as dir. It will work in the place where the user is currently standing (their current working directory), not in the place its binaries are stored.
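If it helps, a minimal way to see (and sidestep) the difference in C# -- the file name here just reuses the _temp example from the question:

using System;
using System.IO;

// The working directory is wherever the user launched the app from...
string workingDir = Environment.CurrentDirectory;

// ...while this is the directory the binaries actually live in.
string exeDir = AppDomain.CurrentDomain.BaseDirectory;

// Anchor relative file names explicitly so the working directory no longer matters:
string tempFile = Path.Combine(exeDir, "myFile_temp.jpg");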
I'm trying to make an uninstaller.
I basically need to be able to remove a directory in the program files that contains the uninstaller.
I was thinking of having the uninstaller create a copy of itself in the temp folder,
then have the uninstaller running from the program folder launch the copy in temp and close itself, so that the temp copy continues the uninstall.
Problem is, how do I delete the uninstaller in the temp folder...
Check out: https://www.catch22.net/tuts/win32/self-deleting-executables
He has multiple solutions - but mostly aimed at C++ code.
I am currently trying to implement the "DELETE_ON_CLOSE" method in C#.
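In the meantime, the cmd.exe relaunch trick from that article translates to C# roughly like this (a sketch only; the ping-based delay and del switches are just one way to do it):

using System;
using System.Diagnostics;

// Hand the delete off to a detached cmd.exe that waits a moment
// (ping here is just a crude delay) and removes this exe after we exit.
string exePath = Process.GetCurrentProcess().MainModule.FileName;

var psi = new ProcessStartInfo("cmd.exe",
    "/C ping 127.0.0.1 -n 3 > nul & del /F /Q \"" + exePath + "\"")
{
    CreateNoWindow = true,
    UseShellExecute = false
};

Process.Start(psi);
Environment.Exit(0);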
A comment to all the nay-sayers: MSI does not solve this problem in all cases. In my case, my application needs to install to a network folder, where any network user can run the app. It also needs to support upgrades and uninstalls from any network workstation - not necessarily the same workstation that installed the app. This means I cannot register an Uninstaller into the Add/Remove Programs list on the local machine. I must create an Uninstall.exe that is dropped into the install folder. MSI does not support that, so I have to write my own.
Even though I agree with everyone saying you shouldn't do that, what you can do is:
Have your program create an executable (and config file) in the temporary folder of the OS it's working on.
Have your program start it as an independent process, then exit.
That executable then calls back the actual setup stuff.
Once the setup is done, the temporary executable can delete the setup files.
Since the only trace of your program is now in the temp folder, it will eventually get cleared automatically.
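A rough sketch of the first two steps, assuming the uninstaller copies itself and passes a hypothetical /cleanup switch to the temp copy:

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

// Copy ourselves to %TEMP% and relaunch from there; the "/cleanup" switch
// and the temp exe name are placeholders -- the relaunched copy would parse
// them and delete the original install folder.
string selfPath = Assembly.GetExecutingAssembly().Location;
string tempCopy = Path.Combine(Path.GetTempPath(), "uninstall_temp.exe");

File.Copy(selfPath, tempCopy, true);
Process.Start(tempCopy, "/cleanup \"" + Path.GetDirectoryName(selfPath) + "\"");
Environment.Exit(0);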
I am using C# with .NET 3.5.
I am saving my program data in a file under: C:\ProgramData\MyProgramName\fileName.xml
After installing and running my application one time, I uninstalled it (during uninstallation I remove all the files from ProgramData)
and then I reinstalled the application and ran it.
The strange thing is that my application started as if the files in ProgramData existed - meaning I had old data in my app even though the data file was deleted.
When running:
File.Exists(@"C:\ProgramData\MyProgramName\fileName.xml")
I got "true" even though I knew for sure that the file does not exist.
The thing became stranger when I ran the application as admin and then the file didn't exist.
After some research, I found out that when running my application with no admin privileges, instead of getting "C:\ProgramData\MyProgramName\fileName.xml" I get "C:\Users\userName\AppData\Local\VirtualStore\ProgramData\MyProgramName\fileName.xml",
and indeed there was a file there from the previous installation (that I obviously didn't delete, because I didn't know it existed).
So apparently there is some virtual path for the file under ProgramData.
EDIT :
I found out that after deleting the old file in the virtual store, my application is suddenly able to find the correct file. (I didn't make any changes to the file under ProgramData.)
My question is:
Why does this happen?
How can I prevent it from happening?
Thanks in advance
Do you actually have to write to the per-system Program Data folder instead of the per-user Application Data folder(s)?
You might want to take a look at Environment.GetFolderPath and the following Environment.SpecialFolders:
Environment.SpecialFolder.ApplicationData - data folder for application data, synchronized onto domain controller if the user profile is roaming
Environment.SpecialFolder.LocalApplicationData - data folder for application data, local and not synchronized (useful for, for instance, caches)
EDIT:
Tested on Windows 7 x64, non-administrator user.
var appData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
var myFolder = Path.Combine(appData, "MyApp");
if(!Directory.Exists(myFolder)) Directory.CreateDirectory(myFolder);
File.WriteAllText(Path.Combine(myFolder, "Test.txt"), "Test.");
This does what is expected, i.e. writes into C:\ProgramData\MyApp\Test.txt. As far as I can tell (Administrator-mode Command Prompt), there's no UAC virtualization going on either.
Double edit:
I guess what's happened is that at some point an Administrator user has written the files into your ProgramData folder, and as such, UAC file system virtualization kicks in and redirects the non-administrator writes into the VirtualStore.
Does your uninstaller run as Administrator? If it does, you might have to check both the VirtualStore path for the user who initiates the uninstall, and the actual file system path for program data to remove. I'm not sure if there's an official way to do this, though...
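As a sketch of that idea (there's no official API I know of, so this just rebuilds the VirtualStore path by hand from the real one):

using System;
using System.IO;

// Rebuild the per-user VirtualStore path from the real ProgramData path,
// then remove both copies if they exist. Paths mirror the question.
string realPath = @"C:\ProgramData\MyProgramName\fileName.xml";
string localAppData = Environment.GetFolderPath(
    Environment.SpecialFolder.LocalApplicationData);
string virtualPath = Path.Combine(
    Path.Combine(localAppData, "VirtualStore"),
    realPath.Substring(Path.GetPathRoot(realPath).Length));

if (File.Exists(virtualPath)) File.Delete(virtualPath);
if (File.Exists(realPath)) File.Delete(realPath);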
I found the reason for the bug.
The application was trying to take ownership of the file, and that is what caused the other (virtualized) copy to be created.
I removed that line and now everything works just fine.
I've built a winforms app (C#) that will take a list of file paths, and copy those files (from a different VS solution) to a new location (In a folder the user specifies) in the same directory structure they currently exist on local file system.
I use the Path class, Directory class etc and everything works wonderfully...except when it reaches a file path that points to a DLL.
The DLLs I am trying to copy are a part of the other solution, and that solution is not currently open.
I have tried restarting computer to make sure visual studio isn't somehow hooking into that DLL even after the solution is closed.
The DLL in question can be copied by regular manual means (i.e. copy and paste shortcut).
So short of creating a batch file in the program, and running xcopy on that DLL path, I don't know of a way to get this to work.
From what I have found from google searches (which isn't much on this particular situation), File.Copy() should work..
Any help would be wonderful, even if it is a link to a duplicate question I may have over looked.
Thanks!
-The error message is: The process cannot access the file [insert file path] because it is being used by another process (The path is definitely correct also)
-Just downloaded and tried to search for the DLL name with Process Explorer.. I also ran a similar exe from command prompt to no avail. It claims nothing is using it. That's why I am utterly baffled by this. Also, I just checked the permissions and everything looks great (i.e. Full Control, owner effective permissions)
-It does not handle open files. It basically builds the correct src and dest paths and does a File.Copy() on those. How would I go about handling open files? I'm sure I could figure out whether a file was open, but what would I do if it were open?
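For what it's worth, one crude way to detect an open file before copying would be to probe it with an exclusive open (hypothetical helper, not from the actual program):

using System.IO;

// Crude probe: try to open the file exclusively; an IOException
// means some other process still has it open.
static bool IsFileLocked(string path)
{
    try
    {
        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return false;
        }
    }
    catch (IOException)
    {
        return true;
    }
}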
It is not complaining about the file you're trying to copy, it is complaining about the file that you're trying to overwrite with the copy. Lots of candidates for that, virus scanners always get very excited about new DLLs, for example. Or it is loaded into a process, the typical failure mode for trying to implement your own auto-updater.
You can rename the target file to make your copy succeed.
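Something along these lines, with placeholder paths (rename the target first, then copy the new file in):

using System.IO;

// Renaming the in-use target usually succeeds even while the DLL is
// loaded, which then lets the copy go through.
string source = @"C:\Build\MyLibrary.dll";
string dest   = @"C:\Deploy\MyLibrary.dll";

if (File.Exists(dest))
    File.Move(dest, dest + ".old");   // fails if a previous .old is still there

File.Copy(source, dest);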
Are you on Vista or Win7? If so, check your 'User Account Control' settings. Sometimes this can interfere with .NET security options and prevent file operations that would otherwise work.
As well as Process Explorer, I would use Process Monitor also from Microsoft so you can see what is happening at the point of failure and allows you to see if anything else is accessing the dll.
Possible culprits are:
the program you are running,
your antivirus package,
a virus.
If the path it is complaining about is the destination path, then is it possible that the path is too long?
Also, when using Process Explorer, make sure you have enabled the option to show details for all processes and not just your own.
I just ran into this issue as well. I tried copying a .DLL from an FTP server to a local directory (replacing the existing one) and for the life of me I could not get it to work. It kept giving me an 'Access Denied, code: 5' error.
I then realized that the .DLL on the FTP server was not marked as hidden while the .DLL I was trying to replace was marked as hidden.
Once I changed the local one to also be visible, I had no more issues.
So my solution is:
Make sure both files are visible.
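In code, clearing the attribute before the copy might look like this (placeholder path, just a sketch):

using System.IO;

// Clear the Hidden flag on the existing local file before overwriting it.
string dest = @"C:\Local\MyLibrary.dll";

if (File.Exists(dest))
{
    FileAttributes attrs = File.GetAttributes(dest);
    File.SetAttributes(dest, attrs & ~FileAttributes.Hidden);
}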
Hope this helps someone
I want to give a file (an .exe already present on the client computer) permission to always execute with administrative privileges.
Please note that the file I want to give permissions to is already on the target machine, and I want to change the permissions of that file through another program written in C#, which has administrative permissions to do everything.
Kindly let me know how to do it.
I am using this code:
System.Security.AccessControl.FileSecurity fs = File.GetAccessControl(@"c:\inam.exe");
FileSystemAccessRule fsar = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow);
fs.AddAccessRule(fsar);
File.SetAccessControl(@"c:\inam.exe", fs);
This code changes the permissions correctly, but when I execute inam.exe after running it, the UAC prompt still does not appear and inam.exe still can't perform administrative operations.
I have actually already deployed the application to more than 10,000 clients, so I want to release a patch to resolve the administrative-rights issue.
"Execute with administrative privileges" is not a file permission.
This is usually configured by adding a manifest file (either to the Win32 resources in the EXE, or as an external manifest). This manifest file can state whether the application needs to run elevated or not.
I'm not entirely sure where Windows stashes the "Run this program as an administrator" compatibility setting.
Using a manifest file is the best approach, but an alternative would be to programmatically set the "Run this program as an administrator" flag (the option you find in the Compatibility tab of an EXE's properties) by setting a simple registry value. You need to create a string value (REG_SZ) under one of these keys (depending on whether you want the setting to be per user or per machine, respectively):
HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
or
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
The name of the value needs to be the full path to your executable (if the path contains spaces, do not surround the path with quotes), and the data of the value must contain the string RUNASADMIN.
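As a sketch in C# (using the exe path from your question; the per-user key is shown, and swapping in the per-machine key requires running elevated):

using Microsoft.Win32;

// Per-user variant; use Registry.LocalMachine instead of Registry.CurrentUser
// to set the flag machine-wide.
const string layers =
    @"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers";

using (RegistryKey key = Registry.CurrentUser.CreateSubKey(layers))
{
    key.SetValue(@"c:\inam.exe", "RUNASADMIN", RegistryValueKind.String);
}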
Build a manifest file (see http://www.gregcons.com/KateBlog/AddingAManifestToAVistaApplication.aspx among other places), name it Whatever.exe.manifest, and put it in the same folder as the exe. The manifest should set the requestedExecutionLevel to requireAdministrator. All set.
If you own the other exe, you can embed the manifest when you build it. This is almost trivial in Visual Studio 2008 and up. See the Application tab and drop down the Manifests drop down. There are instructions nearby. Also when you use VS 2008 to add a manifest to your project you don't have to type all the XML, you just copy the appropriate requested execution level from the comments that are generated for you.