The OpenFileByID line in test() is giving me a System.AccessViolationException: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
I am trying to replicate this code example (see the answer), which I'm running in Visual Studio Express 2013 for Windows Desktop. But this example doesn't seem to work for me. It is breaking on the OpenFileByID line in test().
In a nutshell, I am getting a file's ID, then trying to create a file handle from that ID. Later on I plan to use that handle to get information about the file. The reason I'm using IDs is so that I can repair broken links, since a target file's GUID is far more reliable than its presumed location. Help appreciated!
Edit: The file I'm trying to open is an ordinary text file on my Desktop, nothing special.
You're not checking whether you got a valid volume handle - and you might not have. That could be the source of your access violation (a/v).
When you're opening the root dir, the doc says you shouldn't use FILE_ATTRIBUTE_NORMAL with any other flags - but you're using it with FILE_FLAG_BACKUP_SEMANTICS. To use FILE_FLAG_BACKUP_SEMANTICS, you have to get privileges for SE_BACKUP_NAME. You'll have to be an admin or a backup operator to do so. I can't imagine that you need that flag.
You can get the volume handle by opening "\\.\C:" (for example)...which is different from the handle to the root folder. I usually open it with GenericRead, but if all you need it for is OpenFileById, you can specify 0 for access.
Also - adding object IDs to files isn't necessary - the file reference number (FRN) is the master file table identifier for the file - it's the "other" kind of ID you can pass in a FILE_ID_DESCRIPTOR. You can get it from an open file handle by calling GetFileInformationByHandle - it's nFileIndexHigh and nFileIndexLow combined into a long int. When you move a file, the FRN stays the same (only its parent FRN changes), and when you rename a file the FRN doesn't change either. The benefit of using this over an ObjectID is that you're not altering the volume in order to track a file...and you don't have to use DeviceIoControl - which is a bit of an interop bad dream.
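To make that flow concrete, here is a minimal P/Invoke sketch of the approach described above: open the volume, read the FRN of an existing file via GetFileInformationByHandle, then re-open the file through OpenFileById. The file path and the error handling are placeholders, not your actual code - treat it as a sketch rather than a drop-in replacement:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class OpenByIdSketch
{
    [StructLayout(LayoutKind.Sequential)]
    struct BY_HANDLE_FILE_INFORMATION
    {
        public uint dwFileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime;
        public uint dwVolumeSerialNumber;
        public uint nFileSizeHigh;
        public uint nFileSizeLow;
        public uint nNumberOfLinks;
        public uint nFileIndexHigh;
        public uint nFileIndexLow;
    }

    // Explicit layout so the union of FileId/ObjectId matches the native 24-byte struct.
    [StructLayout(LayoutKind.Explicit)]
    struct FILE_ID_DESCRIPTOR
    {
        [FieldOffset(0)] public uint dwSize;
        [FieldOffset(4)] public int Type;       // 0 = FileIdType (the FRN), 1 = ObjectIdType
        [FieldOffset(8)] public long FileId;
        [FieldOffset(8)] public Guid ObjectId;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(string name, uint access, uint share,
        IntPtr security, uint disposition, uint flags, IntPtr template);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetFileInformationByHandle(SafeFileHandle file,
        out BY_HANDLE_FILE_INFORMATION info);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern SafeFileHandle OpenFileById(SafeFileHandle volume, ref FILE_ID_DESCRIPTOR id,
        uint access, uint share, IntPtr security, uint flags);

    const uint GENERIC_READ = 0x80000000;
    const uint SHARE_READ_WRITE = 0x00000001 | 0x00000002;  // FILE_SHARE_READ | FILE_SHARE_WRITE
    const uint OPEN_EXISTING = 3;

    static void Main()
    {
        // 1. Open the volume itself ("\\.\C:"), not the C:\ root folder. Zero access is
        //    enough when the handle is only used as a hint for OpenFileById.
        using (SafeFileHandle volume = CreateFile(@"\\.\C:", 0, SHARE_READ_WRITE,
                                                  IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        using (SafeFileHandle file = CreateFile(@"C:\Users\me\Desktop\test.txt",  // placeholder path
                                                GENERIC_READ, SHARE_READ_WRITE,
                                                IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (volume.IsInvalid || file.IsInvalid)
                throw new Win32Exception(Marshal.GetLastWin32Error());

            // 2. The FRN is nFileIndexHigh/nFileIndexLow combined into one 64-bit value.
            BY_HANDLE_FILE_INFORMATION info;
            if (!GetFileInformationByHandle(file, out info))
                throw new Win32Exception(Marshal.GetLastWin32Error());
            long frn = ((long)info.nFileIndexHigh << 32) | info.nFileIndexLow;

            // 3. Re-open the same file purely by its ID.
            FILE_ID_DESCRIPTOR id = new FILE_ID_DESCRIPTOR
            {
                dwSize = (uint)Marshal.SizeOf(typeof(FILE_ID_DESCRIPTOR)),
                Type = 0,
                FileId = frn
            };
            using (SafeFileHandle byId = OpenFileById(volume, ref id, GENERIC_READ,
                                                      SHARE_READ_WRITE, IntPtr.Zero, 0))
            {
                if (byId.IsInvalid)
                    throw new Win32Exception(Marshal.GetLastWin32Error());
                Console.WriteLine("Opened by FRN 0x{0:X}", frn);
            }
        }
    }
}
```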
One more thought - OpenFileByID didn't show up until Vista and Windows Server 2008. You're there, right?
I have a WinForms (.NET C#) OLTP application based on Oracle.
From our support environment we regularly experience loss of connectivity to the database, and a resulting minidump file is generated (by what, I am not entirely certain) - apparently it does not cause the application to crash, but in order to actually do anything you have to close it and start it again.
After many such minidumps have been created in the same directory, all of a sudden the minidumps start getting rather strange file names - names that are apparently "illegal" on Windows.
For instance we have a file name like:
"°÷ƒ
_minidump_default_pid_20248_tid_x19AC_2015_9_1_8_31_51.dmp"
And yes the carriage return is PART of the file name.
We discovered this because log4net watches the directory and all of a sudden starts to bark unhandled exceptions due to these invalid file names.
So we are trying to figure out why the minidump is generated in the first place, but the question here is, can we somehow prevent the minidump from being generated with an invalid filename or otherwise control the naming process?
Secondly, does anybody know why is it even possible to create invalid file names in the first place?
Update:
For anyone looking at this trying to figure out why the dump files are created in the first place: our issue was that Windows was generating them when it was close to running out of memory, but for some reason we wouldn't always get an OOMException.
First, you should really try to find out how those dumps are generated. Microsoft, for example, provides a nice way using a registry key called LocalDumps, which has been a great help for me. I am sure that this approach won't generate invalid file names like the above.
Second, if the application does not crash, it has probably registered an unhandled exception handler. This is basically OK and designed to write crash dumps, but it means the unhandled exception is handled by the crashing process itself. How can the code handling the situation be sure that it is not itself affected by the crash? The better option is to let Windows, as the OS, handle the crash. Then the Windows kernel (which is not affected by the crash) can really handle the situation. That's what LocalDumps does.
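If it helps, here is a minimal sketch of configuring LocalDumps from code. The subkey name "MyOltpApp.exe" and the dump folder are placeholders for your own executable and path, and writing under HKEY_LOCAL_MACHINE requires administrative rights:

```csharp
using Microsoft.Win32;

class ConfigureLocalDumps
{
    static void Main()
    {
        // Per-application LocalDumps settings; Windows Error Reporting picks these up
        // and writes a dump whenever the named executable crashes.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\MyOltpApp.exe"))
        {
            key.SetValue("DumpFolder", @"C:\CrashDumps", RegistryValueKind.ExpandString);
            key.SetValue("DumpCount", 10, RegistryValueKind.DWord);   // keep at most 10 dumps
            key.SetValue("DumpType", 1, RegistryValueKind.DWord);     // 1 = mini dump, 2 = full dump
        }
    }
}
```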
Third, direct file system access is possible in Windows via paths that start with \\.\ when passed to the Windows API. Starting a path like that skips the usual file name checks, so you can end up with files whose names contain reserved characters such as *, ?, : or newlines, as you observed. The unhandled exception handler of your application is probably doing that and is affected by the crash in a way that parts of the file name get overwritten.
Chkdsk should be able to repair the file system.
Please check whether you are installing from a network path like \\remoteserver\d$\client.
If so, change it to a share without the administrative dollar sign, e.g. \\remoteserver\d\client.
The "$" in a share path can cause issues when extracting files under elevated permissions.
I have the following piece of code in my application:
if (!Directory.Exists(myPath))
Directory.CreateDirectory(myPath);
If I run it in a regular unit test sometimes it passes, sometimes not. The directory is always there (I made sure of it, so technically it will never be "created" by code). But every once in a while Directory.Exists(myPath) returns false, which makes the code try to create the folder, and then I get an UnauthorizedAccessException!
The funny thing here is that if I put a breakpoint on the CreateDirectory call and then move the yellow arrow back up to the test, it returns true!
What's going on?
myPath is \\nameOfLocalMachine\sharedFolder. The share is reliable and constantly used... .NET 4.0
I just made Fiddler simulate 3000 sequential requests. 175 failed... all with the same message:
Access to the path '\\nameOfLocalMachine\sharedFolder\randomFileName.json' is denied
This mishap is pretty normal on Windows. Programs open a handle on a directory like this and specify delete sharing, which permits anybody to delete the directory even though the program is using it. The directory won't actually disappear from the file system until that handle is closed. What follows is that trying to recreate that directory cannot work; it still exists. Windows generates an "access denied" error, reported in your C# program as the UnauthorizedAccessException.
While that sounds like an obscure feature, every program in Windows does this. Every process has a default working directory, the value of Environment.CurrentDirectory. Creating a handle on such a directory ensures that it cannot disappear while the program is using it. There are other cases: FileSystemWatcher would be another example, or a program busy iterating the directory. Anti-malware scanners and search indexers are notorious, hard-to-diagnose sources of such errors.
Otherwise this is a standard hazard of a multi-tasking operating system: you are not the only one using the file system. Not repeatedly deleting and creating the same directory ought to be very high on your list. If it is absolutely necessary, then rename the directory first before you delete it. You'd still fail to delete the renamed directory, but you won't fail recreating it. You can delete it later, the next time you need to do this. The odds of trouble are much lower then, because more time has passed.
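As a rough sketch of that rename-before-delete idea (the share path is the one from the question; the ".stale" naming scheme is just illustrative):

```csharp
using System;
using System.IO;

class RecreateDirectorySketch
{
    static void Recreate(string workPath)
    {
        if (Directory.Exists(workPath))
        {
            // Move the old tree out of the way instead of deleting it in place.
            string stale = workPath + ".stale-" + Guid.NewGuid().ToString("N");
            Directory.Move(workPath, stale);

            // Deleting may still fail while other processes hold handles on it;
            // that's fine, the renamed directory can be cleaned up on a later run.
            try { Directory.Delete(stale, true); }
            catch (IOException) { }
            catch (UnauthorizedAccessException) { }
        }

        // The original name is now free, so recreating it no longer collides
        // with a delete-pending handle on the old directory.
        Directory.CreateDirectory(workPath);
    }

    static void Main()
    {
        Recreate(@"\\nameOfLocalMachine\sharedFolder");
    }
}
```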
I'm reading the contents of an XML file and parsing that into an object model.
When I modify the values in the object model, then use the following code to save it back to the xml:
XElement optionXml = _panelElement.Elements("options").FirstOrDefault();
optionXml.SetAttributeValue("arming", value.ToString());
_document.Save(_fileName);
This works, as far as I can see, because when I close the application and restart it the values that I had saved are reflected in the object model next time I view it.
However, when I load the actual XML file, the values are still as they were originally.
Why is this? What do I need to do to save the actual XML file with the new values?
You are most likely experiencing file system virtualisation, which was introduced in Windows Vista.
Basically what this means is that you are saving your file, just not where you think you're saving it. For example, you might think that you are saving to C:\Program Files\Your App\yourFile.xml, but what is happening under the hood is that the OS is silently redirecting that write to %LOCALAPPDATA%\VirtualStore\Program Files\Your App\yourFile.xml. When you go to reload it, once again the OS silently redirects the read to that location.
This is a security measure designed to better encapsulate applications and their data and to prevent unauthorised writes to locations where damage can occur. You can still force a save to %PROGRAMFILES%\Your App, but to do that you either need to relax the ACLs applied to that folder, or you need to elevate the privilege level your application runs at.
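One quick way to confirm this is to check both locations for the file. A small sketch (the intended path is the example above; adjust it to wherever your app actually writes):

```csharp
using System;
using System.IO;

class VirtualStoreCheck
{
    static void Main()
    {
        string intended = @"C:\Program Files\Your App\yourFile.xml";

        // UAC file virtualisation mirrors writes under %LOCALAPPDATA%\VirtualStore,
        // preserving the rest of the original path.
        string virtualised = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "VirtualStore",
            intended.Substring(Path.GetPathRoot(intended).Length));

        Console.WriteLine("Intended path:     {0} (exists: {1})", intended, File.Exists(intended));
        Console.WriteLine("VirtualStore path: {0} (exists: {1})", virtualised, File.Exists(virtualised));
    }
}
```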
I wasn't sure whether to put this as a comment or as an answer, but I think it could be a potential answer. It sounds like the XML file is being saved, because the data is being persisted across instances of the application. It may be file system virtualisation like slugster mentioned, but it might be as simple as the fact that you are looking at the wrong copy of the XML file. If you are using a relative path, the file may have been copied to the new location. I would suggest you do a quick file search for that file name and see what you get back.
It turns out the file was being copied to and read from the Output Directory. I can see that it's being updated as expected from there.
So I have been writing to
Environment.SpecialFolder.ApplicationData
this data file, which needs to be deleted on uninstall. I am using Inno Setup to build my installer, and it works great for me. So my data file hangs out in the above path, and I do that because when I used to try to write it to
Application.ExecutablePath
certain boxes I tested it on would throw a nasty error at me when trying to write data there. I did some research - apparently that location is not always writable - and that's how I came up with Environment.SpecialFolder.ApplicationData.
That is why my data file now resides in SpecialFolder.ApplicationData. Trouble is, if the user uninstalls and reinstalls, I need that file gone. It might be a shortcoming of my knowledge of Inno, but I cannot figure out how to tell Inno where that file will be.
So then I thought I had a clever solution: Inno can run a file when it's done uninstalling, so I had my program create a file "uninstallData.bat" that says:
del "the file in my special folder application data path"
and I wrote it out to - drumroll -
Application.ExecutablePath
(yes, it was a while in development and I had forgotten it wasn't doable.)
So of course I am back to square one. I need to write a file to a path Inno knows about ({app}), and I need it to be able to delete my data file in the SpecialFolder... I don't care how I do it, I just need that file gone.
Are there other Environment. or Application. approaches I have missed? Maybe somewhere that is viewable by an uninstaller AND can be written to?
As an aside, I am not sure why the box I develop on can write to the application folder with no issue, but other boxes cannot... weird.
Any input would be great; I'm sorta lost as to how to crack this nut.
The environment location is in the user profile. If there are multiple users on the machine, and they all run the application then a copy of the file will be in each profile.
The path also depends on the OS.
Regardless, the current user's app data locations are pointed to by %APPDATA% and %LOCALAPPDATA%. These Windows environment variables should be available within Inno Setup.
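For reference, this is how the two locations map from C# (the "MyApp\data.dat" sub-path is just an illustrative example of where the question's data file might live):

```csharp
using System;
using System.IO;

class AppDataPaths
{
    static void Main()
    {
        // Roaming profile data - the same folder %APPDATA% expands to.
        string roaming = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

        // Per-machine, per-user data - the same folder %LOCALAPPDATA% expands to.
        string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

        Console.WriteLine(Path.Combine(roaming, "MyApp", "data.dat"));
        Console.WriteLine(Path.Combine(local, "MyApp", "data.dat"));
    }
}
```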
Application.ExecutablePath is not writable per standard definitions - the Program Files folder should never be manipulated by running applications. There are a number of special folders for that. Nice that you finally found... what has been properly documented by Microsoft for a LONG time now (minimum 10 years).
I suggest you get a proper installer - WiX comes to mind. Your problem is totally unrelated to C# - it seems to be entirely a "crappy installer" issue. Or provide a PROGRAM (not a bat file) to run at uninstall. What exactly is your problem there?
When I call FileInfo(path).LastAccessTime or FileInfo(path).LastWriteTime on a file that is in the process of being written, it returns the time the file was created, not the last time it was written to (i.e. now).
Is there a way to get this information?
Edit: To all the responses so far: I hadn't tried Refresh(), but that does not do it either. I am returned the time the file started being written to. The same goes for the static method, and for creating a new instance of FileInfo.
Codymanix might have the answer, but I'm not running Windows Server (using Windows 7), and I don't know where the setting is to test.
Edit 2: Nobody finds it interesting that this function doesn't seem to work?
The FileInfo values are only loaded once and then cached. To get the current value, call Refresh() before getting a property:
f.Refresh();
t = f.LastAccessTime;
Another way to get the current value is by using the static methods on the File class:
t = File.GetLastAccessTime(path);
Starting in Windows Vista, last access time is not updated by default. This is to improve file system performance. You can find details here:
http://blogs.technet.com/b/filecab/archive/2006/11/07/disabling-last-access-time-in-windows-vista-to-improve-ntfs-performance.aspx
To reenable last access time on the computer, you can run the following command:
fsutil behavior set disablelastaccess 0
As James has pointed out LastAccessTime is not updated.
The LastWriteTime has also undergone a twist since Vista. When the process has the file still open and another process checks the LastWriteTime it will not see the new write time for a long time -- until the process has closed the file.
As a workaround you can open and close the file from your external process. After you have done that, you can try to read the LastWriteTime again, which will then be the up-to-date value.
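A minimal sketch of that open-and-close trick from the observing process (the path is a placeholder for the file being watched):

```csharp
using System;
using System.IO;

class TouchAndRead
{
    static DateTime FreshLastWriteTime(string path)
    {
        // Briefly open a handle (with generous sharing so the writer is not disturbed)
        // and close it again; this nudges the file system into updating the cached
        // directory metadata that LastWriteTime is read from.
        using (File.Open(path, FileMode.Open, FileAccess.Read,
                         FileShare.ReadWrite | FileShare.Delete))
        {
        }

        return File.GetLastWriteTime(path);
    }

    static void Main()
    {
        Console.WriteLine(FreshLastWriteTime(@"C:\temp\being-written.log")); // placeholder path
    }
}
```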
File System Tunneling:
If an application implements something like a rolling logger, which closes the file and then renames it to a different file name, you will also run into issues, since the creation time and file size of the "old" file are remembered by the OS even though you created a new file. This includes wrong reports of the file size even if you recreated log.txt from scratch and it is still 0 bytes in size. This feature is called file system tunneling, and it is still present on Windows 8.1. For an example of how to work around this issue, check out RollingFlatFileTraceListener from Enterprise Library.
You can see the effects of file system tunneling on your own machine from the cmd shell.
echo test > file1.txt
ren file1.txt file2.txt
Wait one minute
echo test > file1.txt
dir /tc file*.txt
...
05.07.2015 19:26 7 file1.txt
05.07.2015 19:26 7 file2.txt
The file system is a state machine. Keeping states correctly synchronized is hard if you care about performance and correctness.
This strange tunneling syndrome is apparently still relied on by applications which, for example, autosave a file, move it to a safe location and then recreate the file at the same location. For these applications it makes no sense to give the file a new creation date, because it was only copied around. Some installers also do such tricks, moving files temporarily to a different location and writing the contents back later to get past a file-exists check in some install hooks.
Have you tried calling Refresh() just before accessing the property (to avoid getting a cached value)? If that doesn't work, have you looked at what Explorer shows at the same time? If Explorer is showing the wrong information, then it's probably something you can't really address - it might be that the information is only updated when the file handle is closed, for example.
There is a setting in Windows, sometimes enabled especially on server systems, which disables updating the modified and accessed times of files, for better performance.
From MSDN:
When first called, FileSystemInfo calls Refresh and returns the cached information on APIs to get attributes and so on. On subsequent calls, you must call Refresh to get the latest copy of the information.
FileSystemInfo.Refresh()
If your application is the one doing the writing, I think you are going to have to "touch" the file by setting the LastWriteTime property yourself between each buffer of data you write. Some pseudocode:
while (bytesWritten < totalBytes)
{
    br.Write(buffer);                         // write the next chunk of data
    bytesWritten += buffer.Length;
    myFileInfo.LastWriteTime = DateTime.Now;  // "touch" the file's timestamp
}
I'm not sure how severely this will affect write performance.
Tommy Carlier's answer got me thinking....
A good way to visualise the differences is to separately run the two snippets below (I just used LINQPad) while also running Sysinternals Process Monitor.
while (true)
    File.GetLastAccessTime([file path here]);
and
FileInfo bob = new FileInfo(path);
while (true)
{
    string accessed = bob.LastAccessTime.ToString();
}
If you look at Process Monitor while running the first snippet you will see repeated and constant access attempts to the file from the LINQPad process. The second snippet will do an initial access of the file, for which you will see activity in Process Monitor, and then very little afterwards.
However, if you go and modify the file (I just opened the text file I was monitoring using FileInfo, added a character and saved), you will see a series of access attempts by the LINQPad process in Process Monitor.
This illustrates the non-cached and cached behaviour of the two different approaches respectively.
Will the non-cached approach wear a hole in the hard drive?!
EDIT
I went away feeling all clever after my testing and then used the caching behaviour of FileInfo in my Windows service (basically to sit in a loop asking "has-the-file-changed, has-the-file-changed..." before doing processing).
While this approach worked on my dev box, it did not work in the production environment, i.e. the process just kept running regardless of whether the file had changed or not. I ended up changing my approach to the check and just used GetLastAccessTime as part of it. I don't know why it would behave differently on the production server... but I am not too concerned at this point.