Directory.Exists sensitive to timing? - c#

I have the following piece of code in my application:
if (!Directory.Exists(myPath))
    Directory.CreateDirectory(myPath);
If I run it in a regular unit test it sometimes passes, sometimes not. The directory is always there (I made sure of it, so technically it will never be "created" by code). But every once in a while Directory.Exists(myPath) returns false, which makes the code try to create the folder, and then I get an UnauthorizedAccessException!
The funny thing is that if I put a breakpoint on the CreateDirectory call and then drag the yellow arrow back up to the test, it returns true!
What's going on?
myPath is \\nameOfLocalMachine\sharedFolder. The share is reliable and constantly used... .NET 4.0
I just made Fiddler simulate 3000 sequential requests. 175 failed... All with the same message:
Access to the path '\\nameOfLocalMachine\sharedFolder\randomFileName.json' is denied

This mishap is pretty normal on Windows. Programs open a handle on a directory like this and specify delete sharing, which permits anybody to delete the directory even though the program is using it. The directory won't actually disappear from the file system until that handle is closed. It follows that trying to recreate that directory cannot work; it still exists. Windows generates an "access denied" error, reported in your C# program as an UnauthorizedAccessException.
While that sounds like an obscure feature, every program in Windows does this. Every process has a default working directory, the value of Environment.CurrentDirectory. Holding a handle on such a directory ensures that it cannot disappear while the program is using it. There are other cases: FileSystemWatcher would be another example, or a program busy iterating the directory. Anti-malware scanners and search indexers are notorious, hard-to-diagnose sources of such errors.
Otherwise this is a standard hazard of a multi-tasking operating system: you are not the only one using the file system. Not repeatedly deleting and creating the same directory ought to be very high on your list. If it is absolutely necessary, rename the directory first before you delete it. You'd still fail to delete the renamed directory, but you won't fail recreating it. You can delete it later, the next time you need to do this. The odds of trouble are much lower then, because more time has passed.
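A minimal sketch of that rename-then-delete idea (the helper name, the renamed-directory suffix, and the cleanup policy are illustrative, not from the question or the answer):

using System;
using System.IO;

static class DirectoryRecycler
{
    // Rename the existing directory out of the way, recreate a fresh one, and
    // delete the renamed copy on a best-effort basis. Per the advice above, the
    // rename typically works even while a handle keeps the old directory alive.
    public static void Recreate(string path)
    {
        if (Directory.Exists(path))
        {
            string graveyard = path + "." + Guid.NewGuid().ToString("N");
            Directory.Move(path, graveyard);

            try
            {
                Directory.Delete(graveyard, recursive: true);
            }
            catch (IOException)
            {
                // Somebody still holds a handle; leave it for a later cleanup pass.
            }
            catch (UnauthorizedAccessException)
            {
                // Same story: clean up on a later run instead of failing now.
            }
        }

        Directory.CreateDirectory(path);
    }
}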

Related

Can I prevent invalid minidump file names

I have a WinForms (.NET C#) OLTP application based on Oracle.
From our support environment we regularly experience loss of connectivity to the database, and a resulting minidump file is generated (by what, I am not entirely certain) - apparently it does not cause the application to crash, but in order to actually do anything you have to close it and start it again.
After many such minidumps have been created in the same directory, all of a sudden the minidumps start getting rather strange file names - file names that are apparently "illegal" on Windows.
For instance we have a file name like:
"°÷ƒ
_minidump_default_pid_20248_tid_x19AC_2015_9_1_8_31_51.dmp"
And yes, the carriage return is PART of the file name.
We discovered this because log4net watches the directory and all of a sudden started throwing unhandled exceptions due to these invalid file names.
So we are trying to figure out why the minidump is generated in the first place, but the question here is: can we somehow prevent the minidump from being generated with an invalid filename, or otherwise control the naming process?
Secondly, does anybody know why it is even possible to create invalid file names in the first place?
Update:
For anyone looking at this and trying to figure out why the dump files are created in the first place: our issue was that Windows was generating them when it was close to running out of memory, but for some reason we wouldn't always get an OutOfMemoryException.
First, you should really try to find out how those dumps are generated. Microsoft, for example, provides a nice mechanism via a registry key called LocalDumps, which has been a great help for me. I am sure that this approach won't generate invalid file names like the ones above.
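For reference, a hedged sketch of enabling LocalDumps from C#; the key and value names are Microsoft's documented Windows Error Reporting settings, while the dump folder and counts below are just example values:

using Microsoft.Win32;

// Enable user-mode crash dumps via Windows Error Reporting (LocalDumps).
// Writing under HKEY_LOCAL_MACHINE requires administrative rights.
using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
    @"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps"))
{
    key.SetValue("DumpFolder", @"C:\CrashDumps", RegistryValueKind.ExpandString); // example path
    key.SetValue("DumpCount", 10, RegistryValueKind.DWord);  // keep at most 10 dumps
    key.SetValue("DumpType", 2, RegistryValueKind.DWord);    // 2 = full dump, 1 = mini dump
}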
Second, if the application does not crash, it has probably registered an unhandled exception handler. This is basically OK and designed to write crash dumps, but then the unhandled exception is handled by the crashing process itself. How can the code handling the situation be sure that it is not itself affected by the crash? The better option is to let Windows, as the OS, handle the crash. Then the Windows kernel (which is not affected by the crash) can really handle the situation. That's what LocalDumps does.
Third, direct file system access is possible in Windows via paths that start with \\.\ when passed to the Windows API. Starting a path like that skips file name validation, so you can end up with files containing reserved characters such as *, ?, : or newlines, as you observed. The unhandled exception handler of your application is probably doing that and is affected by the crash in a way that parts of the file name get overwritten.
Chkdsk should be able to repair the file system.
Please check whether you are installing from a network path like \\remoteserver\d$\client.
If so, change it to \\remoteserver\d\client.
The "$" (administrative share) in the path can cause issues when files are extracted under elevated permissions.

Is it possible to be sure a certain file can be written to without opening it?

I wrote a custom control for output file name selection with the typical features: a text box for the filename, a "browse" button, and some other functionality specific to my application.
The text box changes color depending on the filename. If the file location cannot be written to, it turns red. If the file already exist, it turns yellow. Otherwise, it remains the system-assigned color.
To see if a file exists, I use IO.File.Exists; simple enough.
I implemented the "can the file be written to" check as a simple try-catch block where a file is actually opened, something is written to it, it is closed, and then it is deleted. If at any point an exception is thrown, I know the user can't use that filename, and I turn the text box red.
This is a catch-all; since I'm doing the actual operation I intend to do, it is foolproof. However, it seems irresponsible to have software creating and deleting files like crazy just to see whether it can.
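For context, a minimal sketch of the probe described above (the scratch-file naming is an assumption; the real control does more):

using System;
using System.IO;

// Probe writability by actually creating and deleting a scratch file next to
// the intended output file; any failure means the location is not usable.
static bool CanWriteHere(string outputPath)
{
    try
    {
        string directory = Path.GetDirectoryName(Path.GetFullPath(outputPath));
        string probe = Path.Combine(directory, Path.GetRandomFileName());

        File.WriteAllText(probe, "probe");
        File.Delete(probe);
        return true;
    }
    catch (Exception)  // UnauthorizedAccessException, IOException, ArgumentException, ...
    {
        return false;
    }
}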
So my question is, how do I replicate this functionality without creating files? I can see I have to:
Check the path for legality (e.g., 'z:' is not a valid filename). This entails parsing the path and making sure all directories exist.
If the location exists, I have to check for write permissions. (Several answered questions exist to this end.)
Is there anything else?
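A rough sketch of those checks without creating any file; note that the permission part is only approximate, since a real write can still fail later:

using System.IO;

// Approximate "does this look writable?" check that never touches the disk contents.
static bool LooksWritable(string outputPath)
{
    try
    {
        string fullPath = Path.GetFullPath(outputPath);   // throws on malformed paths
        string directory = Path.GetDirectoryName(fullPath);
        string fileName = Path.GetFileName(fullPath);

        if (string.IsNullOrEmpty(directory) || string.IsNullOrEmpty(fileName))
            return false;

        if (fileName.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
            return false;

        // The target directory must already exist. Checking effective write
        // permission reliably would require evaluating ACLs, which is omitted here.
        return Directory.Exists(directory);
    }
    catch
    {
        return false;
    }
}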
EDIT
Within minutes I see people are already voting up an answer that criticizes the fact that I'm checking at all whether the file is accessible before the actual write occurs. While I appreciate experts "standing back" from my question to see whether there is a completely different way to achieve it, telling me I shouldn't be doing it is not an answer to my question.
So let me elaborate on my application (I am not expecting hundreds of users at the same time).
I use this file chooser control in data acquisition applications. In many situations the test that you are about to run is "expensive" in one way or another. Therefore it is critical to set things up very carefully. Overwriting data can be very expensive (and for the fearful user I have a checkbox that will append the date and time down to the millisecond to the filename).
So the purpose of my indicator colors is not to provide a surefire way for the software to know the file can be written to (that check is still done at the instant it actually has to happen); it's to serve as an indicator to the user that he has at least set up the file name correctly, so that if he goes forward he is guaranteed not to overwrite old data and can be almost sure that a last-minute IO error (a filename typo) won't let the experiment run unrecorded.
I suggest this: don't check anything before the user commits the action. With your current approach, even if you verified the file is OK, it may be locked 5 seconds later when the user actually commits to writing it. Doing preliminary checks may only give the user a false impression of likely success. Especially consider this point on a terminal server with 100+ simultaneous users.
There is nothing wrong with showing a Retry/Cancel prompt if there is no access, and letting the user decide.
EDIT:
No offense, but there are standards for how such collisions are handled. The Windows standard is to show a prompt to the user. Also consider this: if you suddenly get a write-access denial on a folder where you are not expecting one, you probably need to hire another system/network administrator.
If the operation is costly, make sure this person is paid well. C'mon, what if your network goes down during the write? Or the hard drive fails? Or the router? There are many reasons why writing to a file can be interrupted, and you should be prepared for that. If you cannot afford that, make sure you have invested in good infrastructure and good people to support it.
Back down to earth, you can increase the chances of acquiring a successful lock on the file:
Pick a unique file name, using a datetime-based hash as a suffix/prefix.
Write to the user's home directory (%UserProfile%); it is likely that you will succeed.
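A small sketch combining those two suggestions (the subfolder and file name pattern are just examples):

using System;
using System.IO;

// Build a collision-resistant output path under the user's profile.
string folder = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
    "AcquisitionResults");                      // example subfolder
Directory.CreateDirectory(folder);              // no-op if it already exists

string fileName = string.Format("run_{0:yyyyMMdd_HHmmss_fff}.dat", DateTime.Now);
string outputPath = Path.Combine(folder, fileName);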
I can understand your concern about not wanting to risk losing "expensive" data because the file couldn't be written; a responsible program will do its best to avoid that situation.
I would do this by caching the results. Before the test is run, write a mock result to a file somewhere in the user data space, then leave the file open and write the real result to it. After this is done, write it to the user-specified file. Provide a recovery option that will read the cache file and write it out to the user's file.
Your approach could fail because the file being writable at the start doesn't mean it will still be writable later. The network could have gone down. Someone could have removed the flash drive. Someone else could be doing a large data transfer through a buggy router. (Real-world case: it took me a long time to prove it was a network problem and not my program. They finally accepted it was their fault when I showed that dir :*.* /s on multiple machines at once would almost certainly cause one or more to fail.)
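A minimal sketch of that cache-then-copy idea (the cache location, file names, and data type are assumptions):

using System;
using System.IO;

static class ResultSaver
{
    // Write the acquired data to a local cache file first, then copy it to the
    // user's chosen location; the cache survives even if the final copy fails.
    public static void SaveWithCache(byte[] results, string userSelectedPath)
    {
        string cacheDir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "MyAcquisitionApp", "Cache");        // hypothetical app folder
        Directory.CreateDirectory(cacheDir);

        string cachePath = Path.Combine(cacheDir, "lastRun.dat");
        File.WriteAllBytes(cachePath, results);

        try
        {
            // Keep the "never overwrite old data" guarantee from the question.
            File.Copy(cachePath, userSelectedPath, overwrite: false);
        }
        catch (IOException)
        {
            // Destination unavailable or already exists; the data is still safe
            // in cachePath and can be exported later via a recovery option.
        }
    }
}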

File.WriteAllLines Silent Failure

I've stumbled upon a somewhat unusual issue with File.WriteAllLines.
I have code that looks like this
File.WriteAllLines(filename, data);
bool exists = File.Exists(filename);
The problem is that sometimes the file write fails but does not raise an exception, and the code thinks the file exists when it doesn't.
The file is in a network location.
The file name is Database.lock. Does a .lock extension mean anything to the OS?
Exists returns true, but the file is simply not there. No exception is raised.
Calling Exists from a separate process returns false.
Calling Process.Start(filename) results in an error (not a code exception, just the OS saying it can't find the file).
The local machine is running Windows 7.
The remote machine is running Windows XP.
How can I debug what's going on here?
Update
Following David's advice, I watched the process using procmon.exe.
This is the result: http://i.imgur.com/IBz6Ujt.png
You'll notice there's a lot of repetitive activity, which I don't fully understand, and at the end the file is reported to have been written successfully.
Solved
Thanks to Patrick's suggestion, I discovered that, due to a code path I hadn't taken into consideration, the file was getting immediately deleted in a different part of the code. Sorry for wasting everyone's time. I am relieved, though, that it was just me being thoughtless rather than unforeseeable network issues.
This could be a permissions issue. File.Exists will return false if you don't have read permission for the file. It could be that you are running the code that creates the file from Visual Studio, where it has admin privileges, while you are running LINQPad with other permissions that don't have read access to that location.
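Since File.Exists swallows every error and simply returns false, a quick way to tell "missing" apart from "inaccessible" is to try opening the file and inspect the exception. A purely diagnostic sketch:

using System;
using System.IO;

// File.Exists returns false for *any* failure, including access denied,
// so open the file to see the real reason.
static void ProbeFile(string path)
{
    try
    {
        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            Console.WriteLine("File exists and is readable.");
        }
    }
    catch (FileNotFoundException) { Console.WriteLine("File really is not there."); }
    catch (DirectoryNotFoundException) { Console.WriteLine("The directory is missing."); }
    catch (UnauthorizedAccessException) { Console.WriteLine("File exists but access is denied."); }
    catch (IOException ex) { Console.WriteLine("Sharing violation or other IO error: " + ex.Message); }
}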

Where to store custom configuration files

I currently store a serialized XML file in the application directory that contains all changes specific to the program operation (not typical system or user configuration). Weeks ago, we started running into problems where it would not save correctly (read my previous question about this).
Long story short, we finally discovered that Windows 7 (and sometimes Vista) has an issue with writing into the application directory (specifically anything under Program Files). Now, if this were a normal configuration file I would simply store it under the user's APPDATA folder, but it is not normal. We run this on our own instrumentation, and misconfigurations are 99% of the reason customers have issues running our software. So we need this file to be accessible in a way that lets them easily find it and email it to us. AppData is hard enough for experienced users to find, much less for very non-technical people.
We've also tried running as Administrator and making the folder permissions wide open (we have control over every computer it runs on; it will never run on some random person's machine). But these approaches sometimes work and sometimes do not.
The worst part is that when I write the file back out, it doesn't even throw an error; it simply writes it to some temporary directory that expires at some unknown point in time. Weeks later, our user will have an issue, and the configuration file is all messed up.
So, my question is where should I be storing this file, if not in Program Files? Should I just put it in APPDATA anyway, and make a small utility that emails it to us automatically in case of a problem? Or can I leave it in Program Files, but change some specific permission or registry key to allow it to operate normally?
It depends on whether or not the user needs to edit the file directly. If not, you should put them in %APPDATA%, which you can access via:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
Otherwise, you might put it in My Documents:
Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
Either way, putting it in Program Files is not a good idea. As you discovered, there are permission issues, even if running as Administrator.
For those users, you could build in a button that opens this directory. You could put it in an inconspicuous place that you could later direct them to.
For users that have an email client on their machine, you could add a button that creates a new email with a preset subject and automatically attaches the file.
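A hedged WinForms sketch of both buttons; the folder, support address, and button names are made up, and mailto: cannot reliably attach files, so the user still attaches the file manually:

using System;
using System.Diagnostics;
using System.IO;
using System.Windows.Forms;

public partial class SettingsForm : Form
{
    // Hypothetical config location under %APPDATA%.
    private static readonly string ConfigDir = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        "MyCompany", "MyInstrumentApp");

    // "Show config folder" button: open the directory in Explorer so the user
    // can easily find the file.
    private void showConfigFolderButton_Click(object sender, EventArgs e)
    {
        Process.Start("explorer.exe", ConfigDir);
    }

    // "Email support" button: open a pre-addressed message in the default client.
    private void emailConfigButton_Click(object sender, EventArgs e)
    {
        Process.Start("mailto:support@example.com?subject=Instrument%20configuration");
    }
}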

.NET FileInfo.LastWriteTime & FileInfo.LastAccessTime are wrong

When I call FileInfo(path).LastAccessTime or FileInfo(path).LastWriteTime on a file that is in the process of being written, it returns the time that the file was created, not the last time it was written to (i.e. now).
Is there a way to get this information?
Edit: To all the responses so far: I hadn't tried Refresh(), but that does not do it either. I get back the time at which the file started being written to. The same goes for the static method, and for creating a new instance of FileInfo.
Codymanix might have the answer, but I'm not running Windows Server (using Windows 7), and I don't know where the setting is to test.
Edit 2: Nobody finds it interesting that this function doesn't seem to work?
The FileInfo values are only loaded once and then cached. To get the current value, call Refresh() before getting a property:
f.Refresh();
t = f.LastAccessTime;
Another way to get the current value is by using the static methods on the File class:
t = File.GetLastAccessTime(path);
Starting in Windows Vista, last access time is not updated by default. This is to improve file system performance. You can find details here:
http://blogs.technet.com/b/filecab/archive/2006/11/07/disabling-last-access-time-in-windows-vista-to-improve-ntfs-performance.aspx
To reenable last access time on the computer, you can run the following command:
fsutil behavior set disablelastaccess 0
As James has pointed out, LastAccessTime is not updated.
LastWriteTime has also undergone a twist since Vista. When a process still has the file open and another process checks LastWriteTime, it will not see the new write time for a long time -- until the first process has closed the file.
As a workaround, you can open and close the file from your external process. After you have done that, you can read LastWriteTime again and it will be up to date.
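A small sketch of that open/close workaround from the watching process (the helper name is illustrative; per the answer above, the open/close itself is what makes the fresh value visible):

using System;
using System.IO;

static DateTime GetFreshLastWriteTime(string path)
{
    // Open and immediately close a read handle; afterwards the reported
    // LastWriteTime reflects the writes made by the other process.
    using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
    }

    return File.GetLastWriteTime(path);
}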
File System Tunneling:
If an application implements something like a rolling logger, which closes the file and then renames it to a different file name, you will also run into issues, since the creation time and file size of the "old" file are remembered by the OS even though you did create a new file. This includes wrong reports of the file size, even if you recreated log.txt from scratch and it is still 0 bytes in size. This feature is called OS file system tunneling, and it is still present on Windows 8.1. For an example of how to work around this issue, check out RollingFlatFileTraceListener from Enterprise Library.
You can see the effects of file system tunneling on your own machine from the cmd shell.
echo test > file1.txt
ren file1.txt file2.txt
Wait one minute
echo test > file1.txt
dir /tc file*.txt
...
05.07.2015 19:26 7 file1.txt
05.07.2015 19:26 7 file2.txt
The file system is a state machine. Keeping states correctly synchronized is hard if you care about performance and correctness.
This strange tunneling syndrome is apparently still relied on by applications which, for example, autosave a file, move it to a safe location, and then recreate the file at the same location. For these applications it makes no sense to give the file a new creation date, because it was only copied around. Some installers also do such tricks, moving files temporarily to a different location and writing the contents back later, to get past file-exists checks in some install hooks.
Have you tried calling Refresh() just before accessing the property (to avoid getting a cached value)? If that doesn't work, have you looked at what Explorer shows at the same time? If Explorer is showing the wrong information, then it's probably something you can't really address - it might be that the information is only updated when the file handle is closed, for example.
There is a setting in Windows, sometimes enabled especially on server systems, so that modified and accessed times for files are not updated, for better performance.
From MSDN:
When first called, FileSystemInfo calls Refresh and returns the cached information on APIs to get attributes and so on. On subsequent calls, you must call Refresh to get the latest copy of the information.
FileSystemInfo.Refresh()
If your application is the one doing the writing, I think you are going to have to "touch" the file by setting the LastWriteTime property yourself between each buffer of data you write. Some pseudocode:
while (bytesWritten < totalBytes)
{
    bw.Write(buffer);                        // bw is a BinaryWriter over the output file
    bytesWritten += buffer.Length;
    myFileInfo.LastWriteTime = DateTime.Now; // "touch" the file so watchers see progress
}
I'm not sure how severely this will affect write performance.
Tommy Carlier's answer got me thinking....
A good way to visualise the differences is to run the two snippets below separately (I just used LINQPad), while also running Sysinternals Process Monitor.
while (true)
    File.GetLastAccessTime([file path here]);
and
FileInfo bob = new FileInfo(path);
while (true)
{
    string accessed = bob.LastAccessTime.ToString();
}
If you look at Process Monitor while running the first snippet, you will see repeated and constant access attempts to the file from the LINQPad process. The second snippet will do an initial access of the file, for which you will see activity in Process Monitor, and then very little afterwards.
However, if you go and modify the file (I just opened the text file I was monitoring with FileInfo, added a character, and saved it), you will see a series of access attempts by the LINQPad process to the file in Process Monitor.
This illustrates the non-cached and cached behaviour of the two different approaches respectively.
Will the non-cached approach wear a hole in the hard drive?!
EDIT
I went away feeling all clever after my testing and then used the caching behaviour of FileInfo in my Windows service (basically to sit in a loop asking 'has the file changed? has the file changed?...' before doing any processing).
While this approach worked on my dev box, it did not work in the production environment; the process just kept running regardless of whether the file had changed or not. I ended up changing my approach to the check and just used GetLastAccessTime as part of it. I don't know why it would behave differently on the production server... but I am not too concerned at this point.
