C# can read from a file that doesn't exist?

We have some C# code that reads data from a text file using a StreamReader. On one computer we can read data from the text file even after it has been deleted or replaced with a different text file - the File.Exists call reports that the file exists even when it doesn't in Windows Explorer. However, on another computer this behaviour doesn't happen. Both computers are running Vista Business and .NET 2.0.50727 SP2.
We have tried restarting the machine without a resolution.
Does anyone have any understanding on how this could be possible and information about possible solutions?
Thanks,
Alan

From MSDN
The Exists method should not be used for path validation, this method merely checks if the file specified in path exists.
Be aware that another process can potentially do something with the file in between the time you call the Exists method and perform another operation on the file, such as Delete. A recommended programming practice is to wrap the Exists method, and the operations you take on the file, in a try...catch block as shown in the example. This helps to narrow the scope for potential conflicts. The Exists method can only help to ensure that the file will be available, it cannot guarantee it.

Could this be a folder virtualization issue?

Is the file being opened for reading before it's being deleted? If it is, it's not unexpected to still be able to read from the opened file even after the filesystem has otherwise let it go.
RE: File.Exists():
File.Exists is inherently prone to race-conditions. It should not be used as the exclusive manner to verify that a file does or doesn't exist before performing some operation. This mistake can frequently result in a security flaw within your software.
Rather, always handle the exceptions that can be thrown by the actual file operations (open, read, and so on), and validate your input once the file is open.
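For illustration, here is a minimal sketch of that approach, relying on the open call itself instead of a File.Exists pre-check (the helper name and messages are invented for the example):

using System;
using System.IO;

static class SafeRead
{
    // Returns the file contents, or null if the file could not be read.
    static string TryReadAllText(string path)
    {
        try
        {
            // Open and read directly; let the operation itself report any failure.
            using (var reader = new StreamReader(path))
            {
                return reader.ReadToEnd();
            }
        }
        catch (FileNotFoundException)
        {
            Console.WriteLine("File not found: " + path);
        }
        catch (DirectoryNotFoundException)
        {
            Console.WriteLine("Directory not found: " + path);
        }
        catch (UnauthorizedAccessException)
        {
            Console.WriteLine("Access denied: " + path);
        }
        catch (IOException ex)
        {
            Console.WriteLine("I/O error (the file may be locked): " + ex.Message);
        }
        return null;
    }
}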

UnauthorizedAccessException when calling FileInfo.Length

I do get reports via Crashlytics, that some of the Users of my Unity app (roughly 0.5%) get an UnauthorizedAccessException when I call FileInfo.Length;
the interesting part of the stacktrace is:
Non-fatal Exception: java.lang.Exception
UnauthorizedAccessException : Access to the path '/storage/emulated/0/Android/data/com.myCompany.myGreatGame/files/assets/myAsset.asset' is denied.
System.IO.__Error.WinIOError (System.IO.__Error)
System.IO.FileInfo.get_Length (System.IO.FileInfo)
The corresponding file (it's a different file for every report) was written (or is currently being written) by the same application (possibly many sessions earlier). The call happens on a background thread, and there might be some writing going on at the same time. But according to the .NET docs this property should be pre-cached (see https://learn.microsoft.com/en-us/dotnet/api/system.io.fileinfo.length?view=netframework-2.0)
The whole code causing it is:
private static long DirSize(DirectoryInfo d)
{
    long size = 0;
    FileInfo[] fileInfos = d.GetFiles();
    foreach (FileInfo fileInfo in fileInfos)
    {
        size += fileInfo.Length;
    }
    ...
Did anyone experience something similar and knows what might be causing it?
This looks like a very exotic error, and because of that, I have no evidence to back up my suggestions.
Suggestion 1:
The user has installed antivirus software. Such applications sometimes behave like malware, locking files that the host program is not currently using so they can scan them (especially if they are trying to prevent malicious behavior). This would explain the rare nature of the error. I would look at the permissions of the file after a failed call to the Length property; this might give you (and possibly us) more insight.
Suggestion 2:
In some circumstances you cannot read the length while the application is actively writing to the file. This should never happen, but bugs occur even in operating systems. A possible path: some application is writing to the file; the file is modified and its metadata (including Length) is being updated; while that happens you read the length from another thread, and the OS blocks reads of the metadata (including Length) while it is being written (probably for safety reasons).
Suggestion 3 (and most probable):
Bad SD card/memory/CPU. Random errors can always happen because you do not control the client's hardware. I would check whether this 0.5% of errors comes from a single user, or only appears to come from multiple users because hardware issues reset their unique IDs (check other data such as phone model, as this might also give you clues).
You are most likely trying to access a file you don't have permissions to access. There are certain files that even Administrator cannot access.
You could wrap the call in a try/catch block to handle the exception.
See this question.
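As a rough sketch (not a tested fix), the try/catch could sit directly around the Length call inside the DirSize loop from the question; only the file loop is shown here, and the elided recursion into subdirectories would be wrapped the same way:

using System.IO;

static class DirectorySizing
{
    // Sums the sizes of the files it can query and silently skips the rest,
    // so the result is a lower bound rather than an exception.
    static long DirSizeSafe(DirectoryInfo d)
    {
        long size = 0;
        foreach (FileInfo fileInfo in d.GetFiles())
        {
            try
            {
                size += fileInfo.Length;
            }
            catch (UnauthorizedAccessException)
            {
                // Permission denied for this file (antivirus lock, scoped storage, ...).
            }
            catch (IOException)
            {
                // Metadata could not be read right now (file being written, bad media, ...).
            }
        }
        return size;
    }
}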
If you read Microsoft's documentation carefully, it clearly states that:
1. an I/O error is thrown if the Refresh fails;
2. the FileInfo.Length property is pre-cached only in a precise list of cases (GetDirectories, GetFiles, GetFileSystemInfos, EnumerateDirectories, EnumerateFiles, EnumerateFileSystemInfos), and the cached info should be refreshed by calling the Refresh() method.
Putting #1 and #2 together, you can identify the problem: while you try to get that information, you have the file open with an exclusive lock, which gives you the error from #1. I would suggest approaching this with two different pieces of logic. One is the obvious try/catch block, but because that block (a) costs performance and (b) doesn't solve the logical problem of knowing the file size, you should also cache the size yourself when you acquire the exclusive lock.
Put those values in a static table in memory, a simple key/value map (file/size), and check it before calling FileInfo.Length. Basically, when you acquire the lock you add the file/size entry to the dictionary, and when you are done you remove it. This way you will never get the error again while still being able to compute the directory size.
~Pino
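For what it's worth, a rough sketch of that in-memory size table might look like this (FileSizeCache and its methods are invented names, and the locking code that would call Register/Unregister is assumed to exist elsewhere):

using System.Collections.Concurrent;
using System.IO;

// Hypothetical helper: remembers the sizes of files this app currently holds open
// with an exclusive lock, so other threads don't have to call FileInfo.Length on them.
static class FileSizeCache
{
    private static readonly ConcurrentDictionary<string, long> sizes =
        new ConcurrentDictionary<string, long>();

    // Call right after acquiring the exclusive lock on the file.
    public static void Register(string path, long size) => sizes[path] = size;

    // Call right after releasing the lock.
    public static void Unregister(string path) => sizes.TryRemove(path, out _);

    // Prefer the cached value; fall back to the file system otherwise.
    public static long GetSize(FileInfo fileInfo) =>
        sizes.TryGetValue(fileInfo.FullName, out long cached)
            ? cached
            : fileInfo.Length;
}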

Safely saving a file in Windows 10 IOT

My team requires a bulletproof way to save a file (less than 100kb) on Windows 10 IOT.
The file cannot be corrupted, but it's OK to lose the most recent version if the save failed because of a power cut etc.
Since the file I/O APIs have changed significantly (no more File.Replace), we are not sure how to achieve this.
We can see that:
var file = await folder.CreateFileAsync(fileName, CreationCollisionOption.OpenIfExists);
await Windows.Storage.FileIO.WriteTextAsync(file, data);
is reliably unreliable (it repeatedly broke when we stopped debugging or reset the device), and we end up with a corrupted file (full of zeroes) and a .tmp file next to it. We can recover this .tmp file, but I'm not confident that we should base our solution on undocumented behaviour.
One way we want to try is:
var tmpfile = await folder.CreateFileAsync(fileName+".tmp",
CreationCollisionOption.ReplaceExisting);
await Windows.Storage.FileIO.WriteTextAsync(tmpfile, data);
var file = await folder.CreateFileAsync(fileName, CreationCollisionOption.OpenIfExists);
// can this end up with a corrupt or missing file?
await tmpfile.MoveAndReplaceAsync(file);
In summary, is there a safe way to save some text to a file that will never corrupt the file?
Not sure if there's a best practice for this, but if I needed to come up with something myself:
I would do something like calculating a checksum and saving that along with the file.
When saving the next time, don't overwrite the file but save it next to the previous one (which should be "known good"), and delete the previous one only after verifying (using the checksum) that the new save completed successfully.
Also, I would assume that a rename operation should not corrupt the file, but I haven't researched that.
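Sketching that idea (plain System.IO is used here for brevity; in a UWP app the Windows.Storage FileIO equivalents would take its place, and the checksum-in-front-of-the-data layout is just one possible convention):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class ChecksumFile
{
    // Store "<checksum>\n<data>" so a torn or corrupted write can be detected later.
    public static void Write(string path, string data)
    {
        File.WriteAllText(path, Checksum(data) + "\n" + data);
    }

    // Returns the data only if the stored checksum still matches; null means corrupt.
    public static string ReadVerified(string path)
    {
        string content = File.ReadAllText(path);
        int split = content.IndexOf('\n');
        if (split < 0)
            return null;
        string expected = content.Substring(0, split);
        string data = content.Substring(split + 1);
        return Checksum(data) == expected ? data : null;
    }

    private static string Checksum(string text)
    {
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(text)));
    }
}

When saving, the new payload would go to a sibling file (for example fileName + ".new"), be checked with ReadVerified, and only then would the previous file be deleted and the new one renamed into place; on startup, whichever copy verifies is the one to load.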
This article, Best practices for writing to files, has a good explanation of the underlying processes involved in writing to files in UWP.
The following common issues are highlighted:
A file is partially written.
The app receives an exception when calling one of the methods.
The operations leave behind .TMP files with a file name similar to the target file name.
What is not easily deduced from the discussion of the convenience-vs-control trade-off is that while create or edit operations are more prone to failure, because they do a lot of things, rename operations are far more fault tolerant as long as they are not physically moving bits around the filesystem.
Your suggestion of creating a temp file first is on the right track and may serve you well, but using MoveAndReplaceAsync means that you are still susceptible to these known issues if the destination file already exists.
UWP will use a transactional pattern with the file system and may create various backup copies of the source and the destination files.
You can take control of the final step by deleting the original file before calling MoveAndReplaceAsync, or you could simply use RenameAsync if your temp file is in the same folder; these involve fewer components, which should reduce the surface for failure.
#hansmbakker has an answer along these lines; how you identify that the file write was successful is up to you, but isolating the heavy write operation and verifying it before overwriting your original is a good idea if you need it to be bulletproof.
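A rough sketch of that delete-then-rename flow with the Windows.Storage API (the ".new" suffix is an invented convention, and error handling plus startup recovery of a leftover temp file are left out):

using System.Threading.Tasks;
using Windows.Storage;

static class ReplaceByRename
{
    public static async Task SaveAsync(StorageFolder folder, string fileName, string data)
    {
        // The heavy write goes to a sibling temp file; the current file stays untouched.
        StorageFile temp = await folder.CreateFileAsync(
            fileName + ".new", CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(temp, data);

        // (Verify the temp file here if you need the extra guarantee.)

        // Remove the original so the final step is a plain rename rather than a replace.
        var original = await folder.TryGetItemAsync(fileName) as StorageFile;
        if (original != null)
        {
            await original.DeleteAsync();
        }
        await temp.RenameAsync(fileName, NameCollisionOption.ReplaceExisting);
    }
}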
About Failure
I have observed the .TMP files a lot when using the Append variants of FileIO writing; the .TMP files have the content of the original file before the Append, but the actual file does not always have all of the original content; sometimes it is a mix of old and new content.
In my experience, UWP file writes are very reliable when your entire call structure down to the write operation is asynchronous and correctly awaits the pipeline, AND you take steps to ensure that only one process is trying to access the same file at any point in time.
When you try to manipulate files from a synchronous context you can start to see the "unreliable" behaviour you have identified; this happens a lot in code that is being transitioned from the old synchronous operations to the newer async variants of the FileIO operations.
Make sure the code calling your write method is non-blocking and correctly awaits; this will allow you to catch any exceptions that might be raised.
It is common for traditionally synchronous-minded developers to try to use a lock(){} pattern to ensure single access to the file, but you cannot easily await inside a lock, and attempts to do so often become the source of UWP file write issues.
If your code has a locking mechanism to ensure singleton access to the file, have a read over these articles for a different approach (a minimal sketch follows after the links); they're old, but they're a good resource covering the transition from traditional synchronous C# development into async and parallel development.
What’s New For Parallelism in .NET 4.5
Building Async Coordination Primitives, Part 6: AsyncLock
Building Async Coordination Primitives, Part 7: AsyncReaderWriterLock
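As a minimal sketch of that idea, a SemaphoreSlim (which can be awaited, unlike lock) can gate the write; this assumes every writer in the process goes through this one invented helper:

using System.Threading;
using System.Threading.Tasks;
using Windows.Storage;

static class SingleWriter
{
    // One awaitable gate for the whole process; lock(){} cannot contain an await.
    private static readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);

    public static async Task WriteTextAsync(StorageFile file, string data)
    {
        await gate.WaitAsync();
        try
        {
            await FileIO.WriteTextAsync(file, data);
        }
        finally
        {
            gate.Release();
        }
    }
}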
Another time we encounter a synchronous constraint is when an Event, Timer, or Dispose context is the trigger for writing to the file in the first place. There are different techniques involved there; please post another question that covers that scenario specifically if you think it might be contributing to your issues. :)

Is File.Exists considered harmful?

I often use library functions like File.Exists to check for a file's existence before opening it or doing some other action. While I have had good luck with this in practice over the years, I wonder if it is a poorly thought-out pattern.
Any IO call like a file system read can fail for multiple reasons. The path string could be wrong or the file actually not exist, you could lack permissions, someone else might have a lock that blocks you. You could even have another process or another user on the network move a file in the millisecond between your File.Exists and your Open.
Even if you get a successful result from File.Exists, you still really should enclose your actual open statements in a try block to handle one of the other possible failure modes. If I am thinking about this correctly, File.Exists just lulls you into a false sense of safety if you use it instead of Try (as I am sure that I have on occasion in the past).
All of this makes it sound like I should abandon File.Exists and change whatever existing code I find to use the try/catch pattern only. Is this a sensible conclusion? I realize that the framework authors put it there for us to use, but that does not automatically make it a good tool in practice.
I think that the answer completely depends on your specific reasons for using File.Exists.
For example, if you are checking a certain file path for arrival of a file, File.Exists could easily be the appropriate approach because you don't care what the reason for non-existence is.
However, if you are processing a file that an end user has requested (e.g. "please import this Excel file"), you will want to know exactly why the file has failed. In this specific instance, File.Exists wouldn't be quite the right approach because the file's existence could change between the time you check it and the time you open the file. In this case, we attempt to open the file and obtain a lock on it prior to processing it. The open method will throw the errors appropriate to the specific scenario that you are handling, so you can provide the user with more accurate information about the problem (i.e. another process has the file locked, the network file is not available, etc.).
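By way of illustration, such an "open and lock first, report the specific reason on failure" flow might look roughly like this (the messages and the wrapping exception are invented for the example):

using System;
using System.IO;

static class ImportOpener
{
    // Open the user's file and take an exclusive lock up front; translate the
    // failure into a message the end user can act on.
    public static FileStream OpenForImport(string path)
    {
        try
        {
            return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None);
        }
        catch (FileNotFoundException)
        {
            throw new InvalidOperationException("The file could not be found: " + path);
        }
        catch (UnauthorizedAccessException)
        {
            throw new InvalidOperationException("You do not have permission to read: " + path);
        }
        catch (IOException ex)
        {
            // Covers sharing violations (another process has the file locked),
            // unavailable network paths, and similar conditions.
            throw new InvalidOperationException("The file could not be opened: " + ex.Message, ex);
        }
    }
}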
You should absolutely implement an exception handler for any code that could reasonably throw an exception and any I/O operation falls into that category.
That doesn't mean that using File.Exists is wrong though. If there is a reasonable possibility that the file may not exist then prevention is more efficient than cure. If the file absolutely should exist though, it might be more performant overall to suffer the occasional exception rather than take the hit of checking first every time.
I use File.Exists in cases where the file may not exist under normal operating conditions (without something being broken in my system). If I know the file should exist (unless my system is broken), then I don't use File.Exists.
I wouldn't call this a "pattern" though. The best pattern is to consider what you're doing on a case by case basis.
It is up to you how you want to handle the file-not-found case. You can use File.Exists to check whether the file is there, or you can wrap your code in a try/catch block and handle FileNotFoundException; either approach tells you whether the file exists. It is purely up to you, but I would prefer to check with File.Exists. It is like checking an object for null before accessing it rather than wrapping the access in try/catch and discovering in the catch block that the object was null. It is usually better to handle such validation yourself rather than leaving it to a try/catch block.

Is it possible to bypass a file lock in C# when another thread/process is unecessarily using an exclusive lock?

Is there a way to bypass or remove the file lock held by another thread without killing the thread?
I am using a third-party library in my app that is performing read-only operations on a file. I need a second thread to read the file at the same time to extract some extra data the third-party library is not exposing. Unfortunately, the third-party library opened the file with a read/write lock, and hence I am getting the usual "The process cannot access the file ... because it is being used by another process" exception.
I would like to avoid pre-loading the entire file with my thread because the file is large and that would cause unwanted delays in loading and excess memory usage. Copying the file is not practical due to the size of the files. During normal operation, two threads hitting the same file would not cause any significant I/O contention/performance problems. I don't need perfect time synchronization between the two threads, but they need to be reading the same data within half a second of each other.
I cannot change the third-party library.
Are there any work-arounds to this problem?
If you start messing with the underlying file handle you may be able to unlock portions of it; the trouble is that the thread accessing the file is not designed to handle this kind of tampering and may end up crashing.
My strong recommendation would be to patch the third party library, anything you do can and probably will blow up in real world conditions.
In short, you cannot do anything about a file locked by a third party. You could get by with Richard E's answer above, which mentions the utility Unlocker.
Once the third party opens a file and sets the lock on it, the underlying system grants that third party the lock to ensure no other process can access it. There are two trains of thought on this.
Use DLL injection to patch up the code so that it explicitly sets or clears the lock. This can be dangerous, as you would be messing with another process's stability and could end up crashing the process and causing grief. Think about it: the underlying system keeps track of files opened by a process. DLL injection and patching the code requires the technical knowledge to determine which process to inject into at run-time and to alter the flags when intercepting the Win32 API call OpenFile(...).
Since this was tagged as .NET, why not disassemble the third-party library into .il files, alter the flag for the lock to shared, and rebuild the library by recompiling all the .il files back into a DLL? This would, of course, require rooting around in the code to find the class where the file is being opened.
Have a look at the podcast here. And have a look here that explains how to do the second option highlighted above, here.
Hope this helps,
Best regards,
Tom.
This doesn't address your situation directly, but a tool like Unlocker achieves what you're trying to do, albeit via a Windows UI.
Any low-level hack to do this may result in a thread crashing, file corruption, etc.
Hence I thought I'd mention the next best thing: just wait your turn and poll until the file is not locked: https://stackoverflow.com/a/11060322/495455
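That linked answer boils down to something like the following sketch: attempt the open you actually need, treat a sharing violation as "still locked", and retry until a timeout (the polling interval and timeout here are arbitrary):

using System.IO;
using System.Threading;

static class FilePolling
{
    // A sharing violation surfaces as an IOException on the open attempt.
    public static bool IsLocked(string path)
    {
        try
        {
            using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                return false;
            }
        }
        catch (IOException)
        {
            return true;
        }
    }

    // Returns true once the file could be opened, false if the timeout elapsed first.
    public static bool WaitUntilUnlocked(string path, int timeoutMs, int pollMs = 250)
    {
        int waited = 0;
        while (IsLocked(path))
        {
            if (waited >= timeoutMs)
                return false;
            Thread.Sleep(pollMs);
            waited += pollMs;
        }
        return true;
    }
}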
I don't think this second piece of advice will help, but the closest thing (that I know of) would be DemandReadFileIO:
IntSecurity.DemandReadFileIO(filename);

internal static void DemandReadFileIO(string fileName)
{
    string full = fileName;
    full = UnsafeGetFullPath(fileName);
    new FileIOPermission(FileIOPermissionAccess.Read, full).Demand();
}
I do think this is a problem that can be solved with C++. It is annoying, but at least it works (as discussed here: win32 C/C++ read data from a "locked" file).
The steps are:
Open the file, before the third-party library does, with _fsopen and the _SH_DENYNO flag
Open the file with the third-library
Read the file within your code
You may be interested in these links as well:
Calling c++ from c# (Possible to call C++ code from C#?)
The inner link from this post with a sample (http://blogs.msdn.com/b/borisj/archive/2006/09/28/769708.aspx)
Have you tried making a dummy copy of the file before your third-party library gets hold of it, and then using that copy for your manipulations? Logically this would only be considered if the file in question is fairly small, but it is a kind of a cheat :) Good luck.
If the file is locked and isn't being used, then you have a problem with the way your file locking/unlocking mechanism works. You should only lock a file when you are modifying it, and should then immediately unlock it to avoid situations like this.

Atomicity of File.Move

I want to rename a file in a directory as an atomic transaction. The file will not be changing directories. The path is provided as a UNC Path to an NTFS file system, probably on either Server 03 or 08.
Is File.Move() atomic for these purposes? As in, it either completes successfully or fails such that the original file is still intact?
My gut says yes, but I wanted to make sure.
Yes, in NTFS. From here:
As an aside if you are running under NTFS then file operations are atomic at the file system level. A rename will occur in a single operation as far as any higher code is concerned. The problem you are seeing almost appears to be an issue where the FileInfo object is being shared across applications. It is a MarshalByRef object and therefore can be used in remoting environments. Don't know if this applies to you.
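For illustration only, the rename under discussion is simply a same-directory File.Move; on NTFS it either succeeds or throws, leaving the original file in place:

using System.IO;

static class RenameExample
{
    // Rename within the same NTFS volume: a single rename operation at the
    // file-system level, so a failure leaves the source file untouched.
    public static void Rename(string directory, string oldName, string newName)
    {
        File.Move(Path.Combine(directory, oldName),
                  Path.Combine(directory, newName));
    }
}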
