I have some code that writes a file by saving a MemoryStream to a FileStream using MemoryStream.WriteTo(). After the file is closed it is opened up again to read some metadata...
This works about 80-90% of the time. The other 10-20% I get an exception saying the file is "in use by another process".
Does FileStream.Dispose() not release resources synchronously? Is there something going on lower down in Win32 land that I'm not aware of? I'm not seeing anything obvious in the .NET documentation.
As "immediately" as possible. There can easily be some lag due to outstanding writes, delay in updating the directory info etc. It could also be anti-virus software checking your changed file.
This may be a rare case where a Thread.Sleep(1) is called for. But to be totally safe you will have to catch the (any) exception and try again a set number of times.
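For example, a minimal retry sketch (the attempt count, delay, and method name are arbitrary illustrations, not anything prescribed):
using System.IO;
using System.Threading;

// Sketch only: retry opening the file a few times, since the lock is
// usually released within milliseconds. The values are arbitrary.
static void ReadMetadataWithRetry(string path)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (FileStream stream = File.OpenRead(path))
            {
                // read the metadata here
            }
            return;
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            Thread.Sleep(50); // give the OS / virus scanner time to let go
        }
    }
}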
Related
I get reports via Crashlytics that some users of my Unity app (roughly 0.5%) hit an UnauthorizedAccessException when I call FileInfo.Length;
the interesting part of the stacktrace is:
Non-fatal Exception: java.lang.Exception
UnauthorizedAccessException : Access to the path '/storage/emulated/0/Android/data/com.myCompany.myGreatGame/files/assets/myAsset.asset' is denied.
System.IO.__Error.WinIOError (System.IO.__Error)
System.IO.FileInfo.get_Length (System.IO.FileInfo)
The corresponding file (it's a different file in every report) was written (or is currently being written) by the same application (possibly many sessions earlier). The call happens on a background thread, and there might be some writing going on at the same time. But according to the .NET docs this property should be pre-cached (see https://learn.microsoft.com/en-us/dotnet/api/system.io.fileinfo.length?view=netframework-2.0)
The whole code causing it is:
private static long DirSize(DirectoryInfo d)
{
    long size = 0;
    FileInfo[] fileInfos = d.GetFiles();
    foreach (FileInfo fileInfo in fileInfos)
    {
        size += fileInfo.Length;
    }
    ...
Has anyone experienced something similar and knows what might be causing it?
This looks like a very exotic error, and because of that, I have no evidence to back up my suggestions.
Suggestion 1:
User has installed antivirus software - those applications sometimes behave like malware, locking files that are not in use by the host program in order to scan them (especially if they want to prevent malicious behavior). This would explain the rare nature of the error. I would check the permissions of the file after a failed call to the Length property; this might give you (and possibly us) more insight.
Suggestion 2:
In some circumstances you cannot read the length while an application is actively writing to the file. This should never happen, but bugs happen, even in an OS. A possible path: some application is writing to the file; the file is modified and its metadata (including Length) is being written; while that happens you read the length from another thread, and the OS locks the file's metadata (including Length) against reads while it is being updated (probably for security reasons).
Suggestion 3 (and most probable):
Bad SD card/memory/CPU - random errors can always happen, because you do not control the client's hardware. I would check whether this 0.5% of errors comes from a single user, or from what only appears to be multiple users because hardware issues keep resetting their unique IDs (check other data like the phone model, as this might also give you clues).
You are most likely trying to access a file you don't have permissions to access. There are certain files that even Administrator cannot access.
You could use a try/catch block to handle the exception.
See this question.
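Applied to the DirSize loop above, that might look like this (a sketch; skipping an inaccessible file is one policy, not the only one):
long size = 0;
foreach (FileInfo fileInfo in d.GetFiles())
{
    try
    {
        size += fileInfo.Length;
    }
    catch (UnauthorizedAccessException)
    {
        // The file is locked or inaccessible right now; skip it
        // (or log/retry, depending on what DirSize must guarantee).
    }
}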
If you read Microsoft's documentation carefully, it clearly states that:
an I/O Error is thrown in case the Refresh fails
The FileInfo.Length Property is pre-cached only in a very precise list of cases (GetDirectories, GetFiles, GetFileSystemInfos, EnumerateDirectories, EnumerateFiles, EnumerateFileSystemInfos). The cached info should be refreshed by calling the Refresh() method.
Combining #1 and #2, you can easily identify the problem: while you try to get that information, you have the file open with an exclusive lock, which gives you the error in #1. I would suggest approaching this with two different pieces of logic: one is the obvious try/catch block, but because that block (a) costs performance and (b) doesn't solve the logical problem of knowing the file size, you should also cache the size yourself when you acquire the exclusive lock.
Put those in a static table in memory, a simple key/value store (file/size), and check it before calling FileInfo.Length. Basically, when you acquire the lock you add the file/size entry to the dictionary, and when you are done you remove it. This way you will never get the error again, while still being able to compute the directory size.
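A minimal sketch of that idea (the class and member names are mine, purely illustrative):
using System.Collections.Concurrent;
using System.IO;

// Illustrative sketch: remember the sizes of files we hold open exclusively.
static class KnownFileSizes
{
    static readonly ConcurrentDictionary<string, long> sizes =
        new ConcurrentDictionary<string, long>();

    public static void OnLockAcquired(string path, long size) => sizes[path] = size;

    public static void OnLockReleased(string path) => sizes.TryRemove(path, out _);

    public static long GetLength(string path) =>
        sizes.TryGetValue(path, out long cached)
            ? cached                         // we hold the lock; use the cached size
            : new FileInfo(path).Length;     // not locked by us; safe to query
}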
~Pino
I just saw this question: Is it safe to use static methods on File class in C#?. To summarize, the OP gets an IOException because a file is in use in this ASP.NET code snippet:
var text= File.ReadAllText("path-to-file.txt");
// Do something with text
File.WriteAllText("path-to-file.txt", text);
My first thought was that it's a simple concurrent access issue caused by multiple overlapping ASP.NET requests - something I'd solve by centralizing I/O into a synchronized thread-safe class (or by dropping files in favor of something else). I read both answers, and when I was about to downvote one of them I saw who those users were, thought what the h*, and stopped.
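For reference, this is roughly what I mean by centralizing I/O (a sketch with names of my own; it only serializes access within a single process):
using System.IO;

// Sketch: a single lock serializes all reads and writes of the file.
static class SharedTextFile
{
    static readonly object gate = new object();

    public static string Read(string path)
    {
        lock (gate) return File.ReadAllText(path);
    }

    public static void Write(string path, string text)
    {
        lock (gate) File.WriteAllText(path, text);
    }
}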
I'll cite them both (then please refer to original answers for more context).
For this OP paragraph:
I am guessing that the file read operation sometimes is not closing the file before the write operation happens [...]
An answer says:
Correct. File systems do not support atomic updates well [...] Using FileStream does not help [...] File has no magic inside. It just uses FileStream wrapped for your convenience.
However, I don't see any expectation of an atomic operation (read + subsequent write) there, and parallelism (because of partially overlapping multi-threaded requests) may cause concurrent accesses. Even an atomic I/O operation (read + write) would have exactly the same issue. OK, FileStream may be asynchronous, but that's not how File.ReadAllText() and File.WriteAllText() use it.
The other answer made me much more perplexed; it says:
Although according to the documentation the file handle is guaranteed to be closed by this method, even if exceptions are raised, the timing of the closing is not guaranteed to happen before the method returns: the closing could be done asynchronously.
What? MSDN says the method will open, read, and close the file (also in case of exceptions). Is it even possible that such a method closes the file asynchronously? Will the OS defer CloseHandle()? In which cases? Why?
In short: is it just a misunderstanding, or is CloseHandle() asynchronous? Am I missing something extremely important?
If you look at the CloseHandle documentation, it states that each method which opens a handle has a description of how it should be closed:
The documentation for the functions that create these objects
indicates that CloseHandle should be used when you are finished with
the object, and what happens to pending operations on the object after
the handle is closed. In general, CloseHandle invalidates the
specified object handle, decrements the object's handle count, and
performs object retention checks. After the last handle to an object
is closed, the object is removed from the system.
When you look at the CreateFile docs, this is what it says:
When an application is finished using the object handle returned by
CreateFile, use the CloseHandle function to close the handle. This not
only frees up system resources, but can have wider influence on things
like sharing the file or device and committing data to disk.
I would find it peculiar for CloseHandle to report that the underlying handle is closed while asynchronously retaining the file for additional checks. This would weaken many guarantees the OS makes to callers, and would be a source of many bugs.
The first two quotes in your question are not supposed to be related. When File.* is done, or when you close a FileStream, the file is unlocked immediately. There never is any kind of "lingering". If there was you could never safely access the same file again without rebooting.
My answer assumes that the code in the question is being run multiple times in parallel. If not, that code is clearly safe.
However I don't see any expectancy for an atomic operation ... Even an atomic I/O operation (read + write) will have exactly same issue.
That's true. I don't know why I made a statement about that in my answer (it's correct, though, just not relevant).
the timing of the closing is not guaranteed to happen before the method returns: the closing could be done asynchronously.
I don't know why he said that because it's not correct under any circumstances that I can think of. Closing a handle has an immediate effect.
I think your understanding of the situation is completely accurate. Apparently, our answers were unclear and slightly misleading... Sorry about that.
I wish there was a File.ExistsAsync()
I have:
bool exists = await Task.Run(() => File.Exists(fileName));
Using a thread for this feels like an antipattern.
Is there a cleaner way?
There is no cleaner way than your solution.
The problems of race conditions aside, I believe your solution can be used in some situations.
e.g.
I have static file content in many different folders (in my case cshtml views, script files, and css files for MVC).
These files (which do not change much during application execution) are checked for on every request to the webserver, and due to my application architecture there are a lot more places files are checked for than in the default MVC application. So much so that File.Exists takes up quite a portion of the time of each request.
So race conditions will generally not happen; the only interesting question for me is performance.
starting a task with Task.Factory.StartNew() takes 0.002 ms (source: Why so much difference in performance between Thread and Task?)
calling File.Exists takes "0.006255ms when the file exists and 0.010925ms when the file does not exist." [Richard Harrison]
so by simple math calling the async File.Exists takes 0.008 ms up to 0.012 ms
in the best case async File.Exists takes 1.2 times as long as File.Exists and in the worst case it takes 1.3 times as long. (in my case most paths that are searched do not exist) so most of the time a File.Exists is mostly close to 0.01 ms
so it is not that much overhead, and you can utilize multiple cores/hard disk controllers etc. more efficiently. With these calculations you can see that by asynchronously checking for the existence of 2 files you already get a performance increase of 1.6 in the worst case (0.02 / 0.012).
Well, I'm just saying that async File.Exists is worth it in specific situations.
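For instance, checking several candidate paths in parallel could look like this (a sketch; I use Task.Run here, same idea as Task.Factory.StartNew):
using System.IO;
using System.Linq;
using System.Threading.Tasks;

// Sketch: check many candidate paths concurrently instead of one by one.
static Task<bool[]> ExistAsync(string[] paths)
{
    return Task.WhenAll(paths.Select(p => Task.Run(() => File.Exists(p))));
}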
caveats of my post:
I might not have calculated everything correctly
I rounded a lot
I did not measure performance on a single PC
I took the performance figures from other posts
I just added the times of File.Exists and Task.Factory.StartNew() (this may be wrong)
I disregard a lot of side effects of multithreading
Long time since this thread, but I found it today...
ExistsAsync should definitely be a thing. In fact, in UWP you have to use async methods to find out whether a file exists, as it could take longer than 50 ms (anything that 'could' take longer than 50 ms should be async, in UWP terms).
However, this is not UWP. The reason I need it is to check whether a folder exists, which on a network share, remote disk, or idle disk would block the UI. So I can put up messages like "checking..." but the UI wouldn't update without async (or a ViewModel, or timers, etc.)
bool exists = await Task.Run(() => File.Exists(fileName)); works perfectly. In my code I have both (Exists and ExistsAsync), so that I can call Exists() when already running on a non-UI thread and not have to worry about the overhead.
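Roughly like this (a sketch of my own helper, not a framework API):
using System.IO;
using System.Threading.Tasks;

static class FileUtil
{
    // On the UI thread: don't block on a network share or spun-down disk.
    public static Task<bool> ExistsAsync(string path) =>
        Task.Run(() => File.Exists(path));

    // Already on a worker thread: skip the Task overhead.
    public static bool Exists(string path) => File.Exists(path);
}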
There isn't a File.ExistsAsync, probably for good reason: it makes little sense to have one, because File.Exists is not going to take very long; I measured it at 0.006255 ms when the file exists and 0.010925 ms when the file does not exist.
There are a few times when it is sensible to call File.Exists; however, usually I think the correct solution is to open the file (thus preventing deletion) and catch any exceptions, as there is no guarantee that the file will continue to exist after the call to File.Exists.
When you want to create a new file and not overwrite an old one:
File.Open("fn", FileMode.CreateNew)
For most of the use cases I can think of, File.Open() (whether for an existing file or to create a new one) is going to be better, because once the call succeeds you have a handle to the file and can do something with it. Even when using the file's existence as a flag, I think I'd still open and close it. The only time I've really used File.Exists is to check whether a local HTML file is there before calling the browser, so I can show a nice error message when it isn't.
There is no guarantee that something else won't delete the file after File.Exists, so if you open it after checking, the open call can still fail.
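So rather than checking first, something like this sketch avoids the race entirely:
using System.IO;

// Sketch: opening is the only reliable existence test, because the answer
// can change between a call to File.Exists and a subsequent Open.
static FileStream TryOpenRead(string path)
{
    try
    {
        return File.OpenRead(path);
    }
    catch (FileNotFoundException)
    {
        return null; // show the nice error message here instead
    }
    catch (DirectoryNotFoundException)
    {
        return null;
    }
}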
In my tests on a network drive, File.Exists takes longer than File.Open (File.Exists takes 1.5967 ms, whereas File.OpenRead takes 0.3927 ms).
Maybe if you could expand upon why you're doing this, we'd be better able to answer; until then I'd say that you shouldn't do this.
Hi,
My question has to do with a very basic understanding of writing data using a StreamWriter.
If you consider the following code:
StreamWriter writer = new StreamWriter(@"C:\TEST.XML");
writer.WriteLine("somestring");
writer.Flush();
writer.Close();
When the writer object is initialized with the filename, all it has is a pointer to the file.
However, when we write any string to the writer object, does it actually LOAD the whole file, read its contents, append the string at the end, and then close the handle?
I hope it's not a silly question.
I ask this because I came across an application that writes to a file frequently, probably every half a second, and the file size increased to about 1 GB while it still continued to write to the file (logging).
Do you think this could have resulted in CPU usage of 100%?
Please let me know if my question is unclear.
Thanks in advance.
does it actually LOAD the whole file, read its contents
After the framework opens the file, it will perform a FileStream.Seek operation to position the file pointer to the end of the file. This is supported by the operating system, and does not require reading or writing any file data.
and then close the handle
The handle is closed when you call Close or Dispose. Both are equivalent. (Note for convenience that you can take advantage of the C# using statement to create a scope where the call to Dispose is handled by the compiler on exiting the scope.)
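For example, the snippet from the question could be written as follows (equivalent behavior; Dispose flushes and closes for you):
using (StreamWriter writer = new StreamWriter(@"C:\TEST.XML"))
{
    writer.WriteLine("somestring");
} // the handle is flushed and closed here, even if an exception is thrown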
every half a second to a file
That doesn't sound frequent enough to load the machine at 100%. Especially since disk I/O mainly consists of waiting on the disk, and this kind of wait does not contribute to CPU usage. Use a profiler to see where your application is spending its time. Alternatively, a simple technique that you might try is to run under the debugger, click pause, and examine the call stacks of your threads. There is a good chance that a method that is consuming a lot of time will be on a stack when you randomly pause the application.
The code you provided above will overwrite the content of the file, so it has no need to load the file upfront.
Nonetheless, you can append to a file by saying:
StreamWriter writer = new StreamWriter(@"C:\TEST.XML", true);
The second parameter, true, tells it to append to the file.
And still, it does not load the entire file in memory before it appends to it.
That's why it's called a "stream": you only move in one direction.
Is there a way to bypass or remove the file lock held by another thread without killing the thread?
I am using a third-party library in my app that performs read-only operations on a file. I need a second thread to read the file at the same time, to extract some extra data the third-party library does not expose. Unfortunately, the third-party library opened the file with a read/write lock, and hence I am getting the usual "The process cannot access the file ... because it is being used by another process" exception.
I would like to avoid pre-loading the entire file on my thread, because the file is large and that would cause unwanted delays in its loading and excess memory usage. Copying the file is not practical due to the size of the files. During normal operation, two threads hitting the same file would not cause any significant I/O contention or performance problems. I don't need perfect time-synchronization between the two threads, but they need to be reading the same data within half a second of each other.
I cannot change the third-party library.
Are there any work-arounds to this problem?
If you start messing with the underlying file handle you may be able to unlock portions; the trouble is that the thread accessing the file is not designed to handle this kind of tampering and may end up crashing.
My strong recommendation would be to patch the third-party library; anything else you do can, and probably will, blow up in real-world conditions.
In short, you cannot do anything about the locking of the file by a third party. You may get away with Richard E's answer above, which mentions the utility Unlocker.
Once the third party opens a file and sets a lock on it, the underlying system gives that third party the lock to ensure no other process can access the file. There are two trains of thought on this.
Using DLL injection to patch the code to explicitly set or unset the lock. This can be dangerous, as you would be messing with another process's stability and could end up crashing it and causing grief. Think about it: the underlying system keeps track of files opened by a process, so you would have to inject a DLL and patch up the code - this requires the technical knowledge to determine which process to inject into at run time, and to alter the flags upon interception of the Win32 API call OpenFile(...).
Since this was tagged as .NET: why not disassemble the third-party library into .il files, alter the flag for the lock to shared, and rebuild the library by recompiling all the .il files back into a DLL? This would, of course, require rooting around in the code to find the class where the file is opened; a sketch of the flag in question follows.
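In C# terms, what you would be hunting for in the disassembled code is the share mode of the open call, something along these lines (illustrative only; "path" is a placeholder):
using System.IO;

// The locked behavior you observe corresponds to an open like this:
var locked = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None);

// The patch would change the share flag so a second reader is allowed:
var shared = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);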
Have a look at the podcast here. And have a look here, which explains how to do the second option highlighted above.
Hope this helps,
Best regards,
Tom.
This doesn't address your situation directly, but a tool like Unlocker achieves what you're trying to do, via a Windows UI.
Any low-level hack to do this may result in a crashed thread, file corruption, etc.
Hence I thought I'd mention the next best thing: just wait your turn and poll until the file is not locked: https://stackoverflow.com/a/11060322/495455
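A polling sketch along the lines of that answer (the timings are arbitrary):
using System.IO;
using System.Threading;

// Sketch: keep trying to open the file for reading until the lock is gone,
// giving up after a while instead of spinning forever.
static FileStream WaitForUnlock(string path, int maxAttempts = 100)
{
    for (int i = 0; i < maxAttempts; i++)
    {
        try
        {
            return new FileStream(path, FileMode.Open,
                                  FileAccess.Read, FileShare.ReadWrite);
        }
        catch (IOException)
        {
            Thread.Sleep(100); // still locked; poll again shortly
        }
    }
    return null; // gave up
}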
I don't think this second piece of advice will help, but the closest thing (that I know of) would be DemandReadFileIO:
IntSecurity.DemandReadFileIO(filename);

internal static void DemandReadFileIO(string fileName)
{
    string full = fileName;
    full = UnsafeGetFullPath(fileName);
    new FileIOPermission(FileIOPermissionAccess.Read, full).Demand();
}
I do think this is a problem that can be solved with C++. It is annoying, but at least it works (as discussed here: win32 C/C++ read data from a "locked" file).
The steps are:
Open the file before the third-party library does, with _fsopen and the _SH_DENYNO share flag
Open the file with the third-library
Read the file within your code
You may be interested in these links as well:
Calling c++ from c# (Possible to call C++ code from C#?)
The inner link from this post with a sample (http://blogs.msdn.com/b/borisj/archive/2006/09/28/769708.aspx)
Have you tried making a dummy copy of the file before your third-party library gets hold of it, then using the actual copy for your manipulations? Logically this would only be considered if the file in question is fairly small, but it is a kind of a cheat :) Good luck.
If the file is locked but isn't actively being used, then you have a problem with the way your file locking/unlocking mechanism works. You should only lock a file while you are modifying it, and should then immediately unlock it to avoid situations like this.