I'm using a FileSystemWatcher to watch a directory. I created a _Created() event handler to fire when a file is moved to this folder. My problem is the following:
The files in this directory get created when the user hits a "real life button" (a button in our stock, not in the application). The FileSystemWatcher takes this file, does some stuff in the system and then deletes it. That wouldn't be a problem if the application ran only once, but it is used by 6 clients, so every application on every client tries to delete the file. If one client is too slow, it throws an exception because the file has already been deleted.
What I'm asking for is: Is there a way to avoid this?
I tried using a loop that checks whether the file still exists, but without any success.
while (File.Exists(file))
{
    File.Delete(file);
    Thread.Sleep(100);
}
Can someone give me a hint as to how this could work?
Design
If you want a file to be processed by a single instance only (for example, the first instance that reacts gets the job), then you should implement a locking mechanism. Only the instance that is able to obtain a lock on the file is allowed to process and remove it, all other instances should skip the file.
If you're fine with all instances processing the file, and only care that at least one of them succeeds, then you need to figure out which exceptions indicate a genuine failure and which ones indicate a failure caused by the actions of another instance.
Locking
To 'lock' a file, you can open it with share-mode FileShare.None. This prevents other processes from opening it until you close the file. However, you'll then need to close the file before you can delete it, which leaves a small gap during which another instance could open the file.
A better solution is to create a separate lock file for that purpose. Create it with file-mode FileMode.Create and share-mode FileShare.None and keep it open until the whole process is finished, including the removal of the processed file. Then the lock file can be closed and optionally removed.
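For illustration, a minimal sketch of that lock-file idea, assuming path is the full path reported by the FileSystemWatcher and ProcessFile is a placeholder for your own processing:

string lockPath = path + ".lock";
try
{
    // FileShare.None means only the first instance to get here can hold the lock
    // file open; every other instance gets an IOException and skips the job.
    using (var lockFile = new FileStream(lockPath, FileMode.Create,
                                         FileAccess.ReadWrite, FileShare.None))
    {
        if (File.Exists(path))       // someone else may already have processed it
        {
            ProcessFile(path);       // placeholder for the real work
            File.Delete(path);       // remove the processed file while we still hold the lock
        }
    }
    File.Delete(lockPath);           // optional clean-up once the lock file is closed
}
catch (IOException)
{
    // Another instance owns the lock file; skip this file.
}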
Exception
As for the UnauthorizedAccessException you got, according to the documentation, that means one of 4 things:
1. You don't have the required permission
2. The file is an executable file that is in use
3. The path is a directory
4. The file is read-only
1 and 4 seem most likely in this case (if the file was open in another process you'd get an IOException).
If you want to synchronize access between multiple clients on the same computer you should use a Named Mutex.
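A short sketch of that, assuming the clients run on the same machine and agree on a mutex name derived from the file name (the naming scheme here is made up):

string mutexName = @"Global\StockWatcher_" + Path.GetFileName(path);   // hypothetical name
using (var mutex = new Mutex(false, mutexName))
{
    // Try to acquire without blocking; whoever wins handles the file.
    if (mutex.WaitOne(TimeSpan.Zero))
    {
        try
        {
            if (File.Exists(path))
            {
                ProcessFile(path);   // placeholder for the real work
                File.Delete(path);
            }
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
    // else: another client on this machine already owns the job; skip it.
}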
Related
I am implementing an event handler that must open and process the content of a file created by a third-party application over which I have no control. A note in "C# 4.0 in a Nutshell" (page 495) warns about the risk of opening a file before it is fully populated, so I am wondering how to handle this case. To keep the load on the event handler to a minimum, I am considering having the handler simply insert the file names into a queue and having a different thread manage the processing. But either way, how can I make sure that the write is completed and the file is safe to read? The file size could be arbitrary.
Any ideas? Thanks
A reliable way to achieve what you want might be to use FileSystemWatcher + NTFS USN journal.
Maybe more complicated than you expected, but FileSystemWatcher alone won't tell you for sure that the newly created file has been closed.
- First, use the FileSystemWatcher to know when a file is created. From there you have the complete file path, and you are 1 or 2 P/Invokes away from getting the file's unique ID (which can help you track it during its whole lifetime).
- Then, read the USN journal, which tracks everything that occurs on your drive. Filter on entries corresponding to your new file's ID, and read the journal until you reach the entry with the 'Close' event.
From there, unless your file is manipulated in special ways (opened and closed multiple times by the application that generates it), you can assume it is safe to read it and do whatever you wanted to do with it.
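For reference, a minimal sketch of the "file unique ID" part using GetFileInformationByHandle; the reference number returned here is what USN journal records use to identify the file. Reading the journal itself needs additional DeviceIoControl calls that are not shown; see the link below for a full implementation. This assumes the file can at least be opened with read access and a permissive share mode:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class FileId
{
    [StructLayout(LayoutKind.Sequential)]
    struct BY_HANDLE_FILE_INFORMATION
    {
        public uint FileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
        public uint VolumeSerialNumber;
        public uint FileSizeHigh;
        public uint FileSizeLow;
        public uint NumberOfLinks;
        public uint FileIndexHigh;
        public uint FileIndexLow;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetFileInformationByHandle(
        SafeFileHandle hFile, out BY_HANDLE_FILE_INFORMATION info);

    // Returns the NTFS file reference number used by USN journal entries.
    public static ulong GetFileReferenceNumber(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        {
            BY_HANDLE_FILE_INFORMATION info;
            if (!GetFileInformationByHandle(fs.SafeFileHandle, out info))
                throw new IOException("GetFileInformationByHandle failed",
                                      Marshal.GetLastWin32Error());
            return ((ulong)info.FileIndexHigh << 32) | info.FileIndexLow;
        }
    }
}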
A really great C# implementation of a USN journal parser is StCroixSkipper's work, available here:
http://mftscanner.codeplex.com/
If you are interested I can give you more help about USN journal, as I use it in my project.
Our workaround is to watch for a specific extension. When a file is uploaded, the extension is ".tmp". When it's done uploading, it's renamed to have the proper extension.
Another alternative is to have the server try to move the file in a try/catch block. If the file isn't done being uploaded, the attempt to move it will throw an exception, so we wait and try again.
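A rough sketch of that second alternative; the paths and retry parameters are assumptions:

// Keep trying to move the file until the uploader has released it.
bool moved = false;
for (int attempt = 0; attempt < 30 && !moved; attempt++)
{
    try
    {
        File.Move(sourcePath, destinationPath);   // throws while the upload still holds the file
        moved = true;
    }
    catch (IOException)
    {
        Thread.Sleep(1000);   // still being written; wait and try again
    }
}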
Realistically, you can't know. If the other application's "write" operation is to open the file (denying write access to everyone else), write, and close the file when it's done, then when you get a notification you could simply open the file requesting write access; if that fails, you know the operation isn't complete. But if the "write" operation is to open the file, write, close the file, open the file again, write again, etc., then you're pretty much out of luck.
The best solution I've seen is to set a timer after the last notification. When the timer elapses, try to open the file for write--if you can, assume the "operation" is done and do what you need to do. If the open fails, assume the operation is still in progress and wait some more.
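A rough sketch of that timer approach, assuming a single file of interest at path and an already configured FileSystemWatcher called watcher (both placeholders):

var settleTimer = new System.Timers.Timer(2000) { AutoReset = false };

// Restart the timer on every notification; it only elapses after things go quiet.
FileSystemEventHandler restart = (s, e) => { settleTimer.Stop(); settleTimer.Start(); };
watcher.Created += restart;
watcher.Changed += restart;

settleTimer.Elapsed += (s, e) =>
{
    try
    {
        // If we can open the file for writing, assume the other process is done.
        using (File.Open(path, FileMode.Open, FileAccess.ReadWrite, FileShare.None)) { }
        ProcessFile(path);      // placeholder for the real work
    }
    catch (IOException)
    {
        settleTimer.Start();    // still in use; wait for another quiet period
    }
};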
Of course, nothing is foolproof. Despite the above, another operation could start while you're doing what you want with the file and cause interaction problems.
I need to read a text based log file to check for certain contents (the completion of a backup job). Obviously, the file is written to when the job completes.
My question is: how can I (or how SHOULD I write the code to) read the file, taking into account that the file may be locked, or locked by my process when it needs to be read, without causing any reliability concerns?
Assuming the writing process has at least specified System.IO.FileShare.Read when opening the file, you should be able to read the text file while it is still being written to.
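For example, a sketch of such a read; the reader passes FileShare.ReadWrite so it doesn't conflict with the writer's open handle (logPath is a placeholder):

using (var stream = new FileStream(logPath, FileMode.Open,
                                   FileAccess.Read, FileShare.ReadWrite))
using (var reader = new StreamReader(stream))
{
    string contents = reader.ReadToEnd();
    // look for the completion marker in 'contents' here
}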
In addition to the answer by @BrokenGlass:
Only open the file for reading. If you try to open it for Read/Write access, it's more likely (almost certain) to fail - you may not be able to open it, and/or you may stop the other process from being able to write to it.
Close the file when you aren't reading it to minimise the chance that you might cause problems for any other processes.
If the writing process denies read access while it is writing to the file, you may have to write some form of "retry loop", which allows your application to wait (keep retrying) until the file becomes readable. Just try to open the file (and catch errors) - if it fails, Sleep() for a bit and then try again. (However, if you're monitoring a log file, you will probably want to keep checking it for more data anyway.)
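Such a retry loop could look roughly like this; the sleep value is arbitrary, and in real code you'd probably add a retry limit:

FileStream stream = null;
while (stream == null)
{
    try
    {
        stream = new FileStream(logPath, FileMode.Open,
                                FileAccess.Read, FileShare.ReadWrite);
    }
    catch (IOException)
    {
        Thread.Sleep(500);   // the writer is denying access; wait and retry
    }
}
using (var reader = new StreamReader(stream))
{
    // read and check the log contents here
}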
When a file is being written to, it is locked for all other processes that try to open the file in Write-mode. Read-mode will always be available.
However, if your writing process saves changes while you have already opened the file in your reading process, the changes will not be reflected there until you refresh (Close-Open) the file again.
I have an application that is modifying 5 identical xml files, each located on a different network share. I am aware that this is needlessly redundant, but "it must be so."
Every time this application runs, exactly one element (no more, no less) will be added/removed/modified.
Initially, the application opens each xml file, adds/removes/modifies the element to the appropriate node and saves the file, or throws an error if it cannot (Unable to access the network share, timeout, etc...)
How do I make this atomic?
My initial assumption was to:
bool isAtomic = true;

foreach (var path in NetworkPaths)
{
    if (!File.Exists(path))
        isAtomic = false;
}

if (isAtomic)
{
    //Do things
}
But I can see that only going so far. Is there another way to do this, or a direction I can be pointed to?
Unfortunately, making this truly "atomic" isn't really possible. My best advice would be to wrap up your own form of transaction for this, so you can at least undo the changes.
I'd do something like check for each file - if one doesn't exist, throw.
Back up each file: save the state needed to undo, or keep a copy in memory if they're not huge. If you can't, throw.
Make your edits, then save the files. If you get a failure here, try to restore from each of the backups. You'll need to do some error handling here so you don't throw until all of the backups were restored. After restoring, throw your exception.
At least this way, you're less likely to end up having changed only a single file. Hopefully, if you can modify one file, you'll be able to restore it from your backup/undo your modification.
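In code, that could look something like the following sketch, where UpdateXml stands in for whatever add/remove/modify you perform:

var backups = new Dictionary<string, string>();

// Check everything is reachable before touching anything.
foreach (var path in NetworkPaths)
    if (!File.Exists(path))
        throw new FileNotFoundException("Copy is missing or unreachable", path);

try
{
    // Take a backup of every copy first.
    foreach (var path in NetworkPaths)
    {
        string backup = path + ".bak";
        File.Copy(path, backup, true);
        backups[path] = backup;
    }

    // Apply the same edit to each copy.
    foreach (var path in NetworkPaths)
        UpdateXml(path);              // hypothetical edit routine
}
catch
{
    // Best-effort rollback: restore every backup we managed to take.
    foreach (var pair in backups)
    {
        try { File.Copy(pair.Value, pair.Key, true); }
        catch { /* log and keep restoring the remaining copies */ }
    }
    throw;
}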
I suggest the following solution.
1. Try opening all files with a write lock.
2. If one or more fail, abort.
3. Modify and flush all files.
4. If one or more fail, roll the already modified ones back and flush them again.
5. Close all files.
If the rollback fails... well... try again, and try again, and try again... and then give up in an inconsistent state.
If you have control over all processes writing these files, you could implement a simple locking mechanism using a lock file. You could even perform write-ahead logging and record the planned change in the lock file. If your process crashes, the next one attempting to modify the files would detect the incomplete operation and could continue it before doing its own modification.
I would introduce versioning of the files. You can do this easily by appending a suffix to the filename, e.g. a counter. The process for the writer is as follows:
- prepare the next version of the file
- write it to a temp file with a different name
- get the highest existing version number
- increment this version by one
- rename the temp file to the new file name
- delete old files (you can keep e.g. 2 of them)
As the reader, you:
- find the file with the highest version
- read it
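A rough sketch of that scheme, assuming file names like data.1.xml, data.2.xml and so on (the base name and helper methods here are made up):

static int VersionOf(string file)
{
    var parts = Path.GetFileName(file).Split('.');   // "data.3.xml" -> 3
    return int.Parse(parts[parts.Length - 2]);
}

// Writer: prepare the next version in a temp file, then publish it by renaming.
static void WriteNextVersion(string dir, string content)
{
    string temp = Path.Combine(dir, "data.tmp");
    File.WriteAllText(temp, content);

    var versions = Directory.GetFiles(dir, "data.*.xml").Select(VersionOf).ToList();
    int next = (versions.Count > 0 ? versions.Max() : 0) + 1;

    File.Move(temp, Path.Combine(dir, "data." + next + ".xml"));
    // delete versions older than the two most recent ones here
}

// Reader: always pick the file with the highest version number.
static string ReadLatest(string dir)
{
    string latest = Directory.GetFiles(dir, "data.*.xml")
                             .OrderByDescending(VersionOf)
                             .First();
    return File.ReadAllText(latest);
}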
I have a requirement to move certain files after they have been processed. Another process accesses the files and I am not sure when it releases them. Is there any way I can find out when the handle to the file has been released, so I can move them at that time?
I am using Microsoft C# and .NET Framework 3.5.
Cheers,
Hamid
If you have control of both the producer of the file and the consumer, the old trick is to create the file under a different name and rename it once it is complete.
For example, say the producer always creates files called file_.txt, and your consumer scans for all files beginning with file_. The producer can then do this:
1. Create the file called tmpfile_.txt
2. When the file is written, the producer simply renames the file to file_.txt
The rename operation is atomic, so once your consumer sees it's available, it is safe to open it.
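A small sketch of the producer side under that naming convention (dir, content and the exact file names are placeholders):

// Write under a temporary name first...
string tmpPath = Path.Combine(dir, "tmpfile_example.txt");
string finalPath = Path.Combine(dir, "file_example.txt");

using (var writer = new StreamWriter(tmpPath))
{
    writer.Write(content);
}

// ...then publish it in one step. A consumer scanning for "file_*"
// never sees a half-written file.
File.Move(tmpPath, finalPath);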
Of course, this answer depends on if you are writing both programs.
HTH
Dermot.
Just continually try to open the file for exclusive writing? (e.g. pass FileShare.None to the FileStream constructor). Once you have opened it, you know no one else is using it. However, this might not be the best way to do what you're doing.
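For example, something along these lines (a sketch; the polling interval and destination path are assumptions). Note there is a small window between the probe open and the move during which the other process could reopen the file, so this is a heuristic rather than a guarantee:

while (true)
{
    try
    {
        // If an exclusive open succeeds, no other process has the file open.
        using (new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.None)) { }
        File.Move(path, destinationPath);
        break;
    }
    catch (IOException)
    {
        Thread.Sleep(500);   // still held by the other process; try again later
    }
}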
If you're after two way communication, see if the other program can be talked to via a pipe.
If you have control of both of the sources, use a named mutex (which works across processes) to control access to the files rather than locking the file at the filesystem level. This way, you don't have to catch the exception raised by attempting to lock a locked file and loop on that, which is rather inelegant.
Is there a built-in method for waiting for a file to be created in C#? How about waiting for a file to be completely written?
I've baked my own by repeatedly attempting File.OpenRead() on a file until it succeeds (and failing on a timeout), but spinning on a file doesn't seem like the right thing to do. I'm guessing there's a baked-in method in .NET to do this, but I can't find it.
What about using the FileSystemWatcher component?
This class 'watches' a given directory or file, and can raise events when something (you can define what) has happened.
When creating a file with File.Create, you can just call the Close method on the returned stream.
Like this:
File.Create(savePath).Close();
FileSystemWatcher can notify you when a file is created, deleted, updated, has its attributes changed, etc. It will solve your first issue of waiting for it to be created.
As for waiting for it to be written: when a file is created, you can spin off and start tracking its size and wait for it to stop being updated, then add in a settle time period. You can also try to get an exclusive lock, but be careful of locking the file if the other process is also trying to lock it... you could cause unexpected things to occur.
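A sketch of that size-tracking idea; the polling interval and settle time are arbitrary, and path is a placeholder:

long lastSize = -1;
while (true)
{
    long size = new FileInfo(path).Length;
    if (size == lastSize)
        break;               // no growth since the last check: assume the writer is done
    lastSize = size;
    Thread.Sleep(2000);      // settle period between checks
}
// the file is probably complete at this point; process it here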
FileSystemWatcher cannot monitor network paths. In such instances, you manually have to "crawl" the files in a directory, which can result in the above user's error.
Is there an alternative, so that we can be sure we don't open a file before it has been fully written to disk and closed?