Best way (best performance) to lock file creation - C#

I have a web application that returns images to my frontend.
In this application, when a request is made for a particular image, the application checks whether the image already exists on disk; if it does, the image is returned.
My problem starts when the image does not exist on disk: two requests can be made at the same time for the same missing image, and the problem occurs when two threads try to create the same file on disk at the same time.
To solve the problem, I tried creating a Mutex around the creation of the image on disk. But that had a problem: because the server load is enormous due to the large number of simultaneous requests, the server crashes.
I would like to ask for your ideas to solve this problem, or what you would do instead.
Thank you.

You could try the following pattern:
1. Try to read the image (if that succeeds, you're done).
2. Try to create the image with a write lock.
3. Only on a "file in use" exception, wait a small delay (milliseconds).
4. Go back to step 1 (retry).
Make the delay really small, just a tiny bit larger than the time it should take to create an image.
Implement a retry limit, a maximum of 3 attempts or so.
This lets you make use of the already existing (file) locking mechanism.
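A minimal sketch of that pattern (RenderImage is a hypothetical placeholder for however the application actually produces the image; the delay and retry counts are illustrative):
using System.IO;
using System.Threading;

static class ImageCache
{
    static byte[] RenderImage(string path)
    {
        // Hypothetical stand-in for however the real application generates the image.
        return new byte[0];
    }

    public static byte[] GetOrCreateImage(string path)
    {
        const int maxRetries = 3;
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                // Step 1: try to read the image; succeeds once any thread has created it.
                if (File.Exists(path))
                    return File.ReadAllBytes(path);

                // Step 2: try to create the image with an exclusive write lock.
                // FileMode.CreateNew throws if another request won the race.
                using (var fs = new FileStream(path, FileMode.CreateNew,
                                               FileAccess.Write, FileShare.None))
                {
                    byte[] image = RenderImage(path);
                    fs.Write(image, 0, image.Length);
                    return image;
                }
            }
            catch (IOException)
            {
                // Step 3: "file in use" (or just created by another thread):
                // tiny delay, then retry from step 1.
                Thread.Sleep(20);
            }
        }
        throw new IOException("Image not available after retries: " + path);
    }
}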

You can call the open function with the O_CREAT and O_EXCL flags. The first process's open call gets exclusive rights to create the file, and it starts downloading the image. The subsequent processes' open calls fail because the file already exists, and errno is set to EEXIST.
Based on your design, the subsequent processes can either wait for the file to be completely created or return immediately.
fd = open(path, O_CREAT | O_EXCL, 0644);
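For a C# application like the one in the question, the closest equivalent is FileMode.CreateNew, which also fails if the file already exists; a minimal sketch:
using System.IO;

static bool TryCreateExclusively(string path)
{
    try
    {
        // FileMode.CreateNew maps to O_CREAT|O_EXCL: exactly one caller wins the race.
        using (var fs = new FileStream(path, FileMode.CreateNew, FileAccess.Write))
        {
            // We created the file: download/write the image into fs here.
            return true;
        }
    }
    catch (IOException)
    {
        // File already exists: another process is creating it; wait for it or return.
        return false;
    }
}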

Related

Prevent Multiple users accessing the same folder C#

I am currently in the process of writing a small application in C# to process batches of images and put them into a PDF. Each batch of images is stored in its own folder on a network share. The application will enable users to perform QA checks on a random number of images from a single batch before creating a PDF. At most there will be 4-6 users running this application on individual desktops with access to the location where the image batches are stored.
The problem I'm running into at the moment is how to prevent two users from processing the same batch. Initially I thought about using FileSystemWatcher to check for last access to each folder, but after reading up on how FileSystemWatcher raises events it didn't seem suitable. I've considered polling the images in each folder for file access using a FileStream, but I don't think that will be suitable either (I may be wrong).
What would be the simplest solution?
I'd use a lock file with a package like SimpleFileLock.
Code is quite simple:
var fileLock = new SimpleFileLock("networkFolder/file.lock", TimeSpan.FromMinutes(timeout));
where timeout is used to unlock the folder if the process using it crashed (so that it doesn't stay locked forever).
Then, every time a process needs to use that directory, do a simple check:
if (fileLock.TryAcquireLock())
{
    //Lock acquired - do your work here
}
else
{
    //Failed to acquire lock - SpinWait or do something else
}
Code is taken from the samples on the repo, so that's the way the author suggests using his library.
I had the chance to use it and I found it both useful and reliable.

Best way to read and write time-critical data?

I have .txt files that are overwritten with data from software every 5-10 seconds, and a WPF application that reads and displays this data every second. Here are my issues:
Currently the text files are stored on a server and there are a bunch of users running this application to view this "live" data.
However, due to an I/O bug in Windows, the files "lock up" periodically and cause all of the applications to lock up (they can't even be closed in Task Manager).
Therefore I decided to have the data copied from the text files into SQL. However, from my understanding there's no way to overwrite the data in a SQL table: one must drop the table and create a new one. This causes a 10+ second delay in updating the data, which cannot happen.
My question is: there HAS to be a way to rapidly read and write data from somewhere, be it a database or something else. I am not sure where else to turn.
My constraints:
I'm stuck with Server 2008, I have to use these text files, and I have to display the data in my WPF application. Does anyone have suggestions for a method that can handle this type of I/O?
All help is greatly appreciated; I'm at a complete loss.
It seems like you may not have extensive experience with database technology, so let me propose something different:
string text = System.IO.File.ReadAllText(path);
Then perhaps you can take the text and do what you want with it, or dump it in a queue for action in another part of the application.
ReadAllText can throw a number of exceptions, documented here:
https://msdn.microsoft.com/en-us/library/ms143368(v=vs.110).aspx
I'd be on the lookout for UnauthorizedAccessException since, as you said, the file seems to lock up when multiple users are accessing it.
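Since the files are overwritten every 5-10 seconds, a read can still collide with a write. One possible refinement (attempt count and delay are illustrative) is a small retry wrapper:
using System;
using System.IO;
using System.Threading;

static string ReadAllTextWithRetry(string path)
{
    const int maxAttempts = 5;
    for (int attempt = 1; attempt < maxAttempts; attempt++)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException)
        {
            Thread.Sleep(100);  // writer probably mid-overwrite; pause briefly
        }
        catch (UnauthorizedAccessException)
        {
            Thread.Sleep(100);
        }
    }
    return File.ReadAllText(path);  // final attempt: let any exception surface
}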

How to tell a file is *completely* written

I am familiar with the FileSystemWatcher class and have tested using it; alternatively, I have tested using a fast loop that lists files of the relevant type in a directory. In this particular case they are zip-compressed SDF files that I need to decompress, open, and query.
The problem is that when a large file is put in a directory, that sometimes takes time, for example when it is being downloaded, or copied from a network location, etc.
When the FileSystemWatcher raises an OnChange event, I have a handle to the ChangeType; for these kinds of operations the Created event is immediate, while the file is still not completely copied to the location.
Likewise, using the loop, I see that a file is there before the whole file is there.
The FileSystemWatcher raises several change events: one after create, and then one or more during the copy, but nothing that says "this file is now complete".
So, if I am expecting files of a certain type to be placed in a directory, ultimately to be read and processed, with no knowledge of their transport mechanism and no knowledge of their final size...
How do I know when the file is ready to actually be processed, other than by using error handling as workflow control (albeit the error handling is there anyway, as it should be)? This just seems like a bad way to have to handle it: sometimes the error may represent a legitimate issue, sometimes it may just be that the file is not completely written, and I do not see any real safe way to differentiate.
I despise anticipated errors, but realize they have their place, as with sockets: nothing guarantees that a check for open does not change before an attempt to read/write. But I do avoid them at all costs.
This particular case troubles me mostly because of the ambiguity of the message that will be produced. There is a conflict queue for files that legitimately error because they did not come across entirely or are otherwise corrupt; I do not want otherwise good files going there, and getting granular enough to detect this specific case will be almost impossible.
edit:
I know I can do this... and I have read the other SO questions concerning others doing the same thing. (And I know this method is both crude and blocking; it is just an example.)
private static void OnChanged(object source, FileSystemEventArgs e)
{
    if (e.ChangeType == WatcherChangeTypes.Created)
    {
        bool ready = false;
        while (!ready)
        {
            try
            {
                using (FileStream fs = new FileStream(e.FullPath, FileMode.Open))
                {
                    Console.WriteLine(String.Format("{0} - {1}", e.FullPath, fs.Length));
                }
                ready = true;
            }
            catch (IOException)
            {
                ready = false;
            }
        }
    }
}
What I am trying to find out is: is this definitively the only way? Is there no other component, or some hook into the file system, that will actually do this with a proper event?
The only way to tell is to open the file yourself with FileShare.Read. That will always fail if the process is still writing to the file and hasn't closed it yet. There is otherwise no mechanism to know anything at all about which particular process is doing the writing; FSW operates at the file system device driver level and doesn't know anything about which process is performing the operation. It could be more than one.
That open will very often fail the first time you try; FSW is very efficient. In general you have no idea how much time the process will take; it depends on how the writer is implemented and it might leave the file open for a while. It could be hours or days; a log file would be an example.
So you need a retry mechanism, and it should have an exponential back-off algorithm that increases the retry delay between attempts. Start it off at, say, half a second and keep increasing the delay when the open fails. This needs to be done on a worker thread, not in the FSW callback. Use a thread-safe queue to pass the path of the file from the FSW callback to the worker thread; that is in general also a good strategy for dealing with the multiple FSW notifications you get.
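A sketch of that retry mechanism (the queue type, initial delay, and doubling factor are illustrative choices):
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class FileReadyWorker
{
    readonly BlockingCollection<string> queue = new BlockingCollection<string>();

    // Wire this to FileSystemWatcher.Created; it only enqueues, nothing risky.
    public void OnCreated(object sender, FileSystemEventArgs e)
    {
        queue.Add(e.FullPath);
    }

    // Run this on a dedicated worker thread, not in the FSW callback.
    public void Run()
    {
        foreach (string path in queue.GetConsumingEnumerable())
        {
            int delayMs = 500;  // start at half a second
            while (true)
            {
                try
                {
                    // Succeeds only once the writer has closed the file.
                    using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read)) { }
                    break;
                }
                catch (IOException)
                {
                    Thread.Sleep(delayMs);
                    delayMs *= 2;  // exponential back-off
                }
            }
            // The file is readable now: process it here.
        }
    }
}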
Watch out for startup effects: you of course missed any notifications before you started running, so there might be a load of files already waiting for work. And watch out for Heisenbugs: whatever you do with the file might cause another process to fall over, much like that process did to yours :)
Consider that a batch-style program that you periodically run with the task scheduler could be an easier alternative.
At one extreme, you could use a file system mini-filter driver which analyzes all activity for a file at the lowest level (and communicates with a user-mode application).
I wrote a proof-of-concept mini filter some time ago to detect MS Office file conversions; see below. This way, you can reliably check for every open handle to the file.
But even this would be no universal solution for your problem:
Consider:
A tool (e.g. an FTP file transfer) could in theory write part of the file, close it, and re-open it for appending new data. This may seem unusual, but it means you cannot reliably just check for "no more open file handles" ==> "file is ready now".
Alex K. provided a good link in his comment, and I myself would use a solution similar to the answer from Jon (https://stackoverflow.com/a/4278034/4547223).
If time is not critical (you can waste a few seconds for the decision):
Periodic timer (1 second seems reasonable)
Check file size in every timer tick
If the file size has not incremented for e.g. 10 seconds and there are no more FSWatcher change events either, try to open it. If you see that the size increments arrive unevenly or very slowly, you can adjust the "wait time" on the fly (a sketch follows below).
Your big advantage is that you are processing ZIP files only, where you have a chance of detecting invalid (incomplete) files via "checksum not valid".
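A sketch of the size-stability check (the 1-second tick and 10-tick threshold are the example values above):
using System.IO;
using System.Threading;

static void WaitUntilSizeStable(string path)
{
    long lastSize = -1;
    int stableTicks = 0;
    while (stableTicks < 10)                  // ~10 seconds without growth
    {
        Thread.Sleep(1000);                   // the periodic 1-second timer tick
        long size = new FileInfo(path).Length;
        stableTicks = (size == lastSize) ? stableTicks + 1 : 0;
        lastSize = size;
    }
    // Size is stable: try to open/unzip the file now. A ZIP checksum error
    // still catches the rare incomplete file that slips through.
}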
I do not expect official ways to detect this, since there is no universal notion of “file written completely”.
File System mini filter
This may be a sledgehammer solution for the problem.
Some time ago, I had the requirement of working around a weird bug in Office 2010, where it does not copy ADS metadata during Office file conversion (ADS is needed for File Classification). We discussed this with Microsoft engineers (MS was not willing to fix the bug), and they were on board with our filter driver solution (in the end it was dropped because the business preferred a manual workaround).
Nevertheless, if someone really wants to check whether this could be a possible solution:
I have written an explanation of the steps:
https://stackoverflow.com/a/29252665/4547223

Windows Phone Resource Intensive Task exiting

I'm writing a Windows Phone application, and it needs to download very large MP3 files and save them to isolated storage. I've got all the code for this working, and I tested it with smaller files. But now, using the actual files and monitoring what the code is doing via the debug output, I've realized that the threads are actually exiting halfway through downloads, and the files never actually finish downloading.
Is there a reason for this happening, and if so, what can I do to prevent this?
How long does it take to time out? If you are using HttpWebRequest to download the file, the default timeout is 100,000 ms (100 seconds). This can be changed as simply as inserting:
request.Timeout = 300000; // 5 minutes, in milliseconds
Obviously set your own timeout value (in milliseconds!) and attach it to your HttpWebRequest instance (request above is assumed to be yours) :)
If you're not using HttpWebRequest, let me know what you are using and I'll try my best to help you out :)
WP's internal memory and process management takes care of this. If you spawn a thread from your app which downloads a lot of data in the background, the OS will drop it when those resources (most likely memory) become needed by other processes.
You can do two things, depending on your approach to the download:
Periodically save buffered chunks to IsolatedStorage when the buffer reaches a certain size, thus limiting the thread's memory usage (a sketch follows below).
Implement the download thread as a BackgroundTask, which should allow "endless" execution.
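A sketch of the first option, streaming the response to isolated storage in fixed-size chunks (the file name and buffer size are illustrative):
using System.IO;
using System.IO.IsolatedStorage;
using System.Net;

static void SaveResponseToIsolatedStorage(HttpWebResponse response)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (Stream source = response.GetResponseStream())
    using (IsolatedStorageFileStream target = store.CreateFile("track.mp3"))
    {
        byte[] buffer = new byte[64 * 1024];  // 64 KB chunks keep memory usage flat
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            target.Write(buffer, 0, read);    // each chunk goes straight to disk
        }
    }
}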

FileSystemWatcher and write completion

I am implementing an event handler that must open and process the content of a file created by a third-party application over which I have no control. I am warned by a note in "C# 4.0 in a Nutshell" (page 495) about the risk of opening a file before it is fully populated, so I am wondering how to manage this case. To keep the load on the event handler at a minimum, I am considering having the handler simply insert the file names into a queue and then having a different thread manage the processing. But anyway, how can I make sure that the write is completed and the file is safe to read? The file size could be arbitrary.
Any ideas? Thanks
A reliable way to achieve what you want might be to use FileSystemWatcher + NTFS USN journal.
Maybe more complicated than you expected, but FileSystemWatcher alone won't tell you for sure that the newly created file has been closed.
First, use the FileSystemWatcher to know when a file is created. From there you have the complete file path and are 1 or 2 pinvokes away from getting the file's unique ID (which can help you track it during its whole lifetime); see the sketch below.
Then, read the USN journal, which tracks everything that occurs on your drive. Filter on entries corresponding to your new file's ID, and read the journal until reaching the entry with the 'Close' event.
From there, unless your file is manipulated in special ways (opened and closed multiple times by the application that generates it), you can assume it is safe to read it and do whatever you wanted to do with it.
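For the pinvoke step, a possible sketch using the standard Win32 GetFileInformationByHandle call to read the file's volume-unique index (this helper is illustrative, not from the original answer):
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class FileId
{
    [StructLayout(LayoutKind.Sequential)]
    struct BY_HANDLE_FILE_INFORMATION
    {
        public uint FileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
        public uint VolumeSerialNumber;
        public uint FileSizeHigh;
        public uint FileSizeLow;
        public uint NumberOfLinks;
        public uint FileIndexHigh;
        public uint FileIndexLow;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetFileInformationByHandle(
        SafeFileHandle hFile, out BY_HANDLE_FILE_INFORMATION lpFileInformation);

    // Returns the 64-bit file index that identifies the file on its volume.
    public static ulong GetIndex(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        {
            BY_HANDLE_FILE_INFORMATION info;
            if (!GetFileInformationByHandle(fs.SafeFileHandle, out info))
                throw new Win32Exception();
            return ((ulong)info.FileIndexHigh << 32) | info.FileIndexLow;
        }
    }
}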
A really great C# implementation of a USN journal parser is StCroixSkipper's work, available here:
http://mftscanner.codeplex.com/
If you are interested I can give you more help about USN journal, as I use it in my project.
Our workaround is to watch for a specific extension. While a file is being uploaded, its extension is ".tmp"; when it's done uploading, it is renamed to have the proper extension.
Another alternative is to have the server try to move the file in a try/catch block. If the file isn't done being uploaded, the attempt to move it will throw an exception, so we wait and try again.
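A sketch of that move-based check (paths and the helper name are illustrative):
using System.IO;

static bool TryClaim(string source, string destination)
{
    try
    {
        File.Move(source, destination);
        return true;   // move succeeded: the uploader has released the file
    }
    catch (IOException)
    {
        return false;  // still being uploaded: wait and try again later
    }
}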
Realistically, you can't know. If the other application's "write" operation is to open the file denying write access to everyone else, write, and close the file when it's done, then when you get a notification you can simply open the file requesting write access; if that fails, you know the operation isn't complete. But if the "write" operation is to open the file, write, close the file, open the file again, write again, and so on, then you're pretty much out of luck.
The best solution I've seen is to set a timer after the last notification. When the timer elapses, try to open the file for write: if you can, assume the "operation" is done and do what you need to do. If the open fails, assume the operation is still in progress and wait some more.
Of course, nothing is foolproof. Despite the above, another operation could start while you're doing what you want with the file and cause interaction problems.
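A sketch of that timer approach (the 2-second quiet period is an assumed value you would tune): every notification restarts the file's timer, and only when it finally fires do we probe with an exclusive open.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class QuietPeriodWatcher
{
    static readonly TimeSpan QuietPeriod = TimeSpan.FromSeconds(2);
    readonly ConcurrentDictionary<string, Timer> timers =
        new ConcurrentDictionary<string, Timer>();

    // Wire this to FileSystemWatcher.Created/Changed: each event restarts the countdown.
    public void OnNotification(object sender, FileSystemEventArgs e)
    {
        Timer timer = timers.GetOrAdd(e.FullPath,
            path => new Timer(_ => Probe(path), null, Timeout.Infinite, Timeout.Infinite));
        timer.Change(QuietPeriod, Timeout.InfiniteTimeSpan);
    }

    void Probe(string path)
    {
        try
        {
            // An exclusive open fails while any writer still holds the file.
            using (new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.None)) { }
            // Assume the operation is done: process the file here.
        }
        catch (IOException)
        {
            Timer timer;
            if (timers.TryGetValue(path, out timer))
                timer.Change(QuietPeriod, Timeout.InfiniteTimeSpan); // wait some more
        }
    }
}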
