I have a program that reads an XML file (for now, on the local computer) and loads the data into a list of structs.
How can I make it so that when I run it, it does the above but then keeps checking for any change to the file? Should the file change, it should read the file all over again.
Do I need to create a file watcher service as described here:
http://www.codeproject.com/KB/files/C__FileWatcher.aspx
You need FileSystemWatcher - the docs give examples.
Basically you create an instance, give it a filter (which would be your exact file in this case), hook up an event handler (probably the Changed event in your case) and then set EnableRaisingEvents to true.
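For example, a minimal sketch (the directory, file name, and LoadData routine are placeholders for whatever your program already uses):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            LoadData();   // your existing XML-to-structs routine (placeholder name)

            var watcher = new FileSystemWatcher(@"C:\data")   // folder containing the file (placeholder)
            {
                Filter = "data.xml",                          // your exact file (placeholder)
                NotifyFilter = NotifyFilters.LastWrite
            };
            watcher.Changed += (sender, e) => LoadData();     // re-read whenever it changes
            watcher.EnableRaisingEvents = true;               // start watching

            Console.ReadLine();                               // keep the process alive
        }

        static void LoadData() { /* parse the XML into your list of structs */ }
    }

Note that Changed often fires more than once for a single save, so you may want to debounce it (see the timer-based approaches discussed further down).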
You'll want to look at the System.IO.FileSystemWatcher class. You can have it raise an event in your code when the file is changed.
Details can be found on MSDN: http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx
Look at the FileSystemWatcher class. You can point it at your XML file, and when it changes, it will fire an event so you can then read the file again.
I am implementing an event handler that must open and process the content of a file created by a third-party application over which I have no control. A note in "C# 4.0 in a Nutshell" (page 495) warns about the risk of opening a file before it is fully populated, so I am wondering how to manage this case. To keep the load on the event handler to a minimum, I am considering having the handler simply insert the file names into a queue and having a different thread manage the processing. But in any case, how can I make sure that the write is completed and the file is safe to read? The file size could be arbitrary.
Any ideas? Thanks
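The queue-plus-worker split you describe could look roughly like this (a sketch only; BlockingCollection is one way to do it, and the directory and ProcessFile name are placeholders):

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    class Program
    {
        static readonly BlockingCollection<string> queue = new BlockingCollection<string>();

        static void Main()
        {
            // Worker thread: drains the queue so the event handler stays cheap.
            Task.Run(() =>
            {
                foreach (var path in queue.GetConsumingEnumerable())
                    ProcessFile(path);
            });

            var watcher = new FileSystemWatcher(@"C:\incoming");   // placeholder
            watcher.Created += (s, e) => queue.Add(e.FullPath);    // just enqueue and return
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }

        // Placeholder: wait until the file is safe to read (see the answers below), then parse it.
        static void ProcessFile(string path) { }
    }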
A reliable way to achieve what you want might be to use FileSystemWatcher + NTFS USN journal.
Maybe more complicated than you expected, but FileSystemWatcher alone won't tell you for sure that the newly created file has been closed.
First, use the FileSystemWatcher to know when a file is created. From there you have the complete file path, and you are one or two P/Invokes away from getting the file's unique ID (which can help you track it during its whole lifetime).
Then, read the USN journal, which tracks everything that occurs on your drive. Filter on entries corresponding to your new file's ID, and read the journal until you reach the entry with the 'Close' event.
From there, unless your file is manipulated in special ways (opened and closed multiple times by the application that generates it), you can assume it is safe to read it and do whatever you wanted to do with it.
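For the first step, getting the file's unique ID really is just one P/Invoke; something like this sketch (the helper name is mine, and error handling is minimal):

    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class FileId
    {
        [StructLayout(LayoutKind.Sequential)]
        struct BY_HANDLE_FILE_INFORMATION
        {
            public uint FileAttributes;
            public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
            public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
            public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
            public uint VolumeSerialNumber;
            public uint FileSizeHigh;
            public uint FileSizeLow;
            public uint NumberOfLinks;
            public uint FileIndexHigh;
            public uint FileIndexLow;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool GetFileInformationByHandle(
            SafeFileHandle hFile, out BY_HANDLE_FILE_INFORMATION info);

        // Returns the NTFS file reference number, the same ID that appears in USN records.
        public static ulong GetUniqueId(string path)
        {
            using (var stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                if (!GetFileInformationByHandle(stream.SafeFileHandle, out var info))
                    throw new IOException("GetFileInformationByHandle failed",
                                          Marshal.GetLastWin32Error());
                return ((ulong)info.FileIndexHigh << 32) | info.FileIndexLow;
            }
        }
    }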
A really great C# implementation of a USN journal parser is StCroixSkipper's work, available here:
http://mftscanner.codeplex.com/
If you are interested I can give you more help about USN journal, as I use it in my project.
Our workaround is to watch for a specific extension. While a file is being uploaded, the extension is ".tmp". When it's done uploading, it's renamed to have the proper extension.
Another alternative is to have the server try to move the file in a try/catch block. If the file isn't done being uploaded, the attempt to move it will throw an exception, so we wait and try again.
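The move-and-retry idea in code, roughly (the method name, delay, and attempt count are arbitrary):

    using System.IO;
    using System.Threading;

    static class UploadGuard
    {
        // Returns true once the file could be moved, i.e. the writer has closed it.
        public static bool TryClaimFile(string source, string destination, int maxAttempts = 10)
        {
            for (int attempt = 0; attempt < maxAttempts; attempt++)
            {
                try
                {
                    File.Move(source, destination);   // throws IOException while the upload is still open
                    return true;
                }
                catch (IOException)
                {
                    Thread.Sleep(1000);               // not done yet; wait and try again
                }
            }
            return false;                             // gave up; caller decides what to do
        }
    }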
Realistically, you can't know. If the other application's "write" operation is to open the file denying write access to everyone else, write, and close the file when it's done, then when you get a notification you can simply open the file requesting write access; if that fails, you know the operation isn't complete. But if the "write" operation is to open the file, write, close the file, open the file again, write again, and so on, then you're pretty much out of luck.
The best solution I've seen is to set a timer after the last notification. When the timer elapses, try to open the file for write: if you can, assume the "operation" is done and do what you need to do. If the open fails, assume the operation is still in progress and wait some more.
Of course, nothing is foolproof. Despite the above, another operation could start while you're doing what you want with the file and cause interaction problems.
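One way to sketch that timer approach (the two-second quiet period is arbitrary, and the directory and ProcessFile are placeholders):

    using System;
    using System.IO;
    using System.Timers;

    class Program
    {
        static readonly Timer timer = new Timer(2000) { AutoReset = false };
        static string pendingPath;

        static void Main()
        {
            timer.Elapsed += (s, e) =>
            {
                try
                {
                    // An exclusive open succeeds only once the writer has let go.
                    using (File.Open(pendingPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None)) { }
                    ProcessFile(pendingPath);
                }
                catch (IOException)
                {
                    timer.Start();   // still in use; wait for another quiet period
                }
            };

            var watcher = new FileSystemWatcher(@"C:\incoming");   // placeholder
            watcher.Changed += (s, e) =>
            {
                pendingPath = e.FullPath;
                timer.Stop();    // restart the quiet period on every notification
                timer.Start();
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }

        static void ProcessFile(string path) { /* usually safe to read here */ }
    }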
Is it possible to access a file before it's deleted when using FileSystemWatcher.OnDeleted event?
I'm storing some data about the document itself in its metadata, and I need that info before it's deleted.
Any ideas on how to accomplish this, with or without FileSystemWatcher, if it's even possible?
Update:
I realized that storing the data in the file itself is bad, as I cannot access it once the file is deleted.
Solution: rewrite my app to store the data in a local database (SQLite/XML or something like that). Since I get the full path and name when the file is created/renamed/updated/deleted, it is easy to update the database record for the file.
Thanks to all for the ideas and suggestions!
Is it possible to access a file before it's deleted when using FileSystemWatcher.OnDeleted event?
The event is triggered after the file deletion, not before, so you won't be able to access the file when this event is raised.
Any Ideas how to accomplish this if it's even possible ?
I would use the OnChanged event instead, which is fired every time the file changes. Basically, you read the file metadata every time the file changes. This can be a bit cumbersome if the file gets updated very often but should allow you to have the latest metadata before the file is removed.
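A sketch of that idea: cache the metadata on every Created/Changed event so it is still on hand when Deleted fires (ReadMetadata is a placeholder for however you extract your document's metadata):

    using System;
    using System.Collections.Concurrent;
    using System.IO;

    class Program
    {
        // Latest known metadata per path, refreshed on every change.
        static readonly ConcurrentDictionary<string, string> cache =
            new ConcurrentDictionary<string, string>();

        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\docs");   // placeholder
            watcher.Created += (s, e) => cache[e.FullPath] = ReadMetadata(e.FullPath);
            watcher.Changed += (s, e) => cache[e.FullPath] = ReadMetadata(e.FullPath);
            watcher.Deleted += (s, e) =>
            {
                if (cache.TryRemove(e.FullPath, out var meta))
                    Console.WriteLine("Deleted " + e.FullPath + ", last metadata: " + meta);
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }

        // Placeholder implementation; substitute your real metadata reader.
        static string ReadMetadata(string path) => File.GetLastWriteTimeUtc(path).ToString("o");
    }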
FileSystemWatcher1 = your main watcher; FileSystemWatcher2 = a watcher on the Recycle Bin folder. Roughly:

    string lastDeleted = null;
    watcher1.Deleted += (s, e) => lastDeleted = Path.GetFileName(e.FullPath);
    watcher2.Created += (s, e) =>
    {
        // Caveat: Windows renames recycled files (to $R...), so matching on the
        // original name alone may need refinement.
        if (Path.GetFileName(e.FullPath) == lastDeleted)
        {
            // Do what you want with e.FullPath (the copy now in the Recycle Bin).
        }
    };
I want to parse data from a log file, pump it into a database, and then purge the log file.
I could use the FileSystemWatcher component, and monitor the Change event, but the event would be firing non-stop, as the log file is pretty much "constantly" being written to. I don't want to be opening/closing db connections willy-nilly.
My current instinct is to use a Timer, and then parse/pump/purge the log file every so often (based on time or based on time and size of file).
Is there a common/proven way of handling the scenario (design pattern)?
Update: I see FileSystemWatcher has a NotifyFilter property, with one of the filterables being "Size"; I'm guessing (haven't found any verification yet) that any time the size of the file changes by 1KB it fires; this would be a reasonable "throttle," if true...
Not sure if this is a design pattern, but if you control how much you buffer before actually writing to the log file, you can minimize the frequency of change events.
The Changed event is way too chatty here. I would check the file on a scheduled basis with a timer, looking at the modification timestamp (and possibly the creation timestamp, especially if someone deletes and recreates the file).
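A sketch of that scheduled check (the path and the 30-second interval are placeholders):

    using System;
    using System.IO;
    using System.Threading;

    class Program
    {
        static DateTime lastSeen = DateTime.MinValue;

        static void Main()
        {
            const string logPath = @"C:\logs\app.log";   // placeholder

            var timer = new Timer(_ =>
            {
                var stamp = File.GetLastWriteTimeUtc(logPath);
                if (stamp != lastSeen)   // a recreated file gets a new stamp, so this also catches delete/recreate
                {
                    lastSeen = stamp;
                    ParsePumpPurge(logPath);
                }
            }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));

            Console.ReadLine();
            timer.Dispose();
        }

        static void ParsePumpPurge(string path) { /* parse, write to the DB, purge */ }
    }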
Do you have any control over the log file generation? If so, you could create a new log file every time the current one reaches a certain size, and rename the old log file to a specific format. Then have the FileSystemWatcher filter for the "archive" log files and process them when they are created.
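That rotation scheme might be wired up like this (the folder and the "*.old.log" naming convention are placeholders for whatever your rotation step produces):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            // Only rotated logs match this filter, so the watcher stays quiet
            // while the live log is being appended to.
            var watcher = new FileSystemWatcher(@"C:\logs", "*.old.log");
            watcher.Created += (s, e) => ProcessArchivedLog(e.FullPath);
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }

        static void ProcessArchivedLog(string path) { /* parse, pump to the DB, delete */ }
    }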
How may I know which file is modified and what data has changed in the file?
Edit: I want to watch the file as it gets modified and then compare it against a previous version to know which data blocks have changed. I guess watching the file for changes can be accomplished using the file watcher API, but I have no idea about the second part.
You may need the FileSystemWatcher class.
The most common approach is to define a FileSystemWatcher, subscribe to its events, and process them according to the logic of your application.
Here is a simple example.
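Something along these lines (a minimal sketch; the directory is a placeholder):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\watched")   // placeholder
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
            };

            // e.FullPath tells you which file was touched; working out *what*
            // changed inside it is up to you (e.g. diff against a saved copy).
            watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
            watcher.Changed += (s, e) => Console.WriteLine("Modified: " + e.FullPath);
            watcher.Renamed += (s, e) => Console.WriteLine("Renamed: " + e.OldFullPath + " -> " + e.FullPath);
            watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);

            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }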
I have a task to save versions of documents for a specified directory and watch for changes.
Before each change I need to keep the current version of the file somewhere else.
But FileSystemWatcher doesn't help me here because its events fire after the change...
What should I do?
You'd want to snapshot the target directories before watching them, like when your service starts up or something; that way, when the file change comes through, you have the base to compare to.
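A sketch of that snapshot-first approach (both directory paths and the naming of the archived copies are placeholders):

    using System;
    using System.IO;

    class Program
    {
        const string Source = @"C:\docs";          // watched directory (placeholder)
        const string Shadow = @"C:\docs-shadow";   // snapshot location (placeholder)

        static void Main()
        {
            // 1. Snapshot everything up front so a pre-change copy always exists.
            Directory.CreateDirectory(Shadow);
            foreach (var file in Directory.GetFiles(Source))
                File.Copy(file, Path.Combine(Shadow, Path.GetFileName(file)), true);

            // 2. On each change, archive the shadow copy (the version *before*
            //    this change), then refresh the shadow with the new content.
            var watcher = new FileSystemWatcher(Source);
            watcher.Changed += (s, e) =>
            {
                var shadowFile = Path.Combine(Shadow, e.Name);
                if (File.Exists(shadowFile))
                    File.Copy(shadowFile, shadowFile + "." + DateTime.UtcNow.Ticks, true);
                File.Copy(e.FullPath, shadowFile, true);
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }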