I want to get an event before a file is deleted. How can I do it?
As per my answer to this question: How could I prevent a folder from being created using a Windows service?
There's no support within System.IO.FileSystemWatcher, or anything else within the .NET Framework as far as I'm aware, for receiving an event prior to a file being deleted, i.e. at the point the deletion request hits the file system but before it is actioned (I'm assuming here that you want to be able to selectively cancel requests to delete files).
What you'll need to do, if you want to go down this route, is write a File System Filter Driver, which, as far as I'm aware, you'll have to write in unmanaged code.
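For completeness, here is a minimal sketch of what FileSystemWatcher can give you: a notification after the deletion has already happened, with no way to cancel it. The folder path is an assumption for illustration.

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Watch an example folder (path is an assumption for illustration).
        using (var watcher = new FileSystemWatcher(@"C:\WatchedFolder"))
        {
            // Deleted fires only *after* the file system has removed the file;
            // there is no pre-delete event and no way to cancel from here.
            watcher.Deleted += (s, e) =>
                Console.WriteLine("Already deleted: " + e.FullPath);

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching... press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```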
Related
I have two folders that have some files inside them. I want to use the Microsoft Sync Framework in such a way that it first detects changes in one folder, if there are any, and then carries out the sync operation with the other folder.
The idea behind detecting the change in the folder is that I can query that change and do some operations first, before the sync.
Any ideas for achieving the same thing with MSF and other techniques are also welcome.
I have tried the sample code given in this link: https://msdn.microsoft.com/en-us/library/mt763483.aspx
But it first syncs the folders and then fires some events. I tried to fire the events first, but it doesn't work.
I am a beginner in all of this, so any help in this regard is highly appreciated.
There is an event, "ApplyingChange", which is triggered whenever there is a change in the folder. In this event I first inspect the change and then set "e.SkipChange", so that I only detect that there was a change in either the source folder or the destination folder; then, on the next run, I don't skip the change and carry out the synchronization.
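For illustration, a rough sketch of that idea, assuming the FileSyncProvider API from the MSDN sample linked above; the folder paths and the two-pass "detect only" flag are my own assumptions, not tested code.

```csharp
using System;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Files;

class DetectThenSync
{
    static bool detectOnly = true;   // pass 1: only detect changes, skip applying them

    static void Main()
    {
        // Paths are illustrative assumptions.
        SyncFolders(@"C:\SourceFolder", @"C:\DestFolder");   // pass 1: detect and inspect
        detectOnly = false;
        SyncFolders(@"C:\SourceFolder", @"C:\DestFolder");   // pass 2: actually synchronize
    }

    static void SyncFolders(string sourcePath, string destPath)
    {
        using (var source = new FileSyncProvider(sourcePath))
        using (var dest = new FileSyncProvider(destPath))
        {
            // ApplyingChange is raised on the provider that is about to apply a change.
            dest.ApplyingChange += OnApplyingChange;

            var orchestrator = new SyncOrchestrator
            {
                LocalProvider = source,
                RemoteProvider = dest,
                Direction = SyncDirectionOrder.Upload
            };
            orchestrator.Synchronize();
        }
    }

    static void OnApplyingChange(object sender, ApplyingChangeEventArgs e)
    {
        Console.WriteLine("Change detected: " + e.ChangeType);

        if (detectOnly)
        {
            // Pre-sync pass: record/inspect the change but don't apply it yet;
            // it stays pending and is applied on the second pass.
            e.SkipChange = true;
        }
    }
}
```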
Experienced .NET developer here (but with only client object model experience in SharePoint). Here's my scenario:
- In SharePoint 2013, a user checks in an existing/new file after making changes.
- File check code (C# preferred) is run against the file being checked in.
- If the file passes the checks, continue the check-in.
- If the file fails, discard the check-in and inform the user that the check-in has failed, along with the reasons why it failed (reasons supplied by the file check code).
I already have the file checks implemented as a C# class library (used in a couple of other apps). I would like to be able to limit this to a specific folder (and all child folders within it) and file type (identified by file extension).
What's the best-practice method of implementing this? My guess is to tie into existing SP events to detect the check-in and insert my file check class into that execution path. In a perfect world I'd find a tutorial demonstrating this. :)
Thank you in advance for your time.
Regards,
Falconeer
What you want is to develop a SharePoint farm solution that uses event receivers. There are specific event receivers that fire when someone checks in a document; your logic should go there.
http://beginnersbook.com/2013/02/event-receivers-in-sharepoint/
Watch out for the two event receivers, ItemCheckingIn and ItemCheckedIn. There is a difference between them: one is synchronous, the other asynchronous. I would put your logic in the -ing event receiver, as that one allows you to cancel the check-in.
You might have to play with the BeforeProperties and AfterProperties to do the appropriate checks on the folder, file, etc.
http://www.sharepointalex.co.uk/index.php/2010/06/beforepropertiesafterproperties-in-event-receivers-i-always-forget-this/
This should be the way to go!
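A rough sketch of what such a receiver could look like, assuming a farm solution; FileCheckLibrary.Validate stands in for your existing check class, and the folder path and extension are placeholder assumptions.

```csharp
using System;
using Microsoft.SharePoint;

public class FileCheckReceiver : SPItemEventReceiver
{
    // Synchronous "-ing" receiver: runs before the check-in is committed,
    // so setting Cancel = true rejects it and ErrorMessage is shown to the user.
    public override void ItemCheckingIn(SPItemEventProperties properties)
    {
        base.ItemCheckingIn(properties);

        SPListItem item = properties.ListItem;
        if (item == null || item.File == null)
            return;

        SPFile file = item.File;

        // Placeholder scoping rules: limit to one folder subtree and one extension.
        string url = file.Url;
        if (!url.StartsWith("Shared Documents/Contracts/", StringComparison.OrdinalIgnoreCase) ||
            !url.EndsWith(".docx", StringComparison.OrdinalIgnoreCase))
            return;

        byte[] content = file.OpenBinary();

        // FileCheckLibrary is a stand-in for your existing C# check library.
        string failureReason;
        if (!FileCheckLibrary.Validate(content, out failureReason))
        {
            properties.Cancel = true;
            properties.ErrorMessage = "Check-in rejected: " + failureReason;
        }
    }
}
```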
I hope this is the correct way of asking this question. First, my problem: I want to know how many times a specific folder has been opened since my Windows service started. I don't want to write a desktop application for this purpose because I want it to happen in the background, and I may also want to add more functionality later. That is why it needs to be a Windows service.
Is there some kind of OS event that I can handle in my code, i.e. an event that is fired when a user opens a folder?
If this is not the correct method, then please let me know of another method that can help.
That's not possible in C#. You can be notified of changes within a directory and infer from that that the directory was opened, but there are many times when a directory is opened and nothing is changed. What you're describing is most like a File System Filter Driver.
From What is a File System Filter Driver:
A file system filter driver can filter I/O operations for one or more file systems or file system volumes. Depending on the nature of the driver, filter can mean log, observe, modify, or even prevent.
Writing a filter is relatively easy, considering there are templates you can base your work on. But they consist of kernel-mode code, meaning they're not written in C# (they are typically written in C), and they are drivers.
For more details: http://msdn.microsoft.com/en-us/library/windows/hardware/ff540382(v=vs.85).aspx
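If the inference approach mentioned above is good enough (i.e. counting activity inside the folder rather than true "open" events), a rough sketch of the counting logic might look like this, shown as a console program for brevity; the folder path is an assumption, and a user who merely browses the folder without changing anything produces no notification at all.

```csharp
using System;
using System.IO;
using System.Threading;

class FolderActivityCounter
{
    static int activityCount;

    static void Main()
    {
        // Path is an assumption for illustration.
        using (var watcher = new FileSystemWatcher(@"C:\MonitoredFolder"))
        {
            watcher.IncludeSubdirectories = true;

            // Each of these events is only indirect evidence that the folder was
            // accessed; read-only opens are not visible to FileSystemWatcher.
            FileSystemEventHandler onActivity =
                (s, e) => Interlocked.Increment(ref activityCount);

            watcher.Created += onActivity;
            watcher.Changed += onActivity;
            watcher.Deleted += onActivity;
            watcher.Renamed += (s, e) => Interlocked.Increment(ref activityCount);

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Monitoring... press Enter to stop.");
            Console.ReadLine();

            Console.WriteLine("Activity notifications since start: " + activityCount);
        }
    }
}
```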
I am implementing an event handler that must open and process the content of a file created by a third-party application over which I have no control. I am warned by a note in "C# 4.0 in a Nutshell" (page 495) about the risk of opening a file before it is fully populated, so I am wondering how to manage this. To keep the load on the event handler to a minimum, I am considering having the handler simply put the file names in a queue and then having a different thread manage the processing, but in any case, how can I make sure that the write is complete and the file is safe to read? The file size could be arbitrary.
Any ideas? Thanks.
A reliable way to achieve what you want might be to use FileSystemWatcher + the NTFS USN journal.
It may be more complicated than you expected, but FileSystemWatcher alone won't tell you for sure that the newly created file has been closed:
- First, use FileSystemWatcher to know when a file is created. From there you have the complete file path and are one or two P/Invokes away from getting the file's unique ID, which can help you track it during its whole lifetime (see the sketch below).
- Then, read the USN journal, which tracks everything that occurs on your drive. Filter the entries corresponding to your new file's ID and read the journal until you reach the entry with the 'Close' event.
From there, unless your file is manipulated in special ways (opened and closed multiple times by the application that generates it), you can assume it is safe to read it and do whatever you wanted to do with it.
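As a minimal sketch of the file-ID step: the GetFileInformationByHandle signature below is the documented kernel32 one, but treat the rest as an untested illustration.

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class FileIdHelper
{
    [StructLayout(LayoutKind.Sequential)]
    struct BY_HANDLE_FILE_INFORMATION
    {
        public uint FileAttributes;
        public System.Runtime.InteropServices.ComTypes.FILETIME CreationTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastAccessTime;
        public System.Runtime.InteropServices.ComTypes.FILETIME LastWriteTime;
        public uint VolumeSerialNumber;
        public uint FileSizeHigh;
        public uint FileSizeLow;
        public uint NumberOfLinks;
        public uint FileIndexHigh;
        public uint FileIndexLow;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetFileInformationByHandle(
        SafeFileHandle hFile, out BY_HANDLE_FILE_INFORMATION lpFileInformation);

    // Returns the file reference number, i.e. the ID that USN journal records use.
    public static ulong GetFileId(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open,
                                           FileAccess.Read, FileShare.ReadWrite))
        {
            BY_HANDLE_FILE_INFORMATION info;
            if (!GetFileInformationByHandle(stream.SafeFileHandle, out info))
                throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

            return ((ulong)info.FileIndexHigh << 32) | info.FileIndexLow;
        }
    }
}
```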
A really great C# implementation of a USN journal parser is StCroixSkipper's work, available here:
http://mftscanner.codeplex.com/
If you are interested I can give you more help about USN journal, as I use it in my project.
Our workaround is to watch for a specific extension. While a file is being uploaded, the extension is ".tmp"; when it's done uploading, it's renamed to have the proper extension.
Another alternative is to have the server try to move the file in a try/catch block. If the file isn't done being uploaded, the attempt to move it will throw an exception, so we wait and try again.
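A bare-bones sketch of that second idea; the method name, retry count, and delay are assumptions.

```csharp
using System;
using System.IO;
using System.Threading;

static class UploadMover
{
    // Try to move the uploaded file out of the drop folder; if the uploader
    // still has it open, File.Move throws an IOException and we wait and retry.
    public static bool TryMoveWhenComplete(string source, string destination,
                                           int maxAttempts = 10)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            try
            {
                File.Move(source, destination);
                return true;            // move succeeded, so the upload was finished
            }
            catch (IOException)
            {
                // Still being written (or otherwise locked): back off and retry.
                Thread.Sleep(TimeSpan.FromSeconds(2));
            }
        }
        return false;                   // give up after maxAttempts tries
    }
}
```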
Realistically, you can't know. If the other application's "write" operation is to open the file denying write access to everyone else and then close it when it's done, then when you get a notification you can simply open the file requesting write access, and if that fails you know the operation isn't complete. But if the "write" operation is to open the file, write, close the file, open it again, write again, and so on, then you're pretty much out of luck.
The best solution I've seen is to set a timer after the last notification. When the timer elapses, try to open the file for writing: if you can, assume the "operation" is done and do what you need to do; if the open fails, assume the operation is still in progress and wait some more.
Of course, nothing is foolproof. Despite the above, another operation could start while you're doing what you want with the file and cause interaction problems.
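A bare-bones version of the open-for-write check described above; the helper name is an assumption, and you would call it from a timer that you reset on every FileSystemWatcher notification.

```csharp
using System.IO;

static class FileReadiness
{
    // Returns true if the file could be opened with exclusive access,
    // which we take as "the writer is probably done with it".
    public static bool IsProbablyComplete(string path)
    {
        try
        {
            using (new FileStream(path, FileMode.Open,
                                  FileAccess.ReadWrite, FileShare.None))
            {
                return true;
            }
        }
        catch (IOException)
        {
            return false;   // still locked by the writer (or by someone else)
        }
    }
}
```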
I need to write an application that polls a directory containing images on a file server and displays 4 at a time.
This application will be run up to 50 times across the network at the same time.
I'm trying to think of the best architecture to complete this requirement.
I was working on the idea of opening a file with read/write access and no file sharing allowed, so that if another PC came in to read it, it would get an error and would have to move on to the next one. The problem is that I need to access all 4 images in sequence on the same PC while ensuring other PCs don't try to open them. So, for example, if PC1 opens 1.jpg, it needs to be able to open 1, 2, 3 and 4.jpg; if another PC comes in at the same time to read them, I need a way for it to open 5, 6, 7 and 8.jpg instead, and so on.
It seems a simple requirement but a nightmare to try and build successfully.
You're basically dealing with a race condition here, and I don't see a way to handle it from separate instances of your application running on separate machines unless you can guarantee your file naming will always follow a standard naming convention that would allow you to work with the sequence of 4 files using only the name of the first.
The best way to handle this would be using a centralized resource to manage access to your files, either a database as was suggested in a comment or else a service (such as WCF) that would "hand out" each set of 4 files.
What about creating a 1.jpg.lock file? The presence of the file indicates that the set of images is locked, and any other instance of the application should skip that set.
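If you go that route, one detail worth getting right is creating the lock file atomically so that two PCs can't both claim the same set. A sketch, with the naming convention assumed from the question; note that a crashed client will leave its .lock file behind, so you may also want some stale-lock cleanup.

```csharp
using System;
using System.IO;

static class ImageSetLock
{
    // Attempts to claim a set of 4 images by atomically creating "<first image>.lock".
    // FileMode.CreateNew fails if the file already exists, so only one PC can
    // succeed; every other PC gets an IOException and moves on to the next set.
    public static bool TryClaim(string firstImagePath, out FileStream lockHandle)
    {
        string lockPath = firstImagePath + ".lock";
        try
        {
            lockHandle = new FileStream(lockPath, FileMode.CreateNew,
                                        FileAccess.Write, FileShare.None);
            return true;
        }
        catch (IOException)
        {
            lockHandle = null;   // another PC already claimed this set
            return false;
        }
    }
}
```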