do exceptions reduce performance? - c#

My application traverses a directory tree, and in each directory it tries to open a file with a particular name (using File.OpenRead()). If this call throws FileNotFoundException, then it knows that the file does not exist. Should I instead make a File.Exists() call first to check whether the file exists? Would this be more efficient?

Update
I ran each of these methods in a loop and timed them:
void throwException()
{
    try
    {
        throw new NotImplementedException();
    }
    catch
    {
    }
}

void fileOpen()
{
    string filename = string.Format("does_not_exist_{0}.txt", random.Next());
    try
    {
        File.Open(filename, FileMode.Open);
    }
    catch
    {
    }
}

void fileExists()
{
    string filename = string.Format("does_not_exist_{0}.txt", random.Next());
    File.Exists(filename);
}

Random random = new Random();
These are the results without the debugger attached, running a release build:

Method          Iterations per second
throwException                  10100
fileOpen                         2200
fileExists                      11300
The cost of throwing an exception is a lot higher than I was expecting, and calling File.Open on a file that doesn't exist seems much slower than checking the existence of a file that doesn't exist.
In the case where the file will often not be present, it appears to be faster to check whether the file exists. I would imagine that in the opposite case - when the file is usually present - you will find it is faster to catch the exception. If performance is critical to your application, I suggest that you benchmark both approaches on realistic data.
As mentioned in other answers, remember that even if you check for the existence of the file before opening it, you should be careful of the race condition that arises if someone deletes the file after your existence check but just before you open it. You still need to handle the exception.

No, don't. If you use File.Exists, you introduce a concurrency problem. If you wrote this code:
if file exists then
    open file
then if another program deletes your file between your File.Exists check and your attempt to open it, the program will still throw an exception.
Second, even if a file exists, that does not mean you can actually open it: you might not have permission to open the file, or the file might be on a read-only filesystem so you can't open it in write mode, etc.
File I/O is much, much more expensive than an exception; there is no need to worry about the performance of exceptions here.
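For the original scenario (probing each directory for one well-known file), a minimal sketch of this catch-the-exception approach might look like the following; the file name and the return-null convention are illustrative assumptions, not from the question:

```csharp
using System;
using System.IO;

static string TryReadFile(string directory, string fileName)
{
    // Attempt the open directly: a missing file is reported via the
    // exception, so there is no window for a race between a separate
    // existence check and the open.
    try
    {
        using (FileStream stream = File.OpenRead(Path.Combine(directory, fileName)))
        using (StreamReader reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
    catch (FileNotFoundException)
    {
        return null; // the file simply isn't there
    }
    catch (DirectoryNotFoundException)
    {
        return null; // the directory vanished between enumeration and open
    }
}
```

Permission problems (UnauthorizedAccessException) are deliberately not swallowed here; whether to treat them like a missing file depends on the application.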
EDIT:
Benchmarking Exception vs Exists in Python under Linux
import timeit

setup = 'import random, os'

s = '''
try:
    open('does not exist_%s.txt' % random.randint(0, 10000)).read()
except Exception:
    pass
'''
byException = timeit.Timer(stmt=s, setup=setup).timeit(1000000)

s = '''
fn = 'does not exists_%s.txt' % random.randint(0, 10000)
if os.path.exists(fn):
    open(fn).read()
'''
byExists = timeit.Timer(stmt=s, setup=setup).timeit(1000000)

print 'byException: ', byException  # byException: 23.2779269218
print 'byExists: ', byExists        # byExists: 22.4937438965

Is this behavior truly exceptional? If it is expected, you should be testing with an if statement, and not using exceptions at all. Performance isn't the only issue with this solution and from the sound of what you are trying to do, performance should not be an issue. Therefore, style and a good approach should be the items of concern with this solution.
So, to summarize, since you expect some tests to fail, do use the File.Exists to check instead of catching exceptions after the fact. You should still catch other exceptions that can occur, of course.

It depends!
If there's a high chance of the file being there (you know this for your scenario, but as an example, something like desktop.ini), I would rather try to open it directly.
Either way, even if you use File.Exists you still need to put File.OpenRead in a try/catch for concurrency reasons, to avoid a run-time exception; but checking first will considerably improve your application's performance if the chance of the file being there is low. Ostrich algorithm

Wouldn't it be most efficient to run a directory search, find it, and then try to open it?
Dim Files() As String = System.IO.Directory.GetFiles("C:\", "SpecificName.txt", IO.SearchOption.AllDirectories)
Then you would get an array of strings that you know exist.
Oh, and as an answer to the original question: yes, try/catch will introduce more processor cycles, but I would also assume that the IO operations actually take longer than that processor overhead.
Running Exists first and then the open is 2 IO operations against 1 for just trying to open it. So really, overall performance comes down to a judgment call on processor time vs. hard drive speed on the PC it will be running on. With a slower processor I'd go with the check; with a fast processor I might go with the try/catch on this one.

File.Exists is a good first line of defense. If the file doesn't exist, then you're guaranteed to get an exception if you try to open it. The existence check is cheaper than the cost of throwing and catching an exception. (Maybe not much cheaper, but a bit.)
There's another consideration, too: debugging. When you're running in the debugger, the cost of throwing and catching an exception is higher, because the IDE has hooks into the exception mechanism that increase your overhead. And if you've checked any of the "Break on thrown" checkboxes in Debug > Exceptions, then any avoidable exceptions become a huge pain point. For that reason alone, I would argue for preventing exceptions when possible.
However, you still need the try-catch, for the reasons pointed out by other answers here. The File.Exists call is merely an optimization; it doesn't save you from needing to catch exceptions due to timing, permissions, solar flares, etc.

I don't know about efficiency but I would prefer the File.Exists check. The problem is all the other things that could happen: bad file handle, etc. If your program logic knows that sometimes the file doesn't exist and you want to have a different behavior for existing vs. non-existing files, use File.Exists. If its lack of existence is the same as other file-related exceptions, just use exception handling.
Vexing Exceptions -- more about using exceptions well

Yes, you should use File.Exists. Exceptions should be used for exceptional situations not to control the normal flow of your program. In your case, a file not being there is not an exceptional occurrence. Therefore, you should not rely on exceptions.
UPDATE:
So everyone can try it for themselves, I'll post my test code. For non existing files, relying on File.Open to throw an exception for you is about 50 times worse than checking with File.Exists.
class Program
{
    static void Main(string[] args)
    {
        TimeSpan ts1 = TimeIt(OpenExistingFileWithCheck);
        TimeSpan ts2 = TimeIt(OpenExistingFileWithoutCheck);
        TimeSpan ts3 = TimeIt(OpenNonExistingFileWithCheck);
        TimeSpan ts4 = TimeIt(OpenNonExistingFileWithoutCheck);
    }

    private static TimeSpan TimeIt(Action action)
    {
        int loopSize = 10000;
        DateTime startTime = DateTime.Now;
        for (int i = 0; i < loopSize; i++)
        {
            action();
        }
        return DateTime.Now.Subtract(startTime);
    }

    private static void OpenExistingFileWithCheck()
    {
        string file = @"C:\temp\existingfile.txt";
        if (File.Exists(file))
        {
            using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
            {
            }
        }
    }

    private static void OpenExistingFileWithoutCheck()
    {
        string file = @"C:\temp\existingfile.txt";
        using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
        {
        }
    }

    private static void OpenNonExistingFileWithCheck()
    {
        string file = @"C:\temp\nonexistantfile.txt";
        if (File.Exists(file))
        {
            using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
            {
            }
        }
    }

    private static void OpenNonExistingFileWithoutCheck()
    {
        try
        {
            string file = @"C:\temp\nonexistantfile.txt";
            using (FileStream fs = File.Open(file, FileMode.Open, FileAccess.Read))
            {
            }
        }
        catch (Exception)
        {
        }
    }
}
On my computer:
ts1 = .75 seconds (same with or without debugger attached)
ts2 = .56 seconds (same with or without debugger attached)
ts3 = .14 seconds (same with or without debugger attached)
ts4 = 14.28 seconds (with debugger attached)
ts4 = 1.07 seconds (without debugger attached)
UPDATE:
I added details on whether a debugger was attached or not. I tested debug and release builds, but the only thing that made a difference was the one function that ended up throwing exceptions while the debugger was attached (which makes sense). Still, checking with File.Exists is the best choice.
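One caveat about the harness above: DateTime.Now has a resolution of roughly 15 ms on Windows, which is coarse for timing tight loops. A sketch of the same TimeIt helper using Stopwatch, the high-resolution timer, instead:

```csharp
using System;
using System.Diagnostics;

static TimeSpan TimeIt(Action action, int iterations = 10000)
{
    // Stopwatch uses the high-resolution performance counter, so short
    // runs are measured more accurately than with DateTime.Now deltas.
    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        action();
    }
    sw.Stop();
    return sw.Elapsed;
}
```

For loops of 10,000 iterations the conclusions would not change much, but for shorter runs the difference matters.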

I would say that, generally speaking, exceptions "increase" the overall "performance" of your system!
In your sample, anyway, it is better to use File.Exists...

The problem with using File.Exists first is that it has to touch the filesystem too, so you end up hitting the filesystem twice. I haven't measured it, but I would guess this additional access is more expensive than the occasional exception.
Whether the File.Exists check improves performance depends on the probability of the file existing. If it likely exists, then don't use File.Exists; if it usually doesn't exist, then the additional check will improve performance.

The overhead of an exception is noticeable, but it's not significant compared to file operations.

Related

File.WriteAllBytes does not block

I have a simple piece of code like so:
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
One would think that WriteAllBytes would be a blocking call, as it has async counterparts in C# 5.0 and no MSDN documentation states that it is non-blocking. HOWEVER, when a file is of a reasonable size (not massive, but somewhere in the realm of 20 MB), the call afterwards which opens the file seems to run before the writing is finished: the file is opened (the program complains it's corrupted, rightly so), and WriteAllBytes then complains the file is open in another process. What is going on here?! For curiosity's sake, this is the code used to open the file:
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
Has anyone experienced this sort of weirdness before? Or am I just doing something wrong?
If it is indeed blocking, what could possibly be causing this issue?
EDIT: I'll put the full method up.
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
What seems to be happening is that Process.Start is being called BEFORE WriteAllBytes is finished, and its attempting to open the file, and then WriteAllBytes complains about another process holding the lock on the file.
No, WriteAllBytes is a blocking, synchronous method. As you stated, if it were not, the documentation would say so.
Possibly the virus scanner is still busy scanning the file that you just wrote, and is responsible for locking the file. Try temporarily disabling the scanner to test my hypothesis.
I think your problem may be with the way you are reading from the file. Note that Stream.Read (and FileStream.Read) is not required to read all the bytes you request.
In other words, your call stuffFile.Read(0, node.FileHeader.FileSize) might (and sometimes definitely will) return an array of node.FileHeader.FileSize bytes which contains some bytes of the file at the beginning, and then 0s after.
The bug is in your UsableFileStream.Read method. You could fix it by having it read the entire file into memory:
public byte[] Read(int offset, int count)
{
    // There are still bugs in this method, like assuming that 'count'
    // bytes can actually be read from the file
    byte[] temp = new byte[count];
    int bytesRead;
    while (count > 0 && (bytesRead = _stream.Read(temp, offset, count)) > 0)
    {
        offset += bytesRead;
        count -= bytesRead;
    }
    return temp;
}
But since you are only using this to copy file contents, you could avoid having these potentially massive allocations and use Stream.CopyTo in your tree_MouseDoubleClick:
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
using (var output = File.Create(Path.Combine(temp, node.Name)))
    stuffFile._stream.CopyTo(output);
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
A little late, but adding for the benefit of anyone else that might come along.
The underlying C# implementation of File.WriteAllBytes may well be synchronous, but the authors of C# cannot control at the OS level how the writing to disk is handled.
Something called write caching means that when C# asks to save the file to disk, the OS may return "I'm done" before the file is fully written to the disk, causing the issue OP highlighted.
In that case, after writing, it may be better to sleep in a loop and keep checking to see if the file is still locked before calling Process.Start.
You can see that I run into problems caused by this here: C#, Entity Framework Core & PostgreSql : inserting a single row takes 20+ seconds
Also, in the final sentence of OPs post "and then WriteAllBytes complains about another process holding the lock on the file." I think they actually meant to write "and then Process.Start complains" which seems to have caused some confusion in the comments.
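The sleep-and-check loop suggested above could be sketched like this; the retry count and delay are arbitrary illustrative values:

```csharp
using System;
using System.IO;
using System.Threading;

static bool WaitForFileRelease(string path, int maxAttempts = 10, int delayMs = 200)
{
    // Poll until the file can be opened exclusively, i.e. no other
    // process (antivirus, indexer, write-behind cache) still holds it.
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
            {
                return true; // we got exclusive access, so nobody else has it open
            }
        }
        catch (IOException)
        {
            Thread.Sleep(delayMs); // still locked; back off and retry
        }
    }
    return false;
}
```

A file that opens successfully with FileShare.None has no other handles open against it at that instant, so it should be safe to hand to Process.Start - though the usual caveat applies: the state can change again right after the check returns.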

Open a file that has just been closed

I'm writing an application that manipulates a text file. The first half of my function reads the text file, while the second half (optionally) writes to the same file. Although I call .Close() on the StreamReader object before opening the StreamWriter object, I still get an IOException: "The process cannot access the file 'file.txt' because it is being used by another process."
How do I force my program to release the file before continuing?
public static void manipulateFile(String fileIn, String fileOut, String obj)
{
    StreamReader sr = new StreamReader(fileIn);
    String line;
    while ((line = sr.ReadLine()) != null)
    {
        //code to split up file into part1, part2, and part3[]
    }
    sr.Close();

    //Write the file
    if (fileOut != null)
    {
        StreamWriter sw = new StreamWriter(fileOut);
        sw.Write(part1 + part2);
        foreach (String s in part3)
        {
            sw.WriteLine(s);
        }
        sw.Close();
    }
}
Your code as posted runs fine - I don't see the exception.
However, calling Close() manually like that is a bad idea: if an exception is thrown, your call to Close() might never be made. You should use a finally block, or better yet, a using statement.
using (StreamReader sr = new StreamReader(fileIn))
{
    // ...
}
But the actual problem you are experiencing might not be in this method specifically; it may be a general problem of forgetting to close files properly elsewhere. I suggest you go through your whole code base, look for all the places where you use IDisposable objects, and check that you dispose of them correctly even when exceptions may be thrown.
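As a sketch, the method from the question restructured around using blocks might look like this (the part1/part2/part3 splitting is elided, as in the question, and replaced by a simple line list):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static void ManipulateFile(string fileIn, string fileOut)
{
    var lines = new List<string>();
    using (var sr = new StreamReader(fileIn))
    {
        string line;
        while ((line = sr.ReadLine()) != null)
        {
            lines.Add(line); // the splitting/parsing would happen here
        }
    } // the reader is disposed here, even if ReadLine throws

    if (fileOut != null)
    {
        using (var sw = new StreamWriter(fileOut))
        {
            foreach (string s in lines)
            {
                sw.WriteLine(s);
            }
        } // the writer is flushed and closed here
    }
}
```

Because the reader's using block closes the file before the writer opens it, fileIn and fileOut may even be the same path.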
Getting read access to a file that's already open elsewhere isn't usually difficult. Most code opens a file for reading with FileShare.Read, allowing somebody else to read the file as well; StreamReader does so, for example.
Getting write access is an entirely different ball of wax. That same FileShare.Read does not include FileShare.Write, so you cannot write the file while somebody else is reading it. And for good reason: writing would jerk the mat out from under that somebody else, suddenly providing entirely different data.
All you have to do is find out who that "somebody else" might be. Sysinternals' Handle utility can tell you. Hopefully it is your own program; then you can do something about it.
This may sound like a stupid question, but are you sure you didn't edit the file with another application which didn't release it? I've had this situation before, mostly with Excel files where Excel hadn't completely unloaded from memory (or me simply forgetting to close the other application). It might happen with whatever application you use for .txt files, if any. Just a suggestion.

Unable to move file because it's being used by another process -- my program?

My program is unable to File.Move or File.Delete a file because it is being used "by another process", but it's actually my own program that is using it.
I use Directory.GetFiles to initially get the file paths, and from there, I process the files by simply looking at their names and processing information that way. Consequently all I'm doing is working with the strings themselves, right? Afterwards, I try to move the files to a "Handled" directory. Nearly all of them will usually move, but from time to time, they simply won't because they're being used by my program.
Why is it that most of them move but one or two stick around? Is there anything I can do to try freeing up the file? There's no streams to close.
Edit Here's some code:
public object[] UnzipFiles(string[] zipFiles)
{
    ArrayList al = new ArrayList(); //not sure of proper array size, so using arraylist
    string[] files = null;
    for (int a = 0; a < zipFiles.Length; a++)
    {
        string destination = settings.GetTorrentSaveFolder() + @"\[CSL]--Temp\" + Path.GetFileNameWithoutExtension(zipFiles[a]) + @"\";
        try
        {
            fz.ExtractZip(zipFiles[a], destination, ".torrent");
            files = Directory.GetFiles(destination, "*.torrent", SearchOption.AllDirectories);
            for (int b = 0; b < files.Length; b++)
                al.Add(files[b]);
        }
        catch (Exception e)
        {
        }
    }
    try
    {
        return al.ToArray(); //return all files of all zips
    }
    catch (Exception e)
    {
        return null;
    }
}
This is called from:
try
{
    object[] rawFiles = directory.UnzipFiles(zipFiles);
    string[] files = Array.ConvertAll<object, string>(rawFiles, Convert.ToString);
    if (files != null)
    {
        torrents = builder.Build(files);
        xml.AddTorrents(torrents);
        directory.MoveProcessedFiles(xml);
        directory.MoveProcessedZipFiles();
    }
}
catch (Exception e)
{
}
Therefore, the builder builds objects of class Torrent. Then I add the objects of class Torrent into a xml file, which stores information about it, and then I try to move the processed files which uses the xml file as reference about where each file is.
Despite it all working fine for most of the files, I'll get an IOException thrown about it being used by another process eventually here:
public void MoveProcessedZipFiles()
{
    string[] zipFiles = Directory.GetFiles(settings.GetTorrentSaveFolder(), "*.zip", SearchOption.TopDirectoryOnly);
    if (!Directory.Exists(settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips"))
        Directory.CreateDirectory(settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips");
    for (int a = 0; a < zipFiles.Length; a++)
    {
        try
        {
            File.Move(zipFiles[a], settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips\" + zipFiles[a].Substring(zipFiles[a].LastIndexOf('\\') + 1));
        }
        catch (Exception e)
        {
        }
    }
}
Based on your comments, this really smells like a handle leak. Then, looking at your code, the fz.ExtractZip(...) looks like the best candidate to be using file handles, and hence be leaking them.
Is the type of fz part of your code, or a third party library? If it's within your code, make sure it closes all its handles (the safest way is via using or try-finally blocks). If it's part of a third party library, check the documentation and see if it requires any kind of cleanup. It's quite possible that it implements IDisposable; in such case put its usage within a using block or ensure it's properly disposed.
The line catch(Exception e) {} is horribly bad practice. You should only swallow exceptions this way when you know exactly what exception may be thrown and why you want to ignore it. If an exception your program can't handle happens, it's better for it to crash with a descriptive error message and valuable debug information (e.g. exception type, stack trace, etc.) than to ignore the issue and continue as if nothing had gone wrong, because an exception means that something has definitely gone wrong.
Long story short, the quickest approach to debug your program would be to:
1. Replace your generic catchers with finally blocks.
2. Add/move any relevant cleanup code to the finally blocks.
3. Pay attention to any exception you get: where was it thrown from? What kind of exception is it? What do the documentation or code comments say about the method throwing it? And so on.
4. Either:
   4.1. If the type of fz is part of your code, look for leaks there.
   4.2. If it's part of a third-party library, review the documentation (and consider getting support from the author).
Hope this helps.
What do you mean by "there are no streams to close"? That you do not use streams, or that you close them?
I believe that you nevertheless have some open stream. Do you have some static classes that use these files?
1. Try to write a simple application that will only parse, move, and delete the files; see if this works.
2. Post some pieces of the code that work with your files.
3. Try to use Unlocker to be doubly sure that nothing else is using those files: http://www.emptyloop.com/unlocker/ (don't forget to check files for viruses :))
The Path class was handling multiple files to get me their filenames. Despite being unable to reproduce the issue on demand, forcing a garbage collection using GC.Collect at the end of the "processing" phase of my program fixed the issue.
Thanks again to all who helped. I learned a lot.
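For reference, the workaround described above amounts to something like this hypothetical helper. Note that forcing a collection only masks an undisposed-stream leak; wrapping the streams in using blocks is the real fix:

```csharp
using System;
using System.IO;

static void MoveWithCleanup(string source, string destination)
{
    // Finalizers on forgotten streams may still hold file handles.
    // Forcing a full collection (as the answer above did) releases
    // them before the move is attempted. This hides the leak rather
    // than fixing it; disposing streams deterministically is better.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    File.Move(source, destination);
}
```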

How to Lock a file and avoid readings while it's writing

My web application returns a file from the filesystem. These files are dynamic, so I have no way to know their names or how many of them there will be. When such a file doesn't exist, the application creates it from the database. I want to avoid two different threads recreating the same file at the same time, or one thread trying to return the file while another thread is creating it.
Also, I don't want to take a lock over an element that is common to all the files. Therefore I should lock the file only while I'm creating it.
So I want to lock a file until its recreation is complete; if another thread tries to access it, it will have to wait for the file to be unlocked.
I've been reading about FileStream.Lock, but I have to know the file length, and it won't prevent other threads from trying to read the file, so it doesn't work for my particular case.
I've also been reading about FileShare.None, but it will throw an exception (which exception type?) if another thread/process tries to access the file... so I'd have to implement a "try again while it's failing" loop, because I'd like to avoid generating exceptions... and I don't like that approach too much, although maybe there is no better way.
The approach with FileShare.None would be this more or less:
static void Main(string[] args)
{
    new Thread(new ThreadStart(WriteFile)).Start();
    Thread.Sleep(1000);
    new Thread(new ThreadStart(ReadFile)).Start();
    Console.ReadKey(true);
}

static void WriteFile()
{
    using (FileStream fs = new FileStream("lala.txt", FileMode.Create, FileAccess.Write, FileShare.None))
    using (StreamWriter sw = new StreamWriter(fs))
    {
        Thread.Sleep(3000);
        sw.WriteLine("trolololoooooooooo lolololo");
    }
}

static void ReadFile()
{
    Boolean readed = false;
    Int32 maxTries = 5;
    while (!readed && maxTries > 0)
    {
        try
        {
            Console.WriteLine("Reading...");
            using (FileStream fs = new FileStream("lala.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
            using (StreamReader sr = new StreamReader(fs))
            {
                while (!sr.EndOfStream)
                    Console.WriteLine(sr.ReadToEnd());
            }
            readed = true;
            Console.WriteLine("Readed");
        }
        catch (IOException)
        {
            Console.WriteLine("Fail: " + maxTries.ToString());
            maxTries--;
            Thread.Sleep(1000);
        }
    }
}
But I don't like the fact that I have to catch exceptions, try several times, and wait an arbitrary amount of time :|
You can handle this by using the FileMode.CreateNew argument to the stream constructor. One of the threads is going to lose, finding out that the file was already created a microsecond earlier by another thread, and will get an IOException.
It will then need to spin, waiting for the file to be fully created, which you enforce with FileShare.None. Catching exceptions here doesn't matter; it is spinning anyway. There's no other workaround for it unless you P/Invoke.
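A rough sketch of that pattern follows. The retry delay is arbitrary, and returning either a write or a read stream from one method is an assumption of this sketch: the caller can inspect CanWrite to learn whether it won the creation race and must populate the file.

```csharp
using System;
using System.IO;
using System.Threading;

static FileStream CreateOrWait(string path)
{
    while (true)
    {
        try
        {
            // CreateNew is atomic: exactly one thread wins the race to
            // create the file; everyone else gets an IOException.
            return new FileStream(path, FileMode.CreateNew,
                                  FileAccess.Write, FileShare.None);
        }
        catch (IOException)
        {
            // Another thread created the file first. Spin until the
            // writer is done, i.e. the file opens for shared reading.
            try
            {
                return new FileStream(path, FileMode.Open,
                                      FileAccess.Read, FileShare.Read);
            }
            catch (IOException)
            {
                Thread.Sleep(50); // writer still holds FileShare.None
            }
        }
    }
}
```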
I think the right approach would be the following: keep a set of strings in which you save the name of each file currently being processed, so that only one thread processes a given file at a time. Something like this:
//somewhere in your code, or in a singleton
static System.Collections.Generic.HashSet<String> filesAlreadyProcessed = new System.Collections.Generic.HashSet<String>();

//thread main method code
bool fileAlreadyProcessed = false;
lock (filesAlreadyProcessed)
{
    if (filesAlreadyProcessed.Contains(filename))
    {
        fileAlreadyProcessed = true;
    }
    else
    {
        filesAlreadyProcessed.Add(filename);
    }
}
if (!fileAlreadyProcessed)
{
    //ProcessFile
}
Do you have a way to identify what files are being created?
Say every one of those files corresponds to a unique ID in your database. You create a centralised location (Singleton?), where these IDs can be associated with something lockable (Dictionary). A thread that needs to read/write to one of those files does the following:
//Request access
ReaderWriterLockSlim fileLock = null;
bool needCreate = false;
lock (Coordination.Instance)
{
    if (Coordination.Instance.ContainsKey(theId))
    {
        fileLock = Coordination.Instance[theId];
    }
    else if (!fileExists(theId)) //check if the file exists at this moment
    {
        Coordination.Instance[theId] = fileLock = new ReaderWriterLockSlim();
        fileLock.EnterWriteLock(); //give no other thread the chance to get into write mode
        needCreate = true;
    }
    else
    {
        //The file exists, and whoever created it is done with writing. No need to synchronize in this case.
    }
}
if (needCreate)
{
    createFile(theId); //Writes the file from the database
    lock (Coordination.Instance)
        Coordination.Instance.Remove(theId);
    fileLock.ExitWriteLock();
    fileLock = null;
}
if (fileLock != null)
    fileLock.EnterReadLock();
//read your data from the file
if (fileLock != null)
    fileLock.ExitReadLock();
Of course, threads that don't follow this exact locking protocol will still have unsynchronized access to the file.
Now, locking over a Singleton object is certainly not ideal, but if your application needs global synchronization then this is a way to achieve it.
Your question really got me thinking.
Instead of having every thread responsible for file access and having them block, what if you used a queue of files that need to be persisted and have a single background worker thread dequeue and persist?
While the background worker is cranking away, you can have the web application threads return the db values until the file does actually exist.
I've posted a very simple example of this on GitHub.
Feel free to give it a shot and let me know what you think.
FYI, if you don't have git, you can use svn to pull it http://svn.github.com/statianzo/MultiThreadFileAccessWebApp
The question is old and there is already a marked answer. Nevertheless I would like to post a simpler alternative.
I think we can directly use the lock statement on the filename, as follows:
lock (string.Intern("FileLock:absoluteFilePath.txt"))
{
    // your code here
}
Generally, locking a string is a bad idea because of String Interning. But in this particular case it should ensure that no one else is able to access that lock. Just use the same lock string before attempting to read. Here interning works for us and not against.
PS: The text 'FileLock' is just some arbitrary text to ensure that other string file paths are not affected.
Why not just use the database? E.g., if you have a way to associate a filename with the data from the db it contains, just add some information to the db that specifies whether a file with that information currently exists, when it was created, how stale the information in the file is, etc. When a thread needs some information, it checks the db to see if the file exists; if not, it writes a row to the table saying it's creating the file. When it's done, it updates that row with a boolean saying the file is ready to be used by others.
The nice thing about this approach is that all your information is in one place, so you can do nice error recovery. E.g., if the thread creating the file dies badly for some reason, another thread can come along and decide to rewrite the file because the creation time is too old. You can also create simple batch cleanup processes, and get accurate data on how frequently certain data is used for a file and how often information is updated (by looking at the creation times, etc.). And you avoid doing many, many disk seeks across your filesystem as different threads look for different files all over the place, especially if you decide to have multiple front-end machines seeking across a common disk.
The tricky part: you'll have to make sure your db supports row-level locking on the table that threads write to when they create files, because otherwise the table itself may be locked, which could make this unacceptably slow.

How to check if a file is in use?

Is there any way to first test if a file is in use before attempting to open it for reading? For example, this block of code will throw an exception if the file is still being written to or is considered in use:
try
{
    FileStream stream = new FileStream(fullPath, FileMode.Open, FileAccess.Read, FileShare.Read);
}
catch (IOException ex)
{
    // ex.Message == "The process cannot access the file 'XYZ' because it is being used by another process."
}
I've looked all around and the best I can find is to perform some sort of polling with a try catch inside, and that feels so hacky. I would expect there to be something on System.IO.FileInfo but there isn't.
Any ideas on a better way?
"You can call the LockFile API function through the P/Invoke layer directly. You would use the handle returned by the SafeFileHandle property on the FileStream.
Calling the API directly will allow you to check the return value for an error condition as opposed to resorting to catching an exception."
"The try/catch block is the CORRECT solution (though you want to catch IOException, not all exceptions). There's no way you can properly synchronize, because testing the lock + acquiring the lock is not an atomic operation."
"Remember, the file system is volatile: just because your file is in one state for one operation doesn't mean it will be in the same state for the next operation. You have to be able to handle exceptions from the file system."
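If you still want the check as an advisory hint (not a guarantee), it can be wrapped like this; as the quoted answers stress, the result is stale the moment it is returned:

```csharp
using System;
using System.IO;

static bool IsFileLocked(string path)
{
    // Note: this is only a snapshot. The file can become locked (or
    // unlocked) immediately after this returns, so callers still need
    // to handle IOException when they actually open the file.
    try
    {
        using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            return false;
        }
    }
    catch (IOException)
    {
        return true;
    }
}
```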
Using C# is it possible to test if a lock is held on a file
http://www.dotnet247.com/247reference/msgs/32/162678.aspx
Well, a function that tried to do this would simply be a try/catch in a loop. Just like with databases, the best way to find out IF you can do something is to try to do it, and if it fails, deal with it. Unless your threading code is off, there is no reason your program shouldn't be able to open a file, unless the user has it open in another program.
Unless, of course, you're doing interesting things.
