I made a system where a user can upload a file (an image) to a server, and the server saves it. All is good, but when I want to delete the files uploaded by the user, I get an exception saying:
the process cannot access the file because it is being used by another process
This is the code for saving the file:
HttpFileCollection files = httpRequest.Files;
for (int i = 0; i < files.Count; i++) {
    var postedFile = files[i];
    // I tried this one before, but I read that I should .Dispose() files, therefore
    // I settled on the other, uncommented solution (however, both of them do the same thing)
    //postedFile.SaveAs(filePath);
    using (FileStream fs = File.Create(filePath)) {
        postedFile.InputStream.CopyTo(fs);
        postedFile.InputStream.Close();
        postedFile.InputStream.Dispose();
        fs.Dispose();
        fs.Close();
    }
}
Deleting the files is quite simple. In a method called Delete, I call this:
...
File.Delete(HttpContext.Current.Server.MapPath(CORRECT_PATH_TO_FILE));
...
Any suggestions on how to solve this?
Thanks
Just as Michael Perrenoud suggested in the comment to my main question, I was also opening the file in another class and not disposing of it when done working with it. The problem is therefore solved. Thanks!
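For anyone who lands here with the same symptom, the fix boils down to wrapping that other access in a using block. A minimal sketch (filePath stands in for wherever the other class opened the file):
// Sketch of the fix: wherever the other class opens the file, wrap the
// stream in a using block so the OS handle is released promptly.
using (var stream = File.OpenRead(filePath))
{
    // ... work with the file ...
}   // handle released here; a later File.Delete(filePath) can succeed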
Where are you calling the file delete method? As part of the loop? If so, it is natural for the file to be locked. If outside of the loop, then it is a different problem (perhaps the stream has not been garbage collected yet?).
To avoid the loop problem, gather a list of locations you are going to delete (declared outside of the loop; it can be populated within) and then delete in another "clean up" loop (another method is even better for reusability), for example:
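A rough sketch of that pattern (files and filePath are carried over from the question for illustration):
// Collect paths while processing, delete in a separate clean-up pass.
var toDelete = new List<string>();   // declared outside the loop

for (int i = 0; i < files.Count; i++)
{
    // ... save/process files[i] to filePath ...
    toDelete.Add(filePath);          // remember the location for later
}

// Separate "clean up" loop: no stream from the loop above is open anymore.
foreach (string path in toDelete)
{
    File.Delete(path);
}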
NOTE: Close() before Dispose(), not the other way around. You actually do not have to do both, as Dispose() should always make sure everything is cleaned up (especially in .NET Framework implementations of IDisposable), but I don't see any harm in Close() followed by Dispose().
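For illustration, with a using block the question's save loop needs no explicit Close()/Dispose() calls at all:
// The using block alone flushes, closes, and disposes the stream.
using (FileStream fs = File.Create(filePath))
{
    postedFile.InputStream.CopyTo(fs);
}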
Related
I have a simple piece of code like so:
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
One would think that WriteAllBytes would be a blocking call, as it has async counterparts in C# 5.0 and nothing in the MSDN documentation states that it is non-blocking. HOWEVER, when a file is of a reasonable size (not massive, but somewhere in the realm of 20 MB), the call afterwards which opens the file seems to run before the writing is finished: the file is opened (the program complains it's corrupted, rightly so), and WriteAllBytes then complains the file is open in another process. What is going on here?! For curiosity's sake, this is the code used to open the file:
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
Has anyone experienced this sort of weirdness before? Or is it just me doing something wrong?
If it is indeed blocking, what could possibly be causing this issue?
EDIT: I'll put the full method up.
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
What seems to be happening is that Process.Start is being called BEFORE WriteAllBytes is finished, and it's attempting to open the file, and then WriteAllBytes complains about another process holding the lock on the file.
No, WriteAllBytes is a blocking, synchronous method. As you stated, if it were not, the documentation would say so.
Possibly the virus scanner is still busy scanning the file that you just wrote, and is responsible for locking the file. Try temporarily disabling the scanner to test my hypothesis.
I think your problem may be with the way you are reading from the file. Note that Stream.Read (and FileStream.Read) is not required to read all you request.
In other words, your call stuffFile.Read(0, node.FileHeader.FileSize) might (and sometimes definitely will) return an array of node.FileHeader.FileSize bytes which contains some bytes of the file at the beginning, followed by zeros.
The bug is in your UsableFileStream.Read method. You could fix it by having it read the entire file into memory:
public byte[] Read(int offset, int count)
{
    // There are still bugs in this method, like assuming that 'count' bytes
    // can actually be read from the file
    byte[] temp = new byte[count];
    int bytesRead;
    while (count > 0 && (bytesRead = _stream.Read(temp, offset, count)) > 0)
    {
        offset += bytesRead;
        count -= bytesRead;
    }
    return temp;
}
But since you are only using this to copy file contents, you could avoid having these potentially massive allocations and use Stream.CopyTo in your tree_MouseDoubleClick:
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
using (var output = File.Create(Path.Combine(temp, node.Name)))
    stuffFile._stream.CopyTo(output);
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
A little late, but adding for the benefit of anyone else that might come along.
The underlying C# implementation of File.WriteAllBytes may well be synchronous, but the authors of C# cannot control how the OS handles writing to disk.
Something called write caching means that when C# asks the OS to save the file to disk, the OS may report "I'm done" before the file is fully written to the disk, causing the issue the OP highlighted.
In that case, after writing, it may be better to sleep in a loop and keep checking whether the file is still locked before calling Process.Start.
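A hedged sketch of that idea: repeatedly try to open the file exclusively, and sleep between attempts (the helper name and retry limits here are invented for illustration):
// Returns true once the file can be opened exclusively, i.e. whoever held
// it (antivirus, indexer, cache flush, ...) has let go; false after maxAttempts.
static bool WaitForFileUnlock(string path, int maxAttempts = 10, int delayMs = 200)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                return true;   // exclusive open succeeded; nobody holds the file
        }
        catch (IOException)
        {
            System.Threading.Thread.Sleep(delayMs);   // still locked; back off and retry
        }
    }
    return false;   // give up; the caller decides what to do
}
Called between File.WriteAllBytes and Process.Start, something like this avoids launching the viewer on a file that is still held.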
You can see that I run into problems caused by this here: C#, Entity Framework Core & PostgreSql : inserting a single row takes 20+ seconds
Also, in the final sentence of the OP's post, "and then WriteAllBytes complains about another process holding the lock on the file", I think they actually meant to write "and then Process.Start complains", which seems to have caused some confusion in the comments.
After an image is uploaded to my server, my code moves it into a specific folder given by the user details. Sometimes I think it tries to move the file too fast, or the uploaded file is still in use, so 9 times out of 10 the function won't perform the move.
Is there a way to add a 'wait' or a way to check if a file is in use and possibly perform a while loop until the file is allowed to be moved?
Current move function in my controller:
while (!File.Exists(uploadedPath))
{
}
File.Move(uploadedPath, savePath);
PS. I intend to add in a counter to ensure the while loop doesn't get stuck and has a timeout.
If you have control over the code receiving the file, I would update it to notify the moving code when the file is received completely. Alternatively I would move the file from there or even save the file where it should be eventually.
Otherwise, it will be a hack. You need to:
1. Try to move the file.
2. Catch the exception if it doesn't move.
3. Use Thread.Sleep for a few seconds.
4. Go to 1.
Something along these lines:
bool success = false;
for (var count = 0; !success && count < 10; ++count)
{
    try
    {
        File.Move(uploadedPath, savePath);
        success = true;
    }
    catch (IOException)
    {
        Thread.Sleep(1000);
    }
}
You also need to handle the situation when it cannot move the file at all. So it is a hack and should not be done in general if there are other ways to notify the moving code.
Also note:
From the File.Move MSDN documentation:
"If you try to move a file across disk volumes and that file is in use, the file is copied to the destination, but it is not deleted from the source."
which means that your file will remain in the received files directory after moving.
Are UploadFile and MoveFile two different components that are independent of each other? If so, I don't think it's a good architecture. I would recommend having UploadFile pass control to MoveFile once its part is done. This way you can avoid multiple processes trying to access the same file.
I have a function that reads a file, adds some of its strings to a list, and returns this list. Because I wanted nobody and nothing to be able to change, delete, or otherwise touch the file while I was reading it, I locked it. Everything was fine; I did it like this:
public static List<string> Read(string myfile)
{
    using (FileStream fs = File.Open(myfile, FileMode.Open, FileAccess.Read, FileShare.None))
    {
        //read lines, add string to a list
        //return list
    }
}
That's fine. Now I have another function in another class that does stuff with the list, calls other functions, and so on. Sometimes I want to move the file that I was reading. And here is the problem: because I'm now in a new function, and the function Read(string myfile) has already returned, the file is no longer locked.
//in another class
public static void DoStuff(/*somefile*/)
{
    List<string> list = Read(/*somefile*/);
    //the file (somefile) is no longer locked!
    //do stuff
    if (something)
        Move(/*somefile*/); //could get an error; the file may no longer be there, or may have changed...
}
So another function/user could change the file, rename it, delete it, or whatever, and then I'm not able to move it. Or I will move the changed file, but I don't want that. If I used threading, another thread with the same function could lock the file again and I could not move it.
That's why I somehow need to lock this file for a longer time. Is there an easy way? Or do I have to replace my using (FileStream fs = File.Open(myfile, FileMode.Open, FileAccess.Read, FileShare.None)) code? Any suggestions? Thank you.
If you want to keep the file locked for longer, then you need to refactor your code so that the Stream object is kept around for longer. I would change the Read method to accept a FileStream, a little bit like this:
using (FileStream fs = File.Open(myfile, FileMode.Open, FileAccess.Read, FileShare.None))
{
    List<string> list = Read(fs);
    if (something)
    {
        File.Move(/* somefile */);
    }
}
The problem you are going to have is that the File.Move method is going to fail, as this file is already locked (by you, but File.Move doesn't know that).
Depending on what exactly you want to do, it might be possible to work out a way of keeping the file locked while also "moving" it (for example, if you know the destination in advance, you could open the file specifying FileOptions.DeleteOnClose and write a new file with the same contents in the desired destination). However, this isn't really the same as moving the file, so it all depends on what exactly you are trying to do.
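For illustration only, a sketch of that DeleteOnClose idea (sourcePath and destinationPath are placeholders; note that this copies the contents rather than performing a true move):
// The source stays exclusively locked the whole time; its contents are
// copied to the destination, and the OS deletes it when the handle closes.
using (var source = new FileStream(sourcePath, FileMode.Open, FileAccess.Read,
                                   FileShare.None, 4096, FileOptions.DeleteOnClose))
using (var destination = File.Create(destinationPath))
{
    source.CopyTo(destination);
}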
In general such things are almost always going to be more trouble than they are worth. You are better off just unlocking the file just before you move it and catching/handling any exception that is thrown as a result of the move.
The only way you could keep it locked is to keep it exclusively open, like you have done in your code.
Maybe you need to //do stuff within your using statement, and then straight after call Move
No amount of locking will prevent this. A lock only protects the data in the file. The user (or any other program) can still move or rename the file. The file's attributes, including name, time stamps and file attributes are stored separately and can be changed at will.
This is just something you'll have to deal with in any Windows program. It is rare enough that simply catching the exception is good enough to let you know that something happened to the file. The user will rarely be surprised. If you really need to know up front then you can use FileSystemWatcher to get a notification when it happens.
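A minimal FileSystemWatcher sketch (the path and filter are illustrative; requires System.IO):
var watcher = new FileSystemWatcher(@"C:\some\folder", "myfile.txt");
watcher.Deleted += (s, e) => Console.WriteLine(e.Name + " was deleted");
watcher.Renamed += (s, e) => Console.WriteLine(e.OldName + " renamed to " + e.Name);
watcher.EnableRaisingEvents = true;   // start raising events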
You are locking the file only when Read method is called.
If you want to keep it locked and release it only when you decide, make your methods OpenFile(string filename) and CloseFile(string filename). Then remove the using statement from Read method.
Open it when you start working (lock). Read it when you need it. When you have to move it, simply create a new file with the same name and copy the content. Close the original file (unlock) and delete it.
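Sketched out, that might look like the following (class and member names are invented; the leaveOpen StreamReader overload is .NET 4.5+):
// The FileStream is kept as a field, so the exclusive lock outlives
// any single Read call and is only released by an explicit CloseFile.
class LockedFile
{
    private FileStream _fs;

    public void OpenFile(string filename)    // acquire the lock
    {
        _fs = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.None);
    }

    public List<string> Read()
    {
        _fs.Position = 0;
        var lines = new List<string>();
        // leaveOpen: true keeps _fs (and the lock) alive after the reader is disposed
        using (var reader = new StreamReader(_fs, Encoding.UTF8, false, 1024, leaveOpen: true))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                lines.Add(line);
        }
        return lines;
    }

    public void CloseFile()                  // release the lock
    {
        _fs.Dispose();
    }
}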
I'm writing an application that manipulates a text file. The first half of my function reads the text file, while the second half writes to (optionally) the same file. Although I call Close() on the StreamReader object before opening the StreamWriter object, I still get an IOException: The process cannot access the file "file.txt" because it is being used by another process.
How do I force my program to release the file before continuing?
public static void manipulateFile(String fileIn, String fileOut, String obj)
{
    StreamReader sr = new StreamReader(fileIn);
    String line;
    while ((line = sr.ReadLine()) != null)
    {
        //code to split up file into part1, part2, and part3[]
    }
    sr.Close();
    //Write the file
    if (fileOut != null)
    {
        StreamWriter sw = new StreamWriter(fileOut);
        sw.Write(part1 + part2);
        foreach (String s in part3)
        {
            sw.WriteLine(s);
        }
        sw.Close();
    }
}
Your code as posted runs fine - I don't see the exception.
However, calling Close() manually like that is a bad idea: if an exception is thrown, your call to Close() might never be made. You should use a finally block, or better yet, a using statement.
using (StreamReader sr = new StreamReader(fileIn))
{
    // ...
}
But the actual problem you are experiencing might not be specific to this method; it may be a general problem of forgetting to wrap files in using blocks elsewhere. I suggest you go through your whole code base, look for all the places where you use IDisposable objects, and check that you dispose of them correctly even when there could be exceptions.
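For reference, the posted method rewritten with using blocks might look like this (the parsing is still elided, as in the question; part1, part2, and part3 are stand-ins):
public static void manipulateFile(String fileIn, String fileOut, String obj)
{
    String part1 = "", part2 = "";
    List<String> part3 = new List<String>();   // stand-ins for the elided parsing

    using (StreamReader sr = new StreamReader(fileIn))
    {
        String line;
        while ((line = sr.ReadLine()) != null)
        {
            //code to split up file into part1, part2, and part3
        }
    }   // reader closed here, even if an exception was thrown

    if (fileOut != null)
    {
        using (StreamWriter sw = new StreamWriter(fileOut))
        {
            sw.Write(part1 + part2);
            foreach (String s in part3)
            {
                sw.WriteLine(s);
            }
        }   // writer flushed and closed here
    }
}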
Getting read access to a file that's already opened elsewhere isn't usually difficult. Most code would open a file for reading with FileShare.Read, allowing somebody else to read the file as well. StreamReader does so for example.
Getting write access is an entirely different ball of wax. That same FileShare.Read does not include FileShare.Write, which would be what allows you to write the file while somebody else is reading it. That's very troublesome: you'd be jerking the mat out from under that somebody else, suddenly providing entirely different data.
All you have to do is find out who that "somebody else" might be. SysInternals' Handle utility can tell you. Hopefully it is your own program; then you can do something about it.
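To make the share modes concrete, a small sketch (path is illustrative):
// Typical reader: others may read the same file, but nobody may write it.
var readOnly = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);

// Tolerant reader: others may read AND write while we hold the file open.
var tolerant = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);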
May sound like a stupid question, but are you sure you didn't edit the file with another application which didn't release it? I've had this situation before, mostly with Excel files where Excel didn't completely unload from memory (or me just being dumb enough not to close the other application). It might happen with whatever application you use for .txt files, if any. Just a suggestion.
My program is unable to File.Move or File.Delete a file because it is being used "by another process", but it's actually my own program that is using it.
I use Directory.GetFiles to initially get the file paths, and from there I process the files by simply looking at their names and processing information that way. Consequently, all I'm doing is working with the strings themselves, right? Afterwards, I try to move the files to a "Handled" directory. Nearly all of them will usually move, but from time to time some simply won't, because they're being used by my program.
Why is it that most of them move but one or two stick around? Is there anything I can do to try freeing up the file? There's no streams to close.
Edit: Here's some code:
public object[] UnzipFiles(string[] zipFiles)
{
    ArrayList al = new ArrayList(); //not sure of proper array size, so using arraylist
    string[] files = null;
    for (int a = 0; a < zipFiles.Length; a++)
    {
        string destination = settings.GetTorrentSaveFolder() + @"\[CSL]--Temp\" + Path.GetFileNameWithoutExtension(zipFiles[a]) + @"\";
        try
        {
            fz.ExtractZip(zipFiles[a], destination, ".torrent");
            files = Directory.GetFiles(destination, "*.torrent", SearchOption.AllDirectories);
            for (int b = 0; b < files.Length; b++)
                al.Add(files[b]);
        }
        catch(Exception e)
        {}
    }
    try
    {
        return al.ToArray(); //return all files of all zips
    }
    catch (Exception e)
    {
        return null;
    }
}
This is called from:
try
{
    object[] rawFiles = directory.UnzipFiles(zipFiles);
    string[] files = Array.ConvertAll<object, string>(rawFiles, Convert.ToString);
    if (files != null)
    {
        torrents = builder.Build(files);
        xml.AddTorrents(torrents);
        directory.MoveProcessedFiles(xml);
        directory.MoveProcessedZipFiles();
    }
}
catch (Exception e)
{ }
Therefore, the builder builds objects of class Torrent. Then I add the Torrent objects to an XML file, which stores information about them, and then I try to move the processed files, using the XML file as a reference for where each file is.
Despite it all working fine for most of the files, eventually an IOException will be thrown here, about the file being used by another process:
public void MoveProcessedZipFiles()
{
    string[] zipFiles = Directory.GetFiles(settings.GetTorrentSaveFolder(), "*.zip", SearchOption.TopDirectoryOnly);
    if (!Directory.Exists(settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips"))
        Directory.CreateDirectory(settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips");
    for (int a = 0; a < zipFiles.Length; a++)
    {
        try
        {
            File.Move(zipFiles[a], settings.GetTorrentSaveFolder() + @"\[CSL] -- Processed Zips\" + zipFiles[a].Substring(zipFiles[a].LastIndexOf('\\') + 1));
        }
        catch (Exception e)
        {
        }
    }
}
Based on your comments, this really smells like a handle leak. Looking at your code, the fz.ExtractZip(...) call looks like the best candidate to be using file handles, and hence to be leaking them.
Is the type of fz part of your code, or a third-party library? If it's within your code, make sure it closes all its handles (the safest way is via using or try-finally blocks). If it's part of a third-party library, check the documentation and see if it requires any kind of cleanup. It's quite possible that it implements IDisposable; in such a case, put its usage within a using block or ensure it's properly disposed.
The line catch(Exception e) {} is horribly bad practice. You should only swallow exceptions this way when you know exactly what exception may be thrown and why you want to ignore it. If an exception your program can't handle happens, it's better for it to crash with a descriptive error message and valuable debug information (e.g. exception type, stack trace, etc.) than to ignore the issue and continue as if nothing had gone wrong, because an exception means that something has definitely gone wrong.
Long story short, the quickest approach to debugging your program would be to:
1. Replace your generic catchers with finally blocks.
2. Add/move any relevant cleanup code to the finally blocks.
3. Pay attention to any exception you get: where was it thrown from? What kind of exception is it? What do the documentation or code comments say about the method throwing it? And so on.
4. Either:
- If the type of fz is part of your code, look for leaks there.
- If it's part of a third-party library, review the documentation (and consider getting support from the author).
Hope this helps
What does this mean: "there are no streams to close"? Do you mean that you do not use streams, or that you close them?
I believe that you nevertheless have some open stream somewhere.
Do you have some static classes that use these files?
1. Try to write a simple application that will only parse, move, and delete the files, and see if this works.
2. Post here some pieces of the code that work with your files.
3. Use Unlocker to be doubly sure that nothing else is using those files: http://www.emptyloop.com/unlocker/ (don't forget to check files for viruses :))
The Path class was handling multiple files to get their filenames. Despite being unable to reproduce the same issue on demand, forcing a garbage collection using GC.Collect at the end of the "processing" phase of my program has been successful in fixing the issue.
Thanks again all who helped. I learned a lot.