I am trying to save an XDocument to a thumb drive that doesn't have enough free space available. (This is a special test condition for the app.) The application throws the exception below, but I can't catch it in the try/catch block around XDocument.Save(filePath). It looks like a delayed throw. Is this a LINQ to XML issue, or am I doing something wrong?
System.IO.IOException was unhandled
Message="There is not enough space on the disk.\r\n"
Source="mscorlib"
StackTrace:
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.FileStream.FlushWrite(Boolean calledFromFinalizer)
at System.IO.FileStream.Dispose(Boolean disposing)
at System.IO.FileStream.Finalize()
You found a bug in the framework. XDocument.Save(string) uses the "using" statement to ensure the output stream gets disposed. Which writer is used internally depends on the encoding you declared in the processing instruction, but the internal System.Xml.XmlUtf8RawTextWriter would be a common one to implement the text writer.
The bug: the Microsoft programmer who wrote that class forgot to implement the Dispose() method. Only the Close() method is implemented.
It is rather strange that this bug hasn't yet been reported on the connect.microsoft.com feedback site. It ought to cause trouble in general use, because the file stays open until the finalizer thread runs. Normally that doesn't take long, a couple of seconds or so. Except in your case, where you exit the program right after writing and have the bad luck to run out of disk space at the exact moment the buffer gets flushed.
A workaround for this bug is to use the XDocument.Save(TextWriter) overload instead, passing a StreamWriter whose Encoding matches the encoding of the XML.
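A sketch of that workaround; the document and the file path here are stand-ins, not the asker's actual code:

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml.Linq;

// Sketch: save through our own StreamWriter so the flush happens inside
// the try block, making the IOException catchable at the call site.
var doc = new XDocument(new XElement("root"));   // stand-in document
string filePath = @"E:\output.xml";              // hypothetical thumb-drive path

try
{
    using (var writer = new StreamWriter(filePath, false, Encoding.UTF8))
    {
        doc.Save(writer);
    }   // disposing the StreamWriter flushes here, while we can still catch
}
catch (IOException ex)
{
    Console.WriteLine("Save failed: " + ex.Message);
}
```

The key point is that the StreamWriter is flushed and closed deterministically inside the try block, rather than in a finalizer after the method has returned.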
Look at the stack trace. This trace starts with a Finalize call, which does a Dispose, which does a FlushWrite, which calls WriteCore, which gets the error.
In other words, flush your data first.
Post the code you use to write and we can show you where to do the flush.
Peeking into reflector, the last few lines are
using (XmlWriter writer = XmlWriter.Create(fileName, xmlWriterSettings))
{
this.Save(writer);
}
That means the exception is thrown when the writer is disposed.
I guess it would be better to check for available disk space before calling Save.
EDIT: Have you Disposed any object that the instance of XDocument depended on before making a call to Save?
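Checking free space before the save, as suggested, could look like the following; the drive letter and the size estimate are assumptions, and a check like this is only advisory (the space could still run out between the check and the write):

```csharp
using System;
using System.IO;

// Rough pre-flight check before calling Save. The serialized size isn't
// known exactly until the document is actually written, so estimate high.
var drive = new DriveInfo("E");          // hypothetical thumb-drive letter
long estimatedBytes = 5 * 1024 * 1024;   // assumed upper bound for the XML

if (!drive.IsReady || drive.AvailableFreeSpace < estimatedBytes)
{
    Console.WriteLine("Not enough free space on " + drive.Name);
}
```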
XDocument.Save(string) does not have a bug; the writer does get disposed. The using statement (as also described above) is:
using (XmlWriter writer = XmlWriter.Create(fileName, xmlWriterSettings))
this.Save(writer);
And XmlWriter does have a Dispose(); it implements the IDisposable interface.
Related
Is there any class in the .NET Framework which provides access to \\.\G: - style paths (i.e. raw volumes)?
We're currently doing this without any problem using p/invoked ReadFile and WriteFile, which is not complex for synchronous access, but it's tedious to add async read/write, because of all the care you need to take over pinning and handling the OVERLAPPED structure and managing the event object lifetime, etc. (i.e. all the tedious stuff we'd have to do in Win32 code...)
It's hard to prove you've got the interaction with the GC correct too, with any simple testing technique.
The FileStream class contains all this code in, no doubt, a completely bomb-proof and refined fashion, and taking advantage of lots of internal helpers which we can't use. Unfortunately FileStream explicitly stops you opening a raw volume, so we can't use it.
Is there anything else in framework which helps avoid writing this sort of code from scratch? I've poked about in Reference Source, but nothing leaps out.
Update - we had already tried the suggestion below to avoid the check on the path type by opening the device ourselves and passing in the handle. When we try this, it blows up with the following error (note that this trace goes through the constructor of FileStream - i.e. we don't get any chance to interact with the stream at all):
System.IO.IOException: The parameter is incorrect.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.SeekCore(Int64 offset, SeekOrigin origin)
at System.IO.FileStream..ctor(SafeFileHandle handle, FileAccess access, Int32 bufferSize, Boolean isAsync)
at OurApp.USBComms.UsbDevice..ctor(Char driveLetter) in
For reference, our CreateFile call looks like this:
var deviceName = String.Format(@"\\.\{0}:", driveLetter);
var handle = SafeNativeMethods.CreateFile(deviceName,
    0x80000000 | 0x40000000,     // GENERIC_READ | GENERIC_WRITE
    FileShare.ReadWrite,
    0,                           // no security attributes
    FileMode.Open,
    (uint)FileOptions.Asynchronous | 0x20000000, // last flag is FILE_FLAG_NO_BUFFERING
    IntPtr.Zero);
if (handle.IsInvalid)
{
throw new IOException("CreateFile Error: " + Marshal.GetLastWin32Error());
}
Update3: It turns out that (on a volume handle, anyway) you can't call SetFilePointer on a handle which has been opened with FILE_FLAG_OVERLAPPED. This makes some sense, as SetFilePointer is useless on files with any kind of multithreaded access anyway. Unfortunately, FileStream seems determined to call it during construction for some reason (still trying to track down why), which is what causes the failure.
As Sriram Sakthivel noted (+1), you could pass a SafeFileHandle to the FileStream constructor.
From your stack trace, I'm assuming you tried to seek the stream to an invalid position; raw disks / volumes have special rules about read/write positions.
For instance, you cannot start reading a disk in the middle of a sector (you'll have to read/seek in chunks of 512 bytes). Try reading at offset 0, for instance, to see if it works better.
I believe you can use this constructor of FileStream, passing the pre-opened file handle as a SafeFileHandle instance. With that you have a managed FileStream instance which you can use to issue async I/O operations.
public FileStream(SafeFileHandle handle, FileAccess access, int bufferSize, bool isAsync);
Also, when you're planning to do async I/O, don't forget to set the isAsync flag to true. Otherwise you won't get the full benefits of async I/O.
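A sketch of that suggestion, where handle is the SafeFileHandle returned by the CreateFile p/invoke shown in the question (as the question's updates note, the constructor can still fail on overlapped volume handles):

```csharp
// Wrap the pre-opened volume handle in a managed FileStream.
// isAsync: true must match FILE_FLAG_OVERLAPPED on the native handle.
using (var stream = new FileStream(handle, FileAccess.ReadWrite,
                                   bufferSize: 4096, isAsync: true))
{
    // Raw volumes require sector-aligned transfers (typically 512 bytes),
    // so read whole sectors starting at offset 0.
    var sector = new byte[512];
    int bytesRead = stream.Read(sector, 0, sector.Length);
}
```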
I have a simple piece of code like so:
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
One would think that WriteAllBytes would be a blocking call, as it has async counterparts in C# 5.0 and it doesn't state anywhere in the MSDN documentation that it is non-blocking. HOWEVER, when a file is of a reasonable size (not massive, somewhere in the realm of 20 MB), the call afterwards which opens the file seems to run before the writing is finished: the file is opened (the program complains it's corrupted, rightly so) and WriteAllBytes then complains the file is open in another process. What is going on here? For curiosity's sake, this is the code used to open the file:
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
Has anyone experienced this sort of weirdness before? Or am I doing something wrong?
If it is indeed blocking, what could possibly be causing this issue?
EDIT: I'll put the full method up.
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
File.WriteAllBytes(Path.Combine(temp, node.Name), stuffFile.Read(0, node.FileHeader.FileSize));
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
What seems to be happening is that Process.Start is being called BEFORE WriteAllBytes has finished; it attempts to open the file, and then WriteAllBytes complains about another process holding a lock on the file.
No, WriteAllBytes is a blocking, synchronous method. As you stated, if it were not, the documentation would say so.
Possibly a virus scanner is still busy scanning the file that you just wrote, and is responsible for locking it. Try temporarily disabling the scanner to test my hypothesis.
I think your problem may be with the way you are reading from the file. Note that Stream.Read (and FileStream.Read) is not required to read all you request.
In other words, your call stuffFile.Read(0, node.FileHeader.FileSize) might (and sometimes definitely will) return an array of node.FileHeader.FileSize bytes which contains only the first part of the file, followed by zeros.
The bug is in your UsableFileStream.Read method. You could fix it by having it read the entire file into memory:
public byte[] Read(int offset, int count)
{
// There are still bugs in this method, like assuming that 'count' bytes
// can actually be read from the file
byte[] temp = new byte[count];
int bytesRead;
while ( count > 0 && (bytesRead = _stream.Read(temp, offset, count)) > 0 )
{
offset += bytesRead;
count -= bytesRead;
}
return temp;
}
But since you are only using this to copy file contents, you could avoid having these potentially massive allocations and use Stream.CopyTo in your tree_MouseDoubleClick:
var node = item.Tag as FileNode;
stuffFile.Position = node.FileOffset;
string temp = Path.GetTempPath();
using (var output = File.Create(Path.Combine(temp, node.Name)))
stuffFile._stream.CopyTo(output);
System.Diagnostics.Process.Start(Path.Combine(temp, node.Name));
A little late, but adding for the benefit of anyone else that might come along.
The underlying C# implementation of File.WriteAllBytes may well be synchronous, but the authors of C# cannot control at the OS level how the writing to disk is handled.
Write caching means that when C# asks to save the file to disk, the OS may report "I'm done" before the file is fully written to the physical disk, causing the issue the OP highlighted.
In that case, after writing, it may be better to sleep in a loop and keep checking to see if the file is still locked before calling Process.Start.
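That wait-and-retry idea could be sketched like this; WaitForFileUnlock is a hypothetical helper, not a framework method:

```csharp
using System;
using System.IO;
using System.Threading;

// Hypothetical helper: wait until the freshly written file can be opened
// exclusively before handing it to Process.Start.
static bool WaitForFileUnlock(string path, int attempts = 10, int delayMs = 200)
{
    for (int i = 0; i < attempts; i++)
    {
        try
        {
            // FileShare.None means the open succeeds only if nothing
            // else (scanner, cache flusher) still holds the file.
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                return true;
        }
        catch (IOException)
        {
            Thread.Sleep(delayMs);   // still locked; back off and retry
        }
    }
    return false;   // give up after the configured number of attempts
}
```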
You can see that I run into problems caused by this here: C#, Entity Framework Core & PostgreSql : inserting a single row takes 20+ seconds
Also, in the final sentence of the OP's post, "and then WriteAllBytes complains about another process holding the lock on the file", I think they actually meant to write "and then Process.Start complains", which seems to have caused some confusion in the comments.
I ran into something interesting when using a StreamWriter with a FileStream to append text to an existing file in .NET 4.5 (haven't tried any older frameworks). I tried two ways, one worked and one didn't. I'm wondering what the difference between the two is.
Both methods contained the following code at the top
if (!File.Exists(filepath))
using (File.Create(filepath));
I wrap the creation in a using statement because I've found through personal experience that it's the most reliable way to ensure the application fully closes the file.
Non-Working Method:
using (FileStream f = new FileStream(filepath, FileMode.Append,FileAccess.Write))
(new StreamWriter(f)).WriteLine("somestring");
With this method nothing ends up being appended to the file.
Working Method:
using (FileStream f = new FileStream(filepath, FileMode.Append,FileAccess.Write))
using (StreamWriter s = new StreamWriter(f))
s.WriteLine("somestring");
I've done a bit of Googling, without quite knowing what to search for, and haven't found anything informative. So why is it that the anonymous StreamWriter fails where the named StreamWriter works?
It sounds like you did not flush the stream.
http://msdn.microsoft.com/en-us/library/system.io.stream.flush.aspx
It looks like StreamWriter writes to a buffer before writing to the final destination, in this case, the file. You may also be able to set the AutoFlush property and not have to explicitly flush it.
http://msdn.microsoft.com/en-us/library/system.io.streamwriter.autoflush.aspx
To answer your question: when you use the "using" block, it calls Dispose on the StreamWriter, which in turn calls Flush.
The difference between the two code snippets is the use of using. The using statement disposes the object at the end of the block.
A StreamWriter buffers data before writing it to the underlying stream. Disposing the StreamWriter flushes the buffer. If you don't flush the buffer, nothing gets written.
From MSDN:
You must call Close to ensure that all data is correctly written out to the underlying stream.
See also: When should I use “using” blocks in C#?
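The buffering behavior can be demonstrated in isolation; the temp-file path here is arbitrary:

```csharp
using System.IO;

string path = Path.Combine(Path.GetTempPath(), "flushdemo.txt");

using (var f = new FileStream(path, FileMode.Create, FileAccess.Write))
{
    var w = new StreamWriter(f);
    w.WriteLine("somestring");   // sits in the StreamWriter's buffer
    w.Flush();                   // without this (or disposing 'w'),
}                                // the file may end up empty
```

This mirrors the question's non-working method: the StreamWriter was never flushed or disposed, so its buffer never reached the FileStream.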
Is it true that this does not necessarily mean the stream was disposed by my own code, either in a using block or by calling Dispose?
Could the stream have been closed outside of this code, with this exception still occurring?
So I will make my comment an answer: Yes, a stream could just as well be closed from outside your code, so make sure you check for a System.ObjectDisposedException.
There are several occasions this could happen: imagine for example a stream associated with a network connection and the connection is suddenly interrupted. Depending on the implementation this could close the stream and throw that particular exception if the stream is accessed.
The stream could have been closed outside of this code and this exception would still occur?
Yes. For example - This can happen if you wrap a stream within another stream, and dispose of the "wrapper" stream. Many implementations dispose of the stream they are wrapping.
If you then try to write to the "wrapped" stream, you'll receive this error message.
either in a using or by calling dispose.
Also realize that, for objects which have a Close() method, such as Stream, Close and Dispose typically perform the same function. Closing a stream also disposes of it.
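A minimal illustration of a wrapper disposing the stream it wraps:

```csharp
using System;
using System.IO;

var inner = new MemoryStream();
using (var wrapper = new StreamWriter(inner))
{
    wrapper.Write("hello");
}                            // disposing the wrapper also disposes 'inner'

try
{
    inner.WriteByte(0);      // writing to the wrapped stream now fails
}
catch (ObjectDisposedException)
{
    Console.WriteLine("the wrapper disposed the inner stream");
}
```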
This error can also happen if the requestLengthDiskThreshold is smaller than the size of the file you are trying to upload/handle via the stream. This is defined in your web.config:
<httpRuntime maxRequestLength="512000" requestLengthDiskThreshold="512000" />
If you look at the explanation of the second parameter here:
https://msdn.microsoft.com/en-us/library/system.web.configuration.httpruntimesection.requestlengthdiskthreshold(v=vs.110).aspx
you will see that it sets the input-stream buffering threshold (in kilobytes). The default value is 80 KB, so if you don't set this value and you try, for example, to AJAX-upload a file bigger than 80 KB, you will get the System.ObjectDisposedException exception, since the stream will be closed once the threshold limit is met.
In my case, I'm setting the threshold to 500 MB.
A few days ago I posted some code like this:
StreamWriter writer = new StreamWriter(Response.OutputStream);
writer.WriteLine("col1,col2,col3");
writer.WriteLine("1,2,3");
writer.Close();
Response.End();
I was told that instead I should wrap StreamWriter in a using block in case of exceptions. Such a change would make it look like this:
using(StreamWriter writer = new StreamWriter(Response.OutputStream))
{
writer.WriteLine("col1,col2,col3");
writer.WriteLine("1,2,3");
writer.Close(); //not necessary I think... end of using block should close writer
}
Response.End();
I am not sure why this is a valuable change. If an exception occurred without the using block, the writer and response would still be cleaned up, right? What does the using block gain me?
No, the stream would stay open in the first example, since the exception would skip the call that closes it.
The using statement forces a call to Dispose(), which is supposed to clean the object up and close all open connections when the block exits.
I'm going to give the dissenting opinion. The answer to the specific question "Is it necessary to wrap StreamWriter in a using block?" is actually No. In fact, you should not call Dispose on a StreamWriter, because its Dispose is badly designed and does the wrong thing.
The problem with StreamWriter is that, when you Dispose it, it Disposes the underlying stream. If you created the StreamWriter with a filename, and it created its own FileStream internally, then this behavior would be totally appropriate. But if, as here, you created the StreamWriter with an existing stream, then this behavior is absolutely The Wrong Thing(tm). But it does it anyway.
Code like this won't work:
var stream = new MemoryStream();
using (var writer = new StreamWriter(stream)) { ... }
stream.Position = 0;
using (var reader = new StreamReader(stream)) { ... }
because when the StreamWriter's using block Disposes the StreamWriter, that will in turn throw away the stream. So when you try to read from the stream, you get an ObjectDisposedException.
StreamWriter is a horrible violation of the "clean up your own mess" rule. It tries to clean up someone else's mess, whether they wanted it to or not.
(Imagine if you tried this in real life. Try explaining to the cops why you broke into someone else's house and started throwing all their stuff into the trash...)
For that reason, I consider StreamWriter (and StreamReader, which does the same thing) to be among the very few classes where "if it implements IDisposable, you should call Dispose" is wrong. Never call Dispose on a StreamWriter that was created on an existing stream. Call Flush() instead.
Then just make sure you clean up the Stream when you should. (As Joe pointed out, ASP.NET disposes the Response.OutputStream for you, so you don't need to worry about it here.)
Warning: if you don't Dispose the StreamWriter, then you do need to call Flush() when you're done writing. Otherwise you could have data still being buffered in memory that never makes it to the output stream.
My rule for StreamReader is, pretend it doesn't implement IDisposable. Just let it go when you're done.
My rule for StreamWriter is, call Flush where you otherwise would have called Dispose. (This means you have to use a try..finally instead of a using.)
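Following that rule, the original snippet becomes a try/finally around Flush; a MemoryStream stands in here for the externally owned Response.OutputStream:

```csharp
using System.IO;

Stream output = new MemoryStream();   // stands in for Response.OutputStream

var writer = new StreamWriter(output);
try
{
    writer.WriteLine("col1,col2,col3");
    writer.WriteLine("1,2,3");
}
finally
{
    writer.Flush();   // push buffered data without disposing the stream
}

// 'output' is still open and usable; its real owner disposes it later.
output.Position = 0;
```

(.NET 4.5 later added StreamWriter constructor overloads with a leaveOpen parameter, which make the using pattern safe on externally owned streams.)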
If an exception occurs without the using block and kills the program, you will be left with open connections. The using block will always close the connection for you, similar to a try{} catch{} finally{}.
Eventually, the writer will be cleaned up. When that happens is up to the garbage collector, which will notice that the writer was never disposed and run its finalizer. Of course, the GC may not run for minutes, hours or days depending on the situation, and if the writer is holding an exclusive lock on, say, a file, no other process will be able to open it, even though you're long finished with it.
The using block ensures the Dispose call is always made, and hence that the Close is always called, regardless of what control flow occurs.
Wrapping the StreamWriter in a using block is pretty much equivalent of the following code:
StreamWriter writer = null;
try
{
    writer = new StreamWriter(Response.OutputStream);
    writer.WriteLine("col1,col2,col3");
    writer.WriteLine("1,2,3");
}
finally
{
    if (writer != null)
    {
        writer.Close();
    }
}
While you could very well write this code yourself, it is so much easier to just put it in a using block.
In my opinion it's necessary to wrap any class that implements IDisposable in a using block. The fact that a class implements IDisposable means that the class has resources that need to be cleaned up.
My rule of thumb is: if I see Dispose listed in IntelliSense, I wrap the object in a using block.
The using block calls Dispose() when it ends. It's just a handy way of ensuring that resources are cleaned up in a timely manner.
In almost every case, if a class implements IDisposable, and if you're creating an instance of that class, then you need the using block.
While it's good practice to always dispose disposable classes such as StreamWriter, as others point out, in this case it doesn't matter.
Response.OutputStream will be disposed by the ASP.NET infrastructure when it's finished processing your request.
StreamWriter assumes it "owns" the Stream passed to its constructor, and will therefore close the stream when it's disposed. But in the sample you provided, the stream was instantiated outside your code, so there will be another owner responsible for the clean-up.