Low-level file access with .NET - C#

Is there any class in the .NET Framework which provides access to \\.\G:-style paths (i.e. raw volumes)?
We're currently doing this without any problem using p/invoked ReadFile and WriteFile, which is not complex for synchronous access, but it's tedious to add async read/write, because of all the care you need to take over pinning and handling the OVERLAPPED structure and managing the event object lifetime, etc. (i.e. all the tedious stuff we'd have to do in Win32 code...)
It's hard to prove you've got the interaction with the GC correct too, with any simple testing technique.
The FileStream class contains all this code in, no doubt, a completely bomb-proof and refined fashion, taking advantage of lots of internal helpers which we can't use. Unfortunately FileStream explicitly stops you opening a raw volume, so we can't use it.
Is there anything else in the framework which helps avoid writing this sort of code from scratch? I've poked about in Reference Source, but nothing leaps out.
Update - we had already tried the suggestion below to avoid the check on the path type by opening the device ourselves and passing in the handle. When we try this, it blows up with the following error (note that this trace goes through the constructor of FileStream - i.e. we don't get any chance to interact with the stream at all):
System.IO.IOException: The parameter is incorrect.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.SeekCore(Int64 offset, SeekOrigin origin)
at System.IO.FileStream..ctor(SafeFileHandle handle, FileAccess access, Int32 bufferSize, Boolean isAsync)
at OurApp.USBComms.UsbDevice..ctor(Char driveLetter) in
For reference, our CreateFile call looks like this:
var deviceName = String.Format(@"\\.\{0}:", driveLetter);
var handle = SafeNativeMethods.CreateFile(deviceName,
    0x80000000 | 0x40000000,                     // GENERIC_READ | GENERIC_WRITE
    FileShare.ReadWrite,
    0,
    FileMode.Open,
    (uint)FileOptions.Asynchronous | 0x20000000, // Last option is 'FILE_FLAG_NO_BUFFERING'
    IntPtr.Zero);
if (handle.IsInvalid)
{
    throw new IOException("CreateFile Error: " + Marshal.GetLastWin32Error());
}
Update3: It turns out that (on a volume handle, anyway) you can't call SetFilePointer on a handle which has been opened with FILE_FLAG_OVERLAPPED. This makes some sense, as SetFilePointer is useless on files with any kind of multithreaded access anyway. Unfortunately FileStream seems determined to call it during construction for some reason (still trying to track down why), which is what causes the failure.

As Sriram Sakthivel noted (+1), you could pass a SafeFileHandle to the FileStream constructor.
From your stack trace I'm assuming you tried to seek the stream to an invalid position; raw disks/volumes have special rules about read/write positions.
For instance, you cannot start reading a disk in the middle of a sector (you have to read/seek in chunks of 512 bytes). Try reading at offset 0, for instance, to see if it works better.
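As an illustration, a minimal sketch of a sector-aligned read (a 512-byte sector size is assumed here; real code should query the device for the actual value, e.g. via IOCTL_DISK_GET_DRIVE_GEOMETRY):
// Sketch: on a raw volume both the read offset and the read length
// must be multiples of the sector size (512 assumed, not queried).
static byte[] ReadFirstSector(Stream volume)
{
    const int SectorSize = 512;
    var buffer = new byte[SectorSize];
    volume.Seek(0, SeekOrigin.Begin);              // offset 0 is always aligned
    int read = volume.Read(buffer, 0, SectorSize); // full-sector read
    if (read != SectorSize)
        throw new IOException("Short read from volume");
    return buffer;
}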

I believe you can use this constructor of FileStream, passing the pre-opened file handle as a SafeFileHandle instance. With that you have a managed FileStream instance which you can use to issue async I/O operations.
public FileStream(SafeFileHandle handle, FileAccess access, int bufferSize, bool isAsync);
Also, when you're planning to do async I/O, don't forget to set the isAsync flag to true. Otherwise you won't get the full benefits of async I/O.
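A minimal sketch of that approach, reusing the question's own SafeNativeMethods.CreateFile P/Invoke (which returns a SafeFileHandle) and its flags; note the question's Update shows this can still throw if the handle is opened overlapped:
// Wrap a raw-volume handle in a FileStream for overlapped I/O.
var handle = SafeNativeMethods.CreateFile(@"\\.\G:",
    0x80000000 | 0x40000000,                     // GENERIC_READ | GENERIC_WRITE
    FileShare.ReadWrite,
    0,
    FileMode.Open,
    (uint)FileOptions.Asynchronous | 0x20000000, // FILE_FLAG_NO_BUFFERING
    IntPtr.Zero);

using (var stream = new FileStream(handle, FileAccess.ReadWrite, 4096, true))
{
    // Issue Begin/End reads and writes here, sector-aligned as noted above.
}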

Related

Windows service: using of BitmapEncoder or BitmapDecoder ends with «The operation completed successfully»

I am facing a problem that I am not able to solve or google a solution for anywhere.
I am running a service that loads and saves images using the BitmapEncoder and BitmapDecoder classes. After some time (depending on how often I save/load images) the service refuses to save/load images. First I see a warning in the event log:
heap allocation failed
I googled what this means, and it has to do with the limited number of GDI objects available to a Windows service. It's possible to modify the registry to increase the number of these objects, but I don't think that's a very nice solution, and it also doesn't work for me.
My code throws the following exception with stack trace when saving:
Error while storing image : System.ComponentModel.Win32Exception (0x80004005): The operation completed successfully
at MS.Win32.HwndWrapper..ctor(Int32 classStyle, Int32 style, Int32 exStyle, Int32 x, Int32 y, Int32 width, Int32 height, String name, IntPtr parent, HwndWrapperHook[] hooks)
at System.Windows.Threading.Dispatcher..ctor()
at System.Windows.Threading.DispatcherObject..ctor()
at System.Windows.Media.Imaging.BitmapEncoder..ctor(Boolean isBuiltIn)
at Imaging.TiffReadWrite.Save(String filename, Image img)
and when loading:
Error while loading image : System.ComponentModel.Win32Exception (0x80004005): The operation completed successfully
at MS.Win32.HwndWrapper..ctor(Int32 classStyle, Int32 style, Int32 exStyle, Int32 x, Int32 y, Int32 width, Int32 height, String name, IntPtr parent, HwndWrapperHook[] hooks)
at System.Windows.Threading.Dispatcher..ctor()
at System.Windows.Threading.DispatcherObject..ctor()
at System.Windows.Media.Imaging.BitmapDecoder..ctor(Stream bitmapStream, BitmapCreateOptions createOptions, BitmapCacheOption cacheOption, Guid expectedClsId)
at Imaging.TiffReadWrite.Load(String filename)
My code for saving images looks like:
public static void Save(string filename, BitmapSource img)
{
    using (FileStream stream = new FileStream(filename, FileMode.Create))
    {
        TiffBitmapEncoder encoder = new TiffBitmapEncoder();
        encoder.Compression = TiffCompressOption.None;
        BitmapFrame frm = BitmapFrame.Create(img);
        encoder.Frames.Add(frm);
        encoder.Save(stream);
    }
}
and for loading images looks like:
public static BitmapSource Load(string filename)
{
    BitmapSource resultImage = null;
    using (Stream imSource = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        var decoder = new TiffBitmapDecoder(imSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
        resultImage = decoder.Frames[0];
    }
    return resultImage;
}
So, the service refuses to save/load images. I can try/catch this exception so the service keeps running, but then no images can be saved or loaded. Sometimes, after the first occurrence of this exception, a few images can still be saved/loaded, and after a while no saving/loading succeeds at all.
My only workaround for this problem is to run this code not in a service but in an application; then it runs just fine, but that is not the solution I am looking for. If anyone has a better suggestion please let me know.
There are some similar posts (the stack trace of the exception is more or less the same) that are not actually solved:
Image Resizing : The operation completed successfully
Does that mean an object doesn't need to be cleared manually if it doesn't implement IDisposable?
Windows.Media.Imaging Thumbnail generation causing exceptions
The operation completed successfully
This mystifying message narrows down the exact code in the HwndWrapper constructor that fails. WPF has a bug in the GetStockObject pinvoke declaration: its SetLastError = true property is wrong, because GetStockObject() does not in fact produce an error code. You are seeing the description of error code 0, "nothing went wrong".
GetStockObject() is a winapi function that never fails if it gets the correct argument. Stock objects are pre-allocated and never released. So you have very strong evidence that the process state is thoroughly corrupted. Seeing a "heap allocation failed" message in the event log is certainly part of that misery.
If you have no idea what could cause this corruption, the machine is known-good with reliable RAM, you are not running any dangerous native code, and the machine is not running any other services that could corrupt the desktop heap, then the only alternative you have is to create a minidump of the crashed process. Call Microsoft Support; they can follow the trace from the GetStockObject() failure. Do beware that you'll have to get through the first support layers, the ones that will tell you to swap the machine out :)
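For reference, a corrected declaration of the kind described above might look like this (just a sketch; WPF's actual declaration is internal to the framework):
// GetStockObject reports failure by returning NULL and never sets a
// Win32 error code, so SetLastError must not be specified here.
[DllImport("gdi32.dll")]
internal static extern IntPtr GetStockObject(int fnObject);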

How to disable the disk cache in C#: invoking the Win32 CreateFile API with FILE_FLAG_NO_BUFFERING

Everyone, I have to write a lot of files to disk every second, and I want to disable the disk cache to improve performance. A Google search turned up a solution: the Win32 CreateFile method with FILE_FLAG_NO_BUFFERING, and How to empty/flush Windows READ disk cache in C#?.
I wrote a little code to test whether this can work:
const int FILE_FLAG_NO_BUFFERING = unchecked((int)0x20000000);

[DllImport("KERNEL32", SetLastError = true, CharSet = CharSet.Auto, BestFitMapping = false)]
static extern SafeFileHandle CreateFile(
    String fileName,
    int desiredAccess,
    System.IO.FileShare shareMode,
    IntPtr securityAttrs,
    System.IO.FileMode creationDisposition,
    int flagsAndAttributes,
    IntPtr templateFile);

static void Main(string[] args)
{
    var handler = CreateFile(@"d:\temp.bin", (int)FileAccess.Write, FileShare.None,
        IntPtr.Zero, FileMode.Create, FILE_FLAG_NO_BUFFERING, IntPtr.Zero);
    var stream = new FileStream(handler, FileAccess.Write, BlockSize); // BlockSize = 4096
    byte[] array = Encoding.UTF8.GetBytes("hello,world");
    stream.Write(array, 0, array.Length);
    stream.Close();
}
When running this program, the application gets an exception: "IO operation will not work. Most likely the file will become too long or the handle was not opened to support synchronous IO operations".
Later I found this article: When you create an object with constraints, you have to make sure everybody who uses the object understands those constraints. I can't fully understand it, so I changed my code to test:
var stream = new FileStream(handler, FileAccess.Write, 4096);
byte[] ioBuffer = new byte[4096];
byte[] array = Encoding.UTF8.GetBytes("hello,world");
Array.Copy(array, ioBuffer, array.Length);
stream.Write(ioBuffer, 0, ioBuffer.Length);
stream.Close();
This runs OK, but I want only the "hello,world" bytes written, not the whole buffer. I tried changing the block size to 1 or some other integer (not a multiple of 512) and got the same error. I also tried the Win32 WriteFile API and got the same error. Can someone help me?
The CreateFile() function in no-buffering mode imposes strict requirements on what may and may not be done. Having a buffer whose size is a multiple of the device sector size is one of them.
Also, you can only improve file writes this way if you use buffering in your own code. If you want to write just 10 bytes without buffering, no-buffering mode won't help you.
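As an illustration, a minimal sketch of the padding this implies, reusing the stream from the question (a 512-byte sector size is assumed; real code should query the device, and should record the true payload length somewhere, since the tail of the last sector is wasted):
// Pad the payload up to a whole number of sectors before writing:
// FILE_FLAG_NO_BUFFERING requires sector-sized, sector-aligned writes.
const int SectorSize = 512; // assumed, not queried
byte[] payload = Encoding.UTF8.GetBytes("hello,world");
int padded = ((payload.Length + SectorSize - 1) / SectorSize) * SectorSize;
byte[] ioBuffer = new byte[padded];
Array.Copy(payload, ioBuffer, payload.Length);
stream.Write(ioBuffer, 0, ioBuffer.Length); // one full 512-byte sector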
If I understood your requirements correctly, this is what I'd try first (a rough sketch follows at the end of this answer):
Create a queue of objects that hold the data in memory and the target file on the disk.
Start by writing the files just into memory; then, on another thread, start going through the queue, opening IO-completion-port based FileStream handles (isAsync = true). Just don't open too many of them at once, as at some point you'll probably start losing performance due to cache thrashing etc. You need to profile to see what amount is optimal for your system and SSDs.
After each open, you can use the async FileStream Begin... methods to start writing data from memory to the files. isAsync imposes some requirements, so this may not be as easy to get working in every corner case as using FileStream normally.
As for whether there will be any improvement from using one thread to create the files and another to write to them with the async API: that is probably only the case if creating/opening the files can block. SSDs do various things internally to keep access to data fast, so when you start doing this sort of extreme performance work there may be pronounced differences between SSD controllers. It's also possible that, if the controller drivers aren't well implemented, the OS/Windows may start to feel sluggish or even freeze. Hardware benchmark sites don't really stress this particular scenario (e.g. create and write x KB into a million files ASAP), and no doubt some drivers out there are slower than others.
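The promised sketch of the queue idea (WriteJob, QueuedWriter and the queue layout are illustrative names, not an existing API; BlockingCollection stands in for whatever queue you use):
using System;
using System.Collections.Concurrent;
using System.IO;

sealed class WriteJob
{
    public string Path;   // target file on disk
    public byte[] Data;   // file contents already held in memory
}

static class QueuedWriter
{
    // Consumer loop: run this on its own thread.
    public static void WriterLoop(BlockingCollection<WriteJob> queue)
    {
        foreach (var job in queue.GetConsumingEnumerable())
        {
            // useAsync: true requests an IO-completion-port based handle.
            var fs = new FileStream(job.Path, FileMode.Create, FileAccess.Write,
                                    FileShare.None, 4096, useAsync: true);
            fs.BeginWrite(job.Data, 0, job.Data.Length, ar =>
            {
                var s = (FileStream)ar.AsyncState;
                s.EndWrite(ar);
                s.Dispose(); // close only once the overlapped write completes
            }, fs);
            // NB: this loop can race ahead of the disk; real code should cap
            // the number of in-flight writes to avoid exhausting handles.
        }
    }
}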

Memory exception while XDocument.Save()

I am trying to save an XDocument to a thumb drive which doesn't have enough space available (this is a special test condition for the app). Though the application throws an exception like the one below, I can't catch it in the try/catch block around XDocument.Save(filePath). It looks like a delayed throw. Is it a LINQ issue, or am I doing something wrong?
System.IO.IOException was unhandled
Message="There is not enough space on the disk.\r\n"
Source="mscorlib"
StackTrace:
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.FileStream.FlushWrite(Boolean calledFromFinalizer)
at System.IO.FileStream.Dispose(Boolean disposing)
at System.IO.FileStream.Finalize()
You found a bug in the framework. XDocument.Save(string) uses the "using" statement to ensure the output stream gets disposed. It depends on the encoding you used in the processing instruction, but the internal System.Xml.XmlUtf8RawTextWriter would be a common one to implement the text writer.
The bug: the Microsoft programmer who wrote that class forgot to implement the Dispose() method. Only the Close() method is implemented.
It is rather strange that this bug wasn't yet reported at the connect.microsoft.com feedback site. It ought to cause trouble in general use because the file stays open until the finalizer thread runs. Normally that doesn't take long, a couple of seconds or so. Except in your case, where you exit the program right after writing and have the unfortunate luck to run out of disk space at the exact moment the buffer gets flushed.
A workaround for this bug is to use the XDocument.Save(TextWriter) overload instead, passing a StreamWriter whose Encoding matches the encoding of the XML.
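A minimal sketch of that workaround, assuming the document declares UTF-8: because we own the StreamWriter, the flush now happens inside our using block, so a disk-full IOException surfaces where we can catch it.
using System;
using System.IO;
using System.Text;
using System.Xml.Linq;

static void SaveSafely(XDocument doc, string filePath)
{
    try
    {
        using (var writer = new StreamWriter(filePath, false, Encoding.UTF8))
        {
            doc.Save(writer);
        } // the StreamWriter flushes and closes here, inside the try block
    }
    catch (IOException ex)
    {
        // "There is not enough space on the disk." now lands here
        Console.Error.WriteLine("Save failed: " + ex.Message);
        throw;
    }
}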
Look at the stack trace. This trace starts with a Finalize call, which does a Dispose, which does a FlushWrite, which calls WriteCore, which gets the error.
In other words, flush your data first.
Post the code you use to write and we can show you where to do the flush.
Peeking into Reflector, the last few lines are:
using (XmlWriter writer = XmlWriter.Create(fileName, xmlWriterSettings))
{
    this.Save(writer);
}
That means the exception is thrown when the writer is disposed.
I guess it would be better to check for available disk space before calling Save.
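A rough guard of that sort (race-prone, since free space can vanish between the check and the write, so keep a try/catch around Save as well; estimatedSize is a value you'd supply):
// Best-effort pre-check before XDocument.Save.
var drive = new DriveInfo(Path.GetPathRoot(filePath));
if (drive.AvailableFreeSpace < estimatedSize)
    throw new IOException("Not enough space on " + drive.Name);
doc.Save(filePath);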
EDIT: Have you Disposed any object that the instance of XDocument depended on before making a call to Save?
XDocument.Save(string) does not have a bug; it does dispose the writer. The using statement is (as also described above):
using (XmlWriter writer = XmlWriter.Create(fileName, xmlWriterSettings))
    this.Save(writer);
And XmlWriter does have a Dispose(); it implements the IDisposable interface.

File.Copy vs. Manual FileStream.Write For Copying File

My problem regards file copying performance. We have a media management system that requires a lot of moving files around on the file system to different locations, including Windows shares on the same network, FTP sites, Amazon S3, etc. When we were all on one Windows network we could get away with using System.IO.File.Copy(source, destination) to copy a file. Since many times all we have is an input Stream (like a MemoryStream), we tried abstracting the copy operation to take an input Stream and an output Stream, but we are seeing a massive performance decrease. Below is some code for copying a file to use as a discussion point.
public void Copy(System.IO.Stream inStream, string outputFilePath)
{
    int bufferSize = 1024 * 64;
    using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
    {
        int bytesRead = -1;
        byte[] bytes = new byte[bufferSize];
        while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
        {
            fileStream.Write(bytes, 0, bytesRead);
            fileStream.Flush();
        }
    }
}
Does anyone know why this performs so much more slowly than File.Copy? Is there anything I can do to improve performance? Am I just going to have to add special logic to detect when I'm copying from one Windows location to another, in which case I'd use File.Copy, and use the streams in all other cases?
Please let me know what you think and whether you need additional information. I have tried different buffer sizes, and it seems a 64 KB buffer is optimal for our "small" files while 256 KB+ is better for our "large" files, but in either case it performs much worse than File.Copy(). Thanks in advance!
File.Copy was built around the CopyFile Win32 function, and this function gets a lot of attention from the MS crew (remember the Vista-related threads about slow copy performance).
Several clues for improving the performance of your method:
Like many have said, remove the Flush call from your loop. You do not need it at all.
Increasing the buffer may help, but only for file-to-file operations; for network shares or FTP servers it will slow things down instead. 60 * 1024 is ideal for network shares, at least before Vista. For FTP, 32 KB will be enough in most cases.
Help the OS by declaring your caching strategy (in your case, sequential reading and writing): use the FileStream constructor overload with a FileOptions parameter (FileOptions.SequentialScan); see the constructor sketch after the async example below.
You can speed up copying by using an asynchronous pattern (especially useful for network-to-file cases), but do not use threads for this; instead use overlapped IO (BeginRead, EndRead, BeginWrite, EndWrite in .NET), and do not forget to set the Asynchronous option in the FileStream constructor (see FileOptions).
Example of the asynchronous copy pattern (sourceStream and destStream must both be opened with the Asynchronous option; the buffer declarations are added here for completeness):
// Double-buffered overlapped copy: read into one buffer while the
// previous buffer is still being written.
byte[] ActiveBuffer = new byte[64 * 1024];
byte[] BackBuffer = new byte[64 * 1024];
int Readed;

IAsyncResult ReadResult = sourceStream.BeginRead(ActiveBuffer, 0, ActiveBuffer.Length, null, null);
do
{
    Readed = sourceStream.EndRead(ReadResult);
    IAsyncResult WriteResult = destStream.BeginWrite(ActiveBuffer, 0, Readed, null, null);
    if (Readed > 0)
    {
        // Kick off the next read on the back buffer while the write is
        // in flight, then swap the two buffers.
        ReadResult = sourceStream.BeginRead(BackBuffer, 0, BackBuffer.Length, null, null);
        BackBuffer = Interlocked.Exchange(ref ActiveBuffer, BackBuffer);
    }
    destStream.EndWrite(WriteResult);
}
while (Readed > 0);
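For point 3 above, a minimal sketch of opening the streams with FileOptions hints (the paths and buffer sizes are placeholders):
// SequentialScan tells the cache manager we read front-to-back once;
// Asynchronous is required for the overlapped Begin/End pattern above.
var sourceStream = new FileStream(sourcePath, FileMode.Open, FileAccess.Read,
    FileShare.Read, 64 * 1024,
    FileOptions.SequentialScan | FileOptions.Asynchronous);
var destStream = new FileStream(destPath, FileMode.Create, FileAccess.Write,
    FileShare.None, 64 * 1024, FileOptions.Asynchronous);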
Three changes will dramatically improve performance:
Increase your buffer size; try 1 MB (just experiment).
After you open your fileStream, call fileStream.SetLength(inStream.Length) to allocate the entire block on disk up front (only works if inStream is seekable).
Remove fileStream.Flush(); it is redundant and probably has the single biggest impact on performance, as it blocks until the flush is complete. The stream will be flushed anyway on dispose.
This seemed about 3-4 times faster in the experiments I tried:
public static void Copy(System.IO.Stream inStream, string outputFilePath)
{
    int bufferSize = 1024 * 1024;
    using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
    {
        fileStream.SetLength(inStream.Length);
        int bytesRead = -1;
        byte[] bytes = new byte[bufferSize];
        while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
        {
            fileStream.Write(bytes, 0, bytesRead);
        }
    }
}
Dusting off Reflector, we can see that File.Copy actually calls the Win32 API:
if (!Win32Native.CopyFile(fullPathInternal, dst, !overwrite))
Which resolves to
[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
internal static extern bool CopyFile(string src, string dst, bool failIfExists);
And here is the documentation for CopyFile
You're never going to be able to beat the operating system at something so fundamental with your own code, not even if you crafted it carefully in assembler.
If you need your operations to occur with the best performance AND you want to mix and match various sources, then you will need to create a type that describes the resource locations. You then create an API with functions such as Copy that takes two such types and, having examined the descriptions of both, chooses the best-performing copy mechanism. E.g., having determined that both locations are Windows file locations, it would choose File.Copy; or, if the source is a Windows file but the destination is to be an HTTP POST, it would use a WebRequest.
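A rough sketch of that dispatch idea (all type and member names here are illustrative, not an existing API):
using System.IO;

enum LocationKind { WindowsFile, NetworkShare, Ftp, AmazonS3, Http }

sealed class ResourceLocation
{
    public LocationKind Kind;
    public string Path; // file path, share path, or URL
}

static class MediaCopier
{
    public static void Copy(ResourceLocation source, ResourceLocation dest)
    {
        // Examine both endpoints and pick the best-performing mechanism.
        if (source.Kind == LocationKind.WindowsFile && dest.Kind == LocationKind.WindowsFile)
        {
            File.Copy(source.Path, dest.Path, true); // let the OS's CopyFile do the work
            return;
        }

        // Generic fallback: stream-to-stream copy. OpenRead/OpenWrite are
        // placeholders that would dispatch on Kind (FTP, S3, HTTP, ...).
        using (Stream src = OpenRead(source))
        using (Stream dst = OpenWrite(dest))
        {
            byte[] buffer = new byte[64 * 1024];
            int n;
            while ((n = src.Read(buffer, 0, buffer.Length)) > 0)
                dst.Write(buffer, 0, n);
        }
    }

    static Stream OpenRead(ResourceLocation loc) { return File.OpenRead(loc.Path); }
    static Stream OpenWrite(ResourceLocation loc) { return File.Create(loc.Path); }
}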
Try to remove the Flush call, or move it outside the loop.
Sometimes the OS knows best when to flush the IO; that allows it to make better use of its internal buffers.
Here's a similar answer
How do I copy the contents of one stream to another?
Your main problem is the call to Flush(); it ties your performance to the speed of the I/O.
Mark Russinovich would be the authority on this.
He wrote on his blog an entry Inside Vista SP1 File Copy Improvements which sums up the Windows state of the art through Vista SP1.
My semi-educated guess would be that File.Copy would be most robust over the greatest number of situations. Of course, that doesn't mean in some specific corner case, your own code might beat it...
One thing that stands out is that you are reading a chunk, writing that chunk, reading another chunk and so on.
Streaming operations are great candidates for multithreading. My guess is that File.Copy implements multithreading.
Try reading in one thread and writing in another. You will need to coordinate the threads so that the write thread doesn't start writing a buffer until the read thread has finished filling it. You can solve this with two buffers, one being read while the other is being written, and a flag that says which buffer is being used for which purpose; a sketch follows below.
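A minimal sketch of that two-buffer, two-thread pipeline (semaphores stand in for the "flag" above; the buffer size and thread layout are illustrative):
using System.IO;
using System.Threading;

static class PipelinedCopy
{
    public static void Copy(Stream source, Stream dest, int bufferSize)
    {
        byte[][] buffers = { new byte[bufferSize], new byte[bufferSize] };
        int[] counts = new int[2];
        var filled = new SemaphoreSlim(0, 2); // buffers ready to be written
        var empty = new SemaphoreSlim(2, 2);  // buffers ready to be filled

        var writer = new Thread(() =>
        {
            int i = 0;
            while (true)
            {
                filled.Wait();
                if (counts[i] == 0) break; // reader signalled end of stream
                dest.Write(buffers[i], 0, counts[i]);
                empty.Release();
                i ^= 1; // alternate between the two buffers
            }
        });
        writer.Start();

        int j = 0, n;
        do
        {
            empty.Wait();
            n = source.Read(buffers[j], 0, bufferSize);
            counts[j] = n;
            filled.Release();
            j ^= 1;
        } while (n > 0);

        writer.Join();
    }
}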

Is a FileStream object (.NETCF, C#) created using a handle returned from the Win32 API CreateFile (P/Invoke) subject to .NET garbage collection?

UPDATED QUESTION
Since the ctor public FileStream(IntPtr handle, FileAccess access) is not supported by .NETCF, could you please suggest other ways of sharing a large file in memory between managed and unmanaged code on a platform with limited resources (RAM)? Basically, I want to map the file in the upper region of the 2 GB user space (Win CE 5.0), outside of the process space/heap. How can I do that in C#?
Also, do MemoryStream objects allocate space on the heap or in the memory-mapped region on Win CE 5.0?
thanks...
ORIGINAL QUESTION
I am instantiating a FileStream Object (.NETCF , C#) using a file handle returned by native CreateFile() as below:
// P/Invoke
[DllImport("coredll.dll", SetLastError = true)]
public static extern IntPtr CreateFile(string lpFileName,
    uint dwDesiredAccess,
    uint dwShareMode,
    IntPtr lpSecurityAttributes,
    uint dwCreationDisposition,
    uint dwFlagsAndAttributes,
    IntPtr hTemplateFile);

// File handle received from the native Win32 API
IntPtr ptr = CreateFile("myfile.txt",
    0,
    0,
    IntPtr.Zero,
    (uint)FileMode.Create,
    0,
    IntPtr.Zero);

// Instantiate a FileStream object using the handle (returned above) as parameter.
FileStream fs = new FileStream(ptr, FileAccess.ReadWrite);
The file will grow to a large size, 500 KB or more. So, my questions are:
1) Is there anything wrong with this way of doing things, given that the SafeFileHandle/Handle properties are not supported in this .NETCF version? Is there any better way of doing it (I am planning to use a native memory-mapped file handle with FileStream/MemoryStream)?
2) Does the memory allocated by the FileStream object fall under the .NETCF garbage collector? Or, given that the handle is for a memory-mapped file created using the native API, is the managed FileStream object and its resources outside the purview of the garbage collector?
Thanks in advance.
Overall there is nothing wrong with this approach of using native CreateFile and wrapping the handle in a FileStream object. This is a supported feature of FileStream.
In terms of garbage collection, though, there are really two things at play here:
The memory associated with the FileStream object. Yes, this will be garbage collected.
The handle, which is a resource created with CreateFile. The FileStream object will take ownership of this handle and will free it when it is disposed (passively or actively).
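On desktop .NET, that ownership can be controlled explicitly; a sketch (this overload is NOT available in .NETCF, as the next answer points out, and CloseHandle here is a hypothetical P/Invoke you'd declare yourself):
// ownsHandle: false means Dispose releases the stream's state but
// leaves the native handle open for our code to close.
FileStream fs = new FileStream(ptr, FileAccess.ReadWrite, false);
// ... use fs ...
fs.Dispose();
NativeMethods.CloseHandle(ptr); // hypothetical P/Invoke wrapper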
According to the documentation, the constructor you're planning on using isn't available in .NET CF.
