Is it possible to get the size of a file in C# without using System.IO.FileInfo at all?
I know that you can get other things like Name and Extension by using Path.GetFileName(yourFilePath) and Path.GetExtension(yourFilePath) respectively, but apparently not file size? Is there another way I can get file size without using System.IO.FileInfo?
The only reason for this is that, if I understand correctly, FileInfo grabs more info than I really need, so it takes longer to gather all of that when the only thing I need is the size of the file. Is there a faster way?
I performed a benchmark using these two methods:
public static uint GetFileSizeA(string filename)
{
WIN32_FIND_DATA findData;
FindFirstFile(filename, out findData);
return findData.nFileSizeLow;
}
public static uint GetFileSizeB(string filename)
{
IntPtr handle = CreateFile(
filename,
FileAccess.Read,
FileShare.Read,
IntPtr.Zero,
FileMode.Open,
FileAttributes.ReadOnly,
IntPtr.Zero);
long fileSize;
GetFileSizeEx(handle, out fileSize);
CloseHandle(handle);
return (uint) fileSize;
}
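For reference, both snippets assume P/Invoke declarations roughly like the following (a sketch; the exact marshaling attributes may differ from what I actually used):
using System;
using System.IO;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
struct WIN32_FIND_DATA
{
    public FileAttributes dwFileAttributes;
    public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime;
    public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime;
    public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime;
    public uint nFileSizeHigh;
    public uint nFileSizeLow;
    public uint dwReserved0;
    public uint dwReserved1;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
    public string cFileName;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)]
    public string cAlternateFileName;
}

[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
static extern IntPtr FindFirstFile(string lpFileName, out WIN32_FIND_DATA lpFindFileData);

[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
static extern IntPtr CreateFile(string lpFileName, FileAccess dwDesiredAccess,
    FileShare dwShareMode, IntPtr lpSecurityAttributes, FileMode dwCreationDisposition,
    FileAttributes dwFlagsAndAttributes, IntPtr hTemplateFile);

[DllImport("kernel32.dll")]
static extern bool GetFileSizeEx(IntPtr hFile, out long lpFileSize);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool CloseHandle(IntPtr hObject);
(Strictly speaking, the search handle returned by FindFirstFile should also be closed with FindClose, which GetFileSizeA omits.)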
Running against a bit over 2300 files, GetFileSizeA took 62-63ms to run. GetFileSizeB took over 18 seconds.
Unless someone sees something I'm doing wrong, I think the answer is clear as to which method is faster.
Is there a way I can refrain from actually opening the file?
Update
Changing FileAttributes.ReadOnly to FileAttributes.Normal reduced the timing so that the two methods were identical in performance.
Furthermore, if you skip the CloseHandle() call, the GetFileSizeEx method becomes about 20-30% faster, though I don't know that I'd recommend that.
From a short test I did, I've found that using a FileStream is just 1 millisecond slower on average than using Pete's GetFileSizeB (it took me about 21 milliseconds over a network share...). Personally, I prefer staying within the BCL limits whenever I can.
The code is simple:
using (var file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
return file.Length;
}
As per this comment:
I have a small application that gathers the size info and saves it into an array... but I often have half a million files, give or take and that takes a while to go through all of those files (I'm using FileInfo). I was just wondering if there was a faster way...
Since you're finding the length of so many files you're much more likely to benefit from parallelization than from trying to get the file size through another method. The FileInfo class should be good enough, and any improvements are likely to be small.
Parallelizing the file size requests, on the other hand, has the potential for significant improvements in speed. (Note that the degree of improvement will be largely based on your disk drive, not your processor, so results can vary greatly.)
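For example, a rough sketch using PLINQ (the method and the paths array are placeholders, not code from the question):
using System.IO;
using System.Linq;

// Gather file lengths in parallel; the useful degree of parallelism is
// governed by the disk, not the CPU, so measure before tuning.
static long[] GetFileSizes(string[] filePaths)
{
    return filePaths
        .AsParallel()
        .AsOrdered()   // keep results in the same order as the input paths
        .Select(path => new FileInfo(path).Length)
        .ToArray();
}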
Not a direct answer...because I am not sure there is a faster way using the .NET framework.
Here's the code I am using:
List<long> list = new List<long>();
DirectoryInfo di = new DirectoryInfo("C:\\Program Files");
FileInfo[] fiArray = di.GetFiles("*", SearchOption.AllDirectories);
foreach (FileInfo f in fiArray)
list.Add(f.Length);
Running that took 2709ms against my "Program Files" directory, which contained around 22,720 files. That's no slouch by any means. Furthermore, when I put *.txt as a filter in the first parameter of the GetFiles method, it cut the time down drastically, to 461ms.
A lot of this will depend on how fast your hard drive is, but I really don't think that FileInfo is killing performance.
NOTE: I think this is only valid for .NET 4+.
A quick'n'dirty solution if you want to do this on the .NET Core or Mono runtimes on non-Windows hosts:
Include the Mono.Posix.NETStandard NuGet package, then something like this...
using Mono.Unix.Native;
private long GetFileSize(string filePath)
{
Stat stat;
// Syscall.stat returns 0 on success; st_size holds the size in bytes.
Syscall.stat(filePath, out stat);
return stat.st_size;
}
I've tested this running .NET Core on Linux and macOS - not sure if it works on Windows - it might, given that these are POSIX syscalls under the hood (and the package is maintained by Microsoft). If not, combine with the other P/Invoke-based answer to cover all platforms.
When compared to FileInfo.Length, this gives me much more reliable results when getting the size of a file that is actively being written to by another process/thread.
You can try this:
[DllImport("kernel32.dll")]
static extern bool GetFileSizeEx(IntPtr hFile, out long lpFileSize);
But that's not much of an improvement...
Here's the example code taken from pinvoke.net (note that it never closes the handle; call CloseHandle on it when you're done):
IntPtr handle = CreateFile(
PathString,
GENERIC_READ,
FILE_SHARE_READ,
0,
OPEN_EXISTING,
FILE_ATTRIBUTE_READONLY,
0); //PInvoked too
if (handle.ToInt32() == -1)
{
return;
}
long fileSize;
bool result = GetFileSizeEx(handle, out fileSize);
if (!result)
{
return;
}
I write a lot of files to disk every second, and I want to disable the disk cache to improve performance. A Google search turned up a solution: the Win32 CreateFile method with FILE_FLAG_NO_BUFFERING, and How to empty/flush Windows READ disk cache in C#?.
I wrote a little code to test whether it works:
const int FILE_FLAG_NO_BUFFERING = unchecked((int)0x20000000);
[DllImport("KERNEL32", SetLastError = true, CharSet = CharSet.Auto, BestFitMapping = false)]
static extern SafeFileHandle CreateFile(
String fileName,
int desiredAccess,
System.IO.FileShare shareMode,
IntPtr securityAttrs,
System.IO.FileMode creationDisposition,
int flagsAndAttributes,
IntPtr templateFile);
static void Main(string[] args)
{
var handler = CreateFile(@"d:\temp.bin", (int)FileAccess.Write, FileShare.None, IntPtr.Zero, FileMode.Create, FILE_FLAG_NO_BUFFERING, IntPtr.Zero);
var stream = new FileStream(handler, FileAccess.Write, BlockSize); // BlockSize = 4096
byte[] array = Encoding.UTF8.GetBytes("hello,world");
stream.Write(array, 0, array.Length);
stream.Close();
}
When I run this program, the application throws an exception: "IO operation will not work. Most likely the file will become too long or the handle was not opened to support synchronous IO operations."
Later, I found the article When you create an object with constraints, you have to make sure everybody who uses the object understands those constraints, but I couldn't fully understand it, so I changed my code to test:
var stream = new FileStream(handler, FileAccess.Write, 4096);
byte[] ioBuffer = new byte[4096];
byte[] array = Encoding.UTF8.GetBytes("hello,world");
Array.Copy(array, ioBuffer, array.Length);
stream.Write(ioBuffer, 0, ioBuffer.Length);
stream.Close();
It runs OK, but I just want the "hello,world" bytes, not the whole buffer. I tried changing the block size to 1 or other integers (not multiples of 512) and got the same error. I also tried the Win32 WriteFile API and got the same error. Can someone help me?
The CreateFile() function in no-buffering mode imposes strict requirements on what may and may not be done. Having a buffer of a certain size (a multiple of the device sector size) is one of them.
Now, you can improve file writes this way only if you use buffering in your code. If you want to write 10 bytes without buffering, then no-buffering mode won't help you.
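To illustrate what that buffering looks like, here is a minimal sketch (assumptions: a 512-byte sector size, the CreateFile P/Invoke and FILE_FLAG_NO_BUFFERING constant from the question, and it glosses over the buffer memory-alignment requirement): pad the payload to a whole number of sectors for the unbuffered write, then trim the file to its real length with an ordinary handle.
const int SectorSize = 512;   // assumed; query the actual sector size in production

byte[] payload = Encoding.UTF8.GetBytes("hello,world");

// FILE_FLAG_NO_BUFFERING requires writes in whole sectors, so round up.
int paddedLength = ((payload.Length + SectorSize - 1) / SectorSize) * SectorSize;
byte[] padded = new byte[paddedLength];
Array.Copy(payload, padded, payload.Length);

SafeFileHandle handle = CreateFile(@"d:\temp.bin", (int)FileAccess.Write, FileShare.None,
    IntPtr.Zero, FileMode.Create, FILE_FLAG_NO_BUFFERING, IntPtr.Zero);
using (var stream = new FileStream(handle, FileAccess.Write, paddedLength))
{
    stream.Write(padded, 0, padded.Length);
}

// Trim the padding off afterwards with an ordinary (buffered) handle.
using (var trim = new FileStream(@"d:\temp.bin", FileMode.Open, FileAccess.Write))
{
    trim.SetLength(payload.Length);
}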
If I understood your requirements correctly, this is what I'd try first:
Create a queue with objects that have the data in memory and the target file on the disk.
You start writing the files first just into memory, and then on another thread start going through the queue, opening I/O-completion-port-based FileStream handles (isAsync = true). Just don't open too many of them, as at some point you'll probably start losing performance due to cache thrashing etc. You need to profile to see what the optimal amount is for your system and SSDs.
After each open, you can use the async FileStream Begin... methods to start writing data from memory to the files. isAsync imposes some requirements, so this may not be as easy to get working in every corner case as using a FileStream normally.
Whether there is any benefit to using one thread to create the files and another to write to them with the async API probably depends on whether creating/opening the files would otherwise block. SSDs perform various things internally to keep access to data fast, so when you start doing this sort of extreme performance work, there may be pronounced differences between SSD controllers. It's also possible that if the controller drivers aren't well implemented, the OS/Windows may start to feel sluggish or freeze. Hardware benchmark sites don't really stress this particular kind of scenario (e.g. create and write x KB into a million files ASAP), and no doubt there are some drivers out there that are slower than others.
I have this code that saves a pdf file.
FileStream fs = new FileStream(SaveLocation, FileMode.Create);
fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
fs.Flush();
fs.Close();
It works fine. However, sometimes it does not release the lock right away, which causes file locking exceptions in functions that run after this one.
Is there an ideal way to release the file lock right after fs.Close()?
Here's the ideal:
using (var fs = new FileStream(SaveLocation, FileMode.Create))
{
fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
}
which is roughly equivalent to:
FileStream fs = null;
try
{
fs = new FileStream(SaveLocation, FileMode.Create);
fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
}
finally
{
if (fs != null)
{
((IDisposable)fs).Dispose();
}
}
the using being more readable.
UPDATE:
@aron, now that I think about it,
File.WriteAllBytes(SaveLocation, result.DocumentBytes);
looks even prettier to the eye than the ideal :-)
We have seen this same issue in production with a using() statement wrapping it.
One of the top culprits here is anti-virus software which can sneak in after the file is closed, grab it to check it doesn't contain a virus before releasing it.
But even with all anti-virus software out of the mix, in very high load systems with files stored on network shares we still saw the problem occasionally. A, cough, short Thread.Sleep(), cough, after the close seemed to cure it. If someone has a better solution I'd love to hear it!
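For what it's worth, a bounded retry loop is a slightly more disciplined version of that sleep (a sketch; the attempt count and delay are arbitrary):
using System.IO;
using System.Threading;

// Retry opening the just-written file a few times to ride out transient
// locks from anti-virus scanners or network shares.
static FileStream OpenWithRetry(string path, int attempts = 5, int delayMs = 100)
{
    for (int i = 0; ; i++)
    {
        try
        {
            return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
        }
        catch (IOException) when (i < attempts - 1)
        {
            Thread.Sleep(delayMs);
        }
    }
}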
I can't imagine why the lock would be maintained after the file is closed, but you should consider wrapping this in a using statement to ensure that the file is closed even if an exception is raised:
using (FileStream fs = new FileStream(SaveLocation, FileMode.Create))
{
fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
}
If the functions that run after this one are part of the same application, then a better approach might be to open the file for read/write at the beginning of the entire process, and then pass the file to each function without closing it until the end of the process. Then it will be unnecessary for the application to block waiting for the IO operation to complete.
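A rough sketch of that idea (PostProcess stands in for whatever hypothetical function would otherwise reopen the file):
// Open once, hand the same stream to each step, dispose only at the end.
using (var fs = new FileStream(SaveLocation, FileMode.Create, FileAccess.ReadWrite))
{
    fs.Write(result.DocumentBytes, 0, result.DocumentBytes.Length);
    fs.Position = 0;
    PostProcess(fs);   // hypothetical later step that reads the same stream
}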
This worked for me: when using .Flush(), I had to add a Close() inside the using statement.
using (var imageFile = new FileStream(filePath, FileMode.Create, FileAccess.ReadWrite,FileShare.ReadWrite))
{
imageFile.Write(bytes, 0, bytes.Length);
imageFile.Flush();
imageFile.Close();
}
Just had the same issue when I closed a FileStream and opened the file immediately in another class. The using statement was not a solution since the FileStream had been created at another place and stored in a list. Clearing the list was not enough.
It looks like the stream needs to be freed by the garbage collector before the file can be reused. If the time between closing and opening is too short, you can use
GC.Collect();
right after you closed the stream. This worked for me.
I guess Ian Mercer's solution of putting the thread to sleep might have the same effect, giving the GC time to free the resources.
My problem is in regards file copying performance. We have a media management system that requires a lot of moving files around on the file system to different locations including windows shares on the same network, FTP sites, AmazonS3, etc. When we were all on one windows network we could get away with using System.IO.File.Copy(source, destination) to copy a file. Since many times all we have is an input Stream (like a MemoryStream), we tried abstracting the Copy operation to take an input Stream and an output Stream but we are seeing a massive performance decrease. Below is some code for copying a file to use as a discussion point.
public void Copy(System.IO.Stream inStream, string outputFilePath)
{
int bufferSize = 1024 * 64;
using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
{
int bytesRead = -1;
byte[] bytes = new byte[bufferSize];
while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
{
fileStream.Write(bytes, 0, bytesRead);
fileStream.Flush();
}
}
}
Does anyone know why this performs so much slower than File.Copy? Is there anything I can do to improve performance? Am I just going to have to put special logic in to see if I'm copying from one windows location to another--in which case I would just use File.Copy and in the other cases I'll use the streams?
Please let me know what you think and whether you need additional information. I have tried different buffer sizes and it seems like a 64k buffer size is optimal for our "small" files and 256k+ is a better buffer size for our "large" files--but in either case it performs much worse than File.Copy(). Thanks in advance!
File.Copy was built around the CopyFile Win32 function, and that function gets a lot of attention from the MS crew (remember the Vista-related threads about slow copy performance?).
Several clues to improve performance of your method:
As many have said, remove the Flush call from your loop. You do not need it at all.
Increasing the buffer may help, but only for file-to-file operations; for network shares or FTP servers it will slow things down instead. 60 * 1024 is ideal for network shares, at least before Vista; for FTP, 32K will be enough in most cases.
Help the OS with your caching strategy (in your case sequential reading and writing) by using the FileStream constructor overload that takes a FileOptions parameter (SequentialScan); see the constructor sketch after the example below.
You can speed up copying by using the asynchronous pattern (especially useful for network-to-file cases), but do not use threads for this; instead use overlapped I/O (BeginRead, EndRead, BeginWrite, EndWrite in .NET), and do not forget to set the Asynchronous option in the FileStream constructor (see FileOptions).
Example of asynchronous copy pattern:
// Double-buffering state: ActiveBuffer is written out while BackBuffer receives the next read.
byte[] ActiveBuffer = new byte[64 * 1024];   // pick a size appropriate for your target (see above)
byte[] BackBuffer = new byte[64 * 1024];
byte[] WriteBuffer;
int Readed = 0;
IAsyncResult ReadResult;
IAsyncResult WriteResult;
ReadResult = sourceStream.BeginRead(ActiveBuffer, 0, ActiveBuffer.Length, null, null);
do
{
Readed = sourceStream.EndRead(ReadResult);
WriteResult = destStream.BeginWrite(ActiveBuffer, 0, Readed, null, null);
WriteBuffer = ActiveBuffer;
if (Readed > 0)
{
ReadResult = sourceStream.BeginRead(BackBuffer, 0, BackBuffer.Length, null, null);
BackBuffer = Interlocked.Exchange(ref ActiveBuffer, BackBuffer);
}
destStream.EndWrite(WriteResult);
}
while (Readed > 0);
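And a sketch of the constructor options mentioned above (sourcePath/destPath are placeholders):
// Hint the cache manager that access is sequential, and opt in to
// overlapped (asynchronous) I/O for the pattern shown above.
var sourceStream = new FileStream(sourcePath, FileMode.Open, FileAccess.Read,
    FileShare.Read, 64 * 1024,
    FileOptions.SequentialScan | FileOptions.Asynchronous);
var destStream = new FileStream(destPath, FileMode.Create, FileAccess.Write,
    FileShare.None, 64 * 1024,
    FileOptions.SequentialScan | FileOptions.Asynchronous);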
Three changes will dramatically improve performance:
Increase your buffer size; try 1 MB (well, just experiment)
After you open your fileStream, call fileStream.SetLength(inStream.Length) to allocate the entire block on disk up front (only works if inStream is seekable)
Remove fileStream.Flush() - it is redundant and probably has the single biggest impact on performance as it will block until the flush is complete. The stream will be flushed anyway on dispose.
This seemed about 3-4 times faster in the experiments I tried:
public static void Copy(System.IO.Stream inStream, string outputFilePath)
{
int bufferSize = 1024 * 1024;
using (FileStream fileStream = new FileStream(outputFilePath, FileMode.OpenOrCreate, FileAccess.Write))
{
fileStream.SetLength(inStream.Length);
int bytesRead = -1;
byte[] bytes = new byte[bufferSize];
while ((bytesRead = inStream.Read(bytes, 0, bufferSize)) > 0)
{
fileStream.Write(bytes, 0, bytesRead);
}
}
}
Dusting off reflector we can see that File.Copy actually calls the Win32 API:
if (!Win32Native.CopyFile(fullPathInternal, dst, !overwrite))
Which resolves to
[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
internal static extern bool CopyFile(string src, string dst, bool failIfExists);
And here is the documentation for CopyFile
You're never going to beat the operating system at doing something so fundamental with your own code, not even if you crafted it carefully in assembler.
If you need to make sure that your operations occur with the best performance AND you want to mix and match various sources, then you will need to create a type that describes resource locations. You then create an API with functions such as Copy that take two such types and, having examined the descriptions of both, choose the best-performing copy mechanism. E.g., having determined that both locations are Windows file locations, it would choose File.Copy; or if the source is a Windows file but the destination is an HTTP POST, it would use a WebRequest.
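A minimal sketch of that shape (the type and names are illustrative, not an existing API):
using System.IO;

enum LocationKind { LocalFile, NetworkShare, Http, Ftp, AmazonS3 }

class ResourceLocation
{
    public LocationKind Kind { get; set; }
    public string Address { get; set; }   // path, UNC path, or URL
}

static void Copy(ResourceLocation source, ResourceLocation destination)
{
    if (source.Kind == LocationKind.LocalFile &&
        destination.Kind == LocationKind.LocalFile)
    {
        // Both ends are file-system paths: let the OS do the work.
        File.Copy(source.Address, destination.Address, overwrite: true);
    }
    else
    {
        // Mixed cases fall back to a stream-to-stream copy
        // (WebRequest for HTTP, an FTP/S3 client, etc.).
    }
}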
Try removing the Flush call, or at least move it outside the loop.
Sometimes the OS knows best when to flush the I/O; letting it decide allows it to make better use of its internal buffers.
Here's a similar answer
How do I copy the contents of one stream to another?
Your main problem is the call to Flush(); it binds your performance to the speed of the I/O.
Mark Russinovich would be the authority on this.
He wrote on his blog an entry Inside Vista SP1 File Copy Improvements which sums up the Windows state of the art through Vista SP1.
My semi-educated guess would be that File.Copy is the most robust over the greatest number of situations. Of course, that doesn't mean that in some specific corner case your own code might not beat it...
One thing that stands out is that you are reading a chunk, writing that chunk, reading another chunk and so on.
Streaming operations are great candidates for multithreading. My guess is that File.Copy implements multithreading.
Try reading in one thread and writing in another thread. You will need to coordinate the threads so that the write thread doesn't start writing away a buffer until the read thread is done filling it up. You can solve this by having two buffers, one that is being read while the other is being written, and a flag that says which buffer is currently being used for which purpose.
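A hedged sketch of that split, using a producer/consumer queue rather than the two-buffer flag described above (BlockingCollection and Task require .NET 4/4.5):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// One task reads chunks, the calling thread writes them as they arrive.
static void CopyPipelined(Stream source, Stream destination, int bufferSize = 64 * 1024)
{
    var chunks = new BlockingCollection<byte[]>(boundedCapacity: 4);
    var reader = Task.Run(() =>
    {
        try
        {
            var buffer = new byte[bufferSize];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                chunks.Add(chunk);
            }
        }
        finally
        {
            chunks.CompleteAdding();   // unblock the consumer even if the read fails
        }
    });
    foreach (var chunk in chunks.GetConsumingEnumerable())
        destination.Write(chunk, 0, chunk.Length);
    reader.Wait();   // propagate any read exception
}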
I've been playing around with what I thought was a simple idea. I want to be able to read in a file from somewhere (website, filesystem, ftp), perform some operations on it (compress, encrypt, etc.) and then save it somewhere (somewhere may be a filesystem, ftp, or whatever). It's a basic pipeline design. What I would like to do is to read in the file and put it onto a MemoryStream, then perform the operations on the data in the MemoryStream, and then save that data in the MemoryStream somewhere. I was thinking I could use the same Stream to do this but run into a couple of problems:
Every time I use a StreamWriter or StreamReader I need to close it, and that closes the stream so I cannot use it anymore. It seems like there must be some way to get around that.
Some of these files may be big and so I may run out of memory if I try to read the whole thing in at once.
I was hoping to be able to spin up each of the steps as separate threads and have the compression step begin as soon as there is data on the stream, and then as soon as the compression has some compressed data available on the stream I could start saving it (for example). Is anything like this easily possible with C# streams? Anyone have thoughts as to how best to accomplish this?
Thanks,
Mike
Using a helper method to drive the streaming:
static public void StreamCopy(Stream source, Stream target)
{
byte[] buffer = new byte[8 * 1024];
int size;
do
{
size = source.Read(buffer, 0, buffer.Length);
target.Write(buffer, 0, size);
} while (size > 0);
}
You can easily combine whatever you need:
using (FileStream iFile = new FileStream(...))
using (FileStream oFile = new FileStream(...))
using (DeflateStream oZip = new DeflateStream(oFile, CompressionMode.Compress))
StreamCopy(iFile, oZip);
Depending on what you are actually trying to do, you'd chain the streams differently. This also uses relatively little memory, because only the data being operated upon is in memory.
StreamReader/StreamWriter shouldn't have been designed to close their underlying stream -- that's a horrible misfeature in the BCL. But they do, and they won't be changed (because of backward compatibility), so we're stuck with this disaster of an API.
But there are some well-established workarounds, if you want to use StreamReader/Writer but keep the Stream open afterward.
For a StreamReader: don't Dispose the StreamReader. It's that simple. It's harmless to just let a StreamReader go without ever calling Dispose. The only effect is that your Stream won't get prematurely closed, which is actually a plus.
For a StreamWriter: there may be buffered data, so you can't get away with just letting it go. You have to call Flush, to make sure that buffered data gets written out to the Stream. Then you can just let the StreamWriter go. (Basically, you put a Flush where you normally would have put a Dispose.)
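For what it's worth, on .NET 4.5 and later both classes also have constructor overloads with a leaveOpen parameter, which sidesteps the problem entirely (stream here is your already-open Stream):
using (var writer = new StreamWriter(stream, Encoding.UTF8, bufferSize: 4096, leaveOpen: true))
{
    writer.WriteLine("some text");
}   // disposing the writer flushes it but leaves 'stream' open

using (var reader = new StreamReader(stream, Encoding.UTF8,
    detectEncodingFromByteOrderMarks: true, bufferSize: 4096, leaveOpen: true))
{
    // read without the underlying stream being closed afterwards
}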
Unless you're reading in streams bigger than your hard drive, I don't think you'll run out of memory:
http://blogs.msdn.com/ericlippert/archive/2009/06/08/out-of-memory-does-not-refer-to-physical-memory.aspx