What is the ideal size for FileSystemWatcher InternalBufferSize? - c#

I have an issue with my FileSystemWatcher.
I have an application that needs to monitor a really great amount of files being created in a folder in a short period of time.
While developing it, I realized that a lot of file notifications were missed whenever my buffer was smaller than 64 KB, which is what Microsoft recommends. I kept increasing the buffer size beyond that until I reached a value that works for me: 2621440 bytes!
What would you recommend so I can use a smaller size in this case, or what would be the ideal buffer size?
My example code:
WATCHER = new FileSystemWatcher(SignerDocument.UnsignedPath, "*.pdf");
WATCHER.InternalBufferSize = 2621440; //Great and expensive buffer 2.5mb size!
WATCHER.IncludeSubdirectories = true;
WATCHER.EnableRaisingEvents = true;
WATCHER.Created += new FileSystemEventHandler(watcher_Created);
WATCHER.Renamed += new RenamedEventHandler(watcher_Renamed);
And here is what Microsoft says about this in .NET 2.0:
Increasing buffer size is expensive, as it comes from non paged memory
that cannot be swapped out to disk, so keep the buffer as small as
possible. To avoid a buffer overflow, use the NotifyFilter and
IncludeSubdirectories properties to filter out unwanted change
notifications.
link : FileSystemWatcher.InternalBufferSize Property

For such a huge workload you might want to opt for a "periodic sweep" approach instead of instant notifications. You could, for instance, scan the directory every 5 seconds and process the added files. If you move each file to another directory after it has been processed, your periodic workload might even become minimal.
It is also the safer approach: even if your processing code crashes you can always recover, whereas with notifications your checkpoint would be lost.
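For illustration, a minimal sketch of that sweep-and-move idea, assuming a hypothetical ProcessFile method and a "processed" subfolder (the paths are placeholders):

using System;
using System.IO;
using System.Threading;

class SweepProcessor
{
    static void Main()
    {
        // Placeholder paths; ProcessFile stands in for your real handling code.
        string watchPath = @"C:\incoming";
        string processedPath = Path.Combine(watchPath, "processed");
        Directory.CreateDirectory(processedPath);

        var timer = new Timer(_ =>
        {
            // Note: real code should guard against overlapping sweeps if processing is slow.
            foreach (string file in Directory.GetFiles(watchPath, "*.pdf"))
            {
                ProcessFile(file);
                // Moving the file out of the sweep pattern doubles as a checkpoint.
                File.Move(file, Path.Combine(processedPath, Path.GetFileName(file)));
            }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));

        Console.ReadLine(); // keep the demo alive
        GC.KeepAlive(timer);
    }

    static void ProcessFile(string path)
    {
        // Handle the PDF here.
    }
}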

You can set the buffer to 4 KB or larger, but it must not exceed 64 KB. If you try to set the InternalBufferSize property to less than 4096 bytes, your value is discarded and the InternalBufferSize property is set to 4096 bytes. For best performance, use a multiple of 4 KB on Intel-based computers.
From:
http://msdn.microsoft.com/de-de/library/system.io.filesystemwatcher.internalbuffersize(v=vs.110).aspx
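If you stay with FileSystemWatcher and within those documented limits, about all you can do is max the buffer out at 64 KB and request only the notifications you actually handle. A sketch along those lines, reusing the names from the question:

var watcher = new FileSystemWatcher(SignerDocument.UnsignedPath, "*.pdf")
{
    // 64 KB is the documented ceiling; a multiple of 4 KB is recommended.
    InternalBufferSize = 64 * 1024,
    // Creation and rename notifications are file-name changes, so ask only for those
    // to keep the buffer from filling with events you never handle.
    NotifyFilter = NotifyFilters.FileName,
    IncludeSubdirectories = true
};
watcher.Created += watcher_Created;
watcher.Renamed += watcher_Renamed;
watcher.EnableRaisingEvents = true;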

Related

Why doesn't CPU usage increase?

I have the following code:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Threading.Tasks;
using ImageProcessor;
using ImageProcessor.Imaging;

class Methods
{
    public MemoryStream UniqPicture(string imagePath)
    {
        var photoBytes = File.ReadAllBytes(imagePath); // change imagePath to a valid image path
        var quality = 70;
        var format = ImageFormat.Jpeg; // intended output format (not used below)
        var size = new Size(200, 200);
        using (var inStream = new MemoryStream(photoBytes))
        {
            using (var outStream = new MemoryStream())
            {
                using (var imageFactory = new ImageFactory())
                {
                    imageFactory.Load(inStream)
                        .Rotate(new Random().Next(-7, 7))
                        .RoundedCorners(new RoundedCornerLayer(190))
                        .Pixelate(3, null)
                        .Contrast(new Random().Next(-15, 15))
                        .Brightness(new Random().Next(-15, 15))
                        .Quality(quality)
                        .Save(outStream);
                }
                return outStream;
            }
        }
    }

    public void StartUniq()
    {
        var files = Directory.GetFiles("mypath");
        Parallel.ForEach(files, (picture) => { UniqPicture(picture); });
    }
}
When I start the StartUniq() method, my CPU stays bound at 12-13% and no more. Can I use a higher CPU percentage for this operation? Why doesn't it increase?
I tried doing it from Python and it is also only 12-13%. The CPU is a Core i7 8700.
The only way to make the operation faster is to start a second instance of the application.
Is it a Windows limit? I am using Windows Server 2016.
I think this is a system limit, because even this simple code is bound at around 12% CPU!
while (true)
{
    var a = 1 + 2;
}
A bit of research shows that you are using ImageFactory from https://imageprocessor.org/, which wraps System.Drawing. System.Drawing is itself often a wrapper for GDI/GDI+, which uses process-wide locks, so your attempts at multithreading will be severely bottlenecked. Try a better image library.
(See Robert McKee's answer as well, although this could also be about disk IO; maybe not.)
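For example, a library such as SixLabors.ImageSharp does its processing in managed code without the GDI+ process-wide lock. A rough equivalent of the question's method might look like the sketch below; the effects don't map one-to-one (there is no RoundedCorners in the core package), and the API shown is ImageSharp's, not ImageProcessor's:

using System;
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Formats.Jpeg;

public static MemoryStream UniqPicture(string imagePath)
{
    var rnd = new Random();
    var outStream = new MemoryStream();
    using (var image = Image.Load(imagePath))
    {
        image.Mutate(ctx => ctx
            .Rotate(rnd.Next(-7, 7))                      // small random rotation
            .Pixelate(3)                                  // pixelate with a 3px cell
            .Contrast(1f + rnd.Next(-15, 15) / 100f)      // roughly +/-15% contrast (1 = unchanged)
            .Brightness(1f + rnd.Next(-15, 15) / 100f));  // roughly +/-15% brightness (1 = unchanged)
        image.Save(outStream, new JpegEncoder { Quality = 70 });
    }
    outStream.Position = 0;
    return outStream;
}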
So, I haven't used Parallel.ForEach before, but it seems like you should be running your UniqPicture method in parallel for all files in a given directory. I think your approach is good here, but ultimately your hard drive is probably killing the speed of your program (and vice versa).
Have you tried running UniqPicture in a sequential loop? My concern here is that your hard drive may be thrashing. In general, it's most likely that the input/output (IO) from your hard drive is taking a considerable amount of time, so the CPU waits a considerable amount of time before it can operate on the images in UniqPicture. If you could pre-load the images into memory, I would expect CPU utilization to be much higher, if not maxing out your CPU.
In no particular order, here are some thoughts
What happens if you run this sequentially? This will max out one core on the CPU at max, but it may prevent hard drive thrashing. If there are 100 threads being spun up, that's a lot of requests for the hard drive to deal with at once.
You should be able to add this option to make it run sequentially (or just make it a normal loop without Parallel.):
new ParallelOptions { MaxDegreeOfParallelism = 1 },
Maybe try 2, 3, or 4 threads and see if anything changes.
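Plugged into the loop from the question, that option looks roughly like this (MaxDegreeOfParallelism is the only knob being changed):

var files = Directory.GetFiles("mypath");
Parallel.ForEach(
    files,
    new ParallelOptions { MaxDegreeOfParallelism = 1 }, // try 1, then 2, 3, 4...
    picture => UniqPicture(picture));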
Check your hard drive utilization in Task Manager. What's the latency on the hard drive where the images are stored? What percentage does Windows report it as busy? You want the hard drive to be busy the entire time (100% usage), but you also want it to be grabbing your images with the highest throughput possible so the CPU can do its job.
A spinning hard drive (HDD) has far lower IOPS (IO per second) than an SSD in general. An SSD will usually have 1000 to 100,000+ IOPS, but a HDD is around 200, I believe, and has much lower throughput usually. An SSD should help your program utilize the CPU much more.
The size of the image files could have an impact here, again relating to IO.
Or maybe see Robert McKee's answer about your threads getting bottlenecked. Maybe 13% CPU utilization is the best you can get: 1 of 6 cores (your CPU has 6 cores) being maxed is ~16.7%, so you actually aren't far off from maxing one core already.
Ultimately, time how long it takes. Run time should scale inversely with CPU utilization (higher CPU usage = lower run time), but time it just to be sure, since that's the real benchmark.
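A quick way to get that wall-clock number, assuming the StartUniq method from the question:

var sw = System.Diagnostics.Stopwatch.StartNew();
new Methods().StartUniq();
sw.Stop();
Console.WriteLine($"Processed in {sw.Elapsed.TotalSeconds:F1} s");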

c# FileSystemWatcher stops firing events after some time

I want to track file changes for particular paths, and I am pretty much done with the code, which is now working fine. It tracks file creation, renaming and changes.
My problem is that when I launch the FileSystemWatcher it works fine, but after some time it stops working, i.e. it stops firing the created, deleted and changed events.
Can anybody help me out?
Thank you in advance.
Here is my code.
lstFolder is my list of paths to watch:
this.listFileSystemWatcher = new List<FileSystemWatcher>();

// Loop the list to process each of the folder specifications found
if (lstFolder.Count > 0) // check if a path is available to watch, else exit the file watcher
{
    foreach (CustomFolderSettings customFolder in lstFolder)
    {
        DirectoryInfo dir = new DirectoryInfo(customFolder.FWPath);
        // Checks whether the folder is enabled and
        // also that the directory is a valid location
        if (dir.Exists) //customFolder.FolderEnabled &&
        {
            customFolder.AllowedFiles = customFolder.FWExtension; // setting extension to the allowed file extensions to log
            foreach (var strExt in customFolder.FWExtension.Split(','))
            {
                // Creates a new instance of FileSystemWatcher
                //FileSystemWatcher fileSWatch = new FileSystemWatcher();
                this.fileSWatch = new FileSystemWatcher();
                // Sets the filter
                fileSWatch.Filter = strExt; // customFolder.FolderFilter;
                // Sets the folder location
                fileSWatch.Path = customFolder.FWPath;
                fileSWatch.InternalBufferSize = 64000;
                // Sets the action to be executed
                StringBuilder actionToExecute = new StringBuilder(customFolder.ExecutableFile);
                // List of arguments
                StringBuilder actionArguments = new StringBuilder(customFolder.ExecutableArguments);
                // Subscribe to notify filters
                fileSWatch.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName;
                // Associate the events that will be triggered when a file is
                // created, changed, deleted or renamed in the monitored folder, using lambda expressions
                fileSWatch.Created += (senderObj, fileSysArgs) => fileSWatch_Created(senderObj, fileSysArgs, actionToExecute.ToString(), customFolder.AllowedFiles);
                fileSWatch.Changed += (senderObj, fileSysArgs) => fileSWatch_Changed(senderObj, fileSysArgs, actionToExecute.ToString(), customFolder.AllowedFiles);
                fileSWatch.Deleted += (senderObj, fileSysArgs) => fileSWatch_Deleted(senderObj, fileSysArgs, actionToExecute.ToString(), customFolder.AllowedFiles);
                fileSWatch.Renamed += (senderObj, fileSysArgs) => fileSWatch_Renamed(senderObj, fileSysArgs, actionToExecute.ToString(), customFolder.AllowedFiles);
                fileSWatch.Error += (senderObj, fileSysArgs) => fileSWatch_Error(senderObj, fileSysArgs, actionToExecute.ToString(), customFolder.AllowedFiles);
                // Will track changes in sub-folders as well
                fileSWatch.IncludeSubdirectories = customFolder.FWSubFolders;
                // Begin watching
                fileSWatch.EnableRaisingEvents = true;
                // Add the watcher to the list
                listFileSystemWatcher.Add(fileSWatch);
                GC.KeepAlive(fileSWatch);
                GC.KeepAlive(listFileSystemWatcher);
            }
        }
    }
}
else
{
    Application.Exit();
}
Don't use
GC.KeepAlive(fileSWatch);
GC.KeepAlive(listFileSystemWatcher);
Create a List<FileSystemWatcher> and store each watcher in it instead.
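For illustration, a sketch of what that might look like inside the question's inner foreach loop (strExt, customFolder, actionToExecute and the fileSWatch_* handlers are the ones from the question):

// Field instead of this.fileSWatch:
private readonly List<FileSystemWatcher> listFileSystemWatcher = new List<FileSystemWatcher>();

// Inside the inner foreach loop:
var fileSWatch = new FileSystemWatcher
{
    Filter = strExt,
    Path = customFolder.FWPath,
    InternalBufferSize = 64 * 1024, // 64 KB, the documented maximum
    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName,
    IncludeSubdirectories = customFolder.FWSubFolders
};
fileSWatch.Created += (s, e) => fileSWatch_Created(s, e, actionToExecute.ToString(), customFolder.AllowedFiles);
fileSWatch.Error += (s, e) => fileSWatch_Error(s, e, actionToExecute.ToString(), customFolder.AllowedFiles);
fileSWatch.EnableRaisingEvents = true;

// The list keeps every watcher reachable, so no GC.KeepAlive is needed.
listFileSystemWatcher.Add(fileSWatch);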
Also have a look at
Events and Buffer Sizes
Note that several factors can affect which file system change events
are raised, as described by the following:
Common file system operations might raise more than one event. For example, when a file is moved from one directory to another, several
OnChanged and some OnCreated and OnDeleted events might be raised.
Moving a file is a complex operation that consists of multiple simple
operations, therefore raising multiple events. Likewise, some
applications (for example, antivirus software) might cause additional
file system events that are detected by FileSystemWatcher.
The FileSystemWatcher can watch disks as long as they are not switched or removed. The FileSystemWatcher does not raise events for
CDs and DVDs, because time stamps and properties cannot change. Remote
computers must have one of the required platforms installed for the
component to function properly.
If multiple FileSystemWatcher objects are watching the same UNC path in Windows XP prior to Service Pack 1, or Windows 2000 SP2 or earlier,
then only one of the objects will raise an event. On machines running
Windows XP SP1 and newer, Windows 2000 SP3 or newer or Windows Server
2003, all FileSystemWatcher objects will raise the appropriate events.
Note that a FileSystemWatcher may miss an event when the buffer size
is exceeded. To avoid missing events, follow these guidelines:
Increase the buffer size by setting the InternalBufferSize property.
Avoid watching files with long file names, because a long file name contributes to filling up the buffer. Consider renaming these files
using shorter names.
Keep your event handling code as short as possible.
FileSystemWatcher.InternalBufferSize Property
Remarks
You can set the buffer to 4 KB or larger, but it must not exceed 64
KB. If you try to set the InternalBufferSize property to less than
4096 bytes, your value is discarded and the InternalBufferSize
property is set to 4096 bytes. For best performance, use a multiple of
4 KB on Intel-based computers.
The system notifies the component of file changes, and it stores those
changes in a buffer the component creates and passes to the APIs. Each
event can use up to 16 bytes of memory, not including the file name.
If there are many changes in a short time, the buffer can overflow.
This causes the component to lose track of changes in the directory,
and it will only provide blanket notification.
Increasing the size of the buffer can prevent missing file system
change events. However, increasing buffer size is expensive, because
it comes from non-paged memory that cannot be swapped out to disk, so
keep the buffer as small as possible. To avoid a buffer overflow, use
the NotifyFilter and IncludeSubdirectories properties to filter out
unwanted change notifications.
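If the buffer does overflow, FileSystemWatcher raises its Error event with an InternalBufferOverflowException; handling it and falling back to a full rescan is a common recovery pattern. A sketch, with RescanFolder standing in for whatever re-synchronisation your application needs:

fileSWatch.Error += (sender, e) =>
{
    if (e.GetException() is InternalBufferOverflowException)
    {
        // Events were lost; re-enumerate the folder to get back in sync.
        RescanFolder(((FileSystemWatcher)sender).Path);
    }
};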

FileSystemWatcher unreliable for changes in subdirectory

I am currently implementing file content watchers for OpenFOAM output files. These files get written by OpenFOAM in an Unix environment and consumed by my applications in a Windows environment.
Please consider my first, working watcher for convergence files (these files get updated after each iteration of the solution):
FileSystemWatcher watcher;
watcher = new FileSystemWatcher(WatchPath, "convergenceUp*.out");
watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Attributes | NotifyFilters.FileName | NotifyFilters.Size;
watcher.Changed += Watcher_Changed;
watcher.EnableRaisingEvents = true;
private void Watcher_Changed(object sender, FileSystemEventArgs e)
{
    Files = Directory.GetFiles(WatchPath, "convergenceUp*.out").OrderBy(x => x).ToList(); // Update list of all files in the directory
    ReadFiles(); // Do fancy stuff with the files
}
This works as expected. Every time a file matching the pattern is changed in the watched directory (Notepad++ notifies me that the file has changed as well), the files are processed.
Moving on from this simple "all files are in one directory" scenario, I started to build a watcher for a different type of file (force function objects, for those familiar with OpenFOAM). These files are saved in a hierarchical folder structure like this:
NameOfFunctionObject
|_StartTimeOfSolutionSetup#1
| |_forces.dat
|_StartTimeOfSolutionSetup#2
|_forces.dat
My goal is to read all forces.dat files from "NameOfFunctionObject" and do some trickery with all the contained data. Additionally I would like the option of reading and watching just one file. So my implementation (which borrows heavily from the above) currently looks like this:
FileSystemWatcher watcher;
if (isSingleFile)
    watcher = new FileSystemWatcher(Directory.GetParent(WatchPath).ToString(), Path.GetFileName(WatchPath));
else
    watcher = new FileSystemWatcher(WatchPath, "forces.dat");
watcher.IncludeSubdirectories = !isSingleFile;
watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Attributes | NotifyFilters.FileName | NotifyFilters.Size | NotifyFilters.DirectoryName | NotifyFilters.LastAccess | NotifyFilters.CreationTime | NotifyFilters.Security;
watcher.Changed += Watcher_Changed;
watcher.Created += Watcher_Created;
watcher.Deleted += Watcher_Deleted;
watcher.Error += Watcher_Error;
watcher.Renamed += Watcher_Renamed;
watcher.EnableRaisingEvents = isWatchEnabled;
So depending on whether I want to watch just one file or multiple files, I set up the directory to watch and the file filter. If I watch multiple files I set the watcher to watch subdirectories as well. For the sake of vigorous testing I filter for all notifications and catch all watcher events.
If I test the single-file option, everything works as expected: changes to the file are reported and processed correctly (again, the check with trusty old Notepad++ works).
On testing the multi-file option though, things go pear-shaped.
The file paths are correct and the initial read works as expected, but no watcher event ever fires. Here comes the curious bit: Notepad++ still beeps away saying the file has changed, and Windows Explorer shows a new file date and a new file size. If I save the file from within Notepad++, the watcher gets triggered. If I create a new file matching the pattern inside the watched directory (top level or below does not matter!), the watcher gets triggered. Even watching with a filter of . to catch the creation of temporary files does not trigger anything, so it is safe to assume that no temporary files are created.
In general the watcher behaves as expected: it can detect changes to a single file, and it can detect the creation of files in the root watched folder and its subfolders. It just fails to recognise non-Windows changes to a file once that file is located in a subfolder. Is this behaviour by design? And more importantly: how can I work around it elegantly without resorting to a timer and polling by hand?
I think this might be relevant to you
FileSystemWatcher uses ReadDirectoryChangesW Winapi call with a few relevant flags
When you first call ReadDirectoryChangesW, the system allocates a
buffer to store change information. This buffer is associated with the
directory handle until it is closed and its size does not change
during its lifetime. Directory changes that occur between calls to
this function are added to the buffer and then returned with the next
call. If the buffer overflows, the entire contents of the buffer are
discarded
The analogue in FileSystemWatcher is the FileSystemWatcher.InternalBufferSize property
Remarks You can set the buffer to 4 KB or larger, but it must not
exceed 64 KB. If you try to set the InternalBufferSize property to
less than 4096 bytes, your value is discarded and the
InternalBufferSize property is set to 4096 bytes. For best
performance, use a multiple of 4 KB on Intel-based computers.
The system notifies the component of file changes, and it stores those
changes in a buffer the component creates and passes to the APIs. Each
event can use up to 16 bytes of memory, not including the file name.
If there are many changes in a short time, the buffer can overflow.
This causes the component to lose track of changes in the directory,
and it will only provide blanket notification. Increasing the size of
the buffer can prevent missing file system change events. However,
increasing buffer size is expensive, because it comes from non-paged
memory that cannot be swapped out to disk, so keep the buffer as small
as possible. To avoid a buffer overflow, use the NotifyFilter and
IncludeSubdirectories properties to filter out unwanted change
notifications.
If worst comes to worst, you can use a mix of polling and tracking; it has helped me out of trouble a few times.
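A minimal sketch of that mixed approach for the forces.dat case from the question: keep the watcher for the events it does deliver, and back it up with a timer that compares LastWriteTimeUtc stamps for the changes it misses (ProcessChanged is a hypothetical stand-in for the question's ReadFiles logic):

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

class HybridWatcher
{
    readonly Dictionary<string, DateTime> _lastSeen = new Dictionary<string, DateTime>();
    readonly string _root;
    readonly Action<string> _processChanged;   // hypothetical callback
    readonly FileSystemWatcher _watcher;
    readonly Timer _pollTimer;

    public HybridWatcher(string root, Action<string> processChanged)
    {
        _root = root;
        _processChanged = processChanged;

        _watcher = new FileSystemWatcher(root, "forces.dat") { IncludeSubdirectories = true };
        _watcher.Changed += (s, e) => _processChanged(e.FullPath);
        _watcher.Created += (s, e) => _processChanged(e.FullPath);
        _watcher.EnableRaisingEvents = true;

        // Poll every few seconds for changes the watcher did not report.
        _pollTimer = new Timer(_ => Poll(), null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    void Poll()
    {
        foreach (var file in Directory.EnumerateFiles(_root, "forces.dat", SearchOption.AllDirectories))
        {
            DateTime stamp = File.GetLastWriteTimeUtc(file);
            if (!_lastSeen.TryGetValue(file, out DateTime previous) || stamp > previous)
            {
                _lastSeen[file] = stamp;
                _processChanged(file);
            }
        }
    }
}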

Limit Disk I/O in a single thread C# [duplicate]

I'm developing an application (.NET 4.0, C#) that:
1. Scans file system.
2. Opens and reads some files.
The app will work in the background and should have a low impact on disk usage. It shouldn't bother users while they are doing their usual tasks and disk usage is high. And vice versa, the app can go faster if nobody is using the disk.
The main issue is that I don't know the real amount and size of the I/O operations, because I use an API (mapi32.dll) to read files. If I ask the API to do something, I don't know how many bytes it reads to handle my request.
So the question is how to monitor and manage the disk usage, including file system scanning and file reading.
Should I check the performance counters that are used by the standard Performance Monitor tool? Or is there another way?
Using the System.Diagnostics.PerformanceCounter class, attach to the PhysicalDisk counter related to the drive that you are indexing.
Below is some code to illustrate, although it's currently hard-coded to the "C:" drive. You will want to change "C:" to whichever drive your process is scanning. (This is rough sample code only, meant to illustrate the existence of performance counters; don't take it as providing accurate information, treat it as a guide only, and adapt it for your own purposes.)
Observe the % Idle Time counter which indicates how often the drive is doing anything.
0% idle means the disk is busy, but does not necessarily mean that it is flat-out and cannot transfer more data.
Combine the % Idle Time with Current Disk Queue Length and this will tell you if the drive is getting so busy that it cannot service all the requests for data. As a general guideline, anything over 0 means the drive is probably flat-out busy and anything over 2 means the drive is completely saturated. These rules apply to both SSD and HDD fairly well.
Also, any value that you read is an instantaneous value at a point in time. You should do a running average over a few results, e.g. take a reading every 100ms and average 5 readings before using the information from the result to make a decision (i.e., waiting until the counters settle before making your next IO request).
internal DiskUsageMonitor(string driveName)
{
    // Get a list of the counters and look for "C:"
    var perfCategory = new PerformanceCounterCategory("PhysicalDisk");
    string[] instanceNames = perfCategory.GetInstanceNames();
    foreach (string name in instanceNames)
    {
        if (name.IndexOf("C:") > 0)
        {
            if (string.IsNullOrEmpty(driveName))
                driveName = name;
        }
    }

    _readBytesCounter = new PerformanceCounter("PhysicalDisk", "Disk Read Bytes/sec", driveName);
    _writeBytesCounter = new PerformanceCounter("PhysicalDisk", "Disk Write Bytes/sec", driveName);
    _diskQueueCounter = new PerformanceCounter("PhysicalDisk", "Current Disk Queue Length", driveName);
    _idleCounter = new PerformanceCounter("PhysicalDisk", "% Idle Time", driveName);
    InitTimer();
}

internal event DiskUsageResultHander DiskUsageResult;

private void InitTimer()
{
    StopTimer();
    _perfTimer = new Timer(_updateResolutionMillisecs);
    _perfTimer.Elapsed += PerfTimerElapsed;
    _perfTimer.Start();
}

private void PerfTimerElapsed(object sender, ElapsedEventArgs e)
{
    float diskReads = _readBytesCounter.NextValue();
    float diskWrites = _writeBytesCounter.NextValue();
    float diskQueue = _diskQueueCounter.NextValue();
    float idlePercent = _idleCounter.NextValue();

    if (idlePercent > 100)
    {
        idlePercent = 100;
    }

    if (DiskUsageResult != null)
    {
        var stats = new DiskUsageStats
        {
            DriveName = _readBytesCounter.InstanceName,
            DiskQueueLength = (int)diskQueue,
            ReadBytesPerSec = (int)diskReads,
            WriteBytesPerSec = (int)diskWrites,
            DiskUsagePercent = 100 - (int)idlePercent
        };
        DiskUsageResult(stats);
    }
}
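Building on the fields above, one rough way to act on the "running average" advice is to keep the last few idle-time samples and pause the scanner whenever the averaged busy percentage crosses a threshold (the window size, threshold and sleep values here are arbitrary assumptions):

// Requires using System.Linq; for Average().
private readonly Queue<float> _idleSamples = new Queue<float>();

// Call this between the IO requests made by your scanner.
private void ThrottleIfDiskBusy()
{
    float idle = _idleCounter.NextValue();
    _idleSamples.Enqueue(idle);
    if (_idleSamples.Count > 5)
        _idleSamples.Dequeue();

    float avgBusy = 100f - _idleSamples.Average();
    if (avgBusy > 80f)        // disk looks busy with other work
        Thread.Sleep(500);    // back off before the next request
}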
A long time ago Microsoft Research published a paper on this (sorry, I can't remember the URL).
From what I recall:
The program started off doing very few "work items".
They measured how long each "work item" took.
After running for a bit, they could work out how fast a "work item" was with no load on the system.
From then on, if the "work items" were fast (e.g. no other programs making requests), they made more requests; otherwise they backed off.
The basic idea is:
“if they are slowing me down, then I
must be slowing them down, so do less
work if I am being slowed down”
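That self-tuning idea can be sketched like this: time each work item against a baseline measured on a quiet system, and add delay when items start taking noticeably longer (the 1.5x factor and the delay steps are made-up numbers):

using System;
using System.Diagnostics;
using System.Threading;

class AdaptiveThrottle
{
    readonly double _baselineMs;   // cost of one work item on an idle system
    int _delayMs;                  // current back-off delay

    public AdaptiveThrottle(double baselineMs) => _baselineMs = baselineMs;

    public void Run(Action workItem)
    {
        var sw = Stopwatch.StartNew();
        workItem();
        sw.Stop();

        if (sw.Elapsed.TotalMilliseconds > _baselineMs * 1.5)
            _delayMs = Math.Min(_delayMs + 100, 2000);   // we seem to be competing; back off
        else
            _delayMs = Math.Max(_delayMs - 100, 0);      // system looks idle; speed up

        if (_delayMs > 0)
            Thread.Sleep(_delayMs);
    }
}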
Something to ponder: what if there are other processes which follow the same (or a similar) strategy? Which one would run during the "idle time"? Would the other processes get a chance to make use of the idle time at all?
Obviously this can't be done correctly unless there is some well-known OS mechanism for fairly dividing resources during idle time. In Windows, this is done by calling SetPriorityClass.
This document about I/O prioritization in Vista seems to imply that IDLE_PRIORITY_CLASS will not really lower the priority of I/O requests (though it will reduce the scheduling priority for the process). Vista added new PROCESS_MODE_BACKGROUND_BEGIN and PROCESS_MODE_BACKGROUND_END values for that.
In C#, you can normally set the process priority with the Process.PriorityClass property. The new values for Vista are not available though, so you'll have to call the Windows API function directly. You can do that like this:
[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
public static extern bool SetPriorityClass(IntPtr handle, uint priorityClass);
const uint PROCESS_MODE_BACKGROUND_BEGIN = 0x00100000;
static void SetBackgroundMode()
{
    if (!SetPriorityClass(new IntPtr(-1), PROCESS_MODE_BACKGROUND_BEGIN))
    {
        // handle error...
    }
}
I did not test the code above. Don't forget that it can only work on Vista or better. You'll have to use Environment.OSVersion to check for earlier operating systems and implement a fall-back strategy.
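A hedged sketch of that fall-back, using the managed Process.PriorityClass on systems where the Vista-only background mode is unavailable (SetBackgroundMode is the method shown above):

using System;
using System.Diagnostics;

static void LowerPriority()
{
    if (Environment.OSVersion.Version.Major >= 6)   // Vista or newer
    {
        SetBackgroundMode();                        // PROCESS_MODE_BACKGROUND_BEGIN, as above
    }
    else
    {
        // Older Windows: only the scheduling priority can be lowered, not the IO priority.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.Idle;
    }
}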
See this question and this also for related queries. For a simple solution I would suggest just querying the current disk and CPU usage percentage every so often, and only continuing with the current task when they are under a defined threshold. Just make sure your work is easily broken into tasks, and that each task can be easily and efficiently started and stopped.
Check if the screensaver is running? That's a good indication that the user is away from the keyboard.
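If you want to go that route, the screensaver state can be queried through SystemParametersInfo. A sketch; SPI_GETSCREENSAVERRUNNING is the documented action code, but treat the P/Invoke details as something to verify for your target framework:

using System.Runtime.InteropServices;

static class ScreenSaver
{
    const uint SPI_GETSCREENSAVERRUNNING = 0x0072;

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SystemParametersInfo(uint uiAction, uint uiParam, ref bool pvParam, uint fWinIni);

    public static bool IsRunning()
    {
        bool running = false;
        SystemParametersInfo(SPI_GETSCREENSAVERRUNNING, 0, ref running, 0);
        return running;
    }
}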

Memory mapped files that are contiguous on disk

I've read quite a few SO posts and general articles on trying to allocate over 1GB of memory so before getting shot down like the others here is some context.
This app will run as a kiosk with a dedicated machine running no unnecessary processes.
My app acquires images from a high-speed camera with a rolling shutter at a rate of 120 frames per second at a resolution of 1920 x 1080 with a bit depth of 24. The app needs to write every single frame to disk for post-processing. The current problem I am facing is that the Disk I/O won't keep up with the capture rate even though it is limited to 120 frames per second. The Disk I/O bandwidth needed is around 750MBps!
The total length of the recording needs to be at least 10 seconds (7.5GB) in raw form. Performing any on-the-fly transcoding or compression brings the frame-rate down to utterly unacceptable levels.
To work around this, I have tried the following:
Compromising on quality by reducing the bit-depth at hardware-level to 16 which is still around 500MBps.
Disabled all image encoding and writing raw camera data to disk. This has saved some processing time.
Creating a single 10GB file on disk and doing a sequential write-through as frames come in. This has helped most so far. All dev and production systems have a 100GB dedicated drive for this application.
Using Contig.exe from Sysinternals to defragment the file. This has had astonishing gains on non-SSD drives.
I am out of options to explore here. I am not familiar with memory-mapped files, and when trying to create them I get an IOException saying "Not enough storage is available to process this command."
using (var file = MemoryMappedFile.CreateFromFile(@"D:\Temp.VideoCache", FileMode.OpenOrCreate, "MyMapName", int.MaxValue, MemoryMappedFileAccess.CopyOnWrite))
{
...
}
The large file I currently use requires either sequential write-though or sequential read access. Any pointers would be appreciated.
I could even force the overall recording size down to 1.8GB if only there was a way to allocate that much RAM. Once again, this will run on a dedicated machine with 8GB of available memory and 100GB of free space. However, not all production systems will have SSD drives.
32 bit processes on a 64 bit system can allocate 4 GB of RAM, so it should be possible to get 1.8 GB of RAM for storing the video, but of course you need to consider loaded DLLs and a buffer until the video is compressed.
Other than that, you could use a RAMDisk, e.g. from DataRam. You just need to find a balance between how much memory the application needs and how much memory you can grant the disk. IMHO a 5 GB / 3 GB setting could work well: 1 GB for the OS, 4 GB for your application and 3 GB for the file.
Don't forget to copy the file from the RAM disk to HDD if you want it persistent.
Commodity hardware is cheap for a reason. You need faster hardware.
Buy a faster disk system. A good RAID controller and four SSDs. Put the drives into a RAID 1+0 configuration and be done with this problem.
How much money is your company planning on spending developing and testing software to push cheap hardware past its limitations? And even if you can get it to work fast enough, how much do they plan on spending to maintain that software?
Memory mapped files don't speed up writing to a file very much...
If you have a big file, you normally don't try to map it entirely into RAM... you map a "window" of it, then "move" the window (in C#/the Windows API you create a "view" of the file starting at any location and with a certain size).
Example code (here the window is 1 MB; bigger windows are possible... at 32 bits it should be possible to allocate a 64 or 128 MB window without any problem):
const string fileName = "Test.bin";
const long fileSize = 1024L * 1024 * 16;
const long windowSize = 1024 * 1024;

if (!File.Exists(fileName)) {
    using (var file = File.Create(fileName)) {
        file.SetLength(fileSize);
    }
}

long realFileSize = new FileInfo(fileName).Length;
if (realFileSize < fileSize) {
    using (var file = File.Create(fileName)) {
        file.SetLength(fileSize);
    }
}

using (var mm = MemoryMappedFile.CreateFromFile(fileName, FileMode.Open)) {
    long start = 0;
    while (true) {
        long size = Math.Min(fileSize - start, windowSize);
        if (size <= 0) {
            break;
        }
        using (var acc = mm.CreateViewAccessor(start, size)) {
            for (int i = 0; i < size; i++) {
                // It is probably faster if you write the file with
                // acc.WriteArray()
                acc.Write(i, (byte)i);
            }
        }
        start += windowSize;
    }
}
Note that here I'm writing code that writes a fixed, pre-known number of bytes (fileSize)... your code will be different (because you can't know the "exact" fileSize in advance). Still, remember: memory mapped files don't speed up writing to a file very much.
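If you do go with the windowed views, the per-frame bulk write that the comment above hints at would look something like this (the frame size and GetNextFrameFromCamera are assumptions based on the question; mm and start are the variables from the code above):

// One 1920 x 1080 x 24-bit frame is 6,220,800 bytes.
const int frameSize = 1920 * 1080 * 3;
byte[] frame = GetNextFrameFromCamera();   // hypothetical acquisition call

using (var acc = mm.CreateViewAccessor(start, frameSize))
{
    acc.WriteArray(0, frame, 0, frame.Length);  // bulk copy instead of byte-by-byte writes
}
start += frameSize;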
