Multiple threads searching the same folder at the same time - C#

Currently I have a .txt file of about 170,000 .jpg file names, and I read them all into a List (fileNames).
I want to search ONE folder (which has sub-folders) to check whether each file in fileNames exists there and, if it does, copy it to a new folder.
By a rough estimate, each search-and-copy takes about 0.5 seconds per file name, so 170,000 × 0.5 s ≈ 85,000 seconds, which is roughly 24 hours for my app to search for every single file name using one thread! Obviously this is too long, so I want to speed the process up. What is the best way to go about doing this with multi-threading?
Currently I am thinking of making 20 separate threads, splitting my list (fileNames) into 20 smaller lists, and searching for the files simultaneously. For example, I would have 20 different threads executing the code below at the same time:
foreach (string str in fileNames)
{
    foreach (var file in Directory.GetFiles(folderToCheckForFileName, str, SearchOption.AllDirectories))
    {
        string combinedPath = Path.Combine(newTargetDirectory, Path.GetFileName(file));
        if (!File.Exists(combinedPath))
        {
            File.Copy(file, combinedPath);
        }
    }
}
UPDATED TO SHOW MY SOLUTION BELOW:
string[] folderToCheckForFileNames = Directory.GetFiles("C:\\Users\\Alex\\Desktop\\ok", "*.jpg", SearchOption.AllDirectories);
foreach (string str in fileNames)
{
    Parallel.ForEach(folderToCheckForFileNames, currentFile =>
    {
        string filename = Path.GetFileName(currentFile);
        if (str == filename)
        {
            string combinedPath = Path.Combine(targetDir, filename);
            if (!File.Exists(combinedPath))
            {
                File.Copy(currentFile, combinedPath);
                Console.WriteLine("FOUND A MATCH AND COPIED" + currentFile);
            }
        }
    });
}
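(A further refinement of the same idea, shown here only as an untested sketch: load fileNames into a HashSet<string> so the directory tree is still walked just once, but each name check becomes an O(1) lookup instead of comparing every file against every wanted name. fileNames and targetDir are the variables used above; the case-insensitive comparer is an assumption based on Windows file-name behaviour.)
// Build a fast lookup of the wanted names (case-insensitive is assumed here).
var wanted = new HashSet<string>(fileNames, StringComparer.OrdinalIgnoreCase);
// Walk the tree once and copy only the matches.
foreach (string currentFile in Directory.EnumerateFiles("C:\\Users\\Alex\\Desktop\\ok", "*.jpg", SearchOption.AllDirectories))
{
    string filename = Path.GetFileName(currentFile);
    if (wanted.Contains(filename))
    {
        string combinedPath = Path.Combine(targetDir, filename);
        if (!File.Exists(combinedPath))
        {
            File.Copy(currentFile, combinedPath);
        }
    }
}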
Thank you everyone for your contributions! Greatly Appreciated!

Instead of using an ordinary foreach statement for your search, you should use Parallel LINQ (PLINQ). PLINQ combines the simplicity and readability of LINQ syntax with the power of parallel programming, just like code that targets the Task Parallel Library. This shields you from low-level thread manipulation and from hard-to-find/debug exceptions while splitting your work among many threads. So you might do something like this:
fileNames.AsParallel().ForAll(str =>
{
    var files = Directory.GetFiles(folderToCheckForFileName, str, SearchOption.AllDirectories);
    files.AsParallel().ForAll(file =>
    {
        if (!string.IsNullOrEmpty(file))
        {
            string combinedPath = Path.Combine(newTargetDirectory, Path.GetFileName(file));
            if (!File.Exists(combinedPath))
            {
                File.Copy(file, combinedPath);
            }
        }
    });
});

20 different threads won't help if your computer has fewer than 20 cores. In fact, they can make the process slower because 1) you have to spend time context switching between threads (which is the CPU's way of emulating more than one thread per core), and 2) a Thread in .NET reserves 1 MB for its stack, which is pretty hefty.
Instead, try dividing your I/O into async workloads, using Task.Run for the CPU-bound / intensive parts. Also, keep the number of Tasks to maybe 4 to 8 at the max.
Sample code:
var tasks = new Task[8];
var names = fileNames.ToArray();
for (int i = 0; i < tasks.Length; i++)
{
    int index = i;
    tasks[i] = Task.Run(() =>
    {
        for (int current = index; current < names.Length; current += 8)
        {
            // execute the workload
            string str = names[current];
            foreach (var file in Directory.GetFiles(folderToCheckForFileName, str, SearchOption.AllDirectories))
            {
                string combinedPath = Path.Combine(newTargetDirectory, Path.GetFileName(file));
                if (!File.Exists(combinedPath))
                {
                    File.Copy(file, combinedPath);
                }
            }
        }
    });
}
Task.WaitAll(tasks);
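If the manual index striding feels error-prone, the same cap on parallelism can also be expressed with Parallel.ForEach and MaxDegreeOfParallelism (a sketch only, reusing the variable names from the question):
// Limit the work to 8 concurrent workers instead of striding indices by hand.
var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };
Parallel.ForEach(fileNames, options, str =>
{
    foreach (var file in Directory.GetFiles(folderToCheckForFileName, str, SearchOption.AllDirectories))
    {
        string combinedPath = Path.Combine(newTargetDirectory, Path.GetFileName(file));
        if (!File.Exists(combinedPath))
        {
            File.Copy(file, combinedPath);
        }
    }
});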

Related

How to read and write more than 25000 records/lines into a text file at a time?

I am connecting my application to a stock market live data provider using a web socket. When the market is live and the socket is open, it gives me nearly 45,000 lines a minute. I deserialize it line by line, write each line into a text file, and also read the text file and remove its first line. Handling another process alongside the socket therefore becomes slow. Please can you help me with how to perform that process very fast, at nearly 25,000 lines a minute?
string filePath = @"D:\Aggregate_Minute_AAPL.txt";
var records = (from line in File.ReadLines(filePath).AsParallel()
               select line);
List<string> str = records.ToList();
str.ForEach(x =>
{
    string result = x;
    result = result.TrimStart('[').TrimEnd(']');
    var jsonString = Newtonsoft.Json.JsonConvert.DeserializeObject<List<LiveAMData>>(x);
    foreach (var item in jsonString)
    {
        string value = "";
        string dirPath = @"D:\COMB1\MinuteAggregates";
        string[] fileNames = null;
        fileNames = System.IO.Directory.GetFiles(dirPath, item.sym + "_*.txt", System.IO.SearchOption.AllDirectories);
        if (fileNames.Length > 0)
        {
            string _fileName = fileNames[0];
            var lineList = System.IO.File.ReadAllLines(_fileName).ToList();
            lineList.RemoveAt(0);
            var _item = lineList[lineList.Count - 1];
            if (!_item.Contains(item.sym))
            {
                lineList.RemoveAt(lineList.Count - 1);
            }
            System.IO.File.WriteAllLines((_fileName), lineList.ToArray());
            value = $"{item.sym},{item.s},{item.o},{item.h},{item.c},{item.l},{item.v}{Environment.NewLine}";
            using (System.IO.StreamWriter sw = System.IO.File.AppendText(_fileName))
            {
                sw.Write(value);
            }
        }
    }
});
How can I make this process fast? When the application performs all of this, it only gets through nearly 3,000 to 4,000 symbols, whereas with no extra processing it handles 25,000 lines per minute. So how can I increase the line execution rate with all this code?
First you need to clean up your code to gain more visibility. I did a quick refactor and this is what I got:
const string FilePath = @"D:\Aggregate_Minute_AAPL.txt";

class SomeClass
{
    public string Sym { get; set; }
    public string Other { get; set; }
}

private void Something()
{
    File
        .ReadLines(FilePath)
        .AsParallel()
        .Select(x => x.TrimStart('[').TrimEnd(']'))
        .Select(JsonConvert.DeserializeObject<List<SomeClass>>)
        .ForAll(WriteRecord);
}

private const string DirPath = @"D:\COMB1\MinuteAggregates";
private const string Separator = @",";

private void WriteRecord(List<SomeClass> data)
{
    foreach (var item in data)
    {
        var fileNames = Directory
            .GetFiles(DirPath, item.Sym + "_*.txt", SearchOption.AllDirectories);
        foreach (var fileName in fileNames)
        {
            var fileLines = File.ReadAllLines(fileName)
                .Skip(1).ToList();
            var lastLine = fileLines.Last();
            if (!lastLine.Contains(item.Sym))
            {
                fileLines.RemoveAt(fileLines.Count - 1);
            }
            fileLines.Add(
                new StringBuilder()
                    .Append(item.Sym)
                    .Append(Separator)
                    .Append(item.Other)
                    .Append(Environment.NewLine)
                    .ToString()
            );
            File.WriteAllLines(fileName, fileLines);
        }
    }
}
From here it should be easier to play with List.AsParallel to check how, and with what parameters, the code is faster. Also:
- You are opening the write file twice.
- The removes are also somewhat expensive, and removing at index 0 even more so (however, if there are few elements this may not make much difference).
- The if (fileNames.Length > 0) check is useless; just use the loop, because if the list is empty the loop will simply be skipped.
- You can try StringBuilder instead of string interpolation.
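As a concrete, untested starting point for that experimentation, the pipeline above can be given an explicit degree of parallelism (the value 4 is arbitrary; measure and adjust):
File
    .ReadLines(FilePath)
    .AsParallel()
    .WithDegreeOfParallelism(4) // arbitrary starting value; profile and tune
    .Select(x => x.TrimStart('[').TrimEnd(']'))
    .Select(JsonConvert.DeserializeObject<List<SomeClass>>)
    .ForAll(WriteRecord);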
I hope these hints can help you improve your time, and that I have not forgotten something.
Edit
We have nearly 10,000 files in our directory. So when the process is running, it throws an error that the process cannot access the file because it is being used by another process.
Well, is there a possibility that your processed lines contain duplicated file names?
If that is the case, you could try a simple approach: a retry after some milliseconds, something like
private const int SleepMillis = 5;
private const int MaxRetries = 3;

public void WriteFile(string fileName, string[] fileLines, int retries = 0)
{
    try
    {
        File.WriteAllLines(fileName, fileLines);
    }
    catch (Exception e) // Catch the specific type if you can
    {
        if (retries >= MaxRetries)
        {
            Console.WriteLine("Too many tries with no success");
            throw; // rethrow exception
        }
        Thread.Sleep(SleepMillis);
        WriteFile(fileName, fileLines, ++retries); // try again
    }
}
I tried to keep it simple, but there are some annotations:
- If you can make your methods async, it could be an improvement to replace the sleep with a Task.Delay, but you need to know and understand well how async works.
- If the collision happens a lot, then you should try another approach, something like a concurrent map with semaphores (see the sketch below).
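A minimal sketch of that last idea, assuming .NET 4.0 or later for ConcurrentDictionary and SemaphoreSlim (the names FileGates and WriteFileExclusive are illustrative, not from the original code; requires System.IO, System.Threading and System.Collections.Concurrent):
// One gate per file name, so concurrent writers to the same file queue up
// instead of colliding with "file in use by another process" errors.
private static readonly ConcurrentDictionary<string, SemaphoreSlim> FileGates =
    new ConcurrentDictionary<string, SemaphoreSlim>();

public static void WriteFileExclusive(string fileName, string[] fileLines)
{
    var gate = FileGates.GetOrAdd(fileName, _ => new SemaphoreSlim(1, 1));
    gate.Wait();
    try
    {
        File.WriteAllLines(fileName, fileLines);
    }
    finally
    {
        gate.Release();
    }
}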
Second edit
In a real scenario I am connecting to the websocket and receiving 70,000 to 100,000 (1 lakh) records every minute, and after that I am splitting those records of live streaming data and storing each in its own file. And that becomes slower when I am applying your concept with 11,000 files.
It is a hard problem. From what I understand, you're talking about roughly 1,166 records per second, and at this size the little details can become big bottlenecks.
At that point I think it is better to think about other solutions; it could be too much I/O for the disk, it could be too many threads, or too few, or the network...
You should start by profiling the app to check where it is spending the most time, so you can focus on that area. How many resources is it using? How many resources do you have? How are the memory, processor, garbage collector, and network doing? Do you have an SSD?
You need a clear view of what is slowing you down so you can attack it directly. It will depend on a lot of things, so it will be hard to help with that part :(.
There are tons of tools for profiling C# apps, and many ways to attack this problem (spread the load across several servers, use something like Redis to save data really quickly, use some event store so you can work with events...).

C# Fastest way to grab all of the subdirectories from a given path [duplicate]

I have a base directory that contains several thousand folders. Inside these folders there can be between 1 and 20 subfolders that contain between 1 and 10 files. I'd like to delete all files that are over 60 days old. I was using the code below to get the list of files that I would have to delete:
DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
FileInfo[] oldFiles =
    dirInfo.GetFiles("*.*", SearchOption.AllDirectories)
        .Where(t => t.CreationTime < DateTime.Now.AddDays(-60)).ToArray();
But I let this run for about 30 minutes and it still hadn't finished. I'm curious if anyone can see any way that I could potentially improve the performance of the above line, or if there is a different way I should be approaching this entirely for better performance. Suggestions?
This is (probably) as good as it's going to get:
DateTime sixtyLess = DateTime.Now.AddDays(-60);
DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
FileInfo[] oldFiles =
    dirInfo.EnumerateFiles("*.*", SearchOption.AllDirectories)
        .AsParallel()
        .Where(fi => fi.CreationTime < sixtyLess).ToArray();
Changes:
Made the 60-days-ago DateTime (sixtyLess) a constant outside the query, and therefore less CPU load.
Used EnumerateFiles.
Made the query parallel.
Should run in a smaller amount of time (not sure how much smaller).
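Once oldFiles has been collected, the deletion pass the question describes might look like this (a sketch only; the exception handling shown is illustrative):
foreach (FileInfo fi in oldFiles)
{
    try
    {
        fi.Delete();
    }
    catch (IOException ex)
    {
        // The file may be locked by another process; log it and move on.
        Console.WriteLine("Could not delete " + fi.FullName + ": " + ex.Message);
    }
    catch (UnauthorizedAccessException ex)
    {
        Console.WriteLine("Access denied for " + fi.FullName + ": " + ex.Message);
    }
}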
Here is another solution, which might be faster or slower than the first depending on the data:
DateTime sixtyLess = DateTime.Now.AddDays(-60);
DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
FileInfo[] oldFiles =
    dirInfo.EnumerateDirectories()
        .AsParallel()
        .SelectMany(di => di.EnumerateFiles("*.*", SearchOption.AllDirectories)
            .Where(fi => fi.CreationTime < sixtyLess))
        .ToArray();
Here it moves the parallelism to the main folder enumeration. Most of the changes from above apply too.
A possibly faster alternative is to use the WINAPI FindNextFile. There is an excellent Faster Directory Enumeration Tool for this, which can be used as follows:
HashSet<FileData> GetPast60(string dir)
{
    DateTime retval = DateTime.Now.AddDays(-60);
    HashSet<FileData> oldFiles = new HashSet<FileData>();

    FileData[] files = FastDirectoryEnumerator.GetFiles(dir);
    for (int i = 0; i < files.Length; i++)
    {
        if (files[i].LastWriteTime < retval)
        {
            oldFiles.Add(files[i]);
        }
    }
    return oldFiles;
}
EDIT
So, based on comments below, I decided to benchmark the solutions suggested here as well as others I could think of. It was interesting to see that EnumerateFiles seemed to out-perform FindNextFile in C#, while EnumerateFiles with AsParallel was by far the fastest, followed surprisingly by the command prompt count. However, do note that AsParallel wasn't getting the complete file count, or was missing some files counted by the others, so you could say the command prompt method is the best.
Applicable Config:
Windows 7 Service Pack 1 x64
Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
RAM: 6GB
Platform Target: x64
No Optimization (NB: Compiling with optimization will produce drastically poor performance)
Allow UnSafe Code
Start Without Debugging
I have included my test code below:
static void Main(string[] args)
{
    Console.Title = "File Enumeration Performance Comparison";
    Stopwatch watch = new Stopwatch();
    watch.Start();
    var allfiles = GetPast60("C:\\Users\\UserName\\Documents");
    watch.Stop();
    Console.WriteLine("Total time to enumerate using WINAPI =" + watch.ElapsedMilliseconds + "ms.");
    Console.WriteLine("File Count: " + allfiles);

    Stopwatch watch1 = new Stopwatch();
    watch1.Start();
    var allfiles1 = GetPast60Enum("C:\\Users\\UserName\\Documents\\");
    watch1.Stop();
    Console.WriteLine("Total time to enumerate using EnumerateFiles =" + watch1.ElapsedMilliseconds + "ms.");
    Console.WriteLine("File Count: " + allfiles1);

    Stopwatch watch2 = new Stopwatch();
    watch2.Start();
    var allfiles2 = Get1("C:\\Users\\UserName\\Documents\\");
    watch2.Stop();
    Console.WriteLine("Total time to enumerate using Get1 =" + watch2.ElapsedMilliseconds + "ms.");
    Console.WriteLine("File Count: " + allfiles2);

    Stopwatch watch3 = new Stopwatch();
    watch3.Start();
    var allfiles3 = Get2("C:\\Users\\UserName\\Documents\\");
    watch3.Stop();
    Console.WriteLine("Total time to enumerate using Get2 =" + watch3.ElapsedMilliseconds + "ms.");
    Console.WriteLine("File Count: " + allfiles3);

    Stopwatch watch4 = new Stopwatch();
    watch4.Start();
    var allfiles4 = RunCommand(@"dir /a: /b /s C:\Users\UserName\Documents");
    watch4.Stop();
    Console.WriteLine("Total time to enumerate using Command Prompt =" + watch4.ElapsedMilliseconds + "ms.");
    Console.WriteLine("File Count: " + allfiles4);

    Console.WriteLine("Press Any Key to Continue...");
    Console.ReadLine();
}

private static int RunCommand(string command)
{
    var process = new Process()
    {
        StartInfo = new ProcessStartInfo("cmd")
        {
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            CreateNoWindow = true,
            Arguments = String.Format("/c \"{0}\"", command),
        }
    };
    int count = 0;
    process.OutputDataReceived += delegate { count++; };
    process.Start();
    process.BeginOutputReadLine();
    process.WaitForExit();
    return count;
}

static int GetPast60Enum(string dir)
{
    return new DirectoryInfo(dir).EnumerateFiles("*.*", SearchOption.AllDirectories).Count();
}

private static int Get2(string myBaseDirectory)
{
    DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
    return dirInfo.EnumerateFiles("*.*", SearchOption.AllDirectories)
        .AsParallel().Count();
}

private static int Get1(string myBaseDirectory)
{
    DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
    return dirInfo.EnumerateDirectories()
        .AsParallel()
        .SelectMany(di => di.EnumerateFiles("*.*", SearchOption.AllDirectories))
        .Count() + dirInfo.EnumerateFiles("*.*", SearchOption.TopDirectoryOnly).Count();
}

private static int GetPast60(string dir)
{
    return FastDirectoryEnumerator.GetFiles(dir, "*.*", SearchOption.AllDirectories).Length;
}
NB: I concentrated on file count in the benchmark, not modified date.
I realize this is very late to the party, but if someone else is looking for this, you can speed things up by orders of magnitude by directly parsing the MFT or FAT of the file system. This requires admin privileges, as I think it will return all files regardless of security, but it can probably take your 30 minutes down to 30 seconds for the enumeration stage at least.
A library for NTFS is here: https://github.com/LordMike/NtfsLib. There is also https://discutils.codeplex.com/, which I haven't personally used.
I would only use these methods for the initial discovery of files over x days old and then verify them individually before deleting. It might be overkill, but I'm cautious like that.
The method Get1 in the above answer (@itsnotalie & @Chibueze Opata) is missing the count of the files in the root directory, so it should read:
private static int Get1(string myBaseDirectory)
{
    DirectoryInfo dirInfo = new DirectoryInfo(myBaseDirectory);
    return dirInfo.EnumerateDirectories()
        .AsParallel()
        .SelectMany(di => di.EnumerateFiles("*.*", SearchOption.AllDirectories))
        .Count() + dirInfo.EnumerateFiles("*.*", SearchOption.TopDirectoryOnly).Count();
}
When using SearchOption.AllDirectories, EnumerateFiles took ages to return the first item. After reading several good answers here, I have for now ended up with the function below. By having it work on only one directory at a time and calling it recursively, it now returns the first item almost immediately.
But I must admit that I'm not totally sure of the correct way to use .AsParallel(), so don't use this blindly.
Instead of working with arrays, I would strongly suggest working with enumeration.
Some mention that disk speed is the limiting factor and threads won't help. In terms of total time that is very likely true as long as nothing is cached by the OS, but by using multiple threads you can get the cached data returned first, whereas otherwise the cache might be pruned to make space for the new results.
Recursive calls might affect the stack, but most file systems limit how many levels deep a path can be, so this should not become a real issue.
private static IEnumerable<FileInfo> EnumerateFilesParallel(DirectoryInfo dir)
{
    return dir.EnumerateDirectories()
        .AsParallel()
        .SelectMany(EnumerateFilesParallel)
        .Concat(dir.EnumerateFiles("*", SearchOption.TopDirectoryOnly).AsParallel());
}
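To tie this back to the original 60-day filter, one possible (untested) way to consume it, with myBaseDirectory being the variable from the question:
DateTime sixtyLess = DateTime.Now.AddDays(-60);
FileInfo[] oldFiles = EnumerateFilesParallel(new DirectoryInfo(myBaseDirectory))
    .Where(fi => fi.CreationTime < sixtyLess)
    .ToArray();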
You are using LINQ. It would be faster if you wrote your own method for searching directories recursively, tailored to your special case.
public static DateTime retval = DateTime.Now.AddDays(-60);
// Assumed collections for the results and for any access errors encountered.
public static List<System.IO.FileInfo> oldFiles = new List<System.IO.FileInfo>();
public static List<string> log = new List<string>();

public static void WalkDirectoryTree(System.IO.DirectoryInfo root)
{
    System.IO.FileInfo[] files = null;
    System.IO.DirectoryInfo[] subDirs = null;

    // First, process all the files directly under this folder
    try
    {
        files = root.GetFiles("*.*");
    }
    // This is thrown if even one of the files requires permissions greater
    // than the application provides.
    catch (UnauthorizedAccessException e)
    {
        // This code just writes out the message and continues to recurse.
        // You may decide to do something different here. For example, you
        // can try to elevate your privileges and access the file again.
        log.Add(e.Message);
    }
    catch (System.IO.DirectoryNotFoundException e)
    {
        Console.WriteLine(e.Message);
    }

    if (files != null)
    {
        foreach (System.IO.FileInfo fi in files)
        {
            if (fi.LastWriteTime < retval)
            {
                oldFiles.Add(fi);
            }
            Console.WriteLine(fi.FullName);
        }

        // Now find all the subdirectories under this directory.
        subDirs = root.GetDirectories();
        foreach (System.IO.DirectoryInfo dirInfo in subDirs)
        {
            // Recursive call for each subdirectory.
            WalkDirectoryTree(dirInfo);
        }
    }
}
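A possible call site, assuming the oldFiles and log collections declared above and the myBaseDirectory variable from the question (a sketch only):
WalkDirectoryTree(new System.IO.DirectoryInfo(myBaseDirectory));
Console.WriteLine("Found " + oldFiles.Count + " files older than 60 days; " + log.Count + " access errors were logged.");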
If you really want to improve performance, get your hands dirty and use the NtQueryDirectoryFile that's internal to Windows, with a large buffer size.
FindFirstFile is already slow, and while FindFirstFileEx is a bit better, the best performance will come from calling the native function directly.


Why do some files get missed out if I use Parallel.ForEach()?

Following is the code which processes about 10000 files.
var files = Directory.GetFiles(directorypath, "*.*", SearchOption.AllDirectories)
    .Where(name => !name.EndsWith(".gif") && !name.EndsWith(".jpg") && !name.EndsWith(".png"))
    .ToList();

Parallel.ForEach(files, Countnumberofwordsineachfile);
And the Countnumberofwordsineachfile function writes the number of words in each file into the output text.
Whenever I use Parallel.ForEach(), about 4-5 files get missed every time during processing.
Can anyone suggest why this happens?
public void Countnumberofwordsineachfile(string filepath)
{
    string[] arrwordsinfile = Regex.Split(File.ReadAllText(filepath).Trim(), @"\s+");
    Charactercount = Convert.ToInt32(arrwordsinfile.Length);
    filecontent.AppendLine(filepath + "=" + Charactercount);
}
filecontent is probably not thread-safe. So if two (or more) tasks attempt to append to it at the same time, one will win and the other will not. You need to remember to either lock the sections that use shared data, or not use shared data at all.
This is probably the easiest solution for your code. Locking synchronises access (other tasks have to queue up to access the locked section), so it will slow down the algorithm, but since the locked section is very short compared to the word-counting work, it isn't really going to be much of an issue.
private object myLock = new object();

public void Countnumberofwordsineachfile(string filepath)
{
    string[] arrwordsinfile = Regex.Split(File.ReadAllText(filepath).Trim(), @"\s+");
    lock (myLock)
    {
        // Assign and read Charactercount inside the lock so another thread
        // cannot overwrite it between the assignment and the AppendLine.
        Charactercount = Convert.ToInt32(arrwordsinfile.Length);
        filecontent.AppendLine(filepath + "=" + Charactercount);
    }
}
The cause has already been found, here is an alternative implementation:
//Parallel.ForEach(files, Countnumberofwordsineachfile);
var fileContent = files
    .AsParallel()
    .Select(f => f + "=" + Countnumberofwordsineachfile(f));
and that requires a more useful design for the count method:
// make this an 'int' function, more reusable as well
public int Countnumberofwordsineachfile(string filepath)
{ ...; return characterCount; }
But do note that going parallel won't help you much here: your main cost (ReadAllText) is I/O bound, so you will most likely see a degradation from using AsParallel().
The better option is to use Directory.EnumerateFiles and then collect the results without parallelism:
var files = Directory.EnumerateFiles(....);
var fileContent = files
    //.AsParallel()
    .Select(f => f + "=" + Countnumberofwordsineachfile(f));
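Putting those pieces together, here is a minimal, untested sketch of the int-returning count method and the non-parallel pipeline, writing the results once at the end (the Regex split comes from the question; CountAllFiles and outputPath are illustrative names):
public int Countnumberofwordsineachfile(string filepath)
{
    // Same counting logic as the question, but returned instead of stored in a shared field.
    return Regex.Split(File.ReadAllText(filepath).Trim(), @"\s+").Length;
}

public void CountAllFiles(string directorypath, string outputPath)
{
    var lines = Directory.EnumerateFiles(directorypath, "*.*", SearchOption.AllDirectories)
        .Where(name => !name.EndsWith(".gif") && !name.EndsWith(".jpg") && !name.EndsWith(".png"))
        .Select(f => f + "=" + Countnumberofwordsineachfile(f));

    // Single writer, so there is no shared mutable state and no locking needed.
    File.WriteAllLines(outputPath, lines);
}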

How to set a dynamic number of threadCounter variables?

I'm not really into multithreading, so the question is probably stupid, but it seems I cannot find a way to solve this problem (especially because I'm using C# and have only been using it for a month).
I have a dynamic number of directories (I got them from a query in the DB). Inside those directories there are a certain number of files.
For each directory I need to use a method to transfer these files over FTP concurrently, because I have basically no limit on the maximum number of FTP connections (not my words, it's written in the specs).
But I still need to control the maximum number of files transferred per directory. So I need to count the files I'm transferring (increment/decrement).
How could I do it? Should I use something like an array and use the Monitor class?
Edit: Framework 3.5
You can use the Semaphore class to throttle the number of concurrent files per directory. You would probably want to have one semaphore per directory so that the number of FTP uploads per directory can be controlled independently.
public class Example
{
    public void ProcessAllFilesAsync()
    {
        var semaphores = new Dictionary<string, Semaphore>();
        foreach (string filePath in GetFiles())
        {
            string filePathCapture = filePath; // Needed to perform the closure correctly.
            string directoryPath = Path.GetDirectoryName(filePath);
            if (!semaphores.ContainsKey(directoryPath))
            {
                int allowed = NUM_OF_CONCURRENT_OPERATIONS;
                semaphores.Add(directoryPath, new Semaphore(allowed, allowed));
            }
            var semaphore = semaphores[directoryPath];
            ThreadPool.QueueUserWorkItem(
                (state) =>
                {
                    semaphore.WaitOne();
                    try
                    {
                        DoFtpOperation(filePathCapture);
                    }
                    finally
                    {
                        semaphore.Release();
                    }
                }, null);
        }
    }
}
var allDirectories = db.GetAllDirectories();
foreach (var directoryPath in allDirectories)
{
    DirectoryInfo directories = new DirectoryInfo(directoryPath);
    // Loop through every file in that directory
    foreach (var fileInDir in directories.GetFiles())
    {
        // Wait while we are at our max limit of concurrent transfers
        while (numberFTPConnections == MAXFTPCONNECTIONS)
        {
            Thread.Sleep(1000);
        }
        // code to copy to FTP
        // This can be async; when the transfer is completed,
        // decrement numberFTPConnections so the next file can be transferred.
    }
}
You can try something along the lines above. Note that it's just the basic logic and there are probably better ways to do this.
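As a rough, untested sketch of the counter handling those comments allude to, using Interlocked so the increment and decrement are thread-safe (numberFTPConnections and MAXFTPCONNECTIONS are the names used above; UploadToFtp is a hypothetical placeholder for the actual transfer code, and a Semaphore as in the first answer remains the more robust tool):
// Shared counter of in-flight transfers.
private static int numberFTPConnections = 0;
private const int MAXFTPCONNECTIONS = 10; // illustrative limit

private static void TransferFile(string fileInDir)
{
    // Wait until a slot is free; VolatileRead ensures a fresh value is seen.
    while (Thread.VolatileRead(ref numberFTPConnections) >= MAXFTPCONNECTIONS)
    {
        Thread.Sleep(100);
    }

    Interlocked.Increment(ref numberFTPConnections);
    try
    {
        UploadToFtp(fileInDir); // hypothetical placeholder for the FTP copy
    }
    finally
    {
        Interlocked.Decrement(ref numberFTPConnections);
    }
}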
