Catch exception, then throw/send exception and continue - c#

So the title might be a bit misleading, but what I want to accomplish is to read an array of files and then combine them into one, which is where I am now.
The problem is that I have a catch for FileNotFoundException; when it is hit I want to continue the loop inside my try statement (using continue) but let the user know that the file is missing.
My setup is a class that is called from a form (it's the form where the error should show up).
I thought about creating an event that the form can register for, but is that the right way?
public void MergeClientFiles(string directory)
{
    // Find all clients
    Array clients = Enum.GetValues(typeof(Clients));
    // Create a new array of files
    string[] files = new string[clients.Length];
    // Combine the clients with the .txt extension
    for (int i = 0; i < clients.Length; i++)
        files[i] = clients.GetValue(i) + ".txt";
    // Merge the files into directory
    using (var output = File.Create(directory))
    {
        foreach (var file in files)
        {
            try
            {
                using (var input = File.OpenRead(file))
                {
                    input.CopyTo(output);
                }
            }
            catch (FileNotFoundException)
            {
                // It's here I want to send the error to the form
                continue;
            }
        }
    }
}

You want the method to do its job and report any problems to the user, right?
Then Oded has suggested the right thing. With a small modification, the code could look like this:
public List<string> MergeClientFiles(string path)
{
    // Find all clients
    Array clients = Enum.GetValues(typeof(Clients));
    // Create a new array of files
    string[] files = new string[clients.Length];
    // Combine the clients with the .txt extension
    for (int i = 0; i < clients.Length; i++)
        files[i] = clients.GetValue(i) + ".txt";
    List<string> errors = new List<string>();
    // Merge the files into AllClientData
    using (var output = File.Create(path))
    {
        foreach (var file in files)
        {
            try
            {
                using (var input = File.OpenRead(file))
                {
                    input.CopyTo(output);
                }
            }
            catch (FileNotFoundException)
            {
                errors.Add(file);
            }
        }
    }
    return errors;
}
Then, in the caller, you just check whether MergeClientFiles returns a non-empty collection.
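For example, the form could surface the result roughly like this (a minimal sketch; the merger instance, the output path and the message text are placeholders, and MessageBox assumes WinForms):
var missing = merger.MergeClientFiles(outputPath);
if (missing.Count > 0)
{
    MessageBox.Show("The following files were missing:" + Environment.NewLine +
                    string.Join(Environment.NewLine, missing.ToArray()));
}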

You can collect the exceptions into a List<FileNotFoundException> and, at the end of the iteration, if the list is not empty, throw a custom exception assigning that list to a corresponding member.
This will allow any code calling the above to catch your custom exception, iterate over the FileNotFoundExceptions and notify the user.
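A rough sketch of such a custom exception (the class and property names are made up):
public class ClientFilesMissingException : Exception
{
    public List<FileNotFoundException> FileErrors { get; private set; }

    public ClientFilesMissingException(List<FileNotFoundException> fileErrors)
        : base("One or more client files could not be found.")
    {
        FileErrors = fileErrors;
    }
}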

You could define a delegate that you pass as an argument to your method.
public delegate void FileNotFoundCallback(string file);

public void MergeClientFiles(string directory, FileNotFoundCallback callback)
{
    // Find all clients
    Array clients = Enum.GetValues(typeof(Clients));
    // Create a new array of files
    string[] files = new string[clients.Length];
    // Combine the clients with the .txt extension
    for (int i = 0; i < clients.Length; i++)
        files[i] = clients.GetValue(i) + ".txt";
    // Merge the files into directory
    using (var output = File.Create(directory))
    {
        foreach (var file in files)
        {
            try
            {
                using (var input = File.OpenRead(file))
                {
                    input.CopyTo(output);
                }
            }
            catch (FileNotFoundException)
            {
                // Report the missing file back to the caller
                callback(file);
                continue;
            }
        }
    }
}
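Since the question also asks about exposing this as an event the form can subscribe to, here is a minimal sketch of that variant (the event and EventArgs names are made up):
public event EventHandler<FileMissingEventArgs> FileMissing;

public class FileMissingEventArgs : EventArgs
{
    public string FileName { get; private set; }
    public FileMissingEventArgs(string fileName) { FileName = fileName; }
}

// In the catch block, instead of invoking the callback parameter:
//     var handler = FileMissing;
//     if (handler != null)
//         handler(this, new FileMissingEventArgs(file));
The form would then subscribe to FileMissing before calling MergeClientFiles and display the message from its handler.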

Rather than catching FileNotFoundException, you should actively check whether the file exists and simply not try to open it if it doesn't.
You can change the method to return a list of merged files, a list of missing files, or a list of all files along with an indicator of whether they were merged or missing. Returning a single list gives the caller the option to process the missing files all at once and to know how many were missing, instead of one by one as would be the case with an event or callback.
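A sketch of that check-first variant, returning the missing files in a single list (note there is still a small race window between the existence check and the open, so keeping the catch as a fallback can be worthwhile):
public List<string> MergeClientFiles(string directory, IEnumerable<string> files)
{
    var missing = new List<string>();
    using (var output = File.Create(directory))
    {
        foreach (var file in files)
        {
            if (!File.Exists(file))
            {
                missing.Add(file);   // record it instead of throwing
                continue;
            }
            using (var input = File.OpenRead(file))
            {
                input.CopyTo(output);
            }
        }
    }
    return missing;
}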

For some inspiration, have a look at the documentation for the new parallel constructs in C#, such as Parallel.For, and at the Reactive Framework (Rx).
In the first, exceptions are collected in an AggregateException; in Rx, exceptions are communicated via a callback interface.
I think I prefer the approach used in Parallel.For, but choose whatever fits your scenario best.
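For illustration, a rough sketch of the Parallel.For-style idea applied to the merge loop from the question (it reuses the files and output variables from that method; AggregateException requires .NET 4.0):
var exceptions = new List<Exception>();
foreach (var file in files)
{
    try
    {
        using (var input = File.OpenRead(file))
        {
            input.CopyTo(output);
        }
    }
    catch (FileNotFoundException ex)
    {
        exceptions.Add(ex);
    }
}
if (exceptions.Count > 0)
    throw new AggregateException(exceptions); // the caller catches this and reports each inner exception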

Related

Make syntax shorter when using if & else with statements

I'm currently working on a DLL library project.
if (!Directory.Exists(MenGinPath))
{
    Directory.CreateDirectory(MenGinPath + @"TimedMessages");
    File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", new string[] { "Seperate each message with a new line" });
}
else if (!File.Exists(MenGinPath + @"TimedMessages\timedmessages.txt"))
{
    Directory.CreateDirectory(MenGinPath + @"TimedMessages");
    File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", new string[] { "Seperate each message with a new line" });
}
As you can see, if Directory.Exists returns false, a specific directory (under MenGinPath) is created. However, if the directory exists but the file inside it does not, the second branch does exactly the same work.
My question is the following: is there any way to make this shorter?
Because, as you can see, I'm calling the same two functions twice:
Directory.CreateDirectory(MenGinPath + @"TimedMessages")
and
File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", ...)
Any help would be welcome!!
You don't need to check whether the directory exists, because Directory.CreateDirectory automatically creates the directory if it does not exist and does nothing if it already exists.
Also, do not include the file name when creating the directory. Yes, it won't error, but leave it out for clarity's sake.
Another improvement is to use Path.Combine instead of hard-coding the path concatenation. This will improve the readability of your code.
So, here's what I can come up with:
string dir = Path.Combine(MenGinPath, @"Groups\TimesMessages");
string file = Path.Combine(dir, "timedmessages.txt");
// this automatically creates all directories in the specified path
// unless they already exist
Directory.CreateDirectory(dir);
// of course, you still need to check whether the file exists
if (!File.Exists(file))
{
    File.WriteAllLines(file, new string[] { "Seperate each message with a new line" });
}
/* or, if the file exists, do your stuff (optional)
 * else {
 *     // do something else? maybe edit the file?
 * }
 */
You can make your code shorter given the fact that CreateDirectory does nothing when the directory already exists. Moreover, do not pollute your code with all those string concatenations to build the path and file names.
Just do it once, before entering the logic, using the appropriate method for building file and path names (Path.Combine).
string messagePath = Path.Combine(MenGinPath, "TimedMessages");
string fileName = Path.Combine(messagePath, "timedmessages.txt");
// call the create even if it exists. CreateDirectory checks that fact
// by itself, so if you add your own check you are checking twice.
Directory.CreateDirectory(messagePath);
if (!File.Exists(fileName))
    File.WriteAllLines(fileName, new string[] { "Seperate each message with a new line" });
Would something like this work?
string strAppended = string.Empty;
if (!Directory.Exists(MenGinPath))
{
    strAppended = MenGinPath + @"Groups\timedmessages.txt";
}
else if (!File.Exists(MenGinPath + @"TimedMessages\timedmessages.txt"))
{
    strAppended = MenGinPath + @"TimedMessages\TimedMessages.txt";
}
else
{
    return;
}
Directory.CreateDirectory(Path.GetDirectoryName(strAppended)); // create the folder, not the file path itself
File.WriteAllLines(strAppended, new string[] { "Seperate each message with a new line" });
I have found that it is a great idea to reuse blocks of code like this instead of hiding them in if statements, because it makes code maintenance and debugging easier and less prone to missed bugs.
It seems the only difference between the two cases is the path, so just determine that path in your if/else:
const string GroupsPath = @"Groups\timedmessages.txt";
const string TimedMessagesTxt = @"TimedMessages\TimedMessages.txt";

string addPath = null;
if (!Directory.Exists(MenGinPath)) {
    addPath = GroupsPath;
} else if (!File.Exists(Path.Combine(MenGinPath, TimedMessagesTxt))) {
    addPath = TimedMessagesTxt;
}
if (addPath != null) {
    Directory.CreateDirectory(Path.GetDirectoryName(Path.Combine(MenGinPath, addPath)));
    File.WriteAllLines(Path.Combine(MenGinPath, TimedMessagesTxt),
        new string[] { "Seperate each message with a new line" });
}
Note: using Path.Combine instead of string concatenation has the advantage that missing or extra \ characters are added or removed automatically.

Save and delete Excel file saved using HttpPostedFileBase

I am uploading an Excel file, extracting data from it, and saving that data into a database. I am using MVC 4 on the .NET Framework. This is the code from my class:
public static void Upload(HttpPostedFileBase File)
{
    NIKEntities1 obj = new NIKEntities1();
    MyApp = new Excel.Application();
    MyApp.Visible = false;
    string extension = System.IO.Path.GetExtension(File.FileName);
    string pic = "Excel" + extension;
    string path = System.IO.Path.Combine(System.Web.HttpContext.Current.Server.MapPath("~/Excel"), pic);
    File.SaveAs(path);
    MyBook = MyApp.Workbooks.Open(path);
    MySheet = (Excel.Worksheet)MyBook.Sheets[1]; // Explicit cast is not required here
    int lastRow = MySheet.Cells.SpecialCells(Excel.XlCellType.xlCellTypeLastCell).Row;
    List<Employee> EmpList = new List<Employee>();
    for (int index = 2; index <= lastRow; index++)
    {
        System.Array MyValues = (System.Array)MySheet.get_Range("A" +
            index.ToString(), "B" + index.ToString()).Cells.Value;
        EmpList.Add(new Employee
        {
            BatchID = MyValues.GetValue(1, 1).ToString(),
            BatchName = MyValues.GetValue(1, 2).ToString()
        });
    }
    for (int i = 0; i < EmpList.Count; i++)
    {
        int x = obj.USP_InsertBatches(EmpList[i].BatchID, EmpList[i].BatchName);
    }
}

class Employee
{
    public string BatchID;
    public string BatchName;
}
This code works perfectly the first time, but the next time it says that the file is currently in use. So I thought of deleting the file at the end of the code using the following line:
File.Delete(path);
But this line threw an error:
HttpPostedFileBase does not contain a definition for Delete
Also, if I don't write this line and try to execute the code again, it says that it can't save because a file with the same name exists and could not be replaced because it is currently in use.
What should I do to get rid of this File.Delete() error?
Any other way of accessing the Excel file I am receiving without saving it would also be very helpful, because I only have to access the data once.
The File you use there is your variable, the input parameter of your method. That parameter is of type HttpPostedFileBase, and that type has no instance method (nor a static one, for that matter) that would let you delete that File instance.
You are probably looking for the static Delete method on the File type in the System.IO namespace.
A quick fix would be to be explicit about which File you mean:
System.IO.File.Delete(path);
You might want to consider a different naming guideline for your variables, though. In C# we tend to start variable names with a lower-case letter, and almost all types in the framework start with an upper-case letter, which makes it easier to distinguish the variable file from the type File.
Do notice that a file can only be deleted once it has been closed by all processes and all file handles have been released by the filesystem. In your case you have to make sure Excel has closed the file and released its handles. If the search indexer or an aggressive virus scanner is running, you might have to try a few times before giving up.
I normally use this code:
// make sure here all Ole Automation servers (like Excel or Word)
// have closed the file (so close the workbook, document etc)
// we iterate a couple of times (10 in this case)
for(int i=0; i< 10; i++)
{
try
{
System.IO.File.Delete(path);
break;
} catch (Exception exc)
{
Trace.WriteLine("failed delete {0}", exc.Message);
// let other threads do some work first
// http://blogs.msmvps.com/peterritchie/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program/
Thread.Sleep(0);
}
}
From what I can tell, you are opening Excel and reading the file, but never closing Excel.
Add:
MyApp.Workbooks.Close();
MyApp.Quit();
at the end of the Upload function. Even better, wrap the whole of your current code in
try
{
    // here goes your current code
}
catch (Exception e)
{
    // manage the exception
}
finally
{
    MyApp.Workbooks.Close();
    MyApp.Quit();
}
Initialize MyApp outside the try/catch block; then, whatever happens, the file gets closed.
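Putting the two answers together, a hedged sketch of the cleanup might look like this (the ReleaseComObject calls and the Close(false) argument are common extra precautions, not something from the original post):
try
{
    // ... open the workbook and read the data as in the question ...
}
finally
{
    MyBook.Close(false);   // close without saving changes
    MyApp.Quit();
    System.Runtime.InteropServices.Marshal.ReleaseComObject(MySheet);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(MyBook);
    System.Runtime.InteropServices.Marshal.ReleaseComObject(MyApp);
}
// Once Excel has released the file, the delete should succeed:
System.IO.File.Delete(path);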

Best way StreamReader skipping Null or WhiteSpace line

After searching and trying different approaches, I either wasn't happy with how the code looked or it didn't work right for me. I'm new at programming, so my understanding is limited; please keep that in mind in your answer.
I want to read a .csv file line by line, skipping lines that are blank, and put the contents of the lines into a list of objects. I have everything working except for the skipping part. Any feedback about improving other parts of my code is also welcome; I like constructive criticism.
public void CardaxCsvFileReader()
{
    string cardaxCsvPath = (@"C:\Cardax2WkbTest\Cardax\CardaxTable.csv");
    try
    {
        using (System.IO.StreamReader cardaxSR =
            new System.IO.StreamReader(System.IO.File.OpenRead(cardaxCsvPath)))
        {
            string line = "";
            string[] value = line.Split(',');
            while (!cardaxSR.EndOfStream)
            {   // this commented-out part is what I would like to work, but it doesn't seem to.
                line = cardaxSR.ReadLine();//.Skip(1).Where(item => !String.IsNullOrWhiteSpace(item));
                value = line.Split(',');
                if (line != ",,,,,") // using this as a temporary way to skip the line because the commented-out part above doesn't work.
                {
                    CardaxDataObject cardaxCsvTest2 = new CardaxDataObject();
                    cardaxCsvTest2.EventID = Convert.ToInt32(value[0]);
                    cardaxCsvTest2.FTItemID = Convert.ToInt32(value[1]);
                    cardaxCsvTest2.PayrollNumber = Convert.ToInt32(value[2]);
                    cardaxCsvTest2.EventDateTime = Convert.ToDateTime(value[3]);
                    cardaxCsvTest2.CardholderFirstName = value[4];
                    cardaxCsvTest2.CardholderLastName = value[5];
                    Globals.CardaxQueryResult.Add(cardaxCsvTest2);
                }
            }
        }
    }
    catch (Exception)
    {
        myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
            "File already open or does not exist: \"{0}\"", cardaxCsvPath);
    }
}
EDITED: If your lines are not truly blank and still contain commas, you can split with the RemoveEmptyEntries option and then check the column count.
while (!cardaxSR.EndOfStream)
{
    line = cardaxSR.ReadLine();
    // Remove empty columns while splitting. Side effect: any record that merely has
    // some blank columns will also be discarded by the check that follows.
    value = line.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
    if (value.Length < 6)
        continue;
    CardaxDataObject cardaxCsvTest2 = new CardaxDataObject();
    cardaxCsvTest2.EventID = Convert.ToInt32(value[0]);
    cardaxCsvTest2.FTItemID = Convert.ToInt32(value[1]);
    cardaxCsvTest2.PayrollNumber = Convert.ToInt32(value[2]);
    cardaxCsvTest2.EventDateTime = Convert.ToDateTime(value[3]);
    cardaxCsvTest2.CardholderFirstName = value[4];
    cardaxCsvTest2.CardholderLastName = value[5];
    Globals.CardaxQueryResult.Add(cardaxCsvTest2);
}
Another piece of feedback: when you catch an exception, it is good practice to log the exception itself in addition to your custom error line. A custom error line might be fine for, say, website users, but as a developer running a service you will appreciate the actual exception stack trace; it will make bugs easier to track down.
catch (Exception ex)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\".\r\n Exception: {1}", cardaxCsvPath, ex.ToString());
}
Just check whether value.Length == 6; this way it will skip lines that don't contain enough data for your columns.
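A minimal sketch of that check inside the existing loop (variable names as in the question):
value = line.Split(',');
if (value.Length != 6)
    continue; // skip blank or otherwise malformed lines
// ... build the CardaxDataObject from value[0]..value[5] as before ...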
Use a dedicated CSV parser, such as the EasyCSV class available here*:
https://github.com/jcoehoorn/EasyCSV
public void CardaxCsvFileReader()
{
    string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
    try
    {
        Globals.CardaxQueryResult =
            EasyCSV.FromFile(cardaxCsvPath)
                   .Where(r => r.Any(c => !string.IsNullOrEmpty(c)))
                   .Select(r => new CardaxDataObject()
                   {
                       EventID = int.Parse(r[0]),
                       FTItemID = int.Parse(r[1]),
                       PayrollNumber = int.Parse(r[2]),
                       EventDateTime = DateTime.Parse(r[3]),
                       CardholderFirstName = r[4],
                       CardholderLastName = r[5]
                   }).ToList();
    }
    catch (Exception)
    {
        myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
            "File already open or does not exist: \"{0}\"", cardaxCsvPath);
    }
}
I also recommend re-thinking how you structure this. The code below is better practice:
public IEnumerable<CardaxDataObject> ReadCardaxCsvFile(string filename)
{
    // no try block at this level. Catch that in the method that calls this method
    return EasyCSV.FromFile(filename)
                  .Where(r => r.Any(c => !string.IsNullOrEmpty(c)))
                  // You may want to put a try/catch inside the `Select()` projection, though.
                  // It would allow you to continue if you fail to parse an individual record
                  .Select(r => new CardaxDataObject()
                  {
                      EventID = int.Parse(r[0]),
                      FTItemID = int.Parse(r[1]),
                      PayrollNumber = int.Parse(r[2]),
                      EventDateTime = DateTime.Parse(r[3]),
                      CardholderFirstName = r[4],
                      CardholderLastName = r[5]
                  });
}
Suddenly the method boils down to one statement (albeit a very long one). Code like this is better because it's more powerful, for three reasons: it's not limited to the one input file, it's not limited to sending its output to one location, and it's not limited to one way of handling errors. You'd call it like this:
string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
try
{
    Globals.CardaxQueryResult = ReadCardaxCsvFile(cardaxCsvPath).ToList();
}
catch (Exception)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\"", cardaxCsvPath);
}
or like this:
string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
try
{
    foreach (var result in ReadCardaxCsvFile(cardaxCsvPath))
    {
        Globals.CardaxQueryResult.Add(result);
    }
}
catch (Exception)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\"", cardaxCsvPath);
}
I also recommend against using a Globals class like this. Find a more meaningful object with which to associate this data.
* Disclaimer: I am the author of that parser

How to set a dynamic number of threadCounter variables?

I'm not really into multithreading, so the question is probably basic, but I cannot find a way to solve this problem (especially because I'm using C# and have only been using it for a month).
I have a dynamic number of directories (obtained from a query to the DB). Inside those directories there are a certain number of files.
For each directory I need to transfer those files over FTP concurrently, because I basically have no limit on the number of FTP connections (not my words, it's written in the specification).
But I still need to control the maximum number of files transferred per directory, so I need to count the files I'm transferring (increment/decrement).
How could I do that? Should I use something like an array and the Monitor class?
Edit: Framework 3.5
You can use the Semaphore class to throttle the number of concurrent files per directory. You would probably want to have one semaphore per directory so that the number of FTP uploads per directory can be controlled independently.
public class Example
{
    public void ProcessAllFilesAsync()
    {
        var semaphores = new Dictionary<string, Semaphore>();
        foreach (string filePath in GetFiles())
        {
            string filePathCapture = filePath; // Needed to perform the closure correctly.
            string directoryPath = Path.GetDirectoryName(filePath);
            if (!semaphores.ContainsKey(directoryPath))
            {
                int allowed = NUM_OF_CONCURRENT_OPERATIONS;
                semaphores.Add(directoryPath, new Semaphore(allowed, allowed));
            }
            var semaphore = semaphores[directoryPath];
            ThreadPool.QueueUserWorkItem(
                (state) =>
                {
                    semaphore.WaitOne();
                    try
                    {
                        DoFtpOperation(filePathCapture);
                    }
                    finally
                    {
                        semaphore.Release();
                    }
                }, null);
        }
    }
}
var allDirectories = db.GetAllDirectories();
foreach (var directoryPath in allDirectories)
{
    DirectoryInfo directories = new DirectoryInfo(directoryPath);
    // Loop through every file in that directory
    foreach (var fileInDir in directories.GetFiles())
    {
        // Wait while we are at our max limit
        while (numberFTPConnections == MAXFTPCONNECTIONS)
        {
            Thread.Sleep(1000);
        }
        // code to copy to FTP
        // This can be async; when the transfer is completed,
        // decrement numberFTPConnections so the next file can be transferred.
    }
}
You can try something along these lines. Note that it's just the basic logic and there are probably better ways to do this.
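Since the counter is touched from several threads, here is a hedged sketch of the increment/decrement part using Interlocked (numberFTPConnections and MAXFTPCONNECTIONS come from the snippet above; the limit value and class name are made up, and using System.Threading is assumed):
static class FtpThrottle
{
    // Shared counter; keep one per directory if you need per-directory limits.
    static int numberFTPConnections = 0;
    const int MAXFTPCONNECTIONS = 5; // assumed limit

    public static void TransferFile(string filePath)
    {
        // Wait until a slot frees up. Note the check-then-increment is not atomic,
        // which is one reason the Semaphore answer above is the more robust option.
        while (Interlocked.CompareExchange(ref numberFTPConnections, 0, 0) >= MAXFTPCONNECTIONS)
            Thread.Sleep(100);

        Interlocked.Increment(ref numberFTPConnections);
        try
        {
            // ... actual FTP upload goes here ...
        }
        finally
        {
            Interlocked.Decrement(ref numberFTPConnections);
        }
    }
}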

How do I get a directory size (files in the directory) in C#?

I want to be able to get the size of one of the local directories using C#. I'm trying to avoid the following (pseudocode-like) approach, although in the worst-case scenario I will have to settle for it:
long GetSize(DirectoryInfo directory)
{
    long size = 0;
    foreach (FileInfo file in directory.GetFiles())
    {
        size += file.Length;
    }
    foreach (DirectoryInfo subDirectory in directory.GetDirectories())
    {
        size += GetSize(subDirectory);
    }
    return size;
}
Basically, is there a Walk() available somewhere so that I can walk through the directory tree? That would save the recursion of going through each sub-directory.
A very succinct way to get a folder size in .NET 4.0 is below. It still has the limitation of having to traverse all files recursively, but it doesn't load a potentially huge array of file names, and it's only two lines of code. Make sure to use the namespaces System.IO and System.Linq.
private static long GetDirectorySize(string folderPath)
{
    DirectoryInfo di = new DirectoryInfo(folderPath);
    return di.EnumerateFiles("*.*", SearchOption.AllDirectories).Sum(fi => fi.Length);
}
If you use Directory.GetFiles you can do a recursive search (using SearchOption.AllDirectories), but this is a bit flaky anyway (especially if you don't have access to one of the sub-directories) and might involve a huge single array coming back (warning klaxon...).
I'd be happy with the recursion approach unless I could show (via profiling) a bottleneck; then I'd probably switch to (single-level) Directory.GetFiles, using a Queue<string> to emulate recursion, as sketched below.
Note that .NET 4.0 introduces some enumerator-based file/directory listing methods which save on the big arrays.
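A rough, hedged sketch of that Queue<string> emulation (using only GetFiles/GetDirectories; the UnauthorizedAccessException handling is an addition for the restricted-folder case mentioned above, and System.IO plus System.Collections.Generic are assumed):
static long GetDirectorySizeNonRecursive(string root)
{
    long size = 0;
    var pending = new Queue<string>();
    pending.Enqueue(root);
    while (pending.Count > 0)
    {
        string dir = pending.Dequeue();
        try
        {
            foreach (string file in Directory.GetFiles(dir))
                size += new FileInfo(file).Length;
            foreach (string subDir in Directory.GetDirectories(dir))
                pending.Enqueue(subDir);
        }
        catch (UnauthorizedAccessException)
        {
            // skip directories we are not allowed to read
        }
    }
    return size;
}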
Here is my .NET 4.0 approach:
public static long GetFileSizeSumFromDirectory(string searchDirectory)
{
    var files = Directory.EnumerateFiles(searchDirectory);
    // get the size of all files in the current directory
    var currentSize = (from file in files let fileInfo = new FileInfo(file) select fileInfo.Length).Sum();

    var directories = Directory.EnumerateDirectories(searchDirectory);
    // get the size of all files in all subdirectories
    var subDirSize = (from directory in directories select GetFileSizeSumFromDirectory(directory)).Sum();

    return currentSize + subDirSize;
}
Or even nicer:
// get an IEnumerable of all files in the current dir and all sub dirs
var files = Directory.EnumerateFiles(searchDirectory, "*", SearchOption.AllDirectories);
// get the size of all files
long sum = (from file in files let fileInfo = new FileInfo(file) select fileInfo.Length).Sum();
As Gabriel pointed out, this will fail if there is a restricted directory under searchDirectory!
You could hide your recursion behind an extension method (to avoid the issues Marc has highlighted with the GetFiles() method):
public static class UserExtension
{
    public static IEnumerable<FileInfo> Walk(this DirectoryInfo directory)
    {
        foreach (FileInfo file in directory.GetFiles())
        {
            yield return file;
        }
        foreach (DirectoryInfo subDirectory in directory.GetDirectories())
        {
            foreach (FileInfo file in subDirectory.Walk())
            {
                yield return file;
            }
        }
    }
}
(You probably want to add some exception handling to this for protected folders etc.)
Then:
// make sure the namespace containing UserExtension is in scope
long totalSize = 0L;
var startFolder = new DirectoryInfo("<path to folder>");

// iteration
foreach (FileInfo file in startFolder.Walk())
{
    totalSize += file.Length;
}

// linq
totalSize = startFolder.Walk().Sum(s => s.Length);
Basically the same code, but maybe a little neater...
First, forgive my poor English ;o)
I had a problem that brought me to this page: enumerating the files of a directory and its subdirectories without stopping on an UnauthorizedAccessException and, like the new .NET 4 DirectoryInfo.Enumerate... methods, getting the first results before the end of the entire query.
With the help of various examples found here and there on the web, I finally wrote this method:
public static IEnumerable<FileInfo> EnumerateFiles_Recursive(this DirectoryInfo directory, string searchPattern, SearchOption searchOption, Func<DirectoryInfo, Exception, bool> handleExceptionAccess)
{
    Queue<DirectoryInfo> subDirectories = new Queue<DirectoryInfo>();
    IEnumerable<FileSystemInfo> entries = null;

    // Try to get an enumerator on the fileSystemInfos of the directory
    try
    {
        entries = directory.EnumerateFileSystemInfos(searchPattern, SearchOption.TopDirectoryOnly);
    }
    catch (Exception e)
    {
        // If there's a callback delegate and this delegate returns true, we don't throw the exception
        if (handleExceptionAccess == null || !handleExceptionAccess(directory, e))
            throw;
        // If the exception wasn't thrown, we make entries reference an empty collection
        // (EmptyFileSystemInfos is a field defined elsewhere, e.g. new FileSystemInfo[0])
        entries = EmptyFileSystemInfos;
    }

    // Yield return the file entries of the directory and enqueue the subdirectories
    foreach (FileSystemInfo entry in entries)
    {
        if (entry is FileInfo)
            yield return (FileInfo)entry;
        else if (entry is DirectoryInfo)
            subDirectories.Enqueue((DirectoryInfo)entry);
    }

    // If this is a recursive search, make a recursive call to yield return the entries of the subdirectories.
    if (searchOption == SearchOption.AllDirectories)
    {
        DirectoryInfo subDir = null;
        while (subDirectories.Count > 0)
        {
            subDir = subDirectories.Dequeue();
            foreach (FileInfo file in subDir.EnumerateFiles_Recursive(searchPattern, searchOption, handleExceptionAccess))
            {
                yield return file;
            }
        }
    }
    else
        subDirectories.Clear();
}
I use a Queue and a recursive method to keep the traditional order (the content of a directory, then the content of its first subdirectory and that subdirectory's own subdirectories, then the content of the second...). The parameter handleExceptionAccess is just a function called when an exception is thrown for a directory; the function must return true to indicate that the exception should be ignored.
With this method, you can write:
DirectoryInfo dir = new DirectoryInfo("c:\\temp");
long size = dir.EnumerateFiles_Recursive("*", SearchOption.AllDirectories, (d, ex) => true).Sum(f => f.Length);
And here we are: every exception raised while trying to enumerate a directory will be ignored!
Hope this helps,
Lionel
PS: for a reason I can't explain, my method is quicker than the .NET 4 one...
PPS: you can get my test solutions with the source for these methods here: TestDirEnumerate. I wrote EnumerateFiles_Recursive, EnumerateFiles_NonRecursive (which uses a queue to avoid recursion) and EnumerateFiles_NonRecursive_TraditionalOrder (which uses a stack of queues to avoid recursion and keep the traditional order). Keeping all three methods has no real interest; I wrote them only to test which one is best, and I plan to keep only the last one.
I also wrote the equivalents for EnumerateFileSystemInfos and EnumerateDirectories.
Have a look at this post:
http://social.msdn.microsoft.com/forums/en-US/vbgeneral/thread/eed54ebe-facd-4305-b64b-9dbdc65df04e
Basically there is no clean .NET way, but there is a quite straightforward COM approach so if you're happy with using COM interop and being tied to Windows, this could work for you.
The solution is already here: https://stackoverflow.com/a/12665904/1498669
As shown in the duplicate How do I Get Folder Size in C#?, you can do this in C# as well.
First, add the COM reference "Microsoft Scripting Runtime" to your project and use:
var fso = new Scripting.FileSystemObject();
var folder = fso.GetFolder(@"C:\Windows");
double sizeInBytes = folder.Size;

// cleanup COM
System.Runtime.InteropServices.Marshal.ReleaseComObject(folder);
System.Runtime.InteropServices.Marshal.ReleaseComObject(fso);
Remember to clean up the COM references.
I looked some time ago for a function like the one you ask for, and from what I found on the Internet and in MSDN forums, there is no such function.
The recursive way is the only one I found to obtain the size of a folder, considering all the files and subfolders it contains.
You should make it easy on yourself. Make a method and pass in the location of the directory.
private static long GetDirectorySize(string location)
{
    return new DirectoryInfo(location).GetFiles("*.*", SearchOption.AllDirectories).Sum(file => file.Length);
}
-G
