First, my disclaimer: I'm a parallel noob. I thought this would be an easy "embarrassingly parallel" problem to tackle, but it's thrown me for a loop.
I'm trying to download some photos in parallel from the web. The original photos are Hi-Res and take up quite a bit of space, so I'm going to compact them once they're downloaded.
Here's the code:
private static void DownloadPhotos(ISet<MyPhoto> photos)
{
List<MyPhoto> failed = new List<MyPhoto>();
DateTime now = DateTime.Now;
string folderDayOfYear = now.DayOfYear.ToString();
string folderYear = now.Year.ToString();
string imagesFolder = string.Format("{0}{1}\\{2}\\", ImagePath, folderYear, folderDayOfYear);
if (!Directory.Exists(imagesFolder))
{
Directory.CreateDirectory(imagesFolder);
}
Parallel.ForEach(photos, photo =>
{
if (!SavePhotoFile(photo.Url, photo.Duid + ".jpg", imagesFolder))
{
failed.Add(photo);
Console.WriteLine("adding to failed photos: {0} ", photo.Duid.ToString());
}
});
Console.WriteLine();
Console.WriteLine("failed photos count: {0}", failed.Count);
RemoveHiResPhotos(string.Format(@"{0}\{1}\{2}", ImagePath, folderYear, folderDayOfYear));
}
private static bool SavePhotoFile(string url, string fileName, string imagesFolder)
{
string fullFileName = imagesFolder + fileName;
string originalFileName = fileName.Replace(".jpg", "-original.jpg");
string fullOriginalFileName = imagesFolder + originalFileName;
if (!File.Exists(fullFileName))
{
using (WebClient webClient = new WebClient())
{
try
{
webClient.DownloadFile(url, fullOriginalFileName);
}
catch (Exception ex)
{
Console.WriteLine();
Console.WriteLine("failed to download photo: {0}", fileName);
return false;
}
}
CreateStandardResImage(fullOriginalFileName, fullOriginalFileName.Replace("-original.jpg", ".jpg"));
}
return true;
}
private static void CreateStandardResImage(string hiResFileName, string stdResFileName)
{
Image image = Image.FromFile(hiResFileName);
Image newImage = image.Resize(1024, 640);
newImage.SaveAs(hiResFileName, stdResFileName, 70, ImageFormat.Jpeg);
}
So here's where things confuse me: each of the photos hits the catch {} block of the SavePhotoFile() method at the webClient.DownloadFile line. The error message is "An exception occurred during a WebClient request" and the inner exception is "The process cannot access the file . . . -original.jpg because it is being used by another process."
If I wasn't confused enough by this error, I'm confused even more by what happens next. It turns out that if I just ignore the message and wait, the image will eventually download and be processed.
What's going on?
OK, so it appears that in my focus on parallelism I made a simple error: I assumed something about my data that wasn't true. Brianestey figured out the problem: Duid isn't unique. It's supposed to be unique, but some missing code in the process that creates the list meant it wasn't, so duplicate photos were fighting over the same -original.jpg file.
The fix was to add this to the MyPhoto class:
public override bool Equals(object obj)
{
if (obj is MyPhoto)
{
var objPhoto = obj as MyPhoto;
if (objPhoto.Duid == this.Duid)
return true;
}
return false;
}
public override int GetHashCode()
{
return this.Duid.GetHashCode();
}
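One side note that isn't part of the fix above: the failed list in DownloadPhotos is written to from multiple threads inside Parallel.ForEach, and List&lt;T&gt;.Add is not thread-safe. A minimal sketch of a safer alternative, using ConcurrentBag&lt;T&gt; from System.Collections.Concurrent, looks like this (the rest of the method is assumed unchanged):
// Sketch only: ConcurrentBag<T> allows safe concurrent Add calls.
var failed = new ConcurrentBag<MyPhoto>();
Parallel.ForEach(photos, photo =>
{
    if (!SavePhotoFile(photo.Url, photo.Duid + ".jpg", imagesFolder))
    {
        failed.Add(photo);   // thread-safe add
        Console.WriteLine("adding to failed photos: {0}", photo.Duid);
    }
});
Console.WriteLine("failed photos count: {0}", failed.Count);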
Related
I am using C# and AWSSDK v3 to upload files into an S3 bucket. The file is encrypted using ServerSideEncryptionCustomerMethod. I can upload the file, but if I check whether the file exists using S3FileInfo().Exists, an error is thrown as a (400) Bad Request. However, if I comment out the lines that specify encryption in the upload routine, S3FileInfo().Exists finds the file without throwing an error. What am I doing wrong? Or is there a different way to check if a file exists when it is encrypted?
Here is my upload routine:
public static string wfUpload(Stream pFileStream, string pBucketName, string pKeyName, string pCryptoKey) {
string retVal = "";
try {
using (var lS3Client = new AmazonS3Client()) {
Aes aesEncryption = Aes.Create();
aesEncryption.KeySize = 256;
aesEncryption.GenerateKey();
string lCryptoKey = Convert.ToBase64String(aesEncryption.Key);
PutObjectRequest request = new PutObjectRequest {
BucketName = pBucketName,
Key = pKeyName,
ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = lCryptoKey,
};
request.InputStream = pFileStream;
PutObjectResponse response = lS3Client.PutObject(request);
retVal = lCryptoKey;
}
}
catch (AmazonS3Exception s3Exception) {
Console.WriteLine(s3Exception.Message,
s3Exception.InnerException);
throw (s3Exception);
}
catch (Exception e) {
throw (e);
}
return retVal;
}
And my routine to check if the file exists or not:
public static bool wfFileExists(String pBucketName, String pKeyName) {
bool retVal = false;
try {
using (var lS3Client = new AmazonS3Client()) {
if (new Amazon.S3.IO.S3FileInfo(lS3Client, pBucketName, pKeyName).Exists) {
retVal = true;
}
}
}
catch (AmazonS3Exception s3Exception) {
Console.WriteLine(s3Exception.Message,
s3Exception.InnerException);
throw (s3Exception);
}
catch (Exception e) {
throw (e);
}
return retVal;
}
Well, I think the class/method I was using is one of the high-level APIs that doesn't support encryption. I changed my code to do a metadata query to see if anything comes back. If it can't find the file, it throws an AmazonS3Exception with a "NotFound" ErrorCode, which I check for. Hopefully this helps someone else. If someone suggests a better approach, I'd love to learn it too.
public static bool wfFileExists(String pBucketName, String pKeyName, String pCryptoKey) {
bool retVal = false;
try {
using (var lS3Client = new AmazonS3Client()) {
GetObjectMetadataRequest request = new GetObjectMetadataRequest {
BucketName = pBucketName,
Key = pKeyName,
ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = pCryptoKey,
};
GetObjectMetadataResponse lMetaData = lS3Client.GetObjectMetadata(request);
// If an error is not thrown, we found the metadata.
retVal = true;
}
}
catch (AmazonS3Exception s3Exception) {
Console.WriteLine(s3Exception.Message,
s3Exception.InnerException);
if (s3Exception.ErrorCode != "NotFound") {
throw (s3Exception);
}
}
catch (Exception e) {
throw (e);
}
return retVal;
}
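For completeness, here is a rough usage sketch of the two routines together; the bucket, key, and file names are placeholders, and note that wfUpload generates its own customer key internally (its pCryptoKey parameter is unused in the code shown), so the value it returns is what must be passed to wfFileExists:
// Hypothetical usage: placeholder names throughout.
using (var stream = File.OpenRead(@"C:\temp\report.pdf"))
{
    // wfUpload returns the base64 customer key S3 needs for any later request on this object.
    string cryptoKey = wfUpload(stream, "my-bucket", "reports/report.pdf", null);
    bool exists = wfFileExists("my-bucket", "reports/report.pdf", cryptoKey);
    Console.WriteLine("Uploaded object found: {0}", exists);
}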
I'm looking to parse the WebCacheV01.dat file using C# to find the last file location for upload in an Internet browser.
%LocalAppData%\Microsoft\Windows\WebCache\WebCacheV01.dat
I'm using the ManagedEsent NuGet package:
Esent.Isam
Esent.Interop
When I try to run the code below, it fails at:
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Or if I use
Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
at
Api.JetAttachDatabase(sesid, filePath, AttachDatabaseGrbit.ReadOnly);
I get the following error:
An unhandled exception of type
'Microsoft.Isam.Esent.Interop.EsentFileAccessDeniedException' occurred
in Esent.Interop.dll
Additional information: Cannot access file, the file is locked or in use
string localAppDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string filePathExtra = @"\Microsoft\Windows\WebCache\WebCacheV01.dat";
string filePath = string.Format("{0}{1}", localAppDataPath, filePathExtra);
JET_INSTANCE instance;
JET_SESID sesid;
JET_DBID dbid;
JET_TABLEID tableid;
String connect = "";
JET_SNP snp;
JET_SNT snt;
object data;
int numInstance = 0;
JET_INSTANCE_INFO [] instances;
int pageSize;
JET_COLUMNDEF columndef = new JET_COLUMNDEF();
JET_COLUMNID columnid;
Api.JetCreateInstance(out instance, "instance");
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Api.JetSetSystemParameter(JET_INSTANCE.Nil, JET_SESID.Nil, JET_param.DatabasePageSize, pageSize, null);
//Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sesid, null, null);
//Do stuff in db
Api.JetEndSession(sesid, EndSessionGrbit.None);
Api.JetTerm(instance);
Is it not possible to read this without making modifications?
Viewer
http://www.nirsoft.net/utils/ese_database_view.html
Python
https://jon.glass/attempts-to-parse-webcachev01-dat/
libesedb
impacket
Issue:
The file is probably in use.
Solution:
In order to free the locked file, stop the scheduled task \Microsoft\Windows\Wininet\CacheTask.
The Code
public override IEnumerable<string> GetBrowsingHistoryUrls(FileInfo fileInfo)
{
var fileName = fileInfo.FullName;
var results = new List<string>();
try
{
int pageSize;
Api.JetGetDatabaseFileInfo(fileName, out pageSize, JET_DbInfo.PageSize);
SystemParameters.DatabasePageSize = pageSize;
using (var instance = new Instance("Browsing History"))
{
var param = new InstanceParameters(instance);
param.Recovery = false;
instance.Init();
using (var session = new Session(instance))
{
Api.JetAttachDatabase(session, fileName, AttachDatabaseGrbit.ReadOnly);
JET_DBID dbid;
Api.JetOpenDatabase(session, fileName, null, out dbid, OpenDatabaseGrbit.ReadOnly);
using (var tableContainers = new Table(session, dbid, "Containers", OpenTableGrbit.ReadOnly))
{
IDictionary<string, JET_COLUMNID> containerColumns = Api.GetColumnDictionary(session, tableContainers);
if (Api.TryMoveFirst(session, tableContainers))
{
do
{
var retrieveColumnAsInt32 = Api.RetrieveColumnAsInt32(session, tableContainers, containerColumns["ContainerId"]);
if (retrieveColumnAsInt32 != null)
{
var containerId = (int)retrieveColumnAsInt32;
using (var table = new Table(session, dbid, "Container_" + containerId, OpenTableGrbit.ReadOnly))
{
var tableColumns = Api.GetColumnDictionary(session, table);
if (Api.TryMoveFirst(session, table))
{
do
{
var url = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Url"],
Encoding.Unicode);
var downloadedFileName = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Filename"]);
if (string.IsNullOrEmpty(downloadedFileName)) // check for download history only.
continue;
// Order by accessed time to find the last uploaded file.
var accessedTime = Api.RetrieveColumnAsInt64(
session,
table,
tableColumns["AccessedTime"]);
var lastVisitTime = accessedTime.HasValue ? DateTime.FromFileTimeUtc(accessedTime.Value) : DateTime.MinValue;
results.Add(url);
}
while (Api.TryMoveNext(session, table.JetTableid));
}
}
}
} while (Api.TryMoveNext(session, tableContainers));
}
}
}
}
}
catch (Exception ex)
{
// log goes here....
}
return results;
}
Utils
Task Scheduler Wrapper
You can use the Microsoft.Win32.TaskScheduler.TaskService wrapper to stop it from C#; just add this NuGet package: https://taskscheduler.codeplex.com/
Usage
public static FileInfo CopyLockedFileRtl(DirectoryInfo directory, FileInfo fileInfo, string remoteEndPoint)
{
FileInfo copiedFileInfo = null;
using (var ts = new TaskService(string.Format(@"\\{0}", remoteEndPoint)))
{
var task = ts.GetTask(@"\Microsoft\Windows\Wininet\CacheTask");
task.Stop();
task.Enabled = false;
var byteArray = FileHelper.ReadOnlyAllBytes(fileInfo);
var filePath = Path.Combine(directory.FullName, "unlockedfile.dat");
File.WriteAllBytes(filePath, byteArray);
copiedFileInfo = new FileInfo(filePath);
task.Enabled = true;
task.Run();
task.Dispose();
}
return copiedFileInfo;
}
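Putting the pieces together, a rough usage sketch might look like the following; it assumes both helpers are reachable from the calling code, the working directory is a placeholder, and the local machine name is passed as the "remote" endpoint:
// Sketch only: stop CacheTask, copy the locked database, then parse the copy.
var source = new FileInfo(Environment.ExpandEnvironmentVariables(
    @"%LocalAppData%\Microsoft\Windows\WebCache\WebCacheV01.dat"));
var workDir = new DirectoryInfo(@"C:\Temp\WebCacheCopy");   // placeholder path
if (!workDir.Exists) workDir.Create();
FileInfo copy = CopyLockedFileRtl(workDir, source, Environment.MachineName);
foreach (string url in GetBrowsingHistoryUrls(copy))
{
    Console.WriteLine(url);
}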
I was not able to get Adam's answer to work. What worked for me was making a copy with AlphaVSS (a .NET class library that provides a managed API for the Volume Shadow Copy Service). The copied file was in a "Dirty Shutdown" state, so I additionally wrote this to handle the exception it threw when I opened it:
catch (EsentErrorException ex)
{ // Usually after the database is copied, it's in Dirty Shutdown state
// This can be verified by running "esentutl.exe /Mh WebCacheV01.dat"
logger.Info(ex.Message);
switch (ex.Error)
{
case JET_err.SecondaryIndexCorrupted:
logger.Info("Secondary Index Corrupted detected, exiting...");
Api.JetTerm2(instance, TermGrbit.Complete);
return false;
case JET_err.DatabaseDirtyShutdown:
logger.Info("Dirty shutdown detected, attempting to recover...");
try
{
Api.JetTerm2(instance, TermGrbit.Complete);
Process.Start("esentutl.exe", "/p /o " + newPath);
Thread.Sleep(5000);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sessionId, null, null);
Api.JetAttachDatabase(sessionId, newPath, AttachDatabaseGrbit.None);
}
catch (Exception e2)
{
logger.Info("Could not recover database " + newPath + ", will try opening it one last time. If that doesn't work, try using other esentutl commands", e2);
}
break;
}
}
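For reference, here is roughly what the AlphaVSS copy step looks like. This is a from-memory sketch against the AlphaVSS 1.x API (Alphaleonis.Win32.Vss), so exact method names and overloads may differ between versions, and the user profile path is a placeholder:
// Rough sketch: snapshot C:\, copy WebCacheV01.dat out of the shadow copy, then clean up.
IVssImplementation vssImpl = VssUtils.LoadImplementation();
using (IVssBackupComponents vss = vssImpl.CreateVssBackupComponents())
{
    vss.InitializeForBackup(null);
    vss.GatherWriterMetadata();
    vss.SetContext(VssSnapshotContext.Backup);
    Guid setId = vss.StartSnapshotSet();
    Guid snapId = vss.AddToSnapshotSet(@"C:\", Guid.Empty);
    vss.SetBackupState(false, true, VssBackupType.Full, false);
    vss.PrepareForBackup();
    vss.DoSnapshotSet();

    // The snapshot is exposed as a device path like \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyN
    string shadowRoot = vss.GetSnapshotProperties(snapId).SnapshotDeviceObject;
    string source = shadowRoot + @"\Users\<user>\AppData\Local\Microsoft\Windows\WebCache\WebCacheV01.dat";
    File.Copy(source, newPath, true);   // newPath is the copy the ESENT code opens

    vss.BackupComplete();
    vss.DeleteSnapshotSet(setId, true);
}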
I'm thinking about using the 'Recent Items' folder, since when you select a file to upload an entry is written here:
C:\Users\USER\AppData\Roaming\Microsoft\Windows\Recent
string recent = (Environment.GetFolderPath(Environment.SpecialFolder.Recent));
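If that route pans out, a minimal sketch for listing the newest entries in that folder might look like this (it needs System.Linq, and the .lnk shortcuts would still have to be resolved to their target paths, which this doesn't do):
// Sketch: newest shortcuts in the Recent Items folder, most recent first.
string recentFolder = Environment.GetFolderPath(Environment.SpecialFolder.Recent);
var newest = new DirectoryInfo(recentFolder)
    .GetFiles("*.lnk")
    .OrderByDescending(f => f.LastWriteTimeUtc)
    .Take(10);
foreach (var shortcut in newest)
    Console.WriteLine("{0:u}  {1}", shortcut.LastWriteTimeUtc, shortcut.Name);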
I have a batch of PDFs that I want to convert to Text. It's easy to get text with something like this from iTextSharp:
PdfTextExtractor.GetTextFromPage(reader, pageNumber);
It's easy to get Images using this answer (or similar answers in the thread).
What I can't figure out easily... is how to interleave image placeholders in the text.
Given a PDF, a page # and GetTextFromPage I expect the output to be:
line 1
line 2
line 3
When I'd like it to be (Where 1.1 means page 1, image 1... Page 1, image 2):
line 1
[1.1]
line 2
[1.2]
line 3
Is there a way to get an "image placeholder" for iTextSharp, PdfSharp or anything similar? I'd like a GetTextAndPlaceHoldersFromPage method (or similar).
PS: Hrm... it's not letting me tag iTextSharp (not iText). C#, not Java.
C# Pdf to Text with image placeholder
https://stackoverflow.com/a/28087521/
https://stackoverflow.com/a/33697745/
Although this doesn't have the exact layout mentioned in my question (since that was a simplified version of what I really wanted anyway), it does have the starting parts listed in the second link (translated from the iText Java answer), with extra information pulled from the third link (some of the reflection used in Java didn't seem to work in C#, so that part came from the third link).
Working from this, I'm able to get a List of strings representing lines in the PDF (all pages, instead of just page 1), with text added where images should be (huzzah!). The ByteArrayToFile extension method is included for flavor (although I didn't include other parts/extensions, so a straight copy/paste of this code may not compile).
I've also been able to greatly simplify other parts of my process and gut half of the garbage I had working before. Huzzah!!! Thanks @Mkl
internal class Program
{
public static void Main(string[] args)
{
var dir = Settings.TestDirectory;
var file = Settings.TestFile;
Log.Info($"File to Process: {file.FullName}");
using (var reader = new PdfReader(file.FullName))
{
var parser = new PdfReaderContentParser(reader);
var listener = new SimpleMixedExtractionStrategy(file, dir);
parser.ProcessContent(1, listener);
var x = listener.GetResultantText().Split('\n');
}
}
}
public class SimpleMixedExtractionStrategy : LocationTextExtractionStrategy
{
public static readonly ILog Log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
public DirectoryInfo OutputPath { get; }
public FileInfo OutputFile { get; }
private static readonly LineSegment UNIT_LINE = new LineSegment(new Vector(0, 0, 1), new Vector(1, 0, 1));
private int _counter;
public SimpleMixedExtractionStrategy(FileInfo outputFile, DirectoryInfo outputPath)
{
OutputPath = outputPath;
OutputFile = outputFile;
}
public override void RenderImage(ImageRenderInfo renderInfo)
{
try
{
var image = renderInfo.GetImage();
if (image == null) return;
var number = _counter++;
var imageFile = new FileInfo($"{OutputFile.FullName}-{number}.{image.GetFileType()}");
imageFile.ByteArrayToFile(image.GetImageAsBytes());
var segment = UNIT_LINE.TransformBy(renderInfo.GetImageCTM());
var location = new TextChunk("[" + imageFile + "]", segment.GetStartPoint(), segment.GetEndPoint(), 0f);
var locationalResultField = typeof(LocationTextExtractionStrategy).GetField("locationalResult", BindingFlags.NonPublic | BindingFlags.Instance);
var LocationalResults = (List<TextChunk>)locationalResultField.GetValue(this);
LocationalResults.Add(location);
}
catch (Exception ex)
{
Log.Debug($"{ex.Message}");
Log.Verbose($"{ex.StackTrace}");
}
}
}
public static class ByteArrayExtensions
{
public static bool ByteArrayToFile(this FileInfo fileName, byte[] byteArray)
{
try
{
// Create (or overwrite) the file for writing
var fileStream = new FileStream(fileName.FullName, FileMode.Create, FileAccess.Write);
// Writes a block of bytes to this stream using data from a byte array.
fileStream.Write(byteArray, 0, byteArray.Length);
// close file stream
fileStream.Close();
return true;
}
catch (Exception exception)
{
// Error
Log.Error($"Exception caught in process: {exception.Message}", exception);
}
// an error occurred, return false
return false;
}
}
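As a side note (not part of the original answer), the body of ByteArrayToFile can be reduced to a single framework call if the explicit stream handling isn't needed:
// Equivalent sketch: File.WriteAllBytes creates or overwrites the file and closes it itself.
File.WriteAllBytes(fileName.FullName, byteArray);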
I'm trying to test a method from a Test project like so:
[TestMethod]
public void TestEmailGeneratedReport()
{
List<String> recipients = new List<string>();
recipients.Add("bclayshannon#hotmail.net");
recipients.Add("axx3andspace#male.edu");
recipients.Add("cshannon#PlatypiRUs.com");
bool succeeded = RoboReporterConstsAndUtils.EmailGeneratedReport(recipients);
Assert.IsTrue(succeeded);
}
...but it blows up; I get, "Could not find a part of the path."
It works fine, though, when I call it like this from the project's main form's Load event:
List<String> recipients = new List<string>();
recipients.Add("bclayshannon#hotmail.net");
recipients.Add("axx3andspace#male.edu");
recipients.Add("cshannon#PlatypiRUs.com");
bool succeeded =
RoboReporterConstsAndUtils.EmailGeneratedReport(recipients);
if (succeeded) MessageBox.Show("emailing succeeded");
...I see the "emailing succeeded" message.
The method under test conditionally creates a folder:
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
So virtually the same code works in the real project but fails from the Test project; I assume the crux of the problem is the creation of the folder. Are tests, or "remote" code, disallowed from manipulating the file system in this way? Is that what's happening here? If so, how can a method that does such things be tested?
UPDATE
Note: I am able to read from the file system; this test succeeds:
[TestMethod]
public void TestGetLastReportsGenerated()
{
string testFolderThatHasExcelFiles = "C:\\Misc";
FileInfo[] aBunchOfFiles =
RoboReporterConstsAndUtils.GetLastReportsGenerated(
testFolderThatHasExcelFiles);
Assert.IsTrue(aBunchOfFiles.Length > 0);
}
UPDATE 2
And I'm able to manipulate files, too:
[TestMethod]
public void TestMarkFileAsSent()
{
string fileToRename = "C:\\Misc\\csharpExcelTest.xlsx";
string desiredRenamedFileName = "C:\\Misc\\csharpExcelTest_PROCESSED.xlsx";
RoboReporterConstsAndUtils.MarkFileAsSent(fileToRename);
bool oldFileNameExists = System.IO.File.Exists(fileToRename);
bool newFileNameExists = System.IO.File.Exists(desiredRenamedFileName);
Assert.IsTrue((newFileNameExists) && (!oldFileNameExists));
}
...so...?!?
UPDATE 3
I temporarily commented out the folder creation code, and it still breaks, so it wasn't that... maybe testing and Outlook Interop don't mix?
UPDATE 4
For Arturo:
internal static bool EmailGeneratedReport(List<string> recipients)
{
bool success = true;
try
{
Microsoft.Office.Interop.Outlook.Application app = new Microsoft.Office.Interop.Outlook.Application();
MailItem mailItem = app.CreateItem(OlItemType.olMailItem);
Recipients _recipients = mailItem.Recipients;
foreach (string recip in recipients)
{
Recipient outlookRecipient = _recipients.Add(recip);
outlookRecipient.Type = (int)OlMailRecipientType.olTo;
outlookRecipient.Resolve();
}
mailItem.Subject = String.Format("Platypus Reports generated {0}", GetYYYYMMDDHHMM());
List<String> htmlBody = new List<string>
{
"<html><body><img src=\"http://www.platypiRUs.com/wp-content/themes/platypi/images/pru_logo_notag.png\" alt=\"Platypus logo\" ><p>Your Platypus reports are attached. You can also view them online here:</p>"
};
htmlBody.Add("</body></html>");
mailItem.HTMLBody = string.Join(Environment.NewLine, htmlBody.ToArray());
// Commented this out to see if it was the problem with the test failing (it wasn't)
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
FileInfo[] rptsToEmail = GetLastReportsGenerated(uniqueFolder);
foreach (var file in rptsToEmail)
{
String fullFilename = String.Format("{0}\\{1}", uniqueFolder, file.Name);
if (!File.Exists(fullFilename)) continue;
if (!file.Name.Contains(PROCESSED_FILE_APPENDAGE))
{
mailItem.Attachments.Add(fullFilename);
}
MarkFileAsSent(fullFilename);
}
mailItem.Importance = OlImportance.olImportanceHigh;
mailItem.Display(false);
}
catch (System.Exception ex)
{
String exDetail = String.Format(ExceptionFormatString, ex.Message,
Environment.NewLine, ex.Source, ex.StackTrace, ex.InnerException);
MessageBox.Show(exDetail);
success = false;
}
return success;
}
UPDATE 5
More for Arturo:
// Provided the unit name, returns a folder name like "C:\RoboReporter\Gramps\201602260807"
internal static string GetUniqueFolder(string _unit)
{
if (uniqueFolder.Equals(String.Empty))
{
uniqueFolder = String.Format("{0}\\{1}\\{2}", OUTPUT_DIRECTORY, _unit, GetYYYYMMDDHHMM());
}
return uniqueFolder;
}
internal static FileInfo[] GetLastReportsGenerated(string _uniqueFolder)
{
DirectoryInfo d = new DirectoryInfo(_uniqueFolder);
return d.GetFiles(ALL_EXCEL_FILE_EXTENSION);
}
I think you should do better checks on the reports folder.
Try replacing:
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
with:
if (string.IsNullOrWhiteSpace(uniqueFolder))
uniqueFolder = GetUniqueFolder("Test");
if (!Directory.Exists(uniqueFolder))
ConditionallyCreateDirectory(uniqueFolder);
Also, you should use the Path class to work with paths:
String fullFilename = Path.Combine(uniqueFolder, file.Name);
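The same idea applies to GetUniqueFolder; a possible version using Path.Combine (a sketch, assuming OUTPUT_DIRECTORY and GetYYYYMMDDHHMM stay as they are) would be:
internal static string GetUniqueFolder(string _unit)
{
    if (string.IsNullOrWhiteSpace(uniqueFolder))
    {
        // Path.Combine supplies the separators, so no hand-built "\\" formatting is needed.
        uniqueFolder = Path.Combine(OUTPUT_DIRECTORY, _unit, GetYYYYMMDDHHMM());
    }
    return uniqueFolder;
}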
First of all, don't look at the code and say it's too long; it only looks that way.
I'm writing a program that will search my computer and delete files based on their MD5 value (and to speed things up I don't want to search all the files, just those that have specific file names).
I am sending a FileInfo to a method named ConditionallyDeleteNotWantedFile. It takes that file's name, tries to find it in the dictionary, retrieves that file's MD5, and computes the current FileInfo's MD5 to see if they are the same.
If they match, it deletes the file.
The problem? An exception is thrown when I try to delete, even though no other process uses the file. When I try to delete the file using Windows Explorer, it says vshost (meaning: VS...) is using it.
What am I missing?
public static bool ConditionallyDeleteNotWantedFile(FileInfo fi)
{
string dictMD5;
if (NotWanted.TryGetValue(fi.Name, out dictMD5))
{
string temp = ComputeMD5Hash(fi.FullName);
// temp will only be null if we couldn't open the file for
// read in the md5 calc operation. probably file is in use.
if (temp == null)
return false;
if (temp == dictMD5)
{
try
{
fi.Delete();
}
catch
{
// this exception is raised with "being used by another process"
fi.Delete();
}
return true;
}
}
return false;
}
public static string ComputeMD5Hash(string fileName)
{
return ComputeHash(fileName, new MD5CryptoServiceProvider());
}
public static string ComputeHash(string fileName, HashAlgorithm
hashAlgorithm)
{
try
{
FileStream stmcheck = File.OpenRead(fileName);
try
{
stmcheck = File.OpenRead(fileName);
byte[] hash = hashAlgorithm.ComputeHash(stmcheck);
string computed = BitConverter.ToString(hash).Replace("-", "");
stmcheck.Close();
return computed;
}
finally
{
stmcheck.Close();
}
}
catch
{
return null;
}
}
I don't know if that's the key, but you're opening the stream twice in ComputeHash, and there's a path that does not close it. May I suggest this:
public static string ComputeHash(string fileName, HashAlgorithm hashAlgorithm)
{
string hashFixed = null;
try
{
using (FileStream stmcheck = File.OpenRead(fileName))
{
try
{
byte[] hash = hashAlgorithm.ComputeHash(stmcheck);
hashFixed = BitConverter.ToString(hash).Replace("-", "");
}
catch
{
//logging as needed
}
finally
{
stmcheck.Close();
}
}
}
catch
{
//logging as needed
}
return hashFixed;
}
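One small follow-up (my addition, not part of the answer above): the MD5CryptoServiceProvider created in ComputeMD5Hash is never disposed either, so it can be wrapped in a using block the same way:
public static string ComputeMD5Hash(string fileName)
{
    // Dispose the hash algorithm as well as the stream it reads.
    using (var md5 = new MD5CryptoServiceProvider())
    {
        return ComputeHash(fileName, md5);
    }
}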