Parse WebCacheV01.dat in C#

I'm looking to parse the WebCacheV01.dat file using C# to find the location of the last file uploaded through an Internet browser.
%LocalAppData%\Microsoft\Windows\WebCache\WebCacheV01.dat
I'm using the ManagedEsent NuGet packages:
Esent.Isam
Esent.Interop
When I try to run the code below, it fails at:
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Or if I use
Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
at
Api.JetAttachDatabase(sesid, filePath, AttachDatabaseGrbit.ReadOnly);
I get the following error:
An unhandled exception of type
'Microsoft.Isam.Esent.Interop.EsentFileAccessDeniedException' occurred
in Esent.Interop.dll
Additional information: Cannot access file, the file is locked or in use
string localAppDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string filePathExtra = @"\Microsoft\Windows\WebCache\WebCacheV01.dat";
string filePath = string.Format("{0}{1}", localAppDataPath, filePathExtra);
JET_INSTANCE instance;
JET_SESID sesid;
JET_DBID dbid;
JET_TABLEID tableid;
String connect = "";
JET_SNP snp;
JET_SNT snt;
object data;
int numInstance = 0;
JET_INSTANCE_INFO [] instances;
int pageSize;
JET_COLUMNDEF columndef = new JET_COLUMNDEF();
JET_COLUMNID columnid;
Api.JetCreateInstance(out instance, "instance");
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Api.JetSetSystemParameter(JET_INSTANCE.Nil, JET_SESID.Nil, JET_param.DatabasePageSize, pageSize, null);
//Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sesid, null, null);
//Do stuff in db
Api.JetEndSession(sesid, EndSessionGrbit.None);
Api.JetTerm(instance);
Is it not possible to read this without making modifications?
Viewer: http://www.nirsoft.net/utils/ese_database_view.html
Python: https://jon.glass/attempts-to-parse-webcachev01-dat/
libesedb
impacket
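A quick way to confirm the lock independently of ESENT (a minimal sketch of my own, not from the thread):
try
{
    using (var fs = File.Open(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        Console.WriteLine("File is readable; the Esent error is something else.");
    }
}
catch (IOException)
{
    Console.WriteLine("File is locked by another process (typically the WinINet cache task).");
}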

Issue:
The file is probably in use.
Solution:
In order to free the locked file, stop the scheduled task \Microsoft\Windows\Wininet\CacheTask.
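For reference, the same task can also be stopped from an elevated command prompt before running the parser (equivalent to the C# wrapper shown below):
schtasks /End /TN "\Microsoft\Windows\Wininet\CacheTask"
schtasks /Change /TN "\Microsoft\Windows\Wininet\CacheTask" /Disable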
The Code
public override IEnumerable<string> GetBrowsingHistoryUrls(FileInfo fileInfo)
{
var fileName = fileInfo.FullName;
var results = new List<string>();
try
{
int pageSize;
Api.JetGetDatabaseFileInfo(fileName, out pageSize, JET_DbInfo.PageSize);
SystemParameters.DatabasePageSize = pageSize;
using (var instance = new Instance("Browsing History"))
{
var param = new InstanceParameters(instance);
param.Recovery = false;
instance.Init();
using (var session = new Session(instance))
{
Api.JetAttachDatabase(session, fileName, AttachDatabaseGrbit.ReadOnly);
JET_DBID dbid;
Api.JetOpenDatabase(session, fileName, null, out dbid, OpenDatabaseGrbit.ReadOnly);
using (var tableContainers = new Table(session, dbid, "Containers", OpenTableGrbit.ReadOnly))
{
IDictionary<string, JET_COLUMNID> containerColumns = Api.GetColumnDictionary(session, tableContainers);
if (Api.TryMoveFirst(session, tableContainers))
{
do
{
var retrieveColumnAsInt32 = Api.RetrieveColumnAsInt32(session, tableContainers, containerColumns["ContainerId"]);
if (retrieveColumnAsInt32 != null)
{
var containerId = (int)retrieveColumnAsInt32;
using (var table = new Table(session, dbid, "Container_" + containerId, OpenTableGrbit.ReadOnly))
{
var tableColumns = Api.GetColumnDictionary(session, table);
if (Api.TryMoveFirst(session, table))
{
do
{
var url = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Url"],
Encoding.Unicode);
var downloadedFileName = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Filename"]);
if(string.IsNullOrEmpty(downloadedFileName)) // check for download history only.
continue;
// Order by access Time to find the last uploaded file.
var accessedTime = Api.RetrieveColumnAsInt64(
session,
table,
tableColumns["AccessedTime"]);
var lastVisitTime = accessedTime.HasValue ? DateTime.FromFileTimeUtc(accessedTime.Value) : DateTime.MinValue;
results.Add(url);
}
while (Api.TryMoveNext(session, table.JetTableid));
}
}
}
} while (Api.TryMoveNext(session, tableContainers));
}
}
}
}
}
catch (Exception ex)
{
// log goes here....
}
return results;
}
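A minimal usage sketch (assuming the method sits in an instantiable class; BrowsingHistoryReader is an illustrative name, not from the answer):
var webCache = new FileInfo(Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    @"Microsoft\Windows\WebCache\WebCacheV01.dat"));
foreach (var url in new BrowsingHistoryReader().GetBrowsingHistoryUrls(webCache))
    Console.WriteLine(url);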
Utils
Task Scheduler Wrapper
You can use the Microsoft.Win32.TaskScheduler.TaskService wrapper to stop it from C#; just add this NuGet package: https://taskscheduler.codeplex.com/
Usage
public static FileInfo CopyLockedFileRtl(DirectoryInfo directory, FileInfo fileInfo, string remoteEndPoint)
{
FileInfo copiedFileInfo = null;
using (var ts = new TaskService(string.Format(@"\\{0}", remoteEndPoint)))
{
var task = ts.GetTask(@"\Microsoft\Windows\Wininet\CacheTask");
task.Stop();
task.Enabled = false;
var byteArray = FileHelper.ReadOnlyAllBytes(fileInfo);
var filePath = Path.Combine(directory.FullName, "unlockedfile.dat");
File.WriteAllBytes(filePath, byteArray);
copiedFileInfo = new FileInfo(filePath);
task.Enabled = true;
task.Run();
task.Dispose();
}
return copiedFileInfo;
}
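Usage sketch (hedged: FileHelper.ReadOnlyAllBytes is the answer's own helper; presumably it opens the file with FileShare.ReadWrite so the bytes can be read while the database is attached):
var unlocked = CopyLockedFileRtl(
    new DirectoryInfo(Path.GetTempPath()),
    new FileInfo(filePath),
    Environment.MachineName); // local machine; pass a hostname for a remote box
// then run the parsing code above against unlocked.FullName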

I was not able to get Adam's answer to work. What worked for me was making a copy with AlphaVSS (a .NET class library that has a managed API for the Volume Shadow Copy Service). The file was in "Dirty Shutdown" state, so I additionally wrote this to handle the exception it threw when I opened it:
catch (EsentErrorException ex)
{ // Usually after the database is copied, it's in Dirty Shutdown state
// This can be verified by running "esentutl.exe /Mh WebCacheV01.dat"
logger.Info(ex.Message);
switch (ex.Error)
{
case JET_err.SecondaryIndexCorrupted:
logger.Info("Secondary Index Corrupted detected, exiting...");
Api.JetTerm2(instance, TermGrbit.Complete);
return false;
case JET_err.DatabaseDirtyShutdown:
logger.Info("Dirty shutdown detected, attempting to recover...");
try
{
Api.JetTerm2(instance, TermGrbit.Complete);
Process.Start("esentutl.exe", "/p /o " + newPath);
Thread.Sleep(5000);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sessionId, null, null);
Api.JetAttachDatabase(sessionId, newPath, AttachDatabaseGrbit.None);
}
catch (Exception e2)
{
logger.Info("Could not recover database " + newPath + ", will try opening it one last time. If that doesn't work, try using other esentutl commands", e2);
}
break;
}
}
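The state check mentioned in the comment above can also be automated; a sketch that shells out to esentutl and scans the header dump for the state line:
var psi = new ProcessStartInfo("esentutl.exe", "/mh \"" + newPath + "\"")
{
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var p = Process.Start(psi))
{
    string header = p.StandardOutput.ReadToEnd();
    p.WaitForExit();
    // the header contains "State: Dirty Shutdown" or "State: Clean Shutdown"
    bool isDirty = header.Contains("Dirty Shutdown");
}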

I'm thinking about using the 'Recent Items' folder, as an entry is written here when you select a file to upload:
C:\Users\USER\AppData\Roaming\Microsoft\Windows\Recent
string recent = (Environment.GetFolderPath(Environment.SpecialFolder.Recent));
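A sketch of that approach: the Recent folder holds .lnk shortcuts, so ordering them by write time surfaces the file most recently picked in a dialog:
string recent = Environment.GetFolderPath(Environment.SpecialFolder.Recent);
var lastUsed = new DirectoryInfo(recent)
    .GetFiles("*.lnk")
    .OrderByDescending(f => f.LastWriteTimeUtc)
    .FirstOrDefault();
// lastUsed?.Name points at the most recently touched file; resolving the
// shortcut's target needs the Windows Script Host COM API (WshShell.CreateShortcut).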

Related

Is it possible to get a batch of text content through Azure DevOps REST API?

I need to get (not download) the content from ~10,000 manifest files within a project in Azure DevOps, but I haven't managed to achieve this. I have found several ways to retrieve the content from one file at a time, but in this context that is neither an efficient nor a sustainable solution. I have managed to retrieve all files of a particular file type by checking whether the file path ends with the name of the file, then using the TfvcHttpClientBase.GetItemsBatch method. However, this method does not return the item's content.
Program.cs
using Microsoft.TeamFoundation.SourceControl.WebApi;
AzureRest azureRest = new AzureRest();
var tfvcItems = azureRest.GetTfvcItems();
List<TfvcItemDescriptor> itemDescriptorsList = new List<TfvcItemDescriptor>();
foreach(var item in tfvcItems)
{
//Example manifest file .NET
if (item.Path.EndsWith("packages.config"))
{
var itemDescriptor = new TfvcItemDescriptor()
{
Path = item.Path,
RecursionLevel = VersionControlRecursionType.None,
Version = "",
VersionOption = TfvcVersionOption.None,
VersionType = TfvcVersionType.Latest
};
itemDescriptorsList.Add(itemDescriptor);
}
}
TfvcItemDescriptor[] itemDescriptorsArray = itemDescriptorsList.ToArray();
var itemBatch = azureRest.GetTfvcItemsBatch(itemDescriptorsArray);
foreach(var itemList in itemBatch)
{
foreach(var itemListList in itemList)
{
Console.WriteLine("Content: " + itemListList.Content); //empty/null
Console.WriteLine("ContentMetadata: " + itemListList.ContentMetadata); //not empty/null
}
}
AzureRest.cs
using Microsoft.TeamFoundation.SourceControl.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
public class AzureRest
{
const string ORG_URL = "https://org/url/url";
const string PROJECT = "Project";
const string PAT = "PersonalAccessToken";
private string GetTokenConfig()
{
return PAT;
}
private string GetProjectNameConfig()
{
return PROJECT;
}
private VssConnection Authenticate()
{
string token = GetTokenConfig();
string projectName = GetProjectNameConfig();
var credentials = new VssBasicCredential(string.Empty, token);
var connection = new VssConnection(new Uri(ORG_URL), credentials);
return connection;
}
public List<TfvcItem> GetTfvcItems()
{
var connection = Authenticate();
using (TfvcHttpClient tfvcClient = connection.GetClient<TfvcHttpClient>())
{
var tfvcItems = tfvcClient.GetItemsAsync(scopePath: "/Path", recursionLevel: VersionControlRecursionType.Full, true).Result;
return tfvcItems;
}
}
public List<List<TfvcItem>> GetTfvcItemsBatch(TfvcItemDescriptor[] itemDescriptors)
{
TfvcItemRequestData requestData = new TfvcItemRequestData()
{
IncludeContentMetadata = true,
IncludeLinks = true,
ItemDescriptors = itemDescriptors
};
var connection = Authenticate();
using (TfvcHttpClient tfvcClient = connection.GetClient<TfvcHttpClient>())
{
var tfvcItems = tfvcClient.GetItemsBatchAsync(requestData).Result;
return tfvcItems;
}
}
}
For reference:
I have tested the code you shared and, when debugging at "itemDescriptorsList", found that there is no content specified in it; that is why you cannot get the text content.
You should first check and add the content property to the "itemDescriptorsList".
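Since GetItemsBatchAsync returns only metadata, one fallback is throttled per-item requests. A sketch under an assumption: it uses TfvcHttpClient.GetItemContentAsync(path) to stream a single file's body, which matches the one-file-at-a-time methods the asker mentions but should be verified against the client version in use:
var throttler = new SemaphoreSlim(8); // cap concurrent calls to the service
var tasks = itemDescriptorsArray.Select(async descriptor =>
{
    await throttler.WaitAsync();
    try
    {
        using (var stream = await tfvcClient.GetItemContentAsync(descriptor.Path))
        using (var reader = new StreamReader(stream))
            return new { descriptor.Path, Content = await reader.ReadToEndAsync() };
    }
    finally { throttler.Release(); }
}).ToList();
var results = await Task.WhenAll(tasks);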

DeleteMessageAsync() not deleting message in SQS queue .Net Core

I am trying to delete a message in an SQS queue, but it is not being deleted from the queue. I have been trying to make a lot of changes, but it is still not working. I am new to C#, .NET Core, and AWS. Can anyone please help me with this?
Here is my main method:
[HttpGet]
public async Task<ReceiveMessageResponse> Get()
{
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest
{
WaitTimeSeconds = 3 // poll the queue for up to 3 seconds; without this, messages are received only intermittently
};
receiveMessageRequest.QueueUrl = myQueueUrl;
receiveMessageRequest.MaxNumberOfMessages = 10; // can change number of messages as needed
//receiveing messages/responses
var receiveMessageResponse = await amazonSQSClient.ReceiveMessageAsync(receiveMessageRequest);
if (receiveMessageResponse.Messages.Count > 0){
var bucketName = getBucketName(receiveMessageResponse);
var objectKey = getObjectKey(receiveMessageResponse);
var versionId = getVersionId(receiveMessageResponse);
string filePath = "C:\\InputPdfFile\\"; // change it later
string path = filePath + objectKey;
//get the file from the S3 bucket and download it
var downloadInputFile = await DownloadAsync(path, versionId, objectKey);
//Get score from the output file
string jsonOutputFileName = "\\file-1.txt"; //change it later from text file to json file
string jsonOutputPath = "C:\\OutputJsonFile"; //change it later
string jasonArchivePath = "C:\\ArchiveJsonFile"; //change it later
int score = GetOutputScore(jsonOutputPath, jsonOutputFileName);
//update metadata from the score received from ML worker (GetOutputScore)
PutObjectResponse putObjectResponse = await UpdateMetadataAsync(score);
//Move file from output to archive after updating metadata
string sourceFile = jsonOutputPath + jsonOutputFileName;
string destFile = jasonArchivePath + jsonOutputFileName;
if (!Directory.Exists(jasonArchivePath))
{
Directory.CreateDirectory(jasonArchivePath);
}
System.IO.File.Move(sourceFile, destFile);
//delete message after moving file from archive
DeleteMessage(receiveMessageResponse); //not sure why it is not deleting
}
return receiveMessageResponse;
}
Here is my Delete method:
public async void DeleteMessage(ReceiveMessageResponse receiveMessageResponse)
{
if (receiveMessageResponse.Messages.Count > 0)
{
foreach (var message in receiveMessageResponse.Messages)
{
var delRequest = new DeleteMessageRequest
{
QueueUrl = myQueueUrl,
ReceiptHandle = message.ReceiptHandle
};
var deleteMessage = await amazonSQSClient.DeleteMessageAsync(delRequest);
}
}
else // It is not going in else because the message was found but still not deleting it
{
Console.WriteLine("No message found");
}
}
Any help would be greatly appreciated!
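One hedged observation on the code as posted: DeleteMessage is declared async void, so Get() fires it and moves on without awaiting it, and any exception it throws is silently lost. Returning a Task and awaiting it makes failures visible; a sketch:
public async Task DeleteMessagesAsync(ReceiveMessageResponse receiveMessageResponse)
{
    foreach (var message in receiveMessageResponse.Messages)
    {
        var delRequest = new DeleteMessageRequest
        {
            QueueUrl = myQueueUrl,
            ReceiptHandle = message.ReceiptHandle
        };
        // await so HTTP-level failures surface instead of being swallowed
        await amazonSQSClient.DeleteMessageAsync(delRequest);
    }
}
// in Get():  await DeleteMessagesAsync(receiveMessageResponse);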

How to create zip file in memory?

I have to create a zip file from a set of URLs, and it should have a proper folder structure.
So I tried the following:
public async Task<byte[]> CreateZip(Guid ownerId)
{
try
{
string startPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "zipFolder");//base folder
if (Directory.Exists(startPath))
{
DeleteAllFiles(startPath);
Directory.Delete(startPath);
}
Directory.CreateDirectory(startPath);
string zipPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"{ownerId.ToString()}"); //folder based on ownerid
if (Directory.Exists(zipPath))
{
DeleteAllFiles(zipPath);
Directory.Delete(zipPath);
}
Directory.CreateDirectory(zipPath);
var attachemnts = await ReadByOwnerId(ownerId);
attachemnts.Data.ForEach(i =>
{
var fileLocalPath = $"{startPath}\\{i.Category}";
if (!Directory.Exists(fileLocalPath))
{
Directory.CreateDirectory(fileLocalPath);
}
using (var client = new WebClient())
{
client.DownloadFile(i.Url, $"{fileLocalPath}//{i.Flags ?? ""}_{i.FileName}");
}
});
var zipFilename = $"{zipPath}//result.zip";
if (File.Exists(zipFilename))
{
File.Delete(zipFilename);
}
ZipFile.CreateFromDirectory(startPath, zipFilename, CompressionLevel.Fastest, true);
var result = System.IO.File.ReadAllBytes(zipFilename);
return result;
}
catch (Exception ex)
{
var a = ex;
return null;
}
}
Currently I'm writing all files into my base directory (maybe not a good idea), and I have to manually delete all folders and files to avoid exceptions/unwanted files. Can everything be written in memory?
What changes are required to write all files and the folder structure in memory?
No you can't, not with built-in .NET anyway.
As per my comment I would recommend storing the files in a custom location based on a Guid or similar. Eg:
"/xxxx-xxxx-xxxx-xxxx/Folder-To-Zip/....".
This would ensure you could handle multiple requests with the same files or similar file / folder names.
Then you just have to clean up and delete the folder again afterwards so you don't run out of space.
Hope the below code does the job.
public async Task<byte[]> CreateZip(Guid ownerId)
{
try
{
string startPath = Path.Combine(Path.GetTempPath(), $"{Guid.NewGuid()}_zipFolder");//folder to add
Directory.CreateDirectory(startPath);
var attachemnts = await ReadByOwnerId(ownerId);
attachemnts.Data = filterDuplicateAttachments(attachemnts.Data);
//filtering youtube urls
attachemnts.Data = attachemnts.Data.Where(i => !i.Flags.Equals("YoutubeUrl", StringComparison.OrdinalIgnoreCase)).ToList();
attachemnts.Data.ForEach(i =>
{
var fileLocalPath = $"{startPath}\\{i.Category}";
if (!Directory.Exists(fileLocalPath))
{
Directory.CreateDirectory(fileLocalPath);
}
using (var client = new WebClient())
{
client.DownloadFile(i.Url, $"{fileLocalPath}//{i.Flags ?? ""}_{i.FileName}");
}
});
using (var ms = new MemoryStream())
{
using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, true))
{
System.IO.DirectoryInfo di = new DirectoryInfo(startPath);
var allFiles = di.GetFiles("*", SearchOption.AllDirectories);
foreach (var attachment in allFiles)
{
var type = attachemnts.Data.Where(i => $"{ i.Flags ?? ""}_{ i.FileName}".Equals(attachment.Name, StringComparison.OrdinalIgnoreCase)).FirstOrDefault();
var entry = zipArchive.CreateEntry($"{type.Category}/{attachment.Name}", CompressionLevel.Fastest);
using (var file = File.OpenRead(attachment.FullName)) // dispose the source stream when done
using (var entryStream = entry.Open())
{
file.CopyTo(entryStream);
}
}
}
var result = ms.ToArray();
return result;
}
}
catch (Exception ex)
{
var a = ex;
return null;
}
}
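Building on the MemoryStream approach above: if the downloaded bytes can also stay in memory, the temp folder disappears entirely. A sketch under that assumption (WebClient.DownloadData returns a byte[]; the tuple shape stands in for the attachment type):
public static byte[] CreateZipInMemory(IEnumerable<(string Category, string FileName, string Url)> files)
{
    using (var ms = new MemoryStream())
    {
        using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
        using (var client = new WebClient())
        {
            foreach (var f in files)
            {
                byte[] data = client.DownloadData(f.Url); // never touches disk
                var entry = zip.CreateEntry($"{f.Category}/{f.FileName}", CompressionLevel.Fastest);
                using (var entryStream = entry.Open())
                    entryStream.Write(data, 0, data.Length);
            }
        }
        return ms.ToArray(); // valid once the archive has been disposed and flushed
    }
}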

If creating files from code that is exercised via Tests is disallowed, how can such methods be tested?

I'm trying to test a method from a Test project like so:
[TestMethod]
public void TestEmailGeneratedReport()
{
List<String> recipients = new List<string>();
recipients.Add("bclayshannon#hotmail.net");
recipients.Add("axx3andspace#male.edu");
recipients.Add("cshannon#PlatypiRUs.com");
bool succeeded = RoboReporterConstsAndUtils.EmailGeneratedReport(recipients);
Assert.IsTrue(succeeded);
}
...but it blows up; I get, "Could not find a part of the path."
It works fine, though, when I call it like this from the project's main form's Load event:
List<String> recipients = new List<string>();
recipients.Add("bclayshannon@hotmail.net");
recipients.Add("axx3andspace@male.edu");
recipients.Add("cshannon@PlatypiRUs.com");
bool succeeded =
RoboReporterConstsAndUtils.EmailGeneratedReport(recipients);
if (succeeded) MessageBox.Show("emailing succeeded");
...I see the "emailing succeeded" message.
The method under test conditionally creates a folder:
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
So virtually the same code works in the real project, but fails from the Test project; I assume the crux of the problem is the creation of the folder. Are tests, or "remote" code disallowed from manipulating the file system in this way, is that what's happening here? If so, how can a method that does such things be tested?
UPDATE
Note: I am able to read from the file system; this test succeeds:
[TestMethod]
public void TestGetLastReportsGenerated()
{
string testFolderThatHasExcelFiles = "C:\\Misc";
FileInfo[] aBunchOfFiles =
RoboReporterConstsAndUtils.GetLastReportsGenerated(
testFolderThatHasExcelFiles);
Assert.IsTrue(aBunchOfFiles.Length > 0);
}
UPDATE 2
And I'm able to manipulate files, too:
[TestMethod]
public void TestMarkFileAsSent()
{
string fileToRename = "C:\\Misc\\csharpExcelTest.xlsx";
string desiredRenamedFileName = "C:\\Misc\\csharpExcelTest_PROCESSED.xlsx";
RoboReporterConstsAndUtils.MarkFileAsSent(fileToRename);
bool oldFileNameExists = System.IO.File.Exists(fileToRename);
bool newFileNameExists = System.IO.File.Exists(desiredRenamedFileName);
Assert.IsTrue((newFileNameExists) && (!oldFileNameExists));
}
...so...?!?
UPDATE 3
I temporarily commented out the folder creation code, and it still breaks, so it wasn't that... maybe testing and Outlook Interop don't mix?
UPDATE 4
For Arturo:
internal static bool EmailGeneratedReport(List<string> recipients)
{
bool success = true;
try
{
Microsoft.Office.Interop.Outlook.Application app = new Microsoft.Office.Interop.Outlook.Application();
MailItem mailItem = app.CreateItem(OlItemType.olMailItem);
Recipients _recipients = mailItem.Recipients;
foreach (string recip in recipients)
{
Recipient outlookRecipient = _recipients.Add(recip);
outlookRecipient.Type = (int)OlMailRecipientType.olTo;
outlookRecipient.Resolve();
}
mailItem.Subject = String.Format("Platypus Reports generated {0}", GetYYYYMMDDHHMM());
List<String> htmlBody = new List<string>
{
"<html><body><img src=\"http://www.platypiRUs.com/wp-content/themes/platypi/images/pru_logo_notag.png\" alt=\"Platypus logo\" ><p>Your Platypus reports are attached. You can also view them online here:</p>"
};
htmlBody.Add("</body></html>");
mailItem.HTMLBody = string.Join(Environment.NewLine, htmlBody.ToArray());
// Commented this out to see if it was the problem with the test failing (it wasn't)
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
FileInfo[] rptsToEmail = GetLastReportsGenerated(uniqueFolder);
foreach (var file in rptsToEmail)
{
String fullFilename = String.Format("{0}\\{1}", uniqueFolder, file.Name);
if (!File.Exists(fullFilename)) continue;
if (!file.Name.Contains(PROCESSED_FILE_APPENDAGE))
{
mailItem.Attachments.Add(fullFilename);
}
MarkFileAsSent(fullFilename);
}
mailItem.Importance = OlImportance.olImportanceHigh;
mailItem.Display(false);
}
catch (System.Exception ex)
{
String exDetail = String.Format(ExceptionFormatString, ex.Message,
Environment.NewLine, ex.Source, ex.StackTrace, ex.InnerException);
MessageBox.Show(exDetail);
success = false;
}
return success;
}
UPDATE 5
More for Arturo:
// Provided the unit name, returns a folder name like "C:\\RoboReporter\\Gramps\\201602260807"
internal static string GetUniqueFolder(string _unit)
{
if (uniqueFolder.Equals(String.Empty))
{
uniqueFolder = String.Format("{0}\\{1}\\{2}", OUTPUT_DIRECTORY, _unit, GetYYYYMMDDHHMM());
}
return uniqueFolder;
}
internal static FileInfo[] GetLastReportsGenerated(string _uniqueFolder)
{
DirectoryInfo d = new DirectoryInfo(_uniqueFolder);
return d.GetFiles(ALL_EXCEL_FILE_EXTENSION);
}
I think you should do better checks on the reports folder.
Try replacing:
if (string.IsNullOrWhiteSpace(uniqueFolder))
{
uniqueFolder = GetUniqueFolder("Test");
ConditionallyCreateDirectory(uniqueFolder);
}
with:
if (string.IsNullOrWhiteSpace(uniqueFolder))
uniqueFolder = GetUniqueFolder("Test");
if (!Directory.Exists(uniqueFolder))
ConditionallyCreateDirectory(uniqueFolder);
Also, you should use Path class to work with paths:
String fullFilename = Path.Combine(uniqueFolder, file.Name);
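A hedged aside on making such tests hermetic: the test runner executes from its own working directory, so anything relative, and any shared static like uniqueFolder, behaves differently than it does under the WinForms app. Pinning each test to an absolute temp folder sidesteps that whole class of failure (names here are illustrative):
[TestClass]
public class EmailReportTests
{
    private string testFolder;

    [TestInitialize]
    public void Setup()
    {
        testFolder = Path.Combine(Path.GetTempPath(), "RoboReporterTests", Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(testFolder);
    }

    [TestCleanup]
    public void Teardown()
    {
        Directory.Delete(testFolder, recursive: true);
    }
}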

Parallel.ForEach Error when using WebClient

First, my disclaimer: I'm a parallel noob. I thought this would be an easy "embarrassingly parallel" problem to tackle, but it's thrown me for a loop.
I'm trying to download some photos in parallel from the web. The original photos are Hi-Res and take up quite a bit of space, so I'm going to compact them once they're downloaded.
Here's the code:
private static void DownloadPhotos(ISet<MyPhoto> photos)
{
List<MyPhoto> failed = new List<MyPhoto>();
DateTime now = DateTime.Now;
string folderDayOfYear = now.DayOfYear.ToString();
string folderYear = now.Year.ToString();
string imagesFolder = string.Format("{0}{1}\\{2}\\", ImagePath, folderYear, folderDayOfYear);
if (!Directory.Exists(imagesFolder))
{
Directory.CreateDirectory(imagesFolder);
}
Parallel.ForEach(photos, photo =>
{
if (!SavePhotoFile(photo.Url, photo.Duid + ".jpg", imagesFolder))
{
failed.Add(photo);
Console.WriteLine("adding to failed photos: {0} ", photo.Duid.ToString());
}
});
Console.WriteLine();
Console.WriteLine("failed photos count: {0}", failed.Count);
RemoveHiResPhotos(string.Format(@"{0}\{1}\{2}", ImagePath, folderYear, folderDayOfYear));
}
private static bool SavePhotoFile(string url, string fileName, string imagesFolder)
{
string fullFileName = imagesFolder + fileName;
string originalFileName = fileName.Replace(".jpg", "-original.jpg");
string fullOriginalFileName = imagesFolder + originalFileName;
if (!File.Exists(fullFileName))
{
using (WebClient webClient = new WebClient())
{
try
{
webClient.DownloadFile(url, fullOriginalFileName);
}
catch (Exception ex)
{
Console.WriteLine();
Console.WriteLine("failed to download photo: {0}", fileName);
return false;
}
}
CreateStandardResImage(fullOriginalFileName, fullOriginalFileName.Replace("-original.jpg", ".jpg"));
}
return true;
}
private static void CreateStandardResImage(string hiResFileName, string stdResFileName)
{
Image image = Image.FromFile(hiResFileName);
Image newImage = image.Resize(1024, 640);
newImage.SaveAs(hiResFileName, stdResFileName, 70, ImageFormat.Jpeg);
}
So here's where things confuse me: each of the photos hits the Catch{} block of the SavePhotoFile() method at the webClient.DownloadFile line. The error message is "An exception occurred during a WebClient request" and the inner detail is "The process cannot access the file . . . -original.jpg because it is being used by another process."
If I wasn't confused enough by this error, I'm confused even more by what happens next. It turns out that if I just ignore the message and wait, the image will eventually download and be processed.
What's going on?
OK, so it appears that in my focus on parallelism I made a simple error: I assumed something about my data that wasn't true. Brianestey figured out the problem: Duid isn't unique. It's supposed to be unique, but some code was missing from the process that creates the list.
The fix was to add this to the MyPhoto class
public override bool Equals(object obj)
{
if (obj is MyPhoto)
{
var objPhoto = obj as MyPhoto;
if (objPhoto.Duid == this.Duid)
return true;
}
return false;
}
public override int GetHashCode()
{
return this.Duid.GetHashCode();
}
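A separate hedged aside, independent of the Duid fix: List<T> is not thread-safe, so failed.Add(photo) inside Parallel.ForEach can race and corrupt the list. ConcurrentBag<T> is the drop-in safe alternative:
var failed = new System.Collections.Concurrent.ConcurrentBag<MyPhoto>();
Parallel.ForEach(photos, photo =>
{
    if (!SavePhotoFile(photo.Url, photo.Duid + ".jpg", imagesFolder))
        failed.Add(photo); // ConcurrentBag.Add is thread-safe
});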
