How to share file access between several processes using a Mutex? - C#

I have a simple class for copying files from one directory to another. I also need to get a file checksum after copying. The copy method may be called by many instances at once, for example, 5 processes copying one file to 5 different directories in parallel, so when some of the processes try to get the checksum, I get an IOException.
So I've tried to tell each process to wait until the source file is unlocked:
bool IsFileLocked(FileInfo file)
{
    try
    {
        using (FileStream stream = file.Open(FileMode.Open, FileAccess.Read, FileShare.None))
        {
            stream.Close();
        }
    }
    catch (IOException)
    {
        return true;
    }
    return false;
}
This approach works, but only if I call Thread.Sleep(10) in a while loop (in the checksum method, to wait); otherwise I get the same error.
while (IsFileLocked(fi))
{
    System.Threading.Thread.Sleep(10);
}
So I can see that this is a very bad solution.
Now I'm trying to use a Mutex:
string GetFileHash(string path)
{
    string hashValue = null;
    using (SHA256 sha256 = SHA256.Create())
    {
        FileInfo fi = new FileInfo(path);
        try
        {
            mutexObj.WaitOne();
            using (FileStream fileStream = fi.Open(FileMode.Open))
            {
                fileStream.Position = 0;
                hashValue = System.Text.Encoding.Default.GetString(sha256.ComputeHash(fileStream));
            }
        }
        catch (IOException ex)
        {
            Console.WriteLine($"GHM:I/O Exception: {ex.Message}");
        }
        catch (UnauthorizedAccessException ex)
        {
            Console.WriteLine($"GHM:Access Exception: {ex.Message}");
        }
        finally
        {
            mutexObj.ReleaseMutex();
        }
    }
    return hashValue;
}
But that doesn't work. I think the problem is that each of the 5 independent processes creates its own Mutex instance.
So, please tell me how to solve this. Is there a way to declare a global mutex?
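For reference, here is a minimal sketch of the "global mutex" idea: a named mutex is an OS-level object, so every process that opens the same name synchronizes on the same lock. The name Global\FileHashMutex, the helper itself, and the FileShare.Read option are illustrative assumptions, not part of the original code; it assumes System.Threading, System.IO, and System.Security.Cryptography are in scope.
string GetFileHashWithNamedMutex(string path)
{
    // All processes that use this same name share one system-wide mutex.
    using (var mutex = new Mutex(false, @"Global\FileHashMutex"))
    {
        mutex.WaitOne();
        try
        {
            using (var sha256 = SHA256.Create())
            // FileShare.Read lets other readers open the file at the same time.
            using (var stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                // Return the hash as a hex string.
                return BitConverter.ToString(sha256.ComputeHash(stream));
            }
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}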

Related

Azure Blob Storage : DownloadToStreamAsync downloading 0kb streams

Looking at my code below, I am amazed at the amount of boilerplate code I am required to write just to ensure that a library downloads a file correctly.
Is there any reason why I see 0 KB downloaded streams, or is it just normal to have to write a method like this?
public static async Task<string> DownloadSASUriInputDataAsync(string workingDirectory, string sasUri)
{
    Trace.TraceInformation("{0}", sasUri);
    var input = new CloudBlockBlob(new Uri(sasUri));
    input.ServiceClient.DefaultRequestOptions.RetryPolicy = new ExponentialRetry(TimeSpan.FromMilliseconds(100), 10);
    var fileName = Path.GetFileName(input.Name);
    await Retry.LinearAsync(async () =>
    {
        try
        {
            using (var ms = new MemoryStream())
            {
                await input.DownloadToStreamAsync(ms);
                ms.Seek(0, SeekOrigin.Begin);
                if (ms.Length == 0)
                {
                    throw new RunAlgorithmException("Downloaded file was 0 byte");
                }
                using (var fs = new FileStream(Path.Combine(workingDirectory, fileName), FileMode.Create, FileAccess.Write))
                {
                    await ms.CopyToAsync(fs);
                }
            }
            Trace.TraceInformation("downloaded file");
        }
        catch (StorageException ex)
        {
            Trace.TraceError("Failed to DownloadSASUriInputDataAsync : {0}", ex.ToString());
            throw;
        }
    }, TimeSpan.FromMilliseconds(500), 10);
    return fileName;
}
The issue with all the 0 KB streams was that the blobs were still being copied.
Blobs can still be accessed while they are being copied, which produces the behavior above.
Adding a check before trying to download, verifying that blob.CopyState is completed or missing, ensures that it works as the SLA states.
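A minimal sketch of such a check is below. The helper name and the 500 ms polling interval are assumptions; it uses FetchAttributesAsync to refresh CopyState before reading it and assumes the Microsoft.WindowsAzure.Storage.Blob namespace is in scope.
// Hypothetical helper: wait until any pending copy has finished before downloading.
static async Task WaitForCopyAsync(CloudBlockBlob blob)
{
    await blob.FetchAttributesAsync(); // refreshes blob.CopyState
    while (blob.CopyState != null && blob.CopyState.Status == CopyStatus.Pending)
    {
        await Task.Delay(TimeSpan.FromMilliseconds(500));
        await blob.FetchAttributesAsync();
    }
    // At this point CopyState is either missing (never copied) or no longer pending.
}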

Deletion of compressed file after extracting it with nunrar

I used the NUnrar library to extract a .rar file:
RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
The decompression works fine, but after this extract operation I can't delete the original compressed file:
System.IO.File.Delete(path);
because the file is being used by another process.
The whole function:
try
{
    FileStream fs = File.OpenRead(path);
    if (path.Contains(".rar"))
    {
        try
        {
            RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
            fs.Close();
        }
        catch { }
    }
}
catch { return; }
finally
{
    if (zf != null)
    {
        zf.IsStreamOwner = true; // Makes close also shut the underlying stream
        zf.Close(); // Ensure we release resources
    }
}
try
{
    System.IO.File.Delete(path);
}
catch { }
So how can I delete the compressed file after extracting it?
I don't know what zf is, but you can also likely wrap it in a using statement. Try replacing your FileStream fs part with this:
using (FileStream fs = File.OpenRead(path))
{
    if (path.Contains(".rar"))
    {
        try
        {
            RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
        }
        catch { }
    }
}
This way fs is closed even if path doesn't contain .rar. You're only closing the fs if rar exists within the filename.
Also, does the library have its own stream handling? It could have a method that closes it.
I also had this issue with NUnrar; neither Close() nor a using statement seems to fix it.
Unfortunately the documentation is scarce, so I'm now using the SharpCompress library, which is a fork of the NUnrar library according to the NUnrar devs. The documentation on SharpCompress is also scarce (but less so), so here is the method I'm using:
private static bool unrar(string filename)
{
    bool error = false;
    string outputpath = Path.GetDirectoryName(filename);
    try
    {
        using (Stream stream = File.OpenRead(filename))
        {
            var reader = ReaderFactory.Open(stream);
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    Console.WriteLine(reader.Entry.Key);
                    reader.WriteEntryToDirectory(outputpath, new ExtractionOptions() { ExtractFullPath = true, Overwrite = true });
                }
            }
        }
    }
    catch (Exception e)
    {
        Console.WriteLine("Failed: " + e.Message);
        error = true;
    }
    if (!error)
    {
        File.Delete(filename);
    }
    return error;
}
Add the following using directives at the top:
using SharpCompress.Common;
using SharpCompress.Readers;
Install via NuGet. This method works with SharpCompress v0.22.0 (the latest at the time of writing).

Special Characters in StreamWriter

I am using a StreamWriter to write a string into a stream. When I access the data from the stream, it adds "\0\0\0" characters to the end of the content. I have to append to the stream contents, so this creates a problem, as I am not able to remove these characters with Trim(), Remove(), or Replace().
Below is the code I am using:
FOR WRITING :
using (MemoryMappedViewStream stream = mmf.CreateViewStream())
{
    using (StreamWriter writer = new StreamWriter(stream, System.Text.Encoding.Unicode))
    {
        try
        {
            string[] files = System.IO.Directory.GetFiles(folderName, "*.*", System.IO.SearchOption.AllDirectories);
            foreach (string str in files)
            {
                writer.WriteLine(str);
            }
            // writer.WriteLine(folderName);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Unable to write string. " + ex);
        }
        finally
        {
            mutex.ReleaseMutex();
            mutex.WaitOne();
        }
    }
}
FOR READING :
StringBuilder sb = new StringBuilder();
string str = @"D:\Other Files\Test_Folder\New Text Document.txt";
using (var stream = mmf.CreateViewStream())
{
    System.IO.StreamReader reader = new System.IO.StreamReader(stream);
    sb.Append(reader.ReadToEnd());
    sb.ToString().Trim('\0');
    sb.Append("\n" + str);
}
How can I prevent this?
[UPDATES]
Writing
// Lock
bool mutexCreated;
Mutex mutex = new Mutex(true, fileName, out mutexCreated);
if (!mutexCreated)
    mutex = new Mutex(true);
try
{
    using (MemoryMappedViewStream stream = mmf.CreateViewStream())
    {
        using (BinaryWriter writer = new BinaryWriter(stream))
        {
            try
            {
                string[] files = System.IO.Directory.GetFiles(folderName, "*.*", System.IO.SearchOption.AllDirectories);
                foreach (string str in files)
                {
                    writer.Write(str);
                }
                writer.Flush();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("Unable to write string. " + ex);
            }
            finally
            {
                mutex.ReleaseMutex();
                mutex.WaitOne();
            }
        }
    }
}
catch (Exception ex)
{
    Debug.WriteLine("Unable to monitor memory file. " + ex);
}
Reading
StringBuilder sb = new StringBuilder();
string str = @"D:\Other Files\Test_Folder\New Text Document.txt";
try
{
    using (var stream = mmf.CreateViewStream())
    {
        System.IO.BinaryReader reader = new System.IO.BinaryReader(stream);
        sb.Append(reader.ReadString());
        sb.Append("\n" + str);
    }
    using (var stream = mmf.CreateViewStream())
    {
        System.IO.BinaryWriter writer = new System.IO.BinaryWriter(stream);
        writer.Write(sb.ToString());
    }
    using (var stream = mmf.CreateViewStream())
    {
        System.IO.BinaryReader reader = new System.IO.BinaryReader(stream);
        Console.WriteLine(reader.ReadString());
    }
}
catch (Exception ex)
{
    Debug.WriteLine("Unable to monitor memory file. " + ex);
}
No '\0' characters are being appended by StreamWriter. They are just the contents of the memory-mapped file, stuff that was there before you started writing. The StreamReader needs an end-of-file indicator to know when to stop reading, and there isn't one in an MMF beyond the size of the section, i.e. the 2nd argument you pass to MemoryMappedFile.CreateNew(string, long).
In other words, you created an MMF that's too large to fit the stream. Of course, you didn't have a time machine to guess how large to make it. You definitely need to do something about it; trimming the zeros isn't good enough. That goes wrong the second time you write a stream that's shorter: the reader will still see the bytes from the previous stream content, and they won't be zero.
This is otherwise a common headache with MMFs; they are just chunks of memory, and a stream is a very poor abstraction of that. It is one of the big reasons it took so long for MMFs to get supported by .NET, even though they are a very core OS feature. You need pointers to map an MMF, and that's just not well supported in a managed language.
I don't see a good way to teach StreamReader new tricks in this case. Copying the bytes from the MMF into a MemoryStream would fix the problem but rather defeats the point of an MMF.
Consider using a pipe instead.
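For illustration, a rough sketch of the pipe idea using System.IO.Pipes; the pipe name "demo_pipe" and the rest of the scaffolding are assumptions, not from the original answer.
// Server side: writes the lines, then closes the pipe, which gives the reader a real end-of-stream.
using (var server = new NamedPipeServerStream("demo_pipe"))
{
    server.WaitForConnection();
    using (var writer = new StreamWriter(server) { AutoFlush = true })
    {
        writer.WriteLine(@"D:\Other Files\Test_Folder\New Text Document.txt");
    }
}

// Client side (in the other process): ReadToEnd() stops at the real end of the data,
// so there is no '\0' padding to trim.
using (var client = new NamedPipeClientStream(".", "demo_pipe"))
{
    client.Connect();
    using (var reader = new StreamReader(client))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
}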
Your combination of an MMF and a TextWriter/TextReader, especially ReadToEnd(), is not a good match.
A TextReader needs the EOF concept of the underlying file, and an MMF just does not supply that in the same way. You will get your strings stuffed with \0\0... up to the capacity of the MMF.
As a possible fix:
- collect the strings to write in a StringBuilder
- use a BinaryWriter to write them as one string
- read it back with a BinaryReader
Another option is to use WriteLine/ReadLine and define some EOF marker (an empty line or a special string).
The BinaryWriter will prefix the string with its length so that the reader knows when to stop.
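A minimal sketch of the BinaryWriter/BinaryReader option follows; the map name "demo_map", its 1 KB size, and the sample string are assumptions, and it needs System.IO, System.IO.MemoryMappedFiles, and System.Text in scope.
using (var mmf = MemoryMappedFile.CreateNew("demo_map", 1024))
{
    // Write one length-prefixed string; the prefix is what tells the reader where to stop.
    using (var stream = mmf.CreateViewStream())
    using (var writer = new BinaryWriter(stream, Encoding.Unicode))
    {
        writer.Write("first file path\r\nsecond file path");
    }

    // ReadString() consumes exactly the prefixed length, so no trailing '\0' padding appears.
    using (var stream = mmf.CreateViewStream())
    using (var reader = new BinaryReader(stream, Encoding.Unicode))
    {
        Console.WriteLine(reader.ReadString());
    }
}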

DataContractSerializer doesn't overwrite all data

I've noticed that if I persist an object back to a file using a DataContractSerializer, and the new XML is shorter than the XML originally present in the file, the remnants of the original XML beyond the length of the new XML remain in the file and break the XML.
Does anyone have a good solution to fix this?
Here's the code I am using to persist the object:
/// <summary>
/// Flushes the current instance of the given type to the datastore.
/// </summary>
private void Flush()
{
    try
    {
        string directory = Path.GetDirectoryName(this.fileName);
        if (!Directory.Exists(directory))
        {
            Directory.CreateDirectory(directory);
        }
        FileStream stream = null;
        try
        {
            stream = new FileStream(this.fileName, FileMode.OpenOrCreate);
            for (int i = 0; i < 3; i++)
            {
                try
                {
                    using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(stream, new System.Text.UTF8Encoding(false)))
                    {
                        stream = null;
                        // The serializer is initialized upstream.
                        this.serializer.WriteObject(writer, this.objectValue);
                    }
                    break;
                }
                catch (IOException)
                {
                    Thread.Sleep(200);
                }
            }
        }
        finally
        {
            if (stream != null)
            {
                stream.Dispose();
            }
        }
    }
    catch
    {
        // TODO: Localize this
        throw;
        //throw new IOException(String.Format(CultureInfo.CurrentCulture, "Unable to save persistable object to file {0}", this.fileName));
    }
}
It's because of how you are opening your stream with:
stream = new FileStream(this.fileName, FileMode.OpenOrCreate);
Try using:
stream = new FileStream(this.fileName, FileMode.Create);
See FileMode documentation.
I believe this is due to using FileMode.OpenOrCreate. If the file already exists, I think the file is being opened and the data is being overwritten from the first byte onward, without truncating it. If you change to FileMode.Create, it forces any existing file to be overwritten completely.
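A small demonstration of the difference, assuming a throwaway file named demo.xml; the file name and contents are illustrative only.
// Write a long document, then overwrite it with a shorter one using OpenOrCreate:
File.WriteAllText("demo.xml", "<root>long-original-content</root>");

using (var stream = new FileStream("demo.xml", FileMode.OpenOrCreate))
using (var writer = new StreamWriter(stream))
{
    writer.Write("<root/>");
}
// The file now starts with "<root/>" but still ends with the leftover tail of the
// old document, which is what breaks the XML.
Console.WriteLine(File.ReadAllText("demo.xml"));

// FileMode.Create truncates the existing file first, so only the new content remains.
using (var stream = new FileStream("demo.xml", FileMode.Create))
using (var writer = new StreamWriter(stream))
{
    writer.Write("<root/>");
}
Console.WriteLine(File.ReadAllText("demo.xml")); // "<root/>"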

The process cannot access the file because it is being used by another process

When I execute the code below, I get the common exception The process cannot access the file *filePath* because it is being used by another process.
What is the most efficient way to allow this thread to wait until it can safely access this file?
Assumptions:
- the file has just been created by me, so it is unlikely that another app is accessing it
- more than one thread from my app might be trying to run this code to append text to the file
The code:
using (var fs = File.Open(filePath, FileMode.Append)) //Exception here
{
    using (var sw = new StreamWriter(fs))
    {
        sw.WriteLine(text);
    }
}
So far, the best that I have come up with is the following. Are there any downsides to doing this?
private static void WriteToFile(string filePath, string text, int retries)
{
    const int maxRetries = 10;
    try
    {
        using (var fs = File.Open(filePath, FileMode.Append))
        {
            using (var sw = new StreamWriter(fs))
            {
                sw.WriteLine(text);
            }
        }
    }
    catch (IOException)
    {
        if (retries < maxRetries)
        {
            Thread.Sleep(1);
            WriteToFile(filePath, text, retries + 1);
        }
        else
        {
            throw new Exception("Max retries reached.");
        }
    }
}
If you have multiple threads attempting to access the same file, consider using a locking mechanism. The simplest form could be:
lock (someSharedObject)
{
    using (var fs = File.Open(filePath, FileMode.Append))
    {
        using (var sw = new StreamWriter(fs))
        {
            sw.WriteLine(text);
        }
    }
}
As an alternative, consider:
File.AppendAllText(filePath, text);
You can set a FileShare to allow multiple access with the File.Open command, like:
File.Open(path, FileMode.Open, FileAccess.Write, FileShare.ReadWrite)
But I think the cleanest way, if you have multiple threads trying to write into one file, would be to put all these messages into a Queue<T> and have one additional thread that writes all elements of the queue into the file.
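A rough sketch of that single-writer idea, using BlockingCollection<string> (from System.Collections.Concurrent) instead of a plain Queue<T> so the writer thread can block until messages arrive; the variable names are illustrative and filePath/text come from the question.
// Producers on any thread add lines; only the writer thread ever touches the file.
var pendingLines = new BlockingCollection<string>();

var writerThread = new Thread(() =>
{
    using (var sw = new StreamWriter(filePath, append: true))
    {
        // GetConsumingEnumerable blocks until items arrive and ends after CompleteAdding().
        foreach (string line in pendingLines.GetConsumingEnumerable())
        {
            sw.WriteLine(line);
        }
    }
});
writerThread.Start();

// Any thread can now safely enqueue:
pendingLines.Add(text);

// On shutdown, let the writer drain the queue and finish:
pendingLines.CompleteAdding();
writerThread.Join();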
