Deletion of compressed file after extracting it with nunrar - c#

I used the NUnrar library to extract a .rar file:
RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
The decompression works fine, but after the extraction I can't delete the original compressed file:
System.IO.File.Delete(path);
because the file is being used by another process.
The whole function:
try
{
    FileStream fs = File.OpenRead(path);
    if (path.Contains(".rar"))
    {
        try
        {
            RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
            fs.Close();
        }
        catch { }
    }
}
catch { return; }
finally
{
    if (zf != null)
    {
        zf.IsStreamOwner = true; // Makes close also shut the underlying stream
        zf.Close(); // Ensure we release resources
    }
}
try
{
    System.IO.File.Delete(path);
}
catch { }
So, can I delete the compressed file after extracting it?

I don't know what zf is, but you can likely wrap that in a using statement too. Try replacing your FileStream fs part with this:
using (FileStream fs = File.OpenRead(path))
{
    if (path.Contains(".rar"))
    {
        try
        {
            RarArchive.WriteToDirectory(fs.Name, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
        }
        catch { }
    }
}
This way fs is closed even if path doesn't contain .rar; as written, you only close fs when ".rar" appears in the filename.
Also, does the library have its own stream handling? It could have a method that closes it.
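If the library manages the file itself, a minimal sketch could look like this (assuming the path-based WriteToDirectory overload shown in the question opens and closes its own stream), so no FileStream of your own ever holds the file:

// Sketch only: let the library open the archive from the path itself.
if (path.Contains(".rar"))
{
    try
    {
        RarArchive.WriteToDirectory(path, Path.Combine(@"D:\DataDownloadCenter", path2), ExtractOptions.Overwrite);
    }
    catch { }
}

// With no stream of our own left open, the delete should no longer fail with
// "file is in use" -- unless the library itself keeps a handle open.
System.IO.File.Delete(path);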

I also had this issue with nunrar; neither Close() nor a using statement seemed to fix it.
Unfortunately the documentation is scarce, so I'm now using the SharpCompress library, which is a fork of the nunrar library according to the devs of nunrar. The documentation on SharpCompress is also scarce (but less so), so here is the method I'm using:
private static bool unrar(string filename)
{
    bool error = false;
    string outputpath = Path.GetDirectoryName(filename);
    try
    {
        using (Stream stream = File.OpenRead(filename))
        {
            var reader = ReaderFactory.Open(stream);
            while (reader.MoveToNextEntry())
            {
                if (!reader.Entry.IsDirectory)
                {
                    Console.WriteLine(reader.Entry.Key);
                    reader.WriteEntryToDirectory(outputpath, new ExtractionOptions() { ExtractFullPath = true, Overwrite = true });
                }
            }
        }
    }
    catch (Exception e)
    {
        Console.WriteLine("Failed: " + e.Message);
        error = true;
    }
    if (!error)
    {
        File.Delete(filename);
    }
    return error;
}
Add the following using directives at the top:
using SharpCompress.Common;
using SharpCompress.Readers;
Install via NuGet. This method works with SharpCompress v0.22.0 (the latest at the time of writing).
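A hypothetical call (the archive path is only an illustration):

// unrar returns true when extraction failed; on success the archive is deleted.
bool failed = unrar(@"D:\DataDownloadCenter\archive.rar");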

Related

How to share file access between some processes using Mutex?

I have a simple class for copying files from one directory to another. I also need to get a file checksum after copying. The copying method may be called by many instances; for example, 5 processes may be copying one file to 5 different directories in parallel, so when some processes try to get the checksum, I get an IO exception.
So I've tried to tell each process to wait until the source file is unlocked:
bool IsFileLocked(FileInfo file)
{
    try
    {
        using (FileStream stream = file.Open(FileMode.Open, FileAccess.Read, FileShare.None))
        {
            stream.Close();
        }
    }
    catch (IOException)
    {
        return true;
    }
    return false;
}
Such a solution works, but only if I call Thread.Sleep(10) in a while loop (in the checksum method, to wait); otherwise I get the same error.
while (IsFileLocked(fi))
{
    System.Threading.Thread.Sleep(10);
}
So I can see that it's a very bad solution.
Now I'm trying to use a Mutex:
string GetFileHash(string path)
{
    string hashValue = null;
    using (SHA256 sha256 = SHA256.Create())
    {
        FileInfo fi = new FileInfo(path);
        try
        {
            mutexObj.WaitOne();
            using (FileStream fileStream = fi.Open(FileMode.Open))
            {
                fileStream.Position = 0;
                hashValue = System.Text.Encoding.Default.GetString(sha256.ComputeHash(fileStream));
            }
        }
        catch (IOException ex)
        {
            Console.WriteLine($"GHM:I/O Exception: {ex.Message}");
        }
        catch (UnauthorizedAccessException ex)
        {
            Console.WriteLine($"GHM:Access Exception: {ex.Message}");
        }
        finally
        {
            mutexObj.ReleaseMutex();
        }
    }
    return hashValue;
}
But that doesn't work. I think the problem is that there are different Mutex instances in the 5 independent processes.
So please tell me how to solve this. Is there a way to declare a global mutex?
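A named (system-wide) Mutex is the usual way to synchronize across processes; here is a minimal sketch, with the mutex name chosen purely for illustration:

// A mutex created with the "Global\" prefix is visible to every process on the machine.
using (var mutex = new Mutex(false, @"Global\MyFileCopyMutex"))
{
    mutex.WaitOne();
    try
    {
        // ... open the file and compute the checksum here ...
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}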

Azure Blob Storage : DownloadToStreamAsync downloading 0kb streams

Looking at my code below, I am amazed at the amount of boilerplate code I am required to write just to ensure that a library downloads a file correctly.
Is there any reason why I see 0 KB downloaded streams, or is it just normal to have to write a method like this?
public static async Task<string> DownloadSASUriInputDataAsync(string workingDirectory, string sasUri)
{
    Trace.TraceInformation("{0}", sasUri);
    var input = new CloudBlockBlob(new Uri(sasUri));
    input.ServiceClient.DefaultRequestOptions.RetryPolicy = new ExponentialRetry(TimeSpan.FromMilliseconds(100), 10);
    var fileName = Path.GetFileName(input.Name);
    await Retry.LinearAsync(async () =>
    {
        try
        {
            using (var ms = new MemoryStream())
            {
                await input.DownloadToStreamAsync(ms);
                ms.Seek(0, SeekOrigin.Begin);
                if (ms.Length == 0)
                {
                    throw new RunAlgorithmException("Downloaded file was 0 byte");
                }
                using (var fs = new FileStream(Path.Combine(workingDirectory, fileName), FileMode.Create, FileAccess.Write))
                {
                    await ms.CopyToAsync(fs);
                }
            }
            Trace.TraceInformation("downloaded file");
        }
        catch (StorageException ex)
        {
            Trace.TraceError("Failed to DownloadSASUriInputDataAsync : {0}", ex.ToString());
            throw;
        }
    }, TimeSpan.FromMilliseconds(500), 10);
    return fileName;
}
The issue with all the 0 KB streams was that the blobs were still being copied.
Blobs can still be accessed even while they are being copied, which gives the behavior above.
Adding a check before trying to download that blob.CopyState is completed (or missing) ensures that it works as the SLA states.
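A minimal sketch of such a check, assuming the same WindowsAzure.Storage client objects as in the method above (FetchAttributesAsync populates CopyState):

// Refresh blob properties, including CopyState, from the service.
await input.FetchAttributesAsync();

// CopyState is null when the blob was never a copy destination;
// a Pending status means the copy has not finished yet, so retry later.
if (input.CopyState != null && input.CopyState.Status == CopyStatus.Pending)
{
    throw new RunAlgorithmException("Blob copy still pending; retry the download later");
}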

How to extract a folder from zip file using SharpZipLib?

I have a test.zip file which contains inside a Folder with a bunch of other files and folders in it.
I found SharpZipLib after figuring out that .gz / GZipStream was not the way to go, since it's only for individual files. More importantly, doing this is similar to using GZipStream, meaning it will create a single FILE. But I have a whole folder zipped. How do I unzip it to a folder?
For some reason the unzipping example here is set to ignore directories, so I'm not totally sure how that is done.
Also, I need to use .NET 2.0 to accomplish this.
I think this is the easiest way.
Its default functionality (please look here for more info: https://github.com/icsharpcode/SharpZipLib/wiki/FastZip) extracts with folders.
Code:
using System;
using ICSharpCode.SharpZipLib.Zip;

var zipFileName = @"T:\Temp\Libs\SharpZipLib_0860_Bin.zip";
var targetDir = @"T:\Temp\Libs\unpack";

FastZip fastZip = new FastZip();
string fileFilter = null;

// Will always overwrite if target filenames already exist
fastZip.ExtractZip(zipFileName, targetDir, fileFilter);
This is how I did it:
public void UnZipp(string srcDirPath, string destDirPath)
{
    ZipInputStream zipIn = null;
    FileStream streamWriter = null;
    try
    {
        Directory.CreateDirectory(Path.GetDirectoryName(destDirPath));
        zipIn = new ZipInputStream(File.OpenRead(srcDirPath));
        ZipEntry entry;
        while ((entry = zipIn.GetNextEntry()) != null)
        {
            string dirPath = Path.GetDirectoryName(destDirPath + entry.Name);
            if (!Directory.Exists(dirPath))
            {
                Directory.CreateDirectory(dirPath);
            }
            if (!entry.IsDirectory)
            {
                streamWriter = File.Create(destDirPath + entry.Name);
                int size = 2048;
                byte[] buffer = new byte[size];
                while ((size = zipIn.Read(buffer, 0, buffer.Length)) > 0)
                {
                    streamWriter.Write(buffer, 0, size);
                }
                streamWriter.Close(); // close each file as soon as it is written
            }
        }
    }
    catch (System.Threading.ThreadAbortException)
    {
        // do nothing
    }
    catch (Exception)
    {
        throw; // rethrow without losing the stack trace
    }
    finally
    {
        if (zipIn != null)
        {
            zipIn.Close();
        }
        if (streamWriter != null)
        {
            streamWriter.Close();
        }
    }
}
It's sloppy but I hope it helps!

Process cannot Access a Locked File that is only used by my own Application

I keep getting the following error when I try to write to a temporary file:
The process cannot access the file
'C:\Users\jdoe\AppData\Local\Temp\jdoe.tmp' because it is being used
by another process.
These are the only methods that do anything with the file:
private void LoadData(string filePath)
{
    if (!File.Exists(filePath))
    {
        File.Create(filePath);
        return;
    }
    var fileDetails = new FileInfo(filePath);
    if (fileDetails.Length > 0)
    {
        using (var fileStream = new FileStream(filePath, FileMode.Open))
        {
            // Do stuff...
            fileStream.Close();
        }
    }
}

private void SaveData(string filePath)
{
    using (var fileStream = new FileStream(filePath, FileMode.Create))
    {
        // Do stuff...
        fileStream.Close();
    }
}
What is locking the file?
Turns out File.Create(filePath) returns a FileStream, which needs to be closed. The error disappeared by simply changing the File.Create() to this:
if (!File.Exists(filePath))
{
    File.Create(filePath).Close();
    return;
}
Alternatively, you can remove the first block of the code entirely: when you write with FileMode.Create, the file will be created if it doesn't exist, or overwritten if it already does.

Datacontractserializer doesn't overwrite all data

I've noticed that when I persist an object back into a file using a DataContractSerializer, if the new XML is shorter than the XML originally present in the file, the remnants of the original XML beyond the length of the new XML remain in the file and break the XML.
Does anyone have a good solution to fix this?
Here's the code I am using to persist the object:
/// <summary>
/// Flushes the current instance of the given type to the datastore.
/// </summary>
private void Flush()
{
    try
    {
        string directory = Path.GetDirectoryName(this.fileName);
        if (!Directory.Exists(directory))
        {
            Directory.CreateDirectory(directory);
        }
        FileStream stream = null;
        try
        {
            stream = new FileStream(this.fileName, FileMode.OpenOrCreate);
            for (int i = 0; i < 3; i++)
            {
                try
                {
                    using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(stream, new System.Text.UTF8Encoding(false)))
                    {
                        stream = null;
                        // The serializer is initialized upstream.
                        this.serializer.WriteObject(writer, this.objectValue);
                    }
                    break;
                }
                catch (IOException)
                {
                    Thread.Sleep(200);
                }
            }
        }
        finally
        {
            if (stream != null)
            {
                stream.Dispose();
            }
        }
    }
    catch
    {
        // TODO: Localize this
        throw;
        //throw new IOException(String.Format(CultureInfo.CurrentCulture, "Unable to save persistable object to file {0}", this.fileName));
    }
}
It's because of how you are opening your stream with:
stream = new FileStream(this.fileName, FileMode.OpenOrCreate);
Try using:
stream = new FileStream(this.fileName, FileMode.Create);
See FileMode documentation.
I believe this is due to using FileMode.OpenOrCreate. If the file already exists, I think the file is being opened and parts of the data are being overwritten from the start byte. If you change to using FileMode.Create, it forces any existing file to be overwritten.
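If you need to keep FileMode.OpenOrCreate for some reason, one possible alternative (a sketch, not part of the original answers) is to truncate the stream before serializing:

stream = new FileStream(this.fileName, FileMode.OpenOrCreate);
// Drop any bytes left over from a previous, longer serialization
// so stale XML can't trail the new document.
stream.SetLength(0);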
