I am trying to compress files with GZIP and my application monitors a folder for new files. When a new file comes in, it should be compressed and then application should continue doing this every time a new file comes in folder.
private void Compress(string filePath)
{
    using (FileStream inputStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.ReadWrite))
    {
        using (FileStream outputStream = new FileStream(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), @"C:\Users\maki\Desktop\Input"), FileMode.OpenOrCreate, FileAccess.ReadWrite)) // 'System.UnauthorizedAccessException'
        {
            using (GZipStream gzip = new GZipStream(outputStream, CompressionMode.Compress))
            {
                inputStream.CopyTo(gzip);
            }
        }
    }
}
When I execute the application, I get this exception:
An unhandled exception of type 'System.UnauthorizedAccessException' occurred in mscorlib.dll
Additional information:
Access to the path 'C:\Users\maki\Desktop\Input' is denied.
I've searched a lot on the internet but couldn't find a proper answer.
Can anyone help me with this issue?
The issue could be related to the way the file stream is instantiated. In your code, you are combining a path with another fully qualified path using the Path.Combine method; when the second argument is a fully qualified path, Path.Combine simply returns it and ignores the first.
Please see the code below. Another possible issue is the hard-coded path: is the file named Input or Input.gz? Also note the ability to stack using statements for reduced nesting.
private void Compress(string filePath)
{
    using (FileStream inputStream =
        new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.ReadWrite))
    using (FileStream outputStream =
        new FileStream(@"C:\Users\maki\Desktop\Input",
            FileMode.OpenOrCreate, FileAccess.ReadWrite))
    using (GZipStream gzip = new GZipStream(outputStream, CompressionMode.Compress))
    {
        inputStream.CopyTo(gzip);
    }
}
I am copying files asynchronously following the Microsoft article https://learn.microsoft.com/en-us/dotnet/standard/io/asynchronous-file-i-o
The issue I am running into is that when the files have finished copying, the date-modified value isn't kept; it is set to the time the copy was made.
To compensate for that, I am trying to set the date-modified time for each file after it has finished copying with the File.SetLastWriteTime static method.
foreach (var file in dir.EnumerateFiles())
{
    string temppath = Path.Combine(destDirName, file.Name);
    using (FileStream reader = new FileStream(file.FullName, FileMode.Open, FileAccess.Read))
    {
        using (FileStream writer = new FileStream(temppath, FileMode.Create, FileAccess.ReadWrite))
        {
            await reader.CopyToAsync(writer);
            File.SetLastWriteTime(temppath, file.LastWriteTime);
        }
    }
}
Unfortunately, it seems that the File.SetLastWriteTime call takes effect before await reader.CopyToAsync(writer) has truly finished.
How can I make sure that File.SetLastWriteTime isn't executed until after reader.CopyToAsync has finished?
It appears to work as intended if I change the method to copy synchronously inside a Task.Run, but I'm not sure that is the correct way to do it.
I was able to figure it out.
The reason the file time didn't stick is that the write stream was still open; disposing the FileStream flushes it and updates the last-write time again, overwriting the value I had just set.
I simply moved the call outside of the write stream's using block and that resolved the problem.
foreach (var file in dir.EnumerateFiles())
{
    string temppath = Path.Combine(destDirName, file.Name);
    using (FileStream reader = new FileStream(file.FullName, FileMode.Open, FileAccess.Read))
    {
        using (FileStream writer = new FileStream(temppath, FileMode.Create, FileAccess.ReadWrite))
        {
            await reader.CopyToAsync(writer);
        }
        File.SetLastWriteTime(temppath, file.LastWriteTime);
    }
}
I have problems parsing request files.
My file size is 1338521 bytes, but Nancy says the file size is sometimes 1751049 or 3200349.
On my Windows PC it works fine; on the Linux server this problem appears, so I can't save the file.
string result = Convert.ToBase64String(Core.ReadBytesFromStream(file.Value));
using (MemoryStream ms = new MemoryStream(Convert.FromBase64String(result)))
{
    using (Bitmap bm2 = new Bitmap(ms))
    {
        bm2.Save(path);
    }
}
Any ideas?
You don't need to convert the file like that.
var filename = Path.Combine(storagePath, Request.Files[0].Name);
using (var fileStream = new FileStream(filename, FileMode.Create))
{
    Request.Files[0].Value.CopyTo(fileStream);
}
Validate the file when it comes in to ensure the extension is accepted, create a save path, and copy the stream to a new file on the filesystem.
That's it.
I have downloaded a zip file from blob storage and saved it to the isolated storage of a Windows Phone app like this (the FileStream fs is the stream the blob was downloaded to):
public static void SaveToIsolatedStorage(FileStream fs, string fileName)
{
    var isolatedStorage = IsolatedStorageFile.GetUserStoreForApplication();
    using (var streamWriter =
        new StreamWriter(new IsolatedStorageFileStream(fileName,
            FileMode.Create,
            FileAccess.ReadWrite,
            isolatedStorage)))
    {
        streamWriter.Write(fs);
    }
}
But when I checked this zip file using IsoStoreSpy, it showed up as corrupted. I verified this by reading it back from isolated storage and trying to unzip it, which didn't work. I am sure it is corrupted, because when I replace the file with some other zip using IsoStoreSpy and then try to unzip that, it works.
Edit:-
Code for downloading from Blob
private async Task DownloadFileFromBlobStorage()
{
    var filename = "AppId_2.zip";
    var blobContainer = GetBlobClient.GetContainerReference("testwpclientiapcontainer");
    var blob = blobContainer.GetBlockBlobReference(filename);
    using (var filestream = new FileStream(filename, FileMode.Create))
    {
        await blob.DownloadToStreamAsync(filestream);
        SaveToIsolatedStorage(filestream, filename);
    }
}
So does anybody know how I can save the zip file to isolated storage without it getting corrupted?
You're using a StreamWriter. That's for text. You shouldn't be using it to copy a zip file at all. Never use any TextWriter for binary data.
Next you're using StreamWriter.Write(object), which is basically going to call ToString on the FileStream. That's not going to work either.
You should just create an IsolatedStorageFileStream, and then call fs.CopyTo(output).
public static void SaveToIsolatedStorage(Stream input, string fileName)
{
    using (var storage = IsolatedStorageFile.GetUserStoreForApplication())
    {
        // Simpler than calling the IsolatedStorageFileStream constructor...
        using (var output = storage.CreateFile(fileName))
        {
            input.CopyTo(output);
        }
    }
}
In your edit you've shown code which saves to a FileStream first, and then copies the stream from the current position. As you've noted in comments, you needed to rewind it first.
Personally I wouldn't use a FileStream at all here - why do you want to save it as a normal file and an isolated file? Just use a MemoryStream:
using (var stream = new MemoryStream())
{
    await blob.DownloadToStreamAsync(stream);
    stream.Position = 0;
    SaveToIsolatedStorage(stream, filename);
}
(Note that your SaveToIsolatedStorage method is still synchronous... you may wish to consider an asynchronous version.)
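Following that note, a minimal asynchronous sketch of the same method (the Async suffix and class wrapper are mine; the isolated-storage calls are the same ones used above):

```csharp
using System.IO;
using System.IO.IsolatedStorage;
using System.Threading.Tasks;

public static class IsolatedStorageHelper
{
    public static async Task SaveToIsolatedStorageAsync(Stream input, string fileName)
    {
        using (var storage = IsolatedStorageFile.GetUserStoreForApplication())
        using (var output = storage.CreateFile(fileName))
        {
            // CopyToAsync keeps the UI thread free while the bytes are written.
            await input.CopyToAsync(output);
        }
    }
}
```

As with the synchronous version, rewind the source stream (stream.Position = 0) before calling it.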
I am using the following code to zip a file and it works fine, but when I decompress with WinRar I get the original file name without the extension. Any clue why, if the filename is myReport.xls, I get only myReport when I decompress?
using (var fs = new FileStream(fileName, FileMode.Open))
{
    byte[] input = new byte[fs.Length];
    fs.Read(input, 0, input.Length);
    using (var fsOutput = new FileStream(zipName, FileMode.Create, FileAccess.Write))
    using (var zip = new GZipStream(fsOutput, CompressionMode.Compress))
    {
        zip.Write(input, 0, input.Length);
    }
}
GZip compresses only one file and does not store the file name. Therefore, if you compress the file myReport.xls, you should name the archive myReport.xls.gz. On decompression the last file extension is removed, so you end up with the original filename.
That is the way it has been done on Unix/Linux for ages...
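Following that convention, a small sketch (the method name and file names are just examples) that appends .gz to the original name, so decompression restores myReport.xls:

```csharp
using System.IO;
using System.IO.Compression;

public static class GzipNaming
{
    // Compress "myReport.xls" into "myReport.xls.gz"; stripping the
    // trailing .gz on decompression yields the original name again.
    public static string CompressKeepingName(string fileName)
    {
        string zipName = fileName + ".gz";
        using (var input = File.OpenRead(fileName))
        using (var output = File.Create(zipName))
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            input.CopyTo(gzip);
        }
        return zipName;
    }
}
```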
Very weird indeed. A brief search came up with the following:
http://dotnetzip.codeplex.com/discussions/268293
It says that GZipStream has no way of knowing the name of the stream being written, and suggests you set the FileName property directly.
Hope that helps.
I'm trying to do so but the program throws this exception:
An unhandled exception of type 'System.IO.IOException' occurred in mscorlib.dll
Additional information: The process cannot access the file 'C:\Users\Roy\documents\visual studio 2010\Projects\Assignment3\Assignment3\bin\Debug\Images\Chrysanthemum.jpg' because it is being used by another process.
Is there a way to use it while it's open?
code:
if (imgAddMessage.Source != null)
{
    BitmapImage src = (BitmapImage)imgAddMessage.Source;
    if (!Directory.Exists("Images"))
    {
        Directory.CreateDirectory("Images");
    }
    FileStream stream = new FileStream("Images/" + imageName, FileMode.Create, FileAccess.ReadWrite);
    JpegBitmapEncoder encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(src));
    encoder.Save(stream);
    stream.Close();
}
Two things.
1) Your posted code doesn't have a using statement around the FileStream, nor does it call Dispose on it.
2) Whatever piece of code is showing the picture you are trying to save must open it with FileAccess.Read, because only a single piece of code can hold a write lock on a file. Better yet, if it's your own program that is showing the file, cache the image into memory and show it from the bytes in memory. Then you'll hold no locks on the file and can do whatever you want with it later on...
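If it is your own WPF code showing the image, one common way to cache it into memory is BitmapCacheOption.OnLoad combined with a MemoryStream (a sketch; the path parameter and class name are mine):

```csharp
using System.IO;
using System.Windows.Media.Imaging;

public static class ImageLoader
{
    // Reads the file fully into memory and decodes from that copy,
    // so no handle to the file on disk remains open after this returns.
    public static BitmapImage LoadWithoutLock(string path)
    {
        byte[] bytes = File.ReadAllBytes(path); // file handle closed here

        var image = new BitmapImage();
        image.BeginInit();
        image.CacheOption = BitmapCacheOption.OnLoad; // decode immediately
        image.StreamSource = new MemoryStream(bytes);
        image.EndInit();
        image.Freeze(); // no further reads from the stream
        return image;
    }
}
```

With the image held entirely in memory, the original file can be overwritten or re-encoded freely.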
Just curious if you're still having issues.
Did you try the following?
if (imgAddMessage.Source != null)
{
    string imageDirectory = "pack://application:,,,/Images";
    BitmapImage src = (BitmapImage)imgAddMessage.Source;
    if (!Directory.Exists(imageDirectory))
        Directory.CreateDirectory(imageDirectory);
    using (FileStream stream = new FileStream(Path.Combine(imageDirectory, imageName), FileMode.Create, FileAccess.ReadWrite))
    {
        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(src));
        encoder.Save(stream);
    }
}
And as @Marco said, you must have read/write permission.