gzipstream memory stream to file - c#

I am trying to compress JSON files using Gzip compression to be sent to another location. It needs to process 5,000 - 10,000 files daily, and I don't need the compressed version of the file on the local machine (they are actually being transferred to AWS S3 for long-term archiving).
Since I don't need them locally, I am trying to compress to a memory stream and then use that to write to AWS, rather than compressing each one to disk. Whenever I try to do this, the files are broken (as in, when I open them in 7-Zip and try to open the JSON file inside, I get "Data error: File is broken").
The same thing happens when I try to write the memory stream to a local file, so I'm trying to solve that for now. Here's the code:
string[] files = Directory.GetFiles(@"C:\JSON_Logs");

foreach (string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (FileStream originalFileStream = fileToCompress.OpenRead())
    {
        using (MemoryStream compressedMemStream = new MemoryStream())
        {
            using (GZipStream compressionStream = new GZipStream(compressedMemStream, CompressionMode.Compress))
            {
                originalFileStream.CopyTo(compressionStream);
                compressedMemStream.Seek(0, SeekOrigin.Begin);
                FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz");

                //Eventually this will be the AWS transfer, but that's not important here
                compressedMemStream.WriteTo(compressedFileStream);
            }
        }
    }
}

Rearrange your using statements so the GZipStream is definitely done by the time you read the memory stream contents:
foreach (string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (MemoryStream compressedMemStream = new MemoryStream())
    {
        using (FileStream originalFileStream = fileToCompress.OpenRead())
        using (GZipStream compressionStream = new GZipStream(
            compressedMemStream,
            CompressionMode.Compress,
            leaveOpen: true))
        {
            originalFileStream.CopyTo(compressionStream);
        }

        // The GZipStream has been disposed at this point, so its footer is written
        compressedMemStream.Seek(0, SeekOrigin.Begin);

        //Eventually this will be the AWS transfer, but that's not important here
        using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz"))
        {
            compressedMemStream.WriteTo(compressedFileStream);
        }
    }
}
Disposing a stream takes care of flushing and closing it.
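
To see why the ordering matters: GZipStream writes the gzip footer (CRC-32 and uncompressed length) only when it is closed, so reading the MemoryStream contents before the GZipStream is disposed produces a truncated archive. As a minimal sketch of the pattern (the sample payload is made up for illustration), the following compresses to memory and verifies the result round-trips:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipRoundTrip
{
    static byte[] Compress(byte[] data)
    {
        using (var ms = new MemoryStream())
        {
            // leaveOpen: true keeps the MemoryStream usable after the GZipStream is disposed
            using (var gz = new GZipStream(ms, CompressionMode.Compress, leaveOpen: true))
            {
                gz.Write(data, 0, data.Length);
            } // the gzip footer is flushed here, on Dispose
            return ms.ToArray();
        }
    }

    static void Main()
    {
        byte[] original = Encoding.UTF8.GetBytes("{\"hello\":\"world\"}");
        using (var gz = new GZipStream(new MemoryStream(Compress(original)), CompressionMode.Decompress))
        using (var result = new MemoryStream())
        {
            gz.CopyTo(result);
            Console.WriteLine(Encoding.UTF8.GetString(result.ToArray())); // {"hello":"world"}
        }
    }
}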

Related

Any reason why this code may have corrupted a couple of files while creating a zip file?

The following code creates a zip file from S3 objects by pulling them into memory and writing the final product to a file on disk. However, it has been observed to corrupt a few files (out of thousands) while creating the zip. I've checked: there is nothing wrong with the files that got corrupted during the process, because the same files get zipped properly by other means. Any suggestions to fine-tune the code?
Code:
public static async Task S3ToZip(List<string> pdfBatch, string zipPath, IAmazonS3 s3Client)
{
    FileStream fileStream = new FileStream(zipPath, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite);
    using (ZipArchive archive = new ZipArchive(fileStream, ZipArchiveMode.Update, true))
    {
        foreach (var file in pdfBatch)
        {
            GetObjectRequest request = new GetObjectRequest
            {
                BucketName = "sample-bucket",
                Key = file
            };
            using GetObjectResponse response = await s3Client.GetObjectAsync(request);
            using Stream responseStream = response.ResponseStream;
            ZipArchiveEntry zipFileEntry = archive.CreateEntry(file.Split('/')[^1]);
            using Stream zipEntryStream = zipFileEntry.Open();
            await responseStream.CopyToAsync(zipEntryStream);
            zipEntryStream.Seek(0, SeekOrigin.Begin);
            zipEntryStream.CopyTo(fileStream);
        }
        archive.Dispose();
        fileStream.Close();
    }
}
Don't call Dispose() or Close() explicitly; let using do all the work. And you don't need to write anything to fileStream yourself: writing to the ZipArchiveEntry stream does it under the hood. You also need to use FileMode.Create to guarantee that your file is always truncated before writing to it. Also, as you are only creating an archive, not updating it, you should use ZipArchiveMode.Create to enable memory-efficient streaming (thanks to @canton7 for some deep diving into the details of the zip archive format).
public static async Task S3ToZip(List<string> pdfBatch, string zipPath, IAmazonS3 s3Client)
{
    using FileStream fileStream = new FileStream(zipPath, FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
    using ZipArchive archive = new ZipArchive(fileStream, ZipArchiveMode.Create, true);
    foreach (var file in pdfBatch)
    {
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = "sample-bucket",
            Key = file
        };
        using GetObjectResponse response = await s3Client.GetObjectAsync(request);
        using Stream responseStream = response.ResponseStream;
        ZipArchiveEntry zipFileEntry = archive.CreateEntry(file.Split('/')[^1]);
        using Stream zipEntryStream = zipFileEntry.Open();
        await responseStream.CopyToAsync(zipEntryStream);
    }
}
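
For completeness, a hypothetical call site might look like the following (the bucket keys, output path, and client construction are illustrative assumptions, not from the original post):

var s3Client = new AmazonS3Client(); // assumes credentials come from the environment
var batch = new List<string> { "invoices/2020/a.pdf", "invoices/2020/b.pdf" };
await S3ToZip(batch, @"C:\temp\batch.zip", s3Client);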

How to Compress Large Files C#

I am using this method to compress files and it works great until I get to a file that is 2.4 GB; then it gives me an overflow error:
void CompressThis(string inFile, string compressedFileName)
{
    FileStream sourceFile = File.OpenRead(inFile);
    FileStream destinationFile = File.Create(compressedFileName);

    byte[] buffer = new byte[sourceFile.Length];
    sourceFile.Read(buffer, 0, buffer.Length);

    using (GZipStream output = new GZipStream(destinationFile, CompressionMode.Compress))
    {
        output.Write(buffer, 0, buffer.Length);
    }

    // Close the files.
    sourceFile.Close();
    destinationFile.Close();
}
What can I do to compress huge files?
You should not write the whole file into memory; use Stream.CopyTo instead. This method reads the bytes from the current stream and writes them to another stream using a specified buffer size (81920 bytes by default).
Also, you don't need to close Stream objects if you use the using keyword.
void CompressThis(string inFile, string compressedFileName)
{
    using (FileStream sourceFile = File.OpenRead(inFile))
    using (FileStream destinationFile = File.Create(compressedFileName))
    using (GZipStream output = new GZipStream(destinationFile, CompressionMode.Compress))
    {
        sourceFile.CopyTo(output);
    }
}
You can find a more complete example on Microsoft Docs (formerly MSDN).
You're trying to allocate all of this into memory, which just isn't necessary; you can feed the input stream directly into the output stream.
An alternative solution, for the zip format, that avoids buffering the file in memory:
using (var sourceFileStream = new FileStream(this.GetFilePath(sourceFileName), FileMode.Open))
{
    using (var destinationStream =
        new FileStream(this.GetFilePath(zipFileName), FileMode.Create, FileAccess.ReadWrite))
    {
        using (var archive = new ZipArchive(destinationStream, ZipArchiveMode.Create, true))
        {
            var file = archive.CreateEntry(sourceFileName, CompressionLevel.Optimal);
            using (var entryStream = file.Open())
            {
                // CopyToAsync (not CopyTo) is required here because the call is awaited
                await sourceFileStream.CopyToAsync(entryStream);
            }
        }
    }
}
This solution writes directly from the input stream to the output stream.

How do I convert this to read a zip file? [duplicate]

This question already has answers here:
Unzipping a .gz file using C#
(3 answers)
Closed 8 years ago.
I am reading an unzipped binary file from disk like this:
string fn = @"c:\MyBinaryFile.DAT";
byte[] ba = File.ReadAllBytes(fn);
MemoryStream msReader = new MemoryStream(ba);
I now want to increase the speed of I/O by using a zipped binary file. But how do I fit it into the above scheme?
string fn = @"c:\MyZippedBinaryFile.GZ";
//Put something here
byte[] ba = File.ReadAllBytes(fn);
//Or here
MemoryStream msReader = new MemoryStream(ba);
What is the best way to achieve this?
I need to end up with a MemoryStream as my next step is to deserialize it.
You'd have to use a GZipStream on the content of your file.
So basically it should be like this:
string fn = @"c:\MyZippedBinaryFile.GZ";
byte[] ba = File.ReadAllBytes(fn);
using (MemoryStream msReader = new MemoryStream(ba))
using (GZipStream zipStream = new GZipStream(msReader, CompressionMode.Decompress))
{
// Read from zipStream instead of msReader
}
To account for the valid comment by flindenberg, you can also open the file directly without having to read the entire file into memory first:
string fn = @"c:\MyZippedBinaryFile.GZ";
using (FileStream stream = File.OpenRead(fn))
using (GZipStream zipStream = new GZipStream(stream, CompressionMode.Decompress))
{
// Read from zipStream instead of stream
}
You need to end up with a memory stream? No problem:
string fn = @"c:\MyZippedBinaryFile.GZ";
using (FileStream stream = File.OpenRead(fn))
using (GZipStream zipStream = new GZipStream(stream, CompressionMode.Decompress))
using (MemoryStream ms = new MemoryStream())
{
    zipStream.CopyTo(ms);
    ms.Seek(0, SeekOrigin.Begin); // don't forget to rewind the stream!
    // Read from ms
}

Saving a MemoryStream to a FileStream as ASCII

I have a memory stream that writes to a file stream. I need to change the code below to save the memory stream as ASCII.
using (var ms = new MemoryStream())
{
    //...DownloadFile(file, ms);
    using (var fs = File.Create(file))
    {
        ms.WriteTo(fs);
    }
}
Use WriteAllBytes:
File.WriteAllBytes(file, ms.ToArray());
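
Note that WriteTo and WriteAllBytes both copy raw bytes and perform no character conversion. If the downloaded content is text that genuinely needs to be re-encoded as ASCII, one approach is an explicit conversion, sketched here under the assumption that the source bytes are UTF-8 text:

using (var ms = new MemoryStream())
{
    //...DownloadFile(file, ms);
    string text = Encoding.UTF8.GetString(ms.ToArray()); // assumed source encoding
    byte[] ascii = Encoding.ASCII.GetBytes(text);        // non-ASCII characters become '?'
    File.WriteAllBytes(file, ascii);
}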

GZipStream not reading the whole file

I have some code that downloads gzipped files and decompresses them. The problem is that I can't get it to decompress the whole file; it only reads the first 4096 bytes and then about 500 more.
Byte[] buffer = new Byte[4096];
int count = 0;
FileStream fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read);
FileStream fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None);
GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, true);

// Read from the gzip stream
while ((count = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Write to the output file
    fileOutput.Write(buffer, 0, count);
}

// Close the streams
...
I've checked the downloaded file; it's 13MB when compressed, and contains one XML file. I've manually decompressed the XML file, and the content is all there. But when I do it with this code, it only outputs the very beginning of the XML file.
Anyone have any ideas why this might be happening?
Try not leaving the GZipStream open:
GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, false);
or
GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress);
I ended up using a gzip executable to do the decompression instead of a GZipStream. It can't handle the file for some reason, but the executable can.
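For reference, shelling out to a gzip binary from C# can be done with System.Diagnostics.Process; this is only a sketch and assumes a gzip executable is on the PATH:

var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = "gzip",
    Arguments = "-d -k input.gzip", // -d decompress, -k keep the compressed original
    UseShellExecute = false,
    RedirectStandardError = true
};
using (var proc = System.Diagnostics.Process.Start(psi))
{
    proc.WaitForExit();
    if (proc.ExitCode != 0)
        Console.Error.WriteLine(proc.StandardError.ReadToEnd());
}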
The same thing happened to me: in my case it read only up to six lines and then reached the end of the file. I realized that although the extension was .gz, the file was compressed by another algorithm that GZipStream does not support, so I used the SevenZipSharp library instead and it worked. This is my code:
using (var input = File.OpenRead(lstFiles[0]))
{
    using (var ds = new SevenZipExtractor(input))
    {
        //ds.ExtractionFinished += DsOnExtractionFinished;
        var mem = new MemoryStream();
        ds.ExtractFile(0, mem);
        using (var sr = new StreamReader(mem))
        {
            var iCount = 0;
            String line;
            mem.Position = 0;
            while ((line = sr.ReadLine()) != null && iCount < 100)
            {
                iCount++;
                LstOutput.Items.Add(line);
            }
        }
    }
}
Are you calling Close or Flush on fileOutput? (Or just wrap it in a using block, which is recommended practice.) If you don't, the file might not be flushed to disk when your program ends.
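
For illustration, the original loop with every stream wrapped in a using block would look like this (a sketch of the suggested fix, with CopyTo in place of the manual read loop):

using (var fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read))
using (var fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None))
using (var gzipStream = new GZipStream(fileInput, CompressionMode.Decompress))
{
    gzipStream.CopyTo(fileOutput); // disposal flushes and closes all three streams
}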
