ICSharpCode.SharpZipLib.Zip example with CRC variable details - C#

I am using the ICSharpCode.SharpZipLib DLL for zipping SharePoint files using C# in ASP.NET.
When I open the output.zip file, it says "zip file is either corrupted or damaged", and the CRC value for the files in output.zip shows as 000000.
How do I calculate or configure the CRC value using the SharpZipLib DLL?
Does anyone have a good example of how to do zipping using MemoryStreams?

It seems you're not creating each ZipEntry.
Here is some code that I adapted to my needs:
http://wiki.sharpdevelop.net/SharpZipLib-Zip-Samples.ashx#Create_a_Zip_fromto_a_memory_stream_or_byte_array_1
Anyway, with SharpZipLib there are many ways you can work with zip files: the ZipFile class, the ZipOutputStream, and the FastZip.
I'm using the ZipOutputStream to create an in-memory ZIP file, adding in-memory streams to it and finally flushing it to disk, and it's working quite well. Why ZipOutputStream? Because it's the only choice available if you want to specify a compression level and use Streams.
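For illustration, here is a minimal sketch of that approach (the files collection and the output path are placeholders, not from the original post):

using (var zipStream = new ZipOutputStream(File.Create("Out.zip")))
{
    zipStream.SetLevel(9); // 0-9; being able to set this is why ZipOutputStream was chosen
    foreach (string path in files)
    {
        var entry = new ZipEntry(Path.GetFileName(path));
        zipStream.PutNextEntry(entry);
        byte[] data = File.ReadAllBytes(path);
        zipStream.Write(data, 0, data.Length);
    }
    zipStream.Finish();
}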
Good luck :)

1:
You could do it manually, but the ICSharpCode library will take care of it for you. Also, something I've discovered: "zip file is either corrupted or damaged" can also be the result of not adding your zip entry name correctly (such as an entry that sits in a chain of subfolders).
2:
I solved this problem by creating a compression helper utility. I had to dynamically compose and return zip files, and temp files were not an option because the process was to be run by a web service.
The trick was to expose BeginZipUpdate(), AddEntry(), and EndZipUpdate() methods (because I made it into a utility to be invoked; you could just use the code directly if need be).
Something I've excluded from the example are checks for initialization (like calling EndZipUpdate() first by mistake) and proper disposal code (best to implement IDisposable and close your zip stream and your memory stream if applicable).
using System;
using System.Collections.Generic;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

// Backing fields for the helper (implied by the original snippet).
private MemoryStream _memoryStream;
private ZipOutputStream _zipOutputStream;
private List<string> _zipOutputStreamEntries = new List<string>();

public void BeginZipUpdate()
{
    _memoryStream = new MemoryStream(200);
    _zipOutputStream = new ZipOutputStream(_memoryStream);
}

public void EndZipUpdate()
{
    _zipOutputStream.Finish();
    _zipOutputStream.Close();
    _zipOutputStream = null;
}

// Entry name could be 'somefile.txt' or 'Assemblies\MyAssembly.dll' to indicate a folder.
// Unsure where you'd be getting your file; I'm reading the data from the database.
public void AddEntry(string entryName, byte[] bytes)
{
    ZipEntry entry = new ZipEntry(entryName);
    entry.DateTime = DateTime.Now;
    entry.Size = bytes.Length;
    _zipOutputStream.PutNextEntry(entry);
    _zipOutputStream.Write(bytes, 0, bytes.Length);
    _zipOutputStreamEntries.Add(entryName);
}
So you're actually having the ZipOutputStream write to a MemoryStream. Then, once _zipOutputStream is closed, you can return the contents of the MemoryStream.
public byte[] GetResultingZipFile()
{
    _zipOutputStream.Finish();
    _zipOutputStream.Close();
    _zipOutputStream = null;
    // MemoryStream.ToArray() still works after the stream has been closed.
    return _memoryStream.ToArray();
}
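A hypothetical usage sketch (the CompressionHelper class name and the GetFileBytes() helper are assumptions, not part of the original utility):

var helper = new CompressionHelper();
helper.BeginZipUpdate();
helper.AddEntry("somefile.txt", GetFileBytes("somefile.txt"));
helper.AddEntry(@"Assemblies\MyAssembly.dll", GetFileBytes("MyAssembly.dll"));
byte[] zipFile = helper.GetResultingZipFile();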
Just be aware of how much you want to add to a zip file (delays in processing/IO/timeouts, etc.).

Related

Compress large files using .NET Framework ZipArchive class on ASP.NET

I have code that gets all files in a directory, compresses each one, and creates a .zip file. I'm using the .NET Framework ZipArchive class in the System.IO.Compression namespace and the extension method CreateEntryFromFile. This works well except when processing large files (approximately 1 GB and up); there it throws a System.IO.Stream exception, "Stream too large".
The extension method reference on MSDN states:
When ZipArchiveMode.Update is present, the size limit of an entry is limited to Int32.MaxValue. This limit is because update mode uses a MemoryStream internally to allow the seeking required when updating an archive, and MemoryStream has a maximum equal to the size of an int.
So this explains the exception I get, but it provides no way of overcoming the limitation. How can I allow large file processing?
Here is my code; it's part of a class. Just in case: the GetDatabaseBackupFiles() and GetDatabaseCompressedBackupFiles() functions return lists of FileInfo objects that I iterate over:
public void CompressBackupFiles()
{
    var originalFiles = GetDatabaseBackupFiles();
    var compressedFiles = GetDatabaseCompressedBackupFiles();
    var pendingFiles = originalFiles.Where(c => compressedFiles.All(d => Path.GetFileName(d.Name) != Path.GetFileName(c.Name)));

    foreach (var file in pendingFiles)
    {
        var zipPath = Path.Combine(_options.ZippedBackupFilesBasePath, Path.GetFileNameWithoutExtension(file.Name) + ".zip");
        using (ZipArchive archive = ZipFile.Open(zipPath, ZipArchiveMode.Update))
        {
            archive.CreateEntryFromFile(file.FullName, Path.GetFileName(file.Name));
        }
    }
    DeleteFiles(originalFiles);
}
When you are only creating a zip file, replace ZipArchiveMode.Update with ZipArchiveMode.Create.
The update mode is meant for cases when you need to delete files from an existing archive or add new files to one.
In update mode the whole zip file is loaded into memory, which consumes a lot of memory for big files, so this mode should be avoided when possible.
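Applied to the code above, the fix is a one-line change (a sketch, assuming nothing else needs to change):

using (ZipArchive archive = ZipFile.Open(zipPath, ZipArchiveMode.Create))
{
    archive.CreateEntryFromFile(file.FullName, Path.GetFileName(file.Name));
}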

OpenXml Spreadsheet Output gives "General Error" opening in OpenOffice when containing a large number of rows

Spreadsheet output of OpenXML works in Excel (and Google Docs) but throws a runtime error in OpenOffice 4.x...
Specific error is
General Error.
General input/output error.
with no further explanation. In practice, it has only occurred for me when there were more than 40 rows in the spreadsheet; however, there did not seem to be a specific number of rows that caused the issue.
I have already created a workaround for the issue. This post is just to share my horrible, horrible solution for those that just need something.
I suspect the cause might be in the zip headers or in the zip entries themselves, and that there is either a bug in the library that writes the zip output for the System.IO.Packaging namespace (which I assume OpenXML uses) or that OpenOffice has a very simple zip reader. Maybe something with the central directory file offsets versus the compressed file size, but I did not bother to check, as I had limited time.
I may investigate further one day, or if anyone knows a quick solution, then do let me know. The files are written using the examples found on MSDN, and they do open correctly in Excel.
In the meantime, if anyone needs a band-aid for the issue, I am posting my quick fix here, since I was unable to find one myself. It expects a byte array (perhaps dumped from a MemoryStream or read from a FileStream) and outputs another byte array.
Someone clever could have it accept a Stream, seek to 0 relative to Beginning, and then read from there, perhaps writing directly to another passed-in stream. That is left as an exercise for the reader, unless someone happens to post a response that does the same.
If anyone does have a better solution, I would not mind knowing.
Uses .NET 4.5
References System.IO.Compression
using System;
using System.IO;
using System.IO.Compression;

namespace redmasq {
    public static class ExcelFileFixExample {
        public static byte[] XLSXOpenOfficePackageFix(byte[] fileData) {
            using (MemoryStream ms = new MemoryStream(fileData, false)) {
                using (ZipArchive za = new ZipArchive(ms)) {
                    using (MemoryStream ms2 = new MemoryStream()) {
                        using (ZipArchive za2 = new ZipArchive(ms2, ZipArchiveMode.Create)) {
                            foreach (ZipArchiveEntry entry in za.Entries) {
                                ZipArchiveEntry zae = za2.CreateEntry(entry.FullName, System.IO.Compression.CompressionLevel.Optimal);
                                using (Stream src = entry.Open()) {
                                    using (Stream dest = zae.Open()) {
                                        src.CopyTo(dest);
                                    }
                                }
                            }
                        }
                        return ms2.ToArray();
                    }
                }
            }
        }
    }
}
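A possible usage sketch (the file paths are placeholders):

byte[] original = File.ReadAllBytes("report.xlsx");
byte[] repaired = redmasq.ExcelFileFixExample.XLSXOpenOfficePackageFix(original);
File.WriteAllBytes("report-fixed.xlsx", repaired);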

DeflateStream CopyTo writes nothing and throws no exceptions

I've basically copied this code sample directly from MSDN with some minimal changes. The CopyTo method is silently failing, and I have no idea why. What would cause this behavior? It is being passed a 78 KB zipped folder with a single text file inside. The returned FileInfo object points to a 0 KB file. No exceptions are thrown.
public static FileInfo DecompressFile(FileInfo fi)
{
    // Get the stream of the source file.
    using (FileStream inFile = fi.OpenRead())
    {
        // Get the original file extension,
        // for example "doc" from report.doc.cmp.
        string curFile = fi.FullName;
        string origName = curFile.Remove(curFile.Length - fi.Extension.Length);

        // Create the decompressed file.
        using (FileStream outFile = File.Create(origName))
        {
            // Work-around for incompatible compression formats, found here:
            // http://george.chiramattel.com/blog/2007/09/deflatestream-block-length-does-not-match.html
            inFile.ReadByte();
            inFile.ReadByte();

            using (DeflateStream Decompress = new DeflateStream(inFile, CompressionMode.Decompress))
            {
                // Copy the decompression stream into the output file.
                Decompress.CopyTo(outFile);
                return new FileInfo(origName);
            }
        }
    }
}
In a comment you say that you are trying to decompress a zip file. The DeflateStream class cannot be used like this on a zip file. The MSDN example you mentioned uses DeflateStream to create individual compressed files and then uncompress them.
Although zip files might use the same algorithm (not sure about that), they are not just compressed versions of a single file. A zip file is a container that can hold many files and/or folders.
If you can use .NET Framework 4.5, I would suggest using the new ZipFile or ZipArchive class. If you must use an earlier framework version, there are free libraries you can use (like DotNetZip or SharpZipLib).
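For instance, with .NET 4.5 the whole method above collapses to something like this (a sketch; the target directory choice is an assumption, and ZipFile requires a reference to System.IO.Compression.FileSystem):

using System.IO;
using System.IO.Compression;

public static void ExtractZip(FileInfo fi)
{
    string targetDir = Path.Combine(fi.DirectoryName, Path.GetFileNameWithoutExtension(fi.Name));
    ZipFile.ExtractToDirectory(fi.FullName, targetDir);
}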

How to open a file from Memory Stream

Is it possible to open a file directly from a MemoryStream, as opposed to writing it to disk and doing Process.Start()? Specifically, a PDF file? If not, I guess I need to write the MemoryStream to disk (which is kind of annoying). Could someone then point me to a resource about how to write a MemoryStream to disk?
It depends on the client :) If the client will accept input from stdin, you could push the data to it. Another possibility might be to write a named-pipes server or a socket server - not trivial, but it may work.
However, the simplest option is to just grab a temp file and write to that (and delete it afterwards).
var file = Path.GetTempFileName();
using (var fileStream = File.OpenWrite(file))
{
    var buffer = memStream.GetBuffer();
    fileStream.Write(buffer, 0, (int)memStream.Length);
}
Remember to clean up the file when you are done.
Path.GetTempFileName() returns a file name with a '.tmp' extension, so you can't use Process.Start() on it directly, since that relies on the Windows file association by extension.
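A hedged workaround is to give the temp file the extension you need before launching it (a sketch; memStream and the .pdf extension are assumptions from the question):

var tempFile = Path.GetTempFileName();                // creates an empty .tmp file
var pdfPath = Path.ChangeExtension(tempFile, ".pdf");
File.Move(tempFile, pdfPath);                         // rename so the file association works
File.WriteAllBytes(pdfPath, memStream.ToArray());
System.Diagnostics.Process.Start(pdfPath);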
If by opening a file you mean something like starting Adobe Reader for PDF files, then yes, you have to write it to a file. That is, unless the application provides you with some API to do that.
One way to write a stream to a file would be:
using (var memoryStream = /* create the memory stream */)
using (var fileStream = File.OpenWrite(fileName))
{
    memoryStream.WriteTo(fileStream);
}

ZIP file created with SharpZipLib cannot be opened on Mac OS X

Argh, today is the day of stupid problems and me being an idiot.
I have an application which creates a zip file containing some JPEGs from a certain directory. I use this code in order to:
read all files from the directory
append each of them to a ZIP file
DateTime now = DateTime.Now; // 'now' was defined elsewhere in the original post

using (var outStream = new FileStream("Out2.zip", FileMode.Create))
{
    using (var zipStream = new ZipOutputStream(outStream))
    {
        foreach (string pathname in pathnames)
        {
            byte[] buffer = File.ReadAllBytes(pathname);
            ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
            entry.DateTime = now;
            zipStream.PutNextEntry(entry);
            zipStream.Write(buffer, 0, buffer.Length);
        }
    }
}
All works well under Windows; when I open the file, e.g. with WinRAR, the files are extracted. But as soon as I try to unzip my archive on Mac OS X, it only creates a .cpgz file. Pretty useless.
A normal .zip file created manually from the same files on Windows is extracted without any problems on Windows and Mac OS X.
I found the above code on the Internet, so I am not absolutely sure the whole thing is correct. I wonder whether it is right to use zipStream.Write() to write directly to the stream?
I got the exact same problem today. I tried implementing the CRC stuff as proposed, but it didn't help.
I finally found the solution on this page: http://community.sharpdevelop.net/forums/p/7957/23476.aspx#23476
As a result, I just had to add this line to my code:
oZIPStream.UseZip64 = UseZip64.Off;
And the file opens up as it should on Mac OS X :-)
Cheers
fred
I don't know for sure, because I am not very familiar with either SharpZipLib or OS X, but I still might have some useful insight for you.
I've spent some time wading through the zip spec, and actually I wrote DotNetZip, which is a zip library for .NET, unrelated to SharpZipLib.
Currently on the user forums for DotNetZip, there's a discussion going on about zip files generated by DotNetZip that cannot be read on OS X. One of the people using the library is having a problem that seems similar to what you are seeing, except I have no idea what a .cpgz file is.
We tracked it down, a little. At this point the most promising theory is that OSX does not like "bit 3" in the "general purpose bitfield" in the header of each zip entry.
Bit 3 is not new; PKWare added it to the spec 17 years ago. It was intended to support streaming generation of archives, in the way that SharpZipLib works. DotNetZip also has a way to produce a zipfile as it is streamed out, and it will likewise set bit 3 in the zip file when used in this way, although normally DotNetZip produces a zipfile with bit 3 unset.
From what we can tell, when bit 3 is set, the OS X zip reader (whatever it is - like I said, I'm not familiar with OS X) chokes on the zip file. The same zip contents produced without bit 3 allow the zip file to be opened. Actually it's not as simple as flipping one bit - the presence of the bit signals the presence of other metadata, so I am using "bit 3" as shorthand for all of that.
So the theory is that bit 3 causes the problem. I haven't tested this myself. There's been some impedance mismatch in the communication with the person who has the OS X machine, so it is unresolved as yet.
But if this theory holds, it would explain your situation: WinRAR and any Windows machine can open the file, but OS X cannot.
On the DotNetZip forums, we had a discussion about what to do about the problem. As near as I can tell, the OS X zip reader is broken and cannot handle bit 3, so the workaround is to produce a zip file with bit 3 unset. I don't know if SharpZipLib can be convinced to do that.
I do know that if you use DotNetZip, use the normal ZipFile class, and save to a seekable stream (like a filesystem file), you will get a zip that does not have bit 3 set. If the theory is correct, it should open with no problem on the Mac, every time. This is the result the DotNetZip user has reported; it's just one result, so it's not generalizable yet, but it looks plausible.
Example code for your scenario:
using (ZipFile zip = new ZipFile())
{
    zip.AddFiles(pathnames);
    zip.Save("Out2.zip");
}
Just for the curious: in DotNetZip you will get bit 3 set if you use the ZipFile class and save it to a non-seekable stream (like ASP.NET's Response.OutputStream), or if you use the ZipOutputStream class in DotNetZip, which always writes forward-only (no seeking back).
I think SharpZipLib's ZipOutputStream is also always forward-only.
So, I searched for a few more examples of how to use SharpZipLib, and I finally got it to work on Windows and OS X. Basically, I added the "Crc32" of each file to the zip archive. No idea what this is, though.
Here is the code that worked for me:
DateTime now = DateTime.Now; // 'now' was defined elsewhere in the original post

using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
    using (var zipStream = new ZipOutputStream(outStream))
    {
        Crc32 crc = new Crc32();
        foreach (string pathname in pathnames)
        {
            byte[] buffer = File.ReadAllBytes(pathname);
            ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
            entry.DateTime = now;
            entry.Size = buffer.Length;
            crc.Reset();
            crc.Update(buffer);
            entry.Crc = crc.Value;
            zipStream.PutNextEntry(entry);
            zipStream.Write(buffer, 0, buffer.Length);
        }
        zipStream.Finish();
        // I don't think this is required at all
        zipStream.Flush();
        zipStream.Close();
    }
}
Explanation from Cheeso:
CRC is Cyclic Redundancy Check - it's a checksum of the entry data. Normally the header for each entry in a zip file contains a bunch of metadata, including some things that cannot be known until all the entry data has been streamed - the CRC, the uncompressed size, and the compressed size. When generating a zipfile through a streamed output, the zip spec allows setting a bit (bit 3) to specify that these three data fields will immediately follow the entry data.
If you use ZipOutputStream, normally as you write the entry data it is compressed and a CRC is calculated, and the three data fields are written immediately after the file data.
What you've done is stream the data twice - the first time implicitly, as you calculate the CRC on the file before writing it. If my theory is correct, what is happening is this: when you provide the CRC to the zipStream before writing the file data, this allows the CRC to appear in its normal place in the entry header, which keeps OS X happy. I'm not sure what happens to the other two quantities (compressed and uncompressed size).
I had exactly the same problem; my mistake was (and it's in your example code as well) that I didn't supply the file length for each entry.
Example code:
...
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
var fileInfo = new FileInfo(pathname);
entry.Size = fileInfo.Length;
...
I was separating the folder names with a backslash... when I changed this to a forward slash it worked!
What's going on with the .cpgz file is that Archive Utility is being launched for a file with a .zip extension. Archive Utility examines the file, decides it isn't compressed, and so compresses it. For some bizarre reason, .cpgz (CPIO archiving + gzip compression) is the default. You can set a different default in Archive Utility's Preferences.
If you do indeed discover this is a problem with OS X's zip decoder, please file a bug. You can also try using the ditto command-line tool to unpack it; you may get a better error message. Of course, OS X also ships unzip, the Info-ZIP utility, but I'd expect that to work.
I agree with Cheeso's answer; however, if the input file is larger than 2 GB, then byte[] buffer = File.ReadAllBytes(pathname); will throw an IOException.
So I modified Cheeso's code to read in chunks, and it works like a charm for all file sizes.
long maxDataToBuffer = 104857600; // 100 MB
DateTime now = DateTime.Now;

using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
    using (var zipStream = new ZipOutputStream(outStream))
    {
        Crc32 crc = new Crc32();
        foreach (string pathname in pathnames)
        {
            using (FileStream fs = File.OpenRead(pathname))
            {
                ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
                entry.DateTime = now;
                entry.Size = fs.Length;

                // First pass: compute the CRC in chunks of at most
                // maxDataToBuffer bytes, so the whole file never has
                // to be held in memory at once.
                crc.Reset();
                byte[] buffer = new byte[Math.Min(maxDataToBuffer, fs.Length)];
                int bytesRead;
                while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    crc.Update(buffer, 0, bytesRead);
                }
                entry.Crc = crc.Value;

                // Second pass: rewind and stream the file data into the archive.
                fs.Seek(0, SeekOrigin.Begin);
                zipStream.PutNextEntry(entry);
                while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    zipStream.Write(buffer, 0, bytesRead);
                }
            }
        }
        zipStream.Finish();
        zipStream.Close();
    }
}
I had a similar problem, but on Windows 7. I updated to the (as of this writing) latest version of SharpZipLib, 0.86.0.518. From then on I could no longer decompress any ZIP archives created with the code that had been working so far.
The error messages differed depending on the tool I tried to extract with:
Unknown compression method.
Compressed size in local header does not match that of central directory header in new zip file.
What did the trick was to remove the CRC calculation, as mentioned here: http://community.sharpdevelop.net/forums/t/8630.aspx
So I removed this line:
entry.Crc = crc.Value;
And from then on I could again unzip the ZIP archives with any third-party tool. I hope this helps someone.
There are two things:
1: Ensure your underlying output stream is seekable, or SharpZipLib won't be able to back up and fill in any ZipEntry fields that you omitted (size, CRC, compressed size, ...). As a result, SharpZipLib will force "bit 3" to be enabled. The background has been explained pretty well in the previous answers.
2: Fill in ZipEntry.Size, or explicitly set stream.UseZip64 = UseZip64.Off. The default is to conservatively assume the stream could be very large; unzipping the result then requires "PK 4.5" support. A sketch combining both points follows below.
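Putting both points together, a sketch of an output loop that satisfies them might look like this (the output path and the pathnames collection are placeholders):

using (var zipStream = new ZipOutputStream(File.Create("Out.zip"))) // a FileStream is seekable (point 1)
{
    zipStream.UseZip64 = UseZip64.Off; // point 2: avoid requiring PK 4.5 support
    foreach (string pathname in pathnames)
    {
        byte[] buffer = File.ReadAllBytes(pathname);
        var entry = new ZipEntry(Path.GetFileName(pathname));
        entry.Size = buffer.Length; // point 2: supply the size up front
        zipStream.PutNextEntry(entry);
        zipStream.Write(buffer, 0, buffer.Length);
    }
    zipStream.Finish();
}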
I encountered weird behavior when the archive is empty (no entries inside it): it cannot be opened on the Mac and generates only a .cpgz file. The workaround was to put a dummy .txt file in the archive when there are no files to compress.
