I'm using SimpleITK to read a DICOM file and save a particular slice as a PNG. I can write a new DICOM file back to disk fine, and it looks as expected. But whenever I try to save the slice in any other format, it's greatly corrupted; by that I mean it looks nothing like the input, it's completely garbled.
Here is the code:
var imageReader = new ImageFileReader();
imageReader.SetOutputPixelType(PixelIDValueEnum.sitkUInt8);
var dicomFileNames = ImageSeriesReader.GetGDCMSeriesFileNames(@"D:\Study");
imageReader.SetFileName(dicomFileNames[255]);
var image = imageReader.Execute();
var fileWriter = new ImageFileWriter();
fileWriter.SetFileName("slice.png");
fileWriter.Execute(image);
Getting the image's buffer and using it to create a Bitmap suffers from the same problem. Reading the whole DICOM series (my eventual goal) instead of one file, and extracting a slice from the 3D volume that way, has the same issue as well.
What am I missing?
EDIT: Using PixelIDValueEnum.sitkUInt16 greatly improves the output of ImageFileWriter, although the contrast is off and some detail is lost. I still cannot convert the buffer to a Bitmap and save that as a PNG; this code still produces corrupted data:
var size = image.GetSize();
var length = (int)(size[0] * size[1]) * 2;
var buffer = image.GetBufferAsUInt16();
var rgbValues = new byte[length];
Marshal.Copy(buffer, rgbValues, 0, length);
var newBitmap = new Bitmap((int)image.GetWidth(), (int)image.GetHeight(), (int)image.GetWidth(), PixelFormat.Format16bppArgb1555, buffer);
newBitmap.Save(@"C:\test.png", ImageFormat.Png);
I have tried every PixelFormat.Format16bpp* value without success. Some data must be getting lost, because the output of the ImageFileWriter is more than 50% larger than the file I get when I save the bitmap.
Here is the bad BitMap:
Most medical images are 16 bits deep (short or unsigned short), not 8 bits (sitkUInt8). Try a few other pixel formats. If that does not help, attach an extracted PNG slice; that will allow more/better advice.
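If the eventual goal is an 8-bit PNG, one option is to read the slice at its native depth and rescale the intensities before casting down. A minimal sketch using SimpleITK's RescaleIntensity and Cast filters (the slice index and file names come from the question's code; proper window/level handling is left out):
var imageReader = new ImageFileReader();
imageReader.SetFileName(dicomFileNames[255]);
var image = imageReader.Execute();
// Map the full intensity range to 0..255, then cast to 8-bit for PNG output.
var rescaled = SimpleITK.RescaleIntensity(image, 0, 255);
var image8 = SimpleITK.Cast(rescaled, PixelIDValueEnum.sitkUInt8);
var fileWriter = new ImageFileWriter();
fileWriter.SetFileName("slice.png");
fileWriter.Execute(image8);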
I use Magick.NET for processing files, and now I need to convert raw image formats (such as .dng, .3fr, .cr2, .raw, .ptx, etc.) to a simple JPG to generate previews on the site. I found an example in the documentation here,
but it's not working. I put dcraw.exe next to the Magick.NET DLLs, but I always get an error at this point:
//code
using (var originalImg = new MagickImage(abspath))...
//text of error
InnerException = {"iisexpress.exe: FailedToExecuteCommand `dcraw.exe -6 -w -O \"C:/Users/A8F50~1.CHE/AppData/Local/Temp/magick-29445L9OLy_DVIQq.ppm\" \"C:/Users/A8F50~1.CHE/AppData/Local/Temp/magick-294458yvxz2HRaYX\"' (-1) @ error/delegate.c/ExternalDelegateCommand/484"}
Message = "iisexpress.exe: UnableToOpenBlob 'C:/Users/A8F50~1.CHE/AppData/Local/Temp/magick-29445L9OLy_DVIQq.ppm': No such file or directory @ error/blob.c/OpenBlob/2684"
Has anyone faced such a problem? I have no idea why this happens. I've wasted a lot of time on it and I'll be glad if you can help me with this problem.
You have to use a FileStream object instead of the file path... That worked for me.
Try this:
using (FileStream fileStream = new FileStream(@"C:\QRcodes\IMG_9540.CR2", FileMode.Open))
using (MagickImage magickImage = new MagickImage(fileStream))
{
byte[] imageBytes = magickImage.ToByteArray();
Console.WriteLine("bytes in image: " + imageBytes.Length);
Console.ReadKey();
}
greetings
Markus
This is not related to the file stream or the actual physical path; you can call the MagickImage constructor with a physical path. The problem, and the error message you received, means that your code cannot find the file. It may be because of permissions, the way you pass the path of the file, or anything else that prevents your code from reaching the file.
One other thing: converting the MagickImage to a byte array is highly time-consuming and a heavy process for the system. If you try an image with 20000 pixels you will see the result.
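If all you need is a JPEG preview for the site, it may also be cheaper to let Magick.NET write the output directly instead of round-tripping through ToByteArray(). A rough sketch (the paths and the 800x600 bounding box are placeholders, and decoding .CR2 still depends on the raw/dcraw delegate being reachable):
using (var fileStream = new FileStream(@"C:\QRcodes\IMG_9540.CR2", FileMode.Open, FileAccess.Read))
using (var magickImage = new MagickImage(fileStream))
{
    // Shrink to fit within 800x600 (aspect ratio is preserved) and write the JPEG straight to disk.
    magickImage.Resize(800, 600);
    magickImage.Format = MagickFormat.Jpeg;
    magickImage.Write(@"C:\QRcodes\preview.jpg");
}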
I am trying to use the LibTiff.Net library, rewriting the TiffCP merge tool API to use memory streams.
This library has a Tiff class, and by passing a stream to this class, it can merge TIFF images into that stream.
For testing, I passed in a FileStream and I got what I wanted - it merged, and I was able to see a multipage TIFF.
But when I pass a MemoryStream, I can verify that the page data is being added to the stream as I loop through, but when I write it to a file at the end, I can see only the 1st page.
var mso = new MemoryStream();
var fso = new FileStream(@"C:\test\ttest.tif", FileMode.OpenOrCreate); // This works
var tso = new TiffStream(); // stream adapter expected by Tiff.ClientOpen
using (Tiff outImage = Tiff.ClientOpen("custom", "w", mso, tso))
{
//...
//..
System.Drawing.Image tiffImg = System.Drawing.Image.FromStream(mso, true);
tiffImg.Save(@"C:\test\test2.tiff", System.Drawing.Imaging.ImageFormat.Tiff);
tiffImg.Dispose();
//..
//..
}
P.S.: I need it in a MemoryStream because of folder permissions on the servers plus vendor API reasons.
You are probably using the memory stream before the data is actually written into it.
Please call the Tiff.Flush() method before accessing the data in the memory stream, and make sure you call the Tiff.WriteDirectory() method for each page you create.
EDIT:
Please also take a look at Bob Powell's article on Generating Multi-Page TIFF files. The article shows how to use EncoderParameters to actually generate a multipage TIFF.
Using
tiffImg.Save(@"C:\test\test2.tiff", System.Drawing.Imaging.ImageFormat.Tiff);
you are probably saving only the first frame.
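One way to avoid System.Drawing entirely (Image.Save with ImageFormat.Tiff keeps only the current frame) is to flush the Tiff, let it close, and then dump the MemoryStream's bytes to disk as-is. A rough sketch, assuming each page in the elided part of your loop is finished with WriteDirectory():
using (var mso = new MemoryStream())
{
    var tso = new TiffStream();
    using (Tiff outImage = Tiff.ClientOpen("custom", "w", mso, tso))
    {
        // ... write each page, calling outImage.WriteDirectory() after every page ...
        outImage.Flush(); // make sure all buffered data reaches the MemoryStream
    }
    // Disposing the Tiff finalizes the file; write the raw multipage TIFF bytes without re-encoding.
    File.WriteAllBytes(@"C:\test\test2.tiff", mso.ToArray());
}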
I want to know how many frames a TIFF picture contains. However, to minimize the execution time, I would like to know this information without actually loading the full file. How can I achieve this?
For your information, I am using the following code right now:
FileStream mystream = new FileStream("mypicture.tif", FileMode.Open);
Image _currentImg = Image.FromStream(mystream);
mystream.Close();
FrameDimension myframeDimensions = new FrameDimension(_currentImg.FrameDimensionsList[0]);
Int32 numberOfFrames = _currentImg.GetFrameCount(myframeDimensions);
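If you want to skip GDI+ decoding completely, one option is to walk the TIFF IFD chain yourself; each IFD in the file is one frame, and only the small directory headers need to be read. A rough sketch, assuming a classic (non-BigTIFF) file and a little-endian host:
using System;
using System.IO;

static int CountTiffFrames(string path)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (var reader = new BinaryReader(fs))
    {
        bool bigEndian = new string(reader.ReadChars(2)) == "MM"; // "II" = little-endian, "MM" = big-endian

        ushort ReadUInt16() { var b = reader.ReadBytes(2); if (bigEndian) Array.Reverse(b); return BitConverter.ToUInt16(b, 0); }
        uint ReadUInt32() { var b = reader.ReadBytes(4); if (bigEndian) Array.Reverse(b); return BitConverter.ToUInt32(b, 0); }

        if (ReadUInt16() != 42)
            throw new InvalidDataException("Not a classic TIFF file.");

        uint ifdOffset = ReadUInt32(); // offset of the first IFD
        int frames = 0;
        while (ifdOffset != 0)
        {
            frames++;
            fs.Seek(ifdOffset, SeekOrigin.Begin);
            ushort entryCount = ReadUInt16();
            fs.Seek(entryCount * 12, SeekOrigin.Current); // skip the 12-byte directory entries
            ifdOffset = ReadUInt32(); // offset of the next IFD; 0 means this was the last one
        }
        return frames;
    }
}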
Argh, today is the day of stupid problems and me being an idiot.
I have an application which creates a zip file containing some JPEGs from a certain directory. I use this code in order to:
read all files from the directory
append each of them to a ZIP file
using (var outStream = new FileStream("Out2.zip", FileMode.Create))
{
using (var zipStream = new ZipOutputStream(outStream))
{
foreach (string pathname in pathnames)
{
byte[] buffer = File.ReadAllBytes(pathname);
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
zipStream.PutNextEntry(entry);
zipStream.Write(buffer, 0, buffer.Length);
}
}
}
All works well under Windows; when I open the file, e.g. with WinRAR, the files are extracted. But as soon as I try to unzip my archive on Mac OS X, it only creates a .cpgz file. Pretty useless.
A normal .zip file created manually with the same files on Windows is extracted without any problems on Windows and Mac OS X.
I found the above code on the Internet, so I am not absolutely sure the whole thing is correct. I also wonder whether it is right to use zipStream.Write() to write the file data directly to the stream.
I got the exact same problem today. I tried implementing the CRC stuff as proposed, but it didn't help.
I finally found the solution on this page: http://community.sharpdevelop.net/forums/p/7957/23476.aspx#23476
As a result, I just had to add this line in my code:
oZIPStream.UseZip64 = UseZip64.Off;
And the file opens up as it should on MacOS X :-)
Cheers
fred
I don't know for sure, because I am not very familiar with either SharpZipLib or OSX, but I still might have some useful insight for you.
I've spent some time wading through the zip spec, and actually I wrote DotNetZip, which is a zip library for .NET, unrelated to SharpZipLib.
Currently on the user forums for DotNetZip, there's a discussion going on about zip files generated by DotNetZip that cannot be read on OSX. One of the people using the library is having a problem that seems similar to what you are seeing. Except I have no idea what a .cpgz file is.
We tracked it down, a little. At this point the most promising theory is that OSX does not like "bit 3" in the "general purpose bitfield" in the header of each zip entry.
Bit 3 is not new. PKWare added bit 3 to the spec 17 years ago. It was intended to support streaming generation of archives, in the way that SharpZipLib works. DotNetZip also has a way to produce a zipfile as it is streamed out, and it will also set bit-3 in the zip file if used in this way, although normally DotNetZip will produce a zipfile with bit-3 unset in it.
From what we can tell, when bit 3 is set, the OSX zip reader (whatever it is - like I said I'm not familiar with OSX) chokes on the zip file. The same zip contents produced without bit 3, allows the zip file to be opened. Actually it's not as simple as just flipping one bit - the presence of the bit signals the presence of other metadata. So I am using "bit 3" as a shorthand for all that.
So the theory is that bit 3 causes the problem. I haven't tested this myself. There's been some impedance mismatch on the communication with the person who has the OSX machine - so it is unresolved as yet.
But, if this theory holds, it would explain your situation: that WinRar and any Windows machine can open the file, but OSX cannot.
On the DotNetZip forums, we had a discussion about what to do about the problem. As near as I can tell, the OSX zip reader is broken, and cannot handle bit 3, so the workaround is to produce a zip file with bit 3 unset. I don't know if SharpZipLib can be convinced to do that.
I do know that if you use DotNetZip, and use the normal ZipFile class, and save to a seekable stream (like a filesystem file), you will get a zip that does not have bit 3 set. If the theory is correct, it should open with no problem on the Mac, every time. This is the result the DotNetZip user has reported. It's just one result so not generalizable yet, but it looks plausible.
example code for your scenario:
using (ZipFile zip = new ZipFile())
{
zip.AddFiles(pathnames);
zip.Save("Out2.zip");
}
Just for the curious, in DotNetZip you will get bit 3 set if you use the ZipFile class and save it to a nonseekable stream (like ASPNET's Response.OutputStream) or if you use the ZipOutputStream class in DotNetZip, which always writes forward only (no seeking back).
I think SharpZipLib's ZipOutputStream is also always "forward only."
So, I searched for a few more examples on how to use SharpZipLib and I finally got it to work on Windows and os x. Basically I added the "Crc32" of the file to the zip archive. No idea what this is though.
Here is the code that worked for me:
using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
using (var zipStream = new ZipOutputStream(outStream))
{
Crc32 crc = new Crc32();
foreach (string pathname in pathnames)
{
byte[] buffer = File.ReadAllBytes(pathname);
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
entry.Size = buffer.Length;
crc.Reset();
crc.Update(buffer);
entry.Crc = crc.Value;
zipStream.PutNextEntry(entry);
zipStream.Write(buffer, 0, buffer.Length);
}
zipStream.Finish();
// I don't think this is required at all
zipStream.Flush();
zipStream.Close();
}
}
Explanation from cheeso:
CRC is Cyclic Redundancy Check - it's a checksum on the entry data. Normally the header for each entry in a zip file contains a bunch of metadata, including some things that cannot be known until all the entry data has been streamed - CRC, Uncompressed size, and compressed size. When generating a zipfile through a streamed output, the zip spec allows setting a bit (bit 3) to specify that these three data fields will immediately follow the entry data.
If you use ZipOutputStream, normally as you write the entry data, it is compressed and a CRC is calculated, and the 3 data fields are written immediately after the file data.
What you've done is streamed the data twice - the first time implicitly, as you calculate the CRC on the file before writing it. If my theory is correct, what is happening is this: when you provide the CRC to the zipStream before writing the file data, the CRC can appear in its normal place in the entry header, which keeps OSX happy. I'm not sure what happens to the other two quantities (compressed and uncompressed size).
I had exactly the same problem; my mistake (which is in your example code as well) was that I didn't supply the file length for each entry.
Example code:
...
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
var fileInfo = new FileInfo(pathname);
entry.Size = fileInfo.Length;
...
I was separating the folder names with a backslash... when I changed this to a forward slash it worked!
What's going on with the .cpgz file is that Archive Utility is being launched by a file with a .zip extension. Archive Utility examines the file and thinks it isn't compressed, so it's compressing it. For some bizarre reason, .cpgz (CPIO archiving + gzip compression) is the default. You can set a different default in Archive Utility's Preferences.
If you do indeed discover this is a problem with OS X's zip decoder, please file a bug. You can also try using the ditto command-line tool to unpack it; you may get a better error message. Of course, OS X also ships unzip, the Info-ZIP utility, but I'd expect that to work.
I agree with Cheeso's answer; however, if the input file size is greater than 2 GB, then byte[] buffer = File.ReadAllBytes(pathname); will throw an IO exception.
So I modified Cheeso's code and it works like a charm for all files.
long maxDataToBuffer = 104857600; // 100 MB
using (var outStream = new FileStream("Out3.zip", FileMode.Create))
{
using (var zipStream = new ZipOutputStream(outStream))
{
Crc32 crc = new Crc32();
foreach (string pathname in pathnames)
{
long tempBuffLength = maxDataToBuffer;
FileStream fs = System.IO.File.OpenRead(pathname);
ZipEntry entry = new ZipEntry(Path.GetFileName(pathname));
entry.DateTime = now;
entry.Size = fs.Length;
crc.Reset();
long totalBuffLength = 0;
if (fs.Length <= tempBuffLength) tempBuffLength = fs.Length;
byte[] buffer = null;
while (totalBuffLength < fs.Length)
{
if ((fs.Length - totalBuffLength) <= tempBuffLength)
tempBuffLength = (fs.Length - totalBuffLength);
totalBuffLength += tempBuffLength;
buffer = new byte[tempBuffLength];
fs.Read(buffer, 0, buffer.Length);
crc.Update(buffer, 0, buffer.Length);
buffer = null;
}
entry.Crc = crc.Value;
zipStream.PutNextEntry(entry);
tempBuffLength = maxDataToBuffer;
fs.Position = 0; // rewind for the second pass instead of opening the file a second time
totalBuffLength = 0;
if (fs.Length <= tempBuffLength) tempBuffLength = fs.Length;
buffer = null;
while (totalBuffLength < fs.Length)
{
if ((fs.Length - totalBuffLength) <= tempBuffLength)
tempBuffLength = (fs.Length - totalBuffLength);
totalBuffLength += tempBuffLength;
buffer = new byte[tempBuffLength];
fs.Read(buffer, 0, buffer.Length);
zipStream.Write(buffer, 0, buffer.Length);
buffer = null;
}
fs.Close();
}
zipStream.Finish();
// I don't think this is required at all
zipStream.Flush();
zipStream.Close();
}
}
I had a similar problem, but on Windows 7. I updated to the (as of this writing) latest version of SharpZipLib, 0.86.0.518. From then on I could no longer decompress any ZIP archives created with the code that had been working so far.
The error messages were different depending on the tool I tried to extract with:
Unknown compression method.
Compressed size in local header does not match that of central directory header in new zip file.
What did the trick was to remove the CRC calculation as mentioned here: http://community.sharpdevelop.net/forums/t/8630.aspx
So I removed this line:
entry.Crc = crc.Value;
And from then on I could again unzip the ZIP archives with any third party tool. I hope this helps someone.
There are two things:
Ensure your underlying output stream is seekable; otherwise SharpZipLib won't be able to back up and fill in any ZipEntry fields you omitted (size, CRC, compressed size, ...), and it is forced to set "bit 3". The background has been explained pretty well in the previous answers.
Fill in ZipEntry.Size, or explicitly set stream.UseZip64 = UseZip64.Off. The default is to conservatively assume the stream could be very large, and unzipping then requires "PK 4.5" support. A sketch combining both points follows.
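For reference, a minimal sketch of the original loop with both points applied (the file name, pathnames, and the now variable are taken from the question's code):
using (var outStream = new FileStream("Out2.zip", FileMode.Create)) // seekable, so headers can be patched in place
using (var zipStream = new ZipOutputStream(outStream))
{
    zipStream.UseZip64 = UseZip64.Off; // avoid requiring "PK 4.5" support in the reader
    foreach (string pathname in pathnames)
    {
        byte[] buffer = File.ReadAllBytes(pathname);
        var entry = new ZipEntry(Path.GetFileName(pathname))
        {
            DateTime = now,
            Size = buffer.Length // known up front, so bit 3 is not needed
        };
        zipStream.PutNextEntry(entry);
        zipStream.Write(buffer, 0, buffer.Length);
    }
}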
I encountered weird behavior when the archive is empty (no entries inside it): it cannot be opened on a Mac and generates a .cpgz only. The idea was to put a dummy .txt file in it when there are no files to archive.