When I do a
b.Save(outputFilename, ImageFormat.Bmp);
where b is a 16-bit bitmap it is saved with bitfields compression. How can I make C# save without using any compression?
This is what I did with the links that @Ben Voigt posted:
ImageCodecInfo myImageCodecInfo;
Encoder myEncoder;
EncoderParameter myEncoderParameter;
EncoderParameters myEncoderParameters;

myEncoder = Encoder.Compression;
myImageCodecInfo = GetEncoderInfo("image/bmp");
myEncoderParameters = new EncoderParameters(1);
myEncoderParameter = new EncoderParameter(myEncoder, (long)EncoderValue.CompressionNone);
myEncoderParameters.Param[0] = myEncoderParameter;
b.Save(outputFilename, myImageCodecInfo, myEncoderParameters);
When I pass an 8-bit bitmap no compression is used. But when I pass a 16-bit RGB bitmap it still uses bitfields compression.
Bitmap in Windows means DIB.
"A device-independent bitmap (DIB) is a format used to define device-independent bitmaps in various color resolutions. The main purpose of DIBs is to allow bitmaps to be moved from one device to another (hence, the device-independent part of the name). A DIB is an external format, in contrast to a device-dependent bitmap, which appears in the system as a bitmap object (created by an application...). A DIB is normally transported in metafiles (usually using the StretchDIBits() function), BMP files, and the Clipboard (CF_DIB data format)."
And as we have already discussed in the comments, BITFIELDS compression is only used in 16- and 32-bit DIBs, and simply describes how the data is packed. In the case of a 16-bit DIB it can define the resolution of the green channel (e.g. 5:6:5 or 5:5:5), whereas for 32-bit DIBs it defines whether the data is stored in RGB or BGR order (and, when using a BMIHv4/5 header, whether the alpha channel is used).
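To make the "packing, not compression" point concrete, here is a language-agnostic sketch (in Python for brevity; the bit operations translate directly to C#) of the 5:6:5 layout that the BITFIELDS masks describe. The function names are illustrative, not part of any API:

```python
def pack_565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit 5:6:5 word.

    Lossy by nature: the low 3 (or 2) bits of each channel are dropped,
    which is exactly why 24-bit data cannot survive a 16-bit DIB intact.
    """
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_565(v):
    """Expand a 5:6:5 word back to approximate 8-bit channels."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    # Replicate the high bits into the low bits so full-scale values
    # map back to 255 rather than 248.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))
```

Note that no data is being compressed here; the BITFIELDS header entry merely records which bits of the 16-bit word belong to which channel.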
There is only one reason for this: to keep the BMP device independent, i.e. independent of the device it may be used on. In other words, according to Windows the image is always kept in DIB format, and the BITFIELDS "compression" entry is simply part of describing that format.
EncoderParameters codecParams = new EncoderParameters(1);
codecParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
b.Save(outputFilename, myImageCodecInfo, codecParams);
This should ensure your quality.
There's an overload of the Save function that accepts an EncoderParameters argument, via which you can control compression.
See http://msdn.microsoft.com/en-us/library/ytz20d80.aspx and http://msdn.microsoft.com/en-us/library/system.drawing.imaging.encoder.compression.aspx
It appears that your real need is not to "save a 16-bit bitmap without bitfields compression", but to avoid lossy compression.
The problem is that the 16-bit bitmap format by its nature uses bitfields (something like 5 bits for red, 6 bits for green, and another 5 bits for blue), so some information will be lost if the original image had a pixel format wider than 16 bits. The most frequently used image format nowadays is RGB, which uses 8 bits per channel, so it requires 3 × 8 = 24 bits per pixel. You simply cannot fit 24 bits of information into 16 bits; some bits will fall off, it is a fact of life.
So, your problem is unsolvable for as long as a 16-bit bitmap is a requirement. The only way around it is to refrain from using 16-bit bitmaps.
Note:
The reason why bitfield compression is used even though you have specified "no compression" is because bitfield compression is not really compression; it is just a pixel format, so it should be called "bitfield format" rather than "bitfield compression". RLE is a real (albeit extremely simple) compression method.
RLE stands for "Run-Length Encoding" and it works as follows: a run of N identical pixel values is substituted with the count N followed by a single occurrence of the pixel value. So, RLE is absolutely lossless.
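A minimal run-length encoder/decoder sketch (in Python for brevity; the function names are illustrative) makes the losslessness obvious, since the decoder reproduces the input byte for byte:

```python
def rle_encode(data):
    """Encode a byte string as (count, value) pairs, capping runs at 255."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

def rle_decode(pairs):
    """Expand (count, value) pairs back into the original byte string."""
    out = bytearray()
    for count, value in pairs:
        out.extend([value] * count)
    return bytes(out)
```

Real BMP RLE variants (RLE4/RLE8) add escape codes for row ends and absolute runs, but the round-trip principle is the same.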
I have trouble converting a byte array to an image using common methods, for example:
using (var ms = new MemoryStream(byteArrayIn))
{
    return Image.FromStream(ms); // throws an exception
}
and
System.Drawing.ImageConverter converter = new System.Drawing.ImageConverter();
Image img = (Image)converter.ConvertFrom(ImgInBytes); // throws an exception
The exception is
Parameter is not valid
Moreover, I used a 4-byte array initialized to zero.
I expected it to produce a black image, but it didn't.
I used a 4-byte array initialized to zero.
The API expects a valid image stream; four zero bytes are not a valid image stream. The method is going to inspect the stream, trying to identify the image format (streams are broadly comparable to files, except without any concept of a filename); it isn't just looking for pixel data. That means it is going to look for an image header that it recognizes (for example, PNG always starts with the byte values 137 80 78 71 13 10 26 10); once it has identified the format, it will want to decode the image header (dimensions, color depth, possibly a palette, etc.), and only then might there be some pixel data, or there might not be, if it is a vector image format. So there is a lot more to consider than just some pixel data.
If you want a black image: perhaps start with Bitmap - maybe see this answer
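The header check described above can be sketched in a few lines (Python for brevity; GDI+ does something more elaborate internally, so this is only an illustration of why four zero bytes fail before any pixels are read). The function name is made up for the example:

```python
# Magic numbers for a few common formats; the PNG one is the
# 137 80 78 71 13 10 26 10 sequence mentioned above.
PNG_SIGNATURE = bytes([137, 80, 78, 71, 13, 10, 26, 10])
BMP_SIGNATURE = b"BM"
JPEG_SIGNATURE = b"\xff\xd8\xff"

def sniff_format(data):
    """Return a format name if a known magic number matches, else None."""
    if data.startswith(PNG_SIGNATURE):
        return "png"
    if data.startswith(BMP_SIGNATURE):
        return "bmp"
    if data.startswith(JPEG_SIGNATURE):
        return "jpeg"
    return None
```

A stream of four zero bytes matches none of these signatures, which is why the loader gives up with "Parameter is not valid" instead of guessing at pixel data.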
I have a sample WPF app and am wondering why BMP loads faster than PNG. Here is the exact setup:
- Windows 7
- Visual Studio 2013
- landscape.png, 1920x1080, 2.4 MB
- landscape.bmp, 1920x1080, 5.6 MB
- WPF app, code in the MainWindow.xaml.cs constructor, no other code before it
Code:
var sw = Stopwatch.StartNew();
var Bitmap1 = new Bitmap("landscape.bmp");
long t1 = sw.ElapsedMilliseconds;
sw.Restart();
var Bitmap2 = new Bitmap("landscape.png");
long t2 = sw.ElapsedMilliseconds;
The BMP loads in around 6 ms, while the PNG needs 40 ms.
Why is that so?
First, we need to understand how digital images are stored and shown. A digital image is represented as a matrix where each element is the color of a pixel. In a grayscale image, each element is a uint8 (unsigned 8-bit integer) between 0 and 255, or in some cases an int8 (signed 8-bit integer) between -128 and 127. If the element is 0 (or -128 in the int8 version) the color is solid black, and if it is 255 (or 127 in the int8 version) the color is solid white.
For RGB images, each element of that matrix takes 24 bits, or 3 bytes, to store (one byte per color channel). A very common resolution for digital cameras and smartphones is 3264 x 2448, i.e. an 8-megapixel camera. Now imagine we want to save a matrix of 3264 rows, each with 2448 elements of 3 bytes: we need about 24 megabytes to store that image, which is not very efficient for posting on the internet, transferring, or most other purposes. That is why we compress the image. We can choose JPEG, a lossy compression method, which means we lose some quality; or we can choose a lossless method like PNG, which gives a lower compression ratio but loses no quality.
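The storage arithmetic above is worth checking explicitly; a quick sketch (Python used here only as a calculator):

```python
# Uncompressed size of an 8-megapixel RGB image: one byte per channel,
# three channels per pixel.
width, height, bytes_per_pixel = 3264, 2448, 3
size_bytes = width * height * bytes_per_pixel
size_mib = size_bytes / (1024 * 1024)
# size_bytes comes out at 23,970,816 bytes, i.e. roughly 23 MiB --
# the "about 24 megabytes" figure quoted above.
```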
Whether we choose to compress the image or not, when we want to see it we can only show the uncompressed version. If the image is not compressed at all, there is no problem: we show exactly what is stored. But if it is compressed, we first have to decode (uncompress) it.
With all that said, let's answer the question. BMP is a format for more or less raw images: either no compression is used at all, or far simpler compression than in PNG or JPEG, so the file is bigger. When you show a BMP image, there is more data to read into memory because the file is bigger, but once read it can be shown much faster because little or no decoding is required. When you show a PNG image, the file is read into memory much faster, but compared to BMP the decoding takes more time.
If you have very slow storage, BMP images will be shown slowly.
If you have a very slow CPU, or your decoding software is inefficient, PNG images will be shown slowly.
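The trade-off can be demonstrated with any lossless codec; here is a sketch using DEFLATE (the algorithm inside PNG) via Python's zlib on synthetic data. It is not a PNG benchmark, just an illustration that compressed data is smaller on disk yet requires an extra, perfectly lossless decode step:

```python
import zlib

# Synthetic "image" data with plenty of repetition, like flat areas of a photo.
raw = bytes(range(256)) * 1000

# "PNG-style" storage: smaller payload, but a decode step is needed later.
compressed = zlib.compress(raw)

# "BMP-style" storage would keep `raw` as-is: bigger, but ready to display.
restored = zlib.decompress(compressed)
```

On a slow disk the large raw payload dominates; on a slow CPU the decompress call dominates, which is exactly the BMP-vs-PNG timing difference in the question.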
Our goal is to make a movie from the infrared images coming from the Kinect. We are using the AForge VideoFileWriter, and we already have working code for a normal RGB stream (RgbResolution640x480Fps30).
Looking at the documentation ( http://msdn.microsoft.com/en-us/library/microsoft.kinect.colorimageformat.aspx ), the images are in a 16-bit format, but only the first 10 bits are used? So do we have a 10-bit image format, or how does this work?
Looking at the AForge documentation, only the following formats are accepted: 24 or 32 bpp images, or grayscale 8 bpp (indexed) images. (http://www.aforgenet.com/framework/docs/html/84a560df-bfd5-e0d6-2812-f810b56a254d.htm)
Why are only 8 bpp indexed images accepted?
Is it possible to transform the 16-bit (10-bit?) images from the Kinect to an 8 bpp indexed image,
or maybe to allow AForge to accept 16-bit images as well?
Thanks!
The Kinect is supposed to produce depth (z) with 11 bits of resolution, packed into 16-bit samples. To reduce this to 8 bits you can divide the samples by 2^3 = 8, which produces blunt results, or use a tone-mapping technique like those used in HDR photography. The latter makes sense given that the Kinect does not have the same resolution for close objects as for distant ones (see this StackOverflow question), so a nonlinear mapping can be used between the 11-bit samples and the 8-bit reduced resolution, as explained in the OpenKinect wiki.
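Both mappings can be sketched in a few lines (Python for brevity; the gamma value is purely illustrative, not the OpenKinect recommendation):

```python
def linear_map(d11):
    """Blunt 11-bit -> 8-bit reduction: divide by 2^3, dropping the low bits."""
    return d11 >> 3

def tone_map(d11, gamma=0.5):
    """Nonlinear (gamma-style) tone map: spends more of the 8-bit output
    range on small depth values, i.e. on nearby objects, where the Kinect
    actually has finer resolution."""
    return round(255 * (d11 / 2047) ** gamma)
```

With gamma < 1, a mid-range depth sample lands noticeably higher in the 8-bit scale than under the linear divide, which is the extra near-field precision the answer refers to.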
On the AForge side, I'd say it is common to support 8-bit gray, 24-bit RGB (8 bits per plane), and 32-bit RGBA (8 bits per plane).
AForge can convert to an 8 bit image for you:
Bitmap grayImage8bit = AForge.Imaging.Filters.Grayscale.CommonAlgorithms.BT709.Apply(original_bitmap);
I've been using this to convert 32-bit RGBA bitmaps.
I am attempting to read a bitmap manually, so I read the bitmap file using a FileStream. There wasn't a problem until I had to deal with 24-bit bitmap files. Is there a method to read a 24-bit bitmap image into an array?
I hold an 8-bit bitmap image in a byte array like this:
byte[] fileBufferArray = new byte[fileLength];
A few options:
If you're not too worried about memory (you don't have a large number of very large bitmaps open), you can store it as 32-bit numbers instead. Often the fourth byte is then interpreted as "alpha" (a blending specifier when rendering the image on a background). Most modern image manipulation libraries treat color images this way now.
You can pack the colors into a byte array and access them individually. RGB and BGR are the two most common packing orders. Usually you also end up putting padding bytes at the end of each row so that the width in bytes lines up with DWORD (4-byte) boundaries.
You can split the image into three separate byte array 'planes', which are basically 8-bit images for Red, Green and Blue respectively. This is another common format when doing image processing, as often your filtering steps operate on channels independently.
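Option 2 is the layout an actual 24-bit BMP file uses, so here is a sketch of its two gotchas, the DWORD row padding and the BGR byte order (Python for brevity; the helper names are made up for the example):

```python
def row_stride(width_px, bits_per_pixel=24):
    """Bytes per stored row, including padding up to the next 4-byte
    (DWORD) boundary -- the standard BMP row alignment rule."""
    return ((width_px * bits_per_pixel + 31) // 32) * 4

def pixel_at(row_bytes, x):
    """Read pixel x from one padded 24-bit row; returns (r, g, b).

    BMP stores the channels in B, G, R order, so they must be swapped
    when handing the pixel to RGB-oriented code.
    """
    b, g, r = row_bytes[3 * x : 3 * x + 3]
    return (r, g, b)
```

For example, a 2-pixel-wide 24-bit row occupies 6 bytes of pixel data but 8 bytes in the file because of the padding; forgetting the stride is the classic cause of "sheared" images when reading BMPs manually.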
I need to convert 24- and 32-bit JPEG and PNG files to a lower bit depth (16). I found some code to do this, but the resulting images keep the bit depth of the original file, although their file size is lower.
Image img = Image.FromFile(filePathOriginal);
Bitmap bmp = ConvertTo16bpp(img);
EncoderParameters parameters = new EncoderParameters();
parameters.Param[0] = new EncoderParameter(Encoder.ColorDepth, 16);
bmp.Save(filePathNew, jpgCodec, parameters);
bmp.Dispose();
img.Dispose();
...
private static Bitmap ConvertTo16bpp(Image img)
{
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
Any ideas what's going wrong?
Thanks,
Frank
JPEG is a three-color format. It usually has 8 bits per color, but can have 12 or 16. 24 = 3 x 8 bits of color is therefore reasonable, but 16 or 32 is simply impossible; it just doesn't divide by three. 3 x 16 = 48 would be possible, but that's a higher color depth. JPEG is designed for photos, and it doesn't make sense to support bit depths lower than 3 x 8; there's no benefit in that.
Now, what is the 16-bit image in your code? It's an imprecise in-memory approximation of the original, using only 65536 colors. When you save that back, you get a 24-bit JPEG. Apparently your JPEG encoder doesn't know how to create a 48-bit JPEG. Even if it did, it would be a waste of bits, since the in-memory image only has 65536 colors anyway.
To summarize: what is going wrong is the task itself. There's no such thing as a 65536-color JPEG.
This question is a bit old, but I will add this for anyone searching in the future.
If it is a 32-bit file, then most likely it is in the CMYK colorspace. This is typically used for printing, but it's rare enough that many tools that use RGB and display to screens rather than print can't handle it.
You can convert it to RGB using ImageMagick:
convert imageInCMYK.jpg -colorspace RGB imageInRGB.jpg
JPEG is a three-color format and does not allow you to save at 16-bit depth, but the BMP format does.
I encountered the same problem and solved it as below. Hope it helps.
bmp.Save("test.jpeg", ImageFormat.Bmp);