I am having trouble converting a byte array to an image with the common methods, for example:
using (var ms = new MemoryStream(byteArrayIn))
{
    return Image.FromStream(ms); // throws an exception
}
and
System.Drawing.ImageConverter converter = new System.Drawing.ImageConverter();
Image img = (Image)converter.ConvertFrom(ImgInBytes); // throws an exception
The exception is "Parameter is not valid".
Moreover, I used a 4-byte array initialized to all zeros.
It was supposed to show a black image, but it didn't.
I used a 4-byte array initialized to all zeros.
The API expects a valid image stream; 4 bytes of zeros are not a valid image stream. The method inspects the stream and tries to identify the image format (streams are broadly comparable to files, except without any concept of a filename) - it isn't just looking for pixel data. That means it looks for an image header that it recognizes (for example, PNG always starts with the byte values 137 80 78 71 13 10 26 10); once it has identified the format, it will want to decode the image header (dimensions, color depth, possibly a palette, etc.), and only then might there be some pixel data - or there might not, if it isn't a pixel format (it could be a vector image format). So there is a lot more to consider than just some pixel data.
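As an illustration of the kind of signature check a decoder performs first, here is a minimal sketch (the helper name is mine; the magic bytes are the ones quoted above, from the PNG specification):

static readonly byte[] PngSignature = { 137, 80, 78, 71, 13, 10, 26, 10 };

static bool LooksLikePng(byte[] data)
{
    if (data == null || data.Length < PngSignature.Length) return false;
    for (int i = 0; i < PngSignature.Length; i++)
        if (data[i] != PngSignature[i]) return false;
    return true; // signature matches; a real decoder would go on to parse the header
}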
If you want a black image: perhaps start with Bitmap - maybe see this answer
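For reference, a minimal sketch of that approach (the dimensions, pixel format, and file name here are arbitrary; requires System.Drawing and System.Drawing.Imaging):

using (var bmp = new Bitmap(640, 512, PixelFormat.Format24bppRgb))
using (var g = Graphics.FromImage(bmp))
{
    g.Clear(Color.Black); // fill every pixel with solid black
    bmp.Save("black.png", ImageFormat.Png);
}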
Related
According to the documentation, the stride for a bitmap created from a byte array needs to be:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
I have a byte array for an image of 640 x 512 pixels. The byte array is created from an ArrayBuffer coming in from a live camera. I am creating the image for the first time. The image format is PixelFormat.Format24bppRgb, so one byte each for Red, Green, and Blue, for a total of three bytes per pixel. That makes one line of the image 640 * 3 = 1,920 bytes. Checking whether this is divisible by four: 1,920 / 4 = 480.
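(As an aside, the documented rounding rule can be written as a small helper - the name is mine:)

static int ComputeStride(int widthInPixels, int bytesPerPixel)
{
    int rowBytes = widthInPixels * bytesPerPixel;
    return (rowBytes + 3) & ~3; // round up to the next four-byte boundary
}
// ComputeStride(640, 3) == 1920, already a multiple of four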
When I use a stride of 640 * 3 = 1,920 I get a bad result (a garbled image). What am I doing wrong?
I have verified that my byteArray has the right number of elements and looks correct. My code looks like this:
int stride = 1920; // 640 pixels * 3 bytes per pixel, already a multiple of four
// ptr points to the first byte of the incoming camera frame
using (Bitmap image = new Bitmap(640, 512, stride, PixelFormat.Format24bppRgb, new IntPtr(ptr)))
{
    return (Bitmap)image.Clone();
}
EDIT
It sounds like a stride of 1920 is correct, and you are all confirming that I understand how stride works. My guess at this point is that my pixel layout (ARGB vs. RGB, or something like that) is not correct.
This is how the image should look (using a free GenICam viewer):
How the "garbled" image looks:
Solved!
Understanding how stride works was the key. Since I had the correct stride, I started looking elsewhere. What happened was that I was passing an array with the wrong values to build the bitmap. The array should have held temperature values:
23.5, 25.2, 29.8 ...
The values actually being passed were in the thousands:
7501, 7568, 7592 ...
Garbage in, garbage out...
Rambling thoughts:
I need to learn how to set up unit tests for things like this. Unit tests are easy when you are testing something like a method that calculates the circumference of a circle. I guess I need to figure out a way to set up a raw data file that simulates the raw data from the camera and run it through all the conversions to get to an image.
Thank you for all your help.
I have a sample WPF app and I am wondering why BMP is loading faster than PNG. Here is the exact setup:
- Windows 7
- Visual Studio 2013
- landscape.png, 1920x1080, 2.4 MB
- landscape.bmp, 1920x1080, 5.6 MB
- WPF app, code in the MainWindow.xaml.cs constructor, with no other code before it
Code:
var sw = Stopwatch.StartNew();
var bitmap1 = new Bitmap("landscape.bmp");
long t1 = sw.ElapsedMilliseconds; // BMP load time
sw.Restart();
var bitmap2 = new Bitmap("landscape.png");
long t2 = sw.ElapsedMilliseconds; // PNG load time
So the BMP loads in around 6 ms; the PNG needs 40 ms.
Why is that?
First, we need to understand how digital images are stored and shown. A digital image is represented as a matrix where each element is the color of a pixel. If you have a grayscale image, each element is a uint8 (unsigned 8-bit integer) between 0 and 255, or in some cases an int8 (signed 8-bit integer) between -128 and 127. If the element is 0 (or -128 in the int8 version) the color is solid black, and if it is 255 (or 127 in the int8 version) the color is solid white.
For RGB images, each element of that matrix takes 24 bits, or 3 bytes, to store (one byte for each color). A very common resolution for digital cameras and smartphones is 3264 x 2448 for an 8-megapixel camera. Now imagine we want to save a 3264-row matrix where each row has 2448 elements and each element is 3 bytes: we need about 24 megabytes to store that image, which is not very efficient for posting on the internet, transferring, or most other purposes. That is why we compress the image. We can go for JPEG, which is a lossy compression method, meaning we lose some quality; or we can choose a lossless compression method like PNG, which gives a lower compression ratio but loses no quality.
Whether or not we choose to compress the image, when we want to view it we can only show the uncompressed version. If the image is not compressed at all, there is no problem; we show exactly what is there. But if it is compressed, we have to decode (uncompress) it first.
With all that said, let's answer the question. BMP is a format for more or less raw images: either no compression at all is used, or far fewer compression techniques than in PNG or JPEG, so the file size is bigger. When you want to show a BMP image, there is more data to read into memory because the file is bigger, but once it has been read it can be shown much faster because little or no decoding is required. When you want to show a PNG image, on the other hand, the file is read into memory much faster, but compared to BMP the decoding takes more time.
- If you have very slow storage, BMP images will be shown slowly.
- If you have a very slow CPU, or your decoding software is inefficient, PNG images will be shown slowly.
I'm working on a program which requires 8BIM profile information to be present in the TIFF file for processing to continue.
The sample TIFF file (which does not contain the 8BIM profile information) gains this metadata when opened and saved in Adobe Photoshop.
I'm clueless as to how to approach this problem.
The target framework is .NET 2.0.
Any information related to this would be helpful.
No idea why you need the 8BIM to be present in your TIFF file; I will just give some general information about the structure of 8BIM.
8BIM is the signature for a Photoshop Image Resource Block (IRB). This kind of information can be found in images such as TIFF, JPEG, and Photoshop's native image format, and also in non-image documents such as PDF.
The structure of the IRB is as follows:
Each IRB block starts with a 4-byte signature which translates to the string "8BIM". After that is a 2-byte unique identifier denoting the kind of resource for this IRB. For example: 0x040c for thumbnail; 0x041a for slices; 0x0408 for grid information; 0x040f for ICC profile, etc.
After the identifier is a variable-length string for the name. The first byte of the string gives its length (excluding the length byte itself); the string itself follows. The length of the whole string (including the length byte) must be even; otherwise, pad with one more byte after the string.
The next 4 bytes specify the size of the actual data for this resource block, followed by the data itself with the specified length. The total length of the data should also be even, so if the size is odd, pad with another byte. This finishes a whole 8BIM.
There can be more than one IRB, but they all conform to the same structure described above. How to interpret the data depends on the unique identifier.
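To make the layout concrete, here is a hedged sketch of serializing one IRB (the method name is my own; requires System.IO and System.Text):

static byte[] BuildIrb(ushort id, string name, byte[] data)
{
    using (MemoryStream ms = new MemoryStream())
    {
        byte[] sig = Encoding.ASCII.GetBytes("8BIM");
        ms.Write(sig, 0, sig.Length);         // 4-byte signature
        ms.WriteByte((byte)(id >> 8));        // 2-byte resource identifier,
        ms.WriteByte((byte)(id & 0xFF));      // big-endian
        byte[] nameBytes = Encoding.ASCII.GetBytes(name);
        ms.WriteByte((byte)nameBytes.Length); // length byte (excludes itself)
        ms.Write(nameBytes, 0, nameBytes.Length);
        if ((nameBytes.Length + 1) % 2 != 0)  // pad the name to an even total
            ms.WriteByte(0);
        uint size = (uint)data.Length;        // 4-byte big-endian data size
        ms.WriteByte((byte)(size >> 24));
        ms.WriteByte((byte)(size >> 16));
        ms.WriteByte((byte)(size >> 8));
        ms.WriteByte((byte)size);
        ms.Write(data, 0, data.Length);
        if (data.Length % 2 != 0)             // pad the data to an even total
            ms.WriteByte(0);
        return ms.ToArray();
    }
}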
Now let's see how IRBs are included in images. In a JPEG image, metadata can be present as one of the application (APPn) segments. Since different applications can use the same APPn segment to store their own metadata, there must be some kind of identifier to let the image reader know what kind of information is contained inside the APPn. Photoshop uses APP13 as its IRB container, and the APP13 segment contains "Photoshop 3.0" as its identifier.
A TIFF image, by contrast, is tag based and arranged in a directory structure. There is a private tag, 0x8649, called "PHOTOSHOP", used to insert IRB information.
Let's take a look at the TIFF image format (quoted from this source):
The basic structure of a TIFF file is as follows:
The first 8 bytes form the header, the first two bytes of which are
either "II" for little endian byte ordering or "MM" for big endian
byte ordering. In what follows we'll be assuming big endian ordering.
Note: any true TIFF reading software is supposed to handle both
types. The next two bytes of the header should be 0 and 42dec (2Ahex).
The remaining 4 bytes of the header is the offset from the start of
the file to the first "Image File Directory" (IFD), this normally
follows the image data it applies to. In the example below there is
only one image and one IFD.
An IFD consists of two bytes indicating the number of entries followed
by the entries themselves. The IFD is terminated with 4 byte offset to
the next IFD or 0 if there are none. A TIFF file must contain at least
one IFD!
Each IFD entry consists of 12 bytes. The first two bytes identify
the tag type (as in Tagged Image File Format). The next two bytes are
the field type (byte, ASCII, short int, long int, ...). The next four
bytes indicate the number of values. The last four bytes are either the
value itself or an offset to the values. Consider the first IFD
entry from the example given below:
0100 0003 0000 0001 0064 0000
  |    |    |         |
  |    |    |         +-- value of 100
  |    |    +-- one value
  |    +-- short int
  +-- tag
In order to read a TIFF IFD, two things are needed first:
- A way to read either big- or little-endian data.
- A random access input stream wrapping the image input, so that we can jump forward and backward while reading the directory.
Now let's assume we have a structure called Entry for each 12-byte IFD entry. We read the first two bytes of the file (endianness does not matter here, since they are either "MM" or "II") to determine the byte order. Then we can read the remaining IFD data and interpret it according to the endianness we now know.
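To make that concrete, here is a hedged sketch of reading the header and the first IFD (the Entry structure and helper names are mine; error handling is omitted):

struct Entry { public ushort Tag; public ushort Type; public uint Count; public uint ValueOrOffset; }

static ushort ReadU16(byte[] b, int off, bool bigEndian)
{
    return bigEndian ? (ushort)((b[off] << 8) | b[off + 1])
                     : (ushort)(b[off] | (b[off + 1] << 8));
}

static uint ReadU32(byte[] b, int off, bool bigEndian)
{
    return bigEndian
        ? (uint)((b[off] << 24) | (b[off + 1] << 16) | (b[off + 2] << 8) | b[off + 3])
        : (uint)(b[off] | (b[off + 1] << 8) | (b[off + 2] << 16) | (b[off + 3] << 24));
}

static Entry[] ReadFirstIfd(byte[] tiff)
{
    bool bigEndian = tiff[0] == (byte)'M';        // "MM" = big-endian, "II" = little-endian
    uint ifdOffset = ReadU32(tiff, 4, bigEndian); // header bytes 4-7 point to the first IFD
    ushort count = ReadU16(tiff, (int)ifdOffset, bigEndian);
    Entry[] entries = new Entry[count];
    for (int i = 0; i < count; i++)
    {
        int e = (int)ifdOffset + 2 + i * 12;      // each entry is 12 bytes
        entries[i].Tag = ReadU16(tiff, e, bigEndian);
        entries[i].Type = ReadU16(tiff, e + 2, bigEndian);
        entries[i].Count = ReadU32(tiff, e + 4, bigEndian);
        entries[i].ValueOrOffset = ReadU32(tiff, e + 8, bigEndian);
    }
    return entries;
}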
Right now we have a list of Entry items. It is not difficult to insert a new Entry into the list - in our case, a "Photoshop" Entry. The difficult part is writing the data back out to create a new TIFF. You can't just write the entries directly to the output stream; that would break the overall structure of the TIFF. Care must be taken to keep track of where you write the data and to update the data pointers accordingly.
From the above description, we can see that it is not easy to insert new entries into the TIFF format. JPEG makes it much easier, given that each JPEG segment is self-contained.
I don't have related C# code, but there is a Java library here which can manipulate metadata for JPEG and TIFF images - inserting EXIF, IPTC, a thumbnail, etc. as 8BIM blocks. In your case, if file size is not a big issue, the above-mentioned library can insert a small thumbnail into a Photoshop tag as one 8BIM.
When I do a
b.Save(outputFilename, ImageFormat.Bmp);
where b is a 16-bit bitmap, it is saved with BITFIELDS compression. How can I make C# save it without using any compression?
This is what I did with the links that @Ben Voigt posted:
ImageCodecInfo myImageCodecInfo;
Encoder myEncoder;
EncoderParameter myEncoderParameter;
EncoderParameters myEncoderParameters;

myEncoder = Encoder.Compression;
myImageCodecInfo = GetEncoderInfo("image/bmp");
myEncoderParameters = new EncoderParameters(1);
myEncoderParameter = new EncoderParameter(myEncoder, (long)EncoderValue.CompressionNone);
myEncoderParameters.Param[0] = myEncoderParameter;
b.Save(outputFilename, myImageCodecInfo, myEncoderParameters);
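For completeness, GetEncoderInfo above is essentially the helper from the MSDN samples, reproduced so the snippet is self-contained:

private static ImageCodecInfo GetEncoderInfo(string mimeType)
{
    // Scan the registered encoders for the one matching the MIME type
    ImageCodecInfo[] encoders = ImageCodecInfo.GetImageEncoders();
    foreach (ImageCodecInfo codec in encoders)
    {
        if (codec.MimeType == mimeType)
            return codec;
    }
    return null; // no encoder registered for this MIME type
}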
When I pass an 8-bit bitmap, no compression is used. But when I pass a 16-bit RGB bitmap, it still uses BITFIELDS compression.
Bitmap in Windows means DIB.
"A device-independent bitmap (DIB) is a format used to define device-independent bitmaps in various color resolutions. The main purpose of DIBs is to allow bitmaps to be moved from one device to another (hence, the device-independent part of the name). A DIB is an external format, in contrast to a device-dependent bitmap, which appears in the system as a bitmap object (created by an application...). A DIB is normally transported in metafiles (usually using the StretchDIBits() function), BMP files, and the Clipboard (CF_DIB data format)."
And as we have already discussed in the comments, BITFIELDS compression is only used in 16- and 32-bit DIBs, and it simply describes how the data is packed. In the case of a 16-bit DIB it can define the resolution of the green channel (i.e. 5:6:5 or 5:5:5), whereas for 32-bit DIBs it defines whether the data is stored in RGB or BGR order (and, when using a BMIHv4/5 header, whether the alpha channel is used).
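As a sketch of what those masks mean in practice (my own illustration, assuming 5:6:5 packing):

static void Unpack565(ushort pixel, out byte r, out byte g, out byte b)
{
    r = (byte)(((pixel >> 11) & 0x1F) * 255 / 31); // 5 bits of red
    g = (byte)(((pixel >> 5) & 0x3F) * 255 / 63);  // 6 bits of green
    b = (byte)((pixel & 0x1F) * 255 / 31);         // 5 bits of blue
}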
There is only one reason for it: to keep the BMP device independent, meaning the format stays independent of whatever device it may be used on. So it is always kept in a DIB format, as far as Windows is concerned! The format is kept intact by means of the "compression" field.
EncoderParameters codecParams = new EncoderParameters(1);
codecParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
b.Save(outputFilename, myImageCodecInfo, codecParams);
This should preserve the quality.
There's an overload of the Save function that accepts an EncoderParameters argument, via which you can control compression.
See http://msdn.microsoft.com/en-us/library/ytz20d80.aspx and http://msdn.microsoft.com/en-us/library/system.drawing.imaging.encoder.compression.aspx
It appears that your real need is not to "save a 16-bit bitmap without bitfields compression", but to avoid lossy compression.
The problem is that the 16-bit bitmap format by its nature uses bitfields (something like 5 bits for red, 6 bits for green, and another 5 bits for blue), so some information will be lost if the original image had a pixel format wider than 16 bits. The most frequently used image format nowadays is RGB with 8 bits per channel, which requires 3 × 8 = 24 bits per pixel. You simply cannot fit 24 bits of information into 16 bits; some bits will fall off, it is a fact of life.
So, your problem is unsolvable for as long as a 16-bit bitmap is a requirement. The only way around it is to refrain from using 16-bit bitmaps.
Note:
The reason bitfield compression is used even though you specified "no compression" is that bitfield compression is not really compression; it is just a pixel format, so it should be called "bitfield format" rather than "bitfield compression". RLE is a real (albeit extremely simple) compression method.
RLE stands for "Run-Length Encoding", and it works as follows: a run of N identical pixel values is replaced by the number N followed by a single occurrence of the pixel value. So RLE is absolutely lossless.
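A toy sketch of the idea (my own code; real BMP RLE has extra rules such as escape codes and end-of-line markers, so this only illustrates the principle; requires System.Collections.Generic):

static List<byte> RleEncode(byte[] pixels)
{
    List<byte> output = new List<byte>();
    int i = 0;
    while (i < pixels.Length)
    {
        byte value = pixels[i];
        int run = 1;
        while (i + run < pixels.Length && pixels[i + run] == value && run < 255)
            run++;
        output.Add((byte)run); // N identical values ...
        output.Add(value);     // ... become the count plus one occurrence
        i += run;
    }
    return output;
}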
I am attempting to read a bitmap manually, so I read the bitmap file using a FileStream. There wasn't a problem until I had to deal with 24-bit bitmap files. Is there a method to read a 24-bit bitmap image into a 24-bit-per-pixel array?
I hold an 8-bit bitmap image in a byte array like this:
byte[] fileBufferArray = new byte[fileLength];
A few options:
If you're not too worried about memory (you don't have a large number of very large bitmaps open), you can store it as 32-bit numbers instead. The fourth byte is then often interpreted as "alpha" (a blending specifier when rendering the image on a background). Most modern image manipulation libraries treat color images this way now.
You can pack the colors into a byte array and access them individually. RGB and BGR are the two most common packing orders. Usually you also end up putting padding bytes at the end of each row so that the width in bytes lines up with DWORD (4-byte) boundaries (see the sketch after this list).
You can split the image into three separate byte array 'planes', which are basically 8-bit images for Red, Green and Blue respectively. This is another common format when doing image processing, as often your filtering steps operate on channels independently.
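As a sketch of the second option (helper names are mine; BGR order and four-byte row padding as in BMP):

static int RowStride(int widthInPixels)
{
    return (widthInPixels * 3 + 3) & ~3; // pad each row to a DWORD boundary
}

static Color GetPixel24(byte[] buffer, int x, int y, int widthInPixels)
{
    int offset = y * RowStride(widthInPixels) + x * 3;
    // BMP packs the channels as Blue, Green, Red
    return Color.FromArgb(buffer[offset + 2], buffer[offset + 1], buffer[offset]);
}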