Reading an IMG file and creating an image - C#

I have an IMG file that I need to read and display as a picture. Each pixel is represented by 2 bytes of type ushort.
So far I've read the file into a byte array and combined each pair of bytes (byte0 and byte1, byte2 and byte3, ...) into a single ushort value, but now I'm lost on how to create an actual image from these values, which range from zero to a little over 65,000.

The classes you want to look at are Image and Bitmap.
Bitmap has a constructor that looks like this:
public Bitmap(
    int width,
    int height,
    int stride,
    PixelFormat format,
    IntPtr scan0
)
It's possible that your IMG is using the PixelFormat Format48bppRgb, which is 16 bits each for Red, Green and Blue, so it might just work.
Failing that, you could define a bitmap of the correct dimensions and pixel format as above, and then manually SetPixel() every pixel in the image.
I haven't tried any of this, but hopefully it will be a steer in the right direction.
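If the file really is one grayscale ushort per pixel, here is a minimal sketch (untested) of the SetPixel() route: it scales each 16-bit value down to 8 bits and writes it as a gray pixel. The pixels/width/height names are placeholders for your own data.
using System.Drawing;

// Sketch: assumes 'pixels' holds one grayscale ushort per pixel, row-major.
// SetPixel is slow but simple; LockBits is the faster route for big images.
static Bitmap ToBitmap(ushort[] pixels, int width, int height)
{
    var bmp = new Bitmap(width, height);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int gray = pixels[y * width + x] >> 8; // scale 0..65535 down to 0..255
            bmp.SetPixel(x, y, Color.FromArgb(gray, gray, gray));
        }
    }
    return bmp;
}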

Related

Is there a way to pass 24-bit Bayer tiles to the GPU using OpenGL?

I'm developing a camera interface in C#.
I have an HDR camera that generates 24-bit HDR raw images.
The raw image buffer is a byte array, byte[], with 3 bytes per Bayer tile.
I'm looking for a way to pass this array buffer to the GPU as a texture using OpenTK (a C# wrapper for OpenGL), then demosaic the raw pixels into 24-bit RGB pixels, and then tonemap them to 8-bit RGB pixels.
I saw some example code that uses the Luminance pixel format:
GL.BindTexture(TextureTarget.Texture2D, this.handle);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width, height, 0, PixelFormat.Luminance, PixelType.UnsignedByte, imageBuffer);
GL.BindTexture(TextureTarget.Texture2D, 0);
But I'm not sure how the fragment shader would sample this image buffer with this pixel format to produce a color pixel, or whether this is the right way to do it for a 24-bit pixel.
I also tried creating an integer array (int[]) from the byte array image buffer, by combining every 3 bytes into an int on the CPU. But then I run into similar problems passing the integer array to the GPU as a texture; I'm not sure what pixel format should apply there.
Would anyone be able to point me in the right direction?
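One possible direction (a sketch, untested): since each Bayer sample is exactly 3 bytes, the buffer can be uploaded as an RGB8 texture so that every texel carries one 24-bit sample in its three channels, which the fragment shader can then reassemble into a single value before demosaicing. The 1-byte unpack alignment and nearest-neighbour filtering matter here: camera rows are rarely 4-byte aligned, and the shader needs raw, unfiltered samples.
// Sketch: one 24-bit Bayer sample per RGB8 texel.
GL.BindTexture(TextureTarget.Texture2D, this.handle);
GL.PixelStore(PixelStoreParameter.UnpackAlignment, 1); // rows may not be 4-byte aligned
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgb8, width, height, 0, PixelFormat.Rgb, PixelType.UnsignedByte, imageBuffer);
GL.BindTexture(TextureTarget.Texture2D, 0);
The byte order within each 3-byte sample (big- vs. little-endian) depends on the camera, so the shader-side reconstruction has to match it.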

What is a byte array of an image?

CaptureImage(bool FullImage, ref int Width, ref int Height, ref byte Image, string ImageFile)
Parameters:
FullImage
Whether to capture the entire image. Returns True if the device captures the whole image, or False if the device captures only the fingerprint.
Width
The width of the image.
Height
The height of the image.
Image
The byte array of the image.
ImageFile
Storage name of the fingerprint image to be captured (including the storage path).
Please give me an example of how to use this function.
Thank you very much.
A byte array is actually the place where the data for each pixel of your image is stored. It's an array of bytes. We usually use this type of array for storing images. Take a look at this article about byte arrays; it will make things much clearer:
www.dotnetperls.com/byte-array
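As for a usage example: without knowing which fingerprint SDK this is, only a rough, hypothetical sketch is possible. It assumes 'device' is the already-initialized SDK object, that the caller allocates the buffer, and that the ref byte parameter takes the first element of that buffer; check the vendor manual for the real allocation convention.
// Hypothetical sketch - not tested against any specific SDK.
int width = 0, height = 0;
byte[] image = new byte[1024 * 1024]; // placeholder size; see the SDK manual
bool capturedWholeImage = device.CaptureImage(
    true,                    // FullImage: capture the entire image
    ref width, ref height,   // filled in by the device
    ref image[0],            // "byte array of image"; the convention varies by SDK
    @"C:\scans\finger.bmp"); // storage path for the captured image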

Yet another issue with stride

According to the documentation, the stride for a bitmap created from a byte array needs to be:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
I have a byte array for an image of 640 x 512 pixels. The byte array is created from an array buffer coming in from a live camera. I am creating the image for the first time. The image format is PixelFormat.Format24bppRgb, so one byte each for Red, Green and Blue, for a total of three bytes per pixel. That makes one line of the image 640 * 3 bytes = 1,920 bytes. Checking whether this is divisible by four: 1,920 / 4 = 480, so it is.
When I use a stride of 640 * 3 = 1,920 I get a bad result (a garbled image). What am I doing wrong?
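To double-check my arithmetic, the rounding rule from the documentation can be written directly (a quick sketch):
int bytesPerPixel = 3;                          // Format24bppRgb
int stride = (640 * bytesPerPixel + 3) / 4 * 4; // round up to a 4-byte boundary; = 1,920 here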
I have verified that my byteArray has the right number of elements and looks correct. My code looks like this:
int stride = 1920; // 640 pixels * 3 bytes per pixel, already a multiple of 4

// ptr points at the camera buffer; Clone() is returned because a Bitmap
// constructed over an IntPtr does not own that memory.
using (Bitmap image = new Bitmap(640, 512, stride, PixelFormat.Format24bppRgb, new IntPtr(ptr)))
{
    return (Bitmap)image.Clone();
}
EDIT
It sounds like a stride of 1920 is correct, and you are all confirming that I understand how stride works. My guess at this point is that my pixel layout (ARGB vs. RGB, or something like that) is not correct.
This is how the image should look (using a free GenICam viewer): [screenshot omitted]
How the "garbled" image looks: [screenshot omitted]
Solved!
Understanding how stride works was the key. Since I had the correct stride, I started looking elsewhere. What happened was that I was passing an array with the wrong values to make the bitmap. The array should have contained temperature values:
23.5, 25.2, 29.8 ...
The values actually getting passed were in the thousands range:
7501, 7568, 7592 ...
Garbage in, garbage out...
Rambling thoughts:
I need to learn how to set up unit tests when doing things like this. Unit tests are easy when you are testing something like a method to calculate the circumference of a circle. I guess I need to figure out a way to set up a raw data file that simulates the raw data from the camera and run it through all the conversions to get to an image.
Thank you for all your help.

24 bit data type array to hold 24 bit bitmap file

I am attempting to read a bitmap manually, so I read the bitmap file using a FileStream. There wasn't a problem until I had to deal with 24-bit bitmap files. Is there a method to actually read a 24-bit bitmap image into a 24-bit array?
I hold an 8-bit bitmap image in a byte array like this:
byte[] fileBufferArray = new byte[fileLength];
A few options:
If you're not too worried about memory (you don't have a large number of very large bitmaps open), you can store the pixels as 32-bit numbers instead. Often the fourth byte is then interpreted as "alpha" (a blending specifier when rendering the image on a background). Most modern image manipulation libraries treat color images this way now.
You can pack the colors into a byte array and access them individually. RGB and BGR are the two most common packing orders. Usually you also end up putting padding bytes at the end of each row so that the width in bytes lines up with DWORD (4-byte) boundaries; see the sketch after this list.
You can split the image into three separate byte array 'planes', which are basically 8-bit images for Red, Green and Blue respectively. This is another common format when doing image processing, as often your filtering steps operate on channels independently.
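Here is a minimal sketch of the first two options combined, assuming packed BGR rows padded to 4-byte boundaries (and ignoring the bottom-up row order real BMP files usually use). It widens each 24-bit pixel into a 32-bit ARGB int:
// Sketch: widen padded 24-bit BGR rows into 32-bit ARGB ints.
static int[] To32bpp(byte[] data, int width, int height)
{
    int stride = (width * 3 + 3) / 4 * 4; // padded row length in bytes
    var pixels = new int[width * height];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int i = y * stride + x * 3;
            int b = data[i], g = data[i + 1], r = data[i + 2];
            pixels[y * width + x] = (0xFF << 24) | (r << 16) | (g << 8) | b; // opaque alpha
        }
    }
    return pixels;
}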

save jpg with lower bit depth (24 to 16 for instance) (C#)

I need to convert 24- and 32-bit JPEG and PNG files to a lower bit depth (16). I found some code to do this, but the resulting images keep the bit depth of the original file, although their file size is lower.
Image img = Image.FromFile(filePathOriginal);
Bitmap bmp = ConvertTo16bpp(img);

// jpgCodec is the JPEG ImageCodecInfo, looked up elsewhere
EncoderParameters parameters = new EncoderParameters();
parameters.Param[0] = new EncoderParameter(Encoder.ColorDepth, 16);
bmp.Save(filePathNew, jpgCodec, parameters);

bmp.Dispose();
img.Dispose();
...
private static Bitmap ConvertTo16bpp(Image img) {
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
Any ideas what's going wrong?
Thanks,
Frank
JPEG is a three-color format. It usually has 8 bits per color, but can have 12 or 16. 24 = 3 x 8 bits of color is therefore reasonable, but 16 or 32 is simply impossible; it just doesn't divide by three. 3 x 16 = 48 would be possible, but that would be a higher color depth. JPEG is designed for photos, and it doesn't make sense to support lower bit depths than 3 x 8; there's no benefit in it.
Now, what is the 16-bit image in your code? It's an imprecise in-memory approximation of the original, using only 32,768 colors (Format16bppRgb555 spends 5 bits on each channel). When you save that back, you get a 24-bit JPEG. Apparently your JPEG encoder doesn't know how to create a 48-bit JPEG. Even if it did, it would be a waste of bits, since the in-memory image only has 32,768 distinct colors anyway.
To summarize: what is going wrong is the task itself. There's no such thing as a 16-bit JPEG.
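An easy way to see this for yourself (a small sketch): reload the file you just saved and inspect its PixelFormat. GDI+ should report an ordinary 24bpp image even though the in-memory bitmap was Format16bppRgb555.
using (var reloaded = Image.FromFile(filePathNew))
{
    Console.WriteLine(reloaded.PixelFormat); // expect Format24bppRgb
}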
This question is a bit old, but I will add this for anyone searching in the future.
If it is a 32-bit file, then most likely it is in the CMYK colorspace. This is typically used for printing, but it's rare enough that many tools that use RGB and display to screens rather than print can't handle it.
You can convert it to RGB using ImageMagick:
convert imageInCMYK.jpg -colorspace RGB imageInRGB.jpg
JPEG is a three-color format and does not allow you to save at 16-bit depth, but the BMP format does.
I encountered the same problem and solved it as below. Hope it helps you.
bmp.Save("test.jpeg", ImageFormat.Bmp); // writes BMP data despite the .jpeg extension
