According to the documentation, the stride for a bitmap created from a byte array needs to be:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
I have a byteArray for an image of 640 x 512 pixels. The byte array is created from an arraybuffer that is coming in from a live camera. I am creating the image for the first time. The image format is PixelFormat.Format24bppRgb so one byte per Red, Green and Blue for a total of three bytes per pixel. That makes one line of the image 640 * 3 bytes = 1,920 bytes. Checking to see if this is divisible by four I get 1,920/4 = 480.
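That divisibility check can be generalized into a small helper; a sketch, assuming a width in pixels and a bits-per-pixel value, that rounds the row size in bytes up to the next four-byte boundary:

```csharp
using System;

// Computes the stride (bytes per scan line) rounded up to a 4-byte boundary.
// width: image width in pixels; bitsPerPixel: e.g. 24 for Format24bppRgb.
static int GetStride(int width, int bitsPerPixel)
{
    int bytesPerRow = (width * bitsPerPixel + 7) / 8; // round bits up to whole bytes
    return (bytesPerRow + 3) / 4 * 4;                 // round up to a multiple of 4
}
```

For 640 pixels at 24 bpp this yields 1,920, which already sits on a four-byte boundary, matching the calculation above.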
When I use a stride of 640 * 3 = 1,920 I get a bad result (a garbled image). What am I doing wrong?
I have verified that my byteArray has the right number of elements and looks correct. My code looks like this:
int stride = 1920;
using (Bitmap image = new Bitmap(640, 512, stride, PixelFormat.Format24bppRgb, new IntPtr(ptr)))
{
    return (Bitmap)image.Clone();
}
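As an aside: if `ptr` comes from a managed byte[] rather than unmanaged memory, the array has to be pinned so the garbage collector cannot move it while GDI+ reads it. A hypothetical helper (the name and parameters are mine, not from the question) following the same clone-then-release pattern:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Wraps a managed byte[] frame in a Bitmap without copying. The array is
// pinned for the Bitmap's lifetime, and Clone() is called before the handle
// is freed so the returned bitmap stays valid, as in the question's code.
static Bitmap FromBuffer(byte[] buffer, int width, int height, int stride)
{
    GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        using (var image = new Bitmap(width, height, stride,
                   PixelFormat.Format24bppRgb, handle.AddrOfPinnedObject()))
        {
            return (Bitmap)image.Clone();
        }
    }
    finally
    {
        handle.Free();
    }
}
```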
EDIT
It sounds like a stride of 1920 is correct, and you are all confirming that I understand how stride works. My guess at this point is that my pixel layout (ARGB vs. RGB or something like that) is not correct.
This is how the image should look (using a free GenICam viewer):
How the "garbled" image looks:
Solved!
Understanding how stride works was the key. Since I had the correct stride, I started looking elsewhere. It turned out I was passing an array with the wrong values to make the bitmap. The array should have contained temperature values:
23.5, 25.2, 29.8 ...
The values actually getting passed were in the thousands range ...
7501, 7568, 7592 ...
Garbage in, garbage out...
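For anyone hitting the same symptom: raw sensor counts (here in the 7,000s) have to be mapped into the 0-255 range before they can be used as 8-bit channel values. A minimal sketch, assuming a simple linear stretch over the frame's min/max; a real thermal camera's conversion would come from its calibration data instead:

```csharp
using System;

// Linearly rescales raw sensor counts (e.g. 7501, 7568, ...) into 0-255
// grayscale bytes. This min/max stretch is only an illustration; real
// cameras supply calibration constants for the raw-to-temperature mapping.
static byte[] NormalizeToBytes(ushort[] raw)
{
    ushort min = raw[0], max = raw[0];
    foreach (ushort v in raw)
    {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    int range = Math.Max(1, max - min); // avoid divide-by-zero on flat frames
    var result = new byte[raw.Length];
    for (int i = 0; i < raw.Length; i++)
        result[i] = (byte)((raw[i] - min) * 255 / range);
    return result;
}
```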
Rambling thoughts:
I need to learn how to set up unit tests when doing things like this. Unit tests are easy when you are testing something like a method to calculate the circumference of a circle. I guess I need to figure out a way to set up a raw data file that simulates the raw data from the camera and run it through all the conversions to get to an image.
Thank you for all your help.
Related
I'm developing a camera interface in C#.
I have an HDR camera that generates 24-bit HDR raw images.
The raw image buffer is a byte array, byte[], with 3 bytes per Bayer tile.
I'm looking for a way to pass this buffer to the GPU as a texture using OpenTK (a C# wrapper for OpenGL), then demosaic the raw image pixels into 24-bit RGB pixels, and finally tonemap them to 8-bit RGB pixels.
I saw some example code that uses the Luminance pixel format:
GL.BindTexture(TextureTarget.Texture2D, this.handle);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width, height, 0, PixelFormat.Luminance, PixelType.UnsignedByte, imageBuffer);
GL.BindTexture(TextureTarget.Texture2D, 0);
But I'm not sure how the fragment shader would sample this image buffer with this pixel format into a color pixel, or whether this is the right approach for a 24-bit pixel at all.
I also tried creating an integer array (int[]) from the byte array image buffer by combining every 3 bytes into an int on the CPU. But then I run into similar problems passing the integer array to the GPU as a texture; I'm not sure what pixel format applies there.
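The CPU-side packing described above might look like the sketch below; the byte order is an assumption and would need swapping if the camera emits the samples little-endian first:

```csharp
// Packs every 3 bytes of a 24-bit raw buffer into one int, assuming the
// most significant byte comes first; swap the shifts for the other order.
static int[] PackTo24Bit(byte[] buffer)
{
    var pixels = new int[buffer.Length / 3];
    for (int i = 0; i < pixels.Length; i++)
    {
        int o = i * 3;
        pixels[i] = (buffer[o] << 16) | (buffer[o + 1] << 8) | buffer[o + 2];
    }
    return pixels;
}
```

On the GL side, one way to keep the full 24-bit precision is an integer texture (GL.TexImage2D with an unsigned-integer internal format and PixelType.UnsignedInt) sampled through a usampler2D in the shader; the exact enum names to use depend on the OpenTK version.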
Would anyone be able to point me in the right direction?
I have an IMG file that I need to read and display as a picture. Each pixel is represented by 2 bytes (type ushort).
So far I've read the file into a byte array and combined pairs of bytes (byte0 and byte1, byte2 and byte3, ...) to create single ushort values, but now I'm lost on how to create an actual image from these values, which seem to range from zero to a little over 65,000.
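Assuming little-endian byte order, the two steps described (combine byte pairs, then squeeze the 0-65535 range down to something displayable) can be sketched as:

```csharp
// Combines byte pairs into ushort samples (little-endian), then scales each
// 16-bit value down to 8 bits so it fits a standard grayscale bitmap.
static byte[] To8BitGray(byte[] raw)
{
    var gray = new byte[raw.Length / 2];
    for (int i = 0; i < gray.Length; i++)
    {
        ushort sample = (ushort)(raw[2 * i] | (raw[2 * i + 1] << 8));
        gray[i] = (byte)(sample >> 8); // keep the high byte: 0-65535 -> 0-255
    }
    return gray;
}
```

Taking the high byte throws away the fine detail; a min/max stretch over the actual value range would preserve more contrast if the data only covers part of the 16-bit range.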
The classes you want to look at are Image and Bitmap.
Bitmap has a constructor that looks like this:
public Bitmap(
    int width,
    int height,
    int stride,
    PixelFormat format,
    IntPtr scan0
)
It's possible that your IMG is using the PixelFormat Format48bppRgb, which is 16 bits each for Red, Green and Blue, so it might just work.
Failing that, you could create the bitmap with the correct dimensions, set the pixel format as above, and then manually SetPixel() every pixel in the image.
I haven't tried any of this, but hopefully it will be a steer in the right direction.
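To make the SetPixel() fallback concrete, here is a sketch that collapses each 16-bit sample to an 8-bit gray level (assuming the ushort values are intensities; SetPixel is slow, so LockBits is worth switching to once this works):

```csharp
using System.Drawing;

// Builds a viewable bitmap from 16-bit samples by collapsing each one to an
// 8-bit gray level. Assumes 'samples' is in row-major order.
static Bitmap FromUShorts(ushort[] samples, int width, int height)
{
    var bmp = new Bitmap(width, height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int gray = samples[y * width + x] >> 8; // 0-65535 -> 0-255
            bmp.SetPixel(x, y, Color.FromArgb(gray, gray, gray));
        }
    return bmp;
}
```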
I have a sample wpf app here and wondering, why BMP is loading faster than PNG. Here is the exact setup:
- Windows 7
- Visual Studio 2013
- landscape.png, 1920x1080, 2.4mb
- landscape.bmp, 1920x1080, 5.6mb
- wpf-app, class MainWindow.xaml.cs constructor, no other code before
Code:
var sw = Stopwatch.StartNew();
var Bitmap1 = new Bitmap("landscape.bmp");
long t1 = sw.ElapsedMilliseconds;
sw.Restart();
var Bitmap2 = new Bitmap("landscape.png");
long t2 = sw.ElapsedMilliseconds;
So the BMP loads with around 6ms, the PNG needs 40ms.
Why is that so?
First, we need to understand how digital images are stored and shown. A digital image is represented as a matrix where each element is the color of a pixel. If you have a grayscale image, each element is a uint8 (unsigned 8-bit integer) between 0 and 255, or in some cases an int8 (signed 8-bit integer) between -128 and 127. If the element is 0 (or -128 in the int8 version) the color is solid black, and if it is 255 (or 127 in the int8 version) the color is solid white.
For RGB images, each element of that matrix takes 24 bits, or 3 bytes, to store (one byte per color). A very common resolution for digital cameras and smartphones is 3264 x 2448 for an 8-megapixel camera. Now imagine saving a matrix with 3264 rows where each row has 2448 elements and each element is 3 bytes: we need about 24 megabytes to store that image, which is not very efficient for posting on the internet, transferring, or most other purposes. That is why we compress the image: we can go for JPEG, a lossy method, which means we lose some quality, or we can choose a lossless method like PNG, which gives a lower compression ratio but loses no quality.
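The size arithmetic above is easy to verify:

```csharp
// Uncompressed size of an 8-megapixel RGB image: 3 bytes per pixel.
int width = 3264, height = 2448, bytesPerPixel = 3;
long sizeBytes = (long)width * height * bytesPerPixel; // 23,970,816 bytes, i.e. about 24 MB
```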
Whether we choose to compress the image or not, when we want to view it we can only show the uncompressed version. If the image is not compressed at all, there is no problem: we show exactly what it is. But if it is compressed, we have to decode (uncompress) it first.
With all that said, let's answer the question. BMP is a format for more or less raw images: there is either no compression at all, or far less compression than PNG or JPEG applies, but the file size is bigger. When you want to show a BMP image, because it is bigger, there is more data to read into memory, but once it is read you can display it much faster because little or no decoding is required. On the other hand, a PNG image is read into memory much faster, but compared to BMP the decoding takes more time.
If you have very slow storage, BMP images will be shown slowly.
If you have a very slow CPU, or your decoding software is not efficient, PNG images will be shown slowly.
I have a byte[] object which contains the bytes of an image. I need to pad the image adding black zones to the left and right.
My image is 512 pixels high and 384 pixels wide, and I need to make it 512 x 512; that is, I need to add 128 columns, 64 to the left and 64 to the right.
I think I need to first copy all the image bytes into columns 65 to 448 (that covers my 384-pixel width), then add 64 columns to the left and 64 columns to the right.
I'm not quite sure how to do this; I imagine a nested for loop would suffice, but I'm not sure.
I'm programming in C#
I have tested this with a raw image generated by Photoshop and it seems to work OK. Obviously it's designed to work only for your specific case, as I'm not sure what you're trying to achieve, but I'm sure you could improve upon it :)
// Requires: using System.Collections.Generic; using System.Linq;
public byte[] FixImage(byte[] imageData, int bitsPerPixel)
{
    int bytesPerPixel = bitsPerPixel / 8;
    List&lt;byte&gt; data = new List&lt;byte&gt;();
    // Walk the source one 384-pixel row at a time.
    for (int i = 0; i &lt; imageData.Length; i += 384 * bytesPerPixel)
    {
        data.AddRange(new byte[64 * bytesPerPixel]);                // 64 black pixels on the left
        data.AddRange(imageData.Skip(i).Take(384 * bytesPerPixel)); // the original row
        data.AddRange(new byte[64 * bytesPerPixel]);                // 64 black pixels on the right
    }
    return data.ToArray();
}
If you end up using more complicated formats than raw byte arrays, it may be worth investigating using the GDI functions in System.Drawing. Let me know if you want an example of that.
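For raw buffers specifically, one alternative to the List-based approach is to allocate the padded buffer once and block-copy each source row into place; a sketch generalized from the 384 → 512 case (the parameter names are mine):

```csharp
using System;

// Pads each source row to dstWidth pixels by copying it into the middle of a
// pre-allocated destination row, leaving equal black margins on both sides.
static byte[] PadRows(byte[] src, int srcWidth, int dstWidth, int height, int bytesPerPixel)
{
    int srcStride = srcWidth * bytesPerPixel;
    int dstStride = dstWidth * bytesPerPixel;
    int leftPad = (dstWidth - srcWidth) / 2 * bytesPerPixel;
    var dst = new byte[dstStride * height]; // new bytes are already zero (black)
    for (int y = 0; y < height; y++)
        Buffer.BlockCopy(src, y * srcStride, dst, y * dstStride + leftPad, srcStride);
    return dst;
}
```

This avoids growing a List one range at a time and copies each row in a single call.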
I need to convert 24- and 32-bit JPEG and PNG files to a lower bit depth (16). I found some code to do this, but the resulting images keep the bit depth of the original file, although their file size is lower.
Image img = Image.FromFile(filePathOriginal);
Bitmap bmp = ConvertTo16bpp(img);
EncoderParameters parameters = new EncoderParameters();
parameters.Param[0] = new EncoderParameter(Encoder.ColorDepth, 16);
bmp.Save(filePathNew, jpgCodec, parameters);
bmp.Dispose();
img.Dispose();
...
private static Bitmap ConvertTo16bpp(Image img) {
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
Any ideas what's going wrong?
Thanks,
Frank
JPEG is a three-color format. It usually has 8 bits per color, but can have 12 or 16. 24 = 3 x 8 bits of color is therefore reasonable, but 16 or 32 is simply impossible: it just doesn't divide by three. 3 x 16 = 48 would be possible, but that's a higher color depth. JPEG is designed for photos, and it doesn't make sense to support bit depths lower than 3 x 8; there's no benefit in that.
Now, what is the 16-bit image in your code? It's an imprecise in-memory approximation of the original, using at most 65,536 colors (Format16bppRgb555 in fact uses only 32,768). When you save that back, you get a 24-bit JPEG. Apparently your JPEG encoder doesn't know how to create a 48-bit JPEG. Even if it did, it would be a waste of bits, since the in-memory image only has those colors anyway.
To summarize: what is going wrong is the task itself. There's no such thing as a 65,536-color JPEG.
This question is a bit old, but I will add this for anyone searching in the future.
If it is a 32-bit file, then most likely it is in the CMYK colorspace. This is typically used for printing, but it's rare enough that many tools that use RGB and display to screens rather than print can't handle it.
You can convert it to RGB using ImageMagick:
convert imageInCMYK.jpg -colorspace RGB imageInRGB.jpg
JPEG is a three-color format and does not allow you to save at a 16-bit depth, but the BMP format does.
I encountered the same problem and I solved it as below. Hope it will help you.
bmp.Save("test.bmp", ImageFormat.Bmp);