CaptureImage(bool FullImage, ref int Width, ref int Height, ref byte Image, string ImageFile)
[Parameter]
FullImage
Whether to capture the entire image. Pass True to have the device capture the whole image; pass False to have it capture only the fingerprint area.
Width
The width of the captured image.
Height
The height of the captured image.
Image
Byte array holding the image data.
ImageFile
File name under which the captured fingerprint image is stored (including the storage path).
Please give me an example of how to use this function.
Thank you very much
A byte array is the place where the data for each pixel of your image is stored: it's simply an array of bytes. We usually use this type of array for storing images. Take a look at this article about byte arrays; it will make things much clearer.
www.dotnetperls.com/byte-array
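To show the calling pattern for the CaptureImage signature documented above, here is a minimal sketch. The device SDK itself isn't available here, so the body below is a hypothetical stub (the real return type isn't documented either; bool is assumed), but the ref-parameter usage is what matters:

```csharp
using System;

class FingerprintDemo
{
    // Hypothetical stand-in for the real device SDK: the signature mirrors
    // the documented CaptureImage, but this body is only a stub. The real
    // Image parameter is assumed to be a byte array (ref byte[]).
    static bool CaptureImage(bool fullImage, ref int width, ref int height,
                             ref byte[] image, string imageFile)
    {
        width = 256;                       // stub: pretend the sensor is 256x288
        height = 288;
        image = new byte[width * height];  // one byte per pixel (grayscale)
        // A real SDK would also write the image to imageFile here.
        return true;
    }

    static void Main()
    {
        int width = 0, height = 0;
        byte[] image = null;

        // Capture only the fingerprint area and save it to the given path.
        bool ok = CaptureImage(false, ref width, ref height, ref image,
                               @"C:\fingerprints\scan1.bmp");

        Console.WriteLine(ok);             // True
        Console.WriteLine(width * height); // 73728
        Console.WriteLine(image.Length);   // 73728
    }
}
```

After the call returns, Width, Height, and Image have been filled in by the device, which is why they are passed by ref.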
Related
I'm developing a camera interface in C#.
I have an HDR camera that generates 24-bit HDR raw images.
The raw image buffer is a byte array, byte[], with 3 bytes per Bayer tile.
I'm looking for a way to pass this array buffer to the GPU as a texture using OpenTK (a C# wrapper for OpenGL), then demosaic the raw image pixels into 24-bit RGB pixels, and finally tonemap them to 8-bit RGB pixels.
I saw some example code that uses the Luminance pixel format:
GL.BindTexture(TextureTarget.Texture2D, this.handle);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Luminance, width, height, 0, PixelFormat.Luminance, PixelType.UnsignedByte, imageBuffer);
GL.BindTexture(TextureTarget.Texture2D, 0);
But I'm not sure how the fragment shader would sample this image buffer with this pixel format into a color pixel, or whether this is even the right approach for a 24-bit pixel.
I also tried the option of creating an integer array (int[]) from the byte array image buffer, by combining every 3 bytes into an int on the CPU. But then I run into similar problems passing the integer array into the GPU as a texture. Not sure what pixel format should apply here.
Would anyone be able to point me in the right direction?
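As a CPU-side illustration of the "combine every 3 bytes into an int" step mentioned in the question, here is a minimal sketch. The byte order within the resulting int is an assumption (first byte placed in the high bits):

```csharp
using System;

class BayerPack
{
    // Combine every 3 bytes of a raw 24-bit buffer into one int.
    // Assumes the first byte of each triple is the most significant.
    static int[] PackToInts(byte[] buffer)
    {
        int[] result = new int[buffer.Length / 3];
        for (int i = 0; i < result.Length; i++)
        {
            int b0 = buffer[3 * i];
            int b1 = buffer[3 * i + 1];
            int b2 = buffer[3 * i + 2];
            result[i] = (b0 << 16) | (b1 << 8) | b2; // 0x00B0B1B2
        }
        return result;
    }

    static void Main()
    {
        byte[] raw = { 0x01, 0x02, 0x03, 0xFF, 0x00, 0x10 };
        int[] packed = PackToInts(raw);
        Console.WriteLine(packed[0]); // 0x010203 = 66051
        Console.WriteLine(packed[1]); // 0xFF0010 = 16711696
    }
}
```

Note that whichever order you pick here must match the channel order the shader expects when it unpacks the value on the GPU.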
According to the documentation, the stride for a bitmap created from a byte array needs to be:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
I have a byteArray for an image of 640 x 512 pixels. The byte array is created from an array buffer that is coming in from a live camera. I am creating the image for the first time. The image format is PixelFormat.Format24bppRgb, so one byte each for Red, Green and Blue, for a total of three bytes per pixel. That makes one line of the image 640 * 3 bytes = 1,920 bytes. Checking to see if this is divisible by four, I get 1,920 / 4 = 480.
When I use a stride of 640 * 3 = 1,920 I get a bad result (a garbled image). What am I doing wrong?
I have verified that my byteArray has the right number of elements and looks correct. My code looks like this:
int stride = 1920;
using (Bitmap image = new Bitmap(640, 512, stride, PixelFormat.Format24bppRgb, new IntPtr(ptr)))
{
    return (Bitmap)image.Clone();
}
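For reference, the "rounded up to a four-byte boundary" rule from the documentation can be sketched as a small helper. The function name is my own; the arithmetic confirms that 1,920 is already a valid stride for 640 x 3-byte pixels:

```csharp
using System;

class StrideDemo
{
    // Stride = row width in bytes, rounded up to a 4-byte boundary.
    static int ComputeStride(int width, int bytesPerPixel)
    {
        int rowBytes = width * bytesPerPixel;
        return (rowBytes + 3) / 4 * 4; // integer division rounds up to a multiple of 4
    }

    static void Main()
    {
        // 640 px * 3 bytes/px = 1920, already a multiple of 4.
        Console.WriteLine(ComputeStride(640, 3)); // 1920
        // 641 px * 3 bytes/px = 1923, which rounds up to 1924.
        Console.WriteLine(ComputeStride(641, 3)); // 1924
    }
}
```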
EDIT
It is sounding like a stride of 1920 is correct, and you are all confirming that I understand how stride works. My guess at this point is that my pixel layout (ARGB vs. RGB, or something like that) is not correct.
This is how the image should look (using a free GenICam viewer):
How the "garbled" image looks:
Solved!
Understanding how stride works was the key. Since I had the correct stride, I started looking elsewhere. It turned out I was passing an array with the wrong values to make the bitmap. The values should have been temperature readings:
23.5, 25.2, 29.8 ...
The values actually getting passed were in the thousands range ...
7501, 7568, 7592 ...
Garbage in, garbage out...
Rambling thoughts:
I need to learn how to set up unit tests when doing things like this. Unit tests are easy when you are testing something like a method to calculate the circumference of a circle. I guess I need to figure out a way to set up a raw data file that simulates the raw data from the camera and run it through all the conversions to get to an image.
Thank you for all your help.
I'm having trouble converting a byte array to an image using the common methods, for example:
using (var ms = new MemoryStream(byteArrayIn))
{
    return Image.FromStream(ms); // throws an exception
}
and
System.Drawing.ImageConverter converter = new System.Drawing.ImageConverter();
Image img = (Image)converter.ConvertFrom(ImgInBytes); // throws an exception
The exception is
Parameter is not valid
Moreover, I tried a 4-byte array initialized with zero values.
It was supposed to show a black image, but it didn't.
I used a 4-byte array initialized with zero values.
The API expects a valid image stream; 4 bytes with value zero is not a valid image stream.

The method is going to inspect the stream, trying to identify the image format (streams are broadly comparable to files, except without any concept of a filename) - it isn't just looking for pixel data. That means it is going to be looking for an image header that it recognizes (for example, png always starts with the byte values 137 80 78 71 13 10 26 10); once it has identified the format, it'll want to decode the image header (dimensions, color depth, possibly a palette, etc), and then finally there might be some pixel data - or there might not, if it isn't a pixel format (it could be a vector image format).

So: there's a lot more to consider than just some pixel data.
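The format-sniffing idea above can be illustrated with the PNG signature mentioned (137 80 78 71 13 10 26 10). This is a sketch with a helper name of my own; it only shows why four zero bytes cannot be recognized as any image format:

```csharp
using System;

class PngSniff
{
    static readonly byte[] PngMagic = { 137, 80, 78, 71, 13, 10, 26, 10 };

    // Returns true if the buffer starts with the PNG signature.
    static bool LooksLikePng(byte[] data)
    {
        if (data.Length < PngMagic.Length) return false;
        for (int i = 0; i < PngMagic.Length; i++)
            if (data[i] != PngMagic[i]) return false;
        return true;
    }

    static void Main()
    {
        byte[] fourZeros = new byte[4]; // the 4-byte all-zero "image" from the question
        byte[] pngStart = { 137, 80, 78, 71, 13, 10, 26, 10, 0, 0 };
        Console.WriteLine(LooksLikePng(fourZeros)); // False
        Console.WriteLine(LooksLikePng(pngStart));  // True
    }
}
```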
If you want a black image: perhaps start with Bitmap - maybe see this answer
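A sketch of the black-image route, assuming System.Drawing is available (on modern .NET this means the System.Drawing.Common package, which is Windows-only in recent versions). Drawing the bitmap first and then saving it to a stream produces bytes that Image.FromStream can actually decode, unlike a raw zeroed array:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class BlackImage
{
    static void Main()
    {
        // A new Bitmap's contents are not guaranteed, so clear it explicitly.
        using (var bmp = new Bitmap(100, 100, PixelFormat.Format24bppRgb))
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.Black);

            // Round-trip through a stream: these bytes carry a real PNG
            // header, so Image.FromStream can identify and decode them.
            using (var ms = new MemoryStream())
            {
                bmp.Save(ms, ImageFormat.Png);
                ms.Position = 0;
                using (var img = Image.FromStream(ms))
                {
                    Console.WriteLine(img.Width);
                }
            }
        }
    }
}
```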
I have an IMG file that I need to read and display as a picture. Each pixel is represented by 2 bytes type ushort.
So far I've read the file into a byte array and combined two bytes (byte0 and byte1, byte2 and byte3, ...) to create a single ushort value, but now I'm lost on how to create an actual image from these values, which seem to range from zero to a little over 65,000.
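The byte-pair combining described above can be sketched with BitConverter. Note the endianness assumption: BitConverter uses the platform's byte order (little-endian on most machines), so swap the bytes first if the IMG file stores them big-endian:

```csharp
using System;

class UShortCombine
{
    static void Main()
    {
        // Two 16-bit pixels as raw bytes (little-endian: low byte first).
        byte[] fileBytes = { 0x00, 0x01, 0xFF, 0xFF };

        ushort[] pixels = new ushort[fileBytes.Length / 2];
        for (int i = 0; i < pixels.Length; i++)
        {
            // Reads bytes 2*i and 2*i+1 as one unsigned 16-bit value.
            pixels[i] = BitConverter.ToUInt16(fileBytes, 2 * i);
        }

        Console.WriteLine(pixels[0]); // 0x0100 = 256
        Console.WriteLine(pixels[1]); // 0xFFFF = 65535
    }
}
```

The 0-to-65535 range the question reports is exactly what ushort pixels should produce, which suggests the combining step is working.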
The classes you want to look at are Image and Bitmap.
Bitmap has a constructor that looks like this:
public Bitmap(
int width,
int height,
int stride,
PixelFormat format,
IntPtr scan0
)
It's possible that your IMG is using the PixelFormat Format48bppRgb, which is 16 bits each for Red, Green and Blue, so it might just work.
Failing that, you could create a bitmap with the correct dimensions and pixel format as above, and then manually SetPixel() every pixel in the image.
I haven't tried any of this, but hopefully it will be a steer in the right direction.
I am attempting to read a bitmap manually, so I read the bitmap file using a FileStream. There wasn't a problem until I had to deal with 24-bit bitmap files. Is there a method to read a 24-bit bitmap image into a 24-bit array?
I hold an 8-bit bitmap image in a byte array like this:
byte[] fileBufferArray = new byte[fileLength];
A few options:
If you're not too worried about memory (you don't have a large number of bitmaps, or very large bitmaps, open), you can store it as 32-bit numbers instead. Often the fourth byte is then interpreted as "alpha" (a blending specifier when rendering the image on a background). Most modern image manipulation libraries treat color images this way now.
You can pack the colors into a byte array and access them individually. RGB and BGR are the two most common packing orders. Usually you also end up putting padding bytes at the end of each row so that the width in bytes lines up with DWORD (4-byte) boundaries.
You can split the image into three separate byte array 'planes', which are basically 8-bit images for Red, Green and Blue respectively. This is another common format when doing image processing, as often your filtering steps operate on channels independently.
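The third option, splitting a packed buffer into per-channel planes, can be sketched as follows. This assumes BGR packing order and no row padding (the helper name is my own):

```csharp
using System;

class PlaneSplit
{
    // Split a packed BGR buffer (3 bytes per pixel, no row padding assumed)
    // into three separate 8-bit planes.
    static void Split(byte[] bgr, out byte[] b, out byte[] g, out byte[] r)
    {
        int pixels = bgr.Length / 3;
        b = new byte[pixels];
        g = new byte[pixels];
        r = new byte[pixels];
        for (int i = 0; i < pixels; i++)
        {
            b[i] = bgr[3 * i];
            g[i] = bgr[3 * i + 1];
            r[i] = bgr[3 * i + 2];
        }
    }

    static void Main()
    {
        // Two pixels: (B=10, G=20, R=30) and (B=40, G=50, R=60).
        byte[] packed = { 10, 20, 30, 40, 50, 60 };
        Split(packed, out byte[] b, out byte[] g, out byte[] r);
        Console.WriteLine(b[1]); // 40
        Console.WriteLine(g[1]); // 50
        Console.WriteLine(r[1]); // 60
    }
}
```

With real bitmap data you would also have to skip the padding bytes at the end of each row, as described in the second option above.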