Convert 12-bit Monochrome Image to 8-bit Grayscale - C#

I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSB that I want to discard, thereby keeping only the odd-numbered bytes (to put it another way). The best solution I can conceptualize is to iterate through the memory, but that's the rub. I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
    IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
    int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
    byte[] imageData = new byte[imageSize];
    do
    {
        // Should I bitwise shift?
        imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
    } while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
    if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
    bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.

Here is the algorithm that my coworkers helped formulate. It creates two new (unmanaged) pointers over the same buffer: one 8 bits wide and the other 16 bits.
By stepping through one 16-bit word at a time and shifting off the low 4 bits of the source, we get a new 8-bit image containing only the MSBs. Each pointer covers the same number of elements, but since the elements are different sizes, the pointers advance at different rates as we iterate over the buffer.
unsafe
{
    byte* p_bytebuffer = (byte*)pImage;
    short* p_shortbuffer = (short*)pImage;
    for (int i = 0; i < imageWidth * imageHeight; i++)
    {
        *p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4);
    }
}
In terms of performance, this appears to be very fast with no perceivable difference in framerate.
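To actually display the result, something like the following should work for wrapping the converted buffer in an 8bpp Bitmap (untested sketch; the grayscale palette handling is my addition, and the Bitmap constructor requires the stride, here imageWidth, to be a multiple of 4):
// Untested sketch: wrap the converted 8-bit buffer in a Bitmap.
Bitmap bitmap = new Bitmap(imageWidth, imageHeight, imageWidth,
    PixelFormat.Format8bppIndexed, pImage);
// The default 8bpp indexed palette is not grayscale, so install a linear one.
ColorPalette palette = bitmap.Palette;
for (int i = 0; i < 256; i++)
    palette.Entries[i] = Color.FromArgb(i, i, i);
bitmap.Palette = palette;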
Special thanks to @Herohtar for spending a substantial amount of time in chat with me, attempting to help me solve this.

Related

Loading and displaying a 16 (12) bit grayscale png into a PictureBox

I'm using a framework for some camera hardware called IDS Peak and we are receiving 16 bit grayscale images back from the framework, the framework itself can write the files to disk as PNGs and that's all good and well, but how do I display them in a PictureBox in Winforms?
Windows Bitmap does not support 16-bit grayscale, so the following code throws a 'Parameter is not valid.' System.ArgumentException:
var image = new Bitmap(width, height, stride, System.Drawing.Imaging.PixelFormat.Format16bppGrayScale, iplImg.Data());
iplImg.Data() here is an IntPtr to the bespoke Image format of the framework.
Considering Windows Bitmap does not support the format, and I can write the files using the framework to PNGs, how can I do one of the following:
Convert to a different object type other than Bitmap to display directly in Winforms without reading from the files.
Load the 16-bit grayscale PNG files into the PictureBox control (or any other control type, it doesn't have to be a PictureBox).
(1) is preferable as it doesn't require file I/O, but if (2) is the only possibility, that's completely fine; I need to both save and display them anyway, and (1) only requires a write operation rather than a write plus a secondary read.
The files, before being written to disk, are actually monochrome with 12 bits per pixel, packed.
While it is possible to display 16-bit images, for example by hosting a WPF control in WinForms, you probably want to apply a windowing function to reduce the image to 8 bits before display.
So let's use unsafe code and pointers for speed:
var bitmapData = myBitmap.LockBits(
    new Rectangle(0, 0, myBitmap.Width, myBitmap.Height),
    ImageLockMode.ReadWrite,
    myBitmap.PixelFormat);
try
{
    var ptr = (byte*)bitmapData.Scan0;
    var stride = bitmapData.Stride;
    var width = bitmapData.Width;
    var height = bitmapData.Height;
    // Conversion Code
}
finally
{
    myBitmap.UnlockBits(bitmapData);
}
or using wpf image classes, that generally have better 16-bit support:
var writeableBitmap = new WriteableBitmap(new BitmapImage(new Uri("myBitmap.jpg", UriKind.Relative)));
writeableBitmap.Lock();
try
{
    var ptr = (byte*)writeableBitmap.BackBuffer;
    ...
}
finally
{
    writeableBitmap.Unlock();
}
To loop over all the pixels you would use a double loop:
for (int y = 0; y < height; y++)
{
    var row = (ushort*)(ptr + y * stride);
    for (int x = 0; x < width; x++)
    {
        var pixelValue = row[x];
        // Scaling code
    }
}
And to scale the value you could use a linear scaling between the min and max values to the 0-255 range of a byte:
var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);
var scaled = (int)((pixelValue + 0.5f - minUshortValue) * slope);
scaled = scaled > byte.MaxValue ? byte.MaxValue : scaled;
scaled = scaled < 0 ? 0 : scaled;
var byteValue = (byte)scaled;
The maxUshortValue / minUshortValue would either be computed from the max/min value of the image, or configured by the user. You would also need to create a target image in order to write the result into an 8-bit grayscale bitmap for display, or write the same value to each color channel of a color image.
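Putting the pieces together, here is a rough end-to-end sketch (my own combination of the snippets above, untested against IDS Peak; it assumes the 12-bit data has already been unpacked to one ushort per pixel, reachable through a byte pointer and stride as in the lock examples):
unsafe Bitmap To8Bit(byte* srcPtr, int srcStride, int width, int height)
{
    // First pass: find the window (min/max) over the whole image.
    ushort min = ushort.MaxValue, max = ushort.MinValue;
    for (int y = 0; y < height; y++)
    {
        var row = (ushort*)(srcPtr + y * srcStride);
        for (int x = 0; x < width; x++)
        {
            if (row[x] < min) min = row[x];
            if (row[x] > max) max = row[x];
        }
    }
    if (max == min) max++; // avoid division by zero on flat images
    var slope = (byte.MaxValue + 1f) / (max - min);

    // Second pass: scale into an 8bpp indexed bitmap with a grayscale palette.
    var target = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
    var palette = target.Palette;
    for (int i = 0; i < 256; i++) palette.Entries[i] = Color.FromArgb(i, i, i);
    target.Palette = palette;

    var targetData = target.LockBits(new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
    try
    {
        for (int y = 0; y < height; y++)
        {
            var srcRow = (ushort*)(srcPtr + y * srcStride);
            var dstRow = (byte*)targetData.Scan0 + y * targetData.Stride;
            for (int x = 0; x < width; x++)
            {
                var scaled = (int)((srcRow[x] + 0.5f - min) * slope);
                scaled = scaled > byte.MaxValue ? byte.MaxValue : scaled;
                scaled = scaled < 0 ? 0 : scaled;
                dstRow[x] = (byte)scaled;
            }
        }
    }
    finally
    {
        target.UnlockBits(targetData);
    }
    return target;
}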

Problem getting the image row where all pixels are white

I am trying to find out if the image is clipped at the bottom and, if it is, I will divide it into two images at the last white pixel row. Following are the simple methods I created to check clipping and get the empty white pixel rows. As you can see, this is not a very good solution; it might cause performance issues for larger images. So if anyone can suggest better ways, it would be a great help:
private static bool IsImageBottomClipping(Bitmap image)
{
    for (int i = 0; i < image.Width; i++)
    {
        var pixel = image.GetPixel(i, image.Height - 1);
        if (pixel.ToArgb() != Color.White.ToArgb())
        {
            return true;
        }
    }
    return false;
}
private static int GetLastWhiteLine(Bitmap image)
{
    for (int i = image.Height - 1; i >= 0; i--)
    {
        int whitePixels = 0;
        for (int j = 0; j < image.Width; j++)
        {
            var pixel = image.GetPixel(j, i);
            if (pixel.ToArgb() == Color.White.ToArgb())
            {
                whitePixels = j + 1;
            }
        }
        if (whitePixels == image.Width)
            return i;
    }
    return -1;
}
IsImageBottomClipping is working fine, but the other method is not returning the correct white pixel row; it returns a row lower down than expected. Example image:
In this case, GetLastWhiteLine should return a row around 180, but it is returning 192.
All right, so... we've got two subjects to tackle here: first the optimising, then your bug. I'll start with the optimising.
The fastest way is to work in memory directly, but, honestly, it's kind of unwieldy. The second-best choice, which is what I generally use, is to copy the raw image data bytes out of the image object. This will make you end up with four vital pieces of data:
The width, which you can just get from the image.
The height, which you can just get from the image.
The byte array, containing the image bytes.
The stride, which gives you the amount of bytes used for each line on the image.
(Technically, there's a fifth one, namely the pixel format, but we'll just force things to 32bpp here so we don't have to take that into account along the way.)
Note that the stride, technically, is not just the amount of bytes used per pixel multiplied by the image width. It is rounded up to the next multiple of 4 bytes. When working with 32-bit ARGB content, this isn't really an issue, since 32-bit is 4 bytes, but in general, it's better to use the stride and not just the multiplied width, and write all code assuming there could be padded bytes behind each line. You'll thank me if you're ever processing 24-bit RGB content with this kind of system.
However, when going over the image's content you obviously should only check the exact range that contains pixel data, and not the full stride.
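To make that rounding concrete, here is a quick illustration (hypothetical width and format, just for the arithmetic):
// Stride rounds the raw row length up to the next multiple of 4 bytes:
int bytesPerPixel = 3;  // 24-bit RGB
int width = 10;         // 10 pixels => 30 bytes of pixel data per row
int stride = (width * bytesPerPixel + 3) / 4 * 4;  // = 32, leaving 2 padding bytes per row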
The way to get these things is quite simple: use LockBits on the image, tell it to expose the image as 32 bit per pixel ARGB data (it will actually convert it if needed), get the line stride, and use Marshal.Copy to copy the entire image contents into a byte array.
Int32 width = image.Width;
Int32 height = image.Height;
BitmapData sourceData = image.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Int32 stride = sourceData.Stride;
Byte[] data = new Byte[stride * height];
Marshal.Copy(sourceData.Scan0, data, 0, data.Length);
image.UnlockBits(sourceData);
As mentioned, this is forced to 32-bit ARGB format. If you would want to use this system to get the data out in the original format it has inside the image, just change PixelFormat.Format32bppArgb to image.PixelFormat.
Now, you have to realise, LockBits is a rather heavy operation, which copies the data out, in the requested pixel format, to new memory, where it can be read or (if not specified as read-only as I did here) edited. What makes this more optimal than your method is, quite simply, that GetPixel performs a LockBits operation every time you request a single pixel value. So you're cutting down the amount of LockBits calls from several thousands to just one.
Anyway, now, as for your functions.
The first method is, in my opinion, completely unnecessary; you should just run the second one on any image you get. Its output gives you the last white line of the image, so if that value equals height-1 you're done, and if it doesn't, you immediately have the value needed for the further processing. The first function does exactly the same as the second, after all; it checks if all pixels on a line are white. The only difference is that it only processes the last line.
So, onto the second method. This is where things go wrong. You set the amount of white pixels to the "current pixel index plus one", rather than incrementing it to check if all pixels matched, meaning the method goes over all pixels but only really checks if the last pixel on the row was white. Since your image indeed has a white pixel at the end of the last row, it aborts after one row.
Also, whenever you find a pixel that does not match, you should just abort the scan of that line immediately, like your first method does; there's no point in continuing on that line after that.
So, let's fix that second function, and rewrite it to work with that set of "byte array", "stride", "width" and "height", rather than an image. I added the "white" colour as parameter too, to make it more reusable, so it's changed from GetLastWhiteLine to GetLastClearLine.
One general usability note: if you are iterating over the height and width, do actually call your loop variables y and x; it makes things a lot more clear in your code.
I explained the used systems in the code comments.
private static Int32 GetLastClearLine(Byte[] sourceData, Int32 stride, Int32 width, Int32 height, Color checkColor)
{
    // Get color as UInt32 in advance.
    UInt32 checkColVal = (UInt32)checkColor.ToArgb();
    // Use MemoryStream with BinaryReader since it can read UInt32 from a byte array directly.
    using (MemoryStream ms = new MemoryStream(sourceData))
    using (BinaryReader sr = new BinaryReader(ms))
    {
        for (Int32 y = height - 1; y >= 0; --y)
        {
            // Set position in the memory stream to the start of the current row.
            ms.Position = stride * y;
            Int32 matchingPixels = 0;
            // Read UInt32 pixels for the whole row length.
            for (Int32 x = 0; x < width; ++x)
            {
                // Read a UInt32 for one whole 32bpp ARGB pixel.
                UInt32 colorVal = sr.ReadUInt32();
                // Compare with check value.
                if (colorVal == checkColVal)
                    matchingPixels++;
                else
                    break;
            }
            // Test if full line matched the given color.
            if (matchingPixels == width)
                return y;
        }
    }
    return -1;
}
This can be simplified, though; the loop variable x already contains the value you need, so if you simply declare it before the loop, you can check after the loop what value it had when the loop stopped, and there is no need to increment a second variable. And, honestly, the value read from the stream can be compared directly, without the colorVal variable. Making the contents of the y-loop:
{
    ms.Position = stride * y;
    Int32 x;
    for (x = 0; x < width; ++x)
        if (sr.ReadUInt32() != checkColVal)
            break;
    if (x == width)
        return y;
}
For your example image, this gets me value 178, which is correct when I check in Gimp.
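For completeness, calling the fixed method with the data extracted by the LockBits snippet above would look something like this (hypothetical usage, not part of the original code):
// Hypothetical usage, combining the LockBits extraction with the fixed method:
Int32 lastClear = GetLastClearLine(data, stride, width, height, Color.White);
if (lastClear == height - 1)
{
    // The bottom row is fully white: the image is not clipped.
}
else if (lastClear >= 0)
{
    // Split the image at row lastClear for the further processing.
}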

C# Convert or compare int to (unsafe) byte*

Original Scenario
I massively misunderstood my own code, and this scenario is invalid.
This is way out of my normal wheelhouse, so I'm going to explain as best I can.
I have a user-set color code. Example:
int R = 255;
int G = 255;
int B = 255;
And I have a lot of large images where I need to check the color of
pixels at certain sets of coordinates against the user-set color. I
can successfully get the byte* of any pixel in an image, and get the
values I expect.
I do this using BitmapData from Bitmap.LockBits(...). My understanding is that locking will be important for performance reasons. There will be a great many instances of this being used across a very large collection of images, so performance is a major consideration.
For those same performance reasons I'm trying to avoid converting the
retrieved pixel-colors represented by unsafe bytes to integers - I'd
much rather convert my int to a byte one time and use that for the
likely millions of pixels this will be run against each time it is
invoked.
However... I cannot figure out how to get any of my user-set integers
into an unsafe byte (byte*) and compare it to the unsafe byte
retrieved from a pixel.
The unsafe byte (byte*) was the 8-bit pointer to the pixel's data (at least, that's how I understand it), but I am getting the individual colors as regular old bytes.
byte* pixel = //value here pulled from image;
pixel[2] //red value byte
pixel[1] //green value byte
pixel[0] //blue value byte
So I don't need to convert my ints to unsafe bytes ...pointers?..., just use a simple Convert.ToByte(myInt).
The real question
But since I think this is still possibly a valid question outside my scenario, I'm going to leave this part up for someone to answer and hopefully help someone in the future:
How do you take any given int in C# and compare it to an "unsafe byte" pointer 'byte*'?
You would just want to dereference the byte pointer and compare it to the integer.
unsafe void Main()
{
    byte x = 15;
    int y = 15;
    Console.WriteLine(AreEqual(&x, y)); // True
}

public unsafe bool AreEqual(byte* bytePtr, int val)
{
    var byteVal = *bytePtr;
    return byteVal == val;
}
Let us open a bitmap and process each pixel:
//Note this has several overloads, including a path to an image
//Use the proper one for yourself
Bitmap b = new Bitmap(_image);
//Lock (and Load baby)
BitmapData bData = b.LockBits(new Rectangle(0, 0, _image.Width, _image.Height), ImageLockMode.ReadWrite, b.PixelFormat);
//Bits per pixel, obviously
int bitsPerPixel = Image.GetPixelFormatSize(b.PixelFormat);
//Gets the address of the first pixel data in the bitmap.
//This can also be thought of as the first scan line in the bitmap.
byte* scan0 = (byte*)bData.Scan0.ToPointer();
for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;
        //data is a pointer to the first byte of the pixel's color data
        //Do your magic here, compare your RGB values here
        byte B = *data;       //Dereferencing pointer here (byte order is B, G, R)
        byte G = *(data + 1);
        byte R = *(data + 2);
    }
}
//Unlocking here is important, or you get a memory leak
b.UnlockBits(bData);
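To tie this back to the original scenario: convert the user-set ints to bytes once, outside the loop, and compare them against the dereferenced channel bytes. A sketch of what could go in the "magic" placeholder above (my assumption, not tested):
// Precompute the comparison bytes once, outside the loops:
byte userR = Convert.ToByte(R);
byte userG = Convert.ToByte(G);
byte userB = Convert.ToByte(B);
// Then, inside the inner loop where "Do your magic here" appears
// (remember the byte order is B, G, R for common 24/32bpp formats):
bool matches = data[2] == userR && data[1] == userG && data[0] == userB;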

I have a PGM format image with 8bpp 1024 X 1024 size and need to calculate GLCM (Grey Level Co-occurrence Matrix) from it

The image is large, and I used the GetPixel and SetPixel methods to access pixels but found that it was way too slow, so I went to implement LockBits and UnlockBits but could not get my head around it. I also went through Bob Powell's tutorials but could not understand them. So, I am asking for some help here to get the GLCM from the image.
GLCM is generally a very computationally intensive algorithm. It iterates through each pixel, for each neighbor. Even C++ image processing libraries have this issue.
GLCM does however lend itself quite nicely to parallel (multi-threaded) implementations as the calculations for each reference pixel are independent.
With regard to using LockBits and UnlockBits, see the example code below. One thing to keep in mind is that the image can be padded for optimization reasons. Also, if your image has a different bit depth or multiple channels, you will need to adjust the code accordingly.
BitmapData data = image.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed);
byte* dataPtr = (byte*)data.Scan0;
int rowPadding = data.Stride - image.Width;
// iterate over height (rows)
for (int i = 0; i < height; i++)
{
    // iterate over width (columns)
    for (int j = 0; j < width; j++)
    {
        // pixel value
        int value = dataPtr[0];
        // advance to next pixel
        dataPtr++;
    }
    // at the end of each row, skip the extra padding bytes
    if (rowPadding > 0)
    {
        dataPtr += rowPadding;
    }
}
image.UnlockBits(data);
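From there the GLCM accumulation can sit inside the same scan. Here is a minimal sketch (my addition, not part of the original answer; single horizontal offset of one pixel, 256 gray levels):
// Count co-occurrences of gray levels with the neighbor one pixel to the right.
byte* basePtr = (byte*)data.Scan0;
int[,] glcm = new int[256, 256];
for (int y = 0; y < height; y++)
{
    byte* row = basePtr + y * data.Stride;
    for (int x = 0; x < width - 1; x++)
    {
        glcm[row[x], row[x + 1]]++; // [reference level, neighbor level]
    }
}
// For a symmetric GLCM also increment glcm[row[x + 1], row[x]];
// divide each cell by the total number of pairs to normalize.
Because each row is independent, this loop is also a natural candidate for Parallel.For over the rows (one matrix per thread, merged at the end), which is where the multi-threading mentioned above comes in.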

OpenGl 16 bit display via Tao/C#

I have some scientific image data that's coming out of a detector device in a 16 bit range which then gets rendered in an image. In order to display this data, I'm using OpenGL, because it should support ushorts as part of the library. I've managed to get this data into textures rendering on an OpenGL 1.4 platform, a limitation that is a requirement of this project.
Unfortunately, the resulting textures look like they're being reduced to 8 bits, rather than 16 bits. I test this by generating a gradient image and displaying it; while the image itself has each pixel different from its neighbors, the displayed texture is showing stripe patterns where all pixels next to one another are showing up as equal values.
I've tried doing this with GlDrawPixels, and the resulting image actually looks like it's really rendering all 16 bits.
How can I force these textures to display properly?
To give more background, the LUT (LookUp Table) is being determined by the following code:
String str = "!!ARBfp1.0\n" +
    "ATTRIB tex = fragment.texcoord[0];\n" +
    "PARAM cbias = program.local[0];\n" +
    "PARAM cscale = program.local[1];\n" +
    "OUTPUT cout = result.color;\n" +
    "TEMP tmp;\n" +
    "TXP tmp, tex, texture[0], 2D;\n" +
    "SUB tmp, tmp, cbias;\n" +
    "MUL cout, tmp, cscale;\n" +
    "END";
Gl.glEnable(Gl.GL_FRAGMENT_PROGRAM_ARB);
Gl.glGenProgramsARB(1, out mFragProg);
Gl.glBindProgramARB(Gl.GL_FRAGMENT_PROGRAM_ARB, mFragProg);
System.Text.Encoding ascii = System.Text.Encoding.ASCII;
Byte[] encodedBytes = ascii.GetBytes(str);
Gl.glProgramStringARB(Gl.GL_FRAGMENT_PROGRAM_ARB, Gl.GL_PROGRAM_FORMAT_ASCII_ARB,
    encodedBytes.Length, encodedBytes);
GetGLError("Shader");
Gl.glDisable(Gl.GL_FRAGMENT_PROGRAM_ARB);
Where cbias and cscale are between 0 and 1.
Thanks!
EDIT: To answer some of the other questions, the line with glTexImage:
Gl.glBindTexture(Gl.GL_TEXTURE_2D, inTexData.TexName);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_LUMINANCE, inTexData.TexWidth, inTexData.TexHeight,
0, Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, theTexBuffer);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR); // Linear Filtering
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR); // Linear Filtering
theTexBuffer = null;
GC.Collect();
GC.WaitForPendingFinalizers();
The pixel format is set when the context is initialized:
Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();// The pixel format descriptor
pfd.nSize = (short)Marshal.SizeOf(pfd); // Size of the pixel format descriptor
pfd.nVersion = 1; // Version number (always 1)
pfd.dwFlags = Gdi.PFD_DRAW_TO_WINDOW | // Format must support windowed mode
Gdi.PFD_SUPPORT_OPENGL | // Format must support OpenGL
Gdi.PFD_DOUBLEBUFFER; // Must support double buffering
pfd.iPixelType = (byte)Gdi.PFD_TYPE_RGBA; // Request an RGBA format
pfd.cColorBits = (byte)colorBits; // Select our color depth
pfd.cRedBits = 0; // Individual color bits ignored
pfd.cRedShift = 0;
pfd.cGreenBits = 0;
pfd.cGreenShift = 0;
pfd.cBlueBits = 0;
pfd.cBlueShift = 0;
pfd.cAlphaBits = 0; // No alpha buffer
pfd.cAlphaShift = 0; // Alpha shift bit ignored
pfd.cAccumBits = 0; // Accumulation buffer
pfd.cAccumRedBits = 0; // Individual accumulation bits ignored
pfd.cAccumGreenBits = 0;
pfd.cAccumBlueBits = 0;
pfd.cAccumAlphaBits = 0;
pfd.cDepthBits = 16; // Z-buffer (depth buffer)
pfd.cStencilBits = 0; // No stencil buffer
pfd.cAuxBuffers = 0; // No auxiliary buffer
pfd.iLayerType = (byte)Gdi.PFD_MAIN_PLANE; // Main drawing layer
pfd.bReserved = 0; // Reserved
pfd.dwLayerMask = 0; // Layer masks ignored
pfd.dwVisibleMask = 0;
pfd.dwDamageMask = 0;
pixelFormat = Gdi.ChoosePixelFormat(mDC, ref pfd); // Attempt to find an appropriate pixel format
if (!Gdi.SetPixelFormat(mDC, pixelFormat, ref pfd))
{ // Are we not able to set the pixel format?
BigMessageBox.ShowMessage("Can not set the chosen PixelFormat. Chosen PixelFormat was " + pixelFormat + ".");
Environment.Exit(-1);
}
If you create a texture, the 'type' parameter of glTexImage is only the data type your texture data is in before it is converted by OpenGL into its own format. To create a texture with 16 bits per channel you need something like GL_LUMINANCE16 as the internal format (the 'format' parameter remains GL_LUMINANCE). If there's no GL_LUMINANCE16 for OpenGL 1.4, check whether GL_EXT_texture is available and try it with GL_LUMINANCE16_EXT.
One of these should work. However, if it doesn't, you can encode your 16-bit values as two 8-bit pairs with GL_LUMINANCE_ALPHA and decode them again inside a shader.
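Applied to the glTexImage2D call from the question, that would be (assuming the Tao bindings expose GL_LUMINANCE16; untested):
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_LUMINANCE16, inTexData.TexWidth, inTexData.TexHeight,
    0, Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, theTexBuffer);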
I've never worked in depths higher (deeper) than 8bit per channel, but here's what I'd try first:
Turn off filtering on the texture and see how it affects the output.
Set texturing glHints to best quality.
You could consider using a single channel floating point texture through one of the GL_ARB_texture_float, GL_ATI_texture_float or GL_NV_float_buffer extensions if the hardware supports it, I can't recall if GL 1.4 has floating point textures or not though.
