using (Bitmap bmp = (Bitmap)Bitmap.FromFile(@"C:\Users\112\AppData\Local\Temp\113837.dcm"))
{
// obtain the XResolution and YResolution TIFFTAG values
PropertyItem piXRes = bmp.GetPropertyItem(282);
PropertyItem piYRes = bmp.GetPropertyItem(283);
// values are stored as a rational number - numerator/denominator pair
int numerator = BitConverter.ToInt32(piXRes.Value, 0);
int denominator = BitConverter.ToInt32(piXRes.Value, 4);
float xRes = (float)numerator / denominator;
numerator = BitConverter.ToInt32(piYRes.Value, 0);
denominator = BitConverter.ToInt32(piYRes.Value, 4);
float yRes = (float)numerator / denominator;
// now set the values
byte[] numeratorBytes = BitConverter.GetBytes(600); // specify resolution in numerator
byte[] denominatorBytes = BitConverter.GetBytes(1);
Array.Copy(numeratorBytes, 0, piXRes.Value, 0, 4); // set the XResolution value
Array.Copy(denominatorBytes, 0, piXRes.Value, 4, 4);
Array.Copy(numeratorBytes, 0, piYRes.Value, 0, 4); // set the YResolution value
Array.Copy(denominatorBytes, 0, piYRes.Value, 4, 4);
bmp.SetPropertyItem(piXRes); // finally set the image property resolution
bmp.SetPropertyItem(piYRes);
bmp.SetResolution(600, 600); // now set the bitmap resolution
bmp.Save(@"C:\output.tif"); // save the image
}
I'm getting an "Out of memory" error on the line using (Bitmap bmp = .... How can I solve that?
The "out of memory" error is misleading. It really means that the image format cannot be determined by .NET.
Sorry, but .NET does not support DICOM images. See http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.aspx for information on supported image formats.
With this line...
(Bitmap)Bitmap.FromFile(@"C:\Users\112\AppData\Local\Temp\113837.dcm")
...you are reading the whole raw data contained in a DICOM file. That includes the DICOM Data Elements (fields containing information).
Extracting the image data is much more complicated than this. You should begin by looking for some information about the DICOM format.
Other good starting points can be found on Dabsoft and Medical Connections and, of course, on David Clunie's website.
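As a quick sanity check before handing a file to a proper DICOM library, you can verify the Part 10 signature yourself: a standard DICOM file starts with a 128-byte preamble followed by the ASCII bytes "DICM". A minimal sketch (the helper name is mine):

```csharp
using System;
using System.IO;
using System.Text;

static class DicomCheck
{
    // Returns true if the file carries the standard DICOM Part 10 signature:
    // a 128-byte preamble followed by the ASCII characters "DICM".
    public static bool LooksLikeDicom(string path)
    {
        using (var fs = File.OpenRead(path))
        {
            if (fs.Length < 132)
                return false;
            fs.Seek(128, SeekOrigin.Begin); // skip the preamble
            var magic = new byte[4];
            fs.Read(magic, 0, 4);
            return Encoding.ASCII.GetString(magic) == "DICM";
        }
    }
}
```

Note that some older, pre-Part-10 files omit the preamble entirely, so a negative result is not conclusive; this only tells you the file is definitely not something Bitmap.FromFile will understand.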
Related
I'm using a framework for some camera hardware called IDS Peak, and we are receiving 16-bit grayscale images back from the framework. The framework itself can write the files to disk as PNGs, and that's all well and good, but how do I display them in a PictureBox in WinForms?
Windows Bitmap does not support 16-bit grayscale, so the following code throws a 'Parameter is not valid.' System.ArgumentException:
var image = new Bitmap(width, height, stride, System.Drawing.Imaging.PixelFormat.Format16bppGrayScale, iplImg.Data());
iplImg.Data() here is an IntPtr to the bespoke Image format of the framework.
Considering Windows Bitmap does not support the format, and I can write the files using the framework to PNGs, how can I do one of the following:
Convert to a different object type other than Bitmap to display directly in Winforms without reading from the files.
Load the 16-bit grayscale PNG files into the PictureBox control (or any other control type, it doesn't have to be a PictureBox).
(1) is preferable as it doesn't require file I/O, but if (2) is the only possibility, that's completely fine, as I need to both save and display them anyway; (1) only requires a write operation and not a secondary read.
The files, before being written to disk, are actually monochrome with 12 bits per pixel, packed.
While it is possible to display 16-bit images, for example by hosting a WPF control in WinForms, you probably want to apply a windowing function to reduce the image to 8 bits before display.
So let's use unsafe code and pointers for speed:
var bitmapData = myBitmap.LockBits(
    new Rectangle(0, 0, myBitmap.Width, myBitmap.Height),
    ImageLockMode.ReadWrite,
    myBitmap.PixelFormat);
try
{
    var ptr = (byte*)bitmapData.Scan0; // requires an unsafe context
    var stride = bitmapData.Stride;
    var width = bitmapData.Width;
    var height = bitmapData.Height;
    // Conversion code
}
finally
{
    myBitmap.UnlockBits(bitmapData);
}
Or, using the WPF imaging classes, which generally have better 16-bit support:
var myBitmap = new WriteableBitmap(new BitmapImage(new Uri("myBitmap.jpg", UriKind.Relative)));
myBitmap.Lock();
try
{
    var ptr = (byte*)myBitmap.BackBuffer;
    ...
}
finally
{
    myBitmap.Unlock();
}
To loop over all the pixels you would use a double loop:
for (int y = 0; y < height; y++)
{
    var row = (ushort*)(ptr + y * stride);
    for (int x = 0; x < width; x++)
    {
        var pixelValue = row[x];
        // Scaling code
    }
}
And to scale the value you could use a linear mapping from the min and max values to the 0-255 range of a byte:
var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);
var scaled = (int)((pixelValue + 0.5f - minUshortValue) * slope);
scaled = scaled > byte.MaxValue ? byte.MaxValue : scaled;
scaled = scaled < 0 ? 0 : scaled;
var byteValue = (byte)scaled;
The maxUshortValue / minUshortValue would either be computed from the min/max values of the image or configured by the user. You would also need to create a target image to write the result into: either an 8-bit grayscale bitmap, or a color image where the same value is written to each color channel.
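Putting the scaling step together, here is a sketch of the whole windowing pass as a pure array transform (the helper name WindowTo8Bit is mine); the resulting bytes can then be copied into an 8-bit indexed bitmap, or expanded to 24-bpp RGB, for display:

```csharp
using System;

static class Windowing
{
    // Linearly maps 16-bit pixel values in [min, max] to the 0-255 range,
    // clamping anything outside the window.
    public static byte[] WindowTo8Bit(ushort[] pixels, ushort min, ushort max)
    {
        var slope = (byte.MaxValue + 1f) / (max - min);
        var result = new byte[pixels.Length];
        for (int i = 0; i < pixels.Length; i++)
        {
            var scaled = (int)((pixels[i] + 0.5f - min) * slope);
            if (scaled > byte.MaxValue) scaled = byte.MaxValue;
            if (scaled < 0) scaled = 0;
            result[i] = (byte)scaled;
        }
        return result;
    }
}
```

This operates on a flat ushort[] for clarity; in the LockBits version above you would read each row through the stride-adjusted pointer instead.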
I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSB that I want to discard, keeping only the odd-numbered bytes (to put it another way). The best solution I can conceptualize is to iterate through the memory, but that's the rub: I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
byte[] imageData = new byte[imageSize];
do
{
// Should I bitwise shift?
imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
} while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
int imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.
Here is the algorithm that my coworkers helped formulate. It creates two new (unmanaged) pointers: one 8 bits wide and the other 16 bits wide.
By stepping through one word at a time and shifting off the last 4 bits of the source, we get a new 8-bit image containing only the MSBs. Each buffer has the same number of words, but since the words are different sizes, the pointers advance at different rates as we iterate over them.
unsafe
{
    byte* p_bytebuffer = (byte*)pImage;
    short* p_shortbuffer = (short*)pImage;
    for (int i = 0; i < imageWidth * imageHeight; i++)
    {
        *p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4);
    }
}
In terms of performance, this appears to be very fast with no perceivable difference in framerate.
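If you would rather avoid unsafe code, the same MSB extraction can be sketched in managed code: copy the buffer out with Marshal.Copy, then combine each little-endian byte pair and shift off the low 4 bits (the helper names are mine):

```csharp
using System;
using System.Runtime.InteropServices;

static class Mono12To8
{
    // Converts a 12-bit-in-16-bit little-endian buffer to 8-bit pixels
    // by dropping the 4 least significant bits of each sample.
    public static byte[] Convert(IntPtr pImage, int pixelCount)
    {
        var raw = new byte[pixelCount * 2];
        Marshal.Copy(pImage, raw, 0, raw.Length);
        return ConvertBytes(raw);
    }

    public static byte[] ConvertBytes(byte[] raw)
    {
        var result = new byte[raw.Length / 2];
        for (int i = 0; i < result.Length; i++)
        {
            int sample = raw[2 * i] | (raw[2 * i + 1] << 8); // little-endian 16-bit word
            result[i] = (byte)(sample >> 4);                 // keep the top 8 of 12 bits
        }
        return result;
    }
}
```

The managed copy costs an extra allocation per frame, so the unsafe in-place version above remains the better choice for high frame rates.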
Special thanks to @Herohtar for spending a substantial amount of time in chat attempting to help me solve this.
I am trying to convert an object to an image.
I grab the object from a usb3 camera using the following:
object RawData = axActiveUSB1.GetImageWindow(0,0,608,608);
This returns a Variant (SAFEARRAY)
After reviewing further, RawData = {byte[1824, 608]}. The image is 608 x 608, so I'm guessing 1824 is three times the width due to the RGB components of the image.
The camera's pixel format is BayerRGB8, and according to the API I am using, the data type is represented in bytes:
Camera Pixel Format | Output Format | Data type | Dimensions
Bayer8 | 24-bit RGB | Byte | 0 to SizeX * 3 - 1, 0 to Lines - 1
I can convert it to a byte array using this code, found at Convert any object to a byte[]:
private byte[] ObjectToByteArray(Object obj)
{
    if (obj == null)
        return null;
    BinaryFormatter bf = new BinaryFormatter();
    MemoryStream ms = new MemoryStream();
    bf.Serialize(ms, obj);
    return ms.ToArray();
}
From here, I then do (all of this code was also found on, or derived from, Stack Overflow):
// convert object to bytes
byte[] imgasbytes = ObjectToByteArray(RawData);
// create a bitmap and put data in it to go into the picturebox
var bitmap = new Bitmap(608, 608, PixelFormat.Format24bppRgb);
var bitmap_data = bitmap.LockBits(new Rectangle(0, 0,bitmap.Width, bitmap.Height), ImageLockMode.WriteOnly, bitmap.PixelFormat);
Marshal.Copy(imgasbytes, 0, bitmap_data.Scan0, imgasbytes.Length );
bitmap.UnlockBits(bitmap_data);
var result = bitmap as Image; // this line not even really necessary
PictureBox.Image = result;
The code works, but I should see this:
But I see this:
I've done this in Python and had similar issues, which I was able to resolve, but I'm not as strong in C# and am unable to progress from here. I need to rotate my image 90 degrees, but I also think that my issue relates to incorrectly converting the array. I think that I need to convert my object (SAFEARRAY) to a multidimensional array so that the RGB components sit on top of one another. I have looked at many examples of how to do this, but I do not understand how to go about it.
Any feedback is greatly appreciated on what I am doing wrong.
EDIT
I'm looking at this -> Convert RGB8 byte[] to Bitmap
which may be related to my issue.
It looks like the issue was exactly as I described.
In the end, the main issue was that the array needed to be rotated.
I found a solution here ->
Rotate M*N Matrix (90 degrees)
When I rotated the image, it resolved the picture issue I was seeing above. While my image is inverted now, I understand the issue and, as a result, am no longer seeing the problem.
Here is the code in case anyone runs into the same issue
byte[,] newImageAsBytes = new byte[ImageAsBytes.GetLength(1), ImageAsBytes.GetLength(0)];
int newRow = 0;
for (int oldColumn = ImageAsBytes.GetLength(1) - 1; oldColumn >= 0; oldColumn--)
{
    int newColumn = 0;
    for (int oldRow = 0; oldRow < ImageAsBytes.GetLength(0); oldRow++)
    {
        newImageAsBytes[newRow, newColumn] = ImageAsBytes[oldRow, oldColumn];
        newColumn++;
    }
    newRow++;
}
byte[] b = ObjectToByteArray(newImageAsBytes);
var bitmap = new Bitmap(608, 608, PixelFormat.Format24bppRgb); // 608 is my image size and I am working with a camera that uses BayerRGB8
var bitmap_data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.WriteOnly, bitmap.PixelFormat);
Marshal.Copy(b, 0, bitmap_data.Scan0, b.Length);
bitmap.UnlockBits(bitmap_data);
var result = bitmap as Image; // this line not even really necessary
PictureBox.Image = result;
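One aside on the ObjectToByteArray step: BinaryFormatter output includes serialization framing in addition to the raw array bytes. When the goal is just to flatten a byte[,] into a byte[] for Marshal.Copy, Buffer.BlockCopy is a more direct (and faster) sketch:

```csharp
using System;

static class ArrayFlatten
{
    // Flattens a 2-D byte array into a 1-D array in row-major order,
    // without any serialization overhead.
    public static byte[] Flatten(byte[,] source)
    {
        var flat = new byte[source.Length];
        Buffer.BlockCopy(source, 0, flat, 0, source.Length);
        return flat;
    }
}
```

The flat array length then matches width * height * 3 exactly, which is what the Marshal.Copy into the locked bitmap expects.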
UPDATED:
I've been looking around and trying to figure out what alternative there is on Windows Phone 7.1 for BitmapData. I have commented out the code in question. I am aware of LockBits and that it's fast in comparison to GetPixel/SetPixel and so on.
As per my understanding, BitmapData locks the image in memory, ready for manipulation.
BmpData.Scan0 acts as a pointer to that memory.
Could I do this without BitmapData, say by using GetPixel to read the image into memory and SetPixel to manipulate some of the image data?
P.S.: In regards to processing speed, I am not looking to change a lot of pixels.
public int Edit(Bitmap BmpIn, byte[] BIn, byte BitsPerByte)
{
    int LengthBytes = 1 + 31 / BitsPerByte;
    int TextLength = 1 + (8 * BIn.Length - 1) / BitsPerByte;
    //BitmapData BmpData = BmpIn.LockBits(new Rectangle(0, 0, BmpIn.Width, BmpIn.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    RGB = new byte[2 + LengthBytes + TextLength];
    //Marshal.Copy(BmpData.Scan0, RGB, 0, RGB.Length);
    InsertBitsPerByte(BitsPerByte);
    SetMasks(BitsPerByte);
    InsertLength(LengthBytes, TextLength, BitsPerByte);
    InsertBytes(BIn, BitsPerByte);
    //Marshal.Copy(RGB, 0, BmpData.Scan0, RGB.Length);
    //BmpIn.UnlockBits(BmpData);
    return TextLength;
}
Any help appreciated.
Thanks
Have a look at WriteableBitmapEx. This will allow you to do pixel manipulation within an image.
I have some scientific image data that's coming out of a detector device in a 16 bit range which then gets rendered in an image. In order to display this data, I'm using OpenGL, because it should support ushorts as part of the library. I've managed to get this data into textures rendering on an OpenGL 1.4 platform, a limitation that is a requirement of this project.
Unfortunately, the resulting textures look like they're being reduced to 8 bits rather than 16. I test this by generating and displaying a gradient image; while the image itself has every pixel different from its neighbors, the displayed texture shows stripe patterns in which adjacent pixels show up as equal values.
I've tried doing this with glDrawPixels, and the resulting image actually looks like it's really rendering all 16 bits.
How can I force these textures to display properly?
To give more background, the LUT (LookUp Table) is being determined by the following code:
String str = "!!ARBfp1.0\n" +
"ATTRIB tex = fragment.texcoord[0];\n" +
"PARAM cbias = program.local[0];\n" +
"PARAM cscale = program.local[1];\n" +
"OUTPUT cout = result.color;\n" +
"TEMP tmp;\n" +
"TXP tmp, tex, texture[0], 2D;\n" +
"SUB tmp, tmp, cbias;\n" +
"MUL cout, tmp, cscale;\n" +
"END";
Gl.glEnable(Gl.GL_FRAGMENT_PROGRAM_ARB);
Gl.glGenProgramsARB(1, out mFragProg);
Gl.glBindProgramARB(Gl.GL_FRAGMENT_PROGRAM_ARB, mFragProg);
System.Text.Encoding ascii = System.Text.Encoding.ASCII;
Byte[] encodedBytes = ascii.GetBytes(str);
Gl.glProgramStringARB(Gl.GL_FRAGMENT_PROGRAM_ARB, Gl.GL_PROGRAM_FORMAT_ASCII_ARB,
    encodedBytes.Length, encodedBytes);
GetGLError("Shader");
Gl.glDisable(Gl.GL_FRAGMENT_PROGRAM_ARB);
Where cbias and cscale are between 0 and 1.
Thanks!
EDIT: To answer some of the other questions, the line with glTexImage:
Gl.glBindTexture(Gl.GL_TEXTURE_2D, inTexData.TexName);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_LUMINANCE, inTexData.TexWidth, inTexData.TexHeight,
0, Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, theTexBuffer);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR); // Linear Filtering
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR); // Linear Filtering
theTexBuffer = null;
GC.Collect();
GC.WaitForPendingFinalizers();
The pixel format is set when the context is initialized:
Gdi.PIXELFORMATDESCRIPTOR pfd = new Gdi.PIXELFORMATDESCRIPTOR();// The pixel format descriptor
pfd.nSize = (short)Marshal.SizeOf(pfd); // Size of the pixel format descriptor
pfd.nVersion = 1; // Version number (always 1)
pfd.dwFlags = Gdi.PFD_DRAW_TO_WINDOW | // Format must support windowed mode
Gdi.PFD_SUPPORT_OPENGL | // Format must support OpenGL
Gdi.PFD_DOUBLEBUFFER; // Must support double buffering
pfd.iPixelType = (byte)Gdi.PFD_TYPE_RGBA; // Request an RGBA format
pfd.cColorBits = (byte)colorBits; // Select our color depth
pfd.cRedBits = 0; // Individual color bits ignored
pfd.cRedShift = 0;
pfd.cGreenBits = 0;
pfd.cGreenShift = 0;
pfd.cBlueBits = 0;
pfd.cBlueShift = 0;
pfd.cAlphaBits = 0; // No alpha buffer
pfd.cAlphaShift = 0; // Alpha shift bit ignored
pfd.cAccumBits = 0; // Accumulation buffer
pfd.cAccumRedBits = 0; // Individual accumulation bits ignored
pfd.cAccumGreenBits = 0;
pfd.cAccumBlueBits = 0;
pfd.cAccumAlphaBits = 0;
pfd.cDepthBits = 16; // Z-buffer (depth buffer)
pfd.cStencilBits = 0; // No stencil buffer
pfd.cAuxBuffers = 0; // No auxiliary buffer
pfd.iLayerType = (byte)Gdi.PFD_MAIN_PLANE; // Main drawing layer
pfd.bReserved = 0; // Reserved
pfd.dwLayerMask = 0; // Layer masks ignored
pfd.dwVisibleMask = 0;
pfd.dwDamageMask = 0;
pixelFormat = Gdi.ChoosePixelFormat(mDC, ref pfd); // Attempt to find an appropriate pixel format
if (!Gdi.SetPixelFormat(mDC, pixelFormat, ref pfd))
{ // Are we not able to set the pixel format?
BigMessageBox.ShowMessage("Can not set the chosen PixelFormat. Chosen PixelFormat was " + pixelFormat + ".");
Environment.Exit(-1);
}
When you create a texture, the 'type' parameter of glTexImage is only the data type your texture data is in before OpenGL converts it into its own format. To create a texture with 16 bits per channel you need something like GL_LUMINANCE16 as the internal format (the pixel format remains GL_LUMINANCE). If there's no GL_LUMINANCE16 in OpenGL 1.4, check whether GL_EXT_texture is available and try it with GL_LUMINANCE16_EXT.
One of these should work. However, if it doesn't, you can encode your 16-bit values as two 8-bit pairs with GL_LUMINANCE_ALPHA and decode them again inside a shader.
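The CPU side of that fallback, splitting each 16-bit sample into a (high, low) byte pair for upload as a two-channel GL_LUMINANCE_ALPHA texture, can be sketched like this (the helper names are mine):

```csharp
using System;

static class LumAlphaPacking
{
    // Splits each 16-bit sample into a (high, low) byte pair: the high byte
    // goes into the luminance channel and the low byte into the alpha channel.
    // A fragment program can then reconstruct value = high * 256 + low.
    public static byte[] Pack(ushort[] pixels)
    {
        var packed = new byte[pixels.Length * 2];
        for (int i = 0; i < pixels.Length; i++)
        {
            packed[2 * i] = (byte)(pixels[i] >> 8);       // high byte -> luminance
            packed[2 * i + 1] = (byte)(pixels[i] & 0xFF); // low byte  -> alpha
        }
        return packed;
    }

    public static ushort Unpack(byte high, byte low)
    {
        return (ushort)((high << 8) | low);
    }
}
```

Note that with this encoding you must disable texture filtering (GL_NEAREST), since interpolating the two channels independently would corrupt the reconstructed values.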
I've never worked at depths higher (deeper) than 8 bits per channel, but here's what I'd try first:
Turn off filtering on the texture and see how that affects the output.
Set the texturing glHints to best quality.
You could consider using a single-channel floating-point texture through one of the GL_ARB_texture_float, GL_ATI_texture_float or GL_NV_float_buffer extensions if the hardware supports it; I can't recall whether GL 1.4 has floating-point textures or not, though.