I am trying to threshold and apply a Gaussian blur to a raw byte image in EmguCV. I have a raw byte array read from a DICOM file; I read those bytes and store them in an empty image, but as soon as I apply a threshold or Gaussian blur I get this exception:
OpenCv: src.type() == CV_8UC1.
Thank you.
var pixelData_ = dataset.GetValues<byte>(DicomTag.PixelData);
var splitPixelSpacing = pixelSpacing.Split('\\');
var IPP = dataset.GetString(DicomTag.ImagePositionPatient);
var iimg = new Image<Gray, ushort>(cols, rows);
var parse = float.Parse(splitPixelSpacing[0], CultureInfo.InvariantCulture.NumberFormat);
var newPixelSpacing = parse * (cols / 512);
iimg.Bytes = pixelData_;
iimg = iimg.Resize(512, 512, Inter.Area);
var imgBin = new Image<Gray, byte>(512, 512, new Gray(10));
var blr = iimg.ThresholdAdaptive(new Gray(100),
    AdaptiveThresholdType.GaussianC,
    ThresholdType.ToZero,
    5,
    new Gray(2));
// I set a breakpoint here; at this point I get the exception: src.type() == CV_8UC1.
Unfortunately the EmguCV documentation says nothing about this error and does not list any prerequisites for the input image. But EmguCV is just a wrapper around OpenCV, and if we check the OpenCV documentation for adaptiveThreshold we find:
src Source 8-bit single-channel image.
I.e. ThresholdAdaptive is limited to 8-bit grayscale and will not work for 16-bit grayscale images.
If you just want a simple threshold you might want to use ThresholdBinary, or one of the other simple thresholding methods.
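For example, a minimal sketch (untested; it assumes EmguCV's ConvertScale overload and that the pixel data fills the full 16-bit range, so adjust the scale factor for 12-bit DICOM data) that windows the image down to 8 bits before calling ThresholdAdaptive:
// Sketch, not the original code: scale the 16-bit pixels into the 8-bit range first.
var iimg8 = iimg.ConvertScale<byte>(255.0 / ushort.MaxValue, 0);
var blr = iimg8.ThresholdAdaptive(new Gray(100),
    AdaptiveThresholdType.GaussianC,
    ThresholdType.ToZero,
    5,
    new Gray(2));
// Or, as suggested above, use a simple global threshold on the 16-bit image:
var bin = iimg.ThresholdBinary(new Gray(100), new Gray(ushort.MaxValue));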
Related
I'm using a framework for some camera hardware called IDS Peak, and we receive 16-bit grayscale images back from the framework. The framework itself can write the files to disk as PNGs, which is all well and good, but how do I display them in a PictureBox in WinForms?
Windows Bitmap does not support 16-bit grayscale, so the following code throws a 'Parameter is not valid.' System.ArgumentException:
var image = new Bitmap(width, height, stride, System.Drawing.Imaging.PixelFormat.Format16bppGrayScale, iplImg.Data());
iplImg.Data() here is an IntPtr to the bespoke Image format of the framework.
Considering that Windows Bitmap does not support the format, and that I can write the files to PNG using the framework, how can I do one of the following:
(1) Convert to a different object type other than Bitmap and display it directly in WinForms, without reading from the files.
(2) Load the 16-bit grayscale PNG files into the PictureBox control (or any other control type; it doesn't have to be a PictureBox).
(1) is preferable as it doesn't require file I/O, but if (2) is the only possibility that's completely fine, since I need to both save and display them anyway; (1) only requires a write operation and not a secondary read.
The files before writing to disc are actually monochrome with 12 bits per pixel, packed.
While it is possible to display 16-bit images, for example by hosting a WPF control in WinForms, you probably want to apply a windowing function to reduce the image to 8 bits before display.
So let's use unsafe code and pointers for speed:
var bitmapData = myBitmap.LockBits(
new Rectangle(0, 0, myBitmap.Width, myBitmap.Height),
ImageLockMode.ReadWrite,
myBitmap.PixelFormat);
try
{
var ptr = (byte*)bitmapData.Scan0;
var stride = bitmapData.Stride;
var width = bitmapData.Width;
var height = bitmapData.Height;
// Conversion Code
}
finally
{
myBitmap.UnlockBits(bitmapData);
}
Or use the WPF imaging classes, which generally have better 16-bit support:
var myBitmap = new WriteableBitmap(new BitmapImage(new Uri("myBitmap.jpg", UriKind.Relative)));
myBitmap.Lock();
try
{
var ptr = (byte*)myBitmap.BackBuffer;
...
}
finally
{
myBitmap.Unlock();
}
To loop over all the pixels you would use a double loop:
for (int y = 0; y < height; y++)
{
var row = (ushort*)(ptr + y * stride);
for (int x = 0; x < width; x++)
{
var pixelValue = row[x];
// Scaling code
}
}
And to scale the value you could use a linear mapping from the min/max range of the image to the 0-255 range of a byte:
var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);
var scaled = (int)((pixelValue + 0.5f - minUshortValue) * slope);
scaled = scaled > byte.MaxValue ? byte.MaxValue: scaled;
scaled = scaled < 0 ? 0: scaled;
var byteValue = (byte)scaled;
The maxUshortValue / minUshortValue values would either be computed from the max/min values of the image or configured by the user. You would also need to create a target image to write the result into: either an 8-bit grayscale bitmap to be displayed, or a color image where the same value is written into each color channel.
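Putting those pieces together, here is a rough sketch (untested; sourcePtr, sourceStride, minUshortValue and maxUshortValue are placeholder names for the framework buffer and the chosen window) that writes the scaled pixels into an 8-bit indexed Bitmap that a PictureBox can display:
using System.Drawing;
using System.Drawing.Imaging;

unsafe Bitmap To8BitGrayscale(IntPtr sourcePtr, int sourceStride, int width, int height,
                              ushort minUshortValue, ushort maxUshortValue)
{
    var target = new Bitmap(width, height, PixelFormat.Format8bppIndexed);

    // Format8bppIndexed pixels are palette indices, so install a grayscale palette.
    var palette = target.Palette;
    for (int i = 0; i < 256; i++)
        palette.Entries[i] = Color.FromArgb(i, i, i);
    target.Palette = palette;

    var slope = (byte.MaxValue + 1f) / (maxUshortValue - minUshortValue);

    var bitmapData = target.LockBits(
        new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly,
        target.PixelFormat);
    try
    {
        var srcBase = (byte*)sourcePtr;
        var dstBase = (byte*)bitmapData.Scan0;
        for (int y = 0; y < height; y++)
        {
            var srcRow = (ushort*)(srcBase + y * sourceStride);
            var dstRow = dstBase + y * bitmapData.Stride;
            for (int x = 0; x < width; x++)
            {
                var scaled = (int)((srcRow[x] + 0.5f - minUshortValue) * slope);
                scaled = scaled > byte.MaxValue ? byte.MaxValue : scaled;
                scaled = scaled < 0 ? 0 : scaled;
                dstRow[x] = (byte)scaled;
            }
        }
    }
    finally
    {
        target.UnlockBits(bitmapData);
    }
    return target;
}
The resulting Bitmap can then be assigned directly to pictureBox.Image.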
I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSB that I want to discard, thereby keeping only the odd-numbered bytes (to put it another way). The best solution I can conceptualize is to iterate through the memory, but that's the rub. I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
byte[] imageData = new byte[imageSize];
do
{
// Should I bitwise shift?
imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
} while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.
Here is the algorithm that my coworkers helped formulate. It creates two new (unmanaged) pointers: one 8 bits wide and the other 16 bits.
By stepping through one word at a time and shifting off the last 4 bits of the source, we get a new 8-bit image containing only the MSBs. Both pointers cover the same number of pixels, but since their element sizes differ they advance through the buffer at different rates, so the 8-bit writes never overtake the 16-bit reads.
unsafe
{
byte* p_bytebuffer = (byte*)pImage;
short* p_shortbuffer = (short*)pImage;
for (int i = 0; i < imageWidth * imageHeight; i++)
{
*p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4); // keep the top 8 of the 12 significant bits
}
}
In terms of performance, this appears to be very fast with no perceivable difference in framerate.
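To then display the result, the Bitmap constructor from the question can wrap the converted buffer directly, since the 8-bit pixels now sit at the start of pImage. A hedged sketch (untested; note that this constructor requires the stride, here imageWidth, to be a multiple of 4, and Format8bppIndexed needs an explicit grayscale palette):
var bitmap = new Bitmap(imageWidth, imageHeight, imageWidth,
                        PixelFormat.Format8bppIndexed, pImage);

// Without a grayscale palette, the indexed pixels map to arbitrary colors.
var palette = bitmap.Palette;
for (int i = 0; i < 256; i++)
    palette.Entries[i] = Color.FromArgb(i, i, i);
bitmap.Palette = palette;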
Special thanks to @Herohtar for spending a substantial amount of time in chat helping me solve this.
I know that we desaturate an image by decreasing the values in the saturation channel. I want to accomplish this in C# with EmguCV.
For instance, here is C++ code using OpenCV to do so:
Mat Image = imread("images/any.jpg");
// Specify scaling factor
float saturationScale = 0.01f;
Mat hsvImage;
// Convert to HSV color space
cv::cvtColor(Image,hsvImage,COLOR_BGR2HSV);
// Convert to float32
hsvImage.convertTo(hsvImage,CV_32F);
vector<Mat>channels(3);
// Split the channels
split(hsvImage,channels);
// Multiply S channel by scaling factor
channels[1] = channels[1] * saturationScale;
// Clipping operation performed to limit pixel values
// between 0 and 255
min(channels[1],255,channels[1]);
max(channels[1],0,channels[1]);
// Merge the channels
merge(channels,hsvImage);
// Convert back from float32
hsvImage.convertTo(hsvImage,CV_8UC3);
Mat imSat;
// Convert to BGR color space
cv::cvtColor(hsvImage,imSat,COLOR_HSV2BGR);
// Display the images
Mat combined;
cv::hconcat(Image, imSat, combined);
namedWindow("Original Image -- Desaturated Image", CV_WINDOW_AUTOSIZE);
imshow("Original Image -- Desaturated Image", combined);
waitKey(0);
In C# I have:
var img = new Image<Bgr, byte>("images/any.jpg"); // load as color; a Gray image has no saturation channel
float saturationScale = 0.01f;
var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();
// Multiply S channel by scaling factor and clip (limit)
channels[1] = channels[1] * saturationScale;
I am not sure how to merge the modified saturation channel back into imhHsv. If I do this:
CvInvoke.Merge(channels, imhHsv);
I get this error:
cannot convert 'Emgu.CV.Image[]' to
'Emgu.CV.IInputArrayOfArrays'
Passing a VectorOfMat to CvInvoke.Merge works:
Mat[] m = new Mat[3];
m[0] = CvInvoke.CvArrToMat(channels[0]);
m[1] = CvInvoke.CvArrToMat(channels[1]);
m[2] = CvInvoke.CvArrToMat(channels[2]);
VectorOfMat vm = new VectorOfMat(m);
CvInvoke.Merge(vm, imhHsv);
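For completeness, a sketch of the full desaturation pipeline in EmguCV (untested; it assumes the source is loaded as Image<Bgr, byte>, uses the channels' Mat property instead of CvArrToMat, and relies on the byte depth to clip the scaled values):
var img = new Image<Bgr, byte>("images/any.jpg");
var saturationScale = 0.01;

var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();

// Scaling a byte image saturates to 0-255, so no explicit min/max clipping is needed.
channels[1] = channels[1] * saturationScale;

using (var vm = new VectorOfMat(channels[0].Mat, channels[1].Mat, channels[2].Mat))
{
    CvInvoke.Merge(vm, imhHsv);
}

var imSat = imhHsv.Convert<Bgr, byte>();
CvInvoke.Imshow("Original Image -- Desaturated Image", img.ConcateHorizontal(imSat));
CvInvoke.WaitKey(0);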
I have a 12-bit image that I want to work with in OpenCV, e.g. for detecting blobs.
The image is currently a UInt16 array, and I want to convert it to an OpenCV Mat or IplImage. I need to do that in order to threshold the image and detect blobs.
What I'm doing now is converting the ushort array to a Bitmap, and then to a Mat using the OpenCvSharp extensions, see below.
ushort[,] red = new ushort[480, 640];
// Grab image into ushort array
// ..
var bmpred = U16ArrayToBitmap(red);
Mat redMat = OpenCvSharp.Extensions.BitmapConverter.ToMat(bmpred);
Now redMat, my Mat image, is CV_8UC4, which as I understand it is 4-channel unsigned 8-bit.
This won't work for me, because I want to use all of the 12 bits!
Is it possible to convert a ushort array or a bitmap into a 16-bit grayscale Mat or IplImage?
Thanks!
You can use Mat constructor directly:
var mat = new Mat(red.GetLength(0), red.GetLength(1), MatType.CV_16UC1, red);
Or use a MatOfUShort:
var mat = new MatOfUShort(red.GetLength(0), red.GetLength(1), red);
Note that mat is using a pointer to the red array. Clone mat if you want to use a copy:
var clone = mat.Clone();
For more information check Mat - The Basic Image Container OpenCV documentation page and OpenCvSharp Wiki.
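Since the goal is thresholding and blob detection, you will typically also scale the 12-bit data down to 8 bits first, because the blob detector expects 8-bit input. A rough sketch (untested; enum and class names may differ slightly between OpenCvSharp versions):
var mat = new Mat(red.GetLength(0), red.GetLength(1), MatType.CV_16UC1, red);

// Map the 12-bit range (0..4095) onto 0..255.
var mat8 = new Mat();
mat.ConvertTo(mat8, MatType.CV_8UC1, 255.0 / 4095.0);

// Simple global threshold on the 8-bit image.
var binary = new Mat();
Cv2.Threshold(mat8, binary, 128, 255, ThresholdTypes.Binary);

// Blob detection on the 8-bit image.
using (var detector = SimpleBlobDetector.Create())
{
    var keypoints = detector.Detect(mat8);
}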
I'm new to EmguCV, OpenCV and machine vision in general. I translated the code from this Stack Overflow question from C++ to C#. I also copied their sample image to help me understand whether or not the code is working as expected.
Mat map = CvInvoke.Imread("C:/Users/Cindy/Desktop/coffee_mug.png", Emgu.CV.CvEnum.LoadImageType.AnyColor | Emgu.CV.CvEnum.LoadImageType.AnyDepth);
CvInvoke.Imshow("window", map);
Image<Gray, Byte> imageGray = map.ToImage<Gray, Byte>();
double min = 0, max = 0;
int[] minIndex = new int[5], maxIndex = new int[5];
CvInvoke.MinMaxIdx(imageGray, out min, out max, minIndex, maxIndex, null);
imageGray -= min;
Mat adjMap = new Mat();
CvInvoke.ConvertScaleAbs(imageGray, adjMap, 255/(max-min), 0);
CvInvoke.Imshow("Out", adjMap);
Original Image:
After Processing:
This doesn't look like a depth map to me; it just looks like a slightly modified grayscale image, so I'm curious where I went wrong in my code. MinMaxIdx() doesn't work without converting the image to grayscale first, unlike the code I linked above. Ultimately, what I'd like to do is generate relative depth maps from a single webcam.