I have a 12-bit image that I want to work with in OpenCV, e.g. for blob detection.
The image is currently a UInt16 array, and I want to convert it to an OpenCV Mat or IplImage so that I can threshold it and detect blobs.
What I'm doing now is converting the ushort array to a Bitmap, and then to a Mat using the OpenCvSharp extensions, see below.
ushort[,] red = new ushort [480,640];
//Grab image in to ushort array
//..
bmpred = U16ArrayToBitmap(red);
Mat redMat = OpenCvSharp.Extensions.BitmapConverter.ToMat(bmpred);
Now redMat, my Mat image, is CV_8UC4, which as I understand it is 4-channel unsigned 8-bit.
That won't work for me, because I want to use all of the 12 bits!
Is it possible to convert a ushort array or a Bitmap into a 16-bit grayscale Mat or IplImage?
Thanks!
You can use the Mat constructor directly:
var mat = new Mat(red.GetLength(0), red.GetLength(1), MatType.CV_16UC1, red);
Or use a MatOfUShort:
var mat = new MatOfUShort(red.GetLength(0), red.GetLength(1), red);
Note that mat points directly at the data of the red array. Clone mat if you want to work on an independent copy:
var clone = mat.Clone();
For more information, check the Mat - The Basic Image Container page of the OpenCV documentation and the OpenCvSharp Wiki.
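For the thresholding and blob-detection steps mentioned in the question, a minimal sketch could look like the following. This assumes OpenCvSharp's Cv2 API and that the 12-bit values occupy the range 0-4095 in each ushort; the threshold values here are made up for illustration:

```csharp
using OpenCvSharp;

// 12-bit data (0-4095) stored in a ushort array, e.g. filled by the grabber.
ushort[,] red = new ushort[480, 640];
var mat = new Mat(480, 640, MatType.CV_16UC1, red);

// Threshold directly on the 16-bit data, keeping the full 12-bit range.
// (Recent OpenCV versions accept CV_16U input for simple thresholding.)
var bin = new Mat();
Cv2.Threshold(mat, bin, 2000, 4095, ThresholdTypes.Binary);

// Blob detectors generally expect 8-bit input, so scale down only for that step.
var mat8 = new Mat();
mat.ConvertTo(mat8, MatType.CV_8UC1, 255.0 / 4095.0);
```

This keeps all processing that benefits from the extra precision in 16 bits and only reduces to 8 bits where the API requires it.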
Related
I am trying to threshold and also apply a Gaussian blur to a raw byte image in EmguCV. I have a raw byte array (the pixel bytes from a DICOM file); I read those bytes and store them in an empty image. As soon as I apply a threshold or a Gaussian blur I get this exception:
OpenCv: src.type() == CV_8UC1.
Thank you.
var pixelData_ = dataset.GetValues<byte>(DicomTag.PixelData);
var splitPixelSpacing = pixelSpacing.Split('\\');
var IPP = dataset.GetString(DicomTag.ImagePositionPatient);
var iimg = new Image<Gray, ushort>(cols, rows);
var parse = float.Parse(splitPixelSpacing[0], CultureInfo.InvariantCulture.NumberFormat);
var newPixelSpacing = parse * (cols / 512);
iimg.Bytes = pixelData_;
iimg = iimg.Resize(512, 512, Inter.Area);
var imgBin = new Image<Gray, byte>(512, 512, new Gray(10));
var blr = iimg.ThresholdAdaptive(new Gray(100),
                                 AdaptiveThresholdType.GaussianC,
                                 ThresholdType.ToZero,
                                 5,
                                 new Gray(2));
//I set a breakpoint here; then I get this exception: src.type() == CV_8UC1.
Unfortunately the EmguCV documentation lacks any information about this error, and does not state any prerequisites either. But EmguCV is just a wrapper around OpenCV, and if we check the OpenCV documentation for adaptiveThreshold we find:
src Source 8-bit single-channel image.
I.e. ThresholdAdaptive is limited to 8-bit grayscale and will not work for 16-bit grayscale images.
If you just want a simple threshold, you might want to use ThresholdBinary or one of the other simple thresholding methods.
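If the adaptive threshold itself is what you need, one option is to rescale the 16-bit image down to 8 bits first. A sketch, reusing the iimg variable from the question; the CvInvoke.Normalize call and its min-max scaling are one way to do the reduction, not the only one:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Rescale the 16-bit grayscale image into an 8-bit one (min..max -> 0..255).
var img8 = new Image<Gray, byte>(iimg.Width, iimg.Height);
CvInvoke.Normalize(iimg, img8, 0, 255, NormType.MinMax, DepthType.Cv8U);

// The CV_8UC1 requirement is now satisfied.
var blr = img8.ThresholdAdaptive(new Gray(100),
                                 AdaptiveThresholdType.GaussianC,
                                 ThresholdType.ToZero,
                                 5,
                                 new Gray(2));
```

Note that min-max normalization stretches the contrast, so fixed threshold parameters tuned on the original 16-bit values may need adjusting.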
I know that we desaturate an image by decreasing the values in the saturation channel. I want to accomplish this using C# with Emgu CV.
For instance, here is C++ code using OpenCV to do so:
Mat Image = imread("images/any.jpg");
// Specify scaling factor
float saturationScale = 0.01;
Mat hsvImage;
// Convert to HSV color space
cv::cvtColor(Image,hsvImage,COLOR_BGR2HSV);
// Convert to float32
hsvImage.convertTo(hsvImage,CV_32F);
vector<Mat>channels(3);
// Split the channels
split(hsvImage,channels);
// Multiply S channel by scaling factor
channels[1] = channels[1] * saturationScale;
// Clipping operation performed to limit pixel values
// between 0 and 255
min(channels[1],255,channels[1]);
max(channels[1],0,channels[1]);
// Merge the channels
merge(channels,hsvImage);
// Convert back from float32
hsvImage.convertTo(hsvImage,CV_8UC3);
Mat imSat;
// Convert to BGR color space
cv::cvtColor(hsvImage,imSat,COLOR_HSV2BGR);
// Display the images
Mat combined;
cv::hconcat(Image, imSat, combined);
namedWindow("Original Image -- Desaturated Image", CV_WINDOW_AUTOSIZE);
imshow("Original Image -- Desaturated Image", combined);
waitKey(0);
In C# I have:
float saturationScale = 0.01f;
var img = new Image<Bgr, byte>("images/any.jpg");
var imhHsv = img.Convert<Hsv, byte>();
var channels = imhHsv.Split();
// Multiply S channel by scaling factor and clip (limit)
channels[1] = (channels[1] * saturationScale);
I am not sure how to merge the modified saturation channel back into imhHsv. If I do this:
CvInvoke.Merge(channels, imhHsv);
there is error:
cannot convert 'Emgu.CV.Image[]' to
'Emgu.CV.IInputArrayOfArrays'
Putting a VectorOfMat into CvInvoke.Merge makes it work:
Mat[] m = new Mat[3];
m[0] = CvInvoke.CvArrToMat(channels[0]);
m[1] = CvInvoke.CvArrToMat(channels[1]);
m[2] = CvInvoke.CvArrToMat(channels[2]);
VectorOfMat vm = new VectorOfMat(m);
CvInvoke.Merge(vm, imhHsv);
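To finish the desaturation, the merged HSV image still has to be converted back to BGR, mirroring the cv::cvtColor(..., COLOR_HSV2BGR) step of the C++ version. A short sketch continuing from the vm and imhHsv variables above:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

// Merge the channels back into the HSV image, then convert HSV -> BGR.
CvInvoke.Merge(vm, imhHsv);
var imSat = new Mat();
CvInvoke.CvtColor(imhHsv, imSat, ColorConversion.Hsv2Bgr);
CvInvoke.Imshow("Desaturated", imSat);
CvInvoke.WaitKey(0);
```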
I have a byte array generated from a camera.
I want to convert this C# array (byte[] myBuffer) to an Emgu BGR image.
The RAW image from the camera is 2590 x 1942 pixels.
The format is BayerGB 8-bit, and each pixel occupies two bytes, so there are 2590 (cols) x 1942 (rows) x 2 (bytes) = 10,059,560 bytes.
I grab the data from the camera and get a byte array with the data:
byte[] myBuffer = myGrabberFunction();
//myBuffer.Length == 10059560
Now I want to convert it to a BGR image. I think I should use the Emgu.CV.CvInvoke.CvtColor(SRC, DEST, COLOR_CONV, CHANN) function.
But I can't feed the function a byte[] as the SRC argument, since it needs an "IInputArray".
So I can feed in a Mat, a Matrix, or an Image<#, #>.
But how should I put the buffer data into one of these structures?
If I use Matrix<byte> bayer = new ecv.Matrix<byte>(rows, cols, channels), I can't set 1 channel, because the matrix would then only hold half the bytes of myBuffer.
I can't set it to 2 channels either, because an OpenCV "assert" tells me that the color conversion CvEnum.ColorConversion.BayerGb2Bgr wants a 1-channel input image (it says: OpenCV: scn == 1 && (dcn == 3 || dcn == 4)).
As a workaround I tried to make an image with double the width and convert that; this gives no error, but the output image is black (all zeroes):
byte[] buffer = myGrabberFunction();
Image<Structure.Bgr, byte> lastFrameBgr = new Image<Structure.Bgr, byte>(2590, 1942);
//test with Matrix of bytes:
ecv.Matrix<byte> bayerMatrix = new ecv.Matrix<byte>(lastFrameBgr.Height, lastFrameBgr.Width, 2);
bayerMatrix.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerMatrix, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
//test with an 1 channel Byte depth image
Image<Structure.Gray, byte> bayerGrayImage = new Image<Structure.Gray, byte>(lastFrameBgr.Width * 2, lastFrameBgr.Height);
bayerGrayImage.Bytes = buffer;
ecv.CvInvoke.CvtColor(bayerGrayImage, lastFrameBgr, ecv.CvEnum.ColorConversion.BayerGb2Bgr, 3);
//result in a black Image
My question:
How can I transform my array of bytes into something I can use as the source in the Emgu.CV.CvInvoke.CvtColor(...) function?
Or: how should I convert an array of bytes into an Emgu BGR image?
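One possibility, given that the buffer holds two bytes per pixel: treat the data as a single-channel 16-bit Bayer image and let CvtColor do the demosaicing. This is a sketch, assuming EmguCV's Mat(rows, cols, depth, channels, dataPointer, step) constructor and the hypothetical myGrabberFunction from the question:

```csharp
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.CvEnum;

byte[] buffer = myGrabberFunction();            // 2590 * 1942 * 2 bytes
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
try
{
    // Two bytes per pixel -> one ushort per pixel: a 1-channel CV_16U image.
    // step is the number of bytes per row: cols * 2.
    var bayer = new Mat(1942, 2590, DepthType.Cv16U, 1,
                        handle.AddrOfPinnedObject(), 2590 * 2);
    // BayerGb2Bgr accepts a 1-channel source, which satisfies the
    // scn == 1 assert; with 16-bit input the result is a 16-bit BGR image.
    var bgr = new Mat();
    CvInvoke.CvtColor(bayer, bgr, ColorConversion.BayerGb2Bgr);
}
finally
{
    handle.Free();
}
```

If the camera actually delivers 8-bit Bayer data (one byte per pixel), use DepthType.Cv8U, step = cols, and half the buffer length instead; the key point is that the Bayer source must be single-channel.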
I'm new to EmguCV, OpenCV, and machine vision in general. I translated the code from this Stack Overflow question from C++ to C#. I also copied their sample image to help me check whether the code is working as expected.
Mat map = CvInvoke.Imread("C:/Users/Cindy/Desktop/coffee_mug.png", Emgu.CV.CvEnum.LoadImageType.AnyColor | Emgu.CV.CvEnum.LoadImageType.AnyDepth);
CvInvoke.Imshow("window", map);
Image<Gray, Byte> imageGray = map.ToImage<Gray, Byte>();
double min = 0, max = 0;
int[] minIndex = new int[5], maxIndex = new int[5];
CvInvoke.MinMaxIdx(imageGray, out min, out max, minIndex, maxIndex, null);
imageGray -= min;
Mat adjMap = new Mat();
CvInvoke.ConvertScaleAbs(imageGray, adjMap, 255/(max-min), 0);
CvInvoke.Imshow("Out", adjMap);
Original Image:
After Processing:
This doesn't look like a depth map to me; it just looks like a slightly modified grayscale image, so I'm curious where I went wrong in my code. MinMaxIdx() doesn't work without converting the image to grayscale first, unlike in the code I linked above. Ultimately, what I'd like to do is generate relative depth maps from a single web camera.
I'd like to convert a Bgr value (one pixel) to an Hsv value. How can I do that (without writing the conversion code from scratch) in EmguCV?
Please note that I am not interested in converting whole image's color space but only one pixel, therefore CvInvoke.cvCvtColor() does not work for me.
If you want to do this within EmguCV, just read the image into an Image, get the value of the pixel, and stick it in an Hsv structure.
For example:
static void Main(string[] args)
{
bool haveOpenCL = CvInvoke.HaveOpenCL;
bool haveOpenClGpu = CvInvoke.HaveOpenCLCompatibleGpuDevice;
CvInvoke.UseOpenCL = true;
Emgu.CV.Image<Bgr, Byte> lenaBgr = new Image<Bgr, Byte>(#"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg");
CvInvoke.Imshow("Lena BGR", lenaBgr);
Bgr color = lenaBgr[100, 100];
Console.WriteLine("Bgr: {0}", color.ToString());
Emgu.CV.Image<Hsv, Byte> lenaHsv = new Image<Hsv, Byte>(#"D:\OpenCV\opencv-3.2.0\samples\data\Lena.jpg");
CvInvoke.Imshow("Lena HSV", lenaHsv);
Hsv colorHsv = lenaHsv[100, 100];
Console.WriteLine("HSV: {0}", colorHsv.ToString());
CvInvoke.WaitKey(0);
}
The result:
Bgr: [87,74,182]
HSV: [176,151,182]
Lena BGR Lena HSV
Okay, I already found a way, with some help from the .NET Framework.
Given a Bgr pixel:
1- Convert the color to System.Drawing.Color:
System.Drawing.Color intermediate = System.Drawing.Color.FromArgb((int)pixel.Red, (int)pixel.Green, (int)pixel.Blue);
2- Construct the Hsv from the intermediate:
Hsv hsvPixel = new Hsv(intermediate.GetHue(), intermediate.GetSaturation(), intermediate.GetBrightness());
Note that GetHue, GetSaturation and GetBrightness return HSL-style values (hue 0-360, saturation and lightness 0-1), which differ from OpenCV's 8-bit HSV convention (H 0-179, S and V 0-255), so scale accordingly if the result must match EmguCV's conversion.
Cheers.