I have a Kinect mounted in the ceiling pointing down at a table (which is roughly 2 to 3 meters from the Kinect), and my objective is to use the Kinect depth stream to locate objects and get their positions to later send to Unity. For that I am using C# and OpenCV (the Emgu wrapper): I first use Canny to get the edges and then use BoundingRect to create a box around each object.
The result is as follows:
Original:
As you can see, the stationary objects are no problem, but the box around the hands (the first one had a flyer on it) is too big, and sometimes it is even worse. Is there any other way (using OpenCV) of getting the positions of the objects?
The objective is then to send the position and dimensions of each object to Unity (through TCP/IP), so preferably the shapes should be squares or rectangles for easy manipulation in Unity.
The code I have so far is:
// Load the snapshot, convert to grayscale and smooth it
Image<Bgr, Byte> grayImage = new Image<Bgr, Byte>("C:\\Users\\Pedro\\Desktop\\imgRva\\KinectSnapshot-03-48-01.png");
Image<Gray, Byte> gray = grayImage.Convert<Gray, Byte>().PyrDown().PyrUp();
CvInvoke.cvShowImage("texto", gray);
CvInvoke.cvWaitKey(0);

// Threshold and run Canny to get the edges
Image<Gray, Byte> bin = gray.ThresholdBinary(new Gray(40), new Gray(255));
Image<Gray, Byte> cannyEdges = bin.Canny(300, 300);
CvInvoke.cvShowImage("texto", cannyEdges);
CvInvoke.cvWaitKey(0);

// Draw every contour found in the edge image
Image<Gray, Byte> canny = new Image<Gray, byte>(cannyEdges.Size);
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
    }

// Draw the bounding rectangle of every contour and log its position and size
using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
        Debug.WriteLine(r.Location + " x " + r.Width + " y " + r.Height);
    }
CvInvoke.cvShowImage("texto", canny);
CvInvoke.cvWaitKey(0);
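For reference, a minimal sketch of the Unity side of the objective, sending one bounding rectangle's position and size over TCP/IP, could look like the following. The endpoint 127.0.0.1:9000 and the comma-separated "x,y,w,h" message format are placeholder assumptions, not part of the code above.

// Minimal sketch only: requires System.Net.Sockets and System.IO;
// the endpoint and the "x,y,w,h" message format are placeholder assumptions.
using (TcpClient client = new TcpClient("127.0.0.1", 9000))
using (StreamWriter writer = new StreamWriter(client.GetStream()))
{
    // r is one of the bounding rectangles obtained from cvBoundingRect above
    writer.WriteLine(r.X + "," + r.Y + "," + r.Width + "," + r.Height);
    writer.Flush();
}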
Thanks.
I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";

    // Load the image and convert to grayscale
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();

    // Median blur to suppress noise
    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    // Canny edge detection
    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    // Morphological closing to try to join the edge fragments
    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    // Find and approximate the contours, keeping the ones that look like the droplet
    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
}
result so far:
I think there is a problem with the approximation. How do I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it could be something to do with your median blur. I would check whether you are blurring enough; the Canny edge detection depends on how much noise survives that blur. Another method that you could use is Dilate: try dilating your Canny edge output and see if you get any better results.
EDIT
Here is the code below
public List<int> GetDiameters()
{
    //List to hold output diameters
    List<int> diametors = new List<int>();

    //File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    //Read in the image and store it as a mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    //Mat object that will hold the output of the gaussian blur
    Mat gaussianBlur = new Mat();

    //Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    //Mat object that will hold the output of the canny
    Mat canny = new Mat();

    //Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    //Mat object that will hold the output of the dilate
    Mat dilate = new Mat();

    //Dilate the canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    //Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();

    //Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    //Variables to hold relevant info on what is the biggest contour
    int biggest = 0;
    int index = 0;

    //Find the biggest contour (the one with the most points)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    //Once all contours have been looped over, add the biggest contour's index to the list
    diametors.Add(index);

    //Return the list
    return diametors;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, as to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
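If an actual pixel diameter is wanted rather than the contour's index, one option (a sketch only, not part of the answer above, and assuming your EmguCV 3.x build exposes CvInvoke.MinEnclosingCircle) is to fit a minimum enclosing circle to the biggest contour inside GetDiameters, in place of the diametors.Add(index) line:

// Sketch only: fit a minimum enclosing circle to the biggest contour found above
// and record its diameter in pixels (assumes CvInvoke.MinEnclosingCircle is available).
CircleF circle = CvInvoke.MinEnclosingCircle(contours[index]);
diametors.Add((int)(circle.Radius * 2));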
INPUT IMAGE
Hi, I am trying to learn EmguCV 3.3 and I have a question about blob counting. As you can see in the INPUT IMAGE, I have black uneven blobs.
I am trying to do something like this.
OUTPUT IMAGE
I need to draw rectangles around the blobs and count them.
I have tried some approaches but none of them worked.
I need Help();
You can use FindContours() or SimpleBlobDetector() to achieve that. Here is example code that uses the first one:
// Load the image as grayscale
Image<Gray, Byte> grayImage = new Image<Gray, Byte>("mRGrc.jpg");
Image<Gray, Byte> canny = new Image<Gray, byte>(grayImage.Size);
int counter = 0;

// First pass: approximate and draw every contour, counting them as we go
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
        counter++;
    }

// Second pass: draw a bounding rectangle around each contour
using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
    }

Console.WriteLine("Number of blobs: " + counter);
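For the SimpleBlobDetector route mentioned above, a rough sketch against the EmguCV 3.x Features2D API could look like the following. The area filter values are guesses to tune, and the exact parameter names should be checked against your EmguCV version.

// Sketch only: EmguCV 3.x SimpleBlobDetector (Emgu.CV.Features2D);
// the area filter values are assumptions that need tuning for your blobs.
Image<Gray, Byte> gray = new Image<Gray, Byte>("mRGrc.jpg");
SimpleBlobDetectorParams blobParams = new SimpleBlobDetectorParams();
blobParams.FilterByArea = true;
blobParams.MinArea = 50;   // ignore tiny specks
SimpleBlobDetector detector = new SimpleBlobDetector(blobParams);
MKeyPoint[] keypoints = detector.Detect(gray);
foreach (MKeyPoint kp in keypoints)
{
    // Each keypoint gives a centre and an approximate size, which maps to a square box
    int half = (int)(kp.Size / 2);
    Rectangle box = new Rectangle((int)kp.Point.X - half, (int)kp.Point.Y - half, (int)kp.Size, (int)kp.Size);
    // draw or store box as needed
}
Console.WriteLine("Number of blobs: " + keypoints.Length);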
I'm doing a project to process brain injuries using Image Processing. In order to improve its accuracy, I need to extract only the brain matter from the skull.
Using EmguCV I was able to identify the inner and outer contours (blue and dark blue). Is there any way to extract these identified contours into another image?
Image<Gray, byte> grayImage = new Image<Gray, byte>(bitmap);
Image<Bgr, byte> color = new Image<Bgr, byte>(bitmap);

// Threshold to isolate the bright skull outline
grayImage = grayImage.ThresholdBinary(new Gray(220), new Gray(255));

using (MemStorage storage = new MemStorage())
{
    for (Contour<Point> contours = grayImage.FindContours(
        Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        // Approximate each contour and keep only the reasonably wide ones
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.015, storage);
        if (currentContour.BoundingRectangle.Width > 20)
        {
            CvInvoke.cvDrawContours(color, contours, new MCvScalar(100), new MCvScalar(255), -1, 2, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
        }
    }
}
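One common way to do the extraction (sketched here against the same EmguCV 2.x API as the code above, placed inside the loop for the contour you want to keep; the mask and brainOnly names are just illustrative) is to draw that contour filled onto a blank mask and copy the original image through the mask:

// Sketch only: fill the selected contour on a blank mask (thickness -1 = filled),
// then copy the source through it so everything outside the contour stays black.
Image<Gray, byte> mask = new Image<Gray, byte>(grayImage.Size);
CvInvoke.cvDrawContours(mask, currentContour, new MCvScalar(255), new MCvScalar(255),
    -1, -1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
Image<Bgr, byte> brainOnly = color.Copy(mask);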
EmguCV 2.2.2
Expected output:
I am new to EmguCV and OpenCV. I want to detect the text regions in an image using EmguCV.
There are already some solutions posted on Stack Overflow using OpenCV:
Extracting text OpenCV
But I was unable to convert that OpenCV code to EmguCV.
Here is a direct conversion of the accepted answer in the link you provided into C# with EMGU. You might have to make some alterations since it is a slightly different implementation, but it should get you started. I also doubt it is very robust, so depending on your specific use it might not be suitable. Best of luck.
public List<Rectangle> detectLetters(Image<Bgr, Byte> img)
{
    List<Rectangle> rects = new List<Rectangle>();
    Image<Gray, Single> img_sobel;
    Image<Gray, Byte> img_gray, img_threshold;

    // Grayscale, horizontal Sobel, then Otsu threshold
    img_gray = img.Convert<Gray, Byte>();
    img_sobel = img_gray.Sobel(1, 0, 3);
    img_threshold = new Image<Gray, byte>(img_sobel.Size);
    CvInvoke.cvThreshold(img_sobel.Convert<Gray, Byte>(), img_threshold, 0, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_OTSU);

    // Morphological close with a wide element to merge characters into text lines
    StructuringElementEx element = new StructuringElementEx(3, 17, 1, 6, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_RECT);
    CvInvoke.cvMorphologyEx(img_threshold, img_threshold, IntPtr.Zero, element, Emgu.CV.CvEnum.CV_MORPH_OP.CV_MOP_CLOSE, 1);

    // Keep the bounding rectangle of every sufficiently large contour
    for (Contour<System.Drawing.Point> contours = img_threshold.FindContours(); contours != null; contours = contours.HNext)
    {
        if (contours.Area > 100)
        {
            Contour<System.Drawing.Point> contours_poly = contours.ApproxPoly(3);
            rects.Add(new Rectangle(contours_poly.BoundingRectangle.X, contours_poly.BoundingRectangle.Y, contours_poly.BoundingRectangle.Width, contours_poly.BoundingRectangle.Height));
        }
    }
    return rects;
}
Usage:
Image<Bgr, Byte> img = new Image<Bgr, Byte>("VfDfJ.png");
List<Rectangle> rects = detectLetters(img);
for (int i = 0; i < rects.Count(); i++)
    img.Draw(rects.ElementAt<Rectangle>(i), new Bgr(0, 255, 0), 3);
CvInvoke.cvShowImage("Display", img.Ptr);
CvInvoke.cvWaitKey(0);
CvInvoke.cvDestroyWindow("Display");
I'm doing a small experiment with the Fourier transform in Emgu CV. My aim is to take the Fourier transform of an image, then take the inverse Fourier transform, and check whether the original image shows up again. Mathematically, it should.
This is my code, which I believe is correct:
// Load the source image as a single-channel float image
Image<Gray, float> image = new Image<Gray, float>("c://box1.png");

// Pack the source into channel 1 of a two-channel (complex) image
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);

// Forward DFT into a two-channel matrix
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);

// Inverse DFT back into another two-channel matrix
Matrix<float> idft = new Matrix<float>(dft.Rows, dft.Cols, 2);
CvInvoke.cvDFT(dft, idft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, 0);

// Attempt to display the result
IntPtr complexImage2 = CvInvoke.cvCreateImage(idft.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_8U, 2);
CvInvoke.cvShowImage("picture", idft);
System.Threading.Thread.Sleep(99999); // to wait and see the picture
I have two problems:
1. An error shows up saying "OpenCV: Source image must have 1, 3 or 4 channels". I believe it is related to the IDFT, but I couldn't solve it.
2. It still shows an output image, but unfortunately it's not the original image that was input. All it shows is a plain grey image.
Thanks.
Try this,
// Work directly on single-channel float images
Image<Gray, System.Single> _image = new Image<Gray, System.Single>("c://box1.png");
Image<Gray, System.Single> DFTimage = new Image<Gray, System.Single>(_image.Size);
Image<Gray, System.Single> Original = new Image<Gray, System.Single>(_image.Size);

// Forward transform, then inverse transform back
CvInvoke.cvDFT(_image.Ptr, DFTimage.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(DFTimage.Ptr, Original.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);

// Display the reconstructed image
CvInvoke.cvShowImage("picture", Original);