I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote the following code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";

    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();

    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;

    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
}
Result so far:
I think there is a problem with the approximation. How do I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it could be something to do with your median blur. I would check whether you are blurring enough that the Canny edge detection has clean edges to work with. Another method you could use is dilation: try dilating your Canny edge output and see if you get better results.
EDIT
Here is the code:
public List<int> GetDiameters()
{
    //List to hold the output diameters
    List<int> diameters = new List<int>();

    //File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    //Read in the image and store it as a Mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    //Mat object that will hold the output of the Gaussian blur
    Mat gaussianBlur = new Mat();

    //Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    //Mat object that will hold the output of the Canny
    Mat canny = new Mat();

    //Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    //Mat object that will hold the output of the dilate
    Mat dilate = new Mat();

    //Dilate the Canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    //Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();

    //Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    //Variables to hold relevant info on which contour is the biggest
    int biggest = 0;
    int index = 0;

    //Find the biggest contour (by point count)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    //Once all contours have been looped over, add the biggest contour's index to the list
    diameters.Add(index);

    //Return the list
    return diameters;
}
The first thing you do is blur the image.
Then you run Canny on the image.
Then you dilate the image, to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
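One thing to note: the method above adds the index of the biggest contour to the list rather than an actual diameter. If what you ultimately need are pixel diameters, a small hedged follow-up (my own sketch, not part of the original answer, meant to go just before the return in place of diameters.Add(index)) could measure the bounding box of that contour:
//Hypothetical follow-up: BoundingRectangle gives the axis-aligned box of the
//biggest contour; its width and height act as horizontal/vertical diameters in pixels
System.Drawing.Rectangle box = CvInvoke.BoundingRectangle(contours[index]);
diameters.Add(box.Width);
diameters.Add(box.Height);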
I have a contour of a number plate and I want to check whether it's tilted or not. I used CvInvoke.MinAreaRect(contour), but it always returns an angle of -90 even when the plate is obviously tilted; you can see the contour I drew in the picture below.
Does anyone know what happened and how to solve this?
Here is the code:
Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> adaptive_threshold_img = gray.ThresholdAdaptive(new Gray(255), AdaptiveThresholdType.GaussianC, ThresholdType.BinaryInv, 11, new Gray(2));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(adaptive_threshold_img, contours, hier, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
double max_area = 0;
VectorOfPoint max_contour = new VectorOfPoint();
for (int i = 0; i < contours.Size; i++)
{
    double temp = CvInvoke.ContourArea(contours[i]);
    if (temp > max_area)
    {
        max_area = temp;
        max_contour = contours[i];
    }
}
VectorOfVectorOfPoint contour_to_draw = new VectorOfVectorOfPoint(max_contour);
CvInvoke.DrawContours(gray, contour_to_draw, 0, new MCvScalar(255), 2);
CvInvoke.Imshow("plate", gray);
RotatedRect plate_feature = CvInvoke.MinAreaRect(max_contour);
CvInvoke.WaitKey();
CvInvoke.DestroyAllWindows();
Try CvInvoke.Threshold() instead of gray.ThresholdAdaptive(). Set a proper threshold value and you'll get a better contour than before.
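For illustration, here is a minimal sketch of that suggestion, assuming the same "2.PNG" input; the fixed cutoff of 128 is an assumed value you would tune (or replace with the ThresholdType.Otsu flag):
//Hedged sketch: global threshold instead of adaptive; the 128 cutoff is an assumption
Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> binary = new Image<Gray, byte>(gray.Size);
CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.BinaryInv);
//Then run FindContours / MinAreaRect on binary exactly as in the question;
//with a cleaner mask the returned RotatedRect angle should reflect the plate's tilt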
I am using EmguCV and C#. I want to remove small connected objects from my image using ConnectedComponentsWithStats.
I have the following binary image as input:
I am able to draw a rectangle for a specified area. Now I want to remove objects from the binary image.
Here is my code:
Image<Gray, byte> imgry = image.Convert<Gray, byte>();
var mask = imgry.InRange(new Gray(50), new Gray(255));
var label = new Mat();
var stats = new Mat();
var centroids = new Mat();
int labels = CvInvoke.ConnectedComponentsWithStats(mask, label, stats, centroids,
    LineType.EightConnected, DepthType.Cv32S);
var img = stats.ToImage<Gray, int>();
Image<Gray, byte> imgout1 = new Image<Gray, byte>(image.Width, image.Height);
for (var i = 1; i < labels; i++)
{
    var area = img[i, (int)ConnectecComponentsTypes.Area].Intensity;
    var width = img[i, (int)ConnectecComponentsTypes.Width].Intensity;
    var height = img[i, (int)ConnectecComponentsTypes.Height].Intensity;
    var top = img[i, (int)ConnectecComponentsTypes.Top].Intensity;
    var left = img[i, (int)ConnectecComponentsTypes.Left].Intensity;
    var roi = new Rectangle((int)left, (int)top, (int)width, (int)height);
    if (area > 1000)
    {
        CvInvoke.Rectangle(imgout1, roi, new MCvScalar(255), 1);
    }
}
Drawn rectangle for the specified area:
How can I remove objects of a specified size?
My output image should be like this:
I have achieved this one way using contours, and it works fine for a small image, but when I have a large image (10240 x 10240) with many more particles my application enters break mode.
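For what it's worth, here is a minimal sketch (my own, not a tested solution) of one way to build the cleaned mask directly from the label image instead of contours, which avoids FindContours entirely on huge images. It reuses the label, labels, img, and imgout1 variables from the code above and the same 1000-pixel area cutoff; the per-pixel loop is only for clarity.
//Hedged sketch: keep only components whose area exceeds the cutoff by
//copying their labelled pixels into the output mask (no contours needed)
var labelImg = label.ToImage<Gray, int>();
var keep = new bool[labels];
for (var i = 1; i < labels; i++)
{
    keep[i] = img[i, (int)ConnectecComponentsTypes.Area].Intensity > 1000;
}
for (var y = 0; y < labelImg.Height; y++)
{
    for (var x = 0; x < labelImg.Width; x++)
    {
        var lbl = (int)labelImg[y, x].Intensity;
        if (lbl > 0 && keep[lbl])
        {
            imgout1[y, x] = new Gray(255);
        }
    }
}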
INPUT IMAGE
Hi, I am trying to learn EmguCV 3.3 and I have a question about blob counting. As you can see in the INPUT IMAGE, I have black, uneven blobs.
I am trying to do something like this:
OUTPUT IMAGE
I need to draw rectangles around the blobs and count them.
I tried some approaches but none of them worked.
I need Help();
You can use FindContours() or SimpleBlobDetector() to achieve that; here is example code that uses the first one:
Image<Gray, Byte> grayImage = new Image<Gray, Byte>("mRGrc.jpg");
Image<Gray, Byte> canny = new Image<Gray, byte>(grayImage.Size);
int counter = 0;

using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
        counter++;
    }

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
    }

Console.WriteLine("Number of blobs: " + counter);
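Since the question mentions EmguCV 3.3, where the Contour<Point>/MemStorage API shown above no longer exists, here is a rough 3.x-style sketch of the same idea (my own adaptation: the "mRGrc.jpg" file name is taken from the snippet above, and the 128 threshold is an assumption under the assumption that the blobs are dark on a light background):
//Hedged 3.x-style sketch: threshold, find external contours, box and count them
Image<Gray, byte> grayImage = new Image<Gray, byte>("mRGrc.jpg");
Image<Gray, byte> binary = grayImage.ThresholdBinaryInv(new Gray(128), new Gray(255));
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    for (int i = 0; i < contours.Size; i++)
    {
        Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
        grayImage.Draw(r, new Gray(255), 1);
    }
    Console.WriteLine("Number of blobs: " + contours.Size);
}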
I have an image of a "Windows control", let's say a text box, and I want to get the background color of the text written within the text box by finding the most frequent color in that picture via pixel color comparison.
I searched on Google and found that everyone talks about histograms, and some code is given to compute the histogram of an image, but no one describes what to do after computing the histogram.
The code I found on some sites is like this:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
Currently I use code that takes a line segment from the image and finds the most frequent color along it, but that is not the right way to do it.
The code currently used is as follows:
Image<Bgr, byte> image = new Image<Bgr, byte>(temp);
int height = temp.Height / 2;
Dictionary<Bgr, int> colors = new Dictionary<Bgr, int>();
for (int i = 0; i < (image.Width); i++)
{
    Bgr pixel = new Bgr();
    pixel = image[height, i];
    if (colors.ContainsKey(pixel))
        colors[pixel] += 1;
    else
        colors.Add(pixel, 1);
}
Bgr result = colors.FirstOrDefault(x => x.Value == colors.Values.Max()).Key;
Please help me if anyone knows how to do this. Take this image as input:
Emgu.CV's DenseHistogram exposes the method MinMax() which finds the maximum and minimum bin of the histogram.
So after computing your histogram like in your first code snippet:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
...find the peak of the histogram with this method:
float minValue, maxValue;
Point[] minLocation;
Point[] maxLocation;
hist.MinMax(out minValue, out maxValue, out minLocation, out maxLocation);
// This is the value you are looking for (the bin representing the highest peak in your
// histogram is also the main color of your image).
var mainColor = maxLocation[0].Y;
I found a code snippet on Stack Overflow which does what I need.
The code goes like this:
int BlueHist;
int GreenHist;
int RedHist;
Image<Bgr, Byte> img = new Image<Bgr, byte>(bmp);
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
Image<Gray, Byte> img2Blue = img[0];
Image<Gray, Byte> img2Green = img[1];
Image<Gray, Byte> img2Red = img[2];
Histo.Calculate(new Image<Gray, Byte>[] { img2Blue }, true, null);
double[] minV, maxV;
Point[] minL, maxL;
Histo.MinMax(out minV, out maxV, out minL, out maxL);
BlueHist = maxL[0].Y;
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Green }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
GreenHist = maxL[0].Y;
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Red }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
RedHist = maxL[0].Y;
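A tiny follow-up (my addition, not in the original snippet): the three peaks above are per-channel intensity values, so they can be combined into a single .NET color if you want to treat it as the control's background color:
//Hypothetical usage of the per-channel peaks found above
System.Drawing.Color dominant = System.Drawing.Color.FromArgb(RedHist, GreenHist, BlueHist);
Console.WriteLine("Dominant color: " + dominant);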
I have a Kinect mounted on the ceiling pointing down at a table (roughly 2 to 3 meters from the Kinect), and my objective is to use the Kinect depth stream to locate the objects and get their positions to later send to Unity. For that I am using C# and OpenCV (the Emgu wrapper): I first use Canny to get the edges and then use BoundingRect to create a box around each object.
The result is as follows:
Original:
As you can see, the stationary objects are no problem, but the box around the hands (in the first there was a flyer on it) is too big and sometimes even worse. Is there any other way (using OpenCV) of getting the positions of the objects?
The objective is then to send the position and dimensions of each object to Unity (through TCP/IP), so preferably the shapes should be squares or rectangles for easy manipulation in Unity.
The code I have so far is:
Image<Bgr, Byte> grayImage = new Image<Bgr, Byte>("C:\\Users\\Pedro\\Desktop\\imgRva\\KinectSnapshot-03-48-01.png");
Image<Gray, Byte> gray = grayImage.Convert<Gray, Byte>().PyrDown().PyrUp();
CvInvoke.cvShowImage("texto", gray);
CvInvoke.cvWaitKey(0);
Image<Gray, Byte> bin = gray.ThresholdBinary(new Gray(40), new Gray(255));
Image<Gray, Byte> cannyEdges = bin.Canny(300, 300);
CvInvoke.cvShowImage("texto", cannyEdges);
CvInvoke.cvWaitKey(0);
Image<Gray, Byte> canny = new Image<Gray, byte>(cannyEdges.Size);
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
    }

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
        Debug.WriteLine(r.Location + " x " + r.Width + " y " + r.Height);
    }
CvInvoke.cvShowImage("texto", canny);
CvInvoke.cvWaitKey(0);
Ty