I have many shapes in an image, and I want to save each shape's contour in its own array.
I mean that I want the coordinates of the contour of shape 1 in array 1, of shape 2 in array 2, etc.
And if there are two shapes, how can I draw the shortest line between them using their coordinates?
For example, I had this result after many operations on an image,
and this after finding the contours:
So I need the coordinates of each shape's contour to calculate the shortest distance between them.
You can refer to this link & this wiki for detecting contours in an image.
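For the first part of the question (one array per shape), here is a minimal sketch using the same Emgu 2.x API as the implementation below; Img_Scene_Gray is assumed to be your thresholded binary image:

// Collect each shape's contour points into its own array:
// shapeContours[0] holds shape 1, shapeContours[1] holds shape 2, and so on.
List<Point[]> shapeContours = new List<Point[]>();
using (MemStorage storage = new MemStorage())
{
    for (Contour<Point> c = Img_Scene_Gray.FindContours(
             CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL, storage);
         c != null; c = c.HNext)
    {
        shapeContours.Add(c.ToArray());
    }
}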
For the second part, to find the minimum distance between two shapes, follow these steps:
Find the two contours between which you want the minimum distance.
Loop over every pair of points, one from each contour, and compute the distance between them.
Keep the minimum over all those distances and mark the corresponding points.
Here is the EmguCV implementation of this algorithm.
private void button2_Click(object sender, EventArgs e)
{
    Image<Gray, byte> Img_Scene_Gray = Img_Source_Bgr.Convert<Gray, byte>();
    Image<Bgr, byte> Img_Result_Bgr = Img_Source_Bgr.Copy();
    LineSegment2D MinIntersectionLineSegment = new LineSegment2D();
    Img_Scene_Gray = Img_Scene_Gray.ThresholdBinary(new Gray(10), new Gray(255));
    #region Finding Contours
    using (MemStorage Scene_ContourStorage = new MemStorage())
    {
        for (Contour<Point> Contours_Scene = Img_Scene_Gray.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL, Scene_ContourStorage); Contours_Scene != null; Contours_Scene = Contours_Scene.HNext)
        {
            if (Contours_Scene.Area > 25) // ignore tiny noise contours
            {
                // Draw the shortest line between this contour and the next one in the list
                if (Contours_Scene.HNext != null)
                {
                    MinIntersectionLine(Contours_Scene, Contours_Scene.HNext, ref MinIntersectionLineSegment);
                    Img_Result_Bgr.Draw(MinIntersectionLineSegment, new Bgr(Color.Green), 2);
                }
                Img_Result_Bgr.Draw(Contours_Scene, new Bgr(Color.Red), 1);
            }
        }
    }
    #endregion
    imageBox1.Image = Img_Result_Bgr;
}
void MinIntersectionLine(Contour<Point> a, Contour<Point> b, ref LineSegment2D Line)
{
    // Brute force: compare every point of contour a against every point of contour b
    double MinDist = double.MaxValue;
    for (int i = 0; i < a.Total; i++)
    {
        for (int j = 0; j < b.Total; j++)
        {
            double Dist = Distance_BtwnPoints(a[i], b[j]);
            if (Dist < MinDist)
            {
                Line.P1 = a[i];
                Line.P2 = b[j];
                MinDist = Dist;
            }
        }
    }
}
double Distance_BtwnPoints(Point p, Point q)
{
    int X_Diff = p.X - q.X;
    int Y_Diff = p.Y - q.Y;
    return Math.Sqrt((X_Diff * X_Diff) + (Y_Diff * Y_Diff));
}
The code does this:
Find the squares
Find the centroid closest to the center of the image (by distance)
Draw the closest square's centroid (purple) on the image
PROBLEM:
My problem is that it draws on every square, so it's not finding the closest centroid. It draws the purple circle on both squares.
CODE:
double distancia_menor = double.MaxValue;
double c = 0; // distance of the current centroid to the image center (hoisted so it is in scope below)
using (MemStorage storage = new MemStorage()) // allocate memory storage
{
    // Find contours
    for (Contour<System.Drawing.Point> contours = frame_drone_canny.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        Contour<System.Drawing.Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.05, storage); // approximate the contour
        if (currentContour.Area > 3300 /*currentContour.Area >= area_contornos_min.Value && currentContour.Area <= area_contornos_max.Value*/) // if the area is within the trackbar values
        {
            if (currentContour.Total == 4) // rectangle/square candidate
            {
                retangular = true;
                pontos = currentContour.ToArray(); // contour points to array
                LineSegment2D[] edges = PointCollection.PolyLine(pontos, true);
                for (int i = 0; i < edges.Length; i++)
                {
                    double angle = Math.Abs(edges[(i + 1) % edges.Length].GetExteriorAngleDegree(edges[i]));
                    if (angle < 75 || angle > 105) // angle limits that decide whether it is a square or not
                    {
                        retangular = false; // not a square
                        centroid = new Point(0, 0);
                        //posicao_atual = new PointF(0, 0);
                    }
                }
                if (retangular)
                {
                    centroid.X = (int)currentContour.GetMoments().GravityCenter.x;
                    centroid.Y = (int)currentContour.GetMoments().GravityCenter.y;
                    c = Math.Sqrt((Math.Pow(centroid.X - tamanho_imagem.X / 2, 2) + Math.Pow(centroid.X - tamanho_imagem.X / 2, 2)));
                    Debug.WriteLine(c);
                }
            }
            if (c < distancia_menor)
            {
                distancia_menor = c;
                centroid_mais_proximo = new PointF(centroid.X, centroid.Y);
                frame_drone_copia.Draw(new CircleF(new System.Drawing.PointF(centroid_mais_proximo.X, centroid_mais_proximo.Y), 1), new Bgr(Color.Purple), 17);
            }
        }
    }
}
You're testing both shapes and drawing the purple circle every time you update the minimum. If the loop happened to test the closest square first, it would draw only the right one; but since it tests the farther one first, it draws two circles.
Change your code to only find the closest centroid inside the loop, and move the code that draws the purple circle outside the loop, as sketched below.
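A minimal sketch of that change, reusing the variable names from your code (the retangular check is added so non-squares never update the minimum):

// Inside the contour loop: only track the minimum, don't draw yet.
if (retangular && c < distancia_menor)
{
    distancia_menor = c;
    centroid_mais_proximo = new PointF(centroid.X, centroid.Y);
}

// After the loop: draw the single closest centroid once.
if (distancia_menor < double.MaxValue)
    frame_drone_copia.Draw(new CircleF(centroid_mais_proximo, 1), new Bgr(Color.Purple), 17);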
I want to detect a display on an image (more precisely its corners).
I segment the image into display color and non-display color:
Image<Gray, byte> segmentedImage = greyImage.InRange(new Gray(180), new Gray(255));
Then I use corner Harris to find the corners:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> harrisImage = new Image<Emgu.CV.Structure.Gray, Byte>(greyImage.Size);
CvInvoke.CornerHarris(segmentedImage, harrisImage, 2);
CvInvoke.Normalize(harrisImage, harrisImage, 0, 255, NormType.MinMax, DepthType.Cv32F);
There are now white pixels at the corners, but I cannot access them:
for (int j = 0; j < harrisImage.Rows; j++)
{
    for (int i = 0; i < harrisImage.Cols; i++)
    {
        Console.WriteLine(harrisImage[j, i].Intensity);
    }
}
It writes only 0s. How can I access them? And once I can access them, how can I find the 4 corners of the screen in the Harris image? Is there a function to find a perspectively transformed rectangle from a set of points?
EDIT:
On the OpenCV IRC they said FindContours is not that precise. And when I try to run it on the segmentedImage, I get this:
(I ran FindContours on the segmentedImage, then ApproxPolyDP, and drew the found contour on the original greyscale image.)
I cannot get it to find the contours more precisely...
EDIT2:
I cannot get this to work for me. Even with your code, I get the exact same result...
Here is my full Emgu code:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> imageFrameGrey = new Image<Emgu.CV.Structure.Gray, Byte>(bitmap);
Image<Gray, byte> segmentedImage = imageFrameGrey.InRange(new Gray(180), new Gray(255));

// get rid of small objects
int morph_size = 2;
Mat element = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Rectangle, new System.Drawing.Size(2 * morph_size + 1, 2 * morph_size + 1), new System.Drawing.Point(morph_size, morph_size));
CvInvoke.MorphologyEx(segmentedImage, segmentedImage, Emgu.CV.CvEnum.MorphOp.Open, element, new System.Drawing.Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());

// Find edges that form rectangles
List<RotatedRect> boxList = new List<RotatedRect>();
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(segmentedImage, contours, null, Emgu.CV.CvEnum.RetrType.External, ChainApproxMethod.ChainApproxSimple);
    int count = contours.Size;
    for (int i = 0; i < count; i++)
    {
        using (VectorOfPoint contour = contours[i])
        using (VectorOfPoint approxContour = new VectorOfPoint())
        {
            CvInvoke.ApproxPolyDP(contour, approxContour, CvInvoke.ArcLength(contour, true) * 0.01, true);
            if (CvInvoke.ContourArea(approxContour, false) > 10000)
            {
                if (approxContour.Size == 4)
                {
                    bool isRectangle = true;
                    System.Drawing.Point[] pts = approxContour.ToArray();
                    LineSegment2D[] edges = Emgu.CV.PointCollection.PolyLine(pts, true);
                    for (int j = 0; j < edges.Length; j++)
                    {
                        double angle = Math.Abs(edges[(j + 1) % edges.Length].GetExteriorAngleDegree(edges[j]));
                        if (angle < 80 || angle > 100)
                        {
                            isRectangle = false;
                            break;
                        }
                    }
                    if (isRectangle)
                        boxList.Add(CvInvoke.MinAreaRect(approxContour));
                }
            }
        }
    }
}
So as promised, I tried it myself. It's in C++, but you should be able to adapt it easily to Emgu.
First I get rid of small objects in your segmented image with an opening:
int morph_elem = CV_SHAPE_RECT;
int morph_size = 2;
Mat element = getStructuringElement(morph_elem, Size(2 * morph_size + 1, 2 * morph_size + 1), Point(morph_size, morph_size));
// Apply the opening
morphologyEx(segmentedImage, segmentedImage_open, CV_MOP_OPEN, element);
Then detect all the contours, keep the large ones, and check them for a rectangular shape:
vector<vector<Point>> contours;
findContours(segmentedImage_open, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (const vector<Point>& var : contours)
{
    double area = contourArea(var);
    if (area > 30000)
    {
        vector<Point> approx;
        approxPolyDP(var, approx, 0.01 * arcLength(var, true), true);
        if (4 == approx.size()) // rectangular shape
        {
            // do something
        }
    }
}
Here is the result with the contour in red and the approximated curve in green:
Edit:
You can improve your code by increasing the approximation factor until you get a contour with 4 points or you pass a threshold; just wrap a for loop around approxPolyDP, as sketched below. Defining a range for the approximation value prevents your code from failing when the object differs too much from a rectangle.
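A hedged sketch of that loop, using the Emgu (C#) calls from the EDIT2 code above rather than C++ (the 0.01 to 0.10 epsilon range is an arbitrary choice, not from the original answer):

// Increase the approximation factor until the polygon collapses to 4 points;
// eps is the fraction of the contour's perimeter used as the tolerance.
using (VectorOfPoint approxContour = new VectorOfPoint())
{
    double perimeter = CvInvoke.ArcLength(contour, true);
    for (double eps = 0.01; eps <= 0.10; eps += 0.01)
    {
        CvInvoke.ApproxPolyDP(contour, approxContour, eps * perimeter, true);
        if (approxContour.Size == 4)
            break; // found a quadrilateral approximation
    }
}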
I want to convolve Lena with itself in the frequency domain. Here is an excerpt from a book which suggests how the output of the convolution should look:
I have written the following application to achieve the convolution of two images in the frequency domain. The steps I followed are:
Convert Lena into a matrix of complex numbers.
Apply FFT to obtain a complex matrix.
Multiply the two complex matrices element by element (if that is the definition of convolution).
Apply IFFT to the result of the multiplication.
The output does not come out as expected:
Two issues are visible here:
The output only contains a black background with only one dot at its center.
The original image gets distorted after the execution of the convolution.
Note: FFT and I-FFT are working perfectly with the same libraries.
Note 2: There is a thread on SO that seems to discuss the same topic.
Source Code:
public static class Convolution
{
    public static Complex[,] Convolve(Complex[,] image, Complex[,] mask)
    {
        Complex[,] convolve = null;
        int imageWidth = image.GetLength(0);
        int imageHeight = image.GetLength(1);
        int maskWidth = mask.GetLength(0);
        int maskHeight = mask.GetLength(1);
        if (imageWidth == maskWidth && imageHeight == maskHeight)
        {
            FourierTransform ftForImage = new FourierTransform(image); ftForImage.ForwardFFT();
            FourierTransform ftForMask = new FourierTransform(mask); ftForMask.ForwardFFT();
            Complex[,] fftImage = ftForImage.FourierTransformedImageComplex;
            Complex[,] fftKernel = ftForMask.FourierTransformedImageComplex;
            Complex[,] fftConvolved = new Complex[imageWidth, imageHeight];
            for (int i = 0; i < imageWidth; i++)
            {
                for (int j = 0; j < imageHeight; j++)
                {
                    // Pointwise product in the frequency domain
                    fftConvolved[i, j] = fftImage[i, j] * fftKernel[i, j];
                }
            }
            FourierTransform ftForConv = new FourierTransform();
            ftForConv.InverseFFT(fftConvolved);
            convolve = ftForConv.GrayscaleImageComplex;
            //convolve = fftConvolved;
        }
        else
        {
            throw new Exception("padding needed");
        }
        return convolve;
    }
}
private void convolveButton_Click(object sender, EventArgs e)
{
    Bitmap lena = inputImagePictureBox.Image as Bitmap;
    Bitmap paddedMask = paddedMaskPictureBox.Image as Bitmap;
    Complex[,] cLena = ImageDataConverter.ToComplex(lena);
    Complex[,] cPaddedMask = ImageDataConverter.ToComplex(paddedMask);
    Complex[,] cConvolved = Convolution.Convolve(cLena, cPaddedMask);
    Bitmap convolved = ImageDataConverter.ToBitmap(cConvolved);
    convolvedImagePictureBox.Image = convolved;
}
There is a difference in how you call InverseFFT between the working FFT->IFFT application and the broken convolution application. In the latter case you do not explicitly pass the Width and Height parameters (which you are supposed to get from the input image):
public void InverseFFT(Complex[,] fftImage)
{
    if (FourierTransformedImageComplex == null)
    {
        FourierTransformedImageComplex = fftImage;
    }
    GrayscaleImageComplex = FourierFunction.FFT2D(FourierTransformedImageComplex, Width, Height, -1);
    GrayscaleImageInteger = ImageDataConverter.ToInteger(GrayscaleImageComplex);
    InputImageBitmap = ImageDataConverter.ToBitmap(GrayscaleImageInteger);
}
As a result, both Width and Height are 0 and the code skips over most of the inverse 2D transform. Initializing those parameters should give you something that is at least not all black:
if (FourierTransformedImageComplex == null)
{
    FourierTransformedImageComplex = fftImage;
    Width = fftImage.GetLength(0);
    Height = fftImage.GetLength(1);
}
Then you should notice some sharp white/black edges. Those are caused by wraparound in the output values. To avoid this, you may want to rescale the output after the inverse transform to fit the available range, with something such as:
// Find the peak magnitude of the convolution result
double maxAmp = 0.0;
for (int i = 0; i < imageWidth; i++)
{
    for (int j = 0; j < imageHeight; j++)
    {
        maxAmp = Math.Max(maxAmp, convolve[i, j].Magnitude);
    }
}
// Rescale so the peak magnitude maps to 255
double scale = 255.0 / maxAmp;
for (int i = 0; i < imageWidth; i++)
{
    for (int j = 0; j < imageHeight; j++)
    {
        convolve[i, j] = new Complex(convolve[i, j].Real * scale, convolve[i, j].Imaginary * scale);
    }
}
This should then give a more reasonable output:
However, that is still not as depicted in your book. At this point we have a 2D circular convolution. To get a 2D linear convolution, you need to make sure the images are both padded to the sum of the dimensions:
Bitmap lena = inputImagePictureBox.Image as Bitmap;
Bitmap mask = paddedMaskPictureBox.Image as Bitmap;
Bitmap paddedLena = ImagePadder.Pad(lena, lena.Width + mask.Width, lena.Height + mask.Height);
Bitmap paddedMask = ImagePadder.Pad(mask, lena.Width + mask.Width, lena.Height + mask.Height);
Complex[,] cLena = ImageDataConverter.ToComplex(paddedLena);
Complex[,] cPaddedMask = ImageDataConverter.ToComplex(paddedMask);
Complex[,] cConvolved = Convolution.Convolve(cLena, cPaddedMask);
And as you adjust the padding, you may want to change the padding color to black, since otherwise the padding will itself introduce a large correlation between the two images:
public class ImagePadder
{
    public static Bitmap Pad(Bitmap maskImage, int newWidth, int newHeight)
    {
        ...
        Grayscale.Fill(resizedImage, Color.Black);
Now you should be getting the following:
We are getting close, but the peak of the autocorrelation result is not at the center. That is because you call FourierShifter.FFTShift in the forward transform but do not call the corresponding FourierShifter.RemoveFFTShift in the inverse transform. If we adjust those (either remove FFTShift in ForwardFFT, or add RemoveFFTShift in InverseFFT, as sketched below), we finally get the expected result:
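A minimal sketch of the second option, assuming FourierShifter.RemoveFFTShift takes and returns a Complex[,] like its FFTShift counterpart:

public void InverseFFT(Complex[,] fftImage)
{
    if (FourierTransformedImageComplex == null)
    {
        FourierTransformedImageComplex = fftImage;
        Width = fftImage.GetLength(0);   // initialize dimensions as discussed above
        Height = fftImage.GetLength(1);
    }
    GrayscaleImageComplex = FourierFunction.FFT2D(FourierTransformedImageComplex, Width, Height, -1);
    // Undo the centering shift applied by FFTShift in the forward transform
    GrayscaleImageComplex = FourierShifter.RemoveFFTShift(GrayscaleImageComplex);
    GrayscaleImageInteger = ImageDataConverter.ToInteger(GrayscaleImageComplex);
    InputImageBitmap = ImageDataConverter.ToBitmap(GrayscaleImageInteger);
}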
I am doing a project in Emgu CV in C#.
I am stuck at this first step: I computed OpticalFlow.HS and LK, and I don't know how to use velx and vely to draw the motion as points in the frame and show them in an ImageBox.
OpticalFlow.HS(prev, frame1, true, velx, vely, 0.1d, new MCvTermCriteria(100));
Can anyone describe what to do? Even better, a code example would be a lot of help. I don't want to show the direction as color, only the motion as points in the frame.
I found the solution, so if anyone needs it, here is my example.
Image<Gray, Byte> coloredMotion2 = new Image<Gray, Byte>(frame1.Size);
for (int i = 0; i < coloredMotion2.Width; i += 2)
{
    for (int j = 0; j < coloredMotion2.Height; j += 2)
    {
        // Read the flow vector at this pixel and compute where it points to
        int dx = (int)CvInvoke.cvGetReal2D(velx, j, i);
        int dy = (int)CvInvoke.cvGetReal2D(vely, j, i);
        int pomi = i + dx;
        int pomj = j + dy;
        if (i != pomi && j != pomj)
        {
            // uncomment the line below if you want lines instead of points (it needs an RGB image, not gray)
            //CvInvoke.cvLine(coloredMotion, new Point(i, j), new Point(i + dx, j + dy), new MCvScalar(255, 0, 0), 1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
            CvInvoke.cvCircle(coloredMotion2, new Point(pomi, pomj), 1, new MCvScalar(255, 255, 255), 1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
        }
    }
}
motionImageBox.Image = coloredMotion2;
In OpenCV I use a std::vector<std::vector<cv::Point>>::const_iterator, as in this code:
std::vector<std::vector<cv::Point>> contours;
cv::findContours(contour, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

std::vector<std::vector<cv::Point>>::const_iterator itContours = contours.begin();
while (itContours != contours.end())
{
    if (Condition1)
        itContours = contours.erase(itContours);
    else if (Condition2)
        itContours = contours.erase(itContours);
    else if (Condition3)
        itContours = contours.erase(itContours);
    else
        ++itContours;
}
But now that I have started using EmguCV, I can't find how to do the same thing. How can I do it?
Have a look at the shape detection example in the EMGU.Examples folder. It shows you how to deal with contours. I have copied the relevant code below for your reference, but it's much better to have a look at the example.
#region Find triangles and rectangles
List<Triangle2DF> triangleList = new List<Triangle2DF>();
List<MCvBox2D> boxList = new List<MCvBox2D>(); // a box is a rotated rectangle
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST, storage); contours != null; contours = contours.HNext)
    {
        Contour<Point> currentContour = contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        if (currentContour.Area > 250) // only consider contours with area greater than 250
        {
            if (currentContour.Total == 3) // the contour has 3 vertices: it is a triangle
            {
                Point[] pts = currentContour.ToArray();
                triangleList.Add(new Triangle2DF(
                    pts[0],
                    pts[1],
                    pts[2]
                    ));
            }
            else if (currentContour.Total == 4) // the contour has 4 vertices
            {
                #region determine if all the angles in the contour are within [80, 100] degrees
                bool isRectangle = true;
                Point[] pts = currentContour.ToArray();
                LineSegment2D[] edges = PointCollection.PolyLine(pts, true);
                for (int i = 0; i < edges.Length; i++)
                {
                    double angle = Math.Abs(
                        edges[(i + 1) % edges.Length].GetExteriorAngleDegree(edges[i]));
                    if (angle < 80 || angle > 100)
                    {
                        isRectangle = false;
                        break;
                    }
                }
                #endregion
                if (isRectangle) boxList.Add(currentContour.GetMinAreaRect());
            }
        }
    }
#endregion
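As for the erase-style filtering in your C++ snippet: the Emgu 2.x FindContours result is a linked list traversed via HNext, so the simplest analogue is to collect the contours you want to keep rather than erasing. A minimal sketch, with Condition1..Condition3 as hypothetical placeholders for your own predicates:

// Collect the contours to keep instead of erasing from the sequence;
// Condition1..Condition3 stand in for your own tests on each contour.
List<Contour<Point>> kept = new List<Contour<Point>>();
for (Contour<Point> c = cannyEdges.FindContours(
         Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
         Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_LIST, storage);
     c != null; c = c.HNext)
{
    if (!Condition1(c) && !Condition2(c) && !Condition3(c))
        kept.Add(c);
}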
Let me know if you need any additional help or if any errors pop up.
Cheers,
Chris