CvInvoke.MinAreaRect(contour) returns the wrong angle - c#

I have a contour of a number plate and I want to check whether it is tilted. I used CvInvoke.MinAreaRect(contour), but it always returns an angle of -90 even when the plate is obviously tilted; you can see the contour I drew in the picture below.
Does anyone know what is happening, and how can I solve it?
Here is the code:
Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> adaptive_threshold_img = gray.ThresholdAdaptive(new Gray(255), AdaptiveThresholdType.GaussianC, ThresholdType.BinaryInv, 11, new Gray(2));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(adaptive_threshold_img, contours, hier, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
double max_area = 0;
VectorOfPoint max_contour = new VectorOfPoint();
for (int i = 0; i < contours.Size; i++)
{
    double temp = CvInvoke.ContourArea(contours[i]);
    if (temp > max_area)
    {
        max_area = temp;
        max_contour = contours[i];
    }
}
VectorOfVectorOfPoint contour_to_draw = new VectorOfVectorOfPoint(max_contour);
CvInvoke.DrawContours(gray, contour_to_draw, 0, new MCvScalar(255), 2);
CvInvoke.Imshow("plate", gray);
RotatedRect plate_feature = CvInvoke.MinAreaRect(max_contour);
CvInvoke.WaitKey();
CvInvoke.DestroyAllWindows();

Try CvInvoke.Threshold() instead of gray.ThresholdAdaptive(). Set a proper threshold value and you'll get a better contour than before. Keep in mind that MinAreaRect reports its angle in the range [-90, 0), so if poor thresholding makes the largest contour an axis-aligned blob (for example, the image border), an angle of exactly -90 is the expected output rather than the plate's tilt.
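A minimal sketch of that suggestion, assuming the same "2.PNG" input as in the question; the threshold value of 100 is a placeholder to tune for your image:

Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> binary = new Image<Gray, byte>(gray.Size);
// Global threshold: pixels darker than 100 become 255 (BinaryInv), the rest 0.
CvInvoke.Threshold(gray, binary, 100, 255, ThresholdType.BinaryInv);
// Then run FindContours and MinAreaRect on "binary" exactly as in the question.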

Related

OpenCV FindContours does not find the correct rectangle

Using CvInvoke.Canny and CvInvoke.FindContours I'm trying to find the rectangle containing the item name (Schematic: Maple Desk). This rectangle is shown in the image below:
I'm able to find a lot of rectangles, but I'm not able to get this one. I've tried a lot of different thresholds for Canny, but to no avail. The following image shows all rectangles I currently get:
Any ideas how to tackle this? Do I need to use other thresholds or another approach? I already experimented with grayscale and blurring, but that didn't give better results. I've added my current source below; the original image I'm using is this:
public Mat ProcessImage(Mat img)
{
    UMat filter = new UMat();
    UMat cannyEdges = new UMat();
    Mat rectangleImage = new Mat(img.Size, DepthType.Cv8U, 3);

    //Convert the image to grayscale and filter out the noise
    //CvInvoke.CvtColor(img, filter, ColorConversion.Bgr2Gray);
    //Remove noise
    //CvInvoke.GaussianBlur(filter, filter, new System.Drawing.Size(3, 3), 1);

    // Canny and edge detection
    double cannyThreshold = 1.0; //180.0
    double cannyThresholdLinking = 1.0; //120.0
    //CvInvoke.Canny(filter, cannyEdges, cannyThreshold, cannyThresholdLinking);
    CvInvoke.Canny(img, cannyEdges, cannyThreshold, cannyThresholdLinking);

    // Find rectangles
    List<RotatedRect> rectangleList = new List<RotatedRect>();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(cannyEdges, contours, null, RetrType.List, ChainApproxMethod.ChainApproxSimple);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour, CvInvoke.ArcLength(contour, true) * 0.05, true);
                // Only consider contours with area greater than 250
                if (CvInvoke.ContourArea(approxContour, false) > 250)
                {
                    // The contour has 4 vertices.
                    if (approxContour.Size == 4)
                    {
                        // Determine if all the angles in the contour are within [80, 100] degrees
                        bool isRectangle = true;
                        System.Drawing.Point[] pts = approxContour.ToArray();
                        LineSegment2D[] edges = Emgu.CV.PointCollection.PolyLine(pts, true);
                        for (int j = 0; j < edges.Length; j++)
                        {
                            double angle = Math.Abs(edges[(j + 1) % edges.Length].GetExteriorAngleDegree(edges[j]));
                            if (angle < 80 || angle > 100)
                            {
                                isRectangle = false;
                                break;
                            }
                        }
                        if (isRectangle) rectangleList.Add(CvInvoke.MinAreaRect(approxContour));
                    }
                }
            }
        }
    }

    // Draw rectangles
    foreach (RotatedRect rectangle in rectangleList)
    {
        CvInvoke.Polylines(rectangleImage, Array.ConvertAll(rectangle.GetVertices(), System.Drawing.Point.Round), true,
            new Bgr(Color.DarkOrange).MCvScalar, 2);
    }

    //Drawing a light gray frame around the image
    CvInvoke.Rectangle(rectangleImage,
        new Rectangle(System.Drawing.Point.Empty,
            new System.Drawing.Size(rectangleImage.Width - 1, rectangleImage.Height - 1)),
        new MCvScalar(120, 120, 120));

    //Draw the labels
    CvInvoke.PutText(rectangleImage, "Rectangles", new System.Drawing.Point(20, 20),
        FontFace.HersheyDuplex, 0.5, new MCvScalar(120, 120, 120));

    Mat result = new Mat();
    CvInvoke.VConcat(new Mat[] { img, rectangleImage }, result);
    return result;
}
Edit 1:
After some more fine-tuning with the following thresholds for Canny:
cannyThreshold = 100
cannyThresholdLinking = 400
and using a minimum size of 10000 for all contour areas, I can get the following result:
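In code, this tuning amounts to changing three values in ProcessImage above (a sketch; everything else stays the same):

double cannyThreshold = 100.0;          // was 1.0
double cannyThresholdLinking = 400.0;   // was 1.0
// ...
if (CvInvoke.ContourArea(approxContour, false) > 10000)   // was 250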
Edit 2:
For those interested: this was solved using the current detection, no changes were needed. I used the 2 detected rectangles in the screenshot above to get the location of the missing rectangle containing the item name.
The result can be found here: https://github.com/josdemmers/NewWorldCompanion

How to fix skewed image for OCR?

I just started an OCR project a few weeks ago. I'm stuck on an image-skew problem; I've tried many different methods and nothing seems to work, so please help. :)
The skewed image is given below.
I want a final image such as the one given below.
I already tried to deskew the image, but I was unable to get the final image.
public Image<Gray, byte> ImageDeskewOuter(Image<Gray, byte> img)
{
    img = img.Resize(img.Height, img.Width, Inter.Linear);
    Image<Gray, byte> tmp = new Image<Gray, byte>(img.Bitmap);
    tmp = tmp.ThresholdToZero(new Gray(180));
    int nZero = tmp.CountNonzero()[0] == 0 ? 1 : tmp.CountNonzero()[0];
    if (tmp.Bytes.Length / nZero < 10)
        img = tmp.Not();
    else
        img = img.ThresholdToZero(new Gray(80)).InRange(new Gray(0), new Gray(60)).Not();
    tmp = new Image<Gray, byte>(img.Bitmap).Canny(50, 150);

    List<Rectangle> rlist = new List<Rectangle>();
    Rectangle min = new Rectangle();
    Rectangle max = new Rectangle();
    VectorOfVectorOfPoint contour = new VectorOfVectorOfPoint();
    Mat hier = new Mat();
    CvInvoke.FindContours(tmp, contour, hier, RetrType.External, ChainApproxMethod.ChainApproxSimple);
    if (contour.Size > 0)
    {
        for (int i = 0; i < contour.Size; i++)
        {
            Rectangle rec = CvInvoke.BoundingRectangle(contour[i]);
            if (rec.Width > 30 && rec.Width < 120 && rec.Height > 50 && rec.Height < 120)
            {
                rlist.Add(rec);
            }
        }
        min = rlist.OrderBy(x => x.X).FirstOrDefault();
        max = rlist.OrderByDescending(x => x.X).FirstOrDefault();
        Rectangle roi = Rectangle.Union(min, max);
        img.ROI = roi;
    }
    if (rlist.Count > 0)
    {
        // LineAngle is a helper not shown in the post; a sketch is given below.
        double angle = LineAngle(min.X, min.Bottom, max.X, max.Bottom, min.X, min.Bottom, max.X, min.Bottom) + 3;
        img = img.Rotate(angle, new Gray(255), false);
    }
    return img;
}
Final image using above function
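Note that LineAngle is not defined in the post; a minimal sketch of such a helper, assuming it is meant to return the signed angle in degrees between the line through the first point pair and the line through the second point pair:

// Hypothetical helper (not in the original post): angle in degrees between
// the line (x1,y1)-(x2,y2) and the line (x3,y3)-(x4,y4).
private static double LineAngle(double x1, double y1, double x2, double y2,
                                double x3, double y3, double x4, double y4)
{
    double a1 = Math.Atan2(y2 - y1, x2 - x1);
    double a2 = Math.Atan2(y4 - y3, x4 - x3);
    return (a1 - a2) * 180.0 / Math.PI;
}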
Step 1: Select the desired contour, apply CvInvoke.MinAreaRect(), and copy the image area into a new image.
Step 2: Create a bitmap image whose width is the sum of all cropped image widths (if you want to put each image side by side).
Step 3: Draw the images onto the bitmap using Graphics.
public Image<Gray, byte> ImageDeskew(Image<Gray, byte> img)
{
    img = img.Resize(img.Height, img.Width, Inter.Linear);
    Image<Gray, byte> tmp = new Image<Gray, byte>(img.Bitmap);
    // AdjustContrast is a helper defined elsewhere in the poster's project.
    tmp = new Image<Gray, byte>(AdjustContrast(tmp.Bitmap, 70));
    img = tmp.Not().ThresholdAdaptive(new Gray(255), AdaptiveThresholdType.MeanC, ThresholdType.Binary, 45, new Gray(10));
    tmp = new Image<Gray, byte>(img.Bitmap).Canny(50, 150);

    List<Image<Gray, byte>> imglist = new List<Image<Gray, byte>>();
    VectorOfVectorOfPoint contour = new VectorOfVectorOfPoint();
    Mat hier = new Mat();
    CvInvoke.FindContours(tmp, contour, hier, RetrType.External, ChainApproxMethod.ChainApproxSimple);
    if (contour.Size > 0)
    {
        for (int i = 0; i < contour.Size; i++)
        {
            RotatedRect rRect = CvInvoke.MinAreaRect(contour[i]);
            float area = rRect.Size.Width * rRect.Size.Height;
            if (area > (img.Bytes.Length / 10))
            {
                if (rRect.Angle > -45) imglist.Add(img.Copy(rRect));
                else imglist.Add(img.Copy(rRect).Rotate(-90, new Gray(255), false));
            }
        }
    }
    if (imglist.Count > 0)
    {
        int xPx = imglist.Sum(x => x.Width);
        Bitmap bitmap = new Bitmap(xPx, img.Height);
        using (Graphics g = Graphics.FromImage(bitmap))
        {
            xPx = 0;
            imglist.Reverse();
            foreach (Image<Gray, byte> i in imglist)
            {
                g.DrawImage(i.Not().Bitmap, xPx, 0);
                xPx += i.Width;
            }
        }
        img = new Image<Gray, byte>(bitmap).Not();
    }
    return img;
}

How to get rid of unnecessary lines with emgu cv

I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    List<int> diameters = new List<int>();
    string inputFile = @"path.jpg";
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);

    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();
    Image<Gray, byte> bluredImage = grayImage;
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    Image<Gray, byte> edgedImage = bluredImage;
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    Image<Gray, byte> closedImage = edgedImage;
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
    // Diameters are not computed yet in this snippet; returned empty so it compiles.
    return diameters;
}
The result so far:
I think there is a problem with the approximation. How do I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it might be something to do with your median blur. I would check whether you are blurring enough for the Canny edge detection to work well. Another method you could use is Dilate: try dilating your Canny edge detection and see if you get better results.
EDIT
Here is the code:
public List<int> GetDiameters()
{
    //List to hold output diameters
    List<int> diametors = new List<int>();

    //File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    //Read in the image and store it as a mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    //Mat object that will hold the output of the gaussian blur
    Mat gaussianBlur = new Mat();

    //Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    //Mat object that will hold the output of the canny
    Mat canny = new Mat();

    //Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    //Mat object that will hold the output of the dilate
    Mat dilate = new Mat();

    //Dilate the canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    //Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();

    //Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    //Variables to hold relevant info on what is the biggest contour
    int biggest = 0;
    int index = 0;

    //Find the biggest contour (by point count)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    //Once all contours have been looped over, add the biggest contour's index to the list
    diametors.Add(index);

    //Return the list
    return diametors;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
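If the dilated contours come out too loose, one option (a sketch, reusing the morphological Close operation that already appears in the question's own code) is to close the Canny output instead of only dilating it; Close dilates and then erodes, so gaps get bridged without permanently growing the contour:

Mat closed = new Mat();
Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size(5, 5), new System.Drawing.Point(-1, -1));
// Close = dilate followed by erode: bridges small gaps in the Canny edges.
CvInvoke.MorphologyEx(canny, closed, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 3, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());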

EmguCV Blob Counter

INPUT IMAGE
Hi, I am trying to learn EmguCV 3.3 and I have a question about blob counting. As you can see in the INPUT IMAGE, I have black, uneven blobs.
I am trying to do something like this:
OUTPUT IMAGE
I need to draw a rectangle around each blob and count them.
I tried some approaches, but none of them worked.
I need Help();
You can use FindContours() or SimpleBlobDetector() to achieve that; here is an example that uses the first one:
Image<Gray, Byte> grayImage = new Image<Gray, Byte>("mRGrc.jpg");
Image<Gray, Byte> canny = new Image<Gray, byte>(grayImage.Size);
int counter = 0;

using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        contours.ApproxPoly(contours.Perimeter * 0.05, storage);
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
        counter++;
    }

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
    }

Console.WriteLine("Number of blobs: " + counter);
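Note that the snippet above uses the old Emgu CV 2.x contour API (MemStorage, Contour<Point>, cvDrawContours), which no longer exists in 3.x. Since the question is about EmguCV 3.3, here is a minimal sketch of the same idea with the 3.x API, assuming "blobs.png" is a hypothetical binarized image with white blobs on a black background:

Image<Gray, byte> binary = new Image<Gray, byte>("blobs.png");
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
for (int i = 0; i < contours.Size; i++)
{
    // Draw a bounding rectangle around each blob.
    Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
    CvInvoke.Rectangle(binary, r, new MCvScalar(255), 1);
}
Console.WriteLine("Number of blobs: " + contours.Size);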

EmguCV SURF with cam?

I am new to Emgu CV. I would like to use SURF to detect more than one pattern with a cam, like in this video. For now, as a starting point, I am trying to get this working for a single pattern.
I examined EmguCV's SURF example. When I try to implement that code in the cam capture example, an error occurs at run time. I searched around but did not find any code example.
So, can you suggest a code snippet or a well-explained tutorial?
Thanks very much in advance.
The code I am working on is below:
...........................................
FrameRaw = capture.QueryFrame();
CamImageBox.Image = FrameRaw;
Run(FrameRaw);
...........................................
private void Run(Image<Bgr, byte> TempImage)
{
    Image<Gray, Byte> modelImage = new Image<Gray, byte>("sample.jpg");
    Image<Gray, Byte> observedImage = TempImage.Convert<Gray, Byte>();
    // Image<Gray, Byte> observedImage = new Image<Gray,byte>("box_in_scene.png");
    Stopwatch watch;
    HomographyMatrix homography = null;

    SURFDetector surfCPU = new SURFDetector(500, false);

    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;
    Matrix<float> dist;
    Matrix<byte> mask;

    if (GpuInvoke.HasCuda)
    {
        GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
        using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(modelImage))
        //extract features from the object image
        using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
        using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
        using (GpuBruteForceMatcher matcher = new GpuBruteForceMatcher(GpuBruteForceMatcher.DistanceType.L2))
        {
            modelKeyPoints = new VectorOfKeyPoint();
            surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
            watch = Stopwatch.StartNew();

            // extract features from the observed image
            using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(observedImage))
            using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
            using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
            using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, 2, 1))
            using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuMatchIndices.Size, 1))
            {
                observedKeyPoints = new VectorOfKeyPoint();
                surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);

                matcher.KnnMatch(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, 2, null);

                indices = new Matrix<int>(gpuMatchIndices.Size);
                dist = new Matrix<float>(indices.Size);
                gpuMatchIndices.Download(indices);
                gpuMatchDist.Download(dist);

                mask = new Matrix<byte>(dist.Rows, 1);
                mask.SetValue(255);
                Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

                int nonZeroCount = CvInvoke.cvCountNonZero(mask);
                if (nonZeroCount >= 4)
                {
                    nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                    if (nonZeroCount >= 4)
                        homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
                }

                watch.Stop();
            }
        }
    }
    else
    {
        //extract features from the object image
        modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
        //MKeyPoint[] kpts = modelKeyPoints.ToArray();
        Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);

        watch = Stopwatch.StartNew();

        // extract features from the observed image
        observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
        Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);

        BruteForceMatcher matcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
        matcher.Add(modelDescriptors);

        int k = 2;
        indices = new Matrix<int>(observedDescriptors.Rows, k);
        dist = new Matrix<float>(observedDescriptors.Rows, k);
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);

        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

        int nonZeroCount = CvInvoke.cvCountNonZero(mask);
        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
            if (nonZeroCount >= 4)
                homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
        }

        watch.Stop();
    }

    //Draw the matched keypoints
    Image<Bgr, Byte> result = Features2DTracker.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DTracker.KeypointDrawType.NOT_DRAW_SINGLE_POINTS);

    #region draw the projected region on the image
    if (homography != null)
    {
        //draw a rectangle along the projected model
        Rectangle rect = modelImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);

        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
    }
    #endregion

    // ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", GpuInvoke.HasCuda ? "GPU" : "CPU", watch.ElapsedMilliseconds));
}
I found the SURF tutorial you used, but I don't see why it should cause an error. Have you been able to execute the tutorial code by itself, without the GPU acceleration complication?
Moreover, what error occurred?
