OpenCV/EMGU (C#) detection of objects

I'm trying to write some image detection code for a pick and place machine. I'm new to OpenCV and have been going through a lot of examples, but I still have two outstanding questions. The first one I think I have a solution for, but I'm lost on the second.
I'm trying to detect the offset and angle of the bottom of a part: essentially, how far the object is from the cross (just an indicator of the center of the frame), and what angle of rotation the part has about its own center. I've used filters to show the pads of the components.
I'm pretty sure that I want to implement something like this http://felix.abecassis.me/2011/10/opencv-bounding-box-skew-angle/ - but I'm not sure how to translate the code into C# (http://www.emgu.com/wiki/index.php/Main_Page). Any pointers would be helpful.
One issue is if the part is smaller than the needle that's holding it and you can see both the part and the needle.
The square bit is the part I want to detect. The round part is part of the needle that is still exposed. I've got no clue how to approach this - I'm thinking something along the lines of detecting the straight lines and discarding the curved ones to generate a shape. Again, I'm interested in the offset from the center and the angle of rotation.

First you should detect every object with FindContours. Then you can use the minimum-area-rectangle function on every found contour. I assume you know the size and coordinates of your cross, so you can use the center coordinates of the MCvBox2D to get the offset to it. Furthermore, you can read the angle property of the box, so it should fit your purpose.
For the second part I would try to fit a least-squares rectangle. The round part seems to be very small compared to the square one, so maybe it will work.
Maybe the detection of quadrilaterals in the AForge library could help you too.
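The contour/min-area-rectangle idea could be sketched like this (a rough sketch assuming the Emgu CV 2.x API; grayImg and crossCenter are placeholder names for your binarized image and the known cross position):

```csharp
// Sketch assuming Emgu CV 2.x. grayImg is your binarized Image<Gray, byte>,
// crossCenter is the known center of the cross overlay (both placeholders).
using (MemStorage storage = new MemStorage())
{
    Contour<Point> contours = grayImg.FindContours(
        CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
        CvEnum.RETR_TYPE.CV_RETR_LIST, storage);

    for (; contours != null; contours = contours.HNext)
    {
        MCvBox2D box = contours.GetMinAreaRect();
        float offsetX = box.center.X - crossCenter.X; // horizontal offset from the cross
        float offsetY = box.center.Y - crossCenter.Y; // vertical offset from the cross
        float angle   = box.angle;                    // rotation in degrees
    }
}
```

In practice you would filter the contours by area first so that noise blobs don't produce boxes of their own.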
Edit:
To merge the contours I would try something like this:
// img is your binarized image; Pad_Area_min and Pad_Area_max are your area thresholds.
// System.Drawing.Rectangle's Top/Left/Right/Bottom are read-only, so track the
// merged edges separately and build the rectangle at the end.
int left = img.Width, top = img.Height, right = 0, bottom = 0;
Contour<Point> Pad_contours = img.FindContours(
    CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
    CvEnum.RETR_TYPE.CV_RETR_LIST);
while (Pad_contours != null)
{
    // Filter pads by area to avoid false-positive contours
    if (Pad_contours.Area < Pad_Area_max && Pad_contours.Area > Pad_Area_min)
    {
        // Merge the pad's bounding box into the running extremes
        Rectangle r = Pad_contours.BoundingRectangle;
        left   = Math.Min(left, r.Left);
        top    = Math.Min(top, r.Top);
        right  = Math.Max(right, r.Right);
        bottom = Math.Max(bottom, r.Bottom);
    }
    // Get the next pad contour
    Pad_contours = Pad_contours.VNext ?? Pad_contours.HNext;
}
Rectangle merged = Rectangle.FromLTRB(left, top, right, bottom);
The rectangle "merged" should enclose all your pads now. The problem is you won't get an angle this way because the rectangle is always axis-aligned. To solve this I would iterate through the contours as shown above and store every point of every contour in an extra data container. Then I would use the minimum-area-rectangle function mentioned above and apply it to all the gathered points. This should give you a bounding rectangle with an angle property.
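A sketch of those two steps, assuming the Emgu CV 2.x PointCollection API (padContours, padAreaMin and padAreaMax are placeholder names):

```csharp
// Gather every point of every accepted pad contour, then fit one
// rotated rectangle around all of them (Emgu CV 2.x assumed).
List<PointF> allPoints = new List<PointF>();
for (Contour<Point> c = padContours; c != null; c = c.HNext)
{
    if (c.Area < padAreaMax && c.Area > padAreaMin)
        foreach (Point p in c)
            allPoints.Add(new PointF(p.X, p.Y));
}
MCvBox2D box = PointCollection.MinAreaRect(allPoints.ToArray());
// box.center is the pad cluster's center, box.angle its rotation.
```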

Related

Get Smooth Fitting Curve for given set of points in C# WPF

I have a set of points (x, y) for a given image (the red marked line). I need to convert the red-lined shape's points into a blue-lined smooth curve (an egg-like / oval / elliptical curve) in C#/WPF, to remove the uneven border of the shape. Any help please?
Edit:
I got the red line points from the Emgu CV FindContours method. If it is possible to get the blue-lined curve using image processing, then that's also fine.
The approach I would take is to first decimate the curve using something like Ramer–Douglas–Peucker or Visvalingam–Whyatt. Apply this until you get a few points. You might need to do some adaptation to make these work for closed polylines.
Once you have only a few points, you should be able to use them as control points for a spline: either creating a polynomial for the entire curve, or creating multiple quadratic/cubic segments.
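For illustration, a minimal Ramer–Douglas–Peucker sketch (written here for an open polyline; for a closed contour you would split it at two far-apart points and simplify each half):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

static class Decimate
{
    // Keep only points that deviate from the chord by more than epsilon.
    public static List<PointF> Simplify(IList<PointF> pts, double epsilon)
    {
        if (pts.Count < 3) return new List<PointF>(pts);
        PointF a = pts[0], b = pts[pts.Count - 1];
        int index = -1; double maxDist = 0;
        for (int i = 1; i < pts.Count - 1; i++)
        {
            double d = PerpendicularDistance(pts[i], a, b);
            if (d > maxDist) { maxDist = d; index = i; }
        }
        if (maxDist <= epsilon)
            return new List<PointF> { a, b }; // drop all in-between points

        // Recurse on both halves around the farthest point.
        var left  = Simplify(pts.Take(index + 1).ToList(), epsilon);
        var right = Simplify(pts.Skip(index).ToList(), epsilon);
        left.RemoveAt(left.Count - 1); // avoid duplicating the split point
        left.AddRange(right);
        return left;
    }

    static double PerpendicularDistance(PointF p, PointF a, PointF b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double len = Math.Sqrt(dx * dx + dy * dy);
        if (len == 0)
            return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
        return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
    }
}
```

The epsilon value controls how aggressively the curve is decimated; increase it until only a handful of control points remain.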

c# detect points of polygon in black/white image

I have to detect all the points of a white polygon on a black background in C#. Here is an image of several examples. I wouldn't think it is too difficult, but I am unable to detect this properly across all the variations. My code is too long to post here, but basically I go through each side and look for where it changes from black to white. Should I use OpenCV? I was hoping for a simple algorithm I could implement in C#. Any suggestions? Thank you.
In your case I would do this:
pre process image
so remove noise in color if present (like JPEG distortion etc.) and binarize the image.
select circumference pixels
simply loop through all pixels and set each white pixel that has at least one black neighbour to a distinct color that will represent your circumference ROI mask, or add the pixel position to a list of points instead.
apply connected components analysis
so you need to find out the order of the points (how they are connected together). The easiest way to do this is flood filling the ROI from the first found pixel until the whole ROI is filled, remembering the order of filled points (similar to A*). There should be 2 distinct paths at some point, and both should join at the end. So identify these 2 points and construct the circumference point order (by reversing one half and handling the shared part if present).
find vertexes
if you compute the angle change between all consecutive pixels, then on straight lines the angle change should be near zero, and near vertices much bigger. So threshold that and you've got your vertices. To make this robust you need to compute the slope angle from somewhat more distant pixels rather than the closest ones. Also, thresholding this angle change against a sliding average often provides more stable results.
So find out how far apart the pixels should be for the angle computation, so the noise is not too big while the vertices still produce big peaks, and also find a threshold value that is safely above any noise.
This can also be done with a Hough transform and/or the find-contours functions that are present in many CV libraries. Another option is to regress/fit lines to the point list directly and compute their intersections, which can provide sub-pixel precision.
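The circumference-pixel step above can be sketched like this (img here is assumed to be a plain 0/1 array rather than a CV image type):

```csharp
// Collect every white pixel that has at least one black (or out-of-image)
// 8-neighbour; these pixels form the circumference ROI.
static List<Point> BoundaryPixels(int[,] img)
{
    int h = img.GetLength(0), w = img.GetLength(1);
    var boundary = new List<Point>();
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (img[y, x] == 0) continue; // only white pixels can be on the boundary
            bool onEdge = false;
            for (int dy = -1; dy <= 1 && !onEdge; dy++)
                for (int dx = -1; dx <= 1 && !onEdge; dx++)
                {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || nx < 0 || ny >= h || nx >= w || img[ny, nx] == 0)
                        onEdge = true; // black or outside neighbour found
                }
            if (onEdge) boundary.Add(new Point(x, y));
        }
    return boundary;
}
```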
For more info see related QAs:
Backtracking in A star
Finding holes in 2d point sets
growth fill

Get object's rotation angle on image

I'm new to image processing. Now I have a problem.
I'm writing a simple program in C# that has to detect some objects on images using some samples.
For example here's the sample:
Later I have to compare objects that I find on a loaded image against it.
Sizes of objects and samples are always equal. Images are binarized. We always know the rotation point (it's the image's center). Samples are always normalized, but we never know the object's rotation angle relative to the normal.
Here are some objects that I find on the loaded image:
The question is how to find angle #1.
Sorry for my English and thanks.
If you are using the AForge libraries, you can utilize its extension too, named Accord.NET.
Accord.NET is similar to AForge: you install it, add the references to your project and you are done.
After that you can simply use RawMoments, passing the target image, and then use them to compute CentralMoments.
At this point you can get the angle of your image with the CentralMoments method GetOrientation().
I used it on an hand-gestures recognition project and worked like a charm.
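The moment-based orientation described above looks roughly like this (a sketch assuming Accord.NET's Accord.Imaging.Moments API; bmp is a placeholder for your binarized Bitmap):

```csharp
// Compute raw image moments, derive central moments, read the orientation.
RawMoments raw = new RawMoments(bmp);
CentralMoments central = new CentralMoments(raw);
float angleRadians = central.GetOrientation();        // main-axis orientation
double angleDegrees = angleRadians * 180.0 / Math.PI; // convert for convenience
```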
UPDATE:
I have just checked that GetOrientation returns only the angle but not the direction.
So an upside-down image has the same angle as the original.
A fix can be pixel counting, but this time you will only have 2 samples (worst case) to check instead of 360 (worst case).
Update2
If you have a lot of samples, I suggest you filter them by the size of the rotated image.
Example:
I get the image, I see that it is in a horizontal position (90°), I rotate it by 90°, and now I have the original width and height that I can use to skip the samples that are not similar, like:
if (Founded.Width != Sample.Width) // You can add a tolerance range too, in case
    continue;                      // the rotation added some pixels
To recap, you have a sample image and a rotated image of the same source image. You also have two values, 0 and 1, for the pixels.
A simple pseudo-code approach that can yield moderate success can be implemented using a binary search:
Start with a rotation value of 180 degrees, both clockwise and counter-clockwise.
Rotate the image to both values.
XOR the original image with the rotated one.
Count the number of non-zero pixels (i.e. mismatches) and check if it's less than the margin of error you define.
Continue the search with half of the rotation angle.
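The search above could be sketched as follows; Rotate and CountMismatches are hypothetical helpers (e.g. rotation via an image-rotation filter, mismatches by XOR-ing the two binary images and counting the non-zero pixels):

```csharp
// Halve the step each iteration, always moving toward the rotation
// that produces fewer mismatching pixels against the original.
// Rotate() and CountMismatches() are hypothetical helpers.
double angle = 0, step = 180;
while (step > 0.5)
{
    long cw  = CountMismatches(original, Rotate(sample, angle + step));
    long ccw = CountMismatches(original, Rotate(sample, angle - step));
    angle += (cw < ccw) ? step : -step;
    step /= 2;
}
// angle now approximates the sample's rotation relative to the original.
```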
Look at this:
Rotation angle of scanned document

Finding simple template in image of unknown scale

I have a floor layout (fairly simple, white background, black content) and a template of a chair on the floor. I know all the orientations I need to look for (simply up, down, left, right), but I do not know the scale of the floor template coming in.
I have it working with AForge where, when copying a chair from the layout so I know the exact scale, I can find all chairs on the floor. That gives me exactly what I want (I just need the center x, y of the chair). Going forward I would like to automate this, since I won't know the exact scale of the floor plan being uploaded.
I played with the Emgu.CV examples to try and find it (SURFFeature example project) but using just the chair as the template did not work. It doesn't seem to find any observedDescriptors (it is null), I assume because the chair on its own isn't too complex. I tried a more complex template (chair+desk, though it wouldn't work normally because the chair relative to desk isn't consistent). The results didn't seem useful, it pointed to a few random places on the floor plan but didn't seem quite right.
Any ideas on ways to determine the scale?
Alright, I was able to get this working. What I ended up doing is drawing a square inside a circle and placing the object I want inside the square.
Then I use: Blob[] blobs = blobCounter.GetObjectsInformation( ); to get all blobs on the page.
Loop through the blobs and look for all circles and squares, adding each to its own list:
List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(b);
if (shapeChecker.IsCircle(edgePoints, out center, out radius))
{
    circs.Add(b);
}
else if (shapeChecker.IsConvexPolygon(edgePoints, out corners))
{
    if (corners.Count == 3)
        tris.Add(b);
    else if (corners.Count == 4)
        boxes.Add(b);
}
Loop through each circle, and for each circle through all squares, looking for a pair with approximately the same center point.
To get the object inside, I copy a crop of the image from inside the square (add a few pixels to x and y, remove a few from width and height). This gives me the white space and the object within the square.
I then use an autocrop (from here, though modified because I didn't need to rotate/greyscale) to cut away the white space and am left with just the image I want!
Sorry, I don't have example images in this; I don't have enough rep to post them yet.

Outer Bounding Points of Image/Shape using C#

I have some images that I'd like to draw a polygon around the outer edges of. The images themselves are on transparent backgrounds, and I've created an array of the pixels in the images which contain a point and are not transparent (or white).
Now, my question is: how do I draw an accurate polygon around the outer edge points? I've used a Graham Scan algorithm that I read about to create a convex hull around the edges but this doesn't seem to work for objects with concavities. For example:
http://i48.tinypic.com/4s0lna.png
The image on the left gets filled in using this method with the one on the right. As you can see, it's 'filling in' a little too much.
I assume there must be some other algorithm or approach that can be used to solve this, but I'm not sure of where to look or what it might be called. Could anyone point me in the right direction? I'm using C#/.net and hopefully there might be something that already exists which could work along these lines.
I think the 2D "alpha shapes" algorithm would be the right choice for you.
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Alpha_shapes_2/Chapter_main.html
Alpha shapes can be considered a generalization of the convex hull algorithm that allows for the generation of more general shapes.
By using alpha shapes you will have control over the level of detail captured by the resultant shape, by changing the alpha parameter value.
You can try the Java applet here: http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html
to get a better understanding of what this algorithm does.
You can start on a pixel by pixel level, using a flood-fill approach.
Start in a corner, checking that it does have zero alpha.
Check the neighbours for zero alpha and iterate until there are no unchecked neighbours.
This gives you a mask for the image which will consist of two simply connected regions, the interior and exterior.
The set you seek then consists of:
all the points in the exterior which are on the boundary of the interior.
You can then turn that into a polygon by:
Take an initial polygon that consists of all the points in the edge set
Remove redundant vertices that lie along straight edges.
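That last step (removing redundant vertices) can be sketched as a collinearity check on each vertex against its neighbours:

```csharp
// Drop vertices that lie on a straight line between their neighbours;
// a zero cross product means the three points are collinear.
static List<Point> RemoveCollinear(List<Point> poly)
{
    var corners = new List<Point>();
    int n = poly.Count;
    for (int i = 0; i < n; i++)
    {
        Point prev = poly[(i + n - 1) % n], cur = poly[i], next = poly[(i + 1) % n];
        long cross = (long)(cur.X - prev.X) * (next.Y - prev.Y)
                   - (long)(cur.Y - prev.Y) * (next.X - prev.X);
        if (cross != 0) corners.Add(cur); // keep only true corners
    }
    return corners;
}
```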
