I'm new to image processing, and I have a problem.
I'm writing a simple program in C# that has to detect certain objects in images by comparing them with sample images.
For example, here's the sample:
Later I have to compare the objects that I find in a loaded image against it.
The sizes of the objects and the samples are always equal. The images are binarized. We always know the rotation point (it's the image's center). The samples are always normalized, but we never know an object's rotation angle relative to the normal.
Here are some objects that I find in the loaded image:
The question is how to find angle #1.
Sorry for my English and thanks.
If you are using the AForge libraries, you can also use their extension, Accord.NET.
Accord.NET is similar to AForge: you install it, add the references to your project, and you are done.
After that you can simply compute the RawMoments by passing in the target image, and then use them to compute the CentralMoments.
At this point you can get the angle of your image via the CentralMoments method GetOrientation().
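A minimal sketch, assuming Accord.NET's Accord.Imaging.Moments classes and that image is your binarized Bitmap (the exact constructor arguments may differ slightly between versions):

using Accord.Imaging.Moments;

RawMoments raw = new RawMoments(image, 2);        // raw moments up to 2nd order
CentralMoments central = new CentralMoments(raw); // central moments derived from them
float angle = central.GetOrientation();           // orientation angle, in radians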
I used it in a hand-gesture recognition project and it worked like a charm.
UPDATE:
I have just checked that GetOrientation returns only the angle, not the direction.
So an upside-down image has the same angle as the original.
A fix can be pixel counting, but this time you will have only 2 candidates to check (worst case) instead of 360 (worst case).
UPDATE 2:
If you have a lot of samples, I suggest you filter them by the size of the rotated image.
Example:
I get the image, I see that it is in a horizontal position (90°), I rotate it by 90°, and now I have the original width and height, which I can use to skip the samples that are not similar, like:

if (found.Width != sample.Width) // you can add a tolerance range too, in case
    continue;                    // the rotation added some pixels
To recap: you have a sample image and a rotated image of the same source image, and the pixels take only the two values 0 and 1.
A simple procedure that can yield moderate success is a binary search over the angle (a C# sketch follows the steps):
Start with a rotation value of 180 degrees, both clockwise and counter-clockwise.
Rotate the image by both values.
XOR the original image with the rotated one.
Count the number of 1 pixels (the mismatches; XOR gives 0 where the pixels agree) and check whether it is below the margin of error you define.
Continue the search with half of the rotation angle.
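A rough C# sketch of the steps above; Rotate() and Mismatches() are hypothetical helpers (Rotate spins the binary image about its center, Mismatches XORs two equal-sized images and counts the differing pixels). Note the mismatch count is not guaranteed to be unimodal, so this halving search is a heuristic rather than a true binary search:

double FindAngle(bool[,] sample, bool[,] found, long maxError)
{
    double angle = 0, step = 180;
    while (step >= 0.5)
    {
        long cw   = Mismatches(sample, Rotate(found, angle + step));
        long ccw  = Mismatches(sample, Rotate(found, angle - step));
        long stay = Mismatches(sample, Rotate(found, angle));
        if (cw < stay && cw <= ccw) angle += step;     // clockwise matches better
        else if (ccw < stay) angle -= step;            // counter-clockwise matches better
        if (Math.Min(stay, Math.Min(cw, ccw)) <= maxError)
            break;                                     // within the margin of error
        step /= 2;                                     // continue with half the angle
    }
    return angle;
}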
Look at this:
Rotation angle of scanned document
Related
I have two sets of X,Y coordinates as separate lists. Both represent the same irregular polygonal shape, but in different orientations and sizes/scales.
I need to write a program in C# to compare the two point sets and rotate one of the shapes so that it aligns with the other, i.e. so that they are in the same orientation.
I tried searching for a solution and learned that a concave hull combined with angle differences can help, but I could not find a good C# implementation of it.
Can someone help me find a minimal way to achieve this?
Edit: The two point sets might not be identical; one may contain more points than the other.
I have the contour coordinates of a shape and a PNG of the same shape, but in a different orientation. I want to read the PNG and calculate the angle to turn it by so that it fits the contour.
Calculate image moments for each point cloud.
Evaluate the orientation of both clouds via the theta angle.
Rotate one cloud by the theta difference.
Use other moments (centroid, etc.) to find the translation and scale. A sketch of the first three steps follows.
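A sketch of the orientation step, assuming each shape is a List<PointF> (cloudA and cloudB are hypothetical names for your two point sets):

using System.Linq;

static double Orientation(List<PointF> pts)
{
    double cx = pts.Average(p => p.X), cy = pts.Average(p => p.Y);
    double mu20 = 0, mu02 = 0, mu11 = 0;          // second-order central moments
    foreach (PointF p in pts)
    {
        double dx = p.X - cx, dy = p.Y - cy;
        mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
    }
    // standard orientation-from-moments formula
    return 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);
}

// rotate one cloud by the difference of the two orientations
double theta = Orientation(cloudA) - Orientation(cloudB);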
I have to detect all the points of a white polygon on a black background in C#. Here is an image of several examples. I wouldn't think it is too difficult, but I am unable to detect this properly with all the variations. My code is too long to post here, but basically I went along each side and looked for where it changes from black to white. Should I use OpenCV? I was hoping for a simple algorithm I could implement in C#. Any suggestions? Thank you.
In your case I would do this:
preprocess the image
Remove color noise if present (like JPEG compression artifacts) and binarize the image.
select circumference pixels
Simply loop through all pixels and set each white pixel that has at least one black neighbor to a distinct color that will represent your circumference ROI mask, or add the pixel position to a list of points instead.
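A minimal sketch, assuming img is a bool[,] (true = white) with dimensions h x w:

var boundary = new List<Point>();
for (int y = 1; y < h - 1; y++)
    for (int x = 1; x < w - 1; x++)
        if (img[y, x] && (!img[y - 1, x] || !img[y + 1, x] ||
                          !img[y, x - 1] || !img[y, x + 1]))
            boundary.Add(new Point(x, y)); // white pixel with a black 4-neighbor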
apply connected components analysis
You need to find out the order of the points (how they are connected together). The easiest way to do this is to flood fill the ROI from the first found pixel until the whole ROI is filled, remembering the order of the filled points (similar to A*). At some point there should be 2 distinct paths, and both should join at the end. So identify these 2 points and construct the circumference point order (by reversing one half and handling the shared part, if present).
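A sketch of the flood fill that records the order in which ROI pixels are reached (the splitting and re-joining of the two path halves described above is left out for brevity):

var roi = new HashSet<Point>(boundary);
var order = new List<Point>();
var queue = new Queue<Point>();
queue.Enqueue(boundary[0]);
roi.Remove(boundary[0]);
int[] dx = { 1, -1, 0, 0, 1, 1, -1, -1 };
int[] dy = { 0, 0, 1, -1, 1, -1, 1, -1 };
while (queue.Count > 0)
{
    Point p = queue.Dequeue();
    order.Add(p);                            // fill order approximates connectivity order
    for (int i = 0; i < 8; i++)              // 8-connected neighborhood
    {
        var n = new Point(p.X + dx[i], p.Y + dy[i]);
        if (roi.Remove(n)) queue.Enqueue(n); // visit each ROI pixel exactly once
    }
}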
find vertices
If you compute the angle change between all consecutive pixels, then on straight lines the angle change should be near zero, and near vertices much bigger. So threshold that and you have your vertices. To make this robust, you need to compute the slope angle from somewhat more distant pixels rather than the closest ones. Also, thresholding this angle change against a sliding average often provides more stable results.
So find out how far apart the pixels should be for the angle computation so that the noise is not too big while the vertices still show big peaks, and also find a threshold value that is safely above any noise.
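A sketch of the vertex test, assuming "order" holds the constructed circumference order from the previous step; d and threshold are the tuning parameters you must pick as described above:

int n = order.Count, d = 5;                    // slope measured d pixels apart
double threshold = 0.3;                        // radians; must sit above the noise
var slope = new double[n];
var vertices = new List<Point>();
for (int i = 0; i < n; i++)
{
    Point a = order[(i - d + n) % n], b = order[(i + d) % n];
    slope[i] = Math.Atan2(b.Y - a.Y, b.X - a.X);
}
for (int i = 0; i < n; i++)
{
    double diff = Math.Abs(slope[(i + 1) % n] - slope[i]);
    if (diff > Math.PI) diff = 2 * Math.PI - diff;   // handle angle wrap-around
    if (diff > threshold)                            // big change => vertex candidate
        vertices.Add(order[i]);
}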
This can also be done with the Hough transform and/or the find-contours functions that are present in many CV libraries. Another option is to regress/fit lines to the point list directly and compute their intersections, which can provide sub-pixel precision.
For more info see related QAs:
Backtracking in A star
Finding holes in 2d point sets
growth fill
I have a known object (a square) in 3D space and I know the exact positions of its corners*.
I take a photo of the object, and I am already able to precisely identify which pixel on the photo corresponds to which corner of the square. (I also know the camera's sensor resolution and the lens' focal length**).
How do I calculate the position and orientation of the camera? I want to implement the solution in C#. This sounds like a rather basic matrix operation used in 3D game engines all the time, just executed in the opposite direction. I hope it truly is. :)
*All position and length info is expressed in [meter] in a local coordinate system. No lat/lons are used.
**Focal length is not expressed in 35mm equivalent, but by the width and height of the viewport that is 1 meter away from the focal point.
The term for what you're trying to do is 'Homography'.
The OpenCV library provides a variety of functions to achieve this - you can read about some of the maths here:
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=findhomography
In particular, the findHomography function will utilise a list of point correspondences between a sample image and a camera image to calculate a matrix for the camera position.
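A hedged sketch using Emgu CV's 2.x-era C# wrapper (matching the linked 2.4 docs; the class and enum names differ in newer versions). The square's corners are given in its own plane, and GetDetectedCorners() is a hypothetical helper returning the four pixels you identified:

PointF[] objCorners =
{
    new PointF(0, 0), new PointF(1, 0),
    new PointF(1, 1), new PointF(0, 1)
};
PointF[] imgCorners = GetDetectedCorners();   // your four detected pixel positions
HomographyMatrix h = CameraCalibration.FindHomography(
    objCorners, imgCorners, CvEnum.HOMOGRAPHY_METHOD.DEFAULT, 3);
// h maps plane coordinates to image pixels; decomposing it together with
// your camera intrinsics yields the camera rotation and translation.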
I'm trying to write some image detection code for a pick and place machine. I'm new to OpenCV and have been going through a lot of examples, but I still have two outstanding questions. The first one I think I have a solution for, but I'm lost on the second.
I'm trying to detect the offset and angle of the bottom of a part. Essentially: how far is the object from the cross (just an indicator of the center of the frame), and what angle of rotation does the part have about its center? I've used filters to show the pads of the components.
I'm pretty sure that I want to implement something like this http://felix.abecassis.me/2011/10/opencv-bounding-box-skew-angle/ - but I'm not sure how to translate the code into C# (http://www.emgu.com/wiki/index.php/Main_Page). Any pointers would be helpful.
One issue is if the part is smaller than the needle that's holding it and you can see both the part and the needle.
The square bit is the part I want to detect. The round part is part of the needle that is still exposed. I've got no clue how to approach this - I'm thinking something along the lines of detecting the straight lines and discarding the curved ones to generate a shape. Again, I'm interested in the offset from the center and the angle of rotation.
First you should detect every object with FindContours. Then you can use the minimum-area-rectangle function on every found contour. I assume you know the size and coordinates of your cross, so you can use the center coordinates of the MCvBox2D to get the offset to it. Furthermore you can read the angle property of the box, so it should fit your purpose.
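A sketch in Emgu CV 2.x style (matching the code further below), assuming img is your binarized Image<Gray, byte> and (crossX, crossY) is the cross center:

for (Contour<Point> c = img.FindContours(); c != null; c = c.HNext)
{
    MCvBox2D box = c.GetMinAreaRect();        // minimum-area (rotated) rectangle
    float offsetX = box.center.X - crossX;    // offset of the part from the cross
    float offsetY = box.center.Y - crossY;
    float angle = box.angle;                  // rotation of the part
}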
For the second part I would try to fit a least-squares rectangle. The round part seems to be very small compared to the square one, so maybe it will work.
Maybe the quadrilateral detection in the AForge library could help you too.
Edit:
To merge the contours I would try something like this:
// img is your binarized Image<Gray, byte>; System.Drawing.Rectangle's
// Top/Left/Right/Bottom are read-only, so track the bounds separately
int left = img.Width, top = img.Height, right = 0, bottom = 0;
Contour<Point> Pad_contours = img.FindContours(
    CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
    CvEnum.RETR_TYPE.CV_RETR_LIST);
while (Pad_contours != null)
{
    // filter pads by area to avoid false-positive contours
    if (Pad_contours.Area < Pad_Area_max && Pad_contours.Area > Pad_Area_min)
    {
        // merge this pad's bounding rectangle into the running bounds
        Rectangle r = Pad_contours.BoundingRectangle;
        left = Math.Min(left, r.Left);
        top = Math.Min(top, r.Top);
        right = Math.Max(right, r.Right);
        bottom = Math.Max(bottom, r.Bottom);
    }
    // get next pad (CV_RETR_LIST returns a flat list, so HNext walks all contours)
    Pad_contours = Pad_contours.HNext;
}
Rectangle merged = Rectangle.FromLTRB(left, top, right, bottom);
The rectangle "merged" should now enclose all your pads. The problem is you won't get an angle this way, because the bounding rectangle is always axis-aligned. To solve this I would iterate through the contours as shown above, store every point of every contour in a separate container, and then apply the minimum-area-rectangle function mentioned above to all the gathered points. This should give you a bounding rectangle with an angle property.
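A sketch of that idea, again in Emgu CV 2.x style: gather the points of every accepted pad contour and fit a single rotated rectangle around all of them (PointCollection.MinAreaRect is assumed to be available, as in Emgu 2.x):

var allPoints = new List<PointF>();
for (Contour<Point> c = img.FindContours(); c != null; c = c.HNext)
    if (c.Area < Pad_Area_max && c.Area > Pad_Area_min)
        foreach (Point p in c)
            allPoints.Add(new PointF(p.X, p.Y));
MCvBox2D box = PointCollection.MinAreaRect(allPoints.ToArray());
float angle = box.angle;                      // angle of the merged pad region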
I am working on a C# program to process images (given as int[,]).
I have a 2D array of pixels, and I need to rotate them around a point, then scale them down to fit the original array. I already found articles about using a matrix to translate to a point, rotate, and then translate back. What remains is to scale the resulting image to fit an array of the original size.
How can that be done? (Preferably with 2 equations, one for x and one for y.)
The Matrix class has both a RotateAt and a Scale method. What more do you need?
Have a look here. That should give you all the math behind doing coordinate rotations.
You need to find a transform from the resultant array back to the original image: you transform points in the destination to points in the source image and copy them. Anti-aliasing via oversampling is also an option. Your rotation matrix can also apply a scaling; just multiply the matrix by the scale factor (this assumes a 2x2 matrix). If you're using a 3x3 matrix for rotation, scaling, and translation, then just multiply the upper-left 2x2 by the scale factor.
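A minimal sketch of that inverse-mapping idea: for each destination pixel, apply the inverse transform (un-rotate about the center, then un-scale) and sample the source array with a nearest-neighbor lookup; this gives you the two equations for x and y:

static int[,] RotateScale(int[,] src, double angleDeg, double scale)
{
    int h = src.GetLength(0), w = src.GetLength(1);
    var dst = new int[h, w];
    double a = -angleDeg * Math.PI / 180;     // inverse rotation
    double cx = w / 2.0, cy = h / 2.0;        // rotation center
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double dx = x - cx, dy = y - cy;
            // the two equations: destination (x, y) -> source (sx, sy)
            double sx = (dx * Math.Cos(a) - dy * Math.Sin(a)) / scale + cx;
            double sy = (dx * Math.Sin(a) + dy * Math.Cos(a)) / scale + cy;
            int ix = (int)Math.Round(sx), iy = (int)Math.Round(sy);
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                dst[y, x] = src[iy, ix];      // nearest-neighbor sample
        }
    return dst;
}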
Lastly, at the risk of some humility here is a link to some old TP6/asm DOS code I wrote for doing full screen roto-zooming. Strange the stuff that sticks around on the net:
http://www.hornet.org/cgi-bin/scene-search.cgi?search=Paul%20H.%20Kahler
Everything you need to do can be done with Bitmap images in GDI+ (using the System.Drawing... namespaces). These classes are designed and optimized for doing exactly this sort of thing (image manipulation). Is there any particular reason you can't work with an actual Bitmap instead of an int[,]? You could even write a very simple routine to create a Bitmap from an int[,], do whatever you need to on the Bitmap, and then convert it back to an int[,] at the end.
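For example, a simple (slow) sketch using GetPixel/SetPixel, assuming your int values are packed ARGB; LockBits would be the faster route for real use:

static Bitmap ToBitmap(int[,] data)
{
    int h = data.GetLength(0), w = data.GetLength(1);
    var bmp = new Bitmap(w, h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            bmp.SetPixel(x, y, Color.FromArgb(data[y, x]));
    return bmp;
}

static int[,] ToArray(Bitmap bmp)
{
    var data = new int[bmp.Height, bmp.Width];
    for (int y = 0; y < bmp.Height; y++)
        for (int x = 0; x < bmp.Width; x++)
            data[y, x] = bmp.GetPixel(x, y).ToArgb();
    return data;
}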