I've been handed a CSV file containing a series of coordinates, from which lines should be drawn on top of a bitmap grid; I can get the values out and convert them into ints for the DrawLine function just fine.
The problem is that these coordinates are essentially percentages: x:0.5 and y:0.5 represent dead centre (50% of X and 50% of Y), and x:1.0/y:1.0 would be the top right, regardless of the absolute dimensions of whatever is being plotted onto (in this instance a 1000x1500 bitmap). In addition, screen/window coordinates start in the top left, which doesn't affect the x-axis, but the y-axis needs to be inverted somehow.
So what do I need to do to the coordinates to get them to plot correctly? To be honest I've got the X-axis working fine, it's the Y-axis giving me the problems.
(The window containing the bitmap is 1600x1600, FWIW.)
Well, the naive way is to simply calculate the single closest pixel, i.e. round(WIDTH*x).
But that's bad in general, because some pixels would be left blank and some would be mapped multiple times.
What I'd do is calculate the percentage of coverage for each point: a point can cover 75% of one pixel and 25% of its neighbour, and then you fill the colour of each pixel accordingly.
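A minimal sketch of that coverage-weighted plotting in Python (the splat function and its weights are illustrative, not from any particular library):

```python
import math

def splat(pixels, fx, fy, value=1.0):
    """Distribute `value` over the four pixels around the fractional
    position (fx, fy), weighted by how much of each pixel is covered."""
    x0, y0 = int(math.floor(fx)), int(math.floor(fy))
    dx, dy = fx - x0, fy - y0
    # Each neighbour receives a share proportional to the overlap area.
    for px, py, w in [(x0,     y0,     (1 - dx) * (1 - dy)),
                      (x0 + 1, y0,     dx       * (1 - dy)),
                      (x0,     y0 + 1, (1 - dx) * dy),
                      (x0 + 1, y0 + 1, dx       * dy)]:
        if 0 <= py < len(pixels) and 0 <= px < len(pixels[0]):
            pixels[py][px] += value * w
```

The four weights always sum to 1, so no intensity is lost or duplicated.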
Without more details of what is wrong, I'll take a guess and say that you are calculating your Y value upside-down. Try it this way:
round(HEIGHT*(1.0-y))
Then, give us more details of what you are having trouble with.
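In full, the mapping might look like this (a Python sketch; to_pixel is a made-up name, and you may also want to clamp the result to width-1/height-1 so x:1.0 stays inside the bitmap):

```python
def to_pixel(x, y, width, height):
    """Map normalized (0..1) coordinates with origin at the bottom left
    to bitmap pixel coordinates with origin at the top left."""
    px = round(width * x)
    py = round(height * (1.0 - y))  # flip the y-axis
    return px, py
```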
I have to detect all the points of a white polygon on a black background in C#. Here is an image of several examples. I wouldn't think it is too difficult, but I am unable to detect this properly with all the variations. My code is too much to post here, but basically I went through each side and looked for where it changes from black to white. Should I use OpenCV? I was hoping for a simple algorithm I could implement in C#. Any suggestions? Thank you.
In your case I would do this:
pre-process image
so remove noise in colour if present (like JPG compression artifacts etc.) and binarize the image.
select circumference pixels
simply loop through all pixels and set each white pixel that has at least one black neighbour to a distinct colour that will represent your circumference ROI mask, or add the pixel position to a list of points instead.
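That selection step could be sketched in Python like this (assuming a binarized image as a 2D list of 0 = black, 1 = white):

```python
def boundary_pixels(img):
    """Return (x, y) positions of white pixels that touch at least
    one black pixel among their 8 neighbours."""
    h, w = len(img), len(img[0])

    def black(x, y):
        # Treat out-of-bounds as black so shapes touching the edge count too.
        return not (0 <= x < w and 0 <= y < h) or img[y][x] == 0

    return [(x, y)
            for y in range(h) for x in range(w)
            if img[y][x] == 1
            and any(black(x + dx, y + dy)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0))]
```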
apply connected components analysis
so you need to find out the order of the points (how they are connected together). The easiest way to do this is to use flood filling of the ROI from the first found pixel until the whole ROI is filled, remembering the order of filled points (similar to A*). At some point there should be 2 distinct paths, and both should join at the end. So identify these 2 points and construct the circumference point order (by reversing one half and handling the shared part if present).
find vertexes
if you compute the angle change between all consecutive pixels, then on straight lines the angle change should be near zero, and near vertices much bigger. So threshold that and you've got your vertices. To make this robust you need to compute the slope angle from slightly more distant pixels, not the closest ones. Also, thresholding this angle change against a sliding average often gives more stable results.
So find out how far apart the pixels should be when computing the angle, so that the noise is not too big while the vertices still have big peaks, and also find a threshold value that is safely above any noise.
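To illustrate, here is one possible Python sketch of the angle-change thresholding (k and thresh are the tuning parameters described above and must be chosen together):

```python
import math

def find_vertices(points, k=3, thresh=0.2):
    """Given the circumference points in order, flag indices where the
    local direction changes sharply. The direction at i is measured
    between points k steps apart to suppress pixel noise; `thresh`
    is in radians."""
    n = len(points)

    def angle(i):
        (x0, y0), (x1, y1) = points[i - k], points[(i + k) % n]
        return math.atan2(y1 - y0, x1 - x0)

    vertices = []
    for i in range(n):
        d = angle((i + 1) % n) - angle(i)
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap into [-pi, pi)
        if abs(d) > thresh:
            vertices.append(i)
    return vertices
```

On a clean square each corner produces a small cluster of flagged indices, which you would then collapse to a single vertex (e.g. by taking the peak of each cluster).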
This can also be done with a Hough transform and/or the find-contours functions present in many CV libs. Another option is to regress/fit lines to the point list directly and compute their intersections, which can provide sub-pixel precision.
For more info see related QAs:
Backtracking in A star
Finding holes in 2d point sets
growth fill
I'm trying to write some image detection code for a pick and place machine. I'm new to OpenCV and have been going through a lot of examples, but I still have two outstanding questions. The first one I think I have a solution for, but I'm lost on the second.
I'm trying to detect the offset and angle of the bottom of a part. Essentially: how far is the object from the cross (just an indicator of the centre of the frame), and what angle of rotation does the part have about its own centre? I've used filters to show the pads of the components.
I'm pretty sure that I want to implement something like this http://felix.abecassis.me/2011/10/opencv-bounding-box-skew-angle/ - but I'm not sure how to translate the code into C# (http://www.emgu.com/wiki/index.php/Main_Page). Any pointers would be helpful.
One issue is if the part is smaller than the needle that's holding it and you can see both the part and the needle.
The square bit is the part I want to detect. The round part is part of the needle that is still exposed. I've got no clue how to approach this - I'm thinking something along the lines of detecting the straight lines and discarding the curved ones to generate a shape. Again, I'm interested in the offset from the center and the angle of rotation.
First you should detect every object with findContours. Then you can use the minimum-area-rectangle function on every found contour. I assume you know the size and coordinates of your cross, so you can use the centre coordinates of the MCvBox2D to get the offset to it. Furthermore, you can read the angle property of the box, so it should fit your purpose.
For the second part I would try to fit a least-squares rectangle. The round part seems to be very small compared to the square one, so maybe it will work.
Maybe the detection of quadrilaterals in the AForge library could help you too.
Edit:
To merge the contours I would try something like this:
Rectangle merged = Rectangle.Empty; // img is your binarized image
Contour<Point> padContours = img.FindContours(
    CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
    CvEnum.RETR_TYPE.CV_RETR_LIST);
while (padContours != null)
{
    // Filter pads by area to avoid false-positive contours
    if (padContours.Area < Pad_Area_max && padContours.Area > Pad_Area_min)
    {
        // Merge this pad's bounding rectangle into the running union
        Rectangle r = padContours.BoundingRectangle;
        merged = merged.IsEmpty ? r : Rectangle.Union(merged, r);
    }
    // Get the next pad: descend into child contours first, then siblings
    padContours = padContours.VNext ?? padContours.HNext;
}
The rectangle "merged" should now enclose all your pads. The problem is you won't get an angle this way, because the rectangle is always axis-aligned. To solve this I would iterate through the contours as shown above and store every point of every contour in an extra data container. Then I would use the minimum-area-rectangle function mentioned above and apply it to all the gathered points. This should give you a bounding rectangle with an angle property.
I have a question about sorting found corners from chessboard.
I'm writing my program in C# using OpenCvSharp.
I need to sort found corners which are points described by X and Y.
This is the part of my code:
...
CvPoint2D32f[] corners;
bool found = Cv.FindChessboardCorners(gray, board_sz, out corners, out corner_count,
ChessboardFlag.NormalizeImage | ChessboardFlag.FilterQuads);
Cv.FindCornerSubPix(gray, corners, corner_count, new CvSize(11,11), new CvSize(-1,-1),
Cv.TermCriteria(CriteriaType.Epsilon | CriteriaType.Iteration, 30, 0.1));
Cv.DrawChessboardCorners(img1, board_sz, corners, found);
...
After that I'm displaying found corners in ImageBox:
see good order in all pictures
and this is the order of corners that I always need, but when I rotate the chessboard a bit the found corners change like this:
see bad order in all pictures
I always need the same order of these points (like in picture 1), so I decided to use:
var ordered = corners.OrderBy(p => p.Y).ThenBy(p => p.X);
corners = ordered.ToArray();
but it doesn't work like I want:
see bad result 1 in all pictures
see bad result 2 in all pictures
The main point is that my chessboard won't be rotated much, just by a small angle.
The second point is that the corners must be ordered starting from the first white square at the top left of the board.
I know, that the base point (0,0) is on the left top corner of the image and the positive values of Y are increasing in the direction to the bottom of image and positive values of X are increasing in direction to the right side of image.
I'm working on the program to obtain this ordering (these pictures are edited in a picture editor):
see example 1 in all pictures
see example 2 in all pictures
Thanks for any help.
Please find below some examples for how OpenCV's findChessboardCorners might return the corner point list and how drawChessboardCorners might display the corners.
For more clarity, the order of indices of the circumscribing quadrilateral is added as 0,1,2,3.
Basically there are 4 possible rotations of the result leading to the initial red marker being either:
topLeft
topRight
bottomLeft
bottomRight
So when you'd like to re-sort, you can take this knowledge into account and change the order of indices accordingly. The good news is that with this approach you don't have to look at all the x,y values: it's sufficient to compare the first and last value in the list to find out what the rotation is.
To simplify the sorting you might want to "reshape" the list into an array that fits the chessboard pattern you supplied to findChessboardCorners in the first place, e.g. 9x9. In Python's numpy there is a function for that; I don't know how this would be done in C#.
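For illustration, a Python sketch of the reshape plus a flip-based reordering (the names are made up; this handles the 180° and mirrored cases, while a 90° rotation of a square pattern would additionally need a transpose):

```python
def reorder(corners, cols):
    """Reshape the flat corner list into a cols-wide grid, then flip
    rows/columns so the first point ends up top-left. Assumes the
    detector returned a consistent row-major order."""
    rows = [corners[i:i + cols] for i in range(0, len(corners), cols)]
    first, last = rows[0][0], rows[-1][-1]
    if first[1] > last[1]:              # first point below the last: flip rows
        rows = rows[::-1]
    if first[0] > last[0]:              # first point right of the last: flip columns
        rows = [r[::-1] for r in rows]
    return rows
```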
Work on straightened points. Determine the slope of the image, for instance by taking the difference of the upper-right and upper-left points. See Rotation (mathematics). Instead of taking the cos you could just as well take -diff.Y (the minus because we want to rotate back) and diff.X for the sin. The effect of taking these "wrong" values is only a uniform scaling.
Now determine the minimum and maximum of x and y of these straightened points. You get two pieces of information from them: 1) an offset from the coordinate origin, and 2) the size of the board. Now rescale the transformed points so their coordinates lie between 0.0 and 8.0. If the image were perfect, all the points' coordinates would now have integer values.
Because they don't, round the coordinates to make them all integers. Sorting these integer coordinates by y and then by x should yield your desired order, because points on the same horizontal line now really have the same y value. That was not the case before: since they probably all had different y-coordinates, only the secondary sort by x had any effect.
In order to sort the original points, put the transformed ones and the original ones into the same class or struct (e.g. a Tuple) and sort them together.
I have a floor layout (fairly simple, white background, black content) and a template of a chair on the floor. I know all orientations I need to look for (simple up, down, left, right) but I do not know the scale of the floor template coming in.
I have it working with AForge: when I copy a chair from the layout itself (so I know the exact scale), I can find all the chairs on the floor. That gives me exactly what I want (I just need the centre x,y of each chair). Going forward I would like to automate this, but I won't know the exact scale of the floor plan being uploaded.
I played with the Emgu.CV examples to try to handle this (the SURFFeature example project), but using just the chair as the template did not work. It doesn't seem to find any observedDescriptors (the value is null), I assume because the chair on its own isn't complex enough. I tried a more complex template (chair + desk, though that wouldn't work in practice because the chair's position relative to the desk isn't consistent). The results didn't seem useful: it pointed to a few random places on the floor plan but didn't seem quite right.
Any ideas on ways to determine the scale?
Alright, I was able to get this working. What I ended up doing is drawing a square inside a circle and placing the object I want inside the square.
Then I use Blob[] blobs = blobCounter.GetObjectsInformation(); to get all blobs on the page.
Loop through the blobs and look for all circles and squares, adding each to its own list:
if (shapeChecker.IsCircle(edgePoints, out center, out radius))
{
    circs.Add(b);
}
else if (shapeChecker.IsConvexPolygon(edgePoints, out corners)) // corners filled by the shape checker
{
    if (corners.Count == 3)
        tris.Add(b);
    else if (corners.Count == 4)
        boxes.Add(b);
}
Loop through each circle, and for each circle all squares, and look for two with approximately the same center point.
To get the object inside, I copy a crop of the image from inside the square (add a few pixels to x,y, remove a few from width and height). This gives me the white space and the object within the square.
I then use an autocrop (from here, though modified because I didn't need to rotate/greyscale) to cut away the white space, and I am left with just the image I want!
sorry I don't have example images in this - I don't have enough rep to post them yet
I am a beginner in C#.NET. I am on a project to process approach maps. Such a map shows the surrounding area of a runway, where a flight can fly in order to land.
The map is a bitmap image; it has longitudes and latitudes marked on the borders of the image.
The aim of the project is to get the geographic coordinates (lat/long) of points on the map (when a point is clicked or hovered over with the mouse), based on the given coordinates on the border of the map. So if we give the input for one point with its lat/long coordinates, the other points on the map can be interpolated.
Suppose there are X pixels between any two longitudes and Y pixels between any two latitudes. If we set a reference point, then depending on the distance (the number of pixels from the reference point in the x and y directions individually) of the pixel being hovered over or clicked, we can show the lat/longitude of that point in a small window (maybe a tooltip or pop-up).
the math for the interpolation could be:
new lat = ref lat + [minutes between latitude lines / Y] * (vertical distance between reference point and new point, in pixels)
new long = ref long - [minutes between longitude lines / X] * (horizontal distance between reference point and new point, in pixels)
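The two formulas above amount to plain linear interpolation from a reference point; a Python sketch (function and parameter names are made up for illustration, and it assumes the projection is locally linear):

```python
def pixel_to_geo(px, py, ref_px, ref_py, ref_lon, ref_lat,
                 lon_per_px, lat_per_px):
    """Linearly interpolate the geographic coordinates of a clicked
    pixel from a reference point and the per-pixel degree spacing.
    lon_per_px / lat_per_px come from counting the pixels between
    two neighbouring graticule lines on the map border."""
    lon = ref_lon + (px - ref_px) * lon_per_px
    lat = ref_lat - (py - ref_py) * lat_per_px  # screen y grows downward
    return lon, lat
```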
There is a point called the mid point at the centre of the runway (at the centre of the graph). I also need to find the angle made with the vertical of the map by the line joining the mid point and the new point (where the mouse hovers or clicks).
So please give me ideas on how to start the project and what I need (toolbar controls, methods) to build the GUI containing the picture window and a pop-up window (showing information about that point or pixel) wherever I click the mouse. Thanks in advance.
The question asked in the headline is answered as follows:
double distance = Math.Sqrt(Math.Pow(x2 - x1, 2) + Math.Pow(y2 - y1, 2));
That's the distance between two points on a plane, and hence on a bitmap.
The question asked in the body isn't answered easily at all. If you have a given map, the function defining the distance between two coordinates may not be linear. Here's an article on Map projection that shows some of the different map types. To be able to calculate what you need, you first need to know what kind of map you're actually working on, and hence adjust your formulas accordingly.
If your map is only of small size, this may not make much of a difference. You were talking about a runway at one point, if this is just for one airport, then projection isn't necessarily an issue. If you're working out distances between two runways of different airports, that will be a different matter.
Your question is quite specific to your needs and has a few elements that could be questions in their own right. You might want to break it down into several questions and/or research each item independently, e.g.:
You'll want to look into WPF or Windows Forms.
You'll need to learn how to calculate the angle between two points.
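For the angle to the map's vertical asked about above, a small Python sketch (the function name is illustrative; note the sign flip because screen y grows downward):

```python
import math

def angle_from_vertical(cx, cy, px, py):
    """Angle in degrees between the line from (cx, cy) to (px, py)
    and the upward vertical, on a bitmap where y grows downward."""
    # atan2(dx, -dy): swapping the usual arguments measures from the
    # y-axis instead of the x-axis, and negating dy accounts for the
    # inverted screen y-axis.
    return math.degrees(math.atan2(px - cx, -(py - cy)))
```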