I have two sets of X,Y coordinates as separate lists. Both represent the same irregular polygonal shape, but in different orientations and at different scales.
I need to write a program in C# to compare the two point sets and rotate one of the shapes so that it aligns with the other, i.e. so that both end up in the same orientation.
I searched for a solution and learned that a concave hull combined with an angle difference can help, but I could not find a good C# implementation of it.
Can someone help me with a minimal way to achieve this?
Edit: The two point sets might not be identical; one may contain more points than the other.
I have the contour coordinates of a shape and a PNG of the same shape in a different orientation. I want to read the PNG and calculate the angle by which to turn it so that it fits the contour.
Calculate image moments for each point cloud
Evaluate the orientation (theta angle) of both clouds from the moments.
Rotate one cloud by the theta difference.
Use the other moments (centroid etc.) to find the translation and scale.
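A minimal sketch of steps 1-3 in plain C#, using the standard second-order central moments of the raw point lists (the CloudAlign helper and tuple representation are mine, not from any library):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class CloudAlign
    {
        // Orientation (theta) of a point cloud from its second-order central moments.
        public static double Orientation(IList<(double X, double Y)> pts)
        {
            double cx = pts.Average(p => p.X);
            double cy = pts.Average(p => p.Y);
            double mu20 = pts.Sum(p => (p.X - cx) * (p.X - cx));
            double mu02 = pts.Sum(p => (p.Y - cy) * (p.Y - cy));
            double mu11 = pts.Sum(p => (p.X - cx) * (p.Y - cy));
            return 0.5 * Math.Atan2(2.0 * mu11, mu20 - mu02);
        }

        // Rotate a cloud about its centroid by the given angle (radians).
        public static List<(double X, double Y)> Rotate(IList<(double X, double Y)> pts, double angle)
        {
            double cx = pts.Average(p => p.X);
            double cy = pts.Average(p => p.Y);
            double c = Math.Cos(angle), s = Math.Sin(angle);
            return pts.Select(p => (cx + c * (p.X - cx) - s * (p.Y - cy),
                                    cy + s * (p.X - cx) + c * (p.Y - cy))).ToList();
        }
    }

    // Usage: rotate cloudB by the orientation difference so it lines up with cloudA:
    // var aligned = CloudAlign.Rotate(cloudB, CloudAlign.Orientation(cloudA) - CloudAlign.Orientation(cloudB));

Note that the moment-based orientation is only defined modulo 180 degrees, so you may need to test both candidate rotations and keep the one that matches better.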
I was thinking about writing a very simple program to compare two ARGB arrays pixel by pixel. Both images are the same resolution, taken with the same camera source.
Since the camera is being held still, I expected it to be a fairly simple program that compares the bitmap sources by:
Convert every pixel into a grayscale pixel
Literally compare each pixel from position 0 to N.
Have an isClose method that allows an approximate tolerance of +/- 3.
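For reference, here is roughly what that comparison looks like (a minimal sketch; the array and method names are placeholders of mine):

    using System;

    static class FrameCompare
    {
        // pixelsA/pixelsB are the two ARGB frames as int arrays of equal length.
        public static int CountDifferingPixels(int[] pixelsA, int[] pixelsB, int tolerance = 3)
        {
            int errors = 0;
            for (int i = 0; i < pixelsA.Length; i++)
            {
                if (!IsClose(ToGray(pixelsA[i]), ToGray(pixelsB[i]), tolerance))
                    errors++;
            }
            return errors;
        }

        // Simple luminance conversion of a packed ARGB pixel.
        static int ToGray(int argb)
        {
            int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
            return (int)(0.299 * r + 0.587 * g + 0.114 * b);
        }

        static bool IsClose(int a, int b, int tolerance) => Math.Abs(a - b) <= tolerance;
    }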
The result is that I get too many error bits. But when I save JPEGs of them and view them with the naked eye, they look identical (which is the case).
Why do you think I am seeing so much error when comparing them?
BTW - I am trying to write a very basic version of motion detection.
If you are tracking a known object you can pre-process your images before you compare them. For example, if it's a ball you're tracking and it appears brighter than its surroundings, you can threshold your greyscale image, which will produce an image containing only black or white. You then detect what are known as "contours" (see the openCV documentation). Once you have the contour you are after (the ball) in an image, you can compare its location in each successive image. There are algorithms that help figure out where a moving object will be next, which helps with finding it in the next frame.
Without knowing exactly what you are doing, it's hard to give anything concrete.
And I see you're using C#... maybe this will help: .Net (dotNet) wrappers for OpenCV?
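For illustration, a rough sketch of the threshold-and-contours idea using the Emgu CV wrapper (the file name and the threshold value of 200 are placeholders, and the exact enum/method names assume a recent Emgu CV version):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class BallTracker
    {
        // Threshold a greyscale frame and return the bounding box of the largest bright blob.
        public static Rectangle FindBrightObject(string path)
        {
            using (var gray = CvInvoke.Imread(path, ImreadModes.Grayscale))
            using (var binary = new Mat())
            using (var contours = new VectorOfVectorOfPoint())
            {
                CvInvoke.Threshold(gray, binary, 200, 255, ThresholdType.Binary);
                CvInvoke.FindContours(binary, contours, null, RetrType.External,
                                      ChainApproxMethod.ChainApproxSimple);

                double bestArea = 0;
                var bestBox = Rectangle.Empty;
                for (int i = 0; i < contours.Size; i++)
                {
                    double area = CvInvoke.ContourArea(contours[i]);
                    if (area > bestArea)
                    {
                        bestArea = area;
                        bestBox = CvInvoke.BoundingRectangle(contours[i]);
                    }
                }
                return bestBox;   // compare this location across successive frames
            }
        }
    }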
Because the pictures are not the same.
Each time you pressed the camera's button, you did it a little differently.
The change is "huge" if you compare pixel by pixel.
I'm not an expert on motion detection, but try comparing averages around each pixel - I think it will give you better results.
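A minimal sketch of that idea, comparing the average grey level of a small window around each pixel instead of single pixels (the grey arrays, dimensions and tolerance are assumed inputs):

    using System;

    static class BlockCompare
    {
        // Compare the mean grey value of (2*radius+1)^2 windows instead of single pixels.
        // grayA/grayB are per-pixel grey values in row-major order.
        public static int CountDifferingBlocks(byte[] grayA, byte[] grayB,
                                               int width, int height,
                                               int radius = 1, int tolerance = 3)
        {
            int errors = 0;
            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                if (Math.Abs(Mean(grayA, width, height, x, y, radius) -
                             Mean(grayB, width, height, x, y, radius)) > tolerance)
                    errors++;
            }
            return errors;
        }

        static double Mean(byte[] gray, int width, int height, int cx, int cy, int r)
        {
            double sum = 0; int n = 0;
            for (int y = Math.Max(0, cy - r); y <= Math.Min(height - 1, cy + r); y++)
            for (int x = Math.Max(0, cx - r); x <= Math.Min(width - 1, cx + r); x++)
            { sum += gray[y * width + x]; n++; }
            return sum / n;
        }
    }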
What I want to do is take a source image which will contain a black-and-white chequered board of a known physical size and a known number of squares, and identify the boundaries of said board, as well as the angle from which it is being observed (assuming it's perfectly flat) and from what distance.
If I can reliably identify the 4 corners of the board then I know how to calculate the angle and distance, so the task is more about identifying the chess board.
What I've tried so far is greyscaling the image and increasing the contrast so I end up with a stark black-and-white image (which to the eye contains blackness with just the white squares) - and while I can identify the boundaries of the board fine from a top-down perspective by measuring the frequency of changes from black->white->black, I'm not sure how to go about doing this for any angle.
Nominally I'm doing this with C#, but as far as actual answers go I'm happy for any code examples with a c-like syntax - more interested in the math and methodology for this one though.
Finding a general 2d object inside a 3d world is often done with SIFT or SURF.
There are 2 steps:
find a manageable number of local features in the image (such as strong corners)
find a correlation between those points in your image and your search pattern
OpenCV has an implementation for that:
Features2D + Homography to find a known object
The Wikipedia article on SURF also points to another C# implementation.
Also see this Stackoverflow answer:
Now this is a pretty general method, and I do not know how well it will work with your checkerboard.
But there are approaches specific to chessboard patterns, such as the OpenCV function cvFindChessboardCorners (tutorial).
I have never used it, but I found this description of the algorithm (the source is in the file cvcalibinit.cpp):
Image binarization by thresholding to segment black and white squares
Find corners of the black squares:
Find contours of the boundaries of the black regions
Select contours of suitable shape
Approximate these contours with 4-vertex polygons
Among these select the quadrangles resembling calibration pattern squares
Extract corners of the selected quads, having at least one corner in vicinity
Group the corners of the selected quadrangles in lines according to calibration object size
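For what it's worth, the whole procedure above is wrapped in a single call if you use OpenCV through a .NET wrapper; a rough sketch with Emgu CV (the 7x7 inner-corner pattern size is just an example, and the exact names assume a recent Emgu CV version):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class BoardFinder
    {
        // Detect the inner corners of a chessboard pattern.
        // cols/rows are the number of INNER corners per row/column (7x7 is just an example).
        public static PointF[] FindCorners(string path, int cols = 7, int rows = 7)
        {
            using (var gray = CvInvoke.Imread(path, ImreadModes.Grayscale))
            using (var corners = new VectorOfPointF())
            {
                bool found = CvInvoke.FindChessboardCorners(
                    gray, new Size(cols, rows), corners,
                    CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage);
                return found ? corners.ToArray() : null;
            }
        }
    }

The outermost detected corners then give you the four board corners you need for the angle and distance calculation.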
I have some images that I'd like to draw a polygon around the outer edges of. The images themselves are on transparent backgrounds and I've created an array of the pixels in the images which contain a point and are not transparent (or white).
Now, my question is: how do I draw an accurate polygon around the outer edge points? I've used a Graham Scan algorithm that I read about to create a convex hull around the edges but this doesn't seem to work for objects with concavities. For example:
http://i48.tinypic.com/4s0lna.png
The image on the left gets filled in using this method with the one on the right. As you can see, it's 'filling in' a little too much.
I assume there must be some other algorithm or approach that can be used to solve this, but I'm not sure of where to look or what it might be called. Could anyone point me in the right direction? I'm using C#/.net and hopefully there might be something that already exists which could work along these lines.
I think the 2D "Alpha shapes" algorithm would the right choice for you.
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Alpha_shapes_2/Chapter_main.html
Alpha shapes can be considered as a generalization for the "convex Hull" algorithm that allows for generation of more general shapes.
By using alpha shapes you will be having control over the level of details to be captured by the resultant shape by changing the alpha parameter value.
You can try the java applet here : http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html
to have better understanding about does this algorithm do.
You can start on a pixel by pixel level, using a flood-fill approach.
Start in the corner, checking that it does have zero alpha.
Check the neighbours for zero alpha and iterate until there are no unchecked neighbours.
This gives you a mask for the image which will consist of two simply connected regions, the interior and exterior.
The set you seek then consists of:
all the points in the exterior which are on the boundary of the interior.
You can then turn that into a polygon by:
Take an initial polygon that consists of all the points in the edge set
Remove redundant vertices that lie along straight edges.
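A rough sketch of the flood-fill and boundary-collection steps in plain C# (assuming the alpha channel is given as a byte array in row-major order); the final redundant-vertex removal is left out:

    using System.Collections.Generic;

    static class Outline
    {
        // Flood-fill the zero-alpha exterior from the top-left corner, then collect the
        // exterior pixels that touch the interior region; those form the boundary set.
        public static List<(int X, int Y)> BoundaryPoints(byte[] alpha, int width, int height)
        {
            var exterior = new bool[width * height];
            var stack = new Stack<(int X, int Y)>();
            if (alpha[0] == 0) stack.Push((0, 0));   // assumes the corner is transparent

            while (stack.Count > 0)
            {
                var (x, y) = stack.Pop();
                if (x < 0 || y < 0 || x >= width || y >= height) continue;
                int i = y * width + x;
                if (exterior[i] || alpha[i] != 0) continue;
                exterior[i] = true;
                stack.Push((x + 1, y)); stack.Push((x - 1, y));
                stack.Push((x, y + 1)); stack.Push((x, y - 1));
            }

            // A boundary pixel is an exterior pixel with at least one non-exterior neighbour.
            var boundary = new List<(int X, int Y)>();
            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                if (!exterior[y * width + x]) continue;
                bool touchesInterior =
                    (x + 1 < width  && !exterior[y * width + x + 1]) ||
                    (x - 1 >= 0     && !exterior[y * width + x - 1]) ||
                    (y + 1 < height && !exterior[(y + 1) * width + x]) ||
                    (y - 1 >= 0     && !exterior[(y - 1) * width + x]);
                if (touchesInterior) boundary.Add((x, y));
            }
            return boundary;
        }
    }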
Say I have 1 red polygon and 50 randomly placed blue polygons - they are situated in geographical 2D space. What is the quickest/speediest algorithm to find the shortest distance between the red polygon and its nearest blue polygon?
Bear in mind that it is not a simple case of taking the points that make up the vertices of the polygon as values to test for distance as they may not necessarily be the closest points.
So in the end - the answer should give back the closest blue polygon to the singular red one.
This is harder than it sounds!
I doubt there is a better solution than calculating the distance between the red one and every blue one and sorting these by length.
Regarding sorting, QuickSort is usually hard to beat in performance (an optimized one that cuts off recursion when the size drops below about 7 items and switches to something like insertion sort, maybe shell sort).
So I guess the question is how to quickly calculate the distance between two polygons; after all, you need to make this computation 50 times.
The following approach will work for 3D as well, but is probably not the fastest one:
Minimum Polygon Distance in 2D Space
The question is, are you willing to trade accuracy for speed? E.g. you can pack all polygons into bounding boxes, where the sides of the boxes are parallel to the coordinate system axes. 3D games use this approach pretty often. Therefore you need to find the maximum and minimum values for every coordinate (x, y, z) to construct the virtual bounding box. Calculating the distances of these bounding boxes is then a pretty trivial task.
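A minimal sketch of that in 2D, plain C#: the box is just the min/max of each coordinate, and the box-to-box distance combines the per-axis gaps (zero on any axis where the boxes overlap).

    using System;
    using System.Collections.Generic;
    using System.Linq;

    struct Aabb
    {
        public double MinX, MinY, MaxX, MaxY;

        // Axis-aligned bounding box: just the min/max of each coordinate.
        public static Aabb FromPoints(IList<(double X, double Y)> pts) => new Aabb
        {
            MinX = pts.Min(p => p.X), MinY = pts.Min(p => p.Y),
            MaxX = pts.Max(p => p.X), MaxY = pts.Max(p => p.Y)
        };

        // Distance between two boxes: Pythagorean combination of the per-axis gaps.
        public static double Distance(Aabb a, Aabb b)
        {
            double dx = Math.Max(0, Math.Max(a.MinX - b.MaxX, b.MinX - a.MaxX));
            double dy = Math.Max(0, Math.Max(a.MinY - b.MaxY, b.MinY - a.MaxY));
            return Math.Sqrt(dx * dx + dy * dy);
        }
    }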
Here's an example image of more advanced bounding boxes, that are not parallel to the coordinate system axes:
Oriented Bounding Boxes - OBB
However, this makes the distance calculation less trivial. It is used for collision detection, as you don't need to know the distance for that, you only need to know if one edge of one bounding box lies within another bounding box.
The following image shows an axes aligned bounding box:
Axes Aligned Bounding Box - AABB
OBBs are more accurate, AABBs are faster. Maybe you'd like to read this article:
Advanced Collision Detection Techniques
This is always assuming, that you are willing to trade precision for speed. If precision is more important than speed, you may need a more advanced technique.
You might be able to reduce the problem, and then do an intensive search on a small set.
Process each polygon first by finding:
Center of polygon
Maximum radius of the polygon (i.e., the distance from that center to the point on its edge/surface/vertex furthest from it)
Now you can collect, say, the 5-10 closest polygons to the red one (find the distance center to center, subtract the radius, sort the list and take the top 5) and then do a much more exhaustive routine.
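A sketch of that screening step, assuming each polygon is just a list of vertices and the "center" is taken to be the vertex centroid:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class PolyScreen
    {
        // Centroid of a polygon's vertices and its maximum radius from that centroid.
        static ((double X, double Y) Center, double Radius) CenterAndRadius(List<(double X, double Y)> poly)
        {
            double cx = poly.Average(p => p.X), cy = poly.Average(p => p.Y);
            double r = poly.Max(p => Math.Sqrt((p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy)));
            return ((cx, cy), r);
        }

        // Keep the k blue polygons with the smallest optimistic distance bound,
        // then run the exact (exhaustive) distance routine only on those candidates.
        public static List<List<(double X, double Y)>> Candidates(
            List<(double X, double Y)> red, List<List<(double X, double Y)>> blues, int k = 5)
        {
            var (redCenter, redRadius) = CenterAndRadius(red);
            return blues
                .OrderBy(b =>
                {
                    var (c, r) = CenterAndRadius(b);
                    double dx = c.X - redCenter.X, dy = c.Y - redCenter.Y;
                    // Center-to-center distance minus both radii: a lower bound on the true distance.
                    return Math.Sqrt(dx * dx + dy * dy) - r - redRadius;
                })
                .Take(k)
                .ToList();
        }
    }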
For polygon shapes with a reasonable number of boundary points, such as in a GIS or games application, it might be quicker and easier to do a series of tests.
For each vertex in the red polygon, compute the distance to each vertex in the blue polygons and find the closest (hint: compare distance^2 so you don't need the sqrt()).
Find the closest, then check the vertex on each side of the found red and blue vertices to decide which line segments are closest, and then find the closest approach between two line segments.
See http://local.wasp.uwa.edu.au/~pbourke/geometry/lineline3d/ (it's easy to simplify for the 2D case).
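For the 2D case, the closest approach of two non-intersecting segments reduces to point-to-segment distances (the minimum is always attained at an endpoint of one of the segments); a minimal sketch:

    using System;

    static class SegDist
    {
        // Distance from point p to the segment ab.
        public static double PointToSegment((double X, double Y) p, (double X, double Y) a, (double X, double Y) b)
        {
            double dx = b.X - a.X, dy = b.Y - a.Y;
            double lenSq = dx * dx + dy * dy;
            double t = lenSq == 0 ? 0 : ((p.X - a.X) * dx + (p.Y - a.Y) * dy) / lenSq;
            if (t < 0) t = 0; else if (t > 1) t = 1;
            double ex = a.X + t * dx - p.X, ey = a.Y + t * dy - p.Y;
            return Math.Sqrt(ex * ex + ey * ey);
        }

        // Closest approach of two segments that do not intersect: the minimum of the
        // four endpoint-to-opposite-segment distances.
        public static double SegmentToSegment((double X, double Y) a1, (double X, double Y) a2,
                                              (double X, double Y) b1, (double X, double Y) b2)
        {
            return Math.Min(
                Math.Min(PointToSegment(a1, b1, b2), PointToSegment(a2, b1, b2)),
                Math.Min(PointToSegment(b1, a1, a2), PointToSegment(b2, a1, a2)));
        }
    }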
This screening technique is intended to reduce the number of distance computations you need to perform in the average case, without compromising the accuracy of the result. It works on convex and concave polygons.
Find the minimum distance between each pair of vertices such that one is a red vertex and one is blue. Call it r. The distance between the polygons is at most r. Construct a new region from the red polygon where each line segment is moved outward by r and is joined to its neighbors by an arc of radius r centered at the vertex. Find the distance from each vertex inside this region to every line segment of the opposite color that intersects this region.
Of course you could add an approximate method such as bounding boxes to quickly determine which of the blue polygons can't possibly intersect with the red region.
Maybe the Fréchet distance is what you're looking for?
Computing the Fréchet distance between two polygonal curves
Computing the Fréchet Distance Between Simple Polygons
I know you said "the shortest distance" but you really meant the optimal solution or a "good/very good" solution is fine for your problem?
Because if you need to find the optimal solution, you have to calculate the distance between all of your source and destination poligon bounds (not only vertexes). If you are in 3D space then each bound is a plane. That can be a big problem (O(n^2)) depending on how many vertexes you have.
So if you have vertex count that makes that squares to a scarry number AND a "good/very good" solution is fine for you, go for a heuristic solution or approximation.
You might want to look at Voronoi Culling. Paper and video here:
http://www.cs.unc.edu/~geom/DVD/
I would start by bounding all the polygons by a bounding circle and then finding an upper bound of the minimal distance.
Then I would simply check the edges of all blue polygons whose lower bound of distance is lower than the upper bound of minimal distance against all the edges of the red polygon.
upper bound of min distance = min {distance(red's center, current blue's center) + current blue's radius}
for every blue polygon where distance(red's center, current blue's center) - current blue's radius < upper bound of min distance
check distance of edges and vertices
But it all depends on your data. If the blue polygons are relatively small compared to the distances between them and the red polygon, then this approach should work nicely, but if they are very close, you won't save anything (many of them will be close enough). And another thing -- If these polygons don't have many vertices (like if most of them were triangles), then it might be almost as fast to just check each red edge against each blue edge.
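A sketch of that screening in plain C#, assuming the bounding-circle centers and radii are precomputed and exactDistance is the full edge/vertex check (left hypothetical here):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class CircleScreen
    {
        // Returns the index of the closest blue polygon. red/blues hold precomputed
        // bounding-circle centers and radii; exactDistance(i) is the full edge/vertex
        // distance check against blue polygon i (not shown here).
        public static int ClosestBlue(
            ((double X, double Y) C, double R) red,
            IList<((double X, double Y) C, double R)> blues,
            Func<int, double> exactDistance)
        {
            double CenterDist(((double X, double Y) C, double R) b)
            {
                double dx = b.C.X - red.C.X, dy = b.C.Y - red.C.Y;
                return Math.Sqrt(dx * dx + dy * dy);
            }

            // Upper bound on the minimal distance, as in the pseudocode above.
            double upperBound = blues.Min(b => CenterDist(b) + b.R);

            int best = -1;
            double bestDist = double.MaxValue;
            for (int i = 0; i < blues.Count; i++)
            {
                // Skip blues whose optimistic lower bound already exceeds the upper bound.
                if (CenterDist(blues[i]) - blues[i].R >= upperBound) continue;
                double d = exactDistance(i);
                if (d < bestDist) { bestDist = d; best = i; }
            }
            return best;
        }
    }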
hope it helps
As others have mentioned using bounding areas (boxes, circles) may allow you to discard some polygon-polygon interactions. There are several strategies for this, e.g.
Pick any blue polygon and find the distance from the red one. Now pick any other polygon. If the minimum distance between the bounding areas is greater than the already found distance you can ignore this polygon. Continue for all polygons.
Find the minimum distance/centroid distance between the red polygon and all the blue polygons. Sort the distances and consider the smallest distance first. Calculate the actual minimum distance and continue through the sorted list until the maximum distance between the polygons is greater than the minimum distance found so far.
Your choice of circles/axially aligned boxes, or oriented boxes, can have a great effect on the performance of the algorithm, depending on the actual layout of the input polygons.
For the actual minimum distance calculation you could use Yang et al's 'A new fast algorithm for computing the distance between two disjoint convex polygons based on Voronoi diagram' which is O(log n + log m).
Gotta run off to a funeral in a sec, but if you break your polygons down into convex subpolies, there are some optimizations you can do. You can do a binary search on each poly to find the closest vertex, and then I believe the closest point should either be that vertex or an adjacent edge. This means you should be able to do it in log(log m * n), where m is the average number of vertices on a poly and n is the number of polies. This is kind of hasty, so it could be wrong. Will give more details later if wanted.
You could start by comparing the distance between the bounding boxes. Testing the distance between rectangles is easier than testing the distance between polygons, and you can immediately eliminate any polygons that are more than nearest_rect + its_diagonal away (possibly you can refine that even more). Then, you can test the remaining polygons to find the closest polygon.
There are algorithms for finding polygon proximity - I'm sure Wikipedia has a good review of them. If I recall correctly, those that only allow convex polygons are substantially faster.
I believe what you are looking for is the A* algorithm; it's used in pathfinding.
The naive approach is to find the distance between the red and 50 blue objects -- so you're looking at 50 3d Pythagorean calculations + sorting to find the answer. That would only really work for finding the distance between center points though.
If you want arbitrary polygons, maybe your best bet is a raytracing solution that emits rays from the surface of the red polygon with respect to the normal, and reports when another polygon is hit.
A hybrid might work -- we could find the distance from the center points, assuming we had some notion of the relative size of the blue polygons, we could cull the result set to the closest among those, then use raytracing to narrow down the truly closest polygon(s).