I am looking for the right term for this procedure:
Creating a polygon whose (x, y) points are generated programmatically by the application.
For example, if I have this picture with this shape (the white background is transparent),
the procedure or application should generate the (x, y) points that define the shape, so that the shape can be extracted from the image.
The final result is the shape as an (x, y) point polygon (if the points were connected, it would look like this):
I have tried googling different terms, but nothing was of help.
You might be after the convex hull, i.e. the smallest convex polygon that fully contains your shape. There are some libraries for computing this on NuGet, or you can ask on https://softwarerecs.stackexchange.com/
You could also use Marching Squares if you need to extract the outline, or simply use the pixel coordinates of the border pixels if you do not need sub-pixel precision.
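If you go the convex hull route, a minimal sketch of Andrew's monotone chain algorithm is below; it assumes you have already collected the shape's non-transparent pixel coordinates as a list of points (the point representation is just illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class ConvexHull
    {
        // Cross product of (b - a) x (c - a); > 0 means a counter-clockwise turn.
        static double Cross((double X, double Y) a, (double X, double Y) b, (double X, double Y) c)
            => (b.X - a.X) * (c.Y - a.Y) - (b.Y - a.Y) * (c.X - a.X);

        // Andrew's monotone chain: returns the hull vertices in counter-clockwise order.
        public static List<(double X, double Y)> Compute(IEnumerable<(double X, double Y)> input)
        {
            var pts = input.Distinct().OrderBy(p => p.X).ThenBy(p => p.Y).ToList();
            if (pts.Count < 3) return pts;

            var hull = new List<(double X, double Y)>();

            // Lower hull: pop points that would make a non-left turn.
            foreach (var p in pts)
            {
                while (hull.Count >= 2 && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0)
                    hull.RemoveAt(hull.Count - 1);
                hull.Add(p);
            }

            // Upper hull, built from right to left over the same sorted points.
            int lowerCount = hull.Count + 1;
            for (int i = pts.Count - 2; i >= 0; i--)
            {
                var p = pts[i];
                while (hull.Count >= lowerCount && Cross(hull[hull.Count - 2], hull[hull.Count - 1], p) <= 0)
                    hull.RemoveAt(hull.Count - 1);
                hull.Add(p);
            }

            hull.RemoveAt(hull.Count - 1); // the last point repeats the first
            return hull;
        }
    }

Note that the convex hull will cut across any concave parts of the shape; if you need the exact outline, the border-pixel or Marching Squares approach is the better fit.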
I have a set of 3D points that will fit neatly on a line segment. I need to get the center of that line (no problem, a mean of X, Y and Z will work great for that). I also need a couple of vectors that describe the orientation of the line in 3D space. In other words, I need to describe how the sampled data's X, Y and Z axes are rotated.
If you imagine an airplane (this is not an aviation application, just a handy example) with the 3D points randomly spread over the area of the wings, I need to use these points to describe the orientation of the airplane in 3D space: exactly what direction the nose is pointing and where the wing tips are.
I have been looking for linear-fit libraries, but they all seem to be for 2D data sets or are commercial.
I could also fit two linear equations through the x/y and x/z data and use those, but that seems wrong and like a workaround.
Does anybody have any thoughts on how to solve this problem?
I have to detect all the points of a white polygon on a black background in C#. Here is an image of several examples. I wouldn't think it is too difficult, but I am unable to detect this properly with all the variations. My code is too long to post here, but basically I went along each side and looked for where it changes from black to white. Should I use OpenCV? I was hoping for a simple algorithm I could implement in C#. Any suggestions? Thank you.
In your case I would do this:
pre process image
Remove color noise if present (like JPEG distortion, etc.) and binarize the image.
select circumference pixels
Simply loop through all pixels and set each white pixel that has at least one black neighbor to a distinct color that will represent your circumference ROI mask, or add the pixel position to some list of points instead.
apply connected components analysis
You need to find out the order of the points (how they are connected together). The easiest way is to flood-fill the ROI from the first found pixel until the whole ROI is filled, remembering the order of the filled points (similar to A*). At some point there will be two distinct paths, and they will join at the end. Identify these two points and construct the circumference point order (by reversing one half and handling the shared part if present).
find vertexes
If you compute the angle change between consecutive pixels, then on straight lines the change will be near zero and near vertexes much bigger. Threshold that and you have your vertexes. To make this robust, compute the slope angle from slightly more distant pixels rather than the immediately adjacent ones. Thresholding this angle change against a sliding average also often gives more stable results.
So find out how far apart the pixels should be for the angle computation, so that the noise stays small while the vertexes still produce big peaks, and find a threshold value that is safely above any noise. A minimal sketch of this step is shown below.
This can also be done with the Hough transform and/or the find-contours functions present in many CV libraries. Another option is to regress/fit lines to the point list directly and compute their intersections, which can provide sub-pixel precision.
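Here is a minimal sketch of the angle-change vertex detection, assuming the circumference points are already ordered around the outline; the pixel spacing (step) and the angle threshold are the tuning values discussed above:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class VertexDetector
    {
        // points: circumference pixels in order; step: how far apart the pixels used
        // for the slope estimate are; thresholdRad: minimum angle change for a vertex.
        public static List<Point> FindVertices(IList<Point> points, int step, double thresholdRad)
        {
            var vertices = new List<Point>();
            int n = points.Count;

            for (int i = 0; i < n; i++)
            {
                Point prev = points[(i - step + n) % n];
                Point curr = points[i];
                Point next = points[(i + step) % n];

                // Slope angles of the incoming and outgoing directions.
                double a1 = Math.Atan2(curr.Y - prev.Y, curr.X - prev.X);
                double a2 = Math.Atan2(next.Y - curr.Y, next.X - curr.X);

                // Wrap the difference into (-pi, pi] before thresholding.
                double diff = a2 - a1;
                while (diff > Math.PI) diff -= 2 * Math.PI;
                while (diff <= -Math.PI) diff += 2 * Math.PI;

                if (Math.Abs(diff) > thresholdRad)
                    vertices.Add(curr);
            }
            return vertices;
        }
    }

In practice you would keep only the local maximum of the angle change within each run of above-threshold points, so a single vertex is not reported several times.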
For more info see related QAs:
Backtracking in A star
Finding holes in 2d point sets
growth fill
I'm new to image processing. Now I have a problem.
I'm writing a simple program in C# that has to identify some objects in images by comparing them to some samples.
For example, here's the sample:
Later I have to compare objects that I find in a loaded image with it.
The sizes of the objects and samples are always equal. The images are binarized. We always know the rotation point (it's the image's center). The samples are always normalized, but we never know the object's rotation angle relative to the normal.
Here are some objects that I find in the loaded image:
The question is how to find angle #1.
Sorry for my English and thanks.
If you are using the AForge libraries, you can also use their extension, Accord.NET.
Accord.NET is similar to AForge: you install it, add the references to your project, and you are done.
After that you can simply use RawMoments by passing the target image, and then use them to compute CentralMoments.
At this point you can get the angle of your image with the CentralMoments method GetOrientation().
I used it in a hand-gesture recognition project and it worked like a charm.
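Roughly, the flow looks like the sketch below. I am not certain the constructor overloads are identical across Accord.NET versions, so treat the calls as an approximation to check against the documentation:

    using System;
    using System.Drawing;
    using Accord.Imaging.Moments;

    class OrientationDemo
    {
        static void Main()
        {
            // Load the binarized object image (file name is just an example).
            using (var bitmap = (Bitmap)Image.FromFile("object.png"))
            {
                var raw = new RawMoments(bitmap);        // raw image moments
                var central = new CentralMoments(raw);   // central moments from the raw ones

                double angleRad = central.GetOrientation(); // orientation of the principal axis
                Console.WriteLine("Angle: {0:F1} degrees", angleRad * 180.0 / Math.PI);
            }
        }
    }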
UPDATE:
I have just checked: GetOrientation gives only the angle, not the direction.
So an upside-down image has the same angle as the original.
A fix can be pixel counting, but this time you only have 2 candidates to check (worst case) instead of 360.
UPDATE 2:
If you have a lot of samples, I suggest filtering them by the size of the rotated image.
Example:
I get the image, I see that it is in a horizontal position (90°), I rotate it by 90°, and now I have the original width and height that I can use to skip the samples that are not similar, like:
    if (found.Width != sample.Width)  // you can also allow a small tolerance here, in case
        continue;                     // the rotation added some pixels
To recap, you have a sample image and a rotated image of the same source image. You also have two values 0,1 for the pixels.
A simple approach that can yield moderate success is a binary search over the rotation angle (a rough sketch follows the steps below):
Start with a rotation value of 180 degrees, both clockwise and counter-clockwise.
Rotate the image by both values.
XOR the rotated image with the original.
Count the number of differing (non-zero) pixels and check whether it is below the margin of error you define.
Continue the search with half of the rotation angle.
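A rough sketch of that idea with System.Drawing; the rotation and pixel-diff helpers are deliberately simplistic (GetPixel is slow), and the step-halving search assumes the mismatch count keeps decreasing toward the true angle:

    using System;
    using System.Drawing;

    static class RotationSearch
    {
        // Rotate a bitmap around its center by the given angle in degrees.
        static Bitmap Rotate(Bitmap src, float angleDeg)
        {
            var dst = new Bitmap(src.Width, src.Height);
            using (var g = Graphics.FromImage(dst))
            {
                g.TranslateTransform(src.Width / 2f, src.Height / 2f);
                g.RotateTransform(angleDeg);
                g.TranslateTransform(-src.Width / 2f, -src.Height / 2f);
                g.DrawImage(src, Point.Empty);
            }
            return dst;
        }

        // Count pixels whose binarized values differ (the "XOR" of the two images).
        static int CountDifferences(Bitmap a, Bitmap b)
        {
            int diff = 0;
            for (int y = 0; y < a.Height; y++)
                for (int x = 0; x < a.Width; x++)
                    if ((a.GetPixel(x, y).GetBrightness() > 0.5f) !=
                        (b.GetPixel(x, y).GetBrightness() > 0.5f))
                        diff++;
            return diff;
        }

        // Halve the step each iteration, keeping whichever direction matches better.
        public static float FindAngle(Bitmap sample, Bitmap rotated, int maxError)
        {
            float angle = 0f, step = 180f;
            while (step >= 1f)
            {
                int cw  = CountDifferences(sample, Rotate(rotated, -(angle + step)));
                int ccw = CountDifferences(sample, Rotate(rotated, -(angle - step)));
                angle += cw <= ccw ? step : -step;

                if (Math.Min(cw, ccw) <= maxError) break;
                step /= 2f;
            }
            return angle;
        }
    }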
Have a look at this:
Rotation angle of scanned document
Say I have 1 red polygon and 50 randomly placed blue polygons, situated in geographical 2D space. What is the quickest algorithm to find the shortest distance between the red polygon and its nearest blue polygon?
Bear in mind that it is not simply a case of taking the vertices of the polygons as the points to test for distance, since they may not necessarily be the closest points.
So in the end, the answer should give back the blue polygon closest to the single red one.
This is harder than it sounds!
I doubt there is a better solution than calculating the distance between the red polygon and every blue one and sorting these by length.
Regarding sorting, QuickSort is usually hard to beat in performance (an optimized one that cuts off recursion when the size goes below about 7 items and switches to something like InsertionSort, maybe ShellSort).
Thus I guess the question is how to quickly calculate the distance between two polygons; after all, you need to make this computation 50 times.
The following approach will work for 3D as well, but is probably not the fastest one:
Minimum Polygon Distance in 2D Space
The question is, are you willing to trade accuracy for speed? E.g. you can pack all polygons into bounding boxes whose sides are parallel to the coordinate system axes. 3D games use this approach pretty often. For that you need to find the minimum and maximum values of every coordinate (x, y, z) to construct the virtual bounding box. Calculating the distances between these bounding boxes is then a pretty trivial task.
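A minimal 2D sketch of that, assuming each polygon is given as a list of vertices; the distance between two axis-aligned boxes is zero when they overlap:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    struct Aabb
    {
        public double MinX, MinY, MaxX, MaxY;

        // Axis-aligned bounding box of a polygon given as its vertex list.
        public static Aabb FromPolygon(IEnumerable<(double X, double Y)> vertices)
        {
            var pts = vertices.ToList();
            return new Aabb
            {
                MinX = pts.Min(p => p.X), MaxX = pts.Max(p => p.X),
                MinY = pts.Min(p => p.Y), MaxY = pts.Max(p => p.Y)
            };
        }

        // Distance between two boxes: the gap along each axis, 0 if they overlap.
        public static double Distance(Aabb a, Aabb b)
        {
            double dx = Math.Max(0, Math.Max(a.MinX - b.MaxX, b.MinX - a.MaxX));
            double dy = Math.Max(0, Math.Max(a.MinY - b.MaxY, b.MinY - a.MaxY));
            return Math.Sqrt(dx * dx + dy * dy);
        }
    }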
Here's an example image of more advanced bounding boxes, that are not parallel to the coordinate system axes:
Oriented Bounding Boxes - OBB
However, this makes the distance calculation less trivial. It is used for collision detection, where you don't need to know the distance; you only need to know whether one edge of one bounding box lies within another bounding box.
The following image shows an axes aligned bounding box:
Axes Aligned Bounding Box - AABB
OBBs are more accurate, AABBs are faster. Maybe you'd like to read this article:
Advanced Collision Detection Techniques
This is always assuming, that you are willing to trade precision for speed. If precision is more important than speed, you may need a more advanced technique.
You might be able to reduce the problem, and then do an intensive search on a small set.
Process each polygon first by finding:
Center of polygon
Maximum radius of the polygon (i.e., the distance from the defined center to the point on the polygon's edge/surface/vertex farthest from it)
Now you can collect, say, the 5-10 closest polygons to the red one (find the center-to-center distance, subtract the radii, sort the list and take the top 5) and then run a much more exhaustive routine on that small set, as sketched below.
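A small sketch of that pre-filtering; the exactDistance delegate stands in for whatever exhaustive polygon-to-polygon routine you run on the surviving candidates:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class PolygonInfo
    {
        public List<(double X, double Y)> Vertices;
        public (double X, double Y) Center;  // e.g. the centroid of the vertices
        public double Radius;                // distance from Center to the farthest vertex

        public static PolygonInfo From(List<(double X, double Y)> vertices)
        {
            var c = (X: vertices.Average(p => p.X), Y: vertices.Average(p => p.Y));
            double r = vertices.Max(p =>
                Math.Sqrt((p.X - c.X) * (p.X - c.X) + (p.Y - c.Y) * (p.Y - c.Y)));
            return new PolygonInfo { Vertices = vertices, Center = c, Radius = r };
        }
    }

    static class NearestPolygon
    {
        public static PolygonInfo Find(PolygonInfo red, IEnumerable<PolygonInfo> blues,
                                       Func<PolygonInfo, PolygonInfo, double> exactDistance,
                                       int candidates = 5)
        {
            // Lower bound on the true distance: center distance minus both radii.
            double Bound(PolygonInfo b)
            {
                double dx = b.Center.X - red.Center.X, dy = b.Center.Y - red.Center.Y;
                return Math.Sqrt(dx * dx + dy * dy) - red.Radius - b.Radius;
            }

            // Keep only the most promising polygons, then do the exhaustive check.
            return blues.OrderBy(Bound)
                        .Take(candidates)
                        .OrderBy(b => exactDistance(red, b))
                        .First();
        }
    }

Keep in mind this is a heuristic: if the true nearest polygon is not among the top candidates by the center/radius bound it will be missed, so pick the candidate count with some margin.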
For polygon shapes with a reasonable number of boundary points, such as in a GIS or games application, it might be quicker and easier to do a series of tests.
For each vertex in the red polygon, compute the distance to each vertex in the blue polygons and find the closest (hint: compare squared distances so you don't need the sqrt()).
Once you have the closest pair, check the vertex on each side of the found red and blue vertices to decide which line segments are closest, and then find the closest approach between those two line segments; a sketch of the first step follows below.
See http://local.wasp.uwa.edu.au/~pbourke/geometry/lineline3d/ (it's easy to simplify for the 2D case).
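A minimal sketch of the first step, comparing squared distances over all red/blue vertex pairs; the segment-to-segment refinement described above would then start from the pair this returns:

    using System;
    using System.Collections.Generic;

    static class ClosestVertices
    {
        // Returns the indices of the closest red/blue vertex pair, comparing squared
        // distances so no square root is needed inside the loop.
        public static (int Red, int Blue) Find(IList<(double X, double Y)> red,
                                               IList<(double X, double Y)> blue)
        {
            double best = double.MaxValue;
            (int Red, int Blue) bestPair = (0, 0);

            for (int i = 0; i < red.Count; i++)
            {
                for (int j = 0; j < blue.Count; j++)
                {
                    double dx = red[i].X - blue[j].X;
                    double dy = red[i].Y - blue[j].Y;
                    double d2 = dx * dx + dy * dy;  // squared distance
                    if (d2 < best)
                    {
                        best = d2;
                        bestPair = (i, j);
                    }
                }
            }
            return bestPair;
        }
    }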
This screening technique is intended to reduce the number of distance computations you need to perform in the average case, without compromising the accuracy of the result. It works on convex and concave polygons.
Find the minimum distance over all pairs of vertexes such that one is a red vertex and one is blue. Call it r. The distance between the polygons is at most r. Construct a new region from the red polygon in which each line segment is moved outward by r and joined to its neighbors by an arc of radius r centered at the vertex. Then find the distance from each vertex inside this region to every line segment of the opposite color that intersects this region.
Of course you could add an approximate method such as bounding boxes to quickly determine which of the blue polygons can't possibly intersect with the red region.
Maybe the Fréchet distance is what you're looking for?
Computing the Fréchet distance between two polygonal curves
Computing the Fréchet Distance Between Simple Polygons
I know you said "the shortest distance", but do you really need the optimal solution, or is a "good/very good" solution fine for your problem?
Because if you need the optimal solution, you have to calculate the distance between all of your source and destination polygon boundaries (not only the vertexes). If you are in 3D space, then each boundary is a plane. That can be a big problem (O(n^2)) depending on how many vertexes you have.
So if your vertex count squares to a scary number AND a "good/very good" solution is fine for you, go for a heuristic or an approximation.
You might want to look at Voronoi Culling. Paper and video here:
http://www.cs.unc.edu/~geom/DVD/
I would start by bounding all the polygons by a bounding circle and then finding an upper bound of the minimal distance.
Then I would simply check the edges of all blue polygons whose lower bound of distance is below the upper bound of the minimal distance, against all the edges of the red polygon.
upper bound of min distance = min {distance(red's center, current blue's center) + current blue's radius}
for every blue polygon where distance(red's center, current blue's center) - current blue's radius < upper bound of min distance
check distance of edges and vertices
But it all depends on your data. If the blue polygons are relatively small compared to the distances between them and the red polygon, this approach should work nicely; but if they are very close, you won't save anything (many of them will be close enough). And another thing: if these polygons don't have many vertices (e.g. if most of them were triangles), it might be almost as fast to just check each red edge against each blue edge.
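A small sketch of that pruning rule, assuming you have precomputed a bounding-circle center and radius per polygon; checking the edges and vertices of the surviving polygons exactly is left to whatever routine you already use:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class BoundingCirclePruning
    {
        // blues: bounding-circle center and radius of each blue polygon.
        public static IEnumerable<int> CandidateIndices(
            (double X, double Y) redCenter,
            IList<((double X, double Y) Center, double Radius)> blues)
        {
            double Dist((double X, double Y) a, (double X, double Y) b)
                => Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));

            // Upper bound of the minimal distance: best "center distance + radius" seen.
            double upperBound = blues.Min(b => Dist(redCenter, b.Center) + b.Radius);

            // Keep only the blue polygons whose lower bound can still beat that bound.
            for (int i = 0; i < blues.Count; i++)
                if (Dist(redCenter, blues[i].Center) - blues[i].Radius < upperBound)
                    yield return i;  // check the edges and vertices of these exactly
        }
    }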
hope it helps
As others have mentioned, using bounding areas (boxes, circles) may allow you to discard some polygon-polygon interactions. There are several strategies for this, e.g.:
Pick any blue polygon and find its distance from the red one. Now pick any other polygon. If the minimum distance between the bounding areas is greater than the distance already found, you can ignore this polygon. Continue through all polygons.
Find the minimum distance/centroid distance between the red polygon and all the blue polygons. Sort the distances and consider the smallest first. Calculate the actual minimum distance and continue through the sorted list until the bounding distance of the next polygon is greater than the minimum distance found so far.
Your choice of circles, axis-aligned boxes, or oriented boxes can have a great effect on the performance of the algorithm, depending on the actual layout of the input polygons.
For the actual minimum distance calculation you could use Yang et al's 'A new fast algorithm for computing the distance between two disjoint convex polygons based on Voronoi diagram' which is O(log n + log m).
Gotta run off to a funeral in a sec, but if you break your polygons down into convex subpolies, there are some optimizations you can do. You can do a binary search on each poly to find the closest vertex, and then I believe the closest point should be either that vertex or on an adjacent edge. This means you should be able to do it in log(log m * n), where m is the average number of vertices on a poly and n is the number of polies. This is kind of hasty, so it could be wrong. Will give more details later if wanted.
You could start by comparing the distance between the bounding boxes. Testing the distance between rectangles is easier than testing the distance between polygons, and you can immediately eliminate any polygons that are more than nearest_rect + its_diagonal away (possibly you can refine that even more). Then, you can test the remaining polygons to find the closest polygon.
There are algorithms for finding polygon proximity - I'm sure Wikipedia has a good review of them. If I recall correctly, those that only allow convex polygons are substantially faster.
I believe what you are looking for is the A* algorithm; it's used in pathfinding.
The naive approach is to find the distance between the red object and each of the 50 blue objects, so you're looking at 50 Pythagorean distance calculations plus a sort to find the answer. That would only really work for finding the distance between center points, though.
If you want arbitrary polygons, maybe your best bet is a raytracing solution that emits rays from the surface of the red polygon along the normals and reports when another polygon is hit.
A hybrid might work: we could find the distance between the center points and, assuming we had some notion of the relative size of the blue polygons, cull the result set to the closest among those, then use raytracing to narrow down the truly closest polygon(s).