I need to find an irregular polygon with the smallest surface area out of several vertices on a 2D plane.
No this isn't homework. Although I wish I was back in school right now.
There are some requirements on how the polygon can be constructed. Let's say I have 3 different types of vertices (red, green, blue) plotted out on an 8x8 grid. I need to scan all combinations of vertices in this grid that satisfy the red/green/blue requirement and pick the one with the smallest surface area.
Getting the surface area of an irregular polygon is simple enough. I'm mainly concerned about the performance of scanning all possible combinations efficiently.
See the below image for an example. All three types are used to make the polygons; however, the one circled has the smallest surface area and is my objective.
This scenario is simplified compared to what I'm trying to prototype. The polygons will be constructed of tens if not hundreds of vertices and the grid will be much larger. Also, this will be a process run 24/7.
I was thinking that maybe I should organize the vertices by type and break them into individual arrays, then just iterate over the arrays in a tiered fashion to compute the surface area of all combinations. This approach, however, seems wasteful.
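For the simplified three-colour case (one vertex of each type forming a triangle), that tiered brute force is easy to write down. Here is a minimal Python sketch, assuming the shoelace formula for the area; the function names are made up for illustration, and this is the baseline the answers below try to improve on:

```python
from itertools import product

def shoelace_area(poly):
    """Area of a polygon given as a list of (x, y) tuples (shoelace formula)."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def smallest_combination(reds, greens, blues):
    """Tiered brute force: try one vertex of each type, keep the smallest area."""
    best, best_area = None, float("inf")
    for r, g, b in product(reds, greens, blues):
        area = shoelace_area([r, g, b])
        if 0 < area < best_area:          # skip degenerate (collinear) triples
            best, best_area = (r, g, b), area
    return best, best_area
```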
Here is a version based on branch and bound, with some flourishes.
1) Break the grid down into a Quadtree, with annotations in the nodes as needed for the rest.
2) Find the lowest node in the quadtree that contains at least one vertex of each type. This gives you a starting solution, which should be at least good enough to speed up the rest of the search.
3) Do a recursive search which tries all possible branches wherever I say guess, choosing the most promising candidates first where applicable:
3a) Guess a vertex of the least common type.
3b) Using the relative location of points in the quadtree to order your guesses, guess a vertex of the next least common type, so as to guess them in increasing order of distance from the original point...
3z) You have a complete set of vertices.
At each step 3? you have a partial set of vertices, which I presume gives you a lower bound on the area of any complete solution including those vertices (is it the area inside the convex hull of the vertices?). You can discard any partial solution whose bound is already at least as large as the smallest complete solution found so far. If you can live with an answer that is X% inaccurate, you can also discard any partial solution that comes within X% of the best solution so far. Hopefully this prunes the tree of possibilities you are navigating in (3) far enough to make it tractable.
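A minimal Python sketch of that pruning skeleton, under two stated assumptions: the convex hull area of the chosen vertices is used both as the lower bound for partial sets (as speculated above) and as a stand-in objective for complete sets, and the quadtree-based ordering of guesses is omitted. `hull_area` and `branch_and_bound` are hypothetical names.

```python
def hull_area(points):
    """Area of the convex hull of 2D points (monotone chain + shoelace)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(list(reversed(pts)))[:-1]
    return abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                   - hull[(i + 1) % len(hull)][0] * hull[i][1]
                   for i in range(len(hull)))) / 2.0

def branch_and_bound(groups, chosen=(), best=(float("inf"), None)):
    """groups: one vertex list per colour, ordered least-common first."""
    if not groups:
        # Step 3z: a complete set of vertices. Swap in your real polygon-area
        # function here; the hull area is only used for brevity.
        area = hull_area(chosen)
        return (area, chosen) if area < best[0] else best
    for v in groups[0]:                      # steps 3a/3b: guess the next vertex
        partial = chosen + (v,)
        if hull_area(partial) >= best[0]:    # prune: bound already too large
            continue
        best = branch_and_bound(groups[1:], partial, best)
    return best
```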
How about picking the color with the least number of vertices and, for each of its vertices, checking the immediate neighborhood? If none has the other colors within this neighborhood, increase the stencil size (select the next ring around the vertex) and check again, until at least one of the vertices has all other colors within its current stencil. If there is more than one such vertex, you just need to compare those candidates (a simple min reduction) to find the smallest one.
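A sketch of this idea in Python, with one shortcut: instead of literally growing the ring step by step, it computes for each rarest-colour vertex the smallest square (Chebyshev) stencil that captures the other colours, then keeps the vertices tied for the smallest stencil for the final min-area comparison. The function names are made up for illustration.

```python
def chebyshev(a, b):
    """Ring/stencil radius on a grid = Chebyshev (L-infinity) distance."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def candidates_by_stencil(reds, greens, blues):
    """For each vertex of the rarest colour, find the smallest stencil radius
    that contains the other two colours; return the vertices tied for the
    smallest radius, ready for a plain min-area reduction."""
    groups = sorted([reds, greens, blues], key=len)
    rare, others = groups[0], groups[1:]
    radii = {}
    for v in rare:
        radii[v] = max(min(chebyshev(v, w) for w in group) for group in others)
    best_radius = min(radii.values())
    return [v for v, r in radii.items() if r == best_radius], best_radius
```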
Here's how to find the smallest triangle in time O(n^2 log n). Perhaps it will be useful to you.
The high-level idea is to use a rotating sweep-line. At all times we maintain the order of the blue points along the axis perpendicular to the sweep-line, in a binary search tree. When the sweep-line is parallel with the line passing through a red-green pair, we use the BST to find the blue point closest to the red-green line.
As always, we use an event-driven simulation of the sweep-line. For each red-green pair, make one kind of event for its angle. For each pair of blue points, make O(1) of another kind of event for when their relative order changes. Sort all of the events and turn the crank.
If we already have found an area A, we can narrow the search.
The area of a triangle is B*h/2 (half of base times height).
If you find two points, then B is the distance between them.
Then we can search for a point which is at most 2A/B distance from that line (B*h/2 < A => h < 2A/B). This is the same as searching between two lines parallel to the line through the two points we already have, displaced by 2A/B and -2A/B.
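A small sketch of that pruning test, assuming points are (x, y) tuples; a candidate third point only survives if its distance to the line through the first two is below 2A/B:

```python
import math

def within_area_bound(p, q, r, best_area):
    """True if triangle (p, q, r) can still beat best_area: the height of r
    above the line through p and q must be below 2*best_area/|pq|."""
    base = math.dist(p, q)
    if base == 0:
        return False
    # Perpendicular distance from r to the line through p and q.
    height = abs((q[0] - p[0]) * (p[1] - r[1])
                 - (p[0] - r[0]) * (q[1] - p[1])) / base
    return height < 2 * best_area / base
```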
This should give a complexity of O(n^2*k) where k is the width or height of your grid.
If you don't extract the coordinates you have to do an O(k^5) search, which at least is better than the O(k^6) you had to do earlier.
Some more analysis: if p is the probability that a cell contains a vertex, then the complexity is O((k^2 p)(k^2 p)(k p)) = O(k^5 p^3).
If p = n/k^2, where n is the number of vertices, we then get O(n^3/k).
Related
I've searched the Internet, and maybe I'm missing some correct keywords, but I managed to find nothing like this. I only found results for poly-lines (or just lines), which are not quite graphs. I would like to generate a graph outline (of radius r) as seen in the picture. Is there something already available? I would like to avoid reinventing the wheel, so to speak.
If anyone can point me at something, or at least at some basic principle of how to do it, that would be great. Otherwise I'll "invent" one on my own, of course.
Optimally in C#.
Update: I need to calculate the outline polygon, not just visually draw it. The green points represent the resulting polygon. Also, the "inner" holes are ignored completely; only one outline polygon should be enough.
Update 2: Better picture to show some more extreme cases. Also the edges of graph never overlap so no need to accommodate for that.
Update 3: Picture updated yet again to reflect the bevel joins.
First, for every "line piece" from point A to B, generate the rectangle for it (all 4 points as a "path", so to say). Then search for two overlapping rectangles and merge them:
Merging is a bit complicated. The idea: start by calculating the angle of all 8 lines (e.g. if the rectangles are traversed clockwise). Then traverse one rectangle until the first line-line intersection, check with the angles which direction is "outside", and move along the crossing line of the second rectangle ... until you arrive at the start point again => now you have traversed the shape of both together (and hopefully saved it somewhere).
Merge until only one large piece is left (or multiple non-overlapping pieces). In theory, starting from any point, you can traverse the whole shape, but there's another problem: holes are possible.
If one shape has two or more disjoint sets of points (where no point from set 2 is reachable from set 1 and vice versa), all but one of the disjoint paths belong to holes. An easy way to get the real outer border is to search for an extremum, i.e. the point with the largest or smallest X or Y coordinate (any one of the 4 combinations is enough). This point is surely part of the outer border.
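For the very first step (the rectangle around each "line piece"), a minimal Python sketch, assuming r is the outline radius from the question and points are (x, y) tuples:

```python
import math

def segment_rectangle(a, b, r):
    """Four corners (as a closed 'path') of the rectangle of half-width r
    around the segment a -> b, listed clockwise."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0:
        raise ValueError("degenerate segment")
    nx, ny = -dy / length * r, dx / length * r   # unit normal scaled by r
    return [(a[0] + nx, a[1] + ny), (b[0] + nx, b[1] + ny),
            (b[0] - nx, b[1] - ny), (a[0] - nx, a[1] - ny)]
```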
I want to find out if one polygon is inside another by giving an array of points of each vertex. Is there any simple way to do that?
Edit: it's not enough to check that the minimum point of the inner polygon is greater than that of the outer and the maximum point of the inner is less than that of the outer. That is not a sufficient condition. Proof:
Once you've checked that the minimum bounding box for polygon A lies inside that for polygon B I think you're going to have to check each edge of A for non-intersection with all the edges of B.
This is, I think, a simple approach, but I suspect you really want a clever approach which is more efficient.
I have done something very similar using Java2D particularly the Area class. The code for that class is freely available if you want to replicate the functionality. An easier option might be to look at this library: http://www.cs.man.ac.uk/~toby/alan/software/ It should allow you to do what you want or give you starting points anyway.
Polygon 1 is inside polygon 2 if:
1. Some point of polygon1 lies inside polygon2 (you can use ray casting here).
2. There are no edge-edge intersections between the polygons (for large numbers of edges, space-partitioning trees may increase speed; for a small number of edges, N*M enumeration is OK).
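A rough Python sketch of both checks, assuming polygons are lists of (x, y) vertices; the intersection test here only reports proper crossings, which is enough when the polygons don't merely touch:

```python
def point_in_polygon(pt, poly):
    """Ray casting: count crossings of a horizontal ray from pt to +infinity."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def segments_intersect(p1, p2, q1, q2):
    """Proper crossing test via orientation signs (touching cases ignored)."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def polygon_inside(inner, outer):
    """inner is inside outer iff some inner vertex is in outer and
    no inner edge crosses an outer edge (the N*M enumeration)."""
    if not point_in_polygon(inner[0], outer):
        return False
    n, m = len(inner), len(outer)
    return not any(segments_intersect(inner[i], inner[(i + 1) % n],
                                      outer[j], outer[(j + 1) % m])
                   for i in range(n) for j in range(m))
```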
First, use axis-aligned boundary boxes to see if they're anywhere near one another. (Essentially, draw an X-Y aligned box around each one and see if they are intersecting. This is MUCH easier than the case for polygons and generally saves a lot of time.)
If the boxes intersect, you should now perform detailed intersection testing. You'll want to draw a line perpendicular to each side of the "outside" polygon and project all of the points from both of them onto the line. Then, check that the resulting points for the inside polygon are between the points projected from the outside polygon.
I understand that example is difficult to visualize at first- I recommend this tutorial about collision detection to people interested in this area:
http://www.wildbunny.co.uk/blog/2011/04/20/collision-detection-for-dummies/
However, your task is slightly different as mentioned because you are projecting onto the perpendicular line for each side and you need ALL of them to contain the segment. I also suggest boning up a bit on the notion of a projection and your linear algebra if you want to do a lot of this.
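A minimal sketch of the projection test, with one caveat spelled out: as a containment test it is only reliable when the outside polygon is convex. Polygons are assumed to be lists of (x, y) vertices.

```python
def project(poly, axis):
    """Min/max of the dot products of all vertices with the axis."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def inside_by_projection(inner, outer):
    """Containment test by projecting onto each outer edge's perpendicular;
    valid when the outer polygon is convex."""
    n = len(outer)
    for i in range(n):
        (x1, y1), (x2, y2) = outer[i], outer[(i + 1) % n]
        axis = (-(y2 - y1), x2 - x1)             # perpendicular to the edge
        in_lo, in_hi = project(inner, axis)
        out_lo, out_hi = project(outer, axis)
        if in_lo < out_lo or in_hi > out_hi:     # inner sticks out on this axis
            return False
    return True
```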
Your question is underdetermined - just giving the coordinates of each vertex is not enough to specify a polygon. Example: draw a square and fill in the diagonals. Your five vertices are the square's corners and the point at the diagonals' intersection. From these vertices, it is possible to construct four different polygons: each one is constructed by using the edges of the original drawing, while removing one single edge from the square and limiting the diagonals (I hope this is clear enough).
EDIT: Apparently it wasn't clear enough. Let a1, a2, a3, a4 be vertices corresponding to the four points of a square (say, clockwise from top left), and let a5 be a vertex corresponding to the intersection of the square's diagonals. Just for the sake of the example, here are two polygons which fit the above vertices:
1. (a1,a2),(a2,a5),(a5,a3),(a3,a4),(a4,a1). This should look like a right-facing pacman.
2. (a1,a2),(a2,a3),(a3,a4),(a4,a5),(a5,a1). This should look like a left-facing pacman.
I have a set of regions (geo-fences) which are polygons. This set of data is fixed; so there is no need for insertion and deletion of data. Which data structure can be used for searching for regions that a query point (longitude, latitude) is in it?
Note: I have implemented KD-Tree (In fact a 2D-Tree) successfully for a set of points. But it does not work for this problem. I have implemented an R-Tree then; and it solves the problem but it is slow (or my implementation sucks).
Thank you
Note: I have worked on R-Tree implementation and it works fine now.
Since you are not inserting/deleting and presumably have plenty of time to preprocess the data, you can use some additional memory to speed up computations. The basic idea for pre-processing:
Take all of the polygon points and determine the smallest axis-aligned bounding rectangle that contains them all; basically this is the min and max of X and Y.
Choose partitioning factors dX and dY that you will use to create a search grid. Choosing powers of two for the partitioning factors can make for slightly faster computation later.
Translate the polygon data so that their bounding rectangle minimum is coincident with (0,0) and expand the rectangle so that it is an integer multiple of the partitioning factor in each dimension.
Consider each grid square and make a list of the polygons that intersect the square. Store this list for each grid square. Depending on the nature of the data (how many polygons you can ever expect to intersect a square), there are various ways you can optimize this for either storage space or speed.
Now, when you want to find regions that contain a point:
Translate the point using the origin we defined earlier and determine the grid square containing the point (if you used a power of two, this is a shift operation; otherwise it's division.)
Look at the list for the grid square. If it's empty, there is no containing polygon. If not, you have to consider each of the polygons in the list and search for intersection.
This works well for spread out and mostly non-intersecting polygons, particularly if you can choose a grid size fine enough so that there are only a few polygons per square. It will be slow in cases where you hit squares with lots of intersecting polygons. One additional optimization is to have a flag for each listed polygon at a square to indicate that the square is completely contained within the polygon; this allows you to avoid the slow containment test in many cases at the cost of a single bit per polygon entry. This is particularly valuable if your grid spacing is fine compared to the polygon sizes, as most squares will not be at intersections or edges.
If you need even more speed, you can start storing edge information at each square with the polygon reference. You only need to test against the polygon edges that actually intersect the area of the square. This can reduce the effort to only a handful of edge tests per polygon.
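A bare-bones sketch of the pre-processing and query steps in Python, using bounding-box overlap to bucket polygons into cells (a slight over-approximation of true cell intersection, which is fine for a candidate filter); the exact point-in-polygon test then only runs on the short list a query returns. dX/dY are the partitioning factors from step 2; the function names are illustrative.

```python
def build_grid_index(polygons, dx, dy):
    """Pre-process: bucket each polygon index into every grid cell that its
    axis-aligned bounding box overlaps."""
    min_x = min(x for poly in polygons for x, _ in poly)
    min_y = min(y for poly in polygons for _, y in poly)
    cells = {}
    for idx, poly in enumerate(polygons):
        lo_i = int((min(x for x, _ in poly) - min_x) // dx)
        hi_i = int((max(x for x, _ in poly) - min_x) // dx)
        lo_j = int((min(y for _, y in poly) - min_y) // dy)
        hi_j = int((max(y for _, y in poly) - min_y) // dy)
        for i in range(lo_i, hi_i + 1):
            for j in range(lo_j, hi_j + 1):
                cells.setdefault((i, j), []).append(idx)
    return cells, (min_x, min_y)

def candidate_polygons(point, cells, origin, dx, dy):
    """Query: translate the point, locate its cell, and return the indices of
    candidate polygons; run the exact point-in-polygon test only on these."""
    i = int((point[0] - origin[0]) // dx)
    j = int((point[1] - origin[1]) // dy)
    return cells.get((i, j), [])
```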
An R-Tree data structure can be used for this problem.
I have a set of points on the infinite (well, double precision) 2D plane.
Given the convex hull for this set, how can I find some points on the inside of the convex hull that are relatively far away from all the points in the input set?
In the image below, the black points are part of the original set and the hatched area represents the space taken up by all the points if we "grow" them with radius R.
The orange points are examples of what I'd like to get. It doesn't really matter where exactly they are, as long as they are relatively far away from all the black points.
Furthest Point Search http://en.wiki.mcneel.com/content/upload/images/point_far_search.png
Update: Using a Delaunay triangulation to find large empty triangles seems to be a great approach for this:
Delaunay Solution http://en.wiki.mcneel.com/content/upload/images/DelaunaySolutionToInternalFurthestPoints.png
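A short sketch of that approach using scipy's Delaunay triangulation: triangulate the black points, rank triangles by area, and take the centroids of the largest ones as candidates (circumcenters would be farther from the corners but can fall outside the hull). The k parameter and function name are made up.

```python
import numpy as np
from scipy.spatial import Delaunay

def isolated_candidates(points, k=5):
    """Return centroids of the k largest Delaunay triangles as candidate
    locations that are relatively far from the input points."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    corners = pts[tri.simplices]                      # shape (n_tri, 3, 2)
    a, b, c = corners[:, 0], corners[:, 1], corners[:, 2]
    areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                         - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))
    largest = np.argsort(areas)[::-1][:k]
    return corners[largest].mean(axis=1)              # centroid of each triangle
```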
This is a good example of a problem that may be solved using KD-Trees... there are some good notes in Numerical Recipes, 3rd Edition.
If you are trying to find point locations that are relatively isolated... maybe the center of the largest quad elements would be a good candidate.
The complexity would be O(n log^2 n) for creating the KD-Tree... and creating a sorted list of quad sizes would be O(n log n). Seems reasonable for even a large number of points (of course, depending on your requirements).
This is a naive algorithm:
Get the list of points within the convex shape.
Of those, find the minimum distance to any other point.
Rank all points by their respective R values (the minimum distance found in step 2)
Select the top x points.
For (2), thinking of this as a radius search still means you end up calculating the distance from each point to each other point, because finding out whether a point lies within a given radius of another point is the same thing as finding the distance between the points.
To optimize the search, you can divide the space into a grid, and assign each point to a grid location. Then, your search for (2) above would first check within the same square and the surrounding 8 squares. If the minimum distance to another point is within the same square, return that value. If it is from one of the 8 and the distance is such that a point outside the 9 could be closer, you have to then check the next outline of grid locations outside those 9 for any closer than those found within the 9. Rinse/repeat.
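A minimal sketch of steps (2)-(4) before any grid optimization, assuming `candidates` are the sample locations inside the convex shape from step (1) and `points` are the original black points:

```python
import math

def most_isolated(candidates, points, top_x):
    """For each candidate location, its R value is the distance to the nearest
    original point; rank by R (largest first) and keep the top_x."""
    ranked = sorted(candidates,
                    key=lambda c: min(math.dist(c, p) for p in points),
                    reverse=True)
    return ranked[:top_x]
```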
I have 1 red polygon, say, and 50 randomly placed blue polygons; they are situated in geographical 2D space. What is the quickest/speediest algorithm to find the shortest distance between the red polygon and its nearest blue polygon?
Bear in mind that it is not a simple case of taking the points that make up the vertices of the polygon as values to test for distance as they may not necessarily be the closest points.
So in the end - the answer should give back the closest blue polygon to the singular red one.
This is harder than it sounds!
I doubt there is a better solution than calculating the distance between the red one and every blue one and sorting these by length.
Regarding sorting, usually QuickSort is hard to beat in performance (an optimized one, that cuts off recursion if size goes below 7 items and switches to something like InsertionSort, maybe ShellSort).
Thus I guess the question is how to quickly calculate the distance between two polygons, after all you need to make this computation 50 times.
The following approach will work for 3D as well, but is probably not the fastest one:
Minimum Polygon Distance in 2D Space
The question is, are you willing to trade accuracy for speed? E.g. you can pack all polygons into bounding boxes, where the sides of the boxes are parallel to the coordinate system axes. 3D games use this approach pretty often. Therefore you need to find the maximum and minimum values for every coordinate (x, y, z) to construct the virtual bounding box. Calculating the distances of these bounding boxes is then a pretty trivial task.
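That "trivial task" in 2D, as a small sketch (the 3D version just adds a z term); polygons are assumed to be lists of (x, y) vertices:

```python
def aabb(poly):
    """Axis-aligned bounding box: (min_x, min_y, max_x, max_y)."""
    xs, ys = [x for x, _ in poly], [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def aabb_distance(a, b):
    """Distance between two axis-aligned boxes (0 if they overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5
```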
Here's an example image of more advanced bounding boxes, that are not parallel to the coordinate system axes:
Oriented Bounding Boxes - OBB
However, this makes the distance calculation less trivial. It is used for collision detection, as you don't need to know the distance for that, you only need to know if one edge of one bounding box lies within another bounding box.
The following image shows an axes aligned bounding box:
Axes Aligned Bounding Box - AABB
OBBs are more accurate, AABBs are faster. Maybe you'd like to read this article:
Advanced Collision Detection Techniques
This is always assuming, that you are willing to trade precision for speed. If precision is more important than speed, you may need a more advanced technique.
You might be able to reduce the problem, and then do an intensive search on a small set.
Process each polygon first by finding:
Center of polygon
Maximum radius of the polygon (i.e., the distance from the defined center to the point on the edge/vertex of the polygon furthest from it)
Now you can collect, say, the 5-10 closest polygons to the red one (find the distance center to center, subtract the radius, sort the list and take the top 5) and then do a much more exhaustive routine.
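A quick sketch of that screening step, using the vertex centroid as the "center" and subtracting both radii (slightly more conservative than the wording above) so the sort key is a lower bound on the true distance; k=5 is the arbitrary cut-off suggested above:

```python
import math

def center_and_radius(poly):
    """Vertex centroid and the maximum distance from it to any vertex."""
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    return (cx, cy), max(math.dist((cx, cy), v) for v in poly)

def closest_candidates(red, blues, k=5):
    """Sort the blue polygons by a cheap lower bound on their distance to the
    red polygon and keep the k best for the exhaustive routine."""
    red_c, red_r = center_and_radius(red)
    def lower_bound(blue):
        blue_c, blue_r = center_and_radius(blue)
        return math.dist(red_c, blue_c) - red_r - blue_r
    return sorted(blues, key=lower_bound)[:k]
```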
For polygon shapes with a reasonable number of boundary points, such as in a GIS or games application, it might be quicker and easier to do a series of tests.
For each vertex in the red polygon compute the distance to each vertex in the blue polygons and find the closest (hint, compare distance^2 so you don't need the sqrt() )
Find the closest pair, then check the vertices on each side of the found red and blue vertices to decide which line segments are closest, and then find the closest approach between those two line segments.
See http://local.wasp.uwa.edu.au/~pbourke/geometry/lineline3d/ (it's easy to simplify for the 2D case)
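In 2D the closest-approach computation reduces to point-to-segment distances once you know the segments don't cross; a small sketch, assuming segments are given by their (x, y) endpoints:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def _cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def segment_segment_distance(a, b, c, d):
    """Closest approach of segments a-b and c-d in 2D."""
    if (_cross(a, b, c) * _cross(a, b, d) < 0 and
            _cross(c, d, a) * _cross(c, d, b) < 0):
        return 0.0                        # the segments cross
    # Otherwise the minimum is attained at an endpoint of one of the segments.
    return min(point_segment_distance(a, c, d),
               point_segment_distance(b, c, d),
               point_segment_distance(c, a, b),
               point_segment_distance(d, a, b))
```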
This screening technique is intended to reduce the number of distance computations you need to perform in the average case, without compromising the accuracy of the result. It works on convex and concave polygons.
Find the minimum distance between each pair of vertices such that one is a red vertex and one is blue. Call it r. The distance between the polygons is at most r. Construct a new region from the red polygon where each line segment is moved outward by r and is joined to its neighbors by an arc of radius r centered at the vertex. Find the distance from each vertex inside this region to every line segment of the opposite color that intersects this region.
Of course you could add an approximate method such as bounding boxes to quickly determine which of the blue polygons can't possibly intersect with the red region.
Maybe the Fréchet distance is what you're looking for?
Computing the Fréchet distance between two polygonal curves
Computing the Fréchet Distance Between Simple Polygons
I know you said "the shortest distance", but did you really mean the optimal solution, or is a "good/very good" solution fine for your problem?
Because if you need to find the optimal solution, you have to calculate the distance between all of your source and destination polygon boundaries (not only the vertices). If you are in 3D space then each boundary element is a plane. That can be a big problem (O(n^2)) depending on how many vertices you have.
So if you have a vertex count that makes that square a scary number AND a "good/very good" solution is fine for you, go for a heuristic solution or an approximation.
You might want to look at Voronoi Culling. Paper and video here:
http://www.cs.unc.edu/~geom/DVD/
I would start by bounding all the polygons by a bounding circle and then finding an upper bound of the minimal distance.
Then I would simply check the edges of all blue polygons whose lower bound of distance is lower than the upper bound of the minimal distance against all the edges of the red polygon.
upper bound of min distance = min {distance(red's center, current blue's center) + current blue's radius}
for every blue polygon where distance(red's center, current blue's center) - current blue's radius < upper bound of min distance
check distance of edges and vertices
But it all depends on your data. If the blue polygons are relatively small compared to the distances between them and the red polygon, then this approach should work nicely, but if they are very close, you won't save anything (many of them will be close enough). And another thing -- If these polygons don't have many vertices (like if most of them were triangles), then it might be almost as fast to just check each red edge against each blue edge.
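A compact sketch of the filtering step, using vertex centroids plus maximum vertex distance as the bounding circles; the lower bound here also subtracts the red radius, which keeps the filter safe for large red polygons:

```python
import math

def bounding_circle(poly):
    """Cheap bounding circle: vertex centroid plus max vertex distance.
    (Assumes the centroid lies inside the polygon.)"""
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    return (cx, cy), max(math.dist((cx, cy), v) for v in poly)

def filter_blues(red, blues):
    """Discard blue polygons whose circle-based lower bound on the distance
    cannot beat the circle-based upper bound on the minimum distance."""
    red_c, red_r = bounding_circle(red)
    circles = [bounding_circle(b) for b in blues]
    # Upper bound on the minimum red-blue distance.
    upper = min(math.dist(red_c, c) + r for c, r in circles)
    # Keep blues whose lower bound is below that upper bound; check their
    # edges and vertices exhaustively afterwards.
    return [b for b, (c, r) in zip(blues, circles)
            if math.dist(red_c, c) - r - red_r < upper]
```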
hope it helps
As others have mentioned using bounding areas (boxes, circles) may allow you to discard some polygon-polygon interactions. There are several strategies for this, e.g.
Pick any blue polygon and find the distance from the red one. Now pick any other polygon. If the minimum distance between the bounding areas is greater than the already found distance you can ignore this polygon. Continue for all polygons.
Find the minimum distance/centroid distance between the red polygon and all the blue polygons. Sort the distances and consider the smallest distance first. Calculate the actual minimum distance and continue through the sorted list until the maximum distance between the polygons is greater than the minimum distance found so far.
Your choice of circles/axis-aligned boxes, or oriented boxes, can have a great effect on the performance of the algorithm, depending on the actual layout of the input polygons.
For the actual minimum distance calculation you could use Yang et al's 'A new fast algorithm for computing the distance between two disjoint convex polygons based on Voronoi diagram' which is O(log n + log m).
Gotta run off to a funeral in a sec, but if you break your polygons down into convex sub-polygons, there are some optimizations you can do. You can do a binary search on each poly to find the closest vertex, and then I believe the closest point should either be that vertex or an adjacent edge. This means you should be able to do it in log(log m * n) where m is the average number of vertices on a poly, and n is the number of polys. This is kind of hasty, so it could be wrong. Will give more details later if wanted.
You could start by comparing the distance between the bounding boxes. Testing the distance between rectangles is easier than testing the distance between polygons, and you can immediately eliminate any polygons that are more than nearest_rect + its_diagonal away (possibly you can refine that even more). Then, you can test the remaining polygons to find the closest polygon.
There are algorithms for finding polygon proximity - I'm sure Wikipedia has a good review of them. If I recall correctly, those that only allow convex polygons are substantially faster.
I believe what you are looking for is the A* algorithm, its used in pathfinding.
The naive approach is to find the distance between the red object and each of the 50 blue objects -- so you're looking at 50 Pythagorean distance calculations plus a sort to find the answer. That would only really work for finding the distance between center points though.
If you want arbitrary polygons, maybe your best bet is a raytracing solution that emits rays from the surface of the red polygon with respect to the normal and reports when another polygon is hit.
A hybrid might work -- we could find the distance from the center points, assuming we had some notion of the relative size of the blue polygons, we could cull the result set to the closest among those, then use raytracing to narrow down the truly closest polygon(s).