We are currently using the following algorithm to detect whether a geographic point is inside a complex polygon. This works fine, except when the polygon crosses the 180° longitude line.
For example, the point (-170, 60) is not detected inside the polygon 160,65,0 160,15,0 -160,15,0 -160,65,0 160,65,0
Look at the following image:
http://tinypic.com/r/14x2xl1
I want everything in the red box. Not the yellow box!
public static bool IsCoordinateInPolygon(IList<KMLCoordinate> polygon, KMLCoordinate testPoint)
{
    bool result = false;
    int j = polygon.Count - 1;
    for (int i = 0; i < polygon.Count; i++)
    {
        // Does the edge (j -> i) straddle the test point's latitude?
        if (polygon[i].Latitude < testPoint.Latitude && polygon[j].Latitude >= testPoint.Latitude ||
            polygon[j].Latitude < testPoint.Latitude && polygon[i].Latitude >= testPoint.Latitude)
        {
            // If the edge crosses the horizontal ray west of the test point, toggle the result.
            if (polygon[i].Longitude + (testPoint.Latitude - polygon[i].Latitude) / (polygon[j].Latitude - polygon[i].Latitude) * (polygon[j].Longitude - polygon[i].Longitude) < testPoint.Longitude)
            {
                result = !result;
            }
        }
        j = i;
    }
    return result;
}
Does anybody have a better algorithm?
Spherical coordinate systems have their quirks.
To avoid them, use a 3D orthogonal/orthonormal Cartesian coordinate system instead:
convert your polygon vertices and your geolocation
so (long,lat,alt) -> (x,y,z). Here you can find how to do it. You do not need to apply the local transform, just the first spherical-to-3D-Cartesian transformation (bullet #1.)
use any inside-polygon test ...
I usually count the number of intersections between a ray cast from your geolocation in any direction and the polygon's boundary lines:
if it is odd, the point is inside
if it is even, the point is outside
if the point lies on any line of the polygon, it is inside
if your cast ray hits any vertex, then either account for it (do not count multiple hits at that vertex) or change the direction a bit and try again
[Notes]
Do not forget to handle everything as 3D vectors, not 2D!!!
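For illustration, here is a minimal sketch of that spherical-to-Cartesian conversion, assuming a simple spherical Earth model (the GeoToCartesian helper name and the radius constant are my own, not from the linked answer):

// A minimal sketch, assuming a spherical Earth model; for higher accuracy
// you would use an ellipsoid (e.g. WGS84) instead.
using System;

static class GeoMath
{
    const double R = 6378137.0; // assumed sphere radius in meters

    public static void GeoToCartesian(double lonDeg, double latDeg, double alt,
                                      out double x, out double y, out double z)
    {
        double lon = lonDeg * Math.PI / 180.0; // degrees -> radians
        double lat = latDeg * Math.PI / 180.0;
        double r = R + alt;                    // distance from the planet's center
        x = r * Math.Cos(lat) * Math.Cos(lon);
        y = r * Math.Cos(lat) * Math.Sin(lon);
        z = r * Math.Sin(lat);
    }
}

With all vertices and the test point in (x,y,z) form, the odd/even intersection count above works without any 180° wrap-around special cases.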
I am trying to make a building game where you can build anywhere that has no existing builds and is touching the ground. Here is my code:
private void Update()
{
    if (buildMode)
    {
        playerScript.enabled = false;
        Vector3 point = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        if (Input.GetMouseButtonDown(0))
        {
            buildOverlay.ClearAllTiles();
            Vector3Int selectedTile = buildOverlay.WorldToCell(point);
            selectedTile.z = 0;
            if (Vector3.Distance(GameObject.Find("Player").transform.position, selectedTile) < buildDistance)
            {
                //BoundsInt bounds = collidableTilemap.cellBounds;
                //TileBase[] allTiles = collidableTilemap.GetTilesBlock(bounds);
                //for (int x = 0; x < bounds.size.x; x++)
                //{
                //    for (int y = 0; y < bounds.size.y; y++)
                //    {
                //        TileBase tile = allTiles[x + y * bounds.size.x];
                //        Debug.Log(collidableTilemap.HasTile(new Vector3Int(x, y, 0)));
                //        if (collidableTilemap.HasTile(new Vector3Int(x, y, 0)))
                //        {
                //            buildOverlay.SetTile(selectedTile, notplaceable);
                //        }
                //        else
                //        {
                //            buildOverlay.SetTile(selectedTile, placeable);
                //        }
                //    }
                //}
                selectedTile.z = 0;
                Debug.Log(selectedTile);
                Debug.Log(collidableTilemap.HasTile(selectedTile));
                if (!collidableTilemap.HasTile(selectedTile))
                {
                    buildOverlay.SetTile(selectedTile, placeable);
                }
                else
                {
                    buildOverlay.SetTile(selectedTile, notplaceable);
                }
            }
        }
        else
        {
            playerScript.enabled = true;
        }
    }
}
What this code does so far is turn off the ability to move and check for an overlapping tile. I currently have the variable buildDistance set to infinity, so that's not the problem.
Here are some images:
Unity thinks that the first layer is not there. Here is the scene view to prove that those blocks are in the same tilemap:
This code is supposed to check whether a tile exists at the coordinates where the player wants to place one. It works fine except for the first layer. Please help!
I'm pretty new to Unity so don't mind my noob mistakes. Thanks!
[EDIT]: I've changed my terrain a bit and realized a couple of new things:
This block is red.
This block is green.
I cannot build anywhere on this row, except for where the stone ends:
I can build here:
What is going on?!
It's hard to say for certain, but it seems that the conversion from world space to grid coordinates is off by one, at least in the y dimension, and I would guess in the x direction as well.
I believe the best candidate for the bug is this line:
Vector3Int selectedTile = buildOverlay.WorldToCell(point);
This kind of cast from float to int won't round the number but will instead floor it. Unity often places its tilemaps so that tiles are 0.5m misaligned with the world grid, and because of this, flooring the position might be causing these problems.
I would suggest trying
Vector3Int selectedTile = buildOverlay.WorldToCell(Vector3Int.RoundToInt(point));
or, if that does not help, you could try the uglier
Vector3Int selectedTile = buildOverlay.WorldToCell(point + Vector3.one * 0.5f);
(if this still doesn't work, you could omit the 0.5f)
Not the prettiest of solutions, but I think this is where your problem is; I'd have a play about with it.
So basically it is 1 unit off on the y axis, so simply subtract 1 y unit.
Replace this line:
if (!collidableTilemap.HasTile(selectedTile))
with
if (!collidableTilemap.HasTile(selectedTile - new Vector3Int(0, 1, 0)))
This will basically negate the offset that Unity's tilemap flooring introduces.
I'm trying to get the corners of the following shape:
By corners I mean this (red dots):
In other words, the minimum set of points that can define this shape.
And I have implemented the following:
public Shape Optimize()
{
    // If the vertices are null or empty this can't be executed
    if (vertices.IsNullOrEmpty())
        return this; // In this case, return the same instance.
    if (!edges.IsNullOrEmpty())
        edges = null; // Reset edges, because a recalculation was requested
    // The corner directions available on each iteration
    var corners = new Point[] { Point.upperLeft, Point.upperRight, Point.downLeft, Point.downRight };
    // The idea is to check whether the offset to the previous or next vertex matches one of the corner directions above; if it does, we can add the vertex.
    Point[] vs = vertices.ToArray();
    for (int i = 0; i < vertices.Count - 1; ++i)
    {
        Point backPos = i > 0 ? vs[i - 1] : vs[vertices.Count - 1],
              curPos = vs[i], // current point
              nextPos = i < vertices.Count - 1 ? vs[i + 1] : vs[0];
        // Get the offsets from the current point to the previous and next points
        Point backDiff = backPos - curPos,
              nextDiff = nextPos - curPos,
              totalDiff = nextPos - backPos;
        if (corners.Contains(backDiff) || corners.Contains(nextDiff) || corners.Contains(totalDiff))
            AddEdge(curPos, center); // If any of the offsets matches a corner direction, the current vertex is an edge/corner
    }
    return this;
}
This works for rectangular shapes, but rotated shapes have very jagged sides, so this code doesn't work well:
Blue pixels (in this photo and the following one) are the vertices processed by the Optimize method.
Green pixels are the detected corners/edges (in both photos).
But jaggedness in a shape only reflects the inclination of its sides, so what can I do to improve this?
Also, I have tested the Accord.NET classes that inherit from BaseCornersDetector; the best result is obtained with HarrisCornersDetector, but:
Many edges/corners are unnecessary, and they aren't in the needed places (see the first photo).
Well, after hours of research I found a library called Simplify.NET that internally runs the Ramer–Douglas–Peucker algorithm.
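For reference, here is a minimal sketch of the Ramer–Douglas–Peucker idea itself (not the Simplify.NET API, which I won't reproduce from memory), assuming the points are ordered along the outline; since this is Unity, UnityEngine.Vector2 is used:

// Recursively keeps only points that deviate from the current segment by
// more than the tolerance; everything else collapses to straight runs.
using System.Collections.Generic;
using UnityEngine;

public static class Simplifier
{
    public static List<Vector2> Simplify(List<Vector2> pts, float tolerance)
    {
        if (pts.Count < 3)
            return new List<Vector2>(pts);

        // Find the point farthest from the segment joining the endpoints.
        int index = 0;
        float maxDist = 0f;
        Vector2 a = pts[0], b = pts[pts.Count - 1];
        for (int i = 1; i < pts.Count - 1; i++)
        {
            float d = PerpendicularDistance(pts[i], a, b);
            if (d > maxDist) { maxDist = d; index = i; }
        }

        // If it deviates more than the tolerance, keep it and recurse on both halves.
        if (maxDist > tolerance)
        {
            var left = Simplify(pts.GetRange(0, index + 1), tolerance);
            var right = Simplify(pts.GetRange(index, pts.Count - index), tolerance);
            left.RemoveAt(left.Count - 1); // avoid duplicating the split point
            left.AddRange(right);
            return left;
        }

        // Otherwise the whole run collapses to its two endpoints.
        return new List<Vector2> { a, b };
    }

    static float PerpendicularDistance(Vector2 p, Vector2 a, Vector2 b)
    {
        Vector2 ab = b - a;
        float len = ab.magnitude;
        if (len == 0f) return Vector2.Distance(p, a);
        // Distance from p to the infinite line through a and b.
        return Mathf.Abs(ab.x * (p.y - a.y) - ab.y * (p.x - a.x)) / len;
    }
}

The tolerance controls how aggressively jagged pixel steps are merged into single straight sides, which is exactly what the rotated shapes above need.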
You may also be interested in the Bresenham algorithm, which lets you rasterize a line between two Points.
With it, you can check whether your tolerance is too high by comparing the actual points with the points the algorithm outputs and computing some kind of similarity percentage.
Finally, it is worth mentioning the Concave Hull and Convex Hull algorithms.
All this is related to Unity3D.
My outputs:
And my implementation.
It's very important to say that the points need to be sorted so that they are connected in order. If the shape is concave, as you can see in the second photo, you may need to walk the walls of the shape.
You can see an example of an implementation here. Thanks to #Bunny83.
I have a canvas with several polygons on it, and what I want to do is detect whether the polygons are overlapping. I've looked around on various websites, and most of what I've found is to do with object collision (this, for example); my polygons aren't moving, so that's not going to be an issue.
I was wondering if someone could point me in the right direction on how to detect whether they are overlapping. Is there a method that can calculate the space used on screen, or the region of a Polygon, to compare the two?
So, for example, in the mock-up here, the red shape overlaps the green one.
Essentially, all I want is to say yes, they are overlapping, or no, they are not.
http://peterfleming.net84.net/Slice%201.png
Thanks in advance.
Pete
This library (free and open source) handles polygon clipping: http://www.angusj.com/delphi/clipper.php
That said, if by overlapping you mean that at least one point of one polygon is inside the other, you can test each polygon's points against the other by looking at the point-in-polygon problem, or check each polygon's lines to see whether they cut across the other polygon.
These methods all work with different efficiency; try them and see what's best for your situation.
However, your diagram seems to suggest you want to see whether these polygons are 'side by side' or something similar. It would help to get clarification on this. Overlap generally needs some coordinate plane to be measured against.
Assuming that each polygon is a Shape (either Path or Polygon), you could use the FillContainsWithDetail method of their RenderedGeometry to pairwise check intersection.
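For example, a minimal sketch of that pairwise check, assuming WPF shapes positioned on a Canvas via Canvas.Left/Canvas.Top (the Overlaps helper name is my own):

// RenderedGeometry is in each shape's local space, so shift both
// geometries into the shared canvas space before comparing them.
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

static bool Overlaps(Shape first, Shape second)
{
    Geometry g1 = first.RenderedGeometry.Clone();
    Geometry g2 = second.RenderedGeometry.Clone();
    g1.Transform = new TranslateTransform(Canvas.GetLeft(first), Canvas.GetTop(first));
    g2.Transform = new TranslateTransform(Canvas.GetLeft(second), Canvas.GetTop(second));

    // Anything other than Empty means the fills touch or overlap.
    return g1.FillContainsWithDetail(g2) != IntersectionDetail.Empty;
}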
I was having the same problem too, and I used this implementation (which is heavily inspired by this: C# Point in polygon):
bool DoesPolygonsOverlap(IList<Point> firstPolygon, IList<Point> secondPolygon)
{
    foreach (var item in firstPolygon)
    {
        if (IsPointInPolygon(secondPolygon, item))
        {
            return true;
        }
    }
    foreach (var item in secondPolygon)
    {
        if (IsPointInPolygon(firstPolygon, item))
        {
            return true;
        }
    }
    return false;
}

bool IsPointInPolygon(IList<Point> polygon, Point testPoint)
{
    bool result = false;
    int j = polygon.Count - 1;
    for (int i = 0; i < polygon.Count; i++)
    {
        // Does the edge (j -> i) straddle the test point's Y coordinate?
        if (polygon[i].Y < testPoint.Y && polygon[j].Y >= testPoint.Y ||
            polygon[j].Y < testPoint.Y && polygon[i].Y >= testPoint.Y)
        {
            // If the edge crosses the horizontal ray left of the test point, toggle the result.
            if (polygon[i].X + (testPoint.Y - polygon[i].Y) / (polygon[j].Y - polygon[i].Y) * (polygon[j].X - polygon[i].X) < testPoint.X)
            {
                result = !result;
            }
        }
        j = i;
    }
    return result;
}
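For completeness, a quick usage sketch, assuming WPF's System.Windows.Point (whose coordinates are doubles, so the division above works):

// Hypothetical usage: two triangles that share interior area.
var a = new List<Point> { new Point(0, 0), new Point(4, 0), new Point(0, 4) };
var b = new List<Point> { new Point(1, 1), new Point(5, 1), new Point(1, 5) };
Console.WriteLine(DoesPolygonsOverlap(a, b)); // True: (1, 1) lies inside the first triangle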
Attention: the function has not been tested very much and has big potential for improvement (for example, it misses the case where edges cross without any vertex lying inside the other polygon). Please tell me if you find a bug or problem.
I load multiple meshes from .x files into different mesh variables.
Now I would like to calculate the bounding sphere across all the meshes I have loaded (and which are being displayed).
Please guide me on how this could be achieved.
Can VertexBuffers be appended together into one variable and the bounding sphere computed from that? (If yes, how are the VertexBuffers added together?)
Otherwise, what alternative would you suggest?
Thanks
It's surprisingly easy to do this.
First, you need to average all your vertices. This gives you the center position.
This is done as follows in C++ (sorry, my C# is pretty rusty, but it should give you the idea):
D3DXVECTOR3 avgPos( 0.0f, 0.0f, 0.0f );
const float rcpNum = 1.0f / (float)numVerts; // Do this here, as divides are far more expensive than multiplies.
int count = 0;
while( count < numVerts )
{
    // Instead of adding everything up and then dividing by the number (which could lead
    // to overflows) I'll divide by the number as I go along. The result is the same.
    avgPos.x += vert[count].pos.x * rcpNum;
    avgPos.y += vert[count].pos.y * rcpNum;
    avgPos.z += vert[count].pos.z * rcpNum;
    count++;
}
Now you need to go through every vertex and work out which one is furthest away from the center point.
Something like this would work (in C++):
float maxSqDist = 0.0f;
int count = 0;
while( count < numVerts )
{
    D3DXVECTOR3 diff = avgPos - vert[count].pos;
    // Note we may as well use the square length, as sqrt is very expensive and the
    // maximum square length will ALSO give the maximum length, and yet we only need
    // to do one sqrt this way :)
    const float sqDist = D3DXVec3LengthSq( &diff );
    if ( sqDist > maxSqDist )
    {
        maxSqDist = sqDist;
    }
    count++;
}
const float radius = sqrtf( maxSqDist );
And you now have your center position (avgPos) and your radius (radius) and, thus, all the info you need to define a bounding sphere.
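If it helps, here is a minimal C# sketch of the same idea, assuming you have already gathered the positions of all loaded meshes into one array (e.g. by reading each mesh's vertex buffer); System.Numerics.Vector3 stands in for whatever vector type your API uses:

// Average all vertices for the center, then take the distance to the
// farthest vertex as the radius (one sqrt at the very end).
using System;
using System.Numerics;

static void ComputeBoundingSphere(Vector3[] verts, out Vector3 center, out float radius)
{
    Vector3 sum = Vector3.Zero;
    foreach (Vector3 v in verts)
        sum += v;
    center = sum / verts.Length;

    float maxSq = 0f;
    foreach (Vector3 v in verts)
        maxSq = Math.Max(maxSq, (v - center).LengthSquared());
    radius = (float)Math.Sqrt(maxSq);
}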
I have an idea: I would determine the center of every single mesh object, and then determine the center of the collection of mesh objects using that information ...
I'm trying to write a simple raytracer as a hobby project, and it's all working fine now, except I can't get soft shadows to work at all. My idea of soft shadows is that the light source is considered to have a location and a radius. To do a shadow test on this light, I take the point where the primary ray hit an object in the scene and cast n rays towards the light source, where each new ray has a random component on every axis, varying between -radius and radius.
If such a ray hits an object in the scene, I increment a hit counter (if a ray hits multiple objects, it still only increments by one). If it makes it to the light source without collisions, I add the distance from the primary ray's intersection point to the light source's center to a variable.
When n samples have been taken, I calculate the ratio of rays that have collided and multiply the color of the light by this ratio (so a light with color 1000,1000,1000 will become 500,500,500 with a ratio of 0.5, where half the rays have collided). Then I calculate the average distance to the light source by dividing the distance variable by the number of non-colliding rays. I return that value and the function exits.
The problem is: it doesn't work. Not quite, at least. What it looks like can be seen here. You can see it sort of resembles soft shadows, if you squint really hard.
I don't get it: am I making some fundamental mistake here, or is it something tiny? I'm fairly sure the problem is in this method, because when I count the number of partially lit pixels produced directly by this method, there are only about 250, when there should be a lot more. And when you look closely at the picture, you can see there are some partially lit pixels, suggesting the rest of the code processes partially lit pixels just fine.
Here's the actual soft-shadow light class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MyFirstRayTracer
{
    public class AreaLight : ILight
    {
        private const int _radius = 5;
        private const int _samples = 16;

        public Color Color { get; set; }
        public Vector Location { get; set; }

        #region ILight Members

        public float GetLightingInformation(Vector point, ISceneObject[] scene, out Color color)
        {
            int intersectCount = 0;
            float distance = -1;
            for (int i = 0; i < _samples; i++)
            {
                bool intersects = false;
                float rand = 0;
                rand = _radius - (float)(new Random().NextDouble() * (2 * _radius));
                foreach (ISceneObject obj in scene)
                {
                    Vector iPoint;
                    Vector loc = new Vector(Location.X + rand, Location.Y + rand, Location.Z + rand);
                    if (!obj.Intersect(new Ray(point, loc), out iPoint))
                    {
                        distance += (Location - point).SqLength;
                    }
                    else
                    {
                        intersects = true;
                        distance -= (Location - point).SqLength;
                    }
                }
                if (intersects)
                    intersectCount++;
            }
            float factor = 1 - ((float)intersectCount / _samples);
            color = new Color(factor * Color.R, factor * Color.G, factor * Color.B);
            return (float)Math.Sqrt(distance / (_samples - intersectCount));
        }

        #endregion
    }
}
A minor point, but is this the best use of the Random class?

for (int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(new Random().NextDouble() * (2 * _radius));

Should this not be:

var rnd = new Random();
for (int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(rnd.NextDouble() * (2 * _radius));

Constructing a new Random for every sample is costly, and because each instance is seeded from the clock, consecutive samples can even end up with identical values; one shared instance avoids both problems.
Try generating a different "rand" for each component of "loc". As is, your jittered points all lie on a line.
You are actually generating the jittered points along a line with direction (1, 1, 1). Is the light source really linear?
Also, I can barely see anything in your example. Could you move your camera nearer to the shadow, and not point it from the direction of the light?
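To make that concrete, a minimal sketch of the fix, reusing the question's Vector type, _radius constant, and sample loop (a sketch, not a tested drop-in):

Random rng = new Random(); // created once, outside the sample loop

// inside the sample loop: one independent offset per axis
float rx = _radius - (float)(rng.NextDouble() * (2 * _radius));
float ry = _radius - (float)(rng.NextDouble() * (2 * _radius));
float rz = _radius - (float)(rng.NextDouble() * (2 * _radius));
Vector loc = new Vector(Location.X + rx, Location.Y + ry, Location.Z + rz);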
See, this is why I come to this site :)
Every axis has its own random value now, and it looks a lot better. It still looks a little weird; increasing the number of samples helps, though. It now looks like this.
Do you know a more efficient way to reduce the pattern-forming?
The biggest help, though: not instantiating Random for every sample. It seriously tripled my rendering speed with soft shadows! I never knew Random was so costly to instantiate. Wow.
Thanks a lot.
In your response you asked for a better way to reduce the pattern-forming. An improvement could be, instead of randomizing all the rays over the same window, to give each ray a different offset on every axis, effectively giving each one a separate little window to randomize in. This should result in a more even distribution. Another way to describe it is as a grid perpendicular to the shadow ray: each tile in the grid contains one of the n shadow rays, but the location within the tile is random. Here you can find a part of a tutorial which describes how this can be used for soft shadows; a sketch follows below.
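A minimal sketch of that jittered-grid (stratified) sampling, again reusing the question's _radius and assuming a square light; the grid size n is my own choice:

int n = 4;                        // n x n grid = 16 samples
float cell = (2f * _radius) / n;  // width of one grid cell
Random rng = new Random();

for (int gy = 0; gy < n; gy++)
{
    for (int gx = 0; gx < n; gx++)
    {
        // One random point inside this cell, centered on the light's location.
        float ox = -_radius + (gx + (float)rng.NextDouble()) * cell;
        float oy = -_radius + (gy + (float)rng.NextDouble()) * cell;
        // Cast one shadow ray toward (Location.X + ox, Location.Y + oy, Location.Z),
        // then count intersections exactly as before.
    }
}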