Problem:
I know the x and y of three arbitrary points on a 2D plane.
I know the vague distance from each point to the unknown point, though I don't know the x and y components of each distance.
I want to find the position of the 4th point.
The data is stored in a list (of length ≥ 3) of type Data, where
public class Data
{
    double m_x, m_y, m_distance;
}
I've tried:
Walking the list, calculating the components of the distance, then adding the known x and y. I then calculated the average position of the predicted point from the 3 known points, but the accuracy was inconsistent.
foreach (var item in data_list)
{
    var dx = item.m_x + item.m_distance * Math.Cos(item.m_distance);
    var dy = item.m_y + item.m_distance * Math.Sin(item.m_distance);
    out_list.Add(new Data { m_x = dx, m_y = dy });
}
foreach (var item in out_list)
{
    __dx += item.m_x;
    __dy += item.m_y;
}
__dx /= out_list.Count;
__dy /= out_list.Count;
Creating three circles at the known x and y, extending their radii equal to the distance component and checking intersection. The problem is that the distance varies since it's rather imprecise, more like a suggestion.
Is there a simple, ok-performing, witty solution to this problem that I can't grasp?
I've thought of extending lines 360 degrees around each point and checking where three lines intersect, the shortest distance away from the origin, but I'm not entirely sure about the implementation.
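For what it's worth, here is a minimal sketch of the circle-intersection idea done as algebra rather than geometry: subtracting the first circle's equation from the other two turns the intersection into a 2x2 linear solve. This is only a sketch - it assumes the Data fields are accessible, and it treats the vague distances as if they were exact, so with noisy data it gives a rough estimate at best:

static (double x, double y) Trilaterate(Data p1, Data p2, Data p3)
{
    // Subtracting circle 1's equation from circles 2 and 3 gives two linear equations:
    // 2*(x2-x1)*x + 2*(y2-y1)*y = r1^2 - r2^2 + x2^2 - x1^2 + y2^2 - y1^2
    double a1 = 2 * (p2.m_x - p1.m_x), b1 = 2 * (p2.m_y - p1.m_y);
    double c1 = p1.m_distance * p1.m_distance - p2.m_distance * p2.m_distance
              + p2.m_x * p2.m_x - p1.m_x * p1.m_x
              + p2.m_y * p2.m_y - p1.m_y * p1.m_y;
    double a2 = 2 * (p3.m_x - p1.m_x), b2 = 2 * (p3.m_y - p1.m_y);
    double c2 = p1.m_distance * p1.m_distance - p3.m_distance * p3.m_distance
              + p3.m_x * p3.m_x - p1.m_x * p1.m_x
              + p3.m_y * p3.m_y - p1.m_y * p1.m_y;
    double det = a1 * b2 - a2 * b1; // near zero when the three known points are collinear
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det);
}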
I am computing the distance covered by an object.
The X and Y position values are first stored in two different Lists, X and W.
Then I use another List for storing the distance covered by this object. Also, I refresh the lists when their count reaches 10, in order to avoid a memory burden.
On the basis of the distance values, I have to analyze whether the object is static: if it is, the distance should not increase, and the computed distance value shown in the text box display should appear static.
Actually, I am using sensors to compute the distance, and due to sensor error the distance value varies even when the object is static. The sensor error threshold is about 15 cm.
I have developed the logic; however, I receive this error:
System.ArgumentOutOfRangeException: 'Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index'
My code is as follows:
void distance()
{
    List<double> d = new List<double>();
    double sum = 0, sum1 = 0;
    for (int i = 1; i < X.Count; i++)
    {
        // distance computation
        if ((d[i] - d[i - 1]) > 0.15)
        {
            sum1 = d.Sum();
            sum = sum1 + dis1;
            Dis = Math.Round(sum, 3);
        }
    }
    // refresh the Lists when X, W and d List reach the count of 10
}
You're doing it totally wrong. Come up with a method that computes the distance for two given points. That's gonna be a function of signature double -> double -> double or, if you prefer C#, double ComputeDistance(double startPoint, double endPoint).
Then the only thing to do is to apply such a function to each pair of points you've got. The easiest and most compact way to accomplish that is by means of LINQ. It could be done in a regular foreach as well.
Note that it would be way clearer if you eventually merged your separate lists into a single list; Tuple<double, double> seems to be the best choice, performance included.
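A minimal sketch of what that could look like, using the X and W lists from the question and adapting the suggested ComputeDistance to the merged pairs (everything else here is illustrative, not a fixed API):

// Merge the separate coordinate lists into a single list of points.
var points = X.Zip(W, (x, w) => Tuple.Create(x, w)).ToList();

// Distance between two given points.
double ComputeDistance(Tuple<double, double> a, Tuple<double, double> b) =>
    Math.Sqrt(Math.Pow(b.Item1 - a.Item1, 2) + Math.Pow(b.Item2 - a.Item2, 2));

// Apply the function to each consecutive pair of points via LINQ.
List<double> d = points.Zip(points.Skip(1), ComputeDistance).ToList();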
I am working with geographic information, and recently I needed to draw an ellipse. For compatibility with the OGC convention, I cannot use the ellipse as it is; instead, I use an approximation of the ellipse using a polygon, by taking a polygon which is contained by the ellipse and using arbitrarily many points.
The process I used to generate the ellipse for a given number of points N is the following (using C# and a fictional Polygon class):
Polygon CreateEllipsePolygon(Coordinate center, double radiusX, double radiusY, int numberOfPoints)
{
    Polygon result = new Polygon();
    for (int i = 0; i < numberOfPoints; i++)
    {
        double percentDone = ((double)i) / ((double)numberOfPoints);
        double currentEllipseAngle = percentDone * 2 * Math.PI;
        Point newPoint = CalculatePointOnEllipseForAngle(currentEllipseAngle, center, radiusX, radiusY);
        result.Add(newPoint);
    }
    return result;
}
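(The Point and Coordinate types are fictional as well; CalculatePointOnEllipseForAngle simply evaluates the parametric form, something like the sketch below.)

Point CalculatePointOnEllipseForAngle(double angle, Coordinate center, double radiusX, double radiusY)
{
    return new Point(center.X + radiusX * Math.Cos(angle),
                     center.Y + radiusY * Math.Sin(angle));
}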
This has served me quite well so far, but I've noticed a problem with it: if my ellipse is 'stocky', that is, radiusX is much larger than radiusY, the number of points on the top part of the ellipse is the same as the number of points on the left part of the ellipse.
That is a wasteful use of points! Adding a point on the upper part of the ellipse would hardly affect the precision of my polygon approximation, but adding a point to the left part of the ellipse can have a major effect.
What I'd really like, is a better algorithm to approximate the ellipse with a polygon. What I need from this algorithm:
It must accept the number of points as a parameter; it's OK to accept the number of points per quadrant (I could iteratively add points in the 'problematic' places, but I need good control over how many points I'm using)
It must be bounded by the ellipse
It must contain the points straight above, straight below, straight to the left and straight to the right of the ellipse's center
Its area should be as close as possible to the area of the ellipse, with preference to optimal for the given number of points, of course (see Jaan's answer - apparently this solution is already optimal)
The minimal internal angle in the polygon is maximal
What I've had in mind is finding a polygon in which the angle between every two lines is always the same - but not only could I not figure out how to produce such a polygon, I'm not even sure one exists, even if I remove the restrictions!
Does anybody have an idea about how I can find such a polygon?
"finding a polygon in which the angle between every two lines is always the same"
Yes, it is possible. We want to find points on the (first) ellipse quadrant such that the angles of the tangents at these points form an equidistant sequence (the same angle difference between neighbours). It is not hard to find the tangent at the point

x = a*Cos(Fi)
y = b*Sin(Fi)

from the derivatives:

dx = -a*Sin(Fi), dy = b*Cos(Fi)
y' = dy/dx = -b/a * Cos(Fi)/Sin(Fi) = -b/a * Ctg(Fi)

The derivative y' describes the tangent; taking magnitudes (the sign only reflects the tangent's direction), this tangent has angular coefficient

k = b/a * Ctg(Fi) = Tg(Theta)

so that

Fi = ArcCtg(a/b * Tg(Theta)) = Pi/2 - ArcTan(a/b * Tg(Theta))

due to the relation for complementary angles, where Fi varies from 0 to Pi/2, and Theta from Pi/2 to 0.
So code for finding N + 1 points (including the extremal ones) per quadrant may look like this (Delphi):
for i := 0 to N - 1 do begin
  Theta := Pi/2 * i / N;
  Fi := Pi/2 - ArcTan(Tan(Theta) * a/b);
  x := CenterX + Round(a * Cos(Fi));
  y := CenterY + Round(b * Sin(Fi));
end;
// I've removed the Nth point calculation, which involves the indefinite Tan(Pi/2).
// It would be better to assign the known value Fi = 0 at that point.
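For the question's C# context, here is a direct port of the loop, folding in the suggested fix for the Nth point (CenterX, CenterY, a, b and N are assumed to be defined as above):

for (int i = 0; i <= N; i++) // N + 1 points per quadrant, including the extremal ones
{
    double theta = Math.PI / 2 * i / N;
    // Assign the known value Fi = 0 at i == N to avoid the indefinite Tan(Pi/2).
    double fi = (i == N) ? 0 : Math.PI / 2 - Math.Atan(Math.Tan(theta) * a / b);
    double x = CenterX + Math.Round(a * Math.Cos(fi));
    double y = CenterY + Math.Round(b * Math.Sin(fi));
    // ... collect (x, y) here
}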
Sketch for perfect-angle polygon: (image not reproduced here)
One way to achieve adaptive discretisations for closed contours (like ellipses) is to run the Ramer–Douglas–Peucker algorithm in reverse:
1. Start with a coarse description of the contour C, in this case 4 points located at the left, right, top and bottom of the ellipse.
2. Push the initial 4 edges onto a queue Q.
while (N < Nmax && Q not empty)
3. Pop an edge [pi,pj] <- Q, where pi,pj are the endpoints.
4. Project a midpoint pk onto the contour C. (I expect that simply bisecting the theta endpoint values will suffice for an ellipse.)
5. Calculate distance D between point pk and edge [pi,pj].
if (D > TOL)
6. Replace edge [pi,pj] with sub-edges [pi,pk], [pk,pj].
7. Push new edges onto Q.
8. N = N+1
endif
endwhile
This algorithm iteratively refines an initial discretisation of the contour C, clustering points in areas of high curvature. It terminates either when (i) a user-defined error tolerance TOL is satisfied, or when (ii) the maximum allowable number of points Nmax has been used.
I'm sure that it's possible to find an alternative that's optimised specifically for the case of an ellipse, but the generality of this method is, I think, pretty handy.
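For reference, a minimal C# sketch of this refinement loop, using the parametric point formula from the question (all names here are illustrative):

List<double> RefineEllipseThetas(double rx, double ry, double tol, int nmax)
{
    // 1-2. Start with the 4 extremal points and push the 4 initial edges.
    var thetas = new List<double> { 0, Math.PI / 2, Math.PI, 3 * Math.PI / 2 };
    var queue = new Queue<(double t0, double t1)>();
    for (int i = 0; i < 4; i++)
        queue.Enqueue((i * Math.PI / 2, (i + 1) * Math.PI / 2));

    int n = 4;
    while (n < nmax && queue.Count > 0)
    {
        // 3-4. Pop an edge and bisect its theta range to get the midpoint.
        var (t0, t1) = queue.Dequeue();
        double tm = (t0 + t1) / 2;
        var (x0, y0) = (rx * Math.Cos(t0), ry * Math.Sin(t0));
        var (x1, y1) = (rx * Math.Cos(t1), ry * Math.Sin(t1));
        var (xm, ym) = (rx * Math.Cos(tm), ry * Math.Sin(tm));

        // 5. Distance D between the midpoint pk and the chord [pi, pj].
        double dx = x1 - x0, dy = y1 - y0;
        double d = Math.Abs(dy * xm - dx * ym + x1 * y0 - y1 * x0)
                 / Math.Sqrt(dx * dx + dy * dy);

        // 6-8. Refine only where the chord deviates too much from the contour.
        if (d > tol)
        {
            thetas.Add(tm);
            queue.Enqueue((t0, tm));
            queue.Enqueue((tm, t1));
            n++;
        }
    }
    thetas.Sort();
    return thetas;
}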
I assume that in the OP's question, CalculatePointOnEllipseForAngle returns a point whose coordinates are as follows.
newPoint.x = radiusX*cos(currentEllipseAngle) + center.x
newPoint.y = radiusY*sin(currentEllipseAngle) + center.y
Then, if the goal is to minimize the difference between the areas of the ellipse and the inscribed polygon (i.e., to find an inscribed polygon with maximal area), the OP's original solution is already an optimal one. See Ivan Niven, "Maxima and Minima Without Calculus", Theorem 7.3b.
(There are infinitely many optimal solutions: one can get another polygon with the same area by adding an arbitrary constant to currentEllipseAngle in the formulae above; these are the only optimal solutions. The proof idea is quite simple: first, one proves that these are the optimal solutions in the case of a circle, i.e. if radiusX = radiusY; second, one observes that under a linear transformation that transforms a circle into our ellipse, e.g. multiplying the x-coordinate by some constant, all areas are multiplied by a constant, and therefore a maximal-area inscribed polygon of the circle is transformed into a maximal-area inscribed polygon of the ellipse.)
One may also regard other goals, as suggested in the other posts: e.g. maximizing the minimal angle of the polygon, or minimizing the Hausdorff distance between the boundaries of the polygon and the ellipse. (E.g. the Ramer-Douglas-Peucker algorithm is a heuristic to approximately solve the latter problem. Instead of approximating a polygonal curve, as in the usual Ramer-Douglas-Peucker implementation, we approximate an ellipse, but it is possible to devise a formula for finding the point on an ellipse arc farthest from a line segment.) With respect to these goals, the OP's solution would usually not be optimal, and I don't know whether finding an exact solution formula is feasible at all. But the OP's solution is not as bad as the OP's picture suggests: it seems the picture was not produced with this algorithm, as it has fewer points in the more sharply curved parts of the ellipse than the algorithm produces.
I suggest you switch to the parametric (angle) representation.
The ellipse in parametric form is:
x(t) = XRadius * cos(t)
y(t) = YRadius * sin(t)
for 0 <= t <= 2*pi
The problems arise when XRadius >> YRadius (or YRadius >> XRadius).
Instead of using numberOfPoints, you can use an array of angles, obviously not all identical.
I.e., with 36 points and dividing equally, you get angle = 2*pi*n / 36 radians for each sector.
Around n = 0 (or 36) and n = 18, in a "neighborhood" of these 2 values, the approximation doesn't work well, because the ellipse sector is significantly different from the triangle used to approximate it. You can decrease the sector size around these points, thus increasing precision there, instead of just increasing the number of points, which would also add segments in other, unneeded areas. The sequence of angles should become something like (in degrees):
angles_array = [5,10,10,10,10.....,5,5,....10,10,...5]
The first 5 deg. sequence is for t = 0, the second is for t = pi, and the last is again around 2*pi.
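A sketch of how such an angle sequence could be generated (the 20-degree 'neighborhood' here is an arbitrary choice for illustration):

// Smaller steps near t = 0 and t = pi, where a flat ellipse curves sharply.
var angles = new List<double>();
for (double deg = 0; deg < 360; )
{
    angles.Add(deg * Math.PI / 180);
    bool nearSharpEnd = deg < 20 || (deg >= 160 && deg < 200) || deg >= 340;
    deg += nearSharpEnd ? 5 : 10;
}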
Here is an iterative algorithm I've used.
I didn't look for a theoretically optimal solution, but it works quite well for me.
Notice that this algorithm takes as input the maximal angular error between the polygon's tangent ('prime') and the ellipse's, rather than the number of points as you asked for.
public static class EllipsePolygonCreator
{
    #region Public static methods

    public static IEnumerable<Coordinate> CreateEllipsePoints(
        double maxAngleErrorRadians,
        double width,
        double height)
    {
        IEnumerable<double> thetas = CreateEllipseThetas(maxAngleErrorRadians, width, height);
        return thetas.Select(theta => GetPointOnEllipse(theta, width, height));
    }

    #endregion

    #region Private methods

    private static IEnumerable<double> CreateEllipseThetas(
        double maxAngleErrorRadians,
        double width,
        double height)
    {
        double firstQuarterStart = 0;
        double firstQuarterEnd = Math.PI / 2;
        double startPrimeAngle = Math.PI / 2;
        double endPrimeAngle = 0;

        double[] thetasFirstQuarter = RecursiveCreateEllipsePoints(
            firstQuarterStart,
            firstQuarterEnd,
            maxAngleErrorRadians,
            width / height,
            startPrimeAngle,
            endPrimeAngle).ToArray();

        double[] thetasSecondQuarter = new double[thetasFirstQuarter.Length];
        for (int i = 0; i < thetasFirstQuarter.Length; ++i)
        {
            thetasSecondQuarter[i] = Math.PI - thetasFirstQuarter[thetasFirstQuarter.Length - i - 1];
        }

        IEnumerable<double> thetasFirstHalf = thetasFirstQuarter.Concat(thetasSecondQuarter);
        IEnumerable<double> thetasSecondHalf = thetasFirstHalf.Select(theta => theta + Math.PI);
        IEnumerable<double> thetas = thetasFirstHalf.Concat(thetasSecondHalf);
        return thetas;
    }

    private static IEnumerable<double> RecursiveCreateEllipsePoints(
        double startTheta,
        double endTheta,
        double maxAngleError,
        double widthHeightRatio,
        double startPrimeAngle,
        double endPrimeAngle)
    {
        double yDelta = Math.Sin(endTheta) - Math.Sin(startTheta);
        double xDelta = Math.Cos(startTheta) - Math.Cos(endTheta);
        double averageAngle = Math.Atan2(yDelta, xDelta * widthHeightRatio);

        if (Math.Abs(averageAngle - startPrimeAngle) < maxAngleError &&
            Math.Abs(averageAngle - endPrimeAngle) < maxAngleError)
        {
            return new double[] { endTheta };
        }

        double middleTheta = (startTheta + endTheta) / 2;
        double middlePrimeAngle = GetPrimeAngle(middleTheta, widthHeightRatio);
        IEnumerable<double> firstPoints = RecursiveCreateEllipsePoints(
            startTheta,
            middleTheta,
            maxAngleError,
            widthHeightRatio,
            startPrimeAngle,
            middlePrimeAngle);
        IEnumerable<double> lastPoints = RecursiveCreateEllipsePoints(
            middleTheta,
            endTheta,
            maxAngleError,
            widthHeightRatio,
            middlePrimeAngle,
            endPrimeAngle);
        return firstPoints.Concat(lastPoints);
    }

    private static double GetPrimeAngle(double theta, double widthHeightRatio)
    {
        return Math.Atan(1 / (Math.Tan(theta) * widthHeightRatio)); // Tangent ('prime') angle of the ellipse at theta
    }

    private static Coordinate GetPointOnEllipse(double theta, double width, double height)
    {
        double x = width * Math.Cos(theta);
        double y = height * Math.Sin(theta);
        return new Coordinate(x, y);
    }

    #endregion
}
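A hypothetical usage example (the Coordinate type comes from the question):

// Roughly 1 degree of maximum tangent error for a wide, flat ellipse.
List<Coordinate> points = EllipsePolygonCreator.CreateEllipsePoints(
    maxAngleErrorRadians: Math.PI / 180,
    width: 100.0,
    height: 10.0).ToList();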
I am currently working on a project in which I am required to write software that compares two images of the same area and draws a box around the differences. I wrote the program in C# .NET in a few hours but soon realized it was INCREDIBLY expensive to run. Here are the steps I implemented:
1. Created a Pixel class that stores the x, y coordinates of each pixel, and a PixelRectangle class that stores a list of pixels along with width, height, x and y properties.
2. Looped through every pixel of each image, comparing the colours of each pair of corresponding pixels. If the colours differed, I created a new Pixel object with the x, y coordinates of that pixel and added it to a pixelDifference list.
3. Next I wrote a method that recursively checks each pixel in the pixelDifference list to create PixelRectangle objects that only contain pixels that are directly next to each other. (Pretty sure this bad boy is causing the majority of the destruction, as it gave me a stack overflow error.)
4. I then worked out the x, y coordinates and dimensions of each rectangle based on the pixels stored in the PixelRectangle object's list, and drew a rectangle over the original image to show where the differences were.
My questions are: Am I going about this the correct way? Would a quad tree hold any value for this project? If you could give me the basic steps on how something like this is normally achieved I would be grateful. Thanks in advance.
Dave.
Looks like you want to implement blob detection. My suggestion is not to reinvent the wheel and just use OpenCvSharp or Emgu CV to do this; google 'blob detection' and OpenCV.
If you want to do it yourself, here's my 2 cents' worth:
First of all, let's clarify what you want to do - really two separate things:
1. Compute the difference between the two images (I am assuming they have the same dimensions).
2. Draw a box around 'areas' that are 'different' as measured by 1. The questions here are: what is an 'area', and what is considered 'different'?
My suggestion for each step (my assumption is that both images are greyscale; if not, compute the sum of the colour channels for each pixel to get a grey value):
1) Cycle through all pixels in both images and subtract them. Set a threshold on the absolute difference to determine whether the difference is sufficient to represent an actual change in the scene (as opposed to sensor noise etc. if the images are from a camera). Then store the result in a third image: 0 for no difference, 255 for a difference. If done right this should be REALLY fast; however, in C# you must use pointers to get decent performance. Here's an example of how to do this (note: code not tested!!):
/// <summary>
/// Computes the difference between two images and stores the result in a third image.
/// Input images must be of the same dimensions and colour depth.
/// </summary>
/// <param name="imageA">first image</param>
/// <param name="imageB">second image</param>
/// <param name="imageDiff">output: 0 if same, 255 if different</param>
/// <param name="width">width of images</param>
/// <param name="height">height of images</param>
/// <param name="channels">number of colour channels for the input images</param>
/// <param name="threshold">minimum absolute difference considered a change</param>
unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff, int width, int height, int channels, int threshold)
{
    int ch = channels;
    fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
    {
        if (ch > 1) // this is a colour image (assuming for RGB ch == 3 and RGBA == 4)
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width * ch;
                byte* pB = piB + r * width * ch;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    // assuming three colour channels; if channels is larger, ignore the extra (as it's likely alpha)
                    int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                    int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                    if (Math.Abs(LA - LB) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
        else // single greyscale channel
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width;
                byte* pB = piB + r * width;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    if (Math.Abs(pA[c] - pB[c]) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
    }
}
2) Not sure what you mean by 'area' here. There are several solutions depending on what you mean, from simplest to hardest:
a) Colour each difference pixel red in your output.
b) Assuming you only have one area of difference (unlikely), compute the bounding box of all 255 pixels in your output image. This can be done using a simple max/min for both the x and y positions over all 255 pixels. It's a single pass through the image and should be very fast.
c) If you have lots of different areas that change, compute the 'connected components': collections of pixels that are connected to each other. Of course this only works on a binary image (i.e. on or off, or 0 and 255 as in our case). You can implement this in C# - I have done it before, but I won't do it all for you here; it's a bit involved, and the algorithms are out there (again, OpenCV, or google 'connected components'). A rough sketch follows below.
Once you have a list of connected components, draw a box around each. Done.
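Here's a rough BFS-style sketch of connected-component labelling on the binary difference image, producing one bounding box per component (Rectangle is System.Drawing.Rectangle; this is illustrative, not tuned code):

List<Rectangle> FindComponentBoxes(byte[] diff, int width, int height)
{
    var boxes = new List<Rectangle>();
    var visited = new bool[width * height];
    var stack = new Stack<int>();
    for (int start = 0; start < diff.Length; start++)
    {
        if (diff[start] != 255 || visited[start]) continue;

        // Flood-fill one component, tracking its bounding box as we go.
        int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
        visited[start] = true;
        stack.Push(start);
        while (stack.Count > 0)
        {
            int p = stack.Pop();
            int x = p % width, y = p / width;
            minX = Math.Min(minX, x); maxX = Math.Max(maxX, x);
            minY = Math.Min(minY, y); maxY = Math.Max(maxY, y);

            // Visit the 4-connected neighbours that lie inside the image.
            if (x > 0) Visit(p - 1);
            if (x < width - 1) Visit(p + 1);
            if (y > 0) Visit(p - width);
            if (y < height - 1) Visit(p + width);
        }
        boxes.Add(new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1));

        void Visit(int q)
        {
            if (!visited[q] && diff[q] == 255) { visited[q] = true; stack.Push(q); }
        }
    }
    return boxes;
}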
You're pretty much going about it the right way. Step 3 shouldn't be causing a StackOverflowException if it's implemented correctly, so I'd take a closer look at that method.
What's most likely happening is that your recursive check of each member of PixelDifference is running infinitely. Make sure you keep track of which Pixels have been checked. Once you check a Pixel it no longer needs to be considered when checking neighbouring Pixels. Before checking any neighbouring pixel make sure it hasn't already been checked itself.
As an alternative to keeping track of which Pixels have been checked you can remove an item from PixelDifference once it has been checked. Of course, this may require a change in the way you implement your algorithm since removing an element from a List while checking it can bring a whole new set of issues.
There's a much simpler way of finding the difference of two images.
So if you have two images
Image<Gray, Byte> A;
Image<Gray, Byte> B;
You can get their differences fast by
A - B
Of course, images don't store negative values, so to get the differences in cases where pixels in image B are greater than in image A:
B - A
Combining these together
(A - B) + (B - A)
This is ok, but we can do even better.
This can be evaluated using Fourier transforms.
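// Assumption: DFTA, DFTB, AB and BA are preallocated Image<Gray, Single>
// buffers of the appropriate size for the DFT results.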
CvInvoke.cvDFT(A.Convert<Gray, Single>().Ptr, DFTA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(B.Convert<Gray, Single>().Ptr, DFTB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT((DFTB - DFTA).Convert<Gray, Single>().Ptr, AB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
CvInvoke.cvDFT((DFTA - DFTB).Ptr, BA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
I find that the results from this method are much better.
You can make a binary image out of this, ie: threshold the image so pixels with no change have 0 while pixels that have changes store 255.
Now as far as the second part of the problem goes, I suppose there's a simple crude solution:
Partition the image into rectangular regions. Perhaps there's no need to go as far as using quad trees. Say, an 8x8 grid... (For different results, you can experiment with different grid sizes).
Then use a convex hull function within these regions. The convex hulls can be turned into rectangles by finding the min and max x and y coordinates of their vertices (a crude sketch follows below).
Should be fast and simple.
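A crude sketch of that grid idea: since only the min/max coordinates are needed for the rectangles, the bounding box per cell can be computed directly, skipping the hull (names and the 8x8 default are illustrative):

IEnumerable<Rectangle> GridBoxes(byte[] diff, int width, int height, int grid = 8)
{
    int cellW = width / grid, cellH = height / grid;
    for (int gy = 0; gy < grid; gy++)
    for (int gx = 0; gx < grid; gx++)
    {
        // Bound the changed (255) pixels within this grid cell.
        int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
        for (int y = gy * cellH; y < (gy + 1) * cellH; y++)
        for (int x = gx * cellW; x < (gx + 1) * cellW; x++)
        {
            if (diff[y * width + x] != 255) continue;
            minX = Math.Min(minX, x); maxX = Math.Max(maxX, x);
            minY = Math.Min(minY, y); maxY = Math.Max(maxY, y);
        }
        if (maxX >= 0)
            yield return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}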
I load multiple meshes from .x files into different mesh variables.
Now I would like to calculate the bounding sphere across all the meshes I have loaded (and which are being displayed).
Please guide me on how this could be achieved.
Can VertexBuffers be appended together into one variable, and the bounding sphere computed using that? (If yes, how are the VertexBuffers added together?)
Otherwise, what alternative would you suggest?
Thanks
It's surprisingly easy to do this:
You need to, firstly, average all your vertices. This gives you the center position.
This is done as follows in C++ (Sorry my C# is pretty rusty but it should give ya an idea):
D3DXVECTOR3 avgPos( 0.0f, 0.0f, 0.0f );
const float rcpNum = 1.0f / (float)numVerts; // Do this here, as divides are far more expensive than multiplies.
int count = 0;
while( count < numVerts )
{
    // Instead of adding everything up and then dividing by the number (which could lead
    // to overflows) I'll divide by the number as I go along. The result is the same.
    avgPos.x += vert[count].pos.x * rcpNum;
    avgPos.y += vert[count].pos.y * rcpNum;
    avgPos.z += vert[count].pos.z * rcpNum;
    count++;
}
Now you need to go through every vert and work out which vert is the furthest away from the center point.
Something like this would work (in C++):
float maxSqDist = 0.0f;
int count = 0;
while( count < numVerts )
{
    D3DXVECTOR3 diff = avgPos - vert[count].pos;
    // Note: we may as well use the squared length, as sqrt is very expensive and the
    // maximum squared length will ALSO give the maximum length, so we only need to
    // do one sqrt at the end this way :)
    const float sqDist = D3DXVec3LengthSq( &diff );
    if ( sqDist > maxSqDist )
    {
        maxSqDist = sqDist;
    }
    count++;
}
const float radius = sqrtf( maxSqDist );
And you now have your center position (avgPos) and your radius (radius) and, thus, all the info you need to define a bounding sphere.
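Since the question is about C#, here is a rough C# equivalent of both loops (using System.Numerics.Vector3 as a stand-in for whatever vector type your framework provides, and verts as the merged vertex positions of all meshes):

// Average all vertices to get the center position.
Vector3 center = Vector3.Zero;
float rcpNum = 1.0f / verts.Length; // multiply instead of dividing per vertex
foreach (Vector3 v in verts)
    center += v * rcpNum;

// Find the squared distance of the farthest vertex; take one sqrt at the end.
float maxSqDist = 0.0f;
foreach (Vector3 v in verts)
    maxSqDist = Math.Max(maxSqDist, (v - center).LengthSquared());

float radius = (float)Math.Sqrt(maxSqDist);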
I have an idea: I would determine the center of every single mesh object, and then determine the center of the collection of mesh objects by using the aforementioned information ...