Angle between two 2d vectors, diff between two methods? - c#

I've got this code snippet, and I'm wondering why the results of the first method differ from the results of the second method, given the same input?
public double AngleBetween_1(vector a, vector b) {
    var dotProd = a.Dot(b);
    var lenProd = a.Len * b.Len;
    var divOperation = dotProd / lenProd;
    return Math.Acos(divOperation) * (180.0 / Math.PI);
}

public double AngleBetween_2(vector a, vector b) {
    var dotProd = a.Dot(b);
    var lenProd = a.Len * b.Len;
    var divOperation = dotProd / lenProd;
    return (1 / Math.Cos(divOperation)) * (180.0 / Math.PI);
}

It's because the first method is correct, while the second method is incorrect.
You may notice that the arccosine function is sometimes written "acos" and sometimes written "cos⁻¹". This is a quirk of mathematical notation: "cos⁻¹" is really the arccosine and NOT the reciprocal of the cosine (which is the secant).
However, if you ever see "cos²", that's the square of the cosine, and "cos³" is the cube of the cosine. Trigonometric notation is inconsistent this way: for most operators a superscript indicates repeated application (which is why the −1 superscript means the inverse function), but positive superscripts on trig functions mean powers of the result instead.

Math.Acos(divOperation) isn't equivalent to 1/Math.Cos(divOperation). arccos is the inverse function of cos, not the multiplicative inverse.

Probably because acos(x) ≠ 1/cos(x).
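A quick sanity check makes the difference obvious (a minimal sketch; x = 0.5 is an arbitrary example value):

double x = 0.5;
// Inverse function: the angle whose cosine is 0.5, i.e. 60 degrees
Console.WriteLine(Math.Acos(x) * (180.0 / Math.PI));       // 60
// Reciprocal (the secant of 0.5 radians): an unrelated number
Console.WriteLine((1 / Math.Cos(x)) * (180.0 / Math.PI));  // ~65.29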


How Can Order of Operations Seriously Change Code Speed?

I watched a video about code optimization from Tarodev. At the end of the video I was shocked, because it shows that the order of operations can seriously change the speed of code. I tried it myself, but I still don't understand how it's possible: how can Float x Float x Vector be 2 times faster than Float x Vector x Float or Vector x Float x Float?
I installed Tarodev's project from GitHub and tried it myself. The result was the same as in the video.
EDIT:
Thanks for all these great answers, guys. After your answers I understood the reason and felt dumb :) Also, I'm adding the video, repository and example code at Orhtej2's suggestion.
First of all,
someVector3 * someFloat
and
someFloat * someVector3
both basically return
new Vector3(someVector3.x * someFloat, someVector3.y * someFloat, someVector3.z * someFloat)
so basically 3 float * float operations + constructor call of Vector3
The order then matters, because your operations are resolved left to right, like this:
In the first case you do
(float * float) * Vector3
so in steps:
1. (float * float) => single float * operation => result: float
2. (float * Vector3) => 3 float * operations => result: Vector3
in the second you do
(float * Vector3) * float
so in steps:
1. (float * Vector3) => 3 float * operations => result: Vector3
2. (Vector3 * float) => 3 float * operations => result: Vector3
and in the third, equivalently, you do
(Vector3 * float) * float
so in steps:
1. (Vector3 * float) => 3 float * operations => result: Vector3
2. (Vector3 * float) => 3 float * operations => result: Vector3
In the first case you have only 4 * operations while in the other two cases you have 6.
Plus, in the second and third cases you are also instantiating an additional Vector3 (the result of the first operation), which costs extra performance and memory.
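If you want to reproduce this outside Unity, here is a minimal sketch using a hand-rolled vector struct and Stopwatch (the struct name and iteration count are arbitrary choices; a harness like BenchmarkDotNet would give cleaner numbers):

using System;
using System.Diagnostics;

struct Vec3
{
    public float x, y, z;
    public Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    // 3 float multiplies plus a constructor call, mirroring UnityEngine.Vector3
    public static Vec3 operator *(Vec3 v, float f) => new Vec3(v.x * f, v.y * f, v.z * f);
    public static Vec3 operator *(float f, Vec3 v) => v * f;
}

class OrderBenchmark
{
    static void Main()
    {
        var v = new Vec3(1, 2, 3);
        float a = 1.0001f, b = 0.9999f;
        const int N = 100_000_000;
        Vec3 r = default;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) r = a * b * v;   // (float * float) * Vec3: 4 multiplies
        Console.WriteLine($"float*float*vec: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) r = a * v * b;   // (float * Vec3) * float: 6 multiplies
        Console.WriteLine($"float*vec*float: {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(r.x); // keep r alive so the loops aren't optimized away
    }
}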
To multiply a float by a Vector3, three multiplies are required. To multiply a float by a float, clearly just one multiply is required.
When you do:
result = a * b * c;
That means:
result = (a * b) * c;
or more explicitly:
var tmp = a * b;
result = tmp * c;
Suppose a and b are floats and c is a vector. Then a * b performs one multiply (float by float) and results in a float, then the second operation performs three multiplies (float by Vector3). That's four multiplies in total.
Now suppose a is a vector and b and c are floats. Then a * b performs three multiplies (Vector3 by float) and results in a Vector3, then the second operation again performs three multiplies (Vector3 by float) and results in a Vector3. That's six multiplies in total.
Now, it's possible that the compiler might be able to figure out that these are equivalent and do the cheaper four multiplies instead. But often it's not easy for it to see that. And often it's not quite equivalent. For example, rounding and overflow can depend very much on which way round you do the operations, and the compiler won't mess with them because it could introduce inaccuracies, so it trusts you and simply does exactly what you tell it.
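The "not quite equivalent" part is easy to demonstrate: floating-point multiplication isn't associative, so the compiler can't legally regroup (a * b) * c into a * (b * c) for you. A tiny example (values chosen to overflow float):

float a = 1e30f, b = 1e30f, c = 1e-30f;
Console.WriteLine((a * b) * c); // Infinity: a * b overflows before c is applied
Console.WriteLine(a * (b * c)); // 1E+30:   b * c == 1, so nothing overflows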
The answer is pretty easy; follow my example:
A Vector is an object made of more than one value, e.g. Vector3 v = new Vector3(1, 2, 3).
If you multiply a number by a Vector3, you are doing 3 operations.
float n = 3f;
Vector3 v = new Vector3(1, 2, 3);
n * v results in Vector3(1*n, 2*n, 3*n), so 3 calculations for one multiplication.
Following your example, in the first case you are doing n * n * v: 1 calculation for n * n plus 3 calculations for n^2 * v, for a total of 4 calculations.
In the second case you are doing n * v * n, which means 3 calculations for n * v plus another 3 calculations for v * n, for a total of 6 calculations.
Vector x Scalar1 x Scalar2 => (Vector x Scalar1) x Scalar2 =>
VectorTemp x Scalar2 = EndResult
So to summarize, we had to perform a scalar-by-vector operation twice, meaning two vector-length multiplications.
In contrast:
Scalar1 x Scalar2 x Vector => (Scalar1 x Scalar2) x Vector =>
Scalar3 x Vector = EndResult
Here only one Vector x Scalar operation is required; Scalar1 by Scalar2 is a single multiplication.
So I guess the key idea here is: when the operations are commutative, start with the cheapest ones.

Why does my angle-between-two-vectors function return NaN even though I follow the formula?

I'm making a function that calculates the angle between 2 given vectors for my Unity game, using the dot product formula:
a · b = |a| · |b| · cos(θ)
so I figured that the angle equals:
θ = acos((a · b) / (|a| · |b|))
Anyway here's my code:
float rotateAngle(Vector2 a, Vector2 b)
{
    return Mathf.Acos((a.x * b.x + a.y * b.y)
        / (Mathf.Sqrt(a.x * a.x + a.y * a.y) * Mathf.Sqrt(b.x * b.x + b.y * b.y)))
        * (180 / Mathf.PI);
}
But when I played it, the console showed NaN. I've tried and reviewed the code and the formula but returned empty-handed.
Can someone help me? Thank you in advance!
float.NaN is the result of mathematical operations that are undefined (for real numbers), such as 0 / 0 (note from the docs that x / 0 where x != 0 rather returns positive or negative infinity) or the square root of a negative value. And as soon as one operand in an operation is already NaN, the entire operation again returns NaN.
The second case (square root of a negative value) cannot happen here, since you are using squared values, so most probably your vectors have a magnitude of 0.
If you look at the Vector2 source code you will find Unity's implementation of Vector2.Angle and Vector2.SignedAngle (which you should rather use, by the way, as they are tested and way more efficient).
public static float Angle(Vector2 from, Vector2 to)
{
    // sqrt(a) * sqrt(b) = sqrt(a * b) -- valid for real numbers
    float denominator = (float)Math.Sqrt(from.sqrMagnitude * to.sqrMagnitude);
    if (denominator < kEpsilonNormalSqrt)
        return 0F;

    float dot = Mathf.Clamp(Dot(from, to) / denominator, -1F, 1F);
    return (float)Math.Acos(dot) * Mathf.Rad2Deg;
}

// Returns the signed angle in degrees between /from/ and /to/. Always returns the smallest possible angle
public static float SignedAngle(Vector2 from, Vector2 to)
{
    float unsigned_angle = Angle(from, to);
    float sign = Mathf.Sign(from.x * to.y - from.y * to.x);
    return unsigned_angle * sign;
}
There you will find that the first thing they check is
float denominator = (float)Math.Sqrt(from.sqrMagnitude * to.sqrMagnitude);
if (denominator < kEpsilonNormalSqrt)
    return 0F;
which ensures that both given vectors have a "big enough" magnitude, in particular one that is not 0 ;)
Long story short: don't reinvent the wheel; use the already built-in Vector2.Angle or Vector2.SignedAngle.
NaNs are typically the result of invalid mathematical operations on floating point numbers. A common source is division by zero, so my guess would be that one of the vectors is (0, 0).
I would also recommend using the built-in functions for computing the normalization, length/magnitude, dot product etc.; that will make the code much easier to read, and the compiler should be fairly good at optimizing that kind of code. If you need to do any additional optimization, only do so after you have done some measurements.
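If you do want a hand-rolled version anyway, a guarded sketch along the lines of Unity's implementation could look like this (assumes UnityEngine; the epsilon is an arbitrary stand-in for Unity's internal kEpsilonNormalSqrt):

float SafeAngle(Vector2 a, Vector2 b)
{
    float denominator = Mathf.Sqrt(a.sqrMagnitude * b.sqrMagnitude);
    if (denominator < 1e-15f)
        return 0f; // one of the vectors is (near) zero: the angle is undefined

    // Clamp guards against Acos receiving e.g. 1.0000001 from rounding, another NaN source
    float dot = Mathf.Clamp(Vector2.Dot(a, b) / denominator, -1f, 1f);
    return Mathf.Acos(dot) * Mathf.Rad2Deg;
}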

C# Trigonometry (HELP!)

Ok so here's the situation...
I'm currently working on a project for my Math and Physics for Games class.
I finished coding my solution, and ran the xUnit tests that my teacher made for us.
90% of them fail.
I have a Calculator.cs file that contains all of the methods I have coded. Each trigonometric method returns a Tuple, and that Tuple's items are then used in an xUnit Assert.Equal(expectedResult, Math.Round(calculatorTuple.Item1, 4)) call...
For example, I have a method named Trig_Calculate_Adjacent_Hypotenuse that accepts two doubles as its parameters (Angle in degrees and Opposite).
The calculator finds that Adjacent is equal to 15.4235...
but my real-life calculations show me that it's 56.7128.
Therefore when the test runs, it does Assert.Equal(56.7128, 15.4235) and finds that these two answers are not equal. (obviously)
I looked over the code in my Calculator.cs file multiple times... and cannot for the life of me find the problem.
Here's my method so you can take a look at it:
public static Tuple<double, double> Trig_Calculate_Adjacent_Hypotenuse(double Angle, double Opposite)
{
    double Hypotenuse;
    double Adjacent;

    // SOH CAH TOA
    // Using TOA to find Adjacent
    // so Adjacent = Opposite / Tan(Angle)
    // so Adjacent = 10 / Tan(10)
    // which means Adjacent = 56.7128
    // However my calculator finds 15.4235 instead...
    Adjacent = Opposite / Math.Tan(Calculator.DegreesToRadians(Angle));

    // Using SOH to find Hypotenuse
    // so Hypotenuse = Opposite / Sin(Angle)
    // so Hypotenuse = 10 / Sin(10)
    // which means Hypotenuse = 57.5877
    // However my calculator finds something different... (unknown due to Adjacent's failure)
    Hypotenuse = Opposite / Math.Sin(Calculator.DegreesToRadians(Angle));

    return new Tuple<double, double>(Adjacent, Hypotenuse);
}
And here's the test method:
[Theory]
// Student Data
[InlineData(10, 10, 56.7128, 57.5877)]
public void TestCalculateAdjacentHypotenuse(double Angle, double Opposite, double Adjacent, double Hypotenuse)
{
    // Act - performing the action
    Tuple<double, double> results = Calculator.Trig_Calculate_Adjacent_Hypotenuse(Angle, Opposite);

    // Assert - did we get back the correct answer
    Assert.Equal(Adjacent, Math.Round(results.Item1, 4));
    Assert.Equal(Hypotenuse, Math.Round(results.Item2, 4));
}
I hope you guys can help me find out what the problem is! :)
Thank you!
Math.Tan(Angle) works with radians, not with degrees (Math.Sin() and Math.Cos() also expect radians).
Try Math.Tan(Angle * Math.PI / 180);
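Worth noting: the posted method already routes through Calculator.DegreesToRadians, and 10 / Tan(10 radians) ≈ 15.4235, which is exactly the "wrong" value reported, while 10 / Tan(10 degrees) ≈ 56.7128. So the conversion helper itself is the likely culprit. A correct version is one line (a sketch; the real Calculator class isn't shown in the question):

public static double DegreesToRadians(double degrees)
{
    // 180 degrees == PI radians
    return degrees * Math.PI / 180.0;
}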

Cant seem to compute normal distribution

I have interpreted the formula from Wikipedia in C# code. I do get a nice normal curve, but is it rational to get values that exceed 1? Isn't it supposed to be a distribution function?
this is the C# implementation :
double up = Math.Exp(-Math.Pow(x, 2) / (2 * s * s));
double down = s * Math.Sqrt(2 * Math.PI);
return up / down;
I double-checked it several times and it seems fine to me, so what's wrong? My implementation or my understanding?
For example, if we define x = 0 and s = 0.1, this implementation returns 3.989...
A distribution function, a pdf, has the property that its values are >= 0 and the integral of the pdf over -inf to +inf must be 1. But the integrand, that is the pdf, can take any value >= 0, including values greater than 1.
In other words, there is no reason, a priori, to believe that a pdf value > 1 indicates a problem.
You can think about this for the normal curve by considering what reducing the variance means. Smaller variance values concentrate the probability mass in the centre. Given that the total mass is always one, as the mass concentrates in the centre, the peak value must increase. You can see that trend in the graph that you link to.
What you should do is compare the output of your code with known good implementations. For instance, Wolfram Alpha gives the same value as you quote: http://www.wolframalpha.com/input/?i=normal+distribution+pdf+mean%3D0+standard+deviation%3D0.1+x%3D0&x=6&y=7
Do a little more testing of this nature, captured in a unit test, and you will be able to rely on your code with confidence.
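For instance, a two-line check of the case quoted in the question (mean 0, s = 0.1, x = 0) reproduces the Wolfram Alpha value:

double s = 0.1, x = 0;
double pdf = Math.Exp(-x * x / (2 * s * s)) / (s * Math.Sqrt(2 * Math.PI));
Console.WriteLine(pdf); // ≈ 3.9894: greater than 1, and perfectly valid for a pdf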
Wouldn't you want something more like this?
public static double NormalDistribution(double value)
{
    return (1 / Math.Sqrt(2 * Math.PI)) * Math.Exp(-Math.Pow(value, 2) / 2);
}
Yes, it's totally OK; the distribution itself (the PDF) can be anything from 0 to +infinity. The thing that should be in the range [0..1] is the corresponding integral (e.g. the CDF).
You can convince yourself if you look at the case of a non-random value: if the value is not random at all and can have only one constant value, the distribution degenerates (standard error is zero, mean is the value) into the Dirac delta function: a peak of infinite height but of zero width; the integral (CDF) from -infinity to +infinity is still 1.
// If you have special functions implemented (e.g. Erf)

// outcome is in [0..inf) range
public static Double NormalPDF(Double value, Double mean, Double sigma)
{
    Double v = (value - mean) / sigma;
    return Math.Exp(-v * v / 2.0) / (sigma * Math.Sqrt(Math.PI * 2));
}

// outcome is in [0..1] range
public static Double NormalCDF(Double value, Double mean, Double sigma, Boolean isTwoTail)
{
    // TODO: you need an Erf implementation for this
    Double z = (value - mean) / (Math.Sqrt(2) * sigma);
    if (isTwoTail)
        return Erf(Math.Abs(z)); // P(|X - mean| <= |value - mean|)
    return 0.5 + Erf(z) / 2.0;   // one-tail CDF: 0.5 * (1 + erf(z))
}

Optimization of a distance calculation function

In my code I have to do a lot of distance calculations between pairs of lat/long values.
The code looks like this:
double result = Math.Acos(Math.Sin(lat2rad) * Math.Sin(lat1rad)
    + Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad));
(lat2rad, for example, is a latitude converted to radians.)
I have identified this function as the performance bottleneck of my application. Is there any way to improve this?
(I cannot use look-up tables since the coordinates are varying). I have also looked at this question where a lookup scheme like a grid is suggested, which might be a possibility.
Thanks for your time! ;-)
If your goal is to rank (compare) distances, then approximations (sin and cos table lookups) could drastically reduce the amount of computation required (implement a quick reject).
You only proceed with the actual trigonometric computation if the difference between the approximated distances (to be ranked or compared) falls below a certain threshold.
E.g. using lookup tables with 1000 samples (i.e. sin and cos sampled every 2*pi/1000), the lookup uncertainty is at most 0.006284. Propagating that uncertainty through the parameter to Acos, the cumulative uncertainty, which is also the threshold, will be at most 0.018731.
So, if evaluating Math.Sin(lat2rad) * Math.Sin(lat1rad)
    + Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad)
using the sin and cos lookup tables for two coordinate-set pairs (distances) yields a certain ranking (one distance appears greater than the other based on the approximation), and the modulus of the difference is greater than the threshold above, then the approximation is valid. Otherwise, proceed with the actual trigonometric calculation.
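A minimal sketch of such a lookup table (1000 samples, nearest-neighbour lookup; the resolution and the wrapping strategy are assumptions you would tune to your accuracy needs):

static class TrigTable
{
    const int N = 1000;
    static readonly double[] sin = new double[N];
    static readonly double[] cos = new double[N];

    static TrigTable()
    {
        for (int i = 0; i < N; i++)
        {
            double angle = 2 * Math.PI * i / N;
            sin[i] = Math.Sin(angle);
            cos[i] = Math.Cos(angle);
        }
    }

    static int Index(double radians)
    {
        // Wrap into [0, 2*pi) and round to the nearest sample
        double t = radians / (2 * Math.PI);
        t -= Math.Floor(t);
        return (int)(t * N + 0.5) % N;
    }

    public static double Sin(double radians) => sin[Index(radians)];
    public static double Cos(double radians) => cos[Index(radians)];
}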
Would the CORDIC algorithm work for you (in regards to speed/accuracy)?
Using inspiration from @Brann I think you can reduce the calculation a bit (warning: it's a long time since I did any of this and it will need to be verified). Some sort of lookup of precalculated values is probably the fastest, though.
You have (with A and B the two latitudes, and C the longitude difference):
1: ACOS( SIN A SIN B + COS A COS B COS C )
but 2: COS(A-B) = SIN A SIN B + COS A COS B
which is rewritten as 3: SIN A SIN B = COS(A-B) - COS A COS B
Replace SIN A SIN B in 1 and you have:
4: ACOS( COS(A-B) - COS A COS B + COS A COS B COS C )
You pre-calculate X = COS(A-B) and Y = COS A COS B and put the values into 4 to give:
ACOS( X - Y + Y COS C )
By the product-to-sum identity, Y = (COS(A-B) + COS(A+B)) / 2, so Y costs only one extra COS once X is known:
4 trig calculations instead of 6!
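Spelled out as code (a sketch to verify against the full formula; all angles in radians):

// cos(latDiff), cos(latSum), cos(lonDiff), Acos: 4 trig calls instead of 6
static double CentralAngle(double lat1, double lon1, double lat2, double lon2)
{
    double x = Math.Cos(lat1 - lat2);             // X = COS(A-B)
    double y = 0.5 * (x + Math.Cos(lat1 + lat2)); // Y = COS A COS B, via product-to-sum
    double c = Math.Cos(lon2 - lon1);             // COS C
    return Math.Acos(x - y + y * c);
}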
Change the way you store long/lat:
struct LongLat
{
    public float lon, lat; // "long" is a C# keyword, so call the field lon
    public float x, y, z;
}
When creating a long/lat, also compute the (x,y,z) 3D point that represents the equivalent position on a unit sphere centred at the origin.
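The conversion is the standard spherical-to-Cartesian one (a sketch, assuming the angles are already in radians):

// Unit-sphere position for a latitude/longitude pair given in radians
static (float x, float y, float z) ToUnitSphere(float lat, float lon)
{
    float cosLat = (float)Math.Cos(lat);
    return (cosLat * (float)Math.Cos(lon),
            cosLat * (float)Math.Sin(lon),
            (float)Math.Sin(lat));
}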
Now, to determine if point B is nearer to point A than point C, do the following:
// is B nearer to A than C? (a bigger dot product means a smaller angle, i.e. nearer)
bool IsNearer(LongLat A, LongLat B, LongLat C)
{
    return (A.x * B.x + A.y * B.y + A.z * B.z) > (A.x * C.x + A.y * C.y + A.z * C.z);
}
and to get the distance between two points:
float Distance(LongLat A, LongLat B)
{
    // radius is the size of the sphere your long/lats are mapped onto
    return radius * (float)Math.Acos(A.x * B.x + A.y * B.y + A.z * B.z);
}
You could remove the 'radius' term, effectively normalising the distances.
Switching to lookup tables for sin/cos/acos will be faster; there are a lot of C/C++ fixed-point libraries that also include those.
Here is code from someone else on memoization, which might work if the actual values used are clustered.
Here is an SO question on fixed point.
What is the bottleneck? Is it the sine/cosine function calls or the arccosine call?
If your sine/cosine calls are slow, you could use the following identity to avoid some of them (taking care with the sign, since the square root only gives the non-negative branch):
1 = sin(x)^2 + cos(x)^2
cos(x) = sqrt(1 - sin(x)^2)
But I like the mapping idea, so that you don't have to recompute values you've already computed. Although be careful, as the map could get very large very quickly.
How exact do you need the values to be?
If you round your values a bit, then you could store the result of all lookups and check whether they have been seen before each calculation.
Well, since lat and lon are guaranteed to be within a certain range, you could try using some form of lookup table for your Math.* method calls. Say, a Dictionary<double, double>.
I would argue that you may want to re-examine how you found that function to be the bottleneck (i.e. did you profile the application?).
The equation seems very lightweight to me and shouldn't cause any trouble.
Granted, I don't know your application, and you say you do a lot of these calculations.
Nevertheless, it is something to consider.
As someone else pointed out, are you sure this is your bottleneck?
I've done some performance testing of a similar application I'm building, where I call a simple method that returns the distance between two points using standard trig. 20,000 calls to it shove it right to the top of the profiling output, yet there's no way I can make the method itself faster... it's just the sheer number of calls.
In this case, I need to reduce the number of calls to it... it's not that the method itself is the bottleneck.
I use a different algorithm for calculating the distance between two lat/long positions; it could be lighter than yours, since it only does 1 Cos call and 1 Sqrt call.
public static double GetDistanceBetweenTwoPos(double lat1, double long1, double lat2, double long2)
{
    // 69.1 = approximate miles per degree of latitude; 57.3 ≈ degrees per radian
    double x = 69.1 * (lat1 - lat2);
    double y = 69.1 * (long1 - long2) * System.Math.Cos(lat2 / 57.3);

    // calculation base: miles
    double distance = System.Math.Sqrt(x * x + y * y);

    // distance returned in kilometres
    return distance * 1.609;
}
Someone has already mentioned memoization, and this is a bit similar: if you are comparing the same point to many other points, then it is better to precalculate parts of that equation.
Instead of:
double result = Math.Acos(Math.Sin(lat2rad) * Math.Sin(lat1rad)
    + Math.Cos(lat2rad) * Math.Cos(lat1rad) * Math.Cos(lon2rad - lon1rad));
have:
double result = Math.Acos(lat2rad.sin * lat1rad.sin
    + lat2rad.cos * lat1rad.cos * (lon2rad.cos * lon1rad.cos + lon1rad.sin * lon2rad.sin));
where each coordinate carries its precomputed sin and cos. And I think that's the same formula someone else has posted, because part of the equation disappears when you expand the brackets :)
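Concretely, you could wrap each coordinate in a small struct that caches its sine and cosine once (a sketch; the field names match the pseudo-code above):

struct CachedAngle
{
    public readonly double sin, cos;
    public CachedAngle(double radians)
    {
        sin = Math.Sin(radians);
        cos = Math.Cos(radians);
    }
}

// Only one trig call (Acos) per pair once the four angles are cached
static double CentralAngle(CachedAngle lat1, CachedAngle lon1,
                           CachedAngle lat2, CachedAngle lon2)
{
    // cos(lon2 - lon1) expanded via the angle-difference identity
    double cosDLon = lon2.cos * lon1.cos + lon2.sin * lon1.sin;
    return Math.Acos(lat2.sin * lat1.sin + lat2.cos * lat1.cos * cosDLon);
}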
