I am baffled by this one.
So I have a console application doing a lot of calculations (trust me, thousands of them). In one method, I have some parameters that need calculating in different situations. For one of them, the mathematical expression is basically the same, with only one difference in a term. Here is the code snippet, along with all the lines between the two formulas in question, Nq1 and Nq2 (the first and last formulas of the snippet):
//drained conditions
Nq1 = Math.Round((Math.Pow(Math.E, Math.PI * Math.Tan(studiu.Fi * Constants.ConversionToDeg)) * Math.Pow((Math.Tan(45 + studiu.Fi / 2.00) * Constants.ConversionToDeg), 2)), 2);
//Combination 2
studiu.Fi = FiAfectat;
//drained conditions
Nq2 = Math.Round((Math.Pow(Math.E, Math.PI * Math.Tan(studiu.Fi * Constants.ConversionToDeg)) * Math.Pow((Math.Tan(45 + studiu.Fi / 2.00) * Constants.ConversionToDeg), 2)), 2);
The first formula returns 18.04 but the second one returns 0.01. How is this possible? Only the studiu.Fi term differs, and not by that much (32 in the first case, 27 in the second).
How can Nq1 be 18.04 and Nq2 be 0.01? Am I missing something here?
Tan(x) is a periodic function that changes drastically for a small change in x near its asymptotes. Since the two formulas differ only in a term passed through Tan, this is likely your problem.
Also, Math.Tan expects its argument in radians, not degrees. Note that in the second Pow call, the conversion factor is applied to the result of Tan rather than to its argument: Math.Tan(45 + studiu.Fi / 2.00) * Constants.ConversionToDeg takes the tangent of the raw value 45 + Fi/2 (interpreted as radians) and only then scales it. The conversion belongs inside the Tan call, and it should be a degrees-to-radians conversion.
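To illustrate, here is a minimal sketch of the formula with the conversion moved inside both Tan calls. It assumes the constant really is a degrees-to-radians factor (the name DegToRad below is hypothetical; the question's Constants.ConversionToDeg presumably plays this role), and it uses Math.Exp(x) in place of Math.Pow(Math.E, x):

```csharp
using System;

// Hypothetical degrees-to-radians factor standing in for the
// question's conversion constant.
const double DegToRad = Math.PI / 180.0;

// Nq = e^(pi * tan(phi)) * tan^2(45 + phi/2), phi given in degrees
// and converted to radians INSIDE each Tan call.
double Nq(double phiDegrees) =>
    Math.Round(
        Math.Exp(Math.PI * Math.Tan(phiDegrees * DegToRad))
        * Math.Pow(Math.Tan((45.0 + phiDegrees / 2.0) * DegToRad), 2),
        2);

Console.WriteLine(Nq(32.0)); // 23.18 for phi = 32 degrees
Console.WriteLine(Nq(27.0)); // 13.20 for phi = 27 degrees
```

With the conversion placed correctly, the two results differ smoothly with phi, instead of jumping from 18.04 to 0.01.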
Related
I've got this Excel equation and I'm struggling to convert it into C#.
The "to the power of" and "log" parts are tripping me up.
The Excel equation is as follows:
LOG((10^(PreSkillRating/400)/((-ChangeInRating/KFactor)+1)-10^(PreSkillRating/400)))*400/LOG(10)
So far I have this:
Math.Log((Math.Pow(PreSkillRating / 400, 10)) / (((ChangeInRating * -1) / KFactor) + 1) - Math.Pow((PreSkillRating / 400), 10)) * 400 / Math.Log(10)
I'm also aware that I will have to check for 0 when dividing, to stop the "Attempted to divide by zero" error.
For example, when I use the following values for each of the variables, I get 1879.588002 as the answer in Excel but Infinity in C#.
PreSkillRating = 1600
ChangeInRating = 50
KFactor = 60
What am I doing wrong?
Based on earlier comments and my first answer, let's summarize the problems:
missing cast for double division
wrong order of arguments for Pow (it is Math.Pow(base, exponent))
wrong method: Math.Log(x) is the natural logarithm; use Math.Log(x, 10) or Math.Log10(x)
Try following implementation:
Math.Log10((Math.Pow(10, (double)PreSkillRating / 400)) / (((ChangeInRating * -1.0) / KFactor) + 1) - Math.Pow(10, (double)PreSkillRating / 400)) * 400 / Math.Log10(10)
Are your variables int values?
If so, you have to add a cast (see "Division in C# to get exact value").
Without it, the divisions are performed as integer divisions, which truncate the result at each step separately.
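Putting the three fixes together, here is a sketch with the question's sample values (the helper name PostRating is hypothetical; it just mirrors the Excel formula):

```csharp
using System;

// Mirrors the Excel formula:
// LOG(10^(R/400) / (-dR/K + 1) - 10^(R/400)) * 400 / LOG(10)
double PostRating(int preSkillRating, int changeInRating, int kFactor)
{
    double tenPow = Math.Pow(10, preSkillRating / 400.0); // 400.0 forces double division
    double denom = (-(double)changeInRating / kFactor) + 1; // cast avoids int division
    return Math.Log10(tenPow / denom - tenPow) * 400;       // LOG(x)/LOG(10) == Log10(x)
}

Console.WriteLine(PostRating(1600, 50, 60)); // ~1879.588, matching Excel
```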
I am trying to work out something for CellSize in Unity.
This is for the GridLayoutGroup component. I found something that, when done with a calculator, works perfectly fine: Screen.width / 1280, Screen.height / 720.
Since I am using 720p as the base resolution and everything looks fine on there I use it as a base. 100x100 looks great for cell size on 720p.
For example, if the player's resolution is 1280x1024,
it should come out to (100, 142). However, it always comes out (1.0, 1.0).
If the resolution is tiny... such as 320x200 it comes out (0.0, 0.0).
Here is my code:
SlotPanel.GetComponent<GridLayoutGroup>().cellSize = new Vector2((Cam.pixelWidth / 1280) * 100, (Cam.pixelHeight / 720) * 100);
Cam is the Camera that the player uses.
Another code that I tried was:
SlotPanel.GetComponent<GridLayoutGroup>().cellSize = new Vector2((Screen.width / 1280) * 100, (Screen.height / 720) * 100);
Both result in the same issue. At this point I am at a loss for words for how annoyed I am. I don't understand why the math is right but it does not work right.
Cam.pixelWidth / 1280 etc. is evaluated in integer arithmetic; any remainder is discarded.
Rewrite to 100.0 * Cam.pixelWidth / 1280 to ensure the evaluation takes place in floating point (due to promotion of the two integral arguments). There are other ways, but I find this one to be clearest since changing the first coefficient tells a reader of your code from the get-go what you want to do.
(If you require the type of the expression to be a single precision floating point type then use 100f in place of 100.0).
This is one of those cases where using excess parentheses is actually harmful.
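A minimal sketch of the fix, with plain floats standing in for Unity's Vector2 so the arithmetic is visible (the helper names are hypothetical):

```csharp
using System;

// Leading with 100f forces the whole expression into float arithmetic,
// so e.g. 1024 / 720 is no longer truncated to 1.
float CellWidth(int pixelWidth) => 100f * pixelWidth / 1280;
float CellHeight(int pixelHeight) => 100f * pixelHeight / 720;

Console.WriteLine(CellWidth(1280));  // 100
Console.WriteLine(CellHeight(1024)); // ~142.2, not 100 * 1
```

In the original snippet this becomes, for example, `new Vector2(100f * Cam.pixelWidth / 1280, 100f * Cam.pixelHeight / 720)`.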
This question already has answers here:
Why is floating point arithmetic in C# imprecise?
(3 answers)
Closed 7 years ago.
I just did a test with LINQPad:
Could you explain why/how the Ceiling method reacts like this? Notice the 123.12 in the middle.
Math.Ceiling(123.121 * 100) / 100 'display 123.13
Math.Ceiling(123.1200000000001 * 100) / 100 'display 123.13
Math.Ceiling(123.12000000000001 * 100) / 100 'display 123.12
Math.Ceiling(123.12000000000002 * 100) / 100 'display 123.13
I did the test in VB.NET but it should be the same in C#.
This is floating point rounding. C# parses 123.12000000000001 and 123.12 as having the same value. 123.12000000000002 is parsed as the next available double.
var bytes = BitConverter.ToString(BitConverter.GetBytes(123.12));
// outputs 48-E1-7A-14-AE-C7-5E-40
var bytes1 = BitConverter.ToString(BitConverter.GetBytes(123.12000000000001));
// outputs 48-E1-7A-14-AE-C7-5E-40
var bytes2 = BitConverter.ToString(BitConverter.GetBytes(123.12000000000002));
// outputs 49-E1-7A-14-AE-C7-5E-40
Ceiling returns the number passed to it if it is a whole number, or else the next higher whole number. So 5.0 stays 5.0 but 5.00001 becomes 6.0.
So, of the examples, the following are obvious:
Math.Ceiling(123.121 * 100) / 100 // Obtain 12312.1, next highest is 12313.0, then divide by 100 is 123.13
Math.Ceiling(123.1200000000001 * 100) / 100 // Likewise
Math.Ceiling(123.12000000000002 * 100) / 100 // Likewise
The more confusing one is:
Math.Ceiling(123.12000000000001 * 100) / 100 //display 123.12
However, let's take a look at:
123.12000000000001 * 100 - 12312.0 // returns 0
Compared to:
123.1200000000001 * 100 - 12312.0 // returns 1.09139364212751E-11
123.12000000000002 * 100 - 12312.0 // returns 1.81898940354586E-12
The latter two multiplications have results slightly higher than 12312.0. While (123.12000000000002 * 100).ToString() returns "12312", the mathematical product 12312.000000000002 is not what actually gets computed: the nearest representable double to 123.12000000000002 is 123.1200000000000181898940354586, so that is the value worked on.
If you are used to only doing decimal arithmetic it may seem strange that 123.12000000000002 is "rounded" to 123.1200000000000181898940354586, but remember that these numbers are stored in terms of binary values, and rounding depends on the base you are working in.
So while the string representation doesn't indicate it, it is indeed slightly higher than 12312 and so its ceiling is 12313.
Meanwhile, 123.12000000000001 * 100 is mathematically 12312.000000000001, but the nearest representable double to 123.12000000000001 is the very same double as 123.12. That is what is used in the multiplication, the product is exactly 12312, and the subsequent call to Ceiling() returns 12312.
This is due to floating point rounding rather than Math.Ceiling per se: floating point values cannot represent all decimal values exactly.
Your example is a little contrived anyway, because if you type 123.12000000000001 in Visual Studio it changes it to 123.12, since that literal cannot be distinguished from 123.12 as a double.
Read up on this here: What Every Computer Scientist Should Know About Floating-Point Arithmetic (btw this is not specific to .NET)
To fix your issue you can use a decimal value instead of a double. Math.Ceiling has an overload which accepts a decimal (all of these display 123.13):
Debug.WriteLine(Math.Ceiling(123.121D * 100) / 100)
Debug.WriteLine(Math.Ceiling(123.1200000000001D * 100) / 100)
Debug.WriteLine(Math.Ceiling(123.12000000000001D * 100) / 100)
Debug.WriteLine(Math.Ceiling(123.12000000000002D * 100) / 100)
Whether this fix is appropriate of course depends on what level of accuracy you require.
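The same comparison in C# (the m suffix makes the literals decimal; a sketch, not a universal recommendation):

```csharp
using System;

// double: 123.12000000000001 parses to the same bits as 123.12,
// so the product is exactly 12312 and the ceiling "unexpectedly"
// drops back to 123.12.
double viaDouble = Math.Ceiling(123.12000000000001 * 100) / 100;
Console.WriteLine(viaDouble); // 123.12

// decimal: the base-10 representation keeps the extra digit,
// so this example rounds up like the other three.
decimal viaDecimal = Math.Ceiling(123.12000000000001m * 100) / 100;
Console.WriteLine(viaDecimal); // 123.13
```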
The Ceiling method returns the smallest whole number greater than or equal to its argument,
so Ceiling(123.01) = 124
and Ceiling(123.0) = 123.
I wanted to simplify my program a little bit, so I am testing Math.NET. For example, I have a 2x2 matrix:
det(A) = a * d - b * c = 71 * 137 - 130 * 107 = -4183
Can somebody tell me what is going on here? On the second screenshot you can see that the Math.NET Determinant() function returns -4183.00000000000000018. How is that correct for the given matrix? Where does this result come from? If it is a double, it should be -4183.0.
Is it some kind of algorithm that counts "well enough" but "much faster" for large data?
Screenshot 1
Screenshot 2
Second question, out of curiosity, what would be the quickest way to invert matrix modulo 256, using EXACTLY this method:
A^(-1) = 1/det(A) * (A^D)^T
where by (A^D)^T I mean the adjugate, i.e. the transpose of the matrix of cofactors (I believe that is how it is called in English)
I wrote a method doing that, which works for Matrices or Multidimensional arrays, but I am curious what is the proper way of doing it in math.net, but using the equation I mentioned.
And as always, I truly appreciate every answer guys.
(btw yes, I am aware that I am doing too many casts and that vars are declared many times, but try to ignore that, this is simply a testing field)
To make my 1st question more clear
(You can click '-' all you want, I don't care :))
@Szab
Thank you for the answer. I know that there is such behaviour for decimal numbers, but to be more precise: I would like to know why the result -4183.00000000000000018 is different from:
This Result
There are no decimal places here; C# shows very clearly that
det(A) = a * d - b * c = 71.0 * 137.0 - 130.0 * 107.0 = -4183.0
for a, b, c, d and det being all doubles.
/edit
Question answered, thank you all very much.
I think the first problem occurs just because the algorithm Math.NET uses works with values of type double. As you may know, every number is represented in a computer's memory as a binary value, and not every decimal value can be represented in binary with 100% accuracy (just like you can't represent 1/3 exactly in decimal). It's not the algorithm's fault. Note that in your manual check, a, b, c, d and their products are all integers that doubles represent exactly, which is why a * d - b * c gives exactly -4183.0. Math.NET instead computes the determinant via an LU factorization, and the divisions performed during that factorization introduce the tiny rounding errors you see.
Another example of this behaviour:
double a = 86.24;
double b = 86.25;
double c = b - a; // Should be 0.01, but is equal to 0.010000000000005116
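For the second question, here is a sketch of A^(-1) = det(A)^(-1) * adj(A) (mod 256) for a 2x2 matrix, using integer arithmetic throughout so no floating-point error can creep in. All names are hypothetical, and the entries are arranged as a = 71, b = 130, c = 107, d = 137 to match det = 71 * 137 - 130 * 107 (an assumption about the matrix layout):

```csharp
using System;

// Least non-negative residue (C#'s % can return negatives).
int Mod(int x, int m) => ((x % m) + m) % m;

// Modular inverse by brute force; fine for m = 256. Requires gcd(a, m) == 1,
// i.e. mod 256 the determinant must be odd.
int ModInverse(int a, int m)
{
    a = Mod(a, m);
    for (int x = 1; x < m; x++)
        if (a * x % m == 1) return x;
    throw new ArgumentException("not invertible modulo " + m);
}

// A^(-1) = det(A)^(-1) * adj(A) (mod m), where adj([[a,b],[c,d]]) = [[d,-b],[-c,a]]
int[,] Invert2x2Mod(int a, int b, int c, int d, int m)
{
    int detInv = ModInverse(a * d - b * c, m);
    return new[,]
    {
        { Mod(detInv * d, m),  Mod(detInv * -b, m) },
        { Mod(detInv * -c, m), Mod(detInv * a, m)  },
    };
}

var inv = Invert2x2Mod(71, 130, 107, 137, 256); // det = -4183, odd, so invertible
Console.WriteLine($"{inv[0, 0]} {inv[0, 1]} / {inv[1, 0]} {inv[1, 1]}");
```

Math.NET's Matrix types work in doubles (or other field types), so for exact modular arithmetic it is simpler to stay with plain integer arrays like this.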
I need to convert any range to a -1 to 1 scale. I have multiple ranges that I am using and therefore need this equation to be dynamic. Below is my current equation. It works great for ranges where 0 is the center point, e.g. 200 to -200. I have another range, however, that isn't converting nicely: 6000 to 4000. I've also tested 0 to 360 and it works.
var offset = -1 + ((2 / yMax) * (point.Y));
One of the major issues that I may have is that sometimes I'll get a value that is outside the range, and as such, the converted value needs to also be outside the -1 to 1 range.
What this is for is to take a value that is a real world value, and I need to be able to plot it into an OpenGL point. I'm using .NET 4.0 and the Tao Framework.
rescaled = -1 + 2 * (point.Y - yMin) / (yMax - yMin);
However, in OpenGL you can do this with a projection matrix (or matrix multiply inside a shader). Read about glTranslatef and glScalef for how to use them or how to duplicate them with matrix multiply.
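A sketch of that rescaling for the ranges mentioned; values outside [yMin, yMax] land outside [-1, 1], as required (the helper name is hypothetical):

```csharp
using System;

// Maps [yMin, yMax] linearly onto [-1, 1]; out-of-range inputs
// extrapolate beyond [-1, 1] rather than clamping.
double Rescale(double y, double yMin, double yMax) =>
    -1 + 2 * (y - yMin) / (yMax - yMin);

Console.WriteLine(Rescale(5000, 4000, 6000)); // 0 (midpoint of 4000..6000)
Console.WriteLine(Rescale(0, -200, 200));     // 0
Console.WriteLine(Rescale(90, 0, 360));       // -0.5
Console.WriteLine(Rescale(6500, 4000, 6000)); // 1.5 (outside the range)
```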