Float Value Is Lost in Int [duplicate]

This question already has answers here:
C# Float expression: strange behavior when casting the result float to int
I'm trying to cast a float to an int, but something goes missing:
float submittedAmount = 0.51f;
int amount = (int)(submittedAmount * 100);
Why is the answer 50?

Because of floating-point arithmetic, the multiplied value isn't exactly 51. When I tried it just now, 0.51f * 100 gave the result 50.9999990463257.
And when you cast 50.9999990463257 to an int, you certainly get 50, because the cast truncates.
If you want calculations like this to be exact, you will have to use a type like decimal instead of float.
If you want to understand why, read the article I have linked below.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
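For example, here is the same calculation with decimal (a minimal sketch; the m suffix denotes a decimal literal):
decimal submittedAmount = 0.51m; // 0.51 is exactly representable in base 10
int amount = (int)(submittedAmount * 100); // 51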

Try with
int amount = (int)(submittedAmount * 100.0);
When you write 0.51f, it is not exactly 0.51.
Read this great article called What Every Computer Scientist Should Know About Floating-Point Arithmetic

Use the Convert class; otherwise the answer will still be 50.
var amount = Convert.ToInt32(submittedAmount * 100.0);
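Note that Convert.ToInt32 rounds to the nearest integer (midpoints go to the even number), while a cast simply truncates. A quick sketch of the difference:
Console.WriteLine((int)50.9999990463257); // 50: the cast truncates
Console.WriteLine(Convert.ToInt32(50.9999990463257)); // 51: Convert rounds to nearest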

0.51f is actually about 0.50999999046, because binary floating point cannot represent 0.51 exactly.
I would also add: do not use float for monetary calculations; use decimal instead.

Related

Why does my double round my variables sometimes? [duplicate]

This question already has answers here:
Difference between decimal, float and double in .NET?
I really don't understand why, but my double array just sometimes rounds my values, even though it shouldn't. The weird thing is that it only does this sometimes, as you can see in the picture. All of the array elements should have this long precision. The variable "WertunterschiedPerSec" also has this precision every time, but still, if I add it to Zwischenwerte[i], the result sometimes just gets less precise, even though I don't do anything anywhere. Does anybody know why?
I would suggest using a decimal, but let's get into the exact details:
double, float and decimal are all floating point.
The difference is double and float are base 2 and decimal is base 10.
Base 2 numbers cannot accurately represent all base 10 numbers.
This is why you're seeing what appears to be "rounding".
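You can see this for yourself with a minimal sketch (the "R" format prints the round-trip value of a double):
double x = 0.1 + 0.2; // neither 0.1 nor 0.2 is exact in base 2
Console.WriteLine(x == 0.3); // False
Console.WriteLine(x.ToString("R")); // 0.30000000000000004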
I would use decimal instead of double: you can do basically everything you can do with a double, except that most methods on the Math class only accept double. Try decimals, and if you need to convert, then use:
double variable = Convert.ToDouble(decimalVar);
Decimals are meant for base-10 values, so they hold more significant digits than a float or double.

C# rounding of double not working [duplicate]

This question already has answers here:
Is floating point math broken?
I have this code to round a value; I am using Math.Round() for it.
double h = 128.015999031067;
double d = Math.Round(h, 3) * 1000;
result is 128015.99999999999
If I write it to the console it gives me 128016. I want the value itself to be 128016.
Am I missing any type conversions? Or there is some other way of doing this?
128.016 cannot be represented exactly in binary, so there is no way you will get exactly 128016 by multiplying it by 1000.
There are multiple ways around this:
Do nothing. It is already printed correctly. (Unless you want to perform further calculations.)
The "obvious" solution would be to simply round again.
The "simplest" solution would be to multiply first, then round.
The most "correct" solution would be, if you need exact values, you should use an exact type; there are many implementations of bignum, rational, decimal, and precise arithmetic available via NuGet, also consider using decimal.
Have you tried using the round function without any other arguments?
double h = 128.015999031067;
double d = Math.Round(h*1000);

Why is (1/90) = 0? [duplicate]

This question already has answers here:
C# is rounding down divisions by itself
I am working in Unity3D with C# and I get a weird result. Can anyone tell me why my code equals 0?
float A = 1 / 90;
The literals 1 and 90 are interpreted as ints, so integer division is used. After that, the result is converted to a float.
In general, C# reads any sequence of digits without a decimal dot as an int. An int is converted to a float where necessary, but before the assignment that's not necessary, so all calculations in between are done on ints.
In other words, what you've written is:
float A = (float)((int)1 / (int)90);
(Made explicit here; this is more or less what the compiler reads.)
A division of two ints keeps only the integral part of the quotient. The integral part of 0.0111... is 0, thus zero.
If, however, you make one or both of the literals a floating-point literal (1f, 1.0f, 90f, ...), this will work. Thus use one of these:
float A = 1/90.0f;
float A = 1.0f/90;
float A = 1.0f/90.0f;
In that case, floating-point division is performed, which keeps the fractional part.
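The same applies when the operands are variables rather than literals, where a suffix is not available; a cast on either operand forces float division. A minimal sketch:
int numerator = 1;
int denominator = 90;
float A = numerator / denominator; // 0: integer division runs first, then the result converts
float B = (float)numerator / denominator; // ~0.011111111: the cast forces float division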

Number type suffixes in C#

I searched for this question before posting, but everything I found was about C++.
Here is my question:
Is a double with an f suffix normal in C#? If so, why and how is this possible?
Have a look at this code:
double d1 = 1.2f;
double d2 = 2.0f;
Console.WriteLine("{0}", d2 - d1);
decimal dm1 = 1.2m;
decimal dm2 = 2.0m;
Console.WriteLine("{0}", dm2 - dm1);
The answer for the first calculation is 0.799999952316284 with the f suffix, instead of 0.8. Also, when I change the f to a d, which I think should be the normal way, it gives the correct answer of 0.8.
The right-hand expression is evaluated as a float and then "deposited" in a double variable. Nothing wrong or weird here; the difference in result has to do with the precision of the two data types.
As for the "correct answer": the fact that 0.8 came out correct is not because you changed from a float literal to a double literal; that's just a better approximation of the result. The truly correct result comes from the second expression, the one using decimal.
The f suffix stands for float, not double. So 1.2f is a single-precision floating-point number, which is saved to a double directly after creation because of an implicit cast to double.
The imprecision you are seeing happens there, not in the calculation, since it works with 1.2d.
Such behaviour is normal when using binary floating-point values. Use decimal if you do not want it, as you already did in your example.
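You can see where the imprecision enters by widening the float literal to a double and printing the round-trip value (a minimal sketch):
Console.WriteLine(((double)1.2f).ToString("R")); // 1.2000000476837158: the float value, widened
Console.WriteLine((1.2d).ToString("R")); // 1.2: the double literal is a much closer approximation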
double and float are both binary formats.
The problem is not their precision as such, but which numbers they can store exactly: those must be binary fractions, too. Change 1.2f to 0.5f, 0.25f, or 0.125f and so on, and you will see "correct" results. Any number whose denominator has a prime factor other than 2 must be stored as an approximation. There is a 5 hidden in 1.2 (= 6/5), and you can't store that exactly in a float or double; if you try, only an approximation is stored.
decimal actually stores decimal digits, so you won't see any approximation there as long as you stay in the decimal realm. If you try to store, say, 1/3 in a decimal, it has to approximate as well.
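A quick sketch of that last point:
decimal third = 1m / 3m; // 1/3 has no finite decimal expansion
Console.WriteLine(third); // 0.3333333333333333333333333333: decimal has to approximate too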

Why is this simple calculation of two doubles inaccurate? [duplicate]

Possible Duplicate:
C# (4): double minus double giving precision problems
86.25 - 86.24 = 0.01
Generally, I would think the above statement is true, right?
However, if I enter this
double a = 86.24;
double b = 86.25;
double c = b - a;
I find that c = 0.010000000000005116
Why is this?
Floating-point numbers (in this case doubles) cannot represent decimal values exactly. I would recommend reading David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic.
This applies to all languages that deal with floating point numbers whether it be C#, Java, JavaScript, ... This is essential reading for any developer working with floating point arithmetic.
As VinayC suggests, if you need an exact (and slower) representation, use System.Decimal (aka decimal in C#) instead.
double is the wrong data type for decimal calculations; use decimal for that. The main reason lies in how double stores the number: it's essentially a binary approximation of the decimal value.
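For example, the same subtraction with decimal (a minimal sketch):
decimal a = 86.24m;
decimal b = 86.25m;
Console.WriteLine(b - a); // 0.01: both values are exact in base 10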
