Today I faced a strange problem in C#. I have an ASP.NET page where users can enter a price, a quantity, etc. I get the price value, convert it to double, multiply it by 100, and then cast it to an integer. When the price is "33.30", after converting it to double it remains 33.3 (obviously...), but after multiplying it by 100 it becomes 3329.9999999999995, and when I cast it to an integer with a simple cast, "(int)(price * 100)", it becomes 3329.
Right now I have no idea why this is happening. So I thought maybe you guys can help :).
This happens because of the way doubles are stored. You should use decimal when working with money to avoid rounding errors.
Don't cast it; round it using Math.Round. And it's better to use a decimal type for currency.
This is happening due to floating-point rounding errors. Most decimal fractions cannot be represented exactly in binary floating point, so rounding errors such as the one you are experiencing happen. See this wikipedia article for more detail.
To overcome this, you should round to the closest integer - this is best achieved by using Math.Round.
When dealing with currencies, however, best practice is to use the decimal type instead of double.
If you want to round to the closest integer, there is a Math.Round method for this.
What the cast does by default is truncate toward zero, which for positive values is the same as flooring - and that is exactly what you observe (it's also consistent with C).
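A minimal sketch of the difference, using the values from the question:

double price = 33.30;
double scaled = price * 100;            // 3329.9999999999995 because of binary round-off

int truncated = (int)scaled;            // 3329 - the cast truncates toward zero
int rounded = (int)Math.Round(scaled);  // 3330 - rounds to the nearest integer

// decimal keeps these values exact, so the cast is safe:
int exact = (int)(33.30m * 100m);       // 3330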
The error is because doubles are stored in binary form. While every binary fraction has an exact decimal expansion, most decimals don't have an exact binary expansion. The decimal 33.3 has an inexact binary expansion. This approximation is then multiplied by 100, and converted to its exact decimal expansion, which is 3329.9999999999995. (Actually, this may not be the exact expansion, due to display truncation, but the gist of it is the same.)
Floating-point arithmetic in computing is almost always an approximation of the "real" value.
Related
I'm working on something and I've got a problem which I do not understand.
double d = 95.24 / (double)100;
Console.Write(d); //Break point here
The console output is 0.9524 (as expected), but if I look at 'd' after stopping the program, it shows 0.95239999999999991.
I have tried every cast possible and the result is the same. The problem is that I use 'd' elsewhere, and this precision problem makes my program fail.
So why does it do that? How can I fix it?
Use decimal instead of double.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation. There is a significand understood to have one digit in front of the radix point, which is multiplied by two raised to an integer power. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base ten to base two and back can introduce error.
To mitigate this, whenever high precision at low magnitudes is required, use decimal instead of double; decimal is a 128-bit floating-point type designed for very high precision, at the cost of reduced range (it can only represent numbers up to about +- 7.9E28, instead of double's +- 1.8E308; still plenty for most non-astronomical, non-physics programs) and double the memory footprint of a double.
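For illustration, the classic repeated-0.1 example shows the base-10 versus base-2 difference directly:

double dsum = 0.0;
decimal msum = 0.0m;
for (int i = 0; i < 10; i++) { dsum += 0.1; msum += 0.1m; }

Console.WriteLine(dsum == 1.0);   // False - 0.1 has no exact base-2 representation
Console.WriteLine(msum == 1.0m);  // True - 0.1 is exact in base 10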
A very good article that describes a lot: What Every Computer Scientist Should Know About Floating-Point Arithmetic. It is not specific to C#; it covers floating-point arithmetic in general.
You could use a decimal instead:
decimal d = 95.24m / 100m; // both operands must be decimal; 95.24 / 100m would not compile
Console.Write(d); //Break point here
Try:
double d = Math.Round(95.24/(double)100, 4);
Console.Write(d);
edit: Or use a decimal, yeah. I was just trying to keep the answer as similar to the question as possible :)
Now I know to use float.Parse, but I have bumped into a problem.
I'm parsing the string "36.360", but the parsed float becomes 36.3600006103516.
Am I safe to round it off to 3 decimal places, or is there a better tactic for parsing floats from strings?
Obviously I'm looking for the parsed float to be 36.360.
This has nothing to do with the parsing, but is an inherent "feature" of floating-point numbers. Many numbers which have an exact decimal representation cannot be exactly stored as floating-point number, which causes such inequalities to appear.
Wikipedia (and many articles on the web) explain the issues.
Floating-point numbers are inherently prone to rounding errors; even different CPU architectures can give a different number out in the millionths decimal place and beyond. This is also why you cannot use == when comparing floating-point numbers: they'll rarely evaluate as equal because of floating-point precision errors.
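For illustration, a common workaround is a tolerance ("epsilon") comparison; the tolerance below is an arbitrary choice for this sketch, so pick one appropriate to your data:

const double Epsilon = 1e-9;  // arbitrary tolerance for this sketch

bool NearlyEqual(double a, double b) => Math.Abs(a - b) < Epsilon;

Console.WriteLine(0.1 + 0.2 == 0.3);             // False with doubles
Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3));  // True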
This is because float and double are both stored in binary form, so the value you read back is a base-2 approximation of the decimal you wrote. If you want to store the actual decimal value, a better choice would be decimal.
Per the MSDN Page on System.Decimal:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
There are limits in the precision of floating point numbers. Check out this link for additional details.
If you need more precise tracking, consider using something like a double or decimal type.
That's not an odd issue at all; it's just one of the charming features of floats you're always going to run into. Floats can't express that kind of decimal value accurately!
So if you need the result to be exactly 36.36, use a decimal rather than a float.
Otherwise, you're free to round off for display. Note that rounding won't fix the stored value though, because it won't be exactly 36.36 after rounding either.
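A minimal sketch of the decimal approach (InvariantCulture is assumed here so the "." decimal separator parses consistently):

using System.Globalization;

float f = float.Parse("36.360", CultureInfo.InvariantCulture);
decimal m = decimal.Parse("36.360", CultureInfo.InvariantCulture);

Console.WriteLine(f.ToString("G9"));  // shows the float approximation, e.g. 36.3600006
Console.WriteLine(m);                 // 36.360 - decimal even preserves the trailing zero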
How would I round the following number in C# to obtain the following result?
Value : 500.0349999999
After Rounding to 2 digits after decimal : 500.04
I have tried Math.Round(Value,2, MidpointRounding.AwayFromZero); //but it returns the value 500.03 instead of 500.04
You're asking for non-standard rounding rules. The value 500.0349999999 rounded to the nearest hundredth should be 500.03. Since the thousandths digit is less than 5, the hundredths digit remains unchanged.
One way I can see to achieve your desired result is to first round the number to one more decimal place than you ultimately want, then round that result to the target precision.
In your example, you would round the value to 3 decimal places, resulting in 500.035. You would then round that to 2 decimal places, which should result in 500.04 (assuming you're using MidpointRounding.AwayFromZero).
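For example (a sketch using decimal so the intermediate 500.035 midpoint is represented exactly):

decimal value = 500.0349999999m;
decimal step1 = Math.Round(value, 3, MidpointRounding.AwayFromZero);   // 500.035
decimal result = Math.Round(step1, 2, MidpointRounding.AwayFromZero);  // 500.04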
Hope that helps.
You could make it needlessly complex and wrap a variable Round(Decimal, Int32) call in a for loop, decrementing the Int32 to work its way back to the decimal precision needed. It's a bit of work, but like Eric said, you are asking for non-standard rounding rules.
Extra details: http://msdn.microsoft.com/en-us/library/ms131274.aspx
I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited finite precision. Not all numbers are exactly representable. In fact, there are an infinite number of real valued numbers, and only a finite number can be representable. The key to decimal, for this application, is that it uses a base 10 representation – double uses base 2.
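A quick way to see the base-2 versus base-10 difference with the value from the question ("G17" forces the full stored precision to be printed):

Console.WriteLine(141.1.ToString("G17"));  // 141.09999999999999 - the nearest base-2 double
Console.WriteLine(141.1m);                 // 141.1 - exact in the base-10 decimal type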
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
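For example, with the value from the question (this works because the double-to-decimal conversion rounds to 15 significant digits, which snaps 105.824999999999999 to 105.825 before the final rounding):

double value = 105.82499999999999;   // the 141.1 - 35.275 result from above
Console.WriteLine(Round(value, 2));  // 105.83 rather than 105.82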
This is what I am doing, which works 99.999% of the time:
((int)(customerBatch.Amount * 100.0)).ToString()
The Amount value is a double. I am trying to write the value out in pennies to a text file for transport to a server for processing. The Amount never has more than 2 decimal places.
If you use 580.55 for the Amount, this line of code returns 58054 as the string value.
This code runs on a web server in 64-bit.
Any ideas?
You should really use decimal for money calculations.
Console.WriteLine(((int)(580.55m * 100.0m)).ToString()); // prints 58055
You could use decimal values for accurate calculations. Double is a binary floating-point type, which is not guaranteed to represent decimal values precisely during calculations.
I'm guessing that 580.55 * 100.0 is evaluating to something like 58054.99999999999999..., in which case the int cast will truncate it down to 58054. You may want to write your own function that converts your amount to an int with some sort of rounding or threshold to make this not happen.
Try
((int)(Math.Round(customerBatch.Amount * 100.0))).ToString()
You really should not be using a double value to represent currency, due to rounding errors such as this.
Instead you might consider using integral values to represent monetary amounts, so that they are represented exactly. To represent decimals you can use a similar trick of storing 580.55 as the value 58055.
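A sketch of that conversion, with Math.Round guarding against the binary approximation sitting just below the true amount:

double amount = 580.55;
long pennies = (long)Math.Round(amount * 100.0);  // 58055, not 58054

// From here on, arithmetic on pennies is exact integer math.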
No, the multiplication does not introduce the rounding error,
but not all values can be represented by floating-point numbers,
and x.55 is one of them :)
Decimal has more precision than a double. Give decimal a try.
http://msdn.microsoft.com/en-us/library/364x0z75%28VS.80%29.aspx
My suggestion would be to store the value as the integer number of pennies and take dollars_part = pennies / 100 and cents_part = pennies % 100. This will completely avoid rounding errors.
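In code, that split looks like:

int pennies = 58055;
int dollars_part = pennies / 100;  // 580
int cents_part = pennies % 100;    // 55

Console.WriteLine($"{dollars_part}.{cents_part:D2}");  // 580.55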
Edit: when I wrote this post, I did not see that you could not change the number format. The best answer is probably using the round method as others have suggested.
EDIT 2: As others have pointed out, it would be best to use some sort of fixed point decimal variable. This is better than my original solution because it would store the information about the location of the decimal point in the value where it belongs instead of in the code.