Multiplying a double value by 100.0 introduces rounding errors? (C#)

This is what I am doing, which works 99.999% of the time:
((int)(customerBatch.Amount * 100.0)).ToString()
The Amount value is a double. I am trying to write the value out in pennies to a text file for transport to a server for processing. The Amount never has more than 2 decimal places.
If you use 580.55 for the Amount, this line of code returns 58054 as the string value.
This code runs on a web server in 64-bit.
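A minimal sketch of what I'm seeing (console output assumed for illustration):
double amount = 580.55;
double cents = amount * 100.0;              // slightly less than 58055 in binary floating point
Console.WriteLine((int)cents);              // 58054: the cast truncates toward zero
Console.WriteLine((int)Math.Round(cents));  // 58055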
Any ideas?

You should really use decimal for money calculations.
((int)(580.55m * 100.0m)).ToString().Dump(); // .Dump() is LINQPad's output helper

You could use decimal values for accurate calculations. Double is a binary floating-point type, which is not guaranteed to represent decimal fractions exactly during calculations.

I'm guessing that 580.55 * 100.0 is coming out as 58054.99999999999..., in which case the cast to int truncates it to 58054. You may want to write your own function that converts your amount to an int with some sort of rounding or threshold so this doesn't happen.

Try
((int)(Math.Round(customerBatch.Amount * 100.0))).ToString()
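If amounts can land exactly on half a cent after scaling, you may also want to make the midpoint behavior explicit (AwayFromZero here is a suggestion, not something the poster specified):
((int)Math.Round(customerBatch.Amount * 100.0, MidpointRounding.AwayFromZero)).ToString()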

You really should not be using a double value to represent currency, due to rounding errors such as this.
Instead you might consider using integral values to represent monetary amounts, so that they are represented exactly. You can represent decimals with the similar trick of storing 580.55 as the integer value 58055 (pennies).

No, multiplying does not introduce rounding errors, but not all values can be represented by floating-point numbers.
x.55 is one of them.

Decimal has more precision than a double. Give decimal a try.
http://msdn.microsoft.com/en-us/library/364x0z75%28VS.80%29.aspx

My suggestion would be to store the value as the integer number of pennies and take dollars_part = pennies / 100 and cents_part = pennies % 100. This will completely avoid rounding errors.
Edit: when I wrote this post, I did not see that you could not change the number format. The best answer is probably using the round method as others have suggested.
EDIT 2: As others have pointed out, it would be best to use some sort of fixed point decimal variable. This is better than my original solution because it would store the information about the location of the decimal point in the value where it belongs instead of in the code.
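A minimal sketch of the integer-pennies idea from the original suggestion (names and values are illustrative):
long pennies = 58055;                 // $580.55 stored exactly as cents
long dollarsPart = pennies / 100;     // 580
long centsPart = pennies % 100;       // 55
string forFile = pennies.ToString();  // "58055", safe to write to the text file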

C# Convert.ToDouble() loses decimal points when converting string to double

Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
After that conversion the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
Small update: putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable already has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15 or 16 digits, while you have 17 digits. If you need that level of precision use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost due to the fact that a double doesn't have the capacity to store it.
When converting to a double it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. When converting that double to a string it's much more likely that some information isn't kept.
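One way to see how far the stored double is from the original string is the "G17" round-trip format, which prints enough digits to reconstruct the exact bits (output hedged, as it depends on the runtime's formatter):
double d = Convert.ToDouble("93389.429999999993");
Console.WriteLine(d.ToString("G17")); // prints the closest representable value, e.g. 93389.429999999993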
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep the precision, depending on the number you are trying to convert.
If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type is stored in 64 bits, giving roughly 15-17 significant decimal digits, so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which is stored in 128 bits and holds 28-29 significant digits. However, the same problem arises for numbers with an even smaller difference.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a 3rd-party library to work with arbitrarily large real numbers.
You could truncate the decimal places to the number of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the scale you need.
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0; // truncate, then scale back

Parsing a string into a float

I know to use float.Parse, but I have bumped into a problem.
I'm parsing the string "36.360", but the parsed float becomes 36.3600006103516.
Am I safe to round it off to 3 decimal places, or is there a better tactic for parsing floats from strings?
Obviously I'm looking for the parsed float to be 36.360.
This has nothing to do with the parsing, but is an inherent "feature" of floating-point numbers. Many numbers which have an exact decimal representation cannot be stored exactly as a floating-point number, which causes such inequalities to appear.
Wikipedia (and many articles on the web) explains the issues.
Floating-point numbers are inherently prone to rounding errors; even different CPU architectures can give slightly different results in the millionths decimal place and beyond. This is also why you cannot use == when comparing floating-point numbers: they'll rarely evaluate as equal because of floating-point precision errors.
This is because float and double are both stored in binary (base-2) form, so many decimal values can only be stored approximately. If you want to store the exact decimal value, a better choice would be decimal.
Per the MSDN Page on System.Decimal:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
There are limits in the precision of floating point numbers. Check out this link for additional details.
If you need more precise tracking, consider using something like a double or decimal type.
That's not an odd issue at all; it's just one of the charming features of floats you're always going to run into: floats can't express that kind of decimal value accurately!
So if you need the result to be exactly 36.36, use a decimal rather than a float.
Otherwise, you're free to round off. Note that rounding won't really help though, because the result won't be exactly 36.36 after rounding either.
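A small sketch of the decimal alternative (InvariantCulture assumed for the parse):
decimal d = decimal.Parse("36.360", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(d); // 36.360 -- both the value and the trailing zero are preserved exactly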

How to decide what to use - double or decimal? [duplicate]

Possible duplicate: decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if price contains 4 digits after "dot", like 2.1234.
When I sent from my program "2.1234" on the market order appears at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
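For the 2.1234 example above, a quick sketch of the difference:
decimal m = 2.1234m;  // stored exactly as 2.1234
double d = 2.1234;    // stored as the nearest base-2 value, slightly off
Console.WriteLine(m);                 // 2.1234
Console.WriteLine(d.ToString("G17")); // 17 significant digits of the stored approximation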
When accuracy is needed and important, use decimal.
When accuracy is not that important, you can use double.
In your case you should be using decimal, as it's a financial matter.
For financial operation I always use the decimal type
Use decimal: it's built for representing base-10 values exactly (i.e. prices).
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary representation (base 2) is periodic, so you lose precision. The decimal type uses a base-10 representation (a scaled integer), so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered to avoid tolerance issues, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation, preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuary computations) you will have to convert the decimal to double before doing the calculation negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
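A short sketch of that last point:
decimal a = 1.230m;
decimal b = 1.23m;
Console.WriteLine(a == b); // True
Console.WriteLine(a);      // 1.230 -- the scale (trailing zero) is preserved
Console.WriteLine(b);      // 1.23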
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number in an int as a count of the smallest parts. For example, if the price always has 2 decimals, you would store the amount of cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 is stored in an int with value 123456, representing 123456 ten-thousandths), etc.
The disadvantage is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1: the unit has changed from tenths to hundredths); see the sketch after this answer. If you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand, if the number of decimals varies a lot, using fixed point is a bad idea. And if high-precision floating-point calculations are needed, the decimal datatype was constructed for exactly that purpose.
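A sketch of the fixed-point idea with two implied decimals (the names and values here are illustrative):
long priceCents = 1245;      // $12.45 stored as 1245 cents
long qtyHundredths = 300;    // 3.00 stored in hundredths
// multiplying two scaled values doubles the scale, so divide once by 100 to rescale
long totalCents = priceCents * qtyHundredths / 100; // 3735 == $37.35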

C# Maths gives wrong results!

I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable. In fact, there are an infinite number of real-valued numbers, and only a finite number of them are representable. The key to decimal, for this application, is that it uses a base-10 representation, while double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals)
{
    // Rounding via decimal uses an exact base-10 midpoint; note the cast to decimal
    // throws OverflowException for doubles outside decimal's range.
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
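For comparison, the same discount calculation done entirely in decimal stays exact (a sketch, not the poster's code):
decimal value = 141.1m;
decimal disc = value * 25.0m / 100m;   // 35.275 exactly
value -= disc;                         // 105.825 exactly
value = Math.Round(value, 2, MidpointRounding.AwayFromZero); // 105.83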

C# : Number Conversion Problem

Today I faced a strange problem in C#. I have an ASP.NET page where the user can enter a price, quantity, etc. I get the price value, convert it to a double, multiply it by 100, and then cast it to an integer. When the price is "33.30", after converting it to a double it remains 33.3 (obviously), but after multiplying by 100 it becomes 3329.9999999999995, and when I cast it to an integer with a simple cast, "(int)(price * 100)", it becomes 3329.
Right now I have no idea why this is happening, so I thought maybe you guys can help :).
This happens because of the way doubles are stored. You should use decimal when working with money to avoid rounding errors.
Don't cast it; round it using Math.Round. And it's better to use the decimal type for currency.
This is happening due to floating-point rounding errors. Decimal fractions like 33.30 cannot be represented exactly in binary floating point, so rounding errors such as the one you are experiencing happen. See this Wikipedia article for more detail.
To overcome this, you should round to the closest integer - this is best achieved by using Math.Round.
When dealing with currencies, however, best practice is to use the decimal type instead of double.
If you want to convert to the closest integer, there is the Math.Round method for that.
What the cast does instead is truncate toward zero, which for positive values looks like flooring: exactly what you observe (and consistent with C).
The error is because doubles are stored in binary form. While every binary fraction has an exact decimal expansion, most decimals don't have an exact binary expansion. The decimal 33.3 has an inexact binary expansion. This approximation is then multiplied by 100, and converted to its exact decimal expansion, which is 3329.9999999999995. (Actually, this may not be the exact expansion, due to display truncation, but the gist of it is the same.)
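Putting the pieces together for the question's example (a sketch):
double price = 33.30;
double scaled = price * 100;                // 3329.9999999999995 in binary floating point
Console.WriteLine((int)scaled);             // 3329: the cast truncates toward zero
Console.WriteLine((int)Math.Round(scaled)); // 3330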
Floating-point arithmetic in computing is almost always an approximation of the "real" value.
