Let's say we have the following simple code
string number = "93389.429999999993";
double numberAsDouble = Convert.ToDouble(number);
Console.WriteLine(numberAsDouble);
After that conversion, the numberAsDouble variable has the value 93389.43. What can I do to make this variable keep the full number as-is, without rounding it? I have found that Convert.ToDecimal does not behave the same way, but I need to have the value as a double.
-------------------small update---------------------
Putting a breakpoint on line 2 of the above code shows that the numberAsDouble variable already has the rounded value 93389.43 before it is displayed in the console.
93389.429999999993 cannot be represented exactly as a 64-bit floating point number. A double can only hold 15-16 significant digits, while your number has 17. If you need that level of precision, use a decimal instead.
(I know you say you need it as a double, but if you could explain why, there may be alternate solutions)
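To see the difference side by side, here is a small sketch based on the question's own code:

string number = "93389.429999999993";
double asDouble = Convert.ToDouble(number);    // nearest representable double
decimal asDecimal = Convert.ToDecimal(number); // stored exactly in base 10

Console.WriteLine(asDouble);  // 93389.43
Console.WriteLine(asDecimal); // 93389.429999999993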
This is expected behavior.
A double can't represent every number exactly. This has nothing to do with the string conversion.
You can check it yourself:
Console.WriteLine(93389.429999999993);
This will print 93389.43.
The following also shows this:
Console.WriteLine(93389.429999999993 == 93389.43);
This prints True.
Keep in mind that there are two conversions going on here. First you're converting the string to a double, and then you're converting that double back into a string to display it.
You also need to consider that a double doesn't have infinite precision; depending on the string, some data may be lost simply because a double doesn't have the capacity to store it.
When converting to a double, it's not going to "round" any more than it has to. It will create the double that is closest to the number provided, given the capabilities of a double. It's when converting that double back to a string that information is more likely to be dropped.
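If you want to see what the double actually stores, the "G17" format specifier prints enough digits to round-trip the value; a small sketch:

double d = Convert.ToDouble("93389.429999999993");
Console.WriteLine(d.ToString("G17")); // all 17 significant digits of the stored double
Console.WriteLine(d.ToString("R"));   // shortest round-trippable string, here 93389.43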
See the following (in particular the first part of Michael Borgwardt's answer):
decimal vs double! - Which one should I use and when?
A double will not always keep full precision; it depends on the number you are trying to convert.
If you need to be precise, you will need to use decimal.
This is a limit on the precision that a double can store. You can see this yourself by trying to convert 3389.429999999993 instead.
The double type is 64 bits wide (53 of them for the significand), so a rounding error occurs when the real number is stored in the numberAsDouble variable.
A solution that would work for your example is to use the decimal type instead, which is 128 bits wide and uses a base-10 representation. However, the same problem eventually arises, just with smaller differences.
For arbitrarily large numbers, the System.Numerics.BigInteger type from .NET Framework 4.0 supports arbitrary precision for integers. However, you will need a third-party library to work with arbitrarily large real numbers.
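For instance, a minimal BigInteger sketch (assuming a reference to System.Numerics):

using System.Numerics;

BigInteger big = BigInteger.Parse("93389429999999993"); // the digits as an integer
Console.WriteLine(big * big); // exact; integer arithmetic never loses precision here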
You could truncate the decimal places to the amount of digits you need, not exceeding double precision.
For instance, this will truncate to 5 decimal places, giving 93389.42999. Just replace 100000 with the value you need:
string number = "93389.429999999993";
decimal numberAsDecimal = Convert.ToDecimal(number);
// Scale up by 10^5, truncate via the long cast, then scale back down.
var numberAsDouble = ((double)((long)(numberAsDecimal * 100000.0m))) / 100000.0;
Is it possible to set the precision of double values in C# for all double values in the project? I have a lot of values, and changing them all with Math.Round would be exhausting. I need to have the double value as 5.12345 instead of 5.123455123321321, for example.
This is a fundamental limitation of floating point types. A double is actually stored internally as a sign, an exponent, and a mantissa; the exponent is base 2, so the type has a lot of trouble dealing with base-10 values.
The easiest solution is to use a base-10 floating point type, namely decimal (which is 128 bits wide). It's still floating point and still has limited precision, but it is a lot friendlier and more accurate to work with in many cases.
Update
If all you want to do is change the display output, you can either use rounding (which you already know about) or the appropriate format specifiers with string.Format, ToString, or string interpolation.
Example
using System.Globalization;

var number = 5.123455123321321;
Console.WriteLine(number.ToString("F3", CultureInfo.InvariantCulture)); // Displays 5.123
Console.WriteLine($"{number:F3}"); // Displays 5.123
In the example below, the number 12345678.9 loses accuracy when it's converted to a string: it becomes 1.234568E+07. I just need a way to preserve the accuracy of large floating point numbers. Thanks.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString();
Console.WriteLine(str1); // displays 1.234568E+07
If you want to preserve decimal numbers, you should use System.Decimal. It's as simple as that. System.Single is worse than System.Double in this respect; as per the documentation:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
You haven't just lost information when you've converted it to a string - you've lost information in the very first line. That's not just because you're using float instead of double - it's because you're using a floating binary point number.
The decimal number 0.1 can't be represented accurately in a binary floating point system no matter how big you make the type...
See my articles on floating binary point and floating decimal point for more information. Of course, it's possible that you should be using double or even float and just not caring about the loss of precision - it depends on what you're trying to represent. But if you really do care about preserving decimal digits, then use a decimal-based type.
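For example, a decimal holds the question's value with every digit intact:

decimal dec1 = 12345678.9m;
Console.WriteLine(dec1); // 12345678.9 - every decimal digit preserved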
You can't. Simple as that. In memory your number is 12345679. Try the code below.
Single sin1 = 12345678.9F;
String str1 = sin1.ToString("r"); // Shows "all" the number
Console.WriteLine(sin1 == 12345679); // true
Console.WriteLine(str1); // displays 12345679
Technically "r" means (quoting MSDN) round-trip: "A string that can round-trip to an identical number." So in reality it isn't showing all the decimals; it's only showing the decimals needed to distinguish the value from every other possible Single. If you want to show all the decimals, use "F20".
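Continuing the snippet above:

Console.WriteLine(sin1.ToString("F20")); // all 20 decimal places, e.g. 12345679.00000000000000000000 on modern runtimes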
If you want more precision, use double, or better, use decimal. float has the precision that it has. As we say in Italy, "Non puoi spremere sangue da una rapa" (you can't squeeze blood from a turnip).
You could also write an IFormatProvider for your purpose - but the precision doesn't get any better unless you use a different type.
This article may help: http://www.csharp-examples.net/string-format-double/
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if the price contains 4 digits after the "dot", like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for all financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint.
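A quick sketch illustrating the exactness point (the G17 comment describes the output rather than pinning down its exact digits):

decimal priceDec = 2.1234m;
Console.WriteLine(priceDec); // 2.1234 - exact

double priceDbl = 2.1234;
Console.WriteLine(priceDbl.ToString("G17")); // the nearest base-2 double, not exactly 2.1234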
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operation I always use the decimal type
Use decimal; it's built for representing base-10 values (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M + 0.3M + 0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails because 0.3 in its binary (base-2) representation is periodic, so you lose precision. The decimal type uses a base-10 representation, so you do not unexpectedly lose significant digits. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit counts, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a base-10 representation preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuary computations) you will have to convert the decimal to double before doing the calculation negating the advantages of using decimals.
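For example, the "R" (round-trip) format specifier guarantees that serializing to text and reparsing gives back the identical double:

double original = 2.1234;
string text = original.ToString("R");
double restored = double.Parse(text);
Console.WriteLine(original == restored); // True - nothing lost in the text round trip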
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
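For example:

decimal a = 1.230m;
decimal b = 1.23m;
Console.WriteLine(a == b); // True - same value
Console.WriteLine(a);      // 1.230 - trailing zero remembered
Console.WriteLine(b);      // 1.23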
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number as an integer count of some fractional unit. For example, if the price always has 2 decimals, you would store the amount in cents ($12.45 is stored as an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 is stored as an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). If you are going to use other mathematical functions, you also have to take things like this into consideration.
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
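A minimal fixed-point sketch of the cents idea (the variable names are illustrative):

// $580.55 stored exactly as integer cents.
long priceCents = 58055;
long discountCents = priceCents * 25 / 100;   // 25% off; integer math truncates: 14513
long finalCents = priceCents - discountCents; // 43542

// Convert back to a decimal representation only at the display edge.
Console.WriteLine($"{finalCents / 100}.{finalCents % 100:D2}"); // 435.42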
I understand the principle behind this problem, but it gives me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable; in fact, there are infinitely many real numbers, and only a finite number of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation, while double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
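A sketch of such a helper (the name and the epsilon value are illustrative; tune the epsilon to the scale of your data):

static double RoundWithEpsilon(double value, int decimals, double epsilon = 1e-9)
{
    // Nudge the value away from zero before rounding, so a result like
    // 105.824999999999999 (intended to be 105.825) rounds up to 105.83.
    return Math.Round(value + Math.Sign(value) * epsilon, decimals, MidpointRounding.AwayFromZero);
}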
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will avoid you making costly mistakes with your doubles.
After spending most of the morning trying to replace every instance of a 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
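For example, with the numbers from the question (the comments show what a typical runtime produces):

double value = 141.1 - (141.1 * 25.0 / 100); // 105.82499999999999... as a double
Console.WriteLine(Round(value, 2));          // 105.83 - the decimal cast recovers 105.825 first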
This is what I am doing, which works 99.999% of the time:
((int)(customerBatch.Amount * 100.0)).ToString()
The Amount value is a double. I am trying to write the value out in pennies to a text file for transport to a server for processing. The Amount never has more than 2 digits after the decimal point.
If you use 580.55 for the Amount, this line of code returns 58054 as the string value.
This code runs on a web server in 64-bit.
Any ideas?
You should really use decimal for money calculations.
((int)(580.55m * 100.0m)).ToString().Dump(); // Dump() is a LINQPad extension; use Console.WriteLine elsewhere
You could use decimal values for accurate calculations. Double is a binary floating point type, which is not guaranteed to be precise during calculations.
I'm guessing that 580.55 is getting converted to 58054.999999999999..., in which case the int cast will truncate it down to 58054. You may want to write your own function that converts your amount to an int with some sort of rounding or threshold to prevent this.
Try
((int)(Math.Round(customerBatch.Amount * 100.0))).ToString()
You really should not be using a double value to represent currency, due to rounding errors such as this.
Instead you might consider using integral values to represent monetary amounts, so that they are represented exactly. To represent decimals you can use a similar trick of storing 580.55 as the value 58055.
No, multiplying does not introduce rounding errors, but not all values can be represented by floating point numbers; x.55 is one of them.
Decimal has more precision than a double. Give decimal a try.
http://msdn.microsoft.com/en-us/library/364x0z75%28VS.80%29.aspx
My suggestion would be to store the value as the integer number of pennies and take dollars_part = pennies / 100 and cents_part = pennies % 100. This will completely avoid rounding errors.
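For example:

int pennies = 58055;              // $580.55, stored exactly
int dollars_part = pennies / 100; // 580
int cents_part = pennies % 100;   // 55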
Edit: when I wrote this post, I did not see that you could not change the number format. The best answer is probably using the round method as others have suggested.
EDIT 2: As others have pointed out, it would be best to use some sort of fixed point decimal variable. This is better than my original solution because it would store the information about the location of the decimal point in the value where it belongs instead of in the code.