Dividing by 100 precision - c#

I'm working on something and I've got a problem which I do not understand.
double d = 95.24 / (double)100;
Console.Write(d); //Break point here
The console output is 0.9524 (as expected), but if I look at 'd' after stopping the program it shows 0.95239999999999991.
I have tried every cast possible and the result is the same. The problem is I use 'd' elsewhere, and this precision problem makes my program fail.
So why does it do that? How can I fix it?

Use decimal instead of double.

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
The short of it is that a floating-point number is stored in what amounts to base-2 scientific notation: a significand, understood to have one digit in front of the (binary) point, multiplied by two raised to an integer exponent. This allows numbers to be stored in a relatively compact format; the downside is that the conversion from base 10 to base 2 and back can introduce error.
To mitigate this, whenever high precision at low magnitudes is required, use decimal instead of double; decimal is a 128-bit floating-point type that stores its digits in base 10 and is designed for very high precision, at the cost of reduced range (it can only represent numbers up to about ±7.9E28, instead of double's ±1.8E308; still plenty for most non-astronomical, non-physics programs) and double the memory footprint of a double.
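For the original example, here is a minimal sketch contrasting the two types (the digits in the comments are what the question reports and what typical .NET runtimes display):
double d = 95.24 / 100;     // the debugger shows the full stored value, e.g. 0.95239999999999991
decimal m = 95.24m / 100m;  // exactly 0.9524
Console.WriteLine(d);       // prints 0.9524 because the default double formatting rounds for display
Console.WriteLine(m);       // prints 0.9524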

A very good article that explains a lot: What Every Computer Scientist Should Know About Floating-Point Arithmetic. It is not specific to C#, but covers floating-point arithmetic in general.

You could use a decimal instead:
decimal d = 95.24m / 100m;
Console.Write(d); //Break point here

Try:
double d = Math.Round(95.24/(double)100, 4);
Console.Write(d);
edit: Or use a decimal, yeah. I was just trying to keep the answer as similar to the question as possible :)

Related

Parsing a string into a float

Now I know to use float.Parse, but I have bumped into a problem.
I'm parsing the string "36.360", however the parsed float becomes 36.3600006103516.
Am I safe to round it off to 3 decimal places, or is there a better tactic for parsing floats from strings?
Obviously I'm looking for the parsed float to be 36.360.
This has nothing to do with the parsing, but is an inherent "feature" of floating-point numbers. Many numbers which have an exact decimal representation cannot be exactly stored as floating-point number, which causes such inequalities to appear.
Wikipedia (and many articles on the web) explains the issues.
Floating-point numbers are inherently prone to rounding errors; even different CPU architectures can give slightly different results in the far decimal places. This is also why you cannot use == when comparing floating-point numbers: they'll rarely evaluate as equal because of floating-point precision errors.
This is because float and double both store the value in a binary form, so reading a decimal value in and out of them involves a conversion that may not be exact. If you want to store the value as the actual decimal value, a better choice would be decimal.
Per the MSDN Page on System.Decimal:
The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
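As a quick illustration of that quoted point (this snippet is not from the original answer), decimal still has to round, it just rounds in base 10:
decimal third = 1m / 3m;
Console.WriteLine(third);       // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999, not 1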
There are limits to the precision of floating-point numbers. Check out this link for additional details.
If you need more precise tracking, consider using something like a double or decimal type.
That's not an odd issue at all; it's just one of the charming features of floats that you're always going to run into. Floats can't express that kind of decimal value accurately!
So if you need the result to be exactly 36.36, use a decimal rather than a float.
Otherwise, you're free to round off. Note that rounding the float itself won't really help, though, because it still won't be exactly 36.36 after rounding.
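If you are starting from a string anyway, a small sketch of that approach (the invariant culture here is an assumption about the input format):
using System.Globalization;

decimal value = decimal.Parse("36.360", CultureInfo.InvariantCulture);
Console.WriteLine(value);   // 36.360, stored exactly, trailing zero included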

Converting to double is giving not accurate number

Convert.ToDouble is adding zeros and a 1 to the end of the value.
Why is it turning 21.62 into 21.620000000000001?
Is this a floating-point issue?
Double (and Float) are binary floating-point types and will have some imprecision.
If you need more precise comparisons, use decimal instead. If you're just doing calculations, double should be fine. If you need to compare doubles for absolute equality, compare the absolute value of the difference to some small constant:
// Example values: mathematically equal, but not equal as doubles.
double a = 0.1 + 0.2;
double b = 0.3;

if (a == b) // not reliable for floating point
{
    // ...
}

const double EPSILON = 0.0000001;
if (Math.Abs(a - b) < EPSILON) // close enough: treat as equal
{
    // ...
}
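If you need this comparison in several places, the same idea can be wrapped in a small helper (the name and the default epsilon are only illustrative; choose a tolerance that suits your domain):
static bool NearlyEqual(double a, double b, double epsilon = 0.0000001)
{
    return Math.Abs(a - b) < epsilon;
}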
Floating-point numbers have approximation problems.
This is because a decimal fraction like 0.00001 can't be represented exactly in a binary system (where finitely representable fractions have the form q/2^p).
The problem is intrinsic.
In short, yes; between the two bases involved here (2 and 10), there are values that can be expressed with a finite number of "decimal" (binimal?) places in one but cannot in the other.
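A concrete illustration of this (the "G17" round-trip format forces .NET to print the stored double in full):
double a = 0.1;
double b = 0.1 + 0.2;
Console.WriteLine(a.ToString("G17"));  // 0.10000000000000001
Console.WriteLine(b.ToString("G17"));  // 0.30000000000000004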
Rounding errors like this are common in almost every high level programming language. In Java, to get around this you use a class called BigDecimal if you need guaranteed accuracy. Otherwise, you can just write a method that rounds a decimal to the nearest place you need.

How to decide what to use - double or decimal? [duplicate]

Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if price contains 4 digits after "dot", like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need a maximum of 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact representation in base 10 is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation (a quick check of two of them follows the list):
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
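A quick check of the range and precision points above (output shown for a recent .NET runtime; older runtimes format some of these values slightly differently):
Console.WriteLine(double.MaxValue);   // 1.7976931348623157E+308
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335, about 7.9E28
Console.WriteLine(1.0 / 3.0);         // 0.3333333333333333  (double: roughly 15-17 significant digits)
Console.WriteLine(1.0m / 3.0m);       // 0.3333333333333333333333333333 (decimal: 28 significant digits)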
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built for representing base-10 values (i.e. prices) exactly.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M + 0.3M + 0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails since 0.3 in its binary (base-2) representation is periodic, so you lose precision. The decimal type uses a base-10 representation (a scaled integer), so you do not lose significant digits unexpectedly. Decimal is unfortunately dramatically slower than double. Usually we use decimal for financial calculations, where every digit has to be considered, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while the decimal uses a decimal representation preserving the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much better suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimals.
A decimal also "remembers" how many digits it has, e.g. even though decimal 1.230 is equal to 1.23 the first is still aware of the trailing zero and can display it if formatted as text.
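A short illustration of that last point (equal values, different remembered scale):
decimal a = 1.23m;
decimal b = 1.230m;
Console.WriteLine(a == b);  // True, the values are equal
Console.WriteLine(a);       // 1.23
Console.WriteLine(b);       // 1.230, the trailing zero is remembered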
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number in an int scaled by a fixed power of ten. For example, if the price always has 2 decimals you would store the amount in cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), and so on.
The disadvantage of this is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand if the amount of decimals vary a lot using fixed point is a bad idea. And if high-precision floating point calculations are needed the decimal datatype was constructed for exactly that purpose.
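A minimal sketch of the cents idea described above (the 25% discount and the truncating rounding rule are just assumptions for the example):
long priceCents = 1245;                        // $12.45 stored as whole cents
long discountCents = priceCents * 25 / 100;    // 311; integer division truncates 311.25
long totalCents = priceCents - discountCents;  // 934, i.e. $9.34
Console.WriteLine($"{totalCents / 100}.{totalCents % 100:00}");  // 9.34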

C# Maths gives wrong results!

I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited finite precision. Not all numbers are exactly representable. In fact, there are infinitely many real numbers, and only finitely many of them can be represented. The key to decimal, for this application, is that it uses a base 10 representation – double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision (and since you're dealing with money judging by your example, you probably do), you should not be using binary floating-point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will help you avoid costly mistakes with your doubles.
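For comparison, a sketch of the same calculation done in decimal (using the away-from-zero midpoint rule, the same mode as the workaround shown below):
decimal value = 141.1m;
decimal discount = 25.0m;
decimal disc = value * discount / 100m;                        // 35.275 exactly
value -= disc;                                                 // 105.825 exactly
value = Math.Round(value, 2, MidpointRounding.AwayFromZero);   // 105.83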
After spending most of the morning trying to replace every instance of a 'double' to 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.

How to fix precision of variable

In C#:
double tmp = 3.0 * 0.05;
// tmp == 0.15000000000000002
This has to do with money. The value is really $0.15, but the system wants to round it up to $0.16. 0.151 should probably be rounded up to 0.16, but not 0.15000000000000002
What are some ways I can get the correct numbers (i.e. 0.15, or 0.16 if the value is high enough)?
Use a fixed-point variable type, or a base ten floating point type like Decimal. Floating point numbers are always somewhat inaccurate, and binary floating point representations add another layer of inaccuracy when they convert to/from base two.
Money should be stored as decimal, which is a floating decimal point type. The same goes for other data which really is discrete rather than continuous, and which is logically decimal in nature.
Humans have a bias to decimal for obvious reasons, so "artificial" quantities such as money tend to be more appropriate in decimal form. "Natural" quantities (mass, height) are on a more continuous scale, which means that float/double (which are floating binary point types) are often (but not always) more appropriate.
In Patterns of Enterprise Application Architecture, Martin Fowler recommends using a Money abstraction
http://martinfowler.com/eaaCatalog/money.html
Mostly he does it for dealing with Currency, but also precision.
You can see a little of it in this Google Book search result:
http://books.google.com/books?id=FyWZt5DdvFkC&pg=PT520&lpg=PT520&dq=money+martin+fowler&source=web&ots=eEys-C_vdA&sig=jckdxgMLSRJtGDYZtcbYST1ak8M&hl=en&sa=X&oi=book_result&resnum=6&ct=result
The 'decimal' type was designed especially for this.
A decimal data type would work well and is probably your choice.
However, in the past I've been able to do this in an optimized way using fixed point integers. It's ideal for high performance computations where decimal bogs down and you can't have the small precision errors of float.
Start with, say, an Int32, and split it in half. The first half is the whole-number portion, the second half is the fractional portion. You get 16 bits of signed integer plus 16 bits of fractional precision. E.g. 1.5 as a 16:16 fixed-point value would be represented as 0x00018000. Or alter the distribution of bits to suit your needs.
Fixed point numbers can generally be added/sub/mul/div like any other integer, with a little bit of work around mul/div to avoid overflows.
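A rough sketch of that 16:16 layout (the class and method names are illustrative, not a standard API):
static class Fixed16
{
    private const int Shift = 16;

    public static int FromDouble(double x) => (int)(x * (1 << Shift));  // 1.5 -> 0x00018000
    public static double ToDouble(int f) => f / (double)(1 << Shift);

    // Widen to 64 bits around multiply and divide to avoid the overflow mentioned above.
    public static int Mul(int a, int b) => (int)(((long)a * b) >> Shift);
    public static int Div(int a, int b) => (int)(((long)a << Shift) / b);
}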
What you are facing is a rounding problem, which I mentioned earlier in another post:
Can I use “System.Currency” in .NET?
And refer to this as well: Rounding
