Parsing a string into a float - C#

Now I know to use float.Parse, but I have bumped into a problem.
I'm parsing the string "36.360", yet the parsed float comes out as 36.3600006103516.
Am I safe to round it off to three decimal places, or is there a better tactic for parsing floats from strings?
Obviously I'm looking for the parsed value to be 36.360.

This has nothing to do with the parsing; it is an inherent "feature" of floating-point numbers. Many numbers that have an exact decimal representation cannot be stored exactly as a binary floating-point number, which is why such discrepancies appear.
Wikipedia (and many articles on the web) explains the issue.
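You can see the effect directly with a minimal sketch (the exact digits printed depend on your runtime and culture settings):
float f = float.Parse("36.360");
Console.WriteLine(f.ToString("G9"));   // e.g. 36.3600006 - the float nearest to 36.36
Console.WriteLine((double)f);          // e.g. 36.3600006103516 - widening to double exposes the same error

decimal d = decimal.Parse("36.360");
Console.WriteLine(d);                  // 36.360 - decimal stores the parsed digits exactly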

Floating-point numbers are inherently prone to rounding errors; even different CPU architectures can give different results in the millionths place and beyond. This is also why you cannot use == when comparing floating-point numbers: they will rarely evaluate as equal because of floating-point precision errors.
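For example, a tolerance-based comparison (the 0.0001f epsilon here is an arbitrary choice; pick one that suits the magnitude of your values):
float sum = 0f;
for (int i = 0; i < 10; i++)
    sum += 0.1f;                                  // each addition rounds a little

Console.WriteLine(sum == 1f);                     // False on typical IEEE 754 hardware (sum is about 1.0000001)
Console.WriteLine(Math.Abs(sum - 1f) < 0.0001f);  // True - compare within a tolerance instead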

This is because float and double are both stored in a binary format; the decimal value you see has to be reconstructed mathematically from that representation, and not every decimal value survives the trip exactly. If you want to store the value as the actual decimal value, a better choice would be decimal.
Per the MSDN page on System.Decimal:
The Decimal value type is appropriate for financial calculations
requiring large numbers of significant integral and fractional digits
and no round-off errors. The Decimal type does not eliminate the need
for rounding. Rather, it minimizes errors due to rounding.

There are limits to the precision of floating-point numbers. Check out this link for additional details.
If you need more precise tracking, consider using a double or, for exact decimal values, the decimal type.

That's not an odd issue at all; it's just one of the charming features of floats you're always going to run into: floats can't represent that kind of decimal value exactly.
So if you need the result to be exactly 36.36, use a decimal rather than a float.
Otherwise, you're free to round off. Note that rounding won't make the stored value exact, though; it still won't be exactly 36.36 after rounding.

Related

Why am I losing data when going from double to decimal?

I have the following code which I can't change...
public static decimal Convert(decimal value, Measurement currentMeasurement, Measurement targetMeasurement, bool roundResult = true)
{
    double result = Convert(System.Convert.ToDouble(value), currentMeasurement, targetMeasurement, roundResult);
    return System.Convert.ToDecimal(result);
}
Now result is returned as -23.333333333333336, but once the conversion to decimal takes place it becomes -23.3333333333333M.
I thought decimals could hold bigger values and were hence more accurate, so how am I losing data going from double to decimal?
This is by design. Quoting from the documentation of Convert.ToDecimal:
The Decimal value returned by this method contains a maximum of
15 significant digits. If the value parameter contains more than 15
significant digits, it is rounded using rounding to nearest. The
following example illustrates how the Convert.ToDecimal(Double) method
uses rounding to nearest to return a Decimal value with 15
significant digits.
Console.WriteLine(Convert.ToDecimal(123456789012345500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(123456789012346500.12D)); // Displays 123456789012346000
Console.WriteLine(Convert.ToDecimal(10030.12345678905D)); // Displays 10030.123456789
Console.WriteLine(Convert.ToDecimal(10030.12345678915D)); // Displays 10030.1234567892
The reason for this is mostly that double can only guarantee 15 decimal digits of precision, anyway. Everything that's displayed after them (converted to a string it's 17 digits because that's what double uses internally and because that's the number you might need to exactly reconstruct every possible double value from a string) is not guaranteed to be part of the exact value being represented. So Convert takes the only sensible route and rounds them away. After all, if you have a type that can represent decimal values exactly, you wouldn't want to start with digits that are inaccurate.
So you're not losing data, per se. In fact, you're only losing garbage: garbage you thought was data.
EDIT: To clarify my point from the comments: Conversions between different numeric data types may incur a loss of precision. This is especially the case between double and decimal because both types are capable of representing values the other type cannot represent. Furthermore, both double and decimal have fairly specific use cases they're intended for, which is also evident from the documentation (emphasis mine):
The Double value type represents a double-precision 64-bit number with
values ranging from negative 1.79769313486232e308 to positive
1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and not a number (NaN). It is
intended to represent values that are extremely large (such as
distances between planets or galaxies) or extremely small (the
molecular mass of a substance in kilograms) and that often are
imprecise (such as the distance from earth to another solar system).
The Double type complies with the IEC 60559:1989 (IEEE 754) standard
for binary floating-point arithmetic.
The Decimal value type represents decimal numbers ranging from
positive 79,228,162,514,264,337,593,543,950,335 to negative
79,228,162,514,264,337,593,543,950,335. The Decimal value type is
appropriate for financial calculations that require large numbers of
significant integral and fractional digits and no round-off errors.
The Decimal type does not eliminate the need for rounding. Rather, it
minimizes errors due to rounding.
This basically means that for quantities that will not grow unreasonably large and that need an accurate decimal representation, you should use decimal (which sounds fairly obvious when written that way). In practice this most often means financial calculations, as the documentation already states.
On the other hand, in the vast majority of other cases, double is the right way to go and usually does not hurt as a default choice (languages like Lua and JavaScript get away just fine with double being the only numeric data type).
In your specific case, since you mentioned in the comments that those are temperature readings, it is very, very, very simple: Just use double throughout. You have temperature readings. Wikipedia suggests that highly-specialized thermometers reach around 10⁻³ °C precision. Which basically means that the differences in value of around 10⁻¹³ (!) you are worried about here are simply irrelevant. Your thermometer gives you (let's be generous) five accurate digits here and you worry about the ten digits of random garbage that come after that. Just don't.
I'm sure a physicist or other scientist might be able to chime in here with proper handling of measurements and their precision, but we were taught in school that it's utter bullshit to even give values more precise than the measurements are. And calculations potentially affect (and reduce) that precision.

C# Maths gives wrong results!

I understand the principle behind this problem, but it's giving me a headache to think that this is going on throughout my application, and I need to find a solution.
double Value = 141.1;
double Discount = 25.0;
double disc = Value * Discount / 100; // disc = 35.275
Value -= disc; // Value = 105.824999999999999
Value = Functions.Round(Value, 2); // Value = 105.82
I'm using doubles to represent quite small numbers. Somehow in the calculation 141.1 - 35.275 the binary representation of the result gives a number which is just 0.0000000000001 out. Unfortunately, since I am then rounding this number, this gives the wrong answer.
I've read about using Decimals instead of Doubles but I can't replace every instance of a Double with a Decimal. Is there some easier way to get around this?
If you're looking for exact representations of values which are naturally decimal, you will need to replace double with decimal everywhere. You're simply using the wrong datatype. If you'd been using short everywhere for integers and then found out that you needed to cope with larger values than that supports, what would you do? It's the same deal.
However, you should really try to understand what's going on to start with... why Value doesn't equal exactly 141.1, for example.
I have two articles on this:
Binary floating point in .NET
Decimal floating point in .NET
You should use decimal – that's what it's for.
The behaviour of floating point arithmetic? That's just what it does. It has limited, finite precision. Not all numbers are exactly representable; in fact, there are infinitely many real numbers and only finitely many of them can be represented. The key to decimal, for this application, is that it uses a base-10 representation, whereas double uses base 2.
Instead of using Round to round the number, you could use some function you write yourself which uses a small epsilon when rounding to allow for the error. That's the answer you want.
The answer you don't want, but I'm going to give anyway, is that if you want precision, and since you're dealing with money judging by your example you probably do, you should not be using binary floating point maths. Binary floating point is inherently inaccurate and some numbers just can't be represented correctly. Using Decimal, which does base-10 floating point, would be a much better approach everywhere and will help you avoid costly mistakes with your doubles.
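For comparison, here is the same discount calculation done in decimal (a sketch using plain Math.Round rather than the poster's Functions.Round helper):
decimal value = 141.1m;
decimal discount = 25.0m;

decimal disc = value * discount / 100m;  // exactly 35.275
value -= disc;                           // exactly 105.825

// With the true midpoint preserved, the rounding mode is what decides the result:
Console.WriteLine(Math.Round(value, 2, MidpointRounding.AwayFromZero)); // 105.83
Console.WriteLine(Math.Round(value, 2));                                // 105.82 (banker's rounding)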
After spending most of the morning trying to replace every instance of 'double' with 'decimal' and realising I was fighting a losing battle, I had another look at my Round function. This may be useful to those who can't implement the proper solution:
public static double Round(double dbl, int decimals) {
    return (double)Math.Round((decimal)dbl, decimals, MidpointRounding.AwayFromZero);
}
By first casting the value to a decimal, and then calling Math.Round, this will return the 'correct' value.
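A rough usage check (the intermediate values are approximate; the Decimal(Double) constructor used by the cast is documented to round to 15 significant digits, which is what recovers the intended value):
double value = 141.1 - 35.275;    // comes out as roughly 105.824999999999999 in binary
double rounded = Round(value, 2); // the cast to decimal recovers 105.825, which then rounds away from zero
Console.WriteLine(rounded);       // 105.83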

Multiplying a double value by 100.0 introduces rounding errors?

This is what I am doing, which works 99.999% of the time:
((int)(customerBatch.Amount * 100.0)).ToString()
The Amount value is a double. I am trying to write the value out in pennies to a text file for transport to a server for processing. The Amount never has more than two decimal places.
If you use 580.55 for the Amount, this line of code returns 58054 as the string value.
This code runs on a web server in 64-bit.
Any ideas?
You should really use decimal for money calculations.
((int)(580.55m * 100.0m)).ToString().Dump();
You could use decimal values for accurate calculations. Double is a binary floating-point type and is not guaranteed to be precise during calculations.
I'm guessing that 580.55 * 100.0 is coming out as 58054.999999999999..., in which case the cast to int truncates it down to 58054. You may want to write your own function that converts your amount to an int with some sort of rounding or threshold so this doesn't happen.
Try
((int)(Math.Round(customerBatch.Amount * 100.0))).ToString()
You really should not be using a double value to represent currency, due to rounding errors such as this.
Instead you might consider using integral values to represent monetary amounts, so that they are represented exactly. To represent decimals you can use a similar trick of storing 580.55 as the value 58055.
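A minimal sketch of that approach (variable names are illustrative, not from the original code):
double amount = 580.55;                           // the existing double Amount
long pennies = (long)Math.Round(amount * 100.0);  // 58055 - Math.Round absorbs the tiny binary error
string line = pennies.ToString();                 // "58055", ready to write to the text file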
No, multiplying does not introduce rounding errors.
But not all values can be represented exactly by floating-point numbers;
x.55 is one of them.
Decimal has more precision than a double. Give decimal a try.
http://msdn.microsoft.com/en-us/library/364x0z75%28VS.80%29.aspx
My suggestion would be to store the value as the integer number of pennies and take dollars_part = pennies / 100 and cents_part = pennies % 100. This will completely avoid rounding errors.
Edit: when I wrote this post, I did not see that you could not change the number format. The best answer is probably using the round method as others have suggested.
EDIT 2: As others have pointed out, it would be best to use some sort of fixed point decimal variable. This is better than my original solution because it would store the information about the location of the decimal point in the value where it belongs instead of in the code.

double minus double giving precision problems

I have come across a precision issue with double in .NET. I thought this only applied to floats, but now I see that double is a floating-point type too.
double test = 278.97 - 90.46;
Debug.WriteLine(test); // 188.51000000000005
// correct answer is 188.51
What is the correct way to handle this? Round? Lop off the unneeded decimal places?
Use the decimal data type. "The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors."
This happens in many languages and stems from the fact that most decimal values cannot be stored exactly as binary doubles or floats. For a detailed explanation of how doubles, floats, and other floating-point values are stored, look at the IEEE specifications described on Wikipedia, for example: http://en.wikipedia.org/wiki/Double_precision
Of course there are other formats, such as fixed-point. But this imprecision exists in most languages, which is why you often need to use epsilon tests instead of equality tests when using doubles in conditionals (i.e. abs(x - y) <= 0.0001 instead of x == y).
How you deal with this imprecision is up to you, and depends on your application.
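Either of the suggestions above works for the example in the question; a small sketch (the printed values may be formatted slightly differently by your runtime):
// Do the arithmetic in decimal - the subtraction is exact:
decimal exact = 278.97m - 90.46m;
Console.WriteLine(exact);                 // 188.51

// Or keep double and round the result to the precision you actually need:
double test = 278.97 - 90.46;
Console.WriteLine(Math.Round(test, 2));   // 188.51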

How to fix precision of variable

In C#:
double tmp = 3.0 * 0.05;
// tmp is now 0.15000000000000002
This has to do with money. The value is really $0.15, but the system wants to round it up to $0.16. A value like 0.151 should probably be rounded up to 0.16, but 0.15000000000000002 should not.
What are some ways I can get the correct numbers (i.e. 0.15, or 0.16 only if the fraction really is high enough)?
Use a fixed-point type, or a base-ten floating-point type like Decimal. Binary floating-point numbers cannot represent most decimal fractions exactly, so converting to and from base two adds inaccuracy to values that are exact in decimal.
Money should be stored as decimal, which is a floating decimal point type. The same goes for other data which really is discrete rather than continuous, and which is logically decimal in nature.
Humans have a bias to decimal for obvious reasons, so "artificial" quantities such as money tend to be more appropriate in decimal form. "Natural" quantities (mass, height) are on a more continuous scale, which means that float/double (which are floating binary point types) are often (but not always) more appropriate.
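If the intent really is "always round fractions of a cent upwards", doing the arithmetic in decimal first means there is no phantom fraction to round up (a sketch; adjust the rounding rule to your actual business requirement):
decimal tmp = 3.0m * 0.05m;                             // exactly 0.15
Console.WriteLine(Math.Ceiling(tmp * 100m) / 100m);     // 0.15 - nothing to round up
Console.WriteLine(Math.Ceiling(0.151m * 100m) / 100m);  // 0.16

double bad = 3.0 * 0.05;                                // 0.15000000000000002
Console.WriteLine(Math.Ceiling(bad * 100.0) / 100.0);   // 0.16 - the phantom fraction gets rounded up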
In Patterns of Enterprise Application Architecture, Martin Fowler recommends using a Money abstraction
http://martinfowler.com/eaaCatalog/money.html
He does this mostly to deal with currency, but also for precision.
You can see a little of it in this Google Book search result:
http://books.google.com/books?id=FyWZt5DdvFkC&pg=PT520&lpg=PT520&dq=money+martin+fowler&source=web&ots=eEys-C_vdA&sig=jckdxgMLSRJtGDYZtcbYST1ak8M&hl=en&sa=X&oi=book_result&resnum=6&ct=result
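A bare-bones sketch of the idea, built on decimal (this is only an illustration, not Fowler's actual interface):
using System;

public struct Money
{
    private readonly decimal amount;
    private readonly string currency;

    public Money(decimal amount, string currency)
    {
        this.amount = amount;
        this.currency = currency;
    }

    public decimal Amount { get { return amount; } }
    public string Currency { get { return currency; } }

    public Money Add(Money other)
    {
        if (other.currency != currency)
            throw new InvalidOperationException("Cannot add different currencies.");
        return new Money(amount + other.amount, currency);
    }

    // The rounding policy lives in one place instead of being scattered through the code.
    public Money RoundToCents()
    {
        return new Money(Math.Round(amount, 2, MidpointRounding.AwayFromZero), currency);
    }

    public override string ToString()
    {
        return Amount.ToString("0.00") + " " + Currency;
    }
}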
The 'decimal' type was designed especially for this.
A decimal data type would work well and is probably your choice.
However, in the past I've been able to do this in an optimized way using fixed-point integers. It's ideal for high-performance computations where decimal bogs down and you can't accept the small precision errors of float.
Start with, say, an Int32 and split it in half: the first half is the whole-number portion and the second half is the fractional portion. That gives you 16 bits of signed integer plus 16 bits of fractional precision. For example, 1.5 as a 16.16 fixed-point value would be represented as 0x00018000. Or alter the distribution of bits to suit your needs.
Fixed-point numbers can generally be added, subtracted, multiplied and divided like any other integers, with a little extra work around multiplication and division to avoid overflow.
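A rough sketch of that 16.16 layout (illustrative only; a real implementation needs overflow checks and a deliberate rounding strategy):
// 16.16 fixed point: the value is stored as an int, where value = raw / 65536.
const int One = 1 << 16;                      // 65536

static int ToFixed(double value) { return (int)Math.Round(value * One); }
static double ToDouble(int raw) { return raw / (double)One; }

static void Demo()
{
    int a = ToFixed(1.5);                     // 0x00018000
    int b = ToFixed(2.25);                    // 0x00024000

    int sum = a + b;                          // addition and subtraction work as plain ints
    int product = (int)(((long)a * b) >> 16); // widen to 64 bits, multiply, shift back

    Console.WriteLine(ToDouble(sum));         // 3.75
    Console.WriteLine(ToDouble(product));     // 3.375
}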
What you faced is a rounding problem, which I had mentioned earlier in another post
Can I use “System.Currency” in .NET?
And refer to this as well: Rounding
