Why does System.MidpointRounding.AwayFromZero not round up in this instance? - c#

In .NET, why does System.Math.Round(1.035, 2, MidpointRounding.AwayFromZero) yield 1.03 instead of 1.04? I feel like the answer to my question lies in the section labeled "Note to Callers" at http://msdn.microsoft.com/en-us/library/ef48waz8.aspx, but I'm unable to wrap my head around the explanation.

Your suspicion is exactly right. Numbers with a fractional portion, when written as literals in .NET, are doubles by default. A double (like a float) is an approximation of a decimal value, not an exact decimal value: it is the closest value that can be expressed in base-2 (binary). In this case, the approximation lands ever so slightly on the small side of 1.035. If you write the value as an explicit decimal, it works as you expect:
Console.WriteLine(Math.Round(1.035m, 2, MidpointRounding.AwayFromZero));
Console.ReadKey();
To understand why doubles and floats work the way they do, imagine representing the number 1/3 in decimal (binary suffers from the same problem, just for a different set of fractions). You can't: it comes out as 0.3333333..., meaning that representing it exactly would require an infinite amount of memory.
Computers get around this using approximations. I'd explain precisely how, but I'd probably get it wrong. You can read all about it here, though: http://en.wikipedia.org/wiki/IEEE_754-1985
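If you want to see the approximation directly, print the double with enough digits to expose it. A small sketch (the outputs in the comments are what a typical 64-bit .NET runtime produces):
Console.WriteLine(1.035.ToString("G17"));                                 // 1.0349999999999999
Console.WriteLine(Math.Round(1.035, 2, MidpointRounding.AwayFromZero));   // 1.03
Console.WriteLine(Math.Round(1.035m, 2, MidpointRounding.AwayFromZero));  // 1.04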

The binary representation of 1.035d is 0x3FF08F5C28F5C28F, which is in fact 1.03499999999999992006394222699E0, so System.Math.Round(1.035, 2, MidpointRounding.AwayFromZero) yields 1.03 instead of 1.04, and that is correct.
However, the binary representation of 4.005d is 0x4010051EB851EB85, which is 4.00499999999999989341858963598, so System.Math.Round(4.005, 2, MidpointRounding.AwayFromZero) should also yield 4.00, but it yields 4.01, which is wrong (or a clever 'fix'). If you check it in MS SQL with select ROUND(CAST(4.005 AS float), 2), you get 4.00.
I don't understand why .NET applies this 'smart fix', which makes things worse.
You can check the binary representation of a double at:
http://www.binaryconvert.com/convert_double.html
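If you'd rather inspect the bit pattern in code than on a website, BitConverter can show it; a small sketch:
long bits = BitConverter.DoubleToInt64Bits(1.035);
Console.WriteLine(bits.ToString("X16"));                                   // 3FF08F5C28F5C28F
Console.WriteLine(BitConverter.DoubleToInt64Bits(4.005).ToString("X16"));  // 4010051EB851EB85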

It's because the binary representation of 1.035 is closer to 1.03 than to 1.04.
For better results, do it this way:
decimal result = decimal.Round(1.035m, 2, MidpointRounding.AwayFromZero);

I believe the example you're referring to is a different issue; as far as I understand it, they're saying that 0.1 isn't stored in a float as exactly 0.1; it's actually slightly off because of how floats are stored in binary. Suppose it actually looks more like 0.0999999999999 (or similar), something very, very slightly less than 0.1, so slightly that it usually doesn't make much difference. One noticeable difference, though, is that adding this to your number and rounding can appear to go the wrong way: even though the numbers are extremely close, the stored value is still considered "less than" the .5 needed to round up.
If I misunderstood that page, I hope somebody corrects me :)
I don't see how it relates to your call, though, because you're being more explicit. Perhaps it's just storing your number in a similar fashion.

At a guess I'd say that, internally, 1.035 can't be represented in binary as exactly 1.035, and it's probably (under the hood) 1.0349999999999999, which would be why it rounds down.
Just a guess though.

Decimal rounding is OK, but it is a slow operation.
A faster workaround is to multiply the number by (1.0 + 1E-15) first; that does the trick and works like a charm for the MidpointRounding.AwayFromZero enum option.
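A sketch of that workaround (the helper name is mine; note that the nudge deliberately biases every value's magnitude upward by about one part in 10^15, so treat it as a trick rather than a general-purpose fix):
static double RoundAwayFromZero(double value, int digits)
{
    // Nudge the magnitude up slightly so values stored just below the true
    // midpoint (like 1.035 -> 1.03499999...) clear it before rounding.
    return Math.Round(value * (1.0 + 1e-15), digits, MidpointRounding.AwayFromZero);
}
With this, RoundAwayFromZero(1.035, 2) returns 1.04.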

Related

Rounding a Single

I have this "scientific application" in which a Single value should be rounded before presenting it in the UI. According to this MSDN article, due to "loss of precision" the Math.Round(Double, Int32) method will sometimes behave "unexpectedly", e.g. rounding 2.135 to 2.13 rather than 2.14.
As I understand it, this issue is not related to "banker's rounding" (see for example this question).
In the application, someone apparently chose to address this issue by explicitly converting the Single to a Decimal before rounding (i.e. Math.Round((Decimal)mySingle, 2)) to call the Math.Round(Decimal, Int32) overload instead. Aside from binary-to-decimal conversion issues possibly arising, this "solution" may also cause an OverflowException to be thrown if the Single value is too small or too large to fit the Decimal type.
Catching such errors to return the result from Math.Round(Double, Int32), should the conversion fail, does not strike me as the perfect solution. Nor does rewriting the application to use Decimal all the way.
Is there a more or less "correct" way to deal with this situation, and if so, what might it be?
I would argue that your existing solution (using the Decimal version of Math.Round) is the correct one.
The underlying problem is that you expect numbers to be rounded according to their base 10 representation, but you've stored them as base 2 floating point numbers. The provided example of 2.135 is one of those edge cases where the base 2 representation doesn't exactly match the base 10.
To get the expected rounding behavior, you must convert the numbers to base 10. The easiest way is exactly what you're already doing: temporarily convert the number to a Decimal long enough to call Math.Round.
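If the OverflowException is the main worry, a guarded version of that conversion might look like the sketch below (the helper name and the fallback behavior are my own choices, not a standard pattern):
static float RoundForDisplay(float value, int digits)
{
    // NaN and the infinities cannot be converted to decimal at all.
    if (float.IsNaN(value) || float.IsInfinity(value))
        return value;
    // Outside decimal's range, fall back to the double overload.
    if (value > (float)decimal.MaxValue || value < (float)decimal.MinValue)
        return (float)Math.Round((double)value, digits);
    // Within range, round via decimal to get base-10 behavior.
    return (float)Math.Round((decimal)value, digits);
}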
Since floating point trades precision for range, the decimal value 2.135 can't be exactly represented in binary.
The [closest] binary representation works out to be something like 0.1348876953125 decimal, so the rounding is correct (if not intuitively obvious).
You should read Goldberg's paper, "What every computer scientist should know about floating-point arithmetic" (ACM Computing Surveys, Volume 23 Issue 1, March 1991, pp. 5-48)
Abstract. Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.
I just looked at the documentation and there appears to be an enum you can pass into Math.Round(). If you change to Math.Round(Double, Int32, MidpointRounding.AwayFromZero) you should get the desired result.
https://msdn.microsoft.com/en-us/library/vstudio/ef48waz8(v=vs.100).aspx
Edit: I just tested with these numbers:
double abc = 2.335;
Console.WriteLine(Math.Round(abc, 2, System.MidpointRounding.AwayFromZero));
abc = 2.345;
Console.WriteLine(Math.Round(abc, 2, System.MidpointRounding.AwayFromZero));
abc = 2.335;
Console.WriteLine(Math.Round(abc, 2));
abc = 2.445;
Console.WriteLine(Math.Round(abc, 2));
and got these results.
2.34
2.35
2.34
2.44
Edit 2: I used the original numbers you gave, and it breaks. I thought that using AwayFromZero would stop the double from rounding down (I figured the issue applied only to banker's rounding), but it does not. If you really need that precision from your rounding, you'll have to create your own function, e.g. by converting to decimal or via some other method. I've been looking for a while and haven't found anything; I'll check back to see if you come up with a solution.
double abc = 2.135;
Console.WriteLine(Math.Round(abc, 2, System.MidpointRounding.AwayFromZero));
abc = 2.145;
Console.WriteLine(Math.Round(abc, 2, System.MidpointRounding.AwayFromZero));
abc = 2.135;
Console.WriteLine(Math.Round(abc, 2));
abc = 2.145;
Console.WriteLine(Math.Round(abc, 2));
2.13
2.15
2.13
2.14

Value of a double variable not exact after multiplying with 100 [duplicate]

This question already has answers here: Is floating point math broken? (31 answers)
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary, as they have infinitely recurring decimal places, but this is not the case for 0.69. And 0.69 can easily be represented in binary: one binary number for 69 and another to denote the position of the decimal place.
How do I work around this? Use the decimal type?
Because you've misunderstood floating-point arithmetic and how the data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case; the compiler will have done it and saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in floating-point format, just as 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case, it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place.
I think this is a common mistake: you're thinking of floating-point numbers as if they were base-10 (i.e. decimal, hence my emphasis).
So - you're thinking that there are two whole-number parts to this double: 69 and divide by 100 to get the decimal place to move - which could also be expressed as:
69 x 10 to the power of -2.
However floats store the 'position of the point' as base-2.
Your float actually gets stored as:
68999999999999995 x 2 to the power of some big negative number
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is to not output the full value of the representation, and to explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However, if this is a financial application, decimal is precisely what you should use; you've seen Superman III (and Office Space), haven't you? ;)
Note that it is all a finite approximation of an infinite range, it is merely that decimal and double use a different set of approximations. The advantage of decimal is it produces the same approximations that you would if you were performing the calculation yourself. For example if you calculated 1/3, you would eventually stop writing 3's when it was 'good enough'.
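To make the contrast concrete, here is what the two types do with the exact calculation from the question (a sketch; the m suffix selects decimal, and the double output shown is what recent .NET runtimes print; older ones display 6.9):
Console.WriteLine(10 * 0.69);   // 6.8999999999999995 (binary floating point)
Console.WriteLine(10 * 0.69m);  // 6.90               (decimal floating point)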
For the same reason that 1/3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not as the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.

c# Subtract is not accurate even with decimals?

I'm learning TDD and decided to create a Calculator class to start.
I did the basics first, and now I'm on the square root function.
I'm using this method to get the root: http://www.math.com/school/subject1/lessons/S1U1L9DP.html
I tested it with a few numbers, and I always get accurate answers.
It's pretty easy to understand.
Now I'm having a weird problem: with some numbers I get the right answer, and with some I don't.
I debugged the code and found out that I'm not getting the right answer when I subtract.
I'm using decimals to get the most accurate result.
when I do:
18 / 4.25
I am currently getting: 4.2352941176470588235294117647
when it should be: 4.2352941176470588235294117647059 (using windows calculator)
At the end of the road, this is the closest I get to the root of 18:
4.2426406871192851464050688705 ^ 2 = 18.000000000000000000000022892
My question is:
Can I get more accurate than this?
4.2352941176470588235294117647 contains 29 digits.
decimal is defined to have 28-29 significant digits. You can't store a more accurate number in a decimal.
What field of engineering or science are you working in where the 30th and more digits are significant to the accuracy of the overall calculation?
(It would also, possibly, help if you'd shown some more actual code. The only code you've shown is 18 / 4.25, which can't be an actual expression in your code, since the second number is a double literal, and you can't assign the result of this expression to a decimal without a cast).
If you need arbitrary precision, there isn't a standard "BigRational" type, but there is a BigInteger. You could use that to construct a BigRational type if you need one (storing the numerator and denominator as two separate integers); a sketch follows below. One guess as to why there isn't a standard type yet is that decisions about when to normalize such rationals, for example, may affect performance or equality comparisons.
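A minimal sketch of such a type with eager normalization (all names are hypothetical; this is not a standard library API):
using System;
using System.Numerics;

public readonly struct BigRational
{
    public BigInteger Numerator { get; }
    public BigInteger Denominator { get; }

    public BigRational(BigInteger numerator, BigInteger denominator)
    {
        if (denominator.IsZero)
            throw new DivideByZeroException();
        // Normalize eagerly: divide out the GCD and keep the denominator
        // positive. This is the design decision mentioned above -- it costs
        // time on every construction but makes equality comparisons trivial.
        BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
        if (denominator.Sign < 0)
            gcd = -gcd;
        Numerator = numerator / gcd;
        Denominator = denominator / gcd;
    }

    public static BigRational operator *(BigRational a, BigRational b)
        => new BigRational(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    public static BigRational operator /(BigRational a, BigRational b)
        => new BigRational(a.Numerator * b.Denominator, a.Denominator * b.Numerator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}
With this, 18 / 4.25 can be held exactly: new BigRational(18, 1) divided by new BigRational(425, 100) normalizes to 72/17, with no digit limit.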
Floating-point calculations are not exact. Decimals improve the accuracy, because they are 128 bits long, but they are still floating-point numbers.
Comparing two floating-point numbers is therefore not done with ==, but rather:
static bool SameDecimal(decimal a, decimal b)
{
    return Math.Abs(a - b) < 1e-10m;
}
This method will allow you to compare two decimals (I assume 1e-10 is a small enough difference for you; it should be for everyday uses).

Why is the division result between two integers truncated?

All experienced programmers in C# (I think this comes from C) are used to casting one of the integers in a division to get the decimal / double / float result instead of the int (the real result, truncated).
I'd like to know why it is implemented like this. Is there ANY good reason to truncate the result if both numbers are integers?
C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"
The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.
(So my question would be, "why doesn't integer division in C-like languages return two integers", not "why doesn't it return a floating point value?")
The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers mentions Visual Basic, which (at least through version 6) has two operators for dividing integers: / converts the integers to double and returns a double, while \ performs normal integer arithmetic.
I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
From that experience, I draw the conclusion that having different operators for dividing integers is confusing.
Another alternative would be to have only one division operation, which always returns a double and requires programmers to truncate it. This means you'd have to perform two int->double conversions, a truncation, and a double->int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.
Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.
In conclusion:
Integer division is frequently useful in many standard algorithms.
When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b
Other alternatives introduce more complication, both for the programmer and in clock cycles for the processor.
Is there ANY good reason to truncate the result if both numbers are integer?
Of course; I can think of a dozen such scenarios easily. For example: you have a large image, and a thumbnail version of the image which is 10 times smaller in both dimensions. When the user clicks on a point in the large image, you wish to identify the corresponding pixel in the scaled-down image. Clearly to do so, you divide both the x and y coordinates by 10. Why would you want to get a result in decimal? The corresponding coordinates are going to be integer coordinates in the thumbnail bitmap.
Doubles are great for physics calculations and decimals are great for financial calculations, but almost all the work I do with computers that does any math at all does it entirely in integers. I don't want to be constantly having to convert doubles or decimals back to integers just because I did some division. If you are solving physics or financial problems then why are you using integers in the first place? Use nothing but doubles or decimals. Use integers to solve finite mathematics problems.
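For instance, the thumbnail lookup above is a one-liner with integer division (coordinates invented for illustration):
int clickX = 457, clickY = 183;  // pixel the user clicked in the large image
int thumbX = clickX / 10;        // 45 -- truncation is exactly what we want
int thumbY = clickY / 10;        // 18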
Calculating on integers is (usually) faster than on floating-point values. Besides, all the other integer/integer operations (+, -, *) return an integer.
EDIT:
As per the request of the OP, here's an addition:
The OP's problem is that they think of / as division in the mathematical sense, while the / operator in the language performs a different operation (which is not mathematical division). By this logic they should question the validity of all the other operations (+, -, *) as well, since those have special overflow rules, which are not the same as would be expected from their math counterparts. If this is bothersome for someone, they should find another language where the operations behave as that person expects.
As for the claim about the performance difference in favor of integer values: when I wrote the answer I only had "folk" knowledge and intuition to back up the claim (hence my "usually" disclaimer). Indeed, as Gabe pointed out, there are platforms where this does not hold. On the other hand, I found this link (point 12) that shows mixed performance on an Intel platform (the language used is Java, though).
The takeaway should be that with performance, many claims and intuitions are unsubstantiated until measured and found true.
Yes, if the end result needs to be a whole number. It would depend on the requirements.
If these are indeed your requirements, then you would not want to store a decimal and then truncate it. You would be wasting memory and processing time to accomplish something that is already built-in functionality.
The operator is designed to return the same type as its input.
Edit (comment response):
Why? I don't design languages, but I would assume that most of the time you will be sticking with the data types you started with, and in the remaining cases, what criteria would you use to automatically decide which type the user wants? Would you automatically expect a string when you need one? (sincerity intended)
If you add an int to an int, you expect to get an int. If you subtract an int from an int, you expect to get an int. If you multiple an int by an int, you expect to get an int. So why would you not expect an int result if you divide an int by an int? And if you expect an int, then you will have to truncate.
If you don't want that, then you need to cast your ints to something else first.
Edit: I'd also note that if you really want to understand why this is, then you should start looking into how binary math works and how it is implemented in an electronic circuit. It's certainly not necessary to understand it in detail, but having a quick overview of it would really help you understand how the low-level details of the hardware filter through to the details of high-level languages.

How to deal with the fact that most decimal fractions cannot be accurately represented in binary?

So, we know that fractions such as 0.1 cannot be accurately represented in binary, which causes precision problems (such as those mentioned here: Formatting doubles for output in C#).
And we know we have the decimal type for a decimal representation of numbers... but the problem is, a lot of Math methods do not support the decimal type, so we have to convert them to double, which ruins the number again.
So what should we do?
Oh, what should we do about the fact that most decimal fractions cannot be represented in binary? Or, for that matter, about the fact that binary fractions cannot be represented in decimal?
Or, even, that an infinity (in fact, an uncountable infinity) of real numbers in all bases cannot be accurately represented in any computerized system?
Nothing! To recall an old cliché, you can get close enough for government work... in fact, you can get close enough for any work. There is no limit to the degree of accuracy the computer can generate; it just cannot be infinite (which is what would be required for a number-representation scheme to be able to represent every possible real number).
You see, for every number-representation scheme you can design, in any computer, only a finite number of distinct real numbers can be represented with 100% accuracy. And between each adjacent pair of those numbers (the ones that can be represented with 100% accuracy), there will always be an infinity of other numbers that it cannot represent with 100% accuracy.
so what should we do?
We just keep on breathing. It really isn't a structural problem. We have limited precision, but usually more than enough. You just have to remember to format/round when presenting the numbers.
The problem in the following snippet is with the WriteLine(), not in the calculation(s):
double x = 6.9 - 10 * 0.69;
Console.WriteLine("x = {0}", x);
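Formatting the output repairs the display (a sketch continuing the snippet above; the unformatted line prints something like 8.881784197001252E-16 on recent runtimes, while older ones show a similarly tiny value):
Console.WriteLine("x = {0:F2}", x);  // x = 0.00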
If you have a specific problem, then post it. There usually are ways to prevent loss of precision. If you really need >= 30 decimal digits, you need a special library.
Keep in mind that the precision you need, and the rounding rules required, will depend on your problem domain.
If you are writing software to control a nuclear reactor, or to model the first billionth of a second of the universe after the big bang (my friend actually did that), you will need much higher precision than if you are calculating sales tax (something I do for a living).
In the finance world, for example, there will be specific requirements on precision either implicitly or explicitly. Some US taxing jurisdictions specify tax rates to 5 digits after the decimal place. Your rounding scheme needs to allow for that much precision. When much of Western Europe converted to the Euro, there was a very specific approach to rounding that was written into law. During that transition period, it was essential to round exactly as required.
Know the rules of your domain, and test that your rounding scheme satisfies those rules.
I think everyone is implying the same thing: inverting a sparse matrix? "There's an app for that", etc., etc.
Numerical computation is one well-flogged horse. If you have a problem, it was probably put to pasture before 1970, or even much earlier, and carried forward library by library or snippet by snippet into the future.
You could shift the decimal point so that the numbers are whole, then do 64-bit integer arithmetic, then shift it back afterwards. Then you would only have to worry about overflow.
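A sketch of that scaled-integer idea, using a made-up fixed-point scale of 1000 (values stored in thousandths); note that you still have to make sure the intermediate products fit in a long:
long price = 12345;                 // represents 12.345
long taxed = price * 108 / 100;     // add 8%: 13332 (13332.6, truncated)
Console.WriteLine(taxed / 1000.0);  // shift the point back for display: 13.332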
And we know we have the decimal type for a decimal representation of numbers... but the problem is, a lot of Math methods do not support the decimal type, so we have to convert them to double, which ruins the number again.
Several of the Math methods do support decimal: Abs, Ceiling, Floor, Max, Min, Round, Sign, and Truncate. What these functions have in common is that they return exact results. This is consistent with the purpose of decimal: To do exact arithmetic with base-10 numbers.
The trig and Exp/Log/Pow functions return approximate answers, so what would be the point of having overloads for an "exact" arithmetic type?
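For the exact methods listed above, the decimal overloads behave just as you would compute by hand; a few examples:
Console.WriteLine(Math.Floor(2.5m));      // 2
Console.WriteLine(Math.Round(2.5m));      // 2 (banker's rounding: ties go to even)
Console.WriteLine(Math.Round(3.5m));      // 4
Console.WriteLine(Math.Truncate(-2.7m));  // -2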
