I am experiencing a division significant-digit problem, but I am not sure of the best way to fix it.
In C#, if I do:
Math.Round(36 / 4.8)
I get 8 as expected, but if I do:
Math.Round(36 / 4.8f)
I get 7 instead of 8 like I expected.
Looks like some sort of significant-digit issue, since 36 / 4.8 = 7.5 (a double) but 36 / 4.8f = 7.4999995 (a single), but I can't seem to figure out why. Is there a way to keep using float but still ensure the result gets rounded to 8 correctly?
One way I was able to solve this issue is by rounding twice, with the first round specifying the number of digits to round to, then rounding that result again:
Math.Round(Math.Round(36/4.8f, 5))
But I'd rather not do this kind of gymnastics if I don't have to. Is there a better way to solve this issue, or am I doing something wrong?
Thanks in advance
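For reference, a minimal snippet reproducing the behaviour described above (the rounded results in the comments are the ones reported in the question):

double d = 36 / 4.8;    // double division: exactly 7.5
float  f = 36 / 4.8f;   // float division: 7.4999995 due to single precision

Console.WriteLine(Math.Round(d));                 // 8
Console.WriteLine(Math.Round(f));                 // 7
Console.WriteLine(Math.Round(Math.Round(f, 5)));  // 8 - the two-step workaround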
Related
I use decimal variables in C#.
I want to get Math.Round(0.004449m, 2) = 0.01.
I mean start rounding from the last digit, i.e. the 9 in this example. Since Math.Round(0.004449m, 5) = 0.00445, treat 0.004449m as 0.00445 in the first step.
Keep rounding: treat 0.00445m as 0.0045 (rounding the trailing 5 up), and so on.
Finally, 0.004449m rounded to 2 digits this way comes out as 0.01.
I know this approach is mathematically wrong but there is also a logic behind it so my company decided to round numbers like this.
Is there this kind of function in .NET?
Although my question sounds trivial, it really is NOT. Hope you can help me.
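As far as I know there is no built-in .NET method that rounds this way, but a minimal sketch of the cascading round described above might look like this (CascadeRound is a hypothetical helper name; MidpointRounding.AwayFromZero is assumed so that each trailing 5 rounds up):

// Hypothetical helper, not a built-in .NET API: rounds away one digit at a time.
static decimal CascadeRound(decimal value, int finalDigits, int startDigits)
{
    for (int digits = startDigits; digits >= finalDigits; digits--)
        value = Math.Round(value, digits, MidpointRounding.AwayFromZero);
    return value;
}

// CascadeRound(0.004449m, 2, 5): 0.004449 -> 0.00445 -> 0.0045 -> 0.005 -> 0.01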
I want to implement interval arithmetic in my .NET (C#) project. This means that every number is defined by a lower bound and an upper bound. This is helpful for problems like
1 / 3 = 0.333333333333333 (15 significant digits)
since you would then have
1 / 3 = [ 0.33333333333333 , 0.333333333333334 ] (14 significant digits each)
, so I know FOR SURE that the right answer lies between those two numbers. Without the interval representation I would already be carrying a rounding error (e.g. 0.0000000000000003).
To achieve this I wrote my own Interval type that overloads all standard operators like +-*/, etc. To make this type work correctly I need to be able to round the result of 1 / 3 in two directions. Rounding the result down will give me the lower bound for my interval, rounding the result up will give me the upper bound for my interval.
.NET has the Math.Round(double, int) method which rounds the double to int decimal places. Looks great, but it can't be forced to round up or down: Math.Round(1.0/3.0, 14) would round down, but the up-rounding to 0.33...34 that I also need can't be achieved like this.
But there are Math.Ceiling and Math.Floor, you might say! Okay, those methods round to the next lower or upper integer, so if I want to round to 14 decimal places I first need to rescale my result:
1 / 3 = 0.333333333333333 -> * 1E14 -> 33333333333333.3
Now I can call Math.Ceiling and Math.Floor and get both rounded results after scaling back:
33333333333333 & 33333333333334 -> / 1E14 -> 0.33333333333333 & 0.33333333333334
Looks great, but: let's say my number is near double.MaxValue. I can't just multiply a value near double.MaxValue by 1E14, since that will overflow. So this is no solution either.
And to top it all off: all of this fails even harder when trying to round 0.9999999999999999999999999 (more than 15 digits), since the internal representation is already rounded to 1 before I can even start trying to round down.
I could try to somehow parse a string containing the double, but this won't help, since (1.0 / 3.0 * 3).ToString() will already print 1 instead of 0.99...9.
Decimal does not work either, since I don't want that deep precision (14 digits are enough), but I still want the double range!
In C++, where several interval arithmetic implementations exist, this problem could be solved by dynamically telling the processor to switch its rounding mode to, for example, "always down" or "always up". I couldn't find any way to do this in .NET.
So, do you have any ideas?
Thanks in advance!
Assume nextDown(x) is a function that returns the largest double that is less than x, and nextUp(x) is a function that returns the smallest double that is greater than x. See Get next smallest Double number for implementation ideas.
Where you would have rounded a lower bound result down, instead use the nextDown of the round-to-nearest result. Where you would have rounded an upper bound up, use the nextUp of the round-to-nearest result.
This method ensures the interval continues to contain the exact real number result. It introduces extra rounding error - in some cases the lower bound will be one ULP smaller than it should be, and/or the upper bound will be one ULP bigger. However, it is a minimal widening of the interval, much less widening than you would get working in decimal or by suppressing low significance bits.
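A minimal sketch of such functions, assuming finite, non-NaN inputs (on .NET Core 3.0 or later, Math.BitIncrement and Math.BitDecrement provide this behaviour out of the box):

static double NextUp(double x)
{
    long bits = BitConverter.DoubleToInt64Bits(x);
    if (x > 0)       bits++;                 // move the bit pattern one step away from zero
    else if (x < 0)  bits--;                 // for negatives, one step toward zero is one step up
    else             return double.Epsilon;  // the smallest double greater than zero
    return BitConverter.Int64BitsToDouble(bits);
}

static double NextDown(double x) => -NextUp(-x);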
This might be more like a long comment than a real answer.
This code returns an "interval" (I just use Tuple<,>, you can use your own Interval type) based on truncating the seven least significant bits:
static Tuple<double, double> GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(double number)
{
    if (double.IsInfinity(number) || double.IsNaN(number))
        return Tuple.Create(number, number); // maybe treat this case differently

    var i = BitConverter.DoubleToInt64Bits(number);

    const int numberOfBitsToClear = 7; // your seven, can change this value, must be below 52
    const long precision = 1L << numberOfBitsToClear;
    const long bitMask = ~(precision - 1L);

    // truncate i
    i &= bitMask;

    return Tuple.Create(BitConverter.Int64BitsToDouble(i), BitConverter.Int64BitsToDouble(i + precision));
}
Disclaimer: I am not sure if this is useful for any purpose. In particular not sure it is useful for interval arithmetic.
With this code, GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(1.0 / 3.0) returns the tuple (0.333333333333329, 0.333333333333336).
This code, just like the code you ask for in your question, has the obvious "issue" that if the original value is close to (or even equal to) one of the "round" numbers we use, then the returned interval is "skewed", with the original number being close to one of the ends of the interval. For example, with input 42.0 (already round), you get out the tuple (42, 42.0000000000009).
One good thing about this code is I expect it to be extremely fast.
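For reference, a small usage sketch of the method above (the printed interval is the one quoted earlier, give or take formatting):

var interval = GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(1.0 / 3.0);
Console.WriteLine(interval); // approximately (0.333333333333329, 0.333333333333336)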
If I execute the following expression in C#:
double i = 10*0.69;
i is: 6.8999999999999995. Why?
I understand numbers such as 1/3 can be hard to represent in binary, as they have infinitely recurring decimal places, but this is not the case for 0.69. And 0.69 can easily be represented in binary: one binary number for 69 and another to denote the position of the decimal point.
How do I work around this? Use the decimal type?
Because you've misunderstood floating point arithmetic and how data is stored.
In fact, your code isn't actually performing any arithmetic at execution time in this particular case - the compiler will have done it, then saved a constant in the generated executable. However, it can't store an exact value of 6.9, because that value cannot be precisely represented in floating point format, just like 1/3 can't be precisely stored in a finite decimal representation.
See if this article helps you.
"why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!"
Stop behaving like a Dilbert manager, and accept that computers, though cool and awesome, have limits. In your specific case it doesn't just "hide" the problem, because you have specifically told it not to. The language (the computer) provides alternatives to the format that you didn't choose. You chose double, which has certain advantages over decimal, and certain downsides. Now, knowing the answer, you're upset that the downsides don't magically disappear.
As a programmer, you are responsible for hiding this downside from managers, and there are many ways to do that. However, the makers of C# have a responsibility to make floating point work correctly, and correct floating point will occasionally result in incorrect math.
So will every other number storage method, as we do not have infinite bits. Our job as programmers is to work with limited resources to make cool things happen. They got you 90% of the way there, just get the torch home.
"And 0.69 can easily be represented in binary, one binary number for 69 and another to denote the position of the decimal place."
I think this is a common mistake - you're thinking of floating point numbers as if they are base-10 (i.e. decimal - hence my emphasis).
So you're thinking that there are two whole-number parts to this double: 69, and a divide-by-100 to move the decimal place, which could also be expressed as:
69 x 10 to the power of -2.
However, floats store the 'position of the point' in base-2.
Your double actually gets stored as:
(a binary whole number) x 2 to the power of some negative exponent, and the closest such value to 6.9 works out to 6.8999999999999995...
This isn't as much of a problem once you're used to it - most people know and expect that 1/3 can't be expressed accurately as a decimal or percentage. It's just that the fractions that can't be expressed in base-2 are different.
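As a quick check on the representation point above, printing with the 17-significant-digit "G17" format reveals the values that are actually stored:

// 0.69 itself cannot be stored exactly; "G17" prints enough digits to show the stored double.
Console.WriteLine(0.69.ToString("G17"));        // 0.68999999999999995
Console.WriteLine((10 * 0.69).ToString("G17")); // 6.8999999999999995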
"but why doesn't the framework work around this and hide this problem from me and give me the right answer, 0.69!!!"
Because you told it to use binary floating point, and the solution is to use decimal floating point, so you are suggesting that the framework should disregard the type you specified and use decimal instead, which is very much slower because it is not directly implemented in hardware.
A more efficient solution is not to output the full value of the representation, but to explicitly specify the accuracy required by your output. If you format the output to two decimal places, you will see the result you expect. However, if this is a financial application, decimal is precisely what you should use - you've seen Superman III (and Office Space), haven't you? ;)
Note that it is all a finite approximation of an infinite range; it is merely that decimal and double use different sets of approximations. The advantage of decimal is that it produces the same approximations you would if you were performing the calculation yourself. For example, if you calculated 1/3, you would eventually stop writing 3's when it was "good enough".
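To illustrate the two options above, a small sketch:

double d = 10 * 0.69;
Console.WriteLine(d.ToString("F2")); // 6.90 - output formatted to two decimal places

decimal m = 10 * 0.69m;              // the decimal literal 0.69m is stored exactly
Console.WriteLine(m);                // 6.90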
For the same reason that 1 / 3 in a decimal system comes out as 0.3333333333333333333333333333333333333333333 and not the exact fraction, which is infinitely long.
To work around it (e.g. to display on screen) try this:
double i = (double) Decimal.Multiply(10, (Decimal) 0.69);
Everyone seems to have answered your first question, but ignored the second part.
When I round off a small negative number it is rounded off to 0.
E.g: decimal.Round(-0.001M, 2) returns 0.
How can I get the sign if it's rounded off to zero? Is there any better way than to check n < 0 and then do the round-off?
Comparing the bits works for decimal also. Thanks to @JonSkeet, otherwise I'd never have known this trick.
var d = decimal.Round(-0.001M, 2);
// decimal.GetBits(d) returns four ints; the last one carries the scale and the sign bit,
// so it is negative exactly when the decimal is negative (Last() requires using System.Linq).
bool isNegativeZero = d == decimal.Zero && decimal.GetBits(d).Last() < 0;
Here is the Demo
Is there any other better way than to check n<0 then do the round off?
The simple answer is "no". That is the most straightforward way of doing it. Unless you have a good reason to write code any more complicated than that (that you haven't mentioned in the question), don't do it. You (or another developer) will eventually come back to this code after days or months and wonder why the code was written that way.
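A minimal sketch of that straightforward approach (RoundWithSign is a hypothetical helper name; it simply captures the sign before rounding):

static (decimal Rounded, int Sign) RoundWithSign(decimal value, int decimals)
{
    // Capture the sign first so it is not lost when the rounded result collapses to zero.
    int sign = Math.Sign(value);
    return (Math.Round(value, decimals), sign);
}

// RoundWithSign(-0.001m, 2) returns (0.00, -1)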
This is really annoying. I have two hex numbers, and I am 90% sure that one of them is exactly 2 higher than the other. However, when I type them into an online hex-to-decimal calculator they come out the same. How can this be?
The lower number:
0x00010471000001BF001F = 18766781122258862000
The higher number:
0x00010471000001BF0021 = 18766781122258862000
What is going on?
The calc I used is...
http://www.rapidtables.com/convert/number/hex-to-decimal.htm
The higher number is 2 higher, not 1; 0x00010471000001BF0020 is in between. I think your problem is related to an overflow issue, because the numbers are very large. Probably the calculator you are using converts the values to floating point, which loses accuracy.
The values you are posting need at least 9 bytes (or, more precisely, at least 65 bits) to represent.
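A small sketch illustrating that explanation: converting both values to double (which has only 53 bits of mantissa) makes them indistinguishable, which would explain the calculator's output.

// Requires a reference to System.Numerics (and using System.Globalization for NumberStyles).
BigInteger low  = BigInteger.Parse("00010471000001BF001F", NumberStyles.HexNumber);
BigInteger high = BigInteger.Parse("00010471000001BF0021", NumberStyles.HexNumber);

Console.WriteLine(high - low);                   // 2
Console.WriteLine((double)low == (double)high);  // True: the low bits are rounded away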
First, basic knowledge of hex should tell you that 0x20 is between 0x1F and 0x21, so the higher number is the lower number + 2.
Second, if you use an unknown tool, you have to be sure it's reliable. Your tool obviously can't handle such large numbers.
Wolfram Alpha gives you the correct answers :
http://www.wolframalpha.com/input/?i=0x00010471000001BF001F+in+decimal
http://www.wolframalpha.com/input/?i=0x00010471000001BF0021+in+decimal
First things first, why did you classify this question under the C# tag?
The problem is most likely caused by the values being too big; the converter doesn't work well with big numbers.
Just because this is tagged with C#.
Add a reference to .NET component System.Numerics.
To convert from large hex to integer use BigInteger.
System.Numerics.BigInteger a;

System.Numerics.BigInteger.TryParse("00010471000001BF001F",
    System.Globalization.NumberStyles.HexNumber, null, out a);
Console.WriteLine(a.ToString());

System.Numerics.BigInteger.TryParse("00010471000001BF0021",
    System.Globalization.NumberStyles.HexNumber, null, out a);
Console.WriteLine(a.ToString());
output
18766781122258862111
18766781122258862113