Here is my test code for some maths I am doing:
Why is C# treating this differently?
EXCEL
Numerator = =-0.161361101510599*10000000
Denominator = =(-1*(100-81.26)) * (10000000/100)
This is most likely due to how the numbers are represented in Excel vs. C#. Rounding errors/differences are common when doing arithmetic to a high degree of accuracy on different platforms or with different software.
EDIT: It could of course be due to different numbers being fed in in the first place. Go me looking for the complex answer!
Programmers are notorious for missing the big picture and overlooking the obvious (well, I am...). Case in point!
Following a comment of yours on another answer, you say you are obtaining the result 0.8610517689999946638207043757m. If you round it like so:
Math.Round(0.8610517689999946638207043757m, 12);
It will output: 0,861051769000
Your problem is that you are feeding Excel different initial values than your C# code. How do you expect them to be the same?
IOW: 0.161361101510599 != 0.161361102
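You can see that this alone accounts for the discrepancy with a quick sketch (the denominator is taken from your Excel formula above; the exact printed digits may vary slightly):

double excelInput  = -0.161361101510599;  // the value Excel is using
double csharpInput = -0.161361102;        // the value your C# code appears to be using

double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);

Console.WriteLine("{0:R}", excelInput  * 10000000.0 / denominator); // ~0.861051769...
Console.WriteLine("{0:R}", csharpInput * 10000000.0 / denominator); // ~0.86105177...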
Not many of us who work on numeric computing trust Excel to add two one-digit numbers correctly. I ran your calculation in Mathematica, giving each fractional number 64 digits by extending it to the right with 0s. This is the result:
0.861051771611526147278548559231590181430096051227321237993596585
In this case go with C# rather than Excel. And, on 2nd and 3rd thoughts, in every case go with C# rather than with Excel, whose inadequacies for numeric computing are widely known and well documented.
Excel stores 15 significant digits of precision. Read the article "Why does Excel Give Me Seemingly Wrong Answers?".
On another note: you should probably simplify your arithmetic a bit to reduce the number of operations. Generally (but not always), fewer operations give a better result in floating-point arithmetic.
val = noiseTerm / (81.26/100 - 1)
is mathematically equivalent to your equation and contains 3 operations as opposed to your 8. In particular, the scalingFactor divides out completely, so it is not necessary at all.
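A minimal C# sketch of that simplification (noiseTerm and scalingFactor are the names used in the simplified formula above; the long form is reconstructed from the Excel numerator/denominator):

static void CalcSimplified()
{
    double noiseTerm = -0.161361101510599; // value from the Excel numerator above
    double scalingFactor = 10000000.0;     // divides out completely

    double longForm = (noiseTerm * scalingFactor)
                    / ((-1.0 * (100.0 - 81.26)) * (scalingFactor / 100.0));
    double shortForm = noiseTerm / (81.26 / 100.0 - 1.0);

    // Both print a value close to 0.861051769; the last digits may differ slightly
    // because fewer operations means fewer rounding steps.
    Console.WriteLine("long form:  {0:R}", longForm);
    Console.WriteLine("short form: {0:R}", shortForm);
}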
First of all, in most cases Excel uses double precision floating point arithmetic for basic operations just as C# does.
As for your specific case, your C# code does not match your Excel formulas. Try this C# code - which uses your Excel formulas:
static void Calc()
{
    double numerator = -0.161361101510599 * 10000000.0;
    double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
    double result = numerator / denominator;
    Console.WriteLine("result={0}", result);
}
Run this and note that the output is 0.861051768999995.
Now, format the result in Excel with the custom number format "0.00000000000000000" and you will see that Excel is giving you the same result as C#. By default, Excel uses the "General" format which rounds this number to ~12 significant digits of precision. By changing to the format above, you force Excel to show 15 digits of precision - which is the maximum number of significant digits Excel will use to display a number (internally, they have 15+ digits of precision just as the C# double type does).
You can force C# to display 15+ significant digits (instead of rounding to 15 significant digits) by running the following code:
static void Calc()
{
    double numerator = -0.161361101510599 * 10000000.0;
    double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
    double result = numerator / denominator;
    Console.WriteLine("result={0:R}", result);
}
This code will output 0.8610517689999948... but there is no way, AFAIK, to get Excel to display more than 15 digits.
Related
A colleague has written some code along these lines:
var roundedNumber = (float) Math.Round(someFloat, 2);
Console.WriteLine(roundedNumber);
I have an uncertainty about this code - is the number that gets written here even guaranteed to have 2 decimal places any more? It seems plausible to me that truncation of the double Math.Round(someFloat, 2) to float might result in a number whose string representation has more than 2 digits. Can anybody either provide an example of this (demonstrating that such a cast is unsafe) or else demonstrate somehow that it is safe to perform such a cast?
Assuming single and double precision IEEE754 representation and rules, I have checked for the first 2^24 integers i that
float(double( i/100 )) = float(i/100)
in other words, converting a decimal value with 2 decimal places twice (first to the nearest double, then to the nearest single precision float) is the same as converting the decimal directly to single precision, as long as the integer part of the decimal is not too large.
I have no guarantee for larger values.
The double approximation and the single approximation are different, but that's not really the question.
Converting twice is innocuous up to at least 167772.16; it's the same as if Math.Round had done it directly in single precision.
Here is the testing code in Squeak/Pharo Smalltalk with the ArbitraryPrecisionFloat package (sorry for not exhibiting it in C#, but the language does not really matter; only the IEEE rules do - a rough C# equivalent follows below).
(1 to: 1<<24)
detect: [:i |
(i/100.0 asArbitraryPrecisionFloatNumBits: 24) ~= (i/100 asArbitraryPrecisionFloatNumBits: 24) ]
ifNone: [nil].
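Here is a rough C# equivalent of the same brute-force check (a sketch, not the original test; it relies on the fact that for i up to 2^24 both i and 100 are exactly representable as floats, so (float)i / 100f is the directly-rounded single-precision value):

static void CheckDoubleRounding()
{
    for (int i = 1; i <= (1 << 24); i++)
    {
        float viaDouble = (float)((double)i / 100.0); // round to double, then to float
        float direct = (float)i / 100f;               // round straight to float
        if (viaDouble != direct)
        {
            Console.WriteLine("Counterexample found: i = " + i);
            return;
        }
    }
    Console.WriteLine("No counterexample up to 2^24.");
}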
EDIT
The above test was superfluous because, thanks to the excellent reference provided by Mark Dickinson (Innocuous double rounding of basic arithmetic operations), we know that doing float(double(x) / double(y)) produces a correctly-rounded value for x / y, as long as x and y are both representable as floats, which is the case for any 0 <= x <= 2^24 and for y = 100.
EDIT
I have checked with numerators up to 2^30 (decimal values above 10 million), and converting twice is still identical to converting once. Going further with an interpreted language is not good w.r.t. global warming...
I'm using the decimal data type throughout my project in the belief that it would give me the most accurate results. However, I've come across a situation where rounding errors are creeping in and it appears I'd be better off using doubles.
I have this calculation:
decimal tempDecimal = 15M / 78M;
decimal resultDecimal = tempDecimal * 13M;
Here resultDecimal is 2.4999999999999999 when the correct answer for 13*15/78 is 2.5. It seems this is because tempDecimal (the result of 15/78) is a recurring decimal value.
I am subsequently rounding this result to zero decimal places (away from zero) which I was expecting to be 3 in this case but actually becomes 2.
If I use doubles instead:
double tempDouble = 15D / 78D;
double resultDouble = tempDouble * 13D;
Then I get 2.5 in resultDouble which is the answer I'm looking for.
From this example it feels like I'm better off using doubles or floats even though they are lower precision. I'm assuming I manage to get the incorrect result of 2.4999999999999999 simply because a decimal can store a result to that number of decimal places whereas the double rounds it off.
Should I use doubles instead?
EDIT: This calculation is being used in financial software to decide how many contracts are allocated to different portfolios so deciding between 2 or 3 is important. I am more concerned with the correct calculation than with speed.
Strange thing is, if you reorder it so the multiplication happens before the division (15M * 13M / 78M), it comes out as exactly 2.5, because the intermediate result 195 is exact.
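A quick sketch of the difference (the trailing digits of the first value are illustrative):

decimal stepwise  = 15M / 78M * 13M;  // division first: just under 2.5 (2.4999...)
decimal reordered = 15M * 13M / 78M;  // multiplication first: 195 / 78 = 2.5 exactly

Console.WriteLine(Math.Round(stepwise, 0, MidpointRounding.AwayFromZero));  // 2
Console.WriteLine(Math.Round(reordered, 0, MidpointRounding.AwayFromZero)); // 3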
If precision is crucial (financial calculations) you should definitely use decimal. You can print a rounded decimal using Math.Round(myDecimalValue, digitsAfterDecimalPoint) or String.Format("{0:0.00}", myDecimalValue), but do the calculations with the exact number. Otherwise double will be just fine.
The following test will fail in C#
Assert.AreEqual<double>(10.0d, 16.1d - 6.1d);
The problem appears to be a floating point error.
16.1d - 6.1d == 10.000000000000002
This is causing me headaches in writing unit tests for code that uses double. Is there a way to fix this?
There is no exact conversion between the decimal system and the binary representation of a double (see excellent comment by #PatriciaShanahan below on why).
In this case the .1 part of the numbers is the problem, it cannot be finitely represented in a double (like 1/3 can't be finitely represented exactly as a decimal number).
A code snippet to explain what happens:
double larger = 16.1d;  //Assign the closest double representation of 16.1.
double smaller = 6.1;   //Assign the closest double representation of 6.1.
double diff = larger - smaller; //The closest double to the difference of the two
                                //stored values. The representation errors already
                                //baked into 16.1 and 6.1 do not cancel, so the
                                //result comes out as 10.000000000000002 instead of
                                //exactly 10. That is where the ...000002 comes from.
Always use the Assert.AreEqual overload that takes a delta parameter when comparing doubles.
Alternatively, if you really need exact decimal arithmetic, use the decimal data type, which uses a base-10 representation internally and would return exactly 10 in your example.
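With MSTest (which the Assert.AreEqual<double> in the question suggests), that is the overload taking a delta argument; the tolerance here is just an example:

Assert.AreEqual(10.0d, 16.1d - 6.1d, 1e-9); // passes: the difference is within 1e-9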
Floating-point numbers are an approximation of the actual value, stored with a binary exponent, so the test fails correctly. If you require exact equivalence of two decimal numbers you may need to check out the decimal data type.
If you are using NUnit, use the Within option. You can find additional information here: http://www.nunit.org/index.php?p=equalConstraint&r=2.6.2.
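For example (the tolerance is arbitrary):

Assert.That(16.1d - 6.1d, Is.EqualTo(10.0d).Within(1e-9));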
I agree with anders abel. There won't be a way to do this using a float number representation. As a direct consequence of IEEE 754-1985, only numbers that can be written in the form ±m·2^e (an integer significand scaled by a power of two) can be stored, and calculated with, precisely (as long as the chosen number of bits allows it).
For example: 1024 * 1.75 * 183.375 / 1040.0625 <-- every operand can be stored precisely
10 / 1.1 <-- 1.1 cannot be stored precisely
If you really need exact representation of rational numbers, you could write your own number implementation using fractions.
This could be done by saving the numerator, denominator and sign. Then operations like multiply, subtract, etc. need to be implemented (it is very hard to ensure good performance). A GetString method could look like this (I assume cachedRepresentation, cachedDotIndex and cachedNumerator are member variables):
public string GetString(int digits)
{
    if (this.cachedRepresentation == "")
    {
        // Build the sign and integer part once and remember where the
        // fractional digits start.
        this.cachedRepresentation += this.positiveSign ? "" : "-";
        this.cachedRepresentation += this.numerator / this.denominator;
        this.cachedNumerator = 10 * (this.numerator % this.denominator);
        this.cachedDotIndex = this.cachedRepresentation.Length;
        this.cachedRepresentation += ".";
    }

    // Enough digits cached already? Return just the prefix that was asked for.
    if ((this.cachedDotIndex + digits) < this.cachedRepresentation.Length)
        return this.cachedRepresentation.Substring(0, this.cachedDotIndex + digits + 1);

    // Otherwise keep producing digits by long division until there are enough.
    while ((this.cachedDotIndex + digits) >= this.cachedRepresentation.Length)
    {
        this.cachedRepresentation += this.cachedNumerator / this.denominator;
        this.cachedNumerator = 10 * (this.cachedNumerator % this.denominator);
    }
    return this.cachedRepresentation;
}
This worked for me. For the operations themselves on large numbers I ran into problems with data types that were too small (I don't normally use C#); an experienced C# developer should have no trouble avoiding that, for example by using System.Numerics.BigInteger for the numerator and denominator.
If you want to implement this you should reduce the fraction at initialization and before operations, using Euclid's algorithm for the greatest common divisor.
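A sketch of that reduction step, assuming long fields named numerator and denominator as in the snippet above:

private static long Gcd(long a, long b)
{
    // Euclid's algorithm.
    while (b != 0)
    {
        long t = a % b;
        a = b;
        b = t;
    }
    return a;
}

private void Reduce()
{
    long g = Gcd(Math.Abs(this.numerator), Math.Abs(this.denominator));
    if (g > 1)
    {
        this.numerator /= g;
        this.denominator /= g;
    }
}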
Irrational numbers can (in every case I know of) be approximated by an algorithm that comes as close to the exact value as you want (and the computer allows).
Our existing application reads some floating point numbers from a file. The numbers are written there by some other application (let's call it Application B). The format of this file was fixed long time ago (and we cannot change it). In this file all the floating point numbers are saved as floats in binary representation (4 bytes in the file).
In our program as soon as we read the data we convert the floats to doubles and use doubles for all calculations because the calculations are quite extensive and we are concerned with the spread of rounding errors.
We noticed that when we convert floats via decimal (see the code below) we are getting more precise results than when we convert directly. Note: Application B also uses doubles internally and only writes them into the file as floats. Let's say Application B had the number 0.012 written to file as float. If we convert it after reading to decimal and then to double we get exactly 0.012, if we convert it directly, we get 0.0120000001043081.
This can be reproduced without reading from a file - with just an assignment:
float readFromFile = 0.012f;
Console.WriteLine("Read from file: " + readFromFile);
//prints 0.012
double forUse = readFromFile;
Console.WriteLine("Converted to double directly: " + forUse);
//prints 0.0120000001043081
double forUse1 = (double)Convert.ToDecimal(readFromFile);
Console.WriteLine("Converted to double via decimal: " + forUse1);
//prints 0.012
Is it always beneficial to convert from float to double via decimal, and if not, under what conditions is it beneficial?
EDIT: Application B can obtain the values which it saves in two ways:
Value can be a result of calculations
Value can be typed in by user as a decimal fraction (so in the example above the user had typed 0.012 into an edit box and it got converted to double, then saved to float)
we get exactly 0.012
No you don't. Neither float nor double can represent 3/250 exactly. What you do get is a value that is rendered by the string formatter Double.ToString() as "0.012". But this happens because the formatter doesn't display the exact value.
Going through decimal is causing rounding. It is likely much faster (not to mention easier to understand) to just use Math.Round with the rounding parameters you want. If what you care about is the number of significant digits, see:
Round a double to x significant figures
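A minimal sketch of that suggestion (rounding to 7 decimal places is an arbitrary choice here, roughly matching the precision a float carries for numbers of this size):

float readFromFile = 0.012f;
double forUse = Math.Round((double)readFromFile, 7); // rounds away the float noise
Console.WriteLine(forUse); // 0.012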
For what it's worth, 0.012f (which means the 32-bit IEEE-754 value nearest to 0.012) is exactly
0x3C449BA6
or
0.01200000010430812835693359375
and a System.Decimal can hold almost all of those digits. But Convert.ToDecimal(0.012f) won't give you anywhere near that precision -- per the documentation there is a rounding step:
The Decimal value returned by this method contains a maximum of seven significant digits. If the value parameter contains more than seven significant digits, it is rounded using rounding to nearest.
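To see that documented rounding in action, a quick sketch (the printed values are what the rounding rules imply; exact formatting may vary):

float f = 0.012f;
Console.WriteLine(Convert.ToDecimal(f)); // 0.012 -- float-to-decimal rounds to 7 significant digits
Console.WriteLine((decimal)(double)f);   // ~0.0120000001043081 -- double-to-decimal keeps ~15 digits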
As strange as it may seem, conversion via decimal (with Convert.ToDecimal(float)) may be beneficial in some circumstances.
It will improve the precision if it is known that the original numbers were provided by users in decimal representation and users typed no more than 7 significant digits.
To prove it I wrote a small program (see below). Here is the explanation:
As you recall from the OP this is the sequence of steps:
Application B has doubles coming from two sources:
(a) results of calculations; (b) converted from user-typed decimal numbers.
Application B writes its doubles as floats into the file - effectively
doing binary rounding from 52 bits of significand (IEEE 754 double) down to 23 bits (IEEE 754 single).
Our Application reads that float and converts it to a double by one of two ways:
(a) direct assignment to double - effectively padding a 23-bit number to a 52-bit number with binary zeros (29 zero-bits);
(b) via conversion to decimal with (double)Convert.ToDecimal(float).
As Ben Voigt properly noticed Convert.ToDecimal(float) (see MSDN in the Remark section) rounds the result to 7 significant decimal digits. In Wikipedia's IEEE 754 article about Single we can read that precision is 24 bits - equivalent to log10(pow(2,24)) ≈ 7.225 decimal digits. So, when we do the conversion to decimal we lose that 0.225 of a decimal digit.
So, in the generic case, when there is no additional information about the doubles, the conversion to decimal will in most cases make us lose some precision.
But (!) if there is the additional knowledge that originally (before being written to a file as floats) the doubles were decimals with no more than 7 digits, then the rounding errors introduced by the decimal rounding (step 3(b) above) will compensate for the rounding errors introduced by the binary rounding (in step 2 above).
In the program that proves the statement for the generic case, I randomly generate doubles, cast each to float, then convert it back to double (a) directly and (b) via decimal, and measure the distance between the original double and double (a) and double (b). If double (a) is closer to the original than double (b), I increment the pro-direct-conversion counter; in the opposite case I increment the pro-via-decimal counter. I do this in a loop of 1 million iterations, then print the ratio of the pro-direct to pro-via-decimal counters. The ratio turns out to be about 3.7, i.e. in approximately 4 cases out of 5 the conversion via decimal spoils the number.
To prove the case when the numbers are typed in by users, I used the same program with the only change being that I apply Math.Round(originalDouble, N) to the doubles. Because I get the original doubles from the Random class, they are all between 0 and 1, so the number of significant digits coincides with the number of digits after the decimal point. I ran this method in a loop over N from 1 to 15 significant digits typed by the user, and plotted how many times the direct conversion is better than the conversion via decimal against the number of significant digits typed by the user:
[Graph: ratio of direct-is-better to via-decimal-is-better counts vs. number of significant digits typed by the user.]
As you can see, for 1 to 7 typed digits the conversion via decimal is always better than the direct conversion. To be exact, for a million random numbers only 1 or 2 are not improved by the conversion to decimal.
Here is the code used for the comparison:
private static void CompareWhichIsBetter(int numTypedDigits)
{
    Console.WriteLine("Number of typed digits: " + numTypedDigits);
    Random rnd = new Random(DateTime.Now.Millisecond);

    int countDecimalIsBetter = 0;
    int countDirectIsBetter = 0;
    int countEqual = 0;

    for (int i = 0; i < 1000000; i++)
    {
        double origDouble = rnd.NextDouble();
        //Use the line below for the user-typed-in-numbers case.
        //double origDouble = Math.Round(rnd.NextDouble(), numTypedDigits);

        float x = (float)origDouble;

        double viaFloatAndDecimal = (double)Convert.ToDecimal(x);
        double viaFloat = x;

        double diff1 = Math.Abs(origDouble - viaFloatAndDecimal);
        double diff2 = Math.Abs(origDouble - viaFloat);

        if (diff1 < diff2)
            countDecimalIsBetter++;
        else if (diff1 > diff2)
            countDirectIsBetter++;
        else
            countEqual++;
    }

    Console.WriteLine("Decimal better: " + countDecimalIsBetter);
    Console.WriteLine("Direct better: " + countDirectIsBetter);
    Console.WriteLine("Equal: " + countEqual);
    Console.WriteLine("Betterness of direct conversion: " + (double)countDirectIsBetter / countDecimalIsBetter);
    Console.WriteLine("Betterness of conv. via decimal: " + (double)countDecimalIsBetter / countDirectIsBetter);
    Console.WriteLine();
}
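A hypothetical Main to drive the sweep described above (switch to the Math.Round line inside the method for the user-typed case):

static void Main()
{
    for (int digits = 1; digits <= 15; digits++)
        CompareWhichIsBetter(digits);
}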
Here's a different answer - I'm not sure that it's any better than Ben's (almost certainly not), but it should produce the right results:
float readFromFile = 0.012f;
decimal forUse = Convert.ToDecimal(readFromFile.ToString("0.000"));
So long as .ToString("0.000") produces the "correct" number (which should be easy to spot-check), then you'll get something you can work with and not have to worry about rounding errors. If you need more precision, just add more 0's.
Of course, if you actually need to work with 0.012f out to the maximum precision, then this won't help, but if that's the case, then you don't want to be converting it from a float in the first place.
I'm messing around with Fourier transformations. Now I've created a class that does an implementation of the DFT (not doing anything like FFT atm). This is the implementation I've used:
public static Complex[] Dft(double[] data)
{
    // Uses System.Numerics.Complex. FromPolarCoordinates(m, phi) builds m*e^(i*phi),
    // so each term is data[n-1] * e^(-2*pi*i*n*k/length); the 1/sqrt(length) factor
    // makes the transform unitary. Note the loops are 1-based and index with n-1/k-1.
    int length = data.Length;
    Complex[] result = new Complex[length];
    for (int k = 1; k <= length; k++)
    {
        Complex c = Complex.Zero;
        for (int n = 1; n <= length; n++)
        {
            c += Complex.FromPolarCoordinates(data[n - 1], (-2 * Math.PI * n * k) / length);
        }
        result[k - 1] = 1 / Math.Sqrt(length) * c;
    }
    return result;
}
And these are the results I get from Dft({2,3,4})
Well it seems pretty okay, since those are the values I expect. There is only one thing I find confusing. And it all has to do with the rounding of doubles.
First of all, why are the first two numbers not exactly the same: (0,8660..4438) vs (0,8660..443)? And why can't it calculate a zero where you'd expect one? I know 2.8E-15 is pretty close to zero, but it's not zero.
Does anyone know how these marginal errors occur, and whether I can (and should) do something about them?
It might seem that there's not a real problem, because these are just small errors. However, how do you deal with these rounding errors if you are, for example, comparing two values?
5,2 + 0i != 5,1961524 + i2.828107*10^-15
Cheers
I think you've already explained it to yourself - limited precision means limited precision. End of story.
If you want to clean up the results, you can do some rounding of your own to a more reasonable number of significant digits - then your zeros will show up where you want them.
To answer the question raised by your comment, don't try to compare floating point numbers directly - use a range:
if (Math.Abs(float1 - float2) < 0.001) {
    // they're the same!
}
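For the Complex values in the question, the same idea can be wrapped in a small helper (NearlyEqual and the tolerance are made up for illustration; Complex comes from System.Numerics):

static bool NearlyEqual(Complex a, Complex b, double tolerance = 1e-9)
{
    return (a - b).Magnitude < tolerance;
}

// e.g. NearlyEqual(new Complex(5.1961524, 2.8e-15), new Complex(5.1961524, 0)) is true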
The comp.lang.c FAQ has a lot of questions & answers about floating point, which you might be interested in reading.
From http://support.microsoft.com/kb/125056
Emphasis mine.
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:
In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision.
Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can't be precisely represented as a finite binary value. For example .1 is .0001100110011... in binary (it repeats forever), so it can't be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs.
Never assume that the result is accurate to the last decimal place. There are always small differences between the "true" answer and what can be calculated with the finite precision of any floating point processing unit.
Never compare two floating-point values to see if they are equal or not equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that "should" be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.
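Rule 1 is particularly easy to trip over in C#, because a single float mixed into a calculation silently limits the accuracy; a small illustration (the printed value is approximate):

float f = 16.1f;
double d = 16.1d;
Console.WriteLine(d - f); // roughly -3.8E-07, not 0: the float only carries ~7 significant digits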
Note that although I referenced a Microsoft document, this is not a Windows problem. It's a problem with using binary and is inherent in the CPU itself.
And, as a second side note, I tend to use the Decimal datatype instead of double: See this related SO question: decimal vs double! - Which one should I use and when?
In C# you'll want to use the 'decimal' type, not double for accuracy with decimal points.
As to the 'why': representing fractions in different base systems gives different answers. For example, 1/3 in a base-10 system is 0.33333 recurring, but in a base-3 system it is 0.1.
The double is a binary value, in base 2. When converting to base-10 decimal you can expect to have these rounding errors.
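The classic C# illustration of that (decimal works in base 10, so it does not hit this particular issue):

Console.WriteLine((0.1 + 0.2).ToString("R")); // 0.30000000000000004
Console.WriteLine(0.1m + 0.2m);               // 0.3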