Is it safe to cast from double to decimal in the following manner in C#:
int downtimeMinutes = 90;
TimeSpan duration = TimeSpan.FromHours(2d);
decimal calculatedDowntimePercent = duration.TotalMinutes > 0 ?
(downtimeMinutes / (decimal)duration.TotalMinutes) * 100.0m : 0.0m;
If the answer is yes, then no fuss, I'll just mark as accepted.
In general, double -> decimal conversions aren't safe, because decimal has a smaller range.
However, as long as TotalMinutes is less than the maximum decimal value* it will be fine. This is true, because TimeSpan.MaxValue.TotalMinutes < (double)decimal.MaxValue (I believe TimeSpan uses a long internally.)
So: yes.
*: (79,228,162,514,264,337,593,543,950,335 minutes is 1.1×10^13 times the age of the universe)
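A quick sanity check of that bound (a sketch, not part of the original answer):
// The largest possible TotalMinutes is far below decimal.MaxValue,
// so the cast cannot overflow for any TimeSpan.
Console.WriteLine(TimeSpan.MaxValue.TotalMinutes);    // roughly 1.5e10 minutes
Console.WriteLine((double)decimal.MaxValue);          // roughly 7.9e28
Console.WriteLine(TimeSpan.MaxValue.TotalMinutes < (double)decimal.MaxValue); // True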
No, in general, casting from double to decimal is not always safe:
[TestCase(double.MinValue)]
[TestCase(double.MaxValue)]
[TestCase(double.NaN)]
[TestCase(double.NegativeInfinity)]
[TestCase(double.PositiveInfinity)]
public void WillFail(double input)
{
    decimal result = (decimal)input; // Throws OverflowException!
}
As the OP clarified in a comment to the question, "safe" here means "doesn't cause run-time exceptions"; the test above shows that such exceptions can indeed occur when casting a double to a decimal.
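If you need to guard against those cases in general code, here is a minimal sketch (the ToDecimalSafe helper name is mine, not part of the original answer):
static decimal? ToDecimalSafe(double value)
{
    // NaN and the infinities can never be represented as decimal,
    // and values outside decimal's range throw OverflowException on cast.
    if (double.IsNaN(value) || double.IsInfinity(value))
        return null;
    try
    {
        return (decimal)value;
    }
    catch (OverflowException)
    {
        return null;
    }
}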
The above is the generic answer many Googlers might've come here for. However, to also answer the specific question by OP, here's a strong indication that the code will not throw exceptions, even on edge cases:
[Test]
public void SpecificCodeFromOP_WillNotFail_NotEvenOnEdgeCases()
{
    int downtimeMinutes = 90;
    foreach (TimeSpan duration in new[] {
        TimeSpan.FromHours(2d), // From OP
        TimeSpan.MinValue,
        TimeSpan.Zero,
        TimeSpan.MaxValue })
    {
        decimal calculatedDowntimePercent = duration.TotalMinutes > 0 ?
            (downtimeMinutes / (decimal)duration.TotalMinutes) * 100.0m : 0.0m;
    }
}
Yes, it is safe, because decimal has greater precision.
http://msdn.microsoft.com/en-us/library/364x0z75(VS.80).aspx
The compiler will put in casts around the other non-decimal numbers, and they'll all fit into decimal* (see caveat).
--
Caveat
Decimal is not a binary floating-point type; its mandate is to always uphold precision, whereas a floating-point number such as double (which I mostly use) trades precision for the ability to hold very large or very small numbers. Very large or very small numbers will not fit into decimal. So Lisa needs to ask herself whether the results of the operation are likely to stay within 28 significant digits; 28 significant digits are adequate for most scenarios.
Floating point is good for astronomically large or infinitesimally small numbers, or operations in between that yield enough accuracy. I should look this up, but double is okay for plus or minus a few billion with accuracy of up to several decimal places (7 or 8?).
In the sciences there's no point measuring beyond the accuracy of your equipment. In finance, the logical choice is often double, because a double is computationally more efficient in most situations (sometimes a bit more accuracy is wanted, but the efficiency is not worth throwing away for something like decimal). In the end we all have to get pragmatic and map business needs to a digital domain. There are tools out there that have a dynamic number representation, and probably there are libraries in .NET for the same. However, is it worth it? Sometimes it is. Often it's overkill.
Related
A colleague has written some code along these lines:
var roundedNumber = (float) Math.Round(someFloat, 2);
Console.WriteLine(roundedNumber);
I have an uncertainty about this code - is the number that gets written here even guaranteed to have 2 decimal places any more? It seems plausible to me that truncation of the double Math.Round(someFloat, 2) to float might result in a number whose string representation has more than 2 digits. Can anybody either provide an example of this (demonstrating that such a cast is unsafe) or else demonstrate somehow that it is safe to perform such a cast?
Assuming single and double precision IEEE754 representation and rules, I have checked for the first 2^24 integers i that
float(double( i/100 )) = float(i/100)
in other words, converting a decimal value with 2 decimal places twice (first to the nearest double, then to the nearest single precision float) is the same as converting the decimal directly to single precision, as long as the integer part of the decimal is not too large.
I have no guarantee for larger values.
The double approximation and the single approximation are different, but that's not really the question.
Converting twice is innocuous up to at least 167772.16; it's the same as if Math.Round had done it directly in single precision.
Here is the testing code in Squeak/Pharo Smalltalk with ArbitraryPrecisionFloat package (sorry to not exhibit it in c# but the language does not really matter, only IEEE rules do).
(1 to: 1<<24)
detect: [:i |
(i/100.0 asArbitraryPrecisionFloatNumBits: 24) ~= (i/100 asArbitraryPrecisionFloatNumBits: 24) ]
ifNone: [nil].
EDIT
The above test was superfluous because, thanks to the excellent reference provided by Mark Dickinson (Innocuous double rounding of basic arithmetic operations), we know that float(double(x) / double(y)) produces a correctly-rounded value for x / y, as long as x and y are both representable as floats, which is the case for any 0 <= x <= 2^24 and for y = 100.
EDIT
I have checked with numerators up to 2^30 (decimal values > 10 million), and converting twice is still identical to converting once. Going further with an interpreted language is not good wrt global warming...
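For C# readers, a rough equivalent of the original check (a sketch; note it assumes .NET's decimal-to-float conversion is itself correctly rounded, an assumption the ArbitraryPrecisionFloat package avoids having to make):
for (int i = 1; i <= 1 << 24; i++)
{
    float viaDouble = (float)(i / 100.0); // exact value -> nearest double -> nearest float
    float direct = (float)(i / 100m);     // exact value -> nearest float
    if (viaDouble != direct)
        Console.WriteLine($"Mismatch at i = {i}");
}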
Possible Duplicate:
decimal vs double! - Which one should I use and when?
I'm using double type for price in my trading software.
I've noticed that sometimes there are odd errors.
They occur if price contains 4 digits after "dot", like 2.1234.
When I send "2.1234" from my program, the order appears on the market at the price of "2.1235".
I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the dot.
The question is - where is the line? When to use decimal?
Should I use decimal for any financial operations? Even if I need just one digit after the dot? (1.1, 1.2, etc.)
I know decimal is pretty slow so I would prefer to use double unless decimal is absolutely required.
Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.
Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.
Of course, if an exact base-10 representation is not important to you, other factors come into consideration, which may or may not matter depending on the specific situation:
double has a larger range (it can handle very large and very small magnitudes);
decimal has more precision (has more significant digits);
you may need to use double to interact with some older APIs that are not aware of decimal;
double is faster than decimal;
decimal has a larger memory footprint;
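To make the first point concrete (a sketch, not part of the original answer): 2.1234 round-trips exactly as a decimal, while a double can only hold the nearest binary value.
decimal priceDecimal = 2.1234m;
Console.WriteLine(priceDecimal);                // 2.1234, stored and displayed exactly

double priceDouble = 2.1234;
Console.WriteLine(priceDouble.ToString("G17")); // reveals the nearest binary value, which is
                                                // not exactly 2.1234 (extra trailing digits appear)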
When accuracy is needed and important, use decimal.
When accuracy is not that important, then you can use double.
In your case, you should be using decimal, as it's a financial matter.
For financial operations I always use the decimal type.
Use decimal; it's built for representing base-10 fractions (i.e. prices) well.
Decimal is the way to go when dealing with prices.
If it's financial software you should probably use decimal. This wiki article summarises quite nicely.
A simple response is in this example:
decimal d = 0.3M+0.3M+0.3M;
bool ret = d == 0.9M; // true
double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9; // false
The test with the double fails because 0.3 is periodic in its binary (base-2) representation, so precision is lost, whereas decimal uses a base-10 representation, so no significant digits are lost unexpectedly. Decimal is unfortunately dramatically slower than double. We usually use decimal for financial calculations, where every digit matters and no tolerance is acceptable, and double/float for engineering.
Double is meant as a generic floating-point data type; decimal is specifically meant for money and financial domains. Even though double usually works just fine, decimal might prevent problems in some cases (e.g. rounding errors when you get to values in the billions).
There is an explanation of it on MSDN.
As soon as you start to do calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while a decimal uses a base-10 representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database without doing any rounding, you will actually not lose any precision.
However, decimals are much more suited for representing monetary values where you are concerned about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. integrals as used by actuary computations) you will have to convert the decimal to double before doing the calculation, negating the advantages of using decimal.
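A sketch of that round trip (the growth-rate numbers here are made up purely for illustration):
decimal amount = 1000.00m;                           // monetary value kept in decimal
double grown = (double)amount * Math.Pow(1.05, 10);  // the actual math is done in double
decimal result = Math.Round((decimal)grown, 2);      // converted back and rounded to 2 places for money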
A decimal also "remembers" how many digits it has: e.g. even though the decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it if formatted as text.
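For example (a small sketch of that trailing-zero behaviour):
decimal a = 1.230m;
decimal b = 1.23m;
Console.WriteLine(a == b);        // True: numerically equal
Console.WriteLine(a.ToString());  // "1.230": the scale (trailing zero) is preserved
Console.WriteLine(b.ToString());  // "1.23"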
If you always know the maximum number of decimals you are going to have (digits after the point), then the best practice is to use fixed-point notation. That will give you an exact result while still working very fast.
The simplest way to use fixed point is to store the number as an integer count of its smallest unit. For example, if the price always has 2 decimals you would store the amount of cents ($12.45 is stored in an int with value 1245, which thus represents 1245 cents). With four decimals you would store ten-thousandths (12.3456 would be stored in an int with value 123456, representing 123456 ten-thousandths), etc.
The disadvantage is that you sometimes need a conversion, for example when multiplying two values together (0.1 * 0.1 = 0.01 while 1 * 1 = 1; the unit has changed from tenths to hundredths). And if you are going to use other mathematical functions you also have to take things like this into consideration.
On the other hand, if the number of decimals varies a lot, fixed point is a bad idea. And if high-precision calculations are needed, the decimal datatype was constructed for exactly that purpose.
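A minimal fixed-point sketch along those lines (the variable names are mine, purely for illustration):
// Prices stored as whole cents in a long.
long priceCents = 1245;                    // $12.45
long quantity = 3;
long totalCents = priceCents * quantity;   // 3735 cents = $37.35

// Multiplying two fixed-point values changes the unit (cents * cents = ten-thousandths),
// so the product has to be rescaled back to cents, rounding to nearest here.
long aCents = 10;                          // $0.10
long bCents = 10;                          // $0.10
long productCents = (aCents * bCents + 50) / 100;   // 1 cent = $0.01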
I'm messing around with Fourier transformations. Now I've created a class that does an implementation of the DFT (not doing anything like FFT atm). This is the implementation I've used:
public static Complex[] Dft(double[] data)
{
    int length = data.Length;
    Complex[] result = new Complex[length];
    for (int k = 1; k <= length; k++)
    {
        Complex c = Complex.Zero;
        for (int n = 1; n <= length; n++)
        {
            c += Complex.FromPolarCoordinates(data[n - 1], (-2 * Math.PI * n * k) / length);
        }
        result[k - 1] = 1 / Math.Sqrt(length) * c;
    }
    return result;
}
And these are the results I get from Dft({2,3,4})
Well it seems pretty okay, since those are the values I expect. There is only one thing I find confusing. And it all has to do with the rounding of doubles.
First of all, why are the first two numbers not exactly the same (0,8660..4438 vs 0,8660..443)? And why can't it calculate a zero where you'd expect one? I know 2.8E-15 is pretty close to zero, but it's not zero.
Does anyone know how these marginal errors occur, and whether I can (and should want to) do something about them?
It might seem that there's not a real problem, because it's just small errors. However, how do you deal with these rounding errors if you are, for example, comparing two values:
5,2 + 0i != 5,1961524 + i2.828107*10^-15
Cheers
I think you've already explained it to yourself - limited precision means limited precision. End of story.
If you want to clean up the results, you can do some rounding of your own to a more reasonable number of significant digits - then your zeros will show up where you want them.
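For example (a sketch; assumes using System.Numerics; for Complex):
Complex c = new Complex(5.1961524, 2.828107e-15);
double re = Math.Round(c.Real, 6);       // 5.196152
double im = Math.Round(c.Imaginary, 6);  // 0: the near-zero noise disappears
Console.WriteLine($"{re} + {im}i");      // "5.196152 + 0i"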
To answer the question raised by your comment, don't try to compare floating point numbers directly - use a range:
if (Math.Abs(float1 - float2) < 0.001) {
// they're the same!
}
The comp.lang.c FAQ has a lot of questions & answers about floating point, which you might be interested in reading.
From http://support.microsoft.com/kb/125056
Emphasis mine.
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:
In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision.
Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can't be precisely represented as a finite binary value. For example .1 is .0001100110011... in binary (it repeats forever), so it can't be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs.
Never assume that the result is accurate to the last decimal place. There are always small differences between the "true" answer and what can be calculated with the finite precision of any floating point processing unit.
Never compare two floating-point values to see if they are equal or not equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that "should" be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.
Note that although I referenced a microsoft document, this is not a windows problem. It's a problem with using binary and is in the CPU itself.
And, as a second side note, I tend to use the Decimal datatype instead of double: See this related SO question: decimal vs double! - Which one should I use and when?
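As an aside, rule 1 from the quoted list is easy to see directly (a sketch, not from the KB article):
float f = 0.1f;
double d = 0.1;
// Widening the float to double does not recover the "missing" precision:
Console.WriteLine(f == d);   // False: the nearest float to 0.1 and the nearest double to 0.1 differ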
In C# you'll want to use the 'decimal' type, not double for accuracy with decimal points.
As to the 'why'... representing fractions in different base systems gives different answers. For example, 1/3 in a base-10 system is 0.33333 recurring, but in a base-3 system it is 0.1.
A double is a binary value, in base 2. When converting to base-10 decimal you can expect these rounding errors.
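For instance (a sketch): 0.1 has no finite base-2 representation, so the stored double is only the nearest approximation, while decimal stores it exactly.
Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001 (the nearest double)
Console.WriteLine(0.1m.ToString());      // 0.1 (stored exactly as a decimal)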
Every time I use Math.Round/Floor/Ceiling I always cast to int (or perhaps long if necessary). Why exactly do they return double if the result is always an integral value?
The result might not fit into an int (or a long). The range of a double is much greater.
Approximate range of double: ±5.0 × 10^−324 to ±1.7 × 10^308
(Source)
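A quick illustration of the range point (a sketch, not from the original answer):
double big = Math.Floor(1e19);    // a perfectly ordinary double
// long l = checked((long)big);   // would throw OverflowException: 1e19 > long.MaxValue (~9.2e18)
Console.WriteLine(long.MaxValue); // 9223372036854775807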
I agree with Mark's answer that the result might not fit in a long, but you might wonder: what if C# had a much longer long type? Well, here's what happens in Python with its arbitrary-length integers:
>>> round(1.23e45)
1229999999999999973814869011019624571608236032
Most of the digits are "noise" from the floating-point rounding error. Perhaps part of the motivation for Round/Floor/Ceiling returning double in C# was to avoid the illusion of false precision.
An alternative explanation is that the .NET Math module uses code written in C, in which floor and ceil return floating-point types.
Range arguments aside, none of these answers addresses what, to me, is a fundamental problem with returning a floating point number when you really want an exact integer. It seems to me that the calculated floating point number could be less than or greater than the desired integer by a small round-off error, so the cast operation could create an off-by-one error. I would think that, instead of casting, you need to apply an integer (not double) round-nearest function to the double result of floor(). Or else write your own code. The C library versions of floor() and ceil() are very slow anyway.
Is this true, or am I missing something? There is something about an exact representation of integers in an IEEE floating point standard, but I am not sure whether or not this makes the cast safe.
I would rather have range checking in the function (if it is needed to avoid overflow) and return a long. For my own private code, I can skip the range checking. I have been doing this:
long int_floor(double x)
{
    double remainder;
    long truncate;

    truncate = (long) x;         // rounds down if + x, up if negative x
    remainder = x - truncate;    // normally + for + x, - for - x

    // Adjust down (toward -infinity) for negative x, negative remainder
    if (remainder < 0 && x < 0)
        return truncate - 1;
    else
        return truncate;
}
Counterparts exist for ceil() and round() with different considerations for negative and positive numbers.
There is no reason given in the docs that I could find. My best guess is that if you are working with doubles, chances are you would want any operations on doubles to return a double. Rounding in order to cast to an int was deemed by the language designers less common than rounding and keeping the value as a double.
You could write your own method that casts it to an int for you in about two lines of code, which is much less work than posting a question on Stack Overflow...
I have a simple C# function:
public static double Floor(double value, double step)
{
return Math.Floor(value / step) * step;
}
That calculates the highest number, lower than or equal to "value", that is a multiple of "step". But it lacks precision, as seen in the following tests:
[TestMethod()]
public void FloorTest()
{
    int decimals = 6;
    double value = 5F;
    double step = 2F;
    double expected = 4F;
    double actual = Class.Floor(value, step);
    Assert.AreEqual(expected, actual);

    value = -11.5F;
    step = 1.1F;
    expected = -12.1F;
    actual = Class.Floor(value, step);
    Assert.AreEqual(Math.Round(expected, decimals), Math.Round(actual, decimals));
    Assert.AreEqual(expected, actual);
}
The first and second asserts are ok, but the third fails, because the result is only equal until the 6th decimal place. Why is that? Is there any way to correct this?
Update: If I debug the test I see that the values are equal until the 8th decimal place instead of the 6th, maybe because Math.Round introduces some imprecision.
Note: In my test code I wrote the "F" suffix (explicit float constant) where I meant "D" (double), so if I change that I can have more precision.
I actually sort of wish they hadn't implemented the == operator for floats and doubles. It's almost always wrong to ask whether a double or a float is equal to any other value.
If you want precision, use System.Decimal. If you want speed, use System.Double (or System.Single). Floating point numbers are not "infinite precision" numbers, and therefore asserting equality must include a tolerance. As long as your numbers have a reasonable number of significant digits, this is OK.
If you're looking to do math on very large AND very small numbers, don't use float or double.
If you need infinite precision, don't use float or double.
If you are aggregating a very large number of values, don't use float or double (the errors will compound themselves).
If you need speed and size, use float or double.
See this answer (also by me) for a detailed analysis of how precision affects the outcome of your mathematical operations.
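To illustrate the point about aggregating many values (a sketch, not from the answer above):
double sum = 0.0;
for (int i = 0; i < 1_000_000; i++)
    sum += 0.1;                      // each addition carries a tiny binary rounding error

Console.WriteLine(sum == 100000.0);  // False: the errors have compounded
Console.WriteLine(sum);              // close to, but not exactly, 100000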
Floating point arithmetic on computers is not an exact science :).
If you want exact precision to a predefined number of decimals, use decimal instead of double, or accept a small tolerance interval.
If you omit all the F postfixes (i.e. -12.1 instead of -12.1F) you will get equality to a few more digits. Your constants (and especially the expected values) are now floats because of the F. If you are doing that on purpose then please explain.
But for the rest I concur with the other answers on comparing double or float values for equality: it's just not reliable.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it.
Only use floating point if you want the machine's binary interpretation of the number system. You can't represent 10 cents exactly.
Check the answers to this question: Is it safe to check floating point values for equality to 0?
Really, just check for "within tolerance of..."
Floats and doubles cannot accurately store all numbers. This is a limitation of the IEEE floating point format. To get faithful precision you need to use a more advanced math library.
If you don't need precision past a certain point, then perhaps decimal will work better for you. It has a higher precision than double.
For a similar issue, I ended up using the following implementation, which seems to succeed for most of my test cases (up to 5-digit precision):
public static double roundValue(double rawValue, double valueTick)
{
    if (valueTick <= 0.0) return 0.0;

    // Work in decimal to avoid binary rounding error: divide by the tick size,
    // round to the nearest whole number of ticks, then multiply back.
    Decimal val = new Decimal(rawValue);
    Decimal step = new Decimal(valueTick);
    Decimal modulo = Decimal.Round(Decimal.Divide(val, step)); // number of ticks
    return Decimal.ToDouble(Decimal.Multiply(modulo, step));
}
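For example, snapping a price to a 0.05 tick with the helper above (a usage sketch):
Console.WriteLine(roundValue(23.17, 0.05));   // 23.15 (463 ticks of 0.05)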
Sometimes the result is more precise than you would expect from strict IEEE 754 floating point.
That's because the hardware uses more bits for the computation.
See the C# specification and this article.
Java has the strictfp keyword and C++ has compiler switches. I miss that option in .NET.