Should I worry about losing precision with this DateTime math? - c#

This code:
if (dt.Subtract(prevDt).TotalMinutes == 15)
("dt" and "prevDt" are DateTime vars that contain values such as "7/20/2012 7:30:00 AM" and "7/20/2012 7:45:00 AM")
...causes ReSharper to warn me with:
"Comparison of floating point numbers with equality operator. Possible loss of precision while rounding values."
Is this a valid warning, and if so, how would I appease it? I wish ReSharper were a little more like Eclipse, which offers to fix things it complains about.
At any rate, the code seems to work fine, although I'd rather not have it stink up the joint if this is a code smell.

If you are sure that your timestamps are exactly on 15 minute boundaries and not a few milliseconds off, then your code will work fine. Values that can be represented exactly as an int can also be represented exactly as a double.
If you want to rewrite your code to avoid the warning, you might try this:
if (prevDt.AddMinutes(15) == dt)

No, it's not a valid warning, provided your dates will always be exactly 15 minutes apart, with no stray seconds, milliseconds, or ticks of difference.

You can compare the individual TimeSpan properties (Days, Hours, Minutes, ...) for just the portions you care about (i.e. ignore seconds).
Otherwise, if your values could ever contain seconds or milliseconds, it is better to check whether TotalMinutes is within a small tolerance of 15 rather than testing for an exact match.
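For example, a minimal sketch of that tolerance-based check (the 0.001-minute tolerance here is an arbitrary illustration, not a recommended value):

double elapsed = dt.Subtract(prevDt).TotalMinutes;
if (Math.Abs(elapsed - 15) < 0.001)
{
    // close enough to 15 minutes for this purpose
}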

Related

Failure of the Round-trip format specifier "R" in .Net

The documentation recommends that I use G17 rather than R, as R can sometimes fail to round-trip.
However, (1.0/10).ToString("G17") gives "0.10000000000000001", which is pretty horrible, while the round-trip format seems to work just fine (and gives "0.1"). I'm happy to spend a few CPU cycles to get a more aesthetic result, but potential round-trip failures are more concerning.
For what sort of (Double) values does R fail to round-trip? And how badly?
Does the .NET version (we run on both .NET Framework 4.7.2 and .NET Core 3.1) affect things? Could writing on one platform and reading on another make round-trip failure more frequent?
We are considering writing doubles to R first, parsing to check the round-trip, and falling back to G17 only if that fails. Is there a better way to get nicely formatted, reliable results?
A round trip here means that a numeric value converted to a string parses back into the same numeric value.
OP's dismay that (1.0/10).ToString("G17") gives "0.10000000000000001" ("which is pretty horrible") is an incorrect assessment of round-trip success. The intermediate string is only half of the round trip.
Double exactly encodes about 2^64 different values. All encodable values are of the form limited_integer * 2^some_power. 0.1 is not one of them. 1.0/10 computes the mathematical quotient 0.1, but stores a slightly different Double value. The closest Double value and its two closest Double neighbors:
Before        0.099999999999999991673...
Closest       0.100000000000000005551...
After         0.100000000000000019428...
OP reported   0.10000000000000001
Digit count     12345678901234567
OP's example should then read: (0.100000000000000005551).ToString("G17") gives "0.10000000000000001", which is good.
Printing a Double with G17 provides 17 significant digits, enough to successfully round trip.
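A quick check (this snippet just demonstrates the round trip; the printed string is ugly but parses back to the bit-identical Double):

double d = 1.0 / 10;
string s = d.ToString("G17");   // "0.10000000000000001"
double back = double.Parse(s, System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(back == d);   // True - the ugly string still round-trips exactly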
For what sort of (Double) values does R fail to round-trip? And how badly?
For this, I go on memory. R sometimes used fewer than 17 significant digits, such as 15, to form the intermediate string. The algorithm used to determine the digit count sometimes came up a bit short, and hence "in some cases fails to successfully round-trip the original value".
Using G17 always works. For some values, fewer than 17 digits would also have worked. The downside of G17 shows up exactly in cases like this: fewer than 17 digits would have worked and would have produced a more pleasing, shorter intermediate string.
A pleasing, human-readable string is not the goal of round-tripping. The goal is to arrive at the same Double after going from value to string to value, even if the intermediate string has extra digits in select cases.
Is there a better way to get nicely formatted, reliable results?
"nicely formatted" is an additional burden to round trip. MS attempted to do so with R and failed in some cases, preferring to retain the same broken functionality than to fix it.
OP would be wise to avoid that path and forego the goal of nicely formatted intermediate string and focus on the round-trip goal of getting the final same value back.
Use G17.
We are considering writing doubles to R first, parsing to check the round-trip, and falling back to G17 only if that fails.
That would work if done correctly. To assess correctness, test your code against many values, and consider posting it with its test harness for code review.
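A minimal sketch of that verify-then-fall-back idea (the method name is illustrative, not a vetted implementation):

static string FormatRoundTrippable(double value)
{
    var inv = System.Globalization.CultureInfo.InvariantCulture;
    string s = value.ToString("R", inv);
    // Keep the shorter "R" string only if it parses back to the same value;
    // otherwise fall back to the always-sufficient "G17".
    return double.Parse(s, inv) == value ? s : value.ToString("G17", inv);
}

Note that a bit-for-bit comparison via BitConverter.DoubleToInt64Bits would additionally distinguish -0.0 from 0.0 and handle NaN, which == does not.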

Strange behavior when casting decimal to double

I'm experiencing a strange issue when casting decimal to double.
Following code returns true:
Math.Round(0.010000000312312m, 2) == 0.01m //true
However, when I cast this to double it returns false:
(double)Math.Round(0.010000000312312m, 2) == (double)0.01m //false
I've experienced this problem when I wanted to use Math.Pow and was forced to cast decimal to double since there is no Math.Pow overload for decimal.
Is this documented behavior? How can I avoid it when I'm forced to cast decimal to double?
Screenshot from Visual Studio:
Casting Math.Round to double gives me the following result:
(double)Math.Round(0.010000000312312m, 2)   0.0099999997764825821   double
(double)0.01m                               0.01                    double
UPDATE
Ok, I'm reproducing the issue as follows:
When I run the WPF application and check the output in the watch window just after it starts, I get true, just like in an empty project.
There is a part of the application that sends values from a slider to the calculation algorithm. I get a wrong result, and I put a breakpoint on the calculation method. Now, when I check the value in the watch window, I get false (without any modifications; I just refresh the watch window).
As soon as I can reproduce the issue in a smaller project I will post it here.
UPDATE2
Unfortunately, I cannot reproduce the issue in smaller project. I think that Eric's answer explains why.
People are reporting in the comments here that sometimes the result of the comparison is true and sometimes it is false.
Unfortunately, this is to be expected. The C# compiler, the jitter and the CPU are all permitted to perform arithmetic on doubles in more than 64 bit double precision, as they see fit. This means that sometimes the results of what looks like "the same" computation can be done in 64 bit precision in one calculation, 80 or 128 bit precision in another calculation, and the two results might differ in their last bit.
Let me make sure that you understand what I mean by "as they see fit". You can get different results for any reason whatsoever. You can get different results in debug and retail. You can get different results if you make the compiler do the computation in constants and if you make the runtime do the computation at runtime. You can get different results when the debugger is running. You can get different results in the runtime and the debugger's expression evaluator. Any reason whatsoever. Double arithmetic is inherently unreliable. This is due to the design of the floating point chip; double arithmetic on these chips cannot be made more repeatable without a considerable performance penalty.
For this and other reasons you should almost never compare two doubles for exact equality. Rather, subtract the doubles, and see if the absolute value of the difference is smaller than a reasonable bound.
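A minimal sketch of that check (the bound is an arbitrary placeholder; a reasonable value depends entirely on the scale of your data):

static bool CloseEnough(double x, double y, double bound)
{
    // Instead of x == y, test whether the difference is within a bound.
    return Math.Abs(x - y) < bound;
}

// e.g. CloseEnough((double)Math.Round(0.010000000312312m, 2), (double)0.01m, 1e-9) -> true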
Moreover, it is important that you understand why rounding a double to two decimal places is a difficult thing to do. A non-zero, finite double is a number of the form (1 + f) x 2^e, where f is a fraction with a denominator that is a power of two, and e is an exponent. Clearly it is not possible to represent 0.01 in that form, because there is no way to get a denominator equal to a power of ten out of a denominator equal to a power of two.
The double 0.01 is actually the binary number 1.0100011110101110000101000111101011100001010001111011 x 2^-7, which in decimal is 0.01000000000000000020816681711721685132943093776702880859375. That is the closest you can possibly get to 0.01 in a double. If you need to represent exactly that value then use decimal. That's why it's called decimal.
Incidentally, I have answered variations on this question many times on StackOverflow. For example:
Why differs floating-point precision in C# when separated by parantheses and when separated by statements?
Also, if you need to "take apart" a double to see what its bits are, this handy code that I whipped up a while back is quite useful. It requires that you install Solver Foundation, but that's a free download.
http://ericlippert.com/2011/02/17/looking-inside-a-double/
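For a dependency-free alternative, here is a minimal sketch using BitConverter (it handles normal, finite doubles only; subnormals, infinities, and NaN would need extra cases):

long bits = BitConverter.DoubleToInt64Bits(0.01);
bool negative = bits < 0;
int biasedExponent = (int)((bits >> 52) & 0x7FF);   // 11 exponent bits
long fraction = bits & 0xFFFFFFFFFFFFFL;            // 52 fraction bits (the f above)

// For a normal double, value = (1 + f / 2^52) * 2^(biasedExponent - 1023).
Console.WriteLine($"sign {(negative ? "-" : "+")}, exponent {biasedExponent - 1023}, fraction {fraction}");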
This is documented behavior. The decimal data type is more precise than the double type. So when you convert from decimal to double there is the possibility of data loss. This is why you are required to do an explicit conversion of the type.
See the following MSDN C# references for more information:
decimal data type: http://msdn.microsoft.com/en-us/library/364x0z75(v=vs.110).aspx
double data type: http://msdn.microsoft.com/en-us/library/678hzkk9(v=vs.110).aspx
casting and type conversion: http://msdn.microsoft.com/en-us/library/ms173105.aspx

'Beautify' number by rounding erroneous digits appropriately

I want my cake and to eat it. I want to beautify (round) numbers to the largest extent possible without compromising accuracy for other calculations. I'm using doubles in C# (with some string conversion manipulation too).
Here's the issue. I understand the inherent limitations of double representation (so please don't explain that). HOWEVER, I want to round the number in some way so it appears aesthetically pleasing to the end user (I am making a calculator). The problem is that rounding to X significant digits works in one case but not the other, whilst rounding by decimal place works in the other case but not the first.
Observe:
CASE A: Math.Sin(Math.PI) = 0.000000000000000122460635382238
CASE B: 0.000000000000001/3 = 0.000000000000000333333333333333
For the first case, I want to round by DECIMAL PLACES. That would give me the nice neat zero I'm looking for. Rounding by Sig digits would mean I would keep the erroneous digits too.
However for the second case, I want to round by SIGNIFICANT DIGITS, as I would lose tons of accuracy if I rounded merely by decimal places.
Is there a general way I can cater to both types of calculation?
I don't think it's feasible to do that to the result itself, and precision has nothing to do with it.
Consider this input: (1+3)/2^3 . You can "beautify" it by showing the result as sin(30) or cos(60) or 1/2 and a whole lot of other interpretations. Choosing the wrong "beautification" can mislead your user, making them think their function has something to do with sin(x).
If your calculator keeps all the initial input as variables you could keep all the operations postponed until you need the result and then make sure you simplify the result until it matches your needs. And you'll need to consider using rational numbers, e, Pi and other irrational numbers may not be as easy to deal with.
The best solution to this is to keep every bit you can get during calculations, and leave the display format up to the end user. The user should have some idea how many significant digits make sense in their situation, given both the nature of the calculations and the use of the result.
Default to a reasonable number of significant digits, allowing for a few calculations in the floating point format you are using internally - about 12 if you are using double. If the user changes the format, immediately redisplay in the new format.
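For example, a quick illustration with the two cases from the question (the output comments show what typical double math produces; exact trailing digits can vary with the platform's sin implementation):

double noise = Math.Sin(Math.PI);        // mathematically 0; about 1.2246e-16 in double
double tiny  = 0.000000000000001 / 3;    // genuinely small; about 3.3333e-16

Console.WriteLine(noise.ToString("G12")); // 1.22464679915E-16
Console.WriteLine(tiny.ToString("G12"));  // 3.33333333333E-16

Note that no fixed digit count can distinguish the two values, which is exactly why the display choice belongs to the user.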
The best solution is to use arbitrary-precision and/or symbolic arithmetic, although these result in much more complex code and slower speed. But since performance isn't important for a calculator (in the case of a button calculator, not one into which you enter whole expressions to calculate), you can use them without issue.
Anyway, there's a good trade-off, which is to use decimal floating point. You'll need to limit the input/output precision but use a higher precision for the internal representation, so that you can discard values very close to zero like the sin case above. For better results you could detect some edge cases, such as sine/cosine of multiples of 45 degrees, and directly return the exact result, as in the sketch below.
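A minimal sketch of that edge-case idea (a degree-based wrapper is assumed here, and only the multiples-of-180 case is shown):

static double SinDegrees(double degrees)
{
    double n = degrees % 360;     // reduce to one period
    if (n % 180 == 0) return 0;   // sin of any multiple of 180 degrees is exactly 0
    return Math.Sin(n * Math.PI / 180);
}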
Edit: just found a good solution, but I haven't had an opportunity to try it.
Here’s something I bet you never think about, and for good reason: how are floating-point numbers rendered as text strings? This is a surprisingly tough problem, but it’s been regarded as essentially solved since about 1990.
Prior to Steele and White’s "How to print floating-point numbers accurately", implementations of printf and similar rendering functions did their best to render floating point numbers, but there was wide variation in how well they behaved. A number such as 1.3 might be rendered as 1.29999999, for instance, or if a number was put through a feedback loop of being written out and its written representation read back, each successive result could drift further and further away from the original.
...
In 2010, Florian Loitsch published a wonderful paper in PLDI, "Printing floating-point numbers quickly and accurately with integers", which represents the biggest step in this field in 20 years: he mostly figured out how to use machine integers to perform accurate rendering! Why do I say "mostly"? Because although Loitsch's "Grisu3" algorithm is very fast, it gives up on about 0.5% of numbers, in which case you have to fall back to Dragon4 or a derivative.
Here be dragons: advances in problems you didn’t even know you had

c# Subtract is not accurate even with decimals?

I'm learning TDD and decided to create a Calculator class to start.
I did the basics first, and now I'm on the square root function.
I'm using this method to get the root: http://www.math.com/school/subject1/lessons/S1U1L9DP.html
I tested it with a few numbers, and I always get accurate answers.
It's pretty easy to understand.
Now I'm having a weird problem: with some numbers I get the right answer, and with some I don't.
I debugged the code and found out that I'm not getting the right answer when I subtract.
I'm using decimals to get the most accurate result.
when I do:
18 / 4.25
I am currently getting: 4.2352941176470588235294117647
when it should be: 4.2352941176470588235294117647059 (using Windows Calculator)
At the end of the road, this is the closest I get to the root of 18:
4.2426406871192851464050688705 ^ 2 = 18.000000000000000000000022892
My question is:
Can I get more accurate than this?
4.2352941176470588235294117647 contains 29 digits.
decimal is defined to have 28-29 significant digits. You can't store a more accurate number in a decimal.
What field of engineering or science are you working in where the 30th and more digits are significant to the accuracy of the overall calculation?
(It would also, possibly, help if you'd shown some more actual code. The only code you've shown is 18 / 4.25, which can't be an actual expression in your code, since the second number is a double literal, and you can't assign the result of this expression to a decimal without a cast).
If you need arbitrary precision, then there isn't a standard "BigRational" type, but there is a BigInteger. You could use that to construct a BigRational type if you need that (storing numerator and denominator as two separate integers). One guess of why there isn't a standard type yet is that decisions on when to e.g. normalize such rationals may affect performance or equality comparisons.
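A minimal sketch of such a type, assuming eager normalization (the name BigRational and this API are hypothetical):

using System;
using System.Numerics;

readonly struct BigRational
{
    public BigInteger Numerator { get; }
    public BigInteger Denominator { get; }

    public BigRational(BigInteger numerator, BigInteger denominator)
    {
        if (denominator.IsZero) throw new DivideByZeroException();
        // Normalize eagerly: divide out the GCD and keep the sign on the numerator.
        BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
        if (denominator.Sign < 0) gcd = -gcd;
        Numerator = numerator / gcd;
        Denominator = denominator / gcd;
    }

    public static BigRational operator /(BigRational a, BigRational b)
        => new BigRational(a.Numerator * b.Denominator, a.Denominator * b.Numerator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

With a type like this, 18 / 4.25 can be kept exact: new BigRational(1800, 425) normalizes to 72/17.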
Floating point calculations are not exact. Decimals improve the accuracy, because they are 128 bits long, but they are still floating point numbers.
Comparing two floating point numbers is not done with ==, but rather:
static bool SameDecimal(decimal a, decimal b)
{
    // Note the m suffix: a bare 1e-10 is a double literal and would not
    // compile in a comparison against a decimal.
    return Math.Abs(a - b) < 1e-10m;
}
This method will allow you to compare two decimals (I assume 1e-10 is a small enough difference for you, it should be for everyday uses).
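Tying this back to the original square-root question, here is a sketch of the divide-and-average method using that kind of tolerance test for convergence (the iteration cap and the 1e-27m tolerance are assumptions, chosen to sit near the edge of decimal's 28-29 significant digits):

static decimal Sqrt(decimal n)
{
    if (n < 0) throw new ArgumentOutOfRangeException(nameof(n));
    if (n == 0) return 0;

    decimal guess = n / 2;
    for (int i = 0; i < 100; i++)                 // cap iterations defensively
    {
        decimal next = (guess + n / guess) / 2;   // divide and average
        if (Math.Abs(next - guess) < 1e-27m)      // converged within decimal's limits
            return next;
        guess = next;
    }
    return guess;
}

// Sqrt(18m) ≈ 4.2426406871192851464050661726 (sqrt(18) to decimal's precision)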

C# Wrong conversion using Convert.ChangeType()

I am using Convert.ChangeType() to convert from Object (which I get from DataBase) to a generic type T. The code looks like this:
T element = (T)Convert.ChangeType(obj, typeof(T));
return element;
and this works great most of the time. However, I have discovered that if I try to convert something as simple as the return of the following SQL query
select 3.2
the above code (T being double) won't return 3.2, but 3.2000000000000002. I can't see why this is happening, or how to fix it. Please help!
What you're seeing is an artifact of the way floating-point numbers are represented in memory. There's quite a bit of information available on exactly why this is, but this paper is a good one. This phenomenon is why you can end up with seemingly anomalous behavior. A double or single should never be displayed to the user unformatted, and you should avoid equality comparisons like the plague.
If you need numbers that are accurate to a greater level of precision (ie, representing currency values), then use decimal.
This probably is because of floating point arithmetic. You probably should use decimal instead of double.
It is not a problem with Convert. Internally, the double type represents real numbers as binary fractions, and many decimal values such as 3.2 have no exact binary representation; that is why you got such a result. Depending on your purpose, use:
Either Decimal
Or precise formatting, e.g. {0:F2}
Or Math.Floor/Math.Ceiling
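A small demonstration of the difference (assuming the database driver materializes "select 3.2" as a System.Decimal, which is typical for a literal like this):

object fromDb = 3.2m;   // what 'select 3.2' typically comes back as

double d = (double)Convert.ChangeType(fromDb, typeof(double));
decimal m = (decimal)Convert.ChangeType(fromDb, typeof(decimal));

Console.WriteLine(d.ToString("G17"));   // 3.2000000000000002 - nearest double
Console.WriteLine(m);                   // 3.2 - decimal keeps the exact value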
