TimeSpan FromMilliseconds strange implementation?

I recently encountered some weird behaviour in the .NET TimeSpan implementation.
TimeSpan test = TimeSpan.FromMilliseconds(0.5);
double ms = test.TotalMilliseconds; // Returns 0
FromMilliseconds takes a double as parameter. However, it seems the value is rounded internally.
If I instantiate a new TimeSpan with 5000 ticks (.5 ms), the value of TotalMilliseconds is correct.
Looking at the TimeSpan implementation in Reflector reveals that the input is in fact cast to a long.
Why did Microsoft design the FromMilliseconds method to take a double as a parameter instead of a long (since a double value is useless given this implementation)?

The first consideration is why they selected double for these millisecond values in the first place. A long would have been an obvious choice, although there already is a perfectly good long property: Ticks, which is unambiguous, with a unit of 100 nanoseconds. But they picked double, probably with the intention of returning a fractional value.
That however created a new problem, one that was possibly only discovered later. A double can store only 15 significant digits. A TimeSpan can store 10,000 years. It is very desirable to convert from TimeSpan to milliseconds, then back to TimeSpan and get the same value.
That isn't possible with a double. Doing the math: 10,000 years is roughly 10000 x 365.4 x 24 x 3600 x 1000 = 315,705,600,000,000 milliseconds. Count off 15 digits, best a double can do, and you get exactly one millisecond as the smallest unit that can still be stored without round-off error. Any extra digits will be random noise.
Stuck between a rock and a hard place, the designers (testers?) had to choose between rounding the value when converting from TimeSpan to milliseconds, or doing it later when going from milliseconds to TimeSpan. They chose to do it early, a courageous decision.
Solve your problem by using the Ticks property and multiplying by 1E-4 to get milliseconds.
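A minimal illustration of that workaround, using the values from the question:

TimeSpan test = TimeSpan.FromTicks(5000); // 0.5 ms, built from ticks (1 tick = 100 ns)
double ms = test.Ticks * 1E-4;            // 0.5, no rounding involved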

This is by design, obviously. The documentation says as much:
The value parameter is converted to ticks, and that number of ticks is used to initialize the new TimeSpan. Therefore, value will only be considered accurate to the nearest millisecond.

Accepting a double is a logical design. You can have fractions of milliseconds.
What's happening internally is an implementation detail. Even if all current implementations (of the CLI) round it first, that doesn't have to be the case in the future.

The problem with your code is actually the first line, where you call FromMilliseconds. As noted previously, the remarks in the documentation state the following:
The value parameter is converted to ticks, and that number of ticks is used to initialize the new TimeSpan. Therefore, value will only be considered accurate to the nearest millisecond.
In reality, this statement is neither correct nor logically sound. In reverse order:
Ticks are defined as "one hundred nanoseconds". By this definition, the documentation should have been written as:
Therefore, value will only be considered accurate to the nearest tick, or one ten-millionth of a second.
Due to a bug or oversight, the value parameter is not converted directly to ticks prior to initializing the new TimeSpan instance. This can be seen in the reference source for TimeSpan, where the millis value is rounded prior to its conversion to ticks, rather than after. If maximum precision were to be preserved, this line of code should have read as follows (and the adjustment by 0.5 milliseconds 3 lines earlier would be removed):
return new TimeSpan((long)(millis * TicksPerMillisecond));
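A small sketch of the observable difference (behavior as described in this answer; the exact rounding details may vary between framework versions):

TimeSpan rounded = TimeSpan.FromMilliseconds(0.4);
Console.WriteLine(rounded.Ticks);            // 0 - the sub-millisecond input is rounded away

TimeSpan exact = new TimeSpan((long)(0.4 * TimeSpan.TicksPerMillisecond));
Console.WriteLine(exact.TotalMilliseconds);  // 0.4 - full 100 ns resolution preserved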
Summary:
The documentation for the various TimeSpan.From* methods, with the exception of FromTicks, should be updated to state that the argument is rounded to the nearest millisecond (without including the reference to ticks).

Or, you could do:
double x = 0.4;
TimeSpan t = TimeSpan.FromTicks((long)(TimeSpan.TicksPerMillisecond * x)); // where x can be a double
double ms = t.TotalMilliseconds; // returns 0.4
--sarcasm
TimeSpan converts the double of milliseconds to ticks, so "OBVIOUSLY" you can have a TimeSpan with less than 1 ms granularity.
--/sarcasm
This isn't obvious at all... why this isn't done inside the FromMilliseconds method is beyond me.
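If you need this in more than one place, here is a hedged sketch of a helper (FromMillisecondsExact is a made-up name, not a framework API):

static class TimeSpanUtil
{
    // Hypothetical helper: preserves sub-millisecond input down to tick (100 ns) resolution.
    public static TimeSpan FromMillisecondsExact(double milliseconds)
    {
        return TimeSpan.FromTicks((long)(milliseconds * TimeSpan.TicksPerMillisecond));
    }
}

// Usage: TimeSpanUtil.FromMillisecondsExact(0.4).TotalMilliseconds == 0.4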


Exact double precision by correct rounding

Although my question sounds trivial, it really is NOT. Hope you can help me.
I want to implement interval arithmetic in my .NET (C#) project. This means that every number is defined by a lower bound and an upper bound. This is helpful for problems like
1 / 3 = 0.333333333333333 (15 significant digits)
since you would then have
1 / 3 = [ 0.33333333333333 , 0.333333333333334 ] (14 significant digits each)
, so I know FOR SURE that the right answer lies between those two numbers. Without the interval representation I would already be carrying a rounding error (i.e. 0.0000000000000003).
To achieve this I wrote my own Interval type that overloads all standard operators like +-*/, etc. To make this type work correctly I need to be able to round the result of 1 / 3 in two directions. Rounding the result down will give me the lower bound for my interval, rounding the result up will give me the upper bound for my interval.
.NET has the Math.Round(double, int) method, which rounds the double to int decimal places. Looks great, but it can't be forced to round up or down. Math.Round(1.0/3.0, 14) would round down, but the up-rounding to 0.33...34 that I also need can't be achieved this way.
But there are Math.Ceiling and Math.Floor, you might say! Okay, those methods round to the next lower or upper integer. So if I want to round to 14 decimal places, I first need to rescale my result:
1 / 3 = 0.333333333333333 -> *E14 -> 33333333333333.3
So now I can call Math.Ceiling and Math.Floor and get both rounded results after scaling back:
33333333333333 & 33333333333334 -> /E14 -> 0.33333333333333 & 0.33333333333334
Looks great, but: let's say my number gets near double.MaxValue. I can't just multiply a value near double.MaxValue by E14, since the multiplication will overflow (the result becomes infinity). So this is no solution either.
And, to top all of these facts: All this fails even harder when trying to round 0.9999999999999999999999999 (more than 15 digits) since the internal representation is already rounded to 1 before I can even start trying to round down.
I could try to somehow parse a string containing the double but this won't help since (1/3 * 3).ToString() will already print 1 instead of 0.99...9.
Decimal does not work either: I don't want that much precision (14 digits are enough), but I still want the full double range!
In C++, where several interval arithmetic implementations exist, this problem can be solved by dynamically telling the processor to switch its rounding mode to, for example, "always down" or "always up". I couldn't find any way to do this in .NET.
So, do you have any ideas?
Thanks in advance!
Assume nextDown(x) is a function that returns the largest double that is less than x, and nextUp(x) is a function that returns the smallest double that is greater than x. See Get next smallest Double number for implementation ideas.
Where you would have rounded a lower bound result down, instead use the nextDown of the round-to-nearest result. Where you would have rounded an upper bound up, use the nextUp of the round-to-nearest result.
This method ensures the interval continues to contain the exact real-number result. It introduces extra rounding error: in some cases the lower bound will be one ULP smaller than it should be, and/or the upper bound will be one ULP bigger. However, it is a minimal widening of the interval, much less widening than you would get working in decimal or by suppressing low-significance bits.
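A minimal sketch of such functions using the IEEE 754 bit layout (assumes finite, non-NaN inputs; a production version would also need to handle infinities and NaN; on .NET Core 3.0+ Math.BitIncrement and Math.BitDecrement provide this directly):

static double NextUp(double x)
{
    if (x == 0) return double.Epsilon;            // smallest positive subnormal
    long bits = BitConverter.DoubleToInt64Bits(x);
    // For positive values the next bit pattern is the next larger double;
    // for negative values, moving toward zero makes the value larger.
    bits += x > 0 ? 1 : -1;
    return BitConverter.Int64BitsToDouble(bits);
}

static double NextDown(double x) => -NextUp(-x);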
This might be more like a long comment than a real answer.
This code returns an "interval" (I just use Tuple<,>, you can use your own Interval type) based on truncating the seven least significant bits:
static Tuple<double, double> GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(double number)
{
    if (double.IsInfinity(number) || double.IsNaN(number))
        return Tuple.Create(number, number); // maybe treat this case differently

    var i = BitConverter.DoubleToInt64Bits(number);
    const int numberOfBitsToClear = 7; // your seven, can change this value, must be below 52
    const long precision = 1L << numberOfBitsToClear;
    const long bitMask = ~(precision - 1L);

    // truncate i
    i &= bitMask;
    return Tuple.Create(BitConverter.Int64BitsToDouble(i), BitConverter.Int64BitsToDouble(i + precision));
}
Disclaimer: I am not sure if this is useful for any purpose. In particular not sure it is useful for interval arithmetic.
With this code, GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(1.0 / 3.0) returns the tuple (0.333333333333329, 0.333333333333336).
This code, just like the code you ask for in your question, has the obvious "issue" that if the original value is close to (or even equal to) one of the "round" numbers we use, then the returned interval is "skewed", with the original number being close to one of the ends of the interval. For example, with input 42.0 (already round), you get out the tuple (42, 42.0000000000009).
One good thing about this code is I expect it to be extremely fast.

Measure precision of timer (e.g. Stopwatch/QueryPerformanceCounter)

Given that the Stopwatch class in C# can use something like three different timers underneath, e.g.:
System timer, e.g. a precision of approx. ±10 ms; depending on the timer resolution, which can be set with timeBeginPeriod, it can be approx. ±1 ms.
Time Stamp Counter (TSC), e.g. with a tick frequency of 2.5 MHz, so 1 tick = 400 ns, ideally giving a precision of that order.
High Precision Event Timer (HPET), e.g. with a tick frequency of 25 MHz, so 1 tick = 40 ns, ideally giving a precision of that order.
how can we measure the observable precision of this? Precision being defined as
Precision refers to the closeness of two or more measurements to each other.
Now if the Stopwatch uses HPET does this mean we can use Stopwatch to get measurements of a precision equivalent to the frequency of the timer?
I don't think so, since this requires us to be able to use the timer with zero variance or a completely fixed overhead, which as far as I can tell is not true for Stopwatch. For example, when using HPET and calling:
var before_ticks = Stopwatch.GetTimestamp();
var after_ticks = Stopwatch.GetTimestamp();
var diff_ticks = after_ticks - before_ticks;
then the difference will be, say, approx. 100 ticks, or 4000 ns, and it will have some variance too.
So how could one experimentally measure the observable precision of the Stopwatch, in a way that covers all the possible timer modes underneath?
My idea would be to search for the minimum number of ticks != 0, to first establish the overhead in ticks of the Stopwatch. For the system timer this would be 0 up until e.g. 10 ms, which is 10 * 1000 * 10 = 100,000 ticks (since the system timer has a tick resolution of 100 ns), but the precision is far from this. For HPET it will never be 0, since the overhead of calling Stopwatch.GetTimestamp() is longer than the tick period of the timer.
But this says nothing about how precise we can measure using the timer. My definition would be how small a difference in ticks we can measure reliably.
The search could be performed by measuring a different number of iterations, like this:
var before = Stopwatch.GetTimestamp();
for (int i = 0; i < iterations; ++i)
{
    action(); // calling a no-op delegate Action, since this cannot be inlined
}
var after = Stopwatch.GetTimestamp();
First, a lower bound could be found: the smallest number of iterations for which all of, say, 10 measurements yield a non-zero number of ticks; save these measurements in long ticksLower[10]. Then, the closest number of iterations whose tick difference is always higher than any of those first 10 measurements could be found; save these in long ticksUpper[10].
The worst-case precision would then be the highest tick count in ticksUpper minus the lowest tick count in ticksLower.
Does this sound reasonable?
Why do I want to know the observable precision of the Stopwatch? Because it can be used to determine how long you would need to measure to reach a given level of precision in micro-benchmarking measurements. I.e. for 3-digit precision, the measured length should be more than 1000 times the precision of the timer. Of course, one would measure multiple times with this length.
The Stopwatch class exposes a Frequency property that is the direct result of calling SafeNativeMethods.QueryPerformanceFrequency. Here is an excerpt of the property page:
The Frequency value depends on the resolution of the underlying timing mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Frequency value reflects the frequency of that counter. Otherwise, the Frequency value is based on the system timer frequency.
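Based on that, a small sketch for inspecting what the timer underneath offers on a given machine:

using System;
using System.Diagnostics;

class TimerInfo
{
    static void Main()
    {
        Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency: " + Stopwatch.Frequency + " ticks/s");
        // Duration of a single Stopwatch tick in nanoseconds.
        Console.WriteLine("Resolution: " + (1e9 / Stopwatch.Frequency) + " ns/tick");
    }
}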

How do I truncate milliseconds off "Ticks" without converting to datetime?

I have two times in Ticks like so:
// 2016-01-22T17:34:52.648Z
var tick1 = 635890808926480754;
// 2016-01-22T17:34:52.000Z
var tick2 = 635890808920000000;
Now, as you can see, comparing these two numbers with tick1 == tick2 returns false,
although the dates are the same (apart from the milliseconds).
I would like to truncate the milliseconds off these numbers without converting them to a DateTime (because that would reduce efficiency).
I have looked at Math.Round which says:
Rounds a value to the nearest integer or to the specified number of fractional digits.
and also Math.Truncate neither of which I think do what I need.
Looking at DateTime.Ticks, it says:
A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond, or 10 million ticks in a second.
Therefore I need to round the number down to the nearest ten million.
Is this possible?
You could use integer division:
if (tick1 / TimeSpan.TicksPerSecond == tick2 / TimeSpan.TicksPerSecond)
This works because dividing a long/int by a long/int yields a long/int, truncating the fractional portion.
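Applied to the values from the question:

long tick1 = 635890808926480754;   // 2016-01-22T17:34:52.648Z
long tick2 = 635890808920000000;   // 2016-01-22T17:34:52.000Z

// Integer division discards the sub-second part of each value.
bool sameSecond = tick1 / TimeSpan.TicksPerSecond == tick2 / TimeSpan.TicksPerSecond;
Console.WriteLine(sameSecond);     // True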
You can use this:
if (Math.Abs(tick1 - tick2) < TimeSpan.TicksPerSecond)
which avoids doing divisions. (Note that this checks whether the two values are within one second of each other, which is subtly different from falling within the same whole second.)
You may adjust the precision you need with any of the following:
TimeSpan.TicksPerDay
TimeSpan.TicksPerHour
TimeSpan.TicksPerMinute
TimeSpan.TicksPerSecond
TimeSpan.TicksPerMillisecond
Divide it by TimeSpan.TicksPerSecond (ten million) like this:
long seconds = 635890808926480754 / 10000000;
// seconds == 63589080892

What are the value of Stopwatch's ticks if their duration varies?

I'm not talking about the bloodsucking spider-like disease-spreader, but rather the sort of tick I recorded here:
Stopwatch noLyme = new Stopwatch();
noLyme.Start();
// . . .
noLyme.Stop();
MessageBox.Show(string.Format(
    "elapsed milliseconds == {0}, elapsed ticks == {1}",
    noLyme.ElapsedMilliseconds, noLyme.ElapsedTicks));
What the message box showed me was 17357 milliseconds and 56411802 ticks; this equates to 3250.089416373797 ticks per millisecond, or approximately 3.25 million ticks per second.
Since the ratio is such an odd one (3250.089416373797:1), I assume the time length of a tick changes based on hardware used, or other factors. That being the case, in what practical way are tick counts used? To me, it seems milliseconds hold more value. IOW: why would I care about ticks (the variable time slices)?
From the documentation (with Frequency being another property of the Stopwatch):
Each tick in the ElapsedTicks value represents the time interval equal to 1 second divided by the Frequency. (Stopwatch.ElapsedTicks, MSDN)
Ticks are useful if you need very precise timing based on the specifics of your hardware.
You would use ticks if you want to know a very precise performance measurement that is specific to a given machine. Internal hardware mechanisms determine the conversion from ticks to actual time.
Ticks are the raw, low-level units in which the hardware measures time.
It's like asking "what use are bits when we can use ints?" Well, if we didn't have bits, ints wouldn't exist!
However, ticks can be useful. Converting ticks to milliseconds is a costly process, so when measuring accurately you can count everything in ticks and convert the results to seconds once at the end of the process. And when comparing measurements, absolute values may not be relevant; it may only be the relative differences that are of interest.
Of course, in these days of high-level languages and multitasking there aren't many cases where you would go to the bare metal in this way, but why shouldn't the raw hardware value be exposed through the higher-level interfaces?
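As a sketch of that pattern (DoWork is a hypothetical workload; the only conversion happens once, at the end):

long totalTicks = 0;
for (int i = 0; i < 100; i++)
{
    long start = Stopwatch.GetTimestamp();
    DoWork();                                        // hypothetical workload being measured
    totalTicks += Stopwatch.GetTimestamp() - start;  // accumulate in raw ticks
}
double seconds = (double)totalTicks / Stopwatch.Frequency; // convert once, at the end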

Should I worry about losing precision with this DateTime math?

This code:
if (dt.Subtract(prevDt).TotalMinutes == 15)
("dt" and "prevDt" are DateTime vars that contain values such as "7/20/2012 7:30:00 AM" and "7/20/2012 7:45:00 AM")
...causes ReSharper to warn me with:
"Comparison of floating point numbers with equality operator. Possible loss of precision while rounding values."
Is this a valid warning, and if so, how would I appease it? I wish ReSharper were a little more like Eclipse, which offers to fix things it complains about.
At any rate, the code seems to work fine, although I'd rather not have it stink up the joint if this is a code smell.
If you are sure that your timestamps are exactly on 15 minute boundaries and not a few milliseconds off, then your code will work fine. Values that can be represented exactly as an int can also be represented exactly as a double.
If you want to try to rewrite your code to avoid the warning, you might want to try this:
if (prevDt.AddMinutes(15) == dt)
No, it's not a valid warning, provided your dates will always be exactly 15 minutes apart, with not even a second, millisecond, or tick of difference.
You may compare the Minutes property (together with Days, Hours, and so on) to check only the portions of the TimeSpan you care about (i.e. ignore seconds).
Otherwise, if your values could ever contain seconds or milliseconds, it may be better to check whether TotalMinutes is within a small tolerance of 15 rather than exactly equal to it.
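For example, a hedged sketch of such a tolerance check (the epsilon of one second is illustrative):

double minutes = dt.Subtract(prevDt).TotalMinutes;
if (Math.Abs(minutes - 15) < 1.0 / 60)   // within one second of 15 minutes
{
    // ...
}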
