Best way to implement a high resolution DateTime.UtcNow in C#?

I am trying to implement a time service that will report time with greater accuracy than 1 ms. I thought an easy solution would be to take an initial measurement and use Stopwatch to add a delta to it. The problem is that this method seems to diverge extremely quickly from wall time. For example, the following code attempts to measure the divergence between wall time and my high resolution clock:
using System;
using System.Threading;

public static void Main(string[] args)
{
    System.Diagnostics.Stopwatch s = new System.Diagnostics.Stopwatch();
    DateTime baseDateTime = DateTime.UtcNow;
    s.Start();
    long counter = 0;
    while (true)
    {
        DateTime utcnow = DateTime.UtcNow;
        DateTime hpcutcnow = baseDateTime + s.Elapsed;
        Console.WriteLine(String.Format("{0}) DT:{1} HP:{2} DIFF:{3}",
            ++counter, utcnow, hpcutcnow, utcnow - hpcutcnow));
        Thread.Sleep(1000);
    }
}
I am diverging at a rate of about 2 ms/minute on fairly recent server hardware.
Is there another time facility in Windows that I am not aware of that will be more accurate? If not, is there a better approach to creating a high resolution clock, or a 3rd party library I should be using?

Getting an accurate clock is difficult. Stopwatch has very high resolution, but it is not accurate: it derives its frequency from a signal in the chipset, which operates at typical electronic-part tolerances. Cut-throat competition in the hardware business has displaced the expensive crystal oscillators that had a guaranteed and stable frequency.
DateTime.UtcNow isn't all that accurate either, but it gets help. Windows periodically contacts a time service (the default one is time.windows.com) to obtain an update from a high quality clock, and uses it to recalibrate the machine's clock, inserting small adjustments to make the clock catch up or slow down.
You need much bigger tricks to get it accurate down to a millisecond. You can only get a guarantee like that for code that runs in kernel mode, at interrupt priority so it cannot be pre-empted by other code, and with its code and data pages locked so it can't be hit with page faults. Commercial solutions use a GPS radio to read the clock signal of the GPS satellites, backed up by an oscillator that runs in an oven for temperature stability. Reading such a clock is the hard problem: a sub-millisecond clock source isn't much use when the program reading it can be pre-empted by the operating system just as it obtained the time and not start running again until ~45 msec later. Or worse.
DateTime.UtcNow is accurate to 15.625 milliseconds and stable over very long periods thanks to the time service updates. Going lower than that just doesn't make much sense; you can't get the execution guarantees you need in user mode to take advantage of it.

Apparently Windows 8 / Server 2012 added a new API specifically for getting high resolution timestamps: GetSystemTimePreciseAsFileTime. I haven't had a chance to play around with it yet, but it looks promising.
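For reference, a minimal P/Invoke sketch of how that API could be wrapped (assuming Windows 8 / Server 2012 or later; the class name is just illustrative, and the call throws EntryPointNotFoundException on older systems):
using System;
using System.Runtime.InteropServices;

static class PreciseClock
{
    // Available on Windows 8 / Server 2012 and later.
    [DllImport("kernel32.dll", ExactSpelling = true)]
    private static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    public static DateTime UtcNow
    {
        get
        {
            long fileTime;
            GetSystemTimePreciseAsFileTime(out fileTime);
            return DateTime.FromFileTimeUtc(fileTime);
        }
    }
}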

3rd party library
I tried to create my own based on several Internet sources. Here is a link to it: https://github.com/Anonymous87549236/HighResolutionDateTime/releases .
But then I realized that in .NET Core the resolution is already the maximum, so I think .NET Core's implementation is the best option.

Related

Is DateTime.Now an I/O bound operation?

What happens when you call DateTime.Now?
I followed the property code in Reflector and it appears to add the time zone offset of the current locale to UtcNow. Following UtcNow led me, step by step, to a Win32 API call.
I reflected on it and asked a related question but haven't received a satisfactory response yet. From the links in the comments on that question, I infer that there is a hardware unit that keeps time. But I also want to know what unit it keeps time in, and whether or not it uses the CPU to convert time into a human readable unit. That would shed some light on whether the retrieval of date and time information is I/O bound or compute bound.
You are deeply in undocumented territory with this question. Time is provided by the kernel: the underlying native API call is NtQuerySystemTime(). This does get tinkered with across Windows versions - Windows 8 especially heavily altered the underlying implementation, with visible side-effects.
It is I/O bound in nature: time is maintained by the RTC (Real Time Clock) which used to be a dedicated chip but nowadays is integrated in the chipset. But there is very strong evidence that it isn't I/O bound in practice. Time updates in sync with the clock interrupt so very likely the interrupt handler reads the RTC and you get a copy of the value. Something you can see when you tinker with timeBeginPeriod().
And you can see when you profile it that it only takes ~7 nanoseconds on Windows 10 - entirely too fast to be I/O bound.
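A rough way to reproduce that kind of number yourself (a quick sketch, not a rigorous benchmark; results vary with hardware and Windows version):
using System;
using System.Diagnostics;

class UtcNowCost
{
    static void Main()
    {
        const int iterations = 10000000;
        DateTime last = default(DateTime);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            last = DateTime.UtcNow;   // the call being measured
        }
        sw.Stop();

        double nsPerCall = sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations;
        Console.WriteLine($"~{nsPerCall:F1} ns per DateTime.UtcNow call (last value: {last:O})");
    }
}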
You seem to be concerned with blocking. There are two cases where you'd want to avoid that.
On the UI thread it's about latency. It does not matter what you do (IO or CPU): it can't take long, otherwise it freezes the UI thread. UtcNow is super fast, so it's not a concern.
Sometimes, non-blocking IO is used as a way to scale throughput as more load is added. Here, the only reason is to save threads, because each thread consumes a lot of resources. Since there is no async way to call UtcNow, the question is moot: you just have to call it as is.
Since time on Windows usually advances at 60 Hz, I'd assume that a call to UtcNow reads from an in-memory variable that is written to at 60 Hz. That makes it CPU bound. But it does not matter either way.
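If you want to see that update granularity for yourself, here is a small sketch that spins until UtcNow changes and prints the step size (on classic .NET Framework the steps are typically around 1 to 15.6 ms, while .NET Core on recent Windows reports much finer steps):
using System;

class UtcNowResolution
{
    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            DateTime start = DateTime.UtcNow;
            DateTime next = start;

            // Busy-wait until the reported time actually changes.
            while (next == start)
            {
                next = DateTime.UtcNow;
            }

            Console.WriteLine($"UtcNow advanced by {(next - start).TotalMilliseconds:F4} ms");
        }
    }
}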
.NET relies on the API. Here is what MSDN has to say about the API:
https://msdn.microsoft.com/de-de/library/windows/desktop/ms724961(v=vs.85).aspx
When the system first starts, it sets the system time to a value based on the real-time clock of the computer and then regularly updates the time [...] GetSystemTime copies the time to a SYSTEMTIME [...]
I have found no reliable sources to back up my claim that the time is stored as a SYSTEMTIME structure, updated therein, and just copied into the receiving buffer of GetSystemTime when called. The smallest logical unit is 100 ns from the NtQuerySystemTime system call, but we end up with about 1 millisecond of resolution in the CLR's DateTime object. Resolution is not always the same.
We might be able to figure that out for Mono on Linux, but hardly for Windows, given that the API code itself is not public. So here is an assumption: the current time is a variable in the kernel address space. It will be updated by the OS (frequently by the system clock timer interrupt, less frequently perhaps from a network source -- the documentation mentions that callers may not rely on monotonic behavior, as a network sync can correct the current time backwards). The OS will synchronize access to prevent concurrent writes, but otherwise it will not be an I/O-expensive operation.
On recent computers, the timer interval is no longer fixed, and can be controlled by the BIOS and OS. Applications can even request lower or higher clock rates: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted
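For illustration, the timer interval can be requested from managed code via the winmm.dll multimedia timer API; a minimal sketch (the request is machine-wide and costs power, as the linked article explains):
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class TimerResolutionDemo
{
    [DllImport("winmm.dll", ExactSpelling = true)]
    private static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll", ExactSpelling = true)]
    private static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1);          // ask for a 1 ms system timer interval
        try
        {
            Thread.Sleep(5);         // sleeps much closer to 5 ms than with the default interval
        }
        finally
        {
            timeEndPeriod(1);        // always undo the request
        }
    }
}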

What is the meaning of timer precision and resolution?

I don't understand the meaning of timer precision and resolution. Can anyone explain it to me?
NOTE: This question is related to Stopwatch.
Accuracy and precision are opposing goals, you can't get both. An example of a very accurate timing source is DateTime.UtcNow. It provides absolute time that's automatically corrected for clock rate errors by the kernel, using a timing service to periodically re-calibrate the clock. You probably heard of time.windows.com, the NTP server that most Windows PC use. Very accurate, you can count on less than a second of error over an entire year. But not precise, the value only updates 64 times per second. It is useless to time anything that takes less than a second with any kind of decent precision.
The clock source for Stopwatch is very different. It uses a free running counter that is driven by a frequency source available somewhere in the chipset. This used to be a dedicated crystal running at the color burst frequency (3.579545 MHz), but relentless cost cutting has eliminated that from most PCs. Stopwatch is very precise, as you can tell from its Frequency property. You should get something between a megahertz and the cpu clock frequency, allowing you to time down to a microsecond or better. But it is not accurate; it is subject to electronic part tolerances. Particularly mistrust any Frequency beyond a gigahertz: that's derived from a multiplier, which also multiplies the error. And beware the Heisenberg principle: starting and stopping the Stopwatch takes non-zero overhead that will affect the accuracy of very short measurements. Another common accuracy problem with Stopwatch is the operating system switching out your thread to allow other code to run. You need to take multiple samples and use the median value.
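A sketch of the take-multiple-samples-and-use-the-median idea (the helper name is just illustrative):
using System;
using System.Diagnostics;
using System.Linq;

static class MedianTiming
{
    // Times an action several times and returns the median, which is less
    // sensitive to the occasional sample inflated by a thread switch.
    public static TimeSpan Median(Action action, int samples = 9)
    {
        var results = new TimeSpan[samples];
        for (int i = 0; i < samples; i++)
        {
            var sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            results[i] = sw.Elapsed;
        }
        return results.OrderBy(t => t).ElementAt(samples / 2);
    }
}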
They are the same as with any measurement. See this Wikipedia article for more details --
http://en.wikipedia.org/wiki/Accuracy_and_precision
There are different types of timers in .NET (3 or 4 of them, if I remember correctly), each working with its own algorithm. The precision of a timer means how accurately it informs the consuming application of the ticking events. For example, if you use a timer and set it to trigger its tick event every 1000 ms, the precision of the timer means how close to the specified 1000 ms it will actually tick.
For more information (at least in C#), I suggest you read the MSDN page on timers:
From MSDN Stopwatch Class: (emphasis mine)
"The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time. Use the Frequency and IsHighResolution fields to determine the precision and resolution of the Stopwatch timing implementation."

What is best way to measure the time cycles for a C# function?

Really, I'm looking for a good function that measures the time cycles accurately for a given C# function under the Windows operating system. I tried these functions, but neither gives an accurate measurement:
DateTime StartTime = DateTime.Now;
// ...code to be measured...
TimeSpan ts = DateTime.Now.Subtract(StartTime);
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//code to be measured
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
Really, each time I call them they give me a different time for the same function.
Please, if anyone knows a better way to measure time consumption accurately, please help me. Thanks a lot.
The Stopwatch is the recommended way to measure time it takes for a function to execute. It will never be the same from run to run due to various software and hardware factors, which is why performance analysis is usually done on a large number of runs and the time is averaged out.
"they give me different time for the same function" - that's expected. Things fluctuate because you are not the only process running on a system.
Run the code you want to time in a large loop to average out any fluctuations (divide total time by the number of loops).
Stopwatch is an accurate timer, and is more than adequate for timing most situations.
const int numLoops = 1000000; // ...or whatever number is appropriate

Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();

for (int i = 0; i < numLoops; i++)
{
    // code to be timed...
}

stopWatch.Stop();

TimeSpan elapsedTotal = stopWatch.Elapsed;
double timeMs = elapsedTotal.TotalMilliseconds / numLoops;
The gold standard is to use Stopwatch. It is a high resolution timer and it works very well.
I'd suggest you check the elapsed time using .Elapsed.TotalMilliseconds, as you get a double, rather than .Elapsed.Milliseconds, which gives you an int. This might be throwing your results off.
Also, you might find that garbage collections occur during your timing tests and these can significantly change the resulting time. It's useful to check the GC collection count before and after your timing test and discard the result if any garbage collections occurred.
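A sketch of that check (the helper name is illustrative; it discards the measurement if any collection in generations 0-2 occurred):
using System;
using System.Diagnostics;

static class GcAwareTiming
{
    // Returns the elapsed time, or null if a garbage collection happened
    // during the measurement and the result should be discarded.
    public static TimeSpan? Measure(Action action)
    {
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();

        bool gcOccurred = GC.CollectionCount(0) != gen0
                       || GC.CollectionCount(1) != gen1
                       || GC.CollectionCount(2) != gen2;

        return gcOccurred ? (TimeSpan?)null : sw.Elapsed;
    }
}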
Otherwise your results can vary simply because other threads and processes take over the CPU and other system resources during your tests. There's not much you can do here except run your tests multiple times and statistically analyze your results by calculating means, standard deviations, etc.
I hope this helps.
While it is possible for you to measure your code in clock cycles, it will still be just as prone to variability as measuring in seconds and will be much less useful (because seconds are just as good, if not better, a unit of measurement than clock cycles). The only way to get a measurement that is unaffected by other processes is to ensure none are running, and you can't do that on Windows -- the OS itself will always be running doing some things, because it's not a single-process OS.
The closest you can come to the measurement you want is to build and run your code as described here. You can then view the x86 assembly for the JIT'd code for the methods you want to time by setting a breakpoint at the start of your code, and then stepping through. You can cross-reference each x86 instruction with its cycle timing in the Intel architecture manuals, and add them up to get an accurate cycle count.
This is, of course, extremely painful and basically useless. It also might be invalidated by code changes that cause the JIT to take slightly different approaches to producing x86 from your IL.
You need a profiler to measure code execution (see What Are Some Good .NET Profilers? to start your search).
Looking at your comments, it is unclear what you are trying to optimize. Generally you only need to go down to measuring CPU clock cycles when your code is executed thousands of times and is purely CPU bound; in other cases per-function execution time is usually enough. But you are saying that your code is too slow to run that many times to calculate an average time with Stopwatch.
You also need to figure out whether the CPU is the bottleneck for your application or whether something else makes it slow. Looking at the CPU % in Task Manager can give you information on this: less than 100% CPU usage pretty much guarantees that something else (i.e. network or disk activity) is making the program slow.
Basically, providing more detail on the type of code you are trying to measure, and on your performance goals, will make it much easier to help you.
To echo the others: The Stopwatch class is the best way to do this.
To answer your questions about only measuring clock cycles: the fact that you're running on a multi-tasking OS on a modern processor makes measuring clock cycles almost useless. A context switch has a good chance of evicting your code and data from the processor's cache, and the OS might decide to swap your working set out in the meantime.
The processor could decide to reorder your instructions based on cache waits or memory accesses, and execute what it can while it's waiting. Or it may not if it is in the cache.
So, in short, performing multiple runs and averaging them is really the only way to go.
To get less jitter in the timing, you could elevate the priority of the thread/process, but this can result in a slew of other issues (bumping to real-time priority and getting stuck in a long loop will essentially stop all other processing; if a bug occurs and you get stuck in an infinite loop, your only choice is the reset button), and it is not recommended at all, especially on a user's computer or in a production environment. And since you can't do that where it matters, it makes the benchmarks you run on your machine, with any priority modifications, invalid.
How about using Environment.TickCount to capture start and end? Note that TickCount is in milliseconds, so the difference should go through TimeSpan.FromMilliseconds() rather than TimeSpan.FromTicks().
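A minimal sketch of that approach (note that Environment.TickCount only advances at the system timer rate, so it is coarser than Stopwatch):
using System;
using System.Threading;

class TickCountTiming
{
    static void Main()
    {
        int start = Environment.TickCount;
        Thread.Sleep(250);                         // stand-in for the work being timed
        int elapsedMs = Environment.TickCount - start;

        TimeSpan elapsed = TimeSpan.FromMilliseconds(elapsedMs);
        Console.WriteLine($"Elapsed: {elapsed.TotalMilliseconds} ms");
    }
}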

How does XNA timing work?

How does XNA maintain a consistent and precise 60 FPS frame rate? Additionally, how does it maintain such precise timing without pegging the CPU at 100%?
While luke's code above is theoretically right, the methods and properties used are not the best choices:
As the precision of DateTime.Now is only about 30 ms (see C# DateTime.Now precision; give or take 20 ms), its use for high performance timing is not advisable (at 60 FPS a frame is about 16 ms). System.Diagnostics.Stopwatch is the timer of choice for real time .NET.
Thread.Sleep suffers from the same precision/resolution problem and is not guaranteed to sleep for only the specified time.
The current XNA FX seems to hook into the Windows message loop and execute its internal pre-update each step, calling Game.Update only if the elapsed time since the last update matches the specified framerate (e.g. every 16 ms for the default settings). If you really want to know how the XNA FX does the job, Reflector is your friend :)
Random tidbit: Back in the XNA GameStudio 1.0 Alpha/Beta time frame there were quite a few blog posts about the “perfect WinForms game loop”, albeit I fail to find them now…
I don't know specifically how XNA does it, but when playing around with OpenGL a few years ago I accomplished the same thing using some very simple code.
At the core of it I assume XNA has some sort of rendering loop. It may or may not be integrated with a standard event processing loop, but for the sake of example let's assume it isn't. In that case you could write something like this:
TimeSpan FrameInterval = TimeSpan.FromMilliseconds(1000.0 / 60.0);
DateTime PrevFrameTime = DateTime.MinValue;
while (true)
{
    DateTime CurrentFrameTime = DateTime.Now;
    TimeSpan diff = CurrentFrameTime - PrevFrameTime;
    if (diff >= FrameInterval)
    {
        DrawScene();
        PrevFrameTime = CurrentFrameTime;
    }
    else
    {
        Thread.Sleep(FrameInterval - diff);
    }
}
In reality you would probably use something like Environment.TickCount or a Stopwatch instead of DateTime (it would be more accurate), but I think this illustrates the point. This should only call DrawScene about 60 times a second, and the rest of the time the thread will be sleeping, so it will not consume any CPU time.
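For example, a variant of the same loop built on Stopwatch rather than DateTime, as suggested above (a sketch; DrawScene stands in for whatever the renderer does):
using System;
using System.Diagnostics;
using System.Threading;

class FixedStepLoop
{
    static readonly TimeSpan FrameInterval = TimeSpan.FromMilliseconds(1000.0 / 60.0);

    static void Run()
    {
        var frameTimer = Stopwatch.StartNew();
        while (true)
        {
            TimeSpan remaining = FrameInterval - frameTimer.Elapsed;
            if (remaining <= TimeSpan.Zero)
            {
                frameTimer.Restart();
                DrawScene();
            }
            else
            {
                // Sleep off the remaining time; precision is limited by the system timer interval.
                Thread.Sleep(remaining);
            }
        }
    }

    static void DrawScene() { /* placeholder for rendering */ }
}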
Games should run at 60 fps, but that doesn't mean they will. That's actually an upper limit for a released game.
If you run a game in debug mode you can get much higher frame rates - for example, a blank starter template on my laptop in debug mode runs well over 1,000 fps.
That being said, the XNA framework in a released game will do its best to run at 60 fps - but the code you include in your project can lower that performance. For example, something that constantly triggers the garbage collector will normally cause a dip in the game's fps, as will putting complex math in the Update or Draw methods, since they fire every frame - which would usually be a bit excessive. There are a number of things to keep in mind to keep your game as streamlined as possible.
If you are asking how the XNA framework enforces that ceiling - that I can't really explain - but I can say that how you lay out your code, and what you do in it, can definitely hurt this number, and it doesn't always have to be CPU related. In the case of garbage collection it's just the cleaning up of RAM, which may not show a spike in CPU usage at all but can still impact your FPS, depending on the amount of garbage and the interval at which it has to run.
You can read all about how the XNA timer was implemented here: Game timing in XNA Game Studio. Basically, it tries to wait 1/60 of a second before continuing the loop again; also note that Update can be called multiple times before a render if XNA needs to "catch up".

How accurate is System.Diagnostics.Stopwatch?

How accurate is System.Diagnostics.Stopwatch? I am trying to do some metrics for different code paths and I need it to be exact. Should I be using Stopwatch, or is there another solution that is more accurate?
I have been told that sometimes Stopwatch gives incorrect information.
I've just written an article that explains how a test setup must be done to get high accuracy (better than 0.1 ms) out of the stopwatch. I think it should explain everything.
http://www.codeproject.com/KB/testing/stopwatch-measure-precise.aspx
The System.Diagnostics.Stopwatch class does accurately measure elapsed time, but the way the ElapsedTicks property works has led some people to the conclusion that it is not accurate, when they really just have a logic error in their code.
The reason that some developers think that the Stopwatch is not accurate is that the ElapsedTicks from the Stopwatch DO NOT EQUATE to the Ticks in a DateTime.
The problem arises when the application code uses the ElapsedTicks to create a new DateTime.
var watch = new Stopwatch();
watch.Start();
// ... perform a set of operations ...
watch.Stop();
var wrongDate = new DateTime(watch.ElapsedTicks); // This is the WRONG value.
If necessary, the stopwatch duration can be converted to a DateTime in the following way:
// This converts stopwatch ticks into DateTime ticks.
// First convert to TimeSpan, then convert to DateTime
var rightDate = new DateTime(watch.Elapsed.Ticks);
Here is an article that explains the problem in more detail:
http://geekswithblogs.net/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
Note that the content is no longer available at the original link. Here is a reference to the archived content from the Wayback Machine:
https://web.archive.org/web/20190104073827/http://geekswithblogs.net:80/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
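To make the distinction concrete, here is a small sketch of the explicit conversion, which goes through Stopwatch.Frequency:
using System;
using System.Diagnostics;
using System.Threading;

class TickConversion
{
    static void Main()
    {
        var watch = Stopwatch.StartNew();
        Thread.Sleep(100);                    // stand-in for the work being timed
        watch.Stop();

        // Stopwatch ticks are in units of 1 / Stopwatch.Frequency seconds;
        // TimeSpan/DateTime ticks are always 100 ns.
        long timeSpanTicks = watch.ElapsedTicks * TimeSpan.TicksPerSecond / Stopwatch.Frequency;
        Console.WriteLine(TimeSpan.FromTicks(timeSpanTicks));
        Console.WriteLine(watch.Elapsed);     // same value, computed by the class itself
    }
}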
The Stopwatch class returns different values under different configurations, as Frequency depends on the installed hardware and operating system.
Using the Stopwatch class we can get only a rough estimate of execution time. Each execution returns a different value, so we have to take the average over several executions.
More info: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
First, "exact" is of course not a possible or meaningful concept when talking about time or space, since no empirical measurement of a physical magnitude can ever claim to be exact.
Second, David Bolton's blog article may be useful. I'm quoting:
If this was timed with the high resolution counter then it will be accurate to microseconds. It is actually accurate to nanoseconds (10⁻⁹ seconds, i.e. a billionth of a second), but there is so much other stuff going on that nanosecond accuracy is really a bit pointless. When doing timing or benchmarking of code, you should do a number of runs and take the average time - because of other processes running under Windows, how much swapping to disk is occurring, etc., the values between two runs may vary.
MSDN has some examples for the Stopwatch. They also show how accurate it is, to within nanoseconds. Hope this helps!
Why don't you profile your code instead of focusing on micro-benchmarks?
There are some good Open Source profilers like:
NProf
Prof-It for C#
NProfiler
ProfileSharp
In addition to seconding the advice of HUAGHAGUAH above, I'd add that you should be VERY skeptical of micro-benchmarks in general. While close-focused performance testing has a legitimate place, it's very easy to tweak an unimportant detail. So write and verify code that is designed for readability and clarity, then profile it to find out where the hot spots are (or whether there are any worth worrying about), and then tune (only) those portions.
I recall working with a programmer who micro-optimized a bit of code that executed while the system waited for human input. The time savings absolutely disappeared in the between-keystroke lag!
If you want more precise timings, take a look at QueryPerformanceCounter (MSDN link for QueryPerformanceCounter). A neat implementation is given here. The example loads coredll.dll for CE; for Windows you should load kernel32.dll, as stated in the MSDN documentation.
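A minimal desktop-Windows sketch of those declarations (loading kernel32.dll rather than coredll.dll):
using System;
using System.Runtime.InteropServices;

static class QueryPerformanceDemo
{
    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceFrequency(out long frequency);

    static void Main()
    {
        QueryPerformanceFrequency(out long frequency);
        QueryPerformanceCounter(out long start);

        // ... code to be timed ...

        QueryPerformanceCounter(out long end);
        Console.WriteLine($"Elapsed: {(end - start) * 1000.0 / frequency:F3} ms");
    }
}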
