I don't understand the meaning of timer precision and resolution. Can anyone explain it to me?
NOTE: This question is related to Stopwatch.
Accuracy and precision are opposing goals; you can't get both. An example of a very accurate timing source is DateTime.UtcNow. It provides absolute time that's automatically corrected for clock-rate errors by the kernel, which uses a timing service to periodically re-calibrate the clock. You've probably heard of time.windows.com, the NTP server that most Windows PCs use. Very accurate: you can count on less than a second of error over an entire year. But not precise: the value only updates 64 times per second, so it is useless for timing anything that takes less than a second with any kind of decent precision.
The clock source for Stopwatch is very different. It uses a free-running counter driven by a frequency source available somewhere in the chipset. This used to be a dedicated crystal running at the color-burst frequency (3.579545 MHz), but relentless cost cutting has eliminated that from most PCs. Stopwatch is very precise; you can tell from its Frequency property. You should get something between a megahertz and the CPU clock frequency, allowing you to time down to a microsecond or better. But it is not accurate; it is subject to electronic part tolerances. Particularly mistrust any Frequency beyond a gigahertz: that's derived from a multiplier, which also multiplies the error. And beware the Heisenberg principle: starting and stopping the Stopwatch takes non-zero overhead that will affect the accuracy of very short measurements. Another common accuracy problem with Stopwatch is the operating system switching out your thread to let other code run. You need to take multiple samples and use the median value.
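For example, a minimal sketch of the multiple-samples-and-median approach; DoWork() here is a hypothetical stand-in for whatever code you actually want to time:

using System;
using System.Diagnostics;

class MedianTimingSample
{
    // Hypothetical workload; substitute the code you actually want to time.
    static void DoWork()
    {
        double sink = 0;
        for (int i = 0; i < 100000; i++) { sink += Math.Sqrt(i); }
    }

    static void Main()
    {
        const int samples = 25;
        double[] elapsed = new double[samples];

        for (int i = 0; i < samples; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            DoWork();
            sw.Stop();
            elapsed[i] = sw.Elapsed.TotalMilliseconds;
        }

        // The median is far less sensitive to the occasional thread switch
        // or GC pause than the mean.
        Array.Sort(elapsed);
        Console.WriteLine("Median over {0} runs: {1:F3} ms", samples, elapsed[samples / 2]);
    }
}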
They are the same as with any measurement. See this Wikipedia article for more details --
http://en.wikipedia.org/wiki/Accuracy_and_precision
There are different timer types in .NET (three or four of them, if I remember correctly), each working with its own algorithm. The precision of a timer means how accurately it raises its tick events for the consuming application. For example, if you use a timer and set it to raise its tick event every 1000 ms, the precision of the timer describes how close to the specified 1000 ms it will actually tick.
For more information (at least for C#), I suggest you read the MSDN page on timers:
From MSDN Stopwatch Class: (emphasis mine)
"The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time. Use the Frequency and IsHighResolution fields to determine the precision and resolution of the Stopwatch timing implementation."
Related
I'm working on an online game and have a client and a server. For the client I use Unity3D and C#; the server is also written in C#. For synchronization I use timers, and as we know, timers depend on ticks. The tick counter in C# is the Stopwatch class, and the number of ticks in 1 second equals Stopwatch.Frequency, but on the server and the client the values of Stopwatch.Frequency are different, and that breaks my synchronization because the timer on the server runs too slowly compared to the timer on the client. Stopwatch.Frequency on the server equals 3,124,980, and Stopwatch.Frequency on the client equals 10,000,000. Why? How can I change the value of Stopwatch.Frequency to synchronize the timers? Thanks.
Stopwatch can be unreliable on a PC with multiple processors, or with processors that do not have a constant clock speed (processors that can reduce their clock to conserve energy), so you simply can't use it in a game (because you want it to work on every computer).
Many games use a global clock, and I've seen that even the simplest global-clock algorithm can be good enough to synchronize clients with a server for a game. Take a look at Cristian's algorithm.
With a global clock, you can simply use DateTime.UtcNow to measure time.
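For reference, a minimal sketch of Cristian's algorithm, assuming a hypothetical RequestServerTimeUtc() call that stands in for whatever networking code actually fetches the server's clock; the idea is simply that the server's reply is assumed to have been generated roughly halfway through the measured round trip.

using System;
using System.Diagnostics;

static class ClockSync
{
    // Hypothetical transport call: asks the server for its current UTC time.
    // Replace this with your real networking code.
    static DateTime RequestServerTimeUtc()
    {
        return DateTime.UtcNow;  // placeholder
    }

    // Cristian's algorithm: assume the server generated its reply about
    // halfway through the round trip, so the clock offset is
    // (serverTime + rtt/2) - localTimeAtReceive.
    public static TimeSpan EstimateOffset()
    {
        Stopwatch sw = Stopwatch.StartNew();
        DateTime serverTime = RequestServerTimeUtc();
        sw.Stop();

        DateTime localAtReceive = DateTime.UtcNow;
        TimeSpan halfRoundTrip = TimeSpan.FromTicks(sw.Elapsed.Ticks / 2);
        return (serverTime + halfRoundTrip) - localAtReceive;
    }
}

// Usage: add the estimated offset to DateTime.UtcNow whenever the client
// needs the shared game clock:
//   DateTime gameTime = DateTime.UtcNow + ClockSync.EstimateOffset();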
Stopwatch.Frequency cannot be changed; it gets the frequency of the timer as the number of ticks per second, and the field is read-only.
The timer used by the Stopwatch class depends on the system hardware and operating system.
The Stopwatch.IsHighResolution field is true if the Stopwatch timer is based on a high-resolution performance counter. Otherwise, IsHighResolution is false, which indicates that the Stopwatch timer is based on the system timer.
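Since Frequency is read-only, the practical fix for the server/client mismatch is to never exchange raw Stopwatch ticks between machines: convert them into a common unit first, or simply use Elapsed, which already does the division by Frequency for you. A small sketch of that conversion:

using System;
using System.Diagnostics;
using System.Threading;

class StopwatchUnits
{
    static void Main()
    {
        Console.WriteLine("High resolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Ticks per second: {0}", Stopwatch.Frequency);

        // Frequency is ticks per second, so this is the duration of one tick.
        long nanosecondsPerTick = 1000000000L / Stopwatch.Frequency;
        Console.WriteLine("~{0} ns per tick", nanosecondsPerTick);

        // Elapsed and ElapsedMilliseconds already apply this conversion,
        // so elapsed times compare correctly across machines that report
        // different Frequency values.
        Stopwatch sw = Stopwatch.StartNew();
        Thread.Sleep(100);
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms", sw.Elapsed.TotalMilliseconds);
    }
}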
I am trying to implement a time service that will report time with greater accuracy than 1 ms. I thought an easy solution would be to take an initial measurement and use Stopwatch to add a delta to it. The problem is that this method seems to diverge extremely quickly from wall time. For example, the following code attempts to measure the divergence between wall time and my high-resolution clock:
using System;
using System.Threading;

public static void Main(string[] args)
{
    System.Diagnostics.Stopwatch s = new System.Diagnostics.Stopwatch();
    DateTime baseDateTime = DateTime.UtcNow;    // wall-clock reference point
    s.Start();
    long counter = 0;
    while (true)
    {
        DateTime utcnow = DateTime.UtcNow;              // wall time
        DateTime hpcutcnow = baseDateTime + s.Elapsed;  // "high resolution" clock
        Console.WriteLine(String.Format("{0}) DT:{1} HP:{2} DIFF:{3}",
            ++counter, utcnow, hpcutcnow, utcnow - hpcutcnow));
        Thread.Sleep(1000);
    }
}
I am diverging at a rate of about 2 ms/minute on fairly recent server hardware.
Is there another time facility in windows that I am not aware of that will be more accurate? If not, is there a better approach to creating a high resolution clock or a 3rd party library I should be using?
Getting an accurate clock is difficult. Stopwatch has very high resolution, but it is not accurate: it derives its frequency from a signal in the chipset, which operates at typical electronic part tolerances. Cut-throat competition in the hardware business did away with the expensive crystal oscillators that had a guaranteed and stable frequency.
DateTime.UtcNow isn't all that accurate either, but it gets help. Windows periodically contacts a time service (the default one is time.windows.com) to obtain an update from a high-quality clock, and uses it to recalibrate the machine's clock, inserting small adjustments to make the clock catch up or slow down.
You need much bigger tricks to get it accurate down to a millisecond. You can only get a guarantee like that for code that runs in kernel mode, at interrupt priority so it cannot be pre-empted by other code, and with its code and data pages locked so it can't be hit with page faults. Commercial solutions use a GPS radio to read the clock signal of the GPS satellites, backed up by an oscillator that runs in an oven to provide temperature stability. Reading such a clock is the hard problem: you don't have much use for a sub-millisecond clock source when the program that uses it can get pre-empted by the operating system just as it obtained the time and not start running again until ~45 ms later. Or worse.
DateTime.UtcNow is accurate to 15.625 milliseconds and stable over very long periods thanks to the time-service updates. Going lower than that just doesn't make much sense; you can't get the execution guarantee you need in user mode to take advantage of it.
Apparently Windows 8 / Server 2012 added a new API specifically for getting high-resolution timestamps: GetSystemTimePreciseAsFileTime. I haven't had a chance to play around with it yet, but it looks promising.
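A minimal P/Invoke sketch of that API, assuming you only need it on Windows 8 / Server 2012 or later (on older systems the call throws EntryPointNotFoundException):

using System;
using System.Runtime.InteropServices;

static class PreciseClock
{
    // Windows 8 / Server 2012 and later only.
    [DllImport("kernel32.dll")]
    static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    public static DateTime UtcNow()
    {
        long fileTime;
        GetSystemTimePreciseAsFileTime(out fileTime);
        return DateTime.FromFileTimeUtc(fileTime);
    }
}

// Usage:
//   Console.WriteLine(PreciseClock.UtcNow().ToString("o"));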
3rd party library
I tried to create my own based on several Internet sources. Here is a link to it: https://github.com/Anonymous87549236/HighResolutionDateTime/releases .
But then I realized that in .NET Core the resolution is already at the maximum, so I think .NET Core is the best implementation.
I don't have a great deal of experience with threads. I'm using .NET 4 and would like to use the .NET 4 threading features to solve this. Here is what I want to do.
I have a class with two methods, 'A' and 'B'. I want 'A' to call 'B' some number of times (like 100) every some number of milliseconds (like 3000). I want to record the average execution time of method 'B' when it's done executing its 100 (or whatever) times. The class will have some private properties to keep track of the total elapsed execution time of 'B' in order to calculate an average.
I'm not sure if method 'A' should call 'B' via a System.Timers.Timer thread (where the interval can be set, but not the number of times) or if there is a better (.NET 4) way of doing this.
Thanks very much.
In reading over your question, I think the root issue is about safely kicking off a set of events and timing their execution in a thread-safe manner. In your example, you are running 100 iterations every 3000 ms, which means each iteration should take at most 30 ms. Unfortunately, System.Timers.Timer (which is System.Threading.Timer with a wrapper around it) is not that precise: expect a precision of 10 ms at best, and possibly a lot worse. To get the 1 ms precision you really need, you are going to have to tap into native interop. Here is a quote I found on this:
The precision of multithreaded timers depends on the operating system, and is typically in the 10–20 ms region. If you need greater precision, you can use native interop and call the Windows multimedia timer. This has precision down to 1 ms and it is defined in winmm.dll. First call timeBeginPeriod to inform the operating system that you need high timing precision, and then call timeSetEvent to start a multimedia timer. When you’re done, call timeKillEvent to stop the timer and timeEndPeriod to inform the OS that you no longer need high timing precision. You can find complete examples on the Internet that use the multimedia timer by searching for the keywords dllimport winmm.dll timesetevent
-Joseph Albahari ( http://www.albahari.com/threading/part3.aspx )
If you follow his advice, you should get the precision you need.
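For illustration, here is a hedged sketch of that multimedia-timer approach; the 1 ms period and the console output are arbitrary choices for the example, not requirements of the API:

using System;
using System.Runtime.InteropServices;
using System.Threading;

class MultimediaTimerDemo
{
    // Native multimedia timer API from winmm.dll.
    delegate void TimeProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")] static extern uint timeBeginPeriod(uint period);
    [DllImport("winmm.dll")] static extern uint timeEndPeriod(uint period);
    [DllImport("winmm.dll")] static extern uint timeSetEvent(
        uint delay, uint resolution, TimeProc callback, UIntPtr user, uint eventType);
    [DllImport("winmm.dll")] static extern uint timeKillEvent(uint timerId);

    const uint TIME_PERIODIC = 1;

    static void Main()
    {
        // Keep a reference to the delegate so the GC cannot collect it
        // while the native timer still holds a pointer to it.
        TimeProc callback = (id, msg, user, dw1, dw2) =>
            Console.WriteLine("tick at {0:HH:mm:ss.fff}", DateTime.UtcNow);

        timeBeginPeriod(1);                      // ask the OS for 1 ms timing precision
        uint timerId = timeSetEvent(1, 1, callback, UIntPtr.Zero, TIME_PERIODIC);

        Thread.Sleep(50);                        // let the timer fire for a while

        timeKillEvent(timerId);                  // stop the timer
        timeEndPeriod(1);                        // release the precision request
        GC.KeepAlive(callback);
    }
}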
Really, I'm looking for a good way to accurately measure how long a given C# function takes under the Windows operating system. I tried these approaches, but neither gives an accurate measurement:
DateTime startTime = DateTime.Now;
// ... code to be measured ...
TimeSpan ts = DateTime.Now.Subtract(startTime);
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//code to be measured
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
Really, each time I call them, they give me a different time for the same function.
Please, if anyone knows a better way to measure elapsed time accurately, please help me. Thanks a lot.
The Stopwatch is the recommended way to measure the time it takes for a function to execute. It will never be the same from run to run due to various software and hardware factors, which is why performance analysis is usually done on a large number of runs and the time is averaged out.
"they give me different time for the same function" - that's expected. Things fluctuate because you are not the only process running on a system.
Run the code you want to time in a large loop to average out any fluctuations (divide total time by the number of loops).
Stopwatch is an accurate timer, and is more than adequate for timing most situations.
const int numLoops = 1000000; // ...or whatever number is appropriate

Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
for (int i = 0; i < numLoops; i++)
{
    // code to be timed...
}
stopWatch.Stop();

TimeSpan elapsedTotal = stopWatch.Elapsed;
double timeMs = elapsedTotal.TotalMilliseconds / numLoops;  // average time per iteration
The gold standard is to use Stopwatch. It is a high-resolution timer and it works very well.
I'd suggest you check the elapsed time using .Elapsed.TotalMilliseconds, which gives you a double, rather than .Elapsed.Milliseconds, which gives you an int. This might be throwing your results off.
Also, you might find that garbage collections occur during your timing tests and these can significantly change the resulting time. It's useful to check the GC collection count before and after your timing test and discard the result if any garbage collections occurred.
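For example, a small sketch of that check; the timed region is just a placeholder comment:

using System;
using System.Diagnostics;

class GcAwareTiming
{
    static void Main()
    {
        // Record collection counts before the timed run...
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        Stopwatch sw = Stopwatch.StartNew();
        // ... code to be timed ...
        sw.Stop();

        // ...and discard the sample if any collections happened during it.
        bool gcOccurred = GC.CollectionCount(0) != gen0
                       || GC.CollectionCount(1) != gen1
                       || GC.CollectionCount(2) != gen2;

        Console.WriteLine(gcOccurred
            ? "Discard this sample: a garbage collection ran during the measurement."
            : string.Format("Elapsed: {0} ms", sw.Elapsed.TotalMilliseconds));
    }
}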
Otherwise your results can vary simply because other threads and processes take over the CPU and other system resources during your tests. There's not much you can do here except run your tests multiple times and statistically analyze your results by calculating means, standard deviations, etc.
I hope this helps.
While it is possible for you to measure your code in clock cycles, it will still be just as prone to variability as measuring in seconds, and will be much less useful (because seconds are just as good a unit of measurement as clock cycles, if not better). The only way to get a measurement that is unaffected by other processes is to ensure none are running, and you can't do that on Windows: the OS itself will always be running and doing things, because it's not a single-process OS.
The closest you can come to the measurement you want is to build and run your code as described here. You can then view the x86 assembly for the JIT'd code for the methods you want to time by setting a breakpoint at the start of your code, and then stepping through. You can cross-reference each x86 instruction with its cycle timing in the Intel architecture manuals, and add them up to get an accurate cycle count.
This is, of course, extremely painful and basically useless. It also might be invalidated by code changes that cause the JIT to take slightly different approaches to producing x86 from your IL.
You need a profiler to measure code execution (see "What Are Some Good .NET Profilers?" to start your search).
Looking at your comments, it is unclear what you are trying to optimize. Generally you only need to go down to measuring CPU clock cycles when your code is executed thousands of times and is purely CPU bound; in other cases, per-function execution time is usually enough. But you are saying that your code is too slow to run that many times to calculate an average time with Stopwatch.
You also need to figure out whether the CPU is the bottleneck for your application or whether something else makes it slow. Looking at the CPU % in Task Manager can give you that information: less than 100% CPU usage pretty much guarantees that something else (e.g. network or disk activity) is making the program slow.
Basically, providing more detail on the type of code you are trying to measure and on your performance goals will make helping you much easier.
To echo the others: The Stopwatch class is the best way to do this.
To answer your question about only measuring clock cycles: the fact that you're running on a multi-tasking OS on a modern processor makes measuring clock cycles almost useless. A context switch has a good chance of evicting your code and data from the processor's caches, and the OS might decide to swap your working set out in the meantime.
The processor could also decide to reorder your instructions based on cache waits or memory accesses and execute what it can while it's waiting. Or it may not, if everything is already in the cache.
So, in short, performing multiple runs and averaging them is really the only way to go.
To get less jitter in the timing, you could elevate the priority of the thread/process, but this can result in a slew of other issues (bumping to real-time priority and getting stuck in a long loop will essentially stop all other processing; if a bug occurs and you get stuck in an infinite loop, your only choice is the reset button), and it is not recommended at all, especially on a user's computer or in a production environment. And since you can't do that where it matters, any priority modifications make the benchmarks you run on your own machine invalid.
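If you still want to experiment with this on your own machine, a cautious sketch might look like the following, restoring the original priorities afterwards; the timed region is again a placeholder:

using System;
using System.Diagnostics;
using System.Threading;

class PriorityBenchmark
{
    static void Main()
    {
        // Raise priority only for the duration of the benchmark, and avoid
        // RealTime: a runaway loop at that level can starve the whole system.
        ProcessPriorityClass oldClass = Process.GetCurrentProcess().PriorityClass;
        ThreadPriority oldPriority = Thread.CurrentThread.Priority;
        try
        {
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;

            Stopwatch sw = Stopwatch.StartNew();
            // ... code to be timed ...
            sw.Stop();
            Console.WriteLine("Elapsed: {0} ms", sw.Elapsed.TotalMilliseconds);
        }
        finally
        {
            Process.GetCurrentProcess().PriorityClass = oldClass;
            Thread.CurrentThread.Priority = oldPriority;
        }
    }
}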
How about using Environment.TickCount to capture the start and end, and then building a TimeSpan from the difference with TimeSpan.FromMilliseconds() (TickCount counts milliseconds, not Stopwatch-style ticks)?
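A small sketch of that approach, for what it's worth; note that TickCount has roughly 10-16 ms resolution, so it only suits coarse measurements (Thread.Sleep stands in for the code being measured):

using System;
using System.Threading;

class TickCountTiming
{
    static void Main()
    {
        // Environment.TickCount is the number of milliseconds since boot.
        int start = Environment.TickCount;

        Thread.Sleep(250);   // stand-in for the code being measured

        int elapsedMs = Environment.TickCount - start;
        TimeSpan elapsed = TimeSpan.FromMilliseconds(elapsedMs);
        Console.WriteLine("Elapsed: {0}", elapsed);
    }
}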
Debug.WriteLine("Timer is high-resolution: {0}", Stopwatch.IsHighResolution);
Debug.WriteLine("Timer frequency: {0}", Stopwatch.Frequency);
Result:
Timer is high-resolution: True
Timer frequency: 2597705
This article (from 2005!) mentions a Frequency of 3,579,545, roughly a million higher than mine. This blog post mentions a Frequency of 3,325,040,000, which is insane.
Why is my Frequency comparatively so much lower? I'm on an i7 920 machine, so shouldn't it be faster?
3,579,545 is the magic number. That's the frequency in hertz before dividing it by 3 and feeding it into the 8253 timer chip in the original IBM PC. The odd-looking number wasn't chosen by accident; it is the frequency of the color-burst signal in the NTSC TV system used in the US and Japan. The IBM engineers were looking for a cheap crystal to implement the oscillator, and nothing was cheaper than the one used in every TV set.
Once IBM clones became widely available, it was still important for their designers to choose the same frequency. A lot of MS-DOS software relied on the timer ticking at that rate. Directly addressing the chip was a common crime.
That changed once Windows came around. A version of Windows 2 was the first to virtualize the timer chip. In other words, software wasn't allowed to address the timer chip directly anymore. The processor was configured to run in protected mode, so the attempt to use the I/O instruction was intercepted and kernel code ran instead, allowing the return value of the instruction to be faked. It was now possible to have multiple programs using the timer without them stepping on each other's toes, an important first step in breaking the dependency on how the hardware is actually implemented.
The Win32 API (Windows NT 3.1 and Windows 95) formalized access to the timer with an API: QueryPerformanceCounter() and QueryPerformanceFrequency(). A kernel-level component, the Hardware Abstraction Layer, allows the BIOS to pass that frequency along. Now it was possible for hardware designers to really drop the dependency on the exact frequency. That took a long time, by the way; around 2000, the vast majority of machines still had the legacy rate.
But the never-ending quest to cut costs in PC design put an end to that. Nowadays, the hardware designer just picks whatever frequency happens to be readily available in the chipset. 3,325,040,000 would be such a number; it is most probably the CPU clock rate. High frequencies like that are common in cheap designs, especially the ones with an AMD core. Your number is pretty unusual; there are decent odds that your machine wasn't cheap, and that its timer is a lot more accurate, since CPU clocks have typical electronic component tolerances.
The frequency depends on the HAL (hardware abstraction layer). Back in the Pentium days, it was common to use the CPU tick counter (which was based on the CPU clock rate), so you ended up with really high-frequency timers.
With multi-processor and multi-core machines, and especially with variable-rate CPUs (where the CPU clock slows down in low-power states), using the CPU tick counter as the timer becomes difficult and error-prone, so the writers of the HAL seem to have chosen a slower but more reliable hardware clock, like the real-time clock.
The Stopwatch.Frequency value is per second, so your frequency of 2,597,705 means you have more than 2.5 million ticks per second, or roughly 385 nanoseconds per tick. Exactly how much precision do you need?
As for the variations in frequency, that is a hardware-dependent thing. Some of the most common hardware differences are the number of cores, the frequency of each core, the current power state of your CPU (or cores), whether you have enabled the OS to dynamically adjust the CPU frequency, and so on. Your frequency will not always be the same, and depending on what state your CPU is in when you check it, it may be lower or higher, but generally around the same value (for you, probably around 2.5 million).
I think 2,597,705 is your processor frequency. Mine is 2,737,822, on an i7 930.