How accurate is System.Diagnostics.Stopwatch? I am gathering metrics for different code paths and I need the measurements to be exact. Should I be using Stopwatch, or is there another solution that is more accurate?
I have been told that Stopwatch sometimes gives incorrect information.
I've just written an article that explains how a test setup must be done to get high accuracy (better than 0.1 ms) out of the Stopwatch. I think it should explain everything.
http://www.codeproject.com/KB/testing/stopwatch-measure-precise.aspx
The System.Diagnostics.Stopwatch class does accurately measure elapsed time, but the way the ElapsedTicks property works has led some people to conclude that it is not accurate, when they really just have a logic error in their code.
The reason that some developers think that the Stopwatch is not accurate is that the ElapsedTicks from the Stopwatch DO NOT EQUATE to the Ticks in a DateTime.
The problem arises when the application code uses the ElapsedTicks to create a new DateTime.
var watch = new Stopwatch();
watch.Start();
// ... (perform a set of operations)
watch.Stop();
var wrongDate = new DateTime(watch.ElapsedTicks); // This is the WRONG value.
If necessary, the stopwatch duration can be converted to a DateTime in the following way:
// This converts stopwatch ticks into DateTime ticks.
// First convert to TimeSpan, then convert to DateTime
var rightDate = new DateTime(watch.Elapsed.Ticks);
Here is an article that explains the problem in more detail:
http://geekswithblogs.net/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
Note that the content is no longer available at the original link. Here is a reference to the archived content from the Wayback Machine:
https://web.archive.org/web/20190104073827/http://geekswithblogs.net:80/BlackRabbitCoder/archive/2012/01/12/c.net-little-pitfalls-stopwatch-ticks-are-not-timespan-ticks.aspx
The Stopwatch class returns different values under different configurations, because its Frequency depends on the installed hardware and operating system.
Using the Stopwatch class we can get only a rough estimate of execution time; each execution returns a different value, so we have to take the average over several executions.
More Info : http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx
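For reference, the timer characteristics of the machine you measure on can be inspected directly; a minimal sketch (only the Stopwatch static members are used, the rest is illustrative):
using System;
using System.Diagnostics;

class TimerInfo
{
    static void Main()
    {
        // Both values depend on the installed hardware and operating system.
        Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency: " + Stopwatch.Frequency + " ticks per second");

        // Smallest measurable interval, in nanoseconds.
        double nsPerTick = 1000000000.0 / Stopwatch.Frequency;
        Console.WriteLine("Resolution: " + nsPerTick + " ns per tick");
    }
}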
First, exact is of course not a possible or meaningful concept when talking about time or space, since no empirical measurement of a physical magnitude can ever claim to be exact.
Second, David Bolton's blog article may be useful. I'm quoting:
If this was timed with the high-resolution counter then it will be accurate to microseconds. It is actually accurate to nanoseconds (10^-9 seconds, i.e. a billionth of a second), but there is so much other stuff going on that nanosecond accuracy is really a bit pointless. When doing timing or benchmarking of code, you should do a number of runs and take the average time, because of other processes running under Windows, how much swapping to disk is occurring, etc., the values between two runs may vary.
MSDN has some examples of the Stopwatch. They also show that it is accurate to within nanoseconds. Hope this helps!
Why don't you profile your code instead of focusing on micro-benchmarks?
There are some good Open Source profilers like:
NProf
Prof-It for C#
NProfiler
ProfileSharp
In addition to seconding the advice of HUAGHAGUAH above, I'd add that you should be VERY skeptical of micro-benchmarks in general. While close-focused performance testing has a legitimate place, it's very easy to tweak an unimportant detail. So write and verify code that is designed for readability and clarity, then profile it to find out where the hot spots are (or whether there are any worth worrying about), and then tune (only) those portions.
I recall working with a programmer who micro-optimized a bit of code that executed while the system waited for human input. The time savings absolutely disappeared in the between-keystroke lag!
If you want more precise timings, take a look at QueryPerformanceCounter. MSDN link for QueryPerformanceCounter. A neat implementation is given here. The example loads coredll.dll for CE; for Windows you should load Kernel32.dll, as stated in the MSDN documentation.
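For desktop Windows, a minimal P/Invoke sketch against Kernel32.dll could look like this (the wrapper class and method names are illustrative; only the two Win32 functions are real):
using System;
using System.Runtime.InteropServices;

static class HighResTimer
{
    [DllImport("Kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("Kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    // Times a delegate and returns the elapsed time in milliseconds.
    public static double MeasureMilliseconds(Action action)
    {
        long frequency, start, end;
        QueryPerformanceFrequency(out frequency);   // counts per second
        QueryPerformanceCounter(out start);
        action();
        QueryPerformanceCounter(out end);
        return (end - start) * 1000.0 / frequency;
    }
}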
Related
Using Stopwatch.GetTimestamp(), we find that if you record the return value and then keep calling it and comparing to the previous return value, it will eventually, but unpredictably, return a value less than the original.
Is this expected behavior?
The purpose of doing this in the production code is to have a microsecond-accurate system time.
The technique involves calling DateTime.UtcNow and also calling Stopwatch.GetTimestamp() as originalUtcNow and originalTimestamp, respectively.
From that point forward, the application simply calls Stopwatch.GetTimestamp() and using Stopwatch.Frequency it calculates the difference from the originalTimestamp variable and then adds that difference to the originalUtcNow.
Then, Voila...an efficient and accurate microsecond DateTime.
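A minimal sketch of this calibration approach (the class and member names here are illustrative, not taken from our actual code):
using System;
using System.Diagnostics;

static class MicroClock
{
    // Captured once, at startup.
    static readonly DateTime originalUtcNow = DateTime.UtcNow;
    static readonly long originalTimestamp = Stopwatch.GetTimestamp();

    public static DateTime UtcNow
    {
        get
        {
            long elapsedStopwatchTicks = Stopwatch.GetTimestamp() - originalTimestamp;

            // Convert stopwatch ticks to 100 ns DateTime ticks using the frequency.
            long elapsedDateTimeTicks =
                (long)(elapsedStopwatchTicks * (10000000.0 / Stopwatch.Frequency));

            return originalUtcNow.AddTicks(elapsedDateTimeTicks);
        }
    }
}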
But we find that sometimes Stopwatch.GetTimestamp() will return a lower number.
It happens quite rarely. Our thinking is to simply "reset" when that happens and continue.
HOWEVER, it makes us doubt the accuracy of the Stopwatch.GetTimestamp() or suspect there is a bug in the .Net library.
If you can shed some light on this, please do.
FYI, based on the current timestamp value, the frequency, and long.MaxValue, it appears unlikely that it will roll over during our lifetime unless it's a hardware issue.
EDIT: We're now calculating this value "per thread" and then "clamping it" to watch for jumps between cores to reset it.
It's possible that you get the jump in time because your thread is jumping cores. See the "note" on this page: http://msdn.microsoft.com/en-us/library/ebf7z0sw.aspx
The behavior of the Stopwatch class will vary from system to system depending on hardware support.
See: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.ishighresolution.aspx
Also, I believe the underlying equivalent win32 call (QueryPerformanceCounter) contains useful documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
I don't know exactly about running backwards (which sounds like a small change backwards), but I have so far experienced three times that the value of Stopwatch.GetTimestamp() can change so enormously that it causes overflow exceptions in further calculations of roughly this form:
(Stopwatch.GetTimestamp() - ProgramStartStopwatchTimestamp) * n
where n is some large value, but small enough that if the Stopwatch weren't jumping around enormously, the program could run for years without an overflow exception. Note also that these exceptions occurred many hours after the program started, so the issue is not just that the Stopwatch ran backwards a little bit immediately after start. It just jumped to a totally different range, in whatever direction.
Regarding Stopwatch rolling over: in one of the above cases it (not the difference, but the Stopwatch itself) obtained a value of something like 0xFF4? ???? ???? ????, so it jumped to a range that was very close to rolling over. After restarting the program multiple times, this new range was still consistently in effect. If that matters anymore, considering the need to handle the jumps anyway...
If it was additionally necessary to determine which core the timestamp was taken on then it probably helps to know executing core number. For this end, there are functions called GetCurrentProcessorNumber (available since Server 2003 and Vista) and GetCurrentProcessorNumberEx (available since Server 2008 R2 and Windows 7). See also this question's answers for more options (including Windows XP).
Note that core number can be changed by the scheduler any time. But when one reads the core number before reading Stopwatch timestamp, and after, and the core number remained same, then perhaps one can assume that the Stopwatch read was also performed on this core...
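A sketch of that idea (GetCurrentProcessorNumber is the real Win32 call; the retry loop is just one possible policy for handling a core switch observed around the read):
using System.Diagnostics;
using System.Runtime.InteropServices;

static class CoreStampedTimer
{
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessorNumber();

    // Reads a timestamp and the core it was (most likely) taken on.
    public static long GetTimestampOnCore(out uint core)
    {
        while (true)
        {
            uint before = GetCurrentProcessorNumber();
            long timestamp = Stopwatch.GetTimestamp();
            uint after = GetCurrentProcessorNumber();

            if (before == after)
            {
                core = before;        // no core switch observed around the read
                return timestamp;
            }
            // The scheduler moved the thread mid-read; try again.
        }
    }
}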
To specifically answer the high-level question "How often does Stopwatch.GetTimestamp() roll over?", Microsoft's answer is:
Not less than 100 years from the most recent system boot, and potentially longer based on the underlying hardware timer used. For most applications, rollover isn't a concern.
In my initialization method I call some other methods, manipulate some variables and iterate over some lists. Now I noticed that this loading method takes quite a long time (approximately 2 minutes).
But the problem is, I'm not quite sure which part of the method is consuming this much time. So I'd like to measure it so that I can work on this part that has the highest potential to reduce time.
But what is a good approach to measure that?
If you don't want to use a profiler such as the ANTS Performance Profiler, you can use the Stopwatch to measure how long it took some code to run.
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
// Code to time
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
This, of course, changes your code and requires you to make these amendments at every point you want to measure.
I would recommend going with one of the many good profilers out there (I am certain other answers will point out some good ones).
The dotTrace Performance Profiler 4.0 provides line-by-line profiling. That's what you need.
"I'm not quite sure which part of the method is consuming this much time. So I'd like to measure it"
Things that take much longer than they should are very easy to find.
For example, if it's taking 10 times longer than it should, that means 90% of the time it is doing something not needed. So, if you run it under the IDE and pause it, the chance you will catch it in the act is 90%.
Just look at the call stack, because you know the problem is somewhere on it.
If you're not sure you've caught it, try it several times. The problem will appear on multiple samples.
Typical things I've found in .net app startup:
Looking up resources such as international strings, needlessly.
Walking notification trees in data structures, to a needless extent.
What you find will probably be different, but you will find it.
This is a low-tech but effective method. It works whether the time is spent in CPU or in I/O. It does not measure the problem very precisely, but it does locate it very precisely.
(Check the last paragraph of this post.)
Really, I'm looking for a good way to accurately measure the time (or clock cycles) taken by a given C# function under the Windows operating system. I tried these approaches, but neither gives an accurate measure:
DateTime StartTime = DateTime.Now;
TimeSpan ts = DateTime.Now.Subtract(StartTime);
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//code to be measured
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
Really, each time I call them, they give me a different time for the same function.
Please, if anyone knows a better way to measure elapsed time accurately, please help me. Thanks a lot.
The Stopwatch is the recommended way to measure time it takes for a function to execute. It will never be the same from run to run due to various software and hardware factors, which is why performance analysis is usually done on a large number of runs and the time is averaged out.
"they give me different time for the same function" - that's expected. Things fluctuate because you are not the only process running on a system.
Run the code you want to time in a large loop to average out any fluctuations (divide total time by the number of loops).
Stopwatch is an accurate timer, and is more than adequate for most timing situations.
const int numLoops = 1000000; // ...or whatever number is appropriate
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
for (int i = 0; i < numLoops; i++)
{
// code to be timed...
}
stopWatch.Stop();
TimeSpan elapsedTotal = stopWatch.Elapsed;
double timeMs = elapsedTotal.TotalMilliseconds / numLoops;
The gold standard is to use Stopwatch. It is a high-resolution timer and it works very well.
I'd suggest you read the elapsed time using .Elapsed.TotalMilliseconds, which gives you a double, rather than .Elapsed.Milliseconds, which gives you an int. This might be throwing your results off.
Also, you might find that garbage collections occur during your timing tests and these can significantly change the resulting time. It's useful to check the GC collection count before and after your timing test and discard the result if any garbage collections occurred.
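A minimal sketch of that check (the helper name is illustrative; counting gen-0 collections is enough, since gen-1 and gen-2 collections also collect gen 0):
using System;
using System.Diagnostics;

static class GcAwareTiming
{
    // Returns the elapsed time, or null if a garbage collection occurred during the run.
    public static TimeSpan? TimeWithoutGc(Action action)
    {
        GC.Collect();                          // start from a clean state
        GC.WaitForPendingFinalizers();
        int collectionsBefore = GC.CollectionCount(0);

        Stopwatch sw = Stopwatch.StartNew();
        action();
        sw.Stop();

        // Discard the result if any collection happened while timing.
        if (GC.CollectionCount(0) != collectionsBefore)
            return null;
        return sw.Elapsed;
    }
}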
Otherwise your results can vary simply because other threads and processes take over the CPU and other system resources during your tests. There's not much you can do here except to run your tests multiple times and statistically analyze your results by calculating mean & standard deviations timings etc.
I hope this helps.
While it is possible for you to measure your code in clock cycles, it will still be just as prone to variability as measuring in seconds and will be much less useful (because seconds are just as good, if not better, a unit of measurement than clock cycles). The only way to get a measurement that is unaffected by other processes is to ensure none are running, and you can't do that on Windows -- the OS itself will always be running doing some things, because it's not a single-process OS.
The closest you can come to the measurement you want is to build and run your code as described here. You can then view the x86 assembly for the JIT'd code for the methods you want to time by setting a breakpoint at the start of your code, and then stepping through. You can cross-reference each x86 instruction with its cycle timing in the Intel architecture manuals, and add them up to get an accurate cycle count.
This is, of course, extremely painful and basically useless. It also might be invalidated by code changes that cause the JIT to take slightly different approaches to producing x86 from your IL.
You need a profiler to measure code execution (see What Are Some Good .NET Profilers? to start your search).
Looking at your comments, it is unclear what you are trying to optimize. Generally you only need to go down to measuring CPU clock cycles when your code is executed thousands of times and is purely CPU-bound; in other cases, per-function execution time is usually enough. But you are saying that your code is too slow to run that many times to calculate an average time with the Stopwatch.
You also need to figure out whether the CPU is the bottleneck for your application or whether something else makes it slow. Looking at CPU % in Task Manager can give you information on this: less than 100% CPU usage pretty much guarantees that something else (e.g. network or disk activity) is making the program slow.
Basically, providing more detail on the type of code you are trying to measure and on your performance goals will make it much easier to help you.
To echo the others: The Stopwatch class is the best way to do this.
To answer your questions about only measuring clock cycles: The fact that you're running on a multi-tasking OS on a modern processor makes measuring clock cycles almost useless. A context switch has a good chance of removing your code and data from the processor's cache, and the OS might decide to swap your working set out in the meantime.
The processor could decide to reorder your instructions based on cache waits or memory accesses, and execute what it can while it's waiting. Or it may not, if everything is already in the cache.
So, in short, performing multiple runs and averaging them is really the only way to go.
To get less jitter in the time, you could elevate the priority of the thread/process, but this can result in a slew of other issues (bumping to real-time priority and getting stuck in a long loop will essentially stop all other processing; if a bug occurs and you get stuck in an infinite loop, your only choice is the reset button), and it is not recommended at all, especially on a user's computer or in a production environment. And since you can't do that where it matters, any priority modifications make the benchmarks you run on your machine invalid.
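If you do decide to experiment with elevated priority despite those caveats, a sketch that at least avoids real-time priority and restores the original settings afterwards (the helper name is illustrative):
using System;
using System.Diagnostics;
using System.Threading;

static class PriorityBenchmark
{
    public static TimeSpan Run(Action action)
    {
        Process process = Process.GetCurrentProcess();
        ProcessPriorityClass oldProcessPriority = process.PriorityClass;
        ThreadPriority oldThreadPriority = Thread.CurrentThread.Priority;
        try
        {
            // High, not RealTime, so the machine stays usable if something goes wrong.
            process.PriorityClass = ProcessPriorityClass.High;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;

            Stopwatch sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            return sw.Elapsed;
        }
        finally
        {
            process.PriorityClass = oldProcessPriority;
            Thread.CurrentThread.Priority = oldThreadPriority;
        }
    }
}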
How about using Environment.TickCount to capture the start and end, and then applying TimeSpan.FromMilliseconds() to the difference? (TickCount is measured in milliseconds, not ticks, so TimeSpan.FromTicks() would give the wrong result.)
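A minimal sketch of that approach:
// Environment.TickCount is a millisecond counter (it also wraps after roughly 24.9 days).
int start = Environment.TickCount;
// ... code to time ...
TimeSpan elapsed = TimeSpan.FromMilliseconds(Environment.TickCount - start);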
The idea is that an existing project uses timeGetTime() (for windows targets) quite frequently.
milliseconds = timeGetTime();
Now, this could be replaced with
double tmp = (double) lpPerformanceCount.QuadPart/ lpFrequency.QuadPart;
milliseconds = rint(tmp * 1000);
with lpPerformanceCount.QuadPart and lpFrequency.QuadPart being taken from the use of a single call to QueryPerformanceCounter() and QueryPerformanceFrequency().
I know Windows' internals are kind of voodoo, but can someone decipher which of the two is more accurate or/and has more overheads?
I suspect accuracy might be same but QueryPerformanceCounter might have less overheads. But I have no hard data to back it up.
Of course I wouldn't be surprised if the opposite is true.
If overheads are tiny in any way I would be more interested on whether there's any difference in accuracy.
The accuracy of timeGetTime() is variable, based on the last used timeBeginPeriod. It will never be better than one millisecond. QueryPerformanceCounter is variable too, depending on hardware support. It will never be worse than about a microsecond.
Neither of them has notable overhead; QPC is probably a bit heavier. Whether that's significant to you is quite unclear from your question. I doubt it, but measure. With QPC.
Be careful: QueryPerformanceCounter may be processor dependent. If your thread grabs the perf counter on one CPU, and ends up on another CPU before it grabs again, the results may not be reliable. See the MSDN entry.
Accuracy is better on QPC. timeGetTime is accurate within the 1-10ms range (and its resolution is no finer than 1ms), whereas QPC can give you accuracy in the microsecond range.
The overhead varies. QPC uses the best hardware timer available. That may be some lightweight one built into the CPU, or it may have to go out to the motherboard which adds significant latency. And it might be made more expensive by having to go through a driver correcting for the timer hardware being buggy.
But neither is prohibitively expensive. If you're not going to call the timer millions of times per second, the overhead is insignificant for both.
We have updated the documentation for QueryPerformanceCounter and this should help to answer the questions above. Please see
http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx
Ed Briggs
Microsoft Corporation
QueryPerformanceCounter does not quite give you a time. In order to convert its values into time measures you'd have to use QueryPerformanceFrequency, which is supposed to tell you the rate at which the counter increments. But the frequency value is more or less an estimate. The frequency of the counter can vary with the underlying hardware and with the version of the OS, and it should not be considered a constant: it has an offset and is sometimes accompanied by thermal drift.
That said, I would recommend using QueryPerformanceCounter with care.
Some still confuse accuracy with granularity. QueryPerformanceCounter has finer granularity, while timeGetTime has better accuracy.
However, the fastest source is GetSystemTimeAsFileTime, which returns a time value in 100 ns units. But its granularity is not 100 ns; it depends on the result of timeGetDevCaps and the setting of timeBeginPeriod. Setting the latter properly can result in a granularity of about 10,000 (in 100 ns units), which corresponds to about 1 ms.
I've written some more details here.
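If you want to experiment with this, a sketch of pairing timeBeginPeriod/timeEndPeriod around a measurement (the helper is illustrative; note that the setting affects the whole system, so always restore it):
using System;
using System.Runtime.InteropServices;

static class TimerResolution
{
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint periodMs);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint periodMs);

    public static void WithOneMsResolution(Action action)
    {
        timeBeginPeriod(1);                 // request ~1 ms timer granularity, system-wide
        try
        {
            action();
        }
        finally
        {
            timeEndPeriod(1);               // must match the earlier timeBeginPeriod call
        }
    }
}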
I need an accurate timer, and DateTime.Now seems not accurate enough. From the descriptions I read, System.Diagnostics.Stopwatch seems to be exactly what I want.
But I have a phobia. I'm nervous about using anything from System.Diagnostics in actual production code. (I use it extensively for debugging with Asserts and PrintLns etc., but never yet for production stuff.) I'm not merely trying to use a timer to benchmark my functions - my app needs an actual timer. I've read on another forum that System.Diagnostics.Stopwatch is only for benchmarking and shouldn't be used in retail code, though there was no reason given. Is this correct, or am I (and whoever posted that advice) being too closed-minded about System.Diagnostics? I.e., is it OK to use System.Diagnostics.Stopwatch in production code?
Thanks
Adrian
Under the hood, pretty much all Stopwatch does is wrap QueryPerformanceCounter. As I understand it, Stopwatch is there to provide access to the high-resolution timer - if you need this resolution in production code I don't see anything wrong with using it.
Yes, System.Diagnostics does sound like it is for debugging only, but don't let the name deceive you. The System.Diagnostics namespace may seem a bit scary sounding for use in production code at first (it did for me), but there are plenty of useful things in that namespace.
Some things, such as the Process class, are useful for interacting with the system. With Process.Start you can start other applications, launch a website for the user, open a file or folder, etc.
Other things, such as the Trace class, can help you track down bugs in production code. Granted, you will not always use them in production code, but they are very useful for logging and tracking down that elusive bug on a remote machine.
Don't worry about the name.
You say you've read on another forum not to use classes from System.Diagnostics in production. But the only source you should worry about is Microsoft, who created the code. They say that the StopWatch class:
Provides a set of methods and properties that you can use to accurately measure elapsed time.
They don't say, "except in production".
AFAIK, Stopwatch is a shell over QueryPerformanceCounter functionality. This function is the basis of a lot of performance-counter-related measurements. QPC is very fast to call and perfectly safe. If you feel paranoid about the Diagnostics namespace, P/Invoke QPC directly.
The stopwatch is basically a neat wrapper around the native QueryPerformanceCounter and QueryPerformanceFrequency methods. If you don't feel comfortable using the System.Diagnostic namespace, you can access these directly.
Using the performance counter is very common; there is nothing wrong with that. AFAIK, there is no higher timer precision available. Note that QPC might lead to problems on multi-processor machines, but the MSDN article linked before gives some additional information on that. It is advisable to make sure System.Diagnostics.Stopwatch handles that in the background, or to call SetThreadAffinityMask manually - otherwise your timer might jump back in time!
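A sketch of pinning the measuring thread to a single core via P/Invoke (SetThreadAffinityMask and GetCurrentThread are the underlying Win32 calls; the choice of core 0 and the helper name are illustrative):
using System;
using System.Runtime.InteropServices;

static class SingleCoreTiming
{
    [DllImport("kernel32.dll")]
    static extern IntPtr SetThreadAffinityMask(IntPtr hThread, IntPtr affinityMask);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    public static void PinToFirstCore()
    {
        // Restrict the current thread to CPU 0 so all QPC reads come from one core.
        IntPtr previousMask = SetThreadAffinityMask(GetCurrentThread(), (IntPtr)1);
        if (previousMask == IntPtr.Zero)
            throw new InvalidOperationException("SetThreadAffinityMask failed.");
    }
}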
Note that for very high precision measurements, there are some subtleties that need to be taken into account. If you need this much precision, these might be of some concern.
There are several different timer classes in the .NET base class library - which one is best suited to your needs can only be determined by you.
Here is a good article from MSDN magazine on the subject (Comparing the Timer Classes in the .NET Framework Class Library).
Depending on what you're using the timer for, you may have other issues to consider. Windows does not provide guarantees on timing of execution, so you shouldn't rely on it for any real-time processing (there are real-time extensions you can get for Windows that provide hard real-time scheduling). I also suspect you could lose precision as a result of context switching after you capture the time interval and before you do something with it that depends on its precision. In principle, this could be an arbitrarily long period of time; in practice it should be on the order of milliseconds. It really depends on how mission-critical this timing is.