Executing the same method a second time consumes far less time - C#

While analyzing the exact solution from one of my old posts, I found a contradiction, so I tried the following code in the Form1 constructor (after InitializeComponent();):
Stopwatch timer = Stopwatch.StartNew();
var timeStartGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
timer.Stop();
Console.WriteLine("Convert take time {0}", timer.Elapsed);
Console.WriteLine("First StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic, timeEndGetStatistic);
Stopwatch timer2 = Stopwatch.StartNew();
var timeStartGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
timer2.Stop();
Console.WriteLine("Convert take time {0}", timer2.Elapsed);
Console.WriteLine("Second StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic2, timeEndGetStatistic2);
Result
Convert take time 00:00:00.0102284
First StopWatch
Start: 02:42:29:929
Stop: 02:42:29:939
Convert take time 00:00:00.0000069
Second StopWatch
Start: 02:42:29:940
Stop: 02:42:29:940
I found that only the FIRST DateTime.Now.ToString("HH:mm:ss:fff") consumes 10 ms, while the other three consume less than 10 µs in the same scope. May I know the exact reason?
Is it because the FIRST call loads the code into memory, so the following three benefit from that and take far less time to do the same thing? Thanks.

At first, I thought it was indeed the JIT, but that doesn't make sense because the framework itself is compiled ahead of time (via NGen) when it is installed.
I think it is loading the current culture (ToString does that, and it is indeed the call that takes the time).
================ EDIT =============
I did some tests, and there are two things which take time the first time they are called.
DateTime.Now makes a call to the internal method TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc, which takes about half of the consumed time.
The rest is consumed by ToString.
If, instead of calling DateTime.Now, you had called DateTime.UtcNow, you wouldn't notice that first-time cost of getting the local time offset.
As for ToString, what consumes most of its running time is generating the DateTimeFormat for the current culture.
I think this answers your question pretty well :)
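If you want to see the two first-call costs separately, here is a minimal sketch (not the original poster's code; it assumes a Release build without the debugger, and the timings will vary by machine):
var sw = Stopwatch.StartNew();
var utcNow = DateTime.UtcNow;                       // no local-offset lookup involved
sw.Stop();
Console.WriteLine("First DateTime.UtcNow:        {0}", sw.Elapsed);
sw.Restart();
var localNow = DateTime.Now;                        // pays the TimeZoneInfo offset lookup once
sw.Stop();
Console.WriteLine("First DateTime.Now:           {0}", sw.Elapsed);
sw.Restart();
var text = localNow.ToString("HH:mm:ss:fff");       // pays the culture's DateTimeFormat load once
sw.Stop();
Console.WriteLine("First ToString(format):       {0}", sw.Elapsed);
sw.Restart();
var again = DateTime.Now.ToString("HH:mm:ss:fff");  // both costs already paid, so this is fast
sw.Stop();
Console.WriteLine("Second Now.ToString(format):  {0}", sw.Elapsed);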

The first call to any individual method of a class requires JIT compilation, so the first call is always slow (unless the assembly was pre-JITed with NGen).
There is also an additional cost to load whatever resources/data the method needs on its first call. For example, DateTime.Now and DateTime.ToString likely require some locale data to be loaded and preprocessed.
The standard way to measure performance with Stopwatch is to call the function/code you want to measure once, to kick off all the JIT work related to it, and then do the measurement. Usually you'd run the code many times and average the results.
// --- Warm up --- ignore time here unless measuring startup perf.
var timeStartGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
// Actual timing after JIT and all startup cost is paid.
Stopwatch timer2 = Stopwatch.StartNew();
// Consider multiple iterations, e.g. enough to run the code for about a second.
var timeStartGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
timer2.Stop();
Console.WriteLine("Convert take time {0}", timer2.Elapsed);
Console.WriteLine("Second StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic2, timeEndGetStatistic2);
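For an average rather than a single pair of calls, a rough sketch of the same idea (the iteration count is arbitrary; tune it so the loop runs for roughly a second):
// Warm-up call so JIT and other first-call costs are excluded from the measurement.
DateTime.Now.ToString("HH:mm:ss:fff");
const int iterations = 100000;   // arbitrary; pick a value that runs for ~1 second
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    DateTime.Now.ToString("HH:mm:ss:fff");
}
sw.Stop();
Console.WriteLine("Average per call: {0:F4} ms",
    sw.Elapsed.TotalMilliseconds / iterations);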

As already noted, you're measuring this incorrectly. You should not include the first call to any code, because it needs to be JITed and that time will be included in your measurement.
It is also not a good idea to rely on a result obtained from running the code only once; you need to run it a number of times and calculate the average time taken.
Be sure to do this in Release mode, without the debugger attached.

Related

calculate execution time of my program

I'm actually building a tool where every millisecond counts, and I want to examine my program to see where it is slow and where it is fast enough, because somewhere it goes slowly. The program is about 600 lines. Is there any way to see the execution time of EVERY function in my program? I could only find this method:
Stopwatch stopWatch = Stopwatch.StartNew();
Thread.Sleep(10000);
stopWatch.Stop();
// Get the elapsed time as a TimeSpan value.
TimeSpan ts = stopWatch.Elapsed;
but I saw it's not actually that accurate, and I need it to be as accurate as possible. Thanks!
I ran the following:
var stopWatch = Stopwatch.StartNew();
Thread.Sleep(1000);
stopWatch.Stop();
Console.WriteLine(stopWatch.Elapsed);
The output was 00:00:01.0002463. Note that you might reasonably expect exactly one second here, but it's slightly over a second. Why? Thread.Sleep() isn't particularly accurate! Stopwatch is accurate enough to demonstrate that, and is in fact generally accurate down to nanosecond resolution.
That said, you're going about this the wrong way if you're trying to use stopwatches; you should instead profile your application with something like dotTrace and see where the hot spots are.

Get milliseconds passed

I just need a stable count of the current program's progression in milliseconds in C#. I don't care what timestamp it is based on (when the program starts, midnight, or the epoch); I just need a single function that returns a stable millisecond value that does not change in an abnormal manner and only increases by 1 each millisecond. You'd be surprised how few comprehensive and simple answers I could find by searching.
Edit: Why did you remove the C# from my title? I'd figure that's a pretty important piece of information.
When your program starts, create a Stopwatch and Start() it.
private Stopwatch sw = new Stopwatch();
public void StartMethod()
{
    sw.Start();
}
At any point you can query the Stopwatch:
public void SomeMethod()
{
    var a = sw.ElapsedMilliseconds;
}
If you want something accurate/precise then you need to use a Stopwatch, and please read Eric Lippert's blog post Precision and accuracy of DateTime (he was formerly the principal developer on the C# compiler team).
Excerpt:
Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the Stopwatch class. It really does have nanosecond precision and accuracy that is close to its precision.
If you don't need an accurate time, and you don't care about precision and the possibility of edge-cases that cause your milliseconds to actually be negative then use DateTime.
Do you mean DateTime.Now? It holds absolute time, and subtracting two DateTime instances gives you a TimeSpan object which has a TotalMilliseconds property.
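For example, a minimal sketch of that approach:
DateTime start = DateTime.Now;
// ... do some work ...
double elapsedMs = (DateTime.Now - start).TotalMilliseconds;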
You could store the current time in milliseconds when the program starts, then in your function get the current time again and subtract the two.
Edit:
If what you're going for is a stable count of process cycles, I would use processor clocks instead of wall-clock time.
As per your comment, you can use DateTime.Ticks; a tick is 1/10,000 of a millisecond.
Also, if you want to go the time route, you can store DateTime.Now when you start your program and take another DateTime.Now whenever you want the time; it has a Millisecond property.
Either way, DateTime is what you're looking for.
It sounds like you are just trying to get the current time in milliseconds. If so, try this:
long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;

Why is the first iteration always faster than the next in a loop?

I would like to understand why the first iteration in the loop executes quicker than the rest.
Stopwatch sw = new Stopwatch ();
sw.Start ();
for (int i = 0; i < 10; i++)
{
    System.Threading.Thread.Sleep(100);
    Console.WriteLine("Finished at : {0}", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}
When I execute the code I get the following:
Initially I thought it could be due to the accuracy of the Stopwatch class, but then why would it apply only to the first iteration? Correct me if I'm missing something.
This is a very flawed benchmark. For one, Thread.Sleep does not guarantee that you'll sleep for exactly 100 ms. Try much longer sleeps and you'll see more consistent results.
So it might even be just scheduling: the later iterations are always doing sleep after sleep. Since Sleep works off the system interrupt clock, the sleeps after the first should take a similar amount of time, while the first has to "sync up" with the clock first.
If you add another sleep before the cycle (and before starting the stopwatch), you'll likely get closer times for each of the iterations.
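A sketch of that suggestion, using the same loop with an extra warm-up sleep in front (just an illustration, not a proper benchmark):
Thread.Sleep(100);   // "sync up" with the interrupt clock before measuring
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 10; i++)
{
    Thread.Sleep(100);
    Console.WriteLine("Finished at : {0}", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}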
Or even better, don't use sleeps. If you use some actual CPU work instead, you'll avoid thread switches (provided you've got enough CPU to do that) and many other costs not associated with the cycle itself. For example,
Stopwatch sw = new Stopwatch ();
sw.Start ();
for (int i = 0; i < 10; i++)
{
    Thread.SpinWait(10000000);
    Console.WriteLine("Finished at : {0}", ((double)sw.ElapsedTicks / Stopwatch.Frequency) * 1e3);
}
This will give you much more consistent results, because it doesn't depend on the clock at all.
There are many other things that can complicate a benchmark like this, which is why benchmarks simply aren't done this way. There will always be deviations, and they can get rather big, especially on a system with a lot of work.
In other words, if you're getting differences in CPU work execution time on the scale of milliseconds, someone is stealing your work. There's nothing in a modern CPU that would account for such a huge difference just based on e.g. i++ being there or not.
I could describe a lot more issues with your code, but it probably isn't worth it. Just google for some best practices on CPU work benchmarking in C#, and you'll get much more worth out of it.
Oh, and just to help hammer the point home, on my computer the first iteration tends to come in anywhere from 99 up to 100 ms. That would be highly unusual, since the default interrupt clock resolution is 15.6 ms rather than 1 ms, but the culprit is easily found: Chrome sets it to 1 ms. Ouch.
What you're outputting is the total time elapsed since the start, so the time increasing by about 100 ms per iteration is exactly what you should expect.
But when you use Thread.Sleep you're giving up control of the thread, and you only get it back at something close to the time you've specified. That time will be in multiples of the system quantum, so what you specify cannot possibly be exact. If other threads of higher priority are doing work, it's less likely that your thread will be given processor time at a granularity close to the time you've requested.

speed of calling System.DateTime.Now

When I run the following code it shows an elapsed time of zero, but when I comment out the first line it shows an elapsed time of 20 ms! Why?
Does calling System.DateTime.Now load something at runtime, and is that what causes the difference?
string time1 = System.DateTime.Now.ToString();
var sw = System.Diagnostics.Stopwatch.StartNew();
string time = System.DateTime.Now.ToString();
string te = sw.ElapsedMilliseconds.ToString();
Console.WriteLine(te);
sw.Stop();
First and foremost, never profile in Debug. Further, even in Release, never profile with the debugger attached. Your results are skewed and hold no real value.
This code, in Release, takes 0 milliseconds. I executed it and verified that output. Below is the output:
0
Press any key to continue . . .
After disassembling mscorlib and analysing the results, I found that the twenty-millisecond delay may very well be caused by DateTime.Now.
At least in version 4.0 of the .NET Framework, that property calls the internal TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc() method. That method, in turn, invokes TimeZoneInfo.s_cachedData.GetOneYearLocalFromUtc(), which may exhibit a performance penalty on the first call (when that data is not cached yet).
Depending on the result of that call, TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc() can also invoke TimeZoneInfo.GetIsDaylightSavingsFromUtc(), which is non-trivial and involves date arithmetic.

Measure code speed in .net in milliseconds

I want to get the maximum count I have to execute a loop for it to take x milliseconds to finish.
For example:
int GetIterationsForExecutionTime(int ms)
{
    int count = 0;
    /* pseudocode
       do
           some code here
           count++;
       until executionTime > ms
    */
    return count;
}
How do I accomplish something like this?
I want to get the maximum count I have to execute a loop for it to take x milliseconds to finish.
First off, simply do not do that. If you need to wait a certain number of milliseconds do not busy-wait in a loop. Rather, start a timer and return. When the timer ticks, have it call a method that resumes where you left off. The Task.Delay method might be a good one to use; it takes care of the timer details for you.
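For example, a minimal sketch of that idea (ResumeWork is a hypothetical placeholder for whatever you need to run when the delay is over):
async Task WaitThenResumeAsync(int ms)
{
    await Task.Delay(ms);   // yields the thread; no busy-waiting, no CPU burned
    ResumeWork();           // hypothetical method that resumes where you left off
}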
If your question is actually about how to time the amount of time that some code takes then you need much more than simply a good timer. There is a lot of art and science to getting accurate timings.
First you should always use Stopwatch and never use DateTime.Now for these timings. Stopwatch is designed to be a high-precision timer for telling you how much time elapsed. DateTime.Now is a low-precision timer for telling you if it is time to watch Doctor Who yet. You wouldn't use a wall clock to time an Olympic race; you'd use the highest precision stopwatch you could get your hands on. So use the one provided for you.
Second, you need to remember that C# code is compiled Just In Time. The first time you go through a loop can therefore be hundreds or thousands of times more expensive than every subsequent time due to the cost of the jitter analyzing the code that the loop calls. If you are intending on measuring the "warm" cost of a loop then you need to run the loop once before you start timing it. If you are intending on measuring the average cost including the jit time then you need to decide how many times makes up a reasonable number of trials, so that the average works out correctly.
Third, you need to make sure that you are not wearing any lead weights when you are running. Never make performance measurements while debugging. It is astonishing the number of people who do this. If you are in the debugger then the runtime may be talking back and forth with the debugger to make sure that you are getting the debugging experience you want, and that chatter takes time. The jitter is generating worse code than it normally would, so that your debugging experience is more consistent. The garbage collector is collecting less aggressively. And so on. Always run your performance measurements outside the debugger, and with optimizations turned on.
Fourth, remember that virtual memory systems impose costs similar to those of jitters. If you are already running a managed program, or have recently run one, then the pages of the CLR that you need are likely "hot" -- already in RAM -- where they are fast. If not, then the pages might be cold, on disk, and need to be page faulted in. That can change timings enormously.
Fifth, remember that the jitter can make optimizations that you do not expect. If you try to time:
// Let's time addition!
for (int i = 0; i < 1000000; ++i) { int j = i + 1; }
the jitter is entirely within its rights to remove the entire loop. It can realize that the loop computes no value that is used anywhere else in the program and remove it entirely, giving it a time of zero. Does it do so? Maybe. Maybe not. That's up to the jitter. You should measure the performance of realistic code, where the values computed are actually used somehow; the jitter will then know that it cannot optimize them away.
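One common way to keep the jitter honest, sketched here, is to make sure the computed value is actually consumed somewhere it can observe:
long sum = 0;
var sw = Stopwatch.StartNew();
for (int i = 0; i < 1000000; ++i) { sum += i + 1; }     // the result feeds into 'sum'
sw.Stop();
Console.WriteLine("{0} computed in {1}", sum, sw.Elapsed);  // and 'sum' is actually used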
Sixth, timings of tests which create lots of garbage can be thrown off by the garbage collector. Suppose you have two tests, one that makes a lot of garbage and one that makes a little bit. The cost of the collection of the garbage produced by the first test can be "charged" to the time taken to run the second test if by luck the first test manages to run without a collection but the second test triggers one. If your tests produce a lot of garbage then consider (1) is my test realistic to begin with? It doesn't make any sense to do a performance measurement of an unrealistic program because you cannot make good inferences to how your real program will behave. And (2) should I be charging the cost of garbage collection to the test that produced the garbage? If so, then make sure that you force a full collection before the timing of the test is done.
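A pattern often used for that looks roughly like this (RunTest is a hypothetical placeholder for the code being measured):
// Clear out garbage left over from earlier tests so it is not charged to this one.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
var sw = Stopwatch.StartNew();
RunTest();        // hypothetical method containing the code under measurement
// Charge the garbage this test produced to this test, before stopping the clock.
GC.Collect();
sw.Stop();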
Seventh, you are running your code in a multithreaded, multiprocessor environment where threads can be switched at will, and where the thread quantum (the amount of time the operating system will give another thread until yours might get a chance to run again) is about 16 milliseconds. 16 milliseconds is about fifty million processor cycles. Coming up with accurate timings of sub-millisecond operations can be quite difficult if the thread switch happens within one of the several million processor cycles that you are trying to measure. Take that into consideration.
var sw = Stopwatch.StartNew();
...
long elapsedMilliseconds = sw.ElapsedMilliseconds;
You could also use the Stopwatch class:
int GetIterationsForExecutionTime(int ms)
{
    int count = 0;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    do
    {
        // some code here
        count++;
    } while (stopwatch.ElapsedMilliseconds < ms);
    stopwatch.Stop();
    return count;
}
Good points from Eric Lippert.
I've been benchmarking and unit testing for a while, and I'd advise you to discard every first pass over your code because of JIT compilation.
So in benchmarking code that uses a loop and a Stopwatch, remember to put this at the end of the loop body:
// JIT optimization.
if (i == 0)
{
    // Discard every result you've collected.
    // And restart the timer.
    stopwatch.Restart();
}
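For context, here is a sketch of the whole loop that snippet is meant to sit in (the iteration count and CodeUnderTest are placeholders, not from the answer above):
const int iterations = 1000;                  // placeholder count
var stopwatch = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    CodeUnderTest();                          // hypothetical method being measured
    // Discard the first pass (JIT warm-up) and restart the timer.
    if (i == 0)
    {
        stopwatch.Restart();
    }
}
stopwatch.Stop();
Console.WriteLine("Average: {0} ms",
    stopwatch.Elapsed.TotalMilliseconds / (iterations - 1));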
