When I run the following code it shows an elapsed time of zero, but when I comment out the first line it shows an elapsed time of 20 ms! Why?
Does calling System.DateTime.Now load something at runtime, and is that what causes the difference?
string time1 = System.DateTime.Now.ToString();
var sw = System.Diagnostics.Stopwatch.StartNew();
string time = System.DateTime.Now.ToString();
string te = sw.ElapsedMilliseconds.ToString();
Console.WriteLine(te);
sw.Stop();
First and foremost, never profile in Debug. Further, even in Release, never profile with the debugger attached. Your results are skewed and hold no real value.
This code, in Release, takes 0 milliseconds. I executed it and verified that; the output is below:
0
Press any key to continue . . .
After disassembling mscorlib and analysing the results, I believe that twenty-millisecond delay may very well be caused by DateTime.Now.
At least in version 4.0 of the .NET Framework, that property calls the internal TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc() method. That method, in turn, invokes TimeZoneInfo.s_cachedData.GetOneYearLocalFromUtc(), which may exhibit a performance penalty on the first call (when that data is not cached yet).
Depending on the result of that call, TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc() can also invoke TimeZoneInfo.GetIsDaylightSavingsFromUtc(), which is non-trivial and involves date arithmetic.
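If you want to see that first-call cost yourself, a minimal sketch like the following (timings will vary by machine and framework version) times the very first DateTime.Now access against a later one:

using System;
using System.Diagnostics;

class FirstCallCost
{
    static void Main()
    {
        // The very first DateTime.Now access: time zone offset data is not cached yet.
        var sw = Stopwatch.StartNew();
        var first = DateTime.Now;
        sw.Stop();
        Console.WriteLine("First DateTime.Now took {0} ticks ({1:HH:mm:ss.fff})", sw.ElapsedTicks, first);

        // A subsequent access: the cached data makes this much cheaper.
        sw.Restart();
        var second = DateTime.Now;
        sw.Stop();
        Console.WriteLine("Second DateTime.Now took {0} ticks ({1:HH:mm:ss.fff})", sw.ElapsedTicks, second);
    }
}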
I just need a stable count of the current program's progression in milliseconds in C#. I don't care what timestamp it is measured from, whether that's when the program starts, midnight, or the epoch; I just need a single function that returns a stable millisecond value that does not change in an abnormal manner, other than increasing by 1 each millisecond. You'd be surprised how few comprehensive and simple answers I could find by searching.
Edit: Why did you remove the C# from my title? I'd figure that's a pretty important piece of information.
When your program starts, create a Stopwatch and Start() it.
private Stopwatch sw = new Stopwatch();
public void StartMethod()
{
sw.Start();
}
At any point you can query the Stopwatch:
public void SomeMethod()
{
var a = sw.ElapsedMilliseconds;
}
If you want something accurate/precise then you need to use a Stopwatch, and please read Eric Lippert's blog post Precision and accuracy of DateTime (he was formerly the principal developer on the C# compiler team).
Excerpt:
Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the StopWatch class. It really does have nanosecond precision and accuracy that is close to its precision.
If you don't need an accurate time, and you don't care about precision and the possibility of edge-cases that cause your milliseconds to actually be negative then use DateTime.
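To make that distinction concrete, here is a minimal sketch contrasting the two uses; the DoWork method is just a placeholder for whatever you want to time:

using System;
using System.Diagnostics;
using System.Threading;

class ElapsedVersusWallClock
{
    static void DoWork() { Thread.Sleep(50); } // placeholder workload

    static void Main()
    {
        // "How long did this take?" -> Stopwatch
        var sw = Stopwatch.StartNew();
        DoWork();
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);

        // "What time is it right now?" -> DateTime
        Console.WriteLine("Wall clock: {0:HH:mm:ss.fff}", DateTime.Now);
    }
}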
Do you mean DateTime.Now? It holds absolute time, and subtracting two DateTime instances gives you a TimeSpan object which has a TotalMilliseconds property.
You could store the current time in milliseconds when the program starts, then in your function get the current time again and subtract the two.
Edit:
If what you're going for is a stable count of process cycles, I would use processor clocks instead of time.
As per your comment, you can use DateTime.Ticks; one tick is 1/10,000 of a millisecond.
Also, if you want to do the time thing, you can store DateTime.Now in a variable when your program starts and take another DateTime.Now whenever you want the time. It has a Millisecond property.
Either way, DateTime is what you're looking for.
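For example, a minimal sketch of the store-at-start / subtract-later idea described above (the names are just illustrative):

using System;

class ProgramClock
{
    // Captured once when the program starts.
    private static readonly DateTime start = DateTime.Now;

    // Milliseconds elapsed since startup: DateTime subtraction yields a TimeSpan.
    public static double ElapsedMilliseconds()
    {
        return (DateTime.Now - start).TotalMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("Elapsed so far: {0:F0} ms", ElapsedMilliseconds());
    }
}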
It sounds like you are just trying to get the current date and time, in milliseconds. If you are just trying to get the current time, in milliseconds, try this:
long milliseconds = DateTime.Now.Ticks / TimeSpan.TicksPerMillisecond;
When I analyzed the exact solution from my old post, I found a contradiction, so I tried this code in the Form1 constructor (after InitializeComponent();):
Stopwatch timer = Stopwatch.StartNew();
var timeStartGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
timer.Stop();
Console.WriteLine("Convert take time {0}", timer.Elapsed);
Console.WriteLine("First StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic, timeEndGetStatistic);
Stopwatch timer2 = Stopwatch.StartNew();
var timeStartGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
timer2.Stop();
Console.WriteLine("Convert take time {0}", timer2.Elapsed);
Console.WriteLine("Second StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic2, timeEndGetStatistic2);
Result
Convert take time 00:00:00.0102284
First StopWatch
Start: 02:42:29:929
Stop: 02:42:29:939
Convert take time 00:00:00.0000069
Second StopWatch
Start: 02:42:29:940
Stop: 02:42:29:940
I found that only the FIRST DateTime.Now.ToString("HH:mm:ss:fff"); consumes 10 ms, while the other three consume less than 10 µs in the same scope. May I know the exact reason?
Is it because the FIRST call loads the code into memory, so the following three benefit from that and take much less time to do the same thing? Thanks.
At first, I thought it was indeed the JIT... but that doesn't make sense, because the framework itself is compiled AOT when it is installed.
I think it is loading the current culture (ToString does that, and indeed it is the call that takes the time).
================ EDIT =============
I did some tests, and there are two things which take time the first time they are called.
DateTime.Now makes a call to the internal method TimeZoneInfo.GetDateTimeNowUtcOffsetFromUtc, which takes about half of the consumed time...
The rest is consumed by ToString.
If instead of calling DateTime.Now you had called DateTime.UtcNow, you wouldn't notice that first-time cost of getting the local time offset.
As for ToString: what consumes most of its running time is generating the DateTimeFormat for the current culture.
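If you want to pay those one-time costs up front rather than inside anything you measure, a rough sketch of warming both up could look like this (just an illustration, not a required step):

using System;
using System.Globalization;

class WarmUpSketch
{
    static void Main()
    {
        // Pay both one-time costs up front: DateTime.Now touches the local
        // time-zone offset path, and asking for the culture's DateTimeFormat
        // forces that format data to be built.
        DateTime warm = DateTime.Now;
        DateTimeFormatInfo format = CultureInfo.CurrentCulture.DateTimeFormat;
        Console.WriteLine("Warm-up: {0}", warm.ToString("HH:mm:ss:fff", format));

        // Later calls no longer pay the first-time costs.
        Console.WriteLine("Timed:   {0}", DateTime.Now.ToString("HH:mm:ss:fff", format));
    }
}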
I think this answers your question pretty well :)
The first call to any individual method of a class requires JIT compilation, so the first call is always slow (unless the assembly was pre-JITed with NGen).
There is also an additional cost, depending on the method, for loading any resources/data the first call needs; DateTime.Now and DateTime.ToString likely require some locale data to be loaded/preprocessed.
The standard way to measure performance with Stopwatch is to call the function/code you want to measure once, to kick off all the JIT work related to that code, and then do the measurement. Usually you'd run the code many times and average the results.
// --- Warm up --- ignore time here unless measuring startup perf.
var timeStartGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic = DateTime.Now.ToString("HH:mm:ss:fff");
// Actual timing after JIT and all startup cost is paid.
Stopwatch timer2 = Stopwatch.StartNew();
// Consider multiple iterations i.e. to run code for about a second.
var timeStartGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
var timeEndGetStatistic2 = DateTime.Now.ToString("HH:mm:ss:fff");
timer2.Stop();
Console.WriteLine("Convert take time {0}", timer2.Elapsed);
Console.WriteLine("Second StopWatch\nStart:\t{0}\nStop:\t{1}",
timeStartGetStatistic2, timeEndGetStatistic2);
As already noted, you're measuring wrong. You should not include the first call to any code, because that call needs to be JITed and that time will be included.
Also, it is not good to rely on a result from a single run; you need to run the code a number of times and calculate the average time taken.
Be sure that you do it in Release mode, without the debugger attached.
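Something along these lines, for example (a rough sketch; pick an iteration count large enough that the whole run takes on the order of a second):

using System;
using System.Diagnostics;

class AveragedTiming
{
    static void Main()
    {
        const int iterations = 100000;

        // Warm-up call so JIT and other first-time costs are not measured.
        var warmup = DateTime.Now.ToString("HH:mm:ss:fff");

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var s = DateTime.Now.ToString("HH:mm:ss:fff");
        }
        sw.Stop();

        Console.WriteLine("Average per call: {0:F4} ms",
            sw.Elapsed.TotalMilliseconds / iterations);
    }
}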
I want to get the maximum count I have to execute a loop for it to take x milliseconds to finish.
For eg.
int GetIterationsForExecutionTime(int ms)
{
int count = 0;
/* pseudocode
do
some code here
count++;
until executionTime > ms
*/
return count;
}
How do I accomplish something like this?
I want to get the maximum count I have to execute a loop for it to take x milliseconds to finish.
First off, simply do not do that. If you need to wait a certain number of milliseconds do not busy-wait in a loop. Rather, start a timer and return. When the timer ticks, have it call a method that resumes where you left off. The Task.Delay method might be a good one to use; it takes care of the timer details for you.
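For example, a minimal sketch of that timer-and-resume idea using Task.Delay (the 500 ms delay is just illustrative):

using System;
using System.Threading.Tasks;

class DelayInsteadOfBusyWait
{
    // Resumes after the delay without burning a CPU core in a busy loop.
    static async Task DoWorkAfterDelayAsync(int ms)
    {
        await Task.Delay(ms);
        Console.WriteLine("Resumed at {0:HH:mm:ss.fff}", DateTime.Now);
    }

    static void Main()
    {
        Console.WriteLine("Started at {0:HH:mm:ss.fff}", DateTime.Now);
        // Wait() here only so this console sketch does not exit immediately.
        DoWorkAfterDelayAsync(500).Wait();
    }
}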
If your question is actually about how to time the amount of time that some code takes then you need much more than simply a good timer. There is a lot of art and science to getting accurate timings.
First you should always use Stopwatch and never use DateTime.Now for these timings. Stopwatch is designed to be a high-precision timer for telling you how much time elapsed. DateTime.Now is a low-precision timer for telling you if it is time to watch Doctor Who yet. You wouldn't use a wall clock to time an Olympic race; you'd use the highest precision stopwatch you could get your hands on. So use the one provided for you.
Second, you need to remember that C# code is compiled Just In Time. The first time you go through a loop can therefore be hundreds or thousands of times more expensive than every subsequent time due to the cost of the jitter analyzing the code that the loop calls. If you are intending on measuring the "warm" cost of a loop then you need to run the loop once before you start timing it. If you are intending on measuring the average cost including the jit time then you need to decide how many times makes up a reasonable number of trials, so that the average works out correctly.
Third, you need to make sure that you are not wearing any lead weights when you are running. Never make performance measurements while debugging. It is astonishing the number of people who do this. If you are in the debugger then the runtime may be talking back and forth with the debugger to make sure that you are getting the debugging experience you want, and that chatter takes time. The jitter is generating worse code than it normally would, so that your debugging experience is more consistent. The garbage collector is collecting less aggressively. And so on. Always run your performance measurements outside the debugger, and with optimizations turned on.
Fourth, remember that virtual memory systems impose costs similar to those of jitters. If you are already running a managed program, or have recently run one, then the pages of the CLR that you need are likely "hot" -- already in RAM -- where they are fast. If not, then the pages might be cold, on disk, and need to be page faulted in. That can change timings enormously.
Fifth, remember that the jitter can make optimizations that you do not expect. If you try to time:
// Let's time addition!
for (int i = 0; i < 1000000; ++i) { int j = i + 1; }
the jitter is entirely within its rights to remove the entire loop. It can realize that the loop computes no value that is used anywhere else in the program and remove it entirely, giving it a time of zero. Does it do so? Maybe. Maybe not. That's up to the jitter. You should measure the performance of realistic code, where the values computed are actually used somehow; the jitter will then know that it cannot optimize them away.
Sixth, timings of tests which create lots of garbage can be thrown off by the garbage collector. Suppose you have two tests, one that makes a lot of garbage and one that makes a little bit. The cost of the collection of the garbage produced by the first test can be "charged" to the time taken to run the second test if by luck the first test manages to run without a collection but the second test triggers one. If your tests produce a lot of garbage then consider (1) is my test realistic to begin with? It doesn't make any sense to do a performance measurement of an unrealistic program because you cannot make good inferences to how your real program will behave. And (2) should I be charging the cost of garbage collection to the test that produced the garbage? If so, then make sure that you force a full collection before the timing of the test is done.
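For illustration only, forcing a full collection around a test could look roughly like this (RunTestThatAllocates is a hypothetical placeholder for the code under test):

using System;
using System.Diagnostics;

class GcFencedTiming
{
    // Hypothetical workload that produces garbage.
    static void RunTestThatAllocates()
    {
        for (int i = 0; i < 100000; i++) { var s = new string('x', 100); }
    }

    static void Main()
    {
        // Force a full, blocking collection so garbage left over from earlier
        // work is not charged to the measurement below.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        var sw = Stopwatch.StartNew();
        RunTestThatAllocates();
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
    }
}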
Seventh, you are running your code in a multithreaded, multiprocessor environment where threads can be switched at will, and where the thread quantum (the amount of time the operating system will give another thread until yours might get a chance to run again) is about 16 milliseconds. 16 milliseconds is about fifty million processor cycles. Coming up with accurate timings of sub-millisecond operations can be quite difficult if the thread switch happens within one of the several million processor cycles that you are trying to measure. Take that into consideration.
var sw = Stopwatch.StartNew();
...
long elapsedMilliseconds = sw.ElapsedMilliseconds;
You could also use the Stopwatch class:
int GetIterationsForExecutionTime(int ms)
{
int count = 0;
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
do
{
// some code here
count++;
} while (stopwatch.ElapsedMilliseconds < ms);
stopwatch.Stop();
return count;
}
Good points from Eric Lippert.
I've been benchmarking and unit testing for a while, and I'd advise you to discard every first pass over your code because of JIT compilation.
So in benchmarking code which uses a loop and a Stopwatch, remember to put this at the end of the loop:
// JIT optimization.
if (i == 0)
{
// Discard every result you've collected.
// And restart the timer.
stopwatch.Restart();
}
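In context, that check might sit inside a benchmarking loop like this (a sketch; the work in the loop body is just a placeholder):

using System;
using System.Diagnostics;

class DiscardFirstPass
{
    static void Main()
    {
        const int iterations = 1000;
        var stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            // Placeholder for the code being benchmarked.
            var s = DateTime.Now.ToString("HH:mm:ss:fff");

            // The first pass pays JIT and other one-time costs,
            // so throw its result away and restart the timer.
            if (i == 0)
            {
                stopwatch.Restart();
            }
        }

        stopwatch.Stop();
        Console.WriteLine("Average: {0:F4} ms",
            stopwatch.Elapsed.TotalMilliseconds / (iterations - 1));
    }
}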
In my application, I have used a number of System.Threading.Timer instances and set each timer to fire every 1 second. My application executes the thread every 1 second, but the millisecond at which it executes differs each time.
In my application I use an OPC server and an OPC group. One thread reads data from the OPC server (for example, one variable changes its value and I want to log the moment of that change in my application every 1 second).
Another thread then reads this data from the first thread every 1 second and stores it in the MySQL database.
The problem is that when I read the data from the first thread I get old values: for example, reading at 10:28:01.530 gives me the information from 10:28:00.260. So I want to arrange these threads so that the first thread works at 000 milliseconds and the second thread works at 500 milliseconds; that way the first thread updates the data at .000 and the second thread reads it at .500.
My output is given below:
10:28:32.875
10:28:33.390
10:28:34.875
....
10:28:39.530
10:28:40.875
However, I want the following results:
10:28:32.000
10:28:33.000
10:28:34.000
....
10:28:39.000
10:28:40.000
How can the timer be set so the callback is executed at "000 milliseconds"?
First of all, it's impossible. Even if you schedule your 'events' to fire a few milliseconds ahead of time and then compare the millisecond component of the current time with zero in a loop, control could be taken away from your code at any given moment.
You will have to rethink your design a little: don't depend on exactly when the event fires, but think of an algorithm that compensates for the milliseconds of delay.
Also, Threading.Timer won't help you much here; you would have a better chance using your own thread, which periodically does the following:
check the current time and see how long it is until the next full second
Sleep() for that amount minus the 'spice' factor
do the work you have to do.
You'll calculate your 'spice' factor depending on the results you are getting: does the sleep finish ahead of or behind schedule?
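A rough sketch of that loop (the spice value here is only an illustrative starting point you would tune from your own measurements):

using System;
using System.Threading;

class AlignedLoop
{
    static void Main()
    {
        int spiceMs = 15; // tuning factor; adjust based on observed drift

        while (true)
        {
            // Time remaining until the next full second.
            int msUntilNextSecond = 1000 - DateTime.Now.Millisecond;

            // If we are already almost on the boundary, aim for the second after.
            if (msUntilNextSecond <= spiceMs)
            {
                msUntilNextSecond += 1000;
            }

            // Sleep most of the way there, keeping the 'spice' margin in hand.
            Thread.Sleep(msUntilNextSecond - spiceMs);

            // Do the periodic work as close to the boundary as we managed to get.
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
        }
    }
}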
If you give more information about your apparent need for having the event fire at exactly zero milliseconds, I could help you get rid of that requirement.
HTH
I would say that it's impossible. You have to understand that a CPU context switch takes time (if another process is running you have to wait; the CPU scheduler is at work). Each CPU tick takes some time, so synchronization to 0 milliseconds is impossible. Maybe by setting a high priority for your process you can get closer to 0, but you will never achieve it exactly.
IMHO it will be impossible to really get a timer to fire exactly every 1 second (to the millisecond); even in hardcore assembler this would be a very hard task on a normal Windows machine.
I think the first thing you need to do is set the right dueTime for the timer. I do it like this:
dueTime = 1000 - DateTime.Now.Millisecond + X; where X is there for accuracy and you need to select it by testing. Each time Threading.Timer ticks, it runs on a thread from the CLR thread pool and, as tests show, that thread is different each time. Creating threads slows the timer down; because of this you can use WaitableTimer, which will always run on the same thread. Instead of WaitableTimer you can use the Thread.Sleep method like this:
Thread.CurrentThread.Priority = ThreadPriority.Highest; // If timing is really critical
Thread.Sleep(1000 - DateTime.Now.Millisecond + 50); // Align to the next second boundary
while (SomeBoolCondition)
{
    Thread.Sleep(980); // 1000 ms = 1 second, but some ms will be spent exiting Sleep
    // Here you write your code
}
This will work faster than a timer.
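For completeness, a hedged sketch of the dueTime idea with System.Threading.Timer: compute how long it is until the next full second (plus the accuracy factor X) and pass that as the timer's dueTime, with a 1000 ms period after that. The value of X here is only a placeholder you would tune by testing:

using System;
using System.Threading;

class AlignedTimer
{
    static void Main()
    {
        const int x = 15; // accuracy factor; select it by testing

        // Fire the first callback just after the next full second,
        // then every 1000 ms after that.
        int dueTime = 1000 - DateTime.Now.Millisecond + x;

        using (var timer = new Timer(
            _ => Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff")),
            null, dueTime, 1000))
        {
            Console.ReadLine(); // keep the process alive while the timer runs
        }
    }
}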
So a simple enough question really.
How exactly does the interval for System.Timers work?
Does it fire every second, regardless of how long the elapsed event handler takes, or does it wait for the routine to finish first and then restart the interval?
So either:
1 sec....1 sec....1 sec and so on
1 sec + process time....1 sec + process time....1 sec + process time and so on
The reason I ask is that I know my "processing" takes much less than 1 second, but I would like to fire it every second on the dot (or as close as possible).
I had been using a Thread.Sleep method like so:
Thread.Sleep(1000 - ((int)(DateTime.Now.Subtract(start).TotalMilliseconds) >= 1000 ? 0 : (int)(DateTime.Now.Subtract(start).TotalMilliseconds)));
Where start is recorded at the beginning of the routine. The problem here is that Thread.Sleep only works in whole milliseconds, so my routine could restart at 1000 ms or a fraction over, like 1000.0234 ms. That can happen because one of my routines takes 0 ms according to TimeSpan but has obviously used some ticks/nanoseconds, which means the timing is off and it no longer fires every second. If I could sleep by ticks or nanoseconds it would be bang on.
If number 1 applies to System.Timers then I guess I'm sorted. If not, I need some way to "sleep" the thread at a higher resolution of time, i.e. ticks/nanoseconds.
You might ask why I use an inline IF statement: sometimes the processing can go above 1000 ms, so we need to make sure we don't produce a negative figure. Also, by the time we determine this, the end time has changed slightly; not by much, but it could make the thread sleep slightly longer, throwing off all the subsequent sleeps.
I know, I know, the time would be negligible... but what happens if the system suddenly stalled for a few ms? It would protect against that in this case.
Update 1
OK, so I didn't realise you could pass a TimeSpan as the sleep value. So I used the code below:
Thread.Sleep(TimeSpan.FromMilliseconds(1000) - ((DateTime.Now.Subtract(start).TotalMilliseconds >= 1000) ? TimeSpan.FromMilliseconds(0) : DateTime.Now.Subtract(start)));
If I am right, this should then allow me to repeat the thread at exactly 1 second - or as close as the system will allow.
If you have set AutoReset = true, then your theory 1 is true; otherwise you would have to deal with it in code. See the documentation for Timer on MSDN.
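For reference, a minimal sketch of that setup with System.Timers.Timer (AutoReset defaults to true, but it is set explicitly here for clarity):

using System;
using System.Timers;

class AutoResetExample
{
    static void Main()
    {
        var timer = new Timer(1000); // interval of 1 second
        timer.AutoReset = true;      // keep firing every interval (behaviour 1 above)
        timer.Elapsed += (sender, e) =>
            Console.WriteLine("Tick at {0:HH:mm:ss.fff}", e.SignalTime);
        timer.Start();

        Console.ReadLine(); // keep the process alive
    }
}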