Recently I have been looking for a solution for my Windows application, which was logging timestamps to a file incorrectly.
I came to know that the issue is the resolution of DateTime.Now on Windows, so I am trying to avoid it. Based on hints given in forum answers, I created a sample application.
Here's the sample code and its output:
private Timer tmr = new Timer();   // System.Windows.Forms.Timer (uses the Tick event)
private Stopwatch ss;

void tmr_Tick( object sender, EventArgs e )
{
    WriteData();
}

private void button1_Click( object sender, EventArgs e )
{
    ss = new Stopwatch();
    ss.Start();
    tmr.Interval = 1;                          // ask for a 1 ms tick
    tmr.Tick += new EventHandler( tmr_Tick );
    tmr.Start();
    WriteData();
}

void WriteData()
{
    Console.WriteLine( ss.ElapsedMilliseconds );
}
Output:
0
19
29
49
69
79
99
109
122
142
162
172
192
202
222
232
252
272
282
294
314
334
341
How can I get more accuracy in this code?
tmr.Interval = 1;
That's a small number, and there is some virtue in not being able to make it smaller. But no, you are not going to get a timer to tick a thousand times per second. There are three factors that determine the actual rate at which the Timer fires the Tick event:
How busy your UI thread is doing other things, like running a Click event handler or painting the window. The Tick event handler can only run when the UI thread isn't busy with anything else; it must be idle. This is of course a highly variable number, and it matters a great deal what the user of your app is doing. Some things the user can do are pretty expensive, like resizing the window of your app. Typical UI thread duties, like painting, take a handful of milliseconds. Quick enough for the human eye, but quite noticeable if you require the Tick event to run this frequently. It just isn't going to run on time; the delay could be hundreds of milliseconds if your UI is convoluted. You'll need a different kind of Timer class to avoid this, either System.Threading.Timer or System.Timers.Timer. They are timer classes that run their code on a thread-pool thread, with the advantage that they won't be delayed by what's going on in the UI thread, and the significant disadvantage that you have to be very careful what you do in that code: you certainly can't update the UI.
How busy the machine is doing other things, like executing the threads of another process or a device driver. Allowing hundreds of threads to share the processor is a primary task of the operating system; the thread scheduler in Windows determines when a thread gets a chance to run. Once a thread gains the processor and executes, it is allowed to run for a while, a period called the quantum. A typical quantum is 45 milliseconds for a thread owned by a window in the foreground. Other threads are suspended until they get a chance to run. This of course spells doom for your plan to run a Tick event a thousand times per second. You'll need to carefully control what other programs run on the machine; a many-core processor will help a great deal.
How often Windows updates a timer. The core operating system feature involved here is the clock interrupt. Used for many things, it is the basic heartbeat of the operating system. The normal state of the processor is to not be doing anything; it is halted. The clock tick interrupt wakes it up from the halt state and the operating system checks whether any work needs to be performed. The default interrupt rate is 64 times per second, once every 15.625 milliseconds. This also affects the timer's accuracy; nothing can happen while the processor is halted. So by design you will never get a 1 msec rate; it can never be less than 16 msec.
The latter bullet is certainly the biggest hangup. You can in fact change the interrupt rate in your program; it requires a pinvoke call to the timeBeginPeriod() winapi function. This answer shows how. It also mentions the timeSetEvent() function, the one you'd need to get anywhere close to a timer with a reasonable guarantee of sustaining a 1 millisecond rate.
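As a rough illustration (not the linked answer's exact code), here is a minimal sketch of the timeBeginPeriod/timeEndPeriod pinvoke. The 1 ms value and the demo method are just for illustration; the resolution must be restored with timeEndPeriod when you are done.

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class HighResolutionClock
{
    // winmm.dll exports timeBeginPeriod/timeEndPeriod; both take the requested
    // resolution in milliseconds and return 0 (TIMERR_NOERROR) on success.
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uPeriod);

    public static void Demo()
    {
        if (timeBeginPeriod(1) != 0)
            throw new InvalidOperationException("A 1 ms timer resolution is not supported");
        try
        {
            // Timers and Thread.Sleep are now serviced roughly every millisecond
            // instead of every ~15.6 ms, at the cost of extra power consumption.
            Thread.Sleep(1);
        }
        finally
        {
            timeEndPeriod(1); // always restore the previous resolution
        }
    }
}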
Stopwatch uses the Windows Performance Counter APIs under the covers so it should be pretty accurate (the Performance Counter APIs are quoted as having a resolution in the low microsecond range).
Your test is invalid, though, because you are actually measuring the drift in the Timer rather than any inaccuracy in the Stopwatch.
On the subject of the Timer - it looks like you are using the System.Windows.Forms.Timer class in your sample. That's not a great choice for accuracy; its sole advantage is that it raises the Tick event on the UI thread. Try the System.Threading.Timer class instead - you may find it gives you results closer to what you expect (although it still suffers from some drift).
var timerCallback = (TimerCallback)(sw => Console.WriteLine(((Stopwatch)sw).Elapsed));
var timerInterval = TimeSpan.FromMilliseconds(1);
var stopwatch = Stopwatch.StartNew();
var timer = new Timer(timerCallback, stopwatch, timerInterval, timerInterval);
Ultimately it's pretty difficult to test the accuracy of the Stopwatch because there isn't anything else readily available that is more accurate!
Related
In a C# System.Windows.Forms.Timer, what would happen if the code within the timer tick took longer to calculate than the tick length?
For example, in the code below, what would happen if updating the label took longer than the interval of the tick (1 second)?
private void timerProgress_Tick( object sender, EventArgs e )
{
    if( label.Value <= label.Maximum )
    {
        label.Value = item;
    }
    update_label();
}
I can't seem to find any answers for this, though it seems like an obvious question.
As mentioned in the comments on the question, the System.Windows.Forms.Timer will queue the Tick events, effectively blocking the UI thread if every Tick handler takes longer than the set interval.
The event handler will continue its work for as long as it needs, regardless of the interval.
For example, if you were to make a countdown timer with a tick of one second, but each tick contains calculations that take 1.3 seconds, it would be delayed. This means your countdown will be incorrect: a 30 second countdown will actually last around 39 seconds, regardless of the one-second Tick interval.
Of course, long-running work should not be done inside a Timer event, because these handlers are forced onto the UI thread and you shouldn't block that thread.
System.Windows.Forms.Timer will fire on the UI thread only (the thread that the owning form is bound to), and will fire at some time after the time period has elapsed, when the thread is otherwise idle (including repainting the window). The event handler blocks the thread from doing anything else (including handling user input and repaint events) until it has returned.
To avoid an unresponsive UI, you should typically use this timer only for animations. You could queue a task to the thread pool, but you might as well use System.Threading.Timer, which fires its events on the thread pool. Note, however, that System.Threading.Timer does not check that the previous event handler has returned.
Under the covers, System.Windows.Forms.Timer is based on the Win32 SetTimer API.
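Since System.Threading.Timer does not wait for the previous callback to return, a common pattern is to guard against overlapping callbacks yourself. A minimal sketch under that assumption (the class and member names are illustrative):

using System;
using System.Threading;

class NonReentrantTimer
{
    private readonly Timer _timer;
    private int _running; // 0 = idle, 1 = a callback is in progress

    public NonReentrantTimer(TimeSpan interval)
    {
        _timer = new Timer(Tick, null, interval, interval);
    }

    private void Tick(object state)
    {
        // Skip this tick if the previous callback hasn't returned yet.
        if (Interlocked.Exchange(ref _running, 1) == 1)
            return;
        try
        {
            DoWork();
        }
        finally
        {
            Interlocked.Exchange(ref _running, 0);
        }
    }

    private void DoWork()
    {
        // Work runs on a thread-pool thread; do not touch UI controls here.
        // Marshal back with Control.BeginInvoke / Dispatcher.BeginInvoke if needed.
    }
}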
Related
Currently I have a timer which I assumed called the method defined in the TimerCallback field every 5 ms, but after reading about the Timer class and its resolution I discovered that the Timer class only has a resolution of 15 ms or greater.
e.g.
autoDispatchTimer = new Timer(new TimerCallback(this.SendData), null, 0, 5);
I also looked into using a Stopwatch, as this does allow for a higher resolution, but I am still unsure how to use a Stopwatch to carry out the same operation. Is there any way this can be done using a Stopwatch object?
Would it be a case of reading the ElapsedMilliseconds property and calling the SendData method there, or am I misunderstanding how the Stopwatch in C# works?
Help on this query would be greatly appreciated.
EDIT: I completely understand that the timer has an accuracy of 15ms, hence the rest of my question asking about an alternative which is faster. Title has now been modified to reflect this.
The problem is that a Timer callback is suspended in the background and only gets scheduled to run at some point. That is guaranteed to be no sooner than the time specified for your timer (so it will take at least the time specified; it can take more), and it will usually take a few extra milliseconds before it comes back. The timer accuracy is 15 ms on Windows 7 and higher, so there is no chance of getting it faster like this.
The only thing that is faster than a Timer is a while loop. You could use that, or rethink your design (at least, that is what I would do).
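A rough sketch of what such a loop could look like, driven by a Stopwatch. SendData and the 5 ms target come from the question; everything else (class name, stop flag, spin count) is illustrative:

using System;
using System.Diagnostics;
using System.Threading;

class HighFrequencyLoop
{
    private volatile bool _stop;

    public void Run()
    {
        var interval = TimeSpan.FromMilliseconds(5);
        var sw = Stopwatch.StartNew();
        var next = interval;

        while (!_stop)
        {
            if (sw.Elapsed >= next)
            {
                SendData(null);      // the method the question wants called every ~5 ms
                next += interval;    // schedule relative to the start to avoid drift
            }
            else
            {
                Thread.SpinWait(50); // burn a few cycles instead of sleeping
            }
        }
    }

    public void Stop() => _stop = true;

    private void SendData(object state)
    {
        // ... whatever the 5 ms work is ...
    }
}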
The timer resolution documentation says:

"The system timer resolution determines how frequently Windows performs two main actions:

Update the timer tick count if a full tick has elapsed.
Check whether a scheduled timer object has expired.

A timer tick is a notion of elapsed time that Windows uses to track the time of day and thread quantum times. By default, the clock interrupt and timer tick are the same, but Windows or an application can change the clock interrupt period.

The default timer resolution on Windows 7 is 15.6 milliseconds (ms). Some applications reduce this to 1 ms, which reduces the battery run time on mobile systems by as much as 25 percent."
However you can try this:
[DllImport("ntdll.dll", EntryPoint = "NtSetTimerResolution")]
public static extern void NtSetTimerResolution(uint DesiredResolution, bool SetResolution, ref uint CurrentResolution);
private void Foo()
{
uint DesiredResolution = 9000;
bool SetResolution= true;
uint CurrentResolution = 0;
NtSetTimerResolution(DesiredResolution, SetResolution, ref CurrentResolution);
}
You can also refer to: High-Performance Timer in C#
I need to write a component that receives events (each event has a unique ID). Each event requires me to send out a request. The event specifies a timeout period within which to wait for a response to the request.
If the response comes before the timer fires, great, I cancel the timer.
If the timer fires first, then the request timed out, and I want to move on.
This timeout period is specified in the event, so it's not constant.
The expected timeout period is in the range of 30 seconds to 5 minutes.
I can see two ways of implementing this.
Option 1: Create a timer for each event and put it into a dictionary linking the event to the timer.
Option 2: Create an ordered list containing the DateTime of each timeout, and a new thread looping every 100 ms to check whether something has timed out.
Option 1 would seem like the easiest solution, but I'm afraid that creating so many timers might not be a good idea because timers might be too expensive. Are there any pitfalls in creating a large number of timers? I suspect that, under the covers, the timer implementation might actually be an efficient implementation of Option 2. If this option is a good idea, which timer should I use: System.Timers.Timer or System.Threading.Timer?
Option 2 seems like more work, and may not be an efficient solution compared to Option 1.
Update
The maximum number of timers I expect is in the range of 10000, but more likely in the range of 100. Also, the normal case would be the timer being canceled before firing.
Update 2
I ran a test using 10K instances of System.Threading.Timer and System.Timers.Timer, keeping an eye on thread count and memory. System.Threading.Timer seems to be "lighter" than System.Timers.Timer judging by memory usage, and neither created an excessive number of threads (i.e. thread pooling is working properly). So I decided to go ahead and use System.Threading.Timer.
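For reference, a minimal sketch of the kind of comparison described above. The timer count, due time and measurements are illustrative, not the original test code:

using System;
using System.Diagnostics;
using System.Threading;

class TimerFootprintTest
{
    static void Main()
    {
        const int count = 10000;
        var timers = new Timer[count];

        long before = GC.GetTotalMemory(forceFullCollection: true);

        for (int i = 0; i < count; i++)
        {
            // Each timer fires once far in the future and is then effectively idle,
            // mimicking a timeout that is normally cancelled before it fires.
            timers[i] = new Timer(_ => { }, null,
                dueTime: TimeSpan.FromMinutes(5), period: Timeout.InfiniteTimeSpan);
        }

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Approx. managed memory for {count} timers: {(after - before) / 1024} KB");
        Console.WriteLine($"Thread count: {Process.GetCurrentProcess().Threads.Count}");

        foreach (var t in timers) t.Dispose();
    }
}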
I do this a lot in embedded systems (pure C), where I can't burn a lot of resources (e.g. 4 KB of RAM is the entire system memory). This is one approach that has been used successfully:
Create a single system timer (interrupt) that goes off on a periodic basis (e.g. every 10 ms).
A "timer" is an entry in a dynamic list that indicates how many "ticks" are left till the timer goes off.
Each time the system timer goes off, iterate the list and decrement each of the "timers". Each one that is zero is "fired". Remove it from the list and do whatever the timer was supposed to do.
What happens when the timer goes off depends on the application. It may be that a state machine gets run, or a function gets called, or it may be an enumeration telling the execution code what to do with the parameter sent in the "Create Timer" call. The information in the timer structure is whatever is necessary in the context of the design. The "tick count" is the secret sauce.
We also have this return an "ID" for the timer (usually the address of the timer structure, which is drawn from a pool) so it can be cancelled or its status queried.
Convenience functions convert "seconds" to "ticks" so the API of creating the timers is always in terms of "seconds" or "milliseconds".
You set the "tick" interval to a reasonable value for granularity tradeoff.
I have done other implementations of this in C++, C#, and Objective-C, with little change in the general approach. It is a very general timer subsystem design/architecture. You just need something to create the fundamental "tick".
I even did it once with a tight "main" loop and a stopwatch from the high-precision internal timer to create my own "simulated" tick when I did not have a timer. I do not recommend this approach; I was simulating hardware in a straight console app and did not have access to the system timers, so it was a bit of an extreme case.
Iterating over a list of hundreds of timers 10 times a second is not that big a deal on a modern processor. There are also ways to avoid even that, such as inserting items with "delta" times and keeping the list in sorted order; that way you only have to check the ones at the front of the list. This gets you past scaling issues, at least in terms of iterating the list. A small sketch of the basic idea follows.
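A minimal C# sketch of the same idea (one periodic system tick decrementing a list of remaining-tick counters); all names and the 10 ms granularity are illustrative:

using System;
using System.Collections.Generic;
using System.Threading;

class TickTimerList
{
    private sealed class Entry
    {
        public int TicksLeft;
        public Action Callback;
    }

    private readonly List<Entry> _entries = new List<Entry>();
    private readonly object _lock = new object();
    private readonly Timer _systemTick;
    private const int TickMs = 10; // granularity of the whole subsystem

    public TickTimerList()
    {
        _systemTick = new Timer(OnTick, null, TickMs, TickMs);
    }

    public object Create(TimeSpan timeout, Action callback)
    {
        var e = new Entry
        {
            TicksLeft = (int)(timeout.TotalMilliseconds / TickMs),
            Callback = callback
        };
        lock (_lock) _entries.Add(e);
        return e; // the "ID" used to cancel or query the timer
    }

    public void Cancel(object id)
    {
        lock (_lock) _entries.Remove((Entry)id);
    }

    private void OnTick(object state)
    {
        var fired = new List<Entry>();
        lock (_lock)
        {
            for (int i = _entries.Count - 1; i >= 0; i--)
            {
                if (--_entries[i].TicksLeft <= 0)
                {
                    fired.Add(_entries[i]);
                    _entries.RemoveAt(i);
                }
            }
        }
        // Run callbacks outside the lock so a slow callback can't stall the tick.
        foreach (var e in fired) e.Callback();
    }
}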
Was this helpful?
You should do it the simplest way possible. If you are concerned about performance, run your application through a profiler and determine the bottlenecks. You might be very surprised to find that the hot spot is code you least expected, and that you had optimized other code for no reason. I always write the simplest code possible, as that is the easiest. See PrematureOptimization.
I don't see why there would be any pitfalls with a large number of timers. Are we talking about a dozen, or 100, or 10,000? If it's very high you could have issues. You could write a quick test to verify this.
As for which of those Timer classes to use: I don't want to steal the answer of someone who probably did much more research, so check out this answer to that question.
The first option simply isn't going to scale; you are going to need to do something else if you have a lot of concurrent timeouts. (If you don't know whether you have enough of them to be a problem, feel free to try using timers first and see whether you actually have a problem.)
That said, your second option needs a bit of tweaking. Rather than having a tight loop in a new thread, just create a single timer and set its interval (each time it fires) to the timespan between the current time and the "next" timeout; a sketch of this is shown below.
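A minimal sketch of that approach, assuming one System.Threading.Timer that is always re-armed to the earliest pending deadline (all names are illustrative; cancelling a timeout would simply remove its entry and re-arm):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class TimeoutScheduler
{
    private readonly SortedDictionary<DateTime, List<Action>> _pending =
        new SortedDictionary<DateTime, List<Action>>();
    private readonly object _lock = new object();
    private readonly Timer _timer;

    public TimeoutScheduler()
    {
        // One timer for all timeouts; it is re-armed to the earliest deadline.
        _timer = new Timer(_ => Fire(), null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Add(TimeSpan timeout, Action onTimedOut)
    {
        lock (_lock)
        {
            var due = DateTime.UtcNow + timeout;
            if (!_pending.TryGetValue(due, out var list))
                _pending[due] = list = new List<Action>();
            list.Add(onTimedOut);
            Rearm();
        }
    }

    private void Fire()
    {
        var expired = new List<Action>();
        lock (_lock)
        {
            var now = DateTime.UtcNow;
            foreach (var entry in _pending.Where(e => e.Key <= now).ToList())
            {
                expired.AddRange(entry.Value);
                _pending.Remove(entry.Key);
            }
            Rearm();
        }
        expired.ForEach(a => a()); // run the timeout callbacks outside the lock
    }

    private void Rearm()
    {
        if (_pending.Count == 0) return;
        var delay = _pending.Keys.First() - DateTime.UtcNow;
        if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
        _timer.Change(delay, Timeout.InfiniteTimeSpan);
    }
}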
Let me propose a different architecture: for each event, just create a new Task and send the request and wait1 for the response there.
The ~1000 tasks should scale just fine, as shown in this early demo. I suspect ~10000 tasks would still scale, but I haven't tested that myself.
1 Consider implementing the wait by attaching a continuation on Task.Delay (instead of just Thread.Sleep), to avoid under-subscription.
I think Task.Delay is a really good option. Here is test code for measuring how many concurrent tasks can be executed with different delay times. This code also calculates error statistics for the waiting-time accuracy.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

static async Task Wait(int delay, double[] errors, int index)
{
    var sw = new Stopwatch();
    sw.Start();
    await Task.Delay(delay);
    sw.Stop();
    errors[index] = Math.Abs(sw.ElapsedMilliseconds - delay);
}

static void Main(string[] args)
{
    var trial = 100000;
    var minDelay = 1000;
    var maxDelay = 5000;
    var errors = new double[trial];
    var tasks = new Task[trial];
    var rand = new Random();
    var sw = new Stopwatch();
    sw.Start();

    for (int i = 0; i < trial; i++)
    {
        var delay = rand.Next(minDelay, maxDelay);
        tasks[i] = Wait(delay, errors, i);
    }

    sw.Stop();
    Console.WriteLine($"{trial} tasks started in {sw.ElapsedMilliseconds} milliseconds.");

    Task.WaitAll(tasks);
    Console.WriteLine($"Avg Error: {errors.Average()}");
    Console.WriteLine($"Min Error: {errors.Min()}");
    Console.WriteLine($"Max Error: {errors.Max()}");
    Console.ReadLine();
}
You may change the parameters to see different results. Here are several results in milliseconds:
100000 tasks started in 9353 milliseconds.
Avg Error: 9.10898
Min Error: 0
Max Error: 110
I need to enqueue items into a Queue at roughly 4 to 8ms intervals.
Separately, my UI layer needs to dequeue, process, and display info from these items at roughly 33ms intervals (it may dequeue multiple times at that interval).
I'm not quite sure what combination of Timers and Queue I should use to get this working.
I think I should use the ConcurrentQueue class for the queue, but what timer mechanism should I use for the enqueueing and dequeuing?
UPDATE:
I ended up going with something like Brian Gideon's and Alberto's answers.
Without going into all the details here is what I did:
I used the following timer for both my 4 ms timer and my 33 ms timer. (http://www.codeproject.com/Articles/98346/Microsecond-and-Millisecond-NET-Timer)
My 4ms timer reads data from a high speed camera, does a small amount of processing and enqueues the data into a ConcurrentQueue.
My 33ms timer dequeues all items from the queue, does some more processing on each item and sends the data to another object that computes a rolling average over some given interval. (Queues are used to manage the rolling averages.)
Within the CompositionTarget.Rendering event, I grab the value(s) from the rolling average object and plot them on my custom line graph control.
I mentioned 33ms for the UI because this data is being fed into a realtime graph. 33ms is about 30 fps... anything slower than that and some smoothness is lost.
I did end up using the ConcurrentQueue as well. Works great.
CPU takes a bit of a hit. I think it's due to the high performance timers.
Thanks for the help everyone.
Those are some really tight timing requirements. I question the ~33ms value for the UI updates. The UI should not have to be updated any faster than a human can perceive it and even then that is probably overkill.
What I would do instead is to use a producer-consumer pipeline.
Producer -> Processor -> UI
In my primitive illustration above the Producer will do the step of generating the messages and queueing them. The processor will monitor this queue and do the non-UI related processing of the messages. After processing is complete it will generate messages with just enough information required to update the UI thread. Each step in this pipeline will run on a designated thread. I am assuming you do have a legitimate need for two distinct intervals (4ms and 33ms respectively). I am suggesting you add a 3rd for the UI. The polling intervals might be:
~4ms -> ~33ms -> 500ms
I used the tilde (~) intentionally to highlight the fact that low interval timings are very hard to achieve in .NET. You might be able to hit 33 ms occasionally, but the standard deviation over an arbitrary population of "ticks" will be very high using any of the timers built into the BCL. And, of course, 4 ms is out of the question.
You will need to experiment with multimedia timers or other HPET (high performance event timer) mechanisms. Some of these mechanisms use special hardware. If you go this route then you might get closer to that 4ms target. Do not expect miracles though. The CLR is going to stack the deck against you from the very beginning (garbage collection).
See Jim Mischel's answer here for a pretty good write up on some of your options.
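To make the middle stage of that pipeline concrete, here is a small sketch of a Processor that drains the producer's queue on its own thread and hands the UI only minimal update messages. The types, names and the BlockingCollection choice are illustrative assumptions, not part of the original suggestion:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Processor
{
    private readonly BlockingCollection<RawSample> _input;      // fed by the ~4 ms producer
    private readonly ConcurrentQueue<UiUpdate> _output = new ConcurrentQueue<UiUpdate>(); // drained by the UI timer

    public Processor(BlockingCollection<RawSample> input)
    {
        _input = input;
        // Dedicated long-running consumer so the ~4 ms producer never blocks on processing.
        Task.Factory.StartNew(Run, TaskCreationOptions.LongRunning);
    }

    private void Run()
    {
        foreach (var sample in _input.GetConsumingEnumerable())
        {
            // Do the non-UI work here, then hand the UI thread only what it needs to draw.
            _output.Enqueue(new UiUpdate { Value = sample.Value });
        }
    }

    public bool TryGetUpdate(out UiUpdate update) => _output.TryDequeue(out update);
}

class RawSample { public double Value; }
class UiUpdate  { public double Value; }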
You can use one DispatcherTimer to dequeue elements and publish them to the UI, and another Timer to enqueue.
For example:
class Producer
{
    private readonly Timer timer;                       // System.Threading.Timer
    public ConcurrentQueue<int> Queue { get; private set; }

    public Producer()
    {
        Queue = new ConcurrentQueue<int>();
        timer = new Timer(Callback, null, 0, 8);        // enqueue roughly every 8 ms
    }

    private void Callback(object state)
    {
        Queue.Enqueue(123);
    }
}

class Consumer
{
    private readonly Producer producer;
    private readonly DispatcherTimer timer;

    public Consumer(Producer p)
    {
        producer = p;
        timer = new DispatcherTimer();
        timer.Interval = TimeSpan.FromMilliseconds(33); // dequeue on the UI thread ~every 33 ms
        timer.Tick += new EventHandler(dispatcherTimer_Tick);
        timer.Start();
    }

    private void dispatcherTimer_Tick(object sender, EventArgs e)
    {
        int value;
        while (producer.Queue.TryDequeue(out value))    // drain everything since the last tick
        {
            // Update your UI here
        }
    }
}
Since you are dealing with the UI, you can use a DispatcherTimer for the UI side instead of the classic timers. This timer is designed specifically for interaction with the UI, so your queue should be able to enqueue/dequeue without any problem.
I want to call Thread.Sleep for less than 1 millisecond.
I read that neither Thread.Sleep nor the Windows OS supports that.
What's the solution for that?
For all those who wonder why I need this:
I'm doing a stress test, and want to know how many messages my module can handle per second.
So my code is:
// Set the portion of a second that will be allocated for each message.
// For example: 5 messages - every message will get 200 milliseconds.
var quantum = 1000 / numOfMessages;
for (var i = 0; i < numOfMessages; i++)
{
    _bus.Publish(new MyMessage());
    if (rate != 0)
        Thread.Sleep(quantum);
}
I'll be glad to get your opinion on that.
You can't do this. A single sleep call will typically block for far longer than a millisecond (it's OS and system dependent, but in my experience, Thread.Sleep(1) tends to block for somewhere between 12-15ms).
Windows, in general, is not designed as a real-time operating system. This type of control is typically impossible to achieve on normal (desktop/server) versions of Windows.
The closest you can get is typically to spin and eat CPU cycles until you've achieved the wait time you want (measured with a high performance counter). This, however, is pretty awful - you'll eat up an entire CPU, and even then, you'll likely get preempted by the OS at times and effectively "sleep" for longer than 1ms...
The code below will most definitely offer a more precise way of blocking than calling Thread.Sleep(x) (although this method blocks the thread rather than putting it to sleep). Below we use the Stopwatch class to measure how long we need to keep looping and blocking the calling thread.
using System.Diagnostics;

private static void NOP(double durationSeconds)
{
    var durationTicks = Math.Round(durationSeconds * Stopwatch.Frequency);
    var sw = Stopwatch.StartNew();

    while (sw.ElapsedTicks < durationTicks)
    {
    }
}

Example usage:

private static void Main()
{
    NOP(5); // Wait 5 seconds.
    Console.WriteLine("Hello World!");
    Console.ReadLine();
}
Why?
Usually there is a very limited number of CPUs and cores in one machine - you get just a small number of independent execution units.
On the other hand, there are many processes and even more threads. Each thread requires some processor time, which is assigned internally by the Windows scheduler. Usually Windows blocks all threads, gives a certain amount of CPU core time to particular threads, and then switches the context to other threads.
When you call Thread.Sleep, no matter how short the sleep, you give up the whole time slice Windows gave to the thread, since there is no reason to simply wait for it; the context is switched straight away. It can then take a few ms before Windows gives your thread some CPU again.
What to use?
Alternatively, you can spin your CPU. Spinning is not a terrible thing to do and can be very useful; it is used a lot, for example, in the System.Collections.Concurrent namespace with non-blocking collections, e.g.:
SpinWait sw = new SpinWait();
sw.SpinOnce();
Most of the legitimate reasons for using Thread.Sleep(1) or Thread.Sleep(0) involve fairly advanced thread synchronization techniques. Like Reed said, you will not get the desired resolution using conventional techniques. I do not know for sure what it is you are trying to accomplish, but I think I can assume that you want to cause an action to occur at 1 millisecond intervals. If that is the case then take a look at multimedia timers. They can provide resolution down to 1ms. Unfortunately, there is no API built into the .NET Framework (that I am aware of) that taps into this Windows feature. But you can use the interop layer to call directly into the Win32 APIs. There are even examples of doing this in C# out there.
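For illustration, a minimal sketch of what such an interop wrapper around the winmm.dll multimedia timer might look like. The class and delegate names are made up for this sketch, and the callback runs on a system-supplied thread, not the UI thread:

using System;
using System.Runtime.InteropServices;

class MultimediaTimer : IDisposable
{
    private delegate void TimeProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")]
    private static extern uint timeSetEvent(uint delay, uint resolution, TimeProc proc, UIntPtr user, uint eventType);

    [DllImport("winmm.dll")]
    private static extern uint timeKillEvent(uint id);

    private const uint TIME_PERIODIC = 0x0001;

    private readonly TimeProc _callback; // keep a reference so the GC can't collect the delegate
    private readonly Action _onTick;
    private readonly uint _timerId;

    public MultimediaTimer(uint intervalMs, Action onTick)
    {
        _onTick = onTick;
        _callback = (id, msg, user, dw1, dw2) => _onTick();
        _timerId = timeSetEvent(intervalMs, 0, _callback, UIntPtr.Zero, TIME_PERIODIC);
        if (_timerId == 0)
            throw new InvalidOperationException("timeSetEvent failed");
    }

    public void Dispose() => timeKillEvent(_timerId);
}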
In the good old days, you would use the "QueryPerformanceCounter" API of Win32 when sub-millisecond resolution was needed.
There seems to be more info on the subject over on Code-Project: http://www.codeproject.com/KB/cs/highperformancetimercshar.aspx
This won't allow you to "Sleep()" with the same resolution, as pointed out by Reed Copsey.
Edit:
As pointed out by Reed Copsey and Brian Gideon, QueryPerformanceCounter has effectively been superseded by the Stopwatch class in .NET.
I was looking for the same thing as the OP, and managed to find an answer that works for me. I'm surprised that none of the other answers mentioned this.
When you call Thread.Sleep(), you can use one of two overloads: an int with the number of milliseconds, or a TimeSpan.
A TimeSpan's constructor, in turn, has a number of overloads. One of them takes a single long denoting the number of ticks the TimeSpan represents. One tick is a lot less than 1 ms; in fact, another part of the TimeSpan docs gives an example of 10,000 ticks happening in 1 ms.
Therefore, I think the closest answer to the question is that if you want to Thread.Sleep for less than 1 ms, you would create a TimeSpan with less than 1 ms worth of ticks, then pass that to Thread.Sleep().