I need to enqueue items into a Queue at roughly 4 to 8ms intervals.
Separately, my UI layer needs to dequeue, process, and display info from these items at roughly 33ms intervals (it may dequeue multiple times at that interval).
I'm not quite sure what combination of Timers and Queue I should use to get this working.
I think I should use the ConcurrentQueue class for the queue, but what timer mechanism should I use for the enqueueing and dequeuing?
UPDATE:
I ended up going with something like Brian Gideon's and Alberto's answers.
Without going into all the details here is what I did:
I used the following timer for both my 4ms timer and my 33ms timer. (http://www.codeproject.com/Articles/98346/Microsecond-and-Millisecond-NET-Timer)
My 4ms timer reads data from a high speed camera, does a small amount of processing and enqueues the data into a ConcurrentQueue.
My 33ms timer dequeues all items from the queue, does some more processing on each item and sends the data to another object that computes a rolling average over some given interval. (Queues are used to manage the rolling averages.)
Within the CompositionTarget.Rendering event, I grab the value(s) from the rolling average object and plot them on my custom line graph control.
I mentioned 33ms for the UI because this data is being fed into a realtime graph. 33ms is about 30 fps... anything slower than that and some smoothness is lost.
I did end up using the ConcurrentQueue as well. Works great.
CPU takes a bit of a hit. I think it's due to the high performance timers.
Thanks for the help everyone.
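For anyone curious, the rolling-average object is conceptually just a queue of recent samples. A stripped-down sketch (not my exact code; it is count-based rather than interval-based, and synchronization between the timer thread and the render thread is omitted):

using System.Collections.Generic;

class RollingAverage
{
    private readonly Queue<double> samples = new Queue<double>();
    private readonly int maxSamples;
    private double sum;

    public RollingAverage(int maxSamples)
    {
        this.maxSamples = maxSamples;
    }

    // Called from the 33ms timer for each dequeued item.
    public void Add(double value)
    {
        samples.Enqueue(value);
        sum += value;
        if (samples.Count > maxSamples)
            sum -= samples.Dequeue();   // drop the oldest sample
    }

    // Read from CompositionTarget.Rendering to plot the current value.
    public double Current
    {
        get { return samples.Count == 0 ? 0.0 : sum / samples.Count; }
    }
}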
Those are some really tight timing requirements. I question the ~33ms value for the UI updates. The UI should not have to be updated any faster than a human can perceive it and even then that is probably overkill.
What I would do instead is to use a producer-consumer pipeline.
Producer -> Processor -> UI
In my primitive illustration above the Producer will do the step of generating the messages and queueing them. The processor will monitor this queue and do the non-UI related processing of the messages. After processing is complete it will generate messages with just enough information required to update the UI thread. Each step in this pipeline will run on a designated thread. I am assuming you do have a legitimate need for two distinct intervals (4ms and 33ms respectively). I am suggesting you add a 3rd for the UI. The polling intervals might be:
~4ms -> ~33ms -> 500ms
I used the tilde (~) intentionally to highlight the fact that lower interval timings are very hard to achieve in .NET. You might be able to hit 33ms occasionally, but the standard deviation on an arbitrary population of "ticks" will be very high using any of the timers built into the BCL. And, of course, 4ms is out of the question.
You will need to experiment with multimedia timers or other HPET (high performance event timer) mechanisms. Some of these mechanisms use special hardware. If you go this route then you might get closer to that 4ms target. Do not expect miracles though. The CLR is going to stack the deck against you from the very beginning (garbage collection).
See Jim Mischel's answer here for a pretty good write up on some of your options.
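To make the shape concrete, here is a minimal sketch of that pipeline using BlockingCollection for the hand-offs. Thread.Sleep stands in for whatever high-resolution timing mechanism you end up using, and the item types are placeholders:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class PipelineSketch
{
    static void Main()
    {
        var rawItems = new BlockingCollection<int>();
        var uiMessages = new BlockingCollection<string>();

        // Producer stage (~4ms): generate messages and queue them.
        var producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 500; i++)
            {
                rawItems.Add(i);
                Thread.Sleep(4);            // stand-in for the acquisition timer
            }
            rawItems.CompleteAdding();
        }, TaskCreationOptions.LongRunning);

        // Processor stage (~33ms): non-UI processing, then post a compact UI message.
        var processor = Task.Factory.StartNew(() =>
        {
            foreach (var item in rawItems.GetConsumingEnumerable())
                uiMessages.Add("value=" + item);
            uiMessages.CompleteAdding();
        }, TaskCreationOptions.LongRunning);

        // UI stage (~500ms in a real app): here we just drain to the console.
        foreach (var msg in uiMessages.GetConsumingEnumerable())
            Console.WriteLine(msg);

        Task.WaitAll(producer, processor);
    }
}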
You can use one DispatcherTimer to dequeue elements and publish them to the UI, and another Timer to enqueue.
For example:
class Producer
{
    public readonly Timer timer;
    public ConcurrentQueue<int> Queue { get; private set; }

    public Producer()
    {
        // System.Threading.Timer: first callback immediately, then every 8 ms.
        timer = new Timer(Callback, null, 0, 8);
        Queue = new ConcurrentQueue<int>();
    }

    private void Callback(object state)
    {
        Queue.Enqueue(123);
    }
}
class Consumer
{
    private readonly Producer producer;
    private readonly DispatcherTimer timer;

    public Consumer(Producer p)
    {
        producer = p;
        timer = new DispatcherTimer();
        timer.Interval = TimeSpan.FromMilliseconds(33);
        timer.Tick += new EventHandler(dispatcherTimer_Tick);
        timer.Start();
    }

    private void dispatcherTimer_Tick(object sender, EventArgs e)
    {
        int value;
        // Drain everything that accumulated since the last tick.
        while (producer.Queue.TryDequeue(out value))
        {
            // Update your UI here
        }
    }
}
Since you are dealing with the UI, you could use a couple of DispatcherTimer instances instead of the classic timers. That timer is designed specifically for interaction with the UI, so your queue should be able to enqueue/dequeue without any problem.
I need to write a component that receives an event (the event has a unique ID). Each event requires me to send out a request. The event specifies a timeout period, which is how long to wait for a response to the request.
If the response comes before the timer fires, great, I cancel the timer.
If the timer fires first, then the request timed out, and I want to move on.
This timeout period is specified in the event, so it's not constant.
The expected timeout period is in the range of 30 seconds to 5 minutes.
I can see two ways of implementing this.
Create a timer for each event and put it into a dictionary linking the event to the timer.
Create an ordered list containing the DateTime of the timeout, and a new thread looping every 100ms to check if something timed out.
Option 1 would seem like the easiest solution, but I'm afraid that creating so many timers might not be a good idea because timers might be too expensive. Are there any pitfalls when creating a large number of timers? I suspect that in the background, the timer implementation might actually be an efficient implementation of Option 2. If this option is a good idea, which timer should I use: System.Timers.Timer or System.Threading.Timer?
Option 2 seems like more work, and may not be an efficient solution compared to Option 1.
Update
The maximum number of timers I expect is in the range of 10000, but more likely in the range of 100. Also, the normal case would be the timer being canceled before firing.
Update 2
I ran a test using 10K instances of System.Threading.Timer and System.Timers.Timer, keeping an eye on thread count and memory. System.Threading.Timer seems to be "lighter" than System.Timers.Timer judging by memory usage, and neither timer created an excessive number of threads (i.e. thread pooling is working properly). So I decided to go ahead and use System.Threading.Timer.
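A simplified sketch of the kind of test I mean (not the exact code; the System.Timers.Timer run was analogous):

using System;
using System.Diagnostics;
using System.Threading;

class TimerFootprintTest
{
    static void Main()
    {
        const int count = 10000;
        var timers = new System.Threading.Timer[count];

        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < count; i++)
        {
            // Each timer is armed to fire once, far in the future (and would
            // normally be cancelled/disposed before it ever does).
            timers[i] = new System.Threading.Timer(_ => { }, null, 300000, Timeout.Infinite);
        }
        long after = GC.GetTotalMemory(true);

        Console.WriteLine("Approx. bytes per timer: {0}", (after - before) / count);
        Console.WriteLine("Threads in process: {0}", Process.GetCurrentProcess().Threads.Count);
        Console.ReadLine();   // keep the process alive while inspecting it

        GC.KeepAlive(timers);
    }
}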
I do this a lot in embedded systems (pure C), where I can't burn a lot of resources (e.g. 4 KB of RAM is the entire system memory). This is one approach that has been used (successfully):
Create a single system timer (interrupt) that goes off on a periodic basis (e.g. every 10 ms).
A "timer" is an entry in a dynamic list that indicates how many "ticks" are left till the timer goes off.
Each time the system timer goes off, iterate the list and decrement each of the "timers". Each one that is zero is "fired". Remove it from the list and do whatever the timer was supposed to do.
What happens when the timer goes off depends on the application. It may be that a state machine gets run. It may be that a function gets called. It may be an enumeration telling the execution code what to do with the parameter sent in the "Create Timer" call. The information in the timer structure is whatever is necessary in the context of the design. The "tick count" is the secret sauce.
We also have created this returning an "ID" for the timer (usually the address of the timer structure, which is drawn from a pool) so it can be cancelled or status on it can be obtained.
Convenience functions convert "seconds" to "ticks" so the API of creating the timers is always in terms of "seconds" or "milliseconds".
You set the "tick" interval to a reasonable value for granularity tradeoff.
I have done other implementations of this in C++, C#, and Objective-C, with little change in the general approach. It is a very general timer subsystem design/architecture. You just need something to create the fundamental "tick".
I even did it once with a tight "main" loop and a stopwatch from the high-precision internal timer to create my own "simulated" tick when I did not have a timer. I do not recommend this approach; I was simulating hardware in a straight console app and did not have access to the system timers, so it was a bit of an extreme case.
Iterating over a list of hundreds of timers 10 times a second is not that big a deal on a modern processor. There are ways you can overcome this as well, by inserting the items with "delta seconds" and putting them into the list in sorted order. That way you only have to check the ones at the front of the list. This gets you past scaling issues, at least in terms of iterating the list.
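A bare-bones C# rendering of that design, just to show the moving parts (the 10 ms tick source here is a System.Threading.Timer; in the embedded case it was a hardware interrupt):

using System;
using System.Collections.Generic;
using System.Threading;

class TickTimerService
{
    private class Entry
    {
        public int TicksRemaining;   // the "secret sauce": ticks left until it fires
        public Action OnFired;       // whatever the timer was supposed to do
    }

    private const int TickMs = 10;
    private readonly List<Entry> entries = new List<Entry>();
    private readonly object sync = new object();
    private readonly Timer tickSource;

    public TickTimerService()
    {
        // The single periodic "system tick".
        tickSource = new Timer(_ => Tick(), null, TickMs, TickMs);
    }

    // Convenience API in milliseconds; converted to ticks internally.
    // The returned handle can be used to cancel the timer.
    public object Create(int milliseconds, Action onFired)
    {
        var e = new Entry { TicksRemaining = milliseconds / TickMs, OnFired = onFired };
        lock (sync) entries.Add(e);
        return e;
    }

    public void Cancel(object handle)
    {
        lock (sync) entries.Remove((Entry)handle);
    }

    private void Tick()
    {
        var fired = new List<Entry>();
        lock (sync)
        {
            // Decrement every entry; the ones that reach zero are removed and fired.
            for (int i = entries.Count - 1; i >= 0; i--)
            {
                if (--entries[i].TicksRemaining <= 0)
                {
                    fired.Add(entries[i]);
                    entries.RemoveAt(i);
                }
            }
        }
        foreach (var e in fired)
            e.OnFired();
    }
}

Returning the entry itself as the handle mirrors the "address of the timer structure" trick described above.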
Was this helpful?
You should do it the simplest way possible. If you are concerned about performance, you should run your application through a profiler and determine the bottlenecks. You might be very surprised to find out it was some code which you least expected, and you had optimized your code for no reason. I always write the simplest code possible as this is the easiest. See PrematureOptimization
I don't see why there would be any pitfalls with a large number of timers. Are we talking about a dozen, or 100, or 10,000? If it's very high you could have issues. You could write a quick test to verify this.
As for which of those Timer classes to use: I don't want to steal the answer of someone else who probably did much more research: check out this answer to that question.
The first option simply isn't going to scale; you are going to need to do something else if you have a lot of concurrent timeouts. (If you don't know whether the number you have is enough to be a problem, feel free to try using timers to see if you actually have a problem.)
That said, your second option would need a bit of tweaking. Rather than having a tight loop in a new thread, just create a single timer and set its interval (each time it fires) to be the timespan between the current time and the "next" timeout time.
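Roughly, that single-timer version might look like this (a sketch only: the names are made up, and a plain List plus Min is used where a sorted structure would scale better):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class TimeoutScheduler
{
    private class Entry { public DateTime Due; public Action OnTimedOut; }

    private readonly List<Entry> pending = new List<Entry>();
    private readonly object sync = new object();
    private readonly Timer timer;

    public TimeoutScheduler()
    {
        // One shared timer; it stays idle until something is scheduled.
        timer = new Timer(_ => Fire(), null, Timeout.Infinite, Timeout.Infinite);
    }

    public object Add(TimeSpan timeout, Action onTimedOut)
    {
        var e = new Entry { Due = DateTime.UtcNow + timeout, OnTimedOut = onTimedOut };
        lock (sync) { pending.Add(e); Reschedule(); }
        return e;   // handle used to cancel when the response arrives in time
    }

    public void Cancel(object handle)
    {
        lock (sync) { pending.Remove((Entry)handle); Reschedule(); }
    }

    private void Fire()
    {
        Entry[] expired;
        lock (sync)
        {
            var now = DateTime.UtcNow;
            expired = pending.Where(e => e.Due <= now).ToArray();
            foreach (var e in expired) pending.Remove(e);
            Reschedule();
        }
        foreach (var e in expired) e.OnTimedOut();
    }

    // Always called under the lock: aim the timer at the earliest due time.
    private void Reschedule()
    {
        if (pending.Count == 0)
        {
            timer.Change(Timeout.Infinite, Timeout.Infinite);
            return;
        }
        var delay = pending.Min(e => e.Due) - DateTime.UtcNow;
        timer.Change((int)Math.Max(0.0, delay.TotalMilliseconds), Timeout.Infinite);
    }
}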
Let me propose a different architecture: for each event, just create a new Task and send the request and wait[1] for the response there.
The ~1000 tasks should scale just fine, as shown in this early demo. I suspect ~10000 tasks would still scale, but I haven't tested that myself.
[1] Consider implementing the wait by attaching a continuation on Task.Delay (instead of just Thread.Sleep), to avoid under-subscription.
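A sketch of what each per-event task could look like, racing the response against Task.Delay (SendRequestAsync is a made-up placeholder for however the request/response is actually exposed):

using System;
using System.Threading.Tasks;

static class EventTimeoutSketch
{
    static void Main()
    {
        HandleEvent(Guid.NewGuid(), TimeSpan.FromSeconds(30)).Wait();
    }

    static async Task HandleEvent(Guid eventId, TimeSpan timeout)
    {
        Task<string> response = SendRequestAsync(eventId);

        // Task.Delay (not Thread.Sleep) keeps the thread free while waiting.
        Task winner = await Task.WhenAny(response, Task.Delay(timeout));

        if (winner == response)
            Console.WriteLine("Response in time: " + await response);
        else
            Console.WriteLine("Event {0} timed out; moving on.", eventId);
    }

    // Placeholder: pretend the response arrives after two seconds.
    static Task<string> SendRequestAsync(Guid eventId)
    {
        return Task.Delay(TimeSpan.FromSeconds(2))
                   .ContinueWith(_ => "response for " + eventId);
    }
}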
I think Task.Delay is a really good option. Here is the test code for measuring how many concurrent tasks can be executed in different delay times. This code is also calculating error statistics for waiting time accuracy.
static async Task Wait(int delay, double[] errors, int index)
{
    var sw = new Stopwatch();
    sw.Start();
    await Task.Delay(delay);
    sw.Stop();
    errors[index] = Math.Abs(sw.ElapsedMilliseconds - delay);
}

static void Main(string[] args)
{
    var trial = 100000;
    var minDelay = 1000;
    var maxDelay = 5000;
    var errors = new double[trial];
    var tasks = new Task[trial];
    var rand = new Random();
    var sw = new Stopwatch();
    sw.Start();
    for (int i = 0; i < trial; i++)
    {
        var delay = rand.Next(minDelay, maxDelay);
        tasks[i] = Wait(delay, errors, i);
    }
    sw.Stop();
    Console.WriteLine($"{trial} tasks started in {sw.ElapsedMilliseconds} milliseconds.");
    Task.WaitAll(tasks);
    Console.WriteLine($"Avg Error: {errors.Average()}");
    Console.WriteLine($"Min Error: {errors.Min()}");
    Console.WriteLine($"Max Error: {errors.Max()}");
    Console.ReadLine();
}
You may change the parameters to see different results. Here are several results in milliseconds:
100000 tasks started in 9353 milliseconds.
Avg Error: 9.10898
Min Error: 0
Max Error: 110
Recently I was looking for a solution for my Windows application, which was causing improper logging to a file.
I came to know that the issue is the resolution of Windows DateTime.Now.
So I am trying to avoid that. I created a sample application based on hints given in the forum answers.
Here's the sample code and its output:
private Timer tmr = new Timer();
private Stopwatch ss;

void tmr_Tick(object sender, EventArgs e)
{
    WriteData();
}

private void button1_Click(object sender, EventArgs e)
{
    ss = new Stopwatch();
    ss.Start();
    tmr.Interval = 1;
    tmr.Tick += new EventHandler(tmr_Tick);
    tmr.Start();
    WriteData();
}

void WriteData()
{
    Console.WriteLine(ss.ElapsedMilliseconds);
}
Output:
0
19
29
49
69
79
99
109
122
142
162
172
192
202
222
232
252
282
272
294
314
334
341
How can I get more accuracy in this code?
tmr.Interval = 1;
That's a small number; there is some virtue in not being able to make it smaller. But no, you are not going to get a timer to tick a thousand times per second. There are three factors that determine the actual rate at which the Timer fires the Tick event:
how busy your UI thread is doing other things. Like running a Click event handler or painting the window. The Tick event handler can only run when the UI thread isn't busy with anything; it must be idle. This is of course a highly variable number; it matters a great deal what the user of your app is doing. Some things he can do are pretty expensive, like resizing the window of your app. Typical UI thread duties, like painting, take a handful of milliseconds. Quick enough for the human eye, but quite noticeable if you require that Tick event to run so frequently. It just isn't going to run on time; the delay could be hundreds of milliseconds if your UI is convoluted. You'll need a different kind of Timer class to avoid this, either System.Threading.Timer or System.Timers.Timer. They are timer classes that run their code on a thread-pool thread. With the advantage that it won't be delayed by what's going on in the UI thread. And the significant disadvantage that you have to be very careful what you do in that code; you certainly can't update the UI.
how busy the machine is doing other things. Like executing the thread of another process or device driver. This is a primary task of the operating system, allowing hundreds of threads to share the processor. The thread scheduler in Windows determines when a thread gets a chance to run. Once it gains the processor and executes, it is allowed to run for a while, a period called the quantum. A typical quantum is 45 milliseconds for a thread owned by a window in the foreground. Other threads are suspended until they get a chance to run. This of course spells doom to your plan to run a Tick event a thousand times per second. You'll need to carefully control what other programs run on the machine, a many-core processor will help a great deal.
how often Windows updates a timer. The core operating system feature that's involved with that is the clock interrupt. Used for many things, it is the basic heartbeat of the operating system. The normal state of the processor is to not be doing anything; it is turned off. The clock tick interrupt wakes it up from the halt state and the operating system checks if any work needs to be performed. The default interrupt rate is 64 times per second, once every 15.625 milliseconds. This will also affect the timer's accuracy; nothing can happen when the processor is halted. So by design, you will never get a 1 msec rate; it can never be less than 16 msec.
The last bullet is certainly the biggest hangup. You can in fact change the interrupt rate in your program; it requires a pinvoke call to the timeBeginPeriod() winapi function. This answer shows how. It also mentions the timeSetEvent() function, the one you'd need to get anywhere close to a timer that has a reasonable guarantee of being able to sustain a 1 millisecond rate.
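For completeness, the timeBeginPeriod() call is a one-line pinvoke, something along these lines, always paired with timeEndPeriod() so you don't leave the machine's interrupt rate raised:

using System;
using System.Runtime.InteropServices;

static class MediaTimerPeriod
{
    // Exported by winmm.dll; the period is in milliseconds, 1 is the usual minimum.
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uPeriod);

    public static void RunWithOneMsClock(Action work)
    {
        timeBeginPeriod(1);            // raise the clock interrupt rate to ~1 msec
        try { work(); }
        finally { timeEndPeriod(1); }  // restore the default ~15.625 msec rate
    }
}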
Stopwatch uses the Windows Performance Counter APIs under the covers so it should be pretty accurate (the Performance Counter APIs are quoted as having a resolution in the low microsecond range).
Your test is invalid, though, because you are actually measuring the drift in the Timer rather than any inaccuracy in the Stopwatch.
On the subject of the Timer - it looks like you are using the System.Windows.Forms.Timer class in your sample. That's not a great choice for accuracy; its sole advantage is that it raises the Tick event on the UI thread. Try the System.Threading.Timer class instead - you may find it gives you results closer to what you expect (although it still suffers from some drift).
var timerCallback = (TimerCallback)(sw => Console.WriteLine(((Stopwatch)sw).Elapsed));
var timerInterval = TimeSpan.FromMilliseconds(1);
var stopwatch = Stopwatch.StartNew();
var timer = new Timer(timerCallback, stopwatch, timerInterval, timerInterval);
Ultimately it's pretty difficult to test the accuracy of the Stopwatch because there isn't anything else readily available that is more accurate!
Is it efficient to
SpinWait.SpinUntil(() => myPredicate(), 10000)
for a timeout of 10000ms
or
Is it more efficient to use Thread.Sleep polling for the same condition
For example something along the lines of the following SleepWait function:
public bool SleepWait(int timeOut)
{
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    while (!myPredicate() && stopwatch.ElapsedMilliseconds < timeOut)
    {
        Thread.Sleep(50);
    }
    return myPredicate();
}
I'm concerned that all the yielding of SpinWait may not be a good usage pattern if we are talking about timeouts over 1 second. Is that a valid assumption?
Which approach do you prefer and why? Is there another even better approach?
Update - Becoming more specific:
Is there a way to make BlockingCollection pulse a sleeping thread when it reaches its bounded capacity? I'd rather avoid busy waits altogether, as Marc Gravell suggests.
In .NET 4, SpinWait performs CPU-intensive spinning for 10 iterations before yielding. But it does not return to the caller immediately after each of those cycles; instead, it calls Thread.SpinWait to spin via the CLR (essentially the OS) for a set time period. This time period is initially a few tens of nanoseconds, but doubles with each iteration until the 10 iterations are complete. This enables clarity/predictability in the total time spent in the spinning (CPU-intensive) phase, which the system can tune according to conditions (number of cores, etc.). If SpinWait remains in the spin-yielding phase for too long it will periodically sleep to allow other threads to proceed (see J. Albahari's blog for more information). This process is guaranteed to keep a core busy...
So, SpinWait limits the CPU-intensive spinning to a set number of iterations, after which it yields its time slice on every spin (by actually calling Thread.Yield and Thread.Sleep), lowering its resource consumption. It will also detect if the user is running a single core machine and yield on every cycle if that is the case.
With Thread.Sleep the thread is blocked. But this process will not be as expensive as the above in terms of CPU.
The best approach is to have some mechanism to actively detect the thing becoming true (rather than passively polling for it having become true); this could be any kind of wait-handle, or maybe a Task with Wait, or maybe an event that you can subscribe to to unstick yourself. Of course, if you do that kind of "wait until something happens", that is still not as efficient as simply having the next bit of work done as a callback, meaning: you don't need to use a thread to wait. Task has ContinueWith for this, or you can just do the work in an event when it gets fired. The event is probably the simplest approach, depending on the context. Task, however, already provides most-everything you are talking about here, including both "wait with timeout" and "callback" mechanisms.
And yes, spinning for 10 seconds is not great. If you want to use something like your current code, and if you have reason to expect a short delay, but need to allow for a longer one - maybe SpinWait for (say) 20ms, and use Sleep for the rest?
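For example, the hybrid might look something like this (the numbers are illustrative; requires System.Diagnostics and System.Threading):

public bool SpinThenSleepWait(Func<bool> predicate, int timeoutMs)
{
    // Spin hard only for the first few milliseconds, in case the condition
    // becomes true almost immediately...
    if (SpinWait.SpinUntil(predicate, 20))
        return true;

    // ...then fall back to cheap, coarse polling for the rest of the timeout.
    var sw = Stopwatch.StartNew();
    while (sw.ElapsedMilliseconds < timeoutMs - 20)
    {
        if (predicate())
            return true;
        Thread.Sleep(50);
    }
    return predicate();
}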
Re the comment; here's how I'd hook an "is it full" mechanism:
private readonly object syncLock = new object();

public bool WaitUntilFull(int timeout) {
    if (CollectionIsFull) return true; // I'm assuming we can call this safely
    lock (syncLock) {
        if (CollectionIsFull) return true;
        return Monitor.Wait(syncLock, timeout);
    }
}
with, in the "put back into the collection" code:
if (CollectionIsFull) {
    lock (syncLock) {
        if (CollectionIsFull) { // double-check with the lock
            Monitor.PulseAll(syncLock);
        }
    }
}
I'm creating a "man-in-the-middle" style application that applies network latency to transmissions (not for malicious use, I should declare).
However I'm having difficulty with the correct output mechanisms on the data structure (LinkedList<string> buffer = new LinkedList<string>();).
What should happen:
Read data into structure from clientA.
if (buffer.First != null && buffer.Last != null)
{
    buffer.AddAfter(buffer.Last, ServerRead.ReadLine().ToString());
}
else
{
    buffer.AddFirst(ServerRead.ReadLine().ToString());
}
Using an individual or overall timer to track when to release the data to ClientB. (adjustable timer to adjust latency)
Timer on item in structure triggers, thus releasing the packet to clientB.
Clean up free data structure node
if (buffer.First != null)
{
    clientWrite.WriteLine(buffer.First.Value.ToString());
    clientWrite.Flush();
    buffer.RemoveFirst();
}
I have been trying to use System.Windows.Forms.Timer to create a global timer that triggers a thread which handles the data output to clientB. However, I'm finding this technique too slow, even when setting myTimer.Interval = 1. It also creates a concurrency problem when clearing the list and adding to it; the temporary solution is locking the resource, but I feel this adds to the slow performance of the data output.
Question:
I need some ideas for a solution that stores data in a data structure and applies a timer (like an egg timer) to each item stored; when that timer runs out, the item is sent on its way to the other client.
Regards, House.
The linked list will work, and it's unlikely that locking it (if done properly) will cause poor performance. But you'd probably be better off using ConcurrentQueue. It's thread-safe, so you don't have to do any explicit locking.
I would suggest using System.Threading.Timer rather than the Windows Forms timer. Note, though, that you're still going to be limited to about 15 ms resolution. That is, even with a timer interval of 1, your effective delay times will be in the range of 15 to 25 ms rather than 1 ms. It's just the way the timers are implemented.
Also, since you want to delay each item for a specified period of time (which I assume is constant), you need some notion of "current time." I don't recommend using DateTime.Now or any of its variants, because the time can change. Rather, I use Stopwatch to get an application-specific time.
Also, you'll need some way to keep track of release times for the items. A class to hold the item, and the time it will be sent. Something like:
class BufferItem
{
    public string Data { get; private set; }
    public TimeSpan ReleaseTime { get; private set; }

    public BufferItem(string d, TimeSpan ts)
    {
        Data = d;
        ReleaseTime = ts;
    }
}
Okay. Let's put it all together.
// the application clock
Stopwatch AppTime = Stopwatch.StartNew();
// Amount of time to delay an item
TimeSpan DelayTime = TimeSpan.FromSeconds(1.0);
ConcurrentQueue<BufferItem> ItemQueue = new ConcurrentQueue<BufferItem>();
// Timer will check items for release every 15 ms.
System.Threading.Timer ReleaseTimer = new System.Threading.Timer(CheckRelease, null, 15, 15);
Receiving an item:
// When an item is received:
// Compute release time and add item to buffer.
var item = new BufferItem(data, AppTime.Elapsed + DelayTime);
ItemQueue.Enqueue(item);
The timer proc.
void CheckRelease(object state)
{
    BufferItem item;
    // Release every item whose scheduled time has passed.
    while (ItemQueue.TryPeek(out item) && item.ReleaseTime <= AppTime.Elapsed)
    {
        if (ItemQueue.TryDequeue(out item))
        {
            // send the item
        }
    }
}
That should perform well and you shouldn't have any concurrency issues.
If you don't like that 15 ms timer ticking all the time even when there aren't any items, you could make the timer a one-shot and have the CheckRelease method re-initialize it with the next release time after dequeuing items. Of course, you'll also have to make the receive code initialize it the first time, or when there aren't any items in the queue. You'll need a lock to synchronize access to the timer updates.
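Roughly, the end of that one-shot version of CheckRelease would do something like this (the lock around the timer update is omitted here for brevity):

// After releasing everything that's due, aim the one-shot timer at the
// next item's release time, or park it if the queue is empty.
BufferItem next;
if (ItemQueue.TryPeek(out next))
{
    var delay = next.ReleaseTime - AppTime.Elapsed;
    if (delay < TimeSpan.Zero)
        delay = TimeSpan.Zero;
    ReleaseTimer.Change(delay, TimeSpan.FromMilliseconds(-1));   // fire once
}
else
{
    ReleaseTimer.Change(Timeout.Infinite, Timeout.Infinite);     // idle until the next receive
}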
I have a fairly process-intensive method that takes a given collection, copies the items (the Item class has its Copy() method properly defined), populates each item with data, and returns the populated collection to the class's collection property:
//populate Collection containing 40 items
MyClass.CollectionOfItems = GetPopulatedCollection(MyClass.CollectionOfItems );
This method is called in two ways: upon request and via a System.Timers.Timer object's 'Elapsed' event.
Now, 40 items in the collection take almost no time at all, whether populated 'ad hoc' by, say, a button_click or by the Timer object.
When I increase the size of the collection (another MyClass object that has 1000 items), the process predictably takes longer, around 6 seconds in total.
That's fine, no problems there. Whether called upon initialization (form_load) or ad hoc (button_click), it stays around 6 seconds.
//populate Collection containing 1000 items
MyClass.CollectionOfItems = GetPopulatedCollection(MyClass.CollectionOfItems );
But the SAME METHOD (as in the exact same line of code), when called by the System.Timers.Timer object's Elapsed event, takes around 60 seconds (other runs include 56 sec, 1 min 2 sec, 1 min 10 sec... you get the idea).
Ten times as long for the same process!
I know the System.Timers.Timer object's Elapsed event executes on the thread pool. Could this be the reason? Is the thread pool given lower priority, or is the whole queuing thing taking up the time?
In short, what would be a better approach to this? Using the System.Windows.Forms.Timer to execute in the same UI thread?
Thanks!
Ok, some additional info:
The timer operation is occurring within a DLL being called by the UI. The main 'handler' class itself has a collection of timer objects all subscribing to the same event handler.
The handler class' initialization works kinda like this:
UpdateIntervalTimer tmr = new UpdateIntervalTimer(indexPosition);
tmr.Interval = MyClass.UpdateInterval * 60000; //Time in minutes
tmr.Elapsed += new System.Timers.ElapsedEventHandler(tmr_Elapsed);
this.listIntervalTimers.Add(tmr);
I've actually inherited the Timer class to give it an 'index' property (the EventArgs as well). That way, within one event handler (tmr_Elapsed) I can identify which MyClass object the timer is for and take action.
The handler class is already running in a thread of its own and fires a custom event to give insight to its operations.
The event is handled in the UI (cross-thread access of UI controls and whatnot) and displayed with the time the event was received. This is true for both 'initialization' and 'ad hoc' calls (there is no problem in those cases).
The actual Elapsed event looks as follows:
private void tmr_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    UpdateIntervalTimer tmr = (UpdateIntervalTimer)sender;
    MyClass c = listOfClasses[tmr.IndexPosition];

    observerEventArguments = new MyHandlerEventArgs("Timer is updating data for " + c.ID);
    MessagePosted(this, observerEventArguments);

    try
    {
        //preparation related code
        c.CollectionOfItems = GetPopulatedCollection(c.CollectionOfItems);

        observerEventArguments = new MyHandlerEventArgs(c.ID + ": Data successfully updated");
        MessagePosted(this, observerEventArguments);
    }
    catch (Exception exUpdateData)
    {
        observerEventArguments = new MyHandlerEventArgs("There was an error updating the data for '" + c.ID + "': " + exUpdateData.Message);
        MessagePosted(this, observerEventArguments);
    }
}
Well, the UI thread is likely to have a higher priority; after all, it's meant to keep the UI responsive. However, there are other things that could be going on. Does your method access the UI in a thread-safe manner? If so, obviously it's going to be faster when it doesn't need to marshal between threads.
You could try boosting the priority of the thread-pool thread to see if that improves performance - but apart from that, we'll need more info.
I wouldn't advise you to do this on the UI thread - hanging the UI for 6 seconds doesn't make for a nice user experience :(
Is the interval elapsing while you're still doing work in the timer handler, causing it to do the same work multiple times? That's the only reason I can think of that the UI timer would work faster than System.Timers.Timer/System.Threading.Timer, since the UI timer is single-threaded and cannot elapse again until its handler has finished, while the others can.
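If that turns out to be the case, one easy way to rule it out with System.Timers.Timer is to make it non-reentrant: set AutoReset to false and re-arm the timer yourself once the work has finished, for example:

// At setup time, after creating the timer:
tmr.AutoReset = false;   // fire once per Start(); no overlapping Elapsed calls

private void tmr_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    var timer = (UpdateIntervalTimer)sender;
    try
    {
        // ... the existing GetPopulatedCollection work ...
    }
    finally
    {
        timer.Start();   // re-arm only after the work has completed
    }
}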