System.Threading.Timer expiring hundreds of milliseconds late - C#

I'm using a System.Threading.Timer which is configured to expire in 100ms. Typically it calls the callback method within 10ms of the expected time; however, the callback is frequently invoked up to 500ms late. By frequently, I mean around 25% of the time.
Can anybody explain this?

Well, 500 msec is in fact a magic number in a .NET program. That's how often the threadpool manager considers adding another thread to the pool because the existing ones appear to be stuck and not making progress. The Timer callback is made on a tp thread.
So, sight unseen, a conclusion you can draw is that the callback cannot run soon enough because you have far too many threadpool threads active in your program; the timer callback only gets a chance to run when the threadpool manager forcibly adds another thread to the pool.
If accurate, this is pretty unhealthy. It could be that you have a lot of threadpool threads that are each burning 100% of a core; that is easy to see from Task Manager, where CPU usage will be completely pegged at 100%. But far more common is that they are not being used effectively: instead of executing code they are blocking, most typically on an I/O request such as a socket read or database query. Such code should not run on a threadpool thread; it should run on a regular Thread or a Task configured with TaskCreationOptions.LongRunning, or be made more efficient by making it asynchronous, which the C# 5 async/await keywords can make a lot easier.
A sledge-hammer solution is to call ThreadPool.SetMinThreads() and bump up the minimum. Only ever consider this if you cannot afford the time to do it right.
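For illustration, a minimal sketch of what that sledge-hammer option looks like (the number 50 is a placeholder, not a recommendation; measure before picking real values):

// Requires using System and System.Threading.
ThreadPool.GetMinThreads(out int workerMin, out int ioMin);
// Raise only the worker minimum so timer callbacks are not held up by the
// pool's slow thread-injection rate.
ThreadPool.SetMinThreads(Math.Max(workerMin, 50), ioMin);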

It's quite simple:
1) your OS is not an RTOS (real-time OS)
2) System.Threading.Timer executes its callback on a thread from the ThreadPool, and the only thing it guarantees is that the callback will be called some time after the interval has elapsed, not exactly when it elapses.
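As a rough sketch (the class and variable names here are mine, not from the question), you can see this for yourself by timing a one-shot 100ms timer and logging how late the callback actually fires:

using System;
using System.Diagnostics;
using System.Threading;

class TimerDelayDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        using var timer = new Timer(_ =>
        {
            // Under thread-pool starvation this can print well over 100 ms.
            Console.WriteLine($"Callback fired after {sw.ElapsedMilliseconds} ms");
        }, null, dueTime: 100, period: Timeout.Infinite);

        Thread.Sleep(2000); // keep the process alive long enough for the callback
    }
}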

Related

Using task vs using a thread for task monitoring

Context: we have a task which might take from 30 seconds to 5 minutes depending on a service we are consuming in some Azure Functions.
We are planning to monitor the current status of that task object to make sure it's running and has not been cancelled/faulted.
There are two ways to go around it:
Create a monitoring Task, run it, and then cancel it when the main task is finished; alternatively, use Task.Delay inside a while loop with a condition.
Create a Thread, run it, and wait for it to finish (with a loop condition to avoid a while that runs forever).
We have done some research and have realised that both have pros and cons. But we are still not sure about which one would be the best approach and why.
In a similar scenario, what would you use? A task, a thread, or something else?
Using a thread is a bit wasteful, but slightly more reliable.
It is wasteful because each thread allocates 1 MB of memory just for its mere existence.
It is more reliable because it doesn't depend on the availability of ThreadPool threads for running a timer event. A sudden burst in demand for ThreadPool threads could leave the ThreadPool starved for several seconds, or even minutes (in extreme scenarios).
So if wasting 1 MB of memory is a non-issue for the app, use a thread. On the other hand, if absolute precision in the timing of the events is unimportant, use a task.
You could also use a task started with the option LongRunning, but this is essentially a thread in disguise.
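For what it's worth, here is a hedged sketch of the "Task.Delay inside a while loop" option from the question; mainTask, the interval, and the console logging are placeholders rather than a prescribed API:

// Requires using System, System.Threading and System.Threading.Tasks.
static async Task MonitorAsync(Task mainTask, TimeSpan interval, CancellationToken ct)
{
    while (!mainTask.IsCompleted)
    {
        Console.WriteLine($"Main task status: {mainTask.Status}");
        await Task.Delay(interval, ct); // yields the thread instead of blocking it
    }

    if (mainTask.IsFaulted) Console.WriteLine(mainTask.Exception);
    if (mainTask.IsCanceled) Console.WriteLine("Main task was cancelled.");
}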

Fire and Forget not getting scheduled

I am using Task.Run(() => this.someMethod()) to schedule a background job. I am not interested in the operation result and need to move on with the flow of the application.
But sometimes my background task is not getting scheduled for a long time. This started happening after we moved to .NET 4.7 from 4.5. Even while debugging, the breakpoints are either not hit or hit after a considerable delay (> 10 minutes).
Has anyone noticed this behavior or know what's causing it?
I am running on i7 core, 16 GB RAM.
Having your Task take 10 minutes to even start sounds fishy. My guess is your system is under heavy load, or you have a lot of tasks running.
I'm going to attack the latter (for a specific situation).
TaskCreationOptions.LongRunning: Specifies that a task will be a long-running, coarse-grained operation involving fewer, larger components than fine-grained systems. It provides a hint to the TaskScheduler that oversubscription may be warranted. Oversubscription lets you create more threads than the available number of hardware threads. It also provides a hint to the task scheduler that an additional thread might be required for the task so that it does not block the forward progress of other threads or work items on the local thread-pool queue.
var task = new Task(() => MyLongRunningMethod(), TaskCreationOptions.LongRunning);
task.Start();
This is a quote from Stephen Toub - MSFT on this post:
Under the covers, it's going to result in a higher number of threads being used, because its purpose is to allow the ThreadPool to continue to process work items even though one task is running for an extended period of time; if that task were running in a thread from the pool, that thread wouldn't be able to service other tasks. You'd typically only use LongRunning if you found through performance testing that not using it was causing long delays in the processing of other work.
It's hard to know what your problem is without looking at all your code, but I posted this as a suggestion.
I don't know how many tasks you start this way, but unless the number is really high I would focus the debugging on the method being called, not the caller. A delay of 10 minutes is more likely caused by a deadlock or network issue than the task scheduling.
Some ideas:
For a start, I would add something to the beginning and the end of the method being called that lets you know when it starts to execute and when it finishes, like a Debug.WriteLine() with a timestamp and task ID (see the sketch after these ideas).
Make sure the method being called releases all resources, even if it crashes. Crashing threads/tasks may go unnoticed, because they don't cause the application to crash.
Double check that the method being called is thread-safe. You may have been lucky in the past and some new framework optimization is now causing havoc.
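A minimal sketch of the first idea; SomeMethod is a stand-in for whatever you pass to Task.Run:

// Requires using System, System.Diagnostics and System.Threading.Tasks.
void SomeMethod()
{
    Debug.WriteLine($"{DateTime.Now:HH:mm:ss.fff} start, task {Task.CurrentId}");
    try
    {
        // ... the actual background work ...
    }
    finally
    {
        // Runs even if the work throws, so crashes don't go unnoticed.
        Debug.WriteLine($"{DateTime.Now:HH:mm:ss.fff} end, task {Task.CurrentId}");
    }
}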

.NET Scheduling many operations: single timer vs one timer per operation

I am developing a Windows Service application in .NET which executes many functions (it is a WCF service host), and one of its responsibilities is running scheduled tasks.
I chose to create a System.Threading.Timer for every operation, with a dueTime set to the next execution and no period to avoid reentrancy.
Every time the operation ends, it changes the dueTime to match the next scheduled execution.
Most of the operations are scheduled to run every minute, not all together but offset from each other by a few seconds.
Now, after adding a number of operations, about 30, it seems that the timers start to be inaccurate, starting the operations many seconds late, or even minutes late.
I am running the operation logic directly in the callback method of the timer, so the running thread is the same one the timer fired on.
Should I create a Task to run the operation instead of running it in the callback method to improve accuracy?
Or should I use a single timer with a fixed (1 second) dueTime to check which operations need to be started?
I don't like this last option because it would be more difficult to handle reentrancy.
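For reference, a minimal sketch of the re-arming, one-shot pattern described above; RunOperation and the field names are placeholders:

// Requires using System and System.Threading.
private Timer _timer;

void Start(TimeSpan interval)
{
    _timer = new Timer(_ =>
    {
        try
        {
            RunOperation(); // the scheduled work, running on a thread-pool thread
        }
        finally
        {
            // Re-arm only after the work has finished, so executions never overlap.
            _timer.Change(interval, Timeout.InfiniteTimeSpan);
        }
    }, null, interval, Timeout.InfiniteTimeSpan);
}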
Timers fire on a thread pool thread, so you are probably finding that as you add lots of timers that you are exhausting the thread pool.
You could increase the size of the thread pool, or alternatively ensure you have fewer timers than the thread pool size.
Firing off Tasks from the callback likely won't help - since you are going to be fighting for threads from the same thread pool. Unless you use long-running tasks.
We usually set up multiple timers to handle different actions within a single service. We set the intervals and start/stop the timers on the service Start/Stop/Shutdown events (and have a variable indicating the status of each one, i.e. bool Stopped).
When the timer ticks over, we stop the timer and run the processing, which may take a while depending on the process (i.e. it may take longer than the interval if the interval is short). This code needs to be in a try-catch so it keeps going on errors.
After the processing has finished, we check the Stopped variable, and if it's not stopped we start the timer again (this handles the reentrancy you've mentioned and allows the code to stick to the interval as much as possible).
Timers are generally accurate only above about 100ms as far as I know, but they should be close enough for what you want to do.
We have run this concept for years, and it hasn't let us down.
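A rough sketch of that stop/process/restart pattern, assuming a System.Timers.Timer; DoProcessing, Log and the field names are placeholders:

// Requires using System and System.Timers.
private readonly System.Timers.Timer _timer = new System.Timers.Timer(60_000);
private volatile bool _stopped;

void OnStart()
{
    _timer.AutoReset = false; // fire once; we restart manually after processing
    _timer.Elapsed += (s, e) =>
    {
        try
        {
            DoProcessing(); // may run longer than the interval
        }
        catch (Exception ex)
        {
            Log(ex); // swallow so the loop keeps going on errors
        }
        finally
        {
            if (!_stopped) _timer.Start();
        }
    };
    _timer.Start();
}

void OnStop()
{
    _stopped = true;
    _timer.Stop();
}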
If you're running these tasks as a sub-system of an ASP.NET app, you should also look at HangFire, which can handle background processing and eliminates the need for the Windows service.
How accurate do the timers need to be? You could always use a single timer and run multiple processing threads at the same time, or queue the calls to the less critical operations.
OK, I came to a decision: since I am not able to easily reproduce the behavior, I chose to solve the root problem and use the service process only to:
serve WCF requests done by clients
schedule operations (which was problematic)
Every operation that could eat CPU is executed by another process, which is controlled directly by the main process (with System.Diagnostics.Process and its events) and communicates with it through WCF.
When I start the secondary process, I pass it the PID of the main process on the command line. If the main process gets killed, the Process.Exited event fires, and I can close the child process too.
This way the main service usually doesn't use much CPU time, and is free to schedule happily without delays.
Thanks to all who gave me advice!

What does "long-running tasks" mean?

By default, the CLR runs tasks on pooled threads, which is ideal for short-running compute-bound work. For longer-running and blocking operations, you can prevent use of a pooled thread as follows:
Task task = Task.Factory.StartNew(() => ..., TaskCreationOptions.LongRunning);
I am reading a topic about threads and tasks. Can you explain to me what "long[er]-running" and "short-running" tasks are?
In general thread pooling, you distinguish short-running and long-running threads based on the comparison between their start-up time and run time.
Threads generally take some time to be created and get up to the point where they can start running your code.
This means that if you run a large number of threads where they each take a minute to start but only run for a second (not accurate times, but the intent here is simply to show the relationship), the run time of each will be swamped by the time taken to get them going in the first place.
That's one of the reasons for using a thread pool: the threads aren't terminated once their work is done. Instead, they hang around to be reused so that the start-up time isn't incurred again.
So, in that sense, a long running thread is one whose run time is far greater than the time required to start it. In that case, the start-up time is far less important than it is for short running threads.
Conversely, short running threads are ones whose run time is less than or comparable to the start-up time.
For .NET specifically, it's a little different in operation. The thread pooling code will, once it's reached the minimum number of threads, attempt to limit thread creation to one per half-second.
Hence, if you know your thread is going to be long running, you should notify the scheduler so that it can adjust itself accordingly. This will probably mean just creating a new thread rather than grabbing one from the pool, so that the pool can be left to service short-running tasks as intended (no guarantees on that behaviour but it would make sense to do it that way).
However, that doesn't change the meaning of long-running and short-running, all it means is that there's some threshold at which it makes sense to distinguish between the two. For .NET, I would suggest the half-second figure would be a decent choice.
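One way to see that injection rate in action is sketched below; the exact pacing is an assumption that varies across runtime versions, and the numbers are placeholders. The idea is to queue more blocking work items than the pool's minimum thread count and watch the start times spread out:

using System;
using System.Diagnostics;
using System.Threading;

class InjectionDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 32; i++)
        {
            int n = i;
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine($"item {n} started at {sw.ElapsedMilliseconds} ms");
                Thread.Sleep(10_000); // block the thread so the pool is forced to grow
            });
        }
        Thread.Sleep(20_000); // items beyond the minimum start roughly half a second apart
    }
}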

.NET Thread Pool - Unresponsive WinForms UI

Scenario
I have a Windows Forms application. Inside the main form there is a loop that iterates around 3000 times, creating a new instance of a class on a new thread to perform some calculations. Bearing in mind that this setup uses a thread pool, the UI does stay responsive when there are only around 100 iterations of the loop (100 assets to process). But as soon as this number increases heavily, the UI locks up into eggtimer mode, and thus the log that is written out to the listbox on the form becomes unreadable.
Question
Am I right in thinking that the best way around this is to use a Background Worker?
And is the UI locking up because even though I'm using lots of different threads (for speed), the UI itself is not on its own separate thread?
Suggested Implementations greatly appreciated.
EDIT!!
So let's say that instead of just firing off and queuing up 3000 assets to process, I decide to do them in batches of 100. How would I go about doing this efficiently? I made an attempt earlier at adding "Thread.Sleep(5000);" after every batch of 100 was fired off, but the whole thing seemed to crap out....
If you are creating 3000 separate threads, you are pushing a documented limitation of the ThreadPool class:
If an application is subject to bursts of activity in which large numbers of thread pool tasks are queued, use the SetMinThreads method to increase the minimum number of idle threads. Otherwise, the built-in delay in creating new idle threads could cause a bottleneck.
See that MSDN topic for suggestions to configure the thread pool for your situation.
If your work is CPU intensive, having that many separate threads will cause more overhead than it's worth. However, if it's very IO intensive, having a large number of threads may help things somewhat.
.NET 4 introduces outstanding support for parallel programming. If that is an option for you, I suggest you have a look at that.
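As a sketch of that route (assets and Calculate are stand-ins for the questioner's own types, not known names), Parallel.ForEach caps the concurrency for you instead of spawning a thread per asset:

// Requires using System and System.Threading.Tasks.
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount // leave the UI thread alone
};
Parallel.ForEach(assets, options, asset => Calculate(asset));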
More threads do not equal top speed; in fact, too many threads mean less speed. If your task is purely CPU-bound, you should only use as many threads as you have cores; otherwise you're wasting resources.
With 3,000 iterations and your form thread attempting to create a thread each time, what's probably happening is that you are maxing out the thread pool, and the form is hanging because it needs to wait for a prior thread to complete before it can allocate a new one.
Apparently the ThreadPool doesn't work this way, though; I have never checked it with threads before, so I am not sure. Another possibility is that the tasks begin flooding the UI thread with invocations, at which point it will give up on the GUI.
It's difficult to tell without seeing code - but, based on what you're describing, there is one suspect.
You mentioned that you have this running on the ThreadPool now. Switching to a BackgroundWorker won't change anything dramatically, since it also uses the ThreadPool to execute. (BackgroundWorker just simplifies the invoke calls...)
That being said, I suspect the problem is your notifications back to the UI thread for your ListBox. If you're invoking too frequently, your UI may become unresponsive while it tries to "catch up". This can happen if you're feeding too much status info back to the UI thread via Control.Invoke.
Otherwise, make sure that ALL of your work is being done on the ThreadPool, and you're not blocking on the UI thread, and it should work.
If every thread logs something to your UI, every written log line must invoke the main thread. Better to cache the log output and update the GUI only every 100 iterations or so, for example as in the sketch below.
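A sketch of that batching idea, assuming a WinForms ListBox named listBox1; all names here are placeholders:

// Requires using System, System.Collections.Generic and System.Windows.Forms.
private readonly List<string> _pendingLog = new List<string>();

void Log(string line)
{
    string[] batch;
    lock (_pendingLog)
    {
        _pendingLog.Add(line);
        if (_pendingLog.Count < 100) return; // not enough cached yet
        batch = _pendingLog.ToArray();
        _pendingLog.Clear();
    }

    // One marshalled call per 100 lines instead of one per line.
    listBox1.BeginInvoke((Action)(() => listBox1.Items.AddRange(batch)));
}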
Since I haven't seen your code, this is just a lot of conjecture with some hopefully well-educated guessing.
All a threadpool does is queue up your requests and then fire new threads off as others complete their work. Now, 3000 threads doesn't sound like a lot, but if there's a ton of processing going on you could be destroying your CPU.
I'm not convinced a BackgroundWorker would help, since you would end up re-creating a manager to handle all the pooling the threadpool already gives you. I think your issue is more that you've got too much data chunking going on. A good place to start would be to throttle the number of threads you start and maintain; the threadpool manager easily allows you to do this. Find a balance that allows you to process data while still keeping the UI responsive.
