How can a thread blocking method like WaitOne method exposed by AutoResetEvent not take up resources (CPU etc.)?
I would imagine that such a method would simply have a while loop like:
public void WaitOne()
{
    while (IsSet == false)
    {
        // some code to make the thread sleep
    }
    // finally call delegate
}
But that's clearly wrong, since it will make the CPU spin. So what's the secret behind all this black magic?
The method is implemented in the kernel. For each thread that isn't ready to run, Windows keeps a list of all the waitable objects (events, etc.) that the thread is waiting on. When a waitable object is signalled, Windows checks if it can wake up any of the waiting threads. No polling required.
This channel9 talk has a lot of information about how it works:
http://channel9.msdn.com/shows/Going+Deep/Arun-Kishan-Farewell-to-the-Windows-Kernel-Dispatcher-Lock/
Typically, these concepts rely on underlying operating system event constructs to wake up the suspended thread once the event is triggered (or a timeout occurs if applicable). Thus, the thread is in a suspended state and not consuming CPU cycles.
That said, there are other variations of wait in other event types, some of which attempt to spin for a few cycles before suspending the thread in case the event is triggered either before or quickly after the call. There are also some lightweight locking primitives that DO perform spins waiting for a trigger (like SpinWait) but they must be used with care as long waits can drive up the CPU.
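As a rough sketch of that spin-then-block idea (the event, timeout and method name here are illustrative assumptions, not part of the answer above), SpinWait.SpinUntil can poll briefly before falling back to a true blocking wait:

using System.Threading;

static class SpinThenWaitSketch
{
    static readonly AutoResetEvent Signal = new AutoResetEvent(false);

    // Spin for up to ~10 ms in case the event is signalled almost immediately,
    // then give up spinning and block in the kernel instead.
    public static void WaitForSignal()
    {
        if (!SpinWait.SpinUntil(() => Signal.WaitOne(0), 10))
        {
            Signal.WaitOne();
        }
    }
}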
The AutoResetEvent and ManualResetEvent take advantage of OS functions. See CreateEvent for more information on this topic.
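To see the OS-backed behaviour in a minimal, self-contained form (just an illustrative sketch, not code from the answer above):

using System;
using System.Threading;

class AutoResetEventSketch
{
    static readonly AutoResetEvent Done = new AutoResetEvent(false);

    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(1000); // simulate some work
            Done.Set();         // signals the underlying kernel event object
        });
        worker.Start();

        // The main thread is suspended by the OS here and consumes no CPU
        // until the event is signalled; there is no polling loop.
        Done.WaitOne();
        Console.WriteLine("Worker signalled completion.");
    }
}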
Assume that you have a multi-threaded Windows service which performs lots of different operations which takes a fair share of time, e.g. extracting data from different data stores, parsing said data, posting it to an external server etc. Operations may be performed in different layers, e.g. application layer, repository layer or service layer.
At some point in the lifespan of this Windows service you may wish to shut it down or restart it by way of services.msc. However, if you can't stop all operations and terminate all threads in the Windows service within the timespan that services.msc expects the stop procedure to take, it will hang and you will have to kill it from Task Manager.
Because of the issue mentioned above, my question is as follows: how would you implement a fail-safe way of handling shutdown of your Windows service? I have a volatile boolean that acts as a shutdown signal; it is set by OnStop() in my service base class and should gracefully stop my main loop, but that isn't worth much if an operation in some other layer is taking its time doing whatever it is up to.
How should this be handled? I'm currently at a loss and need some creative input.
I would use a CancellationTokenSource and propagate the cancellation token from the OnStop method down to all layers and all threads and tasks started there. It's in the framework, so it will not break your loose coupling if you care about that (I mean, wherever you use a thread or Task, you also have CancellationToken available).
This means you need to adjust your async methods to take the cancellation token into consideration.
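A minimal sketch of what that propagation might look like (DoWorkAsync and the ten-minute interval are placeholders, not part of the answer):

using System;
using System.Threading;
using System.Threading.Tasks;

private readonly CancellationTokenSource _cts = new CancellationTokenSource();

protected override void OnStart(string[] args)
{
    Task.Run(() => RunAsync(_cts.Token));
}

private async Task RunAsync(CancellationToken token)
{
    try
    {
        while (!token.IsCancellationRequested)
        {
            await DoWorkAsync(token); // pass the token down through every layer
            await Task.Delay(TimeSpan.FromMinutes(10), token);
        }
    }
    catch (OperationCanceledException)
    {
        // expected when OnStop cancels during a delay or an awaited operation
    }
}

protected override void OnStop()
{
    _cts.Cancel(); // observed by every loop, task and async method holding the token
}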
You should also be aware of ServiceBase.RequestAdditionalTime. In case it is not possible to cancel all tasks in due time, you can request an extension period.
Alternatively, you can explore the Thread.IsBackground property. All threads in your Windows service that have it set to true are stopped by the CLR when the process is about to exit (a short sketch follows the quote below):
A thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete.
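For instance (a sketch; DoWork stands in for your own worker method):

var workerThread = new Thread(DoWork)
{
    IsBackground = true // the CLR will stop this thread when the process exits
};
workerThread.Start();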
After more research and some brainstorming I came to realise that the problems I've been experiencing were being caused by a very common design flaw regarding threads in Windows services.
The design flaw
Imagine you have a thread which does all your work. Your work consists of tasks that should be run again and again indefinitely. This is quite often implemented as follows:
volatile bool keepRunning = true;
Thread workerThread;

protected override void OnStart(string[] args)
{
    workerThread = new Thread(() =>
    {
        while (keepRunning)
        {
            DoWork();
            Thread.Sleep(10 * 60 * 1000); // Sleep for ten minutes
        }
    });
    workerThread.Start();
}

protected override void OnStop()
{
    keepRunning = false;
    workerThread.Join();
    // Ended gracefully
}
This is the very common design flaw I mentioned. The problem is that while this will compile and run as expected, you will eventually find that your Windows service won't respond to commands from the service console in Windows. This is because the call to Thread.Sleep() blocks the worker thread, so OnStop()'s call to Join() has to wait for the sleep to finish, and the service appears unresponsive. You will only see this problem if the thread blocks for longer than the timeout configured by Windows in HKLM\SYSTEM\CurrentControlSet\Control\WaitToKillServiceTimeout; because of that registry value, this implementation may work for you if your thread sleeps for a very short period and does its work in an acceptable amount of time.
The alternative
Instead of using Thread.Sleep() I decided to go for ManualResetEvent and System.Threading.Timer instead. The implementation looks something like this:
OnStart:
this._workerTimer = new Timer(new TimerCallback(this._worker.DoWork));
this._workerTimer.Change(0, Timeout.Infinite); // This tells the timer to perform the callback right now
Callback:
if (MyServiceBase.ShutdownEvent.WaitOne(0)) // My static ManualResetEvent
    return; // Exit callback
// Perform lots of work here
ThisMethodDoesAnEnormousAmountOfWork();
(stateInfo as Timer).Change(_waitForSeconds * 1000, Timeout.Infinite); // This tells the timer to execute the callback after a specified period of time. This is the amount of time that was previously passed to Thread.Sleep()
OnStop:
MyServiceBase.ShutdownEvent.Set(); // This signals the callback to never ever perform any work again
this._workerTimer.Dispose(); // Dispose of the timer so that the callback is never ever called again
The conclusion
By using System.Threading.Timer and ManualResetEvent instead, you will avoid your service becoming unresponsive to service console commands as a result of Thread.Sleep() blocking.
PS! You may not be out of the woods just yet!
However, I believe there are cases in which a callback is assigned so much work by the programmer that the service may become unresponsive to service console commands during workload execution. If that happens you may wish to look at alternative solutions, like checking your ManualResetEvent deeper in your code, or perhaps implementing CancellationTokenSource.
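For example, a long-running work method could check the same shutdown event between steps (a sketch; GetWorkItems and ProcessItem are placeholders for your own code):

private void ThisMethodDoesAnEnormousAmountOfWork()
{
    foreach (var item in GetWorkItems()) // placeholder for your own work source
    {
        if (MyServiceBase.ShutdownEvent.WaitOne(0))
            return; // bail out between items so OnStop() is honoured quickly

        ProcessItem(item); // placeholder for one unit of work
    }
}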
I'm starting multiple threads and would like to know when any of them finishes. I know the following code:
foreach (Thread t in threads)
    t.Join();
But it will only wait for all threads together. That's much too late. I need to know when one thread finishes, even when other threads are still running. I'm looking for something equivalent to WaitAny only for threads. But I can't add code to all threads I'm monitoring, so using signals or other synchronisation objects is not an option.
Some clarification: I'm working on a logging/tracing tool that should log the application's activity. I can insert log statements when a thread starts, but I can't insert a log statement on every possible way out of the thread (multiple exit points, exceptions etc.). So I'd like to register the new thread and then be notified when it finishes in order to write a log entry. I could asynchronously Join on every thread, but that means a second thread for every monitored thread, which seems like a lot of overhead. Threads are used by various means, be it a BackgroundWorker, Task or pool thread. In essence, it's a thread and I'd like to know when it's done. The exact thread mechanism is defined by the application, not the logging solution.
Instead of Threads, use Tasks. They have the WaitAny method.
Task.WaitAny
As you can read here,
More efficient and more scalable use of system resources.
More programmatic control than is possible with a thread or work item.
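A minimal sketch of waiting for whichever task finishes first (the work delegates are placeholders):

using System;
using System.Threading.Tasks;

Task[] tasks =
{
    Task.Run(() => DoWorkA()), // placeholder work items
    Task.Run(() => DoWorkB())
};

int finishedIndex = Task.WaitAny(tasks); // blocks until any one task completes
Console.WriteLine("Task {0} finished first.", finishedIndex);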
In my opinion WaitHandle.WaitAny is the best solution; since you don't want to use it for some reason, you can try something like this.
Take advantage of the Thread.Join(int) overload, which takes a millisecond timeout and returns true when the thread has terminated or false when the call times out.
List<Thread> threads = new List<Thread>();
while (!threads.Any(x=> x.Join(100)))
{
}
You can alter the timeout of Join If you know how long it will take.
My answer is based on your clarification that all you have is Thread.CurrentThread. Disclaimer: IMO, what you're trying to do is a hack, so my idea is by all means a hack too.
So, use reflection to obtain the set of native Win32 handles for your desired threads. You are looking for the Thread.GetNativeHandle method, which is internal, so you call it like thread.GetType().InvokeMember("GetNativeHandle", BindingFlags.InvokeMethod | BindingFlags.Instance | BindingFlags.NonPublic, ...). Use a reflection tool of your choice or the Framework sources to learn more about it. Once you've got the handles, go on with one of the following options:
Set up your own implementation of SynchronizationContext (derive from it) and use SynchronizationContext.WaitHelper(waitAll: false) to wait for your unmanaged handles.
Use the raw Win32 API like WaitForMultipleObjects or CoWaitForMultipleObjects (depending on whether you need to pump messages); a rough sketch follows the list below.
Perform the wait on a separate child or pool thread.
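A rough sketch of the raw-API option, assuming you have already obtained the native thread handles via the reflection trick described above (the P/Invoke declaration is standard Win32; everything else is illustrative):

using System;
using System.Runtime.InteropServices;

static class NativeWait
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern uint WaitForMultipleObjects(uint nCount, IntPtr[] lpHandles,
                                              bool bWaitAll, uint dwMilliseconds);

    const uint WAIT_OBJECT_0 = 0;
    const uint INFINITE = 0xFFFFFFFF;

    // handles: native thread handles obtained via reflection, as described above
    public static int WaitForAnyThreadToExit(IntPtr[] handles)
    {
        uint result = WaitForMultipleObjects((uint)handles.Length, handles,
                                             false /* wait for any */, INFINITE);
        return (int)(result - WAIT_OBJECT_0); // index of the thread that terminated
    }
}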
[EDITED] Depending on the execution environment of your target threads, this hack may not work, because one-to-one mapping between managed and unmanaged threads is not guaranteed:
It is possible to determine the Windows thread that is executing the code for a managed thread and to retrieve its handle. However, it still doesn't make sense to call the SetThreadAffinityMask function for this Windows thread, because the managed scheduler can continue the execution of a managed thread in another Windows thread.
It appears, however, that this may be a concern only for custom CLR hosts. Also, it appears to be possible to control managed thread affinity with Thread.BeginThreadAffinity and Thread.EndThreadAffinity.
You could use a background worker for your working threads.
Then hook all the RunWorkerCompleted events to a method that will wait for them.
If you want that to be synched to the code where you're currently waiting for the join, then the problem is reduced to just synchronizing that single event method to that place in code.
Better yet, I'd suggest to do what you're doing asynchronously without blocking, and just do what you want in the event.
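A small sketch of that approach (the work and completion bodies are placeholders):

using System;
using System.ComponentModel;

var worker = new BackgroundWorker();

worker.DoWork += (sender, e) =>
{
    // the actual work of the thread goes here
};

worker.RunWorkerCompleted += (sender, e) =>
{
    // runs when the worker finishes, whether it completed normally,
    // was cancelled, or faulted (see e.Error)
    Console.WriteLine("Worker finished.");
};

worker.RunWorkerAsync();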
Would you consider wrapping your thread invocations with another 'logging' thread? That way you could log synchronously before & after the thread run.
Something like this pseudo-code:
int threadLogger(<parms>) {
log("starting thread");
retcode = ActualThreadBody(<parms>);
log("exiting thread");
return retcode;
}
If you have more information on the thread started, you could log that as well.
You could also take the thread function as a parameter in the case where you have multiple types of threads to start, which it sounds like you do.
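In C#, such a wrapper could look roughly like this (Log is assumed to be your own logging call; this sketch is not from the answer above):

using System;
using System.Threading;

static Thread StartLoggedThread(ThreadStart actualThreadBody, string name)
{
    var thread = new Thread(() =>
    {
        Log("starting thread " + name);
        try
        {
            actualThreadBody();
        }
        finally
        {
            Log("exiting thread " + name); // runs even on exceptions or early returns
        }
    });
    thread.Start();
    return thread;
}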
Suppose I have a C# thread doing some blocking IO and waiting for it to finish. Now suppose the OS scheduler gives it CPU time: will that time be given back right away, or will it just be wasted by the thread doing nothing?
Or perhaps something entirely else?
On Windows, blocking IO to any device (accessible via the file system interface or otherwise) works by sending the IO request to the driver associated with the device, along with a handle to an event object, and then blocking the calling thread by waiting on that event object (the event gets signalled when the driver completes the IO). Hence, when a thread does blocking IO it does not hog the CPU, as it is only waiting on the event handle.
All blocking IO APIs work in this fashion, with perhaps subtle differences in implementation.
I'm using ThreadPool.UnsafeRegisterWaitForSingleObject (henceforth RWFSO) to asynchronously wait on a Semaphore. It returns me a RegisteredWaitHandle which I cannot easily Unregister(). I need to unregister these because the handle is keeping a reference to the delegate and its state object and my process is leaking memory with each handle. Eventually they do get finalized, but this takes far too long and puts far too much pressure on the GC, ballooning my process's private memory usage up into the 1.8GB range. I'm making a lot of asynchronous requests.
The semaphore is used to gate access to HttpWebRequest's asynchronous implementation: BeginGetRequestStream and BeginGetResponse. If I don't use a semaphore, it keeps telling me "not enough free threads on the thread pool" because of the moronic way in which it was implemented. If I use blocking primitives like semaphore.WaitOne() then my thread pool will eventually be deadlocked and nothing will make progress.
RWFSO returns a RegisteredWaitHandle but this is useless to my calling thread as I need to Unregister() this handle only when the wait is completed; I have no cancellation scenario. I can't just pass the RegisteredWaitHandle instance to my delegate (via an out-of-band field set on the state object passed to the delegate) because the delegate could be completed on another thread before control even returns from RWFSO.
How do I safely and quickly Unregister() a RegisteredWaitHandle when, and only when, its wait has completed?
Just pass the handle to the delegate via whatever mechanism works for you. If the handle is still null by the time the delegate executes, have the delegate spin waiting for the assignment. Don't worry about the trivial amount of CPU time used, because you can just put Thread.Yield in the loop. If it bothers you, you can use a lock.
Alternatively you can unregister the handles that are there, and let the GC clean up the few that lost the race.
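A rough sketch of the first idea (the class and member names are just for the example):

using System;
using System.Threading;

class GatedRequest
{
    // Written once, after UnsafeRegisterWaitForSingleObject returns;
    // volatile so the callback thread sees the assignment.
    private volatile RegisteredWaitHandle _registration;

    public void Start(Semaphore semaphore)
    {
        _registration = ThreadPool.UnsafeRegisterWaitForSingleObject(
            semaphore,         // the WaitHandle being waited on
            OnSignalled,       // callback below
            null,              // state
            Timeout.Infinite,  // no timeout
            true);             // execute only once
    }

    private void OnSignalled(object state, bool timedOut)
    {
        // The callback can fire before Start() has stored the handle;
        // yield until the assignment becomes visible, then unregister.
        while (_registration == null)
            Thread.Yield();

        _registration.Unregister(null);

        // ... kick off BeginGetRequestStream / BeginGetResponse here ...
    }
}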
If you are going to tie up a thread waiting for the handle, why not just make the HTTP request synchronously? What you are doing is actually tying up two threads per request (if I understand your implementation correctly), whereas just running your requests synchronously would take one thread per request (and could still be moderated using a semaphore or a constrained thread pool, if you need to throttle it).
In my application I have to send periodic heartbeats to a "brother" application.
Is this better accomplished with System.Timers.Timer/System.Threading.Timer, or using a Thread with a while loop and Thread.Sleep?
The heartbeat interval is 1 second.
while (!exit)
{
    // do work
    Thread.Sleep(1000);
}
or
myTimer.Start( () => {
//do work
}, 1000); //pseudo code (not actual syntax)...
System.Threading.Timer has my vote.
System.Timers.Timer is meant for server-based timer functionality (your code running as a server/service on a host machine rather than being run interactively by a user).
A Thread with a while loop and Thread.Sleep is truly a bad idea given the existence of more robust timer mechanisms in .NET.
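For a one-second heartbeat, a System.Threading.Timer version might look roughly like this (SendHeartbeat is a placeholder for your own call):

using System;
using System.Threading;

// Keep a reference to the timer so it isn't garbage collected.
// Fires immediately, then every second, on a thread-pool thread.
Timer heartbeatTimer = new Timer(_ => SendHeartbeat(),
                                 null,
                                 TimeSpan.Zero,
                                 TimeSpan.FromSeconds(1));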
Server Timers are a different creature than sleeping threads.
For one thing, based on the priority of your thread, and what else is running, your sleeping thread may or may not be awoken and scheduled to run at the interval you ask. If the interval is long enough, and the precision of scheduling doesn't really matter, Thread.Sleep() is a reasonable choice.
Timers, on the other hand, can raise their events on any thread, allowing for better scheduling capabilities. The cost of using timers, however, is a little bit more complexity in your code - and the fact that you may not be able to control which thread runs the logic that the timer event fires on. From the docs:
The server-based Timer is designed for use with worker threads in a multithreaded environment. Server timers can move among threads to handle the raised Elapsed event, resulting in more accuracy than Windows timers in raising the event on time.
Another consideration is that timers invoke their Elapsed delegate on a ThreadPool thread. Depending on how time-consuming and/or complicated your logic is, you may not want to run it on the thread pool - you may want a dedicated thread. Another factor with timers, is that if the processing takes long enough, the timer event may be raised again (concurrently) on another thread - which can be a problem if the code being run is not intended or structured for concurrency.
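If overlapping callbacks are a concern, one common pattern is to guard the handler so a tick is simply skipped while the previous one is still running (a sketch; DoHeartbeatWork is a placeholder):

using System.Threading;

private int _isRunning; // 0 = idle, 1 = a callback is currently executing

private void OnTimerTick(object state)
{
    // If the previous tick is still running, skip this one instead of overlapping.
    if (Interlocked.CompareExchange(ref _isRunning, 1, 0) != 0)
        return;

    try
    {
        DoHeartbeatWork(); // placeholder for the real work
    }
    finally
    {
        Interlocked.Exchange(ref _isRunning, 0);
    }
}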
Don't confuse server timers with "Windows timers". The latter usually refers to WM_TIMER messages that can be delivered to a window, allowing an app to schedule and respond to timed processing on its main thread without sleeping. However, "Windows timers" can also refer to the Win API for low-level timing (which is not the same as WM_TIMER).
Neither :)
Sleeping is typically frowned upon (unfortunately I cannot remember the particulars, but for one, it is an uninterruptible "block"), and Timers come with a lot of baggage. If possible, I would recommend System.Threading.AutoResetEvent, as follows:
// initially set to a "non-signaled" state, ie will block
// if inspected
private readonly AutoResetEvent _isStopping = new AutoResetEvent(false);

public void Process()
{
    TimeSpan waitInterval = TimeSpan.FromMilliseconds(1000);

    // will block for 'waitInterval', unless another thread,
    // say a thread requesting termination, wakes you up. if
    // no one signals you, WaitOne returns false, otherwise
    // if someone signals WaitOne returns true
    while (!_isStopping.WaitOne(waitInterval))
    {
        // do your thang!
    }
}
Using an AutoResetEvent (or its cousin ManualResetEvent) guarantees a true block with thread safe signalling (for such things as graceful termination above). At worst, it is a better alternative to Sleep
Hope this helps :)
I've found that the only timer implementation that actually scales is System.Threading.Timer. All the other implementations seem pretty bogus if you're dealing with a non trivial number of scheduled items.