Cancelling a thread pool work item with Thread.Interrupt - C#

We are using the TPL to queue long-running tasks into the threadpool.
Some of the tasks can block for some time, so we are using the following pattern to cancel them:
private void RunAction(Action action, CancellationTokenSourceWithException cts)
{
    try
    {
        s_logger.Info("Starting action on thread ID: {0}", Utils.GetCurrentNativeThreadId());
        // Capture the current (thread pool) thread so the cancellation callback can interrupt it.
        Thread taskThread = Thread.CurrentThread;
        cts.Token.Register(() => InterruptTask(taskThread));
        s_logger.Info("Running next action");
        action();
    }
    catch (Exception e)
    {
        cts.Cancel(e);
        throw;
    }
}
This way, calling cts.Cancel() will cause the task thread to be interrupted in case it is blocking.
This, however, has led to a problem: we don't know whether the thread actually observed the ThreadInterruptedException or not. It is possible that we call Thread.Interrupt() on it, but the thread runs to completion and the task simply ends. In that case, the thread pool thread carries a ticking bomb in the form of the pending ThreadInterruptedException, and whenever another task runs on this thread and attempts to block, it will get this exception.
A Thread.ResetInterrupted() method (similar to Thread.ResetAbort()) would be helpful here, but it does not seem to exist. We can use something like the following:
try
{
    someEvent.Wait(10);
}
catch (ThreadInterruptedException) { }
to swallow the ThreadInterruptedException, but it looks ugly.
Can anyone suggest an alternative? Are we wrong to be calling Thread.Interrupt on thread pool threads? It seems like the easiest way to cancel tasks: cooperative cancellation using events etc. is much more cumbersome to use, and has to propagate into all the classes that we use from the task.

You cannot do this because you don't know if/when the thread pool's threads will block when not running your own code!
Apart from the problems you mentioned, if a thread decides to block while not running your own code, then the ThreadInterruptedException will be unhandled and the app will immediately terminate. This is something you cannot work around with a try/catch guard, because there is a race condition: the guard might have just completed when Thread.Interrupt is called, so if the runtime decides to have the thread block at that point you'll get a crash.
So using Thread.Interrupt is not a viable option and you will definitely have to set up cooperative cancellation.
Apart from that, you should probably not be using the thread pool for these tasks in the first place (although there's not enough data to be sure). Quoting the docs (emphasis mine):
If you have short tasks that require background processing, the managed thread pool is an easy way to take advantage of multiple threads.
There are several scenarios in which it is appropriate to create and manage your own threads instead of using thread pool threads:
...
You have tasks that cause the thread to block for long periods of time. The thread pool has a maximum number of threads, so a large number of blocked thread pool threads might prevent tasks from starting.
...
You might therefore want to consider using a thread pool of your own (there is an apparently very reputable implementation here).

Simple. You need to pass a CancellationToken to the action being called and act on it when cancellation is signalled. Messing with TPL threads via Interrupt is definitely the wrong action to take and will leave the TPL in a "confused" state. Adopt the cancellation pattern all the way.
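For illustration, a minimal sketch of that pattern might look like this (the method and helper names such as RunActionCooperatively, GetWorkItems and ProcessItem are hypothetical, not from the question's code):

private void RunActionCooperatively(Action<CancellationToken> action, CancellationTokenSource cts)
{
    try
    {
        action(cts.Token);
    }
    catch (OperationCanceledException)
    {
        // Expected when the token is signalled; the pool thread is left in a clean state.
    }
}

// The work itself observes the token between its blocking steps:
private void LongRunningWork(CancellationToken token)
{
    foreach (var item in GetWorkItems())        // hypothetical helper
    {
        token.ThrowIfCancellationRequested();   // cooperative cancellation point
        ProcessItem(item);                      // hypothetical helper
    }
}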


Two Tasks run on the same thread which invalidates lock

Edit
I found that Building Async Coordination Primitives, Part 1: AsyncManualResetEvent might be related to my topic.
In the case of TaskCompletionSource, that means that synchronous continuations can happen as part of a call to {Try}Set*, which means in our AsyncManualResetEvent example, those continuations could execute as part of the Set method. Depending on your needs (and whether callers of Set may be ok with a potentially longer-running Set call as all synchronous continuations execute), this may or may not be what you want.
Many thanks to all of the answers, thank you for your knowledge and patience!
Original Question
I know that Task.Run runs on a thread pool thread and threads can have re-entrancy. But I never knew that two tasks can run on the same thread when they are both alive!
My question is: is that reasonable by design? Does that mean lock inside an async method is meaningless (or rather, lock cannot be trusted in an async method block, if I'd like a method that doesn't allow re-entrancy)?
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace TaskHijacking
{
    class Program
    {
        static TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();
        static object methodLock = new object();

        static void MethodNotAllowReetrance(string callerName)
        {
            lock (methodLock)
            {
                Console.WriteLine($"Enter MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
                if (callerName == "task1")
                {
                    tcs.SetException(new Exception("Terminate tcs"));
                }
                Thread.Sleep(1000);
                Console.WriteLine($"Exit MethodNotAllowReetrance, caller: {callerName}, on thread: {Thread.CurrentThread.ManagedThreadId}");
            }
        }

        static void Main(string[] args)
        {
            var task1 = Task.Run(async () =>
            {
                await Task.Delay(1000);
                MethodNotAllowReetrance("task1");
            });
            var task2 = Task.Run(async () =>
            {
                try
                {
                    await tcs.Task; // await here until task1 calls SetException on tcs
                }
                catch
                {
                    // Omit the exception
                }
                MethodNotAllowReetrance("task2");
            });
            Task.WaitAll(task1, task2);
            Console.ReadKey();
        }
    }
}
Output:
Enter MethodNotAllowReetrance, caller: task1, on thread: 6
Enter MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task2, on thread: 6
Exit MethodNotAllowReetrance, caller: task1, on thread: 6
The control flow of thread 6 is shown in the figure:
You already have several solutions. I just want to describe the problem a bit more. There are several factors at play here that combine to cause the observed re-entrancy.
First, lock is re-entrant. lock is strictly about mutual exclusion of threads, which is not the same as mutual exclusion of code. I think re-entrant locks are a bad idea in the 99% case (as described on my blog), since developers generally want mutual exclusion of code and not threads. SemaphoreSlim, since it is not re-entrant, mutually excludes code. IMO re-entrant locks are a holdover from decades ago, when they were introduced as an OS concept, and the OS is just concerned about managing threads.
Next, TaskCompletionSource<T> by default invokes task continuations synchronously.
Also, await will schedule its method continuation as a synchronous task continuation (as described on my blog).
Task continuations will sometimes run asynchronously even if scheduled synchronously, but in this scenario they will run synchronously. The context captured by await is the thread pool context, and the completing thread (the one calling TCS.TrySet*) is a thread pool thread, and in that case the continuation will almost always run synchronously.
So, you end up with a thread that takes a lock, completes a TCS, thus executing the continuations of that task, which includes continuing another method, which is then able to take that same lock.
To repeat the existing solutions in other answers, to solve this you need to break that chain at some point:
(OK) Use a non-reentrant lock. SemaphoreSlim.WaitAsync will still execute the continuations while holding the lock (not a good idea), but since SemaphoreSlim isn't re-entrant, the method continuation will (asynchronously) wait for the lock to be available.
(Best) Create the TaskCompletionSource with TaskCreationOptions.RunContinuationsAsynchronously, which will force task continuations onto a (different) thread pool thread (see the sketch after this list). This is a better solution because your code is no longer invoking arbitrary code while holding a lock (i.e., the task continuations).
You can also break the chain by using a non-thread-pool context for the method awaiting the TCS. E.g., if that method had to resume on a UI thread, then it could not be run synchronously from a thread pool thread.
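A minimal sketch of that change, assuming .NET 4.6 or later where this flag is available, and reusing the tcs field from the question:

// The completing thread (the one calling SetException/SetResult) no longer runs
// the awaiting method's continuation inline, so it releases the lock before
// task2's code gets a chance to run.
static TaskCompletionSource<bool> tcs =
    new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);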
From a broader perspective, if you're mixing locks and TaskCompletionSource instances, it sounds like you may be building (or may need) an asynchronous coordination primitive. I have an open-source library that implements a bunch of them, if that helps.
A task is an abstraction over some amount of work. Usually this means that the work is split into parts, where the execution can be paused and resumed between parts. When resuming it may very well run on another thread. But the pausing/resuming may only be done at the await statements. Notably, while the task is 'paused', for example because it is waiting for IO, it does not consume any thread at all, it will only use a thread while it is actually running.
My Question is: is that reasonable by design? Does that mean lock inside an async method is meaningless?
A lock inside an async method is far from meaningless, since it allows you to ensure a section of code is only run by one thread at a time.
In your first example there can be only one thread holding the lock at a time. While the lock is held, that task cannot be paused/resumed, since await is not legal inside a lock body. So a single thread will execute the entire lock body, and that thread cannot do anything else until it completes the lock body. So there is no risk of re-entrancy unless you invoke some code that can call back into the same method.
In your updated example the problem occurs because TaskCompletionSource.SetException is allowed to reuse the current thread to run any continuation of the task immediately. To avoid this, and many other issues, make sure you only hold the lock while running a limited amount of code. Any method call that may run arbitrary code risks causing deadlocks, re-entrancy, and many other problems.
You can solve the specific problem by using a ManualResetEvent(Slim) to do the signaling between threads instead of using a TaskCompletionSource.
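A rough sketch of that alternative, reusing the names from the question (note that signal.Wait() still blocks a thread pool thread while it waits, which may or may not be acceptable):

static ManualResetEventSlim signal = new ManualResetEventSlim(false);

var task1 = Task.Run(async () =>
{
    await Task.Delay(1000);
    MethodNotAllowReetrance("task1");   // inside, call signal.Set() instead of tcs.SetException(...)
});

var task2 = Task.Run(() =>
{
    signal.Wait();                      // no continuation runs inline on the thread that calls Set()
    MethodNotAllowReetrance("task2");
});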
So your method is basically like this:
static void MethodNotAllowReetrance()
{
    lock (methodLock) tcs.SetResult(true);
}
...and the tcs.Task has a continuation attached that invokes MethodNotAllowReetrance. What happens then is the same thing that would happen if your method was like this instead:
static void MethodNotAllowReetrance()
{
    lock (methodLock) MethodNotAllowReetrance();
}
The moral lesson is that you must be very careful every time you invoke any method inside a lock-protected region. In this particular case you have a couple of options:
Don't complete the TaskCompletionSource while holding the lock. Defer its completion until after you have exited the protected region:
static void MethodNotAllowReetrance()
{
    bool doComplete = false;
    lock (methodLock) doComplete = true;
    if (doComplete) tcs.SetResult(true);
}
Configure the TaskCompletionSource so that it invokes its continuations asynchronously, by passing TaskCreationOptions.RunContinuationsAsynchronously to its constructor. This is an option you don't always have. For example, when you cancel a CancellationTokenSource, you don't have the option to invoke the callbacks registered to its associated CancellationToken asynchronously.
Refactor the MethodNotAllowReetrance method in a way that it can handle reentrancy.
Use SemaphoreSlim instead of lock, since, as the documentation says:
The SemaphoreSlim class doesn't enforce thread or task identity
In your case, it would look something like this:
// Semaphore only allows one request to enter at a time
private static readonly SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(1, 1);
void SyncMethod() {
_semaphoreSlim.Wait();
try {
// Do some sync work
} finally {
_semaphoreSlim.Release();
}
}
The try/finally block is optional, but it makes sure that the semaphore is released even if an exception is thrown somewhere in your code.
Note that SemaphoreSlim also has a WaitAsync() method, if you want to wait asynchronously to enter the semaphore.
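For completeness, an asynchronous variant might look like this (a sketch; the work inside is just a placeholder):

private static readonly SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(1, 1);

async Task AsyncMethod()
{
    await _semaphoreSlim.WaitAsync();   // waits without blocking the calling thread
    try
    {
        await Task.Delay(100);          // do some async work; re-entrant callers queue here
    }
    finally
    {
        _semaphoreSlim.Release();
    }
}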

Cancelling a Thread due to hung Db call

I've designed and made a prototype application for a high-performance, multi-threaded mail merge to run as a Windows Service (C#). This question refers to one sticky part of the problem: what to do if the process hangs on a database call. I have researched this a lot. I have read a lot of articles about thread cancellation and I ultimately only see one way to do this, Thread.Abort(). Yes, I know, absolutely do not use Thread.Abort(), so I have been researching for days how to do it another way and, as I see it, there is no alternative. I will tell you why and hopefully you can tell me why I am wrong.
FYI, these are meant to be long-running threads, so the TPL would create them outside the ThreadPool anyway.
TPL is just a nice wrapper for a Thread, so I see absolutely nothing a Task can do that a Thread cannot. It's just done differently.
Using a thread, you have two choices for stopping it.
1. Have the thread poll in a processing loop to see if a flag has requested cancellation and just end the processing and let the thread die. No problem.
2. Call Thread.Abort() (then catch the exception, do a Join and worry about Finally, etc.)
This is a database call in the thread, so polling will not work once it is started.
On the other hand, if you use the TPL and a CancellationToken, it seems to me that you're still polling and then creating an exception. It looks like the same thing I described in case 1 with the thread. Once I start that database call (I also intend to put async/await around it), there is no way I can test for a change in the CancellationToken. For that matter, the TPL is worse, as cancelling the CancellationToken during a Db read will do exactly nothing, far less than a Thread.Abort() would do.
I cannot believe this is a unique problem, but I have not found a real solution and I have read a lot. Whether a Thread or Task, the worker thread has to poll to know it should stop and then stop (not possible when connected to a Db; it's not in a loop), or else the thread must be aborted, throwing a ThreadAbortException or a TaskCanceledException.
My current plan is to start each job as a long-running thread. If the thread exceeds the time limit, I will call Thread.Abort(), catch the exception in the thread and then do a Join() on the thread after the Abort().
I am very, very open to suggestions... Thanks, Mike
I will put this link here, because it claims to do this, but I'm having trouble figuring it out and there are no replies to make me think it will work:
multi-threading-cross-class-cancellation-with-tpl
Oh, this looked like a good possibility, but I don't know about it either Treating a Thread as a Service
You can't actually cancel the DB operation. The request is sent across the network; it's "out there" now, there's no pulling it back. The best you can really do is ignore the response that comes back, and continue on executing whatever code you would have executed had the operation actually completed. It's important to recognize what this is though; this isn't actually cancelling anything, it's just moving on even though you're not done. It's a very important distinction.
If you have some task and you want it to become cancelled when you ask it to be, you can create a continuation that uses a CancellationToken, such that the continuation will be marked as canceled when the token indicates it should be, or completed when the task completes. You can then use that continuation's Task in place of the actual underlying task for all of your continuations, and the task will be cancelled if the token is cancelled.
public static Task WithCancellation(this Task task, CancellationToken token)
{
    return task.ContinueWith(t => t.GetAwaiter().GetResult(), token);
}

public static Task<T> WithCancellation<T>(this Task<T> task, CancellationToken token)
{
    return task.ContinueWith(t => t.GetAwaiter().GetResult(), token);
}
You can then take a given task, pass in a cancellation token, and get back a task that will have the same result except with altered cancellation semantics.
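Usage could look roughly like this (QueryDatabaseAsync is a hypothetical long-running call, and the snippet is assumed to sit inside an async method):

var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
try
{
    // The underlying query keeps running; we just stop waiting for it once the token fires.
    int rows = await QueryDatabaseAsync().WithCancellation(cts.Token);
}
catch (OperationCanceledException)
{
    // We gave up waiting; the database call may still complete in the background.
}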
You have several other options for your thread cancellation. For example, your thread could make an asynchronous database call and then wait on that and on the cancellation token. For example:
// cmd is a SqlCommand object
// token is a cancellation token
IAsyncResult ia = cmd.BeginExecuteNonQuery(); // starts an async request
WaitHandle[] handles = new WaitHandle[] { token.WaitHandle, ia.AsyncWaitHandle };
var ix = WaitHandle.WaitAny(handles);
if (ix == 0)
{
    // cancellation was requested
}
else if (ix == 1)
{
    // async database operation is done. Harvest the result.
}
There's no need to throw an exception if the operation was canceled. And there's no need for Thread.Abort.
This all becomes much cleaner with Task, but it's essentially the same thing. Task handles common errors and helps you to do a better job fitting all the pieces together.
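For illustration, a Task-based version of the wait above might look roughly like this (a sketch, assuming an async-capable provider; DbCommand.ExecuteNonQueryAsync accepts a CancellationToken, and the snippet sits inside an async method):

// cmd is a SqlCommand object, token is a CancellationToken
try
{
    await cmd.ExecuteNonQueryAsync(token);
    // async database operation is done; harvest the result
}
catch (OperationCanceledException)
{
    // cancellation was requested while waiting
}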
You said:
TPL is just a nice wrapper for a Thread, so I see absolutely nothing a Task can do that a Thread cannot. It's just done differently.
That's true, as far as it goes. After all, C# is just a nice wrapper for an assembly language program, so I see absolutely nothing a C# program can do that I can't do in assembly language. But it's a whole lot easier and faster to do it with C#.
Same goes for the difference between TPL or Tasks, and managing your own threads. You can do all manner of stuff managing your own threads, or you can let the TPL handle all the details and be more likely to get it right.

Is Task.Factory.StartNew() guaranteed to use another thread than the calling thread?

I am starting a new task from a function but I would not want it to run on the same thread. I don't care which thread it runs on as long as it is a different one (so the information given in this question does not help).
Am I guaranteed that the below code will always exit TestLock before allowing Task t to enter it again? If not, what is the recommended design pattern to prevent re-entrancy?
object TestLock = new object();

public void Test(bool stop = false)
{
    Task t;
    lock (this.TestLock)
    {
        if (stop) return;
        t = Task.Factory.StartNew(() => { this.Test(stop: true); });
    }
    t.Wait();
}
Edit: Based on the below answer by Jon Skeet and Stephen Toub, a simple way to deterministically prevent reentrancy would be to pass a CancellationToken, as illustrated in this extension method:
public static Task StartNewOnDifferentThread(this TaskFactory taskFactory, Action action)
{
    return taskFactory.StartNew(action: action, cancellationToken: new CancellationToken());
}
I mailed Stephen Toub - a member of the PFX Team - about this question. He's come back to me really quickly, with a lot of detail - so I'll just copy and paste his text here. I haven't quoted it all, as reading a large amount of quoted text ends up getting less comfortable than vanilla black-on-white, but really, this is Stephen - I don't know this much stuff :) I've made this answer community wiki to reflect that all the goodness below isn't really my content:
If you call Wait() on a Task that's completed, there won't be any blocking (it'll just throw an exception if the task completed with a TaskStatus other than RanToCompletion, or otherwise return as a nop). If you call Wait() on a Task that's already executing, it must block as there’s nothing else it can reasonably do (when I say block, I'm including both true kernel-based waiting and spinning, as it'll typically do a mixture of both). Similarly, if you call Wait() on a Task that has the Created or WaitingForActivation status, it’ll block until the task has completed. None of those is the interesting case being discussed.
The interesting case is when you call Wait() on a Task in the WaitingToRun state, meaning that it’s previously been queued to a TaskScheduler but that TaskScheduler hasn't yet gotten around to actually running the Task's delegate yet. In that case, the call to Wait will ask the scheduler whether it's ok to run the Task then-and-there on the current thread, via a call to the scheduler's TryExecuteTaskInline method. This is called inlining. The scheduler can choose to either inline the task via a call to base.TryExecuteTask, or it can return 'false' to indicate that it is not executing the task (often this is done with logic like...
return SomeSchedulerSpecificCondition() ? false : TryExecuteTask(task);
The reason TryExecuteTask returns a Boolean is that it handles the synchronization to ensure a given Task is only ever executed once). So, if a scheduler wants to completely prohibit inlining of the Task during Wait, it can just be implemented as return false; If a scheduler wants to always allow inlining whenever possible, it can just be implemented as:
return TryExecuteTask(task);
In the current implementation (both .NET 4 and .NET 4.5, and I don’t personally expect this to change), the default scheduler that targets the ThreadPool allows for inlining if the current thread is a ThreadPool thread and if that thread was the one to have previously queued the task.
Note that there isn't arbitrary reentrancy here, in that the default scheduler won’t pump arbitrary threads when waiting for a task... it'll only allow that task to be inlined, and of course any inlining that task in turn decides to do. Also note that Wait won’t even ask the scheduler in certain conditions, instead preferring to block. For example, if you pass in a cancelable CancellationToken, or if you pass in a non-infinite timeout, it won’t try to inline because it could take an arbitrarily long amount of time to inline the task's execution, which is all or nothing, and that could end up significantly delaying the cancellation request or timeout. Overall, TPL tries to strike a decent balance here between wasting the thread that’s doing the Wait'ing and reusing that thread for too much. This kind of inlining is really important for recursive divide-and-conquer problems (e.g. QuickSort) where you spawn multiple tasks and then wait for them all to complete. If such were done without inlining, you’d very quickly deadlock as you exhaust all threads in the pool and any future ones it wanted to give to you.
Separate from Wait, it's also (remotely) possible that the Task.Factory.StartNew call could end up executing the task then and there, iff the scheduler being used chose to run the task synchronously as part of the QueueTask call. None of the schedulers built into .NET will ever do this, and I personally think it would be a bad design for a scheduler, but it's theoretically possible, e.g.:
protected override void QueueTask(Task task)
{
    TryExecuteTask(task);
}
The overload of Task.Factory.StartNew that doesn’t accept a TaskScheduler uses the scheduler from the TaskFactory, which in the case of Task.Factory targets TaskScheduler.Current. This means if you call Task.Factory.StartNew from within a Task queued to this mythical RunSynchronouslyTaskScheduler, it would also queue to RunSynchronouslyTaskScheduler, resulting in the StartNew call executing the Task synchronously. If you’re at all concerned about this (e.g. you’re implementing a library and you don’t know where you’re going to be called from), you can explicitly pass TaskScheduler.Default to the StartNew call, use Task.Run (which always goes to TaskScheduler.Default), or use a TaskFactory created to target TaskScheduler.Default.
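For example, any of the following avoids picking up an ambient TaskScheduler.Current (a sketch, assuming action is an Action delegate already in scope):

// Explicitly target the default (thread pool) scheduler:
Task.Factory.StartNew(action,
    CancellationToken.None,
    TaskCreationOptions.None,
    TaskScheduler.Default);

// Task.Run always uses TaskScheduler.Default (.NET 4.5+):
Task.Run(action);

// Or keep a factory that is bound to the default scheduler:
var factory = new TaskFactory(TaskScheduler.Default);
factory.StartNew(action);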
EDIT: Okay, it looks like I was completely wrong, and a thread which is currently waiting on a task can be hijacked. Here's a simpler example of this happening:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            for (int i = 0; i < 10; i++)
            {
                Task.Factory.StartNew(Launch).Wait();
            }
        }

        static void Launch()
        {
            Console.WriteLine("Launch thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
            Task.Factory.StartNew(Nested).Wait();
        }

        static void Nested()
        {
            Console.WriteLine("Nested thread: {0}",
                Thread.CurrentThread.ManagedThreadId);
        }
    }
}
Sample output:
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 3
Nested thread: 3
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
Launch thread: 4
Nested thread: 4
As you can see, there are lots of times when the waiting thread is reused to execute the new task. This can happen even if the thread has acquired a lock. Nasty re-entrancy. I am suitably shocked and worried :(
Why not just design for it, rather than bend over backwards to ensure it doesn't happen?
The TPL is a red herring here; re-entrancy can happen in any code, provided you can create a cycle, and you don't know for sure what's going to happen 'south' of your stack frame. Synchronous re-entrancy is the best outcome here - at least you can't self-deadlock (as easily).
Locks manage cross-thread synchronisation. They are orthogonal to managing re-entrancy. Unless you are protecting a genuinely single-use resource (probably a physical device, in which case you should probably use a queue), why not just ensure your instance state is consistent so re-entrancy can 'just work'?
(Side thought: are Semaphores reentrant without decrementing?)
You could easily test this by writing a quick app that shares a socket connection between threads/tasks.
The task would acquire a lock before sending a message down the socket and waiting for a response. Once this blocks and becomes idle (IO block), set another task in the same block to do the same. It should block on acquiring the lock; if it does not, and the second task is allowed to pass the lock because it is run by the same thread, then you have a problem.
The solution with new CancellationToken() proposed by Erwin did not work for me; inlining happened anyway.
So I ended up using another condition advised by Jon and Stephen
("... or if you pass in a non-infinite timeout ..."):
Task<TResult> task = Task.Run(func);
task.Wait(TimeSpan.FromHours(1)); // Whatever is enough for task to start
return task.Result;
Note: Omitting exception handling etc here for simplicity, you should mind those in production code.

Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?

(The following items have different goals, but I'm interested in knowing how they are "paused".)
Questions:
Thread.Sleep - does it impact performance on a system? Does it tie up a thread while it waits?
What about Monitor.Wait? What is the difference in the way they "wait"? Do they tie up a thread while they wait?
What about RegisteredWaitHandle? This method accepts a delegate that is executed when a wait handle is signaled. While it's waiting, it doesn't tie up a thread.
So some threads are paused and can be woken by a delegate, while others just wait? Spin?
Can someone please make things clearer?
edit
http://www.albahari.com/threading/part2.aspx
Both Thread.Sleep and Monitor.Wait put the thread in the WaitSleepJoin state:
WaitSleepJoin: The thread is blocked. This could be the result of calling
Thread::Sleep or Thread::Join, of requesting a lock — for example, by
calling Monitor::Enter or Monitor::Wait — or of waiting on a thread
synchronization object such as ManualResetEvent.
RegisteredWaitHandle is obtained by calling RegisterWaitForSingleObject and passing a WaitHandle. Generally all descendants of this class use blocking mechanisms, so calling Wait will again put the thread in WaitSleepJoin (e.g. AutoResetEvent).
Here's another quote from MSDN:
The RegisterWaitForSingleObject method checks the current state of the
specified object's WaitHandle. If the object's state is unsignaled,
the method registers a wait operation. The wait operation is performed
by a thread from the thread pool. The delegate is executed by a worker
thread when the object's state becomes signaled or the time-out
interval elapses.
So a thread in the pool does wait for the signal.
Regarding ThreadPool.RegisterWaitForSingleObject, this does not tie up a thread per registration (pooled or otherwise). You can test this easily: run the following script in LINQPad which calls that method 20,000 times:
static ManualResetEvent _starter = new ManualResetEvent(false);

void Main()
{
    var regs = Enumerable.Range(0, 20000)
        .Select(_ => ThreadPool.RegisterWaitForSingleObject(_starter, Go, "Some Data", -1, true))
        .ToArray();

    Thread.Sleep(5000);
    Console.WriteLine("Signaling worker...");
    _starter.Set();
    Console.ReadLine();
    foreach (var reg in regs) reg.Unregister(_starter);
}

public static void Go(object data, bool timedOut)
{
    Console.WriteLine("Started - " + data);
    // Perform task...
}
If that code tied up 20,000 threads for the duration of the 5-second "wait", it couldn't possibly work.
Edit - in response to:
"this is a proof. but is there still a single thread which checks for
signals only ? in the thread pool ?"
This is an implementation detail. Yes, it could be implemented with a single thread that offloads the callbacks to the managed thread pool, although there's no guarantee of this. Wait handles are ultimately managed by operating system, which will most likely trigger the callbacks, too. It might use one thread (or a small number of threads) in its internal implementation. Or with interrupts, it might not block a single thread. It might even vary according to the operating system version. This is an implementation detail that's of no real relevance to us.
While it's true RegisterWaitForSingleObject creates wait threads, not every call creates one.
From MSDN:
New wait threads are created automatically when required
From Raymond Chen's blog:
...instead of costing a whole thread, it costs something closer to (but not exactly) 1/64 of a thread
So using RegisterWaitForSingleObject is generally preferable to creating your own wait threads.
Thread.Sleep and RegisteredWaitHandle work at different levels. Let me try and clear it up:
Processes have multiple threads, which execute simultaneously (depending on the OS scheduler). If a thread calls Thread.Sleep or Monitor.Wait, it doesn't spin - it is put into the WaitSleepJoin state, and the CPU is given to other threads.
Now, when you have many simultaneous work items, you use a thread pool - a mechanism which creates several threads and uses its own understanding of work items to dispatch calls to its threads. In this model, worker threads are called from the thread pool dispatcher to do some work, and then return back to the pool. If a worker thread calls a blocking operation - like Thread.Sleep or Monitor.Wait - then this thread is "tied up", since the thread pool dispatcher can't use it for additional work items.
I'm not familiar with the actual API, but I think RegisteredWaitHandle would tell the thread pool dispatcher to call a worker thread when needed - and your own thread is not "tied up", and can continue its work or return to the thread pool.
ThreadPool.RegisterWaitForSingleObject ultimately calls QueueUserAPC in its native implementation. See the Rotor sources (sscli20\clr\src\vm\win32threadpool.cpp(1981)). Unlike Thread.Sleep or Wait, your thread will not be put to a halt when you use RegisterWaitForSingleObject.
Instead, a FIFO queue of user-mode callbacks is registered for this thread, and they are called when the thread is in an alertable state. That means you can continue to work, and when your thread is blocked the OS will work on the registered callbacks, giving your thread the opportunity to do something meaningful while it is waiting.
Edit1:
To complete the analysis: on the thread that called RegisterWaitForSingleObject, a callback is invoked when the thread is in an alertable state. Once this happens, that thread executes a CLR callback which registers another callback that is processed by a thread pool callback wait thread, whose only job is to wait for signaled callbacks. This thread pool callback wait thread then checks at regular intervals for signaled callbacks.
This wait thread finally calls QueueUserWorkItem so that the signalled callback is executed on a thread pool thread.

C# thread interruption stopped working

I don't know why, but I can no longer interrupt my own thread.
thread = new Thread(new ParameterizedThreadStart(this.doWork));
thread.Start(param);
...
thread.Interrupt();

// in doWork()
try
{
    ...
}
catch (System.Threading.ThreadInterruptedException)
{
    // it never hits here. it used to
}
I searched and I don't have any other catch in my code; this is the only catch (System.Threading.ThreadInterruptedException). So what is going on? Using the debugger, I can see my code run through thread.Interrupt(). If I do thread.Abort() I will catch a System.Threading.ThreadAbortException. Why is it catching that and not ThreadInterruptedException?
From BOL:
Interrupts a thread that is in the WaitSleepJoin thread state.
If this thread is not currently blocked in a wait, sleep, or join state, it will be interrupted when it next begins to block.
ThreadInterruptedException is thrown in the interrupted thread, but not until the thread blocks. If the thread never blocks, the exception is never thrown, and thus the thread might complete without ever being interrupted.
BTW, you might be better off using the BackgroundWorker Class which supports cancelling.
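A minimal sketch of BackgroundWorker-style cancellation (System.ComponentModel; the work loop is illustrative):

var worker = new BackgroundWorker { WorkerSupportsCancellation = true };

worker.DoWork += (sender, e) =>
{
    var bw = (BackgroundWorker)sender;
    while (!bw.CancellationPending)
    {
        // do one slice of work
    }
    e.Cancel = true;                    // report that we stopped due to cancellation
};

worker.RunWorkerCompleted += (sender, e) =>
{
    if (e.Cancelled) { /* run cleanup / follow-up functions here */ }
};

worker.RunWorkerAsync();
// ... later, from the controlling thread:
worker.CancelAsync();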
From acidzombie24's comment to another answer:
So .abort is a better option? What i want to do is kill the thread but have it exist and call a few functions instead of outright death
Something like an event would be better.
Assuming you want to be able to signal each thread separately, before each worker thread is started create an AutoResetEvent and pass it to the thread.
When you want to interrupt the thread call Set on the event. In the worker thread check the state of the event regularly:
if (theEvent.WaitOne(TimeSpan.Zero))
{
    // Handle the interruption.
}
("Regularly" needs to be defined by the requirements: overhead of checking vs. latency of interruption.)
To have a master interrupt, to signal all workers, use a ManualResetEvent which will stay signalled, and keep interrupting threads when they check, until explicitly Reset.
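A rough sketch combining the per-worker and master events (the names and the work loop are illustrative):

static ManualResetEvent masterInterrupt = new ManualResetEvent(false);

static void WorkerLoop(AutoResetEvent myInterrupt)
{
    while (true)
    {
        // ... do one unit of work ...

        // Poll both the per-worker and the master event without blocking.
        int signalled = WaitHandle.WaitAny(
            new WaitHandle[] { myInterrupt, masterInterrupt }, TimeSpan.Zero);
        if (signalled != WaitHandle.WaitTimeout)
        {
            // Interrupted: run the shutdown functions, then exit gracefully.
            return;
        }
    }
}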
