Shutting down a multithreaded application - C#

I'm trying to write a ThreadManager for my C# application. I create several threads:
One thread for my text writer.
One thread that monitors some statistics.
Multiple threads to perform a large sequence of calculations (up to 4 threads per core and I run my app on a 2x quad core server).
My application normally runs for up to 24 hours at a time, so all the threads are created at the beginning and persist for the entire time the app runs.
I want a single place where I "register" all of my threads, and when the application is shutting down I simply invoke a method that goes through all of the registered threads and shuts them down.
For that purpose I have devised the following class:
public class ThreadManager
{
    private static Object _sync = new Object();
    private static ThreadManager _instance = null;
    private static List<Thread> _threads;

    private ThreadManager()
    {
        _threads = new List<Thread>();
    }

    public static ThreadManager Instance
    {
        get
        {
            lock (_sync)
            {
                if (_instance == null)
                {
                    _instance = new ThreadManager();
                }
            }
            return _instance;
        }
    }

    public void AddThread(Thread t)
    {
        lock (_sync)
        {
            _threads.Add(t);
        }
    }

    public void Shutdown()
    {
        lock (_sync)
        {
            foreach (Thread t in _threads)
            {
                t.Abort(); // does this also abort threads that are currently blocking?
            }
        }
    }
}
I want to ensure that all of my threads are killed so the application can close properly; shutting down in the middle of some computation is fine too. Should I be aware of anything here? Is this approach good for my situation?

If you set the threads to background threads, they will be killed when the application is shut down.
myThread.IsBackground = true;
Obviously, if you need the threads to finish their work before shutdown, this is not the solution you want.

Aborting threads is what you do when all else fails. It is a dangerous thing to do which you should only do as a last resort. The correct way to do this is to make your threading logic so that every worker thread responds quickly and correctly when the main thread gives it the command to shut itself down.
Coincidentally, this is the subject of my blog this week.
http://blogs.msdn.com/ericlippert/archive/2010/02/22/should-i-specify-a-timeout.aspx
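To make that concrete, here is a minimal sketch of one cooperative approach: the main thread sets a stop signal and then joins the worker, and the worker polls the signal between units of work. The Worker class, DoOneCalculation and the ManualResetEvent-based signal are invented for illustration; they are not code from the question or the linked post.
using System.Threading;

public class Worker
{
    private readonly ManualResetEvent _stopSignal = new ManualResetEvent(false);
    private readonly Thread _thread;

    public Worker()
    {
        _thread = new Thread(Run);
    }

    public void Start()
    {
        _thread.Start();
    }

    public void Stop()
    {
        _stopSignal.Set(); // ask the worker to finish its current unit of work
        _thread.Join();    // wait until it actually has
    }

    private void Run()
    {
        // WaitOne(0, false) polls the signal without blocking, so the worker
        // checks it between units of work and can exit promptly when asked.
        while (!_stopSignal.WaitOne(0, false))
        {
            DoOneCalculation();
        }
        // flush buffers, close handles, etc. before returning
    }

    private void DoOneCalculation()
    {
        // placeholder for one slice of the real computation
    }
}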

What if AddThread is called while your Shutdown is running?
When shutdown finishes, the thread waiting in AddThread will add a new thread to the collection. This could lead to hangs in your app.
Add a bool flag that you only ever set in Shutdown to protect against this.
bool shouldGoAway = false;

public void AddThread(Thread t)
{
    lock (_sync)
    {
        if (!shouldGoAway)
            _threads.Add(t);
    }
}

public void Shutdown()
{
    lock (_sync)
    {
        shouldGoAway = true;
        foreach (Thread t in _threads)
        {
            t.Abort(); // does this also abort threads that are currently blocking?
        }
    }
}
Also, you should not use static members for the lock and the list - there is no reason for that, since you have your singleton instance.
.Abort() does not abort threads that are blocking in unmanaged space. So if you do that you need to use some other mechanism.

The only specific issue I know about is this one: http://www.bluebytesoftware.com/blog/2007/01/30/MonitorEnterThreadAbortsAndOrphanedLocks.aspx
But I'd avoid having to resort to a design like this. You could force each of your threads to check some flag regularly that it's time to shut down, and when shutting down, set that flag and wait for all threads to finish (with Join()). It feels a bit more like controlled shutdown that way.
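For illustration, here is a sketch of that flag-plus-Join idea applied to the question's ThreadManager. The class and member names below are invented; treat it as an outline rather than a drop-in replacement.
using System.Collections.Generic;
using System.Threading;

public class CooperativeThreadManager
{
    private readonly object _sync = new object();
    private readonly List<Thread> _threads = new List<Thread>();
    private volatile bool _stopRequested;

    // Workers poll this between units of work instead of being aborted.
    public bool StopRequested
    {
        get { return _stopRequested; }
    }

    public void AddThread(Thread t)
    {
        lock (_sync)
        {
            _threads.Add(t);
        }
    }

    public void Shutdown()
    {
        _stopRequested = true;              // tell every worker to finish up
        List<Thread> snapshot;
        lock (_sync)
        {
            snapshot = new List<Thread>(_threads);
        }
        foreach (Thread t in snapshot)
        {
            t.Join();                       // wait for each one to exit on its own
        }
    }
}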

If you don't care about the worker thread state then you can loop through _thread and abort:
void DieDieDie()
{
    foreach (Thread thread in _thread)
    {
        thread.Abort();
        thread.Join(); // if you need to wait for the thread to die
    }
}
In your case you can probably just abort them all and shut down, as they're just doing calculations. But if you need to wait for a database write operation or need to close an unmanaged resource, then you either need to catch the ThreadAbortException or signal the threads to shut themselves down gracefully.
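If you do go the Abort route and still need that cleanup, a rough sketch of catching the ThreadAbortException looks like this (the worker body is invented; note that the exception is automatically re-raised when the catch block ends unless you call Thread.ResetAbort):
private static void WorkerBody()
{
    try
    {
        while (true)
        {
            // long-running calculation
        }
    }
    catch (ThreadAbortException)
    {
        // finish the pending database write / release the unmanaged resource here;
        // the ThreadAbortException is re-raised when this block ends and the thread dies.
    }
}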

You want deferred thread cancellation, which basically means that the threads terminate themselves, as opposed to a thread manager cancelling threads asynchronously, which is much more ill-defined and dangerous.
If you want to handle thread cancellation more elegantly than immediate termination, you can use signal handlers that are triggered by events outside the thread - by your thread manager, perhaps.

Related

Is thread sleeping in while(true) loop a bad way to reduce cpu usage? [duplicate]

I have a unit of work I'm doing in a thread (not the main thread). Under certain circumstances I would like to put this thread to sleep for 10 seconds. Is Thread.Sleep(10000) the most resource efficient way to do this?
Is Thread.Sleep(10000) the most resource efficient way to do this?
Yes, in the sense that it is not busy-waiting but giving up the CPU.
But it is wasting a Thread. You shouldn't scale this to many sleeping threads.
As no-one else has mentioned it...
If you want another thread to be able to wake up your "sleeping" thread, you may well want to use Monitor.Wait instead of Thread.Sleep:
private readonly object sharedMonitor;
private bool shouldStop;

public void Stop()
{
    lock (sharedMonitor)
    {
        shouldStop = true;
        Monitor.Pulse(sharedMonitor);
    }
}

public void Loop()
{
    while (true)
    {
        // Do some work...

        lock (sharedMonitor)
        {
            if (shouldStop)
            {
                return;
            }
            Monitor.Wait(sharedMonitor, 10000);
            if (shouldStop)
            {
                return;
            }
        }
    }
}
Note that we only access shouldStop within the lock, so there aren't any memory model concerns.
You may want to loop round waiting until you've really slept for 10 seconds, just in case you get spurious wake-ups - it depends on how important it is that you don't do the work again for another 10 seconds. (I've never knowingly encountered spurious wakes, but I believe they're possible.)
Make a habit of using Thread.CurrentThread.Join(timeout) instead of Thread.Sleep.
The difference is that Join will still do some message pumping (e.g. GUI & COM).
Most of the time it doesn't matter but it makes life easier if you ever need to use some COM or GUI object in your application.
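For example, the substitution is a one-liner (a sketch; the ten-second value matches the question above):
// instead of Thread.Sleep(10000):
Thread.CurrentThread.Join(TimeSpan.FromSeconds(10)); // returns false when the timeout elapses,
                                                     // allowing some message pumping meanwhile (per the note above)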
This will process something every x seconds without dedicating a thread to it.
I'm not sure how this compares with running your own thread, given that a task is created every two seconds:
public void LogProcessor()
{
    if (_isRunning)
    {
        WriteNewLogsToDisk();

        // Come back in 2 seconds
        var t = Task.Run(async delegate
        {
            await Task.Delay(2000);
            LogProcessor();
        });
    }
}
From a resource-efficiency standpoint, yes.
For design, it depends on why the pause happens. You want your work to be autonomous, so if the thread pauses because it knows it has to wait, put the pause in the thread code itself using the static Thread.Sleep method. If the pause happens because of some other external event, then you need to control the thread's processing from outside: have the thread owner keep a reference to the thread and signal it to pause.
Yes. There's no other efficient or safe way to sleep the thread.
However, if you're doing some work in a loop, you may want to use Sleep in a loop to make aborting the thread easier, in case you want to cancel the work.
Here's an example:
volatile bool exit = false; // set from another thread to request cancellation
...
void MyThread()
{
    while (!exit)
    {
        // do your stuff here...

        // sleep for 10 seconds, checking the exit flag every 10 ms
        int sc = 0;
        while (sc < 1000 && !exit) { Thread.Sleep(10); sc++; }
    }
}

Raise event from multiple worker threads?

Using C# to create a windows service application. I have a main object that creates worker threads to periodically conduct various tasks. Each worker completes a specific task, waits for a time, then repeats.
If one of those tasks should fail, I want that thread to alert the main to log that a task failed and then to exit.
I had thought about using a ManualResetEvent where Set would be called from each worker (and main would loop on checking it). Problem is, multiple workers could fail simultaneously and attempt to Set() the event at the same time.
Is there a thread-safe way to handle alerting from multiple worker threads? Only one alert is required, I don't need to handle any more than the first one received.
Why not use double-checked locking in your setter / event handler?
private static readonly object Locker = new object();
private bool _closing = false;

private void YourErrorHandler(object sender, EventArgs args)
{
    if (!_closing)
        lock (Locker)
            if (!_closing)
            {
                _closing = true;
                // Whatever you need to do here
            }
}
If you need cross-process synchronization, you will need to use a Mutex or something else, but I hope you get the idea.
Once one of the threads fails, you want to shut down the whole thing, correct? Once that happens, when you join all of the threads, you can check their status and report an error for each individual one that failed. Something like:
while(eventNotSet) sleep();
foreach(thread)
{
    thread.Join();
    checkStatus(thread);
}
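A slightly more concrete sketch of that sequence (all of these names are invented for illustration): each worker records its own failure and sets a shared ManualResetEvent, and the main thread waits on the event, then joins everything and logs the workers that failed.
using System;
using System.Collections.Generic;
using System.Threading;

static class FailureSignalSketch
{
    static readonly ManualResetEvent FailureSignal = new ManualResetEvent(false);
    static readonly Dictionary<int, bool> FailedByThreadId = new Dictionary<int, bool>();
    static readonly object Sync = new object();

    // Each worker wraps its periodic task in a try/catch like this.
    static void WorkerBody()
    {
        try
        {
            // do the periodic task, wait, repeat...
        }
        catch (Exception)
        {
            lock (Sync)
            {
                FailedByThreadId[Thread.CurrentThread.ManagedThreadId] = true;
            }
            FailureSignal.Set(); // several workers may call Set at once; that is fine
        }
    }

    // The main thread blocks here instead of polling in a loop.
    static void WaitForFailureAndShutDown(List<Thread> workers)
    {
        FailureSignal.WaitOne();                // the first failure wakes us up
        // (signal the surviving workers to stop here, then join them all)
        foreach (Thread t in workers)
        {
            t.Join();
            bool failed;
            lock (Sync)
            {
                FailedByThreadId.TryGetValue(t.ManagedThreadId, out failed);
            }
            if (failed)
            {
                Console.WriteLine("Worker {0} failed", t.ManagedThreadId);
            }
        }
    }
}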

Shutdown a thread waiting on a timer in a Windows Service

I have a .Net 4.0 C# Windows Service which spawns a number of thread which run continuously. Each thread runs at different intervals based on a timer.
I want to shut these threads down gracefully when the service is shutdown.
Since some of these threads may be waiting for hours to do their processing, I need to wake them up and tell them to exit.
I could create a loop in these threads to periodically check some global variable at an interval shorter than their processing interval, but I would prefer a signaling mechanism that causes the timer to pop prematurely.
How can I wake these threads waiting on a timer without using Thread.Abort or Thread.Interrupt?
I'm going to assume that you have a good reason for using independently managed threads to do the work (as opposed to just doing it in the timer's event). If so, you want to use WaitHandle.WaitAny() and examine the return value to determine which WaitHandle caused the thread to proceed:
public class ExampleService
{
    private static readonly AutoResetEvent TimerLatch = new AutoResetEvent(false);
    private static readonly AutoResetEvent ShutdownLatch = new AutoResetEvent(false);
    private static readonly Timer MyTimer = new Timer(TimerTick);

    public void Start()
    {
        var t = new Thread(DoLoop);
        t.Start();
        MyTimer.Change(0, 500);
    }

    public void Stop()
    {
        ShutdownLatch.Set();
    }

    private static void TimerTick(object state)
    {
        TimerLatch.Set();
    }

    private static void DoLoop()
    {
        if (WaitHandle.WaitAny(new[] { TimerLatch, ShutdownLatch }) == 0)
        {
            // The timer ticked, do something timer related
        }
        else
        {
            // We are shutting down, do whatever cleanup you need
        }
    }
}
You can use WaitHandle.WaitOne with a timeout and use events:
if (shutDownEvent.WaitOne(_timeout, false))
{
    // Shutdown
}
else
{
    // timeout, so do work
}
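Wrapped in a worker loop it might look like this (a sketch; shutDownEvent, _timeout, Stop and DoWork are assumed members following the naming in the snippet above):
private readonly ManualResetEvent shutDownEvent = new ManualResetEvent(false);
private readonly TimeSpan _timeout = TimeSpan.FromHours(1);

private void WorkerLoop()
{
    while (true)
    {
        if (shutDownEvent.WaitOne(_timeout, false))
        {
            return;        // Stop() called shutDownEvent.Set(): exit gracefully
        }
        DoWork();          // the timeout elapsed: run the periodic job
    }
}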
Depending on your scenario it might be an option to make the threads you spawn background threads so you don't have to worry about explicitly shutting them down.
Thread thread = new Thread(DoSomething)
{
    IsBackground = true
};
Setting IsBackground to true makes the spawned thread a background thread which won't stop your service from terminating.
From MSDN:
A thread is either a background thread or a foreground thread.
Background threads are identical to foreground threads, except that
background threads do not prevent a process from terminating. Once all
foreground threads belonging to a process have terminated, the common
language runtime ends the process. Any remaining background threads
are stopped and do not complete.
This of course is only an option if whatever operation you are performing may be interrupted and doesn't have to shut down gracefully (e.g. do some critical cleanup work). Otherwise, as both other answers suggest, you should use a WaitHandle and signal from the main thread.

What are Monitor.Pulse And Monitor.Wait advantages?

I'm kinda new to concurrent programming, and trying to understand the benefits of using Monitor.Pulse and Monitor.Wait.
MSDN's example is the following:
class MonitorSample
{
    const int MAX_LOOP_TIME = 1000;
    Queue m_smplQueue;

    public MonitorSample()
    {
        m_smplQueue = new Queue();
    }

    public void FirstThread()
    {
        int counter = 0;
        lock (m_smplQueue)
        {
            while (counter < MAX_LOOP_TIME)
            {
                // Wait, if the queue is busy.
                Monitor.Wait(m_smplQueue);
                // Push one element.
                m_smplQueue.Enqueue(counter);
                // Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
                counter++;
            }
        }
    }

    public void SecondThread()
    {
        lock (m_smplQueue)
        {
            // Release the waiting thread.
            Monitor.Pulse(m_smplQueue);
            // Wait in the loop, while the queue is busy.
            // Exit on the time-out when the first thread stops.
            while (Monitor.Wait(m_smplQueue, 1000))
            {
                // Pop the first element.
                int counter = (int)m_smplQueue.Dequeue();
                // Print the first element.
                Console.WriteLine(counter.ToString());
                // Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
            }
        }
    }

    // Return the number of queue elements.
    public int GetQueueCount()
    {
        return m_smplQueue.Count;
    }

    static void Main(string[] args)
    {
        // Create the MonitorSample object.
        MonitorSample test = new MonitorSample();
        // Create the first thread.
        Thread tFirst = new Thread(new ThreadStart(test.FirstThread));
        // Create the second thread.
        Thread tSecond = new Thread(new ThreadStart(test.SecondThread));
        // Start the threads.
        tFirst.Start();
        tSecond.Start();
        // Wait for both threads to end.
        tFirst.Join();
        tSecond.Join();
        // Print the number of queue elements.
        Console.WriteLine("Queue Count = " + test.GetQueueCount().ToString());
    }
}
and I can't see the benefit of using Wait and Pulse instead of this:
public void FirstThreadTwo()
{
    int counter = 0;
    while (counter < MAX_LOOP_TIME)
    {
        lock (m_smplQueue)
        {
            m_smplQueue.Enqueue(counter);
            counter++;
        }
    }
}

public void SecondThreadTwo()
{
    while (true)
    {
        lock (m_smplQueue)
        {
            int counter = (int)m_smplQueue.Dequeue();
            Console.WriteLine(counter.ToString());
        }
    }
}
Any help is most appreciated.
Thanks
To describe "advantages", a key question is "over what?". If you mean "in preference to a hot-loop", well, CPU utilization is obvious. If you mean "in preference to a sleep/retry loop" - you can get much faster response (Pulse doesn't need to wait as long) and use lower CPU (you haven't woken up 2000 times unnecessarily).
Generally, though, people mean "in preference to Mutex etc".
I tend to use these extensively, even in preference to mutex, reset-events, etc; reasons:
they are simple, and cover most of the scenarios I need
they are relatively cheap, since they don't need to go all the way to OS handles (unlike Mutex etc, which is owned by the OS)
I'm generally already using lock to handle synchronization, so chances are good that I already have a lock when I need to wait for something
it achieves my normal aim - allowing 2 threads to signal completion to each-other in a managed way
I rarely need the other features of Mutex etc (such as being inter-process)
There is a serious flaw in your snippet: SecondThreadTwo() will fail badly when it tries to call Dequeue() on an empty queue. You probably got it to work by having FirstThreadTwo() execute a fraction of a second before the consumer thread, probably by starting it first. That's an accident, one that will stop working after running these threads for a while or starting them with a different machine load. This can accidentally work error-free for quite a while, which makes the occasional failure very hard to diagnose.
There is no way to write a locking algorithm that blocks the consumer until the queue becomes non-empty with just the lock statement. A busy loop that constantly enters and exits the lock works but is a very poor substitute.
Writing this kind of code is best left to the threading gurus, it is very hard to prove it works in all cases. Not just absence of failure modes like this one or threading races. But also general fitness of the algorithm that avoids deadlock, livelock and thread convoys. In the .NET world, the gurus are Jeffrey Richter and Joe Duffy. They eat locking designs for breakfast, both in their books and their blogs and magazine articles. Stealing their code is expected and accepted. And partly entered into the .NET framework with the additions in the System.Collections.Concurrent namespace.
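For reference, the same producer/consumer shape built on BlockingCollection<T> from System.Collections.Concurrent (.NET 4) looks roughly like this sketch; the collection supplies the block-until-non-empty behaviour that a bare lock cannot express (the class name and the 1000-item loop are just illustrative):
using System;
using System.Collections.Concurrent;

class QueueSample
{
    private readonly BlockingCollection<int> _queue = new BlockingCollection<int>();

    public void Producer()
    {
        for (int counter = 0; counter < 1000; counter++)
        {
            _queue.Add(counter);       // wakes a blocked consumer if there is one
        }
        _queue.CompleteAdding();       // lets the consumer's foreach finish
    }

    public void Consumer()
    {
        // GetConsumingEnumerable blocks while the collection is empty and ends
        // once CompleteAdding has been called and the collection drains.
        foreach (int counter in _queue.GetConsumingEnumerable())
        {
            Console.WriteLine(counter);
        }
    }
}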
It is a performance improvement to use Monitor.Pulse/Wait, as you have guessed. Acquiring a lock is a relatively expensive operation. By using Monitor.Wait, your thread will sleep until some other thread wakes it up with Monitor.Pulse.
You'll see the difference in Task Manager, because with the lock-only version one processor core will be pegged even while nothing is in the queue.
The advantages of Pulse and Wait are that they can be used as building blocks for all other synchronization mechanisms including mutexes, events, barriers, etc. There are things that can be done with Pulse and Wait that cannot be done with any other synchronization mechanism in the BCL.
All of the interesting stuff happens inside the Wait method. Wait will exit the critical section and put the thread in the WaitSleepJoin state by placing it in the waiting queue. Once Pulse is called then the next thread in the waiting queue moves to the ready queue. Once the thread switches to the Running state it reenters the critical section. This is important to repeat another way. Wait will release the lock and reacquire it in an atomic fashion. No other synchronization mechanism has this feature.
The best way to envision this is to try to replicate the behavior with some other strategy and then see what can go wrong. Let us try this exercise with a ManualResetEvent, since the Set and WaitOne methods seem like they may be analogous. Our first attempt might look like this.
void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
        while (!CheckSomeCondition())
        {
            mre.WaitOne();
        }
        // Do stuff.
    }
}
It should be easy to see that this code can deadlock. So what happens if we try this naive fix?
void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
    }
    while (!CheckSomeCondition())
    {
        mre.WaitOne();
    }
    lock (mre)
    {
        // Do stuff.
    }
}
Can you see what can go wrong here? Since we did not atomically reenter the lock after the wait condition was checked, another thread could get in and invalidate the condition. In other words, another thread could do something that causes CheckSomeCondition to start returning false again before the following lock was reacquired. That can definitely cause a lot of weird problems if your second block of code requires that the condition be true.
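For contrast, here is roughly the same shape written with Monitor.Wait and Monitor.Pulse as a sketch, where the release, the wake-up and the reacquisition all happen inside Wait (CheckSomeCondition is the same placeholder as in the snippets above; the gate field is invented):
private readonly object gate = new object();

void FirstThread()
{
    lock (gate)
    {
        // Do stuff that makes the condition true.
        Monitor.Pulse(gate);        // wake one waiter; it re-enters the lock only
                                    // after we leave this critical section
    }
}

void SecondThread()
{
    lock (gate)
    {
        while (!CheckSomeCondition())
        {
            Monitor.Wait(gate);     // releases the lock and re-acquires it on wake-up
                                    // as a single operation, so the condition is always
                                    // re-checked while the lock is held
        }
        // Do stuff that relies on the condition being true.
    }
}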

Proper way to have an endless worker thread?

I have an object that requires a lot of initialization (1-2 seconds on a beefy machine), though once it is initialized it only takes about 20 milliseconds to do a typical "job".
In order to prevent it from being re-initialized every time an app wants to use it (which could be 50 times a second, or not at all for minutes, in typical usage), I decided to give it a job queue and have it run on its own thread, checking to see if there is any work for it in the queue. However, I'm not entirely sure how to make a thread that runs indefinitely with or without work.
Here's what I have so far; any critique is welcome:
private void DoWork()
{
    while (true)
    {
        if (JobQue.Count > 0)
        {
            // do work on JobQue.Dequeue()
        }
        else
        {
            System.Threading.Thread.Sleep(50);
        }
    }
}
Afterthought: I was thinking I may need to kill this thread gracefully instead of letting it run forever, so I think I will add a Job type that tells the thread to end. Any thoughts on how to end a thread like this are also appreciated.
You need to lock anyway, so you can Wait and Pulse:
while(true) {
    SomeType item;
    lock(queue) {
        while(queue.Count == 0) {
            Monitor.Wait(queue); // releases lock, waits for a Pulse,
                                 // and re-acquires the lock
        }
        item = queue.Dequeue(); // we have the lock, and there's data
    }
    // process item **outside** of the lock
}
with add like:
lock(queue) {
    queue.Enqueue(item);
    // if the queue was empty, the worker may be waiting - wake it up
    if(queue.Count == 1) { Monitor.PulseAll(queue); }
}
You might also want to look at this question, which limits the size of the queue (blocking if it is too full).
You need a synchronization primitive, like a WaitHandle (look at the static methods). This way you can 'signal' the worker thread that there is work. It checks the queue and keeps on working until the queue is empty, at which time it waits for the handle to be signaled again.
Make one of the job items be a quit command too, so that you can signal the worker thread when it's time to exit the thread
In most cases, I've done this quite similar to how you've set up -- but not in the same language. I had the advantage of working with a data structure (in Python) which will block the thread until an item is put into the queue, negating the need for the sleep call.
If .NET provides a class like that, I'd look into using it. A thread blocking is much better than a thread spinning on sleep calls.
The job you can pass could be as simple as a "null"; if the code receives a null, it knows it's time to break out of the while and go home.
If you don't really need to have the thread exit (and just want it to keep from keeping your application running) you can set Thread.IsBackground to true and it will end when all non background threads end. Will and Marc both have good solutions for handling the queue.
Grab the Parallel Framework. It has a BlockingCollection<T> which you can use as a job queue. How you'd use it is:
Create the BlockingCollection<T> that will hold your tasks/jobs.
Create some Threads which have a never-ending loop (while(true){ // get job off the queue)
Set the threads going
Add jobs to the collection when they come available
The threads will be blocked until an item appears in the collection. Whoever's turn it is will get it (depends on the CPU). I'm using this now and it works great.
It also has the advantage of relying on MS to write that particularly nasty bit of code where multiple threads access the same resource. And whenever you can get somebody else to write that you should go for it. Assuming, of course, they have more technical/testing resources and combined experience than you.
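A rough sketch of those four steps (the JobRunner class, the use of Action as the job type, and the worker count are all invented details, not part of the answer above):
using System;
using System.Collections.Concurrent;
using System.Threading;

public class JobRunner
{
    private readonly BlockingCollection<Action> _jobs = new BlockingCollection<Action>();

    public void Start(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            var t = new Thread(() =>
            {
                // Never-ending loop: TryTake blocks until a job arrives or
                // CompleteAdding is called, so the thread burns no CPU while idle.
                while (!_jobs.IsCompleted)
                {
                    Action job;
                    if (_jobs.TryTake(out job, Timeout.Infinite))
                    {
                        job();
                    }
                }
            }) { IsBackground = true };
            t.Start();
        }
    }

    public void Enqueue(Action job)
    {
        _jobs.Add(job);
    }

    public void Shutdown()
    {
        _jobs.CompleteAdding(); // blocked workers wake up and their loops exit
    }
}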
I've implemented a background-task queue without using any kind of while loop, or pulsing, or waiting, or, indeed, touching Thread objects at all. And it seems to work. (By which I mean it's been in production environments handling thousands of tasks a day for the last 18 months without any unexpected behavior.) It's a class with two significant properties, a Queue<Task> and a BackgroundWorker. There are three significant methods, abbreviated here:
private void BackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
    if (TaskQueue.Count > 0)
    {
        TaskQueue.Peek().Execute();
    }
}

private void BackgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    lock (TaskQueue)
    {
        TaskQueue.Dequeue();
    }
    if (TaskQueue.Count > 0 && !BackgroundWorker.IsBusy)
    {
        BackgroundWorker.RunWorkerAsync();
    }
}

public void Enqueue(Task t)
{
    lock (TaskQueue)
    {
        TaskQueue.Enqueue(t);
    }
    if (!BackgroundWorker.IsBusy)
    {
        BackgroundWorker.RunWorkerAsync();
    }
}
It's not that there's no waiting and pulsing. But that all happens inside the BackgroundWorker. This just wakes up whenever a task is dropped in the queue, runs until the queue is empty, and then goes back to sleep.
I am far from an expert on threading. Is there a reason to mess around with System.Threading for a problem like this if using a BackgroundWorker will do?
