I have had to set a fixed timeout for a particular COM method call from a service that we have (which is written in C#). Not having used the System.Threading namespace for anything other than Thread.Sleep, I have had a play and have come up with a working prototype:
bool _comCallSuccessful = false;
bool _timedOut = false;
private void MakeACOMCallThatCouldTakeALongTime()
{
Thread.Sleep(2500);
_comCallSuccessful = true;
}
private void CheckForOneSecondTimeOut()
{
Thread.Sleep(1000);
_timedOut = true;
}
private void ThreadTester()
{
Thread t1 = new Thread(new ThreadStart(MakeACOMCallThatCouldTakeALongTime));
Thread t2 = new Thread(new ThreadStart(CheckForOneSecondTimeOut));
t1.Start();
t2.Start();
while (!_timedOut && !_comCallSuccessful) { }
if (_comCallSuccessful)
{
Console.WriteLine("Finished!");
}
else
{
t1.Abort();
Console.WriteLine("Timed out!");
}
Console.ReadLine();
}
Practically speaking, are there any problems with this approach? For instance, would there be a problem if I were to abort the thread that makes the COM method call (perhaps in terms of cleaning up used resources, etc)?
Thread.Abort() is always a problem.
Do you know anything about the COM server? Does it run in-process, out of process or remotely? If the COM server is buggy and you actually need to terminate it, consider wrapping the call in a sacrificial process (or at least a separate AppDomain) which can be terminated safely (and perhaps you can do some cheating and terminate the offending COM app as well). Don't abort threads in your own process if you can help it.
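For illustration, here is a rough sketch of the sacrificial-process idea. ComCallHelper.exe is a hypothetical helper (not from the question): a small console app that performs the COM call and reports success through its exit code, so the only thing we kill on timeout is that process:
private static bool RunComCallInSacrificialProcess(TimeSpan timeout)
{
    // ComCallHelper.exe is a made-up name for a helper that makes the COM call and exits.
    using (var helper = System.Diagnostics.Process.Start("ComCallHelper.exe"))
    {
        if (helper.WaitForExit((int)timeout.TotalMilliseconds))
            return helper.ExitCode == 0; // the helper signals success/failure via its exit code

        helper.Kill(); // only the sacrificial process dies; our own threads are untouched
        return false;
    }
}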
Yeah, big problem: it won't work in many cases. If your COM thread is busy in native code when you call Abort(), nothing will happen; it just sets a flag, so when the thread comes back into managed code it will pop the ThreadAbortException. There isn't a 100% reliable way to abort a call to unmanaged code. You can try killing the underlying OS thread, but the CLR won't respond well to that and you'll likely destabilize the process.
I must add, to what the other commenters have already mentioned, that waiting like this:
while (!_timedOut && !_comCallSuccessful) { }
is wrong, since it makes your CPU burn cycles pointlessly.
You'd be better off using System.Threading.EventWaitHandle:
EventWaitHandle _comCallSuccessful = new ManualResetEvent(false);
EventWaitHandle _timedOut = new ManualResetEvent(false);
private void MakeACOMCallThatCouldTakeALongTime() {
Thread.Sleep(2500);
_comCallSuccessful.Set();
}
private void CheckForOneSecondTimeOut() {
Thread.Sleep(1000);
_timedOut.Set();
}
private void ThreadTester() {
/* thread starting*/
var handles = new WaitHandle[]{_comCallSuccessful, _timedOut};
int indexFirstSet = WaitHandle.WaitAny(handles);
if (indexFirstSet == 0) // _comCallSuccessful
{
Console.WriteLine("Finished!");
}
else
{
t1.Abort();
Console.WriteLine("Timed out!");
}
}
If there's nothing to do on your main thread, you may start only one thread and use _comCallSuccessful.WaitOne(timeout), which returns true if the event was Set() before the timeout expired.
And in any case, you'd better have an explicit way to cancel the operation in your service (e.g. a method on the COM object).
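For illustration, a minimal sketch of that single-thread variant (the Sleep stands in for the real COM call, as in the question, and the one-second timeout is just an example):
EventWaitHandle _comCallSuccessful = new ManualResetEvent(false);

private void MakeACOMCallThatCouldTakeALongTime()
{
    Thread.Sleep(2500); // pretend COM call
    _comCallSuccessful.Set();
}

private void ThreadTester()
{
    var worker = new Thread(MakeACOMCallThatCouldTakeALongTime);
    worker.Start();

    // Blocks the calling thread for at most one second; no busy-wait, no second watchdog thread.
    if (_comCallSuccessful.WaitOne(TimeSpan.FromSeconds(1)))
        Console.WriteLine("Finished!");
    else
        Console.WriteLine("Timed out!"); // decide separately how (or whether) to stop the worker
}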
I want to open a thread to do what it needs to do until the user gives a new command. Then this thread should either close or receive the new command.
I have seen many posts saying that passing a variable to a running thread is hard, which is why I decided to kill the thread and start it again with the new variable.
I used the following post: https://stackoverflow.com/a/1327377 but without success. When I start the thread again (after it has been aborted), I get a System.Threading.ThreadStateException.
private static Thread t = new Thread(Threading);
private static bool _running = false;
static void Main(string[] args)
{
[get arg]
if (CanRedo(arg))
{
if (t.IsAlive)
{
_running = false;
t.Interrupt();
if (t.Join(2000)) // with a '!' like in the post, abort() would not be called
{
t.Abort();
}
}
_running = true;
t.Start(arg); // gives System.Threading.ThreadStateException
}
}
private static void Threading(object obj)
{
_stopped = false;
string arg = obj.ToString();
while(_running)
{
if (bot._isDone)
{
ExecuteInstruction(arg);
}
}
}
What am I doing wrong?
I'm going to guess that you don't literally mean to abort the thread and start that same thread again. That's because if we start a thread to do some work we don't care which thread it is. If you cancel one thing and start something else, you probably don't care if it's the same thread or a different one. (In fact it's probably better if you don't care. If you need precise control over which thread is doing what then something has gotten complicated.) You can't "abort" a thread and restart it anyway.
Regarding Thread.Abort:
The Thread.Abort method should be used with caution. Particularly when you call it to abort a thread other than the current thread, you do not know what code has executed or failed to execute when the ThreadAbortException is thrown, nor can you be certain of the state of your application or any application and user state that it is responsible for preserving. For example, calling Thread.Abort may prevent static constructors from executing or prevent the release of unmanaged resources.
It's like firing an employee by teleporting them out of the building without warning. What if they were in the middle of a phone call or carrying a stack of papers? That might be okay in an emergency, but it wouldn't be a normal way to operate. It would be better to let the employee know that they need to wrap up what they're doing immediately. Put down what you're carrying. Tell the customer that you can't finish entering their order and they'll need to call back.
You're describing an expected behavior, so it would be better to cancel the thread in an orderly way.
That's where we might use a CancellationToken. In effect you're passing an object to the thread and telling it to check it from time to time to see if it should cancel what it's doing.
So you could start your thread like this:
class Program
{
static void Main(string[] args)
{
using (var cts = new CancellationTokenSource())
{
ThreadPool.QueueUserWorkItem(DoSomethingOnAnotherThread, cts.Token);
// This is just for demonstration. It allows the other thread to run for a little while
// before it gets canceled.
Thread.Sleep(5000);
cts.Cancel();
}
}
private static void DoSomethingOnAnotherThread(object obj)
{
var cancellationToken = (CancellationToken) obj;
// This thread does its thing. Once in a while it does this:
if (cancellationToken.IsCancellationRequested)
{
return;
}
// Keep doing what it's doing.
}
}
Whatever the method is that's running in your separate thread, it's going to check IsCancellationRequested from time to time. If it's right in the middle of doing something it can stop. If it has unmanaged resources it can dispose them. But the important thing is that you can cancel what it does in a predictable way that leaves your application in a known state.
CancellationToken is one way to do this. In other really simple scenarios where the whole thing is happening inside one class you could also use a boolean field or property that acts as a flag to tell the thread if it needs to stop. The separate thread checks it to see if cancellation has been requested.
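As a minimal sketch of that flag-based variant (all names here are invented; the field is volatile so the background thread reliably sees the write from the requesting thread):
class Worker
{
    // volatile: the background thread must not cache the value and miss the stop request
    private volatile bool _stopRequested;

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(_ => DoWork());
    }

    public void RequestStop()
    {
        _stopRequested = true;
    }

    private void DoWork()
    {
        while (!_stopRequested)
        {
            // do one small unit of work, then check the flag again
        }
    }
}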
But using the CancellationToken makes it more manageable if you want to refactor and the method executing on another thread ends up in a separate class. When you use a known pattern it makes it easier for the next person to understand what's going on.
Here's some documentation.
What about doing it this way:
private static Task t = null;
private static CancellationTokenSource cts = null;
static void Main(string[] args)
{
[get arg]
if (CanRedo(out var arg))
{
if (t != null)
{
cts.Cancel();
t.Wait();
}
// Set up a new task and matching cancellation token
cts = new CancellationTokenSource();
t = Task.Run(() => liveTask(arg, cts.Token));
}
}
private static void liveTask(object obj, CancellationToken ct)
{
string arg = obj.ToString();
while(!ct.IsCancellationRequested)
{
if (bot._isDone)
{
ExecuteInstruction(arg);
}
}
}
Tasks are cancellable, and I can see nothing in your thread that requires the same physical thread to be re-used.
We could abort a Thread like this:
Thread thread = new Thread(SomeMethod);
.
.
.
thread.Abort();
But can I abort a Task (in .NET 4.0) in the same way, not via the cancellation mechanism? I want to kill the Task immediately.
The guidance on not using a thread abort is controversial. I think there is still a place for it, but only in exceptional circumstances. However, you should always attempt to design around it and see it as a last resort.
Example;
You have a simple windows form application that connects to a blocking synchronous web service. Within which it executes a function on the web service within a Parallel loop.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
Thread.Sleep(120000); // pretend web service call
});
Say in this example the blocking call takes 2 minutes to complete. Now I set my MaxDegreeOfParallelism to, say, ProcessorCount, and iListOfItems has 1000 items in it to process.
The user clicks the process button and the loop commences; we have up to 20 threads executing against the 1000 items in the iListOfItems collection. Each iteration executes on its own thread, and each thread will utilise a foreground thread when created by Parallel.ForEach. This means that regardless of the main application shutdown, the app domain will be kept alive until all threads have finished.
However the user needs to close the application for some reason, say they close the form.
These 20 threads will continue to execute until all 1000 items are processed. This is not ideal in this scenario, as the application will not exit as the user expects and will continue to run behind the scenes, as can be seen by taking a look in Task Manager.
Say the user then tries to rebuild the app (in VS 2010): it reports that the exe is locked, so they have to go into Task Manager to kill it or just wait until all 1000 items are processed.
I would not blame you for saying "but of course, I should be cancelling these threads using the CancellationTokenSource object and calling Cancel" ... but there are some problems with this as of .NET 4.0. Firstly, this will never result in a thread abort (which would raise an abort exception followed by thread termination), so the app domain will instead need to wait for the threads to finish normally, and that means waiting for the last blocking call, i.e. the very last running iteration (thread) that ultimately gets to call po.CancellationToken.ThrowIfCancellationRequested.
In the example this would mean the app domain could still stay alive for up to 2 mins, even though the form has been closed and cancel called.
Note that calling Cancel on a CancellationTokenSource does not throw an exception on the processing thread(s), which is what would be needed to interrupt the blocking call, similar to a thread abort, and stop the execution. Instead, an exception is cached, ready for when all the other threads (concurrent iterations) eventually finish and return; only then is the exception thrown in the initiating thread (where the loop is declared).
I chose not to use the Cancel option on the CancellationTokenSource object. This is wasteful and arguably violates the well-known anti-pattern of controlling the flow of code with exceptions.
Instead, it is arguably 'better' to implement a simple thread-safe property, i.e. bool stopExecuting. Then within the loop, check the value of stopExecuting; if it has been set to true by the external influence, we can take an alternate path to close down gracefully. Since we should not call Cancel, this precludes checking CancellationTokenSource.IsCancellationRequested, which would otherwise be another option.
Something like the following if condition would be appropriate within the loop;
if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting) {loopState.Stop(); return;}
The iteration will now exit in a 'controlled' manner as well as terminating further iterations, but as I said, this does little for our issue of having to wait on the long running and blocking call(s) that are made within each iteration (parallel loop thread), since these have to complete before each thread can get to the option of checking if it should stop.
In summary, as the user closes the form, the 20 threads will be signaled to stop via stopExecuting, but they will only stop when they have finished executing their long running function call.
We can't do anything about the fact that the application domain will always stay alive and only be released when all foreground threads have completed. And this means there will be a delay associated with waiting for any blocking calls made within the loop to complete.
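For reference, a sketch of the cooperative version of the earlier loop; stopExecuting is assumed to be a volatile field that the form sets to true in its closing handler:
// somewhere at class level:
// private static volatile bool stopExecuting;

Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
    if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting)
    {
        loopState.Stop();
        return;
    }

    Thread.Sleep(120000); // pretend web service call; the flag is only re-checked once this returns
});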
Only a true thread abort can interrupt the blocking call, and you must mitigate leaving the system in an unstable/undefined state as best you can in the aborted thread's exception handler, which goes without question. Whether that's appropriate is a matter for the programmer to decide, based on what resource handles they chose to maintain and how easy it is to close them in a thread's finally block. You could register with a token to terminate on cancel as a semi-workaround, i.e.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
using (cts.Token.Register(Thread.CurrentThread.Abort))
{
try
{
Thread.Sleep(120000); // pretend web service call
}
catch (ThreadAbortException ex)
{
// log etc.
}
finally
{
// clean up here
}
}
});
but this will still result in an exception in the declaring thread.
All things considered, the ability to interrupt blocking calls when using the Parallel loop constructs could have been a method on the options, avoiding the use of more obscure parts of the library. And the fact that there is no option to cancel without throwing an exception in the declaring method strikes me as a possible oversight.
But can I abort a Task (in .NET 4.0) in the same way, not via the cancellation mechanism? I want to kill the Task immediately.
Other answerers have told you not to do it. But yes, you can do it. You can supply Thread.Abort() as the delegate to be called by the Task's cancellation mechanism. Here is how you could configure this:
class HardAborter
{
public bool WasAborted { get; private set; }
private CancellationTokenSource Canceller { get; set; }
private Task<object> Worker { get; set; }
public void Start(Func<object> DoFunc)
{
WasAborted = false;
// start a task with a means to do a hard abort (unsafe!)
Canceller = new CancellationTokenSource();
Worker = Task.Factory.StartNew(() =>
{
try
{
// specify this thread's Abort() as the cancel delegate
using (Canceller.Token.Register(Thread.CurrentThread.Abort))
{
return DoFunc();
}
}
catch (ThreadAbortException)
{
WasAborted = true;
return false;
}
}, Canceller.Token);
}
public void Abort()
{
Canceller.Cancel();
}
}
disclaimer: don't do this.
Here is an example of what not to do:
var doNotDoThis = new HardAborter();
// start a thread writing to the console
doNotDoThis.Start(() =>
{
while (true)
{
Thread.Sleep(100);
Console.Write(".");
}
return null;
});
// wait a second to see some output and show the WasAborted value as false
Thread.Sleep(1000);
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted);
// wait another second, abort, and print the time
Thread.Sleep(1000);
doNotDoThis.Abort();
Console.WriteLine("Abort triggered at " + DateTime.Now);
// wait until the abort finishes and print the time
while (!doNotDoThis.WasAborted) { Thread.CurrentThread.Join(0); }
Console.WriteLine("WasAborted: " + doNotDoThis.WasAborted + " at " + DateTime.Now);
Console.ReadKey();
You shouldn't use Thread.Abort()
Tasks can be Cancelled but not aborted.
The Thread.Abort() method is (severely) deprecated.
Both Threads and Tasks should cooperate when being stopped; otherwise you run the risk of leaving the system in an unstable/undefined state.
If you do need to run a Process and kill it from the outside, the only safe option is to run it in a separate AppDomain.
This answer is about .NET 3.5 and earlier.
Thread-abort handling has been improved since then, among other things by changing the way finally blocks work.
But Thread.Abort is still a suspect solution that you should always try to avoid.
And in .NET Core (.NET 5+), Thread.Abort() now throws a PlatformNotSupportedException, which rather underscores the 'deprecated' point.
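For what it's worth, a sketch of the 'separate AppDomain' option on classic .NET Framework (secondary AppDomains cannot be created on .NET Core/5+); the method name is invented, and the delegate must refer to a static method so it can be marshalled into the throwaway domain:
private static bool RunInSacrificialAppDomain(CrossAppDomainDelegate work, TimeSpan timeout)
{
    AppDomain domain = AppDomain.CreateDomain("SacrificialDomain");
    var finished = new ManualResetEvent(false);

    var runner = new Thread(() =>
    {
        try { domain.DoCallBack(work); }
        catch (AppDomainUnloadedException) { /* the domain was torn down from under us */ }
        finished.Set();
    }) { IsBackground = true };
    runner.Start();

    bool completed = finished.WaitOne(timeout);
    try
    {
        // Unload aborts threads still executing in the domain; it can itself throw
        // CannotUnloadAppDomainException if a thread is stuck in native code.
        AppDomain.Unload(domain);
    }
    catch (CannotUnloadAppDomainException) { }

    return completed;
}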
Everyone knows (hopefully) that it's bad to terminate a thread. The problem is when you don't own the piece of code you're calling. If that code is running in some infinite do/while loop, itself calling some native functions, etc., you're basically stuck. When this happens in your own code's termination, Stop or Dispose calls, it's kind of OK to start shooting the bad guys (so you don't become a bad guy yourself).
So, for what it's worth, I've written these two blocking functions that use their own native thread, not a thread from the pool or a thread created by the CLR. They will stop the thread if a timeout occurs:
// returns true if the call went to completion successfully, false otherwise
public static bool RunWithAbort(this Action action, int milliseconds) => RunWithAbort(action, new TimeSpan(0, 0, 0, 0, milliseconds));
public static bool RunWithAbort(this Action action, TimeSpan delay)
{
if (action == null)
throw new ArgumentNullException(nameof(action));
var source = new CancellationTokenSource(delay);
var success = false;
var handle = IntPtr.Zero;
var fn = new Action(() =>
{
using (source.Token.Register(() => TerminateThread(handle, 0)))
{
action();
success = true;
}
});
handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
CloseHandle(handle);
return success;
}
// returns what the function should return if the call went to completion successfully, default(T) otherwise
public static T RunWithAbort<T>(this Func<T> func, int milliseconds) => RunWithAbort(func, new TimeSpan(0, 0, 0, 0, milliseconds));
public static T RunWithAbort<T>(this Func<T> func, TimeSpan delay)
{
if (func == null)
throw new ArgumentNullException(nameof(func));
var source = new CancellationTokenSource(delay);
var item = default(T);
var handle = IntPtr.Zero;
var fn = new Action(() =>
{
using (source.Token.Register(() => TerminateThread(handle, 0)))
{
item = func();
}
});
handle = CreateThread(IntPtr.Zero, IntPtr.Zero, fn, IntPtr.Zero, 0, out var id);
WaitForSingleObject(handle, 100 + (int)delay.TotalMilliseconds);
CloseHandle(handle);
return item;
}
[DllImport("kernel32")]
private static extern bool TerminateThread(IntPtr hThread, int dwExitCode);
[DllImport("kernel32")]
private static extern IntPtr CreateThread(IntPtr lpThreadAttributes, IntPtr dwStackSize, Delegate lpStartAddress, IntPtr lpParameter, int dwCreationFlags, out int lpThreadId);
[DllImport("kernel32")]
private static extern bool CloseHandle(IntPtr hObject);
[DllImport("kernel32")]
private static extern int WaitForSingleObject(IntPtr hHandle, int dwMilliseconds);
While it's possible to abort a thread, in practice it's almost always a very bad idea to do so. Aborting a thread means the thread is not given a chance to clean up after itself, leaving resources undeleted and things in unknown states.
In practice, if you abort a thread, you should only do so in conjunction with killing the process. Sadly, all too many people think ThreadAbort is a viable way of stopping something and continuing on; it's not.
Since Tasks run as threads, you can call ThreadAbort on them, but as with generic threads you almost never want to do this, except as a last resort.
I faced a similar problem with Excel's Application.Workbooks.
If the application is busy, the method hangs forever. My approach was simply to try to get it in a task and wait; if it takes too long, I just leave the task be and move on (there is no harm "in this case": Excel will unfreeze the moment the user finishes whatever it is busy with).
In this case, it's impossible to use a cancellation token. The advantage is that I don't need excessive code, aborting threads, etc.
public static List<Workbook> GetAllOpenWorkbooks()
{
//gets all open Excel applications
List<Application> applications = GetAllOpenApplications();
//this is what we want to get from the third party library that may freeze
List<Workbook> books = null;
//as Excel may freeze here due to being busy, we try to get the workbooks asynchronously
Task task = Task.Run(() =>
{
try
{
books = applications
.SelectMany(app => app.Workbooks.OfType<Workbook>()).ToList();
}
catch { }
});
//wait for task completion
task.Wait(5000);
return books; //handle outside if books is null
}
This is my implementation of an idea presented by @Simon-Mourier, using a .NET thread; short and simple code:
public static bool RunWithAbort(this Action action, int milliseconds)
{
if (action == null) throw new ArgumentNullException(nameof(action));
var success = false;
var thread = new Thread(() =>
{
action();
success = true;
});
thread.IsBackground = true;
thread.Start();
thread.Join(milliseconds);
thread.Abort();
return success;
}
You can "abort" a task by running it on a thread you control and aborting that thread. This causes the task to complete in a faulted state with a ThreadAbortException. You can control thread creation with a custom task scheduler, as described in this answer. Note that the caveat about aborting a thread applies.
(If you don't ensure the task is created on its own thread, aborting it would abort either a thread-pool thread or the thread initiating the task, neither of which you typically want to do.)
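A bare-bones sketch of such a scheduler, assuming one dedicated thread per queued task (the class and method names are invented; every Thread.Abort caveat mentioned above still applies):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Runs each queued task on its own thread and remembers that thread,
// so the caller can abort it and fault the task with a ThreadAbortException.
sealed class DedicatedThreadTaskScheduler : TaskScheduler
{
    private readonly Dictionary<Task, Thread> _threads = new Dictionary<Task, Thread>();

    protected override void QueueTask(Task task)
    {
        var thread = new Thread(() => TryExecuteTask(task)) { IsBackground = true };
        lock (_threads) _threads[task] = thread;
        thread.Start();
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // never inline: the task must stay on its own abortable thread
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return Enumerable.Empty<Task>();
    }

    public void Abort(Task task)
    {
        Thread thread;
        lock (_threads) _threads.TryGetValue(task, out thread);
        if (thread != null) thread.Abort();
    }
}

// usage sketch ('work' is whatever Action you want to run):
// var scheduler = new DedicatedThreadTaskScheduler();
// var task = Task.Factory.StartNew(work, CancellationToken.None, TaskCreationOptions.None, scheduler);
// ... later: scheduler.Abort(task);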
using System;
using System.Threading;
using System.Threading.Tasks;
...
var cts = new CancellationTokenSource();
var task = Task.Run(() => { while (true) { } });
Parallel.Invoke(() =>
{
task.Wait(cts.Token);
}, () =>
{
Thread.Sleep(1000);
cts.Cancel();
});
This is a simple snippet to abort a never-ending task with CancellationTokenSource.
I have a multi-thread windows service in .Net 3.5, and I am having some trouble to stop the service properly when more than one thread is created.
This service used to create only one thread to do all the work, and I just changed it to be multi-threaded. It works perfectly, but when the service is stopped, if more than one thread is being executed, it will hang the service until all the threads are completed.
When the service is started, I create a background thread to handle the main process:
protected override void OnStart(string[] args)
{
try
{
//Global variable that is checked by threads to learn if service was stopped
DeliveryConstant.StopService = false;
bool SetMaxThreadsResult = ThreadPool.SetMaxThreads(10, 10);
ThreadStart st = new ThreadStart(StartThreadPool);
workerThread = new Thread(st);
workerThread.IsBackground = true;
serviceStarted = true;
workerThread.Start();
}
catch (Exception ex)
{
//Log something;
}
}
Here is the StartThreadPool method:
//Tried with and without this attribute with no success...
[System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.Synchronized)]
public void StartThreadPool()
{
while (serviceStarted)
{
ProcessInfo input = new ProcessInfo();
try
{
int? NumPendingRequests = GetItems(50, (Guid?)input.ProcessID);
if (NumPendingRequests > 0)
{
input.ProcessType = 1;
input.ProcessID = Guid.NewGuid();
ThreadPool.QueueUserWorkItem(new WaitCallback(new DispatchManager().ProcessRequestList), input);
}
}
catch (Exception ex)
{
//Some Logging here
}
}
DeliveryConstant.StopService = true;
}
I created a static variable in a separate class to notify the threads that the service has been stopped. When the value of this variable is true, all threads should stop their main loop (a foreach loop):
public static bool StopService;
Finally, the OnStop method:
protected override void OnStop()
{
DeliveryConstant.StopService = true;
//flag to tell the worker process to stop
serviceStarted = false;
workerThread.Join(TimeSpan.FromSeconds(30));
}
In the ProcessRequestList method, at the end of every foreach, I check for the value of the StopService variable. If true, I break the loop.
Here is the problem:
The threads are created in chunks of 50 items. When I have 50 items or less in the database, only one thread is created, and everything works beautifully.
When I have more than 50 items, multiple threads will be created, and when I try to stop the service, it doesn't stop until all the background threads are completed.
From the logs, I can see that the method OnStop is only executed AFTER all threads are completed.
Any clue what could be changed to fix that?
This blog answer states that OnStop isn't called until all ThreadPool tasks complete, which is news to me but would explain your issue.
I've fielded many multi-threaded Windows Services but I prefer to create my own background threads rather than use the ThreadPool since these are long-running threads. I instantiate worker classes and launch their DoWork() method on the thread. I also prefer to use callbacks to the launching class to check for a stop signal and pass status rather than just test against a global variable.
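A rough sketch of that pattern (all names invented): the worker gets a callback into the launching class so it can ask whether it should stop, rather than testing a global variable:
class Worker
{
    private readonly Func<bool> _shouldStop; // callback into the launching class

    public Worker(Func<bool> shouldStop)
    {
        _shouldStop = shouldStop;
    }

    public void DoWork()
    {
        while (!_shouldStop())
        {
            // process one batch of items, then ask the service again
        }
    }
}

// in the service:
// var worker = new Worker(() => stopRequested);
// var thread = new Thread(worker.DoWork) { IsBackground = true };
// thread.Start();
// ... and in OnStop: stopRequested = true; thread.Join(TimeSpan.FromSeconds(30));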
You are missing memory barriers around accesses to StopService, which may be a problem if you have multiple CPUs. Better to lock on a reference object for ALL accesses to the shared variable. For example:
object _lock = new object();
...
lock (_lock)
{
StopService = true;
}
Edit: As another answer has revealed, this issue was not a locking problem, but I am leaving this answer here as a thing to check with multithread synchronization schemes.
Making the shared variable volatile would work in many cases as well, but it is more complex to prove correct because it does not emit full fences.
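For reference, the volatile variant would only change the field declaration in the question's DeliveryConstant class; a minimal sketch, assuming the flag is only ever read or written as a whole:
public class DeliveryConstant
{
    // volatile: worker threads observe OnStop's write promptly instead of a stale cached value
    public static volatile bool StopService;
}

// checked inside ProcessRequestList, e.g.:
// if (DeliveryConstant.StopService) break;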
I am creating a thread A and in that thread creating a new thread B.
So what is the thread hierarchy? Is Thread B a child of Thread A, or are the threads created as peers?
I want to abort the parent thread A which in turn kills/aborts its child threads.
How is that possible in C#?
Threads should ideally never be aborted. It simply isn't safe. Consider it as a way of putting down an already sick process. Otherwise, avoid it like the plague.
The more correct way of doing this is to have something that the code can periodically check, and itself decide to exit.
An example of stopping threads the polite way:
using System;
using System.Threading;
namespace Treading
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Main program starts");
Thread firstThread = new Thread(A);
ThreadStateMessage messageToA = new ThreadStateMessage(){YouShouldStopNow = false};
firstThread.Start(messageToA);
Thread.Sleep(50); //Let other threads do their thing for 0.05 seconds
Console.WriteLine("Sending stop signal from main program!");
messageToA.YouShouldStopNow = true;
firstThread.Join();
Console.WriteLine("Main program ends - press any key to exit");
Console.Read();
}
private class ThreadStateMessage
{
public bool YouShouldStopNow = false; //this assignment is not really needed, since default value is false
}
public static void A(object param)
{
ThreadStateMessage myMessage = (ThreadStateMessage)param;
Console.WriteLine("Hello from A");
ThreadStateMessage messageToB = new ThreadStateMessage();
Thread secondThread = new Thread(B);
secondThread.Start(messageToB);
while (!myMessage.YouShouldStopNow)
{
Thread.Sleep(10);
Console.WriteLine("A is still running");
}
Console.WriteLine("Sending stop signal from A!");
messageToB.YouShouldStopNow = true;
secondThread.Join();
Console.WriteLine("Goodbye from A");
}
public static void B(object param)
{
ThreadStateMessage myMessage = (ThreadStateMessage)param;
Console.WriteLine("Hello from B");
while(!myMessage.YouShouldStopNow)
{
Thread.Sleep(10);
Console.WriteLine("B is still running");
}
Console.WriteLine("Goodbye from B");
}
}
}
Using Thread.Abort(); causes an exception to be thrown if your thread is in a waiting state of any kind. This is sort of annoying to handle, since there are quite a number of ways that a thread can be waiting. As others have said, you should generally avoid doing it.
Thread.Abort will do what you want, but it is not recommended to abort a thread. A better choice is to find a way to finish the threads correctly using a thread synchronization mechanism.
Here's yet another way to politely signal a thread to die:
Note that this approach favors finite state automatons where the slave periodically checks for permission to live and then performs a task if allowed. Tasks are not interrupted and are 'atomic'. This works great with simple loops or with command queues. It also makes sure the thread doesn't spin at 100% CPU by giving the slave thread a rest period; set this to 0 if you don't want any rest in your slave.
var dieEvent = new AutoResetEvent(false);
int slaveRestPeriod = 20;// let's not hog the CPU with an endless loop
var master = new Thread(() =>
{
doStuffAMasterDoes(); // long running operation
dieEvent.Set(); // kill the slave
});
var slave = new Thread(() =>
{
while (!dieEvent.WaitOne(slaveRestPeriod))
{
doStuffASlaveDoes();
}
});
slave.Start();
master.Start();
Threads are created as peers; obtain a handle to Thread A and then call ThreadA.Abort() to forcefully end it. It's better, though, to check a boolean in the thread and, if it evaluates to false, exit the thread.
public class MyClass
{
public static Thread ThreadA;
public static Thread ThreadB;
private void RunThings()
{
ThreadA = new Thread(new ThreadStart(ThreadAWork));
ThreadB = new Thread(new ThreadStart(ThreadBWork));
ThreadA.Start();
ThreadB.Start();
}
static void ThreadAWork()
{
// do some stuff
// thread A will close now, all work is done.
}
static void ThreadBWork()
{
// do some stuff
ThreadA.Abort(); // close thread A
// thread B will close now, all work is done.
}
}
I am using Enterprise Library 4 on one of my projects for logging (and other purposes). I've noticed that there is some cost to the logging that I am doing that I can mitigate by doing the logging on a separate thread.
The way I am doing this now is that I create a LogEntry object and then I call BeginInvoke on a delegate that calls Logger.Write.
new Action<LogEntry>(Logger.Write).BeginInvoke(le, null, null);
What I'd really like to do is add the log message to a queue and then have a single thread pulling LogEntry instances off the queue and performing the log operation. The benefit of this would be that logging is not interfering with the executing operation and not every logging operation results in a job getting thrown on the thread pool.
How can I create a shared queue that supports many writers and one reader in a thread safe way? Some examples of a queue implementation that is designed to support many writers (without causing synchronization/blocking) and a single reader would be really appreciated.
Recommendation regarding alternative approaches would also be appreciated, I am not interested in changing logging frameworks though.
I wrote this code a while back, feel free to use it.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MediaBrowser.Library.Logging {
public abstract class ThreadedLogger : LoggerBase {
Queue<Action> queue = new Queue<Action>();
AutoResetEvent hasNewItems = new AutoResetEvent(false);
volatile bool waiting = false;
public ThreadedLogger() : base() {
Thread loggingThread = new Thread(new ThreadStart(ProcessQueue));
loggingThread.IsBackground = true;
loggingThread.Start();
}
void ProcessQueue() {
while (true) {
waiting = true;
hasNewItems.WaitOne(10000,true);
waiting = false;
Queue<Action> queueCopy;
lock (queue) {
queueCopy = new Queue<Action>(queue);
queue.Clear();
}
foreach (var log in queueCopy) {
log();
}
}
}
public override void LogMessage(LogRow row) {
lock (queue) {
queue.Enqueue(() => AsyncLogMessage(row));
}
hasNewItems.Set();
}
protected abstract void AsyncLogMessage(LogRow row);
public override void Flush() {
while (!waiting) {
Thread.Sleep(1);
}
}
}
}
Some advantages:
It keeps the background logger alive, so it does not need to spin up and spin down threads.
It uses a single thread to service the queue, which means there will never be a situation where 100 threads are servicing the queue.
It copies the queue to ensure the queue is not blocked while the log operation is performed
It uses an AutoResetEvent to ensure the bg thread is in a wait state
It is, IMHO, very easy to follow
Here is a slightly improved version; keep in mind I performed very little testing on it, but it does address a few minor issues.
public abstract class ThreadedLogger : IDisposable {
Queue<Action> queue = new Queue<Action>();
ManualResetEvent hasNewItems = new ManualResetEvent(false);
ManualResetEvent terminate = new ManualResetEvent(false);
ManualResetEvent waiting = new ManualResetEvent(false);
Thread loggingThread;
public ThreadedLogger() {
loggingThread = new Thread(new ThreadStart(ProcessQueue));
loggingThread.IsBackground = true;
// this is performed from a bg thread, to ensure the queue is serviced from a single thread
loggingThread.Start();
}
void ProcessQueue() {
while (true) {
waiting.Set();
int i = ManualResetEvent.WaitAny(new WaitHandle[] { hasNewItems, terminate });
// terminate was signaled
if (i == 1) return;
hasNewItems.Reset();
waiting.Reset();
Queue<Action> queueCopy;
lock (queue) {
queueCopy = new Queue<Action>(queue);
queue.Clear();
}
foreach (var log in queueCopy) {
log();
}
}
}
public void LogMessage(LogRow row) {
lock (queue) {
queue.Enqueue(() => AsyncLogMessage(row));
}
hasNewItems.Set();
}
protected abstract void AsyncLogMessage(LogRow row);
public void Flush() {
waiting.WaitOne();
}
public void Dispose() {
terminate.Set();
loggingThread.Join();
}
}
Advantages over the original:
It's disposable, so you can get rid of the async logger
The flush semantics are improved
It will respond slightly better to a burst followed by silence
Yes, you need a producer/consumer queue. I have one example of this in my threading tutorial - if you look at my "deadlocks / monitor methods" page you'll find the code in the second half.
There are plenty of other examples online, of course - and .NET 4.0 will ship with one in the framework too (rather more fully featured than mine!). In .NET 4.0 you'd probably wrap a ConcurrentQueue<T> in a BlockingCollection<T>.
The version on that page is non-generic (it was written a long time ago) but you'd probably want to make it generic - it would be trivial to do.
You would call Produce from each "normal" thread, and Consume from one thread, just looping round and logging whatever it consumes. It's probably easiest just to make the consumer thread a background thread, so you don't need to worry about "stopping" the queue when your app exits. That does mean there's a remote possibility of missing the final log entry though (if it's half way through writing it when the app exits) - or even more if you're producing faster than it can consume/log.
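For illustration, a sketch of that .NET 4.0 route; LogEntry and Logger.Write are the Enterprise Library types from the question, everything else is invented:
using System;
using System.Collections.Concurrent;
using System.Threading;

class AsyncLogWriter : IDisposable
{
    // The default BlockingCollection backing store is a ConcurrentQueue, so many writers can add concurrently.
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private readonly Thread _consumer;

    public AsyncLogWriter()
    {
        _consumer = new Thread(() =>
        {
            // Blocks until an item arrives, and ends cleanly once CompleteAdding has been called and the queue drains.
            foreach (var entry in _queue.GetConsumingEnumerable())
                Logger.Write(entry);
        }) { IsBackground = true };
        _consumer.Start();
    }

    public void Enqueue(LogEntry entry)
    {
        _queue.Add(entry); // safe to call from any number of threads
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // let the consumer finish what is already queued
        _consumer.Join();
    }
}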
Here is what I came up with... also see Sam Saffron's answer. This answer is community wiki in case there are any problems that people see in the code and want to update.
/// <summary>
/// A singleton queue that manages writing log entries to the different logging sources (Enterprise Library Logging) off the executing thread.
/// This queue ensures that log entries are written in the order that they were executed and that logging is only utilizing one thread (backgroundworker) at any given time.
/// </summary>
public class AsyncLoggerQueue
{
//create singleton instance of logger queue
public static AsyncLoggerQueue Current = new AsyncLoggerQueue();
private static readonly object logEntryQueueLock = new object();
private Queue<LogEntry> _LogEntryQueue = new Queue<LogEntry>();
private BackgroundWorker _Logger = new BackgroundWorker();
private AsyncLoggerQueue()
{
//configure background worker
_Logger.WorkerSupportsCancellation = false;
_Logger.DoWork += new DoWorkEventHandler(_Logger_DoWork);
}
public void Enqueue(LogEntry le)
{
//lock during write
lock (logEntryQueueLock)
{
_LogEntryQueue.Enqueue(le);
//while locked check to see if the BW is running, if not start it
if (!_Logger.IsBusy)
_Logger.RunWorkerAsync();
}
}
private void _Logger_DoWork(object sender, DoWorkEventArgs e)
{
while (true)
{
LogEntry le = null;
bool skipEmptyCheck = false;
lock (logEntryQueueLock)
{
if (_LogEntryQueue.Count <= 0) //if queue is empty then BW is done
return;
else if (_LogEntryQueue.Count > 1) //if greater than 1 we can skip checking to see if anything has been enqueued during the logging operation
skipEmptyCheck = true;
//dequeue the LogEntry that will be written to the log
le = _LogEntryQueue.Dequeue();
}
//pass LogEntry to Enterprise Library
Logger.Write(le);
if (skipEmptyCheck) //if LogEntryQueue.Count was > 1 before we wrote the last LogEntry we know to continue without double checking
{
lock (logEntryQueueLock)
{
if (_LogEntryQueue.Count <= 0) //if queue is still empty then BW is done
return;
}
}
}
}
}
I suggest starting by measuring the actual performance impact of logging on the overall system (i.e. by running a profiler) and optionally switching to something faster like log4net (I personally migrated to it from EntLib logging a long time ago).
If this does not work, you can try using this simple method from .NET Framework:
ThreadPool.QueueUserWorkItem
Queues a method for execution. The method executes when a thread pool thread becomes available.
MSDN Details
If this does not work either, then you can resort to something like Jon Skeet has offered and actually code the async logging framework yourself.
In response to Sam Saffron's post: I wanted to call flush and make sure everything was really finished writing. In my case, I am writing to a database in the queue thread, and all my log events were getting queued up, but sometimes the application stopped before everything was finished writing, which is not acceptable in my situation. I changed several chunks of your code, but the main thing I wanted to share was the flush:
public static void FlushLogs()
{
bool queueHasValues = true;
while (queueHasValues)
{
//wait for the current iteration to complete
m_waitingThreadEvent.WaitOne();
lock (m_loggerQueueSync)
{
queueHasValues = m_loggerQueue.Count > 0;
}
}
//force MEL to flush all its listeners
foreach (MEL.LogSource logSource in MEL.Logger.Writer.TraceSources.Values)
{
foreach (TraceListener listener in logSource.Listeners)
{
listener.Flush();
}
}
}
I hope that saves someone some frustration. It is especially apparent in parallel processes logging lots of data.
Thanks for sharing your solution, it set me into a good direction!
--Johnny S
I wanted to say that my previous post was kind of useless. You can simply set AutoFlush to true and you will not have to loop through all the listeners. However, I still had a crazy problem with parallel threads trying to flush the logger. I had to create another boolean that was set to true while the queue was being copied and the LogEntry writes were executing, and then in the flush routine I had to check that boolean to make sure nothing was still in the queue and nothing was being processed before returning.
Now multiple threads in parallel can hit this thing and when I call flush I know it is really flushed.
public static void FlushLogs()
{
int queueCount;
bool isProcessingLogs;
while (true)
{
//wait for the current iteration to complete
m_waitingThreadEvent.WaitOne();
//check to see if we are currently processing logs
lock (m_isProcessingLogsSync)
{
isProcessingLogs = m_isProcessingLogs;
}
//check to see if more events were added while the logger was processing the last batch
lock (m_loggerQueueSync)
{
queueCount = m_loggerQueue.Count;
}
if (queueCount == 0 && !isProcessingLogs)
break;
//something is still queued or being processed, so wait a bit before checking again
Thread.Sleep(400);
}
}
Just an update:
Using Enterprise Library 5.0 with .NET 4.0, it can easily be done like this:
static public void LogMessageAsync(LogEntry logEntry)
{
Task.Factory.StartNew(() => LogMessage(logEntry));
}
See:
http://randypaulo.wordpress.com/2011/07/28/c-enterprise-library-asynchronous-logging/
An extra level of indirection may help here.
Your first async method call can put messages onto a synchronized Queue and set an event (so the locks happen in the thread pool, not on your worker threads), and then have yet another thread pulling messages off the queue when the event is raised.
If you log something on a separate thread, the message may not be written if the application crashes, which makes it rather useless.
That is the reason why you should always flush after every written entry.
If what you have in mind is a SHARED queue, then I think you are going to have to synchronize the writes to it, the pushes and the pops.
But, I still think it's worth aiming at the shared queue design. In comparison to the IO of logging and probably in comparison to the other work your app is doing, the brief amount of blocking for the pushes and the pops will probably not be significant.