C# single worker thread, single GUI thread

I have several Windows Forms applications that rely on worker threads to do all the processing. They talk to an external application that in some cases relies on simulating user interface actions (copy/paste functionality).
In each case I have a RichTextBox on the form to provide feedback to the user during processing, which needs to be async so that the textbox can update in real time. (Disclaimer: this was my first dive into the world of threading and async processing; the feedback part works, at least!)
I have a class "AppTrack" that is used to monitor all these applications for usage, reporting, and better exception handling. In it are two functions (shown below) that I use to fire the worker threads "safely", along with turning certain form controls on/off to prevent further user interaction.
public static void DoInSafeThread(Action ActualThreadFunction, List<Control> FormControls = null)
{
    System.Action<System.Exception> exceptionHandler = null; // no custom handler is supplied here
    Thread thread = new Thread(
        () => AppTrack.SafeExecute(() => ActualThreadFunction(), exceptionHandler, FormControls)
    );
    thread.IsBackground = true;
    thread.Start();
}
private static void SafeExecute(Action ActualThreadFunction, Action<Exception> handler, List<Control> FormControls)
{
    try
    {
        DisableControls(FormControls);
        ActualThreadFunction.Invoke();
        EnableControls(FormControls);
    }
    catch (Exception ex)
    {
        if (handler != null)
        {
            handler(ex);
        }
    }
}
This is causing me a big problem if there is a list of things to process, like the snippet below. It will fire a new worker thread for each one, and the nature of the interface with the external application means the processes interfere and give bad results/errors.
foreach (IOccurrences s in sel)
{
    AppTrack.DoInSafeThread(delegate()
    {
        CableChecking.CheckCables(s, rtbLog, CableData, materials);
    }, FormControls);
}
Thoughts on solution
Have a single thread used in my "AppTrack" class. Assign it a new operation as it completes the previous one, or queue the operations up to run in order and eventually feed back to the GUI thread that all jobs are complete. But trying to implement that, I hit a brick wall... Is this even the correct approach?
I was hoping to find something like thread.BindNewAction, but that's apparently not a thing...
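For illustration, here is a minimal sketch of the single worker-thread queue described above. It is not the existing AppTrack code: the WorkQueue name and its members are made up for the example, and BlockingCollection assumes .NET 4.0 or later.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Minimal single-worker job queue (illustrative sketch, assumes .NET 4.0 for BlockingCollection).
public sealed class WorkQueue
{
    private readonly BlockingCollection<Action> _jobs = new BlockingCollection<Action>();

    public WorkQueue()
    {
        // One background thread runs queued jobs strictly one at a time, in order.
        var worker = new Thread(() =>
        {
            foreach (Action job in _jobs.GetConsumingEnumerable())
            {
                try { job(); }
                catch (Exception ex) { Console.WriteLine(ex); /* or AppTrack-style logging */ }
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Safe to call from the GUI thread; returns immediately.
    public void Enqueue(Action job)
    {
        _jobs.Add(job);
    }
}

Usage would look roughly like this: queue one job per occurrence, then queue a final job that marshals a "done" notification back to the GUI thread.

var queue = new WorkQueue();
foreach (IOccurrences s in sel)
{
    IOccurrences occurrence = s; // copy to avoid captured-loop-variable surprises on pre-C# 5 compilers
    queue.Enqueue(() => CableChecking.CheckCables(occurrence, rtbLog, CableData, materials));
}
// Last job in the queue: everything queued before it has finished by the time this runs.
queue.Enqueue(() => rtbLog.BeginInvoke((Action)(() => rtbLog.AppendText("All jobs complete\r\n"))));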

Related

call method from another thread without blocking the thread (or write custom SynchronizationContext for non-UI thread) C#

This is probably one of the most frequent questions on Stack Overflow; however, I couldn't find the exact answer to my question:
I would like to design a pattern that allows me to start thread B from thread A and, under a specific condition (for example when an exception occurs), call a method back on thread A. In the case of an exception the correct thread matters a lot, because the exception must end up in a catch method on the main thread A. If thread A is a UI thread then everything is simple (call .Invoke() or .BeginInvoke() and that's it). The UI thread has a mechanism for doing this, and I would like some insight into how I could write my own mechanism for a non-UI thread. The commonly suggested way to achieve this is message pumping: http://www.codeproject.com/Articles/32113/Understanding-SynchronizationContext-Part-II
but the while loop would block thread A, which is not what I need and not the way the UI thread handles this issue. There are multiple ways to work around this, but I would like to get a deeper understanding of the issue and write my own generic utility, independent of the chosen mechanism (System.Threading.Thread, System.Threading.Tasks.Task, BackgroundWorker, or anything else) and independent of whether there is a UI thread at all (e.g. a console application).
Below is the example code I use for testing the catching of the exception (which clearly shows that the exception is thrown on the wrong thread). I will use it as a utility with all the locking features, checks for whether a thread is running, etc.; that is why I create an instance of a class.
class Program
{
    static void Main(string[] args)
    {
        CustomThreads t = new CustomThreads();
        try
        {
            // finally is called after the first action
            t.RunCustomTask(ForceException, ThrowException); // runs ForceException and in a catch calls ThrowException
            // finally is never reached due to the unhandled Exception
            t.RunCustomThread(ForceException, ThrowException);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        // well, this is a lie but it is just an indication that thread B was called
        Console.WriteLine("DONE, press any key");
        Console.ReadKey();
    }

    private static void ThrowException(Exception ex)
    {
        throw new Exception(ex.Message, ex);
    }

    static void ForceException()
    {
        throw new Exception("Exception thrown");
    }
}

public class CustomThreads
{
    public void RunCustomTask(Action action, Action<Exception> action_on_exception)
    {
        Task.Factory.StartNew(() => PerformAction(action, action_on_exception));
    }

    public void RunCustomThread(Action action, Action<Exception> action_on_exception)
    {
        new Thread(() => PerformAction(action, action_on_exception)).Start();
    }

    private void PerformAction(Action action, Action<Exception> action_on_exception)
    {
        try
        {
            action();
        }
        catch (Exception ex)
        {
            action_on_exception.Invoke(ex);
        }
        finally
        {
            Console.WriteLine("Finally is called");
        }
    }
}
One more interesting thing I've found is that with new Thread() the exception ends up unhandled and finally is never called, whereas with the Task it does not and finally is called. Maybe someone could comment on the reason for this difference.
and not the way the UI thread handles this issue
That is not accurate; it is exactly how a UI thread handles it. The message loop is the general solution to the producer-consumer problem, where, in a typical Windows program, the operating system as well as other processes produce messages and the one-and-only UI thread consumes them.
This pattern is required to deal with code that is fundamentally thread-unsafe. And there is always a lot of unsafe code around; the more convoluted it gets, the lower the odds that it can be made thread-safe. You can see this in .NET itself: there are very few classes that are thread-safe by design. Something as simple as a List<> is not thread-safe, and it is up to you to use the lock keyword to keep it safe. GUI code is drastically unsafe, and no amount of locking is going to make it safe.
Not just because it is hard to figure out where to put the lock statement; there is a bunch of code involved that you did not write. Message hooks, UI Automation, programs that put objects on the clipboard that you paste, drag and drop, shell extensions that run when you use a shell dialog like OpenFileDialog. All of that code is thread-unsafe, primarily because its author did not have to make it thread-safe. If you trip a threading bug in such code then you do not have a phone number to call, and you have a completely unsolvable problem.
Making a method call run on a specific thread requires this kind of help. It is not possible to arbitrarily interrupt a thread from whatever it is doing and force it to call a method; that causes horrible and completely undebuggable re-entrancy problems, like the kind caused by DoEvents(), but multiplied by a thousand. When code enters the dispatcher loop it is implicitly "idle" and not busy executing its own code, so it can take an execution request from the message queue. This can still go wrong; you'll shoot your leg off when you pump while you are not idle, which is why DoEvents() is so dangerous.
So no shortcuts here; you really do need to deal with that while() loop. You have pretty solid proof that it is possible: the UI thread does it pretty well. Consider creating your own.
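For illustration, here is a minimal sketch of such a hand-rolled dispatcher loop (not code from the original answer; SimpleDispatcher and its members are made-up names, and BlockingCollection assumes .NET 4.0): thread A runs a consuming loop over a queue of delegates, and thread B posts work, including exception handlers, back to it.

using System;
using System.Collections.Concurrent;
using System.Threading;

// Minimal hand-rolled "message loop" for a non-UI thread (illustrative sketch).
public sealed class SimpleDispatcher
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    // Thread A calls this; it blocks and processes posted delegates until Shutdown() is called.
    public void Run()
    {
        foreach (Action work in _queue.GetConsumingEnumerable())
        {
            work(); // any exception thrown here is now thrown on thread A
        }
    }

    // Any other thread calls this to get a delegate executed on thread A.
    public void Post(Action work)
    {
        _queue.Add(work);
    }

    public void Shutdown()
    {
        _queue.CompleteAdding();
    }
}

// Usage sketch: thread B reports its exception back to the dispatcher thread.
// var dispatcher = new SimpleDispatcher();
// new Thread(() =>
// {
//     try { ForceException(); }
//     catch (Exception ex) { dispatcher.Post(() => ThrowException(ex)); }
//     finally { dispatcher.Shutdown(); }
// }).Start();
// dispatcher.Run(); // thread A pumps; ThrowException now throws on thread A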

What is the difference between these two methods for pausing/resuming threads?

I have a multithreaded application which is used to extract data from a website. I wanted to be able to pause and resume multiple threads from the UI. After searching the web, I came across two approaches that I can use to control (pause/resume) my threads:
Using the Monitor class.
Using the EventWaitHandle and ManualResetEvent classes.
What I did:
I have a function named GetHtml that simply returns the HTML of the website. I am only showing part of this function, for brevity.
public string GetHtml(string url, bool isProxy = false)
{
    string result = "";
    ExecutionGateway();
    //->> EXTRA CODE FOR FETCHING HTML
    return result;
}
I have a function ControlTasks used to control the threads from the UI. Below I explain ControlTasks using both thread-control approaches, the Monitor class as well as the EventWaitHandle class (I will also briefly explain how the function ExecutionGateway works).
1. Using the Monitor class
private object taskStopper = new object();

public bool ControlTasks(bool isPause)
{
    try
    {
        if (isPause)
        {
            Monitor.Enter(taskStopper);
        }
        else
        {
            Monitor.Exit(taskStopper);
        }
        return true;
    }
    catch (Exception ex)
    {
        Logger.Instance.WriteLog("ControlTasks:", ex, Logger.LogTypes.Error);
        return false;
    }
}
ControlTasks is called from the UI: if isPause is true, the exclusive lock is taken on the object taskStopper; otherwise the lock is released. Now here comes the function ExecutionGateway, which acquires the lock on taskStopper but does nothing else, as the code below shows.
private void ExecutionGateway()
{
    lock (taskStopper) { }
}
In this way all running threads enter the waiting state when isPause is true in ControlTasks, because taskStopper is exclusively locked; when isPause is false, all threads resume their processing.
2. Using the EventWaitHandle class
private EventWaitHandle handle = new ManualResetEvent(true);

public bool ControlTasks(bool isPause)
{
    try
    {
        if (isPause)
        {
            handle.Reset();
        }
        else
        {
            handle.Set();
        }
        return true;
    }
    catch (Exception ex)
    {
        Logger.Instance.WriteLog("ControlTasks:", ex, Logger.LogTypes.Error);
        return false;
    }
}
This code also fundamentally does the same job, where the event state is signaled/non-signaled depending on the isPause parameter. Now, the corresponding ExecutionGateway method.
private void ExecutionGateway()
{
    handle.WaitOne(Timeout.Infinite);
}
Problem:
What is the difference between these two approaches, and is one better than the other? Are there other ways to do this?
The main problem I have faced many times, using either of the above methods with around 100 threads, is that when I pause them and then resume them after 5 or more minutes, the UI starts hanging. The UI becomes terribly unresponsive: it gets updated, but keeps on hanging, and I keep getting the "Not Responding" message at intervals. One thing I want to mention is that each thread extracts data and notifies the UI about the fetched data through event handling. What could be the reason for this unresponsiveness? Is it a problem with my approach(es)?
I think it's always desirable to use a construct that communicates your intent clearly. You want a signal to other threads that they should wait (i.e. stop doing what they're doing) until you signal to them that they can start again. You have one controlling thread (your UI) and potentially many threads doing work and marshalling results back to the UI.
Approach 1 isn't ideal because locks (at least in my experience) are most often used to protect a resource that isn't suitable for concurrent use, for example writing to a shared field.
Approach 2 makes much more sense, a manual reset event functions like a gate: open the gate and things can pass through, close it and they can't. That's exactly the behaviour you're looking for and I think most developers would understand quite quickly that that's your intent.
As for your second problem, it sounds like you're getting waves of messages clogging the UI. If you stop all 100 of your threads then start them at the same time, there's a good chance they're going to finish their work quite close together and all be trying to send the result of their work to the UI thread. To solve that you could try staggering the work when you restart, or use fewer threads. Another option would be to aggregate results and only dispatch to the UI every x seconds - but that's a bit more work.
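As a rough illustration of that batching idea (not part of the original answer; ResultBatcher and its members are made-up names, and ConcurrentQueue assumes .NET 4.0): workers push results into a thread-safe queue, and a System.Windows.Forms.Timer on the UI thread drains it every few hundred milliseconds, so the UI handles one batched update instead of a flood of individual events.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Illustrative sketch. Create this on the UI thread (e.g. in the form's constructor);
// the Forms timer then ticks on the UI thread, while Add() may be called from any worker.
public sealed class ResultBatcher<T>
{
    private readonly ConcurrentQueue<T> _pending = new ConcurrentQueue<T>();
    private readonly System.Windows.Forms.Timer _uiTimer = new System.Windows.Forms.Timer();

    public ResultBatcher(Action<List<T>> onBatch, int intervalMs)
    {
        _uiTimer.Interval = intervalMs;
        _uiTimer.Tick += (s, e) =>
        {
            var batch = new List<T>();
            T item;
            while (_pending.TryDequeue(out item))
            {
                batch.Add(item);
            }
            if (batch.Count > 0)
            {
                onBatch(batch); // runs on the UI thread; update controls here
            }
        };
        _uiTimer.Start();
    }

    // Safe to call from any worker thread.
    public void Add(T result)
    {
        _pending.Enqueue(result);
    }
}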
In Option 1, using the Monitor class means that only one thread owns the exclusive lock of the monitor object at a time. This means that of your 100 threads, only 1 is processing at a time, which kind of defeats the purpose of using threads. It also means that your GUI thread has to wait until the current worker thread has finished before it can obtain the lock.
The ManualResetEvent is a much better choice as it is used to signal between threads, rather than protect against multiple thread access.
I do not know why your GUI is so unresponsive using the second option, but I do not think it is related to your manual reset event. More likely you have a different problem where the GUI thread is getting swamped. You suggest you have 100 threads all firing notification events to the GUI which would seem a likely culprit.
What happens if you debug your app, and just randomly break when your GUI is unresponsive? Doing this many times should show what your GUI thread is up to and where the bottleneck is.

How to terminate a thread when the worker can't check the termination string

I have the following code running in a Windows form. The method it is calling takes about 40 seconds to complete, and I need to allow the user the ability to click an 'Abort' button to stop the thread running.
Normally I would have the Worker() method poll to see whether _terminationMessage was set to "Stop", but I can't do this here because the long-running method, ThisMethodMightReturnSomethingAndICantChangeIt(), is out of my control.
How do I implement this user feature, please?
Here is my thread code.
private const string TerminationValue = "Stop";
private volatile string _terminationMessage;
private bool RunThread()
{
    try
    {
        var worker = new Thread(Worker);
        _terminationMessage = "carry on";
        _successful = false;
        worker.Start();
        worker.Join();
    }
    finally
    {
        // note: C# does not allow 'return' inside a finally block, so the return sits below
    }
    return _successful;
}
private void Worker()
{
    ThisMethodMightReturnSomethingAndICantChangeIt();
    _successful = true;
}
Well, the simple answer would be "you can't". There's no real thread abort that you can use to cancel any processing that's happening.
Thread.Abort will allow you to abort a managed thread that is running managed code at the moment, but it's really just a bad idea. It's very easy to end up in an inconsistent state just because you happened to be running a singleton constructor or something at that moment. In the end, there's quite a big chance you're going to blow something up.
A bit orthogonal to the question, but why are you still using threading code like this? Writing multi-threaded code is really hard, so you want to use as many high-level features as you can. The complexity can already be seen in your small snippet of code: you're Joining the newly created thread, which means that you're gaining no benefit whatsoever from starting the Worker method on a new thread. You start it, and then you just wait. It's just like calling Worker outright, except that would save an unnecessary thread.
Also, your try will not catch exceptions that pop up on the separate thread, so any exception that gets thrown inside Worker will simply kill your whole process. Not good.
The only way to implement reliable cancellation is through cooperative cancellation. .NET has had great constructs for this since 4.0: CancellationToken. It's easy to use, it's thread-safe (unlike your solution), and it can be propagated through the whole method chain so that you can implement cancellation in depth. Sadly, if you simply can't modify the ThisMethodMightReturnSomethingAndICantChangeIt method, you're out of luck.
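For reference, cooperative cancellation with CancellationToken looks roughly like this (a generic sketch, not tied to the asker's code; it only helps when the long-running work can actually observe the token):

using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationSketch
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        Task work = Task.Factory.StartNew(() => DoWork(cts.Token), cts.Token);

        // Simulate the user clicking Abort after a second.
        Thread.Sleep(1000);
        cts.Cancel();

        try
        {
            work.Wait();
        }
        catch (AggregateException)
        {
            Console.WriteLine("Work was cancelled cooperatively.");
        }
    }

    static void DoWork(CancellationToken token)
    {
        for (int i = 0; i < 100; i++)
        {
            token.ThrowIfCancellationRequested(); // the cooperative check
            Thread.Sleep(100);                    // stand-in for one slice of real work
        }
    }
}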
The only "supported" "cancellation" pattern that just works is Process.Kill. You'd have to launch the processing method in a wholy separate process, not just a separate thread. That can be killed, and it will not hurt your own process. Of course, it means you have to separate that call into a new process - that's usually quite tricky, and it's not a very good design (though it seems like you have little choice).
So if the method doesn't support some form of cancellation, just treat it like so. It can't be aborted, period. Any way that does abort it is a dirty hack.
Well, here's my solution so far. I will definitely read up on the newer, higher-level .NET features as you suggest. Thanks for the pointers in the right direction.
private void RunThread()
{
    try
    {
        var worker = new Thread(Worker);
        SetFormEnabledStatus(false);
        _successful = false;
        worker.Start();
        // give up if no response before timeout
        worker.Join(60000); // TODO - Add timeout to config
        worker.Abort();
    }
    finally
    {
        SetFormEnabledStatus(true);
    }
}
private void Worker()
{
    try
    {
        _successful = false;
        ThisMethodMightReturnSomethingAndICantChangeIt();
        _successful = true;
    }
    catch (ThreadAbortException ex)
    {
        // nlog.....
    }
    catch (Exception ex)
    {
        // nlog...
    }
}

What is the best way to handle potentially non-thread-safe events

Please consider the following scenario for .NET 2.0:
I have an event that is fired by a System.Timers.Timer object. The subscriber then adds an item to a Windows.Forms.ListBox upon receiving the event. This results in a cross-thread exception.
My question is: what would be the best way to handle this sort of situation? The solution I have come up with is as follows:
private delegate void messageDel(string text);

private void ThreadSafeMsg(string text)
{
    if (this.InvokeRequired)
    {
        messageDel d = new messageDel(ThreadSafeMsg);
        this.Invoke(d, new object[] { text });
    }
    else
    {
        listBox1.Items.Add(text);
        listBox1.Update();
    }
}

// event
void Instance_Message(string text)
{
    ThreadSafeMsg(text);
}
Is this the optimum way to handle this in .NET 2.0? What about .NET 3.5?
There's no point in using Control.InvokeRequired; you know that it is always true. The Elapsed event is raised on a threadpool thread, never the UI thread.
Which makes it kind of pointless to use a System.Timers.Timer; just use System.Windows.Forms.Timer. There is no need to monkey with Control.Begin/Invoke, and you can't crash your program with an ObjectDisposedException when the event is raised just as the user closes the form.
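A minimal sketch of that suggestion (illustrative only, not code from the answer; Form1_Load is a hypothetical handler name): the Forms timer's Tick handler already runs on the UI thread, so the ListBox can be updated directly.

// Tick runs on the UI thread, so no Invoke is needed.
private void Form1_Load(object sender, EventArgs e)
{
    var timer1 = new System.Windows.Forms.Timer();
    timer1.Interval = 1000;
    timer1.Tick += (s, args) =>
    {
        listBox1.Items.Add(DateTime.Now.ToString()); // safe: we are already on the UI thread
    };
    timer1.Start();
}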
You have a cross-thread exception because you are trying to access the items from outside the UI thread. Delegates are necessary in order to hook into the message pump and make the UI change.
If you use the Forms Timer, then you'll be on the UI thread. You'll have the same problem, however, if you use a BackgroundWorker, and you'll need a delegate there as well.
See Threading in Windows Forms
It is pretty much the same in .NET 3.5, since it is about Windows Forms and cross-threading when you are accessing the UI thread from some other worker thread.
However, you can make the code smaller by using the generic Action<> and Func<> delegates, avoiding declaring the delegate types manually.
Something like this:
private void ThreadSafeMsg(string text)
{
    if (this.InvokeRequired)
        this.Invoke(new Action<string>(ThreadSafeMsg), new object[] { text });
    else
    {
        // Stuff...
    }
}
The easiest solution in your case is to use the System.Windows.Forms.Timer class, but in the general case you can use the following approach to access GUI stuff from a non-GUI thread (the underlying pattern applies to .NET 2.0, but the extension-method form shown here needs .NET 3.5):
public static class ControlExtensions
{
    public static void InvokeIfNeeded(this Control control, Action doit)
    {
        if (control.InvokeRequired)
            control.Invoke(doit);
        else
            doit();
    }
}
And you may use it like this, no matter which thread you are on, the UI thread or another one:
this.InvokeIfNeeded(() =>
{
    listBox1.Items.Add(text);
    listBox1.Update();
});
Depending upon what your action is doing, Control.BeginInvoke may be better than Control.Invoke. Control.Invoke will wait for the UI thread to process your message before it returns; if the UI thread is blocked, it will wait forever. Control.BeginInvoke will enqueue a message for the UI thread and return immediately. Because there's no way to avoid an exception if a control gets disposed immediately before you try to BeginInvoke on it, you need to catch (and possibly swallow) the exception (I think it may be either ObjectDisposedException or InvalidOperationException depending upon timing). You also need to set a flag or counter when you're about to post a message and clear or decrement it in the message handler (probably using Threading.Interlocked.Increment/Decrement), to ensure that you don't enqueue an excessive number of messages while the UI thread is blocked.
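A rough sketch of that counter idea (illustrative; the field and method names are made up, the cap value is arbitrary, and it assumes the code lives inside the form class with using System.Threading in scope):

private int _pendingUpdates;                 // UI updates posted but not yet processed
private const int MaxPendingUpdates = 100;   // arbitrary cap for this sketch

private void PostToUi(string text)
{
    // Drop the update if the UI thread is falling behind (e.g. blocked).
    if (Interlocked.Increment(ref _pendingUpdates) > MaxPendingUpdates)
    {
        Interlocked.Decrement(ref _pendingUpdates);
        return;
    }
    try
    {
        listBox1.BeginInvoke((MethodInvoker)delegate
        {
            try
            {
                listBox1.Items.Add(text);
            }
            finally
            {
                Interlocked.Decrement(ref _pendingUpdates); // the posted message was processed
            }
        });
    }
    catch (ObjectDisposedException)
    {
        Interlocked.Decrement(ref _pendingUpdates); // control disposed before the post; ignore
    }
    catch (InvalidOperationException)
    {
        Interlocked.Decrement(ref _pendingUpdates); // handle not created or being destroyed; ignore
    }
}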

C# threading pattern that will let me flush

I have a class that implements the Begin/End invocation pattern, where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop, but it does take a bit of time to process, so the work itself is not easily stopped.
I now have a side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, and they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestions on a threading pattern that fits my needs?
Edit: I'm currently targeting the 2.0 framework. I'm thinking that a producer/consumer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class, every time the caller uses Begin with a callback I create a whole new thread on the thread pool. This call does a very small amount of processing and is not where I want to cancel; it's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool will create 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100%, not only with the work itself but with the creation of the work, because of the way I queued the threads.
I was thinking that by using the producer/consumer pattern I could put these jobs into my own queue, which would allow me to moderate how many threads I create and avoid the CPU spike from creating all the concurrent threads, and that I might be able to allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself, but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
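For illustration, here is a minimal .NET 2.0-style producer/consumer sketch of that idea (FlushableWorkQueue and FlushQueue are made-up names, not an existing API): a fixed pool of worker threads drains a Queue<WaitCallback> guarded by a lock, and FlushQueue discards everything that has not started yet.

using System;
using System.Collections.Generic;
using System.Threading;

// Minimal producer/consumer queue with a flush (illustrative sketch, .NET 2.0-compatible).
public class FlushableWorkQueue
{
    private readonly Queue<WaitCallback> _jobs = new Queue<WaitCallback>();
    private readonly object _sync = new object();

    public FlushableWorkQueue(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Thread t = new Thread(WorkerLoop);
            t.IsBackground = true;
            t.Start();
        }
    }

    public void Enqueue(WaitCallback job)
    {
        lock (_sync)
        {
            _jobs.Enqueue(job);
            Monitor.Pulse(_sync); // wake one sleeping worker
        }
    }

    // Discard every job that has not started yet; jobs already running are unaffected.
    public void FlushQueue()
    {
        lock (_sync)
        {
            _jobs.Clear();
        }
    }

    private void WorkerLoop()
    {
        while (true)
        {
            WaitCallback job;
            lock (_sync)
            {
                while (_jobs.Count == 0)
                {
                    Monitor.Wait(_sync); // sleep until Enqueue pulses
                }
                job = _jobs.Dequeue();
            }
            try
            {
                job(null);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex); // don't let one bad job kill the worker thread
            }
        }
    }
}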
EDIT: My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone who reads this question and is using 4.0.
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed wrote an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread is running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean up resources as needed, and report that the operation was canceled as part of the End arguments.
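A rough sketch of that Begin/Cancel/End idea (illustrative names throughout; it assumes the worker's inner loop can actually poll the flag between items):

using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative Begin/Cancel/End sketch: the worker polls a volatile flag between work items.
public class CancellableOperation
{
    private volatile bool _cancelRequested;
    private Thread _worker;

    public void Begin(IList<string> items, Action<bool> onCompleted)
    {
        _cancelRequested = false;
        _worker = new Thread(delegate()
        {
            bool cancelled = false;
            foreach (string item in items)
            {
                if (_cancelRequested)
                {
                    cancelled = true; // stop between items; release resources here if needed
                    break;
                }
                ProcessOneItem(item); // stand-in for the real per-item work
            }
            onCompleted(cancelled); // "End": report whether the operation was cancelled
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    public void Cancel()
    {
        _cancelRequested = true; // the worker notices this at its next poll
    }

    private void ProcessOneItem(string item)
    {
        Thread.Sleep(100); // placeholder work
    }
}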
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
public class BackgroundWorkerReplaceable : IDisposable
{
BackgroundWorker activeWorker = null;
object activeWorkerSyncRoot = new object();
List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
DoWorkEventHandler doWork;
RunWorkerCompletedEventHandler runWorkerCompleted;
public bool IsBusy
{
get { return activeWorker != null ? activeWorker.IsBusy : false; }
}
public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
{
this.doWork = doWork;
this.runWorkerCompleted = runWorkerCompleted;
ResetActiveWorker();
}
public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
{
try
{
lock(activeWorkerSyncRoot)
{
if(activeWorker.IsBusy)
{
ResetActiveWorker();
}
// This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
if(!activeWorker.IsBusy)
{
// Optionally handle ProgressChangedEventHandler and other features (under the lock!)
// Work on this new param
activeWorker.RunWorkerAsync(param);
}
else
{ // This should never happen since we create new workers when there's none available!
throw new LogicException(...); // assert or similar
}
}
}
catch(...) // InvalidOperationException and Exception
{ // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
}
}
public void Cancel()
{
ResetActiveWorker();
}
public void Dispose()
{ // You should implement a proper Dispose/Finalizer pattern
if(activeWorker != null)
{
activeWorker.CancelAsync();
}
foreach(BackgroundWorker worker in workerPool)
{
worker.CancelAsync();
worker.Dispose();
// perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
}
}
void ResetActiveWorker()
{
lock(activeWorkerSyncRoot)
{
if(activeWorker == null)
{
activeWorker = GetAvailableWorker();
}
else if(activeWorker.IsBusy)
{ // Current worker is busy - issue a cancel and set another active worker
activeWorker.CancelAsync(); // Make sure WorkerSupportsCancellation must be set to true [Link9372]
// Optionally handle ProgressEventHandler -=
activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
}
//else - do nothing, activeWorker is already ready for work!
}
}
BackgroundWorker GetAvailableWorker()
{
// Loop through workerPool and return a worker if IsBusy is false
// if the loop exits without returning...
if(activeWorker != null)
{
workerPool.Add(activeWorker); // Save the old worker for possible future use
}
return GenerateNewWorker();
}
BackgroundWorker GenerateNewWorker()
{
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerSupportsCancellation = true; // [Link9372]
//worker.WorkerReportsProgress
worker.DoWork += doWork;
worker.RunWorkerCompleted += runWorkerCompleted;
// Other stuff
return worker;
}
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost of using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit. So, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately canceled when the thread is, so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the canceled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being canceled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have a Cancel/Cancelled property which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.
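As a small illustration of that cancellation-aware shape (a generic sketch, not part of the class above; it assumes using System.ComponentModel and System.Threading are in scope): the DoWork handler polls the worker's CancellationPending flag and sets e.Cancel so that RunWorkerCompleted can tell a cancelled run from a normal one.

// Generic cancellation-aware DoWork / RunWorkerCompleted pair (illustrative sketch).
void doWork(object sender, DoWorkEventArgs e)
{
    var worker = (BackgroundWorker)sender;
    for (int i = 0; i < 100; i++)
    {
        if (worker.CancellationPending)
        {
            e.Cancel = true;   // marks the run as cancelled for RunWorkerCompleted
            return;            // clean up here (e.g. cancel the database command) before returning
        }
        Thread.Sleep(50);      // stand-in for one slice of real work
    }
    e.Result = "done";
}

void runWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Cancelled)
    {
        // The worker was replaced or cancelled; ignore or log.
    }
    else if (e.Error == null)
    {
        // Use e.Result here.
    }
}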
