I have a function, Shutdown(), which is used to terminate my Windows Form (it does some cleanup and calls this.Close() at the end).
In my application I have three threads of execution:
The UI
A background worker
A timer
Each one of these can call Shutdown(): the user pressing a button (UI), the timer expiring (timer), or the background worker completing its task. This leads me to worry that if the timing is really bad, more than one thread could call Shutdown() at the same time.
So how can I ensure that only the first caller actually executes it? Any subsequent calls should just be ignored, since the call will end up terminating the application anyway.
It's not really clear from your question what the difficulty is. What have you tried? What trouble did you run into?
The obvious, trivial implementation would be something like this:
private readonly object _lock = new object();
private bool _shuttingDown;

public void Shutdown()
{
    lock (_lock)
    {
        if (_shuttingDown) return;
        _shuttingDown = true;
    }

    // do work here...
}
Is there some reason that doesn't work in your scenario? If so, please provide a good, minimal, complete code example that shows clearly what you've tried, and describe precisely what that code does and how that differs from what you want it to do.
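If you would rather not take a lock at all, the same first-caller-wins guard can be sketched with an Interlocked flag instead (an illustrative alternative, using System.Threading):

private int _shutdownRequested; // 0 = not requested yet, 1 = already requested

public void Shutdown()
{
    // Interlocked.Exchange atomically stores 1 and returns the old value,
    // so only the very first caller sees 0 here; later callers return early.
    if (Interlocked.Exchange(ref _shutdownRequested, 1) != 0) return;

    // do work here...
}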
Related
I have a multi-threaded UI application that starts numerous background threads. A lot of these threads execute code that looks as follows:
public void Update()
{
    if (Dispatcher.HasShutdownStarted) return;
    Dispatcher.Invoke(() => { ... });
    ...
}
Then I sometimes may have a thread execute the following code
public void Shutdown()
{
    if (Dispatcher.HasShutdownStarted) return;
    Dispatcher.InvokeShutdown();
}
The problem is that sometimes one thread executes Dispatcher.InvokeShutdown() AFTER another thread has checked Dispatcher.HasShutdownStarted but before it gets to Dispatcher.Invoke(()=>{...}). This means there will be a thread trying to execute a lambda on the Dispatcher after the Dispatcher has begun to shut down, and that's when I get exceptions. What is the best solution to this?
The problem you face is that HasShutdownStarted is checked before the code inside the Invoke is executed (because that code is queued on the dispatcher).
I think a better way is to check it inside the Invoke; this way you don't need any locks.
public void Update()
{
    Dispatcher.Invoke(() =>
    {
        if (Dispatcher.HasShutdownStarted) return;
        ...
    });
}
With the help of others I managed to come up with the following solution to my problem and thought I'd share it. Calling Dispatcher.Invoke(...) after Dispatcher.InvokeShutdown() will always lead to a TaskCanceledException being thrown (as far as I can tell). Thus, checking Dispatcher.HasShutdownStarted inside of the Invoke delegate will not work.
What I did was create an application global CancellationToken by creating a static CancellationTokenSource. I now invoke the Dispatcher as follows:
Dispatcher.Invoke(()=>{...}, DispatcherPriority.Send, GlobalMembers.CancellationTokenSource.Token);
Then, when I wish to invoke shutdown on my dispatcher, I do the following:
GlobalMembers.CancellationTokenSource.Cancel();
Dispatcher.InvokeShutdown();
If by any chance I try to run Dispatcher.Invoke(()=>{...}, DispatcherPriority.Send, GlobalMembers.CancellationTokenSource.Token) after cancelling the global token and after invoking Dispatcher.InvokeShutdown(), nothing happens as the token is already cancelled and thus the action is not run.
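Put together, a minimal sketch of the pattern might look like this (GlobalMembers and the method names come from the question; the Invoke overload taking a CancellationToken requires .NET 4.5 or later, and the behavior with an already-cancelled token is as observed above - the action simply isn't run):

public static class GlobalMembers
{
    // Application-wide source; cancel it exactly once, right before shutting the dispatcher down.
    public static readonly CancellationTokenSource CancellationTokenSource = new CancellationTokenSource();
}

public void Update()
{
    Dispatcher.Invoke(() => { /* touch UI state here */ },
                      DispatcherPriority.Send,
                      GlobalMembers.CancellationTokenSource.Token);
}

public void Shutdown()
{
    GlobalMembers.CancellationTokenSource.Cancel(); // prevents any further Invoke bodies from running
    Dispatcher.InvokeShutdown();
}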
I'm writing an application with a critical region.
I decided to use an AutoResetEvent to achieve mutual exclusion.
Here's the code
public class MyViewModel
{
    private AutoResetEvent sync = new AutoResetEvent(true);

    private async Task CriticalRegion()
    {
        Dosomething();
    }

    public async Task Button_Click()
    {
        Debug.WriteLine("Entering Button_Click");
        sync.WaitOne();
        try
        {
            await CriticalRegion();
        }
        finally
        {
            sync.Set();
            Debug.WriteLine("Leaving Button_Click");
        }
    }
}
I have a button whose click event calls the Button_Click() method
It works normally. But if I'm quick enough to click the button a second time before the first call to Button_Click() completes, the whole app stops responding.
In the Debug window I find something like this
Entering Button_Click
Entering Button_Click
Looks like the method never completes.
I struggled a bit and found that if I change sync.WaitOne(); to
if (!sync.WaitOne(TimeSpan.FromSeconds(1)))
{
    return;
}
In this case my app is able to avoid the deadlock, but I don't know why it works.
I only know about IPC from my OS course and the async/await pattern in C#; I'm not that familiar with threading in the .NET world.
I really want to understand what's really going on behind the scenes.
Thanks for any replies ;)
You have a deadlock because WaitOne blocks the main thread (the button click handler is executed on the main thread), and you haven't called ConfigureAwait(false) on the awaited call, which means the code after the await tries to resume on the main thread even though that thread is blocked, and that causes the deadlock.
I suggest reading this post for a more thorough explanation of the deadlock situation.
For your code, I would suggest pushing the locking deeper, probably within the async Task, and trying to use a more suitable pattern for locking, preferably the lock statement, because using Event objects is awkward for mutual exclusion, as Hans stated in the comment.
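If the critical region itself has to stay asynchronous, note that you cannot await inside a lock statement; a minimal sketch of an async-friendly alternative (assuming .NET 4.5 or later) is SemaphoreSlim, whose WaitAsync waits without blocking the UI thread:

private readonly SemaphoreSlim sync = new SemaphoreSlim(1, 1); // at most one caller inside at a time

public async Task Button_Click()
{
    await sync.WaitAsync(); // asynchronous wait: the UI thread keeps pumping messages
    try
    {
        await CriticalRegion();
    }
    finally
    {
        sync.Release();
    }
}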
AutoResetEvent.WaitOne() will block indefinitely until you call AutoResetEvent.Set(), which you never seem to do except after the WaitOne() call.
Quoting the AutoResetEvent.WaitOne() documentation:
Blocks the current thread until the current WaitHandle receives a signal.
I have a multithreaded application which is used to extract data from a website. I wanted to be able to pause and resume multiple threads from the UI. After searching on the web I came to know about two approaches that I can use to control (pause/resume) my threads.
Using the Monitor class.
Using the EventWaitHandle and ManualResetEvent classes.
What I did:
I have a function named GetHtml that simply returns the HTML of the website. I am only showing a fraction of this function for brevity.
public string GetHtml(string url, bool isProxy = false)
{
string result = "";
ExecutionGateway();
//->> EXTRA CODE FOR FETCHING HTML
return result;
}
I have a function ControlTasks used to control the threads from the UI. Below I explain ControlTasks using both approaches, the Monitor class as well as the EventWaitHandle class (I will also briefly explain how the function ExecutionGateway works).
1. Using the Monitor class
private object taskStopper = new object();
public bool ControlTasks(bool isPause)
{
try
{
if (isPause)
{
Monitor.Enter(taskStopper);
}
else
{
Monitor.Exit(taskStopper);
}
return true;
}
catch (Exception ex)
{
Logger.Instance.WriteLog("ControlTasks:", ex, Logger.LogTypes.Error);
return false;
}
}
ControlTasks is called from the UI: if isPause is true, an exclusive lock is taken on the object taskStopper; otherwise the lock is released. Now here comes the function ExecutionGateway, which is used to acquire the lock on taskStopper but does nothing else, as the code below shows.
private void ExecutionGateway()
{
lock(taskStopper){ }
}
In this way all running threads enter a waiting state while isPause is true in ControlTasks, because taskStopper is exclusively locked; when isPause is false, all threads resume their processing.
2. Using the EventWaitHandle class
private EventWaitHandle handle = new ManualResetEvent(true);
public bool ControlTasks(bool isPause)
{
try
{
if (isPause)
{
handle.Reset();
}
else
{
handle.Set();
}
return true;
}
catch (Exception ex)
{
Logger.Instance.WriteLog("ControlTasks:", ex, Logger.LogTypes.Error);
return false;
}
}
This code fundamentally does the same job: the event is signaled or non-signaled depending on the isPause parameter. Now, the corresponding ExecutionGateway method:
private void ExecutionGateway()
{
handle.WaitOne(Timeout.Infinite);
}
Problem:
What is the difference between these two approaches? Is one better than the other? Are there any other ways to do this?
The main problem I have faced many times, with either of the above methods: if I have 100 threads, pause them, and then resume them after 5 or more minutes, the UI starts hanging. The UI is terribly unresponsive; it gets updated but keeps on hanging, and I keep getting the message "Not Responding" at intervals. One thing I want to mention: each thread extracts data and notifies the UI about the fetched data through event handling. What could be the reason for this unresponsiveness? Is it a problem with my approach(es)?
I think it's always desirable to use a construct that communicates your intent clearly. You want a signal to other threads that they should wait (i.e. stop doing what they're doing) until you signal to them that they can start again. You have one controlling thread (your UI) and potentially many threads doing work and marshalling results back to the UI.
Approach 1 isn't ideal because locks (at least in my experience) are most often used to protect a resource that isn't safe to use from multiple threads at once, for example writing to a shared field.
Approach 2 makes much more sense, a manual reset event functions like a gate: open the gate and things can pass through, close it and they can't. That's exactly the behaviour you're looking for and I think most developers would understand quite quickly that that's your intent.
As for your second problem, it sounds like you're getting waves of messages clogging the UI. If you stop all 100 of your threads then start them at the same time, there's a good chance they're going to finish their work quite close together and all be trying to send the result of their work to the UI thread. To solve that you could try staggering the work when you restart or use fewer threads. Another option would be to aggregate results and only dispatch to the UI every x seconds, as sketched below - but that's a bit more work.
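As a rough illustration of that last idea (the ScanResult type, the UpdateGrid helper, and the UI timer are made up for the sketch; ConcurrentQueue needs .NET 4, and on older frameworks a Queue<T> guarded by a lock does the same job), workers enqueue their results and a timer on the UI thread drains them in batches:

// Workers call this instead of raising one UI event per item.
private readonly ConcurrentQueue<ScanResult> pendingResults = new ConcurrentQueue<ScanResult>();

public void OnWorkerResult(ScanResult result)
{
    pendingResults.Enqueue(result);
}

// A forms Timer ticking on the UI thread every few seconds drains the queue in one go.
private void uiTimer_Tick(object sender, EventArgs e)
{
    List<ScanResult> batch = new List<ScanResult>();
    ScanResult item;
    while (pendingResults.TryDequeue(out item))
        batch.Add(item);

    if (batch.Count > 0)
        UpdateGrid(batch); // one UI update for the whole batch instead of 100 separate events
}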
In Option 1, using the Monitor class means that only one thread owns the exclusive lock of the monitor object at a time. This means that of your 100 threads, only 1 is processing at a time, which kind of defeats the purpose of using threads. It also means that your GUI thread has to wait until the current worker thread has finished before it can obtain the lock.
The ManualResetEvent is a much better choice as it is used to signal between threads, rather than protect against multiple thread access.
I do not know why your GUI is so unresponsive using the second option, but I do not think it is related to your manual reset event. More likely you have a different problem where the GUI thread is getting swamped. You suggest you have 100 threads all firing notification events to the GUI which would seem a likely culprit.
What happens if you debug your app, and just randomly break when your GUI is unresponsive? Doing this many times should show what your GUI thread is up to and where the bottleneck is.
Sorry for the lengthy post, I just want to illustrate my situation as best as possible. Read the items in bold and check the code if you want the quick gist of the issue.
I use ClickOnce to deploy a C# application, and have opted to have my application check for updates manually using the ApplicationDeployment Class rather than letting it do the update checking for me.
The program is a specialized network scanner that searches for network devices made by the company I work for. Once the main window is loaded, a prompt is displayed asking if the user would like to scan the network. If they say Yes, a scan begins which can take a minute or two to complete depending on their network settings; otherwise it just waits for the user to do some action.
One of the last things I do in Form_Load is create a new thread that checks for updates. This had all been working fine for several months through about 12 releases and has suddenly stopped working. I didn't change the update code at all, nor change the sequence of what happens when the app starts.
In staring at the code, I think I see why it is not working correctly and wanted to confirm whether what I think is correct. If it is, it raises the question of why it DID work before - but I'm not too concerned with that either.
Consider the following code:
frmMain.cs
private void Form1_Load(object sender, EventArgs e)
{
// set up ui, load settings etc
Thread t = new Thread(new ParameterizedThreadStart(StartUpdateThread));
t.Start(this);
}
private void StartUpdateThread(object param)
{
IWin32Window owner = param as IWin32Window;
frmAppUpdater.CheckForUpdate(owner);
}
frmAppUpdater.cs
public static void CheckForUpdate(IWin32Window owner)
{
if (ApplicationDeployment.IsNetworkDeployed) {
Console.WriteLine("Going to check for application updates.");
parentWindow = owner;
ApplicationDeployment ad = ApplicationDeployment.CurrentDeployment;
ad.CheckForUpdateCompleted += new CheckForUpdateCompletedEventHandler(ad_CheckForUpdateCompleted);
ad.CheckForUpdateProgressChanged += new DeploymentProgressChangedEventHandler(ad_CheckForUpdateProgressChanged);
ad.CheckForUpdateAsync();
// CAN/WILL THE THREAD CREATED IN FORM1_LOAD BE TERMINATED HERE???
}
}
When the CheckForUpdateAsync() callback completes, if no update is available the method simply returns; if an update IS available, I use a loop to block until two things have happened: the user has dismissed the "Would you like to scan" prompt AND no scan is currently running.
The loop looks like this, which takes place in ad_CheckForUpdateCompleted:
while (AppGlobals.ScanInProgress || AppGlobals.ScanPromptVisible) {
System.Threading.Thread.Sleep(5000);
}
I sleep for 5 seconds because I figured this was happening in a separate thread and it has seemed to work well for a while.
My main question about the above code is:
When ad.CheckForUpdateAsync(); is called from CheckForUpdate, does the thread I created in Form1_Load terminate (or might it terminate)? I suspect it may, because the asynchronous call lets the method return, and the work then continues on another thread?
The only reason I am confused is that this method WAS working for so long without hanging the application, and now all of a sudden it hangs; my best effort at debugging revealed that it was that Sleep call blocking the app.
I'd be happy to post the full code for frmAppUpdater.cs if it would be helpful.
When ad.CheckForUpdateAsync(); is called from CheckForUpdate does
the thread I created in Form1_Load terminate (or might it terminate)?
If the CheckForUpdateAsync() call is asynchronous, then yes, the thread will terminate; otherwise it won't.
If you suspect the Sleep to have caused the application hang, then the two variables AppGlobals.ScanInProgress and AppGlobals.ScanPromptVisible are probably always set to true! You should start looking at the code that sets them to true and see what is going on there.
In order to avoid an application hang, you could introduce a variable to avoid sleeping indefinitely:
int nTrials = 0;
while ((AppGlobals.ScanInProgress || AppGlobals.ScanPromptVisible) && (nTrials < 5)) {
System.Threading.Thread.Sleep(5000);
nTrials++;
}
// Check the results and act accordingly
I personally do not like using Sleep for thread synchronization. .NET offers a bunch of classes that are perfect for thread synchronization, WaitHandle being one of them.
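For instance, as a rough sketch (the ReadyForUpdatePrompt member is a hypothetical addition to AppGlobals, and the scan/prompt code would have to signal it), the Sleep loop could be replaced with a ManualResetEvent wait:

// In AppGlobals (hypothetical): signaled when no scan is running and the scan prompt has been dismissed.
public static readonly ManualResetEvent ReadyForUpdatePrompt = new ManualResetEvent(false);

// Call AppGlobals.ReadyForUpdatePrompt.Set() wherever the scan finishes and wherever the prompt
// is dismissed (and Reset() whenever a new scan starts).

// In ad_CheckForUpdateCompleted, instead of the while/Sleep loop:
if (!AppGlobals.ReadyForUpdatePrompt.WaitOne(TimeSpan.FromMinutes(5)))
{
    return; // still busy after 5 minutes - give up instead of blocking indefinitely
}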
See this post at Asynchronous Delegates Vs Thread/ThreadPool?
Your Form_Load method seems to be doing synchronous work. You mention that you are using ClickOnce deployment. Has the binary location changed since the previous release, or have permissions on that resource changed? It looks like the work (checking for updates) in the thread never finishes and control is never handed back to the form.
As an immediate fix, I would change the Thread approach to a delegate - if you use a delegate, this becomes less of a customer issue (the form will respond to the end user), but the underlying problem remains.
As the next step, I would go through http://msdn.microsoft.com/en-us/library/ms229001.aspx and do the troubleshooting.
I have a class that implements the Begin/End invocation pattern where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop, but it does take a bit of time to process, so the work itself is not easily stopped.
I now have a side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, and they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, perhaps with an explicit FlushQueue() method in my class that allows the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm currently targeting the 2.0 framework. I'm currently thinking that a Consumer/Producer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class, every time the caller uses Begin with a callback I queue a whole new work item on the thread pool. That call does a very small amount of processing and is not where I want to cancel. It's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool will create 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100% with not only the work but the creation of the work because of the way I queued the threads.
I was thinking that by using the Producer/Consumer pattern I could queue these jobs into my own queue, which would let me moderate how many threads I create and avoid the CPU spike from creating all the concurrent threads, and that I might be able to allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself but figured SO was a good place to have someone either say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
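To make the idea concrete, here is a rough .NET 2.0-compatible sketch of such a producer/consumer queue with a Flush method (class and member names are made up; items already running cannot be stopped, matching the constraint above that the work itself is not easily stopped):

public class WorkQueue
{
    private readonly Queue<WaitCallback> pending = new Queue<WaitCallback>();
    private readonly object gate = new object();

    public WorkQueue(int workerCount)
    {
        // A small, fixed number of workers caps concurrency instead of one pool thread per request.
        for (int i = 0; i < workerCount; i++)
        {
            Thread t = new Thread(WorkLoop);
            t.IsBackground = true;
            t.Start();
        }
    }

    public void Enqueue(WaitCallback work)
    {
        lock (gate)
        {
            pending.Enqueue(work);
            Monitor.Pulse(gate); // wake one waiting worker
        }
    }

    public void Flush()
    {
        lock (gate)
        {
            pending.Clear(); // abandon everything not yet started
        }
    }

    private void WorkLoop()
    {
        while (true)
        {
            WaitCallback work;
            lock (gate)
            {
                while (pending.Count == 0)
                    Monitor.Wait(gate); // sleep until Enqueue pulses
                work = pending.Dequeue();
            }
            work(null);
        }
    }
}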
EDIT My answer does not apply since OP is using 2.0. Leaving up and switching to CW for anyone who reads this question and using 4.0
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed did an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread's running a loop, just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean-up resources as needed, and report that the operation was canceled as part of the End arguments.
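A bare-bones sketch of that cancel-flag idea (the class and the work inside the loop are illustrative only):

public class CancellableOperation
{
    private volatile bool cancelRequested;

    public void Cancel()
    {
        cancelRequested = true; // observed by the worker on its next poll
    }

    // Runs on a worker thread, e.g. ThreadPool.QueueUserWorkItem(op.DoWork);
    public void DoWork(object state)
    {
        for (int step = 0; step < 100; step++) // stand-in for the real units of work
        {
            if (cancelRequested)
            {
                // clean up partial results here, then report "cancelled" through the End/completed callback
                return;
            }
            Thread.Sleep(50); // placeholder for one unit of real processing
        }
    }
}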
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
public class BackgroundWorkerReplaceable : IDisposable
{
BackgroundWorker activeWorker = null;
object activeWorkerSyncRoot = new object();
List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
DoWorkEventHandler doWork;
RunWorkerCompletedEventHandler runWorkerCompleted;
public bool IsBusy
{
get { return activeWorker != null ? activeWorker.IsBusy : false; }
}
public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
{
this.doWork = doWork;
this.runWorkerCompleted = runWorkerCompleted;
ResetActiveWorker();
}
public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
{
try
{
lock(activeWorkerSyncRoot)
{
if(activeWorker.IsBusy)
{
ResetActiveWorker();
}
// This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
if(!activeWorker.IsBusy)
{
// Optionally handle ProgressChangedEventHandler and other features (under the lock!)
// Work on this new param
activeWorker.RunWorkerAsync(param);
}
else
{ // This should never happen since we create new workers when there's none available!
throw new LogicException(...); // assert or similar
}
}
}
catch(...) // InvalidOperationException and Exception
{ // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
}
}
public void Cancel()
{
ResetActiveWorker();
}
public void Dispose()
{ // You should implement a proper Dispose/Finalizer pattern
if(activeWorker != null)
{
activeWorker.CancelAsync();
}
foreach(BackgroundWorker worker in workerPool)
{
worker.CancelAsync();
worker.Dispose();
// perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
}
}
void ResetActiveWorker()
{
lock(activeWorkerSyncRoot)
{
if(activeWorker == null)
{
activeWorker = GetAvailableWorker();
}
else if(activeWorker.IsBusy)
{ // Current worker is busy - issue a cancel and set another active worker
activeWorker.CancelAsync(); // Make sure WorkerSupportsCancellation is set to true [Link9372]
// Optionally handle ProgressEventHandler -=
activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
}
//else - do nothing, activeWorker is already ready for work!
}
}
BackgroundWorker GetAvailableWorker()
{
// Loop through workerPool and return a worker if IsBusy is false
// if the loop exits without returning...
if(activeWorker != null)
{
workerPool.Add(activeWorker); // Save the old worker for possible future use
}
return GenerateNewWorker();
}
BackgroundWorker GenerateNewWorker()
{
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerSupportsCancellation = true; // [Link9372]
//worker.WorkerReportsProgress
worker.DoWork += doWork;
worker.RunWorkerCompleted += runWorkerCompleted;
// Other stuff
return worker;
}
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally end up with a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost of using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit, so, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately cancelled when the thread is - so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the cancelled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being cancelled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have Cancel/Cancelled properties which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.
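A minimal DoWork/RunWorkerCompleted pair honouring that might look like the following sketch (the loop body is a placeholder; the cancellation plumbing is the point):

void doWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;

    for (int step = 0; step < 100; step++) // stand-in for the real units of work
    {
        if (worker.CancellationPending)
        {
            e.Cancel = true; // surfaces as e.Cancelled == true in RunWorkerCompleted
            return;          // clean up any partial work before returning
        }
        Thread.Sleep(50); // placeholder for one unit of real processing
    }
}

void runWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Cancelled)
    {
        // The operation was cancelled or replaced - don't touch e.Result here.
        return;
    }
    // Handle the real result here.
}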