I have this list of sounds:
List<SourceVoice> runningInstances;
I attach an event handler to each sound object so that it is removed from the list when it stops.
sourceVoice.StreamEnd += delegate
{
    lock (runningInstances)
    {
        runningInstances.Remove(sourceVoice);
    }
};
I also have this stop function, which can be called from any thread.
public void stop(int fadeoutTime)
{
    lock (runningInstances)
    {
        foreach (var sourceVoice in runningInstances)
        {
            if (!sourceVoice.IsDisposed)
            {
                sourceVoice.Stop();
                sourceVoice.FlushSourceBuffers();
                sourceVoice.DestroyVoice();
                sourceVoice.Dispose();
            }
        }
        runningInstances.Clear();
    }
}
I thought that since the event handler is a delegate, it would always wait until the object is unlocked; however, it seems to freeze there.
There are two possibilities:
1. The event is raised on the same thread as sourceVoice.Stop(). The lock() then has no effect because locks are re-entrant, but it is also harmless; the items should already have been removed by the time Clear() is called.
2. The event is raised on another (thread pool) thread; that is up to sourceVoice.Stop(). The lock() will block the event handlers until after runningInstances.Clear(); after that the handlers run, and removing from an empty List<> is not an error.
Neither would cause any 'freezing', so there must be something relevant in code we don't see.
Delegates are just callbacks; they make no guarantees about threading. You may want to look at the ConcurrentBag<T> class, which is already thread-safe, so you can worry less about locking around the collection.
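For illustration, here is a rough sketch of what the stop loop could look like over a concurrent collection. Note that ConcurrentBag<T> has no Remove(item) method, so if the StreamEnd handler still needs to drop individual voices, a ConcurrentDictionary<SourceVoice, byte> used as a set (TryAdd/TryRemove) may be a better fit; both collection choices are assumptions here, not part of the original code.

// Sketch only: drain a ConcurrentBag (System.Collections.Concurrent)
// instead of locking a List<T>.
ConcurrentBag<SourceVoice> runningInstances = new ConcurrentBag<SourceVoice>();

public void stop(int fadeoutTime)
{
    SourceVoice sourceVoice;
    // TryTake removes one item at a time until the bag is empty.
    while (runningInstances.TryTake(out sourceVoice))
    {
        if (!sourceVoice.IsDisposed)
        {
            sourceVoice.Stop();
            sourceVoice.FlushSourceBuffers();
            sourceVoice.DestroyVoice();
            sourceVoice.Dispose();
        }
    }
}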
It looks like one of the calls within the lock scope of the stop method is probably causing the StreamEnd event to fire. You could test for this by stepping through the code in the stop method and seeing if it jumps into the event handler. I would hazard a guess that it's the sourceVoice.Stop() call.
You can change your stop method as below if sourceVoice.Stop() always raises the sourceVoice.StreamEnd event.
public void stop(int fadeoutTime)
{
    // Copy the list under the lock, then release it before touching the voices,
    // so the StreamEnd handlers can take the lock and remove themselves.
    List<SourceVoice> snapshot;
    lock (runningInstances)
    {
        snapshot = runningInstances.ToList<SourceVoice>();
    }

    foreach (var sourceVoice in snapshot)
    {
        if (!sourceVoice.IsDisposed)
        {
            sourceVoice.Stop();
            sourceVoice.FlushSourceBuffers();
            sourceVoice.DestroyVoice();
            sourceVoice.Dispose();
        }
    }
}
For more about .ToList(), see:
ToList() -- Does it Create a New List?
I have a piece of code that has two event handlers. I want these two event handlers to notify another method that there is some work to be done.
I have implemented this using a ManualResetEvent, but I am unsure if this is the best way to achieve what I want, or if there is some better way.
static ManualResetEvent autoEvent = new ManualResetEvent(false);

void begin() {
    ThreadPool.QueueUserWorkItem(new WaitCallback(genericHandler));
}

void OnEvent1(object sender) {
    autoEvent.Set();
}

void OnEvent2(object sender) {
    autoEvent.Set();
}

void genericHandler(object info) {
    while (true) {
        autoEvent.WaitOne();
        // do some work
    }
}
One of the most important questions I have is this: after autoEvent.WaitOne(), I do some work, and that work takes time. In the meantime, another event is triggered and Set() is called before genericHandler gets back to WaitOne(). When WaitOne() is reached again, will it wait for another Set(), or will it proceed because a Set() was called before it reached WaitOne()?
Is this the best way to implement multiple publishers and one subscriber pattern in C#? Or should I use another thing instead of the ManualResetEvent?
Note: The genericHandler is in a different thread because Event1 and Event2 have different priorities, so in the handler I check whether Event1 has pending work, before checking Event2.
Your code does essentially what you think it does, and the race condition you describe is not a problem. Once the event has been set it stays in the "signaled" state, so a Set() that arrives while you are still doing work simply leaves it signaled, and the next WaitOne() returns immediately instead of waiting for another Set(). One caveat: a ManualResetEvent is only unsignaled by an explicit call to Reset(), which your loop never makes, so after the first Set() the loop will spin freely. An AutoResetEvent, which resets itself automatically when it releases a waiter, gives the wait-once-per-Set() behaviour you appear to want (and matches the autoEvent field name).
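For illustration, a minimal sketch (not from the question) of the difference in reset behaviour between the two event types:

var are = new AutoResetEvent(false);
are.Set();        // signaled while nobody is waiting
are.WaitOne();    // returns immediately and resets the event automatically
// are.WaitOne(); // would now block until the next Set()

var mre = new ManualResetEvent(false);
mre.Set();
mre.WaitOne();    // returns immediately...
mre.WaitOne();    // ...and keeps returning immediately until Reset() is called
mre.Reset();      // only now will the next WaitOne() block again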
The question and code are presented too vaguely to offer good, specific advice. That said...
No, the use of ManualResetEvent is not appropriate here. Not only does it needlessly complicate the code, but it also relies on a long-running thread taken from the thread pool, where only short-lived tasks should be executed.
If you have a need for events to trigger the execution of some asynchronous work, then you should be using the async/await pattern, where each new unit of work is invoked via the Task class.
For example:
async void OnEvent1(object sender) {
    var workUnit = ... ; // something here that represents your unit of work
    await Task.Run(() => genericHandler(workUnit));
}

async void OnEvent2(object sender) {
    var workUnit = ... ; // something here that represents your unit of work
    await Task.Run(() => genericHandler(workUnit));
}

void genericHandler(object info) {
    // do some work using info
}
Note that the event object and the begin() method are eliminated entirely.
It's not clear from your question whether each work unit is entirely independent of each other. If not, then you may also require some synchronization to protect shared data. Again, without a more specific question it's not possible to say what this would be, but most likely you'd either use the lock statement, or one of the Concurrent... collections.
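For illustration only, here is a sketch of the lock approach; DoWork and _results are placeholder names, not part of the question:

private readonly object _sync = new object();
private readonly List<object> _results = new List<object>();  // hypothetical shared data

void genericHandler(object info)
{
    var result = DoWork(info);  // hypothetical work, done outside the lock
    lock (_sync)
    {
        _results.Add(result);   // only the shared mutation happens under the lock
    }
}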
This code works most of the time, so I suspect a race condition. The Result class is immutable, but I don't think the issue is with that class.
public Result GetResult()
{
    using (var waitHandle = new ManualResetEvent(false))
    {
        Result result = null;
        var completedHandler = new WorkCompletedEventHandler((o, e) =>
        {
            result = e.Result;
            // somehow waitHandle is closed, thus exception occurs here
            waitHandle.Set();
        });
        try
        {
            this.worker.Completed += completedHandler;
            // starts working on separate thread
            // when done, this.worker invokes its Completed event
            this.worker.RunWork();
            waitHandle.WaitOne();
            return new WorkResult(result);
        }
        finally
        {
            this.worker.Completed -= completedHandler;
        }
    }
}
Edit: Apologies, I had missed a call to this.worker.RunWork() made right before GetResult() is called. This apparently resulted (sometimes) in the same job being done twice, though I'm not sure why waitHandle got closed before waitHandle.Set() despite the Completed event firing twice. It didn't compromise the I/O work at all (the results were correct after I changed the code to close the waitHandle manually).
Therefore, Iridium's answer should be the closest (if not the right one), even though the question wasn't complete.
There doesn't seem to be anything particularly problematic in the code you've shown, which suggests that something in the code you haven't shown is causing the problem. I'm assuming the worker you're using is part of your codebase (rather than part of the .NET BCL, like BackgroundWorker?). It may be worth posting the code for that, in case the issue is there.
If for example, the same worker is used repeatedly from multiple threads (or has a bug in which Completed can be raised more than once for the same piece of work), then if the worker uses the "usual" means for invoking an event handler, i.e.:
var handler = Completed;
if (handler != null)
{
    handler(...);
}
You could have an instance where var handler = Completed; is executed before the finally clause (and so before the completedHandler has been detached from the Completed event), but handler(...) is called after the using(...) block is exited (and so after the ManualResetEvent has been disposed). Your event handler will then be executed after waitHandle is disposed, and the exception you are seeing will be thrown.
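If you cannot fix the worker itself, one defensive option is to make the handler tolerate a disposed handle. This is only a sketch; it papers over the duplicate Completed invocation rather than fixing it:

var completedHandler = new WorkCompletedEventHandler((o, e) =>
{
    result = e.Result;
    try
    {
        waitHandle.Set();
    }
    catch (ObjectDisposedException)
    {
        // A late or duplicate Completed invocation arrived after GetResult had
        // already returned and disposed the handle; there is nothing left to signal.
    }
});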
There is no obvious reason why this would fail from the posted code. But we can't see a stack trace and we can't see the logic that gets the Completed event fired so there are few opportunities to debug this for you. Arbitrarily, if the event fires more than once then you'll certainly have this kind of race problem.
Vexing threading problems are hard to debug; threading races occur at microsecond scale. Trying to debug one can be enough to make the race disappear, or it happens so infrequently that there is little hope of ever catching it in the act.
Such problems often require logging to diagnose the race. Be sure to select a light-weight logging method, logging in itself can alter the timing enough to prevent the race from ever occurring.
Last but certainly not least: do note that there is no point in using a thread here. You get the exact same outcome by directly calling the code that's executed by whatever thread is started by RunWork(). Minus the overhead and the headaches.
If you get rid of the using block, your code will not throw an exception at the line you indicated. You then have to find a decent place to dispose the handle, if you really need to.
public Result GetResult()
{
    var waitHandle = new ManualResetEvent(false);
    Result result = null;
    var completedHandler = new WorkCompletedEventHandler((o, e) =>
    {
        result = e.Result;
        // no using block any more, so the handle has not been disposed at this point
        waitHandle.Set();
        waitHandle.Dispose();
    });
    try
    {
        this.worker.Completed += completedHandler;
        // starts working on separate thread
        // when done, this.worker invokes its Completed event
        this.worker.RunWork();
        waitHandle.WaitOne();
        return new WorkResult(result);
    }
    finally
    {
        this.worker.Completed -= completedHandler;
    }
}
I have a UserControl with a TreeView control called mTreeView on it. I can get data updates from multiple different threads, and these cause the TreeView to be updated. To do this, I've devised the following pattern:
all data update event handlers must acquire a lock and then check for InvokeRequired; if so, do the work by calling Invoke. Here's the relevant code:
public partial class TreeViewControl : UserControl
{
    object mLock = new object();

    void LockAndInvoke(Control c, Action a)
    {
        lock (mLock)
        {
            if (c.InvokeRequired)
            {
                c.Invoke(a);
            }
            else
            {
                a();
            }
        }
    }

    public void DataChanged(object sender, NewDataEventArgs e)
    {
        LockAndInvoke(mTreeView, () =>
        {
            // get the data
            mTreeView.BeginUpdate();
            // perform update
            mTreeView.EndUpdate();
        });
    }
}
My problem is that sometimes, on startup, I get an InvalidOperationException on mTreeView.BeginUpdate(), saying mTreeView is being updated from a thread different from the one it was created on. I go back in the call stack to my LockAndInvoke and, lo and behold, c.InvokeRequired is true but the else branch was taken! It's as if InvokeRequired had been set to true on a different thread after the else branch was taken.
Is there anything wrong with my approach, and what can I do to prevent this?
EDIT: my colleague tells me the problem is that InvokeRequired is false until the control's handle is created, which is why it happens on startup. He's not sure what to do about it, though. Any ideas?
It is a standard threading race. You are starting the thread too soon, before the TreeView is created. So your code sees InvokeRequired as false and fails when a split second later the native control gets created. Fix this by only starting the thread when the form's Load event fires, the first event that guarantees that all the control handles are valid.
A few misconceptions in the code, by the way. Using lock is unnecessary; both InvokeRequired and Invoke/BeginInvoke are thread-safe. And checking InvokeRequired is an anti-pattern: you almost always know that the method is going to be called by a worker thread, so use InvokeRequired only to throw an exception when it is false, which would have let you diagnose this problem early.
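A sketch of that advice, assuming DataChanged is only ever raised from worker threads once the form has loaded:

public void DataChanged(object sender, NewDataEventArgs e)
{
    // This handler is only expected on worker threads. InvokeRequired is also
    // false while the handle has not been created yet, so this throws early in
    // exactly the startup scenario described above.
    if (!mTreeView.InvokeRequired)
        throw new InvalidOperationException("DataChanged called on the UI thread or before the handle was created.");

    mTreeView.BeginInvoke((Action)(() =>
    {
        // get the data
        mTreeView.BeginUpdate();
        // perform update
        mTreeView.EndUpdate();
    }));
}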
When you marshal back to the UI thread, it's one thread; it can only do one thing at a time. You don't need any locks when you call Invoke.
The problem with Invoke is that it blocks the calling thread. The calling thread usually doesn't care about what gets completed on the UI thread, in which case I recommend using BeginInvoke to marshal the action back to the UI thread asynchronously. There are circumstances where the background thread is blocked on Invoke while the UI thread is waiting for the background thread to complete something, and you end up with a deadlock. For example:
private bool b;

public void EventHandler(object sender, EventArgs e)
{
    while (b) Thread.Sleep(1); // give up time to any other waiting threads
    if (InvokeRequired)
    {
        b = true;
        Invoke((MethodInvoker)(() => EventHandler(sender, e)), null);
        b = false;
    }
}
... the above will deadlock in the while loop, because Invoke won't return until the nested call to EventHandler returns, and that call won't return until b is false...
Note my use of a bool to stop certain sections of code from running. This is very similar to a lock. So yes, you can end up with a deadlock by using lock.
Simply do this:
public void DataChanged(object sender, NewDataEventArgs e)
{
    if (InvokeRequired)
    {
        BeginInvoke((MethodInvoker)(() => DataChanged(sender, e)), null);
        return;
    }

    // get the data
    mTreeView.BeginUpdate();
    // perform update
    mTreeView.EndUpdate();
}
This simply re-invokes the DataChanged method asynchronously on the UI thread.
The pattern as you have shown it above looks 100% fine to me (albeit with some extra, unnecessary locking, though I can't see how that would cause the problem you describe).
As David W points out, the only difference between what you are doing and this extension method is that you directly access mTreeView on the UI thread instead of passing it in as an argument to your action; however, this only makes a difference if the value of mTreeView changes, and in any case you would have to try fairly hard to get that to cause the problem you describe.
Which means that the problem must be something else.
The only thing I can think of is that you may have created mTreeView on a thread other than the UI thread. If that is the case, accessing the tree view from that thread is safe, but if you try to add the tree view to a form that was created on a different thread, it will go bang with an exception similar to the one you describe.
I am observing a strange bug in some of my code which I suspect is tied to the way closing a form and background workers interact.
Here is the code potentially at fault:
var worker = new BackgroundWorker();
worker.DoWork += (sender, args) => {
    command();
};
worker.RunWorkerCompleted += (sender, args) => {
    cleanup();
    if (args.Error != null)
        MessageBox.Show("...", "...", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
};
worker.RunWorkerAsync();
This code is executed in a method in a form, when a button is pressed.
command() is slow, it may take a few seconds to run.
The user presses a button, which causes the code above to be executed. Before it is done, the form is closed.
The problem is that calling cleanup() sometimes raises an ObjectDisposedException. I say "sometimes" because this never happens on my computer: if the form is closed before command() is done, the handler I registered for RunWorkerCompleted is simply not executed. On another computer, the handler is called about once in a hundred times. On a coworker's computer, it's almost always called. Apparently, the probability of the handler executing rises with the age/slowness of the computer.
First question:
Is this the expected behaviour of BackgroundWorker? I would not expect it to know anything about the form, as there is nothing I can see that ties the form ("this") to "worker".
Second question:
How should I go about fixing that problem?
Possible solutions I'm considering:
Test if (!this.IsDisposed) before calling cleanup(). Is that enough, or can the form be disposed while cleanup is being executed?
Wrap the call to cleanup() in a try {} catch (ObjectDisposedException). I don't like that kind of approach too much, as I may be catching exceptions that were raised due to some other unrelated bug in cleanup() or one of the methods it calls.
Register a handler for FormClosing and delay or cancel closing until the handler for RunWorkerCompleted has run.
Additional information that may be relevant: code from command() will cause updates to be done to GUI objects in "this". Such updates are performed via calls to this F# function:
/// Run a delegate on an ISynchronizeInvoke (typically a Windows.Form).
let runOnInvoker (notification_receiver : ISynchronizeInvoke) excHandler (dlg : Delegate) args =
    try
        let args : System.Object[] = args |> Seq.cast |> Array.ofSeq
        notification_receiver.Invoke (dlg, args) |> ignore
    with
    | :? System.InvalidOperationException as op ->
        excHandler(op)
The exceptions you mentioned do not have any connection to BackgroundWorker, other than the fact that one thread (the worker) tries to access controls which have been disposed by another thread (the UI).
The solution I would use is to attach an event handler to the Form.FormClosed event to set a flag telling you the UI has been torn down. The RunWorkerCompleted handler then checks that flag before trying to do anything with the form.
While this approach will probably work more reliably than checking IsDisposed if you are not disposing the form explicitly, it does not provide a 100% guarantee that the form will not be closed and/or disposed just after the cleanup code has checked the flag and found the form still alive. This is the race condition you yourself mention.
To eliminate this race condition, you will need to synchronize, for example like this:
// set this to new object() in the constructor
public object CloseMonitor { get; private set; }
public bool HasBeenClosed { get; private set; }

private void Form1_FormClosed(object sender, FormClosedEventArgs e) {
    lock (this.CloseMonitor) {
        this.HasBeenClosed = true;
        // other code
    }
}
and for the worker:
worker.RunWorkerCompleted += (sender, args) => {
    lock (form.CloseMonitor) {
        if (form.HasBeenClosed) {
            // maybe special code for this case
        }
        else {
            cleanup();
            // and other code
        }
    }
};
The Form.FormClosing event will also work fine for this purpose; you can use whichever of the two is more convenient, if it makes a difference.
Note that, the way this code is written, both event handlers will be scheduled for execution on the UI thread (this is because WinForms components use a single-threaded apartment model) so you would actually not be affected by a race condition. However, if you decide to spawn more threads in the future you might expose the race condition unless you do use locking. In practice I have seen this happen quite often, so I suggest synchronizing anyway to be future-proof. Performance will not be affected as the sync only happens once.
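If you prefer the question's third option (delaying the close until RunWorkerCompleted has run), a rough sketch could look like the following; the field names are mine, not from the question:

private bool workerBusy;
private bool closePending;

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    if (workerBusy)
    {
        closePending = true;  // remember the request
        e.Cancel = true;      // keep the form alive until the worker is done
    }
}

// In the button handler, set workerBusy = true before calling worker.RunWorkerAsync().
// In RunWorkerCompleted (which runs on the UI thread here):
//     workerBusy = false;
//     cleanup();
//     if (closePending) Close();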
Let's say I have a component called Tasking (that I cannot modify) which exposes a method "DoTask" that does some possibly lengthy calculations and returns the result via a TaskCompleted event. Normally this is called from a Windows form that the user closes after she gets the results.
In my particular scenario I need to associate some data (a database record) with the data returned in TaskCompleted and use that to update the database record.
I've investigated the use of AutoResetEvent to be notified when the event is handled. The problem with that is that AutoResetEvent.WaitOne() blocks and the event handler never gets called. Normally an AutoResetEvent is set by a separate thread, so I guess that means the event handler runs on the same thread as the calling method.
Essentially I want to turn an asynchronous call, where the results are returned via an event, into a synchronous call (i.e. call DoSyncTask from another class) by blocking until the event is handled and the results have been placed somewhere accessible to both the event handler and the caller of the method that started the async call.
public class SyncTask
{
    TaskCompletedEventArgs data;
    AutoResetEvent taskDone;

    public SyncTask()
    {
        taskDone = new AutoResetEvent(false);
    }

    public string DoSyncTask(int latitude, int longitude)
    {
        Task t = new Task();
        t.Completed = new TaskCompletedEventHandler(TaskCompleted);
        t.DoTask(latitude, longitude);
        taskDone.WaitOne(); // but something more like Application.DoEvents(); in WinForms.
        taskDone.Reset();
        return data.Street;
    }

    private void TaskCompleted(object sender, TaskCompletedEventArgs e)
    {
        data = e;
        taskDone.Set(); // or some other mechanism to signal to DoSyncTask that the work is complete.
    }
}
In a Windows App the following works correctly.
public class SyncTask
{
    TaskCompletedEventArgs data;
    AutoResetEvent taskDone;

    public SyncTask()
    {
        taskDone = new AutoResetEvent(false);
    }

    public string DoSyncTask(int latitude, int longitude)
    {
        Task t = new Task();
        t.Completed = new TaskCompletedEventHandler(TaskCompleted);
        t.DoTask(latitude, longitude);
        while (data == null) Application.DoEvents();
        return data.Street;
    }

    private void TaskCompleted(object sender, TaskCompletedEventArgs e)
    {
        data = e;
    }
}
I just need to replicate that behaviour in a Windows service, where Application.Run isn't called and the ApplicationContext object isn't available.
I've had some trouble lately with making asynchronous calls, handling events on other threads, and returning the results to the main thread.
I used SynchronizationContext to keep track of things. The (pseudo)code below shows what is working for me at the moment.
SynchronizationContext context;

void start()
{
    // First store the current context so we can call back to it later.
    context = SynchronizationContext.Current;

    // Start the async call, for example:
    Proxy.BeginCodeLookup(aVariable,
        new AsyncCallback(LookupResult),
        AsyncState);

    // Now continue with what you were doing and let the lookup finish.
}

void LookupResult(IAsyncResult result)
{
    // Called when the async function is finished. This runs on a worker
    // thread, not on the thread that called BeginCodeLookup.
    result.AsyncWaitHandle.WaitOne();
    var lookupResult = Proxy.EndCodeLookup(result);

    // SynchronizationContext.Send performs a callback on the thread of the
    // context, in this case the main thread.
    context.Send(new SendOrPostCallback(OnLookupCompleted),
        result.AsyncState);
}

void OnLookupCompleted(object state)
{
    // This code will be executed on the main thread.
}
I hope this helps, as it fixed the problem for me.
Maybe you could get DoSyncTask to start a timer object that checks for the value of your data variable at some appropriate interval. Once data has a value, you could then have another event fire to tell you that data now has a value (and shut the timer off of course).
Pretty ugly hack, but it could work... in theory.
Sorry, that's the best I can come up with half asleep. Time for bed...
I worked out a solution to the async-to-sync problem, at least when using all .NET classes.
Link
It still doesn't work with COM, I suspect because of STA threading. The event raised by the .NET component that hosts the COM OCX is never handled by my worker thread, so I get a deadlock on WaitOne().
Someone else may appreciate the solution, though :)
If Task is a WinForms component, it might be very aware of threading issues and Invoke the event handler on the main thread -- which seems to be what you're seeing.
So it might be that it relies on a message pump running, or something like that. Application.Run has overloads intended for non-GUI apps; you might consider starting up a thread that pumps messages to see if that fixes the issue.
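A rough sketch of that idea (assumptions only, not code from the question): host the component on a dedicated thread that pumps messages, so events marshalled through the message queue can actually be delivered.

var pumpThread = new Thread(() =>
{
    // Create the Task/COM component here so it lives on this thread, then start
    // pumping; Application.Run blocks until Application.ExitThread() is called.
    var context = new ApplicationContext();
    Application.Run(context);
});
pumpThread.SetApartmentState(ApartmentState.STA); // many COM components expect STA
pumpThread.IsBackground = true;
pumpThread.Start();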
I'd also recommend using Reflector to get a look at the source code of the component to figure out what it's doing.
You've almost got it. You need the DoTask method to run on a different thread so the WaitOne call won't prevent work from being done. Something like this:
Action<int, int> doTaskAction = t.DoTask;
doTaskAction.BeginInvoke(latitude, longitude, cb => doTaskAction.EndInvoke(cb), null);
taskDone.WaitOne();
My comment on Scott W's answer seems a little cryptic after I re-read it. So let me be more explicit:
while (!done)
{
    taskDone.WaitOne(200);
    Application.DoEvents();
}
The WaitOne(200) will cause it to return control to your UI thread five times per second (you can adjust this as you wish). The DoEvents() call will flush the Windows message queue (the one that handles all window messages, such as painting). Add two members to your class (one bool flag, "done" in this example, and one return value, "street" in your example).
That is the simplest way to get what you want done. (I have very similar code in an app of my own, so I know it works)
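For completeness, a sketch of the two members and the handler that goes with the loop above (names taken from the answer text):

private volatile bool done;
private string street;

private void TaskCompleted(object sender, TaskCompletedEventArgs e)
{
    street = e.Street;  // capture the result for DoSyncTask to return
    done = true;        // lets the while (!done) loop exit
    taskDone.Set();     // wakes WaitOne(200) early instead of waiting out the timeout
}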
Your code is almost right... I just changed
t.DoTask(latitude, longitude);
for
new Thread(() => t.DoTask(latitude, longitude)).Start();
TaskCompleted will be executed on the same thread as DoTask. This should work.