Prevent multiple execution of a ReactiveCommand (CreateAsyncTask) - c#

Is it possible to prevent multiple executions of a ReactiveCommand?
Here is the 'simple' code I use:
The command is created:
this.LoadCommand = ReactiveCommand.CreateAsyncTask(
    async _ => await this._dataService.Load(),
    RxApp.TaskpoolScheduler);
Then I add the subscription to the command:
this.LoadCommand.Subscribe(assets => ...);
And finally, I execute the command:
this.LoadCommand.ExecuteAsyncTask();
If I call ExecuteAsyncTask multiple times from several locations, I would like any subsequent calls to wait for the first one to finish.
EDIT:
Here is the complete code for the Subscribe method:
this.LoadCommand.Subscribe(assets =>
{
    Application.Current.Dispatcher.Invoke(
        DispatcherPriority.Background,
        new Action(() => this.Assets.Clear()));
    foreach (Asset asset in assets)
    {
        Application.Current.Dispatcher.Invoke(
            DispatcherPriority.Background,
            new Action<Asset>(a =>
            {
                this.Assets.Add(a);
            }), asset);
    }
});
Thanks,
Adrien.

I downloaded your sample application, and was able to fix it. Here's my 2 cents:
1) I took out the RxApp.TaskpoolScheduler parameter in your command creation. That tells it to deliver the results using that scheduler, and I think you want to stick to delivering results on the UI thread.
2) Since by making this change you are now running your Subscribe logic on the UI thread, you don't need to deal with all that Invoking. You can access the collection directly:
this.LoadCommand.Subscribe(dataCollection =>
{
    DataCollection.Clear();
    DataCollection.AddRange(dataCollection);
});
Making just those 2 changes caused it to "work".
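For reference, here's a minimal sketch of the two changes put together (this assumes the legacy ReactiveUI CreateAsyncTask API from the question, and that _dataService.Load() returns the assets collection):

// Sketch only: no scheduler argument, so results are delivered on the UI thread by default.
this.LoadCommand = ReactiveCommand.CreateAsyncTask(
    async _ => await this._dataService.Load());

// The Subscribe callback now runs on the UI thread, so the collection can be used directly.
this.LoadCommand.Subscribe(assets =>
{
    this.Assets.Clear();
    foreach (Asset asset in assets)
    {
        this.Assets.Add(asset);
    }
});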
I'm no expert, but I think what was happening is that your ReactiveCommand "LoadCommand" was returning immediately and delivering results on various TaskPool threads. The command itself never allows concurrent execution, which is by design. However, the Subscribe callbacks, since each arrived on a different thread, were running concurrently (a race), so all the clears occurred first and then all the adds.
By subscribing and handling all on the same thread you can avoid this, and if you can manage it on the UI thread, you won't need to involve Invoking to the Dispatcher.
Also, in this particular situation, using Dispatcher.Invoke with DispatcherPriority.Background seems to execute things in a non-serial fashion. I'm not sure of the exact order, but it seemed to do all the clears and then the adds in reverse order (I incremented a counter so I could tell which invocation was which), so the priority definitely matters here. FWIW, changing the priority to DispatcherPriority.Send kept things serial and showed the "expected" behavior. That being said, I still prefer avoiding Invoking to the Dispatcher altogether if you can.

Related

this.Close() not killing thread created by observer NewThreadScheduler.Default.Schedule

I'm pretty new to C# and I'm having the following problem: I have a WPF application that executes an infinite task which performs some pretty expensive background operations. Those operations can occasionally change a value and it has to be updated in the UI. The operations need to run in a thread different than the UI thread, since they could lock the UI. So, I'm trying to use the System.Reactive library and it's actually working pretty well... but, when I try to close the application by using a custom close button that executes the this.Close(); method, the app is not being closed.
My observable looks something like this:
internal IObservable<string> DoBackgroundOperations(string param) {
    return Observable.Create<string>(o => {
        NewThreadScheduler.Default.Schedule(() => {
            for (;;) {
                param = // some operations that change the param
                // when the param has been changed, I send the new value to the subscribers
                o.OnNext(param);
            }
        });
        return Disposable.Empty;
    });
}
Then I'm subscribing to it and changing the value I need to update in the UI:
service.DoBackgroundOperations(param).Subscribe(newVal => Data = newVal);
As I said before, I'm receiving the updated values as they come and it's working well, but when the close button's click event is triggered, the UI window "disappears" while the application itself never actually closes. I think the thread created by the observable is keeping the app alive.
So, my question is: how can I properly close the app and prevent the thread from keeping it alive?
Thanks!
Edit
I'm using caliburn.micro for implementing the MVVM pattern. I'm doing the subscription in one of my ViewModel classes. I don't think it matters, but just in case...
Never return Disposable.Empty. You're forced to do that here because your code has no natural disposable, which in turn is because you create an infinite loop.
Get rid of the infinite loop and you can make the whole problem go away.
You could solve it simply like this:
internal IObservable<string> DoBackgroundOperations(string param)
{
    return
        Observable
            .Generate(
                0,
                x => true,
                x => x + 1,
                x => /* some operations that change the param */,
                Scheduler.Default);
}
I purposely chose Scheduler.Default because Scheduler.NewThread has been deprecated.
Had you provided the code for // some operations that change the param I could have given you working code.
Now, to close your app cleanly you should dispose of any subscriptions you create, but at least you'd no longer be tying up a thread in an infinite loop.
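As a rough sketch of that cleanup (the field and method names here are hypothetical, not from the question):

// Requires System.Reactive.Linq; _subscription is a hypothetical field on the ViewModel.
private IDisposable _subscription;

public void Start(string param)
{
    _subscription = service.DoBackgroundOperations(param)
        .ObserveOn(SynchronizationContext.Current)   // deliver updates on the UI context
        .Subscribe(newVal => Data = newVal);
}

public void Stop()   // call this when the view/window closes
{
    // Disposing the subscription stops the work scheduled by Observable.Generate.
    if (_subscription != null)
        _subscription.Dispose();
}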

Updating controls in Main Thread using EventHandler

I'm using this to update a control in the main thread from another thread:
private void ShowHourGlassSafe(bool visible)
{
    this.Invoke((EventHandler)((s, ev) => pictureBoxHourGlass.Visible = visible));
}
I wonder what are the implications of doing it this way or if is there any risk this is going to fail?
Among the many examples out there for this same thing, I could not find one like this.
Could it be that it's simply wrong?
Well, you've picked a rather odd delegate: one that takes two parameters even though none are needed or will be provided. I don't know if that will cause it to break, but it's certainly not helping. You're most likely best off using a delegate that takes no parameters and returns no value, such as:
private void ShowHourGlassSafe(bool visible)
{
    this.Invoke((MethodInvoker)(() => pictureBoxHourGlass.Visible = visible));
}
Other than that, the fundamental concept of what you're doing is perfectly fine.
Typical problems with this kind of code:
You'll deadlock if the UI thread is doing something unwise, like waiting for the thread to complete. There's no point in using Invoke; it blocks the worker thread for no benefit. Just use BeginInvoke, which removes both the deadlock potential and the unnecessary delay.
You'll crash if the UI has been closed and pictureBoxHourGlass has been disposed. Ensuring that the thread is no longer running before allowing the UI to close is very commonly overlooked. Just displaying an hour glass isn't enough; you also have to take countermeasures to prevent the user from closing the UI, or otherwise interlock it with a way to cancel the thread first.
The user will typically be befuddled when an hour glass shows up without having done anything to ask that something gets done. In 99% of cases the right approach is to display the hour glass with code on the UI thread and then start the thread, hiding it again when the thread completes. That's easiest to do with the BackgroundWorker or Task classes, which can run code on the UI thread after the job is done (see the sketch below).
Favor the Action delegate types for consistency:
private void ShowHourGlassSafe(bool visible) {
    this.BeginInvoke(new Action(() => something.Visible = visible));
}
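For example, a hedged sketch of the Task-based shape from the third point above (DoExpensiveWork is a placeholder for your actual work, not part of the question):

// Show the hour glass on the UI thread, run the work in the background,
// then hide it again in a continuation that runs back on the UI thread.
pictureBoxHourGlass.Visible = true;
Task.Factory.StartNew(() => DoExpensiveWork())
    .ContinueWith(t =>
    {
        pictureBoxHourGlass.Visible = false;
        if (t.Exception != null)
        {
            // log or display the error here
        }
    }, TaskScheduler.FromCurrentSynchronizationContext());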

Is calling Dispatcher.CheckAccess() good form in Silverlight?

I wonder if the following code buys any performance gains:
if (Deployment.Current.Dispatcher.CheckAccess())
{
    DoUIWork();
}
else
{
    Deployment.Current.Dispatcher.BeginInvoke(() => DoUIWork());
}
Is the Dispatcher smart enough to short-circuit a dispatch to the UI thread if it's unnecessary?
I couldn't say whether the dispatcher does anything expensive when dispatching from the UI thread to itself, compared with the check. But BeginInvoke from the UI thread may behave differently from executing the operation directly, as it's at least put on the queue rather than invoked immediately. You could tell the difference between this and removing the conditional statement if you had code directly afterwards.
It's certainly worth being aware of the control flow, enough to know whether the difference matters in your case.
If it is anything like the standard Windows SynchronizationContext (and it probably is), then the two options are not the same. BeginInvoke will basically queue up the method to be executed by the dispatcher message pump after any message currently being processed has finished.
In your example the two options would be the same if you were to use Invoke instead of BeginInvoke.
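A small sketch of the ordering difference (assuming this code is already running on the UI thread):

Debug.WriteLine("A");
Deployment.Current.Dispatcher.BeginInvoke(() => Debug.WriteLine("B"));   // queued, not run immediately
Debug.WriteLine("C");
// Prints A, C, B: the queued work runs only after the current message finishes.
// With the CheckAccess() branch taken, B would appear between A and C.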

C# Asynchronous Options for Processing a List

I am trying to better understand the Async and the Parallel options I have in C#. In the snippets below, I have included the 5 approaches I come across most. But I am not sure which to choose - or better yet, what criteria to consider when choosing:
Method 1: Task
(see http://msdn.microsoft.com/en-us/library/dd321439.aspx)
Calling StartNew is functionally equivalent to creating a Task using one of its constructors and then calling Start to schedule it for execution. However, unless creation and scheduling must be separated, StartNew is the recommended approach for both simplicity and performance.
TaskFactory's StartNew method should be the preferred mechanism for creating and scheduling computational tasks, but for scenarios where creation and scheduling must be separated, the constructors may be used, and the task's Start method may then be used to schedule the task for execution at a later time.
// using System.Threading.Tasks.Task.Factory
void Do_1()
{
    var _List = GetList();
    _List.ForEach(i => Task.Factory.StartNew(() => DoSomething(i)));
}
Method 2: QueueUserWorkItem
(see http://msdn.microsoft.com/en-us/library/system.threading.threadpool.getmaxthreads.aspx)
You can queue as many thread pool requests as system memory allows. If there are more requests than thread pool threads, the additional requests remain queued until thread pool threads become available.
You can place data required by the queued method in the instance fields of the class in which the method is defined, or you can use the QueueUserWorkItem(WaitCallback, Object) overload that accepts an object containing the necessary data.
// using System.Threading.ThreadPool
void Do_2()
{
    var _List = GetList();
    var _Action = new WaitCallback((o) => { DoSomething(o); });
    _List.ForEach(x => ThreadPool.QueueUserWorkItem(_Action, x));
}
Method 3: Parallel.ForEach
(see: http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.foreach.aspx)
The Parallel class provides library-based data parallel replacements for common operations such as for loops, for each loops, and execution of a set of statements.
The body delegate is invoked once for each element in the source enumerable. It is provided with the current element as a parameter.
// using System.Threading.Tasks.Parallel
void Do_3()
{
    var _List = GetList();
    var _Action = new Action<object>((o) => { DoSomething(o); });
    Parallel.ForEach(_List, _Action);
}
Method 4: IAsync.BeginInvoke
(see: http://msdn.microsoft.com/en-us/library/cc190824.aspx)
BeginInvoke is asynchronous; therefore, control returns immediately to the calling object after it is called.
// using IAsync.BeginInvoke()
void Do_4()
{
    var _List = GetList();
    var _Action = new Action<object>((o) => { DoSomething(o); });
    _List.ForEach(x => _Action.BeginInvoke(x, null, null));
}
Method 5: BackgroundWorker
(see: http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx)
To set up for a background operation, add an event handler for the DoWork event. Call your time-consuming operation in this event handler. To start the operation, call RunWorkerAsync. To receive notifications of progress updates, handle the ProgressChanged event. To receive a notification when the operation is completed, handle the RunWorkerCompleted event.
// using System.ComponentModel.BackgroundWorker
void Do_5()
{
    var _List = GetList();
    using (BackgroundWorker _Worker = new BackgroundWorker())
    {
        _Worker.DoWork += (s, arg) =>
        {
            arg.Result = arg.Argument;
            DoSomething(arg.Argument);
        };
        _Worker.RunWorkerCompleted += (s, arg) =>
        {
            _List.Remove(arg.Result);
            if (_List.Any())
                _Worker.RunWorkerAsync(_List[0]);
        };
        if (_List.Any())
            _Worker.RunWorkerAsync(_List[0]);
    }
}
I suppose the obvious criteria would be:
Is any better than the other for performance?
Is any better than the other for error handling?
Is any better than the other for monitoring/feedback?
But, how do you choose?
Thanks in advance for your insights.
Going to take these in an arbitrary order:
BackgroundWorker (#5)
I like to use BackgroundWorker when I'm doing things with a UI. The advantage that it has is having the progress and completion events fire on the UI thread which means you don't get nasty exceptions when you try to change UI elements. It also has a nice built-in way of reporting progress. One disadvantage that this mode has is that if you have blocking calls (like web requests) in your work, you'll have a thread sitting around doing nothing while the work is happening. This is probably not a problem if you only think you'll have a handful of them though.
IAsyncResult/Begin/End (APM, #4)
This is a widespread and powerful but difficult model to use. Error handling is troublesome since you need to re-catch exceptions on the End call, and uncaught exceptions won't necessarily make it back to any relevant pieces of code that can handle it. This has the danger of permanently hanging requests in ASP.NET or just having errors mysteriously disappear in other applications. You also have to be vigilant about the CompletedSynchronously property. If you don't track and report this properly, the program can hang and leak resources. The flip side of this is that if you're running inside the context of another APM, you have to make sure that any async methods you call also report this value. That means doing another APM call or using a Task and casting it to an IAsyncResult to get at its CompletedSynchronously property.
There's also a lot of overhead in the signatures: you have to support an arbitrary state object to pass through, and make your own IAsyncResult implementation if you're writing an async method that supports polling and wait handles (even if you're only using the callback). By the way, you should only be using the callback here: when you use the wait handle or poll IsCompleted, you're wasting a thread while the operation is pending.
Event-based Asynchronous Pattern (EAP)
One that was not on your list but I'll mention for the sake of completeness. It's a little bit friendlier than the APM. There are events instead of callbacks and there's less junk hanging onto the method signatures. Error handling is a little easier since it's saved and available in the callback rather than re-thrown. CompletedSynchronously is also not part of the API.
Tasks (#1)
Tasks are another friendly async API. Error handling is straightforward: the exception is always there for inspection on the callback and nobody cares about CompletedSynchronously. You can do dependencies and it's a great way to handle execution of multiple async tasks. You can even wrap APM or EAP (one type you missed) async methods in them. Another good thing about using tasks is your code doesn't care how the operation is implemented. It may block on a thread or be totally asynchronous but the consuming code doesn't care about this. You can also mix APM and EAP operations easily with Tasks.
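For instance, a rough sketch of Task-based error handling and continuations, using the question's GetList/DoSomething placeholders (not a complete program):

var tasks = GetList()
    .Select(i => Task.Factory.StartNew(() => DoSomething(i)))
    .ToArray();

// Any exception is captured on its task and can be inspected in a continuation.
Task.Factory.ContinueWhenAll(tasks, completed =>
{
    foreach (var t in completed)
    {
        if (t.IsFaulted)
        {
            // t.Exception holds the AggregateException for that item
        }
    }
});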
Parallel.For methods (#3)
These are additional helpers on top of Tasks. They can do some of the work to create tasks for you and make your code more readable, if your async tasks are suited to run in a loop.
ThreadPool.QueueUserWorkItem (#2)
This is a low-level utility that's actually used by ASP.NET for all requests. It doesn't have any built-in error handling like tasks so you have to catch everything and pipe it back up to your app if you want to know about it. It's suitable for CPU-intensive work but you don't want to put any blocking calls on it, such as a synchronous web request. That's because as long as it runs, it's using up a thread.
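As a sketch of the "catch everything yourself" point (ReportError is a hypothetical placeholder for whatever error channel your application uses):

ThreadPool.QueueUserWorkItem(state =>
{
    try
    {
        DoSomething(state);
    }
    catch (Exception ex)
    {
        // Exceptions thrown here aren't captured for you (unlike with Tasks),
        // so catch them and pipe them back to the app yourself.
        ReportError(ex);
    }
}, item);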
async / await Keywords
New in .NET 4.5, these keywords let you write async code without explicit callbacks. You can await on a Task and any code below it will wait for that async operation to complete, without consuming a thread.
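A minimal sketch of that shape (assuming .NET 4.5 and the question's GetList/DoSomething placeholders):

async Task ProcessListAsync()
{
    var list = GetList();
    // Start the items on the ThreadPool, then await them all without blocking the caller.
    var tasks = list.Select(i => Task.Run(() => DoSomething(i)));
    await Task.WhenAll(tasks);
}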
Your first, third, and fourth examples all use the ThreadPool implicitly: by default Tasks are scheduled on the ThreadPool, and the TPL extensions use the ThreadPool as well; the API simply hides some of that complexity (see here and here). BackgroundWorker is part of the ComponentModel namespace because it is meant for use in UI scenarios.
Reactive Extensions is another upcoming library for handling asynchronous programming, especially when it comes to composing asynchronous events and methods.
It's not part of the framework proper; however, it's developed by Microsoft. It's available for both .NET 3.5 and .NET 4.0 and is essentially a collection of extension methods on the IObservable<T> interface introduced in .NET 4.0.
There are a lot of examples and tutorials on their main site, and I strongly recommend checking some of them out. The pattern might seem a bit odd at first (at least for .NET programmers), but it's well worth it, even if only to grasp the new concept.
The real strength of Reactive Extensions (Rx.NET) is when you need to compose multiple asynchronous sources and events. All operators are designed with this in mind and handle the ugly parts of asynchrony for you.
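To illustrate (a sketch only, again using the question's placeholders), the same list could be processed through Rx like this:

GetList()
    .ToObservable()
    .SelectMany(i => Observable.Start(() => DoSomething(i)))   // run each item on the default scheduler
    .Subscribe(
        _ => { /* an item finished */ },
        ex => { /* an exception from any item surfaces here */ },
        () => { /* all items are done */ });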
Main site: http://msdn.microsoft.com/en-us/data/gg577609
Beginner's guide: http://msdn.microsoft.com/en-us/data/gg577611
Examples: http://rxwiki.wikidot.com/101samples
That said, the best async pattern probably depends on what situation you're in. Some are better (simpler) for simpler stuff and some are more extensible and easier to handle when it comes to more complex scenarios. I cannot speak for all the ones you're mentioning though.
The last one (the BackgroundWorker) is the best for criteria 2 and 3 at least; it has built-in events and properties for error handling and progress reporting.
The other variants are almost the same, just different versions/convenient wrappers.

Is it possible to overload a thread using ISynchronizeInvoke.BeginInvoke()?

My problem is this:
I have two threads: my UI thread and a worker thread. My worker thread runs in a separate class that gets instantiated by the form, which passes itself as an ISynchronizeInvoke to the worker class, which then uses Invoke on that interface to raise its events, which provide status updates to the UI for display. This works wonderfully.
I noticed that my background thread seemed to be running slowly though, so I changed the call from Invoke to BeginInvoke, thinking "I'm just providing progress updates, it doesn't need to be exactly synchronous, no harm done." Except that now I'm getting oddities with the progress updates: my progress bar updates, but the label's text doesn't, and if I switch to another window and try to switch back, the UI thread acts like it's locked up. So I'm wondering whether my progress calls (which happen very often) are overloading the UI thread so much that it never processes messages. Is this possible at all, or is there something else at work here?
You're definitely overloading the UI thread.
In your first sample, you were (behind the scenes) sending a message to the UI thread, waiting for it to be processed (that's the purpose of Invoke, which ultimately relies on SendMessage), and then sending another one. In the meantime, other messages were probably enqueued (WM_PAINT messages, for example) and processed.
In your second sample, by using BeginInvoke (which ultimately relies on PostMessage), you massively enqueued a lot of messages in the message queue, which the message pump must handle sequentially. And of course, while it's handling those thousands of messages, it cannot handle the OS messages (WM_PAINT, etc.), which makes your UI look "frozen".
You're probably providing too many status updates; try to lower the feedback level.
If you want to understand better how messages work in Windows, this is the place to start.
A few thoughts:
try batching your updates; for example, there is no point updating for every iteration in a loop; depending on the speed, perhaps every 50 / 500. In the case of lists, you would buffer in a local list variable, take the list over via Invoke / BeginInvoke, and process the buffer on the UI thread
variable capture; if you are using BeginInvoke and anonymous methods, you could have problems... I'll add an example below
making the UI update efficient - especially if you are processing a list; some controls (especially list-based controls) have a pair of methods like BeginUpdate / EndUpdate that stop the UI redrawing while you are making lots of updates; instead, the control waits until EndUpdate is called
capture problem... imagine (worker):
List<string> stuff = new List<string>();
for (int i = 0; i < 50000; i++) {
    stuff.Add(i.ToString());
    if ((i % 100) == 0) {
        // update UI
        BeginInvoke((MethodInvoker) delegate {
            foreach (string s in stuff) {
                listBox.Items.Add(s);
            }
        });
    }
}
Did you notice that at some point both threads are talking to stuff? The UI thread can be iterating it while the worker thread (which has kept running past BeginInvoke) keeps adding. This can cause issues. Not usually performance issues (unless you are catching the exceptions and taking a long time to log them), but definitely issues. Options here would include:
using Invoke to run the update synchronously
create a new buffer per update, so that the two threads never have the same list instance (you'd need to look very carefully at the variable scoping to make sure, though) - see the sketch below
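For example, a rough sketch of that second option, copying the buffer before handing it over (same placeholder names as the snippet above):

if ((i % 100) == 0)
{
    // Give the UI thread its own copy and start a fresh buffer, so the
    // worker never mutates the list the UI thread is iterating.
    var batch = stuff;
    stuff = new List<string>();
    BeginInvoke((MethodInvoker)delegate
    {
        foreach (string s in batch)
        {
            listBox.Items.Add(s);
        }
    });
}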
