Silverlight: WaitOne synchronization in async call not behaving as expected

I'm trying to use a ManualResetEvent to block until a callback is executed, but the callback is never reached, even when I try to run the callback on another thread.
waitHandle = new ManualResetEvent(false);
DataServiceQueryer<MyEntity> dataServiceQueryer = new DataServiceQueryer<MyEntity>(dsQuery.Expression);
ThreadPool.QueueUserWorkItem(new WaitCallback(stateInfo =>
{
    dataServiceQueryer.ExecuteQuery();
}));
// waits here forever
waitHandle.WaitOne();
public class DataServiceQueryer<T>
{
    //field, properties

    public void ExecuteQuery()
    {
        // this block is definitely executed
        _asyncResult = _query.BeginExecute(new AsyncCallback(c =>
        {
            // this is never reached
            QueryOperationResponse<T> result = _query.EndExecute(c) as QueryOperationResponse<T>;
            MainPage.ListRecords = new ObservableCollection<T>(result) as ObservableCollectionEx<MyEntity>;
            MainPage.waitHandle.Set();
        }), _query);

        // neither is this!
        var test = _asyncResult.AsyncWaitHandle.WaitOne(0);
    }
}
Any suggestions? I am most confounded as to why the _asyncResult assignment never seems to take place. I'm using Silverlight 4 with EF4 and the devart Oracle dotConnect provider.

In Silverlight, on the UI thread, that is expected behavior (a hung app). I cannot seem to find more official documentation, though I could swear I remember seeing it in the past...
I do not remember the exact details, but I'm fairly certain that directly blocking the UI thread causes everything to stop functioning. I seem to remember that the UI thread handles network IO.
Whatever you want to happen after the WaitOne call needs to be moved to a callback.
Although it is not your exact situation, there is related info in this link. It explains that blocking the UI thread prevents the web service call from going out, since it also blocks the UI message queue. I realize that your call is going out from the thread pool; I still think that blocking the UI thread will interfere, but I can't locate the documentation that concretely explains why.
http://www.codeproject.com/Articles/31018/Synchronous-Web-Service-Calls-with-Silverlight-Dis (best I can find at the moment; I'll update when/if I find the article I'm looking for).
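To make that concrete, here is a rough sketch (not the poster's actual code) of moving the post-WaitOne work into the query callback; the onCompleted continuation parameter is an assumption, and Deployment.Current.Dispatcher marshals the result back to the UI thread:

// Hedged sketch: ExecuteQuery takes a continuation instead of the caller blocking on a ManualResetEvent.
public void ExecuteQuery(Action<QueryOperationResponse<T>> onCompleted)
{
    _asyncResult = _query.BeginExecute(c =>
    {
        var result = _query.EndExecute(c) as QueryOperationResponse<T>;
        // Marshal back to the UI thread before touching MainPage or data-bound collections
        Deployment.Current.Dispatcher.BeginInvoke(() => onCompleted(result));
    }, _query);
}
// The caller then updates MainPage.ListRecords inside onCompleted instead of after WaitOne().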

Call an awaitable method inside a Thread in C#

In my C# 7.0 code I'd like to use an old-school Thread (of type Thread) to do some work. Inside this thread, I need to use some async methods. What would be the best approach to call these methods?
Writing the thread function as async does not make any sense.
this._networkListenerThread = new Thread(/* async here is not an option */ () =>
{
    while (!this._listenerCancellation.IsCancellationRequested)
    {
        try
        {
            // This does not compile
            var packetBuffer = await this._commProxy.ReadAsync();
            doSomethingMore();
        }
        catch
        {
            // handle/log read errors
        }
    }
});
If we go down the call stack, finally there will be this call:
// _socket is of type Android.Bluetooth.BluetoothSocket
// .InputStream of type System.IO.Stream
// ReadAsync only returns when data arrived on the stream
// or throws an exception when the connection is lost
var receivedBytes = await this._socket.InputStream.ReadAsync(buffer, 0, buffer.Length);
For those wondering why I want to use a Thread instead of a Task: I want to give it a meaningful name to enhance debugging, and I did not find a way to name a Task. Besides this, the Thread runs almost as long as the application runs, so a Thread does make sense to me.
this Thread runs almost as long as the application runs
No, it doesn't. Because the work that it's doing is asynchronous. The thread runs long enough to check the status of the cancellation token, fire off ReadAsync (which, being asynchronous, will return basically immediately) and then it's done. The thread goes away, and it has no more work to do. That's the whole idea of asynchronous operations; being asynchronous means the operation returns to its caller pretty much immediately, and does whatever meaningful work it has to do after returning control back to the caller (in this case, since this is the top level method of the thread, returning control back means that the thread has finished executing and gets torn down).
So there just isn't much of any purpose in creating a new thread just to have it check a boolean value and start some operation that will go off and do work on its own. It's not that you should use a different way of getting a new thread to do work (like using Task.Run), but rather you shouldn't use any means of getting a new thread to do work, because you don't have any long running CPU bound work to do. The long running (non-CPU bound, by the look of it) work that you have is already asynchronous, so you can just call the method directly from whatever thread wants to start this work, and have it do it right in line.
If you simply want to have some value that you can share along an asynchronous operation's logical call context, there are of course tools that accomplish that, such as AsyncLocal. Creating a new thread wouldn't accomplish that, because as soon as you finish starting the asynchronous operation your thread is dead and gone, and the continuations will be running on some other thread anyway.
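To make "call it directly" concrete, here is a minimal sketch reusing the question's names (_commProxy, _listenerCancellation and doSomethingMore are assumed to exist as in the original post):

// The listener loop as an ordinary async method: the awaited read does the
// long-running work, so no dedicated thread is required.
private async Task ListenAsync()
{
    while (!this._listenerCancellation.IsCancellationRequested)
    {
        try
        {
            var packetBuffer = await this._commProxy.ReadAsync();
            doSomethingMore();
        }
        catch (Exception)
        {
            // ReadAsync throws when the connection is lost; decide whether to retry or exit
            break;
        }
    }
}

// Started from wherever the listening should begin, e.g. a hypothetical field:
// this._listenTask = ListenAsync();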

BeginFlush seems to work synchronously

I'm trying to use the HttpResponse BeginFlush and EndFlush methods in order to make the flush asynchronous, so that my worker thread won't be used while flushing to the stream.
However, it seems that the BeginFlush method always runs synchronously.
I dug into the Microsoft reference source and didn't find the reason for this behavior.
This is Microsoft implementation: http://referencesource.microsoft.com/#System.Web/HttpResponse.cs,f121c649c992c407
I checked the SupportsAsyncFlush flag and I'm getting true, so my environment actually supports the async flush.
Any idea?
This is a code snippet for trying to do the async flush, but I never reach the "Different Threads" line - it is always the same thread that runs this code.
Context.Response.Write("Some message");
Context.Response.BeginFlush(
    res =>
    {
        try
        {
            var previousThreadId = (int)res.AsyncState;
            var thread2Id = Thread.CurrentThread.ManagedThreadId;
            if (previousThreadId != thread2Id)
            {
                Console.WriteLine("Different Threads");
            }
            Context.Response.EndFlush(res);
        }
        catch (Exception e)
        {
        }
    },
    Thread.CurrentThread.ManagedThreadId);
The code is asynchronous, but it's not multithreaded. You're defining a callback; a method that will be run at some indeterminate point in the future when the flush finishes. That doesn't necessarily mean that it'll run on another thread.
There are also many implementations of abstract functionality in .NET where the behavior is defined as asynchronous, but the implementation is synchronous because that particular implementation expects to run so quickly as to not warrant asynchrony. This is true for a fair bit of .NET's file IO. If the writer you're using expects to be able to flush the buffer very quickly, it may not bother doing it asynchronously.
The effect you are seeing may be related to the SynchronizationContext used by ASP.NET. Depending on the .NET version and whether your code executes under an ASP.NET page or something else, the behavior may change. Generally, it is normal for the SynchronizationContext to execute the callback on the same thread which started the async operation. You can find more info here: https://msdn.microsoft.com/en-us/magazine/gg598924.aspx
In any case, you can check whether the callback is synchronous or not by checking whether the next line of your code (after BeginFlush) executes before the callback. That would indicate that the callback is indeed asynchronous.
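One way to run that check, sketched here with illustrative Debug.WriteLine calls; IAsyncResult.CompletedSynchronously answers the same question directly:

Context.Response.Write("Some message");
Context.Response.BeginFlush(
    res =>
    {
        // If this prints after "after BeginFlush", the callback really did run asynchronously
        System.Diagnostics.Debug.WriteLine("in callback, CompletedSynchronously = " + res.CompletedSynchronously);
        Context.Response.EndFlush(res);
    },
    null);
System.Diagnostics.Debug.WriteLine("after BeginFlush");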

How to Kill a C# Thread?

I've got a thread that goes out and looks up data on our (old) SQL server.
As data comes in, I post information to a modal dialog box - the user can't & shouldn't do anything else while all this processing is going on. The modal dialog box is just to let them see that I'm doing something and to prevent them from running another query at the same time.
Sometimes (rarely) when the code makes a call to the SQL server, the server does not respond (IT has it down for maintenance, the LAN line got cut, or the PC isn't on the network) or the person doing the query runs out of time. So, the modal dialog box does have a cancel button.
The Thread object (System.Threading.Thread) has IsBackground=true.
When someone clicks Cancel, I call my KillThread method.
Note: I can NOT use the BackgroundWorker component in this class because it is shared with some Windows Mobile 5 code & WM5 does not have the BackgroundWorker.
void KillThread(Thread th) {
    if (th != null) {
        ManualResetEvent mre = new ManualResetEvent(false);
        Thread thread1 = new Thread(
            () =>
            {
                try {
                    if (th.IsAlive) {
                        //th.Stop();
                        // 'System.Threading.Thread' does not contain a definition for 'Stop'
                        // and no extension method 'Stop' accepting a first argument of type
                        // 'System.Threading.Thread' could be found (are you missing a using
                        // directive or an assembly reference?)
                        th.Abort();
                    }
                } catch (Exception err) {
                    Console.WriteLine(err);
                } finally {
                    mre.Set();
                }
            }
        );
        string text = "Thread Killer";
        thread1.IsBackground = true;
        thread1.Name = text;
        thread1.Start();
        bool worked = mre.WaitOne(1000);
        if (!worked) {
            Console.WriteLine(text + " Failed");
        }
        th = null;
    }
}
In my Output Window, I always see "Thread Killer Failed" but no exception is ever thrown.
How should I stop a thread?
The best related posts I found where the two below:
How to Kill Thread in C#?
How to kill a thread instantly in C#?
EDIT:
There seems to be some confusion with the method I listed above.
First, when someone clicks the cancel button, this routine is called:
void Cancel_Click(object sender, EventArgs e) {
    KillThread(myThread);
}
Next, when I go in to kill a thread, I'd rather not have to wait forever for the thread to stop. At the same time, I don't want to let my code proceed if the thread is still active. So, I use a ManualResetEvent object. It should not take a full second (1000ms) just to stop a thread, but every time the WaitOne method times out.
Still listening for ideas.
Short Answer: You don't. Normally you do it by signaling you want to quit.
If you're firing an SQL query, do it asynchronously (pardon my spelling), and cancel it if necessary. That really goes for any lengthy task in a separate thread.
For further reading see Eric Lippert's articles:
Careful with that axe, part one: Should I specify a timeout? and Careful with that axe, part two: What about exceptions?
Edit:
How do you call SQL Server? ADO, TDS, standard/custom library, etc.?
THAT call should be made asynchronous.
Thus: StartOpeningConnection, WaitFor OpeningComplete, StartQuery, WaitFor QueryComplete, StartCloseConnection, WaitFor CloseConnectionComplete, etc. During any of the waits your thread should sleep. After waking up, check whether your parent thread (the UI thread) has cancelled or a timeout has occurred, then exit the thread and possibly inform SQL Server that you're done (closing the connection).
It's not easy, but it rarely is...
Edit 2: In your case, if you are unable to change the database code to be asynchronous, make it a separate process and kill that if necessary. That way the resources (connection etc.) will be released. With threads, this won't be the case. But it's an ugly hack.
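For what it's worth, a minimal sketch of that separate-process fallback (the helper executable name and its argument are hypothetical):

// Run the query in a helper process; killing a process, unlike aborting a thread,
// lets the OS reclaim its connections and other resources.
Process queryProcess = Process.Start("QueryHelper.exe", "run-query");

// Later, when the user clicks Cancel:
if (!queryProcess.HasExited)
{
    queryProcess.Kill();
}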
Edit 3:
You should use the BeginExecuteReader/EndExecuteReader pattern.
This article is a good reference:
It will require rewriting your data access code, but it's the way to do it properly.
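A minimal sketch of that pattern (the query text and connection are placeholders, and older ADO.NET versions also need "Asynchronous Processing=true" in the connection string); SqlCommand.Cancel then gives the Cancel button a cooperative way to abandon the query:

SqlCommand command = new SqlCommand("SELECT ...", connection);
command.BeginExecuteReader(ar =>
{
    try
    {
        using (SqlDataReader reader = command.EndExecuteReader(ar))
        {
            while (reader.Read())
            {
                // post rows/progress to the modal dialog here
            }
        }
    }
    catch (SqlException)
    {
        // thrown if the command was cancelled or the server went away
    }
}, null);

// The Cancel button handler can now abandon the query cooperatively:
// command.Cancel();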
I get the feeling that giving the Thread 1000ms to abort is simply not enough. MSDN recommends that you call Thread.Join. It would definitely be helpful to see the code that is being aborted.
Thread.Abort:
The thread is not guaranteed to abort immediately, or at all. This situation can occur if a thread does an unbounded amount of computation in the finally blocks that are called as part of the abort procedure, thereby indefinitely delaying the abort. To wait until a thread has aborted, you can call the Join method on the thread after calling the Abort method, but there is no guarantee the wait will end.
What are you passing into your KillThread method? The cancel button will be being clicked on the UI thread, not the one that you want to kill.
You should signal your event when the user clicks Cancel (not kill the thread). In your example, the ManualResetEvent "mre"'s scope should be outside the thread function.
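A sketch of that approach, with the event shared between the UI and the worker (names here are illustrative, not the original code):

// Shared between the UI thread and the worker: the worker polls it between
// chunks of work instead of being aborted.
ManualResetEvent cancelRequested = new ManualResetEvent(false);

void Cancel_Click(object sender, EventArgs e)
{
    cancelRequested.Set();               // ask the worker to stop
}

void WorkerProc()
{
    while (!cancelRequested.WaitOne(0))  // non-blocking poll
    {
        // do the next chunk of work, returning promptly once cancel has been signalled
    }
    // clean up (close the connection, dismiss the dialog) before the thread exits
}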
To answer the more general question of how to force kill any kind of Thread in C#:
If any unhandled Exception is thrown inside a thread (including those used by Task and other ways of running asynchronously), this thread will be terminated.
However, note that this comes with many problems, like resources not being freed, improper memory management, general undefined behavior, etc. The unhandled Exception may still have to be handled by its parent thread (wherever it was started from) OR by registering for the following event beforehand, depending on how the thread was started:
AppDomain.CurrentDomain.UnhandledException += YourEventHandler;
I should emphasize again that this should be an absolute last resort. If you need this, your application is almost certainly designed poorly and there are probably different solutions you should use. There are good reasons why Thread.Abort is now deprecated and no longer functional.

C# threading pattern that will let me flush

I have a class that implements the Begin/End invocation pattern where I initially used ThreadPool.QueueUserWorkItem() to thread my work. The work done on the thread doesn't loop but does take a bit of time to process, so the work itself is not easily stopped.
I now have the side effect where someone using my class is calling Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new process, yet they are forced to wait for their first request to finish.
Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work and maybe use an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue.
Anyone have any suggestion on a threading pattern that fits my needs?
Edit: I'm targeting the 2.0 framework. I'm currently thinking that a Consumer/Producer queue might work. Does anyone have thoughts on the idea of flushing the queue?
Edit 2 Problem Clarification:
Since I'm using the Begin/End pattern in my class, every time the caller uses Begin with a callback I create a whole new thread on the thread pool. This call does a very small amount of processing and is not where I want to cancel. It's the uncompleted jobs in the queue I wish to stop.
The fact that the ThreadPool will create 250 threads per processor by default means that if you ask the ThreadPool to queue a large number of items with QueueUserWorkItem(), you end up creating a huge number of concurrent threads that you have no way of stopping.
The caller is able to push the CPU to 100% with not only the work but also the creation of the work, because of the way I queued the threads.
I was thinking that by using the Producer/Consumer pattern I could queue these work items into my own queue, which would allow me to moderate how many threads I create to avoid the CPU spike of creating all the concurrent threads, and that I might be able to allow the caller of my class to flush all the jobs in the queue when they are abandoning the requests.
I am currently trying to implement this myself, but figured SO was a good place to have someone say "look at this code", or "you won't be able to flush because of this", or "flushing isn't the right term, you mean this".
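To make the idea concrete, here is a rough .NET 2.0-style sketch of a producer/consumer queue with a FlushQueue() along the lines described above (class and member names are made up, not an actual implementation):

class WorkQueue
{
    private readonly Queue<WaitCallback> pending = new Queue<WaitCallback>();
    private readonly object sync = new object();
    private bool workerRunning;

    public void Enqueue(WaitCallback job)
    {
        lock (sync)
        {
            pending.Enqueue(job);
            if (!workerRunning)
            {
                workerRunning = true;
                ThreadPool.QueueUserWorkItem(Drain);   // one pool thread drains the queue
            }
        }
    }

    public void FlushQueue()
    {
        lock (sync) { pending.Clear(); }   // abandon every job that has not started yet
    }

    private void Drain(object state)
    {
        while (true)
        {
            WaitCallback next;
            lock (sync)
            {
                if (pending.Count == 0) { workerRunning = false; return; }
                next = pending.Dequeue();
            }
            next(null);   // run the job outside the lock
        }
    }
}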
EDIT: My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone who reads this question and is using 4.0.
If you are using C# 4.0, or can take a dependency on one of the earlier versions of the parallel frameworks, you can use their built-in cancellation support. It's not as easy as cancelling a thread, but the framework is much more reliable (cancelling a thread is very attractive but also very dangerous).
Reed did an excellent article on this that you should take a look at:
http://reedcopsey.com/2010/02/17/parallelism-in-net-part-10-cancellation-in-plinq-and-the-parallel-class/
A method I've used in the past, though it's certainly not a best practice, is to dedicate a class instance to each thread and have an abort flag on the class. Then create a ThrowIfAborting method on the class that is called periodically from the thread (particularly if the thread is running a loop; just call it every iteration). If the flag has been set, ThrowIfAborting will simply throw an exception, which is caught in the main method for the thread. Just make sure to clean up your resources as you're aborting.
You could extend the Begin/End pattern to become the Begin/Cancel/End pattern. The Cancel method could set a cancel flag that the worker thread polls periodically. When the worker thread detects a cancel request, it can stop its work, clean-up resources as needed, and report that the operation was canceled as part of the End arguments.
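A rough sketch of that Begin/Cancel/End idea (type and member names are illustrative; anonymous-delegate syntax keeps it friendly to the 2.0 framework the question targets):

class CancellableOperation
{
    private volatile bool cancelRequested;

    // completed(true) means the operation was cancelled before it finished
    public void Begin(Action<bool> completed)
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            while (!cancelRequested && MoreWorkRemains())
            {
                DoNextChunk();
            }
            completed(cancelRequested);
        });
    }

    public void Cancel()
    {
        cancelRequested = true;   // the worker notices this between chunks
    }

    private bool MoreWorkRemains() { return false; }   // placeholder
    private void DoNextChunk() { }                     // placeholder
}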
I've solved what I believe to be your exact problem by using a wrapper class around 1+ BackgroundWorker instances.
Unfortunately, I'm not able to post my entire class, but here's the basic concept along with its limitations.
Usage:
You simply create an instance and call RunOrReplace(...) when you want to cancel your old worker and start a new one. If the old worker was busy, it is asked to cancel and then another worker is used to immediately execute your request.
public class BackgroundWorkerReplaceable : IDisposable
{
    BackgroundWorker activeWorker = null;
    object activeWorkerSyncRoot = new object();
    List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
    DoWorkEventHandler doWork;
    RunWorkerCompletedEventHandler runWorkerCompleted;

    public bool IsBusy
    {
        get { return activeWorker != null ? activeWorker.IsBusy : false; }
    }

    public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
    {
        this.doWork = doWork;
        this.runWorkerCompleted = runWorkerCompleted;
        ResetActiveWorker();
    }

    public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
    {
        try
        {
            lock(activeWorkerSyncRoot)
            {
                if(activeWorker.IsBusy)
                {
                    ResetActiveWorker();
                }
                // This works because if IsBusy was false above, there is no way for it to become true without another thread obtaining a lock
                if(!activeWorker.IsBusy)
                {
                    // Optionally handle ProgressChangedEventHandler and other features (under the lock!)
                    // Work on this new param
                    activeWorker.RunWorkerAsync(param);
                }
                else
                { // This should never happen since we create new workers when there's none available!
                    throw new LogicException(...); // assert or similar
                }
            }
        }
        catch(...) // InvalidOperationException and Exception
        { // In my experience, it's safe to just show the user an error and ignore these, but that's going to depend on what you use this for and where you want the exception handling to be
        }
    }

    public void Cancel()
    {
        ResetActiveWorker();
    }

    public void Dispose()
    { // You should implement a proper Dispose/Finalizer pattern
        if(activeWorker != null)
        {
            activeWorker.CancelAsync();
        }
        foreach(BackgroundWorker worker in workerPool)
        {
            worker.CancelAsync();
            worker.Dispose();
            // perhaps use a for loop instead so you can set worker to null? This might help the GC, but it's probably not needed
        }
    }

    void ResetActiveWorker()
    {
        lock(activeWorkerSyncRoot)
        {
            if(activeWorker == null)
            {
                activeWorker = GetAvailableWorker();
            }
            else if(activeWorker.IsBusy)
            { // Current worker is busy - issue a cancel and set another active worker
                activeWorker.CancelAsync(); // Make sure WorkerSupportsCancellation is set to true [Link9372]
                // Optionally handle ProgressEventHandler -=
                activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
            }
            //else - do nothing, activeWorker is already ready for work!
        }
    }

    BackgroundWorker GetAvailableWorker()
    {
        // Loop through workerPool and return a worker if IsBusy is false
        // if the loop exits without returning...
        if(activeWorker != null)
        {
            workerPool.Add(activeWorker); // Save the old worker for possible future use
        }
        return GenerateNewWorker();
    }

    BackgroundWorker GenerateNewWorker()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.WorkerSupportsCancellation = true; // [Link9372]
        //worker.WorkerReportsProgress
        worker.DoWork += doWork;
        worker.RunWorkerCompleted += runWorkerCompleted;
        // Other stuff
        return worker;
    }
} // class
Pro/Con:
This has the benefit of having a very low delay in starting your new execution, since new threads don't have to wait for old ones to finish.
This comes at the cost of a theoretical never-ending growth of BackgroundWorker objects that never get GC'd. However, in practice the code above attempts to recycle old workers, so you shouldn't normally encounter a large pool of idle threads. If you are worried about this because of how you plan to use this class, you could implement a Timer which fires a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do this cleanup (at the cost of a longer RunOrReplace(...) delay).
The main cost from using this is precisely why it's beneficial - it doesn't wait for the previous thread to exit. So, for example, if DoWork is performing a database call and you execute RunOrReplace(...) 10 times in rapid succession, the database call might not be immediately canceled when the thread is - so you'll have 10 queries running, making all of them slow! This generally tends to work fine with Oracle, causing only minor delays, but I do not have experience with other databases (to speed up the cleanup, I have the canceled worker tell Oracle to cancel the command). Proper use of the EventArgs described below mostly solves this.
Another minor cost is that whatever code this BackgroundWorker is performing must be compatible with this concept - it must be able to safely recover from being canceled. The DoWorkEventArgs and RunWorkerCompletedEventArgs have a Cancel/Cancelled property which you should use. For example, if you do database calls in the DoWork method (mainly what I use this class for), you need to make sure you periodically check these properties and perform the appropriate clean-up.
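To round this out, a hypothetical usage sketch of the wrapper above (the single-parameter RunOrReplace call, someParameter and the handler bodies are assumptions):

var replaceableWorker = new BackgroundWorkerReplaceable(
    (sender, e) =>
    {
        var bw = (BackgroundWorker)sender;
        // long-running work; poll bw.CancellationPending and set e.Cancel = true to bail out early
    },
    (sender, e) =>
    {
        if (e.Cancelled) { /* the request was abandoned via Cancel() or RunOrReplace() */ }
    });

replaceableWorker.RunOrReplace(someParameter);   // cancels any busy worker and starts fresh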

How to ensure order of async operation calls?

[This appears to be a loooong question but I have tried to make it as clear as possible. Please have patience and help me...]
I have written a test class which supports an Async operation. That operation does nothing but reports 4 numbers:
class AsyncDemoUsingAsyncOperations
{
    AsyncOperation asyncOp;
    bool isBusy;

    void NotifyStarted () {
        isBusy = true;
        Started (this, new EventArgs ());
    }

    void NotifyStopped () {
        isBusy = false;
        Stopped (this, new EventArgs ());
    }

    public void Start () {
        if (isBusy)
            throw new InvalidOperationException ("Already working you moron...");
        asyncOp = AsyncOperationManager.CreateOperation (null);
        ThreadPool.QueueUserWorkItem (new WaitCallback (StartOperation));
    }

    public event EventHandler Started = delegate { };
    public event EventHandler Stopped = delegate { };
    public event EventHandler<NewNumberEventArgs> NewNumber = delegate { };

    private void StartOperation (object state) {
        asyncOp.Post (args => NotifyStarted (), null);
        for (int i = 1; i < 5; i++)
            asyncOp.Post (args => NewNumber (this, args as NewNumberEventArgs), new NewNumberEventArgs (i));
        asyncOp.Post (args => NotifyStopped (), null);
    }
}

class NewNumberEventArgs: EventArgs
{
    public int Num { get; private set; }

    public NewNumberEventArgs (int num) {
        Num = num;
    }
}
Then I wrote two test programs: one as a console app and another as a Windows Forms app. The Windows Forms app works as expected when I call Start repeatedly, but the console app has a hard time ensuring the order.
Since I am working on a class library, I have to ensure that my library works correctly in all app models (I haven't tested in an ASP.NET app yet). So I have the following questions:
I have tested enough times and it appears to be working, but is it OK to assume the above code will always work in a Windows Forms app?
What's the reason the order doesn't work correctly in the console app? How can I fix it?
I'm not very experienced with ASP.NET. Will the order work in an ASP.NET app?
[EDIT: Test stubs can be seen here if that helps.]
Unless I am missing something, given the code above I believe there is no way of guaranteeing the order of execution. I have never used the AsyncOperation and AsyncOperationManager classes, but I looked in Reflector and, as could be assumed, AsyncOperation.Post uses the thread pool to execute the given code asynchronously.
This means that in the example you have provided, 4 tasks will be queued to the thread pool synchronously in very quick succession. The thread pool will then dequeue the tasks in FIFO order (first in, first out), but it's entirely possible for one of the later threads to be scheduled before an earlier one, or for one of the later threads to complete before an earlier thread has completed its work.
Therefore given what you have there is no way to control the order in the way you desire. There are ways to do this, a good place to look is this article on MSDN.
http://msdn.microsoft.com/en-us/magazine/dd419664.aspx
I use a Queue: you can then Enqueue items and Dequeue them in the correct order. This solved the problem for me.
The documentation for AsyncOperation.Post states:
Console applications do not synchronize the execution of Post calls. This can cause ProgressChanged events to be raised out of order. If you wish to have serialized execution of Post calls, implement and install a System.Threading.SynchronizationContext class.
I think this is the exact behavior you’re seeing. Basically, if the code that wants to subscribe to notifications from your asynchronous event wants to receive the updates in order, it must ensure that there is a synchronization context installed and that your AsyncOperationManager.CreateOperation() call is run inside of that context. If the code consuming the asynchronous events doesn’t care about receiving them in the correct order, it simply needs to avoid installing a synchronization context which will result in the “default” context being used (which just queues calls directly to the threadpool which may execute them in any order it wants to).
In the GUI version of your application, if you call your API from a UI thread, you will automatically have a synchronization context. This context is wired up to use the UI’s message queueing system which guarantees that posted messages are processed in order and serially (i.e., not concurrently).
In a Console application, unless if you manually install your own synchronization context, you will be using the default, non-synchronizing threadpool version. I am not exactly sure, but I don’t think that .net makes installing a serializing synchronization context very easy to do. I just use Nito.AsyncEx.AsyncContext from the Nito.AsyncEx nuget package to do that for me. Basically, if you call Nito.AsyncEx.AsyncContext.Run(MyMethod), it will capture the current thread and run an event loop with MyMethod as the first “handler” in that event loop. If MyMethod calls something that creates an AsyncOperation, that operation increments an “ongoing operations” counter and that loop will continue until the operation is completed via AsyncOperation.PostOperationCompleted or AsyncOperation.OperationCompleted. Just like the synchronization context provided by a UI thread, AsyncContext will queue posts from AsyncOperation.Post() in the order it receives them and run them serially in its event loop.
Here is an example of how to use AsyncContext with your demo asynchronous operation:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Starting SynchronizationContext");
        Nito.AsyncEx.AsyncContext.Run(Run);
        Console.WriteLine("SynchronizationContext finished");
    }

    // This method is run like it is a UI callback. I.e., it has a
    // single-threaded event-loop-based synchronization context which
    // processes asynchronous callbacks.
    static Task Run()
    {
        var remainingTasks = new Queue<Action>();
        Action startNextTask = () =>
        {
            if (remainingTasks.Any())
                remainingTasks.Dequeue()();
        };
        foreach (var i in Enumerable.Range(0, 4))
        {
            remainingTasks.Enqueue(
                () =>
                {
                    var demoOperation = new AsyncDemoUsingAsyncOperations();
                    demoOperation.Started += (sender, e) => Console.WriteLine("Started");
                    demoOperation.NewNumber += (sender, e) => Console.WriteLine($"Received number {e.Num}");
                    demoOperation.Stopped += (sender, e) =>
                    {
                        // The AsyncDemoUsingAsyncOperation has a bug where it fails to call
                        // AsyncOperation.OperationCompleted(). Do that for it. If we don’t,
                        // the AsyncContext will never exit because there are outstanding unfinished
                        // asynchronous operations.
                        ((AsyncOperation)typeof(AsyncDemoUsingAsyncOperations).GetField("asyncOp", BindingFlags.NonPublic|BindingFlags.Instance).GetValue(demoOperation)).OperationCompleted();
                        Console.WriteLine("Stopped");
                        // Start the next task.
                        startNextTask();
                    };
                    demoOperation.Start();
                });
        }
        // Start the first one.
        startNextTask();
        // AsyncContext requires us to return a Task because that is its
        // normal use case.
        return Nito.AsyncEx.TaskConstants.Completed;
    }
}
With output:
Starting SynchronizationContext
Started
Received number 1
Received number 2
Received number 3
Received number 4
Stopped
Started
Received number 1
Received number 2
Received number 3
Received number 4
Stopped
Started
Received number 1
Received number 2
Received number 3
Received number 4
Stopped
Started
Received number 1
Received number 2
Received number 3
Received number 4
Stopped
SynchronizationContext finished
Note that in my example code I work around a bug in AsyncDemoUsingAsyncOperations which you should probably fix: when your operation stops, you never call AsyncOperation.OperationCompleted or AsyncOperation.PostOperationCompleted. This causes AsyncContext.Run() to hang forever because it is waiting for the outstanding operations to complete. You should make sure that your asynchronous operations complete—even in error cases. Otherwise you might run into similar issues elsewhere.
Also, my demo code, to imitate the output you showed in the winforms and console example, waits for each operation to finish before starting the next one. That kind of defeats the point of asynchronous coding. You can probably tell that my code could be greatly simplified by starting all four tasks at once. Each individual task would receive its callbacks in the correct order, but they would all make progress concurrently.
Recommendation
Because of how AsyncOperation seems to work and how it is intended to be used, it is the responsibility of the caller of an asynchronous API that uses this pattern to decide if it wants to receive events in order or not. If you are going to use AsyncOperation, you should document that the asynchronous events will only be received in order by the caller if the caller has a synchronization context that enforces serialization and suggest that the caller call your API on either a UI thread or in something like AsyncContext.Run(). If you try to use synchronization primitives and whatnot inside of the delegate you call with AsyncOperation.Post(), you could end up putting threadpool threads in a sleeping state which is a bad thing when considering performance and is completely redundant/wasteful when the caller of your API has properly set up a synchronization context already. This also enables the caller to decide that, if it is fine with receiving things out of order, that it is willing to process events concurrently and out of order. That may even enable speedup depending on what you’re doing. Or you might even decide to put something like a sequence number in your NewNumberEventArgs in case the caller wants both concurrency and still needs to be able to assemble the events into order at some point.
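A small sketch of that last idea, extending the question's NewNumberEventArgs with a sequence number (the extra constructor parameter is an assumption):

class NewNumberEventArgs : EventArgs
{
    public int Num { get; private set; }
    public int Sequence { get; private set; }   // monotonically increasing per operation

    public NewNumberEventArgs(int num, int sequence)
    {
        Num = num;
        Sequence = sequence;
    }
}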
