Multiple Asynchronous SqlClient Operations - Looking for a good example - C#

I have been using Asynchronous operations in my WinForms client since the beginning, but only for some operations. When the ExecuteReader or the ExecuteNonQuery completes, the callback delegate fires and everything works just fine.
I've basically got two issues:
1) What is the best structure for dealing with this in a real-life system? All the examples I have seen are toy examples where the form handles the operation completion and then opens a DataReader on the EndExecuteReader. Of course, this means the form is more tightly coupled to the database than you would normally like. And, of course, the form can always easily call .Invoke on itself. I've set up all my async objects to inherit from an AsyncCtrlBlock<T> class, and the form and all the callback delegates are provided to the constructor of the async objects in my DAL.
2) I am going to re-visit a portion of the program that currently is not async. It makes two calls in series. When the first is complete, part of the model can be populated. When the second is complete, the remaining part of the model can be completed - but only if the first part is already done. What is the best way to structure this? It would be great if the first read can be done and the processing due to the first read be underway while the second is launched, but I don't want the processing of the second read to be started until I know that the processing of the first read's data has been completed.

Regarding 2):
Make the first phase of your model population asynchronous.
You will have something like this:
FirstCall();
IAsyncResult arPh1 = BeginPhaseOne(); // uses results from the first call
SecondCall();
EndPhaseOne(arPh1); // wait until phase one is finished
PhaseTwo(); // proceed to phase two
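One way to realize the hypothetical BeginPhaseOne/EndPhaseOne pair above is with a delegate's BeginInvoke/EndInvoke (FirstCall, SecondCall, PopulatePhaseOne and PhaseTwo are placeholders for the poster's own methods, not a real API):

```csharp
// Sketch only: phase-one processing overlaps the second blocking call.
var firstData = FirstCall();                           // blocking call #1
Action phaseOne = () => PopulatePhaseOne(firstData);   // process first result set
IAsyncResult arPh1 = phaseOne.BeginInvoke(null, null); // phase one runs in the background
var secondData = SecondCall();                         // call #2 overlaps phase one
phaseOne.EndInvoke(arPh1);                             // block until phase one is done
PhaseTwo(secondData);                                  // now safe to start phase two
```

EndInvoke also rethrows any exception that occurred inside PopulatePhaseOne, so failures in phase one surface before phase two starts.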

If you are on .NET 4, this would be an ideal application of the TPL! You could factor your code into tasks like this:
TaskScheduler uiScheduler = GetUIScheduler(); // e.g. TaskScheduler.FromCurrentSynchronizationContext(), called on the UI thread
SqlCommand command1 = CreateCommand1();
Task<SqlDataReader> query1 = Task<SqlDataReader>.Factory.FromAsync(command1.BeginExecuteReader, command1.EndExecuteReader, null);
query1.ContinueWith(t => PopulateGrid1(t.Result), uiScheduler);
SqlCommand command2 = CreateCommand2();
query1.ContinueWith(t => Task<SqlDataReader>.Factory.FromAsync(command2.BeginExecuteReader, command2.EndExecuteReader, null)
    .ContinueWith(t2 => PopulateGrid2(t2.Result), uiScheduler));

Related

Long API Call - Async calls the answer?

I am calling an external API which is slow. If I haven't called the API to fetch orders for a while, the call can be broken up into pages (pagination).
So fetching orders could mean making multiple calls rather than one. Each call can take around 10 seconds, so the whole fetch can be about a minute in total, which is far too long.
GetOrdersCall getOrders = new GetOrdersCall();
getOrders.DetailLevelList.Add(DetailLevelCodeType.ReturnSummary);
getOrders.CreateTimeFrom = lastOrderDate;
getOrders.CreateTimeTo = DateTime.Now;

PaginationType paging = new PaginationType();
paging.EntriesPerPage = 20;
paging.PageNumber = 1;
getOrders.Pagination = paging;

getOrders.Execute();
var response = getOrders.ApiResponse;
OrderTypeCollection orders = new OrderTypeCollection();

while (response != null && response.OrderArray.Count > 0)
{
    eBayConverter.ConvertOrders(response.OrderArray, 1);
    if (response.HasMoreOrders)
    {
        getOrders.Pagination.PageNumber++;
        getOrders.Execute();
        response = getOrders.ApiResponse;
        orders.AddRange(response.OrderArray);
    }
    else
    {
        break; // no more pages: avoid looping forever on the last response
    }
}
This is a summary of my code above... getOrders.Execute() is where the API call fires.
After the first getOrders.Execute() there is a pagination result which tells me how many pages of data there are. My thinking is that I should be able to start an asynchronous call for each page and populate the OrderTypeCollection. When all the calls have completed and the collection is fully loaded, I will commit to the database.
I have never done asynchronous calls in C# before, and while I can more or less follow async/await, I think my scenario falls outside the reading I have done so far.
Questions:
I think I can set it up to fire off the multiple calls asynchronously, but I'm not sure how to check when all tasks have completed, i.e. when I'm ready to commit to the db.
I've read somewhere that I want to avoid combining the API call and the db write, to avoid locking in SQL Server - is this correct?
If someone can point me in the right direction - It would be greatly appreciated.
I think I can set it up to fire off the multiple calls asynchronously
but I'm not sure how to check when all tasks have been completed i.e.
ready to commit to db.
Yes, you can break this up.
The problem is that eBay doesn't have an async Task Execute method, so you are left with blocking threaded calls and no IO-optimised async/await pattern. If there were one, you could take advantage of a TPL Dataflow pipeline, which is async-aware (and fun for the whole family). You could use Dataflow anyway, though I propose a vanilla TPL solution...
However, all is not lost: just fall back to Parallel.For and a ConcurrentBag<OrderType>.
Example
var concurrentBag = new ConcurrentBag<OrderType>();

// make the first call synchronously, add its results to concurrentBag,
// and read the page count from the pagination result
int pageCount = ...;

// Parallel.For's upper bound is exclusive, hence pageCount + 1;
// start at 2 because page 1 was fetched above
Parallel.For(2, pageCount + 1,
    page =>
    {
        // set up a fresh GetOrdersCall for this iteration
        // set its Pagination.PageNumber to page
        // make the call
        foreach (var order in getOrders.ApiResponse.OrderArray)
            concurrentBag.Add(order);
    });

// all orders have been downloaded
// save to db
// all orders have been downloaded
// save to db
Note: there is a MaxDegreeOfParallelism option you can configure via ParallelOptions - maybe set it to 50 - though it won't really matter how much you give it: the task scheduler is not going to aggressively give you threads, maybe 10 or so initially, growing slowly.
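To cap the concurrency explicitly, the Parallel.For overload that takes a ParallelOptions can be used (the value 10 and pageCount are illustrative, matching the sketch above):

```csharp
// Limit the loop to at most 10 concurrent iterations.
var options = new ParallelOptions { MaxDegreeOfParallelism = 10 };

Parallel.For(2, pageCount + 1, options, page =>
{
    // fetch and collect one page, as in the example above
});
```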
The other way you can do this is to create your own TaskScheduler, or just spin up your own threads with the old-fashioned Thread class.
I've read somewhere that I want to avoid combining the API call and
the db write to avoid locking in SQL server - Is this correct?
If you mean locking as in a slow DB insert, use SQL bulk insert and update tools.
If you mean locking as in the DB deadlock error message, then that is an entirely different thing, and worthy of its own question.
Additional Resources
For(Int32, Int32, ParallelOptions, Action<Int32>)
Executes a for (For in Visual Basic) loop in which iterations may run
in parallel and loop options can be configured.
ParallelOptions Class
Stores options that configure the operation of methods on the Parallel
class.
MaxDegreeOfParallelism
Gets or sets the maximum number of concurrent tasks enabled by this
ParallelOptions instance.
ConcurrentBag<T> Class
Represents a thread-safe, unordered collection of objects.
Yes, ConcurrentBag<T> can be used to serve the purpose of one of your questions, which was: "I think I can set it up to fire off the multiple calls asynchronously but I'm not sure how to check when all tasks have been completed i.e. ready to commit to db."
This generic class can be used to collect the results of all your tasks while you wait for every task to complete before doing further processing. It is thread-safe and useful for parallel processing.
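As a sketch of that idea (FetchPage, SaveToDatabase and pageCount are assumed placeholders, not the eBay SDK's API): start one Task per page and let Task.WhenAll provide the single point where everything is known to be finished.

```csharp
// Sketch: assumes page 1 was already fetched to learn pageCount.
async Task DownloadAllPagesAsync(int pageCount)
{
    var bag = new ConcurrentBag<OrderType>();

    // one Task per remaining page (2..pageCount)
    var tasks = Enumerable.Range(2, pageCount - 1)
        .Select(page => Task.Run(() =>
        {
            foreach (var order in FetchPage(page)) // blocking API call for one page
                bag.Add(order);
        }))
        .ToArray();

    await Task.WhenAll(tasks); // completes only when every page is in
    SaveToDatabase(bag);       // single commit, kept separate from the API calls
}
```

Keeping the database write after the await also answers the second question: the API calls and the db commit never overlap.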

Prevent multiple execution of a ReactiveCommand (CreateAsyncTask)

Is it possible to prevent multiple executions of a ReactiveCommand?
Here is the 'simple' code I use:
The command is created:
this.LoadCommand = ReactiveCommand.CreateAsyncTask(
    async _ => await this._dataService.Load(),
    RxApp.TaskpoolScheduler);
After I add the subscription to the command:
this.LoadCommand.Subscribe(assets => ...);
And finally, I execute the command:
this.LoadCommand.ExecuteAsyncTask();
If I call ExecuteAsyncTask multiple times from several locations, I would like any subsequent calls to wait for the first one to finish.
EDIT:
Here is the complete code for the Subscribe method:
this.LoadCommand.Subscribe(assets =>
{
    Application.Current.Dispatcher.Invoke(
        DispatcherPriority.Background,
        new Action(() => this.Assets.Clear()));

    foreach (Asset asset in assets)
    {
        Application.Current.Dispatcher.Invoke(
            DispatcherPriority.Background,
            new Action<Asset>(a =>
            {
                this.Assets.Add(a);
            }), asset);
    }
});
Thanks,
Adrien.
I downloaded your sample application, and was able to fix it. Here's my 2 cents:
1) I took out the RxApp.TaskpoolScheduler parameter in your command creation. That parameter tells the command to deliver its results using that scheduler, and I think you want to stick to delivering results on the UI thread.
2) Since by making this change you are now running your Subscribe logic on the UI thread, you don't need to deal with all that Invoking. You can access the collection directly:
this.LoadCommand.Subscribe(dataCollection =>
{
    DataCollection.Clear();
    DataCollection.AddRange(dataCollection);
});
Making just those 2 changes caused it to "work".
I'm no expert, but what I think was happening is that your ReactiveCommand "LoadCommand" was immediately returning and delivering results on various task-pool threads. The command itself never allows concurrency within its execution, which is by design. However, the subscribes - since each was coming in on a different thread - were happening concurrently (a race). So all the clears occurred, then all the adds.
By subscribing and handling all on the same thread you can avoid this, and if you can manage it on the UI thread, you won't need to involve Invoking to the Dispatcher.
Also, in this particular situation, using Invoke on the Dispatcher with the priority DispatcherPriority.Background seems to execute things in a non-serial fashion. I'm not sure of the exact order, but it seemed to do all the clears, then the adds in reverse order (I incremented a counter so I could tell which invocation was which). So there is definitely something to be said for that. FWIW, changing the priority to DispatcherPriority.Send kept it serial and displayed the "expected" behavior. That said, I still prefer avoiding Invoking to the Dispatcher altogether if you can.

How to run (create?) class in a separate thread?

I am writing a program that can be easily partitioned into several distinct parts. Simplified, it would look like this:
a Reader class would work with getting data from a certain device,
an Analyzer class would perform calculations on the data obtained from the device at regular intervals,
a Form1 class would output the UI (a graphical representation of the data gathered by Reader, and the numbers output by Analyzer).
Naturally, I'd like those three classes to run in separate threads (on separate cores). Meaning - all methods of Reader run in its own thread, all methods of Analyzer run in its own thread, and Form1 runs in default thread.
However, all that comes to mind is using Thread or BackgroundWorker classes, and then instead of calling some resource-heavy method on Reader or Analyzer I'd instead call
BackgroundWorker.RunWorkerAsync()
I suppose this is not the best way to do it, is it? I'd rather somehow create the class in a separate thread and leave it there for its lifespan, but I just don't get how do I do it... And I can't think of a suitable search query it seems because I haven't found answer when I searched for one.
EDIT: Thank you for the comments, I think I understand, the question itself was assuming that you can create a class "on a thread" - with implied meaning of "any method of this class called will execute on its thread" - which makes no sense, and cannot be done.
I think you are on the right track. You will need:
1) two threads, Reader and Analyzer, started by Form1. They basically consist of big loops that run until some flag (stopReader or stopAnalyzer) is set;
2) two concurrent queues, let's call them readQueue and analyzedQueue. Reader will put stuff in readQueue, Analyzer will read from readQueue and write to analyzedQueue, and Form1 will read from analyzedQueue.
void runReader()
{
    while (!stopReader)
    {
        var data = ...; // read data from device
        readQueue.Enqueue(data);
    }
}

void runAnalyzer()
{
    while (!stopAnalyzer)
    {
        Data data;
        if (readQueue.TryDequeue(out data))
        {
            var result = ...; // analyze data
            analyzedQueue.Enqueue(result);
        }
        else
        {
            Thread.Sleep(...); // wait a while
        }
    }
}
Instead of Thread.Sleep, you could use a BlockingCollection to make Analyzer wait until a new data item is available. In that case, you might want to use a CancellationToken instead of a Boolean for stopAnalyzer, so that you can interrupt BlockingCollection.Take when stopping your algorithm.
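A sketch of that variant (Data, analyzedQueue and an assumed Analyze helper as in the code above; the CancellationTokenSource is new):

```csharp
var readQueue = new BlockingCollection<Data>();
var cts = new CancellationTokenSource();

void runAnalyzer()
{
    try
    {
        while (true)
        {
            // Blocks until an item is available; throws
            // OperationCanceledException when cts.Cancel() is called.
            Data data = readQueue.Take(cts.Token);
            var result = Analyze(data); // analyze data
            analyzedQueue.Enqueue(result);
        }
    }
    catch (OperationCanceledException)
    {
        // stop requested: this replaces the stopAnalyzer flag
    }
}
```

Calling cts.Cancel() from Form1 wakes the analyzer even while it is blocked in Take, so there is no polling and no sleep interval to tune.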

Using Tasks as a way to separate computation of results from committing results

An API pattern we are considering for separating the work of calculating some results from the committing of those results is:
interface IResults { }
class Results : IResults { }

Task<IResults> CalculateResultsAsync(CancellationToken ct)
{
    return Task.Run<IResults>(() => new Results(), ct);
}

void CommitResults(IResults iresults)
{
    Results results = (Results)iresults;
    // Commit the results
}
This would allow a client to have a UI that kicked off the calculation of some results and know when the calculation was ready, and then at that time decide whether or not to commit the results. This is mainly to help us deal with the case where during the calculation, the UI will allow the user to cancel the operation. We want to ensure that:
The cancel UI is only shown while the action is still cancellable (i.e once we're in CommitResults, there is no going back), so once the CalculateResultsAsync task completes, we take down the cancel UI and as long as the user hasn't cancelled, go ahead and call the commit method.
We don't want to have a case (i.e. a race condition) where the user hits cancel and the results are committed anyways.
The client will never make use of IResults other than to pass it back to CommitResults.
Question:
The general question is: is this a good approach? Specifically:
It doesn't feel right having this split into two methods since the client is never inspecting IResults, they are just handing it back to the Commit method.
Is there a standard approach to this problem?
This is a very standard pattern (if not the ideal pattern), especially when your Results object is immutable. We do this regularly in TPL-using code inside the Visual Studio codebase. Much happiness always exists when your asynchronous/parallel logic is processing data, and the mutating crap lives apart from that.
If you're familiar with or have heard of the "Roslyn" project, this is a pattern we're actually encouraging people to use. The idea is refactorings can process asynchronously in the background and produce an object just like your result one that represents the result of the refactoring being applied. Then, on the UI thread anybody can take one of those result objects and apply it, which goes and updates all your files to contain the new text.
I do find the entire IResults/Results thing a bit strange -- it's not clear whether you're using this to hide implementations from yourself or not. If the empty interface and the cast bug you, you could consider adding a Commit method to IResults, which the result object implements. Up to you.
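That suggestion could look something like this (a sketch, not the poster's actual API):

```csharp
// The result object knows how to commit itself, so the empty
// marker interface and the cast in CommitResults disappear.
interface IResults
{
    void Commit();
}

class Results : IResults
{
    public void Commit()
    {
        // commit the results
    }
}
```

The client then calls results.Commit() directly on the object handed back by CalculateResultsAsync, instead of passing it to a separate CommitResults method.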
I'm not sure why exactly you would need this pattern. To me, it seems that if you check the CancellationToken just before starting the commit, you're going to get exactly the same result with a simpler interface.

Is there a way to signal a System.Threading.Tasks.Task complete?

I have a library that is a complicated arbiter of many network connections. Each method of its primary object takes a delegate, which is called when the network responds to a given request.
I want to translate my library to use the new .NET 4.5 "async/await" pattern; this would require me to return a "Task" object, which would signal to the user that the asynchronous part of the call is complete. Creating this object requires a function for the task to represent - as far as I understand, it is essentially a lightweight thread.
This doesn't really fit the design of my library - I would like the task to behave more like an event, and directly signal to the user that their request has completed, rather than representing a function. Is this possible? Should I avoid abusing the "async/await" pattern in this way?
I don't know if I'm wording this very well, I hope you understand my meaning. Thank you for any help.
As far as I understand, it is essentially a lightweight thread.
No, that's not really true. It can be true, under certain circumstances, but that's just one usage of Task. You can create a task by passing it a delegate and having it executed (usually asynchronously, possibly synchronously, and by default using the thread pool).
Another way of using tasks is through a TaskCompletionSource. When you do that, the task is (potentially) not creating any threads, using the thread pool, or anything along those lines. One common usage of this model is converting an event-based API to a Task-based API:
Let's assume, just because it's a common example, that we want a Task that completes when a Form that we have is closed. There is already a FormClosed event that fires when that happens:
public static Task WhenClosed(this Form form)
{
    var tcs = new TaskCompletionSource<object>();
    form.FormClosed += (_, args) =>
    {
        tcs.SetResult(null);
    };
    return tcs.Task;
}
We create a TaskCompletionSource, add a handler to the event in question, signal the completion of the task in that handler, and the TaskCompletionSource provides us with a Task to return to the caller. This Task will not have resulted in any new threads being created; it won't use the thread pool, or anything like that.
You can have a Task/Event based model using this construct that appears quite asynchronous, but only using a single thread to do all work (the UI thread).
In general, anytime you want to have a Task that represents something other than the execution of a function you'll want to consider using a TaskCompletionSource. It's usually the appropriate conceptual way to approach the problem, other than possibly using one of the existing TPL methods, such as WhenAll, WhenAny, etc.
Should I avoid abusing the "async/await" pattern in this way?
No, because it's not abuse. It's a perfectly appropriate use of Task constructs, as well as async/await. Consider, for example, the code that you can write using the helper method that I have above:
private async void button1_Click(object sender, EventArgs e)
{
    Form2 popup = new Form2();
    this.Hide();
    popup.Show();
    await popup.WhenClosed();
    this.Show();
}
This code now works just like it reads: create a new form, hide myself, show the popup, wait until the popup is closed, and then show myself again. However, since it's not a blocking wait, the UI thread isn't blocked. We also don't need to bother with events: adding handlers, dealing with multiple contexts, moving our logic around, or any of it. (All of that still happens; it's just hidden from us.)
I want to translate my library to use the new .NET 4.5 "async/await" pattern; this would require me to return a "Task" object, which would signal to the user that the asynchronous part of the call is complete.
Well, not really - you can return anything which implements the awaitable pattern - but a Task is the simplest way of doing this.
This doesn't really fit the design of my library - I would like the task to behave more like an event, and directly signal to the user that their request has completed, rather than representing a function.
You can call Task.ContinueWith to act as a "handler" to execute when the task completes. Indeed, that's what TaskAwaiter does under the hood.
Your question isn't terribly clear to be honest, but if you really want to create a Task which you can then force to completion whenever you like, I suspect you just want TaskCompletionSource<TResult> - you call the SetResult, SetCanceled or SetException methods to indicate the appropriate kind of completion. (Or you can call the TrySet... versions.) Use the Task property to return a task to whatever needs it.
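Putting that together for a delegate-based library, a wrapper might look like this (SendRequest and Response are hypothetical stand-ins for the library's own callback-taking method and its result type, including the Error property):

```csharp
public Task<Response> SendRequestAsync()
{
    var tcs = new TaskCompletionSource<Response>();

    // SendRequest is the existing delegate-taking method; its callback
    // fires when the network responds to this request.
    SendRequest(response =>
    {
        if (response.Error != null)
            tcs.TrySetException(response.Error); // fault the task on failure
        else
            tcs.TrySetResult(response);          // complete it on success
    });

    return tcs.Task; // callers can now await this
}
```

No function is "represented" by the task here: it simply completes whenever the library's callback fires, which is exactly the event-like behavior the question asks for.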
