Does anyone know why the Task.Run below just ends without doing anything, while if I keep only this line:
var someDump = Helper.MakeRequest(body, Helper.GetUrl(2, ConfigurationManager.AppSettings["SomeId"]), 2);
it works seamlessly?
Thank you!
PS: It just prints out "t2 done".
var t2 = Task.Run(() =>
{
string filePath = System.Web.HttpContext.Current.Server.MapPath("~/" + ConfigurationManager.AppSettings["ServerPoolsFile"]);
var someDump = Helper.MakeRequest(body, Helper.GetUrl(2, ConfigurationManager.AppSettings["SomeId"]), 2);
JObject wfOutput = JObject.Parse(someDump.WFOutput);
var jsonData = wfOutput["output-parameters"];
var poolsList = jsonData[0]["value"]["string"]["value"];
JObject siteJson = JObject.Parse(poolsList.ToString());
filePath = filePath + "_" + siteJson["site"].ToString() + ".json";
if (!System.IO.File.Exists(filePath))
{
// Create a file to write to.
using (StreamWriter sw = System.IO.File.CreateText(filePath))
{
sw.WriteLine(poolsList.ToString());
}
}
else
{
System.IO.File.WriteAllText(filePath, poolsList.ToString());
}
}).ContinueWith(t => Console.WriteLine("t2 done."));
Task.Yield();
You need to await the task at the end:
await t2;
Your function exits before the task has had a chance to finish.
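A minimal sketch of that, assuming the enclosing method can be made async (the method name here is made up):
// Hypothetical enclosing method; its name and signature are assumptions.
public async Task ProcessAsync(string body)
{
    var t2 = Task.Run(() =>
    {
        // ... the work from the question ...
    }).ContinueWith(t => Console.WriteLine("t2 done."));

    // Awaiting keeps this method from returning until t2 has finished
    // (an exception inside the Task.Run body is still not observed here,
    // because t2 is the ContinueWith continuation; see the next answer).
    await t2;
}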
Your code creates two tasks, not just one. This code:
var t2 = Task.Run(() =>
{
//...
}).ContinueWith(t => Console.WriteLine("t2 done."));
...is equivalent to this:
Task t1 = Task.Run(() =>
{
//...
});
Task t2 = t1.ContinueWith(t =>
{
Console.WriteLine("t2 done.");
});
The difference is that in the first case you don't have access to the t1 task, and so you can't await it and observe any error that may have occurred. Your last chance to observe a possible error was inside the t2 body, by examining the IsFaulted/Exception properties of the t argument. You didn't, so now you'll never know what happened inside the t1 task.
As a side note, ContinueWith is a primitive method, full of gotchas and nuances, and using it in application code is not advisable. If you are writing a library and you are fully aware of its nuances and pitfalls, then this advice doesn't apply. But for general usage, async/await offers everything you need for writing correct and maintainable asynchronous code.
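For illustration, a rough sketch of the same flow using async/await instead of ContinueWith (the enclosing method name is an assumption, and the body is elided):
// Sketch only: assumes the caller can await this method.
public async Task WriteServerPoolsAsync(string body)
{
    await Task.Run(() =>
    {
        // ... the same body as in the question: call Helper.MakeRequest,
        // parse the JSON, and write the pools file ...
    });

    // Runs only after the work above has completed; any exception thrown
    // inside the Task.Run body surfaces at the await instead of being lost.
    Console.WriteLine("t2 done.");
}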
Related
I fire up some async tasks in parallel like the following example:
var BooksTask = _client.GetBooks(clientId);
var ExtrasTask = _client.GetBooksExtras(clientId);
var InvoicesTask = _client.GetBooksInvoice(clientId);
var ReceiptsTask = _client.GetBooksRecceipts(clientId);
await Task.WhenAll(
BooksTask,
ExtrasTask,
InvoicesTask,
ReceiptsTask
);
model.Books = BooksTask.Result;
model.Extras = ExtrasTask.Result;
model.Invoices = InvoicesTask.Result;
model.Receipts = ReceiptsTask.Result;
This results in a lot of typing. I searched the .Net Framework for a way to shorten this up. I imagine it looking something like this. I call the class Collector as I don't know how to name the concept.
var collector = new Collector();
collector.Bind(_client.GetBooks(clientId), out model.Books);
collector.Bind(_client.GetBooksExtras(clientId), out model.Extras);
collector.Bind(_client.GetBooksInvoice(clientId), out model.Invoices);
collector.Bind(_client.GetBooksRecceipts(clientId), out model.Receipts);
collector.Run();
Is this a valid approach? Is there something like that?
Personally, I prefer the code in the question (but using await instead of Result for code maintainability reasons). As noted in andyb952's answer, the Task.WhenAll is not required. I do prefer it for readability reasons; it makes the semantics explicit and IMO makes the code easier to read.
I searched the .Net Framework for a way to shorten this up.
There isn't anything built-in, nor (to my knowledge) any libraries for this. I've thought about writing one using tuples. For your code, it would look like this:
public static class TaskHelpers
{
public static async Task<(T1, T2, T3, T4)> WhenAll<T1, T2, T3, T4>(Task<T1> task1, Task<T2> task2, Task<T3> task3, Task<T4> task4)
{
await Task.WhenAll(task1, task2, task3, task4).ConfigureAwait(false);
return (await task1, await task2, await task3, await task4);
}
}
With this helper in place, your original code simplifies to:
(model.Books, model.Extras, model.Invoices, model.Receipts) = await TaskHelpers.WhenAll(
_client.GetBooks(clientId),
_client.GetBooksExtras(clientId),
_client.GetBooksInvoice(clientId),
_client.GetBooksRecceipts(clientId)
);
But is it really more readable? So far, I have not been convinced enough to make this into a library.
In this case I believe that the WhenAll is kind of irrelevant as you are using the results immediately after. Changing to this will have the same effect.
var BooksTask = _client.GetBooks(clientId);
var ExtrasTask = _client.GetBooksExtras(clientId);
var InvoicesTask = _client.GetBooksInvoice(clientId);
var ReceiptsTask = _client.GetBooksRecceipts(clientId);
model.Books = await BooksTask;
model.Extras = await ExtrasTask;
model.Invoices = await InvoicesTask;
model.Receipts = await ReceiptsTask;
The awaits ensure that you don't move past those four assignments until all of the tasks have completed.
As pointed out in andyb952's answer, in this case it isn't really necessary to call Task.WhenAll, since all the tasks are hot and already running.
But there are situations where you may still want to have an AsyncCollector type.
TL;DR:
Async helper function usage example
async Task Async(Func<Task> asyncDelegate) =>
await asyncDelegate().ConfigureAwait(false);
AsyncCollector implementation, usage example
var collector = new AsyncCollector();
collector.Register(async () => model.Books = await _client.GetBooks(clientId));
collector.Register(async () => model.Extras = await _client.GetBooksExtras(clientId));
collector.Register(async () => model.Invoices = await _client.GetBooksInvoice(clientId));
collector.Register(async () => model.Receipts = await _client.GetBooksReceipts(clientId));
await collector.WhenAll();
If you're worried about closures, see the note at the end.
Let's see why someone would want that.
This is the solution that runs the tasks concurrently:
var task1 = _client.GetFooAsync();
var task2 = _client.GetBarAsync();
// Both tasks are running.
var v1 = await task1;
var v2 = await task2;
// It doesn't matter if task2 completed before task1:
// at this point both tasks completed and they ran concurrently.
The problem
What about when you don't know how many tasks you'll use?
In this scenario, you can't define the task variables at compile time.
Storing the tasks in a collection, alone, won't solve the problem, since the result of each task was meant to be assigned to a specific variable!
var tasks = new List<Task<string>>();
foreach (var translation in translations)
{
var translationTask = _client.TranslateAsync(translation.Eng);
tasks.Add(translationTask);
}
await Task.WhenAll(tasks);
// Now there are N completed tasks, each with a value that
// should be associated to the translation instance that
// was used to generate the async operation.
Solutions
A workaround would be to assign the values based on the index of the task, which of course only works if the tasks were created (and stored) in the same order as the items:
await Task.WhenAll(tasks);
for (int i = 0; i < tasks.Count; i++)
translations[i].Value = await tasks[i];
A more appropriate solution would be to use LINQ and generate a Task that encapsulates two operations: fetching the data and assigning it to its receiver:
var translationTasks = translations
.Select(async t => t.Value = await _client.TranslateAsync(t.Eng))
// Enumerating the result of the Select forces the tasks to be created.
.ToList();
await Task.WhenAll(translationTasks);
// Now all the translations have been fetched and assigned to the right property.
This looks OK until you need to apply the same pattern to another list, or to another single value; then you start to have many List<Task> and Task variables inside your function that you need to manage:
var translationTasks = translations
.Select(async t => t.Value = await _client.TranslateAsync(t.Eng))
.ToList();
var fooTasks = foos
.Select(async f => f.Value = await _client.GetFooAsync(f.Id))
.ToList();
var bar = ...;
var barTask = _client.GetBarAsync(bar.Id);
// Now all tasks are running concurrently, some are also assigning the value
// to the right property, but now the "await" part is a bit more cumbersome.
bar.Value = await barTask;
await Task.WhenAll(translationTasks);
await Task.WhenAll(fooTasks);
A cleaner solution (imho)
In these situations, I like to use a helper function that wraps an async operation (any kind of operation), very similar to how the tasks are created with Select above:
async Task Async(Func<Task> asyncDelegate) =>
await asyncDelegate().ConfigureAwait(false);
Using this function in the previous scenario results in this code:
var tasks = new List<Task>();
foreach (var t in translations)
{
// The fetch of the value and its assignment are wrapped by the Task.
var fetchAndAssignTask = Async(async () =>
{
t.Value = await _client.TranslateAsync(t.Eng);
});
tasks.Add(fetchAndAssignTask);
}
foreach (var f in foos)
// Short syntax
tasks.Add(Async(async () => f.Value = await _client.GetFooAsync(f.Id)));
// It works even without enumerables!
var bar = ...;
tasks.Add(Async(async () => bar.Value = await _client.GetBarAsync(bar.Id)));
await Task.WhenAll(tasks);
// Now all the values have been fetched and assigned to their receiver.
Here you can find a full example of using this helper function, which without the comments becomes:
var tasks = new List<Task>();
foreach (var t in translations)
tasks.Add(Async(async () => t.Value = await _client.TranslateAsync(t.Eng)));
foreach (var f in foos)
tasks.Add(Async(async () => f.Value = await _client.GetFooAsync(f.Id)));
tasks.Add(Async(async () => bar.Value = await _client.GetBarAsync(bar.Id)));
await Task.WhenAll(tasks);
The AsyncCollector type
This technique can be easily wrapped inside a "Collector" type:
class AsyncCollector
{
private readonly List<Task> _tasks = new List<Task>();
public void Register(Func<Task> asyncDelegate) => _tasks.Add(asyncDelegate());
public Task WhenAll() => Task.WhenAll(_tasks);
}
Here is a full implementation and here is a usage example.
Note: as pointed out in the comments, there are risks involved when using closures and enumerators, but from C# 5 onwards the use of foreach is safe because closures will close over a fresh copy of the variable each time.
If you still would like to use this type with an earlier version of C# and need that safety with closures, the Register method can be changed to accept a subject that is passed to the delegate, avoiding the closure altogether.
public void Register<TSubject>(TSubject subject, Func<TSubject, Task> asyncDelegate)
{
var task = asyncDelegate(subject);
_tasks.Add(task);
}
The code then becomes:
var collector = new AsyncCollector();
foreach (var translation in translations)
// Register translation as a subject, and use it inside the delegate as "t".
collector.Register(translation,
async t => t.Value = await _client.TranslateAsync(t.Eng));
foreach (var foo in foos)
collector.Register(foo, async f => f.Value = await _client.GetFooAsync(f.Id));
collector.Register(bar, async b => b.Value = await _client.GetBarAsync(b.Id));
await collector.WhenAll();
I'm playing with Tasks and I would like to defer my task's execution.
I've a sample method like this:
private async Task<bool> DoSomething(string name, int delayInSeconds)
{
Debug.WriteLine($"Inside task named: {name}");
await Task.Delay(TimeSpan.FromSeconds(delayInSeconds));
Debug.WriteLine($"Finishing task named: {name}");
return true;
}
I would like to create a few tasks first, then perform some other work, and only after that run those tasks. Since the line Task<bool> myTask = DoSomething("Name", 4); fires the task right away, I've come up with something like this:
string[] taskNames = new string[2];
Task<Task<bool>>[] myTasks = new Task<Task<bool>>[2];
myTasks[0] = new Task<Task<bool>>(async () => await DoSomething(taskNames[0], taskNames[0].Length));
myTasks[1] = new Task<Task<bool>>(async () => await DoSomething(taskNames[1], taskNames[1].Length));
// I think I can declare it also like this, but this will create tasks later
//IEnumerable<Task<Task<bool>>> myTasks = taskNames.Select(x => new Task<Task<bool>>(async () => await DoSomething(x, x.Length)));
taskNames[0] = "First";
taskNames[1] = "Second";
Debug.WriteLine($"Tasks created");
var results = await Task.WhenAll(myTasks.Select(x => { x.Start(); return x.Unwrap(); }));
Debug.WriteLine($"Finishing: {results.Select(x => x.ToString()).Aggregate((a,b) => a + "," + b) }");
Can this be done a different way, without wrapping the task?
You can just use Task-producing delegates to simplify things a bit:
string[] taskNames = new string[2];
Func<Task<bool>>[] myTasks = new Func<Task<bool>>[2];
myTasks[0] = new Func<Task<bool>>(async () => await DoSomething(taskNames[0], taskNames[0].Length));
myTasks[1] = new Func<Task<bool>>(() => DoSomething(taskNames[1], taskNames[1].Length)); // Shorter version, near-identical functionally.
// I think I can declare it also like this, but this will create tasks later
//IEnumerable<Task<Task<bool>>> myTasks = taskNames.Select(x => new Task<Task<bool>>(async () => await DoSomething(x, x.Length)));
taskNames[0] = "First";
taskNames[1] = "Second";
Debug.WriteLine($"Tasks created");
var results = await Task.WhenAll(myTasks.Select(x => x()));
Debug.WriteLine($"Finishing: {results.Select(x => x.ToString()).Aggregate((a, b) => a + "," + b) }");
Caveat: DoSomething will execute synchronously up to the first await when you invoke those delegates, so the behaviour is similar, but not exactly identical.
Alternatively, your IEnumerable-based solution will work fine too. Just write an iterator method and yield return tasks as you start them.
Personally though I'd just do this:
string[] taskNames = new string[2];
taskNames[0] = "First";
taskNames[1] = "Second";
var results = await Task.WhenAll(taskNames.Select(n => DoSomething(n, n.Length)));
Debug.WriteLine($"Finishing: {results.Select(x => x.ToString()).Aggregate((a, b) => a + "," + b) }");
You're not just "creating a Task" in your example. You're invoking a method, DoSomething, that returns a Task. Assuming these are async methods, the Task creation and starting takes place behind the scenes in compiler-generated code.
The solution to this problem is easy: Don't invoke the method until you're ready for the method to be running. Imagine how confusing the behavior you're asking for would be in any other context.
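A minimal sketch of that idea, deferring the call by holding a delegate instead of a Task (essentially what the previous answer shows):
// Nothing runs yet; we only remember what to call later.
Func<Task<bool>> deferred = () => DoSomething("First", 5);

// ... other work ...

// The method is invoked (and the task started) only here.
bool result = await deferred();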
I hope this makes sense - Suppose I have the following code:
Task.Run(() =>
{
return Task.WhenAll
(
Task1,
Task2,
...
Taskn
)
.ContinueWith(tsks=>
{
TaskA (uses output from Tasks Task1 & Task2, say)
}
, ct)
.ContinueWith(res =>
{
TaskB (uses output from TaskA and Task3, say)
}
, ct);
});
So I want all my first N tasks to run concurrently (since we have no interdependencies), then only once they're all finished, to continue with a task that relies on their outputs (I get that for this, I can use the tsks.Result).
BUT THEN I want to continue with a task that relies on one of the first tasks and the result of TaskA.
I'm a bit lost as to how to structure my code correctly so I can access the results of my first set of tasks outside of the ContinueWith that immediately follows them.
My one thought was to assign their return values to variables within my method, something like:
... declare variables outside of Tasks ...
Task.Run(() =>
{
return Task.WhenAll
(
Task.Run(() => { var1 = Task1.Result; }, ct),
...
Task.Run(() => { varn = Taskn.Result; }, ct),
)
.ContinueWith(tsks=>
{
TaskA (uses output from Tasks var1 & varn, say)
}
, ct)
.ContinueWith(res =>
{
TaskB (uses output from TaskA and var3, say)
}
, ct);
});
But even though this works for me, I have no doubt that this is the wrong way to do it.
What is the correct way? Should I have a state object that contains all the necessary variables and pass that throughout all my tasks? Is there a better way in total?
Please forgive my ignorance here - I'm just VERY new to concurrency programming.
Since Task1, Task2, ... , TaskN are in scope for the call of WhenAll, and because by the time ContinueWith passes control to your next task all the earlier tasks are guaranteed to finish, it is safe to use TaskX.Result inside the code implementing continuations:
.ContinueWith(tsks=>
{
var resTask1 = Task1.Result;
...
}
, ct)
You are guaranteed to get the result without blocking, because the task Task1 has finished running.
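Put together, a sketch of that structure (Task1..Task3 stand for the tasks in the question; ComputeTaskA and ComputeTaskB are hypothetical placeholders for the work of TaskA and TaskB):
Task.WhenAll(Task1, Task2, Task3)
    .ContinueWith(tsks =>
    {
        // Safe: all of the original tasks have completed, so .Result does not block.
        var resTask1 = Task1.Result;
        var resTask2 = Task2.Result;
        return ComputeTaskA(resTask1, resTask2); // hypothetical helper
    }, ct)
    .ContinueWith(prev =>
    {
        // prev.Result is TaskA's output; Task3.Result is also available here.
        ComputeTaskB(prev.Result, Task3.Result); // hypothetical helper
    }, ct);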
Here is a way to do it with ConcurrentDictionary, which sounds like it might be applicable in your use case. Also, since you're new to concurrency, it shows you the Interlocked class as well:
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Executing...");
var numOfTasks = 50;
var tasks = new List<Task>();
for (int i = 0; i < numOfTasks; i++)
{
var iTask = Task.Run(() =>
{
var counter = Interlocked.Increment(ref _Counter);
Console.WriteLine(counter);
if (counter == numOfTasks - 1)
{
Console.WriteLine("Waiting {0} ms", 5000);
Task.Delay(5000).Wait(); // to simulate a longish running task
}
_State.AddOrUpdate(counter, "Updated Yo!", (k, v) =>
{
throw new InvalidOperationException("This shouldn't occur more than once.");
});
});
tasks.Add(iTask);
}
Task.WhenAll(tasks)
.ContinueWith(t =>
{
var longishState = _State[numOfTasks - 1];
Console.WriteLine(longishState);
Console.WriteLine("Complete. longishState: " + longishState);
});
Console.ReadKey();
}
static int _Counter = -1;
static ConcurrentDictionary<int, string> _State = new ConcurrentDictionary<int, string>();
}
You get output similar to this, though the Waiting line won't always be the last one before the continuation.
An elegant way to solve this is to use the Barrier class.
Like this:
var nrOfTasks = ... ;
ConcurrentDictionary<int, ResultType> Results = new ConcurrentDictionary<int, ResultType>();
var barrier = new Barrier(nrOfTasks, (b) =>
{
// here goes the work of TaskA
// and immediatley
// here goes the work of TaskB, having the results of TaskA and any other task you might need
});
Task.Run(() => { Results[1] = Task1.Result; barrier.SignalAndWait(); }, ct);
...
Task.Run(() => { Results[nrOfTasks] = Taskn.Result; barrier.SignalAndWait(); }, ct);
Using the async/await model, I have a method which makes 3 different calls to a web service and then returns the union of the results.
var result1 = await myService.GetData(source1);
var result2 = await myService.GetData(source2);
var result3 = await myService.GetData(source3);
allResults = Union(result1, result2, result3);
Using typical await like this, these 3 calls will execute sequentially with respect to each other. How would I go about letting them execute concurrently and join the results as they complete?
How would I go about letting them execute in parallel and join the results as they complete?
The simplest approach is just to create all the tasks and then await them:
var task1 = myService.GetData(source1);
var task2 = myService.GetData(source2);
var task3 = myService.GetData(source3);
// Now everything's started, we can await them
var result1 = await task1;
var result2 = await task2;
var result3 = await task3;
You might also consider Task.WhenAll. You need to consider the possibility that more than one task will fail... with the above code you wouldn't observe the failure of task3 for example, if task2 fails - because your async method will propagate the exception from task2 before you await task3.
I'm not suggesting a particular strategy here, because it will depend on your exact scenario. You may only care about success/failure and logging one cause of failure, in which case the above code is fine. Otherwise, you could potentially attach continuations to the original tasks to log all exceptions, for example.
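For reference, the Task.WhenAll variant of the same code might look like this (same names as in the question):
var task1 = myService.GetData(source1);
var task2 = myService.GetData(source2);
var task3 = myService.GetData(source3);

// Completes when all three tasks have completed. If several of them fail,
// the WhenAll task's Exception holds all of the failures, although the
// await itself rethrows only the first one.
await Task.WhenAll(task1, task2, task3);

allResults = Union(task1.Result, task2.Result, task3.Result);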
You could use the Parallel class:
Parallel.Invoke(
() => result1 = myService.GetData(source1),
() => result2 = myService.GetData(source2),
() => result3 = myService.GetData(source3)
);
For more information visit: http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel(v=vs.110).aspx
As a more generic solution, you can use the API I wrote below; it also allows you to define a throttling mechanism that caps the maximum number of concurrent async requests.
The inputEnumerable is the enumerable of your sources, and asyncProcessor is your async delegate (myService.GetData in your example).
If the asyncProcessor (myService.GetData) returns void or just a Task without a result type, you can simply update the API to reflect that (just replace all Task<> references with Task).
public static async Task<TOut[]> ForEachAsync<TIn, TOut>(
IEnumerable<TIn> inputEnumerable,
Func<TIn, Task<TOut>> asyncProcessor,
int? maxDegreeOfParallelism = null)
{
IEnumerable<Task<TOut>> tasks;
if (maxDegreeOfParallelism != null)
{
SemaphoreSlim throttler = new SemaphoreSlim(maxDegreeOfParallelism.Value, maxDegreeOfParallelism.Value);
tasks = inputEnumerable.Select(
async input =>
{
await throttler.WaitAsync();
try
{
return await asyncProcessor(input).ConfigureAwait(false);
}
finally
{
throttler.Release();
}
});
}
else
{
tasks = inputEnumerable.Select(asyncProcessor);
}
return await Task.WhenAll(tasks);
}
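A hypothetical usage with the earlier GetData example (assuming the method above is in scope and the sources array is invented for the sake of the example), throttled to at most two concurrent requests:
var sources = new[] { source1, source2, source3 };
var results = await ForEachAsync(sources, s => myService.GetData(s), maxDegreeOfParallelism: 2);
allResults = Union(results[0], results[1], results[2]);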
I have a List<Task<bool>> that I want to enumerate in parallel, finding the first task that completes with a result of true, without waiting for, or observing exceptions on, any of the other tasks still pending.
var tasks = new List<Task<bool>>
{
Task.Delay(2000).ContinueWith(x => false),
Task.Delay(0).ContinueWith(x => true),
};
I have tried to use PLINQ to do something like:
var task = tasks.AsParallel().FirstOrDefault(t => t.Result);
Which executes in parallel, but doesn't return as soon as it finds a satisfying result, because accessing the Result property is blocking. In order for this to work using PLINQ, I'd have to write this awful statement:
var cts = new CancellationTokenSource();
var task = tasks.AsParallel()
.FirstOrDefault(t =>
{
try
{
t.Wait(cts.Token);
if (t.Result)
{
cts.Cancel();
}
return t.Result;
}
catch (OperationCanceledException)
{
return false;
}
} );
I've written up an extension method that yields tasks as they complete like so.
public static class Exts
{
public static IEnumerable<Task<T>> InCompletionOrder<T>(this IEnumerable<Task<T>> source)
{
var tasks = source.ToList();
while (tasks.Any())
{
var t = Task.WhenAny(tasks);
yield return t.Result;
tasks.Remove(t.Result);
}
}
}
// and run like so
var task = tasks.InCompletionOrder().FirstOrDefault(t => t.Result);
But it feels like this is something common enough that there is a better way. Suggestions?
Maybe something like this?
var tcs = new TaskCompletionSource<Task<bool>>();
foreach (var task in tasks)
{
task.ContinueWith((t, state) =>
{
if (t.Result)
{
((TaskCompletionSource<Task<bool>>)state).TrySetResult(t);
}
},
tcs,
TaskContinuationOptions.OnlyOnRanToCompletion |
TaskContinuationOptions.ExecuteSynchronously);
}
var firstTaskToComplete = tcs.Task;
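Awaiting that outer task then gives the first inner task that completed with true; note that if every task ends up false (or faults), the TaskCompletionSource is never set, so the await below would never complete.
// Completes as soon as some task finishes with a true result.
var winner = await firstTaskToComplete;
Console.WriteLine(winner.Result); // prints True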
Perhaps you could try the Rx.NET library. It's very good at, in effect, providing LINQ over asynchronous work.
Try this snippet in LinqPad after you reference the Microsoft Rx.NET assemblies.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Reactive.Threading.Tasks;
using System.Threading.Tasks;
void Main()
{
var tasks = new List<Task<bool>>
{
Task.Delay(2000).ContinueWith(x => false),
Task.Delay(0).ContinueWith(x => true),
};
var observable = (from t in tasks.ToObservable()
//Convert task to an observable
let o = t.ToObservable()
//SelectMany
from x in o
select x);
var foo = observable
.SubscribeOn(Scheduler.Default) //Run the tasks on the threadpool
.ToList()
.First();
Console.WriteLine(foo);
}
First, I don't understand why you are trying to use PLINQ here. Enumerating a list of Tasks shouldn't take long, so I don't think you're going to gain anything from parallelizing it.
Now, to get the first Task that already completed with true, you can use the (non-blocking) IsCompleted property:
var task = tasks.FirstOrDefault(t => t.IsCompleted && t.Result);
If you wanted to get a collection of Tasks, ordered by their completion, have a look at Stephen Toub's article Processing tasks as they complete. If you want to list those that return true first, you would need to modify that code. If you don't want to modify it, you can use a version of this approach from Stephen Cleary's AsyncEx library.
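If what you specifically want is the first task that completes with true, without blocking, a minimal sketch using Task.WhenAny in a loop might look like this (FirstTrueAsync is a made-up name, not the code from the article):
public static async Task<Task<bool>> FirstTrueAsync(IEnumerable<Task<bool>> source)
{
    var remaining = source.ToList();
    while (remaining.Count > 0)
    {
        // Await the next task to complete, whichever it is.
        var completed = await Task.WhenAny(remaining);
        if (completed.Status == TaskStatus.RanToCompletion && completed.Result)
            return completed;
        remaining.Remove(completed);
    }
    return null; // no task completed with a result of true
}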
Also, in the specific case in your question, you could “fix” your code by adding .WithMergeOptions(ParallelMergeOptions.NotBuffered) to the PLINQ query. But doing so still wouldn't work most of the time, and it can waste threads even when it does. That's because PLINQ uses a constant number of threads with partitioning, and calling Result would block those threads most of the time.