Many to Many TPL Dataflow does not process all inputs - c#

I have a TPL Dataflow pipeline with two sources and two targets linked in a many-to-many fashion. The target blocks appear to complete successfully; however, the pipeline usually drops one or more inputs. I've attached the simplest possible full repro I could come up with below. Any ideas?
Notes:
The problem only occurs if the artificial delay is used while generating the input.
Complete() is successfully called for both sources, but one of the sources' Completion tasks hangs in the WaitingForActivation state, even though both targets complete successfully.
I can't find any documentation stating many-to-many dataflows aren't supported, and this question's answer implies it is - https://social.msdn.microsoft.com/Forums/en-US/19d831af-2d3f-4d95-9672-b28ae53e6fa0/completion-of-complex-graph-dataflowgraph-object?forum=tpldataflow
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class Program
{
    private const int NumbersPerSource = 10;
    private const int MaxDelayMilliseconds = 10;

    static async Task Main(string[] args)
    {
        int numbersProcessed = 0;
        var source1 = new BufferBlock<int>();
        var source2 = new BufferBlock<int>();
        var target1 = new ActionBlock<int>(i => Interlocked.Increment(ref numbersProcessed));
        var target2 = new ActionBlock<int>(i => Interlocked.Increment(ref numbersProcessed));
        var linkOptions = new DataflowLinkOptions() { PropagateCompletion = true };
        source1.LinkTo(target1, linkOptions);
        source1.LinkTo(target2, linkOptions);
        source2.LinkTo(target1, linkOptions);
        source2.LinkTo(target2, linkOptions);
        var task1 = Task.Run(() => Post(source1));
        var task2 = Task.Run(() => Post(source2));
        // source1 or source2 Completion tasks may never complete even though Complete is always successfully called.
        //await Task.WhenAll(task1, task2, source1.Completion, source2.Completion, target1.Completion, target2.Completion);
        await Task.WhenAll(task1, task2, target1.Completion, target2.Completion);
        Console.WriteLine($"{numbersProcessed} of {NumbersPerSource * 2} numbers processed.");
    }

    private static async Task Post(BufferBlock<int> source)
    {
        foreach (var i in Enumerable.Range(0, NumbersPerSource))
        {
            await Task.Delay(TimeSpan.FromMilliseconds(GetRandomMilliseconds()));
            Debug.Assert(source.Post(i));
        }
        source.Complete();
    }

    private static Random Random = new Random();

    private static int GetRandomMilliseconds()
    {
        lock (Random)
        {
            return Random.Next(0, MaxDelayMilliseconds);
        }
    }
}

As @MikeJ pointed out in a comment, linking the blocks with the PropagateCompletion option in a many-to-many dataflow configuration can cause the premature completion of some target blocks. In this case target1 and target2 are both marked as completed when either of the two source blocks completes, leaving the other source unable to complete, because there are still messages in its output buffer. These messages are never going to be consumed, because none of the linked target blocks is willing to accept them.
To fix this problem you could use the custom PropagateCompletion method below:
public static void PropagateCompletion(IDataflowBlock[] sources,
    IDataflowBlock[] targets)
{
    // Arguments validation omitted
    Task allSourcesCompletion = Task.WhenAll(sources.Select(s => s.Completion));
    ThreadPool.QueueUserWorkItem(async _ =>
    {
        try { await allSourcesCompletion.ConfigureAwait(false); } catch { }
        Exception exception = allSourcesCompletion.IsFaulted ?
            allSourcesCompletion.Exception : null;
        foreach (var target in targets)
        {
            if (exception is null) target.Complete(); else target.Fault(exception);
        }
    });
}
Usage example:
source1.LinkTo(target1);
source1.LinkTo(target2);
source2.LinkTo(target1);
source2.LinkTo(target2);
PropagateCompletion(new[] { source1, source2 }, new[] { target1, target2 });
Notice that no DataflowLinkOptions are passed when linking the sources to the targets in this example.
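With this custom propagation in place, the originally commented-out await from the repro can be restored. A minimal sketch, reusing the blocks and Post tasks from the repro above:
var task1 = Task.Run(() => Post(source1));
var task2 = Task.Run(() => Post(source2));
// All six tasks now complete: the targets are completed only after
// BOTH sources have completed, so no source is left stuck with
// unconsumed messages in its output buffer.
await Task.WhenAll(task1, task2,
    source1.Completion, source2.Completion,
    target1.Completion, target2.Completion);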


Limit the number of cores: new Task will block until a core is free from other tasks [duplicate]

I have a collection of 1000 input messages to process. I'm looping over the input collection and starting a new task for each message to get it processed.
//Assume this messages collection contains 1000 items
var messages = new List<string>();
foreach (var msg in messages)
{
    Task.Factory.StartNew(() =>
    {
        Process(msg);
    });
}
Can we guess how many messages will be processed simultaneously at most (assuming a normal quad-core processor), or can we limit the number of messages to be processed at a time?
How do I ensure the messages get processed in the same sequence/order as the collection?
You could use Parallel.ForEach and rely on MaxDegreeOfParallelism instead.
Parallel.ForEach(messages, new ParallelOptions { MaxDegreeOfParallelism = 10 },
    msg =>
    {
        // logic
        Process(msg);
    });
SemaphoreSlim is a very good solution in this case and I highly recommend the OP to try this, but @Manoj's answer has a flaw, as mentioned in the comments: the semaphore should be waited on before spawning the task, like this.
Updated answer: As @Vasyl pointed out, the semaphore may be disposed before the completion of the tasks and will raise an exception when the Release() method is called, so before exiting the using block one must wait for the completion of all created tasks.
int maxConcurrency = 10;
var messages = new List<string>();
using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
{
    List<Task> tasks = new List<Task>();
    foreach (var msg in messages)
    {
        concurrencySemaphore.Wait();
        var t = Task.Factory.StartNew(() =>
        {
            try
            {
                Process(msg);
            }
            finally
            {
                concurrencySemaphore.Release();
            }
        });
        tasks.Add(t);
    }
    Task.WaitAll(tasks.ToArray());
}
Answer to comments
For those who want to see how the semaphore can be disposed without Task.WaitAll: run the code below in a console app and this exception will be raised:
System.ObjectDisposedException: 'The semaphore has been disposed.'
static void Main(string[] args)
{
    int maxConcurrency = 5;
    List<string> messages = Enumerable.Range(1, 15).Select(e => e.ToString()).ToList();
    using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
    {
        List<Task> tasks = new List<Task>();
        foreach (var msg in messages)
        {
            concurrencySemaphore.Wait();
            var t = Task.Factory.StartNew(() =>
            {
                try
                {
                    Process(msg);
                }
                finally
                {
                    concurrencySemaphore.Release();
                }
            });
            tasks.Add(t);
        }
        // Task.WaitAll(tasks.ToArray());
    }
    Console.WriteLine("Exited using block");
    Console.ReadKey();
}

private static void Process(string msg)
{
    Thread.Sleep(2000);
    Console.WriteLine(msg);
}
I think it would be better to use Parallel LINQ, or at least a parallel loop with a capped degree of parallelism:
Parallel.ForEach(messages,
    new ParallelOptions { MaxDegreeOfParallelism = 4 },
    x => Process(x));
where 4 is the MaxDegreeOfParallelism.
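For an actual Parallel LINQ (PLINQ) version of the same idea, a minimal sketch (assuming Process is synchronous) could be:
// PLINQ: cap the degree of parallelism and run Process over all messages.
messages
    .AsParallel()
    .WithDegreeOfParallelism(4)
    .ForAll(x => Process(x));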
With .NET Core 3.0 (and .NET 5.0) channels were introduced.
The main benefit of this producer/consumer concurrency pattern is that you can also limit the input data processing to reduce resource impact.
This is especially helpful when processing millions of data records.
Instead of reading the whole dataset at once into memory, you can now consecutively query only chunks of the data and wait for the workers to process it before querying more.
Code sample with a bounded queue capacity of 10 messages and 3 consumer tasks:
/// <exception cref="System.AggregateException">Thrown on Consumer Task exceptions.</exception>
public static async Task ProcessMessages(List<string> messages)
{
    const int producerCapacity = 10, consumerTaskLimit = 3;
    var channel = Channel.CreateBounded<string>(producerCapacity);

    _ = Task.Run(async () =>
    {
        foreach (var msg in messages)
        {
            await channel.Writer.WriteAsync(msg);
            // blocking when channel is full,
            // waiting for the consumer tasks to pop messages from the queue
        }
        channel.Writer.Complete();
        // signaling the end of the queue so that
        // WaitToReadAsync will return false to stop the consumer tasks
    });

    var tokenSource = new CancellationTokenSource();
    CancellationToken ct = tokenSource.Token;

    var consumerTasks = Enumerable
        .Range(1, consumerTaskLimit)
        .Select(_ => Task.Run(async () =>
        {
            try
            {
                while (await channel.Reader.WaitToReadAsync(ct))
                {
                    ct.ThrowIfCancellationRequested();
                    while (channel.Reader.TryRead(out var message))
                    {
                        await Task.Delay(500);
                        Console.WriteLine(message);
                    }
                }
            }
            catch (OperationCanceledException) { }
            catch
            {
                tokenSource.Cancel();
                throw;
            }
        }))
        .ToArray();

    Task waitForConsumers = Task.WhenAll(consumerTasks);
    try { await waitForConsumers; }
    catch
    {
        foreach (var e in waitForConsumers.Exception.Flatten().InnerExceptions)
            Console.WriteLine(e.ToString());
        throw waitForConsumers.Exception.Flatten();
    }
}
As pointed out by Theodor Zoulias:
On multiple consumer exceptions, the remaining tasks will continue to run and have to take the load of the killed tasks. To avoid this, I implemented a CancellationToken to stop all the remaining tasks and handle the exceptions combined in the AggregateException of waitForConsumers.Exception.
Side note:
The Task Parallel Library (TPL) might be good at automatically limiting the tasks based on your local resources. But when you are processing data remotely via RPC, it's necessary to manually limit your RPC calls to avoid filling the network/processing stack!
If your Process method is async you can't use Task.Factory.StartNew as it doesn't play well with an async delegate. Also there are some other nuances when using it (see this for example).
The proper way to do it in this case is to use Task.Run. Here's @ClearLogic's answer modified for an async Process method.
static void Main(string[] args)
{
    int maxConcurrency = 5;
    List<string> messages = Enumerable.Range(1, 15).Select(e => e.ToString()).ToList();
    using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
    {
        List<Task> tasks = new List<Task>();
        foreach (var msg in messages)
        {
            concurrencySemaphore.Wait();
            var t = Task.Run(async () =>
            {
                try
                {
                    await Process(msg);
                }
                finally
                {
                    concurrencySemaphore.Release();
                }
            });
            tasks.Add(t);
        }
        Task.WaitAll(tasks.ToArray());
    }
    Console.WriteLine("Exited using block");
    Console.ReadKey();
}

private static async Task Process(string msg)
{
    await Task.Delay(2000);
    Console.WriteLine(msg);
}
You can create your own TaskScheduler and override QueueTask there.
protected virtual void QueueTask(Task task)
Then you can do anything you like.
One example here:
Limited concurrency level task scheduler (with task priority) handling wrapped tasks
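If you don't need full control over scheduling, a lighter-weight alternative (my suggestion, not part of the linked example) is the built-in ConcurrentExclusiveSchedulerPair, which caps the concurrency of the tasks queued to it; a minimal sketch:
// Run at most maxConcurrency tasks at a time by queuing them
// to the pair's concurrent scheduler.
int maxConcurrency = 10;
var scheduler = new ConcurrentExclusiveSchedulerPair(
    TaskScheduler.Default, maxConcurrency).ConcurrentScheduler;
var factory = new TaskFactory(scheduler);
foreach (var msg in messages)
{
    factory.StartNew(() => Process(msg));
}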
You can simply set the max degree of concurrency like this:
int maxConcurrency = 10;
var messages = new List<string>(); // assume this contains 1000 items
using (SemaphoreSlim concurrencySemaphore = new SemaphoreSlim(maxConcurrency))
{
    foreach (var msg in messages)
    {
        Task.Factory.StartNew(() =>
        {
            concurrencySemaphore.Wait();
            try
            {
                Process(msg);
            }
            finally
            {
                concurrencySemaphore.Release();
            }
        });
    }
}
If you need in-order queuing (processing might finish in any order), there is no need for a semaphore; old-fashioned if statements work fine:
const int maxConcurrency = 5;
List<Task> tasks = new List<Task>();
foreach (var arg in args)
{
    var t = Task.Run(() => { Process(arg); });
    tasks.Add(t);
    if (tasks.Count >= maxConcurrency)
    {
        Task.WaitAny(tasks.ToArray());
        // Drop finished tasks, otherwise WaitAny returns
        // immediately on every subsequent iteration.
        tasks.RemoveAll(task => task.IsCompleted);
    }
}
Task.WaitAll(tasks.ToArray());
I ran into a similar problem where I wanted to produce 5000 results while calling apis, etc. So, I ran some speed tests.
Parallel.ForEach(products.Select(x => x.KeyValue).Distinct().Take(100),
    new ParallelOptions { MaxDegreeOfParallelism = 100 },
    id =>
    {
        GetProductMetaData(productsMetaData, client, id).GetAwaiter().GetResult();
    });
produced 100 results in 30 seconds.
Parallel.ForEach(products.Select(x => x.KeyValue).Distinct().Take(100),
    new ParallelOptions { MaxDegreeOfParallelism = 100 },
    id =>
    {
        GetProductMetaData(productsMetaData, client, id);
    });
Moving the GetAwaiter().GetResult() to the individual async api calls inside GetProductMetaData resulted in 14.09 seconds to produce 100 results.
foreach (var id in ids.Take(100))
{
    GetProductMetaData(productsMetaData, client, id);
}
Complete non-async programming with the GetAwaiter().GetResult() in api calls resulted in 13.417 seconds.
var tasks = new List<Task>();
while (y < ids.Count())
{
    foreach (var id in ids.Skip(y).Take(100))
    {
        tasks.Add(GetProductMetaData(productsMetaData, client, id));
    }
    y += 100;
    Task.WhenAll(tasks).GetAwaiter().GetResult();
    Console.WriteLine($"Finished {y}, {sw.Elapsed}");
}
Forming a task list and working through 100 at a time resulted in a speed of 7.36 seconds.
using (SemaphoreSlim cons = new SemaphoreSlim(10))
{
    var tasks = new List<Task>();
    foreach (var id in ids.Take(100))
    {
        cons.Wait();
        var t = Task.Factory.StartNew(() =>
        {
            try
            {
                GetProductMetaData(productsMetaData, client, id);
            }
            finally
            {
                cons.Release();
            }
        });
        tasks.Add(t);
    }
    Task.WaitAll(tasks.ToArray());
}
Using SemaphoreSlim resulted in 13.369 seconds, but it also took a moment to spin up before it started being used.
var throttler = new SemaphoreSlim(initialCount: take);
foreach (var id in ids)
{
    throttler.WaitAsync().GetAwaiter().GetResult();
    tasks.Add(Task.Run(async () =>
    {
        try
        {
            skip += 1;
            await GetProductMetaData(productsMetaData, client, id);
            if (skip % 100 == 0)
            {
                Console.WriteLine($"started {skip}/{count}, {sw.Elapsed}");
            }
        }
        finally
        {
            throttler.Release();
        }
    }));
}
Using SemaphoreSlim with a throttler for my async tasks took 6.12 seconds.
The answer for me in this specific project was to use a throttler with SemaphoreSlim. Although the while/foreach task list did sometimes beat the throttler, the throttler won 4 out of 6 times for 1000 records.
I realize I'm not using the OP's code, but I think this is important and adds to this discussion, because "how" is sometimes not the only question that should be asked, and the answer is sometimes "it depends on what you are trying to do."
Now to answer the specific questions:
How to limit the maximum number of parallel tasks in C#: I showed how to limit the number of tasks running at a time.
Can we guess how many messages will be processed simultaneously at most (assuming a normal quad-core processor), or can we limit the number of messages processed at a time? I cannot guess how many will be processed at a time unless I set an upper limit, but I can set an upper limit. Obviously different computers function at different speeds due to CPU, RAM, etc., and due to how many threads and cores the program itself has access to, as well as other programs running in tandem on the same computer.
How to ensure the messages get processed in the same sequence/order as the collection? If you want to process everything in a specific order, that is synchronous programming; the point of being able to run things asynchronously is that they can complete without an order. As you can see from my code, the time difference is minimal for 100 records unless you use async code. In the event that you need an order to what you are doing, use asynchronous programming up until that point, then await and do things synchronously from there. For example, task1a.start, task2a.start, then later task1a.await, task2a.await... then later task1b.start, task1b.await and task2b.start, task2b.await.
public static void RunTasks(List<NamedTask> importTaskList)
{
    List<NamedTask> runningTasks = new List<NamedTask>();
    try
    {
        foreach (NamedTask currentTask in importTaskList)
        {
            currentTask.Start();
            runningTasks.Add(currentTask);
            if (runningTasks.Where(x => x.Status == TaskStatus.Running).Count() >= MaxCountImportThread)
            {
                Task.WaitAny(runningTasks.ToArray());
            }
        }
        Task.WaitAll(runningTasks.ToArray());
    }
    catch (Exception ex)
    {
        Log.Fatal("ERROR!", ex);
    }
}
You can use a BlockingCollection: if the collection's bounded limit has been reached, the producer will stop producing until a consuming process finishes an item. I find this pattern easier to understand and implement than SemaphoreSlim.
int TasksLimit = 10;
BlockingCollection<Task> tasks = new BlockingCollection<Task>(new ConcurrentBag<Task>(), TasksLimit);

void ProduceAndConsume()
{
    var producer = Task.Factory.StartNew(RunProducer);
    var consumer = Task.Factory.StartNew(RunConsumer);
    try
    {
        Task.WaitAll(new[] { producer, consumer });
    }
    catch (AggregateException) { }
}

void RunConsumer()
{
    foreach (var task in tasks.GetConsumingEnumerable())
    {
        task.Start();
    }
}

void RunProducer()
{
    for (int i = 0; i < 1000; i++)
    {
        tasks.Add(new Task(() => Thread.Sleep(1000), TaskCreationOptions.AttachedToParent));
    }
}
Note that RunProducer and RunConsumer are spawned as two independent tasks.

Parallelize yield return inside C#8 IAsyncEnumerable<T>

I have a method that returns an async enumerable
public async IAsyncEnumerable<IResult> DoWorkAsync()
{
    await Something();
    foreach (var item in ListOfWorkItems)
    {
        yield return DoWork(item);
    }
}
And the caller:
public async Task LogResultsAsync()
{
    await foreach (var result in DoWorkAsync())
    {
        Console.WriteLine(result);
    }
}
Because DoWork is an expensive operation, I'd prefer to somehow parallelize it, so it works similar to:
public async IAsyncEnumerable<IResult> DoWorkAsync()
{
    await Something();
    Parallel.ForEach(ListOfWorkItems, item =>
    {
        yield return DoWork(item);
    });
}
However, I can't do yield return from inside Parallel.ForEach, so I just wonder what's the best way to go about this?
The order of returned results doesn't matter.
Thanks.
Edit: Sorry, I left out some code in DoWorkAsync. It was indeed awaiting on something; I just didn't put it in the code above because that's not very relevant to the question. Updated now.
Edit 2: DoWork is mostly I/O-bound in my case; it's reading data from a database.
Here is a basic implementation that uses a TransformBlock from the TPL Dataflow library:
public async IAsyncEnumerable<IResult> GetResults(List<IWorkItem> workItems)
{
    // Define the dataflow block
    var block = new TransformBlock<IWorkItem, IResult>(async item =>
    {
        return await TransformAsync(item);
    }, new ExecutionDataflowBlockOptions()
    {
        MaxDegreeOfParallelism = 10, // the default is 1
        EnsureOrdered = false // the default is true
    });

    // Feed the block with input data
    foreach (var item in workItems)
    {
        block.Post(item);
    }
    block.Complete();

    // Stream the block's output as IAsyncEnumerable
    while (await block.OutputAvailableAsync())
    {
        while (block.TryReceive(out var result))
        {
            yield return result;
        }
    }

    // Propagate the first exception, if any.
    await block.Completion;
}
This implementation is not perfect, because in case the consumer of the IAsyncEnumerable abandons the enumeration prematurely, the TransformBlock will continue working in the background until all work items have been processed. Also, it doesn't support cancellation, which all respectable IAsyncEnumerable-producing methods should support. These missing features could be added relatively easily; if you are interested in adding them, look at this question.
Another imperfection is that in case the await TransformAsync(item) throws an OperationCanceledException, this error is suppressed. This is the by-design behavior of TPL Dataflow. In case this is a problem, you can find here the ingredients needed for a solution (it's not trivial).
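For instance, a sketch of what adding cancellation support might look like (my adaptation, not the linked question's exact code): pass the token both to the block's options and to the output loop. The EnumeratorCancellation attribute lives in System.Runtime.CompilerServices.
public async IAsyncEnumerable<IResult> GetResults(List<IWorkItem> workItems,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    var block = new TransformBlock<IWorkItem, IResult>(item => TransformAsync(item),
        new ExecutionDataflowBlockOptions()
        {
            MaxDegreeOfParallelism = 10,
            EnsureOrdered = false,
            CancellationToken = cancellationToken // cancels pending/queued work
        });
    foreach (var item in workItems) block.Post(item);
    block.Complete();
    // Throws an OperationCanceledException if the token is canceled while waiting.
    while (await block.OutputAvailableAsync(cancellationToken))
    {
        while (block.TryReceive(out var result))
        {
            yield return result;
        }
    }
    await block.Completion;
}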
.NET 6 update: A new API, DataflowBlock.ReceiveAllAsync, has been introduced in .NET 6 that can simplify streaming the block's output. There is a gotcha though; see this answer for details.
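A minimal sketch of the streaming loop rewritten with that API, the rest of the method staying as above:
// .NET 6+: stream the block's output without the nested receive loops.
await foreach (var result in block.ReceiveAllAsync())
{
    yield return result;
}
await block.Completion; // still needed, to propagate a possible exception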
As suggested by canton7, you could use AsParallel instead of Parallel.ForEach.
This can be consumed inside a standard foreach loop where you can yield the results:
public async IAsyncEnumerable<IResult> DoWorkAsync()
{
    await Something();
    foreach (var result in ListOfWorkItems.AsParallel().Select(DoWork))
    {
        yield return result;
    }
}
As mentioned by Theodor Zoulias, the enumerable returned isn't actually asynchronous at all.
If you simply need to consume this using await foreach this shouldn't be a problem, but to be more explicit, you could return the IEnumerable and have the caller parallelise it:
public async Task<IEnumerable<Item>> DoWorkAsync()
{
    await Something();
    return ListOfWorkItems;
}

// Caller...
Parallel.ForEach(await DoWorkAsync(), item =>
{
    var result = DoWork(item);
    //...
});
Although this may be less maintainable if it needs to be called in multiple places.
The solution proposed by Theodor Zoulias unfortunately has a small issue: you can't choose to aggregate multiple exceptions, so only the first exception is propagated.
Here's a modern solution using System.Threading.Channels (available since .NET Core 3.0) and Parallel.ForEachAsync (available since .NET 6) in order to limit the max degree of parallelism.
#nullable enable
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

namespace StackOverflow.SampleCode;

public class ParallelExecutionException<T> : Exception
{
    internal ParallelExecutionException(T item, Exception innerException) : base(innerException.Message, innerException)
    {
        Item = item;
    }

    public T Item { get; }

    public new Exception InnerException => base.InnerException!;
}

public static class AsyncEnumerableExtensions
{
    public static async IAsyncEnumerable<TOutput> AsParallelAsync<TInput, TOutput>(this IAsyncEnumerable<TInput> source, int maxDegreeOfParallelism, Func<TInput, CancellationToken, Task<TOutput>> transform, bool aggregateException = false, [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        ArgumentNullException.ThrowIfNull(source);
        if (maxDegreeOfParallelism < 1)
            throw new ArgumentOutOfRangeException(nameof(maxDegreeOfParallelism));

        var channelOptions = new UnboundedChannelOptions { SingleReader = true };
        var channel = Channel.CreateUnbounded<TOutput>(channelOptions);

        _ = Task.Run(async () =>
        {
            var exceptions = new List<Exception>();
            var writer = channel.Writer;
            var parallelOptions = new ParallelOptions
            {
                MaxDegreeOfParallelism = maxDegreeOfParallelism,
                CancellationToken = cancellationToken,
            };
            await Parallel.ForEachAsync(source, parallelOptions, async (item, ct) =>
            {
                try
                {
                    var result = await transform(item, ct);
                    await writer.WriteAsync(result, ct).ConfigureAwait(false);
                }
                catch (Exception exception)
                {
                    var parallelExecutionException = new ParallelExecutionException<TInput>(item, exception);
                    if (aggregateException)
                    {
                        exceptions.Add(parallelExecutionException);
                    }
                    else
                    {
                        writer.Complete(parallelExecutionException);
                    }
                }
            });
            if (aggregateException)
            {
                writer.Complete(exceptions.Any() ? new AggregateException(exceptions) : null);
            }
        }, cancellationToken);

        await foreach (var result in channel.Reader.ReadAllAsync(cancellationToken))
        {
            yield return result;
        }
    }
}
With this solution, you can choose whether to abort at the first encountered exception (aggregateException = false) or continue processing and throw an AggregateException (aggregateException = true) once all the source items are processed.
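A hypothetical usage sketch (GetWorkItemsAsync and TransformAsync are placeholder names):
IAsyncEnumerable<IWorkItem> workItems = GetWorkItemsAsync();
// Transform up to 10 items concurrently, streaming results as they complete,
// and throwing an AggregateException at the end if any item failed.
await foreach (var result in workItems.AsParallelAsync(
    maxDegreeOfParallelism: 10,
    (item, ct) => TransformAsync(item, ct),
    aggregateException: true))
{
    Console.WriteLine(result);
}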

How can I check whether this interval is even running?

I've set a bunch of Console.WriteLines and as far as I can tell none of them are being invoked when I run the following in .NET Fiddle.
using System;
using System.Net;
using System.Linq.Expressions;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Timers;
using System.Collections.Generic;

public class Program
{
    private static readonly object locker = new object();
    private static readonly string pageFormat = "http://www.letsrun.com/forum/forum.php?board=1&page={0}";

    public static void Main()
    {
        var client = new WebClient();

        // Queue up the requests we are going to make
        var tasks = new Queue<Task<string>>(
            Enumerable
                .Repeat(0, 50)
                .Select(i => new Task<string>(() => client.DownloadString(string.Format(pageFormat, i))))
        );

        // Create set of 5 tasks which will be the at most 5
        // requests we wait on
        var runningTasks = new HashSet<Task<string>>();
        for (int i = 0; i < 5; ++i)
        {
            runningTasks.Add(tasks.Dequeue());
        }

        var timer = new System.Timers.Timer
        {
            AutoReset = true,
            Interval = 2000
        };

        // On each tick, go through the tasks that are supposed
        // to have started running and if they have completed
        // without error then store their result and run the
        // next queued task if there is one. When we run out of
        // any more tasks to run or wait for, stop the ticks.
        timer.Elapsed += delegate
        {
            lock (locker)
            {
                foreach (var task in runningTasks)
                {
                    if (task.IsCompleted)
                    {
                        if (!task.IsFaulted)
                        {
                            Console.WriteLine("Got a document: {0}",
                                task.Result.Substring(Math.Min(30, task.Result.Length)));
                            runningTasks.Remove(task);
                            if (tasks.Any())
                            {
                                runningTasks.Add(tasks.Dequeue());
                            }
                        }
                        else
                        {
                            Console.WriteLine("Uh-oh, task faulted, apparently");
                        }
                    }
                    else if (!task.Status.Equals(TaskStatus.Running)) // task not started
                    {
                        Console.WriteLine("About to start a task.");
                        task.Start();
                    }
                    else
                    {
                        Console.WriteLine("Apparently a task is running.");
                    }
                }
                if (!runningTasks.Any())
                {
                    timer.Stop();
                }
            }
        };
    }
}
I'd also appreciate advice on how I can simplify or fix any faulty logic in this. The pattern I'm trying to implement is like:
(1) Create a queue of N tasks
(2) Create a set of M tasks, the first M dequeued items from (1)
(3) Start the M tasks running
(4) After X seconds, check for completed tasks.
(5) For any completed task, do something with the result, remove the task from the set and replace it with another task from the queue (if any are left in the queue).
(6) Repeat (4)-(5) indefinitely.
(7) If the set has no tasks left, we're done.
but perhaps there's a better way to implement it, or perhaps there's some .NET function that easily encapsulates what I'm trying to do (web requests in parallel with a specified max degree of parallelism).
There are several issues in your code, but since you are looking for a better way to implement it, you can use Parallel.For or Parallel.ForEach:
Parallel.For(0, 50, new ParallelOptions() { MaxDegreeOfParallelism = 5 }, (i) =>
{
    // surround with try-catch
    string result;
    using (var client = new WebClient())
    {
        result = client.DownloadString(string.Format(pageFormat, i));
    }
    // do something with result
    Console.WriteLine("Got a document: {0}", result.Substring(Math.Min(30, result.Length)));
});
It will execute the body in parallel (not more than 5 tasks at any given time). When one task is completed, the next one is started, until they are all done, just like you want.
UPDATE. There are several ways to throttle tasks with this approach, but the most straightforward is to just sleep:
Parallel.For(0, 50, new ParallelOptions() { MaxDegreeOfParallelism = 5 },
    (i) =>
    {
        // surround with try-catch
        var watch = Stopwatch.StartNew();
        string result;
        using (var client = new WebClient())
        {
            result = client.DownloadString(string.Format(pageFormat, i));
        }
        // do something with result
        Console.WriteLine("Got a document: {0}", result.Substring(Math.Min(30, result.Length)));
        watch.Stop();
        var sleep = 2000 - watch.ElapsedMilliseconds;
        if (sleep > 0)
            Thread.Sleep((int)sleep);
    });
This isn't a direct answer to your question. I just wanted to suggest an alternative approach.
I'd recommend that you look into using Microsoft's Reactive Framework (NuGet "System.Reactive") for doing this kind of thing.
You could then do something like this:
var query =
    Observable
        .Range(0, 50)
        .Select(i => string.Format(pageFormat, i))
        .Select(u => Observable.Using(
            () => new WebClient(),
            wc => Observable.Start(() => new { url = u, content = wc.DownloadString(u) })))
        .Merge(5);

IDisposable subscription = query.Subscribe(x =>
{
    Console.WriteLine(x.url);
    Console.WriteLine(x.content);
});
It's all async and the process can be stopped at any time by calling subscription.Dispose();

Task.WhenAny and SemaphoreSlim class

When using WaitHandle.WaitAny and the Semaphore class like the following:
var s1 = new Semaphore(1, 1);
var s2 = new Semaphore(1, 1);
var handles = new [] { s1, s2 };
var index = WaitHandle.WaitAny(handles);
handles[index].Release();
It seems guaranteed that only one semaphore is acquired by WaitHandle.WaitAny.
Is it possible to obtain similar behavior for asynchronous (async/await) code?
I cannot think of a built-in solution. I'd do it like this:
var s1 = new SemaphoreSlim(1, 1);
var s2 = new SemaphoreSlim(1, 1);
var waits = new[] { s1.WaitAsync(), s2.WaitAsync() };
var firstWait = await Task.WhenAny(waits);
// The wait is still running - perform compensation.
if (firstWait == waits[0])
    waits[1].ContinueWith(_ => s2.Release());
if (firstWait == waits[1])
    waits[0].ContinueWith(_ => s1.Release());
This acquires both semaphores but it immediately releases the one that came second. This should be equivalent. I cannot think of a negative consequence of acquiring a semaphore needlessly (except performance of course).
Here is a generalized implementation of a WaitAnyAsync method, that acquires asynchronously any of the supplied semaphores:
/// <summary>
/// Asynchronously waits to enter any of the semaphores in the specified array.
/// </summary>
public static async Task<SemaphoreSlim> WaitAnyAsync(SemaphoreSlim[] semaphores,
    CancellationToken cancellationToken = default)
{
    // Fast path
    cancellationToken.ThrowIfCancellationRequested();
    var acquired = semaphores.FirstOrDefault(x => x.Wait(0));
    if (acquired != null) return acquired;

    // Slow path
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(
        cancellationToken);
    Task<SemaphoreSlim>[] acquireTasks = semaphores
        .Select(async s => { await s.WaitAsync(cts.Token); return s; })
        .ToArray();
    Task<SemaphoreSlim> acquiredTask = await Task.WhenAny(acquireTasks);
    cts.Cancel(); // Cancel all other tasks

    var releaseOtherTasks = acquireTasks
        .Where(task => task != acquiredTask)
        .Select(async task => (await task).Release());
    try { await Task.WhenAll(releaseOtherTasks); }
    catch (OperationCanceledException) { } // Ignore
    catch
    {
        // Consider any other error (possibly SemaphoreFullException or
        // ObjectDisposedException) as a failure, and propagate the exception.
        try { (await acquiredTask).Release(); } catch { }
        throw;
    }

    try { return await acquiredTask; }
    catch (OperationCanceledException)
    {
        // Propagate an exception holding the correct CancellationToken
        cancellationToken.ThrowIfCancellationRequested();
        throw; // Should never happen
    }
}
This method becomes increasingly inefficient as the contention gets higher and higher, so I wouldn't recommend using it in hot paths.
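A hypothetical usage sketch of the WaitAnyAsync method above:
var s1 = new SemaphoreSlim(1, 1);
var s2 = new SemaphoreSlim(1, 1);
// Acquire whichever semaphore becomes available first.
SemaphoreSlim acquired = await WaitAnyAsync(new[] { s1, s2 });
try
{
    // ...work guarded by the acquired semaphore...
}
finally
{
    acquired.Release();
}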
Variation of @usr's answer which solved my slightly more general problem (after quite some time going down the rathole of trying to marry AvailableWaitHandle with Task...)
static class SemaphoreSlimExtensions
{
    public static Task AwaitButReleaseAsync(this SemaphoreSlim s) =>
        s.WaitAsync().ContinueWith(_t => s.Release(), TaskContinuationOptions.ExecuteSynchronously);

    public static bool TryTake(this SemaphoreSlim s) =>
        s.Wait(0);
}
In my use case, the await is just a trigger for synchronous logic that then walks the full set - the TryTake helper is in my case a natural way to handle the conditional acquisition of the semaphore and the processing that's contingent on that.
var sems = new[] { new SemaphoreSlim(1, 1), new SemaphoreSlim(1, 1) };
await Task.WhenAny(from s in sems select s.AwaitButReleaseAsync());
Putting it here as I believe it to be clean, clear and relatively efficient but would be happy to see improvements on it

How to constrain concurrency the right way in Rx.NET

Please, observe the following code snippet:
var result = await GetSource(1000).SelectMany(s => getResultAsync(s).ToObservable()).ToList();
The problem with this code is that getResultAsync runs concurrently in an unconstrained fashion, which may not be what we want in certain cases. Suppose I want to restrict its concurrency to at most 10 concurrent invocations. What is the Rx.NET way to do it?
I am enclosing a simple console application that demonstrates the subject and my lame solution of the described problem.
There is a bit extra code, like the Stats class and the artificial random sleeps. They are there to ensure I truly get concurrent execution and can reliably compute the max concurrency reached during the process.
The method RunUnconstrained demonstrates the naive, unconstrained run. The method RunConstrained shows my solution, which is not very elegant. Ideally, I would like to ease constraining the concurrency by simply applying a dedicated Rx operator to the Monad. Of course, without sacrificing the performance.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Reactive.Linq;
using System.Reactive.Threading.Tasks;
using System.Threading;
using System.Threading.Tasks;

namespace RxConstrainedConcurrency
{
    class Program
    {
        public class Stats
        {
            public int MaxConcurrentCount;
            public int CurConcurrentCount;
            public readonly object MaxConcurrentCountGuard = new object();
        }

        static void Main()
        {
            RunUnconstrained().GetAwaiter().GetResult();
            RunConstrained().GetAwaiter().GetResult();
        }

        static async Task RunUnconstrained()
        {
            await Run(AsyncOp);
        }

        static async Task RunConstrained()
        {
            using (var sem = new SemaphoreSlim(10))
            {
                await Run(async (s, pause, stats) =>
                {
                    // ReSharper disable AccessToDisposedClosure
                    await sem.WaitAsync();
                    try
                    {
                        return await AsyncOp(s, pause, stats);
                    }
                    finally
                    {
                        sem.Release();
                    }
                    // ReSharper restore AccessToDisposedClosure
                });
            }
        }

        static async Task Run(Func<string, int, Stats, Task<int>> getResultAsync)
        {
            var stats = new Stats();
            var rnd = new Random(0x1234);
            var result = await GetSource(1000).SelectMany(s => getResultAsync(s, rnd.Next(30), stats).ToObservable()).ToList();
            Debug.Assert(stats.CurConcurrentCount == 0);
            Debug.Assert(result.Count == 1000);
            Debug.Assert(!result.Contains(0));
            Debug.WriteLine("Max concurrency = " + stats.MaxConcurrentCount);
        }

        static IObservable<string> GetSource(int count)
        {
            return Enumerable.Range(1, count).Select(i => i.ToString()).ToObservable();
        }

        static Task<int> AsyncOp(string s, int pause, Stats stats)
        {
            return Task.Run(() =>
            {
                int cur = Interlocked.Increment(ref stats.CurConcurrentCount);
                if (stats.MaxConcurrentCount < cur)
                {
                    lock (stats.MaxConcurrentCountGuard)
                    {
                        if (stats.MaxConcurrentCount < cur)
                        {
                            stats.MaxConcurrentCount = cur;
                        }
                    }
                }
                try
                {
                    Thread.Sleep(pause);
                    return int.Parse(s);
                }
                finally
                {
                    Interlocked.Decrement(ref stats.CurConcurrentCount);
                }
            });
        }
    }
}
You can do this in Rx using the overload of Merge that constrains the number of concurrent subscriptions to inner observables.
This form of Merge is applied to a stream of streams.
Ordinarily, using SelectMany to invoke an async task from an event does two jobs: it projects each event into an observable stream whose single event is the result, and it flattens all the resulting streams together.
To use Merge we must use a regular Select to project each event into the invocation of an async task, (thus creating a stream of streams), and use Merge to flatten the result. It will do this in a constrained way by only subscribing to a supplied fixed number of the inner streams at any point in time.
We must be careful to invoke each asynchronous task only upon subscription to the wrapping inner stream. Conversion of an async task to an observable with ToObservable() will actually call the async task immediately, rather than on subscription, so we must defer the evaluation until subscription using Observable.Defer.
Here's an example putting all these steps together:
void Main()
{
    var xs = Observable.Range(0, 10); // source events

    // "Double" here is our async operation to be constrained,
    // in this case to 3 concurrent invocations
    xs.Select(x =>
        Observable.Defer(() => Double(x).ToObservable())).Merge(3)
        .Subscribe(Console.WriteLine,
            () => Console.WriteLine("Max: " + MaxConcurrent));
}

private static int Concurrent;
private static int MaxConcurrent;
private static readonly object gate = new Object();

public async Task<int> Double(int x)
{
    var concurrent = Interlocked.Increment(ref Concurrent);
    lock (gate)
    {
        MaxConcurrent = Math.Max(concurrent, MaxConcurrent);
    }
    await Task.Delay(TimeSpan.FromSeconds(1));
    Interlocked.Decrement(ref Concurrent);
    return x * 2;
}
The maximum concurrency output here will be "3". Remove the Merge to go "unconstrained" and you'll get "10" instead.
Another (equivalent) way of getting the Defer effect that reads a bit nicer is to use FromAsync instead of Defer + ToObservable:
xs.Select(x => Observable.FromAsync(() => Double(x))).Merge(3)
