Task Parallel Library - Custom Task Schedulers - C#

I have a requirement to fire off web service requests to an online api and I thought that Parallel Extensions would be a good fit for my needs.
The web service in question is designed to be called repeatedly, but has a mechanism that charges you if you go over a certain number of calls per second. I obviously want to minimize my charges, so I was wondering if anyone has seen a TaskScheduler that can cope with the following requirements:
Limit the number of tasks scheduled per timespan. I guess if the number of requests exceeded this limit then it would need to throw away the task, or possibly block? (to stop a backlog of tasks)
Detect if the same request is already in the scheduler to be executed but hasn't been yet, and if so not queue the second task but return the first instead.
Do people feel that these are the sorts of responsibilities a task scheduler should be dealing with, or am I barking up the wrong tree? If you have alternatives, I am open to suggestions.

I agree with others that TPL Dataflow sounds like a good solution for this.
To limit the processing, you could create a TransformBlock that doesn't actually transform the data in any way; it just delays items that arrive too soon after the previous one:
static IPropagatorBlock<T, T> CreateDelayBlock<T>(TimeSpan delay)
{
    DateTime lastItem = DateTime.MinValue;
    return new TransformBlock<T, T>(
        async x =>
        {
            // Wait until at least `delay` has passed since the previous item.
            var waitTime = lastItem + delay - DateTime.UtcNow;
            if (waitTime > TimeSpan.Zero)
                await Task.Delay(waitTime);
            lastItem = DateTime.UtcNow;
            return x;
        },
        // BoundedCapacity = 1 makes upstream senders wait instead of queueing up.
        new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
}
Then create a method that produces the data (for example integers starting from 0):
static async Task Producer(ITargetBlock<int> target)
{
    int i = 0;
    while (await target.SendAsync(i))
        i++;
}
It's written asynchronously, so that if the target block isn't able to process the items right now, it will wait.
Then write a consumer method:
static void Consumer(int i)
{
    Console.WriteLine(i);
}
And finally, link it all together and start it up:
var delayBlock = CreateDelayBlock<int>(TimeSpan.FromMilliseconds(500));
var consumerBlock = new ActionBlock<int>(
    (Action<int>)Consumer,
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded });
delayBlock.LinkTo(consumerBlock, new DataflowLinkOptions { PropagateCompletion = true });
Task.WaitAll(Producer(delayBlock), consumerBlock.Completion);
Here, delayBlock will accept at most one item every 500 ms and the Consumer() method can run multiple times in parallel. To finish processing, call delayBlock.Complete().
If you want to add some caching per your #2, you could create another TransformBlock, do the work there, and link it to the other blocks.
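For example, a simplistic dedup block could look like the sketch below. This is just a sketch under assumptions, not a drop-in solution: it drops duplicate requests that are still in flight (rather than returning the first task's result, as your #2 asks for), and it assumes the request type has value equality.
static IPropagatorBlock<TReq, TReq> CreateDedupBlock<TReq>()
{
    // Requests that have been queued but not yet completed downstream.
    var pending = new ConcurrentDictionary<TReq, bool>();
    // A real implementation would also remove completed keys from
    // `pending`, e.g. from the consuming block.
    return new TransformManyBlock<TReq, TReq>(x =>
        pending.TryAdd(x, true)
            ? new[] { x }                  // first occurrence: pass through
            : Enumerable.Empty<TReq>());   // duplicate in flight: drop it
}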

Honestly I would work at a higher level of abstraction and use the TPL Dataflow API for this. The only catch is you would need to write a custom block that will throttle the requests at the rate at which you need because, by default, blocks are "greedy" and will just process as fast as possible. The implementation would be something like this:
1. Start with a BufferBlock<T>, which is the logical block that you would post to.
2. Link the BufferBlock<T> to a custom block which has the knowledge of requests/sec and throttling logic.
3. Link the custom block from 2 to your ActionBlock<T> (see the sketch below).
I don't have the time to write the custom block for #2 right this second, but I will check back later and try to fill in an implementation for you if you haven't already figured it out.
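In the meantime, here is a rough wiring sketch of steps 1-3, reusing CreateDelayBlock from the answer above as a stand-in for the custom throttling block (CallWebServiceAsync is a hypothetical method):
var buffer = new BufferBlock<string>();
var throttle = CreateDelayBlock<string>(TimeSpan.FromMilliseconds(200));
var worker = new ActionBlock<string>(url => CallWebServiceAsync(url));

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
buffer.LinkTo(throttle, linkOptions);   // step 1 -> step 2
throttle.LinkTo(worker, linkOptions);   // step 2 -> step 3

// Post to the logical front block.
buffer.Post("https://example.com/api?item=1");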

I haven't used RX much, but AFAICT the Observable.Window method would work fine for this.
http://msdn.microsoft.com/en-us/library/system.reactive.linq.observable.window(VS.103).aspx
It would seem to be a better fit than Throttle, which appears to throw elements away; I'm guessing that is not what you want.
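A rough sketch of what that could look like (requests, GetRequests and CallService are assumed names; note that the Take(5) still drops the overflow within each window, so a variant that queues instead would need more work):
IObservable<Request> requests = GetRequests();

requests
    .Window(TimeSpan.FromSeconds(1))         // slice the stream into 1 s windows
    .SelectMany(window => window.Take(5))    // let at most 5 items through per window
    .Subscribe(r => CallService(r));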

If you need to throttle by time, you should check out Quartz.net. It can facilitate consistent polling. If you care about all requests, you should consider using some sort of queueing mechanism. MSMQ is probably the right solution but there are many specific implementations if you want to go bigger and use an ESB like NServiceBus or RabbitMQ.
Update:
In that case, TPL Dataflow is your preferred solution if you can leverage the CTP; a throttled BufferBlock does the job.
This example comes from the documentation provided by Microsoft:
// Hand-off through a bounded BufferBlock<T>
private static BufferBlock<int> m_buffer = new BufferBlock<int>(
    new DataflowBlockOptions { BoundedCapacity = 10 });

// Producer (returns Task, rather than async void, so it can be awaited below)
private static async Task Producer()
{
    while (true)
    {
        // SendAsync waits while the buffer is full, throttling the producer.
        // Produce() is a placeholder from the docs sample.
        await m_buffer.SendAsync(Produce());
    }
}

// Consumer
private static async Task Consumer()
{
    while (true)
    {
        // Process(...) is a placeholder from the docs sample.
        Process(await m_buffer.ReceiveAsync());
    }
}

// Start the Producer and Consumer
private static async Task Run()
{
    await Task.WhenAll(Producer(), Consumer());
}
Update:
Check out RX's Observable.Throttle.
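For what it's worth, a minimal sketch (keeping in mind the caveat from the answer above: Throttle is a debounce, so it drops items that arrive within the quiet period; GetRequests and CallService are assumed names):
IObservable<Request> requests = GetRequests();

requests
    .Throttle(TimeSpan.FromMilliseconds(500))  // emit only after 500 ms of silence
    .Subscribe(r => CallService(r));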

Related

Is there a way to limit the number of parallel Tasks globally in an ASP.NET Web API application?

I have an ASP.NET 5 Web API application which contains a method that takes objects from a List<T> and makes HTTP requests to a server, 5 at a time, until all requests have completed. This is accomplished using a SemaphoreSlim, a List<Task>(), and awaiting on Task.WhenAll(), similar to the example snippet below:
public async Task<ResponseObj[]> DoStuff(List<Input> inputData)
{
    const int maxDegreeOfParallelism = 5;
    var tasks = new List<Task<ResponseObj>>();
    using var throttler = new SemaphoreSlim(maxDegreeOfParallelism);
    foreach (var input in inputData)
    {
        tasks.Add(ExecHttpRequestAsync(input, throttler));
    }
    ResponseObj[] responses = await Task.WhenAll(tasks).ConfigureAwait(false);
    return responses;
}

private async Task<ResponseObj> ExecHttpRequestAsync(Input input, SemaphoreSlim throttler)
{
    await throttler.WaitAsync().ConfigureAwait(false);
    try
    {
        using var request = new HttpRequestMessage(HttpMethod.Post, "https://foo.bar/api");
        request.Content = new StringContent(
            JsonConvert.SerializeObject(input), Encoding.UTF8, "application/json");
        var response = await HttpClientWrapper.SendAsync(request).ConfigureAwait(false);
        var responseBody = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
        var responseObject = JsonConvert.DeserializeObject<ResponseObj>(responseBody);
        return responseObject;
    }
    finally
    {
        throttler.Release();
    }
}
This works well; however, I am looking to limit the total number of Tasks that are being executed in parallel globally throughout the application, so as to allow scaling up of this application. For example, if 50 requests to my API came in at the same time, this would start at most 250 tasks running in parallel. If I wanted to limit the total number of Tasks that are being executed at any given time to, say, 100, is it possible to accomplish this? Perhaps via a Queue<T>? Would the framework automatically prevent too many tasks from being executed? Or am I approaching this problem in the wrong way, and would I instead need to queue the incoming requests to my application?
I'm going to assume the code is fixed, i.e., Task.Run is removed and the WaitAsync / Release are adjusted to throttle the HTTP calls instead of List<T>.Add.
I am looking to limit the total number of Tasks that are being executed in parallel globally throughout the application, so as to allow scaling up of this application.
This does not make sense to me. Limiting your tasks limits your scaling up.
For example, if 50 requests to my API came in at the same time, this would start at most 250 tasks running parallel.
Concurrently, sure, but not in parallel. It's important to note that these aren't 250 threads, and that they're not 250 CPU-bound operations waiting for free thread pool threads to run on, either. These are Promise Tasks, not Delegate Tasks, so they don't "run" on a thread at all. It's just 250 objects in memory.
If I wanted to limit the total number of Tasks that are being executed at any given time to say 100, is it possible to accomplish this?
Since (these kinds of) tasks are just in-memory objects, there should be no need to limit them, any more than you would need to limit the number of strings or List<T>s. Apply throttling where you do need it; e.g., number of HTTP calls done simultaneously per request. Or per host.
Would the framework automatically prevent too many tasks from being executed?
The framework has nothing like this built-in.
Perhaps via a Queue? Or am I approaching this problem in the wrong way, and would I instead need to Queue the incoming requests to my application?
There's already a queue of requests. It's handled by IIS (or whatever your host is). If your server gets too busy (or gets busy very suddenly), the requests will queue up without you having to do anything.
If I wanted to limit the total number of Tasks that are being executed at any given time to say 100, is it possible to accomplish this?
What you are looking for is to limit the MaximumConcurrencyLevel of what's called the task scheduler. You can create your own task scheduler that regulates the MaximumConcurrencyLevel of the tasks it manages. I would recommend implementing a queue-like object that tracks incoming requests and currently working requests, and waits for the current requests to finish before consuming more. The below information may still be relevant.
The task scheduler is in charge of how Tasks are prioritized, and in charge of tracking the tasks and ensuring that their work is completed, at least eventually.
The way it does this is actually very similar to what you mentioned: in general the task scheduler handles tasks in a FIFO (first in, first out) model, very similar to how a ConcurrentQueue<T> works (at least starting in .NET 4).
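For illustration, here is a condensed sketch of such a scheduler, modeled on the LimitedConcurrencyLevelTaskScheduler from the Microsoft parallel samples (condensed, not production-hardened):
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class LimitedConcurrencyTaskScheduler : TaskScheduler
{
    private readonly LinkedList<Task> _tasks = new LinkedList<Task>();
    private readonly int _maxDegreeOfParallelism;
    private int _delegatesQueuedOrRunning;

    public LimitedConcurrencyTaskScheduler(int maxDegreeOfParallelism)
    {
        _maxDegreeOfParallelism = maxDegreeOfParallelism;
    }

    public override int MaximumConcurrencyLevel => _maxDegreeOfParallelism;

    protected override void QueueTask(Task task)
    {
        lock (_tasks)
        {
            _tasks.AddLast(task);
            if (_delegatesQueuedOrRunning < _maxDegreeOfParallelism)
            {
                _delegatesQueuedOrRunning++;
                ThreadPool.UnsafeQueueUserWorkItem(_ => ProcessTasks(), null);
            }
        }
    }

    private void ProcessTasks()
    {
        while (true)
        {
            Task item;
            lock (_tasks)
            {
                if (_tasks.Count == 0) { _delegatesQueuedOrRunning--; return; }
                item = _tasks.First.Value;
                _tasks.RemoveFirst();
            }
            TryExecuteTask(item); // runs the task on this thread-pool thread
        }
    }

    // Inlining is only allowed for tasks that were never queued here.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        => !taskWasPreviouslyQueued && TryExecuteTask(task);

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_tasks) return _tasks.ToArray();
    }
}
Usage would be something like new TaskFactory(new LimitedConcurrencyTaskScheduler(100)) and then starting tasks through that factory.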
Would the framework automatically prevent too many tasks from being executed?
By default the TaskScheduler that is created with most applications appears to default to a MaximumConcurrencyLevel of int.MaxValue. So theoretically yes.
The fact that there is practically no limit to the number of tasks (at least with the default TaskScheduler) might not be that big of a deal for your scenario.
Tasks are separated into two types, at least when it comes to how they are assigned to the available thread pools. They're separated into local and global queues.
Without going too far into detail: if a task creates other tasks, those new tasks are part of the parent task's queue (a local queue). Tasks spawned by a parent task are limited to the parent's thread pool (unless the task scheduler takes it upon itself to move queues around).
If a task isn't created by another task, it's a top-level task and is placed into the global queue. These would normally be assigned their own thread (if available); if one isn't available they're treated in a FIFO model, as mentioned above, until their work can be completed.
This is important because although you can limit the amount of concurrency with the TaskScheduler, it may not actually matter, for example if you have a top-level task that's marked as long-running and is in charge of processing your incoming requests. This would be helpful since all the tasks spawned by this top-level task will be part of that task's local queue and therefore won't spam all your available threads in your thread pool.
When you have a bunch of items and you want to process them asynchronously and with limited concurrency, the SemaphoreSlim is a great tool for this job. There are two ways that it can be used. One way is to create all the tasks immediately and have each task acquire the semaphore before doing its main work, and the other is to throttle the creation of the tasks while the source is enumerated. The first technique is eager, and so it consumes more RAM, but it's more maintainable because it is easier to understand and implement. The second technique is lazy, and it's more efficient if you have millions of items to process.
The technique used in your sample code is the first (eager) one.
Here is an example of using two SemaphoreSlims in order to impose two maximum concurrency policies, one per request and one globally. First the eager approach:
private const int maxConcurrencyGlobal = 100;
private static SemaphoreSlim globalThrottler
    = new SemaphoreSlim(maxConcurrencyGlobal, maxConcurrencyGlobal);

public async Task<ResponseObj[]> DoStuffAsync(IEnumerable<Input> inputData)
{
    const int maxConcurrencyPerRequest = 5;
    var perRequestThrottler
        = new SemaphoreSlim(maxConcurrencyPerRequest, maxConcurrencyPerRequest);
    Task<ResponseObj>[] tasks = inputData.Select(async input =>
    {
        await perRequestThrottler.WaitAsync();
        try
        {
            await globalThrottler.WaitAsync();
            try
            {
                return await ExecHttpRequestAsync(input);
            }
            finally { globalThrottler.Release(); }
        }
        finally { perRequestThrottler.Release(); }
    }).ToArray();
    return await Task.WhenAll(tasks);
}
The Select LINQ operator provides an easy and intuitive way to project items to tasks.
And here is the lazy approach for doing exactly the same thing:
private const int maxConcurrencyGlobal = 100;
private static SemaphoreSlim globalThrottler
    = new SemaphoreSlim(maxConcurrencyGlobal, maxConcurrencyGlobal);

public async Task<ResponseObj[]> DoStuffAsync(IEnumerable<Input> inputData)
{
    const int maxConcurrencyPerRequest = 5;
    var perRequestThrottler
        = new SemaphoreSlim(maxConcurrencyPerRequest, maxConcurrencyPerRequest);
    var tasks = new List<Task<ResponseObj>>();
    foreach (var input in inputData)
    {
        await perRequestThrottler.WaitAsync();
        await globalThrottler.WaitAsync();
        Task<ResponseObj> task = Run(async () =>
        {
            try
            {
                return await ExecHttpRequestAsync(input);
            }
            finally
            {
                try { globalThrottler.Release(); }
                finally { perRequestThrottler.Release(); }
            }
        });
        tasks.Add(task);
    }
    return await Task.WhenAll(tasks);

    static async Task<T> Run<T>(Func<Task<T>> action) => await action();
}
This implementation assumes that the await globalThrottler.WaitAsync() will never throw, which is a given according to the documentation. This will no longer be the case if you decide later to add support for cancellation and pass a CancellationToken to the method. In that case you would need one more try/finally wrapper around the task-creation logic. The first (eager) approach could be enhanced with cancellation support without such considerations; its existing try/finally infrastructure is already sufficient.
It is also important that the internal helper Run method is implemented with async/await. Eliding the async/await would be an easy mistake to make, because in that case any exception thrown synchronously by the ExecHttpRequestAsync method would be rethrown immediately, and it would not be encapsulated in a Task<ResponseObj>. Then the task returned by the DoStuffAsync method would fail without releasing the acquired semaphores, and also without awaiting the completion of the already started operations. That's another argument for preferring the eager approach. The lazy approach has too many gotchas to watch for.
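To illustrate the difference (using the names from the lazy example; the elided variant is shown only as a counter-example):
// Correct: a synchronous throw inside action() is captured in the returned task.
static async Task<T> Run<T>(Func<Task<T>> action) => await action();

// Elided (buggy here): a synchronous throw escapes to the caller immediately,
// so the finally blocks that release the semaphores never run for that item.
static Task<T> RunElided<T>(Func<Task<T>> action) => action();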

Parallel queued background tasks with hosted services in ASP.NET Core

I'm doing some tests with the new Background tasks with hosted services in ASP.NET Core feature present in version 2.1, more specifically with Queued background tasks, and a question about parallelism came to my mind.
I'm currently following the tutorial provided by Microsoft strictly, and when trying to simulate a workload with several requests being made from the same user to enqueue tasks, I noticed that all workItems are executed in order, so there is no parallelism.
My question is: is this behavior expected? And if so, in order to make the request execution parallel, is it OK to fire and forget instead of awaiting each workItem to complete?
I've searched for a couple of days about this specific scenario without luck, so if anyone has any guide or examples to provide, I would be really glad.
Edit: The code from the tutorial is quite long, so the link for it is https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-2.1#queued-background-tasks
The method which executes the work item is this:
public class QueuedHostedService : IHostedService
{
    ...

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Queued Hosted Service is starting.");
        _backgroundTask = Task.Run(BackgroundProcessing);
        return Task.CompletedTask;
    }

    private async Task BackgroundProcessing()
    {
        while (!_shutdown.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(_shutdown.Token);
            try
            {
                await workItem(_shutdown.Token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, $"Error occurred executing {nameof(workItem)}.");
            }
        }
    }

    ...
}
The main point of the question is to know if anyone out there could share the knowledge of how to use this specific technology to execute several work items at the same time, since a server can handle this workload.
I tried the fire-and-forget method when executing the work item and it worked the way I intended it to, several tasks executing in parallel at the same time; I'm just not sure if this is an OK practice, or if there is a better or proper way of handling this situation.
The code you posted executes the queued items in order, one at a time, but in parallel to the web server. An IHostedService by definition runs in parallel to the web server. This article provides a good overview.
Consider the following example:
_logger.LogInformation("Before()");
for (var i = 0; i < 10; i++)
{
    var j = i;
    _backgroundTaskQueue.QueueBackgroundWorkItem(async token =>
    {
        var random = new Random();
        await Task.Delay(random.Next(50, 1000), token);
        _logger.LogInformation($"Event {j}");
    });
}
_logger.LogInformation("After()");
We add ten tasks which will wait a random amount of time. If you put the code in a controller method, the events will still be logged even after the controller method returns. But each item will be executed in order, so the output looks like this:
Event 0
Event 1
...
Event 8
Event 9
In order to introduce parallelism we have to change the implementation of the BackgroundProcessing method in the QueuedHostedService.
Here is an example implementation that allows two Tasks to be executed in parallel:
private async Task BackgroundProcessing()
{
    // Allow at most two work items to run at the same time.
    var semaphore = new SemaphoreSlim(2);

    void HandleTask(Task task)
    {
        semaphore.Release();
    }

    while (!_shutdown.IsCancellationRequested)
    {
        await semaphore.WaitAsync();
        var item = await TaskQueue.DequeueAsync(_shutdown.Token);
        var task = item(_shutdown.Token);
        task.ContinueWith(HandleTask);
    }
}
Using this implementation, the events are no longer logged in order, as each task waits a random amount of time. So the output could be:
Event 0
Event 1
Event 2
Event 3
Event 4
Event 5
Event 7
Event 6
Event 9
Event 8
edit: Is it ok in a production environment to execute code this way, without awaiting it?
I think the reason why most devs have a problem with fire-and-forget is that it is often misused.
When you execute a Task using fire-and-forget you are basically saying that you do not care about the result of this function. You do not care if it exits successfully, if it is canceled or if it threw an exception. But for most Tasks you do care about the result.
You do want to make sure a database write went through
You do want to make sure a Log entry is written to the hard drive
You do want to make sure a network packet is sent to the receiver
And if you care about the result of the Task then fire-and-forget is the wrong method.
That's it in my opinion. The hard part is finding a Task where you really do not care about the result.
You can add the QueuedHostedService once or twice for every CPU in the machine.
So something like this:
for (var i = 0; i < Environment.ProcessorCount; ++i)
{
    services.AddHostedService<QueuedHostedService>();
}
You can hide this in an extension method and make the concurrency level configurable to keep things clean.
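For example (a hypothetical sketch; AddQueuedHostedServices is not a real API, and note that newer versions of AddHostedService<T> de-duplicate registrations of the same implementation type, which is why the interface is registered directly here):
public static class QueuedHostedServiceExtensions
{
    public static IServiceCollection AddQueuedHostedServices(
        this IServiceCollection services, int workerCount)
    {
        for (var i = 0; i < workerCount; i++)
        {
            // Register the interface directly so multiple instances are kept.
            services.AddSingleton<IHostedService, QueuedHostedService>();
        }
        return services;
    }
}
Then in Startup: services.AddQueuedHostedServices(Environment.ProcessorCount);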

Optimizing for fire & forget using async/await and tasks

I have about 5 million items to update. I don't really care about the response (A response would be nice to have so I can log it, but I don't want a response if that will cost me time.) Having said that, is this code optimized to run as fast as possible? If there are 5 million items, would I run the risk of getting any task cancelled or timeout errors? I get about 1 or 2 responses back every second.
var tasks = items.Select(async item =>
{
    await Update(CreateUrl(item));
}).ToList();

if (tasks.Any())
{
    await Task.WhenAll(tasks);
}

private async Task<HttpResponseMessage> Update(string url)
{
    var client = new HttpClient();
    // HttpClient has no SendAsync(string) overload; GetAsync takes a URL.
    var response = await client.GetAsync(url).ConfigureAwait(false);
    // log response.
    return response;
}
UPDATE:
I am actually getting TaskCanceledExceptions. Did my system run out of threads? What could I do to avoid this?
Your method will kick off all tasks at the same time, which may not be what you want. There wouldn't be any threads involved, because with async operations there is no thread, but there may be limits on the number of concurrent connections.
There may be better tools to do this, but if you want to use async/await one option is to use Stephen Toub's ForEachAsync as documented in this article. It allows you to control how many simultaneous operations you want to execute, so you don't overrun your connection limit.
Here it is from the article:
public static class Extensions
{
    public static async Task ExecuteInPartition<T>(IEnumerator<T> partition, Func<T, Task> body)
    {
        using (partition)
        {
            while (partition.MoveNext())
                await body(partition.Current);
        }
    }

    public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
    {
        return Task.WhenAll(
            from partition in Partitioner.Create(source).GetPartitions(dop)
            select ExecuteInPartition(partition, body));
    }
}
Usage:
public async Task UpdateAll()
{
    // Allow for 100 concurrent Updates
    await items.ForEachAsync(100, async t => await Update(t));
}
A much better approach would be to use TPL Dataflow's ActionBlock with MaxDegreeOfParallelism and a single HttpClient:
Task UpdateAll(IEnumerable<Item> items)
{
    var block = new ActionBlock<Item>(
        item => UpdateAsync(CreateUrl(item)),
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 1000 });

    foreach (var item in items)
    {
        block.Post(item);
    }

    block.Complete();
    return block.Completion;
}

async Task UpdateAsync(string url)
{
    // HttpClient has no SendAsync(string) overload; GetAsync takes a URL.
    var response = await _client.GetAsync(url).ConfigureAwait(false);
    Console.WriteLine(response.StatusCode);
}
A single HttpClient can be used concurrently for multiple requests, so it's much better to create and dispose a single instance instead of 5 million.
There are numerous problems with firing so many requests at the same time: the machine's network stack, the target web site, timeouts and so forth. The ActionBlock caps that number with MaxDegreeOfParallelism (which you should test and optimize for your specific case). It's important to note that TPL may choose a lower degree of parallelism when it deems it appropriate.
When you have a single async call at the end of an async method or lambda expression, it's better for performance to remove the redundant async/await and just return the task (i.e. return block.Completion;).
Complete will notify the ActionBlock to not accept any more items, but finish processing items it already has. When it's done the Completion task will be done so you can await it.
I suspect you are suffering from outgoing connection management preventing large numbers of simultaneous connections to the same domain. The answers given in this extensive Q+A might give you some avenues to investigate.
What is limiting the # of simultaneous connections my ASP.NET application can make to a web service?
In terms of your code structure, I'd personally try to use a dynamic pool of connections. You know that you can't actually get 5 million connections simultaneously, so attempting it will just fail to work; you may as well deal with a reasonable, configured limit of (for instance) 20 connections and use them in a pool. In this way you can tune up or down.
Alternatively, you could investigate HTTP pipelining (which I've not used), which is intended specifically for the job you are doing (batching up HTTP requests). http://en.wikipedia.org/wiki/HTTP_pipelining

Polling the right way?

I am a software/hardware engineer with quite some experience in C and embedded technologies. Currently I am busy writing some applications in C# (.NET) that use hardware for data acquisition. Now the following, for me burning, question:
For example: I have a machine that has an end switch for detecting the final position of an axis. I am using a USB data acquisition module to read the data. Currently I am using a Thread to continuously read the port status.
There is no interrupt functionality on this device.
My question: Is this the right way? Should I use timers, threads or Tasks? I know polling is something that most of you guys "hate", but any suggestion is welcome!
IMO, this heavily depends on your exact environment, but first off: you should not use raw Threads anymore in most cases. Tasks are the more convenient and more powerful solution for that.
Low polling frequency: Timer + polling in the Tick event.
A timer is easy to handle and stop; there's no need to worry about threads/tasks running in the background, but the handling happens on the main thread.
Medium polling frequency: Task + await Task.Delay(delay).
await Task.Delay(delay) does not block a thread-pool thread, but because of the system timer's resolution the minimum delay is ~15 ms.
High polling frequency: Task + Thread.Sleep(delay).
Usable at 1 ms delays; we actually do this to poll our USB measurement device.
The high-frequency variant could be implemented as follows:
int delay = 1;
var cancellationTokenSource = new CancellationTokenSource();
var token = cancellationTokenSource.Token;
var listener = Task.Factory.StartNew(() =>
{
    while (true)
    {
        // poll hardware
        Thread.Sleep(delay);
        if (token.IsCancellationRequested)
            break;
    }
    // cleanup, e.g. close connection
}, token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
In most cases you can just use Task.Run(() => DoWork(), token), but there is no overload to supply the TaskCreationOptions.LongRunning option which tells the task-scheduler to not use a normal thread-pool thread.
But as you can see, Tasks are easier to handle (and awaitable, though that does not apply here). Especially the "stopping" is just calling cancellationTokenSource.Cancel() in this implementation, from anywhere in the code.
You can even share this token in multiple actions and stop them at once. Also, not yet started tasks are not started when the token is cancelled.
You can also attach another action to a task to run after one task:
listener.ContinueWith(t => ShutDown(t));
This is then executed after the listener completes, and you can do cleanup (t.Exception contains the exception of the task's action if it was not successful).
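For completeness, a sketch of the medium-frequency option from above (PollHardware is an assumed method):
async Task PollLoopAsync(TimeSpan interval, CancellationToken token)
{
    try
    {
        while (!token.IsCancellationRequested)
        {
            PollHardware();
            // Does not block a thread-pool thread between polls.
            await Task.Delay(interval, token);
        }
    }
    catch (TaskCanceledException)
    {
        // Expected when the token is cancelled during the delay.
    }
    // cleanup, e.g. close connection
}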
IMO polling cannot be avoided.
What you can do is create a module with its own independent thread/Task that polls the port regularly. Based on changes in the data, this module raises an event which the consuming applications handle.
Maybe something like this:
public async Task Poll(Func<bool> condition, TimeSpan timeout, string message = null)
{
    // Pattern borrowed from:
    // https://github.com/dotnet/corefx/blob/3b24c535852d19274362ad3dbc75e932b7d41766/src/Common/src/CoreLib/System/Threading/ReaderWriterLockSlim.cs#L233
    var timeoutTracker = new TimeoutTracker(timeout);
    while (!condition())
    {
        await Task.Yield();
        if (timeoutTracker.IsExpired)
        {
            if (message != null) throw new TimeoutException(message);
            else throw new TimeoutException();
        }
    }
}
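TimeoutTracker is not defined in the snippet; it comes from the linked CoreLib source. A minimal stand-in, just so the sketch compiles, could be:
struct TimeoutTracker
{
    private readonly DateTime _deadline;
    public TimeoutTracker(TimeSpan timeout) => _deadline = DateTime.UtcNow + timeout;
    public bool IsExpired => DateTime.UtcNow >= _deadline;
}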
Look into SpinWait, or into the Task.Delay internals, as well.
I've been thinking about this, and what you could probably do is build an abstraction layer utilizing Task and Func<T>/Action, with the polling service taking in the delegate and polling interval as arguments. This would keep the implementation of either piece of functionality separate while leaving them open to injection into the polling service.
So for example you'd have something like this serve as your polling class
public class PollingService
{
    public async Task Poll(Func<bool> func, int interval, string exceptionMessage)
    {
        while (func.Invoke())
        {
            await Task.Delay(interval);
        }
        // PollingException is a custom exception type.
        throw new PollingException(exceptionMessage);
    }

    public async Task Poll<T>(Func<T, bool> func, T arg, int interval, string exceptionMessage)
    {
        while (func.Invoke(arg))
        {
            await Task.Delay(interval);
        }
        throw new PollingException(exceptionMessage);
    }
}

C# queueing dependant tasks to be processed by a thread pool

I want to queue dependant tasks across several flows that need to be processed in order (in each flow). The flows can be processed in parallel.
To be specific, let's say I need two queues and I want the tasks in each queue to be processed in order. Here is sample pseudocode to illustrate the desired behavior:
Queue1_WorkItem wi1a=...;
enqueue wi1a;
... time passes ...
Queue1_WorkItem wi1b=...;
enqueue wi1b; // This must be processed after processing of item wi1a is complete
... time passes ...
Queue2_WorkItem wi2a=...;
enqueue wi2a; // This can be processed concurrently with the wi1a/wi1b
... time passes ...
Queue1_WorkItem wi1c=...;
enqueue wi1c; // This must be processed after processing of item wi1b is complete
Here is a diagram with arrows illustrating dependencies between work items:
The question is how do I do this using C# 4.0/.NET 4.0? Right now I have two worker threads, one per queue and I use a BlockingCollection<> for each queue. I would like to instead leverage the .NET thread pool and have worker threads process items concurrently (across flows), but serially within a flow. In other words I would like to be able to indicate that for example wi1b depends on completion of wi1a, without having to track completion and remember wi1a, when wi1b arrives. In other words, I just want to say, "I want to submit a work item for queue1, which is to be processed serially with other items I have already submitted for queue1, but possibly in parallel with work items submitted to other queues".
I hope this description made sense. If not please feel free to ask questions in the comments and I will update this question accordingly.
Thanks for reading.
Update:
To summarize "flawed" solutions so far, here are the solutions from the answers section that I cannot use and the reason(s) why I cannot use them:
TPL tasks require specifying the antecedent task for a ContinueWith(). I do not want to maintain knowledge of each queue's antecedent task when submitting a new task.
TDF ActionBlocks looked promising, but it would appear that items posted to an ActionBlock are processed in parallel. I need for the items for a particular queue to be processed serially.
Update 2:
RE: ActionBlocks
It would appear that setting the MaxDegreeOfParallelism option to one prevents parallel processing of work items submitted to a single ActionBlock. Therefore it seems that having an ActionBlock per queue solves my problem with the only disadvantage being that this requires the installation and deployment of the TDF library from Microsoft and I was hoping for a pure .NET 4.0 solution. So far, this is the candidate accepted answer, unless someone can figure out a way to do this with a pure .NET 4.0 solution that doesn't degenerate to a worker thread per queue (which I am already using).
I understand you have many queues and don't want to tie up threads. You could have an ActionBlock per queue. The ActionBlock automates most of what you need: It processes work items serially, and only starts a Task when work is pending. When no work is pending, no Task/Thread is blocked.
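A minimal sketch of that idea (WorkItem and Process are assumed names; an ActionBlock's MaxDegreeOfParallelism defaults to 1, which is what keeps each queue serial):
var queue1 = new ActionBlock<WorkItem>(item => Process(item));
var queue2 = new ActionBlock<WorkItem>(item => Process(item));

queue1.Post(wi1a); // wi1b posted later will run only after wi1a completes
queue2.Post(wi2a); // independent of queue1, may run concurrently with it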
The best way is to use the Task Parallel Library (TPL) and Continuations. A continuation not only allows you to create a flow of tasks but also handles your exceptions. This is a great introduction to the TPL. But to give you some idea...
You can start a TPL task using
Task task = Task.Factory.StartNew(() =>
{
    // Do some work here...
});
Now to start a second task when an antecedent task finishes (in error or successfully) you can use the ContinueWith method
Task task1 = Task.Factory.StartNew(() => Console.WriteLine("Antecedant Task"));
Task task2 = task1.ContinueWith(antTask => Console.WriteLine("Continuation..."));
So as soon as task1 completes, fails or is cancelled task2 'fires-up' and starts running. Note that if task1 had completed before reaching the second line of code task2 would be scheduled to execute immediately. The antTask argument passed to the second lambda is a reference to the antecedent task. See this link for more detailed examples...
You can also pass continuations results from the antecedent task
Task.Factory.StartNew<int>(() => 1)
    .ContinueWith(antTask => antTask.Result * 4)
    .ContinueWith(antTask => antTask.Result * 4)
    .ContinueWith(antTask => Console.WriteLine(antTask.Result * 4)); // Prints 64.
Note. Be sure to read up on exception handling in the first link provided as this can lead a newcomer to TPL astray.
One last thing to look at, in particular for what you want, is child tasks. Child tasks are those created as AttachedToParent. In this case the continuation will not run until all child tasks have completed:
TaskCreationOptions atp = TaskCreationOptions.AttachedToParent;
Task.Factory.StartNew(() =>
{
    Task.Factory.StartNew(() => { SomeMethod(); }, atp);
    Task.Factory.StartNew(() => { SomeOtherMethod(); }, atp);
}).ContinueWith(cont => { Console.WriteLine("Finished!"); });
I hope this helps.
Edit: Have you had a look at the concurrent collections, in particular BlockingCollection<T>? In your case you might use something like:
public class TaskQueue : IDisposable
{
    BlockingCollection<Action> taskX = new BlockingCollection<Action>();

    public TaskQueue(int taskCount)
    {
        // Create and start a new Task for each consumer.
        for (int i = 0; i < taskCount; i++)
            Task.Factory.StartNew(Consumer);
    }

    public void Dispose() { taskX.CompleteAdding(); }

    public void EnqueueTask(Action action) { taskX.Add(action); }

    void Consumer()
    {
        // This sequence blocks when no elements are available
        // and ends when CompleteAdding is called.
        foreach (Action action in taskX.GetConsumingEnumerable())
            action(); // Perform your task.
    }
}
A .NET 4.0 solution based on TPL is possible, while hiding away the fact that it needs to store the parent task somewhere. For example:
class QueuePool
{
    private readonly Task[] _queues;

    public QueuePool(int queueCount)
    {
        _queues = new Task[queueCount];
    }

    public void Enqueue(int queueIndex, Action action)
    {
        lock (_queues)
        {
            var parent = _queues[queueIndex];
            if (parent == null)
                _queues[queueIndex] = Task.Factory.StartNew(action);
            else
                _queues[queueIndex] = parent.ContinueWith(_ => action());
        }
    }
}
This is using a single lock for all queues, to illustrate the idea. In production code, however, I would use a lock per queue to reduce contention.
It looks like the design you already have is good and working. Your worker threads (one per queue) are long-running, so if you want to use Tasks instead, specify TaskCreationOptions.LongRunning so you get a dedicated worker thread.
But there isn't really a need to use the ThreadPool here. It doesn't offer many benefits for long-running work.
