I am trying to implement the following use case. I have an Azure Worker Role that will monitor the Azure Storage Queue, and when a message comes in, this will trigger a job to run asynchronously. I want to use the TPL if possible, and need the operations to support cancellation, so that when the Azure Role OnStop fires, jobs can exit gracefully if possible. The MyFixIt example posted by Scott Guthrie is almost exactly what I need, and I have used this as the template for my project. The one critical aspect not supported is the requirement to run the jobs asynchronously. In the FixIt code, once a job is launched, no other jobs will process until that one finishes. Some of the jobs my application will process are long running, and I need the worker role to be able to notice other incoming jobs and run those while the long running job is running.
The two key methods here are ProcessMessagesAsync, which monitors the queue, and ProcessMessage, which runs the job when a message comes in. Here is what I have; it mostly works, except that it does not handle the cancellation request properly, and the Azure Worker Role will shut down without waiting for jobs to complete.
/// <summary>
/// Continuous loop that monitors the queue and launches jobs when they are retrieved.
/// </summary>
/// <param name="token"></param>
/// <returns></returns>
public virtual async Task ProcessMessagesAsync(CancellationToken token)
{
CloudQueue queue = _queueClient.GetQueueReference(_queueName);
await queue.CreateIfNotExistsAsync(token);
while (!token.IsCancellationRequested)
{
Debug.WriteLine("inLoop");
// The default timeout is 90 seconds, so we won’t continuously poll the queue if there are no messages.
// Pass in a cancellation token, because the operation can be long-running.
CloudQueueMessage message = await queue.GetMessageAsync(token);
if (message != null)
{
ProcessMessage(message, queue, token);
}
else
{
await Task.Delay(500, token);
}
}
}
protected virtual async Task ProcessMessage(CloudQueueMessage message, CloudQueue queue, CancellationToken token)
{
var jobDetails = JobDetails.DeserializeJson(message.AsString);
var result = await _jobRunner.RunJob(jobDetails, token);
//todo handle error
//if (result.Status == JobStatus.Error)
await queue.DeleteMessageAsync(message);
}
Then the JobRunner runs the requested job. I have written a TestJob in which I am trying to simulate a long running job that can notice the cancellation request and, after a short cleanup period, exit early.
public virtual async Task<JobResult> RunJob(JobDetails jobDetails, CancellationToken token)
{
switch (jobDetails.JobName.ToLower())
{
case "testjob":
return await TestJob(jobDetails.Args, token);
}
return new JobResult(JobStatus.Error) { ErrorMessage = "The job requested does not exist." };
}
protected virtual async Task<JobResult> TestJob(List<string> jobArgs, CancellationToken token)
{
var message = "no message";
if (jobArgs != null && jobArgs.Any())
message = jobArgs[0];
return await Task.Run(async () =>
{
Debug.WriteLine(string.Format("Start:{0}", message));
for (int i = 1; i <= 800; i++)
{
if (token.IsCancellationRequested)
{
Debug.WriteLine("CancellationRequest in TestJob");
//simulate short time to cleanup and exit early
Thread.Sleep(1500);
Debug.WriteLine("Cancellation job cleanup finished.");
token.ThrowIfCancellationRequested();
}
Thread.Sleep(10);
}
Debug.WriteLine(string.Format("Finish:{0}", message));
return new JobResult(JobStatus.Success);
});
}
I have been searching and researching for two days now, including the TPL Dataflow library, and have not yet been able to come up with a way to make this work properly. I feel like the call to ProcessMessage(message, queue, token) is not being made correctly; there is even a compiler warning, 'Because this call is not awaited...'. But I DON'T want to await (which is what the FixIt example does), because then no other jobs get noticed until the running one is finished. This seems like it would not be an uncommon use case, though I cannot seem to find anyone describing it.
Thank you in advance for any help!
Danny Green
The reason this is happening is that you are not honouring the task returned from ProcessMessage. Because of this, ProcessMessagesAsync can finish before ProcessMessage gracefully completes or cancels. Keeping in mind that you don't want to await ProcessMessage because that would make message processing sequential, I would suggest that you keep a list of running tasks.
In other words, create a List in ProcessMessagesAsync and add the task returned from ProcessMessage to this list. Then at the end of the while loop you should loop through this list to cancel all pending tasks if the token was cancelled.
Sorry I don't have VS handy but I hope you get the point.
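Roughly, from memory, the idea would look something like this (an untested sketch, assuming ProcessMessage returns its Task exactly as in your code):
var runningTasks = new List<Task>();
try
{
    while (!token.IsCancellationRequested)
    {
        CloudQueueMessage message = await queue.GetMessageAsync(token);
        if (message != null)
        {
            // Do not await here; just remember the task so it can be waited on later.
            runningTasks.Add(ProcessMessage(message, queue, token));
            // Prune completed tasks so the list does not grow without bound.
            runningTasks.RemoveAll(t => t.IsCompleted);
        }
        else
        {
            await Task.Delay(500, token);
        }
    }
}
catch (OperationCanceledException) { /* cancellation was requested; fall through to the wait below */ }
// Give the in-flight jobs a chance to finish or cancel gracefully.
try { await Task.WhenAll(runningTasks); }
catch (OperationCanceledException) { /* jobs observed the token */ }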
Thank you Sanjay. Based on your suggestion I have come up with the following.
/// <summary>
/// Continuous loop that monitors the queue and launches jobs when they are retrieved.
/// </summary>
/// <param name="token"></param>
/// <returns></returns>
public virtual async Task ProcessMessagesAsync(CancellationToken token)
{
CloudQueue queue = _queueClient.GetQueueReference(_queueName);
await queue.CreateIfNotExistsAsync(token);
var runningTasks = new ConcurrentDictionary<int, Task>();
while (!token.IsCancellationRequested)
{
Debug.WriteLine("inLoop");
// The default timeout is 90 seconds, so we won’t continuously poll the queue if there are no messages.
// Pass in a cancellation token, because the operation can be long-running.
CloudQueueMessage message = await queue.GetMessageAsync(token);
if (message != null)
{
var t = ProcessMessage(message, queue, token);
var c = t.ContinueWith(z => RemoveRunningTask(t.Id, runningTasks));
while (true)
{
if (runningTasks.TryAdd(t.Id, t))
break;
await Task.Delay(25);
}
}
else
{
try
{
await Task.Delay(500, token);
}
catch (Exception ex)
{
Debug.WriteLine(ex.Message);
}
}
}
while (!runningTasks.IsEmpty)
{
Debug.WriteLine("Waiting for running tasks");
await Task.Delay(500);
}
}
private static void RemoveRunningTask(int id, ConcurrentDictionary<int, Task> runningTasks)
{
while (true)
{
Task outTask;
if (runningTasks.TryRemove(id, out outTask))
break;
Thread.Sleep(25);
}
}
This seems to work, though I feel it is a little clumsy. I started out coding the 'ContinueWith' like this, but was surprised that the incoming task had a different Id value (I expected it to be the same Task):
var task = ProcessMessage(message, queue, token).ContinueWith(x =>
{
while (true)
{
Task outTask;
if (runningTasks.TryRemove(x.Id, out outTask))
break;
Task.Delay(25);
}
});
UPDATE:
It turns out that this still does not quite work; I somehow misread the results when testing earlier. Based on the MyFixIt example, in the Worker Role OnStop I have the following code:
public override void OnStop()
{
Debug.WriteLine("OnStop_Begin");
tokenSource.Cancel();
tokenSource.Token.WaitHandle.WaitOne();
base.OnStop();
Debug.WriteLine("Onstop_End");
tokenSource.Dispose();
}
It appears that tokenSource.Token.WaitHandle.WaitOne isn't really able to wait until all of the tasks that have a reference to the token have finished, so the role continues and stops even when tasks are still in the process of finishing up. Is there some way to properly use the token to signal when the cancellation is actually completed?
Thanks!
UPDATE 2
Ok, I think I have a solution that is now working. It appears that the CancellationToken.WaitHandle is signaled as soon as .Cancel is called, so I'm not sure what the purpose of waiting on it immediately after the .Cancel call is; it seems like execution would always just continue straight through that code. This is how it is in the FixIt example, but I don't really understand it. For my purposes, I have changed ProcessMessagesAsync to take a ManualResetEventSlim, which it sets after all tasks have finished. Then in OnStop I wait on that before finishing the stop.
/// <summary>
/// Continuous loop that monitors the queue and launches jobs when they are retrieved.
/// </summary>
/// <param name="token"></param>
/// <param name="reset"></param>
/// <returns></returns>
public virtual async Task ProcessMessagesAsync(CancellationToken token, ManualResetEventSlim reset)
{
CloudQueue queue = _queueClient.GetQueueReference(_queueName);
await queue.CreateIfNotExistsAsync(token);
var runningTasks = new ConcurrentDictionary<int, Task>();
while (!token.IsCancellationRequested)
{
Debug.WriteLine("inLoop");
// The default timeout is 90 seconds, so we won’t continuously poll the queue if there are no messages.
// Pass in a cancellation token, because the operation can be long-running.
CloudQueueMessage message = await queue.GetMessageAsync(token);
if (message != null)
{
var t = ProcessMessage(message, queue, token);
var c = t.ContinueWith(z => RemoveRunningTask(t.Id, runningTasks));
while (true)
{
if (runningTasks.TryAdd(t.Id, t))
break;
await Task.Delay(25);
}
}
else
{
try
{
await Task.Delay(500, token);
}
catch (Exception ex)
{
Debug.WriteLine(ex.Message);
}
}
}
while (!runningTasks.IsEmpty)
{
Debug.WriteLine("Waiting for running tasks");
await Task.Delay(500);
}
Debug.WriteLine("All tasks have finished, exiting ProcessMessagesAsync.");
reset.Set();
}
public override void OnStop()
{
Debug.WriteLine("OnStop_Begin");
tokenSource.Cancel();
tokenSource.Token.WaitHandle.WaitOne();
_reset.Wait();
base.OnStop();
Debug.WriteLine("Onstop_End");
tokenSource.Dispose();
}
Related
I'm trying to create an AWS SQS Windows service consumer that will poll messages in batches of 10. Each message will be executed in its own task for parallel execution. Message processing includes calling different APIs and sending email, so it might take some time.
My problem is that, first, I only want to poll the queue when 10 messages can be processed immediately. This is due to the SQS visibility timeout: having the received messages "wait" might exceed the visibility timeout and put them "back" on the queue, which will produce duplicates. I don't think tweaking the visibility timeout is a good idea, because there is still a chance that messages will be duplicated, and that's what I'm trying to avoid. Second, I want to have some sort of limit on parallelism (e.g. a maximum of 100 concurrent tasks), so that server resources are kept in check, since there are also other apps running on the server.
How can I achieve this? Or is there any other way to remedy these problems?
This answer makes the following assumptions:
Fetching messages from the AWS should be serialized. Only the processing of messages should be parallelized.
Every message fetched from the AWS should be processed. The whole execution should not terminate before all fetched messages have a chance to be processed.
Every message-processing operation should be awaited. The whole execution should not terminate before the completion of all started tasks.
Any error that occurs during the processing of a message should be ignored. The whole execution should not terminate because the processing of a single message failed.
Any error that occurs during the fetching of messages from the AWS should be fatal. The whole execution should terminate, but not before all currently running message-processing operations have completed.
The execution mechanism should be able to handle the case that a fetch-from-the-AWS operation returned a batch having a different number of messages than the requested number.
Below is an implementation that (hopefully) satisfies these requirements:
/// <summary>
/// Starts an execution loop that fetches batches of messages sequentially,
/// and process them one by one in parallel.
/// </summary>
public static async Task ExecutionLoopAsync<TMessage>(
Func<int, Task<TMessage[]>> fetchMessagesAsync,
Func<TMessage, Task> processMessageAsync,
int fetchCount,
int maxDegreeOfParallelism,
CancellationToken cancellationToken = default)
{
// Arguments validation omitted
var semaphore = new SemaphoreSlim(maxDegreeOfParallelism, maxDegreeOfParallelism);
// Count how many times we have acquired the semaphore, so that we know
// how many more times we have to acquire it before we exit from this method.
int acquiredCount = 0;
try
{
while (true)
{
Debug.Assert(acquiredCount == 0);
for (int i = 0; i < fetchCount; i++)
{
await semaphore.WaitAsync(cancellationToken);
acquiredCount++;
}
TMessage[] messages = await fetchMessagesAsync(fetchCount)
?? Array.Empty<TMessage>();
for (int i = 0; i < messages.Length; i++)
{
if (i >= fetchCount) // We got more messages than we asked for
{
await semaphore.WaitAsync();
acquiredCount++;
}
ProcessAndRelease(messages[i]);
acquiredCount--;
}
if (messages.Length < fetchCount)
{
// We got less messages than we asked for
semaphore.Release(fetchCount - messages.Length);
acquiredCount -= fetchCount - messages.Length;
}
// This method is 'async void' because it is not expected to throw ever
async void ProcessAndRelease(TMessage message)
{
try { await processMessageAsync(message); }
catch { } // Swallow exceptions
finally { semaphore.Release(); }
}
}
}
catch (SemaphoreFullException)
{
// Guard against the (unlikely) scenario that the counting logic is flawed.
// The counter is no longer reliable, so skip the awaiting in finally.
acquiredCount = maxDegreeOfParallelism;
throw;
}
finally
{
// Wait for all pending operations to complete. This could cause a deadlock
// in case the counter has become out of sync.
for (int i = acquiredCount; i < maxDegreeOfParallelism; i++)
await semaphore.WaitAsync();
}
}
Usage example:
var cts = new CancellationTokenSource();
Task executionTask = ExecutionLoopAsync<Message>(async count =>
{
return await GetBatchFromAwsAsync(count);
}, async message =>
{
await ProcessMessageAsync(message);
}, fetchCount: 10, maxDegreeOfParallelism: 100, cts.Token);
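To shut the loop down, cancel the token and then await the returned task (a minimal sketch); the finally block above first waits for any in-flight messages, and the cancellation then surfaces as an OperationCanceledException:
// Initiate shutdown and wait for in-flight message processing to drain.
cts.Cancel();
try { await executionTask; }
catch (OperationCanceledException) { /* expected on shutdown */ }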
[ This question needs to be reimagined. One of my thread queues MUST run on an STA thread, and the code below does not accommodate that. In particular it seems Task<> chooses its own thread and that just is not going to work for me. ]
I have a task queue (BlockingCollection) that I'm running through on a dedicated thread. That queue receives a series of Task<> objects that it runs sequentially within that thread via a while loop.
I need a means of Cancelling that series of tasks, and a means of knowing that the tasks are all complete. I have not been able to figure out how to do this.
Here's a fragment of my queuing class. ProcessQueue is run on a separate thread from main. QueueJob calls occur on the main thread.
using Job = Tuple<Task<bool>, string>;
public class JobProcessor
{
private readonly BlockingCollection<Job> m_queue = new BlockingCollection<Job>();
volatile bool cancel_queue = false;
private bool ProcessQueue()
{
while (true)
{
if (m_queue.IsAddingCompleted)
break;
Job tuple;
if (!m_queue.TryTake(out tuple, Timeout.Infinite))
break;
var task = tuple.Item1;
var taskName = tuple.Item2;
try
{
Console.WriteLine("Task {0}::{1} starting", this.name, taskName);
task.RunSynchronously();
Console.WriteLine("Task {0}::{1} completed", this.name, taskName);
}
catch (Exception e)
{
string message = e.Message;
}
if (cancel_queue) // CANCEL BY ERASING TASKS AND NOT RUNNING.
{
while (m_queue.TryTake(out tuple))
{
}
}
} // while(true)
return true;
}
public Task<bool> QueueJob(Func<bool> input)
{
var task = new Task<bool>(input);
try
{
m_queue.Add(Tuple.Create(task, input.Method.Name));
}
catch (InvalidOperationException)
{
Task<bool> dummy = new Task<bool>(() => false);
dummy.Start();
return dummy;
}
return task;
}
Here are the functions that trouble me:
public void ClearQueue()
{
cancel_queue = true;
// wait for queue to become empty. HOW?
cancel_queue = false;
}
public void WaitForCompletion()
{
// wait for all tasks to be completed.
// not sufficient to wait for empty queue because the last task
// must also execute and finish. HOW?
}
}
Here is some usage:
class SomeClass
{
void Test()
{
JobProcessor jp = new JobProcessor();
// launch Processor loop on separate thread... code not shown.
// send a bunch of jobs via QueueJob... code not shown.
// launch dialog... code not shown.
if (dialog_result == Result.Cancel)
jp.ClearQueue();
if (dialog_result == Result.Proceed)
jp.WaitForCompletion();
}
}
The idea is that after the work is completed or cancelled, new work may be posted. In general, though, new work may come in asynchronously. WaitForCompletion might in fact be "when all work is done, inform the user and then do other stuff", so it doesn't strictly have to be a synchronous function call like the above, but I can't figure out how to make these happen.
(One further complication, I expect to have several queues that interact. While I am careful to keep things parallelized in a way to prevent deadlocks, I am not confident what happens when cancellation is introduced into the mix, but this is probably beyond scope for this question.)
WaitForCompletion() sounds easy enough. Create a semaphore or event, create a task whose only action is to signal the semaphore, queue up the task, wait on the semaphore.
When the thread finishes the last 'real' task, the semaphore task will be run and so the thread that called WaitForCompletion will become ready/running:)
Would not a similar approach work for cancellation? Have a very high priority thread that you create/signal that drains the queue of all pending jobs, disposing them, queueing up the semaphore task and waiting for the 'last task done' signal?
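Using the existing QueueJob, the sentinel idea for WaitForCompletion can be sketched without an explicit semaphore by waiting on the Task that QueueJob already returns (illustrative only; error handling omitted):
public void WaitForCompletion()
{
    // Queue a no-op sentinel job. The processing thread runs jobs in order,
    // so this task completes only after every job queued before it has run.
    QueueJob(() => true).Wait();
}
The returned Task plays the same role as the semaphore or event described above.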
I have a function that runs at idle time at a certain interval, e.g. every 30 seconds, to poll the hardware for status.
The method is public static async Task PollCurrentHardwareStatus(); inside this method it goes through the status of every hardware device, e.g. I have 4 devices to get the connection status of (printer module, cash module, coin module, terminal module).
If the public touches the main screen, it diverts to the next page, at which point I need to cancel the status poll. How do I cancel the device status poll in an awaited Task?
I have come across Cancel an Async Task or a List of Tasks (C#) but I can't seem to work out where to put the CancellationToken.
My code for PollCurrentHardwareStatus:
public static async Task PollCurrentHardwareStatus()
{
try
{
//POLLING CARD READER
if (GlobVars.HwIDTech.Enabled)
{
if (IDTechDevice.PingForReply())
{
LogEvents($"[App] Terminal OK.", EventLogEntryType.Information);
AppDeviceStatus.strIDTechStatus = StatusMessageIDTech.strSuccessID;
}
else
{
LogEvents($"[App] IDTechDevice: Not found/Disconnected", EventLogEntryType.Information);
AppDeviceStatus.strIDTechStatus = StatusMessageIDTech.strErrorID;
}
}
//POLLING PRINTER
if (GlobVars.HwCustom.Enabled)
{
string description = string.Empty;
int status = 0;
PrintMain.PrinterGetStatus(ref description, ref status);
if (status == 0)
{
AppDeviceStatus.strPrinterStatus = StatusMessagePrinter.strSuccessID;
}
else
{
LogEvents($"[App] Printer error: {description}", EventLogEntryType.Information);
AppDeviceStatus.strPrinterStatus = StatusMessagePrinter.strErrorID;
}
}
//POLLING CASH COIN MODULE
if (GlobVars.HwB2B.Enabled && GlobVars.HwBCR.Enabled)
{
string B2BStatus = await CCMain.GetCurrentDeviceStatus();
if (B2BStatus == "DISABLED")
{
AppDeviceStatus.strB2BStatus = StatusMessageB2B.strSuccessID;
LogEvents($"[App] Poll B2B device: Status - OK.", EventLogEntryType.Information);
}
else
{
LogEvents($"[App] Poll B2B device: Status - {B2BStatus}.", EventLogEntryType.Information);
AppDeviceStatus.strB2BStatus = StatusMessageB2B.strErrorID;
}
if (ModuleCoins.OpenConnection())
{
await ModuleCoins.PerformSelfTest();
AppDeviceStatus.strBCRStatus = StatusMessageBCR.strSuccessID;
}
else
{
AppDeviceStatus.strBCRStatus = StatusMessageBCR.strErrorID;
}
}
UpdateErrorStatus();
}
catch (Exception ex)
{
LogEvents($"[App] Poll hardware status : Ex-{ex.Message}. Stack Trace-{ex.StackTrace}", EventLogEntryType.Error);
}
await Task.Delay(100);
}
I think you can create the CancellationTokenSource in the method from which you are calling PollCurrentHardwareStatus(). Please check the example below:
Add a CancellationToken parameter to the PollCurrentHardwareStatus method:
public static async Task PollCurrentHardwareStatus(CancellationToken token)
{
// your logic code
// ...............
}
Create a CancellationTokenSource in your Page class and pass its Token when calling the method:
public class Page
{
private CancellationTokenSource cancellationTokenSource;
public Page()
{
cancellationTokenSource = new CancellationTokenSource();
}
public async void CallPoll()
{
await PollCurrentHardwareStatus(cancellationTokenSource.Token);
}
public void OnCancelPoll(object sender, EventArgs e)
{
cancellationTokenSource.Cancel();
}
}
According to MSDN: Cancellation in managed threads
Cancellation is cooperative and is not forced on the listener. The
listener determines how to gracefully terminate in response to a
cancellation request.
You'll have to create an overload of PollCurrentHardwareStatus that takes a CancellationToken object as input. The function should regularly check whether cancellation is requested and cancel the function gracefully.
There are several problems with this: what is regularly? And what should you do when cancellation is requested?
The answers depend on your requirements. It depends on the kind of interruption whether you should have cancelled within 50 msec, or whether the cancellation may take up to a second. For example, if your process is interrupted when an operator touches your screen for the first time, this operator might be willing to wait half a second before the screen responds. But if your process is interrupted every time the operator types a letter, then one second to cancel might be annoying.
So how often you should check for cancellation depends on the usage.
async Task PollCurrentHardwareStatus(CancellationToken token)
{
token.ThrowIfCancellationRequested();
DoSomeShortProcessing();
token.ThrowIfCancellationRequested();
DoSomeOtherProcessing();
token.ThrowIfCancellationRequested();
// etc.
}
The problem arises if you call a function that takes a longer time to process. The best thing would be to pass the token on to that function:
LongProcessingFunction(token);
The other function should regularly check the token.
Graceful cancellation depends on this.
If you can't change the other function, you can't guarantee proper and fast cancellation.
async-await does not help you in this. It is up to programmers who create awaitable functions to provide versions that accept a CancellationToken.
What you'll find is that all basic awaitable functions (read/write a file, fetch information from a database or the internet, etc.) have a version that accepts a CancellationToken.
You could start a Thread and kill this thread when cancellation is requested, but that is fairly dangerous, because you don't know the status of the object when the thread is killed. I would not advise that.
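As a concrete illustration of the pattern, here is a hedged sketch of a polling loop that passes the token down and lets both the hardware checks and the idle wait observe it. PollHardwareLoopAsync and the 30-second interval are illustrative names, and PollCurrentHardwareStatus(token) refers to the overload suggested above:
public static async Task PollHardwareLoopAsync(CancellationToken token)
{
    while (true)
    {
        // Throws OperationCanceledException promptly once cancellation is requested.
        token.ThrowIfCancellationRequested();
        // Overload of the polling method that accepts and checks the token.
        await PollCurrentHardwareStatus(token);
        // Task.Delay observes the token, so the 30-second idle wait is cancelled immediately.
        await Task.Delay(TimeSpan.FromSeconds(30), token);
    }
}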
I am working on a Web API project in which a user performs some action and all the related users get a notification regarding that user's activity. To notify every user I am starting a new thread which performs the desired action. Is it necessary to wait for this thread to terminate before the request completes and returns the result to the user?
P.S. The execution time of the thread may increase with the number of users.
Please suggest an alternative if possible.
Program logic (presently I am using await to wait for the async function to execute):
public async Task<IHttpActionResult> doSomething(arguments)
{
// ... perform some operation which includes some database transactions ...
if (operationSucceeded)
{
await Notification(userid);
}
return result;
}
static void Main(string[] args)
{
var userIds = new[] { 1, 2, 3, 4, 5};
Console.WriteLine("Updating db for users...");
// Start a new thread for the notification send-out
Task.Run(() =>
{
foreach (var i in userIds)
Console.WriteLine("Sending notification for #user " + i);
}).ContinueWith(t => Console.WriteLine("Notifications all sent!"));
Console.WriteLine("Returning the result before all notifications are sent out!");
}
If you remove the await in front of Task.Run() (equivalent to Notification(), which returns a Task in your case) and run it, then it will create a separate thread for the notification send-out.
public async Task<IHttpActionResult> doSomething(arguments)
{
bool isInsertDone = false, isUpdateDone = false;
// create a list for the parallel tasks
var tasks = new List<Task>();
// add the parallel tasks to the list and start them
tasks.Add(Task.Run(() =>
{
isInsertDone = insertData(arguments);
}));
tasks.Add(Task.Run(() =>
{
isUpdateDone = updateData(arguments);
}));
// wait for all of the above tasks to complete
Task.WaitAll(tasks.ToArray());
// return the response based on the results of the insert and update
return Ok<bool>(isInsertDone && isUpdateDone);
}
If it is going to be a long running function and there is no direct impact on the current function then there is no need to wait. Fire and forget. You can safely remove the await.
public async Task<IHttpActionResult> doSomething(arguments) {
//... Perform some operation which includes some async database transactions
if (operationSucceeded) {
NotificationsAsync(userid); //Start notifications and continue
}
return result;
}
I would suggest using a messaging queue for jobs like that but that is a more advanced topic which is out of scope for this question.
I am trying to achieve the following behaviour using the Task Parallel Library:
As messages arrive I would like to process them sequentially but in groups. So when the first message arrives it should be processed immediately. If 2 messages come in while the first is being processed then they should be processed in a group of 2.
I can almost get what I want using a BatchBlock linked to an ActionBlock
var batchBlock = new BatchBlock<int>(100);
var actionBlock = new ActionBlock<int[]>(list =>
{
// do work
// now trigger
batchBlock.TriggerBatch();
});
batchBlock.LinkTo(actionBlock);
The problem with the code above is that if an item arrives after the TriggerBatch() call then it needs to wait for the batch to fill up. If I trigger batch after each post instead then the ActionBlock always receives single messages.
Instead of BatchBlock, you could use BufferBlock with a Task that receives items from it and resends them in batches to the target, according to your logic. Because you need to try to send a message containing a batch, and cancel it if another item comes in, the target block (actionBlock in your sample) has to have BoundedCapacity set to 1.
So, what you do is that you first receive something. When you have that, you start sending asynchronously and you also try to receive more items. If sending completes first, you start over. If receiving completes first, you cancel sending, add the received items to the batch, and then start both asynchronous actions again.
The actual code is a bit more complicated, because it needs to deal with some corner cases (receiving and sending complete at the same time; sending couldn't be cancelled; receiving completed because the whole block was completed; exceptions):
public static ITargetBlock<T> CreateBatchingWrapper<T>(
ITargetBlock<IReadOnlyList<T>> target)
{
// target should have BoundedCapacity == 1,
// but there is no way to check for that
var source = new BufferBlock<T>();
Task.Run(() => BatchItems(source, target));
return source;
}
private static async Task BatchItems<T>(
IReceivableSourceBlock<T> source, ITargetBlock<IReadOnlyList<T>> target)
{
try
{
while (true)
{
var messages = new List<T>();
// wait for first message in batch
if (!await source.OutputAvailableAsync())
{
// source was completed, complete target and return
target.Complete();
return;
}
// receive all there is right now
source.ReceiveAllInto(messages);
// try sending what we've got
var sendCancellation = new CancellationTokenSource();
var sendTask = target.SendAsync(messages, sendCancellation.Token);
var outputAvailableTask = source.OutputAvailableAsync();
while (true)
{
await Task.WhenAny(sendTask, outputAvailableTask);
// got another message, try cancelling send
if (outputAvailableTask.IsCompleted
&& outputAvailableTask.Result)
{
sendCancellation.Cancel();
// cancellation wasn't successful
// and the message was received, start another batch
if (!await sendTask.EnsureCancelled() && sendTask.Result)
break;
// send was cancelled, receive messages
source.ReceiveAllInto(messages);
// restart both Tasks
sendCancellation = new CancellationTokenSource();
sendTask = target.SendAsync(
messages, sendCancellation.Token);
outputAvailableTask = source.OutputAvailableAsync();
}
else
{
// we get here in three situations:
// 1. send was completed successfully
// 2. send failed
// 3. input has completed
// in cases 2 and 3, this await is necessary
// in case 1, it's harmless
await sendTask;
break;
}
}
}
}
catch (Exception e)
{
source.Fault(e);
target.Fault(e);
}
}
/// <summary>
/// Returns a Task that completes when the given Task completes.
/// The Result is true if the Task was cancelled,
/// and false if it completed successfully.
/// If the Task was faulted, the returned Task is faulted too.
/// </summary>
public static Task<bool> EnsureCancelled(this Task task)
{
return task.ContinueWith(t =>
{
if (t.IsCanceled)
return true;
if (t.IsFaulted)
{
// rethrow the exception
ExceptionDispatchInfo.Capture(task.Exception.InnerException)
.Throw();
}
// completed successfully
return false;
});
}
public static void ReceiveAllInto<T>(
this IReceivableSourceBlock<T> source, List<T> targetCollection)
{
// TryReceiveAll would be best suited for this, except it's bugged
// (see http://connect.microsoft.com/VisualStudio/feedback/details/785185)
T item;
while (source.TryReceive(out item))
targetCollection.Add(item);
}
You can also use a Timer, which will call TriggerBatch() every 10 seconds.
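For example (a sketch; the interval is arbitrary and batchBlock is the block from the question):
// Flush whatever has accumulated every 10 seconds, even if the batch is not full.
var flushTimer = new System.Threading.Timer(
    _ => batchBlock.TriggerBatch(),
    null,
    dueTime: TimeSpan.FromSeconds(10),
    period: TimeSpan.FromSeconds(10));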