I am currently working with a serial port, and the API I use will sometimes hang on a read, even when its own timeout is set.
This is not a big problem, but I need to do some work when that happens, and the hanging thread needs to be shut down. I have tried the following, but it has been giving me problems: the API call is not terminated, but keeps running while the rest of the code continues and the TimeoutException is thrown. How can I use tasks to cancel a hanging task after a certain amount of time?
CancellationToken token = new CancellationToken();
var task = Task.Factory.StartNew(() =>
{
CallingAPIThatMightHang(); // Example
}, token);
if (!task.Wait(this.TimeToTimeOut, token))
{
throw new TimeoutException("The operation timed out");
}
CancellationToken is a form of cooperative cancellation. You need to monitor the token while executing your operation and check whether cancellation has been requested.
From your code block, it seems you have one long-running synchronous operation which you offload to a thread-pool thread. If that's the case, see whether you can split that serial call into chunks and poll the token after each chunk is read. If you can't, cancellation won't be possible.
Note that in order to request cancellation, you'll have to create a CancellationTokenSource, whose Cancel() method you can call later.
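A minimal sketch of what that could look like, assuming the read really can be split into chunks (ReadChunk and moreDataExpected below are hypothetical placeholders):
var cts = new CancellationTokenSource();
var task = Task.Run(() =>
{
    while (moreDataExpected) // hypothetical flag: is there more data to read?
    {
        cts.Token.ThrowIfCancellationRequested(); // poll the token between chunks
        ReadChunk(); // hypothetical chunked read
    }
}, cts.Token);

if (!task.Wait(this.TimeToTimeOut))
{
    cts.Cancel(); // cancellation is honored at the next poll
    throw new TimeoutException("The operation timed out");
}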
As a side note, serial-port IO is naturally asynchronous, so you can use the async APIs instead of offloading a synchronous call to a thread-pool thread.
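For example, a rough sketch (inside an async method; the port name, baud rate, and timeout are placeholders) using SerialPort.BaseStream, which exposes ReadAsync. Note that whether a pending read is actually cancelled depends on the platform and driver:
using (var port = new SerialPort("COM1", 9600)) // placeholder settings
{
    port.Open();
    var buffer = new byte[1024];
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5))) // overall timeout
    {
        try
        {
            int read = await port.BaseStream.ReadAsync(buffer, 0, buffer.Length, cts.Token);
            // process 'read' bytes from 'buffer' here
        }
        catch (OperationCanceledException)
        {
            // timed out
        }
    }
}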
Edit:
@HansPassant gave a better idea: run the third-party call inside another process, one which you keep a reference to. Once you need to terminate it, kill the process.
For example:
void Main()
{
SomeMethodThatDoesStuff();
}
void SomeMethodThatDoesStuff()
{
// Do actual stuff
}
And then launch it in a separate process:
private Process processThatDoesStuff;
void Main()
{
processThatDoesStuff = Process.Start(#"SomeLocation");
// Do your checks here.
if (someCondition == null)
{
processThatDoesStuff.Kill();
}
}
If you need to communicate any result between these two processes, you can do those via several mechanisms. One would be writing and reading the Standard Output of the process.
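For example, a rough sketch (the executable path and the timeout are placeholders) that redirects the child's standard output and kills the process if it does not finish in time:
var psi = new ProcessStartInfo(@"C:\path\to\WorkerProcess.exe") // placeholder path
{
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var process = Process.Start(psi))
{
    // Read concurrently so the child never blocks on a full output pipe.
    var outputTask = process.StandardOutput.ReadToEndAsync();
    if (!process.WaitForExit(5000)) // e.g. a 5 second timeout
    {
        process.Kill();
        throw new TimeoutException("The operation timed out");
    }
    string output = outputTask.Result; // safe to read now: the process has exited
}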
Sadly, I am not able to use any other framework, and I cannot change the API I am calling so that it accepts a CancellationToken.
This is how I chose to solve the problem.
class Program
{
static void Main(string[] args)
{
try
{
var result = TestThreadTimeOut();
Console.WriteLine("Result: " + result);
}
catch (TimeoutException exp)
{
Console.WriteLine("Time out");
}
catch (Exception exp)
{
Console.WriteLine("Other error! " + exp.Message);
}
Console.WriteLine("Done!");
Console.ReadLine();
}
public static string TestThreadTimeOut()
{
string result = null;
Thread t = new Thread(() =>
{
while (true)
{
Console.WriteLine("Blah Blah Blah");
}
});
t.Start();
DateTime end = DateTime.Now + new TimeSpan(0, 0, 0, 0, 1500);
while (DateTime.Now <= end)
{
if (result != null)
{
break;
}
Thread.Sleep(50);
}
if (result == null)
{
try
{
t.Abort();
}
catch (ThreadAbortException)
{
// Fine
}
throw new TimeoutException();
}
return result;
}
}
I'm trying to create a way to handle spikes of events from Event Hubs. My current proof-of-concept solution is to fire and forget tasks as I consume events instead of awaiting them, and then throttle the number of parallel tasks with a semaphore to avoid resource starvation.
Utility that throttles things:
public class ThrottledParallelTaskFactory
{
...
public Task StartNew(Func<Task> func)
{
_logger.LogDebug("Available semaphore count {AvailableDataConsumerCount} out of total {DataConsumerCountLimit}", _semaphore.CurrentCount, _limit);
_semaphoreSlim.Wait(_timeout);
_ = Task.Run(func)
.ContinueWith(t =>
{
if (t.Status is TaskStatus.Faulted or TaskStatus.Canceled or TaskStatus.RanToCompletion)
{
_semaphoreSlim.Release();
_logger.LogDebug("Available semaphore count {AvailableDataConsumerCount} out of total {DataConsumerCountLimit}", _semaphore.CurrentCount, _limit);
}
if (t.Status is TaskStatus.Canceled or TaskStatus.Faulted)
{
_logger?.LogError(t.Exception, "Parallel task failed");
}
});
return Task.CompletedTask;
}
}
My EventProcessorClient.ProcessEventAsync delegate:
private Task ProcessEvent(ProcessEventArgs arg)
{
var sw = Stopwatch.StartNew();
try
{
_throttledParallelTaskFactory.StartNew(async () => await Task.Delay(1000));
}
catch (Exception e)
{
_logger.LogError(e, "Failed to process event");
}
_logger.LogDebug($"Took {sw.ElapsedMilliseconds} ms");
return Task.CompletedTask;
}
After running this setup for a while, I noticed that my throttler's semaphore maxes out at 2-3 tasks running in parallel, when my configured limit is 15. This suggests to me that my handler takes 333-500 ms to finish, but the Stopwatch inside the handler says the whole handler takes 0 ms to execute. I later added timestamp logging of when the handler starts and ends to confirm it, and it does take 0-1 ms, but there's a mysterious 300-600 ms gap between consecutive invocations. NOTE: for the current tests, this client is processing a backlog of millions of events, not live data (which could cause similar delays between events).
Does EventProcessorClient by any chance checkpoint internally after every single event? 300-500 ms seems massive to me.
I have tried both the default cached event/prefetch counts and increased ones, without much difference.
Edit:
It ended up being a networking issue, not related to the implementation.
You are not measuring the right thing, and you are basically using async/await and Task incorrectly.
private Task ProcessEvent(ProcessEventArgs arg)
{
var sw = Stopwatch.StartNew();
try
{
_throttledParallelTaskFactory.StartNew(async () => await Task.Delay(1000));
}
catch (Exception e)
{
_logger.LogError(e, "Failed to process event");
}
_logger.LogDebug($"Took {sw.ElapsedMilliseconds} ms");
return Task.CompletedTask;
}
In the above code, the call to _throttledParallelTaskFactory.StartNew is not awaited, so the stopwatch has nothing to measure. Furthermore, since the call is not awaited, any exception won't be caught.
You should move the exception handling and time measurement to the StartNew method like this:
private Task ProcessEvent(ProcessEventArgs arg)
{
_throttledParallelTaskFactory.StartNew(() => Task.Delay(1000));
return Task.CompletedTask;
}
public class ThrottledParallelTaskFactory
{
public async Task StartNew(Func<Task> func)
{
var sw = Stopwatch.StartNew();
_logger.LogDebug("Available semaphore count {AvailableDataConsumerCount} out of total {DataConsumerCountLimit}", _semaphore.CurrentCount, _limit);
_semaphoreSlim.Wait(_timeout);
try
{
await func.Invoke();
}
catch (Exception e)
{
_logger?.LogError(e, "Parallel task failed");
}
finally
{
_semaphoreSlim.Release();
_logger.LogDebug("Available semaphore count {AvailableDataConsumerCount} out of total {DataConsumerCountLimit}", _semaphore.CurrentCount, _limit);
_logger.LogDebug($"Took {sw.ElapsedMilliseconds} ms");
}
}
}
See how we got rid of the call to ContinueWith? Also, since func already represents a Task, there is no need to wrap the code in a call to Task.Run.
Does EventProcessorClient by any chance checkpoint internally after every single event?
No, it does not. You have to do checkpointing manually.
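If you do want to checkpoint, a rough sketch (assuming the Azure.Messaging.EventHubs.Processor client; the counter field and batch size below are placeholders) is to checkpoint every N events inside the handler rather than after every event, since each checkpoint is a storage round trip:
private int _eventsSinceCheckpoint; // hypothetical counter field

private async Task ProcessEvent(ProcessEventArgs args)
{
    // ... hand the event off to the throttled worker as before ...
    if (++_eventsSinceCheckpoint >= 100) // placeholder batch size
    {
        await args.UpdateCheckpointAsync(args.CancellationToken);
        _eventsSinceCheckpoint = 0;
    }
}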
I have a system with 10 machines where I need to perform a certain task on each machine, one by one, in a synchronized order. Basically, only one machine should do that task at any particular time. We already use Consul for another purpose, but I was wondering whether we can use Consul for this as well?
I read more about it, and it looks like we can use leader election with Consul, where each machine tries to acquire the lock, does the work, and then releases the lock so another machine can acquire it and do the same work. This way everything is synchronized, one machine at a time.
I decided to use the C# PlayFab ConsulDotNet library, which looks like it already has this capability built in, but if there is a better option available I am open to that as well. The Action method below in my code base is called on each machine at almost the same time through a watcher mechanism.
private void Action() {
// Try to acquire lock using Consul.
// If lock acquired then DoTheWork() otherwise keep waiting for it until lock is acquired.
// Once work is done, release the lock
// so that some other machine can acquire the lock and do the same work.
}
Now, inside the above method, I need to do the following:
1. Try to acquire the lock. If you cannot acquire it, wait for it, since another machine might have grabbed it before you.
2. If the lock is acquired, then DoTheWork().
3. Once the work is done, release the lock so that another machine can acquire it and do the same work.
The idea is that all 10 machines should DoTheWork() one at a time, in a synchronized order. Based on this blog and this blog, I decided to modify their example to fit our needs.
Below is my LeaderElectionService class:
public class LeaderElectionService
{
public LeaderElectionService(string leadershipLockKey)
{
this.key = leadershipLockKey;
}
public event EventHandler<LeaderChangedEventArgs> LeaderChanged;
string key;
CancellationTokenSource cts = new CancellationTokenSource();
Timer timer;
bool lastIsHeld = false;
IDistributedLock distributedLock;
public void Start()
{
timer = new Timer(async (object state) => await TryAcquireLock((CancellationToken)state), cts.Token, 0, Timeout.Infinite);
}
private async Task TryAcquireLock(CancellationToken token)
{
if (token.IsCancellationRequested)
return;
try
{
if (distributedLock == null)
{
var clientConfig = new ConsulClientConfiguration { Address = new Uri("http://consul.host.domain.com") };
ConsulClient client = new ConsulClient(clientConfig);
distributedLock = await client.AcquireLock(new LockOptions(key) { LockTryOnce = true, LockWaitTime = TimeSpan.FromSeconds(3) }, token).ConfigureAwait(false);
}
else
{
if (!distributedLock.IsHeld)
{
await distributedLock.Acquire(token).ConfigureAwait(false);
}
}
}
catch (LockMaxAttemptsReachedException ex)
{
//this is expected if it couldn't acquire the lock within the first attempt.
Console.WriteLine(ex.StackTrace);
}
catch (Exception ex)
{
Console.WriteLine(ex.StackTrace);
}
finally
{
bool lockHeld = distributedLock?.IsHeld == true;
HandleLockStatusChange(lockHeld);
//Retrigger the timer after a 10 seconds delay (in this example). Delay for 7s if not held as the AcquireLock call will block for ~3s in every failed attempt.
timer.Change(lockHeld ? 10000 : 7000, Timeout.Infinite);
}
}
protected virtual void HandleLockStatusChange(bool isHeldNew)
{
// Is this the right way to check and do the work here?
// In general I want to call method "DoTheWork" in "Action" method itself
// And then release and destroy the session once work is done.
if (isHeldNew)
{
// DoTheWork();
Console.WriteLine("Hello");
// And then where should I release the lock so that another machine can try to grab it?
// distributedLock.Release();
// distributedLock.Destroy();
}
if (lastIsHeld == isHeldNew)
return;
else
{
lastIsHeld = isHeldNew;
}
if (LeaderChanged != null)
{
LeaderChangedEventArgs args = new LeaderChangedEventArgs(lastIsHeld);
foreach (EventHandler<LeaderChangedEventArgs> handler in LeaderChanged.GetInvocationList())
{
try
{
handler(this, args);
}
catch (Exception ex)
{
Console.WriteLine(ex.StackTrace);
}
}
}
}
}
And below is my LeaderChangedEventArgs class:
public class LeaderChangedEventArgs : EventArgs
{
private bool isLeader;
public LeaderChangedEventArgs(bool isHeld)
{
isLeader = isHeld;
}
public bool IsLeader { get { return isLeader; } }
}
In the above code there are a lot of pieces which might not be needed for my use case, but the idea is the same.
Problem Statement
Now, in my Action method, I would like to use the above class and perform the task as soon as the lock is acquired, otherwise keep waiting for the lock. Once the work is done, release and destroy the session so that another machine can grab it and do the work. I am confused about how to use the above class properly in my method below.
private void Action() {
LeaderElectionService electionService = new LeaderElectionService("data/process");
// electionService.LeaderChanged += (source, arguments) => Console.WriteLine(arguments.IsLeader ? "Leader" : "Slave");
electionService.Start();
// Now, how do I wait here indefinitely for the lock to be acquired?
// And once the lock is acquired, do the work and then release and destroy the session
// so that another machine can grab the lock and do the work.
}
I recently started working with C#, so I am confused about how to make this work efficiently in production using Consul and this library.
Update
I tried the code below as per your suggestion (and I think I tried this earlier as well), but for some reason, as soon as it reaches the line await distributedLock.Acquire(cancellationToken);, it just comes back to the Main method. It never moves forward to my "Doing Some Work!" print-out. Does CreateLock actually work? I am expecting that it will create data/lock on Consul (since it is not there), then try to acquire the lock on it, and if the lock is acquired, do the work and then release it for other machines.
private static CancellationTokenSource cts = new CancellationTokenSource();
public static void Main(string[] args)
{
Action(cts.Token);
Console.WriteLine("Hello World");
}
private static async Task Action(CancellationToken cancellationToken)
{
const string keyName = "data/lock";
var clientConfig = new ConsulClientConfiguration { Address = new Uri("http://consul.test.host.com") };
ConsulClient client = new ConsulClient(clientConfig);
var distributedLock = client.CreateLock(keyName);
while (true)
{
try
{
// Try to acquire lock
// As soon as it comes to this line,
// it just goes back to main method automatically. not sure why
await distributedLock.Acquire(cancellationToken);
// Lock is acquired
// DoTheWork();
Console.WriteLine("Doing Some Work!");
// Work is done. Jump out of loop to release the lock
break;
}
catch (LockHeldException)
{
// Cannot acquire the lock. Wait a while then retry
await Task.Delay(TimeSpan.FromSeconds(10), cancellationToken);
}
catch (Exception)
{
// TODO: Handle exception thrown by DoTheWork method
// Here we jump out of the loop to release the lock
// But you can try to acquire the lock again based on your requirements
break;
}
}
// Release and destroy the lock
// So that other machine can grab the lock and do the work
await distributedLock.Release(cancellationToken);
await distributedLock.Destroy(cancellationToken);
}
IMO, LeaderElectionService from those blogs is overkill in your case.
Update 1
There is no need for the while loop, because:
- ConsulClient is a local variable
- there is no need to check the IsHeld property
- Acquire will block indefinitely unless you set LockTryOnce to true in LockOptions, or pass a cancellation token with a timeout
As a side note, it is not necessary to invoke the Destroy method after you call Release on the distributed lock (reference).
private async Task Action(CancellationToken cancellationToken)
{
const string keyName = "YOUR_KEY";
var client = new ConsulClient();
var distributedLock = client.CreateLock(keyName);
try
{
// Try to acquire lock
// NOTE:
// Acquire method will block indefinitely unless
// 1. Set LockTryOnce = true in LockOptions
// 2. Pass a timeout to cancellation token
await distributedLock.Acquire(cancellationToken);
// Lock is acquired
DoTheWork();
}
catch (Exception)
{
// TODO: Handle exception thrown by DoTheWork method
}
// Release the lock (not necessary to invoke Destroy method),
// so that other machine can grab the lock and do the work
await distributedLock.Release(cancellationToken);
}
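If you do need Acquire to give up instead of blocking forever, a rough sketch (assuming the ConsulDotNet client accepts LockOptions here the same way your original AcquireLock call did) could look like this:
// Only try once, waiting at most 3 seconds for the lock,
// and cap the whole attempt with a timed cancellation token.
var distributedLock = client.CreateLock(new LockOptions(keyName)
{
    LockTryOnce = true,
    LockWaitTime = TimeSpan.FromSeconds(3)
});
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
{
    await distributedLock.Acquire(cts.Token); // throws if the lock cannot be acquired in time
}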
Update 2
The reason the OP's code just returns to the Main method is that the Action method is not awaited. You can use async Main if you use C# 7.1 or later, and await the Action method.
public static async Task Main(string[] args)
{
await Action(cts.Token);
Console.WriteLine("Hello World");
}
I have the following method:
public async Task ScrapeObjects(int page = 1)
{
try
{
while (!isObjectSearchCompleted)
{
..do calls..
}
}
catch (HttpRequestException ex)
{
Thread.Sleep(TimeSpan.FromSeconds(60));
ScrapeObjects(page);
Log.Fatal(ex, ex.Message);
}
}
I call this long-running method asynchronously and don't wait for it to finish. The thing is that an exception may occur, and in that case I want to handle it, but then I want to continue from where I left off, on the same thread. Currently, a new thread gets used when I recursively call the method after handling the exception. I would like to keep using the same thread. Is there a way to do so? Thank you!
You probably need to move the try/catch block inside the while loop, and add a counter of the errors that have occurred, to bail out after consecutive failed attempts.
public async Task ScrapeObjects()
{
int failedCount = 0;
int page = 1;
while (!isObjectSearchCompleted)
{
try
{
//..do calls..
}
catch (HttpRequestException ex)
{
failedCount++;
if (failedCount < 3)
{
Log.Info(ex, ex.Message);
await Task.Delay(TimeSpan.FromSeconds(60));
}
else
{
Log.Fatal(ex, ex.Message);
throw; // or return;
}
}
}
}
As a side note, it is generally better to await Task.Delay instead of calling Thread.Sleep inside asynchronous methods, to avoid blocking a thread for no reason.
One simple question before you read the long answer below:
Why do you need the same thread? Are you accessing thread-static / contextual data?
If so, there are easier ways to solve that than limiting your tasks to run on the same thread.
How to limit tasks to run on a single thread
As long as you use async calls on the default synchronization context, the thread can change after every await, because the default context schedules continuations onto the next available thread-pool thread. In the case below, before can be different from after:
public async Task ScrapeObjects(int page = 1)
{
var before = Thread.CurrentThread.ManagedThreadId;
await Task.Delay(1000);
var after = Thread.CurrentThread.ManagedThreadId;
}
The only reliable way to guarantee that your code could come back on the same thread is to schedule your async code onto a single threaded synchronization context:
class SingleThreadSynchronizationContext : SynchronizationContext
{
private readonly BlockingCollection<Action> _actions = new BlockingCollection<Action>();
private readonly Thread _theThread;
public SingleThreadSynchronizationContext()
{
_theThread = new Thread(DoWork);
_theThread.IsBackground = true;
_theThread.Start();
}
public override void Send(SendOrPostCallback d, object state)
{
// Send requires the delegate to run immediately.
d(state);
}
public override void Post(SendOrPostCallback d, object state)
{
// Schedule the action by adding to blocking collection.
_actions.Add(() => d(state));
}
private void DoWork()
{
// Keep picking up actions to run from the collection.
while (!_actions.IsAddingCompleted)
{
try
{
var action = _actions.Take();
action();
}
catch (InvalidOperationException)
{
break;
}
}
}
}
And you need to schedule ScrapeObjects to the custom context:
SynchronizationContext.SetSynchronizationContext(new SingleThreadSynchronizationContext());
await Task.Factory.StartNew(
() => ScrapeObjects(),
CancellationToken.None,
TaskCreationOptions.DenyChildAttach | TaskCreationOptions.LongRunning,
TaskScheduler.FromCurrentSynchronizationContext()
).Unwrap();
By doing that, all your async code is scheduled to the same context and run by that context's single thread.
However
This is typically dangerous, as you suddenly lose the ability to use the thread pool. If you block that thread, the entire async operation is blocked, and you can easily end up with deadlocks.
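For example, a minimal sketch (with hypothetical methods) of how a blocking wait deadlocks on such a single-threaded context:
// Both methods run on the single-threaded context.
void Outer()
{
    var task = InnerAsync();
    task.Wait(); // blocks the context's only thread
    // InnerAsync's continuation after its await must be posted back to this
    // same context, but its only thread is stuck in Wait(), so Wait() never returns.
}

async Task InnerAsync()
{
    await Task.Delay(100);
}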
I'm working on implementing a get method for a cache. This method should return to the caller if a maximum wait time has passed (in my case 100 ms for tests).
My issue is that the exception NEVER reaches the catch after the timer has triggered the event.
Please help me understand why. (I read that events are executed on the same thread, so that shouldn't be the issue.)
public static T Get<T>(string key, int? maxMilisecondsForResponse = null)
{
var result = default(T);
try
{
// Return default if time expired
if (maxMilisecondsForResponse.HasValue)
{
var timer = new System.Timers.Timer(maxMilisecondsForResponse.Value);
timer.Elapsed += OnTimerElapsed;
timer.AutoReset = false;
timer.Enabled = true; // start the timer
}
var externalCache = new CacheServiceClient(BindingName);
Thread.Sleep(3000); // just for testing
}
catch (Exception ex)
{
// why is the exception not caught here?
}
return result;
}
private static void OnTimerElapsed(object source, System.Timers.ElapsedEventArgs e)
{
throw new Exception("Timer elapsed");
}
The timer fires on its own thread. You can read more about it in this answer.
The answer to your question is to use async methods that can be cancelled. Then you can use a cancellation token source and do it the proper way instead of homebrewing a solution with timers.
You can find a good overview here.
For example:
cts = new CancellationTokenSource();
cts.CancelAfter(2500);
await Task.Delay(10000, cts.Token);
This would cancel the waiting task after 2500 ms (of the 10000 ms) because it took too long. Obviously, you need to put your own logic in a task instead of just waiting.
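A rough sketch of applying that to your Get method, assuming CacheServiceClient (from your question) exposes some asynchronous get; the GetAsync name below is a placeholder for whatever your service actually offers:
public static async Task<T> GetAsync<T>(string key, int? maxMillisecondsForResponse = null)
{
    using (var cts = maxMillisecondsForResponse.HasValue
        ? new CancellationTokenSource(maxMillisecondsForResponse.Value)
        : new CancellationTokenSource())
    {
        try
        {
            var externalCache = new CacheServiceClient(BindingName);
            // Hypothetical async call; adapt to the real CacheServiceClient API.
            return await externalCache.GetAsync<T>(key, cts.Token);
        }
        catch (OperationCanceledException)
        {
            return default(T); // timed out: fall back to the default value
        }
    }
}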
From MSDN:
The Timer component catches and suppresses all exceptions thrown by event handlers for the Elapsed event. This behavior is subject to change in future releases of the .NET Framework.
And it continues:
Note, however, that this is not true of event handlers that execute asynchronously and include the await operator (in C#) or the Await operator (in Visual Basic). Exceptions thrown in these event handlers are propagated back to the calling thread.
Please take a look at Exception Handling (Task Parallel Library).
An applied example is below:
public class Program
{
static void Main()
{
Console.WriteLine("Begin");
Get<string>("key", 1000);
Console.WriteLine("End");
}
public static T Get<T>(string key, int? maxMilisecondsForResponse = null)
{
var result = default(T);
try
{
var task = Task.Run(async () =>
{
await Task.Delay(maxMilisecondsForResponse.Value);
throw new Exception("Timer elapsed");
});
task.Wait();
}
catch (Exception ex)
{
// the exception (wrapped in an AggregateException by Wait) is caught here
Console.WriteLine(ex);
}
return result;
}
}
The timer runs on its own thread, so you can't catch the exception at the caller level. Using a timer is therefore not a good approach in this case; you can instead create a Task operation:
var result = default(T);
CacheServiceClient externalCache;
if (!Task.Run(() =>
{
externalCache = new CacheServiceClient(BindingName);
return externalCache;
}).Wait(100)) // Wait up to 100 ms for the operation to complete.
{
throw new Exception("Task was not completed!");
}
// Do something
return result;
I have a question about CancellationTokenSource, which I am using as shown in the code below:
void Process()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
MainTaskRoutine();
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
void MainTaskRoutine()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
//this method shows that a nested task is created
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
Edit: further elaboration
The end objective is: when a user cancels the operation, all the immediately pending tasks (whether children or grandchildren) should be cancelled.
Scenario:
As per the above code:
1. I first check whether the user has asked for cancellation.
2. Only if the user has not asked for cancellation do I continue with the task (please see the Process method).
The sample code shows only one task here, but actually there can be three or more.
Let's say the CPU has started processing Task 1 while the other tasks are still in the task queue waiting to be executed.
The user requests cancellation: Tasks 2 and 3 in the Process method are immediately cancelled, but Task 1 will continue since it is already being processed.
Task 1 calls the method MainTaskRoutine, which in turn creates more tasks.
In MainTaskRoutine I have written: cancellationToken.ThrowIfCancellationRequested();
So the question is: is this the correct way of using CancellationTokenSource, given that it depends on Task.WaitAll()?
[EDITED] Since you use an array in your code, I assume there could be multiple tasks, not just one. I also assume that within each task you start from Process you want to do some CPU-bound work first (//do some work here), and then run MainTaskRoutine.
How you handle task cancellation exceptions is determined by your project's design workflow. E.g., you could do it inside the Process method, or from wherever you call Process. If your only concern is to remove Task objects from the array where you keep track of the pending tasks, this can be done using Task.ContinueWith. The continuation will be executed regardless of the task's completion status (Canceled, Faulted or RanToCompletion):
Task Process(CancellationToken cancellationToken)
{
var tArray = new List<Task>();
var tArrayLock = new Object();
var task = Task.Run(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
return MainTaskRoutine(cancellationToken);
}, cancellationToken);
// add the task to the array,
// use lock as we may remove tasks from this array on a different thread
lock (tArrayLock)
tArray.Add(task);
task.ContinueWith((antecedentTask) =>
{
if (antecedentTask.IsCanceled || antecedentTask.IsFaulted)
{
// handle cancellation or exception inside the task
// ...
}
// remove task from the array,
// could be on a different thread from the Process's thread, use lock
lock (tArrayLock)
tArray.Remove(antecedentTask);
}, TaskContinuationOptions.ExecuteSynchronously);
// add more tasks like the above
// ...
// Return aggregated task
Task[] allTasks = null;
lock (tArrayLock)
allTasks = tArray.ToArray();
return Task.WhenAll(allTasks);
}
Your MainTaskRoutine can be structured in exactly the same way as Process, and have the same method signature (return a Task).
Then you may want to perform a blocking wait on the aggregated task returned by Process, or handle its completion asynchronously, e.g.:
// handle the completion synchronously with a blocking wait
void RunProcessSync()
{
try
{
Process(_cancellationTokenSource.Token).Wait();
MessageBox.Show("Process complete");
}
catch (Exception e)
{
MessageBox.Show("Process cancelled (or faulted): " + e.Message);
}
}
// handle the completion asynchronously using ContinueWith
Task RunProcessAsync()
{
return Process(_cancellationTokenSource.Token).ContinueWith((task) =>
{
// check task.Status here
MessageBox.Show("Process complete (or cancelled, or faulted)");
}, TaskScheduler.FromCurrentSynchronizationContext());
}
// handle the completion asynchronously with async/await
async Task RunProcessAsync()
{
try
{
await Process(_cancellationTokenSource.Token);
MessageBox.Show("Process complete");
}
catch (Exception e)
{
MessageBox.Show("Process cancelled (or faulted): " + e.Message);
}
}
After doing some research I found this link.
The code now looks like this (see the usage of CancellationTokenSource.CreateLinkedTokenSource in the code below):
void Process()
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
var cancellationToken = _cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancellationToken.ThrowIfCancellationRequested();
//do some work here
MainTaskRoutine(cancellationToken);
}, cancellationToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
void MainTaskRoutine(CancellationToken cancellationToken)
{
//for the sake of simplicity I am taking 1, in original implementation it is more than 1
//this method shows that a nested task is created
using (var cancellationTokenSource = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken))
{
var cancelToken = cancellationTokenSource.Token;
Task[] tArray = new Task[1];
tArray[0] = Task.Factory.StartNew(() =>
{
cancelToken.ThrowIfCancellationRequested();
//do some work here
}, cancelToken);
try
{
Task.WaitAll(tArray);
}
catch (Exception ex)
{
//do error handling here
}
}
}
Note: I haven't used it yet, but I will let you know once it is done :)