I'm switching from Task.Run to Hangfire. In .NET 4.5+, Task.Run can return a Task<TResult>, which allows me to run tasks that return something other than void. I can then wait for and get the result of my task by accessing the MyReturnedTask.Result property.
Example of my old code:
public void MyMainCode()
{
List<string> listStr = new List<string>();
listStr.Add("Bob");
listStr.Add("Kate");
listStr.Add("Yaz");
List<Task<string>> listTasks = new List<Task<string>>();
foreach(string str in listStr)
{
Task<string> returnedTask = Task.Run(() => GetMyString(str));
listTasks.Add(returnedTask);
}
foreach(Task<string> task in listTasks)
{
// using task.Result will cause the code to wait for the task if not yet finished.
// Alternatively, you can use Task.WaitAll(listTasks.ToArray()) to wait for all tasks in the list to finish.
MyTextBox.Text += task.Result + Environment.NewLine;
}
}
private string GetMyString(string str)
{
// long execution in order to calculate the returned string
return str + "_finished";
}
As far as I can see from the Quick Start page of Hangfire, your main entry point, which is BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget"));
perfectly runs the code as a background job but apparently doesn't support jobs that have a return value (like the code I presented above). Is that right? If not, how can I tweak my code in order to use Hangfire?
P.S. I already looked at HostingEnvironment.QueueBackgroundWorkItem (here) but it apparently lacks the same functionality (background jobs have to be void)
EDIT
As @Dejan figured out, the main reason I want to switch to Hangfire is the same reason the .NET folks added QueueBackgroundWorkItem in .NET 4.5.2. And that reason is well described in Scott Hanselman's great article about Background Tasks in ASP.NET. So I'm gonna quote from the article:
QBWI (QueueBackgroundWorkItem) schedules a task which can run in the background, independent of
any request. This differs from a normal ThreadPool work item in that
ASP.NET automatically keeps track of how many work items registered
through this API are currently running, and the ASP.NET runtime will
try to delay AppDomain shutdown until these work items have finished
executing.
One simple solution would be to poll the monitoring API until the job is finished like this:
public static Task Enqueue(Expression<Action> methodCall)
{
string jobId = BackgroundJob.Enqueue(methodCall);
Task checkJobState = Task.Factory.StartNew(() =>
{
while (true)
{
IMonitoringApi monitoringApi = JobStorage.Current.GetMonitoringApi();
JobDetailsDto jobDetails = monitoringApi.JobDetails(jobId);
string currentState = jobDetails.History[0].StateName;
if (currentState != "Enqueued" && currentState != "Processing")
{
break;
}
Thread.Sleep(100); // adjust to a coarse enough value for your scenario
}
});
return checkJobState;
}
Attention: Of course, in a Web-hosted scenario you cannot rely on continuation of the task (task.ContinueWith()) to do more things after the job has finished as the AppDomain might be shut down - for the same reasons you probably want to use Hangfire in the first place.
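For completeness, a hedged usage sketch of the helper above; the enqueued job is only a placeholder:
// Enqueue the background job, then block (or await) until Hangfire reports
// that the job has left the Enqueued/Processing states.
Task jobFinished = Enqueue(() => Console.WriteLine("Fire-and-forget"));
jobFinished.Wait(); // or: await jobFinished;
Note that this only tells you when the job has finished; if you need an actual return value, the job itself still has to persist its result somewhere (a database row, a cache entry, an email) that your code can read afterwards.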
In an application I am experiencing odd behavior due to wrong/unexpected values of AsyncLocal: although I suppressed the flow of the execution context, the AsyncLocal.Value property is sometimes not reset within the execution scope of a newly spawned Task.
Below I created a minimal reproducible sample which demonstrates the problem:
private static readonly AsyncLocal<object> AsyncLocal = new AsyncLocal<object>();
[TestMethod]
public void Test()
{
Trace.WriteLine(System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription);
var mainTask = Task.Factory.StartNew(() =>
{
AsyncLocal.Value = "1";
Task anotherTask;
using (ExecutionContext.SuppressFlow())
{
anotherTask = Task.Run(() =>
{
Trace.WriteLine(AsyncLocal.Value); // "1" <- ???
Assert.IsNull(AsyncLocal.Value); // BOOM - FAILS
AsyncLocal.Value = "2";
});
}
Task.WaitAll(anotherTask);
});
mainTask.Wait(500000, CancellationToken.None);
}
In nine out of ten runs (on my pc) the outcome of the Test-method is:
.NET 6.0.2
"1"
-> The test fails
As you can see, the test fails because within the action executed by Task.Run, the previous value is still present in AsyncLocal.Value (Message: 1).
My concrete questions are:
Why does this happen?
I suspect this happens because Task.Run may use the current thread to execute the workload. In that case, I assume the lack of async/await operators does not force the creation of a new/separate ExecutionContext for the action. Like Stephen Cleary said, "from the logical call context’s perspective, all synchronous invocations are “collapsed” - they’re actually part of the context of the closest async method further up the call stack". If that’s the case, I do understand why the same context is used within the action.
Is this the correct explanation for this behavior? In addition, why does it work flawlessly sometimes (about 1 run out of 10 on my machine)?
How can I fix this?
Assuming that my theory above is true it should be enough to forcefully introduce a new async "layer", like below:
private static readonly AsyncLocal<object> AsyncLocal = new AsyncLocal<object>();
[TestMethod]
public void Test()
{
Trace.WriteLine(System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription);
var mainTask = Task.Factory.StartNew(() =>
{
AsyncLocal.Value = "1";
Task anotherTask;
using (ExecutionContext.SuppressFlow())
{
var wrapper = () =>
{
Trace.WriteLine(AsyncLocal.Value);
Assert.IsNull(AsyncLocal.Value);
AsyncLocal.Value = "2";
return Task.CompletedTask;
};
anotherTask = Task.Run(async () => await wrapper());
}
Task.WaitAll(anotherTask);
});
mainTask.Wait(500000, CancellationToken.None);
}
This seems to fix the problem (it consistently works on my machine), but I want to be sure that this is a correct fix for this problem.
Many thanks in advance
Why does this happen? I suspect this happens because Task.Run may use the current thread to execute the workload.
I suspect that it happens because Task.WaitAll will use the current thread to execute the task inline.
Specifically, Task.WaitAll calls Task.WaitAllCore, which will attempt to run it inline by calling Task.WrappedTryRunInline. I'm going to assume the default task scheduler is used throughout. In that case, this will invoke TaskScheduler.TryRunInline, which will return false if the delegate is already invoked. So, if the task has already started running on a thread pool thread, this will return back to WaitAllCore, which will just do a normal wait, and your code will work as expected (1 out of 10).
If a thread pool thread hasn't picked it up yet (9 out of 10), then TaskScheduler.TryRunInline will call TaskScheduler.TryExecuteTaskInline, the default implementation of which will call Task.ExecuteEntryUnsafe, which calls Task.ExecuteWithThreadLocal. Task.ExecuteWithThreadLocal has logic for applying an ExecutionContext if one was captured. Assuming none was captured, the task's delegate is just invoked directly.
So, it seems like each step is behaving logically. Technically, what ExecutionContext.SuppressFlow means is "don't capture the ExecutionContext", and that is what is happening. It doesn't mean "clear the ExecutionContext". Sometimes the task is run on a thread pool thread (without the captured ExecutionContext), and WaitAll will just wait for it to complete. Other times the task will be executed inline by WaitAll instead of a thread pool thread, and in that case the ExecutionContext is not cleared (and technically isn't captured, either).
You can test this theory by capturing the current thread id within your wrapper and comparing it to the thread id doing the Task.WaitAll. I expect that they will be the same thread for the runs where the async local value is (unexpectedly) inherited, and they will be different threads for the runs where the async local value works as expected.
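For example, a minimal sketch of that check, keeping the rest of the repro unchanged (Environment.CurrentManagedThreadId is only used here to label the threads):
anotherTask = Task.Run(() =>
{
    // Runs either on a thread pool thread or inline on the waiting thread.
    Trace.WriteLine($"Task thread: {Environment.CurrentManagedThreadId}, AsyncLocal: {AsyncLocal.Value}");
});
Trace.WriteLine($"Waiting thread: {Environment.CurrentManagedThreadId}");
Task.WaitAll(anotherTask);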
If you can, I'd first consider whether it's possible to replace the thread-specific caches with a single shared cache. The app likely predates useful types such as ConcurrentDictionary.
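For instance, a minimal sketch of a single shared cache, assuming string keys and a caller-supplied value factory (both are placeholders for whatever the app actually caches):
using System;
using System.Collections.Concurrent;

public static class SharedCache
{
    // One process-wide, thread-safe cache instead of one cache per thread or per async flow.
    private static readonly ConcurrentDictionary<string, object> Cache = new();

    public static object GetOrCreate(string key, Func<string, object> valueFactory) =>
        Cache.GetOrAdd(key, valueFactory);
}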
If it isn't possible to use a singleton cache, then you can use a stack of async local values. Stacking async local values is a common pattern. I prefer wrapping the stack logic into a separate type (AsyncLocalValue in the code below):
public sealed class AsyncLocalValue
{
private static readonly AsyncLocal<ImmutableStack<object>> _asyncLocal = new();
public object Value => _asyncLocal.Value?.Peek();
public IDisposable PushValue(object value)
{
var originalValue = _asyncLocal.Value;
var newValue = (originalValue ?? ImmutableStack<object>.Empty).Push(value);
_asyncLocal.Value = newValue;
return Disposable.Create(() => _asyncLocal.Value = originalValue);
}
}
private static AsyncLocalValue AsyncLocal = new();
[TestMethod]
public void Test()
{
Console.WriteLine(System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription);
var mainTask = Task.Factory.StartNew(() =>
{
Task anotherTask;
using (AsyncLocal.PushValue("1"))
{
using (AsyncLocal.PushValue(null))
{
anotherTask = Task.Run(() =>
{
Console.WriteLine("Observed: " + AsyncLocal.Value);
using (AsyncLocal.PushValue("2"))
{
}
});
}
}
Task.WaitAll(anotherTask);
});
mainTask.Wait(500000, CancellationToken.None);
}
This code sample uses Disposable.Create from my Nito.Disposables library.
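If you prefer not to take that dependency, a minimal stand-in could look like this (my sketch, not the library's actual implementation):
public static class Disposable
{
    public static IDisposable Create(Action dispose) => new AnonymousDisposable(dispose);

    private sealed class AnonymousDisposable : IDisposable
    {
        private Action _dispose;
        public AnonymousDisposable(Action dispose) => _dispose = dispose;
        // Runs the callback at most once, even if Dispose is called several times.
        public void Dispose() => System.Threading.Interlocked.Exchange(ref _dispose, null)?.Invoke();
    }
}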
I've been working on a project and saw the code below. I am new to the async/await world. As far as I know, only a single task is being performed in the method, so why is it decorated with async/await? What benefits do I get by using async/await, and what is the drawback if I remove async/await, i.e. make it synchronous? I am a little bit confused, so any help will be appreciated.
[Route("UpdatePersonalInformation")]
public async Task<DataTransferObject<bool>> UpdatePersonalInformation([FromBody] UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
try
{
model.UserId = UserIdentity;
transfer = await _userService.UpdateUserPersonalInformation(model);
}
catch (Exception ex)
{
transfer.TransactionStatusCode = 500;
transfer.ErrorMessage = ex.Message;
}
return transfer;
}
Service code
public async Task<DataTransferObject<bool>> UpdateUserPersonalInformation(UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
await Task.Run(() =>
{
try
{
var data = _userProfileRepository.FindBy(x => x.AspNetUserId == model.UserId)?.FirstOrDefault();
if (data != null)
{
var userProfile = mapper.Map<UserProfile>(model);
userProfile.UpdatedBy = model.UserId;
userProfile.UpdateOn = DateTime.UtcNow;
userProfile.CreatedBy = data.CreatedBy;
userProfile.CreatedOn = data.CreatedOn;
userProfile.Id = data.Id;
userProfile.TypeId = data.TypeId;
userProfile.AspNetUserId = data.AspNetUserId;
userProfile.ProfileStatus = data.ProfileStatus;
userProfile.MemberSince = DateTime.UtcNow;
if(userProfile.DOB==DateTime.MinValue)
{
userProfile.DOB = null;
}
_userProfileRepository.Update(userProfile);
transfer.Value = true;
}
else
{
transfer.Value = false;
transfer.Message = "Invalid User";
}
}
catch (Exception ex)
{
transfer.ErrorMessage = ex.Message;
}
});
return transfer;
}
What benefits I am getting by using async/await
Normally, on ASP.NET, the benefit of async is that your server is more scalable - i.e., can handle more requests than it otherwise could. The "Synchronous vs. Asynchronous Request Handling" section of this article goes into more detail, but the short explanation is that async/await frees up a thread so that it can handle other requests while the asynchronous work is being done.
However, in this specific case, that's not actually what's going on. Using async/await in ASP.NET is good and proper, but using Task.Run on ASP.NET is not. Because what happens with Task.Run is that another thread is used to run the delegate within UpdateUserPersonalInformation. So this isn't asynchronous; it's just synchronous code running on a background thread. UpdateUserPersonalInformation will take another thread pool thread to run its synchronous repository call and then yield the request thread by using await. So it's just doing a thread switch for no benefit at all.
A proper implementation would make the repository asynchronous first, and then UpdateUserPersonalInformation can be implemented without Task.Run at all:
public async Task<DataTransferObject<bool>> UpdateUserPersonalInformation(UserPersonalInformationRequestModel model)
{
DataTransferObject<bool> transfer = new DataTransferObject<bool>();
try
{
var data = _userProfileRepository.FindBy(x => x.AspNetUserId == model.UserId)?.FirstOrDefault();
if (data != null)
{
...
await _userProfileRepository.UpdateAsync(userProfile);
transfer.Value = true;
}
else
{
transfer.Value = false;
transfer.Message = "Invalid User";
}
}
catch (Exception ex)
{
transfer.ErrorMessage = ex.Message;
}
return transfer;
}
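The rewrite above assumes the repository exposes an UpdateAsync method. What that looks like depends entirely on your data layer; as a rough sketch, with Entity Framework Core it might be something like this (_dbContext is an assumed DbContext field with a UserProfiles set):
public async Task UpdateAsync(UserProfile userProfile)
{
    _dbContext.UserProfiles.Update(userProfile);
    // SaveChangesAsync is true asynchronous I/O, so no thread is blocked while the database works.
    await _dbContext.SaveChangesAsync();
}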
The await keyword indicates that the execution of the current method is halted until the Task being awaited has completed. This means that if you remove the await, the method will continue execution and therefore immediately return the transfer object, even if the UpdateUserPersonalInformation Task is not finished.
Take a look at this example:
private void showInfo()
{
Task.Delay(1000);
MessageBox.Show("Info");
}
private async void showInfoAsync()
{
await Task.Delay(1000);
MessageBox.Show("Info");
}
In the first method, the MessageBox is immediately displayed, since the newly created Task (which only waits a specified amount of time) is not awaited. However, the second method specifies the await keyword, therefore the MessageBox is displayed only after the Task is finished (in the example, after 1000ms elapsed).
But in both cases the delay Task runs asynchronously in the background, so the main thread (for example the UI) will not freeze.
The async-await mechanism is mainly used:
when you have some long-running calculation that takes time and you want it to run in the background;
in UI code, when you don't want the main thread to get stuck, which would hurt UI responsiveness.
you can read more here:
https://learn.microsoft.com/en-us/dotnet/csharp/async
Timeouts
One of the main uses of async and await is preventing timeouts by waiting for long operations to complete. However, there is another, less known but very powerful one.
If you don't await a long operation, you get a result back, such as null, even though the actual request has not completed yet.
Cancellation Tokens
Async requests have a default parameter you can add:
public async Task<DataTransferObject<bool>> UpdatePersonalInformation(
[FromBody] UserPersonalInformationRequestModel model,
CancellationToken cancellationToken){..}
A CancellationToken allows the request to stop when the user changes pages or interrupts the connection. A good example of this is a user who has a search box, and every time a letter is typed you filter and search results from your API. Now imagine the user types a very long string of, say, 15 characters. That means 15 requests are sent and 15 requests need to be completed. Even if the front end is not awaiting the first 14 results, the API is still processing all 15 requests.
A cancellation token simply tells the API to stop working on the requests whose results are no longer needed.
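A hedged sketch of flowing the token down from the controller above; the *Async suffix on the service method is my assumption, not necessarily your actual signature:
[Route("UpdatePersonalInformation")]
public async Task<DataTransferObject<bool>> UpdatePersonalInformation(
    [FromBody] UserPersonalInformationRequestModel model,
    CancellationToken cancellationToken)
{
    // Pass the token through every async layer so the work can stop when the client disconnects.
    return await _userService.UpdateUserPersonalInformationAsync(model, cancellationToken);
}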
I would like to chime in on this because most answers, although good, do not point to a definite rule for when to use it and when not.
From my experience, if you are developing anything with a front end, add async/await to your methods when you expect output from other threads to feed into your UI. This is the best strategy for handling multithreaded output, and Microsoft should be commended for introducing it when they did. Without async/await you would have to add more code to marshal thread output to the UI (e.g. events, event handlers, delegates, event subscriptions, marshallers).
You don't need it anywhere else, except when used strategically for slow peripherals.
I'd like to know how I can check the status of an asynchronous task in C#.
I have a save method to save users, and I'd like to run a background task to update them after the save.
I'm on Framework 4.0; here is my code to begin the task:
System.Threading.Tasks.Task task = null;
task = System.Threading.Tasks.Task.Factory.StartNew(() =>
{
beginTask();
});
My problem is that the task takes some time to finish (nearly 7 minutes), so if someone does several user saves, the task is run several times. I'd like to check, before running the task, whether beginTask() is already running, to avoid having a lot of background tasks running.
Thanks
Maybe try doing it this way.
Initialise your task object:
Task task = new Task(beginTask);
And add new method to run it, for example:
public void StartTask(Task t)
{
if (t.Status == TaskStatus.Running)
return;
else
t.Start();
}
Of course, you can add more conditions depending on the task's state.
I think I found a better solution. Check if it suits you.
public class TaskDemo
{
private static AutoResetEvent autoReset = new AutoResetEvent(true);
Action beginTask = () =>
{
Console.WriteLine("Method start");
Thread.Sleep(2000);
};
public void RunTask()
{
Task myTask = Task.Run(() =>
{
autoReset.WaitOne();
beginTask();
}).ContinueWith(t => autoReset.Set());
}
}
And simply console app test:
static void Main(string[] args)
{
TaskDemo td = new TaskDemo();
// Simulation multiple requests
Thread.Sleep(1000);
td.RunTask();
Thread.Sleep(1000);
td.RunTask();
Thread.Sleep(1000);
td.RunTask();
Thread.Sleep(1000);
td.RunTask();
Thread.Sleep(1000);
td.RunTask();
Thread.Sleep(1000);
td.RunTask();
}
The key is to use an AutoResetEvent to signal the task state, and
Task myTask = Task.Run(() =>
{
autoReset.WaitOne();
beginTask();
}).ContinueWith(t => autoReset.Set());
to change its state during the run and after it finishes (ContinueWith(t => autoReset.Set())).
Each Task object has a Task.Status property of type TaskStatus which can be queried if you already have the task object. It will tell you whether the task is running, finished, cancelled, etc.
If you want to check if ANY task is running that particular chunk of functionality, that may be more difficult. One possible suggestion would be keeping a Task variable globally accessible for that reason, so that any time the code tried to run that functionality it did the following (see the sketch after this list):
Check the global to see if one is running.
If not, create a new Task to run it and assign it to the global variable.
Else, perform whatever handling you wanted to do if it was already running (wait for it to finish, perhaps?).
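A minimal sketch of that idea (the field and method names are placeholders):
private static Task _updateTask;                       // globally accessible task
private static readonly object _gate = new object();

public static void StartUpdateIfNotRunning(Action beginTask)
{
    lock (_gate)
    {
        // Only start a new task when no previous one is still running.
        if (_updateTask == null || _updateTask.IsCompleted)
        {
            _updateTask = Task.Factory.StartNew(beginTask);
        }
    }
}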
It sounds like you're a) misunderstanding tasks a bit and b) need to take a look at your solution design.
A task is an object representing a piece of work to be done. A task is NOT the BeginTask method itself. The BeginTask method is more or less just a set of instructions to carry out. It doesn't have any state. Individual Tasks which implement those instructions do have a state which can be queried.
If you want to make it so only one Task could be run per user you'd just have to somewhere globally store a collection of Tasks per user (such as a Dictionary with the Key being the user).
This would ideally be created and stored in either some sort of governing class that contains this section of application functionality or in the outer program if it is one.
Regarding your comment "i need to avoid to create a new task everytime i save my users": for this you're going to have to adapt that particular piece of code to check the stored status of any running Tasks. So in my idea above, you'd alter that piece of functionality to check whether a Task already exists in the Dictionary for the user you're saving and, if it does, check its status.
If you're not sure, though, please keep asking questions in the comments. Perhaps if you gave more information on how the system this sits in is structured, I'd be better able to assist.
Hope that helps.
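A minimal sketch of that per-user idea, assuming a string user id as the key (all names here are placeholders):
// Requires System.Collections.Concurrent and System.Threading.Tasks.
private static readonly ConcurrentDictionary<string, Task> TasksByUser =
    new ConcurrentDictionary<string, Task>();

public static Task RunForUser(string userId, Action beginTask)
{
    // Reuse the running task for this user, or start a new one if the previous has finished.
    return TasksByUser.AddOrUpdate(
        userId,
        _ => Task.Factory.StartNew(beginTask),
        (_, existing) => existing.IsCompleted ? Task.Factory.StartNew(beginTask) : existing);
}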
You can use the Task.IsCompleted or Task.Status property.
By the way, it is also worth investigating the Task.ContinueWith method.
I think using Task.ContinueWith is a better way for multitask solutions.
See the example below, which explains my suggestion using Task.ContinueWith:
// 'task' is assumed to be a field that persists between calls; otherwise the null check below is always true.
System.Threading.Tasks.Task task = null;
if (task == null)
{
task = Task.Factory.StartNew(() =>
{
beginTask();
});
return;
}
if (task.Status == TaskStatus.Running)
{
task.ContinueWith((x) =>
{
beginTask();
});
}
else
{
task = Task.Factory.StartNew(() =>
{
beginTask();
});
}
This implementation can address potential problems such as:
how to keep the previous Task;
when the system should launch the previous Tasks.
I have a core task retrieving some core data and multiple other sub-tasks fetching extra data. I would like to run an enrichment process on the core data as soon as the core task and any of the sub-tasks are ready. Would you know how to do so?
I thought about something like this, but I'm not sure it's doing what I want:
// Starting the tasks
var coreDataTask = new Task(...);
var extraDataTask1 = new Task(...);
var extraDataTask2 = new Task(...);
coreDataTask.Start();
extraDataTask1.Start();
extraDataTask2.Start();
// Enriching the results
Task.WaitAll(coreDataTask, extraDataTask1);
EnrichCore(coreDataTask.Result, extraDataTask1.Result);
Task.WaitAll(coreDataTask, extraDataTask2);
EnrichCore(coreDataTask.Result, extraDataTask2.Result);
Also, given the enrichment happens on the same core object, I guess I would need to lock it somewhere?
Thanks in advance!
Here is another idea taking advantage of Task.WhenAny() to detect when tasks are completing.
For this minimal example, I just assume that the core data and extra data are strings. But you can adjust for whatever your type is.
Also, I am not actually doing any processing. You would have to plug in your processing.
Also, an assumption I am making (which is not really clear from the question) is that you are mostly trying to parallelize the gathering of your data because that's the expensive part, and that the enriching part is actually pretty fast. Based on that assumption, you'll notice that the tasks run in parallel to gather the core data and extra data. But as the data becomes available, the core data is enriched synchronously to avoid having to complicate the code with locking.
If you copy-paste the code below, you should be able to run it as is to see how it works.
public static void Main(string[] args)
{
StartWork().Wait();
}
private async static Task StartWork()
{
// start core and extra tasks
Task<string> coreDataTask = Task.Run(() => "core data" /* do something more complicated here */);
List<Task<string>> extraDataTaskList = new List<Task<string>>();
for (int i = 0; i < 10; i++)
{
int x = i;
extraDataTaskList.Add(Task.Run(() => "extra data " + x /* do something more complicated here */));
}
// wait for core data to be ready first.
StringBuilder coreData = new StringBuilder(await coreDataTask);
// enrich core as the extra data tasks complete.
while (extraDataTaskList.Count != 0)
{
Task<string> completedExtraDataTask = await Task.WhenAny(extraDataTaskList);
extraDataTaskList.Remove(completedExtraDataTask);
EnrichCore(coreData, await completedExtraDataTask);
}
Console.WriteLine(coreData.ToString());
}
private static void EnrichCore(StringBuilder coreData, string extraData)
{
coreData.Append(" enriched with ").Append(extraData);
}
EDIT: .NET 4.0 version
Here is how I would change it for .NET 4.0, while still retaining the same overall design:
Task.Run() becomes Task.Factory.StartNew()
Instead of doing await on tasks, I call Result, which is a blocking call that waits for the task to complete.
Use Task.WaitAny instead of Task.WhenAny, which is also a blocking call.
The design remains very similar. The one big difference between both versions of the code is that in the .NET 4.5 version, whenever there is an await, the current thread is free to do other work. In the .NET 4.0 version, whenever you call Task.Result or Task.WaitAny, the current thread blocks until the Task completes. It's possible that this difference is not really important to you. But if it is, just make sure to wrap and run the whole block of code in a background thread or task to free up your main thread.
The other difference is with the exception handling. With the .NET 4.5 version, if any of your tasks fails with an unhandled exception, the exception is automatically unwrapped and propagated in a very transparent manner. With the .NET 4.0 version, you'll be getting AggregateExceptions that you will have to unwrap and handle yourself. If this is a concern, make sure you test this beforehand so you know what to expect.
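For example, a small sketch of unwrapping an AggregateException around one of the blocking calls:
try
{
    string coreData = coreDataTask.Result; // blocks; task failures surface as AggregateException
}
catch (AggregateException ex)
{
    foreach (Exception inner in ex.Flatten().InnerExceptions)
    {
        Console.WriteLine(inner.Message); // handle or log each underlying failure
    }
}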
Personally, I try to avoid Task.ContinueWith whenever I can. It tends to make the code really ugly and hard to read.
public static void Main(string[] args)
{
// start core and extra tasks
Task<string> coreDataTask = Task.Factory.StartNew(() => "core data" /* do something more complicated here */);
List<Task<string>> extraDataTaskList = new List<Task<string>>();
for (int i = 0; i < 10; i++)
{
int x = i;
extraDataTaskList.Add(Task.Factory.StartNew(() => "extra data " + x /* do something more complicated here */));
}
// wait for core data to be ready first.
StringBuilder coreData = new StringBuilder(coreDataTask.Result);
// enrich core as the extra data tasks complete.
while (extraDataTaskList.Count != 0)
{
int indexOfCompletedTask = Task.WaitAny(extraDataTaskList.ToArray());
Task<string> completedExtraDataTask = extraDataTaskList[indexOfCompletedTask];
extraDataTaskList.Remove(completedExtraDataTask);
EnrichCore(coreData, completedExtraDataTask.Result);
}
Console.WriteLine(coreData.ToString());
}
private static void EnrichCore(StringBuilder coreData, string extraData)
{
coreData.Append(" enriched with ").Append(extraData);
}
I think what you probably want is "ContinueWith" (Documentation here : https://msdn.microsoft.com/en-us/library/dd270696(v=vs.110).aspx). That is as long as your enriching doesn't need to be done in a specific order.
The code would look something like the following :
var coreTask = new Task<object>(() => { return null; });
var enrichTask1 = new Task<object>(() => { return null; });
var enrichTask2 = new Task<object>(() => { return null; });
coreTask.Start();
coreTask.Wait();
//Create your continuation tasks here with the data you want.
var enrichContinuation1 = enrichTask1.ContinueWith(task => { /*Do enriching here with task.Result*/ });
//Start all enricher tasks here.
enrichTask1.Start();
//Wait for the continuations (not just the antecedent tasks) to complete here.
Task.WaitAll(enrichContinuation1);
You still need to run your CoreTask first as that's required to finish before all enriching tasks. But from there you can start all tasks, and tell them when they are done to "ContinueWith" doing something else.
You should also take a quick look in the "Enricher Pattern" that may be able to help you in general with what you want to achieve (Outside of threading). Examples like here : http://www.enterpriseintegrationpatterns.com/DataEnricher.html
I need some suggestions on the following situation. We have two APIs, let's say API1 and API2. API1 calls a method from API2. Sometimes API1 is unable to contact API2, so API1 tries up to three times. After three attempts, if API1 is still unable to contact API2, we decided to add a 1-minute delay and try again. API1 should not depend on the result of this delayed process; it should return a response to the user like 'please check email for result'. For this, we tried:
TPL(Task Parallel Library)
While using TPL, API1 waits for the task to complete and only then returns the result.
Threading
We tried the thread pool, but it is old-fashioned.
.NET Framework 4.0
Here is the code where API1 uses TPL:
public string TestTPL()
{
string str = string.Empty;
int i = 1;
ServiceReference1.Service1Client obj = new ServiceReference1.Service1Client();
while (i <= 3)
{
//call a method resides in API2
str = obj.API2Method();
if (string.IsNullOrEmpty(str))
i++;
else
break;
}
if (string.IsNullOrEmpty(str))
Parallel.Invoke(() => DoSomeWork());
return "Hey, I came";
}
public void DoSomeWork()
{
//wait for 1 min
System.Threading.Thread.Sleep(60000);
ServiceReference1.Service1Client obj = new ServiceReference1.Service1Client();
//call a method resides in API2
string str = obj.API2Method();
//send mail to the user
}
To run a child task independently of the parent method, we can use the Task class.
public string TestTPL() //parent method
{
Task task = new Task(DoSomeWork); //child task
task.Start();
return "Hey, I came";
}
A task started this way runs on a thread pool thread by default; it only gets its own dedicated thread if you create it with TaskCreationOptions.LongRunning.
In the case of Parallel.Invoke, the parent method waits until the child task has finished.
Parallel.Invoke() is a synchronous call, which means it will not return until all of its children are complete. In this case it will wait until DoSomeWork() is done.
Instead of
if (string.IsNullOrEmpty(str))
Parallel.Invoke(() => DoSomeWork());
Try something like
if (string.IsNullOrEmpty(str))
Task.Run(() => DoSomeWork());
That will return immediately and execute DoSomeWork() on a thread from the thread pool.
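Note that Task.Run was only added in .NET 4.5; if you are limited to .NET 4.0 as the question states, the closest equivalent is Task.Factory.StartNew:
if (string.IsNullOrEmpty(str))
    Task.Factory.StartNew(() => DoSomeWork()); // fire-and-forget on a thread pool thread, available in .NET 4.0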
The thread pool is old, but it's still a good tool. The TPL just gives you better semantics to express what work is being done at a higher level, rather than how it is being done under the covers.
Task.Delay(60000).ContinueWith(t =>
{
    // run the delayed retry work here once the minute has elapsed
});