Slowdown when using async calls with lock - C#

I was noticing an initial slowdown in my process and, after taking multiple hang dumps, I was able to isolate the issue and reproduce the scenario using the following code. I am using a library that takes locks internally and eventually calls the user-side implementation of certain methods. These methods make async calls using HttpClient, and those async calls are made from within the locks inside the library.
Now, my theory as to what is happening (do correct me if I am wrong):
The tasks that get spun up acquire the lock and hold on to their thread-pool threads fast enough that the first PingAsync call has to wait for the default task scheduler to spin up a new thread for it to run on, which takes about 0.5 s under the default .NET thread-injection algorithm. This is why I think I notice delays once the total task count exceeds 32, and why the delay increases linearly with the total task count.
The workarounds:
Increase the min threads count, which I think is treating the symptom and not the actual problem.
Another way is to limit concurrency to control the number of tasks fired. But these are tasks spun up by a web server for incoming HTTP requests, and typically we will not have control over that (or will we?)
I understand that combining async and non-async code is bad design and that using a semaphore's async waits would be a better way to go. Assuming I do not have control over this library, how does one go about mitigating this problem?
const int ParallelCount = 16;
const int TotalTasks = 33;
static object _lockObj = new object();
static HttpClient _httpClient = new HttpClient();
static int count = 0;
static void Main(string[] args)
{
ThreadPool.GetMinThreads(out int workerThreads, out int ioThreads);
Console.WriteLine($"Min threads count. Worker: {workerThreads}. IoThreads: {ioThreads}");
ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
Console.WriteLine($"Max threads count. Worker: {workerThreads}. IoThreads: {ioThreads}");
//var done = ThreadPool.SetMaxThreads(1024, 1000);
//ThreadPool.GetMaxThreads(out workerThreads, out ioThreads);
//Console.WriteLine($"Set Max Threads success? {done}.");
//Console.WriteLine($"Max threads count. Worker: {workerThreads}. IoThreads: {ioThreads}");
//var done = ThreadPool.SetMinThreads(1024, 1000);
//ThreadPool.GetMinThreads(out workerThreads, out ioThreads);
//Console.WriteLine($"Set Min Threads success? {done}.");
//Console.WriteLine($"Min threads count. Worker: {workerThreads}. IoThreads: {ioThreads}");
var startTime = DateTime.UtcNow;
var tasks = new List<Task>();
for (int i = 0; i < TotalTasks; i++)
{
tasks.Add(Task.Run(() => LibraryMethod()));
//while (tasks.Count > ParallelCount)
//{
// var task = Task.WhenAny(tasks.ToArray()).GetAwaiter().GetResult();
// if (task.IsFaulted)
// {
// throw task.Exception;
// }
// tasks.Remove(task);
//}
}
Task.WaitAll(tasks.ToArray());
//while (tasks.Count > 0)
//{
// var task = Task.WhenAny(tasks.ToArray()).GetAwaiter().GetResult();
// if (task.IsFaulted)
// {
// throw task.Exception;
// }
// tasks.Remove(task);
// Console.Write(".");
//}
Console.Write($"\nDone in {(DateTime.UtcNow-startTime).TotalMilliseconds}");
Console.ReadLine();
}
Assuming this is the part where library methods are called,
public static void LibraryMethod()
{
lock (_lockObj)
{
SimpleNonAsync();
}
}
Eventually, the user implementation of this method gets called, which blocks on an async call.
public static void SimpleNonAsync()
{
//PingAsync().Result;
//PingAsync().ConfigureAwait(false).GetAwaiter().GetResult();
PingAsync().Wait();
}
private static async Task PingAsync()
{
Console.Write($"{Interlocked.Increment(ref count)}.");
await _httpClient.SendAsync(new HttpRequestMessage
{
RequestUri = new Uri("http://127.0.0.1"),
Method = HttpMethod.Get
});
}

These async calls are made from within these locks inside the library.
This is a design flaw. No one should call arbitrary code while under a lock.
That said, the locks have nothing to do with the problem you're seeing.
I understand that combining async and non-async code is bad design and that using a semaphore's async waits would be a better way to go. Assuming I do not have control over this library, how does one go about mitigating this problem?
The problem is that the library is forcing your code to be synchronous. This means one thread is being blocked for every download; there's no way around that as long as the library's callbacks are synchronous.
Increase the min threads count, which I think is treating the symptom and not the actual problem.
If you can't modify the library, then you must use one thread per request, and this becomes a viable workaround. You have to treat the symptom because you can't fix the problem (i.e., the library).
Another way is to limit concurrency to control the number of tasks fired. But these are tasks spun up by a web server for incoming HTTP requests, and typically we will not have control over that (or will we?)
No; the tasks causing problems are the ones you're spinning up yourself using Task.Run. The tasks on the server are completely independent; your code can't influence or even detect them.
If you want higher concurrency without waiting for thread injection, then you'll need to increase min threads, and you'll also probably need to increase ServicePointManager.DefaultConnectionLimit. You can then continue to use Task.Run, or (as I would prefer) Parallel or Parallel LINQ to do parallel processing. One nice aspect of Parallel / Parallel LINQ is that it has built-in support for throttling, if that is also desired.
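For illustration, a rough sketch of that workaround against the repro code above. The thread and connection counts are placeholders to tune for the workload, not recommendations:
// Raise min threads so the pool doesn't have to inject threads at ~0.5 s each,
// and raise the connection limit so the HTTP requests can actually run concurrently.
ThreadPool.SetMinThreads(128, 128);
ServicePointManager.DefaultConnectionLimit = 128;
// Parallel gives built-in throttling via MaxDegreeOfParallelism,
// so the hand-rolled WhenAny loop from the question isn't needed.
Parallel.For(0, TotalTasks,
    new ParallelOptions { MaxDegreeOfParallelism = ParallelCount },
    _ => LibraryMethod());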

Related

Multi-threading in a foreach loop

I have read a few stackoverflow threads about multi-threading in a foreach loop, but I am not sure I am understanding and using it right.
I have tried multiple scenarios, but I am not seeing much increase in performance.
Here is what I believe runs asynchronous tasks, but synchronously within the loop, using a single thread:
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
foreach (IExchangeAPI selectedApi in selectedApis)
{
if (exchangeSymbols.TryGetValue(selectedApi.Name, out symbol))
{
ticker = await selectedApi.GetTickerAsync(symbol);
}
}
stopWatch.Stop();
Here is what I hoped would run asynchronously (still using a single thread) - I would have expected some speed improvement already here:
List<Task<ExchangeTicker>> exchTkrs = new List<Task<ExchangeTicker>>();
stopWatch.Start();
foreach (IExchangeAPI selectedApi in selectedApis)
{
if (exchangeSymbols.TryGetValue(selectedApi.Name, out symbol))
{
exchTkrs.Add(selectedApi.GetTickerAsync(symbol));
}
}
ExchangeTicker[] retTickers = await Task.WhenAll(exchTkrs);
stopWatch.Stop();
Here is what I would have hoped to run asynchronously across multiple threads:
stopWatch.Start();
Parallel.ForEach(selectedApis, async (IExchangeAPI selectedApi) =>
{
if (exchangeSymbols.TryGetValue(selectedApi.Name, out symbol))
{
ticker = await selectedApi.GetTickerAsync(symbol);
}
});
stopWatch.Stop();
Stopwatch results are interpreted as follows:
Console.WriteLine("Time elapsed (ns): {0}", stopWatch.Elapsed.TotalMilliseconds * 1000000);
Console outputs:
Time elapsed (ns): 4183308100
Time elapsed (ns): 4183946299.9999995
Time elapsed (ns): 4188032599.9999995
Now, the speed improvement looks minuscule. Am I doing something wrong, or is that more or less what I should be expecting? I suppose writing to files would be a better way to check that.
Would you mind also confirming I am interpreting the different use cases correctly?
Finally, using a foreach loop in order to get the ticker from multiple platforms in parallel may not be the best approach. Suggestions on how to improve this would be welcome.
EDIT
Note that I am using the ExchangeSharp code base that you can find here
Here is what the GetTickerAsync() method looks like:
public virtual async Task<ExchangeTicker> GetTickerAsync(string marketSymbol)
{
marketSymbol = NormalizeMarketSymbol(marketSymbol);
return await Cache.CacheMethod(MethodCachePolicy, async () => await OnGetTickerAsync(marketSymbol), nameof(GetTickerAsync), nameof(marketSymbol), marketSymbol);
}
For the Kraken API, you then have:
protected override async Task<ExchangeTicker> OnGetTickerAsync(string marketSymbol)
{
JToken apiTickers = await MakeJsonRequestAsync<JToken>("/0/public/Ticker", null, new Dictionary<string, object> { { "pair", NormalizeMarketSymbol(marketSymbol) } });
JToken ticker = apiTickers[marketSymbol];
return await ConvertToExchangeTickerAsync(marketSymbol, ticker);
}
And the Caching method:
public static async Task<T> CacheMethod<T>(this ICache cache, Dictionary<string, TimeSpan> methodCachePolicy, Func<Task<T>> method, params object?[] arguments) where T : class
{
await new SynchronizationContextRemover();
methodCachePolicy.ThrowIfNull(nameof(methodCachePolicy));
if (arguments.Length % 2 == 0)
{
throw new ArgumentException("Must pass function name and then name and value of each argument");
}
string methodName = (arguments[0] ?? string.Empty).ToStringInvariant();
string cacheKey = methodName;
for (int i = 1; i < arguments.Length;)
{
cacheKey += "|" + (arguments[i++] ?? string.Empty).ToStringInvariant() + "=" + (arguments[i++] ?? string.Empty).ToStringInvariant("(null)");
}
if (methodCachePolicy.TryGetValue(methodName, out TimeSpan cacheTime))
{
return (await cache.Get<T>(cacheKey, async () =>
{
T innerResult = await method();
return new CachedItem<T>(innerResult, CryptoUtility.UtcNow.Add(cacheTime));
})).Value;
}
else
{
return await method();
}
}
At first it should be pointed out that what you are trying to achieve is performance, not asynchrony. And you are trying to achieve it by running multiple operations concurrently, not in parallel. To keep the explanation simple I'll use a simplified version of your code, and I'll assume that each operation is a direct web request, without an intermediate caching layer, and with no dependencies on values existing in dictionaries.
foreach (var symbol in selectedSymbols)
{
var ticker = await selectedApi.GetTickerAsync(symbol);
}
The above code runs the operations sequentially. Each operation starts after the completion of the previous one.
var tasks = new List<Task<ExchangeTicker>>();
foreach (var symbol in selectedSymbols)
{
tasks.Add(selectedApi.GetTickerAsync(symbol));
}
var tickers = await Task.WhenAll(tasks);
The above code runs the operations concurrently. All operations start at once. The total duration is expected to be the duration of the longest running operation.
Parallel.ForEach(selectedSymbols, async symbol =>
{
var ticker = await selectedApi.GetTickerAsync(symbol);
});
The above code runs the operations concurrently, like the previous version with Task.WhenAll. It offers no advantage, while having the huge disadvantage that you no longer have a way to await the operations to complete. The Parallel.ForEach method will return immediately after launching the operations, because the Parallel class doesn't understand async delegates (it does not accept Func<Task> lambdas). Essentially there are a bunch of async void lambdas in there, that are running out of control, and in case of an exception they will bring down the process.
So the correct way to run the operations concurrently is the second way, using a list of tasks and Task.WhenAll. Since you've already measured this method and haven't observed any performance improvements, I am assuming that there is something else that serializes the concurrent operations. It could be something like a SemaphoreSlim hidden somewhere in your code, or some mechanism on the server side that throttles your requests. You'll have to investigate further to find where and why the throttling happens.
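If you want to check whether the requests really overlap, a rough diagnostic sketch using the simplified names from above (the timestamps are only for eyeballing serialization):
var sw = Stopwatch.StartNew();
var tasks = selectedSymbols.Select(async symbol =>
{
    var started = sw.Elapsed;
    var ticker = await selectedApi.GetTickerAsync(symbol);
    Console.WriteLine($"{symbol}: started {started.TotalMilliseconds:F0} ms, finished {sw.Elapsed.TotalMilliseconds:F0} ms");
    return ticker;
}).ToList();
var tickers = await Task.WhenAll(tasks);
// If the "started" times are spread out instead of clustered near zero,
// something upstream (a lock, a SemaphoreSlim, a connection limit) is serializing the requests.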
In general, when you do not see an improvement from multi-threading, it is because your task is not CPU-bound or not large enough to offset the overhead.
In your example, i.e.:
selectedApi.GetTickerAsync(symbol);
This can have 2 reasons:
1: Looking up the ticker is brutally fast and it should not be async to start with, i.e. when you look it up in a dictionary.
2: This is running via an HTTP connection where the runtime is LIMITING THE NUMBER OF CONCURRENT CALLS. Regardless of how many tasks you open, it will not use more than 4 at the same time.
Oh, and 3: you think async is using threads. It is not. It is particularly not the case in code like this:
await selectedApi.GetTickerAsync(symbol);
Where you basically IMMEDIATELY WAIT FOR THE RESULT. There is no multi-threading involved here at all.
foreach (IExchangeAPI selectedApi in selectedApis)
{
    if (exchangeSymbols.TryGetValue(selectedApi.Name, out symbol))
    {
        ticker = await selectedApi.GetTickerAsync(symbol);
    }
}
This is linear, non-threaded code using an async interface so it does not block the current thread while the (likely expensive IO) operation is in progress. It starts one, THEN WAITS FOR THE RESULT. No 2 queries ever start at the same time.
If you want a possible (just as an example) more scalable way:
In the foreach, do not await, but add the task to a list of tasks.
Then await once all the tasks have started, like in a 2nd loop (see the sketch below).
WAY not perfect, but at least the runtime has a CHANCE to do multiple lookups at the same time. Your await makes sure that you essentially run single-threaded code, except async, so your thread goes back into the pool (instead of waiting for results), increasing your scalability - an item possibly not relevant in this case and definitely not measured in your test.
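A minimal sketch of that "start everything, then await in a second loop" idea, reusing the variables from the question's own code:
var pending = new List<Task<ExchangeTicker>>();
foreach (IExchangeAPI selectedApi in selectedApis)
{
    if (exchangeSymbols.TryGetValue(selectedApi.Name, out symbol))
        pending.Add(selectedApi.GetTickerAsync(symbol)); // start, do not await yet
}
foreach (var t in pending)
{
    ticker = await t; // 2nd loop: await tasks that are already running
}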

Is there a way to globally WaitAll() for all tasks created by a process?

I have a process which does logging by calling an external service. Because of the overhead involved (small, but builds up for many logging messages), my process logs asynchronously, in a "fire and forget" kind of way. I don't want to wait for each log message to go through before continuing, and I don't want to fail my process for a problem with the logger.
In order to accomplish this, I have wrapped the main log call in a Task - each call to logging fires off a Task, which just goes off and does its thing. Most of the time, my process loops through the things it needs to check, handles them, and then exits just fine, logging all the way. However, on those occasions when it finds only a single item to handle, the process completes so quickly that it exits, thus killing all of its threads, before the logging actually happens, and I get almost nothing in the logs.
I have confirmed that this is what is happening by checking that the items are handled as expected even when they are not logged (they are), and by putting a short (100 millisecond) delay into the logging method (outside of the Task), so that the logging DOES actually block. In this case, everything logs as expected.
(Based on this, I actually believe that even when the process works as expected, we may be missing a couple of log entries from the end of each run, since it is exiting before the last entries can go through, but I haven't been able to tell for certain.)
I could just put a delay at the very end of the process so that, no matter what, it waits at least a second or two to give these "fire and forget" Tasks time to complete, but that feels clunky.
Another option I'm considering is creating a global list of logging Tasks that will collect the Tasks as they are created, so that I can do a Task.WaitAll() on them. This feels like a bit of overhead I shouldn't have to deal with, but it may be the best solution.
What I'm looking for is some way to, at the end of my process, do a WaitAll() type of call that doesn't require me to know what Tasks I'm waiting for - just wait for any and all Tasks still hanging out there (except for the main Thread of the process, of course).
Does such a thing exist, or do I need to just keep track of all of my Tasks globally?
You could create a task aggregator, a task that completes when all observed tasks are completed. It would be a functionally equivalent version of Task.WhenAll, but much more lightweight since only the number of incomplete tasks would be stored, not the tasks themselves. Here is an implementation of this idea:
public class TaskAggregator
{
private int _activeCount = 0;
private int _isAddingCompleted = 0;
private TaskCompletionSource<bool> _tcs = new TaskCompletionSource<bool>();
public Task Task { get => _tcs.Task; }
public int ActiveCount
{
get => Interlocked.CompareExchange(ref _activeCount, 0, 0);
}
public bool IsAddingCompleted
{
get => Interlocked.CompareExchange(ref _isAddingCompleted, 0, 0) != 0;
}
public void Add(Task task)
{
Interlocked.Increment(ref _activeCount);
task.ContinueWith(_ =>
{
int localActiveCount = Interlocked.Decrement(ref _activeCount);
if (localActiveCount == 0 && this.IsAddingCompleted)
_tcs.TrySetResult(true);
}, TaskContinuationOptions.ExecuteSynchronously);
}
public void CompleteAdding()
{
Interlocked.Exchange(ref _isAddingCompleted, 1);
if (this.ActiveCount == 0) _tcs.TrySetResult(true);
}
}
Usage example:
public static TaskAggregator LogTasksAggregator = new TaskAggregator();
public static void Log(string str)
{
var logTask = Console.Out.WriteLineAsync(str);
LogTasksAggregator.Add(logTask);
}
// End of program
LogTasksAggregator.CompleteAdding();
bool completedInTime = LogTasksAggregator.Task.Wait(5000);
if (!completedInTime)
{
Console.WriteLine("LogTasksAggregator timed out");
}
I was going to suggest using internal Task[] System.Threading.Tasks.TaskScheduler.GetScheduledTasksForDebugger() as documented in TaskScheduler.GetScheduledTasks Method but it doesn't seem to return tasks that are currently running:
Task.Factory.StartNew(() =>
{
Console.WriteLine("Sleeping");
Thread.Sleep(1000);
Console.WriteLine("Done");
});
// Retrieve method info for internal Task[] GetScheduledTasksForDebugger()
var typeInfo = typeof(System.Threading.Tasks.TaskScheduler);
var bindingAttr = BindingFlags.NonPublic | BindingFlags.Instance;
var methodInfo = typeInfo.GetMethod("GetScheduledTasksForDebugger", bindingAttr);
Task[] tasks = (Task[])methodInfo.Invoke(System.Threading.Tasks.TaskScheduler.Current, null);
Task.WaitAll(tasks);
I think you're going to have to manage a List<Task> and WaitAll on that.
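For illustration, a minimal sketch of that bookkeeping, assuming all logging calls go through one helper. A ConcurrentBag is used instead of a plain List so concurrent logging calls don't need a lock; LogToExternalServiceAsync is a hypothetical stand-in for the real logging call:
static readonly ConcurrentBag<Task> _logTasks = new ConcurrentBag<Task>();

static void Log(string message)
{
    // fire and forget, but remember the task so it can be awaited at shutdown
    _logTasks.Add(LogToExternalServiceAsync(message));
}

// At the very end of the process:
Task.WaitAll(_logTasks.ToArray());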

Running a long-running parallel task in the background, while allowing small async tasks to update the foreground

I have around 10 000 000 tasks that each takes from 1-10 seconds to complete. I am running those tasks on a powerful server, using 50 different threads, where each thread picks the first not-done task, runs it, and repeats.
Pseudo-code:
for i = 0 to 50:
run a new thread:
while True:
task = first available task
if no available tasks: exit thread
run task
Using this code, I can run all the tasks in parallel on any given number of threads.
In reality, the code uses C#'s Task.WhenAll, and looks like this:
ServicePointManager.DefaultConnectionLimit = threadCount; //Allow more HTTP requests simultaneously
var currentIndex = -1;
var threads = new List<Task>(); //List of threads
for (int i = 0; i < threadCount; i++) //Generate the threads
{
var wc = CreateWebClient();
threads.Add(Task.Run(() =>
{
while (true) //Each thread should loop, picking the first available task, and executing it.
{
var index = Interlocked.Increment(ref currentIndex);
if (index >= tasks.Count) break;
var task = tasks[index];
RunTask(conn, wc, task, port);
}
}));
}
await Task.WhenAll(threads);
This works just as I wanted it to, but I have a problem: since this code takes a lot of time to run, I want the user to see some progress. The progress is displayed in a colored bitmap (representing a matrix), and also takes some time to generate (a few seconds).
Therefore, I want to generate this visualization on a background thread. But this other background thread is never executed. My suspicion is that it is using the same thread pool as the parallel code, and is therefore enqueued, and will not be executed before the parallel code is actually finished. (And that's a bit too late.)
Here's an example of how I generate the progress visualization:
private async void Refresh_Button_Clicked(object sender, RoutedEventArgs e)
{
var bitmap = await Task.Run(() => // <<< This task is never executed!
{
//bla, bla, various database calls, and generating a relatively large bitmap
});
//Convert the bitmap into a WPF image, and update the GUI
VisualizationImage = BitmapToImageSource(bitmap);
}
So, how could I best solve this problem? I could create a list of Tasks, where each Task represents one of my tasks, and run them with Parallel.Invoke, and pick another Thread pool (I think). But then I have to generate 10 million Task objects, instead of just 50 Task objects, running through my array of stuff to do. That sounds like it uses much more RAM than necessary. Any clever solutions to this?
EDIT:
As Panagiotis Kanavos suggested in one of his comments, I tried replacing some of my loop logic with ActionBlock, like this:
// Create an ActionBlock<int> that performs some work.
var workerBlock = new ActionBlock<ZoneTask>(
t =>
{
var wc = CreateWebClient(); //This probably generates some unnecessary overhead, but that's a problem I can solve later.
RunTask(conn, wc, t, port);
},
// Specify a maximum degree of parallelism.
new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = threadCount
});
foreach (var t in tasks) //Note: the objects in the tasks array are not Task objects
workerBlock.Post(t);
workerBlock.Complete();
await workerBlock.Completion;
Note: RunTask just executes a web request using the WebClient and parses the results. There's nothing in there that can create a deadlock.
This seems to work like the old parallelism code, except that it needs a minute or two to do the initial foreach loop to post the tasks. Is this delay really worth it?
Nevertheless, my progress task still seems to be blocked. Ignoring the Progress<T> suggestion for now, since this reduced code still suffers from the same problem:
private async void Refresh_Button_Clicked(object sender, RoutedEventArgs e)
{
Debug.WriteLine("This happens");
var bitmap = await Task.Run(() =>
{
Debug.WriteLine("This does not!");
//Still doing some work here, so it's not optimized away.
});
VisualizationImage = BitmapToImageSource(bitmap);
}
So it still looks like new tasks are not executed as long as the parallel task is running. I even reduced the MaxDegreeOfParallelism from 50 to 5 (on a 24-core server) to see if Peter Ritchie's suggestion was right, but no change. Any other suggestions?
ANOTHER EDIT:
The issue seems to have been that I overloaded the thread pool with all my simultaneous blocking I/O calls. I replaced WebClient with HttpClient and its async-functions, and now everything seems to be working nicely.
Thanks to everyone for the great suggestions! Even though not all of them directly solved the problem, I'm sure they all improved my code. :)
.NET already provides a mechanism to report progress with the IProgress<T> interface and the Progress<T> implementation.
The IProgress<T> interface allows clients to publish messages with the Report(T) method without having to worry about threading. The implementation ensures that the messages are processed on the appropriate thread, e.g. the UI thread. By using the simple IProgress<T> interface, the background methods are decoupled from whoever processes the messages.
You can find more information in the Async in 4.5: Enabling Progress and Cancellation in Async APIs article. The cancellation and progress APIs aren't specific to the TPL. They can be used to simplify cancellation and reporting even for raw threads.
Progress<T> processes messages on the thread on which it was created. This can be done either by passing a processing delegate when the class is instantiated, or by subscribing to an event. Copying from the article:
private async void Start_Button_Click(object sender, RoutedEventArgs e)
{
//construct Progress<T>, passing ReportProgress as the Action<T>
var progressIndicator = new Progress<int>(ReportProgress);
//call async method
int uploads=await UploadPicturesAsync(GenerateTestImages(), progressIndicator);
}
where ReportProgress is a method that accepts a parameter of int. It could also accept a complex class that reported work done, messages etc.
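For example, ReportProgress might look like this (UploadProgressBar is a hypothetical WPF control; since the Progress<int> was created on the UI thread, the delegate runs on the UI thread):
void ReportProgress(int value)
{
    // Runs on the thread that created the Progress<int>, i.e. the UI thread here.
    UploadProgressBar.Value = value;
}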
The asynchronous method only has to use IProgress<T>.Report, e.g.:
async Task<int> UploadPicturesAsync(List<Image> imageList, IProgress<int> progress)
{
int totalCount = imageList.Count;
int processCount = await Task.Run<int>(async () =>
{
int tempCount = 0;
foreach (var image in imageList)
{
//await the processing and uploading logic here
int processed = await UploadAndProcessAsync(image);
if (progress != null)
{
progress.Report((tempCount * 100 / totalCount));
}
tempCount++;
}
return tempCount;
});
return processCount;
}
This decouples the background method from whoever receives and processes the progress messages.

Always Running Threads on Windows Service

I'm writing a Windows Service that will kick off multiple worker threads that will listen to Amazon SQS queues and process messages. There will be about 20 threads listening to 10 queues.
The threads will have to be always running, and that's why I'm leaning towards using actual threads for the worker loops rather than thread-pool threads.
Here is a top-level implementation. The Windows service will kick off multiple worker threads, and each will listen to its queue and process messages.
protected override void OnStart(string[] args)
{
for (int i = 0; i < _workers; i++)
{
new Thread(RunWorker).Start();
}
}
Here is the implementation of the work
public async void RunWorker()
{
while(true)
{
// .. get message from amazon sqs sync.. about 20ms
var message = sqsClient.ReceiveMessage();
try
{
await PerformWebRequestAsync(message);
await InsertIntoDbAsync(message);
}
catch(SomeException)
{
// ... log
//continue to retry
continue;
}
sqsClient.DeleteMessage();
}
}
I know I can perform the same operation with Task.Run and execute it on a thread-pool thread rather than starting an individual thread, but I don't see a reason for that since each thread will always be running.
Do you see any problems with this implementation? How reliable would it be to leave threads always running in this fashion and what can I do to make sure that each thread is always running?
One problem with your existing solution is that you call your RunWorker in a fire-and-forget manner, albeit on a new thread (i.e., new Thread(RunWorker).Start()).
RunWorker is an async method; it will return to the caller when the execution point hits the first await (i.e. await PerformWebRequestAsync(message)). If PerformWebRequestAsync returns a pending task, RunWorker returns and the new thread you just started terminates.
I don't think you need a new thread here at all, just use AmazonSQSClient.ReceiveMessageAsync and await its result. Another thing is that you shouldn't be using async void methods unless you really don't care about tracking the state of the asynchronous task. Use async Task instead.
Your code might look like this:
List<Task> _workers = new List<Task>();
CancellationTokenSource _cts = new CancellationTokenSource();
protected override void OnStart(string[] args)
{
for (int i = 0; i < _MAX_WORKERS; i++)
{
_workers.Add(RunWorkerAsync(_cts.Token));
}
}
public async Task RunWorkerAsync(CancellationToken token)
{
while(true)
{
token.ThrowIfCancellationRequested();
// .. get message from amazon sqs sync.. about 20ms
var message = await sqsClient.ReceiveMessageAsync().ConfigureAwait(false);
try
{
await PerformWebRequestAsync(message);
await InsertIntoDbAsync(message);
}
catch(SomeException)
{
// ... log
//continue to retry
continue;
}
sqsClient.DeleteMessage();
}
}
Now, to stop all pending workers, you could simply do this (from the main "request dispatcher" thread):
_cts.Cancel();
try
{
Task.WaitAll(_workers.ToArray());
}
catch (AggregateException ex)
{
ex.Handle(inner => inner is OperationCanceledException);
}
Note, ConfigureAwait(false) is optional for Windows Service, because there's no synchronization context on the initial thread, by default. However, I'd keep it that way to make the code independent of the execution environment (for cases where there is synchronization context).
Finally, if for some reason you cannot use ReceiveMessageAsync, or you need to call another blocking API, or simply do a piece of CPU intensive work at the beginning of RunWorkerAsync, just wrap it with Task.Run (as opposed to wrapping the whole RunWorkerAsync):
var message = await Task.Run(
() => sqsClient.ReceiveMessage()).ConfigureAwait(false);
Well, for one I'd use a CancellationTokenSource instantiated in the service and passed down to the workers. Your while statement would become:
while(!cancellationTokenSource.IsCancellationRequested)
{
//rest of the code
}
This way you can cancel all your workers from the OnStop service method.
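For illustration, a rough sketch of what OnStop could look like with this approach. Keeping the started threads in a _workerThreads list is an assumption; the question's code discards the Thread references:
private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
private readonly List<Thread> _workerThreads = new List<Thread>(); // hypothetical: populated in OnStart

protected override void OnStop()
{
    cancellationTokenSource.Cancel(); // workers observe IsCancellationRequested and exit their loops
    foreach (var thread in _workerThreads)
        thread.Join(TimeSpan.FromSeconds(30)); // give each worker time to finish its current message
}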
Additionally, you should watch for:
If you're playing with thread states from outside of the thread, then a ThreadStateException, a ThreadInterruptedException or one of the others might be thrown, so you want to handle a proper thread restart.
Do the workers need to run without pause in-between iterations? I would throw in a sleep in there (even a few ms) just so they don't keep the CPU busy for nothing (see the sketch after this list).
You need to handle ThreadStartException and restart the worker if it occurs.
Other than that, there's no reason why those 10 threads can't run for as long as the service runs (days, weeks, months at a time).
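To fold the points above into the worker, a minimal sketch (not the asker's exact code; the SQS calls are the ones from the question):
public void RunWorker()
{
    while (!cancellationTokenSource.IsCancellationRequested)
    {
        try
        {
            var message = sqsClient.ReceiveMessage();
            // ... PerformWebRequest / InsertIntoDb as in the question ...
            sqsClient.DeleteMessage();
        }
        catch (Exception)
        {
            // log and keep the worker alive instead of letting the thread die
        }
        Thread.Sleep(50); // small pause so an idle or failing worker doesn't keep a core busy for nothing
    }
}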

Why does Parallel.Foreach create endless threads?

The code below continues to create threads, even when the queue is empty... until eventually an OutOfMemoryException occurs. If I replace the Parallel.ForEach with a regular foreach, this does not happen. Anyone know of reasons why this may happen?
public delegate void DataChangedDelegate(DataItem obj);
public class Consumer
{
public DataChangedDelegate OnCustomerChanged;
public DataChangedDelegate OnOrdersChanged;
private CancellationTokenSource cts;
private CancellationToken ct;
private BlockingCollection<DataItem> queue;
public Consumer(BlockingCollection<DataItem> queue) {
this.queue = queue;
Start();
}
private void Start() {
cts = new CancellationTokenSource();
ct = cts.Token;
Task.Factory.StartNew(() => DoWork(), ct);
}
private void DoWork() {
Parallel.ForEach(queue.GetConsumingPartitioner(), item => {
if (item.DataType == DataTypes.Customer) {
OnCustomerChanged(item);
} else if(item.DataType == DataTypes.Order) {
OnOrdersChanged(item);
}
});
}
}
I think Parallel.ForEach() was made primarily for processing bounded collections. And it doesn't expect collections like the one returned by GetConsumingPartitioner(), where MoveNext() blocks for a long time.
The problem is that Parallel.ForEach() tries to find the best degree of parallelism, so it starts as many Tasks as the TaskScheduler lets it run. But the TaskScheduler sees there are many Tasks that take a very long time to finish, and that they're not doing anything (they block) so it keeps on starting new ones.
I think the best solution is to set the MaxDegreeOfParallelism.
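A minimal sketch of capping the parallelism, reusing the question's GetConsumingPartitioner extension (the exact cap is an assumption to tune):
var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
Parallel.ForEach(queue.GetConsumingPartitioner(), options, item =>
{
    if (item.DataType == DataTypes.Customer)
        OnCustomerChanged(item);
    else if (item.DataType == DataTypes.Order)
        OnOrdersChanged(item);
});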
As an alternative, you could use TPL Dataflow's ActionBlock. The main difference in this case is that ActionBlock doesn't block any threads when there are no items to process, so the number of threads wouldn't get anywhere near the limit.
The Producer/Consumer pattern is mainly used when there is just one Producer and one Consumer.
However, what you are trying to achieve (multiple consumers) more neatly fits the Worklist pattern. The following code was taken from the unit 2 slide "2c - Shared Memory Patterns" of a parallel programming class taught at the University of Utah, which is available in the download at http://ppcp.codeplex.com/
BlockingCollection<Item> worklist;
CancellationTokenSource cts;
int itemcount;
public void Run()
{
int num_workers = 4;
//create worklist, filled with initial work
worklist = new BlockingCollection<Item>(
new ConcurrentQueue<Item>(GetInitialWork()));
cts = new CancellationTokenSource();
itemcount = worklist.Count();
for( int i = 0; i < num_workers; i++)
Task.Factory.StartNew( RunWorker );
}
IEnumerable<Item> GetInitialWork() { ... }
public void RunWorker() {
try {
do {
Item i = worklist.Take( cts.Token );
//blocks until item available or cancelled
Process(i);
//exit loop if no more items left
} while (Interlocked.Decrement( ref itemcount) > 0);
} finally {
if( ! cts.IsCancellationRequested )
cts.Cancel();
}
}
public void AddWork( Item item) {
Interlocked.Increment( ref itemcount );
worklist.Add(item);
}
public void Process( Item i )
{
//Do what you want to the work item here.
}
The preceding code allows you to add worklist items to the queue, and lets you set an arbitrary number of workers (in this case, four) to pull items out of the queue and process them.
Another great resource for the Parallelism on .Net 4.0 is the book "Parallel Programming with Microsoft .Net" which is freely available at: http://msdn.microsoft.com/en-us/library/ff963553
Internally in the Task Parallel Library, Parallel.For and Parallel.ForEach follow a hill-climbing algorithm to determine how much parallelism should be utilized for the operation.
More or less, they start with running the body on one task, move to two, and so on, until a break-point is reached and they need to reduce the number of tasks.
This works quite well for method bodies that complete quickly, but if the body takes a long time to run, it may take a long time before it realizes it needs to decrease the amount of parallelism. Until that point, it continues adding tasks, and possibly crashes the computer.
I learned the above during a lecture given by one of the developers of the Task Parallel Library.
Specifying the MaxDegreeOfParallelism is probably the easiest way to go.
