How can I determine the number of threads used during a specific call of Parallel.ForEach (or Parallel.Invoke, or Parallel.For)
I know how to limit the maximum number of threads, e.g.
Parallel.ForEach(myList,
new ParallelOptions { MaxDegreeOfParallelism = 4 },
item => { doStuff(item); });
I know that the Task Parallel Library uses some heuristics to determine the optimal number of additional thread-pool threads to use at runtime, in addition to the current thread; some value between 0 and MaxDegreeOfParallelism.
I would like to know how many threads have actually been used, for logging purposes:
Stopwatch watch = Stopwatch.StartNew();
Parallel.ForEach(myList, item => { doStuff(item); });
trace.TraceInformation("Task finished in {0}ms using {1} threads",
watch.ElapsedMilliseconds, NUM_THREADS_USED);
I mainly want this data logged for curiosity's sake, and to improve my understanding. It does not have to be 100% reliable, since I do not intend to use it for anything else.
Is there a way to get this number, without major performance penalties?
You could use a (thread-safe) list to store the IDs of the used threads and count them:
ConcurrentBag<int> threadIDs = new ConcurrentBag<int>();
Parallel.ForEach(myList, item => {
// record the ID of the thread that handles this item
threadIDs.Add(Thread.CurrentThread.ManagedThreadId);
doStuff(item);
});
// a thread appears once per item it handled, so de-duplicate before counting
int usedThreads = threadIDs.Distinct().Count();
This does have a performance impact (especially the thread-safety logic of ConcurrentBag), though it is hard to say how big. The relative effect depends on how much work doStuff does itself; if that method runs only a few instructions, this thread counting may even change the number of threads used.
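If the per-item overhead is a concern, here is a lower-overhead sketch (my addition, not from the original answer) using the thread-local overload of Parallel.ForEach, so the shared dictionary is touched only once per worker partition rather than once per item; myList and doStuff are the names from the question:

var threadIds = new ConcurrentDictionary<int, byte>();
Parallel.ForEach(
    myList,
    () =>
    {
        // localInit: runs once per worker task, before it processes any items
        threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, 0);
        return 0;
    },
    (item, loopState, local) =>
    {
        doStuff(item); // no per-item bookkeeping
        return local;
    },
    local => { }); // localFinally: nothing to aggregate
int usedThreads = threadIds.Count;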
In your DoStuff method you can add code like this:
private void DoStuff(T item)
{
Logger.Log($"Item {item} was handled by thread #{Thread.CurrentThread.ManagedThreadId}");
// your logic here
}
I know that the Task Parallel Library uses some heuristics to determine the optimal number of additional thread-pool threads to use at runtime, in addition to the current thread; some value between 0 and MaxDegreeOfParallelism.
I would like to know how many threads have actually been used, for logging purposes
Since you mention the thread pool and MaxDoP, I interpreted this question as asking how many concurrent threads are in use at any one time. This you can find out by using a counter field and Interlocked.
class MyClass
{
private int _concurrentThreadCount;
private ILog _logger; //for example
public void DoWork()
{
var listOfSomething = GetListOfStuff();
Parallel.ForEach(listOfSomething, singleSomething =>
{
// use the value returned by Increment; re-reading the field could log a racy value
var concurrentThreadCount = Interlocked.Increment(ref _concurrentThreadCount);
_logger.Info($"Doing some work. Concurrent thread count: {concurrentThreadCount}");
// do work
Interlocked.Decrement(ref _concurrentThreadCount);
});
}
}
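If you want the peak concurrency over the whole loop rather than a moment-by-moment count, a CompareExchange loop can track the maximum. A sketch, assuming an extra _maxConcurrency field that is not part of the answer above:

class MyClass
{
    private int _concurrentThreadCount;
    private int _maxConcurrency; // hypothetical extra field for the peak

    public void DoWork()
    {
        var listOfSomething = GetListOfStuff();
        Parallel.ForEach(listOfSomething, singleSomething =>
        {
            // use the value returned by Increment rather than re-reading the field
            int current = Interlocked.Increment(ref _concurrentThreadCount);

            // lock-free update of the observed maximum
            int oldMax = _maxConcurrency; // int reads are atomic; the CAS below guards races
            while (current > oldMax &&
                   Interlocked.CompareExchange(ref _maxConcurrency, current, oldMax) != oldMax)
            {
                oldMax = _maxConcurrency;
            }

            // do work
            Interlocked.Decrement(ref _concurrentThreadCount);
        });
        // _maxConcurrency now holds the highest number of concurrent iterations seen
    }
}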
While I am aware this is an older question, I followed up on Evk's suggestion. I am also not sure about the performance impact, but you could use a ConcurrentDictionary to keep track of the thread IDs:
var threadIDs = new ConcurrentDictionary<int, int>();
Parallel.ForEach(myList, item => {
threadIDs.TryAdd(Thread.CurrentThread.ManagedThreadId, 0);
doStuff(item);
});
int usedThreads = threadIDs.Count; // keys are already unique
Related
I have 1000 elements in a TPL dataflow block;
each element will call external web services.
The web service supports a maximum of 10 simultaneous calls,
which is easily achieved using:
new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = 10
...
}
The web service requires each call to have a unique ID passed which distinguishes it from the other simultaneous calls.
In theory this should be a GUID, but in practice the 11th GUID will fail - because the throttling mechanism on the server is slow to recognise that the first call is finished.
The vendor suggests we recycle the GUIDs, keeping 10 in active use.
I intend to have an array of GUIDs; each task will use (Interlocked.Increment(ref COUNTER) % 10) as the array index.
EDIT:
I just realised this won't work!
It assumes tasks will complete in the order they started, which they may not.
I could implement this as a queue of IDs where each task borrows and returns one, but the question still stands: is there an easier, pre-built thread-safe way to do this?
(there will never be enough calls for COUNTER to overflow)
But I've been surprised a number of times (I'm new to .NET) to find that I am implementing something that already exists.
Is there a better thread-safe way for each task to recycle from a pool of ids?
Creating resource pools is the exact situation System.Collections.Concurrent.ConcurrentBag<T> is useful for. Wrap it up in a BlockingCollection<T> to make the code easier.
class Example
{
private readonly BlockingCollection<Guid> _guidPool;
private readonly TransformBlock<Foo, Bar> _transform;
public Example(int concurrentLimit)
{
_guidPool = new BlockingCollection<Guid>(new ConcurrentBag<Guid>(), concurrentLimit);
for (int i = 0; i < concurrentLimit; i++)
{
_guidPool.Add(Guid.NewGuid());
}
_transform = new TransformBlock<Foo, Bar>(foo => SomeAction(foo),
new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = concurrentLimit
//...
});
//...
}
private async Task<Bar> SomeAction(Foo foo)
{
var id = _guidPool.Take();
try
{
//...
}
finally
{
_guidPool.Add(id);
}
}
}
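To see why the bounded BlockingCollection gives you the throttling almost for free: Take() blocks whenever the pool is empty and unblocks as soon as a finished call returns its id. A minimal standalone sketch (my code, with made-up pool and parallelism numbers):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class GuidPoolDemo
{
    static void Main()
    {
        // A pool of 2 ids; a 3rd concurrent Take() blocks until an Add() returns one.
        var pool = new BlockingCollection<Guid>(new ConcurrentBag<Guid>(), 2);
        pool.Add(Guid.NewGuid());
        pool.Add(Guid.NewGuid());

        Parallel.For(0, 6, new ParallelOptions { MaxDegreeOfParallelism = 3 }, i =>
        {
            Guid id = pool.Take(); // blocks while the pool is empty
            try
            {
                Console.WriteLine($"call {i} is using id {id}");
                Task.Delay(100).Wait(); // stand-in for the web-service call
            }
            finally
            {
                pool.Add(id); // recycle the id for the next caller
            }
        });
    }
}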
I've been trying to implement a simple producer-consumer pattern using Rx and observable collections. I also need to be able to throttle the number of subscribers easily. I have seen lots of references to LimitedConcurrencyLevelTaskScheduler in parallel extensions but I don't seem to be able to get this to use multiple threads.
I think I'm doing something silly so I was hoping someone could explain what. In the unit test below, I expect multiple (2) threads to be used to consume the strings in the blocking collection. What am I doing wrong?
[TestClass]
public class LimitedConcurrencyLevelTaskSchedulerTests
{
private ConcurrentBag<string> _testStrings = new ConcurrentBag<string>();
private ConcurrentBag<int> _threadIds = new ConcurrentBag<int>();
[TestMethod]
public void WhenConsumingFromBlockingCollection_GivenLimitOfTwoThreads_TwoThreadsAreUsed()
{
// Setup the command queue for processing combinations
var commandQueue = new BlockingCollection<string>();
var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
var scheduler = new TaskPoolScheduler(taskFactory);
commandQueue.GetConsumingEnumerable()
.ToObservable(scheduler)
.Subscribe(Go, ex => { throw ex; });
var iterationCount = 100;
for (int i = 0; i < iterationCount; i++)
{
commandQueue.Add(string.Format("string {0}", i));
}
commandQueue.CompleteAdding();
while (!commandQueue.IsCompleted)
{
Thread.Sleep(100);
}
Assert.AreEqual(iterationCount, _testStrings.Count);
Assert.AreEqual(2, _threadIds.Distinct().Count());
}
private void Go(string testString)
{
_testStrings.Add(testString);
_threadIds.Add(Thread.CurrentThread.ManagedThreadId);
}
}
Everyone seems to go through the same learning curve with Rx. The thing to understand is that Rx doesn't do parallel processing unless you explicitly make a query that forces parallelism. Schedulers do not introduce parallelism.
Rx has a contract of behaviour that says zero or more values are produced in series (regardless of how many threads might be used), one after another, with no overlap, finally to be followed by an optional single error or a single complete message, and then nothing else.
This is often written as OnNext*(OnError|OnCompleted).
All a scheduler does is define the rule that determines which thread a new value is processed on, when the scheduler has no pending values that it is still processing for the current observable.
Now take your code:
var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
var scheduler = new TaskPoolScheduler(taskFactory);
This says that the scheduler will run values for a subscription on one of two threads. But it doesn't mean that it will do this for every value produced. Remember, since values are produced in series, one after another, it is better to re-use an existing thread than to go to the high cost of creating a new thread. So what Rx does is re-use the existing thread if a new value is scheduled on the scheduler before the current value is finished being processed.
This is the key - it re-uses the thread if a new value is scheduled before the processing of existing values is complete.
So your code does this:
commandQueue.GetConsumingEnumerable()
.ToObservable(scheduler)
.Subscribe(Go, ex => { throw ex; });
It means that the scheduler will only create a thread when the first value comes along. But by the time the expensive thread-creation operation is complete, the code that adds values to the commandQueue has also finished, so everything is already queued, and Rx can more efficiently keep using the single existing thread rather than create a costly second one.
To avoid this you need to construct the query to introduce parallelism.
Here's how:
public void WhenConsumingFromBlockingCollection_GivenLimitOfTwoThreads_TwoThreadsAreUsed()
{
var taskFactory = new TaskFactory(new LimitedConcurrencyLevelTaskScheduler(2));
var scheduler = new TaskPoolScheduler(taskFactory);
var iterationCount = 100;
Observable
.Range(0, iterationCount)
.SelectMany(n => Observable.Start(() => n.ToString(), scheduler)
.Do(x => Go(x)))
.Wait();
(iterationCount == _testStrings.Count).Dump(); // .Dump() is LINQPad's output helper
(2 == _threadIds.Distinct().Count()).Dump();
}
Now, I've used the Do(...)/.Wait() combo to give you the equivalent of a blocking .Subscribe(...) method.
This results in both of your asserts returning true.
I have found that by modifying the subscription as follows, I can add 5 subscribers, but only two threads will process the contents of the collection, so this serves my purpose.
for(int i = 0; i < 5; i++)
observable.Subscribe(Go, ex => { throw ex; });
I'd be interested to know if there is a better or more elegant way to achieve this!
This seems like it has a very simple solution, but I've been looking on and off for months without finding a definitive answer.
I have an object being created by the UI thread. I'm actually creating several of the same type of object. Sometimes I'll create 1 or 2 every minute, sometimes I'll create 30 in a second. Each time an object is created, I need to perform some calculations on it. The combination of potentially 30 objects created in a short time and the expensive calculations I'm performing on those objects can really lag the UI.
I've tried performing the calculations via Tasks and BackgroundWorkers and all sorts of threading, but they all perform the calculations out of order, and it's imperative that the objects are calculated in the order they are created and that one doesn't start its calculations until the object ahead of it finishes its own.
I can find all sorts of information about how to perform these tasks in parallel, but can anyone explain to me how I can force them to happen sequentially, just not on the UI Thread? Any help would be greatly appreciated. I've been trying to figure this out for months :(
IMO, the easiest way to run tasks in sequential order here is to use Task.ContinueWith. Note TaskContinuationOptions.LazyCancellation, it's used to make sure that cancellation doesn't break the order of the sequence:
Task _currentTask = Task.FromResult(Type.Missing);
readonly object _lock = new Object();
void QueueTask(Action action)
{
lock (_lock)
{
_currentTask = _currentTask.ContinueWith(
lastTask =>
{
// re-throw the error of the last completed task (if any)
lastTask.GetAwaiter().GetResult();
// run the new task
action();
},
CancellationToken.None,
TaskContinuationOptions.LazyCancellation,
TaskScheduler.Default);
}
}
private void button1_Click(object sender, EventArgs e)
{
for(var i = 0; i < 10; i++)
{
var sleep = 1000 - i*100;
QueueTask(() =>
{
Thread.Sleep(sleep);
Debug.WriteLine("Slept for {0} ms", sleep);
});
}
}
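An alternative sketch, if a dependency on TPL Dataflow is acceptable: an ActionBlock with its default MaxDegreeOfParallelism of 1 gives the same strict FIFO processing with less ceremony (SequentialProcessor is a hypothetical name, not from the answer above):

using System;
using System.Threading.Tasks.Dataflow; // NuGet: System.Threading.Tasks.Dataflow

class SequentialProcessor
{
    // ActionBlock's default MaxDegreeOfParallelism is 1, so posted actions
    // run strictly one at a time, in the order they were posted, off the UI thread.
    private readonly ActionBlock<Action> _worker =
        new ActionBlock<Action>(action => action());

    public void QueueTask(Action action)
    {
        _worker.Post(action);
    }
}

Calling processor.QueueTask(() => Calculate(obj)) from the UI thread then runs the calculations one at a time, in creation order, on thread-pool threads.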
I think a proper solution would be to use a Queue, as it is FIFO, inside a task running in the background.
Your code could look something like this:
Edit
Edited to use Queue.Synchronized as #rwong mentioned (thanks!)
var queue = new Queue(); // Queue.Synchronized exists only on the non-generic System.Collections.Queue
//add object to the queue..
var mySyncedQueue = Queue.Synchronized(queue);
Task.Factory.StartNew(() =>
{
while (true)
{
if (mySyncedQueue.Count > 0) // Dequeue throws on an empty queue, so check first
{
var myObj = (MyCustomObject)mySyncedQueue.Dequeue();
// do work...
}
}
}, TaskCreationOptions.LongRunning);
This is just to get you started; I'm sure your code could be more efficient once you know exactly what is needed :)
Edit 2
As you are accessing the queue from multiple threads, it is better to use a ConcurrentQueue.
The method won't look much different:
var concurrentQueue = new ConcurrentQueue<MyCustomObject>();
//add object to the queue..
Task.Factory.StartNew(() =>
{
while (true)
{
MyCustomObject myObj;
if (concurrentQueue.TryDequeue(out myObj)) // TryDequeue takes an out parameter
{ do work...}
}
}, TaskCreationOptions.LongRunning);
Create exactly one background task. Then let it process all items in a thread-safe FIFO buffer. Add objects to the FIFO from your GUI thread.
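A minimal sketch of that idea (my code, reusing the MyCustomObject placeholder from the answer above): BlockingCollection provides the thread-safe FIFO buffer and also removes the busy-wait, because GetConsumingEnumerable blocks until an item arrives.

using System.Collections.Concurrent;
using System.Threading.Tasks;

var buffer = new BlockingCollection<MyCustomObject>();

// Exactly one consumer: items come out in the order they went in,
// and the loop ends once CompleteAdding() has been called and the buffer drains.
var consumer = Task.Factory.StartNew(() =>
{
    foreach (var myObj in buffer.GetConsumingEnumerable())
    {
        // do work...
    }
}, TaskCreationOptions.LongRunning);

// From the GUI thread, whenever an object is created:
// buffer.Add(myObj);
// On shutdown: buffer.CompleteAdding();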
I have a huge collection, over which I have to perform a specific task (which involves calling a WCF service). I want to control the number of threads instead of using Parallel.ForEach directly. Here I have 2 options:
I am using the code below to partition the data:
List<MyCollectionObject> MyCollection = new List<MyCollectionObject>();
public static IEnumerable<List<T>> PartitionMyData<T>(this IList<T> source, Int32 size)
{
for (int i = 0; i < Math.Ceiling(source.Count / (Double)size); i++)
{
yield return new List<T>(source.Skip(size * i).Take(size));
}
}
Option 1:
MyCollection.PartitionMyData(AutoEnrollRequests.Count() / threadValue)
.AsParallel().AsOrdered()
.Select(batch => { InvokeTask(batch); return batch; }) // Select needs a value; InvokeTask returns void
.ToArray();
private void InvokeTask(List<MyCollectionObject> requests)
{
foreach(MyCollectionObject obj in requests)
{
//Do Something
}
}
Option 2:
foreach (var batch in MyCollection.PartitionMyData(threadValue))
{
InvokeTask(batch);
}
private void InvokeTask(List<MyCollectionObject> requests)
{
Action<MyCollectionObject> dosomething = obj =>
{
//Do Something
};
Parallel.ForEach(requests, dosomething);
}
If I have 16 objects in my collection, as per my knowledge Option 1 will launch 4 threads, each thread processing its batch of 4 objects sequentially.
Option 2 will launch 4 threads with 1 object each, process them, and then launch 4 threads again.
Can anyone please suggest which option is better?
P.S.
I understand the .NET Framework does thread pooling and we need not control the number of threads, but due to some design decisions we want to control it.
Thanks In Advance,
Rohit
I want to control the number of threads instead of using Parallel.ForEach directly
You can control the number of threads in Parallel.ForEach if you use this call with a ParallelOptions object:
Parallel.ForEach(requests,
new ParallelOptions { MaxDegreeOfParallelism = 4 }, //change here
dosomething);
It's impossible to give an A or B answer here. It depends on too many unknowns.
I will assume you want the fastest approach. To see which is better, run both on the target environment (or closest approximation you can get) and see which one completes fastest.
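For what it's worth, a minimal timing-harness sketch (my code; RunOption1 and RunOption2 are hypothetical delegates standing in for the two variants above):

using System;
using System.Diagnostics;

static class Benchmark
{
    static TimeSpan Measure(Action variant)
    {
        variant(); // warm-up pass (JIT, WCF channel setup, etc.)
        var watch = Stopwatch.StartNew();
        variant();
        watch.Stop();
        return watch.Elapsed;
    }

    // Usage:
    // Console.WriteLine("Option 1: {0}", Measure(RunOption1));
    // Console.WriteLine("Option 2: {0}", Measure(RunOption2));
}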
Imagine I have a function which goes through one million/billion strings and checks something in them.
f.ex:
foreach (String item in ListOfStrings)
{
result.Add(CalculateSmth(item));
}
It consumes lots of time, because CalculateSmth is a very time-consuming function.
I want to ask: how do I integrate multithreading into this kind of process?
f.ex: I want to fire up 5 threads, each of them returning some results, and that goes on until the list has no more items.
Maybe someone can show some examples or articles...
Forgot to mention: I need it in .NET 2.0
You could try the Parallel extensions (part of .NET 4.0)
These allow you to write something like:
Parallel.ForEach(ListOfStrings, (item) =>
result.Add(CalculateSmth(item))
);
Of course result.Add would need to be thread-safe.
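If you would rather sidestep the shared-list problem entirely, PLINQ (also part of .NET 4.0) can collect the results for you. A sketch, assuming CalculateSmth has no side effects:

using System.Linq;

// Each CalculateSmth call runs in parallel; AsOrdered preserves the input
// order in the output, and no shared collection is mutated by your code.
var result = ListOfStrings
    .AsParallel()
    .AsOrdered()
    .Select(item => CalculateSmth(item))
    .ToList();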
The Parallel extensions are cool, but this can also be done just by using the ThreadPool, like this:
using System.Collections.Generic;
using System.Threading;
namespace noocyte.Threading
{
class CalcState
{
public CalcState(ManualResetEvent reset, string input) {
Reset = reset;
Input = input;
}
public ManualResetEvent Reset { get; private set; }
public string Input { get; set; }
}
class CalculateMT
{
List<string> result = new List<string>();
List<ManualResetEvent> events = new List<ManualResetEvent>();
private void Calc() {
List<string> aList = new List<string>();
aList.Add("test");
foreach (var item in aList)
{
CalcState cs = new CalcState(new ManualResetEvent(false), item);
events.Add(cs.Reset);
ThreadPool.QueueUserWorkItem(new WaitCallback(Calculate), cs);
}
WaitHandle.WaitAll(events.ToArray()); // note: WaitAll supports at most 64 wait handles
}
private void Calculate(object s)
{
CalcState cs = (CalcState)s;
lock (result)
{
result.Add(cs.Input); // List<T> is not thread-safe, so take a lock
}
cs.Reset.Set(); // signal only after the result has been stored
}
}
}
Note that concurrency doesn't magically give you more resources. You need to establish what is slowing CalculateSmth down.
For example, if it's CPU-bound (and you're on a single core) then the same number of CPU ticks will go to the code, whether you execute them sequentially or in parallel. Plus you'd get some overhead from managing the threads. Same argument applies to other constraints (e.g. I/O)
You'll only get performance gains in this if CalculateSmth is leaving resource free during its execution, that could be used by another instance. That's not uncommon. For example, if the task involves IO followed by some CPU stuff, then process 1 could be doing the CPU stuff while process 2 is doing the IO. As mats points out, a chain of producer-consumer units can achieve this, if you have the infrastructure.
You need to split up the work you want to do in parallel. Here is an example of how you can split the work in two:
List<string> work = (some list with lots of strings)
// Split the work in two
List<string> odd = new List<string>();
List<string> even = new List<string>();
for (int i = 0; i < work.Count; i++)
{
if (i % 2 == 0)
{
even.Add(work[i]);
}
else
{
odd.Add(work[i]);
}
}
// Set up two worker delegates
List<Foo> oddResult = new List<Foo>();
Action oddWork = delegate { foreach (string item in odd) oddResult.Add(CalculateSmth(item)); };
List<Foo> evenResult = new List<Foo>();
Action evenWork = delegate { foreach (string item in even) evenResult.Add(CalculateSmth(item)); };
// Run two delegates asynchronously
IAsyncResult evenHandle = evenWork.BeginInvoke(null, null);
IAsyncResult oddHandle = oddWork.BeginInvoke(null, null);
// Wait for both to finish
evenWork.EndInvoke(evenHandle);
oddWork.EndInvoke(oddHandle);
// Merge the results from the two jobs
List<Foo> allResults = new List<Foo>();
allResults.AddRange(oddResult);
allResults.AddRange(evenResult);
return allResults;
The first question you must answer is whether you should be using threading
If your function CalculateSmth() is basically CPU-bound, i.e. heavy on CPU usage with basically no I/O, then I have a hard time seeing the point of using threads, since the threads will be competing over the same resource, in this case the CPU.
If your CalculateSmth() is using both CPU and I/O, then it might be a point in using threading.
I totally agree with the comment to my answer. I made an erroneous assumption that we were talking about a single CPU with one core, but these days we have multi-core CPUs; my bad.
Not that I have any good articles here right now, but what you want to do is something along the lines of Producer-Consumer with a thread pool.
The Producer loops through and creates tasks (which in this case could just mean queuing up the items in a List or Stack). The Consumers are, say, five threads that each read one item off the stack, consume it by calculating it, and then store it elsewhere.
This way the multithreading is limited to just those five threads, and they will all have work to do up until the stack is empty. A sketch follows after the list below.
Things to think about:
Put protection on the input and output list, such as a mutex.
If the order is important, make sure that the output order is maintained. One example could be to store them in a SortedList or something like that.
Make sure that the CalculateSmth is thread safe, that it doesn't use any global state.
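To make that concrete, here is a minimal .NET 2.0-style sketch of the above (my code; CalculateSmth is stubbed as a placeholder). It uses Monitor.Wait/Pulse on the input queue and a lock on the output list, as the points above suggest:

using System.Collections.Generic;
using System.Threading;

class ProducerConsumer
{
    private readonly Queue<string> _queue = new Queue<string>();
    private readonly List<string> _results = new List<string>();
    private bool _done;

    public List<string> Run(List<string> input, int consumerCount)
    {
        List<Thread> consumers = new List<Thread>();
        for (int i = 0; i < consumerCount; i++) // e.g. five consumer threads
        {
            Thread t = new Thread(Consume);
            t.Start();
            consumers.Add(t);
        }

        foreach (string item in input) // the producer
        {
            lock (_queue)
            {
                _queue.Enqueue(item);
                Monitor.Pulse(_queue); // wake one waiting consumer
            }
        }
        lock (_queue)
        {
            _done = true;
            Monitor.PulseAll(_queue); // wake all consumers so they can exit
        }

        foreach (Thread t in consumers) t.Join();
        return _results; // note: output order is not maintained (see above)
    }

    private void Consume()
    {
        while (true)
        {
            string item;
            lock (_queue)
            {
                while (_queue.Count == 0 && !_done) Monitor.Wait(_queue);
                if (_queue.Count == 0) return; // done and drained
                item = _queue.Dequeue();
            }
            string calculated = CalculateSmth(item); // the expensive call, outside the lock
            lock (_results) _results.Add(calculated); // protect the output list
        }
    }

    private static string CalculateSmth(string s) { return s; } // placeholder
}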