I have the following code:
var factory = new TaskFactory();
for (int i = 0; i < 100; i++)
{
    var i1 = i;
    factory.StartNew(() => foo(i1));
}

static void foo(int i)
{
    Thread.Sleep(1000);
    Console.WriteLine($"foo{i} - on thread {Thread.CurrentThread.ManagedThreadId}");
}
I can see it only does 4 threads at a time (based on observation). My questions:
What determines the number of threads used at a time?
How can I retrieve this number?
How can I change this number?
P.S. My box has 4 cores.
P.P.S. I needed to have a specific number of tasks (and no more) that are concurrently processed by the TPL and ended up with the following code:
private static int count = 0; // keep track of how many concurrent tasks are running

private static void SemaphoreImplementation()
{
    var s = new Semaphore(20, 20); // allow 20 tasks at a time

    for (int i = 0; i < 1000; i++)
    {
        var i1 = i;
        Task.Factory.StartNew(() =>
        {
            try
            {
                s.WaitOne();
                Interlocked.Increment(ref count);
                foo(i1);
            }
            finally
            {
                s.Release();
                Interlocked.Decrement(ref count);
            }
        }, TaskCreationOptions.LongRunning);
    }
}

static void foo(int i)
{
    Thread.Sleep(100);
    Console.WriteLine($"foo{i:00} - on thread " +
        $"{Thread.CurrentThread.ManagedThreadId:00}. Executing concurrently: {count}");
}
When you are using a Task in .NET, you are telling the TPL to schedule a piece of work (via TaskScheduler) to be executed on the ThreadPool. Note that the work will be scheduled at its earliest opportunity and however the scheduler sees fit. This means that the TaskScheduler will decide how many threads will be used to run n number of tasks and which task is executed on which thread.
The TPL is very well tuned and continues to adjust its algorithm as it executes your tasks. So, in most cases, it tries to minimize contention. What this means is that if you are running 100 tasks and only have 4 cores (which you can get using Environment.ProcessorCount), it would not make sense to execute more than 4 threads at any given time, as otherwise it would need to do more context switching. There are times when you want to explicitly override this behaviour, say when you need to wait for some sort of IO to finish, but that is a whole different story.
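For reference, here is a minimal sketch of how to retrieve the numbers the scheduler works with (the ThreadPool limits are process-wide and their defaults vary by runtime version):

int minWorker, minIo, maxWorker, maxIo;
Console.WriteLine($"Processor count: {Environment.ProcessorCount}");
ThreadPool.GetMinThreads(out minWorker, out minIo);
ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
Console.WriteLine($"ThreadPool min: {minWorker} worker / {minIo} IO threads");
Console.WriteLine($"ThreadPool max: {maxWorker} worker / {maxIo} IO threads");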
In summary, trust the TPL. But if you are adamant about spawning a thread per task (not always a good idea!), you can use:
Task.Factory.StartNew(
() => /* your piece of work */,
TaskCreationOptions.LongRunning);
This tells the default TaskScheduler to explicitly spawn a new thread for that piece of work.
You can also use your own Scheduler and pass it in to the TaskFactory. You can find a whole bunch of Schedulers HERE.
Note another alternative would be to use PLINQ, which again by default analyses your query and decides whether parallelizing it would yield any benefit. In the case of blocking IO, where you are certain that starting multiple threads will result in better execution, you can force parallelism by using WithExecutionMode(ParallelExecutionMode.ForceParallelism). You can then use WithDegreeOfParallelism to give a hint on how many threads to use, but remember there is no guarantee you would get that many threads. As MSDN says:
Sets the degree of parallelism to use in a query. Degree of
parallelism is the maximum number of concurrently executing tasks that
will be used to process the query.
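For illustration, a minimal PLINQ sketch of that shape might look like this (urls and CheckUrl are hypothetical; CheckUrl stands in for your own blocking IO call per item):

var results = urls
    .AsParallel()
    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
    .WithDegreeOfParallelism(20)   // a hint/upper bound, not a guarantee
    .Select(url => CheckUrl(url))  // hypothetical blocking IO per item
    .ToList();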
Finally, I highly recommend having a read of THIS great series of articles on Threading and TPL.
If you increase the number of tasks to, for example, 1,000,000, you will see a lot more threads spawned over time; the TPL tends to inject roughly one every 500 ms.
The TPL thread pool does not understand IO-bound workloads (a sleep blocks the thread just like IO does). It's not a good idea to rely on the TPL to pick the right degree of parallelism in these cases. The TPL is completely clueless here: it injects more threads based on vague guesses about throughput, and also to avoid deadlocks.
Here, the TPL policy clearly is not useful because the more threads you add the more throughput you get. Each thread can process one item per second in this contrived case. The TPL has no idea about that. It makes no sense to limit the thread count to the number of cores.
What determines the number of threads used at a time?
Barely documented TPL heuristics. They frequently go wrong. In particular, they will spawn an unlimited number of threads over time in this case. Use Task Manager to see for yourself: let this run for an hour and you'll have thousands of threads.
How can I retrieve this number? How can I change this number?
You can retrieve some of these numbers but that's not the right way to go. If you need a guaranteed DOP you can use AsParallel().WithDegreeOfParallelism(...) or a custom task scheduler. You also can manually start LongRunning tasks. Do not mess with process global settings.
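If you want a hard cap without touching process-global settings, one built-in option is to give a TaskFactory a scheduler with a bounded concurrency level. A minimal sketch using ConcurrentExclusiveSchedulerPair (DoWork is a placeholder for your own method):

// A scheduler that never runs more than 4 of these tasks at once.
var pair = new ConcurrentExclusiveSchedulerPair(TaskScheduler.Default, 4);
var factory = new TaskFactory(pair.ConcurrentScheduler);

var tasks = Enumerable.Range(0, 100)
    .Select(i => factory.StartNew(() => DoWork(i))) // DoWork is a placeholder
    .ToArray();
Task.WaitAll(tasks);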
I would suggest using SemaphoreSlim because it doesn't use a Windows kernel object (so it can also be used in Linux C# microservices), and it has a SemaphoreSlim.CurrentCount property that tells how many free slots remain, so you don't need the Interlocked.Increment/Interlocked.Decrement counter. Note that the local copy i1 is still needed: the lambda captures the loop variable itself, not its current value, so without the copy every task could observe a later value of i:
private static void SemaphoreImplementation()
{
    var maxTasksCount = 20; // allow 20 tasks at a time
    var semaphoreSlim = new SemaphoreSlim(maxTasksCount, maxTasksCount);
    var taskFactory = new TaskFactory();

    for (int i = 0; i < 1000; i++)
    {
        var i1 = i; // copy the loop variable so each task sees its own value
        taskFactory.StartNew(async () =>
        {
            try
            {
                await semaphoreSlim.WaitAsync();
                var count = maxTasksCount - semaphoreSlim.CurrentCount; // CurrentCount = remaining free slots
                await foo(i1, count);
            }
            finally
            {
                semaphoreSlim.Release();
            }
        }, TaskCreationOptions.LongRunning);
    }
}

static async Task foo(int i, int count)
{
    await Task.Delay(100);
    Console.WriteLine($"foo{i:00} - on thread " +
        $"{Thread.CurrentThread.ManagedThreadId:00}. Executing concurrently: {count}");
}
In chapter 4.4 Dynamic Parallelism, in Stephen Cleary's book Concurrency in C# Cookbook, it says the following:
Parallel tasks may use blocking members, such as Task.Wait,
Task.Result, Task.WaitAll, and Task.WaitAny. In contrast, asynchronous
tasks should avoid blocking members, and prefer await, Task.WhenAll,
and Task.WhenAny.
I was always told that Task.Wait etc are bad because they block the current thread, and that it's much better to use await instead, so that the calling thread is not blocked.
Why is it ok to use Task.Wait etc for a parallel (which I think means CPU bound) Task?
Example:
In the example below, isn't Test1() better because the thread that calls Test1() is able to continue doing something else while it waits for the for loop to complete?
Whereas the thread that calls Test() is stuck waiting for the for loop to complete.
private static void Test()
{
Task.Run(() =>
{
for (int i = 0; i < 100; i++)
{
//do something.
}
}).Wait();
}
private static async Task Test1()
{
await Task.Run(() =>
{
for (int i = 0; i < 100; i++)
{
//do something.
}
});
}
EDIT:
This is the rest of the paragraph which I'm adding based on Peter Csala's comment:
Parallel tasks also commonly use AttachedToParent to create parent/child relationships between tasks. Parallel tasks should be created with Task.Run or Task.Factory.StartNew.
You've already got some great answers here, but just to chime in (sorry if this is repetitive at all):
Task was introduced in the TPL before async/await existed. When async came along, the Task type was reused instead of creating a separate "Promise" type.
In the TPL, pretty much all tasks were Delegate Tasks - i.e., they wrap a delegate (code) which is executed on a TaskScheduler. It was also possible - though rare - to have Promise Tasks in the TPL, which were created by TaskCompletionSource<T>.
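A Promise Task has no delegate of its own; something else completes it later. A minimal sketch with TaskCompletionSource<T>:

// A Promise Task: no code to run, it is completed by someone else later.
var tcs = new TaskCompletionSource<int>();
Task<int> promise = tcs.Task;   // not scheduled anywhere, just "pending"

// ... later, e.g. from a callback or an event handler:
tcs.SetResult(42);              // the task transitions to RanToCompletion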
The higher-level TPL APIs (Parallel and PLINQ) hide the Delegate Tasks from you; they are higher-level abstractions that create multiple Delegate Tasks and execute them on multiple threads, complete with all the complexity of partitioning and work queue stealing and all that stuff.
However, the one drawback to the higher-level APIs is that you need to know how much work you are going to do before you start. It's not possible for, e.g., the processing of one data item to add another data item(s) back at the beginning of the parallel work. That's where Dynamic Parallelism comes in.
Dynamic Parallelism uses the Task type directly. There are many APIs on the Task type that were designed for Dynamic Parallelism and should be avoided in async code unless you really know what you're doing (i.e., either your name is Stephen Toub or you're writing a high-performance .NET runtime). These APIs include StartNew, ContinueWith, Wait, Result, WaitAll, WaitAny, Id, CurrentId, RunSynchronously, and parent/child tasks. And then there's the Task constructor itself and Start which should never be used in any code at all.
In the particular case of Wait, yes, it does block the thread. And that is not ideal (even in parallel programming), because it blocks a literal thread. However, the alternative may be worse.
Consider the case where task A reaches a point where it has to be sure task B completes before it continues. This is the general Dynamic Parallelism case, so assume no parent/child relationship.
The old-school way to avoid this kind of blocking is to split method A up into a continuation and use ContinueWith. That works fine, but it does complicate the code - rather considerably in the case of loops. You end up writing a state machine, essentially what async does for you. In modern code, you may be able to use await, but then that has its own dangers: parallel code does not work out of the box with async, and combining the two can be tricky.
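To make the tradeoff concrete, here is the same "A must wait for B" step written both ways (a sketch; ComputeB and UseResultOfB are placeholders):

// Old-school dynamic parallelism: split the rest of A into a continuation.
Task<int> taskB = Task.Run(() => ComputeB());
Task restOfA = taskB.ContinueWith(b => UseResultOfB(b.Result));

// The same thing with await - the compiler builds the state machine for you.
async Task RestOfAAsync()
{
    int b = await Task.Run(() => ComputeB());
    UseResultOfB(b);
}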
So it really comes down to a tradeoff between code complexity vs runtime efficiency. And when you consider the following points, you'll see why blocking was common:
Parallelism is normally done on Desktop applications; it's not common (or recommended) for web servers.
Desktop machines tend to have plenty of threads to spare. I remember Mark Russinovich (long before he joined Microsoft) demoing how showing a File Open dialog on Windows spawned some crazy number of threads (over 20, IIRC). And yet the user wouldn't even notice 20 threads being spawned (and presumably blocked).
Parallel code is difficult to maintain in the first place; Dynamic Parallelism using continuations is exceptionally difficult to maintain.
Given these points, it's pretty easy to see why a lot of parallel code blocks thread pool threads: the user experience is degraded by an unnoticeable amount, but the developer experience is enhanced significantly.
The thing is, if you are using tasks to parallelize CPU-bound work, your method is likely not asynchronous, because the main benefit of async is asynchronous IO, and you have no IO in this case. Since your method is synchronous, you can't await anything, including the tasks you use to parallelize the computation, nor do you need to.
The valid concern you mentioned is that you would waste the current thread if you just blocked it waiting for the parallel tasks to complete. However, you should not waste it like this - it can be used as one participant in the parallel computation. Say you want to perform the computation on 4 threads: use the current thread plus 3 other threads, instead of using 4 other threads and leaving the current one blocked waiting for them.
That's what, for example, Parallel LINQ does - it uses the current thread together with thread pool threads. Note also that its methods are not async (and should not be), but they do use Tasks internally and do block waiting on them.
Update: about your examples.
This one:
private static void Test()
{
Task.Run(() =>
{
for (int i = 0; i < 100; i++)
{
//do something.
}
}).Wait();
}
Is always useless - you offload some computation to a separate thread while the current thread is blocked waiting, so in the end one thread is just wasted for nothing useful. Instead you should just do:
private static void Test()
{
for (int i = 0; i < 100; i++)
{
//do something.
}
}
This one:
private static async Task Test1()
{
await Task.Run(() =>
{
for (int i = 0; i < 100; i++)
{
//do something.
}
});
}
Is useful sometimes - when for some reason you need to perform a computation but don't want to block the current thread. For example, if the current thread is the UI thread and you don't want the user interface to freeze while the computation is performed. However, if you are not in such an environment, for example you are writing a general purpose library, then it's useless too and you should stick to the synchronous version above. If the user of your library happens to be on the UI thread, they can wrap the call in Task.Run themselves. I would say that even if you are writing a UI application rather than a library, you should move all such logic (the for loop in this case) into a separate synchronous method, and then wrap the call to that method in Task.Run where necessary. Like this:
private static async Task Test2()
{
// we are on UI thread here, don't want to block it
await Task.Run(() => {
OurSynchronousVersionAbove();
});
// back on UI thread
// do something else
}
Now say you have that synchronous method and want to parallelize the computation. You may try something like this:
static void Test1() {
var task1 = Task.Run(() => {
for (int i = 0; i < 50;i++) {
// do something
}
});
var task2 = Task.Run(() => {
for (int i = 50; i < 100;i++) {
// do something
}
});
Task.WaitAll(task1, task2);
}
That will work, but it wastes the current thread, which sits blocked for no reason waiting for the two tasks to complete. Instead, you should do it like this:
static void Test1() {
var task = Task.Run(() => {
for (int i = 0; i < 50; i++) {
// do something
}
});
for (int i = 50; i < 100; i++) {
// do something
}
task.Wait();
}
Now you perform the computation in parallel using 2 threads - one thread pool thread (from Task.Run) and the current thread. And here is your legitimate use of task.Wait(). Of course, usually you should stick to existing solutions like Parallel LINQ, which does the same for you, but better.
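For example, the manual split above collapses into a single Parallel.For call, which already uses the calling thread as one of the workers and only returns once every iteration has finished (a sketch; DoSomething is a placeholder for the loop body):

Parallel.For(0, 100, i =>
{
    DoSomething(i); // placeholder for "do something"
});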
One of the risks of Task.Wait is deadlocks. If you call .Wait on the UI thread, you will deadlock if the task needs the main thread to complete. If you call an async method on the UI thread such deadlocks are very likely.
If you are 100% sure the task is running on a background thread, is guaranteed to complete no matter what, and that this will never change, it is fine to wait on it.
Since this is fairly difficult to guarantee, it is usually a good idea to avoid waiting on tasks at all.
I believe the point of this passage is that you should not use blocking operations like Task.Wait in asynchronous code.
The main point isn't that Task.Wait is preferred in parallel code; it just says that you can get away with it, while in asynchronous code it can have a really serious effect.
This is because the success of async code depends on the tasks 'letting go' (with await) so that the thread(s) can do other work. In explicitly parallel code a blocking Wait may be OK because the other streams of work will continue going because they have a dedicated thread(s).
As I mentioned in the comments section, if you look at the recipe as a whole it might make more sense. Let me quote the relevant part here as well.
The Task type serves two purposes in concurrent programming: it can be a parallel task or an asynchronous task. Parallel tasks may use blocking members, such as Task.Wait, Task.Result, Task.WaitAll, and Task.WaitAny. In contrast, asynchronous tasks should avoid blocking members, and prefer await, Task.WhenAll, and Task.WhenAny. Parallel tasks also commonly use AttachedToParent to create parent/child relationships between tasks. Parallel tasks should be created with Task.Run or Task.Factory.StartNew.
In contrast, asynchronous tasks should avoid blocking members and prefer await, Task.WhenAll, and Task.WhenAny. Asynchronous tasks do not use AttachedToParent, but they can form an implicit kind of parent/child relationship by awaiting another task.
IMHO, it clearly articulates that a Task (or future) can represent a job that takes advantage of async I/O, or it can represent a CPU-bound job which could run in parallel with other CPU-bound jobs.
Awaiting the former is the suggested way because otherwise you can't really take advantage of the underlying I/O driver's async capability. The latter does not require awaiting since it is not an async I/O job.
UPDATE: providing an example.
As Theodor Zoulias asked in the comments section, here is a made-up example of parallel tasks where Task.WaitAll is being used.
Let's suppose we have this naive is-prime implementation. It is not efficient, but it demonstrates something that can be considered computationally heavy. (Please also bear in mind that, for the sake of simplicity, I did not add any error handling logic.)
static (int, bool) NaiveIsPrime(int number)
{
    int numberOfDividers = 0;
    for (int divider = 1; divider <= number; divider++)
    {
        if (number % divider == 0)
        {
            numberOfDividers++;
        }
    }
    return (number, numberOfDividers == 2);
}
And here is a sample use case which runs a couple of is-prime calculations in parallel and waits for the results in a blocking way.
List<Task<(int, bool)>> jobs = new();
for (int number = 1_010; number < 1_020; number++)
{
    var x = number;
    jobs.Add(Task.Run(() => NaiveIsPrime(x)));
}

Task.WaitAll(jobs.ToArray());

foreach (var job in jobs)
{
    (int number, bool isPrime) = job.Result;
    var isPrimeInText = isPrime ? "a prime" : "not a prime";
    Console.WriteLine($"{number} is {isPrimeInText}");
}
As you can see I haven't used any await keyword anywhere.
Here is a dotnet fiddle link
and here is a link for the prime numbers under 10 000.
I recommend using await instead of Task.Wait() for asynchronous methods/tasks, because this way the thread can be used for something else while the task is running.
However, parallel tasks that are CPU-bound should use most of the available CPU. It makes sense to use Task.Wait() to block the current thread until the task is complete; this way, the CPU-bound task can make full use of the CPU resources.
Update with supplementary statement.
Parallel tasks can use blocking members such as Task.Wait(), Task.Result, Task.WaitAll, and Task.WaitAny because they are meant to consume all available CPU resources. When working with parallel tasks, it can be acceptable to block the current thread until the task is complete, since that thread is not needed for anything else in the meantime, and the software can still fully utilize all available CPU resources.
I have a Windows service that receives requests, and I want to handle each request in a separate thread.
I also want to limit the number of threads, i.e. a maximum of 5 threads.
And I want to wait for all threads to finish before I close the application.
What is the best way to do that?
What I tried:
for (int i = 0; i < 10; i++)
{
var i1 = i;
Task.Factory.StartNew(() => RequestHandle(i1.ToString())).ContinueWith(t => Console.WriteLine("Done"));
}
Task.WaitAll();//Not waiting actually for all threads, why?
Can I limit the number of threads this way?
Or
var events = new ManualResetEvent[10];
ThreadPool.SetMaxThreads(5, 5);
for (int i = 0; i < 10; i++)
{
var i1 = i;
ThreadPool.QueueUserWorkItem(x =>
{
Test(i1.ToString());
events[i1].Set();
});
}
WaitHandle.WaitAll(events);
Is there another way to implement that?
Both approaches should work, but neither guarantees that each operation will run on a different thread (and that is a good thing). Controlling the maximum number of threads is another matter...
You can set the maximum number of threads of the ThreadPool with SetMaxThreads. Remember that this change is global; you only have one ThreadPool.
The default TaskScheduler will use the thread pool (except in some particular situations where it can run the tasks inline on the same thread that is calling them), so changing the parameters of the ThreadPool will also affect the Tasks.
Now, notice I said default TaskScheduler. You can roll your own, which will give you more control over how the tasks will run. It is not advised to create a custom TaskScheduler (unless you really need it and you know what you are doing).
For the embedded question:
Task.WaitAll();//Not waiting actually for all threads, why?
To correctly call Task.WaitAll you need to pass the tasks you want to wait for as parameters. There is no way to wait for all currently existing tasks implicitly, you need to tell it what tasks you want to wait for.
Task.WaitAll();//Not waiting actually for all threads, why?
you have to do the following:
List<Task> tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
    var i1 = i;
    tasks.Add(Task.Factory.StartNew(() => RequestHandle(i1.ToString()))
                  .ContinueWith(t => Console.WriteLine("Done")));
}
Task.WaitAll(tasks.ToArray()); // WaitAll takes a Task[], so pass the collected tasks explicitly
I think you should first understand the difference between a Task and a thread.
Tasks are not threads. A Task is just a promise of a result in the future, and your code can execute on only one thread even if you have many tasks scheduled.
A thread is a low-level concept: if you start a thread, you know that it will be a separate thread.
I think your first implementation is good enough to be sure that all your code is executed, but you have to use Task.Run instead of Task.Factory.StartNew, for example like this:
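A minimal sketch of that shape (RequestHandle is the method from the question):

var tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
    var i1 = i;
    tasks.Add(Task.Run(() => RequestHandle(i1.ToString()))
                  .ContinueWith(t => Console.WriteLine("Done")));
}
Task.WaitAll(tasks.ToArray()); // blocks until every handler has finished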
In a Parallel.For, is it possible to synchronize each threads with a 'WaitAll' ?
Parallel.For(0, maxIter, i =>
{
// Do stuffs
// Synchronisation : wait for all threads => ???
// Do another stuffs
});
Parallel.For, in the background, batches the iterations of the loop into one or more Tasks, which can be executed in parallel. Unless you take ownership of the partitioning, the number of tasks (and threads) is (and should be!) abstracted away. Control only exits the Parallel.For loop once all the tasks have completed (i.e. there is no need for WaitAll).
The idea of course is that each loop iteration is independent and doesn't require synchronization.
If synchronization is required in the tight loop, then you haven't isolated the Tasks correctly, or it means that Amdahl's Law is in effect and the problem can't be sped up through parallelization.
However, for an aggregation type pattern, you may need to synchronize after completion of each Task - use the overload with the localInit / localFinally to do this, e.g.:
// allTheStrings is a shared resource which isn't thread safe
var allTheStrings = new List<string>();
Parallel.For( // for (
0, // var i = 0;
numberOfIterations, // i < numberOfIterations;
() => new List<string> (), // localInit - Setup each task. List<string> --> localStrings
(i, parallelLoopState, localStrings) =>
{
// The "tight" loop. If you need to synchronize here, there is no point
// using parallel at all
localStrings.Add(i.ToString());
return localStrings;
},
(localStrings) => // local Finally for each task.
{
// Synchronization is needed here - runs once per task
lock(allTheStrings)
{
allTheStrings.AddRange(localStrings);
}
});
In the above example, you could also have just declared allTheStrings as
var allTheStrings = new ConcurrentBag<string>();
In which case, we wouldn't have required the lock in the localFinally.
You shouldn't (for the reasons stated by other users), but if you want to, you can use Barrier. This can be used to make all threads wait (block) at a certain point until X participants have hit the barrier, at which point the barrier proceeds and the threads unblock. The downside of this approach, as others have said, is deadlocks.
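A minimal sketch of the Barrier approach, assuming you know the exact number of participants up front (here hard-coded to 4, with the workers started as LongRunning tasks so every participant is actually running; if fewer participants ever reach SignalAndWait, that is exactly how the deadlock happens):

const int participants = 4;
var barrier = new Barrier(participants);

var workers = Enumerable.Range(0, participants)
    .Select(i => Task.Factory.StartNew(() =>
    {
        // Do stuffs (phase 1)
        barrier.SignalAndWait(); // blocks until all 4 participants arrive here
        // Do another stuffs (phase 2)
    }, TaskCreationOptions.LongRunning))
    .ToArray();

Task.WaitAll(workers);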
Possible Duplicate:
What is the difference between task and thread?
I understand the title itself may appear to be a duplicate question but I've really read all the previous posts related to this topic and still don't quite understand the program behavior.
I'm currently writing a small program that checks around 1,000 E-mail accounts. Undoubtedly I feel multithreading or multitasking is the right approach since each thread / task is not computationally expensive but the duration of each thread relies heavily on network I/O.
I think that under such a scenario, it would also be reasonable to set the number of threads / tasks at a number that is much larger than the number of cores. (four for i5-750). Therefore I've set the number of threads or tasks at 100.
The code snippet written using Tasks:
const int taskCount = 100;
var tasks = new Task[taskCount];
var loopVal = (int) Math.Ceiling(1.0*EmailAddress.Count/taskCount);
for (int i = 0; i < taskCount; i++)
{
var objContainer = new AutoCheck(i*loopVal, i*loopVal + loopVal);
tasks[i] = new Task(objContainer.CheckMail);
tasks[i].Start();
}
Task.WaitAll(tasks);
The same code snippet written using Threads:
const int threadCount = 100;
var threads = new Thread[threadCount];
var loopVal = (int)Math.Ceiling(1.0 * EmailAddress.Count / threadCount);
for (int i = 0; i < threadCount; i++)
{
var objContainer = new AutoCheck(i * loopVal, i * loopVal + loopVal);
threads[i] = new Thread(objContainer.CheckMail);
threads[i].Start();
}
foreach (Thread t in threads)
t.Join();
runningTime.Stop();
Console.WriteLine(runningTime.Elapsed);
So what are the essential differences between these two?
Tasks do not necessarily correspond to threads. They will be scheduled by the task library to threadpool threads in a much more efficient manner than your thread code.
Threads are fairly expensive to create. Tasks will queue up and reuse threads as they become available, so when a thread is waiting for network IO, it can actually be reused to execute another task. Threads that sit idle are wasted resources. You can only execute the number of threads that corresponds to your processor core count (simultaneously), so 100 threads means context switches on all your cores at least 25 times each.
If using tasks, just queue up all 1000 email processing tasks instead of batching them up and let it rip. The task library will handle how many threads to run it on.
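In other words, something like this (a sketch; CheckSingleMailbox is a placeholder for processing one account):

// One task per mailbox - the scheduler decides how many run at once.
var tasks = EmailAddress
    .Select(address => Task.Run(() => CheckSingleMailbox(address)))
    .ToArray();
Task.WaitAll(tasks);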
The short, quick answer is that a Task does not equal a Thread. A task is queued up in a Task Scheduler and then executed on a Thread, but queuing up 100 Tasks does not mean you will have 100 threads running.
Usually a task will run on a thread from the thread pool, which has a finite size. Once all of those threads are busy, then your tasks will have to wait for a thread to become available to execute your task. It's also possible to queue them up on things like the UI thread using the appropriate task scheduler, in which case they end up being run synchronously by the UI thread's message loop (this is useful when you're interacting with a UI).
In your task example, there is no need to actually limit the number of tasks started, since the tasks will already get queued up and will have to wait until there is a thread available to run them. This is probably the best approach, because you're letting the system determine the maximum number of threads based on what the system can handle, rather than assuming you have enough CPU/Memory to handle it.
Whereas your example using threads specifically means that you will definitely spin up the number of threads you are allotting.
I have a question on controlling the number of concurrent threads I want running. Let me explain what I currently do. For example:
var myItems = getItems(); // is just some generic list

// cycle through the mails, picking 10 at a time
int index = 0;
int itemsToTake = myItems.Count >= 10 ? 10 : myItems.Count;
while (index < myItems.Count)
{
    var itemRange = myItems.GetRange(index, itemsToTake);
    AutoResetEvent[] handles = new AutoResetEvent[itemsToTake];
    for (int i = 0; i < itemRange.Count; i++)
    {
        var item = itemRange[i];
        handles[i] = new AutoResetEvent(false);
        // set up the thread
        ThreadPool.QueueUserWorkItem(processItems, new Item_Thread(handles[i], item));
    }
    // wait for all the threads to finish
    WaitHandle.WaitAll(handles);
    // update the index
    index += itemsToTake;
    // make sure that the next batch of items to get is within range
    itemsToTake = (itemsToTake + index < myItems.Count) ? itemsToTake : myItems.Count - index;
}
This is a path that I currently take. However I do not like it at all. I know I can 'manage' the thread pool itself, but I have heard it is not advisable to do so. So what is the alternative? The semaphore class?
Thanks.
Instead of using ThreadPool directly, you might also consider using TPL or PLINQ. For example, with PLINQ you could do something like this:
getItems().AsParallel()
.WithDegreeOfParallelism(numberOfThreadsYouWant)
.ForAll(item => process(item));
or using Parallel:
var options = new ParallelOptions {MaxDegreeOfParallelism = numberOfThreadsYouWant};
Parallel.ForEach(getItems(), options, item => process(item));
Make sure that specifying the degree of parallelism does actually improve performance of your application. TPL and PLINQ use ThreadPool by default, which does a very good job of managing the number of threads that are running. In .NET 4, ThreadPool implements algorithms that add more processing threads only if that improves performance.
Don't use THE threadpool, get another one (just look on Google, there are half a dozen implementations out there) and manage that yourself.
Managing THE threadpool is not advisable, as a lot of internal workings may go there; managing your OWN threadpool instance is totally OK.
It looks like you can control the maximum number of threads using ThreadPool.SetMaxThreads, although I haven't tested this.
Assuming the question is "How do I limit the number of worker threads?", the answer would be to use a producer-consumer queue where you control the number of worker threads. Just queue your items and let it handle the workers.
Here is a generic implementation you could use.
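If you don't want to pull in an external implementation, a minimal producer-consumer sketch over BlockingCollection<T> could look like this (Item, ProcessItem and getItems stand in for the question's own types and methods):

var queue = new BlockingCollection<Item>();
const int workerCount = 10;

// A fixed number of consumers - this is the concurrency limit.
var workers = Enumerable.Range(0, workerCount)
    .Select(_ => Task.Run(() =>
    {
        foreach (var item in queue.GetConsumingEnumerable())
            ProcessItem(item); // placeholder for the per-item work
    }))
    .ToArray();

// Producer side.
foreach (var item in getItems())
    queue.Add(item);
queue.CompleteAdding(); // lets GetConsumingEnumerable complete

Task.WaitAll(workers);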
You can use the ThreadPool.SetMaxThreads method:
http://msdn.microsoft.com/en-us/library/system.threading.threadpool.setmaxthreads.aspx
In the documentation, there is a mention of SetMaxThreads ...
public static bool SetMaxThreads (
int workerThreads,
int completionPortThreads
)
Sets the number of requests to the thread pool that can be active concurrently. All requests above that number remain queued until thread pool threads become available.
However:
You cannot set the number of worker threads or the number of I/O completion threads to a number smaller than the number of processors in the computer.
But I guess you are anyway better served by using a non-singleton thread pool.
There is no reason to deal with hybrid thread synchronization constructs (such as AutoResetEvent) and the ThreadPool.
You can use a class that can act as the coordinator responsible for executing all of your code asynchronously.
Wrap what the "Item_Thread" does in a Task or with the APM pattern. Then use the AsyncCoordinator class by Jeffrey Richter (it can be found in the code accompanying the book CLR via C#, 3rd Edition).