I set the maximum thread count to 10, then queued 22,000 tasks using ThreadPool.QueueUserWorkItem.
After running the program, it was very likely that not all 22,000 tasks had completed. Is there a limit on how many tasks can be queued for the available threads?
If you need to wait for all of the tasks to process, you need to handle that yourself. The ThreadPool threads are all background threads, and will not keep the application alive.
This is a relatively clean way to handle this type of situation:
using (var mre = new ManualResetEvent(false))
{
    int remainingToProcess = workItems.Count(); // Assuming workItems is a collection of "tasks"
    foreach (var item in workItems)
    {
        // The delegate closure below (in C# 4 and earlier) would capture
        // a reference to 'item', resulting in the incorrect item being sent
        // to ProcessTask each iteration. Use a local copy of the 'item'
        // variable instead. C# 5/VS2012 will not require the local here.
        var localItem = item;
        ThreadPool.QueueUserWorkItem(delegate
        {
            // Replace this with your "work"
            ProcessTask(localItem);

            // This will (safely) decrement the remaining count, and allow
            // the main thread to continue when we're done
            if (Interlocked.Decrement(ref remainingToProcess) == 0)
                mre.Set();
        });
    }
    mre.WaitOne();
}
That being said, it's usually better to group your work items together if you have thousands of them, rather than treating each one as a separate work item for the threadpool. There is some overhead involved in managing the list of items, and since you won't be able to process 22,000 at a time, you're better off grouping them into blocks. Having each single work item process 50 or so will probably help your overall throughput quite a bit.
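A rough sketch of that grouping idea, reusing the hypothetical workItems collection and ProcessTask method from the example above (the block size of 50 is just the ballpark figure mentioned):

const int blockSize = 50;
var items = workItems.ToList();
for (int i = 0; i < items.Count; i += blockSize)
{
    // Each queued work item processes a block of ~50 items instead of one.
    var block = items.GetRange(i, Math.Min(blockSize, items.Count - i));
    ThreadPool.QueueUserWorkItem(delegate
    {
        foreach (var item in block)
            ProcessTask(item);
        // If you need to wait for completion, decrement a shared counter
        // once per block, exactly as in the ManualResetEvent example above.
    });
}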
The queue has no practical limit; however, the pool itself will not exceed 64 wait handles, i.e., total threads active.
This is an implementation-dependent question, and the implementation of this function has changed a bit over time. But in .NET 4.0, you're essentially limited by the amount of memory in the system, as the tasks are stored in an in-memory queue. You can see this by digging through the implementation in Reflector.
From the documentation of ThreadPool:
Note: The threads in the managed thread pool are background threads. That is, their IsBackground properties are true. This means that a ThreadPool thread will not keep an application running after all foreground threads have exited.
Is it possible that you're exiting before all tasks have been processed?
I understand that thread pool priority should not/cannot be changed by the running process, but is the priority of a particular task running on the thread pool somewhat priced in with the calling process's priority?
In other words, do all tasks in the thread pool run at the same priority, regardless of the calling process's priority?
Thank you.
Update 1: I should have been more specific; I am referring to the threads inside Parallel.ForEach.
I understand that thread pool priority should not/cannot be changed by the running process,
That's not exactly right. You can change a thread pool thread's priority (inside the delegate itself), and it'll run with the new priority, but the default one will be restored when its task finishes and it is sent back to the pool.
ThreadPool.QueueUserWorkItem(delegate(object state) {
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    // Code in this function will run with Highest priority
});
is the priority of a particular task running on the thread pool somewhat priced in with the calling process's priority?
Yes, and it doesn't apply only to the thread pool's threads. In Windows, a process's priority is given by its class (from IDLE_PRIORITY_CLASS to REALTIME_PRIORITY_CLASS). Together with a thread's priority (from THREAD_PRIORITY_IDLE to THREAD_PRIORITY_TIME_CRITICAL), it is used to calculate the thread's final priority.
From MSDN:
The process priority class and thread priority level are combined to form the base priority of each thread.
Note that it's not simply a base priority plus an offset:
NORMAL_PRIORITY_CLASS + THREAD_PRIORITY_IDLE == 1
NORMAL_PRIORITY_CLASS + THREAD_PRIORITY_TIME_CRITICAL == 15
But:
REALTIME_PRIORITY_CLASS + THREAD_PRIORITY_IDLE == 16
REALTIME_PRIORITY_CLASS + THREAD_PRIORITY_TIME_CRITICAL == 31
Moreover, threads can have a temporary boost (decided and managed by the Windows scheduler). Be aware that a process can also change its own priority class.
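For example, a process can lower (or raise) its own class through System.Diagnostics; a minimal sketch:

using System.Diagnostics;

// A process can change its own priority class at runtime;
// here the current process drops itself to BelowNormal.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;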
in other words, do all tasks in the thread pool run at the same priority, regardless of the calling process's priority?
No. A thread's priority depends on its process's priority (see the previous paragraph), and each thread in the pool can temporarily have a different priority. Also note that a pool thread's priority isn't affected by the caller thread's priority:
ThreadPool.QueueUserWorkItem(delegate(object s1) {
    Thread.CurrentThread.Priority = ThreadPriority.Highest;

    ThreadPool.QueueUserWorkItem(delegate(object s2) {
        // This code is executed with ThreadPriority.Normal

        Thread.CurrentThread.Priority = ThreadPriority.Lowest;
        // This code is executed with ThreadPriority.Lowest
    });

    // This code is executed with ThreadPriority.Highest
});
EDIT: .NET tasks use the thread pool, so what I wrote above still applies. If, for example, you're enumerating a collection with Parallel.ForEach, then to increase thread priority you have to do it inside your loop:
Parallel.ForEach(items, item => {
    Thread.CurrentThread.Priority = ThreadPriority.Highest;
    // Your code here...
});
Just a warning: be careful when you change priorities. If, for example, two threads use a shared resource (protected by a lock), there are many races to acquire that resource, and if one of them has the highest priority, you may end up with very high CPU usage (because of the spinning behavior of Monitor.Enter). This is just one issue; please refer to MSDN for more details (increasing a thread's priority may even result in worse performance).
do all tasks in the thread pool run at the same priority, regardless of the calling process's priority?
They have to. The only thing dropped off at the pool is a delegate. That holds a reference to an object but not to the Thread that dropped it off.
The ones that are currently running have the same priority. But there's a queue for the ones that aren't running yet - so in practice, there is a "priority". Even more confusingly, thread priorities can be boosted (and limited) by the OS, for example when two threads in the threadpool depend on each other (e.g. one is blocking the other). And of course, any time something is blocking on the threadpool, you're wasting resources :D
That said, you shouldn't really change thread priorities at all, threadpool or not. You don't really need to, and thread (and process) priorities don't work the way you probably expect them to - it's just not worth it. Keep everything at normal, and just ignore that there's a Priority property, and you'll avoid a lot of unnecessary problems :)
You'll find a lot of nice explanations on the internet - for example, http://blog.codinghorror.com/thread-priorities-are-evil/. Those are usually dated, of course - but so is the concept of thread priorities: they were designed for single-core machines at a time when OSes weren't all that good at pre-emptive multitasking.
I was writing some code to process a lot of data, and I thought it would be useful to have Parallel.ForEach create a file for each thread it creates so the output doesn't need to be synchronized (by me at least).
It looks something like this:
Parallel.ForEach(vals,
    new ParallelOptions { MaxDegreeOfParallelism = 8 },
    () => GetWriter(), // returns a new BinaryWriter backed by a file with a guid name
    (item, state, writer) =>
    {
        if (something)
        {
            state.Break();
            return writer;
        }

        List<Result> results = new List<Result>();
        foreach (var subItem in item.SubItems)
            results.Add(ProcessItem(subItem));

        if (results.Count > 0)
        {
            foreach (var result in results)
                result.Write(writer);
        }
        return writer;
    },
    (writer) => writer.Dispose());
What I expected to happen was that up to 8 files would be created and would persist through the entire run time. Then each would be Disposed when the entire ForEach call finishes. What really happens is that the localInit seems to be called once for each item, so I end up with hundreds of files. The writers are also getting disposed at the end of each item that is processed.
This shows the same thing happening:
var vals = Enumerable.Range(0, 10000000).ToArray();
long sum = 0;

Parallel.ForEach(vals,
    new ParallelOptions { MaxDegreeOfParallelism = 8 },
    () => { Console.WriteLine("init " + Thread.CurrentThread.ManagedThreadId); return 0L; },
    (i, state, common) =>
    {
        Thread.Sleep(10);
        return common + i;
    },
    (common) => Interlocked.Add(ref sum, common));
I see:
init 10
init 14
init 11
init 13
init 12
init 14
init 11
init 12
init 13
init 11
... // hundreds of lines over < 30 seconds
init 14
init 11
init 18
init 17
init 10
init 11
init 14
init 11
init 14
init 11
init 18
Note: if I leave out the Thread.Sleep call, it sometimes seems to function "correctly": localInit only gets called once for each of the 4 threads it decides to use on my PC. Not every time, however.
Is this the desired behavior of the function? What's going on behind the scenes that causes it to do this? And lastly, what's a good way to get my desired functionality, ThreadLocal?
This is on .NET 4.5, by the way.
Parallel.ForEach does not work the way you think it does. It's important to note that the method is built on top of the Task classes and that the relationship between Task and Thread is not 1:1. You can have, for example, 10 tasks that run on 2 managed threads.
Try using this line in your method body instead of the current one:
Console.WriteLine("ThreadId {0} -- TaskId {1} ",
Thread.CurrentThread.ManagedThreadId, Task.CurrentId);
You should see that the same ThreadId is reused across many different tasks, shown by their unique ids. You'll see this even more if you leave in, or increase, your call to Thread.Sleep.
The (very) basic idea of how the Parallel.ForEach method works is that it takes your enumerable and creates a series of tasks that will process sections of the enumeration; the way this is done depends a lot on the input. There is also some special logic that checks for the case of a task exceeding a certain number of milliseconds without completing. If that happens, a new task may be spawned to help relieve the work.
If you look at the documentation for the localInit function in Parallel.ForEach, you'll notice that it says it returns the initial state of the local data for each task, not each thread.
You might ask why there are more than 8 tasks being spawned. That answer is similar to the last, found in the documentation for ParallelOptions.MaxDegreeOfParallelism.
Changing MaxDegreeOfParallelism from the default only limits how many concurrent tasks will be used.
This limit is only on the number of concurrent tasks, not a hard limit on the number of tasks that will be created during the entire time it is processing. And as I mentioned above, there are times when a separate task will be spawned, which results in your localInit function being called multiple times and hundreds of files being written to disk.
Writing to disk is certainly an operation with a bit of latency, particularly if you're using synchronous I/O. When the disk operation happens, it blocks the entire thread; the same happens with Thread.Sleep. If a Task does this, it blocks the thread it is currently running on, and no other tasks can run on it. Usually in these cases the scheduler will spawn a new Task to help pick up the slack.
And lastly, what's a good way to get my desired functionality, ThreadLocal?
The bottom line is that thread locals don't make sense with Parallel.ForEach, because you're not dealing with threads; you're dealing with tasks. A thread local could be shared between tasks, because many tasks can run on the same thread. Also, a task's thread local could change mid-execution, because the scheduler could preempt it from running and then continue its execution on a different thread, which would have a different thread local.
I'm not sure of the best way to do it, but you could rely on the localInit function to pass in whatever resource you'd like, only allowing a resource to be used in one thread at a time. You can use localFinally to mark it as no longer in use and thus available for another task to acquire. This is what those methods were designed for; each method is only called once per task that is spawned (see the remarks section of the Parallel.ForEach MSDN documentation).
You can also split the work yourself and create your own set of threads to run it. However, this is less ideal, in my opinion, since the Parallel class already does this heavy lifting for you.
What you're seeing is the implementation trying to get your work done as quickly as possible.
To do this, it tries using different numbers of tasks to maximize throughput. It grabs a certain number of threads from the thread pool and runs your work for a bit. It then tries adding and removing threads to see what happens. It continues doing this until all your work is done.
The algorithm is quite dumb in that it doesn't know if your work is using a lot of CPU, or a lot of IO, or even if there is a lot of synchronization and the threads are blocking each other. All it can do is add and remove threads and measure how fast each unit of work completes.
This means it is continually calling your localInit and localFinally functions as it injects and retires threads - which is what you have found.
Unfortunately, there is no easy way to control this algorithm. Parallel.ForEach is a high-level construct that intentionally hides much of the thread-management code.
Using a ThreadLocal might help a bit, but it relies on the thread pool reusing the same threads when Parallel.ForEach asks for new ones. This is not guaranteed; in fact, it is unlikely that the thread pool will use exactly 8 threads for the whole call. This means you will again be creating more files than necessary.
One thing that is guaranteed is that Parallel.ForEach will never use more than MaxDegreeOfParallelism threads at any one time.
You can use this to your advantage by creating a fixed-size "pool" of files that can be re-used by whichever threads are running at a particular time. You know that only MaxDegreeOfParallelism threads can run at once, so you can create that number of files before calling ForEach. Then grab one in your localInit and release it in your localFinally.
Of course, you will have to write this pool yourself and it must be thread-safe as it will be called concurrently. A simple locking strategy should be good enough, though, because threads are not injected and retired very quickly compared to the cost of a lock.
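A minimal sketch of such a pool, guarded by a simple lock as described; CreateWriter is a hypothetical stand-in for whatever opens one of your file-backed writers:

using System;
using System.Collections.Generic;
using System.IO;

class WriterPool
{
    private readonly Stack<BinaryWriter> free = new Stack<BinaryWriter>();
    private readonly object gate = new object();

    public WriterPool(int size, Func<BinaryWriter> createWriter)
    {
        // Pre-create one writer per concurrently running thread.
        for (int i = 0; i < size; i++)
            free.Push(createWriter());
    }

    public BinaryWriter Acquire()
    {
        // Never empty: at most 'size' tasks run at once, each holding one writer.
        lock (gate) return free.Pop();
    }

    public void Release(BinaryWriter writer)
    {
        lock (gate) free.Push(writer);
    }
}

You would create it as new WriterPool(8, CreateWriter) before the loop, return pool.Acquire() from localInit, call pool.Release(writer) in localFinally, and dispose all the writers after ForEach returns.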
According to MSDN the localInit method is called once for each task, not for each thread:
The localInit delegate is invoked once for each task that participates in the loop's execution and returns the initial local state for each of those tasks.
localInit is called whenever a thread is created for the loop. If the body takes long enough, the scheduler may create another thread and suspend the current one, and whenever it creates another thread, it calls localInit again. Also, when Parallel.ForEach is called, it creates as many threads as the MaxDegreeOfParallelism value. For example:

var k = Enumerable.Range(0, 1);
Parallel.ForEach(k, new ParallelOptions() { MaxDegreeOfParallelism = 4 } .....

it creates 4 threads when it is first called.
I am using multiple threads in my application with a while(true) loop, and now I want to exit the loop when all the active threads have completed their work.
Assuming that you have a list of the threads themselves, here are two approaches.
Solution the first:
Use Thread.Join() with a TimeSpan parameter to sync up with each thread in turn. The return value tells you whether the thread has finished or not.
Solution the second:
Check Thread.IsAlive to see if the thread is still running.
In either situation, make sure that your main thread yields processor time to the running threads, or your wait loop will consume most/all of the CPU and starve your worker threads.
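A sketch of the first approach, assuming threads is the List<Thread> you are tracking:

// Poll each thread with a short Join timeout; Join yields the CPU
// while waiting, so this loop doesn't starve the workers.
bool allDone = false;
while (!allDone)
{
    allDone = true;
    foreach (Thread t in threads)
    {
        // Join returns false if the thread is still running when the timeout expires.
        if (!t.Join(TimeSpan.FromMilliseconds(100)))
            allDone = false;
    }
}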
You can use Process.GetCurrentProcess().Threads.Count.
There are various approaches here, but ultimately most of them come down to having the spawned threads do something whenever they finish (by success, or via an exception, which you don't want anyway). A simple approach might be to use Interlocked.Decrement to reduce a counter, and if it reaches zero (or goes negative, which probably means an error) release a ManualResetEvent or Monitor.Pulse an object; in either case, the original thread would be waiting on that object. A number of such approaches are discussed here.
Of course, it might be easier to look at the TPL bits in .NET 4.0, which provide a lot of new options here (not least things like Parallel.For and PLINQ).
If you are using a synchronized work queue, it might also be possible to set that queue to close (drain) itself and simply wait for the queue to be empty. The assumption here is that your worker threads are doing something like:
T workItem;
while (queue.TryDequeue(out workItem)) // this may block until there is either
{                                      // something to do, or the queue is terminated
    ProcessWorkItem(workItem);
}
// queue has closed - exit the thread
in which case, once the queue is empty, all your worker threads should already be in the process of exiting.
You can use Thread.Join(). The Join method blocks the calling thread until the thread on which Join was called terminates.
So if you have a list of threads, you can loop through and call Join on each one. Your loop will only exit when all the threads are dead. Something like this:
for (int i = 0; i < childThreadList.Count; i++)
{
    childThreadList[i].Join();
}
// ...The following code will execute when all threads in the list have terminated...
I find that using the Join() method is the cleanest way. I use multiple threads frequently, and each thread typically loads data from a different data source (Informix, Oracle, and SQL at the same time). A simple loop, as mentioned above, calling Join() on each thread object (which I store in a simple List) works!
Carlos Merighe.
I prefer using a HashSet of Threads:
// create a HashSet of heavy tasks (threads) to run
HashSet<Thread> Threadlist = new HashSet<Thread>();
Threadlist.Add(new Thread(() => SomeHeavyTask1()));
Threadlist.Add(new Thread(() => SomeHeavyTask2()));
Threadlist.Add(new Thread(() => SomeHeavyTask3()));

// start the threads
foreach (Thread T in Threadlist)
    T.Start();

// these will execute sequentially
NotSoHeavyTask1();
NotSoHeavyTask2();
NotSoHeavyTask3();

// loop through the threads to see if they are still active, and join them to the main thread
foreach (Thread T in Threadlist)
    if (T.ThreadState == ThreadState.Running)
        T.Join();

// finally this code will execute
MoreTasksToDo();
This isn't about the different methods I could or should be using to utilize the queues in the best manner, rather something I have seen happening that makes no sense to me.
void Runner()
{
    // member variable
    queue = Queue.Synchronized(new Queue());
    while (true)
    {
        if (0 < queue.Count)
        {
            queue.Dequeue();
        }
    }
}
This is run in a single thread:
var t = new Thread(Runner);
t.IsBackground = true;
t.Start();
Other events are "Enqueue"ing elsewhere. What I've seen happen is that, over a period of time, Dequeue will actually throw an InvalidOperationException: queue empty. This should be impossible, seeing as how the count guarantees there is something there, and I'm positive that nothing else is "Dequeue"ing.
The question(s):
Is it possible that the Enqueue actually increases the count before the item is fully on the queue (whatever that means...)?
Is it possible that the thread is somehow restarting (expiring, reseting...) at the Dequeue statement, but immediately after it already removed an item?
Edit (clarification):
These code pieces are part of a Wrapper class that implements the background helper thread. The Dequeue here is the only Dequeue, and all Enqueue/Dequeue are on the Synchronized member variable (queue).
Using Reflector, you can see that no, the count does not get increased until after the item is added.
As Ben points out, it does seem as though you have multiple callers of dequeue.
You say you are positive that nothing else is calling dequeue. Is that because you only have the one thread calling dequeue? Is dequeue called anywhere else at all?
EDIT:
I wrote a little sample code, but could not get the problem to reproduce. It just kept running and running without any exceptions.
How long was it running before you got errors? Maybe you can share a bit more of the code.
class Program
{
    static Queue q = Queue.Synchronized(new Queue());
    static bool running = true;

    static void Main()
    {
        Thread producer1 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(100);
            }
        });

        Thread producer2 = new Thread(() =>
        {
            while (running)
            {
                q.Enqueue(Guid.NewGuid());
                Thread.Sleep(25);
            }
        });

        Thread consumer = new Thread(() =>
        {
            while (running)
            {
                if (q.Count > 0)
                {
                    Guid g = (Guid)q.Dequeue();
                    Console.Write(g.ToString() + " ");
                }
                else
                {
                    Console.Write(" . ");
                }
                Thread.Sleep(1);
            }
        });

        consumer.IsBackground = true;
        consumer.Start();
        producer1.Start();
        producer2.Start();

        Console.ReadLine();
        running = false;
    }
}
Here is what I think the problematic sequence is:
(0 < queue.Count) evaluates to true, the queue is not empty.
This thread gets preempted and another thread runs.
The other thread removes an item from the queue, emptying it.
This thread resumes execution, but is now within the if block, and attempts to dequeue from an empty queue.
However, you say nothing else is dequeuing...
Try outputting the count inside the if block. If you see the count jump down by more than one, someone else is dequeuing.
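If it does turn out to be a race between the count check and the dequeue, one way to rule it out entirely is to make the two atomic by locking the synchronized queue's SyncRoot around both:

// Locking SyncRoot makes the check and the dequeue atomic, so no other
// thread can empty the queue between the two calls.
lock (queue.SyncRoot)
{
    if (queue.Count > 0)
        queue.Dequeue();
}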
Here's a possible answer from the MSDN page on this topic:
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
My guess is that you're correct - at some point, there's a race condition happening, and you end up dequeuing something that isn't there.
A Mutex or Monitor.Lock is probably appropriate here.
Good luck!
Are the other areas that are "Enqueuing" data also using the same synchronized queue object? In order for the Queue.Synchronized to be thread-safe, all Enqueue and Dequeue operations must use the same synchronized queue object.
From MSDN:
To guarantee the thread safety of the Queue, all operations must be done through this wrapper only.
Edited:
If you are looping over many items that involve heavy computation, or if you are using a long-term thread loop (communications, etc.), you should consider having a wait function such as System.Threading.Thread.Sleep, System.Threading.WaitHandle.WaitOne, System.Threading.WaitHandle.WaitAll, or System.Threading.WaitHandle.WaitAny in the loop; otherwise it might kill system performance.
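For instance, the Runner loop above could sleep briefly whenever the queue is empty (keeping the atomic check-and-dequeue from the earlier note):

while (true)
{
    bool dequeued = false;
    lock (queue.SyncRoot) // keep the check and the dequeue atomic
    {
        if (queue.Count > 0)
        {
            queue.Dequeue();
            dequeued = true;
        }
    }
    if (!dequeued)
        Thread.Sleep(10); // yield the CPU instead of spinning; tune to your workload
}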
Question 1: If you're using a synchronized queue, then no, you're safe! But you'll need to use the synchronized instance on both sides, the supplier and the consumer.
Question 2: Terminating your worker thread when there is no work to do is a simple job. However, you either need a monitoring thread or have the queue start a background worker thread whenever the queue has something to do. The latter sounds more like the ActiveObject pattern than a simple queue (whose single responsibility says that it should only do queueing).
In addition, I'd go for a blocking queue instead of your code above. The way your code works, it requires CPU processing power even when there is no work to do. A blocking queue lets your worker thread sleep whenever there is nothing to do. You can have multiple sleeping threads without them using CPU processing power.
C# doesn't come with a blocking queue implementation, but there are many out there. See this example and this one.
Another option for making thread-safe use of queues is the ConcurrentQueue<T> class, introduced in .NET Framework 4 (after this question was asked). It may help you avoid having to write your own synchronization code, or at least make it much simpler.
From .NET Framework 4.6 onward, ConcurrentQueue<T> also implements the interface IReadOnlyCollection<T>.
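A minimal sketch of the producer/consumer pair using ConcurrentQueue<T>; TryDequeue folds the count check and the removal into one atomic operation, so the race described above cannot occur:

using System.Collections.Concurrent;

var queue = new ConcurrentQueue<Guid>();

// Producer side:
queue.Enqueue(Guid.NewGuid());

// Consumer side:
Guid item;
if (queue.TryDequeue(out item))
{
    // process item; TryDequeue returned it atomically
}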
I have multiple threads (a C# application running on IIS) that all need to communicate with the same MQ backend. To minimize network traffic, I need to only send a backend request when there is work to be done. There will be one thread to monitor whether there is work to be done, and it needs to notify the other threads that they should also begin processing. The current solution involves the monitor thread setting a global variable and having the other threads loop and check it, e.g., in the monitor thread:
CheckIfWorkAvailable()
{
    while (true)
    {
        if (queue.Empty != true)
        {
            workToBeDone = true;
        }
    } // end while loop
}
and then in the worker threads:
DoWork()
{
    while (true)
    {
        if (workToBeDone == true)
        {
            // do work...
        }
        else
        {
            Thread.Sleep(x seconds)
        }
    } // end while loop
}
Can the monitor thread notify the worker threads when there is work to do, instead of having them just loop and sleep? The worker threads also set a counter indicating they are working and then decrement it when their work is done, so the workToBeDone flag can be set to false.
Check out WaitHandle and its derived classes. EventWaitHandle may suit your needs.
As well as the WaitHandle classes pointed out by Kent, simple Monitor.Wait and Monitor.Pulse/PulseAll can do this easily. They're "lighter" than event handles, although somewhat more primitive. (You can't wait on multiple monitors, etc.)
I have an example of this (as a producer consumer queue) in my threading article.
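A bare-bones sketch of the Wait/Pulse pattern (not the article's exact code; WorkItem is a hypothetical type):

private readonly object gate = new object();
private readonly Queue<WorkItem> work = new Queue<WorkItem>();

// Monitor thread: enqueue and wake one waiting worker.
public void SignalWork(WorkItem item)
{
    lock (gate)
    {
        work.Enqueue(item);
        Monitor.Pulse(gate); // use PulseAll to wake every waiting worker
    }
}

// Worker thread: sleep until pulsed, then take an item.
public void WorkerLoop()
{
    while (true)
    {
        WorkItem item;
        lock (gate)
        {
            while (work.Count == 0)
                Monitor.Wait(gate); // releases the lock while waiting
            item = work.Dequeue();
        }
        // ...process item outside the lock...
    }
}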
In your scenario it may also be possible to use the ThreadPool class directly. This means that you do not need to set up the threads you will be consuming, and it also allows you to scale the number of threads based on the work to be completed.
If you are into using CTPs in your projects, you might want to check out the TPL, as it has some more advanced synchronization and tasking features.
Use ManualResetEvent for cases where you want all worker threads to proceed when a state is met (looks like what you are wanting here). Use AutoResetEvent in cases where you only want to signal a single worker each time some work becomes available. Use Semaphore when you want to allow a specific number of threads to proceed. Almost never use a global variable for this type of thing, and if you do, mark it as volatile.
Be careful in this situation. You don't want to cause "lock convoys", where you release all the workers to hit the queue at once every time a single item is released, only for them to have to wait again.
http://msdn.microsoft.com/en-us/library/yy12yx1f(VS.80).aspx
You can use AutoResetEvent.
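For example, a minimal sketch:

static readonly AutoResetEvent workReady = new AutoResetEvent(false);

// Monitor thread, whenever it sees work:
workReady.Set();

// Worker thread: exactly one waiting worker wakes per Set(), because
// an AutoResetEvent resets itself after releasing a single waiter.
while (true)
{
    workReady.WaitOne(); // blocks without spinning until signaled
    // ...do one unit of work...
}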