Implementing Parallel Task Queues in .NET (C#)

An image speaks more than words, so here is basically what I want to achieve :
(I have also used a fruit analogy for the sake of genericity and simplicity.)
I've done this kind of thing many times in the past using different kinds of .NET classes (BackgroundWorkers, the ThreadPool, self-made stuff...).
I am asking here for advice and to get fresh ideas on how to do this efficiently.
This is a high-computing program, so I am receiving millions of pieces of data (similar in structure but not in content) that have to be queued and processed according to their content type. Hence, I want to avoid creating a parallel task for each single piece of data to be processed (this overloads the CPU and is poor design IMHO). That's why I got the idea of having only ONE thread running for EACH data TYPE, dedicated to processing it (knowing that the "Press Juice" method is generic and independent of the fruit to be pressed).
Any ideas and implementation suggestions are welcome.
I am free to give any further details.

TPL DataFlow seems like a very strong candidate for this.
Take a read of the intro here.

If all you really want is one thread (or a constant number of threads) for each type of fruit, then the simplest solution might be to use a BlockingCollection for each type of fruit. Your data bus will deliver the fruit to those collections, and your processing threads will take from them. But this means if there are no apples for now, the thread will be blocked, doing nothing.
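For concreteness, here is a minimal sketch of that layout, assuming the Apple/AppleJuice types and the PressJuice method from the fruit analogy:
using System.Collections.Concurrent;
using System.Threading;

var appleQueue = new BlockingCollection<Apple>();

// One dedicated consumer thread per data type.
var appleWorker = new Thread(() =>
{
    // GetConsumingEnumerable blocks while the queue is empty and ends
    // once the producer calls CompleteAdding().
    foreach (Apple apple in appleQueue.GetConsumingEnumerable())
    {
        AppleJuice juice = PressJuice(apple);
        // ...hand the juice to the next stage...
    }
});
appleWorker.Start();

// On the data bus side:
appleQueue.Add(incomingApple);
// ...and when the stream ends:
appleQueue.CompleteAdding();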
A more flexible and efficient approach would be to use TPL Dataflow. With that, you don't work with threads or tasks, you work with blocks. For example your Thread C could be represented as a TransformBlock<Apple, AppleJuice>.
By default, each block uses at most one thread, but they can be easily configured to use more of them (by setting MaxDegreeOfParallelism). Also, dataflow blocks work well with the new C# 5.0 async-await, which could be a big advantage.
There are also things you should be careful about. For example, TDF is by default optimized for throughput, not latency. So, if your thread pool is busy and you have lots of oranges incoming and only one apple, it's possible that the apple will be processed only after all of the oranges are. But this can be also fixed by configuring the blocks properly (by setting MaxMessagesPerTask).
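As a rough sketch of what that could look like (the fruit types, the Bottle step, and the exact option values are illustrative, not a prescription):
using System.Threading.Tasks.Dataflow; // TPL Dataflow NuGet package

var pressApples = new TransformBlock<Apple, AppleJuice>(
    apple => PressJuice(apple),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 1, // one logical worker per fruit type
        MaxMessagesPerTask = 50     // yield the underlying thread periodically
    });

var bottle = new ActionBlock<AppleJuice>(juice => Bottle(juice));
pressApples.LinkTo(bottle, new DataflowLinkOptions { PropagateCompletion = true });

// The data bus posts fruit as it arrives:
pressApples.Post(incomingApple);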

I would caution against a "worker thread per data type" approach. This makes the assumption that actual input load will conform to the equivalence classes that are handy for developers. Do you know if bananas are 5x slower to juice than oranges? What happens if every Tuesday is "apple celebration day" and everyone juices more fruit than usual, and all of it is apples?
Running things in parallel is about performance, not about the domain. Don't model it after your domain, model it to provide the lowest average cycle time.


Parallel programming in games

In my game I have entities (about 2000-5000) that do very performance-heavy calculations every frame (or every 2nd or 3rd).
I know that parallel programming will help in this scenario.
The question is how to use multiple threads / CPU cores in the best way possible.
I tried the static Parallel class in .NET, but the startup cost for every Parallel.For / Parallel.ForEach is simply too big.
Does it start a number of threads each time I call a function of the Parallel class?
The game's target is 60 fps, so whatever approach I use, it should never involve starting multiple threads every game frame!
So my question is:
What other alternatives are there to parallel-process multiple entities?
Should I create the threads myself, and if so, what pitfalls are there?
Is the threadpool a good alternative?
By default, Parallel.For/Parallel.ForEach uses ThreadPool threads. In fact, the TPL APIs (the Parallel class, Task instances, etc.) all use ThreadPool threads.
The number of threads started by the TPL depends on many factors, such as the number of idle threads available in the pool. So assuming that the TPL will create lots of threads as soon as you submit a request is not entirely correct.
The important point is that you can control the number of threads the TPL uses to execute tasks via the MaxDegreeOfParallelism option. By setting this value to some number "X", you limit the number of items being processed concurrently to "X". Personally, I feel this is a very powerful feature of the TPL.
Also, if you do not want your task to block other requests in the queue, you can set TaskCreationOptions to LongRunning; in that case, the TPL will run the task on a dedicated thread.
In your case, you can try using Parallel.For/ForEach, set MaxDegreeOfParallelism to some optimal value, and see how that behaves.
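A hedged sketch of both options; entities, UpdateEntity, and RunPathfindingService are placeholders for the game's own code:
using System;
using System.Threading.Tasks;

// Cap the number of concurrently processed entities:
Parallel.ForEach(
    entities,
    new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
    entity => UpdateEntity(entity));

// Keep a lengthy job off the pool's worker threads entirely:
var longJob = Task.Factory.StartNew(
    () => RunPathfindingService(),
    TaskCreationOptions.LongRunning); // hints the TPL to use a dedicated thread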
You would need several computers or servers to test a parallel-programmed game properly; otherwise, on a single machine, the extra threads mostly add overhead.
Additionally, since the game is updated and drawn 60 times a second, adding more threads is not necessarily logical: your game is already running on threads.
Maybe you have to change or optimize your algorithm instead. Don't let the CPU do everything; use the GPU as much as possible.
There isn't a simple answer.
When every entity of your game can be calculated without knowing any other entity, then you can simply move that calculation to a given number of threads. A thread pool would be a good start.
You could set this up when your program starts.
But even when you do so, you have to check whether switching threads performs better than having one single thread; "context switch" is the keyword to search for here.
But what happens when entity A on thread A does a calculation which affects entity B on thread B and forces entity B to be destroyed, which forces entity C on thread C to be destroyed as well?
And now, to make it harder: entity C was calculated before entity B, before entity A...
What should happen? Is it ok that those entities will be cleaned up in 2 or 3 frames in the future?
Does it affect or break your game logic?
If so then things are getting complicated.

Introduction to threading - processing an XML file

I have never written multi-threaded code before (barring a few basic BackgroundWorker tricks) and am hoping for some guidance on how to approach my problem.
I have an XML file which is a serialized List<Stock>. For each one of these stock items I need to perform a webservice call called UpdatePrice().
What I want to do is take each one of these items, create a thread pool (whose size depends on the number of rows I need to process), and begin making web service calls.
I am not asking for a complete solution (obviously) but would really appreciate some guidance about how one would typically solve this problem.
The biggest issue that I see arising is how I would designate which threads work on which objects. Do I simply take the list, divide it by the number of threads I create, and split the work? Or am I better off letting each thread arbitrarily pick an item from the list to process? (Then I have locking issues, but as a plus I can ensure no thread is idle.)
As I said before I am not looking for a complete solution but just some basic guidance on where to start because honestly I am lost on this one and haven't written a single line of code.
PS: Also, are auto-generated web service proxies in .NET thread-safe?
I would suggest looking into TPL and PLINQ for a solution. A simple example solution using Parallel.ForEach() could look like this (parallel calls limited to 5 in the example).
List<Stock> stocks; // populated by deserializing the XML file

Parallel.ForEach(stocks,
    new ParallelOptions() { MaxDegreeOfParallelism = 5 },
    (stock) =>
    {
        float newPrice = UpdatePrice(stock.TickerSymbol); // web service call
        stock.Price = newPrice;
    });
I would:
First, read the whole XML data synchronously.
Then, put each element to be processed in a single queue.
Then, spawn N processing threads; at the top of each loop iteration, each one "pops" an element off the queue, wrapping this specific piece of code in a mutex/semaphore (search for C# mutex, or concurrent access, or anything related). This is easily done in C# with the "lock" keyword on an arbitrary object.
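A minimal sketch of that layout, reusing the Stock type and UpdatePrice call from the question (the worker count is arbitrary):
using System.Collections.Generic;
using System.Threading;

var queue = new Queue<Stock>(stocksFromXml); // filled synchronously first
var sync = new object();

const int workerCount = 5;
var workers = new Thread[workerCount];
for (int i = 0; i < workerCount; i++)
{
    workers[i] = new Thread(() =>
    {
        while (true)
        {
            Stock stock;
            lock (sync) // only the "pop" needs the lock
            {
                if (queue.Count == 0) return;
                stock = queue.Dequeue();
            }
            stock.Price = UpdatePrice(stock.TickerSymbol); // web service call, outside the lock
        }
    });
    workers[i].Start();
}
foreach (var w in workers) w.Join();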
Hope this helps.
Pierre.
There's no point in using threads here. A thread can only give you one resource: more CPU cycles, provided that you have a CPU with multiple cores. That is not the resource you need to speed up this program; you need a faster Internet connection.
If you have a UI you don't want frozen, then the BackgroundWorker tricks will work just fine.

Why is the explicit management of threads a bad thing?

In a previous question, I made a bit of a faux pas. You see, I'd been reading about threads and had got the impression that they were the tastiest things since kiwi jello.
Imagine my confusion then, when I read stuff like this:
[T]hreads are A Very Bad Thing. Or, at least, explicit management of threads is a bad thing
and
Updating the UI across threads is usually a sign that you are abusing threads.
Since I kill a puppy every time something confuses me, consider this your chance to get your karma back in the black...
How should I be using threads?
Enthusiasm for learning about threading is great; don't get me wrong. Enthusiasm for using lots of threads, by contrast, is symptomatic of what I call Thread Happiness Disease.
Developers who have just learned about the power of threads start asking questions like "how many threads can I possibly create in one program?" This is rather like an English major asking "how many words can I use in a sentence?" Typical advice for writers is to keep your sentences short and to the point, rather than trying to cram as many words and ideas into one sentence as possible. Threads are the same way; the right question is not "how many can I get away with creating?" but rather "how can I write this program so that the number of threads is the minimum necessary to get the job done?"
Threads solve a lot of problems, it's true, but they also introduce huge problems:
Performance analysis of multi-threaded programs is often extremely difficult and deeply counterintuitive. I've seen real-world examples in heavily multi-threaded programs in which making a function faster without slowing down any other function or using more memory makes the total throughput of the system smaller. Why? Because threads are often like streets downtown. Imagine taking every street and magically making it shorter without re-timing the traffic lights. Would traffic jams get better, or worse? Writing faster functions in multi-threaded programs drives the processors towards congestion faster.
What you want is for threads to be like interstate highways: no traffic lights, highly parallel, intersecting at a small number of very well-defined, carefully engineered points. That is very hard to do. Most heavily multi-threaded programs are more like dense urban cores with stoplights everywhere.
Writing your own custom management of threads is insanely difficult to get right. The reason is because when you are writing a regular single-threaded program in a well-designed program, the amount of "global state" you have to reason about is typically small. Ideally you write objects that have well-defined boundaries, and that do not care about the control flow that invokes their members. You want to invoke an object in a loop, or a switch, or whatever, you go right ahead.
Multi-threaded programs with custom thread management require global understanding of everything that a thread is going to do that could possibly affect data that is visible from another thread. You pretty much have to have the entire program in your head, and understand all the possible ways that two threads could be interacting in order to get it right and prevent deadlocks or data corruption. That is a large cost to pay, and highly prone to bugs.
Essentially, threads make your methods lie. Let me give you an example. Suppose you have:
if (!queue.IsEmpty) queue.RemoveWorkItem().Execute();
Is that code correct? If it is single threaded, probably. If it is multi-threaded, what is stopping another thread from removing the last remaining item after the call to IsEmpty is executed? Nothing, that's what. This code, which locally looks just fine, is a bomb waiting to go off in a multi-threaded program. Basically that code is actually:
if (queue.WasNotEmptyAtSomePointInThePast) ...
which obviously is pretty useless.
So suppose you decide to fix the problem by locking the queue. Is this right?
lock(queue) {if (!queue.IsEmpty) queue.RemoveWorkItem().Execute(); }
That's not right either, necessarily. Suppose the execution causes code to run which waits on a resource currently locked by another thread, but that thread is waiting on the lock for queue - what happens? Both threads wait forever. Putting a lock around a hunk of code requires you to know everything that code could possibly do with any shared resource, so that you can work out whether there will be any deadlocks. Again, that is an extremely heavy burden to put on someone writing what ought to be very simple code. (The right thing to do here is probably to extract the work item in the lock and then execute it outside the lock. But... what if the items are in a queue because they have to be executed in a particular order? Now that code is wrong too because other threads can then execute later jobs first.)
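For illustration, the extract-then-execute variant looks roughly like this, using the same hypothetical queue API as above:
WorkItem item = null;
lock (queue)
{
    if (!queue.IsEmpty)
        item = queue.RemoveWorkItem();
}
// The lock is released before Execute runs, so a slow work item cannot
// deadlock other users of the queue...
if (item != null)
    item.Execute();
// ...but, as noted above, ordering across threads is no longer guaranteed.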
It gets worse. The C# language spec guarantees that a single-threaded program will have observable behaviour that is exactly as the program is specified. That is, if you have something like "if (M(ref x)) b = 10;" then you know that the code generated will behave as though x is accessed by M before b is written. Now, the compiler, jitter and CPU are all free to optimize that. If one of them can determine that M is going to be true and if we know that on this thread, the value of b is not read after the call to M, then b can be assigned before x is accessed. All that is guaranteed is that the single-threaded program seems to work like it was written.
Multi-threaded programs do not make that guarantee. If you are examining b and x on a different thread while this one is running then you can see b change before x is accessed, if that optimization is performed. Reads and writes can logically be moved forwards and backwards in time with respect to each other in single threaded programs, and those moves can be observed in multi-threaded programs.
This means that in order to write multi-threaded programs where there is a dependency in the logic on things being observed to happen in the same order as the code is actually written, you have to have a detailed understanding of the "memory model" of the language and the runtime. You have to know precisely what guarantees are made about how accesses can move around in time. And you cannot simply test on your x86 box and hope for the best; the x86 chips have pretty conservative optimizations compared to some other chips out there.
That's just a brief overview of just a few of the problems you run into when writing your own multithreaded logic. There are plenty more. So, some advice:
Do learn about threading.
Do not attempt to write your own thread management in production code.
Use higher-level libraries written by experts to solve problems with threads. If you have a bunch of work that needs to be done in the background and want to farm it out to worker threads, use a thread pool rather than writing your own thread creation logic. If you have a problem that is amenable to solution by multiple processors at once, use the Task Parallel Library. If you want to lazily initialize a resource, use the lazy initialization class rather than trying to write lock-free code yourself (a short sketch follows this list).
Avoid shared state.
If you can't avoid shared state, share immutable state.
If you have to share mutable state, prefer using locks to lock-free techniques.
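To make the lazy-initialization advice concrete, here is a minimal sketch using Lazy<T>; ExpensiveResource is a placeholder type:
using System;
using System.Threading;

static class ResourceCache
{
    // The factory runs exactly once, even under concurrent access;
    // no hand-written double-checked locking required.
    private static readonly Lazy<ExpensiveResource> Instance =
        new Lazy<ExpensiveResource>(
            () => new ExpensiveResource(),
            LazyThreadSafetyMode.ExecutionAndPublication);

    public static ExpensiveResource Resource
    {
        get { return Instance.Value; }
    }
}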
Explicit management of threads is not intrinsically a bad thing, but it's fraught with dangers and shouldn't be done unless absolutely necessary.
Saying threads are absolutely a good thing would be like saying a propeller is absolutely a good thing: propellers work great on airplanes (when jet engines aren't a better alternative), but wouldn't be a good idea on a car.
You cannot appreciate what kind of problems threading can cause unless you've debugged a three-way deadlock. Or spent a month chasing a race condition that happens only once a day. So, go ahead and jump in with both feet and make all the kind of mistakes you need to make to learn to fear the Beast and what to do to stay out of trouble.
There's no way I could offer a better answer than what's already here. But I can offer a concrete example of some multithreaded code that we actually had at my work that was disastrous.
One of my coworkers, like you, was very enthusiastic about threads when he first learned about them. So there started to be code like this throughout the program:
Thread t = new Thread(LongRunningMethod);
t.Start(GetThreadParameters());
Basically, he was creating threads all over the place.
So eventually another coworker discovered this and told the developer responsible: don't do that! Creating threads is expensive, you should use the thread pool, etc. etc. So a lot of places in the code that originally looked like the above snippet started getting rewritten as:
ThreadPool.QueueUserWorkItem(LongRunningMethod, GetThreadParameters());
Big improvement, right? Everything's sane again?
Well, except that there was a particular call in that LongRunningMethod that could potentially block -- for a long time. Suddenly every now and then we started seeing it happen that something our software should have reacted to right away... it just didn't. In fact, it might not have reacted for several seconds (clarification: I work for a trading firm, so this was a complete catastrophe).
What had ended up happening was that the thread pool was actually filling up with long-blocking calls, leading to other code that was supposed to happen very quickly getting queued up and not running until significantly later than it should have.
The moral of this story is not, of course, that the first approach of creating your own threads is the right thing to do (it isn't). It's really just that using threads is tough, and error-prone, and that, as others have already said, you should be very careful when you use them.
In our particular situation, many mistakes were made:
Creating new threads in the first place was wrong because it was far more costly than the developer realized.
Queuing all background work on the thread pool was wrong because it treated all background tasks indiscriminately and did not account for the possibility of asynchronous calls actually being blocked.
Having a long-blocking method by itself was the result of some careless and very lazy use of the lock keyword.
Insufficient attention was given to ensuring that the code that was being run on background threads was thread-safe (it wasn't).
Insufficient thought was given to the question of whether making a lot of the affected code multithreaded was even worth doing to begin with. In plenty of cases, the answer was no: multithreading just introduced complexity and bugs, made the code less comprehensible, and (here's the kicker): hurt performance.
I'm happy to say that today, we're still alive and our code is in a much healthier state than it once was. And we do use multithreading in plenty of places where we've decided it's appropriate and have measured performance gains (such as reduced latency between receiving a market data tick and having an outgoing quote confirmed by the exchange). But we learned some pretty important lessons the hard way. Chances are, if you ever work on a large, highly multithreaded system, you will too.
Unless you are at the level of being able to write a fully-fledged kernel scheduler, you will always get explicit thread management wrong.
Threads can be the most awesome thing since hot chocolate, but parallel programming is incredibly complex. However, if you design your threads to be independent then you can't shoot yourself in the foot.
As a rule of thumb, if a problem is decomposed into threads, the threads should be as independent as possible, with as few but well-defined shared resources as possible, and with the most minimalistic management concept.
I think the first statement is best explained as such: with the many advanced APIs now available, manually writing your own thread code is almost never necessary. The new APIs are a lot easier to use, and a lot harder to mess up! Whereas, with old-style threading, you have to be quite good to not mess up. The old-style APIs (Thread et al.) are still available, but the new APIs (Task Parallel Library, Parallel LINQ, and Reactive Extensions) are the way of the future.
The second statement is from more of a design perspective, IMO. In a design that has a clean separation of concerns, a background task should not really be reaching directly into the UI to report updates. There should be some separation there, using a pattern like MVVM or MVC.
I would start by questioning this perception:
I'd been reading about threads and had got the impression that they were the tastiest things since kiwi jello.
Don’t get me wrong – threads are a very versatile tool – but this degree of enthusiasm seems weird. In particular, it indicates that you might be using threads in a lot of situations where they simply don’t make sense (but then again, I might just mistake your enthusiasm).
As others have indicated, thread handling is additionally quite complex and complicated. Wrappers for threads exist, and only on rare occasions do they have to be handled explicitly. For most applications, threads can stay implicit.
For example, if you just want to push a computation to the background while leaving the GUI responsive, a better solution is often to either use callbacks (which make it seem as though the computation is done in the background while really being executed on the same thread), or to use a convenience wrapper such as the BackgroundWorker that takes over and hides all the explicit thread handling.
One last thing: creating a thread is actually very expensive. Using a thread pool mitigates this cost because the runtime creates a number of threads that are subsequently reused. When people say that explicit management of threads is bad, this may be all they are referring to.
Many advanced GUI Applications usually consist of two threads, one for the UI, one (or sometimes more) for Processing of data (copying files, making heavy calculations, loading data from a database, etc).
The processing threads shouldn't update the UI directly, the UI should be a black box to them (check Wikipedia for Encapsulation).
They just say "I'm done processing" or "I completed task 7 of 9" and raise an event or invoke a callback method. The UI subscribes to the event, checks what has changed, and updates itself accordingly.
If you update the UI from the processing thread, you won't be able to reuse your code, and you will have bigger problems if you want to change parts of it.
I think you should experiment as much as possible with threads and get to know the benefits and pitfalls of using them. Only by experimentation and usage will your understanding of them grow. Read as much as you can on the subject.
When it comes to C# and the user interface (which is single-threaded: you may only modify user interface elements from code executing on the UI thread), I use the following utility to keep myself sane and sleep soundly at night.
public static class UIThreadSafe {

    public static void Perform(Control c, MethodInvoker inv) {
        if (c == null)
            return;
        // Marshal the call to the UI thread if we're not already on it.
        if (c.InvokeRequired) {
            c.Invoke(inv, null);
        }
        else {
            inv();
        }
    }
}
You can use this in any thread that needs to change a UI element, like thus:
UIThreadSafe.Perform(myForm, delegate() {
    myForm.Text = "I Love Threads!";
});
A huge reason to try to keep the UI thread and the processing thread as independent as possible is that if the UI thread freezes, the user will notice and be unhappy. Having the UI thread be blazing fast is important. If you start moving UI stuff out of the UI thread or moving processing stuff into the UI thread, you run a higher risk of having your application become unresponsive.
Also, a lot of the framework code is deliberately written with the expectation that you will separate the UI and processing; programs will just work better when you separate the two, and will hit errors and problems when you don't. I don't recall any specific issues that I encountered as a result of this, though I have vague recollections of trying, in the past, to set certain properties of UI-owned state from outside the UI and having the code refuse to work; I don't recall whether it didn't compile or it threw an exception.
Threads are a very good thing, I think, but working with them is very hard and needs a lot of knowledge and training. The main problem arises when we access shared resources from two different threads, which can cause undesirable effects.
Consider a classic example: you have two threads which get items from a shared list and, after doing something, remove the item from the list.
The thread method that is called periodically could look like this:
void Thread()
{
    if (list.Count > 0)
    {
        // Do stuff
        list.RemoveAt(0);
    }
}
Remember that threads can, in theory, switch at any line of your code that is not synchronized. So if the list contains only one item, one thread could pass the list.Count condition; just before list.RemoveAt, the threads switch, and the other thread passes the list.Count check too (the list still contains one item). Now the first thread continues to list.RemoveAt, and after that the second thread continues to list.RemoveAt, but the last item has already been removed by the first thread, so the second one crashes. That's why access would have to be synchronized using the lock statement, so that two threads can never be inside the if statement at the same time.
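The synchronized version of the snippet, with a private lock object (a common convention; locking the list itself would also work here):
private static readonly object listLock = new object();

void Thread()
{
    lock (listLock)
    {
        // The count check and the removal are now atomic with respect
        // to any other thread that also locks listLock.
        if (list.Count > 0)
        {
            // Do stuff
            list.RemoveAt(0);
        }
    }
}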
That is also the reason why the UI, which is not synchronized, must always run on a single thread, and no other thread should interfere with the UI.
In previous versions of .NET, if you wanted to update the UI from another thread, you had to synchronize using Invoke methods, but as that was hard enough to implement, newer versions of .NET come with the BackgroundWorker class, which simplifies things by wrapping all of it up and letting you do the asynchronous work in the DoWork event and update the UI in the ProgressChanged event.
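A minimal BackgroundWorker sketch of that pattern; DoSliceOfWork, progressBar, and statusLabel are placeholders:
using System.ComponentModel;

var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    for (int i = 0; i < 100; i++)
    {
        DoSliceOfWork(i);            // runs on a background thread
        worker.ReportProgress(i + 1);
    }
};

// ProgressChanged and RunWorkerCompleted are raised on the UI thread,
// so touching controls here is safe.
worker.ProgressChanged += (s, e) => progressBar.Value = e.ProgressPercentage;
worker.RunWorkerCompleted += (s, e) => statusLabel.Text = "Done";

worker.RunWorkerAsync();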
A couple of things are important to note when updating the UI from a non-UI thread:
If you use "Invoke" frequently, the performance of your non-UI thread may be severely adversely affected if other stuff makes the UI thread run sluggishly. I prefer to avoid using "Invoke" unless the non-UI thread needs to wait for the UI-thread action to be performed before it continues.
If you use "BeginInvoke" recklessly for things like control updates, an excessive number of invocation delegates may get queued, some of which may well be pretty useless by the time they actually occur.
My preferred style in many cases is to have each control's state encapsulated in an immutable class, and then have a flag which indicates whether an update is not needed, pending, or needed but not pending (the latter situation may occur if a request is made to update a control before it is fully created). The control's update routine should, if an update is needed, start by clearing the update flag, grabbing the state, and drawing the control. If the update flag is set again, it should re-loop. To request an update from another thread, a routine should use Interlocked.Exchange to set the update flag to "pending" and, if it wasn't already pending, try to BeginInvoke the update routine; if the BeginInvoke fails, set the update flag to "needed but not pending".
If an attempt to update the control occurs just after the control's update routine checks and clears its update flag, it may well happen that the first update already reflects the new value but the update flag will have been set anyway, forcing an extra screen redraw. On the occasions when this happens, it will be relatively harmless. The important thing is that the control will end up being drawn in the correct state, without there ever having been more than one BeginInvoke pending.
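A hedged sketch of that scheme; ControlState, Draw, and the three flag values are illustrative, not a real API:
using System;
using System.Threading;
using System.Windows.Forms;

class ControlUpdater
{
    const int UpdateNotNeeded = 0;        // control is up to date
    const int UpdatePending = 1;          // a BeginInvoke is already queued
    const int UpdateNeededNotPending = 2; // handle not created; redraw still owed

    readonly Control control;
    volatile ControlState state;          // latest immutable snapshot
    int updateFlag = UpdateNotNeeded;

    public ControlUpdater(Control control) { this.control = control; }

    // Called from any thread to request a redraw with a new snapshot.
    public void RequestUpdate(ControlState newState)
    {
        state = newState;
        if (Interlocked.Exchange(ref updateFlag, UpdatePending) != UpdatePending)
        {
            try { control.BeginInvoke((Action)RunUpdate); }
            catch (InvalidOperationException)
            {
                // Control not ready yet; remember that a redraw is owed.
                Interlocked.Exchange(ref updateFlag, UpdateNeededNotPending);
            }
        }
    }

    // Runs on the UI thread; re-loops if another request arrived mid-draw,
    // so at most one BeginInvoke is ever pending.
    void RunUpdate()
    {
        while (Interlocked.Exchange(ref updateFlag, UpdateNotNeeded) != UpdateNotNeeded)
        {
            ControlState snapshot = state; // grab the state, then draw it
            Draw(snapshot);
        }
    }

    void Draw(ControlState snapshot) { /* redraw the control from the snapshot */ }
}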

How do I pick the best number of threads for hyperthreading/multicore?

I have some embarrassingly-parallelizable work in a .NET 3.5 console app and I want to take advantage of hyperthreading and multi-core processors. How do I pick the best number of worker threads to utilize either of these the best on an arbitrary system? For example, if it's a dual core I will want 2 threads; quad core I will want 4 threads. What I'm ultimately after is determining the processor characteristics so I can know how many threads to create.
I'm not asking how to split up the work nor how to do the threading; I'm asking how I determine the "optimal" number of threads on an arbitrary machine this console app will run on.
I'd suggest that you don't try to determine it yourself. Use the ThreadPool and let .NET manage the threads for you.
You can use Environment.ProcessorCount if that's the only thing you're after. But usually using a ThreadPool is indeed the better option.
The .NET thread pool also has provisions for sometimes allocating more threads than you have cores to maximise throughput in certain scenarios where many threads are waiting for I/O to finish.
The correct number is obviously 42.
Now, on a serious note: just use the thread pool, always.
1) If you have a lengthy processing task (i.e., CPU-intensive) that can be partitioned into multiple smaller work items, then you should partition your task and submit all the individual work items to the ThreadPool. The thread pool will pick up work items and start churning on them in a dynamic fashion, as it has self-monitoring capabilities that include starting new threads as needed; it can also be configured at deployment by administrators according to the deployment site's requirements, as opposed to pre-computing the numbers at development time. While it is true that the proper partitioning size of your processing task can take into account the number of CPUs available, the right answer depends so much on the nature of the task and the data that it is not even worth talking about at this stage (and besides, the primary concerns should be your NUMA nodes, memory locality, and interlocked cache contention, and only after that the number of cores).
2) If you're doing I/O (including DB calls), then you should use asynchronous I/O and complete the calls in completion routines run on the ThreadPool.
These two are the only valid reasons why you should have multiple threads, and they're both best handled by using the ThreadPool. Anything else, including starting a thread per 'request' or 'connection', is in fact an anti-pattern in the Win32 API world (fork is a valid pattern in *nix, but definitely not on Windows).
For a more specialized and way, way more detailed discussion of the topic I can only recommend the Rick Vicik papers on the subject:
designing-applications-for-high-performance-part-1.aspx
designing-applications-for-high-performance-part-ii.aspx
designing-applications-for-high-performance-part-iii.aspx
The optimal number would just be the processor count. Optimally you would always have exactly one thread running on each CPU (logical or physical) to minimise context switches and the overhead that comes with them.
Whether that is the right number depends (very much as everyone has said) on what you are doing. The threadpool (if I understand it correctly) pretty much tries to use as few threads as possible but spins up another one each time a thread blocks.
The blocking is never optimal but if you are doing any form of blocking then the answer would change dramatically.
The simplest and easiest way to get good (not necessarily optimal) behaviour is to use the threadpool. In my opinion it's really hard to do any better than the threadpool, so that's simply the best place to start; only think about something else if you can demonstrate why it is not good enough.
A good rule of thumb, given that you're completely CPU-bound, is processorCount + 1.
That's +1 because you will always get some tasks started/stopped/interrupted and n tasks will almost never completely fill up n processors.
The only way is a combination of data and code analysis based on performance data.
Different CPU families and speeds vs. memory speed vs other activities on the system are all going to make the tuning different.
Potentially some self-tuning is possible, but this will mean having some form of live performance tuning and self adjustment.
Or even better than the ThreadPool, use .NET 4.0 Task instances from the TPL. The Task Parallel Library is built on a foundation in the .NET 4.0 framework that will actually determine the optimal number of threads to perform the tasks as efficiently as possible for you.
I read something on this recently (see the accepted answer to this question for example).
The simple answer is that you let the operating system decide. It can do a far better job of deciding what's optimal than you can.
There are a number of questions on a similar theme - search for "optimal number threads" (without the quotes) gives you a couple of pages of results.
I would say it also depends on what you are doing. If you're making a server application, then using all you can get out of the CPUs via either Environment.ProcessorCount or a thread pool is a good idea.
But if this is running on a desktop or a machine that isn't dedicated to this task, you might want to leave some CPU idle so the machine still "functions" for the user.
It can be argued that the real way to pick the best number of threads is for the application to profile itself and adaptively change its threading behavior based on what gives the best performance.
I wrote a simple number crunching app that used multiple threads, and found that on my Quad-core system, it completed the most work in a fixed period using 6 threads.
I think the only real way to determine this is through trial runs or profiling.
In addition to processor count, you may want to take into account the process's processor affinity by counting bits in the affinity mask returned by the GetProcessAffinityMask function.
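A small sketch of that idea; it reads the mask through System.Diagnostics.Process rather than P/Invoking GetProcessAffinityMask directly:
using System;
using System.Diagnostics;

ulong mask = (ulong)(long)Process.GetCurrentProcess().ProcessorAffinity;
int usableCores = 0;
while (mask != 0)
{
    usableCores += (int)(mask & 1); // count the set bits in the affinity mask
    mask >>= 1;
}
Console.WriteLine("Cores available to this process: " + usableCores);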
If there is no excessive I/O processing or system-call activity while the threads are running, then the number of threads (excluding the main thread) should in general equal the number of processors/cores in your system; otherwise, you can try to increase the number of threads by testing.

Alternative to Threads

I've read that threads are very problematic. What alternatives are available? Something that handles blocking and stuff automatically?
A lot of people recommend the background worker, but I've no idea why.
Anyone care to explain "easy" alternatives? The user will be able to select the number of threads to use (depending on their speed needs and computer power).
Any ideas?
To summarize the problems with threads:
if threads share memory, you can get race conditions
if you avoid races by liberally using locks, you can get deadlocks (see the dining philosophers problem)
An example of a race: suppose two threads share access to some memory where a number is stored. Thread 1 reads from the memory address and stores the value in a CPU register. Thread 2 does the same. Now thread 1 increments the number and writes it back to memory. Thread 2 then does the same. End result: the number was only incremented by 1, while both threads tried to increment it. The outcome of such interactions depends on timing. Worse, your code may seem to work bug-free, but once in a blue moon the timing is wrong and bad things happen.
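The race above, as a runnable sketch:
using System;
using System.Threading;

int counter = 0;
ThreadStart racy = () =>
{
    for (int i = 0; i < 100000; i++)
        counter++; // read-modify-write: three separate, interleavable steps
};

var t1 = new Thread(racy);
var t2 = new Thread(racy);
t1.Start(); t2.Start();
t1.Join(); t2.Join();

// Almost always prints less than 200000: increments were lost in the race.
// (Interlocked.Increment(ref counter) would make each update atomic.)
Console.WriteLine(counter);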
To avoid these problems, the answer is simple: avoid sharing writable memory. Instead, use message passing to communicate between threads. An extreme example is to put the threads in separate processes and communicate via TCP/IP connections or named pipes.
Another approach is to share only read-only data structures, which is why functional programming languages can work so well with multiple threads.
This is a somewhat higher-level answer, but it may be useful if you want to consider alternatives to threads. Most of the answers discussed solutions based on threads (or thread pools) or maybe tasks from .NET 4.0, but there is one more alternative, called message-passing. This has been used successfully in Erlang (a functional language used by Ericsson). Since functional programming is becoming more mainstream these days (e.g., F#), I thought I could mention it. In general:
Threads (or thread pools) can usually be used when you have some relatively long-running computation. When it needs to share state with other threads, it gets tricky (you have to correctly use locks or other synchronization primitives).
Tasks (available in the TPL in .NET 4.0) are very lightweight - you can split your program into thousands of tasks and then let the runtime run them (it will use the optimal number of threads). If you can write your algorithm using tasks instead of threads, it sounds like a good idea - you can avoid some synchronization when you run the computation in smaller steps.
Declarative approaches (PLINQ in .NET 4.0 is a great option): if you have some higher-level data processing operation that can be encoded using LINQ primitives, then you can use this technique. The runtime will automatically parallelize your code, because LINQ doesn't specify how exactly it should evaluate the results (you just say what results you want to get).
Message-passing allows you to write a program as concurrently running processes that perform some (relatively simple) tasks and communicate by sending messages to each other. This is great, because you can share some state (send messages) without the usual synchronization issues (you just send a message, then do other things or wait for messages). Here is a good introduction to message-passing in F# from Robert Pickering.
Note that the last three techniques are quite related to functional programming - in functional programming, you design programs differently - as computations that return a result (which makes it easier to use tasks). You also often write declarative and higher-level code (which makes it easier to use declarative approaches).
When it comes to actual implementation, F# has a wonderful message-passing library right in the core libraries. In C#, you can use Concurrency & Coordination Runtime, which feels a bit "hacky", but is probably quite powerful too (but may look too complicated).
Won't the parallel programming options in .Net 4 be an "easy" way to use threads? I'm not sure what I'd suggest for .Net 3.5 and earlier...
This MSDN link to the Parallel Computing Developer Center has links to lots of info on Parallel Programming, including links to videos, etc.
I can recommend this project. Smart Thread Pool
Project Description
Smart Thread Pool is a thread pool written in C#. It is far more advanced than the .NET built-in thread pool.
Here is a list of the thread pool features:
The number of threads dynamically changes according to the workload on the threads in the pool.
Work items can return a value.
A work item can be cancelled.
The caller thread's context is used when the work item is executed (limited).
Usage of minimum number of Win32 event handles, so the handle count of the application won't explode.
The caller can wait for multiple or all the work items to complete.
Work items can have a PostExecute callback, which is called as soon as the work item is completed.
The state object, that accompanies the work item, can be disposed automatically.
Work item exceptions are sent back to the caller.
Work items have priority.
Work items group.
The caller can suspend the start of a thread pool and work items group.
Threads have priority.
Can run COM objects that have a single-threaded apartment.
Support Action and Func delegates.
Support for WindowsCE (limited)
The MaxThreads and MinThreads can be changed at run time.
Cancel behavior is improved.
"Problematic" is not the word I would use to describe working with threads. "Tedious" is a more appropriate description.
If you are new to threaded programming, I would suggest reading this thread as a starting point. It is by no means exhaustive but has some good introductory information. From there, I would continue to scour this website and other programming sites for information related to specific threading questions you may have.
As for specific threading options in C#, here's some suggestions on when to use each one.
Use BackgroundWorker if you have a single task that runs in the background and needs to interact with the UI. The task of marshalling data and method calls to the UI thread is handled automatically through its event-based model. Avoid BackgroundWorker if (1) your assembly does not already reference the System.Windows.Forms assembly, (2) you need the thread to be a foreground thread, or (3) you need to manipulate the thread priority.
Use a ThreadPool thread when efficiency is desired. The ThreadPool helps avoid the overhead associated with creating, starting, and stopping threads. Avoid using the ThreadPool if (1) the task runs for the lifetime of your application, (2) you need the thread to be a foreground thread, (3) you need to manipulate the thread priority, or (4) you need the thread to have a fixed identity (aborting, suspending, discovering).
Use the Thread class for long-running tasks and when you require features offered by a formal threading model, e.g., choosing between foreground and background threads, tweaking the thread priority, fine-grained control over thread execution, etc.
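A small sketch of the third option; ProcessMarketData is a placeholder for the long-running work:
using System.Threading;

var pump = new Thread(ProcessMarketData)
{
    IsBackground = false,                  // keep the process alive until it finishes
    Priority = ThreadPriority.AboveNormal, // explicit priority, unavailable via the pool
    Name = "market-data-pump"              // shows up in the debugger
};
pump.Start();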
Any time you introduce multiple threads, each running at once, you open up the potential for race conditions. To avoid these, you tend to need to add synchronization, which adds complexity, as well as the potential for deadlocks.
Many tools make this easier. .NET has quite a few classes specifically meant to ease the pain of dealing with multiple threads, including the BackgroundWorker class, which makes running background work and interacting with a user interface much simpler.
.NET 4 is going to do a lot to ease this even more. The Task Parallel Library and PLINQ dramatically ease working with multiple threads.
As for your last comment:
The user will be able to select the number of threads to use (depending on their speed needs and computer power).
Most of the routines in .NET are built upon the ThreadPool. In .NET 4, when using the TPL, the work load will actually scale at runtime, for you, eliminating the burden of having to specify the number of threads to use. However, there are ways to do this now.
Currently, you can use ThreadPool.SetMaxThreads to help limit the number of threads generated. In the TPL, you can specify ParallelOptions.MaxDegreeOfParallelism, and pass an instance of the ParallelOptions into your routine to control this. The default behavior scales up with more threads as you add more processing cores, which is usually the best behavior in any case.
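Both knobs, as a sketch; workItems, Process, and userSelectedThreadCount are placeholders:
using System.Threading;
using System.Threading.Tasks;

// Clamp the pool (affects everything built on it, so use with care):
ThreadPool.SetMaxThreads(workerThreads: 8, completionPortThreads: 8);

// Or, better, scope the limit to a single parallel operation:
var options = new ParallelOptions { MaxDegreeOfParallelism = userSelectedThreadCount };
Parallel.ForEach(workItems, options, item => Process(item));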
Threads are not problematic if you understand what causes problems with them.
For example, if you avoid statics and you know which APIs to use (e.g., synchronized streams), you will avoid many of the issues that come up through bad usage.
If threading is a problem (this can happen if you have unsafe/unmanaged third-party DLLs that cannot support multithreading), one option is to create a mechanism to queue the operations, i.e., store the parameters of each action in a database and just run through them one at a time. This can be done in a Windows service. Obviously this will take longer, but in some cases it is the only option.
Threads are indispensable tools for solving many problems, and it behooves the maturing developer to know how to effectively use them. But like many tools, they can cause some very difficult-to-find bugs.
Don't shy away from something so useful just because it can cause problems; instead, study and practice until you become the go-to guy for multi-threaded apps.
A great place to start is Joe Albahari's article: http://www.albahari.com/threading/.
