What's a good architecture for a life simulator? - C#

I was thinking the other day about creating a little life simulator. I have only sketched out the idea, and I was wondering about the best way to implement it so it runs efficiently.
At the lowest level there will be male and female entities wandering around, and when they meet they will produce offspring.
I would like to use multithreading to manage the entities, but I'm not sure of the number of threads, etc.
Would it be best to have a couple of threads that manage males and females, or would it be OK to start a thread for each entity so each instance is running on its own thread?
I have read a few posts about the maximum number of threads, and the limit ranges from 20 to 1000 threads in an app.
Anyone have any suggestions on architecture?

DO NOT HAVE ONE THREAD PER ENTITY. That is a recipe for disaster. That is not what threads were designed for. Threads are extremely heavyweight; remember, every thread consumes one million bytes of virtual address space immediately. Also, remember that every time two threads interact on a shared data structure -- like your simulated world -- they need to take out locks to prevent corruption. Your program will be a mass of hundreds of blocked threads, and blocked threads are useless; they can't work. High contention is anathema to good performance, and lots of threads sharing one data structure is nothing but constant contention.
The number of threads in your program should be one unless you have an extremely good reason to have two. In general the number of threads in a program should be as small as you can possibly make it while still getting the performance characteristics you need.
If your program really is "embarrassingly parallel" -- that is, it is extremely easy to do calculations in parallel without locking a shared data structure -- then the correct number of threads is equal to the number of processor cores in the machine. Remember, threads slow each other down. If you have four bank tellers, and each one is servicing one customer, things go fast. You are describing a situation where you have four bank tellers (CPU cores) each servicing a hundred people at the same time by handing out pennies round-robin.
Simulations are rarely embarrassingly parallel because it is hard to break up the work into independent parts. Ray tracing, for example, is embarrassingly parallel; the contents of one pixel do not depend on the contents of any other pixel, so they can be calculated in parallel. But you would not have one thread per pixel! You'd have one thread per processor, and let each of four processors work on a quarter of the pixels. It's hard to do that with simulations because the entities do interact with each other.
High quality professional simulators such as physics engines that need to deal with interacting entities do not solve their problems with threads. Typically they'll have one thread doing the simulation and one thread running the user interface, so that a costly simulation calculation does not hang the UI.
The right architecture for you is probably to have one thread going like blazes just doing the simulation. Work out all the interactions of the entities for a single frame and then signal the UI thread that it needs to update. This will allow you to figure out what your maximum frame rate is, by measuring how many microseconds it takes to work out all the interactions of every entity.
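A minimal sketch of that shape (class and member names here are purely illustrative, not from any framework):

    using System.Threading;

    class Simulation
    {
        private readonly AutoResetEvent frameReady = new AutoResetEvent(false);
        private volatile bool running = true;

        public void Start()
        {
            var simThread = new Thread(() =>
            {
                while (running)
                {
                    UpdateAllEntities();   // work out every interaction for one frame
                    frameReady.Set();      // signal the UI thread that a new frame is ready
                }
            });
            simThread.IsBackground = true;
            simThread.Start();
        }

        private void UpdateAllEntities() { /* move, meet, breed ... */ }
    }

The UI thread waits on frameReady (or you marshal results over with Control.BeginInvoke / Dispatcher.BeginInvoke) and repaints; timing how long UpdateAllEntities takes tells you your maximum frame rate.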

Using 100s of Threads (that would Sleep() a lot) is not a good solution. You will quickly run out of memory.
The TPL might make this idea workable but it wasn't really designed for this either.
Look into Discrete Event Simulation and Fibers. There are pseudo-Fiber libraries for .NET.
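For a taste of what discrete event simulation looks like, here is a toy sketch (all names mine): each entity schedules its next event, and the loop simply jumps from event to event in time order, so no entity ever needs its own thread.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Toy discrete event simulator: a time-ordered queue of callbacks.
    class EventQueue
    {
        private readonly SortedDictionary<double, List<Action>> events =
            new SortedDictionary<double, List<Action>>();

        public double Now { get; private set; }

        public void Schedule(double time, Action action)
        {
            List<Action> slot;
            if (!events.TryGetValue(time, out slot))
                events[time] = slot = new List<Action>();
            slot.Add(action);
        }

        public void Run()
        {
            while (events.Count > 0)
            {
                var next = events.First();   // earliest timestamp (sorted by key)
                events.Remove(next.Key);
                Now = next.Key;
                foreach (var action in next.Value)
                    action();                // handlers may Schedule() new events
            }
        }
    }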

I assume that your application manages the timeline in delta-Ts.
You could use .NET 4.0's Task Parallel Library and work with Tasks, not threads.
Each delta-T, run a Parallel.For to update the entities.
The framework will decide how many threads to create and will manage them for you.
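A sketch of one delta-T tick, assuming a hypothetical Entity type:

    using System.Threading.Tasks;

    // One tick: update all entities in parallel. The TPL picks the
    // number of worker threads; we never create threads ourselves.
    static void Tick(Entity[] entities, double deltaT)
    {
        Parallel.For(0, entities.Length, i =>
        {
            // Safe only if Update reads other entities' *old* state
            // and writes nothing but its own.
            entities[i].Update(deltaT);
        });
    }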

Related

How can I get started creating a multithreaded load balancer?

I have an interesting exercise to solve from my professor. But I need a little bit of help so it does not become boring during the holidays.
The exercise is to
create a multithreaded load balancer that reads 1 measuring point from each of 5 sensors every second (therefore 5 values every second).
Then do some "complex" calculations with those values.
Print the results of the calculations on the screen (like the max or average value of sensors 1-5 and so on; multithreaded, of course).
As an additional task, I also have to ensure that if, in the future, say 500 sensors had to be read every second, the computer wouldn't give up on the job (load balancing).
I have a CSV text file with ~400 measuring points from 5 imaginary sensors.
What I think I have to do:
Read the measuring points into an array
Ensure thread safe access to that array
Spawn a new thread for every value that calculates some math stuff
Set a max value for maximum concurrent working threads
I am new to multithreaded applications in C#, but I think using a thread pool is the right way. I am currently working on a queue, and maybe starting it inside a task so it won't block the application.
What would you recommend?
There are a couple of environment dependencies here:
What version of .NET are you using?
What UI are you using - desktop (WPF/WinForms) or ASP.NET?
Let's assume that it's .NET 4.0 or higher and a desktop app.
Reading the sensors
In a WPF or WinForms application, I would use a single BackgroundWorker to read data from the sensors. 500 reads per second is trivial - even 500,000 is usually trivial. And the BackgroundWorker type is specifically designed for interacting with desktop apps, for example handing off results to the UI without worrying about thread interactions.
Processing the calculations
Then you need to process the "complex" calculations. This depends on how long-lived these calculations are. If we assume they're short-lived (say less than 1 second each), then I think using the TaskScheduler and the standard ThreadPool will be fine. So you create a Task for each calculation, and then let the TaskScheduler take care of allocating tasks to threads.
The job of the TaskScheduler is to load-balance the work by queuing lightweight tasks to more heavyweight threads, and managing the ThreadPool to best balance the workload vs the number of cores on the machine. You can even override the default TaskScheduler to schedule tasks in whatever manner you want.
The ThreadPool is a FIFO queue of work items that need to be processed. In .NET 4.0, the ThreadPool has improved performance by making the work queue a thread-safe ConcurrentQueue collection.
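In sketch form (ComplexCalculation and values are stand-ins for your own math and data):

    using System.Threading.Tasks;

    // One Task per calculation; the default TaskScheduler queues it
    // to the ThreadPool and balances work across cores for you.
    Task<double> calc = Task.Factory.StartNew(() => ComplexCalculation(values));

    // ... later, collect the result (blocks only if not finished yet):
    double result = calc.Result;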
Measuring task throughput and efficiency
You can use PerformanceCounter to measure both CPU and memory usage. This will give you a good idea of whether the cores and memory are being used efficiently. The task throughput is simply measured by looking at the rate at which tasks are being processed and supplying results.
Note that I've kept the code above to a bare sketch, as I assume you want to deal with the implementation details for your professor :-)

C# ThreadPool Implementation / Performance Spikes

In an attempt to speed up processing of physics objects in C# I decided to change a linear update algorithm into a parallel algorithm. I believed the best approach was to use the ThreadPool as it is built for completing a queue of jobs.
When I first implemented the parallel algorithm, I queued up a job for every physics object. Keep in mind, a single job completes fairly quickly (updates forces, velocity, position, checks for collision with the old state of any surrounding objects to make it thread safe, etc). I would then wait on all jobs to be finished using a single wait handle, with an interlocked integer that I decremented each time a physics object completed (upon hitting zero, I then set the wait handle). The wait was required as the next task I needed to do involved having the objects all be updated.
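In sketch form, that fork-join pattern looks roughly like this (names are illustrative; .NET 4's CountdownEvent wraps the same idea up for you):

    using System.Threading;

    int pending = objects.Count;
    var allDone = new ManualResetEvent(false);

    foreach (var obj in objects)
    {
        var o = obj;   // capture a copy for the closure
        ThreadPool.QueueUserWorkItem(_ =>
        {
            o.Update();                                   // forces, velocity, position, collisions
            if (Interlocked.Decrement(ref pending) == 0)
                allDone.Set();                            // the last job wakes the waiter
        });
    }

    allDone.WaitOne();   // the next phase needs every object updated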
The first thing I noticed was that performance was crazy. When averaged, the thread pooling seemed to be going a bit faster, but had massive spikes in performance (on the order of 10 ms per update, with random jumps to 40-60ms). I attempted to profile this using ANTS, however I could not gain any insight into why the spikes were occurring.
My next approach was to still use the ThreadPool, however instead I split all the objects into groups. I initially started with only 8 groups, as that was how many cores my computer had. The performance was great. It far outperformed the single-threaded approach, and had no spikes (about 6ms per update).
The only thing I thought about was that, if one job completed before the others, there would be an idle core. Therefore, I increased the number of jobs to about 20, and even up to 500. As I expected, it dropped to 5ms.
So my questions are as follows:
Why would spikes occur when the jobs were small and numerous?
Is there any insight into how the ThreadPool is implemented that would help me to understand how best to use it?
Using threads has a price: you pay for context switching and for locking (the job queue is most probably locked when a thread tries to fetch a new job). This price is usually small compared to the actual work your thread is doing, but if the work finishes quickly, the price becomes significant.
Your solution seems correct. A reasonable rule of thumb is to have twice as many threads as there are cores.
As you probably expect yourself, the spikes are likely caused by the code that manages the thread pools and distributes tasks to them.
For parallel programming, there are more sophisticated approaches than "manually" distributing work across different threads (even if using the threadpool).
See Parallel Programming in the .NET Framework for instance for an overview and different options. In your case, the "solution" may be as simple as this:
    Parallel.ForEach(physicObjects, physicObject => Process(physicObject));
Here's my take on your two questions:
I'd like to start with question 2 (how the thread pool works) because it actually holds the key to answering question 1. The thread pool is implemented (without going into details) as a (thread-safe) work queue and a group of worker threads (which may shrink or enlarge as needed). As the user calls QueueUserWorkItem the task is put into the work queue. The workers keep polling the queue and taking work if they are idle. Once they manage to take a task, they execute it and then return to the queue for more work (this is very important!). So the work is done by the workers on-demand: as the workers become idle they take more pieces of work to do.
Having said the above, it's simple to see the answer to question 1 (why you saw a performance difference with more fine-grained tasks): with fine grain you get more load balancing (a very desirable property), i.e. your workers do more or less the same amount of work and all cores are exploited uniformly. As you said, with a coarse-grained task distribution there may be longer and shorter tasks, so one or more cores may lag behind, slowing down the overall computation, while others do nothing. With small tasks the problem goes away. Each worker thread takes one small task at a time and then goes back for more. If one thread picks up a shorter task, it will go to the queue more often; if it takes a longer task, it will go to the queue less often, so things stay balanced.
Finally, when the jobs are too fine-grained, and considering that the pool may enlarge to over 1K threads, there is very high contention on the queue when all threads go back to take more work (which happens very often), which may account for the spikes you are seeing. If the underlying implementation uses a blocking lock to access the queue, then context switches are very frequent which hurts performance a lot and makes it seem rather random.
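One way to keep the fine-grained load balancing while cutting the queue traffic is to hand out ranges of items instead of single items, for example with a range partitioner (sketch; physicsObjects is a hypothetical array):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Each worker grabs a whole chunk of indices at a time, so trips
    // back to the shared queue are far less frequent.
    Parallel.ForEach(Partitioner.Create(0, physicsObjects.Length), range =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
            physicsObjects[i].Update();
    });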
Answer to question 1:
This is because of thread switching. Thread switching (or context switching, in OS terms) costs CPU cycles every time the scheduler moves between threads. Most of the time multithreading increases the speed of a program, but when each piece of work is very small and quick, the context switching takes more time than the work itself, so the whole program's throughput decreases. You can find more information about this in OS concepts books.
Answer to question 2:
I only have a general picture of the ThreadPool, so I can't explain its exact structure.
To learn more about the ThreadPool, start with the ThreadPool Class documentation.
Each version of the .NET Framework adds more capabilities that use the ThreadPool indirectly, such as the Parallel.ForEach method mentioned before, added in .NET 4 along with System.Threading.Tasks, which makes code more readable and neat. You can learn more under Task Schedulers as well.
At a very basic level, what it does is: it creates, say, 20 threads and puts them into a list. Each time it receives a delegate to execute asynchronously, it takes an idle thread from the list and executes the delegate. If no available thread is found, the delegate goes into a queue. Every time a delegate finishes executing, the pool checks whether the queue has any items and, if so, dequeues one and executes it on the same thread.
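That description is only a few lines of code. A toy version of the idea (not the real ThreadPool, which is far more sophisticated):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class ToyThreadPool
    {
        private readonly BlockingCollection<Action> queue =
            new BlockingCollection<Action>();

        public ToyThreadPool(int workerCount)
        {
            for (int i = 0; i < workerCount; i++)
            {
                var worker = new Thread(() =>
                {
                    // Take a job, run it, go back for more - forever.
                    foreach (var job in queue.GetConsumingEnumerable())
                        job();
                });
                worker.IsBackground = true;
                worker.Start();
            }
        }

        public void QueueWork(Action job) { queue.Add(job); }
    }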

Parallel programming in games

In my game I have entities (about 2000-5000) that do very performance-heavy calculations every frame (or every 2nd or 3rd frame).
I know that parallel programming will help in this scenario.
The question is how do I use multiple threads / cpu-cores in the best way possible.
I tried the static Parallel class in .NET, but the startup cost for every Parallel.For / Parallel.ForEach is simply too big.
Does it start a number of threads each time I call a function of the Parallel class?
The game's target frame rate is 60 fps, so whatever approach I use, it should never involve starting multiple threads every game frame!
So my question is:
What other alternatives are there to parallel-process multiple entities?
Should I create the threads myself, and if so, what pitfalls are there?
Is the threadpool a good alternative?
By default, Parallel.For/Parallel.ForEach use ThreadPool threads. In fact, TPL APIs like the Parallel class, Task instances, etc. all use ThreadPool threads.
The number of threads started by the TPL depends on many factors, such as the number of idle threads available in the pool. So assuming that the TPL will create lots of threads as soon as you submit a request is not entirely correct.
The important point is that you can control the number of threads the TPL uses for executing tasks via the MaxDegreeOfParallelism setting. By setting this value to some number X, you limit the number of items being processed concurrently to X. Personally, I feel this is a very powerful feature of the TPL.
Also, if you do not want your task to block other requests in the queue, you can set TaskCreationOptions to LongRunning; in that case the TPL will run the task on a dedicated thread.
In your case, you can try using Parallel.For/ForEach, set MaxDegreeOfParallelism to some optimal value, and see how it behaves.
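A sketch of that (entities is a hypothetical collection; tune the value and measure):

    using System.Threading.Tasks;

    var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

    Parallel.ForEach(entities, options, entity =>
    {
        entity.Update();   // the per-frame heavy calculation
    });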
You need multiple computers or servers to test a parallel-programmed game properly; otherwise, on a single computer, the extra threads can cost more than they gain.
Additionally, since the game is calculated and drawn 60 times a second, adding more threads is not necessarily logical; your game is already running on threads.
Maybe you have to change or optimize your algorithm. Don't let everything be done by the CPU; use the GPU as much as possible.
There isn't a simple answer.
When every entity of your game can be calculated without knowing about any other entity, then you can simply move that calculation onto a given number of threads. A thread pool would be a good start.
You could set this up when your program starts.
But even when you do so, you have to check whether switching threads performs better than having one single thread. "Context switch" is the keyword you can search for here.
But what happens when entity A on thread A does a calculation which affects entity B on thread B and forces entity B to be destroyed, which forces entity C on thread C to be destroyed also?
And now to make it harder :P entity C was calculated before entity B, before entity A...
What should happen? Is it ok that those entities will be cleaned up in 2 or 3 frames in the future?
Does it affect or break your game logic?
If so then things are getting complicated.

Are Socket.*Async methods threaded?

I'm currently trying to figure out the best way to minimize the number of threads I use in a TCP master server, in order to maximize performance.
As I've been reading a lot recently about the new async features of C# 5.0, asynchronous does not necessarily mean multithreaded. It can mean work split into smaller chunks of finite-state objects that are processed alongside other operations, by alternating. However, I don't see how this could be done in networking, since I'm basically "waiting" for input (from the client).
Therefore, I wouldn't use ReceiveAsync() for all my sockets; it would just be creating and destroying threads continuously (assuming it does create threads).
Consequently, my question is more or less: what architecture can a master server take without having one "thread" per connection?
Side question for bonus coolness points: Why is having multiple threads bad, considering that having more threads than processor cores simply makes the machine "fake" multithreading, just like any other asynchronous method would?
No, you would not necessarily be creating threads. There are two possible ways you can do async without setting up and tearing down threads all the time:
You can have a "small" number of long-lived threads, and have them sleep when there's no work to do (this means that the OS will never schedule them for execution, so the resource drain is minimal). Then, when work arrives (i.e. Async method called), wake one of them up and tell it what needs to be done. Pleased to meet you, managed thread pool.
In Windows, the most efficient mechanism for async is I/O completion ports which synchronizes access to I/O operations and allows a small number of threads to manage massive workloads.
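For illustration, with .NET 4.5's awaitable wrappers a server can juggle many connections off a handful of pool threads, with no thread parked per client (sketch; Process is a stand-in, error handling omitted):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static async Task RunServerAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            var ignored = HandleClientAsync(client);   // don't await: serve many clients at once
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        var buffer = new byte[4096];
        using (client)
        using (var stream = client.GetStream())
        {
            int read;
            // While "waiting" here no thread is blocked; the read is
            // overlapped I/O completed via an I/O completion port.
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                Process(buffer, read);
        }
    }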
Regarding multiple threads:
Having multiple threads is not bad for performance, if
the number of threads is not excessive
the threads do not oversaturate the CPU
If the number of threads is excessive then obviously we are taxing the OS with having to keep track of and schedule all these threads, which uses up global resources and slows it down.
If the threads are CPU-bound, then the OS will need to perform much more frequent context switches in order to maintain fairness, and context switches kill performance. In fact, with user-mode threads (which all highly scalable systems use -- think RDBMS) we make our lives harder just so we can avoid context switches.
Update:
I just found this question, which lends support to the position that you can't say beforehand how many threads are too many -- there are just too many unknown variables.
Seems like the *Async methods use IOCP (by looking at the code with Reflector).
Jon's answer is great. As for the 'side question'... See http://en.wikipedia.org/wiki/Amdahl%27s_law. Amdahl's law says that serial code quickly diminishes the gains to be had from parallel code. We also know that thread coordination (scheduling, context switching, etc.) is serial - so at some point more threads mean there are so many serial steps that the benefits of parallelization are lost and you have a net negative performance. This is tricky stuff. That's why there is so much effort going into letting .NET manage threads while we define 'tasks' for the framework to decide what thread to run on. The framework can switch between tasks much more efficiently than the OS can switch between threads, because the OS has a lot of extra things it needs to worry about when doing so.
Asynchronous work can be done without one thread per connection or a thread pool, given OS support for select or poll (Windows supports this; it is exposed via Socket.Select). I am not sure of the performance on Windows, but this is a very common idiom elsewhere.
One thread is the "pump" that manages the IO connections and monitors changes to the streams and then dispatches messages to/from other threads (conceivably 0 ... n depending upon model). Approaches with 0 or 1 additional threads may fall into the "Event Machine" category like twisted (Python) or POE (Perl). With >1 threads the callers form an "implicit thread pool" (themselves) and basically just offload the blocking IO.
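A bare-bones version of that pump, using Socket.Select (Dispatch is a stand-in for handing the data off to other threads or handling it inline):

    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Threading;

    static void Pump(List<Socket> clients)
    {
        var buffer = new byte[4096];
        while (true)
        {
            if (clients.Count == 0) { Thread.Sleep(1); continue; }  // Select needs a non-empty list

            // Select trims this copy down to only the sockets with data ready.
            var readable = new List<Socket>(clients);
            Socket.Select(readable, null, null, 1000 /* microseconds */);

            foreach (var socket in readable)
            {
                int n = socket.Receive(buffer);
                if (n == 0) clients.Remove(socket);   // peer closed the connection
                else Dispatch(socket, buffer, n);     // hand off or handle inline
            }
        }
    }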
There are also approaches like Actors, Continuations or Fibres exposed in the underlying models of some languages which alter how the basic problem is approached -- don't wait, react.
Happy coding.

Threading cost - minimum execution time when threads would add speed

I am working on a C# application that works with an array. It walks through it (meaning that at any one time only a narrow part of the array is used). I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if that would actually help, because threads cost something, and this cost could easily be more than the parallel gain... So how do I determine whether threading would help?
Try writing some benchmarks that mimic, as closely as possible, the real-world conditions in which your software will actually be used.
Test and time the single-threaded version. Test and time the multi-threaded version. Compare the two sets of results.
If your application is CPU-bound (i.e. it isn't spending time trying to read files or waiting for data from a device) and there is little to no sharing of live data between the threads (data being altered; if it's read-only it's fine), then you can pretty much increase the speed by 50-75% by adding another thread (as long as it still remains CPU-bound, of course).
The main overhead in multithreading comes from 2 places.
Creation & initialization of the thread. Creating a thread requires quite a few resources to be allocated and involves switches between kernel and user mode; this is expensive, though it's a one-off cost per thread, so you can pretty much ignore it if the thread runs for any reasonable amount of time. The best way to mitigate this cost is to use a thread pool, as it keeps threads on hand and they don't need to be recreated.
Handling synchronization of data. If one thread reads from data that another is writing, bad things will generally happen (worse if both are changing it). This requires you to lock your data before altering it so that no thread reads a half-written value. These locks are generally quite slow as well. To mitigate this, you need to design your data layout so that threads read or write the same data as little as possible. If you need a lot of these locks, the multithreaded version can become slower than the single-threaded option.
In short, if you are doing something that requires the CPUs to share a lot of data, multithreading will be slower; and if the program isn't CPU-bound, there will be little or no difference (it could be a lot slower depending on what it is bound to, e.g. a CD or hard drive). If your program avoids both of these problems, it will PROBABLY be worthwhile to add another thread (though the only way to be certain is profiling).
One more little note: you should only create as many CPU-bound threads as you have physical cores (threads that idle most of the time, such as a GUI message pump thread, can be ignored for this count).
P.S. You can reduce the cost of locking data by using a methodology called "lock-free programming", though this is something that should really only be attempted by people with a lot of experience in multithreading and a clear understanding of their target architecture (including how the cache is treated and the memory bus).
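The gentlest first step into that territory is the Interlocked family, which gives you atomic operations without an explicit lock:

    using System.Threading;

    int processed = 0;   // shared by all worker threads

    // Instead of: lock (gate) { processed++; }
    Interlocked.Increment(ref processed);   // a single atomic CPU operation, no lock held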
I agree with Luke's answer. Benchmark it, it's the only way to be sure.
I can also give a prediction of the results - the fastest version will be when the number of threads matches the number of cores, EXCEPT if the array is very small and each thread would have to process just a few items, the setup/teardown times might get larger than the processing itself. How few - that depends on what you do. Again - benchmark.
I'd advise to find out a "minimum number of items for a thread to be useful". Then, when you are deciding how many threads to spawn (or take from a pool), check how many cores the computer has and how many items there are. Spawn as many threads as possible, but no more than the computer has cores, and not so many that each thread would have less than the minimum number of items to process.
For example if the minimum number of items is, say, 1000; and the computer has 4 cores; and your list contains 2500 items, you would spawn just 2 threads, because more threads would be inefficient (each would process less than 1000 items).
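That rule is one line of arithmetic (minItemsPerThread is the number you measured):

    using System;

    static int ThreadCount(int itemCount, int minItemsPerThread)
    {
        // No more threads than cores, and no thread with fewer than
        // minItemsPerThread items: 2500 items / 1000 min => 2 threads on a quad-core.
        int byWork = itemCount / minItemsPerThread;
        return Math.Max(1, Math.Min(Environment.ProcessorCount, byWork));
    }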
Making a step-by-step list for Luke's idea:
Make a single threaded test app
Download Sysinternals Process Explorer and run it
Run your test app and find it on the process list (remember to run it as a release build outside of Visual Studio)
Double click the process and select the Performance Graph tab
Observe the CPU time used by your process
If the CPU time is sitting flat at 50% for more than a few seconds, you can probably speed your overall process up using threads (assuming the bunch of stuff Mr Peters referred to holds true)
(However, the best you can do on a dual-core machine is to halve the time it takes to run. If your process only takes 4 seconds, it might not be worth getting it to run in 2 seconds.)
Using the task parallel library / Rx provides a friendlier interface than System.Threading.ThreadPool, which might make your world a bit easier.
IMHO you're missing one item, which is that it is not always about execution time. There is:
The need to keep the UI operational during a long operation. Even if the UI is "dormant", a non-responsive message pump makes a bad impression.
The possibility of using a thread pool so you don't have to start/stop threads all the time. I use thread pools very extensively, and various parts of my applications keep them busy.
Anyhow, ignoring my point 1 - where you may go multithreaded without speeding things up, just to keep the UI responsive - I would say it is faster whenever you can actually split up the work (so you can keep more than one core busy) or offload it for other reasons.
