Specification
C# Distributed Application.
Client/Server design.
Client (Winforms), Server (Windows Service), Communication via .Net Remoting.
The below question relates to the Server-Side of the application.
(EDIT) The server side of the application runs on a server with 8 cores and 12 GB of RAM.
(EDIT) The CPU of this server is consistently at around 80% usage because many other services run on the same machine.
Scenario
I've inherited a large legacy application.
It carries out a bunch of tasks, some of them independent of each other, but others not.
The current design involves the creation of 14 threads, each running either a single task or a number of tasks.
The problem is that I get the feeling this design element has an impact on performance.
Code Examples - How Each Class/Thread Is Designed & Run
using System.Threading;

public class ManageThreads
{
    private Thread doStuffThread = null;

    // Inside the constructor EVERY thread is instantiated and run.
    // (I am aware that this example only shows the use of 1 thread.)
    public ManageThreads()
    {
        doStuffThread = new Thread(new ThreadStart(DoSomeStuff.Instance.Start));
        doStuffThread.Start();

        // Instantiate and run another thread.....
        // Instantiate and run another thread.....
        // Instantiate and run another thread.....etc.
    }
}

public class DoSomeStuff
{
    // Singleton instance referenced by ManageThreads.
    public static readonly DoSomeStuff Instance = new DoSomeStuff();

    // Must be visible to ManageThreads for the ThreadStart delegate above.
    public void Start()
    {
        while (true)
        {
            // Repeatedly do some tasks.....
            Thread.Sleep(5000);
        }
    }
}
Thoughts
What I'd like to do is keep the existing code, but modify the way that it runs.
I've thought about the use of a Thread Pool to solve this problem, but given the current architecture I am unsure of how I would go about doing this.
Questions
Would this current design affect performance in a noticeable way?
Is it possible for me to improve the performance of this application without altering the underlying functions, but changing the design slightly?
Can anyone recommend anything / advise me on the right way to go about improving this?
Help greatly appreciated.
"I get the feeling this design element has an impact on performance."
Don't guess; get a profiler out and measure what's going on. Gather some empirical stats about where time is spent in the application, and then you can take a view on where the pinch points are.
If the time spent creating threads is your biggest headache, then moving to a thread pool may be the right answer, but you won't know without some forensic analysis.
From the small snippet you've posted it looks like the 14 threads are reasonably long-lived and do multiple things over their lifetime, so I suspect this is not actually the problem, but there isn't enough info in your post to make a definitive call.
If your threads are all doing work and you have more active threads than processor cores, then you are going to spend time context switching.
If you have a dual-core processor, don't expect to get great performance with more than about 4 actively working threads.
So starting 14 threads that are all doing work is a bad idea unless you have a processor that can manage it; physical processor architecture and feature set have a big impact on this. My point is that a thread pool will help manage the context switching, but starting 14 busy threads at once is always going to hurt performance; you would probably get faster performance by simply executing the work sequentially. Obviously that is a big statement and so is probably not true, but you get the gist.
Hence the use of a thread pool, along with a profiler, to figure out the optimum number of threads to make available to the pool.
In most situations where people are using a thread pool, a lot of the threads are doing nothing most of the time, or a thread is sleeping/blocking while some slow operation or external dependency awaits a response.
Consider using an asynchronous pattern so that you can get progress information out of your threads.
On a dual-core processor I would be hesitant about using more than 3 threads if they are all working 100% of the time.
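To connect this back to the snippet in the question, here is a minimal sketch (assuming each DoSomeStuff-style class can expose the body of its loop as a method, called DoWork here purely for illustration) of replacing the dedicated while(true)/Sleep thread with a System.Threading.Timer, so the periodic work runs on pool threads instead of tying up 14 threads of its own:

using System;
using System.Threading;

public class DoSomeStuff
{
    public static readonly DoSomeStuff Instance = new DoSomeStuff();
    private Timer timer;

    // Schedule the work every 5 seconds on the ThreadPool instead of
    // dedicating a whole thread to a while(true)/Sleep(5000) loop.
    public void Start()
    {
        timer = new Timer(state => DoWork(), null,
                          TimeSpan.Zero, TimeSpan.FromSeconds(5));
    }

    // Hypothetical method: whatever the original loop body did.
    private void DoWork()
    {
        // Repeatedly do some tasks.....
    }
}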
Related
I have a C# desktop application. Its purpose is two-fold.
1). To display a live feed from an IP camera in my WinForms application.
2). Send any captured motion to my server.
It is (2) that is labour intensive. I believe I have optimised it as much as I can and the RAM is manageable.
However, in my quest to learn and to try to make my code even more efficient I am always open to new approaches.
Today I came across parallel processing, but reading some links seems to suggest there would not be much performance gain from using it. Indeed, in all my travels (contracts) I have never seen anyone use parallel processing in C# development.
Should I take early heed and not bother to look into this or should I see whether there is anything to gain by 'off-loading' my motion detection code to a separate parallel process?
People's advice/experience would be very informative.
Thanks
I would recommend taking a look at the Task Parallel Library provided in the .NET Framework; it's based on the idea that a piece of work is a Task, and it abstracts away having to create and manage threads manually.
Tasks can run in parallel on their own threads or on the same thread, depending on the workload and configuration. The Task Parallel Library is also great for asynchronous operations and works very well with I/O, where the hardware can block a thread and cause performance issues in your application; reading from a hard drive, for example, can cause this.
I suggest running a profiler on your application; Visual Studio Professional and above comes with a built-in profiler that will let you trace and pinpoint intensive operations that could be improved with concurrency. If your application runs smoothly then there is no need, but there's nothing wrong with forward thinking and learning the Task Parallel Library, as I'm sure there will come a point where knowing how to implement concurrency will benefit you.
I've used the TPL to solve various performance issues with large database calls in iterative loops, and it's great for these I/O operations. The TPL also takes into account the hardware it's being executed on and, if used correctly, will make good use of it. You could take the same piece of code and run it on a 2-core machine and it will still do the best the hardware allows, without you having to worry about creating too many threads, etc.
Personally, I'd say some asynchronous operations could be a good addition to your application, since you are dealing with external network camera devices that could block threads.
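As a rough illustration only, here is a minimal sketch (the byte[] frame parameter and the DetectMotion/SendToServer methods are hypothetical stand-ins for your own capture and upload code) of handing each captured frame to the TPL so the capture/UI thread is not blocked by the motion detection:

using System.Threading.Tasks;

public class MotionPipeline
{
    // Called from the capture loop for each new frame.
    public void OnFrameCaptured(byte[] frame)
    {
        // Offload the CPU-heavy detection to a pool thread via the TPL.
        Task.Factory.StartNew(() =>
        {
            if (DetectMotion(frame))      // hypothetical: your existing detection code
            {
                SendToServer(frame);      // hypothetical: your existing upload code
            }
        });
    }

    private bool DetectMotion(byte[] frame) { return false; /* existing logic */ }
    private void SendToServer(byte[] frame) { /* existing logic */ }
}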
I need an application/service which runs in the background and generate bills on a particular date of every month.
I went through many articles explaining the difference between a Windows Service and a scheduled task application and came to the conclusion that an application would suit my scenario.
Having said this, I wonder whether I need to use multi-threading in my application. As I understand it, multi-threading is basically for keeping a UI responsive while doing long-running tasks, but since my application will have no UI, do I actually need multi-threading?
Is there any difference in performance between a single-threaded application that gets the data from various sources (say a database and a web service) and a multi-threaded application where we distribute each task to a thread and finally integrate all the output?
Typically, an application like this will have no user interface at all, in which case the UI-responsiveness rationale for multi-threading doesn't apply here.
That being said, whether or not to use multiple threads for processing your data is another issue entirely. You could, if it makes sense to do so. If this is an application that's going to run once per month, it may be just as easy to leave it single threaded, as there's likely not a time constraint for completion.
If you need to process the items quickly, though, it may make sense to thread portions of the application.
Is there any difference in performance between a single-threaded application that gets the data from various sources (say a database and a web service) and a multi-threaded application where we distribute each task to a thread and finally integrate all the output?
Typically, yes. That's the most common reason to introduce threading - it allows you to do more work in less time. It does add a fair amount of complexity (depends on the scenario), however.
You would, presumably, get a faster response time from a multi-threaded program if these two things are true: you have a multi-core processor, which almost everyone does these days, and the data can be pulled from your sources in any order, with access to a source from one thread not locking it up for another.
The best reason to use multiple threads in this case would be if you spend a lot of time blocking: waiting for something else to respond. If you're reading tons of data from your hard drive as fast as the disk can give it, then having two threads read data shouldn't give you anything faster; in fact, I think it would be a bit slower. But if you're getting a lot of data from, say, sockets (the internet), and your threads spend a fair amount of time waiting for external servers to respond (and you're not using all of your bandwidth), then a multi-threaded program will give you a boost in speed.
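For example, a minimal sketch (LoadFromDatabase and CallWebService are hypothetical placeholders for your own data access code) of pulling from two sources concurrently with the TPL and then integrating the output:

using System.Threading.Tasks;

public static class BillingDataLoader
{
    public static void LoadAll()
    {
        // Start both fetches so the waiting on each source overlaps.
        Task<string> dbTask  = Task.Factory.StartNew(() => LoadFromDatabase());
        Task<string> svcTask = Task.Factory.StartNew(() => CallWebService());

        // Wait for both, then integrate the output.
        Task.WaitAll(dbTask, svcTask);
        Combine(dbTask.Result, svcTask.Result);
    }

    // Hypothetical placeholders for the real data sources.
    private static string LoadFromDatabase() { return "database data"; }
    private static string CallWebService() { return "web service data"; }
    private static void Combine(string db, string svc) { /* merge the results */ }
}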
I'm working on a ASP.NET website that on some requests will run a very lengthy caching process. I'm wondering what happens exactly if the execution timeout is reached while it is still running in terms of how the code handles it.
In particular I am wondering about things like: if the code is in the try of a try/finally block, will the finally still be run?
Also, given that I am not sure I even want the caching to terminate if it goes on that long, is there a way, with spawning new threads etc., that I can circumvent this execution timeout? I am thinking it would be much nicer to return to the user immediately and say "a cache build is happening" rather than just letting them time out. I have recently started playing with some locking code to make sure only one cache build happens at a time, but I am thinking about extending this to make it run out of sync.
I've not really played with creating threads and such myself, so I'm not sure exactly how they work, particularly in terms of interacting with ASP.NET. E.g. if the parent thread that launched it is terminated, will that have any effect on the spawned thread?
I know there is kind of a lot of different questions in here and I can split them if that is deemed best but they all seem to go together... I'll try to summarise the questions though:
Will a finally block still be executed if a thread is terminated by ASP.NET while in the try block?
Would newly created threads be subject to the same timeouts as the original thread?
Would newly created threads die at the same time as the parent thread that created them?
And the general one: what is the best way to do long-running background processes on an ASP.NET site?
Sorry for some noobish questions; I've never really played with threads and they still intimidate me a bit (my brain says they are hard). I could probably test the answer to a lot of these questions, but I wouldn't be confident enough of my tests. :)
Edit to add:
In response to Capital G:
The problem I have is that the ASP.NET execution timeout is currently set to one hour, which I reckon is not always long enough for some of these processes. I've put some stuff in with locks to prevent more than one person setting off these long processes, and I was worried the locks might not be released (which, if finally blocks aren't always run, might happen I guess).
Your comments on not running long processes in ASP.NET are why I was thinking of moving them to other threads rather than blocking the request thread, but I don't know whether that still counts as running within the ASP.NET architecture that you said was bad.
The code is not actually mine, so I'm not allowed (and don't understand it 100% well enough) to rework it into a service, though that is certainly where it would best live.
Would using a BackgroundWorker for something that could take an hour be feasible in this situation (with respect to the comments on long-running processes in ASP.NET)? I would then make the request return a "Cache is building" page until it has finished and then go back to serving normally... It's all a bit of a nightmare, but it's my job, so I've got to find a way to improve it. :)
Interesting question. I just tested it, and no, it's not guaranteed that the code in the finally block will execute; if a thread is aborted it can stop at any point in the processing. You can design some sanity checking and other mechanisms to handle special cleanup routines, but it has a lot to do with your thread handling as well.
Not necessarily; it depends on how you're implementing your threads. If you are working with threads yourself, then you can easily get into situations where the parent thread is killed while its child threads are still out there processing, so you generally want to do some cleanup in the parent thread that ends the child threads as well. Some objects might do a lot of this for you, so it's a tough call to say one way or the other. Never assume this, at the very least.
No, not necessarily; don't assume this at least. Again it has to do with your design and whether you're doing the threading yourself or using some higher-level threading object/pattern. I would never assume this regardless.
I don't recommend long-running processes within the ASP.NET architecture unless they fit within the typical timeout; if it's 10-20s, okay, but if it's minutes, no. The reason is resource usage within ASP.NET, and it's awfully bad for the user. That being said, you could perform asynchronous operations where you hand off the work to the server and return to the user when the processing is finished (this is great for those 10-20s+ processes); the user can be shown a little animation, or at least not have their browser stuck waiting for whatever is happening on the server.
If it is a long-running process, things that take more than 30-60s, then unless it absolutely has to be done in ASP.NET due to the nature of the process, I suggest moving it to a Windows service and scheduling it in some way to occur when required.
Note: threading CAN be complicated. It's not that it's hard so much as that you have to be very aware of what you're doing, which requires a firm understanding of what threads are and how they work. I'm no expert, but I'm also not completely new, and I'll tell you that in most situations you don't need to get into the realm of threading even when it seems like you do. If you must, however, I would suggest looking into the BackgroundWorker class, as it is simplified for the purposes of doing batched processing etc. (honestly, for many situations that DO need threads, this is usually a very simple solution).
http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx
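A minimal sketch of the BackgroundWorker pattern mentioned above (BuildCache is a hypothetical stand-in for the long-running cache build):

using System.ComponentModel;

public class CacheBuilder
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    public CacheBuilder()
    {
        worker.DoWork += (sender, e) => BuildCache();                  // runs on a pool thread
        worker.RunWorkerCompleted += (sender, e) => { /* mark the cache as ready */ };
    }

    // Kick off the long-running build without blocking the caller.
    public void StartIfNotRunning()
    {
        if (!worker.IsBusy)
            worker.RunWorkerAsync();
    }

    // Hypothetical placeholder for the real cache-building code.
    private void BuildCache() { }
}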
Long or time-consuming processes that need to be started behind the web page, that should not hit the ASP.NET execution timeout, that should free up the user's page, that run requests under a lock, etc.: all of these point towards using asynchronous services. In one of the products I architected, we used services for such scenarios. The service exposes an asynchronous method to initiate the work; the progress status can be queried using another method. Every request is given an id, and duplicate requests are never fired. The work proceeds even if the user logs out, and the user can see the results at a later time.
If you have looked at such options already, let me know if there is any issue. If you have not yet looked in this direction, please consider it. For any help, just send in your comments.
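To make the shape of that pattern concrete, here is a minimal sketch (all of the names are hypothetical; the product's real API is not shown in this answer) of a service method that starts the work, hands back an id, and lets the caller poll for status:

using System;
using System.Collections.Generic;
using System.Threading;

public class CacheBuildService
{
    private readonly Dictionary<Guid, string> statuses = new Dictionary<Guid, string>();
    private readonly object sync = new object();

    // Start the work on a pool thread and return an id the caller can poll with.
    public Guid StartBuild()
    {
        Guid id = Guid.NewGuid();
        lock (sync) { statuses[id] = "Running"; }

        ThreadPool.QueueUserWorkItem(state =>
        {
            BuildCache();                               // hypothetical long-running work
            lock (sync) { statuses[id] = "Completed"; }
        });

        return id;
    }

    // Queried by the client to show progress; the work carries on even if the user logs out.
    public string GetStatus(Guid id)
    {
        lock (sync)
        {
            string status;
            return statuses.TryGetValue(id, out status) ? status : "Unknown";
        }
    }

    private void BuildCache() { /* the real work */ }
}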
I am working on a C# application that works with an array. It walks through it (meaning that at any one time only a narrow part of the array is used). I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if it would actually help, because threads cost something and this cost could easily be more than the parallel gain... So how do I determine whether threading would help?
Try writing some benchmarks that mimic, as closely as possible, the real-world conditions in which your software will actually be used.
Test and time the single-threaded version. Test and time the multi-threaded version. Compare the two sets of results.
If your application is CPU-bound (i.e. it isn't spending time reading files or waiting for data from a device) and there is little to no sharing of live data between the threads (data being altered; if it's read-only it's fine), then you can pretty much increase the speed by 50-75% by adding another thread (as long as it still remains CPU-bound, of course).
The main overhead in multithreading comes from 2 places.
Creation & initialization of the thread. Creating a thread requires quite a few resources to be allocated and involves swaps between kernel and user mode; this is expensive, though it is a one-off cost per thread, so you can pretty much ignore it if the thread runs for any reasonable amount of time. The best way to mitigate this cost is to use a thread pool, as it keeps threads on hand so they don't need to be recreated.
Handling synchronization of data. If one thread is reading data that another is writing, bad things will generally happen (worse if both are changing it). This requires you to lock your data before altering it so that no thread reads a half-written value. These locks are generally quite slow as well. To mitigate this problem, you need to design your data layout so that the threads need to read or write the same data as little as possible. If you do need a lot of these locks, it can become slower than the single-threaded option.
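A tiny sketch of the locking described in point 2 (the shared total and the names are illustrative only):

public class SharedCounter
{
    private readonly object sync = new object();
    private long total;

    // Every writer and reader takes the same lock,
    // so no thread ever sees a half-written value.
    public void Add(long value)
    {
        lock (sync) { total += value; }
    }

    public long Read()
    {
        lock (sync) { return total; }
    }
}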
In short, if you are doing something that requires the threads to share a lot of data, then multi-threading it will be slower, and if the program isn't CPU-bound there will be little or no difference (it could be a lot slower depending on what it is bound to, e.g. a CD or hard drive). If your program matches the conditions described at the start of this answer (CPU-bound, little shared data), then it will PROBABLY be worthwhile to add another thread (though the only way to be certain is profiling).
One more little note: you should only create as many CPU-bound threads as you have physical cores (threads that idle most of the time, such as a GUI message pump thread, can be ignored for this rule).
P.S. You can reduce the cost of locking data by using a methodology called "lock-free programming", though this is something that should really only be attempted by people with a lot of experience with multi-threading and a clear understanding of their target architecture (including how the cache is treated and the memory bus).
I agree with Luke's answer. Benchmark it, it's the only way to be sure.
I can also give a prediction of the results: the fastest version will usually be the one where the number of threads matches the number of cores, EXCEPT when the array is very small and each thread would have to process just a few items; then the setup/teardown time may exceed the processing itself. How few is "a few" depends on what you do. Again: benchmark.
I'd advise finding out a "minimum number of items for a thread to be useful". Then, when deciding how many threads to spawn (or take from a pool), check how many cores the computer has and how many items there are. Spawn as many threads as possible, but no more than the computer has cores, and not so many that each thread would have fewer than the minimum number of items to process.
For example if the minimum number of items is, say, 1000; and the computer has 4 cores; and your list contains 2500 items, you would spawn just 2 threads, because more threads would be inefficient (each would process less than 1000 items).
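As a minimal sketch of that sizing rule (the value 1000 is purely illustrative; the real minimum has to come from your own benchmarks):

using System;

public static class ThreadSizing
{
    // Hypothetical minimum found by benchmarking.
    private const int MinItemsPerThread = 1000;

    public static int ChooseThreadCount(int itemCount)
    {
        int byCores = Environment.ProcessorCount;                    // never more threads than cores
        int byItems = Math.Max(1, itemCount / MinItemsPerThread);    // never fewer items per thread than the minimum
        return Math.Min(byCores, byItems);
    }
}

// The example above: 2500 items on a 4-core machine gives Math.Min(4, 2) == 2 threads.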
Making a step-by-step list for Luke's idea:
Make a single threaded test app
Download Sysinternals Process Monitor and run it
Run your test app and find it on the process list (remember to run it as a release build outside of Visual Studio)
Double click the process and select the Performance Graph tab
Observe the CPU time used by your process
If the CPU time is sitting flat at 50% for more than a few seconds, you can probably speed your overall process up using threads (assuming the bunch of stuff Mr Peters referred to holds true).
(However, the best you can do on a dual-core machine is to halve the time it takes to run. If your process only takes 4 seconds, it might not be worth getting it to run in 2 seconds.)
Using the task parallel library / Rx provides a friendlier interface than System.Threading.ThreadPool, which might make your world a bit easier.
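For instance, a minimal sketch (ProcessElement is a hypothetical placeholder for the per-item work on your array) using the TPL's Parallel.For, which partitions the index range across the available cores for you:

using System.Threading.Tasks;

public static class ArrayWorker
{
    public static void ProcessAll(double[] data)
    {
        // The TPL decides how to split the range across the cores.
        Parallel.For(0, data.Length, i =>
        {
            data[i] = ProcessElement(data[i]);   // hypothetical per-item work
        });
    }

    private static double ProcessElement(double value)
    {
        return value * value;   // stand-in computation
    }
}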
You miss, IMHO, one item, which is that it is not always about execution time. There is:
The problem of keeping a UI operational during an operation. Even if the UI is "dormant", a non-responsive message pump makes a bad impression.
The possibility of using a thread pool so that you don't have to start/stop threads all the time. I use thread pools very extensively, and various parts of my applications keep them busy.
Anyhow, ignoring my point 1 (where you may go multi-threaded without speeding things up, just to keep your UI responsive), I would say it is faster whenever you can actually either split up the work (so you can keep more than one core busy) or offload it for other reasons.
Scenario
I have a very heavy number-crunching process that pulls large datasets from 3 different databases and then does a bit of processing on each to eventually produce a result.
This process is fine when it is run for a single asset. However, I now have 3500 assets that I need to process, which takes about 1 hour 30 minutes with the process in its current state.
Question
What is my best option for speeding this process up in terms of a multi-threaded C# application? Realistically I don't have to share anything between the processing of each asset, so I'm confident that processing multiple assets at a time shouldn't cause too many issues.
Thoughts
I've heard good things about thread pools, but I guess realistically I want something that isn't too huge to implement, is easily understandable and can run off a decent number of threads at a time.
Help would be greatly appreciated.
In .NET you can use the existing thread pool; there is no need to implement one yourself. Here is the relevant MSDN.
You should take care not to run too many processes at once (3500 are a bit much), but using the supplied queuing mechanism should get you started in the right direction.
Another thing to try is using PLINQ.
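A minimal PLINQ sketch (the Asset and Result types and the ProcessAsset method are hypothetical stand-ins for your own code) that fans the 3500 assets out across the available cores:

using System;
using System.Collections.Generic;
using System.Linq;

public static class AssetProcessor
{
    public static List<Result> ProcessAll(IEnumerable<Asset> assets)
    {
        // AsParallel() partitions the input across multiple cores;
        // WithDegreeOfParallelism caps the number of concurrent workers.
        return assets.AsParallel()
                     .WithDegreeOfParallelism(Environment.ProcessorCount)
                     .Select(asset => ProcessAsset(asset))
                     .ToList();
    }

    // Hypothetical placeholder for the existing per-asset number crunching.
    private static Result ProcessAsset(Asset asset) { return new Result(); }
}

public class Asset { }
public class Result { }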
If you don't have a multi-core processor or multiple machines, and the thread processes are not I/O-bound, multithreading will not help. Start by profiling the current processing to see where the time is going.
Thread pools are fine, and you can use a task queue to do simple load-balancing, but if there are no spare CPU cycles in the current application this would be a waste of time.
The nicest option would be to use the new Task Parallel Library in .NET 4, if you can do this using VS 2010 RC. It has built-in load balancing and work-stealing queues, so it will make this task easy to thread, and very scalable.
However, if you need to do this in .NET 3.5, I would recommend using the ThreadPool, and just using ThreadPool.QueueUserWorkItem to start each task.
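For the .NET 3.5 route, a minimal sketch (Asset and ProcessAsset are hypothetical stand-ins for your own types) of queuing each asset with ThreadPool.QueueUserWorkItem and waiting for the whole batch to finish:

using System.Collections.Generic;
using System.Threading;

public static class ThreadPoolRunner
{
    public static void ProcessAll(IList<Asset> assets)
    {
        int remaining = assets.Count;
        using (ManualResetEvent allDone = new ManualResetEvent(assets.Count == 0))
        {
            foreach (Asset asset in assets)
            {
                Asset current = asset;   // capture the loop variable for the closure
                ThreadPool.QueueUserWorkItem(delegate
                {
                    try
                    {
                        ProcessAsset(current);   // hypothetical per-asset work
                    }
                    finally
                    {
                        // Signal once the last queued item has finished.
                        if (Interlocked.Decrement(ref remaining) == 0)
                            allDone.Set();
                    }
                });
            }
            allDone.WaitOne();   // block until every queued item has completed
        }
    }

    private static void ProcessAsset(Asset asset) { /* existing number-crunching code */ }
}

public class Asset { }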
If your tasks are all very computationally intensive for their entire lifetime, you may want to prevent having too many running concurrently. Some form of queue, which you pull work from and execute, can be beneficial in this case. Just place all of your work items into a queue, and have threads pull work from the queue (with appropriate locking), and process.
If you have a multi-core system, and CPU cycles are your bottleneck, this should scale very well.
The .NET built-in ThreadPool will meet both of your requirements: running a decent number of threads and being simple to work with. I have previously written an article on the subject, which you can find here.
With SQL Server 2005 or later, you can create user-defined functions in C# and use them from within T-SQL procedures, which can give a marked speedup for number crunching. SQL Server is multi-threaded and does a good job with it, so consider keeping as much of the processing in the database engine as you can.
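As a rough sketch of what such a function looks like (the function name and calculation are hypothetical, and the compiled assembly still has to be registered in SQL Server with CREATE ASSEMBLY / CREATE FUNCTION before T-SQL can call it):

using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class AssetFunctions
{
    // A CLR scalar function callable from T-SQL once the assembly is registered.
    [SqlFunction(IsDeterministic = true, IsPrecise = false)]
    public static SqlDouble WeightedScore(SqlDouble value, SqlDouble weight)
    {
        if (value.IsNull || weight.IsNull)
            return SqlDouble.Null;

        // Hypothetical stand-in for the real number crunching.
        return new SqlDouble(value.Value * weight.Value);
    }
}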