Options Besides Using Parallel.ForEach - C#

I have built a Windows Forms app that is used to generate approximately 70k SSRS reports and save them to a folder for distribution. This process takes about 8 hours to run, so I tried using Parallel.ForEach() to speed things up.
I can run the app with MaxDegreeOfParallelism set to 3 as long as no other processes are accessing the report server; anything higher than that, or another process hitting the server at the same time, and the report server throws an HTTP 503 error because it's overloaded. I have no control over what other processes can access the server, or when, so I'm concerned that even setting MaxDegreeOfParallelism down to 2 may not prevent overloading the server.
I have almost no experience using the Parallel Library so I would appreciate any direction or suggestions on what I can do besides using Parallel.ForEach() to speed up this app.

The first thing to analyze is whether your task is processor-intensive or I/O-intensive. That will help you decide whether to use Parallel.ForEach() for processor-intensive processing or something like Task.WhenAll for I/O-intensive processing.
From your description, I believe this is more of an I/O-intensive process, but it's hard to say without being able to see your code.
Is the time in each iteration spent on database queries, on file reads/writes, or on actual processor operations? These are the key questions you need to answer to find the best solution.
You can also consider newer language tools like async streams, or Parallel.ForEachAsync (available in .NET 6 and later).
You can find some great examples here:
https://scatteredcode.net/parallel-foreach-async-in-c/
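For an I/O-bound job like this, a common pattern is to throttle the number of in-flight requests yourself. Here is a minimal sketch, where GenerateReportAsync is a hypothetical stand-in for your SSRS render-and-save call; the SemaphoreSlim caps how many requests hit the report server at once:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ReportRunner
{
    // Hypothetical stand-in for rendering one SSRS report and saving it.
    static async Task GenerateReportAsync(string reportId)
    {
        await Task.Delay(100); // simulate the I/O-bound server call
    }

    static async Task RunAllAsync(IEnumerable<string> reportIds, int maxConcurrency)
    {
        using var throttle = new SemaphoreSlim(maxConcurrency);
        var tasks = reportIds.Select(async id =>
        {
            await throttle.WaitAsync();     // wait for a free slot
            try { await GenerateReportAsync(id); }
            finally { throttle.Release(); } // free the slot for the next report
        }).ToList();
        await Task.WhenAll(tasks);
    }
}

Because the throttle is independent of thread count, you can tune maxConcurrency to whatever the report server tolerates without tying up a thread per request.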

Related

Proper understanding of Tasks

At the risk of asking a stupid question (and I will voluntarily delete the question myself if my peers think it is a stupid question)...
I have a C# desktop app.
I upload data to my server using a WCF Service.
I am experimenting with using Tasks.
This code calls my web service...
Task t = Task.Run(() => { wcf.UploadMotionDynamicRaw(bytes); });
I am stress testing this line of code.
I call it as many times as I can in 1 second, for a period of X time.
Will this 'flood' my router if the internet is slow for whatever reason?
I can sort of guess that this will be the case...
So, how can I test whether the task has completed before calling it again? In doing this, will the extra plumbing eat into the speed gains from using Tasks?
Finally, does using Tasks make use of multiple cores?
Will this 'flood' my router if the internet is slow for whatever reason?
This depends on the size of the file you are uploading and your connection speed. To figure that out, just run it and measure.
So, how can I test whether the task has completed before calling it again?
You can use the Task.ContinueWith method (any of its available overloads) to "catch" task completion and run some other method, possibly recursively.
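A minimal sketch of both options, reusing the wcf and UploadMotionDynamicRaw names from the question (moreBytes and the payload list are hypothetical placeholders):

// Option 1: chain the next upload off the completion of the previous one.
Task t = Task.Run(() => wcf.UploadMotionDynamicRaw(bytes))
             .ContinueWith(previous => wcf.UploadMotionDynamicRaw(moreBytes));

// Option 2 (usually simpler): await each upload so only one is in flight.
async Task UploadAllAsync(IEnumerable<byte[]> payloads)
{
    foreach (var payload in payloads)
    {
        await Task.Run(() => wcf.UploadMotionDynamicRaw(payload));
    }
}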
In doing this, will the extra plumbing eat into the speed gains from using Tasks?
It depends on the workload, your processor, and the timing you expect. In other words, run it and measure; there is no generic answer to this.
Does using Tasks make use of multiple cores?
Yes, whenever the scheduler figures out it is possible. Running single tasks one after another will not spread one function's work across multiple cores; for that you need Parallel.For and similar constructs. Even then, .NET does not guarantee the work will run on multiple cores, but it most probably will.
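As an illustrative sketch, this is the kind of construct that does spread one loop's work across cores:

using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var results = new double[1_000_000];

        // The runtime partitions the index range and runs the partitions
        // on multiple cores when it judges that to be worthwhile.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);
        });

        Console.WriteLine(results[results.Length - 1]);
    }
}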

Multi-core processing in C#

I have a C# desktop application. Its purpose is twofold.
1. To display a live feed from an IP camera in my WinForms application.
2. To send any captured motion to my server.
It is (2) that is labour intensive. I believe I have optimised it as much as I can and the RAM is manageable.
However, in my quest to learn and to try to make my code even more efficient I am always open to new approaches.
Today, I came across parallel processing. But some links I've read seem to suggest there would not be much performance gain from using it. Indeed, in all my travels (contracts) I have never seen anyone use parallel processing in C# development.
Should I take early heed and not bother to look into this or should I see whether there is anything to gain by 'off-loading' my motion detection code to a separate parallel process?
People's advice/experience would be greatly appreciated.
Thanks
I would recommend taking a look at the Task Parallel Library provided in the .NET Framework. It's built on the idea that a piece of work is a Task, abstracting away having to create and manage threads manually.
Tasks can run in parallel on their own threads or run on the same thread, depending on the workload and configuration. The Task Parallel Library is also great for asynchronous operations and works very well with I/O, where hardware can block a thread and cause performance issues in your application; reading from a hard drive is a typical example.
I suggest running a profiler on your application; Visual Studio Professional and above come with a built-in profiler that will let you trace and pinpoint intensive operations that could be improved with concurrency. If your application is running smoothly there is no immediate need, but there's nothing wrong with forward thinking, and I'm sure there will be a point where knowing how to implement concurrency will benefit you.
I've used the TPL to solve various performance issues with large database calls in iterative loops, and it's great for these I/O operations. The TPL also takes into account the hardware it's executing on; used correctly, the same piece of code will run on a 2-core machine and still do the best that hardware can provide, without you having to worry about creating too many threads and so on.
Personally, I'd say some asynchronous operations could be a good addition to your application, since it deals with external network camera devices that could otherwise block threads.
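As a minimal sketch of that idea, with DetectMotion and SendToServer as hypothetical stand-ins for your own methods (System.Drawing and System.Threading.Tasks assumed), the heavy work is pushed off the WinForms UI thread for each captured frame:

// Hypothetical frame handler: off-load the heavy work so the UI thread
// stays free to render the live feed.
private async void OnFrameCaptured(Bitmap frame)
{
    bool motion = await Task.Run(() => DetectMotion(frame));
    if (motion)
    {
        await Task.Run(() => SendToServer(frame));
    }
}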

C# multi-threading query

I am trying to write a program in C# that will connect to around 400 computers and retrieve some information; let's say it retrieves the list of web services running on each computer.
I am assuming I need a well-threaded application to retrieve info from such a huge number of servers really quickly. I am pretty blank on how to start working on this; can you give me a head start on how to begin?
Thanks!
I see no reason why you should use threading in your main logic. Use asynchronous APIs and schedule their callbacks onto the main thread. That way you get the benefits of asynchrony, but without most of the difficulty related to threading.
You'll only need multithreading in your logic code if the work you need to do on the data is that expensive. And even then you can usually get away with parallelizing using side-effect-free functions.
Take a look at the Task Parallel Library.
Specifically, Data Parallelism.
You could also use PLINQ if you wanted.
Executing the threads in parallel on a multi-core CPU will also enhance performance.
My favourite references on the topic are given below:
http://www.albahari.com/threading/
http://www.codeproject.com/KB/Parallel_Programming/NET4ParallelIntro.aspx
Where and how do you get the list of those 400 servers to query?
How often do you need to do this?
You could use a Windows service or a scheduled task to invoke your software; in it, loop over the server list and start a call to each server on a different thread using the thread pool. The pool has a maximum size, so you won't end up starting 400 threads all at once anyway.
Describe your solution a bit better and we'll see what we can do :)
Take a look at this library: the Task Parallel Library. You can make efficient use of your system resources and manage your work more easily than by managing threads directly.
There might be considerable impact on the server side when you start querying all 400 computers. Take a look at Parallel LINQ (PLINQ), where you can limit the degree of parallelism.
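A minimal PLINQ sketch of that, where GetWebServices is a hypothetical stand-in for however you query one machine (WMI, remoting, etc.):

using System;
using System.Linq;

class Program
{
    // Hypothetical: query one remote machine for its web services.
    static string[] GetWebServices(string machine)
    {
        return new[] { $"service-on-{machine}" };
    }

    static void Main()
    {
        string[] machines = { "server01", "server02", "server03" };

        var results = machines
            .AsParallel()
            .WithDegreeOfParallelism(8) // cap concurrent queries to spare the servers
            .Select(m => new { Machine = m, Services = GetWebServices(m) })
            .ToList();

        foreach (var r in results)
            Console.WriteLine($"{r.Machine}: {string.Join(", ", r.Services)}");
    }
}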
You can also use thread pooling for this, e.g. via the Task class.
Creating threads manually may not be a good idea, as they are not reusable and take quite a lot of memory/CPU to create.

C# Multithreading File IO (Reading)

We have a situation where our application needs to process a series of files and rather than perform this function synchronously, we would like to employ multi-threading to have the workload split amongst different threads.
Each item of work is:
1. Open a file for read only
2. Process the data in the file
3. Write the processed data to a Dictionary
We would like to perform each file's work on a new thread.
Is this possible, and would we be better off using the ThreadPool or spawning new threads, keeping in mind that each item of "work" only takes about 30 ms, but hundreds of files may need to be processed?
Any ideas to make this more efficient are appreciated.
EDIT: At the moment we are making use of the ThreadPool to handle this. If we have 500 files to process, we cycle through the files and allocate each "unit of processing work" to the ThreadPool using QueueUserWorkItem.
Is the ThreadPool suitable for this?
I would suggest you use ThreadPool.QueueUserWorkItem(...); its threads are managed by the system and the .NET Framework. The chances of messing things up with your own thread pool are much higher, so I would recommend using the ThreadPool provided by .NET.
It's very easy to use:
ThreadPool.QueueUserWorkItem(new WaitCallback(YourMethod), ParameterToBeUsedByMethod);

private void YourMethod(object state)
{
    // your code here, using the state parameter...
}
For more reading, please follow this link: http://msdn.microsoft.com/en-us/library/3dasc8as%28VS.80%29.aspx
Hope this helps.
I suggest you use a finite number of threads (say 4) and split the work into 4 pools, i.e. if you have 400 files to process, give each thread 100 files, split evenly. You then spawn the threads, pass each its share of the work, and let them run until they have finished their specific work.
You only have a certain amount of I/O bandwidth, so having too many threads will not provide any benefit; also remember that creating a thread takes a small amount of time.
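A minimal sketch of that partitioning scheme, assuming a hypothetical Process function and a hypothetical input folder, using a ConcurrentDictionary so the threads can write results without extra locking:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Threading;

class PartitionedWork
{
    // Hypothetical per-file processing step.
    static string Process(string content) => content.ToUpperInvariant();

    static void Main()
    {
        string[] files = Directory.GetFiles(@"C:\data"); // hypothetical input folder
        var results = new ConcurrentDictionary<string, string>();
        const int threadCount = 4;

        // Split the files evenly: thread i takes files i, i+4, i+8, ...
        var threads = Enumerable.Range(0, threadCount)
            .Select(i => new Thread(() =>
            {
                for (int f = i; f < files.Length; f += threadCount)
                    results[files[f]] = Process(File.ReadAllText(files[f]));
            }))
            .ToArray();

        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join(); // wait for all chunks to finish
    }
}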
Instead of having to deal with threads or manage thread pools directly, I would suggest using a higher-level library like Parallel Extensions (PEX):
var filesContent = from file in enumerableOfFilesToProcess.AsParallel()
                   select new
                   {
                       File = file,
                       Content = File.ReadAllText(file)
                   };

var processedContent = from content in filesContent
                       select new
                       {
                           content.File,
                           ProcessedContent = ProcessContent(content.Content)
                       };

// AsParallel() sits on the source sequence so that the file reads and
// the processing themselves run in parallel.
var dictionary = processedContent.ToDictionary(c => c.File);
PEX will handle thread management according to available cores and load, while you get to concentrate on the business logic at hand (wow, that sounded like a commercial!).
PEX is part of the .NET Framework 4.0, but a back-port to 3.5 is also available as part of the Reactive Extensions.
I suggest using the CCR (Concurrency and Coordination Runtime); it will handle the low-level threading details for you. As for your strategy: one thread per work item may not be the best approach depending on how you write to the dictionary, because you may create heavy contention, since dictionaries aren't thread-safe.
Here's some sample code using the CCR; an Interleave would work nicely here:
Arbiter.Activate(dispatcherQueue, Arbiter.Interleave(
    new TeardownReceiverGroup(Arbiter.Receive<bool>(
        false, mainPort, new Handler<bool>(Teardown))),
    new ExclusiveReceiverGroup(Arbiter.Receive<object>(
        true, mainPort, new Handler<object>(WriteData))),
    new ConcurrentReceiverGroup(Arbiter.Receive<string>(
        true, mainPort, new Handler<string>(ReadAndProcessData)))));

public void WriteData(object data)
{
    // Write data to the dictionary.
    // This handler is never executed in parallel with the others,
    // so no synchronization code is needed.
}

public void ReadAndProcessData(string s)
{
    // This code gets scheduled to execute in parallel;
    // the CCR takes care of the task scheduling for you.
}

public void Teardown(bool b)
{
    // Clean up when all tasks are done.
}
In the long run, I think you'll be happier if you manage your own threads. This will let you control how many are running and make it easy to report status.
Build a worker class that does the processing and give it a callback routine to return results and status.
For each file, create a worker instance and a thread to run it. Put the thread in a Queue.
Peel threads off the queue up to the maximum you want to run simultaneously. As each thread completes, start another one. Adjust the maximum and measure throughput. I prefer to use a Dictionary to hold running threads, keyed by their ManagedThreadId.
To stop early, just clear the queue.
Use locking around your thread collections to preserve your sanity.
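A minimal sketch of that scheme, with ProcessFile as a hypothetical stand-in for the real work; the lock guards both the pending queue and the dictionary of running threads:

using System;
using System.Collections.Generic;
using System.Threading;

class ThreadScheduler
{
    private readonly object _sync = new object();
    private readonly Queue<string> _pending = new Queue<string>();
    private readonly Dictionary<int, Thread> _running = new Dictionary<int, Thread>();
    private readonly int _maxConcurrent;

    public ThreadScheduler(IEnumerable<string> files, int maxConcurrent)
    {
        _maxConcurrent = maxConcurrent;
        foreach (var f in files) _pending.Enqueue(f);
    }

    public void Start()
    {
        lock (_sync)
        {
            // Peel work off the queue up to the concurrency limit.
            while (_running.Count < _maxConcurrent && _pending.Count > 0)
            {
                string file = _pending.Dequeue();
                var t = new Thread(() => Run(file));
                t.Start();
                _running[t.ManagedThreadId] = t;
            }
        }
    }

    public void StopEarly()
    {
        lock (_sync) { _pending.Clear(); } // running threads finish naturally
    }

    private void Run(string file)
    {
        try
        {
            ProcessFile(file); // hypothetical: open, process, report status
        }
        finally
        {
            lock (_sync) { _running.Remove(Thread.CurrentThread.ManagedThreadId); }
            Start(); // as one thread finishes, pull the next queued item
        }
    }

    private static void ProcessFile(string file) { /* real work here */ }
}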
Use ThreadPool.QueueUserWorkItem to execute each independent task. Definitely don't create hundreds of threads. That is likely to cause major headaches.
The general rule is to use the ThreadPool when you don't need to worry about when the threads finish (or about using mutexes to track them), or about stopping the threads.
So, do you need to worry about when the work is done? If not, the ThreadPool is the best option. If you want to track overall progress or stop threads, then your own collection of threads is best.
ThreadPool is generally more efficient if you are re-using threads. This question will give you a more detailed discussion.
Hth
Using the ThreadPool for each individual task is definitely a bad idea. From my experience this tends to hurt performance more than help it. The first reason is that a considerable amount of overhead is required just to allocate a task for the ThreadPool to execute. By default, each application is assigned its own ThreadPool, initialized with a capacity of ~100 threads. When you are executing 400 operations in parallel, it does not take long to fill the queue with requests, and now you have ~100 threads all competing for CPU cycles. Yes, the .NET Framework does a great job of throttling and prioritizing the queue; however, I have found that the ThreadPool is best left for long-running operations that probably won't occur very often (loading a configuration file, or random web requests). Using the ThreadPool to fire off a few operations at random is much more efficient than using it to execute hundreds of requests at once. Given the current information, the best course of action would be something similar to this:
Create a System.Threading.Thread (or use a SINGLE ThreadPool thread) with a queue that the application can post requests to.
Use the FileStream's BeginRead and BeginWrite methods to perform the I/O operations. This will cause the .NET Framework to use native APIs to thread and execute the I/O (IOCP), as sketched below.
This gives you two levers: first, your requests will still be processed in parallel while the operating system manages file-system access and threading; second, because the bottleneck in the vast majority of systems will be the HDD, you can implement custom priority sorting and throttling on your request thread to gain greater control over resource usage.
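A minimal sketch of the BeginRead pattern described above, assuming UTF-8 text files; a real implementation would loop, since EndRead may return fewer bytes than requested:

using System;
using System.IO;
using System.Text;

class ApmReader
{
    // Kicks off an overlapped read; the callback runs on an IOCP thread.
    public static void ReadFileAsync(string path, Action<string> onComplete)
    {
        var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                FileShare.Read, 4096, useAsync: true);
        var buffer = new byte[fs.Length];
        fs.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int read = fs.EndRead(ar); // may be partial; loop in real code
            fs.Dispose();
            onComplete(Encoding.UTF8.GetString(buffer, 0, read));
        }, null);
    }
}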
I am currently writing a similar application, and using this method is both efficient and fast... Without any threading or throttling, my application was only using 10-15% CPU, which can be acceptable for some operations depending on the processing involved; however, it made my PC as slow as if an application were using 80%+ of the CPU. This was the file-system access. The ThreadPool and IOCP functions do not care if they are bogging the PC down, so don't get confused: they are optimized for performance, even if that performance means your HDD is squealing like a pig.
The only problem I have had is that memory usage ran a little high (50+ MB) during the testing phase, with approximately 35 streams open at once. I am currently working on a solution similar to the MSDN recommendation for SocketAsyncEventArgs, using a pool to allow x number of requests to operate simultaneously, which ultimately led me to this forum post.
Hope this helps somebody with their decision making in the future :)

Appropriate Multi-Threading Option

Scenario
I have a very heavy number-crunching process that pulls large datasets from 3 different databases and then does a bit of processing on each to eventually produce a result.
This process is fine when it runs for a single asset. However, I now have 3500 assets that I need to process, which takes about 1 hour 30 minutes with the current process.
Question
What is my best option for speeding this process up with a multi-threaded C# application? Realistically I don't have to share anything between the processing of each asset, so I'm confident that processing multiple assets at a time shouldn't cause too many issues.
Thoughts
I've heard good things about thread pools, but realistically I want something that isn't too huge to implement, is easily understandable, and can run a decent number of threads at a time.
Help would be greatly appreciated.
In .NET you can use the existing ThreadPool; there is no need to implement one yourself. Here is the relevant MSDN page.
You should take care not to run too many processes at once (3500 at the same time would be a bit much), but using the supplied queuing mechanism should get you started in the right direction.
Another thing to try is using PLINQ.
If you don't have a multi-core processor or multiple machines, and the per-asset work is not I/O-bound, multithreading will not help. Start by profiling the current processing to see where the time is going.
Thread pools are fine, and you can use a task queue to do simple load balancing, but if there are no spare CPU cycles in the current application this would be a waste of time.
The nicest option would be to use the new Task Parallel Library in .NET 4, if you can do this using VS 2010 RC. This has built-in load balancing and work stealing queues, so it will make this task easy to thread, and very scalable.
However, if you need to do this in .NET 3.5, I would recommend using the ThreadPool, and just using ThreadPool.QueueUserWorkItem to start each task.
If your tasks are all very computationally intensive for their entire lifetime, you may want to prevent too many running concurrently. Some form of queue that you pull work from and execute can be beneficial in this case: just place all of your work items into a queue, and have threads pull work from it (with appropriate locking) and process it.
If you have a multi-core system, and CPU cycles are your bottleneck, this should scale very well.
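For the .NET 4 route, a minimal sketch using Parallel.ForEach with a capped degree of parallelism; ProcessAsset and the asset-id list are hypothetical stand-ins for your own loading and crunching code:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class AssetRunner
{
    // Hypothetical: pull this asset's datasets from the 3 databases and crunch them.
    static void ProcessAsset(int assetId) { }

    static void Main()
    {
        IEnumerable<int> assetIds = new List<int>(); // hypothetical: load the 3500 ids

        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };
        Parallel.ForEach(assetIds, options, ProcessAsset);
    }
}

The TPL handles the partitioning and work stealing for you; MaxDegreeOfParallelism just keeps the number of concurrent assets bounded.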
.NET's built-in ThreadPool will satisfy both of your requirements: running a decent number of threads and being simple to work with. I have previously written an article on the subject, which you can find here.
With SQL Server 2005 or later, you can create user-defined functions in C# and use them from within T-SQL procedures, which can give a marked speedup for number crunching. SQL Server is multi-threaded and does a good job with it, so consider keeping as much of the processing in the database engine as you can.
