C# - Several Threads Stuck Waiting for LowLevelLifoSemaphore.WaitNative

I have an old .NET 6 application running on Linux that makes a ton of synchronous DB calls. While debugging a threading issue, I noticed that a lot of threads have the following call stack:
System.Private.CoreLib!System.Threading.LowLevelLifoSemaphore.WaitNative(class Microsoft.Win32.SafeHandles.SafeWaitHandle,int32)
System.Private.CoreLib!System.Threading.LowLevelLifoSemaphore.WaitForSignal(int32)
System.Private.CoreLib!System.Threading.LowLevelLifoSemaphore.Wait(int32,bool)
System.Private.CoreLib!System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()
That's the whole call stack, and there are many threads with this exact same stack. What is this LowLevelLifoSemaphore.WaitNative about? Is there some kind of low-level deadlock going on?

Related

multi core processing in c#

I have a C# desktop application. Its purpose is twofold.
1) To display a live feed from an IP camera in my WinForms application.
2) To send any captured motion to my server.
It is (2) that is labour-intensive. I believe I have optimised it as much as I can, and the RAM usage is manageable.
However, in my quest to learn and to try to make my code even more efficient, I am always open to new approaches.
Today I came across parallel processing. But reading some links seems to suggest there would not be much performance gain from using parallel processing. Indeed, in all my travels (contracts) I have never seen anyone use parallel processing in C# development.
Should I take early heed and not bother looking into this, or should I see whether there is anything to gain by 'off-loading' my motion detection code to a separate parallel process?
People's advice/experience would be very informative.
Thanks
I would recommend taking a look at the Task Parallel Library (TPL) provided in the .NET Framework. It's based on the idea that a piece of work is a Task, and it gives you an abstraction over creating and managing threads manually.
Tasks can run in parallel on their own threads or run on the same thread, depending on the workload and configuration. The Task Parallel Library is also great for asynchronous operations and works very well with I/O, where the hardware can block a thread and cause performance issues in your application; reading from a hard drive, for example.
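As a rough sketch only (DetectMotion and UploadToServerAsync are hypothetical stand-ins for whatever your motion detection and upload code actually do), offloading the heavy work onto a Task so the WinForms UI stays responsive might look like this:

using System.Threading.Tasks;

class MotionWorker
{
    // Hypothetical placeholder for the CPU-heavy motion detection step.
    static byte[] DetectMotion(byte[] frame) => frame;

    // Hypothetical placeholder for the network upload step.
    static Task UploadToServerAsync(byte[] motion) => Task.Delay(100);

    // Called for every captured frame; the heavy work runs on a
    // thread-pool thread so the UI thread is never blocked.
    public static Task ProcessFrameAsync(byte[] frame)
    {
        return Task.Run(async () =>
        {
            var motion = DetectMotion(frame);    // CPU-bound, off the UI thread
            await UploadToServerAsync(motion);   // I/O-bound, no thread blocked while waiting
        });
    }
}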
I suggest running a profiler on your application; Visual Studio Professional and above comes with a built-in profiler that will let you trace and pinpoint intensive operations that could possibly be improved with concurrency. If your application is running smoothly, then there is no need, but there's nothing wrong with forward thinking and learning the Task Parallel Library, as I'm sure there will be a point where knowing how to implement concurrency in your application will benefit you.
I've used the TPL to solve various performance issues with large database calls in iterative loops, and it's great for these I/O operations. The TPL will also take into account the hardware it's being executed on and, used correctly, will generally make good use of whatever hardware it is running on. You could take the same piece of code and run it on a 2-core machine and it will still do the best it can with the hardware available, without you having to worry about creating too many threads, etc.
Personally, I'd say some asynchronous operations could be a good addition to your application, since it deals with external network camera devices that could block threads in your application.

2000 Worker Threads and only a few real

I paused VS and went to the Threads window. I see more than 2000 "Worker Thread" entries there with the same call stack and different IDs (the threads are created with the Task.Factory.StartNew
method).
All these threads are waiting for one lock to be released. This may be a bug in my application. The issue is that when I go to Task Manager, I see roughly the standard amount of threads and memory usage. Is this a CLR optimization to avoid keeping many idle threads, or a bug in the VS Threads window?
It is a bug in your code. Deadlock is one of the universal threading bugs.
Getting to 2000 threads is possible. It is the job of the ThreadPool manager to limit the number of threads that can run, governed by its SetMaxThreads() method. The default is a ridiculously large number, 1023 on my 4-core laptop; it depends on the .NET version as well, and you probably have an 8-core machine. Actually getting that many started takes a while.
Deadlock is the easier threading bug to solve: you've got a lot of time to look at the call stacks and figure out where they are deadlocking. Unlike threading race bugs, the really nasty ones you're liable to get when you remove whatever lock causes the deadlock right now. Temporarily calling ThreadPool.SetMaxThreads(4, 10000) to limit the carnage is a decent strategy, so you don't drown in the number of threads to look at and the debugging attempt doesn't seem futile.
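As a minimal sketch of that temporary debugging aid (the numbers are just the ones mentioned above, not production settings):

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Cap the pool at 4 worker threads (and 10000 I/O completion
        // threads) while debugging, so the Threads window stays readable.
        // Note: the call returns false if the worker limit is below the
        // number of processors on the machine.
        bool ok = ThreadPool.SetMaxThreads(4, 10000);
        Console.WriteLine("SetMaxThreads accepted: " + ok);

        // ... start the rest of the application here ...
    }
}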

multiprocessing in winforms application

I'm working on about 35 batches that update many databases as part of our daily process at work. Each of these batches was originally developed as a single web app. Due to database issues, I have collected all of them into one Windows application to make use of DB connection pooling, and I have assigned a single BackgroundWorker to each batch. With up to 20 batches in the application, everything works fine. But when I add another BackgroundWorker for any further batch, the application hangs. I think this is because I'm running too many threads in one process. Is there a solution to this problem, for example making the application work with multiple processes?
Regards,
Note: I have assigned a single machine to this application (Core i7 CPU, 8 GB RAM).
How many databases do you have to update?
I think the recommendation is to have no more threads than you have databases.
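If you go that route, here is a hedged sketch of capping concurrency at one worker per database (RunBatch and the database list are placeholders, not the poster's actual code):

using System.Threading.Tasks;

class BatchRunner
{
    // Hypothetical placeholder for one batch's update logic.
    static void RunBatch(string databaseName) { /* update this database */ }

    static void RunAll(string[] databases)
    {
        // Allow at most one concurrent worker per database.
        var options = new ParallelOptions { MaxDegreeOfParallelism = databases.Length };
        Parallel.ForEach(databases, options, RunBatch);
    }
}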
If your UI freezes while many background workers are active, but recovers once those background workers have finished processing, then the UI thread is most likely executing a method which waits for a result or signal from one of the background worker threads.
To fix your problem, you will have to look for UI-related code that deals with synchronization/multi-threading. This might be places where one of the many synchronization objects of .NET is being used (including the lock statement), but it could also involve "dumb" polling loops, a.k.a. while (!worker.IsFinished) Thread.Sleep();.
Another possible reason for the freeze might be that you are accidentally running a worker (or worker-related method) on the UI thread instead of on a background thread.
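To make that concrete, here is a hedged sketch (RunBatch and ShowResult are hypothetical stand-ins for your own batch logic and UI update): the commented-out polling loop freezes the UI thread, while the event-based version lets the UI keep pumping messages.

using System.ComponentModel;
using System.Windows.Forms;

class BatchForm : Form
{
    // Hypothetical placeholders for your own code.
    object RunBatch() => null;
    void ShowResult(object result) { }

    void StartBatch()
    {
        // BAD: this blocks the UI thread and freezes the window:
        //     while (!finished) System.Threading.Thread.Sleep(100);

        // BETTER: let the BackgroundWorker signal completion via an event.
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = RunBatch();             // runs on a pool thread
        worker.RunWorkerCompleted += (s, e) => ShowResult(e.Result);  // runs back on the UI thread
        worker.RunWorkerAsync();
    }
}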
But you will find out when you use the debugger.
To keep the scope of your hunt for problematic methods manageable, let your program run in the debugger until the UI freezes. At that moment, pause program execution in the debugger. Look at which code the UI thread is processing, and you will have found one instance of offending code. (Exactly what is wrong there I can't tell you, because I don't know your code.)
It is quite possible that different UI-related methods in your code suffer from the same issue. So once you have found the offending code (and were able to fix it), you will want to check for other problematic methods, but that should be rather easy, since by then you will know what to look for...

C# - Improving a Multi-Threaded Application Design

Specification
C# Distributed Application.
Client/Server design.
Client (Winforms), Server (Windows Service), Communication via .Net Remoting.
The below question relates to the Server-Side of the application.
(EDIT) The server-side of the application runs on a server with 8 cores and 12 GB RAM.
(EDIT) The CPU of this server is always hitting around 80% usage because lots of other services run on this same server.
Scenario
I've inherited a large legacy application.
It carries out a bunch of tasks, some of them independently, but others not.
The current design for this application involves the creation of 14 threads, each running either 1 task or a number of tasks.
The problem is that I get the feeling this design element has an impact on performance.
Code Examples - How Each Class/Thread Is Designed & Run
using System.Threading;

public class ManageThreads
{
    private Thread doStuffThread = null;

    // Inside the constructor EVERY thread is instantiated and run.
    // (I am aware that this example only shows the use of 1 thread.)
    public ManageThreads()
    {
        doStuffThread = new Thread(new ThreadStart(DoSomeStuff.Instance.Start));
        doStuffThread.Start();
        // Instantiate and run another thread.....
        // Instantiate and run another thread.....
        // Instantiate and run another thread.....etc.
    }
}

public class DoSomeStuff
{
    // Singleton instance that ManageThreads points its thread at.
    public static readonly DoSomeStuff Instance = new DoSomeStuff();

    // Public so that ManageThreads can reach it via Instance.Start.
    public void Start()
    {
        while (true)
        {
            // Repeatedly do some tasks.....
            Thread.Sleep(5000);
        }
    }
}
Thoughts
What I'd like to do is keep the existing code, but modify the way that it runs.
I've thought about the use of a Thread Pool to solve this problem, but given the current architecture I am unsure of how I would go about doing this.
Questions
Would this current design affect performance in a noticeable way?
Is it possible for me to improve the performance of this application without altering the underlying functions, but changing the design slightly?
Can anyone recommend anything / advise me on the right way to go about improving this?
Help greatly appreciated.
"I get the feeling this design element has an impact on performance."
Don't guess, get a profiler out and measure what's going on. Gather some empirical stats about where time is spent in the application and then you can take a view on where the pinch points are.
If the time spent creating threads is your biggest headache then moving to a threadpool may be the right answer, but you won't know without some forensic analysis.
From the small snippet you've posted it looks like the 14 threads are reasonably long-lived, doing multiple things over their lifetime so I suspect that this is not the problem actually, but there isn't enough info in your post to make a definitive call on this.
If your threads are all doing work and you have more threads active than processors, then you are going to spend time context switching.
If you have a dual-core processor, don't expect great performance with more than 4 active working threads.
So starting 14 threads that are all doing work is a bad idea unless you have a processor that can manage this. Physical processor architecture and feature set have a big impact on this. My point is that a thread pool will help manage the context switching, but starting 14 busy threads at once is always going to hurt performance... you might even get faster results from simply executing the work sequentially. Obviously that is a big statement and so is probably not strictly true, but you get the gist.
Hence the use of a thread pool, along with a profiler, to figure out the optimum number of threads to make available to the thread pool.
In most situations where people are using a thread pool, a lot of the threads are doing nothing most of the time, or a thread is sleeping/blocking while some slow operation or external dependency awaits a response.
Consider using an async pattern so that you can get progress information out of your threads.
On a dual-core processor I would be hesitant about using more than 3 threads if they are all working 100% of the time.
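As a hedged sketch of that direction, and assuming nothing about the real tasks beyond the while(true)/Sleep(5000) shape shown in the question, each periodic task could be driven by a timer so it only occupies a pool thread while it is actually doing work:

using System;
using System.Threading;

public class ManagedTasks : IDisposable
{
    private readonly Timer doStuffTimer;

    public ManagedTasks()
    {
        // The callback runs on a thread-pool thread every 5 seconds;
        // no dedicated thread sits idle in between runs.
        doStuffTimer = new Timer(_ => DoSomeStuffOnce(), null,
                                 TimeSpan.Zero, TimeSpan.FromSeconds(5));
        // Create further timers for the other periodic tasks.....
    }

    private void DoSomeStuffOnce()
    {
        // Body of the old while(true) loop, minus the Thread.Sleep.
    }

    public void Dispose() => doStuffTimer.Dispose();
}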

Are multithreaded apps bound to a single core?

I'm running a .NET remoting application built using .NET 2.0. It is a console app, although I removed the [STAThread] on Main.
The TCP channel I'm using uses a ThreadPool in the background.
I've been told that when running on a dual-core box, under heavy load, the application never uses more than 50% of the CPU (although I've seen it at 70% or more on a quad core).
Is there any restriction in terms of multi-core for remoting apps or ThreadPools?
Is it needed to change something in order to make a multithreaded app run on several cores?
Thanks
There shouldn't be.
There are several reasons why you could be seeing this behavior:
Your threads are IO bound.
In that case you won't see a lot of parallelism, because everything will be waiting on the disk. A single disk is inherently sequential.
Your lock granularity is too small
Your app may be spending most of its time obtaining locks, rather than executing your app logic. This can slow things down considerably.
Your lock granularity is too big
If your locking is not granular enough, your other threads may spend a lot of time waiting.
You have a lot of lock contention
Your threads might all be trying to lock the same resources at the same time, making them effectively sequential (see the sketch after this list).
You may not be partitioning your threads correctly.
You may be running the wrong things on multiple threads. For example, if you are using one thread per connection, you may not be taking advantage of available parallelism within the task you are running on that thread. Try splitting those tasks up into chunks that can run in parallel.
Your process may not have a lot of available parallelism
You might just be doing stuff that can't really be done in parallel.
I would try and investigate each one to see what the cause is.
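For the lock granularity and contention points above, a rough sketch of what the difference can look like (all names made up): if the expensive work happens while holding a shared lock, the threads are effectively serialized; holding the lock only around the shared-state update restores parallelism.

class ContentionExample
{
    private readonly object gate = new object();
    private long total;

    // Serialized: the expensive work runs inside the shared lock.
    public void HandleRequestCoarse(int[] payload)
    {
        lock (gate)
        {
            total += Crunch(payload);   // other threads wait for the whole computation
        }
    }

    // Parallel-friendly: only the shared-state update is protected.
    public void HandleRequestFine(int[] payload)
    {
        long cost = Crunch(payload);    // runs concurrently on each thread
        lock (gate)
        {
            total += cost;              // short critical section
        }
    }

    private static long Crunch(int[] payload)
    {
        long sum = 0;
        foreach (var v in payload) sum += (long)v * v;
        return sum;
    }
}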
Multithreaded applications will use all of your cores.
I suspect your behavior is due to this statement:
The TCP channel I'm using uses a ThreadPool in the background.
TCP, as well as most socket/file/etc code, tends to use very little CPU. It's spending most of its time waiting, so the CPU usage of your program will probably never spike. Try using the threadpool with heavy computations, and you'll see your processor spike to near 100% CPU usage.
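A quick, purely illustrative way to see that contrast on the thread pool (nothing here comes from the poster's app): queue one CPU-bound work item per core and the process will sit near 100% CPU, which network-bound remoting threads never do.

using System;
using System.Threading;

class CpuSpikeDemo
{
    static void Main()
    {
        int pending = Environment.ProcessorCount;
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            // One CPU-bound work item per core, all on the thread pool.
            for (int i = 0; i < Environment.ProcessorCount; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    double x = 0;
                    for (int n = 0; n < 200000000; n++)
                        x += Math.Sqrt(n);
                    Console.WriteLine(x);
                    if (Interlocked.Decrement(ref pending) == 0)
                        done.Set();
                });
            }
            done.WaitOne(); // keep Main alive until the workers finish
        }
    }
}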
Multi-threaded apps are not required to be bound to a single core. You can check which cores a process is allowed to run on via Process.ProcessorAffinity (or set it per thread via ProcessThread.ProcessorAffinity). I'm not sure what the default behavior is, but you can change it programmatically if you need to.
Here is an example of how to do this (taken directly from TechRepublic)
// Requires: using System.Diagnostics;
Console.WriteLine("Current ProcessorAffinity: {0}",
    Process.GetCurrentProcess().ProcessorAffinity);

// Restrict the process to the second core only (mask = 2, binary 10).
Process.GetCurrentProcess().ProcessorAffinity = (System.IntPtr)2;

Console.WriteLine("Current ProcessorAffinity: {0}",
    Process.GetCurrentProcess().ProcessorAffinity);
And the output:
Current ProcessorAffinity: 3
Current ProcessorAffinity: 2
The code above first shows that the process is running on both cores (an affinity mask of 3 is binary 11, i.e. core 1 and core 2). It then changes the process to use only the second core (mask 2, binary 10) and shows that it is now using only that core. You can read the .NET documentation on ProcessorAffinity to see what the various mask values mean.
