This sort of thing is already on Stack Overflow, here. However, it doesn't quite fix the problem.
I'm working on a probability simulator that's multi-threaded. Say I want to figure out the result of one billion "dice rolls." If this runs for more than a minute, the following message pops up:
The CLR has been unable to transition from COM context 0x29eac30 to COM context 0x29eab78 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
My code includes the following wait loop for this situation:
bool done = false;
while (!done)
{
    done = true;
    foreach (WorkerObject w in workers)
    {
        done = done && w.done;
    }
}
I'm using Task objects for the multi-threading, and there is a Task.Wait() method, but that is a thread-blocking call that amounts to the same thing as this loop. Putting Thread.Sleep(500) inside the loop gives the same result.
Now, I've Googled around for how to actually fix this, and normally the "fix" is to disable the Managed Debugging Assistant warning and just keep going, but how would I actually fix the underlying problem?
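For reference, a common way out of this busy-wait is to await the worker Tasks instead of polling a done flag. The sketch below assumes a Windows Forms UI and that each worker can be expressed as a Task; the names (SimulatorForm, RollDice, the counts) are placeholders, not the original code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;

public class SimulatorForm : Form
{
    private readonly Button runButton = new Button { Text = "Run" };

    public SimulatorForm()
    {
        Controls.Add(runButton);
        runButton.Click += RunButton_Click;
    }

    private async void RunButton_Click(object sender, EventArgs e)
    {
        const int workerCount = 8;
        const int rollsPerWorker = 125000000;   // 8 workers * 125M = 1 billion rolls

        var tasks = new List<Task<long>>();
        for (int i = 0; i < workerCount; i++)
            tasks.Add(Task.Run(() => RollDice(rollsPerWorker)));

        // Awaiting releases the UI (STA) thread so it keeps pumping messages,
        // instead of spinning in a busy-wait or blocking inside Task.Wait().
        long[] totals = await Task.WhenAll(tasks);

        // Execution resumes on the UI thread here; it is safe to touch controls.
        Text = "Total: " + totals.Sum();
    }

    private static long RollDice(int rolls)
    {
        var rng = new Random();
        long total = 0;
        for (int i = 0; i < rolls; i++)
            total += rng.Next(1, 7);   // one six-sided die roll
        return total;
    }
}

Because nothing blocks the STA thread for 60 seconds, the ContextSwitchDeadlock assistant has nothing to complain about.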
I'm on .NET Framework 4.5.2, updating some old code, and trying to figure out how to manage the number of running threads for a specific ID. This is the code:
foreach (ThreadGroup thread_group in threadGroups)
{
    if (ImportQueues.TryGetValue(thread_group.ThreadGroupID, out BlockingCollection<ImportFrequency> configuration))
    {
        if (configuration.Count > thread_group.ThreadCount)
        {
            // increase number of threads running for this group
        }
        else if (configuration.Count < thread_group.ThreadCount)
        {
            // decrease number of threads running for this group
        }
    }
    else // Spin up the initial threads
    {
        ImportQueues.Add(thread_group.ThreadGroupID, new BlockingCollection<ImportFrequency>());
        for (int x = 0; x < thread_group.ThreadCount; x++)
        {
            Thread import_thread = new Thread(new ParameterizedThreadStart(ProcessImportQueue)) { IsBackground = true };
            import_thread.Start(ImportQueues[thread_group.ThreadGroupID]);
        }
    }
}
Essentially, we're running thread_group.ThreadCount threads for each group's blocking collection, which is continually updated. Additionally, thread_group can be updated to change ThreadCount. If the change increases the thread count, we spin up more threads to process the blocking collection; if it decreases, we wait for the surplus threads to finish while the others continue running.
Is this possible with this paradigm or do I need to find a better way to manage threads?
Edit: One solution I tried is CancellationTokens. When I start up a thread, I pass in a model that contains the context as well as a CancellationToken, and that model is saved to a global variable. If we need to decrease the number of threads, I go through however many need to be stopped and cancel their tokens, which breaks the infinite loop in each of those threads so they exit.
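A rough sketch of that cancellation approach follows, assuming the worker loops over the group's BlockingCollection. The type and member names (ImportWorkerContext, Cancellation) are assumptions, and ImportFrequency is only an empty placeholder standing in for the real item type.

using System;
using System.Collections.Concurrent;
using System.Threading;

public class ImportFrequency { }   // placeholder for the original item type

public class ImportWorkerContext
{
    public BlockingCollection<ImportFrequency> Queue;
    public CancellationTokenSource Cancellation = new CancellationTokenSource();
}

public static class ImportWorker
{
    public static void ProcessImportQueue(object state)
    {
        var context = (ImportWorkerContext)state;
        CancellationToken token = context.Cancellation.Token;
        try
        {
            // GetConsumingEnumerable observes the token, so a thread that is
            // blocked on an empty queue wakes up and exits when cancelled
            // instead of waiting forever.
            foreach (ImportFrequency item in context.Queue.GetConsumingEnumerable(token))
            {
                // process item ...
            }
        }
        catch (OperationCanceledException)
        {
            // Expected when this group's thread count is reduced.
        }
    }
}

To shrink a group, cancel the CancellationTokenSource kept for each surplus thread; the blocked enumeration wakes up and that thread exits on its own while the rest keep consuming.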
As I understand the example code, it creates N queues, with M threads processing each queue.
There are a few potential problems with this. I say potential because there might be special circumstances that motivate such a solution, but Sturgeon's law suggests it might just be a misguided attempt to achieve some unclear goal.
It is likely that N*M is larger than the number of hardware threads available, resulting either in idle threads that consume resources, or in unnecessary context switching if all threads are loaded.
It is unclear if the queues are in any way different. If they are identical, why not just use a single queue?
While adding threads is fairly simple, decreasing threads can be a bit complicated. Threads should only be canceled cooperatively, so you need some way to signal that the thread should be cancelled. But if the thread is blocked waiting for items in the queue you might run into situations where you have asked threads to stop, but they have not yet done so, and you are starting threads faster than threads are actually freed, eventually running out of threads. This can probably be solved, but it is unclear if the current solution does this.
'Import' sounds like something involving IO, and IO usually does not scale well with the number of threads. In the worst case, parallel IO can even harm performance.
It is unclear what the actual processing is doing. In my experience, a CPU core can do an incredible amount of work if properly utilized. When things are 'slow' it usually means that the code is doing some kind of unnecessary work. Some junior programmers reach for multi-threading as the first and last step when tackling performance problems, when profiling the code and improving its efficiency can often give much greater improvements.
For the simple case of a queue that needs to be processed in parallel, I would consider a Parallel.ForEach loop over the blocking collection, using ParallelExtensionsExtras for a better partitioner. That should automatically balance the number of threads against the available hardware and CPU usage patterns for maximum throughput.
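A minimal sketch of that idea, under the assumption that all items can share one queue. It uses the built-in GetConsumingEnumerable to stay self-contained; the default partitioner buffers items in chunks, which is exactly why the GetConsumingPartitioner extension from ParallelExtensionsExtras is the better fit in practice.

using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class QueueProcessor
{
    public static void Drain(BlockingCollection<int> queue)
    {
        // Runs until CompleteAdding() is called on the queue and it drains.
        Parallel.ForEach(queue.GetConsumingEnumerable(), item =>
        {
            // process item ...
        });
    }
}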
But I would not touch working code without a good understanding of what it is trying to do and why it should be changed to something better, and preferably only after sufficient automated testing is in place to help avoid introducing new bugs. Changing code just because it is old is a terrible idea.
I like threads, but I can't find any information on the Internet (maybe I just don't know how to search for it properly) about what happens in the background when, for example, thread starvation occurs. How does the OS handle it? Does my thread wait in line for its chance to be created in the thread pool, or is it killed after some amount of time when it can't be created? Or is it something else entirely?
Another question: why should I care about the thread context? From what I know, by calling ConfigureAwait(false) I am not waiting to resume on that context, which can be read as "I don't care about that context". Also, from what I know, calling ConfigureAwait(false) protects me against deadlocks.
The last question: when a deadlock happens, what is going on in the background? Does the main thread try to capture that context, or is it something else?
I think deadlock is used in the wrong context here. Deadlock describes a situation where two or more threads are blocked forever, waiting for each other (e.g. via the lock statement).
Thread starvation: Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress.
Does my thread wait in line for its chance to be created in the thread pool, or is it killed after some amount of time when it can't be created?
I don't really understand this question.
Your thread doesn't wait to be created. Your function creates the thread.
Another question: why should I care about the thread context?
From MSDN: "A context is an ordered sequence of properties that define an environment for the objects resident inside it. Contexts get created during the activation process for objects that are configured to require certain automatic services, such as synchronization, transactions, just-in-time activation, security, and so on. Multiple objects can live inside a context."
Here is a similar question: Why would I bother to use Task.ConfigureAwait(continueOnCapturedContext: false)?
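The typical scenario looks roughly like the sketch below (the names are illustrative, not from the question): a library method awaits with ConfigureAwait(false) so that its continuation does not need the caller's captured context, which is what lets a caller that blocks with .Result or .Wait() on a UI thread avoid the classic deadlock.

using System.Net.Http;
using System.Threading.Tasks;

public static class Downloader
{
    public static async Task<string> GetPageAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // Without ConfigureAwait(false), the continuation tries to resume
            // on the captured context (e.g. the UI thread). If the caller is
            // blocking that thread with .Result or .Wait(), the continuation
            // can never run and the two end up waiting on each other.
            string body = await client.GetStringAsync(url).ConfigureAwait(false);
            return body.Trim();
        }
    }
}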
The last question: when a deadlock happens, what is going on in the background?
The two threads keep running (waiting for each other) until you close the application.
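For illustration, a deliberately broken sketch of exactly that situation: two threads each hold one lock and wait forever for the other's. The names are made up for the example.

using System.Threading;

public static class DeadlockDemo
{
    private static readonly object lockA = new object();
    private static readonly object lockB = new object();

    public static void Run()
    {
        var t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);   // give t2 time to take lockB
                lock (lockB) { }     // waits forever: t2 holds lockB
            }
        });
        var t2 = new Thread(() =>
        {
            lock (lockB)
            {
                Thread.Sleep(100);
                lock (lockA) { }     // waits forever: t1 holds lockA
            }
        });
        t1.Start();
        t2.Start();
        // Neither thread ever finishes; the process only ends when it is killed.
    }
}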
I'm getting a ContextSwitchDeadlock when adding a CustomXMLPart after performing a Documents.Add(). The same code was working fine last week.
I understand that ContextSwitchDeadlock is caused by a long running operation (this is not a duplicate question).
Why would the CustomXMLParts.Add() command result in a long running operation?
Has anyone come across this, and any ideas on how to troubleshoot it?
ContextSwitchDeadlock occurred. Message: Managed Debugging Assistant 'ContextSwitchDeadlock' has detected a problem in 'C:\Program Files (x86)\Microsoft Office\root\Office16\WINWORD.EXE'. Additional information: The CLR has been unable to transition from COM context 0xfdb520 to COM context 0xfdb468 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
The context switch deadlock can show up when debugging a long running process. For the most part you can ignore it if the process is expected to be long running.
See this previous Stack Overflow answer.
I was trying to use AutoResetEvent.WaitOne() on a GUI thread, hoping that it would not block the GUI thread completely and would allow it to keep pumping Windows messages while it waits for a signal (similar to Thread.Wait()). I learned that wasn't a correct assumption.
So I am looking for a way, while on the GUI thread, to wait for another thread to finish running (similar to using AutoResetEvent.WaitOne()) but keep the message pump flowing. (Please, no DoEvents().)
I guess the short question is: is there a WAIT in .NET that pumps Windows messages (especially the "Paint" event) while it is waiting?
The CLR has a special workaround for calling WaitOne() on an STA thread. That is illegal: a thread that supports apartment threading is not allowed to block, since that is very prone to causing deadlock. The CLR will in fact start taking over the duty of pumping messages, roughly similar to MsgWaitForMultipleObjects. Very roughly.
While this works to keep the basic plumbing of a UI thread alive, like painting, this is not ever something you want to do if you can avoid it. Quirky stuff can happen, not quite unlike using Application.DoEvents(), although the CLR code does try to minimize the damage that the re-entrancy can cause.
How they do this is a big secret, by the way; it was intentionally omitted from the SSCLI20 distribution, which is otherwise a very complete copy of the CLR code. Chris Brumme blogged about it, pretty impenetrable in his usual way, waving his hands without giving away any good secrets. The code itself is quite resistant to reverse engineering; it is large. The only common sign of it is finding it in a stack trace from a programmer who has a very hard problem to solve.
In other words, you are invoking a code path that's highly undocumented and poorly understood. Don't do it. It is fundamentally unnecessary: you can always invoke back to the UI thread and continue with the code that you've now got after the WaitOne() call. That's safe.
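One way to follow that advice, assuming Windows Forms and .NET 4.5 or later (the type and member names here are illustrative): do the blocking WaitOne on a thread-pool thread and let await bring the continuation back to the UI thread.

using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public class WorkerForm : Form
{
    private readonly AutoResetEvent workDone = new AutoResetEvent(false);

    private async void StartWork()
    {
        // The blocking WaitOne happens on a thread-pool thread, so the UI
        // thread keeps pumping paint and input messages in the meantime.
        await Task.Run(() => workDone.WaitOne());

        // This line runs back on the UI thread, i.e. "the code you've now
        // got after the WaitOne() call".
        Text = "Worker finished";
    }
}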
I don't know of any such thing. AFAIK (as far as I know), you'll need to start a second background task or thread which waits for the AutoResetEvent to be signalled; in your UI thread, once you launch that task or second thread, exit your method so the UI thread is free to do its job of 'message pumping', as you put it.
So in this scenario, the background thread would then finish your processing once its AutoResetEvent is signalled.
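The same idea without async/await might look roughly like this: a background thread does the waiting and then hands the result back to the UI thread with BeginInvoke. Again, the names are only illustrative.

using System;
using System.Threading;
using System.Windows.Forms;

public class LegacyWorkerForm : Form
{
    private readonly AutoResetEvent workDone = new AutoResetEvent(false);

    private void StartWork()
    {
        var waiter = new Thread(() =>
        {
            workDone.WaitOne();   // blocks only this background thread
            // finish the processing here, then report back to the UI:
            BeginInvoke(new Action(() => Text = "Worker finished"));
        }) { IsBackground = true };
        waiter.Start();
        // The UI thread returns immediately and keeps pumping messages.
    }
}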
While debugging an application I am getting the following error.
The CLR has been unable to transition from COM context 0x3b2d70 to COM context 0x3b2ee0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
Why does the system throw this error?
I found the solution: uncheck ContextSwitchDeadlock under Debug -> Exceptions -> Managed Debugging Assistants.
After unchecking ContextSwitchDeadlock, the error is no longer thrown.