Is using multiple 'while()' loops bad practice? - C#

I am writing two applications that work with each other. One is in C++, the other in C#. As there is streaming data involved, I am using, in multiple places, code such as:
while (true)
{
    // copy streamed data to other places
}
I am noticing that these programs use a lot of CPU and become slow to respond. Is it bad programming to use this kind of loop? Should I be adding a:
Sleep(5);
In each one? Will that help with CPU usage?
Thanks!

Generally, using Thread.Sleep() will also freeze the thread, so if responsiveness is what you're worried about, you shouldn't use it. You should consider moving the streaming methods out of the main (UI) thread.
Also, since you mentioned this is a streaming process, the best practice wouldn't be something like
while (!stream.EndOfStream)
{
    // streaming
}
but rather to use DataReceived events (if available).
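For example, here is a minimal sketch of event-driven reading, assuming the source is a SerialPort (the port name and the handler body are illustrative; a network socket would use the analogous asynchronous callbacks instead of a DataReceived event):

using System.IO.Ports;

var port = new SerialPort("COM1");
port.DataReceived += (sender, e) =>
{
    // runs only when data actually arrives - no busy loop, no Sleep()
    string incoming = port.ReadExisting();
    // copy streamed data to other places
};
port.Open();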

You will probably find that the code is more of the format
while (true)
{
    // WAIT for streamed data to arrive
    // READ data from stream
    // COPY streamed data to other places
    // BREAK when no more data / program is ending
}
which is totally normal. The other common solution is
while (boolean_variable_if_program_is_to_keep_running)
{
    // WAIT for streamed data to arrive
    // READ data from stream
    // COPY streamed data to other places
    // when no more data / program is ending,
    //   set boolean_variable_if_program_is_to_keep_running = FALSE
}
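For instance, a minimal sketch of the second pattern over a Stream (the stream, the buffer size, and Forward are placeholders; the point is that a blocking Read puts the thread to sleep until data arrives instead of spinning):

using System.IO;

private volatile bool keepRunning = true;

private void PumpLoop(Stream stream)
{
    var buffer = new byte[16384];
    while (keepRunning)
    {
        // Read blocks until at least one byte is available,
        // so the loop consumes no CPU while waiting
        int read = stream.Read(buffer, 0, buffer.Length);
        if (read == 0) break;   // end of stream
        Forward(buffer, read);  // copy streamed data to other places
    }
}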

What you really need to avoid (for the health of your CPU) is making your program wait for data in a while(true) loop without using a system threading wait function.
Example 1:
while (true)
{
    if (thereIsSomeDataToStream)
    {
        StreamDataFunction();
    }
}
In this example, if thereIsSomeDataToStream stays false for a while, the CPU will still run at 100% just executing the while loop, even though there is no data to stream. That is wasted CPU, and it slows your computer down.
Example 2:
while (true)
{
    if (thereIsSomeDataToStream)
    {
        StreamDataFunction();
    }
    else
    {
        SystemThreadingWaitFunction();
    }
}
In this example, by contrast, when there is no more data to stream, the thread stops for a time. The operating system uses this free time to execute other threads, and after a while it wakes your thread up, which resumes and loops again to stream any new data. There is little wasted CPU and your computer remains responsive.
To perform the thread waiting procedure, you may have several possibilities:
First, you can use, as you suggested, Sleep(t). This may do the job: Sleep functions logically use the operating system API to idle the current thread. But you have to wait out the whole specified time, even if data arrives in the meantime. So if the waiting time is too short, the CPU will overwork, and if it is too long, your streaming will lag.
Or you can use the operating system API directly, which I would recommend. In a Microsoft environment there are lots of waiting methods, documented under the Microsoft wait functions API. For example, you can create an Event object which signals incoming data to stream. You can signal this event anywhere in your code or from another program. In the while loop you then call a function like the WaitForSingleObject API, which waits for the event to become signalled. The "Using event objects" documentation shows how to do this. I do not know about Linux or other systems, but I am sure you can find the equivalent on the web. I have done it a few times myself; it is not so hard to code. Enjoy! :)
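In C#, the managed counterpart of those Win32 event objects is System.Threading.EventWaitHandle; here is a minimal sketch using AutoResetEvent, reusing the placeholder names from the examples above:

using System.Threading;

static readonly AutoResetEvent dataAvailable = new AutoResetEvent(false);

// wherever data becomes available (another thread, a callback, ...):
//     dataAvailable.Set();

// the streaming loop:
while (true)
{
    dataAvailable.WaitOne();   // thread sleeps here until signalled
    StreamDataFunction();      // placeholder from the examples above
}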

This is expanding on my comment above, but for the sake of completeness I am pasting what I wrote in the comment here (and adding to it).
The general solution to this sort of problem is to wait for arrival of data and process it as it arrives (possibly caching newly arrived data as you're processing previously arrived data). As to the mechanics of the implementation - well, that depends on a concrete example. For example, often in a graphics processing application (games, for instance), the "game loop" is essentially what you describe (without sleeps). This is also encountered in GUI app design. For a design where the app waits for an event before processing, look to typical network client-server design.
As for the slowdown of the CPU, try the following code to observe significant slowdown:
while(true);
versus
while(true) sleep(1);
The reason the first slows things down is that on each cycle the CPU checks whether the condition in the loop evaluates to true (in this case, whether true == true) and does nothing else. In the second example, the CPU checks true == true and then sleeps for 1 ms, which is vastly longer than the time the check takes, freeing the CPU to do other things.
Now, as for your example, presumably processing the data inside the loop takes significantly more CPU cycles than checking true == true. So adding a sleep statement will not help; it will only worsen the situation.
You should leave it to the kernel to decide when to interrupt your process to dedicate CPU cycles to other things.
Of course, take what I wrote above with a grain of salt (especially the last paragraph), as it paints a very simplistic picture not taking into account your specific situation, which may benefit from one design versus another (I don't know, because you haven't provided any details).

Related

C# - Live text feed from one thread to another

In a thread "A", I want to read a very long file, and as that happens, I want to send each new line read to another thread "B", which would do -something- to them.
Basically, I don't want to wait for the file-loading to finish before I start processing the lines.
(I definitely want 2 threads and communication between them; I've never done this before and I wanna learn)
So, how do I go about doing this?
Thread A should wait for thread B to finish processing the "current line" before thread A sends another line to thread B. But that won't be efficient; so how about a buffer in thread B (to catch the lines)?
Also, please give an example of what methods I have to use for this cross thread communication since I haven't found/seen any useful examples.
Thank you.
First of all, it's not clear that two threads will necessarily be useful here. A single thread reading one line at a time (which is pretty easy with StreamReader) and processing each line as you go might perform at least as well. File reads are buffered, and the OS can read ahead of your code requesting data, in which case most of your reads will either complete immediately because the next line has already been read off disk in advance by the OS, or both of your threads will have to wait because the data isn't there on disk. (And having 2 threads sat waiting for the disk doesn't make things happen any faster than having 1 thread sat waiting.) The only possible benefit is that you avoid dead time by getting the next read underway before you finish processing the previous one, but the OS will often do that for you in any case. So the benefits of multithreading will be marginal at best here.
However, since you say you're doing this as a learning exercise, that may not be a problem...
I'd use a BlockingCollection<string> as the mechanism for passing data from one thread to another. (As long as you're using .NET 4 or later. And if not...I suggest you move to .NET 4 - it will simplify this task considerably.) You'll read a line from the file and put it into the collection from one thread:
string nextLine = myFileReader.ReadLine();
myBlockingCollection.Add(nextLine);
And then some other thread can retrieve lines from that:
while (true)
{
    string lineToProcess = myBlockingCollection.Take();
    ProcessLine(lineToProcess);
}
That'll let the reading thread run through the file just as fast as the disk will let it, while the processing thread processes data at whatever rate it can. The Take method simply sits and waits if your processing thread gets ahead of the file reading thread.
One problem with this is that your reading thread might get way ahead if the file is large and your processing is slow - your program might attempt to read gigabytes of data from a file while having only processed the first few kilobytes. There's not much point reading data way ahead of processing it - you really only want to read a little in advance. You could use BlockingCollection<T>'s bounded capacity (set via its constructor) to throttle things - if you set that to some number, then the call to Add will block if the collection already holds that many lines, and your reading thread won't proceed until the processing loop takes its next line.
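A minimal sketch of that throttling, reusing the names above (the capacity of 100 is arbitrary):

using System.Collections.Concurrent;

var myBlockingCollection = new BlockingCollection<string>(boundedCapacity: 100);

// reading thread
string nextLine;
while ((nextLine = myFileReader.ReadLine()) != null)
{
    myBlockingCollection.Add(nextLine);   // blocks once 100 lines are queued
}
myBlockingCollection.CompleteAdding();    // lets the consuming loop finish

// processing thread
foreach (string lineToProcess in myBlockingCollection.GetConsumingEnumerable())
{
    ProcessLine(lineToProcess);
}

Using GetConsumingEnumerable rather than the bare while (true) / Take() loop also gives the processing thread a clean way to stop once CompleteAdding has been called.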
It would be interesting to compare performance of a program using your two-threaded technique against one that simply reads lines out of a file and processes them in a loop on a single thread. You would be able to see what, if any, benefit you get from a multithreaded approach here.
Incidentally, if your processing is very CPU intensive, you could use a variation on this theme to have multiple processing threads (and still a single file-reading thread), because BlockingCollection<T> is perfectly happy to have numerous consumers all reading out of the collection. Of course, if the order in which you finish processing the lines of the file matters, that won't be an option, because although you'll start processing in the right order, if you have multiple processing threads, it's possible that one thread might overtake another one, causing out-of-order completion.

Using multithreading for loop

I'm new to threading and want to do something similar to this question:
Speed up loop using multithreading in C# (Question)
However, I'm not sure if that solution is the best one for me as I want them to keep running and never finish. (I'm also using .net 3.5 rather than 2.0 as for that question.)
I want to do something like this:
foreach (Agent agent in AgentList)
{
    // I want to start a new thread for each of these
    agent.DoProcessLoop();
}

---

public void DoProcessLoop()
{
    while (true)
    {
        // do the processing
        // this is things like: check folder for new files, update database
        // if new files found
    }
}
Would a ThreadPool be the best solution or is there something that suits this better?
Update: Thanks for all the great answers! I thought I'd explain the use case in more detail. A number of agents can upload files to a folder. Each agent has their own folder which they can upload assets to (CSV files, images, PDFs). Our service (it's meant to be a Windows service running on the server they upload their assets to; rest assured I'll be coming back with questions about Windows services sometime soon :)) will keep checking every agent's folder for new assets, and if there are any, the database will be updated and static HTML pages created for some of them. As it could take a while for them to upload everything, and we want them to see their uploaded changes pretty much straight away, we thought a thread per agent would be a good idea, as no agent then needs to wait for anyone else to finish (and we have multiple processors, so we wanted to use their full capacity). Hope this explains it!
Thanks,
Annelie
Given the specific usage you describe (watching for files), I'd suggest you use a FileSystemWatcher to determine when there are new files, and then fire off a thread with the thread pool to process the files until there are no more to process - at which point the thread exits.
This should reduce i/o (since you're not constantly polling the disk), reduce CPU usage (since the constant looping of multiple threads polling the disk would use cycles), and reduce the number of threads you have running at any one time (assuming there aren't constant modifications being made to the file system).
You might want to open and read the files only on the main thread and pass the data to the worker threads (if possible), to limit i/o to a single thread.
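A minimal sketch of that arrangement (the folder path and ProcessNewAsset are illustrative):

using System.IO;
using System.Threading;

var watcher = new FileSystemWatcher(@"C:\agents\agent1");
watcher.Created += (sender, e) =>
{
    // hand the new file to a pool thread; the thread returns to the pool when done
    ThreadPool.QueueUserWorkItem(_ => ProcessNewAsset(e.FullPath));
};
watcher.EnableRaisingEvents = true;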
I believe the Parallel Extensions make this possible:
Parallel.ForEach
http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.foreach.aspx
http://blogs.msdn.com/pfxteam/
One issue with the ThreadPool would be that if the pool happens to be smaller than the number of Agents you would like to have, the ones you try to start later may never execute; you could also starve everything else in your app domain that uses the thread pool. You're probably better off not going down that route.
You definitely don't want to use the ThreadPool for this purpose. ThreadPool threads are not meant to be used for long-running tasks ("infinite" counts as long-running), since that would obviously tie up resources meant to be shared.
For your application, it would probably be better to create one thread (not from the ThreadPool) and in that thread execute your while loop, inside of which you iterate through your Agents collection and perform the processing for each one. In the while loop you should also use a Thread.Sleep call so you don't max out the processor (there are better ways of executing code periodically, but Thread.Sleep will work for your purposes).
Finally, you need to include some way for the while loop to exit when your program terminates.
Update: Finally finally, multi-threading does not automatically speed up slow-running code. Nine women can't make a baby in one month.
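A rough sketch of the single-thread design described above (keepRunning is a volatile bool flag, and DoProcess is a hypothetical single-pass variant of DoProcessLoop):

var worker = new Thread(() =>
{
    while (keepRunning)
    {
        foreach (Agent agent in AgentList)
        {
            agent.DoProcess();   // one pass per agent, not an infinite loop
        }
        Thread.Sleep(1000);      // don't max out the processor between passes
    }
});
worker.IsBackground = true;      // lets the process exit even while looping
worker.Start();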
A thread pool is useful when you expect threads to be coming into and out of existence fairly regularly, not for a predefined set number of threads.
Hmm... as Ragoczy points out, it's better to use FileSystemWatcher to monitor the files. However, since you have additional operations, you may think in terms of multithreading.
But beware: no matter how many processors you have, there is a limit to their capacity. You may not want to create as many threads as there are concurrent users, for the simple reason that your number of agents can increase.
Until you upgrade to .NET 4, the ThreadPool might be your best option. You may also want to use a Semaphore and an AutoResetEvent to control the number of concurrent threads. If you're talking about long-running work, then the overhead of starting up and managing your own threads is low and the solution is more elegant; it also allows you to use WorkerThread.Join() to make sure all worker threads are complete before you resume execution.

C# Multithreading File IO (Reading)

We have a situation where our application needs to process a series of files and rather than perform this function synchronously, we would like to employ multi-threading to have the workload split amongst different threads.
Each item of work is:
1. Open a file for read only
2. Process the data in the file
3. Write the processed data to a Dictionary
We would like to perform each file's work on a new thread.
Is this possible, and would we be better off using the ThreadPool or spawning new threads, keeping in mind that each item of "work" only takes 30ms, but it's possible that hundreds of files will need to be processed?
Any ideas to make this more efficient are appreciated.
EDIT: At the moment we are making use of the ThreadPool to handle this. If we have 500 files to process we cycle through the files and allocate each "unit of processing work" to the threadpool using QueueUserWorkItem.
Is it suitable to make use of the threadpool for this?
I would suggest you use ThreadPool.QueueUserWorkItem(...); with it, the threads are managed by the system and the .NET framework. The chances of messing things up with your own thread pool are much higher, so I would recommend you use the ThreadPool provided by .NET.
It's very easy to use:
ThreadPool.QueueUserWorkItem(new WaitCallback(YourMethod), ParameterToBeUsedByMethod);

void YourMethod(object o)
{
    // your code here...
}
For more reading please follow the link http://msdn.microsoft.com/en-us/library/3dasc8as%28VS.80%29.aspx
Hope this helps.
I suggest you have a finite number of threads (say 4) and split the work evenly among them, i.e. if you have 400 files to process, give each thread 100 files. You then spawn the threads, pass each its share of the work, and let them run until they have finished.
You only have a certain amount of I/O bandwidth, so having too many threads will not provide any benefit; also remember that creating a thread takes a small amount of time.
Instead of having to deal with threads or manage thread pools directly I would suggest using a higher-level library like Parallel Extensions (PEX):
// AsParallel() goes on the source, so the reads and the processing
// actually run in parallel rather than during a sequential enumeration
var filesContent = from file in enumerableOfFilesToProcess.AsParallel()
                   select new
                   {
                       File = file,
                       Content = File.ReadAllText(file)
                   };

var processedContent = from content in filesContent
                       select new
                       {
                           content.File,
                           ProcessedContent = ProcessContent(content.Content)
                       };

var dictionary = processedContent
    .ToDictionary(c => c.File, c => c.ProcessedContent);
PEX will handle thread management according to the available cores and load while you concentrate on the business logic at hand (wow, that sounded like a commercial!)
PEX is part of the .NET Framework 4.0, but a back-port to 3.5 is also available as part of the Reactive Framework.
I suggest using the CCR (Concurrency and Coordination Runtime); it will handle the low-level threading details for you. As for your strategy, one thread per work item may not be the best approach, depending on how you attempt to write to the dictionary, because you may create heavy contention since dictionaries aren't thread-safe.
Here's some sample code using the CCR, an Interleave would work nicely here:
Arbiter.Activate(dispatcherQueue, Arbiter.Interleave(
    new TeardownReceiverGroup(Arbiter.Receive<bool>(
        false, mainPort, new Handler<bool>(Teardown))),
    new ExclusiveReceiverGroup(Arbiter.Receive<object>(
        true, mainPort, new Handler<object>(WriteData))),
    new ConcurrentReceiverGroup(Arbiter.Receive<string>(
        true, mainPort, new Handler<string>(ReadAndProcessData)))));
public void WriteData(object data)
{
    // write data to the dictionary
    // this code is never executed in parallel, so no synchronization is needed
}

public void ReadAndProcessData(string s)
{
    // this code gets scheduled to be executed in parallel
    // the CCR takes care of the task scheduling for you
}

public void Teardown(bool b)
{
    // clean up when all tasks are done
}
In the long run, I think you'll be happier if you manage your own threads. This will let you control how many are running and make it easy to report status.
Build a worker class that does the processing and give it a callback routine to return results and status.
For each file, create a worker instance and a thread to run it. Put the thread in a Queue.
Peel threads off the queue up to the maximum you want to run simultaneously. As each thread completes, go get another one. Adjust the maximum and measure throughput. I prefer to use a Dictionary to hold running threads, keyed by their ManagedThreadId.
To stop early, just clear the queue.
Use locking around your thread collections to preserve your sanity.
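A rough sketch of that bookkeeping (maxConcurrent and the worker's completion callback are hypothetical):

using System.Collections.Generic;
using System.Threading;

object sync = new object();
Queue<Thread> queued = new Queue<Thread>();
Dictionary<int, Thread> running = new Dictionary<int, Thread>();  // keyed by ManagedThreadId

void StartNext()
{
    lock (sync)
    {
        if (queued.Count == 0 || running.Count >= maxConcurrent) return;
        Thread t = queued.Dequeue();
        running.Add(t.ManagedThreadId, t);
        t.Start();
    }
}

// the worker's callback reports results, then frees a slot:
void OnWorkerFinished(Thread t)
{
    lock (sync) { running.Remove(t.ManagedThreadId); }
    StartNext();   // keep the pipeline full
}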
Use ThreadPool.QueueUserWorkItem to execute each independent task. Definitely don't create hundreds of threads. That is likely to cause major headaches.
The general rule is to use the ThreadPool when you don't need to worry about when the threads finish (or use mutexes to track them), or about stopping the threads.
So do you need to worry about when the work is done? If not, the ThreadPool is the best option. If you want to track the overall progress and stop threads, then your own collection of threads is best.
ThreadPool is generally more efficient if you are re-using threads. This question will give you a more detailed discussion.
Hth
Using the ThreadPool for each individual task is definitely a bad idea. From my experience this tends to hurt performance more than help it. The first reason is that a considerable amount of overhead is required just to allocate a task for the ThreadPool to execute. By default, each application is assigned its own ThreadPool, initialized with a capacity of ~100 threads. When you are executing 400 operations in parallel, it does not take long to fill the queue with requests, and now you have ~100 threads all competing for CPU cycles. Yes, the .NET framework does a great job of throttling and prioritizing the queue; however, I have found that the ThreadPool is best left for long-running operations that probably won't occur very often (loading a configuration file, or random web requests). Using the ThreadPool to fire off a few operations at random is much more efficient than using it to execute hundreds of requests at once. Given the current information, the best course of action would be something similar to this:
Create a System.Threading.Thread (or use a SINGLE ThreadPool thread) with a queue that the application can post requests to
Use the FileStream's BeginRead and BeginWrite methods to perform the IO operations. This causes the .NET framework to use native APIs to thread and execute the IO (IOCP).
This gives you two advantages: first, your requests still get processed in parallel while the operating system manages file system access and threading; second, because the bottleneck on the vast majority of systems is the HDD, you can implement custom priority sorting and throttling on your request thread to gain greater control over resource usage.
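For illustration, a minimal sketch of an overlapped read with BeginRead (the path and the follow-up processing are placeholders; the final true asks for IOCP-backed asynchronous IO):

using System.IO;

var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                        FileShare.Read, 4096, true /* useAsync */);
var buffer = new byte[4096];
fs.BeginRead(buffer, 0, buffer.Length, ar =>
{
    int bytesRead = fs.EndRead(ar);   // completes on an IO completion thread
    // process the bytesRead bytes, then issue the next BeginRead...
}, null);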
I am currently writing a similar application, and using this method is both efficient and fast... Without any threading or throttling, my application was using only 10-15% CPU, which can be acceptable for some operations depending on the processing involved; however, it made my PC as slow as if an application were using 80%+ of the CPU. This was the file system access. The ThreadPool and IOCP functions do not care if they are bogging the PC down, so don't get confused; they are optimized for performance, even if that performance means your HDD is squealing like a pig.
The only problem I have had is that memory usage ran a little high (50+ MB) during the testing phase with approximately 35 streams open at once. I am currently working on a solution similar to the MSDN recommendation for SocketAsyncEventArgs, using a pool to allow x number of requests to operate simultaneously, which ultimately led me to this forum post.
Hope this helps somebody with their decision making in the future :)

Threading cost - minimum execution time when threads would add speed

I am working on a C# application that works with an array. It walks through the array (meaning that at any one time only a narrow part of the array is used). I am considering adding threads to make it perform faster (it runs on a dual-core computer). The problem is that I do not know if that would actually help, because threads cost something, and this cost could easily be more than the parallel gain... So how do I determine whether threading would help?
Try writing some benchmarks that mimic, as closely as possible, the real-world conditions in which your software will actually be used.
Test and time the single-threaded version. Test and time the multi-threaded version. Compare the two sets of results.
If your application is CPU bound (i.e. it isn't spending time trying to read files or waiting for data from a device) and there is little to no sharing of live data between the threads (data being altered; if it's read-only, it's fine), then you can pretty much increase the speed by 50-75% by adding another thread (as long as it still remains CPU bound, of course).
The main overhead in multithreading comes from two places.
Creation and initialization of the thread. Creating a thread requires quite a few resources to be allocated and involves swaps between kernel and user mode; this is expensive, but it is a one-off cost per thread, so you can pretty much ignore it if the thread runs for any reasonable amount of time. The best way to mitigate this cost is to use a thread pool, which keeps threads on hand so they do not need to be recreated.
Handling synchronization of data. If one thread is reading from data that another is writing, bad things will generally happen (worse if both are changing it). This requires you to lock your data before altering it so that no thread reads a half-written value. These locks are generally quite slow as well. To mitigate this problem, you need to design your data layout so that the threads don't need to read or write the same data as much as possible. If you do need a lot of these locks, it can become slower than the single-threaded option.
In short, if you are doing something that requires the CPUs to share a lot of data, then multi-threading it will be slower; and if the program isn't CPU bound, there will be little or no difference (it could be a lot slower depending on what it is bound to, e.g. a CD or hard drive). If your program matches these conditions, then it will PROBABLY be worthwhile to add another thread (though the only way to be certain is profiling).
One more little note: you should only create as many CPU-bound threads as you have physical cores (threads that idle most of the time, such as a GUI message pump thread, can be ignored for this condition).
P.S. You can reduce the cost of locking data by using a methodology called "lock-free programming", though this is something that should really only be attempted by people with a lot of experience with multi-threading and a clear understanding of their target architecture (including how the cache is treated and the memory bus).
I agree with Luke's answer. Benchmark it, it's the only way to be sure.
I can also give a prediction of the results: the fastest version will be the one where the number of threads matches the number of cores, EXCEPT if the array is very small and each thread would process just a few items, in which case the setup/teardown times might exceed the processing itself. How few is too few depends on what you do. Again - benchmark.
I'd advise finding out a "minimum number of items for a thread to be useful". Then, when you are deciding how many threads to spawn (or take from a pool), check how many cores the computer has and how many items there are. Spawn as many threads as possible, but no more than the computer has cores, and not so many that each thread would have fewer than the minimum number of items to process.
For example if the minimum number of items is, say, 1000; and the computer has 4 cores; and your list contains 2500 items, you would spawn just 2 threads, because more threads would be inefficient (each would process less than 1000 items).
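That rule of thumb is easy to code up (a sketch; minItemsPerThread is the tuning knob you would have to measure):

int minItemsPerThread = 1000;
int threadCount = Math.Min(Environment.ProcessorCount,
                           Math.Max(1, items.Count / minItemsPerThread));
// e.g. 2500 items / 1000 minimum = 2 threads on a 4-core machine, as above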
Making a step by step list for Luke's idea:
Make a single threaded test app
Download Sysinternals Process Explorer and run it
Run your test app and find it on the process list (remember to run it as a release build outside of Visual Studio)
Double click the process and select the Performance Graph tab
Observe the CPU time used by your process
If the CPU time sits flat at 50% for more than a few seconds, you can probably speed your overall process up using threads (assuming the bunch of stuff Mr Peters referred to holds true)
(However, the best you can do on a dual-core machine is to halve the time it takes to run. If your process only takes 4 seconds, it might not be worth getting it to run in 2 seconds.)
Using the task parallel library / Rx provides a friendlier interface than System.Threading.ThreadPool, which might make your world a bit easier.
Imho you are missing one item, which is that it is not always about execution time. There is:
The problem of keeping a UI operational during an operation. Even if the UI is "dormant", a non-responsive message pump makes a bad impression.
The possibility of using a thread pool so you don't have to start/stop threads all the time. I use thread pools very extensively, and various parts of the applications keep them busy.
Anyhow, ignoring my point 1 - where you may go multi-threaded without speeding things up, in order to keep your UI responsive - I would say it is always faster when you can actually either split up the work (so you can keep more than one core busy) or offload it for other reasons.

What type of bug causes a program to slowly use more processor power and all of a sudden go to 100%?

I was hoping to get some good ideas as to what might be causing a really nasty bug.
This is a program which is transmitting data over a socket, and also receives messages back.
I could explain lots more, but I don't think this will help here.
I'm just searching for hypothetical problems which can cause the following behaviour:
program runs
processor time slowly accumulates (till around 60%)
all of a sudden (could be after 30 but also after 60 seconds) the processor time shoots to 100% and the program halts completely
In my syslog it always ends on one line with a memory allocation (something similar to: myArray = new byte[16384]) in the same thread.
Now here is the weird part: if I set a breakpoint anywhere... execution immediately stops on that line. So just the act of setting a breakpoint made the thread continue (it wasn't running before, since I saw no more log output).
I was thinking 'deadlock', but that would not cause 100% processor usage. If anything, the opposite. Also, setting a breakpoint would not cause a deadlock to end.
Does anyone have a theoretical suggestion as to what kind of 'construct' might cause this effect?
(apart from 'bad programming') ;^)
thanks
EDIT:
I just noticed: by setting the send speed lower, the problem shows itself much later than expected. I would have expected it after around the same number of packets sent... but no, the number of packets sent before the same problem appears is much higher this way.
I can only guess, but the opposite of a deadlock would be a livelock. This means two threads that react to each other in an infinite loop. That could also plausibly be interrupted by setting a breakpoint, as livelocks generally depend on exact timing.
Other than this, I once had a similar issue with the Java nio classes, which are non-blocking, where the main thread ended up busy-waiting for input. Although there the CPU usage rose instantaneously, not just after a few seconds.
Maybe if you can provide a bit more information like the programming language or even a code sample there might be more ideas.
Anything that involves repetitive processing (looping, recursion, etc) can cause this.
What's interesting is that if the program is doing anything that normally slows down performance (such as disk IO or network access), then the processor is less likely to peg. The processor pegs at 100% only if the program is actually using the processor. If the program has to wait for disk or network IO, then the processor thread has to wait.
So in the code, I'd check for loops where a lot of work is going on, but little IO.
Also, if you're debugging in Visual Studio, you can hit the pause button to stop the app at the current point and see what your code is doing when it locks.
I'm guessing an infinite loop in the socket receiving end. It keeps trying to allocate a buffer to receive the data that is coming in, but the buffer is never big enough so it keeps allocating. But it is really hard to say without code. I'd advise you to add more logging and/or single step the code if you don't want to share it.
Without seeing code, I can only say that your program is probably in an infinite loop and the call that should block is not blocking as you expect.
You can also try profiling (the EQATEC free profiler, for example). It will show you how much of your processor time was spent in each method.
I found the answer... quite silly actually (it always is). The thread that is sending/receiving messages does so via asynchronous methods. However, the asynchronous callbacks never seemed able to come through while the same thread was also pumping messages into the send queue. I noticed that when I put in a Thread.Sleep every second, all asynchronous callbacks were pumped through. So the solution, it turns out, is to have one thread for sending/receiving, done purely with async calls, and another one for filling the send queue.
Why this resulted in 100% processor usage is beyond me. But it does explain why setting a breakpoint allowed the async callbacks to catch up.
Because the program fails while allocating memory, I would guess that the incoming message rate is too high for it to handle.
I imagine that your program has some thread whose only job is to listen to the socket and hand the incoming messages to other threads to handle (maybe you have some thread pool there). Imagine a situation where the incoming message rate is too high, so all the worker threads are busy handling previous messages, and the thread listening to the socket has to put the new messages into some kind of queue until one of the worker threads is free to handle them. This queue will grow and grow until you have no more memory, so that could be the reason for your program's termination.
Now, about the 100% CPU: I guess the thread that uses the CPU must be one of the worker threads, which would explain why the listening thread is queuing the messages. The reason could be a corrupted message or something else that causes it to run into an infinite loop. "frenetisch applaudierend" suggested in his answer that two or more of the worker threads could cause a "livelock" on each other, which could also be the reason for your problem.
