C# inter-thread communication

I want two threads to collaborate: a producer and a consumer.
The consumer is rather slow, and the producer is very fast and works in bursts.
For example, the consumer can process one message per 20 seconds, while the producer can produce 10 messages in one second, but only does so about once in a long while, so the consumer can catch up.
I want something like:
Stream commonStream;
AutoResetEvent commonLock;

void Producer()
{
    while (true)
    {
        magic.BlockUntilMagicAvailable();
        byte[] buffer = magic.Produce();
        commonStream.Write(buffer, 0, buffer.Length);
        commonLock.Set();
    }
}

void Consumer()
{
    while (true)
    {
        commonLock.WaitOne();
        MagicalObject o = binarySerializer.Deserialize(commonStream);
        DoSomething(o);
    }
}

If you have .NET 4.0 or higher you can do it this way, using a BlockingCollection:
int maxBufferCap = 500;
BlockingCollection<MagicalObject> Collection
    = new BlockingCollection<MagicalObject>(maxBufferCap);

void Producer()
{
    while (magic.HasMoreMagic)
    {
        this.Collection.Add(magic.ProduceMagic());
    }
    this.Collection.CompleteAdding();
}

void Consumer()
{
    foreach (MagicalObject magicalObject in this.Collection.GetConsumingEnumerable())
    {
        DoSomething(magicalObject);
    }
}
The foreach line will sleep if there is no data in the buffer, and it will automatically wake itself up when something is added to the collection.
The reason I set the max buffer is that if your producer is much faster than the consumer, you may end up consuming a lot of memory as more and more objects are put into the collection. By setting a max buffer size when you create the blocking collection, the Add call on the producer will block once the buffer is full, until an item has been removed from the collection by the consumer.
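To make that backpressure concrete, here is a minimal sketch (the capacity of 2, the values, and the timeout are arbitrary illustrative choices; assumes System.Collections.Concurrent):

BlockingCollection<int> bounded = new BlockingCollection<int>(2);
bounded.Add(1);
bounded.Add(2);
// The collection is now full: a plain Add(3) would block right here until
// a consumer calls Take. TryAdd with a timeout shows the same backpressure
// without blocking forever:
bool accepted = bounded.TryAdd(3, TimeSpan.FromMilliseconds(100)); // false while full
bounded.Take();                   // a consumer removes an item...
accepted = bounded.TryAdd(3, 0);  // ...and now the add succeeds (true)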
Another bonus of the BlockingCollection class is that it can have as many producers and consumers as you want; it does not need to be a 1:1 ratio. If DoSomething supports it, you could have a foreach loop per core of the computer (or even use Parallel.ForEach with the consuming enumerable as the data source):
void ConsumersInParallel()
{
    // This assumes the method signature of DoSomething is one of the following:
    //   Action<MagicalObject>
    //   Action<MagicalObject, ParallelLoopState>
    //   Action<MagicalObject, ParallelLoopState, long>
    Parallel.ForEach(this.Collection.GetConsumingEnumerable(), DoSomething);
}

I would read the following articles; they describe your problem. Basically, you're not getting the right isolation for your unit of work.
Link
Link

You can get what you want using a queue and a timer. The producer adds values to the queue and starts the consumer timer. The consumer timer's Elapsed event (which runs on a ThreadPool thread) stops the timer and loops through the queue until it's empty, then exits (no unnecessary polling). The producer can add to the queue while the consumer is still running.
System.Timers.Timer consumerTimer;
Queue<byte[]> queue = new Queue<byte[]>();

void Producer()
{
    consumerTimer = new System.Timers.Timer(1000);
    consumerTimer.Elapsed += new System.Timers.ElapsedEventHandler(consumerTimer_Elapsed);
    while (true)
    {
        magic.BlockUntilMagicAvailable();
        lock (queue)
        {
            queue.Enqueue(magic.Produce());
            if (!consumerTimer.Enabled)
            {
                consumerTimer.Start();
            }
        }
    }
}

void consumerTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    while (true)
    {
        consumerTimer.Stop();
        lock (queue)
        {
            if (queue.Count > 0)
            {
                DoSomething(queue.Dequeue());
            }
            else
            {
                break;
            }
        }
    }
}

I use mutexes. The idea is that both run in different threads. The Consumer thread starts blocked on a mutex, where it will sit indefinitely until released by the Producer. It then processes data in parallel, leaving the Producer to continue. The Consumer re-blocks when complete.
(Code to start the threads, and other housekeeping, has been omitted for brevity.)
// Pre-create mutex owned by the Producer thread, then start the Consumer thread.
Mutex mutex = new Mutex(true);
Queue<T> queue = new Queue<T>();

void Producer_AddData(T data)
{
    lock (queue)
    {
        queue.Enqueue(data);
    }
    // Release the mutex to wake the Consumer, then immediately re-acquire it:
    mutex.ReleaseMutex();
    mutex.WaitOne();
}

void Consumer()
{
    while (true)
    {
        // Wait indefinitely on the mutex
        mutex.WaitOne();
        mutex.ReleaseMutex();
        T data;
        lock (queue)
        {
            data = queue.Dequeue();
        }
        DoSomething(data);
    }
}
This slows the Producer by a very few milliseconds while it waits for the Consumer to wake and release the mutex, if you can live with that.

Related

Make sure ProcessingQueue.Count is correct in a multithreaded application

I have a Windows service that processes XML files from a linked-list queue. Files are added to the queue by a FileSystemWatcher Created event.
namespace XMLFTP
{
    public class XML_Processor : ServiceBase
    {
        public string s_folder { get; set; }

        public XML_Processor(string folder)
        {
            s_folder = folder;
        }

        Thread worker;
        FileSystemWatcher watcher;
        DirectoryInfo my_Folder;
        public static AutoResetEvent ResetEvent { get; set; }
        bool running;

        public bool Start()
        {
            my_Folder = new DirectoryInfo(s_folder);
            bool success = true;
            running = true;
            worker = new Thread(new ThreadStart(ServiceLoop));
            worker.Start();
            // add files to queue by FileSystemWatcher event
            return (success);
        }

        public bool Stop()
        {
            try
            {
                running = false;
                watcher.EnableRaisingEvents = false;
                worker.Join(ServiceSettings.ThreadJoinTimeOut);
            }
            catch (Exception ex)
            {
                return (false);
            }
            return (true);
        }

        public void ServiceLoop()
        {
            string fileName;
            while (running)
            {
                Thread.Sleep(2000);
                if (ProcessingQueue.Count > 0)
                {
                    // process file and write info to DB.
                }
            }
        }

        void watcher_Created(object sender, FileSystemEventArgs e)
        {
            switch (e.ChangeType)
            {
                case WatcherChangeTypes.Created:
                    // add file to queue
                    break;
            }
        }
    }
}
There might be a thread-safety problem:
while (running)
{
    Thread.Sleep(2000);
    if (ProcessingQueue.Count > 0)
    {
        // process file and write info to DB.
    }
}
As the access to ProcessingQueue.Count isn't protected by a lock, the Count can change if a different thread alters the queue. As a result, the process-file part may fail. That's also the case if you implement the Count property as:
public static int Count
{
    get { lock (syncRoot) return _files.Count; }
}
as the lock is released too early.
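For contrast, a shape that closes that window combines the check and the dequeue in one critical section (a sketch reusing the _files/syncRoot names from above):

public static bool TryDequeue(out string fileName)
{
    lock (syncRoot)
    {
        // The emptiness check and the Dequeue happen under the same lock,
        // so no other thread can drain the queue between the two steps.
        if (_files.Count > 0)
        {
            fileName = _files.Dequeue();
            return true;
        }
        fileName = null;
        return false;
    }
}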
My two questions:
How do I make ProcessingQueue.Count correct?
If I use the .NET Framework 4.5 BlockingCollection approach, with sample code like this:
class ConsumingEnumerableDemo
{
    // Demonstrates:
    //   BlockingCollection<T>.Add()
    //   BlockingCollection<T>.CompleteAdding()
    //   BlockingCollection<T>.GetConsumingEnumerable()
    public static void BC_GetConsumingEnumerable()
    {
        using (BlockingCollection<int> bc = new BlockingCollection<int>())
        {
            // Kick off a producer task
            Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 10; i++)
                {
                    bc.Add(i);
                    Thread.Sleep(100); // sleep 100 ms between adds
                }
                // Need to do this to keep foreach below from hanging
                bc.CompleteAdding();
            });

            // Now consume the blocking collection with foreach.
            // Use bc.GetConsumingEnumerable() instead of just bc because the
            // former will block waiting for completion and the latter will
            // simply take a snapshot of the current state of the underlying collection.
            foreach (var item in bc.GetConsumingEnumerable())
            {
                Console.WriteLine(item);
            }
        }
    }
}
The sample uses a constant 10 in the loop condition; how do I apply my queue's dynamic count to it?
With BlockingCollection, you don't have to know the count. The consumer knows to keep processing items until the queue is empty and IsCompleted is true. So you could have this:
var producer = Task.Factory.StartNew(() =>
{
    // Add 10 items to the queue
    foreach (var i in Enumerable.Range(0, 10))
        queue.Add(i);

    // Wait one minute
    Thread.Sleep(TimeSpan.FromMinutes(1.0));

    // Add 10 more items to the queue
    foreach (var i in Enumerable.Range(10, 10))
        queue.Add(i);

    // mark the queue as complete for adding
    queue.CompleteAdding();
});

// consumer
foreach (var item in queue.GetConsumingEnumerable())
{
    Console.WriteLine(item);
}
The consumer will output the first 10 items, which empties the queue. But because the producer hasn't called CompleteAdding, the consumer will continue to block on the queue. It will catch the next 10 items that the producer writes. Then, the queue is empty and IsCompleted == true, so the consumer ends (GetConsumingEnumerable gets to the end of the queue).
You can check Count at any time you like, but the value you get is just a snapshot. By the time you evaluate it, it's likely that either the producer or the consumer will have modified the queue and changed the count. But it shouldn't matter. As long as you don't call CompleteAdding, the consumer will continue to wait for an item.
The number of items that the producer writes doesn't have to be constant. For example, in my Simple Multithreading blog post, I show a producer that reads a file and writes the items to a BlockingCollection that's serviced by a consumer. The producer and consumer run concurrently, and everything keeps going until the producer reaches the end of the file.
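That shape looks roughly like this (a sketch, not the blog post's code; the file name and the bound are placeholder values):

BlockingCollection<string> lines = new BlockingCollection<string>(100);

// Producer: read the file and add lines until EOF, then mark complete.
Task.Factory.StartNew(() =>
{
    foreach (string line in File.ReadLines("input.txt")) // placeholder path
        lines.Add(line);
    lines.CompleteAdding(); // lets the consumer's foreach end
});

// Consumer: runs concurrently; ends when the queue drains after CompleteAdding.
foreach (string line in lines.GetConsumingEnumerable())
    Console.WriteLine(line);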

Prevent Race Condition in Efficient Consumer Producer model

What I am trying to achieve is a consumer/producer method. There can be many producers but only one consumer. There cannot be a dedicated consumer, because of scalability, so the idea is to have a producer start the consuming process if there is data to be consumed and there is currently no active consumer.
1. Many threads can be producing messages. (Asynchronous)
2. Only one thread can be consuming messages. (Synchronous)
3. We should only have a consumer in process if there is data to be consumed
4. A continuous consumer that waits for data would not be efficient if we add many of these classes.
In my example I have a set of methods that send data. Multiple threads can write data via Write(), but only one of those threads will loop and send data in SendNewData(). The reason that only one loop can write data is that the order of the data must be preserved, and with an AsyncWrite() out of our control we can only guarantee order by running one AsyncWrite() at a time.
The problem I have is that if a thread calls Write() (produces), it will queue the data and check Interlocked.CompareExchange to see if there is a consumer. If it sees that another thread is already in the consuming loop, it will assume that this consumer will send the data. This is a problem if that looping consumer thread is at "Race Point A", since that consumer has already checked that there are no more messages to send and is about to shut down the consuming process.
Is there a way to prevent this race condition without locking a large part of the code? The real scenario has many queues and is a bit more complex than this.
In the real code, List<INetworkSerializable> is actually a byte[] BufferPool. I used List for the example to make this block easier to read.
With 1000s of these classes active at once, I cannot afford to have SendNewData looping continuously on a dedicated thread. The looping thread should only be active if there is data to send.
public void Write(INetworkSerializable messageToSend)
{
    Queue.Enqueue(messageToSend);

    // Check if there are any current consumers. If not, then we should instigate the consuming.
    if (Interlocked.CompareExchange(ref RunningWrites, 1, 0) == 0)
    {
        // We are now the thread that consumes and sends data
        SendNewData();
    }
}

// Only one thread should be looping here to keep consuming and sending data synchronously.
private void SendNewData()
{
    INetworkSerializable dataToSend;
    List<INetworkSerializable> dataToSendList = new List<INetworkSerializable>();
    while (true)
    {
        if (!Queue.TryDequeue(out dataToSend))
        {
            // Race Point A
            if (dataToSendList.Count == 0)
            {
                // All data is sent; return so that another thread can take responsibility.
                Interlocked.Decrement(ref RunningWrites);
                return;
            }
            // We have data in the list to send but nothing more to consume, so send what we have.
            break;
        }
        dataToSendList.Add(dataToSend);
    }

    // Async callback is WriteAsyncCallback()
    WriteAsync(dataToSendList);
}

// Callback after WriteAsync() has sent the data.
private void WriteAsyncCallback()
{
    // Data was written to the sockets; now loop back for more data
    SendNewData();
}
It sounds like you would be better off with the producer-consumer pattern that is easily implemented with the BlockingCollection:
var toSend = new BlockingCollection<something>();

// producers
toSend.Add(something);

// when all producers are done
toSend.CompleteAdding();

// consumer -- this won't end until CompleteAdding is called
foreach (var item in toSend.GetConsumingEnumerable())
    Send(item);
To address the comment about knowing when to call CompleteAdding: I would launch the 1000s of producers as tasks, wait for all those tasks to complete (Task.WaitAll), and then call CompleteAdding. There are good overloads taking CancellationTokens that would give you better control, if needed.
Also, TPL is pretty good about scheduling off blocked threads.
More complete code:
var toSend = new BlockingCollection<int>();
Parallel.Invoke(() => Produce(toSend), () => Consume(toSend));
...

private static void Consume(BlockingCollection<int> toSend)
{
    foreach (var value in toSend.GetConsumingEnumerable())
    {
        Console.WriteLine("Sending {0}", value);
    }
}

private static void Produce(BlockingCollection<int> toSend)
{
    Action<int> generateToSend = toSend.Add;
    var producers = Enumerable.Range(0, 1000)
                              .Select(n => new Task(value => generateToSend((int)value), n))
                              .ToArray();
    foreach (var p in producers)
    {
        p.Start();
    }
    Task.WaitAll(producers);
    toSend.CompleteAdding();
}
Check this variant; there are some descriptive comments in the code. Note also how WriteAsyncCallback loops back into SendNewData.
private int _pendingMessages;
private int _consuming;

public void Write(INetworkSerializable messageToSend)
{
    Interlocked.Increment(ref _pendingMessages);
    Queue.Enqueue(messageToSend);

    // Check if there is anyone consuming messages;
    // if not, we will have to become the consumer and process our own message,
    // and any further messages, until we have drained the queue.
    if (Interlocked.CompareExchange(ref _consuming, 1, 0) == 0)
    {
        // We are now the thread that consumes and sends data
        SendNewData();
    }
}

// Only one thread should be looping here to keep consuming and sending data synchronously.
private void SendNewData()
{
    INetworkSerializable dataToSend;
    var dataToSendList = new List<INetworkSerializable>();
    int messagesLeft = 1;
    do
    {
        if (!Queue.TryDequeue(out dataToSend))
        {
            // There is one possibility that we get here while _pendingMessages != 0:
            // another thread has just increased _pendingMessages from 0 to 1,
            // but hasn't put its message into the queue yet.
            if (dataToSendList.Count == 0)
            {
                if (_pendingMessages == 0)
                {
                    _consuming = 0;
                    // and if we have no data this means we are safe to exit the current thread.
                    return;
                }
                continue; // spin until the in-flight message lands in the queue
            }
            // We have data in the list to send but nothing more to consume, so send what we have.
            break;
        }
        dataToSendList.Add(dataToSend);
        messagesLeft = Interlocked.Decrement(ref _pendingMessages);
    }
    while (messagesLeft > 0);

    // Async callback is WriteAsyncCallback()
    WriteAsync(dataToSendList);
}

private void WriteAsync(List<INetworkSerializable> dataToSendList)
{
    // some code
}

// Callback after WriteAsync() has sent the data.
private void WriteAsyncCallback()
{
    // ...
    SendNewData();
}
The race condition can be prevented by adding the following, double-checking the queue after we have declared that we are no longer the consumer:
if (dataToSendList.Count == 0)
{
    // Declare that we are no longer the consumer.
    Interlocked.Decrement(ref RunningWrites);

    // Double-check the queue to prevent race condition A
    if (Queue.IsEmpty)
        return;

    // Race condition A occurred: there is data again.
    // Let's try to become the consumer.
    if (Interlocked.CompareExchange(ref RunningWrites, 1, 0) == 0)
        continue;

    // Another thread has nominated itself as the consumer. Our job is done.
    return;
}
break;

C# threading in a Windows service creating an issue

I have an issue with an email-sending Windows service. The service starts after every three-minute delay, gets the messages to send from the DB, and starts sending them. Here is what the code looks like:
MessageFilesHandler MFHObj = new MessageFilesHandler();
List<Broadcostmsg> imidiateMsgs = Manager.GetImidiateBroadCastMsgs(conString);
if (imidiateMsgs.Count > 0)
{
    // WriteToFileImi(strLog);
    Thread imMsgThread = new Thread(new ParameterizedThreadStart(MFHObj.SendImidiatBroadcast));
    imMsgThread.IsBackground = true;
    imMsgThread.Start(imidiateMsgs);
}
This sends messages to large lists, and it takes long to finish sending to a larger list. The problem occurs when one message is still sending and the service gets a new message to send: the previous send is halted and the new send starts, even though I am using threads; each time the service gets a message to send, it initiates a new thread.
Can you please help me find where I am making a mistake in the code?
I think you are using your code inside a loop which WAITS for new messages. Did you manage those waits? Let's see:
while (imidiateMsgs.Count == 0)
{
    // Wait for new message
}
// Now you have a new message here
// Make a new thread to process the message
There are different methods for that wait; I suggest using blocking queues:
In a public area:

BlockingCollection<Broadcostmsg> imidiateMsgs = new BlockingCollection<Broadcostmsg>();

In your consumer (the thread which processes messages):

Broadcostmsg msg = imidiateMsgs.Take(); // this will block waiting for a new message
// Now you have a new message here
// Make a new thread to process the message

In your producer (the thread which generates messages):

imidiateMsgs.Add(newMsg);
And you should use the thread pool for answering messages rather than initializing a new thread each time.
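For example (a sketch based on the question's code; SendImidiatBroadcast already has the void(object) shape that WaitCallback expects):

// Instead of creating a new Thread per batch:
//   new Thread(new ParameterizedThreadStart(MFHObj.SendImidiatBroadcast)).Start(imidiateMsgs);
// queue the same method on the thread pool:
ThreadPool.QueueUserWorkItem(new WaitCallback(MFHObj.SendImidiatBroadcast), imidiateMsgs);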
It looks like the requirement is to build a consumer/producer queue, in which the producer keeps adding messages to a list and the consumer picks items from that list and does some work with them.
My only worry is that you are creating a new Thread each time to send an email rather than picking threads from a thread pool. If you keep creating more and more threads, the performance of your application will degrade due to the overhead created by context switching.
If you are using .NET Framework 4.0, the solution becomes pretty easy. You could use System.Collections.Concurrent.ConcurrentQueue for enqueuing and dequeuing your items. It's thread-safe, so no lock objects are required. Use Tasks to process your messages.
BlockingCollection takes an IProducerConsumerCollection in its constructor, or it will use a ConcurrentQueue by default if you call its parameterless constructor.
So, to enqueue your messages:
// define a blocking collection
var blockingCollection = new BlockingCollection<string>();
int count = 0;

// Producer
Task.Factory.StartNew(() =>
{
    while (true)
    {
        blockingCollection.Add("value" + count);
        count++;
    }
});

// consumer
// GetConsumingEnumerable waits until it finds an item to work on;
// it's similar to the while(true) loop we would put inside a consumer
Task.Factory.StartNew(() =>
{
    foreach (string value in blockingCollection.GetConsumingEnumerable())
    {
        Console.WriteLine("Worker 1: " + value);
    }
});
UPDATE
Since you are using Framework 3.5, I suggest you have a look at Joseph Albahari's implementation of a consumer/producer queue. It's one of the best you will find.
Taking the code directly from the link above:
public class PCQueue
{
    readonly object _locker = new object();
    Thread[] _workers;
    Queue<Action> _itemQ = new Queue<Action>();

    public PCQueue(int workerCount)
    {
        _workers = new Thread[workerCount];

        // Create and start a separate thread for each worker
        for (int i = 0; i < workerCount; i++)
            (_workers[i] = new Thread(Consume)).Start();
    }

    public void Shutdown(bool waitForWorkers)
    {
        // Enqueue one null item per worker to make each exit.
        foreach (Thread worker in _workers)
            EnqueueItem(null);

        // Wait for workers to finish
        if (waitForWorkers)
            foreach (Thread worker in _workers)
                worker.Join();
    }

    public void EnqueueItem(Action item)
    {
        lock (_locker)
        {
            _itemQ.Enqueue(item);   // We must pulse because we're
            Monitor.Pulse(_locker); // changing a blocking condition.
        }
    }

    void Consume()
    {
        while (true) // Keep consuming until
        {            // told otherwise.
            Action item;
            lock (_locker)
            {
                while (_itemQ.Count == 0) Monitor.Wait(_locker);
                item = _itemQ.Dequeue();
            }
            if (item == null) return; // This signals our exit.
            item();                   // Execute item.
        }
    }
}
The advantage of this approach is that you can control the number of threads you create for optimized performance. With the thread-pool approach, although it's safe, you cannot control the number of threads that may be created simultaneously.

Producer/consumer of a web crawler using queue with unknown size

I need to crawl parent web pages and their child web pages, and I followed the producer/consumer concept from http://www.albahari.com/threading/part4.aspx#%5FWait%5Fand%5FPulse. Also, I used 5 threads which enqueue and dequeue links.
Any recommendations on how I can end/join all the threads once all of them have finished processing the queue, given that the length of the queue is unknown?
Below is the idea of how I coded it.
static void Main(string[] args)
{
    // enqueue parent links here
    ...
    // then start crawling via threading
    ...
}

public void Crawl()
{
    // dequeue
    // get child links
    // enqueue child links
}
If all of your threads are idle (i.e. waiting on the queue) and the queue is empty, then you're done.
An easy way to handle that is to have the threads use a timeout when they're trying to access the queue, something like BlockingCollection<T>.TryTake. Whenever TryTake times out, the thread updates a field to say how long it's been idle:
while (!queue.TryTake(out item, 5000, token))
{
    if (token.IsCancellationRequested)
        break;
    // here, update idle counter
}
You can then have a timer that executes every 15 seconds or so to check all of the threads' idle counters. If all threads have been idle for some period of time (a minute, perhaps), then the timer can set the cancellation token. That will kill all the threads. Your main program, too, can be monitoring the cancellation token.
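A sketch of that watchdog (names like lastActiveTicks and the one-minute cutoff are illustrative; queue is a BlockingCollection<string> and cts a shared CancellationTokenSource):

BlockingCollection<string> queue = new BlockingCollection<string>();
CancellationTokenSource cts = new CancellationTokenSource();
long[] lastActiveTicks; // one slot per worker, stamped at start and on every dequeue

void Worker(int id)
{
    string url;
    while (!cts.Token.IsCancellationRequested)
    {
        if (queue.TryTake(out url, 5000))
        {
            Interlocked.Exchange(ref lastActiveTicks[id], DateTime.UtcNow.Ticks);
            // crawl url, enqueue children...
        }
    }
}

// Runs every 15 seconds on a timer: cancel once every worker has been idle a minute.
// Wire-up: new System.Threading.Timer(CheckIdle, null, 15000, 15000);
void CheckIdle(object state)
{
    long cutoff = DateTime.UtcNow.AddMinutes(-1).Ticks;
    for (int i = 0; i < lastActiveTicks.Length; i++)
        if (Interlocked.Read(ref lastActiveTicks[i]) > cutoff)
            return; // someone worked recently; keep going
    cts.Cancel();   // all idle: stop the crawl
}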
You can do this without BlockingCollection and cancellation, by the way. You'll just have to create your own cancellation signaling mechanism, and if you're using a lock on the queue, you can replace the lock syntax with Monitor.TryEnter, etc.
There are several other ways to handle this, although they would require some major restructuring of your program.
You can enqueue a dummy token at the end and have the threads exit when they encounter this token. Like:
public void Crawl()
{
    int report = 0;
    while (true)
    {
        if (!(queue.Count == 0))
        {
            if (report > 0) Interlocked.Decrement(ref report);
            // dequeue
            if (token == "TERMINATION")
                return;
            else
                // enqueue child links
        }
        else
        {
            if (report == num_threads) // all threads have signaled empty queue
                queue.Enqueue("TERMINATION");
            else
                Interlocked.Increment(ref report); // this thread has found the queue empty
        }
    }
}
Of course, I have omitted the locks for enqueue/dequeue operations.
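For reference, a lock-free shape of the same idea (a sketch using ConcurrentQueue and a sentinel string; numThreads and the crawling work are placeholders):

ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
int idleThreads = 0;
const string Termination = "TERMINATION";

public void Crawl()
{
    bool idle = false;
    while (true)
    {
        string url;
        if (queue.TryDequeue(out url))
        {
            if (idle) { Interlocked.Decrement(ref idleThreads); idle = false; }
            if (url == Termination)
            {
                queue.Enqueue(Termination); // put it back so the other workers see it too
                return;
            }
            // get child links, enqueue them...
        }
        else if (!idle)
        {
            idle = true;
            // the last thread to go idle plants the sentinel
            if (Interlocked.Increment(ref idleThreads) == numThreads)
                queue.Enqueue(Termination);
        }
    }
}

Like the snippet above, this spins while the queue is empty; a short Thread.Sleep in the idle branch would tame the CPU.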
The threads could signal that they have ended their work by raising an event, for example, or by calling a delegate.
static void Main(string[] args)
{
    // enqueue parent links here
    ...
    // then start crawling via threading
    ...
}

public void X()
{
    // block the threads until all of them are here
}

public void Crawl(Action x)
{
    // dequeue
    // get child links
    // enqueue child links
    // call x()
}
There is really no need to handle the producer/consumer machinery manually if you are willing to use the Task Parallel Library. When you create tasks with the AttachedToParent option, the child tasks link with the parent task in such a manner that the parent will not complete until the child tasks have completed.
class Program
{
    static void Main(string[] args)
    {
        var task = CrawlAsync("http://stackoverflow.com");
        task.Wait();
    }

    static Task CrawlAsync(string url)
    {
        return Task.Factory.StartNew(
            () =>
            {
                string[] children = ExtractChildren(url);
                foreach (string child in children)
                {
                    CrawlAsync(child);
                }
                ProcessUrl(url);
            }, TaskCreationOptions.AttachedToParent);
    }

    static string[] ExtractChildren(string root)
    {
        // Return all child urls here.
    }

    static void ProcessUrl(string url)
    {
        // Process the url here.
    }
}
You could remove some of the explicit task creation logic by using Parallel.ForEach.
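For instance, a sketch of the Parallel.ForEach shape (using the same ExtractChildren/ProcessUrl placeholders):

static void Crawl(string url)
{
    // Parallel.ForEach blocks until all iterations (and their nested
    // Crawl calls) have finished, so no explicit task tracking is needed.
    Parallel.ForEach(ExtractChildren(url), child => Crawl(child));
    ProcessUrl(url);
}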

Concurrent collections eating too much cpu without Thread.Sleep

What would be the correct usage of either BlockingCollection or ConcurrentQueue so you can freely dequeue items without burning out half or more of your CPU using a thread?
I was running some tests using 2 threads, and unless I had a Thread.Sleep of at least 50~100 ms, it would always hit at least 50% of my CPU.
Here is a fictional example:
private void _DequeueItem()
{
    object o = null;
    while (socket.Connected)
    {
        while (!listOfQueueItems.IsEmpty)
        {
            if (listOfQueueItems.TryDequeue(out o))
            {
                // use the data
            }
        }
    }
}
With the above example I would have to set a Thread.Sleep so the CPU doesn't blow up.
Note: I have also tried it without the while loop for the IsEmpty check; the result was the same.
It is not because of the BlockingCollection or ConcurrentQueue, but the while loop:
while (socket.Connected)
{
    while (!listOfQueueItems.IsEmpty)
    { /*code*/ }
}

Of course it will drive the CPU up, because if the queue is empty, then the while loop is just like:

while (true) ;

which in turn will eat the CPU resources.
This is not a good way of using ConcurrentQueue; you should use an AutoResetEvent with it, so that whenever an item is added you will be notified.
Example:
private ConcurrentQueue<Data> _queue = new ConcurrentQueue<Data>();
private AutoResetEvent _queueNotifier = new AutoResetEvent(false);
//at the producer:
_queue.Enqueue(new Data());
_queueNotifier.Set();
//at the consumer:
while (true)//or some condition
{
_queueNotifier.WaitOne();//here we will block until receive signal notification.
Data data;
if (_queue.TryDequeue(out data))
{
//handle the data
}
}
For good usage of the BlockingCollection you should use GetConsumingEnumerable() to wait for items to be added, like:
// declare the buffer
private BlockingCollection<Data> _buffer = new BlockingCollection<Data>(new ConcurrentQueue<Data>());

// at the producer method:
_buffer.Add(new Data());

// at the consumer:
// it will block here automatically, waiting for new items to be added,
// and it will not drive the CPU up
foreach (Data data in _buffer.GetConsumingEnumerable())
{
    // handle the data here.
}
You really want to be using the BlockingCollection class in this case. It is designed to block until an item appears in the queue. A collection of this nature is often referred to as a blocking queue. This particular implementation is safe for multiple producers and multiple consumers. That is something that is surprisingly difficult to get right if you tried implementing it yourself. Here is what your code would look like if you used BlockingCollection.
private void _DequeueItem()
{
    while (socket.Connected)
    {
        object o = listOfQueueItems.Take();
        // use the data
    }
}
The Take method blocks automatically if the queue is empty. It blocks in a manner that puts the thread in the WaitSleepJoin state so that it will not consume CPU resources. The neat thing about BlockingCollection is that it also uses low-lock strategies to increase performance. This means that Take will check whether there is an item in the queue, and if not it will briefly perform a spin-wait to prevent a context switch of the thread. If the queue is still empty, it will put the thread to sleep. This means that BlockingCollection has some of the performance benefits that ConcurrentQueue provides in regard to concurrent execution.
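One caveat worth hedging: Take throws InvalidOperationException once CompleteAdding has been called and the queue drains, so a shutdown-friendly variant of the loop above could use TryTake (a sketch):

private void _DequeueItem()
{
    object o;
    while (socket.Connected)
    {
        // TryTake returns false instead of throwing when nothing arrives
        // in time or the collection has been marked complete and drained.
        if (listOfQueueItems.TryTake(out o, 1000))
        {
            // use the data
        }
        else if (listOfQueueItems.IsCompleted)
        {
            break; // producer called CompleteAdding and the queue is empty
        }
    }
}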
You can call Thread.Sleep() only when the queue is empty:
private void DequeueItem()
{
    object o = null;
    while (socket.Connected)
    {
        if (listOfQueueItems.IsEmpty)
        {
            Thread.Sleep(50);
        }
        else if (listOfQueueItems.TryDequeue(out o))
        {
            // use the data
        }
    }
}
Otherwise you should consider using events.
