I am trying to poll an API as fast and as efficiently as possible to get market data. The API returns data for up to batchSize markets per request, and it allows at most 3 concurrent requests (any more and it throws errors).
I may be requesting data from many more than batchSize different markets.
I continuously loop through all of the markets, requesting the data in batches, one batch per thread and 3 threads at any time.
The total number of markets (and hence batches) can change at any time.
I'm using the following code:
private static object lockObj = new object();

private void PollMarkets()
{
    const int NumberOfConcurrentRequests = 3;
    for (int i = 0; i < NumberOfConcurrentRequests; i++)
    {
        int batch = 0;
        Task.Factory.StartNew(async () =>
        {
            while (true)
            {
                if (markets.Count > 0)
                {
                    List<string> batchMarketIds;
                    lock (lockObj)
                    {
                        var numBatches = (int)Math.Ceiling((double)markets.Count / batchSize);
                        batchMarketIds = markets.Keys.Skip(batch * batchSize).Take(batchSize).ToList();
                        batch = (batch + 1) % numBatches;
                    }
                    var marketData = await GetMarketData(batchMarketIds);
                    // Do something with marketData
                }
                else
                {
                    await Task.Delay(1000); // wait for some markets to be added.
                }
            }
        });
    }
}
Even though there is a lock in the critical section, each thread starts with batch = 0 (each thread is often polling for duplicate data).
If I change batch to a private volatile field, the above code works as I want it to (with both volatile and the lock).
So for some reason my lock doesn't work? I feel like it's something obvious that I'm missing.
I also believe it is best here to use a lock instead of a volatile field; is that correct?
Thanks
The issue was that you were defining the batch variable inside the for loop. That meant each thread was using its own copy of the variable instead of sharing it.
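For reference, a minimal sketch of that fix applied to the question's own code: batch becomes a field, so all three tasks share one counter under the existing lock.

    private static object lockObj = new object();
    private int batch = 0; // hoisted out of the loop: one counter shared by all tasks

    private void PollMarkets()
    {
        const int NumberOfConcurrentRequests = 3;
        for (int i = 0; i < NumberOfConcurrentRequests; i++)
        {
            Task.Factory.StartNew(async () =>
            {
                while (true)
                {
                    if (markets.Count > 0)
                    {
                        List<string> batchMarketIds;
                        lock (lockObj)
                        {
                            // every task reads and advances the same counter here
                            var numBatches = (int)Math.Ceiling((double)markets.Count / batchSize);
                            batchMarketIds = markets.Keys.Skip(batch * batchSize).Take(batchSize).ToList();
                            batch = (batch + 1) % numBatches;
                        }
                        var marketData = await GetMarketData(batchMarketIds);
                        // Do something with marketData
                    }
                    else
                    {
                        await Task.Delay(1000); // wait for some markets to be added
                    }
                }
            });
        }
    }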
In my opinion, you should use a Queue<> to create a jobs pipeline.
Something like this:
private int batchSize = 10;
private Queue<int> queue = new Queue<int>();

private void AddMarket(params int[] marketIDs)
{
    lock (queue)
    {
        foreach (var marketID in marketIDs)
        {
            queue.Enqueue(marketID);
        }
        if (queue.Count >= batchSize)
        {
            Monitor.Pulse(queue);
        }
    }
}

private void Start()
{
    for (var tid = 0; tid < 3; tid++)
    {
        Task.Run(async () =>
        {
            while (true)
            {
                List<int> toProcess;
                lock (queue)
                {
                    // Wait until a full batch is available.
                    if (queue.Count < batchSize)
                    {
                        Monitor.Wait(queue);
                        continue;
                    }
                    toProcess = new List<int>(batchSize);
                    for (var count = 0; count < batchSize; count++)
                    {
                        toProcess.Add(queue.Dequeue());
                    }
                    // Wake another worker if a full batch is still left.
                    if (queue.Count >= batchSize)
                    {
                        Monitor.Pulse(queue);
                    }
                }
                var marketData = await GetMarketData(toProcess);
            }
        });
    }
}
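Usage would then be along these lines (the market IDs are made up for illustration):

    Start();                                  // spin up the three consumer tasks
    AddMarket(1, 2, 3, 4, 5, 6, 7, 8, 9, 10); // a full batch wakes a waiting worker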
I am implementing a flow-control component that limits the number of requests that can be sent. Every worker thread can send either a single request or a batch of requests, but at any time the total number of pending requests should not exceed a maximum.
I initially wanted to implement this with a SemaphoreSlim:
initialising the semaphore to the maximum request count, so that when a worker thread is going to call the service, it must first acquire enough tokens. However, I found that SemaphoreSlim and Semaphore only allow a thread to decrease the semaphore count by 1, whereas in my case I want to decrease the count by the number of requests that the worker thread carries.
What synchronization primitive should I use here?
Just to clarify, the service supports batch processing, so one thread can send N requests in one service call, but accordingly it should be able to decrease the semaphore's current count by N.
Below is a custom SemaphoreManyFifo class that offers the methods Wait(int acquireCount) and Release(int releaseCount). Its behavior is strictly FIFO. It has quite decent performance (~500,000 operations per second on 8 threads on my PC).
public class SemaphoreManyFifo : IDisposable
{
    private readonly object _locker = new object();
    private readonly Queue<(ManualResetEventSlim, int AcquireCount)> _queue;
    private readonly ThreadLocal<ManualResetEventSlim> _pool;
    private readonly int _maxCount;
    private int _currentCount;

    public int CurrentCount => Volatile.Read(ref _currentCount);

    public SemaphoreManyFifo(int initialCount, int maxCount)
    {
        // Proper arguments validation omitted
        Debug.Assert(initialCount >= 0);
        Debug.Assert(maxCount > 0 && maxCount >= initialCount);
        _queue = new Queue<(ManualResetEventSlim, int)>();
        _pool = new ThreadLocal<ManualResetEventSlim>(
            () => new ManualResetEventSlim(false), trackAllValues: true);
        _currentCount = initialCount;
        _maxCount = maxCount;
    }

    public SemaphoreManyFifo(int initialCount) : this(initialCount, Int32.MaxValue) { }

    public void Wait(int acquireCount)
    {
        Debug.Assert(acquireCount > 0 && acquireCount <= _maxCount);
        ManualResetEventSlim gate;
        lock (_locker)
        {
            Debug.Assert(_currentCount >= 0 && _currentCount <= _maxCount);
            if (acquireCount <= _currentCount && _queue.Count == 0)
            {
                _currentCount -= acquireCount; return; // Fast path
            }
            gate = _pool.Value;
            gate.Reset(); // Important, because the gate is reused
            _queue.Enqueue((gate, acquireCount));
        }
        gate.Wait();
    }

    public void Release(int releaseCount)
    {
        Debug.Assert(releaseCount > 0);
        lock (_locker)
        {
            Debug.Assert(_currentCount >= 0 && _currentCount <= _maxCount);
            if (releaseCount > _maxCount - _currentCount)
                throw new SemaphoreFullException();
            _currentCount += releaseCount;
            while (_queue.Count > 0 && _queue.Peek().AcquireCount <= _currentCount)
            {
                var (gate, acquireCount) = _queue.Dequeue();
                _currentCount -= acquireCount;
                gate.Set();
            }
        }
    }

    public void Dispose()
    {
        foreach (var gate in _pool.Values) gate.Dispose();
        _pool.Dispose();
    }
}
Adding support for timeout and cancellation in the above implementation is not trivial. It would require a different (updateable) data structure instead of the Queue<T>.
The original Wait+Pulse implementation can be found in the 1st revision of this answer. It is simple, but it lacks the desirable FIFO behavior.
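For illustration, a hypothetical worker built on this class might look like the sketch below (Request, service, and SendBatch are made-up names, not part of the original answer):

    var semaphore = new SemaphoreManyFifo(initialCount: 100, maxCount: 100);

    void SendBatch(List<Request> batch)
    {
        semaphore.Wait(batch.Count);   // blocks until batch.Count tokens are free
        try
        {
            service.Send(batch);       // hypothetical batch service call
        }
        finally
        {
            semaphore.Release(batch.Count); // give the tokens back
        }
    }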
It looks to me like you want something like
using System;
using System.Collections.Generic;
using System.Threading;

namespace Sema
{
    class Program
    {
        // do a little bit of timing magic
        static ManualResetEvent go = new ManualResetEvent(false);

        static void Main()
        {
            // limit the resources
            var s = new SemaphoreSlim(30, 30);

            // start up some threads
            var threads = new List<Thread>();
            for (int i = 0; i < 20; i++)
            {
                var start = new ParameterizedThreadStart(dowork);
                Thread t = new Thread(start);
                threads.Add(t);
                t.Start(s);
            }
            go.Set();

            // Wait until all threads finished
            foreach (Thread thread in threads)
            {
                thread.Join();
            }
            Console.WriteLine();
        }

        private static void dowork(object obj)
        {
            go.WaitOne();
            var s = (SemaphoreSlim)obj;
            var batchsize = 3;

            // acquire tokens
            for (int i = 0; i < batchsize; i++)
            {
                s.Wait();
            }

            // send the request
            Console.WriteLine("Working on a batch of size " + batchsize);
            Thread.Sleep(200);
            s.Release(batchsize);
        }
    }
}
However, you'll soon figure out that this causes deadlocks. You'll additionally need some synchronization on the semaphore to guarantee that one thread either gets all of its tokens or none.
var trylater = true;
while (trylater)
{
    lock (s)
    {
        if (s.CurrentCount >= batchsize)
        {
            for (int i = 0; i < batchsize; i++)
            {
                s.Wait();
            }
            trylater = false;
        }
    }
    if (trylater)
    {
        Thread.Sleep(20);
    }
}
Now, that potentially suffers from starvation: a huge batch (say 29) may never get enough resources while hundreds of single requests are made.
I have this function which checks proxy servers, and currently it runs a fixed number of threads and waits for all of them to finish before starting the next set. Is it possible to start a new thread as soon as one of the maximum allowed finishes?
for (int i = 0; i < listProxies.Count(); i += nThreadsNum)
{
    for (nCurrentThread = 0; nCurrentThread < nThreadsNum; nCurrentThread++)
    {
        if (nCurrentThread < nThreadsNum)
        {
            string strProxyIP = listProxies[i + nCurrentThread].sIPAddress;
            int nPort = listProxies[i + nCurrentThread].nPort;
            tasks.Add(Task.Factory.StartNew<ProxyAddress>(() => CheckProxyServer(strProxyIP, nPort, nCurrentThread)));
        }
    }
    Task.WaitAll(tasks.ToArray());
    foreach (var tsk in tasks)
    {
        ProxyAddress result = tsk.Result;
        UpdateProxyDBRecord(result.sIPAddress, result.bOnlineStatus);
    }
    tasks.Clear();
}
This seems much simpler:
int numberProcessed = 0;
Parallel.ForEach(listProxies,
    new ParallelOptions { MaxDegreeOfParallelism = nThreadsNum },
    (p) =>
    {
        var result = CheckProxyServer(p.sIPAddress, p.nPort, Thread.CurrentThread.ManagedThreadId);
        UpdateProxyDBRecord(result.sIPAddress, result.bOnlineStatus);
        Interlocked.Increment(ref numberProcessed);
    });
With slots:
var obj = new Object();
var slots = new List<int>();
Parallel.ForEach(listProxies,
    new ParallelOptions { MaxDegreeOfParallelism = nThreadsNum },
    (p) =>
    {
        int threadId = Thread.CurrentThread.ManagedThreadId;
        int slot = slots.IndexOf(threadId);
        if (slot == -1)
        {
            lock (obj)
            {
                slots.Add(threadId);
            }
            slot = slots.IndexOf(threadId);
        }
        var result = CheckProxyServer(p.sIPAddress, p.nPort, slot);
        UpdateProxyDBRecord(result.sIPAddress, result.bOnlineStatus);
    });
I took a few shortcuts there to guarantee thread safety. You don't have to do the normal check-lock-check dance, because there will never be two threads attempting to add the same thread ID to the list, so the second check would always fail and isn't needed. Secondly, for the same reason, I don't believe you ever need to lock around the outer IndexOf either. That makes this a highly efficient concurrent routine that rarely locks (it should only lock nThreadsNum times) no matter how many items are in the enumerable.
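As an alternative sketch (not from the original answer), the same stable slot assignment can be done with a ConcurrentDictionary keyed by thread ID, which avoids the list scans entirely (requires using System.Collections.Concurrent):

    var slots = new ConcurrentDictionary<int, int>(); // thread ID -> slot
    int nextSlot = -1;
    Parallel.ForEach(listProxies,
        new ParallelOptions { MaxDegreeOfParallelism = nThreadsNum },
        (p) =>
        {
            // Only the owning thread ever asks for its own ID, so the factory
            // runs once per thread and hands out slots 0, 1, 2, ... in arrival order.
            int slot = slots.GetOrAdd(
                Thread.CurrentThread.ManagedThreadId,
                _ => Interlocked.Increment(ref nextSlot));
            var result = CheckProxyServer(p.sIPAddress, p.nPort, slot);
            UpdateProxyDBRecord(result.sIPAddress, result.bOnlineStatus);
        });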
Another solution is to use a SemaphoreSlim, or the producer-consumer pattern using a BlockingCollection<T>. Both solutions support cancellation.
SemaphoreSlim
private async Task CheckProxyServerAsync(IEnumerable<ProxyInfo> proxies)
{
    var tasks = new List<Task>();
    int currentThreadNumber = 0;
    int maxNumberOfThreads = 8;

    using (var semaphore = new SemaphoreSlim(maxNumberOfThreads, maxNumberOfThreads))
    {
        foreach (var proxy in proxies)
        {
            // Asynchronously wait until a slot is available if the limit is reached
            await semaphore.WaitAsync();

            string proxyIP = proxy.IPAddress;
            int port = proxy.Port;
            tasks.Add(Task.Run(() => CheckProxyServer(proxyIP, port, Interlocked.Increment(ref currentThreadNumber)))
                .ContinueWith(
                    (task) =>
                    {
                        ProxyAddress result = task.Result;

                        // Method call must be thread-safe!
                        UpdateProxyDbRecord(result.IPAddress, result.OnlineStatus);

                        Interlocked.Decrement(ref currentThreadNumber);

                        // Allow the next task to start if the limit was reached
                        semaphore.Release();
                    },
                    TaskContinuationOptions.OnlyOnRanToCompletion));
        }

        // Asynchronously wait until all tasks are completed
        // to prevent premature disposal of the semaphore
        await Task.WhenAll(tasks);
    }
}
Producer-Consumer Pattern
// Uses a fixed number of threads
private async Task CheckProxyServerAsync(IEnumerable<ProxyInfo> proxies)
{
    var pipe = new BlockingCollection<ProxyInfo>();
    int maxNumberOfThreads = 8;
    var tasks = new List<Task>();

    // Create all threads (count == maxNumberOfThreads)
    for (int currentThreadNumber = 0; currentThreadNumber < maxNumberOfThreads; currentThreadNumber++)
    {
        int threadNumber = currentThreadNumber; // copy to avoid capturing the shared loop variable
        tasks.Add(
            Task.Run(() => ConsumeProxyInfo(pipe, threadNumber)));
    }

    proxies.ToList().ForEach(pipe.Add);
    pipe.CompleteAdding();
    await Task.WhenAll(tasks);
}

private void ConsumeProxyInfo(BlockingCollection<ProxyInfo> proxiesPipe, int currentThreadNumber)
{
    while (!proxiesPipe.IsCompleted)
    {
        if (proxiesPipe.TryTake(out ProxyInfo proxy))
        {
            int port = proxy.Port;
            string proxyIP = proxy.IPAddress;
            ProxyAddress result = CheckProxyServer(proxyIP, port, currentThreadNumber);

            // Method call must be thread-safe!
            UpdateProxyDbRecord(result.IPAddress, result.OnlineStatus);
        }
    }
}
If I'm understanding your question properly, this is actually fairly simple to do with await Task.WhenAny. Basically, you keep a collection of all of the running tasks. Once you reach a certain number of tasks running, you wait for one or more of your tasks to finish, and then you remove the tasks that were completed from your collection and continue to add more tasks.
Here's an example of what I mean below:
var tasks = new List<Task>();
for (int i = 0; i < 20; i++)
{
    // I want my list of tasks to contain at most 5 tasks at once
    if (tasks.Count == 5)
    {
        // Wait for at least one of the tasks to complete
        await Task.WhenAny(tasks.ToArray());

        // Remove all of the completed tasks from the list
        tasks = tasks.Where(t => !t.IsCompleted).ToList();
    }

    // Add some task to the list
    // (Task.Run rather than Task.Factory.StartNew: StartNew with an async
    // delegate returns a Task<Task> whose outer task completes too early
    // for the IsCompleted check above.)
    tasks.Add(Task.Run(async () =>
    {
        await Task.Delay(1000);
    }));
}
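Note that when the loop exits there may still be up to five unfinished tasks in the list, so you will typically want a final await Task.WhenAll(tasks); after the loop.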
I suggest changing your approach slightly. Instead of starting and stopping threads, put your proxy server data in a concurrent queue, one item for each proxy server. Then create a fixed number of threads (or async tasks) to work on the queue. This is more likely to provide smooth performance (you aren't starting and stopping threads over and over, which has overhead) and is a lot easier to code, in my opinion.
A simple example:
class ProxyChecker
{
    private ConcurrentQueue<ProxyInfo> _masterQueue = new ConcurrentQueue<ProxyInfo>();

    public ProxyChecker(IEnumerable<ProxyInfo> listProxies)
    {
        foreach (var proxy in listProxies)
        {
            _masterQueue.Enqueue(proxy);
        }
    }

    public async Task RunChecks(int maximumConcurrency)
    {
        // Never start more workers than there are items in the queue
        var count = Math.Min(maximumConcurrency, _masterQueue.Count);
        var tasks = Enumerable.Range(0, count).Select(i => WorkerTask()).ToList();
        await Task.WhenAll(tasks);
    }

    private async Task WorkerTask()
    {
        ProxyInfo proxyInfo;
        while (_masterQueue.TryDequeue(out proxyInfo))
        {
            DoTheTest(proxyInfo.IP, proxyInfo.Port);
        }
    }
}
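Usage would then be something like this (with nThreadsNum as in your original code):

    var checker = new ProxyChecker(listProxies);
    await checker.RunChecks(nThreadsNum);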
I have a multi-line textbox and I want to process each line with multiple threads.
The textbox could have a lot of lines (1000+), but not as many threads. I want to use a custom number of threads to read all those 1000+ lines without any duplicates (each thread reading UNIQUE lines only; if a line has been read by another thread, it should not be read again).
What I have right now:
private void button5_Click(object sender, EventArgs e)
{
    for (int i = 0; i < threadCount; i++)
    {
        new Thread(new ThreadStart(threadJob)).Start();
    }
}

private void threadJob()
{
    for (int i = 0; i < txtSearchTerms.Lines.Length; i++)
    {
        lock (threadLock)
        {
            Console.WriteLine(txtSearchTerms.Lines[i]);
        }
    }
}
It does start the correct number of threads, but they all read the same lines multiple times.
Separate data collection from data processing, and from whatever steps follow the calculation. You can safely collect results calculated in parallel by using ConcurrentBag<T>, which is simply a thread-safe collection.
Then you don't need to worry about locking objects, and all lines will be processed only once.
1. Collect the data
2. Process the collected data in parallel
3. Handle the calculated results
private string Process(string line)
{
    // Your logic for the given line goes here;
    // returning the line unchanged as a placeholder.
    return line;
}

private void Button_Click(object sender, EventArgs e)
{
    var results = new ConcurrentBag<string>();
    Parallel.ForEach(txtSearchTerms.Lines,
        line =>
        {
            var result = Process(line);
            results.Add(result);
        });

    foreach (var result in results)
    {
        Console.WriteLine(result);
    }
}
By default Parallel.ForEach will use as many threads as the underlying scheduler provides.
You can control the number of threads used by passing an instance of ParallelOptions to the Parallel.ForEach method.
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount
};

var results = new ConcurrentBag<string>();
Parallel.ForEach(values,
    options,
    value =>
    {
        var result = Process(value);
        results.Add(result);
    });
Consider using Parallel.ForEach to iterate over the Lines array. It is just like a normal foreach loop (i.e. each value will be processed only once), but the work is done in parallel - with multiple Tasks (threads).
var data = txtSearchTerms.Lines;
var threadCount = 4; // or whatever you want
Parallel.ForEach(data,
    new ParallelOptions() { MaxDegreeOfParallelism = threadCount },
    (val) =>
    {
        // Your code here
        Console.WriteLine(val);
    });
The above code will need this line to be added at the top of your file:
using System.Threading.Tasks;
Alternatively, if you want to not just execute something but also return/project something, then instead try:
var results = data
    .AsParallel()
    .AsOrdered() // preserve the input order in the output
    .WithDegreeOfParallelism(threadCount)
    .Select(val =>
    {
        // Your code here; I just return the value, but you could return whatever you want
        return val;
    })
    .ToList();
which still executes the code in parallel, but also returns a List (in this case with the same values as in the original TextBox). And most importantly, thanks to AsOrdered, the List will be in the same order as your input.
There are many ways to do what you want.
Take an extra class field:
private int _counter;
Use it instead of the loop index, and increment it inside the lock:
private void threadJob()
{
    while (true)
    {
        lock (threadLock)
        {
            if (_counter >= txtSearchTerms.Lines.Length)
                return;
            Console.WriteLine(txtSearchTerms.Lines[_counter]);
            _counter++;
        }
    }
}
It works, but it is very inefficient.
Let's consider another way: each thread will handle its own part of the dataset, independently of the others.
public void button5_Click(object sender, EventArgs e)
{
    for (int i = 0; i < threadCount; i++)
    {
        new Thread(new ParameterizedThreadStart(threadJob)).Start(i);
    }
}

private void threadJob(object o)
{
    int threadNumber = (int)o;
    int count = txtSearchTerms.Lines.Length / threadCount;
    int start = threadNumber * count;
    int end = threadNumber != threadCount - 1 ? start + count : txtSearchTerms.Lines.Length;
    for (int i = start; i < end; i++)
    {
        Console.WriteLine(txtSearchTerms.Lines[i]);
    }
}
This is more efficient because the threads do not wait on the lock. However, the array elements are no longer processed in a single overall order.
I have the following code, which throws SemaphoreFullException, and I don't understand why.
If I change _semaphore = new SemaphoreSlim(0, 2) to
_semaphore = new SemaphoreSlim(0, int.MaxValue)
then everything works fine.
Can anyone please find the fault in this code and explain it to me?
class BlockingQueue<T>
{
    private Queue<T> _queue = new Queue<T>();
    private SemaphoreSlim _semaphore = new SemaphoreSlim(0, 2);

    public void Enqueue(T data)
    {
        if (data == null) throw new ArgumentNullException("data");
        lock (_queue)
        {
            _queue.Enqueue(data);
        }
        _semaphore.Release();
    }

    public T Dequeue()
    {
        _semaphore.Wait();
        lock (_queue)
        {
            return _queue.Dequeue();
        }
    }
}

public class Test
{
    private static BlockingQueue<string> _bq = new BlockingQueue<string>();

    public static void Main()
    {
        for (int i = 0; i < 100; i++)
        {
            _bq.Enqueue("item-" + i);
        }
        for (int i = 0; i < 5; i++)
        {
            Thread t = new Thread(Produce);
            t.Start();
        }
        for (int i = 0; i < 100; i++)
        {
            Thread t = new Thread(Consume);
            t.Start();
        }
        Console.ReadLine();
    }

    private static Random _random = new Random();

    private static void Produce()
    {
        while (true)
        {
            _bq.Enqueue("item-" + _random.Next());
            Thread.Sleep(2000);
        }
    }

    private static void Consume()
    {
        while (true)
        {
            Console.WriteLine("Consumed-" + _bq.Dequeue());
            Thread.Sleep(1000);
        }
    }
}
If you want to use the semaphore to control the number of concurrent threads, you're using it wrong. You should acquire the semaphore when you dequeue an item, and release the semaphore when the thread is done processing that item.
What you have right now is a system that allows only two items to be in the queue at any one time. Your semaphore starts with a count of 0 and a maximum count of 2. Each time you enqueue an item, Release increments the count. After two items the count has reached the maximum, so the next Release throws a SemaphoreFullException.
If you really want to do this with a semaphore, you need to remove the Release call from the Enqueue method and add a Release method to the BlockingQueue class. You would then write:
private static void Consume()
{
    while (true)
    {
        Console.WriteLine("Consumed-" + _bq.Dequeue());
        Thread.Sleep(1000);
        _bq.Release();
    }
}
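For completeness, here is a sketch of what that modified class might look like (an assumption about the intended design: the semaphore now limits concurrent consumers rather than counting queued items, so it no longer blocks on an empty queue):

    class BlockingQueue<T>
    {
        private Queue<T> _queue = new Queue<T>();
        private SemaphoreSlim _semaphore = new SemaphoreSlim(2, 2); // 2 concurrent consumers

        public void Enqueue(T data)
        {
            if (data == null) throw new ArgumentNullException("data");
            lock (_queue)
            {
                _queue.Enqueue(data); // note: no Release call here anymore
            }
        }

        public T Dequeue()
        {
            _semaphore.Wait(); // acquire a processing slot
            lock (_queue)
            {
                return _queue.Dequeue();
            }
        }

        public void Release()
        {
            _semaphore.Release(); // the consumer has finished processing its item
        }
    }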
That would make your code work, but it's not a very good solution. A much better solution would be to use BlockingCollection<T> and two persistent consumers. Something like:
private BlockingCollection<int> bq = new BlockingCollection<int>();

void Test()
{
    // create two consumers
    var c1 = new Thread(Consume);
    var c2 = new Thread(Consume);
    c1.Start();
    c2.Start();

    // produce
    for (var i = 0; i < 100; ++i)
    {
        bq.Add(i);
    }
    bq.CompleteAdding();

    c1.Join();
    c2.Join();
}

void Consume()
{
    foreach (var i in bq.GetConsumingEnumerable())
    {
        Console.WriteLine("Consumed-" + i);
        Thread.Sleep(1000);
    }
}
That gives you two persistent threads consuming the items. The benefit is that you avoid the cost of spinning up a new thread (or having the runtime assign a pool thread) for each item. Instead, the threads do non-busy waits on the queue. You also don't have to worry about explicit locking, etc. The code is simpler, more robust, and much less likely to contain a bug.