I have a queue of jobs which can be populated by multiple threads (ConcurrentQueue<MyJob>). I need to implement continuous execution of these jobs asynchronously (not on the main thread), but only by one thread at a time. I've tried something like this:
public class ConcurrentLoop {
private static ConcurrentQueue<MyJob> _concurrentQueue = new ConcurrentQueue<MyJob>();
private static Task _currentTask;
private static object _lock = new object();
public static void QueueJob(MyJob job)
{
_concurrentQueue.Enqueue(job);
checkLoop();
}
private static void checkLoop()
{
if ( _currentTask == null || _currentTask.IsCompleted )
{
lock (_lock)
{
if ( _currentTask == null || _currentTask.IsCompleted )
{
_currentTask = Task.Run(() =>
{
MyJob current;
while (_concurrentQueue.TryDequeue(out current))
{
    //Do something
}
});
}
}
}
}
}
This code, in my opinion, has a problem: if the task is finishing its execution (TryDequeue has returned false, but the task has not been marked as completed yet) and at that moment I get a new job, it will not be executed. Am I right? If so, how do I fix this?
Your problem statement looks like a producer-consumer problem, with a caveat that you only want a single consumer.
There is no need to reimplement such functionality manually.
Instead, I suggest using BlockingCollection -- by default it wraps a ConcurrentQueue internally, and you consume it from a single dedicated thread.
Note, that this may or may not be suitable for your use case.
Something like:
_blockingCollection = new BlockingCollection<your type>(); // you may want to create bounded or unbounded collection
_consumingThread = new Thread(() =>
{
foreach (var workItem in _blockingCollection.GetConsumingEnumerable()) // blocks when there is no more work to do, continues whenever a new item is added.
{
// do work with workItem
}
});
_consumingThread.Start();
Multiple producers (tasks or threads) can add work items to _blockingCollection without a problem, and there is no need to worry about synchronizing the producers and the consumer.
When you are done producing, call _blockingCollection.CompleteAdding() (this method is not thread-safe, so it is advised to stop all producers beforehand).
You should probably also call _consumingThread.Join() somewhere to wait for the consuming thread to finish, as the sketch below shows.
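For completeness, here is a minimal end-to-end sketch of that lifecycle, using the MyJob type from the question (its Execute method is a placeholder for your actual work):
var queue = new BlockingCollection<MyJob>();
var consumingThread = new Thread(() =>
{
    foreach (var job in queue.GetConsumingEnumerable())
    {
        job.Execute(); // placeholder for whatever a job does
    }
});
consumingThread.Start();
// any number of producer threads can call this concurrently:
queue.Add(new MyJob());
// ...once all producers are finished:
queue.CompleteAdding(); // GetConsumingEnumerable ends after the queue drains
consumingThread.Join(); // wait for the consumer to process the remaining items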
I would use Microsoft's Reactive Framework Team's Reactive Extensions (NuGet "System.Reactive") for this. It's a lovely abstraction.
public class ConcurrentLoop
{
private static Subject<MyJob> _jobs = new Subject<MyJob>();
private static IDisposable _subscription =
_jobs
.Synchronize()
.ObserveOn(Scheduler.Default)
.Subscribe(job =>
{
//Do something
});
public static void QueueJob(MyJob job)
{
_jobs.OnNext(job);
}
}
This nicely synchronizes all incoming jobs into a single stream and pushes the execution onto Scheduler.Default (which is basically the thread pool), but because all input has been serialized, only one job can run at a time. The nice thing about this is that it releases the thread if there is a significant gap between values. It's a very lean solution.
To clean up, you just need to call either _jobs.OnCompleted(); or _subscription.Dispose();.
I have a method which could be called by multiple threads to write data to a database. To reduce database traffic, I cache the data and write it in bulk.
Now I want to know: is there a better pattern (for example, a lock-free one) to use?
Here is an example of how I do it at the moment:
public class WriteToDatabase : IWriter, IDisposable
{
public WriteToDatabase(PLCProtocolServiceConfig currentConfig)
{
writeTimer = new System.Threading.Timer(Writer);
writeTimer.Change((int)currentConfig.WriteToDatabaseTimer.TotalMilliseconds, Timeout.Infinite);
this.currentConfig = currentConfig;
}
private System.Threading.Timer writeTimer;
private List<PlcProtocolDTO> writeCache = new List<PlcProtocolDTO>();
private readonly PLCProtocolServiceConfig currentConfig;
private bool disposed;
public void Write(PlcProtocolDTO row)
{
lock (this)
{
writeCache.Add(row);
}
}
private void Writer(object state)
{
List<PlcProtocolDTO> oldCache = null;
lock (this)
{
    if (writeCache.Count > 0)
    {
        oldCache = writeCache;
        writeCache = new List<PlcProtocolDTO>();
    }
}
if (oldCache != null)
{
    using (var s = VisuDL.CreateSession())
    {
        s.Insert(oldCache);
    }
}
}
if (!this.disposed)
writeTimer.Change((int)currentConfig.WriteToDatabaseTimer.TotalMilliseconds, Timeout.Infinite);
}
public void Dispose()
{
this.disposed = true;
writeTimer.Dispose();
Writer(null);
}
}
There are a few issues I can see with the timer-based code.
Even in the new version of the code there is still a chance to lose writes on restart or shutdown: the Dispose method does not wait for the completion of a timer callback that may currently be in progress, and since timer callbacks run on thread pool threads, which are background threads, they will be aborted when the main thread exits.
There is no limit on the size of the batches; this is going to break when you hit a limit of the underlying storage API (e.g. SQL databases have a limit on query length and on the number of parameters used).
Since you're doing I/O, the implementation should probably be async.
This will behave poorly under load: as the load keeps increasing, the batches get bigger and therefore slower to execute; a slower batch execution in turn gives the next one additional time to accumulate items, making it even slower, and so on. Ultimately either writing the batch will fail (if you hit a SQL limit or the query times out) or the application will run out of memory. To handle high load you really have only two choices: applying backpressure (i.e. slowing down the producers) or dropping writes.
You might want to allow a limited number of concurrent writers if the database can handle it.
There is a race condition on the disposed field which might result in an ObjectDisposedException in writeTimer.Change.
I think a better pattern that addresses the issues above is the producer-consumer pattern; you can implement it in .NET
with a ConcurrentQueue or with the new System.Threading.Channels API.
Also keep in mind that if your application crashes for any reason you will lose the records that are still buffered.
This is a sample implementation using channels:
public interface IWriter<in T>
{
ValueTask WriteAsync(IEnumerable<T> items);
}
public sealed record Options(int BatchSize, TimeSpan Interval, int MaxPendingWrites, int Concurrency);
public class BatchWriter<T> : IWriter<T>, IAsyncDisposable
{
readonly IWriter<T> writer;
readonly Options options;
readonly Channel<T> channel;
readonly Task[] consumers;
public BatchWriter(IWriter<T> writer, Options options)
{
this.writer = writer;
this.options = options;
channel = Channel.CreateBounded<T>(new BoundedChannelOptions(options.MaxPendingWrites)
{
// Choose between backpressure (Wait) or
// various ways to drop writes (DropNewest, DropOldest, DropWrite).
FullMode = BoundedChannelFullMode.Wait,
SingleWriter = false,
SingleReader = options.Concurrency == 1
});
consumers = Enumerable.Range(start: 0, options.Concurrency)
.Select(_ => Task.Run(Start))
.ToArray();
}
async Task Start()
{
var batch = new List<T>(options.BatchSize);
var timer = Task.Delay(options.Interval);
var canRead = channel.Reader.WaitToReadAsync().AsTask();
while (true)
{
if (await Task.WhenAny(timer, canRead) == timer)
{
timer = Task.Delay(options.Interval);
await Flush(batch);
}
else if (await canRead)
{
while (channel.Reader.TryRead(out var item))
{
batch.Add(item);
if (batch.Count == options.BatchSize)
{
await Flush(batch);
}
}
canRead = channel.Reader.WaitToReadAsync().AsTask();
}
else
{
await Flush(batch);
return;
}
}
async Task Flush(ICollection<T> items)
{
if (items.Count > 0)
{
await writer.WriteAsync(items);
items.Clear();
}
}
}
public async ValueTask WriteAsync(IEnumerable<T> items)
{
foreach (var item in items)
{
await channel.Writer.WriteAsync(item);
}
}
public async ValueTask DisposeAsync()
{
channel.Writer.Complete();
await Task.WhenAll(consumers);
}
}
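A hypothetical usage example, assuming some SqlWriter class implements IWriter<PlcProtocolDTO> and performs the actual bulk insert:
await using var batchWriter = new BatchWriter<PlcProtocolDTO>(
    new SqlWriter(), // assumed implementation of IWriter<PlcProtocolDTO>
    new Options(
        BatchSize: 100,
        Interval: TimeSpan.FromSeconds(5),
        MaxPendingWrites: 10_000,
        Concurrency: 1));
// producers on any thread:
await batchWriter.WriteAsync(new[] { row });
// disposing completes the channel, flushes the last partial batch and waits for the consumers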
Instead of using a mutable List and protecting it using locks, you could use an ImmutableList, and stop worrying about the possibility of the list being mutated by the wrong thread at the wrong time. With immutable collections it is cheap and easy to pass around snapshots of your data, because you don't need to block the writers (and possibly also the readers) while creating copies of the data. An immutable collection is a snapshot by itself.
Although you don't have to worry about the contents of the collection, you still have to worry about its reference. This is because updating an immutable collection means replacing the reference to the old collection with a new collection. You don't want to have multiple threads swapping references in an uncontrollable manner, so you still need some sort of synchronization. You can still use locks, but it is quite easy to avoid locking altogether by using interlocked operations. The example below uses the handy ImmutableInterlocked.Update method, that allows to do an atomic update-and-swap in a single line:
private ImmutableList<PlcProtocolDTO> writeCache
= ImmutableList<PlcProtocolDTO>.Empty;
public void Write(PlcProtocolDTO row)
{
ImmutableInterlocked.Update(ref writeCache, x => x.Add(row));
}
private void Writer(object state)
{
IList<PlcProtocolDTO> oldCache = Interlocked.Exchange(
ref writeCache, ImmutableList<PlcProtocolDTO>.Empty);
using (var s = VisuDL.CreateSession())
s.Insert(oldCache);
}
private void Dump()
{
foreach (var row in Volatile.Read(ref writeCache))
Console.WriteLine(row);
}
Here is the description of the ImmutableInterlocked.Update method:
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation function. The transformation is retried as many times as necessary to win the optimistic locking race.
This method can be used for updating any type of reference-type variable. Its usage may increase with the advent of the new C# 9 record types, which are immutable by default and are intended to be used as such.
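For instance, the same atomic update-and-swap applies to a hypothetical record type (a minimal sketch, unrelated to the code above):
public record Stats(int Writes, int Errors);
private Stats _stats = new Stats(0, 0);
public void RecordError()
{
    // the lambda is retried until the underlying compare-and-swap wins
    ImmutableInterlocked.Update(ref _stats, s => s with { Errors = s.Errors + 1 });
}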
Given some code like so
public class CustomCollectionClass : Collection<CustomData> {}
public class CustomData
{
    public string name;
    public bool finished;
    public string result;
}
public async Task DoWorkInParallel(CustomCollectionClass collection)
{
// collection can be retrieved from a DB, may not exist.
if (collection == null)
{
collection = new CustomCollectionClass();
foreach (var data in myData)
{
collection.Add(new CustomData()
{
name = data.Name
});
}
}
// This part doesn't feel safe. Not sure what to do here.
var processTasks = myData.Select(o =>
this.DoWorkOnItemInCollection(collection.Single(d => d.name == o.Name))).ToArray();
await Task.WhenAll(processTasks);
await SaveModifedCollection(collection);
}
public async Task DoWorkOnItemInCollection(CustomData data)
{
await DoABunchOfWorkElsewhere();
// This doesn't feel safe either. Lock here?
data.finished = true;
data.result = "Parallel";
}
As I noted in a couple of comments inline, it doesn't feel safe to me to do the above, but I'm not sure. I have a collection of elements, and I'd like to assign a unique element to each parallel task and have those tasks modify that single element of the collection based on the work done. The end result being, I want to save the collection after the individual, different elements have been modified in parallel. If this isn't a safe way to do it, how best would I go about this?
Your code is the right way to do this, assuming starting DoABunchOfWorkElsewhere() multiple times is itself safe.
You don't need to worry about your LINQ query, because it doesn't actually run in parallel. All it does is to invoke DoWorkOnItemInCollection() multiple times. Those invocations may work in parallel (or not, depending on your synchronization context and the implementation of DoABunchOfWorkElsewhere()), but the code you showed is safe.
Your above code should work without issue. You are passing off one item to each worker thread. I'm not so sure about the async keyword, though. You might just return a Task, and then in your method do:
public Task DoWorkOnItemInCollection(CustomData data)
{
return Task.Run(() => {
DoABunchOfWorkElsewhere().Wait();
data.finished = true;
data.result = "Parallel";
});
}
You might want to be careful: with a large number of items you could exhaust the available thread pool threads with background work, and since background threads are terminated silently when the main thread exits, this can be difficult to debug later.
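If that is a concern, one option is to keep the task-based code but cap the number of items in flight with a SemaphoreSlim; a hedged sketch using the names from the question (the limit of 8 is an arbitrary example):
var throttle = new SemaphoreSlim(8); // at most 8 items in flight at a time
var processTasks = myData.Select(async o =>
{
    await throttle.WaitAsync();
    try
    {
        await DoWorkOnItemInCollection(collection.Single(d => d.name == o.Name));
    }
    finally
    {
        throttle.Release();
    }
}).ToArray();
await Task.WhenAll(processTasks);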
I have done this before. It might be easier if, instead of handing the whole collection to some magic LINQ, you implement a classic producer-consumer pattern:
class ParallelWorker<T>
{
private Action<T> Action;
private Queue<T> Queue = new Queue<T>();
private object QueueLock = new object();
private void DoWork()
{
while(true)
{
T item;
lock(this.QueueLock)
{
if(this.Queue.Count == 0) return; //exit thread
item = this.Queue.Dequeue();
}
try { this.Action(item); }
catch { /*...*/ }
}
}
public void DoParallelWork(IEnumerable<T> items, int maxDegreesOfParallelism, Action<T> action)
{
this.Action = action;
this.Queue.Clear();
foreach (T workItem in items)
    this.Queue.Enqueue(workItem); // Queue<T> has no AddRange
List<Thread> threads = new List<Thread>();
for (int i = 0; i < maxDegreesOfParallelism; i++)
{
ThreadStart threadStart = new ThreadStart(DoWork);
Thread thread = new Thread(threadStart);
thread.Start();
threads.Add(thread);
}
foreach(Thread thread in threads)
{
thread.Join();
}
}
}
This was done IDE free, so there may be typos.
I'm going to make the suggestion that you use Microsoft's Reactive Framework (NuGet "System.Reactive", formerly "Rx-Main") to do this task.
Here's the code:
public void DoWorkInParallel(CustomCollectionClass collection)
{
var query =
from x in collection.ToObservable()
from r in Observable.FromAsync(() => DoWorkOnItemInCollection(x))
select x;
query.Subscribe(x => { }, ex => { }, async () =>
{
await SaveModifedCollection(collection);
});
}
Done. That's it. Nothing more.
I have to say, though, that when I tried to get your code to run, it was full of bugs and issues. I suspect that the code you posted isn't your production code, but an example you wrote specifically for this question. I suggest that you try to make a running, compilable example before posting.
Nevertheless, my suggestion should work for you with a little tweaking.
It is multi-threaded and thread-safe, and it cleanly saves the modified collection when done.
I have a process that goes through a loop. During each iteration of the loop, it calls out to an external web service and then adds an object to an EntityFramework repository. The call to the external service is wrapped in a static method. Typically the loop only has one or two iterations, but up to 4 is currently possible with the UI. (Each iteration represents an insurance quote.)
It seems that this would benefit from being refactored as an asynchronous process. How do I set this up so that each iteration occurs in a separate thread, and the commit waits until all threads are completed?
public class ProcessRequest
{
private IUnitOfWork unitOfWork;
public ProcessRequest(IUnitOfWork uow)
{
unitOfWork = uow;
}
public void Execute(MyRequestParams p)
{
foreach (Quote q in p.Quotes)
{
q.Premium = QuoteService.GetQuote(q);
unitOfWork.GetRepository<Quote>().Add(q);
}
unitOfWork.Commit();
}
}
public static class QuoteService
{
public static decimal GetQuote(Quote quote)
{
//I've simplified proprietary code to single line that calls an external service
return ExternalWebService.GetQuote(quote.Deductible);
}
}
You're asking two different things: one is how to execute the loop in parallel, where each iteration occurs (potentially) on a separate thread; this is completely different from executing the entire loop as an asynchronous process, which means the thread that initiates it won't wait for it to complete. I assume you meant the first, i.e. that you want to parallelize the iterations in the loop but still block until all of them are done.
Without knowing anything about the context in which this runs, one straightforward way would be to use Parallel Extensions, specifically Parallel Foreach:
public void Execute(MyRequestParams p)
{
Parallel.ForEach(p.Quotes, q => {
q.Premium = QuoteService.GetQuote(q);
unitOfWork.GetRepository<Quote>().Add(q);
});
unitOfWork.Commit();
}
Or maybe something like:
public void Execute(MyRequestParams p)
{
Parallel.ForEach(p.Quotes, q => {
q.Premium = QuoteService.GetQuote(q);
});
unitOfWork.GetRepository<Quote>().AddAll(p.Quotes);
unitOfWork.Commit();
}
This depends heavily on the thread-safety of what you're dealing with.
If most of your work is I/O, I'm not sure I'd go for spinning up new threads, as you would be wasting most of your time idly waiting for your service/DB to reply.
I'd try to go with an async approach:
public async Task Execute(MyRequestParams p)
{
foreach (var quote in p.Quotes)
{
//Of course, you'll need an async endpoint.
quote.Premium = await QuoteService.GetQuoteAsync(quote);
}
unitOfWork.GetRepository<Quote>().AddAll(p.Quotes);
await unitOfWork.SaveChangesAsync();
}
With this approach, you save the overhead of spinning up new threads and letting them sit idle most of the time.
Hope this makes sense. Of course, you'd have to have access to an async endpoint of the web service, and use Entity Framework v6 or later (which added the async APIs).
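As a sketch, that async endpoint might look like this, assuming the external service exposes (or can be wrapped in) an async call; GetQuoteAsync here is hypothetical:
public static class QuoteService
{
    public static Task<decimal> GetQuoteAsync(Quote quote)
    {
        // hypothetical async counterpart of ExternalWebService.GetQuote
        return ExternalWebService.GetQuoteAsync(quote.Deductible);
    }
}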
I have a method named InitializeCRMService() which returns an object of IOrganizationService. Now I am defining a different method named GetConnection(string thread) which calls InitializeCRMService() based on the parameter passed to it. If the string passed to GetConnection is "one", it will start a single-threaded run of the InitializeCRMService() method, but if the string passed is "multi", I need to use a thread pool, where I need to pass the method to QueueUserWorkItem. The method InitializeCRMService has no input parameters; it just returns a service object. Please find below the code block in the GetConnection method:
public void GetConnection(string thread)
{
ParallelOptions ops = new ParallelOptions();
if(thread.Equals("one"))
{
Parallel.For(0, 1, i =>
{
dynamic serviceObject = InitializeCRMService();
});
}
else if (thread.Equals("multi"))
{
// HERE I NEED TO IMPLEMENT MULTITHREADING USING THREAD POOL
// AND NOT PARALLEL FOR LOOP......
// ThreadPool.QueueUserWorkItem(new WaitCallback(InitializeCRMService));
}
}
Please note my method InitializeCRMService() has a return type of Service Object.
Please tell me how do I implement it.
Since you want to execute InitializeCRMService in the ThreadPool when a slot is available, and you are executing this only once, the solution depends on what you want to do with the return value of InitializeCRMService.
If you only want to ignore it, I have two options so far.
Option 1
public void GetConnection(string thread)
{
//I found that ops is not being used
//ParallelOptions ops = new ParallelOptions();
if(thread.Equals("one"))
{
Parallel.For(0, 1, i =>
{
//You don't really need to have a variable
/*dynamic serviceObject =*/ InitializeCRMService();
});
}
else if (thread.Equals("multi"))
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
//You don't really need to have a variable
/*dynamic serviceObject =*/ InitializeCRMService();
}
)
);
}
}
On the other hand, if you need to pass it somewhere to store it and reuse it later, you can do it like this:
public void GetConnection(string thread)
{
//I found that ops is not being used
//ParallelOptions ops = new ParallelOptions();
if(thread.Equals("one"))
{
Parallel.For(0, 1, i =>
{
//It seems to me a good idea to take the same path here too
//dynamic serviceObject = InitializeCRMService();
Store(InitializeCRMService());
});
}
else if (thread.Equals("multi"))
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
Store(InitializeCRMService());
}
)
);
}
}
Where Store would be something like this:
private void Store(dynamic serviceObject)
{
//store serviceObject somewhere you can use it later.
//Depending on your situation you may want to
// set a flag or use a ManualResetEvent to notify
// that serviceObject is ready to be used.
//Any pre proccess can be done here too.
//Take care of thread affinity,
// since this may come from the ThreadPool
// and the consuming thread may be another one,
// you may need some synchronization.
}
Now, if you need to allow clients of your class to access serviceObject, you can take the following approach:
//Note: I marked it as partial because there may be other code not showed here
// in particular I will not write the method GetConnection again. That said...
// you can have it all in a single block in a single file without using partial.
public partial class YourClass
{
private dynamic _serviceObject;
private void Store(dynamic serviceObject)
{
_serviceObject = serviceObject;
}
public dynamic ServiceObject
{
get
{
return _serviceObject;
}
}
}
But this doesn't take care of all the cases. In particular if you want to have thread waiting for serviceObject to be ready:
public partial class YourClass
{
private ManualResetEvent _serviceObjectWaitHandle = new ManualResetEvent(false);
private dynamic _serviceObject;
private void Store(dynamic serviceObject)
{
_serviceObject = serviceObject;
//If you need to do some work as soon as _serviceObject is ready...
// then it can be done here, this may still be the thread pool thread.
//If you need to call something like the UI...
// you will need to use BeginInvoke or a similar solution.
_serviceObjectWaitHandle.Set();
}
public void WaitForServiceObject()
{
//You may also expose other overloads, just for convenience.
//This will wait until Store is executed
//When _serviceObjectWaitHandle.Set() is called
// this will let other threads pass.
_serviceObjectWaitHandle.WaitOne();
}
public dynamic ServiceObject
{
get
{
return _serviceObject;
}
}
}
Still, I haven't covered all the scenarios. For instance... what happens if GetConnection is called multiple times? We need to decide whether we want to allow that, and if we do, what we do with the old serviceObject (do we need to call something to dismiss it?). This can be problematic if we allow multiple threads to call GetConnection at once. So by default I will say that we don't, but we don't want to block the other threads either...
The solution? Follows:
//This is another part of the same class
//This one includes GetConnection
public partial class YourClass
{
//1 if GetConnection has been called, 0 otherwise
private int _initializingServiceObject;
public void GetConnection(string thread)
{
if (Interlocked.CompareExchange(ref _initializingServiceObject, 1, 0) == 0)
{
//Go on, it is the first time GetConnection is called
//I found that ops is not being used
//ParallelOptions ops = new ParallelOptions();
if(thread.Equals("one"))
{
Parallel.For(0, 1, i =>
{
//It seems to me a good idea to take the same path here too
//dynamic serviceObject = InitializeCRMService();
Store(InitializeCRMService());
});
}
else if (thread.Equals("multi"))
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
Store(InitializeCRMService());
}
)
);
}
}
}
}
Finally, if we are allowing multiple threads to use _serviceObject, and _serviceObject is not thread-safe, we can run into trouble. Using a monitor or a reader-writer lock are two alternatives to solve that.
Do you remember this?
public dynamic ServiceObject
{
get
{
return _serviceObject;
}
}
Ok, you want the caller to access _serviceObject inside a context that prevents other threads from entering (see System.Threading.Monitor), make sure it stops using it, and then leave that context.
Now consider that the caller thread could still store a copy of _serviceObject somewhere, then leave the synchronization, and then do something with _serviceObject, and that may happen while another thread is using it.
I'm used to thinking of every corner case when it comes to threading. But if you have control over the calling threads, you can do it very well with just the property shown above. If you don't... let's talk about it; I warn you, it can be extensive.
Option 2
This is a totally different behaviour. The comment Damien_The_Unbeliever made on your question made me think that you may have intended to return serviceObject. In that case, it is not shared among threads, and it is OK to have multiple serviceObjects at a time. Any synchronization needed is left to the caller.
Ok, this may be what you have been looking for:
public void GetConnection(string thread, Action<dynamic> callback)
{
if (ReferenceEquals(callback, null))
{
throw new ArgumentNullException("callback");
}
//I found that ops is not being used
//ParallelOptions ops = new ParallelOptions();
if(thread.Equals("one"))
{
Parallel.For(0, 1, i =>
{
callback(InitializeCRMService());
});
}
else if (thread.Equals("multi"))
{
ThreadPool.QueueUserWorkItem
(
new WaitCallback
(
(_) =>
{
callback(InitializeCRMService());
}
)
);
}
}
How should the callback look? Well, as long as it is not shared between threads, it is fine. Why? Because each thread that calls GetConnection passes its own callback Action and receives a different serviceObject, so there is no risk that what one thread does to its serviceObject affects what another thread does to its own (since they are not the same serviceObject).
Unless you want to have one thread call this and then share the result with other threads, in which case it is a problem for the caller, and it will be resolved in another place at another moment.
One last thing: you could use an enum to represent the options you currently pass in the string thread. In fact, since there are only two options, you may consider using a bool, unless more cases may appear in the future.
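A minimal sketch of that last suggestion (the enum and its values are illustrative):
public enum ConnectionMode { Single, Multi }
public void GetConnection(ConnectionMode mode, Action<dynamic> callback)
{
    if (mode == ConnectionMode.Single)
    {
        // single-threaded path
    }
    else // ConnectionMode.Multi
    {
        // thread pool path
    }
}
This removes the possibility of a caller passing a string that matches neither option.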
I have a couple of situations in my code where various threads can create work items that, for various reasons, shouldn't be done in parallel. I'd like to make sure the work gets done in a FIFO manner, regardless of what thread it comes in from. In Java, I'd put the work items on a single-threaded ExecutorService; is there an equivalent in C#? I've cobbled something together with a Queue and a bunch of lock(){} blocks, but it'd be nice to be able to use something off-the-shelf and tested.
Update: Does anybody have experience with System.Threading.Tasks? Does it have a solution for this sort of thing? I'm writing a Monotouch app so who knows if I could even find a backported version of it that I could get to work, but it'd at least be something to think about for the future.
Update #2 For C# developers unfamiliar with the Java libraries I'm talking about, basically I want something that lets various threads hand off work items such that all those work items will be run on a single thread (which isn't any of the calling threads).
Update, 6/2018: If I was architecting a similar system now, I'd probably use Reactive Extensions as per Matt Craig's answer. I'm leaving Zachary Yates' answer the accepted one, though, because if you're thinking in Rx you probably wouldn't even ask this question, and I think ConcurrentQueue is easier to bodge into a pre-Rx program.
Update: To address the comments on wasting resources (and if you're not using Rx), you can use a BlockingCollection (if you use the default constructor, it wraps a ConcurrentQueue) and just call .GetConsumingEnumerable(). There's an overload that takes a CancellationToken if the work is long-running. See the example below.
You can use ConcurrentQueue (if MonoTouch supports .NET 4?); it's thread-safe, and I think the implementation is actually lock-free. This works pretty well if you have a long-running task (like in a Windows service).
Generally, your problem sounds like you have multiple producers with a single consumer.
var work = new BlockingCollection<Item>();
var producer1 = Task.Factory.StartNew(() => {
work.TryAdd(item); // or whatever your threads are doing
});
var producer2 = Task.Factory.StartNew(() => {
work.TryAdd(item); // etc
});
var consumer = Task.Factory.StartNew(() => {
foreach (var item in work.GetConsumingEnumerable()) {
// do the work
}
});
Task.WaitAll(producer1, producer2);
work.CompleteAdding(); // without this, GetConsumingEnumerable never ends and the consumer never finishes
consumer.Wait();
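For the long-running case mentioned in the update above, here is a sketch of the GetConsumingEnumerable overload that takes a CancellationToken:
var cts = new CancellationTokenSource();
var consumer = Task.Factory.StartNew(() =>
{
    try
    {
        foreach (var item in work.GetConsumingEnumerable(cts.Token))
        {
            // do the work
        }
    }
    catch (OperationCanceledException)
    {
        // shutting down
    }
});
// later, to stop the consumer even while producers are still running:
// cts.Cancel();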
You should use BlockingCollection if you have a finite pool of work items. Here's an MSDN page showing all of the new concurrent collection types.
I believe this can be done using a SynchronizationContext. However, I have only done this to post back to the UI thread, which already has a synchronization context provided by .NET (when one is installed) -- I don't know how to prepare one for use from a "vanilla" thread, though.
Some links I found for "custom synchronizationcontext provider" (I have not had time to review these, do not fully understand the working/context, nor do I have any additional information):
Looking for an example of a custom SynchronizationContext (Required for unit testing)
http://codeidol.com/csharp/wcf/Concurrency-Management/Custom-Service-Synchronization-Context/
Happy coding.
There is a more contemporary solution now available - the EventLoopScheduler class.
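A minimal sketch of how it can be used (EventLoopScheduler is part of Reactive Extensions, NuGet "System.Reactive"; it runs every scheduled action, in order, on a single dedicated thread; DoWork is a placeholder):
var scheduler = new EventLoopScheduler();
scheduler.Schedule(() => DoWork(1)); // runs on the event-loop thread
scheduler.Schedule(() => DoWork(2)); // runs after the first action, on the same thread
// ...
scheduler.Dispose(); // ends the event-loop thread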
Not native AFAIK, but look at this:
Serial Task Executor; is this thread safe?
I made an example here: https://github.com/embeddedmz/message_passing_on_csharp. It makes use of BlockingCollection.
So you will have a class that manages a resource, and you can use the class below, which creates a thread that will be the only one to manage it:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
public class ResourceManagerThread<Resource>
{
private readonly Resource _managedResource;
private readonly BlockingCollection<Action<Resource>> _tasksQueue;
private Task _task;
private readonly object _taskLock = new object();
public ResourceManagerThread(Resource resource)
{
_managedResource = (resource != null) ? resource : throw new ArgumentNullException(nameof(resource));
_tasksQueue = new BlockingCollection<Action<Resource>>();
}
public Task<T> Enqueue<T>(Func<Resource, T> method)
{
    var tcs = new TaskCompletionSource<T>();
    _tasksQueue.Add(r =>
    {
        try { tcs.SetResult(method(r)); }
        catch (Exception ex) { tcs.SetException(ex); } // surface failures through the returned Task
    });
    return tcs.Task;
}
public void Start()
{
lock (_taskLock)
{
if (_task == null)
{
_task = Task.Run(ThreadMain);
}
}
}
public void Stop()
{
lock (_taskLock)
{
if (_task != null)
{
_tasksQueue.CompleteAdding();
_task.Wait();
_task = null;
}
}
}
public bool HasStarted
{
get
{
lock (_taskLock)
{
if (_task != null)
{
return _task.IsCompleted == false ||
_task.Status == TaskStatus.Running ||
_task.Status == TaskStatus.WaitingToRun ||
_task.Status == TaskStatus.WaitingForActivation;
}
else
{
return false;
}
}
}
}
private void ThreadMain()
{
try
{
foreach (var action in _tasksQueue.GetConsumingEnumerable())
{
try
{
action(_managedResource);
}
catch
{
//...
}
}
}
catch
{
}
}
}
Example :
private readonly DevicesManager _devicesManager;
private readonly ResourceManagerThread<DevicesManager> _devicesManagerThread;
//...
_devicesManagerThread = new ResourceManagerThread<DevicesManager>(_devicesManager);
_devicesManagerThread.Start();
_devicesManagerThread.Enqueue((DevicesManager dm) =>
{
return dm.Initialize();
});
// Enqueue will return a Task. Use the 'Result' property to get the result of the 'message' or 'request' sent to the thread managing the resource
As I wrote in the comments, you discovered by yourself that the lock statement can do the job.
If you are interested in a "container" that can make the job of managing a queue of work items simpler, look at the ThreadPool class.
I think that, in a well-designed architecture, with these two elements (the ThreadPool class and the lock statement) you can easily and successfully serialize access to resources.
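A hedged sketch of how those two elements can combine into a FIFO, one-at-a-time executor: a single drain pass is scheduled on the ThreadPool only when the queue goes from empty to non-empty, so work items never run in parallel.
private static readonly object _gate = new object();
private static readonly Queue<Action> _workItems = new Queue<Action>();
private static bool _draining;
public static void Enqueue(Action work)
{
    lock (_gate)
    {
        _workItems.Enqueue(work);
        if (_draining) return; // a drain pass is already running or scheduled
        _draining = true;
    }
    ThreadPool.QueueUserWorkItem(_ =>
    {
        while (true)
        {
            Action next;
            lock (_gate)
            {
                if (_workItems.Count == 0) { _draining = false; return; }
                next = _workItems.Dequeue();
            }
            next(); // items run strictly one at a time, in FIFO order
        }
    });
}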