If there are multiple threads all waiting on the same lock, is it possible to give the main thread higher priority in acquiring the lock? Meaning that if worker threads reach the lock statement before the main thread, the main thread would still acquire the lock ahead of the threads that were already waiting on it.
No, the lock statement maps to System.Threading.Monitor.Enter() (MSDN) and there is no overload that accepts a priority parameter.
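For reference, a lock statement compiles down to roughly the following Monitor pattern (the C# 4+ shape), and there is simply nowhere to pass a priority:
object gate = new object();
bool lockTaken = false;
try
{
    Monitor.Enter(gate, ref lockTaken); // no overload accepts a priority
    // ... protected region ...
}
finally
{
    if (lockTaken) Monitor.Exit(gate);
}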
The closest thing I can think of is a ReaderWriterLock(Slim) but I would seriously reconsider the design that leads to this request. There probably are better ways to achieve what you need.
Through a native lock statement, no. Through your own custom locking mechanism, sure, if you're willing to spend the time and effort to develop it.
Here's my draft of a solution. It may or may not work, and it may not be super efficient, but it's at least a starting place:
public class PriorityLock
{
    private bool locked = false;
    private readonly object key = new object();
    // Waiters grouped by priority. SortedDictionary iterates its keys in
    // ascending order, so lower priority numbers are released first.
    private readonly SortedDictionary<int, Queue<ManualResetEvent>> notifiers =
        new SortedDictionary<int, Queue<ManualResetEvent>>();
    public void Lock()
    {
        ManualResetEvent notifier;
        lock (key)
        {
            if (!locked)
            {
                locked = true;
                return;
            }
            notifier = new ManualResetEvent(false);
            int priority = GetPriorityForThread();
            Queue<ManualResetEvent> queue;
            if (!notifiers.TryGetValue(priority, out queue))
            {
                queue = new Queue<ManualResetEvent>();
                notifiers[priority] = queue;
            }
            queue.Enqueue(notifier);
        }
        // Wait outside the lock on key; waiting inside it would deadlock,
        // because Release could never acquire key to wake us up.
        notifier.WaitOne();
    }
    private static int GetPriorityForThread()
    {
        // Stub: map the current thread to a priority here,
        // e.g. return a lower number for the main thread.
        return 0;
    }
    public void Release()
    {
        lock (key)
        {
            // Hand the lock to the highest-priority waiter, if any;
            // locked stays true because ownership transfers directly.
            foreach (var queue in notifiers.Values)
            {
                if (queue.Count > 0)
                {
                    queue.Dequeue().Set();
                    return;
                }
            }
            locked = false;
        }
    }
}
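A minimal usage sketch, assuming the PriorityLock class above and a GetPriorityForThread implementation that returns a lower number for the main thread:
var gate = new PriorityLock();
gate.Lock();
try
{
    // Critical section. On Release, the main thread is woken ahead of
    // workers whenever its priority number is lower.
}
finally
{
    gate.Release();
}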
Here is another solution. It has a lot of lines, but it is pretty simple. The function DoSomethingSingle will be called by only one thread at a time, and callers with the highPriority flag will get preference.
static int numWaiting = 0;
static object single = new object();
ResultType DoSomething(string[] argList, bool highPriority = false)
{
try
{
if (highPriority)
{
Interlocked.Increment(ref numWaiting);
}
for (;;)
{
lock (single)
{
if (highPriority || numWaiting == 0)
{
return DoSomethingSingle(argList);
}
}
// Sleep gives other threads a chance to enter the lock
Thread.Sleep(0);
}
}
finally
{
if (highPriority)
{
Interlocked.Decrement(ref numWaiting);
}
}
}
This allows two priority levels. It guarantees that a low-priority thread will gain access to the resource only when there are no high-priority threads waiting for it.
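Hypothetical call sites, assuming the DoSomething method above (argList and ResultType come from the surrounding code):
// Worker thread: low priority, spins politely while high-priority callers wait.
ResultType normal = DoSomething(argList);
// Main thread: gets preference over any low-priority callers.
ResultType urgent = DoSomething(argList, highPriority: true);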
Edit: changed to Interlocked increment/decrement.
Given a scenario where there's a function that should only be executed by one thread at any given time, and the rest just return (since a specific state is already being worked on), what's the best way to accomplish this?
private int m_isRunning; // 0: not running, 1: running
public void RunOnce()
{
if(Interlocked.Exchange(ref m_isRunning, 1) == 1)
return;
// Run code that should only be executed once
// What mechanism do we use here to ensure thread safety?
Volatile.Write(ref m_isRunning, 0);
}
Would the same mechanism apply if m_isRunning is a state (ie. an integer representing an enum)?
The code in your question is thread-safe IMHO, but in general the Interlocked.CompareExchange method is more flexible than Interlocked.Exchange for implementing lock-free multithreading. Here is how I would prefer to code the RunOnce method:
int _lock; // 0: not acquired, 1: acquired
public void RunOnce()
{
bool lockTaken = Interlocked.CompareExchange(ref _lock, 1, 0) == 0;
if (!lockTaken) return;
try
{
// Run code that should be executed by one thread only.
}
finally
{
bool lockReleased = Interlocked.CompareExchange(ref _lock, 0, 1) == 1;
if (!lockReleased)
throw new InvalidOperationException("Could not release the lock.");
}
}
My suggestion though would be to use the Monitor class:
object _locker = new();
public void RunOnce()
{
bool lockTaken = Monitor.TryEnter(_locker);
if (!lockTaken) return;
try
{
// Run code that should be executed by one thread only.
}
finally { Monitor.Exit(_locker); }
}
...or the SemaphoreSlim class if you prefer to prevent reentrancy:
SemaphoreSlim _semaphore = new(1, 1);
public void RunOnce()
{
bool lockTaken = _semaphore.Wait(0);
if (!lockTaken) return;
try
{
// Run code that should be executed by one thread only.
}
finally { _semaphore.Release(); }
}
It makes the intentions of your code cleaner IMHO.
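As a quick smoke test (a sketch, not part of the original answer), you can hammer any of these variants from several threads and confirm that overlapping calls simply return:
// Ten concurrent calls; at most one of them enters the guarded region,
// the others return immediately.
Parallel.For(0, 10, _ => RunOnce());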
I have a critical section (using a lock scope).
I'd like only the latest incoming thread to "sleep" on it. Hence, once the critical section is locked, every incoming thread "terminates" all previous sleeping ones.
Is there a way to achieve this using C#?
Thank you
The mechanism that seemed most effective for accomplishing this was to use Tasks, so the solution below is actually asynchronous, rather than synchronous, as the code came out much simpler that way. If you need it to be synchronous, you can just synchronously wait on the Tasks.
public class SingleWaiterLock
{
private bool realLockTaken = false;
private TaskCompletionSource<bool> waiterTCS = null;
private object lockObject = new object();
public Task<bool> WaitAsync()
{
lock (lockObject)
{
if (!realLockTaken)
{
realLockTaken = true;
return Task.FromResult(true);
}
// RunContinuationsAsynchronously keeps awaiters' continuations from
// running synchronously inside this lock when SetResult is called.
if (waiterTCS == null)
{
    waiterTCS = new TaskCompletionSource<bool>(
        TaskCreationOptions.RunContinuationsAsynchronously);
    return waiterTCS.Task;
}
else
{
    waiterTCS.SetResult(false); // boot the previous waiter
    waiterTCS = new TaskCompletionSource<bool>(
        TaskCreationOptions.RunContinuationsAsynchronously);
    return waiterTCS.Task;
}
}
}
public void Release()
{
lock (lockObject)
{
if (waiterTCS != null)
{
waiterTCS.SetResult(true);
waiterTCS = null;
}
else
{
realLockTaken = false;
}
}
}
}
The boolean returned from the wait method indicates whether you actually acquired the lock (true) or were booted by someone who came later (false). The lock should only be released (but must always be released) when the wait returns true.
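A usage sketch under those rules (hypothetical caller code, not part of the original answer):
var gate = new SingleWaiterLock();
async Task DoLatestWorkAsync()
{
    if (!await gate.WaitAsync())
        return; // a later caller superseded this one; do not release
    try
    {
        // Critical section: one thread at a time gets here.
    }
    finally
    {
        gate.Release(); // released only on the true path, but always released
    }
}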
I have a question, and I could do with some code examples to help me, and I feel it may help to give some background.
I have the need to create an engine of 3 Queues (in C#, winforms). The 3 Queues merely contain an "action" object. Actions get thrown into the engine, and stick themselves to the "most available" Queue (basically, the Queue with the lowest count). Almost all of the time the Queues can run discretely and asynchronously with no harm. However there is one "Action" situation which may happen, and when that type of "Action" occurs and does bubble to the front of a Queue, it must:
wait for the other queues to stop their current actions
lock/pause them when they are finished on their current Action
run the Action alone until it finishes
release the lock on the other 2 queues.
With the added issue that any of the 3 queues can lock the other 2.
Does anyone have any experience of this?
I hope so, it seems a bit painful :-) Thanks in advance
This is a combination of the single queue approach recommended by Servy and the ReaderWriterLock suggestion by Casperah.
// Shared pieces from Servy's answer below, so this sample is self-contained:
var queue = new BlockingCollection<Action>();
int numWorkers = 3;
ReaderWriterLockSlim throttler = new ReaderWriterLockSlim();
for (int i = 0; i < numWorkers; i++)
{
    Task.Factory.StartNew(() =>
    {
        foreach (Action nextAction in queue.GetConsumingEnumerable())
        {
            if (mustBeExecutedSerially(nextAction))
            {
                // Writer: waits for all readers, then runs the action alone.
                throttler.EnterWriteLock();
                try
                {
                    nextAction();
                }
                finally
                {
                    throttler.ExitWriteLock();
                }
            }
            else
            {
                // Reader: runs concurrently with other readers.
                throttler.EnterReadLock();
                try
                {
                    nextAction();
                }
                finally
                {
                    throttler.ExitReadLock();
                }
            }
        }
    });
}
First off, I wouldn't suggest using three queues. I'd suggest using one queue and just having 3 different tasks reading from it. I'd also suggest using BlockingCollection<T> (which by default is just a wrapper around a ConcurrentQueue<T>), as it's easier to work with.
As for the rest, a ReaderWriterLockSlim (thanks, Casperah) should handle it easily enough. A writer requires an exclusive lock, and a reader only locks out writers, which is exactly your use case.
var queue = new BlockingCollection<Action>();
int numWorkers = 3;
ReaderWriterLockSlim throttler = new ReaderWriterLockSlim();
for (int i = 0; i < numWorkers; i++)
{
    Task.Factory.StartNew(() =>
    {
        foreach (Action nextAction in queue.GetConsumingEnumerable())
        {
            if (mustBeExecutedSerially(nextAction))
            {
                throttler.EnterWriteLock();
                try
                {
                    nextAction();
                }
                finally
                {
                    throttler.ExitWriteLock();
                }
            }
            else
            {
                throttler.EnterReadLock();
                try
                {
                    nextAction();
                }
                finally
                {
                    throttler.ExitReadLock();
                }
            }
        }
    });
}
It seems that a System.Threading.ReaderWriterLock will do the job for you.
A normal task should do this:
readerWriterLock.AcquireReaderLock(timeout);
try
{
RunNormalAction();
}
finally
{
readerWriterLock.ReleaseReaderLock();
}
And the advanced task should do this:
readerWriterLock.AcquireWriterLock(timeout);
try
{
RunSpecialAction();
}
finally
{
readerWriterLock.ReleaseWriterLock();
}
You can start as many reader locks as you want, and they will keep running concurrently as expected.
By the time a writer lock is acquired, all reader locks have been released, and only one writer lock runs at a time.
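In newer code you would typically reach for ReaderWriterLockSlim instead; the pattern is the same (a sketch, not from the original answer):
ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
// Normal task:
rwLock.EnterReadLock();
try { RunNormalAction(); }
finally { rwLock.ExitReadLock(); }
// Advanced task:
rwLock.EnterWriteLock();
try { RunSpecialAction(); }
finally { rwLock.ExitWriteLock(); }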
My humble suggestion:
Create three objects
object threadlock1 = new object();
object threadlock2 = new object();
object threadlock3 = new object();
Each thread acquires a lock on its own object before running any action.
lock (threadlock1) // On thread 1, for example
{
    // Run Action
}
When THE action comes, the thread with THE action must acquire locks on all three objects, thus waiting for the other threads to finish their work and preventing them from doing any more:
lock (threadlock1) // On thread 1, for example
{
lock (threadlock2)
{
lock (threadlock3)
{
//Run THE Action
}
}
}
When THE action is finished, you release all three locks, and all is back to normal, with each thread holding its own lock and resuming actions. One caveat: every thread that runs THE action must take the three locks in the same fixed order, or two such threads can deadlock; see the sketch below.
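A sketch of that ordering rule, assuming the same three lock objects: if thread 2 hits THE action, it still starts from threadlock1.
lock (threadlock1) // always first, even on threads 2 and 3
lock (threadlock2) // always second
lock (threadlock3) // always third
{
    // Run THE Action
}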
I'm trying to implement a concurrent producer-consumer collection (multiple producers and consumers) that supports timeouts for consumers.
Now the actual collection is pretty complicated (nothing in System.Collections.Concurrent that does the job unfortunately), but I have a minimal sample here that demonstrates my problem (looks a bit like BlockingCollection<T>).
public sealed class ProducerConsumerQueueDraft<T>
{
private readonly Queue<T> queue = new Queue<T>();
private readonly object locker = new object();
public void Enqueue(T item)
{
lock (locker)
{
queue.Enqueue(item);
/* This "optimization" is broken, as Nicholas Butler points out.
if(queue.Count == 1) // Optimization
*/
Monitor.Pulse(locker); // Notify any waiting consumer threads.
}
}
public T Dequeue()
{
lock (locker)
{
// Surprisingly, this needs to be a *while* and not an *if*
// which is the core of my problem.
while (queue.Count == 0)
Monitor.Wait(locker);
return queue.Dequeue();
}
}
// This isn't thread-safe, but is how I want TryDequeue to look.
public bool TryDequeueDesired(out T item, TimeSpan timeout)
{
lock (locker)
{
if (queue.Count == 0 && !Monitor.Wait(locker, timeout))
{
item = default(T);
return false;
}
// This is wrong! The queue may be empty even though we were pulsed!
item = queue.Dequeue();
return true;
}
}
// Has nasty timing-gymnastics I want to avoid.
public bool TryDequeueThatWorks(out T item, TimeSpan timeout)
{
lock (locker)
{
var watch = Stopwatch.StartNew();
while (queue.Count == 0)
{
var remaining = timeout - watch.Elapsed;
if (!Monitor.Wait(locker, remaining < TimeSpan.Zero ? TimeSpan.Zero : remaining))
{
item = default(T);
return false;
}
}
item = queue.Dequeue();
return true;
}
}
}
The idea is straightforward: consumers who find an empty queue wait to be signaled, and producers Pulse (note: not PulseAll, which would be inefficient) them to notify them of a waiting item.
My problem is this property of Monitor.Pulse:
When the thread that invoked Pulse releases the lock, the next
thread in the ready queue (which is not necessarily the thread that
was pulsed) acquires the lock.
What this means is that consumer-thread C1 could be woken up by a producer-thread to consume an item, but another consumer-thread C2 could acquire the lock before C1 has a chance to reacquire it, and consume the item, leaving C1 with an empty queue when it is given control.
This means I have to check defensively in the consumer code, on every pulse, whether the queue is indeed non-empty, and go back and wait empty-handed if this is not the case.
My primary issue with this is that it is inefficient - threads may be woken up to do work and then promptly sent back to wait again. A related consequence is that implementing a TryDequeue with a timeout is unnecessarily difficult and inefficient (see TryDequeueThatWorks) when it should be elegant (see TryDequeueDesired).
How can I twist Monitor.Pulse to do what I want? Alternatively, is there another synchronization primitive that does? Is there a more efficient and/or elegant way to implement a TryDequeue timeout than what I have done?
FYI, here's a test that demonstrates the issues with my desired solution:
var queue = new ProducerConsumerQueueDraft<int>();
for (int consumer = 0; consumer < 3; consumer++)
new Thread(() =>
{
while (true)
{
int item;
// This call should occasionally throw an exception.
// Switching to queue.TryDequeueThatWorks should make
// the problem go away.
if (queue.TryDequeueDesired(out item, TimeSpan.FromSeconds(1)))
{
// Do nothing.
}
}
}).Start();
Thread.Sleep(1000); // Let consumers get up and running
for (int itemIndex = 0; itemIndex < 50000000; itemIndex++)
{
queue.Enqueue(0);
}
My primary issue with this is that it is inefficient
It is not. You assume that this is a common occurrence, but this kind of race happens very rarely - once in a blue moon, at best. The while loop is necessary to ensure nothing goes wrong when it does occur. And it will. Don't mess with it.
It is in fact the opposite: the lock design is efficient because it does allow a race to occur, and deals with it. Tinkering with locking designs is so very dangerous because the races don't happen frequently enough. They are horribly random, which prevents sufficient testing to prove that the alterations don't cause failure. Adding any instrumenting code doesn't work either; it alters the timing.
I wrote an article about this that may help:
Thread synchronization: Wait and Pulse demystified
In particular, it explains why a while loop is necessary.
Here's a simple key-based conflating producer-consumer queue:
public class ConflatingConcurrentQueue<TKey, TValue>
{
private readonly ConcurrentDictionary<TKey, Entry> entries;
private readonly BlockingCollection<Entry> queue;
public ConflatingConcurrentQueue()
{
this.entries = new ConcurrentDictionary<TKey, Entry>();
this.queue = new BlockingCollection<Entry>();
}
public void Enqueue(TValue value, Func<TValue, TKey> keySelector)
{
// Get the entry for the key. Create a new one if necessary.
Entry entry = entries.GetOrAdd(keySelector(value), k => new Entry());
// Get exclusive access to the entry.
lock (entry)
{
// Replace any old value with the new one.
entry.Value = value;
// Add the entry to the queue if it's not enqueued yet.
if (!entry.Enqueued)
{
entry.Enqueued = true;
queue.Add(entry);
}
}
}
public bool TryDequeue(out TValue value, TimeSpan timeout)
{
Entry entry;
// Try to dequeue an entry (with timeout).
if (!queue.TryTake(out entry, timeout))
{
value = default(TValue);
return false;
}
// Get exclusive access to the entry.
lock (entry)
{
// Return the value.
value = entry.Value;
// Mark the entry as dequeued.
entry.Enqueued = false;
entry.Value = default(TValue);
}
return true;
}
private class Entry
{
public TValue Value { get; set; }
public bool Enqueued { get; set; }
}
}
(This may need a code review or two, but I think in general it's sane.)
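A usage sketch (the Quote type is a hypothetical stand-in, not part of the original answer): repeated updates for the same key collapse into one entry, and consumers see only the latest value.
public class Quote
{
    public string Symbol { get; set; }
    public decimal Price { get; set; }
}

var quotes = new ConflatingConcurrentQueue<string, Quote>();
// Two quotes for the same symbol before any consumer runs: they conflate.
quotes.Enqueue(new Quote { Symbol = "MSFT", Price = 41.50m }, q => q.Symbol);
quotes.Enqueue(new Quote { Symbol = "MSFT", Price = 41.75m }, q => q.Symbol);
Quote latest;
if (quotes.TryDequeue(out latest, TimeSpan.FromSeconds(1)))
    Console.WriteLine(latest.Price); // 41.75: only the latest value survived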
I'm trying to use the producer-consumer pattern to process and save some data. I'm using an AutoResetEvent for signalling between the two threads. Here is the code I have.
Here is the producer function
public Results[] Evaluate()
{
processingComplete = false;
resultQueue.Clear();
for (int i = 0; i < data.Length; ++i)
{
if (saveThread.ThreadState == ThreadState.Unstarted)
saveThread.Start();
//-....
//Process data
//
lock (lockobject)
{
resultQueue.Enqueue(result);
}
signal.Set();
}
processingComplete = true;
}
And here is the consumer function
private void SaveResults()
{
Model dataAccess = new Model();
while (!processingComplete || resultQueue.Count > 0)
{
if (resultQueue.Count == 0)
signal.WaitOne();
ModelResults result;
lock (lockobject)
{
result = resultQueue.Dequeue();
}
dataAccess.Save(result);
}
SaveCompleteSignal.Set();
}
So my issue is that sometimes resultQueue.Dequeue() throws an InvalidOperationException because the queue is empty. I'm not sure what I'm doing wrong; shouldn't the signal.WaitOne() above it block while the queue is empty?
You have synchronization issues due to a lack of proper locking.
You should lock all of the queue access, including the count check.
In addition, using Thread.ThreadState in this manner is a "bad idea". From the MSDN docs for ThreadState:
"Thread state is only of interest in debugging scenarios. Your code should never use thread state to synchronize the activities of threads."
You can't rely on this as a means of handling synchronization. You should redesign to make sure the thread will be started before it's used. If it's not started, just don't initialize it. (You can always use a null check - if the thread's null, create it and start it).
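A sketch of that null-check approach (this assumes Evaluate itself always runs on a single thread; otherwise the check needs its own lock):
if (saveThread == null)
{
    saveThread = new Thread(SaveResults);
    saveThread.Start();
}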
You check the Queue's Count outside of a synchronized context. Since the Queue is not thread-safe, this could be a problem (possibly, while an Enqueue is in progress, Count returns 1 but no item can be dequeued yet), and it would go seriously wrong if you were to use more than one consumer anyway.
You may want to read the threading articles written by Joseph Albahari; he also has a good sample for your problem, as well as a "better" solution without OS synchronization objects.
You have to put lock() around all references to the queue. You also have some issues around detecting when processing is complete (at the end you'll get a signal, but the queue will be empty).
public Results[] Evaluate()
{
processingComplete = false;
lock(lockobject)
{
resultQueue.Clear();
}
for (int i = 0; i < data.Length; ++i)
{
if (saveThread.ThreadState == ThreadState.Unstarted)
saveThread.Start();
//-....
//Process data
//
lock (lockobject)
{
resultQueue.Enqueue(result);
}
signal.Set();
}
processingComplete = true;
signal.Set(); // wake the consumer one final time so it can observe the empty queue and exit
}
private void SaveResults()
{
Model dataAccess = new Model();
while (true)
{
int count;
lock(lockobject)
{
count = resultQueue.Count;
}
if (count == 0)
signal.WaitOne();
lock(lockobject)
{
count = resultQueue.Count;
}
// We got a signal but the queue is empty: either processing is complete,
// or the signal was stale (set while the queue still held an item that we
// have already consumed). Only exit in the first case.
if (count == 0)
{
    if (processingComplete)
        break;
    continue;
}
ModelResults result;
lock (lockobject)
{
result = resultQueue.Dequeue();
}
dataAccess.Save(result);
}
SaveCompleteSignal.Set();
}