Monitor.TryEnter seems not to work as expected [duplicate] - c#

I am reading about locks in C#. I see that acquiring a lock on the same object multiple times is said to be possible because Monitors are re-entrant, but the definition of re-entrant code on Wikipedia does not seem to fit this context. Can you please help me understand what re-entrancy means in the context of C#, and how it applies to Monitors? From what I understand, once a thread has acquired a lock it does not relinquish it while in the middle of a critical section, even if it yields the CPU; as a result, no other thread can acquire the monitor. So where does re-entrancy come into the picture?

@Zbynek Vyskovsky - kvr000 has already explained what reentrancy means with regard to Monitor.
Wikipedia defines "reentrant mutex" (recursive lock) as:
particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process/thread, without causing a deadlock.
Here's a little example to help you visualise the concept:
void MonitorReentrancy()
{
    var obj = new object();
    lock (obj)
    {
        // Lock obtained. Must exit once to release.
        // No *other* thread can obtain a lock on obj
        // until this (outermost) "lock" scope completes.
        lock (obj) // Deadlock?
        {
            // Nope, we've just *re-entered* the lock.
            // Must exit twice to release.
            bool lockTaken = false;
            try
            {
                Monitor.Enter(obj, ref lockTaken); // Deadlock?
                // Nope, we've *re-entered* the lock again.
                // Must exit three times to release.
            }
            finally
            {
                if (lockTaken)
                {
                    Monitor.Exit(obj);
                }
                // Must exit twice to release.
            }
        }
        // Must exit once to release.
    }
    // At this point we have finally truly released
    // the lock, allowing other threads to obtain it.
}

Reentrancy actually has many meanings.
In this context it means that the monitor can be entered repeatedly by the same thread, and it is only released once that thread has exited it the same number of times.
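To see that the reentrancy is per-thread (which is also why Monitor.TryEnter in the title question behaves as it does), here is a small self-contained sketch; the class and field names are illustrative, not from the original posts:

using System;
using System.Threading;

class ReentrancyDemo
{
    static readonly object _sync = new object();

    static void Main()
    {
        lock (_sync) // first acquisition by the main thread
        {
            Console.WriteLine(Monitor.TryEnter(_sync)); // True: same thread re-enters
            Monitor.Exit(_sync); // undo the extra acquisition

            // A *different* thread cannot enter until we have exited completely.
            var t = new Thread(() =>
                Console.WriteLine(Monitor.TryEnter(_sync, 0))); // False
            t.Start();
            t.Join();
        }
    }
}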

Related

Concurrency with non-thread safe resources

I have a multi-threaded application that implements async methods. The application uses resources that are not thread safe and must only be used on a single thread. The worker thread is guarded like this:
private void EnsureWorkerIsRunning()
{
    // First try without the lock.
    if (_processingRequests)
    {
        return;
    }
    lock (_processLock)
    {
        // Check again, now under the lock.
        if (_processingRequests)
        {
            return;
        }
        _processingRequests = true;
        DoWork();
        _processingRequests = false;
    }
}
That is:
Check whether (the bool) _processingRequests is true without taking any lock. If requests are already being processed, return and trust that the worker is running.
If _processingRequests is false, continue to the lock statement, which allows only one thread to enter at a time. The first thread to enter the block sets _processingRequests to true and starts the worker. Any subsequent thread that enters the lock block will bail out, since _processingRequests is now true.
Taking the lock unconditionally introduces a performance hit that is not acceptable.
I'm looking for a more elegant way to achieve the same thing without affecting the performance. Any ideas?
The technique you are using is called double-checked locking, and it is a perfectly fine approach where suitable. It is widely used, and it has nothing to do with elegance: its main purpose is to avoid the performance cost of entering the lock every time, by first checking the condition without the lock.
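For reference, here is the canonical shape of double-checked locking for lazy initialization - a hedged sketch in which Widget and the field names are illustrative. Note the volatile modifier, which the pattern needs in C# so that the unlocked first read observes a fully published instance:

public sealed class Widget { }

public static class WidgetCache
{
    private static volatile Widget _instance; // volatile is essential here
    private static readonly object _sync = new object();

    public static Widget Instance
    {
        get
        {
            if (_instance == null) // first check, without the lock
            {
                lock (_sync)
                {
                    if (_instance == null) // second check, under the lock
                    {
                        _instance = new Widget();
                    }
                }
            }
            return _instance;
        }
    }
}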
However, in your particular case it is more suitable to simply use Monitor.TryEnter, which returns false if some thread has already acquired the lock.
There is also a blog post about the impact of processor context switches, which double-checked locking avoids where they are unnecessary.
A bool _processingRequests combined with an additional lock(_processLock) makes no sense.
Use proper synchronization, e.g. Monitor:
object _processLock = new object();

// Acquire the lock; if it is already held, exit immediately (TryEnter returns false).
if (Monitor.TryEnter(_processLock))
{
    try
    {
        ...
    }
    finally
    {
        Monitor.Exit(_processLock);
    }
}
This will either do the job or, if _processLock is already occupied, do nothing (which seems to be the behavior you want); there is no need to check a flag.
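Putting that together with the method from the question, a hedged sketch of how EnsureWorkerIsRunning could look (DoWork and _processLock are the question's members; the rest just applies the pattern above):

private readonly object _processLock = new object();

private void EnsureWorkerIsRunning()
{
    // If another thread already holds the lock, a worker is running; bail out.
    if (!Monitor.TryEnter(_processLock))
    {
        return;
    }
    try
    {
        DoWork();
    }
    finally
    {
        Monitor.Exit(_processLock);
    }
}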

Monitor.Wait - while or if?

Currently I'm studying for a multithreading exam, and I've been reading Joseph Albahari's threading article. I've got a question about the Monitor usage - why is a loop used here instead of an if?
lock (_locker)
{
    while (!_go) // why while and not if?
        Monitor.Wait(_locker); // _locker is released
    // lock is regained
    ...
}
I think an if would be sufficient.
I'm afraid I don't understand the article completely.
Edit: example code:
class SimpleWaitPulse
{
    static readonly object _locker = new object();
    static bool _go;

    static void Main()
    {
        new Thread(Work).Start(); // The new thread will block because _go == false.
        Console.ReadLine();       // Wait for user to hit Enter.
        lock (_locker)            // Let's now wake up the thread by
        {                         // setting _go = true and pulsing.
            _go = true;
            Monitor.Pulse(_locker);
        }
    }

    static void Work()
    {
        lock (_locker)
            while (!_go)
                Monitor.Wait(_locker); // Lock is released while we're waiting.
        Console.WriteLine("Woken!!!");
    }
}
It just depends on the situation. In this case the code is just waiting for _go to be true.
Every time _locker is pulsed it will check to see if _go has been set to true. If _go is still false, it will wait for the next pulse.
If an if were used instead of a while, the thread would wait only once (or not at all if _go were already true), and would then continue after a pulse regardless of the new state of _go.
So how you use Monitor.Wait() depends entirely on your specific needs.
It really just depends on the situation. But first, we need to clarify how monitors work. When a thread signals another thread through Monitor.Pulse(), there is usually no guarantee that the signaled thread will actually run next. This means it is possible for other threads to run before the signaled thread and change the condition under which it was okay for the signaled thread to proceed. So the signaled thread still needs to check whether it is safe to proceed after being woken up (i.e. the while loop). However, certain rare synchronization problems allow you to assume that once a thread has been signaled to wake up (via Monitor.Pulse()), no other thread can change the condition under which it is safe to proceed (i.e. the if condition).
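To make that concrete, here is a hedged sketch (the queue and all names are illustrative, not from the article) of one producer and two consumers, where PulseAll wakes both waiters but only one of them finds an item; with an if instead of a while, the losing consumer would call Dequeue on an empty queue:

using System;
using System.Collections.Generic;
using System.Threading;

class WhileNotIfDemo
{
    static readonly object _locker = new object();
    static readonly Queue<int> _queue = new Queue<int>();

    static void Consume()
    {
        lock (_locker)
        {
            // 'while' re-checks the condition after every wake-up. With 'if',
            // a consumer woken by PulseAll could proceed even though the other
            // consumer has already emptied the queue, and Dequeue would throw.
            while (_queue.Count == 0)
                Monitor.Wait(_locker);
            Console.WriteLine("Got " + _queue.Dequeue());
        }
    }

    static void Main()
    {
        new Thread(Consume).Start();
        new Thread(Consume).Start();
        Thread.Sleep(100); // crude: give both consumers time to start waiting

        lock (_locker)
        {
            _queue.Enqueue(1);         // only ONE item...
            Monitor.PulseAll(_locker); // ...but BOTH waiters are woken;
        }                              // the loser re-checks and waits again

        Thread.Sleep(100);
        lock (_locker)
        {
            _queue.Enqueue(2);         // a second item releases the other consumer
            Monitor.PulseAll(_locker);
        }
    }
}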
I wrote an article that might help here: Wait and Pulse demystified
There's more going on than is immediately obvious.
I've got a question about the Monitor usage - why is a loop used here instead of an if?
There is a well-known rule when working with Pulse and Wait: when in doubt, prefer while over if. Clearly either one will work in this case, but in almost every other situation while is required. In fact, there are very few (if any) scenarios where using a while loop would produce an incorrect result. That is the basis for this general rule. The author used a while loop because he was sticking with the tried-and-true pattern. He even provides the template in the same article. Here it is:
lock (_locker)
    while ( <blocking-condition> )
        Monitor.Wait(_locker);
The simplest way to write correct code with Monitor.Wait is to assume the system will regard it as "advisory", and assume that the system may arbitrarily wake any waiting thread any time it can acquire the lock, without regard for whether Pulse has been called. The system usually won't do so, of course, but if a program is using Wait and Pulse properly, its correctness should not be affected by having Wait calls arbitrarily exit early for no reason. Essentially, one should regard Wait as a means of telling the system "Continuing execution past here will be a waste of time unless or until someone else calls Pulse".

Is there any way to determine the number of threads waiting to lock in C#?

I'm using simple locking in C# using the lock statement. Is there any way to determine how many other threads are waiting to get a lock on the object? I basically want to limit the number of threads that are waiting for a lock to 5. My code would throw an exception if a sixth thread needs to get a lock.
This can be easily accomplished via the Semaphore class. It will do the counting for you. Notice in the code below that I use a semaphore to do a non-blocking check of the number of threads waiting for the resource and then I use a plain old lock to actually serialize access to that resource. An exception is thrown if there are more than 5 threads waiting for the resource.
public class YourResourceExecutor
{
    private Semaphore m_Semaphore = new Semaphore(5, 5);

    public void Execute()
    {
        bool acquired = false;
        try
        {
            // Non-blocking wait: returns false immediately if all 5 slots are taken.
            acquired = m_Semaphore.WaitOne(0);
            if (!acquired)
            {
                throw new InvalidOperationException();
            }
            lock (m_Semaphore)
            {
                // Use the resource here.
            }
        }
        finally
        {
            if (acquired) m_Semaphore.Release();
        }
    }
}
There is one notable variation of this pattern. You could change the name of the method to TryExecute and have it return a bool instead of throwing an exception. It is completely up to you.
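A hedged sketch of that variation, reusing the class above:

public bool TryExecute()
{
    // Returns false instead of throwing when all 5 slots are taken.
    if (!m_Semaphore.WaitOne(0))
    {
        return false;
    }
    try
    {
        lock (m_Semaphore)
        {
            // Use the resource here.
        }
        return true;
    }
    finally
    {
        m_Semaphore.Release();
    }
}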
Remember that the object used in the lock expression is not the subject of the lock; it merely serves as an identifier for a synchronized block of code. Any code blocks that acquire locks using the same object are effectively serialized. It is the code block that is being "locked", not the object named in the lock expression.
The lock statement is a shortcut for Monitor.Enter and Monitor.Exit. I do not think you have any way to get the number of waiting threads.
You can use a simple shared counter (an integer) that is incremented before the lock statement. If the value equals 5, have your thread avoid the lock statement. The challenge, however, is that you will need to make the increment atomic, either by locking around the counter or by using Interlocked, as sketched below.
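A hedged sketch of that idea, using Interlocked for the atomic counting (the names and the exception are illustrative, mirroring the question's requirement of failing when a sixth thread arrives):

private readonly object _sync = new object();
private int _count; // threads currently holding or waiting for the lock

public void Execute()
{
    // Atomically reserve a slot; give it back and throw if more than 5 are in flight.
    if (Interlocked.Increment(ref _count) > 5)
    {
        Interlocked.Decrement(ref _count);
        throw new InvalidOperationException("Too many threads waiting for the lock.");
    }
    try
    {
        lock (_sync)
        {
            // Use the shared resource here.
        }
    }
    finally
    {
        Interlocked.Decrement(ref _count);
    }
}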
No, lock() uses the Monitor class, and that has no member for finding out the number of queued threads.
You can specify a time-out.
And frankly, throwing an Exception when a queue fills up sounds like a bad idea.

How to use Multiple Variables for a lock Scope in C#

I have a situation where a block of code should be executed only if two locker objects are free.
I was hoping there would be something like:
lock (a, b)
{
    // this scope is in critical region
}
However, there seems to be nothing like that. So does that mean the only way of doing this is:
lock (a)
{
    lock (b)
    {
        // this scope is in critical region
    }
}
Will this even work as expected? The code compiles, but I am not sure whether it achieves what I am expecting.
lock(a) lock(b) { // this scope is in critical region }
This code would block until the thread acquires the lock on a. Then, with that lock held, it would block until the thread acquires the lock on b. So this works as expected.
However, you have to be careful not to do this somewhere else:
lock(b) lock(a) { // this scope is in critical region }
This could lead to a deadlock situation in which thread 1 has acquired the lock for a and is waiting to acquire the lock for b, and thread 2 has acquired the lock for b and is waiting to acquire the lock for a.
Requesting the lock on both should work fine. lock(a) will block until a is free. Once you have that lock, lock(b) will block until you have b. After that you have both.
One thing you need to be very careful about here is the order. If you're going to do this make sure you always get the lock on a before getting the lock on b. Otherwise you could very easily find yourself in a deadlock situation.
I'd expect it to, though there is a case where it could potentially cause a deadlock.
Normally, the code will attempt to lock a and then proceed to lock b if that succeeded. This means it would only execute the code when it can lock both a and b, which is what you want.
However, if some other code has already got a lock on b, this code will not do what you expect. You also need to ensure that everywhere you lock on both a and b, you acquire the locks in the same order. If you get b first and then a, you could cause a deadlock.
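If the two objects are not always known by the same names at every call site, one common remedy - a hedged sketch, not from the answers above - is a helper that always acquires the monitors in a globally consistent order, for example by comparing identity hash codes:

using System;
using System.Runtime.CompilerServices;

static class DualLock
{
    // Acquires both monitors in a consistent global order to avoid deadlock,
    // runs the action, then releases them in reverse order.
    public static void Lock(object a, object b, Action action)
    {
        // Order by identity hash so every caller locks in the same order.
        // (On the rare hash collision between distinct objects, a third
        // tie-breaker lock would be needed; omitted here for brevity.)
        if (RuntimeHelpers.GetHashCode(a) > RuntimeHelpers.GetHashCode(b))
        {
            var tmp = a; a = b; b = tmp;
        }
        lock (a)
        lock (b)
        {
            action();
        }
    }
}

Usage would then be DualLock.Lock(a, b, () => { /* critical region */ }); at every call site, regardless of argument order.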

Does Monitor.Wait ensure that fields are re-read?

It is generally accepted (I believe!) that a lock will force any values from fields to be reloaded (essentially acting as a memory-barrier or fence - my terminology in this area gets a bit loose, I'm afraid), with the consequence that fields that are only ever accessed inside a lock do not themselves need to be volatile.
(If I'm wrong already, just say!)
A good comment was raised here, questioning whether the same is true if code does a Wait() - i.e. once it has been Pulse()d, will it reload fields from memory, or could they be in a register (etc).
Or more simply: does the field need to be volatile to ensure that the current value is obtained when resuming after a Wait()?
Looking at Reflector, Wait calls down into ObjWait, which is a managed internal call (the same as Enter).
The scenario in question was:
bool closing;

public bool TryDequeue(out T value)
{
    lock (queue) // arbitrary lock-object (a private readonly ref-type)
    {
        while (queue.Count == 0)
        {
            if (closing) // <==== (2) access field here
            {
                value = default(T);
                return false;
            }
            Monitor.Wait(queue); // <==== (1) waits here
        }
        // ...blah, do something with the head of the queue
    }
}
Obviously I could just make it volatile, or I could move this out so that I exit and re-enter the Monitor every time it gets pulsed, but I'm intrigued to know if either is necessary.
Since the Wait() method is releasing and reacquiring the Monitor lock, if lock performs the memory fence semantics, then Monitor.Wait() will as well.
To hopefully address your comment:
The locking behavior of Monitor.Wait() is in the docs (http://msdn.microsoft.com/en-us/library/aa332339.aspx), emphasis added:
When a thread calls Wait, it releases the lock on the object and enters the object's waiting queue. The next thread in the object's ready queue (if there is one) acquires the lock and has exclusive use of the object. All threads that call Wait remain in the waiting queue until they receive a signal from Pulse or PulseAll, sent by the owner of the lock. If Pulse is sent, only the thread at the head of the waiting queue is affected. If PulseAll is sent, all threads that are waiting for the object are affected. When the signal is received, one or more threads leave the waiting queue and enter the ready queue. A thread in the ready queue is permitted to reacquire the lock.
This method returns when the calling thread reacquires the lock on the object.
If you're asking about a reference for whether a lock/acquired Monitor implies a memory barrier, the ECMA CLI spec says the following:
12.6.5 Locks and Threads:
Acquiring a lock (System.Threading.Monitor.Enter or entering a synchronized method) shall implicitly perform a volatile read operation, and releasing a lock (System.Threading.Monitor.Exit or leaving a synchronized method) shall implicitly perform a volatile write operation. See §12.6.7.
12.6.7 Volatile Reads and Writes:
A volatile read has "acquire semantics" meaning that the read is guaranteed to occur prior to any references to memory that occur after the read instruction in the CIL instruction sequence. A volatile write has "release semantics" meaning that the write is guaranteed to happen after any memory references prior to the write instruction in the CIL instruction sequence.
Also, these blog entries have some details that might be of interest:
http://blogs.msdn.com/jaredpar/archive/2008/01/17/clr-memory-model.aspx
http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/
http://www.bluebytesoftware.com/blog/2007/11/10/CLR20MemoryModel.aspx
Further to Michael Burr's answer, not only does Wait release and re-acquire the lock, but it does so precisely so that another thread can take out the lock in order to examine the shared state and call Pulse. If the second thread doesn't take out the lock, then Pulse will throw. If it doesn't Pulse, the first thread's Wait won't return. Hence any other thread's access to the shared state must happen within a properly memory-barriered scenario.
So assuming the Monitor methods are being used according to the locally-checkable rules, then all memory accesses happen inside a lock, and hence only the automatic memory barrier support of lock is relevant/necessary.
Maybe I can help you this time... instead of using volatile you can use Interlocked.Exchange with an integer.
// 'closing' must now be declared as an int field rather than a bool.
if (closing == 1) // <==== (2) access field here
{
    value = default(T);
    return false;
}

// Somewhere else in your code:
Interlocked.Exchange(ref closing, 1);
Interlocked.Exchange is a synchronization mechanism, volatile isn't... I hope that's worth something (but you probably already thought about this).
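A hedged sketch of how the int flag could be wired into the question's TryDequeue (queue is the question's lock object; the field and method names here are illustrative; a compare-exchange with identical values is one conventional way to get a synchronized read):

private int _closing; // 0 = open, 1 = closing (an int, so Interlocked can be used)

public void Close()
{
    lock (queue)
    {
        Interlocked.Exchange(ref _closing, 1); // synchronized write
        Monitor.PulseAll(queue);               // wake all waiting dequeuers
    }
}

private bool IsClosing()
{
    // A compare-exchange that never changes the value performs a
    // synchronized read of the field.
    return Interlocked.CompareExchange(ref _closing, 0, 0) == 1;
}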
