Why do nested locks not cause a deadlock? - C#

Why does this code not cause a deadlock?
private static readonly object a = new object();
...
lock (a)
{
    lock (a)
    {
        ....
    }
}

If a thread already holds a lock, then it can "take that lock" again without issue.
As to why that is (and why it's a good idea), consider the following situation, where we have a defined lock ordering elsewhere in the program of a -> b:
void f()
{
    lock (a)
    { /* do stuff inside a */ }
}

void doStuff()
{
    lock (b)
    {
        // do stuff inside b, that involves leaving b in an inconsistent state
        f();
        // do more stuff inside b so that it's consistent again
    }
}
Whoops, we just violated our lock ordering and have a potential deadlock on our hands.
We really need to be able to do the following:
void doStuff()
{
    lock (a)
    lock (b)
    {
        // do stuff inside b, that involves leaving b in an inconsistent state
        f();
        // do more stuff inside b so that it's consistent again
    }
}
So that our lock ordering is maintained, without self-deadlocking when we call f().

The lock keyword uses a re-entrant lock, meaning that a thread which already holds the lock can acquire it again without blocking.
A deadlock occurs if:
1. Thread 1 acquires lock A
2. Thread 2 acquires lock B
3. Thread 1 tries to acquire lock B (waits for Thread 2 to be done with it)
4. Thread 2 tries to acquire lock A (waits for Thread 1 to be done with it)
Both threads are now waiting on each other and thus deadlocked.

From section 8.12 of the C# language specification:
While a mutual-exclusion lock is held, code executing in the same execution thread can also obtain and release the lock. However, code executing in other threads is blocked from obtaining the lock until the lock is released.
It should be obvious that the inner lock is taken on the same thread as the outer one.
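To see the quoted behavior directly, here is a minimal sketch (the class name and the 100 ms timeout are mine): the same thread re-enters the lock without blocking, while a second thread fails to acquire it.

```csharp
using System;
using System.Threading;

class ReentrancyDemo
{
    private static readonly object gate = new object();

    static void Main()
    {
        lock (gate)
        {
            // Same thread: re-entering the held lock does not block.
            lock (gate)
            {
                Console.WriteLine("re-entered OK");
            }

            // Another thread: TryEnter with a timeout fails while we hold the lock.
            bool acquired = false;
            var t = new Thread(() => acquired = Monitor.TryEnter(gate, 100));
            t.Start();
            t.Join();
            Console.WriteLine("other thread acquired: " + acquired); // prints False
        }
    }
}
```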

Related

Conditional thread lock in c#

Is it possible to have a conditional thread lock when the underlying condition is not constant?
I have two functions A and B, and a condition to decide which function to execute.
A is thread-safe by itself: multiple calls to A can execute simultaneously. B is not, and is synchronized. But during execution of B the condition can change (from false to true), and therefore all threads executing A at that time will throw errors.
if (condition)
{
    A();
}
else
{
    B();
}
A - thread safe
B - Synchronized using [MethodImpl(MethodImplOptions.Synchronized)]
Therefore, I am looking for a way to lock A but only when B is running.
Please suggest a way to achieve this.
Some elaborations:
I am creating a cache, and performance is very crucial, thus a blanket lock is not feasible.
Condition is whether or not the requested data is present in the cache.
A() = AddToUpdates() - Executed on a cache hit; just adds to the number of updates for a particular cache key, using a concurrent dictionary.
B() = ProcessUpdates() and EvictLeastPriorityEntry() - Executed on a cache miss; all previous updates will be processed and the underlying data structure storing the ordering of cache entries will be rearranged.
And then the entry with least priority will be removed.
As mentioned in the accepted answer ReaderWriterLock seems to be the way to go.
Just one problem though,
Let's say thread1 starts execution and a cache hit occurs (on the entry with the least priority), meaning the if condition is true, and it enters the if block. But before calling A(), control is switched to thread2.
thread2 - a cache miss occurs; reordering and eviction (of the entry which A() from thread1 needed access to) are performed.
Now when control is returned to thread1, an error will occur.
This is the solution I feel should work:
_lock.EnterReadLock();
if (condition)
{
    A();
}
_lock.ExitReadLock();

if (!condition)
{
    B();
}

void A()
{
    // ....
}

void B()
{
    _lock.EnterWriteLock();
    // ...
    _lock.ExitWriteLock();
}
Will this work?
Thank you.
A possible solution to your problem might be the ReaderWriterLockSlim class. This is a synchronization primitive that allows multiple concurrent readers, or one exclusive writer, but not both at the same time.
Use ReaderWriterLockSlim to protect a resource that is read by multiple threads and written to by one thread at a time. ReaderWriterLockSlim allows multiple threads to be in read mode, allows one thread to be in write mode with exclusive ownership of the lock, and allows one thread that has read access to be in upgradeable read mode, from which the thread can upgrade to write mode without having to relinquish its read access to the resource.
Example:
private readonly ReaderWriterLockSlim _lock = new();

void A()
{
    _lock.EnterReadLock();
    try
    {
        //...
    }
    finally { _lock.ExitReadLock(); }
}

void B()
{
    _lock.EnterWriteLock();
    try
    {
        //...
    }
    finally { _lock.ExitWriteLock(); }
}
Your question looks a lot like this:
A() is some read-only method, so it is thread-safe; different executions of A in parallel are OK.
B() writes/mutates things that the A method uses, so A() becomes not thread-safe if executed at the same time.
For example, B() could write to a List while A() executions read from that list, and you would get an "InvalidOperationException: Collection Was Modified" thrown from A().
I advise you to look up the "producer/consumer problem" on Google and look at the tons of examples there are.
But in case you absolutely want B's execution to begin while A's execution(s) have not terminated, you can add a checkpoint in A() using the Monitor class, which is used to lock a resource and synchronize with other threads. It is more complex, though, and I would first go for the producer/consumer pattern to see if it fills the need.
Some more things:
Another thing worth checking is the BlockingCollection<T> class, which may fit your exact need too (and is easy to use).
The use of MethodImplOptions.Synchronized is not recommended because it uses a public lock. We usually use a private lock (private readonly object _lock = new object();) so that no one except the maintainer of the object can lock on it, thus preventing deadlocks (and preventing other people from accusing your code of a bug when they lock on your class instance without knowing you do the same internally).
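To illustrate the private-lock convention, here is a minimal sketch (the Counter class is a made-up example): the lock object is private, so code outside the class cannot lock on it.

```csharp
using System;

class Counter
{
    // Private lock: no code outside this class can lock on it,
    // so callers cannot accidentally deadlock against our methods.
    private readonly object _lock = new object();
    private int _value;

    public void Increment()
    {
        lock (_lock)
        {
            _value++;
        }
    }

    public int Value
    {
        get { lock (_lock) { return _value; } }
    }
}
```

By contrast, a method marked [MethodImpl(MethodImplOptions.Synchronized)] locks on the publicly visible instance itself, so any caller writing lock (counterInstance) would contend with, or deadlock against, the class's own methods.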

Multithreading and Locking (Thread-Safe operations)

So I have a class with a few methods which all use locking in order to prevent weird things happening when someone uses an instance of my class with multiple threads accessing it:
public class SomeRandomClass
{
    private object locker = new object();

    public void MethodA()
    {
        lock (locker)
        {
            // Does something
            MethodB();
        }
    }

    public void MethodB()
    {
        lock (locker)
        {
            // Does something else
        }
    }
}
As we can see, MethodB() is automatically accessed by MethodA(), but that won't work since MethodA() has currently locked the locker object.
I want to make MethodB() accessible publicly, so you can call it manually whenever needed, but I do NOT want it to be used while MethodA() is doing things (that's why I'm using a locker object).
And of course I do not want MethodA() to do things while MethodB() is doing stuff. I basically want only one of all the methods to be used at the same time, but MethodA() needs to access MethodB() somehow without removing the lock (so that it stays completely thread-safe the whole time).
I really hope it is kind of understandable what I'm trying to ask... If there's any questions about my question, then please go ahead and post them below. Answers/Solutions are very much appreciated as well!
The solution is probably incredibly easy and I'm just not seeing it.
By the way, the above is supposed to be C#-code.
An easy solution would be to move what MethodB does into a private method that can be called by MethodA, and keep a public MethodB that wraps it.
The private method does not lock; only the public ones do.
For example:
public class SomeRandomClass
{
    private object locker = new object();

    public void MethodA()
    {
        lock (locker)
        {
            // exclusive club
            // do something before calling _methodB
            _methodB();
        }
    }

    private void _methodB()
    {
        // do that, what used to be done by MethodB
    }

    public void MethodB()
    {
        // this one only exists to expose _methodB in a thread-safe context
        lock (locker)
        {
            _methodB();
        }
    }
}
P.S.
I think it is obvious to you and everyone else why your code is somewhat designed to create a deadlock.
Update:
Apparently lock(object) {} is re-entrant as pointed out in the comments, so the obvious deadlock isn't even one.
Locking forbids what you're trying to do -- that's its purpose.
One thing to do here is to create a private method that you can access from both MethodA and MethodB. That method wouldn't use locking and wouldn't be thread-safe, but it could be called from either one of the locking methods.
You have a race condition here: it makes the data incorrect. Suppose method A writes a static theVar variable of type string:
thread A -> calls method A -> takes the lock -> changes theVar to "A"
thread B -> calls method B -> waits, because thread A holds the lock
thread A -> releases the lock; thread B's call to method B now proceeds
The bug here: thread B processes theVar with the value "A".
If method B only reads theVar, it's OK.
Your lock mechanism needs to allow locks to be taken in a recursive way (by the same thread only), usually called reentrant. The lock statement (the Monitor class internally) does exactly this.
It is legal for the same thread to invoke Enter more than once without it blocking; however, an equal number of Exit calls must be invoked before other threads waiting on the object will unblock.
See also Recursive / nested locking in C# with the lock statement
and Re-entrant locks in C#
As pointed out by Henk Holterman in the comment, the Monitor class is already reentrant. And the lock statement is managing the right amount of Enter and Exit calls to the underlying Monitor class.
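The "equal number of Exit calls" rule can be sketched with the raw Monitor API (a toy illustration; Monitor.IsEntered reports whether the current thread holds the lock):

```csharp
using System;
using System.Threading;

class MonitorRecursionDemo
{
    private static readonly object gate = new object();

    static void Main()
    {
        Monitor.Enter(gate);   // recursion count 1
        Monitor.Enter(gate);   // same thread: recursion count 2, no blocking
        Console.WriteLine("held twice: " + Monitor.IsEntered(gate));

        Monitor.Exit(gate);    // count back to 1 -- the lock is still held
        Console.WriteLine("after one Exit: " + Monitor.IsEntered(gate));

        Monitor.Exit(gate);    // count 0 -- other threads may now enter
        Console.WriteLine("after two Exits: " + Monitor.IsEntered(gate));
    }
}
```

The lock statement generates exactly this pairing for you, which is why nested lock blocks on the same object behave correctly.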
The ReaderWriterLockSlim class is an example for a lock mechanism where one can choose between reentrant and non-reentrant. See https://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim(v=vs.110).aspx
var rwLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
Replace your lock { ... } with
ReaderWriterLockSlim rwLock =
new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
...
rwLock.EnterWriteLock();
try
{
    // Does something
}
finally
{
    rwLock.ExitWriteLock();
}
The code written by you is correct.
According to Microsoft, once the lock is acquired, even if the program asks for the lock again in the same flow, it will not be blocked, as the thread already holds the lock.
The code works as below:
Call "MethodA" -> acquire lock -> call "MethodB" (will not be blocked, as the thread has already acquired the lock) and execution will complete.
Call "MethodB" in between the previous execution from another thread, and that execution will be blocked, as the lock is with the first thread.

Is a .Net Lock(context) atomic?

I do understand that a .NET lock ensures that only a single thread executes the lines of code found within the scope of the lock.
What I don't understand is whether a lock is atomic.
Can a thread be interrupted while performing the locked code?
For example - It appears to me that if a lock is NOT atomic, then the following code is not thread safe:
class Example
{
    private int myNumber;
    private object context = new object();

    void Write()
    {
        myNumber--;
    }

    void WriteLock()
    {
        lock (context)
        {
            myNumber++;
            Console.WriteLine(myNumber);
        }
    }
}
If thread A performs method WriteLock() and is interrupted because thread B is performing Write(), then myNumber may be changed unsafely. Am I right?
No, that quacks loudly like a bug. Those operators are not atomic, even though they look like it. Under the hood they operate as a read-modify-write: three operations instead of one, so they are not atomic themselves. The missing lock in Write() permits it to execute concurrently with WriteLock(). The outcome is arbitrary, including no change when Write races ahead of WriteLock, and the value actually getting decremented when WriteLock races ahead of Write.
Having a thread interrupted while it owns the lock doesn't matter, the lock just will be held longer.
Use Interlocked.Increment() and Decrement() for a cheaper version that doesn't need lock.
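A sketch of what the Interlocked version of the example class might look like (member names follow the question; the Value property is my addition for reading the counter safely):

```csharp
using System;
using System.Threading;

class Example
{
    private int myNumber;

    public void Write()
    {
        // Atomic decrement: the read-modify-write happens as one operation.
        Interlocked.Decrement(ref myNumber);
    }

    public void WriteLock()
    {
        // Increment returns the new value; print that, not the field,
        // since re-reading the field could observe another thread's change.
        int result = Interlocked.Increment(ref myNumber);
        Console.WriteLine(result);
    }

    public int Value => Volatile.Read(ref myNumber);
}
```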
Check the msdn documentation. I would imagine that it is either atomic or is implementing some other pattern to ensure this cannot happen.
The issue with your example is that the Write method should also be obtaining a lock before decrementing myNumber so that no other thread can alter the shared resource.
void Write()
{
    lock (context)
    {
        myNumber--;
    }
}

Is it good practice to lock a thread in order to make things transactional in winforms?

This is an old-school WinForms application that I am working with, and the design pattern that was used is as follows:
Whenever you need to make things transactional, an operation is performed on its own thread, and a lock is taken (a specific lock object is used for each operation); then a call is made to the WCF service, some local objects are updated, and the lock is released.
Is this good practice?
Yes, but be careful with multithreading and have a good read on it, as too many locks might create a deadlock situation.
I don't quite know what you mean, "lock a thread." Is it something like this?
static object ThreadLock = new object();

void ThreadProc(object state)
{
    lock (ThreadLock)
    {
        // do stuff here
    }
}
If so, there's nothing really wrong with that design. Your UI thread spawns a thread that's supposed to execute that code, and the lock prevents multiple threads from executing concurrently. It's a little bit wasteful in that you could potentially have many threads queued up behind the lock, but in practice you probably don't have more than one or two threads waiting. There are more efficient ways to do it (implement a task queue of some sort), but what you have is simple and effective.
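One way to sketch the "task queue of some sort" alternative is BlockingCollection<Action> with a single worker thread (names here are illustrative): callers enqueue work and return immediately, while operations still run one at a time, in order.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WorkQueue
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Task _worker;

    public WorkQueue()
    {
        // Single consumer: work items run one at a time, in FIFO order,
        // so no lock is needed around the work itself.
        _worker = Task.Run(() =>
        {
            foreach (var action in _queue.GetConsumingEnumerable())
                action();
        });
    }

    public void Enqueue(Action action) => _queue.Add(action);

    public void Shutdown()
    {
        _queue.CompleteAdding(); // no more items; worker exits its loop
        _worker.Wait();
    }
}
```

For the WinForms scenario, each "transactional" operation (the WCF call plus the local updates) would be one enqueued Action; serialization comes from the single consumer rather than from threads queuing up behind a lock.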
As long as you are not waiting on multiple lock objects, this should be fine. Deadlock occurs when you have a situation like this:
Thread A:
lock (lockObject1)
{
    // Do some stuff
    lock (lockObject2)
    {
        // Do some stuff
    }
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
    lock (lockObject1)
    {
        // Do some stuff
    }
}
If you happen to lock lockObject1 in thread A, and thread B locks lockObject2 before thread A locks it, then both threads will be waiting for an object that is locked in another thread, and neither will unlock because each is waiting while having an object locked. This is an oversimplified example -- there are many ways one can end up in this situation.
To avoid deadlock, do not wait on a second object while you have a first object locked. If you lock one object at a time like this, you can't get deadlocked because eventually, the locking thread will release the object a waiting thread needs. So for example, the above should be unrolled:
Thread A:
lock (lockObject1)
{
    // Do some stuff
}
lock (lockObject2)
{
    // Do some stuff
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
}
lock (lockObject1)
{
    // Do some stuff
}
In this case, each lock operation will complete without trying to acquire other resources, and so deadlock is avoided.
This is not making the action transactional. I would take that to mean that either the entire operation succeeds or it has no effect -- if I update two local objects inside your synchronization block, an error with the second does not roll back changes to the first.
Also, there is nothing stopping the main thread from using the two objects while they are being updated -- it needs to cooperate by also locking.
Locking in the background thread is only meaningful if you also lock when you use those objects in the main thread.

How to use Multiple Variables for a lock Scope in C#

I have a situation where a block of code should be executed only if two locker objects are free.
I was hoping there would be something like:
lock (a, b)
{
    // this scope is in critical region
}
However, there seems to be nothing like that. So does it mean the only way for doing this is:
lock (a)
{
    lock (b)
    {
        // this scope is in critical region
    }
}
Will this even work as expected? The code compiles, but I am not sure whether it achieves what I am expecting.
lock(a) lock(b) { // this scope is in critical region }
This code would block until the thread can acquire the lock for a. Then, with that lock acquired, it would block until the thread can acquire the lock for b. So this works as expected.
However, you have to be careful not to do this somewhere else:
lock(b) lock(a) { // this scope is in critical region }
This could lead to a deadlock situation in which thread 1 has acquired the lock for a and is waiting to acquire the lock for b, and thread 2 has acquired the lock for b and is waiting to acquire the lock for a.
Requesting the lock on both should work fine. lock(a) will block until a is free. Once you have that lock, lock(b) will block until you have b. After that you have both.
One thing you need to be very careful about here is the order. If you're going to do this make sure you always get the lock on a before getting the lock on b. Otherwise you could very easily find yourself in a deadlock situation.
I'd expect it to, though there'd be a case where it could potentially cause a deadlock condition.
Normally, the code will attempt to lock a and then proceed to lock b if that succeeded. This means that it would only execute the code if it could lock both a and b. Which is what you want.
However, if some other code has already got a lock on b then this code will not do what you expect. You'd also need to ensure that everywhere you needed to lock on both a and b you attempt to get the locks in the same order. If you get b first and then a you would cause a deadlock.
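One way to keep the ordering consistent everywhere is to funnel every two-lock acquisition through a single helper (a sketch; LockBoth is a made-up name, and the identity-hash ordering would need a tiebreaker for hash collisions in real code):

```csharp
using System;
using System.Runtime.CompilerServices;

static class LockHelper
{
    // Acquire two locks in one global order (here: by identity hash code),
    // so no two call sites can nest the same pair in opposite orders.
    public static void LockBoth(object x, object y, Action body)
    {
        object first = x, second = y;
        if (RuntimeHelpers.GetHashCode(x) > RuntimeHelpers.GetHashCode(y))
        {
            first = y;
            second = x;
        }
        lock (first)
        lock (second)
        {
            body();
        }
    }
}
```

With this, LockBoth(a, b, ...) and LockBoth(b, a, ...) take the locks in the same order, so the opposite-order deadlock described above cannot occur between them.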
