How to use multiple variables for a lock scope in C#

I have a situation where a block of code should be executed only if two locker objects are free.
I was hoping there would be something like:
lock (a, b)
{
    // this scope is in critical region
}
However, there seems to be nothing like that. So does it mean the only way for doing this is:
lock (a)
{
    lock (b)
    {
        // this scope is in critical region
    }
}
Will this even work as expected? The code compiles, but I am not sure whether it achieves what I expect it to.

lock(a) lock(b) { // this scope is in critical region }
This code would block until the thread can acquire the lock for a. Then, with that lock acquired, it would block until the thread can acquire the lock for b. So this works as expected.
However, you have to be careful not to do this somewhere else:
lock(b) lock(a) { // this scope is in critical region }
This could lead to a deadlock situation in which thread 1 has acquired the lock for a and is waiting to acquire the lock for b, and thread 2 has acquired the lock for b and is waiting to acquire the lock for a.

Requesting the lock on both should work fine. lock(a) will block until a is free. Once you have that lock, lock(b) will block until you have b. After that you have both.
One thing you need to be very careful about here is the order. If you're going to do this make sure you always get the lock on a before getting the lock on b. Otherwise you could very easily find yourself in a deadlock situation.
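One way to make that ordering rule hard to violate is to funnel both acquisitions through a single helper, so every call site takes a before b. A minimal sketch (the LockBoth helper, the Account class, and the counter are illustrative, not from the question):

```csharp
using System;
using System.Threading;

class Account
{
    private static readonly object a = new object();
    private static readonly object b = new object();

    public static int Counter;

    // Every caller goes through this helper, so a is always
    // acquired before b and the ordering rule can't be violated.
    private static void LockBoth(Action critical)
    {
        lock (a)
        lock (b)
        {
            critical();
        }
    }

    public static void Increment() => LockBoth(() => Counter++);
}

class Program
{
    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 1000; j++) Account.Increment();
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(Account.Counter); // 4000
    }
}
```

Because every thread enters the locks in the same order, no "waits for" cycle can form.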

I'd expect it to, though there'd be a case where it could potentially cause a deadlock condition.
Normally, the code will attempt to lock a and then proceed to lock b if that succeeded. This means that it would only execute the code if it could lock both a and b. Which is what you want.
However, if some other code has already got a lock on b then this code will not do what you expect. You'd also need to ensure that everywhere you needed to lock on both a and b you attempt to get the locks in the same order. If you get b first and then a you would cause a deadlock.


Monitor.TryEnter seems not to work as expected [duplicate]

Reading on locks in C#, I see that being able to acquire a lock on the same object multiple times is said to be possible because Monitors are re-entrant. The definition of re-entrant code as given on Wikipedia does not seem to fit well in this context. Can you please help me understand what re-entrancy is in the context of C# and how it applies to Monitors? From what I have understood, when a thread has acquired a lock, it will not relinquish the lock in the middle of a critical section, even if it yields the CPU; as a result, no other thread would be able to acquire the monitor. Where does re-entrancy come into the picture?
@Zbynek Vyskovsky - kvr000 has already explained what reentrancy means with regard to Monitor.
Wikipedia defines "reentrant mutex" (recursive lock) as:
a particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process/thread, without causing a deadlock.
Here's a little example to help you visualise the concept:
void MonitorReentrancy()
{
    var obj = new object();
    lock (obj)
    {
        // Lock obtained. Must exit once to release.
        // No *other* thread can obtain a lock on obj
        // until this (outermost) "lock" scope completes.
        lock (obj) // Deadlock?
        {
            // Nope, we've just *re-entered* the lock.
            // Must exit twice to release.
            bool lockTaken = false;
            try
            {
                Monitor.Enter(obj, ref lockTaken); // Deadlock?
                // Nope, we've *re-entered* the lock again.
                // Must exit three times to release.
            }
            finally
            {
                if (lockTaken)
                {
                    Monitor.Exit(obj);
                }
                // Must exit twice to release.
            }
        }
        // Must exit once to release.
    }
    // At this point we have finally truly released
    // the lock, allowing other threads to obtain it.
}
Reentrancy actually has many meanings.
In this context it means that the monitor can be entered repeatedly by the same thread, and is only unlocked once the same number of releases have been done.
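To see the "same number of releases" rule in action, a recursive method that takes the same lock on every call is a handy illustration (the Countdown method is made up for this sketch):

```csharp
using System;

class Recursive
{
    private static readonly object gate = new object();

    // Each recursive call re-enters the monitor already held by this
    // thread; it is only truly released once the outermost lock
    // scope exits.
    public static int Countdown(int n)
    {
        lock (gate)
        {
            if (n <= 0) return 0;
            return Countdown(n - 1) + 1;
        }
    }

    static void Main()
    {
        Console.WriteLine(Countdown(5)); // prints 5 — no self-deadlock
    }
}
```

If the monitor were not re-entrant, the second call to Countdown would block forever waiting on the lock its own thread already holds.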

Conditional thread lock in c#

Is it possible to have a conditional thread lock when the underlying condition is not constant?
I have two functions A and B, and a condition to decide which function to execute.
A is thread safe by itself, multiple calls to A can execute simultaneously, B is not, and is Synchronized. But during execution of B the condition can change (from false to true) and therefore all threads executing A at that time will throw errors.
if (condition)
{
    A();
}
else
{
    B();
}
A - thread safe
B - Synchronized using [MethodImpl(MethodImplOptions.Synchronized)]
Therefore, I am looking for a way to lock A but only when B is running.
Please suggest a way to achieve this.
Some elaborations:
I am creating a cache, and performance is very crucial, thus a blanket lock is not feasible.
Condition is whether or not the requested data is present in the cache.
A() = AddToUpdates() - Executed on a cache hit, just adds to the number of updates for a particular cache key, using a concurrent dictionary.
B() = ProccessUpdates() and EvictLeastPriorityEntry() - Executed on a cache miss, all previous updates will be processed and the underlying data structure storing the ordering of cache entries will be re-arranged.
And then the entry with least priority will be removed.
As mentioned in the accepted answer ReaderWriterLock seems to be the way to go.
Just one problem though,
Let's say thread1 starts execution and a cache hit occurs (on the entry with the least priority), meaning the if condition is true, and it enters the if block. But before calling A(), control is switched to thread2.
thread2 - a cache miss occurs, and reordering and eviction (of the entry which A() from thread1 needed access to) are performed.
Now when control returns to thread1, an error will occur.
This is the solution I feel should work:
_lock.EnterReadLock();
if (condition)
{
    A();
}
_lock.ExitReadLock();

if (!condition)
{
    B();
}

void A()
{
    // ....
}

void B()
{
    _lock.EnterWriteLock();
    // ...
    _lock.ExitWriteLock();
}
Will this work?
Thank you.
A possible solution to your problem might be the ReaderWriterLockSlim class. This is a synchronization primitive that allows multiple concurrent readers, or one exclusive writer, but not both at the same time.
Use ReaderWriterLockSlim to protect a resource that is read by multiple threads and written to by one thread at a time. ReaderWriterLockSlim allows multiple threads to be in read mode, allows one thread to be in write mode with exclusive ownership of the lock, and allows one thread that has read access to be in upgradeable read mode, from which the thread can upgrade to write mode without having to relinquish its read access to the resource.
Example:
private readonly ReaderWriterLockSlim _lock = new();

void A()
{
    _lock.EnterReadLock();
    try
    {
        //...
    }
    finally { _lock.ExitReadLock(); }
}

void B()
{
    _lock.EnterWriteLock();
    try
    {
        //...
    }
    finally { _lock.ExitWriteLock(); }
}
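The upgradeable read mode mentioned in the quote is what addresses the check-then-act race described in the question: only one thread may hold the upgradeable lock at a time, so the condition cannot change between the cache-hit check and the write. A sketch of a cache along those lines (the GetOrAdd shape and the dictionary storage are assumptions, not the asker's actual cache):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Cache
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Dictionary<string, int> _data = new Dictionary<string, int>();

    // Check-then-act without the race the question describes:
    // the upgradeable lock is exclusive against other upgradeable
    // and write holders, so the condition can't change under us.
    public int GetOrAdd(string key, Func<int> create)
    {
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (_data.TryGetValue(key, out var value))
                return value; // cache hit; plain readers run concurrently

            _lock.EnterWriteLock(); // cache miss: upgrade to exclusive
            try
            {
                var created = create();
                _data[key] = created;
                return created;
            }
            finally { _lock.ExitWriteLock(); }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }
}

class Program
{
    static void Main()
    {
        var cache = new Cache();
        Console.WriteLine(cache.GetOrAdd("x", () => 42)); // 42 (miss)
        Console.WriteLine(cache.GetOrAdd("x", () => 99)); // 42 (hit)
    }
}
```

Note that only one upgradeable holder is allowed at a time, so if most operations are hits you may want the hit path to take a plain read lock first and fall back to this path on a miss.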
Your question looks a lot like this:
A() is some read-only method, so it is thread safe by itself. Different executions of A in parallel are OK.
B() writes/mutates the things that A() uses, so A() is no longer thread safe if executed at the same time as B().
For example, B() could write to a List while A() executions read from it, and you would get an "InvalidOperationException: Collection was modified" thrown from A().
I advise you to look up the "producer/consumer problem" and look at the tons of examples there are.
But in case you absolutely want to begin B's execution while A's execution(s) have not terminated, you can add a checkpoint in A() using the Monitor class, which is used to lock a resource and synchronize with other threads. It is more complex though, and I would go first for the producer/consumer pattern to see if it fills the need.
Some more things:
Another thing worth checking is the BlockingCollection<T> class, which may fit your exact need too (and is easy to use).
The use of MethodImplOptions.Synchronized is not recommended because it uses a public lock. We usually use a private lock (readonly object _lock = new object();) so that no one except the maintainer of the object can lock on it. This prevents deadlocks (and prevents other people accusing your code of a bug when they locked your class instance without knowing you do the same internally).
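To make that last point concrete, here is a side-by-side sketch of the Synchronized attribute versus a private lock object (both classes are illustrative):

```csharp
using System;
using System.Runtime.CompilerServices;

class SynchronizedExample
{
    // Compiles to lock(this) for instance methods (lock(typeof(T))
    // for static ones) — any code holding a reference can take the
    // same lock and interfere or deadlock.
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Risky() { /* ... */ }
}

class PrivateLockExample
{
    // Preferred: a lock object nobody outside the class can reach.
    private readonly object _lock = new object();

    public void Safe()
    {
        lock (_lock)
        {
            /* ... */
        }
    }
}

class Program
{
    static void Main()
    {
        new SynchronizedExample().Risky();
        new PrivateLockExample().Safe();
        Console.WriteLine("ok");
    }
}
```

With the private lock, callers physically cannot contend on your internal lock from outside the class.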

About lock objects in C#

Consider the following code:
static void AddItem()
{
    lock (_list)
        _list.Add("Item " + _list.Count); // Lock 1

    string[] items;
    lock (_list)
        items = _list.ToArray(); // Lock 2

    foreach (string s in items)
        Console.WriteLine(s);
}
If Thread A gets Lock 2, and Thread B attempts to get Lock 1, will B get the lock or not? Considering both locks use the same locking object.
No, thread B will need to wait until thread A releases the lock. That's the point of it being the same lock object, after all - there's one lock. Where the lock is acquired or released is irrelevant: only one thread can "own" the monitor at a time.
I would strongly advise you to use braces for readability, by the way:
lock (_list)
{
    _list.Add(...);
}
No, B will not. Both are locking on the same object and therefore the two locks are "linked." For this reason, if you need to highly-optimise such code, there are times where you might consider multiple lock objects.
As a side note, you should not be locking on the list itself but on an object created specifically for that purpose.
No, since they use the same locking object, they are mutually exclusive.
Often code is used to lock an object (for example a list) to perform an operation on it without interference from other threads. This requires that the item is locked no matter what operation is performed.
To elaborate, say you have a list that is designed to be threadsafe. If you try adding and deleting multiple items simultaneously, you could corrupt the list. By locking the list whenever it needs to be modified, you can help ensure thread safety.
This all depends on the fact that one object will make all locks mutually exclusive.
If Thread A is using the lock, then no other thread can use it (regardless of where the lock is being used). So, thread B will be blocked until that lock is free.
Consider that:
lock (obj)
{
    // Do Stuff;
}
Is shorthand for:
Monitor.Enter(obj);
try
{
    // Do Stuff;
}
finally
{
    Monitor.Exit(obj);
}
Now consider that Monitor.Enter() is a method call like any other. It knows nothing about where in the code it was called. The only thing it knows about, is what object was passed to it.
As far as it's concerned, what you call "Lock 1" and "Lock 2" are the same lock.
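For completeness: since C# 4.0 the compiler actually emits a slightly different expansion that uses a lockTaken flag, so Monitor.Exit is skipped if Monitor.Enter threw before the lock was acquired. Roughly:

```csharp
using System.Threading;

class LockExpansion
{
    private readonly object obj = new object();

    // What lock (obj) { /* Do Stuff */ } expands to in C# 4.0+.
    public void DoStuffLocked()
    {
        bool lockTaken = false;
        try
        {
            Monitor.Enter(obj, ref lockTaken);
            // Do Stuff;
        }
        finally
        {
            // Only exit the monitor if Enter actually succeeded.
            if (lockTaken) Monitor.Exit(obj);
        }
    }
}
```

Either way, the point in the answer stands: Monitor only cares about which object it was given, not where in the code the call happens.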

Is it good practice to lock a thread in order to make things transactional in winforms?

This is an old-school winforms application that I am working with, and the design pattern that was used is as follows:
Whenever things need to be made transactional, an operation is performed on its own thread, and that thread takes a lock (a specific lock object is used for each operation), then a call is made to the wcf service, some local objects are updated, and the lock is released.
Is this good practise?
Yes, but be careful with multithreading and have a good read on it, as too many locks might create a deadlock situation.
I don't quite know what you mean, "lock a thread." Is it something like this?
static object ThreadLock = new object();

void ThreadProc(object state)
{
    lock (ThreadLock)
    {
        // do stuff here
    }
}
If so, there's nothing really wrong with that design. Your UI thread spawns a thread that's supposed to execute that code, and the lock prevents multiple threads from executing concurrently. It's a little bit wasteful in that you could potentially have many threads queued up behind the lock, but in practice you probably don't have more than one or two threads waiting. There are more efficient ways to do it (implement a task queue of some sort), but what you have is simple and effective.
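The "task queue of some sort" alternative mentioned above can be sketched with BlockingCollection<T>: producers enqueue work items and a single consumer thread executes them serially, so no lock is needed around the work itself (the names here are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class Program
{
    static void Main()
    {
        // Work items are queued instead of piling up blocked on a lock;
        // one consumer thread executes them one at a time.
        using var queue = new BlockingCollection<Action>();
        int processed = 0;

        var consumer = new Thread(() =>
        {
            // Blocks while empty, ends after CompleteAdding + drain.
            foreach (var work in queue.GetConsumingEnumerable())
                work(); // runs serially, so no lock needed
        });
        consumer.Start();

        for (int i = 0; i < 10; i++)
            queue.Add(() => processed++);

        queue.CompleteAdding();
        consumer.Join();
        Console.WriteLine(processed); // 10
    }
}
```

Compared to the lock-per-operation design, submitting threads never block waiting for each other; they hand off the work and return immediately.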
As long as you are not waiting on multiple lock objects, this should be fine. Deadlock occurs when you have a situation like this:
Thread A:
lock (lockObject1)
{
    // Do some stuff
    lock (lockObject2)
    {
        // Do some stuff
    }
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
    lock (lockObject1)
    {
        // Do some stuff
    }
}
If you happen to lock lockObject1 in thread A, and thread B locks lockObject2 before thread A locks it, then both threads will be waiting for an object that is locked in another thread, and neither will unlock because each is waiting while having an object locked. This is an oversimplified example -- there are many ways one can end up in this situation.
To avoid deadlock, do not wait on a second object while you have a first object locked. If you lock one object at a time like this, you can't get deadlocked because eventually, the locking thread will release the object a waiting thread needs. So for example, the above should be unrolled:
Thread A:
lock (lockObject1)
{
    // Do some stuff
}
lock (lockObject2)
{
    // Do some stuff
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
}
lock (lockObject1)
{
    // Do some stuff
}
In this case, each lock operation will complete without trying to acquire other resources, and so deadlock is avoided.
This is not making the action transactional. I would take that to mean that either the entire operation succeeds or it has no effect; if I update two local objects inside your synchronization block, an error with the second does not roll back changes to the first.
Also, there is nothing stopping the main thread from using the two objects while they are being updated -- it needs to cooperate by also locking.
Locking in the background thread is only meaningful if you also lock when you use those objects in the main thread.

Why do nested locks not cause a deadlock?

Why does this code not cause a deadlock?
private static readonly object a = new object();
...
lock (a)
{
    lock (a)
    {
        ....
    }
}
If a thread already holds a lock, then it can "take that lock" again without issue.
As to why that is, (and why it's a good idea), consider the following situation, where we have a defined lock ordering elsewhere in the program of a -> b:
void f()
{
    lock (a)
    { /* do stuff inside a */ }
}

void doStuff()
{
    lock (b)
    {
        // do stuff inside b that involves leaving b in an inconsistent state
        f();
        // do more stuff inside b so that it's consistent again
    }
}
Whoops, we just violated our lock ordering and have a potential deadlock on our hands.
We really need to be able to do the following:
void doStuff()
{
    lock (a)
    lock (b)
    {
        // do stuff inside b that involves leaving b in an inconsistent state
        f();
        // do more stuff inside b so that it's consistent again
    }
}
So that our lock ordering is maintained, without self-deadlocking when we call f().
The lock keyword uses a re-entrant lock, meaning the current thread already has the lock so it doesn't try to reacquire it.
A deadlock occurs if
Thread 1 acquires lock A
Thread 2 acquires lock B
Thread 1 tries to acquire lock B (waits for Thread 2 to be done with it)
Thread 2 tries to acquire lock A (waits for Thread 1 to be done with it)
Both threads are now waiting on each other and thus deadlocked.
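When you cannot guarantee a single global lock ordering, Monitor.TryEnter with a timeout lets a thread back off instead of waiting forever, which breaks the "waits for each other" cycle described above. A minimal sketch (TryLockBoth is a made-up helper, not a standard API):

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object A = new object();
    static readonly object B = new object();

    // Take the first lock, then *try* to take the second; on
    // timeout, give up (releasing the first lock on the way out)
    // instead of blocking indefinitely.
    static bool TryLockBoth(object first, object second, Action critical)
    {
        lock (first)
        {
            if (!Monitor.TryEnter(second, TimeSpan.FromMilliseconds(100)))
                return false; // back off rather than deadlock
            try
            {
                critical();
                return true;
            }
            finally { Monitor.Exit(second); }
        }
    }

    static void Main()
    {
        // Uncontended case: both locks are acquired immediately.
        Console.WriteLine(TryLockBoth(A, B, () => { })); // True
    }
}
```

A caller that gets false back can retry (ideally after a randomized delay) rather than sit in step 3 or 4 of the sequence above forever.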
From section 8.12 of the C# language specification:
While a mutual-exclusion lock is held, code executing in the same execution thread can also obtain and release the lock. However, code executing in other threads is blocked from obtaining the lock until the lock is released.
It should be obvious that the inner lock scope runs on the same thread as the outer one.
