.net Lock - Two Questions - c#

Two questions about the lock() construct in .NET.
First, I am aware that if an object is locked within one class and another class attempts to lock the same object, this produces a deadlock. But why? I have read about it on MSDN, but MSDN is rarely all that clear.
----Edit Question One----
Still confused. I have a main thread (UI thread) that spawns many Threadpool threads. Each child thread locks the data before it works with it. This works fine every time.
If I then attempt to lock the same data from the UI thread to check if I should even bother creating a new thread for an edge case I create deadlock nearly every time.
----Edit Question Two----
Secondly, if I have a compound object that I lock, are all child objects within it locked as well? Short code demo:
internal sealed class Update
{
//Three objects instantiated via other external assemblies
public DataObject One { get; set; }
public DataObject Two { get; set; }
public ReplayStatus Status { get; set; }
}
If I call lock(UpdateObject), are each of the three internal objects and all of their child objects locked as well?
So should I do something like this to prevent threads from playing with my data objects:
lock (UpdateObject.One)
{
    lock (UpdateObject.Two)
    {
        lock (UpdateObject.Status)
        {
            // Do stuff
        }
    }
}

First, I am aware that if an object is locked within one class and another class attempts to lock the same object this produces a deadlock.
No. If one thread locks an object and a second thread attempts to lock that object, that second thread must wait for the first thread to exit the lock.
Deadlock is something else:
1. thread1 locks instanceA
2. thread2 locks instanceB
3. thread1 attempts to lock instanceB and now must wait on thread2
4. thread2 attempts to lock instanceA and now must wait on thread1
These two threads can no longer execute, and so never release their locks. What a mess.
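The four steps above can be sketched in C#. This is a deliberately broken example (DeadlockDemo and the lock field names are illustrative); the Sleep calls just make the bad interleaving reliable:

```csharp
using System.Threading;

public static class DeadlockDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    public static void Thread1()
    {
        lock (lockA)               // step 1: thread1 locks instanceA
        {
            Thread.Sleep(200);     // give thread2 time to take lockB
            lock (lockB) { }       // step 3: blocks forever waiting on thread2
        }
    }

    public static void Thread2()
    {
        lock (lockB)               // step 2: thread2 locks instanceB
        {
            Thread.Sleep(200);
            lock (lockA) { }       // step 4: blocks forever waiting on thread1
        }
    }
}
```

Run Thread1 and Thread2 on two threads and neither ever finishes: each holds the lock the other needs.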
If I call lock(UpdateObject), are each of the three internal objects and all of their child objects locked as well?
No, the "lock" is only on the locked instance. Note: the lock doesn't prevent anything other than a second thread from acquiring a lock on that instance at the same time.

First, the whole point of a lock is that two sections of code can't get ahold of the same lock at once. This is to coordinate multiple threads working with the same stuff without interfering with each other. If you have a lock on an object, then anyone else that tries to get the lock will block (wait) until the original lock is released (only one thread can have the lock at any given time). You only have a deadlock if the first thread never gives up the lock, or if both threads are waiting for something from each other and neither can proceed until each gets what it's waiting for.
Second, if you lock an object in C#, you're not really "locking" the object in any semantic sense. You're acquiring a "lock" on the object (which you later release or relinquish). The object is purely a convenient token that is used to uniquely identify which lock you wish to obtain. So no, a lock on an object does not create a lock on any sub-parts of that object.
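So for the compound object in the question, the usual pattern is a single private lock token that every thread takes before touching any part of the object; the child objects need no locks of their own. A minimal sketch (DataObject and ReplayStatus here are stand-ins for the question's externally defined types):

```csharp
internal sealed class DataObject { public int Value; }
internal sealed class ReplayStatus { public bool Done; }

internal sealed class Update
{
    // One token guards the whole compound; no nested locks needed.
    private readonly object _sync = new object();

    public DataObject One { get; } = new DataObject();
    public DataObject Two { get; } = new DataObject();
    public ReplayStatus Status { get; } = new ReplayStatus();

    public void Apply(int a, int b, bool done)
    {
        lock (_sync)   // every reader and writer must take this same lock
        {
            One.Value = a;
            Two.Value = b;
            Status.Done = done;
        }
    }
}
```

The protection is purely by convention: any code that touches One, Two, or Status without taking _sync is unprotected.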

Related

Locking Variables when Threading in C#

I have a C# program, where I'm spawning a thread to do some calculations. I'm then adding the result of the calculations to a Queue, and from the main thread, I'm constantly checking to see if the Queue has length more than 0. If it does, then the result of the calculation is de-queued and used elsewhere.
I've read that I should lock the queue when accessing it from either thread because it may cause problems if both threads are accessing it at the same time. But should I lock it whenever I do ANYTHING with the Queue, or only when en-queuing/de-queuing?
E.g.
// In main thread
// In main thread
lock (meshDataQueue)
{
    if (meshDataQueue.Count > 0)
    {
        constructMesh(meshDataQueue.Dequeue());
    }
}
vs.
if (meshDataQueue.Count > 0)
{
    lock (meshDataQueue)
    {
        constructMesh(meshDataQueue.Dequeue());
    }
}
Yes, you should lock the Queue instance (using always the same "locker" object) whenever you do anything with it, including trivial things like reading the queue's Count. The Queue class is not thread-safe, so for its behavior to stay defined you must ensure that it is accessed by one thread at a time (with proper memory barriers when switching from thread to thread, that the lock statement robustly provides). Otherwise you enter the undefined behavior territory, where all guarantees are off.
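Concretely, the Count check and the Dequeue must sit under the same lock, or another thread can empty the queue between the two calls. A minimal sketch (TryDequeueLocked is a hypothetical helper name):

```csharp
using System.Collections.Generic;

public static class QueueExtensions
{
    // Atomically test-and-dequeue under one lock; returns false if the queue is empty.
    public static bool TryDequeueLocked<T>(Queue<T> queue, object sync, out T item)
    {
        lock (sync)
        {
            if (queue.Count > 0)
            {
                item = queue.Dequeue();
                return true;
            }
            item = default(T);
            return false;
        }
    }
}
```

On .NET 4 and later, System.Collections.Concurrent.ConcurrentQueue<T> gives you an equivalent TryDequeue out of the box, with no explicit locking.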

Multithreading: difference between types of locking objects

Please explain the difference between these two types of locking.
I have a List which I want to access thread-safe:
var tasks = new List<string>();
1.
var locker = new object();
lock (locker)
{
    tasks.Add("work 1");
}
2.
lock (tasks)
{
    tasks.Add("work 2");
}
My thoughts:
1. Prevents two different threads from running the locked block of code at the same time. But if another thread runs a different method where it tries to access tasks, this type of lock won't help.
2. Blocks the List<> instance, so other threads in other methods will be blocked until I unlock tasks.
Am I right or mistaken?
(2) only blocks other code that explicitly calls lock (tasks). Generally, you should only do this if you know that tasks is a private field and thus can enforce throughout your class that lock (tasks) means locking operations on the list. This can be a nice shortcut when the lock is conceptually linked with access to the collection and you don't need to worry about public exposure of the lock. You don't get this 'for free', though; it needs to be explicitly used just like locking on any other object.
They do the same thing. Any other code that tries to modify the list without locking the same object will cause potential race conditions.
A better way might be to encapsulate the list in another object that obtains a lock before doing any operations on the underlying list; then any other code can simply call methods on the wrapper object without worrying about obtaining the lock.
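That wrapper suggestion might look like this minimal sketch (SafeList is a hypothetical name, and only the operations shown are covered):

```csharp
using System.Collections.Generic;

public sealed class SafeList<T>
{
    private readonly object _sync = new object();
    private readonly List<T> _items = new List<T>();

    public void Add(T item)
    {
        lock (_sync) { _items.Add(item); }
    }

    public int Count
    {
        get { lock (_sync) { return _items.Count; } }
    }

    // Returns a copy, so callers can iterate without holding the lock.
    public T[] Snapshot()
    {
        lock (_sync) { return _items.ToArray(); }
    }
}
```

Because the lock object is private, no outside code can interfere with the locking discipline.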

In which context will thread safe singletons run?

If a singleton which is accessed from multiple threads is used, and the singleton itself is threadsafe, which thread will block when the singleton is accessed?
For example, think of a main thread A. A first accesses the singleton S, then does something else.
A bit later, thread B accesses the singleton S.
If B accesses S, will the singleton still be in the context of thread A and also block thread A, or only thread B (and other threads actually trying to access it)?
-> accesses
A->S {}
A->X {}
B->S {
...
C->S
} - will B only block C, or also A?
To clarify the question, here is the thread-safe singleton (stripped down):
private static volatile Singleton instance;
private static object _sync = new Object();

private Singleton()
{
    // do something...
}

public static Singleton Instance
{
    get
    {
        if (instance == null)
        {
            lock (_sync)
            {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
(and, of course, locking in methods)
I know that the lock will prevent multiple threads from accessing the same region of code at once. My question is specific to the following point:
If the original thread in whose scope the singleton was produced does not hold the lock, will it also be blocked when another thread accesses the lock, since the singleton instance was created in its scope? Or will the singleton only run in the scope of the original thread?
Usually, thread safety for a singleton means mutual exclusion. That is, if a thread needs to use the singleton, it must acquire a lock/token, do what it needs, and release the token. During the whole time it is holding the token, no other thread will be able to acquire it. Any thread that tries that will be blocked and placed in a FIFO queue and will receive the token as soon as the holder releases it. This ensures only one thread accesses the protected resource (a singleton object in this case) at a time.
This is the typical scenario; your mileage might vary.
On a related note, Singleton is considered a bad idea by most people.
Synchronization mechanisms for C# are covered in part 2 of the tutorial linked by makc, which is quite nice.
Thread safe normally means only one thread can access it at a time. Locks around critical sections will mean multiple threads trying to run that piece of code will be blocked and only one at a time can proceed and access it.
Let's assume in your question that the class is synchronised at the class level, then while A is calling methods on S any other thread trying to call S at the same time will have to wait until A is finished.
Once A has finished running S then all waiting threads can be re-scheduled and one of them will then acquire the lock and run S (blocking any remaining waiting threads).
Meanwhile...A can go ahead and run X while someone else is accessing S (unless they share the same lock).
Think of a lock - specifically a mutex in this example - as a token: only the thread holding the token can run the code it protects. Once it's done, it drops the token, and the next thread that picks it up can proceed.
Typically your synchronisation is done at a finer-grained level than across the whole class, say on a specific method or a specific block of code within a method. This avoids threads wasting time waiting around when they could actually both access different methods that don't affect each other.
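A sketch of that finer-grained approach, assuming a class with two independent pieces of state (all names here are illustrative): a thread updating the cache never blocks a thread bumping the counter, because each piece has its own lock object:

```csharp
using System.Collections.Generic;

public sealed class Service
{
    private readonly object _cacheLock = new object();
    private readonly object _statsLock = new object();
    private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();
    private long _requestCount;

    public void CachePut(string key, string value)
    {
        lock (_cacheLock) { _cache[key] = value; }   // does not block CountRequest
    }

    public long CountRequest()
    {
        lock (_statsLock) { return ++_requestCount; } // does not block CachePut
    }
}
```

The trade-off: the two pieces of state must genuinely be independent, or you reintroduce races (and, if you ever nest the locks in different orders, deadlocks).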
It'll depend on how thread-safe your singleton, or any other object, is.
For example, if you use a Monitor or Mutex, only one thread will have access to the code block protected by one of these thread synchronization approaches. Let's say one thread tries to enter a synchronized code block while some other thread has acquired the lock: the second thread will wait until the first releases the lock.
On the other hand, if you use a Semaphore, you define how many threads can pass through the protected code block. Let's say the Semaphore allows 8 threads at the same time. If a 9th thread tries to enter the protected code block, it will wait until the Semaphore signals that one or more slots are available for the queued threads.
There are different strategies for synchronizing objects in multi-threaded code.
Check this MSDN article:
http://msdn.microsoft.com/en-us/library/ms173178(v=vs.110).aspx
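The 8-slot semaphore described above can be sketched with SemaphoreSlim (Throttle and DoWork are illustrative names):

```csharp
using System;
using System.Threading;

public static class Throttle
{
    // Up to 8 threads may be inside DoWork at once; a 9th waits for a free slot.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(8, 8);

    public static void DoWork(Action body)
    {
        Gate.Wait();          // blocks only if all 8 slots are taken
        try { body(); }
        finally { Gate.Release(); }
    }
}
```

With an initial count of 1, a semaphore behaves like a mutex; counts above 1 give you the throttling behaviour described above.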
UPDATE
I've checked the code in your updated question body.
Definitively: yes. Any thread, even the main thread, will be blocked until the one that acquired the lock releases it.
Imagine it weren't this way: the threading rules would apply to every thread except the main one, and you'd have the chance of a non-synchronized code region.
The lock statement compiles into something like Monitor.Enter and Monitor.Exit calls. This means the current thread acquires an exclusive lock on the locked object.
UPDATE 2
Taken from some OP comment:
Can you explain why? I mean if the main thread does nothing with the
singleton in the moment, then the thread does not try to get that lock?
Oops! I think you've forgotten something about how threading works!
When you protect a code region using a thread synchronization approach like Monitor (the lock keyword uses a Monitor behind the scenes), you're blocking any thread that tries to enter the protected/synchronized region - not blocking every working thread until the Monitor releases the lock.
Let's say there're two threads A and B and you've this code:
lock (_syncObject)
{
    // Do some stuff
}
Thread A goes through the synchronized code region and B is a background worker that's doing some other stuff that won't go through the protected region. In this case, B won't be blocked.
In other words: when you synchronize threaded access to some region, you're protecting an object. lock (or a Mutex, AutoResetEvent, or whatever) is not equivalent to a hypothetical Thread.SleepAll(). If threads are started and working and none of them goes through a synchronized object access, no thread will be blocked.

About lock objects in C#

Consider the following code:
static void AddItem()
{
    lock (_list)
        _list.Add("Item " + _list.Count); // Lock 1

    string[] items;
    lock (_list)
        items = _list.ToArray(); // Lock 2

    foreach (string s in items)
        Console.WriteLine(s);
}
If Thread A gets Lock 2, and Thread B attempts to get Lock 1, will B get the lock or not? Considering both locks use the same locking object.
No, thread B will need to wait until thread A releases the lock. That's the point of it being the same lock object, after all - there's one lock. Where the lock is acquired or released is irrelevant: only one thread can "own" the monitor at a time.
I would strongly advise you to use braces for readability, by the way:
lock (_list)
{
    _list.Add(...);
}
No, B will not. Both lock on the same object, and therefore the two locks are "linked." For this reason, if you need to highly optimise such code, there are times when you might consider multiple lock objects.
As a side note, you should not be locking on the list itself but on an object created specifically for that purpose.
No, since they use the same locking object, they are mutually exclusive.
Often code is used to lock an object (for example a list) to perform an operation on it without interference from other threads. This requires that the item is locked no matter what operation is performed.
To elaborate, say you have a list that is designed to be threadsafe. If you try adding and deleting multiple items simultaneously, you could corrupt the list. By locking the list whenever it needs to be modified, you can help ensure thread safety.
This all depends on the fact that one object will make all locks mutually exclusive.
If Thread A is using the lock, then no other thread can use it (regardless of where the lock is being used). So, thread B will be blocked until that lock is free.
Consider that:
lock (obj)
{
    // Do stuff
}
Is shorthand for:
Monitor.Enter(obj);
try
{
    // Do stuff
}
finally
{
    Monitor.Exit(obj);
}
Now consider that Monitor.Enter() is a method call like any other. It knows nothing about where in the code it was called. The only thing it knows about, is what object was passed to it.
As far as it's concerned, what you call "Lock 1" and "Lock 2" are the same lock.

Is it good practice to lock a thread in order to make things transactional in winforms?

This is an old-school WinForms application that I am working with, and the design pattern that was used is as follows:
Whenever something needs to be transactional, the operation is performed on its own thread, the thread takes a lock (a specific lock object is used for each operation), a call is made to the WCF service, some local objects are updated, and then the lock is released.
Is this good practice?
Yes, but be careful with multithreading and read up on it thoroughly, as too many locks might create a deadlock situation.
I don't quite know what you mean, "lock a thread." Is it something like this?
static object ThreadLock = new object();

void ThreadProc(object state)
{
    lock (ThreadLock)
    {
        // do stuff here
    }
}
If so, there's nothing really wrong with that design. Your UI thread spawns a thread that's supposed to execute that code, and the lock prevents multiple threads from executing concurrently. It's a little bit wasteful in that you could potentially have many threads queued up behind the lock, but in practice you probably don't have more than one or two threads waiting. There are more efficient ways to do it (implement a task queue of some sort), but what you have is simple and effective.
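A sketch of that "task queue of some sort", assuming a single worker thread draining a BlockingCollection (WorkQueue is a hypothetical name): callers hand off work and return immediately, instead of threads stacking up behind a lock:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class WorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _work = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public WorkQueue()
    {
        _worker = new Thread(() =>
        {
            // Runs each queued item one at a time, in FIFO order.
            foreach (var action in _work.GetConsumingEnumerable())
                action();
        });
        _worker.IsBackground = true;
        _worker.Start();
    }

    public void Enqueue(Action action) => _work.Add(action);

    public void Dispose()
    {
        _work.CompleteAdding();   // let the worker drain remaining items, then stop
        _worker.Join();
    }
}
```

Because a single worker executes the items, the items themselves need no lock - the queue provides the mutual exclusion.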
As long as you are not waiting on multiple lock objects, this should be fine. Deadlock occurs when you have a situation like this:
Thread A:
lock (lockObject1)
{
    // Do some stuff
    lock (lockObject2)
    {
        // Do some stuff
    }
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
    lock (lockObject1)
    {
        // Do some stuff
    }
}
If thread A locks lockObject1, and thread B locks lockObject2 before thread A reaches it, then both threads are waiting for an object that is locked by the other thread, and neither will unlock, because each is waiting while holding an object locked. This is an oversimplified example -- there are many ways one can end up in this situation.
To avoid deadlock, do not wait on a second object while you have a first object locked. If you lock one object at a time like this, you can't get deadlocked because eventually, the locking thread will release the object a waiting thread needs. So for example, the above should be unrolled:
Thread A:
lock (lockObject1)
{
    // Do some stuff
}
lock (lockObject2)
{
    // Do some stuff
}
Thread B:
lock (lockObject2)
{
    // Do some stuff
}
lock (lockObject1)
{
    // Do some stuff
}
In this case, each lock operation will complete without trying to acquire other resources, and so deadlock is avoided.
This is not making the action transactional. I would take "transactional" to mean that either the entire operation succeeds or it has no effect -- if I update two local objects inside your synchronization block, an error with the second does not roll back changes to the first.
Also, there is nothing stopping the main thread from using the two objects while they are being updated -- it needs to cooperate by also locking.
Locking in the background thread is only meaningful if you also lock when you use those objects in the main thread.
