If a singleton is accessed from multiple threads, and the singleton itself is thread-safe, which threads block when the singleton is accessed?
For example, suppose there is a main thread A. A first accesses the singleton S, then does something else.
A bit later, thread B accesses the singleton S.
If B accesses S, is the singleton still somehow bound to the context of thread A, so that thread A is blocked as well, or is only thread B blocked (along with any other threads actually trying to access it)?
(-> means "accesses")
A->S {}
A->X {}
B->S {
...
C->S
} - will B block only C, or also A?
To answer the questions: here is the thread-safe singleton (stripped down):
public sealed class Singleton
{
    private static volatile Singleton instance;
    private static readonly object _sync = new object();

    private Singleton()
    {
        // do something...
    }

    public static Singleton Instance
    {
        get
        {
            // First check avoids taking the lock on every access.
            if (instance == null)
            {
                lock (_sync)
                {
                    // Second check: another thread may have created the
                    // instance while this one was waiting for the lock.
                    if (instance == null)
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }
}
(and of course locking in the methods)
My question is mostly specific to the following point: I know that the lock prevents multiple threads from accessing the same region of code at once. But if the original thread, in whose scope the singleton was created, does not currently hold the lock, will it also be blocked when another thread takes the lock, just because the singleton instance was created in its scope? Or will the singleton only ever run in the scope of the original thread?
Usually, thread safety for a singleton means mutual exclusion. That is, if a thread needs to use the singleton, it must acquire a lock/token, do what it needs, and release the token. While it holds the token, no other thread can acquire it: any thread that tries is blocked and queued, and receives the token as soon as the holder releases it. This ensures that only one thread at a time accesses the protected resource (a singleton object in this case).
This is the typical scenario; your mileage might vary.
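For illustration, here's a minimal sketch of that behaviour (the DoWork method and the console output are invented; the stripped-down singleton stands in for the one in the question). B blocks only while A is inside the locked region, and a thread that never calls into S is never blocked at all:
using System;
using System.Threading;

// A stripped-down stand-in for the Singleton in the question,
// with one locked method so the blocking behaviour is visible.
public sealed class Singleton
{
    private static readonly Singleton instance = new Singleton();
    private static readonly object _sync = new object();
    private Singleton() { }
    public static Singleton Instance
    {
        get { return instance; }
    }

    public void DoWork(string name)
    {
        lock (_sync)
        {
            Console.WriteLine(name + " entered");
            Thread.Sleep(100);   // pretend to work while holding the lock
            Console.WriteLine(name + " left");
        }
    }
}

class Program
{
    static void Main()
    {
        var a = new Thread(() => Singleton.Instance.DoWork("A"));
        var b = new Thread(() => Singleton.Instance.DoWork("B"));
        a.Start();
        b.Start();   // B blocks only inside DoWork, only while A holds _sync
        a.Join();
        b.Join();
        // A thread that never calls DoWork is never blocked at all.
    }
}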
On a related note, Singleton is considered a bad idea by most people.
Synchronization mechanisms for C# are covered in part 2 of the tutorial linked by makc, which is quite nice.
Thread-safe normally means only one thread can access it at a time. A lock around a critical section means that when multiple threads try to run that piece of code, only one at a time can proceed; the others are blocked.
Let's assume, as in your question, that the class is synchronised at the class level. Then while A is calling methods on S, any other thread trying to call S at the same time has to wait until A is finished.
Once A has finished running S, all waiting threads can be re-scheduled, and one of them will then acquire the lock and run S (blocking any remaining waiting threads).
Meanwhile... A can go ahead and run X while someone else is accessing S (unless they share the same lock).
Think of a lock (specifically a mutex in this example) as a token: only the thread holding the token can run the code it protects. Once it's done, it drops the token, and the next thread that picks it up can proceed.
Typically your synchronisation is done at a finer-grained level than across the whole class, say on a specific method or a specific block of code within a method. This avoids threads wasting time waiting around when they could actually both access different methods that don't affect each other.
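As a hedged sketch of that finer-grained idea (all names are invented): two independent locks let unrelated methods proceed concurrently, while calls guarded by the same lock still exclude each other.
using System.Threading;

public sealed class S
{
    private readonly object _queueLock = new object();
    private readonly object _statsLock = new object();
    private int _queued;
    private int _processed;

    // Threads calling EnqueueWork contend only on _queueLock...
    public void EnqueueWork()
    {
        lock (_queueLock)
        {
            _queued++;
        }
    }

    // ...so a thread in RecordResult (guarded by _statsLock)
    // never waits for a thread that is inside EnqueueWork.
    public void RecordResult()
    {
        lock (_statsLock)
        {
            _processed++;
        }
    }
}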
It'll depend on how thread-safe your singleton, or any other object, actually is.
For example, if you use a Monitor or Mutex, only one thread at a time will have access to the code block protected by one of these synchronization primitives. Say one thread tries to enter a synchronized code block while some other thread has acquired the lock: the second thread will wait until the first releases the lock.
On the other hand, if you use a Semaphore, you define how many threads can pass through a protected code block at once. Say the semaphore allows 8 threads at the same time: if a 9th thread tries to enter the protected code block, it will wait until the semaphore signals that one or more slots are available for the queued threads.
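Here's a minimal sketch of the semaphore case (the counts and names are arbitrary): up to 8 threads can be inside at once, and a 9th waits until one of them releases a slot.
using System;
using System.Threading;

class SemaphoreDemo
{
    // Allow at most 8 concurrent threads in the protected region.
    private static readonly SemaphoreSlim _slots = new SemaphoreSlim(8, 8);

    static void Worker(int id)
    {
        _slots.Wait();          // a 9th caller blocks here until a slot frees up
        try
        {
            Console.WriteLine("thread " + id + " inside");
            Thread.Sleep(200);  // simulate work
        }
        finally
        {
            _slots.Release();   // give the slot back
        }
    }

    static void Main()
    {
        for (int i = 1; i <= 9; i++)
        {
            int id = i;
            new Thread(() => Worker(id)).Start();
        }
    }
}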
There are different strategies when it comes to synchronizing objects in multi-threaded code.
Check this MSDN article:
http://msdn.microsoft.com/en-us/library/ms173178(v=vs.110).aspx
UPDATE
I've checked your code in your updated question body.
Definitely: yes. Any thread that tries to acquire the lock, even the main thread, will be blocked until the one holding it releases it.
Imagine if it didn't work this way and the synchronization rules applied to every thread except the main one: that would leave a window for a non-synchronized code region.
The lock statement compiles into something like a Monitor.Enter/Monitor.Exit pair. This means the current thread acquires an exclusive lock on the locked object.
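Roughly, the expansion looks like the following sketch (this follows the well-known C# 4+ pattern; the surrounding class is invented):
using System.Threading;

class LockExpansion
{
    private static readonly object _sync = new object();

    static void LockedRegion()
    {
        // What `lock (_sync) { /* body */ }` compiles down to, approximately:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(_sync, ref lockTaken); // blocks until this thread owns the monitor
            /* body */
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(_sync);             // always released, even if the body throws
        }
    }
}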
UPDATE 2
Taken from one of the OP's comments:
Can you explain why? I mean if the main thread does nothing with the
singleton in the moment, then the thread does not try to get that lock?
Oops! I think you've forgotten something about how threading works!
When you protect a code region using a thread synchronization primitive like Monitor (the lock keyword uses a Monitor behind the scenes), you're blocking only the threads that try to enter the protected/synchronized region, not every working thread, until the owner releases the lock.
Let's say there're two threads A and B and you've this code:
lock (_syncObject)
{
    // Do some stuff
}
Thread A goes through the synchronized code region and B is a background worker that's doing some other stuff that won't go through the protected region. In this case, B won't be blocked.
In other words: when you synchronize threaded access to some region, you're protecting an object. lock (or Mutex, AutoResetEvent, or whatever) is not equivalent to some hypothetical Thread.SleepAll(). If threads are started and working, and none of them goes through the synchronized access to the object, no thread is blocked.
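A small sketch of that point (names invented): thread A holds the lock for a full second, yet thread B, which never touches _syncObject, keeps running the whole time.
using System;
using System.Threading;

class NoBlockDemo
{
    private static readonly object _syncObject = new object();

    static void Main()
    {
        var a = new Thread(() =>
        {
            lock (_syncObject)
            {
                Thread.Sleep(1000);   // hold the lock for a second
            }
        });

        var b = new Thread(() =>
        {
            // B never tries to enter the synchronized region,
            // so it is never blocked by A's lock.
            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("B is still working");
                Thread.Sleep(100);
            }
        });

        a.Start();
        b.Start();
        a.Join();
        b.Join();
    }
}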
Related
Consider two threads running simultaneously: A is reading and B is writing. If A's CPU time slice ends in the middle of its code, thread B continues.
Is there any way to not give the CPU back until A finishes, while B can still start or continue?
You need to understand that you have almost no control over when CPU is given back and to whom it is given. The operating system does that. To have control on that, you'd need to be the operating system. The only things you can usually do are:
start a thread
set thread priority, so some threads are more likely to get time than others
put a thread to sleep immediately, and ask the operating system to wake it up upon some condition, possibly with a timeout (a waiting time limit)
as a special case of that last point, there is often also a shorthand:
put a thread to sleep immediately for a specified amount of time
By "sleep" I mean that this thread is paused and will not get any CPU time, even if all CPUs are idle, unless the thread is woken up by the OS due to some condition.
Furthermore, in a typical case there is no "thread A and thread B that switch CPU time between them"; there are lots of threads from various processes and from the operating system itself, plus your two threads. This means that when your thread A loses the CPU, most probably it will not be thread B that gets the time next. Some other thread from somewhere else will get it, and at some future point in time maybe your thread A, or maybe thread B, will get it back.
This means there is very little you can be sure of. You can be sure that your threads are
either dead
or sleeping
or proceeding 'forward' in a hard to determine order
If you need to ensure that some threads are synchronized, you must either not start them simultaneously, or put them to sleep at precise moments and wake them up in a precise order.
You've just said in comments:
You know, if in the middle of A CPU time finishes, data that has been retrieved is not complete
This means that you need to ensure that thread B does not try to touch the data before thread A finishes writing it. But also, if you think about it, you need to ensure that thread A doesn't start writing the next data if thread B is still reading the previous data.
This means synchronization. This means that threads A and B must wait if the other thread is touching the data. This means that they need to be put to sleep and woken up when the other thread finishes.
In C#, the easiest way to do that is to use lock(x) keyword. When a thread enters a lock() section, it proceeds only if it is able to get the lock. If not, it is put to sleep. It can't get the lock if any other thread was faster and got it before. However, a thread releases the lock when it ends its job. Upon that time, one of the sleeping threads is woken up and given the lock.
lock(fooo) { // <- this line means 'acquire the lock or sleep'
iam.doing(myjob);
very.important(things);
thatshouldnt.be.interrupted();
byother(threads);
} // <- this line means 'release the lock'
So, when a thread gets through the lock(fooo) { line, you can't be sure it won't be interrupted. Oh, surely it will be: the OS will switch threads back and forth to other processes, and so on. But you can be sure that no other thread of your app will be inside that code block. If they tried to get inside while your thread held the lock, they'd immediately fall asleep at the first lock line. One of them will later be woken up when your thread gets out of that code.
There's one more thing. The lock() keyword requires a parameter. I wrote fooo there. You need to pass something that will act as the lock. It can be any object, even a plain object:
private object thelock = new object();

private void dosomething()
{
    lock (thelock)
    {
        foobarize(thebaz);
    }
}
However, you must ensure that all threads use the same lock instance. Writing code like
private void dosomething()
{
    object thelock = new object();
    lock (thelock)
    {
        foobarize(thebaz);
    }
}
is nonsense, since every thread executing those lines will try locking on its own fresh object instance, will see it as "free" (it's new, just created; no one took it earlier), and will immediately get into the protected code block.
Now, you wrote about using ConcurrentQueue. This class already provides safe mechanisms against concurrent access: you can be sure that adding, reading, and removing items from that queue is safe, without adding any synchronization of your own. If you observe ill effects anyway, then most probably you put an item into the collection and then kept modifying that item. A concurrent collection will not guard you against that: it can only make sure that add/remove/etc. are safe, but it has no knowledge of, or control over, what you do to the items:
In short, if some thread B tries to read items from the collection, then in thread A this is NOT safe:
concurrentcoll.Add(item);
item.x = 5;
item.foobarize();
but this is safe:
item.x = 5;
item.foobarize();
concurrentcoll.Add(item);
// and do not touch the Item anymore here.
Please explain the difference between these two types of locking.
I have a List that I want to access in a thread-safe way:
var tasks = new List<string>();
1.
var locker = new object();
lock (locker)
{
    tasks.Add("work 1");
}
2.
lock (tasks)
{
    tasks.Add("work 2");
}
My thoughts:
1. Prevents two different threads from running the locked block of code at the same time. But if another thread runs a different method that also accesses tasks, this kind of lock won't help.
2. Blocks the List<> instance, so other threads in other methods will be blocked until I release the lock on tasks.
Am I right or mistaken?
(2) only blocks other code that explicitly calls lock (tasks). Generally, you should only do this if you know that tasks is a private field and thus can enforce throughout your class that lock (tasks) means locking operations on the list. This can be a nice shortcut when the lock is conceptually linked with access to the collection and you don't need to worry about public exposure of the lock. You don't get this 'for free', though; it needs to be explicitly used just like locking on any other object.
They do the same thing. Any other code that tries to modify the list without locking the same object will cause potential race conditions.
A better way might be to encapsulate the list in another object that obtains a lock before doing any operations on the underlying list; then any other code can simply call methods on the wrapper object without worrying about obtaining the lock.
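A hedged sketch of that wrapper idea (the class and its members are invented): the lock lives inside the wrapper, so callers can't forget to take it.
using System.Collections.Generic;

// All access to the inner list goes through methods that take the
// private lock, so callers never have to remember to lock themselves.
public sealed class SynchronizedTaskList
{
    private readonly object _sync = new object();
    private readonly List<string> _tasks = new List<string>();

    public void Add(string task)
    {
        lock (_sync)
        {
            _tasks.Add(task);
        }
    }

    public bool TryTake(out string task)
    {
        lock (_sync)
        {
            if (_tasks.Count > 0)
            {
                task = _tasks[0];
                _tasks.RemoveAt(0);
                return true;
            }
            task = null;
            return false;
        }
    }

    public int Count
    {
        get { lock (_sync) { return _tasks.Count; } }
    }
}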
Two questions about the lock() construct in .NET.
First, I am aware that if an object is locked within one class and another class attempts to lock the same object, this produces a deadlock. But why? I have read about it on MSDN, but MSDN is rarely all that clear.
----Edit Question One----
Still confused. I have a main thread (UI thread) that spawns many Threadpool threads. Each child thread locks the data before it works with it. This works fine every time.
If I then attempt to lock the same data from the UI thread, to check whether I should even bother creating a new thread for an edge case, I create deadlock nearly every time.
----Edit Question Two----
Secondly, if I have a compound object that I lock, are all child objects within it locked as well? Short code demo:
internal sealed class Update
{
    // Three objects instantiated via other external assemblies
    public DataObject One { get; set; }
    public DataObject Two { get; set; }
    public ReplayStatus Status { get; set; }
}
If I call lock(UpdateObject), are each of the three internal objects and all of their child objects locked as well?
So should I do something like this to prevent threads from playing with my data objects:
lock (UpdateObject.One)
{
    lock (UpdateObject.Two)
    {
        lock (UpdateObject.Status)
        {
            // Do stuff
        }
    }
}
First, I am aware that if an object is locked within one class and another class attempts to lock the same object this produces a deadlock.
No. If one thread locks an object and a second thread attempts to lock that object, that second thread must wait for the first thread to exit the lock.
Deadlock is something else:
1. thread1 locks instanceA
2. thread2 locks instanceB
3. thread1 attempts to lock instanceB and now must wait on thread2
4. thread2 attempts to lock instanceA and now must wait on thread1
These two threads can no longer execute, and so never release their locks. What a mess.
If I call lock(UpdateObject) are each of the three internal objects and all of their child objects locked as well?
No, the "lock" is only on the locked instance. Note: the lock doesn't prevent anything other than a second thread from acquiring a lock on that instance at the same time.
First, the whole point of a lock is that two sections of code can't get ahold of the same lock at once. This is to coordinate multiple threads working with the same stuff without interfering with each other. If you have a lock on an object, then anyone else that tries to get the lock will block (wait) until the original lock is released (only one thread can have the lock at any given time). You only have a deadlock if the first thread never gives up the lock, or if both threads are waiting for something from each other and neither can proceed until each gets what it's waiting for.
Second, if you lock an object in C#, you're not really "locking" the object in any semantic sense. You're acquiring a "lock" on the object (which you later release or relinquish). The object is purely a convenient token that uniquely identifies which lock you wish to obtain. So no, a lock on an object does not create a lock on any sub-parts of that object.
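To make the "token" point concrete, here's a minimal sketch reusing the Update class from the question (the stub types and the field writes are invented): while one thread holds the lock on the Update instance, another thread can still mutate its members freely, because it never asks for that lock.
using System.Threading;

public class DataObject { }
public class ReplayStatus { }

internal sealed class Update
{
    public DataObject One { get; set; }
    public DataObject Two { get; set; }
    public ReplayStatus Status { get; set; }
}

class TokenDemo
{
    static void Main()
    {
        var update = new Update();

        var holder = new Thread(() =>
        {
            lock (update)          // acquires the lock *token* for this instance
            {
                Thread.Sleep(1000);
            }
        });

        var mutator = new Thread(() =>
        {
            // No lock(update) here, so nothing stops this thread:
            // locking an object does not "freeze" it or its members.
            update.One = new DataObject();
            update.Status = new ReplayStatus();
        });

        holder.Start();
        mutator.Start();
        holder.Join();
        mutator.Join();
    }
}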
This is an old-school WinForms application that I am working with, and the design pattern that was used is as follows:
Whenever something needs to be transactional, an operation is performed on its own thread, and the thread is locked (a specific lock object is used for each operation); then a call is made to the WCF service, some local objects are updated, and the lock is released.
Is this good practice?
Yes, but be careful with multithreading and read up on it well, as too many locks might create a deadlock situation.
I don't quite know what you mean, "lock a thread." Is it something like this?
static object ThreadLock = new object();
void ThreadProc(object state)
{
lock (ThreadLock)
{
// do stuff here
}
}
If so, there's nothing really wrong with that design. Your UI thread spawns a thread that's supposed to execute that code, and the lock prevents multiple threads from executing concurrently. It's a little bit wasteful in that you could potentially have many threads queued up behind the lock, but in practice you probably don't have more than one or two threads waiting. There are more efficient ways to do it (implement a task queue of some sort), but what you have is simple and effective.
As long as you are not waiting on multiple lock objects, this should be fine. Deadlock occurs when you have a situation like this:
Thread A:
lock (lockObject1)
{
// Do some stuff
lock (lockObject2)
{
// Do some stuff
}
}
Thread B:
lock (lockObject2)
{
// Do some stuff
lock (lockObject1)
{
// Do some stuff
}
}
If thread A locks lockObject1, and thread B locks lockObject2 before thread A reaches its inner lock, then each thread ends up waiting for an object that is locked by the other, and neither can proceed because each is holding a lock while it waits. This is an oversimplified example -- there are many ways one can end up in this situation.
To avoid deadlock, do not wait on a second object while you have a first object locked. If you lock one object at a time like this, you can't get deadlocked because eventually, the locking thread will release the object a waiting thread needs. So for example, the above should be unrolled:
Thread A:
lock (lockObject1)
{
// Do some stuff
}
lock (lockObject2)
{
// Do some stuff
}
Thread B:
lock (lockObject2)
{
// Do some stuff
}
lock (lockObject1)
{
// Do some stuff
}
In this case, each lock operation will complete without trying to acquire other resources, and so deadlock is avoided.
This is not making the action transactional. I would take "transactional" to mean that either the entire operation succeeds or it has no effect -- if I update two local objects inside your synchronization block, an error with the second does not roll back changes to the first.
Also, there is nothing stopping the main thread from using the two objects while they are being updated -- it needs to cooperate by also locking.
Locking in the background thread is only meaningful if you also lock when you use those objects in the main thread.
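A minimal sketch of that cooperation (the shared state and lock object are invented): both the background update and the main-thread read take the same lock; if either side skips it, the lock protects nothing.
using System.Threading;

class CooperativeLocking
{
    private static readonly object _stateLock = new object();
    private static int _localA, _localB;

    // Background thread: update both objects under the lock.
    static void BackgroundUpdate(int a, int b)
    {
        lock (_stateLock)
        {
            _localA = a;
            _localB = b;   // readers never see A updated but B stale
        }
    }

    // Main thread: reading must take the SAME lock, or it may observe
    // a half-finished update despite the background thread locking.
    static void ReadState(out int a, out int b)
    {
        lock (_stateLock)
        {
            a = _localA;
            b = _localB;
        }
    }
}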
I have developed a generic producer-consumer queue which signals via Monitor.Pulse in the following way.
The enqueue:
public void EnqueueTask(T task)
{
    _workerQueue.Enqueue(task);
    Monitor.Pulse(_locker);
}
The dequeue:
private T Dequeue()
{
    T dequeueItem;
    if (_workerQueue.Count > 0)
    {
        _workerQueue.TryDequeue(out dequeueItem);
        if (dequeueItem != null)
            return dequeueItem;
    }
    while (_workerQueue.Count == 0)
    {
        Monitor.Wait(_locker);
    }
    _workerQueue.TryDequeue(out dequeueItem);
    return dequeueItem;
}
The wait section produces the following SynchronizationLockException:
"object synchronization method was called from an unsynchronized block of code"
Do I need to synchronize it? Why? Is it better to use ManualResetEvent or the Slim version from .NET 4.0?
Yes, the current thread needs to "own" the monitor in order to call either Wait or Pulse, as documented. (So you'll need to lock for Pulse as well.) I don't know the details for why it's required, but it's the same in Java. I've usually found I'd want to do that anyway though, to make the calling code clean.
Note that Wait releases the monitor itself, then waits for the Pulse, then reacquires the monitor before returning.
As for using ManualResetEvent or AutoResetEvent instead - you could, but personally I prefer using the Monitor methods unless I need some of the other features of wait handles (such as atomically waiting for any/all of multiple handles).
From the MSDN description of Monitor.Wait():
Releases the lock on an object and blocks the current thread until it reacquires the lock.
The 'releases the lock' part is the problem: the object isn't locked. You are treating the _locker object as though it were a WaitHandle. Rolling your own locking design that's provably correct is a form of black magic that's best left to our medicine men, Jeffrey Richter and Joe Duffy. But I'll give this one a shot:
public class BlockingQueue<T> {
private Queue<T> queue = new Queue<T>();
public void Enqueue(T obj) {
lock (queue) {
queue.Enqueue(obj);
Monitor.Pulse(queue);
}
}
public T Dequeue() {
T obj;
lock (queue) {
while (queue.Count == 0) {
Monitor.Wait(queue);
}
obj = queue.Dequeue();
}
return obj;
}
}
In most any practical producer/consumer scenario you will want to throttle the producer so it cannot fill the queue unbounded. Check Duffy's BoundedBuffer design for an example. If you can afford to move to .NET 4.0 then you definitely want to take advantage of its ConcurrentQueue class, it has lots more black magic with low-overhead locking and spin-waiting.
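For the throttling point, one option (sketched here, not taken from the BoundedBuffer design mentioned above) is .NET 4's BlockingCollection<T>, which wraps a ConcurrentQueue<T> by default and blocks the producer once a bounded capacity is reached:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BoundedQueueDemo
{
    static void Main()
    {
        // Capacity 100: Add blocks the producer once the queue is full.
        using (var queue = new BlockingCollection<int>(100))
        {
            var producer = Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 1000; i++)
                    queue.Add(i);          // throttled automatically when full
                queue.CompleteAdding();    // signal "no more items coming"
            });

            var consumer = Task.Factory.StartNew(() =>
            {
                // Blocks when the queue is empty and completes
                // cleanly after CompleteAdding has been called.
                foreach (int item in queue.GetConsumingEnumerable())
                    Console.WriteLine(item);
            });

            Task.WaitAll(producer, consumer);
        }
    }
}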
The proper way to view Monitor.Wait and Monitor.Pulse/PulseAll is not as providing a means of waiting, but rather (for Wait) as a means of letting the system know that the code is in a waiting loop which can't exit until something of interest changes, and (for Pulse/PulseAll) as a means of letting the system know that code has just changed something that might satisfy the exit condition of some other thread's waiting loop. One should be able to replace all occurrences of Wait with Sleep(0) and still have the code work correctly (even if much less efficiently, as a result of spending CPU time repeatedly testing conditions that haven't changed).
For this mechanism to work, it is necessary to avoid the possibility of the following sequence:
The code in the wait loop tests the condition when it isn't satisfied.
The code in another thread changes the condition so that it is satisfied.
The code in that other thread pulses the lock (which nobody is yet waiting on).
The code in the wait loop performs a Wait since its condition wasn't satisfied.
The Wait method requires that the waiting thread have a lock, since that's the only way it can be sure that the condition it's waiting upon won't change between the time it's tested and the time the code performs the Wait. The Pulse method requires a lock because that's the only way it can be sure that if another thread has "committed" itself to performing a Wait, the Pulse won't occur until after the other thread actually does so. Note that using Wait within a lock doesn't guarantee that it's being used correctly, but there's no way that using Wait outside a lock could possibly be correct.
The Wait/Pulse design actually works reasonably well if both sides cooperate. The biggest weaknesses of the design, IMHO, are (1) there's no mechanism for a thread to wait until any of a number of objects is pulsed; (2) even if one is "shutting down" an object such that all future wait loops should exit immediately (probably by checking an exit flag), the only way to ensure that any Wait to which a thread has committed itself will get a Pulse is to acquire the lock, possibly waiting indefinitely for it to become available.