Suppose that I have this mutex (or semaphore) called Mutex1
private static readonly SemaphoreSlim Mutex1 = new SemaphoreSlim(0, 1);
Will the following code always work as I expect?
Thread A
await Mutex1.WaitAsync(); // wait in thread "A" until thread "B" releases mutex
Thread B
Mutex1.Release();
await Mutex1.WaitAsync(); // thread A should continue and thread B should wait.
Is Mutex1.Release in thread B always guaranteed to cause continuation in thread A?
My guess is no, because before thread A continues, thread B may wait again, and since waiting threads are not queued, thread B may continue again instead of thread A. Am I right?
The safe approach that I'm currently using adds a second field called Mutex2.
private static readonly SemaphoreSlim Mutex2 = new SemaphoreSlim(0, 1);
Thread A
await Mutex1.WaitAsync(); // wait in thread "A" until thread "B" releases mutex
Thread B
Mutex1.Release();
await Mutex2.WaitAsync(); // notice Mutex TWO
Now I'm sure that switching between threads is handled correctly, but I wanted to know whether I can drop one field and use the first approach safely. Note that this operation is wrapped in a lock, so it is single-threaded and safe.
(For those who are curious: this is part of an application that synchronizes communication between two apps using UWP app services.)
From https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim(v=vs.110).aspx we have:
If multiple threads are blocked, there is no guaranteed order, such as FIFO or LIFO, that controls when threads enter the semaphore.
So, to me, this says that the slim semaphores do not queue up requests.
And, that your first approach has a race condition. That is, thread B can release the mutex and immediately reacquire it, locking out thread A for an indeterminate amount of time.
So, you probably have to use your two semaphore approach. Or, use a different synchronization primitive that supports a [fair] queue of waiters.
Given the way you are using this, you should try an AutoResetEvent. https://msdn.microsoft.com/en-us/library/system.threading.autoresetevent(v=vs.110).aspx
That will do what you want.
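If you keep the two-sided hand-off from the question, a minimal sketch with AutoResetEvents could look like this (the StepA/StepB names and Worker methods are placeholders, and it assumes using System.Threading):

private static readonly AutoResetEvent StepA = new AutoResetEvent(false);
private static readonly AutoResetEvent StepB = new AutoResetEvent(false);

// Thread A: wait until B signals, do A's part, then hand control back to B
static void WorkerA()
{
    StepA.WaitOne();   // blocks until thread B calls StepA.Set()
    // ... A's part of the exchange ...
    StepB.Set();       // wake B
}

// Thread B: signal A, then wait for A to finish its part
static void WorkerB()
{
    StepA.Set();       // releases the thread waiting on StepA
    StepB.WaitOne();   // B cannot "steal" A's signal, because it waits on a different event
}

Because each side waits on its own event, the thread that just called Set can never consume the very signal it released, which removes the race described in the question.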
Related
If I comment out the Thread.Sleep call or pass 0 to it, there is no deadlock. In other cases there is a deadlock. uptask is executed by a thread from the thread pool, and scheduling it takes some time. In the meantime, the main thread acquires lockB, then lockA, prints the string, and releases the locks. After that, uptask starts running, sees that lockA and lockB are free, and so there is no deadlock. But if I sleep the main thread, in the meantime uptask advances, sees that lockB is locked, and a deadlock happens. Can anybody explain this better or verify whether this is the reason?
using System;
using System.Threading;
using System.Threading.Tasks;

class MyAppClass
{
    public static void Main()
    {
        object lockA = new object();
        object lockB = new object();

        var uptask = Task.Run(() =>
        {
            lock (lockA)
            {
                lock (lockB)
                {
                    Console.WriteLine("A outer and B inner");
                }
            }
        });

        lock (lockB)
        {
            // Comment out the following statement, or sleep for 0 ms, and there is no deadlock.
            // But sleeping for 1 ms or more leads to a deadlock. Reason?
            Thread.Sleep(1);
            lock (lockA)
            {
                Console.WriteLine("B outer and A inner");
            }
        }

        uptask.Wait();
        Console.ReadKey();
    }
}
You really cannot depend on Thread.Sleep to prevent a deadlock. It worked in your environment for some time. It might not work all the time and might not work in other environments.
Since you are obtaining the locks in reverse order, then the chance of a deadlock is there.
To prevent the deadlock, make sure you obtain the locks in order (e.g. lockA then lockB in both threads).
My best guess for why this is happening is that if you don't sleep, the main thread will obtain and release both locks before the other thread (from the thread pool) obtains lockA. Please note that scheduling and running a Task on the thread pool takes some time. In most cases it is negligible, but in your case it made a difference.
To verify this, add the following line right before uptask.Wait():
Console.WriteLine("main thread is done");
And this line after obtaining lockA from the thread-pool thread:
Console.WriteLine("Thread pool thread: obtained lockA");
In the case where there is no deadlock, you will see the first message printed to the console before the thread-pool thread prints its message.
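As a minimal sketch of the lock-ordering fix suggested above (same lockA, lockB, and task structure as in the question), both threads take lockA first and lockB second, so neither can hold one lock while waiting for the other:

object lockA = new object();
object lockB = new object();

var uptask = Task.Run(() =>
{
    lock (lockA)        // the task thread takes lockA first...
    {
        lock (lockB)    // ...and lockB second
        {
            Console.WriteLine("A outer and B inner");
        }
    }
});

lock (lockA)            // the main thread uses the same order
{
    Thread.Sleep(1);    // the sleep no longer matters
    lock (lockB)
    {
        Console.WriteLine("A outer and B inner (main thread)");
    }
}
uptask.Wait();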
You have two threads. The main thread and the thread that executes the task. If the main thread is able to take lockB and the task thread is able to take lockA then you have a deadlock. You should never lock two different resources in different sequence because that can lead to deadlock (but I expect that you already know this based on the abstract nature of your question).
In most cases the task thread will start slightly delayed, so the main thread will get both locks first; but if you insert a Thread.Sleep(1) between taking lockB and lockA, the task thread is able to get lockA before the main thread and BAM! You have a deadlock.
However, the Thread.Sleep(1) is not a necessary condition for getting a deadlock. If the operating system decides to schedule the task thread in a way that lets it get lockA before the main thread, you have your deadlock, and just because there is no deadlock on your lightning-fast machine, you may experience deadlocks on other computers that have fewer processing resources.
The delay simply widens the window in which the task thread can grab lockA while the main thread still holds lockB, which is why it increases the likelihood of a deadlock.
From https://msdn.microsoft.com/en-us/library/d00bd51t(v=vs.110).aspx, "millisecondsTimeout"
Type: System.Int32
"The number of milliseconds for which the thread is suspended. If the value of the millisecondsTimeout argument is zero, the thread relinquishes the remainder of its time slice to any thread of equal priority that is ready to run. If there are no other threads of equal priority that are ready to run, execution of the current thread is not suspended."
Martin Liversage answered this question concisely.
To paraphrase, the code in your experiment is deadlock prone even without the Thread.Sleep() statement. Without it, the window of opportunity for the deadlock was extremely small and it might have taken eons to occur. That is why you did not experience it when omitting the Thread.Sleep() statement. By adding any time-consuming logic at that point (i.e., the Thread.Sleep call), you widen this window and increase the probability of the deadlock.
Also, this window can widen or shrink when you run your code on different hardware or a different OS, where task scheduling may differ.
The below code uses a background worker thread to process work items one by one. The worker thread starts waiting on a ManualResetEvent whenever it runs out of work items. Main thread periodically adds new work items and wakes the worker thread.
The waking mechanism has a race condition. If a new item is added by main thread while worker thread is at the place indicated by *, the worker thread will not get woken.
Is there a simple and correct way of waking the worker thread that does not have this problem?
ManualResetEvent m_waitEvent;

// Worker thread processes work items one by one
void WorkerThread()
{
    while (true)
    {
        m_waitEvent.WaitOne();
        bool noMoreItems = ProcessOneWorkItem();
        if (noMoreItems)
        {
            // *
            m_waitEvent.Reset(); // No more items, wait for more
        }
    }
}

// Main thread code that adds a new work item
AddWorkItem();
m_waitEvent.Set(); // Wake worker thread
You're using the wrong synchronization mechanism. Rather than an MRE, just use a semaphore. The semaphore's count then represents the number of items yet to be processed: you release it to add one, and wait on it to take one. There is no conditional Reset; you always perform the semaphore operation, so there is no race condition.
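A minimal sketch of that counting idea, keeping the shape of the code from the question (SemaphoreSlim is used here, but a classic Semaphore works the same way; ProcessOneWorkItem is assumed to handle exactly one item per call):

SemaphoreSlim m_itemCount = new SemaphoreSlim(0);   // current number of queued work items

// Worker thread: each Wait consumes exactly one queued item
void WorkerThread()
{
    while (true)
    {
        m_itemCount.Wait();     // blocks while the count is zero
        ProcessOneWorkItem();   // the count guarantees an item is available
    }
}

// Main thread code that adds a new work item
AddWorkItem();
m_itemCount.Release();          // count goes up by one; a release is never lost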
That said, you can avoid the problem entirely. Rather than managing the synchronization primitives yourself you can just use a BlockingCollection. Have the producer add items and the consumer consume them. The synchronization will all be taken care of for you by that class, and likely more efficiently than your implementation would be as well.
I tend to keep a counter of current work items, incrementing and decrementing it as items are added and processed. You can turn your processor thread into a loop that checks that counter and then sleeps, rather than running once and exiting. That way, no matter when the item is added, you are at most one sleep cycle away from it being processed.
If a singleton is accessed from multiple threads, and the singleton itself is thread-safe, which threads will block when the singleton is accessed?
For example, suppose there is a main thread A. A first accesses the singleton S, then does something else.
A bit later thread B accesses the singleton S.
If B accesses S, will the singleton still be in the context of thread A and also block thread A, or only thread B (and other threads actually trying to access it)?
-> accesses
A->S {}
A->X {}
B->S {
...
C->S
} - will B only block C or also A?
To answer the questions, here is the thread-safe singleton (stripped down):
private static volatile Singleton instance;
private static object _sync = new Object();

private Singleton()
{
    // do something...
}

public static Singleton Instance
{
    get
    {
        if (instance == null)
        {
            lock (_sync)
            {
                if (instance == null)
                    instance = new Singleton();
            }
        }
        return instance;
    }
}
(and of course locking in methods)
And the question is mostly about the following point:
I know that the lock prevents multiple threads from accessing the same region of code at once. My question is this:
If the thread in whose scope the Singleton was created does not currently hold the lock, will it also be blocked when another thread acquires the lock, simply because the Singleton instance was created in its scope? Or will the singleton only ever run in the scope of the original thread?
Usually, thread safety for a singleton means mutual exclusion. That is, if a thread needs to use the singleton, it must acquire a lock/token, do what it needs, and release the token. For the whole time it holds the token, no other thread can acquire it. Any thread that tries will be blocked and queued, and will receive the token once the holder releases it. This ensures that only one thread accesses the protected resource (a singleton object in this case) at a time.
This is the typical scenario; your mileage might vary.
On a related note, Singleton is considered a bad idea by most people.
Synchronization mechanisms for C# are covered in part 2 of the tutorial linked by makc, which is quite nice.
Thread safe normally means only one thread can access it at a time. Locks around critical sections will mean multiple threads trying to run that piece of code will be blocked and only one at a time can proceed and access it.
Let's assume, as in your question, that the class is synchronised at the class level; then while A is calling methods on S, any other thread trying to call S at the same time will have to wait until A is finished.
Once A has finished running S then all waiting threads can be re-scheduled and one of them will then acquire the lock and run S (blocking any remaining waiting threads).
Meanwhile...A can go ahead and run X while someone else is accessing S (unless they share the same lock).
Think of a lock - specifically a mutex in this example - as a token: only the thread holding the token can run the code it protects. Once it's done, it drops the token, and the next thread that picks it up can proceed.
Typically your synchronisation is done at a finer-grained level than across the whole class, say on a specific method or a specific block of code within a method. This avoids threads wasting time waiting around when they could actually both access different methods that don't affect each other.
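For example, here is a small sketch of that finer-grained locking, where two unrelated operations use separate lock objects so callers of one never block callers of the other (the class, field, and method names are made up for illustration):

public sealed class Counters
{
    private readonly object _hitsLock = new object();
    private readonly object _errorsLock = new object();
    private int _hits;
    private int _errors;

    // Threads recording hits only contend with other hit updates...
    public void RecordHit()
    {
        lock (_hitsLock) { _hits++; }
    }

    // ...and never block threads recording errors, because that path has its own lock.
    public void RecordError()
    {
        lock (_errorsLock) { _errors++; }
    }
}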
It'll depend on how your singleton, or any other object, has been made thread-safe.
For example, if you use a Monitor or Mutex, only one thread at a time will have access to the code block protected by that synchronization primitive. If a thread tries to enter a synchronized code block while some other thread holds the lock, it will wait until that lock is released.
On the other hand, if you use a Semaphore, you define how many threads can pass through a protected code block at once. Say the semaphore allows 8 threads at the same time: if a 9th thread tries to enter the protected code block, it will wait until the semaphore signals that one or more slots have become available for the queued threads.
There are different strategies for synchronizing access to objects in multi-threaded code.
Check this MSDN article:
http://msdn.microsoft.com/en-us/library/ms173178(v=vs.110).aspx
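As a small sketch of the semaphore case described above (8 concurrent slots; the DoProtectedWork name is just a placeholder, and it assumes using System.Threading):

private static readonly SemaphoreSlim _slots = new SemaphoreSlim(8, 8);

static void DoProtectedWork()
{
    _slots.Wait();          // up to 8 threads get through; a 9th waits here
    try
    {
        // ... code that at most 8 threads may run at the same time ...
    }
    finally
    {
        _slots.Release();   // free a slot for any queued thread
    }
}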
UPDATE
I've checked your code in your updated question body.
Definitely yes: any thread, even the main thread, will be blocked until the one that acquired the lock releases it.
Imagine it were not this way and the rule applied to every thread except the main one: that would leave an unsynchronized way into the protected code region.
The lock statement compiles into something like a Monitor.Enter and Monitor.Exit pair. This means the current thread acquires an exclusive lock on the locked object.
UPDATE 2
Taken from some OP comment:
Can you explain why? I mean, if the main thread is doing nothing with the singleton at the moment, then that thread doesn't try to get the lock?
Oops! I think you've forgotten something about how threading works!
When you protect a code region using a thread synchronization approach like Monitor (the lock keyword uses a Monitor behind the scenes), you block only the threads that try to enter the protected/synchronized region; you are not blocking every running thread until the Monitor releases the lock.
Let's say there are two threads, A and B, and you have this code:
lock (_syncObject)
{
    // Do some stuff
}
Thread A goes through the synchronized code region and B is a background worker that's doing some other stuff that won't go through the protected region. In this case, B won't be blocked.
In other words: when you synchronize threaded access to some region, you're protecting an object. lock (or Mutex, AutoResetEvent or whatever) is not equivalent to some hypothetical Thread.SleepAll(). If threads are started and working and none of them goes through the synchronized access, no thread will be blocked.
I was reading the AutoResetEvent documentation on MSDN, and the following warning bothers me:
"Important:
There is no guarantee that every call to the Set method will release a thread. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen. Also, if Set is called when there are no threads waiting and the AutoResetEvent is already signaled, the call has no effect."
But this warning basically kills the very reason to have such thread synchronization techniques. For example, I have a list which holds jobs. There is only one producer, which adds jobs to the list, and multiple consumers waiting to get a job from the list, something like this:
Producer:

void AddJob(Job j)
{
    lock (qLock)
    {
        jobQ.Enqueue(j);
    }
    newJobEvent.Set(); // newJobEvent is AutoResetEvent
}
Consumer:

void Run()
{
    while (canRun)
    {
        newJobEvent.WaitOne();
        IJob job = null;
        lock (qLock)
        {
            job = jobQ.Dequeue();
        }
        // process job
    }
}
If the above warning is true, then if I enqueue two jobs very quickly, only one thread may pick up a job, won't it? I was under the assumption that Set is atomic, that is, it does the following:
Set the event.
If threads are waiting, pick one thread to wake up.
Reset the event.
Run the selected thread.
So I am basically confused about the warning in MSDN. is it a valid warning?
Even if the warning weren't true and Set were atomic, why would you use an AutoResetEvent here? Say some producers queue up 3 jobs in a row and there's one consumer. After processing the 2nd job, the consumer blocks and never processes the third.
I would use a ReaderWriterLockSlim for this type of synchronization. Basically, you need multiple producers to be able to have write locks, but you don't want consumers to lock out producers for a long time while they are only reading the queue size.
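A rough sketch of that idea with ReaderWriterLockSlim: producers take the write lock to mutate the queue, while consumers that only want to inspect the queue size take the read lock, so readers never block each other (the _jobs queue is a placeholder, and this assumes using System.Collections.Generic and System.Threading):

private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
private readonly Queue<IJob> _jobs = new Queue<IJob>();

// Producer: exclusive access while mutating the queue
public void AddJob(IJob j)
{
    _rwLock.EnterWriteLock();
    try { _jobs.Enqueue(j); }
    finally { _rwLock.ExitWriteLock(); }
}

// Consumers: any number of threads can read the count at the same time
public int PendingCount
{
    get
    {
        _rwLock.EnterReadLock();
        try { return _jobs.Count; }
        finally { _rwLock.ExitReadLock(); }
    }
}

Dequeuing would still need the write lock, of course, since it mutates the queue.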
The message on MSDN is a valid message indeed. What's happening internally is something like this:
Thread A waits for the event.
Thread B sets the event.
[If thread A is in a spin-wait]
[yes] Thread A detects that the event is set, unsets it and resumes its work.
[no] The event will tell thread A to wake up; once woken, thread A will unset the event and resume its work.
Note that the internal logic is not synchronous, since thread B doesn't wait for thread A to continue its business. You can make this synchronous by introducing a temporary ManualResetEvent that thread A has to signal once it continues its work and on which thread B has to wait. This is not done by default because of the inner workings of the Windows threading model. I guess the documentation is misleading but correct in saying that a single call to Set may release only a single waiting thread.
Alternatively, I would suggest you look at the BlockingCollection class in the System.Collections.Concurrent namespace of the BCL (introduced in .NET 4.0), which does exactly what you are trying to do.
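A small sketch of what the producer/consumer from the question might look like with BlockingCollection (reusing the IJob type from the question; assumes using System.Collections.Concurrent):

BlockingCollection<IJob> jobQ = new BlockingCollection<IJob>();

// Producer: adding the item is the only signal needed; nothing can be lost
void AddJob(IJob j)
{
    jobQ.Add(j);
}

// Consumer(s): GetConsumingEnumerable blocks while the collection is empty
// and ends when CompleteAdding() is called on the producer side
void Run()
{
    foreach (IJob job in jobQ.GetConsumingEnumerable())
    {
        // process job
    }
}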
I am using multiple threads in my application with a while(true) loop, and now I want to exit the loop when all the active threads have completed their work.
Assuming that you have a list of the threads themselves, here are two approaches.
Solution the first:
Use Thread.Join() with a timespan parameter to synch up with each thread in turn. The return value tells you whether the thread has finished or not.
Solution the second:
Check the Thread.IsAlive property to see if the thread is still running.
In either situation, make sure that your main thread yields processor time to the running threads, else your wait loop will consume most/all the CPU and starve your worker threads.
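A rough sketch of the first approach, assuming the worker threads are kept in a List<Thread> named workers; a timed Join both tests for completion and yields the processor while waiting:

// Wait until every worker has finished, yielding the CPU while we wait
foreach (Thread worker in workers)
{
    while (!worker.Join(TimeSpan.FromMilliseconds(50)))
    {
        // still running; the timed Join already gave up our time slice,
        // so this loop does not spin the CPU
    }
}
// all workers have completed at this point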
You can use Process.GetCurrentProcess().Threads.Count.
There are various approaches here, but ultimately most of them come down to having the executing threads do something whenever they finish (whether they complete normally or via an exception, which you don't want to happen anyway). A simple approach might be to use Interlocked.Decrement to reduce a counter, and if it reaches zero (or goes negative, which probably means an error), release a ManualResetEvent or Monitor.Pulse an object; in either case, the original thread would be waiting on that object. A number of such approaches are discussed here.
Of course, it might be easier to look at the TPL bits in .NET 4.0, which provide a lot of new options here (not least things like Parallel.For and PLINQ).
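A small sketch of the counter idea from the previous paragraph, assuming you know up front how many workers you start (the field and method names are illustrative):

private int _pending;                              // set to the worker count before starting
private readonly ManualResetEvent _allDone = new ManualResetEvent(false);

// Each worker runs this wrapper around its real work
private void RunWorker()
{
    try
    {
        DoRealWork();                              // placeholder for the thread's actual job
    }
    finally
    {
        if (Interlocked.Decrement(ref _pending) == 0)
            _allDone.Set();                        // the last worker out signals the waiter
    }
}

// Original thread
private void WaitForAll()
{
    _allDone.WaitOne();                            // blocks until the last worker calls Set
}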
If you are using a synchronized work queue, it might also be possible to set that queue to close (drain) itself, and simply wait for the queue to be empty? The assumption here being that your worker threads are doing something like:
T workItem;
// TryDequeue may block until either there is something to do,
// or the queue is terminated
while (queue.TryDequeue(out workItem))
{
    ProcessWorkItem(workItem);
}
// queue has closed - exit the thread
in which case, once the queue is empty, all your worker threads should already be in the process of exiting.
You can use Thread.Join(). The Join method will block the calling thread until the thread (the one on which the Join method is called) terminates.
So if you have a list of threads, you can loop through it and call Join on each one. Your loop will only exit when all the threads are dead. Something like this:
for (int i = 0; i < childThreadList.Count; i++)
{
    childThreadList[i].Join();
}
// ...The following code will execute once all threads in the list have terminated...
I find that using the Join() method is the cleanest way. I use multiple threads frequently, and each thread typically loads data from a different data source (Informix, Oracle and SQL at the same time). A simple loop, as mentioned above, calling Join() on each thread object (which I store in a simple List) works.
Carlos Merighe.
I prefer using a HashSet of Threads:
// create a HashSet of heavy tasks (threads) to run
HashSet<Thread> Threadlist = new HashSet<Thread>();
Threadlist.Add(new Thread(() => SomeHeavyTask1()));
Threadlist.Add(new Thread(() => SomeHeavyTask2()));
Threadlist.Add(new Thread(() => SomeHeavyTask3()));

// start the threads
foreach (Thread T in Threadlist)
    T.Start();

// these will execute sequentially on the main thread
NotSoHeavyTask1();
NotSoHeavyTask2();
NotSoHeavyTask3();

// join every thread to the main thread; Join returns immediately if a thread has already finished
foreach (Thread T in Threadlist)
    T.Join();

// finally this code will execute
MoreTasksToDo();