When multiple threads request a lock on the same object, does the CLR guarantee that the locks will be acquired in the order they were requested?
I wrote up a test to see if this was true, and it seems to indicate yes, but I'm not sure if this is definitive.
class LockSequence
{
    private static readonly object _lock = new object();

    private static DateTime _dueTime;

    public static void Test()
    {
        var states = new List<State>();
        _dueTime = DateTime.Now.AddSeconds(5);
        for (int i = 0; i < 10; i++)
        {
            var state = new State { Index = i };
            ThreadPool.QueueUserWorkItem(Go, state);
            states.Add(state);
            Thread.Sleep(100);
        }
        states.ForEach(s => s.Sync.WaitOne());
        states.ForEach(s => s.Sync.Close());
    }

    private static void Go(object state)
    {
        var s = (State) state;
        Console.WriteLine("Go entered: " + s.Index);
        lock (_lock)
        {
            Console.WriteLine("{0,2} got lock", s.Index);
            if (_dueTime > DateTime.Now)
            {
                var time = _dueTime - DateTime.Now;
                Console.WriteLine("{0,2} sleeping for {1} ticks", s.Index, time.Ticks);
                Thread.Sleep(time);
            }
            Console.WriteLine("{0,2} exiting lock", s.Index);
        }
        s.Sync.Set();
    }

    private class State
    {
        public int Index;
        public readonly ManualResetEvent Sync = new ManualResetEvent(false);
    }
}
Prints:
Go entered: 0
0 got lock
0 sleeping for 49979998 ticks
Go entered: 1
Go entered: 2
Go entered: 3
Go entered: 4
Go entered: 5
Go entered: 6
Go entered: 7
Go entered: 8
Go entered: 9
0 exiting lock
1 got lock
1 sleeping for 5001 ticks
1 exiting lock
2 got lock
2 sleeping for 5001 ticks
2 exiting lock
3 got lock
3 sleeping for 5001 ticks
3 exiting lock
4 got lock
4 sleeping for 5001 ticks
4 exiting lock
5 got lock
5 sleeping for 5001 ticks
5 exiting lock
6 got lock
6 exiting lock
7 got lock
7 exiting lock
8 got lock
8 exiting lock
9 got lock
9 exiting lock
IIRC, it's highly likely to be in that order, but it's not guaranteed. I believe there are at least theoretically cases where a thread will be woken spuriously, note that it still doesn't have the lock, and go to the back of the queue. It's possible that's only for Wait/Notify, but I have a sneaking suspicion it's for locking as well.
I definitely wouldn't rely on it - if you need things to occur in a sequence, build up a Queue<T> or something similar.
EDIT: I've just found this within Joe Duffy's Concurrent Programming on Windows which basically agrees:
Because monitors use kernel objects internally, they exhibit the same roughly-FIFO behavior that the OS synchronization mechanisms also exhibit (described in the previous chapter). Monitors are unfair, so if another thread tries to acquire the lock before an awakened waiting thread tries to acquire the lock, the sneaky thread is permitted to acquire a lock.
The "roughly-FIFO" bit is what I was thinking of before, and the "sneaky thread" bit is further evidence that you shouldn't make assumptions about FIFO ordering.
Normal CLR locks are not guaranteed to be FIFO.
But, there is a QueuedLock class in this answer which will provide a guaranteed FIFO locking behavior.
The lock statement is documented to use the Monitor class to implement its behavior, and the docs for the Monitor class make no mention (that I can find) of fairness. So you should not rely on requested locks being acquired in the order of request.
In fact, an article by Jeffrey Richter indicates that lock is in fact not fair:
Thread Synchronization Fairness in the .NET CLR
Granted, it's an old article, so things may have changed, but given that no promises are made in the contract for the Monitor class about fairness, you need to assume the worst.
Slightly tangential to the question, but ThreadPool doesn't even guarantee that it will execute queued work items in the order they are added. If you need sequential execution of asynchronous tasks, one option is using TPL Tasks (also backported to .NET 3.5 via Reactive Extensions). It would look something like this:
public static void Test()
{
    var states = new List<State>();
    _dueTime = DateTime.Now.AddSeconds(5);

    var initialState = new State() { Index = 0 };
    var initialTask = new Task(Go, initialState);

    Task priorTask = initialTask;

    for (int i = 1; i < 10; i++)
    {
        var state = new State { Index = i };
        priorTask = priorTask.ContinueWith(t => Go(state));

        states.Add(state);
        Thread.Sleep(100);
    }

    Task finalTask = priorTask;

    initialTask.Start();
    finalTask.Wait();
}
This has a few advantages:
Execution order is guaranteed.
You no longer require an explicit lock (the TPL takes care of those details).
You no longer need events and no longer need to wait on all events. You can simply say: wait for the last task to complete.
If an exception were thrown in any of the tasks, subsequent tasks would be aborted and the exception would be rethrown by the call to Wait. This may or may not match your desired behavior, but is generally the best behavior for sequential, dependent tasks.
By using the TPL, you have added flexibility for future expansion, such as cancellation support, waiting on parallel tasks for continuation, etc.
I am using this method to do FIFO locking:
public class QueuedActions
{
    private readonly object _internalSynchronizer = new object();
    private readonly ConcurrentQueue<Action> _actionsQueue = new ConcurrentQueue<Action>();

    public void Execute(Action action)
    {
        // ReSharper disable once InconsistentlySynchronizedField
        _actionsQueue.Enqueue(action);

        lock (_internalSynchronizer)
        {
            Action nextAction;
            if (_actionsQueue.TryDequeue(out nextAction))
            {
                nextAction.Invoke();
            }
            else
            {
                throw new Exception("Something is wrong. How come there is nothing in the queue?");
            }
        }
    }
}
The ConcurrentQueue will order the execution of the actions while the threads are waiting in the lock.
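For illustration, a hypothetical caller might look like this (note that each action executes on whichever thread called Execute, inside the lock):
var queued = new QueuedActions();

// Called concurrently from many threads; each call enqueues its action
// and the lock then serialises execution in (roughly) arrival order.
queued.Execute(() => Console.WriteLine("first"));
queued.Execute(() => Console.WriteLine("second"));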
Related
I was wondering: locking allows only one thread to enter a code region,
and wait handles are for signaling:
Signaling is when one thread waits until it receives notification from
another.
So I thought to myself: can this be used to replace a lock?
Something like:
Thread number 1 -- please enter (the AutoResetEvent acting as an auto-lock)
do work...
finish work...
set the signal to invite the next thread
So I wrote this:
/*1*/  static EventWaitHandle _waitHandle = new AutoResetEvent(true);
/*2*/
/*3*/  volatile int i = 0;
/*4*/  void Main()
/*5*/  {
/*6*/
/*7*/      for (int k = 0; k < 10; k++)
/*8*/      {
/*9*/          var g = i;
/*10*/         Interlocked.Increment(ref i);
/*11*/         new Thread(() => DoWork(g)).Start();
/*12*/
/*13*/     }
/*14*/
/*15*/     Console.ReadLine();
/*16*/ }
/*17*/
/*18*/
/*19*/ void DoWork(object o)
/*20*/ {
/*21*/     _waitHandle.WaitOne();
/*22*/     Thread.Sleep(10);
/*23*/     Console.WriteLine((int) o + "Working...");
/*24*/     _waitHandle.Set();
/*25*/
/*26*/ }
As you can see, lines #21 and #24 are the replacement for the lock.
Question:
Is it a valid replacement? (Not that I will replace lock; I just want to know about usage scenarios.)
When should I use each?
Thank you.
Strange, but SO does not contain a question regarding lock vs EventWaitHandle.
Do not go there. An important property of a lock is that it provides fairness: in other words, a reasonable guarantee that threads contending for the lock will eventually be able to acquire it. The Monitor class provides such a guarantee, implemented by a wait queue in the CLR, and Mutex and Semaphore provide such a guarantee, implemented by the operating system.
WaitHandles do not provide such a guarantee, which is very detrimental if the lock is contended: the same thread can acquire it repeatedly while other threads starve forever.
Use an appropriate synchronization object for locks. Wait handles should only be used for signaling.
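To make the distinction concrete, here is a minimal sketch (my own example, not from the answer) of what wait handles are for: one thread notifies another that data is ready, rather than guarding a region of code:
using System;
using System.Threading;

class SignalingDemo
{
    private static string _data;
    private static readonly AutoResetEvent _ready = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(() =>
        {
            _data = "payload";  // produce the data
            _ready.Set();       // signal: the data is ready
        }).Start();

        _ready.WaitOne();       // wait for the notification, not for a lock
        Console.WriteLine(_data);
    }
}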
It is possible, but it is much slower than lock() and much harder to maintain.
By the way, you should never read a value directly when using Interlocked methods to maintain it.
Your code should look like this:
var g = Interlocked.Increment(ref i);
Then g will contain the incremented value rather than an arbitrary previous value.
I have two threads which use two different functions. First one to search from start to end and the second one to search from end to start.
Right now I'm using Thread.Sleep(10) for synchronisation, but it takes too much time, and testing is not possible under such conditions.
Any idea how can I sync two threads with different functions?
It depends slightly on what you want to do.
If you have two threads and you just want to exit one when the other reaches "success" (or n threads and you want to exit them all when one reaches "success" first), you just need to periodically check for success on each thread.
Use Interlocked to do this without locks, or some other mechanism (see below)
Use cancellable Task objects
If you need to do your search in phases, where each thread does something and then waits for the other to catch up, you need a different approach.
Use Barrier
Given that you are doing an A* search, you likely need a combination of all three anyway:
Barrier to coordinate the steps and update the open set between steps
Success signalling to work out when to exit threads if another thread succeeded
Task objects with CancellationToken to allow callers to cancel the search.
Another answer suggested Semaphore - this is not really suitable for your needs (see comments below).
Barrier can be used for searches such as this by:
enter step 0 of the algorithm
n threads split the current level into equal parts and work on each part; when a thread completes, it signals and waits for the other thread
when all threads are ready, proceed to the next step and repeat the search
Simple check for exit - Interlocked
The first part is checking for success. If you want to stay "lockless", you can use Interlocked to do this; the general pattern is:
// global success indicator
private const int NotDone = 0;
private const int AllDone = 1;
private int _allDone = NotDone;

private void GeneralSearchFunction(bool directionForward) {
    bool iFoundIt = false;
    // ... do some search operations that won't take much time
    if (iFoundIt) {
        // set _allDone to AllDone!
        Interlocked.Exchange(ref _allDone, AllDone);
        return;
    }
    // ... do more work
    // after one or a few iterations, if this thread is still going,
    // see if another thread has set _allDone to AllDone
    if (Interlocked.CompareExchange(ref _allDone, NotDone, NotDone) == AllDone) {
        return; // if they did, then exit
    }
    // ... loop to the top and carry on working
}

// main thread:
Thread t1 = new Thread(() => GeneralSearchFunction(true));
Thread t2 = new Thread(() => GeneralSearchFunction(false));
t1.Start(); t2.Start(); // start both
t1.Join(); t2.Join();
// when this gets to here, one of them will have succeeded
This is the general pattern for any kind of success or cancellation token:
do some work
if you succeed, set a signal every other thread checks periodically
if you haven't yet succeeded, then in the middle of that work, either every iteration or every few iterations, check to see if this thread should exit
So an implementation would look like:
class Program
{
    // global success indicator
    private const int NotDone = 0;
    private const int AllDone = 1;
    private static int _allDone = NotDone;

    private static int _forwardsCount = 0;  // counters to simulate a "find"
    private static int _backwardsCount = 0; // counters to simulate a "find"

    static void Main(string[] args) {
        var searchItem = "foo";
        Thread t1 = new Thread(() => DoSearchWithBarrier(SearchForwards, searchItem));
        Thread t2 = new Thread(() => DoSearchWithBarrier(SearchBackwards, searchItem));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("all done");
    }

    private static void DoSearchWithBarrier(Func<string, bool> searchMethod, string searchItem) {
        while (!searchMethod(searchItem)) {
            // after one or a few iterations, if this thread is still going,
            // see if another thread has set _allDone to AllDone
            if (Interlocked.CompareExchange(ref _allDone, NotDone, NotDone) == AllDone) {
                return; // if they did, then exit
            }
        }
        Interlocked.Exchange(ref _allDone, AllDone);
    }

    public static bool SearchForwards(string item) {
        // return true if we "found it", false if not
        return (Interlocked.Increment(ref _forwardsCount) == 10);
    }

    public static bool SearchBackwards(string item) {
        // return true if we "found it", false if not
        return (Interlocked.Increment(ref _backwardsCount) == 20); // make this less than 10 to find it backwards first
    }
}
Using Tasks to the same end
Of course, this wouldn't be .NET 4.5 without using Task:
class Program
{
    private static int _forwardsCount = 0;  // counters to simulate a "find"
    private static int _backwardsCount = 0; // counters to simulate a "find"

    static void Main(string[] args) {
        var searchItem = "foo";
        var tokenSource = new CancellationTokenSource();
        var allDone = tokenSource.Token;
        Task t1 = Task.Factory.StartNew(() => DoSearchWithBarrier(SearchForwards, searchItem, tokenSource, allDone), allDone);
        Task t2 = Task.Factory.StartNew(() => DoSearchWithBarrier(SearchBackwards, searchItem, tokenSource, allDone), allDone);
        Task.WaitAll(new[] { t1, t2 });
        Console.WriteLine("all done");
    }

    private static void DoSearchWithBarrier(Func<string, bool> searchMethod, string searchItem, CancellationTokenSource tokenSource, CancellationToken allDone) {
        while (!searchMethod(searchItem)) {
            if (allDone.IsCancellationRequested) {
                return;
            }
        }
        tokenSource.Cancel();
    }

    ...
}
However, now you have used the CancellationToken for the wrong thing: really it should be kept for the caller of the search to cancel the search. So you should use the CancellationToken to check for a requested cancellation (only the caller needs tokenSource then), and a different success synchronisation (such as the Interlocked sample above) to exit.
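A sketch of that separation, reusing the Interlocked flag from the earlier sample (the caller now owns the CancellationTokenSource; the method and parameter names here are illustrative):
private static void DoSearch(Func<string, bool> searchMethod, string searchItem, CancellationToken callerToken)
{
    while (!searchMethod(searchItem))
    {
        if (callerToken.IsCancellationRequested)
            return; // the caller gave up on the whole search
        if (Interlocked.CompareExchange(ref _allDone, NotDone, NotDone) == AllDone)
            return; // the other thread already succeeded
    }
    Interlocked.Exchange(ref _allDone, AllDone); // we succeeded
}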
Phase/step synchronisation
This gets harder for many reasons, but there is a simple approach. Using Barrier (new to .NET 4) in conjunction with an exit signal you can:
Perform the assigned thread's work for the current step, and then wait for the other thread to catch up before doing another iteration
Exit both threads when one succeeds
There are many different approaches for thread sync, depending on exactly what you want to achieve. Some are:
Barrier: This is probably the most suitable if you intend for both your forwards and backwards searches to run at the same time. It also screams out your intent, i.e. "no thread can go on until everyone reaches the barrier".
ManualResetEvent - when one thread sets the signal, all others can proceed until it is reset again. AutoResetEvent is similar, except it only allows one thread to proceed before blocking again.
Interlocked - in combination with SpinWait this is a viable lockless solution
Semaphore - possible to use, but not really suited for your scenario
I have only provided a full sample for Barrier here as it seems the most suitable in your case. Barrier is one of the most performant, second only to ManualResetEventSlim (ref. albahari), but using ManualResetEvent will need more complex code.
Other techniques to look at, if none of the above work for you are Monitor.Wait and Monitor.Pulse (now you're using locking) and Task Continuations. The latter is more used for passing data from one async operation to another, but it could be used for your scenario. And, as with the samples at the top of the answer, you are more likely to combine Task with Barrier than use one instead of the other. Task Continuations could be used to do the post-step revision of the open set in the A*-search, but you can just as easily use Barrier for that anyway.
This code, using Barrier, works. In essence, DoSearchWithBarrier is the only bit doing the synchronisation; all the rest is setup and teardown code.
class Program {
    ...
    // fields _allDone, NotDone and AllDone as in the Interlocked sample above
    private const int numThreads = 2; // two search threads
    private static int _forwardsCount = 0;  // counters to simulate a "find"
    private static int _backwardsCount = 0; // counters to simulate a "find"

    static void Main(string[] args) {
        Barrier barrier = new Barrier(numThreads,
            b => Console.WriteLine("Completed search iteration {0}", b.CurrentPhaseNumber));
        var searchItem = "foo";
        Thread t1 = new Thread(() => DoSearchWithBarrier(SearchForwards, searchItem, barrier));
        Thread t2 = new Thread(() => DoSearchWithBarrier(SearchBackwards, searchItem, barrier));
        t1.Start(); Console.WriteLine("Started t1");
        t2.Start(); Console.WriteLine("Started t2");
        t1.Join(); Console.WriteLine("t1 done");
        t2.Join(); Console.WriteLine("t2 done");
        Console.WriteLine("all done");
    }

    private static void DoSearchWithBarrier(Func<string, bool> searchMethod, string searchItem, Barrier barrier) {
        while (!searchMethod(searchItem)) {
            // while we haven't found it, wait for the other thread to catch up
            barrier.SignalAndWait(); // check for the other thread AFTER the barrier
            if (Interlocked.CompareExchange(ref _allDone, NotDone, NotDone) == AllDone) {
                return;
            }
        }
        // set the success signal on this thread BEFORE the barrier
        Interlocked.Exchange(ref _allDone, AllDone);
        // wait for the other thread, and then exit (and it will too)
        barrier.SignalAndWait();
    }

    ...
}
There are two things going on here:
Barrier is used to synchronise the two threads so they can't do their next step until the other has caught up
The exit signal uses Interlocked, as I first described.
Implementing this for A* searches is very similar to the above sample. Once all threads reach the barrier and therefore continue, you could use a ManualResetEvent or a simple lock to then let one (and only one) revise the open set; one sketch of this follows.
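Since Barrier's post-phase action runs exactly once per phase, on one thread, after all participants have signalled, the open-set revision can simply live there (ReviseOpenSet is an assumed method name, not something from the samples above):
Barrier barrier = new Barrier(numThreads, b =>
{
    // Runs once between phases, while all worker threads are parked at the
    // barrier, so no extra lock or event is needed for the revision.
    ReviseOpenSet();
    Console.WriteLine("Completed search iteration {0}", b.CurrentPhaseNumber);
});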
A note on Semaphore
This is probably not what you want as it's most often used when you have a limited pool of resources, with more resource users requiring access than you have resources.
Think of the PlayStation with CoD on it in the corner of the work canteen - 4 controllers, 20 people waiting (WaitOne) to use it, as soon as your character dies you Release the controller and someone else takes your place. No particular FIFO/LIFO ordering is enforced, and in fact Release can be called by the bouncer you employ to prevent the inevitable fights (i.e. thread identity is not enforced).
Simple check for exit - other approaches
Use of lock for simple success indication
You can achieve the same with locking. Both Interlocked and lock ensure you don't see any memory cache issues with reading a common variable between threads:
private readonly object _syncAllDone = new object();
...
if (iFoundIt) {
    lock (_syncAllDone) { _allDone = AllDone; }
    return;
}
...
// see if another thread has set _allDone to AllDone
lock (_syncAllDone) {
    if (_allDone == AllDone) {
        return; // if they did, then exit
    }
}
The disadvantage of this is that locking may well be slower, but you need to test for your situation. The advantage is that if you are using lock anyway to do other things such as writing out results from your thread, you don't have any extra overhead.
Use of ManualResetEvent for simple success indication
This is not really the intended use of reset events, but it can work. (If using .NET 4 or later, use ManualResetEventSlim instead of ManualResetEvent):
private ManualResetEvent _mreAllDone = new ManualResetEvent(true); // will not block a thread
...
if (iFoundIt) {
    _mreAllDone.Reset(); // stop other threads proceeding
    return;
}
...
// see if another thread has reset _mreAllDone by testing with a 0 timeout
if (!_mreAllDone.WaitOne(0)) {
    return; // if they did, then exit
}
Phase synchronisation - other approaches
All of the other approaches get a lot more complex, as you have to do two-way continuation checks to prevent race conditions and permanently blocked threads. I don't recommend them, so I won't provide a sample here (it would be long and complicated).
References:
Interlocked
ManualResetEvent
MSDN - ManualResetEvent and ManualResetEventSlim
Barrier
MSDN - Continuation Tasks
MSDN - Task Cancellation
Semaphore
thread.Join()
is possibly what you're after. This will make your current thread block until the other thread ends.
It's possible to Join multiple threads, thereby syncing all of them to one point.
List<Thread> threads = new List<Thread>();
threads.Add(new Thread(new ThreadStart(<Actual method here>)));
threads.Add(new Thread(new ThreadStart(<Another method here>)));
threads.Add(new Thread(new ThreadStart(<Another method here>)));

foreach (Thread thread in threads)
{
    thread.Start();
}

// All your threads are now running

foreach (Thread thread in threads)
{
    thread.Join();
}

// You won't get here until all those threads have finished
In some cases you can use AutoResetEvent to wait for a result from a thread.
You can use Tasks to start/stop/wait for the results of some workers.
You can use the Producer/Consumer pattern with BlockingCollection in case your functions consume some data and return a collection of something; a minimal sketch follows.
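A sketch of that last option, under the assumption that one search produces items and another thread consumes them (the item type and variable names are illustrative, and Task.Run requires .NET 4.5):
var results = new BlockingCollection<string>();

var producer = Task.Run(() =>
{
    results.Add("candidate-1");   // produced by the search
    results.CompleteAdding();     // tell consumers no more items will come
});

var consumer = Task.Run(() =>
{
    // Blocks while the collection is empty; exits when adding is complete.
    foreach (var item in results.GetConsumingEnumerable())
        Console.WriteLine(item);
});

Task.WaitAll(producer, consumer);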
Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
For example, if I have two threads doing protected and unprotected work like below. Can I convert them to Semaphores that can be signaled, cleared and waited on?
void AThread()
{
lock (this)
{
Do Protected Work
}
Do Unprotected work.
}
The question came to me after thinking about C#'s lock() statement and if I could implement equivalent functionality with an EventWaitHandle instead.
Yes, there is a general way to convert a lock section to use a Semaphore: use the same try...finally block that lock is equivalent to, with a Semaphore with a maximum count of 1, initialised to a count of 1.
EDIT (May 11th): recent research has shown me that my reference for the try...finally equivalence is out of date. The code samples below would need to be adjusted accordingly as a result of this. (end edit)
private readonly Semaphore semLock = new Semaphore(1, 1);

void AThread()
{
    semLock.WaitOne();
    try {
        // Protected code
    }
    finally {
        semLock.Release();
    }
    // Unprotected code
}
However, you would never do this. lock:
is used to restrict resource access to a single thread at a time,
conveys the intent that resources in that section cannot be simultaneously accessed by more than one thread
Conversely Semaphore:
is intended to control simultaneous access to a pool of resources with a limit on concurrent access.
conveys the intent of either a pool of resources that can be accessed by a maximum number of threads, or of a controlling thread that can release a number of threads to do some work when it is ready.
with a max count of 1 will perform slower than lock.
can be released by any thread, not just the one that entered the section (added in edit)
Edit: You also mention EventWaitHandle at the end of your question. It is worth noting that Semaphore is a WaitHandle, but not an EventWaitHandle, and also from the MSDN documentation for EventWaitHandle.Set:
There is no guarantee that every call to the Set method will release a thread from an EventWaitHandle whose reset mode is EventResetMode.AutoReset. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen.
The Detail
You asked:
Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
Given that:
lock (this) {
    // Do protected work
}
// Do unprotected work
is equivalent (see below for reference and notes on this) to
EDIT (11th May): as per the above comment, this code sample needs adjusting before use, as per this link.
Monitor.Enter(this);
try {
    // Protected code
}
finally {
    Monitor.Exit(this);
}
// Unprotected code
You can achieve the same using Semaphore by doing:
private readonly Semaphore semLock = new Semaphore(1, 1);

void AThread()
{
    semLock.WaitOne();
    try {
        // Protected code
    }
    finally {
        semLock.Release();
    }
    // Unprotected code
}
You also asked:
For example, if I have two threads doing protected and unprotected work like below. Can I convert them to Semaphores that can be signaled, cleared and waited on?
This is a question I struggled to understand, so I apologise. In your example you name your method AThread. To me, it's not really AThread; it's AMethodToBeRunByManyThreads!
private readonly Semaphore semLock = new Semaphore(1, 1);

void MainMethod() {
    Thread t1 = new Thread(AMethodToBeRunByManyThreads);
    Thread t2 = new Thread(AMethodToBeRunByManyThreads);
    t1.Start();
    t2.Start();
    // Now wait for them to finish - but how?
}

void AMethodToBeRunByManyThreads() { ... }
So semLock = new Semaphore(1, 1); will protect your "protected code", but lock is more appropriate for that use. The difference is that a Semaphore would allow a third thread to get involved:
private readonly Semaphore semLock = new Semaphore(0, 2);
private readonly object _lockObject = new object();
private int counter = 0;

void MainMethod()
{
    Thread t1 = new Thread(AMethodToBeRunByManyThreads);
    Thread t2 = new Thread(AMethodToBeRunByManyThreads);
    t1.Start();
    t2.Start();
    // Now wait for them to finish
    semLock.WaitOne();
    semLock.WaitOne();
    lock (_lockObject)
    {
        // uses lock to enforce a memory barrier to ensure we read the right value of counter
        Console.WriteLine("done: {0}", counter);
    }
}

void AMethodToBeRunByManyThreads()
{
    lock (_lockObject) {
        counter++;
        Console.WriteLine("one");
        Thread.Sleep(1000);
    }
    semLock.Release();
}
However, in .NET 4.5 you would use Tasks to do this and control your main thread synchronisation.
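For comparison, a rough sketch of that Task-based version (illustrative only): the main thread waits on the tasks themselves instead of counting Semaphore releases, and Task.WaitAll also establishes the memory barrier, so the final read of counter is safe.
Task t1 = Task.Run(() => AMethodToBeRunByManyThreads());
Task t2 = Task.Run(() => AMethodToBeRunByManyThreads());
Task.WaitAll(t1, t2); // replaces the two semLock.WaitOne() calls
Console.WriteLine("done: {0}", counter);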
Here are a few thoughts:
lock(x) and Monitor.Enter - equivalence
The above statement about equivalence is not quite accurate. In fact:
"[lock] is precisely equivalent [to Monitor.Enter try ... finally] except x is only evaluated once [by lock]"
(ref: C# Language Specification)
This is minor, and probably doesn't matter to us.
You may have to be careful of memory barriers and of incrementing counter-like fields, so if you are using Semaphore you may still need lock, or Interlocked if you are confident using it.
Beware of lock(this) and deadlocks
My original source for this would be Jeffrey Richter's article "Safe Thread Synchronization". That, and general best practice:
Don't lock on this; instead create an object field within your class on class instantiation (don't use a value type, as it will be boxed anyway)
Make the object field readonly (personal preference, but it not only conveys intent, it also prevents your locking object from being changed by other code contributors, etc.)
The implications are many, but to make team working easier, to follow best practice for encapsulation, and to avoid nasty edge-case errors that are hard for tests to detect, it is better to follow the above rules.
Your original code would therefore become:
private readonly object m_lockObject = new object();

void AThread()
{
    lock (m_lockObject) {
        // Do protected work
    }
    // Do unprotected work
}
(Note: generally Visual Studio helps you in its snippets by using SyncRoot as your lock object name)
Semaphore and lock are intended for different use
lock grants threads a spot on the "ready queue" on a FIFO basis (ref. Threading in C# - Joseph Albahari, part 2: Basic Synchronization, Section: Locking). When anyone sees lock, they know that usually inside that section is a shared resource, such as a class field, that should only be altered by a single thread at a time.
The Semaphore is a non-FIFO control for a section of code. It is great for publisher-subscriber (inter-thread communication) scenarios. The freedom around different threads being able to release the Semaphore to the ones that acquired it is very powerful. Semantically it does not necessarily say "only one thread accesses the resources inside this section", unlike lock.
Example: to increment a counter on a class, you might use lock, but not Semaphore
lock (_lockObject) {
    counter++;
}
But to increment it only once another thread has said it is OK to do so, you could use a Semaphore, not a lock, where thread A does the increment once it has entered the Semaphore section:
semLock.WaitOne();
counter++;
return;
And thread B releases the Semaphore when it is ready to allow the increment:
// when I'm ready in thread B
semLock.Release();
(Note that this example is forced; a WaitHandle such as ManualResetEvent might be more appropriate there, as sketched below.)
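That ManualResetEvent version might look like this (an illustrative sketch, not a drop-in replacement):
private readonly ManualResetEvent _go = new ManualResetEvent(false);

// Thread A: block until thread B gives the go-ahead, then increment.
_go.WaitOne();
counter++;

// Thread B: when ready, release thread A (and any other waiters).
_go.Set();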
Performance
From a performance perspective, running the simple program below on a small multi-threaded VM, lock wins over Semaphore by a long way, although the timescales are still very fast and would be sufficient for all but high-throughput software. Note that this ranking was broadly the same when running the test with two parallel threads accessing the lock.
Time for 100 iterations in ticks on a small VM (smaller is better):
291.334 (Semaphore)
44.075 (SemaphoreSlim)
4.510 (Monitor.Enter)
6.991 (Lock)
Ticks per millisecond: 10000
class Program
{
    static void Main(string[] args)
    {
        Program p = new Program();
        Console.WriteLine("100 iterations in ticks");
        p.TimeMethod("Semaphore", p.AThreadSemaphore);
        p.TimeMethod("SemaphoreSlim", p.AThreadSemaphoreSlim);
        p.TimeMethod("Monitor.Enter", p.AThreadMonitorEnter);
        p.TimeMethod("Lock", p.AThreadLock);
        Console.WriteLine("Ticks per millisecond: {0}", TimeSpan.TicksPerMillisecond);
    }

    private readonly Semaphore semLock = new Semaphore(1, 1);
    private readonly SemaphoreSlim semSlimLock = new SemaphoreSlim(1, 1);
    private readonly object _lockObject = new object();
    const int Iterations = (int)1E6;
    int sharedResource = 0;

    void TimeMethod(string description, Action a)
    {
        sharedResource = 0;
        Stopwatch sw = new Stopwatch();
        sw.Start();
        for (int i = 0; i < Iterations; i++)
        {
            a();
        }
        sw.Stop();
        Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * 100d / (double)Iterations, description);
    }

    void TimeMethod2Threads(string description, Action a)
    {
        sharedResource = 0;
        Stopwatch sw = new Stopwatch();
        using (Task t1 = new Task(() => IterateAction(a, Iterations / 2)))
        using (Task t2 = new Task(() => IterateAction(a, Iterations / 2)))
        {
            sw.Start();
            t1.Start();
            t2.Start();
            Task.WaitAll(t1, t2);
            sw.Stop();
        }
        Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * (double)100 / (double)Iterations, description);
    }

    private static void IterateAction(Action a, int iterations)
    {
        for (int i = 0; i < iterations; i++)
        {
            a();
        }
    }

    void AThreadSemaphore()
    {
        semLock.WaitOne();
        try {
            sharedResource++;
        }
        finally {
            semLock.Release();
        }
    }

    void AThreadSemaphoreSlim()
    {
        semSlimLock.Wait();
        try
        {
            sharedResource++;
        }
        finally
        {
            semSlimLock.Release();
        }
    }

    void AThreadMonitorEnter()
    {
        Monitor.Enter(_lockObject);
        try
        {
            sharedResource++;
        }
        finally
        {
            Monitor.Exit(_lockObject);
        }
    }

    void AThreadLock()
    {
        lock (_lockObject)
        {
            sharedResource++;
        }
    }
}
It's difficult to determine what you're asking for here.
If you just want something you can wait on, you can use a Monitor, which is what lock uses under the hood. That is, your lock sequence above is expanded to something like:
void AThread()
{
    Monitor.Enter(this);
    try
    {
        // Do protected work
    }
    finally
    {
        Monitor.Exit(this);
    }
    // Do unprotected work
}
By the way, lock (this) is generally not a good idea. You're better off creating a lock object:
private object _lockObject = new object();
Now, if you want to conditionally obtain the lock, you can use Monitor.TryEnter:
if (Monitor.TryEnter(_lockObject))
{
    try
    {
        // Do protected work
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
If you want to wait with a timeout, use the TryEnter overload:
if (Monitor.TryEnter(_lockObject, 5000)) // waits for up to 5 seconds
The return value is true if the lock was obtained.
A mutex is fundamentally different from an EventWaitHandle or Semaphore in that only the thread that acquires the mutex can release it. Any thread can set or clear a WaitHandle, and any thread can release a Semaphore.
I hope that answers your question. If not, edit your question to give us more detail about what you're asking for.
You should consider taking a look a the Wintellect Power Threading libraries:
https://github.com/Wintellect/PowerThreading
One of the things these libraries do is create generic abstractions that allow threading primitives to be swapped out.
This means that on a 1 or 2 processor machine, where you see very little contention, you may use a standard lock. On a 4 or 8 processor machine, where contention is common, perhaps a reader/writer lock is more correct. If you use primitives such as ResourceLock you can swap out:
Spin Lock
Monitor
Mutex
Reader Writer
Optex
Semaphore
... and others
I've written code that dynamically chooses a specific lock type based on the number of processors and the amount of contention likely to be present. With the structure found in that library, this is practical to do.
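That library has its own abstractions, but the idea can be sketched independently (this is my own illustrative code, not Wintellect's API): hide the primitive behind a small interface and pick the implementation at startup.
using System;
using System.Threading;

interface IResourceLock
{
    void Enter();
    void Exit();
}

class MonitorLock : IResourceLock
{
    private readonly object _sync = new object();
    public void Enter() { Monitor.Enter(_sync); }
    public void Exit() { Monitor.Exit(_sync); }
}

class WriterLock : IResourceLock
{
    // Simplified: only the write side of the reader/writer lock is shown here.
    private readonly ReaderWriterLockSlim _rw = new ReaderWriterLockSlim();
    public void Enter() { _rw.EnterWriteLock(); }
    public void Exit() { _rw.ExitWriteLock(); }
}

static class LockFactory
{
    public static IResourceLock Create()
    {
        // Few cores: little contention, so the cheap monitor wins.
        // More cores: a reader/writer style lock may pay off.
        return Environment.ProcessorCount <= 2
            ? (IResourceLock)new MonitorLock()
            : new WriterLock();
    }
}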
I want to check the state of a Semaphore to see if it is signalled or not (so that if it is signalled, I can release it). How can I do this?
EDIT1:
I have two threads; one waits on the semaphore and the other should release it. The problem is that the second thread may call Release() several times when the first thread is not waiting on it. So the second thread should detect whether calling Release() will generate an error (it generates an error if you try to release a semaphore when nobody is waiting on it). How can I do this? I know that I can use a flag to do this, but it is ugly. Is there any better way?
You can check to see if a Semaphore is signaled by calling WaitOne and passing a timeout value of 0 as a parameter. This will cause WaitOne to return immediately with a true or false value indicating whether the semaphore was signaled. This, of course, could change the state of the semaphore, which makes it cumbersome to use.
Another reason why this trick will not help you is that a semaphore is said to be signaled when at least one count is available. It sounds like you want to know when the semaphore has all counts available. The Semaphore class does not have that exact ability. You can use the return value from Release to infer what the count is, but that causes the semaphore to change its state, and, of course, it will still throw an exception if the semaphore already had all counts available prior to making the call.
What we need is a semaphore with a release operation that does not throw. This is not terribly difficult. The TryRelease method below will return true if a count became available or false if the semaphore was already at the maximumCount. Either way it will never throw an exception.
public class Semaphore
{
    private int count = 0;
    private int limit = 0;
    private object locker = new object();

    public Semaphore(int initialCount, int maximumCount)
    {
        count = initialCount;
        limit = maximumCount;
    }

    public void Wait()
    {
        lock (locker)
        {
            while (count == 0)
            {
                Monitor.Wait(locker);
            }
            count--;
        }
    }

    public bool TryRelease()
    {
        lock (locker)
        {
            if (count < limit)
            {
                count++;
                Monitor.PulseAll(locker);
                return true;
            }
            return false;
        }
    }
}
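Usage might look like this: the releasing thread can now call TryRelease blindly, with no flag and no exception handling (a sketch only):
var sem = new Semaphore(0, 1); // the class above, not System.Threading.Semaphore

// Thread 1: blocks until a count becomes available.
sem.Wait();

// Thread 2: safe to call any number of times.
if (!sem.TryRelease())
{
    // The semaphore was already at its maximum count; nothing happened.
}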
Looks like you need another synchronization object, because Semaphore does not provide functionality to check whether it is signalled at a specific moment in time.
Semaphore allows automatic triggering of code that is awaiting the signalled state, using the WaitOne()/Release() methods, for instance.
You can take a look at the new .NET 4 class SemaphoreSlim, which exposes a CurrentCount property; perhaps you can leverage it.
CurrentCount
Gets the number of threads that will be allowed to enter
the SemaphoreSlim.
EDIT: Updated due to updated question
As a quick solution, you can wrap semaphore.Release() in try/catch and handle SemaphoreFullException; does that work as you expected?
Using SemaphoreSlim you can check CurrentCount in such way:
int maxCount = 5;
SemaphoreSlim slim = new SemaphoreSlim(5, maxCount);

if (slim.CurrentCount == maxCount)
{
    // generate error
}
else
{
    slim.Release();
}
The way to implement a semaphore using signalling is as follows. It doesn't make sense to be able to query the state outside of this, as it wouldn't be thread-safe.
Create an instance with maxThreads slots, initially all available:
var threadLimit = new Semaphore(maxThreads, maxThreads);
Use the following to wait (block) for a spare slot (in case maxThreads have already been taken):
threadLimit.WaitOne();
Use the following to release a slot:
threadLimit.Release(1);
There's a full example here.
Knowing when all counts are available in a semaphore is useful. I have used the following logic/solution. I am sharing it here because I haven't seen any other solutions addressing this.
// List to add a variable number of handles
private List<WaitHandle> waitHandles;

// Using a mutex to make sure that only one thread/process enters this section
using (new Mutex(....))
{
    waitHandles = new List<WaitHandle>();

    int x = [Maximum number of slots available in the semaphore];

    // In this for loop we spin a thread per each slot of the semaphore.
    // The idea is to consume all the slots in this process,
    // not allowing anything else to enter the code protected by the semaphore.
    for (int i = 0; i < x; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(TWorker));
        ManualResetEvent mre = new ManualResetEvent(false);
        waitHandles.Add(mre);
        t.Start(mre);
    }

    WaitHandle.WaitAll(waitHandles.ToArray());

    [... do stuff here, all semaphore slots are blocked now ...]

    // Release all slots
    semaphore.Release(x);
}

private void TWorker(object sObject)
{
    ManualResetEvent mre = (ManualResetEvent)sObject;
    // This is a static Semaphore declared and instantiated somewhere else
    semaphore.WaitOne();
    mre.Set();
}
I have developed an "object pool" and cannot seem to do it without using Thread.Sleep(), which is "bad practice", I believe.
This relates to my other question, "Is there a standard way of implementing a proprietary connection pool in .net?". The idea behind the object pool is similar to the one behind the connection pool used for database connections. However, in my case I am using it to share a limited resource in a standard ASP.NET Web Service (running in IIS6). This means that many threads will be requesting access to this limited resource. The pool dishes out these objects (a "Get"), and once all the available pool objects have been used, the next thread requesting one simply waits a set amount of time for one of these objects to become available again (a thread does a "Put" once it is done with the object). If an object does not become available in this set time, a timeout error will occur.
Here is the code:
public class SimpleObjectPool
{
    private const int cMaxGetTimeToWaitInMs = 60000;
    private const int cMaxGetSleepWaitInMs = 10;

    private object fSyncRoot = new object();
    private Queue<object> fQueue = new Queue<object>();

    private SimpleObjectPool()
    {
    }

    private static readonly SimpleObjectPool instance = new SimpleObjectPool();

    public static SimpleObjectPool Instance
    {
        get
        {
            return instance;
        }
    }

    public object Get()
    {
        object aObject = null;
        for (int i = 0; i < (cMaxGetTimeToWaitInMs / cMaxGetSleepWaitInMs); i++)
        {
            lock (fSyncRoot)
            {
                if (fQueue.Count > 0)
                {
                    aObject = fQueue.Dequeue();
                    break;
                }
            }
            System.Threading.Thread.Sleep(cMaxGetSleepWaitInMs);
        }
        if (aObject == null)
            throw new Exception("Timeout on waiting for object from pool");
        return aObject;
    }

    public void Put(object aObject)
    {
        lock (fSyncRoot)
        {
            fQueue.Enqueue(aObject);
        }
    }
}
To use it, one would do the following:
public void ExampleUse()
{
    PoolObject lTestObject = (PoolObject)SimpleObjectPool.Instance.Get();
    try
    {
        // Do something...
    }
    finally
    {
        SimpleObjectPool.Instance.Put(lTestObject);
    }
}
Now the question I have is: How do I write this so I get rid of the Thread.Sleep()?
(Why I want to do this is because I suspect it is responsible for the "false" timeouts I am getting in my testing. My test application has an object pool with 3 objects in it. It spins up 12 threads, and each thread gets an object from the pool 100 times. If a thread gets an object from the pool, it holds on to it for 2,000 ms; if it does not, it goes to the next iteration. Now logic dictates that 9 threads will be waiting for an object at any point in time. 9 x 2,000 ms is 18,000 ms, which is the maximum time any thread should have to wait for an object. My Get timeout is set to 60,000 ms, so no thread should ever time out. However, some do, so something is wrong, and I suspect it's the Thread.Sleep.)
Since you are already using lock, consider using Monitor.Wait and Monitor.Pulse
In Get():
lock (fSyncRoot)
{
    while (fQueue.Count < 1)
        Monitor.Wait(fSyncRoot);

    aObject = fQueue.Dequeue();
}
And in Put():
lock (fSyncRoot)
{
    fQueue.Enqueue(aObject);
    if (fQueue.Count == 1)
        Monitor.Pulse(fSyncRoot);
}
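Since the original Get() enforced a 60-second timeout, note that Monitor.Wait also has a timeout overload; a sketch of preserving that behaviour (the deadline arithmetic here is illustrative):
lock (fSyncRoot)
{
    DateTime deadline = DateTime.UtcNow.AddMilliseconds(cMaxGetTimeToWaitInMs);
    while (fQueue.Count < 1)
    {
        TimeSpan remaining = deadline - DateTime.UtcNow;
        // Monitor.Wait returns false if the timeout elapses without a pulse.
        if (remaining <= TimeSpan.Zero || !Monitor.Wait(fSyncRoot, remaining))
            throw new TimeoutException("Timeout waiting for object from pool");
    }
    aObject = fQueue.Dequeue();
}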
You should be using a semaphore.
http://msdn.microsoft.com/en-us/library/system.threading.semaphore.aspx
UPDATE:
Semaphores are one of the basic constructs of multi-threaded programming.
A semaphore can be used in different ways, but the basic idea is: when you have a limited resource and many clients who want to use that resource, you can limit the number of clients that can access the resource at any given time.
Below is a very crude example. I didn't add any error checking or try/finally blocks, but you should.
You can also check:
http://en.wikipedia.org/wiki/Semaphore_(programming)
Say you have 10 buckets and 100 people who want to use those buckets.
We can represent the buckets in a queue.
At the start, add all of your buckets to the queue
Queue<Bucket> B = new Queue<Bucket>();
object B_Lock = new object();

for (int i = 0; i < 10; i++)
{
    B.Enqueue(new Bucket());
}
Now create a semaphore to guard your bucket queue. Since all 10 buckets are available at the start, the semaphore is created with an initial count of 10 and a capacity of 10.
Semaphore s = new Semaphore(10, 10);
All clients should check the semaphore before accessing the queue. You might have 100 threads running the thread method below. The first 10 will pass the semaphore. All others will wait.
void MyThread()
{
    while (true)
    {
        // the thread will wait until the semaphore is triggered once;
        // there are other ways to call this which allow you to pass a timeout
        s.WaitOne();

        // after being triggered once, the thread is clear to get an item from the queue
        Bucket b = null;

        // you still need to lock, because more than one thread can pass the semaphore at the same time
        lock (B_Lock)
        {
            b = B.Dequeue();
        }

        b.UseBucket();

        // after you finish using the item, add it back to the queue.
        // DO NOT keep the queue locked while you are using the item,
        // or no other thread will be able to get anything out of it
        lock (B_Lock)
        {
            B.Enqueue(b);
        }

        // after adding the item back to the queue, trigger the semaphore and allow
        // another thread to enter
        s.Release();
    }
}