class Class1
{
private static object consoleGate = new Object();
private static void Trace(string msg)
{
lock (consoleGate)
{
Console.WriteLine("[{0,3}/{1}]-{2}:{3}", Thread.CurrentThread.ManagedThreadId,
Thread.CurrentThread.IsThreadPoolThread ? "pool" : "fore",
DateTime.Now.ToString("HH:mm:ss.ffff"), msg);
}
}
private static void ProcessWorkItems()
{
lock (consoleGate)
{
for (int i = 0; i < 5; i++)
{
Trace("Processing " + i);
Thread.Sleep(250);
}
}
Console.WriteLine("Terminado.");
}
static void Main()
{
ProcessWorkItems(); Console.ReadLine();
}
}
output:
Processing 0
Processing 1
Processing 2
Processing 3
Processing 4
Terminated
Why does this code work? The ProcessWorkItems static method locks the consoleGate object and Trace does the same. I thought the object could only be locked once. Can someone explain?
Locks in C# are re-entrant: a single thread can acquire the same lock multiple times without blocking. Since you only have one thread here there is no problem; locks are for synchronizing access to resources across multiple threads.
From the MSDN documentation on lock:
The lock keyword ensures that one thread does not enter a critical
section of code while another thread is in the critical section. If
another thread tries to enter a locked code, it will wait, block,
until the object is released.
For more information on re-entrant locking, see this SO thread: "What is the Re-entrant lock and concept in general?"
All the code you have displayed here runs on the same thread. That is why it runs just as it would if you hadn't used "lock".
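To make the re-entrancy point concrete, here is a minimal sketch (not from the original post) in which the same thread takes the same lock twice without blocking, while a second thread has to wait for the lock to be released:

    using System;
    using System.Threading;

    class ReentrancyDemo
    {
        private static readonly object gate = new object();

        static void Main()
        {
            lock (gate)                      // first acquisition by the main thread
            {
                Inner();                     // second acquisition by the same thread: no deadlock

                Thread t = new Thread(Inner);
                t.Start();                   // this thread blocks inside Inner until Main releases the lock
                Thread.Sleep(500);
                Console.WriteLine("Main still holds the lock.");
            }                                // lock released here; the worker thread can now proceed
            Console.ReadLine();
        }

        static void Inner()
        {
            lock (gate)                      // re-entrant for the owning thread, blocking for any other thread
            {
                Console.WriteLine("Thread {0} entered Inner.", Thread.CurrentThread.ManagedThreadId);
            }
        }
    }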
Related
We have a function that is called within an ASP.NET (Blazor) application, simply as part of processing an HTTP request. It lives on a scoped, injected object, though this is irrelevant to the problem at hand. Also, the function is synchronous (not marked async), though it may be called from async functions.
A part of this function's code needs to run in mutual exclusion. I thought this would be the simplest thing in the world and wrote the following code:
using (Mutex mutex = new(true, "SyncObject")) { ... }
Basically I created a named Mutex, which should globally prevent more than one thread from entering the block. To my surprise this did not work, and with breakpoints I could see that multiple worker threads entered the block of code.
After a lot of research I found that .NET has two namespaces, 'Local' and 'Global', for synchronization objects. Since I was only interested in 'Local' I did not need to make any change, but out of frustration I added the 'Global' prefix and tried it anyway, with no luck.
using (Mutex mutex = new(true, "Global\\SyncObject")) { ... }
The above code did not work either, and multiple threads entered the code.
I considered the possibility that the worker threads may not be system threads, and that ownership of the mutex is therefore always granted; but then how to synchronize across two async methods becomes a question. Also, since the function is synchronous, a single thread would not be able to re-enter it until it completes.
using (Mutex mutex = new(false, "Global\\SyncObject")) {
mutex.WaitOne();
...
mutex.ReleaseMutex();
}
No luck.
Since the named mutex refused to work I tried creating a static mutex object as
private static Object mutex = new();
I tried using the above in a lock statement as
lock(mutex) {...}
This did not work either.
I found this amazingly strange. The behaviour of the sync objects suggests a single system thread, but then what can be done to synchronize whatever artificial threads .NET is creating, and how can an artificial thread re-enter the function? This is not logical.
After digging in a bit I was able to see that these are indeed two threads, which makes sense and is as expected, but then why won't the mutex / lock work?
The documentation suggests that you use a static readonly Mutex object (static so that it is shared between instances):
using System;
using System.Threading;
class Test13
{
// Create a new Mutex. The creating thread does not own the
// Mutex.
private static Mutex mut = new Mutex();
private const int numIterations = 1;
private const int numThreads = 3;
static void Main()
{
// Create the threads that will use the protected resource.
for(int i = 0; i < numThreads; i++)
{
Thread myThread = new Thread(new ThreadStart(MyThreadProc));
myThread.Name = String.Format("Thread{0}", i + 1);
myThread.Start();
}
// The main thread exits, but the application continues to
// run until all foreground threads have exited.
}
private static void MyThreadProc()
{
for(int i = 0; i < numIterations; i++)
{
UseResource();
}
}
// This method represents a resource that must be synchronized
// so that only one thread at a time can enter.
private static void UseResource()
{
// Wait until it is safe to enter.
mut.WaitOne();
Console.WriteLine("{0} has entered the protected area",
Thread.CurrentThread.Name);
// Place code to access non-reentrant resources here.
// Simulate some work.
Thread.Sleep(500);
Console.WriteLine("{0} is leaving the protected area\r\n",
Thread.CurrentThread.Name);
// Release the Mutex.
mut.ReleaseMutex();
}
}
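Applied to the original question, a minimal sketch (class and method names here are made up for illustration) would keep a single static Mutex for the whole process and pair every WaitOne with a ReleaseMutex in a try/finally, rather than constructing a new Mutex on each call:

    using System.Threading;

    public class WorkProcessor   // hypothetical name for the scoped Blazor service
    {
        // One mutex shared by every instance and every request in this process.
        private static readonly Mutex Mut = new Mutex();

        public void DoWork()
        {
            Mut.WaitOne();           // blocks until no other thread owns the mutex
            try
            {
                // code that must run in mutual exclusion
            }
            finally
            {
                Mut.ReleaseMutex();  // always release, even if the protected code throws
            }
        }
    }

Unlike the per-call new Mutex(true, ...) in the question, this always waits before entering the protected block and always releases on the way out.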
I have two threads. How do I get data from thread1 to thread2? That is, when thread1 has done its work, it has some data, and this data must be used in the second thread, thread2. How do I do that?
Here is the code, but what do I do now?
static void Main(string[] args)
{
Thread t1 = new Thread(thread1);
t1.Start();
Thread t2 = new Thread(thread2);
t2.Start();
}
static void thread1()
{
string newstring="123";
}
static void thread2()
{
//what to do here...what code will be here?
Console.WriteLine(newstring);
}
In thread1 there can be whatever, but I need to get this "whatever" so that I can then use it in thread2.
Data which is used by both threads must be shared between them; usually this is called a common resource.
One thing you must note is that you have to achieve synchronization here.
As both threads run independently and both read/write the common data, the chance of a race condition is pretty high. To prevent such cases, you must synchronize the reads and writes of the common object.
Refer to the code below, where CommonResourceClass is shared between both threads and synchronization is achieved by locking.
In your example, one thread writes data and the other thread reads it. If we don't implement synchronization, there is a chance that while thread 1 is writing new data, thread 2 (because it is not waiting for thread 1 to finish its task first) will read old (or invalid) data.
The situation gets worse when there are multiple threads writing data without waiting for the other threads to complete their writes.
public class CommonResourceClass
{
object lockObj;
//Note: here main resource is private
//(thus not in scope of any thread)
string commonString;
//while prop is public where we have lock
public string CommonResource
{
get
{
lock (lockObj)
{
Console.WriteLine(DateTime.Now.ToString() + " $$$$$$$$$$$$$$$ Reading");
Thread.Sleep(1000 * 2);
return commonString;
}
}
set
{
lock (lockObj)
{
Console.WriteLine(DateTime.Now.ToString() + " ************* Writing");
Thread.Sleep(1000 * 5);
commonString = value;
}
}
}
public CommonResourceClass()
{
lockObj = new object();
}
}
and the code that starts the threads looks like this:
static CommonResourceClass commonResourceClass;
static void Main(string[] args)
{
commonResourceClass = new CommonResourceClass();
Thread t1 = new Thread(ThreadOneRunner);
Thread t2 = new Thread(ThreadTwoRunner);
t1.Start();
t2.Start();
}
static void ThreadOneRunner()
{
while(true)
{
Console.WriteLine(DateTime.Now.ToString() + " *******Trying To Write");
commonResourceClass.CommonResource = "Written";
Console.WriteLine(DateTime.Now.ToString() + " *******Writing Done");
}
}
static void ThreadTwoRunner()
{
while(true)
{
Console.WriteLine(DateTime.Now.ToString() + " $$$$$$$Trying To Read");
string Data = commonResourceClass.CommonResource;
Console.WriteLine(DateTime.Now.ToString() + " $$$$$$$Reading Done");
}
}
Note that reading takes 2 seconds and writing takes 5 seconds, so reading is supposed to be faster. But if writing is in progress, reading must wait until the writing is done.
You can clearly see in the output that while one thread is reading or writing, the other thread cannot perform its task.
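For the original two-thread example specifically, a minimal sketch (assuming thread2 only needs the value after thread1 has finished) is to share a field and have thread2 wait for thread1 with Thread.Join before reading it:

    using System;
    using System.Threading;

    class Program
    {
        // Shared between both threads: written by thread1, read by thread2.
        static string newstring;

        static void Main(string[] args)
        {
            Thread t1 = new Thread(thread1);
            t1.Start();

            // Hand t1 to thread2 so it can wait until the data is ready.
            Thread t2 = new Thread(() => thread2(t1));
            t2.Start();
        }

        static void thread1()
        {
            newstring = "123";
        }

        static void thread2(Thread producer)
        {
            producer.Join();              // wait until thread1 has finished writing
            Console.WriteLine(newstring); // safe to read now: Join orders the write before this read
        }
    }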
Please tell me if I am thinking about this correctly.
A different thread cannot enter the same critical section using
the same lock just because the first thread called Monitor.Wait, right? The Wait method only allows a different thread to acquire
the same monitor, i.e. the same synchronization lock but only for a different critical section and never for the same critical
section.
Is my understanding correct?
Because if the Wait method meant that anyone can now enter this
same critical section using this same lock, then that would defeat
the whole purpose of synchronization, right?
So, in the code below (written in Notepad, so please forgive any typos), ThreadProc2 can only use syncLock to enter the code in ThreadProc2, and not the code in ThreadProc1, while a previous thread that held and subsequently relinquished the lock was executing ThreadProc1, right?
Two or more threads can use the same synchronization lock to run
different pieces of code at the same time, right? Same question as
above, basically, but just confirming for the sake of symmetry with
point 3 below.
Two or more threads can use a different synchronization lock to
run the same piece of code, i.e. to enter the same critical section.
class Foo
{
private static object syncLock = new object();
public void ThreadProc1()
{
try
{
Monitor.Enter(syncLock);
Monitor.Wait(syncLock);
Thread.Sleep(1000);
}
finally
{
if (Monitor.IsEntered(syncLock))
{
Monitor.Exit(syncLock);
}
}
}
public void ThreadProc2()
{
bool acquired = false;
try
{
// Calling TryEnter instead of
// Enter just for the sake of variety
Monitor.TryEnter(syncLock, ref acquired);
if (acquired)
{
Thread.Sleep(200);
Monitor.Pulse(syncLock);
}
}
finally
{
if (acquired)
{
Monitor.Exit(syncLock);
}
}
}
}
Update
The following illustration confirms that #3 is correct although I don't think it will be a nice thing to do.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace DifferentSyncLockSameCriticalSection
{
class Program
{
static void Main(string[] args)
{
var sathyaish = new Person { Name = "Sathyaish Chakravarthy" };
var superman = new Person { Name = "Superman" };
var tasks = new List<Task>();
// Must not lock on string so I am using
// an object of the Person class as a lock
tasks.Add(Task.Run( () => { Proc1(sathyaish); } ));
tasks.Add(Task.Run(() => { Proc1(superman); }));
Task.WhenAll(tasks).Wait();
Console.WriteLine("Press any key to exit.");
Console.ReadKey();
}
static void Proc1(object state)
{
// Although this would be a very bad practice
lock(state)
{
try
{
Console.WriteLine((state.ToString()).Length);
}
catch(Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
}
class Person
{
public string Name { get; set; }
public override string ToString()
{
return Name;
}
}
}
When a thread calls Monitor.Wait it is suspended and the lock is released. This allows another thread to acquire the lock, update some state, and then call Monitor.Pulse in order to communicate to other threads that something has happened. You must have acquired the lock in order to call Pulse. Before Monitor.Wait returns, the framework reacquires the lock for the thread that called Wait.
In order for two threads to communicate with each other they need to use the same synchronization primitive. In your example you've used a monitor, but you usually need to combine this with some kind of test that Wait returned in response to a Pulse. This is because it is technically possible for Wait to return even if Pulse wasn't called (although this doesn't happen in practice).
It's also worth remembering that a call to Pulse isn't "sticky", so if nobody is waiting on the monitor then Pulse does nothing and a subsequent call to Wait will miss the fact that Pulse was called. This is another reason why you tend to record the fact that something has been done before calling Pulse (see the example below).
It's perfectly valid for two different threads to use the same lock to run different bits of code - in fact this is the typical use-case. For example, one thread acquires the lock to write some data and another thread acquires the lock to read the data. However, it's important to realize that they don't run at the same time. The act of acquiring the lock prevents another thread from acquiring the same lock, so any thread attempting to acquire the lock when it is already locked will block until the other thread releases the lock.
In point 3 you ask:
Two or more threads can use a different synchronization lock to run
the same piece of code, i.e. to enter the same critical section.
However, if two threads are using different locks then they are not entering the same critical section. The critical section is denoted by the lock that protects it - if they're different locks then they are different sections that just happen to access some common data within the section. You should avoid doing this as it can lead to some difficult to debug data race conditions.
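To illustrate that last point, here is a sketch (not taken from the question) in which two threads increment the same counter under two different lock objects; because each lock only excludes its own holder, increments are routinely lost:

    using System;
    using System.Threading;

    class DifferentLocksDemo
    {
        private static readonly object lockA = new object();
        private static readonly object lockB = new object();
        private static int counter = 0;

        static void Main()
        {
            Thread t1 = new Thread(() => Increment(lockA));
            Thread t2 = new Thread(() => Increment(lockB));
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();

            // Will usually print less than 2000000: the two locks do not exclude each other.
            Console.WriteLine(counter);
        }

        static void Increment(object gate)
        {
            for (int i = 0; i < 1000000; i++)
            {
                lock (gate)        // each thread holds *its own* lock, so counter++ still races
                {
                    counter++;
                }
            }
        }
    }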
Your code is a bit over-complicated for what you're trying to accomplish. For example, let's say we've got 2 threads, and one will signal when there is data available for another to process:
class Foo
{
private readonly object syncLock = new object();
private bool dataAvailable = false;
public void ThreadProc1()
{
lock(syncLock)
{
while(!dataAvailable)
{
// Release the lock and suspend
Monitor.Wait(syncLock);
}
// Now process the data
}
}
public void ThreadProc2()
{
LoadData();
lock(syncLock)
{
dataAvailable = true;
Monitor.Pulse(syncLock);
}
}
private void LoadData()
{
// Gets some data
}
}
In a console application I am currently starting an array of threads. Each thread is passed an object and runs a method on it. I would like to know how to call a method on the object inside each individual running thread.
Dispatcher doesn't work. SynchronizationContext.Send runs on the calling thread and Post uses a new thread. I would like to be able to call the method, and pass parameters, on the target thread the object is running on, not on the calling thread and not on a new thread.
Update 2: Sample code
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace CallingFromAnotherThread
{
class Program
{
static void Main(string[] args)
{
var threadCount = 10;
var threads = new Thread[threadCount];
Console.WriteLine("Main on Thread " + Thread.CurrentThread.ManagedThreadId);
for (int i = 0; i < threadCount; i++)
{
Dog d = new Dog();
threads[i] = new Thread(d.Run);
threads[i].Start();
}
Thread.Sleep(5000);
//how can i call dog.Bark("woof");
//on the individual dogs and make sure they run on the thread they were created on.
//not on the calling thread and not on a new thread.
}
}
class Dog
{
public void Run()
{
Console.WriteLine("Running on Thread " + Thread.CurrentThread.ManagedThreadId);
}
public void Bark(string text)
{
Console.WriteLine(text);
Console.WriteLine("Barking on Thread " + Thread.CurrentThread.ManagedThreadId);
}
}
}
Update 1:
Using SynchronizationContext.Send results in using the calling thread:
Channel created
Main thread 10
SyncData Added for thread 11
Consuming channel ran on thread 11
Calling AddConsumer on thread 10
Consumer added consumercb78b. Executed on thread 10
Calling AddConsumer on thread 10
Consumer added consumer783c4. Executed on thread 10
Using SynchronizationContext.Post results in using a different thread:
Channel created
Main thread 10
SyncData Added for thread 11
Consuming channel ran on thread 11
Calling AddConsumer on thread 12
Consumer added consumercb78b. Executed on thread 6
Calling AddConsumer on thread 10
Consumer added consumer783c4. Executed on thread 7
The target thread must run the code "on itself" - or it is just accessing the object across threads. This is done with some form of event dispatch loop on the target thread itself.
The SynchronizationContext abstraction can and does support this if the underlying provider supports it. For example in either WinForms or WPF (which themselves use the "window message pump") using Post will "run on the UI thread".
Basically, all such constructs follow some variation of the pattern:
// On "target thread"
while (running) {
var action = getNextDelegateFromQueue();
action();
}
// On other thread
postDelegateToQueue(actionToDoOnTargetThread);
It is fairly simple to create a primitive queue system manually - just make sure to use the correct synchronization guards. (Although I am sure there are tidy "solved problem" libraries out there; including wrapping everything up into a SynchronizationContext.)
Here is a primitive version of the manual queue. Note that there is a race condition¹, but, FWIW:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace DogPark
{
internal class DogPark
{
private readonly string _parkName;
private readonly Thread _thread;
private readonly ConcurrentQueue<Action> _actions = new ConcurrentQueue<Action>();
private volatile bool _isOpen;
public DogPark(string parkName)
{
_parkName = parkName;
_isOpen = true;
_thread = new Thread(OpenPark);
_thread.Name = parkName;
_thread.Start();
}
// Runs in "target" thread
private void OpenPark(object obj)
{
while (true)
{
Action action;
if (_actions.TryDequeue(out action))
{
Program.WriteLine("Something is happening at {0}!", _parkName);
try
{
action();
}
catch (Exception ex)
{
Program.WriteLine("Bad dog did {0}!", ex.Message);
}
}
else
{
// Nothing left!
if (!_isOpen && _actions.IsEmpty)
{
return;
}
}
Thread.Sleep(0); // Don't toaster CPU
}
}
// Called from external thread
public void DoItInThePark(Action action)
{
if (_isOpen)
{
_actions.Enqueue(action);
}
}
// Called from external thread
public void ClosePark()
{
_isOpen = false;
Program.WriteLine("{0} is closing for the day!", _parkName);
// Block until queue empty.
while (!_actions.IsEmpty)
{
Program.WriteLine("Waiting for the dogs to finish at {0}, {1} actions left!", _parkName, _actions.Count);
Thread.Sleep(0); // Don't toaster CPU
}
Program.WriteLine("{0} is closed!", _parkName);
}
}
internal class Dog
{
private readonly string _name;
public Dog(string name)
{
_name = name;
}
public void Run()
{
Program.WriteLine("{0} is running at {1}!", _name, Thread.CurrentThread.Name);
}
public void Bark()
{
Program.WriteLine("{0} is barking at {1}!", _name, Thread.CurrentThread.Name);
}
}
internal class Program
{
// "Thread Safe WriteLine"
public static void WriteLine(params string[] arguments)
{
lock (Console.Out)
{
Console.Out.WriteLine(arguments);
}
}
private static void Main(string[] args)
{
Thread.CurrentThread.Name = "Home";
var yorkshire = new DogPark("Yorkshire");
var thunderpass = new DogPark("Thunder Pass");
var bill = new Dog("Bill the Terrier");
var rosemary = new Dog("Rosie");
bill.Run();
yorkshire.DoItInThePark(rosemary.Run);
yorkshire.DoItInThePark(rosemary.Bark);
thunderpass.DoItInThePark(bill.Bark);
yorkshire.DoItInThePark(rosemary.Run);
thunderpass.ClosePark();
yorkshire.ClosePark();
}
}
}
The output should look roughly like the following; keep in mind that it will change when run multiple times due to the inherent nature of non-synchronized threads.
Bill the Terrier is running at Home!
Something is happening at Thunder Pass!
Something is happening at Yorkshire!
Rosie is running at Yorkshire!
Bill the Terrier is barking at Thunder Pass!
Something is happening at Yorkshire!
Rosie is barking at Yorkshire!
Something is happening at Yorkshire!
Rosie is running at Yorkshire!
Thunder Pass is closing for the day!
Thunder Pass is closed!
Yorkshire is closing for the day!
Yorkshire is closed!
There is nothing preventing a dog from performing at multiple dog parks simultaneously.
¹ There is a race condition present, and it is this: a park may close before the last dog action runs.
This is because the dog park thread dequeues the action before the action is run - and the method to close the dog park only waits until all the actions are dequeued.
There are multiple ways to address it, for instance:
The concurrent queue could first peek-use-then-dequeue-after-the-action, or
A separate volatile isClosed-for-real flag (set from the dog park thread) could be used, or ..
I've left the bug in as a reminder of the perils of threading..
A running thread is already executing a method. You cannot directly force that thread to leave the method and enter a new one. However, you could send information to that thread telling it to leave the current method and do something else. But this only works if the executing method can react to that information.
In general, you can use threads to call/execute methods, but you cannot call a method ON a running thread.
Edit, based on your updates:
If you want to use the same threads to execute dog.Run and dog.Bark, and do it on the same objects, then you need to modify your code:
static void Main(string[] args)
{
    var threadCount = 10;
    var threads = new Thread[threadCount];
    Console.WriteLine("Main on Thread " + Thread.CurrentThread.ManagedThreadId);
    // Keep the dog objects outside the creation block so you can access them again later. Always useful.
    Dog[] dogs = new Dog[threadCount];
    for (int i = 0; i < threadCount; i++)
    {
        dogs[i] = new Dog();
        threads[i] = new Thread(dogs[i].Run);
        threads[i].Start();
    }
    Thread.Sleep(5000);
    // How can I call dog.Bark("woof")? --> here you go:
    for (int i = 0; i < threadCount; i++)
    {
        Dog dog = dogs[i]; // capture the dog in a local so the lambda does not close over the loop variable
        threads[i] = new Thread(() => dog.Bark("woof"));
        threads[i].Start();
    }
    // But this will create NEW threads, because the others have exited after finishing Run and are gone. Is this a problem for you?
    // Maybe the other threads are still running, causing a parallel execution of Run and Bark.

    // "On the individual dogs, and make sure they run on the thread they were created on,
    // not on the calling thread and not on a new thread" -->
    // instead of Run, call DoActions and loop inside that function, checking for commands from external sources:
    for (int i = 0; i < threadCount; i++)
    {
        threads[i] = new Thread(dogs[i].DoActions);
        threads[i].Start();
    }
    // But in this case there will be sequential execution of actions. No parallel Run and Bark.
}
Inside your dog class:
enum EnumAction
{
    Nothing,
    Run,
    Bark,
    Exit,
}

EnumAction m_enAction;
readonly object m_oLockAction = new object();

public void SetAction(EnumAction i_enAction)
{
    Monitor.Enter(m_oLockAction);
    m_enAction = i_enAction;
    Monitor.Exit(m_oLockAction);
}

EnumAction GetAction()
{
    Monitor.Enter(m_oLockAction);
    EnumAction enAction = m_enAction;
    Monitor.Exit(m_oLockAction);
    return enAction;
}

public void DoActions()
{
    EnumAction enAction;
    do
    {
        Thread.Sleep(20);
        enAction = GetAction();
        switch (enAction)
        {
            case EnumAction.Run:
                Run();
                break;
            // case ...
        }
    } while (enAction != EnumAction.Exit);
}
Got it? ;-)
Sorry for any typos, I was typing on my mobile phone, and I usually use C++CLI.
Another piece of advice: since you read the variable m_enAction inside the thread and write it from outside, you need to ensure that it is updated properly despite being accessed from different threads. The threads MUST NOT cache the variable in a CPU register, otherwise they won't see it change. Use locks (e.g. Monitor) to achieve that. (But do not use a Monitor on m_enAction itself, because you can only use Monitors on reference types. Create a dummy object for this purpose.)
I have added the necessary code. Check out the differences between the edits to see the changes.
You cannot run a second method while the first method is running. If you want them to run in parallel you need another thread. However, your object needs to be thread safe.
Execution of a thread simply means execution of a sequence of instructions. A dispatcher is nothing more than an infinite loop that executes queued methods one after another.
I recommend you use tasks instead of threads. Use Parallel.ForEach to run the Dog.Run method on each dog object instance. To run the Bark method use Task.Run(() => dog.Bark("woof")), as in the sketch below.
Since you used a running and barking dog as an example, you could write your own "dispatcher": an infinite loop that executes all queued work. In that case you could have all the dogs in a single thread. It sounds weird, but you could have an unlimited number of dogs. In the end, only as many threads can execute at the same time as there are CPU cores available.
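A minimal sketch of that suggestion (the Dog class here is a trimmed-down copy of the one in the question): Parallel.ForEach and Task.Run schedule the work on thread-pool threads, so there is no guarantee about which thread each dog ends up on.

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class Dog
    {
        public void Run() =>
            Console.WriteLine("Running on Thread " + Thread.CurrentThread.ManagedThreadId);

        public void Bark(string text) =>
            Console.WriteLine(text + " on Thread " + Thread.CurrentThread.ManagedThreadId);
    }

    class TaskBasedDogs
    {
        static void Main()
        {
            // Ten dogs, as in the original sample.
            Dog[] dogs = Enumerable.Range(0, 10).Select(_ => new Dog()).ToArray();

            // Run every dog, potentially in parallel, on thread-pool threads.
            Parallel.ForEach(dogs, dog => dog.Run());

            // Bark on whichever pool thread each task happens to be scheduled on.
            Task.WaitAll(dogs.Select(dog => Task.Run(() => dog.Bark("woof"))).ToArray());
        }
    }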
Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
For example, suppose I have two threads doing protected and unprotected work like below. Can I convert them to semaphores that can be signaled, cleared and waited on?
void AThread()
{
lock (this)
{
Do Protected Work
}
Do Unprotected work.
}
The question came to me after thinking about C#'s lock() statement and if I could implement equivalent functionality with an EventWaitHandle instead.
Yes, there is a general way to convert a lock section to use a Semaphore: use the same try...finally block that lock is equivalent to, with a Semaphore that has a maximum count of 1, initialised to a count of 1.
EDIT (May 11th) recent research has shown me that my reference for the try ... finally equivalence is out of date. The code samples below would need to be adjusted accordingly as a result of this. (end edit)
private readonly Semaphore semLock = new Semaphore(1, 1);
void AThread()
{
semLock.WaitOne();
try {
// Protected code
}
finally {
semLock.Release();
}
// Unprotected code
}
However you would never do this. lock:
is used to restrict resource access to a single thread at a time,
conveys the intent that resources in that section cannot be simultaneously accessed by more than one thread
Conversely Semaphore:
is intended to control simultaneous access to a pool of resources with a limit on concurrent access.
conveys the intent of either a pool of resources that can be accessed by a maximum number of threads, or of a controlling thread that can release a number of threads to do some work when it is ready.
with a max count of 1 will perform slower than lock.
can be released by any thread, not just the one that entered the section (added in edit)
Edit: You also mention EventWaitHandle at the end of your question. It is worth noting that Semaphore is a WaitHandle, but not an EventWaitHandle, and also from the MSDN documentation for EventWaitHandle.Set:
There is no guarantee that every call to the Set method will release a thread from an EventWaitHandle whose reset mode is EventResetMode.AutoReset. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen.
The Detail
You asked:
Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
Given that:
lock (this) {
// Do protected work
}
//Do unprotected work
is equivalent (see below for reference and notes on this) to
EDIT (11th May): as per the above comment, this code sample needs adjusting before use, as per this link.
Monitor.Enter(this);
try {
// Protected code
}
finally {
Monitor.Exit(this);
}
// Unprotected code
You can achieve the same using Semaphore by doing:
private readonly Semaphore semLock = new Semaphore(1, 1);
void AThread()
{
semLock.WaitOne();
try {
// Protected code
}
finally {
semLock.Release();
}
// Unprotected code
}
You also asked:
For example, if I have two threads doing protected and unprotected work like below. Can I convert them to Semaphores that can be signaled, cleared and waited on?
This is a question I struggled to understand, so I apologise. In your example you name your method AThread. To me, it's not really AThread, it's AMethodToBeRunByManyThreads !!
private readonly Semaphore semLock = new Semaphore(1, 1);
void MainMethod() {
Thread t1 = new Thread(AMethodToBeRunByManyThreads);
Thread t2 = new Thread(AMethodToBeRunByManyThreads);
t1.Start();
t2.Start();
// Now wait for them to finish - but how?
}
void AMethodToBeRunByManyThreads() { ... }
So semLock = new Semaphore(1, 1); will protect your "protected code", but lock is more appropriate for that use. The difference is that a Semaphore would allow a third thread to get involved:
private readonly Semaphore semLock = new Semaphore(0, 2);
private readonly object _lockObject = new object();
private int counter = 0;
void MainMethod()
{
Thread t1 = new Thread(AMethodToBeRunByManyThreads);
Thread t2 = new Thread(AMethodToBeRunByManyThreads);
t1.Start();
t2.Start();
// Now wait for them to finish
semLock.WaitOne();
semLock.WaitOne();
lock (_lockObject)
{
// uses lock to enforce a memory barrier to ensure we read the right value of counter
Console.WriteLine("done: {0}", counter);
}
}
void AMethodToBeRunByManyThreads()
{
lock (_lockObject) {
counter++;
Console.WriteLine("one");
Thread.Sleep(1000);
}
semLock.Release();
}
However, in .NET 4.5 you would use Tasks to do this and control your main thread synchronisation.
Here are a few thoughts:
lock(x) and Monitor.Enter - equivalence
The above statement about equivalence is not quite accurate. In fact:
"[lock] is precisely equivalent [to Monitor.Enter try ... finally] except x is only evaluated once [by lock]"
(ref: C# Language Specification)
This is minor, and probably doesn't matter to us.
You may have to be careful of memory barriers, and incrementing counter-like fields, so if you are using Semaphore you may still need lock, or Interlocked if you are confident of using it.
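For a simple counter that is the only piece of shared state, a sketch of the Interlocked alternative mentioned above:

    using System.Threading;

    class Counter
    {
        private int sharedResource = 0;

        public void Increment()
        {
            // Atomic read-modify-write with a full memory barrier; no lock needed
            // when this counter is the only shared state being updated.
            Interlocked.Increment(ref sharedResource);
        }
    }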
Beware of lock(this) and deadlocks
My original source for this would be Jeffrey Richter's article "Safe Thread Synchronization". That, and general best practice:
Don't lock this, instead create an object field within your class on class instantiation (don't use a value type, as it will be boxed anyway)
Make the object field readonly (personal preference - but it not only conveys intent, it also prevents your locking object being changed by other code contributors etc.)
The implications are many, but to make team working easier, follow best practice for encapsulation and to avoid nasty edge case errors that are hard for tests to detect, it is better to follow the above rules.
Your original code would therefore become:
private readonly object m_lockObject = new object();
void AThread()
{
lock (m_lockObject) {
// Do protected work
}
//Do unprotected work
}
(Note: generally Visual Studio helps you in its snippets by using SyncRoot as your lock object name)
Semaphore and lock are intended for different use
lock grants threads a spot on the "ready queue" on a FIFO basis (ref. Threading in C# - Joseph Albahari, part 2: Basic Synchronization, Section: Locking). When anyone sees lock, they know that usually inside that section is a shared resource, such as a class field, that should only be altered by a single thread at a time.
The Semaphore is a non-FIFO control for a section of code. It is great for publisher-subscriber (inter-thread communication) scenarios. The freedom around different threads being able to release the Semaphore to the ones that acquired it is very powerful. Semantically it does not necessarily say "only one thread accesses the resources inside this section", unlike lock.
Example: to increment a counter on a class, you might use lock, but not Semaphore
lock (_lockObject) {
counter++;
}
But to increment it only once another thread has said it is OK to do so, you could use a Semaphore, not a lock, where thread A does the increment once it has entered the Semaphore-protected section:
semLock.WaitOne();
counter++;
return;
And thread B releases the Semaphore when it is ready to allow the increment:
// when I'm ready in thread B
semLock.Release();
(Note that this is contrived; a WaitHandle such as ManualResetEvent might be more appropriate in that example.)
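For completeness, a sketch of the ManualResetEvent variant hinted at above (the names are illustrative, not from the question): thread A waits on the event before incrementing, and thread B sets the event when it is ready.

    using System.Threading;

    class IncrementWhenSignalled
    {
        private readonly ManualResetEvent okToIncrement = new ManualResetEvent(false);
        private int counter = 0;

        // Thread A: waits until it is told the increment is allowed.
        public void ThreadA()
        {
            okToIncrement.WaitOne();            // blocks until ThreadB calls Set()
            Interlocked.Increment(ref counter); // safe even if several A threads are released at once
        }

        // Thread B: signals that the increment may now happen.
        public void ThreadB()
        {
            // ... when ready ...
            okToIncrement.Set();                // releases every waiter (manual-reset semantics)
        }
    }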
Performance
From a performance perspective, running the simple program below on a small multi-threaded VM, lock wins over Semaphore by a long way, although the timescales are still very fast and would be sufficient for all but high-throughput software. Note that this ranking was broadly the same when running the test with two parallel threads accessing the lock.
Time for 100 iterations in ticks on a small VM (smaller is better):
291.334 (Semaphore)
44.075 (SemaphoreSlim)
4.510 (Monitor.Enter)
6.991 (Lock)
Ticks per millisecond: 10000
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
class Program
{
static void Main(string[] args)
{
Program p = new Program();
Console.WriteLine("100 iterations in ticks");
p.TimeMethod("Semaphore", p.AThreadSemaphore);
p.TimeMethod("SemaphoreSlim", p.AThreadSemaphoreSlim);
p.TimeMethod("Monitor.Enter", p.AThreadMonitorEnter);
p.TimeMethod("Lock", p.AThreadLock);
Console.WriteLine("Ticks per millisecond: {0}", TimeSpan.TicksPerMillisecond);
}
private readonly Semaphore semLock = new Semaphore(1, 1);
private readonly SemaphoreSlim semSlimLock = new SemaphoreSlim(1, 1);
private readonly object _lockObject = new object();
const int Iterations = (int)1E6;
int sharedResource = 0;
void TimeMethod(string description, Action a)
{
sharedResource = 0;
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < Iterations; i++)
{
a();
}
sw.Stop();
Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * 100d / (double)Iterations, description);
}
void TimeMethod2Threads(string description, Action a)
{
sharedResource = 0;
Stopwatch sw = new Stopwatch();
using (Task t1 = new Task(() => IterateAction(a, Iterations / 2)))
using (Task t2 = new Task(() => IterateAction(a, Iterations / 2)))
{
sw.Start();
t1.Start();
t2.Start();
Task.WaitAll(t1, t2);
sw.Stop();
}
Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * (double)100 / (double)Iterations, description);
}
private static void IterateAction(Action a, int iterations)
{
for (int i = 0; i < iterations; i++)
{
a();
}
}
void AThreadSemaphore()
{
semLock.WaitOne();
try {
sharedResource++;
}
finally {
semLock.Release();
}
}
void AThreadSemaphoreSlim()
{
semSlimLock.Wait();
try
{
sharedResource++;
}
finally
{
semSlimLock.Release();
}
}
void AThreadMonitorEnter()
{
Monitor.Enter(_lockObject);
try
{
sharedResource++;
}
finally
{
Monitor.Exit(_lockObject);
}
}
void AThreadLock()
{
lock (_lockObject)
{
sharedResource++;
}
}
}
It's difficult to determine what you're asking for here.
If you just want something you can wait on, you can use a Monitor, which is what lock uses under the hood. That is, your lock sequence above is expanded to something like:
void AThread()
{
Monitor.Enter(this);
try
{
// Do protected work
}
finally
{
Monitor.Exit(this);
}
// Do unprotected work
}
By the way, lock (this) is generally not a good idea. You're better off creating a lock object:
private object _lockObject = new object();
Now, if you want to conditionally obtain the lock, you can use Monitor.TryEnter:
if (Monitor.TryEnter(_lockObject))
{
try
{
// Do protected work
}
finally
{
Monitor.Exit(_lockObject);
}
}
If you want to wait with a timeout, use the TryEnter overload:
if (Monitor.TryEnter(_lockObject, 5000)) // waits for up to 5 seconds
The return value is true if the lock was obtained.
A mutex is fundamentally different from an EventWaitHandle or Semaphore in that only the thread that acquires the mutex can release it. Any thread can set or clear a WaitHandle, and any thread can release a Semaphore.
I hope that answers your question. If not, edit your question to give us more detail about what you're asking for.
You should consider taking a look at the Wintellect Power Threading libraries:
https://github.com/Wintellect/PowerThreading
One of the things these libraries do is create generic abstractions that allow threading primitives to be swapped out.
This means that on a 1 or 2 processor machine, where you see very little contention, you may use a standard lock. On a 4 or 8 processor machine, where contention is common, perhaps a reader/writer lock is more appropriate. If you use primitives such as ResourceLock you can swap out:
Spin Lock
Monitor
Mutex
Reader Writer
Optex
Semaphore
... and others
I've written code that dynamically, based on the number of processors, chooses specific locks based on the amount of contention likely to be present. With the structure found in that library, this is practical to do.
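As a rough sketch of that idea using only standard .NET primitives (this is not the Wintellect ResourceLock API, just an illustration of the shape of it): pick a plain lock on machines with few cores, and a reader/writer lock where more concurrency is available and reads dominate.

    using System;
    using System.Threading;

    class AdaptiveCache
    {
        // Strategy chosen once at start-up from the processor count (illustrative threshold).
        private readonly bool useRwLock = Environment.ProcessorCount >= 4;
        private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
        private readonly object gate = new object();
        private int value;

        public int Read()
        {
            if (useRwLock)
            {
                rwLock.EnterReadLock();            // many readers may enter together
                try { return value; }
                finally { rwLock.ExitReadLock(); }
            }
            lock (gate) { return value; }          // simple exclusive lock on small machines
        }

        public void Write(int newValue)
        {
            if (useRwLock)
            {
                rwLock.EnterWriteLock();           // writers are exclusive
                try { value = newValue; }
                finally { rwLock.ExitWriteLock(); }
            }
            else
            {
                lock (gate) { value = newValue; }
            }
        }
    }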