Is it necessary to have two locks for two objects? - C#

I have two shared resources that are updated in their own separate tasks, which run concurrently. The second task checks the status of the first task's shared resource and then updates its own shared resource. After one of the tasks completes, I check the status of both shared resources. Do I need two separate locks to make this thread safe, or is one enough? For example:
private int Example()
{
    object lockObj = new object();
    int x = 0;
    int y = 0;
    int resultOfComputation = 0;
    List<Task> tasks = new List<Task>();
    Task task1 = Task.Factory.StartNew(() =>
    {
        try
        {
            int z = doComputation();
            resultOfComputation = z;
        }
        catch
        {
            resultOfComputation = -1;
        }
        finally
        {
            lock (lockObj)
            {
                x = resultOfComputation;
            }
        }
    });
    tasks.Add(task1);
    Task task2 = Task.Factory.StartNew(() =>
    {
        checkOnStatusOfThing(ref x);
        lock (lockObj)
        {
            y += x;
        }
    });
    tasks.Add(task2);
    Task.WaitAny(tasks.ToArray());
    if (x == 3 || y == 9)
    {
        return -1;
    }
    return 0;
}
void checkOnStatusOfThing(ref int x)
{
    if (x == 5)
    {
        return;
    }
}

Using a single lock object is the safe option. You define the variables that constitute the shared state, and meticulously use the lock every time you read or write these variables, from any thread. If you do this, the correctness of your application will be easy to prove (easy by the standards of multithreading, which is inherently difficult).
To minimize contention for the lock you should release it as fast as possible. Avoid doing anything unrelated to the shared state while holding the lock. For example, if you must call a method with a shared variable as an argument, take a snapshot of the variable and use the snapshot as the argument.
int snapshot;
lock (lockObj)
{
snapshot = sharedState;
}
MethodCall(snapshot);
If you follow this advice then contention for the lock should be minimal, and should not significantly affect the performance of your application. But if your benchmarks reveal that there is too much contention for the lock, you may consider introducing multiple locks to increase the granularity of the locking scheme and reduce contention. Be aware that this change will increase the complexity of your application by an order of magnitude. Deadlocks become possible, so you must be familiar with the classical synchronization problems, like the Dining Philosophers, and their solutions.
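If you do introduce multiple locks, the standard defence against deadlock is a fixed acquisition order: every code path that needs more than one lock takes them in the same sequence. A minimal sketch (the names are illustrative, not from the question):
private readonly object lockA = new object(); // guards x
private readonly object lockB = new object(); // guards y

void UpdateBoth()
{
    // Always acquire lockA before lockB, everywhere in the codebase,
    // so no two threads can ever hold one lock while waiting for the other.
    lock (lockA)
    {
        lock (lockB)
        {
            // read and write x and y together here
        }
    }
}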

Related

Calling Interlocked after Semaphore WaitOne

I came across the following code, which blocks on a Semaphore when GenerateLabel is called more than 4 times concurrently. After the WaitOne, a member mCurrentScanner is used to get access to a scanner. The question is whether the Interlocked calls are needed after the WaitOne. I'd say no, as the thread starts fresh when the WaitHandle is released, but I'm not 100% sure.
mConcurrentLabels = new Semaphore(4, 4);
public string GenerateLabel()
{
mConcurrentLabels.WaitOne();
int current = 0;
Interlocked.Exchange(ref current, mCurrentScanner);
(scanner, dir) = ScanMappings[current];
Interlocked.Increment(ref mCurrentScanner);
mCurrentScanner %= 4;
DoLongRunningTask();
mConcurrentLabels.Release();
}
Like you said: the semaphore is used to limit the number of concurrent threads, but the body is still executed concurrently, so locks/Interlocked are required.
The bigger problem is the combination of Interlocked.Exchange(ref current, mCurrentScanner) to read the value and Interlocked.Increment(ref mCurrentScanner) to advance it.
Two threads might read the same value via the Exchange() and then increment it twice, so you'd select one value twice and skip the next one.
I also advise using try/finally when using semaphores.
mConcurrentLabels = new Semaphore(4, 4);
public string GenerateLabel()
{
mConcurrentLabels.WaitOne();
try
{
int current = Interlocked.Increment(ref mCurrentScanner);
(scanner, dir) = ScanMappings[current];
// mCurrentScanner %= 4; <------ ?
DoLongRunningTask();
}
finally
{
mConcurrentLabels.Release();
}
}
But if you need to mod the mCurrentScanner, I wouldn't use Interlocked.
mConcurrentLabels = new Semaphore(4, 4);
object mSyncRoot = new object();
public string GenerateLabel()
{
mConcurrentLabels.WaitOne();
try
{
int current;
lock(mSyncRoot)
{
current = mCurrentScanner++;
mCurrentScanner %= 4;
}
(scanner, dir) = ScanMappings[current];
// mCurrentScanner %= 4; <------ ?
DoLongRunningTask();
}
finally
{
mConcurrentLabels.Release();
}
}
It seems that the purpose of the semaphore is to protect the long-running task, not to protect access to the private variables.
This is useful from a resource-management perspective, for example to prevent too many concurrent long-running tasks from trashing a shared resource like a database.
The Interlocked statements are needed to protect the private variables, because the semaphore allows this code to run up to four times concurrently on different threads.
It is good practice to put the main part of this code in a try/finally block to guarantee mConcurrentLabels.Release() is called exactly once for every call to mConcurrentLabels.WaitOne().
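As a sketch of that resource-management idea on its own (the names here are hypothetical, not from the question's code):
using System.Threading;
using System.Threading.Tasks;

class ThrottledQueries
{
    // At most 4 concurrent database operations, mirroring new Semaphore(4, 4).
    static readonly SemaphoreSlim dbThrottle = new SemaphoreSlim(4, 4);

    public static async Task QueryDatabaseAsync()
    {
        await dbThrottle.WaitAsync();
        try
        {
            await Task.Delay(1000); // placeholder for the real long-running database call
        }
        finally
        {
            dbThrottle.Release(); // runs exactly once per successful WaitAsync
        }
    }
}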

What's the best pattern for a thread safe write cache to database?

I have a method that can be called by multiple threads to write data to a database. To reduce database traffic, I cache the data and write it in bulk.
Now I'd like to know: is there a better (for example lock-free) pattern to use?
Here is an example of how I do it at the moment:
public class WriteToDatabase : IWriter, IDisposable
{
public WriteToDatabase(PLCProtocolServiceConfig currentConfig)
{
writeTimer = new System.Threading.Timer(Writer);
writeTimer.Change((int)currentConfig.WriteToDatabaseTimer.TotalMilliseconds, Timeout.Infinite);
this.currentConfig = currentConfig;
}
private System.Threading.Timer writeTimer;
private List<PlcProtocolDTO> writeCache = new List<PlcProtocolDTO>();
private readonly PLCProtocolServiceConfig currentConfig;
private bool disposed;
public void Write(PlcProtocolDTO row)
{
lock (this)
{
writeCache.Add(row);
}
}
private void Writer(object state)
{
List<PlcProtocolDTO> oldCache = null;
lock (this)
{
if (writeCache.Count > 0)
{
oldCache = writeCache;
writeCache = new List<PlcProtocolDTO>();
}
}
if (oldCache != null)
{
using (var s = VisuDL.CreateSession())
{
s.Insert(oldCache);
}
}
if (!this.disposed)
writeTimer.Change((int)currentConfig.WriteToDatabaseTimer.TotalMilliseconds, Timeout.Infinite);
}
public void Dispose()
{
this.disposed = true;
writeTimer.Dispose();
Writer(null);
}
}
There are a few issues I can see with the timer-based code.
Even in the new version of the code there is still a chance to lose writes on restart or shutdown: the Dispose method does not wait for the completion of a timer callback that may currently be in progress, and since timer callbacks run on thread-pool threads, which are background threads, they will be aborted when the main thread exits.
There is no limit on the size of the batches; this is going to break when you hit a limit of the underlying storage API (e.g. SQL databases have a limit on query length and on the number of parameters used).
Since you're doing I/O, the implementation should probably be async.
This will behave poorly under load: as the load keeps increasing, the batches will get bigger and therefore slower to execute, and a slower batch execution in turn gives the next one additional time to accumulate items, making it even slower, and so on. Ultimately either writing the batch will fail (if you hit a SQL limit or the query times out) or the application will run out of memory. To handle high load you really have only two choices: applying backpressure (i.e. slowing down the producers) or dropping writes.
You might want to allow a limited number of concurrent writers if the database can handle it.
There's a race condition on the disposed field, which might result in an ObjectDisposedException in writeTimer.Change.
I think a better pattern that addresses the issues above is the producer-consumer pattern. You can implement it in .NET with a ConcurrentQueue or with the new System.Threading.Channels API.
Also keep in mind that if your application crashes for any reason, you will lose the records that are still buffered.
This is a sample implementation using channels:
public interface IWriter<in T>
{
ValueTask WriteAsync(IEnumerable<T> items);
}
public sealed record Options(int BatchSize, TimeSpan Interval, int MaxPendingWrites, int Concurrency);
public class BatchWriter<T> : IWriter<T>, IAsyncDisposable
{
readonly IWriter<T> writer;
readonly Options options;
readonly Channel<T> channel;
readonly Task[] consumers;
public BatchWriter(IWriter<T> writer, Options options)
{
this.writer = writer;
this.options = options;
channel = Channel.CreateBounded<T>(new BoundedChannelOptions(options.MaxPendingWrites)
{
// Choose between backpressure (Wait) or
// various ways to drop writes (DropNewest, DropOldest, DropWrite).
FullMode = BoundedChannelFullMode.Wait,
SingleWriter = false,
SingleReader = options.Concurrency == 1
});
consumers = Enumerable.Range(start: 0, options.Concurrency)
.Select(_ => Task.Run(Start))
.ToArray();
}
async Task Start()
{
var batch = new List<T>(options.BatchSize);
var timer = Task.Delay(options.Interval);
var canRead = channel.Reader.WaitToReadAsync().AsTask();
while (true)
{
if (await Task.WhenAny(timer, canRead) == timer)
{
timer = Task.Delay(options.Interval);
await Flush(batch);
}
else if (await canRead)
{
while (channel.Reader.TryRead(out var item))
{
batch.Add(item);
if (batch.Count == options.BatchSize)
{
await Flush(batch);
}
}
canRead = channel.Reader.WaitToReadAsync().AsTask();
}
else
{
await Flush(batch);
return;
}
}
async Task Flush(ICollection<T> items)
{
if (items.Count > 0)
{
await writer.WriteAsync(items);
items.Clear();
}
}
}
public async ValueTask WriteAsync(IEnumerable<T> items)
{
foreach (var item in items)
{
await channel.Writer.WriteAsync(item);
}
}
public async ValueTask DisposeAsync()
{
channel.Writer.Complete();
await Task.WhenAll(consumers);
}
}
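For completeness, a hypothetical usage sketch; DatabaseWriter and rows are assumptions here, standing in for an IWriter<PlcProtocolDTO> that performs the actual bulk insert and for the items to write:
var options = new Options(
    BatchSize: 100,
    Interval: TimeSpan.FromSeconds(5),
    MaxPendingWrites: 10_000,
    Concurrency: 1);

// DatabaseWriter would wrap VisuDL.CreateSession()/Insert from the question.
await using (var writer = new BatchWriter<PlcProtocolDTO>(new DatabaseWriter(), options))
{
    await writer.WriteAsync(rows); // buffered; flushed by batch size or by interval
}
// DisposeAsync completes the channel and drains any remaining buffered items.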
Instead of using a mutable List and protecting it using locks, you could use an ImmutableList, and stop worrying about the possibility of the list being mutated by the wrong thread at the wrong time. With immutable collections it is cheap and easy to pass around snapshots of your data, because you don't need to block the writers (and possibly also the readers) while creating copies of the data. An immutable collection is a snapshot by itself.
Although you don't have to worry about the contents of the collection, you still have to worry about its reference, because updating an immutable collection means replacing the reference to the old collection with a reference to a new one. You don't want multiple threads swapping references in an uncontrollable manner, so you still need some sort of synchronization. You can still use locks, but it is quite easy to avoid locking altogether by using interlocked operations. The example below uses the handy ImmutableInterlocked.Update method, which performs an atomic update-and-swap in a single line:
private ImmutableList<PlcProtocolDTO> writeCache
= ImmutableList<PlcProtocolDTO>.Empty;
public void Write(PlcProtocolDTO row)
{
ImmutableInterlocked.Update(ref writeCache, x => x.Add(row));
}
private void Writer(object state)
{
IList<PlcProtocolDTO> oldCache = Interlocked.Exchange(
ref writeCache, ImmutableList<PlcProtocolDTO>.Empty);
using (var s = VisuDL.CreateSession())
s.Insert(oldCache);
}
private void Dump()
{
foreach (var row in Volatile.Read(ref writeCache))
Console.WriteLine(row);
}
Here is the description of the ImmutableInterlocked.Update method:
Mutates a value in-place with optimistic locking transaction semantics via a specified transformation function. The transformation is retried as many times as necessary to win the optimistic locking race.
This method can be used for updating any reference-type variable. Its usage may increase with the advent of the new C# 9 record types, which are immutable by default and are intended to be used as such.
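The same optimistic pattern can be hand-rolled for any reference type with Interlocked.CompareExchange. A sketch with a hypothetical C# 9 record (not part of the answer above):
public sealed record Stats(int Count, long Total);

private Stats stats = new Stats(0, 0);

public void AddSample(long value)
{
    // Retry until our copy wins the race - the same "optimistic locking
    // transaction" semantics that ImmutableInterlocked.Update provides.
    Stats snapshot, updated;
    do
    {
        snapshot = Volatile.Read(ref stats);
        updated = snapshot with { Count = snapshot.Count + 1, Total = snapshot.Total + value };
    }
    // ReferenceEquals, not ==, because records overload == with value semantics.
    while (!ReferenceEquals(Interlocked.CompareExchange(ref stats, updated, snapshot), snapshot));
}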

Is TakeWhile(...) and etc. extension methods thread safe in Rx 1.0?

I have an event source that fires very frequently from network I/O; by design, the event is raised on a different thread each time. I wrapped this event via Rx with Observable.FromEventPattern(...), and now I'm using TakeWhile(predict) to filter some special event data.
Now I have some concerns about its thread safety. TakeWhile(predict) works as a "hit and mute", but in a concurrent situation can that still be guaranteed? I guess the underlying implementation could be something like this (I can't read the actual source code, since it's too complicated...):
public static IObservable<TSource> TakeWhile<TSource>(this IObservable<TSource> source, Func<TSource, bool> predict)
{
ISubject<TSource> takeUntilObservable = new TempObservable<TSource>();
IDisposable dps = null;
// 0 means takeUntilObservable is still active; 1 means the predicate failed, we disposed, and OnCompleted was already sent.
int state = 0;
dps = source.Subscribe(
(s) =>
{
/* NOTE: the 'hit and mute' here is still not thread safe; one thread may enter the 'else' branch and be inside the CompareExchange while another thread has already passed predict(...) and is calling OnNext(...).
* So the CompareExchange here mainly avoids calling OnCompleted() and Dispose() multiple times.
*/
if (predict(s) && state == 0)
{
takeUntilObservable.OnNext(s);
}
else
{
// != 0 means we already disposed and sent OnCompleted; this avoids multiple calls from parallel threads.
if (0 == Interlocked.CompareExchange(ref state, 1, 0))
{
try
{
takeUntilObservable.OnCompleted();
}
finally
{
dps.Dispose();
}
}
}
},
() =>
{
try
{
takeUntilObservable.OnCompleted();
}
finally { dps.Dispose(); }
},
(ex) => { takeUntilObservable.OnError(ex); });
return takeUntilObservable;
}
That TempObservable is just a simple implementation of ISubject.
If my guess is reasonable, then it seems thread safety can't be guaranteed: some unexpected event data may still arrive at OnNext(...) while the 'mute' is still in progress.
So I wrote a simple test to verify, but unexpectedly, the results were all positive:
public class MultipleThreadEventSource
{
public event EventHandler OnSthNew;
int concurrentCount = 1000;
public void Start()
{
for (int i = 0; i < this.concurrentCount; i++)
{
int j = i;
ThreadPool.QueueUserWorkItem((state) =>
{
var safe = this.OnSthNew;
if (safe != null)
safe(j, null);
});
}
}
}
[TestMethod()]
public void MultipleThreadEventSourceTest()
{
int loopTimes = 10;
int onCompletedCalledTimes = 0;
for (int i = 0; i < loopTimes; i++)
{
MultipleThreadEventSource eventSim = new MultipleThreadEventSource();
var host = Observable.FromEventPattern(eventSim, "OnSthNew");
host.TakeWhile(p => { return int.Parse(p.Sender.ToString()) < 110; }).Subscribe((nxt) =>
{
//try to print the unexpected values, BUT I never saw it happen!!!
if (int.Parse(nxt.Sender.ToString()) >= 110)
{
this.testContextInstance.WriteLine(nxt.Sender.ToString());
}
}, () => { Interlocked.Increment(ref onCompletedCalledTimes); });
eventSim.Start();
}
// simply wait for everything to be done.
Thread.Sleep(60000);
this.testContextInstance.WriteLine("onCompletedCalledTimes: " + onCompletedCalledTimes);
}
Before I did the testing, some friends here suggested I try Synchronize<TSource> or ObserveOn to make it thread safe. Any thoughts on my reasoning above, and on why the issue doesn't reproduce?
As per your other question, the answer still remains the same: in Rx you should assume that observers are called in a serialized fashion.
To provide a better answer: originally the Rx team ensured that observable sequences were thread safe, but the performance penalty for well-behaved/designed applications was unnecessary, so the decision was taken to remove the built-in thread safety and its performance cost. To let you opt back in to thread safety, you can apply the Synchronize() method, which serializes all OnNext/OnError/OnCompleted calls. This doesn't mean they will be called on the same thread, but you won't get your OnNext method called while another call is being processed.
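If Synchronize() is available in your version, opting back in is a one-line change (a sketch against the question's event source):
Observable.FromEventPattern(eventSim, "OnSthNew")
    .Synchronize() // serializes OnNext/OnError/OnCompleted for everything downstream
    .TakeWhile(p => int.Parse(p.Sender.ToString()) < 110)
    .Subscribe(next => Console.WriteLine(next.Sender));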
The bad news: from memory this happened in Rx 2.0, and you are specifically asking about Rx 1.0. (I am not sure Synchronize() even exists in 1.x.)
So if you are on Rx v1, then you have this blurry certainty about what is thread safe and what isn't. I am pretty sure the Subjects are safe, but I can't be sure about the factory methods like FromEventPattern.
My recommendation is: if you need to ensure thread safety, serialize your data pipeline. The easiest way to do this is to use a single-threaded IScheduler implementation, i.e. a DispatcherScheduler or an EventLoopScheduler instance.
Some good news is that when I wrote the book on Rx it targeted v1, so this section is very relevant for you: http://introtorx.com/Content/v1.0.10621.0/15_SchedulingAndThreading.html
So if your query right now looked like this:
Observable.FromEventPattern(....)
.TakeWhile(x=>x>5)
.Subscribe(....);
To ensure that the pipeline is serialized you can create an EventLoopScheduler (at the cost of dedicating a thread to this):
var scheduler = new EventLoopScheduler();
Observable.FromEventPattern(....)
.ObserveOn(scheduler)
.TakeWhile(x=>x>5)
.Subscribe(....);

Is there a general way to convert a critical section to one or more semaphores?

Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
For example, if I have two threads doing protected and unprotected work like below. Can I convert them to Semaphores that can be signaled, cleared and waited on?
void AThread()
{
lock (this)
{
Do Protected Work
}
Do Unprotected work.
}
The question came to me after thinking about C#'s lock() statement and whether I could implement equivalent functionality with an EventWaitHandle instead.
Yes, there is a general way to convert a lock section to use a Semaphore: use the same try...finally block that lock is equivalent to, with a Semaphore that has a max count of 1, initialised to count 1.
EDIT (May 11th) recent research has shown me that my reference for the try ... finally equivalence is out of date. The code samples below would need to be adjusted accordingly as a result of this. (end edit)
private readonly Semaphore semLock = new Semaphore(1, 1);
void AThread()
{
semLock.WaitOne();
try {
// Protected code
}
finally {
semLock.Release();
}
// Unprotected code
}
However you would never do this. lock:
is used to restrict resource access to a single thread at a time,
conveys the intent that resources in that section cannot be simultaneously accessed by more than one thread
Conversely Semaphore:
is intended to control simultaneous access to a pool of resources with a limit on concurrent access.
conveys the intent of either a pool of resources that can be accessed by a maximum number of threads, or of a controlling thread that can release a number of threads to do some work when it is ready.
with a max count of 1 will perform slower than lock.
can be released by any thread, not just the one that entered the section (added in edit)
Edit: You also mention EventWaitHandle at the end of your question. It is worth noting that Semaphore is a WaitHandle, but not an EventWaitHandle, and also from the MSDN documentation for EventWaitHandle.Set:
There is no guarantee that every call to the Set method will release a thread from an EventWaitHandle whose reset mode is EventResetMode.AutoReset. If two calls are too close together, so that the second call occurs before a thread has been released, only one thread is released. It is as if the second call did not happen.
The Detail
You asked:
Is there a general way to convert a critical section to one or more semaphores? That is, is there some sort of straightforward transformation of the code that can be done to convert them?
Given that:
lock (this) {
// Do protected work
}
//Do unprotected work
is equivalent (see below for reference and notes on this) to
EDIT (11th May): as per the above comment, this code sample needs adjusting before use, as per this link.
Monitor.Enter(this);
try {
// Protected code
}
finally {
Monitor.Exit(this);
}
// Unprotected code
You can achieve the same using Semaphore by doing:
private readonly Semaphore semLock = new Semaphore(1, 1);
void AThread()
{
semLock.WaitOne();
try {
// Protected code
}
finally {
semLock.Release();
}
// Unprotected code
}
You also asked:
For example, if I have two threads doing protected and unprotected work like below. Can I convert them to Semaphores that can be signaled, cleared and waited on?
This is a question I struggled to understand, so I apologise. In your example you name your method AThread. To me it's not really AThread, it's AMethodToBeRunByManyThreads!
private readonly Semaphore semLock = new Semaphore(1, 1);
void MainMethod() {
Thread t1 = new Thread(AMethodToBeRunByManyThreads);
Thread t2 = new Thread(AMethodToBeRunByManyThreads);
t1.Start();
t2.Start();
// Now wait for them to finish - but how?
}
void AMethodToBeRunByManyThreads() { ... }
So semLock = new Semaphore(1, 1); will protect your "protected code", but lock is more appropriate for that use. The difference is that a Semaphore would allow a third thread to get involved:
private readonly Semaphore semLock = new Semaphore(0, 2);
private readonly object _lockObject = new object();
private int counter = 0;
void MainMethod()
{
Thread t1 = new Thread(AMethodToBeRunByManyThreads);
Thread t2 = new Thread(AMethodToBeRunByManyThreads);
t1.Start();
t2.Start();
// Now wait for them to finish
semLock.WaitOne();
semLock.WaitOne();
lock (_lockObject)
{
// uses lock to enforce a memory barrier to ensure we read the right value of counter
Console.WriteLine("done: {0}", counter);
}
}
void AMethodToBeRunByManyThreads()
{
lock (_lockObject) {
counter++;
Console.WriteLine("one");
Thread.Sleep(1000);
}
semLock.Release();
}
However, in .NET 4.5 you would use Tasks to do this and control your main thread synchronisation.
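A sketch of that with Tasks, reusing the lock-based AMethodToBeRunByManyThreads from above (Task.WaitAll also establishes the necessary memory barrier, so reading counter afterwards is safe):
Task t1 = Task.Run(() => AMethodToBeRunByManyThreads());
Task t2 = Task.Run(() => AMethodToBeRunByManyThreads());
Task.WaitAll(t1, t2); // replaces the two semLock.WaitOne() calls
Console.WriteLine("done: {0}", counter);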
Here are a few thoughts:
lock(x) and Monitor.Enter - equivalence
The above statement about equivalence is not quite accurate. In fact:
"[lock] is precisely equivalent [to Monitor.Enter try ... finally] except x is only evaluated once [by lock]"
(ref: C# Language Specification)
This is minor, and probably doesn't matter to us.
You may have to be careful of memory barriers and of incrementing counter-like fields, so if you are using a Semaphore you may still need lock, or Interlocked if you are confident using it.
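For a lone counter field, that can be as simple as:
Interlocked.Increment(ref counter); // atomic read-modify-write, no lock needed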
Beware of lock(this) and deadlocks
My original source for this would be Jeffrey Richter's article "Safe Thread Synchronization". That, and general best practice:
Don't lock this, instead create an object field within your class on class instantiation (don't use a value type, as it will be boxed anyway)
Make the object field readonly (personal preference - but it not only conveys intent, it also prevents your locking object being changed by other code contributors etc.)
The implications are many, but to make team working easier, to follow best practice for encapsulation, and to avoid nasty edge-case errors that are hard for tests to detect, it is better to follow the above rules.
Your original code would therefore become:
private readonly object m_lockObject = new object();
void AThread()
{
lock (m_lockObject) {
// Do protected work
}
//Do unprotected work
}
(Note: generally Visual Studio helps you in its snippets by using SyncRoot as your lock object name)
Semaphore and lock are intended for different use
lock grants threads a spot on the "ready queue" on a FIFO basis (ref. Threading in C# - Joseph Albahari, part 2: Basic Synchronization, Section: Locking). When anyone sees lock, they know that usually inside that section is a shared resource, such as a class field, that should only be altered by a single thread at a time.
The Semaphore is a non-FIFO control for a section of code. It is great for publisher-subscriber (inter-thread communication) scenarios. The fact that a Semaphore can be released by a different thread than the one that acquired it is very powerful. Semantically it does not necessarily say "only one thread accesses the resources inside this section", unlike lock.
Example: to increment a counter on a class, you might use lock, but not Semaphore
lock (_lockObject) {
counter++;
}
But to increment it only once another thread has said it is OK to do so, you could use a Semaphore, not a lock, where thread A does the increment once it enters the Semaphore section:
semLock.WaitOne();
counter++;
return;
And thread B releases the Semaphore when it is ready to allow the increment:
// when I'm ready in thread B
semLock.Release();
(Note that this example is contrived; a WaitHandle such as ManualResetEvent might be more appropriate here.)
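A sketch of that alternative for the same two-thread handoff:
private readonly ManualResetEvent canIncrement = new ManualResetEvent(false);

// Thread A: block until thread B gives the go-ahead, then increment.
void ThreadA()
{
    canIncrement.WaitOne();
    counter++;
}

// Thread B: signal when ready; the event stays signalled until Reset() is called.
void ThreadB()
{
    canIncrement.Set();
}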
Performance
From a performance perspective, running the simple program below on a small multi-threaded VM, lock wins over Semaphore by a long way, although the timescales are still very fast and would be sufficient for all but high-throughput software. Note that this ranking was broadly the same when running the test with two parallel threads accessing the lock.
Time for 100 iterations in ticks on a small VM (smaller is better):
291.334 (Semaphore)
44.075 (SemaphoreSlim)
4.510 (Monitor.Enter)
6.991 (Lock)
Ticks per millisecond: 10000
class Program
{
static void Main(string[] args)
{
Program p = new Program();
Console.WriteLine("100 iterations in ticks");
p.TimeMethod("Semaphore", p.AThreadSemaphore);
p.TimeMethod("SemaphoreSlim", p.AThreadSemaphoreSlim);
p.TimeMethod("Monitor.Enter", p.AThreadMonitorEnter);
p.TimeMethod("Lock", p.AThreadLock);
Console.WriteLine("Ticks per millisecond: {0}", TimeSpan.TicksPerMillisecond);
}
private readonly Semaphore semLock = new Semaphore(1, 1);
private readonly SemaphoreSlim semSlimLock = new SemaphoreSlim(1, 1);
private readonly object _lockObject = new object();
const int Iterations = (int)1E6;
int sharedResource = 0;
void TimeMethod(string description, Action a)
{
sharedResource = 0;
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < Iterations; i++)
{
a();
}
sw.Stop();
Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * 100d / (double)Iterations, description);
}
void TimeMethod2Threads(string description, Action a)
{
sharedResource = 0;
Stopwatch sw = new Stopwatch();
using (Task t1 = new Task(() => IterateAction(a, Iterations / 2)))
using (Task t2 = new Task(() => IterateAction(a, Iterations / 2)))
{
sw.Start();
t1.Start();
t2.Start();
Task.WaitAll(t1, t2);
sw.Stop();
}
Console.WriteLine("{0:0.000} ({1})", (double)sw.ElapsedTicks * (double)100 / (double)Iterations, description);
}
private static void IterateAction(Action a, int iterations)
{
for (int i = 0; i < iterations; i++)
{
a();
}
}
void AThreadSemaphore()
{
semLock.WaitOne();
try {
sharedResource++;
}
finally {
semLock.Release();
}
}
void AThreadSemaphoreSlim()
{
semSlimLock.Wait();
try
{
sharedResource++;
}
finally
{
semSlimLock.Release();
}
}
void AThreadMonitorEnter()
{
Monitor.Enter(_lockObject);
try
{
sharedResource++;
}
finally
{
Monitor.Exit(_lockObject);
}
}
void AThreadLock()
{
lock (_lockObject)
{
sharedResource++;
}
}
}
It's difficult to determine what you're asking for here.
If you just want something you can wait on, you can use a Monitor, which is what lock uses under the hood. That is, your lock sequence above is expanded to something like:
void AThread()
{
Monitor.Enter(this);
try
{
// Do protected work
}
finally
{
Monitor.Exit(this);
}
// Do unprotected work
}
By the way, lock (this) is generally not a good idea. You're better off creating a lock object:
private object _lockObject = new object();
Now, if you want to conditionally obtain the lock, you can use Monitor.TryEnter:
if (Monitor.TryEnter(_lockObject))
{
try
{
// Do protected work
}
finally
{
Monitor.Exit(_lockObject);
}
}
If you want to wait with a timeout, use the TryEnter overload:
if (Monitor.TryEnter(_lockObject, 5000)) // waits for up to 5 seconds
The return value is true if the lock was obtained.
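Putting that together, the timeout variant follows the same shape as the earlier pattern:
if (Monitor.TryEnter(_lockObject, 5000)) // waits for up to 5 seconds
{
    try
    {
        // Do protected work
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
else
{
    // The lock was not acquired within the timeout; handle the contention here
}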
A mutex is fundamentally different from an EventWaitHandle or Semaphore in that only the thread that acquires the mutex can release it. Any thread can set or clear a WaitHandle, and any thread can release a Semaphore.
I hope that answers your question. If not, edit your question to give us more detail about what you're asking for.
You should consider taking a look a the Wintellect Power Threading libraries:
https://github.com/Wintellect/PowerThreading
One of the things these libraries do is create generic abstractions that allow threading primitives to be swapped out.
This means that on a 1- or 2-processor machine, where you see very little contention, you may use a standard lock. On a 4- or 8-processor machine, where contention is common, perhaps a reader/writer lock is more correct. If you use primitives such as ResourceLock you can swap out:
Spin Lock
Monitor
Mutex
Reader Writer
Optex
Semaphore
... and others
I've written code that dynamically chooses specific locks based on the number of processors and the amount of contention likely to be present. With the structure found in that library, this is practical to do.
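As a sketch of that idea using plain .NET types (the Wintellect library hides this behind a single abstraction; the core-count threshold here is illustrative):
class AdaptiveReader
{
    private readonly object gate = new object();
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private readonly bool manyCores = Environment.ProcessorCount >= 4;
    private int sharedValue;

    public int ReadSharedValue()
    {
        if (!manyCores)
        {
            // Few cores, little contention: a plain exclusive lock is cheapest.
            lock (gate) { return sharedValue; }
        }
        // Many cores: a reader/writer lock lets readers proceed in parallel.
        rwLock.EnterReadLock();
        try { return sharedValue; }
        finally { rwLock.ExitReadLock(); }
    }
}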

Condition Variables C#/.NET

In my quest to build a condition variable class I stumbled on a trivially simple way of doing it, and I'd like to share it with the Stack Overflow community. I was googling for the better part of an hour and was unable to find a good tutorial or .NET-ish example that felt right; hopefully this can be of use to other people out there.
It's actually incredibly simple, once you know about the semantics of lock and Monitor.
But first, you do need an object reference. You can use this, but remember that this is public, in the sense that anyone with a reference to your class can lock on that reference. If you are uncomfortable with this, you can create a new private reference, like this:
readonly object syncPrimitive = new object(); // this is legal
Somewhere in your code where you'd like to be able to provide notifications, it can be accomplished like this:
void Notify()
{
lock (syncPrimitive)
{
Monitor.Pulse(syncPrimitive);
}
}
And the place where you'd do the actual work is a simple looping construct, like this:
void RunLoop()
{
lock (syncPrimitive)
{
for (;;)
{
// do work here...
Monitor.Wait(syncPrimitive);
}
}
}
Yes, this looks incredibly deadlock-ish, but the locking protocol for Monitor is such that it will release the lock during the Monitor.Wait. In fact, it's a requirement that you have obtained the lock before you call either Monitor.Pulse, Monitor.PulseAll or Monitor.Wait.
There's one caveat with this approach that you should know about. Since the lock is required to be held before calling the communication methods of Monitor, you should really only hang on to the lock for as short a duration as possible. A variation of the RunLoop that's more friendly towards long-running background tasks would look like this:
void RunLoop()
{
for (;;)
{
// do work here...
lock (syncPrimitive)
{
Monitor.Wait(syncPrimitive);
}
}
}
But now we've changed the problem a bit, because the lock is no longer protecting the shared resource throughout the processing. So if some of your code in the do work here... part needs to access a shared resource, you'll need a separate lock managing access to that.
We can leverage the above to create a simple thread-safe producer-consumer collection (although .NET already provides an excellent ConcurrentQueue<T> implementation; this is just to illustrate the simplicity of using Monitor to implement such mechanisms).
class BlockingQueue<T>
{
// We base our queue on the (non-thread safe) .NET 2.0 Queue collection
readonly Queue<T> q = new Queue<T>();
public void Enqueue(T item)
{
lock (q)
{
q.Enqueue(item);
System.Threading.Monitor.Pulse(q);
}
}
public T Dequeue()
{
lock (q)
{
for (;;)
{
if (q.Count > 0)
{
return q.Dequeue();
}
System.Threading.Monitor.Wait(q);
}
}
}
}
Now, the point here is not to build a blocking collection; that is also available in the .NET Framework (see BlockingCollection). The point is to illustrate how simple it is to build an event-driven message system using the Monitor class in .NET to implement a condition variable. I hope you find this useful.
Use ManualResetEvent
The class most similar to a condition variable is ManualResetEvent; only the method names are slightly different.
The notify_one() in C++ would be named Set() in C#.
The wait() in C++ would be named WaitOne() in C#.
Moreover, ManualResetEvent also provides a Reset() method to set the state of the event to non-signaled.
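In code, the mapping described above looks like this (a sketch; note that, unlike a true condition variable, the event is not paired with a lock and a predicate re-check):
var signal = new ManualResetEvent(false); // starts non-signaled

// Waiting thread - the counterpart of wait():
signal.WaitOne();

// Signaling thread - the counterpart of notify_one():
signal.Set();

// Return the event to the non-signaled state for the next round:
signal.Reset();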
The accepted answer is not a good one.
According to the Dequeue() code, Wait() gets called on every loop iteration, which causes unnecessary waiting and thus excessive context switches. The correct paradigm is that Wait() is called only when the waiting condition is met; in this case, the waiting condition is q.Count == 0.
Here's a better pattern to follow when it comes to using a Monitor.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682052%28v=vs.85%29.aspx
Another comment on the C# Monitor: it does not make use of condition variables, so a Pulse essentially wakes up threads waiting on that lock regardless of the condition they went to sleep on; consequently, some threads may grab the lock and immediately go back to waiting when they find the condition hasn't changed. It does not provide you with as fine-grained threading control as pthreads, but it's .NET anyway, so that's not completely unexpected.
Upon the request of John, here's an improved version:
class BlockingQueue<T>
{
readonly Queue<T> q = new Queue<T>();
public void Enqueue(T item)
{
lock (q)
{
while (false) // condition predicate(s) for producer; can be omitted in this particular case
{
System.Threading.Monitor.Wait(q);
}
// critical section
q.Enqueue(item);
}
// generally better to signal outside the lock scope
System.Threading.Monitor.Pulse(q);
}
public T Dequeue()
{
T t;
lock (q)
{
while (q.Count == 0) // condition predicate(s) for consumer
{
System.Threading.Monitor.Wait(q);
}
// critical section
t = q.Dequeue();
}
// This Pulse can be omitted in this particular case, but not if there is a waiting condition for the producer, as the producer needs to be woken up. And here is the problem caused by the C# Monitor's missing condition variables: all waiting threads stay on the same waiting queue of the shared resource/lock.
System.Threading.Monitor.Pulse(q);
return t;
}
}
A few things I'd like to point out:
1. I think my solution captures the requirements and definitions more precisely than yours. Specifically, the consumer should be forced to wait if and only if there's nothing left in the queue; otherwise it's up to the OS/.NET runtime to schedule threads. In your solution, however, the consumer is forced to wait on each loop iteration, regardless of whether it has actually consumed anything - this is the excessive waiting/context switching I was talking about.
2. My solution is symmetric in the sense that the consumer and the producer code share the same pattern, while yours is not. If you did know the pattern and just omitted it for this particular case, then I take back this point.
3. Your solution signals inside the lock scope, while mine signals outside the lock scope. Please refer to this answer as to why your solution is worse.
Why should we signal outside the lock scope?
I was talking about the flaw of missing condition variables in the C# Monitor, and here is its impact: there is simply no way in C# to implement the optimization of moving a waiting thread from the condition queue to the lock queue. Therefore, the excessive context switches are doomed to take place in the three-thread scenario proposed by the answer in the link.
Also, the lack of condition variables makes it impossible to distinguish between the various cases where threads wait on the same shared resource/lock but for different reasons. All waiting threads are placed on one big waiting queue for that shared resource, which undermines efficiency.
"But it's .NET anyway, so not completely unexpected" - it's understandable that .NET does not pursue efficiency as aggressively as C++. But that does not imply programmers should not know the differences and their impacts.
Go to deadlockempire.github.io. They have an amazing tutorial that will help you understand condition variables as well as locks, and it will certainly help you write your desired class.
You can step through the following code at deadlockempire.github.io and trace it. The first two loops are identical consumer threads running concurrently, and the third is the producer thread:
while (true) {
Monitor.Enter(mutex);
if (queue.Count == 0) {
Monitor.Wait(mutex);
}
queue.Dequeue();
Monitor.Exit(mutex);
}
while (true) {
Monitor.Enter(mutex);
if (queue.Count == 0) {
Monitor.Wait(mutex);
}
queue.Dequeue();
Monitor.Exit(mutex);
}
while (true) {
Monitor.Enter(mutex);
queue.Enqueue(42);
Monitor.PulseAll(mutex);
Monitor.Exit(mutex);
}
As has been pointed out by h9uest's answer and comments, the Monitor.Wait interface does not allow for proper condition variables (i.e. it does not allow waiting on multiple conditions per shared lock).
The good news is that the other synchronization primitives (e.g. SemaphoreSlim, lock keyword, Monitor.Enter/Exit) in .NET can be used to implement a proper condition variable.
The following ConditionVariable class will allow you to wait on multiple conditions using a shared lock.
class ConditionVariable
{
private int waiters = 0;
private object waitersLock = new object();
private SemaphoreSlim sema = new SemaphoreSlim(0, Int32.MaxValue);
public ConditionVariable() {
}
public void Pulse() {
bool release;
lock (waitersLock)
{
release = waiters > 0;
}
if (release) {
sema.Release();
}
}
public void Wait(object cs) {
lock (waitersLock) {
++waiters;
}
Monitor.Exit(cs);
sema.Wait();
lock (waitersLock) {
--waiters;
}
Monitor.Enter(cs);
}
}
All you need to do is create an instance of the ConditionVariable class for each condition you want to be able to wait on.
object queueLock = new object();
private ConditionVariable notFullCondition = new ConditionVariable();
private ConditionVariable notEmptyCondition = new ConditionVariable();
And then just like in the Monitor class, the ConditionVariable's Pulse and Wait methods must be invoked from within a synchronized block of code.
T Take() {
lock(queueLock) {
while(queue.Count == 0) {
// wait for queue to be not empty
notEmptyCondition.Wait(queueLock);
}
T item = queue.Dequeue();
if(queue.Count < 100) {
// notify producer queue not full anymore
notFullCondition.Pulse();
}
return item;
}
}
void Add(T item) {
lock(queueLock) {
while(queue.Count >= 100) {
// wait for queue to be not full
notFullCondition.Wait(queueLock);
}
queue.Enqueue(item);
// notify consumer queue not empty anymore
notEmptyCondition.Pulse();
}
}
Below is a link to the full source code of a proper Condition Variable class using 100% managed code in C#.
https://github.com/CodeExMachina/ConditionVariable
I think I found "the way" for the typical problem of a
List<string> log;
used by multiple threads - one that fills it, one that processes it, and one that empties it - while avoiding a polling loop like this:
while (true)
{
    // stuff
    Thread.Sleep(100);
}
The variables used in Program:
public static readonly List<string> logList = new List<string>();
public static EventWaitHandle evtLogListFilled = new AutoResetEvent(false);
The processor thread works like this:
private void bw_DoWorkLog(object sender, DoWorkEventArgs e)
{
    StringBuilder toFile = new StringBuilder();
    while (true)
    {
        try
        {
            // wait for a signal
            Program.evtLogListFilled.WaitOne();
            Monitor.Enter(Program.logList);
            try
            {
                // critical section
                int max = Program.logList.Count;
                for (int i = 0; i < max; i++)
                {
                    SetText(Program.logList[0]);
                    toFile.Append(Program.logList[0]);
                    toFile.Append("\r\n");
                    Program.logList.RemoveAt(0);
                }
            }
            finally
            {
                Monitor.Exit(Program.logList);
                // end critical section
            }
            try
            {
                if (toFile.Length > 0)
                {
                    Logger.Log(toFile.ToString().Substring(0, toFile.Length - 2));
                    toFile.Clear();
                }
            }
            catch
            {
            }
        }
        catch (Exception ex)
        {
            Logger.Log(System.Reflection.MethodBase.GetCurrentMethod(), ex);
        }
        Thread.Sleep(100);
    }
}
On the filler thread we have:
public static void logList_add(string str)
{
    try
    {
        Monitor.Enter(Program.logList);
        try
        {
            // critical section
            Program.logList.Add(str);
        }
        finally
        {
            Monitor.Exit(Program.logList);
            // end critical section
        }
        // signal the processor thread
        Program.evtLogListFilled.Set();
    }
    catch { }
}
This solution is fully tested. The instruction Program.evtLogListFilled.Set() releases a thread blocked on Program.evtLogListFilled.WaitOne(); and because the event is an AutoResetEvent, a Set that arrives while no thread is waiting stays signaled and satisfies the next future WaitOne(). I think this is the simplest way.
