Lock-Free Concurrent Queue - C#

This code snippet is taken from the ConcurrentQueue implementation linked here.
internal bool TryPeek(out T result)
{
    result = default(T);
    int lowLocal = Low;
    if (lowLocal > High)
        return false;
    SpinWait spin = new SpinWait();
    while (m_state[lowLocal] == 0)
    {
        spin.SpinOnce();
    }
    result = m_array[lowLocal];
    return true;
}
Is it really lock-free, given that it spins?

Spinning is a lock. This is stated in MSDN, Wikipedia and many other resources.
It's not about the word. Lock-free is a guarantee; it doesn't mean the code avoids the lock statement. An algorithm is lock-free if there is guaranteed system-wide progress. I don't see any difference between this code and code that uses locks. The only difference is that the spin uses busy waiting and thread yielding instead of putting the thread to sleep.
I don't see how this code guarantees system-wide progress, so personally I think this is not a lock-free implementation, at least not this function.
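To make the distinction concrete, here is a minimal sketch (not taken from the queue source) of an increment written as a CAS loop. It is lock-free in the formal sense: if one thread is suspended between its read and its CompareExchange, the remaining threads can still complete their increments. In TryPeek above, by contrast, a peeker that sees m_state[lowLocal] == 0 makes no progress until the enqueuing thread finishes its write.
static int LockFreeIncrement(ref int value)
{
    while (true)
    {
        int observed = value;           // read the current value
        int desired = observed + 1;
        // publish only if no other thread changed it in the meantime
        if (Interlocked.CompareExchange(ref value, desired, observed) == observed)
            return desired;
    }
}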

Lock-free means not using locks. Spin-waiting is not locking. There are a number of ways to synchronize access to data without using locks; performing spin waits is one (of many) options, and not all lock-free code will use spin waits.

Spinning places the CPU in a tight loop without yielding the rest of its current slice of processor time, avoiding problems that a user-provided loop may create. This can be useful if it is known that the state change is imminent. It is rare for this to be the best option for ordinary code, and it represents an alternative to locking for this specialized situation.
So yes, the code is lock-free as the term lock is used in the .NET Framework.
http://msdn.microsoft.com/en-us/library/hh228603.aspx

Related

Why does the lock inside AsyncLock not block the thread?

I'm trying to understand how the AsyncLock works.
First of all, here's a snippet to prove that it actually works:
var l = new AsyncLock();
var tasks = new List<Task>();
while (true)
{
    Console.ReadLine();
    var i = tasks.Count + 1;
    tasks.Add(Task.Run(async () =>
    {
        Console.WriteLine($"[{i}] Acquiring lock ...");
        using (await l.LockAsync())
        {
            Console.WriteLine($"[{i}] Lock acquired");
            await Task.Delay(-1);
        }
    }));
}
By "works" I mean that you can run as many tasks as you want (by hitting Enter) and the number of threads doesn't grow. If you replace it with traditional lock, you'll see that the new threads are started, which is what we try to avoid.
But the first thing you see in the source code is... the lock
Can somebody please explain me how this works, why it doesn't block, and what am I missing here?
The short answer is that lock is just an internal mechanism used to guarantee thread safety. The lock is never exposed in any way, and there's no way for any thread to hold that lock for any real amount of time. In this way, it's similar to the locks used internally by various concurrent collections.
There is an alternate approach that uses lock-free programming, but I have found lock-free programming to be extremely difficult to write, read, and maintain. A great example of this (which is sadly not online) was a bunch of Dr. Dobb's articles in the late '90s, each one trying to out-do the last with a better lock-free queue implementation. It turns out they were all faulty - in some cases, the bugs took more than a decade to find.
For my own code, I do not use lock-free programming, except where the correctness of the code is trivially obvious.
As far as the async lock vs lock concepts, I'm going to take a stab at explaining this. There's a feeling I get that I have only felt when working with asynchronous coordination primitives. It's something I've thought a lot about writing a blog post on, but I don't have the right words to make it understandable. That said, here goes...
Asynchronous coordination primitives exist on a completely different plane than normal coordination primitives. Synchronous primitives block threads and signal threads. Asynchronous primitives just work on plain objects; the blocking or signaling is just "by convention".
So, with a normal lock, the calling code must take the lock immediately. But with an asynchronous "lock", the attempted lock is just a request, just an object. The calling code doesn't even need to await it. It's possible to request several locks and await them all together with Task.WhenAll. Or even combine them with other things; code can do crazy things like (a)wait for two locks to both be free or for a signal (like AsyncManualResetEvent) to be sent, and then cancel the lock requests if the signal comes in first.
From a thread perspective, it's kinda-sorta like user-mode thread scheduling. There's also some similarities to cooperative multitasking (as opposed to preemptive). But overall, the asynchronous primitives are "lifted" to a different plane, where one works only with objects and blocks of code, not threads.
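A rough sketch of that idea, using SemaphoreSlim(1, 1) as a stand-in for an asynchronous lock (its WaitAsync returns a plain Task): the "lock requests" below are just task objects that can be combined before anything is awaited.
var lockA = new SemaphoreSlim(1, 1);
var lockB = new SemaphoreSlim(1, 1);

// Each "lock request" is just a Task; no thread is blocked while it is pending.
Task requestA = lockA.WaitAsync();
Task requestB = lockB.WaitAsync();

// The requests can be combined like any other tasks before being awaited.
await Task.WhenAll(requestA, requestB);
try
{
    // both "locks" are held here
}
finally
{
    lockA.Release();
    lockB.Release();
}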
The lock inside AsyncLock is being released very quickly. Each task that tries to acquire the AsyncLock successfully acquires its internal lock, and the actual locking logic is done with a queue.
By wrapping LockAsync() within a using block, the lock is released when the block ends: LockAsync returns a disposable Key object which is disposed at the end of the using block, and disposing it releases the lock. See https://github.com/StephenCleary/AsyncEx/blob/master/src/Nito.AsyncEx.Coordination/AsyncLock.cs#L182-L185
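For illustration only, here is a minimal sketch of the same pattern (not the Nito.AsyncEx source): the waiting is asynchronous, and LockAsync hands back a disposable key whose Dispose releases the lock, which is what makes the using block in the question work.
public sealed class SimpleAsyncLock
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

    public async Task<IDisposable> LockAsync()
    {
        // Waiting is asynchronous: no thread is blocked while queued.
        await _semaphore.WaitAsync().ConfigureAwait(false);
        return new Releaser(_semaphore);
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SemaphoreSlim _toRelease;
        public Releaser(SemaphoreSlim toRelease) { _toRelease = toRelease; }
        // Disposing the key releases the lock, so "using (await l.LockAsync())" works.
        public void Dispose() { _toRelease.Release(); }
    }
}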

Implementation of traversing of chunk partition in PLINQ

I came across the implementation of the ContiguousChunkLazyEnumerator class, which is used by PLINQ (traversal of chunks is performed with this iterator). The MoveNext method uses thread-safe access to the source IEnumerator (via the specified lock), and it saves the results of that access into an internal buffer. Here is a brief piece of the code:
lock (m_sourceSyncLock)
{
    // Some .net stuff
    try
    {
        for (; i < mutables.m_nextChunkMaxSize && m_source.MoveNext(); i++)
        {
            // Read the current entry into our buffer.
            chunkBuffer[i] = m_source.Current;
        }
    }
    // Some .net stuff
}
Such an iterator will be used by worker threads (N worker threads work with the same iterator). But I really don't understand the benefit of such a parallel approach. Using a lock in this context should kill any performance benefit. My assumption is that sequential access by a single worker thread should work at the same speed.
This is because PLINQ optimizes for concurrent processing of the items, not for concurrent enumeration of the items.
The heavy lock is done per chunk, so multiple threads will yield to each other between chunks.
This really shines when you have an IEnumerable that is quick to enumerate (List<T> for example, although in reality there are internal optimisations for List<T>, so it's not the best example) and you want to do some slow computational work on the results.
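For example (a sketch; ExpensiveWork stands in for any CPU-heavy per-item operation), the time spent inside the chunking lock is negligible compared to the work done on each item outside it:
// Enumerating 'source' is cheap and briefly locked per chunk;
// the expensive part runs on many threads in parallel.
var results = source
    .AsParallel()
    .Select(item => ExpensiveWork(item))
    .ToList();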
This code is about creating partitioned data to then be consumed by multiple threads. While it is thread-safe, it is not supposed to be about the fastest concurrent enumeration. It is optimised for data locality.

Interlocked.CompareExchange<Int> using GreaterThan or LessThan instead of equality

The System.Threading.Interlocked class allows for addition (subtraction) and comparison as an atomic operation. It seems that a CompareExchange that doesn't just do equality but also GreaterThan/LessThan as an atomic comparison would be quite valuable.
Would a hypothetical Interlocked.GreaterThan be a feature of the IL, or is it a CPU-level feature? Both?
Lacking any other option, is it possible to create such a feature in C++ or direct IL code and expose that functionality to C#?
You can build other atomic operations out of InterlockedCompareExchange.
public static bool InterlockedExchangeIfGreaterThan(ref int location, int comparison, int newValue)
{
    int initialValue;
    do
    {
        initialValue = location;
        if (initialValue >= comparison) return false;
    }
    while (System.Threading.Interlocked.CompareExchange(ref location, newValue, initialValue) != initialValue);
    return true;
}
With these helper methods you can not only exchange the value but also detect whether it was replaced or not.
Usage looks like this:
int currentMin = 10; // can be changed from another thread at any moment
int potentialNewMin = 8;
if (InterlockedExtension.AssignIfNewValueSmaller(ref currentMin, potentialNewMin))
{
    Console.WriteLine("New minimum: " + potentialNewMin);
}
And here are methods:
public static class InterlockedExtension
{
    public static bool AssignIfNewValueSmaller(ref int target, int newValue)
    {
        int snapshot;
        bool stillLess;
        do
        {
            snapshot = target;
            stillLess = newValue < snapshot;
        } while (stillLess && Interlocked.CompareExchange(ref target, newValue, snapshot) != snapshot);
        return stillLess;
    }

    public static bool AssignIfNewValueBigger(ref int target, int newValue)
    {
        int snapshot;
        bool stillMore;
        do
        {
            snapshot = target;
            stillMore = newValue > snapshot;
        } while (stillMore && Interlocked.CompareExchange(ref target, newValue, snapshot) != snapshot);
        return stillMore;
    }
}
Update to the post I made here earlier: we found a better way to make the greater-than comparison, by using an additional lock object. We wrote many unit tests to validate that a lock and Interlocked can be used together, but only for some cases.
How the code works: Interlocked uses memory barriers so that a read or write is atomic. The sync lock is needed to make the greater-than comparison an atomic operation. So the rule is that inside this class no other operation writes the value without taking this sync lock.
What we get with this class is an interlocked value which can be read very fast, but writing takes a little longer. Reads are about 2-4 times faster in our application.
The code is also shown as an image here: http://files.thekieners.com/blogcontent/2012/ExchangeIfGreaterThan2.png
And here it is as code to copy & paste:
public sealed class InterlockedValue
{
    private long _myValue;
    private readonly object _syncObj = new object();

    public long ReadValue()
    {
        // Reading the value (the 99.9% case in our app) does not take the lock object,
        // since that would be too much overhead in our highly multithreaded app.
        return Interlocked.Read(ref _myValue);
    }

    public bool SetValueIfGreaterThan(long value)
    {
        // Synchronize write access to _myValue, since a safe greater-than comparison is needed.
        lock (_syncObj)
        {
            // greater-than condition
            if (value > Interlocked.Read(ref _myValue))
            {
                // now we can safely set _myValue
                Interlocked.Exchange(ref _myValue, value);
                return true;
            }
            return false;
        }
    }
}
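Usage would look something like this (a sketch; latency is just an illustrative value produced by worker threads):
var maxSeen = new InterlockedValue();

// Writers (rare) take the internal lock:
if (maxSeen.SetValueIfGreaterThan(latency))
    Console.WriteLine("New maximum: " + latency);

// Readers (frequent) stay lock-free:
long current = maxSeen.ReadValue();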
This isn't actually true, but it is useful to think of concurrency as coming in 2 forms:
Lock free concurrency
Lock based concurrency
It's not true because software lock based concurrency ends up being implemented using lock free atomic instructions somewhere on the stack (often in the Kernel). Lock free atomic instructions, however, all ultimately end up acquiring a hardware lock on the memory bus. So, in reality, lock free concurrency and lock based concurrency are the same.
But conceptually, at the level of a user application, they are 2 distinct ways of doing things.
Lock based concurrency is based on the idea of "locking" access to a critical section of code. When one thread has "locked" a critical section, no other thread may have code running inside that same critical section. This is usually done by the use of "mutexes", which interface with the os scheduler and cause threads to become un-runnable while waiting to enter a locked critical section. The other approach is to use "spin locks" which cause a thread to spin in a loop, doing nothing useful, until the critical section becomes available.
Lock free concurrency is based on the idea of using atomic instructions (specially supported by the CPU), that are guaranteed by hardware to run atomically. Interlocked.Increment is a good example of lock free concurrency. It just calls special CPU instructions that do an atomic increment.
Lock free concurrency is hard. It gets particularly hard as the length and complexity of critical sections increase. Any step in a critical section can be simultaneously executed by any number of threads at once, and they can move at wildly different speeds. You have to make sure that despite that, the results of a system as a whole remain correct. For something like an increment, it can be simple (the cs is just one instruction). For more complex critical sections, things can get very complex very quickly.
Lock based concurrency is also hard, but not quite as hard as lock free concurrency. It allows you to create arbitrarily complex regions of code and know that only 1 thread is executing it at any time.
Lock free concurrency has one big advantage, however: speed. When used correctly it can be orders of magnitude faster than lock based concurrency. Spin loops are bad for long running critical sections because they waste CPU resources doing nothing. Mutexes can be bad for small critical sections because they introduce a lot of overhead. They involve a mode switch at a minimum, and multiple context switches in the worst case.
Consider implementing the managed heap. Calling into the OS every time "new" is called would be horrible; it would destroy the performance of your app. However, using lock free concurrency it's possible to implement gen 0 memory allocations using an interlocked increment (I don't know for sure if that's what the CLR does, but I'd be surprised if it wasn't). That can be a HUGE saving.
There are other uses, such as lock free data structures, like persistent stacks and AVL trees. They usually use CAS (compare-and-swap).
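For instance, a push onto a lock-free (Treiber-style) stack is just a CAS loop on the head reference. A sketch, not production code:
class Node<T> { public T Value; public Node<T> Next; }

class LockFreeStack<T>
{
    private Node<T> _head;

    public void Push(T value)
    {
        var node = new Node<T> { Value = value };
        Node<T> oldHead;
        do
        {
            oldHead = _head;          // snapshot the current head
            node.Next = oldHead;      // link the new node in front of it
        }
        // retry if another thread changed the head in the meantime
        while (Interlocked.CompareExchange(ref _head, node, oldHead) != oldHead);
    }
}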
The reason, however, that locked based concurrency and lock free concurrency are really equivalent is because of the implementation details of each.
Spin locks usually use atomic instructions (typically CAS) in their loop condition. Mutexes need to use either spin locks or atomic updates of internal kernel structures in their implementation.
The atomic instructions are in turn implemented using hardware locks.
In any case, they both have their sets of trade offs, usually centering on perf vs complexity. Mutexes can be both faster and slower than lock free code. Lock free code can be both more and less complex than a mutex. The appropriate mechanism to use depends on specific circumstances.
Now, to answer your question:
A method that did an interlocked compare exchange if less than would imply to callers that it is not using locks. You can't implement it with a single instruction the same way increment or compare exchange can be done. You could simulate it by doing a subtraction (to compute less than) with an interlocked compare exchange in a loop. You can also do it with a mutex (but that would imply a lock, and so using "interlocked" in the name would be misleading). Is it appropriate to build the "simulated interlocked via CAS" version? That depends. If the code is called very frequently and has very little thread contention, then the answer is yes. If not, you can turn an O(1) operation with moderately high constant factors into an infinite (or very long) loop, in which case it would be better to use a mutex.
Most of the time it's not worth it.
What do you think about this implementation:
// this is an Interlocked.ExchangeIfGreaterThan implementation
private static void ExchangeIfGreaterThan(ref long location, long value)
{
    // read
    long current = Interlocked.Read(ref location);
    // compare
    while (current < value)
    {
        // set
        var previous = Interlocked.CompareExchange(ref location, value, current);
        // if another thread has set a greater value, we can break,
        // or if the previous value equals the current value, then no other thread changed it in between
        if (previous == current || previous >= value) // note: most common case first
            break;
        // for all other cases, we need another run (read value, compare, set)
        current = Interlocked.Read(ref location);
    }
}
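As a usage sketch (maxObserved and sample are illustrative names), this works well for tracking a running maximum across many threads:
// shared field
private static long maxObserved;

// on each worker thread, after computing 'sample':
ExchangeIfGreaterThan(ref maxObserved, sample);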
All interlocked operations have direct support in the hardware.
Interlocked operations and atomic data types are different things. An atomic type is a library-level feature. On some platforms and for some data types, atomics are implemented using interlocked instructions; in that case they are very effective.
In other cases, when the platform does not have interlocked operations at all or they are not available for a particular data type, the library implements these operations using appropriate synchronization (critical section, mutex, etc.).
I am not sure Interlocked.GreaterThan is really needed; otherwise it might already have been implemented. If you know a good example where it would be useful, I am sure everybody here would be happy to hear it.
Greater-than/less-than and equal-to comparisons are already atomic operations. That doesn't address the safe concurrent behavior of your application, though.
There is no point in making them part of the Interlocked family, so the question is: what are you actually trying to achieve?

thread-safety of primitive concurrent read and write

A simplified illustration is below; how does .NET deal with such a situation?
And if it would cause problems, would I have to lock/gate access to each and every field/property that might at times be written to and accessed from different threads?
A field somewhere
public class CrossRoads
{
    public int _timeouts;
}
A background thread writer
public void TimeIsUp(CrossRoads crossRoads)
{
    crossRoads._timeouts++;
}
Possibly at the same time, trying to read elsewhere
public void HowManyTimeOuts(CrossRoads crossRoads)
{
    int timeOuts = crossRoads._timeouts;
}
The simple answer is that the above code has the ability to cause problems if accessed simultaneously from multiple threads.
The .Net framework provides two solutions: interlocking and thread synchronization.
For simple data type manipulation (i.e. ints), interlocking using the Interlocked class will work correctly and is the recommended approach.
In fact, Interlocked provides specific methods (Increment and Decrement) that make this easy:
Add an IncrementCount method to your CrossRoads class:
public void IncrementCount()
{
    Interlocked.Increment(ref _timeouts);
}
Then call this from your background worker:
public void TimeIsUp(CrossRoads crossRoads)
{
    crossRoads.IncrementCount();
}
Reading the value is atomic, unless it is a 64-bit value on a 32-bit OS. See the Interlocked.Read method documentation for more detail.
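For a 64-bit counter, a short sketch of an atomic read (the _totalBytes field is just illustrative):
private long _totalBytes;

public long GetTotalBytes()
{
    // Atomic even in a 32-bit process, where a plain read of a long could be torn.
    return Interlocked.Read(ref _totalBytes);
}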
For class objects or more complex operations, you will need to use thread synchronization locking (lock in C# or SyncLock in VB.Net).
This is accomplished by creating a static synchronization object at the level the lock is to be applied (for example, inside your class), obtaining a lock on that object, and performing (only) the necessary operations inside that lock:
private static object SynchronizationObject = new Object();

public void PerformSomeCriticalWork()
{
    lock (SynchronizationObject)
    {
        // do some critical work
    }
}
The good news is that reads and writes of ints are guaranteed to be atomic, so there are no torn values. However, that does not make ++ safe, the read could potentially be cached in a register, and there's also the issue of instruction re-ordering.
I would use:
Interlocked.Increment(ref crossroads._timeouts);
for the write, which will ensure no values are lost, and:
int timeouts = Interlocked.CompareExchange(ref crossroads._timeouts, 0, 0);
For the read, since this observes the same rules as the increment. Strictly speaking "volatile" is probably enough for the read, but it is so poorly understood that the Interlocked seems (IMO) safer. Either way, we're avoiding a lock.
Well, I'm not a C# developer, but this is how it typically works at this level:
how does .NET deal with such a situation?
Unlocked. Not likely to be guaranteed to be atomic.
Would I have to lock/gate access to each and every field/property that might at times be written to and accessed from different threads?
Yes. An alternative would be to make a lock for the object available to the clients, then tell the clients they must lock the object while using the instance. This will reduce the number of locks acquisitions, and guarantee a more consistent, predictable, state for your clients.
Forget .NET for a moment. At the machine language level, crossRoads._timeouts++ will be implemented as an INC [memory] instruction. This is known as a read-modify-write instruction. These instructions are atomic with respect to multi-threading on a single processor* (where multi-threading is essentially implemented with time-slicing), but are not atomic with respect to multi-threading using multiple processors or multiple cores.
So:
If you can guarantee that only TimeIsUp() will ever modify crossRoads._timeouts, and if you can guarantee that only one thread will ever execute TimeIsUp(), then it will be safe to do this. The writing in TimeIsUp() will work fine, and the reading in HowManyTimeOuts() (and any place else) will work fine. But if you also modify crossRoads._timeouts elsewhere, or if you ever spawn one more background thread writer, you will be in trouble.
In either case, my advice would be to play it safe and lock it.
(*) They are atomic with respect to multi-threading on a single processor because context switches between threads happen on a periodic interrupt, and on the x86 architectures these instructions are atomic with respect to interrupts, meaning that if an interrupt occurs while the CPU is executing such an instruction, the interrupt will wait until the instruction completes. This does not hold true with more complex instructions, for example those with the REP prefix.
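To see this in practice, here is a small sketch (the iteration count is arbitrary): the plain ++ loses updates on a multi-core machine, while Interlocked.Increment does not.
int unsafeCount = 0, safeCount = 0;
Parallel.For(0, 1000000, _ =>
{
    unsafeCount++;                        // read-modify-write, not atomic across cores
    Interlocked.Increment(ref safeCount); // atomic
});
// safeCount is always 1000000; unsafeCount is typically less.
Console.WriteLine(unsafeCount + " vs " + safeCount);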
Although an int may be 'native' size to a CPU (dealing in 32 or 64 bits at a time), if you are reading and writing from different threads to the same variable, you are best off locking this variable and synchronizing access.
There is no guarantee that reads/writes to an int will be atomic.
You can also use Interlocked.Increment for your purposes here.

In C# would it be better to use Queue.Synchronized or lock() for thread safety?

I have a Queue object that I need to ensure is thread-safe. Would it be better to use a lock object like this:
lock (myLockObject)
{
    // do stuff with the queue
}
Or is it recommended to use Queue.Synchronized like this:
Queue.Synchronized(myQueue).whatever_i_want_to_do();
From reading the MSDN docs it says I should use Queue.Synchronized to make it thread-safe, but then it gives an example using a lock object. From the MSDN article:
To guarantee the thread safety of the Queue, all operations must be done through this wrapper only.
Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads can still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads.
If calling Synchronized() doesn't ensure thread-safety what's the point of it? Am I missing something here?
Personally I always prefer locking. It means that you get to decide the granularity. If you just rely on the Synchronized wrapper, each individual operation is synchronized but if you ever need to do more than one thing (e.g. iterating over the whole collection) you need to lock anyway. In the interests of simplicity, I prefer to just have one thing to remember - lock appropriately!
EDIT: As noted in comments, if you can use higher level abstractions, that's great. And if you do use locking, be careful with it - document what you expect to be locked where, and acquire/release locks for as short a period as possible (more for correctness than performance). Avoid calling into unknown code while holding a lock, avoid nested locks etc.
In .NET 4 there's a lot more support for higher-level abstractions (including lock-free code). Either way, I still wouldn't recommend using the synchronized wrappers.
There's a major problem with the Synchronized methods in the old collection library, in that they synchronize at too low a level of granularity (per method rather than per unit-of-work).
There's a classic race condition with a synchronized queue, shown below where you check the Count to see if it is safe to dequeue, but then the Dequeue method throws an exception indicating the queue is empty. This occurs because each individual operation is thread-safe, but the value of Count can change between when you query it and when you use the value.
object item;
if (queue.Count > 0)
{
    // at this point another thread dequeues the last item, and then
    // the next line will throw an InvalidOperationException...
    item = queue.Dequeue();
}
You can safely write this using a manual lock around the entire unit-of-work (i.e. checking the count and dequeueing the item) as follows:
object item;
lock (queue)
{
    if (queue.Count > 0)
    {
        item = queue.Dequeue();
    }
}
So as you can't safely dequeue anything from a synchronized queue, I wouldn't bother with it and would just use manual locking.
.NET 4.0 should have a whole bunch of properly implemented thread-safe collections, but that's still nearly a year away unfortunately.
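For reference, ConcurrentQueue<T> (in System.Collections.Concurrent as of .NET 4.0) folds the check and the removal into a single call, so the Count/Dequeue race above cannot occur. A sketch:
var queue = new System.Collections.Concurrent.ConcurrentQueue<object>();
object item;
// TryDequeue atomically checks for an item and removes it if present.
if (queue.TryDequeue(out item))
{
    // use item
}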
There's frequently a tension between demands for 'thread safe collections' and the requirement to perform multiple operations on the collection in an atomic fashion.
So Synchronized() gives you a collection which won't smash itself up if multiple threads add items to it simultaneously, but it doesn't magically give you a collection that knows that during an enumeration, nobody else must touch it.
As well as enumeration, common operations like "is this item already in the queue? No, then I'll add it" also require synchronisation which is wider than just the queue.
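For example, an "add only if not already present" operation has to hold one lock across both steps (queue and item are illustrative here):
lock (queue)
{
    // The check and the add must happen under the same lock;
    // otherwise another thread could enqueue the item in between.
    if (!queue.Contains(item))
    {
        queue.Enqueue(item);
    }
}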
This way we don't need to lock the queue just to find out it was empty.
object item;
if (queue.Count > 0)
{
    lock (queue)
    {
        if (queue.Count > 0)
        {
            item = queue.Dequeue();
        }
    }
}
It seems clear to me that using a lock(...) {...} block is the right answer.
To guarantee the thread safety of the Queue, all operations must be done through this wrapper only.
If other threads access the queue without using .Synchronized(), then you'll be up a creek - unless all your queue access is locked up.
