Memory barrier vs Interlocked impact on memory caches coherency timing - c#

Simplified question:
Is there a difference in the timing of memory cache coherency (or "flushing") caused by Interlocked operations compared to memory barriers? Considering C#: any Interlocked operation vs Thread.MemoryBarrier(). I believe there is a difference.
Background:
I have read quite a bit of information about memory barriers - all about their effect of preventing specific kinds of reordering of memory-interaction instructions - but I couldn't find consistent info on whether they should cause immediate flushing of read/write queues.
I actually found a few sources mentioning that there is NO guarantee on the immediacy of the operation (only the prevention of specific reordering is guaranteed).
E.g.
Wikipedia:
"However, to be clear, it does not mean any operations WILL have completed by the time the barrier completes; only the ORDERING of the completion of operations (when they do complete) is guaranteed"
Freebsd.org (barriers are HW specific, so I guess a specific OS doesn't matter): "memory barriers simply determine relative order of memory operations; they do not make any guarantee about timing of memory operations"
On the other hand, Interlocked operations - from their definition - cause the memory subsystem to lock the entire cache line holding the value, to prevent access (including reads) from any other CPU/core until the operation is done.
Am I correct or am I mistaken?
Disclaimer:
This is an evolution of my original question here Variable freshness guarantee in .NET (volatile vs. volatile read)
EDIT1:
Fixed my statement about Interlocked operations, inline in the text above.
EDIT2:
Completely removed the demonstration code and its discussion (as some complained about too much information).

To understand C# interlocked operations, you need to understand Win32 interlocked operations.
The "pure" interlocked operations themselves only affect the freshness of the data directly referenced by the operation.
But in Win32, interlocked operations used to imply a full memory barrier. I believe this is mostly to avoid breaking old programs on newer hardware. So InterlockedAdd does two things: an interlocked add (very cheap, does not affect caches) and a full memory barrier (a rather heavy operation).
Later, Microsoft realized this is expensive, and added versions of each operation that do no or only a partial memory barrier.
So there are now (in the Win32 world) four versions of almost everything: e.g. InterlockedAdd (full fence), InterlockedAddAcquire (read fence), InterlockedAddRelease (write fence), and the pure InterlockedAddNoFence (no fence).
In the C# world, there is only one version, and it matches the "classic" InterlockedAdd, which also does the full memory fence.
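A minimal sketch of what that means in practice (the Counter class and its field are invented for this illustration, not taken from the answer):
using System.Threading;

class Counter
{
    private int _count;   // illustrative shared field

    public void Bump()
    {
        // The single C# overload behaves like the "classic" Win32 InterlockedAdd:
        // an atomic read-modify-write that also acts as a full memory fence,
        // ordering the surrounding loads and stores.
        Interlocked.Add(ref _count, 1);
    }
}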

Short answer: CAS (Interlocked) operations have been (and most likely will remain) the quickest cache flushers.
Background:
- CAS operations are supported in hardware by a single uninterruptible instruction. Compare that to a thread calling a memory barrier, which can be swapped out right after placing the barrier but just before performing any reads/writes (the ordering guaranteed by the barrier is still met).
- CAS operations are the foundation of the majority (if not all) of high-level synchronization constructs (mutexes, semaphores, locks - look at their implementations and you will find CAS operations; see the sketch below). They would hardly be used if they didn't guarantee immediate cross-thread state consistency, or if there were other, faster mechanisms.
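As a rough illustration of that second point, here is a minimal CAS-based lock sketched in C#. The TinySpinLock name and its field are invented for this example; real mutexes add waiting, fairness and backoff on top of the same primitive:
using System.Threading;

class TinySpinLock
{
    private int _state;   // 0 = free, 1 = held

    public void Enter()
    {
        // Only one thread can win the atomic 0 -> 1 transition.
        while (Interlocked.CompareExchange(ref _state, 1, 0) != 0)
        {
            Thread.SpinWait(1);   // brief pause before retrying
        }
    }

    public void Exit()
    {
        // Publish the release so other cores see the lock as free again.
        Volatile.Write(ref _state, 0);
    }
}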

At least on Intel devices, a number of machine-code operations can be prefixed with a LOCK prefix, which ensures that the following operation is treated as atomic, even if the underlying datatype won't fit on the data bus in one go; for example, LOCK REPNE SCASB will scan a string of bytes for a terminating zero and won't be interrupted by other threads.
As far as I am aware, the Memory Barrier construct is basically a CAS-based spinlock that causes a thread to wait for some condition to be met, such as no other threads having any work to do. This is clearly a higher-level construct, but make no mistake, there's a condition check in there, and it's likely to be atomic and CAS-protected; you're still going to pay the cache-line price when you reach a memory barrier.

Related

Interlocked & Thread-Safe operations

1. Out of curiosity, what do operations like the following do behind the scenes when they get called, for example, from 2 or 3 threads at the same time?
Interlocked.Add(ref myInt, 24);
Interlocked.Increment(ref counter);
Does C# create an internal queue that tells, for example, Thread 2 "now it's your turn to do the operation", then tells Thread 1 "now it's your turn", and then Thread 3 "you do the operation", so that they will not interfere with each other?
2. Why doesn't C# do this automatically?
Isn't it obvious that when a programmer writes something like the following inside a multi-threaded method:
myByte++;
Sum = int1 + int2 + int3;
and these variables are shared with other threads, that he wants each of these operations to be executed as an atomic operation, without interruptions?
Why does the programmer have to tell it explicitly to do so?
Isn't it clear that that's what every programmer wants? Don't these "Interlocked" methods just add unnecessary complication to the language?
Thanks.
what does operations like the following do behind the scenes
As far as how it's implemented internally, CPU hardware arbitrates which core gets ownership of the cache line when there's contention. See Can num++ be atomic for 'int num'? for a C++ and x86 asm / cpu-architecture explanation of the details.
Re: why CPUs and compilers want to load early and store late:
see Java instruction reordering and CPU memory reordering
Atomic RMW prevents that, so do seq_cst store semantics on most ISAs where you do a plain store and then a full barrier. (AArch64 has a special interaction between stlr and ldar to prevent StoreLoad reordering of seq_cst operations, but still allow reordering with other operations.)
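To make the StoreLoad point concrete in C#, here is a hedged sketch of the classic two-flag experiment (all names are illustrative). With plain stores and loads, both threads may read 0 because each store can sit in a store buffer past the following load; a full barrier (or an atomic RMW) between the store and the load rules that outcome out:
using System;
using System.Threading;

class StoreLoadDemo
{
    static int x, y;

    static void Main()
    {
        int r1 = 0, r2 = 0;
        var t1 = new Thread(() => { x = 1; Thread.MemoryBarrier(); r1 = y; });
        var t2 = new Thread(() => { y = 1; Thread.MemoryBarrier(); r2 = x; });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        // With the barriers, (r1, r2) == (0, 0) is impossible;
        // without them, store buffering can legally produce it.
        Console.WriteLine((r1, r2));
    }
}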
Isn't it obvious that when a programmer writes something like the following inside a multi-threaded method [...]
What does that even mean? It's not running the same method in multiple threads that's a problem, it's accessing shared data. How is the compiler supposed to know which data will be accessed non-readonly from multiple threads at the same time, not inside a critical section?
There's no reasonable way to prove this in general, only in some simplistic cases. If a compiler were to try, it would have to be conservative, erring on the side of making more things atomic, at a huge cost in performance. The other kind of mistake would be a correctness problem, and if that could just happen when the compiler guesses wrong based on some undocumented heuristics, it would make the language unusable for multi-threaded programs.
Besides that, not all multi-threaded code needs sequential consistency all the time; often acquire/release or relaxed atomics are fine, but sometimes they aren't. It makes a lot of sense for programmers to be explicit about what ordering and atomicity their algorithm is built on.
Also you carefully design lock-free multi-threaded code to do things in a sensible order. In C++, you don't have to use Interlock..., but instead you make a variable std::atomic<int> shared_int; for example. (Or use std::atomic_ref<int> to do atomic operations on variables that other code can access non-atomically, like using Interlocked functions).
Having no explicit indication in the source of which operations are atomic with what ordering semantics would make it harder to read and maintain such code. Correct lock-free algorithms don't just happen by having the compiler turn individual operators into atomic ops.
Promoting every operation to atomic would destroy performance. Most data isn't shared, even in functions that access some shared data structures.
Atomic RMW (like x86 lock add [rdi], eax) is much slower than a non-atomic operation, especially since non-atomic lets the compiler optimize variables into registers.
An atomic RMW on x86 is a full memory barrier, so making every operation atomic would destroy memory-level parallelism every time you use a += or ++.
e.g. one per 18 cycles throughput on Skylake for lock xadd [mem], reg if hot in L1d cache, vs. one per 0.25 cycles for add reg, reg (https://uops.info), not to mention removing opportunities to optimize away and combine operations, and reducing the ability of out-of-order execution to overlap work.
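A hedged sketch of the alternative this answer is arguing for (total and values are invented names): keep the hot loop non-atomic so the accumulator can live in a register, and pay for a single full-barrier RMW at the end rather than one per iteration:
using System.Threading;

class SumWorker
{
    static long total;                      // shared accumulator

    static void AddAll(int[] values)
    {
        long local = 0;
        foreach (var v in values)
            local += v;                     // plain register adds in the hot loop
        Interlocked.Add(ref total, local);  // one atomic RMW (full barrier) instead of N
    }
}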
This is a partial answer to the question you asked in the comments:
Why not? If I, as a programmer know exactly where I should put this protections, why can't the compiler?
In order for the compiler to do that, it would need to understand all possible execution paths through your program. This is effectively the Path Testing problem discussed here: https://softwareengineering.stackexchange.com/questions/277693/does-path-coverage-guarantee-finding-all-bugs
That article states that this is equivalent to the halting problem, which is computer-science-ese for saying it's an unsolvable problem.
The cool thing is that you want to do this in a world where you have multiple threads of execution running on possibly multiple processors. That makes an unsolvable problem that much harder to solve.
On the other hand, the programmer should know what his/her program does...

Volatile.Write freshness guarentee

The documentation for Volatile.Write says the following:
Writes the specified object reference to the specified field. On systems that require it, inserts a memory barrier that prevents the processor from reordering memory operations as follows: If a read or write appears before this method in the code, the processor cannot move it after this method.
and
value T: The object reference to write. The reference is written immediately so that it is visible to all processors in the computer.
But it seems like quotes 1 and 2 are contradictory.
For the second quote to be true, I would think that the first quote would have to be changed as follows:
If a read or write appears after (rather than before) this method in the code, the processor cannot move it before (rather than after) this method.
Does Volatile.Write actually mean that other threads are guaranteed to pick up the write in a timely fashion, or is the second quote misleading?
It seems to me that all these "volatile"/"memory barrier" constructs are focused on ensuring that, if writes are exposed to other threads, they are exposed in the correct order; but I can't seem to find what would actually force them to be exposed.
I understand that it may be hard/impossible to expose writes to other threads immediately, but without volatile writes/reads there are cases where the writes are never exposed. So it seems there must be a way to ensure that writes are exposed "eventually", but I'm unsure what that is. Is it that writes are always exposed in .NET but reads can be cached? And if so, does Volatile.Read stop this caching behaviour?
(Note: I have read through Joseph Albahari's Threading in C#, which tends to suggest I need explicit memory barriers before my reads and after my writes, although it's not clear why even that should be effective, as the documentation for Thread.MemoryBarrier doesn't seem to explicitly say that the writes are shown to other threads.)
You are misunderstanding the concept of barriers a little bit. As you wrote
The object reference to write. The reference is written immediately so that it is visible to all processors in the computer.
So the really important unit here is a processor, not a thread.
So, there are processors, processor caches, store buffers and invalidation queues involved.
When a processor writes something to memory, the write does not reach the other processors directly; it first lands in the writing processor's store buffer. A lot goes on when you write or read something, and it does not happen instantly for all the processors in the system. At the beginning, a read or write command is placed into the processor's store buffer, and those commands can be reordered, in other words executed in a different order by the processor.
While that happens, other processors don't know about the change if the operation is a write, and the currently working processor doesn't know about changes other processors have made.
When you place a barrier, it means that the operations in the store buffer or invalidation queue must be completed before any further read or write can be performed. That is necessary to bring the CPU caches up to date across processors. So there is basically no mechanism to synchronize data across threads; we are syncing data across processors.
When thread A writes something on processor 1 and thread B reads something on processor 1, they both start by looking into the store buffer first, so they read the actual data, whether any barriers are placed or not.
This is just an overview of the mechanics involved, and maybe I'm wrong in some details. You can find the complete picture by reading about the MESI protocol and this PDF with an explanation of invalidation queues and store buffers.
I agree with you that the description in the MSDN documentation is a bit confusing. I would say that "immediately" is a strong word here, as it is for any subject related to parallel processes. The result won't be visible immediately, but the documentation doesn't say that - it says that the value will be written immediately; that is, as soon as all prior load/store operation results become globally visible, the store of the value will immediately be initiated.
As for the memory barriers, they can only give a guarantee about the exposure (global visibility) of prior operations, because in essence memory barriers are instructions which, when encountered by a CPU, make the CPU "wait" for all pending load/store operations to become globally visible, while the moment of global visibility of the value written by Volatile.Write is neither the barrier's nor Volatile.Write's concern.
Now, about the suggestion to use a barrier in lock-free programming. Of course it makes sense, because it ensures the order of global visibility, which matters on multi-core systems. When you cannot be sure that an event B always happens after an event A, you just can't build reliable logic meant to be executed in multi-core environments.
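For reference, a minimal sketch of the publish/consume pattern these answers are describing (class and member names are illustrative): Volatile.Write keeps the payload store before the flag store, and Volatile.Read keeps the flag load before the payload load; that is the ordering guarantee, not a promptness guarantee.
using System.Threading;

class Publisher
{
    private int _payload;
    private bool _ready;

    public void Publish(int value)
    {
        _payload = value;                   // ordinary store
        Volatile.Write(ref _ready, true);   // release: cannot move above the payload store
    }

    public bool TryConsume(out int value)
    {
        if (Volatile.Read(ref _ready))      // acquire: cannot move below the payload load
        {
            value = _payload;
            return true;
        }
        value = 0;
        return false;
    }
}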

Why do c# iterators track creating thread over using an interlocked operation?

This is just something that's been puzzling me ever since I read about iterators on Jon Skeet's site.
There's a simple performance optimisation that Microsoft has implemented with their automatic iterators - the returned IEnumerable can be reused as an IEnumerator, saving an object creation. Now because an IEnumerator necessarily needs to track state, this is only valid the first time it's iterated.
What I cannot understand is why the design team took the approach they did to ensure thread safety.
Normally when I'm in a similar position I'd use what I consider to be a simple Interlocked.CompareExchange - to ensure that only one thread manages to change the state from "available" to "in process".
Conceptually it's very simple, a single atomic operation, no extra fields are required etc.
But the design team's approach? Every IEnumerable keeps a field with the managed thread ID of the creating thread, and that field is checked against the current thread ID when GetEnumerator is called; only if it's the same thread, and it's the first time it's called, can the IEnumerable return itself as the IEnumerator. It seems harder to reason about, imo.
I'm just wondering why this approach was taken. Are Interlocked operations far slower than two calls to System.Threading.Thread.CurrentThread.ManagedThreadId, so much so that it justifies the extra field?
Or is there some other reason behind this, perhaps involving memory models or ARM devices or something I'm not seeing? Maybe the spec imparts specific requirements on the implementation of IEnumerable? Just genuinely puzzled.
I can't answer definitively, but as to your question:
Are Interlocked operations far slower than two calls to System.Threading.Thread.CurrentThread.ManagedThreadId, so much so that it justifies the extra field?
Yes, interlocked operations are much slower than two calls to get the ManagedThreadId - interlocked operations aren't cheap because they require multi-CPU systems to synchronize their caches.
From Understanding the Impact of Low-Lock Techniques in Multithreaded Apps:
Interlocked instructions need to ensure that caches are synchronized so that reads and writes don't seem to move past the instruction. Depending on the details of the memory system and how much memory was recently modified on various processors, this can be pretty expensive (hundreds of instruction cycles).
Threading in C# lists the overhead as around 10 ns, whereas getting the ManagedThreadId should be a normal, non-locked read of static data.
Now this is just my speculation, but if you think about the normal use case, it would be to call the function to retrieve the IEnumerable and immediately iterate over it once. So in the standard use case the object is:
Used once
Used on the same thread it was created on
Short lived
So this design brings in no synchronization overhead and sacrifices 4 bytes, which will probably only be in use for a very short period of time.
Of course to prove this you would have to do performance analysis to determine the relative costs and code analysis to prove what the common case was.
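For illustration, this is roughly the shape of the check being discussed, written out by hand (the class name, state values and fallback path are a sketch, not the exact compiler output):
using System;
using System.Collections;
using System.Collections.Generic;

class GeneratedIterator : IEnumerable<int>, IEnumerator<int>
{
    private int _state;                 // -2 = "fresh enumerable, not yet enumerated" (illustrative)
    private readonly int _initialThreadId = Environment.CurrentManagedThreadId;

    public GeneratedIterator(int state) { _state = state; }

    public IEnumerator<int> GetEnumerator()
    {
        // Two plain reads, no Interlocked: reuse "this" only on the creating
        // thread and only for the first call.
        if (_state == -2 && _initialThreadId == Environment.CurrentManagedThreadId)
        {
            _state = 0;
            return this;
        }
        return new GeneratedIterator(0);   // other threads/later calls get a fresh copy
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();

    // IEnumerator plumbing elided; a real iterator advances _state in MoveNext.
    public int Current => 0;
    object IEnumerator.Current => Current;
    public bool MoveNext() => false;
    public void Reset() => throw new NotSupportedException();
    public void Dispose() { }
}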

Do I need to synchronize thread access to an int?

I've just written a method that is called by multiple threads simultaneously and I need to keep track of when all the threads have completed. The code uses this pattern:
private void RunReport()
{
    _reportsRunning++;
    try
    {
        // code to run the report
    }
    finally
    {
        _reportsRunning--;
    }
}
This is the only place within the code that _reportsRunning's value is changed, and the method takes about a second to run.
Occasionally, when I have more than six or so threads running reports together, the final result for _reportsRunning can get down to -1. If I wrap the calls to _reportsRunning++ and _reportsRunning-- in a lock, the behaviour appears to be correct and consistent.
So, to the question: When I was learning multithreading in C++ I was taught that you didn't need to synchronize calls to increment and decrement operations because they were always one assembly instruction and therefore it was impossible for the thread to be switched out mid-call. Was I taught correctly, and if so, how come that doesn't hold true for C#?
A ++ operator is not atomic in C# (and I doubt it is guaranteed to be atomic in C++) so yes, your counting is subject to race conditions.
Use Interlocked.Increment and .Decrement
System.Threading.Interlocked.Increment(ref _reportsRunning);
try
{
    ...
}
finally
{
    System.Threading.Interlocked.Decrement(ref _reportsRunning);
}
So, to the question: When I was learning multithreading in C++ I was taught that you didn't need to synchronize calls to increment and decrement operations because they were always one assembly instruction and therefore it was impossible for the thread to be switched out mid-call. Was I taught correctly, and if so how come that doesn't hold true for C#?
This is incredibly wrong.
On some architectures, like x86, there are single increment and decrement instructions. Many architectures do not have them and need to do separate loads and stores. Even on x86, there is no guarantee the compiler will generate the memory version of these instructions - it'll likely load into a register first, especially if it needs to do several operations with the result.
Even if the compiler could be guaranteed to always generate the memory version of increment and decrement on x86, that still does not guarantee atomicity: two CPUs could modify the variable simultaneously and get inconsistent results. The instruction would need the lock prefix to force it to be an atomic operation, and compilers never emit the lock variant by default because it is less performant, precisely since it guarantees the action is atomic.
Consider the following x86 assembly instruction:
inc [i]
If i is initially 0 and the code is run on two threads on two cores, the value after both threads finish could legally be either 1 or 2, since there is no guarantee that one thread will complete its read before the other thread finishes its write, or that one thread's write will even be visible before the other thread's read.
Changing this to:
lock inc [i]
Will result in getting a final value of 2.
Win32's InterlockedIncrement and InterlockedDecrement and .NET's Interlocked.Increment and Interlocked.Decrement result in doing the equivalent (possibly the exact same machine code) of lock inc.
You were taught wrong.
There does exist hardware with atomic integer increment, so it's possible that what you were taught was right for the hardware and compiler you were using at the time. But in general in C++ you can't even guarantee that incrementing a non-volatile variable writes memory consecutively with reading it, let alone atomically with reading.
Incrementing the int is one instruction but what about loading the value in the register?
That's what i++ effectively does:
load i into a register
increment the register
store the register back into i
As you can see, there are 3 instructions (this may be different on other platforms), and at any of these stages the CPU can context-switch to a different thread, leaving your variable in an unknown state.
You should use Interlocked.Increment and Interlocked.Decrement to solve that.
No, you need to synchronize access. On Windows you can do this easily with InterlockedIncrement() and InterlockedDecrement(). I'm sure there are equivalents for other platforms.
EDIT: Just noticed the C# tag. Do what the other guy said. See also: I've heard i++ isn't thread safe, is ++i thread-safe?
Any kind of increment/decrement operation in a higher level language (and yes, even C is higher level compared to machine instructions) is not atomic by nature. However, each processor platform usually has primitives that support various atomic operations.
If your lecturer was referring to machine instructions, increment and decrement operations are likely to be atomic. Yet that is not always correct on today's ever-growing multi-core platforms, unless they guarantee coherency.
The higher level languages usually implement support for atomic transactions using low level atomic machine instructions. This is provided as the interlock mechanism by the higher level API.
x++ probably isn't atomic, but ++x might be (not sure offhand, but if you consider the difference between post- and pre-increment it should be clear why pre- is more amenable to atomicity).
A bigger point is, if these runs take a second to run each, the amount of time added by a lock is going to be noise compared to the runtime of the method itself. It's probably not worth monkeying with trying to remove the lock in this case - you've got a correct solution with locking, that will likely not have a visible difference in performance from the non-locking solution.
On a single-processor machine, if one isn't using virtual memory, x++ (rvalue ignored) is likely to translate into a single atomic INC instruction on x86 architectures (if x is long, the operation is only atomic when using a 32-bit compiler). Also, movsb/movsw/movsl are atomic ways of moving a byte/word/longword; a compiler isn't apt to use those as the normal way of assigning variables, but one could have an atomic-move utility function. It would be possible for a virtual memory manager to be written in such a way that those instructions would behave atomically if a page fault occurs on the write, but I don't think that's normally guaranteed.
On a multi-processor machine, all bets are off unless one uses explicit interlocked instructions (invokable via special library calls). The most versatile instruction which is commonly available is CompareExchange. That instruction will alter a memory location only if it contains an expected value; it will return the value it had when it decided whether or not to alter it. If one wishes to "xor" a variable with 1, one could do something like (in vb.net)
Dim OldValue As Integer
Do
    OldValue = Variable
Loop While Threading.Interlocked.CompareExchange(Variable, OldValue Xor 1, OldValue) <> OldValue
This approach allows one to perform any sort of atomic update to a variable whose new value should depend on the old value. For certain common operations like increment and decrement, there are faster alternatives, but the CompareExchange allows one to implement other useful patterns as well.
Important caveats: (1) Keep the loop as short as possible; the longer the loop, the more likely it is that another task will hit the variable during the loop, and the more time will be wasted each time that happens; (2) a specified number of updates, divided arbitrarily among threads, will always complete, since the only way a thread can be forced to re-execute the loop is if some other thread has made useful progress; if some threads can perform updates without making forward progress toward completion, however, the code may become live-locked.
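The same compare-exchange loop in C#, for comparison (the field name is illustrative, and Volatile.Read is used for the initial read as a conservative choice):
using System.Threading;

class XorFlipper
{
    static int variable;

    static void AtomicXorWithOne()
    {
        int oldValue;
        do
        {
            // Re-read the current value each attempt.
            oldValue = Volatile.Read(ref variable);
        }
        // Retry until no other thread changed the value between the read and the CAS.
        while (Interlocked.CompareExchange(ref variable, oldValue ^ 1, oldValue) != oldValue);
    }
}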

Spinlocks, How Useful Are They?

How often do you find yourself actually using spinlocks in your code? How common is it to come across a situation where using a busy loop actually outperforms the usage of locks?
Personally, when I write some sort of code that requires thread safety, I tend to benchmark it with different synchronization primitives, and as far as it goes, it seems like using locks gives better performance than using spinlocks. No matter for how little time I actually hold the lock, the amount of contention I receive when using spinlocks is far greater than the amount I get from using locks (of course, I run my tests on a multiprocessor machine).
I realize that it's more likely to come across a spinlock in "low-level" code, but I'm interested to know whether you find it useful in even a more high-level kind of programming?
It depends on what you're doing. In general application code, you'll want to avoid spinlocks.
In low-level stuff where you'll only hold the lock for a couple of instructions, and latency is important, a spinlock may be a better solution than a lock. But those cases are rare, especially in the kind of applications where C# is typically used.
In C#, "Spin locks" have been, in my experience, almost always worse than taking a lock - it's a rare occurrence where spin locks will outperform a lock.
However, that's not always the case. .NET 4 is adding a System.Threading.SpinLock structure. This provides benefits in situations where a lock is held for a very short time, and being grabbed repeatedly. From the MSDN docs on Data Structures for Parallel Programming:
In scenarios where the wait for the lock is expected to be short, SpinLock offers better performance than other forms of locking.
Spin locks can outperform other locking mechanisms in cases where you're doing something like locking through a tree - if you're only holding the lock on each node for a very, very short period of time, they can outperform a traditional lock. I ran into this in a rendering engine with a multithreaded scene update, at one point - spin locks profiled out to outperform locking with Monitor.Enter.
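A minimal sketch of the System.Threading.SpinLock pattern referred to above (the Node class is invented for this example). Note that SpinLock is a struct, so it must not be copied, and Exit belongs in a finally block:
using System.Threading;

class Node
{
    private SpinLock _lock = new SpinLock(enableThreadOwnerTracking: false);
    private int _value;

    public void Update(int delta)
    {
        bool lockTaken = false;
        try
        {
            _lock.Enter(ref lockTaken);
            _value += delta;          // very short critical section
        }
        finally
        {
            if (lockTaken) _lock.Exit();
        }
    }
}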
For my realtime work, particularly with device drivers, I've used them a fair bit. It turns out that (when last I timed this) waiting for a sync object like a semaphore tied to a hardware interrupt chews up at least 20 microseconds, no matter how long it actually takes for the interrupt to occur. A single check of a memory-mapped hardware register, followed by a check of RDTSC (to allow for a time-out so you don't lock up the machine), is in the high nanosecond range (basically down in the noise). For hardware-level handshaking that shouldn't take much time at all, it is really tough to beat a spinlock.
My 2c: If your updates satisfy some access criteria then they are good spinlock candidates:
fast, i.e. you will have time to acquire the spinlock, perform the updates and release the spinlock within a single thread quantum, so that you don't get pre-empted while holding the spinlock
localized, i.e. all the data you update is preferably in one single page that is already loaded; you do not want a TLB miss while you hold the spinlock, and you definitely don't want a page-fault swap read!
atomic, i.e. you do not need any other lock to perform the operation; never wait for other locks while holding a spinlock.
For anything that has any potential to yield, you should use a notified lock structure (events, mutexes, semaphores etc.).
One use case for spin locks is if you expect very low contention but are going to have a lot of them. If you don't need support for recursive locking, a spinlock can be implemented in a single byte, and if contention is very low then the CPU cycle waste is negligible.
For a practical use case, I often have arrays of thousands of elements, where updates to different elements of the array can safely happen in parallel. The odds of two threads trying to update the same element at the same time are very small (low contention) but I need one lock for every element (I'm going to have a lot of them). In these cases, I usually allocate an array of ubytes of the same size as the array I'm updating in parallel and implement spinlocks inline as (in the D programming language):
while(!atomicCasUbyte(spinLocks[i], 0, 1)) {}  // spin until the lock byte flips 0 -> 1
myArray[i] = newVal;                           // update the protected element
atomicSetUbyte(spinLocks[i], 0);               // release the element's lock
On the other hand, if I had to use regular locks, I would have to allocate an array of pointers to Objects, and then allocate a Mutex object for each element of this array. In scenarios such as the one described above, this is just plain wasteful.
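A hedged C# analogue of the same per-element scheme (class and member names are illustrative), using one int of lock state per slot and a CAS to take it:
using System.Threading;

class StripedArray
{
    private readonly double[] _values;
    private readonly int[] _locks;     // 0 = free, 1 = held

    public StripedArray(int length)
    {
        _values = new double[length];
        _locks = new int[length];
    }

    public void Set(int i, double newVal)
    {
        while (Interlocked.CompareExchange(ref _locks[i], 1, 0) != 0) { } // spin on this element only
        _values[i] = newVal;
        Volatile.Write(ref _locks[i], 0);                                 // release
    }
}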
If you have performance critical code and you have determined that it needs to be faster than it currently is and you have determined that the critical factor is the lock speed, then it'd be a good idea to try a spinlock. In other cases, why bother? Normal locks are easier to use correctly.
Please note the following points:
Most mutex implementations spin for a little while before the thread is actually descheduled. Because of this, it is hard to compare these mutexes with pure spinlocks.
Several threads spinning "as fast as possible" on the same spinlock will consume all the bandwidth and drastically decrease your program's efficiency. You need to add a tiny "sleeping" time by adding no-ops to your spinning loop (see the sketch below).
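One way to do that in C# is the SpinWait struct, which inserts pause instructions and eventually yields instead of hammering the cache line. A minimal sketch (the class, method and parameter are illustrative):
using System.Threading;

static class BackoffSpin
{
    public static void Acquire(ref int state)   // 0 = free, 1 = held
    {
        var spinner = new SpinWait();
        while (Interlocked.CompareExchange(ref state, 1, 0) != 0)
        {
            spinner.SpinOnce();   // pause, then progressively yield the timeslice
        }
    }
}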
You hardly ever need to use spinlocks in application code, if anything you should avoid them.
I can't think of any reason to use a spinlock in C# code running on a normal OS. Busy locks are mostly a waste at the application level - the spinning can cause you to burn an entire CPU timeslice, whereas a lock will immediately cause a context switch if needed.
High-performance code where the number of threads equals the number of processors/cores might benefit in some cases, but if you need performance optimization at that level you're likely making a next-gen 3D game, working on an embedded OS with poor synchronization primitives, creating an OS/driver, or in any case not using C#.
I used spin locks for the stop-the-world phase of the garbage collector in my HLVM project because they are easy and that is a toy VM. However, spin locks can be counter-productive in that context:
One of the perf bugs in the Glasgow Haskell Compiler's garbage collector is so annoying that it has a name, the "last core slowdown". This is a direct consequence of their inappropriate use of spinlocks in their GC and is exacerbated on Linux due to its scheduler but, in fact, the effect can be observed whenever other programs are competing for CPU time.
The effect is clear on the second graph here and can be seen affecting more than just the last core here, where the Haskell program sees performance degradation beyond only 5 cores.
Always keep these points in your mind while using spinlocks:
Fast user mode execution.
Synchronizes threads within a single process, or multiple processes if in shared memory.
Does not return until the object is owned.
Does not support recursion.
Consumes 100% of CPU while "waiting".
I have personally seen so many deadlocks just because someone thought it would be a good idea to use a spinlock.
Be very, very careful while using spinlocks
(I can't emphasize this enough).
