Static member as volatile [duplicate] - c#

Can anyone provide a good explanation of the volatile keyword in C#? Which problems does it solve, and which does it not? In which cases will it save me the use of locking?

I don't think there's a better person to answer this than Eric Lippert (emphasis in the original):
In C#, "volatile" means not only "make sure that the compiler and the
jitter do not perform any code reordering or register caching
optimizations on this variable". It also means "tell the processors to
do whatever it is they need to do to ensure that I am reading the
latest value, even if that means halting other processors and making
them synchronize main memory with their caches".
Actually, that last bit is a lie. The true semantics of volatile reads
and writes are considerably more complex than I've outlined here; in
fact they do not actually guarantee that every processor stops what it
is doing and updates caches to/from main memory. Rather, they provide
weaker guarantees about how memory accesses before and after reads and
writes may be observed to be ordered with respect to each other.
Certain operations such as creating a new thread, entering a lock, or
using one of the Interlocked family of methods introduce stronger
guarantees about observation of ordering. If you want more details,
read sections 3.10 and 10.5.3 of the C# 4.0 specification.
Frankly, I discourage you from ever making a volatile field. Volatile
fields are a sign that you are doing something downright crazy: you're
attempting to read and write the same value on two different threads
without putting a lock in place. Locks guarantee that memory read or
modified inside the lock is observed to be consistent, locks guarantee
that only one thread accesses a given chunk of memory at a time, and so
on. The number of situations in which a lock is too slow is very
small, and the probability that you are going to get the code wrong
because you don't understand the exact memory model is very large. I
don't attempt to write any low-lock code except for the most trivial
usages of Interlocked operations. I leave the usage of "volatile" to
real experts.
For further reading see:
Understand the Impact of Low-Lock Techniques in Multithreaded Apps
Sayonara volatile
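
To make the quoted advice concrete, here is a minimal sketch of the lock-based alternative (class and member names are illustrative, not from any answer here): every read and write of the shared flag happens under the same lock, so no volatile field is needed.

// A stop flag whose reads and writes are both guarded by the same lock object.
class StopFlag
{
    private readonly object _gate = new object();
    private bool _stopRequested; // only ever touched while holding _gate

    public void RequestStop()
    {
        lock (_gate) { _stopRequested = true; }
    }

    public bool IsStopRequested
    {
        get { lock (_gate) { return _stopRequested; } }
    }
}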

If you want to get slightly more technical about what the volatile keyword does, consider the following program (I'm using DevStudio 2005):
#include <iostream>
void main()
{
    int j = 0;
    for (int i = 0 ; i < 100 ; ++i)
    {
        j += i;
    }
    for (volatile int i = 0 ; i < 100 ; ++i)
    {
        j += i;
    }
    std::cout << j;
}
Using the standard optimised (release) compiler settings, the compiler creates the following assembler (IA32):
void main()
{
00401000 push ecx
int j = 0;
00401001 xor ecx,ecx
for (int i = 0 ; i < 100 ; ++i)
00401003 xor eax,eax
00401005 mov edx,1
0040100A lea ebx,[ebx]
{
j += i;
00401010 add ecx,eax
00401012 add eax,edx
00401014 cmp eax,64h
00401017 jl main+10h (401010h)
}
for (volatile int i = 0 ; i < 100 ; ++i)
00401019 mov dword ptr [esp],0
00401020 mov eax,dword ptr [esp]
00401023 cmp eax,64h
00401026 jge main+3Eh (40103Eh)
00401028 jmp main+30h (401030h)
0040102A lea ebx,[ebx]
{
j += i;
00401030 add ecx,dword ptr [esp]
00401033 add dword ptr [esp],edx
00401036 mov eax,dword ptr [esp]
00401039 cmp eax,64h
0040103C jl main+30h (401030h)
}
std::cout << j;
0040103E push ecx
0040103F mov ecx,dword ptr [__imp_std::cout (40203Ch)]
00401045 call dword ptr [__imp_std::basic_ostream<char,std::char_traits<char> >::operator<< (402038h)]
}
0040104B xor eax,eax
0040104D pop ecx
0040104E ret
Looking at the output, the compiler has decided to use the ecx register to store the value of the j variable. For the non-volatile loop (the first) the compiler has assigned i to the eax register. Fairly straightforward. There are a couple of interesting bits though - the lea ebx,[ebx] instruction is effectively a multibyte nop instruction so that the loop jumps to a 16 byte aligned memory address. The other is the use of edx to increment the loop counter instead of using an inc eax instruction. The add reg,reg instruction has lower latency on a few IA32 cores compared to the inc reg instruction, but never has higher latency.
Now for the loop with the volatile loop counter. The counter is stored at [esp] and the volatile keyword tells the compiler the value should always be read from/written to memory and never assigned to a register. The compiler even goes so far as to not do a load/increment/store as three distinct steps (load eax, inc eax, save eax) when updating the counter value, instead the memory is directly modified in a single instruction (an add mem,reg). The way the code has been created ensures the value of the loop counter is always up-to-date within the context of a single CPU core. No operation on the data can result in corruption or data loss (hence not using the load/inc/store since the value can change during the inc thus being lost on the store). Since interrupts can only be serviced once the current instruction has completed, the data can never be corrupted, even with unaligned memory.
Once you introduce a second CPU to the system, the volatile keyword won't guard against the data being updated by another CPU at the same time. In the above example, you would need the data to be unaligned to get a potential corruption. The volatile keyword won't prevent potential corruption if the data cannot be handled atomically, for example, if the loop counter was of type long long (64 bits) then it would require two 32 bit operations to update the value, in the middle of which an interrupt can occur and change the data.
So, the volatile keyword is only good for aligned data which is less than or equal to the size of the native registers such that operations are always atomic.
The volatile keyword was conceived to be used with IO operations where the IO would be constantly changing but had a constant address, such as a memory mapped UART device, and the compiler shouldn't keep reusing the first value read from the address.
If you're handling large data or have multiple CPUs then you'll need a higher level (OS) locking system to handle the data access properly.
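
The same tearing hazard exists in C#: reads and writes are only guaranteed atomic for types no wider than 32 bits (plus references), and C# won't even let you mark a long or double field as volatile. A hedged sketch of the usual workaround, using the Interlocked class (the wrapper class name is illustrative):

using System.Threading;

class Counter64
{
    private long _value; // 64 bits wide: plain reads/writes may tear on 32-bit platforms

    // Interlocked.Read performs an atomic 64-bit read even on 32-bit CPUs.
    public long Read() => Interlocked.Read(ref _value);

    // Interlocked.Increment is an atomic read-modify-write.
    public long Increment() => Interlocked.Increment(ref _value);
}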

If you are using .NET 1.1, the volatile keyword is needed when doing double-checked locking. Why? Because prior to .NET 2.0, the following scenario could cause a second thread to access a non-null yet not fully constructed object:
Thread 1 asks if a variable is null.
//if(this.foo == null)
Thread 1 determines the variable is null, so enters a lock.
//lock(this.bar)
Thread 1 asks AGAIN if the variable is null.
//if(this.foo == null)
Thread 1 still determines the variable is null, so it calls a constructor and assigns the value to the variable.
//this.foo = new Foo();
Prior to .NET 2.0, this.foo could be assigned the new instance of Foo, before the constructor was finished running. In this case, a second thread could come in (during thread 1's call to Foo's constructor) and experience the following:
Thread 2 asks if variable is null.
//if(this.foo == null)
Thread 2 determines the variable is NOT null, so tries to use it.
//this.foo.MakeFoo()
Prior to .NET 2.0, you could declare this.foo as volatile to get around this problem. Since .NET 2.0, you no longer need the volatile keyword to accomplish double-checked locking.
Wikipedia actually has a good article on Double Checked Locking, and briefly touches on this topic:
http://en.wikipedia.org/wiki/Double-checked_locking
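
For reference, the classic C# double-checked locking shape this answer walks through looks roughly like the following; Foo, the field, and the lock object are illustrative stand-ins for the answer's this.foo and this.bar:

public class Foo
{
    public void MakeFoo() { }
}

public class FooHolder
{
    private static volatile Foo _foo;                // volatile: required for this pattern on .NET 1.1
    private static readonly object _bar = new object();

    public static Foo GetFoo()
    {
        if (_foo == null)                            // first check, without the lock
        {
            lock (_bar)
            {
                if (_foo == null)                    // second check, under the lock
                {
                    _foo = new Foo();
                }
            }
        }
        return _foo;
    }
}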

Sometimes, the compiler will optimize a field and use a register to store it. If thread 1 does a write to the field and another thread accesses it, since the update was stored in a register (and not memory), the 2nd thread would get stale data.
You can think of the volatile keyword as saying to the compiler "I want you to store this value in memory". This guarantees that the 2nd thread retrieves the latest value.

From MSDN:
The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access. Using the volatile modifier ensures that one thread retrieves the most up-to-date value written by another thread.

The CLR likes to optimize instructions, so when you access a field in code it might not always access the current value of the field (it might be from the stack, etc). Marking a field as volatile ensures that the current value of the field is accessed by the instruction. This is useful when the value can be modified (in a non-locking scenario) by a concurrent thread in your program or some other code running in the operating system.
You obviously lose some optimization, but it does keep the code more simple.

Simply looking at the official page for the volatile keyword, you can see an example of typical usage.
public class Worker
{
    public void DoWork()
    {
        bool work = false;
        while (!_shouldStop)
        {
            work = !work; // simulate some work
        }
        Console.WriteLine("Worker thread: terminating gracefully.");
    }

    public void RequestStop()
    {
        _shouldStop = true;
    }

    private volatile bool _shouldStop;
}
With the volatile modifier on the declaration of _shouldStop, you'll always get the same result. Without that modifier on the _shouldStop member, however, the behavior is unpredictable: the worker may never observe the stop request.
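The docs pair the class with a small driver along these lines (a sketch reconstructed from the shape of the official example, so treat the details as illustrative):

using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        var worker = new Worker();
        var thread = new Thread(worker.DoWork);
        thread.Start();

        Thread.Sleep(500);     // let the worker loop for a while
        worker.RequestStop();  // volatile write, observed by the worker's loop
        thread.Join();         // wait for "terminating gracefully"
    }
}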
So this is definitely not something downright crazy.
Cache coherence protocols exist precisely to keep CPU caches consistent. And if the CPU employs a strong memory model (as x86 does):
As a result, reads and writes of volatile fields require no special instructions on the x86: Ordinary reads and writes (for example, using the MOV instruction) are sufficient.
Example from the C# 5.0 specification (section 10.5.3):
using System;
using System.Threading;

class Test
{
    public static int result;
    public static volatile bool finished;

    static void Thread2() {
        result = 143;
        finished = true;
    }

    static void Main() {
        finished = false;
        new Thread(new ThreadStart(Thread2)).Start();
        for (;;) {
            if (finished) {
                Console.WriteLine("result = {0}", result);
                return;
            }
        }
    }
}
produces the output: result = 143
If the field finished had not been declared volatile, then it would be permissible for the store to result to be visible to the main thread after the store to finished, and hence for the main thread to read the value 0 from the field result.
Volatile behavior is platform dependent, so consider each case separately to be sure volatile actually satisfies your needs.
Even volatile cannot prevent all kinds of reordering (C# - The C# Memory Model in Theory and Practice, Part 2):
Even though the write to A is volatile and the read from A_Won is also volatile, the fences are both one-directional, and in fact allow this reordering.
So I believe that if you want to know when to use volatile (vs. lock vs. Interlocked), you should get familiar with memory fences (full and half) and the needs of your synchronization. Then you can answer the question for yourself.
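To put the half-fence vocabulary into code, here is a sketch (illustrative names): Volatile.Write acts as a release fence, Volatile.Read as an acquire fence, and Thread.MemoryBarrier() would be the full-fence counterpart.

using System;
using System.Threading;

class FencesDemo
{
    private int _data;
    private bool _ready;

    public void Producer()
    {
        _data = 42;
        // Release semantics: the write to _data cannot move below this write.
        Volatile.Write(ref _ready, true);
    }

    public void Consumer()
    {
        // Acquire semantics: the read of _data cannot move above this read.
        if (Volatile.Read(ref _ready))
        {
            Console.WriteLine(_data); // prints 42 whenever _ready was observed as true
        }
    }
}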

I found this article by Joydip Kanjilal very helpful!
When you mark an object or a variable as volatile, it becomes a candidate for volatile reads and writes. It should be noted that in C# all memory writes are volatile irrespective of whether you are writing data to a volatile or a non-volatile object. However, the ambiguity happens when you are reading data. When you are reading data that is non-volatile, the executing thread may or may not always get the latest value. If the object is volatile, the thread always gets the most up-to-date value
I'll just leave it here for reference

The compiler sometimes changes the order of statements in code to optimize it. Normally this is not a problem in a single-threaded environment, but it might be an issue in a multi-threaded environment. See the following example:
private static int _flag = 0;
private static int _value = 0;

var t1 = Task.Run(() =>
{
    _value = 10; /* compiler could switch these lines */
    _flag = 5;
});

var t2 = Task.Run(() =>
{
    if (_flag == 5)
    {
        Console.WriteLine("Value: {0}", _value);
    }
});
If you run t1 and t2, you would expect either no output or "Value: 10" as the result. However, the compiler could swap the two lines inside the t1 delegate. If t2 then executes, it could see _flag with the value 5 while _value is still 0, breaking the expected logic.
To fix this you can apply the volatile keyword to the field. This prevents the compiler from caching or reordering accesses to that field, so you can enforce the correct order in your code.
private static volatile int _flag = 0;
You should use volatile only if you really need it, because it disables certain compiler optimizations and can hurt performance. It's also not supported by all .NET languages (Visual Basic doesn't support it), so it hinders language interoperability.

So to sum all this up, the correct answer to the question is:
If your code is running on the 2.0 runtime or later, the volatile keyword is almost never needed and does more harm than good if used unnecessarily. That is, don't ever use it. BUT in earlier versions of the runtime, it IS needed for proper double-checked locking on static fields; specifically, static fields whose class has static initialization code.

Use volatile when multiple threads can access a variable and each read needs to observe the latest update to that variable.

Related

Why make a bool volatile when multithreading in C#?

I understand the volatile keyword in C++ fairly well. But in C#, it appears to take a different meaning, more so related to multi-threading. I thought bool operations were atomic, and I thought that if the operation were atomic, you wouldn't have threading concerns. What am I missing?
https://msdn.microsoft.com/en-us/library/x13ttww7.aspx
I thought bool operations were atomic
They are indeed atomic.
and I thought that if the operation were atomic, you wouldn't have threading concerns.
That is where you have an incomplete picture. Imagine you have two threads running on separate cores, each with its own cache layers. Thread #1 has foo in its cache, and Thread #2 updates the value of foo. Thread #1 won't see the change unless foo is marked as volatile, a lock is acquired, the Interlocked class is used, or Thread.MemoryBarrier() is called explicitly, any of which would cause the cached value to be invalidated, thus guaranteeing that you read the most up-to-date value:
Using the volatile modifier ensures that one thread retrieves the most up-to-date value written by another thread.
Eric Lippert has a great post about volatility where he explains:
The true semantics of volatile reads
and writes are considerably more complex than I've outlined here; in
fact they do not actually guarantee that every processor stops what it
is doing and updates caches to/from main memory. Rather, they provide
weaker guarantees about how memory accesses before and after reads and
writes may be observed to be ordered with respect to each other.
Edit:
As per xanatos's comment, this doesn't mean that volatile guarantees an immediate read; it guarantees a read of the most up-to-date value.
The volatile keyword in C# is all about read/write reordering, so it is something quite esoteric.
http://www.albahari.com/threading/part4.aspx#_Memory_Barriers_and_Volatility
(that I consider to be one of the "bibles" about threading) writes:
The volatile keyword instructs the compiler to generate an acquire-fence on every read from that field, and a release-fence on every write to that field. An acquire-fence prevents other reads/writes from being moved before the fence; a release-fence prevents other reads/writes from being moved after the fence. These “half-fences” are faster than full fences because they give the runtime and hardware more scope for optimization.
It is something quite unreadable :-)
Now... What it doesn't mean:
It doesn't mean that a value will be read now/will be written now.
It simply means that if you read something from a volatile variable, everything that is read/written after this "special" read won't be moved before this "special" read. So it creates a barrier: an acquire fence. Symmetrically, a write to a volatile variable guarantees that all the writes you have done to any other variable (volatile or not) before that point won't be moved after this "special" write.
A volatile write is probably the more important half. It is in part guaranteed by Intel CPUs, and it is something that wasn't guaranteed by the first version of Java: no write reordering. The problem was:
object myrefthatissharedwithotherthreads = new MyClass(5);
where
class MyClass
{
    public int Value;

    public MyClass(int value)
    {
        Value = value;
    }
}
Now... that expression can be imagined to be:
var temp = new MyClass();
temp.Value = 5;
myrefthatissharedwithotherthreads = temp;
where temp is something generated by the compiler that you can't see.
If the writes can be reordered, you could have:
var temp = new MyClass();
myrefthatissharedwithotherthreads = temp;
temp.Value = 5;
and another thread could see a partially initialized MyClass, because the value of myrefthatissharedwithotherthreads is readable before the class MyClass has finished initializing.
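
In C#, one way to rule out exactly this reordering is to make the shared reference volatile, so the store of the reference is a release and the read of it is an acquire. A sketch reusing MyClass from above (the wrapper class is illustrative):

class MyClassPublisher
{
    // Volatile write = release: the constructor's store of Value = 5
    // cannot be reordered after the store of the reference itself.
    private volatile MyClass _shared;

    public void Publish()
    {
        _shared = new MyClass(5);
    }

    public void Read()
    {
        MyClass local = _shared;   // volatile read = acquire
        if (local != null)
        {
            // Guaranteed to print 5, never a half-initialized object.
            System.Console.WriteLine(local.Value);
        }
    }
}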

Is volatile still needed inside lock statements?

I have read in different places people saying one should always use lock instead of volatile. I found that there are lots of confusing statements about multithreading out there, and even experts have different opinions on some things.
After lots of research I found that the lock statement will, at the very least, also insert memory barriers.
For example:
public bool stopFlag;

void Foo()
{
    lock (myLock)
    {
        while (!stopFlag)
        {
            // do something
        }
    }
}
But if I am not totally wrong, the JIT compiler is free to never actually read the variable inside the loop; instead it may only read a cached version of the variable from a register. AFAIK memory barriers won't help if the JIT has assigned the variable to a register; they just ensure that if we read from memory, the value is current.
Unless there is some compiler magic saying something like "if a code block contains a MemoryBarrier, register assignment of all variables after the MemoryBarrier is prevented".
Unless declared volatile or read with Thread.VolatileRead(), if stopFlag is set from another thread, the loop may still run forever. Is this correct?
If yes, wouldn't this apply to ALL variables shared between threads?
Whenever I see a question like this, my knee-jerk reaction is "assume nothing!" The .NET memory model is quite weak, and the C# memory model is especially noteworthy for using language that can only apply to a processor with a weak memory model that isn't even supported anymore. Nothing in there tells you anything about what's going to happen in this code; you can reason about locks and memory barriers until you're blue in the face but you won't get anywhere with it.
The x64 jitter is quite clean and rarely throws a surprise. But its days are numbered: it is going to be replaced by RyuJIT in VS2015, a rewrite that started from the x86 jitter codebase. Which is a concern, since the x86 jitter can throw you for a loop. Pun intended.
Best thing to do is to just try it and see what happens. Rewriting your code a little bit and making that loop as tight as possible so the jitter optimizer can do anything it wants:
class Test {
    public bool myBool;
    private static object myLock = new object();

    public int Foo() {
        lock (myLock) {
            int cnt = 0;
            while (!myBool) cnt++;
            return cnt;
        }
    }
}
And testing it like this:
static void Main(string[] args) {
    var obj = new Test();
    new Thread(() => {
        Thread.Sleep(1000);
        obj.myBool = true;
    }).Start();
    Console.WriteLine(obj.Foo());
}
Switch to the Release build. Project + Properties, Build tab, tick the "Prefer 32-bit" option. Tools + Options, Debugging, General, untick the "Suppress JIT optimization" option. First run the Debug build. Works fine, program terminates after a second. Now switch to the Release build, run and observe that it deadlocks, the loop never completes. Use Debug + Break All to see that it hangs in the loop.
To see why, look at the generated machine code with Debug + Windows + Disassembly. Focusing on the loop only:
int cnt = 0;
013E26DD xor edx,edx ; cnt = 0
while (!myBool) {
013E26DF movzx eax,byte ptr [esi+4] ; load myBool
013E26E3 test eax,eax ; myBool == true?
013E26E5 jne 013E26EC ; yes => bail out
013E26E7 inc edx ; cnt++
013E26E8 test eax,eax ; myBool == true?
013E26EA jne 013E26E7 ; yes => loop
}
return cnt;
The instruction at address 013E26E8 tells the tale. Note how the myBool variable is stored in the eax register, cnt in the edx register. A standard duty of the jitter optimizer, using the processor registers and avoiding memory loads and stores makes the code much faster. And note that when it tests the value, it still uses the register and does not reload from memory. This loop can therefore never end and it will always hang your program.
Code is pretty fake of course; nobody will ever write this. In practice this tends to work by accident: you'll have more code inside the while() loop, too much to allow the jitter to optimize the variable away entirely. But there are no hard rules that will tell you when this happens. Sometimes it does pull it off; assume nothing. Proper synchronization should never be skipped. You really are only safe with an extra lock for myBool, an AutoResetEvent/ManualResetEvent, or Interlocked.CompareExchange(). And if you want to cut such a volatile corner, then you must check the generated machine code.
As noted in the comments, you could try Thread.VolatileRead() instead (you need to use a byte instead of a bool). It still hangs; it is not a synchronization primitive.
the JIT compiler is free to never actually read the variable inside the loop but instead it may only read a cached version of the variable from a register.
Well, it'll read the variable once, in the first iteration of the loop, but other than that, yes, it will continue to read a cached value, unless there is a memory barrier. Any time the code crosses a memory barrier it cannot use the cached value.
Using Thread.VolatileRead() adds the appropriate memory barriers, as does marking the field as volatile. There are plenty of other things that one could do that also implicitly add memory barriers; one of them is entering or leaving a lock statement.
Since your loop is staying within the body of a single lock and not entering or leaving it, it's free to continue using the cached value.
Of course, the solution here isn't to add in a memory barrier. If you want to wait for another thread to notify you of when you should continue on, use an AutoResetEvent (or another similar synchronization tool specifically designed to allow threads to communicate).
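A minimal sketch of that suggestion (illustrative names): the waiting thread blocks on the event instead of spinning on a shared flag, and the other thread signals it when it's time to continue.

using System;
using System.Threading;

class Program
{
    private static readonly AutoResetEvent _go = new AutoResetEvent(false);

    static void Main()
    {
        var waiter = new Thread(() =>
        {
            _go.WaitOne();      // blocks until signaled; no spinning, no stale reads
            Console.WriteLine("Signaled, continuing.");
        });
        waiter.Start();

        Thread.Sleep(1000);
        _go.Set();              // wakes exactly one waiting thread
        waiter.Join();
    }
}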
How about this:
public class Singleton<T> where T : class, new()
{
    private static T _instance;
    private static object _syncRoot = new Object();

    public static T Instance
    {
        get
        {
            var instance = _instance;
            if (instance == null)
            {
                lock (_syncRoot)
                {
                    instance = Volatile.Read(ref _instance);
                    if (instance == null)
                    {
                        instance = new T();
                    }
                    Volatile.Write(ref _instance, instance);
                }
            }
            return instance;
        }
    }
}
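
For comparison, the idiomatic modern alternative sidesteps the volatile reasoning entirely: Lazy<T> encapsulates thread-safe, run-once initialization (a sketch, not a drop-in review of the code above):

using System;

public class LazySingleton<T> where T : class, new()
{
    // The default LazyThreadSafetyMode.ExecutionAndPublication guarantees
    // the factory runs at most once and the result is safely published.
    private static readonly Lazy<T> _lazy = new Lazy<T>(() => new T());

    public static T Instance => _lazy.Value;
}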

Volatile fields: How can I actually get the latest written value to a field?

Considering the following example:
private int sharedState = 0;

private void FirstThread() {
    Volatile.Write(ref sharedState, 1);
}

private void SecondThread() {
    int sharedStateSnapshot = Volatile.Read(ref sharedState);
    Console.WriteLine(sharedStateSnapshot);
}
Until recently, I was under the impression that, as long as FirstThread() really did execute before SecondThread(), this program could not output anything but 1.
However, my understanding now is that:
Volatile.Write() emits a release fence. This means no preceding load or store (in program order) may happen after the assignment of 1 to sharedState.
Volatile.Read() emits an acquire fence. This means no subsequent load or store (in program order) may happen before the copying of sharedState to sharedStateSnapshot.
Or, to put it another way:
When sharedState is actually released to all processor cores, everything preceding that write will also be released, and,
When the value in the address sharedStateSnapshot is acquired; sharedState must have been already acquired.
If my understanding is therefore correct, then there is nothing to prevent the acquisition of sharedState being 'stale', if the write in FirstThread() has not already been released.
If this is true, how can we actually ensure (assuming the weakest processor memory model, such as ARM or Alpha), that the program will always print 1? (Or have I made an error in my mental model somewhere?)
Your understanding is correct, and it is true that you cannot ensure that the program will always print 1 using these techniques. To ensure your program will print 1, assuming thread 2 runs after thread one, you need two fences on each thread.
The easiest way to achieve that is using the lock keyword:
private int sharedState = 0;
private readonly object locker = new object();

private void FirstThread()
{
    lock (locker)
    {
        sharedState = 1;
    }
}

private void SecondThread()
{
    int sharedStateSnapshot;
    lock (locker)
    {
        sharedStateSnapshot = sharedState;
    }
    Console.WriteLine(sharedStateSnapshot);
}
I'd like to quote Eric Lippert:
Frankly, I discourage you from ever making a volatile field. Volatile fields are a sign that you are doing something downright crazy: you're attempting to read and write the same value on two different threads without putting a lock in place.
The same applies to calling Volatile.Read and Volatile.Write. In fact, they are even worse than volatile fields, since they require you to do manually what the volatile modifier does automatically.
You're right, there's no guarantee that release stores will be immediately visible to all processors. Volatile.Read and Volatile.Write give you acquire/release semantics, but no immediacy guarantees.
The volatile modifier seems to do this, though. The compiler will emit the volatile. prefix (OpCodes.Volatile) in the IL, and the jitter will avoid caching the variable in a register (see Hans Passant's answer).
But why do you need it to be immediate anyway? What if your SecondThread happens to run a couple of milliseconds sooner, before the values are actually written? Seeing as the scheduling is non-deterministic, the correctness of your program shouldn't depend on this "immediacy" anyway.
Until recently, I was under the impression that, as long as
FirstThread() really did execute before SecondThread(), this program
could not output anything but 1.
As you go on to explain yourself, this impression is wrong. Volatile.Read simply issues a read operation on its target followed by a memory barrier; the memory barrier prevents operation reordering on the processor executing the current thread but this does not help here because
There are no operations to reorder (just the single read or write in each thread).
The race condition across your threads means that even if the no-reorder guarantee applied across processors, it would simply mean that the order of operations which you cannot predict anyway would be preserved.
If my understanding is therefore correct, then there is nothing to
prevent the acquisition of sharedState being 'stale', if the write in
FirstThread() has not already been released.
That is correct. In essence you are using a tool designed to help with weak memory models against a possible problem caused by a race condition. The tool won't help you because that's not what it does.
If this is true, how can we actually ensure (assuming the weakest
processor memory model, such as ARM or Alpha), that the program will
always print 1? (Or have I made an error in my mental model
somewhere?)
To stress once again: the memory model is not the problem here. To ensure that your program will always print 1 you need to do two things:
Provide explicit thread synchronization that guarantees the write will happen before the read (in the simplest case, SecondThread can use a spin lock on a flag which FirstThread uses to signal it's done).
Ensure that SecondThread will not read a stale value. You can do this trivially by marking sharedState as volatile -- while this keyword has deservedly gotten much flak, it was designed explicitly for such use cases.
So in the simplest case you could for example have:
private volatile int sharedState = 0;
private volatile bool spinLock = false;

private void FirstThread()
{
    sharedState = 1;
    // ensure lock is released after the shared state write!
    Volatile.Write(ref spinLock, true);
}

private void SecondThread()
{
    SpinWait.SpinUntil(() => spinLock);
    Console.WriteLine(sharedState);
}
Assuming no other writes to the two fields, this program is guaranteed to output nothing other than 1.

Does Interlocked.CompareExchange use a memory barrier?

I'm reading Joe Duffy's post about Volatile reads and writes, and timeliness, and I'm trying to understand something about the last code sample in the post:
while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
m_state = 0;
while (Interlocked.CompareExchange(ref m_state, 1, 0) != 0) ;
m_state = 0;
…
When the second CMPXCHG operation is executed, does it use a memory barrier to ensure that the value of m_state is indeed the latest value written to it? Or will it just use some value that is already stored in the processor's cache? (assuming m_state isn't declared as volatile).
If I understand correctly, if CMPXCHG won't use a memory barrier, then the whole lock acquisition procedure won't be fair, since it's highly likely that the thread that was the first to acquire the lock will be the one that acquires all of the following locks. Did I understand correctly, or am I missing something here?
Edit: The main question is actually whether calling to CompareExchange will cause a memory barrier before attempting to read m_state's value. So whether assigning 0 will be visible to all of the threads when they try to call CompareExchange again.
Any x86 instruction that has a lock prefix has a full memory barrier. As shown in Abel's answer, the Interlocked* APIs, including CompareExchange, use lock-prefixed instructions such as lock cmpxchg. So, they imply a memory fence.
Yes, Interlocked.CompareExchange uses a memory barrier.
Why? Because that is how x86 processors behave. From Intel's Volume 3A: System Programming Guide Part 1, Section 7.1.2.2:
For the P6 family processors, locked operations serialize all outstanding load and store operations (that is, wait for them to complete). This rule is also true for the Pentium 4 and Intel Xeon processors, with one exception. Load operations that reference weakly ordered memory types (such as the WC memory type) may not be serialized.
volatile has nothing to do with this discussion. This is about atomic operations; to support atomic operations in the CPU, x86 guarantees that all previous loads and stores are completed.
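To see the structure behind Duffy's snippet in one place, here is a naive spin lock built on Interlocked.CompareExchange (a sketch with illustrative names, not Duffy's actual code):

using System.Threading;

class NaiveSpinLock
{
    private int _state; // 0 = free, 1 = held

    public void Enter()
    {
        // Atomically: if _state == 0, set it to 1; otherwise retry.
        // The lock-prefixed CMPXCHG this compiles to acts as a full fence.
        while (Interlocked.CompareExchange(ref _state, 1, 0) != 0)
        {
            // spin
        }
    }

    public void Exit()
    {
        // Duffy's snippet uses a plain store here; Interlocked.Exchange keeps
        // the "use Interlocked for all access" rule simple.
        Interlocked.Exchange(ref _state, 0);
    }
}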
ref doesn't respect the usual volatile rules, especially in things like:
volatile bool myField;
...
RunMethod(ref myField);
...
void RunMethod(ref bool isDone) {
    while (!isDone) { } // silly example
}
Here, RunMethod is not guaranteed to spot external changes to isDone even though the underlying field (myField) is volatile; RunMethod doesn't know about it, so doesn't have the right code.
However! This should be a non-issue:
if you are using Interlocked, then use Interlocked for all access to the field
if you are using lock, then use lock for all access to the field
Follow those rules and it should work OK.
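A hedged sketch of that rule in practice (illustrative names): both the write and the read of the shared flag go through Interlocked, with CompareExchange(ref x, 0, 0) serving as the common idiom for an interlocked read.

using System.Threading;

class Flag
{
    private int _done; // 0 = false, 1 = true; int because Interlocked has no bool overloads

    public void Set() => Interlocked.Exchange(ref _done, 1);

    // CompareExchange with identical value and comparand returns the
    // current value without changing it: an interlocked read.
    public bool IsSet => Interlocked.CompareExchange(ref _done, 0, 0) == 1;
}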
Re the edit; yes, that behaviour is a critical part of Interlocked. To be honest, I don't know how it is implemented (memory barrier, etc - note they are "InternalCall" methods, so I can't check ;-p) - but yes: updates from one thread will be immediately visible to all others as long as they use the Interlocked methods (hence my point above).
There seems to be some comparison with the Win32 API functions by the same name, but this thread is all about the C# Interlocked class. From its very description, it is guaranteed that its operations are atomic. I'm not sure how that translates to "full memory barriers" as mentioned in other answers here, but judge for yourself.
On uniprocessor systems, nothing special happens, there's just a single instruction:
FASTCALL_FUNC CompareExchangeUP,12
_ASSERT_ALIGNED_4_X86 ecx
mov eax, [esp+4] ; Comparand
cmpxchg [ecx], edx
retn 4 ; result in EAX
FASTCALL_ENDFUNC CompareExchangeUP
But on multiprocessor systems, a hardware lock is used to prevent other cores from accessing the data at the same time:
FASTCALL_FUNC CompareExchangeMP,12
_ASSERT_ALIGNED_4_X86 ecx
mov eax, [esp+4] ; Comparand
lock cmpxchg [ecx], edx
retn 4 ; result in EAX
FASTCALL_ENDFUNC CompareExchangeMP
An interesting read, with some wrong conclusions here and there but all in all excellent on the subject, is this blog post on CompareExchange.
Update for ARM
As so often, the answer is "it depends". It appears that prior to .NET Core 2.1, ARM had a half-barrier; for the 2.1 release, this behavior was changed to a full barrier for the Interlocked operations.
The current code can be found here and actual implementation of CompareExchange here. Discussions on the generated ARM assembly, as well as examples on generated code can be seen in the aforementioned PR.
MSDN says about the Win32 API functions:
"Most of the interlocked functions provide full memory barriers on all Windows platforms"
(the exceptions are Interlocked functions with explicit Acquire / Release semantics)
From that I would conclude that the C# runtime's Interlocked makes the same guarantees, as they are documented with otherwise identical behavior (and they resolve to intrinsic CPU instructions on the platforms I know). Unfortunately, with MSDN's tendency to put up samples instead of documentation, it isn't spelled out explicitly.
The interlocked functions are guaranteed to stall the bus and the CPU while they resolve the operands. The immediate consequence is that no thread switch, on your CPU or another one, will interrupt the interlocked function in the middle of its execution.
Since you're passing a reference to the C# function, the underlying assembler code will work with the address of the actual integer, so the variable access won't be optimized away. It will work exactly as expected.
edit: Here's a link that explains the behaviour of the asm instruction better: http://faydoc.tripod.com/cpu/cmpxchg.htm
As you can see, the bus is stalled by forcing a write cycle, so any other "threads" (read: other CPU cores) that would try to use the bus at the same time would be put in a waiting queue.
According to ECMA-335 (section I.12.6.5):
5. Explicit atomic operations. The class library provides a variety of atomic operations in the System.Threading.Interlocked class. These operations (e.g., Increment, Decrement, Exchange, and CompareExchange) perform implicit acquire/release operations.
So, these operations follow the principle of least astonishment.
