There is a Volatile.Read method for all primitives and reference types, so why is there no Volatile.Read for structures? The same applies to Volatile.Write. Likewise, the old Thread.VolatileRead methods didn't have an overload for structs either.
What is the reason behind this? I can declare volatile structs in a class, so why can't I do volatile reads with these methods?
Volatile operations only give any guarantee if they are also atomic, which is only the case for the simplest structs (e.g. one with a single field of a primitive or reference type, or any struct that fits in 64 bits/8 bytes).
For instance, what would you expect such Volatile methods to do with a 768-bit/96-byte struct? Anything bigger than the largest supported atomic operation would actually result in multiple volatile writes, each of which would become visible on its own, with no guarantee about the struct as a whole.
In Microsoft's implementations of .NET, the long and double Volatile methods are atomic even on 32-bit architectures, at the cost of using interlocked operations on those architectures.
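A common workaround, sketched below with invented names (Measurement, MeasurementHolder), is to wrap the struct in an immutable reference-type holder and publish it with Volatile.Write on the reference; reference reads and writes are atomic in .NET, so readers never observe a torn struct:
using System.Threading;

// Hedged sketch: a struct too large for a single atomic memory access.
public struct Measurement
{
    public long Timestamp;
    public double ValueA;
    public double ValueB;   // 24 bytes total - direct reads/writes of the struct can tear
}

public sealed class MeasurementHolder
{
    // Wrap the struct in an immutable reference-type box and swap the reference.
    private sealed class Box
    {
        public readonly Measurement Value;
        public Box(Measurement value) { Value = value; }
    }

    private Box _current = new Box(default(Measurement));

    public void Publish(Measurement m)
    {
        // Release semantics: the fully constructed Box (and the struct inside it)
        // becomes visible to other threads via one volatile reference write.
        Volatile.Write(ref _current, new Box(m));
    }

    public Measurement ReadLatest()
    {
        // Acquire semantics: we always observe a complete, never-torn struct.
        return Volatile.Read(ref _current).Value;
    }
}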
I am confused with the documentation for .NET/C# regarding the volatile keyword vs System.Threading.Thread.VolatileRead/VolatileWrite and System.Threading.Volatile.Read/Write. I am trying to understand what exactly is guaranteed for a volatile field and what exactly these methods are doing.
I thought volatile provides the release/acquire semantics, but the documentation for Thread.VolatileRead/VolatileWrite makes me wonder if my understanding is actually correct.
This is the language reference for volatile: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/volatile
Adding the volatile modifier ensures that all threads will observe
volatile writes performed by any other thread in the order in which
they were performed. There is no guarantee of a single total ordering
of volatile writes as seen from all threads of execution.
So far this makes sense. This is the language specification: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/classes#volatile-fields
For volatile fields, such reordering optimizations are restricted:
A read of a volatile field is called a volatile read. A volatile read has "acquire semantics"; that is, it is guaranteed to occur prior
to any references to memory that occur after it in the instruction
sequence.
A write of a volatile field is called a volatile write. A volatile write has "release semantics"; that is, it is guaranteed to happen
after any memory references prior to the write instruction in the
instruction sequence.
These restrictions ensure that all threads will observe volatile
writes performed by any other thread in the order in which they were
performed. A conforming implementation is not required to provide a
single total ordering of volatile writes as seen from all threads of
execution.
Again, this looks like volatile provides the release/acquire semantics.
But then I look at the documentation for Thread.VolatileRead:
https://learn.microsoft.com/en-us/dotnet/api/system.threading.thread.volatileread?view=netframework-4.8#System_Threading_Thread_VolatileRead_System_Int64__
Reads the value of a field. The value is the latest written by any
processor in a computer, regardless of the number of processors or the
state of processor cache. ... On a multiprocessor system, VolatileRead
obtains the very latest value written to a memory location by any
processor. This might require flushing processor caches.
For Thread.VolatileWrite:
Writes a value to a field immediately, so that the value is visible to
all processors in the computer.
This looks stricter than an individual store/load fence (release/acquire), specifically the part about flushing processor caches, i.e. a stronger guarantee than just volatile. But then the same document says:
In C#, using the volatile modifier on a field guarantees that all
access to that field uses VolatileRead or VolatileWrite
So my question is - what is guaranteed for a volatile field with respect to the store buffer - just release/acquire, or the stronger Thread.VolatileRead/Write guarantees? Or is my understanding about VolatileRead/Write wrong and these are the same as volatile?
There is no difference between System.Threading.Thread.VolatileRead/VolatileWrite and System.Threading.Volatile.Read/Write - these are identical helper methods that mimic a full memory barrier before the read or write (not necessarily the MFENCE instruction). Here is the internal implementation:
public static void VolatileWrite(ref sbyte address, sbyte value)
{
    Thread.MemoryBarrier();
    address = value;
}
A volatile variable uses (or used) acquire/release semantics on the now-obsolete IA-64 (Itanium) processor architecture, which has a weaker memory model.
On the most popular architecture, x86, the volatile modifier prevents compiler optimizations and may also result in instructions with a lock prefix to guarantee consistency.
All in all, the compiler may use various tricks to comply with the C# memory model, which states:
No reads or writes can move before a volatile read or after a volatile write
All writes have the effect of volatile write
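As a minimal sketch of what these rules buy you in practice (the class and field names are invented for illustration), the classic publish/consume pattern relies exactly on this acquire/release pairing:
using System.Threading;

public class Publisher
{
    private int _data;
    private bool _ready;

    public void Produce()
    {
        _data = 42;                        // ordinary write
        Volatile.Write(ref _ready, true);  // release: the _data write cannot move after this
    }

    public int? TryConsume()
    {
        if (Volatile.Read(ref _ready))     // acquire: the _data read cannot move before this
        {
            return _data;                  // guaranteed to observe 42 once _ready is true
        }
        return null;
    }
}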
What is the difference, if any, between the Read(Int64) methods of the .NET system classes System.Threading.Volatile and System.Threading.Interlocked?
Specifically, what are their respective guarantees / behaviour with regard to (a) atomicity and (b) memory ordering.
Note that this is about the Volatile class, not the volatile (lower case) keyword.
The MS docs state:
Volatile.Read Method
Reads the value of a field. On systems that require it, inserts a
memory barrier that prevents the processor from reordering memory
operations as follows: If a read or write appears after this method in
the code, the processor cannot move it before this method.
...
Returns Int64
The value that was read. This value is the latest written by any processor
in the computer, regardless of the number of processors or the state of
processor cache.
vs.
Interlocked.Read(Int64) Method
Returns a 64-bit value, loaded as an atomic operation.
Particularly confusing seems that the Volatile docs do not talk about atomicity and the Interlocked docs do not talk about ordering / memory barriers.
Side Note: Just as a reference: I'm more familiar with the C++ atomic API where atomic operations always also specify a memory ordering semantic.
The question link (and its transitive links) helpfully provided by Pavel does a good job of explaining the difference / orthogonality of volatile-as-in-memory-barrier and atomic-as-in-no-torn-reads, but it does not explain how the two concepts apply to these two classes.
Does Volatile.Read make any guarantees about atomicity?
Does Interlocked.Read (or, really, any of the Interlocked functions) make any guarantees about memory order?
Interlocked.Read translates into a CompareExchange operation:
public static long Read(ref long location)
{
    return Interlocked.CompareExchange(ref location, 0, 0);
}
Therefore it has all the benefits of CompareExchange:
Full memory barrier
Atomicity
Volatile.Read, on the other hand, has only acquire semantics. It helps you ensure the ordering of your read operations, without any atomicity or freshness guarantee.
The documentation of the Volatile.Read(long) method doesn't mention anything about atomicity, but the source code is quite revealing:
private struct VolatileIntPtr { public volatile IntPtr Value; }

[Intrinsic]
[NonVersionable]
public static long Read(ref long location) =>
#if TARGET_64BIT
    (long)Unsafe.As<long, VolatileIntPtr>(ref location).Value;
#else
    // On 32-bit machines, we use Interlocked, since an ordinary volatile read would not be atomic.
    Interlocked.CompareExchange(ref location, 0, 0);
#endif
On 32-bit machines, the Volatile.Read method indirectly invokes Interlocked.CompareExchange, just like Interlocked.Read does (source code), so there is no difference between the two. A full fence is emitted by both methods.
On 64-bit machines the atomicity of the read is guaranteed by the CPU architecture, so a cheaper half fence is emitted instead.
So Volatile.Read seems to be the preferable option overall. Although its atomicity is not guaranteed by the documentation, if it weren't atomic its usefulness would be severely limited, if it had any at all. What use would you have for a value that can potentially be torn?
Note: the Intrinsic attribute means that the code of the decorated method can potentially be replaced/optimized by the JIT compiler. This can be slightly concerning, so please make your own judgement about whether it's safe to use Volatile.Read for reading long values in a multithreaded environment.
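If you want the behavior each method documents regardless of platform, a hedged sketch (the LongCounter class and its field are invented for illustration) might look like this:
using System.Threading;

public static class LongCounter
{
    private static long _value;

    // Interlocked.Read: documented as atomic; implemented via CompareExchange (full fence).
    public static long ReadStrict() => Interlocked.Read(ref _value);

    // Volatile.Read: acquire semantics; a plain volatile load on 64-bit,
    // falling back to an interlocked operation on 32-bit.
    public static long ReadCheap() => Volatile.Read(ref _value);

    public static void Add(long amount) => Interlocked.Add(ref _value, amount);
}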
Should I use static fields and Interlocked together when I need to provide thread safety and atomic operations with static fields? Are static fields atomic by default? For example:
Interlocked.Increment(ref Factory.DefectivePartsCount);
Thanks.
Yes.
The field (assuming Int32) is atomic, not because it's static but because it's 32 bits.
However, Factory.DefectivePartsCount += 1 requires a read and a write on the variable, so the whole operation is not thread-safe.
static doesn't guarantee anything in terms of thread-safety. Hence, an increment will still not be atomic even if the variable is static. As such, you will still need to use classic synchronization mechanisms depending on the situation. In your case Interlocked.Increment is fine.
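A minimal sketch of the difference (Factory.DefectivePartsCount comes from the question; the two methods are invented for illustration):
using System.Threading;

public static class Factory
{
    public static int DefectivePartsCount;   // a static Int32: single reads and writes are atomic

    public static void RecordDefectUnsafe()
    {
        // NOT thread-safe: read, add, write - another thread can interleave between the steps.
        DefectivePartsCount += 1;
    }

    public static void RecordDefect()
    {
        // Thread-safe: the read-modify-write happens as one atomic operation.
        Interlocked.Increment(ref DefectivePartsCount);
    }
}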
I understand the functionality of Interlocked.Increment and lock(). But I'm confused about when to use one or the other. As far as I can tell, Interlocked.Increment increments a shared int/long value, whereas lock() is meant to lock a region of code.
For example, if I want to update a string value, it is possible with lock():
lock (_object)
{
    sharedString = "Hi";
}
However, this is not possible with the Interlocked class.
Why can't this be done via Interlocked?
What's the difference between these synchronization mechanisms?
Interlocked.Increment and related methods rely on hardware instructions to perform synchronized modification of a single 32-bit or 64-bit memory value, ensuring that multiple threads accessing the same value do not read/write stale data. This is necessary because at a hardware level a processor keeps a local copy of memory values for performance (often referred to as bus memory or the CPU cache).
lock(){} performs synchronization for a section of code, rather than for a single integral value. And instead of relying on hardware instructions to synchronize access to a variable, the resulting code relies on operating system synchronization primitives (software, not hardware) to protect memory and code execution.
Further, the use of lock() emits a memory barrier, ensuring that accessing the same variables from multiple CPUs yields synchronized (non-stale) data. This is not true in other languages/platforms, where memory barriers and fencing must be performed explicitly.
It's more efficient to use Interlocked methods on integral values because the hardware has native support for performing the necessary synchronization. But this hardware support only exists for native integrals such as __int32 and __int64; since the hardware has no notion of higher-level complex types, no such high-level method is exposed by the Interlocked type. Thus you can't use Interlocked to synchronize the assignment of System.String or any System.Object-derived types.
(Even though the assignment of a pointer to a string value could be done with the same hardware instruction if you were using a lower-level language, the fact is that in .NET a string object is not represented as a raw pointer, and thus it's just not possible in any "pure" .NET language. I am avoiding the fact that you can use unsafe methods to resolve the pointer and do an interlocked assignment of string values if you -really- wanted to, but I don't feel this is really what you are asking about; further, this is not supported by Interlocked because under the hood GC pinning would need to occur, which would likely become more expensive and invasive than using lock().)
Thus, for synchronized modification/assignment of "reference types" you will need to use a synchronization primitive (i.e. lock(){}, Monitor, etc.). If all you need to synchronize is a single integral value (Int32, Int64), it is more efficient to use the Interlocked methods. It may still make sense to use the lock() statement if there are multiple integral values to synchronize, for example incrementing one integer while decrementing a second integer, where both need to be synchronized as a single logical operation.
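To make the distinction concrete, here is a hedged sketch (the class and its fields are invented for illustration) contrasting the two approaches:
using System.Threading;

public class InventoryExample
{
    private readonly object _gate = new object();
    private int _completed;
    private int _inProgress = 10;
    private string _status = "running";

    // A single integral value: Interlocked is sufficient and cheaper.
    public void BumpCompleted() => Interlocked.Increment(ref _completed);

    // Two counters plus a reference assignment that must change together:
    // use lock so the whole update is one logical, synchronized operation.
    public void MoveOneToCompleted()
    {
        lock (_gate)
        {
            _inProgress--;
            _completed++;
            _status = _inProgress == 0 ? "done" : "running";
        }
    }
}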
Interlocked.Increment can and should be used to increment a shared int variable.
Functionally, using Interlocked.Increment is the same as:
lock (_object)
{
    counter++;
}
but Interlocked.Increment is much cheaper performance-wise.
If you want to exchange a reference value, and return the original value in an atomic operation, you can use Interlocked.Exchange. Interlocked.Increment does exactly what it says it does: it increments a number.
But simply assigning a reference value to a variable, or any 32-bit value type, is atomic in .NET anyway. The only other case I can think of in which the latter doesn't hold is if you create a packed structure and set attributes that force the compiler not to align members at 4-byte boundaries (but this is not something you do very often).
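A minimal sketch of Interlocked.Exchange on a reference (the class and field names are invented for illustration):
using System.Threading;

public class SharedMessage
{
    private string _sharedString = "initial";

    // Atomically install the new value and get back whatever was there before,
    // without taking a lock.
    public string Swap(string newValue) =>
        Interlocked.Exchange(ref _sharedString, newValue);
}

// usage
// string previous = new SharedMessage().Swap("Hi");   // previous == "initial"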
From the specification 10.5.3 Volatile fields:
The type of a volatile field must be one of the following:
A reference-type.
The type byte, sbyte, short, ushort,
int, uint, char, float, bool,
System.IntPtr, or System.UIntPtr.
An enum-type having an enum base type
of byte, sbyte, short, ushort, int,
or uint.
First I want to confirm my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (for reference types, because of their address), which guarantees that read/write operations are atomic. A double/long/etc. type can't be volatile because reading/writing it is not atomic, since it occupies more than 4 bytes in memory. Is my understanding correct?
And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar, as long as it is 4 bytes) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because all user-defined structs (which may well be more than 4 bytes) are excluded from volatile by design?
So, I suppose you propose the following point to be added:
A value type consisting only of one field which can be legally marked volatile.
First, fields are usually private, so in external code nothing should depend on the presence of a certain field. Even though the compiler has no issue accessing private fields, it is not a good idea to gate a language feature on something the programmer has no proper means to affect or inspect.
Since a field is usually part of the internal implementation of a type, it can be changed at any time in a referenced assembly, and that change could make a piece of C# code that used the type illegal.
These theoretical and practical reasons mean that the only feasible way would be to introduce a volatile modifier for value types that ensures the point specified above holds. However, since the only group of types that would benefit from such a modifier is value types with a single field, this feature probably wasn't very high on the list.
Basically, usage of the volatile keyword can sometimes be misleading. Its purpose is to ensure that the latest value (or, actually, an eventually fresh-enough value)[1] of the respective member is returned when it is accessed by any thread.
In fact, this is true for value types only[2]. Reference-type members are represented in memory as pointers to a location on the heap where the object is actually stored. So, when used on a reference type, volatile ensures you only get a fresh value of the reference (the pointer) to the object, not of the object itself.
If you have a volatile List<String> myVolatileList which is modified by multiple threads by having elements added or removed, and you expect to safely access the latest modification of the list, you are actually wrong. In fact, you are prone to the same issues as if the volatile keyword were not there: race conditions and/or having the object instance corrupted. It does not assist you in this case, nor does it provide you with any thread safety.
If, however, the list itself is not modified by the different threads, but rather, each thread would only assign a different instance to the field (meaning the list is behaving like an immutable object), then you are fine. Here is an example:
public class HasVolatileReferenceType
{
    public volatile List<int> MyVolatileMember;
}
The following usage is correct with respect to multi-threading, as each thread would replace the MyVolatileMember pointer. Here, volatile ensures that the other threads will see the latest list instance stored in the MyVolatileMember field.
HasVolatileReferenceType example = new HasVolatileReferenceType();

// instead of modifying `example.MyVolatileMember` in place,
// we are replacing it with a new list. This is OK with volatile.
example.MyVolatileMember = example.MyVolatileMember
    .Where(x => x > 42).ToList();
In contrast, the below code is error prone, because it directly modifies the list. If this code is executed simultaneously with multiple threads, the list may become corrupted, or behave in an inconsistent manner.
example.MyVolatileMember.RemoveAll(x => x <= 42);
Let us return to value types for a while. In .NET all value types are actually reassigned when they are modified, so they are safe to use with the volatile keyword - see the code:
public class HasVolatileValueType
{
    public volatile int MyVolatileMember;
}
// usage
HasVolatileValueType example = new HasVolatileValueType();
example.MyVolatileMember = 42;
[1] The notion of latest value here is a little misleading, as noted by Eric Lippert in the comments section. In fact, latest here means that the .NET runtime will attempt (no guarantees here) to prevent writes to volatile members from happening in between read operations whenever it deems it possible. This contributes to different threads reading a fresh value of the volatile member, as their read operations will probably be ordered after a write operation to the member. But this is more about probability than a hard guarantee.
[2] In general, volatile is OK to use on any immutable object, since modifications always imply reassigning the field with a different value. The following code is also a correct example of the use of the volatile keyword:
public class HasVolatileImmutableType
{
    public volatile string MyVolatileMember;
}
// usage
HasVolatileImmutableType example = new HasVolatileImmutableType();
example.MyVolatileMember = "immutable";
// string is a reference type, but it is *immutable*,
// so we need to reassign the modification result in order
// to work with the new value later
example.MyVolatileMember = example.MyVolatileMember.Substring(2);
I'd recommend you take a look at this article. It thoroughly explains the usage of the volatile keyword, the way it actually works, and the possible consequences of using it.
I think it is because a struct is a value type, which is not one of the types listed in the spec. It is interesting to note that reference types can be volatile fields, so it can be accomplished with a user-defined class. This may disprove your theory that the above types can be volatile because they can be stored in 4 bytes (or maybe not).
This is an educated guess at the answer... please don't shoot me down too much if I am wrong!
The documentation for volatile states:
The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access.
This implies that part of the design intent for volatile fields is to implement lock-free multithreaded access.
A member of a struct can be updated independently of the other members. So in order to write the new struct value where only part of it has been changed, the old value must be read. Writing is therefore not guaranteed to require a single memory operation. This means that in order to update the struct reliably in a multithreaded environment, some kind of locking or other thread synchronization is required. Updating multiple members from several threads without synchronization could soon lead to counter-intuitive, if not technically corrupt, results: to make a struct volatile would be to mark a non-atomic object as atomically updateable.
Additionally, only some structs could be volatile - those of size 4 bytes. The code that determines the size - the struct definition - could be in a completely separate part of the program from the code that declares the field as volatile. This could be confusing, as there would be unintended consequences of updating the definition of a struct.
So, whereas it would be technically possible to allow some structs to be volatile, the caveats for correct usage would be sufficiently complex that the disadvantages would outweigh the benefits.
My recommendation for a workaround would be to store your 4-byte struct as a 4-byte base type and implement static conversion methods to call each time you want to use the field.
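A hedged sketch of that workaround (the PackedColor struct and SharedColor wrapper are invented for illustration): keep the data in a volatile int and convert at the boundary.
public struct PackedColor   // an example 4-byte struct
{
    public byte R, G, B, A;

    public static int ToInt32(PackedColor c) =>
        c.R | (c.G << 8) | (c.B << 16) | (c.A << 24);

    public static PackedColor FromInt32(int raw) => new PackedColor
    {
        R = (byte)raw,
        G = (byte)(raw >> 8),
        B = (byte)(raw >> 16),
        A = (byte)(raw >> 24)
    };
}

public class SharedColor
{
    // The struct itself cannot be marked volatile, but its 4-byte representation can.
    private volatile int _raw;

    public PackedColor Value
    {
        get { return PackedColor.FromInt32(_raw); }   // a volatile read of the whole struct
        set { _raw = PackedColor.ToInt32(value); }    // a volatile write of the whole struct
    }
}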
To address the second part of your question, I would support the language designers' decision based on two points:
KISS - Keep It Simple, Simon - It would make the spec more complex and implementations harder to have this feature. All language features start at minus 100 points; is adding the ability to make a small minority of structs volatile really worth 101 points?
Compatibility - questions of serialization aside - Usually, adding a new field to a type [class, struct] is a safe, backwards source-compatible move. Adding a field should not break anyone's compile. If the behavior of structs changed when adding a field, this would break that.