The Thread.GetNamedDataSlot method looks up a named data slot and returns a LocalDataStoreSlot that can be used with Thread.SetData.
Can the result of the GetNamedDataSlot function be cached (and reused across all threads) or should it be invoked in/for every thread?
The documentation does not explicitly say it "shouldn't" be reused, although it does not say it can be, either. Furthermore, the example shows GetNamedDataSlot used at every GetData/SetData call site, even within the same thread.
For example (note that the BarSlot slot is not created/assigned on every thread from which the TLS is accessed):
public class Foo
{
    private static readonly LocalDataStoreSlot BarSlot = Thread.GetNamedDataSlot("foo_bar");

    public static void SetMethodCalledFromManyThreads(string awesome)
    {
        Thread.SetData(BarSlot, awesome);
    }

    public static void ReadMethodCalledFromManyThreads()
    {
        Console.WriteLine("Data: " + Thread.GetData(BarSlot));
    }
}
I ask this question in relation to code structure; any micro performance gains, if any, are a freebie. Any critical issue or performance degradation caused by the reuse would make it not a viable option.
Can the result of the GetNamedDataSlot function be cached (and reused across all threads) or should it be invoked in/for every thread?
Unfortunately, the documentation isn't 100% clear on this point. Some interesting passages include…
From Thread.GetNamedDataSlot Method (String):
Data slots are unique per thread. No other thread (not even a child thread) can get that data
And from LocalDataStoreSlot Class:
The data slots are unique per thread or context; their values are not shared between the thread or context objects
At best, these make clear that each thread gets its own copy of the data. But the passages can be read to mean either that the LocalDataStoreSlot itself is per-thread, or simply the data to which it refers is per-thread. I believe it's the latter, but I can't point to a specific MSDN page that says so.
So, we can look at the implementation details:
There is a single slot manager per process, which is used to maintain all of the per-thread slots. A LocalDataStoreSlot returned in one thread can be passed to another thread and used there, and it would be owned by the same manager, and use the same slot index (because the slot table is also per-process). It also happens that the Thread.SetData() method will implicitly create the thread-local data store for that slot if it doesn't already exist.
The Thread.GetData() method simply returns null if you haven't already set a value or the thread-local data store hasn't been created. So, the behavior of GetData() remains consistent whether or not you have called SetData() in that thread already.
Since the slots are managed at a process-level basis, you can reuse the LocalDataStoreSlot values across threads. Once allocated, the slot is used up for all threads, and the data stored for that slot will be unique for each thread. Sharing the LocalDataStoreSlot value across threads shares the slot, but even for a single slot, you get thread-local storage for each thread.
Indeed, looking at it this way, the implementation you show would be the desirable way to use this API. After all, it's an alternative to [ThreadStatic], and the only way to ensure a different LocalDataStoreSlot value for each thread in your code would be either to use [ThreadStatic] (which if you wanted to use, you should have just used for the data itself), or to maintain your own dictionary of LocalDataStoreSlot values, indexed presumably by Thread.ManagedThreadId.
Personally, I'd just use [ThreadStatic]. MSDN even recommends this, and it has IMHO clearer semantics. But if you want to use LocalDataStoreSlot, it seems to me that the implementation you have is correct.
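For comparison, here is a minimal sketch of the [ThreadStatic] alternative, carrying over the Foo class and method names from the question (the barValue field name is made up):

public class Foo
{
    // Each thread sees its own copy of this field; no slot bookkeeping required.
    [ThreadStatic]
    private static string barValue;

    public static void SetMethodCalledFromManyThreads(string awesome)
    {
        barValue = awesome;
    }

    public static void ReadMethodCalledFromManyThreads()
    {
        // On a thread that never called the setter, barValue is simply null.
        Console.WriteLine("Data: " + barValue);
    }
}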
Related
I'm looking for a way to set up a variable inside a Parallel.ForEach loop and make it easily accessible anywhere in the system, to avoid having to pass all desired values deep into the system as parameters. This is primarily for logging purposes.
Parallel.ForEach(orderIds, options, orderId =>
{
var currentOrderId = orderId;
});
And sometime later, deep in the code
public void DeepMethod(string searchVal)
{
// Access currentOrderId here somehow, so I can log this was called for the specified order
}
As noted in the comments, globally-scoped state for concurrently executing code is a poor design choice. If done correctly, you wind up with hard-to-maintain code and contention between concurrently executing code. If done incorrectly, you wind up with hard-to-find, hard-to-fix bugs.
There's not much context in your question, so it's impossible to suggest anything specific. But, given the description you've provided, the usual approach would be to define a class that represents the state for the concurrently executed operation, in which you keep the value or values that you want to be able to access at the "deep" level of the "system" (by this, I infer that you mean "deep" as in depth of call stack, and "system" as in the collection of methods involved in implementing this operation).
By using a class to contain the values and implementation of your concurrently executed operation, you then would have direct access to the value that's specific to that particular branch (thread) of the concurrently executed operation, as an instance field of your class, in the methods implemented in that class.
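To illustrate, here is a minimal sketch of that idea, assuming a hypothetical OrderProcessor class; the order id travels as instance state instead of as a global:

public class OrderProcessor
{
    // State specific to this branch of the parallel operation.
    private readonly string orderId;

    public OrderProcessor(string orderId)
    {
        this.orderId = orderId;
    }

    public void Process()
    {
        // ... work that eventually calls down into DeepMethod ...
        DeepMethod("some search value");
    }

    public void DeepMethod(string searchVal)
    {
        // orderId is directly available here for logging.
        Console.WriteLine($"DeepMethod called for order {orderId}, searching for '{searchVal}'");
    }
}

// Mirroring the loop from the question:
Parallel.ForEach(orderIds, options, orderId =>
{
    new OrderProcessor(orderId).Process();
});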
More broadly: a major tenet in writing concurrent code is to avoid sharing mutable data between threads. Shared data should be immutable (e.g. like a string object), and mutated data (like status values that you seem to be describing here) should be kept in data structures that are private to each thread.
TL;DR version of the main questions:
While working with threads, is it safe to read a list's contents with one thread while another writes to it, as long as you do not delete list contents (reorganize order) and only read a new object after it has been fully added?
While an int is being updated from "Old Value" to "New Value" by one thread, is there a risk, if another thread reads this int, that the value returned is neither "Old Value" nor "New Value"?
Is it possible for a thread to "skip" a critical region if it's busy, instead of just going to sleep and waiting for the region's release?
I have two pieces of code running in separate threads and I want one to act as a producer for the other. I do not want either thread "sleeping" while waiting for access, but instead to skip forward in its internal code if the other thread is accessing this.
My original plan was to share the data via this approach (and once the counter got high enough, switch to a secondary list to avoid overflows).
Pseudocode of the flow as I originally intended it:
Producer
{
    int counterProducer;
    buffered_Object newlyProducedObject;
    List<buffered_Object> objectsProducer;

    while (true)
    {
        <Do stuff until a new product is created and added to newlyProducedObject>;
        objectsProducer.Add(newlyProducedObject);
        counterProducer++;
    }
}
Consumer
{
    int counterConsumer;
    Producer objectProducer; // contains reference to Producer class
    List<buffered_Object> personalQueue;

    while (true)
    {
        <Do useful work, such as working on personal queue, and polish nails if no personal queue>;

        // get all outstanding requests and move to personal queue
        while (counterConsumer < objectProducer.GetCounterProducer())
        {
            personalQueue.Add(objectProducer.GetItem(counterConsumer + 1));
            counterConsumer++;
        }
    }
}
Looking at this, everything looked fine at first glance. I knew I would not be retrieving a half-constructed product from the queue, so the status of the list should not be a problem even if a thread switch occurs while the Producer is adding a new object. Is this assumption correct, or can there be problems here? (My guess is that because the consumer asks for a specific location in the list, new objects are added to the end, and objects are never deleted, this will not be a problem.)
But what caught my eye was: could a similar problem occur where counterProducer is at an unknown value while counterProducer++ is executing? Could this result in the value temporarily being null or some unknown value? Will this be a potential issue?
My goal is to have neither of the two threads lock while waiting for a mutex but instead continue their loops, which is why I made the above first, as there is no locking.
If the usage of the list will cause problems, my workaround will be to make a linked-list implementation and share it between the two classes, still using the counters to see whether new work has been added and keeping the last location while the personalQueue moves new stuff to the personal queue. So the producer adds new links, the consumer reads them and deletes the previous ones. (No counter on the list, just external counters to know how much has been added and removed.)
Alternative pseudocode to avoid the counterProducer++ risk (need help with this):
Producer
{
    int publicCounterProducer;
    int privateCounterProducer;
    buffered_Object newlyProducedObject;
    List<buffered_Object> objectsProducer;

    while (true)
    {
        <Do stuff until a new product is created and added to newlyProducedObject>;
        objectsProducer.Add(newlyProducedObject);
        privateCounterProducer++;
        <Need help: some code that updates publicCounterProducer to privateCounterProducer if that variable is not
         locked, else skips ahead; the counter will get updated on the next pass, and at some point the consumer must be
         done reading and new stuff will already be prepared>
    }
}
Consumer
{
    int counterConsumer;
    Producer objectProducer; // contains reference to Producer class
    List<buffered_Object> personalQueue;

    while (true)
    {
        <Do useful work, such as working on personal queue, and polish nails if no personal queue>;

        // get all outstanding requests and move to personal queue
        <Need help: try to read publicCounterProducer and set readProducerCounter to it, else skip this code>
        while (counterConsumer < readProducerCounter)
        {
            personalQueue.Add(objectProducer.GetItem(counterConsumer + 1));
            counterConsumer++;
        }
    }
}
So the goal in the second piece of code, which I have not been able to figure out how to write, is to make both classes avoid waiting for the other when the other is in the "critical region" of updating publicCounterProducer. If I read the lock functionality correctly, the threads will go to sleep waiting for the release, which is not what I want. I might end up having to use it, though, in which case the first pseudocode would do, with a "lock" placed around reading the value.
Hope you can help me out with my many questions.
No, it is not safe. A context switch can occur within .Add after the List has added the object, but before it has updated its internal data structure.
If it is an Int32, or an Int64 and you are running in an x64 process, then there is no risk. But if you have any doubts, use the Interlocked class.
Yes, you can use a Semaphore, and when it is time to enter the critical region, use WaitOne overload that takes a timeout. Pass a timeout of 0. If WaitOne returns true, then you successfully acquired the lock and can enter. If it returns false, then you did not acquire the lock and should not enter.
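A minimal sketch of that pattern (the gate name and the console messages are placeholders):

using System;
using System.Threading;

class NonBlockingGateExample
{
    // One permit: the semaphore acts as a mutual-exclusion gate shared by the threads.
    private static readonly Semaphore gate = new Semaphore(1, 1);

    public static void TryDoWork()
    {
        // A timeout of 0 means: return immediately rather than sleeping.
        if (gate.WaitOne(0))
        {
            try
            {
                Console.WriteLine("Got the gate, doing the critical work.");
            }
            finally
            {
                gate.Release();
            }
        }
        else
        {
            Console.WriteLine("Gate was busy, skipping ahead this pass.");
        }
    }
}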
You should really look at the System.Collections.Concurrent namespace. In particular, look at BlockingCollection<T>. It has a number of Try* methods you can use to add/remove items from the collection without blocking.
While working with threads, is it safe to read a list's contents with one thread while another writes to it, as long as you do not delete list contents (reorganize order) and only read a new object after it has been fully added?
No, it is not. A side-effect of adding an item to a list may be to reallocate its underlying array. Current implementations of List<T> update the internal reference before copying the old data to it, so multiple threads may observe a list of the correct size but containing no data.
While an int is being updated from "Old Value" to "New Value" by one thread, is there a risk, if another thread reads this int, that the value returned is neither "Old Value" nor "New Value"?
Nope, int updates are atomic. But if two threads are both incrementing counterProducer at once, it will go wrong. You should use Interlocked.Increment() to increment it.
Is it possible for a thread to "skip" a critical region if it's busy, instead of just going to sleep and waiting for the region's release?
No, but you can use (for example) WaitHandle.WaitOne(int) to see if a wait succeeded, and branch accordingly. WaitHandle is implemented by several synchronization classes, such as ManualResetEvent.
Incidentally, is there a reason you are not using the built-in Producer/Consumer classes such as BlockingCollection<T>? BlockingCollection is easy to use (after you read the documentation!) and I'd recommend using it instead.
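To make that concrete, here is a minimal sketch of a producer/consumer pair built on BlockingCollection<T>; int items stand in for the question's buffered_Object, and the polling style mirrors the "skip ahead if busy" requirement:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerExample
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();

        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 100; i++)
            {
                queue.Add(i);          // does not block for an unbounded collection
            }
            queue.CompleteAdding();    // signal that no more items will arrive
        });

        var consumer = Task.Run(() =>
        {
            while (!queue.IsCompleted)
            {
                // TryTake with a zero timeout returns immediately if nothing is available,
                // so the consumer can go do other work instead of sleeping.
                if (queue.TryTake(out int item, 0))
                {
                    Console.WriteLine("Consumed " + item);
                }
                // else: work on the personal queue / polish nails
            }
        });

        Task.WaitAll(producer, consumer);
    }
}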
Simplified illustration below: how does .NET deal with such a situation?
And if it would cause problems, would I have to lock/gate access to each and every field/property that might at times be written to and accessed from different threads?
A field somewhere
public class CrossRoads
{
    public int _timeouts;
}
A background thread writer
public void TimeIsUp(CrossRoads crossRoads){
crossRoads._timeouts++;
}
Possibly at the same time, trying to read elsewhere
public void HowManyTimeOuts(CrossRoads crossRoads){
int timeOuts = crossRoads._timeouts;
}
The simple answer is that the above code has the ability to cause problems if accessed simultaneously from multiple threads.
The .Net framework provides two solutions: interlocking and thread synchronization.
For simple data type manipulation (i.e. ints), interlocking using the Interlocked class will work correctly and is the recommended approach.
In fact, the Interlocked class provides specific methods (Increment and Decrement) that make this process easy:
Add an IncrementCount method to your CrossRoads class:
public void IncrementCount() {
Interlocked.Increment(ref _timeouts);
}
Then call this from your background worker:
public void TimeIsUp(CrossRoads crossRoads){
crossRoads.IncrementCount();
}
Reading the value is atomic, unless it is a 64-bit value on a 32-bit OS. See the Interlocked.Read method documentation for more detail.
For class objects or more complex operations, you will need to use thread synchronization locking (lock in C# or SyncLock in VB.Net).
This is accomplished by creating a static synchronization object at the level the lock is to be applied (for example, inside your class), obtaining a lock on that object, and performing (only) the necessary operations inside that lock:
private static object SynchronizationObject = new Object();
public void PerformSomeCriticalWork()
{
lock (SynchronizationObject)
{
// do some critical work
}
}
The good news is that reads and writes of ints are guaranteed to be atomic, so there are no torn values. However, ++ is not guaranteed to be safe, and the read could potentially be cached in a register. There's also the issue of instruction re-ordering.
I would use:
Interlocked.Increment(ref crossroads._timeouts);
For the write, which will ensure no values are lost, and;
int timeouts = Interlocked.CompareExchange(ref crossroads._timeouts, 0, 0);
For the read, since this observes the same rules as the increment. Strictly speaking "volatile" is probably enough for the read, but it is so poorly understood that the Interlocked seems (IMO) safer. Either way, we're avoiding a lock.
Well, I'm not a C# developer, but this is how it typically works at this level:
how does .NET deal with such a situation?
Unlocked. Not likely to be guaranteed to be atomic.
Would I have to lock/gate access to each and every field/property that might at times be written to and accessed from different threads?
Yes. An alternative would be to make a lock for the object available to the clients, then tell the clients they must lock the object while using the instance. This will reduce the number of lock acquisitions and guarantee a more consistent, predictable state for your clients.
Forget dotnet. At the machine language level, crossRoads._timeouts++ will be implemented as an INC [memory] instruction. This is known as a Read-Modify-Write instruction. These instructions are atomic with respect to multi-threading on a single processor*, (essentially implemented with time-slicing,) but are not atomic with respect to multi-threading using multiple processors or multiple cores.
So:
If you can guarantee that only TimeIsUp() will ever modify crossRoads._timeouts, and if you can guarantee that only one thread will ever execute TimeIsUp(), then it will be safe to do this. The writing in TimeIsUp() will work fine, and the reading in HowManyTimeOuts() (and any place else) will work fine. But if you also modify crossRoads._timeouts elsewhere, or if you ever spawn one more background thread writer, you will be in trouble.
In either case, my advice would be to play it safe and lock it.
(*) They are atomic with respect to multi-threading on a single processor because context switches between threads happen on a periodic interrupt, and on the x86 architectures these instructions are atomic with respect to interrupts, meaning that if an interrupt occurs while the CPU is executing such an instruction, the interrupt will wait until the instruction completes. This does not hold true with more complex instructions, for example those with the REP prefix.
Although an int may be 'native' size to a CPU (dealing in 32 or 64 bits at a time), if you are reading and writing from different threads to the same variable, you are best off locking this variable and synchronizing access.
There is never a guarantee that reads/writes to an int are atomic.
You can also use Interlocked.Increment for your purposes here.
I have an application that makes use of parallelization for processing data.
The main program is in C#, while one of the routines for analyzing data is in an external C++ DLL. This library scans data and calls a callback every time a certain signal is found within the data. Data should be collected, sorted, and then stored to disk.
Here is my first simple implementation of the method invoked by the callback and of the method for sorting and storing data:
// collection where saving found signals
List<MySignal> mySignalList = new List<MySignal>();
// method invoked by the callback
private void Collect(int type, long time)
{
lock(locker) { mySignalList.Add(new MySignal(type, time)); }
}
// store signals to disk
private void Store()
{
// sort the signals
mySignalList.Sort();
// file is an object that manages the writing of data to a FileStream
file.Write(mySignalList.ToArray());
}
Data is made up of a two-dimensional array (short[][] data) of size 10000 x n, with n variable. I use parallelization in this way:
Parallel.For(0, 10000, (int i) =>
{
// wrapper for the external c++ dll
ProcessData(data[i]);
});
Now, for each of the 10000 arrays, I estimate that 0 to 4 callbacks could be fired. I'm facing a bottleneck, and given that my CPU resources are not over-utilized, I suppose the lock (together with thousands of callbacks) is the problem (am I right, or could it be something else?). I've tried the ConcurrentBag collection but performance is still worse (in line with other users' findings).
I thought that a possible solution for using lock-free code would be to have multiple collections. Then a strategy would be needed to make each thread of the parallel process work on a single collection. The collections could, for instance, be kept in a dictionary with the thread ID as the key, but I do not know of any .NET facility for this (I would need to know the thread IDs to initialize the dictionary before launching the parallelization). Is this idea feasible, and if so, does some .NET facility exist for it? Or alternatively, any other idea to speed up the process?
[EDIT]
I've followed Reed Copsey's suggestion and used the following solution (according to the VS2010 profiler, the burden of locking and adding to the list previously took 15% of the resources, while now it takes only 1%):
// master collection where found signals are saved
List<MySignal> mySignalList = new List<MySignal>();

// thread-local storage of data (each thread works on its own List<MySignal>)
ThreadLocal<List<MySignal>> threadLocal;

// analyze data
private void AnalizeData()
{
    using (threadLocal = new ThreadLocal<List<MySignal>>(() => new List<MySignal>()))
    {
        Parallel.For<int>(0, 10000,
            () => 0,
            (i, loopState, localState) =>
            {
                // wrapper for the external C++ dll
                ProcessData(data[i]);
                return 0;
            },
            (localState) =>
            {
                lock (this)
                {
                    // add the thread-local list to the master collection
                    mySignalList.AddRange(threadLocal.Value);
                    threadLocal.Value.Clear();
                }
            });
    }
}

// method invoked by the callback
private void Collect(int type, long time)
{
    threadLocal.Value.Add(new MySignal(type, time));
}
I thought that a possible solution for using lock-free code would be to have multiple collections. Then a strategy would be needed to make each thread of the parallel process work on a single collection. The collections could, for instance, be kept in a dictionary with the thread ID as the key, but I do not know of any .NET facility for this (I would need to know the thread IDs to initialize the dictionary before launching the parallelization). Is this idea feasible, and if so, does some .NET facility exist for it? Or alternatively, any other idea to speed up the process?
You might want to look at using ThreadLocal<T> to hold your collections. This automatically allocates a separate collection per thread.
That being said, there are overloads of Parallel.For which work with local state and have a collection pass at the end. This would potentially allow you to spawn your ProcessData wrapper where each loop body works on its own collection, and then recombine at the end. This would potentially eliminate the need for locking (since each thread works on its own data set) until the recombination phase, which happens once per thread (instead of once per task, i.e., 10000 times). This could reduce the number of locks you're taking from ~25000 (0-4 × 10000) down to a few (system and algorithm dependent, but on a quad-core system, probably around 10 in my experience).
For details, see my blog post on aggregating data with Parallel.For/ForEach. It demonstrates the overloads and explains how they work in more detail.
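For reference, here is a minimal sketch of that overload pattern, independent of the question's ProcessData wrapper (the MySignal shape here is just a stand-in):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class AggregationExample
{
    class MySignal { public int Type; public long Time; }

    static readonly object masterLock = new object();
    static readonly List<MySignal> masterList = new List<MySignal>();

    static void Main()
    {
        Parallel.For(0, 10000,
            // localInit: one private list per worker thread
            () => new List<MySignal>(),
            // body: add to the private list, no lock needed
            (i, loopState, localList) =>
            {
                localList.Add(new MySignal { Type = i % 4, Time = i });
                return localList;
            },
            // localFinally: runs once per worker thread, so the lock is taken rarely
            localList =>
            {
                lock (masterLock)
                {
                    masterList.AddRange(localList);
                }
            });

        Console.WriteLine(masterList.Count);
    }
}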
You don't say how much of a "bottleneck" you're encountering. But let's look at the locks.
On my machine (quad core, 2.4 GHz), a lock costs about 70 nanoseconds if it's not contended. I don't know how long it takes to add an item to a list, but I can't imagine it takes more than a few microseconds. Let's say it takes 100 microseconds (I would be very surprised to find that it's even 10 microseconds) to add an item to the list, taking lock contention into account. So if you're adding 40,000 items to the list, that's 4,000,000 microseconds, or 4 seconds. And I would expect one core to be pegged if this were the case.
I haven't used ConcurrentBag, but I've found the performance of BlockingCollection to be very good.
I suspect, though, that your bottleneck is somewhere else. Have you done any profiling?
The basic collections in C# aren't thread safe.
The problem you're having is due to the fact that you're locking the entire collection just to call an add() method.
You could create a thread-safe collection that only locks single elements inside the collection, instead of the whole collection.
Let's look at a linked list, for example.
Implement an add(item (or list)) method that does the following:
Lock collection.
A = get last item.
set last item reference to the new item (or last item in new list).
lock last item (A).
unlock collection.
add new items/list to the end of A.
unlock locked item.
This will lock the whole collection for just 3 simple tasks when adding.
Then when iterating over the list, just do a trylock() on each object. If it's locked, wait for the lock to be free (that way you're sure that the add() finished).
In C# you can do an empty lock() block on the object as a trylock().
So now you can add safely and still iterate over the list at the same time.
Similar solutions can be implemented for the other commands if needed.
Any built-in solution for a collection is going to involve some locking. There may be ways to avoid it, perhaps by segregating the actual data constructs being read/written, but you're going to have to lock SOMEWHERE.
Also, understand that Parallel.For() will use the thread pool. While simple to implement, you lose fine-grained control over creation/destruction of threads, and the thread pool involves some serious overhead when starting up a big parallel task.
From a conceptual standpoint, I would try two things in tandem to speed up this algorithm:
Create threads yourself, using the Thread class. This frees you from the scheduling slowdowns of the thread pool; a thread starts processing (or waiting for CPU time) when you tell it to start, instead of the thread pool feeding requests for threads into its internal workings at its own pace. You should be aware of the number of threads you have going at once; the rule of thumb is that the benefits of multithreading are overcome by the overhead when you have more than twice the number of active threads as "execution units" available to execute threads. However, you should be able to architect a system that takes this into account relatively simply.
Segregate the collection of results, by creating a dictionary of collections of results. Each results collection is keyed to some token carried by the thread doing the processing and passed to the callback. The dictionary can have multiple elements READ at one time without locking, and as each thread is WRITING to a different collection within the Dictionary there shouldn't be a need to lock those lists (and even if you did lock them you wouldn't be blocking other threads). The result is that the only collection that has to be locked such that it would block threads is the main dictionary, when a new collection for a new thread is added to it. That shouldn't have to happen often if you're smart about recycling tokens.
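A minimal sketch of that second idea, using the managed thread id as the token and a ConcurrentDictionary so that adding a collection for a new thread needs no explicit lock (the MySignal shape is again a stand-in):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class PerThreadResults
{
    class MySignal { public int Type; public long Time; }

    static readonly ConcurrentDictionary<int, List<MySignal>> resultsByThread =
        new ConcurrentDictionary<int, List<MySignal>>();

    // Called from the callback; each thread only ever touches its own list.
    static void Collect(int type, long time)
    {
        var list = resultsByThread.GetOrAdd(
            Thread.CurrentThread.ManagedThreadId,
            _ => new List<MySignal>());
        list.Add(new MySignal { Type = type, Time = time });
    }

    // Called once the parallel work has finished.
    static List<MySignal> Merge()
    {
        var merged = new List<MySignal>();
        foreach (var list in resultsByThread.Values)
        {
            merged.AddRange(list);
        }
        return merged;
    }
}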
I read some articles about the volatile keyword but I could not figure out its correct usage. Could you please tell me what it should be used for in C# and in Java?
Consider this example:
int i = 5;
System.out.println(i);
The compiler may optimize this to just print 5, like this:
System.out.println(5);
However, if there is another thread which can change i, this is the wrong behaviour. If another thread changes i to be 6, the optimized version will still print 5.
The volatile keyword prevents such optimization and caching, and thus is useful when a variable can be changed by another thread.
For both C# and Java, "volatile" tells the compiler that the value of a variable must never be cached as its value may change outside of the scope of the program itself. The compiler will then avoid any optimisations that may result in problems if the variable changes "outside of its control".
Reads of volatile fields have acquire semantics. This means that it is guaranteed that the memory read from the volatile variable will occur before any following memory reads. It blocks the compiler from doing the reordering, and if the hardware requires it (weakly ordered CPU), it will use a special instruction to make the hardware flush any reads that occur after the volatile read but were speculatively started early, or the CPU could prevent them from being issued early in the first place, by preventing any speculative load from occurring between the issue of the load acquire and its retirement.
Writes of volatile fields have release semantics. This means that the write to the volatile variable is guaranteed to be delayed until all previous memory writes are visible to other processors.
Consider the following example:
something.foo = new Thing();
If foo is a member variable in a class, and other CPUs have access to the object instance referred to by something, they might see the value foo change before the memory writes in the Thing constructor are globally visible! This is what "weakly ordered memory" means. This could occur even if the compiler has all of the stores in the constructor before the store to foo. If foo is volatile then the store to foo will have release semantics, and the hardware guarantees that all of the writes before the write to foo are visible to other processors before allowing the write to foo to occur.
How is it possible for the writes to foo to be reordered so badly? If the cache line holding foo is in the cache, and the stores in the constructor missed the cache, then it is possible for the store to foo to complete much sooner than the writes that missed the cache.
The (awful) Itanium architecture from Intel had weakly ordered memory. The processor used in the original XBox 360 had weakly ordered memory. Many ARM processors, including the very popular ARMv7-A have weakly ordered memory.
Developers often don't see these data races because things like locks do a full memory barrier, essentially the same thing as acquire and release semantics at the same time. No loads inside the lock can be speculatively executed before the lock is acquired; they are delayed until the lock is acquired. No stores can be delayed past a lock release; the instruction that releases the lock is delayed until all of the writes done inside the lock are globally visible.
A more complete example is the "Double-checked locking" pattern. The purpose of this pattern is to avoid having to always acquire a lock in order to lazy initialize an object.
Snagged from Wikipedia:
public class MySingleton {
private static object myLock = new object();
private static volatile MySingleton mySingleton = null;
private MySingleton() {
}
public static MySingleton GetInstance() {
if (mySingleton == null) { // 1st check
lock (myLock) {
if (mySingleton == null) { // 2nd (double) check
mySingleton = new MySingleton();
// Write-release semantics are implicitly handled by marking
// mySingleton with 'volatile', which inserts the necessary memory
// barriers between the constructor call and the write to mySingleton.
// The barriers created by the lock are not sufficient because
// the object is made visible before the lock is released.
}
}
}
// The barriers created by the lock are not sufficient because not all threads
// will acquire the lock. A fence for read-acquire semantics is needed between
// the test of mySingleton (above) and the use of its contents. This fence
// is automatically inserted because mySingleton is marked as 'volatile'.
return mySingleton;
}
}
In this example, the stores in the MySingleton constructor might not be visible to other processors before the store to mySingleton. If that happens, the other threads that peek at mySingleton will not acquire a lock and they will not necessarily pick up the writes to the constructor.
volatile never prevents caching. What it does is guarantee the order in which other processors "see" writes. A store release will delay a store until all pending writes are complete and a bus cycle has been issued telling other processors to discard/writeback their cache line if they happen to have the relevant lines cached. A load acquire will flush any speculated reads, ensuring that they won't be stale values from the past.
To understand what volatile does to a variable, it's important to understand what happens when the variable is not volatile.
Variable is Non-volatile
When two threads A and B are accessing a non-volatile variable, each thread will maintain a local copy of the variable in its local cache. Any changes done by thread A in its local cache won't be visible to thread B.
Variable is volatile
When variables are declared volatile, it essentially means that threads should not cache such a variable; in other words, threads should not trust the values of these variables unless they read them directly from main memory.
So, when to make a variable volatile?
When you have a variable that can be accessed by many threads, and you want every thread to get the latest updated value of that variable, even if the value is updated by another thread/process/outside of the program.
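A classic C# illustration of that situation is a stop flag written by one thread and polled by another; without volatile, the JIT is allowed to hoist the read out of the loop and the worker might never observe the change (the names below are made up):

using System;
using System.Threading;

class VolatileFlagExample
{
    // volatile: every read goes back to the field rather than a stale cached value
    private static volatile bool stopRequested;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (!stopRequested)
            {
                // do a small unit of work
            }
            Console.WriteLine("Worker observed the stop request.");
        });

        worker.Start();
        Thread.Sleep(100);
        stopRequested = true;   // made visible to the worker because the field is volatile
        worker.Join();
    }
}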
The volatile keyword has different meanings in both Java and C#.
Java
From the Java Language Specification:
A field may be declared volatile, in which case the Java memory model ensures that all threads see a consistent value for the variable.
C#
From the C# Reference (retrieved 2021-03-31):
The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. The compiler, the runtime system, and even hardware may rearrange reads and writes to memory locations for performance reasons. Fields that are declared volatile are not subject to these optimizations. (...)
In Java, "volatile" is used to tell the JVM that the variable may be used by multiple threads at the same time, so certain common optimizations cannot be applied.
Notably, this matters when the two threads accessing the same variable are running on separate CPUs in the same machine. It is very common for CPUs to cache the data they hold aggressively, because memory access is much slower than cache access. This means that if the data is updated on CPU1, it must immediately go through all caches and out to main memory, instead of whenever the cache decides to flush itself, so that CPU2 can see the updated value (again bypassing all caches on the way).
When you are reading data that is non-volatile, the executing thread may or may not always get the updated value.
But if the object is volatile, the thread always gets the most up-to-date value.
Volatile addresses a concurrency problem: it keeps a value in sync across threads. The keyword is mostly used in threading scenarios where multiple threads update the same variable.