Is the `is` operator thread-safe/atomic in C#?

Is the following code thread-safe?
public object DemoObject { get; set; }

public void DemoMethod()
{
    if (DemoObject is IDemoInterface demo)
    {
        demo.DoSomething();
    }
}
If other threads modify DemoObject (e.g. set it to null) while DemoMethod is executing, is it guaranteed that within the if block the local variable demo will always hold a valid reference (to an instance implementing IDemoInterface)?

The is test here is atomic in the sense that matters: the DemoObject reference is read exactly once, and reference reads are atomic in .NET, much like an Interlocked operation. However, the behavior of this code is almost entirely non-deterministic. Unless the objective is to create unpredictable, non-deterministic behavior, this would be a bug.
A valid usage example of this code: in a game, to simulate the possibility of some non-deterministic event such as "Neo from The Matrix catching a bullet in mid air", this method may be even more non-deterministic than simply using a pseudo-random number generator.
In any scenario where deterministic, predictable behavior is expected, this code is a bug.
Explanation:
if (DemoObject is IDemoInterface demo)
is evaluated and the capture variable is assigned from a single atomic read of DemoObject.
Thereafter, within the if block:
even if DemoObject is set to null by another thread, demo has already been assigned, and the DoSomething() call is executed on the already-captured instance.
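To see why, here is roughly what the compiler lowers the pattern match to (an illustrative sketch, not the exact emitted code):
public void DemoMethod()
{
    // Roughly equivalent to "if (DemoObject is IDemoInterface demo)":
    object snapshot = DemoObject;                      // one atomic read of the shared reference
    IDemoInterface demo = snapshot as IDemoInterface;  // type test on the snapshot
    if (demo != null)
    {
        demo.DoSomething();                            // uses the snapshot, not the property
    }
}
Later writes to DemoObject by other threads cannot affect the snapshot local, which is why no NullReferenceException is possible inside the if block.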
To answer your comment questions:
why is there a race?
The race condition is by design in this code. In the example code below:
16 threads are competing to set the value of DemoObject to null
while another 16 threads are competing to set the value of DemoObject to an instance of DemoClass.
At the same time, 16 threads are competing to execute DoSomething(), which they do whenever they win the race and read DemoObject while it is NOT null.
See: What is a race condition?
and why can I not predict whether DoSomething() will execute?
DoSomething() will execute each time
if (DemoObject is IDemoInterface demo)
evaluates to true. Whenever DemoObject is null or does not implement IDemoInterface, it will NOT execute.
You cannot predict when it will execute. You can only predict that it executes whenever a thread running DemoMethod() manages to read a non-null reference from DemoObject; in other words, when a thread running DemoMethod() wins the race condition:
A) after a thread running DemoMethod_Assign() wins the race condition
B) and before a thread running DemoMethod_Null() wins the race condition
Caveat: as per my understanding (someone else please clarify this point), DemoObject may be observed as null by one thread and as non-null by another at the same moment.
DemoObject may be read from a core-local cache or may be read from main memory. We cannot apply the volatile keyword here, because DemoObject is an auto-property rather than a field (volatile does work on reference-type fields). Therefore the state of DemoObject may simultaneously be null for one thread and non-null for another, meaning its observed value is non-deterministic. In Schrödinger's thought experiment the cat is both dead and alive at once; we have much the same situation here.
There are no locks or memory barriers in this code with respect to DemoObject. However, a thread context switch forces the equivalent of a memory barrier, so any thread resuming after a context switch will see the value of DemoObject as last flushed to main memory. A different thread, though, may have altered DemoObject without that write having been flushed to main memory yet. Which then raises the question of which is the "accurate" value: the one fetched from main memory, or the one not yet flushed to it?
Note: Someone else please clarify this Caveat as I may have missed something.
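If stronger cross-thread visibility is wanted, one option (my sketch, not part of the question) is to back the property with an explicit field and use Volatile.Read/Volatile.Write, which do accept reference-type fields:
// Sketch: hypothetical replacement for the DemoObject auto-property.
private static object _demoObject; // backing field; could also be declared volatile

private static object DemoObjectVolatile
{
    get { return Volatile.Read(ref _demoObject); }   // acquire semantics: no stale cached read
    set { Volatile.Write(ref _demoObject, value); }  // release semantics: write is published
}
This does not remove the race, it only guarantees that readers eventually observe writes in a timely, ordered fashion.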
Here is some code to validate everything above except the caveat. I ran this console app on a machine with 64 logical cores; a NullReferenceException is never thrown.
using System;
using System.Collections.Generic;
using System.Threading;

internal class Program
{
    private static ManualResetEvent BenchWaitHandle = new ManualResetEvent(false);

    private class DemoClass : IDemoInterface
    {
        public void DoSomething()
        {
            Interlocked.Increment(ref Program.DidSomethingCount);
        }
    }

    private interface IDemoInterface
    {
        void DoSomething();
    }

    private static object DemoObject { get; set; }

    public static volatile int DidSomethingCount = 0;

    private static void DemoMethod()
    {
        BenchWaitHandle.WaitOne();
        for (int i = 0; i < 100000000; i++)
        {
            try
            {
                if (DemoObject is IDemoInterface demo)
                {
                    demo.DoSomething();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }
    }

    private static bool m_IsRunning = false;
    private static object RunningLock = new object();

    private static bool IsRunning
    {
        get { lock (RunningLock) { return m_IsRunning; } }
        set { lock (RunningLock) { m_IsRunning = value; } }
    }

    private static void DemoMethod_Assign()
    {
        BenchWaitHandle.WaitOne();
        while (IsRunning)
        {
            DemoObject = new DemoClass();
        }
    }

    private static void DemoMethod_Null()
    {
        BenchWaitHandle.WaitOne();
        while (IsRunning)
        {
            DemoObject = null;
        }
    }

    static void Main(string[] args)
    {
        List<Thread> threadsListDoWork = new List<Thread>();
        List<Thread> threadsList = new List<Thread>();
        BenchWaitHandle.Reset();
        for (int i = 0; i < 16; i++)
        {
            threadsListDoWork.Add(new Thread(new ThreadStart(DemoMethod)));
            threadsList.Add(new Thread(new ThreadStart(DemoMethod_Assign)));
            threadsList.Add(new Thread(new ThreadStart(DemoMethod_Null)));
        }
        foreach (Thread t in threadsListDoWork)
        {
            t.Start();
        }
        foreach (Thread t in threadsList)
        {
            t.Start();
        }
        IsRunning = true;
        BenchWaitHandle.Set();
        foreach (Thread t in threadsListDoWork)
        {
            t.Join();
        }
        IsRunning = false;
        foreach (Thread t in threadsList)
        {
            t.Join();
        }
        Console.WriteLine("Did Something {0} times", DidSomethingCount);
        Console.ReadLine();
    }
}
//On the last run this printed
//Did Something 112780926 times
//Which means the DemoMethod() threads won the race slightly over 7% of the time:
//112,780,926 hits / (16 threads x 100,000,000 iterations) ≈ 7.05%

Related

Confusion about lock and thread-safe variable in .NET [duplicate]

I read recently about memory barriers and the reordering issue and now I have some confusion about it.
Consider the following scenario:
private object _object1 = null;
private object _object2 = null;
private bool _usingMethod1 = false;

private object MyObject
{
    get
    {
        if (_usingMethod1)
        {
            return _object1;
        }
        else
        {
            return _object2;
        }
    }
    set
    {
        if (_usingMethod1)
        {
            _object1 = value;
        }
        else
        {
            _object2 = value;
        }
    }
}

private void Update()
{
    _usingMethod1 = true;
    MyObject = FooMethod();
    //..
    _usingMethod1 = false;
}
In the Update method, is the _usingMethod1 = true statement always executed before getting or setting the property, or can we not guarantee that due to reordering?
Should we use volatile, like:
private volatile bool _usingMethod1 = false;
If we use lock, can we then guarantee that every statement within the lock will be executed in order, as in:
private void FooMethod()
{
    object locker = new object();
    lock (locker)
    {
        x = 1;
        y = a;
        i++;
    }
}
The subject of memory barriers is quite complex. It even trips up the experts from time to time. When we talk about a memory barrier we are really combining two different ideas.
Acquire fence: A memory barrier in which other reads & writes are not allowed to move before the fence.
Release fence: A memory barrier in which other reads & writes are not allowed to move after the fence.
A memory barrier that creates only one of the two is sometimes called a half-fence. A memory barrier that creates both is sometimes called a full-fence.
The volatile keyword creates half-fences. Reads of volatile fields have acquire semantics while writes have release semantics. That means no instruction can be moved before a read or after a write.
The lock keyword creates full-fences on both boundaries (entry and exit). That means no instruction can be moved either before or after each boundary.
However, all of this is moot if we are only concerned with one thread. Ordering, as it is perceived by that thread, is always preserved. In fact, without that fundamental guarantee no program would ever work right. The real issue is with how other threads perceive reads and writes. That is where you need to be concerned.
So to answer your questions:
From a single thread's perspective...yes. From another thread's perspective...no.
It depends. That might work, but I need a better understanding of what you are trying to achieve.
From another thread's perspective...no. The reads and writes are free to move around within the boundaries of the lock. They just cannot move outside those boundaries. That is why it is important for other threads to also create memory barriers.
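A minimal sketch of that last point (names are mine): both threads must use the same lock object for the fences to pair up.
private readonly object _sync = new object();
private bool _flag;

// Writer: the release fence at lock exit publishes the write.
public void Publish()
{
    lock (_sync) { _flag = true; }
}

// Reader: the acquire fence at lock entry makes the read fresh,
// but only because it takes the same lock as the writer.
public bool Observe()
{
    lock (_sync) { return _flag; }
}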
The volatile keyword doesn't accomplish anything here. Its guarantees are very weak and, by themselves, do not give you the ordering you need. Your code doesn't show another thread getting created, so it is hard to guess whether locking is required. It is, however, a hard requirement if two threads can execute Update() at the same time and use the same object.
Beware that your lock code as posted doesn't lock anything. Each thread would have its own instance of the "locker" object. You have to make it a private field of your class, created by the constructor or an initializer. Thus:
private object locker = new object();

private void Update()
{
    lock (locker)
    {
        _usingMethod1 = true;
        MyObject = FooMethod();
        //..
        _usingMethod1 = false;
    }
}
Note that there will also be a race on the MyObject assignment.

C# controlling threads (resume/suspend)

I'm trying to simulate a (very basic and simple) OS process-manager subsystem. I have three "processes" (workers) writing something to the console (this is an example):
public class Message
{
    public Message() { }

    public void Show()
    {
        while (true)
        {
            Console.WriteLine("Something");
            Thread.Sleep(100);
        }
    }
}
Each worker is supposed to run on a different thread. Here's how I do it now: I have a Process class whose constructor takes an Action delegate, starts a thread from it, and immediately suspends that thread.
public class Process
{
    Thread thrd;
    Action act;

    public Process(Action act)
    {
        this.act = act;
        thrd = new Thread(new ThreadStart(this.act));
        thrd.Start();
        thrd.Suspend();
    }

    public void Suspend()
    {
        thrd.Suspend();
    }

    public void Resume()
    {
        thrd.Resume();
    }
}
In that state it waits until my scheduler resumes it, gives it a time slice to run, and then suspends it again.
public void Scheduler()
{
    while (true)
    {
        //ProcessQueue is just a FIFO queue of processes
        //MainQueue is a FIFO queue of ProcessQueue's
        ProcessQueue currentQueue = mainQueue.Dequeue();
        int count = currentQueue.Count;
        if (currentQueue.Count > 0)
        {
            while (count > 0)
            {
                Process currentProcess = currentQueue.GetNext();
                currentProcess.Resume();
                //this is the time slice given to the process
                Thread.Sleep(1000);
                currentProcess.Suspend();
                Console.WriteLine();
                currentQueue.Add(currentProcess);
                count--;
            }
        }
        mainQueue.Enqueue(currentQueue);
    }
}
The problem is that it doesn't work consistently. It doesn't even work at all in this state; I have to add a Thread.Sleep() before the WriteLine in the worker's Show() method, like this:
public void Show()
{
    while (true)
    {
        Thread.Sleep(100); //Without this line the code doesn't work
        Console.WriteLine("Something");
        Thread.Sleep(100);
    }
}
I've been trying to use a ManualResetEvent instead of suspend/resume. It works, but since that event is shared, all threads relying on it wake up simultaneously, while I need only one specific thread to be active at a time.
If someone could help me figure out how to pause/resume a task/thread properly, that'd be great.
What I'm doing is trying to simulate simple preemptive multitasking.
Thanks.
Thread.Suspend is evil. It is about as evil as Thread.Abort. Almost no code is safe in the presence of being paused at arbitrary, unpredictable locations. It might hold a lock that causes other threads to pause as well. You quickly run into deadlocks or unpredictable stalls in other parts of the system.
Imagine you were accidentally pausing the static constructor of string. Now all code that wants to use a string is halted as well. Regex internally uses a locked cache. If you pause while this lock is taken all Regex related code might pause. These are just two egregious examples.
Probably, suspending some code deep inside the Console class is having unintended consequences.
I'm not sure what to recommend to you. This seems to be an academic exercise so thankfully this is not a production problem for you. User-mode waiting and cancellation must be cooperative in practice.
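As an illustration of what "cooperative" means here (my sketch, with hypothetical names): each worker blocks on its own gate at a point it chooses, so it can never be frozen while holding an internal lock.
using System;
using System.Threading;

public class PausableWorker
{
    private readonly ManualResetEventSlim _gate = new ManualResetEventSlim(false);

    public void Resume() { _gate.Set(); }   // scheduler: grant the time slice
    public void Pause()  { _gate.Reset(); } // scheduler: revoke it (takes effect at the next Wait)

    public void Show()
    {
        while (true)
        {
            _gate.Wait();                   // safe point: the worker holds nothing here
            Console.WriteLine("Something");
            Thread.Sleep(100);
        }
    }
}
Note that Pause() only takes effect when the worker loops back to Wait(); that delay is the price of cooperative suspension.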
I managed to solve this problem using a static class with an array of ManualResetEvents, where each process is identified by its unique ID. But I think it's a pretty dirty way to do it; I'm open to other ways of accomplishing this.
UPD: added locks to guarantee thread safety
public sealed class ControlEvent
{
    private static ManualResetEvent[] control = new ManualResetEvent[100];
    private static readonly object _locker = new object();

    private ControlEvent() { }

    public static object Locker
    {
        get { return _locker; }
    }

    public static void Set(int PID)
    {
        control[PID].Set();
    }

    public static void Reset(int PID)
    {
        control[PID].Reset();
    }

    public static ManualResetEvent Init(int PID)
    {
        control[PID] = new ManualResetEvent(false);
        return control[PID];
    }
}
In the worker class:
public class RandomNumber
{
    static Random R = new Random();
    ManualResetEvent evt;

    public ManualResetEvent Event
    {
        get { return evt; }
        set { evt = value; }
    }

    public void Show()
    {
        while (true)
        {
            evt.WaitOne();
            lock (ControlEvent.Locker)
            {
                Console.WriteLine("Random number: " + R.Next(1000));
            }
            Thread.Sleep(100);
        }
    }
}
At process creation:
RandomNumber R = new RandomNumber();
Process proc = new Process(new Action(R.Show));
R.Event = ControlEvent.Init(proc.PID);
And, finally, in the scheduler:
public void Scheduler()
{
    while (true)
    {
        ProcessQueue currentQueue = mainQueue.Dequeue();
        int count = currentQueue.Count;
        if (currentQueue.Count > 0)
        {
            while (count > 0)
            {
                Process currentProcess = currentQueue.GetNext();
                //this wakes the thread
                ControlEvent.Set(currentProcess.PID);
                Thread.Sleep(quant);
                //this makes it wait again
                ControlEvent.Reset(currentProcess.PID);
                currentQueue.Add(currentProcess);
                count--;
            }
        }
        mainQueue.Enqueue(currentQueue);
    }
}
The single best advice I can give with regard to Suspend() and Resume(): don't use them. You are doing it wrong™.
Whenever you feel a temptation to use Suspend() and Resume() pairs to control your threads, you should step back immediately and ask yourself what you are doing here. I understand that programmers tend to think of the execution of code paths as something that must be controlled, like some dumb zombie worker that needs permanent command and control. That's probably a function of the stuff we learn about computers in school and university: computers only do what you tell them.
Ladies and gentlemen, here's the bad news: if you are doing it that way, this is called "micro management", and some would even call it "control freak thinking".
Instead, I would strongly encourage you to think about it in a different way. Try to think of your threads as intelligent entities that do no harm, and the only thing they want is to be fed with enough work. They just need a little guidance, that's all. You may place a container full of work in front of them (a work task queue) and have them pull tasks from that container themselves, as soon as they finish their previous task. When the container is empty, all tasks are processed and there's nothing left to do, so they are allowed to fall asleep and WaitFor(alarm), which will be signaled whenever new tasks arrive. A sketch of this pull model follows below.
So instead of command-and-controlling a herd of dumb zombie slaves that can't do anything right without you cracking the whip behind them, you deliberately guide a team of intelligent co-workers and just let it happen. That's the way a scalable architecture is built. You don't have to be a control freak; just have a little faith in your own code.
Of course, as always, there are exceptions to that rule. But there aren't that many, and I would recommend starting from the working hypothesis that your code is probably the rule rather than the exception.
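Here is a short sketch of that pull model (my illustration, not code from the question), using BlockingCollection as the work container:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PullModelDemo
{
    static void Main()
    {
        var work = new BlockingCollection<Action>();

        // Workers pull tasks themselves; when the container is empty they
        // sleep inside GetConsumingEnumerable until new work arrives.
        Task[] workers = new Task[4];
        for (int w = 0; w < workers.Length; w++)
        {
            workers[w] = Task.Run(() =>
            {
                foreach (Action job in work.GetConsumingEnumerable())
                    job();
            });
        }

        for (int i = 0; i < 20; i++)
        {
            int n = i;
            work.Add(() => Console.WriteLine("job " + n));
        }

        work.CompleteAdding();  // the "no more work" signal; workers drain and exit
        Task.WaitAll(workers);
    }
}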

How do Java and C# threads deal with data synchronisation differently?

In the following C# code, t1 always (for the times I tried) finishes.
using System;
using System.Threading;

class MainClass
{
    static void DoExperiment()
    {
        int value = 0;
        Thread t1 = new Thread(() => {
            Console.WriteLine("T one is running now");
            while (value == 0) {
                //do nothing
            }
            Console.WriteLine("T one is done now");
        });
        Thread t2 = new Thread(() => {
            Console.WriteLine("T two is running now");
            Thread.Sleep(1000);
            value = 1;
            Console.WriteLine("T two changed value to 1");
            Console.WriteLine("T two is done now");
        });
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }

    public static void Main(string[] args)
    {
        for (int i = 0; i < 10; i++) {
            DoExperiment();
            Console.WriteLine("------------------------");
        }
    }
}
But in the Java code, which is very similar, t1 never (for the times I tried) exits:
public class MainClass {
    static class Experiment {
        private int value = 0;

        public void doExperiment() throws InterruptedException {
            Thread t1 = new Thread(new Runnable() {
                @Override
                public void run() {
                    System.out.println("T one is running now");
                    while (value == 0) {
                        //do nothing
                    }
                    System.out.println("T one is done now");
                }
            });
            Thread t2 = new Thread(new Runnable() {
                @Override
                public void run() {
                    System.out.println("T two is running now");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    value = 1;
                    System.out.println("T two changed value to 1");
                    System.out.println("T two is done now");
                }
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            new Experiment().doExperiment();
            System.out.println("------------------------");
        }
    }
}
Why is that?
I'm not sure how it happens in C#, but what happens in Java is a JVM optimization. The value of value does not change inside the while loop, so the JVM recognises this, skips the test, and changes your byte code to something like this:
while (true) {
    // do nothing
}
In order to fix this in Java you need to declare value as volatile:
private volatile int value = 0;
This will make the JVM not optimise away the while-loop test, and it will check the actual value of value at the start of each iteration.
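For comparison, a C# sketch of the same fix (my illustration): a lambda-captured local can't be volatile, so the shared flag is hoisted into a volatile field.
using System.Threading;

class Experiment
{
    // volatile forces a fresh read on every loop iteration, like the Java fix above
    private volatile int _value = 0;

    public void Run()
    {
        var t1 = new Thread(() => { while (_value == 0) { /* spin */ } });
        var t2 = new Thread(() => { Thread.Sleep(1000); _value = 1; });
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }
}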
There are a couple of things here.
Firstly, when you do:
t1.Start ();
t2.Start ();
You're asking the operating system to schedule the threads for running. It's possible that t2 will start first. In fact, it may even finish before t1 is ever scheduled to run.
However, there is a memory model issue here. Your threads may run on different cores. It's possible that value is sitting in the CPU cache of each core, or is stored in a register on each core, and when you read/write value you are reading/writing that cached copy. There's no requirement for the language runtime to flush writes to value back to main memory, and there's no requirement for it to read the value back from main memory each time.
If you want to access a shared variable then it's your responsibility to tell the runtime that the variable is shared, and that it must read/write from main memory and/or flush the CPU cache. This is typically done with lock, Interlocked or synchronized constructs in C# and Java. If you surround access to value with a lock (in C#) or synchronized (in Java) then you should see consistent results.
The reason things behave differently without locking is that each language defines a memory model, and these models are different. Without going into the specifics, C# on x86 writes back to main memory more than the Java memory model does. This is why you're seeing different outcomes.
Edit: For more information on the C# side of things take a look at Chapter 4 of Threading in C# by Joseph Albahari.

Thread-safe replace for code?

In the process of developing, I often face the following problem: if some method is already being executed by one thread, it must not be executed by another thread. The other thread must do nothing and simply exit from the method; because of this, I can't use "lock". Usually, I solve this problem like this:
private bool _isSomeMethodExecuted = false;

public void SomeMethod()
{
    if (!this._isSomeMethodExecuted) //check if method is already executing
    {
        this._isSomeMethodExecuted = true;
        //Main code of method
        this._isSomeMethodExecuted = false;
    }
}
But this code is not thread-safe: if one thread executes the condition check but is preempted before setting the flag to true, another thread can pass the check as well, and then both threads are inside the method's code.
Is there any thread-safe replacement for it?
The following is thread-safe and does not block if the method is already executing, even if it is already executing on the same thread, which provides protection from reentrancy in all scenarios.
private long _isSomeMethodExecuted = 0;

public void SomeMethod()
{
    if (Interlocked.Increment(ref this._isSomeMethodExecuted) == 1) //only the first concurrent caller enters
    {
        //Main code of method
    }
    Interlocked.Decrement(ref this._isSomeMethodExecuted);
}
For reference, see http://msdn.microsoft.com/en-us/library/zs86dyzy.aspx
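An alternative sketch (mine, not from the answer) uses Interlocked.CompareExchange, which makes the intended 0 -> 1 transition explicit; like the version above, it is deliberately not reentrant:
private int _isExecuting = 0; // 0 = idle, 1 = running (hypothetical field name)

public void SomeMethod()
{
    // Atomically claim the flag; any concurrent caller sees 1 and simply exits.
    if (Interlocked.CompareExchange(ref _isExecuting, 1, 0) != 0)
        return;

    try
    {
        //Main code of method
    }
    finally
    {
        Interlocked.Exchange(ref _isExecuting, 0); // release the flag, even on exceptions
    }
}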
Monitor does this job for you, but the lock is held per thread and is reentrant (and therefore open to recursive calls!). The lock statement uses a Monitor too (via the blocking Enter method), but you may work with the TryEnter method instead:
if (Monitor.TryEnter(myLockObject))
{
    try
    {
        DoSomething(); // main code
    }
    finally
    {
        Monitor.Exit(myLockObject);
    }
}
TryEnter does not block but returns a bool indicating whether the lock was successfully acquired or not.
If you want recursive calls not to enter the main code block again, you should use a semaphore instead. Semaphores use counters instead of locking objects, so you cannot reenter even from the same thread:
class Program
{
    private static Semaphore sem = new Semaphore(1, 1);

    static void Main(string[] args)
    {
        MyMethod();
        MyMethod();
    }

    private static void MyMethod()
    {
        if (sem.WaitOne(0))
        {
            try
            {
                Console.WriteLine("Entered.");
                MyMethod(); // recursive calls won't re-enter
            }
            finally
            {
                sem.Release();
            }
        }
        else
        {
            Console.WriteLine("Not entered.");
        }
    }
}
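A side note (my addition, not from the answer): SemaphoreSlim is the lighter-weight in-process equivalent, and its Wait(0) behaves the same way:
private static SemaphoreSlim sem = new SemaphoreSlim(1, 1);

private static void MyMethod()
{
    if (!sem.Wait(0))  // false means someone is already inside; exit immediately
    {
        Console.WriteLine("Not entered.");
        return;
    }
    try
    {
        Console.WriteLine("Entered.");
    }
    finally
    {
        sem.Release();
    }
}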

Explain the code: c# locking feature and threads

I've used this pattern in a few projects (this snippet of code is from CodeCampServer). I understand what it does, but I'm really interested in an explanation of this pattern. Specifically:
Why the double check of _dependenciesRegistered?
Why use lock (Lock) { }?
Thanks.
public class DependencyRegistrarModule : IHttpModule
{
    private static bool _dependenciesRegistered;
    private static readonly object Lock = new object();

    public void Init(HttpApplication context)
    {
        context.BeginRequest += context_BeginRequest;
    }

    public void Dispose() { }

    private static void context_BeginRequest(object sender, EventArgs e)
    {
        EnsureDependenciesRegistered();
    }

    private static void EnsureDependenciesRegistered()
    {
        if (!_dependenciesRegistered)
        {
            lock (Lock)
            {
                if (!_dependenciesRegistered)
                {
                    new DependencyRegistrar().ConfigureOnStartup();
                    _dependenciesRegistered = true;
                }
            }
        }
    }
}
This is the Double-checked locking pattern.
The lock statement ensures that the code inside the block will not run on two threads simultaneously.
Since a lock statement is somewhat expensive, the code checks whether it's already been initialized before entering the lock.
However, because a different thread might have initialized it just after the outer check, it needs to check again inside the lock.
Note that this is not the best way to do it.
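The better way presumably being alluded to is Lazy<T>, which packages the same once-only, thread-safe initialization; a sketch under that assumption:
// Sketch: Lazy<T> is thread-safe by default and runs the factory exactly once.
private static readonly Lazy<bool> Registration = new Lazy<bool>(() =>
{
    new DependencyRegistrar().ConfigureOnStartup();
    return true;
});

private static void EnsureDependenciesRegistered()
{
    bool _ = Registration.Value; // first caller runs the factory; the rest just read
}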
The double-check is because two threads could hit EnsureDependenciesRegistered at the same time, both find it isn't registered, and thus both attempt to get the lock.
lock(Lock) is essentially a form of mutex; only one thread can have the lock - the other must wait until the lock is released (at the end of the lock(...) {...} statement).
So in this scenario, a thread might (although unlikely) have been the second thread into the lock - so each must double-check in case it was the second, and the work has already been done.
It's a matter of performance.
The initial test lets it bail out quickly if the job is already done. Only then does it take the potentially expensive lock, and it has to check again because another thread could have registered it in the meantime.
The double checked locking pattern is roughly:
you have an operation that you want to conditionally perform once
if (needsToDoSomething) {
    DoSomething();
    needsToDoSomething = false;
}
however, if you're running on two threads, both threads might check the flag, and perform the action, before they both set the flag to false. Therefore, you add a lock.
lock (Lock) {
    if (needsToDoSomething) {
        DoSomething();
        needsToDoSomething = false;
    }
}
however, taking a lock every time you run this code might be slow, so you decide to try to take the lock only when you actually need to:
if (needsToDoSomething)
    lock (Lock) {
        if (needsToDoSomething) {
            DoSomething();
            needsToDoSomething = false;
        }
    }
You can't remove the inner check, because once again, you have the problem that any check performed outside of a lock can possibly turn out to be true twice on two different threads.
The lock prevents two threads from running ConfigureOnStartup(). Between the if (!_dependenciesRegistered) and the point that ConfigureOnStartup() sets _dependenciesRegistered = true, another thread could check if it's registered. In other words:
Thread 1: _dependenciesRegistered == false
Thread 2: _dependenciesRegistered == false
Thread 1: ConfigureOnStartup() / _dependenciesRegistered = true;
Thread 2: Doesn't "see" that it's already registered, so runs ConfigureOnStartup() again.
