Here is the code that I was trying on my workstation.
using System;
using System.Threading;

class Program
{
    public static volatile bool status = true;

    public static void Main()
    {
        Thread FirstStart = new Thread(threadrun);
        FirstStart.Start();
        Thread.Sleep(200);
        Thread thirdstart = new Thread(threadrun2);
        thirdstart.Start();
        Console.ReadLine();
    }

    static void threadrun()
    {
        while (status)
        {
            Console.WriteLine("Waiting..");
        }
    }

    static void threadrun2()
    {
        status = false;
        Console.WriteLine("the bool value is now made FALSE");
    }
}
As you can see, I start two additional threads from Main (three threads in total, counting the main thread). Then, using breakpoints, I tracked the threads. My initial assumption was that all three threads would run simultaneously, but the breakpoint flow showed the threads executing one after the other (and the output matched: top-to-bottom execution of the threads). Why is that happening?
Additionally, I tried running the same program without the volatile keyword in the declaration, and I found no change in the program's behavior. I suspect the volatile keyword has no practical real-world use. Am I going wrong somewhere?
Your method of thinking is flawed.
The very nature of threading related issues is that they're non-deterministic. This means that what you have observed is potentially no indicator of what may happen in the future.
This is the very nature of why multithreaded programming is "hard." It often defies ad hoc testing, or even most unit testing. The only way to do it effectively is to understand your entire software and hardware stack, and diagram every possible occurrence through use of state machines.
In summary, threaded programming is not about what you've seen happen, it's about what might possibly happen, no matter how improbable.
OK, I will try to make a very long story as short as possible:
Number 1: Trying to inspect the behavior of threads with the debugger is as useful as repeatedly running a multithreaded program and concluding that it works fine because out of 100 tests none failed: WRONG! Threads behave in a completely nondeterministic (some would say random) way and you need different methods to make sure such a program will run correctly.
Number 2: The use of volatile will become clear once you remove it, run your program in Debug mode, and then switch to Release mode. I think you will have a surprise... What happens in Release mode is that the compiler will optimize the code (this includes reordering instructions and caching of values). Now, if your two threads run on different processor cores, then the core executing the thread that is checking the value of status will cache its value instead of repeatedly re-reading it. The other thread will set it, but the first one will never see the change: an infinite loop! volatile prevents this kind of situation from occurring.
In a sense, volatile is a guard in case the code does not actually (and most likely will not) run as you think it will in a multithreaded scenario.
The fact that your simple code doesn't behave differently with volatile doesn't mean anything. Your code is too simple and has nothing to do with volatile. You need to write very computation-intensive code to create a clearly visible memory race condition.
Also, the volatile keyword may be useful on platforms other than x86/x64 that have other memory models (for example, Itanium).
Joe Duffy has written interesting information about volatile on his blog. I strongly recommend reading it.
Then using breakpoints I tracked the threads. My initial conception was all the three threads will be fired simultaneously, but my breakpoint flow showed that the thread-execution-flow followed one after other (and so was the output format, i.e. top-to-bottom execution of threads). Why is that happening?
The debugger is temporarily suspending the threads to make it easier to debug.
I doubt the volatile keyword is of no practical live use. Am I going wrong somewhere?
The Console.WriteLine calls are very likely masking the problem. They are most likely generating the necessary memory barrier for you implicitly. Here is a really simple snippet of code that demonstrates that there is, in fact, a problem when volatile is not used to declare the stop variable.
Compile the following code with the Release configuration and run it outside of the debugger.
using System;
using System.Threading;

class Program
{
    static bool stop = false;

    public static void Main(string[] args)
    {
        var t = new Thread(() =>
        {
            Console.WriteLine("thread begin");
            bool toggle = false;
            while (!stop)
            {
                toggle = !toggle;
            }
            Console.WriteLine("thread end");
        });
        t.Start();
        Thread.Sleep(1000);
        stop = true;
        Console.WriteLine("stop = true");
        Console.WriteLine("waiting...");
        t.Join();
    }
}
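For contrast, here is a minimal sketch of one possible fix: reading the flag through Volatile.Read (an alternative to the volatile modifier) forces a fresh load on every iteration, so the Release-build JIT cannot hoist the read out of the loop. This is an assumed variation, not the answerer's original code:

```csharp
using System;
using System.Threading;

class Program
{
    static bool stop = false; // deliberately not declared volatile

    public static void Main()
    {
        var t = new Thread(() =>
        {
            Console.WriteLine("thread begin");
            // Volatile.Read performs an acquire-fenced load each time,
            // so the JIT cannot cache the field in a register.
            while (!Volatile.Read(ref stop))
            {
            }
            Console.WriteLine("thread end");
        });
        t.Start();
        Thread.Sleep(1000);
        Volatile.Write(ref stop, true); // release-fenced store
        t.Join(); // terminates even in a Release build outside the debugger
    }
}
```

With this change the loop observes the store, so the program prints "thread end" and exits even in Release mode.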
Related
I understand that a thread can cache a value and ignore changes made on another thread, but I'm wondering about a variation of this. Is it possible for a thread to change a cached value, which then never leaves its cache and so is never visible to other threads?
For example, could this code print "Flag is true" because thread a never sees the change that thread b makes to flag? (I can't make it do so, but I can't prove that it, or some variation of it, wouldn't.)
var flag = true;

var a = new Thread(() =>
{
    Thread.Sleep(200);
    Console.WriteLine($"Flag is {flag}");
});

var b = new Thread(() =>
{
    flag = false;
    while (true)
    {
        // Do something to avoid a memory barrier
    }
});

a.Start();
b.Start();
a.Join();
I can imagine that on thread b flag could be cached in a CPU register where it is then set to false, and when b enters the while loop it never gets the chance to (or never cares to) write the value of flag back to memory, hence a always sees flag as true.
From the memory barrier generators listed in this answer, this seems, to me, to be possible in theory. Am I correct? I haven't been able to demonstrate it in practice. Can anyone come up with an example that does?
Is it possible for a thread to change a cached value, which then never leaves its cache and so is never visible to other threads?
If we're talking literally about the hardware caches, then we need to talk about specific processor families. And if you're working (as seems likely) on x86 (and x64), you need to be aware that those processors actually have a far stronger memory model than is required for .NET. In x86 systems, the caches maintain coherency, and so no write can be ignored by other processors.
If we're talking about the optimization wherein a particular memory location has been read into a processor register and then a subsequent read from memory just reuses the register, then there isn't a similar analogue on the write side. You'll note that there's always at least one read from the actual memory location before we assume that nothing else is changing that memory location and so we can reuse the register.
On the write side, we've been told to push something to a particular memory location. We have to at least push to that location once, and it would likely be a deoptimization to always store the previously known value at that location (especially if our thread never reads from it) in a separate register just to be able to perform a comparison and elide the write operation.
In order from easiest to do correctly to easiest to screw up
Use locks when reading/writing
Use the functions on the Interlocked class
Use memory barriers
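As a hedged sketch of the middle option, the Interlocked class can both publish a stop flag and update a shared counter atomically; the names here are illustrative, not from the original answer:

```csharp
using System;
using System.Threading;

class InterlockedDemo
{
    static int counter = 0;
    static int stopFlag = 0; // 0 = running, 1 = stop requested

    public static void Main()
    {
        var workers = new Thread[4];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(() =>
            {
                // CompareExchange(ref x, 0, 0) is an atomic, full-fence read,
                // so a stale cached value can never be observed here.
                while (Interlocked.CompareExchange(ref stopFlag, 0, 0) == 0)
                {
                    Interlocked.Increment(ref counter); // atomic read-modify-write
                }
            });
            workers[i].Start();
        }
        Thread.Sleep(200);
        Interlocked.Exchange(ref stopFlag, 1); // full-fence store: all workers see it
        foreach (var w in workers) w.Join();
        Console.WriteLine(counter > 0); // prints True: increments were all observed
    }
}
```

Unlike the plain-field examples above, this version terminates reliably in Release builds because every access to the shared fields goes through an Interlocked operation.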
I'm not entirely sure that this answers your question, but here goes.
If you run the release (not debug) version of the following code, it will never terminate because waitForFlag() never sees the changed version of flag.
However, if you comment-out either of the indicated lines, the program will terminate.
It looks like making any call to an external library in the while (flag) loop causes the optimiser to not cache the value of flag.
Also (of course) making flag volatile will prevent such an optimisation.
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        void run()
        {
            Task.Factory.StartNew(resetFlagAfter1s);
            var waiter = Task.Factory.StartNew(waitForFlag);
            waiter.Wait();
            Console.WriteLine("Done");
        }

        void resetFlagAfter1s()
        {
            Thread.Sleep(1000);
            flag = false;
        }

        void waitForFlag()
        {
            int x = 0;
            while (flag)
            {
                ++x;
                // Uncommenting this line makes this thread see the changed value of flag.
                // Console.WriteLine("Spinning");
            }
        }

        // Uncommenting "volatile" makes waitForFlag() see the changed value of flag.
        /*volatile*/ bool flag = true;

        static void Main()
        {
            new Program().run();
        }
    }
}
Research Thread.MemoryBarrier and you will be golden.
Considering the following example:
private int sharedState = 0;

private void FirstThread()
{
    Volatile.Write(ref sharedState, 1);
}

private void SecondThread()
{
    int sharedStateSnapshot = Volatile.Read(ref sharedState);
    Console.WriteLine(sharedStateSnapshot);
}
Until recently, I was under the impression that, as long as FirstThread() really did execute before SecondThread(), this program could not output anything but 1.
However, my understanding now is that:
Volatile.Write() emits a release fence. This means no preceding load or store (in program order) may happen after the assignment of 1 to sharedState.
Volatile.Read() emits an acquire fence. This means no subsequent load or store (in program order) may happen before the copying of sharedState to sharedStateSnapshot.
Or, to put it another way:
When sharedState is actually released to all processor cores, everything preceding that write will also be released, and,
When the value copied into sharedStateSnapshot is acquired, sharedState must already have been acquired.
If my understanding is therefore correct, then there is nothing to prevent the acquisition of sharedState being 'stale', if the write in FirstThread() has not already been released.
If this is true, how can we actually ensure (assuming the weakest processor memory model, such as ARM or Alpha), that the program will always print 1? (Or have I made an error in my mental model somewhere?)
Your understanding is correct, and it is true that you cannot ensure that the program will always print 1 using these techniques. To ensure your program will print 1, assuming thread 2 runs after thread one, you need two fences on each thread.
The easiest way to achieve that is using the lock keyword:
private int sharedState = 0;
private readonly object locker = new object();

private void FirstThread()
{
    lock (locker)
    {
        sharedState = 1;
    }
}

private void SecondThread()
{
    int sharedStateSnapshot;
    lock (locker)
    {
        sharedStateSnapshot = sharedState;
    }
    Console.WriteLine(sharedStateSnapshot);
}
I'd like to quote Eric Lippert:
Frankly, I discourage you from ever making a volatile field. Volatile fields are a sign that you are doing something downright crazy: you're attempting to read and write the same value on two different threads without putting a lock in place.
The same applies to calling Volatile.Read and Volatile.Write. In fact, they are even worse than volatile fields, since they require you to do manually what the volatile modifier does automatically.
You're right, there's no guarantee that release stores will be immediately visible to all processors. Volatile.Read and Volatile.Write give you acquire/release semantics, but no immediacy guarantees.
The volatile modifier seems to do this, though. The compiler will emit an OpCodes.Volatile IL instruction, and the jitter will tell the processor not to keep the variable in any of its registers (see Hans Passant's answer).
But why do you need it to be immediate anyway? What if your SecondThread happens to run a couple of milliseconds sooner, before the values are actually written? Seeing as the scheduling is non-deterministic, the correctness of your program shouldn't depend on this "immediacy" anyway.
Until recently, I was under the impression that, as long as FirstThread() really did execute before SecondThread(), this program could not output anything but 1.
As you go on to explain yourself, this impression is wrong. Volatile.Read simply issues a read operation on its target followed by a memory barrier; the memory barrier prevents operation reordering on the processor executing the current thread but this does not help here because
There are no operations to reorder (just the single read or write in each thread).
The race condition across your threads means that even if the no-reorder guarantee applied across processors, it would simply mean that the order of operations which you cannot predict anyway would be preserved.
If my understanding is therefore correct, then there is nothing to prevent the acquisition of sharedState being 'stale', if the write in FirstThread() has not already been released.
That is correct. In essence you are using a tool designed to help with weak memory models against a possible problem caused by a race condition. The tool won't help you because that's not what it does.
If this is true, how can we actually ensure (assuming the weakest processor memory model, such as ARM or Alpha), that the program will always print 1? (Or have I made an error in my mental model somewhere?)
To stress once again: the memory model is not the problem here. To ensure that your program will always print 1 you need to do two things:
Provide explicit thread synchronization that guarantees the write will happen before the read (in the simplest case, SecondThread can use a spin lock on a flag which FirstThread uses to signal it's done).
Ensure that SecondThread will not read a stale value. You can do this trivially by marking sharedState as volatile -- while this keyword has deservedly gotten much flak, it was designed explicitly for such use cases.
So in the simplest case you could for example have:
private volatile int sharedState = 0;
private volatile bool spinLock = false;

private void FirstThread()
{
    sharedState = 1;
    // ensure lock is released after the shared state write!
    Volatile.Write(ref spinLock, true);
}

private void SecondThread()
{
    SpinWait.SpinUntil(() => spinLock);
    Console.WriteLine(sharedState);
}
Assuming no other writes to the two fields, this program is guaranteed to output nothing other than 1.
I was having a discussion with a teammate about locking in .NET. He's a really bright guy with an extensive background in both lower-level and higher-level programming, but his experience with lower-level programming far exceeds mine. Anyway, he argued that .NET locking should be avoided on critical systems expected to be under heavy load, if at all possible, in order to avoid the admittedly small possibility of a "zombie thread" crashing a system.
I routinely use locking and I didn't know what a "zombie thread" was, so I asked. The impression I got from his explanation is that a zombie thread is a thread that has terminated but somehow still holds onto some resources. An example he gave of how a zombie thread could break a system: a thread begins some procedure after locking on some object, and is then at some point terminated before the lock can be released. This situation has the potential to crash the system because, eventually, attempts to execute that method will result in all the threads waiting for access to an object that will never be returned, since the thread that was using the locked object is dead.
I think I got the gist of this, but if I'm off base, please let me know. The concept made sense to me, but I wasn't completely convinced that this is a real scenario that could happen in .NET. I've never previously heard of "zombies", but I do recognize that programmers who have worked in depth at lower levels tend to have a deeper understanding of computing fundamentals (like threading). I definitely do see the value in locking, and I have seen many world-class programmers leverage locking. I also have limited ability to evaluate this for myself because I know that the lock(obj) statement is really just syntactic sugar for:
bool lockWasTaken = false;
var temp = obj;
try
{
    Monitor.Enter(temp, ref lockWasTaken);
    { body }
}
finally
{
    if (lockWasTaken)
        Monitor.Exit(temp);
}
and because Monitor.Enter and Monitor.Exit are marked extern. It seems conceivable that .NET does some kind of processing that protects threads from exposure to system components that could have this kind of impact, but that is purely speculative and probably just based on the fact that I've never heard of "zombie threads" before. So, I'm hoping I can get some feedback on this here:
Is there a clearer definition of a "zombie thread" than what I've explained here?
Can zombie threads occur on .NET? (Why/Why not?)
If applicable, How could I force the creation of a zombie thread in .NET?
If applicable, How can I leverage locking without risking a zombie thread scenario in .NET?
Update
I asked this question a little over two years ago. Today this happened:
Is there a clearer definition of a "zombie thread" than what I've explained here?
Seems like a pretty good explanation to me - a thread that has terminated (and can therefore no longer release any resources), but whose resources (e.g. handles) are still around and (potentially) causing problems.
Can zombie threads occur on .NET? (Why/Why not?)
If applicable, How could I force the creation of a zombie thread in .NET?
They sure do, look, I made one!
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;

class Program
{
    [DllImport("kernel32.dll")]
    private static extern void ExitThread(uint dwExitCode);

    static void Main(string[] args)
    {
        new Thread(Target).Start();
        Console.ReadLine();
    }

    private static void Target()
    {
        using (var file = File.Open("test.txt", FileMode.OpenOrCreate))
        {
            ExitThread(0);
        }
    }
}
This program starts a thread Target, which opens a file and then immediately kills itself using ExitThread. The resulting zombie thread will never release the handle to the "test.txt" file, and so the file will remain open until the program terminates (you can check with Process Explorer or similar). The handle to "test.txt" won't be released until GC.Collect is called (it turns out it is even more difficult than I thought to create a zombie thread that leaks handles).
If applicable, How can I leverage locking without risking a zombie thread scenario in .NET?
Don't do what I just did!
As long as your code cleans up after itself correctly (use SafeHandles or equivalent classes if working with unmanaged resources), and as long as you don't go out of your way to kill threads in weird and wonderful ways (the safest way is just to never kill threads; let them terminate themselves normally or, if necessary, through exceptions), the only way you are going to get something resembling a zombie thread is if something has gone very wrong (e.g. something goes wrong in the CLR).
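The "let threads terminate themselves" advice can be sketched with cooperative cancellation instead of Abort. This is an assumed example (CancellationTokenSource is the standard mechanism, but the structure here is illustrative), showing that cleanup always runs when the thread exits on its own:

```csharp
using System;
using System.Threading;

class CooperativeStop
{
    public static void Main()
    {
        var cts = new CancellationTokenSource();
        var worker = new Thread(() =>
        {
            // The thread checks the token and exits on its own,
            // so finally blocks and using statements always run.
            try
            {
                while (!cts.Token.IsCancellationRequested)
                {
                    Thread.Sleep(10); // simulated work
                }
            }
            finally
            {
                Console.WriteLine("cleanup ran");
            }
        });
        worker.Start();
        Thread.Sleep(100);
        cts.Cancel();   // request the stop, don't kill the thread
        worker.Join();
        Console.WriteLine("worker exited normally");
    }
}
```

Because the worker leaves its loop voluntarily, no resources are stranded, which is exactly the opposite of the ExitThread example above.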
In fact, it's actually surprisingly difficult to create a zombie thread (I had to P/Invoke into a function that essentially tells you in the documentation not to call it outside of C). For example, the following (awful) code actually doesn't create a zombie thread.
static void Main(string[] args)
{
    var thread = new Thread(Target);
    thread.Start();
    // Ugh, never call Abort...
    thread.Abort();
    Console.ReadLine();
}

private static void Target()
{
    // Ouch, open file which isn't closed...
    var file = File.Open("test.txt", FileMode.OpenOrCreate);
    while (true)
    {
        Thread.Sleep(1);
    }
    GC.KeepAlive(file);
}
Despite making some pretty awful mistakes, the handle to "test.txt" is still closed as soon as Abort is called (as part of the finalizer for file, which under the covers uses SafeFileHandle to wrap its file handle).
The locking example in C.Evenhuis's answer is probably the easiest way to fail to release a resource (a lock, in this case) when a thread is terminated in a non-weird way, but that's easily fixed by either using a lock statement instead, or by putting the release in a finally block.
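The finally-based fix mentioned above can be sketched like this; even if the body throws, the lock is released and no other thread is left waiting forever (a minimal sketch, not C.Evenhuis's original code):

```csharp
using System;
using System.Threading;

class FinallyRelease
{
    static readonly object gate = new object();

    static void DoWork()
    {
        Monitor.Enter(gate);
        try
        {
            // ... body that might throw ...
            throw new InvalidOperationException("boom");
        }
        finally
        {
            Monitor.Exit(gate); // always runs, even when the body throws
        }
    }

    public static void Main()
    {
        try { DoWork(); } catch (InvalidOperationException) { }
        // The lock was released despite the exception, so this does not block:
        lock (gate) { Console.WriteLine("lock reacquired"); }
    }
}
```

This is exactly what the lock statement compiles down to, which is why using lock (rather than a bare Monitor.Enter/Exit pair) avoids the stranded-lock scenario in the first place.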
See also Subtleties of C# IL codegen for a very subtle case where an exception can prevent a lock from being released even when using the lock keyword (but only in .NET 3.5 and earlier), and Locks and exceptions do not mix.
I've cleaned up my answer a bit, but left the original one below for reference
It’s the first time I've heard of the term zombies so I'll assume its definition is:
A thread that has terminated without releasing all of its resources
So given that definition, then yes, you can do that in .NET, as in other languages (C/C++, Java).
However, I do not think this is a good reason not to write threaded, mission-critical code in .NET. There may be other reasons to decide against .NET, but writing off .NET just because you can somehow have zombie threads doesn't make sense to me. Zombie threads are possible in C/C++ (I'd even argue that it's a lot easier to mess up in C), and a lot of critical, threaded apps are in C/C++ (high-volume trading, databases, etc.).
Conclusion
If you are in the process of deciding on a language to use, then I suggest you take the big picture into consideration: performance, team skills, schedule, integration with existing apps etc. Sure, zombie threads are something that you should think about, but since it’s so difficult to actually make this mistake in .NET compared to other languages like C, I think this concern will be overshadowed by other things like the ones mentioned above. Good luck!
Original Answer
Zombies† can exist if you don't write proper threading code. The same is true for other languages like C/C++ and Java. But this is not a reason not to write threaded code in .NET.
And just like with any other language, know the price before using something. It also helps to know what is happening under the hood so you can foresee any potential problems.
Reliable code for mission-critical systems is not easy to write, whatever language you're in. But I'm positive it's not impossible to do correctly in .NET. Also, AFAIK, .NET threading is not that different from threading in C/C++; it uses (or is built from) the same system calls, except for some .NET-specific constructs (like the lightweight versions of RWL and event classes).
†First time I've heard of the term zombies, but based on your description, your colleague probably meant a thread that terminated without releasing all of its resources. This could potentially cause a deadlock, memory leak, or some other bad side effect. This is obviously not desirable, but singling out .NET because of this possibility is probably not a good idea, since it's possible in other languages too. I'd even argue that it's easier to mess up in C/C++ than in .NET (especially in C, where you don't have RAII), but a lot of critical apps are written in C/C++, right? So it really depends on your individual circumstances. If you want to extract every ounce of speed from your application and want to get as close to bare metal as possible, then .NET might not be the best solution. If you are on a tight budget and do a lot of interfacing with web services/existing .NET libraries/etc., then .NET may be a good choice.
Right now, most of my answer has been corrected by the comments below. I won't delete the answer, because the information in the comments may be valuable to readers.
Immortal Blue pointed out that in .NET 2.0 and up finally blocks are immune to thread aborts. And as commented by Andreas Niedermair, this may not be an actual zombie thread, but the following example shows how aborting a thread can cause problems:
class Program
{
    static readonly object _lock = new object();

    static void Main(string[] args)
    {
        Thread thread = new Thread(new ThreadStart(Zombie));
        thread.Start();
        Thread.Sleep(500);
        thread.Abort();
        Monitor.Enter(_lock);
        Console.WriteLine("Main entered");
        Console.ReadKey();
    }

    static void Zombie()
    {
        Monitor.Enter(_lock);
        Console.WriteLine("Zombie entered");
        Thread.Sleep(1000);
        Monitor.Exit(_lock);
        Console.WriteLine("Zombie exited");
    }
}
However when using a lock() { } block, the finally would still be executed when a ThreadAbortException is fired that way.
The following information, as it turns out, is only valid for .NET 1 and .NET 1.1:
If another exception occurs inside the lock() { } block, and the ThreadAbortException arrives exactly when the finally block is about to be run, the lock is not released. As you mentioned, the lock() { } block is compiled as:
finally
{
    if (lockWasTaken)
        Monitor.Exit(temp);
}
If another thread calls Thread.Abort() inside the generated finally block, the lock may not be released.
This isn't about zombie threads, but the book Effective C# has a section on implementing IDisposable (item 17) which talks about zombie objects, and I thought you might find it interesting.
I recommend reading the book itself, but the gist of it is that if you have a class that either implements IDisposable or contains a destructor, the only thing you should do in either is release resources. If you do other things there, then there is a chance that the object will not be garbage collected, but will also not be accessible in any way.
It gives an example similar to below:
internal class Zombie
{
    private static readonly List<Zombie> _undead = new List<Zombie>();

    ~Zombie()
    {
        _undead.Add(this);
    }
}
When the destructor on this object is called, a reference to itself is placed on the global list, meaning it stays alive and in memory for the life of the program, but isn't accessible. This may mean that resources (particularly unmanaged resources) may not be fully released, which can cause all sorts of potential issues.
A more complete example is below. By the time the foreach loop is reached, you have 150 objects in the Undead list, each containing an image, but the image has been GC'd and you get an exception if you try to use it. In this example, I get an ArgumentException ("Parameter is not valid") when I try to do anything with the image, whether I try to save it or even read dimensions such as height and width:
using System;
using System.Collections.Generic;
using System.Drawing;

class Program
{
    static void Main(string[] args)
    {
        for (var i = 0; i < 150; i++)
        {
            CreateImage();
        }
        GC.Collect();
        // Something to do while the GC runs
        FindPrimeNumber(1000000);
        foreach (var zombie in Zombie.Undead)
        {
            // object is still accessible, image isn't
            zombie.Image.Save(@"C:\temp\x.png");
        }
        Console.ReadLine();
    }

    // Borrowed from here:
    // http://stackoverflow.com/a/13001749/969613
    public static long FindPrimeNumber(int n)
    {
        int count = 0;
        long a = 2;
        while (count < n)
        {
            long b = 2;
            int prime = 1; // to check if found a prime
            while (b * b <= a)
            {
                if (a % b == 0)
                {
                    prime = 0;
                    break;
                }
                b++;
            }
            if (prime > 0)
                count++;
            a++;
        }
        return (--a);
    }

    private static void CreateImage()
    {
        var zombie = new Zombie(new Bitmap(@"C:\temp\a.png"));
        zombie.Image.Save(@"C:\temp\b.png");
    }
}

internal class Zombie
{
    public static readonly List<Zombie> Undead = new List<Zombie>();

    public Zombie(Image image)
    {
        Image = image;
    }

    public Image Image { get; private set; }

    ~Zombie()
    {
        Undead.Add(this);
    }
}
Again, I am aware you were asking about zombie threads in particular, but the question title is about zombies in .NET, and I was reminded of this and thought others might find it interesting!
On critical systems under heavy load, writing lock-free code is better primarily because of the performance improvements. Look at things like LMAX and how it leverages "mechanical sympathy" for great discussions of this. Worry about zombie threads, though? I think that's an edge case that's just a bug to be ironed out, and not a good enough reason not to use lock.
It sounds more like your friend is just being fancy and flaunting his knowledge of obscure exotic terminology to me! In all the time I was running the performance labs at Microsoft UK, I never came across an instance of this issue in .NET.
1. Is there a clearer definition of a "zombie thread" than what I've explained here?
I do agree that "zombie threads" exist; it's a term that refers to threads that are left holding resources they never release and yet never completely die, hence the name "zombie". So your explanation of this is pretty much on the money!
2. Can zombie threads occur on .NET? (Why/Why not?)
Yes, they can occur, and Windows itself uses the term: MSDN uses the word "zombie" for dead processes/threads.
Whether it happens frequently is another story, and it depends on your coding techniques and practices. For someone like you who uses thread locking and has done it for a while, I wouldn't even worry about that scenario happening to you.
And yes, as @KevinPanko correctly mentioned in the comments, "zombie threads" do come from Unix, which is why they appear in Xcode/Objective-C, referred to as "NSZombie", and are used for debugging. It behaves pretty much the same way; the only difference is that an object that should have died becomes a "zombie object" for debugging, instead of the "zombie thread" that might be a potential problem in your code.
I can make zombie threads easily enough.
var zombies = new List<Thread>();
while (true)
{
    var th = new Thread(() => { });
    th.Start();
    zombies.Add(th);
}
This leaks the thread handles (for Join()). It's just another memory leak as far as we are concerned in the managed world.
Now then, killing a thread in such a way that it actually holds locks is a pain in the rear, but possible. The other guy's ExitThread() does the job. As he found, the file handle got cleaned up by the GC, but a lock around an object wouldn't be. But why would you do that?
I get two different results from swapping two lines of code (the done = true line and the Console.WriteLine one).
If I put done = true first, the result will be:
True
Else, if I put Console.WriteLine() first, the result will be:
False
False
Why? (Look carefully: that bool variable is static!)
using System;
using System.Threading;

class Program
{
    static bool done;

    static void Main(string[] args)
    {
        new Thread(test).Start();
        test();
    }

    static void test()
    {
        if (!done)
        {
            done = true;
            Console.WriteLine(done);
        }
    }
}
My bet is that the Console.WriteLine will be enough work to keep the thread busy while the second call to test() has a chance to execute.
So basically, the call to WriteLine delays the setting of done long enough for the second call to test to check done and find it still set to false.
If you leave it as shown, with done = true; before the write to the console, then done will be set almost instantly, and thus the second call to test will find done set to true and will therefore not perform the Console.WriteLine.
Hope that all makes sense.
I just found this, which contains code very much like that in your question. If you didn't already get your question from that page, I would suggest having a read, as it explains the cause of this effect in much more detail.
With the following key extract:
On a single-processor computer, a thread scheduler performs time-slicing, rapidly switching execution between each of the active threads. Under Windows, a time-slice is typically in the tens-of-milliseconds region, much larger than the CPU overhead in actually switching context between one thread and another (which is typically in the few-microseconds region).
So essentially, the call to Console.WriteLine takes long enough for the processor to decide that it is time for the main thread to have another go before your extra thread is permitted to continue (and ultimately set the done flag).
Your code isn't thread safe, and the results will be unpredictable.
You need to lock access when reading / writing to the static boolean, like so:
static bool done;
static readonly object _mylock = new object();

static void Main()
{
    //Application.EnableVisualStyles();
    //Application.SetCompatibleTextRenderingDefault(false);
    //Application.Run(new Form1());
    new Thread(test).Start();
    test();
    Console.ReadKey();
}

static void test()
{
    lock (_mylock)
    {
        if (!done)
        {
            Console.WriteLine(done);
            done = true;
        }
    }
}
Edit: readonly, thanks @d4wn.
It looks like the scheduler just cut the CPU time from one thread after its call to Console.WriteLine and then gave it to the other thread, all before done was set to true.
Are you certain that it always prints False\nFalse when you call Console.WriteLine before assigning done = true? To my understanding, this should be quite random.
Every access to a shared variable by one of the sharing threads must be explicitly protected by one of the synchronization techniques. The environment (the CLR, etc.) doesn't do it for us, because given the full complexity of multithreading it would be impossible. So this definitely responsible and not-easy task must be done by the developer writing the multithreaded code.
I guess you can find a great deal of the necessary information here:
Thread Synchronization (C# Programming Guide)
In "C# 4 in a Nutshell", the author shows that this class can sometimes write 0 without MemoryBarrier, though I can't reproduce it on my Core2Duo:
public class Foo
{
int _answer;
bool _complete;
public void A()
{
_answer = 123;
//Thread.MemoryBarrier(); // Barrier 1
_complete = true;
//Thread.MemoryBarrier(); // Barrier 2
}
public void B()
{
//Thread.MemoryBarrier(); // Barrier 3
if (_complete)
{
//Thread.MemoryBarrier(); // Barrier 4
Console.WriteLine(_answer);
}
}
}
private static void ThreadInverteOrdemComandos()
{
Foo obj = new Foo();
Task.Factory.StartNew(obj.A);
Task.Factory.StartNew(obj.B);
Thread.Sleep(10);
}
This need seems crazy to me. How can I recognize all of the cases in which this can occur? I thought that if the processor reorders operations, it has to guarantee that the behavior doesn't change.
Do you bother to use barriers?
You are going to have a very hard time reproducing this bug. In fact, I would go as far as saying you will never be able to reproduce it using the .NET Framework. The reason is because Microsoft's implementation uses a strong memory model for writes. That means writes are treated as if they were volatile. A volatile write has lock-release semantics which means that all prior writes must be committed before the current write.
However, the ECMA specification has a weaker memory model. So it is theoretically possible that Mono or even a future version of the .NET Framework might start exhibiting the buggy behavior.
So what I am saying is that it is very unlikely that removing barriers #1 and #2 will have any impact on the behavior of the program. That, of course, is not a guarantee, but an observation based on the current implementation of the CLR only.
Removing barriers #3 and #4 will definitely have an impact. This is actually pretty easy to reproduce. Well, not this example per se, but the following code is one of the better-known demonstrations. It has to be compiled using the Release build and run outside of the debugger. The bug is that the program does not end. You can fix the bug by placing a call to Thread.MemoryBarrier inside the while loop or by marking stop as volatile.
using System;
using System.Threading;

class Program
{
static bool stop = false;
public static void Main(string[] args)
{
var t = new Thread(() =>
{
Console.WriteLine("thread begin");
bool toggle = false;
while (!stop)
{
toggle = !toggle;
}
Console.WriteLine("thread end");
});
t.Start();
Thread.Sleep(1000);
stop = true;
Console.WriteLine("stop = true");
Console.WriteLine("waiting...");
t.Join();
}
}
The reason why some threading bugs are hard to reproduce is because the same tactics you use to simulate thread interleaving can actually fix the bug. Thread.Sleep is the most notable example because it generates memory barriers. You can verify that by placing a call inside the while loop and observing that the bug goes away.
You can see my answer here for another analysis of the example from the book you cited.
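For completeness, the volatile fix mentioned above is a one-word change. A sketch of the same demo with the flag marked volatile:

```csharp
using System;
using System.Threading;

class Program
{
    // volatile forces every read of stop to go back to memory (and
    // stops the JIT from hoisting the read out of the loop), so the
    // worker thread observes the write made by Main.
    static volatile bool stop = false;

    public static void Main(string[] args)
    {
        var t = new Thread(() =>
        {
            Console.WriteLine("thread begin");
            bool toggle = false;
            while (!stop)
            {
                toggle = !toggle;
            }
            Console.WriteLine("thread end");
        });
        t.Start();
        Thread.Sleep(1000);
        stop = true;
        Console.WriteLine("stop = true");
        t.Join(); // now terminates even in a Release build
    }
}
```

With volatile in place the program ends after about a second in every build configuration.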
Odds are very good that the first task is completed by the time the second task even starts running. You can only observe this behavior if both threads run that code simultaneously and there are no intervening cache-synchronizing operations. There is one in your code: the StartNew() method will take a lock somewhere inside the thread-pool manager.
Getting two threads to run this code simultaneously is very hard. This code completes in a couple of nanoseconds, so you would have to try billions of times and introduce variable delays to have any odds. There is not much point to this, of course; the real problem is when it happens randomly, when you don't expect it.
Stay away from this, use the lock statement to write sane multi-threaded code.
If you use volatile and lock, the memory barrier is built in. But, yes, you do need it otherwise. Having said that, I suspect that you need half as many as your example shows.
It's very difficult to reproduce multithreaded bugs. Usually you have to run the test code many times (thousands) and have some automated check that flags when the bug occurs. You might try adding a short Thread.Sleep(10) between some of the lines, but again, that does not always guarantee that you will get the same issues as without it.
Memory barriers were introduced for people who need to do really hardcore low-level performance optimization of their multithreaded code. In most cases you will be better off using other synchronization primitives, i.e. volatile or lock.
I'll just quote one of the great articles on multi-threading:
Consider the following example:
class Foo
{
int _answer;
bool _complete;
void A()
{
_answer = 123;
_complete = true;
}
void B()
{
if (_complete) Console.WriteLine (_answer);
}
}
If methods A and B ran concurrently on different threads, might it be
possible for B to write “0”? The answer is yes — for the following
reasons:
The compiler, CLR, or CPU may reorder your program's instructions to
improve efficiency. The compiler, CLR, or CPU may introduce caching
optimizations such that assignments to variables won't be visible to
other threads right away. C# and the runtime are very careful to
ensure that such optimizations don’t break ordinary single-threaded
code — or multithreaded code that makes proper use of locks. Outside
of these scenarios, you must explicitly defeat these optimizations by
creating memory barriers (also called memory fences) to limit the
effects of instruction reordering and read/write caching.
Full fences
The simplest kind of memory barrier is a full memory
barrier (full fence) which prevents any kind of instruction reordering
or caching around that fence. Calling Thread.MemoryBarrier generates a
full fence; we can fix our example by applying four full fences as
follows:
class Foo
{
int _answer;
bool _complete;
void A()
{
_answer = 123;
Thread.MemoryBarrier(); // Barrier 1
_complete = true;
Thread.MemoryBarrier(); // Barrier 2
}
void B()
{
Thread.MemoryBarrier(); // Barrier 3
if (_complete)
{
Thread.MemoryBarrier(); // Barrier 4
Console.WriteLine (_answer);
}
}
}
All the theory behind Thread.MemoryBarrier and why we need to use it in non-blocking scenarios to make the code safe and robust is described nicely here: http://www.albahari.com/threading/part4.aspx
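Since hand-placed fences are easy to get wrong, the article's broader advice applies: ordinary locking gives you the needed fences for free. A sketch (my rewrite, not from the article) of the same Foo using lock:

```csharp
using System;

class Foo
{
    readonly object _gate = new object();
    int _answer;
    bool _complete;

    public void A()
    {
        lock (_gate) // releasing the lock publishes both writes
        {
            _answer = 123;
            _complete = true;
        }
    }

    public void B()
    {
        lock (_gate) // acquiring the lock sees the published writes
        {
            if (_complete) Console.WriteLine(_answer);
        }
    }
}
```

With the lock held around both the writes and the reads, B can never observe _complete as true while _answer is still 0.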
If you are ever touching data from two different threads, this can occur. This is one of the tricks that processors use to increase speed; you could build processors that didn't do this, but they would be much slower, so no one does that anymore. You should probably read something like Hennessy and Patterson to recognize all of the various types of race conditions.
I always use some sort of higher level tool like a monitor or a lock, but internally they are doing something similar or are implemented with barriers.