Do zombies exist ... in .NET? - c#

I was having a discussion with a teammate about locking in .NET. He's a really bright guy with an extensive background in both lower-level and higher-level programming, but his experience with lower-level programming far exceeds mine. Anyway, he argued that .NET locking should be avoided on critical systems expected to be under heavy load, if at all possible, in order to avoid the admittedly small possibility of a "zombie thread" crashing a system. I routinely use locking and I didn't know what a "zombie thread" was, so I asked. The impression I got from his explanation is that a zombie thread is a thread that has terminated but somehow still holds onto some resources. An example he gave of how a zombie thread could break a system: a thread begins some procedure after locking on some object, and is then at some point terminated before the lock can be released. This situation has the potential to crash the system because, eventually, attempts to execute that method will leave all the threads waiting for access to an object that will never be released, since the thread that was using the locked object is dead.
I think I got the gist of this, but if I'm off base, please let me know. The concept made sense to me, but I wasn't completely convinced that this was a real scenario that could happen in .NET. I've never previously heard of "zombies", but I do recognize that programmers who have worked in depth at lower levels tend to have a deeper understanding of computing fundamentals (like threading). I definitely do see the value in locking, however, and I have seen many world-class programmers leverage locking. I also have limited ability to evaluate this for myself, because I know that the lock(obj) statement is really just syntactic sugar for:
bool lockWasTaken = false;
var temp = obj;
try { Monitor.Enter(temp, ref lockWasTaken); { body } }
finally { if (lockWasTaken) Monitor.Exit(temp); }
and because Monitor.Enter and Monitor.Exit are marked extern. It seems conceivable that .NET does some kind of processing that protects threads from exposure to system components that could have this kind of impact, but that is purely speculative and probably just based on the fact that I've never heard of "zombie threads" before. So, I'm hoping I can get some feedback on this here:
Is there a clearer definition of a "zombie thread" than what I've explained here?
Can zombie threads occur on .NET? (Why/Why not?)
If applicable, how could I force the creation of a zombie thread in .NET?
If applicable, how can I leverage locking without risking a zombie thread scenario in .NET?
Update
I asked this question a little over two years ago. Today this happened:

Is there a clearer definition of a "zombie thread" than what I've explained here?
Seems like a pretty good explanation to me - a thread that has terminated (and can therefore no longer release any resources), but whose resources (e.g. handles) are still around and (potentially) causing problems.
Can zombie threads occur on .NET? (Why/Why not?)
If applicable, How could I force the creation of a zombie thread in .NET?
They sure do, look, I made one!
[DllImport("kernel32.dll")]
private static extern void ExitThread(uint dwExitCode);
static void Main(string[] args)
{
new Thread(Target).Start();
Console.ReadLine();
}
private static void Target()
{
using (var file = File.Open("test.txt", FileMode.OpenOrCreate))
{
ExitThread(0);
}
}
This program starts a thread Target which opens a file and then immediately kills itself using ExitThread. The resulting zombie thread will never release the handle to the "test.txt" file, and so the file will remain open until the program terminates (you can check with Process Explorer or similar). In fact, the handle is only released once GC.Collect is called and the finalizer runs - it turns out it is even more difficult than I thought to create a zombie thread that truly leaks handles.
If applicable, How can I leverage locking without risking a zombie thread scenario in .NET?
Don't do what I just did!
As long as your code cleans up after itself correctly (use Safe Handles or equivalent classes if working with unmanaged resources), and as long as you don't go out of your way to kill threads in weird and wonderful ways (safest way is just to never kill threads - let them terminate themselves normally, or through exceptions if necessary), the only way that you are going to have something resembling a zombie thread is if something has gone very wrong (e.g. something goes wrong in the CLR).
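As a minimal sketch of that "let them terminate themselves" approach (the worker and shouldStop names here are invented for illustration, not part of the answer above):

using System;
using System.Threading;

class CooperativeShutdown
{
    // volatile so the worker always sees the latest value written by Main
    private static volatile bool shouldStop;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (!shouldStop)
            {
                Thread.Sleep(10); // ... a unit of real work would go here ...
            }
            // The thread falls off the end of its delegate here, so finally
            // blocks, using statements and lock releases all run as normal.
        });
        worker.Start();

        Console.ReadLine(); // wait for the user
        shouldStop = true;  // ask the thread to finish
        worker.Join();      // wait for it to terminate cleanly
    }
}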
In fact, it's actually surprisingly difficult to create a zombie thread (I had to P/Invoke into a function that essentially tells you in the documentation not to call it outside of C). For example, the following (awful) code actually doesn't create a zombie thread.
static void Main(string[] args)
{
    var thread = new Thread(Target);
    thread.Start();
    // Ugh, never call Abort...
    thread.Abort();
    Console.ReadLine();
}

private static void Target()
{
    // Ouch, open file which isn't closed...
    var file = File.Open("test.txt", FileMode.OpenOrCreate);
    while (true)
    {
        Thread.Sleep(1);
    }
    GC.KeepAlive(file);
}
Despite making some pretty awful mistakes, the handle to "test.txt" is still closed as soon as Abort is called (as part of the finalizer for file, which under the covers uses SafeFileHandle to wrap its file handle).
The locking example in C.Evenhuis's answer is probably the easiest way to fail to release a resource (a lock in this case) when a thread is terminated in a non-weird way, but that's easily fixed by either using a lock statement instead, or putting the release in a finally block.
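A minimal sketch of that fix (the _sync and DoWork names are made up here), with the release in a finally so it runs even if the body throws:

private static readonly object _sync = new object();

private static void DoWork()
{
    bool lockTaken = false;
    try
    {
        Monitor.Enter(_sync, ref lockTaken);
        // ... critical section ...
    }
    finally
    {
        // Runs even when the critical section throws, so the
        // monitor is never left held by a terminated thread.
        if (lockTaken) Monitor.Exit(_sync);
    }
}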
See also:
Subtleties of C# IL codegen, for a very subtle case where an exception can prevent a lock from being released even when using the lock keyword (but only in .NET 3.5 and earlier)
Locks and exceptions do not mix

I've cleaned up my answer a bit, but left the original one below for reference
It’s the first time I've heard of the term zombies so I'll assume its definition is:
A thread that has terminated without releasing all of its resources
So given that definition, yes, you can do that in .NET, as in other languages (C/C++, Java).
However, I do not think this is a good reason not to write threaded, mission-critical code in .NET. There may be other reasons to decide against .NET, but writing off .NET just because you can somehow have zombie threads doesn't make sense to me. Zombie threads are possible in C/C++ (I'd even argue that it's a lot easier to mess up in C) and a lot of critical, threaded apps are in C/C++ (high-volume trading, databases, etc.).
Conclusion
If you are in the process of deciding on a language to use, then I suggest you take the big picture into consideration: performance, team skills, schedule, integration with existing apps etc. Sure, zombie threads are something that you should think about, but since it’s so difficult to actually make this mistake in .NET compared to other languages like C, I think this concern will be overshadowed by other things like the ones mentioned above. Good luck!
Original Answer
Zombies† can exist if you don't write proper threading code. The same is true for other languages like C/C++ and Java. But this is not a reason not to write threaded code in .NET.
And just like with any other language, know the price before using something. It also helps to know what is happening under the hood so you can foresee any potential problems.
Reliable code for mission-critical systems is not easy to write, whatever language you're in. But I'm positive it's not impossible to do correctly in .NET. Also, AFAIK, .NET threading is not that different from threading in C/C++; it uses (or is built from) the same system calls, except for some .NET-specific constructs (like the lightweight versions of RWL and the event classes).
†first time I've heard of the term zombies, but based on your description, your colleague probably meant a thread that terminated without releasing all of its resources. This could potentially cause a deadlock, a memory leak or some other bad side effect. This is obviously not desirable, but singling out .NET because of this possibility is probably not a good idea, since it's possible in other languages too. I'd even argue that it's easier to mess up in C/C++ than in .NET (especially so in C, where you don't have RAII), but a lot of critical apps are written in C/C++, right? So it really depends on your individual circumstances. If you want to extract every ounce of speed from your application and want to get as close to the bare metal as possible, then .NET might not be the best solution. If you are on a tight budget and do a lot of interfacing with web services/existing .NET libraries/etc., then .NET may be a good choice.

Right now most of my answer has been corrected by the comments below. I won't delete the answer, because the information in the comments may be valuable to readers.
Immortal Blue pointed out that in .NET 2.0 and up finally blocks are immune to thread aborts. And as commented by Andreas Niedermair, this may not be an actual zombie thread, but the following example shows how aborting a thread can cause problems:
class Program
{
    static readonly object _lock = new object();

    static void Main(string[] args)
    {
        Thread thread = new Thread(new ThreadStart(Zombie));
        thread.Start();
        Thread.Sleep(500);
        thread.Abort();
        Monitor.Enter(_lock);
        Console.WriteLine("Main entered");
        Console.ReadKey();
    }

    static void Zombie()
    {
        Monitor.Enter(_lock);
        Console.WriteLine("Zombie entered");
        Thread.Sleep(1000);
        Monitor.Exit(_lock);
        Console.WriteLine("Zombie exited");
    }
}
However, when using a lock() { } block, the finally would still be executed when a ThreadAbortException is fired that way.
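For example, rewriting the Zombie method above with lock (a sketch of the same scenario, not a new program) lets the abort release the monitor, because the ThreadAbortException unwinds through the compiler-generated finally:

static void Zombie()
{
    lock (_lock) // expands to Monitor.Enter/Exit with a finally block
    {
        Console.WriteLine("Zombie entered");
        Thread.Sleep(1000);
    }
    Console.WriteLine("Zombie exited");
}

With this version, Main's Monitor.Enter(_lock) should succeed shortly after the Abort instead of blocking forever.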
The following information, as it turns out, is only valid for .NET 1 and .NET 1.1:
If inside the lock() { } block another exception occurs, and the ThreadAbortException arrives exactly when the finally block is about to be run, the lock is not released. As you mentioned, the lock() { } block is compiled as:
finally
{
    if (lockWasTaken)
        Monitor.Exit(temp);
}
If another thread calls Thread.Abort() inside the generated finally block, the lock may not be released.

This isn't about zombie threads, but the book Effective C# has a section on implementing IDisposable (item 17) which talks about zombie objects, and which I thought you may find interesting.
I recommend reading the book itself, but the gist of it is that if you have a class that either implements IDisposable or contains a destructor, the only thing you should do in either is release resources. If you do other things there, then there is a chance that the object will not be garbage collected, but will also not be accessible in any way.
It gives an example similar to below:
internal class Zombie
{
    private static readonly List<Zombie> _undead = new List<Zombie>();

    ~Zombie()
    {
        _undead.Add(this);
    }
}
When the destructor on this object is called, a reference to itself is placed on the global list, meaning it stays alive and in memory for the life of the program, but isn't accessible. This may mean that resources (particularly unmanaged resources) may not be fully released, which can cause all sorts of potential issues.
A more complete example is below. By the time the foreach loop is reached, you have 150 objects in the Undead list each containing an image, but the image has been GC'd and you get an exception if you try to use it. In this example, I am getting an ArgumentException (Parameter is not valid) when I try and do anything with the image, whether I try to save it, or even view dimensions such as height and width:
class Program
{
    static void Main(string[] args)
    {
        for (var i = 0; i < 150; i++)
        {
            CreateImage();
        }
        GC.Collect();
        // Something to do while the GC runs
        FindPrimeNumber(1000000);
        foreach (var zombie in Zombie.Undead)
        {
            // The object is still accessible, the image isn't
            zombie.Image.Save(@"C:\temp\x.png");
        }
        Console.ReadLine();
    }

    // Borrowed from here:
    // http://stackoverflow.com/a/13001749/969613
    public static long FindPrimeNumber(int n)
    {
        int count = 0;
        long a = 2;
        while (count < n)
        {
            long b = 2;
            int prime = 1; // to check if we found a prime
            while (b * b <= a)
            {
                if (a % b == 0)
                {
                    prime = 0;
                    break;
                }
                b++;
            }
            if (prime > 0)
                count++;
            a++;
        }
        return (--a);
    }

    private static void CreateImage()
    {
        var zombie = new Zombie(new Bitmap(@"C:\temp\a.png"));
        zombie.Image.Save(@"C:\temp\b.png");
    }
}
internal class Zombie
{
    public static readonly List<Zombie> Undead = new List<Zombie>();

    public Zombie(Image image)
    {
        Image = image;
    }

    public Image Image { get; private set; }

    ~Zombie()
    {
        Undead.Add(this);
    }
}
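For contrast, here is a hedged sketch of the "only release resources" rule the book recommends (the FileHolder class is invented for illustration; it needs using System.IO):

internal sealed class FileHolder : IDisposable
{
    private readonly FileStream _stream;

    public FileHolder(string path)
    {
        _stream = File.Open(path, FileMode.OpenOrCreate);
    }

    public void Dispose()
    {
        // Release resources and do nothing else: no logging, no callbacks,
        // and above all no storing of 'this' anywhere reachable.
        _stream.Dispose();
        GC.SuppressFinalize(this);
    }
}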
Again, I am aware you were asking about zombie threads in particular, but the question title is about zombies in .net, and I was reminded of this and thought others may find it interesting!

On critical systems under heavy load, writing lock-free code is better primarily because of the performance improvements. Look at things like LMAX and how it leverages "mechanical sympathy" for great discussions of this. Worry about zombie threads, though? I think that's an edge case that's just a bug to be ironed out, and not a good enough reason not to use lock.
Sounds more like your friend is just being fancy and flaunting his knowledge of obscure exotic terminology to me! In all the time I was running the performance labs at Microsoft UK, I never came across an instance of this issue in .NET.

1. Is there a clearer definition of a "zombie thread" than what I've explained here?
I do agree that "zombie threads" exist; the term refers to threads that are left holding resources they never release and yet never completely die, hence the name "zombie". So your explanation of it is pretty much on the money!
2. Can zombie threads occur on .NET? (Why/Why not?)
Yes, they can occur. And it's a term actually used by Windows itself: MSDN uses the word "zombie" for dead processes/threads.
Whether it happens frequently is another story, and depends on your coding techniques and practices. For someone like you who uses thread locking and has done it for a while, I wouldn't even worry about that scenario happening to you.
And yes, as @KevinPanko correctly mentioned in the comments, "zombie threads" do come from Unix, which is why they are used in Xcode/Objective-C and referred to as "NSZombie" for debugging. It behaves pretty much the same way... the only difference is that an object that should have died becomes a "zombie object" for debugging, instead of a "zombie thread" being a potential problem in your code.

I can make zombie threads easily enough.
var zombies = new List<Thread>();
while (true)
{
    var th = new Thread(() => { });
    th.Start();
    zombies.Add(th);
}
This leaks the thread handles (needed for Join()). It's just another memory leak as far as we are concerned in the managed world.
Now then, killing a thread in a way that it actually holds locks is a pain in the rear, but possible. The other guy's ExitThread() does the job. As he found, the file handle got cleaned up by the GC, but a lock around an object wouldn't be. But why would you do that?
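To make that last point concrete, here is a sketch (names invented, same ExitThread P/Invoke as the earlier answer, needs using System.Runtime.InteropServices) of a lock that really is abandoned, because the thread dies without ever unwinding through managed code:

[DllImport("kernel32.dll")]
private static extern void ExitThread(uint dwExitCode);

private static readonly object _gate = new object();

private static void Target()
{
    Monitor.Enter(_gate);
    ExitThread(0); // the thread dies still owning the monitor...
    // Monitor.Exit is never reached, so any other thread that tries
    // to lock on _gate will block forever.
}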

Related

Thread memory leak

I am trying to track down a memory leak in a larger C# program which spawns multiple threads. In the process, I have created a small side program which I am using to test some basic things, and I found some behavior that I really do not understand.
class Program
{
    static void test()
    {
    }

    static void Main(string[] args)
    {
        while (true)
        {
            Thread test_thread = new Thread(() => test());
            test_thread.Start();
            Thread.Sleep(20);
        }
    }
}
Running this program, I see that the memory usage of the program increases steadily without stopping. In just a few minutes the memory usage goes well over 100 MB and keeps climbing. If I comment out the line test_thread.Start();, the memory used by the program levels out at a few megabytes. I also tried forcing garbage collection at the end of the while loop using GC.Collect(), but it did not seem to do anything.
I thought that the thread would be dereferenced as soon as the function is finished executing allowing the GC to mop it up, but this doesn't seem to be happening. I must not be understanding something deeper here, and I would appreciate some help with fixing this leak. Thanks in advance!
This is by design; your test program is supposed to exhibit runaway memory usage. You can see the underlying reason in Taskmgr.exe: use View + Select Columns and tick "Handles", then observe how the number of handles for your process steadily increases. Memory usage goes up along with it, reflecting the unmanaged memory used by the handle objects.
The design choice was a very courageous one: the CLR uses 5 operating system objects per thread, plumbing used for synchronization. These objects are themselves disposable, yet the design choice was to not make the Thread class implement IDisposable. That would have been quite a hardship on .NET programmers; it is very difficult to make the Dispose() call at the right time. Courage that wasn't exhibited in the Task class design, btw, causing lots of hand-wringing and the general advice not to bother.
This is not normally a problem in a well-designed .NET program, where the GC runs often enough to clean up those OS objects, and where Thread objects are created sparingly, with the ThreadPool used for very short-running jobs like the ones in your test program.
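For illustration, the pool-based version of the test loop might look like this sketch (reusing the test() method from the question):

while (true)
{
    // Pool threads are recycled, so no new OS objects per iteration
    ThreadPool.QueueUserWorkItem(_ => test());
    Thread.Sleep(20);
}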
It can be a problem, though; we can't see your real program. Do beware of drawing too many conclusions from such a synthetic test. You can see GC statistics with Perfmon.exe, which gives you an idea of whether it runs often enough. A decent .NET memory profiler is the weapon of choice; GC.Collect() is the backup weapon. For example:
static void Main(string[] args) {
    int cnt = 0;
    while (true) {
        Thread test_thread = new Thread(() => test());
        test_thread.Start();
        if (++cnt % 256 == 0) GC.Collect();
        Thread.Sleep(20);
    }
}
And you'll see it bounce back and forth now, never getting much higher than 4 MB.

Read Introduction in C# - how to protect against it?

An article in MSDN Magazine discusses the notion of Read Introduction and gives a code sample which can be broken by it.
public class ReadIntro {
    private Object _obj = new Object();

    void PrintObj() {
        Object obj = _obj;
        if (obj != null) {
            Console.WriteLine(obj.ToString()); // May throw a NullReferenceException
        }
    }

    void Uninitialize() {
        _obj = null;
    }
}
Notice this "May throw a NullReferenceException" comment - I never knew this was possible.
So my question is: how can I protect against read introduction?
I would also be really grateful for an explanation exactly when the compiler decides to introduce reads, because the article doesn't include it.
Let me try to clarify this complicated question by breaking it down.
What is "read introduction"?
"Read introduction" is an optimization whereby the code:
public static Foo foo; // I can be changed on another thread!

void DoBar() {
    Foo fooLocal = foo;
    if (fooLocal != null) fooLocal.Bar();
}
is optimized by eliminating the local variable. The compiler can reason that if there is only one thread then foo and fooLocal are the same thing. The compiler is explicitly permitted to make any optimization that would be invisible on a single thread, even if it becomes visible in a multithreaded scenario. The compiler is therefore permitted to rewrite this as:
void DoBar() {
    if (foo != null) foo.Bar();
}
And now there is a race condition. If foo turns from non-null to null after the check then it is possible that foo is read a second time, and the second time it could be null, which would then crash. From the perspective of the person diagnosing the crash dump this would be completely mysterious.
Can this actually happen?
As the article you linked to called out:
Note that you won’t be able to reproduce the NullReferenceException using this code sample in the .NET Framework 4.5 on x86-x64. Read introduction is very difficult to reproduce in the .NET Framework 4.5, but it does nevertheless occur in certain special circumstances.
x86/x64 chips have a "strong" memory model and the jit compilers are not aggressive in this area; they will not do this optimization.
If you happen to be running your code on a weak memory model processor, like an ARM chip, then all bets are off.
When you say "the compiler" which compiler do you mean?
I mean the jit compiler. The C# compiler never introduces reads in this manner. (It is permitted to, but in practice it never does.)
Isn't it a bad practice to be sharing memory between threads without memory barriers?
Yes. Something should be done here to introduce a memory barrier because the value of foo could already be a stale cached value in the processor cache. My preference for introducing a memory barrier is to use a lock. You could also make the field volatile, or use VolatileRead, or use one of the Interlocked methods. All of those introduce a memory barrier. (volatile introduces only a "half fence" FYI.)
Just because there's a memory barrier does not necessarily mean that read introduction optimizations are not performed. However, the jitter is far less aggressive about pursuing optimizations that affect code that contains a memory barrier.
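As a sketch of one of those options (this assumes the Volatile.Read method from .NET 4.5; it is not from the article itself):

public static Foo foo; // may be set to null by another thread

void DoBar() {
    // A volatile read: the jitter is not free to re-read the field after
    // the null check, and the value cannot be a stale cached one.
    Foo fooLocal = Volatile.Read(ref foo);
    if (fooLocal != null) fooLocal.Bar();
}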
Are there other dangers to this pattern?
Sure! Let's suppose there are no read introductions. You still have a race condition. What if another thread sets foo to null after the check, and also modifies global state that Bar is going to consume? Now you have two threads, one of which believes that foo is not null and the global state is OK for a call to Bar, and another thread which believes the opposite, and you're running Bar. This is a recipe for disaster.
So what's the best practice here?
First, do not share memory across threads. This whole idea that there are two threads of control inside the main line of your program is just crazy to begin with. It never should have been a thing in the first place. Use threads as lightweight processes; give them an independent task to perform that does not interact with the memory of the main line of the program at all, and just use them to farm out computationally intensive work.
Second, if you are going to share memory across threads then use locks to serialize access to that memory. Locks are cheap if they are not contended, and if you have contention, then fix that problem. Low-lock and no-lock solutions are notoriously difficult to get right.
Third, if you are going to share memory across threads then every single method you call that involves that shared memory must either be robust in the face of race conditions, or the races must be eliminated. That is a heavy burden to bear, and that is why you shouldn't go there in the first place.
My point is: read introductions are scary but frankly they are the least of your worries if you are writing code that blithely shares memory across threads. There are a thousand and one other things to worry about first.
You can't really "protect" against read introduction, as it's a compiler optimization (except, of course, by using Debug builds with no optimization). It's pretty well documented that the optimizer will maintain the single-threaded semantics of the function, which, as the article notes, can cause issues in multi-threaded situations.
That said, I'm confused by his example. In Jeffrey Richter's book CLR via C# (v3 in this case), in the Events section he covers this pattern and notes that, in the example snippet you have above, in THEORY it wouldn't work. But it was a pattern recommended by Microsoft early in .NET's existence, and therefore the JIT compiler people he spoke to said that they would have to make sure that sort of snippet never breaks. (It's always possible they may decide that it's worth breaking for some reason, though; I imagine Eric Lippert could shed light on that.)
Finally, unlike the article, Jeffrey offers the "proper" way to handle this in multi-threaded situations (I've modified his example with your sample code):
Object temp = Interlocked.CompareExchange(ref _obj, null, null);
if (temp != null)
{
    Console.WriteLine(temp.ToString());
}
I only skimmed the article, but it seems that what the author is looking for is that you need to declare the _obj member as volatile.
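If that is the intent, the sample from the article would become something like this sketch:

public class ReadIntro {
    // volatile: every read of _obj is a volatile read, which the
    // jitter is not free to duplicate or cache
    private volatile Object _obj = new Object();

    void PrintObj() {
        Object obj = _obj;
        if (obj != null) {
            Console.WriteLine(obj.ToString());
        }
    }

    void Uninitialize() {
        _obj = null;
    }
}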

C# monitors (from a Java developer POV)

I'm working on a port of a C#/DirectX game client to Java, so that I can learn some C# (I am a complete novice at it) and in the meanwhile improve my knowledge of a Java OpenGL engine.
When I encounter something like the following:
Monitor.Enter(preloadDictionary);
try {
    foreach (PreloadEntry entry in preloadDictionary.Values) {
        if (entry.loaded) continue;
        return entry;
    }
} finally {
    Monitor.Exit(preloadDictionary);
}
can I assume it is like the following?
synchronized (preloadDictionary) {
    [...]
}
And in the case of:
Monitor.Enter(worldServerMap);
try {
    worldServerMap[rv.WorldName] = entry;
    Monitor.PulseAll(worldServerMap);
} finally {
    Monitor.Exit(worldServerMap);
}
is the additional PulseAll() like a notifyAll() to wake up all threads waiting on the resource? (But I could not find any place in the code where Monitor.Wait() is called.)
lock(x) is identical to Monitor.Enter followed by Monitor.Exit in a finally. It is a language shortcut.
If you ask me, it is one of the weaker parts of the C# language, simply because while it was fine when there was only the one Monitor, these days there are various versions of monitors (slim, spinlock, etc.) and lock only supports one of them. It is convenient, but I am not sure it is wise ;)
is the additional PulseAll() like a notifyAll() to wake up all threads waiting on the resource? (But I could not find any place in the code where Monitor.Wait() is called.)
The PulseAll makes little sense unless you have an explicit Wait, possibly from other threads not wanting to Enter. Enter waits if it cannot get the lock, so Exit is enough for normal synchronization.
I would start looking for a Wait or something else; PulseAll on a Monitor only makes sense if you have threads waiting WITHOUT trying to Enter at this stage. It could point to a bad design issue that basically has them wait, get pulsed, then try to enter, or it could be part of a non-blocking design of some sort. Hard to say, but it is unusual. Unless you can find a Wait somewhere in your code, I would likely try killing the PulseAll and see what happens.
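For reference, the usual shape of a Wait/PulseAll pairing looks like this sketch (names invented); Monitor.Wait and Monitor.PulseAll correspond roughly to Java's Object.wait() and notifyAll():

private static readonly object _sync = new object();
private static bool _ready;

// One thread waits for the condition...
static void Consumer()
{
    lock (_sync)
    {
        while (!_ready)          // re-check after every wake-up
            Monitor.Wait(_sync); // releases the lock while waiting
        // ... the condition holds here ...
    }
}

// ...and another establishes it and wakes all waiters.
static void Producer()
{
    lock (_sync)
    {
        _ready = true;
        Monitor.PulseAll(_sync);
    }
}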

Do we really need VOLATILE keyword in C#?

Here is the code that I was trying on my workstation.
class Program
{
    public static volatile bool status = true;

    public static void Main()
    {
        Thread FirstStart = new Thread(threadrun);
        FirstStart.Start();
        Thread.Sleep(200);
        Thread thirdstart = new Thread(threadrun2);
        thirdstart.Start();
        Console.ReadLine();
    }

    static void threadrun()
    {
        while (status)
        {
            Console.WriteLine("Waiting..");
        }
    }

    static void threadrun2()
    {
        status = false;
        Console.WriteLine("the bool value is now made FALSE");
    }
}
As you can see, I have fired three threads in Main. Then, using breakpoints, I tracked the threads. My initial conception was that all three threads would be fired simultaneously, but my breakpoint flow showed that the thread execution flow followed one after the other (and so was the output format, i.e. top-to-bottom execution of threads). Guys, why is that happening?
Additionally, I tried to run the same program without the volatile keyword in the declaration, and I found no change in the program's execution. I suspect the volatile keyword is of no practical use. Am I going wrong somewhere?
Your method of thinking is flawed.
The very nature of threading related issues is that they're non-deterministic. This means that what you have observed is potentially no indicator of what may happen in the future.
This is the very nature of why multithreaded programming is "hard." It often defies ad hoc testing, or even most unit testing. The only way to do it effectively is to understand your entire software and hardware stack, and diagram every possible occurrence through use of state machines.
In summary, threaded programming is not about what you've seen happen, it's about what might possibly happen, no matter how improbable.
OK, I will try to keep a very long story as short as possible:
Number 1: Trying to inspect the behavior of threads with the debugger is as useful as repeatedly running a multithreaded program and concluding that it works fine because none of your 100 tests failed: WRONG! Threads behave in a completely nondeterministic (some would say random) way, and you need different methods to make sure such a program will run correctly.
Number 2: The use of volatile will become clear once you remove it and then run your program in Debug mode and then in Release mode. I think you will have a surprise... What happens in Release mode is that the compiler will optimize the code (this includes reordering instructions and caching of values). Now, if your two threads run on different processor cores, the core executing the thread that checks the value of status may cache the value instead of repeatedly re-reading it. The other thread will set it, but the first one will never see the change, and its loop will never exit. volatile prevents this kind of situation from occurring.
In a sense, volatile is a guard in case the code does not actually (and most likely will not) run as you think it will in a multithreaded scenario.
The fact that your simple code doesn't behave differently with volatile doesn't mean anything. Your code is too simple and has nothing to do with volatile. You need to write very computation-intensive code to create a clearly visible memory race condition.
Also, the volatile keyword may be useful on platforms other than x86/x64 with other memory models (for example, Itanium).
Joe Duffy wrote interesting information about volatile on his blog. I strongly recommend reading it.
Then, using breakpoints, I tracked the threads. My initial conception was that all three threads would be fired simultaneously, but my breakpoint flow showed that the thread execution flow followed one after the other (and so was the output format, i.e. top-to-bottom execution of threads). Guys, why is that happening?
The debugger is temporarily suspending the threads to make it easier to debug.
I suspect the volatile keyword is of no practical use. Am I going wrong somewhere?
The Console.WriteLine calls are very likely masking the problem. They are most likely generating the necessary memory barrier for you implicitly. Here is a really simple snippet of code that demonstrates that there is, in fact, a problem when volatile is not used to declare the stop variable.
Compile the following code with the Release configuration and run it outside of the debugger.
class Program
{
    static bool stop = false;

    public static void Main(string[] args)
    {
        var t = new Thread(() =>
        {
            Console.WriteLine("thread begin");
            bool toggle = false;
            while (!stop)
            {
                toggle = !toggle;
            }
            Console.WriteLine("thread end");
        });
        t.Start();
        Thread.Sleep(1000);
        stop = true;
        Console.WriteLine("stop = true");
        Console.WriteLine("waiting...");
        t.Join();
    }
}

Why we need Thread.MemoryBarrier()?

In "C# 4 in a Nutshell", the author shows that this class can write 0 sometimes without MemoryBarrier, though I can't reproduce in my Core2Duo:
public class Foo
{
    int _answer;
    bool _complete;

    public void A()
    {
        _answer = 123;
        //Thread.MemoryBarrier(); // Barrier 1
        _complete = true;
        //Thread.MemoryBarrier(); // Barrier 2
    }

    public void B()
    {
        //Thread.MemoryBarrier(); // Barrier 3
        if (_complete)
        {
            //Thread.MemoryBarrier(); // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}

private static void ThreadInverteOrdemComandos()
{
    Foo obj = new Foo();
    Task.Factory.StartNew(obj.A);
    Task.Factory.StartNew(obj.B);
    Thread.Sleep(10);
}
This need seems crazy to me. How can I recognize all the possible cases where this can occur? I thought that if the processor changes the order of operations, it has to guarantee that the behavior doesn't change.
Do you bother to use barriers?
You are going to have a very hard time reproducing this bug. In fact, I would go as far as saying you will never be able to reproduce it using the .NET Framework. The reason is because Microsoft's implementation uses a strong memory model for writes. That means writes are treated as if they were volatile. A volatile write has lock-release semantics which means that all prior writes must be committed before the current write.
However, the ECMA specification has a weaker memory model. So it is theoretically possible that Mono or even a future version of the .NET Framework might start exhibiting the buggy behavior.
So what I am saying is that it is very unlikely that removing barriers #1 and #2 will have any impact on the behavior of the program. That, of course, is not a guarantee, but an observation based on the current implementation of the CLR only.
Removing barriers #3 and #4 will definitely have an impact. This is actually pretty easy to reproduce. Well, not this example per se, but the following code is one of the more well-known demonstrations. It has to be compiled using the Release build and run outside of the debugger. The bug is that the program does not end. You can fix the bug by placing a call to Thread.MemoryBarrier inside the while loop or by marking stop as volatile.
class Program
{
    static bool stop = false;

    public static void Main(string[] args)
    {
        var t = new Thread(() =>
        {
            Console.WriteLine("thread begin");
            bool toggle = false;
            while (!stop)
            {
                toggle = !toggle;
            }
            Console.WriteLine("thread end");
        });
        t.Start();
        Thread.Sleep(1000);
        stop = true;
        Console.WriteLine("stop = true");
        Console.WriteLine("waiting...");
        t.Join();
    }
}
The reason why some threading bugs are hard to reproduce is because the same tactics you use to simulate thread interleaving can actually fix the bug. Thread.Sleep is the most notable example because it generates memory barriers. You can verify that by placing a call inside the while loop and observing that the bug goes away.
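Concretely, the fix mentioned above amounts to something like this inside the worker's loop (a sketch, not the only possible placement):

while (!stop)
{
    toggle = !toggle;
    Thread.MemoryBarrier(); // forces a fresh read of 'stop' on each pass
}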
You can see my answer here for another analysis of the example from the book you cited.
Odds are very good that the first task is completed by the time the second task even starts running. You can only observe this behavior if both threads run that code simultaneously and there are no intervening cache-synchronizing operations. There is one in your code: the StartNew() method will take a lock inside the thread pool manager somewhere.
Getting two threads to run this code simultaneously is very hard. This code completes in a couple of nanoseconds. You would have to try billions of times and introduce variable delays to have any odds. Not much point to this of course, the real problem is when this happens randomly when you don't expect it.
Stay away from this, use the lock statement to write sane multi-threaded code.
If you use volatile and lock, the memory barrier is built in. But, yes, you do need it otherwise. Having said that, I suspect that you need half as many as your example shows.
It's very difficult to reproduce multithreaded bugs; usually you have to run the test code many times (thousands) and have some automated check that flags if the bug occurs. You might try to add a short Thread.Sleep(10) between some of the lines, but again, that does not always guarantee you will get the same issues as without it.
Memory barriers were introduced for people who need to do really hardcore low-level performance optimisation of their multithreaded code. In most cases you will be better off using other synchronisation primitives, i.e. volatile or lock.
I'll just quote one of the great articles on multi-threading:
Consider the following example:
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        _complete = true;
    }

    void B()
    {
        if (_complete) Console.WriteLine(_answer);
    }
}
If methods A and B ran concurrently on different threads, might it be possible for B to write “0”? The answer is yes, for the following reasons:
The compiler, CLR, or CPU may reorder your program's instructions to improve efficiency. The compiler, CLR, or CPU may introduce caching optimizations such that assignments to variables won't be visible to other threads right away. C# and the runtime are very careful to ensure that such optimizations don't break ordinary single-threaded code, or multithreaded code that makes proper use of locks. Outside of these scenarios, you must explicitly defeat these optimizations by creating memory barriers (also called memory fences) to limit the effects of instruction reordering and read/write caching.
Full fences
The simplest kind of memory barrier is a full memory barrier (full fence), which prevents any kind of instruction reordering or caching around that fence. Calling Thread.MemoryBarrier generates a full fence; we can fix our example by applying four full fences as follows:
class Foo
{
    int _answer;
    bool _complete;

    void A()
    {
        _answer = 123;
        Thread.MemoryBarrier(); // Barrier 1
        _complete = true;
        Thread.MemoryBarrier(); // Barrier 2
    }

    void B()
    {
        Thread.MemoryBarrier(); // Barrier 3
        if (_complete)
        {
            Thread.MemoryBarrier(); // Barrier 4
            Console.WriteLine(_answer);
        }
    }
}
All the theory behind Thread.MemoryBarrier and why we need to use it in non-blocking scenarios to make the code safe and robust is described nicely here: http://www.albahari.com/threading/part4.aspx
If you are ever touching data from two different threads, this can occur. This is one of the tricks that processors use to increase speed; you could build processors that didn't do this, but they would be much slower, so no one does that anymore. You should probably read something like Hennessy and Patterson to recognize all of the various types of race conditions.
I always use some sort of higher level tool like a monitor or a lock, but internally they are doing something similar or are implemented with barriers.
