I've read several articles and posts that say that lock(this), lock(typeof(MyType)), lock("a string") are all bad practice because another thread could lock on the same key and cause a deadlock. In order to understand this problem, I was trying to create some sample code to illustrate the deadlock but have been unable to wrap my head around this.
Can someone write a concise bit of code that illustrates this classic problem? Please keep it short; I can only digest code in small chunks.
Edit:
I think lassevk sums it up well: the real problem is that you have lost control over your locks. Once that happens, you cannot control the order in which the locks are taken, and you are allowing a potential deadlock situation.
lock(this), lock(typeof(MyType)), etc. are all situations where you have chosen a lock that you cannot control.
A deadlock will only occur if you have more than one lock. You need a situation where both threads hold a resource that the other needs (which means there have to be at least two resources, and the two threads have to attempt to acquire them in a different order).
So a simple example:
// thread 1
lock(typeof(int)) {
Thread.Sleep(1000);
lock(typeof(float)) {
Console.WriteLine("Thread 1 got both locks");
}
}
// thread 2
lock(typeof(float)) {
Thread.Sleep(1000);
lock(typeof(int)) {
Console.WriteLine("Thread 2 got both locks");
}
}
Assuming both threads are started within a second of each other, they will both have time to grab the first lock before either gets to the inner lock. Without the Sleep() call, one of the threads would most likely have time to get and release both locks before the other thread even got started.
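One way to see how such a deadlock can be avoided (this is my own sketch, not part of the original answer) is to take the second lock with Monitor.TryEnter and a timeout, so a thread can back off instead of waiting forever. The lockA/lockB names here are illustrative:

private readonly object lockA = new object();
private readonly object lockB = new object();

public void DoBoth()
{
    lock (lockA)
    {
        // Try the second lock for one second instead of blocking indefinitely.
        if (Monitor.TryEnter(lockB, TimeSpan.FromSeconds(1)))
        {
            try
            {
                Console.WriteLine("Got both locks");
            }
            finally
            {
                Monitor.Exit(lockB);
            }
        }
        else
        {
            // Could not get the second lock: give up lockA by leaving the lock
            // block and let the caller retry, rather than deadlocking.
            Console.WriteLine("Backed off to avoid a potential deadlock");
        }
    }
}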
The idea is that you should never lock on something when you cannot control who else has access to it.
Type objects are singletons visible to every piece of .NET code, and you cannot control who locks on your "this" object from the outside.
The same goes for strings: since strings are immutable, the framework keeps just one instance of each "hard-coded" string literal and puts it in a pool (the string is said to be interned), so if you write the string "hello" twice in your code, you will always get the same object.
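The usual remedy, sketched here by me rather than taken from the answer above, is to lock on a private object that only your own class can reach:

public class Counter
{
    // Nothing outside this class can lock on _sync, so you stay in control
    // of every acquisition and release of this lock.
    private readonly object _sync = new object();
    private int _count;

    public void Increment()
    {
        lock (_sync)
        {
            _count++;
        }
    }
}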
Consider the following example: you wrote only Thread1 in your own private code, while Thread2 is called by some library you are using on a background thread...
void Thread1()
{
lock (typeof(int))
{
Thread.Sleep(1000);
lock (typeof(long))
// do something
}
}
void Thread2()
{
lock (typeof(long))
{
Thread.Sleep(1000);
lock (typeof(int))
// do something
}
}
Sure, here you go.
Note that the common example for a deadlock is when you acquire multiple locks, and two or more threads end up waiting for each other.
For instance, two threads that lock like this:
Thread 1      Thread 2
Lock "A"      Lock "B"
Lock "B"      Lock "A"   <-- both threads will stop dead here,
                             waiting for the lock to become
                             available.
However, in this example I didn't bother with that; I just let one thread lock indefinitely. You really don't want to lose control over your locks, so while this is a contrived example, the fact that the background thread can completely block the main thread like this is bad.
using System;
using System.Threading;
namespace ConsoleApplication7
{
public class Program
{
public static void Main(string[] args)
{
LockableClass lockable = new LockableClass();
new Thread(new ParameterizedThreadStart(BackgroundMethod)).Start(lockable);
Thread.Sleep(500);
Console.Out.WriteLine("calling Reset");
lockable.Reset();
}
private static void BackgroundMethod(Object lockable)
{
lock (lockable)
{
Console.Out.WriteLine("background thread got lock now");
Thread.Sleep(Timeout.Infinite);
}
}
}
public class LockableClass
{
public Int32 Value1 { get; set; }
public Int32 Value2 { get; set; }
public void Reset()
{
Console.Out.WriteLine("attempting to lock on object");
lock (this)
{
Console.Out.WriteLine("main thread got lock now");
Value1 = 0;
Value2 = 0;
}
}
}
}
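A hedged sketch of how LockableClass could avoid the problem (my addition, not part of the original answer): give the class its own private lock object, so outside code can no longer take the lock that Reset needs.

public class LockableClass
{
    // Callers cannot see this field, so they cannot lock on it.
    private readonly object sync = new object();

    public Int32 Value1 { get; set; }
    public Int32 Value2 { get; set; }

    public void Reset()
    {
        lock (sync)
        {
            Value1 = 0;
            Value2 = 0;
        }
    }
}

With this change, the background thread's lock (lockable) no longer contends with Reset, because the two pieces of code are locking on different objects.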
This is pretty standard badness: grabbing the locks out of order and then sleeping while holding a lock. Two bad things to do. :)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace DeadLock
{
public class Program
{
static void Main(string[] args)
{
var ddt = new DontDoThat();
ddt.Go();
}
}
public class DontDoThat
{
private int _badSharedState = 0;
private readonly object _lock1 = new object();
private readonly object _lock2 = new object();
public void Go()
{
new Thread(BadGuy1).Start();
new Thread(BadGuy2).Start();
Console.WriteLine("Leaving Go!");
}
public void BadGuy1()
{
lock (_lock1)
{
Thread.Sleep(100); // yielding while holding the lock is bad
lock (_lock2)
{
_badSharedState++;
Console.Write("From Bad Guy #1: {0})", _badSharedState );
}
}
}
public void BadGuy2()
{
lock (_lock2)
{
lock (_lock1)
{
_badSharedState++;
Console.Write("From Bad Guy #2: {0})", _badSharedState);
}
}
}
}
}
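A sketch of the standard fix (mine, not part of the answer above): make every thread take the locks in the same order, and don't sleep while holding them. The same fields from the example are assumed.

public void GoodGuy1()
{
    lock (_lock1)
    {
        lock (_lock2)        // same order as GoodGuy2, so no circular wait
        {
            _badSharedState++;
        }
    }
}

public void GoodGuy2()
{
    lock (_lock1)            // BadGuy2 took _lock2 first; now the order matches
    {
        lock (_lock2)
        {
            _badSharedState++;
        }
    }
}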
The problem is that lock("a string") is locking on a singleton (the single interned string instance). This means that any other code that locks on the same string could end up waiting forever.
for example:
using System;
using System.Threading;
namespace ThreadLock
{
class Program
{
static void Main(string[] args)
{
lock ("my lock")
{
ManualResetEvent evt = new ManualResetEvent(false);
WorkerObject worker = new WorkerObject(evt);
Thread t = new Thread(new ThreadStart(worker.Work));
t.Start();
evt.WaitOne();
}
}
}
class WorkerObject
{
private ManualResetEvent _evt;
public WorkerObject(ManualResetEvent evt)
{
_evt = evt;
}
public void Work()
{
lock ("my lock")
{
Console.WriteLine("worked.");
_evt.Set();
}
}
}
}
In this case, the calling code takes a lock on a string and then creates a worker object. The worker object's Work() locks on the same string, which, thanks to interning, is a single shared instance in C#. It ends up in deadlock because the caller owns the lock and is waiting for a signal that will never come.
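You can see the interning for yourself with a small snippet like this (my illustration, not from the answer): two identical string literals are the same object, which is exactly why they make such a dangerous lock target.

string a = "my lock";
string b = "my lock";

// Prints True: both literals are interned to a single shared instance,
// so lock(a) and lock(b) contend for the same monitor.
Console.WriteLine(object.ReferenceEquals(a, b));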
class Character
{
public Character Other;
public string Name;
private object locker = new object();
public Character(string name)
{
Name = name;
}
public void Go()
{
lock (locker)
{
Thread.Sleep(1000);
Console.WriteLine("go in {0}", Name);
Other.Go();
}
}
}
class Program
{
static void Main(string[] args)
{
Character a = new Character("A");
Character b = new Character("B");
a.Other = b;
b.Other = a;
new Thread(a.Go).Start();
b.Go();
Console.ReadLine();
}
}
I've got a fairly simple program to enter and serialize an object using a lambda expression to pass things off to another thread.
using System;
using System.Threading;
using Newtonsoft.Json;
namespace MultithreadingApplication
{
class ThreadCreationProgram
{
static void Main(string[] args)
{
myObject theObject = new myObject();
Console.WriteLine("Enter the following:");
Console.WriteLine("color:");
theObject.Color = Console.ReadLine();
Console.WriteLine("number");
theObject.Number = int.Parse(Console.ReadLine());
Console.WriteLine("shape:");
theObject.Shape = Console.ReadLine();
Thread myNewThread = new Thread(() => Serialize(theObject));
myNewThread.Start();
myNewThread.Abort();
Console.ReadKey();
}
public static void Serialize(myObject theObject)
{
string json = JsonConvert.SerializeObject(theObject, Formatting.Indented);
Console.WriteLine(json);
Thread.Sleep(1000);
}
}
public class myObject
{
private Int32 number;
private String color, shape;
public Int32 Number
{
get { return number; }
set { number = value; }
}
public String Color
{
get { return color; }
set { color = value; }
}
public String Shape
{
get { return shape; }
set { shape = value; }
}
public myObject()
{
}
}
}
When I run this thing, I notice that sometimes, it won't actually call the Serialize method. I've examined it with breakpoints and it will instantiate the thread with the lambda expression statement and immediately terminate it without ever going down to the Serialize method. I'm new to multithreading, so what's the deal here?
myNewThread.Start();
myNewThread.Abort();
The thread sometimes fails to make any progress because you abort it before it has a chance to execute. If you want the thread to execute, don't abort it.
The whole point of threads is that they execute independently of each other. When you call Start, the thread is instructed to begin executing; in the meantime, the calling thread is free to continue. In your case it immediately aborts the new thread, and this can happen before that thread has even got going.
The problem here is that you call Thread.Start(), which tells the computer: "Hey, begin doing the work in this lambda expression in a different context."
Right when that message is sent, you immediately call Thread.Abort() [http://msdn.microsoft.com/en-us/library/System.Threading.Thread_methods%28v=vs.110%29.aspx ]. It kills the thread right away, so sometimes no work gets done at all.
This SO answer should point you in the right direction: How to wait for thread to finish with .NET?
Check out Thread.Join() here.
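For the code in the question, that boils down to something like this (my sketch): replace the Abort call with Join, which blocks the main thread until the serialization thread has finished.

Thread myNewThread = new Thread(() => Serialize(theObject));
myNewThread.Start();
myNewThread.Join();    // wait for Serialize(theObject) to complete instead of killing it
Console.ReadKey();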
Consider the following pattern:
private AutoResetEvent signal = new AutoResetEvent(false);
private void Work()
{
while (true)
{
Thread.Sleep(5000);
signal.Set();
//has a waiting thread definitely been signaled by now?
signal.Reset();
}
}
public void WaitForNextEvent()
{
signal.WaitOne();
}
The purpose of this pattern is to allow external consumers to wait for a certain event (e.g. a message arriving). WaitForNextEvent is not called from within the class.
To give an example that should be familiar, consider System.Diagnostics.Process. It exposes an Exited event, but it also exposes a WaitForExit method, which allows the caller to wait synchronously until the process exits. This is what I am trying to achieve here.
The reason I need signal.Reset() is that if a thread calls WaitForNextEvent after signal.Set() has already been called (or in other words, if .Set was called when no threads were waiting), it returns immediately, as the event has already been previously signaled.
The question
Is it guaranteed that a thread calling WaitForNextEvent() will be signaled before signal.Reset() is called? If not, what are other solutions for implementing a WaitFor method?
Instead of using AutoResetEvent or ManualResetEvent, use this:
public sealed class Signaller
{
public void PulseAll()
{
lock (_lock)
{
Monitor.PulseAll(_lock);
}
}
public void Pulse()
{
lock (_lock)
{
Monitor.Pulse(_lock);
}
}
public void Wait()
{
Wait(Timeout.Infinite);
}
public bool Wait(int timeoutMilliseconds)
{
lock (_lock)
{
return Monitor.Wait(_lock, timeoutMilliseconds);
}
}
private readonly object _lock = new object();
}
Then change your code like so:
private Signaller signal = new Signaller();
private void Work()
{
while (true)
{
Thread.Sleep(5000);
signal.Pulse(); // Or signal.PulseAll() to signal ALL waiting threads.
}
}
public void WaitForNextEvent()
{
signal.Wait();
}
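A minimal usage sketch (my addition, assuming the Signaller class above): one thread blocks in Wait() until another thread pulses. Note that a Pulse issued while no thread is waiting is simply lost, which matches the "wait for the next event" semantics you are after.

var signaller = new Signaller();

var consumer = new Thread(() =>
{
    signaller.Wait();                     // blocks until a pulse arrives
    Console.WriteLine("event observed");
});
consumer.Start();

Thread.Sleep(1000);                       // give the consumer time to reach Wait()
signaller.Pulse();                        // wakes one waiting thread
consumer.Join();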
There is no guarantee. This:
AutoResetEvent flag = new AutoResetEvent(false);
new Thread(() =>
{
Thread.CurrentThread.Priority = ThreadPriority.Lowest;
Console.WriteLine("Work Item Started");
flag.WaitOne();
Console.WriteLine("Work Item Executed");
}).Start();
// For fast systems, you can help by occupying processors.
for (int ix = 0; ix < 2; ++ix)
{
new Thread(() => { while (true) ; }).Start();
}
Thread.Sleep(1000);
Console.WriteLine("Sleeped");
flag.Set();
// Uncomment the next line to make it work
//Thread.Sleep(1000);
flag.Reset();
Console.WriteLine("Finished");
Console.ReadLine();
won't print "Work Item Executed" on my system. If I add a Thread.Sleep between the Set and the Reset, it prints it. Note that this is very processor dependent, so you may have to create tons of threads to "fill" the CPUs. On my PC it's reproducible 50% of the time :-)
For the Exited:
readonly object mylock = new object();
then somewhere:
lock (mylock)
{
// Your code goes here
}
and the WaitForExit:
void WaitForExit()
{
lock (mylock) { }
// exited
}
bool IsExited()
{
    bool lockTaken = false;
    try
    {
        Monitor.TryEnter(mylock, ref lockTaken);
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(mylock);
        }
    }
    return lockTaken;
}
}
Note that the lock construct isn't compatible with async/await (and the same goes for nearly all of .NET's locking primitives).
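If you do need something await-friendly, a common substitute (my sketch, not part of this answer) is SemaphoreSlim used as an asynchronous mutex. This assumes using System.Threading and System.Threading.Tasks:

private readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);

public async Task DoWorkAsync()
{
    await gate.WaitAsync();      // asynchronous equivalent of entering the lock
    try
    {
        // critical section; it is safe to await in here, unlike inside lock
    }
    finally
    {
        gate.Release();
    }
}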
I would use TaskCompletionSources:
private volatile TaskCompletionSource<int> signal = new TaskCompletionSource<int>();
private void Work()
{
while (true)
{
Thread.Sleep(5000);
var oldSignal = signal;
signal = new TaskCompletionSource<int>();
//has a waiting thread definitely been signaled by now?
oldSignal.SetResult(0);
}
}
public void WaitForNextEvent()
{
signal.Task.Wait();
}
By the time that the code calls SetResult, no new code entering WaitForNextEvent can obtain the TaskCompletionSource that is being signalled.
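A nice side effect (my note, not from the original answer) is that the same TaskCompletionSource gives you an awaitable version of the wait almost for free:

public Task WaitForNextEventAsync()
{
    return signal.Task;    // callers can await this instead of blocking a thread
}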
I believe it is not guaranteed.
However, I don't quite follow your logic flow. If your main thread sets the signal, why should it wait until that signal reaches its destination? Wouldn't it be better to continue your "after signal set" logic in the thread that was waiting?
If you cannot do that, I recommend using a second WaitHandle to signal the first thread that the second one has received the signal. But I cannot see any advantage in such a strategy.
I have some code that looks like the below. Does this create a deadlock?
private readonly object objectLock = new object();
public void MethodA()
{
lock(objectLock)
{
MethodB();
}
}
public void MethodB()
{
lock(objectLock)
{
//do something
}
}
UPDATE: There will be 2 threads running.
No - but this would be:
private readonly object objectLockA = new object();
private readonly object objectLockB = new object();
public void MethodA()
{
lock(objectLockA)
{
lock(objectLockB)
{
//...
}
}
}
public void MethodB()
{
lock(objectLockB)
{
lock(objectLockA)
{
//do something
}
}
}
If you call both methods in parallel (from two different threads), you can get a deadlock...
No, it's not a deadlock. It's the same thread locking on the same synchronization object. A thread can take nested locks; it just needs to release them an equal number of times.
No, you'd need two lock objects to enable a deadlock.
If this is the only mutex involved, it isn't. The same thread can lock the same mutex multiple times, as long as it unlocks it an equal number of times.
Calling MethodA produces the following sequence of operations on the same thread:
1. Lock objectLock.
2. Call MethodB.
3. Lock objectLock.
4. Unlock objectLock.
5. Exit MethodB.
6. Unlock objectLock.
So, objectLock is locked twice and unlocked twice, but there is no deadlock:
- If another thread tries to call MethodA, it will simply block on the first lock but will not deadlock.
- If it calls MethodB, the same would happen.
- And if the first thread calls MethodB and then another thread calls MethodA, again "normal" blocking would take place, but not a deadlock.
If you copy-paste the following lines, compile, and run, you will see that "never called" is never printed to the console.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace deadlocktest
{
class Program
{
static object object1 = new object();
static object object2 = new object();
public static void FunctionOne()
{
lock (object1)
{
Console.WriteLine("FunctionOne called 1");
Thread.Sleep(1000);
lock (object2)
{
Console.WriteLine("FunctionOne called 2, never called");
}
}
}
public static void FunctionTwo()
{
lock (object2)
{
Console.WriteLine("FunctionTwo called 1");
Thread.Sleep(1000);
lock (object1)
{
Console.WriteLine("FunctionTwo called 2, never called");
}
}
}
static void Main(string[] args)
{
Thread thread1 = new Thread(FunctionOne);
Thread thread2 = new Thread(FunctionTwo);
thread1.Start();
thread2.Start();
int i = 0;
while (i < 9)
{
Console.WriteLine("How bad thread!");
i++;
}
thread1.Join();
thread2.Join();
Console.ReadLine();
}
}
}
My main concern is with the boolean flag... is it safe to use it without any synchronization? I've read in several places that it's atomic (including the documentation).
class MyTask
{
private ManualResetEvent startSignal;
private CountDownLatch latch;
private bool running;
MyTask(CountDownLatch latch)
{
running = false;
this.latch = latch;
startSignal = new ManualResetEvent(false);
}
// A method which runs in a thread
public void Run()
{
startSignal.WaitOne();
while(running)
{
startSignal.WaitOne();
//... some code
}
latch.Signal();
}
public void Stop()
{
running = false;
startSignal.Set();
}
public void Start()
{
running = true;
startSignal.Set();
}
public void Pause()
{
startSignal.Reset();
}
public void Resume()
{
startSignal.Set();
}
}
Is this a safe way to design a task in this way? Any suggestions, improvements, comments?
Note: I wrote my custom CountDownLatch class in case you're wondering where I'm getting it from.
Update:
Here is my CountDownLatch too:
public class CountDownLatch
{
private volatile int m_remain;
private EventWaitHandle m_event;
public CountDownLatch (int count)
{
if (count < 0)
throw new ArgumentOutOfRangeException();
m_remain = count;
m_event = new ManualResetEvent(false);
if (m_remain == 0)
{
m_event.Set();
}
}
public void Signal()
{
// The last thread to signal also sets the event.
if (Interlocked.Decrement(ref m_remain) == 0)
m_event.Set();
}
public void Wait()
{
m_event.WaitOne();
}
}
You'd better mark it volatile though:
"The volatile keyword indicates that a field might be modified by multiple concurrently executing threads. Fields that are declared volatile are not subject to compiler optimizations that assume access by a single thread. This ensures that the most up-to-date value is present in the field at all times."
But I would change your loop:
startSignal.WaitOne();
while(running)
{
//... some code
startSignal.WaitOne();
}
As it is in your post, the 'some code' might execute after the thread has been stopped (i.e. after Stop is called), which is unexpected and may even be incorrect.
Booleans are atomic in C#; however, if you want to modify it in one thread and read it in another, you will need to mark it volatile at the very least. Otherwise the reading thread may only actually read it once into a register.
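Concretely, for the field in the question that is just (my sketch of the suggestion above):

// the flag from MyTask, marked volatile as the answers suggest
private volatile bool running;

Alternatively, you could read and write the field through System.Threading.Volatile.Read/Write, which expresses the same intent explicitly.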
Booleans are atomic in C#: http://msdn.microsoft.com/en-us/library/aa691278(VS.71).aspx
BTW, I just noticed this part of the code:
// A method which runs in a thread
public void Run()
{
startSignal.WaitOne();
while(running)
{
startSignal.WaitOne();
//... some code
}
latch.Signal();
}
You will need to unblock the worker thread twice using "startSignal.Set()" for the code within the while block to execute.
Is this deliberate?
For a "log information for support" type of function I'd like to enumerate and dump active thread information.
I'm well aware of the fact that race conditions can make this information semi-inaccurate, but I'd like to try to get the best possible result, even if it isn't 100% accurate.
I looked at Process.Threads, but it returns ProcessThread objects; I'd like to have a collection of Thread objects, so that I can log their names and whether they're background threads or not.
Is there such a collection available, even if it is just a snapshot of the active threads when I call it?
ie.
Thread[] activeThreads = ??
Note, to be clear, I am not asking about Process.Threads; that collection gives me a lot, but not all, of what I want. I want to know how much time specific named threads in our application are currently using (which means I will have to look at connecting the two types of objects later, but the names are more important than the CPU time to begin with).
If you're willing to replace your application's Thread creations with another wrapper class, said wrapper class can track the active and inactive Threads for you. Here's a minimal workable shell of such a wrapper:
namespace ThreadTracker
{
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Threading;
public class TrackedThread
{
private static readonly IList<Thread> threadList = new List<Thread>();
private readonly Thread thread;
private readonly ParameterizedThreadStart start1;
private readonly ThreadStart start2;
public TrackedThread(ParameterizedThreadStart start)
{
this.start1 = start;
this.thread = new Thread(this.StartThreadParameterized);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ThreadStart start)
{
this.start2 = start;
this.thread = new Thread(this.StartThread);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ParameterizedThreadStart start, int maxStackSize)
{
this.start1 = start;
this.thread = new Thread(this.StartThreadParameterized, maxStackSize);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ThreadStart start, int maxStackSize)
{
this.start2 = start;
this.thread = new Thread(this.StartThread, maxStackSize);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public static int Count
{
get
{
lock (threadList)
{
return threadList.Count;
}
}
}
public static IEnumerable<Thread> ThreadList
{
get
{
lock (threadList)
{
return new ReadOnlyCollection<Thread>(threadList);
}
}
}
// either: (a) expose the thread object itself via a property or,
// (b) expose the other Thread public methods you need to replicate.
// This example uses (a).
public Thread Thread
{
get
{
return this.thread;
}
}
private void StartThreadParameterized(object obj)
{
try
{
this.start1(obj);
}
finally
{
lock (threadList)
{
threadList.Remove(this.thread);
}
}
}
private void StartThread()
{
try
{
this.start2();
}
finally
{
lock (threadList)
{
threadList.Remove(this.thread);
}
}
}
}
}
and a quick test driver of it (note I do not iterate over the list of threads, merely get the count in the list):
namespace ThreadTracker
{
using System;
using System.Threading;
internal static class Program
{
private static void Main()
{
var thread1 = new TrackedThread(DoNothingForFiveSeconds);
var thread2 = new TrackedThread(DoNothingForTenSeconds);
var thread3 = new TrackedThread(DoNothingForSomeTime);
thread1.Thread.Start();
thread2.Thread.Start();
thread3.Thread.Start(15);
while (TrackedThread.Count > 0)
{
Console.WriteLine(TrackedThread.Count);
}
Console.ReadLine();
}
private static void DoNothingForFiveSeconds()
{
Thread.Sleep(5000);
}
private static void DoNothingForTenSeconds()
{
Thread.Sleep(10000);
}
private static void DoNothingForSomeTime(object seconds)
{
Thread.Sleep(1000 * (int)seconds);
}
}
}
Not sure if you can go such a route, but it will accomplish the goal if you're able to incorporate it at an early stage of development.
Is it feasible for you to store thread information in a lookup as you create each thread in your application?
As each thread starts, you can get its ID using AppDomain.GetCurrentThreadId(). Later, you can use this to cross-reference it with the data returned from Process.Threads.
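A rough sketch of that idea (mine, with hypothetical names; note that AppDomain.GetCurrentThreadId is marked obsolete because managed threads are not guaranteed to map 1:1 to OS threads, so treat this as best-effort diagnostics):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

static class ThreadRegistry
{
    private static readonly Dictionary<int, Thread> threadsByOsId = new Dictionary<int, Thread>();

    // Call this at the start of every thread you create.
    public static void RegisterCurrentThread()
    {
#pragma warning disable 618   // AppDomain.GetCurrentThreadId is obsolete
        int osId = AppDomain.GetCurrentThreadId();
#pragma warning restore 618
        lock (threadsByOsId)
        {
            threadsByOsId[osId] = Thread.CurrentThread;
        }
    }

    // Dump name, background flag and CPU time by joining with Process.Threads.
    public static void Dump()
    {
        foreach (ProcessThread pt in Process.GetCurrentProcess().Threads)
        {
            Thread managed;
            lock (threadsByOsId)
            {
                threadsByOsId.TryGetValue(pt.Id, out managed);
            }
            if (managed != null)
            {
                Console.WriteLine("{0} (background: {1}) CPU: {2}",
                    managed.Name, managed.IsBackground, pt.TotalProcessorTime);
            }
        }
    }
}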