Potential deadlock? - C#

public class MyClass
{
    public void DoSomething()
    {
        lock (this)
        {
            // Do something.
        }
    }
}

public class AnotherClass
{
    MyClass myclass = new MyClass();

    public void DoAnotherThing()
    {
        lock (myclass)
        {
            myclass.DoSomething();
        }
    }
}
Will this create a deadlock?
As per my understanding and the articles I have read, it will. Why? Whenever DoSomething() is called, it will try to acquire a lock and wait for lock(myclass) to be released, hence a deadlock. Please confirm my understanding (a little explanation is also requested) and correct me if I am wrong.

I think what the articles you read were trying to tell you is that you shouldn't lock (this) because some other code might also try to lock the same object. This will only happen if two or more threads are involved.
Here's some sample code which demonstrates a deadlock problem. Try running it and look at the result.
Then make the edit suggested on the lock (this) line and try it again.
The deadlock occurs because some code OUTSIDE the class is locking on the same instance of the class that the code INSIDE the class is using for a lock - and without careful documentation and visibility, that's a real possibility.
The moral of this story is that in general you should not lock on anything that is visible outside the class (unless you document carefully how to use the locking for that class), and you should NEVER lock (this).
using System;
using System.Threading.Tasks;

namespace Demo
{
    class MyClass
    {
        public void DoSomething()
        {
            Console.WriteLine("Attempting to enter the DoSomething lock.");

            lock (this) // Change to lock(_locker) to prevent the deadlock.
            {
                Console.WriteLine("In the DoSomething lock.");
            }
        }

        readonly object _locker = new object();
    }

    internal static class Program
    {
        static void Main(string[] args)
        {
            var myClass = new MyClass();

            lock (myClass)
            {
                var task = Task.Run(() => myClass.DoSomething());
                Console.WriteLine("Waiting for the task to complete.");

                if (!task.Wait(1000))
                    Console.WriteLine("ERROR: The task did not complete.");
                else
                    Console.WriteLine("Task completed.");

                Console.WriteLine("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }
}

This cannot create a deadlock, because only one thread is involved. True, a lock on your MyClass object is requested when a lock on it is already held. But C# locks are "recursive", meaning that a single thread can hold multiple locks on the same object, with the result being the same as if it only held the outermost lock: The inner locks immediately succeed, and the object is locked until the last lock is released. (You don't truly understand how useful this is until you're forced to use a language which doesn't have it.)
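For example, this single-threaded nesting completes without blocking; a minimal sketch of my own (not from the thread) to illustrate the re-entrancy:

using System;

class ReentrancyDemo
{
    static readonly object _gate = new object();

    static void Main()
    {
        lock (_gate)            // first acquisition
        {
            lock (_gate)        // same thread re-enters without blocking
            {
                Console.WriteLine("Both nested locks are held by the same thread.");
            }
        }                       // the monitor is only released when the outermost lock exits
        Console.WriteLine("Lock fully released.");
    }
}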
But I agree with everyone above: lock(this) is bad juju.

Related

Concurrency with reading but locking with mutating

I'm looking for a solution that allows multiple threads to read the shared resource (concurrency permitted) but then locks these reading threads once a thread enters a mutating block, to achieve the best of both worlds.
I've looked up this reference but it seems the solution is to lock both reading and writing threads.
class Foo {
    List<string> sharedResource;

    public void reading() // multiple reading threads allowed, concurrency ok, lock this only if a thread enters the mutating block below.
    {
    }

    public void mutating() // this should lock any threads entering this block as well as lock the reading threads above
    {
        lock (this)
        {
        }
    }
}
Is there such a solution in C#?
Edit
All threads entering both GetMultiton and the constructor should get the same instance; I want them to be thread safe.
class Foo : IFoo {
    public static IFoo GetMultiton(string key, Func<IFoo> fooRef)
    {
        if (instances.TryGetValue(key, out IFoo obj))
        {
            return obj;
        }
        return fooRef();
    }

    public Foo(string key) {
        instances.Add(key, this);
    }
}

protected static readonly IDictionary<string, IFoo> instances = new ConcurrentDictionary<string, IFoo>();
Use
Foo.GetMultiton("key1", () => new Foo("key1"));
There is a pre-built class for this behavior: ReaderWriterLockSlim.
class Foo {
    List<string> sharedResource;
    ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void reading() // multiple reading threads allowed, concurrency ok, lock this only if a thread enters the mutating block below.
    {
        _lock.EnterReadLock();
        try
        {
            // Do reading stuff here.
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    public void mutating() // this should lock any threads entering this block as well as lock the reading threads above
    {
        _lock.EnterWriteLock();
        try
        {
            // Do writing stuff here.
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}
Multiple threads can enter the read lock at the same time, but if a thread tries to take the write lock, it blocks until all current readers finish, and then blocks all new readers and writers until the write lock is released.
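A quick usage sketch (my addition, assuming the Foo class above) showing that behavior:

using System.Threading.Tasks;

var foo = new Foo();

// Readers can all hold the read lock at the same time.
var r1 = Task.Run(() => foo.reading());
var r2 = Task.Run(() => foo.reading());

// The writer blocks until the active readers exit, then runs alone,
// and any new readers queue up behind it.
var w = Task.Run(() => foo.mutating());

Task.WaitAll(r1, r2, w);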
With your update you don't need locks at all. Just use GetOrAdd from ConcurrentDictionary:
class Foo : IFoo {
    public static IFoo GetMultiton(string key, Func<IFoo> fooRef)
    {
        return instances.GetOrAdd(key, k => fooRef());
    }

    public Foo(string key) {
        instances.Add(key, this);
    }
}
Note that fooRef() may be called more than once, but only the first one to return will be used as the result for all the threads. If you want fooRef() to be called only once, it requires slightly more complicated code:
class Foo : IFoo {
    public static IFoo GetMultiton(string key, Func<IFoo> fooRef)
    {
        return instances.GetOrAdd(key, k => new Lazy<IFoo>(fooRef)).Value;
    }

    public Foo(string key) {
        instances.Add(key, new Lazy<IFoo>(() => this));
    }
}

protected static readonly IDictionary<string, Lazy<IFoo>> instances = new ConcurrentDictionary<string, Lazy<IFoo>>();
The solution depends on your requirements. Note that ReaderWriterLockSlim is approximately twice as slow as a regular lock in the current .NET Framework, so it pays off mainly when you modify rarely and reading is a fairly heavy operation; otherwise the overhead outweighs the profit. If its performance is not enough, you can try to create a copy of the data, modify it, and atomically swap the reference with the help of the Interlocked class (provided it is not a requirement that every thread sees the most recent data as soon as it has changed).
class Foo
{
    IReadOnlyList<string> sharedResource = new List<string>();

    public void reading()
    {
        // Here you can safely* read from sharedResource
    }

    public void mutating()
    {
        var copyOfData = new List<string>(sharedResource);
        // modify copyOfData here
        // The following line is correct only in the case of a single writer:
        Interlocked.Exchange(ref sharedResource, copyOfData);
    }
}
Benefits of the lock-free approach:
We have no locks on reads, so we get maximum performance.
Drawbacks:
We have to copy the data => memory traffic (allocations, garbage collection).
A reader thread may not observe the most recent update (if it reads the reference before it was updated).
If a reader uses the sharedResource reference multiple times, it must first copy the reference to a local variable once (the update below uses Volatile.Read for this), if those usages assume it is the same collection.
If sharedResource is a list of mutable objects, we must be careful when updating those objects in mutating, since a reader might be using them at the same moment => in that case it's better to make copies of these objects as well.
If there are several updater threads, we must use Interlocked.CompareExchange instead of Interlocked.Exchange in mutating, inside a retry loop.
So, if you want to go lock-free, then it's better to use immutable objects. And in any case you will pay with memory allocations/GC for the performance.
UPDATE
Here is a version that allows multiple writers as well:
class Foo
{
    IReadOnlyList<string> sharedResource = new List<string>();

    public void reading()
    {
        // Here you can safely* read from sharedResource
    }

    public void mutating()
    {
        IReadOnlyList<string> referenceToCollectionForCopying;
        List<string> copyOfData;
        do
        {
            referenceToCollectionForCopying = Volatile.Read(ref sharedResource);
            copyOfData = new List<string>(referenceToCollectionForCopying);
            // modify copyOfData here
        } while (!ReferenceEquals(Interlocked.CompareExchange(ref sharedResource, copyOfData,
                     referenceToCollectionForCopying), referenceToCollectionForCopying));
    }
}
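As an aside that is not part of the original answer: if you can take a dependency on the System.Collections.Immutable package, ImmutableInterlocked.Update wraps the same compare-and-swap retry loop for you. A rough sketch:

using System.Collections.Immutable;
using System.Threading;

class Foo
{
    ImmutableList<string> sharedResource = ImmutableList<string>.Empty;

    public void reading()
    {
        // Safe to read: an ImmutableList is never mutated in place, only replaced.
        var snapshot = Volatile.Read(ref sharedResource);
    }

    public void mutating(string item)
    {
        // Retries the transform until the swap succeeds, like the manual CAS loop above.
        ImmutableInterlocked.Update(ref sharedResource, list => list.Add(item));
    }
}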

How to lock a dictionary?

I have a static dictionary in a multi-threaded application.
Class A reads the dictionary and class B removes from it.
I want to lock the dictionary when removing from it or reading from it, to prevent access problems in concurrent situations.
How can I lock the dictionary?
public static Dictionary<string, Thread> DicThreads = new Dictionary<string, Thread>();

class A
{
    private void MethodA()
    {
        if (DicThreads.ContainsKey(key))
        {
            if (DicThreads[key] == null || DicThreads[key].ThreadState == ThreadState.Stopped)
            {
                //--- Do something
            }
        }
    }
}

class B
{
    private void MethodB()
    {
        DicThreads.Remove(key);
    }
}
You could use a ConcurrentDictionary as pwas suggests. If you want to synchronise the dictionary that you have, you use the lock keyword.
You should generally use a separate object for the synchronising, and don't expose that object outside your scope. That ensures that code outside the block can't use the same object for locks and cause conflicts.
public static Dictionary<string, Thread> DicThreads = new Dictionary<string, Thread>();
private static object sync = new Object();

class A {
    private void MethodA() {
        lock (sync) {
            if (DicThreads.ContainsKey(key)) {
                if (DicThreads[key] == null || DicThreads[key].ThreadState == ThreadState.Stopped) {
                    //--- Do something
                }
            }
        }
    }
}

class B {
    private void MethodB() {
        lock (sync) {
            DicThreads.Remove(key);
        }
    }
}
You can use lock:
lock (DicThreads)
{
    // Any code here is synchronized with other
    // (including this block on other threads)
    // lock(DicThreads) blocks
}
However, if you have a dictionary of threads in your application, you are probably doing it wrong. Read all about the Task-Based Asynchronous Pattern (TAP) here.
Stephen Cleary has written a useful AsyncCollection<T> class, available in the Nito.AsyncEx package on NuGet.
If you need an asynchronous collection, it's a good candidate; it actually takes a ConcurrentBag/Stack/Queue or some other IProducerConsumerCollection to provide its backing state.
Remember, as stated, you should not be managing the threads yourself, as illustrated in the question.
Use a ConcurrentDictionary<TKey, TValue>.
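For the dictionary in the question, that could look roughly like this (a sketch of mine, not the original poster's code; the class and method names are just for illustration). TryGetValue and TryRemove replace the check-then-act pairs, which would otherwise race even on a concurrent dictionary:

using System.Collections.Concurrent;
using System.Threading;

public static class ThreadRegistry
{
    public static readonly ConcurrentDictionary<string, Thread> DicThreads =
        new ConcurrentDictionary<string, Thread>();

    // Reader (class A): one TryGetValue replaces ContainsKey + indexer.
    public static bool IsStopped(string key)
    {
        return DicThreads.TryGetValue(key, out Thread t) &&
               (t == null || t.ThreadState == ThreadState.Stopped);
    }

    // Remover (class B):
    public static void Remove(string key)
    {
        DicThreads.TryRemove(key, out _);
    }
}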

Calling method after destructor runs

I've read about this, but I forgot where I saw the example. It looks like this:
using System;
using System.Threading;

public class Program
{
    private static void Main()
    {
        new SomeClass(10).Foo();
    }
}

public class SomeClass
{
    public int I;

    public SomeClass(int input)
    {
        I = input;
        Console.WriteLine("I = {0}", I);
    }

    ~SomeClass()
    {
        Console.WriteLine("deleted");
    }

    public void Foo()
    {
        Thread.Sleep(2000);
        Console.WriteLine("Foo");
    }
}
So the output should be:
I = 10
deleted
Foo
Why? Because of the optimizer: it sees that the method doesn't use any field, so it could collect the object before the method is called. So why doesn't it do it here?
I'll post an example if I find it.
So I found the source: Pro .NET Performance by Sasha Goldshtein, Dima Zurbalev, and Ido Flatow:
Another problem has to do with the asynchronous nature of finalization, which occurs in a dedicated thread. A finalizer might attempt to acquire a lock that is held by the application code, and the application might be waiting for finalization to complete by calling GC.WaitForPendingFinalizers(). The only way to resolve this issue is to acquire the lock with a timeout and fail gracefully if it can't be acquired. Yet another scenario is caused by the garbage collector's eagerness to reclaim memory as soon as possible. Consider the following code which represents a naïve implementation of a File class with a finalizer that closes the file handle:
class File3
{
    Handle handle;

    public File3(string filename)
    {
        handle = new Handle(filename);
    }

    public byte[] Read(int bytes)
    {
        return Util.InternalRead(handle, bytes);
    }

    ~File3()
    {
        handle.Close();
    }
}

class Program
{
    static void Main()
    {
        File3 file = new File3("File.txt");
        byte[] data = file.Read(100);
        Console.WriteLine(Encoding.ASCII.GetString(data));
    }
}
This innocent piece of code can break in a very nasty manner. The Read method can take a long time to complete, and it only uses the handle contained within the object, and not the object itself. The rules for determining when a local variable is considered an active root dictate that the local variable held by the client is no longer relevant after the call to Read has been dispatched. Therefore, the object is considered eligible for garbage collection and its finalizer might execute before the Read method returns! If this happens, we might be closing the handle while it is being used, or just before it is used.
But I can't reproduce this behaviour:
public void Foo()
{
    Thread.Sleep(1000);
    Console.WriteLine("Foo");
}
Methods that don't use any instance member of a class should be declared static. This has several advantages; for one, it is very helpful to a reader of the code, because it unambiguously states that the method doesn't mutate the object state.
And for another, it has the great advantage that you'll now understand why there's no discrepancy in seeing the method run after the object got finalized. The GC just doesn't have any reason to keep this alive; there are no references left to the object when Foo() starts executing. So there is no trouble at all getting it collected and finalized.
You'll find more background info on how the jitter reports object references to the garbage collector in this answer.
Anyway, I found the way to reproduce it; I just should have read more attentively :) :
using System;
using System.Threading;

public class Program
{
    private static void Main()
    {
        new Thread(() =>
        {
            Thread.Sleep(100);
            GC.Collect();
        }).Start();

        new SomeClass(10).Foo();
    }
}

public class SomeClass
{
    public int I;

    public SomeClass(int input)
    {
        I = input;
        Console.WriteLine("I = {0}", I);
    }

    ~SomeClass()
    {
        Console.WriteLine("deleted");
    }

    public void Foo()
    {
        Thread.Sleep(1000);
        Console.WriteLine("Foo");
    }
}
So in this case the destructor will be called before the Foo method completes.
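As an aside that isn't in the original answers: if you do need the instance to stay alive until the method has finished, the usual fix is GC.KeepAlive, which keeps the reference treated as a live root up to the point of the call. A minimal sketch, reusing the SomeClass above:

private static void Main()
{
    new Thread(() =>
    {
        Thread.Sleep(100);
        GC.Collect();
    }).Start();

    var obj = new SomeClass(10);
    obj.Foo();

    // 'obj' is considered reachable until this call, so the finalizer
    // cannot run while Foo is still executing.
    GC.KeepAlive(obj);
}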
The problem is because you're using threading in Foo. You tell the code to wait for 1 second, but you don't tell it to wait for the second to be up before executing everything else. Therefore the original thread executes the destructor before Foo finishes.
A better way of writing Foo would be something like this:
public void Foo()
{
    var mre = new ManualResetEvent(false);
    mre.WaitOne(1000);
    Console.WriteLine("Foo");
}
Using the ManualResetEvent will force the code to completely pause until, in this case, the timeout is hit. After which the code will continue.

How to prevent a deadlock when you need to lock multiple objects

Imagine this code:
You have two arrays, and you need to lock both of them at the same moment (for whatever reason - you just need to keep both of them locked because they somehow depend on each other) - you could nest the locks:
lock (array1)
{
    lock (array2)
    {
        // ... do your code
    }
}
but this may result in a deadlock if someone in another part of your code does
lock (array2)
{
    lock (array1)
    {
        // ... do your code
    }
}
and array1 was locked, the execution context switched, and then array2 was locked by the second thread.
Is there a way to lock them atomically? Such as:
lock_array(array1, array2)
{
....
}
I know I could just create some extra "lock object" and lock that instead of both arrays everywhere in my code, but that just doesn't seem correct to me...
In general you should avoid locking on publicly accessible members (the arrays in your case). You'd rather have a private static object you'd lock on.
You should never allow locking on a publicly accessible variable, as Darin said. For example:
public class Foo
{
    public object Locker = new object();
}

public class Bar
{
    public void DoStuff()
    {
        var foo = new Foo();
        lock (foo.Locker)
        {
            // doing something here
        }
    }
}
Rather, do something like this:
public class Foo
{
    private List<int> toBeProtected = new List<int>();
    private object locker = new object();

    public void Add(int value)
    {
        lock (locker)
        {
            toBeProtected.Add(value);
        }
    }
}
The reason for this is that if you have multiple threads accessing multiple public synchronization constructs, you run the very real possibility of deadlock, and you then have to be very careful about how you code. If you are making your library available to others, can you be sure that you can grab the lock? Perhaps someone using your library has also grabbed the lock, and between the two of you you have worked your way into a deadlock scenario. This is the reason Microsoft recommends not using SyncRoot.
I am not sure what you mean by locking the arrays.
You can easily perform operations on both arrays inside a single lock:
static readonly object a = new object();

lock (a)
{
    // Perform operations on both arrays
}
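If you really do need to hold both array locks at once rather than funnelling everything through one lock object, the classic alternative (not covered in the answers above) is to make every piece of code acquire the two locks in the same agreed order, so the cycle from the question cannot form. A rough sketch of mine, assuming a helper like this is the only place the locks are taken:

using System;
using System.Runtime.CompilerServices;

static class Locks
{
    // Acquires both locks in a deterministic order, so two callers passing the
    // same pair in opposite order still nest their locks the same way.
    // (RuntimeHelpers.GetHashCode can collide; production code would need a
    // tie-breaker such as a per-object sequence number.)
    public static void LockBoth(object first, object second, Action action)
    {
        object a = first, b = second;
        if (RuntimeHelpers.GetHashCode(a) > RuntimeHelpers.GetHashCode(b))
            (a, b) = (b, a);

        lock (a)
        lock (b)
        {
            action();
        }
    }
}

Usage would then be Locks.LockBoth(array1, array2, () => { /* do your code */ }); everywhere both arrays are needed together.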

What's the use of the SyncRoot pattern?

I'm reading a C# book that describes the SyncRoot pattern. It shows:
void doThis()
{
    lock (this) { ... }
}

void doThat()
{
    lock (this) { ... }
}
and compares to the SyncRoot pattern:
object syncRoot = new object();

void doThis()
{
    lock (syncRoot) { ... }
}

void doThat()
{
    lock (syncRoot) { ... }
}
However, I don't really understand the difference here; it seems that in both cases both methods can only be accessed by one thread at a time.
The book describes it like this: "... because the object of the instance can also be used for synchronized access from the outside, and you can't control this from the class itself, you can use the SyncRoot pattern." Eh? 'Object of the instance'?
Can anyone tell me the difference between the two approaches above?
If you have an internal data structure that you want to prevent simultaneous access to by multiple threads, you should always make sure the object you're locking on is not public.
The reasoning behind this is that a public object can be locked by anyone, and thus you can create deadlocks because you're not in total control of the locking pattern.
This means that locking on this is not an option, since anyone can lock on that object. Likewise, you should not lock on something you expose to the outside world.
Which means that the best solution is to use an internal object, and thus the tip is to just use Object.
Locking data structures is something you really need to have full control over, otherwise you risk setting up a scenario for deadlocking, which can be very problematic to handle.
The actual purpose of this pattern is implementing correct synchronization with wrappers hierarchy.
For example, if class WrapperA wraps an instance of ClassThanNeedsToBeSynced, and class WrapperB wraps the same instance of ClassThanNeedsToBeSynced, you can't lock on WrapperA or WrapperB, since if you lock on WrapperA, a lock on WrapperB won't wait.
For this reason you must lock on wrapperAInst.SyncRoot and wrapperBInst.SyncRoot, which delegate lock to ClassThanNeedsToBeSynced's one.
Example:
public interface ISynchronized
{
    object SyncRoot { get; }
}

public class SynchronizationCriticalClass : ISynchronized
{
    public object SyncRoot
    {
        // You can return this, because this class wraps nothing.
        get { return this; }
    }
}

public class WrapperA : ISynchronized
{
    ISynchronized subClass;

    public WrapperA(ISynchronized subClass)
    {
        this.subClass = subClass;
    }

    public object SyncRoot
    {
        // You should return the SyncRoot of the underlying class.
        get { return subClass.SyncRoot; }
    }
}

public class WrapperB : ISynchronized
{
    ISynchronized subClass;

    public WrapperB(ISynchronized subClass)
    {
        this.subClass = subClass;
    }

    public object SyncRoot
    {
        // You should return the SyncRoot of the underlying class.
        get { return subClass.SyncRoot; }
    }
}

// Run
class MainClass
{
    delegate void DoSomethingAsyncDelegate(ISynchronized obj);

    public static void Main(string[] args)
    {
        SynchronizationCriticalClass rootClass = new SynchronizationCriticalClass();
        WrapperA wrapperA = new WrapperA(rootClass);
        WrapperB wrapperB = new WrapperB(rootClass);

        // Do some async work with them to test synchronization.
        // Works correctly.
        DoSomethingAsyncDelegate work = new DoSomethingAsyncDelegate(DoSomethingAsyncCorrectly);
        work.BeginInvoke(wrapperA, null, null);
        work.BeginInvoke(wrapperB, null, null);

        // Works incorrectly.
        work = new DoSomethingAsyncDelegate(DoSomethingAsyncIncorrectly);
        work.BeginInvoke(wrapperA, null, null);
        work.BeginInvoke(wrapperB, null, null);
    }

    static void DoSomethingAsyncCorrectly(ISynchronized obj)
    {
        lock (obj.SyncRoot)
        {
            // Do something with obj
        }
    }

    // This works incorrectly! obj is locked, but not the underlying object!
    static void DoSomethingAsyncIncorrectly(ISynchronized obj)
    {
        lock (obj)
        {
            // Do something with obj
        }
    }
}
Here is an example:
class ILockMySelf
{
    public void doThat()
    {
        lock (this)
        {
            // Don't actually need anything here.
            // In this example this will never be reached.
        }
    }
}

class WeveGotAProblem
{
    ILockMySelf anObjectIShouldntUseToLock = new ILockMySelf();

    public void doThis()
    {
        lock (anObjectIShouldntUseToLock)
        {
            // doThat will wait for the lock to be released to finish the thread
            var thread = new Thread(x => anObjectIShouldntUseToLock.doThat());
            thread.Start();

            // doThis will wait for the thread to finish to release the lock
            thread.Join();
        }
    }
}
You see that the second class can use an instance of the first one in a lock statement. This leads to a deadlock in the example.
The correct SyncRoot implementation is:
object syncRoot = new object();

void doThis()
{
    lock (syncRoot) { ... }
}

void doThat()
{
    lock (syncRoot) { ... }
}
As syncRoot is a private field, you don't have to worry about external use of this object.
Here's one other interesting thing related to this topic:
Questionable value of SyncRoot on Collections (by Brad Adams):
You’ll notice a SyncRoot property on many of the Collections in System.Collections. In retrospeced (sic), I think this property was a mistake. Krzysztof Cwalina, a Program Manger on my team, just sent me some thoughts on why that is – I agree with him:
We found the SyncRoot-based synchronization APIs to be insufficiently flexible for most scenarios. The APIs allow for thread safe access to a single member of a collection. The problem is that there are numerous scenarios where you need to lock on multiple operations (for example remove one item and add another). In other words, it’s usually the code that uses a collection that wants to choose (and can actually implement) the right synchronization policy, not the collection itself. We found that SyncRoot is actually used very rarely and in cases where it is used, it actually does not add much value. In cases where it’s not used, it is just an annoyance to implementers of ICollection.
Rest assured we will not make the same mistake as we build the generic versions of these collections.
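To make that point concrete (my example, not from the quoted post): even if every individual call were synchronized through SyncRoot, a compound operation still needs one lock held by the caller across both calls, so the caller has to own the synchronization policy anyway:

using System.Collections.Generic;

class CompoundOperationExample
{
    private readonly List<string> items = new List<string>();
    private readonly object gate = new object();

    public void RemoveIfPresent(string item)
    {
        // Locking Contains and Remove separately would still let another
        // thread interleave between them; one lock must span the whole
        // check-then-act sequence.
        lock (gate)
        {
            if (items.Contains(item))
                items.Remove(item);
        }
    }
}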
See this article by Jeff Richter. More specifically, this example, which demonstrates that locking on "this" can cause a deadlock:
using System;
using System.Threading;

class App
{
    static void Main()
    {
        // Construct an instance of the App object
        App a = new App();

        // This malicious code enters a lock on
        // the object but never exits the lock
        Monitor.Enter(a);

        // For demonstration purposes, let's release the
        // root to this object and force a garbage collection
        a = null;
        GC.Collect();

        // For demonstration purposes, wait until all Finalize
        // methods have completed their execution - deadlock!
        GC.WaitForPendingFinalizers();

        // We never get to the line of code below!
        Console.WriteLine("Leaving Main");
    }

    // This is the App type's Finalize method
    ~App()
    {
        // For demonstration purposes, have the CLR's
        // Finalizer thread attempt to lock the object.
        // NOTE: Since the Main thread owns the lock,
        // the Finalizer thread is deadlocked!
        lock (this)
        {
            // Pretend to do something in here...
        }
    }
}
Another concrete example:
class Program
{
    public class Test
    {
        public string DoThis()
        {
            lock (this)
            {
                return "got it!";
            }
        }
    }

    public delegate string Something();

    static void Main(string[] args)
    {
        var test = new Test();
        Something call = test.DoThis;

        // Holding lock from _outside_ the class
        IAsyncResult async;
        lock (test)
        {
            // Calling method on another thread.
            async = call.BeginInvoke(null, null);
        }
        async.AsyncWaitHandle.WaitOne();
        string result = call.EndInvoke(async);

        lock (test)
        {
            async = call.BeginInvoke(null, null);
            async.AsyncWaitHandle.WaitOne();
        }
        result = call.EndInvoke(async);
    }
}
In this example, the first call will succeed, but if you trace it in the debugger you will see that the call to DoThis blocks until the lock is released. The second call will deadlock, since the Main thread is holding the monitor lock on test.
The issue is that Main can lock the object instance, which means that it can keep the instance from doing anything that the object thinks should be synchronized. The point is that the object itself knows what requires locking, and outside interference is just asking for trouble. That's why the pattern is to have a private member variable that you can use exclusively for synchronization, without having to worry about outside interference.
The same goes for the equivalent static pattern:
class Program
{
    public static class Test
    {
        public static string DoThis()
        {
            lock (typeof(Test))
            {
                return "got it!";
            }
        }
    }

    public delegate string Something();

    static void Main(string[] args)
    {
        Something call = Test.DoThis;

        // Holding lock from _outside_ the class
        IAsyncResult async;
        lock (typeof(Test))
        {
            // Calling method on another thread.
            async = call.BeginInvoke(null, null);
        }
        async.AsyncWaitHandle.WaitOne();
        string result = call.EndInvoke(async);

        lock (typeof(Test))
        {
            async = call.BeginInvoke(null, null);
            async.AsyncWaitHandle.WaitOne();
        }
        result = call.EndInvoke(async);
    }
}
Use a private static object to synchronize on, not the Type.
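In other words, something along these lines (a minimal sketch of that advice):

public static class Test
{
    // Private and static: nothing outside this class can take the lock,
    // unlike typeof(Test), which is reachable from any code in the process.
    private static readonly object Sync = new object();

    public static string DoThis()
    {
        lock (Sync)
        {
            return "got it!";
        }
    }
}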
