(invalid way to) avoid double checked locks in C#

Is this a valid and optimized way to avoid double checked locks:
public class SomeBaseClass
{
    protected static object InitializeLock = new object();
    protected static bool IsInitialized = false;
    public void SomeFunction()
    {
        if (!IsInitialized)
        {
            System.Threading.Thread.MemoryBarrier();
            lock (InitializeLock)
            {
                // do init stuff
                IsInitialized = true;
            }
        }
        // Do stuff that has to happen when the function is called
    }
}
With this being the double-checked alternative:
public class SomeBaseClass
{
    protected static object InitializeLock = new object();
    protected static bool IsInitialized = false;
    public void SomeFunction()
    {
        if (!IsInitialized)
        {
            lock (InitializeLock)
            {
                if (!IsInitialized)
                {
                    // do init stuff
                    IsInitialized = true;
                }
            }
        }
        // Do stuff that has to happen when the function is called
    }
}

No, because a thread switch can happen right after two threads have both passed the if (!IsInitialized) check.
There is a great article where this topic is explained in the context of creating a singleton: http://csharpindepth.com/Articles/General/Singleton.aspx (by Jon Skeet)

This is the second time this question has come up today. See:
C# manual lock/unlock
The short answer to your question is no, that is absolutely not valid. If the non-volatile read of "IsInitialized" is reordered with respect to the non-volatile read of whatever state is being initialized then the code path never has a memory barrier on it of any sort, and therefore the reads can be re-ordered, and therefore "IsInitialized" can be true while the out-of-date cached uninitialized state is still good.
What you have to do is either (1) don't do double-checked locking; it is dangerous, or (2) ensure that there is always at least one volatile read of IsInitialized to prevent reads of the initialized state being moved backwards in time.
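To make option (2) concrete, here is a minimal sketch (names follow the question; the volatile modifier supplies the ordered read the original code is missing, and the InitCount field is added here purely to make the "runs once" guarantee observable):

```csharp
using System;
using System.Threading;

public class SomeBaseClass
{
    private static readonly object InitializeLock = new object();
    // volatile: the unlocked read below is an acquire read, so the
    // initialized state cannot be observed out of date once the flag is true.
    private static volatile bool IsInitialized = false;
    public static int InitCount; // only here to make "runs once" observable

    public void SomeFunction()
    {
        if (!IsInitialized)              // first check, no lock taken
        {
            lock (InitializeLock)
            {
                if (!IsInitialized)      // second check, inside the lock
                {
                    Interlocked.Increment(ref InitCount); // "do init stuff"
                    IsInitialized = true;
                }
            }
        }
        // Do stuff that has to happen when the function is called
    }
}
```

The inner check is what prevents two threads that both passed the outer check from each running the initialization.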

The MemoryBarrier call in your first example is completely superfluous since the subsequent lock call creates an implicit memory barrier anyway.
Even if you moved the memory barrier before the first IsInitialized check, the code is still unsafe: there's a window for the thread to be interrupted between the IsInitialized check and the lock statement. That's why you generally need a second IsInitialized check inside the lock block.

You can help the check by making the IsInitialized flag volatile, which prevents other threads from caching it (a very minor improvement since you're locking anyway), but you still need to re-check the flag after you've acquired the lock. In other words, you can't avoid the double-checked lock unless you use some tricky initialization.
However, you can do away with the locks if you re-design your class and if you go to an optimistic approach of changing the state... this should work like a charm:
public class Internals
{
    public readonly bool IsInitialized; // must be readable by SomeBaseClass
    public Internals(bool initialized)
    {
        IsInitialized = initialized;
    }
}
public class SomeBaseClass
{
    protected static Internals internals = new Internals(false);
    public void SomeFunction()
    {
        Internals previous;
        do
        {
            previous = internals; // snapshot the current state
        } while (!previous.IsInitialized &&
                 previous != Interlocked.CompareExchange(ref internals, new Internals(true), previous));
    }
}
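For completeness, the framework also ships this pattern ready-made: System.Threading.LazyInitializer. A hedged sketch (the string state and the FactoryRuns counter are stand-ins invented here for illustration; the real initialization would produce whatever state your class needs):

```csharp
using System;
using System.Threading;

public class SomeBaseClass
{
    private static string _state;    // stand-in for the real initialized state
    private static bool _initialized;
    private static object _initLock; // allocated on demand by EnsureInitialized
    public static int FactoryRuns;   // only here to make "runs once" observable

    public void SomeFunction()
    {
        // This overload takes the lock and double-checks internally,
        // guaranteeing the factory runs exactly once across all threads.
        LazyInitializer.EnsureInitialized(ref _state, ref _initialized, ref _initLock,
            () => { Interlocked.Increment(ref FactoryRuns); return "initialized"; });
        // Do stuff that has to happen when the function is called
    }
}
```

Note that the simpler EnsureInitialized(ref target, valueFactory) overload may invoke the factory more than once under a race (it only publishes one result), so the four-argument overload is the one that matches the question's "initialize exactly once" intent.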

Related

Mysterious deadlock corruption with ReaderWriterLockSlim

I wrote a fairly trivial wrapper around ReaderWriterLockSlim:
class SimpleReaderWriterLock
{
    private class Guard : IDisposable
    {
        public Guard(Action action)
        {
            _Action = action;
        }
        public void Dispose()
        {
            _Action?.Invoke();
            _Action = null;
        }
        private Action _Action;
    }

    private readonly ReaderWriterLockSlim _Lock
        = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

    public IDisposable ReadLocked()
    {
        _Lock.EnterReadLock();
        return new Guard(_Lock.ExitReadLock);
    }
    public IDisposable WriteLocked()
    {
        _Lock.EnterWriteLock();
        return new Guard(_Lock.ExitWriteLock);
    }
    public IDisposable UpgradableReadLocked()
    {
        _Lock.EnterUpgradeableReadLock();
        return new Guard(_Lock.ExitUpgradeableReadLock);
    }
}
(This is probably not the most efficient thing in the world, so I am interested in suggested improvements to this class as well.)
It is used like so:
using (_Lock.ReadLocked())
{
    // protected code
}
(There are a significant number of reads happening very frequently, and almost never any writes.)
This always seems to work as expected in Release mode and in production. However, in Debug mode under the debugger, the process very occasionally deadlocks in a peculiar state: it has called EnterReadLock, and the lock itself is not held by anything (the owner is 0, and the properties that report whether it has any readers/writers/waiters say it does not), but the spin lock inside is locked, and it spins there endlessly.
I don't know what triggers this, except that it seems to happen more often if I'm stopping at breakpoints and single-stepping (in completely unrelated code).
If I manually toggle the spinlock _isLocked field back to 0, then the process resumes and everything seems to work as expected afterwards.
Is there something wrong with the code or with the lock itself? Is the debugger doing something to accidentally provoke deadlocking the spinlock? (I'm using .NET 4.6.2.)
I've read an article that indicates that ThreadAbortException can be a problem for these locks -- and my code does have calls to Abort() in some places -- but I don't think those involve code which calls into this locked code (though I could be mistaken) and if the problem were that the lock had been acquired and never released then it should appear differently than what I'm seeing. (Though as an aside, the framework docs specifically ban acquiring a lock in a constrained region, as encouraged in that article.)
I can change the code to avoid the lock indirection, but aren't using guards the recommended practice in general?
Since the using statement is not abort-safe, you could try replacing it with the abort-safe workaround suggested in the linked article. Something like this:
public void WithReadLock(Action action)
{
    var lockAcquired = false;
    try
    {
        try { }
        finally
        {
            // the Enter call and the flag assignment sit in a finally block,
            // which a ThreadAbortException cannot interrupt mid-way
            _Lock.EnterReadLock();
            lockAcquired = true;
        }
        action();
    }
    finally
    {
        if (lockAcquired) _Lock.ExitReadLock();
    }
}
Usage:
var locker = new SimpleReaderWriterLock();
locker.WithReadLock(() =>
{
    // protected code
});
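On the side question about improving the wrapper itself (a sketch, not benchmarked here): returning a struct instead of the allocated Guard avoids one heap allocation per acquisition, and using binds Dispose without boxing when the variable is statically typed as the struct. The trade-off is that copying the struct and disposing both copies would release the lock twice, so the guard should only ever live inside a using statement:

```csharp
using System;
using System.Threading;

class SimpleReaderWriterLock
{
    private readonly ReaderWriterLockSlim _Lock =
        new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);

    // Allocation-free guard; acquires the read lock in the constructor.
    public readonly struct ReadGuard : IDisposable
    {
        private readonly ReaderWriterLockSlim _lock;
        public ReadGuard(ReaderWriterLockSlim rwLock)
        {
            _lock = rwLock;
            _lock.EnterReadLock();
        }
        public void Dispose() => _lock.ExitReadLock();
    }

    public ReadGuard ReadLocked() => new ReadGuard(_Lock);
}
```

Usage is unchanged: using (locker.ReadLocked()) { /* protected code */ }. Since ReadLocked() now returns the struct type rather than IDisposable, the compiler calls Dispose directly on it with no boxing.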

how to prevent a deadlock when you need to lock multiple objects

Image this code:
You have two arrays, and you need to lock both of them at the same moment (for whatever reason; say they somehow depend on each other, so both must stay locked together). You could nest the locks:
lock (array1)
{
    lock (array2)
    {
        // ... do your code
    }
}
but this may result in a deadlock if, in some other part of your code, someone does
lock (array2)
{
    lock (array1)
    {
        // ... do your code
    }
}
and array1 was locked, the execution context switched, and then array2 was locked by the second thread.
Is there a way to lock them atomically? Something like:
lock_array(array1, array2)
{
    // ....
}
I know I could just create some extra "lock object" and lock that instead of both arrays everywhere in my code, but that just doesn't seem correct to me...
In general you should avoid locking on publicly accessible members (the arrays in your case). You'd rather have a private static object you'd lock on.
You should never allow locking on a publicly accessible variable, as Darin said. For example:
public class Foo
{
    public object Locker = new object();
}
public class Bar
{
    public void DoStuff()
    {
        var foo = new Foo();
        lock (foo.Locker)
        {
            // doing something here
        }
    }
}
Rather, do something like this:
public class Foo
{
    private List<int> toBeProtected = new List<int>();
    private object locker = new object();
    public void Add(int value)
    {
        lock (locker)
        {
            toBeProtected.Add(value);
        }
    }
}
The reason for this is that if you have multiple threads accessing multiple public synchronization constructs, then you run the very real possibility of deadlock, and you have to be very careful about how you code. If you are making your library available to others, can you be sure that you can grab the lock? Perhaps someone using your library has also grabbed the lock, and between the two of you, you have worked your way into a deadlock scenario. This is the reason Microsoft recommends not using SyncRoot.
I am not sure what you mean by locking the arrays. You can easily perform operations on both arrays inside a single lock:
static readonly object a = new object();
lock (a)
{
    // Perform operation on both arrays
}
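Beyond a single shared lock object, the other classic fix is to impose a total order on the locks and have every code path acquire them in that order, so the circular wait can never form. A hedged sketch (LockBoth and its ordering rule are invented here for illustration; ordering by RuntimeHelpers.GetHashCode works because that value is stable for an object's lifetime, and the tiebreak path is deliberately simplified):

```csharp
using System;
using System.Runtime.CompilerServices;

static class MultiLock
{
    private static readonly object TieBreaker = new object();

    public static void LockBoth(object a, object b, Action action)
    {
        int ha = RuntimeHelpers.GetHashCode(a);
        int hb = RuntimeHelpers.GetHashCode(b);
        if (ha == hb && !ReferenceEquals(a, b))
        {
            // Rare hash tie: serialize such pairs via a global lock
            // (simplified tiebreak, see the note above).
            lock (TieBreaker) { lock (a) { lock (b) { action(); } } }
        }
        else
        {
            object first = ha <= hb ? a : b;
            object second = ha <= hb ? b : a;
            // Every caller nests the two monitors in the same order,
            // so the lock(array1)/lock(array2) call sites can no longer
            // deadlock against the lock(array2)/lock(array1) ones.
            lock (first) { lock (second) { action(); } }
        }
    }
}
```

With this, both call sites from the question become MultiLock.LockBoth(array1, array2, () => { /* ... do your code */ }); regardless of the argument order they were written with.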

Multiple locks locking the same functions C# .Net

I have a simple question about lock.
Are Process1 and Process2 the same because they are eventually locking the LongProcess?
Thank you.
private static readonly object _Locker = new object();
public void Process1()
{
    lock (_LockerA)
    {
        LongProcess();
    }
}
public void Process2()
{
    if (curType == A)
        ProcessTypeA();
    else if (curType == B)
        ProcessTypeB();
}
private static readonly object _LockerA = new object();
public void ProcessTypeA()
{
    lock (_LockerA)
    {
        LongProcess();
    }
}
private static readonly object _LockerB = new object();
public void ProcessTypeB()
{
    lock (_LockerB)
    {
        LongProcess();
    }
}
public void LongProcess()
{
}
No, they are not the same. If you lock against a different object than an already existing lock, both code paths are allowed to proceed. So, when Process2 takes the curType == B path, the lock uses the _LockerB object; if one of the other methods locking on the _LockerA object runs at the same time, it will still be allowed to enter LongProcess.
Process1 and Process2 have the potential to lock on the same object, but they are definitely not the same. Locks on the same object are, however, allowed within the same call stack (also referred to as recursive locking, in the case where Process1 invokes Process2); this could more accurately be described as dependent locking.
Your question is however fairly vague so you'll have to elaborate on what you mean by the same...
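To make the sharing concrete, here is a small probe (illustrative only; LockProbe is not part of the question's code): while one thread holds _LockerA, another thread's non-blocking attempt on _LockerA fails, but _LockerB remains freely available, which is exactly why ProcessTypeB can run LongProcess concurrently with Process1.

```csharp
using System;
using System.Threading;

class LockProbe
{
    static readonly object _LockerA = new object();
    static readonly object _LockerB = new object();

    public static void Main()
    {
        bool otherThreadGotA = true;
        bool thisThreadGotB = false;
        lock (_LockerA)
        {
            // A different thread cannot take _LockerA while we hold it...
            var t = new Thread(() => otherThreadGotA = Monitor.TryEnter(_LockerA));
            t.Start();
            t.Join();
            // ...but _LockerB is an unrelated monitor and is still available.
            thisThreadGotB = Monitor.TryEnter(_LockerB);
            if (thisThreadGotB) Monitor.Exit(_LockerB);
        }
        Console.WriteLine($"{otherThreadGotA} {thisThreadGotB}"); // False True
    }
}
```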

Properly locking a List<T> in MultiThreaded Scenarios?

Okay, I just can't get my head around multi-threading scenarios properly. Sorry for asking a similar question again, I'm just seeing many different "facts" around the internet.
public static class MyClass
{
    private static List<string> _myList = new List<string>();
    private static bool _record;
    public static void StartRecording()
    {
        _myList.Clear();
        _record = true;
    }
    public static IEnumerable<string> StopRecording()
    {
        _record = false;
        // Return a read-only copy of the list data
        var result = new List<string>(_myList).AsReadOnly();
        _myList.Clear();
        return result;
    }
    public static void DoSomething()
    {
        if (_record) _myList.Add("Test");
        // More, but unrelated actions
    }
}
The idea is that if Recording is activated, calls to DoSomething() get recorded in an internal List, and returned when StopRecording() is called.
My specification is this:
StartRecording is not considered Thread-Safe. The user should call this while no other Thread is calling DoSomething(). But if it somehow could be, that would be great.
StopRecording is also not officially thread-safe. Again, it would be great if it could be, but that is not a requirement.
DoSomething has to be thread-safe
The usual way seems to be:
public static void DoSomething()
{
    object _lock = new object();
    lock (_lock)
    {
        if (_record) _myList.Add("Test");
    }
    // More, but unrelated actions
}
Alternatively, declaring a static variable:
private static object _lock;
public static void DoSomething()
{
    lock (_lock)
    {
        if (_record) _myList.Add("Test");
    }
    // More, but unrelated actions
}
However, this answer says that this does not prevent other code from accessing it.
So I wonder
How would I properly lock a list?
Should I create the lock object in my function or as a static class variable?
Can I wrap the functionality of Start and StopRecording in a lock-block as well?
StopRecording() does two things: Set a boolean variable to false (to prevent DoSomething() from adding more stuff) and then copying the list to return a copy of the data to the caller). I assume that _record = false; is atomic and will be in effect immediately? So normally I wouldn't have to worry about Multi-Threading here at all, unless some other Thread calls StartRecording() again?
At the end of the day, I am looking for a way to express "Okay, this list is mine now, all other threads have to wait until I am done with it".
I will lock on the _myList itself here since it is private, but using a separate variable is more common. To improve on a few points:
public static class MyClass
{
    private static List<string> _myList = new List<string>();
    private static bool _record;
    public static void StartRecording()
    {
        lock (_myList) // lock on the list
        {
            _myList.Clear();
            _record = true;
        }
    }
    public static IEnumerable<string> StopRecording()
    {
        lock (_myList)
        {
            _record = false;
            // Return a read-only copy of the list data
            var result = new List<string>(_myList).AsReadOnly();
            _myList.Clear();
            return result;
        }
    }
    public static void DoSomething()
    {
        lock (_myList)
        {
            if (_record) _myList.Add("Test");
        }
        // More, but unrelated actions
    }
}
Note that this code uses lock(_myList) to synchronize access to both _myList and _record. And you need to sync all actions on those two.
And to agree with the other answers here, lock(_myList) does nothing to _myList itself; it just uses the _myList object as a token (the runtime associates the monitor with the object's sync block). All methods must play fair by asking permission using the same token. A method on another thread can still use _myList without locking first, but with unpredictable results.
We can use any token so we often create one specially:
private static object _listLock = new object();
And then use lock(_listLock) instead of lock(_myList) everywhere.
This technique would have been advisable if myList had been public, and it would have been absolutely necessary if you had re-created myList instead of calling Clear().
Creating a new lock in DoSomething() would certainly be wrong - it would be pointless, as each call to DoSomething() would use a different lock. You should use the second form, but with an initializer:
private static object _lock = new object();
It's true that locking doesn't stop anything else from accessing your list, but unless you're exposing the list directly, that doesn't matter: nothing else will be accessing the list anyway.
Yes, you can wrap Start/StopRecording in locks in the same way.
Yes, setting a Boolean variable is atomic, but that doesn't make it thread-safe. If you only access the variable within the same lock, you're fine in terms of both atomicity and volatility though. Otherwise you might see "stale" values - e.g. you set the value to true in one thread, and another thread could use a cached value when reading it.
There are a few ways to lock the list. You can lock on _myList directly providing _myList is never changed to reference a new list.
lock (_myList)
{
    // do something with the list...
}
You can create a locking object specifically for this purpose.
private static object _syncLock = new object();
lock (_syncLock)
{
    // do something with the list...
}
If the static collection implements the System.Collections.ICollection interface (List(T) does), you can also synchronize using the SyncRoot property.
lock (((ICollection)_myList).SyncRoot)
{
    // do something with the list...
}
The main point to understand is that you want one and only one object to use as your locking sentinel, which is why creating the sentinel inside the DoSomething() function won't work. As Jon said, each thread that calls DoSomething() will get its own object, so the lock on that object will succeed every time and grant immediate access to the list. By making the locking object static (via the list itself, a dedicated locking object, or the ICollection.SyncRoot property), it becomes shared across all threads and can effectively serialize access to your list.
The first way is wrong, as each caller will lock on a different object.
You could just lock on the list.
lock (_myList)
{
    _myList.Add(...)
}
You may be misinterpreting that answer. What is actually being stated is that the lock statement does not prevent the object in question from being modified; rather, it prevents any other code that uses the same object as its locking source from executing at the same time.
What this really means is that when two pieces of code use the same instance as the locking object, only one of them can be executing its lock block at any given moment.
In essence you are not really attempting to "lock" your list; you are using a common instance as a reference point for the times you want to modify the list, so that while it is in use, or "locked", other code that would potentially modify the list is prevented from executing.

Threadsafe Lazy Class

I have class Lazy which lazily evaluates an expression:
public sealed class Lazy<T>
{
    Func<T> getValue;
    T value;
    public Lazy(Func<T> f)
    {
        getValue = () =>
        {
            lock (getValue)
            {
                value = f();
                getValue = () => value;
            }
            return value;
        };
    }
    public T Force()
    {
        return getValue();
    }
}
Basically, I'm trying to avoid the overhead of locking objects after they've been evaluated, so I replace getValue with another function on invocation.
It apparently works in my testing, but I have no way of knowing if it'll blow up in production.
Is my class threadsafe? If not, what can be done to guarantee thread safety?
Can’t you just omit re-evaluating the function completely by either using a flag or a guard value for the real value? I.e.:
public sealed class Lazy<T>
{
    readonly Func<T> f;
    readonly object lockObject = new object();
    T value;
    volatile bool computed = false;
    void GetValue()
    {
        lock (lockObject)
        {
            if (!computed) // another thread may have won the race to the lock
            {
                value = f();
                computed = true;
            }
        }
    }
    public Lazy(Func<T> f)
    {
        this.f = f;
    }
    public T Force()
    {
        if (!computed) GetValue();
        return value;
    }
}
Your code has a few issues:
You need one object to do the locking on. Don't lock on a variable that gets changed - locks always deal with objects, so if getValue is changed, multiple threads might enter the locked section at once.
If multiple threads are waiting for the lock, all of them will evaluate the function f() after each other. You'd have to check inside the lock that the function wasn't evaluated already.
You might need a memory barrier even after fixing the above issues to ensure that the delegate gets replaced only after the new value was stored to memory.
However, I'd use the flag approach from Konrad Rudolph instead (just ensure you don't forget the "volatile" required for that). That way you don't need to invoke a delegate whenever the value is retrieved (delegate calls are quite fast, but they're not as fast as simply checking a bool).
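As a side note (System.Lazy&lt;T&gt; shipped in .NET 4, which may postdate this question): the framework type, distinct from the Lazy&lt;T&gt; defined here, implements exactly this flag-plus-lock memoization, and its default LazyThreadSafetyMode.ExecutionAndPublication guarantees the factory runs at most once even under concurrent first access. A small sketch (the counter is added only to make the single evaluation observable):

```csharp
using System;
using System.Threading;

class LazyDemo
{
    static int evaluations;

    public static void Main()
    {
        // Fully qualified to avoid colliding with the hand-rolled Lazy<T>;
        // the default mode is LazyThreadSafetyMode.ExecutionAndPublication.
        var lazy = new System.Lazy<int>(() =>
        {
            Interlocked.Increment(ref evaluations); // count factory invocations
            return 42;
        });

        int a = lazy.Value; // factory runs here, result is cached
        int b = lazy.Value; // cached value; the delegate is not invoked again
        Console.WriteLine($"{a} {b} {evaluations}"); // 42 42 1
    }
}
```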
I'm not entirely sure what you're trying to do with this code, but I just published an article on The Code Project on building a sort of "lazy" class that automatically, asynchronously calls a worker function and stores its value.
This looks more like a caching mechanism than a "lazy evaluation". In addition, do not change the value of a locking reference within the lock block. Use a temporary variable to lock on.
The way you have it right now would work in a large number of cases, but consider two different threads evaluating the expression in this order:
Thread 1 enters the lock
Thread 2 reads getValue and waits on the lock
Thread 1 completes and replaces getValue
From that point the lock no longer provides mutual exclusion, because the field being used as the lock target was swapped inside the lock block: a thread that reads getValue after the swap locks on a different delegate instance than the one Thread 1 and Thread 2 used, so it can enter the critical section while another thread is still inside it. (The lock statement itself always releases the same object it acquired, since the reference is captured in a temporary, but two threads that read getValue at different times can be locking on two different objects.)
While I'm not entirely certain what this would do (aside from perform a synchronized evaluation of the expression and caching of the result), this should make it safer:
public sealed class Lazy<T>
{
    Func<T> getValue;
    T value;
    object lockValue = new object();
    public Lazy(Func<T> f)
    {
        getValue = () =>
        {
            lock (lockValue)
            {
                value = f();
                getValue = () => value;
            }
            return value;
        };
    }
    public T Force()
    {
        return getValue();
    }
}
