In C#, I have the following extremely verbose syntax for pulling a simple list of items from a database:
if (malls == null)
{
    lock (_lock)
    {
        if (malls == null)
        {
            using (var session = NhibernateHelper.OpenSession())
            {
                malls = session.CreateCriteria<Mall>()
                    .AddOrder(Order.Asc("Name")).List<Mall>();
                CacheManager.Set(CACHE_KEY, malls, TimeSpan.FromMinutes(CACHE_DURATION));
            }
        }
    }
}
I'm aware of the benefits of double checked locking and I strongly support its use, but it seems incredibly verbose. Can you recommend any syntax shortcuts or styles that might clean it up a bit?
Presumably you're using double-checked locking because you have a resource which you want initialized in a lazy, threadsafe manner.
Double-checked locking is a mechanism for achieving that, but as you've correctly noted, the verbosity of the mechanism thoroughly overwhelms the meaning of the code.
When you have a mechanism that is obscuring the meaning, hide the mechanism by creating an abstraction. One way to do that would be to create a "lazy threadsafe instantiation" class and pass a delegate to it which does the operation you would like done in a lazy, threadsafe manner.
However, there's a better way. The better way is to not do that work yourself, but rather to let a world-class expert on threading do it for you. That way you don't have to worry about getting it right. Joe Duffy has to worry about getting it right. As Joe wisely says, rather than repeating the locking mechanism all over the place, write it once and then use the abstraction.
Joe's code is here:
http://www.bluebytesoftware.com/blog/PermaLink,guid,a2787ef6-ade6-4818-846a-2b2fd8bb752b.aspx
and a variation of this code will ship in the next version of the base class library.
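For reference, that abstraction shipped in .NET 4 as System.Lazy<T> (with LazyInitializer as a lighter-weight variant). Below is a rough sketch of the question's code on top of it, assuming the list can live in a static field; NhibernateHelper, CacheManager, CACHE_KEY and CACHE_DURATION are taken from the question, and the IList<Mall> return type and the LoadMalls name are assumptions for illustration.
private static readonly Lazy<IList<Mall>> _malls = new Lazy<IList<Mall>>(
    LoadMalls, LazyThreadSafetyMode.ExecutionAndPublication);

public IList<Mall> Malls()
{
    // The first caller runs LoadMalls exactly once; later callers get the cached list.
    return _malls.Value;
}

private static IList<Mall> LoadMalls()
{
    using (var session = NhibernateHelper.OpenSession())
    {
        IList<Mall> malls = session.CreateCriteria<Mall>()
            .AddOrder(Order.Asc("Name")).List<Mall>();
        CacheManager.Set(CACHE_KEY, malls, TimeSpan.FromMinutes(CACHE_DURATION));
        return malls;
    }
}
ExecutionAndPublication gives the same guarantee double-checked locking is after: the factory runs once, and every thread sees the published result.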
To cut down on noise you can do this:
public List<Mall> Malls()
{
    EnsureMallsInitialized();
    return malls;
}

private void EnsureMallsInitialized()
{
    if (malls == null)             // not set
        lock (_lock)               // get lock
            if (malls == null)     // still not set
            {
                InitializeMalls();
            }
}

private void InitializeMalls()
{
    using (var session = NhibernateHelper.OpenSession())
    {
        malls = session.CreateCriteria<Mall>()
            .AddOrder(Order.Asc("Name")).List<Mall>();
        CacheManager.Set(CACHE_KEY, malls, TimeSpan.FromMinutes(CACHE_DURATION));
    }
}
Actually, what I'm trying to achieve is simply to get to know multithreading in C#.
So I have this class called WeakeningEvictionary<TKey, TValue>, which has a private Dictionary<TKey, CachedValue<TValue>> that functions as the cache. CachedValue is a wrapper that holds both a strong and a weak reference to TValue. After a predefined time, a Task is created to null out the strong reference, leaving only the weak reference. I also have a HashSet that keeps track of which key/value pairs to evict (added to when the weakening happens, removed from when SetValue is called). Immediately after the GC has done its job, another Task is created to evict all of those pairs.
Actually I wouldn't need a recursive lock for this, but I ran into issues when some stored information is requested recursively, because a construction sequence requires it.
So I came up with this code (updated; it was a not-going-to-work extension method before):
public void RecursiveEnter(Action action)
{
    if (_spinLock.IsHeldByCurrentThread)
    {
        action();
    }
    else
    {
        bool gotLock = false;
        _spinLock.Enter(ref gotLock); // blocking until acquired
        action();
        if (gotLock) _spinLock.Exit();
    }
}
So what I'm trying to do now is:
private void Evict()
{
    RecursiveEnter(() =>
    {
        foreach (TKey key in toEvict)
        {
            _dict.Remove(key);
        }
    });
}
And my question is: what are the risks? Are closures known to cause issues when used by threads in this way?
Thanks for your input ;-)
Right off the bat, the method call is 100% not going to work: SpinLock is a value type, you must pass it by reference (RecursiveEnter(ref SpinLock spinLock, Action action)) and not by value.
See for example https://learn.microsoft.com/en-us/dotnet/api/system.threading.spinlock?view=netframework-4.7.2#remarks
I'm not sure this is the best thing for you to use: you should start with a higher-level primitive (maybe a ReaderWriterLockSlim) and refine things only with careful testing and understanding.
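As a sketch of what that higher-level route could look like: a ReaderWriterLockSlim created with LockRecursionPolicy.SupportsRecursion lets the same thread re-enter the lock, so the RecursiveEnter helper isn't needed at all. The _rwLock field is a hypothetical replacement for the SpinLock; Evict, toEvict and _dict come from the question.
private readonly ReaderWriterLockSlim _rwLock =
    new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

private void Evict()
{
    _rwLock.EnterWriteLock(); // recursion-safe: the same thread may re-enter
    try
    {
        foreach (TKey key in toEvict)
        {
            _dict.Remove(key);
        }
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}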
I have a class which manipulates a resource which is shared by multiple threads. The threads pass around control of a mutex in order to manage access to the resource. I would like to manage control of the mutex using the RAII idiom via a disposable object.
There is an additional caveat. When the class begins an operation to manipulate the resource, it is possible that the operation is no longer necessary, or may no longer be performed. This is the result of user action which occurs after the operation has been scheduled to be carried out -- no way around it unfortunately. There are many different operations which might possibly be carried out, and all of them must acquire the mutex in this way. I'm imagining it will look something like this, for example:
public void DoAnOperation()
{
    using (RAIIMutex raii = new RAIIMutex(TheMutex))
    {
        if (ShouldContinueOperation())
        {
            // Do operation-specific stuff
        }
    }
}
However, because I'm lazy, I'd like to not have to repeat that if(ShouldContinueOperation()) statement for each operation's function. Is there a way to do this while keeping the // Do operation-specific stuff in the body of the using statement? That seems like the most readable way to write it. For example, I don't want something like this (I'd prefer repeating the if statement if something like this is the only alternative):
public void DoAnOperation()
{
    var blaarm = new ObjectThatDoesStuffWithRAIIMutex(TheMutex, ActuallyDoAnOperation);
    blaarm.DoAnOperationWithTheMutex();
}

private void ActuallyDoAnOperation()
{
    // Do operation-specific stuff
}
It is not entirely clear what ShouldContinueOperation depends on, but assuming that it can be a static function (based on the example code provided in the question), you might like something along the lines of:
public static void TryOperation(Mutex mutex, Action action)
{
    using (RAIIMutex raii = new RAIIMutex(mutex))
    {
        if (ShouldContinueOperation())
            action();
    }
}
Which you can then use like:
RAIIMutex.TryOperation(TheMutex, () =>
{
    // Do operation-specific stuff
});
This combines the using and the ShouldContinueOperation check in one line for the caller. I'm not quite sure about the readability of the lambda syntax used, but that's a matter of personal preference.
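If ShouldContinueOperation turns out not to be static, or needs per-call state, one variation (sketch only; the Func<bool> parameter is an addition, not part of the question's code) is to pass the check in as a delegate as well:
public static void TryOperation(Mutex mutex, Func<bool> shouldContinue, Action action)
{
    using (RAIIMutex raii = new RAIIMutex(mutex))
    {
        // The check still runs while the mutex is held, as in the version above.
        if (shouldContinue())
            action();
    }
}

// Usage:
RAIIMutex.TryOperation(TheMutex, ShouldContinueOperation, () =>
{
    // Do operation-specific stuff
});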
To synchronize the access to my properties I use the ReaderWriterLockSlim class. I use the following code to access my properties in a thread-safe way.
public class SomeClass
{
    public readonly ReaderWriterLockSlim SyncObj = new ReaderWriterLockSlim();

    public string AProperty
    {
        get
        {
            if (SyncObj.IsReadLockHeld)
                return ComplexGetterMethod();

            SyncObj.EnterReadLock();
            try
            {
                return ComplexGetterMethod();
            }
            finally
            {
                SyncObj.ExitReadLock();
            }
        }
        set
        {
            if (SyncObj.IsWriteLockHeld)
                ComplexSetterMethod(value);
            else
            {
                SyncObj.EnterWriteLock();
                ComplexSetterMethod(value);
                SyncObj.ExitWriteLock();
            }
        }
    }

    // more properties here ...

    private string ComplexGetterMethod()
    {
        // This method is not thread-safe and reads
        // multiple values, calculates stuff, etc.
    }

    private void ComplexSetterMethod(string newValue)
    {
        // This method is not thread-safe and reads
        // and writes multiple values.
    }
}

// =====================================

public static SomeClass AClass = new SomeClass();

public void SomeMultiThreadFunction()
{
    ...

    // access with locking from within the setter
    AClass.AProperty = "new value";

    ...

    // locking from outside of the class to increase performance
    AClass.SyncObj.EnterWriteLock();
    AClass.AProperty = "new value 2";
    AClass.AnotherProperty = "...";
    ...
    AClass.SyncObj.ExitWriteLock();

    ...
}
To avoid unnecessary locks whenever I get or set several properties at once, I exposed the ReaderWriterLockSlim object and lock it from outside the class every time I'm about to get or set a bunch of properties. To achieve this, my getter and setter methods check whether the lock has already been acquired, using the IsReadLockHeld and IsWriteLockHeld properties of ReaderWriterLockSlim. This works fine and has increased the performance of my code.
So far so good, but when I re-read the documentation about IsReadLockHeld and IsWriteLockHeld I noticed this remark from Microsoft:
This property is intended for use in asserts or for other debugging
purposes. Do not use it to control the flow of program execution.
My question is: Is there a reason why I should not use IsReadLockHeld/IsWriteLockHeld for this purpose? Is there anything wrong with my code? Everything works as expected and much faster than using recursive locks (LockRecursionPolicy.SupportsRecursion).
To clear this up: this is a minimal example. I don't want to know whether the lock itself is necessary, or could be removed or achieved in a different way. I just want to know why I should not use IsReadLockHeld/IsWriteLockHeld to control the flow of the program, as stated by the documentation.
After some further research I posted the same question on the German support forum of the Microsoft Developer Network and got into a discussion with the very helpful moderator Marcel Roma. He was able to contact the author of ReaderWriterLockSlim, Joe Duffy, who wrote this answer:
I'm afraid my answer may leave something to be desired.
The property works fine and as documented. The guidance really is just
because conditional acquisition and release of locks tends to be buggy
and error-prone in practice, particularly with exceptions thrown into
the mix.
It's typically a good idea to structure your code so that you either
use recursive acquires, or you don't, (and of course the latter is
always easier to reason about); using properties like IsReadLockHeld
lands you somewhere in the middle.
I was one of the primary designers of RWLS and I have to admit it has
way too many bells and whistles. I don't necessarily regret adding
IsReadLockHeld -- as it can come in handy for debugging and assertions
-- however as soon as we added it, Pandora's box was opened, and RWLS was instantly opened up to this kind of usage.
I'm not surprised that people want to use it as shown in the
StackOverflow thread, and I'm sure there are some legitimate scenarios
where it works better than the alternatives. I merely advise erring on
the side of not using it.
To sum things up: you can use the IsReadLockHeld and IsWriteLockHeld properties to acquire a lock conditionally and everything will work fine, but it is bad programming style and should be avoided. It is better to stick to either recursive or non-recursive locks. To maintain good coding style, IsReadLockHeld and IsWriteLockHeld should only be used for debugging purposes.
I want to thank Marcel Roma and Joe Duffy again for their precious help.
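For completeness, here is a minimal sketch of the "stick to recursive locks" alternative from the summary above: construct the lock with LockRecursionPolicy.SupportsRecursion and always enter and exit it, instead of branching on IsReadLockHeld/IsWriteLockHeld. As the question notes, this measured slower, but it keeps flow control off the debugging properties.
public readonly ReaderWriterLockSlim SyncObj =
    new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

public string AProperty
{
    get
    {
        SyncObj.EnterReadLock(); // safe even if the caller already holds the lock
        try
        {
            return ComplexGetterMethod();
        }
        finally
        {
            SyncObj.ExitReadLock();
        }
    }
    set
    {
        SyncObj.EnterWriteLock();
        try
        {
            ComplexSetterMethod(value);
        }
        finally
        {
            SyncObj.ExitWriteLock();
        }
    }
}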
The documentation is advising you the right thing.
Consider the following interleaved execution:
Thread1.AcquireReadLock();
Thread1.ComplexGetterMethod();
Thread2.ReadIsReadLockHeldProperty();
Thread1.ReleaseReadLock();
Thread2.ComplexGetterMethod(); // performing read without lock.
The other thing I see wrong with your code is that
SyncObj.EnterReadLock();
try
{
    return ComplexGetterMethod();
}
finally
{
    SyncObj.ExitReadLock();
}
is not the right way to do things. This is the right way:
try
{
    SyncObj.EnterReadLock();
    return ComplexGetterMethod();
}
finally
{
    if (SyncObj.IsReadLockHeld)
        SyncObj.ExitReadLock();
}
And this should be the exact definition of your getter method.
I'm very new to multi-threading and for some reason this class is giving me more trouble than it should.
I am setting up a dictionary in the ASP.net cache - It will be frequently queried for individual objects, enumerated occasionally, and written extremely infrequently. I'll note that the dictionary data is almost never changed, I'm planning on letting it expire daily with a callback to rebuild from the database when it leaves the cache.
I believe that the enumeration and access by keys are safe so long as the dictionary isn't being written. I'm thinking a ReaderWriterLockSlim based wrapper class is the way to go but I'm fuzzy on a few points.
If I use lock, I believe I can lock either on a token or on the actual object I'm protecting. I don't see how to do something similar with the ReaderWriterLockSlim. Am I correct in thinking that multiple instances of my wrapper will not lock properly, since each instance's ReaderWriterLockSlim is out of the others' scope?
What is the best practice for writing a wrapper like this? Building it as a static almost seems redundant, as the primary object is being maintained by the cache. Singletons seem to be frowned upon, and I'm concerned about the above-mentioned scoping issues for individual instances.
I've seen a few implementations of similar wrappers around but I haven't been able to answer these questions. I just want to make sure that I have a firm grasp on what I'm doing rather than cutting & pasting my way through. Thank you very much for your help!
Edit: Hopefully this is a clearer summary of what I'm trying to find out:
1. Am I correct in thinking that the lock does not affect the underlying data and is scoped like any other variable?
As an example, let's say I have the following:
MyWrapperClass
{
    ReaderWriterLockSlim lck = new ReaderWriterLockSlim();

    // Do stuff with this lock on the underlying cached dictionary object...
}
MyWrapperClass wrapA = new MyWrapperClass();
MyWrapperClass wrapB = new MyWrapperClass();
Am I right in thinking that the wrapA lock and the wrapB lock won't interact, and that if wrapA and wrapB both attempt operations it will be unsafe?
2. If this is the case, what is the best-practice way to "share" the lock data?
This is an ASP.NET app - there will be multiple pages that need to access the data, which is why I'm doing this in the first place. What is the best practice for ensuring that the various wrappers use the same lock? Should my wrapper be a static class or a singleton that all threads use, and if not, what is the more elegant alternative?
You have multiple dictionary objects in the Cache, and you want each one locked independently. The "best" way is to just use a simple class that does it for you.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class ReadWriteDictionary<K, V>
{
    private readonly Dictionary<K, V> dict = new Dictionary<K, V>();
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    public V Get(K key)
    {
        return ReadLock(() => dict[key]);
    }

    public void Set(K key, V value)
    {
        // The indexer overwrites an existing key; dict.Add would throw on duplicates.
        WriteLock(() => dict[key] = value);
    }

    public IEnumerable<KeyValuePair<K, V>> GetPairs()
    {
        return ReadLock(() => dict.ToList());
    }

    private V2 ReadLock<V2>(Func<V2> func)
    {
        rwLock.EnterReadLock();
        try
        {
            return func();
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    private void WriteLock(Action action)
    {
        rwLock.EnterWriteLock();
        try
        {
            action();
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}
Cache["somekey"] = new ReadWriteDictionary<string,int>();
There is also a more complete example on the help page of ReaderWriterLockSlim on MSDN. It wouldn't be hard to make it generic.
Edit: To answer your new questions -
1.) You are correct, wrapA and wrapB will not interact. They each have their own instance of ReaderWriterLockSlim.
2.) If you need a shared lock amongst all your wrapper classes, then it must be static.
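A minimal sketch of that (the field name is illustrative):
public class MyWrapperClass
{
    // One lock shared by every instance of the wrapper,
    // so wrapA and wrapB really do coordinate on the same lock.
    private static readonly ReaderWriterLockSlim SharedLock = new ReaderWriterLockSlim();
}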
ConcurrentDictionary does everything you want and then some. It's part of System.Collections.Concurrent (.NET 4 and later).
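A quick sketch of the cached dictionary using it instead of a hand-rolled wrapper; Cache stands for the ASP.NET cache from the question, and the key/value types are made up:
var dict = new ConcurrentDictionary<string, int>();
Cache["somekey"] = dict;

dict["answer"] = 42;                        // thread-safe write (adds or overwrites)
int value;
if (dict.TryGetValue("answer", out value))  // thread-safe read
{
    // use value
}

// Enumeration is also safe, though it may interleave with concurrent updates.
foreach (var pair in dict) { /* ... */ }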
The standard way to lock is: object lck = new object(); ... lock(lck) { ... } - in this case the object lck represents the lock.
ReaderWriterLockSlim isn't much different; it's just that in this case the ReaderWriterLockSlim instance itself represents the actual lock, so everywhere you would have used lck you now use your ReaderWriterLockSlim.
ReaderWriterLockSlim lck = new ReaderWriterLockSlim();
...
lck.EnterReadLock();
try
{
    ...
}
finally
{
    lck.ExitReadLock();
}
I'm looking at using a singleton in a multithreaded Windows service for logging, and wanted to know what problems I might encounter. I have already set up the instance getter to handle synchronization with:
private static volatile Logging _instance;
private static object _syncRoot = new object();

private Logging() { }

public static Logging Instance
{
    get
    {
        if (_instance == null)
        {
            lock (_syncRoot)
            {
                if (_instance == null)
                {
                    _instance = new Logging();
                }
            }
        }
        return _instance;
    }
}
Is there anything else I might need to worry about?
That looks pretty good to me.
See Implementing the Singleton Pattern in C# for more info.
Edit: Should probably put the return inside the lock, though.
This is more informational than anything else.
What you've posted is the double-checked locking algorithm - and what you've posted will work, as far as I'm aware. (As of Java 1.5 it works there, too.) However, it's very fragile - if you get any bit of it wrong, you could introduce very subtle race conditions.
I usually prefer to initialize the singleton in the static initializer:
public class Singleton
{
    private static readonly Singleton instance = new Singleton();

    public static Singleton Instance
    {
        get { return instance; }
    }

    private Singleton()
    {
        // Do stuff
    }
}
(Add a static constructor if you want a bit of extra laziness.)
That pattern's easier to get right, and in most cases it does just as well.
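For reference, the "extra laziness" note is about the beforefieldinit flag: adding an empty static constructor keeps the runtime from initializing the static field any earlier than the first access to a static member. A sketch of the same class with that addition:
public class Singleton
{
    private static readonly Singleton instance = new Singleton();

    // Empty static constructor: the type is no longer marked beforefieldinit,
    // so 'instance' is not created until a static member is first used.
    static Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }

    private Singleton()
    {
        // Do stuff
    }
}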
There's more detail on my C# singleton implementation page (also linked by Michael).
As for the dangers - I'd say the biggest problem is that you lose testability. Probably not too bad for logging.
Singletons have the potential to become a bottleneck for access to the resource embodied by the class, and can force sequential access to a resource that could otherwise be used in parallel.
In this case, that may not be a bad thing, because you don't want multiple items writing to your file at the same instant, and even so I don't think your implementation will have that result. But it's something to be aware of.
You need to ensure that each method in the logger is safe to run concurrently, i.e. that they don't write to shared state without proper locking.
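For instance, a minimal sketch of a locked write method inside the singleton; the Write name and the log file path are made up for illustration:
private readonly object _writeLock = new object();

public void Write(string message)
{
    // Serialize writers so concurrent log calls don't interleave in the file.
    lock (_writeLock)
    {
        System.IO.File.AppendAllText("service.log",
            DateTime.Now.ToString("o") + " " + message + Environment.NewLine);
    }
}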
You are using double-checked locking, which is considered an anti-pattern. Wikipedia has patterns with and without lazy initialization for different languages.
After creating the singleton instance you must of course ensure that all methods are thread-safe.
A better suggestion would be to establish the logger in a single-threaded setup step, so it's guaranteed to be there when you need it. In a Windows Service, OnStart is a great place to do this.
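For example (sketch; it assumes the Instance getter from the question), touching the singleton once in OnStart guarantees it exists before any worker threads run:
protected override void OnStart(string[] args)
{
    // Force the singleton to be created on the single start-up thread,
    // before any worker threads are spun up. The local exists only to
    // make the initialization explicit.
    Logging logger = Logging.Instance;

    // ... start worker threads here ...
}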
Another option you have is to use the System.Threading.Interlocked.CompareExchange<T>(ref T, T, T) method to swap the instance in. It's less confusing and it's guaranteed to work.
System.Threading.Interlocked.CompareExchange<Logging>(ref _instance, new Logging(), null);
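Put into the getter from the question, that looks roughly like this (sketch). The comparand is null, so the exchange only happens while _instance is still unset; if two threads race, the loser's extra Logging instance is simply discarded. Passing the volatile field by ref produces compiler warning CS0420, which is expected and harmless with the Interlocked methods.
public static Logging Instance
{
    get
    {
        if (_instance == null)
        {
            // Only installs the new instance if _instance is still null;
            // otherwise the freshly created object is thrown away.
            System.Threading.Interlocked.CompareExchange(ref _instance, new Logging(), null);
        }
        return _instance;
    }
}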
There is some debate with respect to the need to make the first check for null use Thread.VolatileRead() if you use the double checked locking pattern and want it to work on all memory models. An example of the debate can be read at http://social.msdn.microsoft.com/forums/en-US/csharpgeneral/thread/b1932d46-877f-41f1-bb9d-b4992f29cedc/.
That said, I typically use Jon Skeet's solution from above.
I think if Logging instance methods are thread-safe there's nothing to worry about.