I have some code for the instance property of a controller class that looks like this:
public class Controller
{
private static volatile Controller _instance;
private static object syncRoot = new Object();
private Controller() { }
public static Controller Instance
{
get
{
if (_instance == null)
{
lock (syncRoot)
{
if (_instance == null)
_instance = new Controller();
}
}
return _instance;
}
}
public void Start()
{
}
}
After reading through the MSDN docs on the volatile keyword, I'm not sure whether the second null check is redundant, or whether the better way to write the getter would be something like this:
get
{
lock (syncRoot)
{
if (_instance == null)
_instance = new Controller();
}
return _instance;
}
Which of the two implementations is better for multi-threaded performance and DRY'ness (redundancy removal)?
This is called the "double checked locking" pattern. It is an attempt at a low-lock optimization and is therefore extremely dangerous.
The pattern is not guaranteed to work correctly on CLR v1.0. Whether it is so guaranteed on later versions is a matter of some debate; some articles say yes, some say no. It is very confusing.
I would avoid it entirely unless you had a very good reason to suppose that the solution with locking was insufficient to meet your needs. I would use higher-level primitives, like Lazy<T>, written by experts like Joe Duffy. They're more likely to be correct.
This question is a duplicate of
The need for volatile modifier in double checked locking in .NET
See the detailed answers there for more information. In particular, if you ever intend to write any low-lock code you absolutely have to read Vance's article:
http://msdn.microsoft.com/en-us/magazine/cc163715.aspx
and Joe's article:
http://www.bluebytesoftware.com/blog/PermaLink,guid,543d89ad-8d57-4a51-b7c9-a821e3992bf6.aspx
Note that Vance's article makes the claim that double-checked locking is guaranteed to work in CLR v2. It is not clear to me that the guarantees discussed in the article were actually implemented in CLR v2 or not; I have never gotten a straight answer out of anyone and have heard both that they were and were not implemented as specified. Again, you're on the trapeze without a net when you do this low-lock stuff yourself; avoid it if you possibly can.
A much better option would be to use Lazy<T>:
private static readonly Lazy<Controller> _instance = new Lazy<Controller>
(() => new Controller());
private Controller() { }
public static Controller Instance
{
get
{
return _instance.Value;
}
}
If, however, you're stuck with a version prior to .NET 4, I'd recommend reading Jon Skeet's article on singletons in C#. It discusses the advantages and disadvantages of the above techniques, as well as providing a better implementation of a lazily instantiated singleton for .NET 3.5 and earlier.
You want to check before locking the object so that no unnecessary locking occurs.
You also want to check after locking to avoid multiple instantiations of the object.
The first check is not strictly needed, but it is highly advisable for performance's sake.
the classic way to do this is double-checked locking: You check once before acquiring the lock to reduce overhead (acquiring the lock is relatively expensive) - then check again after you got the lock to be sure it wasn't set already.
Edit: As was pointed out Lazy<T> is a much better option here, I'm leaving this answer here for completeness.
if (_instance == null)
{
lock (syncRoot)
{
if (_instance == null)
_instance = new Controller();
}
}
return _instance;
Related
Trying to understand when it's considered best-practice to lock a static variable or not. Is the static Instance setter thread-safe? If not, should it be and why (what's the consequence of not making it thread-safe)?
class MyClass
{
private static MyClass _instance;
private static readonly object _padlock = new object();
public static MyClass Instance
{
get
{
if(_instance == null)
{
lock(_padlock)
{
if(_instance == null)
{
_instance = new MyClass();
}
}
}
return _instance;
}
set => _instance = value;
}
}
This is called Double-checked locking.
However, double-checked locking requires that the underlying field is volatile [1].
In short, the assignment itself is atomic, yet it still needs to be synchronized (a full fence, via a lock) across the different cores/CPUs; otherwise another core concurrently reading the field might see a stale cached value [1].
There are several ways to make the code thread-safe:
Avoid double-checked locking, and simply perform everything within the lock statement.
Make the field volatile using the volatile keyword (a sketch of this option follows the list).
Use the Lazy class, which is guaranteed to be thread-safe.
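For reference, a minimal sketch of the second option (double-checked locking over a volatile field), kept close to the shape of the code in the question:
private static volatile MyClass _instance;   // volatile: readers always see a fully published instance
private static readonly object _padlock = new object();

public static MyClass Instance
{
    get
    {
        if (_instance == null)                // first check: skip the lock once initialized
        {
            lock (_padlock)
            {
                if (_instance == null)        // second check: only one thread creates the instance
                    _instance = new MyClass();
            }
        }
        return _instance;
    }
}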
Note: the completely unguarded setter adds a further complication [3].
However, in your case double-checked locking with a volatile field will probably work fine, yet I think your best bet is to simply lock everything and be safe:
public static MyClass Instance
{
get
{
lock(_padlock)
{
if(_instance == null)
_instance = new MyClass();
return _instance;
}
}
set
{
lock(_padlock)
{
_instance = value;
}
}
}
Note: yes, it will incur a performance penalty.
References
[1] Double-checked lock is not thread-safe
[2] The famous double-checked locking technique in C#
[3] Comment from #user2864740
Additional Resources
Double-checked locking in .NET
The famous double-checked locking technique in C#
Double-checked locking
It seems to me that locks or no locks (on the setter), you will always have an issue of timing. Imagine these scenarios:
You have a lock on the setter but a call to the getter comes in just before the lock is engaged. The caller gets the old instance.
You have a lock on the setter but a call to the getter comes in just after the lock is engaged. The caller waits for the lock to be free, and then gets the new instance.
You don't have a lock on the setter, and the call comes in just before you replace the instance. The caller gets the old instance.
You don't have a lock on the setter, and the call comes in just after you replace the instance. The caller gets the new instance.
With locks and without locks, it's a matter of timing which instance the caller receives.
The only issue I can see is if you want to be able to set Instance to null. If that's the case, your current code will not work because _instance could be changed between the if statement and returning it. You can resolve this by taking a copy of the reference:
public static MyClass Instance
{
get
{
var instanceSafeRef = _instance;
if(instanceSafeRef == null)
{
lock(_padlock)
{
if(_instance == null)
{
_instance = new MyClass();
}
instanceSafeRef = _instance;
}
}
return instanceSafeRef;
}
set => _instance = value;
}
Is it correct to use double check locking with not static fields?
class Foo
{
    private SomeType member;
    private readonly object memberSync = new object();

    public SomeType Member
    {
        get
        {
            if (member == null)
            {
                lock (memberSync)
                {
                    if (member == null)
                    {
                        member = new SomeType();
                    }
                }
            }
            return member;
        }
    }
}
Is it correct to use double check locking with not static fields?
Yes, there is nothing wrong with your code; double-checked locking gives you both thread safety and lazy loading. If you are using .NET 4 or later, I would suggest using the Lazy<T> class instead: it achieves the same result (thread safety and lazy loading) while also making your code simpler and more readable.
class Foo
{
private readonly Lazy<SomeType> _member =
new Lazy<SomeType>(() => new SomeType());
public SomeType Member
{
get { return _member.Value; }
}
}
The outer check gives a performance boost in that, once member is initialised, you don't have to obtain the lock every time you access the property. If you're accessing the property frequently from multiple threads, the performance hit of the lock could be quite noticeable.
The inner check is necessary to prevent race conditions: without that, it would be possible for two threads to process the outer if statement, and then both would initialise member.
Strictly speaking, the outer if isn't necessary, but it's considered good practice, and in a heavily threaded application the performance benefit would be noticeable.
It is a practice recommended by some because your lock may not be acquired until another thread's lock is released.
In that case, two threads access the getter at the same time: the first one gets the lock and the second waits.
Once the first is finished, the second thread now has the lock.
In cases where this is possible, you should check whether the variable has already been created by another thread before the current thread acquired the lock.
I have a class that contains a static collection to store the logged-in users in an ASP.NET MVC application. I just want to know whether the code below is thread-safe or not. Do I need to lock the code whenever I add or remove an item from the onlineUsers collection?
public class OnlineUsers
{
private static List<string> onlineUsers = new List<string>();
public static EventHandler<string> OnUserAdded;
public static EventHandler<string> OnUserRemoved;
private OnlineUsers()
{
}
static OnlineUsers()
{
}
public static int NoOfOnlineUsers
{
get
{
return onlineUsers.Count;
}
}
public static List<string> GetUsers()
{
return onlineUsers;
}
public static void AddUser(string userName)
{
if (!onlineUsers.Contains(userName))
{
onlineUsers.Add(userName);
if (OnUserAdded != null)
OnUserAdded(null, userName);
}
}
public static void RemoveUser(string userName)
{
if (onlineUsers.Contains(userName))
{
onlineUsers.Remove(userName);
if (OnUserRemoved != null)
OnUserRemoved(null, userName);
}
}
}
That is absolutely not thread safe. Any time 2 threads are doing something (very common in a web application), chaos is possible - exceptions, or silent data loss.
Yes you need some kind of synchronization such as lock; and static is usually a very bad idea for data storage, IMO (unless treated very carefully and limited to things like configuration data).
Also - static events are notorious for a good way to keep object graphs alive unexpectedly. Treat those with caution too; if you subscribe once only, fine - but don't subscribe etc per request.
Also - it isn't just locking the operations, since this line:
return onlineUsers;
returns your list, now unprotected. All access to an item must be synchronized. Personally, I'd return a copy, i.e.
lock(syncObj) {
return onlineUsers.ToArray();
}
Finally, returning a .Count from such a collection can be confusing, as it is not guaranteed to still be the count by the time you act on it. It is informational at that point in time only.
Yes, you need to lock the onlineUsers to make that code threadsafe.
A few notes:
Using a HashSet<string> instead of the List<string> may be a good idea, since it is much more efficient for operations like this (Contains and Remove especially). This does not change the locking requirements, though (a sketch follows these notes).
You can declare a class as "static" if it has only static members.
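A minimal sketch combining those two notes, assuming a static class, a HashSet<string>, and a single lock object (names here are illustrative, not from the question):
public static class OnlineUsers
{
    // Events from the question omitted for brevity.
    private static readonly HashSet<string> onlineUsers = new HashSet<string>();
    private static readonly object syncObj = new object();

    public static int NoOfOnlineUsers
    {
        get { lock (syncObj) { return onlineUsers.Count; } }
    }

    public static bool AddUser(string userName)
    {
        // HashSet<T>.Add returns false if the item is already present, so no separate Contains call is needed.
        lock (syncObj) { return onlineUsers.Add(userName); }
    }

    public static bool RemoveUser(string userName)
    {
        lock (syncObj) { return onlineUsers.Remove(userName); }
    }
}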
Yes you do need to lock your code.
private readonly object padlock = new object();

public bool Contains(T item)
{
    lock (padlock)
    {
        return items.Contains(item);
    }
}
Yes. You need to lock the collection before you read or write to the collection, since multiple users are potentially being added from different threadpool workers. You should probably also do it on the count as well, though if you're not concerned with 100% accuracy that may not be an issue.
As per Lucero's answer, you need to lock onlineUsers. Also be careful about what clients of your class will do with the onlineUsers list returned from GetUsers(). I suggest you change your interface - for example, use IEnumerable<string> GetUsers() and make sure the lock is used in its implementation. Something like this:
public static IEnumerable<string> GetUsers() {
lock (...) {
foreach (var element in onlineUsers)
yield return element;
// We need foreach, just "return onlineUsers" would release the lock too early!
}
}
Note that this implementation can expose you to deadlocks if users try to call some other method of OnlineUsers that uses lock, while still iterating over the result of GetUsers().
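One way to sidestep that risk is to build a snapshot while holding the lock and let callers iterate over the copy afterwards; a minimal sketch (syncObj stands in for whatever lock object you use):
public static IEnumerable<string> GetUsers()
{
    lock (syncObj)
    {
        // The copy is made while holding the lock; the caller iterates it outside the lock.
        return onlineUsers.ToArray();
    }
}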
That code is not thread-safe per se.
I will not make any suggestions about your design, since you didn't ask for any. I'll assume you found good reasons for those static members and for exposing your list's contents as you did.
However, if you want to make your code thread-safe, you should basically use a lock object to lock on, and wrap the contents of your methods with a lock statement:
private readonly object syncObject = new object();
void SomeMethod()
{
lock (this.syncObject)
{
// Work with your list here
}
}
Beware that those events being raised have the potential to hold the lock for an extended period of time, depending on what the delegates do.
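One common way to limit that is to mutate the list inside the lock but raise the event after the lock has been released; a minimal sketch, assuming the AddUser shape from the question and a lock object named syncObject:
public static void AddUser(string userName)
{
    bool added = false;
    lock (syncObject)
    {
        if (!onlineUsers.Contains(userName))
        {
            onlineUsers.Add(userName);
            added = true;
        }
    }

    // Raise the event outside the lock so a slow handler cannot block other callers.
    if (added)
    {
        var handler = OnUserAdded;   // copy to a local to avoid a race with unsubscription
        if (handler != null)
            handler(null, userName);
    }
}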
You could omit the lock from the NoOfOnlineUsers property while declaring your list as volatile. However, if you want the Count value to persist for as long as you are using it at a certain moment, use a lock there, as well.
As others suggested here, exposing your list directly, even with a lock, will still pose a threat to its contents. I would go with returning a copy (and that should fit most purposes), as Marc Gravell advised.
Now, since you said you are using this in an ASP.NET environment, it is worth saying that all local and member variables, as well as their member variables, if any, are thread safe.
Isn't this a simpler as well as safe (and hence better) way to implement a singleton instead of doing double-checked locking mambo-jambo? Any drawbacks of this approach?
public class Singleton
{
private static Singleton _instance;
private Singleton() { Console.WriteLine("Instance created"); }
public static Singleton Instance
{
get
{
if (_instance == null)
{
Interlocked.CompareExchange(ref _instance, new Singleton(), null);
}
return _instance;
}
}
public void DoStuff() { }
}
EDIT: the test for thread-safety failed, can anyone explain why? How come Interlocked.CompareExchange isn't truly atomic?
public class Program
{
static void Main(string[] args)
{
Parallel.For(0, 1000000, delegate(int i) { Singleton.Instance.DoStuff(); });
}
}
Result (4 cores, 4 logical processors)
Instance created
Instance created
Instance created
Instance created
Instance created
If your singleton is ever in danger of initializing itself multiple times, you have a lot worse problems. Why not just use:
public class Singleton
{
private static Singleton instance=new Singleton();
private Singleton() {}
public static Singleton Instance{get{return instance;}}
}
Absolutely thread-safe in regards to initialization.
Edit: in case I wasn't clear, your code is horribly wrong. Both the if check and the new are not thread-safe! You need to use a proper singleton class.
You may well be creating multiple instances, but these will get garbage collected because they are not used anywhere. In no case does the static _instance field variable change its value more than once, the single time that it goes from null to a valid value. Hence consumers of this code will only ever see the same instance, despite the fact that multiple instances have been created.
Lock free programming
Joe Duffy, in his book Concurrent Programming on Windows, actually analyses this very pattern that you are trying to use, in chapter 10, "Memory Models and Lock Freedom", page 526.
He refers to this pattern as a Lazy initialization of a relaxed reference:
public class LazyInitRelaxedRef<T> where T : class
{
private volatile T m_value;
private Func<T> m_factory;
public LazyInitRelaxedRef(Func<T> factory) { m_factory = factory; }
public T Value
{
get
{
if (m_value == null)
Interlocked.CompareExchange(ref m_value, m_factory(), null);
return m_value;
}
}
/// <summary>
/// An alternative version of the above Value accessor that disposes
/// of garbage if it loses the race to publish a new value. (Page 527.)
/// </summary>
public T ValueWithDisposalOfGarbage
{
get
{
if (m_value == null)
{
T obj = m_factory();
if (Interlocked.CompareExchange(ref m_value, obj, null) != null && obj is IDisposable)
((IDisposable)obj).Dispose();
}
return m_value;
}
}
}
As we can see, in the above sample the methods are lock-free at the price of creating throw-away objects. In any case, the Value property will not change for consumers of such an API.
Balancing Trade-offs
Lock Freedom comes at a price and is a matter of choosing your trade-offs carefully. In this case the price of lock freedom is that you have to create instances of objects that you are not going to use. This may be an acceptable price to pay since you know that by being lock free, there is a lower risk of deadlocks and also thread contention.
In this particular instance however, the semantics of a singleton are in essence to Create a single instance of an object, so I would much rather opt for Lazy<T> as #Centro has quoted in his answer.
Nevertheless, it still begs the question: when should we use Interlocked.CompareExchange? I liked your example; it is quite thought-provoking, and many people are very quick to dismiss it as wrong when it is not as horribly wrong as #Blindy claims.
It all boils down to whether you have calculated the tradeoffs and decided:
How important is it that you produce one and only one instance?
How important is it to be lock free?
As long as you are aware of the trade-offs and make it a conscious decision to create new objects for the benefit of being lock free, then your example could also be an acceptable answer.
In order not to use the 'double-checked locking mambo-jambo', or simply to avoid reinventing the wheel with your own singleton, use a ready-made solution included in .NET 4.0: Lazy<T>.
public class Singleton
{
    private static readonly Lazy<Singleton> _instance =
        new Lazy<Singleton>(() => new Singleton());

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            return _instance.Value;
        }
    }
}
I am not convinced you can completely trust that. Yes, Interlocked.CompareExchange is atomic, but new Singleton() is not going to be atomic in any non-trivial case. Since it would have to be evaluated before exchanging values, this would not be a thread-safe implementation in general.
what about this?
public sealed class Singleton
{
Singleton()
{
}
public static Singleton Instance
{
get
{
return Nested.instance;
}
}
class Nested
{
// Explicit static constructor to tell C# compiler
// not to mark type as beforefieldinit
static Nested()
{
}
internal static readonly Singleton instance = new Singleton();
}
}
It's the fifth version on this page:
http://www.yoda.arachsys.com/csharp/singleton.html
I'm not sure, but the author seems to think it's both thread-safe and lazily loaded.
Your singleton initializer is behaving exactly as it should. See Raymond Chen's Lock-free algorithms: The singleton constructor:
This is a double-check lock, but without the locking. Instead of taking lock when doing the initial construction, we just let it be a free-for-all over who gets to create the object. If five threads all reach this code at the same time, sure, let's create five objects. After everybody creates what they think is the winning object, they called InterlockedCompareExchangePointerRelease to attempt to update the global pointer.
This technique is suitable when it's okay to let multiple threads try to create the singleton (and have all the losers destroy their copy). If creating the singleton is expensive or has unwanted side-effects, then you don't want to use the free-for-all algorithm.
Each thread creates the object, as it thinks nobody has created it yet. But then, during the InterlockedCompareExchange, only one thread will really be able to set the global singleton.
Bonus reading
One-Time Initialization helper functions save you from having to write all this code yourself. They deal with all the synchronization and memory barrier issues, and support both the one-person-gets-to-initialize and the free-for-all-initialization models.
A lazy initialization primitive for .NET provides a C# version of the same.
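If you are on .NET 4 or later, System.Threading.LazyInitializer exposes this free-for-all model directly (several threads may run the factory, but only the first published instance is ever returned); a minimal sketch:
using System.Threading;

public class Singleton
{
    private static Singleton _instance;
    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            // Losers of the race simply have their instance discarded.
            return LazyInitializer.EnsureInitialized(ref _instance, () => new Singleton());
        }
    }
}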
This is not thread-safe.
You would need a lock to hold the if() and the Interlocked.CompareExchange() together, and then you wouldn't need the CompareExchange anymore.
You still have the issue that you're quite possibly creating and throwing away instances of your singleton. When you execute Interlocked.CompareExchange(), the Singleton constructor will always be executed, regardless of whether the assignment will succeed. So you're no better off (or worse off, IMHO) than if you said:
if ( _instance == null )
{
lock(latch)
{
_instance = new Singleton() ;
}
}
Better performance vis-a-vis thread contention than if you swapped the position of the lock and the test for null, but at the risk of an extra instance being constructed.
An obvious singleton implementation for .NET?
Auto-Property initialization (C# 6.0) does not seem to cause the multiple instantiations of Singleton you are seeing.
public class Singleton
{
    public static Singleton Instance { get; } = new Singleton();
    private Singleton() { }
}
I think the simplest way after .NET 4.0 is using System.Lazy<T>:
public class Singleton
{
private static readonly Lazy<Singleton> lazy = new Lazy<Singleton>(() => new Singleton());
public static Singleton Instance { get { return lazy.Value; } }
private Singleton() { }
}
Jon Skeet has a nice article here that covers a lot of ways of implementing singleton and the problems of each one.
Don't use locking. Use your language environment
The simplest thread-safe implementation is:
public class Singleton
{
private static readonly Singleton _instance;
private Singleton() { }
static Singleton()
{
_instance = new Singleton();
}
public static Singleton Instance
{
get { return _instance; }
}
}
I've been sitting on this idea for quite a long time and would like to hear what you guys think about it.
The standard idiom for writing a singleton is roughly as follows:
public class A {
...
private static A _instance;
public static A Instance() {
if(_instance == null) {
_instance = new A();
}
return _instance;
}
...
}
Here I'm proposing another solution:
public class A {
    ...
    private static A _instance;
    public static A Instance() {
        try {
            return _instance.Self();
        } catch(NullReferenceException) {
            _instance = new A();
        }
        return _instance.Self();
    }
    public A Self() {
        return this;
    }
    ...
}
The basic idea behind it is that the runtime cost of one dereference plus an unthrown exception is less than that of one null check. I've tried to measure the potential performance gain, and here are my numbers:
Sleep 1sec (try/catch): 188788ms
Sleep 1sec (nullcheck): 207485ms
And the test code:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Diagnostics;
public class A
{
private static A _instance;
public static A Instance() {
try {
return _instance.Self();
} catch(NullReferenceException) {
_instance = new A();
}
return _instance.Self();
}
public A Self() {
return this;
}
public void DoSomething()
{
Thread.Sleep(1);
}
}
public class B
{
private static B _instance;
public static B Instance() {
if(_instance == null) {
_instance = new B();
}
return _instance;
}
public void DoSomething()
{
Thread.Sleep(1);
}
}
public class MyClass
{
public static void Main()
{
Stopwatch sw = new Stopwatch();
sw.Reset();
sw.Start();
for(int i = 0; i < 100000; ++i) {
A.Instance().DoSomething();
}
Console.WriteLine(sw.ElapsedMilliseconds);
sw.Reset();
sw.Start();
for(int i = 0; i < 100000; ++i) {
B.Instance().DoSomething();
}
Console.WriteLine(sw.ElapsedMilliseconds);
RL();
}
#region Helper methods
private static void WL(object text, params object[] args)
{
Console.WriteLine(text.ToString(), args);
}
private static void RL()
{
Console.ReadLine();
}
private static void Break()
{
System.Diagnostics.Debugger.Break();
}
#endregion
}
The resulting performance gain is almost 10%. The question is whether it's a micro-optimization, or whether it can offer a significant performance boost for singleton-happy applications (or middleware, like logging).
What you're asking about is the best way to implement a bad singleton pattern. You should have a look at Jon Skeet's article on how to implement the singleton pattern in C#. You'll find that there are much better (safer) ways and they don't suffer from the same performance issues.
This is a horrid way to do this. As others have pointed out, exceptions should only be used for handling exceptional, unexpected situations. And because we make that assumption, many organizations run their code in contexts which aggressively seek out and report exceptions, even handled ones, because a handled null reference exception is almost certainly a bug. If your program is unexpectedly dereferencing invalid memory and continuing merrily along then odds are good that something is deeply, badly broken in your program and it should be brought to someone's attention.
Do not "cry wolf" and deliberately construct a situation that looks horribly broken but is in fact by design. That's just making more work for everyone. There is a standard, straightforward, accepted way to make a singleton in C#; do that if that's what you mean. Don't try to invent some crazy thing that violates good programming principles. People smarter than me have designed a singleton implementation that works; it's foolish to not use it.
I learned this the hard way. Long story short, I once deliberately used a test that would most of the time dereference bad memory in a mainline scenario in the VBScript runtime. I made sure to carefully handle the exception and recover correctly, and the ASP team went crazy that afternoon. Suddenly all their checks for server integrity started reporting that huge numbers of their pages were violating memory integrity and recovering from that. I ended up rearchitecting the implementation of that scenario to only do code paths that did not result in exceptions.
A null ref exception should always be a bug, period. When you handle a null ref exception, you are hiding a bug.
More musing on exception classifications:
http://ericlippert.com/2008/09/10/vexing-exceptions/
I think that if my application is calling a singleton's constructor often enough for a 10% performance boost to mean anything I'd be rather worried.
That said, neither version is thread-safe. Focus on getting things like that right first.
"We should forget about small
efficiencies, say about 97% of the
time: premature optimization is the
root of all evil."
This optimization seems trivial. I've always been taught to use try/catch blocks only to catch conditions that would be difficult or impossible to check with an if statement.
It's not that your proposed approach wouldn't work. It just isn't significantly better than the original way.
The "improved" implementation has a big problem and that is that it fires an interrupt. A rule of thumb is that application logic should not be dependent on exceptions firing.
You should never use exception handling for control flow.
Besides, instead of:
if(_instance == null) {
_instance = new A();
}
return _instance;
you can do:
return _instance ?? (_instance = new A());
Which is semantically the same.
why not just:
public class A {
...
private static A _instance = new A();
public static A Instance() {
return _instance;
}
...
}
I think that your solution is perfectly acceptable, though there are a few things that you should (and people rarely do) consider before implementing a lazily loaded singleton anti-pattern.
Could you provide a DI version instead? Would a Unity container with an interface contract suffice? This would allow you to swap the implementation later if need be, and it also makes testing a lot easier (a sketch follows the next point).
If you must insist on using a singleton, do you really need a lazy-loaded implementation? The cost of creating the instance in the static constructor, whether implicitly or explicitly, is only incurred when the class is first referenced at run time, and in my experience almost ALL singletons are only ever referenced to get access to Instance anyway.
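For the first point, a rough sketch of what the interface-contract version might look like with the Microsoft Unity container; the type names are illustrative, and the exact namespace varies by Unity version:
using Microsoft.Practices.Unity;   // newer Unity releases use the Unity / Unity.Lifetime namespaces

public interface IInstanceService
{
    void DoSomething();
}

public class InstanceService : IInstanceService
{
    public void DoSomething() { }
}

public static class CompositionRoot
{
    public static IUnityContainer Configure()
    {
        var container = new UnityContainer();
        // ContainerControlledLifetimeManager gives singleton-like, container-managed lifetime,
        // and the interface makes the implementation swappable and testable.
        container.RegisterType<IInstanceService, InstanceService>(new ContainerControlledLifetimeManager());
        return container;
    }
}
Consumers then take an IInstanceService dependency (typically via constructor injection) instead of calling a static Instance property.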
If you need to implement it the way you've described, you'd do it like the following.
UPDATE: I've updated the example to lock the class type instead of a lock variable, as Brian Gideon points out the instance could otherwise be seen in a half-initialized state. Note that any use of lock(typeof(...)) is strongly advised against, and I recommend that you never use this approach.
public class A {
private static A _instance;
private A() { }
public static A Instance {
get {
try {
return _instance.Self();
} catch (NullReferenceException) {
lock (typeof(A)) {
if (_instance == null)
_instance = new A();
}
}
return _instance.Self();
}
}
private A Self() { return this; }
}