Initialize a resource, destroy it when no thread is using it, and re-create it when another thread needs it - C#

I'm developing a multithreaded application in C#.
I have a resource that I want to initialize the first time a thread needs it.
This resource can be used by as many threads as necessary.
I need to detect when this resource is free (no thread is using it) in order to destroy it, and later, when another thread requests it, initialize it again.
Any ideas?

You could do something like the following:
public class SomeClass // basic class for example
{
    public void foo() { }

    public void Close()
    {
        // release any resources you might have open
    }
}

public static class SingletonInstance
{
    private static object m_lock = new object();
    private static SomeClass m_instance = null;
    private static int m_counter = 0;

    public static SomeClass Instance
    {
        get
        {
            lock (m_lock) {
                if (m_instance == null) {
                    m_instance = new SomeClass();
                }
                ++m_counter;    // one more thread is now using the instance
            }
            return m_instance;
        }
        set
        {
            // Assigning null acts as a "release": when the last user lets go,
            // the underlying resource is closed and destroyed.
            lock (m_lock) {
                if (m_counter > 0 && --m_counter == 0) {
                    m_instance.Close();
                    m_instance = null;
                }
            }
        }
    }
}
Then, in some initialization code, you can simply call SingletonInstance.Instance = null; to force SingletonInstance to be statically initialized (static classes are initialized on first use). Calling SingletonInstance.Instance = null; before starting any thread code avoids a race on the static initialization of the class itself; otherwise, if two threads both call SingletonInstance.Instance.foo();, there is still a race as to which one initializes the class first.
Then in your thread code you could do something like the following:
void MyThreadFunction()
{
    SingletonInstance.Instance.foo();

    // ... more thread code ...

    SingletonInstance.Instance = null;
}
This is a very basic example, meant more to illustrate the idea; your needs might be slightly different, but the approach is the same.
Hope that can help.

You could wrap your resource in a singleton handler that destroys it when it is no longer referenced by any thread.
You can look here for an example of how to create such multi-threaded singleton objects. Initialize the resource when the object is created and release it in its destructor.
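For illustration, here is one possible shape for such a handler (a minimal sketch only; SharedResource and ExpensiveResource are made-up names for this example, and the reference counting mirrors the counter in the answer above):

public class ExpensiveResource          // stands in for your real resource
{
    public void Close() { /* release handles, connections, ... */ }
}

// Hypothetical reference-counted wrapper: the resource is created on the first
// Acquire() and destroyed when the last holder calls Dispose().
public sealed class SharedResource : IDisposable
{
    private static readonly object _lock = new object();
    private static ExpensiveResource _resource;
    private static int _refCount;

    private bool _disposed;

    private SharedResource() { }

    public static SharedResource Acquire()
    {
        lock (_lock)
        {
            if (_refCount == 0)
                _resource = new ExpensiveResource();   // initialize on first use
            _refCount++;
            return new SharedResource();
        }
    }

    public ExpensiveResource Value
    {
        get { lock (_lock) { return _resource; } }
    }

    public void Dispose()
    {
        lock (_lock)
        {
            if (_disposed) return;
            _disposed = true;
            if (--_refCount == 0)
            {
                _resource.Close();                     // destroy when the last user is done
                _resource = null;
            }
        }
    }
}

A thread would then use it as using (var handle = SharedResource.Acquire()) { /* work with handle.Value */ }, so the underlying resource is torn down automatically when the last holder disposes its handle.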

C# lock to simultaneously read/write and display results

Here's my question.
Say I have this program (I'll try to simplify it as much as I can):
receiveResultsThread waits for results from different network clients, while displayResultToUIThread updates the UI with all the results received.
class Program
{
    private static Tests TestHolder;

    static void Main(string[] args)
    {
        TestHolder = new Tests();

        Thread receiveResultsThread = new Thread(ReceiveResult);
        receiveResultsThread.Start();

        Thread displayResultToUIThread = new Thread(DisplayResults);
        displayResultToUIThread.Start();

        Console.ReadKey();
    }

    public static void ReceiveResult()
    {
        while (true)
        {
            if (IsNewTestResultReceivedFromNetwork())
            {
                lock (Tests.testLock)
                    TestHolder.ExecutedTests.Add(new Test { Result = "OK" });
            }
            Thread.Sleep(200);
        }
    }

    private static void DisplayResults(object obj)
    {
        while (true)
        {
            lock (Tests.testLock)
            {
                UIManager.DisplayAllResultInUIGrid(TestHolder.ExecutedTests);
            }
            Thread.Sleep(200);
        }
    }
}

class Test
{
    public string Result { get; set; }
}

class Tests
{
    public static readonly object testLock = new object();
    public List<Test> ExecutedTests;

    public Tests()
    {
        ExecutedTests = new List<Test>();
    }
}

class UIManager
{
    public static void DisplayAllResultInUIGrid(List<Test> list)
    {
        // Code to update UI.
    }
}
Considering that the goal is to not update the UI while the other thread is adding tests to the list, is it safe to use:
lock (Tests.testLock)
or should I use:
lock (TestHolder.testLock)
(changing testLock from a static field to an instance field)?
Do you think this is a good way to write this kind of program or can you suggest a better pattern?
Thank you for your help!
Public lock objects (let alone public static ones) tend to be dangerous. Please see here.
The reason it's bad practice to lock on a public object is that you can never be sure who ELSE is locking on that object.
Furthermore, just exposing a List<T> and adding objects to it from an outer scope could be a smell, too.
In my opinion it would be a better idea to have an AddTest method in Tests:
class Tests
{
    private static readonly object testLock = new object();
    private List<Test> executedTests;

    public Tests()
    {
        executedTests = new List<Test>();
    }

    public void AddTest(Test t)
    {
        lock (testLock)
        {
            executedTests.Add(t);
        }
    }

    public IEnumerable<Test> GetTests()
    {
        lock (testLock)
        {
            return executedTests.ToArray();
        }
    }

    [...]
}
Clients of your Tests class do not have to worry about using the lock object correctly. More precisely, they don't have to worry about any of the internals of your class.
You could also rename the class to ConcurrentTestsCollection or the like, so that users of the class know it is thread safe to some extent.
While you can use Tasks and the async/await keywords to do this less verbosely, I don't think it will fully solve your question.
I will assume that ExecutedTests is a List (or the like) that you want to be thread safe, which is why you are taking a lock while accessing it.
I would make the list itself thread safe, rather than the operations against it. This removes the need for a lock or a lock object.
You could implement this yourself or use something in the System.Collections.Concurrent namespace.
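For example, a sketch assuming the order of results does not matter and that a ConcurrentQueue<Test> replaces the List<Test> from the question (the UI thread is handed a snapshot rather than the live collection):

using System.Collections.Concurrent;
using System.Linq;

class Tests
{
    // ConcurrentQueue<T> is safe for concurrent writers and readers,
    // so no explicit lock object is needed.
    public ConcurrentQueue<Test> ExecutedTests { get; } = new ConcurrentQueue<Test>();
}

// Receiving thread:
//     TestHolder.ExecutedTests.Enqueue(new Test { Result = "OK" });
//
// UI thread (hand the grid a stable snapshot rather than the live collection):
//     UIManager.DisplayAllResultInUIGrid(TestHolder.ExecutedTests.ToList());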
P.S.
If the threads are meant to be shut down when the process exits, you should set each Thread's IsBackground property to true, for example:
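Applied to the Main method from the question:

static void Main(string[] args)
{
    TestHolder = new Tests();

    Thread receiveResultsThread = new Thread(ReceiveResult);
    receiveResultsThread.IsBackground = true;    // background threads do not keep the process alive
    receiveResultsThread.Start();

    Thread displayResultToUIThread = new Thread(DisplayResults);
    displayResultToUIThread.IsBackground = true;
    displayResultToUIThread.Start();

    Console.ReadKey();
}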

Singleton in current thread

I have my singleton as below:
public class CurrentSingleton
{
    private static CurrentSingleton uniqueInstance = null;
    private static object syncRoot = new Object();

    private CurrentSingleton() { }

    public static CurrentSingleton getInstance()
    {
        if (uniqueInstance == null)
        {
            lock (syncRoot)
            {
                if (uniqueInstance == null)
                    uniqueInstance = new CurrentSingleton();
            }
        }
        return uniqueInstance;
    }
}
I would like to check whether, if I have two threads, I get two different singletons. I think I should get two different singletons (with different references), so here is what I'm doing:
class Program
{
    static void Main(string[] args)
    {
        int currentCounter = 0;
        for (int i = 0; i < 100; i++)
        {
            cs1 = null;
            cs2 = null;
            Thread ct1 = new Thread(cfun1);
            Thread ct2 = new Thread(cfun2);
            ct1.Start();
            ct2.Start();
            if (cs1 == cs2) currentCounter++;
        }
        Console.WriteLine(currentCounter);
        Console.Read();
    }

    static CurrentSingleton cs1;
    static CurrentSingleton cs2;

    static void cfun1()
    {
        cs1 = CurrentSingleton.getInstance();
    }

    static void cfun2()
    {
        cs2 = CurrentSingleton.getInstance();
    }
}
I supposed that I should get currentCounter = 0 (in which case every pair of singletons is different, because each is created by a different thread). Unfortunately, I get, for example, currentCounter = 70, so in 70 cases I have the same singletons... Could you tell me why?
I would like to check whether, if I have two threads, there are two different singletons
No, there are not. A static field is shared across each entire AppDomain, not each thread.
If you want to have separate values per thread, I'd recommend using ThreadLocal<T> to store the backing data, as this will provide a nice wrapper for per-thread data.
Also, in C#, it's typically better to implement a lazy singleton via Lazy<T> instead of via double checked locking. This would look like:
public sealed class CurrentSingleton // Seal your singletons if possible
{
    private static Lazy<CurrentSingleton> uniqueInstance
        = new Lazy<CurrentSingleton>(() => new CurrentSingleton());

    private CurrentSingleton() { }

    public static CurrentSingleton Instance // use a property, since this is C#...
    {
        get { return uniqueInstance.Value; }
    }
}
To make a class that provides one instance per thread, you could use:
public sealed class InstancePerThread
{
    private static ThreadLocal<InstancePerThread> instances
        = new ThreadLocal<InstancePerThread>(() => new InstancePerThread());

    private InstancePerThread() { }

    public static InstancePerThread Instance
    {
        get { return instances.Value; }
    }
}
By default, a static field is a single instance shared by all threads that access it.
You should take a look at the [ThreadStatic] attribute. Apply it to a static field to make it have a distinct instance for each thread that accesses it.
Use of a locking object ensures that only one value gets created; you can verify this by putting some logging in your CurrentSingleton constructor.
However, I think there's a small gap in your logic: imagine that two threads simultaneously call this method while uniqueInstance is null. Both will evaluate the == null check and advance to the locking. One will win, lock on syncRoot, and initialize uniqueInstance. When its lock block ends, the other will get its own lock and initialize uniqueInstance again.
You need to lock on syncRoot before even testing whether uniqueInstance is null.
No matter what you do, you are never going to get currentCounter = 0.
We are forgetting the fact that the Main method itself runs on a thread, and the scheduler decides when each thread gets to run. If you debug the code by putting breakpoints in the Main method and in CurrentSingleton, you will notice this: by the time a new CurrentSingleton object is actually created, the for loop may already be on iteration 3, 4, or any other number. The iterations are fast, so the comparison often runs while cs1 and cs2 are still null, or while one is null and the other already assigned. And I think this is the catch.
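For what it's worth, if you Join both worker threads before comparing, the timing race disappears; with the original (non-ThreadStatic) singleton you would then get currentCounter = 100 every time, because both threads see the same shared instance. A sketch of the loop from the question:

for (int i = 0; i < 100; i++)
{
    cs1 = null;
    cs2 = null;
    Thread ct1 = new Thread(cfun1);
    Thread ct2 = new Thread(cfun2);
    ct1.Start();
    ct2.Start();
    ct1.Join();   // wait until cs1 has actually been assigned
    ct2.Join();   // wait until cs2 has actually been assigned
    if (cs1 == cs2) currentCounter++;
}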
Reed has a point: a static field will always be shared, hence you need to change your code in the following way:
public class CurrentSingleton
{
    [ThreadStatic]
    private static CurrentSingleton uniqueInstance = null;

    private static object syncRoot = new Object();

    private CurrentSingleton() { }

    public static CurrentSingleton getInstance()
    {
        if (uniqueInstance == null)
            uniqueInstance = new CurrentSingleton();
        return uniqueInstance;
    }
}
And as per the analysis, when you currently see "two different objects" on some iterations, that is just a timing mismatch: the comparison may see null and an object, or an object and null. To reliably get a separate object per thread you need to use [ThreadStatic].

Thread Safe C# Singleton Pattern

I have some questions regarding the the singleton pattern as documented here:
http://msdn.microsoft.com/en-us/library/ff650316.aspx
The following code is an extract from the article:
using System;

public sealed class Singleton
{
    private static volatile Singleton instance;
    private static object syncRoot = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }
}
Specifically, in the above example, is there a need to compare instance to null twice, before and after the lock? Is this necessary? Why not perform the lock first and make the comparison?
Is there a problem in simplifying to the following?
public static Singleton Instance
{
    get
    {
        lock (syncRoot)
        {
            if (instance == null)
                instance = new Singleton();
        }
        return instance;
    }
}
Is performing the lock expensive?
Performing the lock is terribly expensive when compared to the simple pointer check instance != null.
The pattern you see here is called double-checked locking. Its purpose is to avoid the expensive lock operation which is only going to be needed once (when the singleton is first accessed). The implementation is such because it also has to ensure that when the singleton is initialized there will be no bugs resulting from thread race conditions.
Think of it this way: a bare null check (without a lock) is guaranteed to give you a correct usable answer only when that answer is "yes, the object is already constructed". But if the answer is "not constructed yet" then you don't have enough information because what you really wanted to know is that it's "not constructed yet and no other thread is intending to construct it shortly". So you use the outer check as a very quick initial test and you initiate the proper, bug-free but "expensive" procedure (lock then check) only if the answer is "no".
The above implementation is good enough for most cases, but at this point it's a good idea to go and read Jon Skeet's article on singletons in C# which also evaluates other alternatives.
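One of the alternatives evaluated there (reproduced from memory, so treat it as a sketch rather than a verbatim quote) is the fully lazy nested-class version, which gets its thread safety from the CLR's type-initialization guarantees instead of an explicit lock:

public sealed class Singleton
{
    private Singleton() { }

    public static Singleton Instance
    {
        get { return Nested.instance; }
    }

    private static class Nested
    {
        // The explicit static constructor tells the compiler not to mark the
        // type as beforefieldinit, so the instance is created exactly once,
        // the first time Nested is referenced.
        static Nested() { }

        internal static readonly Singleton instance = new Singleton();
    }
}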
The Lazy<T> version:
public sealed class Singleton
{
    private static readonly Lazy<Singleton> lazy
        = new Lazy<Singleton>(() => new Singleton());

    public static Singleton Instance => lazy.Value;

    private Singleton() { }
}
Requires .NET 4 and C# 6.0 (VS2015) or newer.
Performing a lock: Quite cheap (still more expensive than a null test).
Performing a lock when another thread has it: You get the cost of whatever they've still to do while locking, added to your own time.
Performing a lock when another thread has it, and dozens of other threads are also waiting on it: Crippling.
For performance reasons, you always want to hold locks that another thread might want for the shortest period of time possible.
Of course it's easier to reason about "broad" locks than narrow ones, so it's worth starting broad and optimising as needed, but there are some cases we learn from experience and familiarity where a narrower lock fits the pattern.
(Incidentally, if you can just use private static volatile Singleton instance = new Singleton(), or if you can avoid singletons altogether and use a static class instead, both are better with regard to these concerns.)
The reason is performance. If instance != null (which will always be the case except the very first time), there is no need to take a costly lock: two threads accessing the already-initialized singleton simultaneously would be synchronized unnecessarily.
In almost every case (that is: all cases except the very first ones), instance won't be null. Acquiring a lock is more costly than a simple check, so checking the value of instance once before locking is a nice and almost free optimization.
This pattern is called double-checked locking: http://en.wikipedia.org/wiki/Double-checked_locking
This is called the double-checked locking mechanism: first we check whether the instance has already been created; only if it has not do we synchronize and create the instance. This drastically improves the performance of the application, because taking a lock is expensive, so we avoid it by checking for null first. It is also thread safe and a good way to get the best performance. Please have a look at the following code.
public sealed class Singleton
{
    private static readonly object Instancelock = new object();

    private Singleton()
    {
    }

    private static Singleton instance = null;

    public static Singleton GetInstance
    {
        get
        {
            if (instance == null)
            {
                lock (Instancelock)
                {
                    if (instance == null)
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}
Jeffrey Richter recommends the following:
public sealed class Singleton
{
    private static readonly Object s_lock = new Object();
    private static Singleton instance = null;

    private Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            if (instance != null) return instance;

            Monitor.Enter(s_lock);
            Singleton temp = new Singleton();
            Interlocked.Exchange(ref instance, temp);
            Monitor.Exit(s_lock);

            return instance;
        }
    }
}
You could also eagerly create a thread-safe Singleton instance, depending on your application's needs. This is succinct code, though I would prefer #andasa's lazy version.
public sealed class Singleton
{
    private static readonly Singleton instance = new Singleton();

    private Singleton() { }

    public static Singleton Instance()
    {
        return instance;
    }
}
Another version of Singleton, where the following line of code creates the instance at application startup:
private static readonly Singleton singleInstance = new Singleton();
Here the CLR (Common Language Runtime) takes care of object initialization and thread safety. That means we do not need to write any explicit code to handle thread safety in a multithreaded environment.
"The Eager loading in singleton design pattern is nothing a process in
which we need to initialize the singleton object at the time of
application start-up rather than on demand and keep it ready in memory
to be used in future."
public sealed class Singleton
{
    private static int counter = 0;

    private Singleton()
    {
        counter++;
        Console.WriteLine("Counter Value " + counter.ToString());
    }

    private static readonly Singleton singleInstance = new Singleton();

    public static Singleton GetInstance
    {
        get
        {
            return singleInstance;
        }
    }

    public void PrintDetails(string message)
    {
        Console.WriteLine(message);
    }
}
From Main:
static void Main(string[] args)
{
    Parallel.Invoke(
        () => PrintTeacherDetails(),
        () => PrintStudentdetails()
    );
    Console.ReadLine();
}

private static void PrintTeacherDetails()
{
    Singleton fromTeacher = Singleton.GetInstance;
    fromTeacher.PrintDetails("From Teacher");
}

private static void PrintStudentdetails()
{
    Singleton fromStudent = Singleton.GetInstance;
    fromStudent.PrintDetails("From Student");
}
Reflection-resistant Singleton pattern:
public sealed class Singleton
{
    public static Singleton Instance => _lazy.Value;

    private static Lazy<Singleton, Func<int>> _lazy { get; }

    static Singleton()
    {
        var i = 0;
        _lazy = new Lazy<Singleton, Func<int>>(() =>
        {
            i++;
            return new Singleton();
        }, () => i);
    }

    private Singleton()
    {
        if (_lazy.Metadata() == 0 || _lazy.IsValueCreated)
            throw new Exception("Singleton creation exception");
    }

    public void Run()
    {
        Console.WriteLine("Singleton called");
    }
}

Multiple locks locking the same functions C# .Net

I have a simple question about lock.
Are Process1 and Process2 effectively the same, because they both end up locking around LongProcess?
Thank you.
private static readonly object _Locker = new object();

public void Process1()
{
    lock (_LockerA)
    {
        LongProcess();
    }
}

public void Process2()
{
    if (curType == A)
        ProcessTypeA();
    else if (curType == B)
        ProcessTypeB();
}

private static readonly object _LockerA = new object();

public void ProcessTypeA()
{
    lock (_LockerA)
    {
        LongProcess();
    }
}

private static readonly object _LockerB = new object();

public void ProcessTypeB()
{
    lock (_LockerB)
    {
        LongProcess();
    }
}

public void LongProcess()
{
}
No, they are not the same. If you lock on a different object than the one an existing lock is held on, both code paths are allowed to proceed. So, in the case where Process2 takes the curType == B branch, the lock uses the _LockerB object; if another thread then takes one of the locks that use the _LockerA object, it will still be allowed to enter LongProcess.
Process1 and Process2 have the potential to lock on the same object, but they are definitely not the same. Locks on the same object are, however, allowed within the same call stack (also referred to as recursive locking, e.g. in the case where Process1 ends up invoking Process2 and both reach the same lock object). This could perhaps be better described as dependent locking.
Your question is however fairly vague so you'll have to elaborate on what you mean by the same...
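To illustrate the recursive-locking point: the C# lock statement (Monitor) is reentrant for the thread that already owns it, so the following standalone sketch (not code from the question) runs to completion without deadlocking:

using System;

class ReentrantLockDemo
{
    private static readonly object _locker = new object();

    static void Main()
    {
        Process1();                 // acquires _locker, then calls Process2
        Console.WriteLine("Done");  // reached: a thread may re-enter a lock it already holds
    }

    static void Process1()
    {
        lock (_locker)
        {
            Process2();
        }
    }

    static void Process2()
    {
        lock (_locker)              // same object, same thread: allowed (recursive lock)
        {
            Console.WriteLine("Inside nested lock");
        }
    }
}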

What's the use of the SyncRoot pattern?

I'm reading a C# book that describes the SyncRoot pattern. It shows:
void doThis()
{
    lock (this) { ... }
}

void doThat()
{
    lock (this) { ... }
}
and compares it to the SyncRoot pattern:
object syncRoot = new object();

void doThis()
{
    lock (syncRoot) { ... }
}

void doThat()
{
    lock (syncRoot) { ... }
}
However, I don't really understand the difference here; it seems that in both cases both methods can only be accessed by one thread at a time.
The book explains: "... because the object of the instance can also be used for synchronized access from the outside, and you can't control this from the class itself, you can use the SyncRoot pattern." Eh? 'Object of the instance'?
Can anyone tell me the difference between the two approaches above?
If you have an internal data structure that you want to prevent simultaneous access to by multiple threads, you should always make sure the object you're locking on is not public.
The reasoning behind this is that a public object can be locked by anyone, and thus you can create deadlocks because you're not in total control of the locking pattern.
This means that locking on this is not an option, since anyone can lock on that object. Likewise, you should not lock on something you expose to the outside world.
Which means that the best solution is to use a private object for locking, and thus the tip is to just use a plain new Object().
Locking data structures is something you really need to have full control over, otherwise you risk setting up a scenario for deadlocking, which can be very problematic to handle.
The actual purpose of this pattern is to implement correct synchronization across a hierarchy of wrappers.
For example, if class WrapperA wraps an instance of ClassThanNeedsToBeSynced, and class WrapperB wraps the same instance of ClassThanNeedsToBeSynced, you can't lock on WrapperA or WrapperB, because if you lock on WrapperA, a lock on WrapperB won't wait.
For this reason you must lock on wrapperAInst.SyncRoot and wrapperBInst.SyncRoot, which delegate the lock to ClassThanNeedsToBeSynced's own SyncRoot.
Example:
public interface ISynchronized
{
    object SyncRoot { get; }
}

public class SynchronizationCriticalClass : ISynchronized
{
    public object SyncRoot
    {
        // you can return this, because this class wraps nothing.
        get { return this; }
    }
}

public class WrapperA : ISynchronized
{
    ISynchronized subClass;

    public WrapperA(ISynchronized subClass)
    {
        this.subClass = subClass;
    }

    public object SyncRoot
    {
        // you should return the SyncRoot of the underlying class.
        get { return subClass.SyncRoot; }
    }
}

public class WrapperB : ISynchronized
{
    ISynchronized subClass;

    public WrapperB(ISynchronized subClass)
    {
        this.subClass = subClass;
    }

    public object SyncRoot
    {
        // you should return the SyncRoot of the underlying class.
        get { return subClass.SyncRoot; }
    }
}

// Run
class MainClass
{
    delegate void DoSomethingAsyncDelegate(ISynchronized obj);

    public static void Main(string[] args)
    {
        SynchronizationCriticalClass rootClass = new SynchronizationCriticalClass();
        WrapperA wrapperA = new WrapperA(rootClass);
        WrapperB wrapperB = new WrapperB(rootClass);

        // Do some async work with them to test synchronization.

        // Works correctly.
        DoSomethingAsyncDelegate work = new DoSomethingAsyncDelegate(DoSomethingAsyncCorrectly);
        work.BeginInvoke(wrapperA, null, null);
        work.BeginInvoke(wrapperB, null, null);

        // Works incorrectly.
        work = new DoSomethingAsyncDelegate(DoSomethingAsyncIncorrectly);
        work.BeginInvoke(wrapperA, null, null);
        work.BeginInvoke(wrapperB, null, null);
    }

    static void DoSomethingAsyncCorrectly(ISynchronized obj)
    {
        lock (obj.SyncRoot)
        {
            // Do something with obj
        }
    }

    // This works incorrectly! obj is locked, but not the underlying object!
    static void DoSomethingAsyncIncorrectly(ISynchronized obj)
    {
        lock (obj)
        {
            // Do something with obj
        }
    }
}
Here is an example:
class ILockMySelf
{
    public void doThat()
    {
        lock (this)
        {
            // Don't actually need anything here.
            // In this example this will never be reached.
        }
    }
}

class WeveGotAProblem
{
    ILockMySelf anObjectIShouldntUseToLock = new ILockMySelf();

    public void doThis()
    {
        lock (anObjectIShouldntUseToLock)
        {
            // doThat will wait for the lock to be released to finish the thread
            var thread = new Thread(x => anObjectIShouldntUseToLock.doThat());
            thread.Start();

            // doThis will wait for the thread to finish to release the lock
            thread.Join();
        }
    }
}
You see that the second class can use an instance of the first one in a lock statement. This leads to a deadlock in the example.
The correct SyncRoot implementation is:
object syncRoot = new object();

void doThis()
{
    lock (syncRoot) { ... }
}

void doThat()
{
    lock (syncRoot) { ... }
}
As syncRoot is a private field, you don't have to worry about external use of this object.
Here's one other interesting thing related to this topic:
Questionable value of SyncRoot on Collections (by Brad Adams):
You’ll notice a SyncRoot property on many of the Collections in System.Collections. In retrospeced (sic), I think this property was a mistake. Krzysztof Cwalina, a Program Manager on my team, just sent me some thoughts on why that is – I agree with him:
We found the SyncRoot-based synchronization APIs to be insufficiently flexible for most scenarios. The APIs allow for thread safe access to a single member of a collection. The problem is that there are numerous scenarios where you need to lock on multiple operations (for example remove one item and add another). In other words, it’s usually the code that uses a collection that wants to choose (and can actually implement) the right synchronization policy, not the collection itself. We found that SyncRoot is actually used very rarely and in cases where it is used, it actually does not add much value. In cases where it’s not used, it is just an annoyance to implementers of ICollection.
Rest assured we will not make the same mistake as we build the generic versions of these collections.
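A sketch of what Cwalina means (the names below are purely illustrative): when "remove one item and add another" must be atomic, the calling code has to hold its own lock around both operations, which the collection's SyncRoot cannot decide for it:

using System.Collections.Generic;

class WorkQueue
{
    private readonly object _sync = new object();           // caller-owned lock
    private readonly List<string> _items = new List<string>();

    // The *pair* of operations is the unit that must be atomic,
    // so the lock lives here, in the code that uses the collection.
    public void Replace(string oldItem, string newItem)
    {
        lock (_sync)
        {
            _items.Remove(oldItem);
            _items.Add(newItem);
        }
    }
}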
See this article by Jeff Richter. More specifically, this example, which demonstrates that locking on "this" can cause a deadlock:
using System;
using System.Threading;

class App {
    static void Main() {
        // Construct an instance of the App object
        App a = new App();

        // This malicious code enters a lock on
        // the object but never exits the lock
        Monitor.Enter(a);

        // For demonstration purposes, let's release the
        // root to this object and force a garbage collection
        a = null;
        GC.Collect();

        // For demonstration purposes, wait until all Finalize
        // methods have completed their execution - deadlock!
        GC.WaitForPendingFinalizers();

        // We never get to the line of code below!
        Console.WriteLine("Leaving Main");
    }

    // This is the App type's Finalize method
    ~App() {
        // For demonstration purposes, have the CLR's
        // Finalizer thread attempt to lock the object.
        // NOTE: Since the Main thread owns the lock,
        // the Finalizer thread is deadlocked!
        lock (this) {
            // Pretend to do something in here...
        }
    }
}
Another concrete example:
class Program
{
    public class Test
    {
        public string DoThis()
        {
            lock (this)
            {
                return "got it!";
            }
        }
    }

    public delegate string Something();

    static void Main(string[] args)
    {
        var test = new Test();
        Something call = test.DoThis;

        // Holding the lock from _outside_ the class
        IAsyncResult async;
        lock (test)
        {
            // Calling the method on another thread.
            async = call.BeginInvoke(null, null);
        }
        async.AsyncWaitHandle.WaitOne();
        string result = call.EndInvoke(async);

        lock (test)
        {
            async = call.BeginInvoke(null, null);
            async.AsyncWaitHandle.WaitOne();
        }
        result = call.EndInvoke(async);
    }
}
In this example, the first call will succeed, but if you trace it in the debugger you will see that the call to DoThis blocks until the lock is released. The second call will deadlock, since the Main thread is holding the monitor lock on test while waiting.
The issue is that Main can lock the object instance, which means that it can keep the instance from doing anything that the object thinks should be synchronized. The point is that the object itself knows what requires locking, and outside interference is just asking for trouble. That's why the pattern is to have a private member variable used exclusively for synchronization, so you never have to worry about outside interference.
The same goes for the equivalent static pattern:
class Program
{
    public static class Test
    {
        public static string DoThis()
        {
            lock (typeof(Test))
            {
                return "got it!";
            }
        }
    }

    public delegate string Something();

    static void Main(string[] args)
    {
        Something call = Test.DoThis;

        // Holding the lock from _outside_ the class
        IAsyncResult async;
        lock (typeof(Test))
        {
            // Calling the method on another thread.
            async = call.BeginInvoke(null, null);
        }
        async.AsyncWaitHandle.WaitOne();
        string result = call.EndInvoke(async);

        lock (typeof(Test))
        {
            async = call.BeginInvoke(null, null);
            async.AsyncWaitHandle.WaitOne();
        }
        result = call.EndInvoke(async);
    }
}
Use a private static object to synchronize on, not the Type.
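For instance, a sketch of that advice applied to the static example above:

public static class Test
{
    // A private static lock object: outside code cannot take this lock,
    // unlike typeof(Test), which is a publicly reachable object.
    private static readonly object _sync = new object();

    public static string DoThis()
    {
        lock (_sync)
        {
            return "got it!";
        }
    }
}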
