We have a fairly large WinForms desktop application. Every once in a while it runs into a deadlock, and we are not sure how this happens.
We do know that it is caused by a locking operation, and we have quite a lot of code parts like this:
lock (_someObj)
    DoThreadSafeOperation();
To be able to detect what caused the deadlock, our approach is to convert all those lock operations into something like this:
bool lockTaken = false;
var temp = _someObj;
try {
    System.Threading.Monitor.TryEnter(temp, 1000, ref lockTaken);
    if (!lockTaken)
    {
        // log "can't get lock, maybe deadlock", print stack trace
        // (and decide here whether to continue anyway, throw, or return)
    }
    DoThreadSafeOperation();
}
finally {
    // only release the lock if it was actually acquired
    if (lockTaken)
        System.Threading.Monitor.Exit(temp);
}
This "lock service" should be at a central position. The problem is that it then has to be called like this:
LockService.RunWithLock(object objToLock, Action methodToRun);
That would mean we would have to create a delegate for each statement that is executed under a lock.
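For illustration, a minimal sketch of what such a LockService might look like, reusing the 1-second timeout and the logging idea from above (this is just a sketch, not a finished implementation):

using System;
using System.Threading;

public static class LockService
{
    public static void RunWithLock(object objToLock, Action methodToRun)
    {
        bool lockTaken = false;
        try
        {
            Monitor.TryEnter(objToLock, TimeSpan.FromSeconds(1), ref lockTaken);
            if (!lockTaken)
            {
                // log "can't get lock, maybe deadlock", print stack trace
                return;
            }
            methodToRun();
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(objToLock);
        }
    }
}

// call site:
// LockService.RunWithLock(_someObj, () => DoThreadSafeOperation());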
Since this would be a lot of refactoring, I thought I'd ask on Stack Overflow whether you have a better idea, and also ask for your opinion on this approach.
Thanks for your help =)
Since the existing lock functionality closely models a using statement, I suggest that you wrap up your logic in a class that implements IDisposable.
The class's constructor would attempt to get the lock, and if it failed to get the lock you could either throw an exception or log it. The Dispose() method would release the lock.
You would use it in a using statement so it will be robust in the face of exceptions.
So something like this:
public sealed class Locker : IDisposable
{
    readonly object _lockObject;
    readonly bool _wasLockAcquired;

    public Locker(object lockObject, TimeSpan timeout)
    {
        _lockObject = lockObject;
        Monitor.TryEnter(_lockObject, timeout, ref _wasLockAcquired);
        // Throw if lock wasn't acquired?
    }

    public bool WasLockAcquired
    {
        get
        {
            return _wasLockAcquired;
        }
    }

    public void Dispose()
    {
        if (_wasLockAcquired)
            Monitor.Exit(_lockObject);
    }
}
Which you could use like this:
using (var locker = new Locker(someObj, TimeSpan.FromSeconds(1)))
{
    if (locker.WasLockAcquired)
    {
        // ...
    }
}
Which I think will help to minimise your code changes.
I want to start some new threads, each for one repeating operation. But when such an operation is already in progress, I want to discard the current task. In my scenario I only need very current data, so dropped data is not an issue.
On MSDN I found the Mutex class, but as I understand it, it waits for its turn, blocking the current thread. So I want to ask: does something already exist in the .NET Framework that does the following:
Is some method M already being executed?
If so, return (and let me increase some counter for statistics)
If not, start method M in a new thread
The lock(someObject) statement, which you may have come across, is syntactic sugar around Monitor.Enter and Monitor.Exit.
However, if you use the monitor in this more verbose way, you can also use Monitor.TryEnter which allows you to check if you'll be able to get the lock - hence checking if someone else already has it and is executing code.
So instead of this:
var lockObject = new object();

lock (lockObject)
{
    // do some stuff
}
try this (option 1):
int _alreadyBeingExecutedCounter;
var lockObject = new object();

if (Monitor.TryEnter(lockObject))
{
    // you'll only end up here if you got the lock when you tried to get it - otherwise you'll never execute this code.
    // do some stuff

    // call Exit to release the lock
    Monitor.Exit(lockObject);
}
else
{
    // didn't get the lock - someone else was executing the code above - so I don't need to do any work!
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}
(you'll probably want to put a try..finally in there to ensure the lock is released)
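For example, option 1 with the try/finally in place might look like this (same names as above):

if (Monitor.TryEnter(lockObject))
{
    try
    {
        // do some stuff
    }
    finally
    {
        // always release the lock, even if "do some stuff" throws
        Monitor.Exit(lockObject);
    }
}
else
{
    // didn't get the lock - someone else was executing the code above
    Interlocked.Increment(ref _alreadyBeingExecutedCounter);
}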
Or dispense with the explicit lock altogether and do this
(option 2)
private int _inUseCount;

public void MyMethod()
{
    if (Interlocked.Increment(ref _inUseCount) == 1)
    {
        // do some stuff
    }
    Interlocked.Decrement(ref _inUseCount);
}
[Edit: in response to your question about this]
No - don't use this to lock on. Create a privately scoped object to act as your lock.
Otherwise you have this potential problem:
public class MyClassWithLockInside
{
    public void MethodThatTakesLock()
    {
        lock (this)
        {
            // do some work
        }
    }
}

public class Consumer
{
    private static MyClassWithLockInside _instance = new MyClassWithLockInside();

    public void ThreadACallsThis()
    {
        lock (_instance)
        {
            // Having taken a lock on our instance of MyClassWithLockInside,
            // do something long running
            Thread.Sleep(6000);
        }
    }

    public void ThreadBCallsThis()
    {
        // If thread B calls this while thread A is still inside the lock above,
        // this method will block as it tries to get a lock on the same object
        // ["this" inside the class = _instance outside]
        _instance.MethodThatTakesLock();
    }
}
In the above example, some external code has managed to disrupt the internal locking of our class just by taking out a lock on something that was externally accessible.
It is much better to create a private object that you control, and that no one outside your class has access to, to avoid this sort of problem; that includes not locking on this or on the type itself, typeof(MyClassWithLockInside).
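For example, the class above could keep its lock private like this (the field name _syncRoot is just illustrative):

public class MyClassWithLockInside
{
    // nothing outside this class can take a lock on this object
    private readonly object _syncRoot = new object();

    public void MethodThatTakesLock()
    {
        lock (_syncRoot)
        {
            // do some work
        }
    }
}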
One option would be to work with a reentrancy sentinel:
You could define an int field (initialized to 0), update it via Interlocked.Increment on entering the method, and only proceed if the returned value is 1. At the end, just do an Interlocked.Decrement.
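A minimal sketch of that sentinel (the method and field names are made up for illustration):

private int _sentinel;        // 0 = free, anything else = someone is already inside
private int _discardedCount;  // statistics

public void RunIfNotAlreadyRunning()
{
    if (Interlocked.Increment(ref _sentinel) == 1)
    {
        try
        {
            // do the actual work
        }
        finally
        {
            Interlocked.Decrement(ref _sentinel);
        }
    }
    else
    {
        // already running: undo our increment and just count the discarded call
        Interlocked.Decrement(ref _sentinel);
        Interlocked.Increment(ref _discardedCount);
    }
}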
Another option:
From your description it seems that you have a producer/consumer scenario...
For this case it might be helpful to use something like BlockingCollection as it is thread-safe and mostly lock-free...
Another option would be to use ConcurrentQueue or ConcurrentStack...
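As a rough sketch of the BlockingCollection idea (the names and the bounded capacity of 1 are just illustrative; with a bound of 1, TryAdd returns false while an item is still waiting, which roughly gives you the "discard and count it" behaviour you describe):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Worker
{
    // bounded to one pending item; TryAdd returns false while the collection is full
    private readonly BlockingCollection<DateTime> _work = new BlockingCollection<DateTime>(1);
    private int _discardedCount;

    public Worker()
    {
        // dedicated consumer
        Task.Factory.StartNew(() =>
        {
            foreach (var item in _work.GetConsumingEnumerable())
            {
                // process the item (your method M)
            }
        }, TaskCreationOptions.LongRunning);
    }

    public void TryQueueWork()
    {
        if (!_work.TryAdd(DateTime.UtcNow))
        {
            // an item is still pending - discard this one and count it
            Interlocked.Increment(ref _discardedCount);
        }
    }

    public void Shutdown()
    {
        _work.CompleteAdding();
    }
}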
You might find some useful information on the following site (the PDF is also downloadable; I recently downloaded it myself). The Advanced Threading chapters on Suspend and Resume or Aborting may be what you are interested in.
For best performance you could use the Interlocked class's atomic operations, since they don't use system-level synchronization (any "standard" primitive needs it and comes with system-call overhead).
// Simple non-reentrant mutex without ownership. It is easy to extend to support
// those features: set an owner after acquiring the lock (compare a Thread reference
// with Thread.CurrentThread, for example), check for a matching identity, and add a
// counter for reentrancy.
// A bool can't be used here because it isn't supported by Interlocked.CompareExchange.
private int _lock;

public bool TryLock()
{
    // if (Interlocked.Increment(ref _inUseCount) == 1)
    // That kind of code is buggy, since the counter can change between the increment
    // returning and the condition check - the increment is atomic, this "if" isn't.
    // Use CompareExchange instead: if the field is 0, change it to 1 atomically and
    // return the original value.
    // Returns true if this thread successfully occupied the lock.
    return Interlocked.CompareExchange(ref _lock, 1, 0) == 0;
}

public bool Release()
{
    // Returns true if the lock was occupied; false if it was already free.
    return Interlocked.CompareExchange(ref _lock, 0, 1) == 1;
}
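Typical usage of this kind of lock would then be along these lines (sketch):

public void MyMethod()
{
    if (TryLock())
    {
        try
        {
            // only one thread at a time gets here
        }
        finally
        {
            Release();
        }
    }
    else
    {
        // the lock was already taken - skip the work (and bump a statistics counter if you like)
    }
}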
I have to implement some .NET code involving a shared resource accessed by different threads. In principle, this should be solved with a simple read/write lock. However, my solution requires that some of the read accesses end up producing a write operation. I first checked ReaderWriterLockSlim, but by itself it does not solve the problem, because it requires that I know in advance whether a read operation can turn into a write operation, and this is not my case. I finally opted to simply use a ReaderWriterLockSlim, and when the read operation "detects" that it needs to do a write, release the read lock and acquire a write lock. I am not sure if there is a better solution, or even if this solution could lead to some synchronization issue (I have experience with Java, but I am fairly new to .NET).
Below is some sample code illustrating my solution:
public class MyClass
{
    private int[] data;
    private readonly ReaderWriterLockSlim syncLock = new ReaderWriterLockSlim();

    public void modifyData()
    {
        try
        {
            syncLock.EnterWriteLock();
            // clear my array and read from database...
        }
        finally
        {
            syncLock.ExitWriteLock();
        }
    }

    public int readData(int index)
    {
        try
        {
            syncLock.EnterReadLock();
            // some initial preprocessing of the arguments
            try
            {
                syncLock.ExitReadLock();
                syncLock.EnterWriteLock();
                // check if a write is needed <-- this operation is fast, and in most cases the result will be false
                // if true, perform the write operation
            }
            finally
            {
                syncLock.ExitWriteLock();
                syncLock.EnterReadLock();
            }
            return data[index];
        }
        finally
        {
            syncLock.ExitReadLock();
        }
    }
}
I've got a problem making calls to a third-party C++ DLL, which I've wrapped in a class using DllImport to access its functions.
The DLL demands that a session be opened before use; this returns an integer handle that is used to refer to that session when performing operations. When finished, one must close the session using the same handle. So I did something like this:
public void DoWork(string input)
{
    int apiHandle = DllWrapper.StartSession();

    try
    {
        // do work using the apiHandle
    }
    catch (ApplicationException ex)
    {
        // log the error
    }
    finally
    {
        DllWrapper.CloseSession(apiHandle);
    }
}
The problem I have is that CloseSession() sometimes causes the DLL in question to throw an error when running threaded:
System.AggregateException: One or more errors occurred. --->
System.AccessViolationException: Attempted to read or write protected
memory. This is often an indication that other memory is corrupt.
I'm not sure there's much I can do to stop this error, since it seems to arise from using the DLL in a threaded manner - it is supposed to be thread-safe. But since my CloseSession() function does nothing except call that DLL's close function, there's not much wiggle room for me to "fix" anything.
The end result, however, is that the session doesn't close properly. So when the process tries again, which it's supposed to do, it encounters an open session and just keeps throwing new errors. That session absolutely has to be closed.
I'm at a loss as to how to design error handling that is more robust and will ensure the session always closes.
I would change the wrapper to include disposal of the external resource and to also wrap the handle. I.e. instead of representing a session by a handle, you would represent it by a wrapper object.
Additionally, wrapping the calls to the DLL in lock statements (as @Serge suggests) could prevent the multithreading issues completely. Note that the lock object is static, so that all DllWrappers use the same lock object.
public class DllWrapper : IDisposable
{
    private static object _lockObject = new object();
    private int _apiHandle;
    private bool _isOpen;

    public void StartSession()
    {
        lock (_lockObject) {
            _apiHandle = ...; // TODO: open the session
        }
        _isOpen = true;
    }

    public void CloseSession()
    {
        const int MaxTries = 10;
        for (int i = 0; _isOpen && i < MaxTries; i++) {
            try {
                lock (_lockObject) {
                    // TODO: close the session
                }
                _isOpen = false;
            } catch {
            }
        }
    }

    public void Dispose()
    {
        CloseSession();
    }
}
Note that the methods are now instance methods.
Now you can ensure the closing of the session with a using statement:
using (var session = new DllWrapper()) {
    try {
        session.StartSession();
        // TODO: work with the session
    } catch (ApplicationException ex) {
        // TODO: log the error
        // This is for exceptions not related to closing the session. If such exceptions
        // cannot occur, you can drop the try-catch completely.
    }
} // Closes the session automatically by calling `Dispose()`.
You can improve naming by calling this class Session and the methods Open and Close. The user of this class does not need to know that it is a wrapper. This is just an implementation detail. Also, the naming of the methods is now symmetrical and there is no need to repeat the name Session.
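For instance, the renamed class might be reduced to this skeleton (same bodies as above, just renamed):

public class Session : IDisposable
{
    // same fields as DllWrapper above

    public void Open()    { /* body of StartSession above */ }
    public void Close()   { /* body of CloseSession above */ }
    public void Dispose() { Close(); }
}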
By encapsulating all the session related stuff, including error handling, recovery from error situations and disposal of resources, you can considerably diminish the mess in your code. The Session class is now a high-level abstraction. The old DllWrapper was somewhere at mid distance between low-level and high-level.
I see this a lot:
object lockObj;
List<string> myStrs;
// ...
lock (lockObj)
{
    myStrs.Add("hello world");
}
Why have the separate object? Surely you can just do this:
List<string> myStrs;
// ...
lock (myStrs)
{
    myStrs.Add("hello world");
}
Locking directly on the list is only a problem if myStrs is public, and can thus be locked by other callers as well, resulting in a possible deadlock.
If it is a private member, then there should be no problem, but locking on a separate object is a good habit in any case.
See this similar question for a more detailed answer:
Why is lock(this) {...} bad?
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem since any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
From the documentation on lock in C#.
The idea is to always lock a private member that can only be accessed by the code we are looking at. When we lock members we do not have control over, like public members or similar, chances are that some other part of the code already holds a lock on them, which could lead to unexpected blocking behavior.
So, I think this has led to the rule of thumb / best practice of having a private object specifically for locking.
I would be interested in seeing if there are more reasons coming up.
Your list of strings is used for internal implementation details.
A problem with the second version could arise if you change your implementation in such a way that you re-initialize the string list.
Then the thread-safety of your implementation could be broken.
So it's better to use a separate object for synchronization and declare that object as readonly.
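For illustration, a sketch of such a class with a dedicated readonly lock object (the names are made up):

public class MyStringStore
{
    // dedicated lock object: readonly, never reassigned, never visible outside this class
    private readonly object _syncRoot = new object();

    private List<string> _myStrs = new List<string>();

    public void Add(string s)
    {
        lock (_syncRoot)
        {
            _myStrs.Add(s);
        }
    }

    public void Reset()
    {
        lock (_syncRoot)
        {
            // re-initializing the list no longer affects the object we lock on
            _myStrs = new List<string>();
        }
    }
}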
If you use the list as the lock object and it is reset to null, then lock (mystringList) will throw an ArgumentNullException. Below is the simple test code of a console application.
private static IList<string> mystringList = new List<string>();

static void Main(string[] args)
{
    new Thread(() =>
    {
        try
        {
            while (true)
            {
                // Acquire the lock
                lock (mystringList)
                {
                    // Do something with the data
                    Thread.Sleep(100);
                    Console.WriteLine("Lock acquired");
                }
            }
        }
        catch (Exception exception)
        {
            Console.WriteLine("Exception: " + exception.Message);
        }
    }).Start();

    new Thread(() =>
    {
        // Suppose we do something
        Thread.Sleep(1000);
        // And somehow reset the list to null
        mystringList = null;
    }).Start();

    Console.ReadLine();
}
I have a multithreaded app that uses SQLite. When two threads try to update the database at once, I get the exception:
Additional information: The database file is locked
I thought it would retry in a few milliseconds. My queries aren't complex. The most complex one (which happens frequently) is: update, select, run trivial code, update/delete, commit. Why does it throw the exception? How can I make it retry a few times before throwing an exception?
SQLite isn't thread-safe for access, which is why you get this error message.
You should synchronize access to the database (create an object and "lock" it) whenever you go to update. This will cause the second thread to block and wait until the first thread's update finishes.
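A minimal sketch of that idea, with the actual SQLite work passed in as a delegate (the names are illustrative):

private static readonly object _dbLock = new object();

private void RunDbWork(Action dbWork)
{
    // only one thread at a time may touch the database
    lock (_dbLock)
    {
        dbWork(); // e.g. the update/select/commit sequence
    }
}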
Try to make your transaction/commit blocks as short as possible. The only time you can deadlock/block is with a transaction, so if you don't do them you won't have the problem.
That said, there are times when you need transactions (mostly on data updates), but don't hold them open while you "run trivial code" if you can avoid it.
A better approach may be to use an update queue, if you can do the database updates out of line with the rest of your code. For example, you could do something like:
m_updateQueue.Add(()=>InsertOrder(o));
Then you could have a dedicated update thread that processed the queue.
That code would look similar to this (I haven't compiled or tested it):
class UpdateQueue : IDisposable
{
    private object m_lockObj;
    private Queue<Action> m_queue;
    private volatile bool m_shutdown;
    private Thread m_thread;

    public UpdateQueue()
    {
        m_lockObj = new Object();
        m_queue = new Queue<Action>();
        m_thread = new Thread(ThreadLoop);
        m_thread.Start();
    }

    public void Add(Action a)
    {
        lock (m_lockObj)
        {
            m_queue.Enqueue(a);
            Monitor.Pulse(m_lockObj);
        }
    }

    public void Dispose()
    {
        if (m_thread != null)
        {
            m_shutdown = true;
            // the lock must be held to pulse the monitor
            lock (m_lockObj)
            {
                Monitor.PulseAll(m_lockObj);
            }
            m_thread.Join();
            m_thread = null;
        }
    }

    private void ThreadLoop()
    {
        while (!m_shutdown)
        {
            Action a;
            lock (m_lockObj)
            {
                while (m_queue.Count == 0 && !m_shutdown)
                {
                    Monitor.Wait(m_lockObj);
                }
                if (m_shutdown)
                {
                    return;
                }
                a = m_queue.Dequeue();
            }
            a();
        }
    }
}
Or, you could use something other than SQLite.