Threading test question - C#

I recently had an interview question in a test that was similar to the one below. I do not have much experience developing with threads; can someone please advise me on how to approach this question?
public class StringQueue
{
    private object _lockObject = new object();
    private List<string> _items = new List<string>();

    public bool IsEmpty()
    {
        lock (_lockObject)
            return _items.Count == 0;
    }

    public void Enqueue(string item)
    {
        lock (_lockObject)
            _items.Add(item);
    }

    public string Dequeue()
    {
        lock (_lockObject)
        {
            string result = _items[0];
            _items.RemoveAt(0);
            return result;
        }
    }
}
Is the following method thread safe with the above implementation and why?
public string DequeueOrNull()
{
    if (IsEmpty())
        return null;
    return Dequeue();
}

It seems to me the answer is no.
While IsEmpty() locks the object, the lock is released as soon as the call returns. A different thread could call DequeueOrNull() between the call to IsEmpty() and the call to Dequeue() (while the object is unlocked), removing the only item that existed and making the Dequeue() invalid at that point.
A plausible fix would be to put a lock around both statements in DequeueOrNull(), so no other thread could call Dequeue() between the check and the Dequeue() itself.

It is not thread-safe. At the marked line it is possible that Dequeue is called from another thread, so the subsequent Dequeue can return a wrong value (or throw, if the list is now empty):
public string DequeueOrNull()
{
    if (IsEmpty())
        return null;
    // << it is possible that Dequeue is called from another thread here
    return Dequeue();
}
The thread-safe code would be:
public string DequeueOrNull()
{
    lock (_lockObject)
    {
        if (IsEmpty())
            return null;
        return Dequeue();
    }
}

No, because the state of _items could potentially change between the thread-safe IsEmpty() and the thread-safe Dequeue() calls.
Fix it with something like the following, which ensures that _items is locked during the whole operation:
public string DequeueOrNull()
{
    lock (_lockObject)
    {
        if (IsEmpty())
            return null;
        return Dequeue();
    }
}
Note: depending on the implementation of the lock, you may wish to avoid double-locking the resource by moving the guts of IsEmpty() and Dequeue() into separate helper functions.
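A sketch of what that refactoring might look like, assuming DequeueOrNull is a member of StringQueue: the private helpers touch _items without locking, and every public method takes the lock exactly once.
private bool IsEmptyCore()
{
    return _items.Count == 0;
}

private string DequeueCore()
{
    string result = _items[0];
    _items.RemoveAt(0);
    return result;
}

public bool IsEmpty()
{
    lock (_lockObject)
        return IsEmptyCore();
}

public string Dequeue()
{
    lock (_lockObject)
        return DequeueCore();
}

public string DequeueOrNull()
{
    lock (_lockObject)
        return IsEmptyCore() ? null : DequeueCore();
}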

Related

What's c# locks's syntactic sugar when locking a ConcurrentDictionary aka how to implement a conditional non-blocking lock?

My challenge:
1. Avoid race conditions in a method DoWorkWithId that's being triggered from the UI by multiple users for multiple ids, e.g. using lock.
2. Re-entrance into DoWorkWithId permitted for different ids but not for the same id, e.g. using a ConcurrentDictionary where id is the key and the value is an object.
3. Should be a non-blocking lock, so threads skip the critical section without waiting while letting others know that it's already running (at least that it was running when invoked a second time), e.g. Monitor.TryEnter or Interlocked.*.
My attempt:
I guess 1. and 2. could be solved using a ConcurrentDictionary and lock
private static readonly ConcurrentDictionary<Guid, object> concurrentDictionaryMethodLock = new();

public string CallToDoWorkWithId(Guid id) // [Edited from DoWorkWithId]
{
    concurrentDictionaryMethodLock.TryAdd(id, new object()); // atomic adding
    lock (concurrentDictionaryMethodLock[id])
    {
        DoWorkWithId(id);
    }
    return "done";
}
Now, I don't want threads with the same id to wait. When the first thread is done, threads that waited would find out in the first line of DoWorkWithId that there's nothing to do, as the object the id refers to has already been modified.
I was hoping to tackle 3. with Monitor.TryEnter or Interlocked.*
3.1. MSDN says lock is basically:
object __lockObj = x;
bool __lockWasTaken = false;
try
{
    System.Threading.Monitor.Enter(__lockObj, ref __lockWasTaken);
    // Your code...
}
finally
{
    if (__lockWasTaken) System.Threading.Monitor.Exit(__lockObj);
}
3.2. MSDN example of Monitor.TryEnter
var lockObj = new Object();
bool lockTaken = false;
try
{
    Monitor.TryEnter(lockObj, ref lockTaken);
    if (lockTaken)
    {
        // The critical section.
    }
    else
    {
        // The lock was not acquired.
    }
}
finally
{
    // Ensure that the lock is released.
    if (lockTaken)
    {
        Monitor.Exit(lockObj);
    }
}
In order to correctly release the lock, I thought I needed to keep track of who acquired it, e.g.
private static readonly ConcurrentDictionary<Guid, bool> bools = new();
So, I tried:
private static readonly ConcurrentDictionary<Guid, object> concurrentDictionaryMethodLock = new();
private static readonly ConcurrentDictionary<Guid, bool> bools = new();

public string CallToDoWorkWithId(Guid id) // [Edited from DoWorkWithId]
{
    concurrentDictionaryMethodLock.TryAdd(id, new object());
    bools.TryAdd(id, false);
    try
    {
        Monitor.TryEnter(concurrentDictionaryMethodLock[id], ref bools[id]); // error
        if (bools[id])
        {
            DoWorkWithId(id);
        }
        else
        {
            return "Process already running. Wait, refresh page and try again.";
        }
    }
    finally
    {
        if (bools[id])
        {
            Monitor.Exit(concurrentDictionaryMethodLock[id]);
        }
    }
    return "done";
}
This gives me compiler error CS0206: A property or indexer may not be passed as an out or ref parameter.
The same thing happens when using Interlocked.*: I need to keep track of the id-based booleans, called usingResource in the MSDN example.
My question:
How can I use the verbose syntax of lock and store the bools used to identify who has the lock/resource when using a lock on a ConcurrentDictionary?
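For reference, a minimal sketch of the usual workaround for CS0206: Monitor.TryEnter needs a ref to a variable or field, not a property or indexer, so copy the flag into a local first. Since the thread that enters also exits within the same method call, the local is all the bookkeeping needed, and the bools dictionary can be dropped:
private static readonly ConcurrentDictionary<Guid, object> concurrentDictionaryMethodLock = new();

public string CallToDoWorkWithId(Guid id)
{
    var lockObject = concurrentDictionaryMethodLock.GetOrAdd(id, _ => new object());
    bool lockTaken = false; // a local can be passed by ref; an indexer cannot
    try
    {
        Monitor.TryEnter(lockObject, ref lockTaken);
        if (!lockTaken)
            return "Process already running. Wait, refresh page and try again.";
        DoWorkWithId(id);
        return "done";
    }
    finally
    {
        if (lockTaken)
            Monitor.Exit(lockObject);
    }
}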
PS:
I'm also interested in better-suited solutions for what I'm trying to do. I was thinking of using a ConcurrentDictionary without a lock:
private static readonly ConcurrentDictionary<Guid, string> concurrentDictionaryMethodLock = new();

public string CallToDoWorkWithId(Guid id) // [Edited from DoWorkWithId]
{
    var userName = GetUserName();
    if (!concurrentDictionaryMethodLock.TryAdd(id, userName)) // false means unable to add, was already added, please skip and let UI know
    {
        if (concurrentDictionaryMethodLock.TryGetValue(id, out var value))
        {
            return $"Please wait. {value} started process already.";
        }
        else // in case id has been removed in the meantime; even GetOrAdd is not atomic with respect to valueFactory (see MSDN remarks)
        {
            return "Process was running and finished by now. Please refresh and try again.";
        }
    }
    try
    {
        DoWorkWithId(id);
    }
    finally
    {
        concurrentDictionaryMethodLock.TryRemove(id, out _);
    }
    return "done";
}
Any pitfalls using this approach?

Prevent two threads entering a code block with the same value

Say I have this function (assume I'm accessing Cache in a threadsafe way):
object GetCachedValue(string id)
{
    if (!Cache.ContainsKey(id))
    {
        // long running operation to fetch the value for id
        object value = GetTheValueForId(id);
        Cache.Add(id, value);
    }
    return Cache[id];
}
I want to prevent two threads from running the "long running operation" at the same time for the same value. Obviously I can wrap the whole thing in a lock(), but then the whole function would block regardless of the value, and I want two threads to be able to perform the long running operation as long as they're looking for different ids.
Is there a built-in locking mechanism to lock based on a value, so one thread can block while the other thread completes the long running operation and I don't need to do it twice (or N times)? Ideally, as long as the long running operation is being performed in one thread, no other thread should be able to do it for the same id value.
I could roll my own by putting the ids in a HashSet and then removing them once the operation completes, but that seems like a hack.
I would use Lazy<T> here. The code below locks the cache, puts the Lazy into the cache, and returns immediately. The long-running operation will be executed once, in a thread-safe manner.
new Thread(() => Console.WriteLine("1-" + GetCachedValue("1").Value)).Start();
new Thread(() => Console.WriteLine("2-" + GetCachedValue("1").Value)).Start();

Lazy<object> GetCachedValue(string id)
{
    lock (Cache)
    {
        if (!Cache.ContainsKey(id))
        {
            Lazy<object> lazy = new Lazy<object>(() =>
                {
                    Console.WriteLine("**Long Running Job**");
                    Thread.Sleep(3000);
                    return int.Parse(id);
                },
                true);
            Cache.Add(id, lazy);
            Console.WriteLine("added to cache");
        }
        return Cache[id];
    }
}
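Note the true passed as the second argument: it maps to LazyThreadSafetyMode.ExecutionAndPublication, which is what guarantees the factory body runs at most once. A sketch of the equivalent explicit form:
Lazy<object> lazy = new Lazy<object>(
    () => GetTheValueForId(id), // the long running operation
    LazyThreadSafetyMode.ExecutionAndPublication);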
In this case I would like to have an interface like this:
using (SyncDispatcher.Enter(id))
{
    // any code here...
}
so I could execute any code and it would be thread-safe if the id is the same.
If I need to get a value from the Cache, I just get it directly, as there are no concurrent calls inside the block.
My implementation for SyncDispatcher is this:
public class SyncDispatcher : IDisposable
{
    private static object _lock = new object();
    private static Dictionary<object, SyncDispatcher> _container = new Dictionary<object, SyncDispatcher>();
    private AutoResetEvent _syncEvent = new AutoResetEvent(true);

    private SyncDispatcher() { }

    private void Lock()
    {
        _syncEvent.WaitOne();
    }

    public void Dispose()
    {
        _syncEvent.Set();
    }

    public static SyncDispatcher Enter(object obj)
    {
        var objDispatcher = GetSyncDispatcher(obj);
        objDispatcher.Lock();
        return objDispatcher;
    }

    private static SyncDispatcher GetSyncDispatcher(object obj)
    {
        lock (_lock)
        {
            if (!_container.ContainsKey(obj))
            {
                _container.Add(obj, new SyncDispatcher());
            }
            return _container[obj];
        }
    }
}
Simple test:
static void Main(string[] args)
{
    new Thread(() => Execute("1", 1000, "Resource 1")).Start();
    new Thread(() => Execute("2", 200, "Resource 2")).Start();
    new Thread(() => Execute("1", 0, "Resource 1 again")).Start();
}

static void Execute(object id, int timeout, string message)
{
    using (SyncDispatcher.Enter(id))
    {
        Thread.Sleep(timeout);
        Console.WriteLine(message);
    }
}
Move your locking down to where your comment is. I think you need to maintain a list of currently-executing long running operations, and lock accesses to that list, and only execute the GetValueForId if the id you're looking for isn't in that list. I'll try and whip something up.
private List<string> m_runningCacheIds = new List<string>();

object GetCachedValue(string id)
{
    if (!Cache.ContainsKey(id))
    {
        lock (m_runningCacheIds)
        {
            if (m_runningCacheIds.Contains(id))
            {
                // Do something to wait until the other Get is done....
            }
            else
            {
                m_runningCacheIds.Add(id);
            }
        }
        // long running operation to fetch the value for id
        object value = GetTheValueForId(id);
        Cache.Add(id, value);
        lock (m_runningCacheIds)
            m_runningCacheIds.Remove(id);
    }
    return Cache[id];
}
There's still the issue of what the thread is going to do while it waits on the other thread that is getting the value.
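One way to fill in that wait, as a sketch (not the original answerer's code), is Monitor.Wait/Pulse on the same list object, so waiters sleep until the fetching thread finishes and then re-check:
private List<string> m_runningCacheIds = new List<string>();

object GetCachedValue(string id)
{
    lock (m_runningCacheIds)
    {
        // Sleep until no other thread is fetching this id; Wait releases
        // the lock while sleeping and reacquires it before returning.
        while (m_runningCacheIds.Contains(id))
            Monitor.Wait(m_runningCacheIds);

        if (Cache.ContainsKey(id))
            return Cache[id]; // another thread already filled it

        m_runningCacheIds.Add(id);
    }
    try
    {
        // long running operation, deliberately outside the lock
        object value = GetTheValueForId(id);
        lock (m_runningCacheIds)
            Cache.Add(id, value);
        return value;
    }
    finally
    {
        lock (m_runningCacheIds)
        {
            m_runningCacheIds.Remove(id);
            Monitor.PulseAll(m_runningCacheIds); // wake any waiters
        }
    }
}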
In these cases I use a Mutex, as follows:
object GetCachedValue(string key)
{
    // Note here that I use the key as the name of the mutex.
    // Also check that the key contains no characters that are
    // invalid in a mutex name.
    // initiallyOwned is false: ownership is taken by WaitOne below.
    var mut = new Mutex(false, key);
    try
    {
        // Wait until it is safe to enter.
        mut.WaitOne();
        // Here you create your cache.
        if (!Cache.ContainsKey(key))
        {
            // long running operation to fetch the value for key
            object value = GetTheValueForId(key);
            Cache.Add(key, value);
        }
        return Cache[key];
    }
    finally
    {
        // Release the Mutex.
        mut.ReleaseMutex();
    }
}
Notes:
Some characters are not valid in a mutex name (like the backslash).
If the Cache is different for every application (or web pool) that uses it, as it is for the ASP.NET cache, then a named mutex locks across all threads and pools on the machine. In that case I also add a static random integer to the key, making the lock different not only for each key but also for each pool.
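A sketch of that naming scheme (the token and the sanitization rule are illustrative): replace characters that are invalid in mutex names and prepend a per-application value, plus the Local\ prefix to keep the mutex scoped to the current session:
// Created once per application; the random value keeps pools from sharing locks.
private static readonly string _appToken = new Random().Next().ToString();

private static string GetMutexName(string key)
{
    // Backslash is reserved in mutex names (apart from the Local\/Global\ prefix).
    return @"Local\" + _appToken + "_" + key.Replace('\\', '_');
}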
It's not the most elegant solution in the world, but I've gotten around this issue with a double check and a lock:
object GetCachedValue(string id)
{
    if (!Cache.ContainsKey(id))
    {
        lock (_staticObj)
        {
            if (!Cache.ContainsKey(id))
            {
                // long running operation to fetch the value for id
                object value = GetTheValueForId(id);
                Cache.Add(id, value);
            }
        }
    }
    return Cache[id];
}

Threading and List<> collection

I have a List<string> collection called list.
I have two threads.
One thread is enumerating through all list elements and adding to the collection.
The second thread is enumerating through all list elements and removing from it.
How can I make this thread-safe?
I tried creating a global object "MyLock" and using a lock(MyLock) block in each thread function, but it didn't work.
Can you help me?
If you have access to .NET 4.0 you can use the ConcurrentQueue class, or a BlockingCollection with a ConcurrentQueue backing it. They do exactly what you are trying to do and do not require any locking. The BlockingCollection will make your thread wait if there are no items available in the list.
An example of removing from the ConcurrentQueue:
ConcurrentQueue<MyClass> cq = new ConcurrentQueue<MyClass>();

void GetStuff()
{
    MyClass item;
    if (cq.TryDequeue(out item))
    {
        // Work with item.
    }
}
This will try to remove an item, but if none are available it does nothing.
BlockingCollection<MyClass> bc = new BlockingCollection<MyClass>(new ConcurrentQueue<MyClass>());

void GetStuff()
{
    if (!bc.IsCompleted) // check whether CompleteAdding() was called and the list is empty
    {
        try
        {
            MyClass item = bc.Take();
            // Work with item.
        }
        catch (InvalidOperationException)
        {
            // Adding is marked as completed and the collection is empty,
            // so there is nothing to take.
        }
    }
}
This will block and wait on the Take until there is something available to take from the list. Once you are done you can call CompleteAdding(), and Take will throw an exception when the list becomes empty instead of blocking.
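For completeness, a minimal producer/consumer sketch (MyClass stands in for your item type); GetConsumingEnumerable does the blocking Take loop internally and ends cleanly once CompleteAdding() has been called and the collection drains:
var bc = new BlockingCollection<MyClass>(new ConcurrentQueue<MyClass>());

var consumer = new Thread(() =>
{
    foreach (MyClass item in bc.GetConsumingEnumerable())
    {
        // Work with item.
    }
});
consumer.Start();

bc.Add(new MyClass());
bc.CompleteAdding(); // the consumer's foreach ends once the collection is empty
consumer.Join();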
Without knowing more about your program and requirements, I'm going to say that this is a "Bad Idea". Altering a List<> while iterating through its contents will most likely throw an exception.
You're better off using a Queue<> instead of a List<>, as a queue is a much better fit for this kind of producer/consumer hand-off.
You should be able to lock directly on your list:
lock (list)
{
    // work with list here
}
However, adding/removing from the list while enumerating it will likely cause an exception...
Lock on the SyncRoot of your List<T> (note that List<T> only exposes SyncRoot through the ICollection interface, so a cast is needed):
lock (((ICollection)list).SyncRoot)
{
}
More information on how to use it properly can be found here
You could implement your own version of IList<T> that wraps the underlying List<T> to provide locking on every method call.
public class LockingList<T> : IList<T>
{
    public LockingList(IList<T> inner)
    {
        this.Inner = inner;
    }

    private readonly object gate = new object();

    public IList<T> Inner { get; private set; }

    public int IndexOf(T item)
    {
        lock (gate)
        {
            return this.Inner.IndexOf(item);
        }
    }

    public void Insert(int index, T item)
    {
        lock (gate)
        {
            this.Inner.Insert(index, item);
        }
    }

    public void RemoveAt(int index)
    {
        lock (gate)
        {
            this.Inner.RemoveAt(index);
        }
    }

    public T this[int index]
    {
        get
        {
            lock (gate)
            {
                return this.Inner[index];
            }
        }
        set
        {
            lock (gate)
            {
                this.Inner[index] = value;
            }
        }
    }

    public void Add(T item)
    {
        lock (gate)
        {
            this.Inner.Add(item);
        }
    }

    public void Clear()
    {
        lock (gate)
        {
            this.Inner.Clear();
        }
    }

    public bool Contains(T item)
    {
        lock (gate)
        {
            return this.Inner.Contains(item);
        }
    }

    public void CopyTo(T[] array, int arrayIndex)
    {
        lock (gate)
        {
            this.Inner.CopyTo(array, arrayIndex);
        }
    }

    public int Count
    {
        get
        {
            lock (gate)
            {
                return this.Inner.Count;
            }
        }
    }

    public bool IsReadOnly
    {
        get
        {
            lock (gate)
            {
                return this.Inner.IsReadOnly;
            }
        }
    }

    public bool Remove(T item)
    {
        lock (gate)
        {
            return this.Inner.Remove(item);
        }
    }

    public IEnumerator<T> GetEnumerator()
    {
        lock (gate)
        {
            return this.Inner.ToArray().AsEnumerable().GetEnumerator();
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        lock (gate)
        {
            return this.Inner.ToArray().GetEnumerator();
        }
    }
}
You would use this code like this:
var list = new LockingList<int>(new List<int>());
If you're using large lists and/or performance is an issue then this kind of locking may not be terribly performant, but in most cases it should be fine.
It is very important to notice that the two GetEnumerator methods call .ToArray(). This forces the evaluation of the enumerator before the lock is released thus ensuring that any modifications to the list don't affect the actual enumeration.
Using code like lock (list) { ... } or lock (list.SyncRoot) { ... } does not protect you against list changes occurring during enumeration. Those solutions only guard against concurrent modifications to the list, and only if all callers do so within a lock. They can also cause your code to hang if some nasty bit of code takes the lock and doesn't release it.
In my solution you'll notice I have a object gate that is a private variable internal to the class that I lock on. Nothing outside the class can lock on this so it is safe.
I hope this helps.
As others already said, you can use concurrent collections from the System.Collections.Concurrent namespace. If you can use one of those, this is preferred.
But if you really want a list which is just synchronized, you could look at the SynchronizedCollection<T> class in System.Collections.Generic.
Note that you have to reference the System.ServiceModel assembly, which is also the reason why I don't like it so much. But sometimes I use it.
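A minimal usage sketch: every member takes an internal lock, and you can hold SyncRoot yourself for compound operations such as enumeration:
var list = new SynchronizedCollection<string>();
list.Add("hello"); // individually thread-safe

lock (list.SyncRoot)
{
    // take the same lock for compound operations
    foreach (var s in list)
        Console.WriteLine(s);
}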

C#: Can you detect whether or not the current execution context is within `lock (this)`?

If I have an object that I would like to force to be accessed from within a lock, like so:
var obj = new MyObject();
lock (obj)
{
    obj.Date = DateTime.Now;
    obj.Name = "My Name";
}
Is it possible, from within the AddOne and RemoveOne functions, to detect whether the current execution context is within a lock?
Something like:
Monitor.AreWeCurrentlyEnteredInto(this)
Edit: (for clarification of intent)
The intent here is to be able to reject any modification made outside of the lock, so that all changes to the object itself will be transactional and thread-safe. Locking on a mutex within the object itself does not ensure a transactional nature to the edits.
I know that it is possible to do this:
var obj = new MyObject();
obj.MonitorEnterThis();
try
{
    obj.Date = DateTime.Now;
    obj.Name = "My Name";
}
finally
{
    obj.MonitorExitThis();
}
But this would allow any other thread to call the Add/Remove functions without first calling the Enter, thereby circumventing the protection.
Edit 2:
Here is what I'm currently doing:
var obj = new MyObject();
using (var mylock = obj.Lock())
{
    obj.SetDate(DateTime.Now, mylock);
    obj.SetName("New Name", mylock);
}
Which is simple enough, but it has two problems:
1. I'm implementing IDisposable on the mylock object, which is a little bit of an abuse of the IDisposable interface.
2. I would like to change the SetDate and SetName functions to properties, for clarity.
I don't think that's possible without tracking the state yourself (e.g. by using some kind of semaphore). But even if it were, that'd be a gross violation of encapsulation. Your methods usually shouldn't care whether or not they're executing in a particular locking context.
There's no documented method of checking for this kind of condition at runtime, and if there were, I'd be suspicious of any code that used it, because any code that alters its behaviour based on the call stack would be very difficult to debug.
True ACID semantics are not trivial to implement, and I personally wouldn't try; that's what we have databases for, and you can use an in-memory database if you need the code to be fast/portable. If you just want forced-single-threaded semantics, that is a somewhat easier beast to tame, although as a disclaimer I should mention that in the long run you'd be better off simply providing atomic operations as opposed to trying to prevent multi-threaded access.
Let's suppose that you have a very good reason for wanting to do this. Here is a proof-of-concept class you could use:
public interface ILock : IDisposable
{
}

public class ThreadGuard
{
    private static readonly object SlotMarker = new Object();

    [ThreadStatic]
    private static Dictionary<Guid, object> locks;

    private Guid lockID;
    private object sync = new Object();

    public void BeginGuardedOperation()
    {
        lock (sync)
        {
            if (lockID == Guid.Empty)
                throw new InvalidOperationException("Guarded operation " +
                    "was blocked because no lock has been obtained.");
            object currentLock;
            Locks.TryGetValue(lockID, out currentLock);
            if (currentLock != SlotMarker)
            {
                throw new InvalidOperationException("Guarded operation " +
                    "was blocked because the lock was obtained on a " +
                    "different thread from the calling thread.");
            }
        }
    }

    public ILock GetLock()
    {
        lock (sync)
        {
            if (lockID != Guid.Empty)
                throw new InvalidOperationException("This instance is " +
                    "already locked.");
            lockID = Guid.NewGuid();
            Locks.Add(lockID, SlotMarker);
            return new ThreadGuardLock(this);
        }
    }

    private void ReleaseLock()
    {
        lock (sync)
        {
            if (lockID == Guid.Empty)
                throw new InvalidOperationException("This instance cannot " +
                    "be unlocked because no lock currently exists.");
            object currentLock;
            Locks.TryGetValue(lockID, out currentLock);
            if (currentLock == SlotMarker)
            {
                Locks.Remove(lockID);
                lockID = Guid.Empty;
            }
            else
                throw new InvalidOperationException("Unlock must be invoked " +
                    "from same thread that invoked Lock.");
        }
    }

    public bool IsLocked
    {
        get
        {
            lock (sync)
            {
                return (lockID != Guid.Empty);
            }
        }
    }

    protected static Dictionary<Guid, object> Locks
    {
        get
        {
            if (locks == null)
                locks = new Dictionary<Guid, object>();
            return locks;
        }
    }

    #region Lock Implementation

    class ThreadGuardLock : ILock
    {
        private ThreadGuard guard;

        public ThreadGuardLock(ThreadGuard guard)
        {
            this.guard = guard;
        }

        public void Dispose()
        {
            guard.ReleaseLock();
        }
    }

    #endregion
}
There's a lot going on here but I'll break it down for you:
Current locks (per thread) are held in a [ThreadStatic] field which provides type-safe, thread-local storage. The field is shared across instances of the ThreadGuard, but each instance uses its own key (Guid).
The two main operations are GetLock, which verifies that no lock has already been taken and then adds its own lock, and ReleaseLock, which verifies that the lock exists for the current thread (because remember, locks is ThreadStatic) and removes it if that condition is met, otherwise throws an exception.
The last operation, BeginGuardedOperation, is intended to be used by classes that own ThreadGuard instances. It's basically an assertion of sorts, it verifies that the currently-executed thread owns whichever lock is assigned to this ThreadGuard, and throws if the condition isn't met.
There's also an ILock interface (which doesn't do anything except derive from IDisposable), and a disposable inner ThreadGuardLock to implement it, which holds a reference to the ThreadGuard that created it and calls its ReleaseLock method when disposed. Note that ReleaseLock is private, so the ThreadGuardLock.Dispose is the only public access to the release function, which is good - we only want a single point of entry for acquisition and release.
To use the ThreadGuard, you would include it in another class:
public class MyGuardedClass
{
    private int id;
    private string name;
    private ThreadGuard guard = new ThreadGuard();

    public MyGuardedClass()
    {
    }

    public ILock Lock()
    {
        return guard.GetLock();
    }

    public override string ToString()
    {
        return string.Format("[ID: {0}, Name: {1}]", id, name);
    }

    public int ID
    {
        get { return id; }
        set
        {
            guard.BeginGuardedOperation();
            id = value;
        }
    }

    public string Name
    {
        get { return name; }
        set
        {
            guard.BeginGuardedOperation();
            name = value;
        }
    }
}
All this does is use the BeginGuardedOperation method as an assertion, as described earlier. Note that I'm not attempting to protect read-write conflicts, only multiple-write conflicts. If you want reader-writer synchronization then you'd need to either require the same lock for reading (probably not so good), use an additional lock in MyGuardedClass (the most straightforward solution) or alter the ThreadGuard to expose and acquire a true "lock" using the Monitor class (be careful).
And here's a test program to play with:
class Program
{
    static void Main(string[] args)
    {
        MyGuardedClass c = new MyGuardedClass();
        RunTest(c, TestNoLock);
        RunTest(c, TestWithLock);
        RunTest(c, TestWithDisposedLock);
        RunTest(c, TestWithCrossThreading);
        Console.ReadLine();
    }

    static void RunTest(MyGuardedClass c, Action<MyGuardedClass> testAction)
    {
        try
        {
            testAction(c);
            Console.WriteLine("SUCCESS: Result = {0}", c);
        }
        catch (Exception ex)
        {
            Console.WriteLine("FAIL: {0}", ex.Message);
        }
    }

    static void TestNoLock(MyGuardedClass c)
    {
        c.ID = 1;
        c.Name = "Test1";
    }

    static void TestWithLock(MyGuardedClass c)
    {
        using (c.Lock())
        {
            c.ID = 2;
            c.Name = "Test2";
        }
    }

    static void TestWithDisposedLock(MyGuardedClass c)
    {
        using (c.Lock())
        {
            c.ID = 3;
        }
        c.Name = "Test3";
    }

    static void TestWithCrossThreading(MyGuardedClass c)
    {
        using (c.Lock())
        {
            c.ID = 4;
            c.Name = "Test4";
            ThreadPool.QueueUserWorkItem(s => RunTest(c, cc => cc.ID = 5));
            Thread.Sleep(2000);
        }
    }
}
As the code (hopefully) implies, only the TestWithLock method completely succeeds. The TestWithCrossThreading method partially succeeds - the worker thread fails, but the main thread has no trouble (which, again, is the desired behaviour here).
This isn't intended to be production-ready code, but it should give you the basic idea of what has to be done in order to both (a) prevent cross-thread calls and (b) allow any thread to take ownership of the object as long as nothing else is using it.
Let's redesign your class to make it actually work like a transaction.
using (var transaction = account.BeginTransaction())
{
    transaction.Name = "blah";
    transaction.Date = DateTime.Now;
    transaction.Commit();
}
Changes will not be propagated until Commit is called.
In Commit you can take a lock and set the properties on the target object.
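A sketch of what that could look like (Account, SyncRoot, and the buffered properties are illustrative, not from the original post):
public class AccountTransaction : IDisposable
{
    private readonly Account _account;

    public string Name { get; set; }
    public DateTime Date { get; set; }

    internal AccountTransaction(Account account)
    {
        _account = account;
    }

    public void Commit()
    {
        // All writes happen here, under one lock, so they apply atomically.
        lock (_account.SyncRoot)
        {
            _account.Name = Name;
            _account.Date = Date;
        }
    }

    public void Dispose()
    {
        // Uncommitted changes are simply dropped.
    }
}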
You can override AddOne and RemoveOne to take a boolean flag that is set to true if it's being called from a lock. I don't see any other way.
You can also play with the ExecutionContext class if you want to know something about the current execution context. You can get the current context by calling ExecutionContext.Capture().
Using thread-local storage, you can record the entering and exiting of a lock.
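A sketch of that idea (the field and wrapper names are illustrative): a [ThreadStatic] flag is set while the lock is held, and guarded members check it:
private readonly object _lockObject = new object();

[ThreadStatic]
private static bool _lockHeld; // per-thread flag: true only inside WithLock

public void WithLock(Action action)
{
    lock (_lockObject)
    {
        _lockHeld = true;
        try { action(); }
        finally { _lockHeld = false; }
    }
}

public void AddOne()
{
    if (!_lockHeld)
        throw new InvalidOperationException("Must be called inside WithLock.");
    // ... mutate state here
}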
If your requirement is that the lock must be acquired for the duration of either method AddOne() or RemoveOne(), then why not simply acquire the lock inside each method? It shouldn't be a problem if the caller has already acquired the lock for you.
However, if your requirement is that the lock must be acquired before calling AddOne() and RemoveOne() together (because other concurrent operations performed on the instance are potentially unsafe), then maybe you should consider changing the public interface so that locking can be handled internally without concerning client code with the details.
One possible way to accomplish the latter would be to provide Begin- and End-Changes methods that have to be called before and after AddOne and RemoveOne. An exception should be raised if AddOne or RemoveOne is called outside of the Begin-End scope.
I ran into this same problem and created a helper class that looks like this:
public class BusyLock : IDisposable
{
    private readonly Object _lockObject = new Object();
    private int _lockCount;

    public bool IsBusy
    {
        get { return _lockCount > 0; }
    }

    public IDisposable Enter()
    {
        if (!Monitor.TryEnter(_lockObject, TimeSpan.FromSeconds(1.0)))
            throw new InvalidOperationException("Cannot begin operation as system is already busy");
        Interlocked.Increment(ref _lockCount);
        return this;
    }

    public bool TryEnter(out IDisposable busyLock)
    {
        if (Monitor.TryEnter(_lockObject))
        {
            busyLock = this;
            Interlocked.Increment(ref _lockCount);
            return true;
        }
        busyLock = null;
        return false;
    }

    #region IDisposable Members

    public void Dispose()
    {
        if (_lockCount > 0)
        {
            Monitor.Exit(_lockObject);
            Interlocked.Decrement(ref _lockCount);
        }
    }

    #endregion
}
You can then create an instance wrapped like this:
public sealed class AutomationManager
{
    private readonly BusyLock _automationLock = new BusyLock();

    public IDisposable AutomationLock
    {
        get { return _automationLock.Enter(); }
    }

    public bool IsBusy
    {
        get { return _automationLock.IsBusy; }
    }
}
And use it like this:
public void DoSomething()
{
    using (AutomationLock)
    {
        // Do important busy stuff here
    }
}
For my particular case, I only wanted an enforcing lock (two threads shouldn't ever try to acquire the lock at the same time if they're well-behaved), so I throw an exception. You can easily modify it to perform more typical locking and still take advantage of the IsBusy.

Async result handle to return to callers

I have a method that queues some work to be executed asynchronously. I'd like to return some sort of handle to the caller that can be polled, waited on, or used to fetch the return value from the operation, but I can't find a class or interface that's suitable for the task.
BackgroundWorker comes close, but it's geared to the case where the worker has its own dedicated thread, which isn't true in my case. IAsyncResult looks promising, but the provided AsyncResult implementation is also unusable for me. Should I implement IAsyncResult myself?
Clarification:
I have a class that conceptually looks like this:
class AsyncScheduler
{
    private List<object> _workList = new List<object>();
    private bool _finished = false;

    public SomeHandle QueueAsyncWork(object workObject)
    {
        // simplified for the sake of example
        _workList.Add(workObject);
        return SomeHandle;
    }

    private void WorkThread()
    {
        // simplified for the sake of example
        while (!_finished)
        {
            foreach (object workObject in _workList)
            {
                if (!workObject.IsFinished)
                {
                    workObject.DoSomeWork();
                }
            }
            Thread.Sleep(1000);
        }
    }
}
The QueueAsyncWork function pushes a work item onto the polling list for a dedicated work thread, of which there will only over be one. My problem is not with writing the QueueAsyncWork function--that's fine. My question is, what do I return to the caller? What should SomeHandle be?
The existing .Net classes for this are geared towards the situation where the asynchronous operation can be encapsulated in a single method call that returns. That's not the case here--all of the work objects do their work on the same thread, and a complete work operation might span multiple calls to workObject.DoSomeWork(). In this case, what's a reasonable approach for offering the caller some handle for progress notification, completion, and getting the final outcome of the operation?
Yes, implement IAsyncResult (or rather, an extended version of it, to provide for progress reporting).
public class WorkObjectHandle : IAsyncResult, IDisposable
{
    private int _percentComplete;
    private ManualResetEvent _waitHandle;

    public int PercentComplete
    {
        get { return _percentComplete; }
        set
        {
            if (value < 0 || value > 100) throw new ArgumentOutOfRangeException("value", "Percent complete should be between 0 and 100");
            if (_percentComplete == 100) throw new InvalidOperationException("Already complete");
            if (value == 100 && Complete != null) Complete(this, new CompleteArgs(WorkObject));
            _percentComplete = value;
        }
    }

    public IWorkObject WorkObject { get; private set; }
    public object AsyncState { get { return WorkObject; } }
    public bool IsCompleted { get { return _percentComplete == 100; } }
    public event EventHandler<CompleteArgs> Complete; // CompleteArgs in a usual pattern
    // you may also want to have a Progress event
    public bool CompletedSynchronously { get { return false; } }

    public WaitHandle AsyncWaitHandle
    {
        get
        {
            // initialize it lazily
            if (_waitHandle == null)
            {
                ManualResetEvent newWaitHandle = new ManualResetEvent(false);
                if (Interlocked.CompareExchange(ref _waitHandle, newWaitHandle, null) != null)
                    newWaitHandle.Dispose();
            }
            return _waitHandle;
        }
    }

    public void Dispose()
    {
        if (_waitHandle != null)
            _waitHandle.Dispose();
        // dispose _workObject too, if needed
    }

    public WorkObjectHandle(IWorkObject workObject)
    {
        WorkObject = workObject;
        _percentComplete = 0;
    }
}
public class AsyncScheduler
{
    private Queue<WorkObjectHandle> _workQueue = new Queue<WorkObjectHandle>();
    private bool _finished = false;

    public WorkObjectHandle QueueAsyncWork(IWorkObject workObject)
    {
        var handle = new WorkObjectHandle(workObject);
        lock (_workQueue)
        {
            _workQueue.Enqueue(handle);
        }
        return handle;
    }

    private void WorkThread()
    {
        // simplified for the sake of example
        while (!_finished)
        {
            WorkObjectHandle handle;
            lock (_workQueue)
            {
                if (_workQueue.Count == 0) break;
                handle = _workQueue.Dequeue();
            }
            try
            {
                var workObject = handle.WorkObject;
                // do whatever you want with workObject, set handle.PercentComplete, etc.
            }
            finally
            {
                handle.Dispose();
            }
        }
    }
}
If I understand correctly you have a collection of work objects (IWorkObject) that each complete a task via multiple calls to a DoSomeWork method. When an IWorkObject object has finished its work you'd like to respond to that somehow and during the process you'd like to respond to any reported progress?
In that case I'd suggest you take a slightly different approach. You could take a look at the Parallel Extensions framework (blog). Using the framework, you could write something like this:
public void QueueWork(IWorkObject workObject)
{
    Task.Factory.StartNew(() =>
    {
        while (!workObject.Finished)
        {
            int progress = workObject.DoSomeWork();
            DoSomethingWithReportedProgress(workObject, progress);
        }
        WorkObjectIsFinished(workObject);
    });
}
Some things to note:
QueueWork now returns void. The reason for this is that the actions that occur when progress is reported or when the task completes have become part of the thread that executes the work. You could of course return the Task that the factory creates from the method (to enable polling, for example).
The progress-reporting and finish-handling are now part of the thread because you should always avoid polling when possible. Polling is more expensive because usually you either poll too frequently (too early) or not often enough (too late). There is no reason you can't report on the progress and finishing of the task from within the thread that is running the task.
The above could also be implemented using the (lower level) ThreadPool.QueueUserWorkItem method.
Using QueueUserWorkItem:
public void QueueWork(IWorkObject workObject)
{
    ThreadPool.QueueUserWorkItem(state =>
    {
        while (!workObject.Finished)
        {
            int progress = workObject.DoSomeWork();
            DoSomethingWithReportedProgress(workObject, progress);
        }
        WorkObjectIsFinished(workObject);
    });
}
The WorkObject class can contain the properties that need to be tracked.
public class WorkObject
{
    public int PercentComplete { get; private set; }
    public bool IsFinished { get; private set; }

    public void DoSomeWork()
    {
        // work done here
        this.PercentComplete = 50;
        // some more work done here
        this.PercentComplete = 100;
        this.IsFinished = true;
    }
}
Then in your example:
Change the collection from a List to a Dictionary that can hold Guid values (or any other means of uniquely identifying the value).
Expose the correct WorkObject's properties by having the caller pass the Guid that it received from QueueAsyncWork.
I'm assuming that you'll start WorkThread asynchronously (albeit, the only asynchronous thread); plus, you'll have to make retrieving the dictionary values and WorkObject properties thread-safe.
private Dictionary<Guid, WorkObject> _workList =
    new Dictionary<Guid, WorkObject>();
private bool _finished = false;

public Guid QueueAsyncWork(WorkObject workObject)
{
    Guid guid = Guid.NewGuid();
    // simplified for the sake of example
    _workList.Add(guid, workObject);
    return guid;
}

private void WorkThread()
{
    // simplified for the sake of example
    while (!_finished)
    {
        foreach (WorkObject workObject in _workList.Values)
        {
            if (!workObject.IsFinished)
            {
                workObject.DoSomeWork();
            }
        }
        Thread.Sleep(1000);
    }
}

// an example of getting the WorkObject's property
public int GetPercentComplete(Guid guid)
{
    WorkObject workObject = null;
    if (!_workList.TryGetValue(guid, out workObject))
        throw new Exception("Unable to find Guid");
    return workObject.PercentComplete;
}
The simplest way to do this is described here. Suppose you have a method string DoSomeWork(int). You then create a delegate of the correct type, for example:
Func<int, string> myDelegate = DoSomeWork;
Then you call the BeginInvoke method on the delegate:
int parameter = 10;
myDelegate.BeginInvoke(parameter, Callback, null);
The Callback delegate will be called once your asynchronous call has completed. You can define this method as follows:
void Callback(IAsyncResult result)
{
    var asyncResult = (AsyncResult) result;
    var @delegate = (Func<int, string>) asyncResult.AsyncDelegate;
    string methodReturnValue = @delegate.EndInvoke(result);
}
Using the described scenario, you can also poll for results or wait on them. Take a look at the url I provided for more info.
Regards,
Ronald
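To illustrate the polling and waiting options mentioned above, a short sketch using the IAsyncResult returned by BeginInvoke:
IAsyncResult result = myDelegate.BeginInvoke(parameter, null, null);

while (!result.IsCompleted)
{
    // poll: do other useful work between checks
    Thread.Sleep(10);
}

// ...or block until the call finishes:
result.AsyncWaitHandle.WaitOne();

string returnValue = myDelegate.EndInvoke(result);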
If you don't want to use async callbacks, you can use an explicit WaitHandle, such as a ManualResetEvent:
public abstract class WorkObject : IDisposable
{
    ManualResetEvent _waitHandle = new ManualResetEvent(false);

    public void DoSomeWork()
    {
        try
        {
            this.DoSomeWorkOverride();
        }
        finally
        {
            _waitHandle.Set();
        }
    }

    protected abstract void DoSomeWorkOverride();

    public void WaitForCompletion()
    {
        _waitHandle.WaitOne();
    }

    public void Dispose()
    {
        _waitHandle.Dispose();
    }
}
And in your code you could say
using (var workObject = new SomeConcreteWorkObject())
{
    asyncScheduler.QueueAsyncWork(workObject);
    workObject.WaitForCompletion();
}
Don't forget to call Dispose on your workObject though.
You can always use alternate implementations which create a wrapper like this for every work object and which call _waitHandle.Dispose() in WaitForCompletion(); you can lazily instantiate the wait handle (careful: race conditions ahead), etc. (That's pretty much what BeginInvoke does for delegates.)
