I have the following code, with which I want to achieve the following:
Check if a value is in cache
If in cache, get the value from it and proceed
If not in cache, perform the logic to enter it into the cache, but do this asynchronously, as the operation may take a long time and I don't want to hold up the user
As you will see in my code, I place a lock on the cache in the async thread. Is my setup below thread safe? And by placing the lock, does this mean the cache will not be accessible for other threads to read while the async operation takes place? I do not want a circumstance where the cache is locked in an async thread, preventing other requests from accessing it.
There is also a chance that the same request may be called by several threads, hence the lock.
Any recommendations as to how I could improve the code would be great.
// Check if the value is in cache
if (!this.Cache.Contains(key))
{
    // Perform processing of files async in another thread so rendering is not slowed down
    ThreadPool.QueueUserWorkItem(delegate
    {
        lock (this.Cache)
        {
            if (!this.Cache.Contains(key))
            {
                // Perform the operation to get value for cache here
                var cacheValue = operation();
                this.Cache.Add(key, cacheValue);
            }
        }
    });

    return "local value";
}
else
{
    // Return the string from cache as they are present there
    return this.Cache.GetFilename(key);
}
Note: this.Cache represents a cache object.
The application is a web application on .NET 3.5.
How about changing the delegate to look like this:
var cacheValue = operation();
lock (this.Cache)
{
    if (!this.Cache.Contains(key))
    {
        // Store the value that was computed outside the lock
        this.Cache.Add(key, cacheValue);
    }
}
This kind of coding locks the dictionary for a very short time. You can also try using ConcurrentDictionary, which mostly doesn't do any locking at all.
Alex.
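For reference, a minimal sketch of the GetOrAdd pattern mentioned above, assuming a string-to-string cache (note that ConcurrentDictionary requires .NET 4, while the question targets .NET 3.5; operation is the delegate from the question):

private readonly ConcurrentDictionary<string, string> cache =
    new ConcurrentDictionary<string, string>();

public string GetFilename(string key)
{
    // Reads are lock-free; under a race the value factory may run more
    // than once, but only one result is ever stored for the key.
    return this.cache.GetOrAdd(key, k => operation());
}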
There are several problems with your code. Problems include: calling Cache.Contains outside a lock while other threads may be modifying the collection; invoking operation within a lock, which may cause deadlocks; etc.
Here's a thread-safe implementation of a cache that satisfies all your requirements:
class Cache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Task<TValue>> items;

    public Cache()
    {
        this.items = new ConcurrentDictionary<TKey, Task<TValue>>();
    }

    public Task<TValue> GetAsync(TKey key, Func<TKey, TValue> valueFactory)
    {
        return this.items.GetOrAdd(key,
            k => Task.Factory.StartNew<TValue>(() => valueFactory(k)));
    }
}
The GetAsync method works as follows: First it checks if there is a Task in the items dictionary for the given key. If there is no such Task, it runs valueFactory asynchronously on the ThreadPool and stores the Task object that represents the pending asynchronous operation in the dictionary. Code calling GetAsync can wait for the Task to finish, which will return the value calculated by valueFactory. This all happens in an asynchronous, non-blocking, thread-safe manner.
Example usage:
var cache = new Cache<string, int>();
Task<int> task = cache.GetAsync("Hello World", s => s.Length);
// ... do something else ...
task.Wait();
Console.WriteLine(task.Result);
Looks like a standard solution, except for the retrieval in the background thread. It will be thread safe as long as all other bits of the code that use the cache also take out a lock on the same cache reference before modifying it.
From your code, other threads will still be able to read from the cache (or write to it, if they don't take out a lock). The code will only block at the point where a lock() statement is encountered.
Does the return "local value" make sense? Would you not need to retrieve the item in that function anyway in the case of a cache miss?
I've just started using Nito.AsyncEx package and AsyncLock instead of a normal lock() { ... } section where I have async calls within the locked section (since you can't use lock() in such cases for good reasons I've just read about). This is within a job that I'm running from Hangfire. Let's call this the 'worker' thread.
In another thread, from an ASP.NET controller, I'd like to check if there's a thread that's currently executing within the locked section. If there's no thread in the locked section then I'll schedule a background job via Hangfire. If there's already a thread in the locked section then I don't want to schedule another one. (Yes, this might sound a little weird, but that's another story).
Is there a way to check this using the Nito.AsyncEx objects, or should I just set a flag at the start of the locked section and unset it at the end?
e.g. I'd like to do this:
public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    // do other stuff...

    // Ensure I'm the only job in this section
    using (await _asyncLock.LockAsync())
    {
        await _aService.CallSomethingAsync();
    }

    // do other stuff...
}
and from a service called by a controller use my imaginary method IsSomeoneInThereNow():
public void ScheduleAJobUnlessOneIsRunning(string queueName, int someParam)
{
    if (!_asyncLock.IsSomeoneInThereNow())
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}
but so far I can only see how to do this with a separate variable (imagining _isAnybodyInHere is a thread-safe bool or I used Interlocked instead):
public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    // do other stuff...

    // Ensure I'm the only job in this section
    using (await _asyncLock.LockAsync())
    {
        try
        {
            _isAnybodyInHere = true;
            await _aService.CallSomethingAsync();
        }
        finally
        {
            _isAnybodyInHere = false;
        }
    }

    // do other stuff...
}
and from a service called by a controller:
public void ScheduleAJobUnlessOneIsRunning(string queueName, int someParam)
{
    if (!_isAnybodyInHere)
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}
Really it feels like there should be a better way. The AsyncLock doc says:
You can call Lock or LockAsync with an already-cancelled CancellationToken to attempt to acquire the AsyncLock immediately without actually entering the wait queue.
but I don't understand how to do that, at least using the synchronous Lock method.
I don't understand how to do that
You can create a new CancellationToken and pass true to create one that is already canceled:
using (_asyncLock.Lock(new CancellationToken(canceled: true)))
{
    ...
}
The call to Lock will throw if the lock is already held.
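Putting that together, the controller-side check might look like this (a sketch, assuming the failed immediate acquisition surfaces as an OperationCanceledException; _asyncLock and _backgroundJobClient are from the question):

public void ScheduleAJobUnlessOneIsRunning(string queueName, int someParam)
{
    bool lockIsHeld;
    try
    {
        // Try to acquire immediately without entering the wait queue,
        // then release right away; we only wanted to probe the state.
        using (_asyncLock.Lock(new CancellationToken(canceled: true)))
        {
        }
        lockIsHeld = false;
    }
    catch (OperationCanceledException)
    {
        lockIsHeld = true;
    }

    if (!lockIsHeld)
    {
        _backgroundJobClient.Enqueue<MyJob>(x =>
            x.DoAJobInTheBackground(queueName, someParam));
    }
}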
That said, I don't think this is a good solution to your problem. There's always the possibility that the background job is just about to finish: the controller checks the lock and determines it's held, and then the background job releases the lock. In that case, the controller will not trigger a background job.
You must never(!) make any assumptions about any other thread or process!
What you must instead do, in this particular example, is to "schedule another job," unless you have already done so. (To avoid "fork bombs.") Then, the job, once it actually begins executing, must decide: "Should I be doing this?" If not, the job quietly exits.
Or – perhaps the actual question here is: "Has somebody else already scheduled this job?"
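A sketch of that "should I be doing this?" check inside the job, using Interlocked on a flag (the _jobRunning field is hypothetical; _aService is from the question):

private static int _jobRunning; // 0 = idle, 1 = running

public async Task DoAJobInTheBackground(string queueName, int someParam)
{
    // Atomically claim the running slot; if another instance already
    // holds it, quietly exit as described above.
    if (Interlocked.CompareExchange(ref _jobRunning, 1, 0) != 0)
        return;

    try
    {
        await _aService.CallSomethingAsync();
    }
    finally
    {
        Interlocked.Exchange(ref _jobRunning, 0);
    }
}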
I have a method that looks like
public async Task<OpenResult> openAsync()
I want to do something like this: if there is a current call to openAsync in the process of being executed, I would like any further calls to openAsync to be added to a queue.
When the first call completes, I want to complete all the ones in the queue with the result of the first call.
What’s the way to achieve this in C#?
Usually, this kind of detail is left to the caller, i.e. by making the caller await appropriately and only call methods when they should call methods. However, if you must do this, one simple way is via a semaphore; consider:
class HazProtected
{
    private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

    public async Task<OpenResult> OpenAsync(CancellationToken cancellationToken = default)
    {
        await _lock.WaitAsync(cancellationToken);
        try
        {
            return await DoOpenAsync(cancellationToken);
        }
        finally
        {
            _lock.Release();
        }
    }

    private async Task<OpenResult> DoOpenAsync(CancellationToken cancellationToken)
    {
        // ... your real code here
    }
}
The code in OpenAsync ensures that only one concurrent async caller can be attempting to open it at a time. When there is a conflict, callers are held asynchronously until the semaphore can be acquired. There is a complication, though; SemaphoreSlim has some known problems on .NET Framework (resolved in .NET Core) when there are both asynchronous and synchronous semaphore acquisitions at the same time, which can lead to a spiral of death.
In more complex scenarios, it is possible to write your own queue of pending callers; this is a very, very exotic scenario and should usually not be attempted unless you understand exactly why you're doing it!
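If the requirement is specifically that queued callers all complete with the first call's result, rather than each performing their own open in turn, one sketch (an assumption about the desired semantics, not part of the answer above) is to share the in-flight Task:

class HazSharedOpen
{
    private readonly object _gate = new object();
    private Task<OpenResult> _pending;

    public Task<OpenResult> OpenAsync(CancellationToken cancellationToken = default)
    {
        lock (_gate)
        {
            // While an open is in flight, every caller receives the same
            // task and therefore completes with the same result. A task
            // that has completed (or faulted) is replaced on the next call.
            if (_pending == null || _pending.IsCompleted)
            {
                _pending = DoOpenAsync(cancellationToken);
            }
            return _pending;
        }
    }

    private async Task<OpenResult> DoOpenAsync(CancellationToken cancellationToken)
    {
        // Placeholder so the sketch compiles; your real open logic goes here.
        await Task.Delay(100, cancellationToken);
        return new OpenResult();
    }
}

Note that the first caller's CancellationToken governs the shared operation; cancelling it faults the task for everyone currently awaiting it.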
My team is developing a multi-threaded application using async/await in C# 5.0. In the process of implementing thread synchronization, after several iterations, we came up with a (possibly novel?) SynchronizationContext implementation with an internal lock that:
When calling Post:
if a lock can be taken, the delegate is executed immediately
if a lock cannot be taken, the delegate is queued
When calling Send:
if a lock can be taken, the delegate is executed
if a lock cannot be taken, the thread is blocked
In all cases, before executing the delegate, the context sets itself as the current context and restores the original context when the delegate returns.
It’s an unusual pattern, and since we’re clearly not the first people writing such an application, I’m wondering:
Is the pattern really safe?
Are there better ways of achieving thread synchronization?
Here’s the source for SerializingSynchronizationContext and a demo on GitHub.
Here’s how it’s used:
Each class wanting protection creates its own instance of the context like a mutex.
The context is awaitable so that statements like the following are possible.
await myContext;
This simply causes the rest of the method to be run under protection of the context (a sketch of how such an awaiter can be built follows this list).
All methods and properties of the class use this pattern to protect data. Between awaits, only one thread can run on the context at a time and so state will remain consistent. When an await is reached, the next scheduled thread is allowed to run on the context.
The custom SynchronizationContext can be used in combination with AsyncLock if needed to maintain atomicity even with awaited expressions.
Synchronous methods of a class can use custom context for protection as well.
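The linked source isn't reproduced here, but as a purely illustrative sketch (not the actual SerializingSynchronizationContext code), a context can be made awaitable by giving it an awaiter that posts the continuation back through Post:

using System;
using System.Runtime.CompilerServices;
using System.Threading;

public static class SynchronizationContextExtensions
{
    // Enables "await myContext;" for any SynchronizationContext.
    public static SynchronizationContextAwaiter GetAwaiter(this SynchronizationContext context)
    {
        return new SynchronizationContextAwaiter(context);
    }
}

public struct SynchronizationContextAwaiter : INotifyCompletion
{
    private readonly SynchronizationContext _context;

    public SynchronizationContextAwaiter(SynchronizationContext context)
    {
        _context = context;
    }

    // Always yield, so the rest of the method is queued through the context.
    public bool IsCompleted { get { return false; } }

    public void OnCompleted(Action continuation)
    {
        _context.Post(state => ((Action)state)(), continuation);
    }

    public void GetResult() { }
}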
Having a sync context that never runs more than one operation at a time is certainly not novel, and also not bad at all. Here you can see Stephen Toub describing how to make one two years ago. (In this case it's used simply as a tool to create a message pump, which actually sounds like it might be exactly what you want, but even if it's not, you could pull the sync context out of the solution and use it separately.)
It of course makes perfect conceptual sense to have a single threaded synchronization context. All of the sync contexts representing UI states are like this. The winforms, WPF, winphone, etc. sync contexts all ensure that only a single operation from that context is ever running at one time.
The one worrying bit is this:
In all cases, before executing the delegate, the context sets itself as the current context and restores the original context when the delegate returns.
I'd say that the context itself shouldn't be doing this. If the caller wants this sync context to be the current context, they can set it. If they want to use it for something other than the current context, you should allow them to do so. Sometimes you want to use a sync context without setting it as the current one, to synchronize access to a certain resource; in such a case, only operations specifically accessing that resource would need to use this context.
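For example (purely illustrative; sharedCounters is hypothetical), a context can serialize access to one resource without ever becoming the current context:

// One context guards one resource; it is never set as the current context.
private readonly SerializingSynchronizationContext countersContext =
    new SerializingSynchronizationContext();

public void RecordRequest()
{
    // Delegates posted here run one at a time, so sharedCounters never
    // races, yet SynchronizationContext.Current on the callers is untouched.
    countersContext.Post(_ => sharedCounters.Increment("requests"), null);
}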
Regarding the use of locks: this question would be more appropriate for Code Review, but at first glance I don't think your SerializingSynchronizationContext.Post is doing things right. Try calling it in a tight loop. Because of the Task.Run((Action)ProcessQueue), you'll quickly end up with more and more ThreadPool threads being blocked on lock (_lock) while waiting to acquire it inside ProcessQueue().
[EDITED] To address the comment, here is your current implementation:
public override void Post(SendOrPostCallback d, object state)
{
    _queue.Enqueue(new CallbackInfo(d, state));

    bool lockTaken = false;
    try
    {
        Monitor.TryEnter(_lock, ref lockTaken);
        if (lockTaken)
        {
            ProcessQueue();
        }
        else
        {
            Task.Run((Action)ProcessQueue);
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_lock);
        }
    }
}
// ...
private void ProcessQueue()
{
    if (!_queue.IsEmpty)
    {
        lock (_lock)
        {
            var outer = SynchronizationContext.Current;
            try
            {
                SynchronizationContext.SetSynchronizationContext(this);

                CallbackInfo callback;
                while (_queue.TryDequeue(out callback))
                {
                    try
                    {
                        callback.D(callback.State);
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine("Exception in posted callback on {0}: {1}",
                            GetType().FullName, e);
                    }
                }
            }
            finally
            {
                SynchronizationContext.SetSynchronizationContext(outer);
            }
        }
    }
}
In Post, why enqueue a callback with _queue.Enqueue and then occupy a new thread from the pool with Task.Run((Action)ProcessQueue), when ProcessQueue() is already pumping the _queue in a loop on another pool thread and dispatching callbacks? In this case, Task.Run looks like a waste of a pool thread to me.
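One possible fix (a sketch; _drainScheduled is a new field, and ProcessQueue is the method from the question) is to schedule at most one drain at a time instead of one Task.Run per Post:

private int _drainScheduled; // 0 = none pending, 1 = scheduled or running

public override void Post(SendOrPostCallback d, object state)
{
    _queue.Enqueue(new CallbackInfo(d, state));

    // Only schedule a drain when none is pending, so a tight Post loop
    // cannot pile up pool threads blocked on the lock.
    if (Interlocked.CompareExchange(ref _drainScheduled, 1, 0) == 0)
    {
        Task.Run((Action)DrainQueue);
    }
}

private void DrainQueue()
{
    do
    {
        ProcessQueue();
        Interlocked.Exchange(ref _drainScheduled, 0);

        // Re-check: an item may have been enqueued after ProcessQueue
        // emptied the queue but before the flag was cleared; if so,
        // reclaim the flag and drain again.
    } while (!_queue.IsEmpty &&
             Interlocked.CompareExchange(ref _drainScheduled, 1, 0) == 0);
}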
I have a .NET 4 WCF service that maintains a thread-safe, in-memory dictionary cache of objects (SynchronizedObject). I want to provide safe, concurrent access to read and modify both the collection and the objects in the collection. Safely modifying the objects and the cache can be accomplished with reader-writer locks.
I am running into trouble providing read access to an object in the cache. My Read method returns a SynchronizedObject, but I do not know how to elegantly ensure no other threads are modifying the object while WCF is serializing the SynchronizedObject.
I have tried placing the Read return clause inside the read-lock and setting a breakpoint in a custom XmlObjectSerializer. When the XmlObjectSerializer::WriteObject(Stream,object) method is called, a read-lock is not held on the SynchronizedObject.
I am specifically concerned with the following scenario:
Thread A calls Read(int). Execution continues until just after the return statement. By this point, the finally has also been executed, and the read lock on the SynchronizedObject has been released. Thread A's execution is interrupted.
Thread B calls Modify(int) for the same id. The write lock is available and obtained. Sometime between obtaining the write lock and releasing it, Thread B is interrupted.
Thread A restarts and serialization continues. Thread B has a write-lock on the same SynchronizedObject, and is in the middle of some critical section, but Thread A is reading the state of the SynchronizedObject and thus returns a potentially invalid object to the caller of Read(int).
I see two options:
Maintain a custom XmlObjectSerializer that grabs the read-lock before calling the base.WriteObject(Stream, object) method, and releases it after. I do not like this option because sub-classing and overriding a framework serialization function to perform a certain action if the object to be serialized matches a certain type smells to me.
Create a deep-copy of a SynchronizedObject in the Read method while the read-lock is held, release the lock, and return the deep copy. I do not like this option because there will be many sub-classes of SynchronizedObject that I would have to implement and maintain correct deep-copiers for and deep-copies could be expensive.
What other options do I have? How should I implement the thread-safe Read method?
I have provided a dummy Service below for more explicit references:
public class Service : IService
{
    IDictionary<int, SynchronizedObject> collection = new Dictionary<int, SynchronizedObject>();
    ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

    public SynchronizedObject Read(int id)
    {
        rwLock.EnterReadLock();
        try
        {
            SynchronizedObject result = collection[id];
            result.rwLock.EnterReadLock();
            try
            {
                return result;
            }
            finally
            {
                result.rwLock.ExitReadLock();
            }
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void ModifyObject(int id)
    {
        rwLock.EnterReadLock();
        try
        {
            SynchronizedObject obj = collection[id];
            obj.rwLock.EnterWriteLock();
            try
            {
                // modify obj
            }
            finally
            {
                obj.rwLock.ExitWriteLock();
            }
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    public void ModifyCollection(int id)
    {
        rwLock.EnterWriteLock();
        try
        {
            // modify collection
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}

public class SynchronizedObject
{
    public ReaderWriterLockSlim rwLock { get; private set; }

    public SynchronizedObject()
    {
        rwLock = new ReaderWriterLockSlim();
    }
}
New answer
Based on your new information and clearer scenario, I believe you want to use something similar to functional programming's immutability feature. Instead of serializing the object that could be changed, make a copy that no other thread could possibly access, then serialize that.
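A sketch of that approach for the Read method, cloning via a serializer round-trip inside the read locks so no hand-written deep-copiers are needed (this assumes each SynchronizedObject subclass is data-contract serializable; System.IO and System.Runtime.Serialization are required, and the clone's lock is never used since the copy is private to this request):

public SynchronizedObject Read(int id)
{
    rwLock.EnterReadLock();
    try
    {
        SynchronizedObject original = collection[id];
        original.rwLock.EnterReadLock();
        try
        {
            // Snapshot while the read lock is held; WCF then serializes
            // the private copy, which no other thread can reach or modify.
            var serializer = new DataContractSerializer(original.GetType());
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, original);
                stream.Position = 0;
                return (SynchronizedObject)serializer.ReadObject(stream);
            }
        }
        finally
        {
            original.rwLock.ExitReadLock();
        }
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}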
Previous (not valuable) answer
From http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.enterwritelock.aspx:
If other threads have entered the lock in read mode, a thread that calls the EnterWriteLock method blocks until those threads have exited read mode. When there are threads waiting to enter write mode, additional threads that try to enter read mode or upgradeable mode block until all the threads waiting to enter write mode have either timed out or entered write mode and then exited from it.
So, all you need to do is call EnterWriteLock and ExitWriteLock inside ModifyObject(). Your attempt to make sure you have both a read and a write lock is actually stopping the code from working.
I have two worker threads. I have locked both with the same lock, but thread B is getting executed before thread A, so an exception occurred. I locked both using the same lock object. Thread B is using a delegate function. How can I solve the issue?
Detailed Information:
I have a class called StateSimulation.
Inside that there are two functions called
a) OnSimulationCollisionReset
b) OnSimulationProgressEvent
Implementation is like this:
private void OnSimulationCollisionReset()
{
    Thread XmlReset = new Thread(XmlResetFn);
    XmlReset.Start();
}

private void OnSimulationProgressEvent()
{
    DataStoreSingleTon.Instance.IsResetCompleted = true;
    Thread ThrdSimulnProgress = new Thread(SimulnProgress);
    ThrdSimulnProgress.Start();
}
where SimulnProgress() and XmlResetFn() are as follows:
private void SimulnProgress()
{
    // uses a delegate
    UIControlHandler.Instance.ShowSimulationProgress();
}

private void XmlResetFn()
{
    DataStoreSingleTon.Instance.GetFPBConfigurationInstance().ResetXmlAfterCollision();
}
OnSimulationProgressEvent() uses a delegate function.
Both showSimulationProgress and ResetXML...() use the same resource, FPBArrayList.
My requirement is that SimulationProgressEvent() should run only after Reset..(). In resetXML..() I clear the FPBList.
In SimulationProgress() I access FPBList[i], where i runs from 0 to size.
I have locked both functions using the same lock object. I expected reset() to complete first, but after entering reset, and before the reset completed, showProgress() started and an exception occurred.
How to solve my issue?
This is how I locked the functions:
public System.Object lockThis = new System.Object();

private void SimulnProgress()
{
    lock (lockThis)
    {
        UIControlHandler.Instance.ShowSimulationProgress();
    }
}

private void XmlResetFn()
{
    lock (lockThis)
    {
        DataStoreSingleTon.Instance.GetFPBConfigurationInstance().ResetXmlAfterCollision();
    }
}
Please give a solution.
Regards
Nidhin KR
It's not a good idea to write multithreaded code that assumes or requires that execution on different threads occurs in a particular order. The whole point of multithreading is to allow things to be executed independently of each other. Independently means no particular order is expressed or implied. CPU time might not be distributed evenly between the two threads, for example, particularly if one thread is waiting for an external signaling event and the other thread is in a compute loop.
For your particular code, it seems very odd that IsResetCompleted = true; is set in the OnSimulationProgressEvent handler. The completion state of the Reset activity should be set by the Reset activity, not by some other event executing in another thread assuming "If we're here, the work in the other thread must be finished."
You should review your design and identify your assumptions and dependencies between threads. If thread B must not proceed until after thread A has completed something, you should first reexamine why you're putting this work in different threads, and then perhaps use a synchronization object (such as an AutoResetEvent) to coordinate between the threads.
The key point here is if you take a sequential task and split it into multiple threads, but the threads use locks or synch objects to serialize their execution, then there is no benefit to using multiple threads. The operation is still sequential.
Locks are intended to prevent several threads from entering a given section of code simultaneously. They are not intended to synchronize the threads in any other way, such as making them execute code in some specific order.
To enforce the execution order you need to implement some signalling between your threads.
Have a look at Synchronization Primitives; specifically, Auto/ManualResetEvent is probably what you want.
I am not sure if I understand the question entirely, but if your requirement is simply that you want to prevent the body of SimulnProgress from executing before XmlResetFn has executed at least once, you can do:
public readonly object lockThis = new object();
private readonly ManualResetEvent resetHandle = new ManualResetEvent(false);

private void SimulnProgress()
{
    resetHandle.WaitOne();
    lock (lockThis)
    {
        UIControlHandler.Instance.ShowSimulationProgress();
    }
}

private void XmlResetFn()
{
    lock (lockThis)
    {
        DataStoreSingleTon.Instance.GetFPBConfigurationInstance().ResetXmlAfterCollision();
    }
    resetHandle.Set();
}