I have an application that needs to read events from Azure Event Hub using the EventProcessorClient and write them to SQL Server. For the uninitiated, the EventProcessorClient spins up background tasks as events arrive and passes them to its handler, so I have concurrent tasks. I wrote the following code:
public class LogProcessorBackgroundService : BackgroundService
{
// Configuration and object initialization omitted
private readonly ConcurrentDictionary<Guid, KeyValuePair<DateTime, ClientEvent>> _clientEvents = new();
private readonly ConcurrentDictionary<string, int> _eventsPerPartition = new();
private readonly ILogger<LogProcessorBackgroundService> _logger;
private readonly IServiceProvider _serviceProvider;
public LogProcessorBackgroundService(ILogger<LogProcessorBackgroundService> logger,
IServiceProvider serviceProvider)
{
Guard.Against.Null(logger);
Guard.Against.Null(serviceProvider);
_logger = logger;
_serviceProvider = serviceProvider;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
try
{
var storageClient = new BlobContainerClient(StorageConnectionString, BlobContainerName);
var processor = new EventProcessorClient(
storageClient, ConsumerGroup, EventHubConnectionString, EventProcessorClientOptions);
processor.ProcessEventAsync += OnProcessEvent;
processor.ProcessErrorAsync += OnProcessError;
await processor.StartProcessingAsync(stoppingToken);
}
catch (TaskCanceledException e)
{
_logger.LogError(e.Demystify(), "Task was canceled");
}
catch (Exception e)
{
_logger.LogError(e.Demystify(), "An unhandled exception has occurred");
}
}
private Task OnProcessError(ProcessErrorEventArgs errorEventArgs)
{
_logger.LogError(errorEventArgs.Exception, "OnProcess error");
return Task.CompletedTask;
}
private async Task OnProcessEvent(ProcessEventArgs eventArgs)
{
try
{
if (eventArgs.Data?.EventBody is null ||
eventArgs.Data.EventBody.ToString().Contains(Constants.EventType.Failure))
return;
var logAudit = JsonSerializer.Deserialize<LogAuditRoot>(eventArgs.Data.EventBody.ToString(),
SerializerOptions);
var clientEvent = logAudit.ClientEvent;
//Try to retrieve the stored entry for this client's key. If the stored event
//is newer than the current one, skip the current event; otherwise remove the
//old entry so the current event can be inserted below.
if (_clientEvents.TryGetValue(clientEvent.ClientId, out var item))
{
if (item.Key > clientEvent.TimeStamp)
return;
_clientEvents.Remove(clientEvent.ClientId, out _);
}
if(clientEvent.ClientId != Guid.Empty || !string.IsNullOrEmpty(clientEvent.ClientName))
{
_clientEvents.TryAdd(clientEvent.ClientId,
new KeyValuePair<DateTime, ClientEvent>(clientEvent.TimeStamp, clientEvent));
}
var partitionId = eventArgs.Partition.PartitionId;
// Initialize or increment the count for the current partition.
if (!_eventsPerPartition.TryAdd(partitionId, 0))
{
++_eventsPerPartition[partitionId];
}
//Once 100 events have been counted for this partition, checkpoint, upsert the
//collected data into the database, clear the ConcurrentDictionary of client
//events, and reset this partition's counter.
if (_eventsPerPartition[partitionId] >= 100)
{
await eventArgs.UpdateCheckpointAsync().ConfigureAwait(false);
await UpsertDataOnDatabase();
_clientEvents.Clear();
_eventsPerPartition[partitionId] = 0;
}
}
catch (Exception e)
{
_logger.LogError(e, "ProcessEventHandler exception");
}
}
private async Task UpsertDataOnDatabase()
{
using var scope = _serviceProvider.CreateScope();
var repository = scope.ServiceProvider.GetRequiredService<IRepository>();
_logger.LogInformation("Count: {Count}", _clientEvents.Count.ToString());
foreach (var keyValuePair in _clientEvents)
{
var activity = await repository.GetAsync<ClientActivity>(keyValuePair.Key);
if (activity == null)
{
activity = keyValuePair.Value.Value.ToEntity();
repository.Insert(activity);
}
else
{
activity.ActivityId = keyValuePair.Value.Value.ActivityId;
activity.TimeStamp = keyValuePair.Value.Value.TimeStamp;
}
}
await repository.SaveChangesAsync();
}
}
The problem is that at some point I start getting exceptions of the following type when I call SaveChangesAsync():
An exception occurred in the database while saving changes for context type 'TeamSystem.IT.IAM.EventHubLogProcessor.Data.DataContext'.
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details.
---> Microsoft.Data.SqlClient.SqlException (0x80131904): Violation of PRIMARY KEY constraint 'PK_ClientsActivities'. Cannot insert duplicate key in object 'dbo.ClientsActivities'. The duplicate key value is (0664d2f2-c672-4f26-a8a6-4f63c1f053d9).
I think this is because the DbContext is used by multiple concurrent tasks: while one task is trying to do the insert, another task may already have inserted the same key, so the save fails. How can I fix this?
Is there any example guide that could help me figure out how to fix this?
I've never worked on applications of this type, so I really don't know where to start.
Thank you in advance for any replies.
Regards.
You need to ensure that your await repository.SaveChangesAsync(); is not called by one task while another task is still executing it. You can't await inside a lock statement, so your best bet is to use a SemaphoreSlim.
Something along the lines of:
static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1,1);
...
await semaphoreSlim.WaitAsync();
try
{
await repository.SaveChangesAsync();
}
finally
{
semaphoreSlim.Release();
}
You can read this article for more details.
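Applied to the service in the question, a minimal sketch could look like the following (it reuses the field and repository names from the question's code; the semaphore serializes the whole upsert, so two handlers can no longer both decide to insert the same ClientId):
private static readonly SemaphoreSlim _dbLock = new SemaphoreSlim(1, 1);
private async Task UpsertDataOnDatabase()
{
    // Only one handler at a time may run the upsert.
    await _dbLock.WaitAsync();
    try
    {
        using var scope = _serviceProvider.CreateScope();
        var repository = scope.ServiceProvider.GetRequiredService<IRepository>();
        foreach (var keyValuePair in _clientEvents)
        {
            var activity = await repository.GetAsync<ClientActivity>(keyValuePair.Key);
            if (activity == null)
            {
                repository.Insert(keyValuePair.Value.Value.ToEntity());
            }
            else
            {
                activity.ActivityId = keyValuePair.Value.Value.ActivityId;
                activity.TimeStamp = keyValuePair.Value.Value.TimeStamp;
            }
        }
        await repository.SaveChangesAsync();
    }
    finally
    {
        _dbLock.Release();
    }
}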
I'm using an IMemoryCache to cache a token retrieved from an Identity server.
In the past I've used the GetOrCreateAsync extension method available in the Microsoft.Extensions.Caching.Abstractions library.
It is very helpful because I can define the function and the expiration date at the same time.
However, with a token I won't know the 'expires in x seconds' value until the request has finished.
I want to account for the case where that value is missing by not caching the token at all.
I have tried the following:
var token = await this.memoryCache.GetOrCreateAsync<string>("SomeKey", async cacheEntry =>
{
var jwt = await GetTokenFromServer();
var tokenHasValidExpireValue = int.TryParse(jwt.ExpiresIn, out int tokenExpirationSeconds);
if (tokenHasValidExpireValue)
{
cacheEntry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(tokenExpirationSeconds);
}
else // Do not cache value. Just return it.
{
cacheEntry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(0); //Exception thrown. Value needs to be positive.
}
return jwt.token;
});
As you can see, an exception is thrown when I try to set an expiration of zero time, TimeSpan.FromSeconds(0).
Is there a way around this besides calling the Get and Set methods separately?
I would like to use the GetOrCreateAsync method if possible.
You can't actually accomplish this with the current extension because it will always create an entry BEFORE calling the factory method. That said, you can encapsulate the behavior in your own extension in a manner which feels very similar to GetOrCreateAsync.
public static class CustomMemoryCacheExtensions
{
public static async Task<TItem> GetOrCreateIfValidTimestampAsync<TItem>(
this IMemoryCache cache, object key, Func<Task<(int, TItem)>> factory)
{
if (!cache.TryGetValue(key, out object result))
{
(int tokenExpirationSeconds, TItem factoryResult) =
await factory().ConfigureAwait(false);
if (tokenExpirationSeconds <= 0)
{
// if the factory method did not return a positive timestamp,
// return the data without caching.
return factoryResult;
}
// since we have a valid timestamp:
// 1. create a cache entry
// 2. set the factory result as its value
// 3. set the expiration
using ICacheEntry entry = cache.CreateEntry(key);
entry.Value = factoryResult;
entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(tokenExpirationSeconds);
result = factoryResult;
}
return (TItem)result;
}
}
You can then call your extension method in a very similar manner:
var memoryCache = new MemoryCache(new MemoryCacheOptions());
var token = await memoryCache.GetOrCreateIfValidTimestampAsync<string>("SomeKey", async () =>
{
var jwt = await GetTokenFromServer();
var tokenHasValidExpireValue = int.TryParse(jwt.ExpiresIn, out int tokenExpirationSeconds);
return (tokenExpirationSeconds, jwt.token);
});
I had the same need and simply set the expiration time to now; I assume you can set it to a time in the past as well, just to be sure:
// Don't wanna cache if this is the result
if (key == null || key.Expiration < DateTime.UtcNow)
{
entry.AbsoluteExpiration = DateTimeOffset.Now;
return null;
}
Maybe something along these lines:
await _memoryCache.GetOrCreateAsync("key",
async entry =>
{
var value = // get your value here;
entry.AbsoluteExpiration = value != null
? DateTimeOffset.UtcNow.AddMinutes(5)
: DateTimeOffset.UtcNow; // don't cache nulls
return value;
});
I have ported my old HttpHandler (.ashx) TwitterFeed code to a WebAPI application. The core of the code uses the excellent Linq2Twitter package (https://linqtotwitter.codeplex.com/). Part of the port involved upgrading this component from version 2 to version 3, which now provides a number of asynchronous method calls - which are new to me. Here is the basic controller:
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
)
select tweet)
.ToListAsync();
return tweets;
}
This works fine, but previously I had cached the results to avoid over-calling the Twitter API. It is here that I have run into a problem (more to do with my lack of understanding of asynchronous code than anything else, I suspect).
In overview, what I want to do is first check the cache; if the data doesn't exist, rehydrate the cache and then return the data to the caller (a web page). Here is my attempt at the code:
public class TwitterController : ApiController {
private const string CacheKey = "TwitterFeed";
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
var tweets = await GetTweetData(context, count, includeRetweets, excludeReplies);
return tweets;
}
private async Task<IEnumerable<Status>>
GetTweetData(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Mutex mutex = null;
bool iOwnMutex = false;
IEnumerable<Status> data = (IEnumerable<Status>)cache[CacheKey];
// Start check to see if available on cache
if (data == null)
{
try
{
// Lock base on resource key
mutex = new Mutex(true, CacheKey);
// Wait until it is safe to enter (someone else might already be
// doing this), but also add 30 seconds max.
iOwnMutex = mutex.WaitOne(30000);
// Now let's see if some one else has added it...
data = (IEnumerable<Status>)cache[CacheKey];
// They did, so send it...
if (data != null)
{
return data;
}
if (iOwnMutex)
{
// Still not there, so now is the time to look for it!
data = await CallTwitterApi(count, includeRetweets, excludeReplies);
cache.Remove(CacheKey);
cache.Add(CacheKey, data, null, GetTwitterExpiryDate(),
TimeSpan.Zero, CacheItemPriority.Normal, null);
}
}
finally
{
// Release the Mutex.
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
}
}
return data;
}
private DateTime GetTwitterExpiryDate()
{
string szExpiry = ConfigurationManager.AppSettings["twitterCacheExpiry"];
int expiry = Int32.Parse(szExpiry);
return DateTime.Now.AddMinutes(expiry);
}
private async Task<IEnumerable<Status>>
CallTwitterApi(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
&& (tweet.RetweetCount < 1)
)
select tweet)
.ToListAsync();
return tweets;
}
}
The problem occurs in the finally code block where the Mutex is released (though I have concerns about the overall pattern and approach of the GetTweetData() method):
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
If I comment out the line, the code works correctly, but (I assume) I should release the Mutex having created it. From what I have found out, this problem is related to the thread changing between creating and releasing the mutex.
Because of my lack of general knowledge on asynchronous coding, I am not sure a) if the pattern I'm using is viable and b) if it is, how I address the problem.
Any advice would be much appreciated.
Using a mutex like that isn't going to work. For one thing, a Mutex is thread-affine, so it can't be used with async code.
Other problems I noticed:
Cache is threadsafe, so it shouldn't need a mutex (or any other protection) anyway.
Asynchronous methods should follow the Task-based Asynchronous Pattern.
There is one major tip regarding caching: when you just have an in-memory cache, then cache the task rather than the resulting data. On a side note, I have to wonder whether HttpContext.Cache is the best cache to use, but I'll leave it as-is since your question is more about how asynchronous code changes caching patterns.
So, I'd recommend something like this:
private const string CacheKey = "TwitterFeed";
public Task<IEnumerable<Status>> GetTweetsAsync(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
return GetTweetDataAsync(context, count, includeRetweets, excludeReplies);
}
private Task<IEnumerable<Status>> GetTweetDataAsync(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Task<IEnumerable<Status>> data = cache[CacheKey] as Task<IEnumerable<Status>>;
if (data != null)
return data;
data = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
cache.Insert(CacheKey, data, null, GetTwitterExpiryDate(), TimeSpan.Zero);
return data;
}
private async Task<IEnumerable<Status>> CallTwitterApiAsync(int count, bool includeRetweets, bool excludeReplies)
{
...
}
There's a small possibility that if two different requests (from two different sessions) request the same twitter feed at the same exact time, that the feed will be requested twice. But I wouldn't lose sleep over it.
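If you do want to close that window, note that the method only checks the cache and inserts the Task object itself, and no await happens while doing that, so an ordinary lock is enough. A sketch of that variant (my addition, not part of the answer above):
private static readonly object _cacheLock = new object();
private Task<IEnumerable<Status>> GetTweetDataAsync(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
    var cache = context.Cache;
    // The lock only guards checking for and inserting the Task;
    // the Twitter call itself runs inside the Task, outside the lock.
    lock (_cacheLock)
    {
        var data = cache[CacheKey] as Task<IEnumerable<Status>>;
        if (data == null)
        {
            data = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
            // Note: if the call fails, the faulted Task stays cached until it expires.
            cache.Insert(CacheKey, data, null, GetTwitterExpiryDate(), TimeSpan.Zero);
        }
        return data;
    }
}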
I am caching data in an ASP.NET website through the System.Web.Caching.Cache-Class, because retrieving the data is very costly and it changes only once in a while, when our content people change data in the backend.
So I create the data in Application_Start and store it in Cache, with an expiration time of 1 day.
When accessing the data (happens on many pages of the website), I have something like this now in a static CachedData class:
public static List<Kategorie> GetKategorieTitelListe(Cache appCache)
{
// get Data out of Cache
List<Kategorie> katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
// Cache expired, retrieve and store again
if (katList == null)
{
katList = DataTools.BuildKategorienTitelListe();
appCache.Insert(CachedData.NaviDataKey, katList, null, DateTime.Now.AddDays(1d), Cache.NoSlidingExpiration);
}
return katList;
}
The problem I see with this code is that it's not thread-safe.
If two users open two of these pages at the same time and the cache has just run out, there is a risk the data will be retrieved multiple times.
But if I lock the method body, I will run into performance troubles, because only one user at a time can get the data list.
Is there an easy way to prevent this? What's best practice for a case like this?
You are right, your code is not thread safe.
// this must be class level variable!!!
private static readonly object locker = new object();
public static List<Kategorie> GetKategorieTitelListe(Cache appCache)
{
// get Data out of Cache
List<Kategorie> katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
// Cache expired, retrieve and store again
if (katList == null)
{
lock (locker)
{
katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
if (katList == null) // make sure that a waiting thread does not build the data a second time
{
katList = DataTools.BuildKategorienTitelListe();
appCache.Insert(CachedData.NaviDataKey, katList, null, DateTime.Now.AddDays(1d), Cache.NoSlidingExpiration);
}
}
}
return katList;
}
The MSDN documentation states that the ASP.NET Cache class is thread safe, meaning that its contents are freely accessible by any thread in the AppDomain (a read or write will be atomic, for example).
Just keep in mind that as the size of the cache grows, so does the cost of synchronization. You might want to take a look at this post.
By adding a private object to lock on, you should be able to run your method safely so that other threads do not interfere.
private static readonly object myLockObject = new object();
public static List<Kategorie> GetKategorieTitelListe(Cache appCache)
{
// get Data out of Cache
List<Kategorie> katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
lock (myLockObject)
{
// Cache expired, retrieve and store again
if (katList == null)
{
katList = DataTools.BuildKategorienTitelListe();
appCache.Insert(CachedData.NaviDataKey, katList, null, DateTime.Now.AddDays(1d), Cache.NoSlidingExpiration);
}
return katList;
}
}
I don't see any solution other than locking.
private static readonly object _locker = new object ();
public static List<Kategorie> GetKategorieTitelListe(Cache appCache)
{
List<Kategorie> katList;
lock (_locker)
{
// get Data out of Cache
katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
// Cache expired, retrieve and store again
if (katList == null)
{
katList = DataTools.BuildKategorienTitelListe();
appCache.Insert(CachedData.NaviDataKey, katList, null, DateTime.Now.AddDays(1d), Cache.NoSlidingExpiration);
}
}
return katList;
}
Once the data is in the cache, concurrent threads will only wait for the time it takes to get the data out, i.e. this line of code:
katList = appCache[CachedData.NaviDataKey] as List<Kategorie>;
So the performance cost will not be dramatic.
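If even that brief lock during a rebuild bothers you, another option (a sketch of my own, not taken from the answers here) is to cache a Lazy<List<Kategorie>> and rely on the fact that Cache.Add does not replace an existing entry but returns the one already stored, so at most one thread runs the expensive build and no lock statement is needed:
public static List<Kategorie> GetKategorieTitelListeLazy(Cache appCache)
{
    // The factory runs at most once per Lazy instance.
    var lazy = new Lazy<List<Kategorie>>(
        DataTools.BuildKategorienTitelListe,
        System.Threading.LazyThreadSafetyMode.ExecutionAndPublication);
    // Cache.Add returns the entry that is already stored (or null if ours was added),
    // so every caller ends up observing the same Lazy instance.
    var existing = (Lazy<List<Kategorie>>)appCache.Add(
        CachedData.NaviDataKey, lazy, null,
        DateTime.Now.AddDays(1d), Cache.NoSlidingExpiration,
        CacheItemPriority.Normal, null);
    // Caveat: if the build throws, the faulted Lazy stays cached until it expires.
    return (existing ?? lazy).Value;
}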
I know that in certain circumstances, such as long-running processes, it is important to lock the ASP.NET cache in order to prevent subsequent requests by another user for that resource from executing the long-running process again instead of hitting the cache.
What is the best way in C# to implement cache locking in ASP.NET?
Here's the basic pattern:
Check the cache for the value; return it if it's available
If the value is not in the cache, acquire a lock
Inside the lock, check the cache again; you might have been blocked while another thread filled it
Perform the value lookup and cache it
Release the lock
In code, it looks like this:
private static object ThisLock = new object();
public string GetFoo()
{
// try to pull from cache here
lock (ThisLock)
{
// cache was empty before we got the lock, check again inside the lock
// cache is still empty, so retrieve the value here
// store the value in the cache here
}
// return the cached value here
}
For completeness, a full example would look something like this:
private static object ThisLock = new object();
...
object dataObject = Cache["globalData"];
if( dataObject == null )
{
lock( ThisLock )
{
dataObject = Cache["globalData"];
if( dataObject == null )
{
//Get Data from db
dataObject = GlobalObj.GetData();
Cache["globalData"] = dataObject;
}
}
}
return dataObject;
There is no need to lock the whole cache instance; we only need to lock the specific key that is being inserted.
I.e. No need to block access to the female toilet while you use the male toilet :)
The implementation below allows for locking of specific cache-keys using a concurrent dictionary. This way you can run GetOrAdd() for two different keys at the same time - but not for the same key at the same time.
using System;
using System.Collections.Concurrent;
using System.Web.Caching;
public static class CacheExtensions
{
private static ConcurrentDictionary<string, object> keyLocks = new ConcurrentDictionary<string, object>();
/// <summary>
/// Get or Add the item to the cache using the given key. Lazily executes the value factory only if/when needed
/// </summary>
public static T GetOrAdd<T>(this Cache cache, string key, int durationInSeconds, Func<T> factory)
where T : class
{
// Try and get value from the cache
var value = cache.Get(key);
if (value == null)
{
// If not yet cached, lock the key value and add to cache
lock (keyLocks.GetOrAdd(key, new object()))
{
// Try and get from cache again in case it has been added in the meantime
value = cache.Get(key);
if (value == null && (value = factory()) != null)
{
// TODO: Some of these parameters could be added to method signature later if required
cache.Insert(
key: key,
value: value,
dependencies: null,
absoluteExpiration: DateTime.Now.AddSeconds(durationInSeconds),
slidingExpiration: Cache.NoSlidingExpiration,
priority: CacheItemPriority.Default,
onRemoveCallback: null);
}
// Remove temporary key lock
keyLocks.TryRemove(key, out object locker);
}
}
return value as T;
}
}
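Usage could then look something like this (the key, duration and repository call are made up for the example):
// Builds the list only when the key is missing or expired; concurrent callers
// asking for the same key wait on the same per-key lock.
var products = HttpContext.Current.Cache.GetOrAdd(
    "products:all",                    // cache key
    300,                               // cache for five minutes
    () => productRepository.GetAll());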
Just to echo what Pavel said, I believe this is the most thread-safe way of writing it:
private T GetOrAddToCache<T>(string cacheKey, GenericObjectParamsDelegate<T> creator, params object[] creatorArgs) where T : class, new()
{
T returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
lock (this)
{
returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
returnValue = creator(creatorArgs);
if (returnValue == null)
{
throw new Exception("Attempt to cache a null reference");
}
HttpContext.Current.Cache.Add(
cacheKey,
returnValue,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
System.Web.Caching.Cache.NoSlidingExpiration,
CacheItemPriority.Normal,
null);
}
}
}
return returnValue;
}
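For reference, the GenericObjectParamsDelegate<T> type isn't shown in the answer; an assumed definition and a hypothetical call could look like this:
// Assumed shape of the delegate used by GetOrAddToCache above.
public delegate T GenericObjectParamsDelegate<T>(params object[] args);
// Hypothetical usage from inside the same class; CustomerList needs a
// parameterless constructor because of the new() constraint.
var customers = GetOrAddToCache<CustomerList>(
    "customers:all",
    args => LoadCustomers((int)args[0]),   // creator
    42);                                   // creatorArgs forwarded to the creator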
Craig Shoemaker has made an excellent show on ASP.NET caching:
http://polymorphicpodcast.com/shows/webperformance/
I have come up with the following extension method:
private static readonly object _lock = new object();
public static TResult GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action, int duration = 300) {
TResult result;
var data = cache[key]; // Can't cast using as operator as TResult may be an int or bool
if (data == null) {
lock (_lock) {
data = cache[key];
if (data == null) {
result = action();
if (result == null)
return result;
if (duration > 0)
cache.Insert(key, result, null, DateTime.UtcNow.AddSeconds(duration), TimeSpan.Zero);
} else
result = (TResult)data;
}
} else
result = (TResult)data;
return result;
}
I have used both @John Owen's and @user378380's answers. My solution allows you to store int and bool values within the cache as well.
Please correct me if there are any errors or if it can be written a little better.
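A call with a value type could then look like this (the key and loader are made up):
// Works for int because the extension stores the value as object and casts it
// back with (TResult) instead of using the as operator.
int visitorCount = HttpContext.Current.Cache.GetOrAdd(
    "stats:visitorCount",
    () => CountVisitorsFromDb(),   // hypothetical loader returning int
    duration: 600);                // cache for ten minutes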
I saw one pattern recently called Correct State Bag Access Pattern, which seemed to touch on this.
I modified it a bit to be thread-safe.
http://weblogs.asp.net/craigshoemaker/archive/2008/08/28/asp-net-caching-and-performance.aspx
private static object _listLock = new object();
public List List() {
string cacheKey = "customers";
List myList = Cache[cacheKey] as List;
if(myList == null) {
lock (_listLock) {
myList = Cache[cacheKey] as List;
if (myList == null) {
myList = DAL.ListCustomers();
Cache.Insert(cacheKey, myList, null, SiteConfig.CacheDuration, TimeSpan.Zero);
}
}
}
return myList;
}
This article from CodeGuru explains various cache locking scenarios as well as some best practices for ASP.NET cache locking:
Synchronizing Cache Access in ASP.NET
I've written a library that solves that particular issue: Rocks.Caching.
I've also blogged about this problem in detail and explained why it's important here.
I modified @user378380's code for more flexibility. Instead of returning TResult it now returns object, so that different types can be handled in turn. I also added some parameters for flexibility. All the credit for the idea belongs to @user378380.
private static readonly object _lock = new object();
//If getOnly is true, only read the existing cache value without updating it. If the cache value is null, set it first by running the action method. So this can return either the old value or the action's result.
//If getOnly is false, update the old value with the action's result. If the cache value is null, set it first by running the action method. So this always returns the action's result.
//With the oldValueReturned flag the caller can cast the returned object (if it is not null) to the appropriate type.
public static object GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action,
DateTime absoluteExpireTime, TimeSpan slidingExpireTime, bool getOnly, out bool oldValueReturned)
{
object result;
var data = cache[key];
if (data == null)
{
lock (_lock)
{
data = cache[key];
if (data == null)
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
else
{
if (getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
}
}
else
{
if(getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
return result;
}
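A call could look like this (the key, loader and FeatureFlags type are made up; note that the caller has to cast the returned object back):
bool oldValueReturned;
object cached = HttpContext.Current.Cache.GetOrAdd(
    "config:features",                              // hypothetical key
    () => LoadFeatureFlags(),                       // hypothetical loader
    DateTime.UtcNow.AddMinutes(10),                 // absolute expiration
    System.Web.Caching.Cache.NoSlidingExpiration,   // no sliding expiration
    true,                                           // getOnly: keep any existing value
    out oldValueReturned);
// oldValueReturned tells us whether we got a previously cached value or the fresh loader result.
var flags = cached as FeatureFlags;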
The accepted answer (recommending reading outside of the lock) is very bad advice, yet it has been implemented this way since 2008. It could work if the cache used a concurrent dictionary, but even that takes a lock for reads.
Reading outside of the lock means that other threads could be modifying the cache in the middle of the read, so the read could be inconsistent.
For example, depending on the implementation of the cache (probably a dictionary whose internals are unknown), the item could be checked and found in the cache at a certain index in the underlying array; another thread could then modify the cache so that the items in the underlying array are no longer in the same order, and the actual read from the cache could end up at a different index / address.
Another scenario is that the read could be from an index that is now outside of the underlying array (because items were removed), so you could get exceptions.