How to manage a mutex in an asynchronous method - C#

I have ported my old HttpHandler (.ashx) TwitterFeed code to a WebAPI application. The core of the code uses the excellent Linq2Twitter package (https://linqtotwitter.codeplex.com/). Part of the port involved upgrading this component from version 2 to version 3, which now provides a number of asynchronous method calls - which are new to me. Here is the basic controller:
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
)
select tweet)
.ToListAsync();
return tweets;
}
This works fine, but previously I had cached the results to avoid 'over-calling' the Twitter API. It is here that I have run into a problem (more to do with my lack of understanding of asynchronous programming than anything else, I suspect).
In overview, what I want to do is first check the cache; if the data doesn't exist, rehydrate the cache and return the data to the caller (the web page). Here is my attempt at the code:
public class TwitterController : ApiController {
private const string CacheKey = "TwitterFeed";
public async Task<IEnumerable<Status>>
GetTweets(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
var tweets = await GetTweetData(context, count, includeRetweets, excludeReplies);
return tweets;
}
private async Task<IEnumerable<Status>>
GetTweetData(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Mutex mutex = null;
bool iOwnMutex = false;
IEnumerable<Status> data = (IEnumerable<Status>)cache[CacheKey];
// Start check to see if available on cache
if (data == null)
{
try
{
// Lock base on resource key
mutex = new Mutex(true, CacheKey);
// Wait until it is safe to enter (someone else might already be
// doing this), but also add 30 seconds max.
iOwnMutex = mutex.WaitOne(30000);
// Now let's see if someone else has added it...
data = (IEnumerable<Status>)cache[CacheKey];
// They did, so send it...
if (data != null)
{
return data;
}
if (iOwnMutex)
{
// Still not there, so now is the time to look for it!
data = await CallTwitterApi(count, includeRetweets, excludeReplies);
cache.Remove(CacheKey);
cache.Add(CacheKey, data, null, GetTwitterExpiryDate(),
TimeSpan.Zero, CacheItemPriority.Normal, null);
}
}
finally
{
// Release the Mutex.
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
}
}
return data;
}
private DateTime GetTwitterExpiryDate()
{
string szExpiry = ConfigurationManager.AppSettings["twitterCacheExpiry"];
int expiry = Int32.Parse(szExpiry);
return DateTime.Now.AddMinutes(expiry);
}
private async Task<IEnumerable<Status>>
CallTwitterApi(int count, bool includeRetweets, bool excludeReplies)
{
var auth = new SingleUserAuthorizer
{
CredentialStore = new SingleUserInMemoryCredentialStore
{
ConsumerKey = ConfigurationManager.AppSettings["twitterConsumerKey"],
ConsumerSecret = ConfigurationManager.AppSettings["twitterConsumerKeySecret"],
AccessToken = ConfigurationManager.AppSettings["twitterAccessToken"],
AccessTokenSecret = ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
}
};
var ctx = new TwitterContext(auth);
var tweets =
await
(from tweet in ctx.Status
where (
(tweet.Type == StatusType.Home)
&& (tweet.ExcludeReplies == excludeReplies)
&& (tweet.IncludeMyRetweet == includeRetweets)
&& (tweet.Count == count)
&& (tweet.RetweetCount < 1)
)
select tweet)
.ToListAsync();
return tweets;
}
}
The problem occurs in the finally code block where the Mutex is released (though I have concerns about the overall pattern and approach of the GetTweetData() method):
if ((mutex != null) && (iOwnMutex))
{
// The following line throws the error:
// Object synchronization method was called from an
// unsynchronized block of code.
mutex.ReleaseMutex();
}
If I comment out the line, the code works correctly, but (I assume) I should release the Mutex having created it. From what I have found out, this problem is related to the thread changing between creating and releasing the mutex.
Because of my lack of general knowledge of asynchronous coding, I am not sure a) whether the pattern I'm using is viable and b) if it is, how to address the problem.
Any advice would be much appreciated.

Using a mutex like that isn't going to work. For one thing, a Mutex is thread-affine, so it can't be used with async code.
Other problems I noticed:
Cache is thread-safe, so it shouldn't need a mutex (or any other protection) anyway.
Asynchronous methods should follow the Task-based Asynchronous Pattern.
There is one major tip regarding caching: when you just have an in-memory cache, then cache the task rather than the resulting data. On a side note, I have to wonder whether HttpContext.Cache is the best cache to use, but I'll leave it as-is since your question is more about how asynchronous code changes caching patterns.
So, I'd recommend something like this:
private const string CacheKey = "TwitterFeed";
public Task<IEnumerable<Status>> GetTweetsAsync(int count, bool includeRetweets, bool excludeReplies)
{
var context = System.Web.HttpContext.Current;
return GetTweetDataAsync(context, count, includeRetweets, excludeReplies);
}
private Task<IEnumerable<Status>> GetTweetDataAsync(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
var cache = context.Cache;
Task<IEnumerable<Status>> data = cache[CacheKey] as Task<IEnumerable<Status>>;
if (data != null)
return data;
data = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
cache.Insert(CacheKey, data, null, GetTwitterExpiryDate(), TimeSpan.Zero);
return data;
}
private async Task<IEnumerable<Status>> CallTwitterApiAsync(int count, bool includeRetweets, bool excludeReplies)
{
...
}
There's a small possibility that if two different requests (from two different sessions) request the same Twitter feed at exactly the same time, the feed will be requested twice. But I wouldn't lose sleep over it.
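If that small race ever did matter, an async-compatible lock such as SemaphoreSlim could close it (a Mutex cannot, for the thread-affinity reason above). The following is only a sketch under that assumption; the semaphore field and method name are mine, and it keeps the cache-the-task idea from the answer:
private static readonly SemaphoreSlim CacheLock = new SemaphoreSlim(1, 1);
private async Task<IEnumerable<Status>> GetTweetDataLockedAsync(HttpContext context, int count, bool includeRetweets, bool excludeReplies)
{
    var cache = context.Cache;
    // Fast path: no locking when the task is already cached.
    var cached = cache[CacheKey] as Task<IEnumerable<Status>>;
    if (cached != null)
        return await cached;
    await CacheLock.WaitAsync(); // async-friendly; unlike Mutex there is no thread affinity
    try
    {
        // Re-check inside the lock in case another request populated the cache.
        cached = cache[CacheKey] as Task<IEnumerable<Status>>;
        if (cached != null)
            return await cached;
        var data = CallTwitterApiAsync(count, includeRetweets, excludeReplies);
        cache.Insert(CacheKey, data, null, GetTwitterExpiryDate(), TimeSpan.Zero);
        return await data;
    }
    finally
    {
        CacheLock.Release(); // safe to call from any thread, so it works across awaits
    }
}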

Related

Is it possible to avoid caching when calling the extension method GetOrCreateAsync on an IMemoryCache?

I'm using an IMemoryCache to cache a token retrieved from an Identity server.
In the past I've used the GetOrCreateAsync extension method available in the Microsoft.Extensions.Caching.Abstractions library.
It is very helpful because I can define the function and the expiration date at the same time.
However, with a token, I won't know the "expires in x seconds" value until the request has finished.
I want to handle the case where that value is missing by not caching the token at all.
I have tried the following:
var token = await this.memoryCache.GetOrCreateAsync<string>("SomeKey", async cacheEntry =>
{
var jwt = await GetTokenFromServer();
var tokenHasValidExpireValue = int.TryParse(jwt.ExpiresIn, out int tokenExpirationSeconds);
if (tokenHasValidExpireValue)
{
cacheEntry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(tokenExpirationSeconds);
}
else // Do not cache value. Just return it.
{
cacheEntry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(0); //Exception thrown. Value needs to be positive.
}
return jwt.token;
});
As you can see, an exception is thrown when I try to set an expiration of no time TimeSpan.FromSeconds(0).
Is there a way around this besides calling the Get and Set methods separately?
I would like to use the GetOrCreateAsync method if possible.
You can't actually accomplish this with the current extension because it will always create an entry BEFORE calling the factory method. That said, you can encapsulate the behavior in your own extension in a manner which feels very similar to GetOrCreateAsync.
public static class CustomMemoryCacheExtensions
{
public static async Task<TItem> GetOrCreateIfValidTimestampAsync<TItem>(
this IMemoryCache cache, object key, Func<Task<(int, TItem)>> factory)
{
if (!cache.TryGetValue(key, out object result))
{
(int tokenExpirationSeconds, TItem factoryResult) =
await factory().ConfigureAwait(false);
if (tokenExpirationSeconds <= 0)
{
// if the factory method did not return a positive timestamp,
// return the data without caching.
return factoryResult;
}
// since we have a valid timestamp:
// 1. create a cache entry
// 2. Set the result
// 3. Set the timestamp
using ICacheEntry entry = cache.CreateEntry(key);
entry.Value = factoryResult;
entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(tokenExpirationSeconds);
result = factoryResult;
}
return (TItem)result;
}
}
You can then call your extension method in a very similar manner:
var memoryCache = new MemoryCache(new MemoryCacheOptions());
var token = await memoryCache.GetOrCreateIfValidTimestampAsync<string>("SomeKey", async () =>
{
var jwt = await GetTokenFromServer();
var tokenHasValidExpireValue = int.TryParse(jwt.ExpiresIn, out int tokenExpirationSeconds);
return (tokenExpirationSeconds, jwt.token);
});
I had the same need and simply set the expiration time to now; I assume you can also set it to a past time, just to be sure:
// Don't wanna cache if this is the result
if (key == null || key.Expiration < DateTime.UtcNow)
{
entry.AbsoluteExpiration = DateTimeOffset.Now;
return null;
}
Maybe something along these lines:
await _memoryCache.GetOrCreateAsync("key",
async entry =>
{
var value = // get your value here;
entry.AbsoluteExpiration = value != null
? DateTimeOffset.UtcNow.AddMinutes(5)
: DateTimeOffset.UtcNow; // don't cache nulls
return value;
});

Why is my .net core API cancelling requests?

I have an async method that is called in a loop:
private Task<HttpResponseMessage> GetResponseMessage(Region region, DateTime startDate, DateTime endDate)
{
var longLatString = $"q={region.LongLat.Lat},{region.LongLat.Long}";
var startDateString = $"{startDateQueryParam}={ConvertDateTimeToApixuQueryString(startDate)}";
var endDateString = $"{endDateQueryParam}={ConvertDateTimeToApixuQueryString(endDate)}";
var url = $"http://api?key={Config.Key}&{longLatString}&{startDateString}&{endDateString}";
return Client.GetAsync(url);
}
I then take the response and save it to my EF Core database; however, in some instances I get this exception message: "The operation was canceled".
I really don't understand that. Is this a TCP handshake issue?
Edit:
For context, I am making many of these calls and passing each response to the method that writes to the DB (which is also unbelievably slow):
private async Task<int> WriteResult(Response apiResponse, Region region)
{
// DbContext is not thread safe, so we create a new one for each insert:
// a .NET Core app can insert data at the same time from different users,
// so each operation must use its own context instance.
using (var context = new DalContext(ContextOptions))
{
var batch = new List<HistoricalWeather>();
foreach (var forecast in apiResponse.Forecast.Forecastday)
{
// avoid inserting duplicates
var existingRecord = context.HistoricalWeather
.FirstOrDefault(x => x.RegionId == region.Id &&
IsOnSameDate(x.Date.UtcDateTime, forecast.Date));
if (existingRecord != null)
{
continue;
}
var newHistoricalWeather = new HistoricalWeather
{
RegionId = region.Id,
CelsiusMin = forecast.Day.Mintemp_c,
CelsiusMax = forecast.Day.Maxtemp_c,
CelsiusAverage = forecast.Day.Avgtemp_c,
MaxWindMph = forecast.Day.Maxwind_mph,
PrecipitationMillimeters = forecast.Day.Totalprecip_mm,
AverageHumidity = forecast.Day.Avghumidity,
AverageVisibilityMph = forecast.Day.Avgvis_miles,
UvIndex = forecast.Day.Uv,
Date = new DateTimeOffset(forecast.Date),
Condition = forecast.Day.Condition.Text
};
batch.Add(newHistoricalWeather);
}
context.HistoricalWeather.AddRange(batch);
var inserts = await context.SaveChangesAsync();
return inserts;
}
}
Edit: I am making 150,000 calls. I know this is questionable, since it all goes into memory (I guess) before even doing a save, but this is where I got to in trying to make this run faster... only I guess my actual writing code is blocking :/
var dbInserts = await Task.WhenAll(
getTasks // the list of all api get requests
.Select(async x => {
// parsed can be null if get failed
var parsed = await ParseApixuResponse(x.Item1); // readcontentasync and just return the deserialized json
return new Tuple<ApiResult, Region>(parsed, x.Item2);
})
.Select(async x => {
var finishedGet = await x;
if(finishedGet.Item1 == null)
{
return 0;
}
return await WriteResult(finishedGet.Item1, finishedGet.Item2);
})
);
.NET Core has a DefaultConnectionLimit setting, as answered in the comments.
This limits outgoing connections to specific domains, to ensure all ports are not taken, etc.
I did my parallel work incorrectly, causing it to go over the limit (which everything I read says should not be 2 on .NET Core, but it was), and that caused connections to close before responses were received.
I made the limit greater, did the parallel work correctly, and then lowered it again.
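For illustration only (this is a hedged sketch, not the poster's actual fix; the limit of 20 and the helper names are my own assumptions): raise the per-server connection limit on the handler and throttle the number of concurrent requests with a SemaphoreSlim, so responses are received before connections are recycled.
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledDownloader
{
    // On .NET Core, HttpClientHandler.MaxConnectionsPerServer plays the role
    // of the old ServicePointManager.DefaultConnectionLimit.
    private static readonly HttpClient Client = new HttpClient(
        new HttpClientHandler { MaxConnectionsPerServer = 20 });

    public static async Task<HttpResponseMessage[]> GetAllAsync(IEnumerable<string> urls, int maxConcurrency = 20)
    {
        using var gate = new SemaphoreSlim(maxConcurrency);
        var tasks = urls.Select(async url =>
        {
            await gate.WaitAsync();
            try
            {
                // Each request holds one slot until its response has arrived.
                return await Client.GetAsync(url);
            }
            finally
            {
                gate.Release();
            }
        });
        return await Task.WhenAll(tasks);
    }
}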

Hangfire get last execution time

I'm using hangfire 1.5.3. In my recurring job I want to call a service that uses the time since the last run. Unfortunately the LastExecution is set to the current time, because the job data was updated before executing the job.
Job
public abstract class RecurringJobBase
{
protected RecurringJobDto GetJob(string jobId)
{
using (var connection = JobStorage.Current.GetConnection())
{
return connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobId);
}
}
protected DateTime GetLastRun(string jobId)
{
var job = GetJob(jobId);
if (job != null && job.LastExecution.HasValue)
{
return job.LastExecution.Value.ToLocalTime();
}
return DateTime.Today;
}
}
public class NotifyQueryFilterSubscribersJob : RecurringJobBase
{
public const string JobId = "NotifyQueryFilterSubscribersJob";
private readonly IEntityFilterChangeNotificationService _notificationService;
public NotifyQueryFilterSubscribersJob(IEntityFilterChangeNotificationService notificationService)
{
_notificationService = notificationService;
}
public void Run()
{
var lastRun = GetLastRun(JobId);
_notificationService.CheckChangesAndSendNotifications(DateTime.Now - lastRun);
}
}
Register
RecurringJob.AddOrUpdate<NotifyQueryFilterSubscribersJob>(NotifyQueryFilterSubscribersJob.JobId, job => job.Run(), Cron.Minutely, TimeZoneInfo.Local);
I know, that it is configured as minutely, so I could calculate the time roughly. But I'd like to have a configuration independent implementation. So my Question is: How can I implement RecurringJobBase.GetLastRun to return the time of the previous run?
To address my comment above, where you might have more than one type of recurring job running but want to check previous states, you can check that the previous job info actually relates to this type of job by the following (although this feels a bit hacky/convoluted).
If you're passing the PerformContext into the job method, then you can use this:
var jobName = performContext.BackgroundJob.Job.ToString();
var currentJobId = int.Parse(performContext.BackgroundJob.Id);
JobData jobFoundInfo = null;
using (var connection = JobStorage.Current.GetConnection()) {
var decrementId = currentJobId;
while (decrementId > currentJobId - 50 && decrementId > 1) { // try up to 50 jobs previously
decrementId--;
var jobInfo = connection.GetJobData(decrementId.ToString());
if (jobInfo?.Job != null && jobInfo.Job.ToString().Equals(jobName)) { // **THIS IS THE CHECK**
jobFoundInfo = jobInfo;
break;
}
}
if (jobFoundInfo == null) {
throw new Exception($"Could not find the previous run for job with name {jobName}");
}
return jobFoundInfo;
}
You could take advantage of the fact you already stated - "Unfortunately the LastExecution is set to the current time, because the job data was updated before executing the job".
The job includes the "LastJobId" property, which seems to be an incremented Id. Hence, you should be able to get the "real" previous job by decrementing LastJobId and querying the job data for that Id.
var currentJob = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == CheckForExpiredPasswordsId);
if (currentJob == null)
{
return null; // Or whatever suits you
}
var previousJob = connection.GetJobData((Convert.ToInt32(currentJob.LastJobId) - 1).ToString());
return previousJob.CreatedAt;
Note that this is the time of creation, not execution. But it might be accurate enough for you. Bear in mind the edge case when it is your first run, hence there will be no previous job.
After digging around, I came up with the following solution.
var lastSucceded = JobStorage.Current.GetMonitoringApi()
    .SucceededJobs(0, 1000)
    .OrderByDescending(j => j.Value.SucceededAt)
    .FirstOrDefault(j => j.Value.Job.Method.Name == "MethodName"
        && j.Value.Job.Type.FullName == "NameSpace.To.Class.Containing.The.Method")
    .Value;
var lastExec = lastSucceded.SucceededAt?.AddMilliseconds(Convert.ToDouble(-lastSucceded.TotalDuration));
It's not perfect, but I think it's a little cleaner than the other solutions.
Hopefully they will implement an official way soon.
The answer by @Marius Steinbach is often good enough, but if you have thousands of job executions (my case), loading all of them from the DB doesn't seem that great. So I finally decided to write a simple SQL query and use it directly (this is for PostgreSQL storage, though changing it to SQL Server should be straightforward):
private async Task<DateTime?> GetLastSuccessfulExecutionTime(string jobType)
{
await using var conn = new NpgsqlConnection(_connectionString);
if (conn.State == ConnectionState.Closed)
conn.Open();
await using var cmd = new NpgsqlCommand(@"
SELECT s.data FROM hangfire.job j
LEFT JOIN hangfire.state s ON j.stateid = s.id
WHERE j.invocationdata LIKE $1 AND j.statename = $2
ORDER BY s.createdat DESC
LIMIT 1", conn)
{
Parameters =
{
new() { Value = $"%{jobType}%" } ,
new() { Value = SucceededState.StateName }
}
};
var result = await cmd.ExecuteScalarAsync();
if (result is not string data)
return null;
var stateData = JsonSerializer.Deserialize<Dictionary<string, string>>(data);
return JobHelper.DeserializeNullableDateTime(stateData?.GetValueOrDefault("SucceededAt"));
}
Use this method, which returns the last execution time and the next execution time of a job:
public static (DateTime?, DateTime?) GetExecutionDateTimes(string jobName)
{
DateTime? lastExecutionDateTime = null;
DateTime? nextExecutionDateTime = null;
using (var connection = JobStorage.Current.GetConnection())
{
var job = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobName);
if (job != null && job.LastExecution.HasValue)
lastExecutionDateTime = job.LastExecution;
if (job != null && job.NextExecution.HasValue)
nextExecutionDateTime = job.NextExecution;
}
return (lastExecutionDateTime, nextExecutionDateTime);
}
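A usage sketch (assuming the call is made from the class where the method is defined, and using the recurring job id registered earlier in this thread):
var (lastExecution, nextExecution) = GetExecutionDateTimes("NotifyQueryFilterSubscribersJob");
Console.WriteLine($"Last run: {lastExecution}, next run: {nextExecution}");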

EF and MVC - approach to work together

I have used the following approach for a long time (approx. 5 years):
Create one big class that initializes XXXEntities and has a method for each database action. Example:
public class DBRepository
{
{
private MyEntities _dbContext;
public DBRepository()
{
_dbContext = new MyEntities();
}
public NewsItem NewsItem(int ID)
{
var q = from i in _dbContext.News where i.ID == ID select new NewsItem() { ID = i.ID, FullText = i.FullText, Time = i.Time, Topic = i.Topic };
return q.FirstOrDefault();
}
public List<Screenshot> LastPublicScreenshots()
{
var q = from i in _dbContext.Screenshots where i.isPublic == true && i.ScreenshotStatus.Status == ScreenshotStatusKeys.LIVE orderby i.dateTimeServer descending select i;
return q.Take(5).ToList();
}
public void SetPublicScreenshot(string filename, bool val)
{
var screenshot = Get<Screenshot>(p => p.filename == filename);
if (screenshot != null)
{
screenshot.isPublic = val;
_dbContext.SaveChanges();
}
}
public void SomeMethod()
{
SomeEntity1 s1 = new SomeEntity1() { field1="fff", field2="aaa" };
_dbContext.SomeEntity1.Add(s1);
SomeEntity2 s2 = new SomeEntity2() { SE1 = s1 };
_dbContext.SomeEntity2.Add(s2);
_dbContext.SaveChanges();
}
}
Some external code creates a DBRepository object and calls its methods.
This worked fine. But now async operations have come in. So, if I use code like this:
public async void AddStatSimplePageAsync(string IPAddress, string login, string txt)
{
DateTime dateAdded2MinsAgo = DateTime.Now.AddMinutes(-2);
if ((from i in _dbContext.StatSimplePages where i.page == txt && i.dateAdded > dateAdded2MinsAgo select i).Count() == 0)
{
StatSimplePage item = new StatSimplePage() { IPAddress = IPAddress, login = login, page = txt, dateAdded = DateTime.Now };
_dbContext.StatSimplePages.Add(item);
await _dbContext.SaveChangesAsync();
}
}
there can be a situation where the next code executes before SaveChangesAsync has completed, and one more entity is added to _dbContext that should not be saved before some other actions. For example, some code:
DBRepository _rep = new DBRepository();
_rep.AddStatSimplePageAsync("A", "b", "c");
_rep.SomeMethod();
I worry that SaveChangesAsync will complete after the line
_dbContext.SomeEntity1.Add(s1);
but before
_dbContext.SomeEntity2.Add(s2);
(i.e. these two actions should be an atomic operation).
Am I right? Is my approach wrong now? Which approach should be used?
PS. As I understand it, the sequence will be:
1. AddStatSimplePageAsync is called.
2. await _dbContext.SaveChangesAsync(); starts executing inside AddStatSimplePageAsync.
3. SomeMethod() starts running while _dbContext.SaveChangesAsync() from AddStatSimplePageAsync is still executing on another (child) thread.
4. _dbContext.SaveChangesAsync() completes on the child thread while the main thread is executing something in SomeMethod().
OK, this time I think I've got your problem.
First, it's odd that you have two separate calls to the SaveChanges method. Usually you should try to have a single one at the end of all your operations and then dispose the context.
That said, yes, your concerns are justified, but some clarifications are needed here.
When you encounter async or await, do not think about threads, but about tasks; they are two different concepts.
Have a read of this great article. There is an image that will practically explain everything.
To put it briefly: if you do not await an async method, you run the risk that a subsequent operation could "harm" the execution of the first one. To solve it, simply await it.
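As an illustration of that advice (my own sketch, not code from the answer above): make the repository method return a Task instead of async void, so the caller can actually await the save before starting the next operation.
// Sketch only: the same method as in the question, inside DBRepository,
// but returning Task so that callers can await it.
public async Task AddStatSimplePageAsync(string ipAddress, string login, string txt)
{
    DateTime dateAdded2MinsAgo = DateTime.Now.AddMinutes(-2);
    bool alreadyLogged = _dbContext.StatSimplePages
        .Any(i => i.page == txt && i.dateAdded > dateAdded2MinsAgo);
    if (!alreadyLogged)
    {
        var item = new StatSimplePage { IPAddress = ipAddress, login = login, page = txt, dateAdded = DateTime.Now };
        _dbContext.StatSimplePages.Add(item);
        await _dbContext.SaveChangesAsync();
    }
}
// Caller: awaiting makes the two operations sequential, so SomeMethod()
// cannot interleave with a still-pending SaveChangesAsync().
var rep = new DBRepository();
await rep.AddStatSimplePageAsync("A", "b", "c");
rep.SomeMethod();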

What is the best way to lock cache in asp.net?

I know that in certain circumstances, such as long-running processes, it is important to lock the ASP.NET cache in order to prevent subsequent requests by another user for that resource from executing the long process again instead of hitting the cache.
What is the best way in C# to implement cache locking in ASP.NET?
Here's the basic pattern:
Check the cache for the value, return it if it's available
If the value is not in the cache, then implement a lock
Inside the lock, check the cache again, you might have been blocked
Perform the value look up and cache it
Release the lock
In code, it looks like this:
private static object ThisLock = new object();
public string GetFoo()
{
// try to pull from cache here
lock (ThisLock)
{
// cache was empty before we got the lock, check again inside the lock
// cache is still empty, so retrieve the value here
// store the value in the cache here
}
// return the cached value here
}
For completeness, a full example would look something like this:
private static object ThisLock = new object();
...
object dataObject = Cache["globalData"];
if( dataObject == null )
{
lock( ThisLock )
{
dataObject = Cache["globalData"];
if( dataObject == null )
{
//Get Data from db
dataObject = GlobalObj.GetData();
Cache["globalData"] = dataObject;
}
}
}
return dataObject;
There is no need to lock the whole cache instance; we only need to lock the specific key that you are inserting.
I.e. no need to block access to the female toilet while you use the male toilet :)
The implementation below allows for locking of specific cache-keys using a concurrent dictionary. This way you can run GetOrAdd() for two different keys at the same time - but not for the same key at the same time.
using System;
using System.Collections.Concurrent;
using System.Web.Caching;
public static class CacheExtensions
{
private static ConcurrentDictionary<string, object> keyLocks = new ConcurrentDictionary<string, object>();
/// <summary>
/// Get or Add the item to the cache using the given key. Lazily executes the value factory only if/when needed
/// </summary>
public static T GetOrAdd<T>(this Cache cache, string key, int durationInSeconds, Func<T> factory)
where T : class
{
// Try and get value from the cache
var value = cache.Get(key);
if (value == null)
{
// If not yet cached, lock the key value and add to cache
lock (keyLocks.GetOrAdd(key, new object()))
{
// Try and get from cache again in case it has been added in the meantime
value = cache.Get(key);
if (value == null && (value = factory()) != null)
{
// TODO: Some of these parameters could be added to method signature later if required
cache.Insert(
key: key,
value: value,
dependencies: null,
absoluteExpiration: DateTime.Now.AddSeconds(durationInSeconds),
slidingExpiration: Cache.NoSlidingExpiration,
priority: CacheItemPriority.Default,
onRemoveCallback: null);
}
// Remove temporary key lock
keyLocks.TryRemove(key, out object locker);
}
}
return value as T;
}
}
Just to echo what Pavel said, I believe this is the most thread-safe way of writing it:
private T GetOrAddToCache<T>(string cacheKey, GenericObjectParamsDelegate<T> creator, params object[] creatorArgs) where T : class, new()
{
T returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
lock (this)
{
returnValue = HttpContext.Current.Cache[cacheKey] as T;
if (returnValue == null)
{
returnValue = creator(creatorArgs);
if (returnValue == null)
{
throw new Exception("Attempt to cache a null reference");
}
HttpContext.Current.Cache.Add(
cacheKey,
returnValue,
null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
System.Web.Caching.Cache.NoSlidingExpiration,
CacheItemPriority.Normal,
null);
}
}
}
return returnValue;
}
Craig Shoemaker has made an excellent show on asp.net caching:
http://polymorphicpodcast.com/shows/webperformance/
I have come up with the following extension method:
private static readonly object _lock = new object();
public static TResult GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action, int duration = 300) {
TResult result;
var data = cache[key]; // Can't cast using as operator as TResult may be an int or bool
if (data == null) {
lock (_lock) {
data = cache[key];
if (data == null) {
result = action();
if (result == null)
return result;
if (duration > 0)
cache.Insert(key, result, null, DateTime.UtcNow.AddSeconds(duration), TimeSpan.Zero);
} else
result = (TResult)data;
}
} else
result = (TResult)data;
return result;
}
I have used both @John Owen's and @user378380's answers. My solution allows you to store int and bool values in the cache as well.
Please correct me if there are any errors or if it can be written a little better.
I saw one pattern recently called Correct State Bag Access Pattern, which seemed to touch on this.
I modified it a bit to be thread-safe.
http://weblogs.asp.net/craigshoemaker/archive/2008/08/28/asp-net-caching-and-performance.aspx
private static object _listLock = new object();
public List List() {
string cacheKey = "customers";
List myList = Cache[cacheKey] as List;
if(myList == null) {
lock (_listLock) {
myList = Cache[cacheKey] as List;
if (myList == null) {
myList = DAL.ListCustomers();
Cache.Insert(cacheKey, myList, null, SiteConfig.CacheDuration, TimeSpan.Zero);
}
}
}
return myList;
}
This article from CodeGuru explains various cache locking scenarios as well as some best practices for ASP.NET cache locking:
Synchronizing Cache Access in ASP.NET
I've written a library that solves that particular issue: Rocks.Caching.
I've also blogged about this problem in detail and explained why it's important here.
I modified @user378380's code for more flexibility. Instead of returning TResult, it now returns object so that different types can be accepted. I also added some parameters for flexibility. The whole idea belongs to @user378380.
private static readonly object _lock = new object();
// If getOnly is true, only get the existing cache value without updating it. If the cache value is null, set it first by running the action method; so this may return either the old value or the action result.
// If getOnly is false, update the old value with the action result. If the cache value is null, set it first by running the action method; so this always returns the action result.
// With the oldValueReturned boolean the caller can cast the returned object (if it is not null) to the appropriate type.
public static object GetOrAdd<TResult>(this Cache cache, string key, Func<TResult> action,
DateTime absoluteExpireTime, TimeSpan slidingExpireTime, bool getOnly, out bool oldValueReturned)
{
object result;
var data = cache[key];
if (data == null)
{
lock (_lock)
{
data = cache[key];
if (data == null)
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
else
{
if (getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
}
}
else
{
if(getOnly)
{
oldValueReturned = true;
result = data;
}
else
{
oldValueReturned = false;
result = action();
if (result == null)
{
return result;
}
cache.Insert(key, result, null, absoluteExpireTime, slidingExpireTime);
}
}
return result;
}
The accepted answer (recommending reading outside of the lock) is very bad advice and has been followed since 2008. It could work if the cache used a concurrent dictionary, but that itself has a lock for reads.
Reading outside of the lock means that other threads could be modifying the cache in the middle of the read. This means that the read could be inconsistent.
For example, depending on the implementation of the cache (probably a dictionary whose internals are unknown), the item could be checked and found in the cache at a certain index in the underlying array; then another thread could modify the cache so that the items in the underlying array are no longer in the same order, and the actual read from the cache could then be from a different index/address.
Another scenario is that the read could be from an index that is now outside the underlying array (because items were removed), so you can get exceptions.
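A minimal sketch of what this answer implies, with both the check and the read performed inside the lock (the key, the lock object and LoadFooFromDatabase are hypothetical placeholders of mine):
private static readonly object CacheGate = new object();
public string GetFooSafely()
{
    lock (CacheGate)
    {
        // Both the existence check and the read happen under the same lock,
        // so no other thread can reorganise the cache between them.
        var cached = HttpContext.Current.Cache["foo"] as string;
        if (cached != null)
            return cached;
        var value = LoadFooFromDatabase(); // hypothetical expensive lookup
        HttpContext.Current.Cache.Insert("foo", value);
        return value;
    }
}
The trade-off is that every read now takes the lock, which is exactly the contention the per-key approaches above try to reduce.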
