Refresh app cache on expiry in C#

I have a C# Web API with a query that collates a lot of data, so I use the HttpRuntime cache to cache the result object for 10 minutes. The problem is that when the cache expires, the next caller gets a 12-second load. The application runs on 3 delivery servers and we don't have the option of a distributed cache.
Using .NET, we can hook the cache-expired event, but how best to use that without impacting the calling request?
One thought was to keep a never-expiring fallback cache: if the main cache has expired, fall back to that, and have a Windows service or similar poll every 5 minutes to refresh both caches.
Ideas?
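One way to hook the expiry mentioned above is HttpRuntime.Cache's CacheItemRemovedCallback; a rough sketch of wiring it up is below (ReportCache, Key and LoadData() are placeholder names, and the callback runs on a cache thread rather than on a request):
using System;
using System.Web;
using System.Web.Caching;
public static class ReportCache
{
    private const string Key = "report";
    public static void Insert(object value)
    {
        HttpRuntime.Cache.Insert(
            Key, value, null,
            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration,
            CacheItemPriority.Default, OnRemoved);
    }
    // Fires when the entry is removed; repopulating here means the next caller
    // does not pay the 12-second query. There is no HttpContext on this thread,
    // so the loader must not depend on the current request.
    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.Expired)
        {
            Insert(LoadData());
        }
    }
    private static object LoadData() { /* expensive query */ return new object(); }
}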

Perhaps caching the results separately from the page cache will help.
Based on http://johnnycoder.com/blog/2008/12/10/c-cache-helper-class/
Since it is static, you could use WCF to refresh it at your own pace.
I modified it to be static and not tied to HttpContext:
using System;
using System.Collections.Generic;
using System.Linq;

public static class CacheHelper
{
    public static void WriteOutCacheHelper()
    {
        foreach (KeyValuePair<string, object> cache in Cache)
        {
            Console.WriteLine(cache.Key);
        }
    }

    public static void WriteOutCacheHelper(string key)
    {
        Console.WriteLine(Get<object>(key).ToString());
    }

    // The helper is a no-op unless Enabled is set to true.
    public static bool Enabled { get; set; }

    private static Dictionary<string, object> _cache;
    public static Dictionary<string, object> Cache
    {
        get
        {
            if (_cache == null) _cache = new Dictionary<string, object>();
            return _cache;
        }
    }

    public static object lockObject = new object();

    public static void Add<T>(T o, string key)
    {
        if (!Enabled) return;
        lock (lockObject)
        {
            if (Exists(key))
                Cache[key] = o;
            else
                Cache.Add(key, o);
        }
    }

    public static void Clear(string key)
    {
        if (!Enabled) return;
        Cache.Remove(key);
    }

    public static bool Exists(string key)
    {
        if (!Enabled) return false;
        return Cache.ContainsKey(key);
    }

    public static T Get<T>(string key)
    {
        if (!Enabled) return default(T);
        T value;
        try
        {
            value = (!Exists(key) ? default(T) : (T)Cache[key]);
        }
        catch
        {
            value = default(T);
        }
        return value;
    }

    public static void ClearAll(bool force = false)
    {
        if (!force && !Enabled) return;
        Cache.Clear();
    }

    public static List<T> GetStartingWith<T>(string cacheKey) where T : class
    {
        if (!Enabled) return new List<T>();
        return Cache.ToList()
            .FindAll(f => f.Key.StartsWith(cacheKey, StringComparison.CurrentCultureIgnoreCase))
            .Select(s => s.Value as T)
            .ToList();
    }
}
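Alternatively, a timer in the same process can repopulate the entry on a schedule so callers never see a cold miss. A rough sketch against the helper above (CacheRefresher, ReportKey and LoadData() are placeholder names):
using System;
using System.Threading;
public static class CacheRefresher
{
    private const string ReportKey = "report";
    private static Timer _timer;
    public static void Start()
    {
        CacheHelper.Enabled = true; // the helper is a no-op unless enabled
        // Populate immediately, then every 5 minutes, so the entry is always warm.
        _timer = new Timer(_ => CacheHelper.Add(LoadData(), ReportKey),
                           null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }
    public static void Stop()
    {
        if (_timer != null) _timer.Dispose();
    }
    private static object LoadData() { /* expensive query */ return new object(); }
}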

Related

Why aren't closure helper instances (DisplayClass) only created when actually needed?

I have a question regarding closures and heap allocation. Consider the following code:
//ORIGINAL CODE, VERSION 1
public class Program
{
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(String x){
if(x == "abort") return null;
return _coll.GetOrAdd(x, (k)=> TestCallback());
}
public static object TestCallback() => null;
}
Within Test a static callback function is used. And, according to https://sharplab.io, this gets lowered to (abbr.):
//LOWERED CODE, VERSION 1
public class Program
{
private sealed class <>c
{
public static readonly <>c <>9 = new <>c(); // <== HELPER1 CREATION
public static Func<object, object> <>9__1_0;
internal object <Test>b__1_0(object k)
{
return TestCallback();
}
}
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(string x)
{
if (x == "abort")
{
return null;
}
return _coll.GetOrAdd(x, <>c.<>9__1_0 ?? (<>c.<>9__1_0 = new Func<object, object>(<>c.<>9.<Test>b__1_0))); // <== HELPER2 CREATION
}
public static object TestCallback() //==> STATIC METHOD
{
return null;
}
}
So, the compiler creates a few helper objects, but does this only once (the helpers are static).
Now, if I remove static from TestCallback...:
//ORIGINAL CODE, VERSION 2
public class Program
{
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(String x){
if(x == "abort") return null;
return _coll.GetOrAdd(x, (k)=> TestCallback());
}
public object TestCallback() => null; //==> INSTANCE METHOD
}
...the lowered code changes to:
//LOWERED CODE, VERSION 2
public class Program
{
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(string x)
{
if (x == "abort")
{
return null;
}
return _coll.GetOrAdd(x, new Func<object, object>(<Test>b__1_0)); // <== HELPER1 CREATION
}
public object TestCallback()
{
return null;
}
private object <Test>b__1_0(object k)
{
return TestCallback();
}
}
It now appears that a new Func is created on every call where x == "abort" is not true (i.e. whenever _coll.GetOrAdd is actually reached).
Finally, if I change Test to include a callback parameter...:
//ORIGINAL CODE, VERSION 3
public class Program
{
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(String x, Func<object> callback){
if(x == "abort") return null;
return _coll.GetOrAdd(x, (k)=> callback());
}
}
...the lowered code changes to:
//LOWERED CODE, VERSION 3
public class Program
{
private sealed class <>c__DisplayClass1_0
{
public Func<object> callback;
internal object <Test>b__0(object k)
{
return callback();
}
}
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(string x, Func<object> callback)
{
<>c__DisplayClass1_0 <>c__DisplayClass1_ = new <>c__DisplayClass1_0(); // <== HELPER1 CREATION
<>c__DisplayClass1_.callback = callback;
if (x == "abort")
{
return null;
}
return _coll.GetOrAdd(x, new Func<object, object>(<>c__DisplayClass1_.<Test>b__0)); // <== HELPER2 CREATION
}
}
Here, it appears as if a new <>c__DisplayClass1_0 is created on every call, regardless of x == "abort".
To summarize:
Version 1: creates 2 helpers, once.
Version 2: creates 1 helper whenever _coll.GetOrAdd is actually called.
Version 3: creates 2 helpers on every call.
Is this correct? If the lowered code is accurate (and is what the actual compiler produces), why isn't the creation of the new <>c__DisplayClass1_0 deferred until immediately before the relevant call?
That would prevent unnecessary allocations. Ultimately I'm wondering whether this is an actual improvement:
public IMetadata GetOrDefineMetadata(object key, Func<IMetadata> createCallback)
{
if (_coll.TryGetValue(key, out var result)) return result; //THIS LINE WAS INSERTED AS AN IMPROVEMENT
return _coll.GetOrAdd(key, (k) => createCallback()); // ==> WILL THIS STILL CAUSE ALLOCATIONS ON EVERY CALL?
}
This looks like an opportunity for a compiler optimization.
I moved the call to _coll.GetOrAdd to a static method. In the lowered code this moves the allocation further down.
public class Program
{
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(String x, Func<object> callback){
if(x == "abort") return null;
return GetOrAdd(x, _coll, callback);
}
private static object GetOrAdd(string x, ConcurrentDictionary<object, object> dict, Func<object> callback)
{
return dict.GetOrAdd(x, (_)=> callback());
}
}
Lowered version:
public class Program
{
[CompilerGenerated]
private sealed class <>c__DisplayClass2_0
{
public Func<object> callback;
internal object <GetOrAdd>b__0(object _)
{
return callback();
}
}
private ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
public object Test(string x, Func<object> callback)
{
if (x == "abort")
{
return null;
}
return GetOrAdd(x, _coll, callback);
}
private static object GetOrAdd(string x, ConcurrentDictionary<object, object> dict, Func<object> callback)
{
<>c__DisplayClass2_0 <>c__DisplayClass2_ = new <>c__DisplayClass2_0();
<>c__DisplayClass2_.callback = callback;
return dict.GetOrAdd(x, new Func<object, object>(<>c__DisplayClass2_.<GetOrAdd>b__0));
}
}
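As a related note, on newer runtimes (.NET Core 2.0+ / .NET Standard 2.1; not in .NET Framework) ConcurrentDictionary<TKey, TValue> has a GetOrAdd overload that takes an explicit state argument. Passing the callback through it keeps the lambda capture-free, so the compiler caches a single static delegate and no display class is allocated per call. A sketch:
using System;
using System.Collections.Concurrent;
public class Program
{
    private readonly ConcurrentDictionary<object, object> _coll = new ConcurrentDictionary<object, object>();
    public object Test(string x, Func<object> callback)
    {
        if (x == "abort") return null;
        // The callback travels through the factoryArgument parameter, so the
        // lambda captures nothing and its delegate is only allocated (and cached) once.
        return _coll.GetOrAdd(x, (key, cb) => cb(), callback);
    }
}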

Create Microsoft.Extensions.Caching.Memory as a Singleton class

I'm trying to use Microsoft.Extensions.Caching.Memory as a singleton class in ASP.NET MVC 5.
The problem is that with each call the Instance variable returns null and the class re-instantiates itself (the old cached values are lost).
This is my cache class:
public class CacheManagerHelper<TItem>
{
private static CacheManagerHelper<TItem> instance;
private static MemoryCache memoryCache;
private static object syncRoot = new();
private CacheManagerHelper() { }
public static CacheManagerHelper<TItem> Instance
{
get
{
if (instance == null)
{
instance = new CacheManagerHelper<TItem>();
memoryCache = new MemoryCache(new MemoryCacheOptions()
{
});
}
return instance;
}
}
private static ConcurrentDictionary<object, SemaphoreSlim> _locks = new ConcurrentDictionary<object, SemaphoreSlim>();
public async Task<TItem> GetOrCreate(object key, Func<Task<TItem>> createItem)
{
TItem cacheEntry;
if (!memoryCache.TryGetValue(key, out cacheEntry))
{
SemaphoreSlim mylock = _locks.GetOrAdd(key, k => new SemaphoreSlim(1, 1));
await mylock.WaitAsync();
try
{
if (!memoryCache.TryGetValue(key, out cacheEntry))
{
cacheEntry = await createItem();
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetPriority(Microsoft.Extensions.Caching.Memory.CacheItemPriority.NeverRemove)
.SetSlidingExpiration(TimeSpan.FromHours(24))
.SetAbsoluteExpiration(TimeSpan.FromHours(48));
memoryCache.Set(key, cacheEntry, cacheEntryOptions);
}
}
finally
{
mylock.Release();
}
}
return cacheEntry;
}
public T Get<T>(string key)
{
return (T)Convert.ChangeType(memoryCache.Get(key), typeof(T));
}
public void RemoveCache(string key)
{
memoryCache.Remove(key);
}
public static void ClearAllCacheObject()
{
IDictionaryEnumerator enumerator = HttpContext.Current.Cache.GetEnumerator();
while (enumerator.MoveNext())
HttpContext.Current.Cache.Remove(enumerator.Key.ToString());
}
}
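For comparison, a minimal sketch of a process-wide cache (AppCache and its members are illustrative names): a static Lazy<T> guarantees a single MemoryCache per AppDomain, and the entry options are applied to the entry that is actually stored. Note that an IIS application-pool recycle still resets static state, so entries do not survive a recycle either way.
using System;
using Microsoft.Extensions.Caching.Memory;
public static class AppCache
{
    private static readonly Lazy<IMemoryCache> LazyCache =
        new Lazy<IMemoryCache>(() => new MemoryCache(new MemoryCacheOptions()));
    public static IMemoryCache Instance
    {
        get { return LazyCache.Value; }
    }
    public static TItem GetOrCreate<TItem>(string key, Func<TItem> factory)
    {
        return Instance.GetOrCreate(key, entry =>
        {
            // Options are set on the entry itself, so they take effect on the stored item.
            entry.Priority = CacheItemPriority.NeverRemove;
            entry.SlidingExpiration = TimeSpan.FromHours(24);
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(48);
            return factory();
        });
    }
}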

Tentative locks in C#?

Suppose I'd like to allow parallel execution of some code, but need other code to wait for all of these operations to finish.
Let's imagine a softlock in addition to lock:
public static class MySimpleCache
{
private static readonly SynchronizedCollection<KeyValuePair<string, string>> Collection = new SynchronizedCollection<KeyValuePair<string, string>>();
public static string Get(string key, Func<string> getter)
{
// Allow parallel enumerations here,
// but force modifications to the collections to wait.
softlock(Collection.SyncRoot)
{
if (Collection.Any(kvp => kvp.Key == key))
{
return Collection.First(kvp => kvp.Key == key).Value;
}
}
var data = getter();
// Wait for previous soft-locks before modifying the collection and let subsequent softlocks wait
lock (Collection.SyncRoot)
{
Collection.Add(new KeyValuePair<string, string>(key, data));
}
return data;
}
}
Is there any design-pattern or language/framework features in C#/.NET to achieve this in a straightforward and reliable fashion, or would one have to implement this from the ground up?
I'm currently limited to .NET 3.5 and I'm mostly interested in the conceptual issue, not so much in other possible collections that might solve the example in itself.
In situations like this you can use a ReaderWriterLockSlim. It allows multiple readers in until someone wants to write; at that point it blocks new readers and lets only a single writer through.
public static class MySimpleCache
{
private static readonly SynchronizedCollection<KeyValuePair<string, string>> Collection = new SynchronizedCollection<KeyValuePair<string, string>>();
private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
public static string Get(string key, Func<string> getter)
{
//This allows multiple readers to run concurrently.
Lock.EnterReadLock();
try
{
var result = Collection.FirstOrDefault(kvp => kvp.Key == key);
if (!Object.Equals(result, default(KeyValuePair<string, string>)))
{
return result.Value;
}
}
finally
{
Lock.ExitReadLock();
}
var data = getter();
//This blocks all future EnterReadLock(), once all finish it allows the function to continue
Lock.EnterWriteLock();
try
{
Collection.Add(new KeyValuePair<string, string>(key, data));
return data;
}
finally
{
Lock.ExitWriteLock();
}
}
}
However, you may want to check whether, while you were waiting to take the write lock, someone else entered the record into the cache. In that case you can use EnterUpgradeableReadLock(): unlimited threads can be inside EnterReadLock(), but only a single thread can hold the upgradeable lock at a time (and there will still be no write locks). The upgradeable lock is useful when you know you will likely be writing but there is an opportunity not to write.
public static class MySimpleCache
{
private static readonly SynchronizedCollection<KeyValuePair<string, string>> Collection = new SynchronizedCollection<KeyValuePair<string, string>>();
private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
public static string Get(string key, Func<string> getter)
{
//This allows multiple readers to run concurrently.
Lock.EnterReadLock();
try
{
var result = Collection.FirstOrDefault(kvp => kvp.Key == key);
if (!Object.Equals(result, default(KeyValuePair<string, string>)))
{
return result.Value;
}
}
finally
{
Lock.ExitReadLock();
}
//This allows unlimited EnterReadLock to run concurrently, but only one thread can be in upgrade mode, other threads will block.
Lock.EnterUpgradeableReadLock();
try
{
//We need to check to see if someone else filled the cache while we were waiting.
var result = Collection.FirstOrDefault(kvp => kvp.Key == key);
if (!Object.Equals(result, default(KeyValuePair<string, string>)))
{
return result.Value;
}
var data = getter();
//This blocks all future EnterReadLock(), once all finish it allows the function to continue
Lock.EnterWriteLock();
try
{
Collection.Add(new KeyValuePair<string, string>(key, data));
return data;
}
finally
{
Lock.ExitWriteLock();
}
}
finally
{
Lock.ExitUpgradeableReadLock();
}
}
}
P.S. You mentioned in a comment that the value could be null, so FirstOrDefault() would not work. In that case, use an extension method to make a TryFirst() function.
public static class ExtensionMethods
{
    public static bool TryFirst<T>(this IEnumerable<T> @this, Func<T, bool> predicate, out T result)
    {
        foreach (var item in @this)
        {
            if (predicate(item))
            {
                result = item;
                return true;
            }
        }
        result = default(T);
        return false;
    }
}
//Used like
Lock.EnterReadLock();
try
{
KeyValuePair<string, string> result;
bool found = Collection.TryFirst(kvp => kvp.Key == key, out result);
if (found)
{
return result.Value;
}
}
finally
{
Lock.ExitReadLock();
}

Get list of active items from ConditionalWeakTable<T>

The .NET 4.0 ConditionalWeakTable<T> is effectively a dictionary whose keys are weakly referenced and can be collected, which is exactly what I need. The problem is that I need to be able to get all live keys from this dictionary, but MSDN states:
It does not include all the methods (such as GetEnumerator or
Contains) that a dictionary typically has.
Is there a possibility to retrieve the live keys or key-value pairs from a ConditionalWeakTable<T>?
I ended up creating my own wrapper:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
public sealed class ConditionalHashSet<T> where T : class
{
private readonly object locker = new object();
private readonly List<WeakReference> weakList = new List<WeakReference>();
private readonly ConditionalWeakTable<T, WeakReference> weakDictionary =
new ConditionalWeakTable<T, WeakReference>();
public void Add(T item)
{
lock (this.locker)
{
var reference = new WeakReference(item);
this.weakDictionary.Add(item, reference);
this.weakList.Add(reference);
this.Shrink();
}
}
public void Remove(T item)
{
lock (this.locker)
{
WeakReference reference;
if (this.weakDictionary.TryGetValue(item, out reference))
{
reference.Target = null;
this.weakDictionary.Remove(item);
}
}
}
public T[] ToArray()
{
lock (this.locker)
{
return (
from weakReference in this.weakList
let item = (T)weakReference.Target
where item != null
select item)
.ToArray();
}
}
private void Shrink()
{
// This method prevents the List<T> from growing indefinitely, but
// might also cause a performance problem in some cases.
if (this.weakList.Capacity == this.weakList.Count)
{
this.weakList.RemoveAll(weak => !weak.IsAlive);
}
}
}
In recent framework versions, ConditionalWeakTable<TKey,TValue> implements the IEnumerable<KeyValuePair<TKey,TValue>> interface. Check out the Microsoft Docs.
This applies to:
.NET Core >= 2.0
.NET Standard >= 2.1
This does not solve the problem if someone is stuck on .NET Framework. Otherwise, it may help if, like me, it's only a matter of updating from .NET Standard 2.0 to 2.1.
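On those targets the table can be enumerated directly, for example (a small sketch):
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
class CwtEnumerationDemo
{
    static void Main()
    {
        var table = new ConditionalWeakTable<object, string>();
        var key = new object();
        table.Add(key, "value");
        // The table implements IEnumerable<KeyValuePair<TKey, TValue>> (explicitly),
        // so casting gives access to the live entries.
        foreach (var entry in (IEnumerable<KeyValuePair<object, string>>)table)
        {
            Console.WriteLine(entry.Key + " -> " + entry.Value);
        }
        GC.KeepAlive(key); // keep the key alive so its entry is still live during enumeration
    }
}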
This will work without the performance problems.
The key to the problem is to use a "holder" object as a value in the ConditionalWeakTable, so that when the key gets dropped, the holder's finalizer will trigger, which removes the key from the "active list" of keys.
I tested this and it works.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace Util
{
public class WeakDictionary<TKey, TValue> : IDictionary<TKey, TValue>, IDisposable
where TKey : class
where TValue : class
{
private readonly object locker = new object();
//private readonly HashSet<WeakReference> weakKeySet = new HashSet<WeakReference>(new ObjectReferenceEqualityComparer<WeakReference>());
private ConditionalWeakTable<TKey, WeakKeyHolder> keyHolderMap = new ConditionalWeakTable<TKey, WeakKeyHolder>();
private Dictionary<WeakReference, TValue> valueMap = new Dictionary<WeakReference, TValue>(new ObjectReferenceEqualityComparer<WeakReference>());
private class WeakKeyHolder
{
private WeakDictionary<TKey, TValue> outer;
private WeakReference keyRef;
public WeakKeyHolder(WeakDictionary<TKey, TValue> outer, TKey key)
{
this.outer = outer;
this.WeakRef = new WeakReference(key);
}
public WeakReference WeakRef { get; private set; }
~WeakKeyHolder()
{
this.outer?.onKeyDrop(this.WeakRef); // Nullable operator used just in case this.outer gets set to null by GC before this finalizer runs. But I haven't had this happen.
}
}
private void onKeyDrop(WeakReference weakKeyRef)
{
lock(this.locker)
{
if (!this.bAlive)
return;
//this.weakKeySet.Remove(weakKeyRef);
this.valueMap.Remove(weakKeyRef);
}
}
// The reason for this is in case (for some reason which I have never seen) the finalizer trigger doesn't work
// There is not much performance penalty with this, since this is only called in cases when we would be enumerating the inner collections anyway.
private void manualShrink()
{
var keysToRemove = this.valueMap.Keys.Where(k => !k.IsAlive).ToList();
foreach (var key in keysToRemove)
valueMap.Remove(key);
}
private Dictionary<TKey, TValue> currentDictionary
{
get
{
lock(this.locker)
{
this.manualShrink();
return this.valueMap.ToDictionary(p => (TKey) p.Key.Target, p => p.Value);
}
}
}
public TValue this[TKey key]
{
get
{
if (this.TryGetValue(key, out var val))
return val;
throw new KeyNotFoundException();
}
set
{
this.set(key, value, isUpdateOkay: true);
}
}
private bool set(TKey key, TValue val, bool isUpdateOkay)
{
lock (this.locker)
{
if (this.keyHolderMap.TryGetValue(key, out var weakKeyHolder))
{
if (!isUpdateOkay)
return false;
this.valueMap[weakKeyHolder.WeakRef] = val;
return true;
}
weakKeyHolder = new WeakKeyHolder(this, key);
this.keyHolderMap.Add(key, weakKeyHolder);
//this.weakKeySet.Add(weakKeyHolder.WeakRef);
this.valueMap.Add(weakKeyHolder.WeakRef, val);
return true;
}
}
public ICollection<TKey> Keys
{
get
{
lock(this.locker)
{
this.manualShrink();
return this.valueMap.Keys.Select(k => (TKey) k.Target).ToList();
}
}
}
public ICollection<TValue> Values
{
get
{
lock (this.locker)
{
this.manualShrink();
return this.valueMap.Select(p => p.Value).ToList();
}
}
}
public int Count
{
get
{
lock (this.locker)
{
this.manualShrink();
return this.valueMap.Count;
}
}
}
public bool IsReadOnly => false;
public void Add(TKey key, TValue value)
{
if (!this.set(key, value, isUpdateOkay: false))
throw new ArgumentException("Key already exists");
}
public void Add(KeyValuePair<TKey, TValue> item)
{
this.Add(item.Key, item.Value);
}
public void Clear()
{
lock(this.locker)
{
this.keyHolderMap = new ConditionalWeakTable<TKey, WeakKeyHolder>();
this.valueMap.Clear();
}
}
public bool Contains(KeyValuePair<TKey, TValue> item)
{
WeakKeyHolder weakKeyHolder = null;
object curVal = null;
lock (this.locker)
{
if (!this.keyHolderMap.TryGetValue(item.Key, out weakKeyHolder))
return false;
curVal = weakKeyHolder.WeakRef.Target;
}
return (curVal?.Equals(item.Value) == true);
}
public bool ContainsKey(TKey key)
{
lock (this.locker)
{
return this.keyHolderMap.TryGetValue(key, out var weakKeyHolder);
}
}
public void CopyTo(KeyValuePair<TKey, TValue>[] array, int arrayIndex)
{
((IDictionary<TKey, TValue>) this.currentDictionary).CopyTo(array, arrayIndex);
}
public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
{
return this.currentDictionary.GetEnumerator();
}
public bool Remove(TKey key)
{
lock (this.locker)
{
if (!this.keyHolderMap.TryGetValue(key, out var weakKeyHolder))
return false;
this.keyHolderMap.Remove(key);
this.valueMap.Remove(weakKeyHolder.WeakRef);
return true;
}
}
public bool Remove(KeyValuePair<TKey, TValue> item)
{
lock (this.locker)
{
if (!this.keyHolderMap.TryGetValue(item.Key, out var weakKeyHolder))
return false;
if (weakKeyHolder.WeakRef.Target?.Equals(item.Value) != true)
return false;
this.keyHolderMap.Remove(item.Key);
this.valueMap.Remove(weakKeyHolder.WeakRef);
return true;
}
}
public bool TryGetValue(TKey key, out TValue value)
{
lock (this.locker)
{
if (!this.keyHolderMap.TryGetValue(key, out var weakKeyHolder))
{
value = default(TValue);
return false;
}
value = this.valueMap[weakKeyHolder.WeakRef];
return true;
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
private bool bAlive = true;
public void Dispose()
{
this.Dispose(true);
}
protected void Dispose(bool bManual)
{
if (bManual)
{
Monitor.Enter(this.locker);
if (!this.bAlive)
{
// Release the monitor before the early return so the lock is not held forever.
Monitor.Exit(this.locker);
return;
}
}
try
{
this.keyHolderMap = null;
this.valueMap = null;
this.bAlive = false;
}
finally
{
if (bManual)
Monitor.Exit(this.locker);
}
}
~WeakDictionary()
{
this.Dispose(false);
}
}
public class ObjectReferenceEqualityComparer<T> : IEqualityComparer<T>
{
public static ObjectReferenceEqualityComparer<T> Default = new ObjectReferenceEqualityComparer<T>();
public bool Equals(T x, T y)
{
return ReferenceEquals(x, y);
}
public int GetHashCode(T obj)
{
return RuntimeHelpers.GetHashCode(obj);
}
}
public class ObjectReferenceEqualityComparer : ObjectReferenceEqualityComparer<object>
{
}
}
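A quick usage sketch of the wrapper above; GC timing is not deterministic, so the final count shown in the comments is only the typical outcome:
using System;
class WeakDictionaryDemo
{
    static void Main()
    {
        var cache = new Util.WeakDictionary<object, string>();
        var key = new object();
        cache.Add(key, "payload");
        Console.WriteLine(cache.Count);   // 1 while 'key' is still strongly referenced
        key = null;                       // drop the only strong reference
        GC.Collect();
        GC.WaitForPendingFinalizers();    // give the WeakKeyHolder finalizer a chance to run
        GC.Collect();
        Console.WriteLine(cache.Count);   // typically 0 now
    }
}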

C# reload singleton cache

I need some directions here.
I have the following key/value cache:
public class Cache<TKey, TValue> : ICache<TKey, TValue>
{
private readonly IDictionary<TKey, TValue> _internalCache;
private readonly object _syncLock = new object();
public Cache()
{
_internalCache = new Dictionary<TKey, TValue>();
}
public TValue this[TKey key]
{
get
{
lock (_syncLock) {
//...
}
}
set
{
lock (_syncLock) {
//...
}
}
}
public ICollection<TValue> GetAll()
{
lock (_syncLock) {
return _internalCache.Values;
}
}
public bool ContainsKey(TKey key)
{
lock (_syncLock)
{
return _internalCache.ContainsKey(key);
}
}
}
The cache above is used by a singleton wrapper:
public class ActivityCache : ICache<string, Activity>
{
private readonly ICache<string, Activity> _cache = new Cache<string, Activity>();
private static readonly ActivityCache _instance = new ActivityCache();
// http://www.yoda.arachsys.com/csharp/singleton.html
static ActivityCache()
{
}
ActivityCache()
{
}
public static ActivityCache Instance
{
get { return _instance; }
}
public Activity this[string activityUrl]
{
get
{
if (string.IsNullOrEmpty(activityUrl))
{
return null;
}
return _cache[activityUrl];
}
set
{
if (string.IsNullOrEmpty(activityUrl))
{
return;
}
_cache[activityUrl] = value;
}
}
public ICollection<Activity> GetAll()
{
return _cache.GetAll();
}
public bool ContainsKey(string key)
{
return _cache.ContainsKey(key);
}
}
This is working fine (I haven't noticed/heard of any errors... yet :) ).
But now I have a problem. I need to reload the cache with new key/values.
Question 1.) Can I implement a "safe" reload method that reloads the cache (the Dictionary in the Cache class)?
E.g:
public void Reload(IDictionary<TKey, TValue> values)
{
lock (_syncLock)
{
_internalCache.Clear();
foreach (KeyValuePair<TKey, TValue> value in values)
{
/* Problems can (will) occur if another
thread is calling the GetAll method... */
_internalCache[value.Key] = value.Value;
}
}
}
Question 2.) Should I use some IoC container or some other library instead?
Thanks!
Note: I'm using .NET 3.5
Use ConcurrentDictionary; then you won't have to deal with synchronization.
Also, you don't want to reload all the cache items at once. Instead, stagger the work: load the cache objects into the key/value store on demand.
You can use a timestamp or some versioning for that. If you reload the data per key/value pair, you won't have to lock the whole collection.
I really recommend you use ConcurrentDictionary.
You should lock around GetAll as well.
You could use a double-buffer type technique to make the reload less painful:
public void Reload(IDictionary<TKey, TValue> values)
{
    // Note: _internalCache can no longer be readonly for this reference swap to compile.
    var cache = new Dictionary<TKey, TValue>();
    foreach (KeyValuePair<TKey, TValue> value in values)
    {
        cache[value.Key] = value.Value;
    }
    lock (_syncLock)
    {
        _internalCache = cache;
    }
}
This will work provided you don't mind readers accessing potentially out-of-date information while you call Reload.
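The same idea as a self-contained sketch (the class name is illustrative, and it works on .NET 3.5): build the replacement dictionary off to the side, then publish it with a single reference assignment, so readers only ever see a complete snapshot.
using System.Collections.Generic;
public class SwappableCache<TKey, TValue>
{
    // Readers always dereference a complete, never-mutated snapshot.
    private volatile Dictionary<TKey, TValue> _current = new Dictionary<TKey, TValue>();
    public bool TryGet(TKey key, out TValue value)
    {
        // No lock needed: a published snapshot is never modified again.
        return _current.TryGetValue(key, out value);
    }
    public void Reload(IDictionary<TKey, TValue> values)
    {
        // Build the replacement off to the side, then swap the reference atomically.
        _current = new Dictionary<TKey, TValue>(values);
    }
}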
