MemoryCache does not appear to be caching - c#

I am trying to implement caching in one of our projects.
I have no experience with this part of the framework, so I am likely doing something very wrong.
Using code I found over at Code Review (https://codereview.stackexchange.com/questions/48148/generic-thread-safe-memorycache-manager-for-c) I came up with more or less the same thing, adding the ability to store lists in the cache.
I have this (showing just the code I am using):
private CacheItemPolicy _defaultCacheItemPolicy = new CacheItemPolicy()
{
SlidingExpiration = new TimeSpan(0, 15, 0)
};
public CacheUtil(string cacheName)
: base(cacheName) { }
public void Set(string cacheKey, Func<T> getData)
{
this.Set(cacheKey, getData(), _defaultCacheItemPolicy);
}
public bool TryGetAndSet(string cacheKey, Func<List<T>> getData, out List<T> returnData)
{
if (TryGet(cacheKey, out returnData))
{
return true;
}
returnData = getData();
this.Set(cacheKey, returnData, _defaultCacheItemPolicy);
return true;
}
public bool TryGet(string cacheKey, out List<T> returnItem)
{
returnItem = (List<T>)this[cacheKey];
return returnItem != null;
}
I can then call this by doing this:
public override List<T> GetAll()
{
string keyName = typeof(T).ToString();
List<T> t;
_cache.TryGetAndSet(keyName, () => base.GetAll(), out t);
return t;
}
base.GetAll() is a function in a repository class that fetches data, via EF.
If I call my GetAll() twice it sets the list into the cache again - returnItem = (List<T>)this[cacheKey]; comes up null every time.
What am I doing wrong?

Related

Looking for a way to do less locking while caching

I am using the code below to cache items. It's pretty basic.
The issue I have is that every time it caches an item, that section of the code locks. So with roughly a million items arriving every hour or so, this is a problem.
I've tried creating a dictionary of static lock objects per cacheKey, so that locking is granular, but that in itself becomes an issue with managing their expiration, etc...
Is there a better way to implement minimal locking?
private static readonly object cacheLock = new object();
public static T GetFromCache<T>(string cacheKey, Func<T> GetData) where T : class {
// Returns null if the string does not exist, prevents a race condition
// where the cache invalidates between the contains check and the retrieval.
T cachedData = MemoryCache.Default.Get(cacheKey) as T;
if (cachedData != null) {
return cachedData;
}
lock (cacheLock) {
// Check to see if anyone wrote to the cache while we were
// waiting our turn to write the new value.
cachedData = MemoryCache.Default.Get(cacheKey) as T;
if (cachedData != null) {
return cachedData;
}
// The value still did not exist so we now write it in to the cache.
cachedData = GetData();
MemoryCache.Default.Set(cacheKey, cachedData, new CacheItemPolicy(...));
return cachedData;
}
}
You may want to consider using ReaderWriterLockSlim, so that you acquire a write lock only when needed.
Using cacheLock.EnterReadLock(); and cacheLock.EnterWriteLock(); should greatly improve the performance.
The ReaderWriterLockSlim documentation even has an example of a cache, which is exactly what you need; I copy it here:
public class SynchronizedCache
{
private ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
private Dictionary<int, string> innerCache = new Dictionary<int, string>();
public int Count
{ get { return innerCache.Count; } }
public string Read(int key)
{
cacheLock.EnterReadLock();
try
{
return innerCache[key];
}
finally
{
cacheLock.ExitReadLock();
}
}
public void Add(int key, string value)
{
cacheLock.EnterWriteLock();
try
{
innerCache.Add(key, value);
}
finally
{
cacheLock.ExitWriteLock();
}
}
public bool AddWithTimeout(int key, string value, int timeout)
{
if (cacheLock.TryEnterWriteLock(timeout))
{
try
{
innerCache.Add(key, value);
}
finally
{
cacheLock.ExitWriteLock();
}
return true;
}
else
{
return false;
}
}
public AddOrUpdateStatus AddOrUpdate(int key, string value)
{
cacheLock.EnterUpgradeableReadLock();
try
{
string result = null;
if (innerCache.TryGetValue(key, out result))
{
if (result == value)
{
return AddOrUpdateStatus.Unchanged;
}
else
{
cacheLock.EnterWriteLock();
try
{
innerCache[key] = value;
}
finally
{
cacheLock.ExitWriteLock();
}
return AddOrUpdateStatus.Updated;
}
}
else
{
cacheLock.EnterWriteLock();
try
{
innerCache.Add(key, value);
}
finally
{
cacheLock.ExitWriteLock();
}
return AddOrUpdateStatus.Added;
}
}
finally
{
cacheLock.ExitUpgradeableReadLock();
}
}
public void Delete(int key)
{
cacheLock.EnterWriteLock();
try
{
innerCache.Remove(key);
}
finally
{
cacheLock.ExitWriteLock();
}
}
public enum AddOrUpdateStatus
{
Added,
Updated,
Unchanged
};
~SynchronizedCache()
{
if (cacheLock != null) cacheLock.Dispose();
}
}
I don't know how MemoryCache.Default is implemented, or whether or not you have control over it.
But in general, prefer ConcurrentDictionary over Dictionary with a lock in a multi-threaded environment.
GetFromCache would just become
ConcurrentDictionary<string, T> cache = new ConcurrentDictionary<string, T>();
...
cache.GetOrAdd("someKey", (key) =>
{
var data = PullDataFromDatabase(key);
return data;
});
There are two more things to take care of.
Expiry
Instead of saving T as the value of the dictionary, you can define a type
struct CacheItem<T>
{
public T Item { get; set; }
public DateTime Expiry { get; set; }
}
And store the cache as a CacheItem with a defined expiry.
cache.GetOrAdd("someKey", (key) =>
{
var data = PullDataFromDatabase(key);
return new CacheItem<T>() { Item = data, Expiry = DateTime.UtcNow.Add(TimeSpan.FromHours(1)) };
});
Now you can implement expiration on a background timer.
Timer expirationTimer = new Timer(ExpireCache, null, 60000, 60000);
...
void ExpireCache(object state)
{
var needToExpire = cache.Where(c => DateTime.UtcNow >= c.Value.Expiry).Select(c => c.Key);
foreach (var key in needToExpire)
{
cache.TryRemove(key, out CacheItem<T> _);
}
}
Once a minute, you search for all cache entries that need to be expired, and remove them.
"Locking"
Using ConcurrentDictionary guarantees that simultaneous read/writes won't corrupt the dictionary or throw an exception.
But, you can still end up with a situation where two simultaneous reads cause you to fetch the data from the database twice.
One neat trick to solve this is to wrap the value of the dictionary with Lazy
ConcurrentDictionary<string, Lazy<CacheItem<T>>> cache = new ConcurrentDictionary<string, Lazy<CacheItem<T>>>();
...
var cached = cache.GetOrAdd("someKey", key => new Lazy<CacheItem<T>>(() =>
{
var data = PullDataFromDatabase(key);
return new CacheItem<T>() { Item = data, Expiry = DateTime.UtcNow.Add(TimeSpan.FromHours(1)) };
})).Value;
Explanation
With GetOrAdd you might end up invoking the "get from database if not in cache" delegate multiple times in the case of simultaneous requests.
However, GetOrAdd will end up keeping only one of the values the delegates returned, and by storing a Lazy you guarantee that only one Lazy value is actually materialized, so the expensive fetch runs only once.
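Putting the pieces together, here is a minimal sketch of such a cache (the LazyCache class name and the GetOrCreate/RemoveExpired method names are illustrative, not part of the original answer; it reuses the CacheItem<T> struct defined above):
using System;
using System.Collections.Concurrent;
using System.Linq;

public class LazyCache<T>
{
    // Lazy<CacheItem<T>> ensures the expensive fetch runs at most once per key.
    private readonly ConcurrentDictionary<string, Lazy<CacheItem<T>>> _cache =
        new ConcurrentDictionary<string, Lazy<CacheItem<T>>>();

    public T GetOrCreate(string key, Func<string, T> pullData, TimeSpan timeToLive)
    {
        var entry = _cache.GetOrAdd(key, k => new Lazy<CacheItem<T>>(() =>
            new CacheItem<T>
            {
                Item = pullData(k),
                Expiry = DateTime.UtcNow.Add(timeToLive)
            }));
        return entry.Value.Item;
    }

    // Call this from a timer (e.g. once a minute, as above) to drop expired entries.
    public void RemoveExpired()
    {
        var expired = _cache
            .Where(c => c.Value.IsValueCreated && DateTime.UtcNow >= c.Value.Value.Expiry)
            .Select(c => c.Key)
            .ToList();
        foreach (var key in expired)
        {
            Lazy<CacheItem<T>> removed;
            _cache.TryRemove(key, out removed);
        }
    }
}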

WCF: How to cache collections from OData in client

Is there a possibility to cache a collection, retrieved using WCF from an OData service.
The situation is the following:
I generated a WCF service client with Visual Studio 2015 using the metadata of the OData service. VS generated a class inheriting from System.Data.Services.Client.DataServiceContext. This class has some properties of type System.Data.Services.Client.DataServiceQuery<T>. The data behind some of these properties seldom changes. For performance reasons I want the WCF client to load these properties just the first time, not every time I use them in the code.
Is there a built-in possibility to cache the data of these properties? Or can I tell the service client not to reload specific properties every time?
Assuming the service client class is ODataClient and one of its properties is Area, for now I get the values in the following way:
var client = new ODataClient("url_to_the_service");
client.IgnoreMissingProperties = true;
var propertyInfo = client.GetType().GetProperty("Area");
var area = propertyInfo.GetValue(client) as IEnumerable<object>;
The reason why I do this in such a complicated way is, that the client should be very generic: The properties to be handled can be configured in a configuration file.
* EDIT *
I already tried to find properties for caching in the System.Data.Services.Client.DataServiceContext class or the System.Data.Services.Client.DataServiceQuery<T> class, but I wasn't able to find any.
To my knowledge there is no "out of the box" caching concept on the client. There are options for caching the output of a request on the server which is something you might want consider as well. Googling "WCF Caching" would get you a bunch of info on this.
Regarding client side caching... @Evk is correct, it is pretty straightforward. Here is a sample using MemoryCache.
using System;
using System.Runtime.Caching;
namespace Services.Util
{
public class CacheWrapper : ICacheWrapper
{
ObjectCache _cache = MemoryCache.Default;
public void ClearCache()
{
MemoryCache.Default.Dispose();
_cache = MemoryCache.Default;
}
public T GetFromCache<T>(string key, Func<T> missedCacheCall)
{
return GetFromCache<T>(key, missedCacheCall, TimeSpan.FromMinutes(5));
}
public T GetFromCache<T>(string key, Func<T> missedCacheCall, TimeSpan timeToLive)
{
var result = _cache.Get(key);
if (result == null)
{
result = missedCacheCall();
if (result != null)
{
_cache.Set(key, result, new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.Add(timeToLive) });
}
}
return (T)result;
}
public void InvalidateCache(string key)
{
_cache.Remove(key);
}
}
}
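The ICacheWrapper interface that CacheWrapper implements is not shown in the answer; a matching definition, inferred from the methods above, might look like this:
using System;

namespace Services.Util
{
    // Assumed interface, inferred from the CacheWrapper implementation above.
    public interface ICacheWrapper
    {
        void ClearCache();
        T GetFromCache<T>(string key, Func<T> missedCacheCall);
        T GetFromCache<T>(string key, Func<T> missedCacheCall, TimeSpan timeToLive);
        void InvalidateCache(string key);
    }
}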
This is an example of code that uses the cache...
private class DataAccessTestStub
{
public const string DateTimeTicksCacheKey = "GetDateTimeTicks";
ICacheWrapper _cache;
public DataAccessTestStub(ICacheWrapper cache)
{
_cache = cache;
}
public string GetDateTimeTicks()
{
return _cache.GetFromCache(DateTimeTicksCacheKey, () =>
{
var result = DateTime.Now.Ticks.ToString();
Thread.Sleep(100); // Create some delay
return result;
});
}
public string GetDateTimeTicks(TimeSpan timeToLive)
{
return _cache.GetFromCache(DateTimeTicksCacheKey, () =>
{
var result = DateTime.Now.Ticks.ToString();
Thread.Sleep(500); // Create some delay
return result;
}, timeToLive);
}
public void ClearDateTimeTicks()
{
_cache.InvalidateCache(DateTimeTicksCacheKey);
}
public void ClearCache()
{
_cache.ClearCache();
}
}
And some tests if you fancy...
[TestClass]
public class CacheWrapperTest
{
private DataAccessTestStub _dataAccessTestClass;
[TestInitialize]
public void Init()
{
_dataAccessTestClass = new DataAccessTestStub(new CacheWrapper());
}
[TestMethod]
public void GetFromCache_ShouldExecuteCacheMissCall()
{
var original = _dataAccessTestClass.GetDateTimeTicks();
Assert.IsNotNull(original);
}
[TestMethod]
public void GetFromCache_ShouldReturnCachedVersion()
{
var copy1 = _dataAccessTestClass.GetDateTimeTicks();
var copy2 = _dataAccessTestClass.GetDateTimeTicks();
Assert.AreEqual(copy1, copy2);
}
[TestMethod]
public void GetFromCache_ShouldRespectTimeToLive()
{
_dataAccessTestClass.ClearDateTimeTicks();
var copy1 = _dataAccessTestClass.GetDateTimeTicks(TimeSpan.FromSeconds(2));
var copy2 = _dataAccessTestClass.GetDateTimeTicks();
Assert.AreEqual(copy1, copy2);
Thread.Sleep(3000);
var copy3 = _dataAccessTestClass.GetDateTimeTicks();
Assert.AreNotEqual(copy1, copy3);
}
[TestMethod]
public void InvalidateCache_ShouldClearCachedVersion()
{
var original = _dataAccessTestClass.GetDateTimeTicks();
_dataAccessTestClass.ClearDateTimeTicks();
var updatedVersion = _dataAccessTestClass.GetDateTimeTicks();
Assert.AreNotEqual(original, updatedVersion);
}
}

Call code once when controller is first accessed [duplicate]

I have read lots of information about page caching and partial page caching in an MVC application. However, I would like to know how you would cache data.
In my scenario I will be using LINQ to Entities (entity framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and on the second call to use the cached version if it exists.
Can anyone show an example of how this would work, where this should be implemented (model?) and if it would work.
I have seen this done in traditional ASP.NET apps, typically for very static data.
Here's a nice and simple cache helper class/service I use:
using System.Runtime.Caching;
public class InMemoryCache: ICacheService
{
public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
{
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(10));
}
return item;
}
}
interface ICacheService
{
T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}
Usage:
cacheProvider.GetOrSet("cache key", (delegate method if cache is empty));
The cache provider will check whether there's anything under the given cache key, and if there's not, it will call the delegate method to fetch the data and store it in the cache.
Example:
var products = cacheService.GetOrSet("catalog.products", () => productRepository.GetAll());
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
string[] names = Cache["names"] as string[];
if(names == null) //not in cache
{
names = DB.GetNames();
Cache["names"] = names;
}
return names;
}
A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I'm referring to TT's post and suggest the following approach:
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
var noms = Cache["names"];
if(noms == null)
{
noms = DB.GetNames();
Cache["names"] = noms;
}
return ((string[])noms);
}
You should not return a value re-read from the cache, since you'll never know if at that specific moment it is still in the cache. Even if you inserted it in the statement before, it might already be gone or has never been added to the cache - you just don't know.
So you add the data read from the database and return it directly, not re-reading from the cache.
For .NET 4.5+ framework
add reference: System.Runtime.Caching
add using statement:
using System.Runtime.Caching;
public string[] GetNames()
{
var noms = System.Runtime.Caching.MemoryCache.Default["names"];
if(noms == null)
{
noms = DB.GetNames();
System.Runtime.Caching.MemoryCache.Default["names"] = noms;
}
return ((string[])noms);
}
In the .NET Framework 3.5 and earlier versions, ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace. In previous versions of the .NET Framework, caching was available only in the System.Web namespace and therefore required a dependency on ASP.NET classes. In the .NET Framework 4, the System.Runtime.Caching namespace contains APIs that are designed for both Web and non-Web applications.
More info:
https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx
https://learn.microsoft.com/en-us/dotnet/framework/performance/caching-in-net-framework-applications
Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern in ASP.NET MVC. It uses the repository pattern effectively and allows you to get caching without having to change your existing code.
http://ardalis.com/Introducing-the-CachedRepository-Pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
I have used it in this way and it works for me.
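A minimal sketch of the idea (not Steve Smith's exact code; IRepository<T> and the ten-minute expiry are placeholders for illustration):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

public interface IRepository<T>
{
    IEnumerable<T> GetAll();
}

// Decorator that adds caching; callers keep talking to IRepository<T>
// and never see the caching logic.
public class CachedRepository<T> : IRepository<T>
{
    private readonly IRepository<T> _inner;
    private readonly string _cacheKey = "repo_" + typeof(T).FullName;

    public CachedRepository(IRepository<T> inner)
    {
        _inner = inner;
    }

    public IEnumerable<T> GetAll()
    {
        var cached = MemoryCache.Default.Get(_cacheKey) as List<T>;
        if (cached == null)
        {
            cached = _inner.GetAll().ToList();   // materialize before caching
            MemoryCache.Default.Add(_cacheKey, cached, DateTimeOffset.Now.AddMinutes(10));
        }
        return cached;
    }
}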
https://msdn.microsoft.com/en-us/library/system.web.caching.cache.add(v=vs.110).aspx
Parameter info for System.Web.Caching.Cache.Add.
public string GetInfo()
{
string name = string.Empty;
if(System.Web.HttpContext.Current.Cache["KeyName"] == null)
{
name = GetNameMethod();
System.Web.HttpContext.Current.Cache.Add("KeyName", name, null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.AboveNormal, null);
}
else
{
name = System.Web.HttpContext.Current.Cache["KeyName"] as string;
}
return name;
}
AppFabric Caching is a distributed, in-memory caching technology that stores data in key-value pairs using physical memory across multiple servers. AppFabric provides performance and scalability improvements for .NET Framework applications. See Concepts and Architecture for details.
Extending @Hrvoje Hudo's answer...
Code:
using System;
using System.Runtime.Caching;
public class InMemoryCache : ICacheService
{
public TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class
{
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
public TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class
{
string cacheKey = string.Format(cacheKeyFormat, id);
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback(id);
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
}
interface ICacheService
{
TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class;
TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class;
}
Examples
Single item caching (when each item is cached based on its ID because caching the entire catalog for the item type would be too intensive).
Product product = cache.Get("product_{0}", productId, 10, productData.getProductById);
Caching all of something
IEnumerable<Categories> categories = cache.Get("categories", 20, categoryData.getCategories);
Why TId
The second helper is especially nice because most data keys are not composite. Additional methods could be added if you use composite keys often. In this way you avoid doing all sorts of string concatenation or string.Format calls to build the key to pass to the cache helper. It also makes passing the data access method easier because you don't have to pass the ID into the wrapper method... the whole thing becomes very terse and consistent for the majority of use cases.
Here's an improvement to Hrvoje Hudo's answer. This implementation has a couple of key improvements:
Cache keys are created automatically based on the function to update data and the object passed in that specifies dependencies
Pass in time span for any cache duration
Uses a lock for thread safety
Note that this has a dependency on Newtonsoft.Json to serialize the dependsOn object, but that can be easily swapped out for any other serialization method.
ICache.cs
public interface ICache
{
T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class;
}
InMemoryCache.cs
using System;
using System.Reflection;
using System.Runtime.Caching;
using Newtonsoft.Json;
public class InMemoryCache : ICache
{
private static readonly object CacheLockObject = new object();
public T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class
{
string cacheKey = GetCacheKey(getItemCallback, dependsOn);
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
lock (CacheLockObject)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.Add(duration));
}
}
return item;
}
private string GetCacheKey<T>(Func<T> itemCallback, object dependsOn) where T: class
{
var serializedDependants = JsonConvert.SerializeObject(dependsOn);
var methodType = itemCallback.GetType();
return methodType.FullName + serializedDependants;
}
}
Usage:
var order = _cache.GetOrSet(
() => _session.Set<Order>().SingleOrDefault(o => o.Id == orderId)
, new { id = orderId }
, new TimeSpan(0, 10, 0)
);
public sealed class CacheManager
{
private static volatile CacheManager instance;
private static object syncRoot = new Object();
private ObjectCache cache = null;
private CacheItemPolicy defaultCacheItemPolicy = null;
private CacheEntryRemovedCallback callback = null;
private bool allowCache = true;
private CacheManager()
{
cache = MemoryCache.Default;
callback = new CacheEntryRemovedCallback(this.CachedItemRemovedCallback);
defaultCacheItemPolicy = new CacheItemPolicy();
defaultCacheItemPolicy.AbsoluteExpiration = DateTime.Now.AddHours(1.0);
defaultCacheItemPolicy.RemovedCallback = callback;
allowCache = StringUtils.Str2Bool(ConfigurationManager.AppSettings["AllowCache"]);
}
public static CacheManager Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new CacheManager();
}
}
}
return instance;
}
}
public IEnumerable GetCache(String Key)
{
if (Key == null || !allowCache)
{
return null;
}
try
{
String Key_ = Key;
if (cache.Contains(Key_))
{
return (IEnumerable)cache.Get(Key_);
}
else
{
return null;
}
}
catch (Exception)
{
return null;
}
}
public void ClearCache(string key)
{
AddCache(key, null);
}
public bool AddCache(String Key, IEnumerable data, CacheItemPolicy cacheItemPolicy = null)
{
if (!allowCache) return true;
try
{
if (Key == null)
{
return false;
}
if (cacheItemPolicy == null)
{
cacheItemPolicy = defaultCacheItemPolicy;
}
String Key_ = Key;
lock (Key_)
{
return cache.Add(Key_, data, cacheItemPolicy);
}
}
catch (Exception)
{
return false;
}
}
private void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
String strLog = String.Concat("Reason: ", arguments.RemovedReason.ToString(), " | Key-Name: ", arguments.CacheItem.Key, " | Value-Object: ", arguments.CacheItem.Value.ToString());
LogManager.Instance.Info(strLog);
}
}
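Usage would then look something like this (a sketch; GetSubjectsFromDb is a placeholder for your own data access call):
// Hypothetical usage of the CacheManager singleton above.
IEnumerable subjects = CacheManager.Instance.GetCache("Subjects");
if (subjects == null)
{
    subjects = GetSubjectsFromDb();                  // placeholder data access call
    CacheManager.Instance.AddCache("Subjects", subjects);
}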
I use two classes. The first one is the cache core object:
public class Cacher<TValue>
where TValue : class
{
#region Properties
private Func<TValue> _init;
public string Key { get; private set; }
public TValue Value
{
get
{
var item = HttpRuntime.Cache.Get(Key) as TValue;
if (item == null)
{
item = _init();
HttpContext.Current.Cache.Insert(Key, item);
}
return item;
}
}
#endregion
#region Constructor
public Cacher(string key, Func<TValue> init)
{
Key = key;
_init = init;
}
#endregion
#region Methods
public void Refresh()
{
HttpRuntime.Cache.Remove(Key);
}
#endregion
}
The second one is a list of cache objects:
public static class Caches
{
static Caches()
{
Languages = new Cacher<IEnumerable<Language>>("Languages", () =>
{
using (var context = new WordsContext())
{
return context.Languages.ToList();
}
});
}
public static Cacher<IEnumerable<Language>> Languages { get; private set; }
}
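Usage then becomes a simple property access (following the two classes above):
// The delegate passed in the static constructor of Caches runs only
// when the entry is not in HttpRuntime.Cache yet.
IEnumerable<Language> languages = Caches.Languages.Value;

// Drop the cached list so it is reloaded on the next access.
Caches.Languages.Refresh();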
Implementing a singleton that holds the data can also be a solution here, in case you find the previous solutions overly complicated:
public class GPDataDictionary
{
private Dictionary<string, object> configDictionary = new Dictionary<string, object>();
/// <summary>
/// Configuration values dictionary
/// </summary>
public Dictionary<string, object> ConfigDictionary
{
get { return configDictionary; }
}
private static GPDataDictionary instance;
public static GPDataDictionary Instance
{
get
{
if (instance == null)
{
instance = new GPDataDictionary();
}
return instance;
}
}
// private constructor
private GPDataDictionary() { }
} // singleton
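Usage of the singleton (a sketch; subjectlist is whatever data you fetched, as in the example below):
// Store a value once...
GPDataDictionary.Instance.ConfigDictionary["subjectlist"] = subjectlist;

// ...and read it back elsewhere.
object cached;
if (GPDataDictionary.Instance.ConfigDictionary.TryGetValue("subjectlist", out cached))
{
    var subjects = cached;   // cast to the concrete type you stored
}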
You can also put the data into the ASP.NET cache directly:
HttpContext.Current.Cache.Insert("subjectlist", subjectlist);
You can also try and use the caching built into ASP MVC:
Add the following attribute to the controller method you'd like to cache:
[OutputCache(Duration=10)]
In this case the ActionResult of this action will be cached for 10 seconds.
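For example, on a controller action (a minimal sketch; DB.GetNames() is the same placeholder data call used earlier in this thread):
using System.Web.Mvc;

public class NamesController : Controller
{
    // The rendered output of this action is cached for 10 seconds.
    [OutputCache(Duration = 10)]
    public ActionResult Index()
    {
        var names = DB.GetNames();   // placeholder data access
        return View(names);
    }
}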
More on this here

ISupportIncrementalLoading retrieving exception

I have implemented the ISupportIncrementalLoading interface to perform incremental loading of a ListView.
The implementation has the following code:
public interface IIncrementalSource<T>
{
Task<IEnumerable<T>> GetPagedItems(int pageIndex, int pageSize);
}
public class IncrementalLoadingCollection<T, I> : ObservableCollection<I>,
ISupportIncrementalLoading where T : IIncrementalSource<I>, new()
{
private T source;
private int itemsPerPage;
private bool hasMoreItems;
private int currentPage;
public IncrementalLoadingCollection(int itemsPerPage = 10)
{
this.source = new T();
this.itemsPerPage = itemsPerPage;
this.hasMoreItems = true;
}
public void UpdateItemsPerPage(int newItemsPerPage)
{
this.itemsPerPage = newItemsPerPage;
}
public bool HasMoreItems
{
get { return hasMoreItems; }
}
public IAsyncOperation<LoadMoreItemsResult> LoadMoreItemsAsync(uint count)
{
return Task.Run<LoadMoreItemsResult>(
async () =>
{
uint resultCount = 0;
var dispatcher = Window.Current.Dispatcher;
var result = await source.GetPagedItems(currentPage++, itemsPerPage);
if(result == null || result.Count() == 0)
{
hasMoreItems = false;
} else
{
resultCount = (uint)result.Count();
await Task.WhenAll(Task.Delay(10), dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
{
foreach (I item in result)
this.Add(item);
}).AsTask());
}
return new LoadMoreItemsResult() { Count = resultCount };
}).AsAsyncOperation<LoadMoreItemsResult>();
}
}
The collection is instantiated like this:
var collection = new IncrementalLoadingCollection<LiveTextCode, LiveText>();
this.LTextLW.ItemsSource = collection;
Where LiveText is a UserForm and LiveTextCode is a class that, among other things, sets up that UserForm.
The UserForm is filled by reading XML files located on a server, so the code must perform async operations and, because of that, the containing scope has to be async as well. For some reason the collection is consumed before it has been filled, so I'm getting a NullReferenceException (or at least that's the hypothesis that makes the most sense to me...).
I'm pretty lost and I don't know how to fix it, if anyone could help it would be much appreciated.
Thanks in advance!
Instead of using this.LTextLW.ItemsSource = collection;, expose an ObservableCollection property, say collection, and bind it to your ListView with ItemsSource="{Binding collection}".
Since it's an ObservableCollection, as soon as the collection is updated the change will be reflected in your View as well.
Alternatively, you can expose the collection with a RaisePropertyChanged event:
private IncrementalLoadingCollection<LiveTextCode, LiveText> _collection;
public IncrementalLoadingCollection<LiveTextCode, LiveText> collection
{
get { return _collection; }
set
{
_collection = value;
RaisePropertyChanged();
}
}
This will handle updating the UI whenever the value changes.

Get Instance using an existing delegate Factory based on Type (or Previous ViewModel)

Based on this page we've created a Wizard that has three steps. Everything works great, but we have one problem with the code given in the link, which is how it creates the next step instance (copy pasted from the link):
protected override IScreen DetermineNextItemToActivate(IList<IScreen> list, int lastIndex)
{
var theScreenThatJustClosed = list[lastIndex] as BaseViewModel;
var state = theScreenThatJustClosed.WorkflowState;
var nextScreenType = TransitionMap.GetNextScreenType(theScreenThatJustClosed);
var nextScreen = Activator.CreateInstance(nextScreenType, state);
return nextScreen as IScreen;
}
Currently, it looks like this in our project:
protected override IWizardScreen DetermineNextItemToActivate(IList<IWizardScreen> list, int lastIndex)
{
var theScreenThatJustClosed = list[lastIndex];
if (theScreenThatJustClosed == null) throw new Exception("Expecting a screen here!");
if (theScreenThatJustClosed.NextTransition == WizardTransition.Done)
{
TryClose(); // Close the entire Wizard
}
var state = theScreenThatJustClosed.WizardAggregateState;
var nextScreenType = _map.GetNextScreenType(theScreenThatJustClosed);
if (nextScreenType == null) return null;
// TODO: CreateInstance requires all constructors for each WizardStep, even if they aren't needed. This should be different!
var nextScreen = Activator.CreateInstance(nextScreenType, state, _applicationService, _wfdRegisterInstellingLookUp,
_adresService, _userService, _documentStore, _windowManager, _fileStore, _fileUploadService, _dialogService,
_eventAggregator, _aanstellingViewModelFactory);
return nextScreen as IWizardScreen;
}
As you can see, we have quite a few parameters we need in some steps. In step 1 we only need like two, but because of the Activator.CreateInstance(nextScreenType, state, ...); we still need to pass all of them.
What I'd like instead is to use a delegate Factory. We use them in more places in our project and let Autofac take care of the rest of the parameters. For each of the three steps we only need a delegate Factory that takes the state.
Because all three use the same delegate Factory with just the state, I've placed this Factory in their base class:
public delegate WizardBaseViewModel<TViewModel> Factory(AggregateState state);
How I'd like to change the DetermineNextItemToActivate method:
protected override IWizardScreen DetermineNextItemToActivate(IList<IWizardScreen> list, int lastIndex)
{
var theScreenThatJustClosed = list[lastIndex];
if (theScreenThatJustClosed == null) throw new Exception("Expecting a screen here!");
if (theScreenThatJustClosed.NextTransition == WizardTransition.Done)
{
TryClose(); // Close the entire Wizard
}
return _map.GetNextScreenFactoryInstance(theScreenThatJustClosed);
}
But now I'm stuck at making the GetNextScreenFactoryInstance method:
public IWizardScreen GetNextScreenFactoryInstance(IWizardScreen screenThatClosed)
{
var state = screenThatClosed.WizardAggregateState;
// This is where I'm stuck. How do I get the instance using the Factory, when I only know the previous ViewModel
// ** Half-Pseudocode
var nextType = GetNextScreenType(screenThatClosed);
var viewModelFactory = get delegate factory based on type?;
var invokedInstance = viewModelFactory.Invoke(state);
// **
return invokedInstance as IWizardScreen;
}
Feel free to change the GetNextScreenFactoryInstance method any way you'd like, as long as I can get the next Step-ViewModel based on the previous one in the map.
NOTE: Other relevant code, can be found in the link, but I'll post it here as well to keep it all together:
The WizardTransitionMap (the only change is that it is no longer a singleton, so we can instantiate a map ourselves):
public class WizardTransitionMap : Dictionary<Type, Dictionary<WizardTransition, Type>>
{
public void Add<TIdentity, TResponse>(WizardTransition transition)
where TIdentity : IScreen
where TResponse : IScreen
{
if (!ContainsKey(typeof(TIdentity)))
{
Add(typeof(TIdentity), new Dictionary<WizardTransition, Type> { { transition, typeof(TResponse) } });
}
else
{
this[typeof(TIdentity)].Add(transition, typeof(TResponse));
}
}
public Type GetNextScreenType(IWizardScreen screenThatClosed)
{
var identity = screenThatClosed.GetType();
var transition = screenThatClosed.NextTransition;
if (!transition.HasValue) return null;
if (!ContainsKey(identity))
{
throw new InvalidOperationException(String.Format("There are no states transitions defined for state {0}", identity));
}
if (!this[identity].ContainsKey(transition.Value))
{
throw new InvalidOperationException(String.Format("There is no response setup for transition {0} from screen {1}", transition, identity));
}
return this[identity][transition.Value];
}
}
Our InitializeMap-method:
protected override void InitializeMap()
{
_map = new WizardTransitionMap();
_map.Add<ScreenOneViewModel, ScreenTwoViewModel>(WizardTransition.Next);
_map.Add<ScreenTwoViewModel, ScreenOneViewModel>(WizardTransition.Previous);
_map.Add<ScreenTwoViewModel, ScreenThreeViewModel>(WizardTransition.Next);
_map.Add<ScreenThreeViewModel, ScreenTwoViewModel>(WizardTransition.Previous);
_map.Add<ScreenThreeViewModel, ScreenThreeViewModel>(WizardTransition.Done);
}
We've changed the code:
The WizardTransitionMap now accepts delegates. Also, instead of retrieving the type by the WizardTransition enum value (Next, Previous, etc.), we now retrieve the Factory invocation based on the next Type (so the inner Dictionary is reversed). This is our current WizardTransitionMap:
using System;
using System.Collections.Generic;
namespace NatWa.MidOffice.CustomControls.Wizard
{
public class WizardTransitionMap : Dictionary<Type, Dictionary<Type, Delegate>>
{
public void Add<TCurrentScreenType, TNextScreenType>(Delegate delegateFactory)
{
if (!ContainsKey(typeof(TCurrentScreenType)))
{
Add(typeof(TCurrentScreenType), new Dictionary<Type, Delegate> { { typeof(TNextScreenType), delegateFactory } });
}
else
{
this[typeof(TCurrentScreenType)].Add(typeof(TNextScreenType), delegateFactory);
}
}
public IWizardScreen GetNextScreen(IWizardScreen screenThatClosed)
{
var identity = screenThatClosed.GetType();
var state = screenThatClosed.State;
var transition = screenThatClosed.NextScreenType;
if (!ContainsKey(identity))
{
throw new InvalidOperationException(String.Format("There are no states transitions defined for state {0}", identity));
}
if (!this[identity].ContainsKey(transition))
{
throw new InvalidOperationException(String.Format("There is no response setup for transition {0} from screen {1}", transition, identity));
}
if (this[identity][transition] == null)
return null;
return (IWizardScreen)this[identity][transition].DynamicInvoke(state);
}
}
}
Our InitializeMap is now changed to this:
protected override void InitializeMap()
{
_map = new WizardTransitionMap();
_map.Add<ScreenOneViewModel, ScreenTwoViewModel>(_screenTwoFactory);
_map.Add<ScreenTwoViewModel, ScreenOneViewModel>(_screenOneFactory);
_map.Add<ScreenTwoViewModel, ScreenThreeViewModel>(_screenThreeFactory);
_map.Add<ScreenThreeViewModel, ScreenTwoViewModel>(_screenTwoFactory);
_map.Add<ScreenThreeViewModel, ScreenThreeViewModel>(null);
}
And our DetermineNextItemToActivate method to this:
protected override IWizardScreen DetermineNextItemToActivate(IList<IWizardScreen> list, int previousIndex)
{
var theScreenThatJustClosed = list[previousIndex];
if (theScreenThatJustClosed == null) throw new Exception("Expecting a screen here!");
var nextScreen = _map.GetNextScreen(theScreenThatJustClosed);
if (nextScreen == null)
{
TryClose();
return ActiveItem; // Can't return null here, because Caliburn's Conductor will automatically get into this method again with a retry
}
return nextScreen;
}
We also removed our entire WizardBaseViewModel and just let every Step-ViewModel implement the IWizardScreen:
public interface IWizardScreen : IScreen
{
AggregateState State { get; }
Type NextScreenType { get; }
void Next();
void Previous();
}
With the following implementation in our ScreenOneViewModel:
public AggregateState State { get { return _state; } }
public Type NextScreenType { get; private set; }
public void Next()
{
if (!IsValid()) return;
NextScreenType = typeof(ScreenTwoViewModel);
TryClose();
}
public void Previous()
{
throw new NotImplementedException(); // Isn't needed in first screen, because we have no previous
}
And the following implementation in our ScreenThreeViewModel:
public AggregateState State { get { return _state; } }
public Type NextScreenType { get; private set; }
public void Next()
{
NextScreenType = typeof(ScreenThreeViewModel); // Own type, because we have no next
TryClose();
}
public void Previous()
{
NextScreenType = typeof(ScreenTwoViewModel);
TryClose();
}
And each Step-ViewModel has its own delegate Factory, like this one for ScreenTwoViewModel:
public delegate ScreenTwoViewModel Factory(AggregateState state);
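With Autofac, such a delegate factory does not need a hand-written implementation: registering the concrete view model is enough and the container provides the delegate. A sketch of the registration, assuming Autofac's standard delegate factory support and that the Factory delegate is nested in ScreenTwoViewModel (someAggregateState is a placeholder for the state you want to pass; the delegate parameter is matched to the constructor parameter named state):
using Autofac;

// Composition root; the other constructor dependencies
// (_applicationService, _adresService, ...) must be registered as well.
var builder = new ContainerBuilder();
builder.RegisterType<ScreenTwoViewModel>().AsSelf();
var container = builder.Build();

// Resolve the nested delegate and create a step from a given state;
// Autofac supplies every constructor parameter except state.
var screenTwoFactory = container.Resolve<ScreenTwoViewModel.Factory>();
ScreenTwoViewModel screenTwo = screenTwoFactory(someAggregateState);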
