I have a custom centralized MemoryCache and wrapper to be exposed over network. The objects to be stored are expected to be large and accessed frequently. The network traffic might become a bottleneck just because of the size of the data.
So, I want to return some kind of token that clients can check first to decide whether they need a new value from the cache server. I see two options:
Generate a checksum of the object in the cache server and return it to the client when the object is added to the cache.
Record the current time tick in the cache server and return it to the client when the object is added to the cache.
In both cases, the checksum or time tick would also be stored in the cache under a different key so that clients can query it separately.
I could avoid this extra entry if either of these values were available from MemoryCache directly. Any ideas?
Thanks in advance.
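For reference, option 1 (the checksum) could be sketched roughly like this; the class and key names are illustrative assumptions, not an existing API:

```csharp
using System;
using System.Runtime.Caching;
using System.Security.Cryptography;

public static class ChecksumCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Adds the serialized object and returns its checksum to the caller.
    public static string Add(string key, byte[] serializedValue)
    {
        string checksum;
        using (var sha = SHA256.Create())
        {
            checksum = Convert.ToBase64String(sha.ComputeHash(serializedValue));
        }
        var expiry = DateTimeOffset.UtcNow.AddHours(1);
        Cache.Set(key, serializedValue, expiry);
        Cache.Set(key + ":checksum", checksum, expiry); // companion key
        return checksum;
    }

    // Clients call this cheap check first; they re-fetch only on mismatch.
    public static bool IsCurrent(string key, string clientChecksum)
    {
        return (Cache.Get(key + ":checksum") as string) == clientChecksum;
    }
}
```

The client keeps the checksum returned by Add and later calls IsCurrent before requesting the full object, so only a short string crosses the network in the common case.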
I am currently working on a task where I need to synchronize the data, example:
A client requests data from the API (which has access to the database), receives it, and saves it into a list. That list is then used for some time. Meanwhile, another client accesses the same API and database and makes changes to the same table. After a while, the first client wants to refresh its data. Since the table is quite big, say ten thousand records, grabbing the entire table again is inefficient; I would like to fetch only the records that have been modified, deleted, or newly created, and then update the list client 1 already has. If the client has no records (at start-up), it classifies all of them as newly created and grabs them all. I would like to do as much of the checking as possible on the client's side.
How would I go about this? I have fields such as Modified, LastSync, and IsDeleted, so I can find the records I need, but the main issue is how to do this efficiently with minimal repetition.
At the moment I fetch all the rows up front. Then, when I want to synchronize, I get just the minimal required info (LastSync, Modified, IsDeleted, Key) from the API, compare it with what I have on the client, and send back only the keys of the rows that don't match, so the server returns the full values for those keys. But I am not sure about the efficiency of this either, and I don't know how to update the current list with those values efficiently. The only way I can think of is a nested loop comparing keys and updating the list, but I know that's not a good approach.
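The nested loop can be avoided by indexing the client's list by key first. A hypothetical sketch (the Record type and its fields are assumptions about the data shape):

```csharp
using System.Collections.Generic;
using System.Linq;

// Assumed shape of a synced row (names are illustrative).
public class Record
{
    public int Key { get; set; }
    public string Data { get; set; }
    public bool IsDeleted { get; set; }
}

public static class SyncHelper
{
    // Merges freshly fetched rows into the client's list in O(n + m)
    // instead of the O(n * m) nested-loop comparison.
    public static List<Record> Merge(List<Record> current, List<Record> changed)
    {
        var byKey = current.ToDictionary(r => r.Key);
        foreach (var row in changed)
        {
            if (row.IsDeleted)
                byKey.Remove(row.Key);   // drop deleted rows
            else
                byKey[row.Key] = row;    // update existing or add new
        }
        return byKey.Values.ToList();
    }
}
```

Building the dictionary once replaces the inner loop with a constant-time lookup per changed row.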
This will never work as long as you do not do the checks on the server side. There is always a chance that someone posts between your GET call to the server and your POST call to the server. Whatever check you can do on the client side, you can also do on the server side.
Depending on your DB and other setup, you could accomplish this by adding triggers on the tables/fields that you want to track in the database, and setting up a cache (could be Redis, Memcached, Aerospike, etc.) plus a cache-refresh service. If something in the DB is added/updated/deleted, your trigger writes to a separate archive table in the DB. You can then set up a job from, e.g., Jenkins (or have a Kafka connector; there are many ways to accomplish this) to poll the archive tables and the original table for changes based on an ID, a date, or whatever criteria you need. Anything that has changed is refreshed and then written back to the cache. Your API then wouldn't access the DB at all; it would just call the cache for the most recent data whenever a client requests it. Your separate service would be responsible for synchronizing the data, accessing the database, and keeping the cache up to date.
We are storing a serialized object in a Redis cache. We want to check the age of the cached entry before retrieving new data and updating the cache. If it is less than 10 minutes old, the data is unlikely to have changed in that time, so we pull from the cache and send it to the API output. If not, we still return the cached data, request a fresh pull of data, update the cache if new data is available, and notify the web listener to get the latest update.
Is there a way, with the existing cache API, to check the age of a Redis cache key? If not, any workarounds?
I think you're looking at it from the wrong perspective.
What would happen if you were to use Redis' EXPIRE with a TTL of 10 minutes for each of your cache's keys? Redis will keep the data for 10 minutes and then expire it, simplifying the application's logic. You would no longer need to actively check the "age" of the key: if it isn't in Redis, you need to fetch it (for reference, look up the so-called "Cache-Aside" pattern).
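A rough sketch of that cache-aside flow, assuming the StackExchange.Redis client and a Redis instance reachable on localhost (both assumptions, not stated in the question):

```csharp
using System;
using StackExchange.Redis;

public class ProductCache
{
    private readonly IDatabase _db =
        ConnectionMultiplexer.Connect("localhost").GetDatabase();

    public string GetProduct(string key, Func<string> fetchFromSource)
    {
        // Try the cache first; Redis removes the key itself once the TTL lapses.
        RedisValue cached = _db.StringGet(key);
        if (cached.HasValue)
            return cached;

        // Cache miss: fetch fresh data and store it with a 10-minute expiry
        // (StringSet with a TimeSpan sets the TTL, equivalent to SET + EXPIRE).
        string fresh = fetchFromSource();
        _db.StringSet(key, fresh, TimeSpan.FromMinutes(10));
        return fresh;
    }
}
```

If you still need the remaining lifetime of a key for some reason, `_db.KeyTimeToLive(key)` returns it, but with cache-aside you usually don't have to look.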
I have implemented a REST service using WebAPI2. The service manages different sessions, which are created and joined by the different clients accessing the service.
A session contains information about access to application functionality and about the participants who have joined that session.
Every second, each client gets the session information and access list from the server for synchronization purposes. According to the access changes, client functionality is enabled or disabled.
I am using MemoryCache class to store session info in WebAPI service as below.
public static class SessionManager
{
    private static object objForLock = new object();

    public static List<Session> SessionCollection
    {
        get
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                return memoryCache.Get("SessionCollection") as List<Session>;
                // return HttpContext.Current.Application["SessionCollection"] as List<Session>;
            }
        }
        set
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                // Set overwrites an existing entry; Add returns false and
                // leaves the old value in place if the key already exists.
                memoryCache.Set("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
                // HttpContext.Current.Application["SessionCollection"] = value;
            }
        }
    }
}
My problem is the inconsistent behavior of the cache.
When clients send the synchronization call, it gives inconsistent results: for some requests clients get proper data, and for others they get null, alternating after a few requests.
I attached a debugger and monitored the object when the result is null; memoryCache.Get("SessionCollection") is indeed null then. After some consecutive requests it is proper again. I don't understand why this object is not persistent.
As an alternative, I have tried HttpContext.Current.Application["SessionCollection"] as well, but the same issue is there.
I have read about app pool recycling, which discards the whole cache after a particular time. If my cached object is discarded by an app pool recycle, how can I get it back?
Can someone please help me get out of this issue? Thanks in advance.
You should store client-specific information in Session instead of Cache. Cache is for the whole application (shared).
However, that's not recommended either, as Web API is built with REST in mind, and RESTful services should be stateless (APIs do not cache state). Stateless applications have many benefits:
Reduced memory usage.
Better scalability: your application scales better. Imagine what happens if you store the information of millions of clients at the same time.
Better in load-balancing scenarios: every server can handle every client without losing state.
No session expiration problems.
In case you want to store client state, you could do it anyway. Please try the suggestions in the following post: ASP.NET Web API session or something?
In general, caching state locally on the web server is bad (both Session and a local MemoryCache). The cache can be lost for many reasons:
App pool recycling.
A load-balanced environment.
Multiple worker processes in IIS.
Regarding your requirements:
Every second, each client gets the session information and access list from
the server for synchronization purposes. According to the access changes,
client functionality is enabled or disabled.
I'm not sure if you want to update the other clients with new access list immediately when a client sends synchronization call. If that's the case, SignalR would be a better choice.
Otherwise, you could just store the updated access list somewhere (shared cache or even in database) and update the other clients whenever they reconnect with another request.
@ScottHanselman wrote about a bug in .NET 4 here. I hope this fix helps you:
The temporary fix:
Create memory cache instance under disabled execution context flow
using (ExecutionContext.SuppressFlow()) {
    // Create memory cache instance under disabled execution context flow
    return new YourCacheThing.GeneralMemoryCache(…);
}
The Hotfix is http://support.microsoft.com/kb/2828843 and you can request it here: https://support.microsoft.com/contactus/emailcontact.aspx?scid=sw;%5BLN%5D;1422
Just a caution: MemoryCache keeps data in memory on a single server. So if you have multiple web servers (behind a load balancer), that cache will not be available to the other servers. You also use a single cache key, "SessionCollection", so that data is shared across all clients. If you need to store data in the cache uniquely per client, you need to return a token (GUID) to the client and use that token to get/update the data in the cache on subsequent requests.
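A hypothetical sketch of that token approach (the class and method names are illustrative, not part of MemoryCache):

```csharp
using System;
using System.Runtime.Caching;

public static class ClientStateCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Issues a fresh token for a new client and caches its state under it.
    public static string Register(object clientState)
    {
        string token = Guid.NewGuid().ToString("N");
        Cache.Set(token, clientState, DateTimeOffset.UtcNow.AddHours(5));
        return token; // the client sends this back on every request
    }

    // Looks up the state for a returning client; null means it expired.
    public static object Get(string token)
    {
        return Cache.Get(token);
    }
}
```

Each client's data now lives under its own key, so one client's update can never clobber another's.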
Try introducing a class-level variable, so your code will look like below (some code removed for clarity):
private static readonly MemoryCache _memCache = MemoryCache.Default;
....
return _memCache.Get("SessionCollection") as List<Session>;
...
_memCache.Add("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
How can I use server side caching on a C# WCF Rest service?
For example, I generate a lot of data into one object (not from a database) and I do not want to regenerate it for every call a (random) user makes. How can I cache the object?
Verifying question: is it right that an HttpContext cache object is only shared between a specific client and the host?
Is it right that a HttpContext cache object is only between a specific client and the host?
No, it is a shared object, as per msdn
There is one instance of the Cache class per application domain. As a
result, the Cache object that is returned by the Cache property is the
Cache object for all requests in the application domain.
Depending on the load, you may also use a database for caching (depending on what you call caching). There are also in-memory databases specifically optimised for distributed caching; see memcached, Redis, and Memcache vs. Redis?
The HttpContext.Cache is local to the Application Domain, and so is shared by all code that runs in that Application Domain. It is certainly fast and flexible enough for most applications.
How you would use it depends, of course, on your needs. You may build the key from the input parameters, for instance, like in this example (note the delimiter, so that different parameter combinations cannot collide into the same key):
public MyObject GetMyObject(int size, string cultureId, string extra)
{
    // Input validation first
    ...
    // Determine cache key; the delimiter keeps e.g. (12, "ab") and (1, "2ab") distinct
    string cacheKey = size.ToString() + "|" + cultureId + "|" + extra;
    // rest of your code here
}
I've read that you can store classes directly into a session variable i.e.
Session["var"] = myclass;
My question is how the memory management works. Does it automatically serialize this into the session on the client side?
Or does it hold the data for the instance of the class in server memory, and just hold a reference in the session object?
ASP.Net will store your object in a static nested dictionary in memory on the server.
It then sends a cookie to the client with the session ID.
Next time the client sends a request, ASP.Net will retrieve the session associated with that ID from the outer dictionary, then give you the inner dictionary containing the objects in that session.
(This is the way the default session provider works; other providers can serialize objects to SQL Server, or do something else entirely)
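Conceptually, the in-process store behaves like a dictionary of per-session dictionaries. This is only a toy illustration of that idea, not the actual ASP.NET implementation:

```csharp
using System.Collections.Generic;

// Toy model of the in-process session store: the outer key is the session
// ID from the cookie, the inner dictionary holds that session's objects.
public static class InProcStore
{
    private static readonly Dictionary<string, Dictionary<string, object>> Sessions =
        new Dictionary<string, Dictionary<string, object>>();

    public static Dictionary<string, object> GetSession(string sessionId)
    {
        if (!Sessions.TryGetValue(sessionId, out var session))
        {
            session = new Dictionary<string, object>();
            Sessions[sessionId] = session; // first request: create the session
        }
        return session;
    }
}
```

Something like GetSession(cookieId)["var"] = myclass then stores a live object reference on the server; only the session ID ever travels to the client.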
You don't store classes in the session but instances of those classes. And yes, the default session store is memory. You can use SQL Server as the session store as well, however; then some serialization will take place.
The session data is not available on the client side.
It depends on how you have sessions set up in ASP.NET. By default the session resides in the server's memory and is basically just a dictionary. The user is given a session cookie which is used to identify which of these session dictionaries to grab for a given request (one session dictionary per user).
The object never gets sent to the client, because the client only has a cookie, cookies are too small to hold much of anything, and besides, sending an object to the client would likely be a security problem.
You can configure ASP.NET to use a database instead of memory to store the session; that is detailed here.
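For illustration, switching to the SQL Server session store is done in web.config; the connection string below is a placeholder you would replace with your own:

```xml
<!-- Hypothetical web.config fragment; adjust the connection string. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=.;Integrated Security=True"
                timeout="20" />
</system.web>
```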
The default session store is in memory, which is the easiest to use because the objects don't necessarily need to be serializable.
If you change the session store to, let's say, a SQL Server database, then all the objects you store in the session will need to be serializable, or else they will throw an exception.
By default your session only lasts 20 minutes. You can change this in the web.config to be as long as you want, but after that time is up the session expires and is removed from memory.