.NET Data Caching Framework - How to cache data - C#

Currently I'm writing a web application that needs to access the database quite often to retrieve records.
Now, I want to retrieve records and store them in the cache, and that's not a problem. But suppose I also need to cache in a Windows application: what kind of object is best suited for in-memory caching then? IList<>, List<>, an array, ...?
So, in fact, I would like to set up something general, where the appropriate type of object to store the items in is chosen based on the type of application.

Take a look at the System.Runtime.Caching namespace. Maybe it fits your scenario.
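For a concrete picture, here is a minimal sketch of a helper built on MemoryCache from that namespace. The cache key and the loader delegate are placeholders for your own data access. Since System.Runtime.Caching has no dependency on System.Web, the same code works in a web application and in a Windows application, which addresses the "something general" requirement:

```csharp
using System;
using System.Runtime.Caching;

// Minimal sketch: a generic cache-or-load helper over MemoryCache.
// The cache key and the load delegate are supplied by the caller.
public static class RecordCache
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public static TItem GetOrLoad<TItem>(string key, Func<TItem> load)
        where TItem : class
    {
        var cached = Cache.Get(key) as TItem;
        if (cached != null)
            return cached;

        var item = load();
        var policy = new CacheItemPolicy
        {
            // Evict the entry ten minutes after it was added.
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
        };
        Cache.Set(key, item, policy);
        return item;
    }
}
```

A caller would then write something like `RecordCache.GetOrLoad("customers", LoadCustomersFromDb)`, where `LoadCustomersFromDb` is whatever query your data layer exposes.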

Related

C# Web API - Store And Persist Data inside In-Memory Cache

I am writing a REST API which needs to provide integration services with my organization's ActiveDirectory, specifically to query user and group data and then provide an endpoint in the API for an autocomplete field query.
My organization's ActiveDirectory is very large and it has about 130K user and group objects combined.
Querying all of these objects and storing them in our current backing store (MongoDB) takes approximately 40 minutes.
We decided to check if there is an option to skip the usage of Mongo and store all of the queried AD objects in the Web API memory.
Looking at other questions on SO, I realized a singleton wouldn't work, because the data stored inside it will be lost every time the IIS application pool is reset, and then the API can't provide data for about 40 minutes, which can't happen.
I also looked at this question, which refers to the System.Runtime.Caching namespace. But the MemoryCache provided by that namespace will also lose all of its data upon a reset of the IIS application pool.
My question is: is there any other solution for storing the data from AD inside the Web API's memory? We currently want to avoid using a persistent store (either a relational or a document DB) to hold the information, but if no viable solution appears we might stick with Mongo (unless a better store is offered).
In-memory sounds like a bad plan from a scalability standpoint. If you want to load-balance your API, you could potentially have multiple copies of this data in memory, and they could all be slightly different, leading to different results depending on which instance of your API handles a request.
In-memory probably implies a list, or better still a dictionary. You could consider Redis, which supports multiple nodes and stores its data in memory. It's similar to a dictionary in that you store key/value pairs as <string, string>. All the instances of your API could point to the same Redis cluster, so you would gain consistency, scale, and performance.
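As a sketch of that idea, here is what the key/value usage could look like with the StackExchange.Redis client. The client library choice and the connection string are assumptions, not something the answer prescribes:

```csharp
using StackExchange.Redis;

// Sketch using the StackExchange.Redis client; "localhost:6379" is a
// placeholder for the real Redis cluster endpoint.
public class DirectoryCache
{
    private static readonly ConnectionMultiplexer Connection =
        ConnectionMultiplexer.Connect("localhost:6379");

    public void StoreUser(string userId, string serializedUser)
    {
        IDatabase db = Connection.GetDatabase();
        // Plain <string, string> key/value pairs, as described above.
        // Every API instance pointing at the same cluster sees the same data.
        db.StringSet("aduser:" + userId, serializedUser);
    }

    public string GetUser(string userId)
    {
        IDatabase db = Connection.GetDatabase();
        return db.StringGet("aduser:" + userId);
    }
}
```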
You could also consider Service Fabric, which has special collections that allow you to share state across stateful and stateless services. Like Redis, the data is serialized and stored across the Service Fabric cluster. It's very fault tolerant, with built-in HA and DR.
Nothing would beat a singleton dictionary in terms of raw performance, though, so it depends on what you need now and in the future.
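For comparison, a minimal sketch of that singleton dictionary, using ConcurrentDictionary so concurrent reads and writes are safe. Keep the caveat from the question in mind: this data disappears on every application pool recycle, so it must be cheap to rebuild:

```csharp
using System.Collections.Concurrent;

// Sketch of an in-process singleton cache. Fastest possible lookups,
// but the contents are lost whenever the process (or app pool) restarts.
public sealed class AdObjectCache
{
    private static readonly AdObjectCache instance = new AdObjectCache();
    public static AdObjectCache Instance { get { return instance; } }

    private readonly ConcurrentDictionary<string, string> objects =
        new ConcurrentDictionary<string, string>();

    private AdObjectCache() { }

    public void AddOrUpdate(string key, string value)
    {
        objects[key] = value;
    }

    public bool TryGet(string key, out string value)
    {
        return objects.TryGetValue(key, out value);
    }
}
```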

Where to place a collection (cache)?

In an ASP.NET or MVC website project (or any other), where and how should a collection of users taken from the database be placed?
For example, I have a table of users in the database, and I want to load it once into memory as a dictionary of <UserId, User> and perform all the operations on it (and from it to the database).
The collection should be accessible from all of the pages/controllers.
What will be the "Best practices" way to do that?
Should I create a static object called Users that will contain the dictionary and some methods (add, remove, etc.), also static?
Or should it be a non-static object with a static dictionary inside it? And if so, where should it be placed?
Or maybe I am thinking of it in a totally wrong way?
Sorry if my question is not 100% clear, I just gave an example that I think can illustrate the scenario.
It seems to me like a basic issue but I am really confused about the right way of designing it.
For our WCF server, we used a static object that contained a table of users and their authorizations. This worked well and prevented frequent database round-trips on every connection.
The real challenge was ensuring this table was up-to-date when user accounts change. We implemented a state refresh mechanism. When someone saves a change to user accounts, the web service detects this change and refreshes its state information.
Note that .NET Framework 4.0 and higher have a MemoryCache class built in.
First of all, using static objects (static properties) in a web application is a horrible idea. Concurrency becomes an issue, and weird behavior, with user values changing due to other users' input (since a static object is shared across the whole app domain), becomes apparent.
A static read-only object is an exception to the above.
Probably the best way to handle the scenario in your question is using caching. Cache the list, and then rebuild the cache each time after any updates.
If you are using .NET 4.0 or above, take a look at the System.Runtime.Caching namespace. It is similar to the old System.Web.Caching namespace from earlier versions, but it is now available to the entire .NET Framework and is also extensible if needed.
This will take care of "where to put the data".
Then you can implement a business logic layer that handles pulling data from the cache and sending it to the UI, communicates with the data layer, updates the cache after any database updates are performed, etc.
That's how I'd do something like this.
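As a sketch of that layering, with hypothetical User and IUserRepository types standing in for your own entities and data layer:

```csharp
using System.Collections.Generic;
using System.Runtime.Caching;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IUserRepository
{
    IDictionary<int, User> LoadAllUsers();
    void Save(User user);
}

// Business logic layer: reads come from the cache, writes go to the
// database and invalidate the cache so the next read rebuilds it.
public class UserService
{
    private const string CacheKey = "AllUsers";
    private readonly IUserRepository repository;

    public UserService(IUserRepository repository)
    {
        this.repository = repository;
    }

    public IDictionary<int, User> GetUsers()
    {
        var users = MemoryCache.Default.Get(CacheKey) as IDictionary<int, User>;
        if (users == null)
        {
            users = repository.LoadAllUsers(); // one database round-trip
            MemoryCache.Default.Set(CacheKey, users, new CacheItemPolicy());
        }
        return users;
    }

    public void UpdateUser(User user)
    {
        repository.Save(user);
        // Invalidate so the next read rebuilds the cache from the database.
        MemoryCache.Default.Remove(CacheKey);
    }
}
```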

Caching objects and stuff in NHibernate

I've written my own caching layer for my objects that come out of data access. My reasoning here is I'd like my data access layer to do just that -- data access. I don't really want it to worry about caching, and I'd only like to go in to that layer when I need to fetch data out of the database. Perhaps this is not the right way to think about things -- please let me know if I'm off track.
Anyway, there is at least one issue that I've run into so far. In one scenario, I load an object from NHibernate and stick it in the cache in one request. In the next request I get that object from the cache, modify it, and go back down to NHibernate to save it. Obviously NHibernate pukes, in this particular instance with an "Illegal attempt to associate a collection with two open sessions" exception.
So my question is, I guess: is there anything I should be aware of or do to make this work? Or should I just use the second-level cache that's built into NHibernate?
NHibernate has caching for a reason.. use it :)
You'll find there are quite a few options for a second-level cache provider that give you much more flexibility, more cheaply than you could build it yourself. A perfect example is something like memcached, if you decide you need to run a service on multiple systems.
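That said, if you do keep a hand-rolled cache, the specific "two open sessions" error can usually be avoided by reattaching the cached (detached) object to the new session before saving. A minimal sketch, assuming a reasonably recent NHibernate version where ISession.Merge is available:

```csharp
using NHibernate;

// Sketch: the cached object was loaded by a session that is long gone.
// Merge copies its state onto an instance tracked by the new session,
// so the collection is never associated with two open sessions.
public static class CachedEntitySaver
{
    public static void Save(ISessionFactory sessionFactory, object cachedEntity)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Merge(cachedEntity);
            tx.Commit();
        }
    }
}
```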

Why is the ASP.NET profile designed in such a horrible way?

In the current project I'm working on, we are using the ASP.NET profile to store information about users, such as their involvement in a mailing list.
Now, in order to get a list of all the users in the mailing list, I cannot simply do a database query, as the ASP.NET profile table is, simply put, awful.
For those who do not know, the profile table has two main columns, the 'keys' column, and 'values' column, and they are organised as so:
Keys:
Key1:dataType:startIndex:endIndex:Key2:dataType:... etc.
Values:
value1value2value3...
This is pretty much impossible to query using SQL, so the only option to find users that have a specific property is to load up a list of ALL the users and loop through it.
In a site with over 150k members, this is understandably very slow!
Are there specific reasons why the Profile was designed like this, or is it just a terrible way of doing dynamically-generated data?
I agree that it's a pretty bad way to store profile data, but I suspect the use case was to get the profile data for a user with a single query, in such a way that it can be extended to handle any number of different profile properties. If you don't like it, you can always write your own custom profile provider that separates each value out into its own column. Having implemented various membership and role providers, I don't think that this would be too complicated a task. The number of methods doesn't look too large.
The whole point of the Provider model is that it abstracts away the data source. The idea is that, as a developer, you don't need to know how the data is stored or in what format - you just have a common set of methods for accessing it. This means you can swap providers without changing a single line of code. It also means that you specifically do not try and access data direct from the data source (eg. going straight to the database) by bypassing the provider methods - that defeats the whole point.
The default ASP.NET profile provider is actually very powerful, as it can not only store simple value types (strings, ints, etc.) but can also store complex objects and entire collections in a single field. Try doing that in a relational database! However, the downside of this genericism is that it comes at a cost in efficiency. Which is why, if you have a specific need, you are supposed to implement your own provider. For example, see SearchableSqlProfileProvider - The Searchable SQL Profile Provider.
Of course, your third option is to simply not use the profile provider - nobody is forcing you to! You could implement your own classes/database entirely, as you would have had to do in other frameworks.
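To make the custom-provider route from the first answer concrete, here is a skeleton of a provider that could store each property in its own queryable column. The persistence itself is deliberately left as TODO comments; the management methods are stubbed only so the skeleton compiles:

```csharp
using System;
using System.Configuration;
using System.Web.Profile;

public class ColumnBasedProfileProvider : ProfileProvider
{
    public override string ApplicationName { get; set; }

    public override SettingsPropertyValueCollection GetPropertyValues(
        SettingsContext context, SettingsPropertyCollection collection)
    {
        var values = new SettingsPropertyValueCollection();
        foreach (SettingsProperty property in collection)
        {
            var value = new SettingsPropertyValue(property);
            // TODO: read this property for the current user from its own column.
            values.Add(value);
        }
        return values;
    }

    public override void SetPropertyValues(
        SettingsContext context, SettingsPropertyValueCollection collection)
    {
        foreach (SettingsPropertyValue value in collection)
        {
            // TODO: write value.PropertyValue to the column named value.Name.
        }
    }

    // The remaining management methods must be overridden as well; they are
    // stubbed here to keep the focus on property storage.
    public override int DeleteProfiles(ProfileInfoCollection profiles) { throw new NotImplementedException(); }
    public override int DeleteProfiles(string[] usernames) { throw new NotImplementedException(); }
    public override int DeleteInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { throw new NotImplementedException(); }
    public override int GetNumberOfInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { throw new NotImplementedException(); }
    public override ProfileInfoCollection GetAllProfiles(ProfileAuthenticationOption authenticationOption, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); }
    public override ProfileInfoCollection GetAllInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); }
    public override ProfileInfoCollection FindProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); }
    public override ProfileInfoCollection FindInactiveProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); }
}
```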
I have implemented various custom providers (Membership/SiteMap/Roles, etc.) and haven't really looked at the ASP.NET profile provider after seeing that kind of thing (name/value pairs or XML data). I am not sure, but I think the profile is primarily intended for user preferences/settings, where the settings are only required for a specific user; I don't think the profile is meant for user "data" that can be queried.
Note: This is an assumption based on what I think I know; please comment if it is otherwise.

ASP.NET - Caching vs. Static Variable for storing a Dictionary

I am building a web-store with many departments and categories. They are stored in our database and accessed often.
We are using URL rewriting so almost every request within the store generates a lookup. We also need to iterate over the data frequently to generate menus for the main store and the department pages.
This information will not change often so I'm thinking that I should load the database into a dictionary to speed up the information retrieval.
I know the standard practice is to load data into the application cache; however, I assume that there is some level of serialization that occurs during caching, and for a large data structure I'm thinking the overhead would be significant.
My impulse is to put the dictionary in a static variable in one of the related classes. I would, however, like to get some input on this. Am I right in thinking that this method would be faster? Is it horrible practice? Is there a better way that I'm missing?
I can't seem to find much information on this and I'd really appreciate any information that you can share. Thanks!
The Application and Cache collections do not serialize the objects you pass into them; they store the actual reference. Retrieving an object from the Cache will not be an expensive operation, no matter how large the object is. Always stick with the Cache objects unless you have a very good reason not to; it's just good practice.
The only other thing worth mentioning is to make sure you think about multithreaded access to this collection. You're going to end up with some serious issues very quickly if you don't lock properly.
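A common way to handle that locking is the double-checked pattern below; LoadDepartmentsFromDatabase is a hypothetical stand-in for the real lookup:

```csharp
using System.Collections.Generic;
using System.Web;

// Double-checked locking: only one thread builds the dictionary;
// the rest wait briefly and then reuse the cached reference.
public static class DepartmentLookup
{
    private static readonly object SyncRoot = new object();
    private const string CacheKey = "Departments";

    public static IDictionary<string, string> GetDepartments()
    {
        var departments =
            HttpRuntime.Cache[CacheKey] as IDictionary<string, string>;
        if (departments == null)
        {
            lock (SyncRoot)
            {
                // Re-check inside the lock: another thread may have
                // populated the cache while we were waiting.
                departments =
                    HttpRuntime.Cache[CacheKey] as IDictionary<string, string>;
                if (departments == null)
                {
                    departments = LoadDepartmentsFromDatabase();
                    HttpRuntime.Cache.Insert(CacheKey, departments);
                }
            }
        }
        return departments;
    }

    private static IDictionary<string, string> LoadDepartmentsFromDatabase()
    {
        // Placeholder for the real database query.
        return new Dictionary<string, string>();
    }
}
```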
Well, I don't think it's much work to rewrite the code to use a static field instead of the application cache if there's a need to do so. I'd personally use the cache first. There's no need for premature optimization; have you measured the performance? It may behave just right with the application cache object. Maybe it even works well with DB queries? :)
So, my answer is - use the cache and see how it works.
memcached is your friend! (but could be overkill if you're not scaling out)
Any idea how large your dictionary would be in application cache? I'd be tempted to recommend that as a good first option.
IMHO, generally speaking, if you have control over updates to the underlying object, you should use static storage. Otherwise, if you are dependent on a third-party API for data retrieval, use a caching technology.
