Newbie to the whole ASP.NET conundrum!
So I have a set of Web API actions in a controller which will be called in succession from the client. All of the APIs depend on a data model object fetched from the database. Right now I have a DAO layer which fetches this data and transparently caches it, so there is no immediate issue: there is no round trip to the database for each API call. But the DAO layer is maintained by a different team, which makes no guarantee that the cache will continue to exist or that its behaviour won't change.
Now, the properties or attributes of the model object would change, but not too often. So if I refer to the API calls made in succession from a client as a bundle, then I can safely assume that the bundle can query this data once and use it without having to worry about the value changing. How can I achieve this? Is there a design pattern somewhere in the ASP.NET world which I can use? What I would like is to fetch this value at a periodic interval and refresh it in case one of the API calls fails, indicating the underlying values have changed.
There are a few techniques that might be used. First of all, is there a reason for needing a second cache, given that your data access layer already has one?
You can place a cache at the Web API response level by using a third-party library called Strathweb.CacheOutput, and:
CacheOutput will take care of server side caching and set the appropriate client side (response) headers for you.
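As a sketch of how that might look (the controller, action, and DAO call are illustrative names, not from the question; the attribute itself ships with the package):

```csharp
// Illustrative sketch: caching a Web API action's response with
// Strathweb.CacheOutput. ModelController and LoadModelFromDao are made-up names.
using System.Web.Http;
using WebApi.OutputCache.V2;

public class ModelController : ApiController
{
    // Cache server-side for 5 minutes and emit matching client headers
    // (the TimeSpan properties are expressed in seconds).
    [CacheOutput(ServerTimeSpan = 300, ClientTimeSpan = 300)]
    public IHttpActionResult GetModel(int id)
    {
        var model = LoadModelFromDao(id); // stands in for your existing DAO call
        return Ok(model);
    }

    object LoadModelFromDao(int id) => new { Id = id };
}
```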
You can also cache the data from your data access layer with a more manual approach, using MemoryCache from System.Runtime.Caching.
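A minimal sketch of that manual approach, with the key, expiry, and fetch delegate as your own inputs (note that MemoryCache cannot store null, so a null fetch result would need separate handling):

```csharp
// Minimal sketch of caching data-layer results with MemoryCache
// (System.Runtime.Caching). The key naming scheme is illustrative.
using System;
using System.Runtime.Caching;

static class ModelCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrAdd<T>(string key, Func<T> fetch, TimeSpan ttl)
    {
        var cached = Cache.Get(key);
        if (cached != null)
            return (T)cached;

        var value = fetch(); // e.g. a call into the DAO layer
        Cache.Set(key, value, DateTimeOffset.Now.Add(ttl));
        return value;
    }
}
```

Callers would then use something like ModelCache.GetOrAdd("model:42", () => dao.Load(42), TimeSpan.FromMinutes(20)).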
Depending on what infrastructure is available, distributed caches like Cassandra or Redis may be the best choice.
Related
I am looking at options for caching data at the service layer of my web application. The server layer gets data from other systems, and at the web front end I don't want to make a round trip for that data each time; I would like to cache it for, say, 20 minutes, loading it from the cache while it is there and retrieving it again otherwise.
I have looked at Dynacache, which basically looks as if it should do exactly what I want, but I have been having problems getting it working with SimpleInjector, my DI framework. Has anyone used a similar NuGet package, or got an example of doing something similar?
I typically set up my web service layer with as little caching as possible, and leave the caching up to the client. If a website needs to only cache a set of data, then that's its own responsibility. If another web application needs real-time access, then I don't want to hinder that.
If I DO need to cache, say, a static list that hardly changes, then I typically use something like MemoryCache with a rolling timeout. For this, I usually write a wrapper whose .Get() method takes a lambda Func that serves as the source for that key whenever the cached value happens to be null.
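A sketch of such a wrapper, assuming MemoryCache as the backing store (the class and parameter names are invented; SlidingExpiration provides the rolling timeout):

```csharp
// Sketch of a caching-service wrapper whose Get() takes a lambda Func
// as the source of the data whenever the cached value is null.
using System;
using System.Runtime.Caching;

public class CacheService
{
    readonly MemoryCache _cache = MemoryCache.Default;

    public T Get<T>(string key, Func<T> source, TimeSpan rollingTimeout) where T : class
    {
        var value = _cache.Get(key) as T;
        if (value == null)
        {
            value = source(); // cache miss: fall through to the real source
            _cache.Set(key, value, new CacheItemPolicy { SlidingExpiration = rollingTimeout });
        }
        return value;
    }
}
```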
I have a question that is more related to how ASP.NET works, and I will try my best to explain, so here goes.
I have a four-tiered architecture in my app:
Web (ASP.NET Web Application)
Business (Class Library)
Generic CRUD layer (Class Library)
Data (Class Library)
This generic CRUD layer of mine uses reflection to set and read properties of an object; however, I have read that using PropertyInfo is fairly expensive, and thus I want to cache these items.
Here is the scenario:
Two people access the site; let's call them Fred and Jim. Fred creates a Customer, which in turn calls the generic CRUD layer and caches the property info of the Customer class within System.RuntimeCache. Jim, seconds later, also creates a Customer.
My question is: will the two requests from Fred and Jim cause the obtaining of property info to be triggered twice? Or will ASP.NET retrieve it from the cache the second time, i.e. will Jim's request be quicker because the property info is obtained from the cache?
My thinking is that because my CRUD layer is a class library and doesn't have access to System.Web.Cache, the property info won't be cached across all sessions/users?
No, it will issue new queries for each request (unless you've coded otherwise).
There are multiple layers of caching that can happen in an ASP.NET application (browser, proxies, server-side response caching, intermediate object caching, DA-layer caching), all of which can be configured and used. But nothing is ever cached in ASP.NET (or in any application) unless someone specifically wrote code/rules/configuration to do so.
As Alexei Levenkov points out, you have to configure caching to happen explicitly: you're not going to get automatic caching of specific property values.
But let me point out that while using PropertyInfo is expensive compared to writing code that accesses properties directly, it pales in comparison to the cost of a database round trip or the latency between your server and your end user. Tools like Entity Framework, WebForms, and MVC all make extensive use of reflection because the performance cost it incurs is well worth the reduced maintenance costs.
Avoid premature optimization.
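If you do decide to cache explicitly, a minimal sketch looks like this: a static ConcurrentDictionary is shared by every request in the application, regardless of which class library it lives in, so the reflection cost is paid once per type rather than once per user (the class name is illustrative):

```csharp
// Sketch of a process-wide PropertyInfo cache. Statics in a class
// library are shared across all requests in the app domain, so the
// second caller gets the cached array without re-running reflection.
using System;
using System.Collections.Concurrent;
using System.Reflection;

static class PropertyCache
{
    static readonly ConcurrentDictionary<Type, PropertyInfo[]> Cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] For(Type type) =>
        Cache.GetOrAdd(type, t => t.GetProperties(BindingFlags.Public | BindingFlags.Instance));
}
```

A CRUD method would then call PropertyCache.For(typeof(Customer)) instead of typeof(Customer).GetProperties().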
I'm trying to understand the Repository pattern in order to implement it in my app, and I'm stuck on it in some ways.
Here is a simplified algorithm of how the app accesses data:
The first time, the app has no data. It needs to connect to a web service to get this data, so all the low-level logic of interacting with the web service will be hidden behind the WebServiceRepository class. All the data passed from the web service to the app will be cached.
The next time the app requests the data, the cache will be searched before the web service is called. The cache is represented as a database plus XML files and is accessed through the CacheRepository.
The cached data can be in three states: valid (can be shown to user), invalid (old data that can't be shown) and partly-valid (can be shown but must be updated as soon as possible).
a) If the cached data is valid, then after we get it we can stop.
b) If the cached data is invalid or partly valid, we need to access the WebServiceRepository. If the call to the web service succeeds, the requested data will be cached and then shown to the user (I think this must be implemented as a second call to the CacheRepository).
c) So the entry point for data access is the CacheRepository; the web service will be called only if there is no fully valid cache.
I can't figure out where to place the logic that verifies the cache (valid/invalid/partly valid), or where to place the call to the WebServiceRepository. I think this logic can't be placed in either of the repositories, because that would violate the Single Responsibility Principle (SRP) from SOLID.
Should I implement some sort of RepositoryService and put all the logic in it? Or maybe there is a way to link the CacheRepository and the WebServiceRepository?
What are patterns and approaches to implement that?
Another question is how to get partly-valid data from the cache and then request the web service in a single method call. I'm thinking of using delegates and events. Are there other approaches?
Please give me some advice: what is the correct way to link all the functionality listed above?
P.S. Maybe I described it all a bit confusingly; I can give additional clarification if needed.
P.P.S. By CacheRepository (and WebServiceRepository) I mean a set of repositories: CustomerCacheRepository, ProductCacheRepository and so on. Thanks #hacktick for the comment.
if your webservice gives you crud methods for different entities, create a repository for every entity root.
if there are customers, create a CustomerRepository. if there are documents with attachments as children, create a DocumentRepository that returns documents with attachments as a property.
a repository is only responsible for a specific type of entity (ie. customers or documents). repositories are not used for "cross-cutting concerns" such as caching (ie. your example of a CacheRepository).
inject (ie. via StructureMap) an IDataCache instance into every repository.
a call to Repository.GetAll() returns all entities for the current repository. every entity is registered in the cache. note the id of that object in the cache.
a call to Repository.FindById() checks the cache first for the id. if the object is valid return it.
notifications about invalidation of an object are routed to the cache. you could implement client-side invalidation, or push messages from the server to the client, for example via message queues.
information about whether an object is currently valid should not be stored in the entity object itself but only in the cache.
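A sketch of that layout, with all type names invented for illustration: the repository owns entity access, the injected IDataCache owns caching, and a Func stands in for the real web-service client.

```csharp
// Sketch: one repository per entity root, with caching injected as a
// cross-cutting dependency rather than modelled as another repository.
using System;
using System.Collections.Generic;

interface IDataCache
{
    bool TryGet<T>(object id, out T value);
    void Put<T>(object id, T value);
    void Invalidate(object id);
}

class InMemoryDataCache : IDataCache
{
    readonly Dictionary<object, object> _store = new Dictionary<object, object>();
    public bool TryGet<T>(object id, out T value)
    {
        if (_store.TryGetValue(id, out var o)) { value = (T)o; return true; }
        value = default(T);
        return false;
    }
    public void Put<T>(object id, T value) => _store[id] = value;
    public void Invalidate(object id) => _store.Remove(id);
}

class Customer { public int Id; public string Name; }

class CustomerRepository
{
    readonly IDataCache _cache;
    readonly Func<int, Customer> _fetchFromService; // stands in for the web-service call

    public CustomerRepository(IDataCache cache, Func<int, Customer> fetchFromService)
    {
        _cache = cache;
        _fetchFromService = fetchFromService;
    }

    public Customer FindById(int id)
    {
        // check the cache first; only hit the web service on a miss
        if (_cache.TryGet<Customer>(id, out var cached))
            return cached;

        var customer = _fetchFromService(id);
        _cache.Put(id, customer);
        return customer;
    }
}
```

An IoC container such as StructureMap would wire the IDataCache into each repository's constructor.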
What I want is pretty simple conceptually but I can't figure out how it would be best to implement such a thing.
In my web application I have services which access repositories which access EF which interacts with the SQL Server database. All of these are instanced once per web request.
I want to have an extra layer between the repositories and EF (or the services and the repositories?) which statically keeps track of objects being pulled from and pushed to the database.
The goal, assuming DB access only happens through the application, would be that we know for a fact that unless some repository accesses EF and commits a change, the object set didn't really change.
An example would be:
Repository invokes method GetAllCarrots();
GetAllCarrots() performs a query on SQL Server retrieving a List<Carrot>. If nothing else happens in between, I would like to prevent this query from actually being made on SQL Server each time, even across different web requests; I want to be able to handle that scenario.
Now, if a call to BuyCarrot() adds a Carrot to the table, then I want that to invalidate the static cache for Carrots, which would make GetAllCarrots(); require a query to the database once again.
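The scheme described above can be sketched like this, using names from the question (the static field makes the cache shared across web requests within one process; a server farm would need a distributed mechanism instead):

```csharp
// Sketch of an invalidate-on-write static cache: GetAllCarrots() reads
// through it, and BuyCarrot() would call Invalidate() after committing.
using System;
using System.Collections.Generic;

class Carrot { public int Id; }

static class CarrotCache
{
    static readonly object Gate = new object();
    static List<Carrot> _all; // null means "must re-query the database"

    public static List<Carrot> GetAll(Func<List<Carrot>> query)
    {
        lock (Gate)
        {
            if (_all == null)
                _all = query(); // only hits SQL Server after an invalidation
            return _all;
        }
    }

    public static void Invalidate()
    {
        lock (Gate) { _all = null; } // call this from BuyCarrot()
    }
}
```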
What are some good resources on database caching?
You can use LinqToCache for this.
It allows you to use the following code inside your repository:
var queryTags = from t in ctx.Tags select t;
var tags = queryTags.AsCached("Tags");
foreach (Tag t in tags)
{
...
}
The idea is that you use SqlDependency to be notified when the result of a query changes. As long as the result doesn't change you can cache it.
LinqToCache keeps track of your queries and returns the cached data when queried. When a notification is received from SQL Server, the cache is reset.
I recommend reading http://rusanu.com/2010/08/04/sqldependency-based-caching-of-linq-queries/.
I had a similar challenge, and due to EF's usage and restrictions, I decided to implement the cache as an additional service between the client and the server's service, using an IoC container and monitoring all service methods that could affect the cached data.
Of course this is not a perfect solution when you have a farm of servers running the services; if the goal is to support multiple servers, I would implement it using SqlDependency.
I don't know very much about WCF...
I want to do a clean job of serving entities to the client side using DataContracts. Imagine two DataContracts, "System" and "Building": a "System" may have many "Buildings" and a "Building" may have many "Systems", so we have a many-to-many relationship between them.
In the service contract model, "System" has a "Buildings" property that is a collection, and "Building" likewise has a collection of "Systems".
The WCF service uses DataSets for the underlying data access (with stored procedures for CRUD), and I have a table between SYSTEM and BUILDING representing the relationship.
So how can I implement this scenario cleanly? I want clients to be able to get a simple representation of the "Buildings" in a "System"; for example, I could use:
system = GetSystem(id);
foreach (Building building in system.Buildings) {
// do whatever with each building...
}
Thank you!
I think this question is too broad to cover in full detail, but I can give you a few pointers to get you started.
Forget about WCF and build the Data Access Layer (DAL). This should be a library which contains code to query the database and return strongly typed objects. This library might contain a method called GetBuildings() which returns a list of Building objects. The library might work with DataSets (and other database specific types), but should not expose DataSets to external callers.
Now that you have a library which can be used to get data from the database, write the WCF service. Code in the service component should call into the DAL and turn that information into DataContract objects to be sent over the web service boundary. Don't try to represent all your data in the DataContract objects - you want your data packets to be relatively small, so don't include information that isn't required. Balance this with trying to make as few web service calls as possible. In designing your DataContract classes, consider what the client application will be doing with the data.
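For the System/Building example, the DataContract classes might be sketched like this (all names are illustrative, and "System" is renamed SystemDto to avoid clashing with the System namespace; carrying only ids on one side keeps the many-to-many cycle out of the serialized payload):

```csharp
// Sketch of DataContract classes for the many-to-many example.
// BuildingDto carries system ids rather than full System objects,
// which keeps payloads small and breaks the serialization cycle.
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class SystemDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<BuildingDto> Buildings { get; set; }
}

[DataContract]
public class BuildingDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<int> SystemIds { get; set; } // ids only: breaks the cycle
}
```

The client-side GetSystem(id) call would then return a SystemDto whose Buildings collection is ready to iterate.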
Write the Service Client component. This is code which makes calls to the WCF Service, and turns that information into Entity objects.
The final (and most rewarding) step is to write the client application logic. Now you have another set of issues to confront about how you will structure the client code (I recommend using MVVM). The client application should call into the Service Client component and use the data to meet the requirements of your application.
By following the above 4 steps, you should end up with:
A Data Access Layer that talks to the database.
A Service Layer, which knows nothing about the database but is able to fetch data from the Data Access Layer.
A Service Client layer, which knows nothing about databases but knows how to fetch data from the Service Layer.
Application code, which knows nothing about databases or web services, but calls into the Service Client layer to get data and presents the data to a User Interface.
Everyone will do this differently, but the main thing is to separate concerns by using a layered architecture.