Caching and ASP.NET - C#

I have a question that is more about how ASP.NET works, and I will try my best to explain, so here goes.
I have a four-tiered architecture in my app:
Web (ASP.NET Web Application)
Business (Class Library)
Generic CRUD layer (Class Library)
Data (Class Library)
This generic CRUD layer of mine uses reflection to set and read properties of an object; however, I have read that using PropertyInfo is fairly expensive, so I want to cache these items.
Here is the scenario:
Two people access the site; let's call them Fred and Jim. Fred creates a Customer, which in turn calls the generic CRUD layer and caches the property info of the Customer class within System.Runtime.Caching. Jim then, seconds later, also creates a Customer.
My question is: will the two requests from Fred and Jim cause the property info to be obtained twice? Or will ASP.NET retrieve it from the cache the second time, i.e. Jim's request is quicker because the property info comes from the cache?
My thinking is that because my CRUD layer is a class library and doesn't have access to System.Web.Cache, the property info won't be cached across all sessions/users?

No, it will issue new queries for each request (unless you've coded otherwise).
There are multiple layers of caching that can happen in an ASP.NET application (browser, proxies, server-side response caching, intermediate object caching, DA-layer caching), all of which can be configured and used. But nothing is ever cached in ASP.NET (or in any application) unless someone specifically writes code/rules/configuration to do so.
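To illustrate "coded otherwise", here is a minimal sketch of such a cache (the PropertyCache name is mine, not from the question). A static field in a class library lives once per AppDomain, so it is shared across all requests and users without any reference to System.Web:

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Minimal sketch: a static, thread-safe cache of PropertyInfo arrays
// keyed by type. Statics live once per AppDomain, so every request -
// Fred's and Jim's alike - shares the same entries.
public static class PropertyCache
{
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> Cache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] GetProperties(Type type)
    {
        // Reflection runs only on the first request for each type.
        return Cache.GetOrAdd(type, t => t.GetProperties());
    }
}
```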

As Alexei Levenkov points out, you have to configure caching to happen explicitly: you're not going to get automatic caching of specific property values.
But let me point out that while using PropertyInfo is expensive compared to writing code to directly access properties, it pales in comparison to the cost of a database round-trip or the latency between your server and your end user. Tools like Entity Framework, WebForms, and MVC all make extensive use of reflection because the performance cost it incurs is totally worth the reduced maintenance costs.
Avoid premature optimization.

Reuse a data object model across multiple requests

Newbie to the whole ASP.NET conundrum!
So I have a set of Web APIs (actions) in a controller which will be called in succession from the client. All the APIs depend on a data model object fetched from the database. Right now I have a DAO layer which fetches this data and transparently caches it, so there is no immediate issue: there is no round trip to the database for each API call. But the DAO layer is maintained by a different team, which makes no guarantee that the cache will continue to exist or that its behaviour won't change.
Now, the properties or attributes of the model object change, but not too often. So if I refer to the API calls made in succession from a client as a bundle, then I can safely assume that the bundle can query this data once and use it without having to worry about the value changing. How can I achieve this? Is there a design pattern somewhere in the ASP.NET world which I can use? What I would like is to fetch this value at a periodic interval and refresh it in case one of the API calls fails, indicating that the underlying values have changed.
There are a few techniques that might be used. First of all, is there really a need for a second cache? Your data access layer already has one, right?
You can place a cache at the Web API response level by using a third-party library called Strathweb.CacheOutput, and:
CacheOutput will take care of server side caching and set the appropriate client side (response) headers for you.
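A sketch of what that looks like on an action (controller and method names are mine; check the attribute's options against the library version you install):

```csharp
using System.Web.Http;
using WebApi.OutputCache.V2;

// Illustrative controller: the [CacheOutput] attribute caches the
// response server-side and sets the client cache headers.
public class ModelController : ApiController
{
    [CacheOutput(ClientTimeSpan = 60, ServerTimeSpan = 60)]
    public IHttpActionResult Get(int id)
    {
        return Ok(LoadModelFromDao(id)); // placeholder for your DAO call
    }

    private object LoadModelFromDao(int id)
    {
        return new { Id = id }; // stand-in for the real model object
    }
}
```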
You can also cache the data from your data access layer with a more manual approach, using the MemoryCache from System.Runtime.Caching.
Depending on the available infrastructure, a distributed cache such as Redis (or a distributed store like Cassandra) may be the best choice.
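A minimal sketch of the manual approach (the Model type, the cache key, and the five-minute window are all placeholders of mine): cache the object with a short absolute expiration so a bundle of calls shares one fetch, and evict it when a call detects stale data.

```csharp
using System;
using System.Runtime.Caching;

// Stand-in for the data model object the DAO returns.
public class Model { }

public static class ModelCache
{
    private const string Key = "model";

    public static Model Get(Func<Model> load)
    {
        var model = MemoryCache.Default.Get(Key) as Model;
        if (model == null)
        {
            model = load(); // one database fetch per cache window
            MemoryCache.Default.Set(Key, model, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
            });
        }
        return model;
    }

    // Call this when an API call fails in a way that signals stale data.
    public static void Invalidate()
    {
        MemoryCache.Default.Remove(Key);
    }
}
```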

Where to place a collection (cache)?

In an ASP.NET or MVC website project (or any other), where and how should a collection of users taken from the database be placed?
For example, I have a table of users in the database, and I want to load it once into memory as a Dictionary<UserId, User> and perform all the operations on it (and from it to the database).
The collection should be accessible from all of the pages/controllers.
What would be the "best practices" way to do that?
Should I create a static object called Users that will contain the dictionary and some methods (add, remove, etc.), also static?
Or should it be a non-static object with a static dictionary inside it? And if so, where should it be placed?
Or maybe I am thinking of it in a totally wrong way?
Sorry if my question is not 100% clear, I just gave an example that I think can illustrate the scenario.
It seems to me like a basic issue but I am really confused about the right way of designing it.
For our WCF server, we used a static object that contained a table of users and their authorizations. This worked well and prevented frequent database round-trips on every connection.
The real challenge was ensuring this table was up-to-date when user accounts change. We implemented a state refresh mechanism. When someone saves a change to user accounts, the web service detects this change and refreshes its state information.
Note that .NET Framework 4.0 and higher have a MemoryCache class built in.
First of all, using static objects (static properties) in a web application is a horrible idea. Concurrency becomes an issue, and weird things become apparent, such as user values changing due to other users' input (since a static object is shared across the whole app domain).
A static read-only object is an exception to the above.
Probably the best way to handle the scenario in your question is caching: cache the list, and rebuild the cache after any update.
If using .NET 4.0 or above, take a look at the System.Runtime.Caching namespace. It is similar to the old System.Web.Caching namespace from earlier versions, but is now available to the entire .NET Framework, and is also extensible if needed.
This will take care of "where to put the data".
Then you can implement a business logic layer that handles pulling data from the cache and sending it to the UI, communicates with the data layer, and updates the cache after any database updates are performed.
That's how I'd do something like this.
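A minimal sketch of that approach, assuming hypothetical User and IUserRepository types (they stand in for your real data layer):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

// Hypothetical types standing in for your data layer.
public class User { public int Id { get; set; } }

public interface IUserRepository
{
    IEnumerable<User> GetAll();
    void Save(User user);
}

// The BLL piece: reads the dictionary from the cache and rebuilds it
// lazily after any update.
public class UserCache
{
    private const string Key = "AllUsers";
    private readonly IUserRepository _repository;

    public UserCache(IUserRepository repository)
    {
        _repository = repository;
    }

    public IDictionary<int, User> GetUsers()
    {
        var users = MemoryCache.Default.Get(Key) as IDictionary<int, User>;
        if (users == null)
        {
            users = _repository.GetAll().ToDictionary(u => u.Id);
            MemoryCache.Default.Set(Key, users, new CacheItemPolicy());
        }
        return users;
    }

    public void Update(User user)
    {
        _repository.Save(user);          // write through to the database
        MemoryCache.Default.Remove(Key); // next read rebuilds the dictionary
    }
}
```

Treat the cached dictionary as read-only once it is handed out; mutating it in place would reintroduce the concurrency problems described above.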

Architecture of an ASP.NET MVC application

I'm in the process of doing the analysis of a potentially big web site, and I have a number of questions.
The web site is going to be written in ASP.NET MVC 3 with the Razor view engine. In most examples I find, controllers directly use the underlying database (using the domain/repository pattern), so there's no WCF service in between. My first question is: is this architecture suitable for a big site with a lot of traffic? It's always possible to load-balance the site, but is this a good approach? Or should I make the site use WCF services that interact with the data?
Question 2: I would like to adopt CQS principles, which means that I want to separate the querying part from the commanding part. So the querying part will have a different model (optimized for the views) than the commanding part (optimized for business intent and containing only the properties needed to complete the command), but both act on the same database. Do you think this is a good idea?
Thanks for the advice!
For scalability, it helps to separate back-end code from front-end code. So if you put UI code in the MVC project and as much processing code as possible in one or more separate WCF and business logic projects, not only will your code be clearer but you will also be able to scale the layers/tiers independently of each other.
CQRS is great for high-traffic websites. I think CQRS, properly combined with a good base library for DDD, is good even for low-traffic sites because it makes business logic easier to implement. The separation of data into a read-optimized model and a write-optimized model makes sense from an architectural point of view also because it makes changes easier to do (maybe some more work, but it's definitely easier to make changes without breaking something).
However, if both act on the same database, I would make sure that the read model consists entirely of Views so that you can modify entities as needed without breaking the Read code. This has the advantage that you'll need to write less code, but your write model will still consist of a full-fledged entity model rather than just an event store.
EDIT to answer your extra questions:
What I like to do is use a WCF Data Service for the Read model. This technology (specific to .NET 4.0) builds an OData (= REST + Atom with LINQ support) web service on top of a data model, such as an Entity Framework EDMX.
So, I build a Read model in SQL Server (Views), then build an Entity Framework model from that, then build a WCF Data Service on top of that, in read-only mode. That sounds a lot more complicated than it is, it only takes a few minutes. You don't need to create yet another model, just expose the EDMX as read-only. See also http://msdn.microsoft.com/en-us/library/cc668794.aspx.
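A sketch of that read-only wiring (MyReadModelEntities is a placeholder for the EDMX-generated context; the service name is mine):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Sketch: expose an EF context over OData in read-only mode.
// MyReadModelEntities stands in for your EDMX-generated context.
public class ReadModelService : DataService<MyReadModelEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Every entity set is queryable; none are writable.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}
```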
The Command service is then just a one-way regular WCF service, the Read service is the WCF Data Service, and your MVC application consumes them both.

Entity Framework Brainmush Kerfuffle Spectacular

I've been wading through all the new EF and WCF stuff in .NET 4 for a major project in its early stages, and I think my brain's now officially turned to sludge. It's the first large-scale development work I've done in .NET since the 1.1 days. As usual everything needs to be done yesterday, so I'm playing catch-up.
This is what I need to hammer together - any sanity checks or guidance would be greatly appreciated. The project itself can be thought of as essentially a posh e-commerce system, with multiple clients, both web and Windows-based, connecting to central servers with live data.
On the server side:
A WCF service, the implementation using EF to connect to an SQL Server data store (that will probably end up having many hundreds of tables and all the other accoutrements of a complex DB system)
Underlying classes used for EF and WCF must be extensible at both a property and class (i.e. field and record) level, for validation, security, high-level auditing and other custom logic
On the client side:
WCF client
Underlying classes the same as the server side, but with some of the customisations not present
When an object is updated on the client, preferably only the modified properties should be sent to the server
The client-side WCF API details will probably end up being published publicly, so sensitive server-side implementation hints should not be leaked through the API unless absolutely unavoidable - this includes EF attributes in properties and classes
General requirements:
Network efficiency is important, insofar as we don't want to make it *in*efficient from day one - I can foresee data traffic and server workload increasing exponentially within a few years
The database gets developed first, so the (POCO, C#) classes generated by EF will be based on it. Somehow they need to be made suitable for both EF and WCF on both client and server side, and have various layers of customisation, but appear as if custom-written for each scenario
Sorry this is so open-ended, but as I said, my brain's completely turned to sludge and I've confused myself to the point where I'm frozen.
Could anyone point me in the general direction of how to build the classes to do all this? Honestly, thanks very, very much.
A few hints in no particular order:
POCO would be the way to go to avoid dependencies on EF classes in your data objects.
Consider adding an intermediate layer based on data transfer objects to cope with your "only modified properties are passed" requirement (this will be the tricky part; a sketch follows after the list below). These DTOs will be passed between service and clients to exchange modifications.
Use a stateless communication model (no WCF session) to be able to implement load-balancing and fail-over very easily.
Share the POCO between client and services, use subclassing on the server to add the internal customized information.
You would end up on the server side with at least:
A project for the service contracts and the DTO (shared)
A project for the POCO (shared)
A project for the WCF service layer
A project for business logic (called by the WCF layer)
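Here is a minimal sketch of the tricky "only modified properties" part (the DTO and its members are illustrative names, not from the question): each setter records its property name, so the server can apply just the values the client actually touched.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;

// Illustrative DTO: each setter records its property name so the
// server applies only the values the client actually modified.
[DataContract]
public class CustomerDto
{
    public CustomerDto()
    {
        ChangedProperties = new HashSet<string>();
    }

    [DataMember]
    public int Id { get; set; }

    // Travels with the DTO so the server knows which members to update.
    [DataMember]
    public HashSet<string> ChangedProperties { get; set; }

    private string _name;

    [DataMember]
    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            // DataContractSerializer skips constructors, so guard
            // against a null set while the DTO is being deserialized.
            if (ChangedProperties != null)
            {
                ChangedProperties.Add("Name");
            }
        }
    }
}
```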
I have a few notes on your requirements:
A WCF service, the implementation using EF to connect to an SQL Server data store (that will probably end up having many hundreds of tables and all the other accoutrements of a complex DB system)
Are you going to build only a data access layer exposed as a set of WCF services, or heavy business logic exposed as WCF services? This strongly affects the rest of your requirements. In the former case, check WCF Data Services. In the latter case, check my other notes.
Underlying classes used for EF and WCF must be extensible at both a property and class (i.e. field and record) level, for validation, security, high-level auditing and other custom logic
Divide your data classes into two sets. Internally, your services will use POCO classes implemented as domain objects. Domain objects will be materialized/persisted by EF (you need .NET 4.0), and they will also contain custom logic. If you want to build a heavy business layer, you should also think about Domain-Driven Design: repositories, aggregate roots, etc.
Underlying classes the same as the server side, but with some of the customisations not present
The second set of data classes will be data transfer objects, which will be exposed by WCF services and shared between server and clients. Your domain objects will be converted to DTOs when sending data to the client, and DTOs will be converted back to domain objects when data returns from the client.
Your WCF services should be built on top of the business logic - domain objects / domain services. WCF services should expose chunky interfaces (instead of chatty CRUD interfaces), where one DTO carries data for several domain operations. This will also help you improve performance by reducing the number of round trips between client and service.
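For illustration, a chunky contract might look like this (the contract and DTO names are mine):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative names only: one chunky operation carries a whole order
// in a single DTO instead of a chatty sequence of CRUD calls.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderConfirmation PlaceOrder(OrderRequest order);
}

[DataContract]
public class OrderRequest
{
    [DataMember]
    public int CustomerId { get; set; }

    [DataMember]
    public int[] ProductIds { get; set; }
}

[DataContract]
public class OrderConfirmation
{
    [DataMember]
    public int OrderId { get; set; }
}
```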
When an object is updated on the client, preferably only the modified properties should be sent to the server
I think this is achievable only by correctly defining your DTOs, or perhaps by some custom serialization.
Network efficiency is important, insofar as we don't want to make it *in*efficient from day one - I can foresee data traffic and server workload increasing exponentially within a few years
As already mentioned, you have to design your service to be ready for load balancing, and you should also think about (distributed) caching - check AppFabric. A good idea is to use stateless services.
The database gets developed first, so the (POCO, C#) classes generated by EF will be based on it.
This seems like a simple requirement, but you can easily model a database that will be hard to use with Entity Framework.
The main advice:
Your project looks big and complex, so the first thing you should do is hire some developers with experience in WCF, EF, etc. Each of these technologies has pitfalls, so it is a really big risk to use them at such a scale without experienced people.

ADO.NET Data Services and their place in overall design

ADO.NET Data Services is the next generation of data access layer within applications. I have seen a lot of examples using it directly from a UI layer such as Silverlight or AJAX to get data. This is almost like having a two-tiered system, with the business layer completely removed. Shouldn't the DAL be accessed by the business layer, and not directly from the UI?
ADO.NET Data Services is one more tool to be evaluated in order to move data.
.NET RIA Services is another one. Much better, I would say.
I see ADO.NET Data Services as a low-level service to be used by some high-level framework. I would not let my UI talk directly to it.
The main problem I see with ADO.NET Data Services has more to do with security than with anything else.
For simple/quick tasks, on an intranet, and if you are not too picky about your design, it can be useful (IMO).
It can be quite handy when you need to quickly expose data from an existing database.
I say handy, but it would not be my first choice, as I avoid "quick and dirty" solutions as much as I can.
Those solutions are like ghosts: they always come back to haunt you.
ADO.NET Data service is the next generation of data access layer within applications
I have no idea where you got that from! Perhaps you're confusing ADO.NET Data Services with ADO.NET Entity Framework?
One shouldn't assume that everything Microsoft produces is of value to every developer. In my opinion, ADO.NET Data Services is a quick way to create CRUD services, which maybe have a few other operations defined on the entity, but the operations are all stored procedures. If all you need is a database-oriented service, then this may be what you want. Certainly, there's relatively little reason to do any coding for a service like this, except in the database.
But that doesn't mean that ADO.NET Data Services "has a place in the overall design" of every project. It's something that fills a need of enough customers that Microsoft thought it worthwhile to spend money developing and maintaining it.
For that matter, they also thought ASP.NET MVC was a good idea... :-)
In my opinion, the other answers underestimate the importance of ADO.NET Data Services. Though using it directly in your application brings some similarity to a two-tiered system, other Microsoft products such as .NET RIA Services and Windows Azure storage services are based on it. Contrary to the phrase in one of the answers, "For simple/quick tasks, on an intranet, and if you are not too picky about your design, it can be useful", it may be useful for public websites, including websites in ASP.NET MVC.
Dino Esposito describes the driving force for ADO.NET Data Services in his blog:
http://weblogs.asp.net/despos/archive/2008/04/21/the-quot-driving-force-quot-pattern-part-1-of-n.aspx
"ADO.NET Data Services (aka, Astoria)
Driving force: the need of building richly interactive Web systems.
What's that in abstract: New set of tools for building a middle-tier or, better yet, the service layer on top of a middle-tier in any sort of application, including enterprise class applications.
What's that in concrete: provides you with URLs to invoke from hyperlinks to bring data down to the client. Better for scenarios where a client needs a direct|partially filtered access to data. Not ideal for querying data from IE, but ideal for building a new generation of Web controls that breathe AJAX. And just that."
