I am going through an ASP.NET MVC 5 tutorial and learned about caching, but I could not work out what determines whether I should cache on the client or on the server.
Here is the code snippet.
For client:
[OutputCache(Duration = 86400, Location = OutputCacheLocation.Client)]
public ActionResult SelectLocation()
{
    return View();
}
For server:
[OutputCache(Duration = 86400, Location = OutputCacheLocation.Server)]
public ActionResult SelectLocation()
{
    return View();
}
Question: Can someone tell me when I should apply client caching and when I should use server caching? And what downsides or consequences should I look out for?
In regards to OutputCache, "client" caching simply means that a Cache-Control header and/or an Expires header will be sent with the response, indicating that the client may cache the document. Typically the client, especially if it's a web browser, will choose to do so, and then will not need to make a new request if the same resource is needed again. However, the browser may still occasionally make a conditional request (for example, with an If-Modified-Since header) just to check whether there is a new version of the resource.
"Server" caching means, still in regards to OutputCache, that the server will cache the response locally, usually in memory. This means that as long as the cache is still valid, the server will not actually render the action again, but rather, will just serve up the cached resource, instead.
The main difference, then, between the two is that the server cache will be used for all requests for that resource, regardless of what client is currently making the request, while client cache will obviously just be limited to that one particular client. The server will not need to render the action again for that client, but will for the next client that comes along.
However, the default is Any, which includes server and client caching (as well as other locations). In other words, server and client caching are not mutually exclusive, and usually you'd do both to minimize both the work the server needs to do and the amount of requests it needs to respond to.
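To make that concrete, here is a minimal sketch (the action name, duration, and parameters are illustrative, not taken from the question; OutputCacheAttribute lives in System.Web.Mvc and OutputCacheLocation in System.Web.UI):
// OutputCacheLocation.Any is the default: the response may be cached on the server,
// on downstream proxies, and in the client's browser, all at the same time.
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Any, VaryByParam = "none")]
public ActionResult News()
{
    return View();
}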
Have a look here; it is very nicely explained:
By default, content is cached in three locations: the web server, any proxy servers, and the user's browser. You can control where the content is cached by changing the Location parameter. When you cache on the server, every user receives the same content; when caching is client-side only, the cached content can differ per user.
The Location parameter has the default value Any, which is appropriate for most scenarios, but sometimes you need more control over the cached data.
Suppose you want to cache the logged-in user's information. You should cache that data in the client's browser, since it is specific to a single user. If you cached it on the server, every user would see the same information, which would be wrong.
Cache data on the server when it is common to all users, or when it is sensitive and you would rather not leave it in a client cache.
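As a rough sketch of that split (the durations and action names are made up for illustration):
// User-specific output: cache it only in that user's browser, never in the shared server cache.
[OutputCache(Duration = 600, Location = OutputCacheLocation.Client)]
public ActionResult MyDashboard()
{
    return View();
}
// Output that is identical for every user: safe to cache once on the server and reuse for everyone.
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Server, VaryByParam = "none")]
public ActionResult PublicCatalogue()
{
    return View();
}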
One point of view could be the cache invalidation scenario. When caching on the client side, you would need to change the URL the client requests to avoid a cache hit and force the response to be recalculated. When caching on the server side, you can invalidate the cached content more easily. See this question: How to programmatically clear outputcache for controller action method
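For completeness, the answer to that linked question boils down to a single call; a minimal sketch (the path is just an example):
// Removes the server-side output cache entry for a specific URL.
// Note: this does nothing for copies already sitting in a browser's or proxy's cache.
HttpResponse.RemoveOutputCacheItem("/Home/SelectLocation");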
As described in https://msdn.microsoft.com/en-us/library/system.web.ui.outputcachelocation(v=vs.110).aspx
Client will instruct the browser to cache the HTML in its own cache.
Pros: faster, since the response is cached in the browser, and it does not take up server memory.
Cons: it depends on the user's browser settings (cache size, expiration, etc.), and the user can clear the cache on their own.
Server will keep the cached HTML on the server.
Pros: does not depend on browser rules (the client is sent no-cache), and you control the cache, not the user.
Cons: slower than client caching, since every request still travels over the network, and it takes up server memory.
The reason there are a variety of options for cache control is simple: there is no universally correct answer that applies to all sites.
A "business card" site that is fairly static in its design and content, and is the only site on the server, could pretty much be set to cache everywhere for an indefinite period. However, if that server is actually hosting a thousand sites, then we have to start worrying about the server cache and its viability, because IIS will start dumping cache items if memory gets low, so we may not want that server cache at all.
If we have an e-commerce site that is very high traffic, with product changes and additions on an hourly basis, we would want to reduce the max-age so that the content remains up to date. But then again, content generation for these more demanding applications can slow the server down due to all of the dynamic content processing, especially if the site is on a shared server.
There are plenty of resources on the internet, on MSDN, and here that you can review to help determine what is best for you. Across the wide variety of sites I have worked with, in both single and shared environments, most use Server and Client cache locations; some rely on the Last-Modified header and others use an ETag.
To describe the app: it has a default page that checks the user's role from a request header, stores the user ID in the session, and redirects to the corresponding page. Every other page checks whether the session has a value; if not, it redirects back to the default page.
This has been tested in my dev environment and works without any issue. However, when I hosted it in IIS (on an AWS EC2 instance), it started behaving very strangely. If the application's bindings are left at the default, I can browse it on the server using http://localhost:26943/ with no issue.
(screenshot: default bindings)
However, when I change the bindings to a hostname and browse using http://testing.com/, I find that the session value containing the user ID is empty.
(screenshot: hostname bindings)
I have tried several methods, including:
Add Session["init"] = 0 in Global.asax
Change cookieless=true in web.config
Change sessionState's mode to "StateServer"
Redirect to "~/page.aspx" instead of "page.aspx"
Only the cookieless change worked for me, but it shows the session ID in the URL, which I doubt is the correct approach.
Details of app:
.NET Framework 4.8
Uses WCF service
Current session state info is sessionState mode="InProc" cookieless="false" timeout="60"
Configured c:\Windows\System32\Drivers\etc\hosts to add 127.0.0.1 testing.com
Tested using IE 11
Since AWS effectively runs your site on a server farm, in-proc sessions are going to be VERY flaky and problematic. Those massive cloud systems can spin up your web server multiple times, and it is a wild guess where the next page will be served from. If pages are served across different instances of the IIS server?
You are going to lose session values. As noted, even some unhandled code errors will cause an app-pool reset. All of these issues add up to easy and frequent loss of sessions.
I would suggest you adopt SQL Server based session management. This should eliminate a zillion issues that can cause a session reset. I like in-proc: memory-based sessions are fast, and since you are not writing the next Facebook, typical server loads are next to nothing (which again favors in-proc sessions). However, since you have a server farm and some application errors will become problematic, adopt SQL Server based sessions, and 99 if not 100% of your session resets and losses will go away.
This suggestion is all the more warranted since you are using AWS: you have little control over the VMs they run, and their behind-the-scenes "fabric" controller may, for failover and redundancy reasons, be running multiple copies of your server. So, adopt SQL Server based session management.
Your existing code that reads session values, such as HttpContext.Current.Session["myvariable"], does not need to change; only where the session data is stored changes.
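For reference, moving to SQL Server sessions is a configuration change rather than a code change. A rough sketch of the web.config entry (the server name is a placeholder, and it assumes the session-state schema has already been installed with something like aspnet_regsql.exe -S YourSqlServer -E -ssadd -sstype p):
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=YourSqlServer;Integrated Security=True"
              cookieless="false"
              timeout="60" />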
My Goal: Cache basically all the pages all the time so that users rarely ever have to hit my CMS for content.
I have a C#/.NET MVC 5 web app deployed in Azure. I have all the OutputCache attributes on my controllers set to one week [604800 s] (the content rarely changes). I assume, maybe naively, that the cached outputs are stored in memory in Azure. However, when I start my app and crawl the website, I'd expect the Azure memory to fill up with cached content, but in practice there is only a bump in memory utilization, and it goes back to its "resting state" of around 60% utilization after about 5 minutes. I've also tried using MemoryCache, but with a similar result: a bump in memory usage that goes back down to normal shortly after.
In any case, the result is that the pages act like they weren't cached. For example, if I crawl 1 page and visit it, it loads in about 1 second (it's cached). If I crawl 2000 pages and visit a random one, it loads in 3-4 seconds (it's not cached). I've tested this by putting a datetime in the view itself.
So... the bottom line is: cached = fast, not cached = average. I want it to be fast!
I've looked at Redis Cache, which could be a way to do this, and seems easy enough... but my gut says this should be basic functionality (since it's built into the framework).
Azure Web Apps do support the in-memory OutputCache. We can easily confirm this with the following code: the output datetime will not change when you refresh the TestCache page.
[OutputCache(Duration = 3600)]
public ActionResult TestCache()
{
return Content(DateTime.Now.ToString());
}
But there are some problems with using the in-memory cache in an Azure Web App.
The first problem is that it limits you to the memory available on your web app instance, which may cause an out-of-memory issue when you cache a large amount of page output data. Your web app will be restarted if memory fills up, and if the web app restarts, all the cached content is lost. Another issue is that your application may run on multiple load-balanced instances: the next request might go to another instance, which then builds its own copy of the ASP.NET output cache data. These redundant copies of the page output on each instance consume a lot of extra memory.
To avoid the problems above, I suggest you use Redis Cache to store the cached content. For how to use Redis Cache, the link below is provided for reference.
ASP.NET Output Cache Provider for Azure Redis Cache
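For orientation only, wiring up that provider is typically just configuration once the Microsoft.Web.RedisOutputCacheProvider NuGet package is installed; a rough sketch (the host name and access key are placeholders):
<caching>
  <outputCache defaultProvider="RedisOutputCache">
    <providers>
      <add name="RedisOutputCache"
           type="Microsoft.Web.Redis.RedisOutputCacheProvider"
           host="yourcache.redis.cache.windows.net"
           accessKey="your-access-key"
           ssl="true" />
    </providers>
  </outputCache>
</caching>
With this in place the existing [OutputCache] attributes keep working unchanged; only the storage for the cached output moves from instance memory to Redis.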
I have a web application with private/protected methods and private/protected variables.
First, I would like to know: when a web server already has a connection established for a certain web application and then receives a new connection, does it run a new instance of the web application for that new connection, thus re-initializing all the variables in the web application, just like a program launched on a computer?
I have googled the Internet and I am terribly confused!
Second, I am using the Visual Studio development server, and I have learned that it doesn't accept connections from other computers; I have gotten around this with port-forwarding software. So the question is: by doing this, does the VS2010 web server see each different request as a new request, or as the same request, since I am forwarding them from an app on the local computer?
Finally, if I have a web application open in one browser and then decide to open it in another browser while keeping the current browser open, is this treated as a new request or a post-back?
The app domain is constant (though it can be recycled) and is created only on the first request (it can also be set up before that).
That is to say, all the static variables are initialized only once,
but all the non-static classes on which your request depends are instantiated on every request.
So basically, all your pages in classic ASP.NET Web Forms and all the controllers in ASP.NET MVC are instantiated on every request.
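A minimal sketch to illustrate the difference (the controller and field names are made up):
// A new DemoController instance is created for every request.
public class DemoController : Controller
{
    // Static: initialized once per app domain and shared by all requests (not thread-safe as written).
    private static int _requestsSinceAppStart = 0;

    // Instance: re-initialized for every request, because the controller itself is new each time.
    private int _perRequestValue = 0;

    public ActionResult Index()
    {
        _requestsSinceAppStart++;   // keeps growing until the app pool recycles
        _perRequestValue++;         // always ends up as 1 here
        return Content("app domain total: " + _requestsSinceAppStart +
                       ", this request: " + _perRequestValue);
    }
}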
Read more about it here: http://www.codeproject.com/Articles/73728/ASP-NET-Application-and-Page-Life-Cycle
It's a little more complicated than that. The process is optimised for multiple connections and is stateless; however, caching can be used to improve scalability: whatever does not need to be reprocessed can simply be reused. http://www.dotnetfunda.com/articles/article821-beginners-guide-how-iis-process-aspnet-request.aspx is a good place to start understanding what goes on, and http://msdn.microsoft.com/en-us/library/bb470252%28v=vs.100%29.aspx is a somewhat drier MS version. "IIS ASP page life cycle" is a good Google search.
The web application instance handles many, many requests, and shared state (cache, etc.) is used very effectively across those requests, whether for a single session or multiple concurrent sessions.
When a request is made, the request object (and any "page" / "controller" object) is created for that request. The state of this object is fresh, but systems like "session state", "view state", cookies, and request values can be used to repopulate it - sometimes largely automated.
A single user making separate requests from two browsers is not performing a post-back; those are separate sessions. Even a single session that opens the same page twice (in different tabs, etc.) is not a post-back. Whether a request is a post-back is determined mainly by the HTTP verb and other evidence in the request.
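In Web Forms terms, that distinction is what Page.IsPostBack reports; a minimal sketch:
// Code-behind of an .aspx page (Web Forms).
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // First GET of this page from this browser: bind data, set defaults.
    }
    else
    {
        // The same page posted back to itself (typically a POST carrying __VIEWSTATE).
    }
}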
You've got to read this great article: https://lowleveldesign.org/2011/07/20/global-asax-in-asp-net/ for your question. Though it's a little late, it may help others out.
I would like to know if it is possible to cache an ASP.NET UserControl on the Client.
I have a user control that queries a DB and renders a GridView. It must be cached on the client because the query results vary from user to user (by User.Identity.Name).
The page is for an intranet.
Any help would be really appreciated!
Thanks in advance,
PS: Where are the user controls cached by default? Server, Proxy?
Proxy servers and clients can only cache static data. Dynamic data must always be served by IIS, though you can improve efficiency by storing data in an in-memory cache instead of querying a database for every request.
The term "caching", in regards to ASP.NET, most often refers to storing data in memory on the server. You serve different data to different users by using a key value, such as User.Identity.Name. If you want to cache a users' results on the server, you can add a DataSet (or other DTO) to the Cache dictionary using Cache.Add, using the user's ID as the key. If the data in question doesn't change very often, this can be an efficient way to serve user-specific data. If the data does change often, you can use the Cache object's callback mechanisms to expire cache items when (for example) a file changes.
IIS/ASP.NET also supports Page caching and partial caching of pages based on a querystring. These are controlled by page directives in the .aspx page, and per-site by the web.config.
User controls are ALWAYS cached on the web server, not on proxy servers or in web browsers.
To address your intent, though, you can render different cached results based upon the VaryBy attributes of the OutputCache directive. These include (see the sketch after this list):
VaryByParam
VaryByControl
VaryByCustom
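As a sketch of the per-user variant (the directive values and the "user" key are illustrative): in the .ascx you would declare something like <%@ OutputCache Duration="300" VaryByParam="None" VaryByCustom="user" %>, and then tell ASP.NET what "user" means by overriding GetVaryByCustomString in Global.asax:
// Global.asax.cs: one cached copy of the control's output per authenticated user.
public override string GetVaryByCustomString(HttpContext context, string custom)
{
    if (custom == "user")
    {
        return context.User.Identity.IsAuthenticated
            ? context.User.Identity.Name
            : "anonymous";
    }
    return base.GetVaryByCustomString(context, custom);
}
Bear in mind this still caches on the server (one copy per user in server memory), which is usually acceptable for an intranet-sized audience.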
We have a CMS, and in production a number of servers have only read-only access to the content (with a few exceptions), while the editors for the site work on the content on servers behind them (which are not available to the public).
We cache the content for quite a long time on the front servers, but sometimes we want the content the editors publish to be available to visitors instantly. What is the best practice for invalidating the cache in those cases?
Doesn't the answer depend on the front-end servers and their APIs?
Assuming the cache is only in the front-end servers, if they expose a method to clear a part of the cache, call it.
If you used the HTTP headers to tell the browser and intermediate proxies that the content can be cached for some time, I don't see a way to invalidate this at their level.
The best way, I guess, is to invalidate the cache from within the CMS core.
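If the front-end cache is ASP.NET's own output cache, one rough sketch of "invalidate from the CMS core" is to expose a small purge endpoint on each front server and have the CMS call it on publish (the action name, route, and security are placeholders you would need to lock down):
// On each front-end server: remove a page's cached output on demand.
[HttpPost]
public ActionResult Purge(string path)
{
    HttpResponse.RemoveOutputCacheItem(path);   // e.g. "/articles/some-article"
    return new HttpStatusCodeResult(200);
}
Anything already cached in browsers or intermediate proxies via HTTP headers cannot be purged this way; shorter max-age values are the usual compromise there, as noted above.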