I'm serving pages using MVC5 and getting data from WebApi services. The MVC5 app makes around 60 requests to the WebApi on initial page load to get all the data, and I'm using OutputCache on the MVC5 side.
This works until the cache expires. When the output cache expires, new calls to the API are triggered, and since there are a lot of people using the site, the first request won't finish before subsequent requests are made. This means that suddenly our data service has to cope with a huge load. Say 100 new visitors come in: the backend then receives approximately 6,000 requests, most of which involve database calls and some requests to other services. Response times get longer and eventually the WebApi tier crashes.
Are there any methods I can apply to cope with this sudden increase in requests? I've considered adding another layer of caching on the WebApi side, but I'd like to know if there's more that can be done.
My two cents here: maybe it's time to implement a second-level cache mechanism.
Instead of delegating everything to the ASP.NET output cache, keep the cached data in a neutral layer between your front end and the Web API backend, one that can be refreshed over time and asynchronously. That way users never pay the performance cost of rebuilding the whole cache.
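A rough sketch of one way to do that, using MemoryCache with a stale-while-refresh pattern (the class and method names here are illustrative, not an existing library API): expired entries keep being served while a single background task rebuilds them, so an expiry never fans out into one backend call per waiting visitor.
using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

// Illustrative second-level cache: stale entries are still served while one
// background task refreshes them (error handling omitted for brevity).
public static class StaleWhileRefreshCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private static readonly object RefreshLock = new object();

    public static T Get<T>(string key, TimeSpan freshFor, Func<T> loader)
    {
        var entry = Cache.Get(key) as Entry<T>;
        if (entry == null)
        {
            // Very first request for this key: load synchronously once.
            entry = new Entry<T>(loader(), DateTime.UtcNow.Add(freshFor));
            Cache.Set(key, entry, ObjectCache.InfiniteAbsoluteExpiration);
            return entry.Value;
        }

        if (DateTime.UtcNow > entry.FreshUntil && !entry.Refreshing)
        {
            lock (RefreshLock)
            {
                if (!entry.Refreshing)
                {
                    entry.Refreshing = true;
                    // Rebuild in the background; current callers get the stale copy.
                    Task.Run(() => Cache.Set(key,
                        new Entry<T>(loader(), DateTime.UtcNow.Add(freshFor)),
                        ObjectCache.InfiniteAbsoluteExpiration));
                }
            }
        }

        return entry.Value;
    }

    private class Entry<T>
    {
        public Entry(T value, DateTime freshUntil)
        {
            Value = value;
            FreshUntil = freshUntil;
        }

        public readonly T Value;
        public readonly DateTime FreshUntil;
        public volatile bool Refreshing;
    }
}
A controller or service would then call something like StaleWhileRefreshCache.Get("products", TimeSpan.FromMinutes(5), LoadProductsFromApi), where LoadProductsFromApi is your existing Web API call.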
If you want to learn more about this topic, a question was asked on Meta Stack Exchange and a Stack Exchange developer shared how they implemented caching there (both an L1 and an L2 cache).
I think a sliding-expiration cache could help, but you'll probably need to either go third party or create your own attribute for that.
Maybe this can help some:
Asp.net Mvc OutputCache attribute and sliding expiration
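For what it's worth, here is a rough sketch of such a home-grown attribute. The name SlidingOutputCacheAttribute and the decision to cache the ActionResult object rather than the rendered HTML are my own simplifications, not a framework feature:
using System;
using System.Runtime.Caching;
using System.Web.Mvc;

// Sketch of a sliding-expiration cache filter: every cache hit pushes the
// expiration window forward, because MemoryCache renews SlidingExpiration
// on each Get.
public class SlidingOutputCacheAttribute : ActionFilterAttribute
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public SlidingOutputCacheAttribute()
    {
        SlidingSeconds = 300;
    }

    public int SlidingSeconds { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var cached = Cache.Get(CacheKey(filterContext.HttpContext)) as ActionResult;
        if (cached != null)
        {
            // Short-circuit the action and serve the cached result.
            filterContext.Result = cached;
        }
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Exception != null || filterContext.Result == null)
        {
            return;
        }

        Cache.Set(CacheKey(filterContext.HttpContext), filterContext.Result,
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromSeconds(SlidingSeconds) });
    }

    private static string CacheKey(HttpContextBase context)
    {
        return "sliding-output:" + context.Request.RawUrl;
    }
}
Caching the result object keeps the sketch short and works well for ContentResult/JsonResult-style actions, but it is not a drop-in replacement for the built-in OutputCache (no vary-by support, and view results that depend on per-request state should not be cached this way).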
My Goal: Cache basically all the pages all the time so that users rarely ever have to hit my CMS for content.
I have a C#/.NET MVC 5 web app deployed in Azure. I also have all the OutputCache attributes on my controllers set to 1 week [604800s] (content rarely changes). I assume, maybe naively, that the cached outputs are stored in memory in Azure. However, when I start my app and crawl the website, I'd expect the Azure memory to fill up with cached content, but in practice there is at most a bump in memory utilization, and it goes back to its "resting state" of about 60% utilization after about 5 minutes. I've also tried using MemoryCache, but it has a similar result: a bump in memory usage that drops back to normal shortly after.
In any case, the result is that the pages act like they weren't cached. For example, if I crawl 1 page and visit it, it loads in about 1 second (it's cached). If I crawl 2000 pages and visit a random one, it loads in 3-4 seconds (it's not cached). I've tested this by putting a datetime in the view itself.
So... the bottom line is: cached = fast, not cached = average. I want it to be fast!
I've looked at Redis Cache, which could be a way to do this, and seems easy enough... but my gut says this should be basic functionality (since it's built into the framework).
Azure Web Apps do support in-memory OutputCache. We can easily confirm this using the following code: the output datetime will not change after you refresh the TestCache page.
[OutputCache(Duration = 3600)]
public ActionResult TestCache()
{
return Content(DateTime.Now.ToString());
}
But there are some problems with using the in-memory cache in an Azure Web App.
The first problem is that it limits you to the memory available on your web app instance, which may cause an out-of-memory issue when you cache a large amount of page output data. Your web app will be restarted if memory fills up, and if the web app restarts, all the cached content is lost. Another issue is that your application runs on multiple load-balanced instances: the next request might go to another instance, which then builds its own copy of the ASP.NET output cache data. These redundant copies of page outputs in each instance consume a lot of extra memory.
To avoid the problems above, I suggest you use Redis Cache to store the cached content. For how to use Redis Cache, the link below is for your reference.
ASP.NET Output Cache Provider for Azure Redis Cache
EDIT: This was apparently an issue with testing in the browser, not with the code. Sessions in Core are disabled by default as they should be.
Original question:
I'm working on a web API which needs to handle multiple requests concurrently, even from the same client. It is an ASP.NET Core MVC (1.1.2) app currently targeting the full framework (4.5.2), mostly for compatibility with other libraries.
When I first tested my concurrency, I was surprised that the requests were not concurrent at all. I googled around and found out that ASP.NET does not handle requests within a session concurrently, but instead queues them. A quick test shows that sessions are likely to be the culprit:
[HttpGet("sleep")]
public string Sleep()
{
Thread.Sleep(5000);
return $"Done at {DateTime.Now:u}";
}
When I rapidly request this from multiple tabs in the same browser, it takes 5 seconds between each tab. This is not the case when I use multiple browsers; then it responds multiple times within a 5-second window.
When searching for a solution I kept stumbling upon ways to disable session state in ASP.NET, but nothing for Core.
As far as sessions are concerned, I am using the default API project template and I have done nothing to specifically enable/setup session state.
Is there a way to get rid of session state in ASP.NET Core? Or is there a better solution to enable concurrent requests?
You already have concurrent requests enabled in ASP.NET Core; there is no need to modify your code. Session in ASP.NET Core is non-locking: if multiple requests modify the session, the last action wins.
As stated in the documentation:
Session state is non-locking. If two requests simultaneously attempt to modify the contents of a session, the last request overrides the first. Session is implemented as a coherent session, which means that all the contents are stored together. When two requests seek to modify different session values, the last request may override session changes made by the first.
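For context, session in ASP.NET Core is also opt-in: unless you wire it up yourself, there is no session middleware at all and nothing that could queue requests. A minimal sketch (ASP.NET Core 1.x style Startup; the default API template does not include these calls) of what enabling session would look like:
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Only needed if you actually want session state.
        services.AddDistributedMemoryCache();
        services.AddSession();
    }

    public void Configure(IApplicationBuilder app)
    {
        // UseSession must be registered before UseMvc to take effect.
        app.UseSession();
        app.UseMvc();
    }
}
If your Startup looks like the default template, with no AddSession/UseSession, the slow-tabs behaviour is more likely a browser quirk (browsers often serialize identical requests to the same URL across tabs) than server-side locking, which matches the edit at the top of the question.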
If you set this attribute on your controller class:
[SessionState(SessionStateBehavior.ReadOnly)]
it will mark the session as read-only so the request doesn't take the exclusive session lock, and therefore you will be able to make concurrent requests.
You can read more about it here.
I'm running into a problem with session-state blocking in an ASP.NET web application.
Normally, Web API projects don't have session state. However, since we are building on a legacy project, a Web API 2 module was injected into the old Web Forms project and we proceed from there. Now we see two problems:
1. The AJAX requests always queue and execute one by one, which defeats the purpose of concurrent processing.
2. For long-running requests, the user cannot move on to another page, even if he/she doesn't need to know the request result.
The culprit for (1) and (2) is session-state blocking. We save authentication information into session state for the Web Forms pages and then reuse it in the Web API controllers. But because every request requires the session, requests execute one by one, and during a long request a new request cannot come in even if the user has already left that page (the session is still locked).
I have checked several answers to related issues, and it seems I'm not alone:
This question describes the situation clearly, but doesn't provide a clear solution for the Web API case.
This question also gives good advice about using a read-only session, which does not block concurrency. However, we still haven't found out how to do this for Web API controllers inside a Web Forms project.
According to the MVC documentation, we can disable session-state locking by adding the attribute [System.Web.Mvc.SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)] at the controller level. However, it didn't work (as expected, because these are Web API controllers). Unfortunately, I didn't find a similar attribute or any other mechanism for Web API controllers.
I have also checked the PHP case: they have the same issue, yet they can release the session lock early if they want to.
Therefore I'm wondering:
Is there a way to disable session for certain Web-API actions/controllers?
Is there a way to release the lock on session early, when we don't need it any more?
.NET serializes access to session state by default. You can decorate your controller class with a session-state modifier so that its requests can run concurrently without taking the exclusive session lock.
This helps in cases where the client is Java or jQuery and is polling your controller; without it, the controller releases the request later rather than sooner. To make it more predictable, place this attribute on your class:
[SessionState(SessionStateBehavior.ReadOnly)]
public class DoController : Controller
{
    public ActionResult Do()
    {
        return new EmptyResult();
    }
}
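The attribute above only applies to MVC controllers, though. For the Web API controllers in the question, one approach (sketched here under the assumption that the Web API routes are mapped under an "api" prefix, and meant to be merged into the existing Global.asax class) is to downgrade the session-state behaviour per request before the session is acquired:
using System;
using System.Web;
using System.Web.SessionState;

public class Global : HttpApplication
{
    // Runs before AcquireRequestState, so the behaviour can still be changed here.
    protected void Application_PostAuthorizeRequest()
    {
        if (IsWebApiRequest(HttpContext.Current))
        {
            // Read-only session: the request can still read the auth data
            // stored in session, but no longer takes the exclusive lock.
            HttpContext.Current.SetSessionStateBehavior(SessionStateBehavior.ReadOnly);
        }
    }

    private static bool IsWebApiRequest(HttpContext context)
    {
        // Assumption: Web API routes are registered under ~/api.
        return context.Request.AppRelativeCurrentExecutionFilePath
            .StartsWith("~/api", StringComparison.OrdinalIgnoreCase);
    }
}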
I am writing an MVC Web API that will be used to return values that will be bound to dropdown boxes or used as type-ahead textbox results on a website, and I want to cache the values in memory so that I do not need to perform database requests every time the API is hit.
I am going to use the MemoryCache class, and I know I can populate the cache when the first request comes in, but I don't want the first request to the API to be slower than the others. My question is: is there a way for me to automatically populate the cache when the Web API first starts? I see there is an "App_Start" folder; maybe I just throw something in there?
After the initial population, I will probably run an hourly/daily request to update the cache as required.
MemoryCache:
http://msdn.microsoft.com/en-us/library/system.runtime.caching.memorycache.aspx
UPDATE
Ela's answer below did the trick; basically I just needed to look at the capabilities of Global.asax.
Thanks for the quick help here, this has spun up a separate question for me about the pros/cons of different caching types.
Pros/Cons of different ASP.NET Caching Options
You can use the Global.asax Application_Start method to initialize resources, basically resources that will be used application-wide.
The following link should help you to find more information:
http://www.asp.net/web-forms/tutorials/data-access/caching-data/caching-data-at-application-startup-cs
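For example, here's a minimal sketch of warming MemoryCache from Application_Start. LoadDropdownValues is a placeholder for your own database/repository call, and the GlobalConfiguration line is what the standard Web API 2 template already generates:
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using System.Web.Http;

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Warm the cache at startup so the first API request is as fast as the rest.
        MemoryCache.Default.Set(
            "dropdown-values",
            LoadDropdownValues(),
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
    }

    private static IList<string> LoadDropdownValues()
    {
        // Placeholder: replace with the real database query.
        return new List<string> { "Value A", "Value B" };
    }
}
A controller then reads MemoryCache.Default.Get("dropdown-values") instead of hitting the database, and the hourly/daily refresh you mention can simply call the same loader again and overwrite the entry.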
Hint:
If you use in-process caching (which is usually the case if you cache something within the web context/thread), keep in mind that your web application is controlled by IIS.
The standard IIS configuration will shut down your web application after 20 minutes if no user requests have to be served.
This means that any resources you hold in memory will be freed.
After this happens, the next time a user accesses your web application, IIS reinitializes it and Application_Start in Global.asax is executed again.
If you want to prevent this behaviour, you can either configure the application pool idle timeout so it doesn't time out after 20 minutes, or use a different cache strategy (persistent cache, distributed cache, ...).
To configure IIS for this, you can find more information here:
http://brad.kingsleyblog.com/IIS7-Application-Pool-Idle-Time-out-Settings/
Here's an API server which can give me real-time news: every minute there will be something new to retrieve. There's also my web page, with JavaScript that asks the API for news once every minute.
And this is not fine... unless my web page is made for a single user and will be open on only one machine at a time (which is not the case on the internet). The API, in fact, restricts the number of calls I can make per minute: let's suppose the API will ban me if I make more than 1 call per minute. If 100 users load my web page, the API will receive 100 calls per minute (!!!).
Since the flow is my web page >> calls >> the API, I think there is no solution without inserting another node which lazily loads from the API server.
my web page >> calls >> my server >> calls every minute >> the API
Since there may be many instances of my web page while my server is just one, I think this is the solution.
However, I have no idea if:
a) Is this the correct solution? Or could I somehow get my web page to behave correctly without the need for an intermediary server?
b) How can I implement this in ASP.NET MVC 4? Is there any support for server-side timers?
c) Even if I can get IIS to retrieve the data every minute, should I then store it in a database to serve it to my web page?
d) The API server I'm talking about is The Times Newswire API. If anyone has ever used a similar API, did you really create a domain model, a database table, a service and a routine just to retrieve data from it, or did you just write some JavaScript code in the web page? And what if you have millions of users?
You can use SignalR for this purpose. It is a push service which works over sockets and can therefore be configured to send out one message to 1,000,000 listeners (or more, obviously).
I've used this to great effect when creating a little prototype game last year and found it to be very reliable. You can use NuGet to grab the package in VS2010 and VS2012.
See ASP.NET SignalR and its examples, or simply google SignalR and you'll find a host of examples.
the API will ban me if I do more than 1 call per minute
Then you need a solution that calls the API for you every minute and stores the result on your server. There are tons of ways of doing this, depending on your requirements. You can even go as far as writing out a static HTML file which you then show to the client.
You should call the API from your server side: fetch it once every minute into your database, then serve it to your users from your database via, for example, a RESTful service. ASP.NET MVC 4 can do all of this.
Also, yes, it does have a timer; you can check out the Timer class.
You don't have to struggle with the API restrictions with this solution.
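A rough sketch of that idea (the URL is a placeholder, and a static field stands in for the database just to keep the sketch short): a System.Threading.Timer started in Application_Start polls the external API once a minute, and your own controller only ever serves the latest copy.
using System;
using System.Net;
using System.Threading;
using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    // Keep a reference so the timer isn't garbage collected.
    private static Timer _newsTimer;

    // Latest payload from the external API; your own controllers serve this.
    public static volatile string LatestNews = "";

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        _newsTimer = new Timer(_ =>
        {
            try
            {
                using (var client = new WebClient())
                {
                    // Placeholder URL; substitute the real newswire endpoint and API key.
                    LatestNews = client.DownloadString("https://api.example.com/newswire?api-key=YOUR_KEY");
                }
            }
            catch (WebException)
            {
                // Keep serving the last successful response if a poll fails.
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }
}

public class NewsController : Controller
{
    // The page's JavaScript polls this action instead of the external API.
    public ActionResult Latest()
    {
        return Content(MvcApplication.LatestNews, "application/json");
    }
}
With this in place, the external service sees exactly one call per minute no matter how many visitors load your page.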
A) The best solution is probably to use your web server as an intermediary to the API.
B) There are a lot of possibilities to choose from. The easiest would be starting a new thread when the web application starts (i.e. in the Global.asax application-started event) and having that thread poll the external API and store the result for your clients to fetch. Another option, if you want full control over the lifecycle of this background process, would be to create a Windows service and host, for instance, a Web API in it that your clients can connect to.
Of course, these are just suggestions.
C) That depends on whether you want your clients to be able to access the latest fetched data even if the background process has failed and the data is old. Otherwise, you can just store it in a static field somewhere; as long as the application isn't terminated, the static field keeps its value.