So I have this web application (ASP.NET MVC 4 Web Site) which has at least 2,000 online users at any time. One of the most popular pages in my application contains data about the user, and this data is not located in my repository; it comes from an external vendor that is integrated into my system. So whenever this page is drawn I have to make a call to those services (currently there are 17) and then draw the page according to the data they return. The data is subject to change at any given moment, so I cannot cache it.
Everything works OK most of the time, and the CPU utilization is 5% - 30% (depending on the number of online users, of course). For each service call I have a timeout of 5000 milliseconds (for service references I set the SendTimeout, and for the raw HttpWebRequests I set the Timeout property to 5000 milliseconds).
Now suppose that one service is down. The CPU utilization of my server goes unexpectedly low, like 3% - 8%, and the application lags; it takes some time to load pages (any page). For instance, where a response from my application would normally take 150-250 ms, it now takes 1-3 seconds.
I'm out of ideas of what to do. I cannot decrease the timeout because some services sometimes take 3-4 seconds, so the 5-second timeout is the least I can give. What can I do to prevent the late responses? I know it's a bit of a general question. Any suggestion would be appreciated. Thanks in advance.
It looks like you have a threading problem. Too many threads are waiting for a response from the external service, and they cannot process other requests.
What I recommend is using async controllers: http://www.asp.net/mvc/overview/performance/using-asynchronous-methods-in-aspnet-mvc-4
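Here is a minimal sketch of what that can look like in MVC 4 on .NET 4.5, assuming async/await and HttpClient; the controller name, URL, and view are illustrative placeholders, not from the question:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class UserController : Controller
{
    public async Task<ActionResult> Details(int id)
    {
        using (var client = new HttpClient())
        {
            // Same 5-second budget the question describes.
            client.Timeout = TimeSpan.FromSeconds(5);

            // While awaiting, the request thread goes back to the pool,
            // so a slow or dead vendor no longer starves other requests.
            var json = await client.GetStringAsync(
                "https://vendor.example.com/users/" + id);

            return View("Details", (object)json);
        }
    }
}

The key point is that a blocked vendor call no longer pins a thread-pool thread for up to 5 seconds; with 17 synchronous calls per page and thousands of users, that is exactly what exhausts the pool and makes every page slow.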
Suggestion 1
What if you replicate this data to your server?
Meaning you can have another service that works separately and synchronizes the external service's data with your server, while your websites always point to your local data. Right, this is some kind of caching, and the web pages can show slightly old data, but you can set the replication service to check the data as often as you need; a rough sketch follows.
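A sketch of such a replication worker, assuming it runs as a separate console app or Windows Service; FetchFromVendorAsync and SaveToLocalStore are placeholders for your own vendor call and local storage:

using System;
using System.Threading;
using System.Threading.Tasks;

class ReplicationWorker
{
    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            try
            {
                var data = await FetchFromVendorAsync(); // call the external service
                SaveToLocalStore(data);                  // overwrite the local copy
            }
            catch (Exception ex)
            {
                // Keep serving the previous local copy if the vendor is down.
                Console.Error.WriteLine("Sync failed: " + ex.Message);
            }

            await Task.Delay(TimeSpan.FromSeconds(30), token); // tune the interval
        }
    }

    static Task<string> FetchFromVendorAsync()
    {
        // placeholder: call the vendor service here
        return Task.FromResult(string.Empty);
    }

    static void SaveToLocalStore(string data)
    {
        // placeholder: write to your local database or cache
    }
}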
Suggestion 2
Another suggestion that comes to mind: can you use push notifications instead? All the web pages open and wait while the server checks the data and notifies all the clients with the fresh data. In this case only one thread will be busy with the external data, and all the connected users will have fresh data as soon as it is available. As a starting point, check SignalR; a minimal sketch follows.
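A minimal sketch using the Microsoft.AspNet.SignalR package; the hub name, client method name, and broadcaster class are illustrative assumptions:

using Microsoft.AspNet.SignalR;

// Clients connect to this hub; it needs no server methods for plain pushes.
public class UserDataHub : Hub
{
}

public static class UserDataBroadcaster
{
    // Called from a single background poller, not once per request.
    public static void PushLatest(string payload)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<UserDataHub>();
        context.Clients.All.updateUserData(payload); // invokes a JS handler on every client
    }
}

On the page, a JavaScript client would subscribe to updateUserData and re-render the user data whenever the server pushes it.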
Related
We have an Azure web app used for internal reporting, and 99% of the time it can handle all the traffic / requests it needs to on the minimum pricing tier (3.5 GB RAM).
But there is one specific request to generate an Excel Report that temporarily requires ~8 GB of RAM to service (ClosedXML is a beast, and we've already minimized the peak RAM footprint in every way possible). Unfortunately, this requires not only the next pricing tier up (7GB) but the one after that, giving us 14 GB to play with.
This request only takes ~1 minute to service, so after trying everything else, I'm considering using Azure APIs to programmatically change the App Service Plan when the request comes in, wait the 10 seconds or so for it to kick in, then process the request, and scale back down afterwards.
Is this a sane approach, or is there some other feature I'm not aware of to temporarily perform a memory-hungry action? I considered an Azure Function, but I've read those are limited to 1.5 GB of RAM... As far as I can tell, this work can't be subdivided in any way without becoming an expert on manipulating the zipped XML underlying Excel workbooks.
What you are trying to do sounds reasonable. We do similar things: we scale things up before running massive monthly imports, both the front-end Functions and the back-end Cosmos DB, and then scale back down again once the import is done. So I don't think you will have any issues doing this.
On a side note, there is no 1.5 GB limit on Azure Functions; it depends entirely on the underlying hosting plan. You can host a function on a P3V3 App Service Plan, or on even bigger dedicated plans, and benefit from the resources they provide, but that is a different topic.
There is nothing out of the box from App Service (Plan). In a similar situation, we started with an Automation account but upgraded to Logic Apps.
We use a Logic App as a request broker: for the specific kind of operation, it invokes https://learn.microsoft.com/en-us/rest/api/appservice/app-service-plans/update to scale up the App Service Plan, and after successful completion it scales back down. By the way, we also host the Logic App behind APIM before exposing the URL! A sketch of the scale-up call is below.
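For reference, here is a rough sketch of that scale-up call against the ARM REST API, assuming you already have an Azure AD bearer token (for example from Azure.Identity's DefaultAzureCredential); the subscription, resource group, plan name, api-version, and SKU values are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PlanScaler
{
    public static async Task ScaleUpAsync(string bearerToken)
    {
        var url = "https://management.azure.com/subscriptions/{subscriptionId}" +
                  "/resourceGroups/{resourceGroupName}" +
                  "/providers/Microsoft.Web/serverfarms/{planName}" +
                  "?api-version=2022-03-01";

        // Request body: bump the plan to a larger SKU (placeholder values).
        var body = "{ \"sku\": { \"name\": \"P3V3\", \"tier\": \"PremiumV3\" } }";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", bearerToken);

            var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
            {
                Content = new StringContent(body, Encoding.UTF8, "application/json")
            };

            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode(); // scaling kicks in shortly after
        }
    }
}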
Is there a strategy for caching data in a console application that is maintained after the console application is terminated and starts up again?
For example, when my console application starts up, 4 calls are made to my database which returns a fair amount of data. The rest of the application runs and uses these lists. When the console application starts up again at the next scheduled interval it will have to retrieve these four lists again. Is there a way to have those lists cached for a certain amount of time to reduce the amount of times I have to call the database?
My current setup is a PowerShell script that just pings a URL on my website, which obviously can cache these 4 lists and maintain them. However, I think I need to move this function into console applications to remove the load from the IIS process, as I've had some high CPU spikes on my server and I'm assuming it's to do with this code.
One idea I had was to give an API endpoint for these four lists in my website (so they can be cached) and call that from my console application. Is that the best way to handle this or is there a proper way of caching data and maintaining it after a console application has ended and started up again?
You could use a local file and store the values, maybe in conjunction with a database or endpoint, adding an expiry date to a tag in the file.
Local file access will be much faster than accessing a database or any other remote call. A remote call, say to a database or an IIS endpoint, could be used for the first load.
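A minimal sketch of that idea, assuming the Json.NET package; the file name, TTL, and loader delegate are illustrative:

using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

class CachedLists
{
    public DateTime ExpiresUtc { get; set; }
    public List<string> Items { get; set; }
}

static class ListCache
{
    const string CacheFile = "lists-cache.json";

    public static CachedLists LoadOrRefresh(Func<List<string>> loadFromDb)
    {
        if (File.Exists(CacheFile))
        {
            var cached = JsonConvert.DeserializeObject<CachedLists>(
                File.ReadAllText(CacheFile));

            if (cached != null && cached.ExpiresUtc > DateTime.UtcNow)
                return cached; // still fresh: skip the database entirely
        }

        // Expired or missing: hit the database once and rewrite the file.
        var fresh = new CachedLists
        {
            ExpiresUtc = DateTime.UtcNow.AddMinutes(30), // illustrative TTL
            Items = loadFromDb()
        };

        File.WriteAllText(CacheFile, JsonConvert.SerializeObject(fresh));
        return fresh;
    }
}

Each of the four lists would get its own file (or one file holding all four), and the console app only pays the database cost when the expiry date in the file has passed.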
I have a rather high-load deployment on Azure: 4 Large instances serving about 300-600 requests per second. Under normal conditions the "Average Response Time" is 70 to 150 ms; sometimes it may grow up to 200-300 ms, but that's absolutely OK.
Though, one or two times per day (not at "rush hours") I see the following picture on the Web Site Monitoring tab:
The number of requests per minute drops significantly, the average response time grows to 3 minutes, and after a while everything comes back to normal.
During this "blackout" only 0.1% of requests are dropped (HTTP Server Errors with timeout); the other requests just wait in a queue and are processed normally after a few minutes. Though, not all clients are ready to wait :-(
Memory usage is under 30% all the time, CPU usage is only up to 40-50%.
What I've already checked:
Traces for timed-out requests: they timed out at random locations.
Throttling for Azure Storage and other components used: no throttling at all.
I also tried routing all traffic through CloudFlare and saw the same problems.
What could be the reason for such problems? What may I check next?
Thank you all in advance!
Update 1: BenV proposed a good thing to try, but unfortunately it showed nothing :-(
I configured process recycling every 500k requests and also added worker nodes, so CPU utilization is now less than 40% all day long, but the blackouts still appear.
Update 2: Project uses ASP.Net MVC 4.
I had this exact same problem. For me, I saw a lot of WinCache errors in my logs.
Whenever the site would fail, it would have a lot of WinCache errors in the log. WinCache is how IIS handles PHP to try to speed up the processing. It's a Microsoft-built add-on that is enabled by default in IIS and on all Azure sites. WinCache would get hung up, and instead of recycling and continuing, it would consume all the memory and file handles on an instance, essentially locking it up.
I added a new App Setting in the Azure Portal to scan a folder for php.ini settings changes:
d:\home\site\ini
Then I added a file at d:\home\site\ini\settings.ini that contains the following:
wincache.fcenabled=1
session.save_handler = files
memory_limit = 256M
wincache.chkinterval=5
wincache.ucachesize=200
wincache.scachesize=64
wincache.enablecli=1
wincache.ocenabled=0
This does a few things:
wincache.fcenabled=1
Enables file caching using WinCache (I think that's the default anyway)
session.save_handler = files
Changes the session handler from WinCache (the Azure default) to standard file-based sessions, to reduce the stress on the cache engine
memory_limit = 256M
wincache.chkinterval=5
wincache.ucachesize=200
wincache.scachesize=64
wincache.enablecli=1
Sets a 256 MB memory limit, a 5-second cache check interval, and caps on the cache sizes. This forces WinCache to clear out old data and recycle the cache more often.
wincache.ocenabled=0
This is the big one: DISABLE WinCache opcode caching, i.e. WinCache caching the compiled PHP scripts in memory. Files are still cached (per the first setting above), but PHP is interpreted as normal and not cached into large binary files.
I went from having my Azure Website crash about once every 3 days, with logs that look like yours, to 120 days straight so far without any issues.
Good luck!
There are some nice tools available for Web Apps in the preview portal.
The Application Insights extension in particular can be useful for monitoring and troubleshooting app performance.
I have some code that pulls data from a SQL DB, then loops through the records to generate a string, which will eventually be written to a text file.
The code runs fine on my local machine from VS, but on the live server, after about a minute and a half, I get a "No Data Received" error (Chrome). The code stops in the middle of looping through the DataTable. Hosting support said a "The connection was reset" error was thrown.
I'm not sure if this is a timeout issue or what. I've set the executionTimeout in my web.config (with debug="false") and it didn't seem to help. I also checked the Server.ScriptTimeout property, and it does match the executionTimeout value set in the web.config.
Additionally, a timeout would normally give a "Page not available" message.
Any suggestions are appreciated.
after about a minute and a half
There's your problem. This is a web application? A minute and a half is a very long time for a web application to respond to a request. Long enough that it's not really worth engaging in various trickery to make it kind of sort of work.
You'll want to offload this process to be more asynchronous with the web application itself. The nature of web applications is that they should receive a request and respond in a timely manner. What you have here is a long-running process which can't respond in a timely manner. The web application can facilitate interactions with the data, but shouldn't directly handle the processing thereof in the request/response directly.
How does the web application interact with the process? Does it just start it, or does it provide information for the process to begin? I would recommend that the process itself be handled by something like a Windows Service or perhaps a Console Application. The more decoupled from the web application, the better. Now, since I don't know anything about the process itself, I'm making a few assumptions about its behavior...
The web application can receive a request to start the process, along with any information needed for the process. It can store this in a database with a status value (pending, queued, etc.) and then respond to the user (in a timely manner) that the request has been received and the process has been queued. The web application can have a page which checks the status so that the user can see how the process is doing (if it's started, how many records it's gone through, etc.).
The offline application (Windows Service, et al) would just monitor that database for newly-queued data to be processed. When it sees it, it updates the status (running, processing, etc.) and provides any relevant feedback during the process (number of records processed, etc.) by updating that data. So the offline application and the web application are both interacting with the same data, but not in a manner which blocks the thread of the web application and prevents a response to the user.
When the process is finished, the status is again updated. The web application can show that it's finished and provide a link to download the results. The offline process could even perhaps send an email to the user when it's done, or maybe the web application can have some kind of notification system (I'm picturing the little notification icons in Facebook) which would alert the user to new activity.
This way the thread isn't blocked, the user can continue to interact with the application (if there's even anything with which to interact), etc. And you get other added benefits, too. For example, results of the process are thus saved in the database and automatically historically tracked.
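Here is a hedged sketch of that offline worker, with the web app assumed to have inserted a row into a hypothetical ReportJobs table with Status = 'Pending'; all table, column, and method names are illustrative:

using System;
using System.Data.SqlClient;
using System.Threading;

class JobWorker
{
    const string ConnStr = "..."; // your connection string

    static void Main()
    {
        while (true)
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // Claim one pending job and mark it as running, atomically.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) ReportJobs
                      SET Status = 'Running'
                      OUTPUT inserted.Id
                      WHERE Status = 'Pending';", conn);
                var id = claim.ExecuteScalar();

                if (id != null)
                {
                    ProcessJob((int)id, conn); // the long-running work happens here

                    var done = new SqlCommand(
                        "UPDATE ReportJobs SET Status = 'Done' WHERE Id = @id", conn);
                    done.Parameters.AddWithValue("@id", (int)id);
                    done.ExecuteNonQuery();
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(10)); // poll interval
        }
    }

    static void ProcessJob(int id, SqlConnection conn)
    {
        // placeholder: load the job's input, generate the file, store the
        // result, and update progress columns the web app can display
    }
}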
It sounds like it's the browser that's timing out waiting for a response, not on the server. You can't control what the browser has set for this. What you can do is send a response of some kind to the browser, so that it knows you're still around and haven't crashed in some way.
For this to work, you can't wait until you finish building the entire string. You need to rethink your code so that, instead of appending to a string, you write each addition to the output stream. This has the added advantage of being a much more efficient way to create your text file. For the purpose of keeping the browser alive, you can write out anything, as long as some data is coming back for the browser to read; HTML comments can work for this. You also need to periodically flush your response stream so that your data isn't sitting buffered on your web server; otherwise you might still time out.
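A minimal sketch of that streaming approach in classic ASP.NET; BuildLine stands in for whatever per-record formatting your code currently does:

using System;
using System.Data;
using System.Web;

static class ReportStreamer
{
    public static void WriteReport(HttpResponse response, DataTable table)
    {
        response.ContentType = "text/plain";
        response.BufferOutput = false; // don't accumulate the whole response in memory

        int i = 0;
        foreach (DataRow row in table.Rows)
        {
            response.Write(BuildLine(row));

            // Flush every so often so the browser keeps receiving bytes
            // and doesn't conclude the server has gone away.
            if (++i % 500 == 0)
                response.Flush();
        }

        response.Flush();
    }

    static string BuildLine(DataRow row)
    {
        // placeholder: format one record as a line of text
        return row[0] + Environment.NewLine;
    }
}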
Of course, the real solution here is to re-think your design, such that your operation doesn't take 90 seconds plus in the first place. But until you can do that, hopefully this is helpful.
It does sound like a timeout. Could you try to return the information via a View? This would certainly speed things up (if possible).
When I had this error, I was able to resolve it by adding this to the Web.config file:
<system.web>
<httpRuntime executionTimeout="600" maxRequestLength="51200" />
</system.web>
Scenario: A WCF service receives an XDocument from clients, processes it and inserts a row in an MS SQL Table.
Multiple clients could be calling the WCF service simultaneously. The call usually doesn't take long (a few secs).
Now I need something to poll the SQL Table and run another set of processes in an asynchronous way.
The 2nd process doesn't have to call back anything, nor is it related to the WCF service in any way. It just needs to read the table and perform a series of methods and maybe a Web Service call (if there are records, of course), but that's all.
The WCF service clients consuming the above mentioned service have no idea of this and don't care about it.
I've read about this question on Stack Overflow, and I also know that a Windows Service would be ideal, but this WCF service will be hosted on shared hosting (discountasp or similar) and therefore installing a Windows Service will not be an option (as far as I know).
Given that the architecture is fixed (i.e. I cannot change the table, which comes from a legacy format, nor change the mechanism of the WCF service), what would be your suggestion to poll/process this table?
I'd say I need it to check every 10 minutes or so. It doesn't need to be instant.
Thanks.
Cheat. Expose this process as another WCF service and fire a go command from a box under your control at a scheduled time.
Whilst you can fire up background threads in WCF, or use cache expiry as a poor man's scheduler (sketched below), those will stop when your app pool recycles, until the next hit on your web site spins the app pool up again. At least firing the request from a machine you control means you know the app pool will come back up every 10 minutes or so, because you've sent a request in its direction.
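For completeness, this is roughly what the cache-expiry trick looks like, with the caveat above that it dies on app pool recycle until the next request arrives; the names are illustrative:

using System;
using System.Web;
using System.Web.Caching;

public static class PollScheduler
{
    const string Key = "TablePollTrigger";

    public static void Start()
    {
        // The item's only job is to expire in ~10 minutes and fire the callback.
        HttpRuntime.Cache.Add(
            Key,
            DateTime.UtcNow,
            null,
            DateTime.UtcNow.AddMinutes(10),   // absolute expiry = the interval
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        try
        {
            PollTable(); // placeholder: read the table and run the processing
        }
        finally
        {
            Start(); // re-register so the cycle continues
        }
    }

    static void PollTable()
    {
        // placeholder for the actual work
    }
}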
A web application is not suited at all to running something at a fixed interval. If there are no requests coming in, there is no code running in the application, and if the application is inactive for a while, IIS can decide to shut it down completely until the next request comes in.
For some applications it isn't at all important that something runs at a specific interval, only that it has been run recently. If that is the case for your application, then you could just keep track of when the table was last polled and, on every request, check whether enough time has passed for the table to be polled again; see the sketch below.
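That check can be as simple as a static timestamp guarded by a lock, called at the top of every WCF operation; the names here are illustrative:

using System;

public static class TablePoller
{
    static readonly object Gate = new object();
    static DateTime _lastPollUtc = DateTime.MinValue;
    static readonly TimeSpan Interval = TimeSpan.FromMinutes(10);

    public static void PollIfDue()
    {
        lock (Gate)
        {
            if (DateTime.UtcNow - _lastPollUtc < Interval)
                return; // polled recently enough, nothing to do

            _lastPollUtc = DateTime.UtcNow; // claim this poll before releasing the lock
        }

        PollTable();
    }

    static void PollTable()
    {
        // placeholder: read the table and run the follow-up processing
    }
}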
If you have access to administer the database, there is a scheduler in SQL Server (SQL Server Agent). It can run queries, stored procedures, and even start processes if you have permission (which is very unlikely on shared hosting, though).
If you need the code to run at a specific interval, and you can't access the server to schedule it or run it as a service, nor use the SQL Server scheduler, it's simply not doable.
Make your application pool "always active" and do whatever you want with your threads.