Knowing that Entity Framework is slow on a cold query (the first query after model compilation), I am applying some of the standard workarounds to speed it up: mainly pre-compiled views, plus a dummy HTTP request fired from the client side as soon as the application loads, to trigger a query and kick off the model initialization.
My question is specifically about how this works for a deployed application. For example, if I deploy this on Azure, is it the first cold query for the entire application that triggers the model compilation, or will this slow cold query happen for each individual user of the application? In simple terms, does it happen once and only once, or every time a user hits the site with a new session?
The EF slow start is triggered by the first request(s) that reach the web server and require database services; it happens once per application process, not once per user or session.
A couple of points to note:
If you deploy to an Azure Web App, ensure that the 'Always On' application setting is enabled. If it isn't, the web app will be suspended after a period of inactivity, and the next request will trigger another cold start.
Similarly, if you deploy to a VM with IIS, you'll need to check the application pool recycling settings.
When you deploy a new version of the application code, the process will need to be restarted, which will cause another slow start.
A good way to mitigate such slow starts is to use deployment slots and pre-warm a slot before sending real user traffic to it. This is straightforward with Azure Web App deployment slots: deploy to a staging slot, warm it up, then swap it into production.
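As a concrete illustration of the dummy-request warm-up, here is a minimal sketch that triggers EF model initialization as soon as the process starts instead of waiting for the first user. It assumes EF6 and a hypothetical MyDbContext; Database.Initialize(force: false) builds the model (and runs any initializers) without needing a real query.

    // Global.asax.cs -- warm up EF once per process start (EF6 sketch; MyDbContext is hypothetical)
    protected void Application_Start()
    {
        // Run on a background thread so application startup itself isn't blocked.
        System.Threading.Tasks.Task.Run(() =>
        {
            using (var ctx = new MyDbContext())
            {
                // Forces model compilation / view generation now rather than on the first user request.
                ctx.Database.Initialize(force: false);
            }
        });
    }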
Our web application uses a .NET Core Web API running behind a load balancer and an Angular client. We access the DB using EF Core.
We have a long-running background task that does a great amount of calculation and takes about 2-3 hours to complete, but it will only be initiated by administrators of the application 3-4 times a year.
While the job is running we want to prevent users from adding/editing/deleting data, and our client told us it's even fine if the application is not available for the duration, as they will mostly run it overnight.
The easiest way to do this is to redirect users to an information page while the job is running, but I have found no way of actually determining whether the task is running or not.
I could set a flag indicating whether the job is running and check that flag on every request, but I found no way to access application-wide state.
I cannot save a flag to the DB, because while the transaction is committing at the end of the job (which takes about an hour) we cannot read from the DB.
What baffles me most is that I have not found a single article or question about a problem like this, which doesn't seem too outlandish to me, so I guess I'm missing something very obvious.
The simplest way is to store the value for your "maintenance mode" in a singleton class on the server (no database call needed). The value will remain there for as long as the server process is running.
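Here is one way that could look in ASP.NET Core, assuming a DI-registered singleton plus a small middleware check; MaintenanceState and the 503 response are illustrative choices, not part of the original answer. Note that behind a load balancer each server instance holds its own singleton, so the job would have to notify every instance (or you fall back to the distributed-cache idea mentioned below).

    // Registered once at startup: services.AddSingleton<MaintenanceState>();
    public class MaintenanceState
    {
        private int _active; // 0 = normal operation, 1 = maintenance
        public bool IsActive => System.Threading.Volatile.Read(ref _active) == 1;
        public void Enter() => System.Threading.Interlocked.Exchange(ref _active, 1);
        public void Exit() => System.Threading.Interlocked.Exchange(ref _active, 0);
    }

    // In Startup.Configure, before MVC: short-circuit requests while the job runs.
    // (GetRequiredService needs a using for Microsoft.Extensions.DependencyInjection.)
    app.Use(async (context, next) =>
    {
        var state = context.RequestServices.GetRequiredService<MaintenanceState>();
        if (state.IsActive)
        {
            context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
            await context.Response.WriteAsync("Maintenance in progress.");
            return;
        }
        await next();
    });

The job itself would call state.Enter() before it starts and state.Exit() in a finally block, so the flag is cleared even if the job throws.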
If a distributed cache (as already mentioned) is not an option, you can run the long-running task in a (uniquely) named transaction and then check the list of active transactions to determine whether the task is still running.
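A sketch of that check for SQL Server, assuming the job opened its transaction with a known name (for example via SqlConnection.BeginTransaction(IsolationLevel.ReadCommitted, "LongRunningJob")) and that the polling login has VIEW SERVER STATE permission; the name "LongRunningJob" is illustrative. Because this reads server-level metadata rather than the locked tables, it should still answer while the job's commit is in flight.

    // Returns true while a transaction named "LongRunningJob" is active on the server.
    public static bool IsJobRunning(string connectionString)
    {
        const string sql =
            "SELECT COUNT(*) FROM sys.dm_tran_active_transactions WHERE name = @name";
        using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
        using (var cmd = new System.Data.SqlClient.SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", "LongRunningJob");
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }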
This depends entirely on your setup, but a simple way to approach the problem might be to make it the long-running job's responsibility to divert traffic from your site while it is running, and to undo that once it has finished.
As an example, if you were running this as an old-school .NET site in IIS, the job could drop an app_offline.htm file into the site folder, run, then delete it again. Your setup is different, but if you could do something similar with your load balancer (configure it to serve a static page instead of routing requests to your servers) then it could work for you.
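For the classic-IIS variant, a minimal sketch; siteRoot and RunLongJob() are placeholders for your own site path and job entry point.

    // IIS takes the whole site offline while app_offline.htm exists in the site root.
    var offlineFile = System.IO.Path.Combine(siteRoot, "app_offline.htm");
    System.IO.File.WriteAllText(offlineFile,
        "<html><body>Down for maintenance. Back soon.</body></html>");
    try
    {
        RunLongJob(); // the 2-3 hour calculation
    }
    finally
    {
        System.IO.File.Delete(offlineFile); // bring the site back even if the job throws
    }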
The goal is that I want my overall website response time to be instantaneous.
The problem is that I do not have IIS access; my website is hosted by an external service and I have no control over the IIS panel.
My current approach is a scheduled job that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all of its hosted websites every few hours.
This is why I need to implement another approach: warming up / pre-loading the website each time this happens.
How can I do this when there is no access to the IIS panel?
The solution requires no third-party sites, robots, or apps; you merely write a very simple app yourself that periodically performs a trivial web request, perhaps a REST GET. By performing this request every few minutes you not only guarantee that the IIS pool won't time out and go cold for a client, but you also get the nice side effect of keeping your website up and running in a warm condition (JIT'd and running), ready for real, non-heartbeat requests.
e.g.
In your website, expose a REST API, say www.misspiggy.com/api/hiyaaaa, that does nothing other than return HTTP 200 OK (a minimal sketch follows the list below).
By implementing this in your ASP.NET app, any request to the above URL will cause your stopped or cold ASP.NET website to be JIT'd during:
first deployment (and even then only when a request is made to it)
after the IIS AppPool has timed out and needs to restart on demand
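A minimal sketch of such an endpoint, assuming classic ASP.NET Web API; the controller name simply mirrors the hypothetical URL above and relies on the default api/{controller} route.

    // GET www.misspiggy.com/api/hiyaaaa -> 200 OK; no work done, it just wakes the pool.
    public class HiyaaaaController : System.Web.Http.ApiController
    {
        public System.Web.Http.IHttpActionResult Get()
        {
            return Ok(); // HTTP 200, nothing else
        }
    }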
The client code that makes the REST request can be anything:
a console app
a Windows service
WinForms/WPF app
The console app can be triggered via Windows Task Scheduler, say every 5 minutes, thus saving you the hassle of building in a scheduler.
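Something like this would do, assuming a modern C# console project (async Main needs C# 7.1+); the URL is the hypothetical heartbeat endpoint from above.

    // Heartbeat pinger: schedule this exe with Windows Task Scheduler every 5 minutes.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Heartbeat
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                var response = await client.GetAsync("http://www.misspiggy.com/api/hiyaaaa");
                Console.WriteLine($"{DateTime.Now:u} -> {(int)response.StatusCode}");
            }
        }
    }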
My current approach is a scheduled job that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all of its hosted websites every few hours.
I suggest you set your ping period to a matter of minutes rather than hours.
No admin access to server required
The problem is that I do not have IIS access; my website is hosted by an external service and I have no control over the IIS panel.
It should be pointed out that this solution does not require you to install anything new on the server, nor make any changes to the server.
Azure
It is interesting to note that Azure Application Insights has Availability Tests which, though designed for testing website availability, can be used for this exact purpose of keeping your website alive and warm, ready for web clients. In fact this is what I do for my web apps.
Doing so keeps response times and latency as low as possible.
There are a number of things you can do, but a really simple solution is to use a website monitoring service such as StatusCake or Uptime Robot; there are plenty of them out there. You set them up to call a page or pages on your website at set intervals to ensure it is still up, which has the added bonus of keeping the site warm.
I would also precompile your MVC app if you aren't already doing that.
HTH
Our team has an Android application with a .NET C# backend, hosted in IIS.
Recently we have observed sudden and unexplainable latencies for our customers, in the following scenario:
Without any warning, users are unable to change the channel (zapping), since the product has to do with live media streaming, and they cannot even log out of the application.
The mobile application, connected to another backend (still a C# backend), keeps working properly, without any problem.
After some time (which has varied from 6 hours in the first incident to 5 minutes in the latest one), everything returns to normal.
I have enabled Failed Request Tracing logs, to see if I can get anything from there, and I have results as follows:
<failedRequest url="https://ourDNS.com:443/servertime.aspx"
siteId="1"
appPoolId="DefaultAppPool"
processId="22232"
verb="POST"
remoteUserName=""
userName=""
tokenUserName="NT AUTHORITY\IUSR"
authenticationType="anonymous"
activityId="{80013C53-0802-B500-B63F-84710C7967BB}"
failureReason="TIME_TAKEN"
statusCode="200"
triggerStatusCode="0"
timeTaken="45141"
xmlns:freb="http://schemas.microsoft.com/win/2006/06/iis/freb"
>
The page described above is a simple page that first gets the server's time zone and then, after applying the customer's time zone (which can be set manually from the client), returns the exact date and time of the device where the application is hosted, for further calculations of the stream program, what is playing now, etc. Yet this page, which returns nothing more than a simple JSON with a string in it, sometimes takes more than 45 seconds (to me this is insane).
Another log, from the client side at the same moment, is this exception:
java.net.SocketTimeoutException
at java.net.PlainSocketImpl.read(PlainSocketImpl.java:491)
at java.net.PlainSocketImpl.access$000(PlainSocketImpl.java:46)
at java.net.PlainSocketImpl$PlainSocketInputStream.read(PlainSocketImpl.java:240)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:103)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:191)
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:82)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:174)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:180)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:235)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:259)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:279)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:428)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
at com.framework.utilityframe.webhelper.HttpRequest.getHttpResponse(HttpRequest.java:316)
at com.framework.utilityframe.webhelper.HttpRequest.httpRequest(HttpRequest.java:393)
at com.tibo.webtv.web.TiboLog.logBufferingError(TiboLog.java:319)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:324)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:307)
at android.os.AsyncTask$2.call(AsyncTask.java:287)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
at java.util.concurrent.FutureTask.run(FutureTask.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Reading through different forums, I have seen various causes of performance problems, ranging from the database to IIS and even application misconfiguration. I have ruled out the database as a cause because:
At the time of the problem, the database metrics were absolutely fine: no changes in query execution times, no waiting tasks, no locking.
Secondly, the mobile and decoder applications connect to the same database, and the mobile application runs just fine with the same queries.
Now, if I think of IIS: every application hosted in that AppPool was running fine and without delays, but there may still be something I am missing there.
And lastly, something that makes me suspicious is the fact that the mobile application differs from the decoder application in two ways:
First, the mobile application takes the responses from the backend in XML format; the decoder uses JSON.
Second, the mobile application uses HTTP requests, and the decoder uses HTTPS (SSL).
If anyone has experienced similar issues, their help would be greatly appreciated. For any other detail you need, just ask and I will provide it.
So,
Today our team ran another test, which included:
the application hosted on one server and the database on another
the application and database hosted on a completely different server (an Azure environment)
In both cases the result was the same: latencies and problems with the service.
The problem was neither the backend nor the server. First, the Java application by mistake executed synchronous tasks when saving logs to another server (a dedicated one, with plenty of capacity). Second, that log server had a full HDD, holding more than 1 TB of DB logs alone, so when the application executed those synchronous tasks (which came first, before any interaction with the channels), they received the socket exceptions. So, for anyone else who may come across this post: PLEASE, ALWAYS CHECK THE TASKS IN YOUR APPLICATION, AND ALWAYS CHECK ANY SERVER RELATED TO YOUR APPLICATION! Thank you very much :D
I have an application developed in ASP.NET MVC 3 which uses a SQL Server database.
Apart from this, I have a console application that calls an external web service and updates the same database with the retrieved information and business rules (basically, we iterate over the records from the web service, apply the business rules, and update the database). The console application is configured to run periodically via the Windows Task Scheduler.
The problem is that when my console application runs, it uses 100% of the CPU (because we're processing more than 2000 records from the web service), and because of that my MVC application hangs, or sometimes works very, very slowly, since both applications are hosted on the same Windows server.
Could anybody please let me know how I can resolve this problem? I want both things on the same server because I have a central database used by both applications.
Thanks in advance.
You haven't given enough detail for anyone to really provide a resolution, so I'll simply suggest how I would approach it.
First, I would review the database schema with a DBA to make sure there aren't things like table locks (or, if there are, come up with strategies to compensate for them). I would then use SQL Server Profiler to see where (or if) there are any bottlenecks in SQL Server while these things are running. Next I would profile the console application to make sure it's not doing anything it doesn't need to do. I might even profile the web site to see if anything in there could be contributing to the slowness.
After that, I would figure out how to get rid of the console application and work its functionality into the site. Spawning another application alongside the web site is not scalable: more than a couple of those running at once and you've got the potential to bog the server down very easily.
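Until that refactoring happens, one stop-gap for the CPU contention described in the question (my own suggestion, not part of the answer above) is to run the console app at reduced process priority so the IIS worker process wins the scheduler contention; the fetch/process method names below are hypothetical.

    // First thing in the console app's Main(): yield CPU to the web site.
    using System.Diagnostics;
    using System.Threading;

    static void Main()
    {
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

        foreach (var record in FetchRecordsFromWebService()) // hypothetical fetch of the ~2000 records
        {
            ProcessBusinessRules(record);                    // hypothetical per-record work
            Thread.Sleep(50);                                // brief breather to smooth the load
        }
    }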
Scenario: a WCF service receives an XDocument from clients, processes it, and inserts a row in a SQL Server table.
Multiple clients could be calling the WCF service simultaneously. The call usually doesn't take long (a few seconds).
Now I need something to poll the SQL Table and run another set of processes in an asynchronous way.
The second process doesn't have to call anything back, nor is it related to the WCF service in any way. It just needs to read the table and perform a series of methods, and maybe a web service call (if there are records, of course), but that's all.
The clients consuming the above-mentioned WCF service have no idea of this and don't care about it.
I've read about this question on Stack Overflow, and I also know that a Windows service would be ideal, but this WCF service will be hosted on shared hosting (DiscountASP or similar) and therefore installing a Windows service is not an option (as far as I know).
Given that the architecture is fixed (i.e., I cannot change the table, which comes from a legacy format, nor the mechanism of the WCF service), what would be your suggestion for polling/processing this table?
I'd say I need it to check every 10 minutes or so. It doesn't need to be instant.
Thanks.
Cheat: expose this process as another WCF service and fire a "go" command at it from a box under your control at a scheduled time.
While you can fire up background threads in WCF, or use cache expiry as a poor man's scheduler, those will stop when your app pool recycles, until the next hit on your web site spins the app pool up again. Firing the request from a machine you control at least means you know the app pool will come back up every 10 minutes or so, because you've sent a request in its direction.
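For completeness, a sketch of the "cache expiry as a poor man's scheduler" trick just mentioned, assuming classic ASP.NET; PollTable() is a placeholder, expiry timing is only approximate (the cache is scavenged periodically), and the recycling caveat above applies in full.

    // Self-re-arming cache entry whose removal callback fires roughly every 10 minutes.
    private static void SchedulePoll()
    {
        System.Web.HttpRuntime.Cache.Insert(
            "pollTrigger",
            System.DateTime.UtcNow,
            null,
            System.DateTime.UtcNow.AddMinutes(10),             // absolute expiry = next tick
            System.Web.Caching.Cache.NoSlidingExpiration,
            System.Web.Caching.CacheItemPriority.NotRemovable,
            (key, value, reason) =>
            {
                try { PollTable(); }                           // placeholder: read the table, process rows
                finally { SchedulePoll(); }                    // re-arm for the next interval
            });
    }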
A web application is not suited at all to running something at a fixed interval. If there are no requests coming in, no code is running in the application, and if the application is inactive for a while, IIS can decide to shut it down completely until the next request comes in.
For some applications it isn't important that something runs at a specific interval, only that it has been run recently. If that is the case for your application, you could just keep track of when the table was last polled, and on every request check whether enough time has passed for the table to be polled again.
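A minimal sketch of that poll-on-request idea, assuming it is acceptable that polling only happens while traffic arrives; PollTable() is again a placeholder.

    using System;

    // Call PollOnRequest.PollIfDue() at the start of each request
    // (e.g. from Application_BeginRequest).
    public static class PollOnRequest
    {
        private static DateTime _lastPollUtc = DateTime.MinValue;
        private static readonly object _pollGate = new object();

        public static void PollIfDue()
        {
            bool due = false;
            lock (_pollGate)
            {
                if (DateTime.UtcNow - _lastPollUtc >= TimeSpan.FromMinutes(10))
                {
                    _lastPollUtc = DateTime.UtcNow; // claim the slot so only one request polls
                    due = true;
                }
            }
            if (due)
            {
                PollTable(); // placeholder: read the table and run the processing
            }
        }
    }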
If you have access to administer the database, there is a scheduler in SQL Server (the SQL Server Agent). It can run queries and stored procedures, and can even start processes if you have permission (which is very unlikely on shared hosting, though).
If you need the code to run on a specific interval, and you can't access the server to schedule it or run it as a service, and you can't use the SQL Server scheduler, it's simply not doable.
Make your application pool "always active" and do whatever you want with your background threads. In IIS terms this means setting the pool's start mode to AlwaysRunning and its idle time-out to zero, although on shared hosting you typically cannot change these settings.