I am having issues running an API on a GoDaddy server. The API constantly sends requests to a website at a fixed interval as soon as it starts operating, so it has a timer, created in Application_Start, that controls this action. For some reason, my API stops working after some time if no one makes a request. However, I need my API to run all the time, since I need a list of live data collected from another website. Below you can read the steps I take and the problem I encounter in detail:
I created my Web API in Visual Studio 2013, written in C#.
I bought the server from GoDaddy, with Windows Deluxe hosting.
I uploaded my files to the httpdocs folder of my server using FTP.
When I call my API by typing "mysite.com/myWebAPI/myList", it starts to work and initially returns an empty list (which is normal, I think).
Then I make the same request 2 seconds later (giving my API time to collect data), and the list I want is returned, with live data inside collected from the other website.
After this point, my API should not stop. It has to send a request every X seconds to a website and update the information in the list.
However, after 5 or 10 minutes with no incoming requests, my API stops; it therefore stops collecting information from the other website, and the list is no longer updated.
Then, if another request is made, it becomes active again and starts to work, but now my list is empty once again. This means the list was created all over again, which can only happen if Application_Start was called once more.
Note that when I run this Web API on my localhost server, it works perfectly. It does not stop, and it gathers the information correctly by sending requests to the website every X seconds. Even if I don't make any request for 30 minutes, it still returns the list I want when I finally send one.
So the question is: is there a way to fix this problem and make my API run all the time, without stopping, on a GoDaddy server with Windows Deluxe hosting?
I may have to change something in the IIS application pool, but I am not sure what to do.
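For reference, the timer setup in Application_Start is roughly of this shape (a minimal sketch; MyListRepository, RefreshList and the 10-second interval are placeholders for my actual code):

// Global.asax.cs
using System;
using System.Web.Http;

public class WebApiApplication : System.Web.HttpApplication
{
    // keep a static reference so the timer is not garbage collected
    private static System.Threading.Timer _pollTimer;

    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // fire immediately, then every 10 seconds (placeholder interval);
        // MyListRepository.RefreshList() stands in for the code that
        // collects live data from the other website
        _pollTimer = new System.Threading.Timer(
            _ => MyListRepository.RefreshList(),
            null, TimeSpan.Zero, TimeSpan.FromSeconds(10));
    }
}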
Thank you for your help.
I'm not sure specifically about GoDaddy, but on Azure App Service this is also common, and it is resolved by enabling the 'Always On' feature, which presumably automates the job of pinging the API every x minutes. If GoDaddy has a similar feature, enabling it may solve this problem.
Create a scheduled task to ping your API before the IIS idle timeout.
http://www.codeproject.com/Articles/12117/Simulate-a-Windows-Service-using-ASP-NET-to-run-sc
http://www.quartz-scheduler.net/
You can also setup Pingdom to ping your API.
Related
The goal is that I want my overall website response time to be instantaneous.
The problem is that I do not have IIS access; my website is hosted using an external service, and I have no access to the IIS panel.
My current approach is to have scheduled code that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all their hosted websites every few hours.
This is why I need to implement another approach, which is to warm up / pre-load the website each time it runs.
How to do this when there is no access to the IIS panel?
The solution requires no 3rd-party sites, robots, or apps; you merely write a very simple app yourself that periodically performs a trivial web function, perhaps a REST GET. By performing this function, say, every few minutes, you not only guarantee that the IIS pool won't time out and go cold for a client, but you also get the nice effect of ensuring your website is up and running in a warm condition (JIT'd and running), ready for a real, non-heartbeat request.
e.g.
In your website, expose a REST API, say www.misspiggy.com/api/hiyaaaa, that does nothing other than return HTTP 200 OK.
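A minimal sketch of such an endpoint in ASP.NET Web API (the controller name just mirrors the example URL):

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class HiyaaaaController : ApiController
{
    // GET api/hiyaaaa -- does no work at all; its only job is to force
    // a stopped or cold site to start up when the heartbeat hits it
    [HttpGet]
    public HttpResponseMessage Get()
    {
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}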
By implementing this in your ASP.NET app, any request to the above URL will cause your stopped or cold ASP.NET website to be JIT'd during:
first deployment (and even then only when a request is made to it)
after the IIS AppPool has timed out and needs to restart on demand
The client code that makes the REST request can be anything:
a console app
a Windows service
WinForms/WPF app
The console app can be triggered to fire via Windows Task Scheduler, say every 5 minutes, thus saving you the hassle of building in a scheduler.
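For example, the whole console app can be a sketch as small as this (the URL being the hypothetical heartbeat endpoint from above):

using System;
using System.Net;

// Heartbeat client: performs one trivial GET and exits.
// Schedule the resulting exe with Windows Task Scheduler, e.g. every 5 minutes.
class Heartbeat
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            try
            {
                // any successful response means the site is up and warm
                client.DownloadString("http://www.misspiggy.com/api/hiyaaaa");
                Console.WriteLine("Site is warm.");
            }
            catch (WebException ex)
            {
                Console.WriteLine("Heartbeat failed: " + ex.Message);
            }
        }
    }
}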
My current approach is to have scheduled code that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all their hosted websites every few hours.
I suggest you set your ping period to be a matter of minutes rather than hours.
No admin access to server required
The problem is that I do not have IIS access; my website is hosted using an external service, and I have no access to the IIS panel.
It should be pointed out that this solution does not require you to install anything new on the server, nor make any changes to it.
Azure
It is interesting to note that Azure Application Insights has Availability Tests which, though designed for testing web site availability, can be used for this exact same purpose of keeping your website alive and warm, ready to go for web clients. In fact, this is what I do for my web apps.
Doing so keeps response times and latency as low as possible.
There are a number of things you can do, but a really simple solution is to use a website monitoring service such as StatusCake or Uptime Robot; there are a large number of them out there. You set them up to call a page or pages on your website at set intervals to ensure it is still up, which has the added bonus of keeping the site warm.
I would also precompile your MVC app if you aren't already doing that.
HTH
Users will be able to configure the printer/scanner on a web application.
So I have a Windows service running on the client machine that communicates with the cloud DB through an API, gets the printer/scanner details, and configures them accordingly on the local network.
The service is configured to run every 30 minutes. So when a user modifies a printer/scanner property using the web application, the update becomes available on the client machine only after the Windows service has run again; the maximum delay is therefore 30 minutes.
The Windows service cannot invoke the API frequently, as that would put too much load on the web server. Updates don't happen often, but when they do, customers expect their local network printers/scanners to be updated immediately with the new configuration.
So the question here is: how can I effectively propagate the cloud data to the local service so that they are in sync as soon as possible?
Please share if there is any other way to achieve this.
I've heard about message queues/ClickOnce, but I am not sure how they would fit here.
Our team has an Android application with a .NET C# backend, hosted in IIS.
Recently, we have observed sudden and unexplainable latencies for our customers, with the following scenario:
Without any warning, users are unable to change the channel (zapping), since the product has to do with live media streaming, and they cannot even log out of the application.
The mobile application, connected to another backend (still a C# backend), works properly, without any problem.
After some time (which has varied from 6 hours for the first incident to 5 minutes for the last one), everything goes back to normal.
I have enabled Failed Request Tracing logs to see if I can get anything from there, and I have results as follows:
<failedRequest url="https://ourDNS.com:443/servertime.aspx"
siteId="1"
appPoolId="DefaultAppPool"
processId="22232"
verb="POST"
remoteUserName=""
userName=""
tokenUserName="NT AUTHORITY\IUSR"
authenticationType="anonymous"
activityId="{80013C53-0802-B500-B63F-84710C7967BB}"
failureReason="TIME_TAKEN"
statusCode="200"
triggerStatusCode="0"
timeTaken="45141"
xmlns:freb="http://schemas.microsoft.com/win/2006/06/iis/freb"
>
The page described above is a simple page that first gets the server's timezone and then, after getting the customer's timezone (which can be set manually from the client), returns the exact date and time of the device where the application is hosted, for further calculation of the stream programme, what is playing now, etc. However, this page, which returns a simple JSON payload with one string in it, sometimes takes more than 45 seconds (to me this is insane).
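To show how trivial it is, the logic is essentially the following (a simplified sketch; the class and parameter names are placeholders):

// servertime.aspx.cs (simplified sketch)
using System;
using System.Web.UI;

public partial class ServerTime : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // customer timezone offset in minutes, set manually from the client
        int clientOffset = int.Parse(Request.Form["tzOffsetMinutes"] ?? "0");
        DateTime clientTime = DateTime.UtcNow.AddMinutes(clientOffset);

        Response.ContentType = "application/json";
        Response.Write("{\"serverTime\":\"" + clientTime.ToString("o") + "\"}");
    }
}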
Another log, from the client side at that moment, is an exception like the following:
java.net.SocketTimeoutException
at java.net.PlainSocketImpl.read(PlainSocketImpl.java:491)
at java.net.PlainSocketImpl.access$000(PlainSocketImpl.java:46)
at java.net.PlainSocketImpl$PlainSocketInputStream.read(PlainSocketImpl.java:240)
at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:103)
at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:191)
at org.apache.http.impl.conn.DefaultResponseParser.parseHead(DefaultResponseParser.java:82)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:174)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:180)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:235)
at org.apache.http.impl.conn.AbstractClientConnAdapter.receiveResponseHeader(AbstractClientConnAdapter.java:259)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:279)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:428)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
at com.framework.utilityframe.webhelper.HttpRequest.getHttpResponse(HttpRequest.java:316)
at com.framework.utilityframe.webhelper.HttpRequest.httpRequest(HttpRequest.java:393)
at com.tibo.webtv.web.TiboLog.logBufferingError(TiboLog.java:319)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:324)
at com.tibo.webtv.CustomVideoView$Buffering_Problem.doInBackground(CustomVideoView.java:307)
at android.os.AsyncTask$2.call(AsyncTask.java:287)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
at java.util.concurrent.FutureTask.run(FutureTask.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569)
at java.lang.Thread.run(Thread.java:856)
Reading through different forums, I have seen various causes of performance problems, from the database to IIS and even application misconfiguration. I have ruled out the database as a cause because:
At the moment of the problem, the database parameters were absolutely fine: no changes in query execution times, no waiting tasks, no locking.
Secondly, the mobile and decoder applications connect to the same database, and the mobile application runs just fine with the same queries.
Now, if I think of IIS: every application hosted in that AppPool was running fine and without delays, but there may still be something I am missing there.
And lastly, something that makes me suspicious is the fact that the mobile application differs in two ways from the decoder application:
First, the mobile application takes the responses from the backend in XML format; the decoder uses JSON.
Second, the mobile application uses HTTP requests, and the decoder uses HTTPS (SSL).
If anyone has experienced similar issues, their help would be greatly appreciated. And for any other detail you need, just ask and I will provide.
So,
Today, our team ran another test, which included:
Application hosted in one server and database in another
Application and database hosted in a completely different server (Azure environment)
In both cases, the result was the same: latencies and problems with the service.
The problem was neither the backend nor the server. First, the Java application was mistakenly executing synchronous tasks when saving logs to another server (a dedicated one, with plenty of capacity for as much data as you can give it). Second, that log server had a full HDD, with more than 1 TB of DB logs alone, so when the application executed those sync tasks (which came first, before any interaction with the channels), they received the socket exceptions. So, for anyone else who may see this post: PLEASE, ALWAYS CHECK YOUR TASKS IN YOUR APPLICATION, AND ALWAYS CHECK ANY SERVER RELATED TO YOUR APPLICATION!!! Thank you very much :D
So I have this web application (ASP.NET MVC 4 web site) which has at least 2,000 online users at any time. One of the most popular pages in my application contains data about the user, and this data is not located in my repository; it comes from an external vendor that is integrated into my system. So whenever this page is rendered, I have to make calls to those services (currently there are 17) and then draw the page according to the data they return. The data is subject to change at any given moment, so I cannot cache it.

Everything works OK most of the time, and CPU utilization is 5%-30% (depending on the number of online users, of course). For each service call I have a timeout of 5000 milliseconds (for service references I set the SendTimeout, and for the raw HttpWebRequests I set the Timeout property to 5000 milliseconds).

Now suppose that one service is down. The CPU utilization of my server goes unexpectedly low, like 3%-8%, and the application lags: it takes some time to load pages (any page). For instance, where a response from my application would normally take 150-250 ms, it now takes 1-3 seconds.

I'm out of ideas about what to do. I cannot decrease the timeout, because some services sometimes take 3-4 seconds, so a 5-second timeout is the least I can give. What can I do to prevent the late responses? I know it's a bit of a general question. Any suggestion would be appreciated. Thanks in advance.
It looks like you have a threading problem. Too many threads are waiting for a response from the external service, and they cannot process other requests.
What I recommend is to use an async controller: http://www.asp.net/mvc/overview/performance/using-asynchronous-methods-in-aspnet-mvc-4
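A minimal sketch of the idea (MVC 4 on .NET 4.5; the controller and vendor URL are placeholders). While the await is pending, the worker thread goes back to the pool instead of blocking, so a slow vendor no longer ties up threads needed by other pages:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class UserDataController : Controller
{
    // share one HttpClient; same 5-second timeout as in the question
    private static readonly HttpClient Client =
        new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

    public async Task<ActionResult> Index()
    {
        // the request thread is released while this call is in flight
        string vendorData = await Client.GetStringAsync("https://vendor.example.com/userdata");
        return View((object)vendorData);
    }
}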
Suggestion 1
What if you replicate this data to your server?
Meaning you could have another service that works separately and synchronizes the external service's data with your server, so that your website always reads your local data. Right, this is some kind of caching, and the web pages can show slightly old data, but you can set the replication service to check the data as often as you need.
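A rough sketch of that replication idea (all names, the URL and the 30-second interval are made up):

using System;
using System.Collections.Concurrent;
using System.Net.Http;

// Background replicator: polls the external vendor on its own schedule
// and keeps the latest copy in memory for the website to read.
public static class VendorCache
{
    private static readonly HttpClient Client = new HttpClient();
    private static System.Threading.Timer _timer;

    // web pages read this instead of calling the vendor directly
    public static readonly ConcurrentDictionary<string, string> Data =
        new ConcurrentDictionary<string, string>();

    public static void Start()
    {
        _timer = new System.Threading.Timer(async _ =>
        {
            try
            {
                Data["userdata"] = await Client.GetStringAsync("https://vendor.example.com/userdata");
            }
            catch (HttpRequestException)
            {
                // vendor is down: keep serving the last good copy
            }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }
}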
Suggestion 2
Another suggestion that comes to mind: can you use push notifications instead? All the web pages open and wait while the server checks the data and notifies all the clients with the fresh data. In this case only one thread is busy with the external data, and all connected users get fresh data as soon as it is available. As a starting point, check SignalR.
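For example, with the SignalR 2.x .NET client, the waiting side could look like this sketch (hub and method names are made up):

using System;
using Microsoft.AspNet.SignalR.Client;   // NuGet: Microsoft.AspNet.SignalR.Client

class PushDemo
{
    static void Main()
    {
        var connection = new HubConnection("https://yoursite.example.com/");
        IHubProxy proxy = connection.CreateHubProxy("VendorDataHub");

        // the server pushes fresh vendor data to every connected client
        proxy.On<string>("dataUpdated",
            payload => Console.WriteLine("Fresh data: " + payload));

        connection.Start().Wait();
        Console.ReadLine();   // stay connected and keep listening
    }
}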
I am right now in the middle of creating a new web application for Azure.
I've noticed that if I do not visit the site for a while (30+ minutes), it takes a while to load (20+ seconds) on my first visit. I presume this is because Azure has to go through and compile the application. Is there a way to either prevent the application from having to be compiled after idling for an extended amount of time, or somehow pre-compile the web application locally and then deploy it to Azure, so it does not need to be compiled on the server?
I am using VS 2012, Web Application (Web Forms) and Web Deploy
You can access my Web Site Here.
Unfortunately there is no way around this with Azure Websites. It is due to the fact that IIS is a demand-driven web server, and so it only does things when asked to: an IIS worker process only spins up when a request arrives for the site hosted in that worker process.
If you're using VS2012 and Web Deploy then you are most probably already compiling the code. In .NET, this compile step only takes it part way, into IL (intermediate language), which is CPU-independent; the worker process then needs to take this and convert it into native code that can be run on that machine. That is why your site takes a while to load.
They did start shipping a warm-up module (Application Initialization) with IIS 7.5, which was included in IIS 8, to solve this problem for initialization-heavy sites. Unfortunately, it's not available with Azure Websites, as it's a native module; if you want to use it, you would have to switch to an Azure cloud service or a virtual machine to run your site.
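For completeness, on hosts where the module is available, wiring it up is a small web.config addition; a sketch (the warm-up page is hypothetical):

<!-- web.config: ask IIS 8 (or IIS 7.5 with the add-on module) to hit a
     warm-up URL whenever the application (re)starts -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/warmup.aspx" />
  </applicationInitialization>
</system.webServer>

Note that for a fully cold start you also need preloadEnabled on the site and startMode="AlwaysRunning" on the app pool, which is exactly the kind of server-level access Azure Websites doesn't give you.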
The other alternative, which I've known people to use, is a cloud monitoring service such as Pingdom, which continuously makes requests to a page on your site and thereby keeps the worker process alive. One last alternative, which is far from ideal, is to have a simple script somewhere that makes a request to the page to keep it alive.
If your website becomes popular, however, there is no need for any of these steps, as the mere fact that people are visiting your website will keep the worker process alive.
I just ran your site through Google Page Speed:
https://developers.google.com/speed/pagespeed/insights#url=http_3A_2F_2Fffinfo.azurewebsites.net_2F&mobile=false
If you are concerned about speed/performance on a "shared" Web Site instance, you should fix some of the items listed there. Having a huge 375 KB background image is probably not the best idea...and it's not even compressed.
If you can, move to an "extra small" instance of a cloud service, and you can optimize a lot of additional things (turn off ASP.NET modules, remove headers, control compression, client caching). Your goal is to have a popular site, correct?...start it off right :)
It's not out of the box, but one easy way to keep your cloud service warm with a scheduled ping is to use Windows Azure Mobile Services, as described here.
It's basically a small script that is scheduled to hit your website every 15 minutes.
// Entry point for the Mobile Services scheduled job
// (the function name matches the name of the scheduled job)
function KeepAlive()
{
    KeepSiteAlive("http://www.yousayhello.co.uk");
}

// Makes a simple GET request to the site so its worker process stays warm
function KeepSiteAlive (siteurl)
{
    console.info("Warming up " + siteurl);
    var httpRequest = require('request');
    httpRequest(siteurl, function (err, response, body)
    {
        if (err)
        {
            console.warn("Couldn't retrieve site");
        }
        else if (response.statusCode !== 200)
        {
            console.warn("Bad response");
        }
        else
        {
            console.info("Site Warmed Up!");
        }
    });
}