I am right now in the middle of creating a new web application for Azure.
I've noticed that if I do not visit the site for a while (30+ minutes), it will take a while to load (20+ seconds) on my first visit. I presume this is because Azure has to go through and compile the application. Is there a way to either prevent the application from having to be compiled after idling for an extended amount of time, or to somehow pre-compile the web application locally and then deploy it to Azure so it does not need to be compiled on the server?
I am using VS 2012, Web Application (Web Forms) and Web Deploy
You can access my web site here.
Unfortunately there is no way around this with Azure Web Sites. As you said, it is due to the fact that IIS is a demand-driven web server and so only does things when it is asked to. An IIS worker process only spins up when a request arrives for the site hosted in that worker process.
If you're using VS 2012 and Web Deploy then you are most probably already compiling the code. In .NET this compile step only takes it part of the way, into IL (intermediate language), which is CPU independent; the worker process then needs to take this and convert it into native code that can be run on that machine. That is why your site is taking a while to load.
They did start shipping a warm-up module (Application Initialization) for IIS 7.5, which was then included in IIS 8, to solve this problem for initialisation-heavy sites. Unfortunately it's not available with Azure Web Sites as it's a native module. If you want to use it then you would have to switch to an Azure cloud service or a virtual machine to run your site.
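If you do end up on a cloud service or a VM where the module can be installed, enabling it is mostly configuration. As a rough, hypothetical sketch (assuming IIS 8, or IIS 7.5 with the out-of-band module), the web.config part looks something like this:

<configuration>
  <system.webServer>
    <!-- After a restart, warm the app up by automatically requesting "/" -->
    <applicationInitialization doAppInitAfterRestart="true">
      <add initializationPage="/" />
    </applicationInitialization>
  </system.webServer>
</configuration>

You would also set startMode="AlwaysRunning" on the application pool and preloadEnabled="true" on the site in applicationHost.config - exactly the kind of machine-level change you cannot make on Azure Web Sites, which is why the module doesn't help there.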
The other alternative which I've known people to use is a cloud monitoring service such as Pingdom, which continuously makes requests to a page on your site and so keeps the worker process alive. One last alternative, which is far from ideal, is to have a simple script somewhere that makes a request to the page to keep it alive.
If your website becomes popular, however, there is no need for any of these steps as the mere fact that people are visiting your website will keep the worker process alive.
I just ran your site through Google Page Speed:
https://developers.google.com/speed/pagespeed/insights#url=http_3A_2F_2Fffinfo.azurewebsites.net_2F&mobile=false
If you are concerned about speed/performance on a "shared" Web Site instance you should fix some of the items listed in there. Having a huge background image of 375 KB is probably not the best idea... and it's not even compressed.
If you can, move to an "extra small" instance of a cloud service and you can optimize a lot of additional things (turn off ASP.NET modules, remove headers, control compression, client caching). Your goal is to have a popular site, correct? Start it off right. :)
It's not out of the box, but one easy way to keep your cloud service warm with a scheduled ping is to use Windows Azure Mobile Services, as described here.
It's basically a small script that is scheduled to hit your website every 15 minutes.
function KeepAlive()
{
    KeepSiteAlive("http://www.yousayhello.co.uk");
}

// Requests the given URL and logs whether the warm-up succeeded.
function KeepSiteAlive(siteurl)
{
    console.info("Warming up " + siteurl);
    var httpRequest = require('request');
    httpRequest(siteurl, function (err, response, body)
    {
        if (err)
        {
            console.warn("Couldn't retrieve site");
        }
        else if (response.statusCode !== 200)
        {
            console.warn("Bad response");
        }
        else
        {
            console.info("Site Warmed Up!");
        }
    });
}
The goal is that I want my overall website response time to be instantaneous.
The problem is that I do not have IIS access; my website is hosted by an external service and I have no access to the IIS panel.
My current approach is to have a scheduled piece of code that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all of its hosted websites every few hours.
This is why I need to implement another approach, which is to warm up / pre-load the website each time that happens.
How can I do this when there is no access to the IIS panel?
The solution requires no 3rd-party sites, robots, or apps; you merely write a very simple app yourself that periodically performs a trivial web function, perhaps a REST GET. By performing this function, say, every few minutes, you not only guarantee that the IIS pool won't time out and be cold for a client, but you also ensure your website is up and running in a warm condition (JIT'd and running), ready for a real, non-heartbeat request.
e.g.
In your website expose a REST API, say www.misspiggy.com/api/hiyaaaa, that does nothing other than return HTTP 200 OK.
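As a minimal sketch, assuming ASP.NET Web API is available in the project (a plain .ashx handler or an .aspx page that returns 200 would work just as well; the controller name is only illustrative):

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class HiyaaaaController : ApiController
{
    // GET api/hiyaaaa - does no real work, it just proves the app pool is warm.
    public HttpResponseMessage Get()
    {
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}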
By implementing this in your ASP.NET app, any request to the above URL will cause your stopped or cold ASP.NET website to be JIT'd during:
first deployment (and even then only once a request is made to it)
after the IIS AppPool has timed out and needs to restart on demand
The client code that makes the REST request can be anything:
a console app
a Windows service
WinForms/WPF app
The console app can be triggered to fire via Windows Task Scheduler, say every 5 minutes, thus saving you the hassle of building in a scheduler.
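Such a console client can be as small as the sketch below (the default URL is the illustrative endpoint from above and would be replaced with your own; the non-zero exit code lets Task Scheduler record failures):

using System;
using System.Net;

class Heartbeat
{
    static int Main(string[] args)
    {
        // URL is illustrative; pass your own heartbeat endpoint as the first argument.
        string url = args.Length > 0 ? args[0] : "http://www.misspiggy.com/api/hiyaaaa";
        try
        {
            using (var client = new WebClient())
            {
                client.DownloadString(url);   // any successful response means the site is warm
            }
            Console.WriteLine("{0:u} OK {1}", DateTime.UtcNow, url);
            return 0;
        }
        catch (WebException ex)
        {
            Console.Error.WriteLine("{0:u} FAILED {1}: {2}", DateTime.UtcNow, url, ex.Message);
            return 1;
        }
    }
}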
My current approach is to have a scheduled piece of code that keeps my website alive. The problem with this approach alone is that the hosting service has an algorithm that shuts down all of its hosted websites every few hours.
I suggest you set your ping period to be a matter of minutes rather than hours.
No admin access to server required
The problem is that I do not have IIS access; my website is hosted by an external service and I have no access to the IIS panel.
It should be pointed out that this solution does not require you to install anything new on the server nor make any changes on the server.
Azure
It is interesting to note that Azure Application Insights has Availability Tests which, though designed for testing website availability, can be used for this exact same purpose of keeping your website alive and warm, ready to go for web clients. In fact this is what I do for my web apps.
Doing so keeps response times and latency as low as possible.
There are a number of things you can do, but a really simple solution is to use a website monitoring service, something like StatusCake or Uptime Robot; there are a large number of them out there. You set them up to call a page or pages on your website at set intervals to ensure it is still up, which has the added bonus of keeping the site warm.
I would also precompile your MVC app if you aren't already doing that.
HTH
I am relatively new to this field although I've been a programmer for years.
My company has a website hosted in Azure. I am the one that performs the "Publish" action after confirming that the team has finished developing a certain module. However, I have to take the site down on every publish (adding the app_offline.htm while copying DLLs, .aspx files, etc.).
This seems unnecessary, right? There should be a better way to do it.
I was thinking of the obvious: two servers, so that while I "talk" to one, the other takes all the traffic, and afterwards they sync, or I can publish to the second one.
Environment: Visual Studio 2013, Azure Web Site, ASP.NET 4.0.
Please share your thoughts and knowledge, or even just where I should start my investigation.
Thanks!
If you are publishing the site to a cloud service, then you can publish the site to the staging instance first and then swap over to production after the staging deployment has finished.
The idea being that you'll have version 5 of the website in the production slot and version 4 of the website in the staging slot. You would deploy version 6 to the staging slot and wait for it to finish. Then you can swap the virtual IP addresses once the staging slot is ready.
The swap takes maybe 20-30 seconds so it's minimal downtime.
The added benefit is that if the new version has issues, you can swap again and get the old version back up.
Cloud services from my experience are a bit easier to manage for availability than a VM.
Currently working on a .NET solution for an application server. I'm using .NET 4.0 running on Windows Server 2008 R2 with IIS 7.5.
My requirements are:
The application server can run multiple console applications at once on a schedule - Quartz.NET looks like a really good solution to this problem - and is working well for me so far
The application server will also host a web application that will report on jobs (what time they ran, what they did, how long they took etc)
I would like to be able to restart the "service" that is running my jobs and trigger ad hoc jobs from the web interface.
The Service that is running my jobs needs to run all the time
Once this is live I will not have direct access to the machine to restart a Windows Service, but I could potentially set up IIS to be able to do this for me.
WCF Services look quite promising to me - but I'm not sure where to host them. My current project uses a WCF Service to run console applications using the Quartz.NET plugin. Configuration for what to run and when to run it is stored in an Oracle database, and my WCF service connects directly to the database to retrieve this information (not sure if that is the intended use of WCF).
If I host the WCF Service in IIS / WAS then running the console applications might be a security concern from what I've read. I can keep the WCF service running all the time using AppFabric at least. Alternatively I could host it in a Windows Service and allow my web app to consume the WCF service to report on the jobs. I'm concerned about using a Windows Service though, as I won't have direct maintenance access to this machine and if it breaks I'm in trouble. I would really like to be able to do the maintenance from a web application. A Windows Service also feels a little unnecessary given it can be hosted from IIS.
So my question is - is a WCF Service the right approach to this problem or can anyone think of a better approach?
If a WCF service is a good approach - where should I host it so that I can perform maintenance via a web interface given I will not have direct access to the machine itself?
Should the WCF service be the one to start and schedule the jobs?
I think you're overengineering it, possibly.
The Problem: You have a web site which needs to start up jobs on an ad-hoc basis. Other jobs will be run to a fixed schedule. The web site will report on all/any of these jobs.
For running the scheduled jobs, a Windows Service using Quartz is indeed an ideal solution for the fixed-schedule part. However, to report on those jobs the data must be collected by the Service and made available. A service can be set up to restart on failure, so you can guarantee that it will always be running (barring a minute or two while it restarts if it fails - and why should it?). However, any history will be lost unless the Service stores it somewhere it can be retrieved after a restart.
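As a rough sketch of the fixed-schedule part (assuming the synchronous Quartz.NET 2.x API of that era; the job name and cron expression are only illustrative):

using Quartz;
using Quartz.Impl;

public class NightlyExportJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Do the work, then record start time, duration and outcome
        // somewhere the web site can report on (see below).
    }
}

public static class JobBootstrapper
{
    public static IScheduler Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<NightlyExportJob>()
                                   .WithIdentity("nightly-export")
                                   .Build();

        // Run every day at 02:00 (cron format: sec min hour day month day-of-week).
        ITrigger trigger = TriggerBuilder.Create()
                                         .WithCronSchedule("0 0 2 * * ?")
                                         .Build();

        scheduler.ScheduleJob(job, trigger);
        return scheduler;
    }
}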
The simpler solution to the web site getting the history is for the Service to write its data to a database. Then it doesn't worry about a restart: all the history has already been saved, and the data can be read by the web site at any time.
Similarly, if the web site talks directly to the Service (as a WCF Service or otherwise) then what happens if the service is not currently running? The request fails until the restart is completed - frustrating for the user. Alternatively, the web site puts the request into the database. The service monitors the database for requests and starts jobs appropriately when it sees a new one. If a request is written while the service is not running, then when it restarts it will see the request(s) in the DB and execute them.
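A sketch of that hand-off might look like the following (the JobRequests table, its columns and the use of SqlClient are invented for illustration - the question mentions Oracle, but the idea is identical): the web site INSERTs a row, and the service polls for pending rows.

using System;
using System.Data.SqlClient;

public class JobRequestPoller
{
    private readonly string _connectionString;

    public JobRequestPoller(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Call this periodically from the service (e.g. from a Quartz job or a timer).
    public void Poll()
    {
        using (var conn = new SqlConnection(_connectionString))
        {
            conn.Open();
            var select = new SqlCommand(
                "SELECT Id, JobName FROM JobRequests WHERE Status = 'Pending'", conn);
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    string jobName = reader.GetString(1);
                    Console.WriteLine("Would trigger job '{0}' for request {1}", jobName, id);
                    // Trigger the corresponding job here, then mark the row as done
                    // and write a history record for the web site to report on.
                }
            }
        }
    }
}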
So I think using a WCF service is overkill, and actually introduces some problems: persistence of history, and what to do about requests made while the service is down. These problems don't arise if you go the way I've described.
Cheers -
I am creating a website in ASP.NET but I have an issue.
I have coded a program which can crawl a given web page, e.g. thenextweb.com, for its links, content and images.
Now I want to store this crawled data in my table Crawlr_Data.
I want the crawler to run every 30 minutes and update the table with new links if available.
(On the home page of my website I am showing the information stored in the database.)
How can I run the crawler in the background and update the database?
What technology (web services, WCF, or anything else in Visual Studio) should I use so that if I host the website online, its crawler keeps running and updating the table?
Please suggest
Thanks
There are two ways to do this with the Microsoft stack.
Create a service to run on the server. Have the service itself manage when it wakes up and crawls.
Create a console app to do the crawl. Run the console app as a scheduled task using Windows Task Scheduler as often as you like (a sketch of such a console app follows).
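For the second option, a very small sketch of the console app might look like this (the regex and the hard-coded URL are only illustrative, and writing the results to the Crawlr_Data table is left out because its schema isn't given in the question):

using System;
using System.Net;
using System.Text.RegularExpressions;

class Crawler
{
    static void Main()
    {
        string url = "http://thenextweb.com";
        string html;
        using (var client = new WebClient())
        {
            html = client.DownloadString(url);
        }

        // Naive link extraction; a real crawler would use a proper HTML parser.
        foreach (Match m in Regex.Matches(html, @"href=""(http[^""]+)""", RegexOptions.IgnoreCase))
        {
            Console.WriteLine(m.Groups[1].Value);
            // Insert or update the link in the Crawlr_Data table here.
        }
    }
}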
I guess there are other ways to do it - so saying there are just two is not totally accurate - there are 3rd-party programs that will do it for you as well... I expect most if not all of them are implemented as a service. You could also write a program that runs on the server as neither a console app nor a service, but this is generally a bad idea.
I'm creating an ASP.NET 4.0 website and part of this site requires an "always running" application. Normally I would create a Windows Service for this, but the site will be hosted in a shared hosting environment, so unless I get a virtual server this isn't a possibility.
My first thought was to have a thread running in the background that would do this; it would be created on Application_Start and destroyed on Application_End. I've looked around and this seems like it could be an option, but I would of course have to hit the site in order to cause Application_Start to be called, and if the associated AppPool is recycled then this process would have to be repeated (so I believe?!?).
Within a normal ASP.NET website, does this seem possible?
In the end I had a separate thread that sits and waits for a signal to be set. Once set, it then does its work. To make sure the thread is always active I make an HTTP request for a "dummy" page to ensure that, if the AppPool was recycled, the Application_Start event is triggered and the thread restarted.
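A rough sketch of that arrangement in Global.asax.cs might look like this (the names and the stop handling are illustrative; it is the external request for the "dummy" page that guarantees Application_Start runs again after a recycle):

using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    private static Thread _worker;
    private static readonly ManualResetEvent _signal = new ManualResetEvent(false);
    private static volatile bool _stopping;

    protected void Application_Start(object sender, EventArgs e)
    {
        _worker = new Thread(WorkLoop) { IsBackground = true };
        _worker.Start();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Ask the worker to finish; the AppPool recycle will tear it down anyway.
        _stopping = true;
        _signal.Set();
    }

    // Called from page code to wake the worker up and make it do its work.
    public static void Trigger()
    {
        _signal.Set();
    }

    private static void WorkLoop()
    {
        while (!_stopping)
        {
            _signal.WaitOne();   // sleep until someone signals us
            _signal.Reset();
            if (_stopping) break;
            // ... do the actual work here ...
        }
    }
}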
It depends on what you mean by an 'always running application':
If it's a real-time service, it still makes sense to run it as a separate process, even if it has a web front-end. That's because the ASP.NET server was designed by Microsoft for specific tasks (to run web apps, render pages, etc.) in many respects, such as memory usage and multithreading. And I'd prefer to use at least a VDS in such a case.
Another case is when it's an application triggered periodically (say every hour) to do some uncomplicated work - perhaps your shared host has a mechanism to trigger a specific page to do some work (as my host does). For example, I have an ASP.NET page that monitors the tour date list of my favourite band and sends an email notification when they are going to play a gig in my town - it's triggered by the host every 4 hours.