Server Fault Question
I am relatively new to this field although I've been a programmer for years.
My company has a website hosted in Azure. I am the one who performs the "Publish" action after confirming that the team has finished developing a certain module. However, I have to take the site down on every publish (adding app_offline.htm while copying DLLs, ASPX files, etc.).
This seems clumsy, right? There should be a better way to do it.
I was thinking of the obvious approach: two servers, so that while I publish to one, the other takes all the traffic; afterwards they sync, or I publish to the second one as well.
Environment: Visual Studio 2013, Azure Web Site, ASP.NET 4.0.
Please share your thoughts and knowledge, or even just where I should start investigating.
Thanks!
If you are publishing the site to a cloud service, you can publish to the staging instance first and then swap it over to production once the staging deployment has finished.
The idea being that you'll have version 5 of the website in the production slot and version 4 of the website in the staging slot. You would deploy version 6 to the staging slot and wait for it to finish. Then you can swap the virtual IP addresses once the staging slot is ready.
The swap takes maybe 20-30 seconds, so downtime is minimal.
The added benefit is that if the new version has issues, you can swap again and get the old version back up.
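If you want to script the swap rather than click through the portal, the classic Azure PowerShell module has a cmdlet for it. A minimal sketch, assuming the service management cmdlets are installed and a subscription is selected (the service name is a placeholder):

    # Swap the staging and production slots of a cloud service (VIP swap)
    Move-AzureDeployment -ServiceName "mycloudservice"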
In my experience, cloud services are a bit easier to manage for availability than a VM.
Related
Our web application is developed by two teams. One team works on the client side, with its own development branch, and the other works on the server side, also with its own development branch. The client and the server run separately, each as a website on a different port. The websites are hosted on IIS Express during development, and in production they will run on IIS.
Our ideal situation is that each team can develop completely separately, and whenever a development session is over, both teams merge their changesets to a common branch in order to integrate; then each team merges back to its development branch and continues.
For full separation, we have two server projects: one that handles the real HTTP requests, and a "stub server" that responds to all of the client's HTTP requests with default values, just so the client-side team can test their code without depending on the server's functionality.
The problem is that both the stub server and the real server use the same port, which the client-side project is pointed at.
This causes many annoying mistakes (mostly for the server-side team) of running the application against the stub server instead of the real one during reviews, tests, etc. Our only workaround is to manually create a virtual directory for the real web server project every time before running, or after finding out we were running the wrong server.
Is there a smarter solution to overcome this annoying problem? That would improve our lives!
If anything I said was unclear, please correct me (I'm new to this) or ask for more details; I'll be glad to clarify!
Thanks to all helpers!
I believe your problem is more related to build automation than to server configuration. You should keep the stub server and the real server on separate ports, and switch the port the client points to as part of your client's build process.
If you are using AngularJS, I suggest creating steps in your client application's build process using common tools like gulp or grunt. You could create build tasks that set a global variable or modify a constant (e.g. the API endpoint), and name them "local testing" (pointing your client at the stub server) and "integration" (pointing at the real server).
Please note that you can easily integrate those build processes into Visual Studio, making them part of your global debug/build process.
Here is a simple gulp plugin useful for replacing text inside any file: https://www.npmjs.com/package/gulp-replace
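For illustration, a minimal gulpfile sketch using that plugin might look like the following; the file path, task name, and port numbers are invented for the example:

    var gulp = require('gulp');
    var replace = require('gulp-replace');

    // "integration" build: rewrite the endpoint constant so the client
    // talks to the real server instead of the stub server
    gulp.task('config-integration', function () {
        return gulp.src('app/config.js')
            .pipe(replace('http://localhost:8080', 'http://localhost:9090'))
            .pipe(gulp.dest('app'));
    });

A matching "config-local" task would do the reverse replacement, and each task can be hooked into the corresponding Visual Studio build configuration.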
There are two ways to publish a website to Azure: the simple Publish feature, or deploying as a Cloud Service. I also have one worker role in my solution, so I selected Cloud Service instead of the simple website Publish feature.
But I'm very disappointed with Cloud Services. First of all, deploying as a cloud service takes about 10 times longer than a simple website Publish. The second problem: every time I want to deploy, I have to change the connection strings in web.config to point to SQL Azure (instead of my local SQL Server). The website Publish feature can set the necessary SQL connection strings for a deployment. Maybe I'm doing something wrong, and deployment can be done in 10 seconds with the ability to set different connection strings (like the website Publish)?
I'm thinking about putting only the worker role in the cloud and deploying the website as a website, without a cloud service...
First, I would highly recommend that you go through this question comparing Azure Websites and Cloud Services: What is the difference between an Azure Web Site and an Azure Web Role
Now coming on to your questions:
First of all, deploying as a cloud service takes about 10 times longer than a simple website Publish.
This is bound to happen, because when you deploy a cloud service (say, through Visual Studio), the following things happen that cause the delay:
As part of the build process for cloud services, Visual Studio creates a package file and uploads it to blob storage. This package is then used to create the cloud service deployment.
The Azure Fabric Controller, which is responsible for managing the life cycle of a cloud service, creates a brand new virtual machine for you, installs the necessary software (IIS, for example) and then deploys your code from the package file.
Neither of these things happens with websites.
The second problem: every time I want to deploy, I have to change the connection strings in web.config to point to SQL Azure (instead of my local SQL Server). The website Publish feature can set the necessary SQL connection strings for a deployment. Maybe I'm doing something wrong, and deployment can be done in 10 seconds with the ability to set different connection strings (like the website Publish)?
You're not doing anything wrong per se. Your web.config file gets bundled into the package file, so after any change to web.config you need to recreate the package and update the deployment (which includes uploading it to blob storage again).
One possible solution for your problem would be to use config transformations and have your Web.Release.config file contain the connection string for your production database. When you build your project in Release mode, the correct connection string will end up in your web.config file.
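For example, a Web.Release.config along these lines swaps in the production connection string at build time; the connection name and server details below are placeholders:

    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="DefaultConnection"
             connectionString="Server=tcp:myserver.database.windows.net;Database=mydb;User ID=myuser;Password=mypassword;"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </connectionStrings>
    </configuration>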
I think about put to Cloud only worker role and website deploy as
website, without Cloud service...
This is certainly a viable option. Another alternative would be to look into WebJobs. Like worker roles, they are meant for handling background processing workloads, but they have the same deployment convenience as a website. You may also find this blog post useful: http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx.
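As a rough sketch of what a continuous WebJob looks like with the WebJobs SDK (the queue name here is just an example, and the AzureWebJobsStorage connection string is assumed to be configured):

    using System.IO;
    using Microsoft.Azure.WebJobs; // WebJobs SDK NuGet package

    public class Program
    {
        public static void Main()
        {
            // Listens for triggers and dispatches to the functions below.
            var host = new JobHost();
            host.RunAndBlock();
        }

        // Runs whenever a message lands on the "workitems" queue.
        public static void ProcessQueueMessage([QueueTrigger("workitems")] string message, TextWriter log)
        {
            log.WriteLine("Processed: " + message);
        }
    }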
I have an ASP.NET web application that has been deployed to clients' local IIS servers. There are more than 100 such clients so far.
Since the application is in an early phase of release, it gets updated almost daily. It is becoming cumbersome to deploy new updates to each client's server individually.
Therefore, I would like to implement a mechanism whereby the client can automatically check for updates, then download and replace the IIS application files.
I did try the solution given in the article Building a Self Updating Site Using NuGet, but it didn't feel very reliable or scalable.
So any help on this would be highly appreciated.
Many Thanks.
I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, I have my application running 40 websites. It's all the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then having a manual step that runs MSDeploy for each website, but this isn't particularly ideal since I'd need to a) purchase a full license and b) always remember to add each new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First, it is possible to have a build configuration in TeamCity that builds and deploys to a specific location, whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly this environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice. I don't know how you have it all set up, but if this A record simply creates a subdomain, I'd shift my approach to a real multi-tenant environment: one single website and one single app pool for all clients, with multiple bindings, each associated with a specific subdomain. This approach is far more scalable and uses far fewer memory resources; I've done some benchmark profiling in the past, and the amount of memory each process (app pool) consumed was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as:
Avoiding any per-client singleton component
Avoiding static variables
Caches cannot be global and MUST have a client context associated (see the sketch after this list)
Pay special attention to how you save client files to the file system
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I could probably find some useful info.
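On the cache point specifically, here is a minimal sketch of the idea. Resolving the tenant from the request's host name is just one possible strategy, and the class and key format are made up for the example:

    using System;
    using System.Runtime.Caching; // reference System.Runtime.Caching
    using System.Web;

    // Prefixes every cache key with the current tenant so entries never
    // bleed from one client's subdomain to another's.
    public static class TenantCache
    {
        private static string Tenant
        {
            get { return HttpContext.Current.Request.Url.Host; } // e.g. "client1.example.com"
        }

        public static void Set(string key, object value)
        {
            MemoryCache.Default.Set(Tenant + ":" + key, value, DateTimeOffset.UtcNow.AddMinutes(10));
        }

        public static object Get(string key)
        {
            return MemoryCache.Default.Get(Tenant + ":" + key);
        }
    }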
I'm looking for examples of what people have done in order to deploy the same web app or processes to multiple servers.
The deployment process right now consists of copying the same files multiple times to different servers within our company. There has to be a better way to do this. Right now I am looking into MSBuild; does anyone have other ideas? Thanks in advance.
Take a look at msdeploy and Web Deploy.
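For instance, a single Web Deploy invocation can sync a site's content from one machine to another; the paths and server name below are placeholders, and it assumes the Web Deploy remote agent service is running on the target:

    msdeploy.exe -verb:sync -source:contentPath="C:\builds\MyWebApp" -dest:contentPath="D:\wwwroot\MyWebApp",computerName=WEB01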
I've done this using a variety of methods. However, I think the best one is what I call a "rolling" deployment.
The following assumes a code only deployment:
Take one or more web servers "offline" by removing them from the load-balancing list; let's call this group A. You should keep enough servers running to handle existing traffic; we'll call those group B. Push the code to the offline servers (group A).
Then, put group A back into rotation and pull group B out. Make sure the app still functions with the new code. If all is good, update group B and put those servers back in rotation. In the event of a problem, just put group B back in and take A out again.
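A sketch of that rolling pattern as a small deployment driver. The pool functions and server names are placeholders you would wire to whatever API or script your load balancer and copy mechanism expose:

    using System;
    using System.Net.Http;

    class RollingDeploy
    {
        static readonly string[] GroupA = { "web1", "web2" };
        static readonly string[] GroupB = { "web3", "web4" };

        static void Main()
        {
            // 1. Pull group A out and update it while group B serves traffic.
            foreach (var s in GroupA) RemoveFromPool(s);
            foreach (var s in GroupA) PushCode(s);

            // 2. Put A back in rotation, pull B out, and verify the new code.
            foreach (var s in GroupA) AddToPool(s);
            foreach (var s in GroupB) RemoveFromPool(s);
            if (!IsHealthy("http://www.example.com/"))
                throw new Exception("New code failing: re-add group B and pull A back out.");

            // 3. Update B and return it to rotation.
            foreach (var s in GroupB) PushCode(s);
            foreach (var s in GroupB) AddToPool(s);
        }

        static bool IsHealthy(string url)
        {
            using (var http = new HttpClient())
                return http.GetAsync(url).Result.IsSuccessStatusCode;
        }

        // Placeholders: wire these to your load balancer's API and copy mechanism.
        static void RemoveFromPool(string server) { /* e.g. LB management API call */ }
        static void AddToPool(string server) { /* e.g. LB management API call */ }
        static void PushCode(string server) { /* e.g. copy build output to \\server\share */ }
    }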
In the case of a database update there are other factors to consider. If you can take the whole site down for a limited period then do so and perform all necessary updates. This is by far the easiest.
However, if you can't, then do a modified "rolling" deployment, which requires multiple database servers. Pick a point in time and move a copy of the production database to the second server. Apply your changes. Then pull one group of web servers out, update their code to production, and test. If all is good, put those web servers back into rotation and take out the other group. Update the code on that group while pointing it at the second DB server, then put it back into rotation.
Finally, apply all data changes that occurred on the primary production database to the secondary one.
Note: I don't use Web Deploy or MSDeploy for pushes to production. Quite frankly, I want the files ready to be copied into the correct directory on the server so that the push can run as quickly as possible. Both Web Deploy options involve transferring those files over a network connection, which is typically much slower than simply copying from one local directory to another.
You can build a simple console app that connects to a fixed SFTP location, downloads and uncompresses the files into a fixed directory, and runs them. A metadata XML file can be useful for defining rules, such as which applications each machine should run, prerequisites, and so on.
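A minimal sketch of such a console app, using the SSH.NET library for the SFTP part; the host, credentials, and paths are placeholders:

    using System.IO;
    using System.IO.Compression; // reference System.IO.Compression.FileSystem
    using Renci.SshNet;          // SSH.NET NuGet package

    class Updater
    {
        static void Main()
        {
            // Download the latest release package from the fixed SFTP location.
            using (var sftp = new SftpClient("deploy.example.com", "deployuser", "password"))
            {
                sftp.Connect();
                using (var local = File.Create(@"C:\updates\release.zip"))
                    sftp.DownloadFile("/releases/latest.zip", local);
            }

            // Unpack into the application directory; a metadata XML file inside
            // the package could describe what this machine should run.
            ZipFile.ExtractToDirectory(@"C:\updates\release.zip", @"C:\apps\current");
        }
    }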
You can also use the Dropbox API to download your files if you don't have a centralized server to unify your apps.
Have a look at kwateeSDCM. It's language- and platform-agnostic (Windows, Linux, Solaris, MacOS). There's an article dedicated to deploying a web app on multiple Tomcat servers.