I'm looking for examples of what people have done in order to deploy the same web app or processes to multiple servers.
Our deployment process right now consists of copying the same files multiple times to different servers within our company. There has to be a better way to do this. At the moment I'm looking into MSBuild; does anyone have other ideas? Thanks in advance.
Take a look at msdeploy and Web Deploy.
I've done this using a variety of methods. However, I think the best one is what I call a "rolling" deployment.
The following assumes a code-only deployment:
Take one or more web servers "offline" by removing them from the load-balancing list; let's call this group A. Keep enough servers running to handle the existing traffic; we'll call those group B. Push the code to the offline servers (group A).
Then, put group A back into rotation and pull group B out. Make sure the app is still functional with the new code. If all is good, update group B and put them back in rotation. In the event of a problem, just put group B back in and take A out again.
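To make the mechanics concrete, here is a minimal sketch of driving such a rotation from a console app. The LoadBalancerClient stub is a placeholder for whatever API or CLI your load balancer actually exposes, the server names and paths are made up, and the file push simply shells out to robocopy:

    // Hypothetical rolling-deployment driver. LoadBalancerClient is a stub standing in
    // for your load balancer's real API/CLI; server names and paths are placeholders.
    using System;
    using System.Diagnostics;

    static class LoadBalancerClient
    {
        // Replace these with calls to your load balancer's actual management interface.
        public static void Drain(string server)  { Console.WriteLine("draining " + server); }
        public static void Enable(string server) { Console.WriteLine("enabling " + server); }
    }

    class RollingDeploy
    {
        const string Source = @"C:\builds\latest";

        static void Main()
        {
            string[] groupA = { "web01", "web02" };  // taken offline and updated first
            string[] groupB = { "web03", "web04" };  // keep serving traffic meanwhile

            Deploy(groupA);
            // ...smoke-test group A, put it back in rotation, pull group B, then:
            Deploy(groupB);
        }

        static void Deploy(string[] servers)
        {
            foreach (var server in servers)
            {
                LoadBalancerClient.Drain(server);

                // Mirror the build output to the server's web root over an admin share.
                var copy = Process.Start("robocopy",
                    string.Format(@"{0} \\{1}\wwwroot /MIR /R:2 /W:5", Source, server));
                copy.WaitForExit();

                LoadBalancerClient.Enable(server);
            }
        }
    }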
In the case of a database update there are other factors to consider. If you can take the whole site down for a limited period then do so and perform all necessary updates. This is by far the easiest.
However, if you can't, then do a modified "rolling" deployment, which requires multiple database servers. Pick a point in time and move a copy of the production database to the second server, then apply your changes to that copy. Next, pull a group of web servers (group A) out, update their code to production, point them at the second database, and test. If all is good, put those web servers back into rotation and take out group B. Update the code on B while pointing them to the second DB server, then put them back into rotation.
Finally, apply all data changes that occurred on the primary production database to the secondary one.
Note, I don't use Web Deploy or MSDeploy for pushes to production. Quite frankly, I want the files ready to be copied into the correct directory on the server so that the push can run as quickly as possible. Both the Web Deploy and MSDeploy options involve transferring those files over a network connection, which is typically much slower than simply copying from one local directory to another.
You can build a simple console app that connects to a fixed SFTP location, downloads and uncompresses the package, and runs all the files in a fixed directory. A metadata XML file can be useful for defining rules such as which application(s) each machine should run, prerequisites, and so on.
You can also use the Dropbox API to download your files if you don't have a centralized server to unify your apps.
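A minimal sketch of that console app, assuming the SSH.NET library (Renci.SshNet) for the SFTP download, the built-in ZipFile class for extraction, and an invented deploy.xml layout; the host, credentials, and paths are placeholders:

    // Sketch: download a zipped release over SFTP, extract it, then consult a metadata
    // XML file to decide what this machine should run. SSH.NET (Renci.SshNet) is assumed
    // for SFTP; the server name, credentials, paths, and XML format are placeholders.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Compression;
    using System.Xml.Linq;
    using Renci.SshNet;

    class Deployer
    {
        static void Main()
        {
            const string localZip = @"C:\deploy\release.zip";
            const string target   = @"C:\deploy\release";

            using (var sftp = new SftpClient("deploy.example.local", "deployuser", "password"))
            {
                sftp.Connect();
                using (var file = File.Create(localZip))
                    sftp.DownloadFile("/releases/latest.zip", file);
            }

            if (Directory.Exists(target)) Directory.Delete(target, true);
            ZipFile.ExtractToDirectory(localZip, target);

            // Hypothetical format: <deploy><app machine="WEB01" run="setup.exe" /></deploy>
            var meta = XDocument.Load(Path.Combine(target, "deploy.xml"));
            foreach (var app in meta.Descendants("app"))
            {
                var machine = (string)app.Attribute("machine");
                if (string.Equals(machine, Environment.MachineName, StringComparison.OrdinalIgnoreCase))
                    Process.Start(Path.Combine(target, (string)app.Attribute("run")));
            }
        }
    }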
Have a look at kwateeSDCM. It's language and platform agnostic (Windows, Linux, Solaris, MacOS). There's an article dedicated to deployment of a webapp on multiple tomcat servers.
I have an application that has two main parts. First, the client, which is basically the user interface; second, a repository, which is a library that connects to the database and contains all the logic to insert, update, delete... and to ensure the coherence of the data.
The application is not deployed yet, and at the moment the client uses the repository directly to access the database. But when I have to deploy the application to be used by many users inside the LAN, I don't think this is the best solution.
First solution
Install the client and the repository on the computers of all the users that need the application.
This has the disadvantage that when I update the application I have to update many installations, and perhaps not all of them get updated, for whatever reason. So if the update is a repository fix for, say, a data-coherence problem, any client that has not been updated can keep introducing incoherent data into the database.
Second solution
The client uses the repository directly, but the application is installed on a network drive. There is only one installation, so if I need to update the application I only have to do it once.
The application is not that big, about 12 MB, but it could be a bit slow because it has to travel over the network from the server to the user's computer. So perhaps some users would copy the application to their local computers, and then I can't rule out the same problem as with the first solution.
Third solution
The client application does not use the repository directly; the repository is on the server, the client uses WCF to communicate with the server, and the server uses the repository to access the database.
The disadvantage is that the server has to run the repository, so if there are many clients connected it needs a lot of RAM, whereas if the users' computers run the application locally, the memory is needed on the local computers instead.
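For what it's worth, a minimal sketch of what the WCF boundary in the third solution could look like; the contract, operations, and entity here are invented for illustration, and the repository implementation would sit behind the service on the server:

    // Illustrative WCF contract for the third solution: clients only see this interface,
    // while the repository stays on the server. All names are made up for the sketch.
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface IRepositoryService
    {
        [OperationContract]
        IList<Customer> GetCustomers();

        [OperationContract]
        void SaveCustomer(Customer customer);   // insert/update, with coherence checks done server-side

        [OperationContract]
        void DeleteCustomer(int id);
    }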
In summary, when I have to deploy this kind of application, which is the best solution, or which solution would you use in your projects?
Thank you so much.
This really depends on your deployment method. Are you using ClickOnce to deploy it? If so, you could keep the data access local to each PC and avoid those RAM issues, and when you send out a new update, bump the required version number and set the app to check for updates before running; that way users will be unable to run the program without updating it. The catch is that they must have network access, but that would also be an issue with remote data. In this situation you would only need network access during the update; I'm not sure whether that would be an issue for you or not.
I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, I have my application running 40 websites. It's all the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then having a manual step that runs MSDeploy for each website, but this isn't particularly ideal since I'd need to a) purchase a full license and b) always remember to add a new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First thing: it is possible to have a build configuration in TeamCity that builds and deploys to a specific location...whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly the same environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice. I don't know how you have it all set up, but if this A record is simply creating a subdomain, I'd shift my approach to a real multi-tenant environment. That is, one single website and one single app pool for all clients, with multiple bindings, each associated to a specific subdomain. This approach is way more scalable and uses way less memory; I've done some benchmark profiling in the past, and the amount of memory each process (app pool) was consuming was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as
Avoiding any per-client singleton component
Avoiding static variables
Making sure the cache is never global and always has a client context associated (see the cache-key sketch below)
Paying special attention to how you save client files to the file system
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I could probably find some useful info.
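On the cache point, a minimal sketch of keying cached data per tenant, assuming the tenant is identified by the first label of the request's host name; the helper names and key scheme are just one way to do it:

    // Sketch: derive the tenant from the request host and prefix every cache key with it,
    // so one tenant's cached data can never bleed into another's. The "tenant = first
    // host label" rule and the helper names are assumptions for the example.
    using System;
    using System.Runtime.Caching;
    using System.Web;

    public static class TenantCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        private static string CurrentTenant()
        {
            // e.g. "acme.myproduct.com" -> "acme"
            return HttpContext.Current.Request.Url.Host.Split('.')[0].ToLowerInvariant();
        }

        public static void Set(string key, object value, TimeSpan ttl)
        {
            Cache.Set(CurrentTenant() + ":" + key, value, DateTimeOffset.Now.Add(ttl));
        }

        public static object Get(string key)
        {
            return Cache.Get(CurrentTenant() + ":" + key);
        }
    }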
I have three applications running in three separate app pools. One of the applications is an administrative app that few people have privileged access to. One of the functions the administrative app provides is creating downtime notices. So when a user goes into the administrative app and creates a downtime notice, the other two apps are supposed to pick up on there being a new notice and display it on their login pages.
The problem is that these notices are cached and being that each app is in a separate app pool the administrative app doesn't have any way to clear the downtime notices cache in the other two applications.
I'm trying to figure out a way around this. The only thing I can think of is to insert a record in the DB that denotes the cache needs to be cleared and the other two apps will check the DB when loading the login page. Does anyone have another approach that might work a little cleaner?
*Side note, this is more widespread than just the downtime notices, but I just used this as an example.
EDIT
Restarting the app pools is not feasible as it will most likely kill background threads.
If I understand correctly, you're basically trying to send a message from the administrative app to the other apps. Maybe you should consider hosting a WCF service in each of those apps that the administrative application could call. That is a standard way to communicate between different apps if you don't want to use a shared medium such as a database, and it doesn't force you into a polling model.
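A minimal sketch of such a service, hosted inside each of the two consuming apps; the contract, operation, and cache-key convention are invented for the example:

    // Illustrative contract each consuming app could host; after saving a notice, the
    // administrative app calls ClearCache("downtime-notices"). Names and the key
    // convention are made up for the sketch.
    using System.ServiceModel;
    using System.Web;

    [ServiceContract]
    public interface ICacheInvalidationService
    {
        [OperationContract]
        void ClearCache(string region);
    }

    public class CacheInvalidationService : ICacheInvalidationService
    {
        public void ClearCache(string region)
        {
            // Assumes cache entries are keyed by region, e.g. "downtime-notices:list".
            HttpRuntime.Cache.Remove(region + ":list");
        }
    }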
Another way to look at this is that this is basically an inter-application messaging problem, which has a number of libraries already out there that could help you solve it. RabbitMQ comes to mind for this. It has a C# client all ready to go. MSMQ is another potential technology, and one that already comes with Windows - you just need to install it.
If it's database information you're caching, you might try your luck at setting up a SqlCacheDependency.
Otherwise, I would recommend not using the ASP.NET cache and instead finding a third-party solution that uses a distributed caching scheme; that way all applications use one cache instead of three separate ones.
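If you go the SqlCacheDependency route, here is a minimal sketch of the polling-based flavor. It assumes a <sqlCacheDependency> database entry named "MainDb" in web.config, a DowntimeNotices table enabled for notifications (e.g. via aspnet_regsql or SqlCacheDependencyAdmin), and a caller-supplied loader delegate:

    // Sketch: cache the downtime notices with a SqlCacheDependency so the entry is evicted
    // automatically when the table changes. Assumes the polling-based dependency is
    // configured in web.config ("MainDb") and the DowntimeNotices table is enabled for it.
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class NoticeCache
    {
        public static object GetNotices(Func<object> loadFromDatabase)
        {
            var cached = HttpRuntime.Cache["downtime-notices"];
            if (cached != null)
                return cached;

            var notices = loadFromDatabase();
            HttpRuntime.Cache.Insert(
                "downtime-notices",
                notices,
                new SqlCacheDependency("MainDb", "DowntimeNotices"));   // databaseEntryName, tableName
            return notices;
        }
    }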
I'm not saying this is the best answer or even the right answer, its just what I did.
I have a series of e-commerce websites on separate servers and data centers that rely on pulling catalog data from a central backoffice website and then cache it locally. In my first iteration of this, I simply used GET requests: the central location could ping the corresponding consuming website to initiate its own cache-refresh routine. I used SSL on each of the e-commerce servers, as I already had that set up, and could then have the backoffice web app send credentials via an SSL GET to initiate the refresh securely.
At a later stage, we found it more efficient to use sockets instead, with the backoffice acting as the hub: each consuming website would be a client and listen for changes in the data. The backoffice website could then notify the corresponding website when a particular account changed, and communicate exactly what changed. This approach is much more granular, and we could update in small bits as needed rather than in one large chunked update, but it was definitely more complicated than our first try.
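For reference, a minimal sketch of the first (GET ping) iteration from the backoffice side; the refresh URLs and the api-key header are placeholders for whatever the consuming sites actually expose:

    // Sketch of "ping each consuming site so it refreshes its own cache". The
    // /admin/refresh-cache endpoint and the X-Api-Key header are placeholders.
    using System;
    using System.Net;

    class CacheRefreshNotifier
    {
        static void Main()
        {
            string[] sites =
            {
                "https://shop1.example.com/admin/refresh-cache",
                "https://shop2.example.com/admin/refresh-cache"
            };

            foreach (var url in sites)
            {
                using (var client = new WebClient())
                {
                    client.Headers.Add("X-Api-Key", "shared-secret");   // placeholder credential
                    try
                    {
                        client.DownloadString(url);   // simple GET; the site runs its refresh routine
                    }
                    catch (WebException ex)
                    {
                        Console.WriteLine("Refresh failed for {0}: {1}", url, ex.Message);
                    }
                }
            }
        }
    }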
I see a ton of questions about uploading multiple files, but none about uploading a single file to multiple servers, so here goes...
I have an ASP.NET app that will be running on two load balanced servers, and I would like to allow users to upload files and have them end up on both servers. What is the cleanest way to do this? I am using IIS 6 btw.
Some ideas that come to mind are:
1) Use a virtual directory that points to some shared location that both servers can access. Will there be any access issues if the application runs as Network Service? I'm assuming the application will need to run as a user account that exists on the machine hosting the shared location. How should the permissions be set up for this?
2) It would be nice if I could use jQuery to post the request to both of my servers, referencing them by their port numbers. Even though the servers are on the same domain, this violates the same-origin policy, right?
Is there another solution I'm overlooking? How do other sites do this?
I think you want to consider this problem more carefully - having a pair (or more) of servers means that some of them will be offline some of the time (at least for occasional reboots).
When not all of the servers are online, uploads can't be sent to every server immediately, so you'd need either an intermediate server (which would be a point of failure unless it was highly available itself) or a queuing system to "remember" which files belong where and to transfer them when the relevant servers are restored.
Also, you'll want a backup system, and some way to add newly provisioned servers to your cluster. You will also want a way to monitor that these files stay identical across servers in case they get out of sync. Your architecture needs a lot of careful thought. I don't have the answers :)
The cleanest approach is forwarding the files server-side, really. If you force two uploads via JavaScript, not only will you have to work around the browser's cross-origin safeguards, but you'll also force the user to use their very limited upstream bandwidth twice for each file.
You shouldn't be exposing that kind of detail to the client anyway. The browser doesn't need to know where the file ends up, just who to send it to. If you keep that logic server-side, not only do you keep the details hidden (and thus less prone to errors and exploits), but you'll also get more control over the process. You can create a gateway service later that handles a multitude of back end storages and you can handle failing servers better. You can queue failed uploads and retry. All these come at a very low cost if you do them on the server side, but are a pain to be made to work reliably on the client side.
Keep back end logic to your back end. Load balancing should be hidden from the user, so there's no need to tell them where they are sending their files exactly. Make it optional, if you want, but hide the action from them. Just swallow the file on the gateway server (which can be either of the load balancing servers -- in fact, it should probably be load balanced, too, so it should work with either of them in place) and send it to the other servers from there. The transfer from server to server will probably be faster too.
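A minimal sketch of that gateway idea as an ASP.NET handler: save the upload locally, then push a copy to the peer node. The peer URL, the receiving endpoint, and the header used as a shared secret are assumptions; in practice the forward step is where you'd add queuing and retries:

    // Sketch: the node that receives the upload stores it locally, then forwards a copy
    // to its peer. The peer's /internal/receive-upload endpoint and the replication
    // header are placeholders; error handling and retry are omitted for brevity.
    using System.IO;
    using System.Net;
    using System.Web;

    public class UploadHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }

        public void ProcessRequest(HttpContext context)
        {
            HttpPostedFile upload = context.Request.Files["file"];
            if (upload == null) { context.Response.StatusCode = 400; return; }

            // 1. Store the file on this node.
            string localPath = context.Server.MapPath(
                "~/App_Data/uploads/" + Path.GetFileName(upload.FileName));
            upload.SaveAs(localPath);

            // 2. Forward a copy to the other load-balanced node (placeholder URL).
            using (var client = new WebClient())
            {
                client.Headers.Add("X-Replication-Key", "shared-secret");   // placeholder auth
                client.UploadFile("http://web02.internal/internal/receive-upload", localPath);
            }

            context.Response.Write("OK");
        }
    }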
Your best bet is definitely a NAS, if one is available -- a shared file system that is not specifically associated with any machine. Then you can focus on making the NAS highly available via a clustered frontend.
If that's not an option, you can use a virtual directory on each machine that points to one folder on one of the machines, but then you lose redundancy.
I'm faced with this same challenge at my work. My app is small but needs to be highly available, and there's no NAS in sight. So in each machine's web.config I keep a list of all the UNC paths the uploaded file should be copied to. After uploading to a temp folder, I copy the file to each machine one by one. It's not perfect -- a machine could go down, in which case when it came back up it might not have all the files (and the copy would be slowed by the hunt for the missing machine) -- but in my situation uploads are so infrequent that it's not worth improving.
As others have mentioned, Javascript is right out. Upload once.
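A minimal sketch of that web.config-driven copy, assuming an appSettings key holding a semicolon-separated list of UNC paths; the key name and share names are illustrative:

    // Sketch: after the upload lands in a temp folder, copy it to every UNC path listed
    // in web.config. The "UploadTargets" key and the share names are assumptions, e.g.:
    //   <appSettings>
    //     <add key="UploadTargets" value="\\web01\uploads;\\web02\uploads" />
    //   </appSettings>
    using System;
    using System.Configuration;
    using System.IO;

    public static class UploadReplicator
    {
        public static void Replicate(string tempFilePath)
        {
            string targets = ConfigurationManager.AppSettings["UploadTargets"] ?? "";
            string fileName = Path.GetFileName(tempFilePath);

            foreach (string share in targets.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
            {
                try
                {
                    File.Copy(tempFilePath, Path.Combine(share.Trim(), fileName), true);
                }
                catch (IOException ex)
                {
                    // One node being down shouldn't fail the whole upload; log and move on.
                    Console.WriteLine("Copy to {0} failed: {1}", share, ex.Message);
                }
            }
        }
    }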
I have seen this problem solved with a NAS, using credentials for the app pool that can read/write files to that NAS. Make sure your NAS is set up for high availability to prevent a single point of failure, i.e. hot-swap with RAID, multiple array controllers, power supplies, etc.
You could also put folder-monitoring software on the servers to keep certain directories in sync. I don't recommend this solution.
I am scratching my head about this. My scenario is that I need to upload a file to the company server machine (to a folder on C:) from our hosting server (a totally different server). I don't know how I should do this. Do any of you have tips or code on how this is done?
Thanks, guys.
I would set up an FTP server (like the one in IIS or a third-party server) on the Company Server. If security is an issue then you'll want to set up SFTP (secure FTP) rather than vanilla FTP, since FTP is not a natively secure transfer protocol. Then create a service on the Hosting Server to pick up the file(s) as they come in and ship them to the company server using the FTP support built into .NET. Honestly, it should be pretty straightforward.
Update: Reading your question, I am under the strong impression that you will NOT have a web site running on the company server. That is, you do not need a file upload control in your web app (or you already know how to implement one, given that the control is right there in the web page toolbox). Your question, as I understand it, is how to get a file from the web server over to the company server.
Update 2: Added a note about security. Note that this is less of a concern if the servers are on the same subdomain and won't be routed outside of the company network and/or if the data is not sensitive. I didn't think of this at first because I am working a project like this now but our data is not, in any way, sensitive.
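A minimal sketch of the "pick up and ship" step using FtpWebRequest; the host, credentials, and path are placeholders, and note that FtpWebRequest speaks FTP/FTPS rather than SFTP (SFTP would need a third-party library such as SSH.NET):

    // Sketch: upload one file from the hosting server to the company server's FTP site.
    // Host, credentials, and paths are placeholders; EnableSsl gives you FTPS if the
    // server supports it, but this is not SFTP.
    using System.IO;
    using System.Net;

    public static class CompanyServerUploader
    {
        public static void Upload(string localFile)
        {
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://companyserver.example.local/incoming/" + Path.GetFileName(localFile));
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential("ftpuser", "password");
            request.EnableSsl = true;

            using (var source = File.OpenRead(localFile))
            using (var target = request.GetRequestStream())
            {
                source.CopyTo(target);
            }

            using (var response = (FtpWebResponse)request.GetResponse())
            {
                // response.StatusDescription holds the server's reply, e.g. "226 Transfer complete."
            }
        }
    }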
Darren Johnstone's File Upload control is as good a solution as you will find anywhere. It has the ability to handle large files without impacting the ASP.NET server memory, and can display file upload progress without requiring a Flash or Silverlight dependency.
http://darrenjohnstone.net/2008/07/15/aspnet-file-upload-module-version-2-beta-1/
There isn't enough info to tell what your whole hosting scenario looks like, but I have a few suggestions that might get you started in the right direction:
Is your external server owned by another company or group, so you can't modify it? If you can modify it, you might consider hosting the process on that same machine, either in-process or as a separate service. If it cannot be modified, consider hosting the service on the destination machine; that way it's in the same place the files need to end up.
Do the files need to stay in sync with the process? I.e., do they need to be uploaded, moved, and verified as a single operation? If not, then a separate process is probably the best way to go. A separate process gives you some flexibility, but remember it will be a separate process and a separate set of code to manage and work with.
How big are the files being uploaded? Do they vary per upload? Are they plain files or binaries (zips, executables, etc.)? If the files are small you have more options than if they are large. If they are small enough, you can even relay them inline.
Depending on the answers to the above some of these might work for you:
Use MSMQ. This will work for simple messages under about 3 MB without too much hassle. It's ideal for messages that can be worked with directly (such as XML); a minimal send sketch appears at the end of this answer.
Use direct HTTP(S) relaying. On the host machine, open an HTTP(S) connection to the destination machine and transfer the file. Again, this will work better for smaller files (i.e. only a few KB), since it will be done inline.
If you have access to the host machine, deploy a separate process on the machine which builds or collects the files and uses any of the listed methods to send them to the destination machine.
You can use SCP or FTP (in any form, SFTP, etc.) on either the host machine (if you have access) or the target machine to host the incoming files, and use a batch process to move the files. This has a lot of issues to address, such as file size, keeping submissions in sync, and timing; I would consider it a last resort, depending on the situation.
Again depending on message size, you could also use a layer of abstraction such as a DB to act as the intermediate layer between the two machines. This will work as long as the two machines can see the DB (or other storage location) and both act on it. SQL Server Service Broker could be used for this purpose (and most other DB products offer similar features).
You can look at other products like WSO2 ESB or NServiceBus to facilitate messaging between the two apps and do it inline.
Hopefully that will give you some starting points to look into.
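As referenced in the MSMQ item above, a minimal send-side sketch using System.Messaging; the queue path and payload are placeholders, and the message has to stay under MSMQ's per-message size limit (roughly 4 MB):

    // Sketch: drop a small payload (e.g. an XML document) onto a private queue on the
    // destination machine. MSMQ must be installed on both machines; the queue path and
    // body are placeholders, and large files are not a good fit for this transport.
    using System.Messaging;

    class QueueSender
    {
        static void Main()
        {
            // FormatName addressing lets you send to a remote private queue directly.
            const string queuePath = @"FormatName:DIRECT=OS:companyserver\private$\incoming-files";

            using (var queue = new MessageQueue(queuePath))
            using (var message = new Message())
            {
                message.Label = "smallfile.xml";
                message.Body = "<file name=\"smallfile.xml\"><content>...</content></file>";
                message.Recoverable = true;   // persist to disk so the message survives restarts
                queue.Send(message);
            }
        }
    }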