I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, I have my application running 40 websites. It's all the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then have a manual step that runs MSDeploy for each website but this isn't particularly ideal since I'd need to a) purchase a full license and b) always remember to add a new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First thing: it is possible to have a build configuration in TeamCity that builds and deploys to a specific location, whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly the same environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice: I don't know how you have it all set up, but if this A record is simply creating a subdomain, I'd shift to a real multi-tenant environment. That is, one single website and one single app pool for all clients, with multiple bindings, each associated with a specific subdomain. This approach is way more scalable and uses far less memory. I've done some benchmark profiling in the past, and the amount of memory each process (app pool) was consuming was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as
Avoiding any per-client singleton component
Avoiding static variables
Cache cannot be global and MUST have a client context associated with it
Pay special attention to how you save client files to the file system
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I could probably find some useful info.
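To illustrate the cache point, here's a minimal sketch, assuming each client is identified by its subdomain, of wrapping the ASP.NET cache so every key carries a client context (the class and key names are mine, purely illustrative):

```csharp
// Minimal sketch, assuming the tenant is identified by the request's subdomain.
// Class and key names are illustrative, not from the original post.
using System.Web;

public static class TenantCache
{
    // e.g. "client1.example.com" -> "client1"
    private static string CurrentTenant(HttpContext ctx)
    {
        return ctx.Request.Url.Host.Split('.')[0];
    }

    public static void Set(HttpContext ctx, string key, object value)
    {
        // Prefix every cache key with the tenant so entries never collide between clients.
        ctx.Cache.Insert(CurrentTenant(ctx) + ":" + key, value);
    }

    public static object Get(HttpContext ctx, string key)
    {
        return ctx.Cache[CurrentTenant(ctx) + ":" + key];
    }
}
```

The same idea applies to any shared resource: derive the client context once per request and pass it down, rather than relying on anything process-wide.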
I have three applications running in three separate app pools. One of the applications is an administrative app that few people have privileged access to. One of the functions the administrative app provides is creating downtime notices. So when a user goes into the administrative app and creates a downtime notice, the other two apps are supposed to pick up on there being a new notice and display it on the login page.
The problem is that these notices are cached, and since each app is in a separate app pool, the administrative app doesn't have any way to clear the downtime-notices cache in the other two applications.
I'm trying to figure out a way around this. The only thing I can think of is to insert a record in the DB that denotes the cache needs to be cleared and the other two apps will check the DB when loading the login page. Does anyone have another approach that might work a little cleaner?
*Side note, this is more widespread than just the downtime notices, but I just used this as an example.
EDIT
Restarting the app pools is not feasible as it will most likely kill background threads.
If I understand correctly, you're basically trying to send a message from the administrative app to the other apps. Maybe you should consider creating a WCF service on these apps that could be called from the administrative application. That is a standard way to communicate between different apps if you don't want to use a shared medium such as a database, and it doesn't force you into a polling model.
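As a rough sketch (the contract and names below are made up, not from your apps), each of the two non-admin apps could host something like this, and the administrative app would call it after saving a notice:

```csharp
// Sketch of a cache-invalidation contract the non-admin apps could host.
// All names here are illustrative, not from the original question.
using System.ServiceModel;

[ServiceContract]
public interface ICacheInvalidation
{
    // Called by the administrative app whenever a cached item changes,
    // e.g. region = "DowntimeNotices".
    [OperationContract]
    void Invalidate(string region);
}

public class CacheInvalidationService : ICacheInvalidation
{
    public void Invalidate(string region)
    {
        // Remove the relevant entry from this app's local ASP.NET cache.
        System.Web.HttpRuntime.Cache.Remove(region);
    }
}
```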
Another way to look at this is that this is basically an inter-application messaging problem, which has a number of libraries already out there that could help you solve it. RabbitMQ comes to mind for this. It has a C# client all ready to go. MSMQ is another potential technology, and one that already comes with Windows - you just need to install it.
If it's database information you're caching, you might try your luck at setting up a SqlCacheDependency.
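A rough sketch of the command-notification flavour of that idea (the connection string, table and cache key are placeholders):

```csharp
// Rough sketch using command notifications; the connection string, table and
// key names are placeholders, not from the original answer.
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class NoticeCache
{
    private const string ConnStr = "Data Source=...;Initial Catalog=...;Integrated Security=True";

    public static void Start()
    {
        // Call once at application startup (e.g. Application_Start).
        SqlDependency.Start(ConnStr);
    }

    public static void CacheNotices(object notices)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT NoticeId, Message FROM dbo.DowntimeNotices", conn))
        {
            var dependency = new SqlCacheDependency(cmd);
            conn.Open();
            cmd.ExecuteReader().Close();   // the command must run for the notification to register

            // The entry is evicted automatically when the underlying data changes.
            HttpRuntime.Cache.Insert("DowntimeNotices", notices, dependency);
        }
    }
}
```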
Otherwise, I would recommend not using the ASP.NET cache and instead finding a third-party solution that uses a distributed caching scheme; that way all applications use one cache instead of three separate ones.
I'm not saying this is the best answer or even the right answer, it's just what I did.
I have a series of ecommerce websites on separate servers and data centers that rely on pulling catalog data from a central back-office website and then caching it locally. In my first iteration of this I simply used GET requests, so the central location could ping the corresponding consuming website to initiate its own cache refresh routine. I used SSL on each of the ecommerce servers, as I already had that set up, and could then have the back-office web app send credentials via an SSL GET to initiate the refresh securely.
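In code, the back-office side can be as simple as something like this (the endpoint URL and key parameter are hypothetical):

```csharp
// Minimal sketch of the back-office side: ping each consuming site over SSL
// to tell it to refresh its local catalog cache. The URL and parameters are placeholders.
using System.Net;

public static class CacheRefreshNotifier
{
    public static void NotifySite(string siteBaseUrl, string apiKey)
    {
        var url = siteBaseUrl + "/admin/refresh-cache?key=" + apiKey;   // hypothetical endpoint

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusCode != HttpStatusCode.OK)
                throw new WebException("Cache refresh failed for " + siteBaseUrl);
        }
    }
}
```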
At a later stage, we found it more efficient to use sockets instead, with the back office as the server and each consuming website as a client listening for changes in the data. The back-office website could then communicate to the corresponding website when a particular account changed, and communicate this very specifically. This approach is much more granular, and we could update in small bits as needed as opposed to a large chunked update, but it was definitely more complicated than our first try.
I'm looking for examples of what people have done in order to deploy the same web app or processes to multiple servers.
The deployment process right now consists of copying the same file multiple times to different servers within our company. There has to be a better way to do this. Right now I am looking into MSBuild; does anyone have other ideas? Thanks in advance.
Take a look at msdeploy and Web Deploy.
I've done this using a variety of methods. However, I think the best one is what I call a "rolling" deployment.
The following assumes a code only deployment:
Take one or more web servers "offline" by removing them from the load-balancing list; let's call this group A. You should keep enough servers running to keep up with existing traffic; we'll call those group B. Push the code to the offline servers (group A).
Then, put group A back into rotation and pull group B out. Make sure the app is still functional with the new code. If all is good, update group B and put them back in rotation. In the event of a problem, just put group B back in and take A out again.
In the case of a database update there are other factors to consider. If you can take the whole site down for a limited period then do so and perform all necessary updates. This is by far the easiest.
However, if you can't, then do a modified "rolling" deployment, which requires multiple database servers. Pick a point in time and move a copy of the production database to the second server. Apply your changes. Then pull a group of web servers (group A) out, update their code to production and test. If all is good, put those web servers back into rotation and take out group B. Update the code on B while pointing them to the second DB server. Put them back into rotation.
Finally, apply all data changes that occurred on the primary production database to the secondary one.
Note, I don't use Web Deploy or MSDeploy for pushes to production. Quite frankly, I want the files ready to be copy/pasted into the correct directory on the server so that the push can run as quickly as possible. Both the Web Deploy and MSDeploy options involve transferring those files over a network connection, which is typically much slower than simply copy/pasting from one local directory to another.
You can build a simple console app that connects to a fixed SFTP location, downloads and uncompresses the files, and runs them in a fixed directory. A meta XML file can be useful to define rules such as which applications each machine should run, prerequisites and so on.
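A rough sketch of such a console app, assuming the SSH.NET (Renci.SshNet) NuGet package and made-up host names and paths:

```csharp
// Sketch only: assumes the SSH.NET (Renci.SshNet) NuGet package and hypothetical
// host/credentials/paths; add real error handling before using this anywhere.
using System.IO;
using System.IO.Compression;
using Renci.SshNet;

class DeployAgent
{
    static void Main()
    {
        const string archive = @"C:\deploy\incoming\release.zip";   // hypothetical local paths
        const string target  = @"C:\deploy\current";

        // Pull the release archive from the fixed SFTP location.
        using (var sftp = new SftpClient("deploy.example.com", "deployuser", "secret"))
        {
            sftp.Connect();
            using (var file = File.Create(archive))
            {
                sftp.DownloadFile("/releases/release.zip", file);
            }
        }

        // Replace the current deployment with the freshly downloaded one.
        if (Directory.Exists(target))
            Directory.Delete(target, recursive: true);

        ZipFile.ExtractToDirectory(archive, target);
    }
}
```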
You can also use the Dropbox API to download your files if you don't have a centralized server to unify your apps.
Have a look at kwateeSDCM. It's language and platform agnostic (Windows, Linux, Solaris, MacOS). There's an article dedicated to deployment of a webapp on multiple tomcat servers.
I see a ton of questions about uploading multiple files, but none about uploading a single file to multiple servers, so here goes...
I have an ASP.NET app that will be running on two load balanced servers, and I would like to allow users to upload files and have them end up on both servers. What is the cleanest way to do this? I am using IIS 6 btw.
Some ideas that come to mind are:
1) Use a virtual directory that points to some shared location that both servers can access. Will there be any access issues if the application runs as Network Service? I'm assuming the application will need to run as a user account that exists on the machine hosting the shared location. How should the permissions be set for this?
2) It would be nice if I could via jQuery post the request to both of my servers, referencing them by their port numbers. Even though the servers are on the same domain, this violates the same origin policy, right?
Is there another solution I'm overlooking? How do other sites do this?
I think you want to consider this problem more carefully - having a pair (or more) of servers means that some of them will be offline some of the time (at least for occasional reboots).
Uploads when not all of the servers are online won't be able to be sent to all servers immediately, so you'd need either an intermediate server (which would be a point of failure unless it was highly available itself) or a queuing system to "remember" which files were where, and to transfer them when the relevant servers were restored.
Also, you'll want a backup system and some way to add newly provisioned servers to your cluster. You will also want a way to verify these files are the same on each server, in case they get out of sync. Your architecture needs a lot of careful thought. I don't have the answers :)
The cleanest approach is forwarding the files server-side, really. If you force two uploads via JavaScript, not only will you have to worry about working around XSS safeguards, but you'll also force the user to use their very limited upstream bandwidth twice for each file.
You shouldn't be exposing that kind of detail to the client anyway. The browser doesn't need to know where the file ends up, just who to send it to. If you keep that logic server-side, not only do you keep the details hidden (and thus less prone to errors and exploits), but you'll also get more control over the process. You can create a gateway service later that handles a multitude of back end storages and you can handle failing servers better. You can queue failed uploads and retry. All these come at a very low cost if you do them on the server side, but are a pain to be made to work reliably on the client side.
Keep back end logic to your back end. Load balancing should be hidden from the user, so there's no need to tell them where they are sending their files exactly. Make it optional, if you want, but hide the action from them. Just swallow the file on the gateway server (which can be either of the load balancing servers -- in fact, it should probably be load balanced, too, so it should work with either of them in place) and send it to the other servers from there. The transfer from server to server will probably be faster too.
Your best bet is definitely a NAS, if one is available -- a shared file system that is not specifically associated with any machine. Then you can focus on making the NAS highly available via a clustered frontend.
If that's not an option, you can use a virtual directory on each machine that points to one folder on one of the machines, but then you lose redundancy.
I'm faced with this same challenge at my work. My app is small but needs to be highly available, and there's no NAS in sight. So in each machine's web.config I place a list of all the UNC paths that the uploaded file should be stored to. After uploading to a temp folder, I copy the file to each machine one by one. It's not perfect -- a machine could go down, in which case when it came back up it might not have all the files (and the copy would be slowed by the hunt for the missing machine) -- but in my situation uploads are so infrequent that it's not worth improving.
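A minimal sketch of that approach, assuming a hypothetical "ReplicaPaths" appSetting holding a semicolon-separated list of UNC shares:

```csharp
// Minimal sketch of the copy-to-every-server approach described above.
// The "ReplicaPaths" appSetting name is hypothetical.
using System;
using System.Configuration;
using System.IO;

public static class UploadReplicator
{
    public static void Replicate(string tempFilePath, string fileName)
    {
        var paths = ConfigurationManager.AppSettings["ReplicaPaths"]
                        .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

        foreach (var uncPath in paths)
        {
            try
            {
                File.Copy(tempFilePath, Path.Combine(uncPath, fileName), overwrite: true);
            }
            catch (IOException ex)
            {
                // A machine may be down; log and continue so the other copies still happen.
                System.Diagnostics.Trace.TraceWarning(
                    "Copy to {0} failed: {1}", uncPath, ex.Message);
            }
        }
    }
}
```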
As others have mentioned, Javascript is right out. Upload once.
I have seen this problem solved with a NAS, using credentials for the app pool that can read/write files to that NAS. Make sure your NAS is set up for high availability to prevent a single point of failure, i.e. hot-swap with RAID, multiple array controllers, power supplies, etc.
You could also put folder-monitoring software on the servers to keep certain directories in sync. I don't recommend this solution.
What are the challenges in porting your existing applications to Azure?
Here are few points I'm already aware about.
1) No Support for Session Affinity (Azure is Stateless) - I'm aware that Azure load balancing doesn't support session affinity - hence the existing web application will need to be changed if it relies on session affinity.
2) Interfacing with COM - Presently I think there is no support for deploying COM components to the cloud to interface with them - if my current applications need to access some legacy components.
3) Interfacing with other systems from the cloud using non-http protocols
Other than the above-mentioned points, what other significant limitations/considerations are you aware of?
Also, how are these pain points addressed in the latest release?
Our biggest challenge is the stateless nature of the cloud. Though we've tried really, really hard, some bits of state have crept through to the core, and this is what is being addressed.
The next challenge is the support of stale data and caching, as data can be offline for weeks at a time. This is hard regardless.
Be prepared for a lengthy deployment process. At this time (pre-PDC 2009), uploading a deployment package and spinning up host services sometimes has taken me more than 30 minutes (depends on time of day, size of package, # of roles, etc).
One side effect of this is that making configuration changes in web.config files is expensive because it requires the entire app package to be re-packaged and re-deployed. Utilize the Azure configuration files instead for config settings - as they do not require a host suspend/restart.
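For example (the setting name is a placeholder), a sketch of reading a value from the service configuration instead of web.config:

```csharp
// Sketch only: reads a setting from the Azure service configuration
// (ServiceConfiguration.cscfg) rather than web.config, so it can be changed
// without redeploying the whole package. The setting name is a placeholder.
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AppSettings
{
    public static string SmtpServer
    {
        get { return RoleEnvironment.GetConfigurationSettingValue("SmtpServer"); }
    }
}
```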
My biggest problem with Azure today is interoperability with other OSes. Here I am comparing Azure to EC2/Rackspace instances (even though Azure, as PaaS, offers a lot more than they do, e.g. load balancing, storage replication, geographical deployment, etc. in a single cheap package).
Even if you consider me a BizSpark startup guy, I am not inclined to run my database on SQL Azure (a SQL 2005 equivalent), since I can't accept their pricing policy, which I'll have to bear three years after the BizSpark program ends. And they don't have an option for MySQL or any other database, which to me is ridiculous for an SME. With EC2 I can run my MySQL instance on another Linux VM (obviously in the same network; Azure gives you the capability to connect to networks outside theirs, but that is not really an option).
Second, this again relates to using *nix machines. I want all my caching to be handled by memcached. With ASP.NET 4 they have even given us out-of-the-box memcached support through extensible output caching. The reason I am adamant about memcached is the ecosystem it provides. For example, today I can get memcached with persistent caching as an add-on, which even gives me the opportunity to store session data in memcached. Additionally, I can run map-reduce jobs on the IIS logs; this is done using Cloudera images on EC2. I don't see how I can do these things with Azure.
You see, in the case of Amazon/Rackspace I can run my asp.net web app on a single instance of Windows Server 2008 and the rest on *nix machines.
I am contemplating running my non-hierarchical data (web app menu items) on CouchDB. With Azure I get Azure Table storage, but I am not very comfortable with that ATM. With EC2 I can run it on the same MySQL box (don't catch me on this one :-)).
If you are ready to overlook these problems, Azure gives you an environment with a lot of the grunt work abstracted away, and that's a nice thing: scaling, load balancing, a lot of very cheap storage, CDN, storage replication, and out-of-the-box monitoring for services through the Fabric Controller, among other things. With EC2/Rackspace you'll have to hire a sysadmin at $150k PA to do these things (AFAIK Amazon provides some of these features at additional cost).
My comparisons are between Azure and Amazon/Rackspace instances (and not the cloud in general). For some this might seem like apples and oranges, but Azure does not provide you with instances, just the cloud with their customized offerings…
My biggest problem is/was just signing up and creating a project. And that's how far it got over the last month.
Either I am doing something very wrong, or that site is broken most of the time.
One important challenge is the learning curve, the lack of experienced developers, and the time it takes to become productive.
This happens with all technologies, but with the cloud there is a fundamental change in how some things are done.
If your application needs a database, I'm not sure that Windows Azure has a relational database (right now).
Also, there are other cloud computing providers that can offer you more options in configuring your virtual machine for example, it really depends on what you actually need and want.
Please give references/guidance for building a web application to manage all IT assets/devices.
The application consists of two components: a web application and a Windows .NET application.
The client Windows .NET application scans the active network, finds all IT assets such as printers and scanners, and uploads the data to the web application.
Our team is using ASP.NET and C# for this project.
Please give suggestions regarding the client application and web application interaction.
Suggest any libraries/references required for the project.
Is the Microsoft Sync Framework a good fit for the client application?
Will Microsoft WCF be a good option for client/server interaction (for building the API)?
How can the client application scan the active network to find devices?
I would suggest going for a simple solution like:
1. A windows application that takes care of scanning the entire network of computers from the installed computer and then sends the information to a web service
2. A web service to accept the assets list and then save them to the database
3. An asp.net application to display and catalog the indexed assets.
The part that will become somewhat complicated is the asset discovery, since it should be handled by the Windows application. Your best bets are:
1. to depend on the windows registry
2. to use the Windows Management Instrumentation (WMI); see the rough sketch after this list
3. If you dare, you can program directly against Netapi32.dll: NetServerEnum or similar low-level Win32 APIs
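For option 2, a rough sketch of querying a machine over WMI (the machine name and the WMI class used are just examples):

```csharp
// Rough sketch of the WMI option: query a remote machine's system info.
// The machine name and the specific WMI class are placeholders.
using System;
using System.Management;   // add a reference to System.Management.dll

class WmiInventory
{
    static void Main()
    {
        var scope = new ManagementScope(@"\\SOME-PC\root\cimv2");   // hypothetical host
        scope.Connect();

        var query = new ObjectQuery(
            "SELECT Name, Manufacturer, Model FROM Win32_ComputerSystem");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject system in searcher.Get())
            {
                Console.WriteLine("{0} - {1} {2}",
                    system["Name"], system["Manufacturer"], system["Model"]);
            }
        }
    }
}
```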
Hope this will help you to get started
Consider this approach:
Asset Catalog Service (WCF App). Its responsibility is to act as a repository for found assets. The service encapsulates the actual storage (database, etc).
Asset Finder (WinForms app). Its responsibility is to scan the active network, find all IT assets and call the Asset Catalog Service to register each device. The Asset Catalog Service will reconcile whether a device has not been registered yet (and therefore will be stored), has been updated (and therefore will be updated) or is unchanged (and therefore will be ignored). You may have multiple instances of the Asset Finder running at the same time to speed up the discovery process. The Asset Catalog Service or another service may be used to keep track of the asset finders' work pool.
The website (ASP.NET web app). Its responsibility is to visualize the asset catalog. It queries the Asset Catalog Service and displays the results to the end users.
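A rough sketch of what the Asset Catalog Service contract might look like (all names and the data contract are illustrative, not prescriptive):

```csharp
// Hypothetical contract for the Asset Catalog Service described above;
// the names and the Asset data contract are illustrative, not from the question.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Asset
{
    [DataMember] public string HostName   { get; set; }
    [DataMember] public string IpAddress  { get; set; }
    [DataMember] public string DeviceType { get; set; }   // printer, scanner, ...
}

[ServiceContract]
public interface IAssetCatalogService
{
    // The service decides whether this is a new asset, an update, or a no-op.
    [OperationContract]
    void RegisterAsset(Asset asset);

    [OperationContract]
    Asset[] GetAssets();
}
```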
I can't see any obvious use case for the Microsoft Sync Framework here.
Unfortunately, I don't have any experience in writing any asset discovery algorithm. Others might be able to help on that point.
Hope that helps.
This is a bit off topic, but I would suggest looking at options from existing vendors that will meet your overall business requirements. Asset detection and management is not a simple task and creating an in-house application to do it is often a waste of time and money that could be better spent on core business needs or other IT support/resources. Purchasing software from an existing vendor will give you a much better solution than whatever you can code up in a week, a month, or even a year. If you are trying to catalog even a medium sized network with over 100 nodes then using an established system could end up being much cheaper than building your own. Here are a few examples of existing products:
http://www.vector-networks.com/components/network-discovery-and-mapping.php
http://www.manageengine.com/products/service-desk/track-it-assets.html
I haven't used either of them, but I have been down a similar route trying to create an in-house server monitoring and management system. We spent two weeks working on a prototype that was eventually scrapped for a 3rd-party system that cost $1,000 a year. It would have cost us at least $10,000 to build something with 1/10th the features, let alone support and maintain it. Even just searching for a FOSS solution and then using that as the basis for your project (something like nmap) would be better than starting from scratch.
Best of luck!
A Windows-based application should be used to scan the network and collect info about the devices/assets available on it, saving that information in a database.
Look at the following project to get an idea of how you might scan the network: http://sourceforge.net/projects/nexb/
The same database should be used by the ASP.NET app for reporting purposes; you may also use it to group/tag/categorize various assets.
Also store scanned devices in separate "departments" depending on their IP scheme.