Auto-update an ASP.NET application on clients' local servers - C#

I have an ASP.NET web application that is deployed to the local IIS server of each client; there are currently more than 100 such clients.
Since the application is in the early phase of its release, it gets updated almost daily, and deploying new updates to each client's server individually has become cumbersome.
Therefore, I would like to implement a mechanism where the application can automatically check for updates, then download and replace the deployed IIS application files.
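For illustration, this is roughly the kind of check I have in mind (just a sketch; the update-server URLs are placeholders, and replacing the running site would still need app_offline.htm handling):

    using System;
    using System.Net;
    using System.Reflection;

    public static class UpdateChecker
    {
        // Placeholder URLs -- there is no real update service behind them.
        private const string VersionUrl = "https://updates.example.com/latest-version";
        private const string PackageUrl = "https://updates.example.com/latest.zip";

        public static void CheckAndDownload(string downloadPath)
        {
            using (var client = new WebClient())
            {
                var latest = Version.Parse(client.DownloadString(VersionUrl).Trim());
                var current = Assembly.GetExecutingAssembly().GetName().Version;

                if (latest > current)
                {
                    // Drop app_offline.htm, unpack this zip over the site
                    // folder, then remove app_offline.htm again.
                    client.DownloadFile(PackageUrl, downloadPath);
                }
            }
        }
    }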
I did try the solution given in the article Building a Self Updating Site Using NuGet, but it didn't seem very reliable or scalable.
So any help on this would be highly appreciated.
Many Thanks.

Related

Synchronize IIS web server configurations

We have multiple load-balanced IIS web servers for our application, backed by a MS SQL Server database. We store application configuration information in the database. While the application is running, I frequently change the configuration, and the changes need to be propagated to the other web servers. Is there a good way to do this? I have been doing it through SignalR (to alert the other servers that a change has occurred and that they should refresh their configuration), but SignalR is not always reliable, and sometimes one server does not get the message. Is there a better solution?
Thank you
Updated
I now understand that you need to propagate an application-level configuration change.
You could, as you mentioned, use SignalR. This would require having a central server that hosts the websocket connections, but has the benefit of being "instant".
Alternatively, if your requirements are simple, perhaps a short-term in-memory cache would suffice: each server re-reads the configuration from the database whenever its cached copy expires, so a change made on one server propagates to the others within the cache window.
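For example (a minimal sketch; AppConfiguration and the database load are placeholders, and the five-minute window is arbitrary):

    using System;
    using System.Runtime.Caching;

    public class AppConfiguration { /* placeholder for your config type */ }

    public static class ConfigCache
    {
        // Each server re-reads the configuration at most once per expiry
        // window, so a change made on one server reaches the others
        // within five minutes.
        public static AppConfiguration Current
        {
            get
            {
                var cached = (AppConfiguration)MemoryCache.Default.Get("app-config");
                if (cached != null)
                    return cached;

                var fresh = LoadConfigurationFromDatabase();
                MemoryCache.Default.Set("app-config", fresh,
                    DateTimeOffset.UtcNow.AddMinutes(5));
                return fresh;
            }
        }

        // Placeholder for the real database read.
        private static AppConfiguration LoadConfigurationFromDatabase()
        {
            return new AppConfiguration();
        }
    }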
If it's more complex than that, I'd recommend looking into event queues (MSMQ, RabbitMQ). In this model, the instance changing the configuration publishes an event to the queue, which is consumed by the other instances on a background thread.
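Sketched with the RabbitMQ .NET client and a fanout exchange (the exchange name and host are made up; error handling omitted):

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public class ConfigChangeBus : IDisposable
    {
        private readonly IConnection _connection;
        private readonly IModel _channel;

        public ConfigChangeBus(string host)
        {
            _connection = new ConnectionFactory { HostName = host }.CreateConnection();
            _channel = _connection.CreateModel();
            // Fanout: every subscribed instance receives every message.
            _channel.ExchangeDeclare("config-changes", ExchangeType.Fanout);
        }

        // Called by the instance that changed the configuration.
        public void PublishChanged(string key)
        {
            _channel.BasicPublish("config-changes", "", null,
                Encoding.UTF8.GetBytes(key));
        }

        // Called once at startup by every instance; the handler refreshes
        // the local configuration on a background thread.
        public void Subscribe(Action<string> onChanged)
        {
            var queue = _channel.QueueDeclare().QueueName; // exclusive, auto-delete
            _channel.QueueBind(queue, "config-changes", "");

            var consumer = new EventingBasicConsumer(_channel);
            consumer.Received += (sender, args) =>
                onChanged(Encoding.UTF8.GetString(args.Body));
            _channel.BasicConsume(queue, true, consumer);
        }

        public void Dispose()
        {
            _channel.Dispose();
            _connection.Dispose();
        }
    }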
Original Answer
Microsoft Web Deploy was built to do this. It supports synchronizing sites across servers, even down to application pool settings and SSL certificates.
The IIS documentation site has a specific page that is relevant to your use case: Synchronize IIS.
There is a lot involved in configuring Web Deploy, so I won't attempt to explain it all here, but for posterity, the command to sync a local site to a remote machine would be:
    msdeploy.exe -verb:sync ^
        -source:apphostconfig="Default Web Site" ^
        -dest:apphostconfig="Default Web Site",computername=Server1

(The command is split over multiple lines with ^ continuations for readability.)
As an entirely alternative approach, you could also use a "pull configuration" system like PowerShell Desired State Configuration or Chef.

ASP.NET Web API 2: separation of web client and web server development

Our web application is developed by two teams. One team works on the client side, with its own branch for development, and the other works on the server side, also with its own development branch. The client and the server run separately, each as a website on a different port. The websites are hosted on IIS Express during development, and in production they will run on IIS.
Our ideal situation is that each team can develop completely separately; whenever a development session is over, both teams merge their change-sets to a common branch in order to integrate, then each team merges back to its development branch and continues.
To achieve full separation, we have two server projects: one that handles the real HTTP requests, and a "stub server" that responds to all the client's HTTP requests with default values, so that the client-side team can test their code without depending on the server's functionality.
The problem is that the stub server and the real server use the same port, which the client-side project is pointed at.
This causes many annoying mistakes (mostly for the server-side team) where the application is run against the stub server instead of the real one during reviews, tests, etc. Our only workaround is to manually create a virtual directory for the real web server project every time before running, or after discovering we were running the wrong server.
Is there a smarter solution to overcome this annoying problem? That would improve our lives!
If anything I said was unclear, please correct me (I'm new to this) or ask for more details; I'll be glad to elaborate!
Thanks in advance!
I believe your problem is more related to build automation than server configuration. You should really keep the stub server and the real server on separate ports, and switch the port during some kind of build process for your client.
If you are using AngularJS, I suggest adding steps to the build process of your client application using common tools like gulp or grunt. You could create build tasks that set a global variable or modify a constant (e.g. the API endpoint), and name them "local testing" (pointing your client at the stub server) and "integration" (pointing at the real server).
Please note that you can easily integrate those build processes into Visual Studio, making them part of your global debug/build process.
Here is a simple gulp task useful for replacing text inside any file: https://www.npmjs.com/package/gulp-replace

Handling multiple deployments of an ASP.NET application

I have a product, and a front end website where people can purchase the product. Upon purchase, I have a system that creates an A record in my DNS server that points to an IP address. It then creates a new IIS website with the bindings required.
All this works well, but I'm now looking at growing the business and to do this I'll need to handle upgrades of the application.
Currently, my application runs as 40 websites. They all share the same code base, and each website uses its own SQL Server database. Each website runs in a separate application pool and operates completely independently.
I've looked at using TeamCity to build the application and then having a manual step that runs MSDeploy for each website, but this isn't ideal since I'd need to (a) purchase a full license and (b) always remember to add each new website to the TeamCity build.
How do you handle the upgrade and deployments of the same code base running many different websites and separate SQL Server databases?
First, it is possible to have a build configuration in TeamCity that builds and deploys to a specific location, whether a local path or a network drive. I don't remember exactly how, but one of the companies I worked with in Perth had exactly the same environment. This assumes that all websites point to the same physical path in the file system.
Now, a word of advice: I don't know how you have everything set up, but if this A record simply creates a subdomain, I'd shift to a real multi-tenant environment. That is, one single website and one single app pool for all clients, with multiple bindings, each associated with a specific subdomain. This approach is far more scalable and uses far less memory; I've done some benchmark profiling in the past, and the amount of memory each process (app pool) consumed was a massive waste of resources. There's a catch, though: you will need to prepare your app for a multi-tenant architecture to avoid any sort of bleeding between clients, such as:
Avoiding any per-client singleton component
Avoiding static variables
Avoiding global caches: every cache entry MUST have a client context associated (see the sketch after this list)
Paying special attention to how you save client files to the file system
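On the caching point, a minimal sketch of tenant-scoped cache keys (TenantCache and the subdomain-based tenant lookup are illustrative, not a drop-in library):

    using System;
    using System.Runtime.Caching;
    using System.Web;

    public static class TenantCache
    {
        // Resolve the tenant from the subdomain of the current request;
        // a real app would validate this against a tenant table.
        private static string CurrentTenant
        {
            get { return HttpContext.Current.Request.Url.Host.Split('.')[0]; }
        }

        public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl)
        {
            // Prefix every key with the tenant so entries never bleed
            // between clients sharing the same app pool.
            string tenantKey = CurrentTenant + ":" + key;
            var cached = MemoryCache.Default.Get(tenantKey);
            if (cached != null)
                return (T)cached;

            T value = factory();
            MemoryCache.Default.Set(tenantKey, value,
                DateTimeOffset.UtcNow.Add(ttl));
            return value;
        }
    }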
Among other things. If you need more details about setting up TeamCity in your current environment, let me know; I can probably find some useful info.

Publishing to Azure without taking the site down

I am relatively new to this field although I've been a programmer for years.
My company has a website hosted in Azure. I am the one who performs the "Publish" action after confirming that the team has finished developing a certain module. However, I have to take the site down on every publish (adding app_offline.htm while copying DLLs, .aspx files, etc.).
This seems unnecessary, right? There should be a better way to do it.
I was thinking of the obvious approach: two servers, where while I "talk" to one, the other takes all the traffic; afterwards they sync, or I publish to the second one.
Environment: Visual Studio 2013, Azure Web Sites, ASP.NET 4.0.
Please share your thoughts or knowledge, or even just point me to where I should start my investigation.
Thanks!
If you are publishing to a cloud service, you can deploy to the staging instance first and then swap over to production once the staging deployment has finished.
The idea being that you'll have version 5 of the website in the production slot and version 4 of the website in the staging slot. You would deploy version 6 to the staging slot and wait for it to finish. Then you can swap the virtual IP addresses once the staging slot is ready.
The swap takes maybe 20-30 seconds, so downtime is minimal.
The added benefit is that if the new version has issues, you can swap again and get the old version back up.
In my experience, cloud services are a bit easier to manage for availability than a VM.

Advice on options for a shared database for a distributed C# application

I'd like to know my options for the following scenario:
I have a C# WinForms application (developed in VS 2010) distributed to a number of offices within the country. The application communicates with a C# web service that resides on a main server at a separate location, and there is one database (SQL Server 2012) at yet another location. (All servers run Windows Server 2008.)
Head Office (where we are) uses the same front end to manage certain information in the database, which needs to be readily available to all offices in real time. At the same time, any data they change needs to be readily available to us at Head Office, as we have a real-time dashboard web application that monitors site-wide statistics.
Currently, the users are complaining about the speed at which the application operates. They say it is really slow. We work in a business-critical environment where every minute waiting may mean losing a client.
I have researched the following options, but I don't come from a DB background, so I'm not sure what the best route is for my scenario.
Terminal Services/sessions (which I've just implemented at Head Office, and they say it's a great improvement, although there's a terrible lag; it's like remoting onto someone's desktop, which is not nice to work on).
Transactional replication (sounds quite plausible for my scenario, but it would require every office to have its own SQL Server database on its own server, and they have a tendency to "fiddle" and break everything they're left in charge of! I wish we could take over all their servers, but they are franchises, so they have their own IT people on site).
I currently have a lot of the lookup data cached at application start-up, but this too takes 2-3 minutes to complete, which is just not acceptable!
Does anyone have any ideas?
With everything running through the web service, there is no need for additional SQL Servers deployed locally to the clients; the web service wouldn't be able to communicate with those databases unless it were also deployed locally.
Before suggesting any specific improvements, you need to benchmark where your bottlenecks are occurring. What is the latency between the various clients and the web service, and then between the web service and the database? Does the database show any waits? Once you know the worst-case offender, improve that, and then work your way down.
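For example, a crude way to collect those numbers from each office (just a sketch; wrap whichever web service calls you care about):

    using System;
    using System.Diagnostics;

    public static class Profiling
    {
        // Times any call and writes the round-trip duration to the
        // trace output; compare the numbers across offices.
        public static T Time<T>(string label, Func<T> call)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                return call();
            }
            finally
            {
                sw.Stop();
                Trace.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms");
            }
        }
    }

    // Usage: var data = Profiling.Time("GetLookupData", () => client.GetLookupData());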
Some general thoughts, though:
Move the WS closer to the database
Cache the data at the web service level to save on DB calls
Find the expensive WS calls and try to optimize their throughput
If the lookup data doesn't change all that often, use a local copy of SQL CE to cache that data, and use the MS Sync Framework to keep the data synchronized to the SQL Server
Use SQL CE for everything on the client computer, and use a background process to sync between the client and WS
UPDATE
After your comment, two additional thoughts. If your web service payloads are large, you can try enabling compression on the web service (if it hasn't been implemented already).
You can also update your client to make the WS calls asynchronously, either on a background thread or, if you are using .NET 4.5, with async/await. This would at least keep the UI responsive, though it wouldn't necessarily fix the data load times.
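A rough sketch of both ideas from the client side, assuming .NET 4.5, HttpClient and Json.NET (the endpoint URL and Lookup type are placeholders):

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    public class Lookup { /* placeholder for the lookup entity */ }

    public class LookupLoader
    {
        public async Task<List<Lookup>> LoadAsync()
        {
            // AutomaticDecompression transparently inflates gzipped
            // responses if the web service has compression enabled.
            var handler = new HttpClientHandler
            {
                AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate
            };

            using (var client = new HttpClient(handler))
            {
                // await keeps the WinForms UI thread free during the call.
                string json = await client.GetStringAsync(
                    "https://example.com/api/lookups");
                return JsonConvert.DeserializeObject<List<Lookup>>(json);
            }
        }
    }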
