We have multiple load-balanced IIS web servers for our application, backed by a MS SQL Server database. We store application configuration information in the database. While the application is running I frequently change the configuration, and the changes need to be propagated to the other web servers. Is there a good way to do this? I have been doing it through SignalR (alerting the other servers that a change has occurred and that they should refresh their configuration), but SignalR is not always reliable and sometimes one server does not get the message. Is there a better solution?
Thank you
Updated
I now understand that you need to propagate an application-level configuration change.
You could, as you mentioned, use SignalR. This requires a central server to host the WebSocket connections, but has the benefit of being "instant".
Alternatively, if your requirements are simple, a short-lived in-memory cache might suffice: each server simply re-reads the configuration from the database once its cached copy expires.
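A minimal sketch of that cache-based option, assuming a hypothetical AppConfiguration type and LoadConfigurationFromDatabase() helper; the 30-second expiry just bounds how stale any one server can be:

using System;
using System.Runtime.Caching;

public static class ConfigCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static AppConfiguration Current
    {
        get
        {
            var config = Cache.Get("app-config") as AppConfiguration;
            if (config == null)
            {
                // Cache miss: re-read from the database and keep the result for
                // 30 seconds, so every server converges on a change within that window.
                config = LoadConfigurationFromDatabase();
                Cache.Set("app-config", config, DateTimeOffset.UtcNow.AddSeconds(30));
            }
            return config;
        }
    }

    private static AppConfiguration LoadConfigurationFromDatabase()
    {
        // Hypothetical data-access call; replace with your own repository.
        throw new NotImplementedException();
    }
}

public class AppConfiguration { /* your settings */ }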
If it's more complex than that, I'd recommend looking into message queues (MSMQ, RabbitMQ). In this model, the instance changing the configuration publishes an event to the queue, which the other instances consume on a background thread.
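A rough sketch of the queue approach, assuming RabbitMQ with the pre-7.x synchronous RabbitMQ.Client API and a fanout exchange; the host name, exchange name, and the ConfigurationStore.Reload() helper are placeholders:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class ConfigChangeBus
{
    // Publisher: call this from the instance where the configuration was changed.
    public static void PublishConfigChanged()
    {
        var factory = new ConnectionFactory { HostName = "rabbit-host" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare("config-changed", ExchangeType.Fanout);
            channel.BasicPublish("config-changed", "", null, Encoding.UTF8.GetBytes("refresh"));
        }
    }

    // Consumer: call this once at startup on every web server (e.g. from Application_Start).
    public static void SubscribeToConfigChanges()
    {
        var factory = new ConnectionFactory { HostName = "rabbit-host" };
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();

        channel.ExchangeDeclare("config-changed", ExchangeType.Fanout);

        // Each server binds its own throwaway queue to the fanout exchange,
        // so every instance receives every notification.
        var queueName = channel.QueueDeclare().QueueName;
        channel.QueueBind(queueName, "config-changed", "");

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, args) => ConfigurationStore.Reload(); // hypothetical helper that re-reads the DB
        channel.BasicConsume(queueName, true, consumer);
    }
}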
Original Answer
Microsoft Web Deploy was built to do this. It supports synchronizing sites across servers, even down to application pool settings and SSL certificates.
The IIS documentation site has a specific page that is relevant to your use case: Synchronize IIS.
There is a lot involved in configuring Web Deploy, so I won't attempt to explain it all here, but for posterity the command to sync a local site to a remote machine would be:
msdeploy.exe -verb:sync
-source:apphostconfig="Default Web Site"
-dest:apphostconfig="Default Web Site",computername=Server1
(The command was split over multiple lines for readability)
As an entirely different approach, you could also use a "pull configuration" system like PowerShell Desired State Configuration or Chef.
Related
I have an ASP.NET web application that has been deployed to the local IIS servers of clients. There are more than 100 such clients so far.
Since the application is in an early phase of release, it gets updated almost daily. It becomes cumbersome to deploy new updates to each client's server individually.
Therefore, I would like to implement a mechanism where the user can automatically check for updates, download and replace the IIS application file.
I did try the solution given in the article Building a Self Updating Site Using NuGet, but it didn't feel very reliable or scalable.
So any help on this would be highly appreciated.
Many Thanks.
I have three applications running in three separate app pools. One of the applications is an administrative app that a few people have privileged access to. One of the functions the administrative app provides is creating downtime notices. So when a user goes into the administrative app and creates a downtime notice, the other two apps are supposed to pick up on there being a new notice and display it on the login page.
The problem is that these notices are cached, and because each app is in a separate app pool, the administrative app doesn't have any way to clear the downtime-notices cache in the other two applications.
I'm trying to figure out a way around this. The only thing I can think of is to insert a record in the DB denoting that the cache needs to be cleared, and have the other two apps check the DB when loading the login page. Does anyone have another approach that might work a little more cleanly?
*Side note, this is more widespread than just the downtime notices, but I just used this as an example.
EDIT
Restarting the app pools is not feasible as it will most likely kill background threads.
If I understand correctly, you're basically trying to send a message from the administrative app to the other apps. Maybe you should consider creating a WCF service on those apps that could be called from the administrative application. That is a standard way to communicate between different apps if you don't want to use a shared medium such as a database, and it doesn't force you into a polling model.
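Something along these lines, as a sketch (the contract name, URLs, and binding are placeholders, and error handling such as aborting faulted channels is omitted): the administrative app calls a small invalidation service hosted inside each of the other two apps, and each service clears its own app pool's cache.

using System;
using System.ServiceModel;
using System.Web;

// Contract shared by the administrative app and the two consuming apps.
[ServiceContract]
public interface ICacheControl
{
    [OperationContract]
    void Invalidate(string cacheKey);
}

// Hosted inside each consuming app (e.g. as an .svc endpoint); clears that app's own cache.
public class CacheControlService : ICacheControl
{
    public void Invalidate(string cacheKey)
    {
        HttpRuntime.Cache.Remove(cacheKey);
    }
}

// Called from the administrative app after a downtime notice is saved.
public static class CacheControlClient
{
    public static void InvalidateAll(string cacheKey, params string[] serviceUrls)
    {
        foreach (var url in serviceUrls)
        {
            var factory = new ChannelFactory<ICacheControl>(
                new BasicHttpBinding(), new EndpointAddress(url));
            ICacheControl channel = factory.CreateChannel();
            try
            {
                channel.Invalidate(cacheKey);
            }
            finally
            {
                ((IClientChannel)channel).Close();
                factory.Close();
            }
        }
    }
}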
Another way to look at this is as an inter-application messaging problem, for which a number of libraries already exist. RabbitMQ comes to mind; it has a C# client ready to go. MSMQ is another potential technology, and one that already comes with Windows - you just need to install it.
If it's database information you're caching, you might try your luck at setting up an SqlCacheDependency.
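A rough sketch of the polling-based flavour, assuming the database and the DowntimeNotices table have already been enabled for notifications with aspnet_regsql.exe and that a matching <sqlCacheDependency> entry named "NoticesDb" exists in web.config (both names are placeholders):

using System.Web;
using System.Web.Caching;

public static class NoticeCache
{
    public static void CacheNotices(object notices)
    {
        // Ties the cache entry to the DowntimeNotices table.
        var dependency = new SqlCacheDependency("NoticesDb", "DowntimeNotices");

        // The entry is evicted automatically in every app pool that caches it
        // when a row in DowntimeNotices changes, so each app re-reads on next request.
        HttpRuntime.Cache.Insert("downtime-notices", notices, dependency);
    }
}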
Otherwise, I would recommend not using the ASP.NET cache and instead finding a third-party distributed caching solution, so that all applications share one cache instead of three separate ones.
I'm not saying this is the best answer or even the right answer; it's just what I did.
I have a series of ecommerce websites on separate servers and data centers that pull catalog data from a central back-office website and then cache it locally. In my first iteration of this I simply used GET requests: the central location could ping the corresponding consuming website to initiate its own cache-refresh routine. I used SSL on each of the ecommerce servers, as I already had that set up, so the back-office web app could send credentials via a GET over SSL to initiate the refresh securely.
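The back-office side of that first approach might look roughly like this (the URLs, path, and credentials are placeholders): loop over the consuming sites and hit a refresh endpoint over HTTPS with credentials, and each site kicks off its local cache-refresh routine when it receives the call.

using System.Collections.Generic;
using System.Net;

public static class CacheRefreshNotifier
{
    public static void Notify(IEnumerable<string> siteBaseUrls)
    {
        foreach (var baseUrl in siteBaseUrls)
        {
            using (var client = new WebClient())
            {
                // Placeholder credentials; each consuming site validates them
                // before starting its own cache refresh.
                client.Credentials = new NetworkCredential("backoffice", "secret");
                client.DownloadString(baseUrl + "/cache/refresh");
            }
        }
    }
}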
At a later stage, we found it more efficient to use sockets on the back office instead, where each consuming website acted as a client listening for changes in the data. The back-office website could then tell the corresponding website when a particular account changed, and communicate exactly what changed. This approach is much more granular, and we could push small updates as needed rather than one large chunked update, but it was definitely more complicated than our first try.
I am relatively green with C# and WCF. I have landed on a project where I am creating self-hosted WCF services running as Windows services, but I am starting to wonder if I should use IIS instead (which we don't currently use), as managing all of these services could eventually get cumbersome.
Despite my best efforts, I have yet to find any definitive information about why I might favor one approach over the other. The services are primarily used for utility stuff like resizing images, retrieving files, etc. and are called by both C# and Java clients.
Thanks
The shortest answer would be "it depends on your requirements". You can self-host without problems, but IIS will manage resources more effectively and let you fine-tune things more easily than self-hosting.
For instance, in IIS it would be simpler to deploy a new version or remove an old one.
Either way is fine.
Generally, using the built-in IIS hosting capabilities can make deployment and configuration simpler for you. You also get the activation model of http.sys, which means IIS will start the necessary process for you when an appropriate message arrives.
Clients on any platform can connect to the WCF services regardless of whether they are self-hosted or IIS-hosted.
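For comparison, a self-hosted service is just a ServiceHost you open yourself; the contract and addresses below are made up, and it is shown as a console app for brevity (in a Windows service the same code would live in OnStart/OnStop):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IImageUtility
{
    [OperationContract]
    byte[] Resize(byte[] image, int width, int height);
}

public class ImageUtilityService : IImageUtility
{
    public byte[] Resize(byte[] image, int width, int height)
    {
        // Resizing logic omitted in this sketch.
        return image;
    }
}

class Program
{
    static void Main()
    {
        // Self-hosting: you own the process, its lifetime, and the base address.
        using (var host = new ServiceHost(
            typeof(ImageUtilityService), new Uri("http://localhost:8080/images")))
        {
            host.AddServiceEndpoint(typeof(IImageUtility), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Listening. Press Enter to stop.");
            Console.ReadLine();
        }
        // Under IIS, the same service would sit behind an .svc file (or file-less
        // activation) and IIS/WAS would manage the process and activation for you.
    }
}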
P.S. See also: how to allow IIS-hosted WCF services to store their configuration data in distinct xxx.config files
I need to change some configuration settings on-the-fly in a Windows Azure project - and they need to be changed via a web service call (updating the application's configuration either via the platform api or the Azure Management site isn't an option here).
The project has multiple web and worker roles - all of which will need to know about the new configuration when it is changed.
The configuration is persisted to durable storage, and it's also cached during runtime in a static variable.
My solution was to create an internal (TCP) endpoint on my roles, and use it to loop through all of the roles and the instances within those roles, create a client on the fly, and tell each instance about the new setting (pretty much identical to: http://msdn.microsoft.com/en-us/gg457891).
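For reference, the fan-out looks roughly like this; the endpoint name has to match the internal endpoint declared in ServiceDefinition.csdef, and the contract and member names here are placeholders:

using System.Net;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface ISettingsNotification
{
    [OperationContract]
    void SettingChanged(string name, string value);
}

public static class SettingsBroadcaster
{
    public static void Broadcast(string name, string value)
    {
        foreach (var role in RoleEnvironment.Roles.Values)
        {
            foreach (var instance in role.Instances)
            {
                RoleInstanceEndpoint endpoint;
                if (!instance.InstanceEndpoints.TryGetValue("NotificationEndpoint", out endpoint))
                    continue;

                // Build a net.tcp address from the instance's internal endpoint.
                IPEndPoint ip = endpoint.IPEndpoint;
                var address = new EndpointAddress(
                    string.Format("net.tcp://{0}:{1}/notify", ip.Address, ip.Port));

                var factory = new ChannelFactory<ISettingsNotification>(
                    new NetTcpBinding(SecurityMode.None), address);
                var channel = factory.CreateChannel();
                try
                {
                    channel.SettingChanged(name, value);
                }
                finally
                {
                    ((IClientChannel)channel).Close();
                    factory.Close();
                }
            }
        }
    }
}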
At first I started a ServiceHost in the WebRole's RoleEntryPoint... and I was confused why everything seemed to be working fine when I stepped through the communications (the static variables were getting set correctly), yet when I'd make other web service calls, the static variable seemed to have "forgotten" what I set it to.
This was the case both locally, and in the Azure staging environment.
At this point I realized that because we're using full-IIS mode, the RoleEntryPoint and the Web Services were running in two separate processes - one in Azure's stub, and one in IIS.
"Not a problem" I said, I'll simply move the line of code which starts the ServiceHost from my RoleEntryPoint into the global.asax - at which point the ServiceHost will have been started in the same process as the rest of the site - and the static variables would be the same ones.
Here's where I'm having a problem: this works great on my local machine running in the dev environment, but as soon as I deploy to staging I start getting error emails saying the channel used to connect to the service can't be closed because it's in a "faulted state".
Question:
What's different about Azure vs. Dev environment that is causing this?
How can I fix or workaround the problem?
Does anyone have any general advice on how I should go about obtaining a more descriptive error? Do I have to enable full WCF diagnostics in Azure to get this, or is there some other way I can get at the exception details?
Follow-Up:
Via remote desktop I've learned several interesting things:
Non-HTTP Activation isn't installed by default on Azure WebRoles. I believe this can be overcome via a startup script:
start /w pkgmgr /iu:WCF-NonHTTP-Activation;
The website created in IIS by the web role doesn't have the net.tcp protocol enabled by default. I also believe this can be overcome with a startup script:
%systemroot%\system32\inetsrv\appcmd.exe set app "Website Name Here" /enabledProtocols:https,http,net.tcp
I haven't had time to take this all the way, as deadlines have forced me to implement some workarounds temporarily.
Some useful links related to this topic:
http://msdn.microsoft.com/en-us/magazine/cc163357.aspx
http://forums.iis.net/t/1160443.aspx
http://msdn.microsoft.com/en-us/library/ms731053.aspx
http://labs.episerver.com/en/Blogs/Paul-Smith/Dates/2008/6/Hosting-non-HTTP-based-WCF-applications-in-IIS7/
UPDATE (6/27/2011):
Amazingly, someone at Microsoft (whose blog I commented on) actually got me an answer on this.
The Azure & WCF teams updated this post:
http://blogs.msdn.com/b/windowsazure/archive/2011/06/27/hosting-services-with-was-and-iis-on-windows-azure.aspx
The link contains all of the information you need to get going with this.
And a HUGE thanks goes to Yavor Georgiev, the MSFT PM with the win.
It's been quite a while since I asked the question, and there have been no answers, so let me leave this:
Per my follow-ups in the post, there are ways of making this work... but they are complicated and difficult to implement.
For WORKER ROLES, netTcpBinding works perfectly. No issues here. Go ahead and use it.
For WEB ROLES, you've got problems. But netTcpBinding is what you need to use for exposing internal endpoints. What to do?
Well, here's what I did:
Start a netTcpBinding service in your RoleEntryPoint using ServiceHost.
Create a standard WCF service in your web role using SOAP, JSON, or whatever you want.
When you receive requests through your netTcpBinding endpoint, proxy them along to the WCF service on the loopback adapter.
Properly secure your "internal" WCF service with SSL client certs.
It's not perfect... but it works, and it's not terrible.
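To make that concrete, here's roughly what the relay looks like; the contract, endpoint name, and loopback address are placeholders, and the SSL client-certificate hardening from the last step is omitted:

using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical contract used on both the internal endpoint and the IIS-hosted service.
[ServiceContract]
public interface IConfigRelay
{
    [OperationContract]
    void Refresh();
}

// Runs in the RoleEntryPoint process and simply relays the call to the IIS-hosted
// service over the loopback adapter, where the "real" static configuration lives.
public class ConfigRelayService : IConfigRelay
{
    public void Refresh()
    {
        var factory = new ChannelFactory<IConfigRelay>(
            new BasicHttpBinding(),
            new EndpointAddress("http://127.0.0.1/ConfigRelay.svc"));
        var channel = factory.CreateChannel();
        try { channel.Refresh(); }
        finally { ((IClientChannel)channel).Close(); factory.Close(); }
    }
}

public class WebRole : RoleEntryPoint
{
    private ServiceHost host;

    public override bool OnStart()
    {
        // "ConfigRelayInternal" must match the internal endpoint in ServiceDefinition.csdef.
        var endpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["ConfigRelayInternal"].IPEndpoint;

        host = new ServiceHost(typeof(ConfigRelayService));
        host.AddServiceEndpoint(
            typeof(IConfigRelay),
            new NetTcpBinding(SecurityMode.None),
            string.Format("net.tcp://{0}/relay", endpoint));
        host.Open();
        return base.OnStart();
    }
}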
I suspect that needing to do this kind of thing isn't super common, and I really can't think of any reason why you'd need to other than to dynamically modify settings at runtime... which means you're not slamming these services like crazy.
Obviously, YMMV.
I had a miserable time getting HTTP working between instances in staging, and gave up when it looked like I needed to mess around with netsh to give my processes permission to listen via an HttpListener (sheesh!). So I switched to TCP via sockets. HTTP just adds overhead in a point-to-point communication scenario like this.
I need to write an ASP.NET application which must handle a very large number of transactions per second; as many as 5000 users may transact at the same time. I think I will use WCF on the back end to communicate with SQL Server. But on the front end, can IIS handle 5000 simultaneous users effectively, or is there a simple way to host my application outside of IIS?
It will depend on the characteristics of the machine, but you could always set up a web farm to handle high loads.
You can host a WCF application outside of IIS using WAS, a Windows service, or a plain .NET application.
It certainly would be possible to design a system using IIS that could handle the load you describe. Whether this is a good idea or not really depends on the application. I suggest you benchmark some representative loads to determine whether it is quicker to host in IIS or to host the WCF application outside of IIS.
Why do you need it outside IIS? You can achieve 5000 TPS with IIS, but bear in mind that it depends on many factors: the hardware, how your servers are configured, how heavy your application is, and the response time of your application. Also, as suggested, you can use a web farm: put a load balancer in front and have several servers behind it. So it is possible; you just need a proper design and, if needed, a budget for hardware upgrades.