Change configuration at runtime in Azure Service Fabric - c#

I am new to Service Fabric and don't fully understand all the concepts yet, but I am looking for a way to change configuration at runtime, as is possible in Cloud Services and App Services, without performing an upgrade (and having to up-version the config file to do so).
For example, imagine that one setting is "Email destination", which might be changed several times a month and should be picked up automatically by the service.
It seems that ConfigurationPackageModifiedEvent is only fired for upgrades and that there is no other way to update and trigger this.
Otherwise, is there another way to relatively simply update a value that the service is using?

The assumption that the SF team went with is that any change should be a version change. That includes configuration packages as well. You could deploy only a configuration package to achieve what you need, without modifying the code or data packages.
Alternatively, using an external source of data could be an option. You could put it behind a dedicated service in SF to abstract the details.
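If you go the config-only-upgrade route, here is a minimal sketch of picking the new value up inside the service, assuming a stateless service with a config package named "Config"; the "EmailSettings" section and "Destination" parameter are illustrative names, not from your project:

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Runtime;

// Sketch: read a setting at startup and refresh it when a config-only upgrade
// replaces the configuration package ("Config", "EmailSettings" and "Destination"
// are assumed names for illustration).
internal sealed class EmailService : StatelessService
{
    private volatile string emailDestination;

    public EmailService(StatelessServiceContext context) : base(context)
    {
        ICodePackageActivationContext activationContext = context.CodePackageActivationContext;

        // Initial read from the currently active config package.
        var config = activationContext.GetConfigurationPackageObject("Config");
        this.emailDestination =
            config.Settings.Sections["EmailSettings"].Parameters["Destination"].Value;

        // Re-read when an upgrade swaps in a new config package version.
        activationContext.ConfigurationPackageModifiedEvent += (sender, e) =>
        {
            this.emailDestination =
                e.NewPackage.Settings.Sections["EmailSettings"].Parameters["Destination"].Value;
        };
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Use this.emailDestination here; it reflects the latest config package.
            await Task.Delay(TimeSpan.FromMinutes(1), cancellationToken);
        }
    }
}

A config-only upgrade still counts as an upgrade, which is why ConfigurationPackageModifiedEvent fires for it; the point is that you only have to version and deploy the config package, not the code.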

Related

Gracefully restarting Service Fabric Application while in Production to reflect an update to the Data Package

I have a working C#/.NET application designed for Azure Service Fabric. The application uses Data Package and Config Package files, which are read in the constructor of the stateless Service Fabric service during initialization. Together, the files provide the information that enables the application to create multiple infinitely long-running Tasks in the RunAsync method of the app.
Now say there is an update to the Data Package file after the application has already been in production for a long time, say hours/days or more. I want the application to read the Data Package and Config Package from scratch and restart from the initialization steps performed in the constructor of the stateless service.
After looking around I found the "RestartDeployedCodePackageAsync" method, which needs to be invoked as follows:
await client.FaultManager.RestartDeployedCodePackageAsync(applicationName, selector, CompletionMode.Verify);
This is described in detail on this page: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-testability-workload-tests
Should I use this method to achieve what I am looking to do? Is it the best method to do so?
To do this, you should perform a service upgrade deployment with just the configuration changes.
Use the same structure as a regular package, but without code, with just the new data & config files.
More info here and here.
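If you would rather have the service reload in place during such a data/config-only upgrade instead of restarting the code package, the activation context also raises an event when the data package is replaced. A rough sketch, assuming a data package named "Data" and a hypothetical reloadDataFrom callback that re-runs your initialization:

using System;
using System.Fabric;

// Sketch: reload the data files whenever an upgrade replaces the data package.
// "Data" is an assumed package name; reloadDataFrom stands in for your own
// re-initialization logic.
internal static class DataPackageReload
{
    public static void Hook(ICodePackageActivationContext activationContext,
                            Action<string> reloadDataFrom)
    {
        // Initial load from the currently deployed data package.
        DataPackage dataPackage = activationContext.GetDataPackageObject("Data");
        reloadDataFrom(dataPackage.Path);

        // Reload when a data-only (or data+config) upgrade swaps the package.
        activationContext.DataPackageModifiedEvent += (sender, e) =>
            reloadDataFrom(e.NewPackage.Path);
    }
}

RestartDeployedCodePackageAsync will also work, but it comes from the fault/testability API and restarts the whole process; handling the package-modified event lets the service stay up and re-initialize itself.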

Ways to leverage single code base between Windows Service and Azure WebJob

I'm working on a timed recurring process that in some cases will be deployed OnPrem and in other cases deployed in the cloud (Azure). I'm researching a Windows Service and an Azure WebJob. Given that I only need the recurring process to be the timed piece, I'm thinking about having the bulk of the logic in a library with just the entry points being different between the Windows Service for the local deployment or a WebJob when deploying to Azure. Each csproj (service and WebJob) would handle only the timed loop and configuration settings then call into the library for the bulk of the work.
My question is: Is there another design combination available to me that would fulfill these requirements potentially in a better way? I've read about wrapping an existing windows service in a WebJob, but I don't think that would be necessary in this case given I'm starting from scratch.
When it comes to keeping your common code up to date, and knowing which versions are used by which applications, the best solution is to create a class library project with a suitable design pattern and turn it into a NuGet package.
You can host your own private NuGet repository, create your own packages, and host them internally within your own network.
Here is a very nice article on "How to create a NuGet package out of your class library project". You can utilize it and share the library across all your code.
And finally, you can just call it from your Windows service/WebJob, as in the sketch below.
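As a sketch of that shape (all type and project names here are illustrative, not from an existing package): the shared library owns the work, and both hosts stay thin wrappers around it.

using System;
using System.ServiceProcess;
using System.Threading;

// Shared class library (published as a NuGet package): all recurring work lives here.
// RecurringProcessor is an illustrative name.
public class RecurringProcessor
{
    public void RunOnce()
    {
        // ... the actual processing ...
        Console.WriteLine($"Processing run at {DateTime.UtcNow:O}");
    }
}

// Windows Service host for the on-prem deployment: a timer calls into the library.
public class ProcessingService : ServiceBase
{
    private readonly RecurringProcessor processor = new RecurringProcessor();
    private Timer timer;

    protected override void OnStart(string[] args)
    {
        timer = new Timer(_ => processor.RunOnce(), null, TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    protected override void OnStop()
    {
        timer?.Dispose();
    }
}

// Azure WebJob host: the entry point (triggered on a schedule) calls the same library method.
public static class WebJobEntryPoint
{
    public static void Main()
    {
        new RecurringProcessor().RunOnce();
    }
}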
Let me know if you need any help related to designing the solution.
Hope it helps.

Windows Service consuming web service deployment

I have created a windows service which consumes a WCF Service hosted at some location.
The WCF service endpoint is specified in app.config of this windows service.
I am not sure whether my understanding of how services work is correct, so here I go.
I have created a WiX installer which packages all my dependent third-party DLLs into one installer.
Now, the question is: do I have to copy all the XSD files to the client folder?
If yes, then if the WCF endpoint in app.config is changed later, after installation, would the new endpoint be readily adopted by the Windows service (obviously as long as the contract remains the same), or even if the contract changes?
I am not able to phrase the question well; maybe that's why even plenty of googling didn't bring me any answers.
Please help me understand this.
The .xsd files need to be copied if your service/dlls make use of them at runtime. I would assume this is the case since it's not likely (although it is certainly possible) that the .xsd files are only used in the development environment. If you have any questions about this, you can always try to install the service on another system and see if it runs successfully without them. Trial-and-error is not the most efficient way to test software, but it works 100% of the time.
As for modifying the endpoint in the app.config post-deployment, the WCF service hosted by your Windows service will happily adopt this providing that you restart the service. This is one of the most appealing features of WCF, namely that how you connect (TCP, HTTP, P2P, etc.) and where you connect (endpoint) can be specified without having to change the code. If the contract changes, things get more sticky. Additions to the contract, e.g., new methods, would presumably be non-breaking changes, but modifications to existing methods would require the code to be rebuilt and redeployed.
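For completeness, a sketch of the client-side pattern that makes this work, assuming the endpoint is declared in app.config under a configuration name such as "MyServiceEndpoint" and a contract IMyService (both names are illustrative):

using System.ServiceModel;

// Sketch: the Windows service builds its WCF client from the named endpoint in
// app.config, so changing the address there (and restarting the service) is enough.
// "MyServiceEndpoint" and IMyService are illustrative names.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping(string message);
}

public static class WcfClientExample
{
    public static string CallService(string message)
    {
        var factory = new ChannelFactory<IMyService>("MyServiceEndpoint");
        IMyService channel = factory.CreateChannel();
        try
        {
            return channel.Ping(message);
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}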
HTH

C# ClickOnce deployment for Windows Services?

What are some best practices for being able to deploy a Windows service that will have to be updated?
I have a Windows service that I will be deploying but might require some debugging and new versions during the beta process. What is the best way to handle that? Ideally, I'd like to find a ClickOnce-style deployment solution for Windows services but my understanding is that this does not exist. What is the closest I can get to ClickOnce for a Windows service?
A simple solution that I use is to merely stop the service and x-copy the files from my bin folder into the service folder.
A batch file to stop the service then copy the files should be easy to throw together.
Net stop myService
xcopy \\myServerWithFiles\*.* c:\WhereverTheServiceFilesAre
net start myService
I have a system we use at work here that seems to function pretty well with services. Our deployed system has around 20-30 services at any given time. At work we use a product called Topshelf; you can find it here: http://topshelf-project.com/
Basically Topshelf handles a lot of the service-related stuff: installing, uninstalling, etc., all from the command line of the service. One of the very useful features is the ability to run as a console for debugging. You build one service, and with a different command-line start you can run it as a console to see the output of the service. We added one custom feature to this software that lets us configure profiles in advance. Basically our profiles configure a few things like logging, resource locations, etc. so that we can control all that without having to republish any code. All we do is run a command like
D:\Services\ServiceName.exe Core.Profiles.Debug or
D:\Services\ServiceName.exe Core.Profiles.Production
to get different logging configurations.
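For reference, a minimal Topshelf host looks roughly like this (MyWorker and the service names are illustrative; the profile feature described above is our own addition and isn't part of Topshelf itself):

using Topshelf;

// Sketch of a Topshelf-hosted service. Running the exe directly gives you a console;
// "MyService.exe install" / "start" / "uninstall" manage it as a Windows service.
public class MyWorker
{
    public void Start() { /* start timers / background work */ }
    public void Stop()  { /* stop and clean up */ }
}

public static class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<MyWorker>(s =>
            {
                s.ConstructUsing(name => new MyWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("MyService");
            x.SetDisplayName("My Service");
            x.SetDescription("Recurring processing hosted with Topshelf.");
        });
    }
}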
Our build script creates install.cmd and uninstall.cmd scripts for each of our services; all we do is copy the files to the server and run the script. If we want to see debug output, we stop the service and double-click the exe, and we get a console to read all the output.
One more thing that Topshelf has, which we don't use because it's not necessary, is the concept of shelving (there is documentation on its website for this). This allows you to update the service without having to "restart", but you still need to copy the files manually unless you build an automated system for that.
However, my suggestion if you need 100% service availability is to have a redundant system. No matter how you configure your service for updates, you cannot avoid hardware failure causing downtime without an automated failover system. If such a system were in place, my recommended update strategy would be to turn off one node, update, test, and turn it back on; then turn off the other node, update, test, and turn the second node back on. You can do all of this with a simple script. This may be a more complicated system than you need, but if you can't take a service offline for a simple restart that takes 5 seconds, then you really need some system in place to deal with hardware issues, because I can guarantee it will happen eventually.
Since a service is long-running anyway, using ClickOnce style deployment might not be viable - because ClickOnce only updates when you launch the app. A service will typically only be launched when the machine is rebooted.
If you need automatic updates for a service, then your best bet might be to hand-code something into the service, but I'd foresee problems with almost any solution: most install processes will require some level of user interaction (if only to get around UAC), so I can't imagine this leading to an answer that doesn't involve getting a logged-on user in front of the screen at some point.
One idea that might just work is active-directory deployment (or some similar equivalent). If your service is deployed via a standard MSI-type installer, AD allows you to update the application silently as part of the computer policy. I suspect you'd have to force the server to refresh the AD policy (by rebooting or using gpupdate from the console), but other than that it should be a hands-off deployment.
I would suggest using the "plugin" approach on this, that is, using the Proxy Design Pattern.
While using this pattern, an independent thread can watch a folder for updates. You will need to use shadow copying for your assembly deployment. When your service's update thread encounters a new version of your service, it unloads the current production assembly and loads the new version, without stopping the service itself. Even more: your service should never notice the difference, as long as there is no breaking change within your assembly.
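A rough sketch of that idea with .NET Framework AppDomains (the folder, assembly and type names, and the IWorker contract, are all illustrative):

using System;

// Sketch: host the worker assembly in a shadow-copied AppDomain so the files on
// disk can be replaced while the old version keeps running; the update thread
// would unload the old domain and load a new one when it sees a new version.
public interface IWorker
{
    void DoWork();
}

public static class PluginHost
{
    public static AppDomain LoadDomain(string pluginFolder)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = pluginFolder,
            ShadowCopyFiles = "true"   // copy DLLs so the originals stay replaceable on disk
        };
        return AppDomain.CreateDomain("WorkerDomain", null, setup);
    }

    public static IWorker CreateWorker(AppDomain domain)
    {
        // The worker type must derive from MarshalByRefObject so calls cross the domain boundary.
        return (IWorker)domain.CreateInstanceAndUnwrap("MyPluginAssembly", "MyPlugin.Worker");
    }

    public static void Unload(AppDomain domain)
    {
        AppDomain.Unload(domain);
    }
}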
I would suggest creating a normal setup project and adding the Windows service project output to that setup project.
For more information please refer to http://support.microsoft.com/kb/816169.

Multiple Instances of same Application as a Windows Service?

I have an application that manages the heavy processing for my project, and I need to convert it to a "Windows Service". I need to allow running multiple instances of the application's processing, which seems to be a fairly normal requirement.
I can see at least three approaches to do this:
Create a single installed directory (EXE, DLLs, config) but install as multiple Services instances from it.
Have a single Services instance spawn multiple instances of itself after launching, a la Apache.
Have a single Services instance spawn multiple threads that work within the same process space.
My intention was approach #1, but I kept tripping over the limitations, both in design and especially documentation for Services:
Are parameters ever passed to OnStart() by the normal Services mechanisms on an unattended system? If so, when/why?
Passing run-time parameters via the ImagePath registry value seems like a kludge; is there a better mechanism?
I got the app to install/uninstall itself as a pair of services ("XYZ #1", "XYZ #2", ...), using the ImagePath to hand it a command-line instance-number parameter ("-x 1", "-x 2"), but I was missing something. When attempting to start the service, it would fail with "The executable program that this service is configured to run in does not implement the service."
So, the questions:
Is there a concise description of what happens when a service starts, specifically for those situations where the ServiceName is not hard-coded (see Q above)?
Has anyone used approach #1 successfully? Any comments?
NOTE: I've side-stepped the problem by using approach #3, so I can't justify much time figuring this out. But I thought someone might have information on how to implement #1 -- or good reasons why it isn't a good idea.
[Edit] I originally had a 4th option (install multiple copies of the application on the hard drive) but I removed it because it just feels, um, hackish. That's why I said "at least three approaches".
However, unless the app is recompiled, it must dynamically set its ServiceName, and that turned out to be the solution to the third bullet/problem above. So, unless an instance needs to alter its install files, #1 should work fine with N config files in the directory and a registry entry indicating which one the instance should use.
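For anyone trying #1, a sketch of the entry point that sets ServiceName at runtime from an instance number appended to the service's command line (the "-x N" format and the names are illustrative):

using System.ServiceProcess;

// Sketch: each registered service ("XYZ #1", "XYZ #2", ...) points at the same exe
// but with a different "-x N" argument on its command line; Main reads it and sets
// ServiceName to match before handing control to the SCM.
internal static class Program
{
    private static void Main(string[] args)
    {
        int instance = 1;
        for (int i = 0; i < args.Length - 1; i++)
        {
            if (args[i] == "-x")
            {
                instance = int.Parse(args[i + 1]);
            }
        }

        var service = new InstanceService { ServiceName = "XYZ #" + instance };
        ServiceBase.Run(service);
    }
}

internal sealed class InstanceService : ServiceBase
{
    protected override void OnStart(string[] args) { /* start the work for this instance */ }
    protected override void OnStop() { /* stop the work */ }
}

Arguments appended to the registered command line arrive in Main, whereas start parameters passed to the SCM at start time arrive in OnStart, which is worth keeping in mind when deciding where to read the instance number.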
Though I can't answer your questions specific to option #1, I can tell you that option #2 worked very well for us. We wanted to create an app domain for each 'child' service to run under and for each of them to use a different configuration file. In our service's config file we stored the app domains to start and the configuration file to use. So for each entry we simply created the app domain, set the configuration file etc and off we went. This separation of configuration allowed us to easily specify the ports and log file locations uniquely for each instance. Of additional benefit to us was that we wrote our 'child service' as a command-line exe and simply called the AppDomain's ExecuteAssembly() on a new thread for each 'child' service. The only 'clunk' in the solution was shutdown, we didn't bother to create a 'good' solution for it.
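A sketch of that child app domain setup (paths, names, and the config handling are illustrative, not from the original project):

using System;
using System.IO;
using System.Threading;

// Sketch: run each 'child' service exe in its own AppDomain with its own config
// file, on its own thread, as described above.
public static class ChildServiceHost
{
    public static Thread StartChild(string childExePath, string configFilePath)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = Path.GetDirectoryName(childExePath),
            ConfigurationFile = configFilePath   // each child gets its own configuration
        };
        AppDomain domain = AppDomain.CreateDomain(
            Path.GetFileNameWithoutExtension(childExePath), null, setup);

        // Execute the child's command-line exe entry point on a dedicated thread.
        var thread = new Thread(() => domain.ExecuteAssembly(childExePath))
        {
            IsBackground = true
        };
        thread.Start();
        return thread;
    }
}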
Update Feb, 2012
Some time ago we started using 'named' services (like SQL Server). I detailed the entire process on my blog in a series, "Building a Windows Service – Part 1 through Part 7". It takes you through creating a command-line/Windows service hybrid, complete with self-installation. The following goals were met:
Building a service that can also be used from the console
Proper event logging of service startup/shutdown and other activities
Allowing multiple instances by using command-line arguments
Self installation of service and event log
Proper event logging of service exceptions and errors
Controlling of start-up, shutdown and restart options
Handling custom service commands, power, and session events
Customizing service security and access control
A complete Visual Studio project template is available in the last article of the series Building a Windows Service – Part 7: Finishing touches.
Check this out: Multiple Instance .NET Windows Service
There is an option #4 that I'm successfully using in my project, aka "named instances".
Every installation of your app has a custom name, and each installation has its own service. They are completely independent and isolated from each other. MS SQL Server uses this model if you install it multiple times on a single machine.
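A sketch of the installer side of named instances, assuming the name is passed to installutil as a "/instance=" parameter (the parameter name, "MyApp" base name, and "$" separator are illustrative):

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

// Sketch: installutil /instance=Reporting MyApp.exe registers a service named
// "MyApp$Reporting". The exe itself must set the same ServiceName at runtime,
// and Uninstall needs the matching name as well.
[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    private readonly ServiceProcessInstaller processInstaller;
    private readonly ServiceInstaller serviceInstaller;

    public ProjectInstaller()
    {
        processInstaller = new ServiceProcessInstaller { Account = ServiceAccount.LocalSystem };
        serviceInstaller = new ServiceInstaller { ServiceName = "MyApp", DisplayName = "MyApp" };
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }

    public override void Install(IDictionary stateSaver)
    {
        ApplyInstanceName();
        base.Install(stateSaver);
    }

    public override void Uninstall(IDictionary savedState)
    {
        ApplyInstanceName();
        base.Uninstall(savedState);
    }

    private void ApplyInstanceName()
    {
        string instance = Context.Parameters["instance"];
        if (!string.IsNullOrEmpty(instance))
        {
            serviceInstaller.ServiceName = "MyApp$" + instance;
            serviceInstaller.DisplayName = "MyApp (" + instance + ")";
        }
    }
}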
