C# ClickOnce deployment for Windows Services? - c#

What are some best practices for being able to deploy a Windows service that will have to be updated?
I have a Windows service that I will be deploying, but it might require some debugging and new versions during the beta process. What is the best way to handle that? Ideally, I'd like to find a ClickOnce-style deployment solution for Windows services, but my understanding is that this does not exist. What is the closest I can get to ClickOnce for a Windows service?

A simple solution that I use is to merely stop the service and x-copy the files from my bin folder into the service folder.
A batch file to stop the service then copy the files should be easy to throw together.
net stop myService
xcopy \\myServerWithFiles\*.* c:\WhereverTheServiceFilesAre
net start myService

I have a system we use at work that seems to function pretty well for services. Our deployed system has around 20-30 services at any given time. At work we use a product called TopShelf; you can find it here: http://topshelf-project.com/
Basically, TopShelf handles a lot of the service-related stuff: installing, uninstalling, etc., all from the command line of the service. One of the very useful features is the ability to run as a console for debugging. You build one service, and with a different command-line start you can run it as a console to see the output of the service. We added one custom feature to this software that lets us configure profiles in advance. Basically, our profiles configure a few things like logging, resource locations, etc., so that we can control all of that without having to republish any code. All we do is run a command like
D:\Services\ServiceName.exe Core.Profiles.Debug or
D:\Services\ServiceName.exe Core.Profiles.Production
to get different logging configurations.
Our build script creates install.cmd and uninstall.cmd scripts for each of our services; all we do is copy the files to the server and run the script. If we want to see debug output, we stop the service and double-click the exe, and we get a console showing all the output.
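For reference, a minimal sketch of what wiring a service into TopShelf looks like (the class and service names here are made up; our real setup layers the profile handling described above on top of this):

using Topshelf;

public class MyService
{
    public void Start() { /* spin up workers, timers, etc. */ }
    public void Stop()  { /* shut everything down cleanly */ }
}

public class Program
{
    public static void Main()
    {
        // Run with no arguments for a console, or "MyService.exe install" / "start"
        // to register and start it as a Windows service.
        HostFactory.Run(cfg =>
        {
            cfg.Service<MyService>(s =>
            {
                s.ConstructUsing(_ => new MyService());
                s.WhenStarted(svc => svc.Start());
                s.WhenStopped(svc => svc.Stop());
            });
            cfg.RunAsLocalSystem();
            cfg.SetServiceName("MyService");
            cfg.SetDisplayName("My Service");
        });
    }
}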
One more thing that TopShelf has, which we don't use because it's not necessary for us, is the concept of shelving (there is documentation for this on the website). This allows you to update the service without having to "restart", but you still need to copy the files manually unless you build an automated system for that.
However, my suggestion if you need 100% service availability is to have a redundant system. No matter how you configure your service for updates, you cannot avoid hardware failure causing downtime without an automated failover system. If such a system were in place, my recommended update strategy would be: turn off one node, update, test, turn it back on; then turn off the other node, update, test, and turn the second node back on. You can do all of this with a simple script, of course. This may be a more complicated system than you need, but if you can't take a service offline for a simple restart that takes 5 seconds, then you really need some system in place to deal with hardware issues, because I can guarantee they will happen eventually.

Since a service is long-running anyway, ClickOnce-style deployment might not be viable, because ClickOnce only updates when you launch the app. A service will typically only be launched when the machine is rebooted.
If you need automatic updating of a service, then your best bet might be to hand-code something into the service, but I'd foresee problems with almost any solution: most install processes will require some level of user interaction (if only to get around UAC), so I can't imagine this leading to an answer that doesn't involve getting a logged-on user in front of the screen at some point.
One idea that might just work is Active Directory deployment (or some similar equivalent). If your service is deployed via a standard MSI-type installer, AD allows you to update the application silently as part of the computer policy. I suspect you'd have to force the server to refresh the AD policy (by rebooting or using gpupdate from the console), but other than that it should be a hands-off deployment.

I would suggest using the "plugin" approach here, that is, using the Proxy design pattern.
With this pattern, an independent thread can watch a folder for updates. You will need to use shadow copying for your assembly deployment. When your service's update thread encounters a new version of your assembly, it unloads the current production assembly and loads the new version, without stopping the service itself. Even better, your service should never notice the difference, as long as there is no breaking change within your assembly.
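A rough sketch of the idea, assuming a shared IServiceCore contract and a plugin assembly called Worker.dll (both names are invented for illustration; the concrete type must derive from MarshalByRefObject so it can be proxied across the AppDomain boundary):

using System;

// Contract shared between the host service and the plugin assembly (assumed).
public interface IServiceCore
{
    void Start();
    void Stop();
}

public static class PluginLoader
{
    // Loads the plugin into its own AppDomain with shadow copying enabled,
    // so the original files stay unlocked and can be replaced on disk.
    public static AppDomain LoadWorker(string pluginDir, out IServiceCore core)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = pluginDir,
            ShadowCopyFiles = "true"
        };
        var domain = AppDomain.CreateDomain("WorkerDomain", null, setup);
        core = (IServiceCore)domain.CreateInstanceAndUnwrap("Worker", "Worker.ServiceCore");
        return domain;
    }

    // When the update thread sees a new version: call Stop() on the old core,
    // AppDomain.Unload(oldDomain), then call LoadWorker() again.
}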

I would suggest creating a normal setup project and adding the Windows service project output to that setup project.
For more information please refer to http://support.microsoft.com/kb/816169.


Difference between a console application and a web application in ASP.NET Core

I am trying to run a background service which just writes to a file on a specified interval.
There are two methods that I tried
1) Created the project with the Console application template
2) Created the project with the Web Application template
When I run the app from Visual Studio, both of them run fine. But when I deploy them to IIS, only the web application version works. It must be noted that there is absolutely no difference between the code of the two projects. I have used the WebHost as the hosting strategy in both projects, and I installed all the same dependencies in the console application as in the web application version.
I should also mention that I have used the preloadEnabled="true" option in IIS, as IIS needs a web request to start the application.
I am wondering what the difference is between the two project types, given that the code is the same. I don't want to use the Web Application template.
Edit 1: I forgot to mention that the service will also need to expose an API endpoint for health-check purposes. Will the Windows service approach listen for HTTP requests?
I used the following article for implementing my background service.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice
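For context, the worker itself is roughly this shape (a sketch of the timed hosted service described above, following the IHostedService/BackgroundService pattern from the linked article; the file path and interval are just examples):

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class FileWriterService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Append a timestamp on every tick.
            File.AppendAllText(@"C:\temp\heartbeat.log", DateTime.UtcNow + Environment.NewLine);
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}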
After years of building background services, I have learned that Windows services are the best tools to implement these applications. While there are different techniques to keep an IIS application up and running in the background and prevent it from getting recycled, in practice, applications on IIS are not meant to run forever.
If you intended to build your app in the cloud, I would have suggested using something like Azure WebJobs or Azure Functions timer-triggered functions, but for on-premise, even using something like Hangfire in the web is not sustainable. The worst case is when you need backward compatibility on Windows servers that don't have the "Application Initialization" module.
My suggestion is to move your application to a simple Windows Service if you control your environment. Windows services consume less memory, are easier to manage, and can run forever without getting recycled.
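If you do move it to a Windows service and you're on .NET Core 3.0 or later, the same hosted service can be reused with the Microsoft.Extensions.Hosting.WindowsServices package (a sketch; FileWriterService stands for your BackgroundService implementation, like the one sketched in the question above):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService()   // runs as a Windows service when installed, as a console app otherwise
            .ConfigureServices(services => services.AddHostedService<FileWriterService>())
            .Build()
            .Run();
}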
Web applications are plainly the wrong tool for this.
Being always on and always reachable, web servers are prime targets for hacking. To compensate for that, they are usually run under the most restrictive user rights you can imagine: read rights to their program and this instance's content directory. While I do not know why it worked at all, it probably will stop working in production.
What you wanted to write was either a service or something executed by the Windows Task Scheduler. Personally, I advise the Task Scheduler, as services have their own set of restrictions. Unless, of course, there is some detail of the requirements that you did not tell us.
This article could be helpful. It's a step-by-step tutorial on how to convert a console application to a web application.

Two-part Windows application: "windows service" + GUI to configure it

I'm working on a Windows app composed of two parts:
An agent, running in the background.
A main application with a window to start/stop the agent and configure it.
What I understand is that I should use a “windows service” for the agent.
But I’m not sure how this is supposed to be packaged? (Can I have these two parts in the same .exe?)
And how the agent and the main application can communicate (should I use a shared file? Can my agent have a private folder to work in?)
I'm looking for some architecture advice, basically.
Running the agent as a service is probably the best way to go. It'll run without anyone needing to be logged in to run it, and Windows provides extensive monitoring support for services. You can use the sc command to install, start and stop a service, and it even supports controlling services on other machines if you've got the appropriate permissions.
In order to have your GUI communicate with it, you could look at using WCF. It will allow you to define your interactions with the service as C# classes and will save you having to worry about checking shared directories or looking into a shared file, etc. This approach will also make it easy to support multiple clients at the same time, whereas something like a shared-folder approach will make this difficult.
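As a rough illustration of the WCF approach (the contract name and members here are invented, and AgentControl stands in for your implementation class; a named pipe binding works well when both ends are on the same machine):

using System.ServiceModel;

[ServiceContract]
public interface IAgentControl
{
    [OperationContract]
    void UpdateSetting(string key, string value);

    [OperationContract]
    bool IsRunning();
}

// Hosting side, typically done in the service's OnStart:
// var host = new ServiceHost(typeof(AgentControl), new Uri("net.pipe://localhost/agent"));
// host.AddServiceEndpoint(typeof(IAgentControl), new NetNamedPipeBinding(), "control");
// host.Open();

// GUI side:
// var factory = new ChannelFactory<IAgentControl>(
//     new NetNamedPipeBinding(), "net.pipe://localhost/agent/control");
// IAgentControl proxy = factory.CreateChannel();
// proxy.UpdateSetting("PollInterval", "60");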
You will need two separate .exe files, one for the service and one for the Windows application. You can package these as two separate MSIs within Visual Studio; the benefit here is that if you need to move the service (for whatever reason) you are not also packaging up the Windows app and leaving it wherever you install the service.
There are different ways you can have them communicate without getting massively complex. You could read from a text file, as you've suggested, but this could cause locking problems. When I've had to do something similar, I created a simple database in SQL (or any brand of database you wish), had the Windows app insert/update configuration options in a table, and had the service read the table to get its settings.

Winform inside a windows service in C# - Possible?

I am writing a Windows service to process emails on a daily basis. This service includes an App.config file, which has several parameters for the service to work accordingly.
Every time, the admin user has to go and change / add / delete the key/value pairs inside the configuration section using a text editor.
I am planning to include a Windows form to load all the pairs from that section, and I am thinking of making any modifications through the form.
All I would like to know is whether it's possible to have a WinForm inside a Windows service and open it whenever the configuration needs to be changed. I know we can have a separate Windows application and load the App.config file of the Windows service; I just want to avoid having a separate app for this.
If you have done something very similar to this, please share your thoughts!
Regards,
Sriram
That sounds like a security issue if nothing else.
A Windows service runs in a different context and account and cannot interact with the desktop unless that is specifically allowed when installing the service. This is not enough on its own, of course, so you'd also have to have the service running under the same account that is running the desktop, which in itself is really bad design and not something I would recommend.
You could also have the service executable decide what to do during launch; a common pattern is to have it run as a console application when debugging, to simplify development. But then you'd have to stop the service and launch the service executable manually and interactively to get the UI behaviour.
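A sketch of that pattern, assuming a ServiceBase implementation called MyService (all names here are illustrative):

using System;
using System.ServiceProcess;

public class MyService : ServiceBase
{
    // Public wrappers so the same start/stop logic can be driven from a console.
    public void StartInteractive(string[] args) => OnStart(args);
    public void StopInteractive() => OnStop();

    protected override void OnStart(string[] args) { /* start the worker here */ }
    protected override void OnStop() { /* stop the worker here */ }
}

public static class Program
{
    public static void Main(string[] args)
    {
        var service = new MyService();
        if (Environment.UserInteractive)
        {
            // Launched from a console or by double-clicking the exe.
            service.StartInteractive(args);
            Console.WriteLine("Running as a console... press Enter to stop.");
            Console.ReadLine();
            service.StopInteractive();
        }
        else
        {
            // Launched by the Service Control Manager.
            ServiceBase.Run(service);
        }
    }
}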
In any way, a separate configuration tool is the way to go.

Windows Service vs Web Process

I have created a message queue here which basically runs on a single thread and sends emails one after another from the database. First I thought that since it is a continuous process, it had to be a Windows service, and that sounded like an ideal solution, but now that I have talked to my manager, he said it would be better if it were in the same repository as the entire project and I put it in a while(true) loop. That way, when deploying to production, we do not need to worry about installing any Windows service. But what I think is that if we do it that way, there would be a lot of unwanted pressure on the web server.
I am not sure which way to go. Any suggestions?
I would definitely suggest a windows service for processing the email queue in the background. Here are some points you can suggest to your manager:
The service could be kept in the same repository as another project.
Installing and upgrading services is very easy. Use installutil and add a batch file to your project for installing/uninstalling. Upgrading is a matter of stopping the service, updating the service .exe, and starting the service again.
All of this could technically be automated as well as part of your deployment process.
I'd go with a separate Windows service. If this service is part of your web application, its lifetime depends on the lifetime of the application pool process (depending on the version of IIS you are using, of course), and this way whenever you choose to change the application pool settings, you will have to remember that your message job also depends on them; and if you set up any recycle settings for the app pool, you might have a hard time understanding why your job suddenly stopped working, or anything like that.
You can also go the route of simply writing a command line application and then wrapping it with something like Service+ to make it behave like a service. You also get other features like being able to run it like a command line application if you like (whenever you want to run it) or having it launch/execute from some other application as needed. You can build in a variety of behaviors as well... continuous mode, perhaps process 1 (or 100 or whatever) at a time then exit (and let Service+ restart it), or whatever else you may need.

Multiple Instances of same Application as a Windows Service?

I have an application that manages the heavy processing for my project, and I need to convert it to a "Windows Service." I need to allow running multiple instances of the application's processing, which seems to be a fairly normal requirement.
I can see at least three approaches to do this:
Create a single installed directory (EXE, DLLs, config) but install multiple service instances from it.
Have a single Services instance spawn multiple instances of itself after launching, a la Apache.
Have a single Services instance spawn multiple threads that work within the same process space.
My intention was approach #1, but I kept tripping over limitations, both in the design and especially in the documentation for services:
Are parameters ever passed to OnStart() by the normal Services mechanisms on an unattended system? If so, when/why?
Passing run-time parameters via the ImageKey registry value seems like a kludge; is there a better mechanism?
I got the app to install/uninstall itself as a pair of services ("XYZ #1", "XYZ #2", ...), using the ImageKey to hand it a command-line instance number ("-x 1", "-x 2"), but I was missing something. When attempting to start the service, it would fail with "The executable program that this service is configured to run in does not implement the service."
So, the questions:
Is there a concise description of what happens when a service starts, specifically for those situations where the ServiceName is not hard-coded (see the question above)?
Has anyone used approach #1 successfully? Any comments?
NOTE: I've side-stepped the problem by using approach #3, so I can't justify much time figuring this out. But I thought someone might have information on how to implement #1 -- or good reasons why it isn't a good idea.
[Edit] I originally had a 4th option (install multiple copies of the application on the hard drive) but I removed it because it just feels, um, hackish. That's why I said "at least three approaches".
However, unless the app is recompiled, it must dynamically set its ServiceName, hence that holds the solution to the third bullet/problem above. So, unless an instance needed to alter its install files, #1 should work fine with N config files in the directory and a registry entry indicating which one the instance should use.
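For what it's worth, one way the dynamic ServiceName could be handled at install time is with a standard installer class; this is only a sketch, assuming you install with something like installutil /instance=1 MyService.exe (the parameter name and service names are made up):

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class InstanceAwareInstaller : Installer
{
    private readonly ServiceProcessInstaller _process = new ServiceProcessInstaller
    {
        Account = ServiceAccount.LocalSystem
    };
    private readonly ServiceInstaller _service = new ServiceInstaller();

    public InstanceAwareInstaller()
    {
        Installers.Add(_process);
        Installers.Add(_service);
    }

    public override void Install(IDictionary stateSaver)
    {
        // e.g. installutil /instance=1 MyService.exe
        var instance = Context.Parameters["instance"] ?? "1";
        _service.ServiceName = "XYZ #" + instance;
        _service.DisplayName = "XYZ #" + instance;
        base.Install(stateSaver);
    }

    // Uninstall would need the same name derivation before calling base.Uninstall().
}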
Though I can't answer your questions specific to option #1, I can tell you that option #2 worked very well for us. We wanted to create an app domain for each 'child' service to run under and have each of them use a different configuration file. In our service's config file we stored the app domains to start and the configuration file each should use. So for each entry we simply created the app domain, set the configuration file, and so on, and off we went. This separation of configuration allowed us to easily specify the ports and log file locations uniquely for each instance. An additional benefit to us was that we wrote our 'child service' as a command-line exe and simply called the AppDomain's ExecuteAssembly() on a new thread for each 'child' service. The only 'clunk' in the solution was shutdown; we didn't bother to create a 'good' solution for it.
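A condensed sketch of that approach (paths and names are illustrative; our real version also handled logging and a cleaner shutdown):

using System;
using System.Threading;

public static class ChildServiceLauncher
{
    public static Thread Launch(string exePath, string configFile)
    {
        var setup = new AppDomainSetup
        {
            // Per-instance config, e.g. ports and log file locations.
            ConfigurationFile = configFile
        };
        var domain = AppDomain.CreateDomain("Child: " + configFile, null, setup);

        // Run the command-line exe inside the new app domain on its own thread.
        var thread = new Thread(() => domain.ExecuteAssembly(exePath)) { IsBackground = true };
        thread.Start();
        return thread;
    }
}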
Update Feb, 2012
Some time ago we started using 'named' services (like SQL Server). I detailed the entire process on my blog in a series, "Building a Windows Service – Part 1 through Part 7". It takes you through creating a command-line/Windows-service hybrid complete with self-installation. The following goals were met:
Building a service that can also be used from the console
Proper event logging of service startup/shutdown and other activities
Allowing multiple instances by using command-line arguments
Self installation of service and event log
Proper event logging of service exceptions and errors
Controlling of start-up, shutdown and restart options
Handling custom service commands, power, and session events
Customizing service security and access control
A complete Visual Studio project template is available in the last article of the series Building a Windows Service – Part 7: Finishing touches.
Check this out: Multiple Instance .NET Windows Service
There is an option #4 that I'm successfully using in my project, aka "named instances".
Every installation of your app has a custom name, and each installation has its own service. They are completely independent and isolated from each other. MS SQL Server uses this model if you try to install it multiple times on a single machine.
