I am working on a project that involves a web application and two services. The two services each have their own job. One kills an entry in the application by marking it as dead in the database and removing its "query string" from Splunk. The other is a reporting service: every day at 5pm it sends out a report of all the active entries in the DB.
My issue is that these need to be installable on client servers, and thus the IP addresses for the DB and the Splunk server will vary. Originally I was planning on encrypting the settings inside Settings.settings under the Properties folder. In the service's command-line installation functions I tried putting Console.WriteLine and Console.ReadLine, but after running that and doing some research I learned that console output is discarded for services, which leaves my settings empty.
What is the best practice to create an interface to configure a service?
As with any other .NET application, the best way of configuring a service is via the app.config file. This file is renamed at compile time to yourappname.exe.config. You can then read the information it contains with the ConfigurationManager class.
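For example, a minimal sketch of reading values back out (the key and connection string names here are placeholders):

using System.Configuration;

string splunkHost = ConfigurationManager.AppSettings["SplunkServer"];
string dbConnStr = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;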
For password encryption, have a look at this other SO question: Encrypt password in App.config
You could add a simple WinForms/WPF UI project to your solution and have it tweak/encrypt the service's .config file through SectionInformation.ProtectSection.
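A minimal sketch of what that UI would run (the exe path is a placeholder; this uses DPAPI via the built-in provider):

using System.Configuration;

var config = ConfigurationManager.OpenExeConfiguration(@"C:\Services\MyService.exe");
ConfigurationSection section = config.GetSection("connectionStrings");
if (!section.SectionInformation.IsProtected)
{
    section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    config.Save(ConfigurationSaveMode.Modified);
}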
It could also be useful for the UI to ask the user whether the service should be restarted when changes are made.
I have a C# web application using MVC5. It currently runs on Azure and I have dev, test, and production instances. What do I need to do to ensure that the database connection strings automatically change as the application is pushed to each environment? I know this is possible with web.config, as you can define Web.Debug.config, etc., but how would I go about this for different worker roles on Azure? I have been all over the internet looking for a solution. In a nutshell, I would like the same approach used for the multiple web.config files, but for Azure.
As some additional background: for my solution I have my repository base broken out into a separate project, and there I am trying to grab the connection string from the configuration files (let's say domain.dll is the name of the library that contains it). At first this worked when I was only using web.config, but when I had to run my domain DLL files from another worker role the configuration began to return null, because the code was now running in a different (non-web) worker process. This seems to introduce an interesting problem: what if I need to use the domain.dll code outside the web and outside of Azure? How do I still maintain the connection string benefits that Azure and web.config provide?
Assumption: you are using Web-Services, not Web-Sites. There is a difference.
There are 2 ways to get what you need:
For a worker role you can do app.config transformations in almost the same way you do in web.config; you just need to do it with SlowCheetah. There is a NuGet package for that, and there is also a VS extension to create the transform files. There is too much faffing about with this method; I never liked it, so on to the second option.
If you run Web-Services, you can specify connection strings as part of the worker-role configuration. Go to your Azure project and open the properties of your worker role:
There you can add a database connection string and create a configuration for every environment you run (dev, test, prod), placing a different connection string in each.
To get your connection string you execute:
CloudConfigurationManager.GetSetting("DatabaseConnectionString")
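A small usage sketch (CloudConfigurationManager comes from the Microsoft.WindowsAzure.ConfigurationManager NuGet package; the SQL usage is just an example):

using System.Data.SqlClient;

string connStr = CloudConfigurationManager.GetSetting("DatabaseConnectionString");
using (var conn = new SqlConnection(connStr))
{
    conn.Open();
    // query as usual
}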
Once your site is deployed you'll see these configuration values on the Configure tab in the Azure Portal.
You should make the distinction between 'building in release mode' and 'deploying to environment X'.
Building in Release mode should just transform your configuration files to be 'production-ready'. With MsDeploy you can parameterize your configuration files so that upon deployment they are filled with the parameters supplied to your MsDeploy script.
There is no magic bullet that will automatically change your connection strings etc. per environment, but this way you can standardize your process, which will greatly help with the stability of your product.
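For example, a deploy command might look roughly like this (the parameter name and values are placeholders, and it assumes a parameters.xml that declares them):

msdeploy.exe -verb:sync -source:package=MySite.zip -dest:auto -setParam:name="DbConnectionString",value="Server=prod-db;Database=MyDb;Integrated Security=SSPI;"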
One thing to note is that parameterizing your deployments will break the easy 'publish' workflow from within Visual Studio, due to the fact that you are not given an option to fill in your parameters during the publish wizard... :'(
You should manage the connection strings through the Azure portal rather than through config file transformations. With the MVC app this will be easy: go to the Configure tab and set your connection strings there.
For items like web jobs, use Microsoft.WindowsAzure.ConfigurationManager, which "provides a unified API to load configuration settings regardless of where the application is hosted - whether on-premises or in a Cloud Service".
I have created a forms application for my project. I want to host it on my website for users to download and test. Because I am using a configuration manager, I have to include the config file along with the .exe, as there is a back-end remote database for the application. And of course I only now realize my connection string is there for all to see. I tried renaming app.config to web.config, but the aspnet_regiis -pef command just returns a help menu when run as admin on my Vista machine! Even if this command works and I rename web.config back to app.config, will the machine that runs the downloaded app automatically decrypt the connection string? So in conclusion, what is the best way for a novice like me to approach this dilemma? Why does aspnet_regiis -pef not run? I have also looked at other posts about this topic, but unfortunately they have not worked for me so far.
Either create user-specific connection strings, or wrap all your data access in web services, where you can control the authorization.
Creating user-specific connection strings is the simplest, but it may have an impact on the DB load. You can still keep one connection string by using the Windows identity to connect. In both cases, you will have to spend some effort to ensure users won't be able to do more than what they are allowed to do.
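For example, a Windows-identity connection string carries no password at all (server and database names are placeholders):

Server=myServer;Database=myDb;Integrated Security=SSPI;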
Wrapping your data access in web services is far more manageable but will require extra work to make it work. Maybe you can take a look at RIA Services. The advantages are multiple: you can control the permissions within the web services, and you reduce the exposure to unwanted queries.
Please also note that even if you encrypt the connection string in the configuration file, any malicious user will be able to decrypt it. A simple decompiler will highlight your decryption key.
You could store an encrypted connection string in the app.config, but you would have to include the encryption key somewhere in the application. This is not safe, because anyone can decompile the application or attach a debugger and extract the connection string, or simply monitor the network traffic. There is actually no way to prevent this from happening: whatever your application can do can be done manually by anyone with access to the application.
The flaw in the design is that the application needs direct access to the database in the first place. It is close to impossible to ensure that the database can not be corrupted in this scenario (unless the database is only used for reading data). Essentially you would have to replicate a large portion of your business logic at the database server to ensure that no sequence of requests will corrupt the state.
A better solution would be accessing the database only indirectly through a web service. This allows you to perform better and easier to implement server-side validation and even authentication and authorization per user.
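A minimal sketch of that indirection using WCF (the contract and the authorization check are hypothetical):

using System.ServiceModel;

[ServiceContract]
public interface IEntryService
{
    [OperationContract]
    void MarkEntryDead(int entryId);
}

public class EntryService : IEntryService
{
    public void MarkEntryDead(int entryId)
    {
        // The check runs server-side, out of the client's reach.
        if (!CallerMayModify(entryId))
            throw new FaultException("Access denied.");
        // update the database here
    }

    private bool CallerMayModify(int entryId)
    {
        // hypothetical per-user authorization lookup
        return false;
    }
}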
I have a web app and want to transfer data from clients' machines to us every day. Assume there is a common API on every client machine to extract data from. To make this work, I have to create:
An API to receive data from clients - using WCF, seems ok at this point
An application that's installed on client machines
The client app needs to store info from the user (e.g. username/password to access our API - encrypted with DPAPI). The app needs to run daily (probably with a random Sleep() so our API isn't overloaded all at once). It also needs to be easy to install.
I've created a console app which talks to the client API and our own API. I've used Visual Studio's Settings.settings with user scope to save the persistent settings: if parameters are provided, it stores them; if no parameters are provided, it uses the stored settings.
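A sketch of that store-or-use pattern (the setting names and the Protect/Run helpers are hypothetical; Protect would wrap DPAPI):

static void Main(string[] args)
{
    if (args.Length >= 2)
    {
        // store the provided credentials in the user-scoped settings
        Properties.Settings.Default.ApiUser = args[0];
        Properties.Settings.Default.ApiPassword = Protect(args[1]);
        Properties.Settings.Default.Save();
    }
    else
    {
        Run(Properties.Settings.Default.ApiUser, Properties.Settings.Default.ApiPassword);
    }
}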
How can I make this usable for the end user? I'm thinking of a separate installer/configuration program that installs the exe file (and its dependencies) and asks the user to enter the settings to be stored (which can also be read by the client app). It would have to set up the scheduled task and also offer the ability to change the configuration (the stored shared variables).
Hoping someone can help architect this solution?
Thanks so much!
I think that your idea about an installer is correct, since you will most likely have dependencies or prerequisites to install.
However, rather than building the settings logic into the installer, I would recommend that you build a UI for this in your application so that the user can adjust it post-installation if needed.
For example, if the user changes their password, in your current design, the user will have to uninstall and reinstall the app. Also, if the scheduled time is incompatible with some other operations on their machine, then they will need to adjust the time without uninstalling and reinstalling.
You could build the UI and API interface into a single application: just change the behavior (runtime or configuration) with a command-line switch (for example, use a /runtime switch only for the scheduled task).
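A sketch of that switch (the /runtime flag, RunScheduledJob, and ConfigForm are placeholders):

using System.Linq;
using System.Windows.Forms;

[STAThread]
static void Main(string[] args)
{
    if (args.Contains("/runtime"))
        RunScheduledJob();                  // headless path used by the scheduled task
    else
        Application.Run(new ConfigForm());  // interactive settings UI
}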
What are some best practices for being able to deploy a Windows service that will have to be updated?
I have a Windows service that I will be deploying but might require some debugging and new versions during the beta process. What is the best way to handle that? Ideally, I'd like to find a ClickOnce-style deployment solution for Windows services but my understanding is that this does not exist. What is the closest I can get to ClickOnce for a Windows service?
A simple solution that I use is to merely stop the service and xcopy the files from my bin folder into the service folder.
A batch file to stop the service then copy the files should be easy to throw together.
net stop myService
xcopy \\myServerWithFiles\*.* c:\WhereverTheServiceFilesAre
net start myService
I have a system we use at work here that functions pretty well with services. Our deployed system has around 20-30 services at any given time. At work we use a product called TopShelf; you can find it here: http://topshelf-project.com/
Basically, TopShelf handles a lot of the service-related work: installing, uninstalling, etc., all from the command line of the service. One of the very useful features is the ability to run as a console for debugging: you build one service, and with a different command-line start you can run it as a console to see the output of the service. We added one custom feature to this software that lets us configure profiles in advance. Our profiles configure a few things like logging, resource locations, etc., so that we can control all that without having to republish any code. All we do is run a command like
D:\Services\ServiceName.exe Core.Profiles.Debug or
D:\Services\ServiceName.exe Core.Profiles.Production
to get different logging configurations.
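The profile mechanism above is their in-house addition, but the basic TopShelf bootstrapping looks roughly like this (the worker class and service name are placeholders):

using Topshelf;

public class ReportWorker
{
    public void Start() { /* begin the work loop */ }
    public void Stop()  { /* clean shutdown */ }
}

class Program
{
    static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<ReportWorker>(s =>
            {
                s.ConstructUsing(name => new ReportWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("ReportWorker");
        });
    }
}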
Our build script creates install.cmd and uninstall.cmd scripts for each of our services; all we do is copy the files to the server and run the script. If we want to see debug output, we stop the service and double-click the exe, and we get a console to read all the output.
One more thing that TopShelf has, which we don't use because it's not necessary for us, is the concept of shelving (there is documentation on the website for this). This allows you to update the service without having to "restart" it, but you still need to copy the files manually unless you build an automated system for that.
However, my suggestion if you need 100% service availability is to have a redundant system. No matter how you configure your service for updates, you cannot avoid hardware failure causing downtime without an automated failover system. If such a system were in place, my recommended update strategy would be: turn off one node, update, test, turn it back on; then turn off the other node, update, test, and turn the second node back on. You can of course do all of this with a simple script. This may be a more complicated system than you need, but if you can't take a service offline for a simple restart that takes 5 seconds, then you really need some system in place to deal with hardware issues, because I can guarantee they will happen eventually.
Since a service is long-running anyway, using ClickOnce style deployment might not be viable - because ClickOnce only updates when you launch the app. A service will typically only be launched when the machine is rebooted.
If you need automatic updating of a service then your best bet might be to hand-code something into the service, but I'd foresee problems with almost any solution: most install processes will require some level of user interaction (if only to get around UAC), so I can't imagine this leading to an answer that doesn't involve getting a logged-on user in front of the screen at some point.
One idea that might just work is active-directory deployment (or some similar equivalent). If your service is deployed via a standard MSI-type installer, AD allows you to update the application silently as part of the computer policy. I suspect you'd have to force the server to refresh the AD policy (by rebooting or using gpupdate from the console), but other than that it should be a hands-off deployment.
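For example, to force an immediate policy refresh from the console:

gpupdate /force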
I would suggest using the "plugin" approach on this, that is, using the Proxy Design Pattern.
While using this pattern, an independent thread can watch a folder for updates. You will need to enable shadow copying for your assembly deployment. When your service's update thread encounters a new version of your assembly, it unloads the current production assembly and loads the new version, without stopping the service itself. Even better: your service should never notice the difference, provided there is no breaking change within your assembly.
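A rough sketch of that load (the assembly/type names and the IPlugin contract are hypothetical; the plugin type must derive from MarshalByRefObject to cross the AppDomain boundary):

// Load the plugin into a shadow-copying AppDomain so the DLL on disk
// stays unlocked and can be replaced while the service keeps running.
var setup = new AppDomainSetup
{
    ApplicationBase = pluginDirectory,
    ShadowCopyFiles = "true"  // this legacy API takes a string, not a bool
};
AppDomain domain = AppDomain.CreateDomain("PluginDomain", null, setup);
var plugin = (IPlugin)domain.CreateInstanceAndUnwrap("MyPlugin", "MyPlugin.Worker");
// on update: AppDomain.Unload(domain), then create a fresh domain for the new version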
I would suggest creating a normal setup project and adding the Windows service project output to that setup project.
For more information please refer to http://support.microsoft.com/kb/816169.
I have an application that manages the heavy processing for my project and I need to convert it to a Windows service. I need to allow running multiple instances of the application, which seems to be a fairly normal requirement.
I can see at least three approaches to do this:
Create a single installed directory (EXE, DLLs, config) but install multiple service instances from it.
Have a single Services instance spawn multiple instances of itself after launching, a la Apache.
Have a single Services instance spawn multiple threads that work within the same process space.
My intention was approach #1, but I kept tripping over the limitations, both in the design and especially in the documentation for services:
Are parameters ever passed to OnStart() by the normal Services mechanisms on an unattended system? If so, when/why?
Passing run-time parameters via the ImagePath registry value seems a kludge; is there a better mechanism?
I got the app to install/uninstall itself as a pair of services ("XYZ #1", "XYZ #2", ...), using the ImagePath value to hand each a command-line instance number ("-x 1", "-x 2"), but I was missing something. When attempting to start the service, it would fail with "The executable program that this service is configured to run in does not implement the service."
So, the questions:
Is there a concise description of what happens when a service starts, specifically for those situations where the ServiceName is not hard-coded (see the questions above)?
Has anyone used approach #1 successfully? Any comments?
NOTE: I've side-stepped the problem by using approach #3, so I can't justify much time figuring this out. But I thought someone might have information on how to implement #1 -- or good reasons why it isn't a good idea.
[Edit] I originally had a 4th option (install multiple copies of the application on the hard drive) but I removed it because it just feels, um, hackish. That's why I said "at least three approaches".
However, unless the app is recompiled, it must dynamically set its ServiceName, hence that holds the solution to the third bullet/problem above. So, unless an instance needed to alter its install files, #1 should work fine with N config files in the directory and a registry entry indicating which one each instance should use.
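A sketch of deriving the name from the ImagePath arguments (the switch parsing is simplified and the service class is hypothetical). Note that the name passed to ServiceBase must match the name the SCM is starting, which is likely the cause of the "does not implement the service" error above:

static void Main(string[] args)
{
    // ImagePath: "C:\Services\Xyz.exe -x 2"  ->  run as "XYZ #2"
    int instance = int.Parse(args[1]);  // the value after the "-x" switch
    var service = new XyzService { ServiceName = "XYZ #" + instance };
    ServiceBase.Run(service);
}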
Though I can't answer your questions specific to option #1, I can tell you that option #2 worked very well for us. We wanted to create an app domain for each 'child' service to run under, and for each of them to use a different configuration file. In our service's config file we stored the app domains to start and the configuration file each should use. So for each entry we simply created the app domain, set the configuration file, etc., and off we went. This separation of configuration allowed us to easily specify the ports and log file locations uniquely for each instance. An additional benefit for us was that we wrote our 'child service' as a command-line exe and simply called the AppDomain's ExecuteAssembly() on a new thread for each 'child' service. The only 'clunk' in the solution was shutdown; we didn't bother to create a 'good' solution for it.
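A condensed sketch of that per-child setup (file names are placeholders):

using System.Threading;

var setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
    ConfigurationFile = "Child1.exe.config"  // unique ports/log paths per child
};
AppDomain child = AppDomain.CreateDomain("Child1", null, setup);
// run the command-line exe inside the domain on its own thread
new Thread(() => child.ExecuteAssembly("ChildService.exe")).Start();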
Update Feb, 2012
Some time ago we started using 'named' services (like SQL Server). I detailed the entire process on my blog in the series "Building a Windows Service – Part 1 through Part 7". It takes you through creating a command-line/Windows service hybrid complete with self-installation. The following goals were met:
Building a service that can also be used from the console
Proper event logging of service startup/shutdown and other activities
Allowing multiple instances by using command-line arguments
Self installation of service and event log
Proper event logging of service exceptions and errors
Controlling of start-up, shutdown and restart options
Handling custom service commands, power, and session events
Customizing service security and access control
A complete Visual Studio project template is available in the last article of the series Building a Windows Service – Part 7: Finishing touches.
Check this out: Multiple Instance .NET Windows Service
There is an option #4 that I'm successfully using in my project, aka "named instances".
Every installation of your app has a custom name, and each installation has its own service. They are completely independent and isolated from each other. MS SQL Server uses this model if you try to install it multiple times on a single machine.