I need to find a reliable way to update a running Windows Service (Service.exe).
The service is run under the LocalSystem account whereas the logged-in user is in a non-admin User account.
My current solution would be as follows:
- The Service.exe checks for updates (files) regularly
- When an update is found, it starts another service (Launcher.exe) that stops Service.exe, copies over the files, restarts Service.exe, then stops itself
After doing some online reading and from some of my previous forum posts I believe this would be the appropriate solution - but before I go ahead I wanted to check with all the gurus and see if I am forgetting something important or if there is a better way.
I did read up on some methods of self-updating (loading & unloading assemblies, etc...) but they seemed very uncertain, and I need this to be as robust as possible - if it fails, it means someone needs to intervene manually.
Any help or hints would be much appreciated.
Thanks,
The download / stop / apply changes / restart procedure is a fairly common and robust one. I definitely wouldn't try to get into the business of doing it without restarting. It may well be possible in many cases, but it's going to be a lot harder to get right.
Don't forget to make sure you can update the updater, by the way...
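To make that concrete, here is a rough C# sketch of the stop / copy / restart sequence the Launcher would run. It is shown as a plain program for brevity, even though in the question Launcher.exe is itself a service, and the service name and folder paths are placeholders:

    using System;
    using System.IO;
    using System.ServiceProcess;

    class Launcher
    {
        static void Main()
        {
            // Placeholder names - substitute the real service name and folders.
            var service = new ServiceController("Service");
            string updateDir = @"C:\MyApp\Update";
            string installDir = @"C:\MyApp";

            // 1. Stop the service and wait until it is really down.
            if (service.Status != ServiceControllerStatus.Stopped)
            {
                service.Stop();
                service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            }

            // 2. Copy the staged files over the installed ones.
            foreach (string file in Directory.GetFiles(updateDir))
                File.Copy(file, Path.Combine(installDir, Path.GetFileName(file)), overwrite: true);

            // 3. Restart the service; the launcher can then exit (or stop itself).
            service.Start();
            service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
        }
    }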
You could write a bootstrapper service as well.
Basically it's a lightweight service that you install; it watches a directory for DLLs that match a certain interface.
Anything that matches, it loads up and runs as a service inside of itself.
Have it load two DLLs to start. One is your service, the other is your update service.
Since there is virtually no code in the bootstrapper, it shouldn't need to be updated.
But your updater will be able to update itself and the service DLL periodically.
How your updater works is up to you. We found that having a publish location on our network, watching that folder for updates, additions and deletes, and syncing the local DLL folder was sufficient; but you could have it monitor a config file that points to DLLs all over the place if that is what is needed (we had to do that for one worldwide update system).
It's a bit tricky to get it all set up and working correctly. But once it is, it works great.
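As a rough illustration of the bootstrapper idea, here is a sketch assuming a hypothetical IServiceModule contract that every hosted DLL implements (all names here are invented for the example):

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    // Hypothetical contract that every hosted DLL must implement.
    public interface IServiceModule
    {
        void Start();
        void Stop();
    }

    public static class Bootstrapper
    {
        // Load every DLL in the folder and instantiate anything that implements the contract.
        public static IServiceModule[] LoadModules(string folder)
        {
            return Directory.GetFiles(folder, "*.dll")
                .SelectMany(path => Assembly.LoadFrom(path).GetTypes())
                .Where(t => typeof(IServiceModule).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
                .Select(t => (IServiceModule)Activator.CreateInstance(t))
                .ToArray();
        }
    }

In practice you would probably load each module into its own AppDomain (or enable shadow copying) so the DLL files on disk aren't locked and the updater can replace them.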
That is pretty much the right solution; however, somebody (usually the main service) needs to carry the code to update the launcher.
The launcher cannot update itself, for what should be obvious reasons.
I use a double launcher (the launcher spawns a second service that updates both) to simplify update checks.
BTW, even in the UNIX world we apply the same basic steps, except that we overwrite the running binary while the service is still running. In the Windows world you can do the same thing by first renaming all the files you are about to overwrite out of the way.
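For illustration, the Windows rename trick might look something like this (paths are placeholders; Windows will let you rename an executable that is in use, just not overwrite it in place):

    using System.IO;

    static class InPlaceSwap
    {
        // Paths are placeholders for illustration.
        public static void SwapBinary(string target, string newFile)
        {
            string parked = target + ".old";
            if (File.Exists(parked))
                File.Delete(parked);       // leftover from a previous update, safe once the old process is gone

            File.Move(target, parked);     // renaming an in-use exe is allowed on Windows
            File.Copy(newFile, target);    // the running process keeps using the parked copy
            // Restart the service, then delete the .old file.
        }
    }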
It is entirely possible I'm going about this entirely the wrong way, but here is what I'm doing:
I have a device that is communicating with a DLL over a COM port. This works fine for a single program; however, I need multiple programs to be running, each updated with the state of the device.
Since I can't work out how to share access to a COM port, my solution is that each DLL checks for the existence of a timestamped file. If the file is there, the DLL goes into 'slave' mode and just reads the state of the device from the file. If the file doesn't exist or is over 30ms old, the DLL appoints itself 'master', claims the COM port, and writes the file itself.
(The device can also be sent instructions, so the master will have to handle collecting and sending the slaves' requests somehow, but I can deal with that later. Also I might want the DLLs to be aware of each other, so if one is causing problems then the user can be told about it - again, will get to this later.)
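For illustration, the election step described above might boil down to something like the following sketch (the 30 ms threshold comes from the description above; the file path and method names are made up):

    using System;
    using System.IO;

    static class DeviceArbiter
    {
        // Returns true if this process should claim the COM port and start writing the state file.
        public static bool ShouldBecomeMaster(string stateFile)
        {
            if (!File.Exists(stateFile))
                return true;

            // A live master refreshes the file more often than every 30 ms.
            TimeSpan age = DateTime.UtcNow - File.GetLastWriteTimeUtc(stateFile);
            return age > TimeSpan.FromMilliseconds(30);
        }
    }

Note that two slaves can still see a stale file at the same instant and race each other, so the actual claim would need something atomic on top of this (opening the file exclusively, a named mutex, or similar).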
My immediate problem now is, where to store this file/possible collection of files? It must:
be somewhere that doesn't force the programs using the DLLs to have admin privileges or anything else that means they can't "just run".
be always the same place, not somewhere that might change based on some outside factor. Googling has shown me things like "Environment.SpecialFolder.CommonDocuments" (in C#), but if that location is sometimes C:\ProgramData and other times C:\Users\Public then we'll get two parallel masters, only one of which can claim the COM port.
work on as many Windows versions as possible (all the way back to XP if I can get away with it), because no one wants to maintain more versions than necessary.
preferably, be somewhere non-technical users won't see it. Quite apart from confusing/scaring them, it just looks unprofessional, doesn't it?
Two sidenotes:
I looked at using the Registry and learned there's a time cost to reading it. The DLLs need to be reading with a period of maybe 10ms, so I assume the Registry is a bad idea.
After we get this working on Windows, we need to turn around and address Android, OS X, Linux, etc., so I'm trying to bear that in mind when deciding how we structure everything.
EDIT:
I should add for context that we're releasing the DLL and letting other devs create apps that use it. We want those apps to work as frictionlessly as possible, without, for example, requiring the user to install anything first, or the dev needing to jump through a bunch of hoops to be sure it will work.
I have to design a backup algorithm for some files used by a Windows Service and I already have some ideas, but I would like to hear the opinion of the wiser ones, in order to try and improve what I have in mind.
The software that I am dealing with follows a client-server architecture.
On the server side, we have a Windows Service that performs some tasks such as monitoring folders, etc., and it has several XML configuration files (around 10). These are the files that I want to back up.
On the client side, the user has a graphical interface that allows him to modify these configuration files, although this shouldn't happen very often. Communication with the server is done using WCF.
So the config files might be modified remotely by the user, but the administrator might also modify them manually on the server (the windows service monitors these changes).
And for the moment, this is what I have in mind for the backup algorithm (quite simple though):
When - backups will be performed in two situations:
Periodically: a parallel thread in the server application will copy the configuration files every XXXX months/weeks/whatever (configurable parameter). That is, the backup is not performed each time the files are modified by user action, but only on this schedule (and when the client app is launched, as described next).
Every time the user launches the client: every time the server detects that a user has launched the application, the server side will perform a backup.
How:
There will be a folder named Backup in the ProgramData folder of the Windows Service. There, each time a backup is performed, a sub-folder named BackupYYYYMMDDHHmm will be created, containing all the files concerned (see the sketch after this list).
Maintenance: Backup folders won't be kept forever. Periodically, all of those older than XXXX weeks/months/years (configurable parameter) will be deleted. Alternatively, I might keep only the N most recent backup sub-folders (configurable parameter). I still haven't chosen an option, but I think I'll go for the first one.
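A minimal sketch of that naming and retention scheme, just to make it concrete (the paths, the *.xml filter and the keep-N policy are illustrative):

    using System;
    using System.IO;
    using System.Linq;

    static class ConfigBackup
    {
        public static void Backup(string configDir, string backupRoot, int keep)
        {
            // Create Backup\BackupYYYYMMDDHHmm and copy the config files into it.
            string target = Path.Combine(backupRoot, "Backup" + DateTime.Now.ToString("yyyyMMddHHmm"));
            Directory.CreateDirectory(target);
            foreach (string file in Directory.GetFiles(configDir, "*.xml"))
                File.Copy(file, Path.Combine(target, Path.GetFileName(file)), overwrite: true);

            // Retention: keep only the N most recent backup folders (the name sorts chronologically).
            foreach (string dir in Directory.GetDirectories(backupRoot, "Backup*")
                                            .OrderByDescending(d => d)
                                            .Skip(keep))
                Directory.Delete(dir, recursive: true);
        }
    }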
So, this is it. Comments are very welcome. Thanks!!
I think your design is viable. Just a few comments:
Do you need to back up to a separate place other than the server? I don't feel it's safe to keep backups of important data on the same server; I would rather back them up to a separate disk (perhaps a network location).
You need to implement the monitoring/backup/retention/etc. yourself, and it sounds complicated - how long do you wish to spend on this?
Personally I would use some simple trick to achieve the backup. For example, since the data are plain text files (XML format) and light, I might simply back them up to a source control system: make the folder an SVN checkout (or use some other means) and create a simple script that detects and checks in changes, then schedule the script to run every few hours (or more often, depending on your needs, or trigger it from your service/app on demand). This way it eliminates unnecessary copying of data (as it checks in changes only), and it's much more traceable, as SVN keeps the whole history.
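For illustration only, the check-in step of such a script could be as small as shelling out to the svn command-line client from the service (this assumes the config folder is already an SVN working copy; the commit message and helper names are made up):

    using System.Diagnostics;

    static class SvnBackup
    {
        // Adds any new files and commits whatever changed in the working copy.
        public static void CommitChanges(string workingCopy)
        {
            Run(workingCopy, "add --force .");                        // picks up new files, skips already versioned ones
            Run(workingCopy, "commit -m \"Automatic config backup\"");
        }

        static void Run(string workingCopy, string arguments)
        {
            var psi = new ProcessStartInfo("svn", arguments)
            {
                WorkingDirectory = workingCopy,
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var p = Process.Start(psi))
                p.WaitForExit();
        }
    }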
Hope the above helps a bit...
I have a project ongoing at the moment which is to create a Windows Service that essentially moves files around between multiple paths. A job may be to, every 60 seconds, get all files matching a regular expression from an FTP server and transfer them to a network path, and so on. These jobs are stored in an SQL database.
Currently, the service takes the form of a console application, for ease of development. Jobs are added using an ASP.NET page, and can be edited using another ASP.NET page.
I have some issues though, some relating to Quartz.NET and some general issues.
Quartz.NET:
1: This is the biggest issue I have. Seeing as I'm developing the application as a console application for the time being, I'm having to create a new Quartz.NET scheduler in all my files/pages. This is causing multiple confusing errors, and I just don't know how to instantiate the scheduler in one global place and access it from my ASP.NET pages (so I can, for example, pull job details into a grid view for editing).
2: My manager suggested I could look into having multiple 'configurations' inside Quartz.NET. By this I mean that at any given time, an administrator can change the application's configuration so that only specifically chosen applications run. What would be the easiest way of doing this in Quartz.NET?
General:
1: One thing that's crucial in this application is assurance that the file has been moved and is actually on the target path (after the move the original file is deleted, so it would be disastrous if the original were deleted when it hadn't actually been copied!). I also need to make sure that the file's contents match on the initial path and the target path, to give peace of mind that what has been copied is right. I'm currently doing this by MD5-hashing the initial file, copying the file, and, before deleting it, making sure that the file exists on the server. Then I hash the file on the server and make sure the hashes match up. Is there a simpler way of doing this? I'm concerned that the hashing may put strain on the system. (A sketch of this check follows after the next point.)
2: This relates to the above question but isn't as important, as not even my manager has any idea how I'd do this - but I'd love to implement it. An issue would arise if a job executes while a file is still being written to: a half-written file would be transferred, making it totally useless, and it would also be bad because the original file would be destroyed while it's still being written to! Is there a way of checking for this?
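For illustration, the hash comparison and a crude "is anyone still writing to this file?" test might look like the sketch below. This is not the poster's actual code, just standard .NET calls, and it assumes both copies are reachable as local/UNC paths (for the FTP leg you would hash the downloaded temp copy instead):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class TransferChecks
    {
        // Compare the MD5 of the source and the copied file before deleting the original.
        public static bool ContentsMatch(string source, string copy)
        {
            using (var md5 = MD5.Create())
            using (var a = File.OpenRead(source))
            using (var b = File.OpenRead(copy))
            {
                return Convert.ToBase64String(md5.ComputeHash(a)) ==
                       Convert.ToBase64String(md5.ComputeHash(b));
            }
        }

        // If another process still has the file open for writing, an exclusive open fails.
        public static bool IsProbablyComplete(string path)
        {
            try
            {
                using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                    return true;
            }
            catch (IOException)
            {
                return false;
            }
        }
    }

The hashing itself is mostly I/O bound (each file gets read once more), so for typical file sizes the strain is usually modest.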
As you've discovered, running the Quartz scheduler inside an ASP.NET application presents many problems. Check out Marko Lahma's response to your question about running the scheduler inside an ASP.NET web app:
Quartz.Net scheduler works locally but not on remote host
As far as preventing race conditions between your jobs (eg. trying to delete a file that hasn't actually been copied to the file system yet), what you need to implement is some sort of job-chaining:
http://quartznet.sourceforge.net/faq.html#howtochainjobs
In the past I've used the TriggerListeners and JobListeners to do something similar to what you need. Basically, you register event listeners that wait to execute certain jobs until after another job is completed. It's important that you test out those listeners, and understand what's happening when those events are fired. You can easily find yourself implementing a solution that seems to work fine in development (false positive) and then fails to work in production, without understanding how and when the scheduler does certain things with regards to asynchronous job execution.
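As a rough sketch of that listener approach (this assumes the Quartz.NET 2.x synchronous listener API - in later versions these methods are async - and the job keys are made-up placeholders):

    using Quartz;
    using Quartz.Impl.Matchers;

    // Fires a follow-up job only after the watched job has finished without an exception.
    public class ChainAfterSuccessListener : IJobListener
    {
        private readonly JobKey next;
        public ChainAfterSuccessListener(JobKey next) { this.next = next; }

        public string Name { get { return "ChainAfterSuccess"; } }

        public void JobToBeExecuted(IJobExecutionContext context) { }
        public void JobExecutionVetoed(IJobExecutionContext context) { }

        public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
        {
            if (jobException == null)
                context.Scheduler.TriggerJob(next);   // e.g. the "delete the original file" job
        }
    }

    // Registration - listen only to the copy job (both job keys are placeholders):
    // scheduler.ListenerManager.AddJobListener(
    //     new ChainAfterSuccessListener(new JobKey("deleteOriginalJob")),
    //     KeyMatcher<JobKey>.KeyEquals(new JobKey("copyFileJob")));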
Good luck! Schedulers are fun!
I want to implement a Windows service that functions as a simple license-security feature for software X. The service is meant to run independently from software X.
The rough idea:
The service is like a time bomb for software X installed on the machine...
Whenever the user runs software X, the service pops up a window every 30 minutes to remind the user to register software X.
If the user doesn't register the software within 1 month, the service will change the license code in a file and kill the software X process.
On the next start up, software X will read the wrong license code and starts in demo mode.
The service backs up the license code first before changing it.
When the user does register, an exe or bat file will be given to the user to run. The file restores the original license file and permanently removes the service.
Additional info:
Is it possible that if the user tries to kill the service, the service will automatically change license code and kill software X before being killed itself?
If the user changes the license code manually in the file after the service changes it, then the service will automatically change it back and kill software X.
I'm quite the newbie in programming... so I want to ask for advice first before jumping into the project... Any advice, tips or issues/concerns I should be aware of based on your experience?
I'll most probably code it in C++ but might do it in C# (never used it before) after reading the following discussion:
Easiest language for creating a Windows service
I'm quite the newbie in programming... so I want to ask for advice first before jumping into the project... Any advice, tips or issues/concerns I should be aware of based on your experience?
The best advice I can give you is "newbies to programming should not attempt to write security systems". Developing a security system that actually mitigates real vulnerabilities to real attacks is incredibly difficult and requires years of real-world experience and both practical and theoretical knowledge of how exactly the operating system and framework class libraries work.
The second-best advice I can give you is to construct a detailed, accurate and complete threat model. (If you do not know how to do threat modeling then that'll be the first thing to learn. Do not attempt to rollerskate before you can crawl.) Only by having a detailed, accurate and complete threat model will you know whether your proposed security features actually mitigate the attacks on your vulnerabilities.
Whenever the user runs software X, the service pops up a window every 30 minutes to remind the user to register software X.
This is not possible. A service cannot display a window because it runs on a different desktop than the user. (Since Vista this is enforced; XP did allow a service to show a window.)
Is it possible that if the user tries to kill the service, the service will automatically change license code and kill software X before being killed itself?
No. A service is just another program running on the system, which can be killed at any point in time. (You only have to be in the administrators group.)
If the user changes the license code manually in the file after the service changes it, then the service will automatically change it back and kill software X.
The conclusion is that when you break your license check into two parts, you get another point at which the user can break your check. You cannot prevent the user from working around your service if it is not mandatory for your program to work.
Is it possible that if the user tries to kill the service, the service will automatically change license code and kill software X before being killed itself?
Not in general, no. If I shut down the process unconditionally (e.g. using taskkill /f command), it won't get any chance to react.
If the user changes the license code manually in the file after the service changes it, then the service will automatically change it back and kill software X.
It's possible - you can use the ReadDirectoryChangesW function to watch the file and react to changes (or the FileSystemWatcher class if your service is implemented in .NET). Of course, in light of the first answer above, the user can just kill your service and then alter the file...
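For illustration, the .NET variant might look roughly like this (the folder and file name are placeholders, and as noted above the user can simply kill the watching process anyway):

    using System.IO;

    static class LicenseWatcher
    {
        public static FileSystemWatcher Watch(string folder, string fileName)
        {
            var watcher = new FileSystemWatcher(folder, fileName)
            {
                NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size
            };
            watcher.Changed += (sender, e) =>
            {
                // React to a manual edit of the license file here
                // (e.g. rewrite it and terminate software X, as described above).
            };
            watcher.EnableRaisingEvents = true;
            return watcher;   // keep a reference alive or the watcher will be collected
        }
    }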
NEVER make a service for something unless it's really a system service. If you are creating an application, then you have NO BUSINESS EVER running code on the system when the application is closed unless the user explicitly requested that operation. Ideas like this are the reason we (nerds) have to deal with so much crap when people ask us to "fix my computer, it's running so slow."
I would walk from a 6-figure salary before I would ever become a part of an abomination like that.
Edit: I suppose first I'd need a 6-figure salary... some day some day
I hate asking questions like this - they're so undefined... and undefinable, but here goes.
Background:
I've got a DLL that is the guts of an application that is a timed process. My timer receives a configuration for the interval at which it runs and a delegate that should be run when the interval elapses. I've got another DLL that contains the process that I inject.
I created two applications, one Windows Service and one Console Application. Each of the applications read their own configuration file and load the same libraries pushing the configured timer interval and delegate into my timed process class.
Problem:
Yesterday, and for the last n weeks, everything was working fine in our production environment using the Windows Service. Today, the Windows Service runs for around 20-30 minutes and then hangs (with a timer interval of 30 seconds), but the console application runs without issue and has for the past 4 hours. Detailed logging doesn't indicate any failure. It's as if the Windows Service just... dies quietly - without stopping.
Given that my Windows Service and Console Applications are doing the exact same thing, I can only think that there is something that is causing the Windows Service process to hang - but I have no idea what could be causing that. I've checked the configuration files, and they're both identical - I even copied and pasted the contents of one into the other just to be sure. No dice.
Can anyone make suggestions as to what might cause a Windows Service to hang, when a counterpart Console Application using the same base libraries doesn't; or can anyone point me in the direction of tools that would allow me to diagnose what could be causing this issue?
Thanks for everyone's help - still digging.
You need to figure out what changed on the production server. At first, the IT guys responsible will swear that nothing changed, but you have to be persistent. I've seen this happen so often I've lost count. Software doesn't spoil. Period. The change must have been to the environment.
Difference in execution: you have two apps running the same code. The most likely difference (and culprit) is that the service is running with a different set of security credentials than your console app and might fall victim to security vagaries. Check on that first. Which Windows account is running the service? What is its role and scope? Is there any 3rd-party security software running on the server, perhaps killing errant apps? Do you have to register your service with a 3rd-party security service? Is your .NET assembly properly signed? Are your .NET assemblies properly registered and configured on the server? Last but not least, don't forget that a debugger user, which you most likely are, gets away with a lot more than many other account types.
Another thought: Since timing seems to be part of the issues, check the scheduled tasks on the machine. Perhaps there's a process that is set to go off every 30 minutes that is interfering with your own.
You can debug a Windows service by running it interactively within Visual Studio. This may help you to isolate the problem by setting (perhaps conditional) breakpoints.
Alternatively, you can use the Visual Studio "Attach to process" dialog window to find the service process and attach to it with the "Debug CLR" option enabled. Again this allows you to set breakpoints as needed.
Are you using any assertions? If an assertion fires without being re-directed to write to a log file, your service will hang. If the code throws an unhandled exception, perhaps because of a memory leak, then your service process will crash. If you set the Service Control Manager (SCM) to restart your process in the event of a crash, you should be able to see that the service has been restarted. As you have identical code running in both environments, these two situations don't seem likely. But remember that your service is being hosted by the SCM, which means a very different environment to the one in which your console app is running.
I often use a "heartbeat", where each active thread in the service sends a regular (say every 30 seconds) message to a local MSMQ. This enables manual or automated monitoring, and should give you some clues when these heartbeat messages stop arriving.
Another possibility is some sort of permissions problem, because the service is probably running under a different local/domain user than the console.
After the hang, can you use the SCM to stop the service? If you can't, then there is probably some sort of thread deadlock. After the service appears to hang, you can go to a command line and type sc queryex servicename. This should give you the current STATE of the service.
I would probably put in some file logging just to see how far the program is getting. It may give you a better idea of what is looping/hanging/deadlocked/crashing.
You can try these techniques
Logging: start logging the flow of the code in the service. Make this parameter-based so you don't have a deluge after you are done. You should log all function names, parameters and timestamps.
Attach Debugger: locally or remotely attach a debugger to the running service and set appropriate breakpoints (these can be based on the data gathered from logging).
PerfMon: run this utility and gather information about the machine the service is running on for any additional clues (high CPU spikes, IO spikes, excessive paging, etc.).
Microsoft provides a good resource on debugging a Windows Service. That essentially sounds like what you'd have to do, given that your question is so generic. With that said, have any changes been made to the system over the last few days that could adversely affect the service? Have you made any updates to the code that change the way the service might work?
Again, I think you're going to have to do some serious debugging to find your problem.
What type of timer are you using in the Windows service? I've seen numerous people on SO have problems with timers and Windows services. Here is a good tutorial, just to make sure you are setting it up correctly and using the right type of timer. Hope that helps.
Another potential problem in reference to psasik's answer is if your application is relying on something only available when being run in User Mode.
Running as a service means running in a different session (Session 0, is it?), which in my experience can cause issues if you are trying to determine the state of something that can only be seen in user mode.
Smells like a threading issue to me. Is there any threading or async work being done at all? One crucial question is "does the service hang on the same line of code or same method every time?" Use your logging to find out the last thing that happens before a hang, and if so, post the problem code.
One other tool you may consider is a good profiler. If it is .NET code, I believe RedGate ANTS can monitor it and give you a good picture of any threadlock scenarios.