Backup algorithm for a Windows Service - C#

I have to design a backup algorithm for some files used by a Windows Service and I already have some ideas, but I would like to hear the opinion of the wiser ones, in order to try and improve what I have in mind.
The software that I am dealing with follows a client-server architecture.
On the server side, we have a Windows Service that performs tasks such as monitoring folders and so on, and it has several XML configuration files (around 10). These are the files that I want to back up.
On the client side, the user has a graphical interface that allows him to modify these configuration files, although this shouldn't happen very often. Communication with the server is done using WCF.
So the config files might be modified remotely by the user, but the administrator might also modify them manually on the server (the Windows Service monitors these changes).
And for the moment, this is what I have in mind for the backup algorithm (quite simple though):
When - backups will be performed in two situations:
Periodically: a parallel thread in the server application will copy the configuration files every XXXX months/weeks/whatever (configurable parameter). That is, a backup is not performed each time the files are modified by user action; it happens only on this schedule and when the client app is launched (next point).
Every time the user launches the client: whenever the server detects that a user has launched the client application, the server side will perform a backup.
How:
There will be a folder named Backup in the Program Data folder of the Windows Service. There, each time a backup is performed, a sub-folder named BackupYYYYMMDDHHmm will be created, containing all the files concerned.
Maintenance: backup folders won't be kept forever. Periodically, all of those older than XXXX weeks/months/years (configurable parameter) will be deleted. Alternatively, I might keep only the N most recent backup sub-folders (configurable parameter). I still haven't chosen an option, but I think I'll go for the first one.
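To make the idea concrete, a minimal sketch of the copy and retention steps could look like this; the folder locations, the *.xml filter and the retention window are assumptions standing in for the configurable parameters described above:

```csharp
using System;
using System.IO;

public static class ConfigBackup
{
    // Copy all XML config files into a new timestamped sub-folder.
    public static void BackupNow(string configDir, string backupRoot)
    {
        string target = Path.Combine(backupRoot,
            "Backup" + DateTime.Now.ToString("yyyyMMddHHmm"));
        Directory.CreateDirectory(target);

        foreach (var file in Directory.GetFiles(configDir, "*.xml"))
            File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);
    }

    // Delete backup sub-folders older than the configured retention window.
    public static void Cleanup(string backupRoot, TimeSpan retention)
    {
        foreach (var dir in Directory.GetDirectories(backupRoot, "Backup*"))
            if (DateTime.Now - Directory.GetCreationTime(dir) > retention)
                Directory.Delete(dir, true);
    }
}
```

BackupNow would be called from the periodic thread and from the client-launch handler; Cleanup would run from the same periodic thread.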
So, this is it. Comments are very welcome. Thanks!!

I think your design is viable. Just a few comments:
Do you need to back up to a separate place other than the server? I don't feel it's safe to keep backups of important data on the same server; I would rather back them up to a separate disk (perhaps a network location).
You need to implement the monitoring/backup/retention/etc. yourself, and that sounds complicated - how long do you want to spend on this?
Personally, I would use a simple trick to achieve the backup. Since the data are plain text files (XML format) and light, I might simply back them up to a source control system: make the folder an SVN checkout (or use some other tool), create a simple script that detects and checks in changes to SVN, and schedule the script to run every few hours (or more often, depending on your needs, or trigger it from your service/app on demand). This eliminates unnecessary copying of data (only changes are checked in), and it's much more traceable since SVN keeps the full history.
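Purely as an illustration, the "simple script" could even be a few lines of C# shelling out to the SVN command-line client; this sketch assumes the config folder is already an SVN working copy and that svn.exe is on the PATH:

```csharp
using System.Diagnostics;

static class SvnBackup
{
    // Stage any new config files, then commit whatever changed.
    // If nothing changed, the commit simply creates no new revision.
    public static void CommitConfigChanges(string workingCopyPath)
    {
        RunSvn("add --force .", workingCopyPath);
        RunSvn("commit -m \"Automatic config backup\"", workingCopyPath);
    }

    private static void RunSvn(string arguments, string workingDirectory)
    {
        var psi = new ProcessStartInfo("svn", arguments)
        {
            WorkingDirectory = workingDirectory,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
            process.WaitForExit();
    }
}
```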
Hope the above helps a bit...

Best way to create disc cache for web service

I have created a web service that delivers images. It will always be one-way communication; the images will never be changed on the side that gets them from the service.
It has multiple sources, and some can be far away, on bad connections.
I have created a memory cache for it, but I would like to also have a disc cache, to store images for longer periods.
I am a bit unsure on the best approach to do this.
First of all, all of my sources are web servers, so I don't really know how to check the last modified date of my images (for example), which I would like to use to see whether a file has changed.
Second, how do I best store my local cache? Just drop the files in a folder and compare dates with the original source?
Or perhaps store all the timestamps in a text file alongside the images, to avoid checking each file.
Or maybe store them in a local SQL Express DB?
The images, in general, are not very large. Most are around 200 KB. Every now and then, however, there will be one of 7+ MB.
The big problem is that some of the locations where the service will be hosted are on really bad connections, and they will need to use the same image many times.
There are no hard performance requirements; I just want to make it as responsive as possible for the locations that have a horrible connection to our central servers.
I can't install any "real" cache systems. It has to be something I can handle in my code.
Why don't you install a proxy server on your server and access all the remote web servers through that? The proxy server will take care of the caching for you.
EDIT: Since you can't install anything and don't have a database available, I'm afraid you're stuck with implementing the disk cache yourself.
The good news is - it's relatively easy. You need to pick a folder and place your image files there. And you need a unique mapping between your image identification and a file name. If your image IDs are numbers, the mapping is very simple...
When you receive a request for an image, first check for it on disk. If it's there, you already have it. If not, download it from the remote server, store it on disk, and then serve it from there.
You'll need to take concurrent requests into account. Make sure writing the files to disk is a relatively brief process (you can write them once you finish downloading them). While you write a file to disk, make sure nobody can open it for reading; that way you avoid sending incomplete files.
Now you just need to handle the case where the file isn't in your cache, and two requests for it are received at once. If performance isn't a real issue, just download it twice.
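A rough sketch of such a disk cache, assuming images are identified by a numeric ID and fetched over HTTP; the folder layout, file naming and the temp-file trick are just one way to handle the concurrency points above:

```csharp
using System;
using System.IO;
using System.Net;

public class DiskImageCache
{
    private readonly string cacheFolder;

    public DiskImageCache(string cacheFolder)
    {
        this.cacheFolder = cacheFolder;
        Directory.CreateDirectory(cacheFolder);
    }

    public byte[] GetImage(int imageId, string sourceUrl)
    {
        string path = Path.Combine(cacheFolder, imageId + ".img");

        if (File.Exists(path))
            return File.ReadAllBytes(path);          // cache hit

        byte[] data;
        using (var client = new WebClient())
            data = client.DownloadData(sourceUrl);   // cache miss: fetch from origin

        // Write to a temp file first, then move it into place, so a concurrent
        // reader never sees a half-written image.
        string temp = path + "." + Guid.NewGuid().ToString("N") + ".tmp";
        File.WriteAllBytes(temp, data);
        try
        {
            File.Move(temp, path);
        }
        catch (IOException)
        {
            // Another request cached the same image first; keep theirs and
            // discard our temp copy (the "just download it twice" case).
            File.Delete(temp);
        }

        return data;
    }
}
```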

sync local files with server files

Scenario: I want to develop an application. The application should be able to connect to my remote server and download data to the local disk; while downloading, it should check for new files and download only the new ones, creating the required (new) folders as it goes.
Problem: I have no idea how to compare the files on the server with the ones on the local disk. How do I download only the new files from the server to the local disk?
What am I thinking?: I want to sync the files on the local machine with the ones on the server. I am planning to use rsync for syncing, but I have no idea how to use it with ASP.NET.
Kindly let me know if my approach is wrong, or whether there is a better way to accomplish this.
First you can compare the file names, then the file sizes, and when both match, you can compare the hashes of the files.
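A rough sketch of that cascade, assuming both files can be opened from the machine doing the comparison (local paths or UNC shares); in a truly remote scenario you would exchange the hashes rather than the files:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

static class FileComparer
{
    // Cheap checks first (name, then size); only hash when both match.
    public static bool FilesMatch(string localPath, string serverPath)
    {
        if (!string.Equals(Path.GetFileName(localPath), Path.GetFileName(serverPath),
                           StringComparison.OrdinalIgnoreCase))
            return false;                                // names differ

        if (new FileInfo(localPath).Length != new FileInfo(serverPath).Length)
            return false;                                // sizes differ, so content differs

        using (var md5 = MD5.Create())
        using (var local = File.OpenRead(localPath))
        using (var server = File.OpenRead(serverPath))
        {
            byte[] localHash = md5.ComputeHash(local);
            byte[] serverHash = md5.ComputeHash(server);
            return localHash.SequenceEqual(serverHash);  // full content check
        }
    }
}
```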
I call this kind of a problem a "data mastering" problem. I synchronize our databases with a Fortune 100 company throughout the week and have handled a number of business process issues.
The first rule of handling production data is not to do your users' data entry. They must be responsible for putting any business process into motion which touches production. They must understand the process and have access to logs showing what data was changed, otherwise they cannot handle issues. If you're doing this for them, then you are assuming these responsibilities. They will expect you to fix everything when problems happen, which you cannot feasibly do because IT cannot interpret business data or its relevance. For example, I handle delivery records but had to be taught that a duplicate key indicated a carrier change.
I inherited several mismanaged scenarios where IT simply dumped "newer" data into production without any further concern. Sometimes I get junk data, where I have to manually exclude incoming records from the mastering process because they have invalid negative quantities. Some of my on-hand records are more complete than incoming data, and so I have to skip synchronizing specific columns. When one application's import process simply failed, I had to put an end to complaints by creating a working update script. These are issues you need to think ahead about, because they will encourage you to organize control of each step of the synchronization process.
Synchronization steps:
Log what is there before you update
Download and compare local vs remote copies for differences; you cannot compare the two without a) having them both in the same physical location or b) controlling the other system
Log what you're updating with, and timestamp when you're updating it
Save and close the logs
Only when 1-4 are done should you post an update to production
Now as far as organizing a "mastering" process goes, which is what I call comparing the data and producing the lists of what's different, I have more experience to share. For one application, I had to restructure (decentralize) tables and reports before I could reliably compare both sources. This implies a need to understand the business data and know it is in proper form. You don't say if you're comparing PDFs, spreadsheets or images. For data, you must write a separate mastering process for each table (or worksheet), because the mastering process's comparison step may be specially shaped by business needs. Do not write one process which masters everything. Make each process controllable.
Not all information is compared the same way when imported. We get in PO and delivery data and therefore compare tens of thousands of records to determine which data points have changed, but some invoice information is simply imported without any future checks or synchronization. Business needs can even override updates and keep stale data on your end.
Each mastering process's comparer module can then be customized as needed. You'll want specific APIs when comparing file types like PDFs and spreadsheets. I use EPPlus for workbooks. Anything you cannot open has to be binary compared, of course.
A mastering process should not clean or transform the data, especially financial data. Those steps need to occur prior to mastering so that these issues are caught before mastering is begun.
My tools organize the data in 3 tabs -- Creates, Updates and Deletes -- each with DataGridViews showing the relevant records. Then I can log, review and commit changes or hand the responsibility to someone willing.
Mastering process steps:
(Clean / transform data externally)
Load data sources
Compare external to local data
Hydrate datasets indicating Creates, Updates and Deletes
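To make the compare/hydrate steps a little more concrete, a bare-bones comparer could look like the sketch below; Record, Id, Payload and ContentEquals are placeholders for whatever your real rows contain:

```csharp
using System.Collections.Generic;
using System.Linq;

// Placeholder record type; real rows will have whatever columns the
// business data dictates.
public class Record
{
    public string Id;
    public string Payload;
    public bool ContentEquals(Record other) { return Payload == other.Payload; }
}

public class MasterDiff
{
    public List<Record> Creates = new List<Record>();
    public List<Record> Updates = new List<Record>();
    public List<Record> Deletes = new List<Record>();
}

public static class Mastering
{
    // Bucket incoming vs. local records into Creates / Updates / Deletes.
    public static MasterDiff Compare(IDictionary<string, Record> incoming,
                                     IDictionary<string, Record> local)
    {
        var diff = new MasterDiff();

        foreach (var pair in incoming)
        {
            Record existing;
            if (!local.TryGetValue(pair.Key, out existing))
                diff.Creates.Add(pair.Value);            // only on the remote side
            else if (!existing.ContentEquals(pair.Value))
                diff.Updates.Add(pair.Value);            // content changed remotely
        }

        // Present locally but gone remotely: candidate deletes (business rules
        // may veto these, per the "stale data" point above).
        diff.Deletes.AddRange(local.Where(p => !incoming.ContainsKey(p.Key))
                                   .Select(p => p.Value));
        return diff;
    }
}
```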

How do I make a duplicate of a folder on a remote server with asp.net and keep it speedy?

I have an ASP.NET page that needs to make a duplicate of a folder that resides on another server. All of the file IO runs on the remote server. This is not a question about how to copy files from one computer to another, nor is it a question about how to do a recursive directory copy. Those tasks are pretty easy.
What I've found is that making a copy of a directory on the server running the ASP.NET application is nice and fast. As soon as the source and destination folder reside on a remote server (i.e. a file server), it becomes incredibly slow. I can't figure out why it's so slow, which means I can't really figure out a good way to get around the problem. To be perfectly clear, here's an example of what I'm trying to accomplish:
The asp.net application is running on a server named "webserver"
The files are located on a server named "fileserver"
I need the website running on "webserver" to make a recursive duplicate of \\fileserver\files\setup named \\fileserver\files\setup_temp
If there are already answers to this question then please point me in the right direction. I found loads of answers to the "how do I make a recursive copy" and "how do I copy a folder to another server" questions, but those are not the problem here.
I would recommend using remote execution commands instead of doing the copy from the web app itself. For example, using PowerShell or PsExec means your web app does not have to hold a connection to both endpoints of the copy operation. I am not sure how you are doing the copy, but if you are calling anything in .NET to do the work, your server is a middle man for that operation, and that will slow it down.
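Purely illustrative sketch of that idea: it assumes PsExec is deployed on the web server, the application pool identity has the necessary rights on fileserver, and that \\fileserver\files corresponds to a local D:\files path there (that last path is a guess):

```csharp
using System.Diagnostics;

static class RemoteCopy
{
    // Launch the copy ON the file server itself so no bytes flow through
    // the web server.
    public static bool DuplicateSetupFolder()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "psexec.exe",
            Arguments = @"\\fileserver robocopy ""D:\files\setup"" ""D:\files\setup_temp"" /E",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            // robocopy exit codes below 8 mean the copy succeeded (possibly
            // with some files skipped); 8 and above indicate failures.
            return process.ExitCode < 8;
        }
    }
}
```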
When I re-read your message, it looks like you are thinking the IO happens on the remote server itself. In which case, these suggestions might not be of much use. :(
How does the webserver communicate with the fileserver? The transfer of data between those two is definitely your bottleneck. I would recommend using a WCF service to transfer one file/folder at a time (depending on the size of the data). If the files are really large, I would also recommend enabling streaming.

Hints and tips for a Windows service I am creating in C# and Quartz.NET

I have an ongoing project at the moment, which is to create a Windows Service that essentially moves files between multiple paths. A job may be, for example, to get all files matching a regular expression from an FTP server every 60 seconds and transfer them to a network path, and so on. These jobs are stored in a SQL database.
Currently, the service takes the form of a console application, for ease of development. Jobs are added using an ASP.NET page and can be edited using another ASP.NET page.
I have some issues though, some relating to Quartz.NET and some general issues.
Quartz.NET:
1: This is the biggest issue I have. Since I'm developing the application as a console application for the time being, I'm having to create a new Quartz.NET scheduler in all of my files/pages. This is causing multiple confusing errors, and I just don't know how to instantiate the scheduler in one global place and access it from my ASP.NET pages (so I can get details into a grid view to edit, for example).
2: My manager suggested I could look into having multiple 'configurations' inside Quartz.NET. By this, I mean that at any given time an administrator can change the application's configuration so that only specifically chosen applications run. What would be the easiest way of doing this in Quartz.NET?
General:
1: One thing that's crucial in this application is assurance that a file has been moved and is actually on the target path (after the move the original file is deleted, so it would be disastrous if the file were deleted when it hadn't actually been copied!). I also need to make sure that the file's contents match on the initial path and the target path, to give peace of mind that what has been copied is right. I'm currently doing this by MD5-hashing the initial file, copying the file, and, before deleting it, making sure the file exists on the server. Then I hash the file on the server and make sure the hashes match up (a rough sketch follows after question 2). Is there a simpler way of doing this? I'm concerned that the hashing may put strain on the system.
2: This relates to the above question, but isn't as important, as not even my manager has any idea how I'd do this, but I'd love to implement it. An issue would arise if a job is executed while a file is being written to: a half-written file might be transferred, making it totally useless, and the initial file would be destroyed while it's still being written! Is there a way of checking for this?
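A simplified sketch of what I mean, assuming both paths are reachable as local or UNC paths: VerifyCopy is the hash check from point 1, and IsLocked is one possible probe for point 2 (the method names are just placeholders):

```csharp
using System.IO;
using System.Linq;
using System.Security.Cryptography;

public static class FileChecks
{
    // Returns true only if source and destination have identical content,
    // so it is safe to delete the original afterwards.
    public static bool VerifyCopy(string sourcePath, string destinationPath)
    {
        if (!File.Exists(destinationPath))
            return false;

        using (var md5 = MD5.Create())
        using (var source = File.OpenRead(sourcePath))
        using (var destination = File.OpenRead(destinationPath))
        {
            byte[] sourceHash = md5.ComputeHash(source);
            byte[] destinationHash = md5.ComputeHash(destination);
            return sourceHash.SequenceEqual(destinationHash);
        }
    }

    // Crude "still being written to?" probe: if we cannot get exclusive
    // access, some other process still has the file open (possibly still
    // writing it), so skip it until the next run. This is only a heuristic;
    // FTP uploads in particular may need a rename-when-complete convention.
    public static bool IsLocked(string path)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                return false;
        }
        catch (IOException)
        {
            return true;
        }
    }
}
```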
As you've discovered, running the Quartz scheduler inside an ASP.NET application presents many problems. Check out Marko Lahma's response to your question about running the scheduler inside an ASP.NET web app:
Quartz.Net scheduler works locally but not on remote host
As far as preventing race conditions between your jobs (eg. trying to delete a file that hasn't actually been copied to the file system yet), what you need to implement is some sort of job-chaining:
http://quartznet.sourceforge.net/faq.html#howtochainjobs
In the past I've used the TriggerListeners and JobListeners to do something similar to what you need. Basically, you register event listeners that wait to execute certain jobs until after another job is completed. It's important that you test out those listeners, and understand what's happening when those events are fired. You can easily find yourself implementing a solution that seems to work fine in development (false positive) and then fails to work in production, without understanding how and when the scheduler does certain things with regards to asynchronous job execution.
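For what it's worth, a rough outline of a chaining job listener against the Quartz.NET 2.x API is shown below; registration and method signatures differ between Quartz.NET versions (1.x attaches listeners by name to the job, 3.x makes these methods async), so treat it as a sketch rather than drop-in code:

```csharp
using Quartz;
using Quartz.Impl.Matchers;

// Fires a follow-up job only after the watched job has finished successfully,
// e.g. run the "delete original" job only after the "copy file" job.
public class ChainToNextJobListener : IJobListener
{
    private readonly JobKey nextJob;

    public ChainToNextJobListener(JobKey nextJob)
    {
        this.nextJob = nextJob;
    }

    public string Name
    {
        get { return "ChainTo_" + nextJob; }
    }

    public void JobToBeExecuted(IJobExecutionContext context) { }

    public void JobExecutionVetoed(IJobExecutionContext context) { }

    public void JobWasExecuted(IJobExecutionContext context, JobExecutionException jobException)
    {
        // Only trigger the next job if the previous one raised no exception.
        if (jobException == null)
            context.Scheduler.TriggerJob(nextJob);
    }
}

// Registration, attached only to the "copy" job:
// scheduler.ListenerManager.AddJobListener(
//     new ChainToNextJobListener(deleteJobKey),
//     KeyMatcher<JobKey>.KeyEquals(copyJobKey));
```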
Good luck! Schedulers are fun!

Looking for solution ideas on how to update files in real time that may be locked by other software

I'm interested in getting solution ideas for a problem we have.
Background:
We have software tools that run on laptops and flash data onto hardware components. This software reads in a series of data files in order to do the programming on the hardware. It's in a manufacturing environment and is running continuously throughout the day.
Problem:
Currently, there's a central repository that the software connects to in order to read the data files. The software reads the files and retains a lock on them throughout the entire flashing process. This runs all throughout the day on different hardware components, so it's feasible that these files could be "locked" for most of the day.
There are new requirements stating that the data files the software reads need to be updated in real time, with minimal impact to the end user who is doing the flashing. We will be writing the service that drops the files out there in real time.
The software is developed by a third party vendor and is not modifiable by us. However, it expects a location to look for the data files, so everything up until the point of flashing is our process that we're free to change.
Question:
What approach would you take to solve this from a solution programming standpoint? We're not sure how to drop files out there in real time given the locks that will be present on them throughout the day. We'll settle for an "as soon as possible" solution if that is significantly easier.
The only way out of this conundrum seems to be the introduction of an extra file repository, along with a service-like piece of logic in charge of keeping these repositories synchronized.
In other words, the file upload takes place in one of the repositories (call it the "input repository"), and the flashing process uses the other repository (call it the "output repository"). The synchronization logic permanently polls the input repository for new files (based on file timestamp or otherwise...), and when it finds such new files it attempts to copy them to the output repository; the copy either takes place immediately, when the flashing logic hasn't locked the corresponding file in the output directory, or it is deferred until the file gets unlocked.
Note: during the file copy, the synchronization logic can/should lock the file, hence very temporarily preventing the file from being overwritten by new uploads but ensuring full integrity of the copied file. The difference from the existing system is that this lock is held for a much shorter amount of time.
The drawback of this system is the full duplication of the repository, and this could be a problem if the repository is very big. However there doesn't appear to be many alternatives since we do not have control over the flashing process.
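A bare-bones sketch of one synchronization pass, assuming both repositories are plain directories and that a copy refused because the flashing tool holds a lock is simply left for the next polling pass (all names are placeholders):

```csharp
using System.IO;

public static class RepositorySync
{
    // One synchronization pass: copy new or newer files from the input
    // repository to the output repository, skipping anything still locked.
    public static void SyncOnce(string inputDir, string outputDir)
    {
        foreach (var sourcePath in Directory.GetFiles(inputDir))
        {
            var destPath = Path.Combine(outputDir, Path.GetFileName(sourcePath));

            // Only copy files that are new or have a newer timestamp than the
            // copy already in the output repository.
            if (File.Exists(destPath) &&
                File.GetLastWriteTimeUtc(destPath) >= File.GetLastWriteTimeUtc(sourcePath))
                continue;

            try
            {
                File.Copy(sourcePath, destPath, true); // overwrite the old version
            }
            catch (IOException)
            {
                // The flashing tool still holds its lock on this file;
                // leave it for the next polling pass.
            }
        }
    }
}
```

Called from a timer or a FileSystemWatcher handler every few seconds, this gives the "as soon as possible" behaviour: an updated file lands in the output repository the moment the flashing tool releases its lock on the old copy.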
"As soon as possible" is your only option. You can't update a file that's locked, that's the whole point of a lock.
Edit:
Would it be possible to put the new file in a different location and then tell the 3rd party service to look in that location the next time it needs the file?
