I have a program that requires a few large (~4-5 MB) files. Every week there are new versions of these files with minor changes, mostly just a few lines added or removed.
When the program starts, if there's an Internet connection, I'd like the program to update these files automatically. Instead of downloading the entire new versions of the files, I'd like to download just a patch, based on the client's current version of the files, that updates them.
How might I do this?
I have total control over the server.
That is a tough problem to solve if you don't have any prior knowledge of what is in the file, or if the server doesn't have a facility that lets you request differences. Any program you write that cannot determine the differences without looking at both the old and the new file will have to download the whole thing anyway.
C# doesn't have any built-in facility to do this, but it sounds like your requirements aren't complicated. Look at how diff and ed on Unix can be used to patch a text file based on an easy-to-grok delta. Of course you should check the resulting file against a hash and fall back to a full download if it isn't correct.
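For illustration, a minimal client-side sketch of that flow in C# (the patch format, URLs and ApplyPatch below are placeholders, not a real protocol): download the delta, apply it, verify the result against a server-published hash, and fall back to a full download on mismatch.

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography;

    class UpdateClient
    {
        // Compute a SHA-256 hash of a local file so it can be compared
        // against the hash the server publishes for the latest version.
        static string HashFile(string path)
        {
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(path))
                return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        }

        static void UpdateFile(string localPath, string patchUrl, string fullUrl, string expectedHash)
        {
            using (var web = new WebClient())
            {
                // Download and apply the delta (ApplyPatch is a placeholder for
                // whatever diff/ed-style patcher you implement or reuse).
                byte[] patch = web.DownloadData(patchUrl);
                ApplyPatch(localPath, patch);

                // Verify the result; if the hash doesn't match, fall back to a full download.
                if (!string.Equals(HashFile(localPath), expectedHash, StringComparison.OrdinalIgnoreCase))
                    web.DownloadFile(fullUrl, localPath);
            }
        }

        static void ApplyPatch(string path, byte[] patch)
        {
            // Placeholder: apply the downloaded delta to the local file here.
            throw new NotImplementedException();
        }
    }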
I generated a help file (*.chm) using HTML Help Workshop.
But there is one line I need to change every time I compile my solution.
Imagine you already have a complete, finished *.chm file, but when the server does a new build, that build number won't be updated inside the *.chm file. Until now I have always deleted the *.chm file and created it anew afterwards.
Now I've reached the point where it annoys me that I have to recreate it every time just because the server makes a build. It would be convenient if I could modify the existing *.chm file directly from my C# code.
Is there any possibility to modify a *.chm file with C# code?
Yes. .chm files are really just an archive of a bunch of HTML files and some other bits to hold it all together.
Download a universal zip/unzip program like 7-Zip and you can right-click (in Windows) your .chm, then choose 7-Zip >> Open Archive and you'll see the contents.
Be careful about monkeying around too much in here though since broken links and changed file names will ruin your .chm.
I would agree though that modifying your source before running it up through html-help-workshop is a better option than monkeying with it afterwards.
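If you go the source-modification route, it can be automated from C#: rewrite the build number in the source HTML and shell out to the HTML Help compiler. A rough sketch, assuming the default hhc.exe install path and a placeholder token in your source page:

    using System.Diagnostics;
    using System.IO;

    class ChmRebuilder
    {
        static void Rebuild(string sourceHtml, string projectFile, string buildNumber)
        {
            // Patch the build number into the source page before compiling.
            string html = File.ReadAllText(sourceHtml);
            html = html.Replace("BUILD_NUMBER_PLACEHOLDER", buildNumber);
            File.WriteAllText(sourceHtml, html);

            // Recompile the .chm with HTML Help Workshop's command-line compiler.
            // The path below is the default install location; adjust for your machine.
            var hhc = new ProcessStartInfo(
                @"C:\Program Files (x86)\HTML Help Workshop\hhc.exe", "\"" + projectFile + "\"")
            {
                UseShellExecute = false
            };
            using (var process = Process.Start(hhc))
                process.WaitForExit();
        }
    }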
This is more of an exploratory question, because I am experimenting with this.
All over the internet I see how you can update a .txt file. Well, that is all good and well, but let's say I have a .docx, or even an .exe, or even a .dll file.
If we make a minor change to a file, do we really have to replace (overwrite) the whole file?
Is it possible to "update" the file so that we don't use too much data (over the Internet)?
What I am trying to achieve is to create an FTP client with a FileSystemWatcher. This will monitor a certain folder on the computer. If anything changes in this folder (even sub-directories) then it uploads, deletes, renames, or changes the file. But at the moment I am wondering, if I have, let's say, a 20 MB .exe file or whatever, whether it is possible to change something in that .exe instead of just overwriting the whole thing... thus sparing some of the data cap.
In general, it's possible to update the remote file only partially, but not in your case.
What would work:
1) track the file change using a filesystem filter driver, which gives you information about what parts of the file have been updated.
2) use a protocol that allows partial upload or remote modification of the file (e.g. SFTP).
As for your scenario:
Step 1 is not possible with FileSystemWatcher.
Step 2 is not possible with the FTP protocol, which doesn't support modification of file blocks.
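For what it's worth, with SFTP and a library such as SSH.NET it is possible to seek into the remote file and rewrite just a block. A rough sketch, assuming the Renci.SshNet package and that you already know which byte range changed:

    using System.IO;
    using Renci.SshNet;

    class PartialUploader
    {
        static void WriteBlock(string host, string user, string password,
                               string remotePath, long offset, byte[] changedBytes)
        {
            using (var client = new SftpClient(host, user, password))
            {
                client.Connect();

                // Open the existing remote file for writing and overwrite only
                // the region that changed, instead of re-uploading the whole file.
                using (var stream = client.Open(remotePath, FileMode.Open, FileAccess.Write))
                {
                    stream.Seek(offset, SeekOrigin.Begin);
                    stream.Write(changedBytes, 0, changedBytes.Length);
                }

                client.Disconnect();
            }
        }
    }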
Since you're experimenting, I can provide some pointers, but I don't know for sure whether the operations below are genuine in-place updates or whether the underlying OS calls simply replace the file.
Have different cases for each file type. Try with the basic types first: a txt file, then a binary file, etc.
You should keep an entire copy of the current file somewhere, since you "should" compare against the old file to know what changed.
Then, when a change is made to the file, compare it with the old file. E.g. in a 1 MB text file where the change is only 1 KB, you will need to build a format like
[Text][Offset][Operation]
e.g. [Mrs.Y][40][Delete] then [Mr.X][40][Add]
Then your FTP client should be able to implement this format and make the changes to the copy on the other side.
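As a hedged sketch of that idea (the record layout and names below are made up for illustration, not an existing format), applying a list of such [Text][Offset][Operation] records to a copy of the file could look like this:

    using System.Collections.Generic;

    // One change record in the assumed [Text][Offset][Operation] format.
    class ChangeRecord
    {
        public string Text;      // the text that was added or removed
        public int Offset;       // character position in the file
        public string Operation; // "Add" or "Delete"
    }

    static class DeltaApplier
    {
        // Apply the records to the old contents and return the patched text.
        public static string Apply(string oldContents, IEnumerable<ChangeRecord> records)
        {
            string result = oldContents;
            foreach (var record in records)
            {
                if (record.Operation == "Delete")
                    result = result.Remove(record.Offset, record.Text.Length);
                else if (record.Operation == "Add")
                    result = result.Insert(record.Offset, record.Text);
            }
            return result;
        }
    }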
No, it is not possible to upload only the changes to an .exe file; we have to overwrite it.
@Frederik - It would be possible if FTP supported updating a resource the way HTTP's PUT command does. Try exploring that angle. Let us know if you find something.
I'm trying to find the most reliable way of finding new and modified files in a directory using C# and .NET. I'm not looking for a real time solution, I want to check for changes at given times. It could be every 5 minutes or every hour etc.
We have CreationTime and LastWriteTime on the FileInfo object, and this seems to be enough to get new and modified files. But if a file is renamed none of the available dates are changed and the file will be missed if we just look at CreationTime and LastWriteTime.
At the moment I'm maintaining a "snapshot" of the files in the directory, including the time of the last check for changes. This lets me compare all the files in the directory with the files in the snapshot; if the snapshot is missing a file, it is either new or renamed.
Is this the only way, or am I missing something? I'm not going to use FileSystemWatcher, as it seems pretty "buggy" and it would have to run all the time.
Any suggestions are very welcome.
Merry Christmas!
Use the FileSystemWatcher class; it's the right way to go. Maybe you could be more specific about what you mean by
as it seems pretty "buggy"
EDIT: FileSystemWatcher does support renaming events.
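For reference, a minimal sketch of wiring up the rename notification (the watched path is an assumption):

    using System;
    using System.IO;

    class WatcherExample
    {
        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\WatchedFolder")
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
            };

            // Renamed fires with both the old and the new path.
            watcher.Renamed += (s, e) =>
                Console.WriteLine("Renamed: {0} -> {1}", e.OldFullPath, e.FullPath);
            watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
            watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);

            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }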
The Microsoft Sync Framework has components for synchronising files.
The framework covers all data types and data stores, and the file system component should be more reliable than FileSystemWatcher. As it says on MSDN:
It can be used to synchronize files and folders in NTFS, FAT, or SMB file systems. The directories to synchronize can be local or remote; they do not have to be of the same file system. An application can use static filters to exclude or include files either by listing them explicitly or by using wildcard characters (such as *.txt). Or the application can set filters that exclude whole subfolders. An application can also register to receive notification of file synchronization progress
I know you really only want to know when files have changed, but given that you've already dismissed the FileSystemWatcher route, it might be the only reliable option (other than doing what you already do and maintaining a snapshot yourself).
Your problem looks very much like a Database with no primary key.
If you assign, say, a GUID to each file in that folder and check for that GUID instead of the filename, your application will be much more reliable.
So that's the theory; in practice, we're talking about metadata. Depending on your system and the files contained in that folder, you could use Alternate Data Streams.
Here is a SO question about it.
It boils down to having information about a file that is not stored within the file itself; it is merely linked to it.
You can then look it up in a DOS box:
notepad.exe myfile.txt:MYGUID
It requires the system to use NTFS.
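As a rough illustration, and with the caveat that this direct file:stream path syntax only works on newer .NET runtimes (older .NET Framework versions reject the colon and need P/Invoke CreateFile instead):

    using System;
    using System.IO;

    class AdsTagger
    {
        // Write a GUID into an alternate data stream named MYGUID on an NTFS volume.
        static void TagFile(string path)
        {
            File.WriteAllText(path + ":MYGUID", Guid.NewGuid().ToString());
        }

        // Read the GUID back so a renamed file can still be recognised.
        static string ReadTag(string path)
        {
            return File.ReadAllText(path + ":MYGUID");
        }
    }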
HTH.
A very primitive approach would be to use the "dir" command and compare its outputs...
Here is some info on params:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/dir.mspx?mfr=true
Along with your snapshot of dates, you can compare the outputs of dir... it's very fast and consumes few resources...
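A hedged sketch of that idea: capture the output of dir and diff it against the previous run (the folder and snapshot file name are assumptions):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;

    class DirSnapshot
    {
        static void Compare(string folder, string snapshotFile)
        {
            // Capture a recursive bare listing of the folder.
            var psi = new ProcessStartInfo("cmd.exe", "/c dir /b /s \"" + folder + "\"")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            string current;
            using (var process = Process.Start(psi))
                current = process.StandardOutput.ReadToEnd();

            // Compare against the previous snapshot; new or renamed paths show up here.
            string[] previous = File.Exists(snapshotFile) ? File.ReadAllLines(snapshotFile) : new string[0];
            var added = current.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries)
                               .Except(previous);
            foreach (var path in added)
                Console.WriteLine("New or renamed: " + path);

            File.WriteAllText(snapshotFile, current);
        }
    }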
The FileSystemWatcher class in .NET raises, among others, two events:
Changed (raised by OnChanged)
Renamed (raised by OnRenamed)
You can set EnableRaisingEvents to true and that's it! Everything is simple with .NET, chill!
I am looking at implementing some performance optimization around my JavaScript/CSS, in particular the minification and combining of those files. I am developing .NET/C# web applications.
I have a couple of options and looking for feedback on each:
The first is a clever tool I came across, Chirpy, which combines, minifies, etc. via Visual Studio -> http://chirpy.codeplex.com/ It is a Visual Studio add-in, but as I am in a team environment, this tool isn't ideal.
My next option is to use an MSBuild task (http://yuicompressor.codeplex.com/) to minify the files and also combine them (maybe reading from an XML file what needs to be combined). While this works fine for minifying, my concern is that I will have to maintain what must be combined, which could be a headache.
The third option is to use an MSBuild task just for the minifying and, at runtime, combine the files on a per-page basis using some helper classes. This would combine the files, give the result a name, and add a version to it.
Any other options I could consider? My concern with the last option is that it may have performance issues, as I would have to open the files from the local drive, read their contents and then combine them. That is a lot of processing at run time. I was looking at something like SquishIt - https://github.com/jetheredge/SquishIt/downloads This minifies the files at run time, but I would look at doing this at compile time.
So any feedback on my approaches would be great. If the third option would not cause performance issues, I am leaning towards it.
We have done something similar with several ASP.NET web applications. Specifically, we use the Yahoo YUI Compressor, which has a .NET library version that you can reference in your applications.
The approach we took was to generate the necessary merged/minified files at runtime. We wrapped all this logic up into an ASP.NET control, but that isn't necessary depending on your project.
The first time a request is made for a page, we process the list of included JS and CSS files. In a separate thread (so the original request returns without delay) we merge the included files together (one for JS, one for CSS) and then apply the YUI compressor.
The result is then written to disk for fast reference in the future.
On subsequent requests, the page first looks for the minified versions. If found, it just serves those up. If not, it goes through the process again.
As some icing to the cake:
For debug purposes, if the query string ?debug=true is present, the merged/minified resources are ignored and the original individual files are served instead (since it can be hard to debug optimized JS)
We have found this process to work exceptionally well. We built it into a library so all our ASP.NET sites can take advantage. The post-build scripts can get complicated if each page has different dependencies, but the run-time can determine this quite easily. And, if someone needs to make a quick fix to a CSS file, they can do so, delete the merged versions of the file, and the process will automatically start over without need to do post-build processing with MSBuild or NAnt.
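A stripped-down sketch of the same flow (the Minify call below is a placeholder; we used the YUI compressor's .NET library, but any minifier with a string-in/string-out API would slot in):

    using System.IO;
    using System.Linq;

    static class ScriptBundler
    {
        // Merge the listed JS files, minify the result, and cache it on disk.
        // Subsequent requests can serve the cached file directly.
        public static string GetBundlePath(string[] sourceFiles, string cachePath)
        {
            if (!File.Exists(cachePath))
            {
                string merged = string.Join("\n", sourceFiles.Select(File.ReadAllText));
                string minified = Minify(merged); // placeholder for the YUI compressor call
                File.WriteAllText(cachePath, minified);
            }
            return cachePath;
        }

        static string Minify(string source)
        {
            // Swap in your minifier of choice here.
            return source;
        }
    }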
RequestReduce provides a really nice solution for combining and minifying JavaScript and CSS at run time. It will also attempt to sprite your background images. It caches the processed files and serves them using custom ETags and far-future headers. RequestReduce uses a response filter to transform the content, so no code or configuration is needed for basic functionality. It can be configured to work in a web farm environment and sync content across several servers, and it can be configured to point to a CDN. It can be downloaded at http://www.RequestReduce.com or from Visual Studio via NuGet. The source is available at https://github.com/mwrock/RequestReduce.
Have you heard of Combres?
Go to http://combres.codeplex.com and check it out.
It minifies your CSS and JS files at runtime, meaning you can change any file and upload it, and it is re-minified on each client request.
All you have to do is add the files you want to compress to a list in the Combres XML file and reference that list from your page / master page.
If you are using VS2010 you can easily install it into your project using NuGet.
Here's the Combres NuGet link: http://combres.codeplex.com/wikipage?title=5-Minute%20Quick%20Start
I built a really nice solution for this a couple of years back, but I don't have the source any more. The solution was for WebForms, but it should be easy to port to MVC. I'll try to explain what I did in a few simple steps. First we need to register the scripts, and we wrote a special controller that did just that. When the controller was rendered it did three things:
Minify all the files (I think we used the YUI compressor)
Combine all the files and store the result as a string
Calculate a hash of the combined string and use that as a virtual filename. Store the combined string in a cached dictionary on the server with the hash value as the key; the HTML that is rendered needs to point to a special folder where the "scripts" are located.
The next step is to implement a special HttpHandler that handles requests for files in that special folder. When a request is made to the special folder, you look it up in the cached dictionary and return the string, basically.
One really nice feature of this is that the returned script is always valid so the user will never have to ask you for an update of the script. The reason for that is when you make a change to any of the script files the hash value will change and the client will ask for a new script.
You can use this for CSS files as well with no problems. I remember making it configurable so you could turn off combining files or minimizing files, or just exclude one file from the process if you wanted to do some debugging.
I might have missed some details, but it wasn't that hard to implement and it turned out very well.
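A condensed, hypothetical sketch of those steps (the class names and the cache dictionary below are illustrative, not the original code):

    using System;
    using System.Collections.Generic;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    public static class ScriptCache
    {
        // Combined+minified script text keyed by its hash, which doubles as the virtual file name.
        public static readonly Dictionary<string, string> Items = new Dictionary<string, string>();

        public static string Register(string combinedMinifiedScript)
        {
            using (var md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(combinedMinifiedScript));
                string key = BitConverter.ToString(hash).Replace("-", "");
                Items[key] = combinedMinifiedScript;
                return "/scripts/" + key + ".js"; // the URL rendered into the page
            }
        }
    }

    // Handler mapped to the virtual /scripts/ folder; it looks the hash up and writes the script out.
    public class ScriptHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string key = System.IO.Path.GetFileNameWithoutExtension(context.Request.Path);
            string script;
            if (ScriptCache.Items.TryGetValue(key, out script))
            {
                context.Response.ContentType = "application/javascript";
                context.Response.Write(script);
            }
            else
            {
                context.Response.StatusCode = 404;
            }
        }
    }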
Update: I've implemented a solution for MVC and released it on NuGet, and I have the source up on GitHub.
Microsoft's Ajax Minifier is surprisingly good as a minification tool. I wrote a blog post on combining files and using their minifier in a JavaScript and stylesheet handler:
http://www.markistaylor.com/javascript-concatenating-and-minifying/
It's worthwhile combining the files at run time to avoid having to synchronise new versions. However, once they are programmatically combined, cache them to disk. Then the code which runs each time the files are fetched need only check that the files haven't changed before serving the cached version.
If they have changed, then the compression code can run as a one-off.
Whilst there will be a slight performance cost, you will also receive a performance benefit from fewer file requests.
This is the approach that the Minify tool uses to compress JS/CSS, which has worked really well for me. It's Linux/PHP only, but you might get some more ideas there too.
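For instance, the "have the files changed" check before serving the cached version can be as simple as comparing timestamps; a minimal sketch, with an assumed file layout:

    using System.IO;
    using System.Linq;

    static class BundleFreshness
    {
        // The cached bundle is stale if any source file is newer than it.
        public static bool IsStale(string cachedBundlePath, string[] sourceFiles)
        {
            if (!File.Exists(cachedBundlePath))
                return true;

            var bundleTime = File.GetLastWriteTimeUtc(cachedBundlePath);
            return sourceFiles.Any(f => File.GetLastWriteTimeUtc(f) > bundleTime);
        }
    }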
I needed a solution for combining/minifying CSS/JS on a .NET 2.0 web app, and since SquishIt and the other tools I found weren't .NET 2.0-compatible, I created my own solution that uses a syntax similar to SquishIt's but is compatible with .NET 2.0. Since I thought other people might find it useful, I put it up on GitHub. You can find it here: https://github.com/AlliterativeAlice/simpleyui
Hi, I'm creating an online shop. In this shop people buy files with a zip extension online. They pay with their credit cards or other methods, get a key, and download the product. How can I know when they have finished downloading the product?
Thanks
Unfortunately there is no really good way to do this, as some clients might not download the file in one go (e.g. download managers split the download into several parallel partial downloads).
Options are:
If it is very important to you that it can only be downloaded once: you could simply not support resuming. Then you can log whether the file has been downloaded entirely (as soon as the last byte has been sent). This might work well if the download is small.
Otherwise you could offer some grace data (we usually allow clients to download 5 times the size of the real download) and log every download attempt.
You should NOT just count the bytes downloaded (because the download might be disrupted), and NOT just determine whether all sections have been downloaded once (also because the download might be disrupted).
Just to clarify: All this means that you have to write your own download handler (fileserver).
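A bare-bones illustration of such a handler in ASP.NET (assuming that's your stack): stream the file in chunks and only record the download as complete after the last chunk has been written.

    using System.IO;
    using System.Web;

    public class TrackedDownloadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string path = context.Server.MapPath("~/files/product.zip"); // assumed location
            context.Response.BufferOutput = false;
            context.Response.ContentType = "application/zip";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=product.zip");

            using (var stream = File.OpenRead(path))
            {
                var buffer = new byte[64 * 1024];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // If the client disconnects, bail out so the logging call is never reached.
                    if (!context.Response.IsClientConnected)
                        return;
                    context.Response.OutputStream.Write(buffer, 0, read);
                    context.Response.Flush();
                }
            }

            // Only reached once the last byte has been handed to the response stream.
            LogDownloadComplete(context.Request.QueryString["key"]);
        }

        static void LogDownloadComplete(string licenseKey)
        {
            // Record the completed download attempt for this key (database, file, etc.).
        }
    }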
You can use a custom file server that works on either HTTP or FTP and have it send a notification once the client has received the last file fragment.
All other options are problematic; the client might download the file using a download manager, so you cannot even register for any browser event, if there were any.
A custom server application does indeed seem like a solution for this, or possibly some kind of scripting.
A normal HTTP server does not notify you of the end of a connection, but if you generate the output in a CGI/PHP/ASP/* script, you can read the file in the CGI/PHP/ASP/* scripting language and send it to the output. When you reach the end of the file, you do the notification and then end the script.
Done that way, it will only detect fully downloaded files; if the connection gets interrupted halfway, it will not mark the file as downloaded.
A 'CGI script' can be a compiled C program (or any other language, for that matter); compiled code, anyway. A compiled program would give better performance than an interpreted script solution.