This is more of a question because I am experimenting with this.
All over the internet I see how you can update a .txt file. That is all well and good, but let's say I have a .docx, or even an .exe or a .dll file.
If we make a minor change to a file, do we really have to replace (overwrite) the whole file?
Is it possible to "update" the file so that we don't use too much data (over the internet)?
What I am trying to achieve is to create an FTP client with a FileSystemWatcher. This will monitor a certain folder on the computer. If anything changes in this folder (even in subdirectories), then it uploads, deletes, renames, or changes the file. But at the moment I am wondering: if I have, let's say, a 20 MB .exe file or whatever, is it possible to change something in that .exe instead of just overwriting the whole thing... thus sparing some data cap?
In general, it's possible to update the remote file only partially, but not in your case.
What would work:
1) Track the file changes using a filesystem filter driver, which gives you information about what parts of the file have been updated.
2) Use a protocol that allows partial upload or remote modification of the file (e.g. SFTP).
As for your scenario:
Step 1 is not possible with FileSystemWatcher.
Step 2 is not possible with the FTP protocol, which doesn't support modification of file blocks.
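To illustrate the first limitation, here is a minimal sketch (the watched path is just an example) showing that FileSystemWatcher only reports that a file changed, never which bytes changed:

    using System;
    using System.IO;

    class WatcherDemo
    {
        static void Main()
        {
            // The watched path is only an example for this sketch.
            using (var watcher = new FileSystemWatcher(@"C:\WatchedFolder"))
            {
                watcher.IncludeSubdirectories = true;
                watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite | NotifyFilters.Size;

                // The event args carry only the path and the kind of change,
                // never the byte ranges that were modified inside the file.
                watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
                watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
                watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
                watcher.Renamed += (s, e) => Console.WriteLine("Renamed to: " + e.FullPath);

                watcher.EnableRaisingEvents = true;
                Console.WriteLine("Watching... press Enter to quit.");
                Console.ReadLine();
            }
        }
    }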
Since you are experimenting, I can provide some pointers, but I don't know for sure whether the operations below end up as in-place updates or full replacements in the underlying OS calls.
Handle each file type as a separate case. Try the basic types first: a .txt file, then a binary file, etc.
You should keep an entire copy of the current file somewhere, since you "should" compare against the old file to know what changed.
Then, when a change is made to the file, compare it with the old copy. For example, if a text file is 1 MB and the change is only 1 KB, you will need to build a format like
[Text][Offset][Operation]
e.g. [Mrs.Y][40][Delete] then [Mr.X][40][Add]
Then your FTP client should be able to implement this format and make the changes to the local copy on the client.
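A minimal sketch of what one record of such a change format could look like in C# (the class and property names are illustrative only, not part of any existing library):

    using System;
    using System.Collections.Generic;

    // One entry of the [Text][Offset][Operation] change log described above.
    enum ChangeOperation { Add, Delete }

    class FileChange
    {
        public string Text { get; set; }       // the affected characters/bytes
        public long Offset { get; set; }       // position in the old file
        public ChangeOperation Operation { get; set; }

        public override string ToString()
        {
            return string.Format("[{0}][{1}][{2}]", Text, Offset, Operation);
        }
    }

    class Demo
    {
        static void Main()
        {
            // The serialized list below is what would be sent instead of the whole file.
            var changes = new List<FileChange>
            {
                new FileChange { Text = "Mrs.Y", Offset = 40, Operation = ChangeOperation.Delete },
                new FileChange { Text = "Mr.X",  Offset = 40, Operation = ChangeOperation.Add }
            };

            foreach (var change in changes)
                Console.WriteLine(change);
        }
    }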
No, it is not possible to upload only the changes to an .exe file; you have to overwrite it.
@Frederik - It would be possible if FTP supported updating a resource the way HTTP's PUT command does. Try exploring that angle. Let us know if you find something.
I generated a help file (*.chm) using HTML Help Workshop.
But there is one line I need to change every time I compile my solution.
Imagine you already have a complete, finished *.chm file, but when the server produces a new build, the build number doesn't get updated in the *.chm file. Until now I have always deleted the *.chm file and created it again afterwards.
Now I have reached a point where it annoys me to recreate it every time just because the server makes a build. It would be convenient if I could modify the existing *.chm file directly from my C# code.
Is there any way to modify a *.chm file with C# code?
Yes. .chm files are really just an archive of a bunch of HTML files and some other bits to hold it all together.
Download a universal zip/unzip program like 7-Zip and you can right-click (in Windows) your .chm, then choose 7-Zip >> Open Archive and you'll see the contents.
Be careful about monkeying around too much in here, though, since broken links and changed file names will ruin your .chm.
I would agree, though, that modifying your source before running it through HTML Help Workshop is a better option than monkeying with it afterwards.
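If you go the source-modification route, a minimal sketch could look like the following. The paths, the project file name and the {{BUILD}} placeholder are all assumptions for illustration; only hhc.exe itself is the real HTML Help compiler.

    using System.Diagnostics;
    using System.IO;

    class HelpBuilder
    {
        static void Main(string[] args)
        {
            // Assumed paths and placeholder token; adjust to your project layout.
            string sourcePage = @"help\about.html";
            string projectFile = @"help\project.hhp";
            string buildNumber = args.Length > 0 ? args[0] : "0.0.0.0";

            // Patch the build number into the HTML source before compiling.
            string html = File.ReadAllText(sourcePage);
            File.WriteAllText(sourcePage, html.Replace("{{BUILD}}", buildNumber));

            // Re-run the HTML Help compiler on the project file.
            var compiler = new Process();
            compiler.StartInfo.FileName = @"C:\Program Files (x86)\HTML Help Workshop\hhc.exe";
            compiler.StartInfo.Arguments = projectFile;
            compiler.StartInfo.UseShellExecute = false;
            compiler.Start();
            compiler.WaitForExit();
        }
    }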
I have a scenario in my application where I need to upload some files (zip files) from the client to the server, and on the server I want to extract the zip file and replace the files in some other folder with the files I get from extracting the zip.
The files that need to be replaced are mostly DLL files, so one thing I need to ensure is that either all files get replaced or none of them do.
Is there any way in C# to achieve this (like a transaction in SQL)? If anything bad happens while replacing the files (for example, running out of space), every change made to the previous files should be rolled back.
Hope you understand the problem.
Any help?
NTFS allows file system transactions, see https://msdn.microsoft.com/en-us/magazine/cc163388.aspx
Having a quick poke around, the only way I can see you doing this would be through https://msdn.microsoft.com/en-us/magazine/cc163388.aspx, which involves some native code. Otherwise you could use a third-party tool such as http://transactionalfilemgr.codeplex.com/
If you wanted to manage it yourself or go for a simpler approach, I would suggest backing up the existing files somewhere before trying to copy the new files. This could be in another folder or zipped up. Then if the copy fails, you handle this and revert all the files to their original state.
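A minimal sketch of that backup-then-copy approach, assuming all the new files simply live in another folder (the directory names and the flat, non-recursive copy are simplifications):

    using System;
    using System.IO;

    static class SafeReplace
    {
        // Replaces the files in targetDir with those in newDir; on any failure
        // the originals are restored from backupDir. Non-recursive for brevity.
        public static void ReplaceAll(string targetDir, string newDir, string backupDir)
        {
            // 1. Back up the current files.
            Directory.CreateDirectory(backupDir);
            foreach (string file in Directory.GetFiles(targetDir))
                File.Copy(file, Path.Combine(backupDir, Path.GetFileName(file)), true);

            try
            {
                // 2. Copy the new files over the old ones.
                foreach (string file in Directory.GetFiles(newDir))
                    File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
            }
            catch (Exception)
            {
                // 3. Something went wrong (e.g. out of disk space): restore the backup.
                foreach (string file in Directory.GetFiles(backupDir))
                    File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
                throw;
            }
        }
    }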
Whatever you choose, make sure you have plenty of logging so you can see what's happening and if/when something goes wrong :)
I have searched and searched and can't find a solution to my specific problem. I am updating a console app that will now be used for more than one client. We have decided, rather than storing the info in a db (at least for now), to store the clients' info in config files. Each client will have their own configuration file. I need to know how to load any config file from an "unknown" location. All of the examples that I found want me to put in the path of the file. While using my computer, I will know the path, but once it gets pushed to other servers, the path to the file will change.
Working under these conditions, how can I load a config file for any client without knowing the path to the file?
EDIT: The console app is only run on one server, but it is used to go to different clients' websites and crawl their sites. This is why each client has their own config file; it contains the information needed to reach and use their site. We have a task set up to run the app for each client on a timer.
Since everything was being read from the bin folder, I looked at the file properties and changed Copy to Output Directory from Do Not Copy to Copy Always. Now it WILL locate the config files in the bin folder, and the answer to this question pointed me in the right direction for reading my config files.
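For what it's worth, here is a minimal sketch of reading a per-client config file from the executable's own folder, so no absolute path is hard-coded. The "<clientName>.config" naming pattern and the "SiteUrl" setting are just assumptions for illustration:

    using System;
    using System.Configuration;   // reference System.Configuration.dll
    using System.IO;

    class ClientConfigLoader
    {
        // Loads "<clientName>.config" from the folder the exe runs in (e.g. the bin folder).
        public static Configuration Load(string clientName)
        {
            string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                       clientName + ".config");

            var map = new ExeConfigurationFileMap { ExeConfigFilename = path };
            return ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
        }

        static void Main()
        {
            // "ClientA" and "SiteUrl" are hypothetical names for this sketch.
            Configuration config = Load("ClientA");
            Console.WriteLine(config.AppSettings.Settings["SiteUrl"].Value);
        }
    }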
What about writing every piece of code with the assumption that it will be used from another file, say with an include statement? Every file is used by another one until we reach a wrapper file whose only purpose is to define the location of the config file. The application would always be called through such a wrapper file. This wrapper file does whatever is necessary to determine the location of the config file and make it available to the included files; if it knows the user, it can look it up in a table. The key point is that the wrapper files do not move when you move the code from one environment to another. I think this is a useful feature, because we do not want to edit the code each time we move it between environments. Another advantage of this approach is that it applies to all environments, even very limited ones. For example, say you only give a programmer a folder as a sandbox to experiment with the code, and this programmer does not have access to the /bin or /etc directory. The proposed approach still works in this case, because the programmer can place the config files wherever he wants in his local wrapper files. This issue is discussed here: How to organize code so that we can move and update it without having to edit the location of the configuration file?
I am working on a small application to allow me to modify files and version each file before each change. What I would like the app to do is uniquely mark each file so that whenever the same file is opened up, the history for that particular file can be pulled back up. I am not using any of the big version control tools for this. How do I do this programmatically, please?
Simple solution: use a version control system which already exists (e.g. Git). But if you really want to do this yourself, then try this.
Each time you create a new version, copy the previous version of the file into a separate hidden directory and keep a config file in that directory which holds the checksum of that file. The checksum will "more than likely" be unique since it's a hashed value of the file (each time the file changes, the checksum will be different; you need to calculate the checksum yourself).
When you open a file, just check whether that config file is in the directory and compare its checksum with the checksum of what's already open. If they are the same, then you are on the same file. That's how it works.
You could use checksums to optimise it: if a user goes into a file, changes things, changes them back to the way they were and saves, the checksum should come out the same (unless you include modified date and time, etc.).
Each folder should have a name which follows a pattern (filename.vN.N, e.g. someTextFile.txt.v1.0); then you will be able to figure out what the directory you are navigating to in the history should be called.
Another approach would be to simply copy the file and append some tag onto the end of it (a checksum maybe? a version number?) so that you wouldn't need extra folders.
Yet another approach would be to name the files after their recorded checksums and store the history of versions (along with the corresponding checksums) in a separate config file, then refer to it when you want to figure out what the file you want to access is called. So each version will be referred to by its own checksum (like in Git).
So, to sum up: each file version would be stored somewhere; you will be able to check whether two versions are the same (so you can optimise by not storing multiple copies with no changes in them and wasting space); and you will be able to dynamically determine where each version is and get access to it.
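Here is a minimal sketch of the checksum-plus-hidden-folder idea, assuming a ".history" directory next to the file (the folder name and the naming scheme are just illustrative):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class SimpleVersioner
    {
        // Computes a SHA-256 checksum so two versions of a file can be compared.
        public static string Checksum(string path)
        {
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(path))
                return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        }

        // Copies the file into a hidden ".history" folder, named after its checksum,
        // but only when that exact content has not been stored before.
        public static void SaveVersion(string path)
        {
            string historyDir = Path.Combine(Path.GetDirectoryName(path), ".history");
            DirectoryInfo dir = Directory.CreateDirectory(historyDir);
            dir.Attributes |= FileAttributes.Hidden;

            string versionPath = Path.Combine(historyDir,
                Path.GetFileName(path) + "." + Checksum(path));

            if (!File.Exists(versionPath))
                File.Copy(path, versionPath);
        }
    }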
Hope it gives you a bit more understanding of how to get started.
I am developing a WinForms application using C# 3.5. I have a requirement to save a file on a temporary basis. Let's just say, for argument's sake, that it's for a short duration of time while the user is viewing a particular tab in the app. After the user navigates away from the tab I am free to delete this file. Each time the user navigates to the tab (which is typically only done once), the file will be created (using a GUID name).
To get to my question - is it considered good practice to save a file to the temp directory? I'll be using the following logic:
Path.GetTempFileName();
My intention would be to create the file and leave it without deleting it. I'm going to assume here that the Windows OS cleans up the temp directory at some interval based on % of available space remaining.
Note: I had considered using the IsolatedStorage option to create the file and manually delete it when I was finished with it, i.e. when the user navigates away from the tab. However, it's not going so well, as I have a requirement to get the absolute or relative path to the file and this does not appear to be a straightforward/safe chore when interacting with IsolatedStorage. My opinion is that it's just not designed to allow this.
I write temp files quite frequently. In my humble opinion the key is to clean up after oneself by deleting unneeded temp files.
In my opinion, it's a better practice to actually delete the temporary files when you don't need them. Consider the following remarks from Path.GetTempFileName() Method:
The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files.
The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files.
Also, you should be aware of the following hotfix for Windows 7 and Windows Server 2008 R2.
Creating temp files in the temp directory is fine. It is considered good practice to clean up any temporary file when you are done using it.
Remember that temp files shouldn't persist any data you need on a long-term basis (defined as across user sessions). Examples of data needed "long term" are user settings or a saved data file.
Go ahead and save there, but clean up when you're done (closing the program). Keeping them until the end also allows re-use.
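A minimal sketch of the create-on-enter / delete-on-leave pattern; the class and method names are placeholders for whatever tab events the form actually exposes:

    using System;
    using System.IO;

    // Creates the temp file when the tab is shown and deletes it when the
    // user navigates away. Wire these methods to the form's real tab events.
    class TabTempFile : IDisposable
    {
        private string tempFilePath;

        public void OnTabEntered()
        {
            // GetTempFileName creates a uniquely named, zero-byte file in %TEMP%.
            tempFilePath = Path.GetTempFileName();
            File.WriteAllText(tempFilePath, "data shown while the tab is open");
        }

        public void OnTabLeft()
        {
            if (tempFilePath != null && File.Exists(tempFilePath))
            {
                File.Delete(tempFilePath);
                tempFilePath = null;
            }
        }

        public void Dispose()
        {
            // Safety net: also clean up when the form or program shuts down.
            OnTabLeft();
        }
    }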