I have a scenario in my application where I need to upload some files (zip files) from the client to the server. On the server I want to extract the zip file and move the extracted files into some other folder, replacing the files that are already there.
The files that need to be replaced are mostly DLL files, so one thing I need to ensure is that either all of the files get replaced or none of them do.
Is there any way in C# to achieve this (like a transaction in SQL)? If anything bad occurs while replacing the files (for example, running out of disk space), every change made to the previous files should be rolled back.
Hope you understand the problem. Any help?
NTFS allows file system transactions, see https://msdn.microsoft.com/en-us/magazine/cc163388.aspx
Having a quick poke around, the only way I can see you doing this is through https://msdn.microsoft.com/en-us/magazine/cc163388.aspx, which involves some native code. Otherwise you could use a third-party tool such as http://transactionalfilemgr.codeplex.com/
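If you do go the native route, the core of it looks roughly like this. This is a hedged P/Invoke sketch of TxF (Transactional NTFS), with error handling omitted and placeholder paths; note that Microsoft has since deprecated TxF, so treat it as illustrative only:

using System;
using System.IO;
using System.Runtime.InteropServices;

static class TxF
{
    [DllImport("KtmW32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern IntPtr CreateTransaction(IntPtr securityAttributes, IntPtr uow,
        uint createOptions, uint isolationLevel, uint isolationFlags, uint timeout,
        string description);

    [DllImport("KtmW32.dll", SetLastError = true)]
    public static extern bool CommitTransaction(IntPtr transaction);

    [DllImport("KtmW32.dll", SetLastError = true)]
    public static extern bool RollbackTransaction(IntPtr transaction);

    [DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool CopyFileTransacted(string existingFile, string newFile,
        IntPtr progressRoutine, IntPtr progressData, ref bool cancel, uint copyFlags,
        IntPtr transaction);

    [DllImport("Kernel32.dll", SetLastError = true)]
    public static extern bool CloseHandle(IntPtr handle);
}

// Placeholder paths for the example.
string extractedDir = @"C:\upload\extracted";
string targetDir = @"C:\app\bin";

// Copy every DLL inside one kernel transaction: all of them land, or none do.
IntPtr tx = TxF.CreateTransaction(IntPtr.Zero, IntPtr.Zero, 0, 0, 0, 0, "dll swap");
bool cancel = false, ok = true;
foreach (string file in Directory.GetFiles(extractedDir, "*.dll"))
    ok &= TxF.CopyFileTransacted(file, Path.Combine(targetDir, Path.GetFileName(file)),
        IntPtr.Zero, IntPtr.Zero, ref cancel, 0, tx);
if (ok) TxF.CommitTransaction(tx); else TxF.RollbackTransaction(tx);
TxF.CloseHandle(tx);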
If you wanted to manage it yourself or go for a simpler approach, I would suggest backing up the existing files somewhere before trying to copy the new files. This could be in another folder or zipped up. Then if the copy fails, you handle this and revert all the files to their original state.
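For example, a minimal sketch of that backup-and-revert approach, where extractedDir and targetDir are placeholder paths:

using System;
using System.IO;

string extractedDir = @"C:\upload\extracted";   // placeholder paths
string targetDir = @"C:\app\bin";

string backupDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
Directory.CreateDirectory(backupDir);
try
{
    // 1. Back up the files that are about to be overwritten.
    foreach (string file in Directory.GetFiles(targetDir, "*.dll"))
        File.Copy(file, Path.Combine(backupDir, Path.GetFileName(file)));

    // 2. Copy the new files over the old ones.
    foreach (string file in Directory.GetFiles(extractedDir, "*.dll"))
        File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
}
catch (Exception)
{
    // 3. Something went wrong: put every backed-up file back, then rethrow.
    foreach (string file in Directory.GetFiles(backupDir))
        File.Copy(file, Path.Combine(targetDir, Path.GetFileName(file)), true);
    throw;
}
finally
{
    Directory.Delete(backupDir, true);
}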
Whatever you choose, make sure you have plenty of logging so you can see what's happening and if/when something goes wrong :)
I generated a help file (*.chm) using HTML Help Workshop.
But there is one line I need to change every time I compile my solution.
Imagine you have a complete, finished *.chm file, but when a server builds a new version, the build number doesn't get updated in the *.chm file. Until now I have always deleted the *.chm file and recreated it afterwards.
Now I've reached the point where it annoys me that I have to recreate it every time the server makes a build. It would be convenient if I could modify the existing *.chm file directly from my C# code.
Is there any possibility to modify a *.chm file with C# code?
Yes. .chm files are really just an archive of a bunch of HTML files and some other bits to hold it all together.
Download a universal zip/unzip program like 7-Zip and you can right-click (in Windows) your .chm, then choose 7-Zip >> Open Archive, and you'll see the contents.
Be careful about monkeying around too much in here though since broken links and changed file names will ruin your .chm.
I would agree, though, that modifying your source before running it through HTML Help Workshop is a better option than monkeying with it afterwards.
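If you want to automate that, here is a hedged sketch: patch a build-number token in the HTML source, then recompile the .chm by shelling out to the HTML Help compiler (hhc.exe). The file paths, the $BUILD$ token, and buildNumber are all assumptions for the example:

using System.Diagnostics;
using System.IO;

string buildNumber = "1.2.3.456";   // placeholder: comes from your build server

// Replace the placeholder token in the source page that shows the build number.
string htmlPath = @"Help\version.html";
File.WriteAllText(htmlPath, File.ReadAllText(htmlPath).Replace("$BUILD$", buildNumber));

// Recompile the .chm from the help project file with hhc.exe.
using (var hhc = Process.Start(
    @"C:\Program Files (x86)\HTML Help Workshop\hhc.exe", @"Help\project.hhp"))
{
    hhc.WaitForExit();
}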
I'm supposed to get multiple files with the same extension (for example, an archive that's split into several archives: rar, r00, r01, etc).
I'm trying to find a solution where, if writing one of the file streams fails, all the previously written files get deleted.
Just like a transactional file stream writer.
I've bumped into the .NET Transactional File Manager project, which seems like just what I need, except it works with file paths rather than streams.
At this point I only see two options:
Keeping a list of successful file writes; if a later write fails, I'll go over the list and delete them all (see the sketch below).
Writing all the files to %TEMP% or somewhere similar via FileStream; then, after all files have been written successfully, I'll use the Transactional File Manager (mentioned above) to move the files to the desired location.
Note that I have to work with streams.
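A minimal sketch of option 1 might look like this (the class and its names are made up for the example):

using System;
using System.Collections.Generic;
using System.IO;

sealed class MultiFileWriter : IDisposable
{
    private readonly List<string> _written = new List<string>();
    private bool _committed;

    // Hands back the stream so the caller can keep working with streams.
    public Stream CreateFile(string path)
    {
        Stream stream = new FileStream(path, FileMode.Create, FileAccess.Write);
        _written.Add(path);
        return stream;
    }

    public void Commit() { _committed = true; }

    public void Dispose()
    {
        if (_committed) return;
        // Not committed: roll back by deleting everything written so far
        // (assumes the caller has already closed the individual streams).
        foreach (string path in _written)
        {
            try { File.Delete(path); } catch (IOException) { /* log and continue */ }
        }
    }
}

Wrap the whole batch in a using block and call Commit() only after the last stream has been written and closed; disposing without committing rolls everything back.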
Which of the two options is better in your opinion?
Is there any better recommendation or idea for doing this?
Thanks
Edit:
Another option I bumped into is using AlphaFS just like in the following example.
Any thoughts on this?
I ended up using AlphaFS.
Using it just like in this example.
It works perfectly and does exactly what I needed.
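For anyone landing here later, the pattern in that example looks roughly like this (hedged from memory; check the AlphaFS documentation for the exact overloads):

using System.Transactions;
using Alphaleonis.Win32.Filesystem;

using (var scope = new TransactionScope())
{
    var kt = new KernelTransaction(Transaction.Current);

    // The transacted overloads take the KernelTransaction first;
    // either every copy commits, or none of them do.
    File.Copy(kt, @"C:\temp\archive.r00", @"C:\target\archive.r00");
    File.Copy(kt, @"C:\temp\archive.r01", @"C:\target\archive.r01");

    scope.Complete();   // skipping this (or an exception) rolls everything back
}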
Thanks for all the comments.
This is more of an open-ended question, because I am experimenting with this.
All over the internet I see how you can update a .txt file. Well, that is all good and well, but let's say I have a .docx, or even an .exe or a .dll file.
If we make a minor change to a file, do we really have to replace (overwrite) the whole file?
Is it possible to "update" the file so that we don't use too much data (over the internet)?
What I am trying to achieve is to create an FTP client with a FileSystemWatcher. This will monitor a certain folder on the computer. If anything changes in this folder (even in subdirectories), then it uploads, deletes, renames, or changes the file. But at the moment I am wondering: if I have, let's say, a 20 MB .exe file or whatever, is it possible to change something in that .exe instead of just overwriting the whole thing, and thus spare some data cap?
In general, it's possible to update the remote file only partially, but not in your case.
What would work:
1) Track the file changes using a filesystem filter driver, which gives you information about which parts of the file have been updated.
2) Use a protocol that allows partial upload or remote modification of the file (e.g. SFTP; see the sketch below).
As for your scenario:
Step 1 is not possible with FileSystemWatcher.
Step 2 is not possible with the FTP protocol, which doesn't support modification of file blocks.
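To illustrate point 2: if you could switch to SFTP, a partial remote write might look roughly like this with the SSH.NET library (the host, credentials, path, offset, and changedBytes are all placeholders):

using System.IO;
using Renci.SshNet;

long offset = 0x4000;                            // placeholder: where the changed block starts
byte[] changedBytes = new byte[] { 0x90, 0x90 }; // placeholder: the modified bytes

using (var client = new SftpClient("example.com", "user", "password"))
{
    client.Connect();

    // SFTP lets you seek inside the remote file and rewrite just one range,
    // instead of re-uploading the whole 20 MB.
    using (var remote = client.Open("/files/app.exe", FileMode.Open, FileAccess.Write))
    {
        remote.Seek(offset, SeekOrigin.Begin);
        remote.Write(changedBytes, 0, changedBytes.Length);
    }
}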
Since you are experimenting, I can provide some pointers. But I don't know for sure whether the operations below end up as in-place updates or full replacements in the underlying OS calls.
Have different cases for each file type. Try with the basic types first: a txt file, then a binary file, etc.
You should keep an entire copy of the current file somewhere, since you "should" compare against the old file to know what changed.
Then, when a change is made to the file, compare it with the old one. E.g., in a 1 MB text file where the change is only 1 KB, you will need to build a format like:
[Text][Offset][Operation]
e.g. [Mrs.Y][40][Delete] then [Mr.X][40][Add]
Then your FTP client should be able to implement this format and make the changes to the local copy on the client.
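An illustrative sketch of applying one record of that format to the local copy (all names are invented for the example):

static class ChangeFormat
{
    public enum Operation { Add, Delete }

    // Apply a single [Text][Offset][Operation] record to the file's contents.
    public static string Apply(string content, string text, int offset, Operation op)
    {
        return op == Operation.Delete
            ? content.Remove(offset, text.Length)   // e.g. [Mrs.Y][40][Delete]
            : content.Insert(offset, text);         // e.g. [Mr.X][40][Add]
    }
}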
No, it is not possible to upload only the changes to an .exe file over FTP; you have to overwrite the whole file.
@Frederik - it would be possible if FTP supported updating a resource the way HTTP's PUT command does. Try exploring that angle. Let us know if you find something.
I am developing a WinForms application using C# 3.5. I have a requirement to save a file on a temporary basis. Let's just say, for argument's sake, that it's for a short duration of time while the user is viewing a particular tab in the app. After the user navigates away from the tab I am free to delete this file. Each time the user navigates to the tab (which is typically only done once), the file will be created (using a GUID name).
To get to my question - is it considered good practice to save a file to the temp directory? I'll be using the following logic:
Path.GetTempFileName();
My intention would be to create the file and leave it without deleting it. I'm going to assume here that the Windows OS cleans up the temp directory at some interval based on % of available space remaining.
Note: I had considered using the IsolatedStorage option to create the file and manually delete it when I was finished with it, i.e. when the user navigates away from the tab. However, that's not going so well, as I have a requirement to get the absolute or relative path to the file, and that does not appear to be a straightforward/safe chore when interacting with IsolatedStorage. My opinion is that it's just not designed to allow this.
I write temp files quite frequently. In my humble opinion, the key is to clean up after oneself by deleting unneeded temp files.
In my opinion, it's a better practice to actually delete the temporary files when you don't need them. Consider the following remarks from Path.GetTempFileName() Method:
The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files.

The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files.
Also, you should be aware of the following hotfix for Windows 7 and Windows Server 2008 R2.
Creating temp files in the temp directory is fine. It is considered good practice to clean up any temporary file when you are done using it.
Remember that temp files shouldn't persist any data you need on a long-term basis (defined as across user sessions). Examples of data needed "long term" are user settings or a saved data file.
Go ahead and save there, but clean up when you're done (when closing the program). Keeping the files until the end also allows re-use.
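A minimal sketch of that pattern, where data stands in for whatever the tab needs to persist:

using System;
using System.IO;

byte[] data = new byte[0];   // placeholder: the tab's payload

// GUID-named file in the temp directory, created when the user opens the tab.
string tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString() + ".tmp");
File.WriteAllBytes(tempPath, data);
try
{
    // ... use tempPath while the tab is visible ...
}
finally
{
    // When the user navigates away (or the app closes): best-effort cleanup.
    try { File.Delete(tempPath); } catch (IOException) { /* log and move on */ }
}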
I have realised that by using the Amazon S3 service directly, I can save myself a lot of money. Instead of buying a client like GoodSync or Jungle Disk I thought it would be interesting to create my own Windows syncing application, which would sync my files to S3.
I have discovered that I can use FileSystemWatcher to monitor for changes to files and directories, but I am looking for the theory behind how other services like Dropbox index their files. Things like comparing the file size of a file with the size recorded in an index somewhere on the client PC, then using this information to determine whether to sync or not.
I am using C# and references to different libraries or code samples I could use would be helpful, but I am mainly looking for the best way to index files and for someone to point me in the right direction.
Thanks
I've gone down this path myself. In fact, now that Mozy has dropped their unlimited plan and Carbonite chooses NOT to back up certain files (like 3GP files and *.dat files) unless you routinely go in and manually add them, I am very disgruntled with online backups.
But your question was on syncing. Dropbox does it the best. But it's expensive. But I'm not sure S3 would be any cheaper.
Anyway, you will have a lot of hurdles. In my experiences, the problems I ran into are:
1) Propagating deletes
2) FileSystemWatcher simply missing events, such as when files are rapidly added to a folder and then deleted
3) etc.
Now some ideas on how I would tackle this again:
1) Keep a small SQLite db of file names/paths locally
2) Copy files to a tmp directory before sending to S3.
3) On file changes/updates/deletions/etc., store that metadata in SQLite (see the sketch below)
Anyway, just some ideas.
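To make ideas 1 and 3 concrete, here's a hedged sketch using the System.Data.SQLite package (the schema, file name, and path variable are just an illustration):

using System.Data.SQLite;
using System.IO;

string path = @"C:\watched\file.bin";   // placeholder: path from the watcher event

using (var conn = new SQLiteConnection("Data Source=syncindex.db"))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = @"CREATE TABLE IF NOT EXISTS files (
                              path TEXT PRIMARY KEY,
                              size INTEGER,
                              last_write_utc TEXT)";
        cmd.ExecuteNonQuery();
    }

    // In a FileSystemWatcher handler: record what we last saw for the file.
    // Comparing size/timestamp against this row tells you whether to sync it.
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = @"INSERT OR REPLACE INTO files (path, size, last_write_utc)
                            VALUES (@path, @size, @mtime)";
        cmd.Parameters.AddWithValue("@path", path);
        cmd.Parameters.AddWithValue("@size", new FileInfo(path).Length);
        cmd.Parameters.AddWithValue("@mtime", File.GetLastWriteTimeUtc(path).ToString("o"));
        cmd.ExecuteNonQuery();
    }
}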