I want to rename large files in C#.
If I have a couple of large files and I use the System.IO.File.Move function, my files get copied under the new name and the old ones get deleted. That takes a very long time with large files.
I couldn't find a good solution. Does anyone have an idea that works for large files?
Whatever solution you choose, if you move a file between different logical/physical disks there is nothing you can do about the cost: moving the data simply takes time.
Have you tested your assumption? I did, and I found that if you use System.IO.File.Move and the target location is on the same logical disk (volume) as the source, the file is just renamed. It doesn't take a long time.
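For reference, a quick timing check along those lines, as a minimal sketch; the paths are hypothetical and assume source and target are on the same volume:

```csharp
// Minimal timing check: on the same volume, File.Move only updates directory
// metadata, so it should return almost instantly even for a huge file.
// The paths below are hypothetical.
using System;
using System.Diagnostics;
using System.IO;

var sw = Stopwatch.StartNew();
File.Move(@"D:\videos\old-name.mkv", @"D:\videos\new-name.mkv");
sw.Stop();
Console.WriteLine($"Move took {sw.ElapsedMilliseconds} ms");
```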
You can create a new hardlink, then delete the original. This will only affect filesystem metadata, and not copy the file around.
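A sketch of that approach via P/Invoke, assuming NTFS and that both names are on the same volume; CreateHardLink is the Win32 call, and the wrapper class and method names are just for illustration:

```csharp
using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;

static class HardLinkRename
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CreateHardLink(string newFileName, string existingFileName, IntPtr securityAttributes);

    public static void Rename(string oldPath, string newPath)
    {
        // Create a second name pointing at the same data; no bytes are copied.
        if (!CreateHardLink(newPath, oldPath, IntPtr.Zero))
            throw new Win32Exception();

        // Drop the old name; the data stays where it is.
        File.Delete(oldPath);
    }
}
```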
I am trying to figure out how to store data that can be easily/heavily edited.
Reading data from a big single file isn't really a problem. The problem starts when I need to make changes to that file.
Let's say I have a big log file to which I always append a string. The filesystem needs to recreate the whole file since it has changed, and the bigger the file, the heavier the performance cost.
What I could do is simply create a new file for each log. Creating, removing and editing would be more efficient, until I want to copy all these files, let's say onto a new SSD.
Reading directories and copying thousands of files, even small ones, hits performance hard.
So maybe bundle all files into a single file/archive?
But then, AFAIK, an archive like .zip ... also needs to be recreated when something changes.
Is there a good or maybe even simple solution to this?
How does a single-file database like SQLite handle this?
Note: I am using C#.
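For the append case in particular, a minimal sketch assuming a FileStream opened in append mode (the path in the usage comment is hypothetical); FileMode.Append writes only the new bytes at the end rather than rewriting the existing contents:

```csharp
using System;
using System.IO;
using System.Text;

static class LogWriter
{
    // FileMode.Append positions the stream at the end of the file, so only
    // the new bytes are written; the existing contents are left untouched.
    public static void Append(string path, string line)
    {
        using var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read);
        byte[] bytes = Encoding.UTF8.GetBytes(line + Environment.NewLine);
        stream.Write(bytes, 0, bytes.Length);
    }
}

// usage (hypothetical path): LogWriter.Append(@"C:\data\app.log", "another entry");
```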
I have an ASP.NET website that stores large numbers of files such as videos. I want an easy way to allow the user to download all the files in a single package. I was thinking about creating ZIP files dynamically.
All the examples I have seen involve creating the file before it is downloaded, but potentially terabytes of information will be downloaded, and therefore the user would have a long wait. Apparently ZIP files store all the information regarding what is in the ZIP file at the end of the file.
My idea is to dynamically create the file as it's downloaded. This way I could let the user click download, the download would start, and it would not require any space on the server for pre-packaging, since it would copy things over uncompressed, sequentially. The final part of the file would contain the information on the contents of what has been downloaded.
Has anyone had any experience of this? Does anyone know a better way of doing this? At the moment I can't see any pre-made utilities for doing this, but I believe it will work. If it doesn't exist, then I'm thinking that I will have to read the ZIP file format specification and write my own code... something that will take more time than I was intending to spend on this.
https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
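For what it's worth, a sketch of streaming a ZIP straight into the response using the built-in System.IO.Compression.ZipArchive, assuming ASP.NET Core minimal APIs; the route, folder, and file names are hypothetical, and entries are stored uncompressed so the copy is sequential:

```csharp
using System.IO;
using System.IO.Compression;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/download-all", async (HttpContext context) =>
{
    context.Response.ContentType = "application/zip";
    context.Response.Headers["Content-Disposition"] = "attachment; filename=\"all-files.zip\"";

    // ZipArchiveMode.Create writes each entry as it is added; the central
    // directory (the "information at the end") is only written when the
    // archive is disposed, after all entries have been streamed.
    using var archive = new ZipArchive(context.Response.Body, ZipArchiveMode.Create, leaveOpen: true);
    foreach (var path in Directory.EnumerateFiles(@"D:\site-files\videos"))   // hypothetical folder
    {
        var entry = archive.CreateEntry(Path.GetFileName(path), CompressionLevel.NoCompression);
        using var entryStream = entry.Open();
        using var fileStream = File.OpenRead(path);
        await fileStream.CopyToAsync(entryStream);
    }
    // Note: ZipArchive does some synchronous writes to the output stream, which
    // may require enabling AllowSynchronousIO depending on the server settings.
});

app.Run();
```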
I'm making an XNA game, which uses a lot (currently ~2800) of small resource files. It has become a problem to move them around from place to place unarchived, so I thought maybe I could just zip them and make the game unzip them automatically, into memory, preferably. I don't need the writing capability yet, right now only reading.
Is there an easy way to unzip an archive into memory and access those files just as easily as regular files on disk?
I've been reading some similar questions and I see many people say that the OS (Windows in my case) can handle file caching better than a RAM drive. I'm just going for unzipping and reading files for now, but in the future I might need to modify or create new files, and I'd like it to be quick and seamless for the user. Maybe I should take a different approach to solving my current problem, taking into account my future goal?
I haven't personally tried this, but if you want to be able to zip/unzip in memory, you could just use a MemoryStream and pass that into a library (e.g. https://github.com/icsharpcode/SharpZipLib). One thing you'll probably need to keep in mind: are you just moving one bottleneck to a different bottleneck?
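A minimal sketch along those lines, using the built-in System.IO.Compression types rather than SharpZipLib; the archive path and entry name in the usage comment are hypothetical:

```csharp
using System.IO;
using System.IO.Compression;

static class ResourcePack
{
    // Decompress a single entry of the archive into memory and return its bytes.
    public static byte[] Load(string archivePath, string entryName)
    {
        using var archive = ZipFile.OpenRead(archivePath);
        var entry = archive.GetEntry(entryName)
            ?? throw new FileNotFoundException($"Entry not found: {entryName}");

        using var entryStream = entry.Open();
        using var buffer = new MemoryStream();
        entryStream.CopyTo(buffer);       // unzip into memory, not onto disk
        return buffer.ToArray();
    }
}

// usage (hypothetical names): byte[] bytes = ResourcePack.Load("resources.zip", "textures/tree01.png");
```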
You could also try something like the sprite approach used in HTML. You combine all your zips into one file, with an index of where in the file each one is. Then you move your FileStream.Position to the location of the resource you are looking for, read the amount you need, then do what you need with it. You'd need to make sure that if you rebuild any of them, something rebuilds all your combined files and indexes, etc. Then you would only be copying one file around; it just so happens that inside that file you have ~2800 smaller segments of interest.
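A sketch of reading one segment back out of such a combined file, assuming the index (offset and length per resource) was written when the pack was built; the class name is just for illustration:

```csharp
using System.IO;

static class PackReader
{
    // Read a single segment out of the combined pack file; offset and length
    // come from the index that was written when the pack was built.
    public static byte[] ReadSegment(string packPath, long offset, int length)
    {
        using var stream = new FileStream(packPath, FileMode.Open, FileAccess.Read, FileShare.Read);
        stream.Position = offset;                       // jump to the segment
        var data = new byte[length];
        int read = 0;
        while (read < length)                           // Read may return fewer bytes than requested
        {
            int n = stream.Read(data, read, length - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
        return data;
    }
}
```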
I implemented a RAMDisk in my C# application, and everything is going great, except that I need to back up its contents regularly because it is volatile. I have been battling with AlphaVSS for Shadow Copy backups for a week, and then someone informed me that VSS does not work on a RAMDisk.
The contents located on the RAMDisk (world files for Minecraft) are very small, but there can be hundreds of them. The majority of them are .dat files only a few hundred bytes in size, and there are other files that are 2-8 MB each.
I posted about this yesterday Here, and the suggested solution was to use a FileStream and save the data out of it. I just read in another Stack Overflow question that this is a horrible idea for binary data, so I am looking for a better approach to back up all of these little files, some of which might be in use.
I suggest you first zip all the small files together, then back them up to a location.
ref:
zip library: http://www.icsharpcode.net/opensource/sharpziplib/
use System.IO.File.Copy to copy the packed zip.
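A minimal sketch of that, using the built-in ZipFile type rather than SharpZipLib; the RAMDisk and backup paths are hypothetical, and files that are locked while in use may still make the zip step fail:

```csharp
using System.IO;
using System.IO.Compression;

string ramDiskPath = @"R:\worlds";                          // hypothetical RAMDisk folder
string zipPath     = Path.Combine(Path.GetTempPath(), "worlds-backup.zip");
string backupPath  = @"D:\backups\worlds-backup.zip";       // hypothetical destination

if (File.Exists(zipPath))
    File.Delete(zipPath);

ZipFile.CreateFromDirectory(ramDiskPath, zipPath);          // zip all the small files together
File.Copy(zipPath, backupPath, overwrite: true);            // then copy the single archive out
```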
Say I have thousands of files. Is it better to store them all in one folder, or is it better to use subfolders?
Which is better for a C# program locating and retrieving files, from a performance point of view?
Thanks
I would imagine that if you always know the path to a file, e.g. path = (configuredRoot + path + filename), retrieving a file should cost the same for any path. If you have to search for files recursively, having them spread across folders would obviously slow down the process of finding them.
Assuming that the path is known and a search of the directory contents is performed to find the next subdirectory/the desired file, using subdirectories would be more efficient from an asymptotic point of view, much the same way that binary search trees give results much faster than linked lists in the worst case scenario. I don't know if my assumption about the file system is correct, though.
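A sketch contrasting the two cases; the root, subfolder, and file names are hypothetical:

```csharp
using System.IO;
using System.Linq;

string configuredRoot = @"C:\data";
string fileName = "item-042.bin";

// Known path: a single open call, independent of how many files exist.
string directPath = Path.Combine(configuredRoot, "bucket-42", fileName);
byte[] direct = File.ReadAllBytes(directPath);

// Unknown location: the search may have to enumerate thousands of entries
// before it finds (or fails to find) the file.
string? found = Directory
    .EnumerateFiles(configuredRoot, fileName, SearchOption.AllDirectories)
    .FirstOrDefault();
```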