What is the most efficient way to extract all the files from a zip file (and store them in a dictionary file_name->contents) using DotNetZip? The zip is in a slow network location, so I want to make sure it is (a) downloaded and (b) decompressed only once.
There is not much to do here, then:
1) download the file
2) unzip it locally
You need step (1) to avoid an expensive permission check on every network access.
Just one more point: make sure you download/unzip to a location where the current user has read/write permission.
For example, it could be:
var path = Path.Combine(
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
APP_NAME);
which on Windows 7 resolves to C:\ProgramData\APP_NAME
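For the original question (reading every entry into a file name -> contents dictionary with DotNetZip), a minimal sketch along these lines, assuming the archive has already been copied to a local path (the method name is just for illustration):

using System.Collections.Generic;
using System.IO;
using Ionic.Zip; // DotNetZip

// Reads a locally downloaded zip once and returns file name -> contents.
static Dictionary<string, byte[]> ReadAllEntries(string localZipPath)
{
    var result = new Dictionary<string, byte[]>();
    using (var zip = Ionic.Zip.ZipFile.Read(localZipPath))
    {
        foreach (ZipEntry entry in zip)
        {
            if (entry.IsDirectory) continue;
            using (var ms = new MemoryStream())
            {
                entry.Extract(ms);                     // decompress this entry into memory
                result[entry.FileName] = ms.ToArray();
            }
        }
    }
    return result;
}

This way the archive is downloaded once and decompressed once, and everything afterwards is served from the dictionary.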
I have this C# code, which unzips a zip file:
ZipFile.ExtractToDirectory(_downloadPath, _extractPath);
To verify the download, I compare file sizes. But how do we ensure the extraction was successful? The result could be corrupted (the extraction process might stop halfway). Can I compare file counts?
I suggest you compare the MD5 hashes of the files inside the archive with those of the extracted files. Though it is definitely not the fastest approach, this way you'll be 100% sure the data is not corrupted.
You can find how to get the MD5 of a file inside an archive here:
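As a sketch of that idea (assuming System.IO.Compression, since the question uses ZipFile.ExtractToDirectory; the method and variable names are illustrative), hash each entry's stream and the corresponding extracted file, then compare:

using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Security.Cryptography;

// Returns true if every file entry in the archive matches its extracted copy.
static bool ExtractionLooksComplete(string zipPath, string extractPath)
{
    using (var archive = ZipFile.OpenRead(zipPath))
    using (var md5 = MD5.Create())
    {
        foreach (var entry in archive.Entries.Where(e => e.Name != "")) // skip directory entries
        {
            var extractedFile = Path.Combine(extractPath, entry.FullName);
            if (!File.Exists(extractedFile)) return false;

            byte[] archiveHash, diskHash;
            using (var s = entry.Open()) archiveHash = md5.ComputeHash(s);
            using (var f = File.OpenRead(extractedFile)) diskHash = md5.ComputeHash(f);

            if (!archiveHash.SequenceEqual(diskHash)) return false;
        }
    }
    return true;
}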
We have an ASP.NET MVC 4 app in which our users can upload files to a folder.
Now we want to limit the size of this folder so it doesn't grow uncontrolled.
To do this we need to track the current size of the folder, so that when a file is about to be uploaded we can check whether the size limit would be exceeded and cancel the upload.
The problem is we fear this could slow down our upload process considerably as the number of files in the directory grows.
We could use DirectoryInfo to build a method that retrieves the folder size, or we could store the size of each uploaded file in the database (we are already storing their paths, as they are related to other elements of our business model) and compute the folder size by summing the stored values.
Which method will be better and faster?
I vote for storing the folder size in the DB. If you use DirectoryInfo, it could return incorrect info while someone is uploading a file. Storing the size in the DB will give you the exact current number regardless of whether anyone is uploading files.
If you just need the folder size, I don't think getting it from the database is a good idea: after every upload you must update the database, and for every limit check you must query it again. A method that computes the size via DirectoryInfo is simpler and performs well. See the sketch below.
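For comparison, a minimal DirectoryInfo-based size check could look like this (the folder path and the variables in the usage comment are placeholders):

using System.IO;
using System.Linq;

// Sums the sizes of all files under the upload folder, including subfolders.
static long GetFolderSize(string folderPath)
{
    return new DirectoryInfo(folderPath)
        .EnumerateFiles("*", SearchOption.AllDirectories)
        .Sum(f => f.Length);
}

// Usage: reject the upload if it would push the folder over the limit.
// if (GetFolderSize(uploadFolder) + incomingFileLength > maxFolderSizeBytes) { /* cancel upload */ }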
Well, the speed of enumerating files depends on how many files are in the folder.
Do you know this figure?
You can run this from the command line to create 5000 files:
For /L %i in (1,1,5000) do fsutil file createnew A%i.tmp 12000
And then make a performance test.
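For example, you could time the DirectoryInfo approach against the generated files with a Stopwatch (the path is a placeholder, and GetFolderSize refers to the sketch above):

using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();
long size = GetFolderSize(@"C:\temp\upload-test");  // folder filled by the fsutil loop above
sw.Stop();
Console.WriteLine("Size: {0} bytes, enumerated in {1} ms", size, sw.ElapsedMilliseconds);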
My C# application downloads a .zip that contains at least one .dcm file.
After decompression I get something like:
download/XXXX/YYYYYY/ZZZZZZ/file.dcm
I don't know the names of these intermediate X, Y, Z folders, nor how many of them exist, but I'm certain that at least one .dcm file exists at the end of the path.
How can I get the full path of the folders between download and the folder with the .dcm files? (Assume a Windows filesystem and .NET Framework 4.0.)
This will give you a list of all the files under the download folder that match your filename:
Directory.GetFiles("C:\\path_to_download_folder", "file.dcm", SearchOption.AllDirectories);
You could then parse the returned filepaths for whatever parts you need. The System.IO.Path methods will probably give you what you need instead of rolling your own.
Additionally, if your application might download multiple files throughout the day and you always need the path of the latest matching file, you could pass each filepath to a System.IO.FileInfo, which exposes the file's creation time, and use that to determine which file is newest.
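Putting those pieces together, a sketch that collects the matching files, their containing folders, and the newest match (the download path is a placeholder, and the *.dcm pattern picks up any .dcm file):

using System.IO;
using System.Linq;

var matches = Directory.GetFiles(@"C:\path_to_download_folder", "*.dcm", SearchOption.AllDirectories);

// The containing folder of each match, i.e. the unknown XXXX\YYYYYY\ZZZZZZ part of the path.
var folders = matches.Select(Path.GetDirectoryName).Distinct().ToList();

// The newest matching file, by creation time.
var newest = matches.Select(p => new FileInfo(p))
                    .OrderByDescending(f => f.CreationTime)
                    .FirstOrDefault();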
I want to download only a specific file from a torrent using MonoTorrent.
I use TorrentFile.Priority = Priority.DoNotDownload; with this, MonoTorrent doesn't download the files I don't need, but it still creates empty placeholder files. How can I avoid the creation of these placeholder files for entries with DoNotDownload priority?
Thanks!
The only way to avoid it is to exclude that file from the initial torrent file. This is a protocol-specific issue: the placeholder files are needed for computing piece hashes.
My Silverlight application needs to download multiple files, and because I don't want to ask the user for permission to save each file, I save them in IsolatedStorage first, then zip them into a single file and ask for saving permission once.
Therefore I used SharpZipLib to zip the files located in IsolatedStorage. The problem is that SharpZipLib only accepts a file path as a ZipEntry:
ZipEntry z= new ZipEntry(name);
and, as you know, because the files are located in IsolatedStorage I don't have a path for them.
I saw a sample on creating a Zip from/to a memory stream or byte array, but I can't use it for multiple files.
Please help me find a way to use SharpZipLib, or suggest another way to download multiple files without asking for permission multiple times.
The name in ZipEntry z = new ZipEntry(name); is a logical/relative name inside your zip; you can set it to whatever you want.
So as long as you can re-open your IsolatedStorage files as a Stream, you should be able to use SharpZipLib.
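A minimal sketch of that, assuming Silverlight's per-application IsolatedStorage and SharpZipLib's ZipOutputStream (the method name and file names are placeholders):

using System.IO;
using System.IO.IsolatedStorage;
using ICSharpCode.SharpZipLib.Zip;

// Zips the given IsolatedStorage files into a single output stream
// (e.g. the stream returned by SaveFileDialog.OpenFile()).
static void ZipIsolatedStorageFiles(string[] fileNames, Stream output)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var zipStream = new ZipOutputStream(output))
    {
        foreach (var name in fileNames)
        {
            zipStream.PutNextEntry(new ZipEntry(name));   // logical name inside the zip
            using (var file = store.OpenFile(name, FileMode.Open, FileAccess.Read))
            {
                var buffer = new byte[4096];
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                    zipStream.Write(buffer, 0, read);     // copy the file's bytes into the entry
            }
            zipStream.CloseEntry();
        }
        zipStream.Finish();
    }
}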