I'm using GZip in C#. I have multiple files to compress and store together.
But I don't know which is better: compressing the files one by one and then saving the results in a single file, or combining the files into one file first and then compressing only that combined file.
How do the two approaches compare in output size? Which one produces the smaller file?
If you are looking to optimize for file size, putting them in one zip file might be a bit better, depending on the data in the files, the exact settings used, etc.
If you are looking to optimize for running time, and the library you're using is single-threaded, then you can potentially get a large performance boost by zipping each file in its own thread, but that assumes GZip is processor-bound, which it may well not be.
Overall, the results will probably be subjective and very dependent on the data you're trying to compress. Your best option would likely be to grab a sample of somewhat representative files, GZip them a few different ways, and see which way works best for what you need.
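A minimal sketch of that comparison, using GZipStream from System.IO.Compression with in-memory sample data standing in for your real files (a real test would use File.ReadAllBytes on representative files):

```csharp
using System;
using System.IO;
using System.IO.Compression;

class GZipSizeComparison
{
    // Compress a byte array with GZip and return the compressed length.
    public static long CompressedSize(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // leaveOpen: true so we can still read output.Length afterwards.
            using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
            {
                gzip.Write(data, 0, data.Length);
            }
            return output.Length;
        }
    }

    public static void Main()
    {
        // Sample data standing in for two files.
        byte[] fileA = new byte[4096];
        byte[] fileB = new byte[4096];   // all zeros: highly compressible
        new Random(1).NextBytes(fileA);  // random: barely compressible

        // Option 1: compress each "file" separately and sum the sizes.
        long separate = CompressedSize(fileA) + CompressedSize(fileB);

        // Option 2: concatenate the "files", then compress once.
        byte[] combined = new byte[fileA.Length + fileB.Length];
        Buffer.BlockCopy(fileA, 0, combined, 0, fileA.Length);
        Buffer.BlockCopy(fileB, 0, combined, fileA.Length, fileB.Length);
        long together = CompressedSize(combined);

        Console.WriteLine($"Separate: {separate} bytes, combined: {together} bytes");
    }
}
```

Combining first tends to win when the files share redundant content, since the compressor can reuse matches across file boundaries; for unrelated data the difference is usually small.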
I tried both options on two image files, and the results were:
Image1 (6 KB) + Image2 (7 KB) = 13 KB uncompressed
Image1 compressed (3 KB) + Image2 compressed (3 KB) = 6 KB
In this sample there was no difference between the approaches, so the second option works just as well.
I'm working on a project that needs to look at a large amount of data (~1 TB) and copy it from drive A to drive B. It will constantly run in the background (or tray) and run a check every XX hours/minutes. At that time it will check whether there are any NEW files on drive A and copy them to drive B. If any files were updated and are newer, it will also copy them from A to B, replacing the old versions.
I'm not really sure where to start. Should I write this in Python or C# (maybe visual?)? If someone could give me some advice I would greatly appreciate it. Thanks!
EDIT:
Just wanted to give an update! I ended up using Robocopy, which is built into Windows. I moved away from Python and just created a small batch file that checks all of the files on drive A and compares them to drive B. If anything is new or doesn't exist, it copies it over. I then set up a task through Task Scheduler, which is also built into Windows. It works PERFECTLY in literally just one line in a batch file!
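The one-liner itself isn't shown in the post; a typical Robocopy invocation for this kind of incremental copy (the paths here are placeholders) might look like:

```bat
robocopy "D:\DriveA" "E:\DriveB" /E /XO /R:2 /W:5
```

`/E` copies subdirectories (including empty ones), `/XO` excludes files that are older than the copy at the destination, so only new or updated files get transferred, and `/R`/`/W` cap the retry count and wait time. Scheduling that batch file through Task Scheduler completes the setup.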
I was starting to look into building something like this myself. I was going to write it in C#, probably as a Windows service, and then have it periodically scan for new files. It would then build checksums with either SHA-1 or MD5. You can look here for how to generate an MD5 hash in C#. Here is some additional information about byte-for-byte vs. checksum comparisons.
After it has its hash list, it can transfer the files and then hash them again at the destination to ensure they were written properly. I was going to hold on to all the hashes, so that when it rescans the directory it has something to compare against to see whether a file was updated. Then it would just repeat the above.
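The hashing step described above can be sketched like this, using System.Security.Cryptography (the method name and the use of a temp file are illustrative):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class FileHasher
{
    // Compute the MD5 hash of a file and return it as a lowercase hex string.
    public static string Md5OfFile(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            byte[] hash = md5.ComputeHash(stream);
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }

    public static void Main()
    {
        // Demo on a throwaway file; the real service would hash every
        // file in the watched directory and store path -> hash pairs.
        string tmp = Path.GetTempFileName();
        File.WriteAllText(tmp, "hello");
        Console.WriteLine(Md5OfFile(tmp)); // prints 5d41402abc4b2a76b9719d911017c592
        File.Delete(tmp);
    }
}
```

Keeping the resulting path-to-hash dictionary between scans gives the comparison baseline the answer describes.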
I have a Windows Forms application which loads a list of paths to video files into an array and then, with a foreach (string file in fileList) loop, goes through each file, runs some analysis on the video, and writes the result back to the fileList array.
The trouble is that it processes each video file at slightly less than real time, which is not ideal. I am aiming to split the task across multiple threads. I have tested opening the application 5 times and running the processing on separate files; the CPU handles this without issue.
What would be the simplest way to split the processing across multiple threads?
Edit: I am new to multithreading and currently learning. I know there are different ways to multithread, but I am looking for the method I should use in order to learn about it.
I found this example, but I don't understand how it works; it seems too simple compared to other examples I have been looking at.
Best way to do a multithread foreach loop
If the results are added to the same collection, PLINQ makes it a bit easier:
var results = fileList.AsParallel().Select(file => {
    var bytes = File.ReadAllBytes(file);
    var result = bytes.Length;
    return result;
}).ToList();
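If you'd rather not produce a projected list, Parallel.ForEach is another common option. A minimal sketch, where Analyze is a stand-in for the real video-analysis routine and the results go into a thread-safe dictionary:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ParallelProcessing
{
    // Stand-in for the real (expensive) video-analysis routine.
    public static int Analyze(string file) => file.Length;

    public static void Main()
    {
        string[] fileList = { "a.mp4", "bb.mp4", "ccc.mp4" };

        // ConcurrentDictionary is safe to write to from multiple threads.
        var results = new ConcurrentDictionary<string, int>();

        // Cap concurrency so the machine isn't oversubscribed.
        var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
        Parallel.ForEach(fileList, options, file =>
        {
            results[file] = Analyze(file);
        });

        Console.WriteLine(results.Count); // prints 3
    }
}
```

Both PLINQ and Parallel.ForEach partition the work across the thread pool for you, which is why the example looks so much simpler than hand-rolled Thread code.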
I took a look at the ZipFile class, but it only supports single files. So my question is: is there any way to extract multipart zip files without relying on 3rd-party tools?
The simple answer is no, not unless you write one yourself. The .NET compression system is geared towards streams, not files; since a stream can be a file, a region of memory, network traffic, or a data feed from a device, forcing file-oriented compression onto streams would be a very bad thing.
I tend to use SharpZip; it's a free, open-source library that handles some of the most common compression formats.
Update:
It looks like if you're using .NET 4.5 or higher, Microsoft has added the ZipArchive class, which gives you access to zip archives directly (thanks to spender).
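A minimal round-trip sketch with the .NET 4.5+ API (note this handles ordinary zip archives, not multipart/spanned ones; on .NET Framework it needs references to System.IO.Compression and System.IO.Compression.FileSystem):

```csharp
using System;
using System.IO;
using System.IO.Compression;

class ZipRoundTrip
{
    public static void Main()
    {
        // Throwaway paths for the demo; substitute your own.
        string dir    = Path.Combine(Path.GetTempPath(), "ziptest-src");
        string outDir = Path.Combine(Path.GetTempPath(), "ziptest-out");
        string zip    = Path.Combine(Path.GetTempPath(), "ziptest.zip");
        Directory.CreateDirectory(dir);
        File.WriteAllText(Path.Combine(dir, "hello.txt"), "hello");
        if (File.Exists(zip)) File.Delete(zip);
        if (Directory.Exists(outDir)) Directory.Delete(outDir, true);

        // Create an archive from a folder, then extract it again.
        ZipFile.CreateFromDirectory(dir, zip);
        ZipFile.ExtractToDirectory(zip, outDir);

        Console.WriteLine(File.ReadAllText(Path.Combine(outDir, "hello.txt"))); // prints hello
    }
}
```

For genuinely multipart archives (.z01, .z02, ...), this API won't help; a third-party library such as SharpZip remains the practical route.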
This question is actually a theoretical one, not about specific code, but about the approach.
In my video game there are a lot of phrases that the protagonist is supposed to say. Now I want to add a voiceover, so that someone will read those phrases aloud and I can add them as sounds, not just plain text.
There are many different phrases.
So the main question is: which is better, to have each phrase as a separate sound file, or to have the phrases divided into sections (like game areas, game actions, or whatever), where each large file contains a number of phrases? In the latter case, in code, I would play the large file not completely, but from a specific time to a specific time (is that possible in WPF?).
What is important:
Time - which approach is easier to implement?
Resources - which approach is easier on the computer and/or the Visual Studio compiler?
Copyright - I want to limit the possibility of end users stealing the sound files.
I personally think that having a thousand files is crazy, so it's better to use larger files that contain the smaller ones. However, my friend highly recommended against it, claiming that playing large files from the middle is harder for the computer and will cause problems, maybe slowing down the game.
Which option would you recommend? Or maybe there is another approach I didn't think of?
Thank you in advance,
Evgenie
I would imagine that using separate sound files would be easier. Then, within C#, add them all to a list.
From there it should be possible to play each sound file by its index (or key), which is easy if you keep them organised and labeled well.
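A minimal sketch of that idea, keeping a lookup from phrase id to file path (the ids and paths here are hypothetical); in WPF you would hand the resolved path to a MediaPlayer:

```csharp
using System;
using System.Collections.Generic;

class PhraseLibrary
{
    // Map phrase ids to sound-file paths (ids and paths are hypothetical).
    readonly Dictionary<string, string> phrases = new Dictionary<string, string>
    {
        ["greeting"] = @"Sounds\greeting.wav",
        ["victory"]  = @"Sounds\victory.wav",
    };

    // Resolve the file for a phrase; in WPF, pass the result to a
    // System.Windows.Media.MediaPlayer (player.Open(uri); player.Play();).
    public string PathFor(string id) => phrases[id];

    public static void Main()
    {
        var lib = new PhraseLibrary();
        Console.WriteLine(lib.PathFor("greeting")); // prints Sounds\greeting.wav
    }
}
```

With well-chosen ids, adding or re-recording a phrase means swapping one small file rather than rebuilding a large combined one.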
What is the best way to store data for a Windows 8 app?
I need to store from one to thousands of little pictures (max 300x300 px), loaded from the FilePicker.
Also, is there a recommended practice for storing pictures (one class rather than another)?
Storing 1,000 images uncompressed in memory could hang the computer, depending on the images and the memory available, so that's not the best practice.
You haven't mentioned why you need to do this, so there is no single clear solution, but here are some guidelines:
if you need to show many pics on screen, use XAML (e.g. the Image element) and set the path to the image on the web or local disk; Windows will do the caching for you. Also, put the Image element in a GridView or ListView and use data binding.
if you're afraid that the images might get lost and it is not enough to keep just the path, copy them into the ApplicationData LocalFolder.
if you need to do some operations on the images, load only the one you need into memory, or load them one by one.
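The "copy into LocalFolder" option above can be sketched with the WinRT storage API (Windows 8 / WinRT only, so this won't compile in a desktop project; the method name is illustrative and the StorageFile is assumed to come from the FilePicker):

```csharp
using System.Threading.Tasks;
using Windows.Storage;

class ImageStore
{
    // Copy a picture chosen with the FilePicker into the app's local data
    // folder, so it survives even if the original file is moved or deleted.
    public static async Task<StorageFile> SaveLocalCopyAsync(StorageFile picked)
    {
        StorageFolder local = ApplicationData.Current.LocalFolder;
        return await picked.CopyAsync(local, picked.Name,
                                      NameCollisionOption.ReplaceExisting);
    }
}
```

Files in LocalFolder are private to the app and are removed when the app is uninstalled, which also limits casual access by end users.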