How to split a wmv file into byte chunks in C#?

I have a wmv file whose size is 300 bytes. I want to split it into several smaller chunks (for example, two chunks of 150 bytes each, or three chunks of 100 bytes). How do I implement this in C#?

It really depends on whether you want the resulting files to play or not. Splitting into chunks is easy: read the file into a byte array, then use a for loop that copies each CHUNK-sized slice of the array into its own file, not forgetting the final, possibly shorter, chunk. Splitting into files that each work on their own is a different problem.
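Here's a rough, untested sketch of that chunk-splitting loop, assuming the whole file fits comfortably in memory (the path handling and the ".partN" naming are purely illustrative):
// requires: using System; using System.IO;
static void Split(string sourcePath, int chunkSize)
{
    byte[] data = File.ReadAllBytes(sourcePath);
    int chunkCount = (data.Length + chunkSize - 1) / chunkSize; // round up

    for (int i = 0; i < chunkCount; i++)
    {
        int offset = i * chunkSize;
        // the final chunk may be shorter than chunkSize
        int length = Math.Min(chunkSize, data.Length - offset);

        byte[] chunk = new byte[length];
        Array.Copy(data, offset, chunk, 0, length);
        File.WriteAllBytes(sourcePath + ".part" + i, chunk);
    }
}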

I would try to just stream it without splitting it explicitly (the TCP stack will split it up however it likes). If you have a good codec it will play anyway; VLC, for example, can usually play a video while it is still downloading.

The real answer is: just use a streaming server and forget about writing your own streaming protocol. That's crazy. To split a file into byte segments you could use something like the code below. Note that it's untested, but it should be about 95% complete.
You should take a look at the protocol spec if you haven't already: http://msdn.microsoft.com/en-us/library/cc251059(v=PROT.10).aspx. And if you have, and you still asked this question, you don't stand an ice cube's chance in hell of making it work.
int chunkSize = 300;
using (var file = File.Open(@"c:\file.wmv", FileMode.Open))
{
    // round up so a final partial chunk still gets its own slot
    long numberOfChunks = (file.Length + chunkSize - 1) / chunkSize;
    byte[][] fileBytes = new byte[numberOfChunks][];
    for (int i = 0; i < numberOfChunks; i++)
    {
        int bytesToRead = chunkSize;
        if (i == numberOfChunks - 1)
        {
            // the last chunk only holds whatever is left over
            bytesToRead = (int)(file.Length - ((long)i * chunkSize));
        }
        fileBytes[i] = new byte[bytesToRead];
        // the second argument is the offset into the buffer, not into the file;
        // the stream position advances on its own with each Read
        file.Read(fileBytes[i], 0, bytesToRead);
    }
}

Related

Is it practical to concatenate GZipStreams?

I conceived this idea to merge an arbitrary number of small text files into one single compressed file with the GZipStream class. I spent several nights making it work, but the outcome is that the final file ended up being bigger than if the text files had simply been concatenated. I vaguely know how Huffman coding works, so I don't know whether it's practical to do this, or whether there's a better alternative. Ultimately, I want an external sorted index file to map out each blob for fast access. What do you think?
// keep track of the current position in the index
long indexByteOffset = 0;
// in reality the blobs vary in size from 1k to 300k bytes
string[] originalData = { "data blob1", "data blob2", "data blob3", "data blob4" /* etc etc etc */ };
// merged compressed file
BinaryWriter zipWriter = new BinaryWriter(File.Create(@"c:\temp\merged.gz"));
// keep track of the beginning position and size of each blob
StreamWriter indexWriter = new StreamWriter(File.Create(@"c:\temp\index.txt"));
foreach (var blob in originalData)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (GZipStream zipper = new GZipStream(ms, CompressionMode.Compress))
        {
            Encoding utf8Encoder = new UTF8Encoding();
            byte[] encodeBuffer = utf8Encoder.GetBytes(blob);
            zipper.Write(encodeBuffer, 0, encodeBuffer.Length);
        }
        byte[] compressedData = ms.ToArray();
        zipWriter.Write(compressedData);
        zipWriter.Seek(0, SeekOrigin.End);
        // note: the tab must be a string, not a char, or the offsets are summed numerically
        indexWriter.WriteLine(indexByteOffset + "\t" + (indexByteOffset + compressedData.Length));
        indexByteOffset += compressedData.Length;
    }
}
Different data can compress with different effectiveness. Small data usually isn't worth trying to compress. One common approach is to allow for an "is it compressed?" flag - do a speculative compress, but if it is larger store the original. That information could be included in the index. Personally, though, I'd probably be tempted to go for a single file - either a .zip, or just including the length of each fragment as a 4-byte chunk (or maybe a "varint") before each - then seeking to the n-th fragment is just a case of "read length prefix, decode as int, seek that many bytes, repeat". You could also reserve one bit of that for "is it compressed".
But as for "is it worth compressing": that depends on your data.
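A rough, untested sketch of that single-file, length-prefixed layout: each record is [int32 payload length][byte isCompressed][payload]. For simplicity it uses a whole byte for the flag rather than a reserved bit, it assumes .NET 4+ for Stream.CopyTo, and the method names are made up for illustration.
// requires: using System.IO; using System.IO.Compression; using System.Text;
static byte[] Compress(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        using (var gzip = new GZipStream(ms, CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length);
        }
        return ms.ToArray();
    }
}

static void WriteMerged(string path, string[] blobs)
{
    using (var writer = new BinaryWriter(File.Create(path)))
    {
        foreach (string blob in blobs)
        {
            byte[] raw = Encoding.UTF8.GetBytes(blob);
            byte[] compressed = Compress(raw);
            // speculative compress: keep whichever form is smaller
            bool useCompressed = compressed.Length < raw.Length;
            byte[] payload = useCompressed ? compressed : raw;

            writer.Write(payload.Length);
            writer.Write(useCompressed ? (byte)1 : (byte)0);
            writer.Write(payload);
        }
    }
}

static byte[] ReadRecord(string path, int index)
{
    using (var reader = new BinaryReader(File.OpenRead(path)))
    {
        // "read length prefix, decode as int, seek that many bytes, repeat"
        for (int i = 0; i < index; i++)
        {
            int length = reader.ReadInt32();
            reader.ReadByte(); // skip the flag
            reader.BaseStream.Seek(length, SeekOrigin.Current);
        }
        int payloadLength = reader.ReadInt32();
        bool isCompressed = reader.ReadByte() == 1;
        byte[] payload = reader.ReadBytes(payloadLength);
        if (!isCompressed) return payload;

        using (var source = new MemoryStream(payload))
        using (var gzip = new GZipStream(source, CompressionMode.Decompress))
        using (var result = new MemoryStream())
        {
            gzip.CopyTo(result);
            return result.ToArray();
        }
    }
}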

Extracting a binary file from another file - encoding/conversion mistake

I have two binary files, "bigFile.bin" and "smallFile.bin".
The "bigFile.bin" contains "smallFile.bin".
Opening it in beyond compare confirms that.
I want to extract the smaller file from the bigger one into a "result.bin" that equals "smallFile.bin".
I have two keywords: one for the start position ("Section") and one for the end position ("Man").
I tried the following:
byte[] bigFile = File.ReadAllBytes("bigFile.bin");
UTF8Encoding enc = new UTF8Encoding();
string text = enc.GetString(bigFile);
int startIndex = text.IndexOf("Section");
int endIndex = text.IndexOf("Man");
string smallFile = text.Substring(startIndex, endIndex - startIndex);
File.WriteAllBytes("result.bin",enc.GetBytes(smallFile));
I tried to compare the result file with the origin small file in beyond compare, which shows hex representation comparison.
Most of the bytes are equal, but some are not.
For example, in the new file I have 84, but in the old file I have the sequence EF BF BD instead.
What can cause those differences? Where am I mistaken?
Since you are working with binary files, you should not use text-related functionality (which includes encodings etc). Work with byte-related methods instead.
Your current code could be converted to work by making it into something like this:
byte[] bigFile = File.ReadAllBytes("bigFile.bin");
int startIndex = /* assume we somehow know this */
int endIndex = /* assume we somehow know this */
var length = endIndex - startIndex;
var smallFile = new byte[length];
Array.Copy(bigFile, startIndex, smallFile, 0, length);
File.WriteAllBytes("result.bin", smallFile);
To find startIndex and endIndex you could even use your previous technique, but a byte-level search would be more appropriate.
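For illustration, here's a naive sketch of such a search (the IndexOfSequence name is made up; for large inputs a smarter algorithm such as Boyer-Moore would be preferable):
// requires: using System.Text; (for the usage example)
static int IndexOfSequence(byte[] haystack, byte[] needle, int startAt)
{
    for (int i = startAt; i <= haystack.Length - needle.Length; i++)
    {
        bool match = true;
        for (int j = 0; j < needle.Length; j++)
        {
            if (haystack[i + j] != needle[j]) { match = false; break; }
        }
        if (match) return i;
    }
    return -1;
}

// usage: locate the markers without going through a text encoding
// byte[] sectionMarker = Encoding.ASCII.GetBytes("Section");
// byte[] manMarker = Encoding.ASCII.GetBytes("Man");
// int startIndex = IndexOfSequence(bigFile, sectionMarker, 0);
// int endIndex = IndexOfSequence(bigFile, manMarker, startIndex + 1);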
However this would still be problematic because:
Stuffing both binary data and "text" into the same file is going to complicate matters
There is still a lot of unnecessary copying going on here; you should work with your input as a Stream rather than an array of bytes
Even worse than the unnecessary copying, any non-stream solution would either need to load all of your input file in memory as happens above (wasteful), or be exceedingly complicated to code
So, what to do?
Don't read the file contents into memory as byte arrays. Work with FileStream instead.
Wrap a StreamReader around the FileStream and use it to find the markers for the start and end indexes. Even better, change your file format so that you don't need to search for text.
After you know startIndex and length, use stream functions to seek to the relevant part of your input stream and copy length bytes to the output stream, as sketched below.
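A rough sketch of that last step, assuming startIndex and length have already been found by scanning the input stream (names are illustrative):
// requires: using System; using System.IO;
static void ExtractRange(string inputPath, string outputPath, long startIndex, long length)
{
    byte[] buffer = new byte[81920];
    using (var input = File.OpenRead(inputPath))
    using (var output = File.Create(outputPath))
    {
        input.Seek(startIndex, SeekOrigin.Begin);
        long remaining = length;
        while (remaining > 0)
        {
            int read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
            if (read == 0) break; // unexpected end of file
            output.Write(buffer, 0, read);
            remaining -= read;
        }
    }
}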

Remove item from binary file

What's the best and the fastest method to remove an item from a binary file?
I have a binary file and I know that I need to remove B bytes starting at position A. How do I do it?
Thanks
You might want to consider working in batches to prevent allocation on the LOH, but that depends on the size of your file and the frequency with which you call this logic.
long skipIndex = 100;
int skipLength = 40;
using (FileStream fileStream = File.Open("file.dat", FileMode.Open))
{
    int bufferSize;
    checked
    {
        bufferSize = (int)(fileStream.Length - (skipLength + skipIndex));
    }
    byte[] buffer = new byte[bufferSize];
    // read all data after the removed section
    // (note: a single Read is not guaranteed to fill the buffer; loop if that matters)
    fileStream.Position = skipIndex + skipLength;
    fileStream.Read(buffer, 0, bufferSize);
    // write it back at the removal position
    fileStream.Position = skipIndex;
    fileStream.Write(buffer, 0, bufferSize);
    fileStream.SetLength(fileStream.Position); // trim the file
}
Depends... There are a few ways to do this, depending on your requirements.
The basic solution is to read chunks of data from the source file into a target file, skipping over the bits that must be removed (is it always only one segment to remove, or multiple segments?). After you're done, delete the original file and rename the temp file to the original's name.
Things to keep in mind here are that you should tend towards larger chunks rather than smaller. The size of your files will determine a suitable value. 1MB is a good 'default'.
The simple approach assumes that deleting and renaming a new file is not a problem. If you have specific permissions attached to the file, or used NTFS streams or some such, this approach won't work.
In that case, make a copy of the original file. Then skip to the first byte after the segment to ignore in the copied file, skip to the start of the segment in the source file, and transfer bytes from the copy back to the original. If you're using Streams, you'll want to call Stream.SetLength to truncate the original to the correct size.
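Here's an untested sketch of the basic copy-to-a-new-file approach, assuming a single segment to remove; the helper names and the ".tmp" suffix are illustrative:
// requires: using System; using System.IO;
static void RemoveSegment(string path, long removeAt, long removeCount)
{
    string tempPath = path + ".tmp";
    byte[] buffer = new byte[1024 * 1024]; // 1MB chunks, as suggested above

    using (var input = File.OpenRead(path))
    using (var output = File.Create(tempPath))
    {
        CopyBytes(input, output, removeAt, buffer);                       // everything before the segment
        input.Seek(removeCount, SeekOrigin.Current);                      // skip the segment
        CopyBytes(input, output, input.Length - input.Position, buffer);  // everything after it
    }

    File.Delete(path);
    File.Move(tempPath, path);
}

static void CopyBytes(Stream input, Stream output, long count, byte[] buffer)
{
    while (count > 0)
    {
        int read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, count));
        if (read == 0) break;
        output.Write(buffer, 0, read);
        count -= read;
    }
}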
If you want to just rewrite the original file and remove a sequence from it, the best way is to "rearrange" the file.
The idea is:
for i = A+1 to file.length - B
file[i] = file[i+B]
For better performance it's best to read and write in chunks rather than single bytes. Test with different chunk sizes to see what works best for your target system.
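A sketch of that chunked, in-place rearrangement (untested; position and count stand for A and B, and the method name is made up):
// requires: using System.IO;
static void RemoveBytesInPlace(string path, long position, long count)
{
    byte[] buffer = new byte[1024 * 1024]; // tune the chunk size for your target system
    using (var stream = File.Open(path, FileMode.Open, FileAccess.ReadWrite))
    {
        long readPos = position + count;   // where the data to keep starts
        long writePos = position;          // where it should end up
        while (readPos < stream.Length)
        {
            stream.Position = readPos;
            int read = stream.Read(buffer, 0, buffer.Length);
            if (read == 0) break;
            stream.Position = writePos;
            stream.Write(buffer, 0, read);
            readPos += read;
            writePos += read;
        }
        stream.SetLength(stream.Length - count); // trim the tail
    }
}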

Fastest way to delete the first few bytes of a file

I am using a Windows Mobile Compact Edition 6.5 phone and am writing binary data out to a file from Bluetooth. These files get quite large, 16MB+, and what I need to do is, once the file is written, search it for a start character and then delete everything before that, thus eliminating garbage. I cannot do this inline as the data comes in, due to graphing issues and speed, as I get a lot of data coming in and there are already too many if conditions on the incoming data. I figured it was best to post-process. Anyway, here is my dilemma: the search for the start bytes and the rewrite of the file sometimes takes 5 minutes or more. I basically move the file over to a temp file, parse through it, and rewrite a whole new file. I have to do this byte by byte.
private void closeFiles() {
    try {
        // Close file stream for raw data.
        if (this.fsRaw != null) {
            this.fsRaw.Flush();
            this.fsRaw.Close();
            // Move file, seek the first sync bytes,
            // write to fsRaw stream with sync byte and rest of data after it
            File.Move(this.s_fileNameRaw, this.s_fileNameRaw + ".old");
            FileStream fsRaw_Copy = File.Open(this.s_fileNameRaw + ".old", FileMode.Open);
            this.fsRaw = File.Create(this.s_fileNameRaw);
            int x = 0;
            bool syncFound = false;
            // search for sync byte algorithm
            while (x != -1) {
                ... logic to search for sync byte
                if (x != -1 && syncFound) {
                    this.fsPatientRaw.WriteByte((byte)x);
                }
            }
            this.fsRaw.Close();
            fsRaw_Copy.Close();
            File.Delete(this.s_fileNameRaw + ".old");
        }
    } catch (IOException e) {
        CLogger.WriteLog(ELogLevel.ERROR, "Exception in writing: " + e.Message);
    }
}
There has got to be a faster way than this!
------------ Testing times using the answer below ------------
Initial test, my way, with one-byte reads and one-byte writes:
27 KB/sec
Using the answer below with a 32768-byte buffer:
321 KB/sec
Using the answer below with a 65536-byte buffer:
501 KB/sec
You're doing a byte-wise copy of the entire file. That can't be efficient, for a load of reasons. Search for the start offset (and the end offset if you need both), then copy the entire contents between the two offsets (or between the start offset and the end of file) from one stream to another.
EDIT
You don't have to read the entire contents to make the copy. Something like this (untested, but you get the idea) would work.
private void CopyPartial(string sourceName, byte syncByte, string destName)
{
    using (var input = File.OpenRead(sourceName))
    using (var reader = new BinaryReader(input))
    using (var output = File.Create(destName))
    {
        var start = 0;
        // seek to the sync byte; start counts the skipped bytes if you need the offset
        // (note: ReadByte throws EndOfStreamException if the sync byte is never found)
        while (reader.ReadByte() != syncByte)
        {
            start++;
        }
        var buffer = new byte[4096]; // 4k page - adjust as you see fit
        do
        {
            var actual = reader.Read(buffer, 0, buffer.Length);
            output.Write(buffer, 0, actual);
        } while (reader.PeekChar() >= 0);
    }
}
EDIT 2
I actually needed something similar to this today, so I decided to write it without the PeekChar() call. Here's the kernel of what I did - feel free to integrate it with the second do...while loop above.
var buffer = new byte[1024];
var total = 0L;
int actual;
do
{
    actual = reader.Read(buffer, 0, buffer.Length);
    writer.Write(buffer, 0, actual);
    total += actual;
    // also stop on a zero-length read, so the loop terminates even when the copy
    // does not start at the very beginning of the stream
} while (actual > 0 && total < reader.BaseStream.Length);
Don't discount an approach because you're afraid it will be too slow. Try it! It'll only take 5-10 minutes to give it a try and may result in a much better solution.
If the detection process for the start of the data is not too complex/slow, then avoiding writing data until you hit the start may actually make the program skip past the junk data more efficiently.
How to do this:
Use a simple bool to know whether or not you have detected the start of the data. If you are reading junk, then don't waste time writing it to the output, just scan it to detect the start of the data. Once you find the start, then stop scanning for the start and just copy the data to the output. Just copying the good data will incur no more than an if (found) check, which really won't make any noticeable difference to your performance.
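A rough sketch of that flag idea applied to each incoming buffer (untested; FindStartOfData stands in for whatever detection logic you already have, here reduced to a single assumed start byte, and a multi-byte marker split across two buffers would need extra handling):
// requires: using System.IO;
class StartFilter
{
    private readonly Stream output;
    private bool found;

    public StartFilter(Stream output) { this.output = output; }

    // placeholder detection: returns the offset of the assumed start byte (0x7E), or -1
    private static int FindStartOfData(byte[] buffer, int count)
    {
        for (int i = 0; i < count; i++)
            if (buffer[i] == 0x7E) return i;
        return -1;
    }

    public void OnDataReceived(byte[] buffer, int count)
    {
        int writeFrom = 0;
        if (!found)
        {
            int start = FindStartOfData(buffer, count);
            if (start < 0) return;   // still junk: scan it, but don't write it
            found = true;
            writeFrom = start;       // keep only the good part of this buffer
        }
        output.Write(buffer, writeFrom, count - writeFrom);
    }
}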
You may find that in itself solves the problem. But you can optimise it if you need more performance:
What can you do to minimise the work you do to detect the start of the data? Perhaps if you are looking for a complex sequence you only need to check for one particular byte value that starts the sequence, and it's only if you find that start byte that you need to do any more complex checking. There are some very simple but efficient string searching algorithms that may help in this sort of case too. Or perhaps you can allocate a buffer (e.g. 4kB) and gradually fill it with bytes from your incoming stream. When the buffer is filled, then and only then search for the end of the "junk" in your buffer. By batching the work you can make use of memory/cache coherence to make the processing considerably more efficient than it would be if you did the same work byte by byte.
Do all the other "conditions on the incoming data" need to be continually checked? How can you minimise the amount of work you need to do but still achieve the required results? Perhaps some of the ideas above might help here too?
Do you actually need to do any processing on the data while you are skipping junk? If not, then you can break the whole thing into two phases (skip junk, copy data), and skipping the junk won't cost you anything when it actually matters.

how to check if 2 files are equal using .NET? [duplicate]

This question already has answers here:
How to compare 2 files fast using .NET?
(20 answers)
Closed 7 years ago.
Say I have a file A.doc.
Then I copy it to B.doc and move it to another directory.
For me, it is still the same file.
But how can I determine that it is?
When I download files I sometimes read about getting the MD5 hash or the checksum, but I don't know what that is about.
Is there a way to check whether these files are binary equal?
If you want to be 100% sure of the exact bytes in the file being the same, then opening two streams and comparing each byte of the files is the only way.
If you just want to be pretty sure (99.9999%?), I would calculate a MD5 hash of each file and compare the hashes instead. Check out System.Security.Cryptography.MD5CryptoServiceProvider.
In my testing, if the files are usually equivalent then comparing MD5 hashes is about three times faster than comparing each byte of the file.
If the files are usually different then comparing byte-by-byte will be much faster, because you don't have to read in the whole file, you can stop as soon as a single byte differs.
Edit: I originally based this answer off a quick test which read from each file byte-by-byte, and compared them byte-by-byte. I falsely assumed that the buffered nature of the System.IO.FileStream would save me from worrying about hard disk block sizes and read speeds; this was not true. I retested my program that reads from each file in 4096 byte chunks and then compares the chunks - this method is slightly faster overall than MD5 even when the files are exactly the same, and will of course be much faster if they differ.
I'm leaving this answer as a mild warning about the FileStream class, and because I still think it has some value as an answer to "how do I calculate the MD5 of a file in .NET". Apart from that, though, it's not the best way to fulfil the original request.
Example of calculating the MD5 hashes of two files (now tested!):
using (var reader1 = new System.IO.FileStream(filepath1, System.IO.FileMode.Open, System.IO.FileAccess.Read))
{
    using (var reader2 = new System.IO.FileStream(filepath2, System.IO.FileMode.Open, System.IO.FileAccess.Read))
    {
        byte[] hash1;
        byte[] hash2;
        using (var md51 = new System.Security.Cryptography.MD5CryptoServiceProvider())
        {
            md51.ComputeHash(reader1);
            hash1 = md51.Hash;
        }
        using (var md52 = new System.Security.Cryptography.MD5CryptoServiceProvider())
        {
            md52.ComputeHash(reader2);
            hash2 = md52.Hash;
        }
        int j = 0;
        for (j = 0; j < hash1.Length; j++)
        {
            if (hash1[j] != hash2[j])
            {
                break;
            }
        }
        if (j == hash1.Length)
        {
            Console.WriteLine("The files were equal.");
        }
        else
        {
            Console.WriteLine("The files were not equal.");
        }
    }
}
First compare the sizes of the files; if the sizes are not the same, the files are different. If the sizes are the same, then simply compare the files' contents.
Indeed there is. Open both files, read them in as byte arrays, and compare each byte. If all bytes are equal, the files are equal.
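Putting the two previous answers together, here's a sketch of a size-then-content check that reads in chunks and stops at the first difference (the FilesAreEqual name and the 4096-byte chunk size are illustrative choices):
// requires: using System.IO;
static bool FilesAreEqual(string path1, string path2)
{
    var info1 = new FileInfo(path1);
    var info2 = new FileInfo(path2);
    if (info1.Length != info2.Length) return false; // different sizes: different files

    byte[] buffer1 = new byte[4096];
    byte[] buffer2 = new byte[4096];
    using (var stream1 = info1.OpenRead())
    using (var stream2 = info2.OpenRead())
    {
        while (true)
        {
            int read1 = ReadFully(stream1, buffer1);
            int read2 = ReadFully(stream2, buffer2);
            if (read1 != read2) return false;
            if (read1 == 0) return true; // both files exhausted without a difference
            for (int i = 0; i < read1; i++)
            {
                if (buffer1[i] != buffer2[i]) return false;
            }
        }
    }
}

// keep calling Read until the buffer is full or the stream ends
static int ReadFully(Stream stream, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int read = stream.Read(buffer, total, buffer.Length - total);
        if (read == 0) break;
        total += read;
    }
    return total;
}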
