I managed to read values from a binary file between two particular offsets, but now I'm stuck: I need to replace all the values between two particular offsets.
If the file is not that long, you can try LINQ:
using System.IO;
using System.Linq;
...
string fileName = ...;
int offset1 = ...;
int offset2 = ...;
byte[] toInsert = ...;
byte[] data = File.ReadAllBytes(fileName);
File.WriteAllBytes(fileName, data
.Take(offset1) // bytes in 0..offset1 range
.Concat(toInsert) // bytes to insert
.Concat(data.Skip(offset2)) // bytes in offset2..eof range
.ToArray());
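For larger files, the same splice can be done without LINQ's per-element overhead by allocating the result once and using Array.Copy. A minimal sketch (the `Replace` helper and the sample values are illustrative, and offsets are assumed valid with offset2 >= offset1):

```csharp
using System;

class Splice
{
    // Replace data[offset1..offset2) with toInsert and return the new array.
    static byte[] Replace(byte[] data, int offset1, int offset2, byte[] toInsert)
    {
        var result = new byte[offset1 + toInsert.Length + (data.Length - offset2)];
        Array.Copy(data, 0, result, 0, offset1);                   // bytes in 0..offset1 range
        Array.Copy(toInsert, 0, result, offset1, toInsert.Length); // bytes to insert
        Array.Copy(data, offset2, result, offset1 + toInsert.Length,
                   data.Length - offset2);                         // bytes in offset2..eof range
        return result;
    }

    static void Main()
    {
        var data = new byte[] { 1, 2, 3, 4, 5 };
        var patched = Replace(data, 1, 3, new byte[] { 9, 9, 9 });
        Console.WriteLine(BitConverter.ToString(patched)); // 01-09-09-09-04-05
    }
}
```

Read the whole file with File.ReadAllBytes, splice, then write back with File.WriteAllBytes, exactly as in the LINQ version.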
I am trying to write a console application that will take data as input and split it into two.
Example: if I pass the value 0x00000000A0DB383E as input, my output should look like this:
var LowerValue = 0x00000000A0DB0000 (last 2 bytes 383E (index 14-17) replaced with 0000)
var UpperValue = 0x000000000000383E (middle 2 bytes A0DB (index 10-13) replaced with 0000)
So far I have tried the code below but don't know how to proceed further. Any help will be highly appreciated.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
namespace SplitFunction
{
class Program
{
static void Main(string[] args)
{
byte[] rawValue = BitConverter.GetBytes(0x00000000A0DB383E);
SplitData(rawValue);
Console.ReadKey();
}
public static byte[] SplitData(byte[] input)
{
byte[] lowerValues = new byte[8];
Array.Copy(input, 0, lowerValues, 4, 4);
foreach(var lowerValue in lowerValues)
Console.WriteLine(lowerValue);
return lowerValues;
}
}
}
Rather than copying and zeroing individual array elements, use masking to create new arrays directly. Something like this:
long input = 0x00000000A0DB383EL;
byte[] rawValue = BitConverter.GetBytes(input);
byte[] lowValue = BitConverter.GetBytes(input & 0x000000000000FFFF);
byte[] highValue = BitConverter.GetBytes(input & 0x00000000FFFF0000);
If you want the values in order from high byte to low byte, then reverse them (note that Array.Reverse reverses in place and returns void, so it cannot be used on the right-hand side of an assignment):
byte[] rawValue = BitConverter.GetBytes(input);
Array.Reverse(rawValue);
byte[] lowValue = BitConverter.GetBytes(input & 0x000000000000FFFF);
Array.Reverse(lowValue);
byte[] highValue = BitConverter.GetBytes(input & 0x00000000FFFF0000);
Array.Reverse(highValue);
if you simply want the long value rather than an array
long lowValue = input & 0x000000000000FFFF;
long highValue = input & 0x00000000FFFF0000;
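Putting the masks together in a complete sketch (note that the question's LowerValue corresponds to the 0xFFFF0000 mask and its UpperValue to the 0xFFFF mask):

```csharp
using System;

class Split
{
    static void Main()
    {
        long input = 0x00000000A0DB383EL;
        long lowerValue = input & 0x00000000FFFF0000; // last 2 bytes zeroed
        long upperValue = input & 0x000000000000FFFF; // only the last 2 bytes kept
        Console.WriteLine("0x{0:X16}", lowerValue); // 0x00000000A0DB0000
        Console.WriteLine("0x{0:X16}", upperValue); // 0x000000000000383E
    }
}
```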
I have
0x4D5A90000300000004000000FFFF0000B80000000000000040...
generated from SQL Server.
How can I insert this byte string into a byte[] column in the database using Entity Framework?
As per my comment above, I strongly suspect that the best thing to do here is to return the data as a byte[] from the server; this should be fine and easy to do. However, if you have to use a string, then you'll need to parse it out - take off the 0x prefix, divide the length by 2 to get the number of bytes, then loop and parse each 2-character substring using Convert.ToByte(s, 16) in turn. Something like (completely untested):
int len = (value.Length / 2) - 1;
var arr = new byte[len];
for (int i = 0; i < len; i++) {
var s = value.Substring((i + 1) * 2, 2);
arr[i] = Convert.ToByte(s, 16);
}
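Wrapped up as a helper, with a small usage example (a sketch assuming the input always starts with "0x" and has an even number of hex digits; `ParseHex` is an illustrative name):

```csharp
using System;

class HexParse
{
    static byte[] ParseHex(string value)
    {
        int len = (value.Length / 2) - 1;  // two hex chars per byte, minus the "0x" prefix
        var arr = new byte[len];
        for (int i = 0; i < len; i++)
            arr[i] = Convert.ToByte(value.Substring((i + 1) * 2, 2), 16);
        return arr;
    }

    static void Main()
    {
        var bytes = ParseHex("0x4D5A9000");
        Console.WriteLine(BitConverter.ToString(bytes)); // 4D-5A-90-00
    }
}
```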
So I have this really simple code that reads a file and spits its data out in a hex viewer fashion. Here it is:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace HexViewer
{
class Program
{
static void Main(string[] args)
{
BinaryReader br = new BinaryReader(new FileStream("C:\\dump.bin", FileMode.Open));
for (int i = 0; i < br.BaseStream.Length; i+= 16)
{
Console.Write(i.ToString("x") + ": ");
byte[] data = new byte[16];
br.Read(data, i, 16);
Console.WriteLine(BitConverter.ToString(data).Replace("-", " "));
}
Console.ReadLine();
}
}
}
The problem is that after the first iteration, when I do
br.Read(data, 16, 16);
The byte array is padded by 16 bytes, and then filled with data from the 16th to the 31st byte (zero-based) of the file. Because it can't fit 32 bytes into a 16-byte array, it throws an exception. You can try this code with any file larger than 16 bytes. So, the question is: what is wrong with this code?
Just change br.Read(data, i, 16); to br.Read(data, 0, 16);
You are reading in a new block of data each time, so no need to use i for the data buffer.
Even better, change:
byte[] data = new byte[16];
br.Read(data, 0, 16);
To:
var data = br.ReadBytes(16);
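A corrected version of the whole loop might look like this sketch; ReadBytes also handles the end of the file gracefully by returning fewer than 16 bytes for the final chunk (the file path and sample bytes are illustrative):

```csharp
using System;
using System.IO;

class HexViewer
{
    static void Dump(string path)
    {
        using (var br = new BinaryReader(new FileStream(path, FileMode.Open)))
        {
            for (long i = 0; i < br.BaseStream.Length; i += 16)
            {
                byte[] data = br.ReadBytes(16); // may return fewer bytes at EOF
                Console.WriteLine(i.ToString("x8") + ": " +
                    BitConverter.ToString(data).Replace("-", " "));
            }
        }
    }

    static void Main()
    {
        File.WriteAllBytes("dump.bin", new byte[] { 0x4D, 0x5A, 0x90, 0x00 });
        Dump("dump.bin"); // 00000000: 4D 5A 90 00
    }
}
```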
I have a small problem checking the MD5 checksum of files in C# and PHP. The hash calculated by the PHP script differs from the hash calculated by C#.
libcurl.dll C# = c3506360ce8f42f10dc844e3ff6ed999
libcurl.dll PHP = f02b47e41e9fa77909031bdef07532af
In PHP I use md5_file function, and my C# code is:
protected string GetFileMD5(string fileName)
{
FileStream file = new FileStream(fileName, FileMode.Open);
MD5 md5 = new MD5CryptoServiceProvider();
byte[] retVal = md5.ComputeHash(file);
file.Close();
StringBuilder sb = new StringBuilder();
for (int i = 0; i < retVal.Length; i++)
{
sb.Append(retVal[i].ToString("x2"));
}
return sb.ToString();
}
Any ideas how to calculate the same hash? I think that it may be something about encoding.
Thanks in advance!
My C# is rusty, but will:
byte[] retVal = md5.ComputeHash(file);
actually read in the entire file? I think it is just hashing the stream object. I believe you need to read the stream, then hash on the entire file contents?
int length = (int)file.Length; // get file length
byte[] buffer = new byte[length]; // create buffer
int count; // actual number of bytes read
int sum = 0; // total number of bytes read
// read until Read method returns 0 (end of the stream has been reached)
while ((count = file.Read(buffer, sum, length - sum)) > 0)
sum += count; // sum is a buffer offset for next reading
byte[] retVal = md5.ComputeHash(buffer);
I'm not sure if that actually runs as is, but I think something along those lines will be needed.
I use this; I haven't had any issues yet comparing PHP's md5 with C#'s MD5:
System.Text.UTF8Encoding text = new System.Text.UTF8Encoding();
System.Security.Cryptography.MD5CryptoServiceProvider md5 = new System.Security.Cryptography.MD5CryptoServiceProvider();
Convert2.ToBase16(md5.ComputeHash(text.GetBytes(encPassString + sess)));
class Convert2
{
public static string ToBase16(byte[] input)
{
return string.Concat((from x in input select x.ToString("x2")).ToArray());
}
}
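For files specifically, PHP's md5_file and a C# helper like the one below should produce identical hex digests whenever the bytes on disk are identical; ComputeHash(Stream) does read the whole stream, so a mismatch usually means the two sides are hashing different bytes (e.g. a corrupted or partially transferred file). A minimal sketch:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class FileHash
{
    static string GetFileMD5(string fileName)
    {
        using (var md5 = MD5.Create())
        using (var file = File.OpenRead(fileName))
        {
            byte[] hash = md5.ComputeHash(file); // reads the entire stream
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }

    static void Main()
    {
        File.WriteAllText("test.txt", "hello");
        Console.WriteLine(GetFileMD5("test.txt")); // 5d41402abc4b2a76b9719d911017c592
    }
}
```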
I need to be able to insert audio data into existing ac3 files. AC3 files are pretty simple and can be appended to each other without stripping headers or anything. The problem I have is that if you want to add/overwrite/erase a chunk of an ac3 file, you have to do it in 32ms increments, and each 32ms is equal to 1536 bytes of data. So when I insert a data chunk (which must be 1536 bytes, as I just said), I need to find the nearest offset that is divisible by 1536 (like 0, 1536 (0x600), 3072 (0xC00), etc). Let's say I can figure that out. I've read about changing a particular character at a specific offset, but I need to INSERT (not overwrite) that entire 1536-byte data chunk. How would I do that in C#, given the starting offset and the 1536-byte data chunk?
Edit: The data chunk I want to insert is basically just 32ms of silence, and I have the hex, ASCII and ANSI text translations of it. Of course, I may want to insert this chunk multiple times to get 128ms of silence instead of just 32, for example.
byte[] filebyte = File.ReadAllBytes(@"C:\abc.ac3");
byte[] tobeinserted = ...; // allocate in your way using encoding, whatever
int insertOffset = 1536 * pos; // make pos your choice
byte[] total = new byte[filebyte.Length + tobeinserted.Length];
Array.Copy(filebyte, 0, total, 0, insertOffset); // bytes before the insert point
Array.Copy(tobeinserted, 0, total, insertOffset, tobeinserted.Length); // the inserted chunk
Array.Copy(filebyte, insertOffset, total, insertOffset + tobeinserted.Length, filebyte.Length - insertOffset); // remaining bytes
File.WriteAllBytes(@"C:\abc.ac3", total);
Here is the helper method that will do what you need:
public static void Insert(string filepath, int insertOffset, Stream dataToInsert)
{
var newFilePath = filepath + ".tmp";
using (var source = File.OpenRead(filepath))
using (var destination = File.OpenWrite(newFilePath))
{
CopyTo(source, destination, insertOffset);// first copy the data before insert
dataToInsert.CopyTo(destination);// write data that needs to be inserted:
CopyTo(source, destination, (int)(source.Length - insertOffset)); // copy remaining data
}
// delete old file and rename new one:
File.Delete(filepath);
File.Move(newFilePath, filepath);
}
private static void CopyTo(Stream source, Stream destination, int count)
{
const int bufferSize = 32 * 1024;
var buffer = new byte[bufferSize];
var remaining = count;
while (remaining > 0)
{
var toCopy = remaining > bufferSize ? bufferSize : remaining;
var actualRead = source.Read(buffer, 0, toCopy);
destination.Write(buffer, 0, actualRead);
remaining -= actualRead;
}
}
And here is an NUnit test with example usage:
[Test]
public void TestInsert()
{
var originalString = "some original text";
var insertString = "_ INSERTED TEXT _";
var insertOffset = 8;
var file = @"c:\someTextFile.txt";
if (File.Exists(file))
File.Delete(file);
using (var originalData = new MemoryStream(Encoding.ASCII.GetBytes(originalString)))
using (var f = File.OpenWrite(file))
originalData.CopyTo(f);
using (var dataToInsert = new MemoryStream(Encoding.ASCII.GetBytes(insertString)))
Insert(file, insertOffset, dataToInsert);
var expectedText = originalString.Insert(insertOffset, insertString);
var actualText = File.ReadAllText(file);
Assert.That(actualText, Is.EqualTo(expectedText));
}
Be aware that I have removed some checks for code clarity; do not forget to check for null, file access permissions, and file size. For example, insertOffset can be bigger than the file length, and this condition is not checked here.