How to replace hex without a binary writer - C#

I need to replace a bit of text in a hex (binary) file. I have already used a BinaryWriter, but as I add more stuff to the file, the offsets change, so I have to keep fixing the offsets.
This is the binary writer method I have already tried:
BinaryWriter BinaryWriter1 = new BinaryWriter((Stream)File.OpenWrite("[File]"));
for (int index = [Offset]; index <= [Offset]; ++index)
{
    BinaryWriter1.BaseStream.Position = (long)index;
    BinaryWriter1.Write([Name of form].Byte1);
}
BinaryWriter1.Close(); // close once, after the loop; closing inside the loop breaks later writes
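One way to stop chasing offsets entirely is to search for the old bytes and overwrite them wherever they currently sit, the same idea as the unicode-replace answer further down this page. A minimal sketch, assuming the text being patched is ASCII and the replacement is the same length ("OLDTEXT"/"NEWTEXT" are placeholders):
byte[] data = File.ReadAllBytes("[File]");
byte[] oldBytes = Encoding.ASCII.GetBytes("OLDTEXT"); // the bytes you expect to find
byte[] newBytes = Encoding.ASCII.GetBytes("NEWTEXT"); // must be the same length here

// Scan for the pattern instead of relying on a fixed offset.
for (int i = 0; i <= data.Length - oldBytes.Length; i++)
{
    bool match = true;
    for (int j = 0; j < oldBytes.Length && match; j++)
    {
        match = data[i + j] == oldBytes[j];
    }
    if (match)
    {
        Buffer.BlockCopy(newBytes, 0, data, i, newBytes.Length); // overwrite in place
        break; // patch the first occurrence only
    }
}
File.WriteAllBytes("[File]", data);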

Related

How to read from a text file then calculate an average

I plan on reading the marks from a text file and then calculating the average mark based upon data written in previous code. I haven't been able to read the marks, though, or count how many marks there are, as BinaryReader doesn't let you use .Length.
I have tried using an array to hold each mark, but it doesn't like each mark being an integer.
public static int CalculateAverage()
{
    int count = 0;
    int total = 0;
    float average;
    BinaryReader markFile;
    markFile = new BinaryReader(new FileStream("studentMarks.txt", FileMode.Open));
    //A loop to read each line of the file and add it to the total
    {
        //total = total + eachMark;
        //count++;
    }
    //average = total / count;
    //markFile.Close();
    //Console.WriteLine("Average mark: {0}", average);
    return 0;
}
This is my studentMarks.txt file in VS.
First of all, don't use BinaryReader; you can use StreamReader, for example.
Also, with a using statement it is not necessary to call Close().
There is an answer using a while loop, so with LINQ you can do it in one line:
var avg = File.ReadAllLines("file.txt").Average(a => Int32.Parse(a));
Console.WriteLine("avg = "+avg); //5
Also, when using File.ReadAllLines(), according to the docs the file is loaded into memory and then closed, so there is no memory-leak problem or anything like that:
Opens a text file, reads all lines of the file into a string array, and then closes the file.
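The while-loop answer referenced above isn't reproduced here, but it would look something like this (a minimal sketch, assuming one integer mark per line of studentMarks.txt):
int total = 0, count = 0;
using (var reader = new StreamReader("studentMarks.txt"))
{
    string line;
    while ((line = reader.ReadLine()) != null) // null signals end of file
    {
        total += int.Parse(line);
        count++;
    }
}
Console.WriteLine("Average mark: {0}", (float)total / count);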
Edit: adding the way to read using BinaryReader.
First thing to know: you are reading a .txt file. Unless you created the file using BinaryWriter, BinaryReader will not work. And if you are creating a binary file, naming it .txt is not good practice.
So, assuming your file is binary, you need to loop and read every integer, so this code should work:
int total = 0, count = 0;
var fileName = "file.txt";
if (File.Exists(fileName))
{
    using (BinaryReader reader = new BinaryReader(File.Open(fileName, FileMode.Open)))
    {
        while (reader.BaseStream.Position < reader.BaseStream.Length)
        {
            total += reader.ReadInt32();
            count++;
        }
    }
    double average = (double)total / count;
    Console.WriteLine("Average = " + average); // 5
}
I've used using to ensure the file is closed at the end.
If your file only contains numbers, you only have to use ReadInt32() and it will work.
Also, if your file is not binary, obviously BinaryReader will not work. By the way, a binary file.txt created using BinaryWriter doesn't look like readable text at all, so I'm assuming you don't have a binary file...

Custom Newline in Binary Stream using Hex Array in WPF

I have a binary file I am reading and printing into a textbox while wrapping at a set point, but it is wrapping at places it shouldn't be. I want to ignore all line feed characters except those I have defined.
There isn't a single newline byte; rather, it seems to be a series of them. I think I found the series of hex values 00-01-01-0B that seems to correspond to where the line feeds should be.
How do I ignore existing line breaks, and use what I want instead?
This is where I am at:
shortFile = new FileStream(@"tempfile.dat", FileMode.Open, FileAccess.Read);
DisplayArea.Text = "";
byte[] block = new byte[1000];
shortFile.Position = 0;
int bytesRead;
while ((bytesRead = shortFile.Read(block, 0, 1000)) > 0)
{
    string trimmedText = System.Text.Encoding.Default.GetString(block, 0, bytesRead);
    DisplayArea.Text += trimmedText + "\n";
}
I had just figured it out a couple of minutes before dlatikay posted, but really appreciated seeing that he also had the right idea. I just replaced all control characters with spaces.
for (int i = 0; i < block.Length; i++)
{
    // replace every ASCII control character (below 0x20) with a space
    if (block[i] < 32)
    {
        block[i] = 0x20;
    }
}
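If you also want line breaks at the 00-01-01-0B markers, one approach (a sketch, reusing the block buffer and DisplayArea from above, and assuming that four-byte sequence really does delimit records) is to scan for the marker before converting to text, emit a '\n' there, and skip the marker bytes:
var sb = new System.Text.StringBuilder();
byte[] marker = { 0x00, 0x01, 0x01, 0x0B };
for (int i = 0; i < block.Length; i++)
{
    bool atMarker = i <= block.Length - marker.Length
        && block[i] == marker[0] && block[i + 1] == marker[1]
        && block[i + 2] == marker[2] && block[i + 3] == marker[3];
    if (atMarker)
    {
        sb.Append('\n');        // emit the custom line break
        i += marker.Length - 1; // skip the remaining marker bytes
    }
    else if (block[i] < 32)
    {
        sb.Append(' ');         // neutralize any other control byte
    }
    else
    {
        sb.Append((char)block[i]);
    }
}
DisplayArea.Text += sb.ToString();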

How can I replace a unicode string in a binary file?

I've been trying to get my program to replace a Unicode string in a binary file.
The user inputs what to find, and the program finds and replaces it with a specific string if it can find it.
I've searched around, but I can't find anything that fits my specifics. What I would like would be something like:
string text = File.ReadAllText(path, Encoding.Unicode);
text = text.Replace(userInput, specificString);
File.WriteAllText(path, text);
but anything that works in a similar manner should suffice.
Using that results in a file that is larger and unusable, though.
I use:
bool found = File.ReadAllText(path, Encoding.Unicode).Contains(userInput);
if (found)
{
    //Missing Part
}
to check whether the file contains the user-inputted string, if it matters.
This can work only in very limited situations. Unfortunately, you haven't offered enough details as to the nature of the binary file for anyone to know if this will work in your situation or not. There are a practically endless variety of binary file formats out there, at least some of which would be rendered invalid if you modify a single byte, many more of which could be rendered invalid if the file length changes (i.e. data after your insertion point is no longer where it is expected to be).
Of course, many binary files are also either encrypted, compressed, or both. In such cases, even if you do by some miracle find the text you're looking for, it probably doesn't actually represent that text, and modifying it will render the file unusable.
All that said, for the sake of argument let's assume your scenario doesn't have any of these problems and it's perfectly okay to just completely replace some text found in the middle of the file with some entirely different text.
Note that we also need to make an assumption about the text encoding. Text can be represented in a wide variety of ways, and you will need to use the correct encoding not just to find the text, but also to ensure the replacement text will be valid. For the sake of argument, let's say your text is encoded as UTF8.
Now we have everything we need:
void ReplaceTextInFile(string fileName, string oldText, string newText)
{
    byte[] fileBytes = File.ReadAllBytes(fileName),
        oldBytes = Encoding.UTF8.GetBytes(oldText),
        newBytes = Encoding.UTF8.GetBytes(newText);

    int index = IndexOfBytes(fileBytes, oldBytes);
    if (index < 0)
    {
        // Text was not found
        return;
    }

    byte[] newFileBytes =
        new byte[fileBytes.Length + newBytes.Length - oldBytes.Length];

    Buffer.BlockCopy(fileBytes, 0, newFileBytes, 0, index);
    Buffer.BlockCopy(newBytes, 0, newFileBytes, index, newBytes.Length);
    Buffer.BlockCopy(fileBytes, index + oldBytes.Length,
        newFileBytes, index + newBytes.Length,
        fileBytes.Length - index - oldBytes.Length);

    File.WriteAllBytes(fileName, newFileBytes);
}
int IndexOfBytes(byte[] searchBuffer, byte[] bytesToFind)
{
    // <= so a match that ends exactly at the end of the buffer is still found
    for (int i = 0; i <= searchBuffer.Length - bytesToFind.Length; i++)
    {
        bool success = true;
        for (int j = 0; j < bytesToFind.Length; j++)
        {
            if (searchBuffer[i + j] != bytesToFind[j])
            {
                success = false;
                break;
            }
        }
        if (success)
        {
            return i;
        }
    }
    return -1;
}
Notes:
The above is destructive. You may want to run it only on a copy of the file, or prefer to modify the code so that it takes an additional parameter specifying the new file to which the modification should be written.
This implementation does everything in-memory. This is much more convenient, but if you are dealing with large files, and especially if you are on a 32-bit platform, you may find you need to process the file in smaller chunks.
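A hypothetical call site, reusing path, userInput, and specificString from the question (note the encoding passed to ReadAllText has to match the one assumed inside ReplaceTextInFile):
if (File.ReadAllText(path, Encoding.UTF8).Contains(userInput))
{
    ReplaceTextInFile(path, userInput, specificString);
}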

What to do when file output turns Chinese?

Suddenly, my output file decided to become Chinese. I tried to write some random ASCII characters to a file, but instead of writing ASCII, C# decided to write ancient Chinese letters. Is it trying to tell me something?
static void WriteToFile()
{
    for (int i = 0; i < 100; i++)
    {
        int x = 0;
        x = rand.Next(0, 127);
        writer.Write((char)x);
    }
    writer.Close();
}
When you write a text file without a BOM, you leave it up to the program that reads the file to guess at the encoding that was used to convert text to the bytes in the file. Notepad uses a heuristic if you don't pick the Encoding from its File + Open dialog. The underlying winapi call is IsTextUnicode().
With random byte values like yours, and way too many ASCII control characters present, it isn't unlikely that it picks IS_TEXT_UNICODE_ASCII16 (aka UTF-16). Yes, that looks like Chinese: two bytes select the glyph. Writing the BOM keeps you out of trouble, UTF-8 being the sane choice. And write no control characters; most don't have a matching glyph. Pick from the range 32..127. Google "bush hid the facts" for an amusing story about an early version of IsTextUnicode() fumbling the guess.
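For instance, the BOM gets written automatically if you hand StreamWriter an explicit UTF-8 encoding (a sketch; the file name is made up):
var rand = new Random();
using (var writer = new StreamWriter("output.txt", false, Encoding.UTF8)) // explicit UTF-8 writes the BOM
{
    for (int i = 0; i < 100; i++)
    {
        writer.Write((char)rand.Next(32, 127)); // printable ASCII only, no control characters
    }
}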
I guess the issue is that you are writing values that are not displayable, like the first 32 ASCII characters. When writing them as UTF-8 without a BOM (which is the default in .NET for StreamWriter), you might end up with unexpected results.
This code yields the expected result:
StringWriter writer = new StringWriter();
Random rand = new Random();
for (int i = 0; i < 100; i++)
{
    int x = 0;
    x = rand.Next(32, 126);
    writer.Write((char)x);
}
writer.Close();
string s = writer.ToString();
File.WriteAllText(@"C:\temp\so2343.dat", s, Encoding.ASCII);
Also note the code change I made to rand.Next to only get the visible characters.
You're writing raw bytes into the file and Notepad treats the resulting file as unicode.

Need a fast method of deserializing 1 million Strings & Guids in C#

I want to deserialize a list of 1 million pairs of (String,Guid) for a performance critical app. The format can be anything I choose, and serialization does not have the same performance requirements.
What sort of approach is best? Text or binary? Write each pair (string,guid) consecutively, or write all strings followed by all guids?
I started playing with LinqPad, (and the simpler example of deserializing strings only) and found that (slightly counter-intuitively), using a TextReader and ReadLine() was a fair bit faster than using a BinaryReader and ReadString(). (Is the filesystem cache playing tricks on me?)
public string[] DeSerializeBinary()
{
    var tmr = System.Diagnostics.Stopwatch.StartNew();
    long ms = 0;
    string[] arr = null;
    using (var rdr = new BinaryReader(new FileStream(file, FileMode.Open, FileAccess.Read)))
    {
        var num = rdr.ReadInt32();
        arr = new String[num];
        for (int i = 0; i < num; i++)
        {
            arr[i] = rdr.ReadString();
        }
        tmr.Stop();
        ms = tmr.ElapsedMilliseconds;
        Console.WriteLine("DeSerializeBinary took {0}ms", ms);
    }
    return arr;
}
public string[] DeserializeText()
{
    var tmr = System.Diagnostics.Stopwatch.StartNew();
    long ms = 0;
    string[] arr = null;
    using (var rdr = File.OpenText(file))
    {
        var num = Int32.Parse(rdr.ReadLine());
        arr = new String[num];
        for (int i = 0; i < num; i++)
        {
            arr[i] = rdr.ReadLine();
        }
        tmr.Stop();
        ms = tmr.ElapsedMilliseconds;
        Console.WriteLine("DeserializeText took {0}ms", ms);
    }
    return arr;
}
Some Edits:
I used RamMap to clear the file system cache, and it turns out there was very little difference between the text and binary readers for strings only.
I have a fairly simple class that holds the string and GUID. It also holds an int index which corresponds to its position in the list. Obviously there's no need to include this in serialization.
In a test deserializing strings and GUIDs alternately (binary), I get around 500ms.
Ideal timing is 50ms, or as close as I can get. However, a simple experiment showed it takes at least 120ms to read the (compressed) file into memory from a reasonably fast SSD, without any sort of parsing at all. So 50ms seems unlikely.
Our strings have no theoretical length restrictions. However, we can assume that the performance target only applies if they are all 20 characters or less.
Timings include opening the file.
Reading the strings is the clear bottleneck now (hence my experiments with serializing strings only). JIT_NewFast took 30% before I preallocated a 16-byte array for reading GUIDs.
It's not surprising that reading a bunch of strings is faster with StreamReader than with BinaryReader. StreamReader reads in blocks from the underlying stream, and parses the strings from that buffer. BinaryReader doesn't have a buffer like that. It reads the string length from the underlying stream, and then reads that many characters. So BinaryReader makes more calls to the base stream's Read method.
But there's more to deserializing a (String, Guid) pair than just reading. You also have to parse the Guid. If you write the file in binary then the Guid is written in binary, which makes it much easier and faster to create a Guid structure. If it's a string, then you have to call new Guid(string) to parse the text and create a Guid, after you split the line into its two fields.
Hard to say which of those will be faster.
I can't imagine that we're talking about a whole lot of time here. Certainly reading a file with a million lines will take around a second. Unless the string is really long. A GUID is only 36 characters if you count the separators, right?
With BinaryWriter, you can write the file like this:
writer.Write(count); // integer number of records
foreach (var pair in pairs)
{
writer.Write(pair.theString);
writer.Write(pair.theGuid.ToByteArray());
}
And to read it, you have:
count = reader.ReadInt32();
byte[] guidBytes = new byte[16];
for (int i = 0; i < count; ++i)
{
string s = reader.ReadString();
reader.Read(guidBytes, 0, guidBytes.Length);
pairs.Add(new Pair(s, new Guid(guidBytes));
}
Whether that's faster than splitting a string and calling the Guid constructor that takes a string parameter, I don't know.
I suspect that any difference is going to be pretty slight. I'd probably go with the simplest method: a text file.
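For reference, the text version of the same pair format might be one comma-separated pair per line (a sketch; Pair, pairs, and file are carried over from the snippets above, and it assumes the strings themselves never contain commas):
// Write: count on the first line, then one "string,guid" pair per line
using (var writer = new StreamWriter(file))
{
    writer.WriteLine(pairs.Count);
    foreach (var pair in pairs)
    {
        writer.WriteLine("{0},{1}", pair.theString, pair.theGuid);
    }
}

// Read: split each line and parse the Guid from its text form
using (var reader = File.OpenText(file))
{
    int count = int.Parse(reader.ReadLine());
    for (int i = 0; i < count; ++i)
    {
        var fields = reader.ReadLine().Split(',');
        pairs.Add(new Pair(fields[0], new Guid(fields[1])));
    }
}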
If you want to get really crazy, you can write a custom format that you can easily slurp up in just a couple of large reads (a header, an index, and two arrays for strings and GUIDs), and do everything else in memory. That would almost certainly be faster. But faster enough to warrant the extra work? Doubtful.
Update
Or maybe not doubtful. Here's some code that writes and reads a custom binary format. The format is:
count (int32)
guids (count * 16 bytes)
strings (one big concatenated string)
index (index of each string's starting character in the big string)
I assume you're using a Dictionary<string, Guid> to hold these things. But your data structure doesn't really matter. The code would be substantially the same.
Note that I tested this very briefly. I won't say that the code is 100% bug free, but I think you can get the idea of what I'm doing.
private void WriteGuidFile(string filename, Dictionary<string, Guid> guids)
{
    using (var fs = File.Create(filename))
    {
        using (var writer = new BinaryWriter(fs, Encoding.UTF8))
        {
            List<int> stringIndex = new List<int>(guids.Count);
            StringBuilder bigString = new StringBuilder();

            // write count
            writer.Write(guids.Count);

            // Write the GUIDs and build the string index
            foreach (var pair in guids)
            {
                writer.Write(pair.Value.ToByteArray(), 0, 16);
                stringIndex.Add(bigString.Length);
                bigString.Append(pair.Key);
            }

            // Add one more entry to the string index.
            // It makes deserializing easier.
            stringIndex.Add(bigString.Length);

            // Write the string that contains all of the strings, combined
            writer.Write(bigString.ToString());

            // write the index
            foreach (var ix in stringIndex)
            {
                writer.Write(ix);
            }
        }
    }
}
Reading is just slightly more involved:
private Dictionary<string, Guid> ReadGuidFile(string filename)
{
    using (var fs = File.OpenRead(filename))
    {
        using (var reader = new BinaryReader(fs, Encoding.UTF8))
        {
            // read the count
            int count = reader.ReadInt32();

            // The guids are in a huge byte array sized 16*count
            byte[] guidsBuffer = new byte[16 * count];
            reader.Read(guidsBuffer, 0, guidsBuffer.Length);

            // Strings are all concatenated into one
            var bigString = reader.ReadString();

            // Index is an array of int. We can read it as an array of
            // ((count+1) * 4) bytes.
            byte[] indexBuffer = new byte[4 * (count + 1)];
            reader.Read(indexBuffer, 0, indexBuffer.Length);

            var guids = new Dictionary<string, Guid>(count);
            byte[] guidBytes = new byte[16];
            int startix = 0;
            int endix = 0;
            for (int i = 0; i < count; ++i)
            {
                endix = BitConverter.ToInt32(indexBuffer, 4 * (i + 1));
                string key = bigString.Substring(startix, endix - startix);
                Buffer.BlockCopy(guidsBuffer, (i * 16), guidBytes, 0, 16);
                guids.Add(key, new Guid(guidBytes));
                startix = endix;
            }
            return guids;
        }
    }
}
A couple of notes here. First, I'm using BitConverter to convert the data in the byte arrays to integers. It would be faster to use unsafe code and just index into the arrays using an int32*.
You might gain some speed by using pointers to index into guidsBuffer and calling the Guid(int, short, short, byte, byte, byte, byte, byte, byte, byte, byte) constructor rather than using Buffer.BlockCopy to copy the GUID into the temporary array.
You could make the string index an index of lengths rather than the starting positions. That would eliminate the need for the extra value at the end of the array, but it's unlikely that it'd make any difference in the speed.
There might be other optimization opportunities, but I think you get the general idea here.
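A hypothetical harness to time the round trip (file name and key format are made up):
var guids = new Dictionary<string, Guid>();
for (int i = 0; i < 1000000; ++i)
{
    guids.Add("key" + i, Guid.NewGuid());
}
WriteGuidFile("guids.bin", guids);

var sw = System.Diagnostics.Stopwatch.StartNew();
var readBack = ReadGuidFile("guids.bin");
sw.Stop();
Console.WriteLine("ReadGuidFile took {0}ms for {1} entries", sw.ElapsedMilliseconds, readBack.Count);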
