StreamReader.BaseStream issue after using EndOfStream property - c#

First of all, I understand that I can solve this in other ways; the issue probably exists only because I'm mixing methods incorrectly. But I want to find out what exactly is happening in my example.
I was using a StreamReader to read a file. In order to get bytes from it I decided to use BaseStream.Read:
int length = (int)reader.BaseStream.Length;
byte[] file = new byte[length];
while (!reader.EndOfStream)
{
    int readBytes = reader.BaseStream.Read(file, 0,
        (length - offset) > bufferSize ? bufferSize : (length - offset));
    for (int i = 0; i < readBytes; i++)
    {
        ...
    }
    offset += readBytes;
}
BaseStream.Read refuses to return the last 1024 bytes when the StreamReader.EndOfStream property is used before reading. Later I found information that EndOfStream tries to read 1 byte but, for performance reasons, actually reads 1024 bytes into its internal buffer. Apparently that 1 KB becomes impossible to reach.
EDIT: If I remove the reader.EndOfStream check from the code, reader.BaseStream.Read works correctly. That was the main point of the question.
Again, I understand that this code example is absolutely inefficient. I'm just trying to understand how streams work in this example, and whether this issue exists only because of bad code (or whether StreamReader.BaseStream has some issues). Thanks in advance.

It is not that StreamReader.BaseStream has issues; the problem is in your code, where you work directly with the Stream wrapped inside the StreamReader.
From MSDN about StreamReader.DiscardBufferedData:
You need to call this method only when the position of the internal buffer and the BaseStream do not match. These positions can become mismatched when you read data into the buffer and then seek a new position in the underlying stream.
That means that in your case, when the Stream has already reached its end position, the position of the StreamReader's internal buffer still holds the value it had before you read the underlying stream directly, so reader.EndOfStream is still false. That is why you cannot finish the loop.
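To illustrate the point (a minimal sketch of my own, not part of the original answer, with a placeholder file name): if all you need is raw bytes, you can avoid the buffer mismatch entirely by not using a StreamReader at all and letting Read's return value signal the end of the stream.
// Sketch: read the whole file as bytes without a StreamReader.
using (FileStream stream = File.OpenRead("yourFile"))
{
    byte[] file = new byte[stream.Length];
    int offset = 0;
    int read;
    // Read returns 0 only at the end of the stream.
    while (offset < file.Length &&
           (read = stream.Read(file, offset, file.Length - offset)) > 0)
    {
        offset += read;
    }
}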
Edit:
I think you are missing something. I give you this code to prove that the file is successfully read to the end. Run it and you will see your app repeatedly say: I'm at the end of the file!
static void Main()
{
    using (StreamReader reader = new StreamReader(@"yourFile"))
    {
        int offset = 0;
        int bufferSize = 102400;
        int length = (int)reader.BaseStream.Length;
        byte[] file = new byte[length];
        while (!reader.EndOfStream)
        {
            // Add these lines:
            Console.WriteLine(reader.BaseStream.Position);
            Console.ReadLine();
            int readBytes = reader.BaseStream.Read(file, 0,
                (length - offset) > bufferSize ? bufferSize : (length - offset));
            string str = Encoding.UTF8.GetString(file, 0, readBytes);
            offset += readBytes;
            if (reader.BaseStream.Position == length)
            {
                Console.WriteLine("I'm at the end of the file! Current Tickcount: " + Environment.TickCount);
                Thread.Sleep(100);
            }
        }
    }
}
Edit 2
But still, offset and length should be equal; in my case length - offset = 1024 (for files bigger than 1 KB). Maybe I'm doing something wrong, but if I use files smaller than 1 KB, readBytes always equals 0.
That is because, on the first call to while (!reader.EndOfStream), the reader has to read the file (in this case 1024 bytes, into its internal buffer) to determine whether the file has ended (see the two lines of code I added above). After that read, the stream has been advanced by 1024 bytes, which is why length - offset = 1024; and if your file is smaller than 1 KB, that first call already seeks to the end of the file. This is where you lose data.
The second call does not seek, because you did not send any read request to the reader, so it considers the position unchanged and does not need to read the file again to check for the end of the file. That is why the second call does not lose data.

Related

C# MemoryStream & GZipInputStream: Can't .Read more than 256 bytes

I'm having a problem with writing an uncompressed GZIP stream using SharpZipLib's GZipInputStream. I only seem to be able to get 256 bytes worth of data with the rest not being written to and left zeroed. The compressed stream (compressedSection) has been checked and all data is there (1500+ bytes). The snippet of the decompression process is below:
int msiBuffer = 4096;
using (Stream msi = new MemoryStream(msiBuffer))
{
    msi.Write(compressedSection, 0, compressedSection.Length);
    msi.Position = 0;
    int uncompressedIntSize = AllMethods.GetLittleEndianInt(uncompressedSize, 0); // Gets little-endian value of uncompressed size into an integer
    // SharpZipLib GZip method called
    using (GZipInputStream decompressStream = new GZipInputStream(msi, uncompressedIntSize))
    {
        using (MemoryStream outputStream = new MemoryStream(uncompressedIntSize))
        {
            byte[] buffer = new byte[uncompressedIntSize];
            decompressStream.Read(buffer, 0, uncompressedIntSize); // Stream is decompressed and read
            outputStream.Write(buffer, 0, uncompressedIntSize);
            using (var fs = new FileStream(kernelSectionUncompressed, FileMode.Create, FileAccess.Write))
            {
                fs.Write(buffer, 0, buffer.Length);
                fs.Close();
            }
            outputStream.Close();
        }
        decompressStream.Close();
    }
}
So in this snippet:
1) The compressed section is passed in, ready to be decompressed.
2) The expected size of the uncompressed output (which is stored in a header with the file as a 2-byte little-endian value) is passed through a method to convert it to integer. The header is removed earlier as it is not part of the compressed GZIP file.
3) SharpLibZip's GZIP stream is declared with the compressed file stream (msi) and a buffer equal to int uncompressedIntSize (have tested with a static value of 4096 as well).
4) I set up a MemoryStream to handle writing the output to a file as GZipInputStream doesn't have Read/Write; it takes the expected decompressed file size as the argument (capacity).
5) The Read/Write of the stream needs byte[] array as the first argument, so I set up a byte[] array with enough space to take all the bytes of the decompressed output (3584 bytes in this case, derived from uncompressedIntSize).
6) GZipInputStream decompressStream uses .Read with the buffer as the first argument, from offset 0, using uncompressedIntSize as the count. Checking the arguments in here, the buffer array still has a capacity of 3584 bytes but has only been given 256 bytes of data. The rest are zeroes.
It looks like the output of .Read is being throttled to 256 bytes but I'm not sure where. Is there something I've missed with the Streams, or is this a limitation with .Read?
You need to loop when reading from a stream; the lazy way is probably:
decompressStream.CopyTo(outputStream);
(but this doesn't guarantee to stop after uncompressedIntSize bytes - it'll try to read to the end of decompressStream)
A more manual version (that respects an imposed length limit) would be:
const int BUFFER_SIZE = 1024; // whatever
var buffer = ArrayPool<byte>.Shared.Rent(BUFFER_SIZE);
try
{
    int remaining = uncompressedIntSize, bytesRead;
    while (remaining > 0 && // more to do, and making progress
           (bytesRead = decompressStream.Read(
               buffer, 0, Math.Min(remaining, buffer.Length))) > 0)
    {
        outputStream.Write(buffer, 0, bytesRead);
        remaining -= bytesRead;
    }
    if (remaining != 0) throw new EndOfStreamException();
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}
The issue turned out to be an oversight I'd made earlier in the posted code:
The file I'm working with has 27 sections which are GZipped, but they each have a header which will break the Gzip decompression if the GZipInput stream hits any of them. When opening the base file, it was starting from the beginning (adjusted by 6 to avoid the first header) each time instead of going to the next post-head offset:
brg.BaseStream.Seek(6, SeekOrigin.Begin);
Instead of:
brg.BaseStream.Seek(absoluteSectionOffset, SeekOrigin.Begin);
This meant that the extracted compressed data was an amalgam of the first headerless section + part of the 2nd section along with its header. As the first section is 256 bytes long without its header, this part was being decompressed correctly by the GZipInput stream. But after that is 6-bytes of header which breaks it, resulting in the rest of the output being 00s.
There was no explicit error being thrown by the GZipInput stream when this happened, so I'd incorrectly assumed that the cause was the .Read or something in the stream retaining data from the previous pass. Sorry for the hassle.

NetworkStream.Read() not correctly reading after first iteration of loop

I'm having a problem where my code works as expected when I'm stepping through it but reads incorrect data when running normally. I thought the problem could be timing; however, NetworkStream.Read() should be blocking, and I also tested this by putting the thread to sleep for 1000 ms (more than enough time, and more time than I was giving while stepping through).
The purpose of the code (and what it does when stepping through) is to read a bitmap image into a buffer. The image is preceded by a string containing the image size in bytes, followed by a carriage return and a new line. I believe the problem lies in the read statements, but I can't be sure. The following code is contained within a larger loop that also contains Telnet reads; however, I have not had a problem with those, and they only read ASCII strings, no binary data.
List<byte> len = new List<byte>();
byte[] b = new byte[2];
while (!Encoding.ASCII.GetString(b).Equals("\r\n"))
{
    len.Add(b[0]);
    b[0] = b[1];
    b[1] = (byte)stream.ReadByte();
}
len = len.FindAll(x => x != 0);
len.Add((byte)0);
string lenStr = Encoding.ASCII.GetString(len.ToArray());
int imageSize = int.Parse(lenStr);
byte[] imageIn = new byte[imageSize];
stream.Read(imageIn, 0, imageSize);
using (MemoryStream g = new MemoryStream(imageIn))
{
    g.Position = 0;
    bmp = (Bitmap)Image.FromStream(g);
}
The actual problem that occurs with the code is that the first time it runs it correctly receives the length and image; however, it does not seem to recognize the \r\n in consecutive reads. That may only be a symptom and not the problem itself.
Thanks in advance!
EDIT:
So I did narrow the problem down and managed to fix it by adding some artificial delay between the Telnet request I send with NetworkStream.Write() to ask for the image and the NetworkStream.Read() that retrieves it. However, this solution is messy and I would still like to know why this issue is happening.
The Read() operation returns the number of bytes actually read. It only blocks when there is no data to read, and it can return fewer bytes than specified by the count parameter.
You can easily fix this by putting this inside a loop:
byte[] imageIn = new byte[imageSize];
int remaining = imageSize;
int offset = 0;
while (remaining > 0)
{
    int read = stream.Read(imageIn, offset, remaining);
    if (read == 0) throw new Exception("Connection closed before expected data was read");
    offset += read;
    remaining -= read;
}
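For reuse, the same pattern can be wrapped in a small helper (a sketch of my own; ReadFully is a hypothetical name, not a framework method):
// Sketch: keep calling Read until exactly 'count' bytes have been received.
static void ReadFully(Stream stream, byte[] buffer, int offset, int count)
{
    while (count > 0)
    {
        int read = stream.Read(buffer, offset, count);
        if (read == 0) throw new EndOfStreamException("Connection closed before expected data was read");
        offset += read;
        count -= read;
    }
}

// Usage with the image from the question:
byte[] imageIn = new byte[imageSize];
ReadFully(stream, imageIn, 0, imageSize);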

C# TCP file transfer - Images semi-transferred

I am developing a TCP file transfer client-server program. At the moment I am able to send text files and other file formats perfectly fine, such as .zip with all contents intact on the server end. However, when I transfer a .gif the end result is a gif with same size as the original but with only part of the image showing as if most of the bytes were lost or not written correctly on the server end.
The client sends a 1KB header packet with the name and size of the file to the server. The server then responds with OK if ready and then creates a fileBuffer as large as the file to be sent is.
Here is some code to demonstrate my problem:
// Serverside method snippet dealing with data being sent
while (true)
{
    // Spin the data in
    if (streams[0].DataAvailable)
    {
        streams[0].Read(fileBuffer, 0, fileBuffer.Length);
        break;
    }
}
// Finished receiving file, write from buffer to created file
FileStream fs = File.Open(LOCAL_FOLDER + fileName, FileMode.CreateNew, FileAccess.Write);
fs.Write(fileBuffer, 0, fileBuffer.Length);
fs.Close();
Print("File successfully received.");

// Clientside method snippet dealing with a file send
while (true)
{
    con.Read(ackBuffer, 0, ackBuffer.Length);
    // Wait for OK response to start sending
    if (Encoding.ASCII.GetString(ackBuffer) == "OK")
    {
        // Convert file to bytes
        FileStream fs = new FileStream(inPath, FileMode.Open, FileAccess.Read);
        fileBuffer = new byte[fs.Length];
        fs.Read(fileBuffer, 0, (int)fs.Length);
        fs.Close();
        con.Write(fileBuffer, 0, fileBuffer.Length);
        con.Flush();
        break;
    }
}
I've tried a binary writer instead of just using the filestream with the same result.
Am I incorrect in believing successful file transfer to be as simple as conversion to bytes, transportation and then conversion back to filename/type?
All help/advice much appreciated.
It's not about your image; it's about your code.
If your image bytes were lost or not written correctly, that means your file transfer code is wrong, and any other file you receive (.zip or otherwise) could end up corrupted too.
It's also a huge mistake to set the byte buffer length to the file size. Imagine that you're going to send a large file of about 1 GB; it would then take 1 GB of RAM. For a sensible transfer you should loop over the file while sending.
Here is a way to send/receive files nicely with no size limitation.
Send File
using (FileStream fs = new FileStream(srcPath, FileMode.Open, FileAccess.Read))
{
    long fileSize = fs.Length;
    long sum = 0; // sum here is the total of sent bytes.
    int count = 0;
    byte[] data = new byte[1024]; // 1 KB buffer .. you might use a different size also.
    while (sum < fileSize)
    {
        count = fs.Read(data, 0, data.Length);
        network.Write(data, 0, count);
        sum += count;
    }
    network.Flush();
}
Receive File
long fileSize = ...; // the file size that you are going to receive.
using (FileStream fs = new FileStream(destPath, FileMode.Create, FileAccess.Write))
{
    int count = 0;
    long sum = 0; // sum here is the total of received bytes.
    byte[] data = new byte[1024 * 8]; // 8 KB buffer .. you might use a smaller size also.
    while (sum < fileSize)
    {
        if (network.DataAvailable)
        {
            count = network.Read(data, 0, data.Length);
            fs.Write(data, 0, count);
            sum += count;
        }
    }
}
happy coding :)
When you write over TCP, the data can arrive in a number of packets. I think your early tests happened to fit into one packet, but this gif file is arriving in 2 or more. So when you call Read, you'll only get what's arrived so far - you'll need to check repeatedly until you've got as many bytes as the header told you to expect.
I found Beej's guide to network programming a big help when doing some work with TCP.
As others have pointed out, the data doesn't necessarily all arrive at once, and your code is overwriting the beginning of the buffer each time through the loop. The more robust way to write your reading loop is to read as many bytes as are available and increment a counter to keep track of how many bytes have been read so far so that you know where to put them in the buffer. Something like this works well:
int totalBytesRead = 0;
int bytesRead;
do
{
    bytesRead = streams[0].Read(fileBuffer, totalBytesRead, fileBuffer.Length - totalBytesRead);
    totalBytesRead += bytesRead;
} while (bytesRead != 0);
Stream.Read will return 0 when there's no data left to read.
Doing things this way will perform better than reading a byte at a time. It also gives you a way to ensure that you read the proper number of bytes. If totalBytesRead is not equal to the number of bytes you expected when the loop is finished, then something bad happened.
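If the 1 KB header already tells you the file size, you can also make the loop stop once exactly that many bytes have arrived, instead of waiting for the sender to close the connection (a sketch of my own; expectedSize stands in for the size parsed from the header):
int totalBytesRead = 0;
while (totalBytesRead < expectedSize)
{
    int bytesRead = streams[0].Read(fileBuffer, totalBytesRead, expectedSize - totalBytesRead);
    if (bytesRead == 0) throw new EndOfStreamException("Connection closed early");
    totalBytesRead += bytesRead;
}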
Thanks for your input Tvanfosson. I tinkered around with my code and managed to get it working. The synchronicity between my client and server was off. I took your advice though and replaced read with reading a byte one at a time.

How to write super-fast file-streaming code in C#?

I have to split a huge file into many smaller files. Each of the destination files is defined by an offset and length as the number of bytes. I'm using the following code:
private void copy(string srcFile, string dstFile, int offset, int length)
{
    BinaryReader reader = new BinaryReader(File.OpenRead(srcFile));
    reader.BaseStream.Seek(offset, SeekOrigin.Begin);
    byte[] buffer = reader.ReadBytes(length);

    BinaryWriter writer = new BinaryWriter(File.OpenWrite(dstFile));
    writer.Write(buffer);
}
Considering that I have to call this function about 100,000 times, it is remarkably slow.
Is there a way to make the Writer connected directly to the Reader? (That is, without actually loading the contents into the Buffer in memory.)
I don't believe there's anything within .NET to allow copying a section of a file without buffering it in memory. However, it strikes me that this is inefficient anyway, as it needs to open the input file and seek many times. If you're just splitting up the file, why not open the input file once, and then just write something like:
public static void CopySection(Stream input, string targetFile, int length)
{
    byte[] buffer = new byte[8192];
    using (Stream output = File.OpenWrite(targetFile))
    {
        int bytesRead = 1;
        // This will finish silently if we couldn't read "length" bytes.
        // An alternative would be to throw an exception
        while (length > 0 && bytesRead > 0)
        {
            bytesRead = input.Read(buffer, 0, Math.Min(length, buffer.Length));
            output.Write(buffer, 0, bytesRead);
            length -= bytesRead;
        }
    }
}
This has a minor inefficiency in creating a buffer on each invocation - you might want to create the buffer once and pass that into the method as well:
public static void CopySection(Stream input, string targetFile,
                               int length, byte[] buffer)
{
    using (Stream output = File.OpenWrite(targetFile))
    {
        int bytesRead = 1;
        // This will finish silently if we couldn't read "length" bytes.
        // An alternative would be to throw an exception
        while (length > 0 && bytesRead > 0)
        {
            bytesRead = input.Read(buffer, 0, Math.Min(length, buffer.Length));
            output.Write(buffer, 0, bytesRead);
            length -= bytesRead;
        }
    }
}
Note that this also closes the output stream (due to the using statement) which your original code didn't.
The important point is that this will use the operating system file buffering more efficiently, because you reuse the same input stream, instead of reopening the file at the beginning and then seeking.
I think it'll be significantly faster, but obviously you'll need to try it to see...
This assumes contiguous chunks, of course. If you need to skip bits of the file, you can do that from outside the method. Also, if you're writing very small files, you may want to optimise for that situation too - the easiest way to do that would probably be to introduce a BufferedStream wrapping the input stream.
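As a rough illustration of that last suggestion (a sketch of my own, assuming the CopySection overload above; the file names and section lengths are placeholders): open the input once, wrap it in a BufferedStream, and reuse one buffer for every contiguous section.
// Sketch: one buffered input stream shared across all sections.
byte[] buffer = new byte[8192];
using (Stream input = new BufferedStream(File.OpenRead("huge.bin")))
{
    // Contiguous sections: each call continues where the previous one stopped.
    CopySection(input, "part1.bin", 100000, buffer);
    CopySection(input, "part2.bin", 250000, buffer);
    CopySection(input, "part3.bin", 50000, buffer);
}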
The fastest way to do file I/O from C# is to use the Windows ReadFile and WriteFile functions. I have written a C# class that encapsulates this capability, as well as a benchmarking program that looks at different I/O methods, including BinaryReader and BinaryWriter. See my blog post at:
http://designingefficientsoftware.wordpress.com/2011/03/03/efficient-file-io-from-csharp/
How large is length? You may do better to re-use a fixed sized (moderately large, but not obscene) buffer, and forget BinaryReader... just use Stream.Read and Stream.Write.
(edit) something like:
private static void copy(string srcFile, string dstFile, int offset,
                         int length, byte[] buffer)
{
    using (Stream inStream = File.OpenRead(srcFile))
    using (Stream outStream = File.OpenWrite(dstFile))
    {
        inStream.Seek(offset, SeekOrigin.Begin);
        int bufferLength = buffer.Length, bytesRead;
        while (length > bufferLength &&
               (bytesRead = inStream.Read(buffer, 0, bufferLength)) > 0)
        {
            outStream.Write(buffer, 0, bytesRead);
            length -= bytesRead;
        }
        while (length > 0 &&
               (bytesRead = inStream.Read(buffer, 0, length)) > 0)
        {
            outStream.Write(buffer, 0, bytesRead);
            length -= bytesRead;
        }
    }
}
You shouldn't re-open the source file each time you do a copy, better open it once and pass the resulting BinaryReader to the copy function. Also, it might help if you order your seeks, so you don't make big jumps inside the file.
If the lengths aren't too big, you can also try to group several copy calls by grouping offsets that are near to each other and reading the whole block you need for them, for example:
offset = 1234, length = 34
offset = 1300, length = 40
offset = 1350, length = 1000
can be grouped to one read:
offset = 1234, length = 1116 (the span from byte 1234 up to byte 2350, the end of the last section)
Then you only have to "seek" in your buffer and can write the three new files from there without having to read again.
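A sketch of that grouping idea (my own, not from the answer; input is assumed to be the already-open source stream, and the output file names and the WriteSection helper are hypothetical): read the combined block once, then write each destination file from a slice of the buffer.
// Sketch: one seek + one read covers all three nearby sections.
byte[] block = new byte[1116];
input.Seek(1234, SeekOrigin.Begin);
int filled = 0, read;
while (filled < block.Length &&
       (read = input.Read(block, filled, block.Length - filled)) > 0)
{
    filled += read;
}
WriteSection("out1.bin", block, 1234 - 1234, 34);    // original offset 1234, length 34
WriteSection("out2.bin", block, 1300 - 1234, 40);    // original offset 1300, length 40
WriteSection("out3.bin", block, 1350 - 1234, 1000);  // original offset 1350, length 1000

static void WriteSection(string path, byte[] data, int offset, int count)
{
    using (Stream output = File.OpenWrite(path))
    {
        output.Write(data, offset, count);
    }
}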
Have you considered using the CCR? Since you are writing to separate files, you can do everything in parallel (read and write), and the CCR makes it very easy to do this.
static void Main(string[] args)
{
    Dispatcher dp = new Dispatcher();
    DispatcherQueue dq = new DispatcherQueue("DQ", dp);
    Port<long> offsetPort = new Port<long>();
    Arbiter.Activate(dq, Arbiter.Receive<long>(true, offsetPort,
        new Handler<long>(Split)));
    FileStream fs = File.Open(file_path, FileMode.Open);
    long size = fs.Length;
    fs.Dispose();
    for (long i = 0; i < size; i += split_size)
    {
        offsetPort.Post(i);
    }
}

private static void Split(long offset)
{
    FileStream reader = new FileStream(file_path, FileMode.Open,
        FileAccess.Read);
    reader.Seek(offset, SeekOrigin.Begin);
    long toRead = 0;
    if (offset + split_size <= reader.Length)
        toRead = split_size;
    else
        toRead = reader.Length - offset;
    byte[] buff = new byte[toRead];
    reader.Read(buff, 0, (int)toRead);
    reader.Dispose();
    File.WriteAllBytes("c:\\out" + offset + ".txt", buff);
}
This code posts offsets to a CCR port which causes a Thread to be created to execute the code in the Split method. This causes you to open the file multiple times but gets rid of the need for synchronization. You can make it more memory efficient but you'll have to sacrifice speed.
The first thing I would recommend is to take measurements. Where are you losing your time? Is it in the read, or the write?
Over 100,000 accesses (sum the times):
How much time is spent allocating the buffer array?
How much time is spent opening the file for read (is it the same file every time?)
How much time is spent in read and write operations?
If you aren't doing any type of transformation on the file, do you need a BinaryWriter, or can you use a filestream for writes? (try it, do you get identical output? does it save time?)
Using FileStream + StreamWriter I know it's possible to create massive files in little time (less than 1 min 30 seconds). I generate three files totaling 700+ megabytes from one file using that technique.
Your primary problem with the code you're using is that you are opening a file every time. That is creating file I/O overhead.
If you knew the names of the files you would be generating ahead of time, you could extract the File.OpenWrite into a separate method; it will increase the speed. Without seeing the code that determines how you are splitting the files, I don't think you can get much faster.
Has no one suggested threading? Writing the smaller files looks like a textbook example of where threads are useful. Set up a bunch of threads to create the smaller files; this way, you can create them all in parallel and you don't need to wait for each one to finish. My assumption is that creating the files (a disk operation) will take WAY longer than splitting up the data. And of course you should verify first that a sequential approach is not adequate.
(For future reference.)
Quite possibly the fastest way to do this would be to use memory mapped files (so primarily copying memory, and the OS handling the file reads/writes via its paging/memory management).
Memory Mapped files are supported in managed code in .NET 4.0.
But as noted, you need to profile, and expect to switch to native code for maximum performance.
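For what it's worth, a rough sketch of the memory-mapped approach on .NET 4.0+ (my own, with placeholder file names, offset, and length; not benchmarked against the other options here):
// Sketch: map the source file once, then copy one section straight out of the mapping.
using System.IO.MemoryMappedFiles;

using (var mmf = MemoryMappedFile.CreateFromFile(srcFile, FileMode.Open))
using (var section = mmf.CreateViewStream(offset, length, MemoryMappedFileAccess.Read))
using (var output = File.OpenWrite(dstFile))
{
    // Stream.CopyTo is available from .NET 4.0 onwards.
    section.CopyTo(output);
}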

Is there a BinaryReader in C++ to read data written from a BinaryWriter in C#?

I've written several ints, char[]s and the such to a data file with BinaryWriter in C#. Reading the file back in (in C#) with BinaryReader, I can recreate all of the pieces of the file perfectly.
However, attempting to read them back in with C++ yields some scary results. I was using fstream to attempt to read back the data and the data was not reading in correctly. In C++, I set up an fstream with ios::in|ios::binary|ios::ate and used seekg to target my location. I then read the next four bytes, which were written as the integer "16" (and reads correctly into C#). This reads as 1244780 in C++ (not the memory address, I checked). Why would this be? Is there an equivalent to BinaryReader in C++? I noticed it mentioned on msdn, but that's Visual C++ and intellisense doesn't even look like c++, to me.
Example code for writing the file (C#):
public static void OpenFile(string filename)
{
fs = new FileStream(filename, FileMode.Create);
w = new BinaryWriter(fs);
}
public static void WriteHeader()
{
w.Write('A');
w.Write('B');
}
public static byte[] RawSerialize(object structure)
{
Int32 size = Marshal.SizeOf(structure);
IntPtr buffer = Marshal.AllocHGlobal(size);
Marshal.StructureToPtr(structure, buffer, true);
byte[] data = new byte[size];
Marshal.Copy(buffer, data, 0, size);
Marshal.FreeHGlobal(buffer);
return data;
}
public static void WriteToFile(Structures.SomeData data)
{
byte[] buffer = Serializer.RawSerialize(data);
w.Write(buffer);
}
I'm not sure how I could show you the data file.
Example of reading the data back (C#):
BinaryReader reader = new BinaryReader(new FileStream("C://chris.dat", FileMode.Open));
char[] a = new char[2];
a = reader.ReadChars(2);
Int32 numberoffiles;
numberoffiles = reader.ReadInt32();
Console.Write("Reading: ");
Console.WriteLine(a);
Console.Write("NumberOfFiles: ");
Console.WriteLine(numberoffiles);
This I want to perform in c++. Initial attempt (fails at first integer):
fstream fin("C://datafile.dat", ios::in|ios::binary|ios::ate);
char *memblock = 0;
int size;
size = 0;
if (fin.is_open())
{
    size = static_cast<int>(fin.tellg());
    memblock = new char[static_cast<int>(size + 1)];
    memset(memblock, 0, static_cast<int>(size + 1));
    fin.seekg(0, ios::beg);
    fin.read(memblock, size);
    fin.close();
    if (!strncmp("AB", memblock, 2)) {
        printf("test. This works.");
    }
    fin.seekg(2); // read the stream starting from after the second byte.
    int i;
    fin >> i;
}
Edit: It seems that no matter what location I use "seekg" to, I receive the exact same value.
You realize that a char is 16 bits in C# rather than the 8 bits it usually is in C. This is because a char in C# is designed to handle Unicode text rather than raw data. Therefore, writing chars using the BinaryWriter will result in encoded Unicode text being written rather than raw bytes.
This may have led you to calculate the offset of the integer incorrectly. I recommend you take a look at the file in a hex editor, and if you cannot work out the issue, post the file and the code here.
EDIT1
Regarding your C++ code, do not use the >> operator to read from a binary stream. Use read() with the address of the int that you want to read to.
int i;
fin.read((char*)&i, sizeof(int));
EDIT2
Reading from a closed stream is also going to result in undefined behavior. You cannot call fin.close() and then still expect to be able to read from it.
This may or may not be related to the problem, but...
When you create the BinaryWriter, it defaults to writing chars in UTF-8. This means that some of them may be longer than one byte, throwing off your seeks.
You can avoid this by using the 2 argument constructor to specify the encoding. An instance of System.Text.ASCIIEncoding would be the same as what C/C++ use by default.
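For example (a small sketch of my own, adjusting the OpenFile method from the question to use the two-argument constructor):
public static void OpenFile(string filename)
{
    fs = new FileStream(filename, FileMode.Create);
    // Force single-byte characters by passing an explicit encoding.
    w = new BinaryWriter(fs, System.Text.Encoding.ASCII);
}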
There are many things going wrong in your C++ snippet. You shouldn't mix binary reading with formatted reading:
// The file is closed after this line. It is WRONG to read from a closed file.
fin.close();
if (!strncmp("AB", memblock, 2)) {
    printf("test. This works.");
}
fin.seekg(2); // You are moving the "get pointer" of a closed file
int i;
// Even if the file is opened, you should not mix formatted reading
// with binary reading. ">>" is just an operator for reading formatted data.
// In other words, it is for reading "text" and converting it to a
// variable of a specific data type.
fin >> i;
If it's any help, I went through how the BinaryWriter writes data here.
It's been a while but I'll quote it and hope it's accurate:
Int16 is written as 2 bytes and padded.
Int32 is written as Little Endian and zero padded
Floats are more complicated: the writer takes the float value and dereferences it, writing out the contents at that memory address as raw bytes.
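As a quick check of the Int32 behavior (a sketch of my own): writing the value 16 with a BinaryWriter produces exactly four little-endian bytes.
using (var ms = new MemoryStream())
using (var bw = new BinaryWriter(ms))
{
    bw.Write(16); // Int32
    bw.Flush();
    // Prints 10-00-00-00: 0x10 == 16, followed by three zero bytes.
    Console.WriteLine(BitConverter.ToString(ms.ToArray()));
}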
