Read Int32 from binary stream at a specific position - C#

I have a MemoryStream reading a specific part of my data. From the binary I want a single ReadInt32 value from bytes 5-8. How do I achieve this in:
using (var reader = new BinaryReader(stream))
{
    // somebyte1
    // somebyte2
    // somebyte3
    // get only this value
    int v = reader.ReadInt32();
}

Move the base stream to the position you want to read from (byte 5 is offset 4):
stream.Seek(4, SeekOrigin.Begin);
using (var reader = new BinaryReader(stream))
{
    int v = reader.ReadInt32();
}

In .NET there are stream types that are seekable and types that do not allow seeking. This is indicated by the CanSeek property. If your stream allows seeking (and a MemoryStream does), you can simply move the current position and read the data. If the stream does not allow seeking, your only choice is to read and discard data until you reach the position where your desired data begins. So the generalized solution to your problem would be:
const int targetPosition = 4;
using (BinaryReader reader = new BinaryReader(stream))
{
    if (stream.CanSeek)
    {
        stream.Position = targetPosition;
    }
    else
    {
        reader.ReadBytes(targetPosition);
    }
    int result = reader.ReadInt32();
}
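For what it's worth, that logic can be wrapped in a small helper. This is just a sketch (the helper name ReadInt32At is mine, not from the answer above); note that ReadInt32 assembles the four bytes in little-endian order, so the result depends on the byte order of your data:

```csharp
using System;
using System.IO;

static class StreamUtil
{
    // Reads a little-endian Int32 starting at the given byte offset.
    // Seeks when the stream supports it, otherwise skips bytes by reading them.
    public static int ReadInt32At(Stream stream, int offset)
    {
        var reader = new BinaryReader(stream);
        if (stream.CanSeek)
            stream.Position = offset;
        else
            reader.ReadBytes(offset); // discard the leading bytes
        return reader.ReadInt32();
    }
}

class Demo
{
    static void Main()
    {
        // Bytes at offsets 4-7 hold the value 1 (little-endian).
        var data = new byte[] { 9, 9, 9, 9, 1, 0, 0, 0 };
        using (var ms = new MemoryStream(data))
            Console.WriteLine(StreamUtil.ReadInt32At(ms, 4)); // prints 1
    }
}
```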

Related

How to get position and length of an unzipped GZipStream in C#?

I'm trying to read .gz files using a BinaryReader by first unzipping with GZipStream, and then creating a new BinaryReader over the GZipStream. However, when I try to use the BaseStream.Position and BaseStream.Length of the BinaryReader (to know when I'm at the end of my file), I get a NotSupportedException. Checking the documentation for these properties in the GZipStream class shows:
Length
This property is not supported and always throws a NotSupportedException. (Overrides Stream.Length.)
Position
This property is not supported and always throws a NotSupportedException. (Overrides Stream.Position.)
So my question is: how can I know when I'm at the end of my file when reading a decompressed GZipStream using a BinaryReader? Thanks.
Here is my code:
Stream stream = new MemoryStream(textAsset.bytes);
GZipStream zippedStream = new GZipStream(stream, CompressionMode.Decompress);
using (BinaryReader reader = new BinaryReader(zippedStream))
{
    while (reader.BaseStream.Position != reader.BaseStream.Length)
    {
        // do stuff with BinaryReader
    }
}
The above throws:
NotSupportedException: Operation is not supported. System.IO.Compression.DeflateStream.get_Position()
due to the BaseStream.Position call in the while().
You can copy your zippedStream to a MemoryStream instance, which can then be read in full and exposed as a byte array via its ToArray function. That is the easiest solution I can think of.
Stream stream = new MemoryStream(textAsset.bytes);
byte[] result;
using (GZipStream zippedStream = new GZipStream(stream, CompressionMode.Decompress))
{
    using (MemoryStream reader = new MemoryStream())
    {
        zippedStream.CopyTo(reader);
        result = reader.ToArray();
    }
}
Alternatively, if you want to read the stream in chunks:
using (GZipStream zippedStream = new GZipStream(stream, CompressionMode.Decompress))
{
    byte[] buffer = new byte[16 * 1024];
    int read;
    while ((read = zippedStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // do work
    }
}
Depending on what you are decoding, you can read the first value into a byte array using the BinaryReader and then use BitConverter to convert those bytes into the type you want. You can then use the BinaryReader as normal until the start of the next record.
byte[] headermarker = new byte[4];
int count;
// while bytes are available in the underlying stream
while ((count = br.Read(headermarker, 0, 4)) > 0)
{
    Int32 marker = BitConverter.ToInt32(headermarker, 0);
    //
    // now use the BinaryReader for the rest of the record until we loop
    //
}
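Combining the two suggestions above into a runnable sketch: once the decompressed bytes sit in a MemoryStream, Position and Length are supported again and the question's original loop becomes valid. The small in-memory gzip payload below is an assumption standing in for textAsset.bytes:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class Demo
{
    static void Main()
    {
        // Build a small gzipped payload in memory (stand-in for textAsset.bytes).
        byte[] compressed;
        using (var ms = new MemoryStream())
        {
            using (var gz = new GZipStream(ms, CompressionMode.Compress))
                gz.Write(new byte[] { 1, 2, 3, 4 }, 0, 4);
            compressed = ms.ToArray();
        }

        // Decompress fully into a seekable MemoryStream.
        var decompressed = new MemoryStream();
        using (var gz = new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress))
            gz.CopyTo(decompressed);
        decompressed.Position = 0;

        // Position and Length now work, so end-of-data detection is trivial.
        using (var reader = new BinaryReader(decompressed))
            while (reader.BaseStream.Position != reader.BaseStream.Length)
                Console.Write(reader.ReadByte() + " "); // prints "1 2 3 4 "
    }
}
```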

Unable to re-construct a file using byte array retrieved from another file (chunk-by-chunk)

I am currently trying to construct file B by extracting a certain number of bytes from file A (chunk by chunk). The size of file B is 38052441 bytes, and its contents start at byte 34 of file A. If I do it in one shot, I manage to extract file B from file A without any issue, as shown in the snippet below.
test = new byte[38052441];
// madefilePath: file A, madecabfilePath: file B
using (BinaryReader reader = new BinaryReader(new FileStream(madefilePath, FileMode.Open)))
using (BinaryWriter bw = new BinaryWriter(File.Open(madecabfilePath, FileMode.OpenOrCreate)))
{
    reader.BaseStream.Seek(34, SeekOrigin.Begin);
    reader.Read(test, 0, 38052441);
    bw.Write(test);
    bw.Close();
    reader.Close();
}
However, if I try to do it in multiple passes (I have to do this, because this feature will be ported to the Compact Framework in the future), I keep getting a corrupted file. Currently, I am testing by getting the first 20 MB, writing them into a file, then getting the remaining bytes and writing them into the file again.
int max = 38052474;
int offset = 34;
int weight = 20000000;
bool isComplete = false;
test = null;
test = new byte[weight];
using (BinaryWriter bw = new BinaryWriter(File.Open(madecabfilePath, FileMode.OpenOrCreate)))
using (BinaryReader reader = new BinaryReader(new FileStream(madefilePath, FileMode.Open)))
{
    while (!isComplete)
    {
        if (offset + weight < max)
        {
            reader.BaseStream.Seek(offset, SeekOrigin.Begin);
            reader.Read(test, 0, weight);
            bw.Write(test);
            offset = offset + weight;
        }
        else
        {
            weight = max - offset;
            test = null;
            test = new byte[weight];
            reader.BaseStream.Seek(offset, SeekOrigin.Begin);
            reader.Read(test, 0, weight);
            bw.Write(test);
            // Terminate everything
            reader.Close();
            bw.Close();
            isComplete = true;
        }
    }
}
I think the issue lies with my logic, but I can't figure out why. Any help is appreciated. Thank you.
BinaryReader.Read() returns the number of bytes that were actually read, so you can simplify your logic (and probably fix some issues) with something like:
using (BinaryWriter bw = new BinaryWriter(File.Open(madecabfilePath, FileMode.OpenOrCreate)))
using (BinaryReader reader = new BinaryReader(new FileStream(madefilePath, FileMode.Open)))
{
    reader.BaseStream.Seek(offset, SeekOrigin.Begin);
    while (!isComplete)
    {
        int bytesRead = reader.Read(test, 0, weight);
        if (bytesRead == 0)
        {
            isComplete = true;
        }
        else
        {
            bw.Write(test, 0, bytesRead);
        }
    }
}
Note that you don't need to explicitly close bw or reader, as the using statement will do that for you. Also note that after the initial Seek() call, the BinaryReader keeps track of the position automatically as you read.
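That loop can be made self-contained. The helper below is only a sketch of the range copy the question is after (the name CopyRange is mine, and in-memory streams stand in for the files); it relies on Read's return value rather than re-seeking before every chunk:

```csharp
using System;
using System.IO;

class Demo
{
    // Copies `length` bytes starting at `offset` from source to destination,
    // trusting Read's return value instead of tracking offsets manually.
    static void CopyRange(Stream source, Stream destination, long offset, long length)
    {
        source.Seek(offset, SeekOrigin.Begin);
        var buffer = new byte[8192];
        while (length > 0)
        {
            int want = (int)Math.Min(buffer.Length, length);
            int read = source.Read(buffer, 0, want);
            if (read == 0) break; // unexpected end of stream
            destination.Write(buffer, 0, read);
            length -= read;
        }
    }

    static void Main()
    {
        var src = new MemoryStream(new byte[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
        var dst = new MemoryStream();
        CopyRange(src, dst, 3, 4); // mirrors "skip a header, copy a fixed length"
        Console.WriteLine(BitConverter.ToString(dst.ToArray())); // 03-04-05-06
    }
}
```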

Prevent JsonTextReader from consuming the stream during deserialization

I'm using Json.Net to consume some seekable streams.
// reset the input stream, in case it was previously read
inputStream.Position = 0;
using (var textReader = new StreamReader(inputStream))
{
using (var reader = new JsonTextReader(textReader))
{
deserialized = serializer.Deserialize(reader, expectedType);
}
}
However, this method 'consumes' the stream, meaning the first valid Json token it contains is removed from the stream.
That is very annoying, and seemingly needless: the stream's Position is provided to emulate consumption, and 'reading' generally implies 'not modifying'.
Of course, I can dump the stream into a MemoryStream to protect my precious source stream, but that's a huge overhead, especially when doing trial-and-error on a deserialization.
If there is a way to just 'read' and not 'read-and-consume', thanks for your help; I could not find documentation about that (and I hope this post will help others to google the solution ^^).
JsonTextReader is a forward-only reader, meaning it cannot be set back to a position earlier in the JSON to re-read a portion of it, even if the underlying stream supports seeking. However, the reader does not actually "consume" the stream, as you said. If you set the CloseInput property on the reader to false to prevent it from closing the underlying reader and stream when it is disposed, you can position the stream back to the beginning and open a new reader on the same stream to re-read the JSON. Here is a short program to demonstrate reading the same stream twice:
class Program
{
    static void Main(string[] args)
    {
        string json = @"{ ""name"": ""foo"", ""size"": ""10"" }";
        MemoryStream inputStream = new MemoryStream(Encoding.UTF8.GetBytes(json));
        JsonSerializer serializer = new JsonSerializer();
        using (var textReader = new StreamReader(inputStream))
        {
            for (int i = 0; i < 2; i++)
            {
                inputStream.Position = 0;
                using (var reader = new JsonTextReader(textReader))
                {
                    reader.CloseInput = false;
                    Widget w = serializer.Deserialize<Widget>(reader);
                    Console.WriteLine("Name: " + w.Name);
                    Console.WriteLine("Size: " + w.Size);
                    Console.WriteLine();
                }
            }
        }
    }
}
class Widget
{
    public string Name { get; set; }
    public int Size { get; set; }
}
Output:
Name: foo
Size: 10
Name: foo
Size: 10
Fiddle: https://dotnetfiddle.net/fftZV7
A stream may be consumed as it is read. The solution could be to copy it to a memory or file stream first, as below:
MemoryStream ms = new MemoryStream();
inputStream.CopyTo(ms);
ms.Position = 0;
using (var textReader = new StreamReader(ms))
(...)
Please let me know if it works.

Multiple (Sequential) XmlReader instances on same stream

My question should be relatively straightforward:
Is it (in any way) possible to create multiple XmlReader objects for the same stream in sequence, without the first reader advancing the stream to the end once it is disposed?
Sample code (Note that the second call to ReadElement will fail because the first reader advanced the stream to the end, for whatever reason):
private static void DoTest()
{
    using (var stream = new MemoryStream())
    {
        WriteElement("Test", stream);
        Console.WriteLine("Stream Length after first write: {0}", stream.Length);
        WriteElement("Test2", stream);
        Console.WriteLine("Stream Length after second write: {0}", stream.Length);
        stream.Position = 0;
        Console.WriteLine(ReadElement(stream));
        Console.WriteLine("Position is now: {0}/{1}", stream.Position, stream.Length);
        Console.WriteLine(ReadElement(stream)); // Note that this will fail due to the stream position now being at the end.
    }
}
private static string ReadElement(Stream source)
{
    string result;
    using (var reader = XmlReader.Create(source, new XmlReaderSettings
    {
        ConformanceLevel = ConformanceLevel.Fragment,
        CloseInput = false
    }))
    {
        reader.Read();
        result = reader.Name;
        reader.Read();
    }
    return result;
}
private static void WriteElement(string name, Stream target)
{
    using (var writer = XmlWriter.Create(target, new XmlWriterSettings
    {
        ConformanceLevel = ConformanceLevel.Fragment,
        WriteEndDocumentOnClose = false,
        OmitXmlDeclaration = true,
    }))
    {
        writer.WriteStartElement(name);
        writer.WriteEndElement();
    }
}
If this is not possible with 'pure .Net', are there any alternative ('Light') Xml parser libraries out there that would support this behaviour?
1. Messy way
If you're able to save the length of each fragment, you could do the following:
int len, i = 0; // len is the length of the fragment, saved when writing
byte[] buffer = new byte[0xff];
while (len-- > 0)
    buffer[i++] = (byte)stream.ReadByte(); // copies the fragment
This copies an interval of bytes from the MemoryStream into a separate buffer. You can then juggle with the buffer by assigning it to a new MemoryStream or a string (see the XmlReader.Create overloads).
The problem is that the first read operation is too greedy and eats the whole stream.
2. Original way
Write your own stream to suit your needs!
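A third option worth sketching (my suggestion, not from the answers above): if all you need is each element in order, keep a single fragment-mode reader alive and pull elements from it one at a time, which sidesteps the dispose-advances-the-stream problem entirely. The element names below are just the question's sample values:

```csharp
using System;
using System.IO;
using System.Xml;

class Demo
{
    static void Main()
    {
        // Write two root-level elements as an XML fragment, as in the question.
        var stream = new MemoryStream();
        var ws = new XmlWriterSettings
        {
            ConformanceLevel = ConformanceLevel.Fragment,
            OmitXmlDeclaration = true
        };
        using (var writer = XmlWriter.Create(stream, ws))
        {
            writer.WriteStartElement("Test");
            writer.WriteEndElement();
            writer.WriteStartElement("Test2");
            writer.WriteEndElement();
        }
        stream.Position = 0;

        // One fragment-mode reader, advanced element by element.
        var rs = new XmlReaderSettings { ConformanceLevel = ConformanceLevel.Fragment };
        using (var reader = XmlReader.Create(stream, rs))
        {
            while (reader.Read())
                if (reader.NodeType == XmlNodeType.Element)
                    Console.WriteLine(reader.Name); // prints Test, then Test2
        }
    }
}
```

The trade-off is that all consumers must share the one reader; if they genuinely need independent readers, buffering each fragment (option 1) remains necessary.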

How can I read an Http response stream twice in C#?

I am trying to read an Http response stream twice via the following:
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
stream = response.GetResponseStream();
RssReader reader = new RssReader(stream);
do
{
    element = reader.Read();
    if (element is RssChannel)
    {
        feed.Channels.Add((RssChannel)element);
    }
} while (element != null);
StreamReader sr = new StreamReader(stream);
feed._FeedRawData = sr.ReadToEnd();
However, when the StreamReader code executes, there is no data returned because the stream has already been read to the end. I tried to reset the stream via stream.Position = 0, but this throws an exception (I think because the stream doesn't allow its position to be changed manually).
Basically, I would like to parse the stream for XML and have access to the raw data (in string format).
Any ideas?
Copy it into a new MemoryStream first. Then you can re-read the MemoryStream as many times as you like:
Stream responseStream = CopyAndClose(response.GetResponseStream());
// Do something with the stream
responseStream.Position = 0;
// Do something with the stream again

private static Stream CopyAndClose(Stream inputStream)
{
    const int readSize = 256;
    byte[] buffer = new byte[readSize];
    MemoryStream ms = new MemoryStream();
    int count = inputStream.Read(buffer, 0, readSize);
    while (count > 0)
    {
        ms.Write(buffer, 0, count);
        count = inputStream.Read(buffer, 0, readSize);
    }
    ms.Position = 0;
    inputStream.Close();
    return ms;
}
Copying the stream to a MemoryStream as suggested by Iain is the right approach. But since .NET Framework 4 (released 2010) we have Stream.CopyTo. Example from the docs:
// Create the streams.
MemoryStream destination = new MemoryStream();
using (FileStream source = File.Open(@"c:\temp\data.dat", FileMode.Open))
{
    Console.WriteLine("Source length: {0}", source.Length.ToString());
    // Copy source to destination.
    source.CopyTo(destination);
}
Console.WriteLine("Destination length: {0}", destination.Length.ToString());
Console.WriteLine("Destination length: {0}", destination.Length.ToString());
Afterwards you can read destination as many times as you like:
// re-set to beginning and convert stream to string
destination.Position = 0;
StreamReader streamReader = new StreamReader(destination);
string text = streamReader.ReadToEnd();
// re-set to beginning and read again
destination.Position = 0;
RssReader rssReader = new RssReader(destination);
(I have seen Endy's comment but since it is an appropriate, current answer, it should have its own answer entry.)
Have you tried resetting the stream position?
If this does not work, you can copy the stream to a MemoryStream, where you can reset the position (e.g. to 0) as often as you want.
