EndianBinaryReader - Continuous update of the input stream? - C#

I am trying to use the EndianBinaryReader and EndianBinaryWriter that Jon Skeet wrote as part of his MiscUtil library. It works great for the two uses I have made of it.
The first is reading from a NetworkStream (TcpClient), where I sit in a loop reading the data as it comes in. I can create a single EndianBinaryReader and then just dispose of it on shutdown of the application. I construct the EndianBinaryReader by passing in TcpClient.GetStream().
I am now trying to do the same thing when reading from a UdpClient, but this does not have a stream since it is connectionless, so I get the data like so:
byte[] data = udpClientSnapShot.Receive(ref endpoint);
I could put this data into a memory stream
var memoryStream = new MemoryStream(data);
and then create the EndianBinaryReader
var endianBinaryReader = new EndianBinaryReader(
    new BigEndianBitConverter(), memoryStream, Encoding.ASCII);
but this means I have to create a new reader every time I do a read. Is there a way I can just create a single stream that I keep updating with the data from the UDP client?

I can't remember whether EndianBinaryReader buffers - you could overwrite a single MemoryStream? But to be honest there is very little overhead from an extra object here. How big are the packets? (putting it into a MemoryStream will clone the byte[]).
I'd be tempted to use the simplest thing that works and see if there is a real problem. Probably the one change I would make is to introduce using (since they are IDisposable):
using (var memoryStream = new MemoryStream(data))
using (var endianBinaryReader = ..blah..)
{
    // use it
}
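Spelled out with the constructor call from the question, that pattern looks like:
using (var memoryStream = new MemoryStream(data))
using (var endianBinaryReader = new EndianBinaryReader(
    new BigEndianBitConverter(), memoryStream, Encoding.ASCII))
{
    // read the packet's fields here, e.g. endianBinaryReader.ReadInt32()
}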

Your best option is probably a subclass of the .NET Stream class to provide your custom functionality. The class is designed to be extended with custom behavior.
It may look daunting because of the number of members, but it is easier than it looks. There are a number of boolean properties like "CanWrite", etc. Override them and have them all return false except for the functionality your reader needs (CanRead is probably the only one that needs to be true).
Then override all of the methods that start with the phrase "When overridden in a derived class" in the help for Stream, and have the unsupported methods throw NotSupportedException (rather than the default NotImplementedException).
Implement the Read method to return data from your buffered UDP packets, perhaps using a linked list of buffers and setting used buffers to null as you read past them so that the memory footprint doesn't grow unbounded.
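A rough sketch of that approach (the names here are illustrative, not an existing class; Enqueue would be called from your udpClient.Receive loop, and a real implementation must decide how Read behaves when no packet is waiting, since returning 0 signals end-of-stream):
using System;
using System.Collections.Generic;
using System.IO;

class UdpPacketStream : Stream
{
    private readonly Queue<byte[]> packets = new Queue<byte[]>();
    private byte[] current;
    private int offset;

    // Call from the receive loop with each udpClient.Receive() result.
    public void Enqueue(byte[] packet)
    {
        lock (packets) packets.Enqueue(packet);
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }

    public override int Read(byte[] buffer, int index, int count)
    {
        if (current == null || offset == current.Length)
        {
            lock (packets)
            {
                if (packets.Count == 0) return 0; // a live reader would block here instead
                current = packets.Dequeue(); // the previous buffer becomes garbage here
            }
            offset = 0;
        }
        int n = Math.Min(count, current.Length - offset);
        Buffer.BlockCopy(current, offset, buffer, index, n);
        offset += n;
        return n;
    }

    public override void Flush() { } // nothing is buffered for writing

    // Everything below is unsupported in a read-only, forward-only stream.
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int index, int count) { throw new NotSupportedException(); }
}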

What are the risks in manipulating a provided stream multiple times?

Given a stream provided by a user, where we expect them to manage its disposal through a typical using block:
using (var stream = new MemoryStream())
{
    MyMethod(stream);
}
Is there any risk in copying back to the stream after working on it? Specifically, we have a method that populates the data, but we have a conditional need to sort it. So MyMethod is something like this:
void MyMethod(Stream stream, bool sort = false)
{
    // Stream is populated
    stream.Position = 0;
    if (sort)
    {
        Sort(stream);
    }
}
void Sort(Stream stream)
{
    using (var sortedStream = new MemoryStream())
    {
        // Sort per requirements into the new sorted local stream
        sortedStream.Position = 0;
        // Is this safe? Any risk of losing data or a memory leak?
        sortedStream.CopyTo(stream);
    }
}
The thing to notice is that we are populating the stream provided by the user and then sorting it into a local stream. Since the local stream is owned by the local method, it is cleaned up; conversely, we can NOT clean up the provided stream, but we want to populate it with the local results.
To reiterate my question: is there anything wrong with this? Is there a risk of garbage data being left in the stream, or some other issue I am not thinking of?
Stream is an abstract class, and has a lot of different implementations. Not all streams can be written to, so in some cases the code may not work as expected, or could crash.
sortedStream.Position = 0;
sortedStream.CopyTo(stream);
You would need to check the CanSeek and CanWrite properties beforehand:
if (sortedStream.CanSeek && stream.CanWrite)
{
    sortedStream.Position = 0;
    sortedStream.CopyTo(stream);
}
else
{
    // not supported
}
Whether a given stream supports moving the position around and rewriting data over itself depends on the specific stream. Not all streams are allowed to change their position, not all are able to write, not all are able to overwrite existing data, and some can do all of those things.
A well-behaved stream shouldn't leak resources if you attempt any of those unsupported things; it ought to just throw an exception. Of course, technically a custom stream could do whatever it wants, so you most certainly could write your own stream that leaks resources when changing the position. But at that point the leak is a bug in that stream's implementation, not in your code that sorts the data in the stream. The code you've shown only needs to worry about a stream throwing an exception when an unsupported operation is performed.
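One extra note on the "garbage data" part of the question: if the sorted output could ever be shorter than what is already in the target stream, the leftover tail would survive the copy. A sketch of guarding against that (SetLength requires a seekable, writable stream):
stream.Position = 0;
sortedStream.Position = 0;
sortedStream.CopyTo(stream);
stream.SetLength(stream.Position); // truncate any leftover tail bytes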
I have no idea why you don't sort the data before you insert it into the stream, or why you use a stream at all when your access pattern seems to be random-access, but technically it's fine. You can do it. It will work.

Pass-Through Stream (not having to save to memory in the middle)

In my C# program, I am working with an object (third-party, so I have no way of changing its source code) that takes a stream in its constructor (var myObject = new MyObject(stream);).
My challenge is that I need to make some changes to some of the lines of my file before it is given to this object.
To do this, I wrote the following code (which does work):
using (var st = new MemoryStream())
using (var reader = new StreamReader(path))
{
    using (var writer = new StreamWriter(st))
    {
        while (!reader.EndOfStream)
        {
            // Note: Write (not WriteLine) drops the original line breaks
            var currentLine = reader.ReadLine().Replace(@"XXXX", @"YYYY");
            writer.Write(currentLine);
        }
        writer.Flush();
        st.Position = 0;
        var myObject = new MyObject(st);
    }
}
So, this does work, but it seems inefficient, since it is no longer really streaming the information but storing it all in the memory stream before handing it to the object.
Is there a way to create a transform / "pass-through stream" that will:
Read in each small amount from the streamreader
Make the adjustment on that small amount
Stream that amount through
So there won't be a large bit of memory storage in the middle?
Thanks!!
You just create your own class that derives from Stream and implements the methods that your MyObject actually needs.
Implement all your Read/ReadAsync methods by calling the matching Read/ReadAsync methods on your StreamReader. You can then modify the data as it passes through.
You'd have a bit of work to do if the modification requires some understanding of the data, as you'll be working with unknown quantities of bytes at a time. You would need to buffer the data to the extent required to do your transformations, but how you do that is very specific to the transformation you want to achieve.
Unfortunately the design of the .NET Stream class is loaded down with lots of baggage, so implementing the entire API of Stream is quite a bit of work, but chances are your MyObject only calls one or two methods on Stream, so a bit of experimentation should soon get it working.
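As a rough sketch (illustrative names, read path only; doing the replacement line by line assumes the token never spans a line break, and the byte encoding is assumed here to be UTF-8):
using System;
using System.IO;
using System.Text;

class ReplacingReadStream : Stream
{
    private readonly StreamReader reader;
    private byte[] pending = new byte[0]; // bytes produced but not yet consumed
    private int offset;

    public ReplacingReadStream(StreamReader reader) { this.reader = reader; }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }

    public override int Read(byte[] buffer, int index, int count)
    {
        if (offset == pending.Length)
        {
            string line = reader.ReadLine();
            if (line == null) return 0; // end of the underlying file
            line = line.Replace("XXXX", "YYYY") + Environment.NewLine;
            pending = Encoding.UTF8.GetBytes(line);
            offset = 0;
        }
        int n = Math.Min(count, pending.Length - offset);
        Buffer.BlockCopy(pending, offset, buffer, index, n);
        offset += n;
        return n;
    }

    public override void Flush() { }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override long Seek(long o, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] b, int i, int c) { throw new NotSupportedException(); }
}
The call site then becomes, with no full copy held in memory:
var myObject = new MyObject(new ReplacingReadStream(new StreamReader(path)));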

How to read back appended objects using protobuf-net?

I'm appending real-time events to a file stream using protobuf-net serialization. How can I stream all saved objects back for analysis? I don't want to use an in-memory collection (because it would be huge).
private IEnumerable<Activity> Read() {
    using (var iso = new IsolatedStorageFileStream(storageFilename, FileMode.OpenOrCreate, FileAccess.Read, this.storage))
    using (var sr = new StreamReader(iso)) {
        while (!sr.EndOfStream) {
            yield return Serializer.Deserialize<Activity>(iso); // doesn't work
        }
    }
}
public void Append(Activity activity) {
    using (var iso = new IsolatedStorageFileStream(storageFilename, FileMode.Append, FileAccess.Write, this.storage)) {
        Serializer.Serialize(iso, activity);
    }
}
First, I need to discuss the protobuf format (via Google, not specific to protobuf-net). By design, it is appendable but with append===merge. For lists this means "append as new items", but for single objects this means "combine the members". Secondly, as a consequence of the above, the root object in protobuf is never terminated - the "end" is simply: when you run out of incoming data. Thirdly, and again as a direct consequence - fields are not required to be in any specific order, and generally will overwrite. So: if you just use Serialize lots of times, and then read the data back: you will have exactly one object, which will have basically the values from the last object on the stream.
What you want to do, though, is a very common scenario. So protobuf-net helps you out by including the SerializeWithLengthPrefix and DeserializeWithLengthPrefix methods. If you use these instead of Serialize / Deserialize, then it is possible to correctly parse individual objects. Basically, the length-prefix restricts the data so that only the exact amount per-object is read (rather than reading to the end of the file).
I strongly suggest (as parameters) using tag===field-number===1, and the base-128 prefix-style (an enum). As well as making the data fully protobuf compliant throughout (including the prefix data), this will make it easy to use an extra helper method: DeserializeItems. This exposes each consecutive object via an iterator-block, making it efficient to read huge files without needing everything in memory at once. It even works with LINQ.
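Concretely, with the stream and Activity type from the question, the pattern looks something like this:
// Append: one length-prefixed record per call (field-number 1, base-128 prefix)
Serializer.SerializeWithLengthPrefix(iso, activity, PrefixStyle.Base128, 1);

// Read back: DeserializeItems yields each record lazily, so only the current
// item is in memory at any time - this is what makes huge files workable
foreach (var item in Serializer.DeserializeItems<Activity>(iso, PrefixStyle.Base128, 1))
{
    // analyse item
}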
There is also a way to use the API to selectively parse/skip different objects in the file - for example, to skip the first 532 records without processing the data. Let me know if you need an example of that.
If you already have lots of data that was already stored with Serialize rather than SerializeWithLengthPrefix - then it is probably still possible to decipher the data, by using ProtoReader to detect when the field-numbers loop back around : meaning, given fields "1, 2, 4, 5, 1, 3, 2, 5" - we can probably conclude there are 3 objects there and decipher accordingly. Again, let me know if you need a specific example.

No Ionic.Zlib.DeflateStream.BaseStream

I'm working with Ionic.Zlib.DeflateStream (part of DotNetZip, I believe) in C# code and notice it doesn't have a BaseStream property like System.IO.Compression.DeflateStream does. Is there any simple way to access the underlying stream? Maybe a partial class or extension method (I'm not really familiar with those concepts), something I'm overlooking, or an updated version of this library?
Update: I have a function deep inside a large project that is given an Ionic.Zlib.DeflateStream as a parameter. I know that the underlying stream is a MemoryStream, and I want to modify the code to seek to position 0 in the underlying stream, write a few bytes, then return to the previous position. This is what we call a "kludge", or dirty hack, as opposed to rewriting a lot of code... but it is the solution we are looking for at this time, as opposed to something else that would require more retesting. The few bytes in this part of the MemoryStream that need to be updated are not compressed, so modifying them outside the DeflateStream in this manner is fine.
I'd still like to know other options for future projects, or if this answer could cause issues, but I think I did find one option...
When I create the object like this:
MemoryStream ms = new MemoryStream();
DeflateStream ds = new DeflateStream(ms,...);
If instead I create a class like:
class MyDeflateStream : DeflateStream
{
    public MemoryStream RootStream;

    // A constructor is needed since DeflateStream has no parameterless one;
    // this assumes the (Stream, CompressionMode) overload used above.
    public MyDeflateStream(Stream stream, CompressionMode mode)
        : base(stream, mode) { }
}
I can change the above code to:
MemoryStream ms = new MemoryStream();
MyDeflateStream ds = new MyDeflateStream(ms, ...);
ds.RootStream = ms;
Then make the function where I need access to it something like this:
void Whatever(DeflateStream ds)
{
    // This cast throws InvalidCastException if ds wasn't created as a MyDeflateStream
    MyDeflateStream mds = (MyDeflateStream)ds;
    MemoryStream ms = mds.RootStream;
}
Ideally I'd only have to modify the Whatever() function, because sometimes I might not have access to the code that created the object in the first place, but in this case I do. So I'm still hoping for an answer, even though I found one possible way to handle this.

int[] to byte[], am I forgetting something?

This is untested, as I need to write more code. But is this correct? I feel like I am missing something, like this could be better written. Do I need the .Close() at the end? Should I flush anything (I'll assume no if I do call Close())?
byte[] buffer;
using (var m = new MemoryStream())
{
    using (var binWriter = new BinaryWriter(m))
    {
        foreach (var v in wordIDs)
            binWriter.Write(v);
        binWriter.Close();
    }
    buffer = m.GetBuffer();
    m.Close();
}
You don't need the .Close() calls (the automatic .Dispose() the using block generates takes care of those).
Also, you'll want to use .ToArray() on the MemoryStream, not .GetBuffer(). GetBuffer() returns the underlying buffer regardless of how much of it is actually used; ToArray() returns a copy that is exactly the right length.
If you're using this to communicate with another program, make sure you and it agree on the order of the bytes (aka endianness). If you're using network byte-order, you'll need to flip the order of the bytes (using something like IPAddress.HostToNetworkOrder()), as network byte-order is big-endian, and BinaryWriter uses little-endian.
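For example, a sketch reusing the question's wordIDs (IPAddress lives in System.Net):
byte[] buffer;
using (var m = new MemoryStream())
using (var binWriter = new BinaryWriter(m))
{
    foreach (var v in wordIDs)
        binWriter.Write(IPAddress.HostToNetworkOrder(v)); // flip to big-endian
    buffer = m.ToArray(); // exact-length copy, unlike GetBuffer()
}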
What is wordIDs - is it an enumeration, or is it an Int32[]? You can use the following if it is just an Int32[]:
byte[] bytes = new byte[wordIDs.Length * 4];
Buffer.BlockCopy(wordIDs, 0, bytes, 0, bytes.Length);
Otherwise, if wordIDs is an enumeration that you must step through, all you need to change is remove the m.Close (as mentioned) and use MemoryStream.ToArray (as mentioned).
Close is not needed here. The using statements ensure the Dispose method on these types is called on exit, and this has the same effect as calling Close. In fact, if you look at the code in Reflector, you'll find that Close in both cases just proxies off to the Dispose method on both types.
Thus sayeth Skeet:
There's no real need to close either a MemoryStream or a BinaryWriter, but I think it's good form to use a using statement to dispose of both - that way if you change at a later date to use something that really does need disposing, it will fit into the same code.
So you don't need the Close or the using statement, but using is idiomatic C#.
JaredPar's and Jonathan's answers are correct. If you want an alternative, you can use BitConverter.GetBytes(int) with LINQ's SelectMany. So now your code turns into this:
byte[] bytes = wordIDs.SelectMany(i => BitConverter.GetBytes(i)).ToArray();
I disagree with the Skeet here.
While you may not need Close, by using using you are relying on the implementations of BinaryWriter and MemoryStream to do it for you in their Dispose methods. This is true for the framework types, but what if someone writes a Writer or Stream that doesn't?
Adding Close does no harm and protects you against badly written classes.
