Improving the performance of a BinaryReader - c#

I am currently in the process of writing a BinaryReader that caches the BaseStream.Position and BaseStream.Length properties. Here is what I have so far:
public class FastBinaryReader
{
BinaryReader reader;
public long Length { get; private set; }
public long Position { get; private set; }
public FastBinaryReader(Stream stream)
{
reader = new BinaryReader(stream);
Length = stream.Length;
Position = 0;
}
public void Seek(long newPosition)
{
reader.BaseStream.Position = newPosition;
Position = newPosition;
}
public byte[] ReadBytes(int count)
{
if (Position + count >= Length)
Position = Length;
else
Position += count;
return reader.ReadBytes(count);
}
public void Close()
{
reader.Close();
}
}
Instead of providing Length and Position properties directly, I would like to create a BaseStream property that exposes my Position and Length properties as FastBinaryReader.BaseStream.Position and FastBinaryReader.BaseStream.Length, so that my existing code stays compatible with the original BinaryReader class.
How would I go about doing this?

Here's the final implementation, if anyone is interested. Passing this as the Stream object to a BinaryReader, instead of the usual FileStream object, yields about a 45% improvement in speed on my machine when reading 1000-byte chunks.
Note that the Length property is only accurate when reading, since it is read once at construction and never updated. If you are writing, it will not track changes to the length of the underlying stream.
public class FastFileStream : FileStream
{
private long _position;
private long _length;
public FastFileStream(string path, FileMode fileMode) : base(path, fileMode)
{
_position = base.Position;
_length = base.Length;
}
public override long Length
{
get { return _length; }
}
public override long Position
{
get { return _position; }
set
{
base.Position = value;
_position = value;
}
}
public override long Seek(long offset, SeekOrigin seekOrigin)
{
switch (seekOrigin)
{
case SeekOrigin.Begin:
_position = offset;
break;
case SeekOrigin.Current:
_position += offset;
break;
case SeekOrigin.End:
_position = Length + offset;
break;
}
return base.Seek(offset, seekOrigin);
}
public override int Read(byte[] array, int offset, int count)
{
// Track the actual number of bytes read; base.Read may return fewer than count
int bytesRead = base.Read(array, offset, count);
_position += bytesRead;
return bytesRead;
}
public override int ReadByte()
{
int result = base.ReadByte();
// ReadByte returns -1 at end of stream; only advance on a real byte
if (result >= 0)
_position += 1;
return result;
}
}

I wouldn't do this exactly the way you have it here.
Consider that you need to expose a property of type Stream (which is what BinaryReader.BaseStream is). So you'll need to create your own class deriving from Stream. This class would need to:
take a reference to the FastBinaryReader so that it can override Stream.Length and Stream.Position by delegating to the FastBinaryReader's members
take a reference to a Stream (the same one passed to the FastBinaryReader constructor) in order to delegate all other operations to that stream (you could have these throw new NotImplementedException() instead, but you never know which library method is going to call them!)
You can imagine how it'd look:
private class StreamWrapper : Stream
{
private readonly FastBinaryReader reader;
private readonly Stream baseStream;
public StreamWrapper(FastBinaryReader reader, Stream baseStream)
{
this.reader = reader;
this.baseStream = baseStream;
}
public override long Length
{
get { return reader.Length; }
}
public override long Position
{
get { return reader.Position; }
set { reader.Position = value; }
}
// Override all other Stream virtuals as well
}
This would work, but it seems to me to be slightly clumsy. The logical continuation would be to put the caching in StreamWrapper instead of inside FastBinaryReader itself:
private class StreamWrapper : Stream
{
private readonly Stream baseStream;
public StreamWrapper(Stream baseStream)
{
this.baseStream = baseStream;
}
public override long Length
{
get { /* caching implementation */ }
}
public override long Position
{
get { /* caching implementation */ }
set { /* caching implementation */ }
}
// Override all other Stream virtuals as well
}
This would allow you to use StreamWrapper transparently and keep the caching behavior. But it raises the question: is the Stream you work with so dumb that it doesn't cache this by itself?
And if it isn't, maybe the performance gain you see is the result of that if statement inside ReadBytes and not of caching Length and Position?
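If in doubt, it's easy to measure directly. A hedged micro-benchmark sketch, assuming a pre-existing test file ("test.bin" and the 1000-byte chunk size are placeholder assumptions, and FastFileStream is the class posted above):

```csharp
using System;
using System.Diagnostics;
using System.IO;

// Compare chunked reads through a plain FileStream vs. the caching FastFileStream.
static long TimeReads(Stream stream)
{
    using (var reader = new BinaryReader(stream))
    {
        var sw = Stopwatch.StartNew();
        // The loop condition hits Position/Length on every iteration,
        // which is exactly what the caching is meant to speed up.
        while (reader.BaseStream.Position < reader.BaseStream.Length)
            reader.ReadBytes(1000);
        return sw.ElapsedMilliseconds;
    }
}

long plain = TimeReads(new FileStream("test.bin", FileMode.Open));
long cached = TimeReads(new FastFileStream("test.bin", FileMode.Open));
Console.WriteLine($"FileStream: {plain} ms, FastFileStream: {cached} ms");
```

Running both paths over the same file would show whether the gain survives once the early-exit `if` is taken out of the picture.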

Related

Create stream using byte array buffer

I want to use a third-party DLL function which requires a stream input.
The data I need to feed it is provided by a different third-party DLL function, which only offers access to the source data through a 'ReadBuffer' method that obtains chunks of data at a time by populating a byte array of a set length.
The data I'm reading exceeds several TB, so I'm unable to just write a loop that reads all the data into memory and then into a stream.
Is there a simple way to create a stream from data which is being read into a byte array buffer within a while loop as the stream is read?
I'm writing in C#. Thanks for any pointers.
You should inherit from the Stream class. Your library may well expose everything your derived class needs from its inner data source (current position, data length, whether it can be read, and so on).
This way you can create a new object and use it with the other library:
public class MyStream : Stream
{
private LibraryClient _client;
// Wrapper
public MyStream(LibraryClient libraryClient)
{
_client = libraryClient;
}
// Return the client length
public override long Length => _client.DataLength;
// Specify the position in your buffer, if the client has this info, reference it
public override long Position { get; set; }
public override int Read(byte[] buffer, int offset, int count)
{
// Temp array in a local var for simplicity; you could reuse or rent one instead
var tmp = new byte[count];
_client.ReadBuffer(tmp, Position, count);
// Copy it all (we can skip a bounds check because tmp is a local we sized ourselves)
tmp.CopyTo(buffer, offset);
Position += count;
return count; // if the library's 'ReadBuffer' returns the number of bytes read, return that here
}
public override long Seek(long offset, SeekOrigin origin)
{
// This method moves the Position of the stream;
// use it to update the inner position
long tempPosition = 0;
switch (origin)
{
case SeekOrigin.Begin:
{
tempPosition = offset;
break;
}
case SeekOrigin.Current:
{
tempPosition = Position + offset;
break;
}
case SeekOrigin.End:
{
tempPosition = _client.DataLength + offset;
break;
}
}
if (tempPosition < 0) throw new IOException("Offset too far backward");
if (tempPosition > _client.DataLength) throw new IOException("Offset too far forward");
Position = tempPosition;
return Position;
}
public override void SetLength(long value)
{
// ... handle if necessary
}
public override void Write(byte[] buffer, int offset, int count)
{
// ... handle if necessary
}
public override void Flush()
{
// handle if necessary
}
// Modify if necessary
public override bool CanWrite => false;
public override bool CanSeek => true;
public override bool CanRead => Position < Length;
}
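A minimal usage sketch: wrap the reading client and hand the stream to the consuming library. LibraryClient is the question's third-party source, and thirdPartyConsumer.Process is a placeholder for whatever the other DLL actually exposes:

```csharp
// Hypothetical wiring; the consuming library drives Read() itself.
var client = new LibraryClient();       // third-party data source (assumed)
using (var source = new MyStream(client))
{
    thirdPartyConsumer.Process(source); // third-party stream consumer (assumed)
}
```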

Streaming data from a NVarchar(Max) column using c#

I want to put the contents of some files into the database to be read by a separate process. This is a two-step thing, as the files will be uploaded to a Java server but then processed by a separate C# application that runs periodically.
I was planning on using an nvarchar(max) column to represent the data, but I can't see how to read from that sort of column in a sensible way. I don't want to use SqlDataReader.GetString, as that would force me to hold all the data in memory at once. The files aren't massive, but that just seems like a stupid thing to do - it would give me the data as a single string which would then need splitting into lines, so the whole approach would be totally backwards.
I was assuming I'd just be able to use a normal stream reader but calling GetStream fails saying it doesn't work for this type of column.
Any ideas? Is it just going to be easier for the database to pretend this isn't really text and store it as bytes so I can stream it?
I wrote this extension method some time ago:
public static class DataExtensions
{
public static Stream GetStream(this IDataRecord record, int ordinal)
{
return new DbBinaryFieldStream(record, ordinal);
}
public static Stream GetStream(this IDataRecord record, string name)
{
int i = record.GetOrdinal(name);
return record.GetStream(i);
}
private class DbBinaryFieldStream : Stream
{
private readonly IDataRecord _record;
private readonly int _fieldIndex;
private long _position;
private long _length = -1;
public DbBinaryFieldStream(IDataRecord record, int fieldIndex)
{
_record = record;
_fieldIndex = fieldIndex;
}
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return true; }
}
public override bool CanWrite
{
get { return false; }
}
public override void Flush()
{
throw new NotSupportedException();
}
public override long Length
{
get
{
if (_length < 0)
{
_length = _record.GetBytes(_fieldIndex, 0, null, 0, 0);
}
return _length;
}
}
public override long Position
{
get
{
return _position;
}
set
{
_position = value;
}
}
public override int Read(byte[] buffer, int offset, int count)
{
long nRead = _record.GetBytes(_fieldIndex, _position, buffer, offset, count);
_position += nRead;
return (int)nRead;
}
public override long Seek(long offset, SeekOrigin origin)
{
long newPosition = _position;
switch (origin)
{
case SeekOrigin.Begin:
newPosition = offset;
break;
case SeekOrigin.Current:
newPosition = _position + offset;
break;
case SeekOrigin.End:
newPosition = this.Length + offset; // offset is relative to the end, so it is usually negative
break;
default:
break;
}
if (newPosition < 0)
throw new ArgumentOutOfRangeException("offset");
_position = newPosition;
return _position;
}
public override void SetLength(long value)
{
throw new NotSupportedException();
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotSupportedException();
}
}
}
It's designed for a BLOB, but it works for a NVARCHAR(max) as well (at least on SQL Server).
You can use it like this:
using (var dataReader = command.ExecuteReader())
{
dataReader.Read();
using (var stream = dataReader.GetStream("Text"))
using (var streamReader = new StreamReader(stream))
{
// read the text using the StreamReader...
}
}
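One caveat worth adding: by default ADO.NET buffers the entire row before GetBytes ever sees it, so the wrapper only avoids materializing the string, not the row buffering. For true streaming the reader should be created with CommandBehavior.SequentialAccess. A hedged end-to-end sketch (the connection string, table, and column names are placeholders, not from the question):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Text;

// Hypothetical schema: table 'Documents' with an nvarchar(max) column 'Text'.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT [Text] FROM Documents WHERE Id = @id", connection))
{
    command.Parameters.AddWithValue("@id", documentId);
    connection.Open();
    // SequentialAccess makes GetBytes read the column incrementally
    // instead of buffering the whole row first.
    using (var dataReader = command.ExecuteReader(CommandBehavior.SequentialAccess))
    {
        dataReader.Read();
        using (var stream = dataReader.GetStream("Text"))
        // nvarchar(max) bytes come back as UTF-16, so decode with Encoding.Unicode
        using (var streamReader = new StreamReader(stream, Encoding.Unicode))
        {
            string line;
            while ((line = streamReader.ReadLine()) != null)
            {
                // process one line at a time without holding the whole value in memory
            }
        }
    }
}
```

Note that SequentialAccess requires bytes to be read in increasing offset order, so the backward-seeking paths of DbBinaryFieldStream effectively become unusable in that mode.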

What are the possible causes for stream not writable exception?

What are the possible causes of a "Stream not writable" exception when serializing a custom object over TCP using a NetworkStream in C#?
I am sending the MP3 data in the form of packets. Each frame consists of a byte[] buffer. I am using BinaryFormatter to serialize the object:
BinaryFormatter.Serialize(NetworkStream, Packet);
The MP3 played at the client end with distortion and jitter for a few seconds, and then the above-mentioned exception was raised. I'm using the NAudio open-source library.
Before making this modification I was using
NetworkStream.Write(Byte[] Buffer, 0, EncodedSizeofMp3);
and it was writing successfully without throwing any exception.
If you are writing to a NetworkStream, the stream/socket could be closed
If you are writing to a NetworkStream, it could have been created with FileAccess.Read
If I had to guess, though, it sounds like something is closing the stream - this can be the case if, say, a "writer" along the route assumes it owns the stream, so closes the stream prematurely. It is pretty common to have to write and use some kind of wrapper Stream that ignores Close() requests (I have one in front of me right now, in fact, since I'm writing some TCP code).
As a small aside: I generally advise against BinaryFormatter for comms (except remoting) - most importantly, it doesn't "version" in a very friendly way, but it also tends to be a bit verbose in most cases.
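To make that aside concrete, here is a hedged sketch of the usual alternative for comms: hand-rolled length-prefixed framing. This only illustrates the idea; it is not the question's actual packet format:

```csharp
using System;
using System.IO;

// Each frame is a 4-byte little-endian length followed by the payload,
// so the receiver always knows where one packet ends and the next begins.
static void WriteFrame(Stream stream, byte[] payload)
{
    byte[] header = BitConverter.GetBytes(payload.Length);
    stream.Write(header, 0, header.Length);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

// Stream.Read may return fewer bytes than requested, so loop until filled
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int total = 0;
    while (total < count)
    {
        int read = stream.Read(buffer, total, count - total);
        if (read == 0) throw new EndOfStreamException();
        total += read;
    }
    return buffer;
}
```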
Here's the wrapper I'm using currently, in case it helps (the Reset() method spoofs resetting the position, so the caller can read a relative position):
class NonClosingNonSeekableStream : Stream
{
public NonClosingNonSeekableStream(Stream tail)
{
if(tail == null) throw new ArgumentNullException("tail");
this.tail = tail;
}
private long position;
private readonly Stream tail;
public override bool CanRead
{
get { return tail.CanRead; }
}
public override bool CanWrite
{
get { return tail.CanWrite; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanTimeout
{
get { return false; }
}
public override long Position
{
get { return position; }
set { throw new NotSupportedException(); }
}
public override void Flush()
{
tail.Flush();
}
public override void SetLength(long value)
{
throw new NotSupportedException();
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotSupportedException();
}
public override long Length
{
get { throw new NotSupportedException(); }
}
public override int Read(byte[] buffer, int offset, int count)
{
int read = tail.Read(buffer, offset, count);
if (read > 0) position += read;
return read;
}
public override void Write(byte[] buffer, int offset, int count)
{
tail.Write(buffer, offset, count);
if (count > 0) position += count;
}
public override int ReadByte()
{
int result = tail.ReadByte();
if (result >= 0) position++;
return result;
}
public override void WriteByte(byte value)
{
tail.WriteByte(value);
position++;
}
public void Reset()
{
position = 0;
}
}

Is there an in memory stream that blocks like a file stream

I'm using a library that requires I provide an object that implements this interface:
public interface IConsole {
TextWriter StandardInput { get; }
TextReader StandardOutput { get; }
TextReader StandardError { get; }
}
The object's readers then get used by the library with:
IConsole console = new MyConsole();
int readBytes = console.StandardOutput.Read(buffer, 0, buffer.Length);
Normally the class implementing IConsole gets its StandardOutput stream from an external process. In that case the console.StandardOutput.Read calls work by blocking until some data is written to the StandardOutput stream.
What I'm trying to do is create a test IConsole implementation that uses MemoryStreams and echoes whatever appears on StandardInput back onto StandardOutput. I tried:
MemoryStream echoOutStream = new MemoryStream();
StandardOutput = new StreamReader(echoOutStream);
But the problem with that is that console.StandardOutput.Read will return 0 rather than block until there is some data. Is there any way I can get a MemoryStream to block if there is no data available, or is there a different in-memory stream I could use?
Inspired by your answer, here's my multi-thread, multi-write version:
public class EchoStream : MemoryStream
{
private readonly ManualResetEvent _DataReady = new ManualResetEvent(false);
private readonly ConcurrentQueue<byte[]> _Buffers = new ConcurrentQueue<byte[]>();
public bool DataAvailable{get { return !_Buffers.IsEmpty; }}
public override void Write(byte[] buffer, int offset, int count)
{
_Buffers.Enqueue(buffer);
_DataReady.Set();
}
public override int Read(byte[] buffer, int offset, int count)
{
_DataReady.WaitOne();
byte[] lBuffer;
if (!_Buffers.TryDequeue(out lBuffer))
{
_DataReady.Reset();
return -1;
}
if (!DataAvailable)
_DataReady.Reset();
Array.Copy(lBuffer, buffer, lBuffer.Length);
return lBuffer.Length;
}
}
With your version you have to Read the stream after each Write, so no consecutive writes are possible. My version buffers every written buffer in a ConcurrentQueue (it's fairly simple to change it to a plain Queue and lock it).
In the end I found an easy way to do it by inheriting from MemoryStream and taking over the Read and Write methods.
public class EchoStream : MemoryStream {
private ManualResetEvent m_dataReady = new ManualResetEvent(false);
private byte[] m_buffer;
private int m_offset;
private int m_count;
public override void Write(byte[] buffer, int offset, int count) {
m_buffer = buffer;
m_offset = offset;
m_count = count;
m_dataReady.Set();
}
public override int Read(byte[] buffer, int offset, int count) {
if (m_buffer == null) {
// Block until the stream has some more data.
m_dataReady.Reset();
m_dataReady.WaitOne();
}
Buffer.BlockCopy(m_buffer, m_offset, buffer, offset, (count < m_count) ? count : m_count);
m_buffer = null;
return (count < m_count) ? count : m_count;
}
}
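A rough usage sketch for the class above: the reader blocks on a worker task until a write arrives (the "ping" payload is just an illustration):

```csharp
using System.Text;
using System.Threading.Tasks;

var echo = new EchoStream();
var readTask = Task.Run(() =>
{
    var buf = new byte[16];
    return echo.Read(buf, 0, buf.Length); // blocks on the ManualResetEvent
});
byte[] payload = Encoding.ASCII.GetBytes("ping");
echo.Write(payload, 0, payload.Length);   // unblocks the reader
int bytesRead = readTask.Result;          // the echoed bytes
```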
I'm going to add one more refined version of EchoStream. This is a combination of the other two versions, plus some suggestions from the comments.
UPDATE - I have tested this EchoStream with over 50 terabytes of data run through it for days on end. The test had it sitting between a network stream and the ZStandard compression stream. The async path has also been tested, which brought a rare hanging condition to the surface. It appears the built-in System.IO.Stream does not expect one to call both ReadAsync and WriteAsync on the same stream at the same time, which can cause it to hang if there isn't any data available, because both calls utilize the same internal variables. Therefore I had to override those functions, which resolved the hanging issue.
This version has the following enhancements:
This was written from scratch using the System.IO.Stream base class instead of MemoryStream.
The constructor can set a max queue depth and if this level is reached then stream writes will block until a Read is performed which drops the queue depth back below the max level (no limit=0, default=10).
When reading/writing data, the buffer offset and count are now honored. Also, you can call Read with a smaller buffer than Write without throwing an exception or losing data. BlockCopy is used in a loop to fill in the bytes until count is satisfied.
There is a public property called AlwaysCopyBuffer, which makes a copy of the buffer in the Write function. Setting this to true will safely allow the byte buffer to be reused after calling Write.
There is a public property called ReadTimeout/WriteTimeout, which controls how long the Read/Write function will block before it returns 0 (default=Infinite, -1).
The BlockingCollection<> class is used, which under the hood combines the ConcurrentQueue and AutoResetEvent classes. Originally I was using these two classes directly, but there exists a rare condition where you will find that after data has been Enqueued(), it is not available immediately when AutoResetEvent allows a thread through in Read(). This happens about once every 500 GB of data that passes through. The cure was to Sleep and check for the data again. Sometimes a Sleep(0) worked, but in extreme cases where CPU usage was high, it could take as much as Sleep(1000) before the data showed up. After I switched to BlockingCollection<>, it handles all of this elegantly and without issues.
This has been tested to be thread safe for simultaneous async reads and writes.
using System;
using System.IO;
using System.Threading.Tasks;
using System.Threading;
using System.Collections.Concurrent;
public class EchoStream : Stream
{
public override bool CanTimeout { get; } = true;
public override int ReadTimeout { get; set; } = Timeout.Infinite;
public override int WriteTimeout { get; set; } = Timeout.Infinite;
public override bool CanRead { get; } = true;
public override bool CanSeek { get; } = false;
public override bool CanWrite { get; } = true;
public bool CopyBufferOnWrite { get; set; } = false;
private readonly object _lock = new object();
// Default underlying mechanism for BlockingCollection is ConcurrentQueue<T>, which is what we want
private readonly BlockingCollection<byte[]> _Buffers;
private int _maxQueueDepth = 10;
private byte[] m_buffer = null;
private int m_offset = 0;
private int m_count = 0;
private bool m_Closed = false;
private bool m_FinalZero = false; //after the stream is closed, set to true after returning a 0 for read()
public override void Close()
{
m_Closed = true;
// release any waiting writes
_Buffers.CompleteAdding();
}
public bool DataAvailable
{
get
{
return _Buffers.Count > 0;
}
}
private long _Length = 0L;
public override long Length
{
get
{
return _Length;
}
}
private long _Position = 0L;
public override long Position
{
get
{
return _Position;
}
set
{
throw new NotImplementedException();
}
}
public EchoStream() : this(10)
{
}
public EchoStream(int maxQueueDepth)
{
_maxQueueDepth = maxQueueDepth;
_Buffers = new BlockingCollection<byte[]>(_maxQueueDepth);
}
// we shadow WriteAsync (with 'new') because the default base class shares state between ReadAsync and WriteAsync, which causes a hang if both are called at once
public new Task WriteAsync(byte[] buffer, int offset, int count)
{
return Task.Run(() => Write(buffer, offset, count));
}
// we shadow ReadAsync (with 'new') for the same reason
public new Task<int> ReadAsync(byte[] buffer, int offset, int count)
{
return Task.Run(() =>
{
return Read(buffer, offset, count);
});
}
public override void Write(byte[] buffer, int offset, int count)
{
if (m_Closed || buffer.Length - offset < count || count <= 0)
return;
byte[] newBuffer;
if (!CopyBufferOnWrite && offset == 0 && count == buffer.Length)
newBuffer = buffer;
else
{
newBuffer = new byte[count];
System.Buffer.BlockCopy(buffer, offset, newBuffer, 0, count);
}
if (!_Buffers.TryAdd(newBuffer, WriteTimeout))
throw new TimeoutException("EchoStream Write() Timeout");
_Length += count;
}
public override int Read(byte[] buffer, int offset, int count)
{
if (count == 0)
return 0;
lock (_lock)
{
if (m_count == 0 && _Buffers.Count == 0)
{
if (m_Closed)
{
if (!m_FinalZero)
{
m_FinalZero = true;
return 0;
}
else
{
return -1;
}
}
if (_Buffers.TryTake(out m_buffer, ReadTimeout))
{
m_offset = 0;
m_count = m_buffer.Length;
}
else
{
if (m_Closed)
{
if (!m_FinalZero)
{
m_FinalZero = true;
return 0;
}
else
{
return -1;
}
}
else
{
return 0;
}
}
}
int returnBytes = 0;
while (count > 0)
{
if (m_count == 0)
{
if (_Buffers.TryTake(out m_buffer, 0))
{
m_offset = 0;
m_count = m_buffer.Length;
}
else
break;
}
var bytesToCopy = (count < m_count) ? count : m_count;
System.Buffer.BlockCopy(m_buffer, m_offset, buffer, offset, bytesToCopy);
m_offset += bytesToCopy;
m_count -= bytesToCopy;
offset += bytesToCopy;
count -= bytesToCopy;
returnBytes += bytesToCopy;
}
_Position += returnBytes;
return returnBytes;
}
}
public override int ReadByte()
{
byte[] returnValue = new byte[1];
return (Read(returnValue, 0, 1) <= 0 ? -1 : (int)returnValue[0]);
}
public override void Flush()
{
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override void SetLength(long value)
{
throw new NotImplementedException();
}
}
UPDATE: this works in .NET 4.8, but the behavior was changed in .NET Core and it no longer blocks the same way.
An anonymous pipe stream blocks like a file stream and should handle more edge cases than the sample code provided.
Here is a unit test that demonstrates this behavior.
var cts = new CancellationTokenSource();
using (var pipeServer = new AnonymousPipeServerStream(PipeDirection.Out))
using (var pipeStream = new AnonymousPipeClientStream(PipeDirection.In, pipeServer.ClientSafePipeHandle))
{
var buffer = new byte[1024];
var readTask = pipeStream.ReadAsync(buffer, 0, buffer.Length, cts.Token);
Assert.IsFalse(readTask.IsCompleted, "Read already complete");
// Cancelling does NOT unblock the read
cts.Cancel();
Assert.IsFalse(readTask.IsCanceled, "Read cancelled");
// Only sending data does
pipeServer.WriteByte(42);
var bytesRead = await readTask;
Assert.AreEqual(1, bytesRead);
}
Here's my take on the EchoStream posted above. It handles the offset and count parameters on Write and Read.
public class EchoStream : MemoryStream
{
private readonly ManualResetEvent _DataReady = new ManualResetEvent(false);
private readonly ConcurrentQueue<byte[]> _Buffers = new ConcurrentQueue<byte[]>();
public bool DataAvailable { get { return !_Buffers.IsEmpty; } }
public override void Write(byte[] buffer, int offset, int count)
{
_Buffers.Enqueue(buffer.Skip(offset).Take(count).ToArray());
_DataReady.Set();
}
public override int Read(byte[] buffer, int offset, int count)
{
_DataReady.WaitOne();
byte[] lBuffer;
if (!_Buffers.TryDequeue(out lBuffer))
{
_DataReady.Reset();
return -1;
}
if (!DataAvailable)
_DataReady.Reset();
Array.Copy(lBuffer, 0, buffer, offset, Math.Min(lBuffer.Length, count));
return lBuffer.Length;
}
}
I was able to use this class to unit test a System.IO.Pipelines implementation. I needed a MemoryStream that could simulate multiple read calls in succession without reaching the end of the stream.
I tried all the code from the other answers, as well as the famous EchoStream, but unfortunately none of it worked the way I needed:
The EchoStream doesn't work well with non-standard read and write sizes, causing loss of data and corrupted reads.
The EchoStream limits the stream by the number of writes, but not by the byte count, so in theory someone could write in tons of data.
Solution:
I've created a ThroughStream, which is limited by any specified exact buffer size. The actual size might grow up to bufferSize * 2, but not larger than that.
It works perfectly with any non-standard size reads and writes, doesn't fail in multithreading, and is quite simple and optimized.
And it is available on Gist!

Is there a built-in way to handle multiple files as one stream?

I have a list of files, and I need to read each of them, in a specific order, into byte[] chunks of a given size. This in itself is not a problem for a single file; a simple while ((got = fs.Read(piece, 0, pieceLength)) > 0) gets the job done perfectly fine. The last piece of the file may be smaller than desired, which is fine.
Now, there is a tricky bit: if I have multiple files, I need to have one continuous stream, which means that if the last piece of a file is smaller than pieceLength, then I need to read (pieceLength - got) of the next file, and keep going until the end of the last file.
So essentially, given X files, I will always read pieces that are exactly pieceLength long, except for the very last piece of the very last file, which may be smaller.
I just wonder if there is already something built into .NET (3.5 SP1) that does the trick. My current approach is to create a class that takes a list of files and then exposes a Read(byte[] buffer, long index, long length) function, similar to FileStream.Read(). This should be pretty straightforward because I do not have to change my calling code that reads the data, but before I reinvent the wheel I'd just like to double-check that the wheel is not already built into the BCL.
Thanks :)
I don't believe there's anything in the framework, but I'd suggest making it a bit more flexible - take an IEnumerable<Stream> in your constructor, and derive from Stream yourself. Then to get file streams you can (assuming C# 3.0) just do:
Stream combined = new CombinationStream(files.Select(file => File.Open(file)));
The "ownership" part is slightly tricky here - the above would allow the combination stream to take ownership of any stream it reads from, but you may not want it to have to iterate through all the rest of the streams and close them all if it's closed prematurely.
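One way to handle the ownership concern is to override Dispose on the combined stream so that every wrapped stream - including ones that were never read - gets closed. A hedged fragment, assuming the class keeps its streams in a _streams list as the implementation below does:

```csharp
// Take ownership: dispose all wrapped streams, even on premature close.
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        foreach (var stream in _streams)
        {
            stream.Dispose();
        }
    }
    base.Dispose(disposing);
}
```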
Here is what I came up with based on Jon Skeet's idea.
It just implements Read, which was quite sufficient for me (though I did need help implementing the BeginRead/EndRead methods). Here is the full code containing both sync and async - Read and BeginRead/EndRead:
https://github.com/facebook-csharp-sdk/combination-stream/blob/master/src/CombinationStream-Net20/CombinationStream.cs
internal class CombinationStream : System.IO.Stream
{
private readonly System.Collections.Generic.IList<System.IO.Stream> _streams;
private int _currentStreamIndex;
private System.IO.Stream _currentStream;
private long _length = -1;
private long _position;
public CombinationStream(System.Collections.Generic.IList<System.IO.Stream> streams)
{
if (streams == null)
{
throw new System.ArgumentNullException("streams");
}
_streams = streams;
if (streams.Count > 0)
{
_currentStream = streams[_currentStreamIndex++];
}
}
public override void Flush()
{
if (_currentStream != null)
{
_currentStream.Flush();
}
}
public override long Seek(long offset, System.IO.SeekOrigin origin)
{
throw new System.InvalidOperationException("Stream is not seekable.");
}
public override void SetLength(long value)
{
this._length = value;
}
public override int Read(byte[] buffer, int offset, int count)
{
if (_currentStream == null)
{
return 0;
}
int result = 0;
int buffPosition = offset;
while (count > 0)
{
int bytesRead = _currentStream.Read(buffer, buffPosition, count);
result += bytesRead;
buffPosition += bytesRead;
_position += bytesRead;
count -= bytesRead;
// Only move on to the next stream once the current one is exhausted;
// Read may legitimately return fewer bytes than requested before EOF
if (count > 0 && bytesRead == 0)
{
if (_currentStreamIndex >= _streams.Count)
{
break;
}
_currentStream = _streams[_currentStreamIndex++];
}
}
return result;
}
public override long Length
{
get
{
if (_length == -1)
{
_length = 0;
foreach (var stream in _streams)
{
_length += stream.Length;
}
}
return _length;
}
}
public override long Position
{
get { return this._position; }
set { throw new System.NotImplementedException(); }
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new System.InvalidOperationException("Stream is not writable");
}
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return false; }
}
}
Also available as a NuGet package
Install-Package CombinationStream
