Preserving pipeline buffers until processing is complete - C#

I am researching the possibility of using pipelines for processing binary messages coming from the network.
The binary messages I will be processing come with a payload, and it is desirable to keep the payload in its binary form.
The idea is to read out the whole message and create a slice of the message and its payload. Once the message is completely read, it will be passed to a channel chain for processing. The processing will not be instant and might take some time or be executed later, and the goal is not to have the pipe reader wait until the processing is complete. Once the message processing is complete, I would need to release the processed buffer region back to the pipe writer.
Now, of course, I could just create a new byte array and copy the data coming from the pipe, but that would defeat the purpose of no-copy. So, as I understand it, I would need some buffer synchronization between the pipeline and the channel?
I looked at the available APIs of the pipe reader (AdvanceTo), where it's possible to tell the pipe reader what was consumed and what was examined, but I can't work out how this could be synchronized outside of the pipe reading method.
So the question is whether there are techniques or examples of how this can be achieved.

The buffer obtained from TryRead/ReadAsync is only valid until you call AdvanceTo, with the expectation that as soon as you've done that, anything you reported as consumed is available to be recycled for use elsewhere (which could be parallel/concurrent readers). Strictly speaking, even the bits you haven't reported as consumed shouldn't be treated as valid once you've called AdvanceTo (although in reality it is likely that they'll still be the same segments; that just isn't the concern of the caller - to the caller, the buffer is only valid between the read and the advance).
This means that you explicitly can't do:
while (...)
{
    var result = await reader.ReadAsync();
    var buffer = result.Buffer;
    if (TryIdentifyFrameBoundary(buffer, out var frame))
    {
        BeginProcessingInBackground(frame); // <==== THIS IS A PROBLEM!
        reader.AdvanceTo(frame.End, frame.End);
    }
    else // take nothing
    {
        reader.AdvanceTo(buffer.Start, buffer.End);
        if (result.IsCompleted) break; // that's all folks
    }
}
because the "in background" bit, when it fires, could now be reading someone else's data (due to it being reused already).
So: either you need to process the frame contents as part of the read loop, or you're going to have to make a copy of the data, most likely by using:
var len = checked ((int)buffer.Length);
var oversized = ArrayPool<byte>.Shared.Rent(len);
buffer.CopyTo(oversized);
and pass oversized to your background processing, remembering to only look at the first len bytes of it. You could pass this as a ReadOnlyMemory<byte>, but you need to consider that you're also going to want to return it to the array-pool afterwards (probably in a finally block), and passing it as a memory makes it a little more awkward (but not impossible, thanks to MemoryMarshal.TryGetArray).
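Putting those pieces together, a minimal sketch might look like the following (ProcessFrameAsync is a hypothetical placeholder for your background work; the single-argument AdvanceTo marks everything up to buffer.End as consumed):

// Sketch only: copy the identified frame out of the pipe's buffer so the pipe
// can recycle its memory, then process the pooled copy in the background and
// return it to the pool when finished.
var len = checked((int)buffer.Length);
var oversized = ArrayPool<byte>.Shared.Rent(len);
buffer.CopyTo(oversized);
reader.AdvanceTo(buffer.End); // safe now: we no longer touch the pipe's memory

_ = Task.Run(async () =>
{
    try
    {
        // Only the first 'len' bytes are meaningful; the rented array may be larger.
        await ProcessFrameAsync(new ReadOnlyMemory<byte>(oversized, 0, len));
    }
    finally
    {
        ArrayPool<byte>.Shared.Return(oversized);
    }
});

Returning the rented array in the finally block guarantees it goes back to the pool even if processing throws.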
Note: in early versions of the pipelines API, there was an element of reference-counting, which did allow you to preserve buffers, but it had a few problems:
it complicated the API hugely
it led to leaked buffers
it was ambiguous and confusing what "preserved" meant: preserved until the buffer gets reused, or until it is released completely?
so that feature was dropped.

Related

Fast approach to read and parse serial-data continuously

I've got a thread to read and parse serial data.
The messages are in binary format and start with either the character 'F', 'S', 'Q' or 'M'.
There are no newlines and there is no special ending character (the characters above state that a message is finished and everything before it is ready to be parsed).
How do I continuously read and parse the data?
All that comes to my mind is having a 4096-byte-long input buffer (byte array) and then following this procedure:
Track the position in the buffer manually.
Append available data to it via SerialPort.Read(buffer, position, byteCount).
Try to parse as many messages as possible from the buffer.
Copy the rest to a temporary buffer.
Reset the input buffer.
Copy the contents of the temporary buffer to the original buffer.
Set the position in the buffer.
Can you think of faster / easier approaches?
A very simple way to get ahead is to stop trying to make it faster. There is no point; serial port data rates are very, very low and modern computers are very, very fast. Your Read() call only ever returns a single byte, rarely 2.
Note that this is hard to see: when you debug and single-step through the code, you artificially slow down your program a great deal, allowing more bytes to be received and thus more of them to be returned by the Read() call. But this doesn't happen when the program runs at normal speed.
So use SerialPort.BaseStream.ReadByte() instead. Makes the code very simple.
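For example, a minimal sketch of that byte-by-byte approach, assuming the 'F'/'S'/'Q'/'M' framing from the question (serialPort is an open SerialPort and ParseMessage is a hypothetical parser, not part of the answer):

// Read one byte at a time; seeing a new start character means the previous
// message is complete and ready to be parsed.
var message = new List<byte>();
var stream = serialPort.BaseStream;
while (true)
{
    int b = stream.ReadByte();
    if (b < 0) break; // stream closed
    if ((b == 'F' || b == 'S' || b == 'Q' || b == 'M') && message.Count > 0)
    {
        ParseMessage(message.ToArray()); // hypothetical parser
        message.Clear();
    }
    message.Add((byte)b);
}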
After acquiring some experience with the C# SerialPort component:
At the beginning: take the serial port exclusively.
Then:
1st parallel task: continuously reads the entire buffer content at a regular interval and pushes each read chunk into a "Gathering Collection" of data.
2nd parallel task: analyzes the "Gathering Collection" for a completed "phrase", delegates a cloned copy of the phrase to a "Phrase Manager", and removes the phrase from the "Gathering Collection".
You have freedom in how you implement the "Gathering Collection", but what was important to me is:
read everything available from the serial port buffer, not a fixed-size chunk;
to avoid losses and preserve message order, build your own port dispatcher rather than letting anybody open and close your port at any time for reading/writing;
determine the port read frequency experimentally. The more frequent the read operation, the faster your code can detect a "phrase" and start the proper handling; reading too frequently without detecting a phrase can cost you additional resources.
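A rough sketch of that two-task arrangement, assuming a lock-protected List<byte> as the "Gathering Collection"; port is an already-open SerialPort, and FindPhraseEnd / HandlePhrase are hypothetical stand-ins for your phrase detection and "Phrase Manager" (none of these names come from the original answer):

var gathered = new List<byte>();   // the "Gathering Collection"
var sync = new object();

// 1st task: keep draining whatever the port has buffered into the gathering collection.
var readerTask = Task.Run(() =>
{
    var chunk = new byte[4096];
    while (port.IsOpen)
    {
        int n = port.Read(chunk, 0, chunk.Length);  // returns whatever is currently available
        lock (sync)
        {
            for (int i = 0; i < n; i++) gathered.Add(chunk[i]);
        }
    }
});

// 2nd task: look for a complete "phrase" and hand a cloned copy to the phrase manager.
var analyzerTask = Task.Run(() =>
{
    while (port.IsOpen)
    {
        byte[] phrase = null;
        lock (sync)
        {
            int end = FindPhraseEnd(gathered);      // hypothetical: index just past a complete phrase, or 0 if none
            if (end > 0)
            {
                phrase = gathered.GetRange(0, end).ToArray();
                gathered.RemoveRange(0, end);
            }
        }
        if (phrase != null) HandlePhrase(phrase);   // hypothetical "Phrase Manager" callback
        else Thread.Sleep(10);                      // polling interval: tune experimentally
    }
});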

How to do non blocking socket reads with Protobuf using C#?

Let's say I want to do non-blocking reads from a network socket.
I can async await for the socket to read x bytes and all is fine.
But how do I combine this with deserialization via protobuf?
Reading objects from a stream must be blocking, right? That is, if the stream contains too little data for the parser, then there has to be some blocking going on behind the scenes so that the reader can fetch all the bytes it needs.
I guess I can use length-prefix delimiters and read the first bytes, then figure out how many bytes I have to fetch at minimum before I parse. Is this the right way to go about it?
E.g. if my buffer is 500 bytes, then await those 500 bytes and parse the length prefix, and if the length is more than 500 then wait again, until all of it is read.
What is the idiomatic way to combine non blocking IO and protobuf parsing?
(I'm using Jon Skeet's implementation right now http://code.google.com/p/protobuf-csharp-port/)
As a general rule, serializers don't often contain a DeserializeAsync method, because that is really really hard to do (at least, efficiently). If the data is of moderate size, then I would advise to buffer the required amount of data using asynchronous code - and then deserialize when all of the required data is available. If the data is very large and you don't want to have to buffer everything in memory, then consider using a regular synchronous deserialize on a worker thread.
(Note that none of this is implementation-specific, but if the serializer you are using does support an async deserialize, then sure, use that.)
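As a sketch of the "buffer asynchronously, then deserialize synchronously" approach, assuming a simple 4-byte little-endian length prefix (the prefix format and the final parser call depend entirely on your protocol and protobuf library):

// Read a fixed-size length prefix, then buffer exactly that many bytes
// asynchronously, and only hand a complete payload to the parser.
static async Task<byte[]> ReadFrameAsync(NetworkStream stream)
{
    var header = await ReadExactAsync(stream, 4);
    int length = BitConverter.ToInt32(header, 0); // assumes little-endian prefix
    return await ReadExactAsync(stream, length);
}

static async Task<byte[]> ReadExactAsync(NetworkStream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException();
        offset += read;
    }
    return buffer;
}

// Usage: var payload = await ReadFrameAsync(stream);
//        var message = MyMessage.ParseFrom(payload); // parser call depends on your protobuf library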
Use the BeginReceive/EndReceive() methods to receive your data into a byte buffer (typically 1024 or 2048 bytes). In the AsyncCallback, after ensuring that you didn't read -1/0 bytes (end of stream/disconnect/IO error), attempt to deserialize the packet with protobuf.
Your receive callback will be asynchronous, and it makes sense to parse the packet in the same thread as the reading, IMHO. It's the handling that will likely cause the biggest performance hit.

Split message in serial communication

I am new to serial communication. I have read a fair few tutorials, and most of what I am trying to do is working; however, I have a question regarding serial communication with C#. I have a microcontroller that is constantly sending data through a serial line. The data is in this format:
bxxxxixx.xx,xx.xx*
where the x's represent different numbers, + or - signs.
At certain times I want to read this information from my C# program on my PC. The problem that I am having is that my messages seem to be split at random positions even though I am using
ReadTo("*");
I assumed this would read everything up to the * character.
How can I make sure that the message I received is complete?
Thank you for your help.
public string receiveCommandHC()
{
    string messageHC = "";
    if (serialHC.IsOpen)
    {
        serialHC.DiscardInBuffer();
        messageHC = serialHC.ReadTo("*");
    }
    return messageHC;
}
I'm not sure why you're doing it, but you're discarding everything in the serial port's input buffer just before reading, so if the computer has already received "bxxx" at that point, you throw it away and you'll only be reading "xixx.xx,xx.xx".
You'll nearly always find in serial comms that data messages (unless very small) are split. This is mostly down to the speed of communication and the point at which you retrieve data from the port.
Usually you'd set your code to run in a separate thread (to help prevent impacting the performance of the rest of your code) which raises an event when a complete message is received and also takes full messages in for transmission. Read and write functionality is dealt with by worker threads (serial comms traffic is slow).
You'll need a read and a write buffer. These should be suitably large to hold data for several cycles.
Append data read from the input to the end of your read buffer. Scan the read buffer cyclically for complete messages, starting from the beginning of the buffer.
Depending on the protocol used, there is usually a data start and maybe a data end indicator, and somewhere a message size (this may be fixed, again depending on your protocol). I gather from your protocol that the message start character is 'b' and the message end character is '*'. Discard all data preceding your message start character ('b'), as this is from an incomplete message.
When a complete message is found, strip it from the front of the buffer and raise an event to indicate its arrival.
A similar process is run for sending data, except that you may need to split the message, hence data to be sent is appended to the end of the buffer and data being sent is read from the start.
I hope that this helps you in understanding how to cope with serial comms.
As pointed out by Marc you're currently clearing your buffer in a way that will cause problems.
edit
As I said in my comment I don't recognise serialHC, but if dealing with raw data then look at using the SerialPort class. More information on how to use it and an example (which roughly uses the process that I described above) can be found here.
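For illustration, a sketch of the buffering approach described above, assuming 'b' starts a message and '*' ends it (per the question's format); readBuffer is a field and raiseMessage is a hypothetical callback for completed messages:

// Call this from the DataReceived handler (or a read loop) with whatever was just read.
private readonly StringBuilder readBuffer = new StringBuilder();

private void AppendAndExtract(string incoming, Action<string> raiseMessage)
{
    readBuffer.Append(incoming);
    while (true)
    {
        string current = readBuffer.ToString();
        int start = current.IndexOf('b');
        if (start < 0) { readBuffer.Clear(); return; }   // no start marker yet, discard junk
        int end = current.IndexOf('*', start + 1);
        if (end < 0)                                     // incomplete message, keep from 'b' onward
        {
            readBuffer.Clear();
            readBuffer.Append(current.Substring(start));
            return;
        }
        raiseMessage(current.Substring(start, end - start + 1)); // complete message found
        readBuffer.Clear();
        readBuffer.Append(current.Substring(end + 1));
    }
}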
I'm going to take a guess that you're actually getting the ends of commands, i.e. instead of getting b01234.56.78.9 (omitting the final *), you're getting (say) .56.78.9. That is because you discarded the input buffer. Don't do that. You can't know the state at that point (just before a read), so discarding the buffer is simply wrong. Remove the serialHC.DiscardInBuffer(); line.

Reading from NetworkStream. Which is better ReadLine() vs. Read into ByteArray?

I am using TcpClient to communicate with a server that sends information in form of "\n" delimited strings. The data flow is pretty high and once the channel is set, the stream would always have information to read from. The messages can be of variable sizes.
My question now is, would it be better to use ReadLine() method to read the messages from the stream as they are already "\n" delimited, or will it be advisable to read byteArray of some fixed size and pick up message strings from them using Split("\n") or such? (Yes, I do understand that there may be cases when the byte array gets only a part of the message, and we would have to implement logic for that too.)
Points that need to be considered here are:
Performance.
Data Loss. Will some data be lost if the client isn't reading as fast as the data is coming in?
Multi-Threaded setup. What if this setup has to be implemented in a multi-threaded environment, where each thread would have a separate communication channel, however would share the same resources on the client.
If performance is your main concern, then I would prefer Read over the ReadLine method. I/O is one of the slower things a program can do, so you want to minimize the time spent in I/O routines by reading as much data as possible up front.
Data loss is not really a concern here if you are using TCP. The TCP protocol guarantees delivery and will deal with congestion issues that result in lost packets.
For the threading portion of the question we're going to need a bit more information. What resources are shared, are they sharing TcpClient's, etc ...
I would say to go with a pool of buffers and doing reads manually (Read() on the socket), if you need a lot of performance. Pooling the buffers would avoid generating garbage, as I believe ReadLine() will generate some.
Since you're using TCP, data loss should not be a problem.
In the multi-threaded setup, you will have to be specific, but in general, resource sharing is troublesome as it might generate a data race.
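As a sketch of that manual-Read approach (stream is the NetworkStream; the UTF-8 encoding and HandleMessage callback are assumptions, and a pooled buffer could replace the plain byte[] if garbage is a concern):

// Read raw bytes and split on '\n' yourself, carrying any trailing partial
// line over to the next read.
var carry = new MemoryStream();
var buffer = new byte[8192];
int n;
while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    int lineStart = 0;
    for (int i = 0; i < n; i++)
    {
        if (buffer[i] == (byte)'\n')
        {
            carry.Write(buffer, lineStart, i - lineStart);
            HandleMessage(Encoding.UTF8.GetString(carry.ToArray())); // hypothetical handler
            carry.SetLength(0);
            lineStart = i + 1;
        }
    }
    carry.Write(buffer, lineStart, n - lineStart); // keep the partial line for the next read
}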
Why not use a BufferedStream to ensure you are reading optimally out of the stream:
var req = (HttpWebRequest)WebRequest.Create("http://www.stackoverflow.com");
using(var resp = (HttpWebResponse)req.GetResponse())
using(var stream = resp.GetResponseStream())
using(var bufferedStream = new BufferedStream(stream))
using(var streamReader = new StreamReader(bufferedStream))
{
    while (!streamReader.EndOfStream)
    {
        string currentLine = streamReader.ReadLine();
        Console.WriteLine(currentLine);
    }
}
Of course, if you're looking to scale, going async would be a necessity. As such, ReadLine is out of the question and you're back to byte array manipulations.
I would read into a byte array.... only downside: limited size. You'd need to know a certain byte amount limit, or flush the byte array into a byte collection manually sometimes, then transform the collection back into an array of bytes, also transforming it to a string using BitConverter, and finally splitting it into the real messages :p
You will have a lot of overhead flushing the array into a collection... BUT flushing into a string would require more resources, as bytes have to be decoded AS you flush them into the string... so it's up to you: you can choose simplicity with a string or efficiency with bytes. Either way it wouldn't exactly be a serious performance difference, but I'd personally go with the byte collection to avoid the implicit byte conversion.
Important: this comes from personal experience with previous TCP socket usage (Quake RCON stuff), not from any book or anything :) Correct me if I'm mistaken, please.

What is a good method to handle line based network I/O streams?

Note: Let me apologize for the length of this question; I had to put a lot of information into it. I hope that doesn't cause too many people to simply skim it and make assumptions. Please read it in its entirety. Thanks.
I have a stream of data coming in over a socket. This data is line oriented.
I am using the APM (Asynchronous Programming Model) of .NET (BeginRead, etc.). This precludes using stream-based I/O because async I/O is buffer based. It is possible to repackage the data and send it to a stream, such as a MemoryStream, but there are issues there as well.
The problem is that my input stream (which I have no control over) doesn't give me any information on how long the stream is. It is simply a stream of newline-terminated lines looking like this:
COMMAND\n
...Unpredictable number of lines of data...\n
END COMMAND\n
....repeat....
So, using APM, and since I don't know how long any given data set will be, it is likely that blocks of data will cross buffer boundaries, requiring multiple reads, but those multiple reads will also span multiple blocks of data.
Example:
Byte buffer[1024] = ".................blah\nThis is another l"
[another read]
"ine\n.............................More Lines..."
My first thought was to use a StringBuilder and simply append the buffer lines to the SB. This works to some extent, but I found it difficult to extract blocks of data. I tried using a StringReader to read the newlined data, but there was no way to know whether you were getting a complete line or not, as StringReader returns a partial line at the end of the last block added, followed by returning null afterwards. There isn't a way to know if what was returned was a full newline-terminated line of data.
Example:
// Note: no newline at the end
StringBuilder sb = new StringBuilder("This is a line\nThis is incomp..");
StringReader sr = new StringReader(sb.ToString());
string s = sr.ReadLine(); // returns "This is a line"
s = sr.ReadLine();        // returns "This is incomp.."
What's worse is that if I just keep appending to the data, the buffers get bigger and bigger, and since this could run for weeks or months at a time, that's not a good solution.
My next thought was to remove blocks of data from the SB as I read them. This required writing my own ReadLine function, but then I'm stuck locking the data during reads and writes. Also, the larger blocks of data (which can consist of hundreds of reads and megabytes of data) require scanning the entire buffer looking for newlines. It's not efficient and pretty ugly.
I'm looking for something that has the simplicity of a StreamReader/Writer with the convenience of async I/O.
My next thought was to use a MemoryStream, write the blocks of data to the memory stream, then attach a StreamReader to the stream and use ReadLine, but again I have issues with knowing if the last read in the buffer is a complete line or not, plus it's even harder to remove the "stale" data from the stream.
I also thought about using a thread with synchronous reads. This has the advantage that, using a StreamReader, ReadLine() will always return a full line, except in broken-connection situations. However, this has issues with canceling the connection, and certain kinds of network problems can result in hung, blocking sockets for an extended period of time. I'm using async I/O because I don't want to tie up a thread for the life of the program blocking on data receive.
The connection is long lasting, and data will continue to flow over time. During the initial connection there is a large flow of data, and once that flow is done the socket remains open waiting for real-time updates. I don't know precisely when the initial flow has "finished", since the only way to know is that no more data is sent right away. This means I can't wait for the initial data load to finish before processing; I'm pretty much stuck processing "in real time" as it comes in.
So, can anyone suggest a good method to handle this situation in a way that isn't overly complicated? I really want this to be as simple and elegant as possible, but I keep coming up with more and more complicated solutions due to all the edge cases. I guess what I want is some kind of FIFO in which I can easily keep appending more data while at the same time popping data out of it that matches certain criteria (i.e., newline-terminated strings).
That's quite an interesting question. The solution for me in the past has been to use a separate thread with synchronous operations, as you propose. (I managed to get around most of the problems with blocking sockets using locks and lots of exception handlers.) Still, using the in-built asynchronous operations is typically advisable as it allows for true OS-level async I/O, so I understand your point.
Well I've gone and written a class for accomplishing what I believe you need (in a relatively clean manner I would say). Let me know what you think.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

public class AsyncStreamProcessor : IDisposable
{
    protected StringBuilder _buffer;  // Buffer for unprocessed data.
    private bool _isDisposed = false; // True if object has been disposed.

    public AsyncStreamProcessor()
    {
        _buffer = null;
    }

    public IEnumerable<string> Process(byte[] newData)
    {
        // Note: replace the following encoding method with whatever you are reading.
        // The trick here is to add an extra line break to the new data so that the algorithm
        // recognises a single line break at the end of the new data.
        using (var newDataReader = new StringReader(Encoding.ASCII.GetString(newData) + Environment.NewLine))
        {
            // Read all lines from new data, returning all but the last.
            // The last line is guaranteed to be incomplete (or possibly complete except for the
            // line break, which will be processed with the next packet of data).
            string line, prevLine = null;
            while ((line = newDataReader.ReadLine()) != null)
            {
                if (prevLine != null)
                {
                    yield return (_buffer == null ? string.Empty : _buffer.ToString()) + prevLine;
                    _buffer = null;
                }
                prevLine = line;
            }

            // Store last incomplete line in buffer.
            if (_buffer == null)
                // Note: the (* 2) gives you the prediction of the length of the incomplete line,
                // so that the buffer does not have to be expanded in most/all situations.
                // Change it to whatever seems appropriate.
                _buffer = new StringBuilder(prevLine, prevLine.Length * 2);
            else
                _buffer.Append(prevLine);
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (!_isDisposed)
        {
            if (disposing)
            {
                // Dispose managed resources.
                _buffer = null;
                GC.Collect();
            }
            // Dispose native resources.

            // Remember that object has been disposed.
            _isDisposed = true;
        }
    }
}
An instance of this class should be created for each NetworkStream and the Process function should be called whenever new data is received (in the callback method for BeginRead, before you call the next BeginRead I would imagine).
Note: I have only verified this code with test data, not actual data transmitted over the network. However, I wouldn't anticipate any differences...
Also, a warning that the class is of course not thread-safe, but as long as BeginRead isn't executed again until after the current data has been processed (as I presume you are doing), there shouldn't be any problems.
Hope this works for you. Let me know if there are remaining issues and I will try to modify the solution to deal with them. (There could well be some subtlety of the question I missed, despite reading it carefully!)
What you're explaining in your question reminds me very much of ASCIZ strings. That may be a helpful place to start.
I had to write something similar to this in college for a project I was working on. Unfortunately, I had control over the sending socket, so I inserted a length-of-message field as part of the protocol. However, I think that a similar approach may benefit you.
How I approached my solution was I would send something like 5HELLO, so first I'd see 5 and know I had message length 5, and therefore the message I needed was 5 characters. However, if on my async read I only got 5HE, I would see that I have message length 5 but was only able to read 3 bytes off the wire (let's assume ASCII characters). Because of this, I knew I was missing some bytes and stored what I had in a fragment buffer. I had one fragment buffer per socket, thereby avoiding any synchronization problems. The rough process is:
Read from the socket into a byte array, recording how many bytes were read.
Scan through byte by byte until you find a newline character (this becomes very complex if you're not receiving ASCII characters but characters that could be multiple bytes; you're on your own for that).
Turn your frag buffer into a string and append your read buffer, up until the newline, to it. Drop this string as a completed message onto a queue or its own delegate to be processed. (You can optimize these buffers by actually having your read socket write to the same byte array as your fragment, but that's harder to explain.)
Continue looping through; every time we find a newline, create a string from the byte range using the recorded start/end positions and drop it on the queue/delegate for processing.
Once we hit the end of our read buffer, copy anything that's left into the frag buffer.
Call BeginRead on the socket, which will jump to step 1 when data is available in the socket.
Then you use another thread to read your queue of incoming messages, or just let the ThreadPool handle it using delegates, and do whatever data processing you have to do. Someone will correct me if I'm wrong, but there are very few thread synchronization issues with this, since you can only be reading or waiting to read from the socket at any one time, so there's no worry about locks (except if you're populating a queue; I used delegates in my implementation). There are a few details you will need to work out on your own, like how big a frag buffer to leave; if you receive no newlines when you do a read, the entire message must be appended to the fragment buffer without overwriting anything. I think it ran me about 700-800 lines of code in the end, but that included the connection setup stuff, negotiation for encryption, and a few other things.
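For illustration, a byte-level sketch of the fragment-buffer idea described above (ASCII and '\n' framing assumed; readBuffer/bytesRead come from EndReceive, and enqueue is a hypothetical delegate that receives completed messages):

// 'frag' holds the incomplete tail from the previous read.
private byte[] frag = new byte[0];

private void OnReceive(byte[] readBuffer, int bytesRead, Action<string> enqueue)
{
    int start = 0;
    for (int i = 0; i < bytesRead; i++)
    {
        if (readBuffer[i] == (byte)'\n')
        {
            // The fragment (if any) plus everything up to this newline is one complete message.
            string message = Encoding.ASCII.GetString(frag)
                           + Encoding.ASCII.GetString(readBuffer, start, i - start);
            enqueue(message);
            frag = new byte[0];
            start = i + 1;
        }
    }
    // Whatever is left (no newline yet) becomes the new fragment.
    var remaining = new byte[frag.Length + (bytesRead - start)];
    Buffer.BlockCopy(frag, 0, remaining, 0, frag.Length);
    Buffer.BlockCopy(readBuffer, start, remaining, frag.Length, bytesRead - start);
    frag = remaining;
}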
This setup performed very well for me; I was able to achieve up to 80 Mbps on a 100 Mbps Ethernet LAN using this implementation on a 1.8 GHz Opteron, including encryption processing. And since you're tied to the socket, the server will scale, since multiple sockets can be worked on at the same time. If you need items processed in order, you'll need to use a queue, but if order doesn't matter, then delegates will give you very scalable performance out of the ThreadPool.
Hope this helps, not meant to be a complete solution, but a direction in which to start looking.
*Just a note: my implementation was done purely at the byte level and supported encryption; I used characters in my example to make it easier to visualize.
