SerialPort data loss - C#

I'm working on a SerialPort app and one very simple part of it is giving me issues. I simply want to read a constant stream of data from the port and write it out to a binary file as it comes in. The problem seems to be speed: my code has worked fine on my 9600 baud test device, but when carried over to the 115200bps live device, I seem to be losing data. What happens is after a variable period of time, I miss 1 byte which throws off the rest of the data. I've tried a few things:
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    bwLogger.Write((byte)serialPort1.ReadByte());
}
or
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    byte[] inc = new byte[serialPort1.BytesToRead];
    serialPort1.Read(inc, 0, inc.Length);
    bwLogger.Write(inc);
}
and a few variations. I can't use ReadLine() as I am working with a constant stream of data (right?). I've tried fiddling with the buffer size (both serialPort1.ReadBufferSize and the hardware FIFO buffer). Ideally, for usability purposes, I'd handle this on the software side and not make the user have to change Windows driver settings.
Any ideas?

If the problem is that you can't process the data fast enough, one thing you could try is double-buffering your data (a rough sketch follows the steps below).
1) Allow one thread to read the serial port into one buffer. This may involve copying data off the port into the buffer (I'm not intimately familiar with .NET).
2) When you are ready to handle the incoming data, (on a different thread) make your program read into the 2nd buffer, and while this is happening you should write the first buffer to disk.
3) When the first buffer is written to disk, swap it back to the serial port buffer, and write the 2nd buffer to disk. Repeat process, continually swapping the buffers.
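A minimal C# sketch of that double-buffering idea, assuming the serialPort1 and bwLogger objects from the question; the list-based buffers and the AutoResetEvent are illustrative choices, not the only way to do it:

    private readonly object bufferLock = new object();
    private readonly AutoResetEvent dataReady = new AutoResetEvent(false);
    private List<byte> fillBuffer = new List<byte>();    // written by the port thread
    private List<byte> flushBuffer = new List<byte>();   // written to disk

    private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        byte[] chunk = new byte[serialPort1.BytesToRead];
        int read = serialPort1.Read(chunk, 0, chunk.Length);
        lock (bufferLock)
        {
            for (int i = 0; i < read; i++) fillBuffer.Add(chunk[i]);
        }
        dataReady.Set();
    }

    private void WriterLoop()   // runs on a separate background thread
    {
        while (true)
        {
            dataReady.WaitOne();
            List<byte> toFlush;
            lock (bufferLock)
            {
                toFlush = fillBuffer;      // swap: the port now fills the other buffer
                fillBuffer = flushBuffer;
                flushBuffer = toFlush;
            }
            if (toFlush.Count > 0)
            {
                bwLogger.Write(toFlush.ToArray());   // slow disk I/O happens off the port thread
                toFlush.Clear();
            }
        }
    }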

You might try enabling handshaking, using the Handshake property of the SerialPort object.
You'll have to set it on both the sender and receiver. However, if you're overflowing the receiver's UART buffer (very small, 16 bytes IIRC), there's probably no other way. If you can't enable handshaking on the sender, you'll probably have to stay at 9600 or below.
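For example, a one-line sketch (which Handshake value is right depends on what the device and cable actually support):

    // RTS/CTS hardware flow control; Handshake.XOnXOff is the software-flow-control alternative.
    serialPort1.Handshake = Handshake.RequestToSend;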

I'd try the following:
Set the read buffer size (ReadBufferSize) to at least 230 KB
Set the ReceivedBytesThreshold to 16K, 32K or 65K
Write these fixed blocks of data to the file
I'm not sure if this will help, but it should at least take the pressure off the framework to fire the event so often.
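As a sketch, those settings would look something like this (set them before calling Open(); the exact values are the suggestions above, not requirements):

    serialPort1.ReadBufferSize = 262144;          // roughly 256 KB of driver-side buffering
    serialPort1.ReceivedBytesThreshold = 32768;   // don't raise DataReceived until 32 KB is waiting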

I would check the number of bytes read which is returned by the Read(Byte[], Int32, Int32) method and make sure it matches what you expect.
Make sure you are listening for ErrorReceived events (via a SerialErrorReceivedEventHandler) on the port object, as sketched below. An RXOver error would indicate your receive buffer is full.
Check the thread safety on your output buffer. If the write to the output buffer is not thread safe, a second write may corrupt the first write.
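A sketch of that error listening (the handler name is illustrative):

    serialPort1.ErrorReceived += new SerialErrorReceivedEventHandler(serialPort1_ErrorReceived);

    private void serialPort1_ErrorReceived(object sender, SerialErrorReceivedEventArgs e)
    {
        if (e.EventType == SerialError.RXOver)
        {
            // The driver's receive buffer overflowed: bytes have already been lost.
        }
    }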

Is your bwLogger a BinaryWriter class? You might try wrapping it around a BufferedStream so the disk I/O doesn't block your receive path as often.
Also, if your packets have a known ending character, you can set the SerialPort.NewLine property to enable you to use ReadLine/WriteLine, although I don't think that would make much of a performance difference.
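A sketch of wiring that up (the file name and buffer size are placeholders):

    var fileStream = new FileStream("capture.bin", FileMode.Create, FileAccess.Write);
    var bwLogger = new BinaryWriter(new BufferedStream(fileStream, 64 * 1024));
    // Writes now land in the in-memory buffer and only hit the disk in 64 KB batches.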

The machines I've been working with recently all send a stop code (in my case ASCII code 3 or 4). If you also have this feature, you can make use of ReadTo(string) off your SerialPort object.
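For example, if ASCII 3 (ETX) ends each message, a sketch would be:

    string message = serialPort1.ReadTo("\u0003");   // blocks until the stop code arrives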

Related

Data errors with my serial receive method

I'm taking data from a serial instrument for plotting on a chart. The data stream is 230 kbps, and the serial pipeline is less than 50% full; data arrives at about 100 kbps and doesn't really vary in rate or quantity.
Using just a serial terminal program, like Teraterm, on the same computer, I can capture data and prove that both the source of the data and the test reception method are fine; I see no errors in the captured data.
The Windows Forms application I'm developing loses data. I've reduced it from receiving, capturing (in parallel), parsing, and plotting down to just receiving and capturing, and have found that I still see lost data in the capture.
I'm not a long experienced Windows person, so therefore may not know of better ways to accomplish the same functions. Here are the actions I'm taking to perform receive actions:
I'm using a System.IO.Ports.SerialPort class.
I modify the .DataReceived event via:
+= new SerialDataReceivedEventHandler(comPort_DataReceive);
I then call the open() method.
Note: I may be doing something incorrect here; I never remove the .DataReceived handler with a -= at any point, so each time I open the port the handler is added yet again. Nevertheless, these problems occur even when I've only talked to the port once.
Here's my code for the data receive function. RxString is a string.
private void comPort_DataReceive(object sender, SerialDataReceivedEventArgs e)
{
    RxString = comPort.ReadExisting();
    this.Invoke(new EventHandler(ParseData));
}

private void ParseData(object sender, EventArgs e)
{
    // Save to capture file, if capture is enabled
    if ((WriteToFileEnabled == true) && (WriteToFileName != null))
    {
        writeFileHandle.Write(RxString);
    }
    return;
    // Previously would parse and plot data
}
So, how would you implement a receive in this situation to get this data without losing it?
Follow-on questions are things like: How big is the buffer for serial receive, and do I need to worry about that if I have a reasonably responsive application? Flow control is irrelevant; the remote device is going to send data no matter what, so it would be up to my computer to take that data and process it or ignore it. But how would I know if I've lost data or experienced framing errors and the like? (I ask that last one without having searched much on the SerialPort class structure, sorry.)
Let's assume that your device is sending messages that are 85 bytes in length. The DataReceived event handler may or may not fire just once to receive those 85 bytes. Since it might fire more than once, your code must account for that. The DataReceived event handler should read the bytes available and append them to a buffer that is processed later.
Also, only one of the events raised by the SerialPort class can execute at a time. In the example, assume the handler has to fire three times to receive the 85 bytes. While processing the first part, the other two can't execute. If, while processing the first part, one of the other events (PinChanged or ErrorReceived) is needed, it can't execute either.
My first two experiences with the SerialPort class were a 9600 bps terminal and a 1 Mbps bluetooth device. What worked for the slower did not work for the faster, but when I figured out how to get the faster to work the slower could use the same methodology.
My methodology:
Before opening the serial port I start two other background threads that run in a do loop. The first one (Receive) reads all available bytes from the serial port, adds them to a buffer, and signals the second thread on every read. The second one (Protocol) determines if a full message has arrived, does any byte-to-string conversion, updates the UI, etc. Depending on the application I may start a third thread that handles errors and pin changes. All of these threads are throttled by a System.Threading.AutoResetEvent.
My DataReceive event handler has one line in it, a Set on the AutoResetEvent that is throttling Receive.
A VB example of this can be found here SerialPort Methodology. Since adopting this methodology I have not had any of the problems that seem to plague other SerialPort users and have used it successfully with speeds up to 2Mbps.
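A rough C# sketch of the same shape, assuming the comPort object from the question; the queue, the loop bodies, and the method names are illustrative:

    private readonly AutoResetEvent receiveNow = new AutoResetEvent(false);
    private readonly AutoResetEvent protocolNow = new AutoResetEvent(false);
    private readonly ConcurrentQueue<byte> rxQueue = new ConcurrentQueue<byte>();

    private void comPort_DataReceive(object sender, SerialDataReceivedEventArgs e)
    {
        receiveNow.Set();   // the handler's only job: wake the Receive thread
    }

    private void ReceiveLoop()    // background thread 1: drain the port quickly
    {
        while (true)
        {
            receiveNow.WaitOne();
            while (comPort.BytesToRead > 0)
            {
                byte[] chunk = new byte[comPort.BytesToRead];
                int read = comPort.Read(chunk, 0, chunk.Length);
                for (int i = 0; i < read; i++) rxQueue.Enqueue(chunk[i]);
            }
            protocolNow.Set();   // signal the Protocol thread that new bytes arrived
        }
    }

    private void ProtocolLoop()   // background thread 2: reassemble and handle messages
    {
        while (true)
        {
            protocolNow.WaitOne();
            // Dequeue bytes, decide whether a complete message has arrived,
            // parse it, and marshal any UI work back with Invoke/BeginInvoke.
        }
    }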

How to Properly Read from a SerialPort in .NET

I'm embarrassed to have to ask such a question, but I'm having a rough time figuring out how to reliably read data over a serial port with the .NET SerialPort class.
My first approach:
static void Main(string[] args)
{
    _port = new SerialPort
    {
        PortName = portName,
        BaudRate = 57600,
        DataBits = 8,
        Parity = Parity.None,
        StopBits = StopBits.One,
        RtsEnable = true,
        DtrEnable = false,
        WriteBufferSize = 2048,
        ReadBufferSize = 2048,
        ReceivedBytesThreshold = 1,
        ReadTimeout = 5000,
    };
    _port.DataReceived += _port_DataReceived;
    _port.Open();

    // whatever
}

private void _port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var buf = new byte[_port.BytesToRead];
    var bytesRead = _port.Read(buf, 0, buf.Length);
    _port.DiscardInBuffer();

    for (int i = 0; i < bytesRead; ++i)
    {
        // read each byte, look for start/end values,
        // signal complete packet event if/when end is found
    }
}
So this has an obvious problem; I am calling DiscardInBuffer, so any data which came in after the event was fired is discarded, i.e., I'm dropping data.
Now, the documentation for SerialPort.Read() does not even state if it advances the current position of the stream (really?), but I have found other sources which claim that it does (which makes sense). However, if I do not call DiscardInBuffer I eventually get an RXOver error, i.e., I'm taking too long to process each message and the buffer is overflowing.
So... I'm really not a fan of this interface. If I have to process each buffer on a separate thread I'll do that, but that comes with its own set of problems, and I'm hoping that I am missing something as I don't have much experience with this interface.
Jason makes some good points about reducing UI access from the worker thread, but an even better option is to not receive the data on a worker thread in the first place.
Use port.BaseStream.ReadAsync to get your data, event-driven, on the thread where you want it. I've written more about this approach at http://www.sparxeng.com/blog/software/must-use-net-system-io-ports-serialport
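A minimal sketch of that approach, assuming the port is already configured and open; the buffer size and the ProcessBytes call are placeholders:

    private async Task ReadLoopAsync(SerialPort port)
    {
        byte[] buffer = new byte[4096];
        while (true)
        {
            // Completes as soon as at least one byte is available; no DataReceived event needed.
            int bytesRead = await port.BaseStream.ReadAsync(buffer, 0, buffer.Length);
            if (bytesRead > 0)
            {
                ProcessBytes(buffer, bytesRead);   // placeholder for your own framing/parsing
            }
        }
    }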
To correctly handle data from a serial port you need to do a couple of things.
First, don't handle the data in your receive event. Copy the data somewhere else and do any processing on another thread. (This is true of most events - it is a bad idea to do any time-consuming processing in an event handler as it delays the caller and can introduce problems. You also need to be careful as your event is raised on a different thread to your main application)
Secondly, you can't guarantee that you will receive exactly one packet, or a complete packet when you receive data - it may come to you in small fragments.
So the upshot of this is that you should create your own buffer (big enough to hold several packets), and when you receive data, append it to your buffer. Then in another thread you can process the buffer, looking to see if you can decode a packet from it and then consume that data. You may have to skip the end of a partial packet before you find the start of a valid one. If you don't have enough data to build a full packet, then you may need to wait for a bit until more data arrives.
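As a sketch of what that buffer processing can look like (START, END and HandlePacket are placeholders for whatever framing your protocol actually uses, and access to the buffer would need to be synchronized with the receiving side):

    private readonly List<byte> rxBuffer = new List<byte>();   // appended to by the receive side
    private const byte START = 0x02;   // assumed start-of-packet marker
    private const byte END = 0x03;     // assumed end-of-packet marker

    // Called on the processing thread after new bytes have been appended to rxBuffer.
    private void TryExtractPackets()
    {
        while (true)
        {
            int start = rxBuffer.IndexOf(START);
            if (start < 0) { rxBuffer.Clear(); return; }       // no start marker yet: discard junk
            if (start > 0) rxBuffer.RemoveRange(0, start);     // skip the tail of a partial packet
            int end = rxBuffer.IndexOf(END, 1);
            if (end < 0) return;                               // packet incomplete: wait for more data
            byte[] packet = rxBuffer.GetRange(0, end + 1).ToArray();
            rxBuffer.RemoveRange(0, end + 1);
            HandlePacket(packet);                              // placeholder consumer
        }
    }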
You shouldn't call Discard on the port - just read the data and consume it. Each time you are called, there will be another fragment of data to process. It does not remember the data from previous calls - each time your event is called, it is given a small burst of data that has arrived since you were last called. Just use the data you've been given and return.
As a last suggestion: Don't change any settings for the port unless you specifically need to for it to operate properly. So you must set the baud rate, data/stop bits and parity, but avoid trying to change properties like the Rts/Dtr, buffer sizes and read thresholds unless you have a good reason to think you know better than the author of the serial port. Most serial devices work in an industry standard manner these days, and changing these low-level options is very likely to cause trouble unless you're talking to some unusual equipment and you intimately know the hardware.
In particular, setting the ReceivedBytesThreshold to 1 is probably what is causing the failure you've mentioned, because you are asking the serial port to call your event handler for every single byte. At 57,600 bps with 8N1 framing that is roughly 5,760 bytes per second, giving your event handler only about 0.17 milliseconds to process each byte before you'll start to get re-entrant calls.
DiscardInBuffer is typically only used immediately after opening a serial port. It is not required for standard serial port communication so you should not have it in your dataReceived handler.

SocketAsyncEventArgs buffer is full of zeroes

I'm writing a message layer for my distributed system. I'm using IOCP, i.e., the Socket.XXXAsync methods.
Here's something pretty close to what I'm doing (in fact, my receive function is based on his):
http://vadmyst.blogspot.com/2008/05/sample-code-for-tcp-server-using.html
What I've found is that at the start of the program (two test servers talking to each other) I each time get a number of SAEA objects where the .Buffer is entirely filled with zeroes, yet the .BytesTransferred is the size of the buffer (1024 in my case).
What does this mean? Is there a special condition I need to check for? My system interprets this as an incomplete message and moves on, but I'm wondering if I'm actually missing some data. I was under the impression that if nothing was being received, you'd not get a callback. In any case, I can see in Wireshark that there aren't any zero-length packets coming in.
I've found the following when I Googled it, but I'm not sure my problem is the same:
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/40fe397c-b1da-428e-a355-ee5a6b0b4d2c
http://go4answers.webhost4life.com/Example/socketasynceventargs-buffer-not-ready-121918.aspx
I am not sure what is going on in the linked example. It appears to be using asynchronous sockets in a synchronous way. I cannot see any callbacks or similar in the code. You may need to rethink whether you need synchronous or asynchronous sockets :).
As to the problem at hand, it likely stems from the possibility that your functions are trying to read/write the buffer before the network transmit/receive has been completed. Try using the callback functionality included in the async Socket. E.g.
// This goes into your accept function, to begin receiving the data
socketName.BeginReceive(yourbuffer, 0, yourbuffer.Length,
    SocketFlags.None, new AsyncCallback(OnRecieveData), socketName);

// In your callback function you know that the socket has finished receiving data.
// This callback will fire when the receive is complete.
private void OnRecieveData(IAsyncResult input)
{
    Socket inSocket = (Socket)input.AsyncState;    // This is just a typecast
    int bytesRead = inSocket.EndReceive(input);    // number of bytes actually received
    // Pull the data out of the socket as you already have before.
    // state.Data.Write ......
}

Problem with C# when reading from serial

I have a problem with a serial port reader in C#.
If I send 5555 through the serial port, the program prints out 555.
Here is the program:
public static void Main()
{
    byte[] buffer = new byte[256];
    string buff;
    using (SerialPort sp = new SerialPort("COM2", 6200))
    {
        sp.Open();

        // read directly
        sp.Read(buffer, 0, (int)buffer.Length);

        // read using a Stream
        sp.BaseStream.Read(buffer, 0, (int)buffer.Length);

        string sir = System.Text.Encoding.Default.GetString(buffer);
        Console.WriteLine(sir);
    }
}
Both your computer's and the other device's UART may have a hardware buffer which passes data with respect to the actual hardware flow control enabled for the connection. Hence you have to take care of:
hardware flow control setup;
timing of data reading / writing;
Bear in mind you are working with a real-time hardware device that has its own timing which needs to be respected by your application. Communicating with a hardware device is a process. In other words, a one-shot read may not be enough to retrieve all input you are expecting on the logical level.
Update: Google for “SerialPort tutorial C#” and study a few of them, like this one.
You need to use the int returned from the "Read" methods. The value returned will tell you how many bytes were actually read. You will probably need to loop and call "Read" multiple times until you have read the number of bytes you need.
Update: This other question has some sample code that shows how to read multiple times until you have enough data to process.
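A sketch of such a loop, reusing the sp port from the code above; the expected length is an assumption for illustration:

    int expected = 4;                    // e.g. the four characters of "5555"
    byte[] data = new byte[expected];
    int offset = 0;
    while (offset < expected)
    {
        // Read returns how many bytes it actually got, which may be fewer than requested.
        int read = sp.Read(data, offset, expected - offset);
        offset += read;
    }
    Console.WriteLine(System.Text.Encoding.Default.GetString(data));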

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
// ...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(string));
If you're dropping packets, you'll see a delay in transmission since it has to re-transmit the dropped packets. This could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, will trigger a resend of only those packets which were dropped and not every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order for TCP, and if there's some reason that it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm as well in terms of how much data it can aggregate prior to sending because it can't send more than the window size.
The reason you see this is because on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using SetSockOpt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
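For reference, turning it off in .NET looks like the sketch below, though as said above you should leave it enabled unless you have measured a genuine need:

    // Either line disables Nagle's algorithm on the socket; they are equivalent.
    socket.NoDelay = true;
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);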
Well, there's one thing wrong with your code to start with, if you're counting the number of Receive calls that complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one packet and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, too.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
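A sketch of the length-prefix idea (the 4-byte big-endian prefix and the helper names are illustrative, not an existing API):

    // Sender: write the payload length, then the payload itself.
    static void SendMessage(Socket socket, string message)
    {
        byte[] payload = Encoding.ASCII.GetBytes(message);
        byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        socket.Send(prefix);
        socket.Send(payload);
    }

    // Receiver: loop because Receive may return fewer bytes than asked for.
    static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0) throw new SocketException();   // connection closed by the peer
            offset += read;
        }
        return buffer;
    }

    static string ReceiveMessage(Socket socket)
    {
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(ReceiveExactly(socket, 4), 0));
        return Encoding.ASCII.GetString(ReceiveExactly(socket, length));
    }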
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their length so the receiver knows when he's actually done receiving them.
How long is each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read big blocks of in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely come to your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to come without it being merged with other strings, look into adding in a "protocol" with an identifier to show the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
