C#: strange shift in data buffer during communication with device

In my app I have to receive and process data from a device connected through a COM port, and I have this partially working. For this particular device, the first two bytes of a packet are the packet's length (minus 2, since the length doesn't count those two bytes themselves; in other words, it is the length of the rest of the packet). Since I know the device tends to send its data slowly, I then read the rest of the packet in a loop until all data has been read. This is where I hit a strange problem. Let's assume the entire packet (including the two length bytes) looks like this: ['a', 'b', 'c', 'd', 'e']. After reading the first two bytes ('a' and 'b'), I'd expect the rest of the packet to look like ['c', 'd', 'e']. Instead, it looks like ['b', 'c', 'd', 'e']. How can the second byte of the response still be in the read buffer? And why just the second one, without the first?
The code below shows how I handle the communication process:
//The data array is some array with output data
//The size array is two-byte array to store frame-length bytes
//The results array is for device's response
//The part array is for part of the response that's currently in read buffer
port.Write(data, 0, data.Length);
//Receiving device's response (if there's any)
try
{
    port.Read(size, 0, 2); //Read first two bytes (packet's length) of the response
    //We'll store entire response in results array. We get its size from first two bytes of response
    //(+2 for these very bytes since they're not counted in the device's data frame)
    results = new byte[(size[0] | ((int)size[1] << 8)) + 2];
    results[0] = size[0]; results[1] = size[1]; //We'll need packet size for checksum count
    //Time to read rest of the response
    for (offset = 2; offset < results.Length && port.BytesToRead > 0; offset += part.Length)
    {
        System.Threading.Thread.Sleep(5); //Device's quite slow, isn't it
        try
        {
            part = new byte[port.BytesToRead];
            port.Read(part, 0, part.Length); //Here's where old data is being read
        }
        catch (System.TimeoutException)
        {
            //Handle it somehow
        }
        Buffer.BlockCopy(part, 0, results, offset, part.Length);
    }
    if (offset < results.Length) //Something went wrong during receiving response
        throw new Exception();
}
catch (Exception)
{
    //Handle it somehow
}

You are making a traditional mistake: you cannot ignore the return value of Read(). It tells you how many bytes were actually received. It will be at least 1 and no more than count; you get whatever happens to be in the receive buffer, which is what BytesToRead reports. Simply keep calling Read() until you're happy:
int cnt = 0;
while (cnt < 2) cnt += port.Read(size, cnt, 2 - cnt);
Use the same pattern in the second part of your code as well, so you don't burn 100% of a core and don't need the Sleep() call. Do keep in mind that a TimeoutException is just as likely when you read the size; more likely, actually. It is a fatal exception if it is thrown when cnt > 0, since you can't resynchronize anymore.
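Applied to the body of the packet, the same pattern might look roughly like this (a sketch only, reusing port and results from the question's code; total is just a local name here):
int total = 2; // the two length bytes are already in results
while (total < results.Length)
{
    // Read() returns how many bytes it actually copied; never assume it filled the request.
    total += port.Read(results, total, results.Length - total);
}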

Well, strangely enough, when I read the first two bytes separately:
port.Read(size, 0, 1); //Read first two bytes (packet's length) of the response
port.Read(size, 1, 1); //Second time, lol
Everything works just fine, no matter what kind of data packet I receive from the device.

The documentation for SerialPort contains the following text:
Because the SerialPort class buffers data, and the stream contained in the BaseStream property does not, the two might conflict about how many bytes are available to read. The BytesToRead property can indicate that there are bytes to read, but these bytes might not be accessible to the stream contained in the BaseStream property because they have been buffered to the SerialPort class.
Could this explain why BytesToRead is giving you confusing values?
Personally, I always use the DataReceived event and in my event handler, I use ReadExisting() to read all immediately available data and add it to my own buffer. I don't attempt to impose any meaning on the data stream at that level, I just buffer it; instead I'll typically write a little state machine that takes characters out of the buffer one at a time and parses the data into whatever format is required.
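In outline, that handler can be as small as this (a sketch; receiveBuffer, ParseBuffer and the port field are illustrative names, not from any of the code above):
private readonly StringBuilder receiveBuffer = new StringBuilder();

private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Just append whatever has arrived; no interpretation at this level.
    receiveBuffer.Append(port.ReadExisting());
    ParseBuffer(); // a separate state machine pulls characters off and builds messages
}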
Alternatively, you could use Reactive Extensions to produce an observable sequence of received characters and then layer observers on top of that. I do it with a couple of extension methods like this:
public static class SerialObservableExtensions
{
    static readonly Logger log = LogManager.GetCurrentClassLogger();

    /// <summary>
    /// Captures the <see cref="System.IO.Ports.SerialPort.DataReceived" /> event of a serial port and returns an
    /// observable sequence of the events.
    /// </summary>
    /// <param name="port">The serial port that will act as the event source.</param>
    /// <returns><see cref="IObservable{Char}" /> - an observable sequence of events.</returns>
    public static IObservable<EventPattern<SerialDataReceivedEventArgs>> ObservableDataReceivedEvents(
        this ISerialPort port)
    {
        var portEvents = Observable.FromEventPattern<SerialDataReceivedEventHandler, SerialDataReceivedEventArgs>(
            handler =>
            {
                log.Debug("Event: SerialDataReceived");
                return handler.Invoke;
            },
            handler =>
            {
                // We must discard stale data when subscribing or it will pollute the first element of the sequence.
                port.DiscardInBuffer();
                port.DataReceived += handler;
                log.Debug("Listening to DataReceived event");
            },
            handler =>
            {
                port.DataReceived -= handler;
                log.Debug("Stopped listening to DataReceived event");
            });
        return portEvents;
    }

    /// <summary>
    /// Gets an observable sequence of all the characters received by a serial port.
    /// </summary>
    /// <param name="port">The port that is to be the data source.</param>
    /// <returns><see cref="IObservable{char}" /> - an observable sequence of characters.</returns>
    public static IObservable<char> ReceivedCharacters(this ISerialPort port)
    {
        var observableEvents = port.ObservableDataReceivedEvents();
        var observableCharacterSequence = from args in observableEvents
                                          where args.EventArgs.EventType == SerialData.Chars
                                          from character in port.ReadExisting()
                                          select character;
        return observableCharacterSequence;
    }
}
The ISerialPort interface is just a header interface that I extracted from the SerialPort class, which makes it easier for me to mock it when I'm unit testing.
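For reference, such an interface might look roughly like this (a hypothetical minimal extraction; the real one would expose whichever SerialPort members the code above needs):
public interface ISerialPort
{
    event SerialDataReceivedEventHandler DataReceived;
    int BytesToRead { get; }
    string ReadExisting();
    void DiscardInBuffer();
}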

Related

Awaiting data from Serial Port in C#

I have an application that receives data from a wireless radio using RS-232. These radios use an API for communicating with multiple clients. To use the radios I created a library for communicating with them that other software can use with minimal changes compared to a normal SerialPort connection. The library reads from a SerialPort object and inserts incoming data into different buffers depending on the radio it receives from. Each packet that is received contains a header indicating its length, source, etc.
I start by reading the header, which is fixed-length, from the port and parsing it. In the header, the length of the data is defined before the data payload itself, so once I know the length of the data, I then wait for that much data to be available, then read in that many bytes.
Example (the other elements from the header are omitted):
// Read header
byte[] header = new byte[RCV_HEADER_LENGTH];
this.Port.Read(header, 0, RCV_HEADER_LENGTH);
// Get length of data in packet
short dataLength = header[1];
byte[] payload = new byte[dataLength];
// Make sure all the payload of this packet is ready to read
while (this.Port.BytesToRead < dataLength) { }
this.Port.Read(payload, 0, dataLength);
Obviously the empty while loop is bad. If for some reason the data never arrives, the thread will lock up. I haven't encountered this problem yet, but I'm looking for an elegant way to handle it. My first thought is to add a short timer that starts just before the while loop and sets an abortRead flag when it elapses, which would break the while loop, like this:
// Make sure all the payload of this packet is ready to read
abortRead = false;
readTimer.Start();
while (this.Port.BytesToRead < dataLength && !abortRead) {}
This code needs to handle a constant stream of incoming data as quickly as it can, so keeping overhead to a minimum is a concern, and I am wondering if I am doing this properly.
You don't have to run this while loop: Read blocks until data arrives, and it throws a TimeoutException if nothing arrives within SerialPort.ReadTimeout (which you can adjust to your needs).
But one general remark: your while loop causes intensive CPU work for nothing. In the few milliseconds it takes the data to arrive, you would go through thousands of iterations of that loop, so you should at least add a Thread.Sleep inside.
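For illustration, a minimal sketch of relying on ReadTimeout instead of the busy-wait (Port and dataLength are the question's; the 2000 ms timeout is an arbitrary choice):
this.Port.ReadTimeout = 2000; // arbitrary; pick something larger than the expected gap between packets
byte[] payload = new byte[dataLength];
int read = 0;
try
{
    while (read < dataLength)
    {
        // Read blocks until at least one byte is available, or throws TimeoutException.
        read += this.Port.Read(payload, read, dataLength - read);
    }
}
catch (TimeoutException)
{
    // No (or not enough) data arrived in time; decide whether to resynchronize or give up.
}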
If you want to truly address this problem, you need to run the code in the background. There are different options for that: you can start a thread, start a Task, or use async/await.
To fully cover all options, the answer would be endless. If you use threads or tasks with the default scheduler and your wait time is expected to be rather short, you can use SpinWait.SpinUntil instead of your while loop. This will perform better than your solution:
SpinWait.SpinUntil(() => this.Port.BytesToRead >= dataLength);
If you are free to use async/await, I would recommend this solution, since you need only a few changes to your code. You can use Task.Delay and, ideally, pass a CancellationToken so you can cancel the operation:
try
{
    while (this.Port.BytesToRead < dataLength)
    {
        await Task.Delay(100, cancellationToken);
    }
}
catch (OperationCanceledException)
{
    //Cancellation logic
}
I think I would do this asynchronously with the SerialPort DataReceived event.
// Class fields
private const int RCV_HEADER_LENGTH = 8;
private const int MAX_DATA_LENGTH = 255;
private SerialPort Port;
private byte[] PacketBuffer = new byte[RCV_HEADER_LENGTH + MAX_DATA_LENGTH];
private int Readi = 0;
private int DataLength = 0;

// In your constructor
this.Port.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandler);

private void DataReceivedHandler(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType != SerialData.Chars)
    {
        return;
    }
    // Read all available bytes.
    int len = Port.BytesToRead;
    byte[] data = new byte[len];
    Port.Read(data, 0, len);
    // Go through each byte.
    for (int i = 0; i < len; i++)
    {
        // Add the next byte to the packet buffer.
        PacketBuffer[Readi++] = data[i];
        // Check if we've received the complete header.
        if (Readi == RCV_HEADER_LENGTH)
        {
            DataLength = PacketBuffer[1];
        }
        // Check if we've received the complete data.
        if (Readi == RCV_HEADER_LENGTH + DataLength)
        {
            // The packet is complete; add it to the appropriate buffer.
            Readi = 0;
        }
    }
}

Winsock receive data

I'm trying to send a fairly large amount of data (about 11000 bytes) to the server and am having a problem: the data does not arrive complete.
Here is the code.
On my server, there is a loop:
do
{
    Tick = Environment.TickCount;
    Listen.AcceptClient();
    Listen.Update();
}
Listen.update
public static void UpdateClient(UserConnection client)
{
    string data = null;
    Decoder utf8Decoder = Encoding.UTF8.GetDecoder();
    // byte[] buffer = new byte[client.TCPClient.Available];
    //try
    //{
    //    client.TCPClient.GetStream().
    //    client.TCPClient.GetStream().Read(buffer, 0, buffer.Length);
    //}
    //catch
    //{
    //    int code = System.Runtime.InteropServices.Marshal.GetExceptionCode();
    //    Console.WriteLine("Erro Num: " + code);
    //}
    //data = Encoding.UTF8.GetString(buffer);
    //Console.WriteLine("Byte is: " + ReadFully(client.TCPClient.GetStream(), 0));
    Console.WriteLine("Iniciando");
    byte[] buffer = ReadFully(client.TCPClient.GetStream(), 0);
    int charCount = utf8Decoder.GetCharCount(buffer, 0, buffer.Length);
    Char[] chars = new Char[charCount];
    int charsDecodedCount = utf8Decoder.GetChars(buffer, 0, buffer.Length, chars, 0);
    foreach (Char c in chars)
    {
        data = data + String.Format("{0}", c);
    }
    int buffersize = buffer.Length;
    Console.WriteLine("Byte is: " + buffer.Length);
    Console.WriteLine("Data is: " + data);
    Console.WriteLine("Size is: " + data.Length);
    Server.Network.ReceiveData.SelectPacket(client.Index, data);
}
/// <summary>
/// Reads data from a stream until the end is reached. The
/// data is returned as a byte array. An IOException is
/// thrown if any of the underlying IO calls fail.
/// </summary>
/// <param name="stream">The stream to read data from</param>
/// <param name="initialLength">The initial buffer length</param>
public static byte[] ReadFully(Stream stream, int initialLength)
{
    // If we've been passed an unhelpful initial length, just
    // use 32K.
    if (initialLength < 1)
    {
        initialLength = 32768;
    }
    byte[] buffer = new byte[initialLength];
    int read = 0;
    int chunk;
    chunk = stream.Read(buffer, read, buffer.Length - read);
checkreach:
    read += chunk;
    // If we've reached the end of our buffer, check to see if there's
    // any more information
    if (read == buffer.Length)
    {
        int nextByte = stream.ReadByte();
        // End of stream? If so, we're done
        if (nextByte == -1)
        {
            return buffer;
        }
        // Nope. Resize the buffer, put in the byte we've just
        // read, and continue
        byte[] newBuffer = new byte[buffer.Length * 2];
        Array.Copy(buffer, newBuffer, buffer.Length);
        newBuffer[read] = (byte)nextByte;
        buffer = newBuffer;
        read++;
        goto checkreach;
    }
    // Buffer is now too big. Shrink it.
    byte[] ret = new byte[read];
    Array.Copy(buffer, ret, read);
    return ret;
}
Listen.AcceptClient
// Is anyone trying to join the party? ;D
if (listener.Pending())
{
    // We add them to the list
    Clients.Add(new UserConnection(listener.AcceptTcpClient(), Clients.Count()));
And this is my winsock server.
Anyone have tips or a solution?
Start here: Winsock FAQ. It will explain a number of things you need to know, including that you are unlikely in a single call to Read() to read all of the data that was sent. Every single TCP program needs to include somewhere logic that will receive data via some type of looping, and in most cases also logic to interpret the data being received to identify boundaries between individual elements of the received data (e.g. logical messages, etc. … the only exception is when the application protocol dictates that the whole transmission from connection to closure represents a single "unit", in which case the only boundary that matters is the end of the stream).
In addition (to address just some of the many things wrong in the little bit of code you included here):
Don't use TcpClient.Available; it's not required in correct code.
Don't use Marshal.GetExceptionCode() to retrieve exception information for managed exceptions
Don't use Convert.ToInt32() when your value already is an instance of System.Int32. And more generally, don't use Convert at all in scenarios where a simple cast would accomplish the same thing (even a cast isn't needed here, but I can tell from the code here what your general habit is…you should break that habit).
Don't just ignore exceptions. Either do something to actually handle them, or let them propagate up the call stack. There's no way the rest of the code in your UpdateClient() method could work if an exception was thrown by the Read() method, but you go ahead and execute it all anyway.
Don't use the Flush() method on a NetworkStream object. It does nothing (it's there only because the Stream class requires it).
Do use Stream.ReadAsync() instead of dedicating a thread to each connection
Do catch exceptions by including the exception type and a variable to accept the exception object reference
Do use a persistent Decoder object to decode UTF8-encoded text (or any other variable-byte-length text encoding), so that if a character's encoded representation spans multiple received buffers, the text is still decoded properly (there is a short sketch of this at the end of this answer).
And finally:
Do post a good, minimal, complete code example. It is simply not possible to answer a question with any sort of preciseness if it doesn't include a proper, complete code example.
Addendum:
Don't use the goto statement. Use a proper loop (e.g. while). Had you used a proper loop, you probably would have avoided the bug in your code where you fail to branch back to the actual Read() call.
Don't expect the Read() method to fill the buffer you passed it. Not only (as I already mentioned above) is there no guarantee that all of the data sent will be returned in a single call to Read(), there is no guarantee that the entire buffer you pass to Read() will be filled before Read() returns.
Don't read one byte at a time. That's one of the surest ways to kill performance and/or to introduce bugs. In your own example, I don't see anything obviously wrong – you only (intend to) read the single byte when looking for more data and then (intend to) go back to reading into a larger buffer – but it's not required (just try to read more data normally…that gets you the same information without special cases in the code that can lead to bugs and in any case make the code harder to understand).
Do look at other examples and tutorials of networking code. Reinventing the wheel may well eventually lead to a good solution, but odds are low of that and it is a lot more time-consuming and error-prone than following someone else's good example.
I will reiterate: please read the Winsock FAQ. It has a lot of valuable information that everyone who wants to write networking code needs to know.
I will also reiterate: you cannot get a precise answer without a COMPLETE code example.
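To make a couple of the "Do" items above concrete (ReadAsync and a persistent Decoder), a rough sketch could look like the following; the method and variable names are illustrative, not from the question, and the usual System.Net.Sockets / System.Text / System.Threading.Tasks usings are assumed:
static async Task ReceiveLoopAsync(NetworkStream stream)
{
    Decoder decoder = Encoding.UTF8.GetDecoder(); // persistent: survives buffer boundaries
    byte[] buffer = new byte[4096];
    char[] chars = new char[Encoding.UTF8.GetMaxCharCount(buffer.Length)];
    int read;
    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        // Decode only the bytes actually read; partial UTF-8 sequences are carried over by the decoder.
        int charCount = decoder.GetChars(buffer, 0, read, chars, 0);
        string text = new string(chars, 0, charCount);
        // Hand "text" to whatever layer splits the stream into logical messages.
        Console.WriteLine(text);
    }
    // ReadAsync returning 0 means the remote side closed the connection.
}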

random byte loss when streaming data over network

My issue is that when I'm streaming a continuous stream of data over a local LAN, sometimes random bytes get lost in the process.
As it is right now, the code is set up to stream about 1027 bytes or so ~40 times a second over the LAN, and sometimes (very rarely) one or more of the bytes are lost.
The thing that baffles me is that the actual byte isn't "lost"; it is just set to 0 regardless of the original data. (I'm using TCP, by the way.)
Here's the sending code:
public void Send(byte[] data)
{
    if (!server)
    {
        if (CheckConnection(serv))
        {
            serv.Send(BitConverter.GetBytes(data.Length));
            serv.Receive(new byte[1]);
            serv.Send(data);
            serv.Receive(new byte[1]);
        }
    }
}
and the receiving code:
public byte[] Receive()
{
    if (!server)
    {
        if (CheckConnection(serv))
        {
            byte[] TMP = new byte[4];
            serv.Receive(TMP);
            TMP = new byte[BitConverter.ToInt32(TMP, 0)];
            serv.Send(new byte[1]);
            serv.Receive(TMP);
            serv.Send(new byte[1]);
            return TMP;
        }
        else return null;
    }
    else return null;
}
The sending and receiving of the empty bytes is just there to keep the system roughly in sync.
Personally I think the problem lies on the receiving side of the system; I haven't been able to prove that yet, though.
Just because you give Receive(TMP) a 4-byte array does not mean it is going to fill that array with 4 bytes. The Receive call is allowed to put anywhere between 1 and TMP.Length bytes into the array. You must check the returned int to see how many bytes of the array were filled.
Network connections are stream-based, not message-based. Any bytes you put onto the wire just get concatenated into a big queue and are read on the other side as they become available. So if you sent the two arrays 1,1,1,1 and 2,2,2,2 it is entirely possible that on the receiving side you call Receive three times with a 4-byte array and get
1,1,0,0 (Receive returned 2)
1,1,2,2 (Receive returned 4)
2,2,0,0 (Receive returned 2)
So what you need to do is look at the values you got back from Receive and keep looping till your byte array is full.
byte[] TMP = new byte[4];
//loop till all 4 bytes are read
int offset = 0;
while (offset < TMP.Length)
{
    offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
TMP = new byte[BitConverter.ToInt32(TMP, 0)];
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
//Reset the offset then loop till TMP.Length bytes are read.
offset = 0;
while (offset < TMP.Length)
{
    offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
}
//I don't understand why you are doing this, it is not necessary.
serv.Send(new byte[1]);
return TMP;
Lastly, you said "the network stream confuses you". I am willing to bet the above issue is one of the things that confused you, and going to a lower level will not remove those complexities. If you want these complex parts gone so you don't have to handle them, you will need to use a third-party library that handles them for you.

Sockets starts to slow down and not respond

I am developing a server (in C#) and a client (in Flash, ActionScript 3.0). The server sends data (around 90 bytes each) to clients continuously, and the clients behave according to the data they receive (the data is JSON-formatted).
For a while everything works as expected, but after some time the clients start to receive messages with lag. They wait for a while and then act only on the last message (some messages are lost). Later still, the clients wait and then process all the messages at the same time. I could not figure out what is causing this. My network conditions are stable.
Here is part of my C# code, sending a message:
public void Send(byte[] buffer)
{
    if (ClientSocket != null && ClientSocket.Connected)
    {
        ClientSocket.BeginSend(buffer, 0, buffer.Length, 0, WriteCallback, ClientSocket);
    }
}

private void WriteCallback(IAsyncResult result)
{
    //
}
And here is part of my client, receiving the message (ActionScript):
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):void {
    trace(socket.bytesAvailable);
    if (socket.bytesAvailable > 0) {
        try
        {
            var serverResponse:String = socket.readUTFBytes(socket.bytesAvailable);
            ....
I hope I have explained my problem. How should I optimize my code? What could be causing the lag? Thanks.
You really need to give more detail as to how you're setting up the socket (is it TCP or UDP?)
Assuming it's a TCP socket, then it would appear that your client relies on each receive call returning the same number of bytes that were sent by the server's Send() call. This is however not the case, and could well be the cause of your issues if a message is only being partially received on the client, or multiple messages are received at once.
For example, the server may send a 90 byte message in a single call, but your client may receive it in one 90-byte receive, or two 45-byte chunks, or even 90 x 1-byte chunks, or anything in between. Multiple messages sent by the server may also be partially combined when received by the client. E.g. two 90-byte messages may be received in a single 180-byte chunk, or a 150-byte and a 30-byte chunk, etc. etc.
You need therefore to provide some kind of framing on your messages so that when the stream of data is received by the client, it can be reliably reconstructed into individual messages.
The most basic framing mechanism would be to prefix each message sent with a fixed-length field indicating the message size. You may be able to get away with a single byte if you can guarantee that your messages will never be > 255 bytes long, which will simplify the receiving code.
On the client side, you first need to receive the length prefix, and then read up to that many bytes off the socket to construct the message data. If you receive fewer than the required number of bytes, your receiving code must wait for more data (appending it to the partially received message when it eventually arrives) until it has a complete message of the expected length.
Once the full message is received it can be processed as you are currently.
Unfortunately I don't know ActionScript, so can't give you an example of the client-side code, but here's how you might write the server and client framing in C#:
Server side:
public void SendMessage(string message)
{
    var data = Encoding.UTF8.GetBytes(message);
    if (data.Length > byte.MaxValue) throw new Exception("Data exceeds maximum size");
    var bufferList = new[]
    {
        new ArraySegment<byte>(new[] {(byte) data.Length}),
        new ArraySegment<byte>(data)
    };
    ClientSocket.Send(bufferList);
}
Client side:
public string ReadMessage()
{
    var header = new byte[1];
    // Read the header indicating the data length
    var bytesRead = ServerSocket.Receive(header);
    if (bytesRead > 0)
    {
        var dataLength = header[0];
        // If the message size is zero, return an empty string
        if (dataLength == 0) return string.Empty;

        var buffer = new byte[dataLength];
        var position = 0;
        while ((bytesRead = ServerSocket.Receive(buffer, position, buffer.Length - position, SocketFlags.None)) > 0)
        {
            // Advance the position by the number of bytes read
            position += bytesRead;
            // If there's still more data to read before we have a full message, call Receive again
            if (position < buffer.Length) continue;
            // We have a complete message - return it.
            return Encoding.UTF8.GetString(buffer);
        }
    }
    // If Receive returns 0, the socket has been closed, so return null to indicate this.
    return null;
}

Serial Port; Missing Packets after few hours

I am new to Visual C#. I have to receive a packet of 468 bytes every second from an embedded device over a serial line. The header of the packet is 0xbf, 0x13, 0x97, 0x74. After validating the packet header I save the packet, process it, and display it graphically.
The problem is that I start losing packets after a few hours. (Other software has logged the same data for a whole week and works fine.)
The code is here...
private void DataRec(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    rtTotBytes = comport.BytesToRead;
    rtTotBytesRead = comport.Read(rtSerBuff, 0, rtTotBytes);
    this.Invoke(new ComportDelegate(ComportDlgtCallback), rtSerBuff, rtTotBytesRead);
}

//Delegate
delegate void ComportDelegate(byte[] sBuff, int sByte);

//Callback Function to Delegate
private void ComportDlgtCallback(byte[] SerBuff, int TotBytes)
{
    for (int k = 0; k < TotBytes; k++)
    {
        switch (rtState)
        {
            case 0:
                if (SerBuff[k] == 0xbf) { rtState = 1; TempBuff[0] = 0xbf; }
                else rtState = 0;
                break;
            case 1:
                if (SerBuff[k] == 0x13) { rtState = 2; TempBuff[1] = 0x13; }
                else rtState = 0;
                break;
            case 2:
                if (SerBuff[k] == 0x97) { rtState = 3; TempBuff[2] = 0x97; }
                else rtState = 0;
                break;
            case 3:
                if (SerBuff[k] == 0x74) { rtState = 4; TempBuff[3] = 0x74; rtCnt = 4; }
                else rtState = 0;
                break;
            case 4:
                if (rtCnt == 467)
                {
                    TempBuff[rtCnt] = SerBuff[k];
                    TempBuff.CopyTo(PlotBuff, 0);
                    ProcessPacket(PlotBuff);
                    rtState = 0; rtCnt = 0;
                }
                else
                    TempBuff[rtCnt++] = SerBuff[k];
                break;
        }
    }
}
Another question: can BytesToRead be zero if a DataReceived event has occurred? Do you have to check (BytesToRead > 0) in the DataReceived event handler?
Serial port input data must be treated as a stream, not as a series of packets. For example, when the device sends a 0xbf, 0x13, 0x97, 0x74 packet, the DataRec function may be called once with the whole packet, twice with 0xbf, 0x13 and then 0x97, 0x74, four times with one byte each, etc. The program must be flexible enough to handle the input stream with some kind of parser. Your current program doesn't do this; it can miss logical packets that arrive spread over several function calls. The opposite situation is also possible, where several packets are received in a single DataRec call; your program isn't ready for that either.
Edit: a typical serial port input stream handling algorithm looks like this (a sketch follows the list):
The DataRec function adds the received data to an input queue and calls the parser.
The input queue is a byte array (or list) containing data that has been received but not yet parsed. New data is appended to the end, and parsed packets are removed from the beginning of the queue.
The parser reads the input queue, handles all recognized packets and removes them from the queue, leaving any unrecognized data for the next call.
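A compact sketch of that structure, using the 468-byte packets and the 0xbf 0x13 0x97 0x74 header from the question (the rxQueue field and ParseQueue method are illustrative names; marshalling to the UI thread and error handling are omitted):
private readonly List<byte> rxQueue = new List<byte>();

private void DataRec(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    byte[] chunk = new byte[comport.BytesToRead];
    int read = comport.Read(chunk, 0, chunk.Length);
    for (int i = 0; i < read; i++)
        rxQueue.Add(chunk[i]);   // append to the input queue
    ParseQueue();                // let the parser extract any complete packets
}

private void ParseQueue()
{
    // Extract packets while a full 468-byte frame starting with the header is available.
    while (rxQueue.Count >= 468)
    {
        if (rxQueue[0] == 0xbf && rxQueue[1] == 0x13 && rxQueue[2] == 0x97 && rxQueue[3] == 0x74)
        {
            byte[] packet = rxQueue.GetRange(0, 468).ToArray();
            rxQueue.RemoveRange(0, 468);
            ProcessPacket(packet);   // the question's existing packet handler
        }
        else
        {
            rxQueue.RemoveAt(0);     // discard bytes until a header lines up again
        }
    }
}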
I think a problem could be that you can't be sure you receive a full packet within the DataReceived event. It is possible that you get only the first half of the packet, and the second half arrives half a second later.
So you should implement another layer where you put the data into a buffer. How you proceed from there depends on the data format.
If you receive additional information such as an end marker or the length of the data, you can check whether the buffer already contains a complete packet. If it does, forward that full packet to your processing routine.
If you don't have this information, you have to wait until you receive the next header and forward the data in your buffer up to that new header.
Have you checked the memory usage of the program?
Maybe you have a small interop object or some memory that is not properly freed; it adds up after a few hours and makes the program sluggish, causing it to lose data.
I'd use Process Explorer to check how memory and CPU usage change after a few hours. Maybe check for HDD activity, too.
If this does not lead anywhere, run the program under a full-blown profiler like ANTS to check for problems.
As Alex Farber points out, there's no guarantee that when your DataReceived handler is invoked, all the bytes are there.
If your packets are always a fixed size and arrive at a low rate, you can use the Read function directly rather than relying on the DataReceived event. Conceptually:
packetSize = 468;
// ...initialization...
comport.ReadTimeout = 2000; //packets expected every 1000 milliseconds, so give it some slack
while (captureFlag)
{
    comport.Read(rtSerBuff, 0, packetSize);
    // ...do stuff...
}
This can be put into its own worker thread if you want.
Another approach would be to use the ReadLine method. You mention that the packets have a known starting signature. Do they also have a known ending signature that is guaranteed not to be repeated within the packet? If so, you can set the NewLine property to this ending signature and use ReadLine. Again, you can put this in a worker thread.
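A sketch of that ReadLine variant (assuming, purely for illustration, an end-of-packet signature of "\r\n"; if the packets are binary, the port's Encoding must be one that round-trips raw bytes, e.g. Latin-1):
comport.Encoding = Encoding.GetEncoding(28591); // ISO-8859-1 maps bytes 0..255 straight to chars
comport.NewLine = "\r\n";                       // hypothetical end-of-packet signature
comport.ReadTimeout = 2000;
while (captureFlag)
{
    try
    {
        string frame = comport.ReadLine();      // blocks until the end signature arrives (or times out)
        // ...convert the string back to bytes and process the packet...
    }
    catch (TimeoutException)
    {
        // no complete packet within the timeout
    }
}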
