Receiving uneven results from Serial Port in C#

I have two GUI applications, one in C++ and one in C#.
The applications do the same thing, and each has a function that writes to and reads from a COM port.
When I run my C++ app I receive the right result from the serial read: a buffer with 24 bytes.
But when I run my C# app I receive uneven results:
* Just a 1-byte buffer if I don't put a sleep between write and read.
* Buffers of varying size (between 10 and 22 bytes) if I do put a sleep between write and read.
What could be the reason for that?
My C++ code:
serial.write(&c, 1, &written);
serial.read(read_buf, read_len, &received); // received = 24
My C# code:
serial.Write(temp_char, 0, 1);
received = serial.Read(read_buff, 0, read_len); // received = 1
C# with sleep:
serial.Write(temp_char, 0, 1);
Thread.Sleep(100);
received = serial.Read(read_buff, 0, read_len); // received = (10~22)

Serial ports just give you a stream of bytes; they don't know how you've written blocks of data to them. When you call read, whatever bytes have been received so far are returned; if that isn't a complete message, you need to call read repeatedly until you have the whole message.
You need to define a protocol to indicate message boundaries. This could be a special character (e.g. a newline in a text-based protocol), or you can prefix your messages with a length.
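As an illustration, here is a minimal sketch of such a read loop for the fixed 24-byte reply from the question. The names serial, temp_char and read_buff come from the question; the accumulation loop itself is an assumption about how you might wait for the full reply:

byte[] read_buff = new byte[24];          // read_len = 24 in the question
int total = 0;
serial.Write(temp_char, 0, 1);
while (total < read_buff.Length)
{
    // SerialPort.Read may return fewer bytes than requested, so accumulate at an offset.
    // It throws TimeoutException if ReadTimeout elapses with no data arriving.
    total += serial.Read(read_buff, total, read_buff.Length - total);
}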

Related

Server does not receive all messages

I have two computers; I will call them Comp A and Comp B.
I have to:
send a PCM sound file from Comp A to Comp B;
verify how precisely this file was transmitted;
play the file I sent, on Comp B.
To send the file I use the function
socket.SendTo(packet, 0, count, SocketFlags.None, remoteEP);
from System.Net.Sockets.
To verify whether the file was being transmitted precisely, I monitor the traffic with Wireshark on both Comp A and Comp B. However, the packets of bytes arriving at Comp B don't coincide at all with the file being transmitted.
The program which sends the file opens it correctly and passes the right bytes of the source PCM file to Socket.SendTo(...). But Wireshark on Comp A (the sending side) displays completely incorrect bytes, i.e. Comp A sends incorrect bytes.
What could be the problem?
I figured out that socket.SendTo(packet, 0, count, SocketFlags.None, remoteEP);
sends the correct bytes if I send them with a delay. That is, I can send 400 bytes (without any loop) and my program sends 400 absolutely precise, correct bytes.
But I have a big PCM file. Its size is about 50 MB and its duration is 1 minute. I have to send this file over that minute so that it is transmitted evenly and uniformly, which means about 800 KB needs to be transmitted per second.
So here is my program code. I send about 800 KB per second using a timer function that is called 2 times per second.
private void m_pTimer_Tick(object sender, EventArgs e)
{
    uint sent_data = 0;
    while ((sent_data <= (BUFFERSIZE / 120)) && ((num * RAW_PACKET) + sent_data < BUFFERSIZE))
    {
        uint bytes_count = ((BUFFERSIZE - (RAW_PACKET * num)) > RAW_PACKET) ? RAW_PACKET : (BUFFERSIZE - (RAW_PACKET * num));
        byte[] buffer = new byte[bytes_count];
        Array.Copy(ReadBuffer, num * RAW_PACKET, buffer, 0, bytes_count);
        num++;
        // Send and read next.
        m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP);
        sent_data += bytes_count;
    }
    if ((num * RAW_PACKET) + sent_data == BUFFERSIZE)
    {
        m_pTimer.Enabled = false;
    }
    m_pPacketsReceived.Text = m_pUdpServer.PacketsReceived.ToString();
    m_pBytesReceived.Text = m_pUdpServer.BytesReceived.ToString();
    m_pPacketsSent.Text = m_pUdpServer.PacketsSent.ToString();
    m_pBytesSent.Text = m_pUdpServer.BytesSent.ToString();
}
If I call m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP); without a timer or any loops (while, etc.), I see the correct result on the output.
Here, 120 is the number of file parts transmitted per timer call; the timer function is called 2 times per second.
BUFFERSIZE is the total file size.
ReadBuffer is an array that contains all of the PCM file data.
RAW_PACKET = 400 bytes.
sent_data is the total byte count sent within each timer call.
num is the total count of packets sent so far.
I suppose there are too many packets (bytes) being sent within one timer call, and that is why I see incorrect values on the output.
So what is the solution to this problem?
I think I could build an RTP-like packet (adding a sequence number to every sent packet). That would let me identify received packets and restore their correct order, but only if the bytes within each received packet are themselves in the right order. If the bytes inside a received packet were mixed up, I don't understand how to restore the correct byte sequence within that packet.
I was advised to drop the timer and instead send the packets evenly, paced by time. I don't actually know how to do that. Maybe I should use threads, a thread pool, or something like that. What do you think?
The only guarantee UDP gives you is that each message is delivered in its entirety or not at all. However, if you send pieces 1, 2, 3, 4 in order, they may be received in any order, for instance 4, 1, 3, 2. That is, UDP does not guarantee ordering.
You MUST include a sequence number to be able to store the PCM correctly (a sketch follows below).
UDP does not guarantee delivery either. If the server receives piece #4 but not #5 within X seconds, it should probably request that piece again.
Or just switch to TCP. That's much easier: all you need is some way to tell the length of the file, and then you just transfer it.
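For illustration, a hedged sketch of sequence-numbered packets using the names from the question (RAW_PACKET, ReadBuffer, m_pUdpServer, m_pTargetEP); the 4-byte prefix and the send loop are assumptions, not the poster's code:

uint seq = 0;
byte[] packet = new byte[4 + RAW_PACKET];
for (int offset = 0; offset < ReadBuffer.Length; offset += RAW_PACKET)
{
    int count = Math.Min(RAW_PACKET, ReadBuffer.Length - offset);
    // First 4 bytes: sequence number; remaining bytes: file payload.
    Buffer.BlockCopy(BitConverter.GetBytes(seq++), 0, packet, 0, 4);
    Buffer.BlockCopy(ReadBuffer, offset, packet, 4, count);
    m_pUdpServer.SendPacket(packet, 0, 4 + count, m_pTargetEP);
}

On the receiving side, the first 4 bytes of each datagram give the sequence number, and the payload can be copied to seq * RAW_PACKET in the output buffer, so packets that arrive out of order still land in the right place and missing pieces can be detected.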

Serial port not reading all data

I have a microcontroller (Arduino Uno) running nanopb that is sending protobuf messages over the wire. I'm finding that under one specific case I'm not receiving my full message. I thought for a while that it was the microcontroller, but it appears to be on the C# side that's reading from it.
The issue ONLY happens for uint32 values GREATER THAN 16. 16 or less and it works fine.
I've setup a VERY simple program on the microcontroller to ensure it's not my other code that's causing it there. Essentially it's sending a struct over the wire with one uint32_t value in it:
//Protobuf message:
message Test { required uint32 testInt = 1; }
//Resulting struct:
typedef struct Test {
    uint32_t testInt;
} Test;
//Serial code:
Serial.begin(115200);
pb_ostream_t ostream;
//Removed ostream setup code as it's not relevant here...
Test message;
message.testInt = 17;
pb_encode_delimited(&ostream, Test_fields, &message);
If I plug in my device and look at its data output using Termite, I see the following data (which is correct):
[02] [08] [11] (note Termite displays it in hex)
(That's saying the message is 2 bytes long, followed by the msg start byte, followed by the Test.testInt value of 17 - 0x11 in hex)
Now, if I bring this up in C#, I should see 3 bytes when reading the message, but I only see 2. When the value in testInt is 16 or less it comes across as three bytes; at 17 or greater I only get two:
var port = new SerialPort("COM7", 115200, Parity.None, 8, StopBits.One);
port.Handshake = Handshake.RequestToSendXOnXOff;
port.Open();
while (port.IsOpen)
{
    Console.WriteLine(port.ReadByte());
    Thread.Sleep(10);
}
port.Close();
Console.ReadLine();
Output with 16: 2 8 16
Output with 17: 2 8
Any help is greatly appreciated, I'm at a loss on this one =(
You set the serial port to use Xon/Xoff. Why?
The code for XON is 17.
If you are sending binary data, don't use Xon/Xoff.
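If handshaking isn't actually needed for this device, a minimal fix (a sketch against the question's code) is simply not to enable it:

var port = new SerialPort("COM7", 115200, Parity.None, 8, StopBits.One);
port.Handshake = Handshake.None;   // binary data: 0x11/0x13 must not be treated as XON/XOFF
port.Open();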
Looks like a simple race condition - there is nothing to ensure that the C# code gets 3 bytes. It could get one, two, three or more. If, say, it starts the loop when two bytes are in the UART buffer then it will get two bytes and output them. I suspect the 16/17 issue is just a coincidence.
Also, when there is nothing in the buffer your loop consumes 100% CPU. That's not good.
You'd be better off using the blocking SerialPort.ReadByte() call to get the length byte, and then looping to read that many more bytes from the serial port.
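A minimal sketch of that approach, assuming (as in the question's [02] [08] [11] dump) that the first byte of each delimited message is its length:

int length = port.ReadByte();                 // blocks until the length byte arrives
byte[] payload = new byte[length];
int got = 0;
while (got < length)
{
    // Read may return fewer bytes than requested, so accumulate at an offset.
    got += port.Read(payload, got, length - got);
}
// payload now holds one complete delimited protobuf message

Note that protobuf's delimited framing uses a varint length, so for messages longer than 127 bytes the single ReadByte would have to be replaced by a varint decoder.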
Also, as a protocol, using protobuf messages alone with no header mark isn't great. If you get out of sync it could take a long while before you get lucky and get back in sync. You might want to add some kind of 'magic byte' or a sequence of magic bytes at the start of each message so the reader can regain sync.

Incomplete data reading from serial port c#

I'm having a problem reading the data transmitted from the serial port: the data is incomplete the first time I read it after starting the project.
I've tried two methods to read:
byte[] data = new byte[_serialPort.BytesToRead];
_serialPort.Read(data, 0, data.Length);
txtGateway.Text = System.Text.Encoding.UTF8.GetString(data);
And
txtGateway.Text = _serialPort.ReadExisting();
However, it only reads 14 bytes the first time after I start the program. When I trace the program, _serialPort.BytesToRead is only 14 on that first read. If I send the data a second time, the data is read correctly.
Both methods give the same result. I'm sure the device writing to the serial port sends the complete data.
Serial ports don't have any message boundaries. If you want framing, you have to add it yourself.
As for only the most recent 14 bytes being in the serial port when your program starts, 14 bytes is a typical FIFO size for serial ports.
See also How do you programmatically configure the Serial FIFO Receive and Transmit Buffers in Windows?
Or just flush the receive buffer when the program starts.
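For example, a sketch of the flush approach, assuming the stale bytes can simply be discarded:

_serialPort.Open();
_serialPort.DiscardInBuffer();   // drop whatever was already sitting in the driver/FIFO receive buffer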
You can also split the data on the carriage return (\r) and strip the ASCII STX character (\x02), for example:
string data = _serialPort.ReadExisting();
string[] arr1 = data.Split('\r');
string checkFinal = arr1[0].Replace("\x02", "").Trim();

Why doesn't splitting a TCP message on the client side work?

I've written a Client-Server application using C#.
The client asynchronously sends a message with a 4-byte header defining the message size, and the server waits for the entire message (it knows the size) and then raises a DataReceived event. This all works fine when I send and receive the data asynchronously.
At some point I wanted to simulate a bad connection, in which two segments are sent one after the other using Send() rather than BeginSend():
public void SendSyncString(string str, Commands cmd)
{
    BinaryWriter bw = new BinaryWriter(new MemoryStream());
    bw.Write((int)cmd);
    bw.Write((int)str.Length);
    bw.Write(Encoding.ASCII.GetBytes(str));
    bw.Close();
    byte[] data = ((MemoryStream)(bw.BaseStream)).ToArray();
    bw.BaseStream.Dispose();
    SendSync(data, 1);
}
public void SendSync(byte[] data, int delay)
{
    // create |dataLength|data| segment
    byte[] dataWithHeader = Combine(BitConverter.GetBytes(data.Length), data);
    // send first block of data, delay, and then send the rest
    socket.Send(dataWithHeader, 0, 4, SocketFlags.None);
    Thread.Sleep(delay * 1000);
    socket.Send(dataWithHeader, 5, dataWithHeader.Length - 5, SocketFlags.None);
}
This doesn't work, but I do wish to understand why.
If TCP is simply a stream of bytes, and there's no way to know when each segment will arrive, why can't I split the data into segments as I wish and simply send them like I did above (assuming the first 4 bytes were sent fully)?
Thanks for the insights.
Because your first socket.Send call only sent 4 bytes:
socket.Send(dataWithHeader, 0, 4, SocketFlags.None);
That is, bytes at offsets of 0,1,2,and 3 get sent (4 bytes total). Remember the third parameter to socket.Send is a length parameter, not an ending position.
Thus, this line has a bug:
socket.Send(dataWithHeader, 5, dataWithHeader.Length - 5, SocketFlags.None);
It sends bytes from offsets 5, 6, 7, ... to the end of the array, skipping byte 4. Hence, the receiver likely blocks because it's one byte short of receiving a full message.
It should read:
socket.Send(dataWithHeader, 4, dataWithHeader.Length - 4, SocketFlags.None);
There you go.
I wanted to simulate a bad connection, in which 2 segments were sent one after the other
That's not a 'bad connection'. That's an entirely legal way for any TCP connection to behave. If the receiving software doesn't cope correctly with that, it isn't written correctly.
Assuming the first 4 bytes were sent fully
You can't assume that.
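For completeness, a hedged sketch of a receive loop that copes with arbitrary segmentation; the helper name and the socket variable are assumptions, not the poster's receive code:

// Read exactly 'count' bytes, no matter how TCP splits them into segments.
static byte[] ReceiveExact(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int received = 0;
    while (received < count)
    {
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0)
            throw new SocketException();   // connection closed before the full message arrived
        received += n;
    }
    return buffer;
}

// Usage: read the 4-byte length header first, then the body.
// byte[] header = ReceiveExact(socket, 4);
// byte[] body = ReceiveExact(socket, BitConverter.ToInt32(header, 0));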

Socket c# send and receive

In a server/client program without multiple clients,
when the server sends two messages like:
byte[] data1 = Encoding.Default.GetBytes("hello world1");
socket.Send(data1, 0, data1.Length, 0);
byte[] data2 = Encoding.Default.GetBytes("hello world2");
socket.Send(data2, 0, data2.Length, 0);
the client receives the two messages as one message:
hello world1hello world2
but I want the client to receive the two sends as two separate receives.
Please help me fix it. :(
Use a line separator like '\n' and split incoming messages (a sketch follows below). With TCP you must be prepared for situations where packets are split up or joined.
If you used UDP, you could send separate packets.
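A rough sketch of the line-separator approach; the socket variable and buffer size are assumptions, not the poster's code:

// Sender: terminate each message with '\n'.
socket.Send(Encoding.Default.GetBytes("hello world1\n"));
socket.Send(Encoding.Default.GetBytes("hello world2\n"));

// Receiver: accumulate whatever arrives and split out complete lines.
var pending = new StringBuilder();
byte[] buf = new byte[1024];
while (true)
{
    int n = socket.Receive(buf);
    if (n == 0) break;                                   // connection closed
    pending.Append(Encoding.Default.GetString(buf, 0, n));
    int nl;
    while ((nl = pending.ToString().IndexOf('\n')) >= 0)
    {
        string message = pending.ToString(0, nl);        // one complete message
        pending.Remove(0, nl + 1);
        Console.WriteLine(message);                      // "hello world1", then "hello world2"
    }
}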
These are some of your options:
You can use length-prefixed messages, where you always send the length of the message first, for example in the first 4 bytes. The server reads those four bytes, knows the length, and therefore knows how many of the following bytes belong to this message. Then it reads the next four-byte length, and so on.
You can use a message demarker. For example, if you know that your messages will never contain a particular bit pattern, you can use it as a delimiter: the server might always scan for the pattern 0,1,0,1,0,1 and know that the message has ended.
You can use a higher-level framework such as WCF, where the infrastructure handles framing for you.
