If SerialPort.Write() is a blocking operation (or is it not?), what would be the need for the BytesToWrite property? It would always evaluate to zero, because the last Write operation either succeeded in writing all the data or failed; in either case, the count of bytes left to write would become zero.
Perhaps there is more to it than what I have described.
SerialPort.Write is a blocking operation, yes. However, there are two buffers to be considered: The serial device, and the SerialPort buffers.
If the SerialPort object is configured to buffer, the write only blocks if there isn't enough room in that buffer. It will block for as long as it takes the buffer to empty enough to fit the new data. Otherwise it fills the buffer and returns.
If the SerialPort object does not buffer, the Write operation blocks only for as long as it takes to transfer the data to the serial device. That device has its own buffer(*), so the block may take far less time than the time it will take to send the data.
SerialPort.BytesToWrite includes both data in the device's buffer and data in the SerialPort object's buffer.
(*) Older UARTs did not have buffers; newer ones do, but they can be configured not to buffer.
Serial ports are very slow I/O devices that date from the stone age of computing. At a common baud rate of 9600 bits per second, with one start bit and one stop bit framing each byte, a port can transmit a bit less than a thousand bytes per second (9600 / 10 = 960). Compare that to a hard disk, where a burst speed of 60,000,000 bytes per second is possible, or, more comparably, a network card, which can transmit 125,000,000 bytes per second.
So the serial port driver keeps a buffer that stores the bytes you write and slowly empties it as they are transmitted by the UART chip. That buffer allows the Write() call to return quickly. Given how slow the port is, you might want to know how full that buffer is so that you can avoid blocking in the Write() call. WriteBufferSize - BytesToWrite tells you how much space is available in that buffer.
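A minimal sketch of that bookkeeping (the port name, buffer size, and payload size are placeholder assumptions):

using System;
using System.IO.Ports;

class NonBlockingWriteCheck
{
    static void Main()
    {
        using (var port = new SerialPort("COM1", 9600))  // hypothetical port name
        {
            port.WriteBufferSize = 4096;
            port.Open();

            byte[] payload = new byte[512];

            // Free space left in the driver's transmit buffer.
            int free = port.WriteBufferSize - port.BytesToWrite;
            if (payload.Length <= free)
                port.Write(payload, 0, payload.Length);  // fits: returns quickly
            else
                Console.WriteLine($"Only {free} bytes free; Write() would block.");
        }
    }
}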
The SerialPort property BytesToWrite gets the number of bytes of data in the send buffer.
On the other hand, the SerialPort method Write(string text) writes the specified string to the serial port.
You do know how a serial port works, right? A serial port sends a certain number of bytes every second, depending on the baud rate used.
I'm trying to figure out how to work around a problem in Windows. I'm using C# (net5.0), but if you know the answer in C or C++ that shouldn't be a real problem, because I can call functions in DLLs without an issue.
While testing UDP handling with multicast on Windows 10, I found a problem: when my buffer is too short for the incoming payload, System.Net.Sockets.Socket.ReceiveMessageFrom fills the buffer to its maximum capacity, doesn't set the SocketFlags.Truncated or SocketFlags.Multicast flags (and setting either before the call is made leads to a "not supported" exception), and the IPPacketInformation.Address field is null. ReceiveMessageFrom's return value is the size of my buffer, not the size of the packet. I cannot seem to get the necessary allocation size (even with SocketFlags.Peek) no matter what I do.
When my buffer is long enough for the incoming payload, Socket.ReceiveMessageFrom fills the buffer, sets SocketFlags.Multicast, and sets the IPPacketInformation.Address field to the multicast group IP that it received from. The return value in that case is the amount of data actually received.
On Linux, I can set SocketFlags.Truncated and it works correctly: the too-small buffer is filled, but the return value of ReceiveMessageFrom is the actual size of the incoming datagram. Combined with SocketFlags.Peek, this lets me allocate a new buffer large enough to hold it and retrieve the full data. (This is very much akin to calling e.g. Windows Registry functions with a 0-length buffer and being told how big a buffer you'll actually need.)
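A sketch of that two-step pattern; per the description above it relies on MSG_TRUNC semantics, so it works on Linux but will not report the true size on Windows:

using System;
using System.Net;
using System.Net.Sockets;

static class TruncatedPeek
{
    // Returns the complete datagram even when its size is unknown up front.
    public static byte[] ReceiveWholeDatagram(Socket socket)
    {
        var probe = new byte[1];  // deliberately too small
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);

        // MSG_PEEK | MSG_TRUNC: on Linux the return value is the true
        // datagram length, and the datagram stays queued.
        SocketFlags flags = SocketFlags.Peek | SocketFlags.Truncated;
        int realLength = socket.ReceiveMessageFrom(probe, 0, probe.Length,
            ref flags, ref remote, out IPPacketInformation _);

        // Now receive it for real into a buffer of exactly the right size.
        var buffer = new byte[realLength];
        flags = SocketFlags.None;
        socket.ReceiveMessageFrom(buffer, 0, buffer.Length,
            ref flags, ref remote, out IPPacketInformation _);
        return buffer;
    }
}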
The alternative to Linux's way would be to allocate a buffer as large as the interface will allow, but System.Net.NetworkInformation.IPInterfaceProperties doesn't have a .Mtu member, while System.Net.NetworkInformation.IPv6InterfaceProperties does. I can't figure out how to get the maximum size of a frame, which is necessary because some drivers support a feature called "jumbo frames" that can be upwards of 64 KB.
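One detail worth noting for the "allocate the maximum" route: over IPv4, a UDP payload can never exceed 65,507 bytes (a 65,535-byte IP total length minus the 20-byte IP header and 8-byte UDP header), regardless of the link MTU; jumbo frames change how the fragments travel on the wire, not the protocol's ceiling. So a fixed 64 KB buffer is always large enough for IPv4 datagrams; a minimal sketch:

using System;
using System.Net;
using System.Net.Sockets;

static class MaxBufferReceive
{
    // UDP over IPv4 cannot carry more than 65,507 payload bytes
    // (65,535 total IP length - 20 IP header - 8 UDP header),
    // independent of MTU or jumbo-frame support.
    const int MaxUdpV4Payload = 65507;

    public static byte[] Receive(Socket socket, out IPPacketInformation info)
    {
        var buffer = new byte[MaxUdpV4Payload];
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        SocketFlags flags = SocketFlags.None;
        int received = socket.ReceiveMessageFrom(buffer, 0, buffer.Length,
                                                 ref flags, ref remote, out info);
        var datagram = new byte[received];  // trim to the true size
        Array.Copy(buffer, datagram, received);
        return datagram;
    }
}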
To wit: my multicastsender program sends packets that are 26 bytes long, containing the entire uppercase US alphabet from A to Z.
My multicastreceiver program is where I have been making the changes. When I set my receive buffer in that program to less than 26 bytes long, I get the problematic behavior. When I set it to 26 bytes or more long, I get the correct behavior.
This question is not about OS-level or Winsock-level buffers. I will tune those separately, if they need to be. I am specifically trying to ensure that the data that I retrieve with ReceiveMessageFrom is not truncated. (When the OS level doesn't have enough buffer space for the data to be queued, it simply drops the entire packet. It does not write a partial packet to the queue. My application is receiving partial data from the call to ReceiveMessageFrom, and it's not indicating that there is anything that was truncated. I need to figure out how to work around this.)
I am not okay with losing packet space by encoding the size of the data in the area reserved for the data itself, as that will take at least 2 bytes, and I already need to squeeze a lot in here. The UDP header already has a Length field, and that field contains what I need, but I have no access to it.
Thanks for your help!
If I send 1000 bytes over TCP, is the receiver guaranteed to get the entire 1000 bytes together? Or might it first get only 500 bytes and receive the remaining bytes later?
EDIT: the question comes from the application's point of view. If the 1000 bytes are reassembled into a single buffer before they reach the application, then I don't care whether they were fragmented on the way.
See Transmission Control Protocol:
TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer.
A "stream" means that there is no message boundary from the receiver's point of view. You could get one 1000 byte message or one thousand 1 byte messages depending on what's underneath and how often you call read/select.
Edit: let me clarify from the application's point of view. No, TCP will not guarantee that a single read gives you all of the 1000 bytes (or 1 MB or 1 GB) the sender may have sent. Thus, a protocol above TCP usually contains a fixed-length header carrying the total content length. For example, you could always send one byte that indicates the total length of the content in bytes, which would support payloads of up to 255 bytes.
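A minimal sketch of that one-byte length prefix (names are illustrative; any Stream, such as a NetworkStream, works):

using System;
using System.IO;

static class LengthPrefixed
{
    // Sender side: one length byte, then the payload (255 bytes max).
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        if (payload.Length > byte.MaxValue)
            throw new ArgumentException("payload too large for a 1-byte prefix");
        stream.WriteByte((byte)payload.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiver side: read the prefix, then loop until the whole payload
    // has arrived, since a single Read may return fewer bytes than asked for.
    public static byte[] ReadMessage(Stream stream)
    {
        int length = stream.ReadByte();
        if (length < 0) throw new EndOfStreamException();

        var payload = new byte[length];
        int total = 0;
        while (total < length)
        {
            int read = stream.Read(payload, total, length - total);
            if (read == 0) throw new EndOfStreamException();
            total += read;
        }
        return payload;
    }
}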
As other answers indicated, TCP is a stream protocol -- every byte sent will be received (once and in the same order), but there are no intrinsic "message boundaries" -- whether all bytes are sent in a single .send call, or multiple ones, they might still be received in one or multiple .receive calls.
So, if you need "message boundaries", you need to impose them on top of the TCP stream, IOW, essentially, at the application level. For example, if you know the bytes you're sending will never contain a \0, null-terminated strings work fine; various methods of "escaping" let you send strings of bytes that obey no such limitation. (There are existing protocols for this, but none is really widespread or widely accepted.)
Basically, as far as TCP goes, it only guarantees that the data sent from one end arrives at the other end in the same order.
Now, usually what you'll have to do is keep an internal buffer and loop until you have received your full 1000-byte "packet",
because the recv call, as mentioned, returns how much has actually been received.
So you'll usually have to implement a protocol on top of TCP to make sure you send data at an appropriate speed, because if you send() all the data in one run, it can overload the underlying networking stack and cause complications.
So usually the protocol includes a tiny acknowledgement packet sent back to confirm that the 1000-byte packet was received.
You decide in your message how many bytes the message shall contain; in your case it's 1000. The following is up-and-running C# code to achieve this. The method returns once the requested 1000 bytes have been read; receiving 0 bytes is the abort condition, and you can tailor that to your needs.
Usage:
strMsg = ReadData(thisTcpClient.Client, 1000, out bDisconnected);
Following is the method:
string ReadData(Socket sckClient, int nBytesToRead, out bool bShouldDisconnect)
{
    bShouldDisconnect = false;
    byte[] byteBuffer = new byte[nBytesToRead];  // a new array is already zero-filled
    int nDataRead = 0;                           // total bytes received so far

    while (nDataRead < nBytesToRead)
    {
        // Receive may return fewer bytes than requested; keep appending at the offset.
        int nBytesRead = sckClient.Receive(byteBuffer, nDataRead, nBytesToRead - nDataRead, SocketFlags.None);
        if (0 == nBytesRead)
        {
            // 0 bytes received: the peer closed the connection.
            bShouldDisconnect = true;
            break;
        }
        nDataRead += nBytesRead;
    }
    return Encoding.Default.GetString(byteBuffer, 0, nDataRead);
}
Let us know if this didn't help you (0: Good luck.
Yes, there is a chance of receiving the data part by part. I hope this MSDN article and the following example (taken from the article, for quick review) will be helpful if you are using Windows sockets.
void CChatSocket::OnReceive(int nErrorCode)
{
    CSocket::OnReceive(nErrorCode);

    DWORD dwReceived;
    // FIONREAD reports how many bytes are currently available to read.
    if (IOCtl(FIONREAD, &dwReceived))
    {
        // dwExpected (a member set elsewhere) is the size of a complete message.
        if (dwReceived >= dwExpected) // Process only if you have enough data
            m_pDoc->ProcessPendingRead();
    }
    else
    {
        // Error handling here
    }
}
TCP guarantees that the receiver will get all 1000 bytes, but not necessarily all at once (unless you craft the packets yourself and make it so). Segments may arrive out of order on the wire, though the data will appear in order to the receiving application.
That said, for a transmission as small as 1000 bytes, there is a good chance it will go out in one packet as long as you send it in one call to send, though for larger transmissions it may not.
The only thing that the TCP layer guarantees is that the receiver will receive:
all the bytes transmitted by the sender
in the same order
There are no guarantees at all about how the bytes might be split up into "packets". All the stuff you might read about MTU, packet fragmentation, maximum segment size, or whatever else is all below the layer of TCP sockets, and is irrelevant. TCP provides a stream service only.
With reference to your question, this means that the receiver may receive the first 500 bytes, then the next 500 bytes later. Or, the receiver might receive the data one byte at a time, if that's what it asks for. This is the reason that the recv() function takes a parameter that tells it how much data to return, instead of it telling you how big a packet is.
The Transmission Control Protocol guarantees successful delivery of all packets by requiring the receiver to acknowledge each packet to the sender. By this definition, the receiver will always receive the payload in chunks when the payload size exceeds the MTU (maximum transmission unit).
For more information please read Transmission Control Protocol.
The IP packets may get fragmented during transmission.
So the destination machine may receive multiple packets, which will be reassembled by the TCP/IP stack. Depending on the network API you are using, the data will be given to you either reassembled or as raw packets.
It depends on the established MTU (maximum transmission unit). If your established connection (once the handshake is complete) uses an MTU of 512 bytes, you will need two or more TCP packets to send 1000 bytes.
What is the behaviour of the NetworkStream.Write() method if I send data via a TcpClient's NetworkStream and the TcpClient.SendBufferSize is smaller than the data?
The MSDN documentation for SendBufferSize says
If the network buffer is smaller than the amount of data you provide
the Write method, several network send operations will be performed
for every call you make to the Write method. You can achieve greater
data throughput by ensuring that your network buffer is at least as
large as your application buffer.
So I know that the data will be sent in multiple operations, and the receiving TCP-Stack should reassemble it into one continuous stream transparently.
But what happens exactly in my program during this time?
If there is enough space in the send buffer, TcpClient.GetStream().Write() will not block at all and will return immediately, and so will NetworkStream.Flush().
If I set the TcpClient.SendBufferSize to a value smaller than the data, will the Write() block until
either the first part of the data has been received and ACKed,
or the TcpClient.SendTimeout has expired?
Or does it work in some other way? Does it actually wait for a TCP ACK?
Are there any other drawbacks besides higher overhead to such a smaller buffer size? Are there problems with changing the SendBufferSize on the fly?
Example:
byte[] data = new byte[20];        // 20 bytes of data
tcpClient.SendBufferSize = 10;     // 10 byte buffer
tcpClient.SendTimeout = 1000;      // 1 s timeout
tcpClient.GetStream().Write(data, 0, 20);
// will this block until the first 10 bytes have been fully transmitted?
// does it block until a TCP ACK has been received? or something else?
// will it throw if the first 10 bytes have not been received in 1 second?
tcpClient.GetStream().Flush();     // would this make any difference?
My goal here is mainly getting a better understanding of the network internals.
Also, I was wondering whether this could be abused to react more quickly to a network failure. If data is sent only infrequently, each data packet is small enough to be transmitted at once, and the message protocol includes no receipt messages, it could take a long time after a network error until the next Write() is called, and therefore a long time until an exception is thrown.
If the SendBuffer is very small, would an error be noticed more quickly, unless it happened at the end of the data?
Could I abuse this to measure the time it takes for a single packet to be transmitted and ACKed?
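A sketch of that timing experiment, assuming an already-connected TcpClient; whether Write() actually waits for ACKs is OS- and stack-dependent, so treat the measurement as indicative only:

using System;
using System.Diagnostics;
using System.Net.Sockets;

class WriteTimingProbe
{
    // Measures how long a Write that overflows the send buffer blocks.
    static void Probe(TcpClient tcpClient)
    {
        tcpClient.SendBufferSize = 10;   // deliberately tiny buffer
        tcpClient.SendTimeout = 1000;    // 1 s timeout

        byte[] data = new byte[20];      // twice the buffer size
        var stream = tcpClient.GetStream();

        var sw = Stopwatch.StartNew();
        stream.Write(data, 0, data.Length);  // may block while the buffer drains
        sw.Stop();

        Console.WriteLine($"Write returned after {sw.ElapsedMilliseconds} ms");
    }
}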
I've got a thread to read and parse serial data.
The messages are in binary format and start with either the character 'F', 'S', 'Q' or 'M'.
There are no newlines and there is no special ending character (the start characters above indicate that the previous message is finished and everything before them is ready to be parsed).
How do I continuously read and parse the data?
All that comes to my mind is having a 4096-byte input buffer (a byte array) and then following this procedure (sketched in code after the list):
Track the position in the buffer manually
append available data to it via SerialPort.Read(buffer, position, byteCount)
try to parse as many messages as possible from the buffer
copy the rest to a temporary buffer
reset the input buffer
copy the contents of the temporary buffer to the original buffer
set the position in the buffer
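A sketch of the compaction procedure above (TryParseMessage is a hypothetical placeholder for the real parser):

using System;
using System.IO.Ports;

class FrameAccumulator
{
    readonly byte[] buffer = new byte[4096];  // assumes one message always fits
    int position;                             // bytes currently in the buffer

    // Call whenever data is available (e.g. from the DataReceived event).
    public void Pump(SerialPort port)
    {
        // Append whatever is available right now.
        int count = Math.Min(port.BytesToRead, buffer.Length - position);
        if (count > 0)
            position += port.Read(buffer, position, count);

        // Parse as many complete messages as possible.
        int consumed = 0;
        while (TryParseMessage(buffer, consumed, position - consumed, out int length))
            consumed += length;

        // Compact: shift the unparsed tail to the front and reset the position.
        Array.Copy(buffer, consumed, buffer, 0, position - consumed);
        position -= consumed;
    }

    // Hypothetical placeholder: reports a full message starting at 'offset'.
    static bool TryParseMessage(byte[] data, int offset, int available, out int length)
    {
        length = 0;
        return false;  // real parsing logic goes here
    }
}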
Can you think of faster / easier approaches?
A very simple way to get ahead is to stop trying to make it faster. There is no point: serial port data rates are very, very low and modern computers are very, very fast. Your Read() call only ever returns a single byte, rarely two.
Note that this is hard to see: when you debug and single-step through the code, you artificially slow your program down a great deal, allowing more bytes to be received and thus more of them to be returned by the Read() call. But this doesn't happen when the program runs at normal speed.
So use SerialPort.BaseStream.ReadByte() instead. It makes the code very simple.
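A sketch of that byte-at-a-time approach, using the start characters from the question ('F', 'S', 'Q', 'M') as message delimiters; ProcessMessage is a hypothetical placeholder:

using System;
using System.Collections.Generic;
using System.IO.Ports;

class ByteAtATimeParser
{
    static readonly HashSet<int> StartBytes = new HashSet<int> { 'F', 'S', 'Q', 'M' };

    public static void ReadLoop(SerialPort port)
    {
        var message = new List<byte>();
        while (true)
        {
            int b = port.BaseStream.ReadByte();  // blocks until a byte arrives
            if (b < 0) break;                    // stream closed

            // A new start character means the previous message is complete.
            if (StartBytes.Contains(b) && message.Count > 0)
            {
                ProcessMessage(message.ToArray());
                message.Clear();
            }
            message.Add((byte)b);
        }
    }

    static void ProcessMessage(byte[] message) { /* hypothetical parser hand-off */ }
}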
After acquiring some experience with the C# SerialPort component, here is what worked for me.
At the beginning: take the serial port exclusively.
Then:
1st parallel task: continuously reads the entire buffer content at a regular interval and pushes each chunk read into a "gathering collection" of data.
2nd parallel task: analyzes the gathering collection for a completed "phrase", delegates a clone of the phrase to a "phrase manager", and removes the phrase from the gathering collection.
You have freedom in how you implement the gathering collection, but what was important to me:
read everything available from the serial port buffer, not a fixed-size chunk;
to avoid losses and to preserve message order, build your own port dispatcher rather than letting anybody open and close your port at any time for reading/writing;
determine the read frequency experimentally. More frequent reads let your code detect a "phrase" faster and start processing it sooner; too-frequent reading without detecting a phrase can cost you extra resource usage.
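A compact sketch of that two-task split, using a BlockingCollection as the "gathering collection"; IsCompletePhrase and HandlePhrase are hypothetical placeholders:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO.Ports;
using System.Threading;
using System.Threading.Tasks;

class GatheringCollectionDemo
{
    static readonly BlockingCollection<byte> gathered = new BlockingCollection<byte>();

    static void Start(SerialPort port)
    {
        // Task 1: at a regular interval, drain everything the port has buffered.
        Task.Run(() =>
        {
            var chunk = new byte[4096];
            while (port.IsOpen)
            {
                int want = Math.Min(chunk.Length, Math.Max(1, port.BytesToRead));
                int n = port.Read(chunk, 0, want);   // blocks until at least one byte
                for (int i = 0; i < n; i++) gathered.Add(chunk[i]);
                Thread.Sleep(10);                    // read frequency: tune experimentally
            }
            gathered.CompleteAdding();
        });

        // Task 2: pull bytes and hand completed phrases to the phrase manager.
        Task.Run(() =>
        {
            var phrase = new List<byte>();
            foreach (byte b in gathered.GetConsumingEnumerable())
            {
                phrase.Add(b);
                if (IsCompletePhrase(phrase))        // placeholder for the real detector
                {
                    HandlePhrase(phrase.ToArray());  // the "phrase manager"
                    phrase.Clear();
                }
            }
        });
    }

    static bool IsCompletePhrase(List<byte> phrase) => false;  // hypothetical detector
    static void HandlePhrase(byte[] phrase) { }                // hypothetical handler
}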