[SOLVED, the question is based on incorrect assumptions]
While working with TCP I came across an issue with NetworkStream.Read returning value 0 in two separate cases, which I have trouble distinguishing.
A bit of background - I have a working client-server solution communicating via TCP using length-prefixed messages. However, since most of the communication (apart from some initial messages exchange) happens from Client to Server, the Server doesn't have a good way to know whether a Client is still connected or not. One way to find this out is to send something to the Client from time to time, and that is what I decided to do.
I know that I can add a dedicated "ping" message to my protocol, and simply ignore it in the Client, but I was also testing for other possibilities. One thing I tried was to send an empty byte array to the Client like so:
networkStream.Write(new byte[0], 0, 0);
All looks good, it seems to be sending a TCP packet with no data in it... however! My client code does expect data from the Server from time to time, so it has a thread that blocks on networkStream.Read, like so:
int bytesRead = networkStream.Read(buffer, 0, 4);
if (bytesRead == 0)
break;
According to the docs, Socket.Receive (and likewise NetworkStream.Read) returns 0 if the other end closes the connection. This is true, but in my case, after the empty byte array was sent, Read(...) also returned 0.
So far I have been unable to distinguish those two situations. Socket.Connected, checked after Read, is true in both cases (connection closed and the empty byte array). Is there any other way to handle this?
Again, I do know that sending this empty array is almost the same as adding a new type of message for that purpose. I am not asking for a solution here... just wanted to know if .NET's Socket can distinguish between an empty byte array and connection closing.
EDIT:
I am sorry for bothering everyone with a question that, in the end, was based on incorrect assumptions. My tests were not done on my production code and were too sloppy, which caused me to draw incorrect conclusions.
Basically, what I was testing was whether, if I do a Write(new byte[0]...) on one end, the other end's Read(...) will return 0. It did, but not because of the send. The TcpClient I used for the test was falling out of scope, which (I assume) caused it to be disposed by the GC, so the connection got closed, and that is what caused Read to return 0. I repeated the test with the TcpClient not being disposed/lost, and Read does not return at all, no matter how many empty byte arrays I send.
At first I expected Nagle's algorithm to mess things up, but it doesn't in this case - 1-byte arrays arrive without delays, as I was testing on localhost. I may do a different test using Sockets and explicitly disabling Nagle's algorithm, but I don't think it will change anything.
Now I just need to check whether sending such an array will actually allow me to detect a disconnection, but that is a different story, not in the scope of this question.
EDIT 2:
I did some more tests regarding this and found out that, despite several suggestions, e.g. here (which seems like a valid source of information), doing an empty Send does not detect a broken connection. I physically disconnected a network cable, and my Server has been doing those empty sends every 5 seconds for several minutes now, with no disconnection detected. If I send any actual data (even a single byte), the disconnection is detected after 20 seconds at most.
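For reference, the usual trick for detecting a graceful close without sending application data is the Poll/Available check on the underlying Socket. A minimal sketch (assuming socket is the connected Socket; note this would not have detected my pulled cable either, since nothing is sent on the wire):
using System.Net.Sockets;
static bool PeerClosedGracefully(Socket socket)
{
    // SelectRead signals when data is available OR the peer has closed.
    // "Readable" with zero bytes available means the peer closed its end.
    bool readable = socket.Poll(1000 /* microseconds */, SelectMode.SelectRead);
    return readable && socket.Available == 0;
}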
From MSDN's NetworkStream.Read Method (Byte[], Int32, Int32):
The Read operation reads as much data as is available, up to the
number of bytes specified by the size parameter. If no data is
available for reading, the Read method returns 0.
While sending data, you're sending an empty byte array and writing zero bytes using:
networkStream.Write(new byte[0], 0, 0);
Whereas, while reading the data, you claim that
"My client code does expect data from the Server from time to time, so
it has a thread that blocks on networkStream."
int bytesRead = networkStream.Read(buffer, 0, 4);
if (bytesRead == 0)
break;
But then again, you're trying to read 4 bytes into a byte array, and since no data has arrived, the call simply keeps waiting. So what else would you expect?
"...just wanted to know if .NET's Socket can distinguish between an empty
byte array and connection closing."
Closing a connection is a totally different story, involving several steps before the Socket connection is closed. So, obviously, it is not the same as sending or receiving zero bytes!
Lastly, as hinted by @WithMetta in the comments, please check whether data is available to be read using the NetworkStream.DataAvailable property.
while (networkStream.DataAvailable)
{
    // your code logic
}
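For instance, a sketch of a polling read loop built on that property (assuming networkStream is a connected NetworkStream; note that DataAvailable == false only means nothing has arrived yet, not that the connection is closed):
byte[] buffer = new byte[4096];
while (networkStream.DataAvailable)
{
    int bytesRead = networkStream.Read(buffer, 0, buffer.Length);
    if (bytesRead == 0)
        break; // the remote end closed the connection
    // process buffer[0..bytesRead) here
}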
Related
I'm working with C# socket communication, and I receive different messages, separated by a special character (ASCII code 3).
Currently this is not taken into account by my setup of the reception, which looks as follows:
sock.BeginReceive(_receivedData,
0,
_receivedData.Length,
SocketFlags.None,
OnReceivedData,
sock);
This receives different messages, separated by the mentioned character, as one message.
Inside OnReceivedData(...) I can run through the whole received byte array and look for that special character myself, but I would prefer having this handled by the socket handler itself.
Does anybody know how to declare the callback receive in order to take a separator into account, something like (pseudo-code):
sock.BeginReceive(_receivedData,
0,
_receivedData.Length,
SocketFlags.None, // maybe here?
OnReceivedData,
sock,
separator=chr(3));
Edit after some more investigation
The "complete" code concerning the reception of the data looks as follows:
private void SetupReceiveCallback(Socket sock)
{
    try
    {
        sock.BeginReceive(_receivedData, 0, _receivedData.Length, SocketFlags.None, OnReceivedData, sock);
    }
    catch (Exception ex)
    {
    }
}
private void OnReceivedData(IAsyncResult ar)
{
    sock = (Socket)ar.AsyncState;
    // Check if we got any data
    try
    {
        int nBytesRec = sock.EndReceive(ar);
This call (sock.BeginReceive()) takes in a whole bunch of data at once, while I would like the data to be taken up to a certain character (0x03), which is the terminator of my message.
My own answer is indeed wrong, as it processes data that may not even be complete, causing a mess (as predicted by Dialectus).
As for the proposal from Panagiatos: I'm very new to C# programming. How can I do this?
Thanks in advance
This separator is part of your communication protocol. Sockets do not know anything about application-level protocols; they only know about TCP/IP. Therefore there is no way to have this handled by the framework: you must handle it yourself. You call this character a separator, but it must actually be a terminator. Your protocol must provide the information about where a message ends, otherwise you will not know whether you have received a complete message.
You are provided with a stream of bytes, and the only guarantee you have is that the bytes arrive in the same order as they were sent. A single receive may not give you all the bytes, and there is no guarantee of where the stream gets split up. Your communication protocol must be good enough to detect where a new message starts in the stream, and where it ends.
Or you can adopt some existing communication protocol, and a library that implements it.
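As a sketch of what handling it yourself could look like, assuming ASCII messages terminated by 0x03 (the class name is illustrative):
using System.Collections.Generic;
using System.Text;
class MessageAssembler
{
    private readonly StringBuilder _pending = new StringBuilder();

    // Feed raw received bytes in; get back every complete message so far.
    public List<string> Feed(byte[] data, int count)
    {
        var messages = new List<string>();
        _pending.Append(Encoding.ASCII.GetString(data, 0, count));
        string all = _pending.ToString();
        int lastTerminator = all.LastIndexOf('\u0003');
        if (lastTerminator >= 0)
        {
            messages.AddRange(all.Substring(0, lastTerminator).Split('\u0003'));
            _pending.Clear();
            _pending.Append(all.Substring(lastTerminator + 1)); // keep the incomplete tail
        }
        return messages;
    }
}
You would call Feed from OnReceivedData with the bytes returned by EndReceive, and only act on the complete messages it returns.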
As mentioned in the many comments immediately after I posted the question, a separator cannot be declared in the communication part of my application: I just have to take everything coming from the socket and do the separation myself, which I did like this:
string[] infoList = infoString.Split('\u0003');
foreach (string infoEntry in infoList)
{
    ... // do the treatment of the messages, one by one
}
I don't think any receive callback will ever implement message framing for you.
You need to understand the separation of concerns here: the task of the stream reader is delivery of the payload (bytes).
As you mentioned, the separator 0x03 is actually ETX (end of text), which marks the end of the data in a packet. But in your case multiple messages are stacked together, separated by ETX.
Unless you follow a standard protocol (YModem, Kermit, Modbus), the framing depends highly on the implementer.
A good packet protocol will include a start of text (0x02), the number of bytes, the payload data, a checksum, and an end of text (0x03).
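A sketch of building such a frame (the field layout here is illustrative, not any particular standard):
using System;
// Frame: STX | 2-byte big-endian length | payload | XOR checksum | ETX
static byte[] BuildFrame(byte[] payload)
{
    var frame = new byte[payload.Length + 5];
    frame[0] = 0x02;                                  // STX
    frame[1] = (byte)(payload.Length >> 8);           // length, high byte
    frame[2] = (byte)(payload.Length & 0xFF);         // length, low byte
    Array.Copy(payload, 0, frame, 3, payload.Length);
    byte checksum = 0;
    foreach (byte b in payload)
        checksum ^= b;                                // simple XOR checksum
    frame[frame.Length - 2] = checksum;
    frame[frame.Length - 1] = 0x03;                   // ETX
    return frame;
}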
You might find it easier to use a NetworkStream to access your data.
See the NetworkStream.ReadByte() method in Microsoft's documentation.
The NetworkStream constructor accepts a Socket as an argument. You can read one byte at a time to consume a message, without unintentionally discarding any data. I believe (though I'm not sure) it will also buffer internally for you. I don't think there is an async version of this method, so you must take that into account; from the code you provided, it's not clear to me whether that would have a major impact.
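A sketch of that byte-at-a-time approach, assuming sock is the connected Socket and messages end with ETX (0x03):
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;
var stream = new NetworkStream(sock, ownsSocket: false);
var messageBytes = new List<byte>();
int b;
while ((b = stream.ReadByte()) != -1)   // -1 means the remote end closed
{
    if (b == 0x03)
        break;                          // complete message received
    messageBytes.Add((byte)b);
}
string message = Encoding.ASCII.GetString(messageBytes.ToArray());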
If I send 1000 bytes over TCP, is it guaranteed that the receiver will get the entire 1000 bytes together? Or perhaps he will first get only 500 bytes, and receive the other bytes later?
EDIT: the question is from the application's point of view. If the 1000 bytes are reassembled into a single buffer before they reach the application, then I don't care whether they were fragmented on the way.
See Transmission Control Protocol:
TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer.
A "stream" means that there is no message boundary from the receiver's point of view. You could get one 1000 byte message or one thousand 1 byte messages depending on what's underneath and how often you call read/select.
Edit: Let me clarify from the application's point of view. No, TCP will not guarantee that a single read gives you all of the 1000 bytes (or 1 MB or 1 GB) the sender may have sent. Thus, a protocol above TCP usually contains a fixed-length header with the total content length in it. For example, you could always send 1 byte that indicates the total length of the content in bytes, which would support messages of up to 255 bytes.
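As a sketch of that single-byte length prefix (limiting messages to 255 bytes; the method names are illustrative):
using System.Net.Sockets;
// Sender: write the payload length in one byte, then the payload.
static void SendMessage(NetworkStream stream, byte[] payload)
{
    stream.WriteByte((byte)payload.Length);   // assumes payload.Length <= 255
    stream.Write(payload, 0, payload.Length);
}
// Receiver: read the length byte, then loop until that many bytes arrive.
static byte[] ReceiveMessage(NetworkStream stream)
{
    int length = stream.ReadByte();
    if (length == -1)
        return null;                          // connection closed
    var buffer = new byte[length];
    int offset = 0;
    while (offset < length)
    {
        int n = stream.Read(buffer, offset, length - offset);
        if (n == 0)
            return null;                      // closed mid-message
        offset += n;
    }
    return buffer;
}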
As other answers indicated, TCP is a stream protocol: every byte sent will be received (once, and in the same order), but there are no intrinsic "message boundaries". Whether all bytes are sent in a single .send call or in multiple ones, they might still be received in one or multiple .receive calls.
So, if you need "message boundaries", you need to impose them on top of the TCP stream, in other words, essentially at the application level. For example, if you know the bytes you're sending will never contain a \0, null-terminated strings work fine; various methods of "escaping" let you send strings of bytes which obey no such limitation. (There are existing protocols for this, but none is really widespread or widely accepted.)
Basically, as far as TCP goes, it only guarantees that the data sent from one end will be delivered to the other end in the same order.
Now usually what you'll have to do is keep an internal buffer that keeps looping until it has received your 1000-byte "packet",
because the recv command, as mentioned, returns how much has actually been received.
So usually you'll have to implement a protocol on top of TCP to make sure you send data at an appropriate speed, because if you send() all the data in one run it can overload the underlying networking stack, which will cause complications.
So usually in such a protocol there is a tiny acknowledgement packet sent back to confirm that the packet of 1000 bytes was received.
You decide, in your message, how many bytes your message shall contain; in your case it's 1000. Following is up-and-running C# code to achieve that. The method returns once 1000 bytes have been read; reading 0 bytes is treated as a disconnect, and you can tailor that according to your needs.
Usage:
strMsg = ReadData(thisTcpClient.Client, 1000, out bDisconnected);
Following is the method:
string ReadData(Socket sckClient, int nBytesToRead, out bool bShouldDisconnect)
{
    bShouldDisconnect = false;
    byte[] byteBuffer = new byte[nBytesToRead];
    Array.Clear(byteBuffer, 0, byteBuffer.Length);
    int nDataRead = 0;
    int nStartIndex = 0;
    while (nDataRead < nBytesToRead)
    {
        int nBytesRead = sckClient.Receive(byteBuffer, nStartIndex, nBytesToRead - nStartIndex, SocketFlags.None);
        if (0 == nBytesRead)
        {
            bShouldDisconnect = true;
            // 0 bytes received; assuming disconnect signal
            break;
        }
        nDataRead += nBytesRead;
        nStartIndex += nBytesRead;
    }
    return Encoding.Default.GetString(byteBuffer, 0, nDataRead);
}
Let us know if this didn't help you. Good luck.
Yes, there is a chance of receiving the data part by part. Hopefully this MSDN article and the following example (taken from the article for quick review) will be helpful to you if you are using Windows sockets.
void CChatSocket::OnReceive(int nErrorCode)
{
CSocket::OnReceive(nErrorCode);
DWORD dwReceived;
if (IOCtl(FIONREAD, &dwReceived))
{
if (dwReceived >= dwExpected) // Process only if you have enough data
m_pDoc->ProcessPendingRead();
}
else
{
// Error handling here
}
}
TCP guarantees that they will receive all 1000 bytes, but not necessarily in order (though it will appear so to the receiving application) and not necessarily all at once (unless you craft the packet yourself and make it so).
That said, for a payload as small as 1000 bytes, there is a good chance it'll be sent in one packet as long as you do it in one call to send, though for larger transmissions it may not be.
The only thing that the TCP layer guarantees is that the receiver will receive:
all the bytes transmitted by the sender
in the same order
There are no guarantees at all about how the bytes might be split up into "packets". All the stuff you might read about MTU, packet fragmentation, maximum segment size, or whatever else is all below the layer of TCP sockets, and is irrelevant. TCP provides a stream service only.
With reference to your question, this means that the receiver may receive the first 500 bytes, then the next 500 bytes later. Or, the receiver might receive the data one byte at a time, if that's what it asks for. This is the reason that the recv() function takes a parameter that tells it how much data to return, instead of it telling you how big a packet is.
The transmission control protocol guarantees successful delivery of all packets by requiring acknowledgment of the successful delivery of each packet to the sender by the receiver. By this definition the receiver will always receive the payload in chunks when the size of the payload exceeds the MTU (maximum transmission unit).
For more information please read Transmission Control Protocol.
The IP packets may get fragmented during transmission.
So the destination machine may receive multiple packets, which will be reassembled by the TCP/IP stack. Depending on the network API you are using, the data will be given to you either reassembled or as raw packets.
It depends on the established MTU (maximum transmission unit). If your established connection (once handshaken) has an MTU of 512 bytes, you will need two or more TCP packets to send 1000 bytes.
What is the behaviour of the NetworkStream.Write() method, if I send data via a TcpClient's NetworkStream, and the TcpClient.SendBufferSize is smaller than the data?
The MSDN documentation for SendBufferSize says
If the network buffer is smaller than the amount of data you provide
the Write method, several network send operations will be performed
for every call you make to the Write method. You can achieve greater
data throughput by ensuring that your network buffer is at least as
large as your application buffer.
So I know that the data will be sent in multiple operations, and the receiving TCP stack should reassemble it into one continuous stream transparently.
But what happens exactly in my program during this time?
If there is enough space in the send buffer, TcpClient.GetStream().Write() will not block at all and will return immediately, and so will NetworkStream.Flush().
If I set the TcpClient.SendBufferSize to a value smaller than the data, will the Write() block until
either the first part of the data has been received and ACKed,
or the TcpClient.SendTimeout has expired?
Or does it work in some other way? Does it actually wait for a TCP ACK?
Are there any other drawbacks besides higher overhead to such a smaller buffer size? Are there problems with changing the SendBufferSize on the fly?
Example:
byte[] data = new byte[20];       // 20 bytes of data
tcpClient.SendBufferSize = 10;    // 10 byte buffer
tcpClient.SendTimeout = 1000;     // 1 s timeout
tcpClient.GetStream().Write(data, 0, 20);
// will this block until the first 10 bytes have been fully transmitted?
// does it block until a TCP ACK has been received? or something else?
// will it throw if the first 10 bytes have not been received in 1 second?
tcpClient.GetStream().Flush();    // would this make any difference?
My goal here is mainly getting a better understanding of the network internals.
Also, I was wondering whether this could be abused to react more quickly to a network failure. If data is sent only infrequently, each data packet is small enough to be transmitted at once, and the message protocol has no receipt messages, then it could take a long time after a network error until the next Write() is called, and so a long time until an exception is thrown.
If the send buffer is very small, would an error be noticed more quickly, unless it happened near the end of the data?
Could I abuse this to measure the time it takes for a single packet to be transmitted and ACKed?
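For the last point, a rough sketch of the experiment I have in mind (whether Write blocking actually correlates with ACK timing is exactly what I am unsure about):
using System.Diagnostics;
tcpClient.SendBufferSize = 10;   // tiny buffer, should force Write to wait
var stopwatch = Stopwatch.StartNew();
tcpClient.GetStream().Write(data, 0, data.Length);
stopwatch.Stop();
// If Write really blocks until the buffer drains, this would be a crude
// upper bound on the round trip to the receiver.
Console.WriteLine("Write blocked for {0} ms", stopwatch.ElapsedMilliseconds);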
I am using a NetworkStream to pass short strings around the network.
Now, on the receiving side I have encountered an issue:
Normally I would do the reading like this
see if data is available at all
get count of data available
read that many bytes into a buffer
convert buffer content to string.
In code that assumes all offered methods work as probably intended, that would look something like this:
NetworkStream stream = someTcpClient.GetStream();
while (!stream.DataAvailable)
    ;
byte[] bufferByte = new byte[stream.Length];
stream.Read(bufferByte, 0, (int)stream.Length);
ASCIIEncoding enc = new ASCIIEncoding();
string result = enc.GetString(bufferByte);
However, MSDN says that NetworkStream.Length is not really implemented and will always throw an Exception when called.
Since the incoming data are of varying length I cannot hard-code the count of bytes to expect (which would also be a case of the magic-number antipattern).
Question:
If I cannot get an accurate count of the number of bytes available for reading, then how can I read from the stream properly, without risking all sorts of exceptions within NetworkStream.Read?
EDIT:
Although the provided answer leads to a better overall code I still want to share another option that I came across:
TcpClient.Available gives the number of bytes available to read. I knew there had to be a way to count the bytes in one's own inbox.
There's no guarantee that calls to Read on one side of the connection will match up 1-1 with calls to Write from the other side. If you're dealing with variable length messages, it's up to you to provide the receiving side with this information.
One common way to do this is to first work out the length of the message you're going to send and then send that length information first. On the receiving side, you then obtain the length first and then you know how big a buffer to allocate. You then call Read in a loop until you've read the correct number of bytes. Note that, in your original code, you're currently ignoring the return value from Read, which tells you how many bytes were actually read. In a single call and return, this could be as low as 1, even if you're asking for more than 1 byte.
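A sketch of that first approach with a 4-byte length header (assuming both sides agree on the byte order; the method names are illustrative):
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
// Read exactly count bytes, looping because a single Read may return fewer.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = stream.Read(buffer, offset, count - offset);
        if (n == 0)
            throw new IOException("Connection closed before the message completed.");
        offset += n;
    }
    return buffer;
}
// Receive one length-prefixed message.
static string ReceiveMessage(NetworkStream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return Encoding.ASCII.GetString(ReadExactly(stream, length));
}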
Another common way is to decide on message "formats" - where e.g. message number 1 is always 32 bytes in length and has X structure, and message number 2 is 51 bytes in length and has Y structure. With this approach, rather than you sending the message length before sending the message, you send the format information instead - first you send "here comes a message of type 1" and then you send the message.
A further common way, if applicable, is to use some form of sentinels - if your messages will never contain, say, a byte with value 0xff then you scan the received bytes until you've received an 0xff byte, and then everything before that byte was the message you wanted to receive.
But, whatever you want to do, whether its one of the above approaches, or something else, it's up to you to have your sending and receiving sides work together to allow the receiver to discover each message.
I forgot to say: a further way to change everything around, if you want to exchange messages and don't want to do any of the above fiddling around, is to switch to something that works at a higher level - e.g. WCF, or HTTP, or something else - where the system already takes care of message framing and you can then just concentrate on what to do with your messages.
You could use a StreamReader to read the stream to the end:
var streamReader = new StreamReader(someTcpClient.GetStream(), Encoding.ASCII);
string result = streamReader.ReadToEnd();
I am new to serial communication. I have read a fair few tutorials, and most of what I am trying to do is working. However, I have a question regarding serial communication with C#. I have a microcontroller that is constantly sending data through a serial line. The data is in this format:
bxxxxixx.xx,xx.xx*
where the x's represent different numbers, or + or - signs.
At certain times I want to read this information from my C# program on my PC. The problem I am having is that my messages seem to be split at random positions even though I am using
ReadTo("*");
I assumed this would read everything up to the * character.
How can I make sure that the message I received is complete?
Thank you for your help.
public string receiveCommandHC()
{
    string messageHC = "";
    if (serialHC.IsOpen)
    {
        serialHC.DiscardInBuffer();
        messageHC = serialHC.ReadTo("*");
    }
    return messageHC;
}
I'm not sure why you're doing it, but you're discarding everything in the serial port's in-buffer just before reading, so if the computer has already received "bxxx" at that point, you throw it away and you'll only read "xixx.xx,xx.xx".
You'll nearly always find in serial comms that data messages (unless very small) are split. This is mostly down to the speed of communication and the point at which you retrieve data from the port.
Usually you'd set your code to run in a separate thread (to help prevent impacting the performance of the rest of your code), which raises an event when a complete message is received and also accepts full messages for transmission. Read and write functionality is dealt with by worker threads (serial comms traffic is slow).
You'll need a read buffer and a write buffer. These should be suitably large to hold data for several cycles.
Append data read from the input to the end of your read buffer, and cyclically scan the read buffer for complete messages, starting from the front of the buffer.
Depending on the protocol used there is usually a data start indicator, maybe a data end indicator, and somewhere a message size (this may be fixed, again depending on your protocol). I gather from your protocol that the message start character is 'b' and the message end character is '*'. Discard all data preceding your message start character ('b'), as it is from an incomplete message.
When a complete message is found, strip it from the front of the buffer and raise an event to indicate its arrival.
A similar process is run for sending data, except that you may need to split the message; hence data to be sent is appended to the end of the buffer, and data being sent is read from the start.
I hope this helps you understand how to cope with serial comms.
As pointed out by Marc you're currently clearing your buffer in a way that will cause problems.
edit
As I said in my comment, I don't recognise serialHC, but if you're dealing with raw data then look at using the SerialPort class. More information on how to use it, and an example (which roughly follows the process that I described above), can be found here.
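A rough sketch of that buffering approach with SerialPort, tailored to your 'b'...'*' framing (the port settings, names, and handler are illustrative):
using System.IO.Ports;
using System.Text;
var port = new SerialPort("COM1", 9600);
var readBuffer = new StringBuilder();
port.DataReceived += (sender, e) =>
{
    readBuffer.Append(port.ReadExisting());
    string data = readBuffer.ToString();
    int start = data.IndexOf('b');                              // message start
    int end = (start >= 0) ? data.IndexOf('*', start + 1) : -1; // message end
    while (start >= 0 && end > start)
    {
        HandleMessage(data.Substring(start, end - start + 1));  // hypothetical handler
        data = data.Substring(end + 1);
        start = data.IndexOf('b');
        end = (start >= 0) ? data.IndexOf('*', start + 1) : -1;
    }
    readBuffer.Clear();
    if (start >= 0)
        readBuffer.Append(data.Substring(start)); // keep the partial message
};
port.Open();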
I'm going to take a guess that you're actually getting the ends of commands, i.e. instead of getting b01234.56.78.9 (omitting the final *), you're getting (say) .56.78.9. That is because you discarded the input buffer. Don't do that: you can't know the state at that point (just before a read), so discarding the buffer is simply wrong. Remove the serialHC.DiscardInBuffer(); line.
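In other words, a corrected sketch of the method from the question (just the discard removed; note that ReadTo consumes the '*' but does not return it):
public string receiveCommandHC()
{
    string messageHC = "";
    if (serialHC.IsOpen)
    {
        // No DiscardInBuffer() here: discarding throws away bytes that
        // have already arrived and truncates the next message.
        messageHC = serialHC.ReadTo("*");
    }
    return messageHC;
}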