I have an Arduino microcontroller with a Sparkfun WiFly shield.
I built a simple program in C#/.NET that connects to the Arduino using System.Net.Sockets:
Socket Soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

public void SendMsg(string msg)
{
    try
    {
        byte[] buffer = StrToByteArray(msg);
        if (Soc.Connected)
            Soc.Send(buffer);
        else
        {
            Soc.Connect(this.remoteIP);
            Soc.Send(buffer);
        }
    }
    catch (Exception e) { }
}
On the Arduino I have:
while (SpiSerial.available() > 0) {
  byte b = SpiSerial.read();
  Serial.println(b);
}
When a socket connection completes its handshake I get "*OPEN*", and when it is closed I get "*CLOS*".
The problem is that I receive the message one byte at a time, and sometimes I don't get the whole message within a single pass of the while loop.
So if I use the code shown above on the Arduino, my serial terminal looks like:
*
O
P
E
N
*
T
E
S
T
*
C
L
O
S
*
So how can I reconstruct the message the PC is trying to send?
I know I somehow need to use a special byte to mark the end of my message (a byte that never appears in the message itself and is used only to signal the end of a message).
But how can I do that? And which byte should I use?
You need to design your own protocol here. You should define a byte (preferably one that won't occur in the data) to indicate "start", and then you have three choices:
1. follow the start byte with a "length" byte indicating how much data to read
2. define an "end" byte that marks the end of your data
3. read data until you have a complete message that matches one of the ones you expect
The third option is the least extensible and flexible, of course: if you already have a message "OPEN", you can't later add a new message "OPENED", for instance.
If you take the second option and define an "end" byte then you need to worry about escaping that byte if it occurs within your data (or use another byte that is guaranteed not to be in your data).
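As an illustration of that escaping idea, here is a minimal byte-stuffing sketch for the sending side; the END and ESC values (0x7E and 0x7D) are assumptions chosen for the example, not values taken from the WiFly protocol:

// Minimal byte-stuffing sketch: escape any payload byte that collides with
// the framing bytes, then append the terminator. END/ESC values are assumed.
using System.Collections.Generic;

static class ByteStuffing
{
    const byte END = 0x7E; // assumed end-of-message marker
    const byte ESC = 0x7D; // assumed escape prefix

    public static byte[] Frame(byte[] payload)
    {
        var framed = new List<byte>();
        foreach (byte b in payload)
        {
            if (b == END || b == ESC)
            {
                framed.Add(ESC);                // escape prefix
                framed.Add((byte)(b ^ 0x20));   // transformed byte, restored on receive
            }
            else
            {
                framed.Add(b);
            }
        }
        framed.Add(END);                        // terminator marks the end of the message
        return framed.ToArray();
    }
}

The receiver reverses the process: an ESC byte means "XOR the next byte with 0x20", and a bare END byte means the message is complete.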
Looking at your current example, a good starting point would be to simply prefix each message with a length.
If you want to support longer messages you can use a 2-byte length prefix: read the first 2 bytes to get the length, then continue reading from the socket until you have read the number of bytes the prefix indicated.
Once you have read a complete message you are back to expecting the length prefix of the next message, and so on until the communication is terminated by one of the parties.
Of course, in between all this you need to check for error conditions such as the socket on one end being closed prematurely, and decide how to handle the partial messages that can result from a premature close.
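A minimal receiving-side sketch of that scheme might look like the following; the 2-byte big-endian prefix is a choice made for this example rather than anything mandated by the hardware:

// Minimal sketch of reading 2-byte-length-prefixed messages from a Socket.
// Assumes the sender writes a big-endian ushort length followed by the payload;
// adapt the byte order and prefix size to your own protocol.
using System.IO;
using System.Net.Sockets;

static class LengthPrefixedReader
{
    // Keeps calling Receive until exactly 'count' bytes have been read,
    // or throws if the peer closes the connection mid-message.
    static void ReadExactly(Socket socket, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new EndOfStreamException("Peer closed the connection mid-message.");
            offset += read;
        }
    }

    public static byte[] ReadMessage(Socket socket)
    {
        byte[] prefix = new byte[2];
        ReadExactly(socket, prefix, 2);
        int length = (prefix[0] << 8) | prefix[1];   // big-endian length

        byte[] payload = new byte[length];
        ReadExactly(socket, payload, length);
        return payload;
    }
}

The ReadExactly-style loop is what deals with data arriving "one byte by another": Receive is simply called again until the expected count has arrived.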
Related
In this question, C# Receiving Packet in System.Sockets, a guy asked:
"In my client-server application I wondered how to make a packet and send it to the server via the client.
Then, on the server, I recognize which packet it is and send the proper reply."
and showed an example of how he implemented the 'packet recognizer'. He got an answer saying that his way of 'structuring the message' is bad, but no explanation or code example accompanied that answer.
So please, can anybody show an example of good code that does something like this, but the proper way?
[client]
Send(Encoding.ASCII.GetBytes("1001:UN=user123&PW=123456")); // 1001 is the ID
[server]
private void OnReceivePacket(byte[] arg1, Wrapper Client)
{
    try
    {
        int ID;
        string V = Encoding.ASCII.GetString(arg1).Split(':')[0];
        int.TryParse(V, out ID);

        switch (ID)
        {
            case 1001: // Login Packet
                AppendToRichEditControl("LOGIN PACKET RECEIVED");
                break;
            case 1002:
                // OTHER IDs
                break;
            default:
                break;
        }
    }
    catch { }
}
TCP ensures that data arrives in the same order it was sent - but it doesn't have a concept of messages. For instance, say you send the following two fragments of data:
1001:UN=user123&PW=123456
999:UN=user456&PW=1234
On the receiving end, you will read 1001:UN=user123&PW=123456999:UN=user456&PW=1234 and this might take one, two or more reads. This may even arrive in two packets as:
1001:UN=user123&PW=12
3456999:UN=user456&PW=1234
This makes it very hard to parse the messages correctly. The other post mentions sending the length of a packet before the actual data, and that indeed solves the problem, as you can determine exactly where one message ends and the next starts.
As an example, client and server could agree that each message starts with 4 bytes containing the length of the message. The receiver could then simply:
1. load exactly 4 bytes
2. convert them to an integer; now you know the length of the remainder of the message
3. read exactly that many bytes and parse the message
C# conveniently has the BitConverter class that allows you to convert the integer to a byte[] and vice versa.
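A minimal sketch of that 4-byte scheme, using BitConverter on both sides, could look like this; the ASCII payload encoding is just an assumption matching the example strings, and both ends must agree on byte order since BitConverter uses the machine's native order:

// Minimal sketch of 4-byte length-prefixed framing with BitConverter.
using System;
using System.IO;
using System.Text;

static class Framing
{
    public static void WriteMessage(Stream stream, string text)
    {
        byte[] payload = Encoding.ASCII.GetBytes(text);
        byte[] prefix = BitConverter.GetBytes(payload.Length); // 4 bytes, native byte order
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    public static string ReadMessage(Stream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(prefix, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.ASCII.GetString(payload);
    }

    // Loops until 'count' bytes have been read; a single Read may return fewer.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}

On the client you would then call something like Framing.WriteMessage(stream, "1001:UN=user123&PW=123456"), and the server's ReadMessage call returns exactly one whole message per call, no matter how TCP happened to split the bytes.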
I am writing a small HTTP server, and sometimes I encounter a problem with missing POST data.
Using Wireshark I discovered that the request is split into two TCP segments.
I only get the first segment (636 bytes); the second one (the POST data in this case) gets lost completely.
Here is the relevant C# code:
string requestHeaderString = "";
StreamSocket socketStream = args.Socket;
IInputStream inputStream = socketStream.InputStream;
byte[] data = new byte[BufferSize];
IBuffer buffer = data.AsBuffer();
try
{
    await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    // This is where things go missing, buffer.ToArray() should be 678 Bytes long,
    // so Segment 1 (636 Bytes) and Segment 2 (42 Bytes) combined.
    // But is only 636 Bytes long, so just the first Segment?!
    requestHeaderString += Encoding.UTF8.GetString(buffer.ToArray());
}
catch (Exception e)
{
    Debug.WriteLine("inputStream is not readable" + e.StackTrace);
    return;
}
This code is part of the StreamSocketListener.ConnectionReceived event handler.
Do I have to manually reassemble the TCP segments? Isn't that what the system's TCP stack is supposed to do?
Thanks,
David
The problem is that the system's TCP stack treats the TCP connection just like any other stream. You don't get "messages" with streams, you just get a stream of bytes.
The receiving side has no way to tell where one "message" ends and the next begins unless you tell it somehow. You must implement message framing on top of TCP; on the receiving side you must then repeatedly call Receive until you have received enough bytes to form a full message (this involves using the int returned from the Receive call to see how many bytes were actually read).
Important note: if you don't know how many bytes you are expecting in total, for example because you are framing messages with a '\0' separator, you may get the end of one message and the start of the next in a single Receive call. You will need to handle that situation.
EDIT: Sorry, I skipped over the fact that you are reading HTTP. You must follow the HTTP protocol: read data until you see the pattern \r\n\r\n, then parse the header and work out how much data is in the content portion of the HTTP message, then repeatedly call Read until you have read that many bytes.
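A minimal sketch of that read-until-\r\n\r\n-then-Content-Length loop might look like the following; it is written against a plain System.IO.Stream for brevity (the same accumulate-and-check logic applies when reading from an IInputStream), and it ignores chunked transfer encoding:

// Accumulate bytes until the header terminator "\r\n\r\n" appears, parse
// Content-Length, then keep reading until the body is complete.
using System;
using System.IO;
using System.Text;

static class HttpRequestReader
{
    public static string ReadRequest(Stream stream)
    {
        var received = new MemoryStream();
        byte[] chunk = new byte[4096];
        int headerEnd = -1;

        // Read until the blank line that terminates the header block.
        while (headerEnd < 0)
        {
            int read = stream.Read(chunk, 0, chunk.Length);
            if (read == 0) throw new EndOfStreamException();
            received.Write(chunk, 0, read);
            headerEnd = IndexOfHeaderEnd(received.ToArray());
        }

        string header = Encoding.UTF8.GetString(received.ToArray(), 0, headerEnd);
        int contentLength = ParseContentLength(header);

        // Keep reading until the whole body has arrived.
        int bodyBytesSoFar = (int)received.Length - (headerEnd + 4);
        while (bodyBytesSoFar < contentLength)
        {
            int read = stream.Read(chunk, 0, chunk.Length);
            if (read == 0) throw new EndOfStreamException();
            received.Write(chunk, 0, read);
            bodyBytesSoFar += read;
        }
        return Encoding.UTF8.GetString(received.ToArray());
    }

    // Returns the index of the first byte of "\r\n\r\n", or -1 if not found yet.
    static int IndexOfHeaderEnd(byte[] data)
    {
        for (int i = 0; i + 3 < data.Length; i++)
            if (data[i] == '\r' && data[i + 1] == '\n' && data[i + 2] == '\r' && data[i + 3] == '\n')
                return i;
        return -1;
    }

    static int ParseContentLength(string header)
    {
        foreach (string line in header.Split(new[] { "\r\n" }, StringSplitOptions.None))
            if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                return int.Parse(line.Substring("Content-Length:".Length).Trim());
        return 0; // no body (e.g. a plain GET)
    }
}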
Currently, I'm receiving data by assuming every message is 17 bytes. However, I have two types of messages, one of 17 bytes and one of 10 bytes. How can I process both types?
byte[] message = new byte[17];
int bytesRead;

while (true)
{
    bytesRead = 0;
    try
    {
        // blocks until a client sends a message
        bytesRead = clientStream.Read(message, 0, 17);
    }
    catch
    {
        // a socket error has occurred
        break;
    }

    if (bytesRead == 0)
    {
        // the client has disconnected from the server
        break;
    }
I have seen similar questions asked, but they are in C and I couldn't understand them. Kindly assist me.
You are trying to implement a message exchange on top of a stream-based protocol (like TCP). When the messages have different lengths and/or types, there are two common approaches:
- Framed messages: each message consists of a header of known length that contains the message's length and type, and possibly other metadata (e.g. a timestamp). After reading the header, the appropriate number of bytes (i.e. the payload) is read from the stream.
- Self-delimiting messages: the end of a message can be detected from the content of the stream read so far. One example of self-delimiting data is the HTTP header, which is terminated by a double newline (2x CRLF).
IMHO framed messaging is easier to implement, since you always know how many bytes to read. For self-delimiting messages you have to employ buffering and parsing to detect the end of the message, and you have to make sure the end-of-message marker cannot appear in the message's payload.
To implement the receiving side of a framed messaging protocol you can use the System.IO.BinaryReader class (see the sketch after this list):
1. read the length of the message using ReadByte(), or one of the ReadUInt*() methods if the messages can become longer than 255 bytes
2. read the payload using Read(Byte[], Int32, Int32); note that Read may return before all of the requested bytes have been read, so you have to use a loop to fill byte[] message
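A minimal sketch of those two steps, assuming purely for the example that each message carries a 2-byte length prefix read with ReadUInt16:

// Minimal sketch of reading framed messages with BinaryReader. Adjust the
// prefix type (ReadByte, ReadUInt16, ...) to whatever your protocol uses.
using System.IO;
using System.Net.Sockets;

static class FramedReceiver
{
    public static byte[] ReadMessage(NetworkStream clientStream)
    {
        var reader = new BinaryReader(clientStream);
        ushort length = reader.ReadUInt16();     // header: message length

        byte[] message = new byte[length];
        int offset = 0;
        while (offset < length)                  // Read may return fewer bytes than asked for
        {
            int read = reader.Read(message, offset, length - offset);
            if (read == 0)
                throw new EndOfStreamException("Client disconnected mid-message.");
            offset += read;
        }
        return message;
    }
}

A 17-byte and a 10-byte message then simply become two different length values (or two different type codes in the header), and the same loop handles both.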
I am trying to interface an ancient network camera to my computer and I am stuck at a very fundamental problem -- detecting the end of stream.
I am using TcpClient to communicate with the camera and I can actually see it transmitting the command data, no problems here.
List<int> incoming = new List<int>();
TcpClient clientSocket = new TcpClient();
clientSocket.Connect(txtHost.Text, Int32.Parse(txtPort.Text));
NetworkStream serverStream = clientSocket.GetStream();
serverStream.Flush();
byte[] command = System.Text.Encoding.ASCII.GetBytes("i640*480M");
serverStream.Write(command, 0, command.Length);
Reading back the response is where the problem begins though. I initially thought something simple like the following bit of code would have worked:
while (serverStream.DataAvailable)
{
    incoming.Add(serverStream.ReadByte());
}
But it didn't, so I had a go at another version, this time making use of ReadByte()'s return value. The documentation states:
Reads a byte from the stream and advances the position within the stream by one byte, or returns -1 if at the end of the stream.
so I thought I could implement something along the lines of:
Boolean run = true;
int rec;

while (run)
{
    rec = serverStream.ReadByte();
    if (rec == -1)
    {
        run = false;
        //b = (byte)'X';
    }
    else
    {
        incoming.Add(rec);
    }
}
Nope, still doesn't work. I can actually see data coming in, but after a certain point (which is not always the same, otherwise I could simply have read that many bytes every time) I start getting 0 as the value for the rest of the elements, and the loop doesn't halt until I manually stop execution. Here's what it looks like:
So my question is, am I missing something fundamental here? How can I detect the end of the stream?
Many thanks,
H.
What you're missing is how a TCP data stream works. It is an open connection, like an open phone line: someone on the other end may or may not be talking (DataAvailable), and just because they paused to take a breath (DataAvailable == false) doesn't mean they're actually DONE with their current statement. A moment later they could start talking again (DataAvailable == true).
You need some kind of defined rules for the communication protocol ABOVE TCP, which is really just a transport layer. For instance, perhaps the camera sends a special character sequence when its current image transmission is complete; you then need to examine every byte received, determine whether that sequence has arrived, and act appropriately.
Well, you can't exactly detect EOS on a network connection (unless the other party drops the connection). Usually the protocol itself contains something to signal that the message is complete (sometimes a newline, for example). So you read from the stream into a buffer and extract complete messages by applying these strategies.
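As a sketch of that buffer-and-extract strategy, the loop below reads from the NetworkStream until an assumed terminator sequence arrives; the actual marker bytes depend entirely on the camera's protocol and are not known from the question:

// Minimal sketch of scanning a NetworkStream for a terminator sequence.
// The terminator bytes are purely hypothetical; substitute whatever
// end-of-image marker the camera actually sends.
using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;

static class DelimitedReader
{
    public static List<byte> ReadUntil(NetworkStream stream, byte[] terminator)
    {
        var received = new List<byte>();
        while (!EndsWith(received, terminator))
        {
            int b = stream.ReadByte();          // blocks until a byte arrives
            if (b == -1)
                throw new EndOfStreamException("Connection closed before the terminator arrived.");
            received.Add((byte)b);
        }
        // Strip the terminator so only the payload is returned.
        received.RemoveRange(received.Count - terminator.Length, terminator.Length);
        return received;
    }

    static bool EndsWith(List<byte> data, byte[] suffix)
    {
        if (data.Count < suffix.Length) return false;
        for (int i = 0; i < suffix.Length; i++)
            if (data[data.Count - suffix.Length + i] != suffix[i]) return false;
        return true;
    }
}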
Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure of the required receive buffer size.
IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060);
Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
server.Connect(ipep);
String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50";
byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray();
int byteCount = server.Send(temp);
byte[] bytes = new byte[255];
int res=0;
res = server.Receive(bytes);
return Encoding.UTF8.GetString(bytes);
The size of the buffer used to receive data is application or protocol dependent. There's no way within the language to tell what the size of your receive buffer ought to be, nor is there any socket function that says 'you need 23867 bytes to receive this message'. In general your application has to work out from the protocol what size the receive buffer should be and how to handle it. Typically a protocol will either:
1. specify the number of bytes in the message, or
2. specify a terminating character (for example, HDLC uses 0x7e to indicate the end of a frame)
A consequence of this is that your application might need to deal with split messages. For example, the server might send a message of 2000 bytes while your receive buffer is only 1000 bytes; you'll have to write some code that maintains state, telling you whether you have a complete message or only part of one.
TCP is a stream of bytes. It knows nothing of your concept of messages.
As such it's up to you to provide the necessary message framing information within that stream of bytes. Common ways to do this include prefixing the message with a header which contains the total length of the message or terminating the message with a character that cannot otherwise appear in a valid message.
I speak about TCP message framing here: http://www.serverframework.com/asynchronousevents/2010/10/message-framing-a-length-prefixed-packet-echo-server.html though it's in reference to C++ code so it might not be any use to you.
It's usually slightly more performant for a message consumer to deal with length-prefixed messages, and often slightly more performant for a message producer to produce character-delimited messages. Personally I prefer length-prefixed messages wherever possible.
With a length-prefixed message you first send x bytes of data containing the length of the message. The peer then knows it always has to read at least x bytes to work out the length; from that point it knows the size of the resulting message and can read until it has that many bytes.
With character-delimited messages you simply keep reading and scanning all of the data you have read until you find the message delimiter. You then have a whole message, and possibly more data (part of the next message?) left in the buffer to process after that.
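A minimal sketch of that leftover-handling idea is shown below; the '\n' delimiter is only an assumption for the example, and the class simply keeps any partial trailing message buffered until the next chunk arrives:

// Character-delimited framing where bytes after the delimiter are kept
// for the next message rather than discarded.
using System.Collections.Generic;
using System.Text;

class DelimitedMessageBuffer
{
    private readonly List<byte> _pending = new List<byte>();
    private const byte Delimiter = (byte)'\n';  // assumed delimiter for this example

    // Call this with each chunk returned by Read/Receive; it yields every
    // complete message found so far and keeps any partial message buffered.
    public IEnumerable<string> Append(byte[] data, int count)
    {
        for (int i = 0; i < count; i++)
        {
            if (data[i] == Delimiter)
            {
                yield return Encoding.ASCII.GetString(_pending.ToArray());
                _pending.Clear();
            }
            else
            {
                _pending.Add(data[i]);
            }
        }
    }
}

Each call to Append may yield zero, one, or several messages, which is exactly the "possibly more data in the buffer" case described above.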