Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure what size the receive buffer needs to be.
IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"), 20060);
Socket server = new Socket(AddressFamily.InterNetwork,
                           SocketType.Stream, ProtocolType.Tcp);
server.Connect(ipep);

String OutStr = "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50";
byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray();
int byteCount = server.Send(temp);

byte[] bytes = new byte[255];
// Use the count returned by Receive; decoding the whole 255-byte
// buffer would include uninitialised trailing bytes.
int res = server.Receive(bytes);
return Encoding.UTF8.GetString(bytes, 0, res);
The size of the buffer used to receive data is application- or protocol-dependent. There's no way within the language to tell what the size of your receive buffer ought to be, nor is there any socket function that says 'you need 23867 bytes to receive this message'. In general your application has to work out from the protocol what size the receive buffer should be and how to handle the data. Typically a protocol will either:
specify the number of bytes in the message, or
specify a terminating character (for example, HDLC uses 0x7e to indicate the end of a message).
A consequence of this is that your application might need to deal with split messages. For example, if the server sends a 2000-byte message but your receive buffer is only 1000 bytes, you'll have to write code that maintains state, tracking whether you have a complete message or only part of one.
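For illustration, here is a minimal helper (my own sketch, not part of the original answer) that keeps calling Socket.Receive until the expected number of bytes has arrived, which is exactly the state-keeping described above for a protocol that specifies the message length:

// Sketch: read exactly 'count' bytes from a connected, blocking socket.
// Throws if the peer closes the connection before 'count' bytes arrive.
static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int received = 0;
    while (received < count)
    {
        // Receive may return fewer bytes than requested (a split message).
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0)
            throw new SocketException((int)SocketError.ConnectionReset);
        received += n;
    }
    return buffer;
}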
TCP is a stream of bytes. It knows nothing of your concept of messages.
As such it's up to you to provide the necessary message framing information within that stream of bytes. Common ways to do this include prefixing the message with a header which contains the total length of the message or terminating the message with a character that cannot otherwise appear in a valid message.
I write about TCP message framing here: http://www.serverframework.com/asynchronousevents/2010/10/message-framing-a-length-prefixed-packet-echo-server.html though it's in reference to C++ code, so it might not be of any use to you.
It's usually slightly more performant for a message consumer to deal with length-prefixed messages, and it's often slightly more performant for a message producer to produce character-delimited messages. Personally I prefer length-prefixed messages wherever possible.
With a length-prefixed message you would first send x bytes of data containing the length of the message; the peer then knows that it always has to read at least x bytes to work out the length, and from that point it knows the size of the resulting message and can read until it has that many bytes.
With character-delimited messages you simply keep reading and scanning all of the data that you have read until you find the message delimiter. At that point you have a whole message, and possibly more data (part of the next message?) in the buffer to process after that.
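A rough sketch of that delimiter-scanning approach (my illustration, assuming '\n' as the delimiter and UTF-8 text):

// Sketch: accumulate received bytes and extract complete
// newline-delimited messages; leftover bytes stay buffered.
var pending = new List<byte>();
byte[] chunk = new byte[1024];
int read;
while ((read = socket.Receive(chunk)) > 0)
{
    for (int i = 0; i < read; i++)
        pending.Add(chunk[i]);
    int delim;
    while ((delim = pending.IndexOf((byte)'\n')) >= 0)
    {
        string message = Encoding.UTF8.GetString(pending.ToArray(), 0, delim);
        pending.RemoveRange(0, delim + 1); // drop the message and its delimiter
        Console.WriteLine(message);        // one whole message; more may follow
    }
}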
I am writing a small HTTP server, and sometimes I encounter a problem with missing POST data.
Using Wireshark I discovered that the header is split into two TCP segments.
I only get the first segment (636 bytes); the second one (the POST data in this case) gets lost entirely.
Here is the relevant C# code:
string requestHeaderString = "";
StreamSocket socketStream = args.Socket;
IInputStream inputStream = socketStream.InputStream;
byte[] data = new byte[BufferSize];
IBuffer buffer = data.AsBuffer();

try
{
    await inputStream.ReadAsync(buffer, BufferSize, InputStreamOptions.Partial);
    // This is where things go missing: buffer.ToArray() should be 678 bytes long,
    // i.e. segment 1 (636 bytes) and segment 2 (42 bytes) combined,
    // but it is only 636 bytes long, so just the first segment?!
    requestHeaderString += Encoding.UTF8.GetString(buffer.ToArray());
}
catch (Exception e)
{
    Debug.WriteLine("inputStream is not readable: " + e.StackTrace);
    return;
}
This code is part of the StreamSocketListener ConnectionReceived event handler.
Do I have to reassemble the TCP segments manually? Isn't this what the system's TCP stack should do?
Thanks,
David
The problem is that the system's TCP stack treats the TCP stream just like any other stream. You don't get "messages" with streams, you just get a stream of bytes.
The receiving side has no way to tell where one "message" ends and the next begins without you telling it somehow. You must implement message framing on top of TCP; then, on your receiving side, you must repeatedly call Receive until you have received enough bytes to form a full message (this will involve using the int returned from the Receive call to see how many bytes were read).
Important note: If you don't know how many bytes you are expecting to get in total (for example, you are framing messages by using '\0' to separate them), you may get the end of one message and the start of the next in a single Receive call. You will need to handle that situation.
EDIT: Sorry, I skipped over the fact that you were reading HTTP. You must follow the HTTP protocol: read data until you see the pattern \r\n\r\n. Once you have that, parse the header and work out how much data is in the content portion of the HTTP message, then repeatedly call read until you have read that number of bytes.
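A hedged sketch of that approach (my own illustration, not the poster's code), assuming a plain Stream, a Content-Length header, and no chunked transfer encoding:

// Sketch: read the header up to the blank line, then read exactly
// Content-Length bytes of body.
static async Task<string> ReadRequestAsync(Stream stream)
{
    var header = new List<byte>();
    byte[] one = new byte[1];
    // Read byte by byte (slow but simple) until \r\n\r\n is seen.
    while (header.Count < 4 ||
           header[header.Count - 4] != (byte)'\r' || header[header.Count - 3] != (byte)'\n' ||
           header[header.Count - 2] != (byte)'\r' || header[header.Count - 1] != (byte)'\n')
    {
        if (await stream.ReadAsync(one, 0, 1) == 0)
            throw new IOException("Connection closed before the header ended.");
        header.Add(one[0]);
    }
    string headerText = Encoding.ASCII.GetString(header.ToArray());

    // Find Content-Length (0 if absent, e.g. for a GET request).
    int contentLength = 0;
    foreach (string line in headerText.Split(new[] { "\r\n" }, StringSplitOptions.None))
        if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
            contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());

    // Repeatedly read until the whole body has arrived.
    byte[] body = new byte[contentLength];
    int got = 0;
    while (got < contentLength)
    {
        int n = await stream.ReadAsync(body, got, contentLength - got);
        if (n == 0) throw new IOException("Connection closed mid-body.");
        got += n;
    }
    return headerText + Encoding.UTF8.GetString(body);
}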
Currently, I'm receiving data by always reading 17 bytes. However, I have two types of messages, one of 17 bytes and one of 10 bytes. How can I make the code process both types of data?
byte[] message = new byte[17];
int bytesRead;

while (true)
{
    bytesRead = 0;
    try
    {
        // Blocks until a client sends a message.
        bytesRead = clientStream.Read(message, 0, 17);
    }
    catch
    {
        // A socket error has occurred.
        break;
    }

    if (bytesRead == 0)
    {
        // The client has disconnected from the server.
        break;
    }
I have seen similar questions asked, but they are in C and I couldn't understand them. Kindly assist me.
You are trying to implement a message exchange on top of a stream-based protocol (like TCP). When the messages have different lengths and/or types, there are two common approaches:
framed messages: Each message consists of a header of known length that contains the message's length and type, and possibly other metadata (e.g. a timestamp). After reading the header, the appropriate number of bytes (i.e. the payload) is read from the stream.
self-delimiting messages: The end of a message can be detected from the content of the stream read so far. One example of a self-delimiting format is the HTTP header, which is terminated by a double newline (2x CRLF).
IMHO framed messaging is easier to implement, since you always know how many bytes to read. For self-delimiting messages you have to employ buffering and parsing to detect the end of the message. Furthermore, you have to make sure that the end-of-message marker cannot appear in the message's payload.
To implement the receiving side of a framed messaging protocol, you can use the System.IO.BinaryReader class:
read the length of the message using ReadByte(), or one of the ReadUInt*() methods if the messages can become longer than 255 bytes
read the payload using Read(Byte[], Int32, Int32). Please note that Read may return even if fewer bytes than specified have been read. You have to use a loop to fill byte[] message, as in the sketch below.
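A minimal sketch of that receive side (my illustration, reusing the question's clientStream and assuming a one-byte length prefix, so the sender writes 17 or 10 followed by the payload):

using (var reader = new BinaryReader(clientStream, Encoding.UTF8, leaveOpen: true))
{
    byte length = reader.ReadByte();   // 17 or 10 in the question's case
    byte[] message = new byte[length];
    int offset = 0;
    while (offset < length)
    {
        // Read may return fewer bytes than asked for; loop until full.
        int n = reader.Read(message, offset, length - offset);
        if (n == 0) throw new EndOfStreamException();
        offset += n;
    }
    // 'message' now holds exactly one complete frame.
}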
In a client/server program without multi-client handling, when the server sends two messages like:
byte[] data1 = Encoding.Default.GetBytes("hello world1");
socket.Send(data1, 0, data1.Length, 0);
byte[] data2 = Encoding.Default.GetBytes("hello world2");
socket.Send(data2, 0, data2.Length, 0);
the client receives the two messages as a single one, like:
hello world1hello world2
but I want the client to receive the two sends as two separate receives.
Please help me fix this. :(
Use a line separator like '\n' and split incoming messages. With TCP you must be prepared for situations where packets are split up or joined.
If you used UDP, you could send separate packets.
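A small sketch of the splitting (my own illustration): keep a carry-over buffer, split on '\n', and save any trailing partial message for the next read. The sender would append '\n' to each message.

string carry = "";
byte[] buf = new byte[1024];
int n;
while ((n = socket.Receive(buf)) > 0)
{
    carry += Encoding.Default.GetString(buf, 0, n);
    int nl;
    while ((nl = carry.IndexOf('\n')) >= 0)
    {
        Console.WriteLine(carry.Substring(0, nl)); // one complete message
        carry = carry.Substring(nl + 1);           // keep the remainder
    }
}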
These are some of your options:
You can use length-prefixed messages, where you always send the length of the message first, for example in the first 4 bytes. The server reads the first four bytes, knows the length, and therefore knows how many of the remaining bytes belong to this message; the four bytes after that begin the next message, and so on (see the sketch after this list).
You can use a message delimiter. If you know that your messages will never contain a particular byte pattern, you can use it as a delimiter; for example, the server might always scan for the pattern 0,1,0,1,0,1 and know that the message has ended.
You can use a higher-level framework such as WCF, where the infrastructure handles framing for you.
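A sketch of the first option (my illustration, reusing the ReceiveExactly helper sketched in the first answer above; it assumes both ends share the same byte order):

// Sender: 4-byte length prefix, then the payload.
byte[] payload = Encoding.Default.GetBytes("hello world1");
socket.Send(BitConverter.GetBytes(payload.Length)); // length first
socket.Send(payload);                               // then the message

// Receiver: read 4 bytes for the length, then exactly that many bytes.
byte[] lengthBytes = ReceiveExactly(socket, 4);
int length = BitConverter.ToInt32(lengthBytes, 0);
string message = Encoding.Default.GetString(ReceiveExactly(socket, length));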
I have written a simple TCP client and server. The problem lies with the client.
I'm having some trouble reading the entire response from the server: I must let the thread sleep to allow all the data to be sent.
I've tried a few times to convert this code into a loop that runs until the server has finished sending data.
// Init & connect to server
TcpClient client = new TcpClient();
Console.WriteLine("Connecting.....");
client.Connect("192.168.1.160", 9988);

// Stream string to server
input += "\n";
Stream stm = client.GetStream();
ASCIIEncoding asen = new ASCIIEncoding();
byte[] ba = asen.GetBytes(input);
stm.Write(ba, 0, ba.Length);

// Read response from server.
byte[] buffer = new byte[1024];
System.Threading.Thread.Sleep(1000); // Huh, why do I need to wait?
int bytesRead = stm.Read(buffer, 0, buffer.Length);
response = Encoding.ASCII.GetString(buffer, 0, bytesRead);
Console.WriteLine("Response String: " + response);
client.Close();
The nature of streams that are built on top of sockets is that you have an open pipeline that transmits and receives data until the socket is closed.
However, because of the nature of client/server interactions, this pipeline isn't always guaranteed to have content on it to be read. The client and server have to agree to send content over the pipeline.
When you take the Stream abstraction in .NET and overlay it on the concept of sockets, the requirement for an agreement between the client and server still applies; you can call Stream.Read all you want, but if the socket that your Stream is connected to on the other side isn't sending content, the call will just wait until there is content.
This is why protocols exist. At their most basic level, they define what constitutes a complete message sent between the two parties. Usually, the mechanism is something along the lines of:
A length-prefixed message where the number of bytes to be read is sent before the message
A pattern of characters used to mark the end of a message (this is less common; the more arbitrary the content of a message can be, the less likely this approach will be used)
That said, you aren't adhering to the above; your call to Stream.Read is just saying "read at most 1024 bytes". If nothing has arrived yet, the call to Stream.Read will block until there is something to read.
The reason the call to Thread.Sleep probably works is that by the time a second has gone by, the Stream has the entire response buffered, so a single Read happens to return all of it.
Additionally, if you truly want to read 1024 bytes, you can't assume that the call to Stream.Read will populate all 1024 bytes. The return value of the Stream.Read method tells you how many bytes were actually read. If you need more for your message, then you need to make additional calls to Stream.Read.
Jon Skeet wrote up the exact way to do this if you want a sample.
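The pattern boils down to something like this (a sketch of the idea, not Jon Skeet's actual code):

// Sketch: loop until the requested number of bytes has been read.
static void ReadFully(Stream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int n = stream.Read(buffer, offset, count - offset);
        if (n == 0) throw new EndOfStreamException();
        offset += n;
    }
}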
Try repeating
int bytesRead = stm.Read(buffer, 0, buffer.Length);
while bytesRead > 0. It is a common pattern for this, as I remember.
Of course, don't forget to pass appropriate parameters for the buffer.
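Something like this sketch (mine), which works when the server closes the connection once it has finished sending:

// Sketch: keep reading until Read returns 0 (the server closed its end),
// accumulating everything that arrived.
var all = new MemoryStream();
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = stm.Read(buffer, 0, buffer.Length)) > 0)
{
    all.Write(buffer, 0, bytesRead);
}
string response = Encoding.ASCII.GetString(all.ToArray());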
You don't know the size of the data you will be reading, so you have to set up a mechanism to decide when a read is complete. One option is a timeout and another is using delimiters.
In your example you read whatever data arrives in just one Read call, because you don't set a timeout for reading and the default value is "0" milliseconds. That is why you have to sleep for 1000 ms first; you would get the same effect by setting the receive timeout to 1000 ms.
I think using the length of the data as a prefix is not the real solution, because when the socket is closed by both sides, the socket TIME_WAIT situation cannot be handled properly; the same data can be sent to the server again and cause the server to get an exception. We used a start/end character sequence instead: after every read we check the data for the start and end character sequences, and if we can't find the end characters, we call another read. But of course this works only if you have control of both the server-side and client-side code.
In the TCP client/server I just wrote, I generate the packet I want to send into a memory stream, then take the length of that stream and use it as a prefix when sending the data. That way the client knows how many bytes of data it needs to read for a full packet.
I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
///...
while (socket.Connected)
{
byte[] recvBuffer = new byte[1024];
int bytesRead = 0;
try
{
bytesRead = socket.Receive(recvBuffer);
}
catch (SocketException e)
{
if (! socket.Connected)
{
return;
}
}
string input = encoding.GetString(recvBuffer, 0, bytesRead);
CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(str)); // 'str' is the string being sent ('string' is a reserved word)
If you're dropping packets, you'll see a delay in transmission, since the stack has to re-transmit the dropped packets. This delay could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, will trigger a resend of only those packets which were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order with TCP, and if for some reason the stack can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. Instead of sending each bit of data as you post it, the stack sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet before sending it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using setsockopt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
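For reference, this is how that option is set in .NET (shown only because the answer mentions it; the warning above still applies):

// Disables Nagle's algorithm on this socket. Use with caution.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
// Equivalent shorthand:
socket.NoDelay = true;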
Well, there's one thing wrong with your code to start with: if you're counting the number of calls to Receive which complete, you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol. You shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one read and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, either.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm taking about the application-layer protocol you're inventing) is lacking: you should consider sending the # of objects and/or their length so the receiver knows when he's actually done receiving them.
How long are each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read big blocks of in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely come to your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to come without it being merged with other strings, look into adding in a "protocol" with an identifier to show the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
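As a closing illustration (mine, not the answerer's), a start/end marker protocol might look like this, assuming STX (0x02) and ETX (0x03) can never occur in the payload:

// Sketch: frame each string between STX (0x02) and ETX (0x03) markers.
static void SendFramed(Socket socket, string text)
{
    byte[] payload = Encoding.ASCII.GetBytes(text);
    byte[] framed = new byte[payload.Length + 2];
    framed[0] = 0x02;                 // STX: start of message
    payload.CopyTo(framed, 1);
    framed[framed.Length - 1] = 0x03; // ETX: end of message
    socket.Send(framed);
}
// The receiver scans incoming bytes for 0x02 ... 0x03 pairs and extracts
// each complete string, exactly like the '\n' sketch earlier.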