I'm currently writing some async UDP network code in C#. I'm sending small packets (less than 50 bytes of data in each so far) back and forth, and my first thought was to split each one into two logical parts: still send it as one packet, but receive it as two. So a header, or an extra information block, is always added to the start of the real packet; it would contain an ID and the data length.
The idea was to split it on the receiving end (async receive): first receive the header, then the actual data. That way I wouldn't have to worry about the ordering between packets and "packet headers".
So I wrote code that basically worked like this:
Client sends 30 bytes of data to the server, where the first 3 bytes are the packet header.
The server would have called (PACKET_HEADER_SIZE = 3):
socket.BeginReceiveFrom(state.Buffer, 0, PACKET_HEADER_SIZE, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
Then receives the data:
private void ReceivePacketInfo(IAsyncResult ar)
{
    StateObj state = (StateObj) ar.AsyncState;
    int bytesRead = socket.EndReceiveFrom(ar, ref endPoint);
    state.BytesReceived += bytesRead;
    if (state.BytesReceived < state.Buffer.Length)
    {
        // keep receiving until the whole header has arrived
        socket.BeginReceiveFrom(state.Buffer, state.BytesReceived, state.Buffer.Length - state.BytesReceived, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
    }
    else
    {
        // my thought was to receive the rest of the packet here
    }
}
but when calling socket.EndReceiveFrom(ar) I get a SocketException:
"A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself"
So now I have a couple of questions.
Do I have to make sure I receive the whole packet (in this case both the header and the data) before I call EndReceiveFrom?
Can I assume that I will either get the whole packet in one go or get nothing, so that my if-statement in ReceivePacketInfo would be redundant (as long as its size is less than the maximum packet size, of course)?
If I cannot, is there a good way of solving my problem? I could tag all my packet headers and all my packets to be able to map them together I suppose. I could also try to have a standardized "packet ending" so that I just read until I hit the end of the packet.
Thanks in advance for any help!
Can I assume that I will either get the whole packet in one go or get nothing
That's almost the only thing that UDP can guarantee - the content of a packet. If a packet is received, it is guaranteed to have the same size and the same content as was sent. So you have to make sure that your buffer is large enough for a whole packet.
The order of packets is not guaranteed, and neither is delivery itself. It is up to you and your application to handle dropped packets and out-of-order packets.
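To make that concrete, here is a minimal sketch that reuses the socket, endPoint and StateObj from the question; the MAX_PACKET_SIZE constant, the assumed 3-byte header layout and the commented-out ProcessPacket call are only illustrations, not part of the original code. The whole datagram is received in one call into a buffer that is at least as large as the biggest packet, and the header is split off afterwards:
private const int PACKET_HEADER_SIZE = 3;
private const int MAX_PACKET_SIZE = 512;   // assumption: must be >= the largest datagram you ever send

private void ReceivePacket(IAsyncResult ar)
{
    StateObj state = (StateObj) ar.AsyncState;
    int bytesRead = socket.EndReceiveFrom(ar, ref endPoint);

    if (bytesRead >= PACKET_HEADER_SIZE)
    {
        byte id = state.Buffer[0];
        // assumed layout: length stored in bytes 1..2, little-endian
        int dataLength = state.Buffer[1] | (state.Buffer[2] << 8);
        // the payload follows the header in the same datagram:
        // ProcessPacket(id, state.Buffer, PACKET_HEADER_SIZE, dataLength);
    }

    // queue the next receive with the full-sized buffer (state.Buffer must be MAX_PACKET_SIZE bytes)
    socket.BeginReceiveFrom(state.Buffer, 0, MAX_PACKET_SIZE, SocketFlags.None,
                            ref endPoint, ReceivePacket, state);
}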
Related
I have a 3rd-party server where I send it data and then wait for a response.
It will first respond with an ACK, then with the full response.
I am doing something like this. Is there a better way?
//Connect
eftSocket.Connect(remoteEp);
// Encode the data string into a byte array.
byte[] msg = Encoding.ASCII.GetBytes(strRequest);
// Send the data through the socket.
int bytesSent = eftSocket.Send(msg);
//receive first time
int bytesRec = eftSocket.Receive(bytes);
// Do we get an ACK?
if (bytes[0] != 6)
{
    throw new Exception("Could not connect to EFT terminal");
}
//receive 2nd time
bytesRec = eftSocket.Receive(bytes);
That code looks broken. First, you have to embrace that TCP is a stream and that there are no packets. That means that even if the client sends something of a given length, you can never expect to receive it all in one call. So when you do eftSocket.Receive(bytes) you might receive any amount of the bytes which were sent from the remote - not necessarily the ACK byte in the first call and the remaining stuff in a second call. The only thing you know is that you won't receive more than bytes.Length in one call. If you hand it a sufficiently large buffer you might receive everything (ACK + remaining data) in one go. Therefore you should always check the return value first, and if you expect more bytes, repeat the Receive calls with the appropriate offsets.
Then you should check whether your receive succeeded at all. You might receive 0, which means the remote closed the socket. If that's the case, bytes[0] will yield whatever was stored in it before the Receive started.
You should also properly close your socket. It's not obvious from the code example whether that happens, but if you leave the code via an exception, the close might be skipped.
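As a rough sketch of that receive loop (reusing eftSocket from the question; the buffer size and the expectedResponseBytes value are placeholders, since the real length depends on the 3rd-party protocol):
// Sketch only: check the ACK and then keep calling Receive with offsets until
// the expected number of response bytes has arrived. The expected length below
// is a placeholder - how you determine it depends on the 3rd-party protocol.
byte[] buffer = new byte[4096];
int total = eftSocket.Receive(buffer);

if (total == 0)
    throw new Exception("Remote closed the connection");
if (buffer[0] != 6)
    throw new Exception("Could not connect to EFT terminal");

int expectedResponseBytes = 128;   // placeholder value, protocol dependent
while (total < 1 + expectedResponseBytes)
{
    int read = eftSocket.Receive(buffer, total, buffer.Length - total, SocketFlags.None);
    if (read == 0)
        throw new Exception("Remote closed the connection before the full response arrived");
    total += read;
}
// the response now sits in buffer[1 .. 1 + expectedResponseBytes)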
I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled fast enough from the socket, there is a bottleneck at the TCP level, and the received data is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read from the socket? (via some external tool like Wireshark or otherwise)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I guess the "dispatch" is somehow blocking reception until it is done, ending up in latency (i.e., data processed by the dispatch is delayed), hence my (false?) conclusion that there was data accumulating at the socket level or before.
How can I see how much TCP bytes are awaiting to be read by the socket
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket. They only know which packets have passed by the network interface.
The only way to check it with Wireshark is to actually know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use socket.Available if you are using the normal Socket class. Otherwise you have to define a header byte which gives the number of bytes to be sent from the other end.
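For illustration, a tiny sketch of using Available (note it only reports what the OS has already buffered locally for this socket, not what the peer still intends to send):
// Sketch: Available only reports bytes the OS has already buffered locally
// for this socket; it says nothing about what the peer still has to send.
if (socket.Available > 0)
{
    byte[] buffer = new byte[socket.Available];
    int read = socket.Receive(buffer);
    // process the first 'read' bytes of 'buffer' here
}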
Is there a way to determine the receive buffer size of a TCP/IP socket in C#? I am sending a message to a server and expecting a response, but I am not sure what receive buffer size to use.
IPEndPoint ipep = new IPEndPoint(IPAddress.Parse("192.125.125.226"),20060);
Socket server = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.Tcp);
server.Connect(ipep);
String OutStr= "49|50|48|48|224|48|129|1|0|0|128|0|0|0|0|0|4|0|0|32|49|50";
byte[] temp = OutStr.Split('|').Select(s => byte.Parse(s)).ToArray();
int byteCount = server.Send(temp);
byte[] bytes = new byte[255];
int res=0;
res = server.Receive(bytes);
return Encoding.UTF8.GetString(bytes);
The size of the buffer used to receive data is application or protocol dependent. There's no way within the language to tell what the size of your receive buffer ought to be, nor is there any socket function that says "you need 23867 bytes to receive this message". In general your application has to work out from the protocol what size the receive buffer should be and how to handle it. Typically a protocol will either:
specify the number of bytes in the message, or
specify a terminating character (for example HDLC using 0x7e to indicate the end of the message).
A consequence of this is that your application might need to deal with split messages. For example, if the server sends a message that is 2000 bytes but your receive buffer is only 1000 bytes, you'll have to write some code to maintain state telling you whether you've got a complete message or are only partially through one.
TCP is a stream of bytes. It knows nothing of your concept of messages.
As such it's up to you to provide the necessary message framing information within that stream of bytes. Common ways to do this include prefixing the message with a header which contains the total length of the message or terminating the message with a character that cannot otherwise appear in a valid message.
I speak about TCP message framing here: http://www.serverframework.com/asynchronousevents/2010/10/message-framing-a-length-prefixed-packet-echo-server.html though it's in reference to C++ code so it might not be any use to you.
It's usually slightly more performant for a message consumer to deal with length-prefixed messages and it's often slightly more performant for a message producer to produce character-delimited messages. Personally I prefer length-prefixed messages wherever possible.
With a length prefixed message you would first send x bytes of data which are the length of the message, the peer would then know that it always has to read at least x bytes to work out the length and from that point it knows the size of the resulting message and can read until it has that many bytes.
With character delimited messages you simply keep reading and scanning all of the data that you have read until you find the message delimiter. You have then got a whole message, and possibly more data (part of the next message?) in the buffer to process after that.
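For illustration, a minimal length-prefixed reader might look like the sketch below (the 4-byte little-endian prefix and the helper names are assumptions, not part of any particular protocol):
// Sketch of a length-prefixed reader. The 4-byte little-endian prefix is an
// assumption; header size and byte order are whatever your protocol defines.
static void ReceiveExactly(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new Exception("Connection closed before the full message arrived");
        offset += read;
    }
}

static byte[] ReceiveMessage(Socket socket)
{
    byte[] lengthPrefix = new byte[4];
    ReceiveExactly(socket, lengthPrefix, 4);
    int length = BitConverter.ToInt32(lengthPrefix, 0);   // assumes a little-endian sender

    byte[] message = new byte[length];
    ReceiveExactly(socket, message, length);
    return message;
}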
As we know, for UDP receive we use Socket.ReceiveFrom or UdpClient.Receive.
Socket.ReceiveFrom accepts a byte array from you to put the UDP data in.
UdpClient.Receive directly returns a byte array containing the data.
My question is: how do I set the buffer size inside the Socket? I think the OS maintains its own buffer for receiving UDP data, right? E.g., if a UDP packet is sent to my machine, the OS will put it into a buffer and wait for us to call Socket.ReceiveFrom or UdpClient.Receive, right?
How can I change the size of that internal buffer?
I have tried Socket.ReceiveBufferSize; it has no effect at all for UDP, and it clearly said that it is for the TCP window. I have also done a lot of experiments which prove (to me) that Socket.ReceiveBufferSize is NOT for UDP.
Can anyone share some insights on the UDP internal buffer?
Thanks
I have seen some posts here, e.g.,
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/c80ad765-b10f-4bca-917e-2959c9eb102a
Dave said that Socket.ReceiveBufferSize can set the internal buffer for UDP. I disagree.
The experiment I did is like this:
27 hosts send a 10KB UDP packet to me within a LAN at (almost) the same time. I have a while-loop to handle each of the packets. For each packet, I create a thread to handle it. I used UdpClient or Socket to receive the packets.
I lost about 50% of the packets. I think it is a burst of UDP sends and I can't handle all of them in time.
This is why I want to increase the buffer size for UDP. say, if I change the buffer size to 1MB, then 27 * 10KB = 270KB data can be accepted in the buffer, right?
I tried changing Socket.ReceiveBufferSize to many, many values, and it just does not have any effect at all.
Can anyone help?
I use the .NET UdpClient often and I have always used Socket.ReceiveBufferSize with good results. Internally it calls Socket.SetSocketOption with the ReceiveBuffer parameter. Here is some quick, simple code you can test with:
public static void Main(string[] args)
{
    IPEndPoint remoteEp = null;
    UdpClient client = new UdpClient(4242);
    client.Client.ReceiveBufferSize = 4096;

    Console.Write("Start sending data...");
    client.Receive(ref remoteEp);
    Console.WriteLine("Good");

    Thread.Sleep(5000);
    Console.WriteLine("Stop sending data!");
    Thread.Sleep(1500);

    int count = 0;
    while (true)
    {
        client.Receive(ref remoteEp);
        Console.WriteLine(string.Format("Count: {0}", ++count));
    }
}
Try adjusting the value passed to ReceiveBufferSize. I tested by sending a constant stream of data during the 5-second sleep, and got 10 packets. I then increased the buffer size x4 and the next time got 38 packets.
I would look at other places in your network where you may be dropping packets, especially since you mention in your other post that you are sending 10KB packets. The 10KB datagram will be fragmented into packets of MTU size when it is sent. If any one fragment in the series is dropped, the entire datagram will be dropped.
The issue with setting the ReceiveBufferSize is that you need to set it directly after creation of the UdpClient object. I had the same issue with my changes not being reflected when getting the value of ReceiveBufferSize.
UdpClient client = new UdpClient();
// no code in between these two lines accessing client
client.Client.ReceiveBufferSize = somevalue;
I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which does socket.Receive(). Each string is delimited by unique characters which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
///...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(str)); // str is the string being sent
If you're dropping packets, you'll see a delay in transmission since TCP has to re-transmit the dropped packets. This can be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets which were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can assume that every packet is delivered in order for TCP, and if for some reason it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm as well in terms of how much data it can aggregate prior to sending because it can't send more than the window size.
The reason you see this is because on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the ack returns effectively instantaneously, so the stack is able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using setsockopt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
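For completeness, if after measuring you still decide to turn Nagle off, the usual switch in .NET is the NoDelay option; this is only a sketch, and as said above you normally should not do it:
// Sketch: turning off Nagle's algorithm on a connected TCP socket.
// Only do this after measuring - coalescing is usually what you want.
socket.NoDelay = true;
// equivalent lower-level form:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);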
Well there's one thing wrong with your code to start with, if you're counting the number of calls to Receive which complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one packet and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, too.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm taking about the application-layer protocol you're inventing) is lacking: you should consider sending the # of objects and/or their length so the receiver knows when he's actually done receiving them.
How long is each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, of which you read large blocks in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely come to your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to come without it being merged with other strings, look into adding in a "protocol" with an identifier to show the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
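As a sketch of that kind of protocol handling (the '|' delimiter, the method name and the use of a StringBuilder are assumptions for illustration), you carry any incomplete string over to the next Receive instead of assuming one read equals one string:
// Sketch: accumulate received text and split on a delimiter, carrying any
// incomplete tail over to the next Receive call. The '|' delimiter is an assumption.
StringBuilder pending = new StringBuilder();

void OnDataReceived(byte[] recvBuffer, int bytesRead)
{
    pending.Append(Encoding.ASCII.GetString(recvBuffer, 0, bytesRead));

    string data = pending.ToString();
    int index;
    while ((index = data.IndexOf('|')) >= 0)
    {
        string message = data.Substring(0, index);
        // process / count 'message' here
        data = data.Substring(index + 1);
    }

    pending.Clear();
    pending.Append(data);   // keep the incomplete tail, if any
}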