I have several points regarding .NET socket implementation so I will state them sequentially:
My understanding is that a Socket instance has an internal buffer of configurable size in its implementation (effectively a queue of bytes), and that this is separate from the application buffer you declare and define in your own code.
In synchronous mode, using socket type Stream and protocol type Tcp, calling Receive (which blocks) with an application byte buffer dequeues a chunk from the socket buffer, up to the size of the application buffer you declared, and copies that chunk into the buffer you passed to Receive.
If the above is true, then what happens when the application buffer is larger than the number of bytes queued in the socket buffer?
Also, if point 2 is correct, then the Send method of a socket sends the data to the connected remote host's socket buffer, not to its application buffer.
Finally, since the Socket method Accept is non-blocking, a thread is created for it in the underlying implementation, and it has a queue of its own that is dequeued when Accept method is called.
I ask all of this to check if my understanding so far is right, or if it's mostly wrong and need correcting.
First of all, .NET's Socket implementation is mostly just a managed wrapper around Winsock.
My understanding is that a Socket instance has an internal buffer of configurable size in its implementation (effectively a queue of bytes), and that this is separate from the application buffer you declare and define in your own code.
Ok so far.
In synchronous mode using socket type: ... when using the method Receive
When you call Receive, data will be copied into the supplied buffer and the number of bytes written will be returned. This may well be less than the size of the buffer. If your buffer is not large enough to accommodate all of the data queued by the TCP stack, only as many bytes as fit in your buffer will be copied; the remaining bytes will be returned on your next call to Receive.
Sockets treat all data sent (or received) as a continuous stream of bytes without breaks. However, data sent across the network is subject to networks or hosts splitting it to meet constraints such as a maximum packet size. Your code should assume data may arrive in arbitrarily sized chunks. Incidentally, this kind of fragmentation is more likely to appear in a production environment than in a development/testing one.
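Because a single read may return fewer bytes than requested, reading a known number of bytes takes a loop. A minimal sketch over Stream (which NetworkStream derives from); the helper name ReadFully is my own, not part of the answer:

```csharp
using System;
using System.IO;

static class SocketReading
{
    // Loops until the buffer is full, because a single Read (like
    // Socket.Receive) may return fewer bytes than were requested.
    public static void ReadFully(Stream stream, byte[] buffer)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = stream.Read(buffer, offset, buffer.Length - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed early.");
            offset += read;
        }
    }
}
```

With a real socket you would pass the NetworkStream wrapping it; the loop is what matters, not the transport.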
socket sends the the data to the end point connected host Socket buffer and not application buffer
Send will return when the data is queued by the TCP stack. If the TCP window is full and the remote endpoint is not reading off the socket (say because it is waiting for its own send to complete) this could potentially be a long time.
Finally, since the Socket method Accept is non-blocking
Per the documentation, Accept will either block until a connection is received or (in non-blocking mode) either synchronously accept the first available connection or throw if no connection is available.
this is still relevant and would still be recommended reading for anyone about to start writing network code.
Related
I use on a connected socket on my server something like this to send data to the client:
IAsyncResult asyncRes = connectionSocket.BeginSend(data, 0, length, SocketFlags.None,
out error, new AsyncCallback(SendDataDone), signalWhenDataSent);
As it seems, when there is a slow internet connection between the server and the client, I receive an exception with a description like this: NoBufferSpaceAvailable.
What exactly does this error mean? That the internal OS buffer for connectionSocket is full? What are the means to make it work? For context, this appears in an HTTP proxy server. It might indicate, I suppose, that the rate at which data arrives from the origin server is higher than the rate at which my server can deliver it to the proxy client. How would you deal with it?
I am using tcp.
The way to fix this problem is to correlate the rate at which you read from one socket with the rate at which you write to the other, because if you do no buffering you cannot write to a socket faster than the client connected at that end can read.
When one uses synchronous sockets the problem does not appear, because they block as long as the operation is still pending, but this is not the case with async calls.
Exactly, most likely the kernel socket buffer holding outgoing data is full. You're sending too "fast" for the client. You can try to increase the send buffer size, but that does not guarantee you won't bump into this problem again.
The simple answer is that you should be prepared for a send operation to fail, and retry it later. It's not ideal to maintain an ever-growing buffer inside your application either, but the origin server should also slow down if you stop receiving (depending on the TCP window size and your receive buffer size).
Will NetworkStream.Write block only until it places the data to be sent into the TCP send buffer, or will it block until the data is actually ACK'd by the receiving host?
Note: The socket is configured for blocking I/O.
Edit: Whoops, there's no such thing as TcpClient.Write of course! We all understood we were talking about TcpClient.GetStream().Write, which is actually NetworkStream.Write!
Unless .net is using something other than winsock, then according to the winsock reference:
The successful completion of a send function does not indicate that the data was successfully delivered and received to the recipient. This function only indicates the data was successfully sent.
If no buffer space is available within the transport system to hold the data to be transmitted, send will block unless the socket has been placed in nonblocking mode. On nonblocking stream oriented sockets, the number of bytes written can be between 1 and the requested length, depending on buffer availability on both the client and server computers.
Assuming that Write is calling send underneath, a strict interpretation of the Winsock documentation indicates that there is no guarantee that the data made it to the other end of the pipe when the call returns.
Here is the link to the winsock docs I am quoting from:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms741416(v=VS.85).aspx
I disagree with both answers [that state it blocks]. Writing to a TCP/IP socket does not block unless the underlying buffer is already full of unacknowledged data. Generally it doesn't block; the data just gets handed off to the TCP implementation. But of course now I have to go track down some references to back this up :)
From SO
TcpClient.Write will block until the packet buffer has been flushed to the network and the appropriate ACK(s) have been received. You'll notice that a dropped connection will usually end up throwing an exception on the Write operation, since it waits for the ACK but doesn't get one within the defined timeout period.
I intend to receive packets of data over a socket, but since they are sent from the sender with a high frequency, a number of them get packed into a single byte array. SocketAsyncEventArgs.Buffer then holds multiple packets, even though they were sent separately (verified using Wireshark).
I have already tried to queue the incoming packets and asynchronously work them off, but still I get the same result.
What could be the reason for this behaviour?
This is how TCP works. A TCP connection is a bi-directional stream of bytes, and you have to treat it as such. A single send from one end might result in multiple reads on the receiving side and, vice versa, multiple sends might end up in a single read; application message boundaries are not preserved by the transport.
You have to buffer the input until you know you have a complete application message. Common approaches are:
fixed length messages,
pre-pending length in front of the message,
delimiting the stream with special "end-of-message" separator.
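The delimiter approach can be sketched as a small helper that scans an accumulation buffer for a separator byte. This is illustrative code under my own naming, not code from the answer:

```csharp
using System;
using System.Collections.Generic;

static class Framing
{
    // Extracts complete messages from an accumulation buffer, using a
    // single delimiter byte as the end-of-message marker. Bytes after
    // the last delimiter stay in the buffer until more data arrives.
    public static List<byte[]> ExtractDelimited(List<byte> buffer, byte delimiter)
    {
        var messages = new List<byte[]>();
        int start = 0;
        for (int i = 0; i < buffer.Count; i++)
        {
            if (buffer[i] == delimiter)
            {
                messages.Add(buffer.GetRange(start, i - start).ToArray());
                start = i + 1;
            }
        }
        buffer.RemoveRange(0, start);
        return messages;
    }
}
```

The caller appends whatever Receive returned to the buffer, then drains any complete messages; a partial message simply waits for the next read.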
I might be mistaken, but isn't this Nagle's algorithm kicking in?
Your sockets have a flag (Socket.NoDelay, i.e. TCP_NODELAY) that disables this.
I'm having an issue using sockets in C#. Here's an example. Say I send the number 1, then I immediately send the number 2. The problem I sometimes run into is that the client that is supposed to receive them will receive a packet containing '12'. I was wondering if there is a built-in way to distinguish packets without using characters or something similar to separate the data.
To sum it up again, I said two packets. One with the number '1', one with the number '2'.
Server receives 1 packet with the data '12'.
I don't want to separate the packets with characters, like this ':1::2:' or anything like that, as I don't always have control over the format of the incoming data.
Any ideas?
For example, if I do this
client.Send(new byte[1] { (byte)'1' }, 1, SocketFlags.None);
client.Send(new byte[1] { (byte)'2' }, 1, SocketFlags.None);
then on the server end
byte[] data = new byte[1024];
client.Receive(data);
data sometimes comes back with "12" even though I do two separate sends.
TCP/IP works on a stream abstraction, not a packet abstraction.
When you call Send, you are not sending a packet. You are only appending bytes to a stream.
When you call Receive, you are not receiving a packet. You are only receiving bytes from a stream.
So, you must define where a "message" begins and ends. There are only three possible solutions:
Every message has a known size (e.g., all messages are 42 bytes). In this case, you just keep calling Receive until you get that many bytes.
Use delimiters, which you're already aware of.
Use length prefixing (e.g., prefix each message with a 4-byte length-of-message). In this case, you keep calling Receive until the length prefix has arrived, decode it to get the actual length of the message, and then keep calling Receive until you have the message itself.
You must do the message framing yourself. TCP/IP cannot do it for you.
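A sketch of the length-prefixing option, assuming a 4-byte prefix in the platform's native byte order (BitConverter is little-endian on the usual platforms); the helper names are mine:

```csharp
using System;
using System.Collections.Generic;

static class LengthPrefix
{
    // Prefixes a message with its length as a 4-byte int
    // (BitConverter byte order, little-endian on common platforms).
    public static byte[] Frame(byte[] message)
    {
        var framed = new byte[4 + message.Length];
        BitConverter.GetBytes(message.Length).CopyTo(framed, 0);
        message.CopyTo(framed, 4);
        return framed;
    }

    // Pulls as many complete messages as possible out of the buffer;
    // a partial prefix or partial body stays queued for the next read.
    public static List<byte[]> Extract(List<byte> buffer)
    {
        var messages = new List<byte[]>();
        while (buffer.Count >= 4)
        {
            int length = BitConverter.ToInt32(buffer.GetRange(0, 4).ToArray(), 0);
            if (buffer.Count < 4 + length) break;
            messages.Add(buffer.GetRange(4, length).ToArray());
            buffer.RemoveRange(0, 4 + length);
        }
        return messages;
    }
}
```

The receive side appends each chunk Receive returns to the buffer and calls Extract; it does not matter whether the framed bytes arrive in one read or ten.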
TCP is a streaming protocol, so you will always receive whatever data has arrived since the last read, up to the streaming window size on the recipient's end. This buffer can be filled with data received from multiple packets of any given size sent by the sender.
TCP Receive Window Size and Scaling # MSDN
Although you observed one unified blob of data containing 2 bytes on the receiving end in your example, it is also possible to receive the bytes one at a time (as sent by your sender) depending on network conditions, as well as many possible combinations of 0-, 1-, and 2-byte reads if you are doing non-blocking reads. When debugging on a typical uncongested LAN or a loopback setup, you will almost never see this if there is no delay on the sending side. There are ways, at lower levels of the network stack, to detect per-packet transmission, but they are not used in typical TCP programming and are out of application scope.
If you switch to UDP, then every packet will be received as it was sent, which would match your expectations. This may suit you better, but keep in mind that UDP has no delivery promises and network routing may cause packets to be delivered out of order.
You should look into delimiting your data or finding some other method to detect when you have reached the end of a unit of data as defined by your application, and stick with TCP.
It's hard to answer your question without knowing the context. By default, TCP/IP handles packet management for you automatically (although you receive the data in a stream-based fashion). However, if you have a very specific (bad?) implementation that sends multiple streams over one socket at the same time, the lower TCP/IP levels cannot tell them apart, which makes it very hard to identify the different streams on the client. The only workaround would be to make the two streams totally distinguishable (e.g., stream 1 only sends bytes below 127 and stream 2 only sends bytes of 127 or above). Yet again, this is terrible behaviour.
You must put into your TCP messages delimiters or use byte counts to tell where one message starts and another begins.
Your code has another serious error. TCP sockets may not give you all the data in a single call to Receive. You must loop on receiving of data until your application-specific records in the data stream indicate the entire message has been received. Your call to client.Receive(data) returns the count of bytes received. You should capture that number.
Likewise, when you send data, all of your data might not be sent in a single call. You must loop on sending of data until the count of bytes sent is equal to what you intended to send. The call to client.Send returns the actual count of bytes sent, which may not be all you tried to send!
The most common error I see people make with sockets is that they don't loop on the Send and Receive. If you understand why you need to do the looping, then you know why you need to have a delimiter or a byte count.
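The send loop described above might look like the sketch below. The send delegate stands in for Socket.Send (which returns the number of bytes actually accepted) so the loop can be shown without a live connection; the helper name is my own:

```csharp
using System;

static class SendUtil
{
    // Loops until every byte is handed off; "send" stands in for
    // Socket.Send, which may accept fewer bytes than requested.
    public static void SendAll(Func<byte[], int, int, int> send, byte[] data)
    {
        int offset = 0;
        while (offset < data.Length)
        {
            int sent = send(data, offset, data.Length - offset);
            if (sent <= 0)
                throw new InvalidOperationException("Send made no progress.");
            offset += sent;
        }
    }
}
```

With a real socket this would be called as `SendUtil.SendAll((b, o, c) => socket.Send(b, o, c, SocketFlags.None), data);`.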
I'm writing a server application for an iPhone app I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself , or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first, and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as i'm not loading in the entire packet into a buffer, then re-sending it down the wire to the recipient.
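The 8-byte prefix described above (two ints: header length, then payload length) could be parsed as below. The field order and byte order are assumptions for illustration, since the question does not pin them down:

```csharp
using System;

static class RelayHeader
{
    // Parses the question's framing prefix: two 4-byte ints giving
    // the header length and the payload length. Field order and
    // endianness (BitConverter's native order) are assumed.
    public static (int headerLength, int payloadLength) Parse(byte[] prefix)
    {
        if (prefix.Length < 8)
            throw new ArgumentException("Need at least 8 bytes.");
        int headerLength = BitConverter.ToInt32(prefix, 0);
        int payloadLength = BitConverter.ToInt32(prefix, 4);
        return (headerLength, payloadLength);
    }
}
```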
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so have to keep reading until I receive all the bytes. In the case of relaying onto its final destination, this means that i'm calling BeginSend() for each chunk of bytes I receive from the sender, and forwarding that chunk onto the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, i'm just wondering is it as simple as just locking the socket object?! Would this be the correct option, or could this cause other issues (e.g.: issues with receiving data through the locked socket that's being sent BACK from the remote device?)
Thanks in advance!
I was facing a similar scenario a while back, I don't have the complete solution anymore, but here's pretty much what I did :
I didn't use sync sockets; I decided to explore the async sockets in C# instead. Fun ride.
I don't allow multiple threads to share a single resource unless I really have to
My "packets" were containing information about size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's a start of a message, I used 0xAA
My packets' last 2 bytes were the result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I was extracting "complete" messages once the size was ok, and the checksum matched
The only "locking" I needed to do was on the temp buffer, so I could safely analyze its contents between write/read operations
Hope that helps a bit
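The CRC-CCITT (ushort) checksum mentioned above has several parameter variants; here is a sketch of the common CRC-16/CCITT-FALSE variant, which may or may not match the exact parameters that answer used:

```csharp
using System;

static class Crc
{
    // CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF,
    // no input/output reflection, no final XOR.
    public static ushort Ccitt(byte[] data)
    {
        ushort crc = 0xFFFF;
        foreach (byte b in data)
        {
            crc ^= (ushort)(b << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) != 0
                    ? (ushort)((crc << 1) ^ 0x1021)
                    : (ushort)(crc << 1);
        }
        return crc;
    }
}
```

The receiver recomputes the checksum over the candidate message and accepts it only if the trailing two bytes match.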
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket-pair is established. It sends the data off to the server socket. The socket-pair is unique - no other streams that might be happening between the two phones should interrupt this, (will slow it up, of course).
I don't see how async/sync sockets, assuming implemented correctly, should affect this, either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea; protocols depending on sending just a length as the first element always seem to screw up eventually, resulting in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem, (I completely misunderstood the topology of the OP network - I thought each phone was running a TCP server as well as client/s, but there is just one server on PC/whatever a-la-chatrooms). I don't see why you could not lock the socket class with a mutex, so serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk starting with 55 instead of AA) that contains a message ID (a GUID?), which the phone uses to establish a second socket connection to the server; it passes up the ID and is then sent the second message on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up.
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling send() when they need to send something.
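One way such a send queue could be serialized is with a SemaphoreSlim gating the writes, so only one send is in flight at a time. This is a sketch of the idea under assumed names, not the poster's actual code:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class SerializedSender
{
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    private readonly Stream _stream; // e.g. a NetworkStream in real use

    public SerializedSender(Stream stream) { _stream = stream; }

    // Only one WriteAsync is in flight at a time, so messages from
    // different callers cannot interleave on the wire.
    public async Task SendAsync(byte[] message)
    {
        await _gate.WaitAsync();
        try { await _stream.WriteAsync(message, 0, message.Length); }
        finally { _gate.Release(); }
    }
}
```

Callers just await SendAsync; the gate guarantees each message is written contiguously even when many parts of the app send concurrently.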