Windows socket.Send data isn't received until socket.Close - c#

I'm developing a server application that asynchronously accepts TCP connections (BeginAccept/EndAccept) and data (BeginReceive/EndReceive). The protocol requires that an ACK be sent whenever the EOM character is found, before the next message will be sent. The accept and receive are working, but the sending application is not receiving the ACK (which is sent synchronously).
private void _receiveTransfer(IAsyncResult result)
{
    SocketState state = result.AsyncState as SocketState;
    int bytesReceived = state.Socket.EndReceive(result);
    if (bytesReceived == 0)
    {
        state.Socket.Close();
        return;
    }

    state.Offset += bytesReceived;
    state.Stream.Write(state.Buffer, 0, bytesReceived);

    if (state.Buffer[bytesReceived - 1] == 13)
    {
        // process message
        Messages.IMessage message = null;
        try
        {
            var value = state.Stream.ToArray();
            // do some work
            var completed = true;
            if (completed)
            {
                // send positive ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AA", message.Id);
                var buffer = ASCIIEncoding.ASCII.GetBytes(ackMessage);
                int bytesSent = state.Socket.Send(buffer, 0, buffer.Length, SocketFlags.None);
            }
            else
            {
                // send rejected ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AR", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
        catch (Exception e)
        {
            // log exception
            // send error ACK
            if (message != null)
            {
                var ackMessage = string.Format(ack, DateTime.Now.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AE", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
    }

    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, new AsyncCallback(_receiveTransfer), state);
}
The state.Socket.Send returns the correct number of bytes but the data isn't received until the socket is disposed.
Suggestions are appreciated.

You shouldn't do anything synchronous from async completion routines. Under load you can end up hijacking all IO completion threads from the thread pool and severely hurt performance, up to and including complete IO deadlock. So don't send ACKs synchronously from the async callback.
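For example, the ACK could be handed to BeginSend instead (a rough sketch, reusing the state and ackMessage names from the question):

var ackBytes = Encoding.ASCII.GetBytes(ackMessage);
state.Socket.BeginSend(ackBytes, 0, ackBytes.Length, SocketFlags.None, ar =>
{
    try
    {
        // Complete the send; the returned byte count could be logged if needed.
        ((Socket)ar.AsyncState).EndSend(ar);
    }
    catch (SocketException)
    {
        // Log and tear down the connection on failure.
    }
}, state.Socket);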
Protocols and formats that use preambles are easier to manage than those that use terminators, i.e. write the length of the message into a fixed-size message header rather than detecting a terminator byte (13 in your code). Of course, this applies only if the protocol is under your control to start with.
As for your question, you didn't specify whether the same code you posted is also used on the client side.

How long are you giving it? The network stack can buffer, and that could delay transmission. From MSDN:
"To increase network efficiency, the underlying system may delay transmission until a significant amount of outgoing data is collected. A successful completion of the Send method means that the underlying system has had room to buffer your data for a network send."
You might want to try flushing using the IOControl method.
Edit: Actually, the IOControl flush will kill the buffer. You may want to check out the Two Generals Problem to see if your protocol will have some inherent problems.

Try setting the TCP_NODELAY socket option.

Have you set the NoDelay property on the socket to true? When set to false (the default), data is buffered for up to 200 milliseconds before it's sent. The reason is to reduce network traffic by limiting the number of packets that are sent. Setting NoDelay to true will force the data to be sent sooner.
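For example (a minimal sketch; 'socket' stands for whatever connected Socket you are sending the ACK on):

// Disable Nagle's algorithm so small writes such as the ACK go out immediately.
socket.NoDelay = true;

// Equivalent low-level form:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);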

C# socket thinks it's connected [duplicate]

How can I detect that a client has disconnected from my server?
I have the following code in my AcceptCallBack method
static Socket handler = null;

public static void AcceptCallback(IAsyncResult ar)
{
    // Accept incoming connection
    Socket listener = (Socket)ar.AsyncState;
    handler = listener.EndAccept(ar);
}
I need to find a way to discover as soon as possible that the client has disconnected from the handler Socket.
I've tried:
handler.Available;
handler.Send(new byte[1], 0, SocketFlags.None);
handler.Receive(new byte[1], 0, SocketFlags.None);
The above approaches work when you are connecting to a server and want to detect when the server disconnects but they do not work when you are the server and want to detect client disconnection.
Any help will be appreciated.
Since there are no events available to signal when the socket is disconnected, you will have to poll it at a frequency that is acceptable to you.
Using this extension method, you can have a reliable method to detect if a socket is disconnected.
static class SocketExtensions
{
    public static bool IsConnected(this Socket socket)
    {
        try
        {
            return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
        }
        catch (SocketException)
        {
            return false;
        }
    }
}
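For example, a polling loop built on that extension method might look like this (a sketch; the handler socket and the 5-second interval are illustrative):

while (handler.IsConnected())
{
    Thread.Sleep(5000);
}
// handler is no longer connected at this point; clean up here.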
Someone mentioned the keepAlive capability of TCP sockets.
Here it is nicely described:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I'm using it this way: after the socket is connected, I call this function, which turns keepAlive on. The keepAliveTime parameter specifies the timeout, in milliseconds, with no activity until the first keep-alive packet is sent. The keepAliveInterval parameter specifies the interval, in milliseconds, between successive keep-alive packets sent when no acknowledgement is received.
void SetKeepAlive(bool on, uint keepAliveTime, uint keepAliveInterval)
{
    int size = Marshal.SizeOf(new uint());
    var inOptionValues = new byte[size * 3];

    BitConverter.GetBytes((uint)(on ? 1 : 0)).CopyTo(inOptionValues, 0);
    BitConverter.GetBytes((uint)keepAliveTime).CopyTo(inOptionValues, size);
    BitConverter.GetBytes((uint)keepAliveInterval).CopyTo(inOptionValues, size * 2);

    socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
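Usage is then a single call after connecting, for example (values are illustrative):

// Send the first keep-alive after 10 s of inactivity, then retry every second
// until an ACK arrives or the connection is declared dead.
SetKeepAlive(true, 10000, 1000);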
I'm also using asynchronous reading:
socket.BeginReceive(packet.dataBuffer, 0, 128,
    SocketFlags.None, new AsyncCallback(OnDataReceived), packet);
In the callback, the timeout SocketException is caught; it is raised when the socket doesn't get an ACK for a keep-alive packet.
public void OnDataReceived(IAsyncResult asyn)
{
    try
    {
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = socket.EndReceive(asyn);
    }
    catch (SocketException ex)
    {
        SocketExceptionCaught(ex);
    }
}
This way, I'm able to safely detect disconnection between TCP client and server.
This is simply not possible. There is no physical connection between you and the server (except in the extremely rare case where you are connecting between two computers with a loopback cable).
When the connection is closed gracefully, the other side is notified. But if the connection is dropped some other way (say the user's connection goes down), the server won't know until it times out (or tries to write to the connection and the ACK times out). That's just the way TCP works and you have to live with it.
Therefore, "instantly" is unrealistic. The best you can do is within the timeout period, which depends on the platform the code is running on.
EDIT:
If you are only looking for graceful connections, then why not just send a "DISCONNECT" command to the server from your client?
"That's just the way TCP works and you have to live with it."
Yup, you're right. It's a fact of life I've come to realize. You will see the same behavior exhibited even in professional applications using this protocol (and even others). I've even seen it occur in online games; your buddy says "goodbye", and he appears to be online for another 1-2 minutes until the server "cleans house".
You can use the methods suggested here, or implement a "heartbeat", as also suggested. I chose the former. But if I had chosen the latter, I'd simply have the server "ping" each client every so often with a single byte and see whether we get a timeout or no response. You could even use a background thread to achieve this with precise timing. Maybe even a combination could be implemented in some sort of options list (enum flags or something) if you're really worried about it. But it's not so big a deal to have a little delay in updating the server, as long as you DO update. It's the internet, and no one expects it to be magic! :)
Implementing a heartbeat in your system might be a solution. This is only possible if both client and server are under your control. You can keep a DateTime object that tracks the time when the last bytes were received from the socket, and assume that a socket that has not responded within a certain interval is lost. This will only work if you have a heartbeat/custom keep-alive implemented.
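A minimal sketch of that idea (the names and the 5-second grace period are illustrative, not a complete implementation):

// Updated from the receive callback whenever bytes arrive.
private DateTime _lastReceivedUtc = DateTime.UtcNow;

// Called periodically, e.g. from a timer; treats silence longer than the
// heartbeat interval plus a grace period as a lost connection.
private bool IsAlive(TimeSpan heartbeatInterval)
{
    return DateTime.UtcNow - _lastReceivedUtc < heartbeatInterval + TimeSpan.FromSeconds(5);
}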
I've found another workaround for this that is quite useful!
If you use asynchronous methods for reading data from the network socket (I mean BeginReceive/EndReceive), then whenever a connection is terminated one of these situations appears: either a message arrives with no data (you can see it with Socket.Available: even though BeginReceive is triggered, its value will be zero), or the Socket.Connected value becomes false in that callback (don't try to use EndReceive then).
I'm posting the function I used; I think you can see what I mean better from it:
private void OnRecieve(IAsyncResult parameter)
{
    Socket sock = (Socket)parameter.AsyncState;
    if (!sock.Connected || sock.Available == 0)
    {
        // Connection is terminated, either by force or willingly
        return;
    }

    sock.EndReceive(parameter);
    sock.BeginReceive(..., ..., ..., ..., new AsyncCallback(OnRecieve), sock);
    // To handle further commands sent by client.
    // "..." zones might change in your code.
}
This worked for me. The key is that you need a separate thread to analyze the socket state with polling; doing it in the same thread as the socket fails to detect the disconnection.
// open or receive a server socket - TODO your code here
socket = new Socket(....);

// enable keep-alive so we can detect closure
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// create a thread that checks every 5 seconds whether the socket is still connected.
// TODO add your thread starting code
void MonitorSocketsForClosureWorker()
{
    DateTime nextCheckTime = DateTime.Now.AddSeconds(5);
    while (!exitSystem)
    {
        if (nextCheckTime < DateTime.Now)
        {
            try
            {
                if (socket != null)
                {
                    if (socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0)
                    {
                        // socket not connected, close it if it's still running
                        socket.Close();
                        socket = null;
                    }
                    else
                    {
                        // socket still connected
                    }
                }
            }
            catch
            {
                socket.Close();
            }
            finally
            {
                nextCheckTime = DateTime.Now.AddSeconds(5);
            }
        }
        Thread.Sleep(1000);
    }
}
The example code here
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.connected.aspx
shows how to determine whether the Socket is still connected without sending any data.
If you called Socket.BeginReceive() on the server program and then the client closed the connection "gracefully", your receive callback will be called and EndReceive() will return 0 bytes. These 0 bytes mean that the client "may" have disconnected. You can then use the technique shown in the MSDN example code to determine for sure whether the connection was closed.
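A sketch of that zero-byte check inside a receive callback (names are illustrative):

private void ReceiveCallback(IAsyncResult ar)
{
    var client = (Socket)ar.AsyncState;
    int bytesRead = client.EndReceive(ar);
    if (bytesRead == 0)
    {
        // The client may have closed its end gracefully; confirm with the
        // technique from the MSDN sample if needed, then clean up.
        client.Close();
        return;
    }
    // ... process the bytesRead bytes, then call BeginReceive again ...
}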
Expanding on comments by mbargiel and mycelo on the accepted answer, the following can be used with a non-blocking socket on the server end to inform whether the client has shut down.
This approach does not suffer the race condition that affects the Poll method in the accepted answer.
// Determines whether the remote end has called Shutdown
public bool HasRemoteEndShutDown
{
    get
    {
        try
        {
            int bytesRead = socket.Receive(new byte[1], SocketFlags.Peek);
            if (bytesRead == 0)
                return true;
        }
        catch
        {
            // For a non-blocking socket, a SocketException with
            // code 10035 (WSAEWOULDBLOCK) indicates no data available.
        }
        return false;
    }
}
The approach is based on the fact that the Socket.Receive method returns zero immediately after the remote end shuts down its socket and we've read all of the data from it. From Socket.Receive documentation:
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
If you are in non-blocking mode, and there is no data available in the protocol stack buffer, the Receive method will complete immediately and throw a SocketException.
The second point explains the need for the try-catch.
Use of the SocketFlags.Peek flag leaves any received data untouched for a separate receive mechanism to read.
The above will work with a blocking socket as well, but be aware that the code will block on the Receive call (until data is received or the receive timeout elapses, again resulting in a SocketException).
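For the non-blocking case, the error code can also be tested explicitly rather than swallowed by a blanket catch (a sketch):

socket.Blocking = false;
try
{
    int bytesRead = socket.Receive(new byte[1], SocketFlags.Peek);
    if (bytesRead == 0)
    {
        // Remote end has shut down its side of the connection.
    }
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.WouldBlock)
{
    // No data available right now; the connection is still up.
}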
The above answers can be summarized as follows:
The Socket.Connected property determines the socket state based on the last read or send operation, so it can't detect the current disconnection state unless you manually close the connection or the remote end gracefully closes the socket (shutdown).
So we can use the function below to check the connection state:
bool IsConnected(Socket socket)
{
    try
    {
        if (socket == null) return false;
        return !((socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) || !socket.Connected);
    }
    catch (SocketException)
    {
        return false;
    }

    // the above code is a short expression for:
    /*
    try
    {
        bool state1 = socket.Poll(5000, SelectMode.SelectRead);
        bool state2 = (socket.Available == 0);
        if ((state1 && state2) || !socket.Connected)
            return false;
        else
            return true;
    }
    catch (SocketException)
    {
        return false;
    }
    */
}
Also note that the above check needs to account for the poll response time (the block time).
Also, as the Microsoft documentation says, this poll method "can't detect problems like a broken network cable, or that the remote host was shut down ungracefully".
And, as said above, there is a race condition between Socket.Poll and Socket.Available which may give a false disconnect.
The best way, per the Microsoft documentation, is to attempt to send or receive data to detect these kinds of errors.
The code below is from the Microsoft documentation:
// This is how you can determine whether a socket is still connected.
bool IsConnected(Socket client)
{
    bool blockingState = client.Blocking; // save socket blocking state
    bool isConnected = true;
    try
    {
        byte[] tmp = new byte[1];
        client.Blocking = false;
        client.Send(tmp, 0, 0); // make a nonblocking, zero-byte Send call (dummy)
        //Console.WriteLine("Connected!");
    }
    catch (SocketException e)
    {
        // 10035 == WSAEWOULDBLOCK
        if (e.NativeErrorCode.Equals(10035))
        {
            //Console.WriteLine("Still Connected, but the Send would block");
        }
        else
        {
            //Console.WriteLine("Disconnected: error code {0}!", e.NativeErrorCode);
            isConnected = false;
        }
    }
    finally
    {
        client.Blocking = blockingState;
    }
    //Console.WriteLine("Connected: {0}", client.Connected);
    return isConnected;
}
And here are the comments from the Microsoft docs:
The Socket.Connected property gets the connection state of the Socket as of the last I/O operation. When it returns false, the Socket was either never connected or is no longer connected.
Connected is not thread-safe; it may return true after an operation is aborted when the Socket is disconnected from another thread.
The value of the Connected property reflects the state of the connection as of the most recent operation.
If you need to determine the current state of the connection, make a nonblocking, zero-byte Send call. If the call returns successfully or throws a WSAEWOULDBLOCK error code (10035), then the socket is still connected; otherwise, the socket is no longer connected.
Can't you just use Select?
Use select on a connected socket. If select reports your socket as ready but the subsequent Receive returns 0 bytes, that means the client has disconnected. AFAIK, that is the fastest way to determine whether the client disconnected.
I do not know C# so just ignore if my solution does not fit in C# (C# does provide select though) or if I had misunderstood the context.
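In C#, the equivalent is Socket.Select (or Socket.Poll for a single socket); a rough sketch of the idea (clientSockets is an illustrative list of accepted sockets):

// Select removes sockets that are not readable from the list; a socket that is
// "readable" but has nothing available has been closed by its client.
var readable = new List<Socket>(clientSockets);
Socket.Select(readable, null, null, 1000000); // timeout in microseconds
foreach (var s in readable)
{
    if (s.Available == 0)
    {
        // Client disconnected.
        s.Close();
    }
}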
Using the SetSocketOption method, you can enable KeepAlive, which will let you know whenever a socket gets disconnected:
Socket _connectedSocket = this._sSocketEscucha.EndAccept(asyn);
_connectedSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, 1);
http://msdn.microsoft.com/en-us/library/1011kecd(v=VS.90).aspx
Hope it helps!
Ramiro Rinaldi
I had the same problem; try this:
void client_handler(Socket client) // set 'KeepAlive' to true
{
    while (true)
    {
        try
        {
            if (client.Connected)
            {
            }
            else
            {
                // client disconnected
                break;
            }
        }
        catch (Exception)
        {
            client.Poll(4000, SelectMode.SelectRead); // try to get state
        }
    }
}
This is in VB, but it seems to work well for me. It looks for a 0 byte return like the previous post.
Private Sub RecData(ByVal AR As IAsyncResult)
    Dim Socket As Socket = AR.AsyncState

    If Socket.Connected = False And Socket.Available = False Then
        Debug.Print("Detected Disconnected Socket - " + Socket.RemoteEndPoint.ToString)
        Exit Sub
    End If

    Dim BytesRead As Int32 = Socket.EndReceive(AR)
    If BytesRead = 0 Then
        Debug.Print("Detected Disconnected Socket - Bytes Read = 0 - " + Socket.RemoteEndPoint.ToString)
        UpdateText("Client " + Socket.RemoteEndPoint.ToString + " has disconnected from Server.")
        Socket.Close()
        Exit Sub
    End If

    Dim msg As String = System.Text.ASCIIEncoding.ASCII.GetString(ByteData)
    Erase ByteData
    ReDim ByteData(1024)
    ClientSocket.BeginReceive(ByteData, 0, ByteData.Length, SocketFlags.None, New AsyncCallback(AddressOf RecData), ClientSocket)
    UpdateText(msg)
End Sub
You can also check the Connected property of the socket if you were to poll.

TCP Socket/NetworkStream Unexpectedly Failing

This specifically is a question about what is going on in the background communications of NetworkStream consuming raw data over TCP. The TcpClient connection is communicating directly with a hardware device on the network. Every so often, at random times, the NetworkStream appears to hiccup; it can best be described while observing in debug mode. I have a read timeout set on the stream, and when everything is working as expected, stepping over Stream.Read will sit there and wait the length of the timeout period for incoming data. When it is not, only a small portion of the data comes through, the TcpClient still shows as open and connected, but Stream.Read no longer waits for the timeout period for incoming data. It immediately steps over to the next line, no data is received obviously, and no data will ever come through until everything is disposed of and a new connection is reestablished.
The question is, in this specific scenario, what state is the NetworkStream in at this point, what causes it, and why is the TcpClient connection still in a seemingly open and valid state? What is going on in the background? No errors are thrown and captured; is the stream silently failing in the background? What is the difference between the states of TcpClient and NetworkStream?
private TcpClient Client;
private NetworkStream Stream;

Client = new TcpClient();
var result = Client.BeginConnect(IPAddress, Port, null, null);
var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2));
Client.EndConnect(result);
Stream = Client.GetStream();

try
{
    while (Client.Connected)
    {
        bool flag = true;
        StringBuilder sb = new StringBuilder();
        while (!IsCompleteRecord(sb.ToString()) && Client.Connected)
        {
            string response = "";
            byte[] data = new byte[512];
            Stream.ReadTimeout = 60000;
            try
            {
                int recv = Stream.Read(data, 0, data.Length);
                response = Encoding.ASCII.GetString(data, 0, recv);
            }
            catch (Exception ex)
            {
            }
            sb.Append(response);
        }
        string rec = sb.ToString();
        // send off data
        Stream.Flush();
    }
}
catch (Exception ex)
{
}
You are not properly testing for the peer closing its end of the connection.
From this link : https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.read%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and return zero bytes.
You are simply doing a Stream.Read and not handling the fact that you might have received 0 bytes, which means that the peer closed its end of the connection. This is called a half close. It will not send to you anymore. At that point you should also close your end of the socket.
There is an example available here :
https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
if (bytesRead > 0)
{
    // There might be more data, so store the data received so far.
    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));
    // Get the rest of the data.
    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
        new AsyncCallback(ReceiveCallback), state);
}
else
{
    // All the data has arrived; put it in response.
    if (state.sb.Length > 1)
    {
        response = state.sb.ToString();
    }
    // Signal that all bytes have been received.
    receiveDone.Set(); // note that this event is set here
}
and in the main code block it is waiting for receiveDone:
receiveDone.WaitOne();
// Write the response to the console.
Console.WriteLine("Response received : {0}", response);
// Release the socket.
client.Shutdown(SocketShutdown.Both);
client.Close();
Conclusion : check for reception of 0 bytes and close your end of the socket because that is what the other end has done.
A timeout is handled with an exception. You are not really doing anything with a timeout because your catch block is empty. You would just continue trying to receive.
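A sketch of how that catch block could distinguish a read timeout from a dead connection (based on the question's loop; not a drop-in fix):

try
{
    int recv = Stream.Read(data, 0, data.Length);
    if (recv == 0)
    {
        // The peer half-closed the connection: stop reading and dispose everything.
        break;
    }
    response = Encoding.ASCII.GetString(data, 0, recv);
}
catch (IOException ex)
{
    // NetworkStream.Read reports a read timeout as an IOException wrapping a
    // SocketException with SocketError.TimedOut; decide here whether to retry
    // or tear the connection down instead of swallowing it silently.
    var socketError = ex.InnerException as SocketException;
    if (socketError == null || socketError.SocketErrorCode != SocketError.TimedOut) throw;
}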
@Philip has already answered the question.
I just want to add that I recommend the use of SysInternals TcpView, which is basically a GUI for netstat and lets you easily check the status of all network connections on your computer.
About the detection of the connection state in your program, see here in SO.

Socket tcp c# how to clear input buffer?

I'm writing an application for Windows Phone and I need to communicate with a server and transmit data. The SERVER is written in C++ and I cannot modify it. The CLIENT is what I have to write. The server is designed such that the client connects to it and transmits data. The connection remains open for the whole transmission. With my C# code I am able to receive data from the server, but after the first receive, the data that I read in the buffer is always the same. So I need a way to flush the input buffer so I can receive the new data (data is sent continuously). I'm using the class defined here:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202858%28v=vs.105%29.aspx
thanks a lot !!
I used this code for receiving in SocketClient.cs:
public string Receive()
{
    string response = "Operation Timeout";

    // We are receiving over an established socket connection
    if (_socket != null)
    {
        // Create SocketAsyncEventArgs context object
        SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();
        socketEventArg.RemoteEndPoint = _socket.RemoteEndPoint;

        // Setup the buffer to receive the data
        socketEventArg.SetBuffer(new Byte[MAX_BUFFER_SIZE], 0, MAX_BUFFER_SIZE);

        // Inline event handler for the Completed event.
        // Note: This event handler was implemented inline in order to make
        // this method self-contained.
        socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(delegate(object s, SocketAsyncEventArgs e)
        {
            if (e.SocketError == SocketError.Success)
            {
                // *********************************************
                // THIS part of the code was added to receive
                // a vector of 3 doubles
                Double[] OdomD = new Double[3];
                for (int i = 0; i < 3; i++)
                {
                    OdomD[i] = BitConverter.ToDouble(e.Buffer, 8 * i);
                }
                // *********************************************
            }
            else
            {
                response = e.SocketError.ToString();
            }

            _clientDone.Set();
        });

        // Sets the state of the event to nonsignaled, causing threads to block
        _clientDone.Reset();

        // Make an asynchronous Receive request over the socket
        _socket.ReceiveAsync(socketEventArg);

        // Block the UI thread for a maximum of TIMEOUT_MILLISECONDS milliseconds.
        // If no response comes back within this time then proceed
        _clientDone.WaitOne(TIMEOUT_MILLISECONDS);
    }
    else
    {
        response = "Socket is not initialized";
    }

    return response;
}
The Connect() method is exactly the same reported in the link above. So when the application start, the Connect() method is called as follow:
SocketClient client = new SocketClient();
// Attempt to connect to server for receiving data
Log(String.Format("Connecting to server '{0}' over port {1} (data) ...", txtRemoteHost.Text, 4444), true);
result = client.Connect(txtRemoteHost.Text, 4444);
Log(result, false);
That is done just once at the beginning; then I need to receive this array of 3 doubles, which is updated every second. So I use:
Log("Requesting Receive ...", true);
result = client.Receive();
Log(result, false);
The problem is that even if I debug the code and stop the execution inside Receive(), I always read the same value, that is, the first value sent by the server. What I'm expecting is that every time I call client.Receive(), I get the new value, but this is not happening.
I had a similar problem when writing the same client in the Matlab environment. I solved it by calling the function flushinput(t) before reading the input buffer. That way I was always able to read the latest data sent by the server. I'm looking for a function similar to that one.
The size of the input buffer is fixed to match the data that I'm expecting to receive, which in this case is 24 bytes (3 * sizeof(double)).
Thanks a lot for your time!!
oleksii is right, you should call client.Receive() in a loop. You can choose to start a thread that covers the receive section of your code. Also note that client.Receive() will keep trying to receive from the buffer, and it will get stuck if there is no data available.
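For example, something along these lines (a sketch; keepRunning is an assumed flag and the parsing is left out):

// Poll the server on a background thread so each call to Receive() picks up
// the latest values instead of the first response.
var receiveThread = new Thread(() =>
{
    while (keepRunning)
    {
        string latest = client.Receive();
        // ... hand 'latest' (or the three decoded doubles) to the UI thread ...
    }
});
receiveThread.IsBackground = true;
receiveThread.Start();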
The main question was how to clear the input buffer, or am I wrong?
Nevertheless, since you don't have a fixed buffer (as seen from your posted code) and receive via SocketAsyncEventArgs, you could clear it with:
Array.Clear(e.Buffer, 0, e.Buffer.Length);

Sockets starts to slow down and not respond

I am developing a server (in C#) and a client (in Flash, ActionScript 3.0) application. The server sends data (around 90 bytes per message) to clients continuously, and clients behave according to the data they receive (the data is JSON formatted).
For a while everything works as expected, but after some time passes, clients start to receive messages with lag: they keep waiting for some time and then act on the last message (some messages are lost). After some more time passes, clients start to wait and then process all the messages at the same time. I could not figure out what is causing this. My network conditions are stable.
Here is part of my C# code for sending a message:
public void Send(byte[] buffer)
{
    if (ClientSocket != null && ClientSocket.Connected)
    {
        ClientSocket.BeginSend(buffer, 0, buffer.Length, 0, WriteCallback, ClientSocket);
    }
}

private void WriteCallback(IAsyncResult result)
{
    //
}
And here is part of my client code for receiving messages (ActionScript):
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):void {
    trace(socket.bytesAvailable);
    if (socket.bytesAvailable > 0) {
        try
        {
            var serverResponse:String = socket.readUTFBytes(socket.bytesAvailable);
            ....
I hope I have explained my problem clearly. How should I optimize my code? What could be causing the lag? Thanks.
You really need to give more detail as to how you're setting up the socket (is it TCP or UDP?)
Assuming it's a TCP socket, then it would appear that your client relies on each receive call returning the same number of bytes that were sent by the server's Send() call. This is however not the case, and could well be the cause of your issues if a message is only being partially received on the client, or multiple messages are received at once.
For example, the server may send a 90 byte message in a single call, but your client may receive it in one 90-byte receive, or two 45-byte chunks, or even 90 x 1-byte chunks, or anything in between. Multiple messages sent by the server may also be partially combined when received by the client. E.g. two 90-byte messages may be received in a single 180-byte chunk, or a 150-byte and a 30-byte chunk, etc. etc.
You need therefore to provide some kind of framing on your messages so that when the stream of data is received by the client, it can be reliably reconstructed into individual messages.
The most basic framing mechanism would be to prefix each message sent with a fixed-length field indicating the message size. You may be able to get away with a single byte if you can guarantee that your messages will never be > 255 bytes long, which will simplify the receiving code.
On the client side, you first need to receive the length prefix, and then read up to that many bytes off the socket to construct the message data. If you receive fewer than the required number of bytes, your receiving code must wait for more data (appending it to the partially-received message when it eventually arrives) until it has a complete message of the expected length.
Once the full message is received it can be processed as you are currently.
Unfortunately I don't know ActionScript, so can't give you an example of the client-side code, but here's how you might write the server and client framing in C#:
Server side:
public void SendMessage(string message)
{
    var data = Encoding.UTF8.GetBytes(message);
    if (data.Length > byte.MaxValue) throw new Exception("Data exceeds maximum size");

    var bufferList = new[]
    {
        new ArraySegment<byte>(new[] { (byte)data.Length }),
        new ArraySegment<byte>(data)
    };

    ClientSocket.Send(bufferList);
}
Client side:
public string ReadMessage()
{
    var header = new byte[1];

    // Read the header indicating the data length
    var bytesRead = ServerSocket.Receive(header);
    if (bytesRead > 0)
    {
        var dataLength = header[0];

        // If the message size is zero, return an empty string
        if (dataLength == 0) return string.Empty;

        var buffer = new byte[dataLength];
        var position = 0;
        while ((bytesRead = ServerSocket.Receive(buffer, position, buffer.Length - position, SocketFlags.None)) > 0)
        {
            // Advance the position by the number of bytes read
            position += bytesRead;

            // If there's still more data to read before we have a full message, call Receive again
            if (position < buffer.Length) continue;

            // We have a complete message - return it.
            return Encoding.UTF8.GetString(buffer);
        }
    }

    // If Receive returns 0, the socket has been closed, so return null to indicate this.
    return null;
}

C# Socket BeginReceive / EndReceive capturing multiple messages

Problem:
When I do something like this:
for (int i = 0; i < 100; i++)
{
    SendMessage(sometSocket, i.ToString());
    Thread.Sleep(250); // works with this, doesn't work without
}
With or without the sleep, the server logs sending of separate messages. However, without the sleep the client ends up receiving multiple messages in a single OnDataReceived call, so the client will receive messages like:
0,
1,
2,
34,
5,
678,
9 ....
Server sending Code:
private void SendMessage(Socket socket, string message)
{
    logger.Info("SendMessage: Preparing to send message:" + message);
    byte[] byteData = Encoding.ASCII.GetBytes(message);

    if (socket == null) return;
    if (!socket.Connected) return;

    logger.Info("SendMessage: Sending message to non " +
        "null and connected socket with ip:" + socket.RemoteEndPoint);

    // Record this message so unit testing can verify this works.
    socket.Send(byteData);
}
Client receiving code:
private void OnDataReceived(IAsyncResult asyn)
{
    logger.Info("OnDataReceived: Data received.");
    try
    {
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = theSockId.Socket.EndReceive(asyn);

        char[] chars = new char[iRx + 1];
        System.Text.Decoder d = System.Text.Encoding.UTF8.GetDecoder();
        int charLen = d.GetChars(theSockId.DataBuffer, 0, iRx, chars, 0);
        System.String szData = new System.String(chars);

        logger.Info("OnDataReceived: Received message:" + szData);
        InvokeMessageReceived(new SocketMessageEventArgs(szData));

        WaitForData(); // .....
Socket Packet:
public class SocketPacket
{
    private Socket _socket;
    private readonly int _clientNumber;
    private byte[] _dataBuffer = new byte[1024]; ....
My hunch is it's something to do with the buffer size, or that between OnDataReceived and EndReceive we're receiving multiple messages.
Update: It turns out that when I put a Thread.Sleep at the start of OnDataReceived it gets every message. Is the only solution to this wrapping each message with a length prefix and a string to signify the end?
This is expected behaviour. A TCP socket represents a linear stream of bytes, not a sequence of well-delimited “packets”. You must not assume that the data you receive is chunked the same way it was when it was sent.
Notice that this has two consequences:
Two messages may get merged into a single callback call. (You noticed this one.)
A single message may get split up (at any point) into two separate callback calls.
Your code must be written to handle both of these cases, otherwise it has a bug.
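One common way to handle both cases is to append every received chunk to a buffer and only extract complete messages from it. A sketch for length-prefixed framing ([1-byte length][payload]; names are illustrative, not a drop-in fix for the code above):

private readonly MemoryStream _pending = new MemoryStream();

private List<string> ExtractMessages(byte[] chunk, int count)
{
    _pending.Write(chunk, 0, count);
    var data = _pending.ToArray();
    var messages = new List<string>();
    int offset = 0;

    // Pull out every complete [length][payload] frame currently buffered.
    while (offset < data.Length && offset + 1 + data[offset] <= data.Length)
    {
        messages.Add(Encoding.ASCII.GetString(data, offset + 1, data[offset]));
        offset += 1 + data[offset];
    }

    // Keep any partial frame for the next receive callback.
    _pending.SetLength(0);
    _pending.Write(data, offset, data.Length - offset);
    return messages;
}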
There is no need to abandon Tcp because it is stream oriented.
You can fix the problems that you are having by implementing message framing.
See
http://blogs.msdn.com/malarch/archive/2006/06/26/647993.aspx
also:
http://nitoprograms.blogspot.com/2009/04/message-framing.html
TCP sockets don't always send data right away -- in order to minimize network traffic, TCP/IP implementations will often buffer the data for a bit and send it when it sees there's a lull (or when the buffer's full).
If you want to ensure that the messages are processed one by one, you'll need to either set socket.NoDelay = true (which might not help much, since data received may still be bunched up together in the receive buffer), implement some protocol to separate messages in the stream (like prefixing each message with its length, or perhaps using CR/LF to separate them), or use a message-oriented protocol like SCTP (which might not be supported without additional software) or UDP (if you can deal with losing messages).
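If you go the CR/LF route instead, the receiving side can accumulate text and split on the delimiter (a sketch; assumes the messages themselves never contain '\n'):

private readonly StringBuilder _incoming = new StringBuilder();

private string[] SplitOnNewline(string chunk)
{
    _incoming.Append(chunk);
    var text = _incoming.ToString();
    int last = text.LastIndexOf('\n');
    if (last < 0) return new string[0]; // no complete message yet

    // Keep the trailing partial message, return the complete ones.
    _incoming.Clear();
    _incoming.Append(text.Substring(last + 1));
    return text.Substring(0, last).Split('\n');
}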
