Socket synchronous receive - C#

I have a weird problem occurring at times.
I have a synchronous socket that I connect to and send data through just fine.
do
{
    try
    {
        // blocking
        int bytesRead = sender.Receive(bytesReceived);
        // process the received bytes
    }
    catch (SocketException soex)
    {
        throw new Exception(String.Format("Socket exception..." + Environment.NewLine + "[Code: {0}] {1}", soex.ErrorCode, soex.Message));
    }
}
while (sender.Poll(1500, SelectMode.SelectRead) && sender.Available > 0);
I don't want to stay in Receive forever since it blocks, which is why I use Poll with a timeout: it gives the packets time to arrive on the wire, and together with the Available check it lets me break out of the loop once there really is no more data.
The problem I am getting is that the condition of the do-while evaluates to false and the loop exits, even though the socket still has bytes available (sender.Available > 0 right after the loop).
I put a Debug.WriteLine after the while to print how many bytes are available, and I get a non-zero number.
Any idea? I'm puzzled.

Maybe sender.Poll returns false even though there is still data to be read. You could change your while condition to sender.Available > 0 and call sender.Poll inside the loop instead.
EDIT:
Maybe this is enough (pseudo-code):
int length;
do {
    length = socket.Read(buffer);
    // process "buffer"
} while (length > 0);
You may need to set a socket timeout to prevent the Read call from blocking too long.
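Building on that, here is a minimal sketch of a receive loop that relies on ReceiveTimeout instead of Poll, reusing the question's sender socket (this is not the original poster's code, and the buffer size is an arbitrary assumption):
sender.ReceiveTimeout = 1500;   // Receive throws a SocketException (TimedOut) if nothing arrives in time
var buffer = new byte[4096];
int bytesRead;
try
{
    do
    {
        bytesRead = sender.Receive(buffer);   // blocks for at most ReceiveTimeout
        // process buffer[0 .. bytesRead)
    } while (bytesRead > 0);                  // 0 means the peer closed the connection gracefully
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode != SocketError.TimedOut) throw;
    // no data arrived within the timeout; retry or give up as appropriate
}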

Related

BeginSend taking too long till callback

I'm using the asynchronous method BeginSend and I need some sort of a timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded. This seems very strange to me and points towards a different cause.
public void Send(String data)
{
    if (client.Connected)
    {
        // Convert the string data to byte data using ASCII encoding.
        byte[] byteData = Encoding.ASCII.GetBytes(data);
        client.NoDelay = true;
        // Begin sending the data to the remote device.
        IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback(SendCallback), client);
        if (!res.IsCompleted)
        {
            sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
        }
    }
    else MessageBox.Show("No connection to target! Send");
}

private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    sendTimeoutflag = 0; // needs to be reset back to 0 for next reception
    // we set the flag to 1, indicating it was completed.
    if (sendTimer != null)
    {
        // stop the timer from firing.
        sendTimer.Dispose();
    }
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;
        // Complete sending the data to the remote device.
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
}

private void SendTimeoutCallback(object obj)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    // we set the flag to 2, indicating a timeout was hit.
    sendTimer.Dispose();
    client.Close(); // closing the Socket cancels the async operation.
    MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds. The value of 30 seconds has proved to be the only one that never times out, but that seems like overkill and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code, which is your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it's set the value to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket doesn't support async/await).
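For illustration, here is a hedged sketch of what a send timeout can look like with the async/await pattern over a NetworkStream; the helper name and parameters are assumptions, not the poster's API:
using System.IO;
using System.Threading.Tasks;

static class SendHelper
{
    // Returns false if the write did not complete within timeoutMs.
    public static async Task<bool> SendWithTimeoutAsync(Stream stream, byte[] data, int timeoutMs)
    {
        Task sendTask = stream.WriteAsync(data, 0, data.Length);
        Task finished = await Task.WhenAny(sendTask, Task.Delay(timeoutMs));
        if (finished != sendTask)
        {
            return false;   // timed out; the caller decides whether to close the connection
        }
        await sendTask;     // propagate any exception raised by the write
        return true;
    }
}
This avoids the flag-and-timer juggling entirely: Task.WhenAny either observes the completed write or the elapsed delay, whichever comes first.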

How to detect a Socket Disconnect in C#

I'm working on a client/server relationship that is meant to push data back and forth for an indeterminate amount of time.
The problem I'm attempting to overcome is on the client side: I cannot find a reliable way to detect a disconnect.
I've taken a couple of passes at other people's solutions, ranging from just catching IOExceptions to polling the socket on all three SelectModes. I've also tried using a combination of a poll with a check on the socket's Available property.
// Something like this
Boolean IsConnected()
{
    try
    {
        bool part1 = this.Connection.Client.Poll(1000, SelectMode.SelectRead);
        bool part2 = (this.Connection.Client.Available == 0);
        if (part1 && part2)
        {
            // Never Occurs
            // connection is closed
            return false;
        }
        return true;
    }
    catch (IOException e)
    {
        // Never Occurs Either
        return false;
    }
}
On the server side, an attempt to write an 'empty' character ( \0 ) to the client forces an IO Exception and the server can detect that the client has disconnected ( pretty easy gig ).
On the client side, the same operation yields no exception.
// Something like this
Boolean IsConnected()
{
    try
    {
        this.WriteHandle.WriteLine("\0");
        this.WriteHandle.Flush();
        return true;
    }
    catch (IOException e)
    {
        // Never occurs
        this.OnClosed("Yo socket sux");
        return false;
    }
}
A problem I believe I'm having with detecting a disconnect via Poll is that SelectRead can easily return false simply because the server hasn't written anything back since the last check. I'm not sure what to do here; I've chased down every option for making this detection that I can find, and nothing has been 100% reliable for me. Ultimately my goal is to detect a server (or connection) failure, inform the client, wait to reconnect, etc., so I'm sure you can imagine that this is an integral piece.
Appreciate anyone's suggestions.
Thanks ahead of time.
EDIT: Anyone viewing this question should note the answer below, and my FINAL Comments on it. I've elaborated on how I overcame this problem, but have yet to make a 'Q&A' style post.
One option is to use TCP keep-alive packets. You turn them on with a call to Socket.IOControl(). The only annoying bit is that it takes a byte array as input, so you have to convert your settings to an array of bytes to pass in. Here's an example using a 10000ms keep-alive interval with a 1000ms retry:
Socket socket; //Make a good socket before calling the rest of the code.
int size = sizeof(UInt32);
UInt32 on = 1;
UInt32 keepAliveInterval = 10000; //Send a packet once every 10 seconds.
UInt32 retryInterval = 1000; //If no response, resend every second.
byte[] inArray = new byte[size * 3];
Array.Copy(BitConverter.GetBytes(on), 0, inArray, 0, size);
Array.Copy(BitConverter.GetBytes(keepAliveInterval), 0, inArray, size, size);
Array.Copy(BitConverter.GetBytes(retryInterval), 0, inArray, size * 2, size);
socket.IOControl(IOControlCode.KeepAliveValues, inArray, null);
Keep alive packets are sent only when you aren't sending other data, so every time you send data, the 10000ms timer is reset.
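Once the probes go unanswered, the dead connection surfaces on the next socket operation. A hedged sketch of how that might look (the socket and buffer names are assumptions):
var buffer = new byte[4096];
try
{
    int read = socket.Receive(buffer);   // blocks; fails once keep-alive gives up on the peer
    if (read == 0)
    {
        // the peer shut the connection down gracefully
    }
}
catch (SocketException)
{
    // keep-alive (or the peer) reported the connection as dead; reconnect here
}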

Sockets starts to slow down and not respond

I am developing a server (in C#) and a client (in Flash, ActionScript 3.0). The server sends data (each message is around 90 bytes) to the clients continuously, and the clients behave according to the data they receive (the data is JSON formatted).
For a while everything works as expected, but after some time the clients start to receive messages with a lag: they wait for a while and then act on the last message (some messages are lost). Eventually the clients end up waiting and then processing all the messages at once. I could not figure out what is causing this. My network conditions are stable.
Here is part of my C# code that sends a message:
public void Send(byte[] buffer)
{
    if (ClientSocket != null && ClientSocket.Connected)
    {
        ClientSocket.BeginSend(buffer, 0, buffer.Length, 0, WriteCallback, ClientSocket);
    }
}

private void WriteCallback(IAsyncResult result)
{
    //
}
And here is part of my client code that receives messages (ActionScript):
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):void {
    trace(socket.bytesAvailable);
    if (socket.bytesAvailable > 0) {
        try
        {
            var serverResponse:String = socket.readUTFBytes(socket.bytesAvailable);
            ....
I hope I have explained my problem clearly. How should I optimize my code? What could be causing the lag? Thanks.
You really need to give more detail as to how you're setting up the socket (is it TCP or UDP?)
Assuming it's a TCP socket, then it would appear that your client relies on each receive call returning the same number of bytes that were sent by the server's Send() call. This is however not the case, and could well be the cause of your issues if a message is only being partially received on the client, or multiple messages are received at once.
For example, the server may send a 90 byte message in a single call, but your client may receive it in one 90-byte receive, or two 45-byte chunks, or even 90 x 1-byte chunks, or anything in between. Multiple messages sent by the server may also be partially combined when received by the client. E.g. two 90-byte messages may be received in a single 180-byte chunk, or a 150-byte and a 30-byte chunk, etc. etc.
You need therefore to provide some kind of framing on your messages so that when the stream of data is received by the client, it can be reliably reconstructed into individual messages.
The most basic framing mechanism would be to prefix each message sent with a fixed-length field indicating the message size. You may be able to get away with a single byte if you can guarantee that your messages will never be > 255 bytes long, which will simplify the receiving code.
On the client side, you first need to receive the length prefix, and then read up to that many bytes off the socket to construct the message data. If you receive fewer than the required number of bytes, your receiving code must wait for more data (appending it to the partially-received message when it eventually arrives) until it has a complete message of the expected length.
Once the full message is received it can be processed as you are currently.
Unfortunately I don't know ActionScript, so can't give you an example of the client-side code, but here's how you might write the server and client framing in C#:
Server side:
public void SendMessage(string message)
{
    var data = Encoding.UTF8.GetBytes(message);
    if (data.Length > byte.MaxValue) throw new Exception("Data exceeds maximum size");

    var bufferList = new[]
    {
        new ArraySegment<byte>(new[] { (byte) data.Length }),
        new ArraySegment<byte>(data)
    };

    ClientSocket.Send(bufferList);
}
Client side:
public string ReadMessage()
{
    var header = new byte[1];
    // Read the header indicating the data length
    var bytesRead = ServerSocket.Receive(header);
    if (bytesRead > 0)
    {
        var dataLength = header[0];
        // If the message size is zero, return an empty string
        if (dataLength == 0) return string.Empty;

        var buffer = new byte[dataLength];
        var position = 0;
        while ((bytesRead = ServerSocket.Receive(buffer, position, buffer.Length - position, SocketFlags.None)) > 0)
        {
            // Advance the position by the number of bytes read
            position += bytesRead;
            // If there's still more data to read before we have a full message, call Receive again
            if (position < buffer.Length) continue;
            // We have a complete message - return it.
            return Encoding.UTF8.GetString(buffer);
        }
    }
    // If Receive returns 0, the socket has been closed, so return null to indicate this.
    return null;
}

When NetworkStream.Read(byte[], int, int) needs to return -1, it abort()s the thread it is running on instead?

NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
The code above was designed to retrieve messages from a TcpClient while running on a separate thread. The Read method works fine until it is supposed to return -1 to indicate there is nothing left to read; instead, it just seems to terminate the thread it is running on without any apparent reason. Tracing each step in the debugger shows that it simply stops running right after that line.
I also tried wrapping it in a try...catch, without much success.
What could be causing this?
EDIT: I tried
NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        if (i == 0)
        {
            break;
        }
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
thanks to @JonSkeet, but the problem is still there. The thread terminates at that Read line.
EDIT2: I fixed the code like this and it worked.
while (stream.DataAvailable)
{
    int i = stream.Read(buf, 0, 1024);
    result += Encoding.ASCII.GetString(buf, 0, i);
}
I think the problem was simple, I just didn't think thoroughly enough. Thanks everyone for taking a look at this!
No, Stream.Read returns 0 when there's nothing to read, not -1:
Return value
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
My guess is that actually, no exception is being thrown and the thread isn't being aborted - but it's just looping forever. You should be able to see this if you step through in the debugger. Whatever's happening, your "happy" termination condition will never be hit...
Since you're trying to read ASCII characters from a stream, take a look at the following as a potentially simpler way to do it:
public IEnumerable<string> ReadLines(Stream stream)
{
    using (StreamReader reader = new StreamReader(stream, Encoding.ASCII))
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}
While this may not be exactly what you want, the salient points are:
Use a StreamReader to do all the hard work for you
Use a while loop with !reader.EndOfStream to loop through the stream
You can still use reader.Read(buffer, 0, 1024) if you'd prefer to read chunks into a buffer, and append to result. Just note that these will be char[] chunks not byte[] chunks, which is likely what you want.
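A hedged usage sketch of the helper above, reusing the question's stream and result variables:
foreach (string line in ReadLines(stream))
{
    result += line + Environment.NewLine;   // accumulate the received text line by line
}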
It looks to me like it is simply blocking - i.e. waiting on the end of the stream. For it to return a non-positive number, it is necessary that the stream be closed, i.e. the caller has not only sent data, but has closed their outbound socket. Otherwise, the system cannot distinguish between "waiting for a packet to arrive" and "the end of the stream".
If the caller is sending one message only, they should close their outbound socket after sending (they can keep their inbound socket open for a reply).
If the caller is sending multiple messages, then you must use a framing approach to read individual sub-messages. In the case of a text-based protocol this usually means "hunt the newline".

Windows socket.Send data isn't received until socket.Close

I'm developing a server application that asynchronously accepts TCP connections (BeginAccept/EndAccept) and data (BeginReceive/EndReceive). The protocol requires an ACK to be sent whenever the EOM character is found before it will send the next message. The accept and receive are working but the sending app is not receiving the ACK (sent synchronously).
private void _receiveTransfer(IAsyncResult result)
{
    SocketState state = result.AsyncState as SocketState;
    int bytesReceived = state.Socket.EndReceive(result);
    if (bytesReceived == 0)
    {
        state.Socket.Close();
        return;
    }

    state.Offset += bytesReceived;
    state.Stream.Write(state.Buffer, 0, bytesReceived);

    if (state.Buffer[bytesReceived - 1] == 13)
    {
        // process message
        Messages.IMessage message = null;
        try
        {
            var value = state.Stream.ToArray();
            // do some work
            var completed = true;
            if (completed)
            {
                // send positive ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AA", message.Id);
                var buffer = ASCIIEncoding.ASCII.GetBytes(ackMessage);
                int bytesSent = state.Socket.Send(buffer, 0, buffer.Length, SocketFlags.None);
            }
            else
            {
                // send rejected ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AR", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
        catch (Exception e)
        {
            // log exception
            // send error ACK
            if (message != null)
            {
                var ackMessage = string.Format(ack, DateTime.Now.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AE", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
    }

    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, new AsyncCallback(_receiveTransfer), state);
}
The state.Socket.Send returns the correct number of bytes but the data isn't received until the socket is disposed.
Suggestions are appreciated.
You shouldn't do anything synchronous from async completion routines. Under load you can end up hijacking all the IO completion threads from the thread pool and severely hurt performance, up to and including complete IO deadlock. So don't send ACKs synchronously from the async callback.
Protocols and formats that use preambles are easier to manage than those that use terminators. I.e. write the length of the message in a fixed-size message header, as opposed to detecting a terminator (byte 13 in your case). Of course, this only applies if the protocol is under your control to start with.
As for your question, you didn't specify whether the same code you posted is also used on the client side.
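As a hedged sketch of the first point, the ACK could be queued with BeginSend inside the receive callback instead of a blocking Send (ackMessage and state are the question's variables):
byte[] ackBytes = Encoding.ASCII.GetBytes(ackMessage);
state.Socket.BeginSend(ackBytes, 0, ackBytes.Length, SocketFlags.None,
    ar => ((Socket)ar.AsyncState).EndSend(ar),   // complete the send; add error handling as needed
    state.Socket);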
How long are you giving it? The network stack can buffer, and that could delay transmission. From MSDN:
To increase network efficiency, the underlying system may delay transmission until a significant amount of outgoing data is collected. A successful completion of the Send method means that the underlying system has had room to buffer your data for a network send.
You might want to try flushing using the IOControl method.
Edit: actually, the IOControl flush will kill the buffer. You may want to check out the Two Generals Problem to see if your protocol will have some inherent problems.
Try setting the TCP_NODELAY socket option.
Have you set the NoDelay property on the socket to true? When set to false (the default), data is buffered for up to 200 milliseconds before it's sent. The reason is to reduce network traffic by limiting the number of packets that are sent. Setting NoDelay to true will force the data to be sent sooner.
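A minimal sketch, assuming the same state.Socket as in the question (both lines have the same effect):
state.Socket.NoDelay = true;
// or, equivalently, via the option API:
state.Socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);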
