Socket.EndRead 0 bytes means disconnected? - c#

I'm wondering, with my async sockets in C#, whether receiving 0 bytes in the EndRead call means the server has actually disconnected us?
Many examples I see suggest that this is the case, but I'm seeing disconnects a lot more frequently than I would expect.
Is this code correct? Or does endResult <= 0 not really mean anything about the connection state?
private void socket_EndRead(IAsyncResult asyncResult)
{
    // Get the socket from the result state
    Socket socket = asyncResult.AsyncState as Socket;

    // End the read (EndReceive returns the number of bytes read)
    int endResult = socket.EndReceive(asyncResult);

    if (endResult > 0)
    {
        // Do something with the data here
    }
    else
    {
        // Server closed connection?
    }
}

From the docs:
If the remote host shuts down the Socket connection and all available data has been received, the EndRead method completes immediately and returns zero bytes.
So yes, zero bytes indicates a remote close.

A 0-length read should mean a graceful shutdown. A disconnect throws an error (10054, 10053 or 10051).
In practice, though, I did notice reads completing with 0 length even though the connection was alive, and the only way to handle it is to check the socket status on 0-length reads. The situation was as follows: multiple buffers are posted on a socket for receive. The thread that posted them is then trimmed by the pool. The OS notices that the thread that made the requests is gone and notifies the posted operations with error 995 ERROR_OPERATION_ABORTED, as documented. However, what I've found is that when multiple operations are posted (i.e. multiple reads), only the first is notified with error 995; the subsequent ones are notified with success and 0 length.
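
A minimal sketch of that workaround, assuming the goal is to double-check the socket before treating a 0-byte completion as a remote close (the Poll/Available probe below is my own suggestion, not code from the answer above):

// Sketch only: probe the socket when a read completes with 0 bytes.
// Poll(SelectRead) returns true when data is readable OR the connection has been
// closed/reset; if it is "readable" but Available is 0, the peer really closed.
private static bool LooksClosedByPeer(Socket socket)
{
    try
    {
        return socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0;
    }
    catch (SocketException)
    {
        return true;   // probing an unusable socket also counts as disconnected
    }
    catch (ObjectDisposedException)
    {
        return true;   // the socket has already been closed locally
    }
}

Calling something like this in the else branch of the callback above would let you tell a genuine remote close apart from the aborted-but-reported-as-success case described here.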

Related

Why socket BeginSend would take too long

I have the following code
byte[] bytes = Encoding.Default.GetBytes(data);
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, 0, new AsyncCallback(SendCallback), socket);

int waitingCounter = 0;
while (!res.IsCompleted && waitingCounter < 10)
{
    if (Tracing.TraceInfo) Tracing.WriteLine("Waiting for data to be transmited. Give a timeout of 1 second", _traceName);
    Thread.Sleep(1 * 1000);
    waitingCounter++;
}
This code is installed on many machines, but in some cases the condition res.IsCompleted takes a long time to become true.
Is the reason related to the network (maybe a firewall or proxy?), to the client (too slow), or to the server?
I have not been able to reproduce this scenario.
Edit: I tried to reproduce the error by using an asynchronous client and a synchronous server
with the following modifications:
Client=>
while (true)
{
    Send(client, "This is a test<EOF>");
    sendDone.WaitOne();
}
Server=>
while (true)
{
    Console.WriteLine("Waiting for a connection...");
    // Program is suspended while waiting for an incoming connection.
    Socket handler = listener.Accept();
    data = null;

    // Show the data on the console (note: nothing is actually read from the socket here).
    Console.WriteLine("Text received : {0}", data);

    handler.Shutdown(SocketShutdown.Both);
    handler.Close();
}
On the second Send(), I get a socket exception, which is normal because the server is not reading the data.
But actually what I want to reproduce is:
Waiting for data to be transmited. Give a timeout of 1 second
Waiting for data to be transmited. Give a timeout of 1 second
Waiting for data to be transmited. Give a timeout of 1 second
Waiting for data to be transmited. Give a timeout of 1 second
Waiting for data to be transmited. Give a timeout of 1 second
as it happens in one of our installations.
Edit:
An answer disappeared from this question!!!
Even though BeginSend won't stall your application, it is still subject to the same constraints as Send. This answer explains why things could go wrong (I've paraphrased):
For your application, the remote receiving TCP buffer, the network and the local sending TCP buffer together act as one big buffer. If the remote application is delayed in reading new bytes from its TCP buffer, eventually your local TCP buffer will end up (nearly) full. The send won't complete until the local TCP buffer has enough space to store the payload you're trying to send.
Also, don't forget that after the first check of res.IsCompleted, you always wait a full second before checking again.
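
As a side note, instead of polling res.IsCompleted with Thread.Sleep, you can block on the IAsyncResult's wait handle with a timeout. A minimal sketch under the same assumptions as the snippet above (socket, data, SendCallback and _traceName come from the question's code):

byte[] bytes = Encoding.Default.GetBytes(data);
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, 0, new AsyncCallback(SendCallback), socket);

// Wait up to 10 seconds for the send to complete instead of sleeping in a loop.
if (!res.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    if (Tracing.TraceInfo) Tracing.WriteLine("Send still not completed after 10 seconds", _traceName);
}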

Detecting unexpected socket disconnect

This is not a question about how to do this, but a question about whether what I'm doing is wrong. I've read that it's not possible to detect whether a socket has been closed unexpectedly (like killing the server/client process, or pulling the network cable) while waiting for data (BeginReceive), without the use of timers or regularly sent messages, etc. But for quite a while I've been using the following setup to do this, and so far it has always worked perfectly.
public void OnReceive(IAsyncResult result)
{
    try
    {
        var bytesReceived = this.Socket.EndReceive(result);
        if (bytesReceived <= 0)
        {
            // normal disconnect
            return;
        }
        // ...
        this.Socket.BeginReceive...;
    }
    catch // SocketException
    {
        // abnormal disconnect
    }
}
Now, since I've read it's not easily possible, I'm wondering if there's something wrong with my method. Is there? Or is there a difference between killing processes and pulling cables and similar?
It's perfectly possible and OK to do this. The general idea is:
If EndReceive returns anything other than zero, you have incoming data to process.
If EndReceive returns zero, the remote host has closed its end of the connection. That means it can still receive data you send if it's programmed to do so, but cannot send any more of its own under any circumstances. Usually when this happens you will also close your end of the connection, thus completing an orderly shutdown, but that's not mandatory.
If EndReceive throws, there has been an abnormal termination of the connection (process killed, network cable cut, power lost, etc).
A couple of points you have to pay attention to:
EndReceive can never return less than zero (the test in your code is misleading).
If it throws it can throw other types of exception in addition to SocketException.
If it returns zero you must be careful to stop calling BeginReceive; otherwise you will begin an infinite and meaningless ping-pong game between BeginReceive and EndReceive (it will show in your CPU usage). Your code already does this, so no need to change anything.
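
Putting those points together, a fuller version of such a callback might look like the sketch below. It only illustrates the rules above; the buffer field, the HandleData call and the choice to simply close the socket on error are assumptions, not part of the question's code:

private void OnReceive(IAsyncResult result)
{
    try
    {
        int bytesReceived = this.Socket.EndReceive(result);
        if (bytesReceived == 0)
        {
            // Remote host closed its end: do NOT post another BeginReceive.
            this.Socket.Close();
            return;
        }

        HandleData(this.Buffer, bytesReceived);   // hypothetical processing step

        // Keep the receive loop going only while the connection is healthy.
        this.Socket.BeginReceive(this.Buffer, 0, this.Buffer.Length,
                                 SocketFlags.None, OnReceive, null);
    }
    catch (SocketException)
    {
        // Abnormal termination (reset, cable pulled, process killed, ...).
        this.Socket.Close();
    }
    catch (ObjectDisposedException)
    {
        // Socket was closed locally while a receive was pending; nothing to do.
    }
}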

Measuring the TCP Bytes waiting to be read by the Socket

I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled fast enough from the socket, there is a bottleneck at the TCP level, and the received data is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read from the socket? (Via some external tool like Wireshark or otherwise.)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I guess the "dispatch" somehow blocks reception until it is done, which results in latency (i.e. data processed by the dispatch is delayed), hence my (false?) conclusion that data was accumulating at the socket level or before it.
How can I see how many TCP bytes are waiting to be read from the socket
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
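As an illustration of that kind of framing (not the poster's actual protocol), here is a sketch that uses a 4-byte length prefix instead of the "13|" text prefix; ReceiveExactly is a helper that loops because a single Receive call may return fewer bytes than requested:

// Sketch: length-prefixed framing over a connected Socket.
// The 4-byte big-endian prefix is an assumption, not the protocol discussed above.
static byte[] ReceiveFrame(Socket socket)
{
    byte[] prefix = ReceiveExactly(socket, 4);
    int length = (prefix[0] << 24) | (prefix[1] << 16) | (prefix[2] << 8) | prefix[3];
    return ReceiveExactly(socket, length);
}

static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new EndOfStreamException("Connection closed in the middle of a frame.");
        offset += read;
    }
    return buffer;
}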
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket. They only know which packets have passed by the network interface.
The only way to check it with Wireshark is to actually know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use socket.Available if you are using the normal Socket class. Otherwise you have to define a header that gives the number of bytes to be sent from the other end.
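
For what it's worth, checking Available from the question's own receive path could be as simple as the following sketch (the threshold and the log line are illustrative assumptions):

// Sketch: Available reports how many bytes the OS has already buffered for this socket.
int pending = rxContext._ipcSocket.Available;
if (pending > 64 * 1024)
{
    // A large backlog suggests the dispatch step is not keeping up with the sender.
    Console.WriteLine("Receive backlog: " + pending + " bytes waiting in the kernel buffer");
}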

socket option receive time out

Now let's take a scenario where we use a blocking socket Receive, the packet is 5000 bytes, and ReceiveTimeout is set to one second:
s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 1000);

int bytes_received = 0;
byte[] ReceiveBuffer = new byte[8192];

try
{
    bytes_received = s.Receive(ReceiveBuffer);
}
catch (SocketException e)
{
    if (e.ErrorCode == 10060) // WSAETIMEDOUT
    {
        Array.Clear(ReceiveBuffer, 0, ReceiveBuffer.Length);
    }
}
Now our scenario dictates that 4000 bytes have gone through already, the socket is still blocking, and some error occurred on the receiving end;
on the receiving end we would then dispose of the 4000 bytes by catching the socket exception.
Is there any guarantee that the socket on the sending end won't still send the 1000 bytes that remain?
Does the sending socket know to truncate them? If it wasn't disconnected, when we attempt to receive again, won't they be the first bytes we receive?
What I'm asking is:
a) Does TCP have some mechanism that tells the socket to dispose of the rest of the message?
b) Is there a socket flag that we could send or receive with that tells the buffers to dispose of the rest of the message?
First off, TCP/IP operates on streams, not packets. So you need some kind of message framing in your protocol, regardless of buffer sizes, blocking calls, or MTUs.
Secondly, each TCP connection is independent. The normal design is to close a socket when there's a communications error. A new socket connection can then be established, which is completely independent from the old one.
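A hedged sketch of that second point, assuming the remote endpoint is known and that reconnecting is acceptable for the protocol in question:

// Sketch: on a communications error such as a receive timeout, abandon the socket
// and open a fresh, independent connection rather than trying to resynchronize.
static Socket ReceiveOrReconnect(Socket s, byte[] buffer, EndPoint remote)
{
    try
    {
        int received = s.Receive(buffer);
        // ... hand 'received' bytes to the message-framing layer here ...
        return s;
    }
    catch (SocketException e) when (e.SocketErrorCode == SocketError.TimedOut)
    {
        s.Close();   // whatever the sender still had queued dies with this connection

        Socket fresh = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        fresh.Connect(remote);
        return fresh;
    }
}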

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as being connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect from the client and checking on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;

    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TcpClient by ref, assuming there was a scope issue, and client.Connected remains true even after a read. What is going wrong?
Second Update!!:
Here is the solution. Do a peek and based on that, determine if you are connected or not.
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
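
Wrapped up as a reusable check it might look like this (a sketch; the method name and the translation from an exception to a bool are my own choices):

// Sketch: returns false once the peer has closed or reset the connection.
// Poll(SelectRead) fires when data is readable OR the connection is gone;
// a zero-byte peek in that case means the remote side has disconnected.
static bool IsStillConnected(TcpClient client)
{
    try
    {
        if (client.Client.Poll(0, SelectMode.SelectRead))
        {
            byte[] probe = new byte[1];
            return client.Client.Receive(probe, SocketFlags.Peek) != 0;
        }
        return true;    // nothing to read and no error reported
    }
    catch (SocketException)
    {
        return false;   // connection reset or otherwise unusable
    }
}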
From the MSDN Documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class, but I would think it behaves similarly to the Socket class, as it is primarily a wrapper class. In general the server would be unaware that the client disconnected from the socket unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket, the callback is not invoked until there is data to be read from the socket, so calling EndRead and getting a read result of 0 (zero) bytes means the socket was disconnected. If you use Read and get a zero-byte result, I suspect you can check the Connected property on the underlying Socket class and it will be false if the client disconnected, since an I/O operation was just performed on the socket.
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workarounds for this tend to rely on sending the amount of data to expect as part of the protocol. That's what HTTP 1.1 does using the Content-Length header (for an entire entity) or with chunked transfer encoding (with various chunk sizes).
Another way is to send "NOOP" or similar commands (essentially messages that do nothing but make sure the communication is still open) as part of your protocol regularly.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
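
A minimal sketch of the NOOP idea, assuming the application protocol tolerates an occasional no-op message (the 5-second interval, the "NOOP\n" payload and the socket variable are illustrative, not part of any standard):

// Sketch: periodically send a protocol-level no-op so that a dead connection
// shows up as a send failure instead of going unnoticed.
var keepAlive = new System.Threading.Timer(_ =>
{
    try
    {
        socket.Send(Encoding.ASCII.GetBytes("NOOP\n"));   // illustrative payload
    }
    catch (SocketException)
    {
        // The peer is gone or the network is down: tear down and reconnect here.
        socket.Close();
    }
}, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));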
