Reuse asynchronous socket: subsequent connect attempts fail - C#

I'm trying to reuse a socket in an asynchronous HTTP client, but I'm not able to connect to the host the second time around. I basically treat my asynchronous HTTP client as a state machine with the following states:
Available: the socket is available for use
Connecting: the socket is connecting to the endpoint
Sending: the socket is sending data to the endpoint
Receiving: the socket is receiving data from the endpoint
Failed: there was a socket failure
Clean Up: cleaning up the socket state
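For reference, here is a minimal sketch of the state enum those states correspond to; the EClientState name and its members are taken from the code below, and the exact definition is an assumption:
// Assumed definition, reconstructed from the ChangeState(...) calls below.
internal enum EClientState
{
    Available,   // the socket is available for use
    Connecting,  // the socket is connecting to the endpoint
    Sending,     // the socket is sending data to the endpoint
    Receiving,   // the socket is receiving data from the endpoint
    Failed,      // there was a socket failure
    CleanUp      // cleaning up the socket state
}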
In the connecting state I call BeginConnect:
private void BeginConnect()
{
    lock (_sync) // re-entrant lock
    {
        IPAddress[] addresses = Dns.GetHostEntry(_asyncTask.Host).AddressList;
        // Connect to any available address
        IAsyncResult result = _reusableSocket.BeginConnect(addresses, _asyncTask.Port, new AsyncCallback(ConnectCallback), null);
    }
}
The callback method changes the state to Sending once a successful connection has been established:
private void ConnectCallback(IAsyncResult result)
{
    lock (_sync) // re-entrant lock
    {
        try
        {
            _reusableSocket.EndConnect(result);
            ChangeState(EClientState.Sending);
        }
        catch (SocketException e)
        {
            Console.WriteLine("Can't connect to: " + _asyncTask.Host);
            Console.WriteLine("SocketException: {0} Error Code: {1}", e.Message, e.NativeErrorCode);
            ThreadPool.QueueUserWorkItem(o =>
            {
                // An attempt was made to get the page so perform a callback
                ChangeState(EClientState.Failed);
            });
        }
    }
}
In the cleanup I Shutdown the socket and Disconnect with a reuse flag:
private void CleanUp()
{
    lock (_sync) // re-entrant lock
    {
        // Perform cleanup
        if (_reusableSocket.Connected)
        {
            _reusableSocket.Shutdown(SocketShutdown.Both);
            _reusableSocket.Disconnect(true);
        }
        ChangeState(EClientState.Available);
    }
}
Subsequent calls to BeginConnect result in a timeout and an exception:
SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XX.XXX.XX.XX:80
Error Code: 10060
Here is the state trace:
Initializing...
Change State: Connecting
Change State: Sending
Change State: Receiving
Change State: CleanUp
Callback: Received data from client 0 // <--- Received the first data
Change State: Available
Change State: Connecting // <--- Timeout when I try to reuse the socket to connect to a different endpoint
What do I have to do to be able to reuse the socket to connect to a different host?
Note: I have not tried to re-connect to the same host, but I assume the same thing happens (i.e. fails to connect).
Update
I found the following note in the documentation of BeginConnect:
If this socket has previously been disconnected, then BeginConnect must be called on a thread that will not exit until the operation is complete. This is a limitation of the underlying provider. Also the EndPoint that is used must be different.
I'm starting to wonder if my issue has something to do with that... I am connecting to a different EndPoint, but what do they mean that the thread from which we call BeginConnect must not exit until the operation is complete?
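To illustrate what that note seems to describe (not a confirmed fix), a connect issued from a dedicated thread that stays alive until the operation completes might look like this sketch, reusing the field names from the code above:
// Sketch only: keep the thread that issued BeginConnect alive until the
// overlapped operation has finished, which appears to be what the documentation
// note requires on older operating systems. Uses System.Threading.
private void BeginConnectOnDedicatedThread()
{
    var thread = new Thread(() =>
    {
        IPAddress[] addresses = Dns.GetHostEntry(_asyncTask.Host).AddressList;
        IAsyncResult result = _reusableSocket.BeginConnect(
            addresses, _asyncTask.Port, new AsyncCallback(ConnectCallback), null);

        // Block here so the thread does not exit before the connect completes.
        result.AsyncWaitHandle.WaitOne();
    });
    thread.IsBackground = true;
    thread.Start();
}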
Update 2.0:
I asked a related question and I tried using the "Async family" calls instead of the "Begin family" calls, but I get the same problem!!!

I commented on this question, what is benefit from socket reuse in C#, about socket reuse using Disconnect(true)/DisconnectEx(), and that answer may help you.
Personally I think it's an optimisation too far in client code.
Re update 1 to your question: no, you'd get an AbortedOperation exception if that were the case (see here: VB.NET 3.5 SocketException on deployment but not on development machine). The docs are also wrong if you're running on Vista or later, since those systems don't enforce the "thread must exist until after the overlapped I/O completes" rule that previous operating systems enforce.
As I've already said in the reply to the linked question, there's very little point in using this functionality for outbound connection establishment. It was most likely added to the Winsock API to support socket reuse with AcceptEx() on inbound connections, on very busy web servers that used TransmitFile() to send files to clients (which is where disconnect-for-reuse seems to have originated). The docs state that it doesn't play well with TIME_WAIT, so using it for connections where you initiate the active close (and thus put the socket into TIME_WAIT, see here) doesn't really make sense.
Can you explain why you think this micro optimisation is actually necessary in your case?
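For comparison, a rough sketch of the simpler alternative being suggested here, creating a fresh socket per outbound connection instead of reusing one (the _asyncTask fields are the question's; passing the socket through the callback state is my assumption):
// Sketch: build a new socket for each request and close it when done, rather
// than calling Disconnect(true) and keeping the socket around for reuse.
private void ConnectWithFreshSocket()
{
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    IPAddress[] addresses = Dns.GetHostEntry(_asyncTask.Host).AddressList;
    socket.BeginConnect(addresses, _asyncTask.Port, ar =>
    {
        var s = (Socket)ar.AsyncState;
        try
        {
            s.EndConnect(ar);
            // ... send the request and receive the response on s ...
            // When finished: s.Shutdown(SocketShutdown.Both); s.Close();
        }
        catch (SocketException)
        {
            s.Close(); // no reuse, so clean-up is just closing the socket
        }
    }, socket);
}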

Have you checked the MaxConnections setting?
http://msdn.microsoft.com/de-de/library/system.servicemodel.nettcpbinding.maxconnections.aspx
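For reference, that setting only applies if the server side is a WCF service using NetTcpBinding; a minimal sketch of where it lives (values are illustrative):
// Illustrative only: requires a reference to System.ServiceModel and is only
// relevant when the endpoint is hosted via WCF's NetTcpBinding.
var binding = new System.ServiceModel.NetTcpBinding();
binding.MaxConnections = 20; // raise the pooled connection limit if needed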

Related

How can I detect socket closure with a SocketAsyncEventArgs?

I'm running into an interesting scenario when I'm trying to roll with .Net's SocketAsyncEventArgs. Namely, the fact that they can't seem to detect when a graceful remote socket shutdown has occurred.
Bit of background: I'm updating a legacy application from MFC to a .NET project, and my code needs to interface with all other legacy MFC code. In the legacy MFC code, the MFC backend automatically registers when a remote connection is gracefully closed with a FIN or RST signal. I've observed this behavior in action, and all the user can or needs to interact with is overloading the OnClose method that MFC provides.
I can't replicate that in C# or C++/CLI at the moment. My SocketAsyncEventArgs that I use to handle all receive operations looks like this:
static void AcceptHandler(System::IAsyncResult^ ar)
{
    ServerSocket ^server = (ServerSocket ^)ar->AsyncState;
    try
    {
        server->Socket = gcnew SocketMgr(server->listener->EndAcceptSocket(ar));

        // pConnectionChangedCb is a function variable I use for updating the GUI
        // when connection status changes. receiveDataHandler is another function
        // variable for logging purposes.
        if (server->pConnectionChangedCb)
        {
            server->pConnectionChangedCb(server->nID);
        }
        if (server->receiveDataHandler)
        {
            System::Net::Sockets::SocketAsyncEventArgs ^receiveArgs = gcnew System::Net::Sockets::SocketAsyncEventArgs();
            receiveArgs->SetBuffer(server->readbuffer, server->nOffset, server->nBytesToGet - server->nOffset);
            receiveArgs->Completed +=
                gcnew System::EventHandler<System::Net::Sockets::SocketAsyncEventArgs ^>(server, &ServerSocket::IO_Completed);
            server->Socket->ReceiveAsync(receiveArgs);
        }
    }
    catch (System::Net::Sockets::SocketException ^e)
    {
        System::Windows::Forms::MessageBox::Show("OnAccept: Could not Accept, exception" + e->ErrorCode);
        server->listener->EndAcceptSocket(ar);
    }
}
void IO_Completed(System::Object ^sender, System::Net::Sockets::SocketAsyncEventArgs ^e)
{
    if (!(e->SocketError == System::Net::Sockets::SocketError::Success))
    {
        kPrintf("Error.");
    }

    // Determine which type of operation just completed and call the associated handler.
    switch (e->LastOperation)
    {
    case System::Net::Sockets::SocketAsyncOperation::Receive:
        ProcessReceive(e);
        break;
    case System::Net::Sockets::SocketAsyncOperation::Send:
        ProcessSend(e);
        break;
    default:
        throw gcnew System::ArgumentException("The last operation completed on the socket was not a receive or send");
    }
}
From what I've observed, when the remote socket ceases to exist, the SocketAsyncEventArgs object in the middle of the read is left in a state where it has not completed and never will complete. Because it never completes, IO_Completed is never called, so I can't use it to detect when a socket performs a graceful disconnect.
...The only problem with this being, of course, that there's no OnRemoteClose (or equivalent) event for me to subscribe to in System.Net.Sockets.Socket or in the SocketAsyncEventArgs, leaving me unable to detect a socket FIN or RST signal and keeping the socket open longer than expected. C# probably has a way around this, but I can't, for the life of me, find it. Anyone else wrestled with this before?
As it turns out, SocketAsyncEventArgs does record a graceful termination of any remote socket, regardless of client language. It does not expose the underlying TCP/IP events or anything similar; instead it reports the socket closure as an empty, 0-byte message received.
My code, because of a PEBKAC error, was not receiving the empty 0-byte messages, and thus I could never 'see' the graceful shutdown.
(In the event anyone has this issue in the future, the problem is that the ProcessReceive method should have called ReceiveAsync to continue the loop after receiving its first signal, and it... wasn't, for reasons unrelated to the code.)
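In C# terms (the original code above is C++/CLI), the fix described here boils down to something like the following sketch; ProcessReceive, HandleData, CloseClientSocket and the _clientSocket field are assumed names, not from the original:
// Sketch: treat a successful 0-byte completion as the remote side's graceful
// close (FIN), and otherwise re-post ReceiveAsync so the receive loop continues.
void ProcessReceive(System.Net.Sockets.SocketAsyncEventArgs e)
{
    if (e.SocketError != System.Net.Sockets.SocketError.Success || e.BytesTransferred == 0)
    {
        CloseClientSocket(e); // graceful shutdown or error: release the socket
        return;
    }

    HandleData(e.Buffer, e.Offset, e.BytesTransferred);

    // Post the next receive; ReceiveAsync returns false when it completed
    // synchronously, in which case the completion must be handled inline.
    if (!_clientSocket.ReceiveAsync(e))
    {
        ProcessReceive(e);
    }
}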

Tcp.BeginConnect thinks it is connected, even if there is no server

I am trying to make an aSync connection to a server using TcpClient.BeginConnect, but am encountering some difficulties. This is my first time using Tcp so please bear with me.
The connection itself works fine when the server is running; I can send and receive messages without problem. However, when I stop the server and try to connect to it, TcpClient.BeginConnect will pretend it is actually connected to a server without returning an error, until I try to actually send data, which will obviously fail.
When I use TcpClient.Connect() instead, it returns "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond." when no connection is established after a few seconds, letting me know the connection failed.
Is there a way to get this same behaviour with TcpClient.BeginConnect? Or am I doing something wrong myself?
I looked around and found C# BeginConnect callback is fired when not connected, which is somewhat similar, and the answer there was that EndConnect had to be called in the callback before the socket becomes usable, but I'm already doing that.
my code:
public static void OpenTcpASyncConnection()
{
    if (client == null)
    {
        client = new TcpClient();
        IAsyncResult connection = client.BeginConnect(serverIp, serverPort, new AsyncCallback(ASyncCallBack), client);
        bool success = connection.AsyncWaitHandle.WaitOne(); // returns true
        if (!success)
        {
            client.Close();
            client.EndConnect(connection);
            throw new Exception("TcpConnection::Failed to connect.");
        }
        else
        {
            Debug.LogFormat("TcpConnection::Connecting to {0} succeeded", serverIp);
        }
    }
    else
    {
        Debug.Log("TcpConnection::Client already exists");
    }
}

public static void ASyncCallBack(IAsyncResult ar)
{
    Debug.Log("Pre EndConnect");
    client.EndConnect(ar);
    Debug.Log("Post EndConnect"); // this never gets called?
}
The boolean success is true even if the server is offline (or does this always return true as long as the operation finishes?), so I assume it thinks it is actually connected, and the Debug.Log after client.EndConnect(ar) never gets called. Not a single error gets returned.
In summary: am I forgetting something or doing something wrong, or is this expected behaviour?
Edit: the language is C# with the .NET 3.5 framework. It is meant for a Unity application, though I'm not inheriting from MonoBehaviour for this. If you require any additional information I will try to provide it.
Kind regards and thanks for your time,
Remy.
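For what it's worth, a failed asynchronous connect only surfaces its error when EndConnect is called; a callback along these lines (a sketch, using the same static client field as above) would show the SocketException instead of silently stopping after "Pre EndConnect":
// Sketch: AsyncWaitHandle.WaitOne() only signals that the operation finished,
// not that it succeeded, so check the outcome where EndConnect throws.
public static void ASyncCallBack(IAsyncResult ar)
{
    try
    {
        client.EndConnect(ar);
        Debug.Log("TcpConnection::Connected");
    }
    catch (SocketException e)
    {
        Debug.LogFormat("TcpConnection::Connect failed: {0}", e.Message);
        client.Close();
        client = null;
    }
}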

Calling EndConnect after BeginConnect

According to this MSDN article, the socket.EndConnect method should be called in the AsyncCallback delegate provided in the original socket.BeginConnect call.
What is not clear (and the MSDN article is silent here) is whether EndConnect should be called after a timeout (and the socket is NOT connected). socket.EndConnect throws an exception in this case.
What is the proper procedure to follow after timeout? What are the consequences if EndConnect is not called (either after a successful connection or timeout without connection)? My code appears to work fine without calling EndConnect.
Here is some example code covering the main ideas in the question:
// Member variables
private static ManualResetEvent m_event;
private static Socket m_socket;

// Static constructor of class
static CMyTestConnection()
{
    // Create an event that can be used to wake this thread when the connection completes
    m_event = new ManualResetEvent(false);
}

private static void TestConnection(object sender, EventArgs e)
{
    // Create connection endpoint
    IPAddress ip = IPAddress.Parse("200.1.2.3"); // Deliberately incorrect
    IPEndPoint ipep = new IPEndPoint(ip, 12345); // Also deliberately incorrect
    EndPoint ep = (EndPoint)ipep;

    // Attempt connection
    m_event.Reset();
    m_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    m_socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, 1);
    m_socket.BeginConnect(ep, ConnectCompletedCallback, m_socket);
}

private static void ConnectCompletedCallback(IAsyncResult ar)
{
    // The asynchronous connection call has completed. Either we have connected (success) or
    // timed out without being able to connect (failure).
    m_event.Set();
    Socket s = (Socket)ar.AsyncState;
    if (s.Connected)
    {
        // Success...should EndConnect only be called here?
        s.EndConnect(ar);
    }
    else
    {
        // Or should EndConnect also be called here (in a try/catch block)?
        s.Close();
    }
}
The answer is still the same as already provided in the comments: you always call the EndXXX method when you've called BeginXXX (the few known exceptions don't apply here). There's nothing in your question, even after the recent edit, that would indicate what more you need.
You don't show how the timeout is implemented, so there's not even enough information to understand the code you posted. But if you are closing the socket, thus causing your callback to be invoked where EndConnect() will throw an exception, you should be calling EndConnect(). Failing to do so can potentially leave unmanaged resources dangling, which would then eventually be exhausted, or at the very least lead to performance problems.
The source code for .NET is readily available, so you can easily examine the implementation yourself. In the case of Socket.EndConnect(), we can see that for the current implementation, if the socket has already been disposed, all that happens is an exception is thrown. So, in theory, you could ignore sockets that have already been closed. I.e. this is an exception to the general concern about leaving resources dangling, in the specific "socket is already closed" scenario. But only if your timeout is implemented by closing the socket.
There are a couple of problems here though, related to race conditions:
Depending on how the timeout is implemented (you didn't share that part, so the question is still incomplete), you may have code that got as far as starting to call Socket.Close(), but which has not set the disposed flag. You'll be dealing with a connected socket that is about to become disconnected, and you need to have try/catch in place to handle that scenario.
Your callback assumes (it seems…again, there's not enough context in your question) that the Connected property is a reliable way to detect that there's been a timeout, but the Connected property could theoretically be reset to false after being connected, but before your callback gets to execute (e.g. some other type of error on the socket).
As far as the question of calling EndConnect() on a successful connection, that is much more clear: you must do so. If your code appears to work even though you haven't, that's just you getting lucky. We can see in the implementation that the EndConnect() method does useful work to configure the socket state when called after a successful connection, so if you fail to call the method, your socket will be in some indeterminate, incompletely configured state.
Naturally, if your timeout is implemented in some other way, where the socket is not closed before the callback is invoked, then you are in the same situation as if the connection had completed, and you must call EndConnect() to ensure that the appropriate cleanup and socket configuration occurs. I.e. that would be the same as the "successful connection" scenario.
The bottom line is, there is zero benefit to not calling EndConnect() in the event of a close/dispose-based timeout. The only hypothetical benefit might be that you can avoid try/catch, but you can't get away without that, because of the race conditions that exist. And if there's not such a timeout, not only is there not a benefit to not calling the method, there is real harm in failing to call it.
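Put together, a callback that follows this advice might look roughly like the following sketch (it assumes, as the question implies, that the timeout is implemented by closing the socket elsewhere):
// Sketch: always call EndConnect in the callback; try/catch absorbs both a
// failed connect and the close-based timeout having already disposed the socket.
private static void ConnectCompletedCallback(IAsyncResult ar)
{
    Socket s = (Socket)ar.AsyncState;
    try
    {
        s.EndConnect(ar); // completes the connect and configures the socket state
        m_event.Set();
        // ... start sending/receiving here ...
    }
    catch (ObjectDisposedException)
    {
        // The timeout path already called Close() on the socket; nothing to do.
    }
    catch (SocketException)
    {
        // The connect itself failed; clean up.
        s.Close();
    }
}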
On a related note, there's not enough context in your question to make any real assessment of the rest of your code (since you didn't show how you're implementing the timeout, nor how the rest of your network I/O is handled). But I will say that in most cases the "reuse address" option is unnecessary and should not be used. Most people wind up using it because they get into a situation where they can't start a new listening socket after they have somehow stopped a previous one, but that problem only comes up when the first listening socket and/or its associated connected sockets have not been closed or shut down correctly. The correct approach in that case is to handle the socket closure/shutdown correctly, not to add to the problem by setting "reuse address".
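As a sketch of that last point (the socket names here are placeholders, not from the question), the clean shutdown that makes "reuse address" unnecessary is simply:
// Shut connected sockets down in both directions before closing them, and close
// the listening socket itself when the server stops.
connectedSocket.Shutdown(SocketShutdown.Both);
connectedSocket.Close();
listeningSocket.Close();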

Disposing each socket client that is connected to the socket server

I have just recently started development for a server - multiple clients application using Socket.
The server doesn't need to keep track of the connected clients; If there is a client that requests for connection, server accepts it. If there is a request from any client (to get some data), server will response to that client.
/// <summary>
/// Callback when server accepts a new incoming connection.
/// </summary>
/// <param name="result">Incoming connection result object.</param>
private void AcceptedCallback(IAsyncResult result)
{
    Socket clientSocket = null;
    try
    {
        clientSocket = _socket.EndAccept(result); // Asynchronously accepts an incoming connection attempt
        if (clientSocket.Connected) // Check if the client is in 'Connected' state
        {
            StateObject state = new StateObject();
            state.clientSocket = clientSocket;
            clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None, // Start listening to client request
                ReceiveCallback, state);
        }
        else
        {
            clientSocket.Close(); // Terminate that client's connection
            Log.writeLog("TCPServer(AcceptedCallback)"
                , "Client's status is not connected.");
        }
    }
    catch (Exception ex)
    {
        Log.writeLog("TCPServer(AcceptedCallback)"
            , ex.Message);
        if (clientSocket != null)
        {
            clientSocket.Close();
        }
    }
    finally
    {
        Accept(); // Start to accept new connection request
    }
}
I have 3 questions about this:
1. For each BeginReceive that I create for the newly connected client, does my server application create a new thread/object to hold that client?
2. If, after the client is connected, the network cable is pulled out at the client side and plugged back in, the client will connect to the server again and this is considered a new connection on the server. If this scenario occurs again and again, will my server program crash?
3. Hence, do I need to keep track of each client that is connected to the server, and find a way to track their state so I can call Close/Dispose on them?
So far in my testing for scenario 2, there are no abnormalities detected in my server program, but I hope someone would help clarify this for me. Thank you.
1. No, it will use an I/O completion thread from a pool of threads.
2. No, but you can and should code to cater for this. If something happens on the client side that the OS can detect, it will send a TCP FIN/ACK to the server. Any BeginXXX operation still waiting should then complete and invoke its async callback, where the corresponding EndXXX call will either throw an exception or return zero bytes read from the socket.
3. This depends on what you mean by keeping track of them in order to dispose of them properly. If you mean disposing of them when you detect an error, no: you can put clean-up code in your EndXXX callbacks. If you mean so that you can signal clients gracefully when you shut the server down, then yes.
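As a sketch of point 2, the server-side receive callback might look roughly like this (StateObject, its fields, and ReceiveCallback are the names used in the question; the clean-up details are assumptions):
// Sketch: a zero-byte EndReceive or a SocketException is the signal that the
// client has gone away, so release the socket there and only re-arm the
// receive when data actually arrived.
private void ReceiveCallback(IAsyncResult result)
{
    StateObject state = (StateObject)result.AsyncState;
    Socket clientSocket = state.clientSocket;
    try
    {
        int bytesRead = clientSocket.EndReceive(result);
        if (bytesRead == 0)
        {
            clientSocket.Close(); // graceful close (FIN) from the client
            return;
        }

        // ... process state.buffer up to bytesRead ...

        clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
            ReceiveCallback, state); // keep listening for the next request
    }
    catch (SocketException)
    {
        clientSocket.Close(); // abortive close (e.g. cable pulled, connection reset)
    }
}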

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as being connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect from the client and checking on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;
    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TCP client by ref, assuming there was a scope issue, but client.Connected remains true even after a read. What is going wrong?
Second Update!!:
Here is the solution. Do a peek and based on that, determine if you are connected or not.
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
From the MSDN Documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class, but I would think it behaves similarly to the Socket class, since it is primarily a wrapper class. In general the server is unaware that the client disconnected from the socket unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket, the callback is not called until there is data to be read from the socket, so calling EndRead and getting a bytes-read result of 0 (zero) means the socket was disconnected. If you use Read and get a zero-bytes-read result, I suspect you can check the Connected property on the underlying Socket class and it will be false if the client disconnected, since an I/O operation was performed on the socket.
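Applied to the server loop shown earlier, that means actually reading (rather than only polling DataAvailable) and treating a zero-byte result as the disconnect signal; a rough sketch using the question's client and stream variables:
// Sketch: NetworkStream.Read blocks until data arrives or the remote side
// closes, and a return value of 0 means the client disconnected gracefully.
// (The ASCII encoding here is just an assumption about ReadStringFromClient.)
byte[] buffer = new byte[4096];
int bytesRead = stream.Read(buffer, 0, buffer.Length);
if (bytesRead == 0)
{
    WriteToConsole("Client disconnected.");
    client.Close();
}
else
{
    data = System.Text.Encoding.ASCII.GetString(buffer, 0, bytesRead);
    WriteToConsole("Received Command: " + data);
}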
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workarounds for this tend to rely on sending the amount of data to expect as part of the protocol. That's what HTTP 1.1 does, using the Content-Length header (for an entire entity) or chunked transfer encoding (with various chunk sizes).
Another way is to send "NOOP" or similar commands (essentially messages that do nothing but make sure the communication is still open) as part of your protocol regularly.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
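A common concrete form of the first workaround is a fixed-size length prefix in front of every message; the following is a sketch (the framing format is my assumption, not something from the question):
// Sketch: prefix each payload with its 4-byte length so the reader always knows
// exactly how many bytes to expect. Uses System, System.IO and System.Net.Sockets.
static void WriteMessage(NetworkStream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
    stream.Write(lengthPrefix, 0, lengthPrefix.Length);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadMessage(NetworkStream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Remote side closed the connection.");
        offset += read;
    }
    return buffer;
}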
