Calling EndConnect after BeginConnect - C#

According to this MSDN article, the socket.EndConnect method should be called in the AsyncCallback delegate provided in the original socket.BeginConnect call.
What is not clear (and the MSDN article is silent here) is whether EndConnect should be called after a timeout (and the socket is NOT connected). socket.EndConnect throws an exception in this case.
What is the proper procedure to follow after timeout? What are the consequences if EndConnect is not called (either after a successful connection or timeout without connection)? My code appears to work fine without calling EndConnect.
Here is some example code covering the main ideas in the question:
// Member variables
private static ManualResetEvent m_event;
private static Socket m_socket;

// Static constructor of class
static CMyTestConnection()
{
    // Create an event that can be used to wake this thread when the connection completes
    m_event = new ManualResetEvent(false);
}

private static void TestConnection(object sender, EventArgs e)
{
    // Create connection endpoint
    IPAddress ip = IPAddress.Parse("200.1.2.3"); // Deliberately incorrect
    IPEndPoint ipep = new IPEndPoint(ip, 12345); // Also deliberately incorrect
    EndPoint ep = (EndPoint)ipep;

    // Attempt connection
    m_event.Reset();
    m_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    m_socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, 1);
    m_socket.BeginConnect(ep, ConnectCompletedCallback, m_socket);
}

private static void ConnectCompletedCallback(IAsyncResult ar)
{
    // The asynchronous connection call has completed. Either we have connected (success) or
    // timed out without being able to connect (failure).
    m_event.Set();
    Socket s = (Socket)ar.AsyncState;
    if (s.Connected)
    {
        // Success...should EndConnect only be called here?
        s.EndConnect(ar);
    }
    else
    {
        // Or should EndConnect also be called here (in a try/catch block)?
        s.Close();
    }
}

You invited me to this chat room. I am assuming this is the question to which you're referring, but it's hard for me to know for sure. Your message in the chat room doesn't have a real URL. I looked at your question links in your profile, and the only one I recognize is this one, which isn't closed at the moment. So there's no need to vote to re-open.
That said, the answer is still the same as already provided in the comments: you always call the EndXXX method when you've called BeginXXX (the few known exceptions don't apply here). There's nothing in your question, even after the recent edit, that would indicate what more you need.
You don't show how the timeout is implemented, so there's not even enough information to understand the code you posted. But if you are closing the socket, thus causing your callback to be invoked where EndConnect() will throw an exception, you should be calling EndConnect(). Failing to do so can potentially leave unmanaged resources dangling, which would then eventually be exhausted, or at the very least lead to performance problems.
The source code for .NET is readily available, so you can easily examine the implementation yourself. In the case of Socket.EndConnect(), we can see that for the current implementation, if the socket has already been disposed, all that happens is an exception is thrown. So, in theory, you could ignore sockets that have already been closed. I.e. this is an exception to the general concern about leaving resources dangling, in the specific "socket is already closed" scenario. But only if your timeout is implemented by closing the socket.
There are a couple of problems here though, related to race conditions:
Depending on how the timeout is implemented (you didn't share that part, so the question is still incomplete), you may have code that got as far as starting to call Socket.Close(), but which has not set the disposed flag. You'll be dealing with a connected socket that is about to become disconnected, and you need to have try/catch in place to handle that scenario.
Your callback assumes (it seems…again, there's not enough context in your question) that the Connected property is a reliable way to detect that there's been a timeout, but the Connected property could theoretically be reset to false after being connected, but before your callback gets to execute (e.g. some other type of error on the socket).
As far as the question of calling EndConnect() on a successful connection, that is much more clear: you must do so. If your code appears to work even though you haven't, that's just you getting lucky. We can see in the implementation that the EndConnect() method does useful work to configure the socket state when called after a successful connection, so if you fail to call the method, your socket will be in some indeterminate, incompletely configured state.
Naturally, if your timeout is implemented in some other way, where the socket is not closed before the callback is invoked, then you are in the same situation as if the connection had completed, and you must call EndConnect() to ensure that the appropriate cleanup and socket configuration occurs. I.e. that would be the same as the "successful connection" scenario.
The bottom line is, there is zero benefit to not calling EndConnect() in the event of a close/dispose-based timeout. The only hypothetical benefit might be that you can avoid try/catch, but you can't get away without that, because of the race conditions that exist. And if there's not such a timeout, not only is there not a benefit to not calling the method, there is real harm in failing to call it.
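To make that concrete, here is a minimal sketch of a close-based timeout plus a callback that always calls EndConnect(). The TestConnectionWithTimeout wrapper and the five-second wait are assumptions (you didn't show how your timeout works); the m_event, m_socket and ConnectCompletedCallback names mirror the ones in your question:

// Hypothetical timeout wrapper: wait a few seconds for the callback to fire,
// otherwise close the socket to abort the pending connect.
private static void TestConnectionWithTimeout(EndPoint ep)
{
    m_event.Reset();
    m_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    m_socket.BeginConnect(ep, ConnectCompletedCallback, m_socket);

    if (!m_event.WaitOne(TimeSpan.FromSeconds(5)))
    {
        // Timed out: closing the socket causes the pending BeginConnect to complete,
        // and the callback's EndConnect() will then throw.
        m_socket.Close();
    }
}

private static void ConnectCompletedCallback(IAsyncResult ar)
{
    Socket s = (Socket)ar.AsyncState;
    try
    {
        // Always complete the Begin/End pair, success or failure.
        s.EndConnect(ar);
        // Connected: the socket is now fully configured and ready for Send/Receive.
    }
    catch (ObjectDisposedException)
    {
        // The timeout closed the socket before the connection completed; nothing left to clean up.
    }
    catch (SocketException)
    {
        // The connect attempt itself failed (host unreachable, connection refused, etc.).
        s.Close();
    }
    finally
    {
        m_event.Set();
    }
}

The try/catch covers both the "socket already closed by the timeout" case and the race conditions described above, while still letting EndConnect() do its configuration work on a successful connection.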
On a related note, there's not enough context in your question to make any real assessment of the rest of your code (since you didn't show how you're implementing the timeout, nor how the rest of your network I/O is handled). But I will say that in most cases, the "reuse address" option is unnecessary and should not be used. Most people wind up using it because they get into a situation where they can't start a new listening socket after they have somehow stopped a previous one, but that problem only comes up when the first listening socket and/or its associated connected sockets have not been closed or shut down correctly. The correct approach in that case is to handle the socket closure/shutdown correctly, not to add to the problem by setting "reuse address".

Related

How can I detect socket closure with a SocketAsyncEventArgs?

I'm running into an interesting scenario when I'm trying to roll with .Net's SocketAsyncEventArgs. Namely, the fact that they can't seem to detect when a graceful remote socket shutdown has occurred.
Bit of background: I'm updating a legacy application from MFC to a .NET project, and my code needs to interface with all other legacy MFC code. In the legacy MFC code, the MFC backend automatically registers when a remote connection is gracefully closed with a FIN or RST signal. I've observed this behavior in action, and all the user can or needs to interact with is overloading the OnClose method that MFC provides.
I can't replicate that in C# or C++/CLI at the moment. The setup for the SocketAsyncEventArgs I use to handle all receive operations looks like this:
static void AcceptHandler(System::IAsyncResult^ ar)
{
    ServerSocket ^server = (ServerSocket ^)ar->AsyncState;
    try
    {
        server->Socket = gcnew SocketMgr(server->listener->EndAcceptSocket(ar));

        // pConnectionChangedCb is a function variable I use for updating the GUI when
        // connection status changes. receiveDataHandler is another function
        // variable for logging purposes.
        if (server->pConnectionChangedCb)
        {
            server->pConnectionChangedCb(server->nID);
        }
        if (server->receiveDataHandler)
        {
            System::Net::Sockets::SocketAsyncEventArgs ^receiveArgs = gcnew System::Net::Sockets::SocketAsyncEventArgs();
            receiveArgs->SetBuffer(server->readbuffer, server->nOffset, server->nBytesToGet - server->nOffset);
            receiveArgs->Completed +=
                gcnew System::EventHandler<System::Net::Sockets::SocketAsyncEventArgs ^>(server, &ServerSocket::IO_Completed);
            server->Socket->ReceiveAsync(receiveArgs);
        }
    }
    catch (System::Net::Sockets::SocketException ^e)
    {
        System::Windows::Forms::MessageBox::Show("OnAccept: Could not Accept, exception" + e->ErrorCode);
        server->listener->EndAcceptSocket(ar);
    }
}

void IO_Completed(System::Object ^sender, System::Net::Sockets::SocketAsyncEventArgs ^e)
{
    if (!(e->SocketError == System::Net::Sockets::SocketError::Success))
    {
        kPrintf("Error.");
    }

    // Determine which type of operation just completed and call the associated handler
    switch (e->LastOperation)
    {
    case System::Net::Sockets::SocketAsyncOperation::Receive:
        ProcessReceive(e);
        break;
    case System::Net::Sockets::SocketAsyncOperation::Send:
        ProcessSend(e);
        break;
    default:
        throw gcnew System::ArgumentException("The last operation completed on the socket was not a receive or send");
    }
};
From what I've observed, when the remote socket ceases to exist, the SocketAsyncEventArgs object in the middle of the read exists in a state where it has not been completed, and will never be completed. As it fails to complete, IO_Completed will never be called, and I will be unable to use this to detect when a socket sends a graceful disconnect. So it can't be used.
...The only problem with this being, of course, that there's no OnRemoteClose (or equivalent) event for me to subscribe to in System.Net.Sockets.Socket or in the SocketAsyncEventArgs, leaving me unable to detect a socket FIN or RST signal and keeping the socket open longer than expected. C# probably has a way around this, but I can't, for the life of me, find it. Has anyone else wrestled with this before?
As it turns out, SocketAsyncEventArgs does record a graceful termination of the remote socket, regardless of the client's language. It does not expose the underlying TCP/IP events or anything similar; instead it simply reports the closure as a completed receive of zero bytes.
My code, because of a PEBKAC error, was not receiving the empty 0-byte messages, and thus I could never 'see' the graceful shutdown.
(In the event anyone has this issue in the future, the problem is that the ProcessReceive method should have called ReceiveAsync to continue the loop after receiving its first signal, and it... wasn't, for reasons unrelated to the code.)
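For anyone hitting the same wall, here is a rough C# sketch of what the receive side needs to do. ProcessReceive and the buffer handling here are illustrative rather than lifted from the code above:

// Assumed context: 'socket' is the connected System.Net.Sockets.Socket and 'e' is the
// SocketAsyncEventArgs whose Completed event routes Receive completions here.
private void ProcessReceive(Socket socket, SocketAsyncEventArgs e)
{
    if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
    {
        // Normal data: consume e.Buffer[e.Offset .. e.Offset + e.BytesTransferred - 1] here...

        // ...then re-post the receive so that the next completion (including the
        // zero-byte completion that signals a graceful FIN) is actually observed.
        if (!socket.ReceiveAsync(e))
        {
            ProcessReceive(socket, e); // completed synchronously; handle it inline
        }
    }
    else
    {
        // Zero bytes transferred (graceful remote shutdown) or a socket error:
        // treat it as the connection being closed.
        try { socket.Shutdown(SocketShutdown.Both); } catch (SocketException) { }
        socket.Close();
    }
}

The key point is the re-posted ReceiveAsync: without it there is no outstanding receive, so the zero-byte completion that represents the remote close never arrives.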

Is it good practice to put try-catch in a loop until all statements in the try block are executed without any exceptions?

I was trying to develop a multicast receiver program and socket initialization was done as shown below:
public void initializeThread()
{
    statuscheckthread = new Thread(SetSocketOptions);
    statuscheckthread.IsBackground = true;
}

private void Form1_Load(object sender, EventArgs e)
{
    rxsock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    iep = new IPEndPoint(IPAddress.Any, 9191);
    rxsock.Bind(iep);
    ep = (EndPoint)iep;
    initializeThread();
    statuscheckthread.Start();
}

public void SetSocketOptions()
{
    initializeThread(); //re-initializes thread thus making it not alive
    while (true)
    {
        if (NetworkInterface.GetIsNetworkAvailable())
        {
            bool sockOptnSet = false;
            while (!sockOptnSet)
            {
                try
                {
                    rxsock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(IPAddress.Parse("224.50.50.50")));
                    rxsock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 64);
                    sockOptnSet = true;
                }
                catch
                {
                    //Catch exception here
                }
            }
        }
        break; // Break out from loop once socket options are set
    }
}
When my PC is not connected to a network, the SetSocketOption method was throwing an exception, and even after the network was connected I was unable to receive data because the socket options were not set. To avoid this I used a thread which runs in the background checking for network availability; once the network is available, it sets the socket options.
This works properly on some PCs, but on others NetworkInterface.GetIsNetworkAvailable() returned true before the network was actually connected (while the network was still being identified). So, to make sure the socket options are set, I used a bool variable sockOptnSet which is set to true only once all the statements in the try block have executed, as shown inside the SetSocketOptions method above.
This program works fine on all the PCs I tried, but I am doubtful about how much I can rely on it.
My questions are:
1) Is this good practice?
2) If not, what are the possible errors or problems it may cause? And how can I implement it in a better way?
Is this a good practice?
No, not a good practice. The vast majority of exceptions, including your first one, fall in the category of vexing exceptions. Software is supposed to work, worked well when you tested it, but doesn't on the user's machine. Something went wrong but you do not know what and there isn't anything meaningful that you can do about it. Trying to keep your program going is not useful, it cannot do the job it is supposed to do. In your case, there's no hope that the socket is ever going to receive data when there is no network. And, as you found out, trying to work around the problem just begets more problems. That's normal.
If this is bad practice, how can I implement it in a better way?
You need help from a human. The user is going to have to set up the machine to provide a working network connection. This requires a user interface: you must have a way to tell a human what he needs to do to solve your problem. You can make that as intricate or as simple as you desire; just an error message, a verbatim copy of the Exception.Message, can be enough. Writing an event handler for the AppDomain.CurrentDomain.UnhandledException event is a very good (and required) strategy. Microsoft spent an enormous amount of effort to make exception messages as clear and helpful as possible, even localizing them into the user's native language, so you want to take advantage of that. Even if the exception message is mystifying, a quick Google query on the message text returns hundreds of hits. With this event handler in place, you don't have to do anything special: your program automatically terminates and your user knows what to do about it.
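A minimal sketch of that last suggestion (the handler body and message text are just illustrative):

// Hook this up once, early in Main(), before any worker threads are started.
AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
{
    var ex = (Exception)args.ExceptionObject;
    // Surface the localized, human-readable message and let the process terminate.
    MessageBox.Show(ex.Message, "Fatal error");
    Environment.Exit(1);
};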
You can certainly make it more intricate. You discovered that SetSocketOption() is liable to fail right after the network becomes available but works when you wait long enough, so this is actually an error condition that you can work around, just by waiting long enough. Whether you should write the code to handle this is something that you have to decide for yourself. It is something you write when you have enough experience with the way your program behaves; you never write it up front. Usually it comes from feedback from the users of your program.
There's some good advice in the comments; let's expand on it.
Firstly, I would put all this socket code into its own class, outside of the form. This makes it its own entity and semantically easier to understand. This class could have a property Initialised, which is initially set to false. The first thing you do in your form is call an Initialise method on this class, which attempts to set the socket options and catches the relevant exceptions if the network is not available. If it is available, we set our Initialised property to true.
If it is not available, we set a single timeout (see System.Threading.Timer) that calls this same Initialise function again after 'x' seconds, perhaps with a retry count. Once again, if the network is available, we're good; if not, set the timer again. Eventually, after 'x' retries, if we're still not initialised we can throw an exception or set some other failure property to indicate that we can't proceed.
Your Form class can periodically check (or hook in to an event) to determine whether the socket is now ready for communication. In case of failure you can gracefully quit out, or because our class is nice and abstracted, attempt to start the whole process again.
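A rough sketch of what that class could look like; the MulticastReceiver name, the retry budget and the delay are made up for illustration:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class MulticastReceiver
{
    private readonly Socket _socket;
    private Timer _retryTimer;
    private int _retriesLeft = 5;           // arbitrary retry budget for illustration
    private const int RetryDelayMs = 10000; // wait 10 seconds between attempts

    public bool Initialised { get; private set; }
    public bool Failed { get; private set; }

    public MulticastReceiver()
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(new IPEndPoint(IPAddress.Any, 9191));
    }

    public void Initialise()
    {
        try
        {
            _socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership,
                new MulticastOption(IPAddress.Parse("224.50.50.50")));
            _socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 64);
            Initialised = true;
        }
        catch (SocketException)
        {
            // Network not ready yet: schedule a single retry instead of spinning in a loop.
            if (--_retriesLeft > 0)
            {
                _retryTimer = new Timer(_ => Initialise(), null, RetryDelayMs, Timeout.Infinite);
            }
            else
            {
                Failed = true; // give the form a chance to report the problem and quit gracefully
            }
        }
    }
}

The form then only needs to construct this class, call Initialise(), and watch the Initialised/Failed properties (or an event you add) instead of managing its own background thread.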

What is the drawback if I do not invoke the UdpClient.Close() method?

I have the following code block, and it affects my program's efficiency. The problem is that if the target host exists, everything is OK. But if it does not exist, it takes too long to execute. Eventually I found out that the udp.Close() call occupies most of the execution time. If I do not invoke the Close method, the efficiency is good.
Can anyone tell me what the drawback is if I do not invoke the Close method? Thank you very much.
{   // This is my code block, I need to execute it many many times.
    string ipAddress = Dns.GetHostAddresses("joe-pc").FirstOrDefault().ToString();
    UdpClient udp = new UdpClient(ipAddress, Port);
    udp.Send(msg, msg.Length);
    udp.Close();
    udp = null;
}
The drawback is that you'll have a resource leak. You may be lucky enough that garbage collection happens often enough that it doesn't demonstrate itself in your program, but why take the chance? From the documentation on Close:
The Close disables the underlying Socket and releases all managed and unmanaged resources associated with the UdpClient.
Note, it talks of unmanaged resources. These will only be released by the UdpClient running some code - it either does it in Close/Dispose, or it has to do it in its Finalize method - nothing else will cause them to be released (assuming the program stays running).
You may be able to hide the cost of the Close operation by using Task.Run to have it run on another thread - but you'd have to weigh up the cost of doing so.
Or, to put it in more concrete terms - you say that you need this method to run many times. By not cleaning up your resources here, you would increase the chances that a subsequent call will fail completely because it cannot acquire the required resources (they're all tied up in existing, non-Closed UdpClient instances).
And, as indicated in my comment, the following line is pointless:
udp = null;
Such code used to be of use in COM era VB, but it has no place in the .NET world.
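If you do want cleanup that is both explicit and guaranteed, a using block does the same job as Close() and is exception-safe. A sketch of the same code block, assuming Port and msg are defined as in the question:

// 'Port' and 'msg' are assumed to exist exactly as in the original snippet.
string ipAddress = Dns.GetHostAddresses("joe-pc").FirstOrDefault().ToString();
using (UdpClient udp = new UdpClient(ipAddress, Port))
{
    udp.Send(msg, msg.Length);
}   // Dispose() runs here, closing the socket even if Send throws.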
Close() disables the underlying Socket and releases all managed and unmanaged resources associated with the UdpClient.
If you do not close it, those resources, such as your port and IP address, are not disabled and deallocated.
Instead of using Send, could you use BeginSend and then handle any exception in your callback when you attempt to Close, if that is actually the problem?
You're targeting joe-pc, presumably with different IP addresses as the DNS record changes, but you can reuse the same UdpClient for each send. Just remember to Close() it when you're all done.
Use
//
// Summary:
// Sends a UDP datagram to a specified port on a specified remote host.
//
// Parameters:
// dgram:
// An array of type System.Byte that specifies the UDP datagram that you intend
// to send represented as an array of bytes.
//
// bytes:
// The number of bytes in the datagram.
//
// hostname:
// The name of the remote host to which you intend to send the datagram.
//
// port:
// The remote port number with which you intend to communicate.
//
// Returns:
// The number of bytes sent.
//
public int Send(byte[] dgram, int bytes, string hostname, int port);
and skip the dns lookup too.
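Putting those two suggestions together, the loop might look something like this sketch (the messages collection and Port are placeholders, not names from the question):

// Create one client up front and reuse it for every datagram.
UdpClient udp = new UdpClient();

foreach (byte[] msg in messages)
{
    // This overload resolves "joe-pc" itself, so no explicit Dns.GetHostAddresses call is needed.
    udp.Send(msg, msg.Length, "joe-pc", Port);
}

// Close once, when you're completely done sending.
udp.Close();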
I know this has been a while, but I stumbled upon this question today and wanted to add:
udp = null;
is not pointless, IMHO. In some cases you can see in a memory profiler that your instance is still reachable if you don't set it to null.

Detecting unexpected socket disconnect

This is not a question about how to do this, but a question about whether what I'm doing is wrong. I've read that it's not possible to detect that a socket has been closed unexpectedly (like killing the server/client process, or pulling the network cable) while waiting for data (BeginReceive), without the use of timers or regularly sent messages, etc. But for quite a while I've been using the following setup to do this, and so far it has always worked perfectly.
public void OnReceive(IAsyncResult result)
{
    try
    {
        var bytesReceived = this.Socket.EndReceive(result);
        if (bytesReceived <= 0)
        {
            // normal disconnect
            return;
        }
        // ...
        this.Socket.BeginReceive...;
    }
    catch // SocketException
    {
        // abnormal disconnect
    }
}
Now, since I've read it's not easily possible, I'm wondering if there's something wrong with my method. Is there? Or is there a difference between killing processes and pulling cables and similar?
It's perfectly possible and OK to do this. The general idea is:
If EndReceive returns anything other than zero, you have incoming data to process.
If EndReceive returns zero, the remote host has closed its end of the connection. That means it can still receive data you send if it's programmed to do so, but cannot send any more of its own under any circumstances. Usually when this happens you will also close your end of the connection, thus completing an orderly shutdown, but that's not mandatory.
If EndReceive throws, there has been an abnormal termination of the connection (process killed, network cable cut, power lost, etc).
A couple of points you have to pay attention to:
EndReceive can never return less than zero (the test in your code is misleading).
If it throws it can throw other types of exception in addition to SocketException.
If it returns zero you must be careful to stop calling BeginReceive; otherwise you will begin an infinite and meaningless ping-pong game between BeginReceive and EndReceive (it will show in your CPU usage). Your code already does this, so no need to change anything.
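Taking those points into account, the callback from the question could be tightened up along these lines. This is a sketch, not a drop-in replacement, and _buffer stands in for whatever receive buffer the surrounding class actually uses:

// '_buffer' is assumed to be the receive buffer owned by the surrounding class.
public void OnReceive(IAsyncResult result)
{
    int bytesReceived;
    try
    {
        bytesReceived = this.Socket.EndReceive(result);
    }
    catch (Exception) // SocketException, ObjectDisposedException, ...
    {
        // abnormal disconnect: process killed, cable pulled, socket closed locally, ...
        return;
    }

    if (bytesReceived == 0)
    {
        // orderly shutdown by the remote host; do NOT post another BeginReceive
        return;
    }

    // ... process _buffer[0 .. bytesReceived - 1] here ...

    // then post the next read
    this.Socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
}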

Reuse asynchronous socket: subsequent connect attempts fail

I'm trying to reuse a socket in an asynchronous HTTP client, but I'm not able to connect to the host the second time around. I basically treat my asynchronous HTTP client as a state machine with the following states:
Available: the socket is available for use
Connecting: the socket is connecting to the endpoint
Sending: the socket is sending data to the endpoint
Receiving: the socket is receiving data from the endpoint
Failed: there was a socket failure
Clean Up: cleaning up the socket state
In the connecting state I call BeginConnect:
private void BeginConnect()
{
    lock (_sync) // re-entrant lock
    {
        IPAddress[] addresses = Dns.GetHostEntry(_asyncTask.Host).AddressList;
        // Connect to any available address
        IAsyncResult result = _reusableSocket.BeginConnect(addresses, _asyncTask.Port, new AsyncCallback(ConnectCallback), null);
    }
}
The callback method changes the state to Sending once a successful connection has been established:
private void ConnectCallback(IAsyncResult result)
{
    lock (_sync) // re-entrant lock
    {
        try
        {
            _reusableSocket.EndConnect(result);
            ChangeState(EClientState.Sending);
        }
        catch (SocketException e)
        {
            Console.WriteLine("Can't connect to: " + _asyncTask.Host);
            Console.WriteLine("SocketException: {0} Error Code: {1}", e.Message, e.NativeErrorCode);
            ThreadPool.QueueUserWorkItem(o =>
            {
                // An attempt was made to get the page so perform a callback
                ChangeState(EClientState.Failed);
            });
        }
    }
}
In the cleanup I Shutdown the socket and Disconnect with a reuse flag:
private void CleanUp()
{
    lock (_sync) // re-entrant lock
    {
        // Perform cleanup
        if (_reusableSocket.Connected)
        {
            _reusableSocket.Shutdown(SocketShutdown.Both);
            _reusableSocket.Disconnect(true);
        }
        ChangeState(EClientState.Available);
    }
}
Subsequent calls to BeginConnect result in a timeout and an exception:
SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond XX.XXX.XX.XX:80
Error Code: 10060
Here is the state trace:
Initializing...
Change State: Connecting
Change State: Sending
Change State: Receiving
Change State: CleanUp
Callback: Received data from client 0 // <--- Received the first data
Change State: Available
Change State: Connecting // <--- Timeout when I try to reuse the socket to connect to a different endpoint
What do I have to do to be able to reuse the socket to connect to a different host?
Note: I have not tried to re-connect to the same host, but I assume the same thing happens (i.e. fails to connect).
Update
I found the following note in the documentation of BeginConnect:
If this socket has previously been disconnected, then BeginConnect must be called on a thread that will not exit until the operation is complete. This is a limitation of the underlying provider. Also the EndPoint that is used must be different.
I'm starting to wonder if my issue has something to do with that... I am connecting to a different EndPoint, but what do they mean that the thread from which we call BeginConnect must not exit until the operation is complete?
Update 2.0:
I asked a related question and I tried using the "Async family" calls instead of the "Begin family" calls, but I get the same problem!!!
I commented on this question: "what is benefit from socket reuse in C#", about socket reuse using Disconnect(true)/DisconnectEx(), and that may help you.
Personally I think it's an optimisation too far in client code.
Re: update 1 to your question; no, you'd get an AbortedOperation exception if that were the case (see here: VB.NET 3.5 SocketException on deployment but not on development machine), and the docs are wrong if you're running on Vista or later, as it doesn't enforce the "thread must exist until after the overlapped I/O completes" rule that previous operating systems enforce.
As I've already said in the reply to the linked question, there's very little point in using this functionality for outbound connection establishment. It's likely that it was originally added to the Winsock API to support socket reuse with AcceptEx() for inbound connections on a very busy web server that was using TransmitFile() to send files to clients (which is where disconnect-for-reuse seems to have originated). The docs state that it doesn't play well with TIME_WAIT, and so using it for connections where you initiate the active close (and thus put the socket into TIME_WAIT, see here) doesn't really make sense.
Can you explain why you think this micro optimisation is actually necessary in your case?
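In other words, unless profiling shows the reuse really pays off, the simpler pattern is to create a fresh socket for each connection attempt. A sketch, using the field names from the question:

private void BeginConnect()
{
    lock (_sync) // re-entrant lock
    {
        // A brand-new socket per attempt: no Disconnect(true), no reuse restrictions,
        // and no interaction with TIME_WAIT left over from the previous connection.
        _reusableSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        IPAddress[] addresses = Dns.GetHostEntry(_asyncTask.Host).AddressList;
        _reusableSocket.BeginConnect(addresses, _asyncTask.Port, ConnectCallback, null);
    }
}

private void CleanUp()
{
    lock (_sync) // re-entrant lock
    {
        if (_reusableSocket.Connected)
        {
            _reusableSocket.Shutdown(SocketShutdown.Both);
        }
        _reusableSocket.Close();   // just close; the next attempt gets a new socket
        ChangeState(EClientState.Available);
    }
}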
Have you checked the MaxConnections setting?
http://msdn.microsoft.com/de-de/library/system.servicemodel.nettcpbinding.maxconnections.aspx
