I'm trying to find a legitimate way to tell whether the socket has a live connection to the server. The connection is invalid if the client hasn't connected yet, or if either the client or server has closed it.
This method currently returns true before I ever connect to the server using this socket object. Can anyone explain why it does that, and how I can fix it?
private static bool IsConnected(Socket socket)
{
    // Poll(SelectRead) returns true when data is waiting OR when the
    // connection has been closed, reset, or terminated; Available == 0
    // is meant to pick out the closed case.
    return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
}
The only way to know if a socket is valid is to send something and get something back; every other way is a lie. .Available only tells you whether there is data in the current read buffer. The reported connection status is at best unreliable: sockets can die without the OS noticing, and sockets can be artificially kept "alive" by network hardware in an attempt to stop temporary network blips (especially for Wi-Fi or mobile users) from severing all their sockets. Sometimes that device never comes back, and the device in the middle doesn't notice. This means that even TCP-level broken-socket detection is unreliable.
So: send some kind of test message and get something back. That is the only way.
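As a minimal sketch of that idea (the "PING"/"PONG" strings, reply size, and timeout are illustrative assumptions; the remote end must be written to answer the probe):

using System.Net.Sockets;
using System.Text;

static class SocketProbe
{
    // Treats any SocketException, timeout, or graceful shutdown as "dead".
    public static bool TryPing(Socket socket, int timeoutMs = 2000)
    {
        try
        {
            socket.ReceiveTimeout = timeoutMs;
            socket.Send(Encoding.ASCII.GetBytes("PING"));

            byte[] reply = new byte[4];
            int read = socket.Receive(reply);

            // Receive returning 0 means the remote end closed gracefully.
            return read > 0 && Encoding.ASCII.GetString(reply, 0, read) == "PONG";
        }
        catch (SocketException)
        {
            return false;
        }
    }
}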
I'd like to provide my TCP/IP client class with a CheckConnection function so that I can check whether something has gone wrong (my own client disconnected, the server disconnected, the server stalled, ...).
I have something like this:
bool isConnectionActive = Client.Poll(100000, SelectMode.SelectWrite);
based on what MSDN says:
SelectWrite: true, if processing a Connect(EndPoint), and the connection has succeeded; -or- true if data can be sent; otherwise, returns false.
The point is that, testing this with a simple server application, CheckConnection always returns true, even after the server's listener has been closed and even after the server application has been shut down. That's weird, because in those cases I'd expect that no connection is being processed (the connection was made minutes ago) and no data can be sent.
I have already implemented a similar connection check on the server side using a combination of Poll with SelectRead and Available, and it seems to work properly; so now, should I write something similar on the client side? Is the SelectWrite approach correct (and I'm just using it improperly)?
There are lots of things you can check but none of them are assured to give you the result you are looking for. Even the implementation you have on the server will not work 100% of the time. I guarantee it will fail one day.
There are FIN packets, which should be sent from the client to the server, and vice versa when a connection is closed, but there is no guarantee that these will be delivered, or even processed.
This is generally known as the TCP Half Open problem.
Closing a TCP Socket is a mutually agreed process, you generally have a messaging protocol which tells the other end that it's closing, or you have some predefined set of instructions and you close after that.
The only reliable way to 100% detect if a remote socket is closed is to send some data to it. Only if you get an error back will you know if the socket has closed.
Some applications which don't send a lot of data implement a keep-alive protocol, they simply send/receive a few bytes every minute, so they know that the remote endpoint is present.
You can technically have two servers that are in a connected state and haven't sent data to each other for 10 years. Each end continues to believe that the other is there until one tries to send some data and finds out it isn't.
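For completeness, the mutually agreed close described above looks roughly like this in .NET (the drain loop and buffer size are assumptions of this sketch):

using System.Net.Sockets;

static void CloseGracefully(Socket socket)
{
    // Announce we have finished sending; this emits the FIN.
    socket.Shutdown(SocketShutdown.Send);

    // Drain whatever the peer still has in flight. Receive returning 0
    // means the peer has sent its own FIN, so the close is mutual.
    byte[] buffer = new byte[1024];
    while (socket.Receive(buffer) > 0) { }

    socket.Close();
}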
Let's say we have a basic TCP server built on a .NET TcpListener, and a basic TCP client built on a .NET TcpClient.
What types of connection terminations are there, and how are they supposed to be checked for and handled?
Client
A. Client gracefully terminates the connection. The server is notified.
B. Client physically disconnects from the network. How does the server know?
C. Client program shuts down without a graceful disconnect. How does the server know?
Server
A. Server gracefully terminates the connection. The client is notified.
B. Server physically disconnects from the network. How does the client know?
C. Server program shuts down without a graceful disconnect. How does the client know?
Cases A and C are communicated by a TCP packet with the FIN flag set in the header. It is sent by the TCP/IP stack in the OS, so it doesn't matter if the application exited abnormally. The subcase of C where the OS also failed will act like B instead.
Case B, when you've lost the ability to communicate, is more complicated. If the failure is local (e.g. disassociation from a WiFi access point), then the local end of the connection will find out immediately about a change in network status, and can infer that the connection is broken (but if not cleaned up, a connection can survive a short-term outage).
If the connection is actively transmitting data, the acknowledgements will time out and result in retransmission attempts. A limit may be placed on retransmission attempts, resulting in an error.
If there is no traffic, then a loss of connection can go undetected for a very long time (multiple days). For this reason, TCP connections are often configured to send heartbeat packets, which must be acknowledged; failed retransmission of a heartbeat is detected in the same manner as for normal data.
The short answer is: neither side knows instantly when the other disconnects, until of course something lets you know. TCP isn't a physical connection, so it relies on signals of some sort to determine the state, usually either checking the connection or receiving a graceful disconnect message from the other side. There are various ways to check for connections (polling, timeouts, catching socket exceptions, etc.). It really depends on the framework you're using and what your needs are.
If you have the resources to poll, then you can poll every x seconds and check the state.
Check this extension method:
static class SocketExtensions
{
    public static bool IsConnected(this Socket socket)
    {
        try
        {
            // "Readable, but zero bytes available" signals a connection
            // that has been closed, reset, or terminated.
            return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
        }
        catch (SocketException) { return false; }
    }
}
Basically, socket.Poll() returns true if there is data to read, or if the connection has been closed, reset, or terminated. socket.Available returns the number of bytes received and waiting to be read. If Available is 0 and Poll() returned true, the connection is effectively closed. The true/false logic gets confusing to follow, but this returns with pretty good accuracy.
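A hypothetical way to put it to use (the 5-second interval and the onDisconnected callback are assumptions, not part of the original answer):

using System;
using System.Net.Sockets;
using System.Timers;

static class ConnectionWatchdog
{
    // Polls the socket every 5 seconds and fires a callback once it looks dead.
    public static Timer Start(Socket socket, Action onDisconnected)
    {
        var timer = new Timer(5000);
        timer.Elapsed += (sender, args) =>
        {
            if (!socket.IsConnected())   // the extension method above
            {
                timer.Stop();
                onDisconnected();
            }
        };
        timer.Start();
        return timer;
    }
}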
I have a TCP-socket-based client/server system.
Everything works fine, but when the network is disconnected on the client end and then reconnected, the server automatically gets a SocketError.ConnectionReset from the client, and in response the socket is closed on the server side. This is also fine.
But when I look at the client side, it shows the socket is still connected to the server. (This does not happen every time; sometimes it shows disconnected and sometimes connected.)
Does it make sense that "the server gets a SocketError.ConnectionReset from the client end but the client is still connected"?
So I want to know the possible reasons for SocketError.ConnectionReset and how to handle the type of problem I have mentioned.
Again I say, everything works fine in a normal environment (e.g. if I exit the client, the socket is disconnected; same for the server).
Thanks in advance.
EDIT:
Here is the code on the client side. It's a timer that ticks every 3 seconds throughout the program's lifetime and checks whether the socket is connected; if it's disconnected, it tries to reconnect through a new socket instance:
private void timerSocket_Tick(object sender, EventArgs e)
{
    try
    {
        if (sck == null || !sck.Connected)
        {
            ConnectToServer();
        }
    }
    catch (Exception ex)
    {
        RPLog.WriteDebugLog("Exception occurred at: " + System.Reflection.MethodBase.GetCurrentMethod().ToString() + " Message: " + ex.Message);
    }
}
In a normal situation (without network disconnect/reconnect), if the TCP server gets a SocketError.ConnectionReset from any client, on the client side I see that the client's socket is disconnected and it tries to reconnect through the code shown. But when the situation explained earlier happens, the server gets a SocketError.ConnectionReset yet the client shows it is still connected, even though the TCP server shows the reset was sent from that exact client.
There are several causes, but the most common is that you have written to a connection that has already been closed by the other end; in other words, an application protocol error. When it happens you have no choice but to close the socket: it is dead. However, you can fix the underlying cause.
When discussing a TCP/IP issue like this, you must mention the network details between the client and the server.
When one side says the connection was reset, it simply means that an RST packet appeared on the wire. But to know who sent the RST packet and why, you must capture network packets (using Wireshark or any similar tool).
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
You won't easily find out the cause at .NET Framework level.
The problem with using Socket.Connected as you are is that it only gives you the connected state as of the last Send or Receive operation; i.e. it will not tell you that the socket has disconnected unless you first try to send data to it or receive data from it.
From MSDN description of the Socket.Connected property:
Gets a value that indicates whether a Socket is connected to a remote host as of the last Send or Receive operation.
So in your example, if the socket was functioning correctly when you last sent or received any data from it, the timerSocket_Tick() method would never call ConnectToServer(), even if the socket was now not connected.
how to handle the type of problem I have mentioned?
Close the socket and initiate a new connection.
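If you want to refresh Socket.Connected on demand before deciding, the MSDN entry for the property describes making a non-blocking, zero-byte Send; a sketch of that idea:

using System.Net.Sockets;

static bool IsConnectedNow(Socket socket)
{
    bool blocking = socket.Blocking;
    try
    {
        socket.Blocking = false;
        socket.Send(new byte[1], 0, 0);  // zero-byte, non-blocking send
        return true;
    }
    catch (SocketException ex)
    {
        // WSAEWOULDBLOCK (10035) means the send buffer is full
        // but the socket itself is still connected.
        return ex.SocketErrorCode == SocketError.WouldBlock;
    }
    finally
    {
        socket.Blocking = blocking;
    }
}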
Okay I know there is lots of info out there on this and I promise you I have read it all and tried umpteen different methods to get this working!!
I have a socket server program which runs on a laptop. I then have up to 50 laptops connected wirelessly via the same LAN to the server. The client laptops all connect to the server (using Socket.ConnectAsync), and the server uses async methods as well to send and receive data. The server shows the user a list of connected client laptops; this list seems to be accurate and picks up whenever a client disconnects or connects. However, the client laptops never seem to detect when the connection to the server has been lost under certain circumstances (i.e. if the server program crashes, if the server laptop goes into standby mode, etc.). I have a timer on the client laptops which polls the connection every 5 seconds as follows:
bool SocketConnected(Socket s)
{
    bool part1 = s.Poll(0, SelectMode.SelectWrite);
    bool part2 = (s.Available == 0);
    if (!part1 && part2)
    {
        return false;
    }
    else
    {
        return true;
    }
}
I have tried using all SelectMode values (SelectWrite, SelectRead, SelectError) and have tried using different timeout values. I have tried checking s.Connected after these operations and have tried all manner of other methods to determine the connection state, and nothing seems to produce reliable results! I think I can achieve the result I desire by sending dummy information every 5 seconds and checking s.Connected after doing so, but I don't really want to do this, as each laptop is already sending lots of data to the server as it is. Any help at all is massively appreciated! Thanks
The only reliable way to check if a connection is alive is to send something to the other end and see if it arrives. You can do this either manually by sending and receiving a "ping" value from time to time, or automatically by enabling the KeepAlive socket option.
The MSDN documentation for Socket.Poll is very explicit about the exact situations (server crashes, standby) you mentioned:
This method cannot detect certain kinds of connection problems, such as a broken network cable, or that the remote host was shut down ungracefully. You must attempt to send or receive data to detect these kinds of errors.
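As a sketch of the KeepAlive option mentioned above (the 30-second idle time and 1-second probe interval are arbitrary choices, and the IOControl tuning is Windows-only):

using System;
using System.Net.Sockets;

static void EnableKeepAlive(Socket socket)
{
    // Turn on OS-level TCP keep-alive probes.
    socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

    // Windows-only tuning (SIO_KEEPALIVE_VALS): three uints packed as
    // { on/off, idle time before first probe (ms), interval between probes (ms) }.
    byte[] values = new byte[12];
    BitConverter.GetBytes(1u).CopyTo(values, 0);      // enable
    BitConverter.GetBytes(30000u).CopyTo(values, 4);  // 30 s idle
    BitConverter.GetBytes(1000u).CopyTo(values, 8);   // 1 s between probes
    socket.IOControl(IOControlCode.KeepAliveValues, values, null);
}

With this enabled, a dead connection eventually surfaces as a SocketException on the next Send or Receive rather than hanging silently.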
I run my application on a network, and in some cases the client loses its connection to the server. After that, when I want to send a message to the server, I receive the following error: "Operation not allowed on non-connected sockets" (something like this).
I thought of creating an event on the TcpClient object so that when tcp_obj.Connected == false it calls a function to stop execution of the current code. How could I do this?
Or can you give me other suggestions?
Thanks.
I know, at least from socket programming in Java, that when a client loses its connection to the server, the server does not and cannot know about it. You need a heartbeat of some sort to detect the early disconnection.
We often use a heartbeat in our client/server applications to detect early disconnections and log them on the server. This way the server can close the associated socket and release the connection back to the pool.
Simply send a command to the client periodically and wait for a response. If no response arrives within a timeout, assume a disconnect and close the streams.
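A rough sketch of that pattern (the one-byte heartbeat, 5-second reply timeout, 60-second period, and onDead cleanup callback are all assumptions):

using System;
using System.Net.Sockets;
using System.Threading;

static Timer StartHeartbeat(Socket client, Action onDead)
{
    // Periodically send a heartbeat byte and wait briefly for the echo.
    return new Timer(_ =>
    {
        try
        {
            client.ReceiveTimeout = 5000;
            client.Send(new byte[] { 0x1 });
            if (client.Receive(new byte[1]) == 0)
                onDead();                     // peer closed gracefully
        }
        catch (SocketException)
        {
            onDead();                         // timeout or hard failure
        }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
}

Note this sketch assumes the heartbeat exchange has the socket to itself; if normal traffic shares the connection, the probe has to be folded into the regular message framing instead.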
I would simply check your connection object first, to ensure you are connected, prior to attempting to send the message. Also make sure that you put your send logic inside a try-catch, so that if you do happen to get disconnected mid-transmission, you'll be able to recover without blowing your application apart.
Pseudo-code:
private void SendMessage(string message, Socket socket)
{
    if (socket.Connected)
    {
        try
        {
            // Attempt to send
        }
        catch (SocketException)
        {
            // Disconnect, additional cleanup, etc.
        }
    }
}
If you are in C#, a socket disconnected event will fire before your connection state changes. Make sure you tie this event up as soon as your socket connects.
Can we ask why you use TCP sockets? Is it for calling a TCP device, or server code?
If it is for calling a .NET server app, I recommend using Windows Communication Foundation. It makes it simple to expose services over net.tcp, http, etc.
Regards,
Actually this is a very old problem.
If I understand your question correctly, you need a way to know whether your application is still connected to the server, or vice versa.
If so, then a workaround is to have a UDP connection just to check connectivity (overhead, I know, but it's much better than polling the Connected state); you could check just before you send your data.
Since UDP is not connection-oriented, you don't need to be connected when you send the data.
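A rough sketch of that workaround, assuming the server runs a matching UDP responder (the probe port, one-byte payload, and timeout are all illustrative):

using System.Net;
using System.Net.Sockets;

static bool UdpProbe(string host, int probePort, int timeoutMs = 2000)
{
    using (var udp = new UdpClient())
    {
        udp.Client.ReceiveTimeout = timeoutMs;
        byte[] ping = { 0x1 };
        udp.Send(ping, ping.Length, host, probePort);
        try
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            udp.Receive(ref remote);   // any reply means the server is reachable
            return true;
        }
        catch (SocketException)
        {
            return false;              // timed out: assume the server is down
        }
    }
}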