C# - Check TCP/IP socket status from client side

I'd like to provide my TCP/IP client class with a CheckConnection function so that I can check whether something has gone wrong (my own client disconnected, the server disconnected, the server stuck, ...).
I have something like this:
bool isConnectionActive = false;
if (Client.Poll(100000, SelectMode.SelectWrite))
    isConnectionActive = true;
based on what MSDN says:
SelectWrite: true, if processing a Connect(EndPoint), and the connection has succeeded; -or- true if data can be sent; otherwise, returns false.
The point is that, testing this with a simple server application, I always get true from CheckConnection, even after the server's listener has been closed and even after the server application has been shut down; that's weird, because in those cases I expect that no connection is being processed (the connection was established minutes ago) and no data can be sent.
I have already implemented a similar connection check on the server side using a combination of Poll with SelectRead and Available, and it seems to work properly; so should I now write something similar on the client side as well? Is the SelectWrite approach correct (and I'm just using it improperly)?

There are lots of things you can check, but none of them is assured to give you the result you are looking for. Even the implementation you have on the server will not work 100% of the time; I guarantee it will fail one day.
There are FIN packets, which should be sent from the client to the server (and vice versa) when a connection is closed, but there is no guarantee that these will be delivered or even processed.
This is generally known as the TCP Half Open problem.
Closing a TCP socket is a mutually agreed process; you generally have a messaging protocol which tells the other end that it's closing, or you have some predefined set of instructions after which you close.
The only reliable way to 100% detect if a remote socket is closed is to send some data to it. Only if you get an error back will you know if the socket has closed.
Some applications which don't send a lot of data implement a keep-alive protocol: they simply send/receive a few bytes every minute, so they know that the remote endpoint is present.
You can technically have two servers in a connected state that haven't sent data to each other for 10 years. Each end continues to believe that the other is there until one tries to send some data and finds out it isn't.
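If an application-level ping is not practical, the same idea can be delegated to the OS with the TCP keep-alive socket option. A minimal sketch, not from the original answer (the probe interval is then governed by operating-system settings):
using System.Net.Sockets;

// With SO_KEEPALIVE enabled, the OS periodically probes an idle
// connection, and a dead peer surfaces as a SocketException on the
// next Send/Receive instead of indefinite silence.
static void EnableKeepAlive(Socket socket)
{
    socket.SetSocketOption(SocketOptionLevel.Socket,
                           SocketOptionName.KeepAlive, true);
}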

Related

socket doesn't close when thread done

I have a problem with sockets: when the client thread ends, the server tries to read and freezes, because the socket has not been closed. The thread doesn't close it when it's done. The problem only exists when I use a thread; if I use two independent projects there is no problem (an exception is thrown, and I can catch it).
I can't use a timeout, and the server must continue working correctly even when the client doesn't close the socket.
Sorry for my bad English.
As far as I know, there is no way for a TCP server (listener) to find out whether data from a client has stopped coming because the client has died/quit or is just inactive. This is not a deficiency of .NET; it is how TCP works. The way I deal with it is:
1. Create a timer in my client that periodically sends an "I am alive" signal to the server. For example, I just send one unusual character, '∩' (code 239).
2. In the TCP listener: use NetworkStream.Read(...) with a read timeout (NetworkStream.ReadTimeout). If the timeout expires, the server disposes the old NetworkStream instance and creates a new one on the same TCP port. If the server receives the "I am alive" signal from the client, it keeps listening. A sketch of both halves follows this list.
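A minimal sketch of that pattern, assuming each side already has a connected NetworkStream (intervals, buffer handling, and the marker byte are illustrative):
using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

// Client: send a 1-byte "I am alive" marker every 30 seconds.
static Timer StartHeartbeat(NetworkStream stream)
{
    return new Timer(_ =>
    {
        try { stream.Write(new byte[] { 239 }, 0, 1); }
        catch (IOException) { /* connection gone; reconnect here */ }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
}

// Server: Read throws IOException when nothing (not even a heartbeat)
// arrives within ReadTimeout.
static void ReadWithTimeout(NetworkStream stream, byte[] buffer)
{
    stream.ReadTimeout = 60000; // milliseconds
    try
    {
        int n = stream.Read(buffer, 0, buffer.Length);
        if (n == 0) { /* client closed the connection gracefully */ }
    }
    catch (IOException)
    {
        // timeout or broken connection: dispose and accept a new client
    }
}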
By the way, the TcpClient.Connected property is useless for detecting on the server side whether the client still uses the socket: it only reflects the state as of the last Send or Receive operation. So if the client is alive but just silent, TcpClient.Connected can report a stale value.
Close the client when you want the connection to be closed (at the end of the client's work).
Better yet, use using for all disposable resources, such as both clients and the listener.
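For example, a minimal shape of that advice on the client side (host and port are placeholders):
using System.Net.Sockets;

// TcpClient and NetworkStream are both IDisposable, so using blocks
// guarantee the underlying socket is closed even if Write throws.
using (var client = new TcpClient("server.example.com", 9000))
using (NetworkStream ns = client.GetStream())
{
    byte[] payload = { 1, 2, 3 };
    ns.Write(payload, 0, payload.Length);
} // Dispose -> Close runs here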

What are the possible reasons for SocketError.ConnectionReset in a TCP socket

I have a TCP socket based client-server system.
Everything works fine, but when the network is disconnected from the client end and then reconnected, I automatically get a SocketError.ConnectionReset sent from the client, and in response the socket is closed on the server side. This is also fine.
But when I look at the client side, it shows the socket is still connected to the server. (This does not happen every time; sometimes it shows disconnected and sometimes connected.)
Does it make sense that "the server gets a SocketError.ConnectionReset from the client end but the client is still connected"?
So I want to know: what are the possible reasons for SocketError.ConnectionReset, and how do I handle the type of problem I have described?
Again, everything works fine in the normal case (e.g. if I exit the client, the socket is disconnected; the same goes for the server).
Thanks in advance.
EDIT:
Here is the code on the client side. It is a timer that ticks every 3 seconds throughout the program's lifetime and checks whether the socket is connected; if it is disconnected, it tries to reconnect through a new socket instance.
private void timerSocket_Tick(object sender, EventArgs e)
{
    try
    {
        if (sck == null || !sck.Connected)
        {
            ConnectToServer();
        }
    }
    catch (Exception ex)
    {
        RPLog.WriteDebugLog("Exception occurred at: " + System.Reflection.MethodBase.GetCurrentMethod().ToString() + " Message: " + ex.Message);
    }
}
In a normal situation (without a network disconnect/reconnect), if the TCP server gets a SocketError.ConnectionReset from any client, on the client side I see that the client's socket is disconnected and it tries to reconnect again through the code shown. But when the situation described above happens, the server gets a SocketError.ConnectionReset while the client shows it is still connected, even though the server shows the reset came from that exact client.
There are several causes, but the most common is that you have written to a connection that has already been closed by the other end; in other words, an application protocol error. When it happens you have no choice but to close the socket: it is dead. However, you can fix the underlying cause.
When discussing a TCP/IP issue like this, you must mention the network details between the client and the server.
When one side says the connection was reset, it simply means that an RST packet appeared on the wire. But to know who sent the RST packet and why, you must use network packet captures (with Wireshark or any similar tool):
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
You won't easily find out the cause at the .NET Framework level.
The problem with using Socket.Connected as you are is that it only gives you the connected state as of the last Send or Receive operation, i.e. it will not tell you that the socket has disconnected unless you first try to send some data to it or receive data from it.
From MSDN description of the Socket.Connected property:
Gets a value that indicates whether a Socket is connected to a remote host as of the last Send or Receive operation.
So in your example, if the socket was functioning correctly when you last sent or received any data from it, the timerSocket_Tick() method would never call ConnectToServer(), even if the socket was now not connected.
how do I handle the type of problem I have described?
Close the socket and initiate a new connection.
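If you want a point-in-time check instead of the cached value, the Socket.Connected documentation suggests a non-blocking, zero-byte Send. A sketch of that probe:
using System.Net.Sockets;

// A zero-byte, non-blocking Send refreshes the connected state
// without transferring any data.
static bool IsStillConnected(Socket sck)
{
    bool blockingState = sck.Blocking;
    try
    {
        sck.Blocking = false;
        sck.Send(new byte[1], 0, 0, SocketFlags.None); // zero bytes
        return true;
    }
    catch (SocketException ex)
    {
        // WouldBlock means the send buffer is full but still connected;
        // any other error means the connection is gone.
        return ex.SocketErrorCode == SocketError.WouldBlock;
    }
    finally
    {
        sck.Blocking = blockingState;
    }
}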

Cannot get Socket.Poll and Socket.Connected to work as desired

Okay, I know there is lots of info out there on this, and I promise you I have read it all and tried umpteen different methods to get this working!
I have a socket server program which runs on a laptop. I then have up to 50 laptops connected wirelessly via the same LAN to the server. The client laptops all connect to the server (using Socket.ConnectAsync) and the server uses async methods as well to send and receive data. The server shows a list of connected client laptops to the user, and this list seems to be accurate and picks up whenever a client disconnects or connects. However, the client laptops never seem to detect when the connection to the server has been lost under certain circumstances (i.e. if the server program crashes, if the server laptop goes into standby mode, etc.). I have a timer on the client laptops which polls the connection every 5 seconds as follows:
bool SocketConnected(Socket s)
{
    bool part1 = s.Poll(0, SelectMode.SelectWrite);
    bool part2 = (s.Available == 0);
    if (!part1 && part2)
    {
        return false;
    }
    else
    {
        return true;
    }
}
I have tried using all the SelectMode values (SelectWrite, SelectRead, SelectError) and different timeout values. I have tried checking s.Connected after these operations and all manner of other methods to determine the connection state, and nothing seems to produce reliable results! I think I could achieve the result I desire by sending dummy data every 5 seconds and checking s.Connected after doing so, but I don't really want to do this, as each laptop is already sending lots of data to the server as it is. Any help is massively appreciated! Thanks
The only reliable way to check if a connection is alive is to send something to the other end and see if it arrives. You can do this either manually by sending and receiving a "ping" value from time to time, or automatically by enabling the KeepAlive socket option.
The MSDN documentation for Socket.Poll is very explicit about the exact situations (server crashes, standby) you mentioned:
This method cannot detect certain kinds of connection problems, such as a broken network cable, or that the remote host was shut down ungracefully. You must attempt to send or receive data to detect these kinds of errors.
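For what it's worth, the usual form of this poll-based check uses SelectRead rather than SelectWrite: readable with zero bytes available means the remote end has closed. A sketch, subject to the same MSDN caveat quoted above:
using System.Net.Sockets;

// Poll(SelectRead) returns true when data is waiting OR the connection
// has been closed; Available == 0 distinguishes the closed case.
static bool SocketConnected(Socket s)
{
    bool readable = s.Poll(1000, SelectMode.SelectRead); // microseconds
    return !(readable && s.Available == 0);
}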

C# Client-Server application problem

I run my application on a network, and in some cases the client loses its connection to the server. After this, when I want to send a message to the server, I receive the following error: "Operation not allowed on non-connected sockets" (something like this).
I thought to create an event on the TcpClient object so that when tcp_obj.Connected == false, a function is called to stop execution of the current code. How could I do this?
Or please give me other suggestions.
Thanks.
I know, at least from socket programming in Java, that when a client loses its connection to the server, the server does not and cannot know about it. You need a heartbeat of some sort to detect the early disconnection.
We often use a heartbeat in our client/server applications to detect early disconnections and log them on the server. This way the server can close the associated socket and release the connection back to the pool.
Simply send a command to the client periodically and wait for a response. If no response is garnered within a timeout assume disconnect and close streams.
I would simply first check your connection object to ensure you are connected prior to attempting to send the message. Also make sure that you put your send logic inside a try-catch, so that if you do happen to get disconnected mid-transmission, you'll be able to resume without blowing your application apart.
Pseudo-code:
private void SendMessage(string message, Socket socket)
{
    if (socket.Connected)
    {
        try
        {
            // Attempt to send
        }
        catch (SocketException ex)
        {
            // Disconnect, additional cleanup, etc.
        }
    }
}
If you are in C#, a socket-disconnected event will fire prior to your connection state changing. Make sure you tie this event up as soon as your socket connects.
May we ask why you are using TCP sockets? Is it for talking to a TCP device, or to server code?
If it is for calling a .NET server app, I recommend using Windows Communication Foundation. It makes it simple to expose services over net.tcp, http, etc.
Regards,
Actually, this is a very old problem.
If I understand your question correctly, you need a way to know whether your application is still connected to the server, or vice versa.
If so, then a workaround is to have a UDP connection just to check connectivity (overhead, I know, but it's much better than polling the Connected state); you could check just before you send your data.
Since UDP is not connection-oriented, you don't need to be connected when you send the data.
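A hypothetical sketch of that probe (host, port, and messages are illustrative, and the server would need a matching UDP responder):
using System.Net;
using System.Net.Sockets;
using System.Text;

// Send "ping" over UDP and wait briefly for a reply before trusting
// the TCP connection; no reply within the timeout means unreachable.
static bool ServerReachable(string host, int probePort)
{
    using (var udp = new UdpClient())
    {
        udp.Client.ReceiveTimeout = 2000; // milliseconds
        udp.Connect(host, probePort);
        byte[] ping = Encoding.ASCII.GetBytes("ping");
        udp.Send(ping, ping.Length);
        try
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            udp.Receive(ref remote); // expect "pong"
            return true;
        }
        catch (SocketException)
        {
            return false; // timed out or host unreachable
        }
    }
}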

NetworkStream.Write returns immediately - how can I tell when it has finished sending data?

Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;
void SendData(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    done = false;
    ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
    while (done == false) Thread.Sleep(10);
}

public void myWriteCallBack(IAsyncResult ar)
{
    NetworkStream ns = (NetworkStream)ar.AsyncState;
    ns.EndWrite(ar);
    done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait 10 seconds (for example) for a response from the server after sending my data; otherwise I'll assume something went wrong. If it takes 15 seconds to send my data, then it will always time out, since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting the 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second, 10 seconds, or a minute. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know whether I'm waiting for the data to be sent or waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server" and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
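A sketch of that write-then-confirm shape, assuming the peer's protocol replies once it has processed the message (buffer size and timeout are illustrative):
using System.Net.Sockets;

// Only the peer's reply proves the data was actually processed;
// Read throws IOException if no reply arrives within ReadTimeout.
static void SendAndConfirm(NetworkStream ns, byte[] request)
{
    ns.Write(request, 0, request.Length);
    ns.ReadTimeout = 10000; // milliseconds
    byte[] reply = new byte[512];
    int n = ns.Read(reply, 0, reply.Length);
    // n == 0 means the peer closed instead of replying
}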
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I cannot think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs - does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data) {
    NetworkStream ns = tcp.GetStream();
    // BUG?: should bytWriteBuffer == data?
    IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
    r.AsyncWaitHandle.WaitOne();
    ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue: it can be passed a timeout parameter. This is a lazy wait - the thread will not be scheduled again until the result is finished or the timeout expires.
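For instance, a sketch using that timeout overload (10 seconds assumed, per the question):
// WaitOne returns false if the write has not completed in time.
IAsyncResult r = ns.BeginWrite(data, 0, data.Length, null, null);
if (!r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    // the write did not finish in time; treat as a failure
}
else
{
    ns.EndWrite(r);
}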
I have tried to understand the intent of the .NET NetworkStream designers, and they must have designed it this way: after Write, the data to send is no longer handled by .NET, so it is reasonable that Write returns immediately (and the data will be sent out from the NIC soon after).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, using a longer timeout before receiving any data from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
All in all, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
Below .NET is Windows Sockets, which uses TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
Edit:
Just an idea, never tried:
Write() blocks only if the socket's send buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?) we may get what we want :)
Perhaps try setting
tcp.NoDelay = true
