What should I do to completely close the TcpClient connection with the MCU? - C#

I'm working on a TCP socket connection to a TCP server running on an ESP32.
Communication works fine, but I'm failing to close the connection.
After searching for solutions on closing/resetting a TcpClient, it seems that the proper way to close one is:
tcpClient.GetStream().Close();
tcpClient.Close();
The example on MSDN also uses this method.
But unfortunately, it does not really close the connection. Checking on the MCU, the connection has not been closed, and it is not closed even if I close the application later. In that case the MCU's TCP connection is never released, so it cannot accept other connections. So I don't think this is the proper way to close the TcpClient.
If I don't execute the statements above and just close the application directly, the connection is closed successfully and the MCU's TCP connection is released.
It seems that something is done during application shutdown that really closes the connection.
In some situations I need to close the connection after an operation and then reconnect, so I cannot rely on application shutdown.
I have tried all of the following calls in various combinations, but none of them successfully releases the TcpClient connection:
tcpClient.Close();
tcpClient.Client.Close();
tcpClient.GetStream().Close();
tcpClient.Client.Disconnect(true);
tcpClient.Client.Disconnect(false);
tcpClient.Dispose();
tcpClient.Client.Dispose();
tcpClient = null;
Maybe it needs some of the above calls in the proper sequence.
Does anyone know how I can close the connection by myself?
Thanks in advance.

After studying the network packets with Wireshark, I found that the problem is due to a delayed RST when using the code suggested on MSDN:
tcpClient.GetStream().Close();
tcpClient.Close();
There is no difference even when adding
tcpClient.LingerState = new LingerOption(true, 0);
because a FIN is sent immediately, but the RST is not sent right after the Close method; it is sent around 2 minutes later. Unfortunately, it won't be sent even if you close the application after the TcpClient close has been issued.
If you close the application before closing the TcpClient, the RST is sent immediately, so the connection on the server side is closed immediately.
After testing different combinations of calls, I found that the following code really does close the connection immediately, although another RST follows around 40 seconds later.
tcpClient.Client.Close();
tcpClient.Close();
Don't call tcpClient.GetStream().Close()! It causes the RST to be delayed.
I don't know whether closing the connection this way has any side effects, but it is the only way I have found to really close the connection immediately.
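For reference, here is a minimal sketch of that close order as a helper method (the method name is mine, not from the original post):
static void CloseImmediately(System.Net.Sockets.TcpClient tcpClient)
{
    // Close the underlying Socket first, then the TcpClient itself,
    // and deliberately skip GetStream().Close().
    tcpClient.Client.Close();
    tcpClient.Close();
}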

As you mentioned, this is the correct way to close a TcpClient:
tcpClient.GetStream().Close();
tcpClient.Close();
Close() will eventually close the connection. Take a look at the documentation for TcpClient.Close():
The Close method marks the instance as disposed and requests that the associated Socket close the TCP connection. Based on the LingerState property, the TCP connection may stay open for some time after the Close method is called when data remains to be sent. There is no notification provided when the underlying connection has completed closing.
You may be able to get the behavior you desire by changing the LingerState property of the TcpClient object:
tcpClient.LingerState = new LingerOption(true, 0);
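For example, a rough sketch of this approach, assuming tcpClient is already connected (the zero-second linger makes Close() reset the connection rather than wait for unsent data):
// Enable linger with a 0-second timeout before closing, so Close() does
// not keep the connection around to flush remaining data.
tcpClient.LingerState = new LingerOption(true, 0);
tcpClient.GetStream().Close();
tcpClient.Close();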

Related

How to free socket descriptor shared between multiple processes?

Background
On Linux or BSD, it is possible to send handles to open files or sockets between unrelated processes with SCM_RIGHTS, and I have this working, such that one process listens for connections and then forwards the handles to a process that performs the communication.
Problem
I am unable to figure out how to free the socket handle from the listening process without shutting down the socket.
There are two conflicting descriptions in man close(3):
When all file descriptors associated with an open file description have been closed, the open file description shall be freed.
And
If fildes refers to a socket, close() shall cause the socket to be destroyed.
I originally thought that this meant that calling close() would just decrement the reference count for the kernel object behind the socket, such that the second description from man close(3) meant "destroy when the last descriptor is closed".
EDIT: This is how it is supposed to work, and also how it works.
But, when I run tests, it appears that as soon as I call close() on the socket descriptor in the listening process, it will start closing the socket, sending either RST or FIN, depending on what the other process is doing with the socket at the time.
One solution would be to have a callback from the handling process saying "you can now close socket nnn", but this would keep a number of socket descriptors open in the listening process and add some overhead as well.
I know I can force the socket to start the shutdown process by calling shutdown() directly from either process, but I want to prevent it.
I assume there exists a simple solution, but I cannot find it.
Question
Is there a way to de-register the socket descriptor from the listening process such that it is no longer in the file descriptor table of the process, but without activating the socket shutdown?
Source code
The SCM_RIGHTS implementation used to send the socket is here (send_fds and native_close):
https://github.com/kenkendk/SockRock/blob/master/src/ScmRightsImplementation.cs
The code that sends the socket and then closes it is here:
https://github.com/kenkendk/ceenhttpd/blob/master/Ceen.Httpd.Cli/Runner/SubProcess/SpawnRemoteInstance.cs#L488-L497
If I comment out line 497 everything works, but then I obviously get a large file descriptor leak.
The receiving end of SCM_RIGHTS is here:
https://github.com/kenkendk/ceenhttpd/blob/master/Ceen.Httpd.Cli/Runner/SubProcess/SpawnedRunner.cs#L313
tl;dr: The socket is closed when the last reference is closed.
The answer to my question is most likely:
No, there is no way to prevent the socket shutdown, but it is not needed as the socket will not close until the last descriptor has closed.
The answer from Andrew was correct and got me on track: what I was seeing made no sense, because others do the same thing all the time.
In the end, the problem was a timeout in the handler process that closed the socket, but that made it look like the close() call from the listener was the problem.
When I stopped the close() call from the listening process, it started working. This happens because then the timeout correctly closes the handle, but there is still a reference (in the listening process) so the socket stays open.
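For illustration, a rough C# sketch of the listener side under this model, assuming a P/Invoke binding to libc's close(2) similar to the native_close mentioned above (the names here are illustrative):
using System.Runtime.InteropServices;

static class Listener
{
    // Standard libc close(2): releases only this process's descriptor.
    [DllImport("libc", SetLastError = true)]
    private static extern int close(int fd);

    // After send_fds(...) has duplicated the descriptor into the handler
    // process, closing our copy drops one kernel reference; the socket
    // itself stays open as long as the handler still holds its descriptor.
    public static void ReleaseForwardedSocket(int fd)
    {
        close(fd);
    }
}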

socket doesn't close when thread done

I have a problem with sockets:
When the client thread ends, the server tries to read and freezes, because the socket is not closed. The thread doesn't close it when it finishes. The problem only occurs when I use a thread; if I use two independent projects, there is no problem (an exception is thrown, and I can catch it).
I can't use a timeout, and the server must continue working correctly when the client doesn't close the socket.
Sorry for my bad English.
As far as I know, there is no way for a TCP server (listener) to find out whether data from a client has stopped coming because the client has died/quit or because it is just inactive. This is not a deficiency of .NET; it is how TCP works. The way I deal with it is as follows (see the sketch below):
1. Create a timer in the client that periodically sends an "I am alive" signal to the server. For example, I just send one unusual character, '∩' (code 239).
2. In the TCP listener: read with a timeout by setting the stream's ReadTimeout property before calling NetworkStream.Read(...). If the timeout expires, the server disposes the old NetworkStream instance and creates a new one on the same TCP port. If the server receives the "I am alive" signal from the client, it keeps listening.
By the way, the TcpClient.Connected property is useless for detecting on the server side whether the client is still using the socket. It only reflects whether the most recent I/O operation succeeded. So, if the client is alive but just silent, TcpClient.Connected can become false.
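A rough sketch of this scheme on the listener side; the 5-second timeout and the names are my own assumptions, not values from the answer:
using System.IO;
using System.Net.Sockets;

static class KeepAliveListener
{
    const byte KeepAliveMarker = 239; // the "I am alive" byte

    // Returns true if the client sent a keep-alive before the timeout expired.
    public static bool WaitForKeepAlive(TcpClient client, int timeoutMs = 5000)
    {
        NetworkStream stream = client.GetStream();
        stream.ReadTimeout = timeoutMs; // Read() throws IOException on expiry
        var buffer = new byte[1];
        try
        {
            return stream.Read(buffer, 0, 1) == 1 && buffer[0] == KeepAliveMarker;
        }
        catch (IOException)
        {
            return false; // silent client: dispose the stream and start over
        }
    }
}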
Close the client when you want the connection to be closed (at the end of the client's work).
Better yet, use using for all disposable resources, such as both the clients and the listener.
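For example, a minimal sketch of that pattern (the endpoint is a placeholder):
using System.Net.Sockets;

// Both the stream and the client are disposed deterministically when the
// using blocks exit, which also closes the underlying connection.
using (var client = new TcpClient("127.0.0.1", 5000)) // placeholder endpoint
using (NetworkStream stream = client.GetStream())
{
    // ... read/write here ...
}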

TCPClient stream irregularities?

I have a TcpClient that creates a stream that I read from when DataAvailable is true.
Every 20 seconds in which !DataAvailable, I ping the socket with an ACK message to keep the stream from closing.
But I seem to be getting mixed results. It seems like every other time I open the stream (basically restart my service), I get transport errors.
This is a shortened version of my Connect function:
client = new StreamClient();
client.Connect(new IPEndPoint(clientAddress, senderPort));
stream = client.GetStream();
bool status = SendMessage(seq, sync, MessageTypes.Init);
The SendMessage function does:
if (stream == null) return false;
stream.Write(TransmitBuffer, 0, TransmitMessageLength);
My Close function does:
if (stream != null)
{
    SendMessage(seq, sync, MessageTypes.Finish);
    stream.Close();
}
stream = null;
client.Close();
client = null;
It is expected that the SendMessage calls will fail occasionally due to the nature of the socket.
But sometimes, once I connect, everything runs fine and no messages fail. Other times the ACKs fail. When the ACKs fail, I call Close, which forces a Connect to validate that the other end of the socket is open. If that fails, I know the other end is down. But sometimes that call doesn't fail, and then 20 seconds later the ACK does.
Can anyone give me an opinion on why this may happen? Is 20 seconds too long to wait? Am I not closing my end of the socket properly?
The specific error message I'm fighting with is:
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
And it occurs at stream.Write(TransmitBuffer, 0, TransmitMessageLength);
The main thing in your implementation that jumps out at me is that you're treating the stream as the connection, and it isn't. Your checks on the stream instance should instead be checks on the TcpClient instance. I'm not sure whether that's the source of your problem, but it definitely looks strange to me.
Instead of this:
stream = client.GetStream();
if (stream != null)
{
    SendMessage(seq, sync, MessageTypes.Finish);
    stream.Close();
}
stream = null;
I usually do something more like this:
if (client != null)
{
    if (client.Connected)
    {
        client.GetStream().Close();
    }
    client.Close();
    client = null;
}
You should be checking TcpClient.Connected before working with the stream, not the stream itself.
Another thing I would mention is to be sure to always use the async methods to connect, read, and write with your TcpClient. The synchronous ones are easier, but my experience has been that relying on them can get you into trouble.
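A minimal sketch of that async style, assuming .NET 4.5 or later (the endpoint and buffer size are placeholders):
using System.Net.Sockets;
using System.Threading.Tasks;

static async Task<int> ReadOnceAsync()
{
    using (var client = new TcpClient())
    {
        await client.ConnectAsync("192.168.1.10", 50001); // placeholder endpoint
        NetworkStream stream = client.GetStream();
        var buffer = new byte[4096];
        // Awaiting keeps the thread free instead of blocking in Read().
        return await stream.ReadAsync(buffer, 0, buffer.Length);
    }
}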
In general, putting a protocol on top of TCP is a big mistake. It can only make the connection less reliable. TCP already makes a very strong guarantee that data sent from one machine is going to be received by another one on a network. Only very gross external circumstances can make that fail. Things like equipment power loss or unscheduled reboots.
A connection should not be broken unless one of the machines intentionally closes the socket. Which should of course always be done in a predictable way. A logical end to a transaction or an explicit message that a machine is signing-off.
You didn't give any motivation for adding this "ACK protocol" to your connection logic, other than to "keep the stream from closing". I think what you are seeing here is that it just doesn't work: it does not, in fact, keep the stream from closing. So it still goes wrong as it did before you added the protocol layer, and you still get unexpected "An established connection was aborted" exceptions.
An example of how you made it less reliable is the 20 second timeout check you added. TCP doesn't use a 20 second timeout. It uses an exponential back-off algorithm to check for timeouts. Typically it doesn't give up until at least 45 seconds have passed. So you'll declare the connection dead before TCP does so.
It's hard to give advice on how to move forward with this, but clearly it is not by adding a protocol; you tried that and it did not work. You will have to find out why the connection is being broken unexpectedly. Unfortunately that does require legwork: you have to get insight into the kind of network equipment and software that sits between your machine and the server, with some expectation that the problem is located at the other end of the wire, since that is the end that is hardest to diagnose. Getting the site's network admin involved with your problem is an important first step.
We had a similar problem when developing a network communication library. Rather than use an ACK, we found the most reliable solution was just to send some null data (a.k.a. a keep-alive): in this case, a byte[1] with a zero value. This would result in either:
1. A successful send, in which case the data could simply be ignored on the receiving end.
2. A failed send, which would immediately cause the connection to be closed and re-established.
Either of these outcomes ensured the connection was always in a usable state.
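A rough sketch of that null-byte probe (the class and method names are illustrative):
using System.IO;
using System.Net.Sockets;

static class ConnectionProbe
{
    // Sends a single zero byte; a failed send means the connection is dead
    // and should be closed and re-established by the caller.
    public static bool IsUsable(TcpClient client)
    {
        try
        {
            client.GetStream().Write(new byte[1], 0, 1); // null keep-alive byte
            return true; // receiver can simply ignore the zero byte
        }
        catch (IOException)
        {
            client.Close(); // failed send: close so the caller can reconnect
            return false;
        }
    }
}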
Why do you send ACK? You should send SYN.
And you shouldn't send ACK, SYN, RST, or set any other flag, for that matter; that is dabbling with TCP internals. You're creating an application-level protocol, and keep-alive is part of that, because, IIRC, even with TCP keep-alive (http://msdn.microsoft.com/en-us/library/windows/desktop/ee470551%28v=vs.85%29.aspx) the connection will stay open until you close it.
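If what's wanted is TCP-level keep-alive as in the link above, it can be enabled on the underlying socket instead of hand-rolling one; a rough sketch:
// Ask the OS to probe the peer periodically at the TCP level; the probe
// intervals are controlled by the operating system unless configured further.
tcpClient.Client.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.KeepAlive, true);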

Close an asynchronous connection

I've created an asynchronous server in C# to go along with my Android application. The server is similar to this: http://msdn.microsoft.com/en-us/library/fx6588te.aspx
It works well and I can have many clients connect and receive data from the server at the same time. One problem I've found is that in my Android app, if you are already connected over WiFi and press the connect button again, the server spawns a new socket. The server should kill the old connection first and then create a new one. On the Android side I make sure to call close() and even set the socket to null afterwards. I also send a disconnect control signal to the server so that it can call close() on its socket too. For example, here's how I do it on the server:
if (state.storage.parseJson(content) == JsonStorage.DISCONNECT)
{
    Console.WriteLine("Disconnect2!");
    state.workSocket.Shutdown(SocketShutdown.Both);
    state.workSocket.Close();
    return;
}
When I inspect my server process in a program called "CurrPorts", it shows several connections open to my Android device on different ports. I send data to my clients using a Timer object, and I also check whether the connection is still active; otherwise I close it. For example, my TimerCallback method:
public void TimeCallBack(object input)
{
    StateObject state = (StateObject)input;
    if (state.workSocket.Connected)
    {
        Send(state.workSocket, state.storage.getJson());
    }
    else
    {
        Console.WriteLine("Dispose!");
        state.timer.Dispose();
        state.workSocket.Close();
    }
}
I can't think of why my server isn't closing old connections. There should only be as many open connections as there are devices connected to the server. If this were a threaded blocking server, it would be easy to just shut the thread down, but I'm not sure what to do in this case.
edit: I just refreshed CurrPorts after letting it sit for a while, and it dropped down to one established connection. Is my solution right, and does it just take a while for Windows to actually clear the old socket connections?
Yes, Windows will keep the information about the socket around for a while. You can see this with netstat: closed sockets will show a state of TIME_WAIT after they are closed, even after the application that hosted the socket has terminated.
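For example, running netstat -ano | findstr TIME_WAIT on Windows (or netstat -tan | grep TIME_WAIT on Linux) will list those lingering entries while they last.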

Closing System.Net.Sockets.TcpClient kills the connection for other TCPClients at the same IP Address

Just to be clear, all of the TCPClients I'm referring to here are not instances of my own class; they are all instances of System.Net.Sockets.TcpClient from Mono's implementation of .NET 4.0.
I have a server that is listening for client connections, as servers do. Whenever it gets a new client it creates a new TCPClient to handle the connection on a new thread. I'm keeping track of all the connections and threads with a dictionary. If the client disconnects, it sends a disconnect message to the server, the TCPClient is closed, the dictionary entry is removed and the thread dies a natural death. No fuss, no muss. The server can handle multiple clients with no problem.
However, I'm simulating what happens if the client gets disconnected, doesn't have a chance to send a disconnect message, then reconnects. I'm detecting whether a client has reconnected with a username system (it'll be more secure when I'm done testing). If I just make a new TCPClient and leave the old one running, the system works just fine, but then I have a bunch of useless threads lying around taking up space and doing nothing. Slackers.
So I try to close the TCPClient associated with the old connection. When I do that, the new TCPClient also dies and the client program throws this error:
E/mono (12944): Unhandled Exception: System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException: The socket has been shut down
And the server throws this error:
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
Cannot read from a closed TextReader.
So closing the old TCPClient with a remote endpoint of, say, 192.168.1.10:50001
also breaks the new TCPClient with a remote endpoint of, say, 192.168.1.10:50002.
So the two TCPClient objects have the same remote endpoint IP address but different remote endpoint ports, yet closing the one seems to stop the other from working. I want to be able to close the old TCPClient to do my cleanup without closing the new TCPClient.
I suspect this is something to do with how TCPClient works with sockets at a low level, but not having any real understanding of that, I'm not in a position to fix it.
I had a similar issue on my socket server. I used a simple List instead of a dictionary to hold all of my current connections. In a continuous while loop that listens for new streams, I have a try/catch, and in the catch block it kills the client if it has disconnected.
Something like this in server.cs:
public static void CloseClient(SocketClient whichClient)
{
    ClientList.Remove(whichClient);
    whichClient.Client.Close();
    // dispose of the client object
    whichClient.Dispose();
    whichClient = null;
}
and then a simple dispose method on the client:
public void Dispose()
{
    System.GC.SuppressFinalize(this);
}
EDIT: what follows is the OP's resolution, which they found on their own with help from my code.
So to clarify, the situation is that I have two TCPClient objects, TCPClientA and TCPClientB, with different remote endpoint ports but the same IP:
TCPClientA.Client.RemoteEndPoint.ToString();
returns: 192.168.1.10:50001
TCPClientB.Client.RemoteEndPoint.ToString();
returns: 192.168.1.10:50002
TCPClientA needs to be cleaned up because it's no longer useful, so I call
TCPClientA.Close();
But this closes the socket for the client at the other end of TCPClientB, for some reason. However, writing
TCPClientA.Client.Close();
TCPClientA.Close();
successfully closes TCPClientA without interfering with TCPClientB. So I've fixed the problem, but I don't understand why it works that way.
Looks like you have found a solution, but just so you are aware, there are many similar pitfalls when writing client/server applications in .NET. There is an open-source network library (which is fully supported in Mono) where these problems have already been solved: networkComms.net. A basic sample is here.
Disclaimer: This is a commercial product and I am the founder.
This is clearly an error in your code. Merely closing one inbound connection cannot possibly close another one. Clearly something else is happening elsewhere in your code.
