C# socket connection keeps sending data after disconnect and close

I have a "start" and "stop" button. When clicking the start button, a new socket is created and a connection is made. When clicking the stop button the socket is shutdown, disconnected, closed and disposed to make sure it is completely gone.
At least, that's what I thought: when clicking start after stopping the connection, a new socket is made etc. but as soon as I send data, the data is sent x amount of times I had created a socket (thus, x amount of times I had clicked the start button).
This is the code for the start:
soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); // Socket soc; is declared at class level
System.Net.IPAddress ipAdd = System.Net.IPAddress.Parse(IP);
System.Net.IPEndPoint remoteEP = new IPEndPoint(ipAdd, port);
try
{
    soc.Connect(remoteEP);
    soc.Send(jsonSettings);
}
catch (SocketException)
{
    MessageBox.Show("Could not connect to socket");
}
And this is the stop code:
if (soc != null)
{
    soc.Shutdown(SocketShutdown.Both);
    soc.Disconnect(false);
    soc.Close();
    soc.Dispose(); // Close() already disposes the socket; this is harmless but redundant
}
This is used within a VSTO PowerPoint add-in, in case that causes any additional quirks. When the connection is made, I'm sending string data to a Python server listening on this port. Each time a connection is closed, the Python server drops out of its listen-for-data loop and goes back to its waiting-for-connection state (to support the multiple start/stop connections).
Code for sending data:
// this is called each time the user goes to another slide in the PowerPoint presentation
byte[] byData = System.Text.Encoding.ASCII.GetBytes(stringValue);
soc.Send(byData);
Can anyone point out what I'm doing wrong, and why the socket connections somehow keep on living and sending data even though I disconnected and closed them?

The observed behavior is exactly the point, and the desired outcome, of a clean shutdown. From the MSDN page for Socket.Shutdown():
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
The call to Shutdown() prevents your application from queuing additional outgoing data; it does not stop the network stack from sending data that is already buffered.
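To make that sequence concrete, here is a minimal sketch of the graceful-close pattern the documentation describes (the method name DrainAndClose is illustrative, not from the question): stop sending, drain whatever the peer still has in flight, then close.
using System.Net.Sockets;

static void DrainAndClose(Socket soc)
{
    soc.Shutdown(SocketShutdown.Send);   // tell the peer we will send no more data

    byte[] scratch = new byte[1024];
    // Keep reading until the peer closes its side (Receive returns 0),
    // so data still in flight is not discarded by an abrupt close.
    while (soc.Receive(scratch) > 0) { }

    soc.Close();                         // Close() also disposes the socket
}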

Since you are using a stream socket, how about declaring a network stream for your socket like this:
NetworkStream stream = new NetworkStream(soc);
Then flushing this after each send (and before closing the socket):
stream.Flush();
Also ensure you turn off Nagle's algorithm when you create the socket; it prevents small writes from being batched up on the socket:
soc.NoDelay = true;
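Putting both suggestions together, a rough sketch (remoteEP and the payload are placeholders; build them as in the question). One caveat worth knowing: NetworkStream.Flush() is documented in .NET as reserved for future use and does nothing on a network stream, so turning off Nagle with NoDelay is what actually changes when bytes hit the wire.
using System.Net.Sockets;
using System.Text;

Socket soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
soc.NoDelay = true;                      // disable Nagle's algorithm before connecting
soc.Connect(remoteEP);                   // remoteEP built as in the question

byte[] payload = Encoding.ASCII.GetBytes("slide 1");
NetworkStream stream = new NetworkStream(soc, false); // false: the stream does not own the socket
stream.Write(payload, 0, payload.Length);
stream.Flush();                          // harmless, but effectively a no-op on NetworkStream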

Related

C# TcpClient LingerOption with Close() does not skip TIME_WAIT

I am trying to connect to a server with the C# TcpClient many times over. For example, I connect to the server for 5 s, disconnect, then reconnect 10 s later, and repeat...
But even though I set a LingerOption and set the ReuseAddress option to true, an ExtendedSocketException comes out when I reconnect to the server.
Here is my code. (.NET 5, Windows 10)
TCPSocket = new TcpClient(new IPEndPoint("10.10.0.100", 50010));
TCPSocket.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
LingerOption lo = new LingerOption(true, 0);
TCPSocket.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Linger, lo);
TCPSocket.Connect(new IPEndPoint("10.10.0.50", 50010));
TCPSocket.ReceiveTimeout = 5000;
//Do somthing
TCPSocket.Client.Shutdown(SocketShutdown.Both);
TCPSocket.Close();
Thread.Sleep(5000);
TCPSocket.Connect(new IPEndPoint(SRE3021IP, SRE3021TCPPort)); //ExtendedSocketExeption
And while the thread was sleeping I checked in cmd with netstat -ano | findstr 50010:
TCP 10.10.0.100:50010 10.10.0.50:50010 TIME_WAIT 0
The TIME_WAIT state remained for roughly 30 seconds to 1 minute, then it disappeared...
I don't know why the linger option was not applied.
Setting a LingerOption doesn't stop a socket from closing. It delays the close() to allow any unsent data in the buffer to be sent, which lets an application move on to the next phase even on a slow network. The socket will still close.
ReuseAddress has nothing to do with reusing an existing socket (believe it or not); it allows a listening socket to bind to a port that is already in use. That is a very specialized behaviour, intended for cooperating processes that deliberately listen on the same port, and it has no useful meaning on an outbound socket connection.
Your problem stems from the fact that you're setting a source bind with this line:
TCPSocket = new TcpClient(new IPEndPoint("10.10.0.100", 50010 ));
If you set a source port, you have no option but to wait for the OS to remove the socket from the connection list, which means waiting for the TIME_WAIT to expire.
If you don't want to set a source port (and these days there are very few reasons to actually set one) but still want to select a specific source IP interface, then you can use:
TCPSocket = new TcpClient(new IPEndPoint("10.10.0.100", 0));
If you want Windows to just choose the most appropriate outgoing interface (and port), then use:
TCPSocket = new TcpClient();
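A sketch of the whole connect/disconnect cycle without a source bind, so every attempt gets a fresh ephemeral port and cannot collide with an earlier socket stuck in TIME_WAIT (the address and port are the ones from the question):
using System.Net;
using System.Net.Sockets;
using System.Threading;

for (int i = 0; i < 5; i++)
{
    using (TcpClient client = new TcpClient())           // no source bind
    {
        client.ReceiveTimeout = 5000;
        client.Connect(new IPEndPoint(IPAddress.Parse("10.10.0.50"), 50010));
        // ... do something ...
        client.Client.Shutdown(SocketShutdown.Both);
    }                                                    // Close/Dispose via using
    Thread.Sleep(5000);
}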

C# socket is ignoring my SendTimeout value

I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket throws an exception and I close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data"></param>
private void SendDataToSocket(byte[] data)
{
    try
    {
        // Send data to the connection
        _tcpClient.Client.Send(data);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Error writing data to socket: " + ex.Message);
        // Close our entire connection
        IncomingConnectionManager.CloseConnection(this);
    }
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out; even after 60 seconds it's still waiting.
Have I done something wrong here, or am I misunderstanding what setting the socket's SendTimeout value does?
A socket send() actually just copies your data into the network stack's outgoing buffer. If the copy succeeds (i.e. there is enough space for your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
Any send timeout starts counting when the buffer is full, indicating that the other side is receiving data more slowly than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken or the peer was powered off or crashed without closing its socket properly). If the buffer stays full for timeout seconds, you'll get an error.
In other words, there is no way to detect an abrupt socket error (like a bad cable or a powered-off or crashed peer) other than overfilling the outgoing buffer to trigger a timeout.
Note that in the case of a graceful shutdown of the peer's socket, your socket will be aware of it and give you errors if you try to send or receive after the shutdown notification arrived, which may be well after you finished your previous operation. Even then you have to trigger the error (by sending or receiving); it does not happen by itself.
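To make the buffering behaviour concrete, a sketch under the assumption that socket is a connected TCP Socket whose peer has silently gone away: the first Send calls succeed because they merely copy into the outgoing buffer, and the timeout only surfaces once that buffer stays full.
using System;
using System.Net.Sockets;

socket.SendTimeout = 10000;
byte[] chunk = new byte[8192];
try
{
    while (true)
    {
        socket.Send(chunk);              // just copies into the send buffer and returns
    }
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // Reached only once the send buffer has filled up and stayed full for 10 s.
    Console.WriteLine("Send finally timed out: " + ex.Message);
}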

C# .Net Socket Server Client

I've got a little problem with the .NET sockets in C#.
I programmed a client and a server communicating over TCP.
When the client starts, it sends a handshake to the server. The server answers with its state (clientexists, clientaccepted, ...). After that the application sends a getdata-request, abandons the connection and listens for the server's 'response'. The server then builds a connection to the client and sends all the data the client needs.
The code and everything else works; the problem is:
On our company test server it works fine, but on the live server only the handshake works. After it, the client doesn't receive any more data. The server application is the same on both servers.
I thought the problem was caused by some firewall (the server wants to build a TCP connection to the client -> not good), but the system administrator said there is no firewall that could block that.
Now I'm searching for a ('cheap') solution that doesn't take too much time or too many code changes. If anyone knows how to solve this, even theoretically, that would be great.
BTW: I am not allowed to do anything on the live server other than run the server application, and I have no possibility to debug on that server.
I can't publish all of my code, but if you need to see specific parts of it, please ask.
---EDIT---
Client-Server communication
1) Client startup
Client sends handshake (new TCP connection)
2) Server validates handshake and saves IP
Server responds with its client state (same TCP connection)
3) Client acknowledges this response and abandons this connection
Client sends getdata-request (new TCP connection)
Client abandons this TCP connection, too
4) Server receives getdata-request and collects the needed data from the main database
Server sends all the collected data to the client (multiple TCP connections)
5) Client receives all data and displays it in its GUI (multiple TCP connections; the order of the data is kept by working with AutoResetEvents and counts of sockets to send)
This is the main part of what my code does. It's by far not the best, but it was fine for me when I wrote it, I guess. Steps one, two and three work as intended. The processing of the data works fine, too.
Another thing I forgot to mention is that the solution uses two ports, 16777 and 16778: one to receive/listen and one to send.
My code is based on the MSDN example of the asynchronous server and client.
Sending a handshake (and getdata-request)
public void BeginSend(String data)
{
    try
    {
        StateObject state = new StateObject();
        state.workSocket = sender;
        byte[] byteData = Encoding.UTF8.GetBytes(data);
        sender.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback((IAsyncResult e) =>
            {
                Socket socket = (Socket)e.AsyncState;
                SocketBase.StateObject stateObject = new SocketBase.StateObject();
                stateObject.workSocket = socket;
                socket.BeginReceive(stateObject.buffer, 0, 256, SocketFlags.None,
                    new AsyncCallback(this.ReadCallback), (object)stateObject);
            }), sender);
        sender = RetrieveSocket(); // Socket reset
        Thread.Sleep(100);
    }
    catch /*(Exception e)*/
    {
        //--
    }
}
Server listener
public void StartListening()
{
    listener = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp);
    // Bind the socket to the local endpoint and listen for incoming connections.
    try
    {
        listener.Bind(localEndPoint);
        listener.Listen(System.Int32.MaxValue);
        while (true)
        {
            // Set the event to nonsignaled state.
            allDone.Reset();
            // Start an asynchronous socket to listen for connections.
            listener.BeginAccept(
                new AsyncCallback(AcceptCallback),
                listener);
            // Wait until a connection is made before continuing.
            allDone.WaitOne();
        }
    }
    catch (Exception e)
    {
        //--
    }
}
public void AcceptCallback(...);
public void ReadCallback(...);
Socket send
private void Send(Socket handler, String data)
{
    Socket t = RetrieveSocket(((IPEndPoint)handler.RemoteEndPoint).Address);
    // Convert the string data to byte data using UTF-8 encoding.
    byte[] byteData = Encoding.UTF8.GetBytes(data);
    // Begin sending the data to the remote device.
    t.BeginSend(byteData, 0, byteData.Length, 0,
        new AsyncCallback(SendCallback), t);
}
Socket send-all-data part (the answer to the getdata-request; socToHandle should be the socket of the previous getdata-request connection)
private void SendAllData(Socket socToHandle, string PakContent)
{
    #region IsThereADatetime? // Resolve a given datetime
    #region GiveClientNumberOfPackets // Tell the client how much it has to receive (see line below)
    Send(socToHandle, "ALERT#TASKCOUNT;OPT-" + GetBestDate(dateStart) + EndSocket);
    #region #SendResouces
    #region #SendGroups
    #region #SendTasks
}
Looking through my old code, I have one idea:
Could I send everything over the same connection by changing:
Socket t = RetrieveSocket(((IPEndPoint)handler.RemoteEndPoint).Address);
(which creates a new connection) to something that uses the same connection?
If that would work, how can I do that?
And would the listener part of the client still receive single packets?
Servers and their environment are configured to handle incoming requests properly. Clients are usually behind a router, which by default makes them unable to receive incoming connections from outside their network (a good thing).
To enable incoming connections, you could configure your router to forward all requests for a certain port number to your machine. But then no one else on your network would be able to run the client.
This is why, in a typical multiple-clients/single-server environment, the client makes all the connections, and only the server requires any changes to the network landscape.
I don't know why you chose to connect to the clients from the server side, but I would strongly advise against it: any cheap solution that uses this mechanism may turn out to be very expensive in the end.
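For contrast, a sketch of the client-initiated pattern: one connection, opened by the client, carries both the getdata-request and the reply, so the server never has to reach back through a firewall or NAT. The host name and the message format here are placeholders, not from the question (the port is one of the two the question mentions).
using System.Net.Sockets;
using System.Text;

using (TcpClient client = new TcpClient("server.example.com", 16777))
using (NetworkStream stream = client.GetStream())
{
    byte[] request = Encoding.UTF8.GetBytes("GETDATA");
    stream.Write(request, 0, request.Length);

    byte[] buffer = new byte[4096];
    int read;
    // Read the reply on the same connection until the server closes it.
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Process 'read' bytes of the server's response here.
    }
}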

TcpClient.Close doesn't close the connection

I have an application that uses TcpClient and TcpListener to communicate over the network. However, when I call TcpClient.Close on the client to disconnect it from the server, the server doesn't react at all.
Now, before you post a comment about this question being a duplicate of this one, and how the solution can be found here, believe me when I say that I've already found those and tried them. It doesn't help. I've also tried different combinations of TcpClient.Close, TcpClient.GetStream().Close(), and TcpClient.Dispose. Nothing works.
The code is nothing noteworthy: just a Disconnect method in the client that resets all of the variables for reuse and closes all network resources. The server has a loop that checks whether TcpClient.Connected is true, and if it's false, it's supposed to break out of the loop and terminate the thread.
Any ideas?
TcpClient.Connected should pretty much be ignored. It basically tells you whether the last communication was successful. From MSDN (emphasis mine):
Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
Calling Close() on the client side does not actively notify your server code that the connection is going away; it just closes the client's end so the client can't use it any more. The only reliable way to determine whether you're still connected is to try to send data and handle the failure. If you want, you could implement your own handshake agreement: when you call Close(), first send a special notification to the server alerting it to the fact. There will still be times when that packet never reaches the server, though.
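A sketch of that handshake idea (the "BYE" marker and the method name are made up): announce the disconnect, then close. The server can treat the marker as an immediate signal, while still handling the case where it never arrives.
using System.IO;
using System.Net.Sockets;
using System.Text;

static void DisconnectPolitely(TcpClient client)
{
    try
    {
        byte[] bye = Encoding.UTF8.GetBytes("BYE");
        client.GetStream().Write(bye, 0, bye.Length);   // best effort; may never arrive
    }
    catch (IOException) { /* peer already gone, nothing to announce */ }
    finally
    {
        client.Close();
    }
}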
Listen for one byte; if it's received, get the rest.
If receiving the one byte returns 0, the client disconnected.
IPAddress nReceiveAddress = IPAddress.Parse(GetIp(sSource));
IPEndPoint localEndPoint = new IPEndPoint(nReceiveAddress, GetPort(sSource));
Socket nSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
nSocket.Bind(localEndPoint);
nSocket.Listen(10);
Socket nSocketClient = nSocket.Accept();
byte[] bufferOne = new byte[1];
int nBytes = nSocketClient.Receive(bufferOne);
if (nBytes == 0)
{
    // A blocking Receive that returns 0 bytes means the peer closed its end.
    AppendToLog(String.Format("{0}: Closing.", sName));
    nSocketClient.Close();
}
else
{
    byte[] buffer = new byte[nSocketClient.Available + 1];
    if (nSocketClient.Available > 0)
        nBytes = nSocketClient.Receive(buffer);
    if (nBytes > 0)
    {
        // KS: effectively insert the first received byte at the start.
        Array.Copy(buffer, 0, buffer, 1, buffer.Length - 1);
        buffer[0] = bufferOne[0];
    }
}

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as being connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect from the client and checking on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;
    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TcpClient by ref, assuming there was a scope issue, but client.Connected remains true even after a read. What is going wrong?
Second update:
Here is the solution: do a peek, and based on that determine whether you are connected or not.
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
From the MSDN documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class, but I would expect it to behave much like the Socket class, since it is primarily a wrapper. In general the server is unaware that the client disconnected unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket, the callback is not invoked until there is data to read, so calling EndRead and getting a bytes-read result of 0 (zero) means the socket was disconnected. If you use Read and get a zero-bytes-read result, I suspect you can check the Connected property on the underlying Socket class, and it will be false if the client disconnected, since an I/O operation was just performed on the socket.
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workaround for this tends to rely on sending the amount of data to expect as part of the protocol. That's what HTTP/1.1 does with the Content-Length header (for an entire entity) or with chunked transfer encoding (with various chunk sizes).
Another way is to regularly send "NOOP" or similar commands (essentially messages that do nothing except make sure the communication is still open) as part of your protocol.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
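For illustration, a sketch of the length-prefix approach (the helper names are made up): every message is preceded by a 4-byte length, so the reader always knows how many bytes to expect, and a read returning 0 in the middle of a frame reliably signals a dead or closed connection.
using System;
using System.IO;
using System.Net.Sockets;

static void WriteFrame(NetworkStream stream, byte[] payload)
{
    byte[] len = BitConverter.GetBytes(payload.Length);  // host byte order; both ends must agree
    stream.Write(len, 0, 4);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(NetworkStream stream)
{
    int size = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, size);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = stream.Read(buffer, offset, count - offset);
        if (n == 0) throw new IOException("Peer closed the connection.");
        offset += n;
    }
    return buffer;
}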
