C# (socket) wait for connection during x seconds

I'm making a client socket. This socket will send some data to another socket and wait for its response (if any). I want my client socket to wait for a response for 5 seconds. The problem is, if I put it in receive mode, the program will only run after it gets a connection. I want my program to be listening for a set duration of time, not until it gets a response (which could be never, if the other socket isn't programmed to answer).

The Socket class has a ReceiveTimeout property, which defaults to 0 (an infinite time-out).
https://msdn.microsoft.com/en-us/library/system.net.sockets.socket.receivetimeout(v=vs.110).aspx
If you set this value, the Socket.Receive() method will only block until the timeout has passed, and will then throw a SocketException with SocketError.TimedOut.
Socket sock;
// socket connection and sending data omitted
sock.ReceiveTimeout = 5000; // milliseconds

byte[] buffer = new byte[1024];
try
{
    int received = sock.Receive(buffer);
}
catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
{
    // the peer never answered within the 5-second timeout
}

Related

C# socket is ignoring my SendTimeout value

I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket should throw an exception and I can close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data"></param>
private void SendDataToSocket(byte[] data)
{
try
{
//Send Data to the connection
_tcpClient.Client.Send(data);
}
catch (Exception ex)
{
Debug.WriteLine("Error writing data to socket: " + ex.Message);
//Close our entire connection
IncomingConnectionManager.CloseConnection(this);
}
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out; even after 60 seconds, it's still waiting.
Have I done something wrong here, or am I misunderstanding the functionality of setting the socket's SendTimeout value?
A socket send() is actually a copy operation of your data into the network stack's outgoing buffer. If the copy succeeds (i.e. there is enough space for your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
A send timeout only starts counting when the buffer is full, which indicates that the other side is receiving data more slowly than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken, or the peer was powered off or crashed without closing its socket properly). If the buffer stays full for the timeout period, you'll get an error.
In other words, there is no way to detect an abrupt socket error (like a bad cable or a powered-off or crashed peer) other than filling the outgoing buffer until a timeout is triggered.
Note that in the case of a graceful shutdown of the peer's socket, your socket will become aware of it and give you errors if you try to send or receive after the condition arrived at your socket, which may be long after you finished your operation. Again, you have to trigger the error (by sending or receiving); it does not happen by itself.
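One common mitigation, not part of the original answer, is to enable TCP keep-alive so the OS periodically probes an idle peer; a dead peer then eventually surfaces as an error on a later Send/Receive instead of silent waiting. A minimal sketch, assuming socket is the connected server-side Socket from the question:
// Minimal sketch (assumption): ask the OS to probe idle connections with TCP keep-alive.
// Probe intervals are OS defaults unless tuned separately; a dead peer will eventually
// cause a SocketException on the next Send/Receive rather than hanging indefinitely.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
An application-level heartbeat (periodically sending a small message) achieves the same goal and, once the peer is gone, will also fill the outgoing buffer and let SendTimeout fire.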

Stream CopyToAsync - Detect client disconnection and set a timeout

I am writing a ConnectionHandler as part of Kestrel. The idea is that when a client connects, the ConnectionHandler opens a socket to another server in the network, gets a continuous stream of data and forwards it back to the client. In the meantime, the client can also send data to the ConnectionHandler, which constantly forwards it to the other server in the network (over the opened socket).
public override async Task OnConnectedAsync(ConnectionContext connection)
{
TcpClient serverSocket = new TcpClient(address, port);
serverSocket.ReceiveTimeout = 10000;
serverSocket.SendTimeout = 10000;
NetworkStream dataStream = serverSocket.GetStream();
dataStream.ReadTimeout = 10000;
dataStream.WriteTimeout = 10000;
Stream clientStreamOut = connection.Transport.Output.AsStream();
Stream clientStreamIn = connection.Transport.Input.AsStream();
Task dataTask = Task.Run(async () =>
{
try
{
await dataStream.CopyToAsync(clientStreamOut);
}
catch
{
await LogsHelper.Log(logStream, LogsHelper.BROKEN_CLIENT_STREAM);
return;
}
}, connection.ConnectionClosed);
Task clientTask = Task.Run(async () =>
{
try
{
await clientStreamIn.CopyToAsync(dataStream);
}
catch
{
await LogsHelper.Log(logStream, LogsHelper.BROKEN_DATA_STREAM);
return;
}
}, connection.ConnectionClosed);
await Task.WhenAny(dataTask, clientTask);
}
I am encountering 3 issues:
For the socket to the other server, I am using a TcpClient and its NetworkStream. Even though I am setting all of the timeouts to 10 seconds (ReceiveTimeout/SendTimeout on the TcpClient and ReadTimeout/WriteTimeout on the NetworkStream), the opened socket waits forever, even if the other server in the network does not send any data for 5 minutes.
Setting a timeout on clientStreamOut and clientStreamIn (e.g. clientStreamIn.ReadTimeout = 10000;) also fails, with an exception saying timeouts are not supported on that particular stream. Is it possible somehow to provide a timeout?
When a client connects to the ConnectionHandler, OnConnectedAsync is triggered. The issue comes when a client disconnects (due to a network drop or for whatever other reason). Sometimes the disconnection is detected and the session terminates, while other times it hangs forever, even though the client has actually disconnected. I was expecting CopyToAsync to throw an exception on disconnection, since I assume CopyToAsync is trying to write, but that's not always the case.
connection.ConnectionClosed is a CancellationToken that comes from OnConnectedAsync. I read here https://github.com/dotnet/runtime/issues/23207 that it can be used with CopyToAsync, but I am not sure how. It is also worth mentioning that I have zero control over the client code.
I am running the app using Docker
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
The ReadTimeout and WriteTimeout properties only apply to synchronous reads/writes, not asynchronous ones.
For asynchronous code, you'll need to implement your own read timeouts (write timeouts are generally unnecessary). E.g., use Task.Delay and kill the connection if data isn't received in that time.
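A minimal sketch of that idea (the helper name and buffer size are assumptions, not from the original answer): race each ReadAsync against Task.Delay and treat a quiet period as a dead connection. It could replace the CopyToAsync calls in the handler above, with connection.ConnectionClosed passed as the token.
// Copy from source to destination, treating "no data for readTimeout" as a dead connection.
static async Task CopyWithReadTimeoutAsync(Stream source, Stream destination,
    TimeSpan readTimeout, CancellationToken token)
{
    byte[] buffer = new byte[8192];
    while (true)
    {
        Task<int> readTask = source.ReadAsync(buffer, 0, buffer.Length, token);
        Task finished = await Task.WhenAny(readTask, Task.Delay(readTimeout, token));
        if (finished != readTask)
        {
            token.ThrowIfCancellationRequested(); // canceled (e.g. connection closed) rather than timed out
            throw new TimeoutException("No data received within the read timeout.");
        }

        int bytesRead = await readTask;
        if (bytesRead == 0)
            return; // the remote side closed its end of the connection

        await destination.WriteAsync(buffer, 0, bytesRead, token);
    }
}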

Disposing each socket client that is connected to the socket server

I have just recently started developing a server / multiple-clients application using Socket.
The server doesn't need to keep track of the connected clients; if a client requests a connection, the server accepts it. If there is a request from any client (to get some data), the server responds to that client.
/// <summary>
/// Callback when server accepts a new incoming connection.
/// </summary>
/// <param name="result">Incoming connection result object.</param>
private void AcceptedCallback(IAsyncResult result)
{
Socket clientSocket = null; // declared outside the try block so the catch block can close it
try
{
clientSocket = _socket.EndAccept(result); // Asynchronously accepts an incoming connection attempt
if (clientSocket.Connected) // Check if the client is in 'Connected' state
{
StateObject state = new StateObject();
state.clientSocket = clientSocket;
clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None, // Start listening to client request
ReceiveCallback, state);
}
else
{
clientSocket.Close(); // Terminate that client's connection
Log.writeLog("TCPServer(AcceptedCallback)"
, "Client's status is not connected.");
}
}
catch (Exception ex)
{
Log.writeLog("TCPServer(AcceptedCallback)"
, ex.Message);
clientSocket?.Close();
}
finally
{
Accept(); // Start to accept new connection request
}
}
I have 3 questions about this:
For each BeginReceive that I issue for a newly connected client, does my server application create a new thread/object to hold that client?
If, after the client is connected, the network cable is pulled out on the client side and plugged back in, the client will connect to the server again and this is considered a new connection on the server. If this scenario occurs again and again, will my server program crash?
Hence, do I need to keep track of each client that is connected to the server and find a way to track its state, so I can call Close/Dispose on it?
So far in my testing of scenario 2, no abnormalities are detected in my server program, but I hope someone can help clarify this for me. Thank you.
No, it will use an I/O completion thread from a pool of threads.
No, but you can and should code to cater for this. If something happens on the client side that the OS can detect, it will send a TCP FIN/ACK to the server. This should cause any pending BeginXXX method to proceed to its async callback method, where the call to the corresponding EndXXX method will either throw an exception or report zero bytes read from the socket.
This depends on what you mean by keeping track of them to dispose of them properly. If you mean disposing of them when you detect an error, no; you can put cleanup code in your EndXXX callbacks. If you mean so that you can signal clients gracefully when you shut the server down, then yes.
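A minimal sketch of such a callback (names like StateObject, clientSocket and ReceiveCallback follow the question's code; this is an illustration, not the asker's actual implementation):
private void ReceiveCallback(IAsyncResult result)
{
    StateObject state = (StateObject)result.AsyncState;
    Socket client = state.clientSocket;
    try
    {
        int bytesRead = client.EndReceive(result);
        if (bytesRead == 0)
        {
            // Graceful shutdown by the peer: clean up here.
            client.Close();
            return;
        }

        // ... process state.buffer[0..bytesRead) ...

        // Keep listening for the next request from this client.
        client.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
            ReceiveCallback, state);
    }
    catch (SocketException)
    {
        // Abrupt disconnect (e.g. cable pulled): clean up here as well.
        client.Close();
    }
}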

C# socket connection keeps sending data after disconnect and close

I have a "start" and "stop" button. When clicking the start button, a new socket is created and a connection is made. When clicking the stop button the socket is shutdown, disconnected, closed and disposed to make sure it is completely gone.
At least, that's what I thought: when clicking start after stopping the connection, a new socket is made etc. but as soon as I send data, the data is sent x amount of times I had created a socket (thus, x amount of times I had clicked the start button).
This is the code for the start:
soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); // Socket soc; is declared at class-level
System.Net.IPAddress ipAdd = System.Net.IPAddress.Parse(IP);
System.Net.IPEndPoint remoteEP = new IPEndPoint(ipAdd, port);
try
{
soc.Connect(remoteEP);
soc.Send(jsonSettings);
}
catch (SocketException e)
{
MessageBox.Show("Could not connect to socket");
}
And this is the stop code:
if (soc != null)
{
soc.Shutdown(SocketShutdown.Both);
soc.Disconnect(false);
soc.Close();
soc.Dispose();
}
This is used within a VSTO PowerPoint add-in application, if that causes any additional peculiarities. When the connection is made, I'm sending string data to a Python server listening on this port. Each time a connection is closed, the Python server gets out of its listen-for-data loop and goes back to its waiting-for-connection state (to handle the multiple start/stop connections).
Code for sending data:
// this is called each time the user goes to another slide in the PowerPoint presentation
byte[] byData = System.Text.Encoding.ASCII.GetBytes(stringValue);
soc.Send(byData);
Can anyone point out what I'm doing wrong why the socket connections somehow keep on living and sending data even though I disconnected and closed them?
The observed behavior is the whole point and desired outcome of a clean shutdown. From the MSDN page for Socket.Shutdown():
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
The call to Shutdown() prevents your application from queuing additional outgoing data, it does not stop the network stack from sending data already buffered.
Since you are using a stream socket, how about declaring a network stream for your socket like this:
NetworkStream stream = new NetworkStream(soc);
Then flushing this after each send (and before closing the socket):
stream.Flush();
Also ensure you turn off Nagle's algorithm when you create the socket - it will prevent batching up items on the socket:
soc.NoDelay = true;
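Put together, a minimal sketch of the answer's suggestions (the variable names soc, remoteEP and stringValue follow the question's code; this combines the snippets above rather than adding anything new):
soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
soc.NoDelay = true; // disable Nagle's algorithm so sends are not batched
soc.Connect(remoteEP);

NetworkStream stream = new NetworkStream(soc);
byte[] byData = System.Text.Encoding.ASCII.GetBytes(stringValue);
stream.Write(byData, 0, byData.Length);
stream.Flush(); // flush after each send

// On stop: flush any pending data, shut down, then close.
stream.Flush();
soc.Shutdown(SocketShutdown.Both);
soc.Close();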

TCP socket.receive() seems to drop packets

I am working on a client-server application in C#. The communication between them uses TCP sockets. The server listens on a specific port for incoming client connections. After a new client arrives, its socket is saved in a socket list. I give every new client socket a receive timeout of 1 ms. To receive from the client sockets without blocking my server, I use the thread pool like this:
private void CheckForData(object clientSocket)
{
Socket client = (Socket)clientSocket;
byte[] data = new byte[client.ReceiveBufferSize];
try
{
int dataLength = client.Receive(data);
if (dataLength == 0)// means client disconnected
{
throw (new SocketException(10054));
}
else if (DataReceivedEvent != null)
{
string RemoteIP = ((IPEndPoint)client.RemoteEndPoint).Address.ToString();
int RemotePort = ((IPEndPoint)client.RemoteEndPoint).Port;
Console.WriteLine("SERVER GOT NEW MSG!");
DataReceivedEvent(data, new IPEndPoint(IPAddress.Parse(RemoteIP), RemotePort));
}
ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
}
catch (SocketException e)
{
if (e.ErrorCode == 10060)//recieve timeout
{
ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
}
else if(e.ErrorCode==10054)//client disconnected
{
if (ConnectionLostEvent != null)
{
ConnectionLostEvent(((IPEndPoint)client.RemoteEndPoint).Address.ToString());
DisconnectClient(((IPEndPoint)client.RemoteEndPoint).Address.ToString());
Console.WriteLine("client forcibly disconected");
}
}
}
}
My problem is that sometimes, when the client sends 2 messages one after another, the server doesn't receive the second message. I checked with Wireshark and it shows that both messages were received and ACKed.
I can force this problem to occur when I am putting break point here:
if (e.ErrorCode == 10060)//recieve timeout
{
ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
}
Then I send the two messages from the client and release the breakpoint.
Has anyone met this problem before?
my problem is when sometimes the client send 2 messages one after another, the server doesn't receive the second message
I think it's much more likely that it does receive the second message, but in a single Receive call.
Don't forget that TCP is a stream protocol - just because the data is broken into packets at a lower level doesn't mean that one "send" corresponds to one "receive". (Multiple packets may be sent due to a single Send call, or multiple Send calls may be coalesced into a single packet, etc.)
It's generally easier to use something like TcpClient and treat its NetworkStream as a stream. If you want to layer "messages" on top of TCP, you need to do so yourself - for example, prefixing each message with its size in bytes, so that you know when you've finished receiving one message and can start on the next. If you want to handle this asynchronously, I'd suggest using C# 5 and async/await if you possibly can. It'll be simpler than dealing with the thread pool explicitly.
Message framing is what you need to do. Here: http://blog.stephencleary.com/2009/04/message-framing.html
If you are new to socket programming, I recommend reading these FAQs: http://blog.stephencleary.com/2009/04/tcpip-net-sockets-faq.html
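Along the lines of the framing article linked above, a minimal length-prefix sketch (the helper names SendMessageAsync, ReadMessageAsync and ReadExactAsync are assumptions for illustration, not from the answers):
// Write a 4-byte little-endian length header, then the payload.
static async Task SendMessageAsync(NetworkStream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
    await stream.WriteAsync(lengthPrefix, 0, lengthPrefix.Length);
    await stream.WriteAsync(payload, 0, payload.Length);
}

// Read the length header, then exactly that many payload bytes.
static async Task<byte[]> ReadMessageAsync(NetworkStream stream)
{
    byte[] lengthPrefix = await ReadExactAsync(stream, 4);
    int length = BitConverter.ToInt32(lengthPrefix, 0);
    return await ReadExactAsync(stream, length);
}

// Loop until the requested number of bytes has arrived; one Receive may return less.
static async Task<byte[]> ReadExactAsync(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}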
