Stream CopyToAsync - Detect client disconnection and set a timeout - C#

I am writing a ConnectionHandler as part of Kestrel. The idea is that when a client connects, the ConnectionHandler opens a socket to another server on the network, receives a continuous stream of data from it and forwards that data back to the client. In the meantime, the client can also send data to the ConnectionHandler, which constantly forwards it to the other server over that opened socket.
public override async Task OnConnectedAsync(ConnectionContext connection)
{
    TcpClient serverSocket = new TcpClient(address, port);
    serverSocket.ReceiveTimeout = 10000;
    serverSocket.SendTimeout = 10000;

    NetworkStream dataStream = serverSocket.GetStream();
    dataStream.ReadTimeout = 10000;
    dataStream.WriteTimeout = 10000;

    Stream clientStreamOut = connection.Transport.Output.AsStream();
    Stream clientStreamIn = connection.Transport.Input.AsStream();

    // Server -> client
    Task dataTask = Task.Run(async () =>
    {
        try
        {
            await dataStream.CopyToAsync(clientStreamOut);
        }
        catch
        {
            await LogsHelper.Log(logStream, LogsHelper.BROKEN_CLIENT_STREAM);
            return;
        }
    }, connection.ConnectionClosed);

    // Client -> server
    Task clientTask = Task.Run(async () =>
    {
        try
        {
            await clientStreamIn.CopyToAsync(dataStream);
        }
        catch
        {
            await LogsHelper.Log(logStream, LogsHelper.BROKEN_DATA_STREAM);
            return;
        }
    }, connection.ConnectionClosed);

    await Task.WhenAny(dataTask, clientTask);
}
I am encountering 3 issues:
1. For the socket to the other server I am using a TcpClient and a NetworkStream. Even though I am setting the read and write timeouts to 10 seconds on both the TcpClient and the NetworkStream, the opened socket waits forever, even if the other server does not send any data for 5 minutes.
2. Setting a timeout on clientStreamOut and clientStreamIn (e.g. clientStreamIn.ReadTimeout = 10000;) fails with an exception saying timeouts are not supported for that particular stream. Is it somehow possible to provide a timeout?
3. When a client connects to the ConnectionHandler, OnConnectedAsync is triggered. The problem comes when the client disconnects (due to a network drop or for whatever other reason). Sometimes the disconnection is detected and the session terminates; other times it hangs forever, even though the client has actually disconnected. I expected CopyToAsync to throw an exception on disconnection, since I assume it keeps trying to write, but that is not always the case.
connection.ConnectionClosed is a CancellationToken that comes from OnConnectedAsync. I read here https://github.com/dotnet/runtime/issues/23207 that it can be used with CopyToAsync, but I am not sure how. It is also worth mentioning that I have zero control over the client code.
I am running the app using Docker:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1

The ReadTimeout and WriteTimeout properties only apply to synchronous reads/writes, not asynchronous ones.
For asynchronous code, you'll need to implement your own read timeouts (write timeouts are generally unnecessary). E.g., use Task.Delay and kill the connection if data isn't received in that time.
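A minimal sketch of that idea (the helper name CopyWithReadTimeoutAsync, the buffer size, and the exception type are choices made for illustration, not part of the original answer): each read is raced against Task.Delay, and connection.ConnectionClosed is passed along so the copy also reacts when Kestrel signals that the client is gone.
// Sketch only: copy "source" to "destination", giving up if no data arrives
// within "readTimeout" or if "connectionClosed" (connection.ConnectionClosed)
// is cancelled. Helper name and buffer size are arbitrary.
private static async Task CopyWithReadTimeoutAsync(
    Stream source,
    Stream destination,
    TimeSpan readTimeout,
    CancellationToken connectionClosed)
{
    var buffer = new byte[8192];
    while (true)
    {
        Task<int> readTask = source.ReadAsync(buffer, 0, buffer.Length, connectionClosed);

        // "Use Task.Delay and kill the connection if data isn't received in that time."
        Task finished = await Task.WhenAny(readTask, Task.Delay(readTimeout, connectionClosed));
        connectionClosed.ThrowIfCancellationRequested();
        if (finished != readTask)
            throw new TimeoutException("No data received within the read timeout.");

        int read = await readTask;
        if (read == 0)
            break; // the remote side closed its end of the connection

        await destination.WriteAsync(buffer, 0, read, connectionClosed);
    }
}
In OnConnectedAsync the two CopyToAsync calls would then become calls to this helper (one per direction, with source and destination swapped). Passing connection.ConnectionClosed into the reads and writes also covers the question about that token: when Kestrel cancels it, pending operations that honor the token throw and the copy stops, and even if a particular stream ignores the token, the Task.Delay race still bounds the wait. When the helper throws, disposing the TcpClient makes the other direction's pending read fail as well, so both tasks finish and the existing catch/log blocks still run.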

Related

System.Net.WebSockets Connection OPEN before server is listening

I'm running into issues on a setup that's under load: the connection is accepted and set to OPEN before the server is ready to read from the socket.
Example (not the actual code):
while (true)
{
    var objContext = httpListener.GetContext();
    if (objContext.Request.IsWebSocketRequest)
    {
        var webSocket = (await objContext.AcceptWebSocketAsync(null)).WebSocket;
        while (webSocket.State == WebSocketState.Open)
        {
            var buffer = new byte[BUFFERSIZE];
            var result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken);
            ....
        }
    }
}
When adding breakpoints on the client and server, I noticed that after AcceptWebSocketAsync is called, the client sees the connection as OPEN and ready to be used, and starts sending data.
The issue is that ReceiveAsync has not started yet at that point; ReceiveAsync is a bit slower to start running the task (it stays in WaitingForActivation a bit longer than on an idle system) because the server in question is under high load (lots of other tasks/threads and a low number of CPUs).
Is this normal behavior, and how can it be stopped?
I had someone go over my code, and that person found that ReceiveAsync being called later isn't actually an issue: the data is still waiting in the buffer. The real problem was higher up, where an event handler was attached too late.

Listen to responses from 2 TcpClient instances against the same IP, but different ports

I'm working on a TCP connection where my client connects to a server's IP on 2 different ports. So I have 2 instances of TcpClient objects, one connecting to the IP on port 9000 and the other on port 9001.
The aim of 2 connections is that the server uses the active connection on port 9000 to give certain responses to the client frequently, and the client uses these responses to form and send a request on port 9001.
Now, the first time I connect on 9000 I get a response; I then form a request and fire it off via 9001. I have a feeling I'm doing something wrong with the way I'm managing asynchronous requests to both ports, but I can't figure out an alternative way of doing this:
IPAddress IPAddress = IPAddress.Parse("192.168.1.10");

public static async Task ConnectToPort9000()
{
    TcpClient TcpClient1 = new TcpClient();
    try
    {
        await TcpClient1.ConnectAsync(IPAddress, 9000);
        if (TcpClient1.Connected)
        {
            byte[] Buffer = new byte[1024];
            while (await TcpClient1.GetStream().ReadAsync(Buffer, 0, Buffer.Length) > 0)
            {
                //Server returns a message on this port
                string Port9000Response = Encoding.UTF8.GetString(Buffer, 0, Buffer.Length);

                //Setting ConfigureAwait(false) so that any further responses returned
                //on this port can be dealt with
                await Task.Run(async () =>
                {
                    await SendRequestToPort9001BasedOnResponseAsync(Port9000Response);
                }).ConfigureAwait(false);
            }
        }
    }
    catch (Exception)
    {
        throw;
    }
}

private async Task SendRequestToPort9001BasedOnResponseAsync(string Port9000Response)
{
    //Open connection on port 9001 and send request
    TcpClient TcpClient2 = new TcpClient();
    await TcpClient2.ConnectAsync(IPAddress, 9001);
    if (TcpClient2.Connected)
    {
        //Handle each string response differently
        if (Port9000Response == "X")
        {
            //Form a new request message to send on port 9001
            string _NewRequestMesssage = "Y";
            byte[] RequestData = Encoding.UTF8.GetBytes(_NewRequestMesssage);
            new SocketAsyncEventArgs().SetBuffer(RequestData, 0, RequestData.Length);
            await TcpClient2.GetStream().WriteAsync(RequestData, 0, RequestData.Length);
            await TcpClient2.GetStream().FlushAsync();

            //Handle any responses on this port
            //At this point, based on the request message sent on this port 9001
            //server sends another response on **port 9000** that needs separately dealing with
            //so the while condition in the previous method should receive a response and continue handling that response again
        }
        else if (Port9000Response == "A")
        {
            //Do something else
        }
    }
}
The issue I am having at the moment is that after I send the request on port 9001, and while I am still processing response messages on port 9001, the server has already sent me a response on port 9000, but the while loop in the first method isn't being triggered; it seems that's because it is still executing the second method to process the request/response on port 9001. I tried using ConfigureAwait(false) to basically fire and forget, but it doesn't seem to be working. Am I handling asynchronous processes the wrong way, or should I look at alternatives such as actions/delegates?
The aim of 2 connections is that the server uses the active connection on port 9000 to give certain responses to the client frequently, and the client uses these responses to form and send a request on port 9001.
Please don't do this. Socket programming is hard enough without making it extremely more complicated with multiple connections. Error handling becomes harder, detection of half-open connections becomes harder (or impossible), and communication deadlocks are harder to avoid.
Each socket connection is already bidirectional; it already has two independent streams. The only thing you need to do is always be reading, and send as necessary. The read and write streams are independent, so keep your reads and writes independent.
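As an illustration of that advice (not the poster's actual protocol; the address, port and the "X"/"Y" messages are reused from the question as placeholders), here is a minimal single-connection sketch: the loop never stops reading, and replies go out on the same stream as needed.
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class SingleConnectionClient
{
    // Sketch: one TcpClient, one bidirectional connection.
    static async Task RunAsync()
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(IPAddress.Parse("192.168.1.10"), 9000);
            NetworkStream stream = client.GetStream();

            var buffer = new byte[1024];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Decode only the bytes actually read, not the whole buffer.
                string response = Encoding.UTF8.GetString(buffer, 0, read);

                if (response == "X")
                {
                    // Reply on the same connection instead of opening a second one.
                    byte[] reply = Encoding.UTF8.GetBytes("Y");
                    await stream.WriteAsync(reply, 0, reply.Length);
                }
            }
            // read == 0: the server closed the connection.
        }
    }
}
If handling a response takes long, that work can be pushed onto a separate task so the loop keeps reading, as long as writes to the stream are serialized (for example through a single writer queue), since interleaved concurrent writes would corrupt the outgoing byte stream.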

.NET WebSocket closing for no apparent reason

I am testing how .NET WebSockets work when the client can't process data from the server side fast enough. For this purpose, I wrote an application that sends data continuously to a WebSocket, but includes an artificial delay in the receive loop. As expected, once the TCP window and other buffers fill, the SendAsync calls start to take long to return. But after a few minutes, one of these exceptions is thrown by SendAsync:
System.Net.HttpListenerException: The device does not recognize the command
System.Net.HttpListenerException: The I/O operation has been aborted because of either a thread exit or an application request.
What's weird is that this only happens with certain message sizes and certain timing. When the client is allowed to read all data unrestricted, the connection is stable. Also, when the client is blocked completely and does not read at all, the connection stays open.
Examining the data flow through Wireshark revealed that it is the server that is resetting the TCP connection while the client's TCP window is exhausted.
I tried to follow this answer (.NET WebSockets forcibly closed despite keep-alive and activity on the connection) without success. Tweaking the WebSocket keep-alive interval has no effect. Also, I know that the final application needs to handle unexpected disconnections gracefully, but I do not want them to occur if they can be avoided.
Did anybody encounter this? Is there some timeout tweaking that I can do? Running this should produce the error after between one and a half and three minutes:
class Program
{
    static void Main(string[] args)
    {
        System.Net.ServicePointManager.MaxServicePointIdleTime = Int32.MaxValue; // has no effect

        HttpListener httpListener = new HttpListener();
        httpListener.Prefixes.Add("http://*/ws/");
        Listen(httpListener);
        Thread.Sleep(500);

        Receive("ws://localhost/ws/");

        Console.WriteLine("running...");
        Console.ReadKey();
    }

    private static async void Listen(HttpListener listener)
    {
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = await listener.GetContextAsync();
            if (!ctx.Request.IsWebSocketRequest)
            {
                ctx.Response.StatusCode = (int)HttpStatusCode.NotImplemented;
                ctx.Response.Close();
                return;
            }
            Send(ctx);
        }
    }

    private static async void Send(HttpListenerContext ctx)
    {
        TimeSpan keepAliveInterval = TimeSpan.FromSeconds(5); // tweaking has no effect
        HttpListenerWebSocketContext wsCtx = await ctx.AcceptWebSocketAsync(null, keepAliveInterval);
        WebSocket webSocket = wsCtx.WebSocket;

        byte[] buffer = new byte[100];
        while (true)
        {
            await webSocket.SendAsync(new ArraySegment<byte>(buffer), WebSocketMessageType.Binary, true, CancellationToken.None);
        }
    }

    private static async void Receive(string serverAddress)
    {
        ClientWebSocket webSocket = new ClientWebSocket();
        webSocket.Options.KeepAliveInterval = TimeSpan.FromSeconds(5); // tweaking has no effect
        await webSocket.ConnectAsync(new Uri(serverAddress), CancellationToken.None);

        byte[] receiveBuffer = new byte[10000];
        while (true)
        {
            await Task.Delay(10); // simulate a slow client
            var message = await webSocket.ReceiveAsync(new ArraySegment<byte>(receiveBuffer), CancellationToken.None);
            if (message.CloseStatus.HasValue)
                break;
        }
    }
}
I'm not a .NET developer, but from what I have seen of these kinds of WebSocket problems, in my opinion these can be the reasons:
1. A very short timeout setting on the WebSocket on either side.
2. Client- or server-side runtime exceptions (besides logging, check the onError and onClose handlers to see why).
3. Internet or connection failures. WebSockets sometimes go idle, too. You have to implement a heartbeat system to keep them alive; use ping and pong packets (a C# sketch follows below).
4. Check the maximum binary or text message size on the server side. Also size the buffers so a message that is too big does not cause a failure.
As you said your error usually happens within a certain time, points 1 and 2 should help you most. Sorry that I can't provide code, but I have had the same problems in Java and found that these are the settings that must be configured to make WebSockets work. Search for how to set them in your client and server implementations and you should be fine after that.
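On point 3: .NET's WebSocket classes already expose KeepAliveInterval for protocol-level keep-alives (the question's code sets it), but an application-level heartbeat can be sketched like this (the "ping" payload and helper name are made up for illustration):
// Sketch of an application-level heartbeat: periodically send a tiny message
// so the peer (and any intermediaries) see traffic even when there is nothing to say.
// Note: a WebSocket allows only one outstanding send at a time, so in the
// question's sender this loop would need to be coordinated with the main
// send loop (for example behind a lock or a single writer task).
static async Task HeartbeatLoopAsync(WebSocket socket, TimeSpan interval, CancellationToken ct)
{
    byte[] ping = Encoding.UTF8.GetBytes("ping");
    while (socket.State == WebSocketState.Open && !ct.IsCancellationRequested)
    {
        await socket.SendAsync(new ArraySegment<byte>(ping), WebSocketMessageType.Text, true, ct);
        await Task.Delay(interval, ct);
    }
}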
Apparently, I was hitting an HTTP.SYS low speed connection attack countermeasure, as roughly described in KB 3137046 (https://support.microsoft.com/en-us/help/3137046/http-sys-forcibly-disconnects-http-bindings-for-wcf-self-hosted-servic):
By default, Http.sys considers any speed rate of less than 150 bytes per second as a potential low speed connection attack, and it drops the TCP connection to release the resource.
When HTTP.SYS does that, there is a trace entry in the log at %windir%\System32\LogFiles\HTTPERR
Switching it off was simple from code:
httpListener.TimeoutManager.MinSendBytesPerSecond = UInt32.MaxValue;

TcpListener: Detecting a client disconnect as opposed to a client not sending any data for a while

I was looking how to detect a 'client disconnect' when using a TcpListener.
All the answers seem to be similar to this one:
TcpListener: How can I detect a client disconnect?
Basically, read from the stream, and if Read() returns 0 the client has disconnected.
But that assumes the client disconnects after every single stream of data it sends.
We operate in environments where the TCP connect/disconnect overhead is both slow and expensive.
We establish a connection and then we send a number of requests.
Pseudocode:
client.Connect();
client.GetStatus();
client.DoSomething();
client.DoSomethingElse();
client.AndSoOn();
client.Disconnect();
Each call between Connect and Disconnect() sends a stream of data to the server.
The server knows how to analyze and process the streams.
If I let the TcpListener read in a loop without ever disconnecting, it reads and handles all the messages; but after the client disconnects, the server has no way of knowing that, so it never releases the client and never accepts new ones.
var read = client.GetStream().Read(buffer, 0, buffer.Length);
if (read > 0)
{
    //Process
}
If I let the TcpListener drop the client when read == 0, it accepts only the first stream of data and then drops the client immediately after. Of course this means new clients can connect.
There is no artificial delay between the calls,
but in terms of computer time the time between two calls is 'huge' of course,
so there will always be a time when read == 0 even though that does not mean
the client has or should be disconnected.
var read = client.GetStream().Read(buffer, 0, buffer.Length);
if (read > 0)
{
    //Process
}
else
{
    break; //Always executed as soon as the first stream of data has been received
}
So I'm wondering... is there a better way to detect if the client has disconnected?
You could get the underlying socket using the NetworkStream.Socket property and use its Receive method for reading.
Unlike NetworkStream.Read, the linked overload of Socket.Receive will block until the specified number of bytes have been read, and will only return zero if the remote host shuts down the TCP connection.
UPDATE: #jrh's comment is correct that NetworkStream.Socket is a protected property and cannot be accessed in this context. In order to get the client Socket, you could use the TcpListener.AcceptSocket method which returns the Socket object corresponding to the newly established connection.
Eren's answer solved the problem for me. In case anybody else is facing the same issue
here's some 'sample' code using the Socket.Receive method:
private void AcceptClientAndProcess()
{
    try
    {
        client = server.Accept();
        client.ReceiveTimeout = 20000;
    }
    catch
    {
        return;
    }

    while (true)
    {
        byte[] buffer = new byte[client.ReceiveBufferSize];
        int read = 0;
        try
        {
            read = client.Receive(buffer);
        }
        catch
        {
            break;
        }

        if (read > 0)
        {
            //Handle data
        }
        else
        {
            break;
        }
    }

    if (client != null)
        client.Close(5000);
}
You call AcceptClientAndProcess() in a loop somewhere.
The following line:
read = client.Receive(buffer);
will block until either:
- data is received (read > 0), in which case you can handle it,
- the connection has been closed properly (read == 0), or
- the connection has been closed abruptly (an exception is thrown).
Either of the last two situations indicates the client is no longer connected.
The try catch around the Socket.Accept() method is also required
as it may fail if the client connection is closed abruptly during the connect phase.
Note that I did specify a 20-second timeout for the read operation.
The documentation for NetworkStream.Read does not reflect this, but in my experience, 'NetworkStream.Read' blocks if the port is still open and no data is available, but returns 0 if the port has been closed.
I ran into this problem from the other side, in that NetworkStream.Read does not immediately return 0 if no data is currently available. You have to use NetworkStream.DataAvailable to find out if NetworkStream.Read can read data right now.
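A small sketch of that distinction (assuming a connected TcpClient named client; this is illustrative, not from either comment): DataAvailable only says whether a Read right now would return buffered data without blocking, so by itself it does not distinguish "idle" from "disconnected".
NetworkStream stream = client.GetStream();

if (stream.DataAvailable)
{
    // Data is already buffered: Read returns immediately with read > 0.
    byte[] buffer = new byte[client.ReceiveBufferSize];
    int read = stream.Read(buffer, 0, buffer.Length);
    // handle buffer[0..read)
}
else
{
    // Nothing buffered: a Read here blocks while the connection is alive
    // and idle, and returns 0 once the remote side has closed its end.
}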

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect from the client and checking on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;
    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TcpClient by ref, assuming there was a scope issue, but client.Connected remains true even after a read. What is going wrong?
SECOND UPDATE: Here is the solution. Do a peek and, based on that, determine whether you are still connected:
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
From the MSDN Documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class but I would think that it would behave similar to the Socket class as it is primarily a wrapper class. In general the server would be unaware that the client disconnected from the socket unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket the callback is not called until there is data to be read from the socket, so calling EndRead and getting a bytes read return result of 0 (zero) means the socket was disconnected. If you use Read and get a zero bytes read result I suspect that you can check the Connected property on the underlying Socket class and it will be false if the client disconnected since an I/O operation was performed on the socket.
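A minimal sketch of that zero-byte check, using the Task-based ReadAsync rather than BeginRead/EndRead (the method name is made up for illustration): a completed read of zero bytes means the remote side closed the connection.
// Sketch: a read loop that treats a zero-byte read as "client disconnected".
static async Task ReadUntilDisconnectAsync(TcpClient client)
{
    NetworkStream stream = client.GetStream();
    byte[] buffer = new byte[4096];
    int read;
    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0..read)
    }
    // read == 0: the client closed the connection gracefully. An abrupt reset
    // would instead surface as an IOException thrown by ReadAsync.
}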
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workarounds for this tend to rely on sending the amount of data to expect as part of the protocol. That's what HTTP 1.1 does using the Content-Length header (for an entire entity) or with chunked transfer encoding (with various chunk sizes).
Another way is to send "NOOP" or similar commands (essentially messages that do nothing but make sure the communication is still open) as part of your protocol regularly.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
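As an illustration of the length-prefix workaround (the 4-byte big-endian header below is an arbitrary framing choice for this sketch, not part of HTTP or any protocol mentioned above): the reader always knows how many bytes belong to the current message, so a quiet connection is never mistaken for the end of a message.
// Sketch: length-prefixed messages over a stream.
// Each message is a 4-byte big-endian length followed by that many payload bytes.
static async Task SendMessageAsync(Stream stream, byte[] payload)
{
    byte[] header = new byte[4]
    {
        (byte)(payload.Length >> 24),
        (byte)(payload.Length >> 16),
        (byte)(payload.Length >> 8),
        (byte)payload.Length
    };
    await stream.WriteAsync(header, 0, header.Length);
    await stream.WriteAsync(payload, 0, payload.Length);
}

static async Task<byte[]> ReadMessageAsync(Stream stream)
{
    byte[] header = await ReadExactlyAsync(stream, 4);
    int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
    return await ReadExactlyAsync(stream, length);
}

static async Task<byte[]> ReadExactlyAsync(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Remote side closed the connection.");
        offset += read;
    }
    return buffer;
}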
