C# TLS session disconnects

I have a Client(s)/Server TCP scenario with TLS connections written in C#.
The clients fail to deliver messages after they go quiet and then resume (inactivity of roughly 25 minutes), but if I keep the clients chatting (every 30 seconds) there is no problem.
Neither the client nor the server sees a disconnection, but traffic stops flowing (a TLS session break? I am just guessing here).
I need to keep the connection in place because the server needs to be able to respond back at any time.
The server sends keepalives every 5 minutes once the TCP socket connects and TLS authenticates.
Q1) Is there a way to configure the SslStream or the socket in C# to use TicketKeys and reuse the session?
Q2) If the problem is not session reuse, and I use Wireshark or NetMonitor, what should I be looking for to determine why traffic is no longer flowing even though both parties believe they are connected?
Thx

Don't set short timeouts on the server and client, such as:
sslStream.ReadTimeout = 5000;
sslStream.WriteTimeout = 5000;
Set a large timeout instead.
Then implement something like a Ping/Pong heartbeat command: in the client, on a separate thread, send a Ping to the server every X seconds (see the sketch below).
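A minimal sketch of that heartbeat idea, assuming a line-based protocol over the SslStream; the Heartbeat class, the "PING" string, and the 30-second interval are illustrative, not from the original answer:

using System;
using System.IO;
using System.Threading;

class Heartbeat
{
    private readonly StreamWriter _writer;
    private readonly Timer _timer;

    public Heartbeat(Stream sslStream)
    {
        _writer = new StreamWriter(sslStream) { AutoFlush = true };
        // Write something every 30 seconds so no middlebox sees the
        // connection as idle. If application messages share the stream,
        // serialize writes with a lock.
        _timer = new Timer(_ => SendPing(), null,
                           TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }

    private void SendPing()
    {
        try { _writer.WriteLine("PING"); }
        catch (IOException) { /* connection is gone; trigger a reconnect */ }
    }
}

On the server, reply "PONG" (or just discard the line); it is the periodic traffic itself that keeps idle timers along the path from expiring.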

Related

Increase TCP disconnection time

I know TCP won't break an established connection for some period of time, but is it possible to increase that time?
I am developing a client-server application that streams continuous data from a server to the client application using C#. I want to keep collecting data if the cable connection is restored within one minute.
There are three timeouts that you can use in the TcpClient: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient?view=netcore-3.1#properties
Connect timeout. The time it takes to establish the connection.
Receive timeout. The timeout on the receiving side once a read has been initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.receivetimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_ReceiveTimeout
Send timeout. The timeout on the sending side once a write has been initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.sendtimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_SendTimeout
Those are the out-of-the-box settings you can use; a short sketch of them follows. If you didn't mean one of these, please clarify in your question.
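A sketch of those settings, assuming modern C# with top-level statements; the host, port, and 5-second figure are illustrative, and note that TcpClient exposes no ConnectTimeout property, so the connect timeout is emulated here by racing ConnectAsync against a delay:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

var client = new TcpClient();
client.ReceiveTimeout = 60000; // ms; applies to blocking (synchronous) reads
client.SendTimeout = 10000;    // ms; applies to blocking (synchronous) writes

var connect = client.ConnectAsync("example.com", 8080); // illustrative endpoint
if (await Task.WhenAny(connect, Task.Delay(5000)) != connect)
    throw new TimeoutException("connect took longer than 5 seconds");
await connect; // propagate any connection error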

How to Process Multiple Requests at the Same Time With a Server

Alright, so here's my thinking:
If two requests were made to one server at the same time, would one be denied, and if so, how could you keep something like that from happening?
I currently have a server set up for a chat application I'm making, and it basically starts a TCP/IP connection, waits for a client, reads the data sent from them, sends something back, disconnects, and repeats. That way, the server never stops running, and as many requests as you want could be made.
However, what if a client started up while another program was already using the server? If one program was fetching a file from the server while the other was starting up, and the one starting up needed data from the server while it was busy, what would happen?
Would the startup wait until the server was available, or would it just go straight to an error (since no connection was available)? If so, this could be really bad, since the user wouldn't have all the data, like the list of his friends, or a few chats. How could you fix this?
My idea would be to have a while loop set up so that it keeps querying the server until it gets a response. Is this the right way to go about this?
No, the client should assume that the server is always available. You'll see that the basic Socket.Listen(Int32) method (which TcpListener acts as a wrapper for) takes one parameter, backlog:
Listen causes a connection-oriented Socket to listen for incoming connection attempts. The backlog parameter specifies the number of incoming connections that can be queued for acceptance. To determine the maximum number of connections you can specify, retrieve the MaxConnections value. Listen does not block.
Most server implementations start with something like the following:
socket.Listen(10); // You choose the backlog size
while (true)
{
    var client = socket.Accept();
    // Spawn a thread (or Task if you prefer), and have that do all the work for the client
    var thread = new Thread(() => DoStuff(client));
    thread.Start();
}
With this implementation, there is always a thread listening for new connections. When a client connects, a new thread is created for it, so the processing for that client doesn't delay/prevent more connections from being accepted on the main thread.
Now, if a few new connections come in faster than the server can create new threads, then the new connections will be put in the backlog automatically (I think at the OS level) - the backlog parameter determines how many pending connections can be in the backlog at once.
What if the backlog fills up completely? At this point, you need a second server, and a load balancer to act as a middle-man, directing half the requests to the first server, and half the requests to the second server.
The logic here is simple.
The server starts listening and begins accepting clients.
A client tries to connect to the server. You can simply do nothing if the server isn't running, or you can implement some reconnect logic that notifies the client about unsuccessful connection attempts.
Let's assume the server was running when the client tried to connect. You are talking about a multi-client application, so you shouldn't disconnect the client after one exchange; you should add the connected client to a list of connected clients.
Then, when one of the connected clients sends a message, you receive that message via its client socket and broadcast it to all other connected clients except the one that sent the data (see the sketch below).
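A compact sketch of that accept, keep, broadcast flow; the port, the newline framing, and names like Serve and Broadcast are illustrative, not from the answer:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class ChatServer
{
    static readonly List<TcpClient> Clients = new List<TcpClient>();
    static readonly object Sync = new object();

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8080);
        listener.Start();
        while (true)
        {
            var client = listener.AcceptTcpClient();
            lock (Sync) Clients.Add(client);      // keep it, don't disconnect
            new Thread(() => Serve(client)).Start();
        }
    }

    static void Serve(TcpClient sender)
    {
        using (var reader = new StreamReader(sender.GetStream()))
        {
            string line;
            while ((line = reader.ReadLine()) != null) // null = client left
                Broadcast(line, sender);
        }
        lock (Sync) Clients.Remove(sender);
    }

    static void Broadcast(string message, TcpClient from)
    {
        lock (Sync)
            foreach (var c in Clients)
                if (c != from) // everyone except the sender
                    new StreamWriter(c.GetStream()) { AutoFlush = true }
                        .WriteLine(message);
    }
}

A production server would cache one writer per client and handle write failures, but the shape (a shared client list plus a broadcast loop) is the same.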

c# and networking - Client listening and server send and most efficient C# socket networking?

I'm working on a game that depends on the standard System.Net.Sockets library for networking. What's the most efficient and standardized "system" I should use? Should the client send data requests every set amount of seconds, or when a certain event happens? My other question: is a port forward required for a client to listen and receive data? How is this done; is there another socket created specifically for listening only on the client? How can I send messages and listen on the same socket on the client? I'm having a difficult time grasping the concept of networking; I started messing with it two days ago.
Should the client send data requests every set amount of seconds, or when a certain event happens?
No. Send your message as soon as you can. The socket stack has algorithms that determine when data is actually sent, for instance the Nagle algorithm.
However, if you send a LOT of messages it can be beneficial to enqueue everything into the same socket method call. You would, however, need to send several thousand messages per client per second for that to give you any benefit.
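For the latency-sensitive case (typical in games), the standard knob for opting out of Nagle's batching is NoDelay:

using System.Net.Sockets;

var client = new TcpClient();
client.NoDelay = true; // disable Nagle's algorithm: small writes go out immediately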
My other question, is a port forward required for a client to listen and receive data?
No. Once a socket connection has been established, it's bidirectional, i.e. both end points can send and receive information without screwing something up for the other end point.
But to achieve that you typically have to use asynchronous operations so that you can keep receiving all the time.
How is this done, is there another socket created specifically for listening only on the client?
The server has a dedicated socket (a listener) whose only purpose is to accept client sockets. When the listener has accepted a new connection from a remote end point, you get a new socket object that represents the connection to the newly connected endpoint.
How can I send messages and listen on the same socket on the client?
The easiest way is to use asynchronous receives and blocking sends.
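A sketch of that pattern using the classic Begin/End socket API; the Connection class and the UTF-8 framing are illustrative:

using System;
using System.Net.Sockets;
using System.Text;

class Connection
{
    readonly Socket _socket;
    readonly byte[] _buffer = new byte[4096];

    public Connection(Socket socket)
    {
        _socket = socket;
        // Asynchronous receive: the callback fires whenever data arrives,
        // so the client is always listening without a dedicated thread.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int read = _socket.EndReceive(ar);
        if (read == 0) return; // remote side closed the connection
        Console.WriteLine(Encoding.UTF8.GetString(_buffer, 0, read));
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             OnReceive, null); // keep receiving
    }

    // Blocking send: safe to call from game logic whenever an event occurs.
    public void Send(string message) =>
        _socket.Send(Encoding.UTF8.GetBytes(message));
}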
If you do not want to take care of everything by yourself, you can try my Apache licensed library http://sharpmessaging.net.
Creating a stable, high quality server will require you to have a wealth of knowledge on networking and managing your objects.
I highly recommend you start with something smaller before attempting to create your own server from scratch, or at the very least play around with a server for a different game that's already made, attempt to improve upon it or add new features.
That being said, there are a few ways you can set up the server. If you plan on having more than a couple of clients, you generally don't want them all to send data whenever they feel like it, as this can bog down the server; you want to structure it so that the client sends as little data as possible on a scheduled basis, and the server can request more when it's ready. How that's set up and structured is up to you.
A server generally has to have a port forwarded on the router in order for requests to make it to the server from the internet, and here is why. When your computer makes a connection to a website (stackoverflow for example), it sends out a request on a random port; the router remembers the port you sent out on and remembers who sent it (you), so when the server sends back the information you requested, the router knows you wanted that data and sends it to you. In the case of RUNNING a server, there is no outbound request to a client (Jack, for example), so the router doesn't know where Jack's request is supposed to go. By adding a port-forwarding rule to the router, you're saying that all traffic arriving on port 25565 (for example) is supposed to go to your server.
Clients generally do not need to forward ports because they are only making outbound requests and receiving data.
Server Starts, server starts listening on port 25565
Client starts, client connects to server on port 25565 and initiates a connection
Server responds to client on whatever port the client used to connect (this is done behind the scenes in sockets)
Communication continues from here.
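For completeness, a minimal client side of that sequence, showing that the single outbound connection is used in both directions (the hostname is made up; the 25565 port is carried over from the example above):

using System.Net.Sockets;
using System.Text;

// One outbound connect; no port forwarding needed on the client's router.
var client = new TcpClient("server.example.com", 25565);
var stream = client.GetStream();
byte[] hello = Encoding.UTF8.GetBytes("HELLO\n");
stream.Write(hello, 0, hello.Length);           // send on the socket...
byte[] reply = new byte[1024];
int read = stream.Read(reply, 0, reply.Length); // ...receive on the same one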

Client usually disconnected from server after a few dozen minutes

I've created a server-client communication program in .NET (C# or VB.NET) using TcpListener and Socket on port 8080. In simple words, the program works like chat software: the client connects to the server, and both wait for messages from each other and then process them.
To retrieve packets from the client, I am using a While loop like this:
While True
    Dim Buffer(4096) As Byte
    Dim bytesRead As Integer = s.Receive(Buffer) ' use the returned byte count
    If bytesRead = 0 Then Exit While ' remote side closed the connection
    Dim strDataReceived As String = System.Text.Encoding.ASCII.GetString(Buffer, 0, bytesRead)
    ProcessData(strDataReceived) ' Process data received...........
End While
When testing both server.exe and client.exe locally, the software works fine for several hours without any problem.
But when I run server.exe on my real server, the connection between server and client usually gets lost a few dozen minutes after the client connects. The symptom is that the client sends a packet to the server, but the server never receives it, even though it is still blocked in the 's.Receive(Buffer)' call. I have tested many times, but I have had no luck keeping the connection alive for over an hour.
I have investigated this problem, but it is still very strange:
The server does not have any firewall software installed.
The client is not using any proxy, antivirus, or firewall software.
I use bandwidth-logging software on the server to make sure the server's internet connection is stable.
I ran 'ping -t' from my client computer to the server and kept watching it to make sure there was no connection loss between client and server. The ping command showed times usually ranging from 5 ms to 50 ms, with no timeouts.
I even tried unplugging the network cable on the client computer for a few seconds and then replugging it to simulate a disconnect event. I observed that the server-client connection was still maintained, so that is not what causes my symptom.
I was thinking of writing code to auto-reconnect on a receive timeout, but that could make my software lag while reconnecting if the underlying symptom is still there. I really want to know what is wrong with my code and what the solution is for fixing this symptom.
Likely the server is behind some sort of firewall (Cisco ASA, etc.) which has idle connection timeouts. When you "punch" through a firewall / NAT device, a "session" is created within the firewall kernel. There is an associated resource that has to be reclaimed, so firewalls do not usually allow unlimited connection timeout, but firewalls do support things like dead connection detection.
Adding a keepalive packet / activity every 5 minutes, or disconnecting and reconnecting, is the only way around that. Few network admins are going to change their configs to accommodate this. It is pretty simple to implement a "ping" or "keepalive" command in a custom TCP application protocol. Just send the string and consume it; you don't even have to respond to the packet to reset the idle timer within the firewall, though responding would probably be best practice.
When I say keepalive, I don't mean the TCP keepalive socket option. That is a zero-length packet and is detectable by a good firewall, like Cisco's. Cisco admins can set up rules to quietly deny your keepalive packet, so the solution is to implement it above the TCP layer, in the application layer, by sending a small string of data like "KEEPALIVE\r\n".
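On the receiving side, the "consume it" part is a one-line filter; a sketch assuming a line-based reader over the connection, where reader and ProcessData stand in for whatever the application already has:

string line;
while ((line = reader.ReadLine()) != null) // reader: a StreamReader over the connection
{
    if (line == "KEEPALIVE")
        continue; // its arrival has already reset the firewall's idle timer
    ProcessData(line); // a normal application message
}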

Tcp socket suddenly closing connection

I have a chat site (http://www.pitput.com) that connects user via socket connections.
I have on the client side a Flash object that opens a connection to a port on my server.
On the server, I have a service that is listening on that port asynchronously.
All is working fine, except that when I talk to someone, after an unknown period of time (about a couple of minutes) the server closes my connection and I get an error on the server:
" A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond".
I don't know exactly how the TCP socket works. Does it check for a "live" connection every couple of seconds? How does it decide when to close the connection? I'm pretty sure the close operation is not coming from the client side.
Thanks.
Sounds like the server is handling the connection but not responding. This is the point where I usually pull out WireShark to find out what's going on.
TCP/IP does have an option for checking for live connections; it's called "keepalive." Keepalives are hardly ever used and are not enabled by default. They can be enabled system-wide by tweaking the Registry, but IIRC the lowest timeout is 1 hour. They can also be enabled on a single socket (with a timeout in minutes), but you would know if your application does that.
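For reference, enabling it on a single socket looks roughly like this on Windows; IOControlCode.KeepAliveValues takes a 12-byte structure (on/off flag, idle time, probe interval), and the 5-minute/1-second values here are illustrative:

using System;
using System.Net.Sockets;

static void EnableKeepAlive(Socket socket, uint idleMs, uint intervalMs)
{
    // 12 bytes: on/off flag, idle time before the first probe (ms),
    // interval between probes (ms). Windows-specific.
    var values = new byte[12];
    BitConverter.GetBytes(1u).CopyTo(values, 0);
    BitConverter.GetBytes(idleMs).CopyTo(values, 4);     // e.g. 300000 = 5 min
    BitConverter.GetBytes(intervalMs).CopyTo(values, 8); // e.g. 1000 = 1 s
    socket.IOControl(IOControlCode.KeepAliveValues, values, null);
}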
If you are using a web service and your client is connecting to an HTTP/HTTPS port, then the connection may be getting closed by the HTTP server (which usually closes its connections after a couple of minutes of idle time). It is also possible that an intermediate router is closing it on your behalf after an amount of idle time (this is not default behavior, but corporate routers are sometimes configured with such "helpful" settings).
If you are using a Win32 service, then it does in fact sound like the client side is dropping the connection or losing their network (e.g., moving outside the range of a wireless router). In the latter case, it's possible that the client remains oblivious to the fact that the connection has been closed (this situation is called "half-open"); the server sees the close but the client thinks the connection is still there.
Is this an ASP web service hosted with some company? If so, the server generally recycles apps every 10 to 20 minutes. You cannot have a web service running indefinitely, unless it's your own server (I believe).
