Handling user-timeouts in a TCP server in C#

I'm writing a simple C# TCP message server that needs to react when a connected client has been silent for longer than a given TimeSpan timeout. In other words:
Client A connects.
Client A sends stuff.
Server responds to client A.
Client B connects.
timeout time passes without client A sending anything.
Server sends "ping" (not as in network-ping, but as in a message, SendPing) to A.
Client B sends stuff.
Server responds.
pingTimeout time after ping was sent to A, connection to A is dropped, and the client is removed.
Same happens if B is silent too long.
Long story short: if no word has been heard from client[n] within timeout, send a ping. If the ping is replied to, simply update client[n].LastReceivedTime; however, if client[n] fails to respond within pingTimeout, drop the connection.
As far as I understand, this must be done with some kind of scheduler, because simply writing a loop like this
while (true)
{
    foreach (var c in clients)
    {
        if (DateTime.Now.Subtract(c.LastReceivedTime) >= timeout && !c.WaitingPing)
            c.SendPing();
        else if (DateTime.Now.Subtract(c.LastReceivedTime) >= timeout + pingTimeout && c.WaitingPing)
            c.Drop();
    }
}
would simply fry the CPU and would be no good at all. Is there a good, simple algorithm or class for handling cases like this that can easily be implemented in C#? It needs to support 100-500 clients at once as a minimum; handling more is only a plus.

Your solution would be OK, I think, if you run it on a dedicated thread and put a Thread.Sleep(1000) in the loop so you don't, as you say, fry the CPU. Avoid blocking calls on this thread, e.g. make sure your calls to SendPing and Drop are asynchronous, so this thread only does one thing.
The other solution is to use a System.Timers.Timer per client connection with an interval equal to your ping timeout. I'm using this method and have tested it with 500 clients (20-second interval) with no issues. If your interval is much shorter, I would not recommend this; look instead at solutions that use a single thread to check (like yours).
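A minimal sketch of the per-client timer approach. The Client type and its LastReceivedTime/WaitingPing/SendPing/Drop members are placeholders for whatever your connection class actually exposes:

```csharp
using System;

// One System.Timers.Timer per client; Elapsed fires on a thread-pool thread,
// so SendPing and Drop must be safe to call from there and must not block.
class ClientWatchdog : IDisposable
{
    private readonly System.Timers.Timer _timer;
    private readonly Client _client;                 // hypothetical connection class
    private readonly TimeSpan _timeout, _pingTimeout;

    public ClientWatchdog(Client client, TimeSpan timeout, TimeSpan pingTimeout)
    {
        _client = client;
        _timeout = timeout;
        _pingTimeout = pingTimeout;
        _timer = new System.Timers.Timer(timeout.TotalMilliseconds) { AutoReset = true };
        _timer.Elapsed += CheckClient;
        _timer.Start();
    }

    private void CheckClient(object sender, System.Timers.ElapsedEventArgs e)
    {
        var silence = DateTime.UtcNow - _client.LastReceivedTime;
        if (_client.WaitingPing && silence >= _timeout + _pingTimeout)
            _client.Drop();       // ping was never answered
        else if (!_client.WaitingPing && silence >= _timeout)
            _client.SendPing();   // must be asynchronous/non-blocking
    }

    public void Dispose() => _timer.Dispose();
}
```

The watchdog is created when the client connects and disposed when it drops; the timer interval only needs to be as fine-grained as the slack you can tolerate in the timeout.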

Related

socket doesn't close when thread done

I have a problem with sockets. When the client thread ends, the server tries to read and freezes, because the socket is not closed; the thread doesn't close it when it's done. The problem only occurs when I use a thread. If I use two independent projects instead, there is no problem: an exception is thrown, and I can catch it.
I can't use a timeout, and the server must continue working correctly even when the client doesn't close its socket.
Sorry for my bad English.
As far as I know, there is no way for a TCP server (listener) to find out whether data from a client has stopped coming because the client has died/quit or is just inactive. This is not a deficiency of .NET; it is how TCP works. The way I deal with it is:
1. Create a timer in my client that periodically sends an "I am alive" signal to the server. For example, I just send one unusual character, '∩' (code 239).
2. In the TCP listener: set NetworkStream.ReadTimeout before calling Read(...). If the timeout expires, Read throws and the server disposes the old NetworkStream instance and creates a new one on the same TCP port. If the server receives the "I am alive" signal from the client, it keeps listening.
By the way, the property TcpClient.Connected is useless for detecting on the server side whether the client is still using the socket: it only reflects the state of the connection as of the last I/O operation. So if the client is alive but just silent, TcpClient.Connected can be false.
Close the client when you want the connection to be closed (at the end of your client-handling code).
Better yet, use using blocks for all disposable resources, such as both the clients and the listener.
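A sketch of the server-side read loop described above, assuming 0xEF (239) is the agreed keep-alive byte; the variable names are illustrative:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

const byte KeepAlive = 0xEF;        // the client's "I am alive" byte (code 239)

NetworkStream stream = tcpClient.GetStream();
stream.ReadTimeout = 30000;         // ms; Read throws IOException when it expires

var buffer = new byte[4096];
try
{
    int n = stream.Read(buffer, 0, buffer.Length);
    if (n == 0)
    {
        // client closed the socket gracefully
    }
    else if (n == 1 && buffer[0] == KeepAlive)
    {
        // heartbeat only: client is alive, keep listening
    }
    else
    {
        // real payload: process buffer[0..n)
    }
}
catch (IOException)
{
    // read timeout or abnormal disconnect: dispose and recreate the stream
}
```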

Establish remote SSL connection after or before local user connection for SSL wrapper?

I'm trying to make a stunnel clone in C# just for fun. The main loop goes something like this (ignore the catch-everything-and-do-nothing try-catches just for now)
ServicePointManager.ServerCertificateValidationCallback = Validator;
TcpListener a = new TcpListener(9999);
a.Start();
while (true)
{
    Console.Error.WriteLine("Spinning...");
    try
    {
        TcpClient remote = new TcpClient("XXX.XX.XXX.XXX", 2376);
        SslStream ssl = new SslStream(remote.GetStream(), false, new RemoteCertificateValidationCallback(Validator));
        ssl.AuthenticateAsClient("mirai.ca");
        TcpClient user = a.AcceptTcpClient();
        new Thread(new ThreadStart(() =>
        {
            Thread.CurrentThread.IsBackground = true;
            try
            {
                forward(user.GetStream(), ssl); // forward is a blocking function I wrote
            }
            catch { }
        })).Start();
    }
    catch
    {
        Thread.Sleep(1000);
    }
}
I found that if I do the remote SSL connection, as I did, before waiting for the user, then when the user connects the SSL is already set up (this is for tunneling HTTP so latency is pretty important). On the other hand, my server closes long-inactive connections, so if no new connection happens in, say, 5 minutes, everything locks up.
What is the best way?
Also, I observe my program generating as many as 200 threads, which of course means that context-switching overhead is pretty big, and it sometimes results in the whole thing just blocking for seconds, even with only one user tunneling through the program. My forward function goes, in a gist, like
new Thread(new ThreadStart(()=>in.CopyTo(out))).Start();
out.CopyTo(in);
of course with lots of error handling to prevent broken connections from holding things up forever. This seems to stall a lot, though. I can't figure out how to use asynchronous methods like BeginRead, which, according to Google, should help.
For any kind of proxy server (including an stunnel clone), opening the backend connection after you accept the frontend connection is clearly much simpler to implement.
If you pre-open backend connections in anticipation of receiving frontend connections, you can certainly save an RTT (which is good for latency), but you have to deal with the issue you hinted at: the backend will close idle connections. Whenever you receive a frontend connection, you run the risk that the backend connection you are about to associate with it was opened too long ago and may already have been closed by the backend. You will have to manage a pool of currently open backend connections and periodically close and refresh them when they have been idle too long. There is even a race condition: if the backend decides a connection has been idle too long and closes it just as the proxy receives a new frontend connection, the frontend may forward a request through the backend connection while the backend is closing it. That means you must know a priori how long backend connections can stay idle before the backend closes them (i.e. the timeout values configured on the backend), so you can give them up just before the backend decides they are too old.
So in summary: pre-opening backend connections will save an RTT versus opening them on demand, but it is a lot of work, including subtle connection-pool management that is quite tough to implement bug-free. It's up to you to judge whether the extra complexity is worth it.
By the way, concerning your comment about handling several hundred simultaneous connections: I recommend implementing an I/O-bound program such as a proxy server around an event loop rather than around threads. Basically, you use non-blocking sockets and process events in a single thread (e.g. "this socket has new data waiting to be forwarded to the other side") instead of spawning a thread per connection (which gets expensive in both thread creation and context switches). To scale such an event-based model to multiple CPU cores, you can start a small number of parallel threads or processes (roughly one per core), each handling many hundreds (or thousands) of simultaneous connections.
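In .NET specifically, the two-threads-per-tunnel forward function can be replaced with asynchronous copies, so a handful of pooled threads serve all tunnels. A sketch, assuming .NET 4.5+ (Stream.CopyToAsync and async/await); the method name is illustrative:

```csharp
using System.IO;
using System.Threading.Tasks;

// Pump bytes in both directions without dedicating two threads per tunnel.
static async Task ForwardAsync(Stream user, Stream ssl)
{
    Task up = user.CopyToAsync(ssl);
    Task down = ssl.CopyToAsync(user);
    try
    {
        await Task.WhenAny(up, down);   // either side closing ends the tunnel
    }
    finally
    {
        user.Dispose();                 // closing both streams unblocks the other copy
        ssl.Dispose();
    }
}
```

Each tunnel becomes one call to ForwardAsync; while the copies are waiting on the network, no thread is consumed at all.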

Handling Timeouts in a Socket Server

I have an asynchronous socket server that contains a thread-safe collection of all connected clients. If there's no activity coming from a client for a set amount of time (i.e. timeout), the server application should disconnect the client. Can someone suggest the best way to efficiently track this timeout for each connected client and disconnect when the client times out? This socket server must be very high performance and at any given time, hundreds of client could be connected.
One solution is to associate each client with a last-activity timestamp and have a timer periodically poll the collection to see which connections have timed out based on that timestamp, then disconnect them. However, this solution doesn't seem very good to me, because the timer thread has to lock the collection whenever it polls (preventing any other connections/disconnections) for the whole duration of checking every connected client and disconnecting those that timed out.
Any suggestions/ideas would be greatly appreciated. Thanks.
If this is a new project or if you're open to a major refactor of your project, have a look at Reactive Extensions. Rx has an elegant solution for timeouts in asynchronous calls:
var getBytes = Observable.FromAsyncPattern<byte[], int, int, int>(_readStream.BeginRead, _readStream.EndRead);
getBytes(buffer, 0, buffer.Length)
.Timeout(timeout);
Note, the code above is just intended to demonstrate how to timeout in Rx and it would be more complex in a real project.
Performance-wise, I couldn't say unless you profile your specific use case. But I have seen a talk where they used Rx in a complex data-driven platform (probably like your requirements) and they said that their software was able to make decisions within less than 30ms.
Code-wise, I find that using Rx makes my code look more elegant and less verbose.
Your solution is quite OK for many cases, when the number of connected clients is small compared to the cost of enumerating the collection of their contexts. If you allow some wiggle room in the timeout (i.e. it's OK to disconnect a client somewhere between 55 and 65 seconds of inactivity), I'd just run the purging routine every so often.
The other approach, which worked great for me, was using a queue of activity tokens. Let's use C#:
class Token
{
    ClientContext theClient; // this is the client we've observed activity on
    DateTime theTime;        // this is the time of the observed activity
}

class ClientContext
{
    // ... what you need to know about the client
    DateTime lastActivity;   // the time the last activity happened on this client
}
Every time activity happens on a particular client, a token is generated and pushed into a FIFO queue, and lastActivity is updated in the ClientContext.
There is another thread which runs the following loop:
1. Extract the oldest token from the FIFO queue.
2. Check whether theTime in this token matches theClient.lastActivity.
3. If it does, shut down the client.
4. Look at the next oldest token in the queue and calculate how much time is left until it needs to be shut down.
5. Thread.Sleep(<this time>);
6. Repeat.
The price of this approach is small but constant, i.e. O(1) overhead per activity. One can come up with solutions with a faster best case, but it seems to me that it's hard to come up with one with faster worst-case performance.
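A sketch of that purge thread, assuming a single consumer, a ConcurrentQueue as the FIFO, and illustrative Token/ClientContext shapes (TheTime, TheClient, LastActivity, Shutdown are placeholder members); the expiry check that the prose leaves implicit is spelled out:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Oldest tokens sit at the head of the queue; one background thread drains them.
void PurgeLoop(ConcurrentQueue<Token> queue, TimeSpan timeout)
{
    while (true)
    {
        if (!queue.TryPeek(out Token t))
        {
            Thread.Sleep(timeout);      // queue empty: nothing can expire sooner
            continue;
        }

        TimeSpan left = t.TheTime + timeout - DateTime.UtcNow;
        if (left > TimeSpan.Zero)
        {
            Thread.Sleep(left);         // sleep until the oldest token can expire
            continue;
        }

        queue.TryDequeue(out t);        // safe: this thread is the only consumer
        // If no newer activity superseded this token, the client idled out.
        if (t.TheTime == t.TheClient.LastActivity)
            t.TheClient.Shutdown();
        // Otherwise the token is stale; a fresher one sits further down the queue.
    }
}
```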
I think it is easier to control the timeout from within the connection thread. So you may have something like this:
// accept the socket and open the stream
stream.ReadTimeout = 10000; // 10 seconds
while (true)
{
    int bytes = stream.Read(buffer, 0, buffer.Length);
    // process the data
}
The stream.Read() call will either return data (which means the client is alive), throw an IOException (read timeout or abnormal disconnect), or return 0 (the client closed the socket).

client-server question

If I have a client connected to a server and the server crashes, how can I determine, from my client, that the connection is down? The idea is that in my client's while loop I wait to read a line from the server (String a = sr.ReadLine();), and while the client is waiting to receive that line, the server crashes. How do I close the thread that contains my while loop?
Many have told me that in that while(alive) { .. } I should just change the alive value to false, but if my program is currently waiting for a line to read, it won't get to exit the while because it will be trapped at sr.ReadLine().
I was thinking that if I can't send a line to the server I should just close the client thread with .Abort(). Any ideas?
Have a timeout parameter in the ReadLine method which takes a TimeSpan value and times out after that interval if no response is received:
public string ReadLine(TimeSpan timeout)
{
    // ..your logic.
}
For an example check these SO posts -
Implementing a timeout on a function returning a value
Implement C# Generic Timeout
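One way to sketch such a wrapper, using Task.Wait with a timeout (the method name and reader parameter are illustrative, not from the posts above):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Wrap the blocking ReadLine in a task and bail out if it takes too long.
public static string ReadLineWithTimeout(StreamReader reader, TimeSpan timeout)
{
    Task<string> readTask = Task.Run(() => reader.ReadLine());
    if (readTask.Wait(timeout))
        return readTask.Result;      // a line (or null on end-of-stream) arrived in time

    // Note: the underlying ReadLine may still be blocked; close the
    // connection to unblock it rather than aborting the thread.
    throw new TimeoutException("Server did not respond in time.");
}
```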
Is the server app your own, or something off the shelf?
If it's yours, send a "heartbeat" every couple of seconds to let the clients know that the connection and service are still alive. (This is a bit more reliable than just seeing whether the connection is closed, since it may be possible for the connection to remain open while the server app is locked up.)
A server crash is nothing special from your client's point of view. There are several external factors that can make the connection go down: the client is one of them; internet/LAN problems are another.
It doesn't matter why something fails; it should be handled anyway. Servers going down will make your users scream ;)
Regarding multi threading, I suggest that you look at the BeginXXX/EndXXX asynchronous methods. They give you much more power and a more robust solution.
Try to avoid any strategy that relies on Thread.Abort(). If you cannot avoid it, make sure you understand the idiom for that mechanism, which involves running the code in a separate AppDomain and catching ThreadAbortException.
If the server crashes I imagine you will have more problems than just fixing a while loop. Your program may enter an unstable state for other reasons. State should not be overlooked. That being said, a nice "server timed out" message may suffice. You could take it a step further and ping, then give a slightly more advanced message "server appears to be down".

Sequential access to asynchronous sockets

I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients.
The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.
The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.
I do not want to block threads from sending requests to different clients at the same time.
E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.
I was thinking of using a simple lock statement on the socket:
lock (c3Socket)
{
    // Send request to C3
    // Get response from C3
}
I am using asynchronous sockets, so I may have to use Monitor instead:
Monitor.Enter(c3Socket); // Before calling .BeginReceive()
And
Monitor.Exit(c3Socket); // In .EndReceive
I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for.
Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
Am I overlooking anything here? Any input appreciated.
My answer here would be a state machine per socket. The states would be free and busy:
If the socket is free, the sender thread marks it busy and starts sending to the client and waiting for the response.
You might want to set up a timeout on that wait, just in case a client gets stuck somehow.
If the state is busy, the thread sleeps, waiting for a signal.
When that client-related timeout expires, close the socket; the client is dead.
When a response is successfully received/parsed, mark the socket free again and signal/wake up the waiting threads.
Only lock around socket state inquiry and manipulation, not the actual network I/O. That means a lock per socket, plus some sort of wait primitive such as a condition variable (in .NET, Monitor.Wait/Pulse fills this role).
Hope this helps.
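A minimal sketch of that free/busy state machine using Monitor as the condition variable (the class and method names are illustrative; the lock guards only the state flag, never the network I/O itself):

```csharp
using System;
using System.Threading;

// Per-socket state guarded by a lock; waiting sender threads block on the same object.
class SocketState
{
    private readonly object _gate = new object();
    private bool _busy;

    // Block until this client is free, then claim it for one request/response.
    public void Acquire(TimeSpan timeout)
    {
        lock (_gate)
        {
            while (_busy)
                if (!Monitor.Wait(_gate, timeout))
                    throw new TimeoutException("Client stayed busy too long.");
            _busy = true;
        }
    }

    // Called once the response has been received/parsed (or the socket is dropped).
    public void Release()
    {
        lock (_gate)
        {
            _busy = false;
            Monitor.Pulse(_gate);   // wake one waiting sender thread
        }
    }
}
```

Acquire is called before the send begins and Release from the receive-completion path, so the lock is held only around the state change, never across network I/O.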
You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know what thread operations will be running on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other thread may call Enter and succeed even though that client is "in use", just because it happened to get the same thread that Enter was originally called on.
I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use. You would of course need locking to update this shared state.
