C# sockets: How can I handle unplanned disconnections on socket servers?

I am building a socket server in C#, and I have an issue with unplanned socket disconnects.
For now, I am handling it like this:
private List<TCPClient> _acceptedClientsList = new List<TCPClient>();

public void checkIfStillConnected()
{
    while (true)
    {
        lock (_sync)
        {
            for (int i = 0; i < _acceptedClientsList.Count; ++i)
            {
                // Poll waits up to 10 microseconds; SelectRead returning true while
                // Available is 0 means the remote end has closed the connection.
                if (_acceptedClientsList[i].Client.Client.Poll(10, SelectMode.SelectRead)
                    && _acceptedClientsList[i].Client.Client.Available == 0)
                {
                    NHDatabaseHandler dbHandler = new NHDatabaseHandler();
                    dbHandler.Init();
                    if (_acceptedClientsList[i].isVoipClient)
                    {
                        dbHandler.UpdateUserStatus(_acceptedClientsList[i].UserName, EStatus.Offline);
                    }
                    RemoveClient(i); // removes the client at index i from _acceptedClientsList
                    i--;
                }
            }
        }
    }
}
But this is not optimized at all. It's a thread that checks every socket's status with Socket.Poll, over and over, indefinitely...
I don't know if it's the best thing to do... It feels heavy and awkward.
Does anyone have an idea?
Thanks

It is best to have one dedicated thread reading incoming data from each socket. When an unplanned disconnection occurs (e.g. the client side's network drops out, or the client crashes), the socket will throw an exception. You can catch that exception, close the socket, and end the thread cleanly while keeping the other sockets running.
I learned TCP/IP communication from this tutorial:
http://www.codeproject.com/Articles/155282/TCP-Server-Client-Communication-Implementation
The author explains his implementation very well, and it is worth learning from. You can use it as a reference to write your own library.
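A minimal sketch of that idea, assuming the listener hands each accepted Socket to its own thread; HandleClient and the cleanup step are illustrative names, not from the original post:
void HandleClient(Socket client) // requires System.Net.Sockets
{
    var buffer = new byte[4096];
    try
    {
        while (true)
        {
            // Receive blocks until data arrives; 0 bytes means the peer shut down gracefully.
            int read = client.Receive(buffer);
            if (read == 0)
                break;
            // process buffer[0..read) here
        }
    }
    catch (SocketException)
    {
        // An unplanned disconnect (reset, dropped network, client crash) lands here.
    }
    finally
    {
        client.Close();
        // Remove the client from any shared list and update the database here.
    }
}
// Usage, one thread per accepted connection:
// new Thread(() => HandleClient(acceptedSocket)).Start();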

Related

NetMQ response socket poll fails after succeeding one time

I'm new to the world of ZeroMQ and I'm working through the documentation of both NetMQ and ZeroMQ as I go. I'm currently implementing (or preparing to implement) the Paranoid Pirate Pattern, and hit a snag. I have a single app which is running the server(s), clients, and eventually queue, though I haven't implemented the queue yet. Right now, there should only be one server at a time running. I can launch as many clients as I like, all communicating with the single server. I am able to have my server "crash" and restart it (manually for now, automatically soon). That all works. Or at least, restarting the server works once.
To enforce that there's only a single server running, I have a thread (which I'll call the WatchThread) which opens a response socket that binds to an address and polls for messages. When the server dies, it signals its demise and the WatchThread decrements the count when it receives the signal. Here's the code snippet that is failing:
//This is the server's main loop:
public void Start(object? count)
{
num = (int)(count ?? -1);
_model.WriteMessage($"Server {num} up");
var rng = new Random();
using ResponseSocket server = new();
server.Bind(tcpLocalhost); //This is for talking to the clients
int cycles = 0;
while (true)
{
var message = server.ReceiveFrameString();
if (message == "Kill")
{
server.SendFrame("Dying");
return;
}
if (cycles++ > 3 && rng.Next(0, 16) == 0)
{
_model.WriteMessage($"Server {num} \"Crashing\"");
RequestSocket sock = new(); //This is for talking to the WatchThread
sock.Connect(WatchThreadString);
sock.SendFrame("Dying"); //This isn't working correctly
return;
}
if(cycles > 3 && rng.Next(0, 10) == 0)
{
_model.WriteMessage($"Server {num}: Slowdown");
Thread.Sleep(1000);
}
server.SendFrame($"Server{num}: {message}");
}
}
And here's the WatchThread code:
public const string WatchThreadString = "tcp://localhost:5000";
private void WatchServers()
{
_watchThread = new ResponseSocket(WatchThreadString);
_watchThread.ReceiveReady += OnWatchThreadOnReceiveReady;
while (_listen)
{
bool result = _watchThread.Poll(TimeSpan.FromMilliseconds(1000));
}
}
private void OnWatchThreadOnReceiveReady(object? s, NetMQSocketEventArgs a)
{
lock (_countLock)
{
ServerCount--;
}
_watchThread.ReceiveFrameBytes();
}
As you can see, it's pretty straightforward. What am I missing? It seems like what should happen is exactly what happens the first time everything is instantiated: the server is supposed to go down, so it opens a new socket to the pre-existing WatchThread and sends a frame. The WatchThread receives the message and decrements the counter appropriately. It's only with the second server that things don't behave as expected...
Edit: I was able to get it to work by unbinding/closing _watchThread and recreating it... it's definitely suboptimal and it still seems like I'm missing something. It's almost as if for some reason I can only use that socket once, though I have other request sockets being used multiple times.
Additional Edit:
My netstat output with 6 clients running (kubernetes is in my host file as 127.0.0.1 as is detailed here):
TCP 127.0.0.1:5555 MyComputerName:0 LISTENING
TCP 127.0.0.1:5555 kubernetes:64243 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64261 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64264 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64269 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64272 ESTABLISHED
TCP 127.0.0.1:5555 kubernetes:64273 ESTABLISHED
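A hedged observation, not from the original thread: a NetMQ ResponseSocket (REP) enforces a strict receive/send alternation, and the handler above calls ReceiveFrameBytes without ever sending a reply, which would leave the socket unable to deliver a second request. If that is the cause, acknowledging each frame would look roughly like this (the "ACK" payload is illustrative; the dying server's RequestSocket simply never reads it):
private void OnWatchThreadOnReceiveReady(object? s, NetMQSocketEventArgs a)
{
    lock (_countLock)
    {
        ServerCount--;
    }
    _watchThread.ReceiveFrameBytes();
    // A REP socket must reply before it can receive again; without this,
    // the next "Dying" frame is never handed to the handler.
    _watchThread.SendFrame("ACK");
}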

Multiple client sockets from one instance of a program connecting to different devices - working really slowly

I have built an application that connects to 4 devices using TCP sockets.
For that I created a TCP class with asynchronous methods to send and receive data.
public delegate void dataRec(string recStr);
public event dataRec dataReceiveEvent;
public Socket socket;
public void Connect(string IpAdress, int portNum)
{
socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IPEndPoint epServer = new IPEndPoint(IPAddress.Parse(IpAdress), portNum);
socket.Blocking = false;
AsyncCallback onconnect = new AsyncCallback(OnConnect);
socket.BeginConnect(epServer, onconnect, socket);
}
public void SetupRecieveCallback(Socket sock)
{
try
{
AsyncCallback recieveData = new AsyncCallback(OnRecievedData);
sock.BeginReceive(m_byBuff, 0, m_byBuff.Length, SocketFlags.None, recieveData, sock);
}
catch (Exception ex)
{
//nevermind
}
}
public void OnRecievedData(IAsyncResult ar)
{
// Socket was the passed in object
Socket sock = (Socket)ar.AsyncState;
try
{
int nBytesRec = sock.EndReceive(ar);
if (nBytesRec > 0)
{
string sRecieved = Encoding.ASCII.GetString(m_byBuff, 0, nBytesRec);
OnAddMessage(sRecieved);
SetupRecieveCallback(sock);
}
else
{
sock.Shutdown(SocketShutdown.Both);
sock.Close();
}
}
catch (Exception ex)
{
//nevermind
}
}
public void OnAddMessage(string sMessage)
{
if (mainProgram.InvokeRequired)
{
{
scanEventCallback d = new scanEventCallback(OnAddMessage);
mainProgram.BeginInvoke(d, sMessage);
}
}
else
{
dataReceiveEvent(sMessage);
}
}
I have 4 devices with 4 different IPs and ports that I send data to, and from which I receive data.
So I created 4 different instances of the class above.
When I receive data, I call callback functions to do their job with the received data (the OnAddMessage event).
The connection to the devices is really good; latency is about 1-2 ms (it's an internal network).
The functions I call through callbacks are pretty fast; each takes no more than 100 ms.
The problem is that it all works really slowly, and it's not caused by the callback functions.
For each piece of data I send to a device, I receive one message back from it.
When I start sending and then stop after about a minute, the program keeps receiving data for another 4-5 seconds, even after I turn the devices off. It's as if there is some lag, and I'm receiving data that should have been delivered much earlier.
It looks like something is working really slowly.
I'm getting about 1 message per second from each device, so it shouldn't be a big deal.
Any ideas what else I should do or set, or what could actually be slowing me down?
You haven't posted all the relevant code, but here are some things to pay attention to:
With a network sniffer, like Wireshark or tcpdump, you can see what is actually going on.
Latency is not the only relevant factor for "connection speed". Look also at throughput, packet loss, re-transmissions, etc.
Try to send and receive in large chunks. Sending and receiving single bytes is slow because it has a lot of overhead.
The receiver should read data faster than the sender can send it, or else internal buffers (OS, network) will fill up.
Try to avoid a "chatty" protocol, basically synchronous request/reply, if possible.
If you do have a chatty protocol, you can get better performance by disabling the Nagle algorithm. The option to disable this algorithm is often called "TCP no delay" or similar (see the sketch after this list).
Don't close/reopen the connection for each message. TCP connection setup and teardown have quite some overhead.
If you have long-standing open TCP connections, close a connection when it has been idle for some time, for example several minutes.
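In .NET, disabling Nagle is a one-liner on either the TcpClient or the Socket; a minimal sketch (the surrounding setup is illustrative):
var client = new TcpClient();
client.NoDelay = true; // disable Nagle's algorithm on the underlying socket
// or, on a raw socket:
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.NoDelay = true;
// equivalently: socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);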

C# socket thinks it's connected [duplicate]

How can I detect that a client has disconnected from my server?
I have the following code in my AcceptCallBack method
static Socket handler = null;
public static void AcceptCallback(IAsyncResult ar)
{
//Accept incoming connection
Socket listener = (Socket)ar.AsyncState;
handler = listener.EndAccept(ar);
}
I need to find a way to discover as soon as possible that the client has disconnected from the handler Socket.
I've tried:
handler.Available;
handler.Send(new byte[1], 0,
SocketFlags.None);
handler.Receive(new byte[1], 0,
SocketFlags.None);
The above approaches work when you are connecting to a server and want to detect when the server disconnects but they do not work when you are the server and want to detect client disconnection.
Any help will be appreciated.
Since there are no events available to signal when the socket is disconnected, you will have to poll it at a frequency that is acceptable to you.
Using this extension method, you can have a reliable method to detect if a socket is disconnected.
static class SocketExtensions
{
public static bool IsConnected(this Socket socket)
{
try
{
return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
}
catch (SocketException) { return false; }
}
}
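A sketch of how that extension might be wired into a periodic check (the 5-second interval and the clients list are illustrative):
// clients: a List<Socket> you maintain elsewhere; check them every 5 seconds.
var pollTimer = new System.Threading.Timer(_ =>
{
    foreach (var client in clients.ToArray())
    {
        if (!client.IsConnected())
        {
            // Handle the disconnect: remove it from the list, close the socket, etc.
        }
    }
}, null, TimeSpan.Zero, TimeSpan.FromSeconds(5));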
Someone mentioned the keepAlive capability of TCP sockets.
Here it is nicely described:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I'm using it this way: after the socket is connected, I'm calling this function, which sets keepAlive on. The keepAliveTime parameter specifies the timeout, in milliseconds, with no activity until the first keep-alive packet is sent. The keepAliveInterval parameter specifies the interval, in milliseconds, between when successive keep-alive packets are sent if no acknowledgement is received.
void SetKeepAlive(bool on, uint keepAliveTime, uint keepAliveInterval)
{
int size = Marshal.SizeOf(new uint());
var inOptionValues = new byte[size * 3];
BitConverter.GetBytes((uint)(on ? 1 : 0)).CopyTo(inOptionValues, 0);
BitConverter.GetBytes((uint)keepAliveTime).CopyTo(inOptionValues, size);
BitConverter.GetBytes((uint)keepAliveInterval).CopyTo(inOptionValues, size * 2);
socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
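For example (the values are illustrative):
// First keep-alive probe after 10 s of inactivity, then one probe every second.
SetKeepAlive(true, 10000, 1000);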
I'm also using asynchronous reading:
socket.BeginReceive(packet.dataBuffer, 0, 128,
SocketFlags.None, new AsyncCallback(OnDataReceived), packet);
And in the callback, a timeout SocketException is caught; it is raised when the socket doesn't get an ACK after a keep-alive packet.
public void OnDataReceived(IAsyncResult asyn)
{
try
{
SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
int iRx = socket.EndReceive(asyn);
}
catch (SocketException ex)
{
SocketExceptionCaught(ex);
}
}
This way, I'm able to safely detect disconnection between TCP client and server.
This is simply not possible. There is no physical connection between you and the server (except in the extremely rare case where you are connecting between two computers with a loopback cable).
When the connection is closed gracefully, the other side is notified. But if the connection is dropped some other way (say the user's connection goes down), then the server won't know until it times out (or tries to write to the connection and the ACK times out). That's just the way TCP works and you have to live with it.
Therefore, "instantly" is unrealistic. The best you can do is within the timeout period, which depends on the platform the code is running on.
EDIT:
If you are only looking for graceful connections, then why not just send a "DISCONNECT" command to the server from your client?
"That's just the way TCP works and you have to live with it."
Yup, you're right. It's a fact of life I've come to realize. You will see the same behavior exhibited even in professional applications utilizing this protocol (and others). I've even seen it occur in online games; your buddy says "goodbye", and he appears to be online for another 1-2 minutes until the server "cleans house".
You can use the methods suggested here, or implement a "heartbeat", as also suggested. I chose the former. But if I did choose the latter, I'd simply have the server "ping" each client every so often with a single byte and see whether we get a timeout or no response. You could even use a background thread to achieve this with precise timing. Maybe even a combination could be implemented in some sort of options list (enum flags or something) if you're really worried about it. But it's not so big a deal to have a little delay in updating the server, as long as you DO update. It's the internet, and no one expects it to be magic! :)
Implementing a heartbeat in your system might be a solution. This is only possible if both client and server are under your control. You can keep a DateTime object tracking when the last bytes were received from the socket, and assume that a socket that hasn't responded within a certain interval is lost. This will only work if you have a heartbeat/custom keep-alive implemented.
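A minimal sketch of that bookkeeping, assuming you update the timestamp from your receive path (the class, field, and method names are illustrative):
class HeartbeatTracker
{
    private DateTime _lastReceivedUtc = DateTime.UtcNow;
    private readonly TimeSpan _timeout = TimeSpan.FromSeconds(30); // tune to your heartbeat interval

    // Call this whenever bytes (including heartbeat messages) arrive on the socket.
    public void OnBytesReceived() => _lastReceivedUtc = DateTime.UtcNow;

    // Poll this periodically; true means the peer has been silent too long.
    public bool IsConsideredDead() => DateTime.UtcNow - _lastReceivedUtc > _timeout;
}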
I've found another workaround for this that is quite useful!
If you use asynchronous methods to read data from the network socket (I mean the BeginReceive/EndReceive methods), then whenever a connection is terminated one of these situations appears: either a message with no data is received (you can see it with Socket.Available - even though BeginReceive is triggered, its value will be zero), or the Socket.Connected value becomes false in that call (don't try to use EndReceive then).
I'm posting the function I used; I think you can see what I meant better from it:
private void OnRecieve(IAsyncResult parameter)
{
Socket sock = (Socket)parameter.AsyncState;
if(!sock.Connected || sock.Available == 0)
{
// Connection is terminated, either by force or willingly
return;
}
sock.EndReceive(parameter);
sock.BeginReceive(..., ... , ... , ..., new AsyncCallback(OnRecieve), sock);
// To handle further commands sent by client.
// "..." zones might change in your code.
}
This worked for me. The key is that you need a separate thread to analyze the socket state by polling; doing it in the same thread as the socket fails to detect the disconnect.
//open or receive a server socket - TODO your code here
socket = new Socket(....);
//enable the keep alive so we can detect closure
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
//create a thread that checks every 5 seconds if the socket is still connected. TODO add your thread starting code
void MonitorSocketsForClosureWorker() {
DateTime nextCheckTime = DateTime.Now.AddSeconds(5);
while (!exitSystem) {
if (nextCheckTime < DateTime.Now) {
try {
if (socket!=null) {
if(socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) {
//socket not connected, close it if it's still running
socket.Close();
socket = null;
} else {
//socket still connected
}
}
} catch {
socket.Close();
} finally {
nextCheckTime = DateTime.Now.AddSeconds(5);
}
}
Thread.Sleep(1000);
}
}
The example code here
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.connected.aspx
shows how to determine whether the Socket is still connected without sending any data.
If you called Socket.BeginReceive() on the server program and then the client closed the connection "gracefully", your receive callback will be called and EndReceive() will return 0 bytes. These 0 bytes mean that the client "may" have disconnected. You can then use the technique shown in the MSDN example code to determine for sure whether the connection was closed.
Expanding on comments by mbargiel and mycelo on the accepted answer, the following can be used with a non-blocking socket on the server end to inform whether the client has shut down.
This approach does not suffer the race condition that affects the Poll method in the accepted answer.
// Determines whether the remote end has called Shutdown
public bool HasRemoteEndShutDown
{
get
{
try
{
int bytesRead = socket.Receive(new byte[1], SocketFlags.Peek);
if (bytesRead == 0)
return true;
}
catch
{
// For a non-blocking socket, a SocketException with
// code 10035 (WSAEWOULDBLOCK) indicates no data available.
}
return false;
}
}
The approach is based on the fact that the Socket.Receive method returns zero immediately after the remote end shuts down its socket and we've read all of the data from it. From Socket.Receive documentation:
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
If you are in non-blocking mode, and there is no data available in the protocol stack buffer, the Receive method will complete immediately and throw a SocketException.
The second point explains the need for the try-catch.
Use of the SocketFlags.Peek flag leaves any received data untouched for a separate receive mechanism to read.
The above will work with a blocking socket as well, but be aware that the code will block on the Receive call (until data is received or the receive timeout elapses, again resulting in a SocketException).
The above answers can be summarized as follows:
The Socket.Connected property determines the socket state based on the last read or receive operation, so it can't detect the current disconnection state unless you manually close the connection or the remote end gracefully closes (shuts down) the socket.
So we can use the function below to check the connection state:
bool IsConnected(Socket socket)
{
try
{
if (socket == null) return false;
return !((socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) || !socket.Connected);
}
catch (SocketException)
{
return false;
}
//the above code is shorthand for:
/* try
{
bool state1 = socket.Poll(5000, SelectMode.SelectRead);
bool state2 = (socket.Available == 0);
if ((state1 && state2) || !socket.Connected)
return false;
else
return true;
}
catch (SocketException)
{
return false;
}
*/
}
Also, the above check needs to account for the poll response time (blocking time).
Also, as the Microsoft documentation says, this poll method "cannot detect problems like a broken network cable, or that the remote host was shut down ungracefully".
Also, as said above, there is a race condition between Socket.Poll and Socket.Available which may give a false disconnect.
The best way, as the Microsoft documentation says, is to attempt to send or receive data to detect these kinds of errors.
The code below is from the Microsoft documentation:
// This is how you can determine whether a socket is still connected.
bool IsConnected(Socket client)
{
bool blockingState = client.Blocking; //save socket blocking state.
bool isConnected = true;
try
{
byte [] tmp = new byte[1];
client.Blocking = false;
client.Send(tmp, 0, 0); //make a nonblocking, zero-byte Send call (dummy)
//Console.WriteLine("Connected!");
}
catch (SocketException e)
{
// 10035 == WSAEWOULDBLOCK
if (e.NativeErrorCode.Equals(10035))
{
//Console.WriteLine("Still Connected, but the Send would block");
}
else
{
//Console.WriteLine("Disconnected: error code {0}!", e.NativeErrorCode);
isConnected = false;
}
}
finally
{
client.Blocking = blockingState;
}
//Console.WriteLine("Connected: {0}", client.Connected);
return isConnected ;
}
And here are the relevant comments from the Microsoft docs:
The Socket.Connected property gets the connection state of the Socket as of the last I/O operation. When it returns false, the Socket was either never connected or is no longer connected.
Connected is not thread-safe; it may return true after an operation is aborted when the Socket is disconnected from another thread.
The value of the Connected property reflects the state of the connection as of the most recent operation.
If you need to determine the current state of the connection, make a nonblocking, zero-byte Send call. If the call returns successfully or throws a WSAEWOULDBLOCK error code (10035), then the socket is still connected; otherwise, the socket is no longer connected.
Can't you just use Select?
Use Select on a connected socket. If the Select returns with your socket as ready but the subsequent Receive returns 0 bytes, that means the client has disconnected. AFAIK, that is the fastest way to determine whether the client disconnected.
I do not know C#, so just ignore this if my solution does not fit C# (C# does provide Select, though) or if I have misunderstood the context.
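A hedged C# sketch of that Select-then-Receive check (the clientSockets list and buffer size are illustrative):
var readable = new List<Socket>(clientSockets); // clientSockets: your connected sockets
// Select removes sockets that are not readable; waits up to 1000 microseconds.
Socket.Select(readable, null, null, 1000);
foreach (var s in readable)
{
    var buffer = new byte[1024];
    int read = s.Receive(buffer);
    if (read == 0)
    {
        // Readable but zero bytes: the peer has disconnected.
        s.Close();
    }
}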
Using the SetSocketOption method, you can enable KeepAlive, which will let you know whenever a socket gets disconnected:
Socket _connectedSocket = this._sSocketEscucha.EndAccept(asyn);
_connectedSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, 1);
http://msdn.microsoft.com/en-us/library/1011kecd(v=VS.90).aspx
Hope it helps!
Ramiro Rinaldi
I had the same problem; try this:
void client_handler(Socket client) // set 'KeepAlive' true
{
while (true)
{
try
{
if (client.Connected)
{
}
else
{ // client disconnected
break;
}
}
catch (Exception)
{
client.Poll(4000, SelectMode.SelectRead);// try to get state
}
}
}
This is in VB, but it seems to work well for me. It looks for a 0 byte return like the previous post.
Private Sub RecData(ByVal AR As IAsyncResult)
Dim Socket As Socket = AR.AsyncState
If Socket.Connected = False And Socket.Available = 0 Then
Debug.Print("Detected Disconnected Socket - " + Socket.RemoteEndPoint.ToString)
Exit Sub
End If
Dim BytesRead As Int32 = Socket.EndReceive(AR)
If BytesRead = 0 Then
Debug.Print("Detected Disconnected Socket - Bytes Read = 0 - " + Socket.RemoteEndPoint.ToString)
UpdateText("Client " + Socket.RemoteEndPoint.ToString + " has disconnected from Server.")
Socket.Close()
Exit Sub
End If
Dim msg As String = System.Text.ASCIIEncoding.ASCII.GetString(ByteData)
Erase ByteData
ReDim ByteData(1024)
ClientSocket.BeginReceive(ByteData, 0, ByteData.Length, SocketFlags.None, New AsyncCallback(AddressOf RecData), ClientSocket)
UpdateText(msg)
End Sub
You can also check the socket's .Connected property if you were to poll.

Getting 10060 (Connection Timed Out) when stress testing a simple TCP server

I have created a simple TCP server, and it works pretty well.
The problems start when we switch to stress tests, since our server should handle many concurrent open sockets; we created a stress test to check this.
Unfortunately, it looks like the server is choking and cannot respond to new connection requests in a timely fashion when the number of concurrent open sockets is around 100.
We have already tried a few types of server, and all produce the same behavior.
The server can be something like the samples in this post (all produce the same behavior):
How to write a scalable Tcp/Ip based server
Here is the code that we are using - when a client connects, the server just hangs in order to keep the socket alive.
public class Server
{
private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);
public Server()
{
listener.Start();
Console.WriteLine("Started.");
while (true)
{
Console.WriteLine("Waiting for connection...");
var client = listener.AcceptTcpClient();
Console.WriteLine("Connected!");
// each connection has its own thread
new Thread(ServeData).Start(client);
}
}
private static void ServeData(object clientSocket)
{
Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId);
var rnd = new Random();
try
{
var client = (TcpClient)clientSocket;
var stream = client.GetStream();
byte[] arr = new byte[1024];
stream.Read(arr, 0, 1024);
Thread.Sleep(int.MaxValue);
}
catch (SocketException e)
{
Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e);
}
}
}
The stress test client is a simple TCP client that loops and opens sockets, one after the other:
class Program
{
{
static List<Socket> sockets;
static private void go(){
Socket newsock = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.Tcp);
IPEndPoint iep = new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060);
try
{
newsock.Connect(iep);
}
catch (SocketException ex)
{
Console.WriteLine(ex.Message );
}
lock (sockets)
{
sockets.Add(newsock);
}
}
static void Main(string[] args)
{
sockets = new List<Socket>();
//int start = 1;// Int32.Parse(Console.ReadLine());
for (int i = 1; i < 1000; i++)
{
go();
Thread.Sleep(200);
}
Console.WriteLine("press a key");
Console.ReadKey();
}
}
Is there an easy way to explain this behavior? Maybe a C++ implementation of the TCP server would produce better results? Maybe it is actually a client-side problem?
Any comment will be welcomed!
ofer
Specify a huge listener backlog: http://msdn.microsoft.com/en-us/library/5kh8wf6s.aspx
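With TcpListener, that suggestion would look something like this sketch (the backlog value is illustrative):
var listener = new TcpListener(IPAddress.Any, 2060);
listener.Start(1000); // pass an explicit, generous backlog to Start()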
Firstly, a thread-per-connection design is unlikely to be especially scalable; you would do better to base your design on an asynchronous server model which uses I/O Completion Ports under the hood. This, however, is unlikely to be the problem in this case, as you're not really stressing the server that much.
Secondly, the listen backlog is a red herring here. The listen backlog is used to provide a queue for connections that are waiting to be accepted. In this example your client uses a synchronous connect call, which means that the client will never have more than one connect attempt outstanding at any one time. If you were using asynchronous connection attempts in the client, then you would be right to look at tuning the listen backlog, perhaps.
Thirdly, given that the client code doesn't show that it sends any data, you can simply issue the read calls and remove the sleep that follows them; the read calls will block. The sleep just confuses matters.
Are you running the client and the server on the same machine?
Is this ALL the code in both client and server?
You might try and eliminate the client from the problem space by using my free TCP test client which is available here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
Likewise, you could test your test client against one of my simple free servers, like this one: http://www.lenholgate.com/blog/2005/11/simple-echo-servers.html
I can't see anything obviously wrong with the code (apart from the overall design).
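A minimal sketch of the asynchronous server shape the first point refers to, using async/await (which rides on I/O Completion Ports on Windows); the method names and buffer size are illustrative:
static async Task RunServerAsync()
{
    var listener = new TcpListener(IPAddress.Any, 2060);
    listener.Start(1000);
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        _ = HandleClientAsync(client); // one pending task per client, no dedicated thread
    }
}

static async Task HandleClientAsync(TcpClient client)
{
    using (client)
    {
        var buffer = new byte[1024];
        NetworkStream stream = client.GetStream();
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // process buffer[0..read) here
        }
    }
}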

C# server - is this a good way to time out and disconnect?

On my multithreaded server I am experiencing trouble with connections that are not coming from the proper client and so hang unauthorized. I did not want to create a new thread only for checking whether clients have been connected for some time without authorization. Instead, I added this check to the ReceiveData thread, shown in the code below. Do you see any performance issue, or is this acceptable? The main point is that every time a client connects (and the Client class is instantiated), it starts a stopwatch. So I added a condition to this thread: if the elapsed time is greater than 1 second and the client is still not authorized, it is added to the list of clients marked for disconnection. Thanks
EDIT: The while(true) is the ReceiveData thread. I am using async operations - from tcpListener.BeginAccept to thread pooling. I have updated the code to let you see more.
protected void ReceiveData()
{
List<Client> ClientsToDisconnect = new List<Client>();
List<System.Net.Sockets.Socket> sockets = new List<System.Net.Sockets.Socket>();
bool noClients = false;
while (true)
{
sockets.Clear();
this.mClientsSynchronization.TryEnterReadLock(-1);
try
{
for (int i = 0; i < this.mClientsValues.Count; i++)
{
Client c = this.mClientsValues[i];
if (!c.IsDisconnected && !c.ReadingInProgress)
{
sockets.Add(c.Socket);
}
//clients connected for more than 1 second without a received name are suspect and should be disconnected
if (c.State == ClientState.NameNotReceived && c.watch.Elapsed.TotalSeconds > 1)
ClientsToDisconnect.Add(c);
}
if (sockets.Count == 0)
{
continue;
}
}
finally
{
this.mClientsSynchronization.ExitReadLock();
}
try
{
System.Net.Sockets.Socket.Select(sockets, null, null, RECEIVE_DATA_TIMEOUT);
foreach (System.Net.Sockets.Socket s in sockets)
{
Client c = this.mClients[s];
if (!c.SetReadingInProgress())
{
System.Threading.ThreadPool.QueueUserWorkItem(c.ReadData);
}
}
//remove clients in ClientsToDisconnect
foreach (Client c in ClientsToDisconnect)
{
this.RemoveClient(c,true);
}
}
catch (Exception e)
{
//this.OnExceptionCaught(this, new ExceptionCaughtEventArgs(e, "Exception when reading data."));
}
}
}
I think I see what you are trying to do, and I think a better way would be to store new connections in a holding area until they have properly connected.
I'm not positive, but it looks like your code could drop a valid connection. If a new connection is made after the checking section, and the second section takes more than a second, all the timers would time out before you could verify the connections. This would put the new connections in both the socket pool AND the ClientsToDisconnect pool. Not good. You would drop a currently active connection and chaos would ensue.
To avoid this, I would make the verification of a connection a separate thread from the use of the connection. That way you won't get bogged down in timing issues (well... you still will, but that is what happens when we work with threads and sockets) and you can be sure that none of the sockets you are using will be closed by you.
My gut reaction is that while (true) plus if (sockets.Count == 0) continue will lead to heartache for your CPU. Why don't you put this on a Timer or something so that this function is only called every ~.5s? Is the 1s barrier really that important?
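A hedged sketch of that Timer suggestion (the 500 ms period and the CheckClients name are illustrative, standing in for the body of the loop above):
// Run the connection/timeout check every 500 ms instead of spinning in while(true).
var checkTimer = new System.Threading.Timer(
    _ => CheckClients(),   // CheckClients: the existing per-client check, hypothetical name
    null,
    TimeSpan.Zero,
    TimeSpan.FromMilliseconds(500));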
