I'm trying to track down some strange behaviour in a socket library that has been used in production for a while.
Receives and sends are processed with BeginReceive/EndReceive and BeginSend/EndSend calls, so everything is handled asynchronously.
In addition, I have a monitor thread that polls the state of the socket every second:
void ControlThreadCallback()
{
if (!IsConnected && m_ConnectionIP != null && !m_ConnectionIP.Equals(String.Empty) && m_ConnectionPort > 0)
{
//Not connected: perform a reconnection, recreate the server socket, or dispose the client socket as needed
}
}
The IsConnected property is defined in the same class:
public bool IsConnected
{
get
{
try
{
return m_Socket != null && m_Socket.Connected &&
!(m_Socket.Poll(1, SelectMode.SelectRead) && m_Socket.Available == 0);
}
catch (Exception ex)
{
//Log error
//Log error. An ObjectDisposedException is sometimes caught here.
return false;
}
}
}
Sometimes an ObjectDisposedException is caught, which makes the IsConnected property return false, and the control thread then thinks the socket is not connected. However, there is activity on the socket, so it is not disconnected at all.
This happens only when a high volume of data is being sent and received on the socket (that is, more than 20 messages per second or so).
In which strange cases can Socket.Poll throw an ObjectDisposedException? And could there be a race condition between Socket.Poll and Socket.Available?
If you need more code I can paste some more snippets.
How can I detect that a client has disconnected from my server?
I have the following code in my AcceptCallBack method
static Socket handler = null;
public static void AcceptCallback(IAsyncResult ar)
{
//Accept incoming connection
Socket listener = (Socket)ar.AsyncState;
handler = listener.EndAccept(ar);
}
I need to find a way to discover as soon as possible that the client has disconnected from the handler Socket.
I've tried:
handler.Available;
handler.Send(new byte[1], 0, SocketFlags.None);
handler.Receive(new byte[1], 0, SocketFlags.None);
The above approaches work when you are connecting to a server and want to detect when the server disconnects, but they do not work when you are the server and want to detect client disconnection.
Any help will be appreciated.
Since there are no events available to signal when the socket is disconnected, you will have to poll it at a frequency that is acceptable to you.
Using this extension method, you can have a reliable method to detect if a socket is disconnected.
static class SocketExtensions
{
public static bool IsConnected(this Socket socket)
{
try
{
return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
}
catch (SocketException) { return false; }
}
}
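For example, a monitor loop could use the extension method like this (a minimal sketch, assuming client is a connected Socket; the one-second interval is arbitrary):
// Poll the socket periodically using the extension method above.
while (client.IsConnected())
{
    Thread.Sleep(1000); // check once per second; tune to taste
}
Console.WriteLine("Client disconnected.");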
Someone mentioned the keep-alive capability of TCP sockets.
It is nicely described here:
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I'm using it this way: after the socket is connected, I call this function, which turns keep-alive on. The keepAliveTime parameter specifies the timeout, in milliseconds, with no activity until the first keep-alive packet is sent. The keepAliveInterval parameter specifies the interval, in milliseconds, between successive keep-alive packets when no acknowledgement is received.
// Configures TCP keep-alive via the SIO_KEEPALIVE_VALS control code.
// The option buffer holds three consecutive uints: on/off, time, interval.
void SetKeepAlive(bool on, uint keepAliveTime, uint keepAliveInterval)
{
    int size = Marshal.SizeOf(new uint());
    var inOptionValues = new byte[size * 3];
    BitConverter.GetBytes((uint)(on ? 1 : 0)).CopyTo(inOptionValues, 0);
    BitConverter.GetBytes(keepAliveTime).CopyTo(inOptionValues, size);
    BitConverter.GetBytes(keepAliveInterval).CopyTo(inOptionValues, size * 2);
    socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
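For example, the following call (the values are only illustrative) would send the first keep-alive probe after 10 seconds of inactivity and then retry every second:
SetKeepAlive(true, 10000, 1000);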
I'm also using asynchronous reading:
socket.BeginReceive(packet.dataBuffer, 0, 128, SocketFlags.None, new AsyncCallback(OnDataReceived), packet);
In the callback, a timeout SocketException is caught; it is raised when the socket doesn't get an ACK for a keep-alive packet.
public void OnDataReceived(IAsyncResult asyn)
{
try
{
SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
int iRx = socket.EndReceive(asyn);
}
catch (SocketException ex)
{
SocketExceptionCaught(ex);
}
}
This way, I'm able to safely detect disconnection between TCP client and server.
This is simply not possible. There is no physical connection between you and the server (except in the extremely rare case where you are connecting between two computers with a loopback cable).
When the connection is closed gracefully, the other side is notified. But if the connection is dropped some other way (say, the user's connection goes down), then the server won't know until it times out (or tries to write to the connection and the ACK times out). That's just the way TCP works, and you have to live with it.
Therefore, "instantly" is unrealistic. The best you can do is within the timeout period, which depends on the platform the code is running on.
EDIT:
If you are only looking for graceful connections, then why not just send a "DISCONNECT" command to the server from your client?
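For example, a minimal sketch of the client side (assuming client is a connected Socket and System.Text is imported; the "DISCONNECT" literal is just a placeholder for whatever command your protocol defines):
// Announce the disconnect at the application level, then shut down gracefully.
byte[] bye = Encoding.ASCII.GetBytes("DISCONNECT");
client.Send(bye);
client.Shutdown(SocketShutdown.Both); // flushes pending data and signals EOF to the peer
client.Close();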
"That's just the way TCP works and you have to live with it."
Yup, you're right. It's a fact of life I've come to accept. You will see the same behavior exhibited even in professional applications using this protocol (and others). I've even seen it occur in online games; your buddy says "goodbye", and he appears to be online for another 1-2 minutes until the server "cleans house".
You can use the methods suggested here, or implement a "heartbeat", as also suggested. I chose the former. But if I did choose the latter, I'd simply have the server "ping" each client every so often with a single byte and see whether we get a timeout or no response. You could even use a background thread to achieve this with precise timing. Maybe a combination could even be implemented in some sort of options list (enum flags or something) if you're really worried about it. But it's not so big a deal to have a little delay in updating the server, as long as you DO update. It's the internet, and no one expects it to be magic! :)
Implementing a heartbeat in your system might be a solution. This is only possible if both client and server are under your control. You can keep a DateTime object that tracks when the last bytes were received from the socket, and assume that a socket that has not responded within a certain interval is lost. This will only work if you have a heartbeat/custom keep-alive implemented.
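A minimal sketch of that bookkeeping (the method names and the 30-second threshold are illustrative, not part of any original code):
private DateTime lastReceived = DateTime.UtcNow;

// Call this from your receive callback whenever bytes arrive.
private void MarkActivity()
{
    lastReceived = DateTime.UtcNow;
}

// Poll this from a monitor thread or timer; a silent socket is assumed lost.
private bool IsPeerAlive()
{
    return (DateTime.UtcNow - lastReceived) < TimeSpan.FromSeconds(30);
}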
I've found another workaround for this that is quite useful!
If you use the asynchronous methods for reading data from the network socket (I mean the BeginReceive/EndReceive methods), then whenever a connection is terminated one of these situations occurs: either a receive completes with no data (you can see it with Socket.Available: even though BeginReceive fired, its value will be zero), or the Socket.Connected value is false in the callback (don't try to call EndReceive in that case).
I'm posting the function I used, I think you can see what I meant from it better:
private void OnRecieve(IAsyncResult parameter)
{
Socket sock = (Socket)parameter.AsyncState;
if(!sock.Connected || sock.Available == 0)
{
// Connection is terminated, either by force or willingly
return;
}
sock.EndReceive(parameter);
sock.BeginReceive(..., ... , ... , ..., new AsyncCallback(OnRecieve), sock);
// To handle further commands sent by client.
// "..." zones might change in your code.
}
This worked for me. The key is that you need a separate thread to analyze the socket state by polling; doing it in the same thread as the socket fails to detect the disconnect.
//open or receive a server socket - TODO your code here
socket = new Socket(....);
//enable the keep alive so we can detect closure
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
//create a thread that checks every 5 seconds if the socket is still connected. TODO add your thread starting code
void MonitorSocketsForClosureWorker() {
DateTime nextCheckTime = DateTime.Now.AddSeconds(5);
while (!exitSystem) {
if (nextCheckTime < DateTime.Now) {
try {
if (socket!=null) {
if(socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) {
//socket not connected, close it if it's still running
socket.Close();
socket = null;
} else {
//socket still connected
}
}
} catch {
//Poll threw (e.g. the socket was disposed); make sure it's cleaned up
if (socket != null) { socket.Close(); socket = null; }
} finally {
nextCheckTime = DateTime.Now.AddSeconds(5);
}
}
Thread.Sleep(1000);
}
}
The example code here
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.connected.aspx
shows how to determine whether the Socket is still connected without sending any data.
If you called Socket.BeginReceive() on the server program and then the client closed the connection "gracefully", your receive callback will be called and EndReceive() will return 0 bytes. These 0 bytes mean that the client "may" have disconnected. You can then use the technique shown in the MSDN example code to determine for sure whether the connection was closed.
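In practice the server-side receive callback ends up looking roughly like this (a sketch only; the buffer field stands in for whatever you passed to BeginReceive):
private static byte[] buffer = new byte[1024];

public static void ReadCallback(IAsyncResult ar)
{
    Socket handler = (Socket)ar.AsyncState;
    int bytesRead = handler.EndReceive(ar);
    if (bytesRead == 0)
    {
        // Zero bytes read: the client performed a graceful shutdown.
        handler.Close();
        return;
    }
    // Process the bytesRead bytes in buffer, then post the next read.
    handler.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
        new AsyncCallback(ReadCallback), handler);
}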
Expanding on comments by mbargiel and mycelo on the accepted answer, the following can be used with a non-blocking socket on the server end to inform whether the client has shut down.
This approach does not suffer the race condition that affects the Poll method in the accepted answer.
// Determines whether the remote end has called Shutdown
public bool HasRemoteEndShutDown
{
get
{
try
{
int bytesRead = socket.Receive(new byte[1], SocketFlags.Peek);
if (bytesRead == 0)
return true;
}
catch
{
// For a non-blocking socket, a SocketException with
// code 10035 (WSAEWOULDBLOCK) indicates no data available.
}
return false;
}
}
The approach is based on the fact that the Socket.Receive method returns zero immediately after the remote end shuts down its socket and we've read all of the data from it. From Socket.Receive documentation:
If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
If you are in non-blocking mode, and there is no data available in the protocol stack buffer, the Receive method will complete immediately and throw a SocketException.
The second point explains the need for the try-catch.
Use of the SocketFlags.Peek flag leaves any received data untouched for a separate receive mechanism to read.
The above will work with a blocking socket as well, but be aware that the code will block on the Receive call (until data is received or the receive timeout elapses, again resulting in a SocketException).
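If you prefer not to swallow every exception, a variation (my sketch, not part of the original answer) filters on SocketError.WouldBlock and treats any other SocketException as the connection being gone:
public bool HasRemoteEndShutDown
{
    get
    {
        try
        {
            // Zero bytes from a peek means the remote end called Shutdown.
            return socket.Receive(new byte[1], SocketFlags.Peek) == 0;
        }
        catch (SocketException ex)
        {
            // WSAEWOULDBLOCK just means no data yet on a non-blocking socket;
            // anything else is taken to mean the connection is dead.
            return ex.SocketErrorCode != SocketError.WouldBlock;
        }
    }
}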
The above answers can be summarized as follows:
The Socket.Connected property determines the socket state based on the last read or receive operation, so it can't detect the current disconnection state unless you manually close the connection or the remote end gracefully closes the socket (Shutdown).
So we can use the function below to check the connection state:
bool IsConnected(Socket socket)
{
try
{
if (socket == null) return false;
return !((socket.Poll(5000, SelectMode.SelectRead) && socket.Available == 0) || !socket.Connected);
}
catch (SocketException)
{
return false;
}
//the above is shorthand for:
/* try
{
bool state1 = socket.Poll(5000, SelectMode.SelectRead);
bool state2 = (socket.Available == 0);
if ((state1 && state2) || !socket.Connected)
return false;
else
return true;
}
catch (SocketException)
{
return false;
}
*/
}
Also, the above check needs to account for the poll response time (block time).
Also, as the Microsoft documentation says, this poll method "cannot detect problems like a broken network cable, or that the remote host was shut down ungracefully".
And, as said above, there is a race condition between Socket.Poll and Socket.Available which may report a false disconnect.
The best way, as the Microsoft documentation says, is to attempt to send or receive data to detect these kinds of errors.
The code below is from the Microsoft documentation:
// This is how you can determine whether a socket is still connected.
bool IsConnected(Socket client)
{
bool blockingState = client.Blocking; //save socket blocking state.
bool isConnected = true;
try
{
byte [] tmp = new byte[1];
client.Blocking = false;
client.Send(tmp, 0, 0); //make a nonblocking, zero-byte Send call (dummy)
//Console.WriteLine("Connected!");
}
catch (SocketException e)
{
// 10035 == WSAEWOULDBLOCK
if (e.NativeErrorCode.Equals(10035))
{
//Console.WriteLine("Still Connected, but the Send would block");
}
else
{
//Console.WriteLine("Disconnected: error code {0}!", e.NativeErrorCode);
isConnected = false;
}
}
finally
{
client.Blocking = blockingState;
}
//Console.WriteLine("Connected: {0}", client.Connected);
return isConnected;
}
And here are the relevant comments from the Microsoft docs:
The socket.Connected property gets the connection state of the Socket as of the last I/O operation. When it returns false, the Socket was either never connected, or is no longer connected.
Connected is not thread-safe; it may return true after an operation is aborted when the Socket is disconnected from another thread.
The value of the Connected property reflects the state of the connection as of the most recent operation.
If you need to determine the current state of the connection, make a nonblocking, zero-byte Send call. If the call returns successfully or throws a WSAEWOULDBLOCK error code (10035), then the socket is still connected; otherwise, the socket is no longer connected.
Can't you just use Select?
Use select on a connected socket. If the select returns with your socket marked as ready but the subsequent Receive returns 0 bytes, that means the client disconnected. AFAIK, that is the fastest way to determine whether the client disconnected.
I do not know C#, so just ignore this if my solution does not fit C# (C# does provide select, though) or if I have misunderstood the context.
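For reference, here is a rough C# sketch of that idea (Socket.Select prunes the list in place, leaving only the sockets that are readable; the one-second timeout and buffer size are arbitrary):
var readable = new List<Socket> { client };
Socket.Select(readable, null, null, 1000 * 1000); // timeout in microseconds
if (readable.Count > 0)
{
    byte[] buffer = new byte[1024];
    if (client.Receive(buffer) == 0)
    {
        // Select reported "readable" but Receive returned 0 bytes:
        // the client has disconnected.
        client.Close();
    }
}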
Using the SetSocketOption method, you can enable KeepAlive, which will let you know whenever a socket gets disconnected:
Socket _connectedSocket = this._sSocketEscucha.EndAccept(asyn);
_connectedSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, 1);
http://msdn.microsoft.com/en-us/library/1011kecd(v=VS.90).aspx
Hope it helps!
Ramiro Rinaldi
I had the same problem; try this:
void client_handler(Socket client) // set 'KeepAlive' to true on the socket first
{
    while (true)
    {
        try
        {
            if (client.Connected)
            {
                // still connected; handle the client's traffic here
            }
            else
            {
                // client disconnected
                break;
            }
        }
        catch (Exception)
        {
            client.Poll(4000, SelectMode.SelectRead); // try to refresh the socket state
        }
    }
}
This is in VB, but it seems to work well for me. It looks for a 0 byte return like the previous post.
Private Sub RecData(ByVal AR As IAsyncResult)
Dim Socket As Socket = AR.AsyncState
If Socket.Connected = False AndAlso Socket.Available = 0 Then
Debug.Print("Detected Disconnected Socket - " + Socket.RemoteEndPoint.ToString)
Exit Sub
End If
Dim BytesRead As Int32 = Socket.EndReceive(AR)
If BytesRead = 0 Then
Debug.Print("Detected Disconnected Socket - Bytes Read = 0 - " + Socket.RemoteEndPoint.ToString)
UpdateText("Client " + Socket.RemoteEndPoint.ToString + " has disconnected from Server.")
Socket.Close()
Exit Sub
End If
Dim msg As String = System.Text.ASCIIEncoding.ASCII.GetString(ByteData)
Erase ByteData
ReDim ByteData(1024)
ClientSocket.BeginReceive(ByteData, 0, ByteData.Length, SocketFlags.None, New AsyncCallback(AddressOf RecData), ClientSocket)
UpdateText(msg)
End Sub
You can also check the .Connected property of the socket if you poll.
I'm using the asynchronous method BeginSend, and I need some sort of timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded, which seems very strange to me and points towards a different cause.
public void Send(String data)
{
if (client.Connected)
{
// Convert the string data to byte data using ASCII encoding.
byte[] byteData = Encoding.ASCII.GetBytes(data);
client.NoDelay = true;
// Begin sending the data to the remote device.
IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
new AsyncCallback(SendCallback), client);
if (!res.IsCompleted)
{
sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
}
}
else MessageBox.Show("No connection to target! Send");
}
private void SendCallback(IAsyncResult ar)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
sendTimeoutflag = 0; //needs to be reset back to 0 for next reception
// we set the flag to 1, indicating it was completed.
if (sendTimer != null)
{
// stop the timer from firing.
sendTimer.Dispose();
}
try
{
// Retrieve the socket from the state object.
Socket client = (Socket)ar.AsyncState;
// Complete sending the data to the remote device.
int bytesSent = client.EndSend(ar);
ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
}
catch (Exception e)
{
MessageBox.Show(e.ToString());
}
}
private void SendTimeoutCallback(object obj)
{
if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
{
// the flag was set elsewhere, so return immediately.
return;
}
// we set the flag to 2, indicating a timeout was hit.
sendTimer.Dispose();
client.Close(); // closing the Socket cancels the async operation.
MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values of up to 30 seconds. A value of 30 seconds has proved to be the only one that never times out, but that just seems like overkill, and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code, which is your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it's set the value to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
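Concretely, the reordered callback might look like this (a sketch that keeps the original structure; note it still doesn't address the need to wait for the timer's dispose to complete, discussed above):
private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // The timeout callback won the race; do nothing.
        return;
    }
    if (sendTimer != null)
    {
        sendTimer.Dispose(); // stop the timer from firing
    }
    try
    {
        Socket client = (Socket)ar.AsyncState;
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
    finally
    {
        // Only after the send is fully handled is it safe to re-arm the flag.
        sendTimeoutflag = 0;
    }
}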
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket doesn't support async/await).
Please forgive me, as this is going to be quite a long post. I'm currently using the SerialPort class in C# to write an application that communicates with a device called a Fluke 5500A. I've had many problems in the past, because the amount of time the device takes to execute a command and return its output is unpredictable at best. I asked a question yesterday here: System.Timers.Timer Usage. The answer to that question is wonderful and most of the time appears to work perfectly. As an example, the class I use to connect to a SerialPort now looks like this:
public class SerialPortConnection
{
private SerialPort serialPort;
private string ping;
double failOut;
bool isReceiving;
public SerialPortConnection(string comPort = "Com1", int baud = 9600, System.IO.Ports.Parity parity = System.IO.Ports.Parity.None, int dataBits = 8, System.IO.Ports.StopBits stopBits = System.IO.Ports.StopBits.One, string ping = "*IDN?", double failOut = 2)
{
this.ping = ping;
this.failOut = failOut * 1000;
try
{
serialPort = new SerialPort(comPort, baud, parity, dataBits, stopBits);
serialPort.NewLine = ">";
serialPort.ReadTimeout = 1000;
}
catch (Exception e)
{
serialPort = null;
}
}
//Open Serial Connection. Returns False If Unable To Open.
public bool OpenSerialConnection()
{
//Opens Initial Connection:
try
{
serialPort.Open();
serialPort.Write("REMOTE\r");
}
catch (Exception e)
{
return false;
}
serialPort.Write(ping + "\r");
var testReceived = "";
try
{
testReceived += serialPort.ReadLine();
return true;
}
catch
{
return false;
}
}
public string WriteSerialConnection(string SerialCommand)
{
serialPort.Write(String.Format(SerialCommand + "\r"));
var received = "";
try
{
received += serialPort.ReadLine();
return received;
}
catch
{
received = "Error: No Data Received From Device";
return received;
}
}
public bool CloseSerialConnection()
{
try
{
serialPort.Write("LOCAL\r");
serialPort.Close();
return true;
}
catch (Exception e)
{
return false;
}
}
}
As you can see, when I open a connection to, in this case, Com1, I test the connection by writing the command *IDN? to the SerialPort. The return for this command looks like so:
FLUKE,5500A,8030005,2.61+1.3+2.0+*
66>
In the class I've set ">" as the NewLine property so that SerialPort.ReadLine() doesn't finish until it finds that token. I've never once had the class itself throw an exception, but I've noticed while debugging that sometimes testReceived won't catch the returned data properly, despite the fact that no exceptions are thrown and the code continues executing normally. Instead, received will catch the returned string:
FLUKE,5500A,8030005,2.61+1.3+2.0+*
66>
whenever I pass my first command via SerialPort.Write(). Something important to know is that commands can be executed without that data being fully returned. My concern is that the initial ReadLine() occasionally appears to skip past the output without catching the entire return. My thought is that there's an inherent flaw in the device I'm communicating with, but I'd prefer to be entirely sure before continuing.
My command order looks like so:
First I submit a command on startup:
REMOTE
This disables interaction with the device's manual interface and allows me to submit commands via the Serial Port.
Then I issue *IDN?, in this case, to check that the device is connected:
*IDN?
If nothing is returned, the application is set to display an error in a message box and then FailFast. If all goes well, a command can be submitted like so:
STBY
OUT 30MV,60HZ
OPER
The only command submitted here manually is OUT 30MV,60HZ. STBY and OPER are set in the app.config, as they would only add an unnecessary step to the usage of the application. The STBY command puts the machine in standby for safety reasons. The OPER command puts it in operating mode, and the device begins operating under the set parameters.
The application then waits for a technician to enter a result into a textbox and submit it. The content of these results aren't particularly pertinent but upon hitting the result button the machine is put back in standby:
STBY
Finally, two more commands are submitted when the application is closed:
*RST
LOCAL
First, *RST resets the machine to ensure that it's in the same state as when it was powered on (i.e. it's not operating and no parameters are set). Then LOCAL re-enables the manual interface for user interaction and disables access via the Serial Port until REMOTE is issued once more.
As you can see, a command is issued after *IDN? and before the first manual command (in this case, assume that command is OUT 30MV,60HZ). The problem is, sometimes I receive the output of *IDN? when I check the output of OUT 30MV,60HZ, yet I can see no problem in my code or in the procedure I'm using to operate the machine. Is there any reason this could be happening?
As I've said, the error is extremely hard to reproduce (I've seen it twice in maybe forty runs). Even so, any error at all of this type is not acceptable in a production environment and the error needs to be fixed before I can begin testing my application in its entirety. I'll keep trying to reproduce the error so I can provide an example and hopefully provide further clarification as to what the problem might be.
EDIT:
I'd also like to clarify that I'm fairly certain the bug is not located somewhere within my application itself as the code is somewhat simplistic in nature:
public string SubmitCommand()
{
if (_command_Input != "No further commands to input.")
{
string received;
serialPort.WriteSerialConnection("STBY");
received = serialPort.WriteSerialConnection(_command_Input);
serialPort.WriteSerialConnection("OPER");
//Controls Enabled:
_input_IsEnabled = false;
_user_Input_IsEnabled = true;
_results_Input_IsEnabled = false;
RaisePropertyChanged("Input_IsEnabled");
RaisePropertyChanged("User_Input_IsEnabled");
RaisePropertyChanged("Results_Input_IsEnabled");
return received;
}
else
return "";
}
received is then manipulated like so:
public bool SetOutput()
{
string inter1 = SubmitCommand();
try
{
string[] lines = inter1.Split(Environment.NewLine.ToCharArray()).ToArray();
_results_Changed = lines[2];
RaisePropertyChanged("Results_Changed");
}
catch
{
_results_Changed = inter1;
RaisePropertyChanged("Results_Changed");
}
return true;
}
I can provide further code if need be but I can't currently see any other code that might be pertinent to the question at hand.
You've made this hard to diagnose: the response you don't like looks exactly like the one you do like.
In general, you need to ensure that your program is in sync with the device. A possible failure mode is when the driver still has unread data in the receive buffer from a previous connection. Stale data could also exist in the device's transmit buffer. When you start back up, you'll read that stale data and assume it was a response to your command. It wasn't. You'll now be permanently out of sync, always reading stale data that was the response to the previous command.
It is also rather odd that this works without taking care of handshaking; devices normally do pay attention to that.
To avoid accidents, initialize your program like this:
Call the Open() method to open the port
Set the RtsEnable and DtrEnable properties to true so that the device always sees a good signal that allows it to transmit data
Sleep for about 100 msec to allow the device to send any data that it still had buffered from the previous connection but could not send because the handshake was off
Call DiscardInBuffer() to throw away any stale response bytes.
You have now a reasonable guarantee you'll be in sync.
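In code, that startup sequence would look roughly like this (a sketch against the SerialPort API, assuming serialPort is the instance from your class):
serialPort.Open();                    // 1. open the port
serialPort.RtsEnable = true;          // 2. assert the handshake lines so the
serialPort.DtrEnable = true;          //    device sees it may transmit
System.Threading.Thread.Sleep(100);   // 3. let any stale buffered data arrive
serialPort.DiscardInBuffer();         // 4. throw away the stale response bytes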
On my multithreaded server I am experiencing trouble with connections that do not come from the proper client and so hang around unauthorized. I did not want to create a new thread only for checking whether clients have been connected for some time without authorization. Instead, I added this check to the ReceiveData thread, shown in the code below. Do you see any performance issue, or is this acceptable? The main point is that every time a client connects (and the Client class is instantiated), it starts a stopwatch. I then added a condition to this thread: if the elapsed time is greater than one second and the client is still not authorized, it is added to the list of clients marked for disconnection. Thanks.
EDIT: This while (true) is the ReceiveData thread. I am using async operations, from TcpListener.BeginAccept to thread pooling. I have updated the code to show more.
protected void ReceiveData()
{
List<Client> ClientsToDisconnect = new List<Client>();
List<System.Net.Sockets.Socket> sockets = new List<System.Net.Sockets.Socket>();
bool noClients = false;
while (true)
{
sockets.Clear();
this.mClientsSynchronization.TryEnterReadLock(-1);
try
{
for (int i = 0; i < this.mClientsValues.Count; i++)
{
Client c = this.mClientsValues[i];
if (!c.IsDisconnected && !c.ReadingInProgress)
{
sockets.Add(c.Socket);
}
//clients connected for more than 1 second without a received name are suspect and should be disconnected
if (c.State == ClientState.NameNotReceived && c.watch.Elapsed.TotalSeconds > 1)
ClientsToDisconnect.Add(c);
}
if (sockets.Count == 0)
{
continue;
}
}
finally
{
this.mClientsSynchronization.ExitReadLock();
}
try
{
System.Net.Sockets.Socket.Select(sockets, null, null, RECEIVE_DATA_TIMEOUT);
foreach (System.Net.Sockets.Socket s in sockets)
{
Client c = this.mClients[s];
if (!c.SetReadingInProgress())
{
System.Threading.ThreadPool.QueueUserWorkItem(c.ReadData);
}
}
//remove clients in ClientsToDisconnect
foreach (Client c in ClientsToDisconnect)
{
this.RemoveClient(c,true);
}
}
catch (Exception e)
{
//this.OnExceptionCaught(this, new ExceptionCaughtEventArgs(e, "Exception when reading data."));
}
}
}
I think I see what you are trying to do and I think a better way would be to store new connections in a holding area until they have properly connected.
I'm not positive, but it looks like your code could drop a valid connection. If a new connection is made after the checking section, and the second section takes more than a second, all the timers would time out before you could verify the connections. This would put the new connections in both the socket pool AND the ClientsToDisconnect pool. Not good: you would drop a currently active connection, and chaos would ensue.
To avoid this, I would make the verification of a connection a separate thread from the use of the connection. That way you won't get bogged down in timing issues (well... you still will, but that is what happens when we work with threads and sockets), and you can be sure that you won't close any of the sockets you are actively using.
My gut reaction is that while (true) plus if (sockets.Count == 0) continue will lead to heartache for your CPU. Why don't you put this on a Timer or something, as sketched below, so that this function is only called every ~0.5 s? Is the 1 s barrier really that important?
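For example (a sketch; CheckSocketsOnce is a hypothetical method holding the body of one loop iteration):
// Run one check every 500 ms instead of spinning in while (true).
var pollTimer = new System.Threading.Timer(
    _ => CheckSocketsOnce(), // hypothetical: one iteration of the old loop
    null,
    0,      // start immediately
    500);   // then fire every 500 ms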