Disposing each socket client that is connected to the socket server - C#

I have just recently started developing a server/multiple-clients application using Socket.
The server doesn't need to keep track of the connected clients: if a client requests a connection, the server accepts it, and if there is a request from any client (to get some data), the server will respond to that client.
/// <summary>
/// Callback invoked when the server accepts a new incoming connection.
/// </summary>
/// <param name="result">Incoming connection result object.</param>
private void AcceptedCallback(IAsyncResult result)
{
    Socket clientSocket = null; // Declared outside the try so the catch block can reach it
    try
    {
        clientSocket = _socket.EndAccept(result); // Complete the asynchronous accept
        if (clientSocket.Connected) // Check if the client is in the 'Connected' state
        {
            StateObject state = new StateObject();
            state.clientSocket = clientSocket;
            clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None, // Start listening for client requests
                ReceiveCallback, state);
        }
        else
        {
            clientSocket.Close(); // Terminate that client's connection
            Log.writeLog("TCPServer(AcceptedCallback)"
                , "Client's status is not connected.");
        }
    }
    catch (Exception ex)
    {
        Log.writeLog("TCPServer(AcceptedCallback)"
            , ex.Message);
        if (clientSocket != null) // Only close if EndAccept got far enough to produce a socket
        {
            clientSocket.Close();
        }
    }
    finally
    {
        Accept(); // Start accepting the next connection request
    }
}
I have 3 questions about this:
1) For each BeginReceive that I start for a newly connected client, does my server application create a new thread/object to hold that client?
2) If, after the client is connected, the network cable is pulled out on the client side and plugged back in, the client will connect to the server again, and this is considered a new connection on the server. If this scenario occurs again and again, will my server program crash?
3) Hence, do I need to keep track of each client that is connected to the server, and find a way to track their state so I can call Close/Dispose on them?
So far in my testing of scenario 2 there are no abnormalities in my server program, but I hope someone can clarify this for me. Thank you.

1) No, it will use an I/O completion thread from a pool of threads.
2) No, and you can and should code to cater for this. If something happens on the client side that the OS can detect, it will send a TCP FIN/ACK to the server. This should cause any pending BeginXXX operation to complete and invoke its async callback method. From there, your call to the EndXXX method will either throw an exception or report zero bytes read from the socket.
3) It depends on what you mean by keeping track of them to dispose of them properly. If you mean disposing of them when you detect an error, no: you can put the clean-up code in your EndXXX callbacks. If you mean so that you can signal clients gracefully when you shut the server down, then yes.
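To illustrate points 2 and 3, here is a minimal sketch of what the receive callback's clean-up path could look like, assuming the StateObject pattern from the question (the method body is an illustration, not the asker's actual code):
private void ReceiveCallback(IAsyncResult result)
{
    StateObject state = (StateObject)result.AsyncState;
    Socket clientSocket = state.clientSocket;
    try
    {
        int bytesRead = clientSocket.EndReceive(result);
        if (bytesRead == 0)
        {
            // Zero bytes read means the peer closed its side of the connection.
            clientSocket.Close();
            return;
        }
        // ... process state.buffer up to bytesRead here ...
        // Re-arm the receive for this client's next request.
        clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize,
            SocketFlags.None, ReceiveCallback, state);
    }
    catch (SocketException)
    {
        // An abortive disconnect (reset, pulled cable once the OS notices it, etc.)
        // surfaces here; clean up the socket and carry on.
        clientSocket.Close();
    }
}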


C# socket is ignoring my SendTimeout value

I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket should throw an exception and I close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data">Data to send to the connection.</param>
private void SendDataToSocket(byte[] data)
{
    try
    {
        // Send data to the connection
        _tcpClient.Client.Send(data);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Error writing data to socket: " + ex.Message);
        // Close our entire connection
        IncomingConnectionManager.CloseConnection(this);
    }
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out; even after 60 seconds, it's still waiting.
Have I done something wrong here, or am I misunderstanding the functionality of setting the socket's SendTimeout value?
A socket send() actually copies your data into the network stack's outgoing buffer. If the copy succeeds (i.e. there is enough space to receive your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
Any send timeout starts counting when the buffer is full, indicating that the other side is receiving data more slowly than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken, or it was powered off, or it crashed without closing its socket properly). If the buffer stays full for the timeout period, you'll get an error.
In other words, there is no way to detect an abrupt socket error (like a bad cable or a powered-off or crashed peer) other than overfilling the outgoing buffer to trigger a timeout.
Notice that in the case of a graceful shutdown of the peer's socket, your socket will become aware of it and give you errors if you try to send or receive after the condition was received on your socket, which may be many microseconds after you finished your operation. Again, in this case you have to trigger the error (by sending or receiving); it does not happen by itself.
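If you need to notice a dead peer without waiting to overfill the outgoing buffer, one common workaround is TCP keepalive. A minimal sketch with illustrative timing values (the IOControl tuning is Windows-specific; socket is the question's socket):
// Enable keepalive probes with the OS default timing.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// On Windows, the probe timing can be tuned via IOControl with a
// 12-byte structure: on/off flag, idle time (ms), probe interval (ms).
byte[] keepAlive = new byte[12];
BitConverter.GetBytes(1u).CopyTo(keepAlive, 0);      // enable keepalive
BitConverter.GetBytes(10000u).CopyTo(keepAlive, 4);  // 10 s idle before first probe
BitConverter.GetBytes(1000u).CopyTo(keepAlive, 8);   // 1 s between probes
socket.IOControl(IOControlCode.KeepAliveValues, keepAlive, null);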

C# .Net Socket Server Client

I've got a little problem with the .Net Sockets in C#.
I programmed a client and a server working with TCP.
As the client is opened, it sends a handshake to the server. The server answers with its state (clientexists, clientaccepted, ...). After that the application sends a getdata-request, abandons the connection, and listens for the server's 'response'. Now the server builds a connection to the client and sends all the data the client needs.
The code and everything else works, but here is the problem:
On our company test server it works fine; on the live server only the handshake works. After it the client doesn't receive any more data. The server application is the same on both servers.
I thought the problem was caused by some firewall (the server wants to build a TCP connection to the client -> not good), but the system administrator said there is no firewall that could block that.
Now I'm searching for a ('cheap') solution that doesn't take too much time or too many changes in code. If anyone knows how to theoretically solve this, that would be great.
BTW: I am not allowed to do anything on the live server other than run the server application. I don't have the possibility to debug on this server.
I can't publish all of my code, but if you need to see specific parts of it, please ask.
---EDIT---
Client-Server communication
1) Client startup
Client sends handshake (new TCP connection)
2) Server validates handshake and saves IP
Server responds with its client state (same TCP connection)
3) Client acknowledges this response and abandons this connection
Client sends getdata-request (new TCP connection)
Client abandons this TCP connection, too
4) Server receives getdata-request and collects the needed data from the main database
Server sends all the collected data to the client (multiple TCP connections)
5) Client receives all data and displays it in its GUI (multiple TCP connections; the order of the data is kept by working with AutoResetEvents and counts of sockets to send)
This is the main part of what my code does. It's far from the best, but it made sense to me when I wrote it. Steps one, two and three work as intended. The processing of the data works fine, too.
Another thing I forgot to mention is that the solution uses two ports, '16777' and '16778': one to receive/listen and one to send.
My code is based on the MSDN example of the asynchronous server and client.
Sending a handshake (and getdata-request)
public void BeginSend(String data)
{
    try
    {
        byte[] byteData = Encoding.UTF8.GetBytes(data);
        sender.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback((IAsyncResult e) =>
            {
                // When the send completes, start listening for the server's response
                Socket socket = (Socket)e.AsyncState;
                SocketBase.StateObject stateObject = new SocketBase.StateObject();
                stateObject.workSocket = socket;
                socket.BeginReceive(stateObject.buffer, 0, 256, SocketFlags.None,
                    new AsyncCallback(this.ReadCallback), (object)stateObject);
            }), sender);
        sender = RetrieveSocket(); // Socket reset: replace the class-level socket with a fresh connection
        Thread.Sleep(100);
    }
    catch /*(Exception e)*/
    {
        //--
    }
}
Server listener
public void StartListening()
{
    listener = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp);
    // Bind the socket to the local endpoint and listen for incoming connections.
    try
    {
        listener.Bind(localEndPoint);
        listener.Listen(System.Int32.MaxValue);
        while (true)
        {
            // Set the event to nonsignaled state.
            allDone.Reset();
            // Start an asynchronous socket to listen for connections.
            listener.BeginAccept(
                new AsyncCallback(AcceptCallback),
                listener);
            // Wait until a connection is made before continuing.
            allDone.WaitOne();
        }
    }
    catch (Exception e)
    {
        //--
    }
}
public void AcceptCallback(...);
public void ReadCallback(...);
Socket send
private void Send(Socket handler, String data)
{
    Socket t = RetrieveSocket(((IPEndPoint)handler.RemoteEndPoint).Address);
    // Convert the string data to byte data using UTF-8 encoding.
    byte[] byteData = Encoding.UTF8.GetBytes(data);
    // Begin sending the data to the remote device.
    t.BeginSend(byteData, 0, byteData.Length, 0,
        new AsyncCallback(SendCallback), t);
}
Socket send all data part (answer to getdata-request | socToHandle should be the socket of the previous connection of the getdata-request)
private void SendAllData(Socket socToHandle, string PakContent)
{
    #region IsThereADatetime? //Resolve a given datetime
    #region GiveClientNumberOfPackets //Send the client info about how much it has to receive (see line below)
    Send(socToHandle, "ALERT#TASKCOUNT;OPT-" + GetBestDate(dateStart) + EndSocket);
    #region #SendResouces
    #region #SendGroups
    #region #SendTasks
}
Looking through my old code, I have one idea:
Could I send everything over the same connection by changing:
Socket t = RetrieveSocket(((IPEndPoint)handler.RemoteEndPoint).Address);
(which creates a new connection) to something that uses the same connection?
If that would work, how can I do that?
And would the listener part of the client still receive single packets?
Servers and their environment are configured to handle incoming requests properly. Clients are usually behind a router, which by default makes them unable to receive incoming connections from outside their network (a good thing).
To enable incoming connections, you could configure your router to forward all requests for a certain port number to your machine. No one else on your network would be able to run the client then, though.
This is why in a typical multiple-clients/single-server environment the client makes all the connections, and only the server requires any changes to the network landscape.
I don't know why you chose to connect to the clients from the server side, but I would strongly advise against this: any cheap solution that uses this mechanism may turn out to be very expensive in the end.
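As a sketch of the conventional alternative, the client opens one connection and reads the reply on the same socket. All names here are hypothetical; this replaces the connect-back design rather than reproducing the asker's code:
using System.Net.Sockets;
using System.Text;

static string GetData(string host, int port)
{
    // One TCP connection carries both the request and the reply.
    using (TcpClient client = new TcpClient(host, port))
    using (NetworkStream stream = client.GetStream())
    {
        byte[] request = Encoding.UTF8.GetBytes("GETDATA");
        stream.Write(request, 0, request.Length);

        StringBuilder reply = new StringBuilder();
        byte[] buffer = new byte[4096];
        int read;
        // Read until the server closes its side of the connection.
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            reply.Append(Encoding.UTF8.GetString(buffer, 0, read));
        return reply.ToString();
    }
}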

C# socket connection keeps sending data after disconnect and close

I have a "start" and "stop" button. When clicking the start button, a new socket is created and a connection is made. When clicking the stop button the socket is shutdown, disconnected, closed and disposed to make sure it is completely gone.
At least, that's what I thought: when clicking start after stopping the connection, a new socket is made etc. but as soon as I send data, the data is sent x amount of times I had created a socket (thus, x amount of times I had clicked the start button).
This is the code for the start:
soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); // Socket soc; is declared at class level
System.Net.IPAddress ipAdd = System.Net.IPAddress.Parse(IP);
System.Net.IPEndPoint remoteEP = new IPEndPoint(ipAdd, port);
try
{
    soc.Connect(remoteEP);
    soc.Send(jsonSettings);
}
catch (SocketException e)
{
    MessageBox.Show("Could not connect to socket");
}
And this is the stop code:
if (soc != null)
{
    soc.Shutdown(SocketShutdown.Both);
    soc.Disconnect(false);
    soc.Close();
    soc.Dispose();
}
This is used within a VSTO PowerPoint add-in application, if that could cause any additional specialties. When the connection is made, I'm sending string data to a Python server listening on this port. Each time a connection is closed, the Python server gets out of its listen-for-data loop and goes back to its waiting-for-connection state (for the multiple start/stop connections).
Code for sending data:
// this is called each time the user goes to another slide in the PowerPoint presentation
byte[] byData = System.Text.Encoding.ASCII.GetBytes(stringValue);
soc.Send(byData);
Can anyone point out what I'm doing wrong why the socket connections somehow keep on living and sending data even though I disconnected and closed them?
The observed behavior is the whole point and desired outcome of a clean shutdown. From the MSDN page for Socket.Shutdown():
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
The call to Shutdown() prevents your application from queuing additional outgoing data; it does not stop the network stack from sending data already buffered.
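For completeness, a minimal sketch of a fully graceful teardown under these semantics, reusing the soc variable from the question (illustrative, not the only correct order):
// Stop sending (sends a FIN; data already queued still goes out),
// then drain until the peer closes its side, then close.
soc.Shutdown(SocketShutdown.Send);
byte[] drain = new byte[1024];
while (soc.Receive(drain) > 0) { } // 0 bytes read means the peer closed too
soc.Close();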
Since you are using a stream socket, how about declaring a network stream for your socket like this:
NetworkStream stream = new NetworkStream(soc);
Then flushing this after each send (and before closing the socket):
stream.Flush();
Also ensure you turn off Nagle's algorithm when you create the socket - it will prevent batching up items on the socket:
soc.NoDelay = true;

Start listening again with Socket after a disconnect

I'm writing a small C# Sockets application. Actually I have two, a server and a client.
The user runs the client, enters the IP and port for the server, presses 'connect', and then once connected they can enter text into a textbox and send it to the server.
The server simply displays either "No connection" or "Connection from [ip]:[port]", and the most recent received message underneath.
The server successfully receives messages, and even handles the client disconnect fine.
Now I'm trying to make it listen again after the client has disconnected, but for some reason nothing I try will allow it to start listening again.
Here is part of my code:
Socket socket;
private void listen()
{
    socket = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp);
    socket.Bind(new IPEndPoint(IPAddress.Any, 12345));
    socket.Listen(10);
    socket.BeginAccept(new AsyncCallback(acceptAsync), socket);
}
and
private void receiveAsync(IAsyncResult res)
{
    Socket socket = (Socket)res.AsyncState;
    try
    {
        int nBytes = socket.EndReceive(res);
        if (nBytes > 0)
        {
            Invoke(new MethodInvoker(delegate()
            {
                lMessage.Text = encoder.GetString(buffer);
            }));
            setupReceiveAsync(socket);
        }
        else
        {
            Invoke(new MethodInvoker(delegate()
            {
                lConnections.Text = "No Connections.";
                lMessage.Text = "No Messages.";
                socket.Shutdown(SocketShutdown.Both);
                socket.Close();
                listen();
            }));
        }
    }
    catch { }
}
The last line: listen(); is what throws the error.
I have tried simply calling socket.BeginAccept() again, but that also throws an exception.
The message I'm getting is:
Only one usage of each socket address (protocol/network address/port) is normally permitted
If I don't call my listen() function and instead just call socket.BeginAccept(), then I get "You must first call socket.Listen()".
If I call the socket.Listen() function, then it tells me it's already connected and can't start listening.
Once I have made an asynchronous connection, and received several asynchronous messages, how then do I begin listening again?
Your socket variable already has a listening socket assigned to it the second time you call listen(), which is why it tells you only one usage is permitted. All you need to repeat is the socket.BeginAccept(new AsyncCallback(acceptAsync), socket) call. So try replacing the call to listen() inside your receiveAsync(...) method with socket.BeginAccept(new AsyncCallback(acceptAsync), socket).
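A minimal sketch of how that fits together; acceptAsync and setupReceiveAsync are the asker's names, but this body is an assumption:
private void acceptAsync(IAsyncResult res)
{
    Socket listener = (Socket)res.AsyncState;
    // Complete the accept for this client...
    Socket client = listener.EndAccept(res);
    setupReceiveAsync(client);
    // ...and immediately re-arm the accept on the SAME listening socket.
    // No new Bind/Listen is needed; the listener stays alive.
    listener.BeginAccept(new AsyncCallback(acceptAsync), listener);
}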
In async, Begin* is always followed by End*. See Using an Asynchronous Server Socket. Your accept method should be something like:
try {
    listener.Bind(localEP);
    listener.Listen(10);
    while (true) {
        allDone.Reset();
        Console.WriteLine("Waiting for a connection...");
        listener.BeginAccept(
            new AsyncCallback(SocketListener.acceptCallback),
            listener);
        allDone.WaitOne();
    }
} catch (Exception e) {
    Console.WriteLine(e.ToString());
}
A server is an app that listens on a specified port/IP. It is usually always in listening mode; that's why it is called a server. It can connect and disconnect a client, but it is always in listening mode.
This means that even when a server disconnects a client, it is still in listening mode, meaning it can accept incoming connections as well.
A disconnection request can come from the client, or can be forcefully applied by the server.
The process for a server is:
Bind to socket
Listen
Accept connections
The process for client is:
Connect to the server
Send/receive messages
There are several ways for the server to handle the incoming clients; a couple follow:
Incoming connections are maintained in a list, for instance within a List<TcpClient>.
One way of handling the incoming clients is through threads: for each incoming client, spawn a thread that handles the communication between server and client. For instance, check out this example:
private void ListenForClients()
{
    this.tcpListener.Start();
    while (true)
    {
        // blocks until a client has connected to the server
        TcpClient client = this.tcpListener.AcceptTcpClient();
        // create a thread to handle communication
        // with the connected client
        Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientComm));
        clientThread.Start(client);
    }
}
Alternatively, use a single thread and context switching to manage client communications (TX/RX), as in the sketch below.
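For instance, a hypothetical sketch using async/await to multiplex many clients without a dedicated thread per client (method names are illustrative):
private async Task ListenForClientsAsync(TcpListener tcpListener)
{
    tcpListener.Start();
    while (true)
    {
        // Await instead of blocking; no thread is parked per client.
        TcpClient client = await tcpListener.AcceptTcpClientAsync();
        _ = HandleClientAsync(client); // fire-and-forget, one task per client
    }
}

private async Task HandleClientAsync(TcpClient client)
{
    using (client)
    using (NetworkStream stream = client.GetStream())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // ... process buffer up to read bytes ...
        }
    }
}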

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as being connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect on the client and the check on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;
    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TcpClient by ref, assuming there was a scope issue, and client.Connected remains true even after a read. What is going wrong?
Second Update!!:
Here is the solution. Do a peek and, based on that, determine whether you are connected or not.
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
From the MSDN Documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class, but I would think it behaves similarly to the Socket class, as it is primarily a wrapper class. In general the server is unaware that the client disconnected from the socket unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket, the callback is not called until there is data to be read from the socket, so calling EndRead and getting a return result of 0 (zero) bytes read means the socket was disconnected. If you use Read and get a zero-bytes-read result, I suspect you can check the Connected property on the underlying Socket class, and it will be false if the client disconnected, since an I/O operation was performed on the socket.
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workaround for this tends to rely on sending the amount of data to expect as part of the protocol. That's what HTTP 1.1 does using the Content-Length header (for an entire entity) or with chunked transfer encoding (with various chunk sizes). A length-prefixed framing sketch follows below.
Another way is to send "NOOP" or similar commands (essentially messages that do nothing but make sure the communication is still open) regularly as part of your protocol.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
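For illustration, a minimal length-prefixed framing sketch over a NetworkStream; the 4-byte big-endian length header and helper names are assumed conventions, not part of any protocol discussed above:
using System.IO;
using System.Net.Sockets;

// Send one message preceded by its length.
static void SendFrame(NetworkStream stream, byte[] payload)
{
    byte[] header = new byte[4];
    header[0] = (byte)(payload.Length >> 24);
    header[1] = (byte)(payload.Length >> 16);
    header[2] = (byte)(payload.Length >> 8);
    header[3] = (byte)payload.Length;
    stream.Write(header, 0, 4);
    stream.Write(payload, 0, payload.Length);
}

// Read exactly one message; a zero-byte read means the peer is gone.
static byte[] ReadFrame(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new IOException("Peer closed the connection.");
        offset += read;
    }
    return buffer;
}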
