C# socket is ignoring my SendTimeout value

I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket throws an exception and I close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data"></param>
private void SendDataToSocket(byte[] data)
{
    try
    {
        // Send data to the connection
        _tcpClient.Client.Send(data);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Error writing data to socket: " + ex.Message);
        // Close our entire connection
        IncomingConnectionManager.CloseConnection(this);
    }
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out? Even after 60 seconds, it's still waiting.
Have I done something wrong here, or am I misunderstanding the functionality of setting the sockets SendTimeout value?

A socket send() actually does a copy operation of your data into the network stack's outgoing buffer. If the copy succeeds (i.e. there is enough space to receive your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
Any send timeout starts counting when the buffer is full, indicating that the other side is receiving data slower than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken or it was powered off or crashed without closing its socket properly). If the buffer stays full for the timeout duration, you'll get an error.
In other words, there is no way to detect an abrupt socket error (like a bad cable or a powered-off or crashed peer) other than overfilling the outgoing buffer to trigger a timeout.
Notice that in the case of a graceful shutdown of the peer's socket, your socket will be aware of it and give you errors if you try to send or receive after that condition was received by your socket, which may be many microseconds after you finished your operation. Again, in this case you have to trigger the error (by sending or receiving); it does not happen by itself.
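If you cannot rely on the send buffer ever filling up, one way to detect a dead peer sooner is TCP keep-alive, which makes the OS probe an idle connection and surface the failure as an error on a later send or receive. A minimal sketch on the question's socket variable (the per-probe tuning options require .NET Core 3.0 / .NET 5 or later, and the timing values are illustrative assumptions):
// Enable TCP keep-alive so a crashed or powered-off peer is eventually
// detected even while the connection is otherwise idle.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
// Optional tuning (.NET Core 3.0+ / .NET 5+): start probing after 10 s idle,
// probe every 2 s, and give up after 3 failed probes.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 10);
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 2);
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 3);
Once the keep-alive probes fail, the next Send or Receive on the socket throws a SocketException, which the catch block in SendDataToSocket above would already handle.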

Related

How to send 'n' Number of packets using TcpClient using c#/.Net

I am trying to send multiple packets using TcpClient. This code works on the first iteration, but after the first iteration the server isn't receiving anything (although the loop keeps iterating). The server-side code is working fine, as I have been testing it with a packet sender. Is there a way to send a burst of packets using TcpClient?
TcpClient tcpClient = new TcpClient(localIpAddress,localPort);
tcpClient.NoDelay = true;
tcpClient.Connect(remoteIpAddress,remotePort);
Stream stream = tcpClient.GetStream();
int i = 0;
while (true)
{
    i = i + 1;
    Console.WriteLine("Message {0} Sent", i);
    byte[] payload = Encoding.ASCII.GetBytes(message);
    stream.Write(payload, 0, payload.Length);
    stream.Flush();
    Thread.Sleep(500);
}
You can't accurately control the sending of packets using TcpClient. In fact, you can't even do it with a TCP Socket. You'd need to use a raw socket and construct your own TCP packets.
The TCP stack decides when it's going to send a packet based on the socket configuration options, how much information is in the buffer, when the last ACK was, what the window size is, what the RTT is, and a few other things.
You can fudge it by sending lots of data, which will force the TCP stack to send packets as fast as it can. You can also set TcpClient.NoDelay, but this will not work for multiple successive writes unless you have a reasonable delay between writes, and even then it is not guaranteed.
You can make low-level calls to work out how many packets were sent/received, but you can only influence this by tweaking the parameters of the TCP stack, and even then it will never be deterministic, because IP is a lossy protocol and TCP packets will go missing and need to be retransmitted.
Your problem with the code above is that you're setting a LocalEndPoint with this line:
TcpClient tcpClient = new TcpClient(localIpAddress,localPort);
The port will not be reusable until the socket has passed through the TIME_WAIT phase, which, depending on how the socket was shut down, could be up to 120 seconds.
Change that line to:
TcpClient tcpClient = new TcpClient();
Or, if you must set the source IP address:
TcpClient tcpClient = new TcpClient(localIpAddress,0);
You can read about the TIME_WAIT here and why it is an intrinsic part of the TCP Protocol.
As a side note, stream.Flush() (NetworkStream.Flush()) does absolutely nothing. This is its implementation in the .NET reference source:
/// <devdoc>
/// <para>
/// Flushes data from the stream. This is meaningless for us, so it does nothing.
/// </para>
/// </devdoc>
public override void Flush() {
}
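Putting the pieces of this answer together, a corrected version of the question's loop might look like the sketch below. It assumes the same remoteIpAddress, remotePort and message variables the question uses, lets the OS pick the local endpoint, and drops the pointless Flush() call:
// No explicit local endpoint, so a port stuck in TIME_WAIT cannot block reconnection.
TcpClient tcpClient = new TcpClient();
tcpClient.NoDelay = true;                        // optional: reduce batching of small writes
tcpClient.Connect(remoteIpAddress, remotePort);  // variables assumed from the question

NetworkStream stream = tcpClient.GetStream();
int i = 0;
while (true)
{
    i = i + 1;
    byte[] payload = Encoding.ASCII.GetBytes(message);
    stream.Write(payload, 0, payload.Length);
    Console.WriteLine("Message {0} Sent", i);
    Thread.Sleep(500);
}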

Disposing each socket client that is connected to the socket server

I have just recently started development of a server / multiple-clients application using Socket.
The server doesn't need to keep track of the connected clients; if a client requests a connection, the server accepts it. If there is a request from any client (to get some data), the server will respond to that client.
/// <summary>
/// Callback when server accepts a new incoming connection.
/// </summary>
/// <param name="result">Incoming connection result object.</param>
private void AcceptedCallback(IAsyncResult result)
{
    Socket clientSocket = null; // declared outside the try so the catch block can reach it
    try
    {
        clientSocket = _socket.EndAccept(result); // Asynchronously accepts an incoming connection attempt
        if (clientSocket.Connected) // Check if the client is in 'Connected' state
        {
            StateObject state = new StateObject();
            state.clientSocket = clientSocket;
            clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None, // Start listening to client request
                ReceiveCallback, state);
        }
        else
        {
            clientSocket.Close(); // Terminate that client's connection
            Log.writeLog("TCPServer(AcceptedCallback)"
                , "Client's status is not connected.");
        }
    }
    catch (Exception ex)
    {
        Log.writeLog("TCPServer(AcceptedCallback)"
            , ex.Message);
        clientSocket?.Close(); // clientSocket may still be null if EndAccept itself failed
    }
    finally
    {
        Accept(); // Start to accept new connection request
    }
}
I have 3 questions about this:
For each BeginReceive that I create for the newly connected client, does my server application create a new thread/object to hold that client?
If, after the client is connected, the network cable is pulled out on the client side and plugged back in, the client will connect to the server again and this is considered a new connection on the server. If this scenario occurs again and again, will my server program crash?
Hence, do I need to keep track of each client that is connected to the server, and find a way to track their state so I can call Close/Dispose on them?
So far in my testing of scenario 2, there are no abnormalities detected in my server program, but I hope someone can help clarify this for me. Thank you.
No, it will use an I/O completion thread from a pool of threads.
No, you can and should code to cater for this. If something happens on the client side that the OS can detect, it will send a TCP FIN/ACK to the server. This should cause any BeginXXX method that is still waiting to proceed to its async callback method. From there, your call to the EndXXX method should either throw an exception or return zero bytes read from the socket.
This depends on what you mean by keeping track of them in order to dispose of them properly. If you mean disposing of them when you detect an error, no; you can put clean-up code in your EndXXX callbacks. If you mean so that you can signal clients gracefully when you shut the server down, then yes.
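To illustrate point 2, the ReceiveCallback paired with the BeginReceive call above could detect the remote close by checking for an exception or a zero-byte read from EndReceive. This is only a sketch, reusing the StateObject and Log names from the question:
private void ReceiveCallback(IAsyncResult result)
{
    StateObject state = (StateObject)result.AsyncState;
    Socket clientSocket = state.clientSocket;
    try
    {
        int bytesRead = clientSocket.EndReceive(result);
        if (bytesRead == 0)
        {
            // Zero bytes read means the peer closed its end (FIN received): clean up here.
            clientSocket.Close();
            return;
        }

        // ... process the bytesRead bytes in state.buffer ...

        // Keep listening for the next request from this client.
        clientSocket.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
            ReceiveCallback, state);
    }
    catch (Exception ex)
    {
        // An abortive disconnect (reset, half-open connection timing out, etc.) surfaces here.
        Log.writeLog("TCPServer(ReceiveCallback)", ex.Message);
        clientSocket.Close();
    }
}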

C# socket connection keeps sending data after disconnect and close

I have a "start" and "stop" button. When clicking the start button, a new socket is created and a connection is made. When clicking the stop button the socket is shutdown, disconnected, closed and disposed to make sure it is completely gone.
At least, that's what I thought: when clicking start after stopping the connection, a new socket is made, etc., but as soon as I send data, the data is sent as many times as I have created a socket (thus, once for each time I had clicked the start button).
This is the code for the start:
soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); // Socket soc; is declared at class-level
System.Net.IPAddress ipAdd = System.Net.IPAddress.Parse(IP);
System.Net.IPEndPoint remoteEP = new IPEndPoint(ipAdd, port);
try
{
    soc.Connect(remoteEP);
    soc.Send(jsonSettings);
}
catch (SocketException e)
{
    MessageBox.Show("Could not connect to socket");
}
And this is the stop code:
if (soc != null)
{
    soc.Shutdown(SocketShutdown.Both);
    soc.Disconnect(false);
    soc.Close();
    soc.Dispose();
}
This is used within a VSTO PowerPoint add-in application, in case that causes any additional peculiarities. When the connection is made, I'm sending string data to a Python server listening on this port. Each time a connection is closed, the Python server gets out of its listen-for-data loop and goes back to its waiting-for-connection state (to handle the multiple start/stop connections).
Code for sending data:
// this is called each time the user goes to another slide in the PowerPoint presentation
byte[] byData = System.Text.Encoding.ASCII.GetBytes(stringValue);
soc.Send(byData);
Can anyone point out what I'm doing wrong, and why the socket connections somehow keep on living and sending data even though I disconnected and closed them?
The observed behavior is the whole point and desired outcome of a clean shutdown. From the MSDN page for Socket.Shutdown():
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
The call to Shutdown() prevents your application from queuing additional outgoing data; it does not stop the network stack from sending data already buffered.
Since you are using a stream socket, how about declaring a network stream for your socket like this:
NetworkStream stream = new NetworkStream(soc);
Then flushing this after each send (and before closing the socket):
stream.Flush();
Also ensure you turn off Nagle's algorithm when you create the socket; this prevents small writes from being batched up on the socket:
soc.NoDelay = true;
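Another thing worth checking, sketched below under the assumption that soc is the class-level field from the question: clear the reference after closing so a later Send can never reach a stale socket, and guard Shutdown, which can throw if the peer is already gone.
// Stop: shut down both directions, then close and drop the reference so the
// next Start is forced to create and use a completely fresh socket.
if (soc != null)
{
    try
    {
        soc.Shutdown(SocketShutdown.Both);
    }
    catch (SocketException)
    {
        // The peer may already have gone away; Shutdown can throw in that case.
    }
    finally
    {
        soc.Close(); // Close() also disposes the socket, so a separate Dispose() is unnecessary
        soc = null;
    }
}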

socket option receive time out

Now let's take a scenario where we use a blocking socket Receive and the packet is 5000 bytes, with ReceiveTimeout set to one second:
s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 1000);
int bytes_received = 0;
byte[] ReceiveBuffer = new byte[8192];
try
{
    bytes_received = s.Receive(ReceiveBuffer);
}
catch (SocketException e)
{
    if (e.ErrorCode == 10060) // WSAETIMEDOUT: the blocking receive timed out
    {
        Array.Clear(ReceiveBuffer, 0, ReceiveBuffer.Length);
    }
}
Now our scenario dictates that 4000 bytes have already gone through, the socket is still blocking, and some error occurred on the receiving end.
On the receiving end we would simply dispose of those 4000 bytes by catching the socket exception.
Is there any guarantee about what the socket on the sending end does with the 1000 bytes that remain? Does the sending socket know to truncate them? If it wasn't disconnected, when we attempt to receive again, won't they be the first bytes we receive?
What I'm asking is:
a) Does TCP have some mechanism that tells the socket to dispose of the rest of the message?
b) Is there a socket flag that we could send or receive with that tells the buffers to dispose of the rest of the message?
First off, TCP/IP operates on streams, not packets. So you need some kind of message framing in your protocol, regardless of buffer sizes, blocking calls, or MTUs.
Secondly, each TCP connection is independent. The normal design is to close a socket when there's a communications error. A new socket connection can then be established, which is completely independent from the old one.
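As an illustration of message framing, a common approach is a fixed 4-byte length prefix in front of every message. The helper names below (SendMessage, ReceiveMessage, ReceiveExactly) are made up for this sketch and are not part of any API mentioned above:
using System;
using System.Net.Sockets;

static class Framing
{
    // Write one framed message: a 4-byte little-endian length, then the payload.
    public static void SendMessage(Socket s, byte[] payload)
    {
        s.Send(BitConverter.GetBytes(payload.Length));
        s.Send(payload);
    }

    // Read exactly one framed message, looping because Receive may return fewer bytes.
    public static byte[] ReceiveMessage(Socket s)
    {
        int length = BitConverter.ToInt32(ReceiveExactly(s, 4), 0);
        return ReceiveExactly(s, length);
    }

    private static byte[] ReceiveExactly(Socket s, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = s.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new SocketException((int)SocketError.ConnectionReset); // peer closed mid-message
            offset += read;
        }
        return buffer;
    }
}
With framing in place, the receiver never has to guess where the 4000 bytes that arrived end and the next message begins; it either gets a complete message or a clear error.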

Disconnecting TCPClient and seeing that on the other side

I am trying to disconnect a client from a server, but the server still sees it as being connected. I can't find a solution to this, and Shutdown, Disconnect and Close all don't work.
Some code for my disconnect from the client and checking on the server:
Client:
private void btnDisconnect_Click(object sender, EventArgs e)
{
    connTemp.Client.Shutdown(SocketShutdown.Both);
    connTemp.Client.Disconnect(false);
    connTemp.GetStream().Close();
    connTemp.Close();
}
Server:
while (client != null && client.Connected)
{
    NetworkStream stream = client.GetStream();
    data = null;
    try
    {
        if (stream.DataAvailable)
        {
            data = ReadStringFromClient(client, stream);
            WriteToConsole("Received Command: " + data);
        }
    } // So on and so on...
There are more writes and reads further down in the code.
Hope you all can help.
UPDATE: I even tried passing the TcpClient by ref, assuming there was a scope issue, and client.Connected remains true even after a read. What is going wrong?
Second update:
Here is the solution: do a peek and, based on that, determine whether you are connected or not.
if (client.Client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Client.Receive(checkConn, SocketFlags.Peek) == 0)
    {
        throw new IOException();
    }
}
From the MSDN Documentation:
The Connected property gets the connection state of the Client socket as of the last I/O operation. When it returns false, the Client socket was either never connected, or is no longer connected. Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
I am not sure about the NetworkStream class but I would think that it would behave similar to the Socket class as it is primarily a wrapper class. In general the server would be unaware that the client disconnected from the socket unless it performs an I/O operation on the socket (a read or a write). However, when you call BeginRead on the socket the callback is not called until there is data to be read from the socket, so calling EndRead and getting a bytes read return result of 0 (zero) means the socket was disconnected. If you use Read and get a zero bytes read result I suspect that you can check the Connected property on the underlying Socket class and it will be false if the client disconnected since an I/O operation was performed on the socket.
It's a general TCP problem, see:
How do I check if a SSLSocket connection is sane on Java?
Java socket not throwing exceptions on a dead socket?
The workarounds for this tend to rely on sending the amount of data to expect as part of the protocol. That's what HTTP 1.1 does using the Content-Length header (for an entire entity) or with chunked transfer encoding (with various chunk sizes).
Another way is to send "NOOP" or similar commands (essentially messages that do nothing but make sure the communication is still open) as part of your protocol regularly.
(You can also add to your protocol a command that the client can send to the server to close the connection cleanly, but not getting it won't mean the client hasn't disconnected.)
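To illustrate the NOOP/heartbeat idea, the client side could periodically send a tiny byte the server agrees to ignore and treat a send failure as a disconnect. The single 0x00 NOOP byte and the idea of running it on a timer are assumptions for this sketch, not part of the question's protocol:
// Periodically send a 1-byte NOOP; a SocketException is the signal that the connection is gone.
// Note: because Send only copies into the OS buffer, the failure may only surface
// a heartbeat or two after the peer actually died.
static bool SendHeartbeat(Socket s)
{
    try
    {
        s.Send(new byte[] { 0x00 }); // NOOP byte the peer knows to ignore
        return true;
    }
    catch (SocketException)
    {
        return false; // connection is dead: the caller should close and clean up
    }
}
// Example usage from a timer or background loop:
// if (!SendHeartbeat(client.Client)) { client.Close(); }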
