I have the following code
byte[] bytes = Encoding.Default.GetBytes(data);
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, 0, new AsyncCallback(SendCallback), socket);
int waitingCounter = 0;
while (!res.IsCompleted && waitingCounter<10)
{
if (Tracing.TraceInfo) Tracing.WriteLine("Waiting for data to be transmitted. Give a timeout of 1 second", _traceName);
Thread.Sleep(1 * 1000);
waitingCounter++;
}
This code is installed on many machines, but in some cases the condition res.IsCompleted takes a long time to become true.
Is the reason related to the network (maybe a firewall or a proxy?), to the client (too slow), or to the server?
I have not been able to reproduce this scenario.
Edit: I tried to reproduce the error by using an asynchronous client and a synchronous server, with the following modifications:
Client=>
while (true) {
Send(client, "This is a test<EOF>");
sendDone.WaitOne();
}
Server=>
while (true){
Console.WriteLine("Waiting for a connection...");
// Program is suspended while waiting for an incoming connection.
Socket handler = listener.Accept();
data = null;
// The Receive() call is omitted here, so this server never reads the data it accepts; 'data' stays null.
Console.WriteLine("Text received : {0}", data);
handler.Shutdown(SocketShutdown.Both);
handler.Close();
}
On the second Send() I get a socket exception, which is normal because the server is not reading the data.
But what I actually want to reproduce is:
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
as it happens in one of our installations.
Edit:
An answer disappeared from this question!!!
Even though BeginSend won't stall your application, it is still subject to the same constraints as Send. The answer explained why things can go wrong (I've paraphrased):
For your application, the remote TCP receive buffer, the network, and the local TCP send buffer act as one big buffer. If the remote application gets delayed in reading new bytes from its TCP buffer, eventually your local TCP send buffer will end up (nearly) full. The send won't complete until the TCP buffer has enough space to store the payload you're trying to send.
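Based on that explanation, here is a minimal sketch of one way to reproduce the stall locally (the port, buffer size and chunk size are illustrative, not taken from the original installation): accept a connection but never read from it, then keep sending until the receiver's window and the local send buffer are full. From that point on IsCompleted should eventually stay false and the trace line above repeats.
// Server side: accept, but never call Receive(), so nothing drains the buffers.
var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Loopback, 11000));
listener.Listen(1);

// Client side: small send buffer so the stall shows up quickly.
var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
client.SendBufferSize = 8192;
client.Connect(new IPEndPoint(IPAddress.Loopback, 11000));
Socket handler = listener.Accept();

byte[] chunk = new byte[64 * 1024];
while (true)
{
    IAsyncResult res = client.BeginSend(chunk, 0, chunk.Length, SocketFlags.None, null, null);
    while (!res.IsCompleted)
    {
        // Once the buffers are full this loop never exits, reproducing the repeated trace line.
        Console.WriteLine("Waiting for data to be transmitted. Give a timeout of 1 second");
        Thread.Sleep(1000);
    }
    client.EndSend(res);
}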
Also, don't forget that after the first res.IsCompleted check, you always wait a full second before the next check.
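If the intent of the loop is simply "wait at most 10 seconds for the send to finish", the IAsyncResult wait handle can do that in a single call instead of sleep-polling. A sketch reusing the names from the snippet above (data, socket, SendCallback, Tracing and _traceName are assumed to be the same ones):
byte[] bytes = Encoding.Default.GetBytes(data);
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, new AsyncCallback(SendCallback), socket);

// Block once for up to 10 seconds instead of checking IsCompleted every second.
if (!res.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    if (Tracing.TraceInfo) Tracing.WriteLine("Send did not complete within 10 seconds", _traceName);
}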
Related
I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket should throw an exception and I close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data"></param>
private void SendDataToSocket(byte[] data)
{
try
{
//Send Data to the connection
_tcpClient.Client.Send(data);
}
catch (Exception ex)
{
Debug.WriteLine("Error writing data to socket: " + ex.Message);
//Close our entire connection
IncomingConnectionManager.CloseConnection(this);
}
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out; even after 60 seconds, it's still waiting.
Have I done something wrong here, or am I misunderstanding what setting the socket's SendTimeout value does?
A socket send() actually does a copy operation of your data into the network stack's outgoing buffer. If the copy succeeds (i.e. there is enough space to receive your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
Any send timeout starts counting when the buffer is full, indicating that the other side is receiving data slower than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken or it was powered off or crashed without closing its socket properly). If the full buffer persists for timeout seconds, you'll get an error.
In other words, there is no way to detect an abrupt socket error (like a bad cable or a powered-off or crashed peer) other than overfilling the outgoing buffer to trigger a timeout.
Notice that in the case of a graceful shutdown of the peer's socket, your socket will be aware of it and give you errors if you try to send or receive after the condition has been received by your socket, which may be many microseconds after you finished your operation. Again, in this case, you have to trigger the error (by sending or receiving); it does not happen by itself.
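To see this in practice, keep sending after the peer has gone away. A rough sketch, reusing _tcpClient, the 10-second SendTimeout and IncomingConnectionManager from the question above (it assumes the code sits in the same class as SendDataToSocket): the first calls return immediately because the bytes are only copied into the outgoing buffer, and the timeout only fires once that buffer can no longer absorb the payload.
byte[] payload = new byte[8192];
try
{
    while (true)
    {
        // Returns as soon as the bytes are copied into the send buffer;
        // it says nothing about the peer actually having received them.
        _tcpClient.Client.Send(payload);
    }
}
catch (SocketException ex)
{
    // 10060 (WSAETIMEDOUT) is raised roughly SendTimeout milliseconds after
    // the outgoing buffer filled up and stopped draining.
    if (ex.ErrorCode == 10060)
        IncomingConnectionManager.CloseConnection(this);
}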
I do Socket.Receive immediately after a handshake but sometimes the programme hangs waiting for the packet.
int reciveLength = tcpSock.Receive(handShake, SocketFlags.None);
int bitfieldLength = tcpSock.Receive(bitfeildRecive, SocketFlags.None);
The first is received perfectly; the second does not seem to be received. I think it is a race condition, as the first is sent at "83.1969" and the second at "83.1970".
When the timeout is reached, bitfeildRecive is just 65535 bytes of zeros.
I can see the packet in Wireshark, and it is only one packet. How can I get the program to catch the next packet to be sent?
Is there a way to do this with Socket.Receive?
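This is most likely not a race but the stream nature of TCP: both messages arrived in a single segment and the first Receive consumed them together, leaving nothing for the second call, which then blocks. A common workaround, assuming the handshake has a known fixed length (handshakeLength and bitfieldLength below are hypothetical placeholders for your protocol's sizes), is to read exactly that many bytes and no more:
// Read exactly 'count' bytes into 'buffer'; a single Receive may return fewer
// bytes than requested, so loop until the message is complete.
static void ReceiveExact(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed
        offset += read;
    }
}

// Hypothetical usage with the buffers from the question:
// ReceiveExact(tcpSock, handShake, handshakeLength);
// ReceiveExact(tcpSock, bitfeildRecive, bitfieldLength);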
I am writing a TCP server which needs to send data to connected remote hosts. I'd prefer socket send calls to never block, ever. To facilitate this I am using Socket.Select to identify writable sockets and writing to those sockets using Socket.Send. The Socket.Select MSDN article states:
If you already have a connection established, writability means that all send operations will succeed without blocking.
I am concerned about the case where the remote socket is not actively draining its buffer, that buffer fills, and TCP pushes back onto my server's socket. In this case I thought the server would not be able to send and the accepted socket's buffer would fill.
I wanted to know the behavior of Socket.Send when the send buffer is partially full in this case. I hoped it would accept as many bytes as it can and return that count. I wrote a snippet to test this, but something strange happened: it always sends all the bytes I give it!
snippet:
var LocalEndPoint = new IPEndPoint(IPAddress.Any, 22790);
var listener = CreateSocket();
listener.Bind(LocalEndPoint);
listener.Listen(100);
Console.WriteLine("begun listening...");
var receiver = CreateSocket();
receiver.Connect(new DnsEndPoint("localhost", 22790));
Console.WriteLine("connected.");
Thread.Sleep(100);
var remoteToReceiver = listener.Accept();
Console.WriteLine("connection accepted {0} receive size {1} send size.", remoteToReceiver.ReceiveBufferSize, remoteToReceiver.SendBufferSize);
var stopwatch = Stopwatch.StartNew();
var bytes = new byte[] {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
var bytesSent = remoteToReceiver.Send(bytes, 0, 16, SocketFlags.None);
stopwatch.Stop();
Console.WriteLine("sent {0} bytes in {1}", bytesSent, stopwatch.ElapsedMilliseconds);
public Socket CreateSocket()
{
var socket = new Socket(SocketType.Stream, ProtocolType.Tcp)
{
ReceiveBufferSize = 4,
SendBufferSize = 8,
NoDelay = true,
};
return socket;
}
I listen on an endpoint, accept the new connection, never drain the receiving socket's buffer, and send more data than the buffers alone can hold, yet my output is:
begun listening...
connected.
connection accepted, 4 receive size 8 send size.
sent 16 bytes in 0
So somehow Socket.Send manages to send more bytes than it is set up for. Can anyone explain this behavior?
I additionally added a receive call, and the socket does end up receiving all the bytes sent.
I believe the Windows TCP stack always takes all the bytes, because all Windows apps assume it does. I know that on Linux it does not do that. In any case, the poll/select style is obsolete; async sockets are preferably done with await, and the next best option is the APM pattern.
No threads are in use while the IO is in progress, which is what you want. This goes for all the IO-related APM, EAP and TAP APIs in the .NET Framework. The callbacks are queued to the thread pool. You can't do better than that; .NET IO is efficient, don't worry about it much. Spawning a new thread for each IO would frankly be stupid.
Async network IO is actually super simple when you use await and follow best practices. Efficiency-wise, all async IO techniques in .NET are comparably efficient.
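For reference, the await style mentioned above looks roughly like this (a sketch using NetworkStream, which wraps a connected socket; no thread is blocked while the write is in flight):
static async Task SendMessageAsync(TcpClient client, byte[] payload)
{
    NetworkStream stream = client.GetStream();
    // The await releases the calling thread; the continuation runs on a
    // thread-pool thread when the write completes.
    await stream.WriteAsync(payload, 0, payload.Length);
}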
Now let's take a scenario where we use a blocking socket receive, the packet is 5000 bytes, and the receive timeout is set to one second:
s.SetSocketOption (SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 1000);
int bytes_received = 0;
byte[] ReceiveBuffer = new byte[8192];
try
{
    bytes_received = s.Receive(ReceiveBuffer);
}
catch (SocketException e)
{
    // 10060 = WSAETIMEDOUT: the receive timed out
    if (e.ErrorCode == 10060)
    {
        Array.Clear(ReceiveBuffer, 0, ReceiveBuffer.Length);
    }
}
Now our scenario dictates that 4000 bytes have already gone through, the socket is still blocking, and some error occurred on the receiving end.
On the receiving end we would just dispose of the 4000 bytes by catching the socket exception.
Is there any guarantee that the socket on the sending end won't still push out the 1000 bytes that remain?
Does the sending socket know to truncate them if it wasn't disconnected? When we attempt to receive again, won't they be the first bytes we receive?
What I'm asking is:
a) Does TCP have some mechanism that tells the socket to dispose of the rest of the message?
b) Is there a socket flag we could send or receive with that tells the buffers to dispose of the rest of the message?
First off, TCP/IP operates on streams, not packets. So you need some kind of message framing in your protocol, regardless of buffer sizes, blocking calls, or MTUs.
Secondly, each TCP connection is independent. The normal design is to close a socket when there's a communications error. A new socket connection can then be established, which is completely independent from the old one.
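A minimal sketch of the kind of message framing meant here: prefix every message with its length, so the receiver knows where one message ends and the next begins no matter how TCP splits or coalesces the bytes (the helper names are illustrative):
static void SendFramed(Socket socket, byte[] payload)
{
    // 4-byte length prefix followed by the payload itself.
    socket.Send(BitConverter.GetBytes(payload.Length));
    socket.Send(payload);
}

static byte[] ReceiveFramed(Socket socket)
{
    int length = BitConverter.ToInt32(ReceiveExact(socket, 4), 0);
    return ReceiveExact(socket, length);
}

// Loop until exactly 'count' bytes have been read; a single Receive may return less.
static byte[] ReceiveExact(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset);
        offset += read;
    }
    return buffer;
}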
I'm wondering whether, with async sockets in C#, receiving 0 bytes from the EndReceive call means the server has actually disconnected us.
Many examples I see suggest that this is the case, but I'm getting disconnects a lot more frequently than I would expect.
Is this code correct? Or does endResult <= 0 not really say anything about the connection state?
private void socket_EndRead(IAsyncResult asyncResult)
{
//Get the socket from the result state
Socket socket = asyncResult.AsyncState as Socket;
//End the receive
int endResult = socket.EndReceive(asyncResult);
if (endResult > 0)
{
//Do something with the data here
}
else
{
//Server closed connection?
}
}
From the docs:
If the remote host shuts down the Socket connection and all available data has been received, the EndReceive method completes immediately and returns zero bytes.
So yes, zero bytes indicates a remote close.
A read length of 0 should mean a graceful shutdown. A disconnect throws an error (10054, 10053 or 10051).
In practice, though, I did notice reads completing with 0 length even though the connection was alive, and the only way to handle it is to check the socket status on 0-length reads. The situation was as follows: multiple buffers are posted on a socket for receive, and the thread that posted them is then trimmed by the pool. The OS notices that the thread that made the requests is gone and notifies the posted operations with error 995 ERROR_OPERATION_ABORTED, as documented. However, what I found is that when multiple operations are posted (i.e. multiple reads), only the first is notified with error 995; the subsequent ones are notified with success and 0 length.
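The "check the socket status" part can be done with the usual Poll/Available heuristic; a sketch (best-effort, not a guarantee):
// Returns true if the peer appears to have closed or reset the connection.
// A socket that is readable but has zero bytes available has been closed by
// the peer; a socket that is simply idle is not readable at all, which covers
// the spurious 0-length completions described above.
static bool PeerHasClosed(Socket socket)
{
    try
    {
        return socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0;
    }
    catch (SocketException)
    {
        return true; // hard errors (10054 etc.) also mean the connection is gone
    }
}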