I am trying to send multiple packets using TcpClient. This code works on the first iteration, but after the first iteration the server isn't receiving anything (although the loop keeps iterating). The server-side code is working fine, as I have been testing it with Packet Sender. Is there any way to send a burst of packets using TcpClient?
TcpClient tcpClient = new TcpClient(localIpAddress, localPort);
tcpClient.NoDelay = true;
tcpClient.Connect(remoteIpAddress, remotePort);
Stream stream = tcpClient.GetStream();
int i = 0;
while (true)
{
    i = i + 1;
    Console.WriteLine("Message {0} Sent", i);
    byte[] buffer = Encoding.ASCII.GetBytes(message);
    stream.Write(buffer, 0, buffer.Length);
    stream.Flush();
    Thread.Sleep(500);
}
You can't accurately control the sending of packets using TcpClient. In fact you can't even do it with a TCP Socket. You'd need to use a raw socket and construct your own TCP packets.
The TCP stack decides when it's going to send a packet based on the socket configuration options, how much information is in the buffer, when the last ACK was, what the window size is, what the RTT is and a few other things.
You can fudge it by sending lots of data, which will force the TCP stack to send packets immediately, as fast as it can. You can also set TcpClient.NoDelay, but this will not work for multiple successive writes unless you have a reasonable delay between writes, and even then it is not guaranteed.
You can make low-level calls to work out how many packets were sent/received, but you can only influence this by tweaking the parameters of the TCP stack, and even then it will never be deterministic, because IP is a lossy protocol and TCP packets will go missing and need to be retransmitted.
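One way to see those counters from managed code is the machine-wide statistics in System.Net.NetworkInformation; a minimal sketch (note these are system-wide TCP counters, not per-socket):
// Requires System.Net.NetworkInformation; counters cover the whole machine, not one socket.
TcpStatistics stats = IPGlobalProperties.GetIPGlobalProperties().GetTcpIPv4Statistics();
Console.WriteLine("Segments sent: {0}", stats.SegmentsSent);
Console.WriteLine("Segments received: {0}", stats.SegmentsReceived);
Console.WriteLine("Segments retransmitted: {0}", stats.SegmentsResent);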
Your problem with the code above is that you're setting a LocalEndPoint with this line:
TcpClient tcpClient = new TcpClient(localIpAddress,localPort);
The port will not be reusable until the socket has passed through the TIME_WAIT phase, which, depending on how the socket was shut down, can be up to 120 seconds.
Change that line to:
TcpClient tcpClient = new TcpClient();
Or, if you must set the source IP address:
TcpClient tcpClient = new TcpClient(localIpAddress,0);
You can read about the TIME_WAIT here and why it is an intrinsic part of the TCP Protocol.
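Putting it together, a minimal sketch of the corrected send loop (assuming the same message, remoteIpAddress, and remotePort variables from the question):
// Let the OS pick the local endpoint so a port stuck in TIME_WAIT can't get in the way.
TcpClient tcpClient = new TcpClient();
tcpClient.NoDelay = true;
tcpClient.Connect(remoteIpAddress, remotePort);
NetworkStream stream = tcpClient.GetStream();

byte[] payload = Encoding.ASCII.GetBytes(message);
for (int i = 1; ; i++)
{
    Console.WriteLine("Message {0} Sent", i);
    // The bytes are queued to the TCP stack; packet boundaries are not preserved.
    stream.Write(payload, 0, payload.Length);
    Thread.Sleep(500);
}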
As a side note, stream.Flush() (i.e. NetworkStream.Flush()) does absolutely nothing. This is its implementation in the .NET source:
/// <devdoc>
/// <para>
/// Flushes data from the stream. This is meaningless for us, so it does nothing.
/// </para>
/// </devdoc>
public override void Flush() {
}
Related
I have a Socket in a C# application (the application is acting as the server).
I want to set a timeout on sending, so that if the TCP layer does not get an ACK for the data within 10 seconds, the socket throws an exception and I close the whole connection.
// Set socket timeouts
socket.SendTimeout = 10000;
//Make A TCP Client
_tcpClient = new TcpClient { Client = socket, SendTimeout = socket.SendTimeout };
Then later on in code, I send data to that socket.
/// <summary>
/// Function to send data to the socket
/// </summary>
/// <param name="data"></param>
private void SendDataToSocket(byte[] data)
{
    try
    {
        // Send data to the connection
        _tcpClient.Client.Send(data);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Error writing data to socket: " + ex.Message);
        // Close our entire connection
        IncomingConnectionManager.CloseConnection(this);
    }
}
Now when I test, I let my sockets connect and everything is happy. I then just kill the other socket (no TCP close, just power off the unit).
I try to send a message to it. It doesn't time out; even after 60 seconds it's still waiting.
Have I done something wrong here, or am I misunderstanding the functionality of setting the socket's SendTimeout value?
A socket Send() actually just copies your data into the network stack's outgoing buffer. If the copy succeeds (i.e., there is enough space to hold your data), no error is generated. This does not mean that the other side received it, or even that the data went out on the wire.
Any send timeout starts counting once the buffer is full, which indicates that the other side is receiving data more slowly than you are sending it (or, in the extreme case, not receiving anything at all because the cable is broken or the peer was powered off or crashed without closing its socket properly). If the buffer stays full for the timeout period, you'll get an error.
In other words, there is no way to detect an abrupt socket failure (like a bad cable or a powered-off or crashed peer) other than filling the outgoing buffer until the timeout triggers.
Notice that in the case of a graceful shutdown of the peer's socket, your socket will become aware of it and give you errors if you try to send or receive after the close was received by your socket, which may be some time after you finished your operation. Again, in this case you have to trigger the error (by sending or receiving); it does not happen by itself.
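To see the timeout actually fire against a dead peer, you have to keep sending until the outgoing buffer fills. A minimal sketch, assuming an already-connected Socket named socket whose peer has been powered off:
socket.SendTimeout = 10000;
byte[] payload = new byte[8192];
try
{
    while (true)
    {
        // Each Send only copies into the outgoing buffer and succeeds
        // until that buffer is full; only then does SendTimeout apply.
        socket.Send(payload);
    }
}
catch (SocketException ex)
{
    Console.WriteLine("Send finally timed out: " + ex.SocketErrorCode);
}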
I am writing a TCP server which needs to send data to connected remote hosts. I'd prefer socket send calls to never block, ever. To facilitate this I am using Socket.Select to identify writable sockets and writing to those sockets using Socket.Send. The Socket.Select MSDN article states:
If you already have a connection established, writability means that all send operations will succeed without blocking.
I am concerned about the case where the remote socket is not actively draining its buffer, that buffer fills, and TCP pushes back onto my server's socket. In this case I thought the server would not be able to send and the accepted socket's buffer would fill.
I wanted to know the behavior of socket.Send when the send buffer is partially full in this case. I hope it would accept as many bytes as it can and return that many bytes. I wrote a snippet to test this, but something strange happened: it always sends all the bytes I give it!
snippet:
var LocalEndPoint = new IPEndPoint(IPAddress.Any, 22790);
var listener = CreateSocket();
listener.Bind(LocalEndPoint);
listener.Listen(100);
Console.WriteLine("begun listening...");
var receiver = CreateSocket();
receiver.Connect(new DnsEndPoint("localhost", 22790));
Console.WriteLine("connected.");
Thread.Sleep(100);
var remoteToReceiver = listener.Accept();
Console.WriteLine("connection accepted {0} receive size {1} send size.", remoteToReceiver.ReceiveBufferSize, remoteToReceiver.SendBufferSize);
var stopwatch = Stopwatch.StartNew();
var bytes = new byte[] {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
var bytesSent = remoteToReceiver.Send(bytes, 0, 16, SocketFlags.None);
stopwatch.Stop();
Console.WriteLine("sent {0} bytes in {1}", bytesSent, stopwatch.ElapsedMilliseconds);
public Socket CreateSocket()
{
    var socket = new Socket(SocketType.Stream, ProtocolType.Tcp)
    {
        ReceiveBufferSize = 4,
        SendBufferSize = 8,
        NoDelay = true,
    };
    return socket;
}
I listen to an endpoint, accept the new connection, never drain the receiver socket buffer, and send more data than can be received by the buffer alone, yet my output is:
begun listening...
connected.
connection accepted, 4 receive size 8 send size.
sent 16 bytes in 0
So somehow socket.Send is managing to send more bytes than it is set up for. Can anyone explain this behavior?
I additionally added a receive call and the socket does end up receiving all the bytes sent.
I believe the Windows TCP stack always takes all bytes because all Windows apps assume it does. I know that on Linux it does not do that. Anyway, the poll/select style is obsolete. Async sockets are done preferably with await. Next best option is the APM pattern.
No threads are in use while the IO is in progress. This is what you mean/want. This goes for all the IO-related APM, EAP and TAP APIs in the .NET Framework. The callbacks are queued to the thread-pool. You can't do better than that. .NET IO is efficient, don't worry about it much. Spawning a new thread for an IO would frankly be stupid.
Async network IO is actually super simple when you use await and follow best-practices. Efficiency-wise all async IO techniques in .NET are comparably efficient.
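For illustration, a minimal sketch of an awaited send loop (assuming an already-connected Socket named socket and the Task-based SendAsync overload from SocketTaskExtensions):
static async Task SendAllAsync(Socket socket, byte[] data)
{
    int offset = 0;
    while (offset < data.Length)
    {
        // SendAsync completes with however many bytes the stack accepted;
        // no thread is blocked while the send is pending.
        offset += await socket.SendAsync(
            new ArraySegment<byte>(data, offset, data.Length - offset),
            SocketFlags.None);
    }
}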
I have a simple UDP listener that I am trying to collect datagrams with. My datagrams can be in one of two data formats. With the first format, I receive data in my program as expected. With the second, there is absolutely no indication that the data is ever received by my program, even though I can verify with Wireshark that the UDP data is reaching the network interface. I thought that maybe these were malformed UDP packets that Windows was rejecting, but Wireshark does label them as UDP. My code is below:
static void Main(string[] args)
{
    Thread thdUdpServer = new Thread(new ThreadStart(serverThread));
    thdUdpServer.Start();
}

static void serverThread()
{
    Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    socket.Bind(new IPEndPoint(new IPAddress(0), 2000));
    while (true)
    {
        byte[] responseData = new byte[128];
        socket.Receive(responseData);
        string returnData = Encoding.ASCII.GetString(responseData);
        Console.WriteLine(DateTime.Now + " " + returnData);
    }
}
The missing packets are all 29-byte datagrams that look something like this (translated to ASCII):
#01RdFFFF...?...... ........F
Why would Wireshark indicate their presence but .NET not seem to see them?
If the bytes contain non-printable ASCII characters, you may not see them on the Console.
There's something missing in your code. It should be throwing a socket exception when calling ReceiveFrom (at least according to MSDN; I haven't tried your code).
You should bind your socket to the address:port you want to listen on (or use 0.0.0.0 as the address to listen on any local address):
socket.Bind(new IPEndPoint(new IPAddress(0), 2000));
The EndPoint in ReceiveFrom is not the listening port for the server. It's the address you want to receive packets from. You can use an Endpoint of 0.0.0.0:0 to receive from any host.
After returning from the method the Endpoint will be filled with the address of the host that sent the packet (client).
You can use Receive instead of ReceiveFrom if you don't care about the client end point.
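As an illustration, a minimal sketch using ReceiveFrom with a 0.0.0.0:0 endpoint (assuming the bound socket from the question's code):
EndPoint remote = new IPEndPoint(IPAddress.Any, 0); // 0.0.0.0:0 = any host, any port
byte[] responseData = new byte[128];
int received = socket.ReceiveFrom(responseData, ref remote);
// remote now holds the address and port of whoever sent the datagram
Console.WriteLine("{0} bytes from {1}", received, remote);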
Likely your client is not sending packets from 192.168.1.100:2000, and that is why you are not receiving them; though why you're not getting an exception when calling ReceiveFrom is beyond me.
Also:
There is no need to call Convert.ToInt32 in:
new IPEndPoint(IPAddress.Parse("192.168.1.100"), Convert.ToInt32(2000));
2000 is already an int.
I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled from the socket fast enough, there is a bottleneck at the TCP level and the received data is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read from the socket? (via some external tool like Wireshark or otherwise)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I guess the dispatch is somehow blocking reception until it is done, ending up in latency (i.e., data that is processed by the dispatch is delayed), hence my (false?) conclusion that there was data accumulating at the socket level or before it.
How can I see how many TCP bytes are waiting to be read from the socket?
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
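For example, a minimal sketch of reading one length-prefixed message (hypothetical helpers, assuming a connected NetworkStream named stream and a 4-byte big-endian length header instead of the textual "13|" prefix above):
static byte[] ReadMessage(Stream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Read may return fewer bytes than requested, so keep looping.
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}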
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket. They only know which packets have passed by the network interface.
The only way to check it with Wireshark is to actually know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use socket.Available if you are using the plain Socket class. Otherwise you have to define a header byte which gives the number of bytes to be sent from the other end.
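A minimal sketch of checking Available (assuming a connected Socket named socket):
// Available reports how many received bytes are currently buffered locally,
// i.e. how much a Receive call could return right now without blocking.
if (socket.Available > 0)
{
    Console.WriteLine("{0} bytes waiting to be read", socket.Available);
}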
As we know, for UDP receive we use Socket.ReceiveFrom or UdpClient.Receive.
Socket.ReceiveFrom accepts a byte array from you to put the UDP data in.
UdpClient.Receive directly returns a byte array with the data in it.
My question is: how do I set the buffer size inside the Socket? I think the OS maintains its own buffer for receiving UDP data, right? For example, if a UDP packet is sent to my machine, the OS will put it into a buffer and wait for us to call Socket.ReceiveFrom or UdpClient.Receive, right?
How can I change the size of that internal buffer?
I have tried Socket.ReceiveBufferSize; it has no effect at all for UDP, and it clearly says it is for the TCP window. I have also done a lot of experiments which convince me that Socket.ReceiveBufferSize is NOT for UDP.
Can anyone share some insight into the UDP internal buffer?
Thanks
I have seen some posts here, e.g.:
http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/c80ad765-b10f-4bca-917e-2959c9eb102a
Dave said that Socket.ReceiveBufferSize can set the internal buffer for UDP. I disagree.
The experiment I did is like this:
27 hosts send a 10KB UDP packet to me over a LAN at (almost) the same time. I have a while loop to handle each packet; for each packet, I create a thread to handle it. I used UdpClient or Socket to receive the packets.
I lost about 50% of the packets. I think it is a burst of UDP sends and I can't handle all of them in time.
This is why I want to increase the buffer size for UDP. Say, if I change the buffer size to 1MB, then 27 * 10KB = 270KB of data can be accepted by the buffer, right?
I tried changing Socket.ReceiveBufferSize to many different values, and it just has no effect at all.
Can anyone help?
I use the .NET UdpClient often and I have always used Socket.ReceiveBufferSize with good results. Internally it calls Socket.SetSocketOption with the ReceiveBuffer parameter. Here is some quick, simple code you can test with:
public static void Main(string[] args)
{
    IPEndPoint remoteEp = null;
    UdpClient client = new UdpClient(4242);
    client.Client.ReceiveBufferSize = 4096;

    Console.Write("Start sending data...");
    client.Receive(ref remoteEp);
    Console.WriteLine("Good");

    Thread.Sleep(5000);
    Console.WriteLine("Stop sending data!");
    Thread.Sleep(1500);

    int count = 0;
    while (true)
    {
        client.Receive(ref remoteEp);
        Console.WriteLine(string.Format("Count: {0}", ++count));
    }
}
Try adjusting the value passed to ReceiveBufferSize. I tested by sending a constant stream of data for the 5 seconds and got 10 packets. I then increased the buffer size 4x, and the next time I got 38 packets.
I would also look at other places in your network where you may be dropping packets, especially since you mention in your other post that you are sending 10KB packets. A 10KB datagram will be fragmented into MTU-sized packets when it is sent; if any one fragment in the series is dropped, the entire datagram will be dropped.
The issue with setting ReceiveBufferSize is that you need to set it directly after creating the UdpClient object. I had the same issue with my changes not being reflected when reading the value of ReceiveBufferSize back.
UdpClient client = new UdpClient();
// no code in between these two lines accessing client
client.Client.ReceiveBufferSize = someValue;
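For reference, ReceiveBufferSize is a wrapper over SetSocketOption, so the equivalent direct call looks like this (a sketch assuming the 1MB buffer discussed above):
// Request a 1MB receive buffer directly via the socket option.
client.Client.SetSocketOption(
    SocketOptionLevel.Socket,
    SocketOptionName.ReceiveBuffer,
    1024 * 1024);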