I am using the C# UdpClient class to do UDP networking. There is one UdpClient object, bound to a fixed local port but not connected to any remote endpoint, because it needs to be able to send to and receive from multiple different endpoints.
I have two threads: one for sending, one for receiving. Now, when I send data to an endpoint that exists but is not listening on that port, I expect a SocketException, and I do get one. Unfortunately, it is not my Send call that throws the exception but the Receive call. So my sending thread sends data to an "invalid" endpoint, and my receiving thread gets the exception. At that point, of course, I have no idea which endpoint caused the exception.
Storing the endpoint before sending, then accessing that in the receiving thread is just a race condition error waiting to happen.
Unfortunately, the SocketException does not give me the endpoint that caused the error.
Any ideas? Is it somehow possible to make the exception be thrown on the sending thread?
Help is greatly appreciated.
When you send() a UDP packet, it goes out on the wire and effectively disappears. You should not assume that you will get any feedback at all.
Sometimes, if there is no listener at the destination, the destination might be kind enough to send back an ICMP_UNREACH_PORT message. The routers in between then might be kind enough to deliver that message to your operating system. If that happens, it will be long after your original send() call returned. For ICMP_UNREACH_PORT, the OS typically caches it and reports an error the next time you do a send() to the same destination. Other ICMP messages (you didn't mention which exception you are getting) could affect other calls.
So the bottom line is that there's no telling when, or if, UDP errors will be reported. It depends on a lot of variables. So be prepared to handle exceptions on any call, and be prepared for packets to just disappear without any error reported.
I think this is expected behavior for UDP. A UDP send() is not a blocking operation, so it won't wait for a potential error. (Not to mention the fact that you can't rely on the error messages being reliably received when sending to an active host with a closed port - it could be firewalled, rate-limited or otherwise dropped due to congestion, etc.)
You can connect() the UDP socket to a specific remote endpoint, which would allocate a unique port number and allow the OS to [most likely] distinguish errors from that specific endpoint from any other random host. But again, you should not rely on the ability to handle these errors.
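For illustration, a minimal sketch of the connected-socket approach might look like this (the peer endpoint, port, and payload are all made up). On a connected UdpClient, a SocketException can only concern the one peer the socket is connected to:

using System;
using System.Net;
using System.Net.Sockets;

class ConnectedUdpSketch
{
    static void Main()
    {
        var peer = new IPEndPoint(IPAddress.Loopback, 9999); // hypothetical peer
        using (var client = new UdpClient())
        {
            client.Connect(peer); // associate the socket with this one endpoint
            byte[] payload = { 1, 2, 3 };
            try
            {
                client.Send(payload, payload.Length); // no endpoint argument on a connected socket
                IPEndPoint from = null;
                client.Receive(ref from);
            }
            catch (SocketException ex)
            {
                // On a connected socket, the error can only relate to 'peer'.
                Console.WriteLine("Error talking to " + peer + ": " + ex.SocketErrorCode);
            }
        }
    }
}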
It's too bad there isn't more information in the exception. This seems like an oversight in the way .NET handles UDP sockets. According to the documentation, you need to check the exception's ErrorCode and handle the error appropriately. (which, in your case, could likely mean ignoring the error.)
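As a sketch of the "ignore it" option: on Windows, a UDP send that triggered an ICMP port-unreachable typically surfaces in a later call as SocketError.ConnectionReset, which a receive loop can simply swallow (the processing step is hypothetical):

using System.Net;
using System.Net.Sockets;

static void ReceiveLoop(UdpClient client)
{
    var remote = new IPEndPoint(IPAddress.Any, 0);
    while (true)
    {
        try
        {
            byte[] datagram = client.Receive(ref remote);
            // ... process 'datagram' from 'remote' ...
        }
        catch (SocketException ex)
        {
            if (ex.SocketErrorCode != SocketError.ConnectionReset)
                throw; // something other than an ICMP-induced reset
            // An earlier send hit a closed port; the exception does not say which
            // endpoint, but the socket itself is still usable, so keep receiving.
        }
    }
}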
Similar question: C# Socket BeginReceive / EndReceive capturing multiple messages
I am currently managing the communication between a website and a WinForms application, which is done over a socket created this way:
Socket socket = new Socket(AddressFamily.InterNetwork,
SocketType.Stream,
ProtocolType.Tcp);
If the emitter sends two messages, A ([TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage]) and B ([TagBeginMessage:lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]), I expect that the receiver will get
[TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage][TagBeginMessage: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]
or
[TagBeginMessage: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage][TagBeginMessage:lengthMessageA] aaaaaaaaaaaaaaaa [EndMessage]
This is indeed the case for the vast majority of messages. However, the necessarily asynchronous nature of the reception sometimes causes a bug when message A is quite long and message B quite short, in which case the receiver gets this:
[TagBeginMessage:lengthMessageA] aaaaaaaaaa[TagBeginMessageB: lengthMessageB] bbbbbbbbbbbbbbbbbbb [EndMessage]aaaaaa [EndMessageA]
This can still be parsed, although it requires a unique ending for each message. However, while I haven't seen it happen, I am afraid this means the following case is also possible (since the socket sends its data in packets):
[TagBeginMessage:lengthMessageA] aaaaa[TagBeginMessageB: lengthMessageB] bbbbbbaaaabbbbbbbbbbbbb [EndMessage]aaaaaa [EndMessageA]
This is unparseable. Adding a length prefix (as suggested in http://blog.stephencleary.com/2009/04/sample-code-length-prefix-message.html) to the beginning of the message lets me detect the problem but doesn't solve it. What can I do to avoid this?
My current candidate solutions are:
Send small messages. Not elegant, but it should work.
Send a very small message to signal that the socket's buffer is empty.
It sounds like you are sending data on the same connection from multiple threads simultaneously. That's fine, but if you do that, make sure you lock the connection to the current thread for the duration of sending out a logical package (length + data) - that way you will receive a complete packet (sent from one thread) before receiving anything that a different thread sent.
Under TCP/IP, the bytes are guaranteed to arrive in the same order they were sent, but if you send, say, half a logical package (length + data) from one thread and then half a package from another, there is no way for the protocol layer or the receiving end to untangle that.
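A minimal sketch of that locking discipline (the 4-byte length prefix is just an example format):

using System;
using System.Net.Sockets;

class PacketSender
{
    private readonly object sendLock = new object();
    private readonly Socket socket;

    public PacketSender(Socket socket) { this.socket = socket; }

    // Sends one logical package (length prefix + payload) atomically
    // with respect to other threads using the same PacketSender.
    public void Send(byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        lock (sendLock)
        {
            socket.Send(prefix);  // length first...
            socket.Send(payload); // ...then the data, with no interleaving possible
        }
    }
}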
You have already linked to Stephen Cleary's blog, so you know that TCP requires you to do some form of message framing. Here is another one of Cleary's posts that describes your problem.
In short, you must frame your TCP messages. You should also never see your final, unparseable example: TCP delivers all of your data in the correct order.
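For illustration, a rough sketch of length-prefix framing on the receiving side (the 4-byte little-endian length, matching BitConverter, is an assumption):

using System;
using System.IO;

static byte[] ReadPacket(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

// Stream.Read may return fewer bytes than requested, so loop until done.
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message.");
        offset += read;
    }
    return buffer;
}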
I'm not an expert with sockets, but I have some experience. What you could do is split the data that's being sent. Say you have a very long string containing the following: "AAAAAAAABBBBBBCCCCDDDDDEEE". Instead of sending the entire string through the socket at once, you could send all the A characters, then the Bs, then the Cs, then the Ds, etc. Right before the user receives the message, you could merge all the characters back together. That's just an idea.
On my server, I use something like this on a connected socket to send data to the client:
IAsyncResult asyncRes = connectionSocket.BeginSend(data, 0, length, SocketFlags.None,
out error, new AsyncCallback(SendDataDone), signalWhenDataSent);
It seems that when there is a slow internet connection between the server and the client, I receive an exception with this description: NoBufferSpaceAvailable
What exactly does this error mean? Is the internal OS buffer for the socket connectionSocket full? What can I do to make it work? For context, this appears in an HTTP proxy server. I suppose this might indicate that the rate at which data is coming from the origin server is higher than the rate at which my server can push it to the proxy client. How would you deal with it? I am using TCP.
The way to fix this problem is to match the rate at which you read from one socket to the rate at which you write to the other. If you do no buffering of your own, you cannot write to a socket faster than the client connected at the other end can read.
If one uses synchronous sockets, the problem does not appear, because they block as long as the operation is still pending; this is not the case with async calls.
Exactly, most likely the kernel socket buffer holding outgoing data is full. You're sending too "fast" for the client. You can try to increase the send buffer size, but that does not guarantee you won't bump into this problem again.
The simple answer is that you should be prepared for a send operation to fail, and retry it later. It's not ideal to maintain an ever-growing buffer inside your application either, but the origin server should also slow down if you stop receiving (depending on the TCP window size and your receive buffer size).
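As a sketch of "retry later" (the back-off delay is arbitrary):

using System.Net.Sockets;
using System.Threading;

static void SendWithRetry(Socket socket, byte[] data)
{
    while (true)
    {
        try
        {
            socket.Send(data);
            return;
        }
        catch (SocketException ex)
        {
            if (ex.SocketErrorCode != SocketError.NoBufferSpaceAvailable)
                throw;
            Thread.Sleep(100); // back off and let the send buffer drain
        }
    }
}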
I am troubleshooting a theoretical problem about HOW servers and clients work on real machines. I understand the overall .NET process, but I am missing something at the code level, and I was unable to find anything related to this.
I code in Visual C# 2008 and use the regular TcpClient / TcpListener in two different projects:
Project1 (Client)
Project2 (Server)
My issues are maybe so simple:
1-> Regarding how the server receives data: are event handlers possible?
In my first server code, I used a loop like this:
while (true)
{
if (networkStream.DataAvailable) // networkStream is the client's NetworkStream instance
{
//stuff
}
Thread.Sleep(200);
}
I consider this a crude way to control the incoming data from a server, BUT the server is always ready to receive data.
My question: is there anything for receiving data that works like this? ->
AcceptTcpClient();
I want a handler that waits until something happens, in this case until a specific socket receives data.
2-> General networking I/O methods.
The problem (besides my being a noob) is how to handle multiple writes.
If I send a lot of data in a byte array and then send more data afterwards, things can break: all the data gets joined together and errors occur when receiving. I want to handle multiple writes to send and receive.
Is this possible?
About how server receives data, event handlers are possible?
If you want to write call-back oriented server code, you may find MSDN's Asynchronous Server Socket Example exactly what you're looking for.
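A stripped-down sketch of that callback style follows (the port and buffer size are arbitrary). BeginAcceptTcpClient parks until a client connects, and BeginRead parks until data arrives, so there is no polling loop:

using System;
using System.Net;
using System.Net.Sockets;

class CallbackServerSketch
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8000); // port is arbitrary
        listener.Start();
        listener.BeginAcceptTcpClient(OnAccept, listener); // waits; no polling
        Console.ReadLine(); // keep the process alive
    }

    static void OnAccept(IAsyncResult ar)
    {
        var listener = (TcpListener)ar.AsyncState;
        TcpClient client = listener.EndAcceptTcpClient(ar);
        listener.BeginAcceptTcpClient(OnAccept, listener); // keep accepting others

        var state = new ReadState(client);
        client.GetStream().BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
    }

    static void OnRead(IAsyncResult ar)
    {
        var state = (ReadState)ar.AsyncState;
        NetworkStream stream = state.Client.GetStream();
        int read = stream.EndRead(ar);
        if (read == 0) { state.Client.Close(); return; } // client disconnected

        // ... process state.Buffer[0..read) here ...
        stream.BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state); // wait for more
    }

    class ReadState
    {
        public readonly TcpClient Client;
        public readonly byte[] Buffer = new byte[4096];
        public ReadState(TcpClient client) { Client = client; }
    }
}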
... the sending can break if I send more data. All data got joined and errors occurs when receiving.
That is the nature of TCP. The standardized Internet protocols fall into a few categories:
            block-oriented   stream-oriented
reliable    SCTP             TCP
unreliable  UDP              ---
If you really want to send blocks of data, you can use SCTP, but be aware that many firewalls DROP SCTP packets because they aren't "usual". I don't know if you can reliably route SCTP packets across the open Internet.
You can wrap your own content into blocks of data with your own headers or add other "synchronization" mechanisms to your system. Consider an HTTP server: it must wait until it reads an entire request like:
GET /index.html HTTP/1.1␍␊
Host: www.example.com␍␊
␍␊
Until the server sees the CRLFCRLF sequence, it must keep the partially-read data in a buffer. The bytes might come in one at a time in a dozen or more packets. Or, if the client is sending multiple requests in a single stream, a dozen requests might come in a single packet.
You just have to handle this.
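As a sketch of that buffering (only the delimiter handling is the point; the class and method names are made up):

using System;
using System.Text;

class RequestAccumulator
{
    private readonly StringBuilder buffer = new StringBuilder();
    private const string Delimiter = "\r\n\r\n"; // CRLFCRLF, as in HTTP

    // Call this with each chunk the socket delivers, however it was split.
    public void OnBytesReceived(byte[] data, int count)
    {
        buffer.Append(Encoding.ASCII.GetString(data, 0, count));
        int at;
        while ((at = buffer.ToString().IndexOf(Delimiter, StringComparison.Ordinal)) >= 0)
        {
            string request = buffer.ToString(0, at); // one complete request
            buffer.Remove(0, at + Delimiter.Length);
            Console.WriteLine("Complete request:\n" + request);
        }
    }
}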
My question is quite simple and is about asynchronous sockets working with the TCP protocol.
When I send some data with the "BeginSend" method, when will the callback be called?
Will it be called when the data has just been sent out to the network, or only when we are assured that the data has reached its destination (as the TCP specification would suggest)?
Thanks for your answers.
KiTe.
ps : I'm sorry if my english is a bit bad ^^.
From MSDN:
"When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception."
"The successful completion of a send does not indicate that the data was successfully delivered. If no buffer space is available within the transport system to hold the data to be transmitted, send will block unless the socket has been placed in nonblocking mode."
http://msdn.microsoft.com/en-us/library/38dxf7kt.aspx
When the callback is called, you can be sure that the data has been cleared from the output buffer (the asynchronous operation uses a separate thread to ensure that your calling thread is not blocked in case there is no room in the transmit buffer and it has to wait to send the data) and that it will reach its destination, but not that it has reached it yet.
Because of the TCP protocol's nature however, you can be sure (well, I guess almost sure) that it will get to the destination, eventually.
However, for timing purposes you should not consider the time of the callback as being the same as the time the data reaches the other party.
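To make the distinction concrete, here is an illustrative sketch: the value EndSend returns is the number of bytes the transport accepted, which says nothing about delivery to the remote application:

using System;
using System.Net.Sockets;

static void SendAsync(Socket socket, byte[] data)
{
    socket.BeginSend(data, 0, data.Length, SocketFlags.None, delegate(IAsyncResult ar)
    {
        Socket s = (Socket)ar.AsyncState;
        int accepted = s.EndSend(ar);
        // 'accepted' bytes were handed to the transport; the remote
        // application may still never see them (crash, network loss, etc.).
        Console.WriteLine(accepted + " bytes handed to the transport.");
    }, socket);
}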
Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
volatile bool done; // volatile: set by the callback thread, polled by the sending thread
void SendData(TcpClient tcp, byte[] data)
{
NetworkStream ns = tcp.GetStream();
done = false;
ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
while (done == false) Thread.Sleep(10);
}
public void myWriteCallBack(IAsyncResult ar)
{
NetworkStream ns = (NetworkStream)ar.AsyncState;
ns.EndWrite(ar);
done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait for 10 seconds(for example) for a response from the server after sending my data otherwise I'll assume something was wrong. If it takes 15 seconds to send my data, then it will always timeout since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary: it could take 1 second, it could take 10 seconds, it could take a minute. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know whether I'm waiting for the data to be sent or waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
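A sketch of that client-side pattern (the 10-second timeout comes from the question; the buffer size and return convention are illustrative):

using System;
using System.IO;
using System.Net.Sockets;

static bool SendAndAwaitReply(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    ns.Write(data, 0, data.Length); // may block while the send buffer drains
    ns.ReadTimeout = 10000;         // from here on, wait at most 10 s for a reply
    try
    {
        byte[] reply = new byte[1024];
        int read = ns.Read(reply, 0, reply.Length);
        // ... inspect the server's response in reply[0..read) ...
        return read > 0;
    }
    catch (IOException)
    {
        return false; // no response within 10 s, or the connection failed
    }
}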
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data) {
NetworkStream ns = tcp.GetStream();
// BUG?: should bytWriteBuffer == data?
IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
r.AsyncWaitHandle.WaitOne();
ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue; it can be passed a timeout parameter. This is a lazy wait: the thread will not be scheduled again until the result is finished or the timeout expires.
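For example, something along these lines (reusing ns, data, and tcp from the code above; the 10 s is from the question, and what to do on failure is up to you):

IAsyncResult r = ns.BeginWrite(data, 0, data.Length, null, null);
if (!r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10), false))
{
    // The local send did not complete within 10 s: the outgoing
    // buffer never drained, so treat the connection as dead.
    tcp.Close();
    return;
}
ns.EndWrite(r);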
I have tried to understand the intent of the .NET NetworkStream designers, and it must be designed this way deliberately: after Write, the data to send is no longer handled by .NET, so it is reasonable that Write returns immediately (and the data will leave the NIC some time soon).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, a longer timeout before receiving any data from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
In general, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
Below .NET are Windows sockets, which use TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get at that information in .NET.
Edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?), we may get what we want :)
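In code, that untested idea would be something like this (the value is a guess):

Socket s = tcp.Client;  // the TcpClient's underlying socket
s.SendBufferSize = 1;   // nearly no local buffering, so sends block until the
                        // data has actually gone out; expect a big throughput penalty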
Perhaps try setting
tcp.NoDelay = true