I have the requirement of receiving a stream of UDP packets at a fixed rate of 1000 packets per second. They are composed of 28 bytes of payload, where the first four bytes (Uint32) are the packet sequence number. The sender is an embedded device on the same local network, and addresses and ports are mutually known. I am not expected to receive every packet, but the host should be designed so that it doesn't add to the intrinsic UDP protocol limitations.
I have only limited previous experience with UDP and Sockets in general (just casual streaming from sensor app on android phone to PC, for a small demo project).
I am not sure what is the most sensible way to receive all the packets at a fast rate. I imagine some sort of loop, or else some sort of self-retriggering receive operation. I have studied the difference between synchronous and asynchronous receives, and also the Task-based Asynchronous Pattern (TAP), but I don't feel very confident yet. It looks like a simple task, but its understanding is still complicated to me.
So the questions are:
Is the UdpClient class suitable for a scenario like this, or am I better off going with the Socket class (or another one, for that matter)?
Should I use synchronous or asynchronous receiving? If I use synchronous, I am afraid of losing packets, and if I use asynchronous, how should I clock/throttle the receive operations so that I don't start them at too high a rate?
I thought about a tight loop testing UdpClient.Available, like the one below. Would that be a good design choice?
while (running)
{
    if (udpSocket.Available > 0)   // Available is a property, not a method
    {
        // do something (what?)
    }
}
Just read from your UdpClient as fast as you can (or as fast as is convenient for your application) using UdpClient.Receive(). The Receive() operation blocks (waits) if there is no incoming packet, so calling Receive() three times in a row will return you three UDP packets, each call waiting until its packet has arrived.
Generally, when receiving UDP packets you do not have to worry about the sending rate. Only the sender needs to worry about the sending rate. So the opposite advice for the sender is: Do not send a lot of UDP packets as fast as you can.
So my answers would be:
Yes, use UdpClient.
Use synchronous receiving. Call UdpClient.Receive() in a loop running in your receive thread, or in the main loop of your program, whatever you like (see the sketch below).
You do not need to check for available data. UdpClient.Receive() will block until there is data available.
Do not be afraid of losing packets when receiving UDP: you will almost never lose UDP packets on the receiving host because of something you did wrong in receiving. UDP packets mostly get dropped by network components such as routers somewhere on the network path, which cannot forward UDP packets as fast as they receive them.
Once the packets arrive at your machine, your OS does a fair amount of buffering (for example 128k on Linux), so even if your application is unresponsive for say 1 second it will not lose any UDP packets.
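To make this concrete, here is a minimal sketch of such a blocking receive loop. The local port (5000) and the little-endian byte order of the sequence number are my assumptions, not part of your description:

using System;
using System.Net;
using System.Net.Sockets;

class UdpReceiverSketch
{
    static void Main()
    {
        // Bind to the agreed-upon local port (5000 is a placeholder).
        using var client = new UdpClient(5000);
        var remote = new IPEndPoint(IPAddress.Any, 0); // filled in by Receive()

        while (true)
        {
            // Blocks until a whole datagram has arrived.
            byte[] datagram = client.Receive(ref remote);
            if (datagram.Length < 4)
                continue; // ignore malformed packets

            // First four bytes are the sequence number (assuming little-endian).
            uint sequence = BitConverter.ToUInt32(datagram, 0);
            Console.WriteLine($"packet {sequence}, {datagram.Length} bytes");
            // ... hand the remaining payload bytes to the rest of the application
        }
    }
}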
Related
So I have written this client/server socket application that uses the SocketAsyncEventArgs "method" for doing async sockets.
Using the same library I have used for many other applications, I now for the first time experience a situation that I never anticipated.
When started, our new client/server application sends lots of data in both directions.
In unit tests using mock objects (without delays) to mimic normal socket operations, it all works well.
But in real situations using real sockets, we get a sort of deadlock where both endpoints are stuck in a Socket.SendAsync() operation (yes, it returned true, so it was not completed synchronously).
My idea is that the receive buffers of both parties are full and the TCP stack is no longer acknowledging any frames (the connection is to 127.0.0.1).
So I made the receive buffer twice as large as the send buffer, but unfortunately it is not that simple, due to the nature of our "protocol" and how we decide whether to send or receive.
I now have to re-think the method that determines when to start sending and when to start receiving.
A complicating factor is that the purpose of this connection is to multiplex multiple bidirectional, general-purpose communication channels over a single socket connection. That means there is no predetermined sequence of communication; each channel may have its own protocol.
Of course there is TLS initiation, handshake, and authentication, which all work well, but once the connection becomes operational and the channels start their own communication, the only certainty is that received data carries a size and a channel number in its header.
After each operation, I check whether there is any data waiting in the receive buffer, e.g. by checking Socket.Available.
Combining this with how much data has been received since the last send operation and how full the transmit buffer is getting, I decide whether to receive more, start sending, or do nothing and poll again in xx ms.
I now realize that this is wrong.
Am I trying to accomplish something that is simply not possible using only one socket connection?
Has anyone ever tried to accomplish something similar, or does anyone know a safe way to do this that does not introduce these odd lock-ups?
Thanks,
Theo.
Good day all!
I am working on an open-source server for a game that is closed source. The game operates over TCP/IP sockets (instead of UDP, doh...), so being a connection-based protocol, I am constrained to work with that.
My current program structure (dumbed down):
Core Thread
Receive new connection and create a new client object.
Client Object
IOloop (runs on its own thread)
Get data from socket, process packets. (one packet at a time)
Send data buffered from other threads (one packet at a time)
The client sends data immediately (no delay), since it runs on its own thread.
I am noticing a huge flaw in the program: it sends data very slowly, because I send data packet by packet, synchronously (Socket.Send(byte[] buffer)).
I would like to send data almost immediately, without delay - asynchronously.
I have tried creating a new thread every time I wanted to send a packet (so each packet is sent on its own managed thread), but this was a huge mess.
My current system uses synchronous sending with the Nagle algorithm disabled, but this has the flaw of bottlenecking: send one packet, the send operation blocks until TCP confirms it, then send the next... I can easily issue 10 packets every 100 ms, and if the packets take 400 ms to send, this backs up and breaks. Of course, on localhost I don't get this issue.
So, my question: what would be the best way to send multiple small packets of data? I am thinking of merging the data to send at the end of every IO thread loop into one large byte buffer, to reduce the back-and-forth delay, but the obvious problem here is that this undermines the Nagle-algorithm avoidance which I had hoped would stop the delays.
How does a synchronous send work? Does it block until the data is confirmed correctly received by the recipient as I believe? Is there a way I can do this without waiting for confirmation? I do understand the packets must all go in order and correctly (as per the protocol specification).
I have done some research and I'm now using the current system:
Async send/receive operations with no threading, using AsyncCallbacks.
So I have the server start a read operation, which then calls back when it completes; when a packet is received, it is processed, and then a new read operation begins...
This method drastically reduces system overhead (thread pools, memory, etc.).
I have used some methods from the following and customised to my liking: https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
Very efficient; I highly recommend this approach. For any TCP/IP network application low latency is very important, and async callbacks are the best way to achieve it.
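For reference, a minimal sketch of that callback-driven read loop (the class name and buffer size are my own placeholders; the MSDN sample linked above is the authoritative version):

using System;
using System.Net.Sockets;

class ReadLoopSketch
{
    private readonly Socket socket;                   // an already-connected TCP socket
    private readonly byte[] buffer = new byte[4096];

    public ReadLoopSketch(Socket connectedSocket) => socket = connectedSocket;

    public void Start() =>
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);

    private void OnReceive(IAsyncResult ar)
    {
        int bytesRead = socket.EndReceive(ar);
        if (bytesRead == 0)
            return; // remote side closed the connection

        ProcessPackets(buffer, bytesRead); // parse and handle whatever arrived

        // Immediately post the next read; no dedicated thread blocks in between.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void ProcessPackets(byte[] data, int count)
    {
        // Placeholder: real packet framing/handling goes here.
    }
}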
I have a .NET Socket (System.Net.Sockets.Socket) which I want to use for UDP (connectionless communication). I want to receive datagrams with polling, that is, I want to check the Available property to see whether a datagram is available. If it is, I call Receive to receive it without blocking. If it is not, I wait and poll again later. But now my problem is the following:
Available only returns how many bytes are available to be read without blocking. It does not tell whether these bytes are enough to form a full datagram. I do not know how big the datagrams I receive will be, so I cannot hardcode this check to a certain number.
How can I determine when one datagram ends and the next starts?
As Philip says, with UDP you either get the entire datagram, or not at all. If the socket reports data is available, it should be the entire amount of a datagram (or some combination of datagrams).
That said, in your post you say you want to use polling to receive the data. That's a poor choice for any implementation. .NET's network I/O has very good asynchronous implementation models you can use (including wrapping the socket in a NetworkStream object and using ReadAsync() with the async/await feature of C# 5), which would be a much better choice.
In addition, you should be very sure that UDP is in fact the protocol you want to use. It is inappropriate for nearly all run-of-the-mill network applications, due to its unreliability. There is no guarantee that:
Any data you send will be received. Datagrams are permitted to be dropped by the network.
The data that is received will be received in the same order in which it was sent. Delivery of datagrams may be reordered by the network.
The data that is received will be unique. The network is permitted to deliver the same datagram multiple times.
For most network applications, the business layer needs to be protected against these types of failures, which means adding your own layer of reliability between the business layer and the network layer. I.e. reinventing TCP.
It is much easier to impose a messaging paradigm on TCP than it is to impose a reliable paradigm on UDP, and for this reason it is in most cases more appropriate to use TCP for all network I/O than UDP.
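As an illustration of the asynchronous alternative to polling mentioned above, here is a minimal sketch using UdpClient.ReceiveAsync with async/await (the local port is a placeholder); each completed receive hands back exactly one whole datagram:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class DatagramReaderSketch
{
    static async Task Main()
    {
        using var client = new UdpClient(9000); // placeholder local port

        while (true)
        {
            // No polling: the await simply completes when a datagram arrives,
            // and result.Buffer always holds one complete datagram.
            UdpReceiveResult result = await client.ReceiveAsync();
            Console.WriteLine($"{result.Buffer.Length} bytes from {result.RemoteEndPoint}");
        }
    }
}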
In my C# application, I am sending UDP messages by calling Socket.SendTo potentially up to 10,000 times per second. I know that it is sometimes unable to keep up with how much I am sending. Is there a way to check how much data (in bytes, or number of times I've called SendTo, etc.) it has queued or backlogged to send?
My ideal solution would be to drop the non-essential messages if the socket starts running behind and let the essential messages be queued by the socket, so knowing the socket status before I call SendTo would be quite helpful.
Thanks
The packets could be queued in so many different places it's not really feasible to do this: they could be on the socket, in the IP stack somewhere, in the network interface output queue, etc...
Even if you find a way to query the appropriate queues in the operating system (which will surely not be portable), you won't really solve your problem by doing this. The reason is that unless you are actually sending messages out faster than your network interface can handle, the bottleneck is most likely somewhere on the network, outside the sending computer. Since many network interfaces on general purpose computers these days are 1Gbps, it is very likely that the network interface is able to send out packets as fast as you originate them. The choke point is probably an ethernet switch or a router somewhere, and you won't be able to find out by querying queues when that ethernet switch or router is dropping packets and when it isn't.
The problem with what you want to do is that it's the OS kernel, not your application, that manages the outgoing queue. And with UDP there is even less queuing going on, so packets simply get dropped if needed.
Your best bet is netstat -s on the command line to see protocol statistics and packet drop counts (Windows probably has some fancy API to query the same from an app, but I'm not aware of it).
Find an acceptable message rate your host and network can sustain, and switch to a "lean" output when that is reached/exceeded. This requires some bookkeeping, but could easily be accomplished with a ring-buffer of timestamps.
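A minimal sketch of that bookkeeping, assuming a budget of 1000 messages per second (both numbers are placeholders): record a timestamp for every SendTo, and only let non-essential messages through while the oldest timestamp in the ring has left the one-second window.

using System.Diagnostics;

class SendRateGate
{
    private readonly long[] ticks = new long[1000]; // one slot per message in the budget
    private int next;

    // Call for every message actually handed to SendTo (essential or not).
    public void RecordSend()
    {
        ticks[next] = Stopwatch.GetTimestamp();
        next = (next + 1) % ticks.Length;
    }

    // True while fewer than 1000 sends happened in the last second.
    public bool UnderLimit()
    {
        long windowStart = Stopwatch.GetTimestamp() - Stopwatch.Frequency; // one second ago
        return ticks[next] <= windowStart; // the next slot holds the oldest timestamp
    }
}

// Usage sketch: if (gate.UnderLimit() || isEssential) { socket.SendTo(...); gate.RecordSend(); }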
I'm writing an instant messaging server in C# for learning purposes.
My question is whether I should use synchronous or asynchronous sockets to handle the IM clients. The goal is to handle as many clients as possible.
I'm not quite sure, but as far as I know, with async sockets the packets don't arrive in order, which means that when you send two chat messages and there is delay/lag, it's possible that the second one arrives before the first one. Is this right, and if so, is there a way to solve this issue?
About sync sockets: are synchronous sockets a good solution for many clients? Do I have to check every socket/connection in a loop for new packets? If so, isn't that quite slow?
Last question: Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading?
The goal is to handle as many clients as possible.
Async then. It scales a lot better.
I'm not quite sure but as far as I know with async sockets the packets don't arrive in order which means when you send 2 chat messages and there is a delay/lag it's possible that the second one arrive before the first one.
TCP guarantees that everything arrives in order.
Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading?
I recommend that you use a separate connection for file transfers. Use the first connection to do a handshake (determine which port to use, the file name, etc.). Then use Socket.SendFile on the new socket to transfer the file.
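A minimal sketch of that idea, with a placeholder endpoint standing in for whatever the handshake on the main connection agreed on:

using System.Net;
using System.Net.Sockets;

class FileTransferSketch
{
    public static void SendFileOnSeparateConnection(string path)
    {
        using var fileSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        fileSocket.Connect(new IPEndPoint(IPAddress.Parse("192.0.2.10"), 6001)); // endpoint agreed in the handshake

        // SendFile streams the whole file over this dedicated socket, so chat
        // messages keep flowing on the original connection in the meantime.
        fileSocket.SendFile(path);
        fileSocket.Shutdown(SocketShutdown.Send);
    }
}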
Everything #jgauffin said (i.e. TCP handles packet-order, async better for n(clients) > 1000).
Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading?
Your custom protocol has to be built to support this. If you write an 8 MB packet to the Socket, you won't be able to write anything else using that socket until the 8 MB are sent. Instead, use upload chunks of a smaller size so that other packets have the chance to go over the pipe as well (a small C# sketch of the chunking follows the frame listing below):
[UPLOAD id=123 START length=8012389]
[UPLOAD id=123 PART chunk=1 length=2048 data=...]
[UPLOAD id=123 PART chunk=2 length=2048 data=...]
[MESSAGE to="foo#example.com" text="Hi"]
[UPLOAD id=123 PART chunk=3 length=2048 data=...]
// ...
[UPLOAD id=123 COMPLETE checksum=0xdeadbeef]
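A minimal sketch of how the sender side could produce those PART payloads (the chunk size and the framing details are placeholders); anything else queued between two chunks, like the MESSAGE above, gets its turn on the socket:

using System;
using System.Collections.Generic;

class UploadChunkerSketch
{
    // Splits a file into small payload chunks so that other frames queued in
    // between (e.g. chat messages) get their turn on the socket.
    public static IEnumerable<byte[]> Chunk(byte[] file, int chunkSize = 2048)
    {
        for (int offset = 0; offset < file.Length; offset += chunkSize)
        {
            int length = Math.Min(chunkSize, file.Length - offset);
            var payload = new byte[length];
            Array.Copy(file, offset, payload, 0, length);
            // In the real protocol each payload would be wrapped in an
            // [UPLOAD id=... PART chunk=... length=...] header before queuing.
            yield return payload;
        }
    }
}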
The difference between an async approach and a sync approach is more about the difference between non-blocking and blocking I/O. With both approaches, the data is delivered in the same order in which it was transmitted. However, you don't block while you wait for an async call to complete, so you can start transmitting to all of your clients before any individual communication has finished writing to its socket (which is why it is typically the approach servers follow).
If you go down the sync route, you block until each transmit/receive operation has completed, which means you may need to run multiple threads to handle the clients.
As for uploading an image at the same time as sending messages, you may want to handle that over a separate connection between the client and server so that it doesn't cause a blockage.
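As a small illustration of the non-blocking behaviour described above (assuming each client is represented by a NetworkStream, which is my placeholder):

using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading.Tasks;

class BroadcastSketch
{
    // Starts a write to every client without waiting for any single client
    // to finish first, then awaits them all.
    public static async Task BroadcastAsync(IEnumerable<NetworkStream> clients, byte[] message)
    {
        var pending = new List<Task>();
        foreach (var stream in clients)
            pending.Add(stream.WriteAsync(message, 0, message.Length));

        await Task.WhenAll(pending);
    }
}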