UDP Send and Receive socket behavior - C#

I am writing a program which will first authenticate to a server and then send and receive messages through UDP. While sending and receiving messages, I also need to maintain the connection to the server, so I need to periodically send a KeepAlive to the server and receive a response from it.
I understand it is possible to send and receive packets on the same socket at the same time, but I am concerned about the scenario where I take a while to build a packet for sending, and during that time the server sends me two messages.
When I call receive(), does the second message overwrite the first message?
Is it best practice to set up two UDP sockets for what I am trying to accomplish: one socket for sending and one socket for receiving?
Thank you,
Lex

It is unfortunate that the server, which you say you have no control over, requires UDP and has an authentication scheme. This is a serious security problem, because it is relatively easy for an attacker to pretend to be the same UDP end point as your client which has been authenticated.
As far as your specific questions go:
In a perfect world, if the server sends you two datagrams (messages) before you have tried to receive any, you will still receive both datagrams, in the order in which they were sent, because the OS has buffered them and presents them both to you in order.
The problem is that it's not a perfect world, and especially when you are dealing with UDP. UDP has three significant limitations:
Delivery of any given datagram is not guaranteed. Any datagram may be lost at any time.
Order of delivery is not guaranteed. Datagrams are not necessarily received in the same order in which they were sent.
Uniqueness of delivery is not guaranteed. Any given datagram may be delivered more than once.
So not only is it possible that the first datagram is lost when the server sends a second one before you have received the first, the first one may be lost in any case. You have no guarantee that any datagram will be delivered to you.
Now, there is also the issue of what you do with the datagram once you have received it. Unfortunately, it's not clear from your question whether this is part of what you're asking about. But naturally, once the call to ReceiveFrom() completes (or Receive() if you have used Connect() on the UDP Socket) and you have the datagram in hand, it's entirely up to your code how this is handled and whether previously-received data is overwritten or not.
It is not best practice to create two separate UDP Socket instances, one to handle receiving and one to handle sending. Not only is it unnecessary, you would have to either have some mechanism for the server to understand that two different remote endpoints actually represent the same client (since they would normally have different port numbers, even if you guaranteed they were on the same IP address), or you would have to abuse the Socket by using the "reuse address" option (which is a way to allow the same port to be used by two different Sockets, but which leads to all kinds of other issues).
It is perfectly fine to use the same Socket instance for both receiving and sending, and in fact this is how you're expected to use it. The best practice is to do it that way.
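
As a concrete illustration, here is a minimal sketch of one UDP Socket used for both directions, with a background receive loop and a periodic keep-alive. The server address, port, and message contents are placeholders, and real code would add error handling and a shutdown path:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class UdpKeepAliveClient
{
    static void Main()
    {
        // Placeholder server endpoint; substitute your real server here.
        var serverEndPoint = new IPEndPoint(IPAddress.Parse("203.0.113.10"), 9000);
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        socket.Connect(serverEndPoint); // lets us use Send()/Receive() instead of SendTo()/ReceiveFrom()

        // Receive loop on a background task. Each Receive() returns one
        // whole datagram; datagrams queued by the OS are delivered one per call.
        Task.Run(() =>
        {
            var buffer = new byte[65507]; // maximum UDP payload size
            while (true)
            {
                int received = socket.Receive(buffer);
                Console.WriteLine("Got datagram: " + Encoding.UTF8.GetString(buffer, 0, received));
            }
        });

        // Periodic keep-alive, sent on the very same socket.
        while (true)
        {
            socket.Send(Encoding.UTF8.GetBytes("KEEPALIVE")); // placeholder message format
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}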

Related

C# Socket is every one Receive corresponds to one Send?

I'm implementing a client-server app using the C# Socket class (with async methods and the SocketAsyncEventArgs class). I must implement both TCP and UDP functionality.
My usage is sending Commands in the form of Packets from server to client and vice versa, using TCP for important commands and UDP for non-important commands.
It's said that the Nagle algorithm may cause multiple Sends to be merged into one Receive for TCP packets, so I set NoDelay on the TCP Sockets (though I have never actually observed this!).
Another problem may be IP fragmentation, but I think reassembly of fragments occurs somewhere before Receive, so it should be safe to Send and Receive up to 64 KB at once!
My question: can I assume that every one Receive corresponds to one Send in 100% of cases? (I know that one Send may cause zero Receives for lost UDP packets.)
My reason for asking: should I implement (1) splitting merged packets or (2) merging split packets in Receive, or are there circumstances under which I can safely assume that every one Receive corresponds to one Send?
P.S.: I'm using Socket.ReceiveAsync for TCP and Socket.ReceiveFromAsync for UDP packets.
No, for TCP you cannot rely on this at all. One Send may require multiple Receive calls to read, and multiple Sends may be received by a single call to Receive. There is nothing you can do (in terms of settings on the connection) to prevent this, so you must be able to handle it within your application.
If you are exchanging variable-length messages over a TCP stream, the usual way to determine whether the entire message has been read is to implement some form of "framing", whereby you prefix each message with a fixed number of bytes (e.g. 4 bytes encoding an Int32) that specify the size of the following message.
On the receiving side, you first read the 4-byte prefix (remembering that you may need multiple Receives even for that!) and convert it to an Int32, which tells you how many more bytes you need to read to consume exactly one message from the stream. To read the next message, simply repeat the process.
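
A minimal sketch of that receive-side framing, assuming a connected stream Socket and a 4-byte length prefix (both ends must also agree on the byte order of the prefix):

using System;
using System.Net.Sockets;

static class Framing
{
    // Reads one length-prefixed message: a 4-byte Int32 length, then the payload.
    public static byte[] ReadMessage(Socket socket)
    {
        byte[] prefix = ReadExactly(socket, 4);
        int length = BitConverter.ToInt32(prefix, 0); // assumes matching endianness on both ends
        return ReadExactly(socket, length);
    }

    // Loops until exactly 'count' bytes have arrived, since a single
    // Receive may return only part of what was sent.
    private static byte[] ReadExactly(Socket socket, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                throw new SocketException((int)SocketError.ConnectionReset); // peer closed mid-message
            offset += read;
        }
        return buffer;
    }
}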

Deadlocked when both endpoints do Socket.SendAsync

So I have written this client/server socket application that uses the SocketAsyncEventArgs "method" for doing async sockets.
Using the same library I have used for many other applications, I now for the first time experience a situation that I never anticipated.
Our new client/server application, when started, begins to send lots of data in both directions.
When done in unit-tests using mock-objects (without delays) to mimic normal socket operations, it all works well.
But in real situations using real sockets, we get a sort of deadlock where both endpoints are stuck in a Socket.SendAsync() operation (yes, it returned true, so it was not handled synchronously).
My idea is that the receive buffers of both parties are full, and the TCP stack is no longer acknowledging any frames (connected to 127.0.0.1).
So I made the receive buffer twice as large as the send buffer, but unfortunately it is not that simple, due to the nature of our "protocol" and how we determine whether to send or receive.
I now have to re-think the method that determines when to start sending and when to start receiving.
A complicating factor is that the purpose of this connection is to multiplex multiple bi-directional, general-purpose communication channels over this one socket connection. That means there is no pre-determined sequence of communication; all channels may have their own protocols.
Of course, there is a TLS initiation, handshake and authentication, which all work well, but once the connection becomes operational and the channels start their own communications, the only sure thing is that received data has a size and a channel number as a header.
After each operation, I check whether there is any data waiting in the receive buffer, by checking Socket.Available.
Combining this with measuring how much data has been received since the last send operation, and how full the transmit buffer is getting, I decide whether to receive more, to start sending, or to do nothing and poll again in xx ms.
I now realize that this is wrong.
Am I trying to accomplish something that is simply not possible using only one socket connection?
Has anyone ever tried to accomplish something similar, or does anyone know a safe approach that does not introduce these odd lock-ups?
Thanks,
Theo.
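
One common remedy for this class of deadlock is to never leave the socket without a pending receive: if each side keeps draining its receive buffer regardless of what its send side is doing, the peer's sends keep getting acknowledged and neither end can stall the other. A minimal sketch of that idea with SocketAsyncEventArgs follows; the buffer size is arbitrary, and parsing of the size/channel-number header is omitted:

using System;
using System.Net.Sockets;

class AlwaysReceiving
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _receiveArgs = new SocketAsyncEventArgs();

    public AlwaysReceiving(Socket socket)
    {
        _socket = socket;
        _receiveArgs.SetBuffer(new byte[8192], 0, 8192);
        _receiveArgs.Completed += (s, e) => OnReceiveCompleted(e);
        StartReceive();
    }

    private void StartReceive()
    {
        // ReceiveAsync returns false when the operation completed synchronously,
        // in which case Completed will not fire and we handle it ourselves.
        // (Production code would loop instead of recursing, to bound the stack.)
        if (!_socket.ReceiveAsync(_receiveArgs))
            OnReceiveCompleted(_receiveArgs);
    }

    private void OnReceiveCompleted(SocketAsyncEventArgs e)
    {
        if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
            return; // connection failed or was closed

        // Hand the bytes to a per-channel queue here (header parsing omitted),
        // then post the next receive immediately so the socket is never left
        // without a pending read, no matter what the send side is doing.
        StartReceive();
    }
}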

UDP Socket: How to determine when the available bytes are a full datagram

I have a .NET Socket (System.Net.Sockets.Socket) which I want to use for UDP (connectionless communication). I want to receive datagrams by polling; that is, I want to check the Available property to see whether a datagram has arrived. If it has, I call Receive to read it without blocking. If it has not, I wait and poll again later. But now my problem is the following:
Available only returns how many bytes can be read without blocking. It does not tell me whether those bytes are enough to form a full datagram. I do not know how big the datagrams I receive will be, so I cannot hardcode this check against a certain number.
How can I determine when one datagram ends and the next starts?
As Philip says, with UDP you either get the entire datagram, or not at all. If the socket reports data is available, it should be the entire amount of a datagram (or some combination of datagrams).
That said, in your post you say you want to use polling to receive the data. That's a poor choice for any implementation. .NET's network I/O has very good asynchronous models you can use (including the awaitable UdpClient.ReceiveAsync() together with the async/await feature of C# 5), which would be a much better choice.
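
For instance, a minimal sketch of an asynchronous receive loop with UdpClient (the local port number is a placeholder); each await completes with exactly one whole datagram, so the question of "enough bytes" never arises:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncUdpReceiver
{
    static async Task Main()
    {
        using (var udp = new UdpClient(9000)) // placeholder local port
        {
            while (true)
            {
                // Completes when one full datagram has arrived; no polling needed.
                UdpReceiveResult result = await udp.ReceiveAsync();
                Console.WriteLine(result.Buffer.Length + " bytes from " + result.RemoteEndPoint);
            }
        }
    }
}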
In addition, you should be very sure that UDP is in fact the protocol you want to use. It is inappropriate for nearly all run-of-the-mill network applications, due to its unreliability. There is no guarantee that:
Any data you send will be received. Datagrams are permitted to be dropped by the network.
The data that is received will be received in the same order in which it was sent. Delivery of datagrams may be reordered by the network.
The data that is received will be unique. The network is permitted to deliver the same datagram multiple times.
For most network applications, the business layer needs to be protected against these types of failures, which means adding your own layer of reliability between the business layer and the network layer. I.e. reinventing TCP.
It is much easier to impose a messaging paradigm on TCP than it is to impose a reliable paradigm on UDP, and for this reason it is in most cases more appropriate to use TCP for all network I/O than UDP.

c# and networking - Client listening and server send and most efficient C# socket networking?

I'm working on a game that depends on the standard System.Net.Sockets library for networking. What's the most efficient and standardized "system" I should use? Should the client send data requests every set amount of seconds, when a certain event happens? My other question, is a port forward required for a client to listen and receive data? How is this done, is there another socket created specifically for listening only on the client? How can I send messages and listen on the same socket on the client? I'm having a difficult time grasping the concept of networking, I started messing with it two days ago.
Should the client send data requests every set amount of seconds, when a certain event happens?
No. Send your message as soon as you can. The socket stack has algorithms that determine when data is actually sent. For instance the Nagle algorithm.
If you send a LOT of messages, though, it can be beneficial to enqueue everything into the same socket method call. However, you would need to send several thousand messages per client per second for that to give you any benefit.
My other question, is a port forward required for a client to listen and receive data?
No. Once a socket connection has been established, it's bidirectional, i.e. both end points can send and receive information without screwing something up for the other end point.
But to achieve that you typically have to use asynchronous operations so that you can keep receiving all the time.
How is this done, is there another socket created specifically for listening only on the client?
The server has a dedicated socket (a listener) whose only purpose is to accept client sockets. When the listener has accepted a new connection from a remote end point, you get a new socket object which represents the connection to the newly connected endpoint.
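
A minimal sketch of that accept pattern, using TcpListener with a placeholder port:

using System;
using System.Net;
using System.Net.Sockets;

class AcceptLoop
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // placeholder port
        listener.Start();
        while (true)
        {
            // Blocks until a client connects, then returns a brand-new
            // Socket representing that one connection.
            Socket client = listener.AcceptSocket();
            Console.WriteLine("Accepted " + client.RemoteEndPoint);
            // Hand 'client' off to per-connection receive logic here.
        }
    }
}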
How can I send messages and listen on the same socket on the client?
The easiest way is to use asynchronous receives and blocking sends.
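
A minimal sketch of that client-side pattern, with one receive always pending while Send is called directly from game code (the host, port, and message format are placeholders):

using System;
using System.Net.Sockets;
using System.Text;

class GameClient
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[4096];

    public GameClient(string host, int port)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        _socket.Connect(host, port);
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int received = _socket.EndReceive(ar);
        if (received == 0) return; // server closed the connection
        Console.WriteLine(Encoding.UTF8.GetString(_buffer, 0, received));
        // Re-arm the receive so we keep listening on the same socket.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    public void Send(string message)
    {
        // Blocking send on the same socket the receives run on. Remember
        // that TCP has no message boundaries, so real code would add framing.
        _socket.Send(Encoding.UTF8.GetBytes(message));
    }
}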
If you do not want to take care of everything by yourself, you can try my Apache licensed library http://sharpmessaging.net.
Creating a stable, high quality server will require you to have a wealth of knowledge on networking and managing your objects.
I highly recommend you start with something smaller before attempting to create your own server from scratch, or at the very least play around with an already-made server for a different game and attempt to improve upon it or add new features.
That being said, there are a few ways you can set up the server. If you plan on having more than a couple of clients, you generally don't want them all to send data whenever they feel like it, as this can bog down the server; you want to structure it so that the client sends as little data as possible on a scheduled basis, and the server can request more when it's ready. How that's set up and structured is up to you.
A server generally has to have a port forwarded on the router in order for requests to make it to the server from the internet, and here is why. When your computer makes a connection to a website (Stack Overflow, for example), it sends out a request on a random port; the router remembers which port you sent out on and who sent it (you), so when the site sends back the information you requested, the router knows you wanted that data and passes it back to you. In the case of RUNNING a server, there is no outbound request to a client (Jack, for example), so the router doesn't know where Jack's request is supposed to go. By adding a port-forwarding rule to the router, you're saying that all traffic arriving on port 25565 (for example) is supposed to go to your server.
Clients generally do not need to forward ports because they are only making outbound requests and receiving data.
Server Starts, server starts listening on port 25565
Client starts, client connects to server on port 25565 and initiates a connection
Server responds to client on whatever port the client used to connect (this is done behind the scenes in sockets)
Communication continues from here.

Multiple threads writing to the same socket problem

My program uses sockets for inter-process communication. There is one server listening on a socket port (B) on localhost, waiting for a list of TCP clients to connect. On the other end of the server is another socket (A) that sends data out to the internet. The server is designed to take everything the TCP clients send it and forward it to a server on the internet. My question is: if two of the TCP clients happen to send data at the same time, is this going to be a problem for the server's outgoing socket (A)?
Thanks
The MSDN docs recommend that you use BeginSend and EndSend if multiple threads will be using the same socket to transmit data.
So I would suggest that you either use those methods, or write outgoing data to a synchronized queue from which a single thread then picks data off and sends it over socket (A).
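
A minimal sketch of that queue approach, assuming socket (A) is already connected; the class and member names here are made up for illustration:

using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class SerializedSender
{
    private readonly BlockingCollection<byte[]> _outgoing = new BlockingCollection<byte[]>();
    private readonly Socket _socketA;

    public SerializedSender(Socket socketA)
    {
        _socketA = socketA;
        new Thread(SendLoop) { IsBackground = true }.Start();
    }

    // Safe to call from any client-handling thread.
    public void Enqueue(byte[] message) => _outgoing.Add(message);

    private void SendLoop()
    {
        // GetConsumingEnumerable blocks until an item is available, so this
        // single thread is the only one that ever touches socket (A).
        foreach (byte[] message in _outgoing.GetConsumingEnumerable())
        {
            _socketA.Send(message); // one complete message at a time
        }
    }
}

Because only one thread ever calls Send, data from different clients can never interleave mid-message; as noted below, you would still need framing or per-client headers so the remote end can tell which bytes belong to which conversation.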
You don't describe how you multiplex the traffic of multiple client streams onto a single outgoing stream. Just arbitrarily putting chunks of client traffic into the stream is guaranteed not to work: the receiving end on the opposite side of the intertube will have no idea which bytes belong to which conversation.
I'd recommend you focus on the opposite end first. What machine is out there, what does it do, what does it need to know about the multiple clients at the local end.
