C# Server - TCP/IP Socket Efficiency

Good day all!
I am working on an open-source server for a game that is closed source - the game operates over TCP/IP sockets (instead of UDP, doh...). So, being stuck with a connection-based protocol, I am constrained to work with it.
My current program structure (dumbed down):
Core Thread
Receive new connection and create a new client object.
Client Object
IOloop (runs on its own thread)
Get data from socket, process packets. (one packet at a time)
Send data buffered from other threads (one packet at a time)
The client will send data immediately (no delay), since it runs on its own thread.
I am noticing a huge flaw with the program: it sends data very slowly, because I send data packet by packet, synchronously (Socket.Send(byte[] buffer)).
I would like to send data almost immediately, without delay - asynchronously.
I have tried creating a new thread every time I wanted to send a packet (so each packet sends on its own managed thread), but this was a huge mess.
My current system uses synchronous sending with the Nagle algorithm disabled - but this has the flaw of bottlenecking: send one packet, the send operation blocks until TCP confirms, then send the next... I can easily issue 10 packets every 100 ms, and if each packet takes 400 ms to send, this backs up and breaks. Of course, on localhost I don't get this issue.
So, my question: what would be the best way to send multiple small packets of data? I am thinking of merging the data to send at the end of every IO thread loop into one large byte buffer, to reduce the back-and-forth delay - but the obvious problem is that this undermines the Nagle-algorithm avoidance that I had hoped would stop the delays.
How does a synchronous send work? Does it block until the data is confirmed as correctly received by the recipient, as I believe? Is there a way to do this without waiting for confirmation? I do understand that the packets must all arrive in order and intact (as per the protocol specification).

I have done some research and I'm now using the current system:
Async send/receive operations with no threading, using AsyncCallbacks.
So I have the server start a read operation, then call back until it is done; when a packet is received, it is processed, and then a new read operation begins...
This method drastically reduces system overheads - thread-pool usage, memory, etc.
I have used some methods from the following and customised to my liking: https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
Very efficient - I highly recommend this approach. For any TCP/IP network application low latency is very important, and async callbacks are the best way to achieve it.
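For reference, a minimal sketch of that callback loop (ClientState, BeginRead and HandlePacket are illustrative names of mine, not from the MSDN sample):

using System;
using System.Net.Sockets;

class ClientState
{
    public Socket Socket;
    public byte[] Buffer = new byte[8192];
}

static class Receiver
{
    public static void BeginRead(ClientState state)
    {
        // Kick off an async read; the callback fires when data arrives.
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
            SocketFlags.None, ReadCallback, state);
    }

    static void ReadCallback(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int bytesRead = state.Socket.EndReceive(ar);
        if (bytesRead > 0)
        {
            // Process whatever complete packets are in the buffer,
            // then immediately post the next read.
            HandlePacket(state.Buffer, bytesRead);
            BeginRead(state);
        }
        // bytesRead == 0 means the peer closed the connection.
    }

    static void HandlePacket(byte[] buffer, int length)
    {
        // Protocol-specific packet parsing goes here.
    }
}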

Related

Deadlocked when both endpoints do Socket.SendAsync

So I have written this client/server socket application that uses the SocketAsyncEventArgs "method" for doing async sockets.
Using the same library I have used for many other applications, I am now for the first time experiencing a situation that I never anticipated.
Our new client/server application, when started, begins to send lots of data in both directions.
When tested in unit tests using mock objects (without delays) to mimic normal socket operations, it all works well.
But in real situations using real sockets, we get a sort of deadlock where both endpoints are stuck in a Socket.SendAsync() operation (yes, it returned true, so it was not handled synchronously).
My idea is that the receive buffers of both parties are full, and the TCP stack is not acknowledging any frames anymore (connected to 127.0.0.1).
So I made the receive buffer twice as large as the send buffer, but unfortunately it is not that simple, due to the nature of our "protocol" and how we determine when to send or receive.
I now have to re-think the method that determines when to start sending and when to start receiving.
A complicating factor is that the purpose of this connection is to multiplex multiple bi-directional, general-purpose communication channels over one socket connection. That means there is no pre-determined sequence of communication; all channels may have their own protocols.
Of course, there is a TLS initiation, handshake and authentication, which all work well, but once the connection becomes operational and the channels start their own communications, the only sure thing is that received data carries a size and a channel number as a header.
After each operation, I check whether there is any waiting data in the receive buffer by checking Socket.Available.
Combining this with measuring how much data was received since the last send operation, and how full the transmit buffer is getting, I decide whether to receive more, to start sending, or to do nothing and poll again in xx ms.
I now realize that this is wrong.
Am I trying to accomplish something that is simply not possible using only one socket connection?
Has anyone ever tried to accomplish something similar, or does anyone know a safe approach that does not introduce these odd lock-ups?
Thanks,
Theo.

.Net Socket (UDP) Sending, Receiving and Scheduling

I am currently working on a personal project for learning purposes. I want to make a connection over UDP, for applications such as games. Each datagram sent has a specific header that indicates which "logic" channel it belongs to - for example, channel 0 is just like plain UDP with the extra header overhead, while channel 1 uses more headers to bring some extra reliability. The channels' objective is to "automatically" separate messages into logical groups, up to a specific amount.
In my current code, there is a simple loop in a separate thread that handles sending and receiving:
// This is pseudo code
public void Tick() {
    if (socket.Poll(0, SelectMode.SelectRead)) {
        do {
            ReadMessage();
        } while (socket.Available > 0);
    }
    SendQueuedOutgoingMessages();
}
Though this works in an ideal world, I have the feeling that this logic fails when there are too many incoming or outgoing messages. Is it possible to use the same socket to send and receive messages simultaneously (i.e. send and receive asynchronously or on different threads)? Even if it is possible, would it be better to simply use two or more UDP sockets (or mix TCP and UDP sockets, if I need reliability), especially with maintainability in mind?
The most direct alternative I can think of would be a scheduling algorithm to control how many messages to read and send, by means of queue sizes or other factors, but this feels poor and inflexible for this situation.
Edit: Adding more information about the code.
The Tick() method is set to be called a specific number of times per second when it returns immediately - for example, 30 times per second if no new incoming or outgoing messages exist, and fewer if it needs some time to send or receive data. I've used the blocking ReceiveFrom and SendTo methods to avoid busy waiting or calls such as Sleep(0).
Though I treat incoming messages immediately, I use an outgoing message queue to support the channels idea - each channel has its own priority, down to 'no priority', affecting its bandwidth share over time in both quiet and busy moments.
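A simplified sketch of that per-channel priority idea (illustrative names, not my real code):

using System.Collections.Generic;
using System.Linq;

class Channel
{
    public int Priority;                        // higher = drained first
    public Queue<byte[]> Outgoing = new Queue<byte[]>();
}

static byte[] NextMessageToSend(List<Channel> channels)
{
    // Pick the highest-priority channel that has something queued;
    // a real implementation would also weight bandwidth share over time.
    var next = channels
        .Where(c => c.Outgoing.Count > 0)
        .OrderByDescending(c => c.Priority)
        .FirstOrDefault();
    return next?.Outgoing.Dequeue();
}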
Whether to use two sockets (one for receiving, one for sending) or just a single one depends on the situation. If you are going to send a lot of messages, then even on a dedicated thread, the socket might block if the outgoing queue used by the socket becomes full.
There are several solutions to this problem. Using two sockets and two distinct threads is one of them; using select in combination with non-blocking sockets is another. The point is that you don't want to stop receiving just because a send might block.
Each of these possible solutions has its own complexity.
The select API is meant to check whether there is something to receive on a certain socket, but you can also use it to detect when a socket becomes writable again. You need a socket option to put the socket in a non-blocking state, and you need to check for EWOULDBLOCK return codes on each send. If you get one, the send has failed and you have to queue the message yourself.
You don't really send and receive at the same time; it happens sequentially. You use select to check whether a socket is writable and readable by using two bitmasks manipulated with the fd_set API. You can even use select on multiple sockets at once. When select, which is a blocking call, returns, you can check each individual bit to see which actions need to be performed.
The reason a send can block, if the socket is not put in a non-blocking state, is that the socket's output queue can be full. A blocking socket simply waits for the queue to become ready again, but during that wait you cannot receive anything on that same socket. This is the very reason why you need non-blocking sockets and the select API, in combination with some kind of queueing mechanism of your own.
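In C# this maps to Socket.Select plus a non-blocking socket. A rough sketch (illustrative names; it assumes socket.Blocking has been set to false and that a ReadMessage routine and an outgoing queue exist):

using System.Collections.Generic;
using System.Net.Sockets;

static void PumpOnce(Socket socket, Queue<byte[]> outgoing)
{
    var readable = new List<Socket> { socket };
    // Only ask about writability when something is actually queued.
    var writable = outgoing.Count > 0 ? new List<Socket> { socket } : new List<Socket>();

    // Select blocks until the socket is readable/writable or the timeout
    // (in microseconds) expires; it trims the lists to the ready sockets.
    Socket.Select(readable, writable, null, 100000);

    if (readable.Count > 0)
        ReadMessage(socket);                // guaranteed not to block now

    if (writable.Count > 0 && outgoing.Count > 0)
    {
        try
        {
            socket.Send(outgoing.Peek());
            outgoing.Dequeue();             // dequeue only after a successful send
        }
        catch (SocketException ex) when (ex.SocketErrorCode == SocketError.WouldBlock)
        {
            // Output queue filled up again; keep the message queued and retry next pass.
        }
    }
}

static void ReadMessage(Socket socket)
{
    // Protocol-specific receive goes here.
}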
Why not simply use the standard read loop setup?
while (true)
ReadMessage();
There is no scheduling or throttling necessary. It is not necessary to know whether a packet is ready or not.
You can read and write simultaneously on the same UDP socket.
There is no need for an outgoing queue, either. Just send.

Receive streaming of UDP packets sent at fixed, clocked rate

I have the requirement of receiving a stream of UDP packets at a fixed rate of 1000 packets per second. Each packet is composed of 28 bytes of payload, where the first four bytes (a UInt32) are the packet sequence number. The sender is an embedded device on the same local network, and addresses and ports are mutually known. I am not expected to receive every packet, but the host should be designed so that it doesn't add to the intrinsic limitations of the UDP protocol.
I have only limited previous experience with UDP and Sockets in general (just casual streaming from sensor app on android phone to PC, for a small demo project).
I am not sure what the most sensible way is to receive all the packets at such a fast rate. I imagine some sort of loop, or else some sort of self-retriggering receive operation. I have studied the difference between synchronous and asynchronous receives, and also the Task-based Asynchronous Pattern (TAP), but I don't feel very confident yet. It looks like a simple task, but understanding it is still complicated for me.
So the questions are:
Is the UdpClient class suitable for a scenario like this, or am I better off going with the Socket class (or another one, for that matter)?
Should I use synchronous or asynchronous receiving? If I use synchronous, I am afraid of losing packets, and if I use asynchronous, how should I clock/throttle the receive operations so that I don't start them at too high a rate?
I thought about a tight loop testing UdpClient.Available, like below. Would that be a good design choice?
while (running)
{
    if (udpSocket.Available > 0)
    {
        // do something (what?)
    }
}
Just read from your UdpClient as fast as you can (or as fast as is convenient for your application) using UdpClient.Receive(). The Receive() operation will block (wait) if there is no incoming packet, so calling Receive() three times in a row will always return you three UDP packets.
Generally, when receiving UDP packets you do not have to worry about the sending rate. Only the sender needs to worry about the sending rate. So the opposite advice for the sender is: Do not send a lot of UDP packets as fast as you can.
So my answers would be:
Yes, use UdpClient.
Use synchronous receiving. Call UdpClient.Receive() in a loop running in your receive thread, or in the main loop of your program, whatever you like.
You do not need to check for available data. UdpClient.Receive() will block until there is data available.
Do not be afraid of losing packets when receiving UDP: you will almost never lose UDP packets on the receiving host because of something you did wrong while receiving. UDP packets mostly get dropped by network components like routers somewhere on the network path, which cannot forward UDP packets as fast as they receive them.
Once the packets arrive at your machine, your OS does a fair amount of buffering (for example 128k on Linux), so even if your application is unresponsive for say 1 second it will not lose any UDP packets.
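A minimal sketch of such a blocking receive loop (the local port is a placeholder, and the code assumes the sequence number is little-endian on the wire):

using System;
using System.Net;
using System.Net.Sockets;

static void ReceiveLoop()
{
    var client = new UdpClient(5000);           // placeholder local port
    var remote = new IPEndPoint(IPAddress.Any, 0);
    uint lastSequence = 0;

    while (true)
    {
        byte[] datagram = client.Receive(ref remote);   // blocks until a packet arrives
        if (datagram.Length < 28)
            continue;                                   // ignore malformed packets

        // The first four bytes carry the sequence number.
        uint sequence = BitConverter.ToUInt32(datagram, 0);
        if (lastSequence != 0 && sequence != lastSequence + 1)
        {
            // Gap in sequence numbers: packets were dropped somewhere upstream.
        }
        lastSequence = sequence;
        // ... process the remaining 24 payload bytes ...
    }
}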

TCP Socket buffer and Data Overflow

My application is acting as a client for a bank's server, sending requests to and getting responses from that server. After a response is received, processing is done in a database, which takes some time. So if the server sends a response in 0.5 seconds and the DB operation takes 1 second, only after that does my application try to receive data from the server again through BeginReceive. Will the incoming data be accumulated somewhere in the meantime, and if so, where is it stored? Is there some limit at which the data will overflow, and if that happens, will the socket be closed? I am declaring my socket buffer size as 1024. If anyone has an article that clears up these doubts, please share it.
Can you control what the server is sending to you? In most cases, when the receiver operates on the received data, sending an application-level ACK upon finishing the work will let the sender know when to send the next request. This ensures no data is lost (since TCP will make sure it does not get lost in the network).
If you can't change the way the server sends you data, you can consider running the receiver in a different thread, where it will save every incoming request to a cache (either in RAM only, or on disk). Then a worker thread (or multiple threads) will read requests from that cache and do the work you need. This way you will have full control over the buffering of the data.
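A sketch of that receiver-plus-worker split, using a BlockingCollection as the cache (my choice here; ReceiveOneRequest and ProcessInDatabase are stand-ins for your own code):

using System.Collections.Concurrent;
using System.Threading.Tasks;

static class BufferedReceiver
{
    static readonly BlockingCollection<byte[]> Pending = new BlockingCollection<byte[]>();

    public static void Start()
    {
        // Receiver: never blocked by the database, so the TCP window stays open.
        Task.Run(() =>
        {
            while (true)
                Pending.Add(ReceiveOneRequest());
        });

        // Worker: takes requests at its own pace and does the slow (~1 s) database call.
        Task.Run(() =>
        {
            foreach (byte[] request in Pending.GetConsumingEnumerable())
                ProcessInDatabase(request);
        });
    }

    static byte[] ReceiveOneRequest() { /* read one framed request from the socket */ return null; }
    static void ProcessInDatabase(byte[] request) { /* the ~1 second DB operation */ }
}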
I think you will read the data in chunks yourself with the Socket - won't you? Then just don't read more than you can handle. The sender should do the same: if your input buffer overflows, the sender should wait before sending more (though there may be an error in that case). If this is really a big issue, just start by downloading the data into a file on your disk and process it after you have got all of it. I don't think your HDD will be slower than your network :)

C#: Question about socket programming (sync or async)

I'm writing an instant messaging server in C# for learning purposes.
My question is whether I should use synchronous or asynchronous sockets to handle the IM clients. The goal is to handle as many clients as possible.
I'm not quite sure, but as far as I know, with async sockets the packets don't arrive in order, which means that when you send 2 chat messages and there is delay/lag, it's possible that the second one arrives before the first one. Is this right, and if so, is there a way to solve this issue?
About sync sockets: are synchronous sockets a good solution for many clients? Do I have to check every socket/connection in a loop for new packets? If so, isn't that quite slow?
Last question: Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading?
The goal is to handle as many clients as possible.
Async then. It scales a lot better.
I'm not quite sure, but as far as I know, with async sockets the packets don't arrive in order, which means that when you send 2 chat messages and there is delay/lag, it's possible that the second one arrives before the first one.
TCP guarantees that everything arrives in order.
Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading
I recommend that you use a separate connection for file transfers. Use the first connection to do a handshake (determine which port to use, specify the file name, etc.). Then use Socket.SendFile on the new socket to transfer the file.
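A bare-bones sketch of that transfer side (the address, port and file path are placeholders agreed during the handshake):

using System.Net;
using System.Net.Sockets;

var fileSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
fileSocket.Connect(new IPEndPoint(IPAddress.Parse("192.0.2.10"), 9001));

// SendFile streams the file through the OS; meanwhile the original
// chat connection stays free for ordinary messages.
fileSocket.SendFile(@"C:\images\photo.png");
fileSocket.Shutdown(SocketShutdown.Send);
fileSocket.Close();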
Everything @jgauffin said (i.e. TCP handles packet order; async is better for n(clients) > 1000).
Assume I want to implement a way to send files (e.g. images) through the protocol (which is a non-standard binary protocol btw), can I still send messages while uploading?
Your custom protocol has to be built to support this. If you write an 8 MB packet to the Socket, you won't be able to write anything else using that socket until the 8 MB are sent. Instead, use upload chunks of a smaller size so that other packets have a chance to go over the pipe as well.
[UPLOAD id=123 START length=8012389]
[UPLOAD id=123 PART chunk=1 length=2048 data=...]
[UPLOAD id=123 PART chunk=2 length=2048 data=...]
[MESSAGE to="foo@example.com" text="Hi"]
[UPLOAD id=123 PART chunk=3 length=2048 data=...]
// ...
[UPLOAD id=123 COMPLETE checksum=0xdeadbeef]
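A rough sketch of the sending side (SendFrame, the Build* helpers and Checksum are hypothetical stand-ins for your protocol's serialization):

const int ChunkSize = 2048;

void UploadFile(byte[] file, int uploadId)
{
    SendFrame(BuildUploadStart(uploadId, file.Length));
    int chunk = 1;
    for (int offset = 0; offset < file.Length; offset += ChunkSize, chunk++)
    {
        int length = Math.Min(ChunkSize, file.Length - offset);
        SendFrame(BuildUploadPart(uploadId, chunk, file, offset, length));
        // Between chunks, queued chat frames get their turn on the socket,
        // so a large file never monopolizes the connection.
    }
    SendFrame(BuildUploadComplete(uploadId, Checksum(file)));
}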
The difference between an async approach and a sync approach is really the difference between non-blocking and blocking IO. With both approaches, the data is delivered in the same order it was transmitted. However, you don't block while waiting for an async call to complete, so you can start transmitting to all of your clients before any of the individual writes to a socket has finished (which is why this is typically the approach servers follow).
If you go down the sync route, you block until each transmit/receive operation has completed, which means you may need to run multiple threads to handle the clients.
As far as uploading an image at the same time as sending messages goes, you may want to handle that over a separate connection between the client and server so that it doesn't cause a blockage.
