How to write a scalable .NET UDP server - C#

I need to write a very high-load UDP server. I'm using .NET. How do I use the Socket class to achieve this?
I am familiar with the Winsock API and completion ports; what I would do there is use several threads to accept sockets via a completion port, and receive in the same manner.
My server needs to process a lot of small UDP packets very quickly, and I want to receive them asynchronously. How do I do this in .NET?
I thought of calling BeginReceive several times, but that seems silly...
If anyone has a good .NET example for this it would of course help a lot.

What I have found to minimize dropped packets is to read from the socket asynchronously, as you mentioned, but put the bytes read into a thread-safe queue, then have another thread read off the queue and process the bytes. If you are using .NET 4.0 you could use ConcurrentQueue<T>:
public class SomeClass {
    ConcurrentQueue<IList<Byte>> _Queue;
    Byte[] _Buffer;
    ManualResetEvent _StopEvent;
    AutoResetEvent _QueueEvent;

    private void ReceiveCallback(IAsyncResult ar) {
        Socket socket = ar.AsyncState as Socket;
        Int32 bytesRead = socket.EndReceive(ar);

        // Copy only the bytes that were actually received before the buffer is reused.
        Byte[] bufferCopy = new Byte[bytesRead];
        Array.Copy(_Buffer, bufferCopy, bytesRead);
        _Queue.Enqueue(bufferCopy);   // Byte[] implements IList<Byte>
        _QueueEvent.Set();

        // Re-arm the receive unless we are shutting down (arguments elided here).
        if (!_StopEvent.WaitOne(0)) socket.BeginReceive(...);
        return;
    }

    private void ReadReceiveQueue() {
        WaitHandle[] handles = new WaitHandle[] { _StopEvent, _QueueEvent };
        Boolean loop = true;
        while (loop) {
            Int32 index = WaitHandle.WaitAny(handles);
            switch (index) {
                case 0: // _StopEvent
                    loop = false;
                    break;
                case 1: // _QueueEvent
                    // Dequeue logic here
                    break;
                default:
                    break;
            }
        }
    }
}
Note: the _StopEvent is a ManualResetEvent so that both the ReceiveCallback and ReadReceiveQueue methods can use the same event to shut down cleanly.
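For what it's worth, the elided dequeue branch above could be filled in roughly like this (ProcessPacket is just a placeholder for whatever handles one datagram's bytes; it is not part of the original answer):

    // Possible dequeue logic for the _QueueEvent case: drain everything queued, then wait again.
    IList<Byte> packet;
    while (_Queue.TryDequeue(out packet))
    {
        ProcessPacket(packet); // hypothetical handler
    }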

If you only have a single socket and you can process UDP packets independently of each other, then the best approach would actually be to use a thread pool where each thread invokes a blocking Receive. The OS will take care of waking up one of the waiting threads to receive/process the packet. That way you can avoid any overhead introduced by the async-I/O routines.
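A minimal sketch of that approach, assuming a made-up port (9000), a 1500-byte datagram buffer, and a hypothetical ProcessPacket handler:

Socket udpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udpSocket.Bind(new IPEndPoint(IPAddress.Any, 9000));

for (int i = 0; i < Environment.ProcessorCount; i++)
{
    Thread worker = new Thread(() =>
    {
        byte[] buffer = new byte[1500];                      // one buffer per thread
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            // Blocking receive; the OS wakes one of the waiting threads per datagram.
            int bytesRead = udpSocket.ReceiveFrom(buffer, ref remote);
            ProcessPacket(buffer, bytesRead, remote);        // hypothetical handler
        }
    });
    worker.IsBackground = true;
    worker.Start();
}

Each thread owns its own buffer, so no synchronization is needed around the receive itself.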


C# Async Sockets - Thread Logic

How does the thread creation logic behind Socket.BeginSend, Socket.BeginReceive, Socket.BeginAccept, etc. work?
Is it going to create a new thread for each client that connects to my server, or only one thread per function (accept, receive, send...) no matter how many clients are connected to the server - in which case the accept code for client 2 would only execute once the accept code for client 1 has completed, and so on?
This is the code I made and I am trying to understand the logic behind it better:
public class SocketServer
{
    Socket _serverSocket;
    List<Socket> _clientSocket = new List<Socket>();
    byte[] _globalBuffer = new byte[1024];

    public SocketServer()
    {
        _serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    }

    public void Bind(int Port)
    {
        Console.WriteLine("Setting up server...");
        _serverSocket.Bind(new IPEndPoint(IPAddress.Loopback, Port));
    }

    public void Listen(int BackLog)
    {
        _serverSocket.Listen(BackLog);
    }

    public void Accept()
    {
        _serverSocket.BeginAccept(AcceptCallback, null);
    }

    private void AcceptCallback(IAsyncResult AR)
    {
        Socket socket = _serverSocket.EndAccept(AR);
        _clientSocket.Add(socket);
        Console.WriteLine("Client Connected");
        socket.BeginReceive(_globalBuffer, 0, _globalBuffer.Length, SocketFlags.None, ReceiveCallback, socket);
        Accept();
    }

    private void ReceiveCallback(IAsyncResult AR)
    {
        Socket socket = AR.AsyncState as Socket;
        int bufferSize = socket.EndReceive(AR);
        string text = Encoding.ASCII.GetString(_globalBuffer, 0, bufferSize);
        Console.WriteLine("Text Received: {0}", text);

        string response = string.Empty;
        if (text.ToLower() != "get time")
            response = $"\"{text}\" is an Invalid Request";
        else
            response = DateTime.Now.ToLongTimeString();

        byte[] data = Encoding.ASCII.GetBytes(response);
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, SendCallback, socket);
        socket.BeginReceive(_globalBuffer, 0, _globalBuffer.Length, SocketFlags.None, ReceiveCallback, socket);
    }

    private void SendCallback(IAsyncResult AR)
    {
        (AR.AsyncState as Socket).EndSend(AR);
    }
}
These kinds of asynchronous methods use threads from the thread pool to invoke your callback once the underlying event, whatever it may be, occurs. In your case, the underlying event might be that a connection was established, or that you received some data.
When you set a socket to 'accept', no thread needs to exist. The old synchronous way of doing things was to have one thread that just blocks on socket.Accept() until a connection comes in, but the point of these Begin..() methods is to do away with that.
Here's a trick, one that .NET uses and one that you can use too: you can register any WaitHandle object (a synchronization primitive such as a Semaphore, Mutex, AutoResetEvent, or ManualResetEvent) and a callback method with the thread pool, such that when the WaitHandle becomes signaled, the thread pool will pick a thread, run your callback, and return the thread to the pool. See ThreadPool.RegisterWaitForSingleObject().
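As a rough illustration of that mechanism (the event and callback here are made up for the example; they are not anything the socket classes expose):

AutoResetEvent dataReady = new AutoResetEvent(false);

// When dataReady becomes signaled, the thread pool runs the callback on one of its threads.
ThreadPool.RegisterWaitForSingleObject(
    dataReady,
    (state, timedOut) => Console.WriteLine("Ran on a pool thread, timed out: " + timedOut),
    null,              // state
    Timeout.Infinite,  // no timeout
    false);            // keep the registration alive after the first callback

dataReady.Set();       // a pool thread will now pick up and invoke the callback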
Turns out many of these Begin..() methods basically do the same thing. BeginAccept() uses a WaitHandle to know when a socket has received a connection - it registers the WaitHandle with the ThreadPool and then calls your callback on a ThreadPool thread when a connection occurs.
Every time you call Begin...() and provide a callback, you should assume that your callback method could be invoked on a new thread, simultaneously with every other Begin...() call you've ever made that's still outstanding.
Call BeginReceive() 50 times on 50 different sockets? You should assume 50 threads could try to invoke your callback method at the same time. Call a mix of 50 BeginReceive() and BeginAccept() methods? 50 threads.
In reality, how many simultaneous invocations of your callbacks occur will be limited by the policy of the ThreadPool, e.g. how fast it may create new threads, how many threads it keeps alive and ready to go, etc.
With that, you should understand that calling BeginReceive() on 50 different sockets, but passing in the same buffer - _globalBuffer - means that 50 sockets are going to write to that same buffer and just make a mess of it, resulting in arbitrary/corrupted data.
Instead, you should use a unique buffer per simultaneous BeginReceive() call. What I would recommend doing is creating a new class to store the context of a single connection - the socket for the connection, the buffer to use for reading, its state, etc. Every new connection gets a new context instance.
...
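A rough sketch of that per-connection context, adapting the AcceptCallback/ReceiveCallback from the question (ClientContext is an illustrative name, and the request/response handling is elided):

public class ClientContext
{
    public Socket Socket;
    public byte[] Buffer = new byte[1024];   // a unique buffer per connection
}

private void AcceptCallback(IAsyncResult AR)
{
    Socket socket = _serverSocket.EndAccept(AR);
    ClientContext context = new ClientContext { Socket = socket };
    _clientSocket.Add(socket);
    Console.WriteLine("Client Connected");

    // Pass the per-connection context - not a shared buffer - to the receive.
    socket.BeginReceive(context.Buffer, 0, context.Buffer.Length, SocketFlags.None, ReceiveCallback, context);
    Accept();
}

private void ReceiveCallback(IAsyncResult AR)
{
    ClientContext context = (ClientContext)AR.AsyncState;
    int bytesRead = context.Socket.EndReceive(AR);
    string text = Encoding.ASCII.GetString(context.Buffer, 0, bytesRead);

    // ...build and BeginSend the response as before...

    // Re-arm the receive with the same context.
    context.Socket.BeginReceive(context.Buffer, 0, context.Buffer.Length, SocketFlags.None, ReceiveCallback, context);
}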
FYI, the modern way of performing asynchronous programming in C# is to use the async/await keywords and the matching ...Async methods in the API. That design is much more complicated and much more deeply integrated with the execution environment than these Begin...() methods, and the answers to questions like "when do my callbacks get called", "what thread(s) are my callbacks called on", and "how many callbacks might run simultaneously" depend entirely on your program's execution environment under the async/await model in C#/.NET.
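For example, a minimal async/await version of the same kind of server loop might look like the sketch below (this is not the code from the question; the request handling is simplified and the names are illustrative):

public async Task RunAsync(int port)
{
    TcpListener listener = new TcpListener(IPAddress.Loopback, port);
    listener.Start();
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        Task ignored = HandleClientAsync(client);   // one logical flow per client, no explicit threads
    }
}

private async Task HandleClientAsync(TcpClient client)
{
    using (client)
    {
        byte[] buffer = new byte[1024];             // per-client buffer
        NetworkStream stream = client.GetStream();
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            byte[] reply = Encoding.ASCII.GetBytes(DateTime.Now.ToLongTimeString());
            await stream.WriteAsync(reply, 0, reply.Length);
        }
    }
}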

Understanding the Async (sockets) in C#

I am a little bit confused about what the async approach achieves. I encountered it when looking up how to make a server accept multiple connections. What confuses me when looking up what async does in C# exactly is that, from what I can tell, it's not its own thread. However, it also allows you to avoid locking and stalling. For instance, if I have the following:
ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    //Do stuff
}
does this mean that as soon as it finds a connection, it executes the "acceptConnection" function but then continues to execute through the caller function? (in this case going out of scope). How does this allow me to create a server application that will be able to take multiple clients? I am fairly new to this concept even though I have worked with threads before to manage server/client interaction. If I am being a little vague, please let me know. I have looked up multiple examples on MSDN and am still a little confused. Thank you ahead of time!
as soon as it finds a connection, it executes the "acceptConnection" function
Yes
then continues to execute through the caller function?
No.
what does the Async approach achieve
When done right, it allows processing much higher number of requests/second using fewer resources.
Imagine you're creating a server that should accept connections on 10 TCP ports.
With a blocking API, you'd have to create 10 threads just for accepting sockets. Threads are an expensive system resource: every thread has its own stack, and switching between threads takes considerable time. When a client connects to one of the sockets, the OS has to wake up the corresponding thread.
With the async API, you just post 10 asynchronous accept requests. When a client connects, your acceptConnection method is called on a thread from the CLR thread pool.
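For illustration, posting those 10 accepts could look something like this (the port range is made up, and acceptConnection is the callback from the question):

List<TcpListener> listeners = new List<TcpListener>();
for (int port = 5000; port < 5010; port++)
{
    TcpListener listener = new TcpListener(IPAddress.Any, port);
    listener.Start();
    // No dedicated thread per port; acceptConnection runs on a CLR thread-pool thread.
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
    listeners.Add(listener);
}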
One more thing: if you want to continue executing the caller function after an asynchronous I/O operation completes, consider C#'s new async/await syntax; it allows you to do just that. The feature is available as a stand-alone library ("Async CTP") for Visual Studio 2010, and is included in Visual Studio 2012.
I don't profess to be a C# or sockets guru, but from what I understand the code you've got above will accept the first connection and then no more. You would need to issue another BeginAccept.
Something like:
TcpListener listener = null;

ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    // Create async receive data code..
    // Get ready for a new connection
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}
So by using async receives in addition to the async accept, the accept callback finishes quickly and sets up listening for a new connection. I guess you could re-order this too.
For a straight Socket connection (not TcpListener), this is what I used (ConnectedClient is a class of my own which handles the receive and transmit functions and holds other info about the connection):
int port = 7777; // or whatever port you want to listen on
IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, port);
listenSocket = new Socket(AddressFamily.InterNetwork,
                          SocketType.Stream, ProtocolType.Tcp);
listenSocket.Bind(ipLocal);

// create the call back for any client connections...
listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);

private void OnClientConnection(IAsyncResult asyn)
{
    if (socketClosed)
    {
        return;
    }
    try
    {
        Socket clientSocket = listenSocket.EndAccept(asyn);

        ConnectedClient connectedClient = new ConnectedClient(clientSocket, this);
        connectedClient.MessageReceived += OnMessageReceived;
        connectedClient.Disconnected += OnDisconnection;
        connectedClient.MessageSent += OnMessageSent;
        connectedClient.StartListening();

        // create the call back for any client connections...
        listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);
    }
    catch (ObjectDisposedException excpt)
    {
        // Deal with this, your code goes here
    }
    catch (Exception excpt)
    {
        // Deal with this, your code goes here
    }
}
I hope this answers what you're looking for.

Multi-threaded TcpClient send timeout after long open connection

I'm having a problem with a TcpClient closing with a send timeout in a multi-threaded application after the connection has been open for a long period of time (several hours or overnight). The NetworkStream is being used by two threads, a UI thread and a background thread. There is a StreamReader used for reading incoming data, and a StreamWriter used for outgoing data. The StreamReader is only ever accessed by one thread (the background one), but the StreamWriter is accessed by both the UI thread and the background thread.
What happens is that if I open a connection and connect to a remote server, I can immediately send and receive data without any problems. I do not get any send timeouts and data is correctly sent and received. However, if I then walk away and do not send any data for several hours and then return and start sending data (this is a chat application if that helps make it make sense), the socket will timeout on the send. During the time that I walk away there is no problem at all receiving data. Additionally, the remote server polls for an active connection and my client must respond to that, and since the connection is open for several hours it must be correctly sending a response. This polling response is only sent on the background thread, though. Data I enter is sent from the UI thread, and that's where the timeout occurs.
I'm guessing it's something to do with concurrent access, but I can't figure out what's causing it and why I can initially send data from the UI without a problem and only have it timeout after being idle for several hours.
Below is the relevant code. The variables at the top are declared in the class. Address and Port are properties in the class. WriteLine is the only method anywhere in the application that sends data with the StreamWriter. I put a lock around the call to StreamWriter.WriteLine hoping that would correct any synchronization issues. WriteLine is called from the background thread inside ParseMessage, and elsewhere from the UI.
If I increase TcpClient.SendTimeout to something larger, that doesn't fix anything. It just takes longer for the socket to timeout. I can't have the background thread both read and write because the background thread is blocking on ReadLine, so nothing would ever get written.
private TcpClient _connection;
private StreamWriter _output;
private Thread _parsingThread;
private object _outputLock = new object();

public void Connect(string address, int port)
{
    Address = address;
    Port = port;
    _parsingThread = new Thread(new ThreadStart(Run));
    _parsingThread.IsBackground = true;
    _parsingThread.Start();
}

private void Run()
{
    try
    {
        using (_connection = new TcpClient())
        {
            _connection.Connect(Address, Port);
            _connection.ReceiveTimeout = 180000;
            _connection.SendTimeout = 60000;
            StreamReader input = new StreamReader(_connection.GetStream());
            _output = new StreamWriter(_connection.GetStream());
            string line;
            do
            {
                line = input.ReadLine();
                if (!string.IsNullOrEmpty(line))
                {
                    ParseMessage(line);
                }
            }
            while (line != null);
        }
    }
    catch (Exception ex)
    {
        //not actually catching exception, just compressing example
    }
    finally
    {
        //stuff
    }
}

protected void WriteLine(string line)
{
    lock (_outputLock)
    {
        _output.WriteLine(line);
        _output.Flush();
    }
}
The blocking methods (Read and Write) of the NetworkStream class are not designed to be used concurrently from multiple threads. From MSDN:
Use the Write and Read methods for simple single thread synchronous blocking I/O. If you want to process your I/O using separate threads, consider using the BeginWrite and EndWrite methods, or the BeginRead and EndRead methods for communication.
My assumption is that, when you call WriteLine (and, internally, NetworkStream.Write) from your UI thread, it would block until the concurrent ReadLine (internally, NetworkStream.Read) operation completes in the background thread. If the latter does not do so within the SendTimeout, then the Write would time out.
To work around your issue, you should convert your implementation to use non-blocking methods. However, as a quick hack to first test whether this is really the issue, try introducing a DataAvailable poll before your ReadLine:
NetworkStream stream = _connection.GetStream();
StreamReader input = new StreamReader(stream);
_output = new StreamWriter(stream);
string line;
do
{
    // Poll for data availability.
    while (!stream.DataAvailable)
        Thread.Sleep(300);

    line = input.ReadLine();
    if (!string.IsNullOrEmpty(line))
    {
        ParseMessage(line);
    }
}
while (line != null);
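If the poll confirms the diagnosis, the non-blocking conversion could look roughly like the sketch below, which replaces the blocking ReadLine with BeginRead/EndRead (the newline framing and field names here are assumptions, not the poster's code):

private readonly byte[] _readBuffer = new byte[4096];
private readonly StringBuilder _pending = new StringBuilder();

private void StartRead(NetworkStream stream)
{
    stream.BeginRead(_readBuffer, 0, _readBuffer.Length, ar =>
    {
        int read = stream.EndRead(ar);
        if (read == 0) return;                       // remote end closed the connection

        // Accumulate bytes and hand complete lines to the existing ParseMessage.
        _pending.Append(Encoding.ASCII.GetString(_readBuffer, 0, read));
        string buffered = _pending.ToString();
        int newline;
        while ((newline = buffered.IndexOf('\n')) >= 0)
        {
            ParseMessage(buffered.Substring(0, newline).TrimEnd('\r'));
            buffered = buffered.Substring(newline + 1);
        }
        _pending.Length = 0;
        _pending.Append(buffered);

        StartRead(stream);                           // queue the next asynchronous read
    }, null);
}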

.NET TcpSocket programming

What is the correct way to accept sockets in a multi-connection environment in .NET?
Will the following be enough even if the load is high?
while (true)
{
    //block until a socket is accepted
    var socket = tcpListener.AcceptSocket();
    DoStuff(socket); //e.g. spawn thread and read data
}
That is, can I accept sockets in a single thread and then handle the sockets in a thread / dataflow / whatever.
So the question is just about the accept part..
You'll probably want the BeginAccept async operation instead of the synchronous Accept.
And if you want to handle high load, you definitely don't want a thread per connection - again, use the async methods.
Take a look at either the Reactor or Proactor pattern, depending on whether you want to block or not. I'd recommend the Patterns for Concurrent and Networked Objects book.
This should be fine, but if the load gets even higher you might consider using the asynchronous versions of this method: BeginAcceptSocket/EndAcceptSocket.
BeginAcceptSocket is a better choice if you want the most performant server.
More importantly, these async operations use the thread pool under the hood, whereas in your current implementation you are creating and destroying lots of threads, which is really expensive.
I think the best approach is to call BeginAccept(), and within OnAccept call BeginAccept again right away. This should give you the best concurrency.
The OnAccept should be something like this:
private void OnAccept(IAsyncResult ar)
{
    bool beginAcceptCalled = false;
    try
    {
        //start the listener again
        _listener.BeginAcceptSocket(OnAccept, null);
        beginAcceptCalled = true;
        Socket socket = _listener.EndAcceptSocket(ar);
        //do something with the socket..
    }
    catch (Exception ex)
    {
        if (!beginAcceptCalled)
        {
            //try listening to connections again
            _listener.BeginAcceptSocket(OnAccept, null);
        }
    }
}
It doesn't really matter performance-wise. What matters is how you communicate with each client. That handling will consume a lot more CPU than accepting sockets.
I would use BeginAccept/EndAccept for the listener socket AND BeginReceive/EndReceive for the client sockets.
Since I'm using Async CTP and DataFlow, the current code looks like this:
private async void WaitForSockets()
{
    var socket = await tcpListener.AcceptSocketAsync();
    WaitForSockets();
    incomingSockets.Post(socket);
}
Note that what looks like a recursive call will not cause stack overflow or block.
It will simply start a new awaiter for a new socket and exit.

SerialPort communication questions

I know SerialPort communication in .NET is designed to raise the DataReceived event on the receiver when data is available and reaches the threshold.
Can we skip that DataReceived event and instead start a thread on the receiver side that frequently calls one of the ReadXXX methods to get the data?
What will happen if the receiver is much slower than the sender? Will the SerialPort buffer overflow (data lost)?
There's little point in doing this; just start the reader thread yourself after you open the port and don't bother with DataReceived. Doing it your way is difficult: it's tough to cleanly unsubscribe from the DataReceived event after you've started the thread, especially at the very moment data is being received. You can't afford to have both.
That works, in fact it's one of the ways I used in my question Constantly reading from a serial port with a background thread.
For your scenario you could listen to the DataReceived event, then start a thread that calls ReadExisting on the port to get all currently available bytes. You can also check how many bytes are waiting in the receive buffer by looking at the SerialPort.BytesToRead property.
As for your receive buffer overflowing, a) it's big enough (you can check with the SerialPort.ReadBufferSize property) and b) this isn't 1982, so CPUs are fast enough to process data from the port so that it doesn't have time to fill up (certainly much faster than the serial data rate).
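A minimal sketch of that event-driven variant (the port name is an assumption, and ReadExisting is called directly in the handler rather than on a separate thread):

SerialPort port = new SerialPort("COM1", 9600);
port.DataReceived += (sender, e) =>
{
    // Raised on a thread-pool thread; grab whatever is currently buffered.
    string chunk = port.ReadExisting();
    // hand "chunk" off to your own parsing / queue here
};
port.Open();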
The function for the thread that reads the serial port can look like this:
private void ThreadRx()
{
    while (true)
    {
        try
        {
            if (this._serialPort.IsOpen == true)
            {
                int count = this._serialPort.BytesToRead;
                if (count > 0)
                {
                    byte[] buffer = new byte[count];
                    this._serialPort.Read(buffer, 0, count);
                    //To do: call your reception event (passing the buffer along)
                }
                else
                {
                    Thread.Sleep(50);
                }
            }
            else
            {
                Thread.Sleep(200);
            }
        }
        catch (ThreadAbortException ex)
        {
            //this exception is raised by calling the thread's Abort method to finish the thread
            break; //exit from while
        }
        catch (Exception ex)
        {
            //To do: call your error event
        }
    }
}
Do not worry about the input buffer: the thread can read much faster than the baud rate of the serial port. You can even use this same code to read from a TCP/IP socket.
