C# Async Sockets - Thread Logic

How does the thread creation logic behind Socket.BeginSend, Socket.BeginReceive, Socket.BeginAccept, etc. work?
Is it going to create a new thread for each client that connects to my server, or is it only going to create one thread for each operation (accept, receive, send...) no matter how many clients are connected? In the latter case, client 2's accept code would only execute once client 1's accept code has completed, and so on.
This is the code I made and I am trying to understand the logic behind it better:
public class SocketServer
{
    Socket _serverSocket;
    List<Socket> _clientSocket = new List<Socket>();
    byte[] _globalBuffer = new byte[1024];

    public SocketServer()
    {
        _serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    }

    public void Bind(int Port)
    {
        Console.WriteLine("Setting up server...");
        _serverSocket.Bind(new IPEndPoint(IPAddress.Loopback, Port));
    }

    public void Listen(int BackLog)
    {
        _serverSocket.Listen(BackLog);
    }

    public void Accept()
    {
        _serverSocket.BeginAccept(AcceptCallback, null);
    }

    private void AcceptCallback(IAsyncResult AR)
    {
        Socket socket = _serverSocket.EndAccept(AR);
        _clientSocket.Add(socket);
        Console.WriteLine("Client Connected");
        socket.BeginReceive(_globalBuffer, 0, _globalBuffer.Length, SocketFlags.None, ReceiveCallback, socket);
        Accept();
    }

    private void ReceiveCallback(IAsyncResult AR)
    {
        Socket socket = AR.AsyncState as Socket;
        int bufferSize = socket.EndReceive(AR);
        string text = Encoding.ASCII.GetString(_globalBuffer, 0, bufferSize);
        Console.WriteLine("Text Received: {0}", text);

        string response = string.Empty;
        if (text.ToLower() != "get time")
            response = $"\"{text}\" is a Invalid Request";
        else
            response = DateTime.Now.ToLongTimeString();

        byte[] data = Encoding.ASCII.GetBytes(response);
        socket.BeginSend(data, 0, data.Length, SocketFlags.None, SendCallback, socket);
        socket.BeginReceive(_globalBuffer, 0, _globalBuffer.Length, SocketFlags.None, ReceiveCallback, socket);
    }

    private void SendCallback(IAsyncResult AR)
    {
        (AR.AsyncState as Socket).EndSend(AR);
    }
}

These kinds of asynchronous methods use threads from the thread pool to invoke your callback, once the underlying event, whatever it may be, occurs. In your case, the underlying event might be a connection was established, or you received some data.
When you set a socket to 'accept', no thread needs to exist. The old synchronous way of doing things was to have one thread that just blocks on socket.Accept() until a connection comes in, but the point of these Begin..() methods is to do away with that.
Here's a trick, one that .Net uses and one that you can use yourself: you can register any WaitHandle object (a synchronization primitive such as a Semaphore, Mutex, or ManualResetEvent) and a callback method with the thread pool, such that when the WaitHandle becomes set, the thread pool will pick a thread, run your callback, and return the thread to the pool. See ThreadPool.RegisterWaitForSingleObject().
Turns out many of these Begin..() methods basically do the same thing. BeginAccept() uses a WaitHandle to know when a socket has received a connection - it registers the WaitHandle with the ThreadPool and then calls your callback on a ThreadPool thread when a connection occurs.
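For illustration, here is a minimal sketch of that ThreadPool.RegisterWaitForSingleObject() pattern on its own; the event and the callback body are made up for the example:
// Minimal sketch: ask the thread pool to run a callback once a WaitHandle is signaled.
ManualResetEvent signal = new ManualResetEvent(false);

ThreadPool.RegisterWaitForSingleObject(
    signal,
    (state, timedOut) =>
    {
        // Runs on a thread-pool thread when 'signal' is set (or when the timeout elapses).
        Console.WriteLine(timedOut ? "Timed out" : "Signaled, state = " + state);
    },
    "some context",     // state object handed to the callback
    Timeout.Infinite,   // no timeout
    true);              // execute the callback only once

// Later, possibly from a completely different thread:
// this is what makes the thread pool fire the callback.
signal.Set();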
Every time you call Begin...() and provide a callback, you should assume that your callback method could be invoked on a new thread, simultaneously with every other Begin...() call you've ever made that's still outstanding.
Call BeginReceive() 50 times on 50 different sockets? You should assume 50 threads could try to invoke your callback method at the same time. Call a mix of 50 BeginReceive() and BeginAccept() methods? 50 threads.
In reality, how many simultaneous invocations of your callbacks occur will be limited by the policy of the ThreadPool, e.g. how fast it may create new threads, how many threads it keeps alive and ready to go, etc.
With that, you should understand that calling BeginReceive() on 50 different sockets, but passing in the same buffer - _globalBuffer - means that 50 sockets are going to write to that same buffer and just make a mess of it, resulting in arbitrary/corrupted data.
Instead, you should use a unique buffer per simultaneous BeginReceive() call. What I would recommend doing is creating a new class to store the context of a single connection - the socket for the connection, the buffer to use for reading, its state, etc. Every new connection gets a new context instance.
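A rough sketch of that idea, assuming a hypothetical ClientContext class (the names are illustrative, not from your code):
// Hypothetical per-connection context: each client gets its own socket and its own buffer.
class ClientContext
{
    public Socket Socket { get; private set; }
    public byte[] Buffer { get; private set; }

    public ClientContext(Socket socket)
    {
        Socket = socket;
        Buffer = new byte[1024];
    }
}

private void AcceptCallback(IAsyncResult ar)
{
    Socket clientSocket = _serverSocket.EndAccept(ar);
    ClientContext context = new ClientContext(clientSocket);

    // Each receive writes into this client's own buffer, so simultaneous
    // callbacks on different connections can no longer corrupt each other's data.
    clientSocket.BeginReceive(context.Buffer, 0, context.Buffer.Length,
        SocketFlags.None, ReceiveCallback, context);

    _serverSocket.BeginAccept(AcceptCallback, null);
}

private void ReceiveCallback(IAsyncResult ar)
{
    ClientContext context = (ClientContext)ar.AsyncState;
    int bytesRead = context.Socket.EndReceive(ar);
    string text = Encoding.ASCII.GetString(context.Buffer, 0, bytesRead);
    // ... handle the message, then post the next BeginReceive with the same context.
}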
...
FYI, the modern way of performing asynchronous programming in C# is to use the async/await keywords together with the matching async methods from the API. That design is much more complicated and much more deeply integrated with the execution environment than these Begin...() methods, and the answers to questions like "when do my callbacks get called", "what thread(s) are my callbacks called on", and "how many callbacks might run simultaneously" depend entirely on your program's execution environment under the async/await design in C#/.Net.

Related

async / await or Begin / End with TcpListener?

I've started to build a TCP server which will be able to accept many clients and receive new data from all of them simultaneously.
Until now I used IOCP for TCP servers, which was pretty easy and comfortable,
but this time I want to use the async/await technology that was released in C# 5.0.
The problem is that when I started to write the server using async/await, I realized that for a multi-user TCP server, the async/await approach and the regular synchronous methods seem to work the same.
Here's a simple example to be more specific:
class Server
{
    private TcpListener _tcpListener;
    private List<TcpClient> _clients;
    private bool IsStarted;

    public Server(int port)
    {
        _tcpListener = new TcpListener(new IPEndPoint(IPAddress.Any, port));
        _clients = new List<TcpClient>();
        IsStarted = false;
    }

    public void Start()
    {
        IsStarted = true;
        _tcpListener.Start();
        Task.Run(() => StartAcceptClientsAsync());
    }

    public void Stop()
    {
        IsStarted = false;
        _tcpListener.Stop();
    }

    private async Task StartAcceptClientsAsync()
    {
        while (IsStarted)
        {
            // ******** Note 1 ********
            var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
            _clients.Add(acceptedClient);
            IPEndPoint ipEndPoint = (IPEndPoint)acceptedClient.Client.RemoteEndPoint;
            Console.WriteLine("Accepted new client! IP: {0} Port: {1}", ipEndPoint.Address, ipEndPoint.Port);
            Task.Run(() => StartReadingDataFromClient(acceptedClient));
        }
    }

    private async void StartReadingDataFromClient(TcpClient acceptedClient)
    {
        try
        {
            IPEndPoint ipEndPoint = (IPEndPoint)acceptedClient.Client.RemoteEndPoint;
            while (true)
            {
                MemoryStream bufferStream = new MemoryStream();
                // ******** Note 2 ********
                byte[] buffer = new byte[1024];
                int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, buffer.Length);
                if (packetSize == 0)
                {
                    break;
                }
                Console.WriteLine("Accepted new message from: IP: {0} Port: {1}\nMessage: {2}",
                    ipEndPoint.Address, ipEndPoint.Port, Encoding.Default.GetString(buffer));
            }
        }
        catch (Exception)
        {
        }
        finally
        {
            acceptedClient.Close();
            _clients.Remove(acceptedClient);
        }
    }
}
Now, if you look at the lines under 'Note 1' and 'Note 2', they can easily be changed.
Note 1, from
var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
to
var acceptedClient = _tcpListener.AcceptTcpClient();
And Note 2 from
int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, 1024);
to
int packetSize = acceptedClient.GetStream().Read(buffer, 0, 1024);
And the server will work exactly the same.
So why use async/await in a TCP listener for multiple users if it works the same as the regular synchronous methods?
Should I keep using IOCP in that case? For me it's pretty easy and comfortable, but I'm afraid it will become obsolete or even unavailable in newer .NET versions.
Until now, I used IOCP for TCP servers, which was pretty easy and comfortable, but this time I want to use the async/await technology that was released in C# 5.0.
I think you need to get your terminology right.
Having BeginOperation and EndOperation methods is called the Asynchronous Programming Model (APM). Having a single Task (or Task<T>) returning method is called the Task-based Asynchronous Pattern (TAP). I/O Completion Ports (IOCP) are a way to handle asynchronous operations on Windows, and the asynchronous I/O methods of both APM and TAP use them.
What this means is that the performance of APM and TAP is going to be very similar. The big difference between the two is that code using TAP and async-await is much more readable than code using APM and callbacks.
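As a rough sketch of that difference, here is the same accept operation written both ways (assuming a started TcpListener and the usual System.Net.Sockets / System.Threading.Tasks usings):
// APM: a Begin/End pair plus a callback.
void AcceptApm(TcpListener listener)
{
    listener.BeginAcceptTcpClient(ar =>
    {
        TcpClient client = listener.EndAcceptTcpClient(ar);
        // ... handle the client inside the callback
    }, null);
}

// TAP: a single Task-returning method consumed with await.
async Task AcceptTapAsync(TcpListener listener)
{
    TcpClient client = await listener.AcceptTcpClientAsync();
    // ... handle the client in straight-line code after the await
}
Both end up on the same underlying I/O machinery; the TAP version just reads top to bottom.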
So, if you want to (or have to) write your code asynchronously, use TAP and async-await, if you can. But if you don't have a good reason to do that, just write your code synchronously.
On the server, a good reason to use asynchrony is scalability: asynchronous code can handle many more requests at the same time, because it tends to use fewer threads. If you don't care about scalability (for example because you're not going to have many users at the same time), then asynchrony doesn't make much sense.
Also, your code contains some practices that you should avoid (a corrected sketch follows this list):
Don't use async void methods, there is no good way to tell when they complete and they have bad exception handling. The exception is event handlers, but that applies mostly to GUI applications.
Don't use Task.Run() if you don't have to. Task.Run() can be useful if you want to leave the current thread (usually the UI thread) or if you want to execute synchronous code in parallel. But it doesn't make much sense to use it to start asynchronous operations in server applications.
Don't ignore Tasks returned from methods, unless you're sure they're not going to throw an exception. Exceptions from ignored Tasks won't do anything, which could very easily mask a bug.
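Putting those points together, a sketch of how the accept/read loop from the question could look without async void or fire-and-forget Task.Run (the field names match the question's code; the rest is illustrative):
private async Task AcceptClientsAsync()
{
    while (IsStarted)
    {
        TcpClient client = await _tcpListener.AcceptTcpClientAsync();
        _clients.Add(client);

        // Keep the returned Task (e.g. in a list) instead of ignoring it,
        // so exceptions can be observed and clients awaited on shutdown.
        Task readerTask = ReadFromClientAsync(client);
    }
}

private async Task ReadFromClientAsync(TcpClient client)   // Task, not async void
{
    byte[] buffer = new byte[1024];
    NetworkStream stream = client.GetStream();
    int bytesRead;
    while ((bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        Console.WriteLine(Encoding.Default.GetString(buffer, 0, bytesRead));
    }
    client.Close();
    _clients.Remove(client);
}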
After a bit of searching I found this:
Q: It is a list of TCP server practices, but which one is the best practice for managing 5000+ clients per second?
A: My assumption is "writing async methods"; moreover, if you are working with a database at the same time, "async methods and iterators" will do more.
Here is some sample code using async/await:
async await tcp server
I think it is easier to build a TCP server using async/await than with IOCP.

Understanding ManualResetEvent in asynchronous server socket

Socket SocketSrv;
public static ManualResetEvent Done = new ManualResetEvent(false);
IPEndPoint IPP = new IPEndPoint(IPAddress.Any, 1234);

void Listening()
{
    SocketSrv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    try
    {
        SocketSrv.Bind(IPP);
        SocketSrv.Listen(5);
        while (true)
        {
            Done.Reset();
            info.Text = "Waiting for connections....";
            SocketSrv.BeginAccept(new AsyncCallback(Connection), SocketSrv);
            Done.WaitOne();
        }
    }
    catch (Exception error)
    {
        MessageBox.Show(error.Message);
    }
}

void Connection(IAsyncResult ar)
{
    Done.Set();
    Socket con = (Socket)ar.AsyncState;
    Socket handler = con.EndAccept(ar);
}
I'm trying to understand the ManualResetEvent in this asynchronous operation, since I've never used it.
Step 1. The SocketSrv is created to accept TCP connections, and the type used for sending and receiving "commands" is stream.
Step 2. The socket is bound to the IP and port, and then we start listening for connections.
Step 3. In the while loop:
The ManualResetEvent is Reset (I understand that ManualResetEvent is a class whose state is a Boolean and indicates whether a thread is busy or not). In this case, the event is always reset because if a connection is made and another one is coming, I need to reset it and start the "operation" again.
In BeginAccept I'm starting the asynchronous operation, passing the callback function to be executed and the state argument which will become the "socket" in the IAsyncResult.
Step 4. The ResetEvent is now waiting, blocking the current thread, waiting for the handler in the Connection method to finish so it can complete initializing the current connection.
Step 5. In the Connection thread, the ResetEvent sets the signal to true, which means... well, I don't know what it means. I think it tells the ResetEvent to unblock the main thread.
With the 'con' socket I'm getting the AsyncState. I have no idea what that means.
With the handler socket I'm telling the ResetEvent that the connection was made.
That being said, could someone tell me whether what I've said is true or wrong, and why?
The event is used so that, when a connection occurs, BeginAccept won't be called again until the Connection method has been invoked; i.e. WaitOne halts the thread until Set is called. Reset sets the state of the event back to non-signalled so that WaitOne will halt the thread again, making it wait for Connection to be called again.
Personally, I don't use this particular pattern. I've never seen an explanation of the point of this pattern. If there were some shared state between the BeginAccept loop and the Connection method, it might make sense. But, as written, no state is guarded by use of the event. When I use BeginAccept I simply don't use an event, and I've used code like this to deal with many connections a second. Use of the event will do nothing to prevent errors with too many connections at a time. And quite frankly, using an asynchronous method and forcing it to be effectively synchronous defeats the purpose.
The AsyncState, from the point of view of BeginAccept, is simply opaque data. It's the "state" that is used for that particular async operation. This "state" is application-specific: you use whatever you need when you want to process a connection asynchronously. In the case of the BeginAccept callback, you usually want to do something with the server socket, so it's passed in as the state so that you have access to it to call EndAccept. Since SocketSrv is a member field, you don't really need to do this; you could do this instead:
SocketSrv.BeginAccept(new AsyncCallback(Connection), null);
//...

void Connection(IAsyncResult ar)
{
    Socket handler = SocketSrv.EndAccept(ar);
    //...
}
Your comments seem to suggest you have a good grasp of this particular bit of code. Your "Step 4" is a bit off: it's not waiting for the Connection method to end, just for it to start (since Set is called on its first line). And yes, "Step 5": the Set unblocks the WaitOne so that the main thread can call Reset and then BeginAccept again.

Understanding the Async (sockets) in C#

I am a little bit confused about what the async approach achieves. I encountered it when looking up how to make a server accept multiple connections. What confuses me while looking up exactly what async does in C# is that, from what I can tell, it's not its own thread. However, it also allows you to avoid locking and stalling. For instance, if I have the following:
ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    //Do stuff
}
does this mean that as soon as it finds a connection, it executes the "acceptConnection" function but then continues to execute through the caller function (in this case going out of scope)? How does this allow me to create a server application that will be able to take multiple clients? I am fairly new to this concept even though I have worked with threads before to manage server/client interaction. If I am being a little vague, please let me know. I have looked up multiple examples on MSDN and am still a little confused. Thank you ahead of time!
as soon as it finds a connection, it executes the "acceptConnection" function
Yes
then continues to execute through the caller function?
No.
what does the Async approach achieve
When done right, it allows processing much higher number of requests/second using fewer resources.
Imagine you're creating a server that should accept connections on 10 TCP ports.
With the blocking API, you'd have to create 10 threads just for accepting sockets. Threads are an expensive system resource, e.g. every thread has its own stack, and switching between threads takes considerable time. When a client connects to one of the sockets, the OS has to wake up the corresponding thread.
With the async API, you post 10 asynchronous requests. When a client connects, your acceptConnection method will be called by a thread from the CLR thread pool.
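For example, with the listener API from your snippet, posting the pending accepts is just this (a sketch; listeners is assumed to be a collection of 10 started TcpListener instances):
// Post an asynchronous accept on each listening socket.
// No thread sits blocked while these accepts are pending; the thread pool
// runs acceptConnection only when a client actually connects.
foreach (TcpListener l in listeners)
{
    l.BeginAcceptSocket(new AsyncCallback(acceptConnection), l);
}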
And one more thing.
If you want to continue executing the caller function after waiting for the asynchronous I/O operation to complete, consider the newer C# async/await syntax; it lets you do just that. The feature is available as a stand-alone library, the "Async CTP", for Visual Studio 2010, and is included in Visual Studio 2012.
I don't profess to be a C# or sockets guru, but from what I understand the code you've got above will accept the first connection and then no more. You would need to establish another BeginAccept.
Something like:
TcpListener listener = null;

ConnectionManager()
{
    listener = new TcpListener(port);
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}

public void acceptConnection(IAsyncResult ar)
{
    // Create async receive data code..
    // Get ready for a new connection
    listener.BeginAcceptSocket(new AsyncCallback(acceptConnection), listener);
}
So by using async receive-data code in addition to the async connection, the accept-connection callback finishes pretty quickly and sets up listening for a new connection. I guess you could re-order this too.
For a straight socket connection (not TcpListener), this is what I used
(ConnectedClient is my own class which handles the receive & transmit functions and holds other info about the connection):
// listenSocket and socketClosed are fields on the class
int Port = 7777; // or whatever port you want to listen on
IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, Port);
listenSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listenSocket.Bind(ipLocal);

// create the call back for any client connections...
listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);

private void OnClientConnection(IAsyncResult asyn)
{
    if (socketClosed)
    {
        return;
    }
    try
    {
        Socket clientSocket = listenSocket.EndAccept(asyn);
        ConnectedClient connectedClient = new ConnectedClient(clientSocket, this);
        connectedClient.MessageReceived += OnMessageReceived;
        connectedClient.Disconnected += OnDisconnection;
        connectedClient.MessageSent += OnMessageSent;
        connectedClient.StartListening();

        // create the call back for any client connections...
        listenSocket.BeginAccept(new AsyncCallback(OnClientConnection), null);
    }
    catch (ObjectDisposedException excpt)
    {
        // Deal with this, your code goes here
    }
    catch (Exception excpt)
    {
        // Deal with this, your code goes here
    }
}
I hope this has answered what you were looking for.

Multi-threaded TcpClient send timeout after long open connection

I'm having a problem with a TcpClient closing with a send timeout in a multi-threaded application after the connection has been open for a long period of time (several hours or overnight). The NetworkStream is being used by two threads, a UI thread and a background thread. There is a StreamReader used for reading incoming data, and a StreamWriter used for outgoing data. The StreamReader is only ever accessed one thread (the background one), but the StreamWriter is accessed by both the UI thread and the background thread.
What happens is that if I open a connection and connect to a remote server, I can immediately send and receive data without any problems. I do not get any send timeouts and data is correctly sent and received. However, if I then walk away and do not send any data for several hours and then return and start sending data (this is a chat application, if that helps make it make sense), the socket will time out on the send. During the time that I walk away there is no problem at all receiving data. Additionally, the remote server polls for an active connection and my client must respond to that, and since the connection stays open for several hours it must be correctly sending a response. This polling response is only sent on the background thread, though. Data I enter is sent from the UI thread, and that's where the timeout occurs.
I'm guessing it's something to do with concurrent access, but I can't figure out what's causing it, or why I can initially send data from the UI without a problem and only have it time out after being idle for several hours.
Below is the relevant code. The variables at the top are declared in the class. Address and Port are properties in the class. WriteLine is the only method anywhere in the application that sends data with the StreamWriter. I put a lock around the call to StreamWriter.WriteLine hoping that would correct any synchronization issues. WriteLine is called from the background thread inside ParseMessage, and elsewhere from the UI.
If I increase TcpClient.SendTimeout to something larger, that doesn't fix anything. It just takes longer for the socket to timeout. I can't have the background thread both read and write because the background thread is blocking on ReadLine, so nothing would ever get written.
private TcpClient _connection;
private StreamWriter _output;
private Thread _parsingThread;
private object _outputLock = new object();

public void Connect(string address, int port)
{
    Address = address;
    Port = port;
    _parsingThread = new Thread(new ThreadStart(Run));
    _parsingThread.IsBackground = true;
    _parsingThread.Start();
}

private void Run()
{
    try
    {
        using (_connection = new TcpClient())
        {
            _connection.Connect(Address, Port);
            _connection.ReceiveTimeout = 180000;
            _connection.SendTimeout = 60000;
            StreamReader input = new StreamReader(_connection.GetStream());
            _output = new StreamWriter(_connection.GetStream());
            string line;
            do
            {
                line = input.ReadLine();
                if (!string.IsNullOrEmpty(line))
                {
                    ParseMessage(line);
                }
            }
            while (line != null);
        }
    }
    catch (Exception ex)
    {
        //not actually catching exception, just compressing example
    }
    finally
    {
        //stuff
    }
}

protected void WriteLine(string line)
{
    lock (_outputLock)
    {
        _output.WriteLine(line);
        _output.Flush();
    }
}
The blocking methods (Read and Write) of the NetworkStream class are not designed to be used concurrently from multiple threads. From MSDN:
Use the Write and Read methods for simple single thread synchronous blocking I/O. If you want to process your I/O using separate threads, consider using the BeginWrite and EndWrite methods, or the BeginRead and EndRead methods for communication.
My assumption is that, when you call WriteLine (and, internally, NetworkStream.Write) from your UI thread, it would block until the concurrent ReadLine (internally, NetworkStream.Read) operation completes in the background thread. If the latter does not do so within the SendTimeout, then the Write would time out.
To work around your issue, you should convert your implementation to use non-blocking methods. However, as a quick hack to first test whether this is really the issue, try introducing a DataAvailable poll before your ReadLine:
NetworkStream stream = _connection.GetStream();
StreamReader input = new StreamReader(stream);
_output = new StreamWriter(stream);
string line;
do
{
    // Poll for data availability.
    while (!stream.DataAvailable)
        Thread.Sleep(300);

    line = input.ReadLine();
    if (!string.IsNullOrEmpty(line))
    {
        ParseMessage(line);
    }
}
while (line != null);
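If the polling hack confirms the diagnosis, a further step would be to move the read loop to async/await so it never blocks a thread at all. A sketch only, reusing the names from your code and assuming .NET 4.5 or later:
private async Task RunAsync()
{
    using (_connection = new TcpClient())
    {
        await _connection.ConnectAsync(Address, Port);
        NetworkStream stream = _connection.GetStream();
        StreamReader input = new StreamReader(stream);
        _output = new StreamWriter(stream);

        string line;
        // ReadLineAsync does not block a thread, so a long idle read
        // can't hold up WriteLine calls coming from the UI thread.
        while ((line = await input.ReadLineAsync()) != null)
        {
            if (!string.IsNullOrEmpty(line))
            {
                ParseMessage(line);
            }
        }
    }
}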

How to write a .Net UDP Scalable server

I need to write a very high-load UDP server. I'm using .NET. How do I use the Socket class to achieve this?
I am familiar with the Winsock API and completion ports, and what I would do there is use several threads to accept sockets using a completion port, and also to receive in the same manner.
My server needs to process a LOT of small UDP packets very quickly, and I want to receive them asynchronously; how do I do this using .NET?
I thought of calling BeginReceive several times, but that kinda seems silly...
If anyone has a good .NET example for this it would of course help a lot.
What I have found to minimize dropped packets is to read from the socket asynchronously as you mentioned, but to put the bytes read into a thread-safe queue, and then have another thread read off the queue and process the bytes. If you are using .NET 4.0 you could use ConcurrentQueue:
public class SomeClass
{
    ConcurrentQueue<IList<Byte>> _Queue;
    Byte[] _Buffer;
    ManualResetEvent _StopEvent;
    AutoResetEvent _QueueEvent;

    private void ReceiveCallback(IAsyncResult ar)
    {
        Socket socket = ar.AsyncState as Socket;
        Int32 bytesRead = socket.EndReceive(ar);
        List<Byte> bufferCopy = new List<byte>(_Buffer);
        _Queue.Enqueue(bufferCopy);
        _QueueEvent.Set();
        if (!_StopEvent.WaitOne(0)) socket.BeginReceive(...);
        return;
    }

    private void ReadReceiveQueue()
    {
        WaitHandle[] handles = new WaitHandle[] { _StopEvent, _QueueEvent };
        Boolean loop = true;
        while (loop)
        {
            Int32 index = WaitHandle.WaitAny(handles);
            switch (index)
            {
                case 0:
                    loop = false;
                    break;
                case 1:
                    // Dequeue logic here
                    break;
                default:
                    break;
            }
        }
    }
}
Note: the _StopEvent is a ManualResetEvent so that both the ReceiveCallback and ReadReceiveQueue methods can use the same event to shut down cleanly.
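For the "Dequeue logic here" placeholder, the worker might simply drain whatever is currently queued, for example (ProcessPacket is a hypothetical handler, not part of the code above):
// case 1: the queue event was signaled - drain everything currently queued.
IList<Byte> packet;
while (_Queue.TryDequeue(out packet))
{
    ProcessPacket(packet);   // hypothetical per-datagram handler
}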
If you only have a single socket and you can process UDP packets independently of each other, then the best approach would actually be to use a thread pool where each thread invokes a blocking Receive. The OS will take care of waking up one of the waiting threads to receive/process the packet. That way you can avoid any overhead introduced by the async-I/O routines.
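A rough sketch of that approach (the port number and thread count are arbitrary, and ProcessPacket is again a hypothetical handler):
// Several threads all block on Receive() against the same UDP socket.
// The OS wakes one waiting thread per datagram; packets are processed independently.
Socket udpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udpSocket.Bind(new IPEndPoint(IPAddress.Any, 9000));   // port chosen for the example

for (int i = 0; i < Environment.ProcessorCount; i++)
{
    Thread worker = new Thread(() =>
    {
        byte[] buffer = new byte[2048];   // one receive buffer per thread
        while (true)
        {
            // Blocks until a datagram arrives.
            int bytesRead = udpSocket.Receive(buffer);
            ProcessPacket(buffer, bytesRead);   // hypothetical handler
        }
    });
    worker.IsBackground = true;
    worker.Start();
}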
