What is the async/await equivalent of a ThreadPool server? - c#

I am working on a TCP server that looks something like this, using synchronous APIs and the thread pool:
TcpListener listener;
void Serve(){
while(true){
var client = listener.AcceptTcpClient();
ThreadPool.QueueUserWorkItem(this.HandleConnection, client);
//Or alternatively new Thread(HandleConnection).Start(client)
}
}
Assuming my goal is to handle as many concurrent connections as possible with the lowest resource usage, this seems like it will quickly be limited by the number of available threads. I suspect that by using non-blocking Task APIs, I will be able to handle many more connections with fewer resources.
My initial impression is something like:
async Task Serve(){
while(true){
var client = await listener.AcceptTcpClientAsync();
HandleConnectionAsync(client); //fire and forget?
}
}
But it strikes me that this could cause bottlenecks. Perhaps HandleConnectionAsync will take an unusually long time to hit the first await, and will stop the main accept loop from proceeding. Will this only use one thread ever, or will the runtime magically run things on multiple threads as it sees fit?
Is there a way to combine these two approaches so that my server will use exactly the number of threads it needs for the number of actively running tasks, but so that it will not block threads unnecessarily on IO operations?
Is there an idiomatic way to maximize throughput in a situation like this?

I'd let the Framework manage the threading and wouldn't create any extra threads, unless profiling suggests I might need to, especially if the calls inside HandleConnectionAsync are mostly IO-bound.
Anyway, if you'd like to release the calling thread (the dispatcher) at the beginning of HandleConnectionAsync, there's a very easy solution: you can jump onto a new thread from the ThreadPool with await Task.Yield(). That works if your server runs in an execution environment that does not have any synchronization context installed on the initial thread (a console app, a WCF service), which is normally the case for a TCP server.
The following illustrates this (the code is originally from here). Note, the main while loop doesn't create any threads explicitly:
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;
class Program
{
object _lock = new Object(); // sync lock
List<Task> _connections = new List<Task>(); // pending connections
// The core server task
private async Task StartListener()
{
var tcpListener = TcpListener.Create(8000);
tcpListener.Start();
while (true)
{
var tcpClient = await tcpListener.AcceptTcpClientAsync();
Console.WriteLine("[Server] Client has connected");
var task = StartHandleConnectionAsync(tcpClient);
// if already faulted, re-throw any error on the calling context
if (task.IsFaulted)
await task;
}
}
// Register and handle the connection
private async Task StartHandleConnectionAsync(TcpClient tcpClient)
{
// start the new connection task
var connectionTask = HandleConnectionAsync(tcpClient);
// add it to the list of pending tasks
lock (_lock)
_connections.Add(connectionTask);
// catch all errors of HandleConnectionAsync
try
{
await connectionTask;
// we may be on another thread after "await"
}
catch (Exception ex)
{
// log the error
Console.WriteLine(ex.ToString());
}
finally
{
// remove pending task
lock (_lock)
_connections.Remove(connectionTask);
}
}
// Handle new connection
private async Task HandleConnectionAsync(TcpClient tcpClient)
{
await Task.Yield();
// continue asynchronously on another thread
using (var networkStream = tcpClient.GetStream())
{
var buffer = new byte[4096];
Console.WriteLine("[Server] Reading from client");
var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
Console.WriteLine("[Server] Client wrote {0}", request);
var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
Console.WriteLine("[Server] Response has been written");
}
}
// The entry point of the console app
static async Task Main(string[] args)
{
Console.WriteLine("Hit Ctrl-C to exit.");
await new Program().StartListener();
}
}
Alternatively, the code might look like below, without await Task.Yield(). Note, I pass an async lambda to Task.Run, because I still want to benefit from async APIs inside HandleConnectionAsync and use await in there:
// Handle new connection
private static Task HandleConnectionAsync(TcpClient tcpClient)
{
return Task.Run(async () =>
{
using (var networkStream = tcpClient.GetStream())
{
var buffer = new byte[4096];
Console.WriteLine("[Server] Reading from client");
var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
Console.WriteLine("[Server] Client wrote {0}", request);
var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
Console.WriteLine("[Server] Response has been written");
}
});
}
Updated, based upon the comment: if this is going to be library code, the execution environment is indeed unknown, and may have a non-default synchronization context. In this case, I'd rather run the main server loop on a pool thread (which is free of any synchronization context):
private static Task StartListener()
{
return Task.Run(async () =>
{
var tcpListener = TcpListener.Create(8000);
tcpListener.Start();
while (true)
{
var tcpClient = await tcpListener.AcceptTcpClientAsync();
Console.WriteLine("[Server] Client has connected");
var task = StartHandleConnectionAsync(tcpClient);
if (task.IsFaulted)
await task;
}
});
}
This way, all child tasks created inside StartListener wouldn't be affected by the synchronization context of the client code. So, I wouldn't have to call Task.ConfigureAwait(false) anywhere explicitly.
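For comparison, if the loop stayed on the caller's context, the usual library-code alternative would be to add ConfigureAwait(false) to the awaits. A minimal sketch (the method name below is a placeholder, not part of the original code):
private async Task StartListenerNoContext()
{
    var tcpListener = TcpListener.Create(8000);
    tcpListener.Start();
    while (true)
    {
        // ConfigureAwait(false) resumes the continuation on a pool thread
        // instead of the caller's synchronization context
        var tcpClient = await tcpListener.AcceptTcpClientAsync().ConfigureAwait(false);
        Console.WriteLine("[Server] Client has connected");
        var task = StartHandleConnectionAsync(tcpClient);
        if (task.IsFaulted)
            await task;
    }
}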
Updated in 2020, someone just asked a good question off-site:
I was wondering what is the reason for using a lock here? This is not
necessary for exception handling. My understanding is that a lock is
used because List is not thread safe, therefore the real question
is why add the tasks to a list (and incur the cost of a lock under
load).
Since Task.Run is perfectly able to keep track of the tasks it
started, my thinking is that in this specific example the lock is
useless, however you put it there because in a real program, having
the tasks in a list allows us to for example, iterate currently
running tasks and terminate the tasks cleanly if the program receives
a termination signal from the operating system.
Indeed, in a real-life scenario we almost always want to keep track of the tasks we start with Task.Run (or any other Task objects which are "in-flight"), for a few reasons:
To track task exceptions, which otherwise might be silently swallowed if they go unobserved elsewhere.
To be able to wait asynchronously for completion of all the pending tasks (e.g., consider a Start/Stop UI button, or handling a start/stop request inside a headless Windows service).
To be able to control (and throttle/limit) the number of tasks we allow to be in-flight simultaneously (a throttling sketch follows at the end of this answer).
There are better mechanisms for handling real-life concurrency workflows (e.g., the TPL Dataflow library), but I did include the task list and the lock on purpose here, even in this simple example. It might be tempting to use a fire-and-forget approach, but that's almost never a good idea. In my own experience, when I did want fire-and-forget, I used async void methods for that (check this).
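For the throttling point above, a minimal sketch using SemaphoreSlim to cap the number of in-flight connections (the limit of 100 and the method name are placeholders, not part of the original code; requires using System.Threading):
private readonly SemaphoreSlim _throttle = new SemaphoreSlim(100); // at most 100 concurrent handlers

private async Task StartHandleConnectionThrottledAsync(TcpClient tcpClient)
{
    await _throttle.WaitAsync(); // wait for a free slot before handling the connection
    try
    {
        await HandleConnectionAsync(tcpClient);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString()); // log, so the exception isn't swallowed
    }
    finally
    {
        _throttle.Release(); // free the slot for the next connection
    }
}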

The existing answers have correctly proposed using Task.Run(() => HandleConnection(client));, but haven't explained why.
Here's why: you are concerned that HandleConnectionAsync might take some time to hit the first await. If you stick to using async IO (as you should in this case), that time before the first await is spent doing CPU-bound work without any blocking. This is a perfect case for the thread pool, which is made to run short, non-blocking CPU work.
And you are right that the accept loop would be throttled by HandleConnectionAsync taking a long time before returning (maybe because there is significant CPU-bound work in it). This should be avoided if you need a high rate of new connections.
If you are sure that there is no significant work throttling the loop, you can skip the additional thread-pool Task.
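In code, that offloading looks roughly like this in the accept loop (a sketch; the discard assignment just makes the fire-and-forget explicit):
while (true)
{
    var client = await listener.AcceptTcpClientAsync();
    // Hand the connection to the thread pool so a slow start of
    // HandleConnectionAsync cannot delay the next accept.
    _ = Task.Run(() => HandleConnectionAsync(client));
}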
Alternatively, you can have multiple accepts running at the same time. Replace await Serve(); with (for example):
var serverTasks =
Enumerable.Range(0, Environment.ProcessorCount)
.Select(_ => Serve());
await Task.WhenAll(serverTasks);
This removes the scalability problems. Note, that await will swallow all but one error here.
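If you need to see every failure rather than just the first one rethrown by await, one option is to keep the WhenAll task and inspect its Exception property; a sketch:
var whenAll = Task.WhenAll(serverTasks);
try
{
    await whenAll;
}
catch
{
    // whenAll.Exception is an AggregateException holding every fault,
    // not just the single exception rethrown by await
    foreach (var ex in whenAll.Exception.InnerExceptions)
        Console.WriteLine(ex);
}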

Try
TcpListener listener;
void Serve(){
while(true){
var client = listener.AcceptTcpClient();
Task.Run(() => this.HandleConnection(client));
//Or alternatively new Thread(HandleConnection).Start(client)
}
}

According to Microsoft (http://msdn.microsoft.com/en-AU/library/hh524395.aspx#BKMK_VoidReturnType), the void return type shouldn't be used because exceptions thrown from it can't be caught by the caller. As you have pointed out, you do need "fire and forget" tasks, so my conclusion is that you should always return Task (as Microsoft says), but catch the error using:
TaskInstance.ContinueWith(i => { /* exception handler */ }, TaskContinuationOptions.OnlyOnFaulted);
An example I used as proof is below:
public static void Main()
{
Awaitable()
.ContinueWith(
i =>
{
foreach (var exception in i.Exception.InnerExceptions)
{
Console.WriteLine(exception.Message);
}
},
TaskContinuationOptions.OnlyOnFaulted);
Console.WriteLine("This needs to come out before my exception");
Console.ReadLine();
}
public static async Task Awaitable()
{
await Task.Delay(3000);
throw new Exception("Hey I can catch these pesky things");
}

Is there any reason you need to accept connections async? I mean, does awaiting a client connection give you any value? The only reason for doing it would be if there is other work going on in the server while waiting for a connection. If there is, you could probably do something like this:
public async void Serve()
{
while (true)
{
var client = await _listener.AcceptTcpClientAsync();
Task.Factory.StartNew(() => HandleClient(client), TaskCreationOptions.LongRunning);
}
}
This way the accept releases the current thread, leaving room for other things to be done, and the handling runs on a new thread. The only overhead is spawning a new thread to handle the client before going straight back to accepting a new connection.
Edit:
Just realized it's almost the same code you wrote. Think I need to read your question again to better understand what you're actually asking :S
Edit2:
Is there a way to combine these two approaches so that my server will use exactly the
number of threads it needs for the number of actively running tasks, but so that it will
not block threads unnecessarily on IO operations?
I think my solution actually answers this question. Is it really necessary though?
Edit3:
Made Task.Factory.StartNew() actually create a new thread.

Related

How to prevent HttpListener from aborting pending requests on stoppage? HttpListener.Stop is not working

I have a problem here. In the code below the async/await pattern is used with HttpListener. When a request is sent over HTTP, a "delay" query string argument is expected, and its value causes the server to delay processing of that request for the given period. I need the server to process the pending requests even after it has stopped accepting new ones.
static void Main(string[] args)
{
HttpListener httpListener = new HttpListener();
CountdownEvent sessions = new CountdownEvent(1);
bool stopRequested = false;
httpListener.Prefixes.Add("http://+:9000/GetData/");
httpListener.Start();
Task listenerTask = Task.Run(async () =>
{
while (true)
{
try
{
var context = await httpListener.GetContextAsync();
sessions.AddCount();
Task childTask = Task.Run(async () =>
{
try
{
Console.WriteLine($"Request accepted: {context.Request.RawUrl}");
int delay = int.Parse(context.Request.QueryString["delay"]);
await Task.Delay(delay);
using (StreamWriter sw = new StreamWriter(context.Response.OutputStream, Encoding.Default, 4096, true))
{
await sw.WriteAsync("<html><body><h1>Hello world</h1></body></html>");
}
context.Response.Close();
}
finally
{
sessions.Signal();
}
});
}
catch (HttpListenerException ex)
{
if (stopRequested && ex.ErrorCode == 995)
{
break;
}
throw;
}
}
});
Console.WriteLine("Server is running. ENTER to stop...");
Console.ReadLine();
sessions.Signal();
stopRequested = true;
httpListener.Stop();
Console.WriteLine("Stopped accepting requests. Waiting for the pendings...");
listenerTask.Wait();
sessions.Wait();
Console.WriteLine("Finished");
Console.ReadLine();
httpListener.Close();
}
The exact problem here is that when the server is stopped, HttpListener.Stop is called, but all the pending requests are aborted immediately, i.e. the code is unable to send the responses back.
In the non-async/await pattern (i.e. a simple Thread-based implementation) I have the option of aborting the thread (which I suppose is very bad), and this allows me to process pending requests, because it simply aborts the HttpListener.GetContext call.
Can you please point out what I am doing wrong, and how I can prevent HttpListener from aborting pending requests in the async/await pattern?
It seems that when HttpListener closes the request queue handle, the requests in progress are aborted. As far as I can tell, there is no way to avoid having HttpListener do that - apparently, it's a compatibility thing. In any case, that's how its GetContext plumbing works: when the handle is closed, the native call that actually gets the request context returns an error immediately.
Thread.Abort doesn't help - really, I've yet to see a place where Thread.Abort is used correctly outside of the "application domain unloading" scenario. Thread.Abort can only ever abort managed code. Since your code is currently executing native code, it will only be aborted when it returns back to managed code - which is almost exactly equivalent to just doing this:
var context = await httpListener.GetContextAsync();
if (stopRequested) return;
... and since there's no better cancellation API for HttpListener, this is really your only option if you want to stick with HttpListener.
The shutdown will look like this:
stopRequested = true;
sessions.Wait();
httpListener.Dispose();
listenerTask.Wait();
I'd also suggest using CancellationToken instead of a bool flag - it handles all the synchronization woes for you. If that's not desirable for some reason, make sure you synchronize access to the flag - contractually, the compiler is allowed to omit the check, since it's impossible for the flag to change in single-threaded code.
If you want to, you can make listenerTask complete sooner by sending a dummy HTTP request to yourself right after setting stopRequested - this will cause GetContext to return immediately with the new request, and you can return. This is an approach that's commonly used when dealing with APIs that don't support "nice" cancellation, e.g. UdpClient.Receive.
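A sketch of that dummy-request trick, assuming the prefix from the question (so the loopback URL is an assumption), that System.Net.Http is referenced, and that the if (stopRequested) return; check shown above is in place:
stopRequested = true;
// Wake the pending GetContextAsync with a throw-away request so the accept
// loop observes the flag and returns, without HttpListener.Stop() aborting anything.
using (var http = new HttpClient())
{
    try { http.GetAsync("http://localhost:9000/GetData/?delay=0").Wait(); }
    catch { /* ignore - the request only exists to unblock GetContextAsync */ }
}
listenerTask.Wait();   // accept loop exits after handing off the dummy request
sessions.Signal();     // release the initial count
sessions.Wait();       // wait for in-flight handlers to finish writing responses
httpListener.Close();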

How to keep an asynchronous program running?

I'm working on a program that uses the S22.Imap library to keep an IDLE connection to Gmail and receive messages in real time.
I am calling the following function from my Main method:
static void RunIdle()
{
using (ImapClient Client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true))
{
if (!Client.Supports("IDLE"))
throw new Exception("This server does not support IMAP IDLE");
Client.DefaultMailbox = "label";
Client.NewMessage += new EventHandler<IdleMessageEventArgs>(OnNewMessage);
Console.WriteLine("Connected to gmail");
while (true)
{
//keep the program running until terminated
}
}
}
Using the infinite while loop works, but it seems there should be a more correct way to do this. In the future, if I want to add more IDLE connections, the only way I see my solution working is with a separate thread for each connection.
What is the best way to accomplish what I am doing with the while loop?
Do not dispose of the client; root it in a static variable instead. That way it just stays alive and keeps raising events. There is no need to have a waiting loop at all; remove the using statement.
If this is the last thread in your program you indeed need to keep it alive.
Thread.Sleep(Timeout.Infinite);
What is the best way to accomplish what I am doing with the while loop?
You may want to do the following two things:
Provide your RunIdle with a CancellationToken so that it can be cleanly stopped.
In your busy waiting while loop, use Task.Delay to "sleep" until you need to "ping" the mailbox again.
static async Task RunIdle(CancellationToken cancelToken, TimeSpan pingInterval)
{
// ...
// the new busy waiting ping loop
while (!cancelToken.IsCancellationRequested)
{
// Do your stuff to keep the connection alive.
// Wait a while, while freeing up the thread.
if (!cancelToken.IsCancellationRequested)
await Task.Delay(pingInterval, cancelToken);
}
}
If you don't need to do anything to keep the connection alive, except keep the process from terminating:
Read from the console and wait until there is a ctrl+c or some clean "exit" command.
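A minimal sketch of that for a console host, using a TaskCompletionSource completed from the Ctrl+C handler (the async Main and the handler wiring are assumptions, not from the original code):
static async Task Main()
{
    var done = new TaskCompletionSource<bool>();
    Console.CancelKeyPress += (s, e) =>
    {
        e.Cancel = true;          // let us clean up instead of killing the process
        done.TrySetResult(true);
    };
    using (var client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true))
    {
        client.NewMessage += OnNewMessage;   // events keep firing while we wait
        Console.WriteLine("Connected to gmail. Ctrl+C to exit.");
        await done.Task;          // parks Main without spinning or blocking a pool thread
    }
}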
A simple solution that avoids blocking a thread would be to just use Task.Delay() for some arbitrary amount of time.
static async Task RunIdle(CancellationToken token = default(CancellationToken))
{
using (ImapClient Client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true))
{
...
var interval = TimeSpan.FromHours(1);
while (!token.IsCancellationRequested)
{
await Task.Delay(interval, token);
}
}
}
Instead of while(true) you could perhaps use a CancellationToken, in case the execution needs to be stopped at some point. Task.Delay() supports this as well.
These are all solutions for a console application, though. If running as a service, the service host will make sure your program keeps executing after starting, so you can just return.

Constantly read from NetworkStream async

I am a fairly new .NET developer and I'm currently reading up on async/await. I need to work on a framework used for testing devices that are controlled by remotely accessing servers using TCP and reading/writing data from/to these servers. This will be used for unit tests.
There is no application-layer protocol and the server may send data based on external events. Therefore I must be able to continuously capture any data coming from the server and write it to a buffer, which can be read from a different context.
My idea goes somewhere along the lines of the following snippet:
// ...
private MemoryStream m_dataBuffer;
private NetworkStream m_stream;
// ...
public async void Listen()
{
while (Connected)
{
try
{
int bytesReadable = m_dataBuffer.Capacity - (int)m_dataBuffer.Position;
// (...) resize m_dataBuffer if necessary (...)
m_stream.ReadTimeout = Timeout;
// note: you cannot await inside a lock; synchronization of m_dataBuffer
// with the reading context is omitted here
int bytesRead = await m_stream.ReadAsync(m_dataBuffer.GetBuffer(),
(int)m_dataBuffer.Position, bytesReadable);
m_dataBuffer.Position += bytesRead;
}
catch (IOException ex)
{
// handle read timeout.
}
catch (Exception)
{
throw new TerminalException("ReadWhileConnectedAsync() exception");
}
}
}
This seems to have the following disadvantages:
If calling and awaiting the Listen function, the caller hangs, even though the caller must be able to continue (as the network stream should be read as long as the connection is open).
If declaring it async void and not awaiting it, the application crashes when exceptions occur in the Task.
If declaring it async Task and not awaiting it, I assume the same happens (plus I get a warning)?
The following questions ensue:
Can I catch exceptions thrown in Listen if I don't await it?
Is there a better way to constantly read from a network stream using async/await?
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?
async void should at the very least be async Task with the return value thrown away. That makes the method adhere to sane standards and pushes the responsibility onto the caller, which is better equipped to make decisions about waiting and error handling.
But you don't have to throw away the return value. You can attach a logging continuation:
async Task Log(Task t) {
try { await t; }
catch ...
}
And use it like this:
Log(Listen());
Throw away the task returned by Log (or, await it if you wish to logically wait).
Or, simply wrap everything in Listen in a try-catch. This appears to be the case already.
Can I catch exceptions thrown in Listen if I don't await it?
You can find out about exceptions using any way that attaches a continuation or waits synchronously (the latter is not your strategy).
Is there a better way to constantly read from a network stream using async/await?
No, this is the way it's supposed to be done. At any given time there should be one read IO outstanding. (Or zero for a brief period of time.)
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?
Both will work correctly. There is a trade-off to be made: synchronous code can be simpler, easier to debug, and even less CPU intensive; asynchronous code saves on thread stack memory and context switches. In UI apps await has significant benefits.
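A sketch of that single-outstanding-read pattern with async/await, keeping the field names from the question (the CancellationToken parameter and method name are additions):
private async Task ListenAsync(CancellationToken token)
{
    var buffer = new byte[4096];
    while (Connected && !token.IsCancellationRequested)
    {
        // exactly one read IO is outstanding at any time
        int bytesRead = await m_stream.ReadAsync(buffer, 0, buffer.Length, token);
        if (bytesRead == 0)
            break;                       // remote side closed the connection
        lock (m_dataBuffer)              // guard the shared buffer, not the await
            m_dataBuffer.Write(buffer, 0, bytesRead);
    }
}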
I would do something like this:
const int MaxBufferSize = ... ;
static Queue<byte> m_buffer = new Queue<byte>(MaxBufferSize);
static NetworkStream m_stream = ... ;
...
// this will create a thread that reads bytes from
// the network stream and writes them into the buffer
Task.Run(() => ReadNetworkStream());
private static void ReadNetworkStream()
{
while (true)
{
var next = m_stream.ReadByte();
if (next < 0) break; // no more data
while (m_buffer.Count >= MaxBufferSize)
m_buffer.Dequeue(); // drop front
m_buffer.Enqueue((byte)next);
}
}

Polling the right way?

I am a software/hardware engineer with quite some experience in C and embedded technologies. Currently I am busy writing some applications in C# (.NET) that use hardware for data acquisition. Now the following, for me burning, question:
For example: I have a machine that has an end switch for detecting the final position of an axis. I am using a USB data acquisition module to read the data. Currently I am using a Thread to continuously read the port status.
There is no interrupt functionality on this device.
My question: Is this the right way? Should I use timers, threads, or Tasks? I know polling is something that most of you guys "hate", but any suggestion is welcome!
IMO, this heavily depends on your exact environment, but first off: you should not use Threads directly anymore in most cases; Tasks are the more convenient and more powerful solution for that.
Low polling frequency: Timer + polling in the Tick event:
A timer is easy to handle and stop. No need to worry about threads/tasks running in the background, but the handling happens in the main thread
Medium polling frequency: Task + await Task.Delay(delay):
await Task.Delay(delay) does not block a thread-pool thread, but because of the context switching the minimum delay is ~15ms (a sketch of this option follows at the end of this answer)
High polling frequency: Task + Thread.Sleep(delay)
usable at 1ms delays - we actually do this to poll our USB measurement device
This could be implemented as follows:
int delay = 1;
var cancellationTokenSource = new CancellationTokenSource();
var token = cancellationTokenSource.Token;
var listener = Task.Factory.StartNew(() =>
{
while (true)
{
// poll hardware
Thread.Sleep(delay);
if (token.IsCancellationRequested)
break;
}
// cleanup, e.g. close connection
}, token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
In most cases you can just use Task.Run(() => DoWork(), token), but there is no overload that accepts the TaskCreationOptions.LongRunning option, which tells the task scheduler not to use a normal thread-pool thread.
But as you see, Tasks are easier to handle (and awaitable, though that doesn't apply here). In particular, "stopping" is just calling cancellationTokenSource.Cancel() from anywhere in the code.
You can even share this token between multiple actions and stop them all at once. Also, tasks that have not yet started are not started once the token is cancelled.
You can also attach another action to a task to run after one task:
listener.ContinueWith(t => ShutDown(t));
This is then executed after the listener completes and you can do cleanup (t.Exception contains the exception of the tasks action if it was not successful).
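For the medium-frequency option from the list above, the loop might look roughly like this (a sketch; PollHardware and the 50 ms interval are placeholders):
var cts = new CancellationTokenSource();
var poller = Task.Run(async () =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        PollHardware();                       // placeholder for the actual port read
        await Task.Delay(50, cts.Token);      // frees the thread between polls (~15 ms minimum resolution)
    }
}, cts.Token);
// cts.Cancel() stops the loop; the pending Task.Delay throws and the task ends up cancelled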
IMO polling cannot be avoided.
What you can do is create a module with its own independent thread/Task that polls the port regularly. When the data changes, this module raises an event that is handled by the consuming applications.
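A sketch of such a module, assuming a device-specific ReadPort() helper and a simple changed-value event (all names here are placeholders):
public class PortMonitor
{
    public event EventHandler<bool> PortChanged;  // raised whenever the polled value changes
    private bool _last;

    public Task StartAsync(CancellationToken token) => Task.Run(async () =>
    {
        while (!token.IsCancellationRequested)
        {
            bool current = ReadPort();            // device-specific read of the end switch
            if (current != _last)
            {
                _last = current;
                PortChanged?.Invoke(this, current);
            }
            await Task.Delay(10, token);          // polling interval
        }
    }, token);

    private bool ReadPort() { /* read the USB acquisition module here */ return false; }
}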
Maybe:
public async Task Poll(Func<bool> condition, TimeSpan timeout, string message = null)
{
// https://github.com/dotnet/corefx/blob/3b24c535852d19274362ad3dbc75e932b7d41766/src/Common/src/CoreLib/System/Threading/ReaderWriterLockSlim.cs#L233
var timeoutTracker = new TimeoutTracker(timeout);
while (!condition())
{
await Task.Yield();
if (timeoutTracker.IsExpired)
{
if (message != null) throw new TimeoutException(message);
else throw new TimeoutException();
}
}
}
Look into SpinWait or into the Task.Delay internals as well.
I've been thinking about this, and what you could probably do is build an abstraction layer on top of Tasks and Func/Action, with the polling service taking the Func/Action and the polling interval as args. This would keep the implementation of either piece separate while keeping them open to injection into the polling service.
So for example you'd have something like this serve as your polling class
public class PollingService {
    public async Task Poll(Func<bool> func, int interval, string exceptionMessage) {
        while (func.Invoke()) {
            await Task.Delay(interval);
        }
        throw new PollingException(exceptionMessage);
    }
    public async Task Poll<T>(Func<T, bool> func, T arg, int interval, string exceptionMessage) {
        while (func.Invoke(arg)) {
            await Task.Delay(interval);
        }
        throw new PollingException(exceptionMessage);
    }
}

async / await or Begin / End with TcpListener?

I've started to build a TCP server which will be able to accept many clients and receive new data from all of them simultaneously.
Until now, I used IOCP for TCP servers, which was pretty easy and comfortable,
but this time I want to use the async/await technology that was released in C# 5.0.
The problem is that when I started to write the server using async/await, it seemed to me that for a multi-user TCP server, async/await and the regular synchronous methods work the same way.
Here's a simple example to be more specific:
class Server
{
private TcpListener _tcpListener;
private List<TcpClient> _clients;
private bool IsStarted;
public Server(int port)
{
_tcpListener = new TcpListener(new IPEndPoint(IPAddress.Any, port));
_clients = new List<TcpClient>();
IsStarted = false;
}
public void Start()
{
IsStarted = true;
_tcpListener.Start();
Task.Run(() => StartAcceptClientsAsync());
}
public void Stop()
{
IsStarted = false;
_tcpListener.Stop();
}
private async Task StartAcceptClientsAsync()
{
while (IsStarted)
{
// ******** Note 1 ********
var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
_clients.Add(acceptedClient);
IPEndPoint ipEndPoint = (IPEndPoint) acceptedClient.Client.RemoteEndPoint;
Console.WriteLine("Accepted new client! IP: {0} Port: {1}", ipEndPoint.Address, ipEndPoint.Port);
Task.Run(() => StartReadingDataFromClient(acceptedClient));
}
}
private async void StartReadingDataFromClient(TcpClient acceptedClient)
{
try
{
IPEndPoint ipEndPoint = (IPEndPoint) acceptedClient.Client.RemoteEndPoint;
while (true)
{
MemoryStream bufferStream = new MemoryStream();
// ******** Note 2 ********
byte[] buffer = new byte[1024];
int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, buffer.Length);
if (packetSize == 0)
{
break;
}
Console.WriteLine("Accepted new message from: IP: {0} Port: {1}\nMessage: {2}",
ipEndPoint.Address, ipEndPoint.Port, Encoding.Default.GetString(buffer));
}
}
catch (Exception)
{
}
finally
{
acceptedClient.Close();
_clients.Remove(acceptedClient);
}
}
}
Now, if you look at the lines under 'Note 1' and 'Note 2', they can easily be changed to:
Note 1 from
var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
to
var acceptedClient = _tcpListener.AcceptTcpClient();
And Note 2 from
int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, 1024);
to
int packetSize = acceptedClient.GetStream().Read(buffer, 0, 1024);
And the server will work exactly the same.
So why use async/await in a TCP listener for multiple users if it behaves the same as the regular synchronous methods?
Should I keep using IOCP in that case? It's pretty easy and comfortable for me, but I am afraid that it will become obsolete or even unavailable in newer .NET versions.
Until now, I used IOCP for TCP servers, which was pretty easy and comfortable, but this time I want to use the async/await technology that was released in C# 5.0.
I think you need to get your terminology right.
Having BeginOperation and EndOperation methods is called Asynchronous Programming Model (APM). Having a single Task (or Task<T>) returning method is called Task-based Asynchronous Pattern (TAP). I/O Completion Ports (IOCP) are a way to handle asynchronous operations on Windows and asynchronous I/O methods using both APM and TAP use them.
What this means is that the performance of APM and TAP is going to be very similar. The big difference between the two is that code using TAP and async-await is much more readable than code using APM and callbacks.
So, if you want to (or have to) write your code asynchronously, use TAP and async-await, if you can. But if you don't have a good reason to do that, just write your code synchronously.
On the server, a good reason to use asynchrony is scalability: asynchronous code can handle many more requests at the same time, because it tends to use fewer threads. If you don't care about scalability (for example because you're not going to have many users at the same time), then asynchrony doesn't make much sense.
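To make the readability difference concrete, here is roughly the same accept written both ways (a sketch, not taken from your code; HandleClient is a placeholder, and the await form has to live inside an async method):
// APM: Begin/End plus a callback
listener.BeginAcceptTcpClient(ar =>
{
    TcpClient client = listener.EndAcceptTcpClient(ar);
    HandleClient(client);
}, null);

// TAP: the same thing reads top to bottom with await
TcpClient client2 = await listener.AcceptTcpClientAsync();
HandleClient(client2);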
Also, your code contains some practices that you should avoid:
Don't use async void methods, there is no good way to tell when they complete and they have bad exception handling. The exception is event handlers, but that applies mostly to GUI applications.
Don't use Task.Run() if you don't have to. Task.Run() can be useful if you want to leave the current thread (usually the UI thread) or if you want to execute synchronous code in parallel. But it doesn't make much sense to use it to start asynchronous operations in server applications.
Don't ignore Tasks returned from methods, unless you're sure they're not going to throw an exception. Exceptions from ignored Tasks won't do anything, which could very easily mask a bug.
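Putting those three points together, the accept loop from your code might look roughly like this (a sketch; HandleClientAsync stands in for an async Task version of StartReadingDataFromClient):
private async Task StartAcceptClientsAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
        _clients.Add(acceptedClient);
        // async Task instead of async void, and no Task.Run: the handler is
        // already asynchronous. Keep the returned task so failures are observed.
        Task handlerTask = HandleClientAsync(acceptedClient);
        _ = handlerTask.ContinueWith(
            t => Console.WriteLine(t.Exception),
            TaskContinuationOptions.OnlyOnFaulted);
    }
}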
After a couple of searches I found this:
Q: There is a list of TCP server practices, but which one is the best practice for managing 5000+ clients per second?
A: My assumption is "writing async methods"; moreover, if you are working with a database at the same time, "async methods and iterators" will do more.
Here is sample code using async/await:
async await tcp server
I think it is easier to build a TCP server using async/await than with IOCP.
