UdpClient.BeginReceive vs. UdpClient.Receive on a separate thread - c#

There is a similar question: When should I use UdpClient.BeginReceive? When should I use UdpClient.Receive on a background thread?
On that post, Marc Gravell wrote:
"Another advantage of the async approach is that you can always get the data you need, queue another async fetch, and then process the new data on the existing async thread, giving some more options for parallelism (one reading, one processing)."
Would you be able to give me an example of what you mean by this?
My problem is that I am listening for UDP packets, but I don't have time to process them in the receiving thread: I want to return to my Receive as soon as possible so as not to lose any packets in the meantime (since the socket will drop any packets I don't receive, UDP not being TCP). What would be the best way to do this?

With asynchronous IO, you can return to your caller as soon as you start the IO operation. Because IO-bound work is ultimately completed by the operating system, we can take advantage of this.
When you use a blocking API such as UdpClient.Receive on a different thread in order to keep your application responsive, that thread will mostly just block, waiting for the receive to complete. With async IO, as Marc said, you can free the thread until the IO operation completes and do different work in the meantime.
For example, we can use UdpClient.ReceiveAsync, which returns a Task<UdpReceiveResult>. Because a task is awaitable (see this for more on awaitables), we can take advantage of async IO:
public async Task ReceiveAndDoWorkAsync()
{
    var udpClient = new UdpClient(11000); // initialize the client (the port is just an example)
    var receiveTask = udpClient.ReceiveAsync();

    // Do some more work here while the receive is in flight.

    // Wait for the operation to complete, meanwhile returning control to the
    // calling method (without creating any new threads).
    await receiveTask;
}
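To come back to the original question (getting back to the receive quickly so no packets are dropped while you process), one way to apply Marc's "one reading, one processing" idea is to have the receive loop do nothing but read and enqueue, and let a separate consumer process the queue. This is only a minimal sketch, assuming modern .NET with System.Threading.Channels; the port number and ProcessPacket are placeholders, not from the original post:

using System;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class UdpPump
{
    private readonly UdpClient udpClient = new UdpClient(11000); // example port
    private readonly Channel<UdpReceiveResult> queue = Channel.CreateUnbounded<UdpReceiveResult>();

    // Reader: gets back to the next ReceiveAsync as quickly as possible.
    public async Task ReceiveLoopAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            UdpReceiveResult result = await udpClient.ReceiveAsync();
            queue.Writer.TryWrite(result); // hand the datagram off without processing it here
        }
    }

    // Processor: consumes datagrams at its own pace, independently of the receive loop.
    public async Task ProcessLoopAsync(CancellationToken token)
    {
        await foreach (UdpReceiveResult result in queue.Reader.ReadAllAsync(token))
        {
            ProcessPacket(result.Buffer); // placeholder for your own handling
        }
    }

    private void ProcessPacket(byte[] buffer) { /* ... */ }
}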

Related

What is the benefit of C# async/await if it still waits for the previous execution to complete?

I tried reading many articles and Stack Overflow questions regarding the real use of async/await (basically asynchronous method calls), but somehow I am still not able to work out how it provides parallelism and non-blocking behavior. I referred to a few posts like these:
Is it OK to use async/await almost everywhere?
https://news.ycombinator.com/item?id=19010989
Benefits of using async and await keywords
So if I write a piece of code like this
var user = await GetUserFromDBAsync();
var destination = await GetDestinationFromDBAsync();
var address = await GetAddressFromDBAsync();
Even though all three methods are asynchronous, the code will still not move on to the second line to get the destination from the database until it has fully retrieved the user from the database.
So where is the parallelism and non-blocking behavior of async/await here? It still waits for the first operation to complete before executing the next line.
Or is my whole understanding of async wrong?
EDIT
Any example would really help!
The point of async/await is not that methods are executed more quickly. Rather, it's about what a thread is doing while those methods are waiting for a response from a database, the file system, an HTTP request, or some other I/O.
Without asynchronous execution the thread just waits. It is, in a sense, wasted, because during that time it is doing nothing. We don't have an unlimited supply of threads, so having threads sit and wait is wasteful.
Async/await simply allows threads to do other things. Instead of waiting, the thread can serve some other request. And then, when the database query is finished or the HTTP request receives a response, the next available thread picks up execution of that code.
So yes, the individual lines in your example still execute in sequence. They just execute more efficiently. If your application is receiving many requests, it can process those requests sooner because more threads are available to do work instead of blocking, just waiting for a response from some I/O operation.
I highly recommend this blog post: There Is No Thread. There is some confusion that async/await is about executing something on another thread. It is not about that at all. It's about ensuring that no thread is sitting and waiting when it could be doing something else.
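To make the "no thread is sitting and waiting" point concrete, here is a small, purely illustrative console sketch (the delay stands in for a database or HTTP call): the thread that starts the method is released at the await, and the continuation is often picked up by a different thread-pool thread once the work completes:

using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Console.WriteLine($"Before await, thread {Environment.CurrentManagedThreadId}");

        // No thread is blocked during this delay; the method returns to its
        // caller and is resumed later by whichever thread is available.
        await Task.Delay(1000);

        Console.WriteLine($"After await, thread {Environment.CurrentManagedThreadId}");
    }
}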
You can execute them in parallel/concurrently and still await them in a non-blocking manner with Task.WhenAll. You don't have to await each async method call individually.
So you have the performance gain and at the same time a responsive UI:
// start all three operations; each call begins running as soon as it is made
var userTask = GetUserFromDBAsync();
var destinationTask = GetDestinationFromDBAsync();
var addressTask = GetAddressFromDBAsync();
// await all of them together, without blocking the calling thread
await Task.WhenAll(userTask, destinationTask, addressTask);
// all tasks have completed, so reading the results here is safe
var user = userTask.Result;
var destination = destinationTask.Result;
var address = addressTask.Result;
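As a small follow-up (not from the original answer): because the tasks are already complete after Task.WhenAll, you can also await them instead of reading .Result, which avoids wrapping any failure in an AggregateException:

await Task.WhenAll(userTask, destinationTask, addressTask);
// the tasks are finished here, so these awaits complete immediately
var user = await userTask;
var destination = await destinationTask;
var address = await addressTask;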

Run async function on background thread?

I'm working on a project that includes a server and a client.
The client sends a UDP packet to the server every second, and depending on the server's response it might open a TCP connection with the server (and receive files from the server). The client's program has a GUI which I don't want to block (made in WPF MVVM), and it's also problematic to start an async function in the constructor of the MainWindow, as constructors can't be async.
So my question is: can I, and should I, run async functions on a background thread? Does async improve performance if it's on a background thread? I'm talking about the difference between these options:
public MainWindow()
{
    InitializeComponent();
    DirectoryViewModel DirectoryVM = new DirectoryViewModel();
    this.DataContext = DirectoryVM;
    DirectoryVM.StartListen(); //Unwanted as constructor won't finish
}
To
public MainWindow()
{
    InitializeComponent();
    DirectoryViewModel DirectoryVM = new DirectoryViewModel();
    this.DataContext = DirectoryVM;
    Task.Run(DirectoryVM.StartListen); //Possible but async might be faster
}
To
public MainWindow()
{
    InitializeComponent();
    DirectoryViewModel DirectoryVM = new DirectoryViewModel();
    this.DataContext = DirectoryVM;
    Task.Run(async () => await DirectoryVM.StartListenAsync()); //Is it faster than the second option?
}
From what I've seen I shouldn't run async code on a background thread like this, but why? Isn't it faster than running sync code on a background thread?
Also, I guess it's not really different, but on my server I'll create a constantly running thread that listens for TCP connections and sends files over them; should I make the send-file function async or not?
Thanks!
... is it ok to start an async method on a separate thread ...
I think you fundamentally misunderstand async methods. Calling an async method does not create a new thread or offload the entire method onto another thread. Calling an async method is like calling any other method, except that at some point it may return execution to the caller (with a Task as a promise) and finish its remaining work later.
Though it is possible to spin up an always-running periodic listener using an async method, that is rather against its purpose.
When you call an async method, you expect it to run to completion within a reasonable time (hence why you want to await it), but it might take long enough that you can do something else in the meantime. In your case, you should explicitly start a new background task or thread that does the periodic check for you. It can be a new thread or, better, a new task (Task.Run, for example).
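For illustration, a minimal sketch of such a background task (assumed names only: SendStatusPacketAsync stands in for the client's once-per-second UDP send and is not from the original question):

// Start the periodic work on the thread pool with Task.Run, as suggested above.
// Cancelling the token ends the loop (Task.Delay throws OperationCanceledException).
var cts = new CancellationTokenSource();

Task listenerTask = Task.Run(async () =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        await SendStatusPacketAsync();                        // async I/O; does not block the thread
        await Task.Delay(TimeSpan.FromSeconds(1), cts.Token); // wait a second between packets
    }
});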
From what I've seen I shouldn't run async code on background thread
like this, but why?
According to your comments, DirectoryVM.StartListen() starts the constantly running listener, which does not have to be async. Unless it makes any async calls, it is not something awaitable.
Isn't it faster than running sync code on a background thread?
Async is not about speed, but about not blocking threads. It does not matter whether the thread is foreground or background: using an async method for an I/O operation, like calling an HTTP endpoint or sending a UDP packet, is beneficial whenever the thread can do other things while waiting, or whenever blocking that thread might cause other issues.
Also I guess it's not really different but on my server I'll create a
constant running thread that listens for tcp connections and will send
files over it, should I make the send file function async or not?
You should, if it is beneficial. See previous part.
From what I've seen I shouldn't run async code on background thread like this, but why?
I'm not aware of any recommendations not to run asynchronous code with Task.Run.
Task.Run should be avoided on ASP.NET (for both synchronous and asynchronous code), but that doesn't apply here since you have a GUI app.
Isn't it faster than running sync code on a background thread?
No. The code will not be any faster. async is about freeing up threads, not running faster.
I think the concern that you've seen is that of ignoring tasks. Using Task.Run and ignoring the task it returns is problematic because that's a form of fire-and-forget. So if your loop fails due to an exception, your application would never know. One way around this is to await the task from an async void method. For example, as your Window's Initialized event handler:
async void Window_Initialized(object sender, EventArgs e)
{
    await Task.Run(DirectoryVM.StartListen);

    // or, if the method is asynchronous:
    await DirectoryVM.StartListenAsync();
}

C# Thread.Sleep()

I need somehow to bypass the Thread.Sleep() method so that my UI thread is not blocked, but I cannot delete the method.
I need to solve the problem without deleting the Sleep method. The Sleep method simulates a delay (an unresponsive application). I need to handle that.
An application is considered non-responsive when it doesn't pump its message queue. The message queue in Winforms is pumped on the GUI thread. Therefore, to make your application "responsive", you need to make sure the GUI thread has opportunities to pump the message queue - in other words, it must not run your code.
You mentioned that the Thread.Sleep simulates a "delay" in some operation you're making. However, you need to consider two main causes of such "delays":
An I/O request waiting for completion (reading a file, querying a database, sending an HTTP request...)
CPU work
The two have different solutions. If you're dealing with I/O, the best way would usually be to switch over to using asynchronous I/O. This is a breeze with await:
var response = await new HttpClient().GetAsync("http://www.google.com/");
This ensures that your GUI thread can do its job while your request is pending, and your code will resume on the UI thread after the response arrives.
The second one is mainly solved with multi-threading. You should be extra careful when using multi-threading, because it adds in many complexities you don't get in a single-threaded model. The simplest way of treating multi-threading properly is by ensuring that you're not accessing any shared state - that's where synchronization becomes necessary. Again, with await, this is a breeze:
var someData = "Very important data";
var result = await Task.Run(() => RunComplexComputation(someData));
Again, the computation will run outside of your UI thread, but as soon as it's completed and the GUI thread is idle again, your code execution will resume on the UI thread, with the proper result.
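Putting the two cases together, the usual shape is an async event handler that awaits the I/O and offloads the CPU-bound part with Task.Run, so the GUI thread keeps pumping messages throughout. This is a sketch only: the handler name, the label, and RunComplexComputation's return type are assumptions, not from the original question:

private async void startButton_Click(object sender, EventArgs e)
{
    // I/O-bound part: awaited, so the GUI thread stays free while the request is pending.
    string page = await new HttpClient().GetStringAsync("http://www.google.com/");

    // CPU-bound part: offloaded to the thread pool, then awaited.
    string result = await Task.Run(() => RunComplexComputation(page));

    // Back on the GUI thread here, so it is safe to touch controls.
    resultLabel.Text = result;
}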
Something like this, maybe?
public async void Sleep(int milliseconds)
{
    // your code
    await Task.Delay(milliseconds); // non-blocking sleep
    // your code
}
And if, for reasons that escape me, you HAVE to use Thread.Sleep, you can handle it like this:
public async void YourMethod()
{
    // your code
    await Task.Run(() => Thread.Sleep(1000)); // the sleep blocks a thread-pool thread, not the UI thread
    // your code
}
Use multithreading.
Use a different thread for the sleep rather than the main GUI thread. This way it will not interfere with your main application.

Will ManualResetEvent block the entire program?

I have a program that begins itself by listening for connections. I wanted to implement a pattern in which the server would accept a connection and pass that individual connection to a user class for processing: future packet reception and handling of the data.
I ran into trouble with the synchronous pattern before I found out that asynchronous use of the Socket class isn't scary. But then I ran into more trouble. It seemed that, in a while (true) loop, since BeginAccept() is asynchronous, the program would constantly move through this loop and eventually run into an OutOfMemoryException. I needed something to listen for a connection, and immediately hand off responsibility of that connection to some other class.
So I read Microsoft's example and found out about ManualResetEvent. I could actually specify when I was ready for the loop to begin listening again! But after reading some questions here on Stack Overflow, I have become confused.
My worry is that even though I have asynchronously accepted a connection, the entire program will block while it's trying to listen for a new connection upon re-entering the loop. This isn't ideal if I'm handling multiple users.
I'm very new to the world of asynchronous I/O, so I would appreciate even the angriest of comments about my vocabulary or a misuse of a phrase.
Code:
static void Main(string[] args)
{
    MainSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
    MainSocket.Bind(new IPEndPoint(IPAddress.Parse("192.168.1.74"), 1626));
    MainSocket.Listen(10);
    while (true)
    {
        Ready.Reset();
        AcceptCallback = new AsyncCallback(ConnectionAccepted);
        MainSocket.BeginAccept(AcceptCallback, MainSocket);
        Ready.WaitOne();
    }
}
static void ConnectionAccepted(IAsyncResult IAr)
{
    Ready.Set();
    Connection UserConnection = new Connection(MainSocket.EndAccept(IAr));
}
The Microsoft example, in which they use the old-style WaitHandle based events, will work but frankly it is a very odd and awkward way to implement asynchronous code. I get the feeling that the events are there in the example mainly as a way of artificially synchronizing the main thread so it has something to do. But it's not really the right approach.
One option is to just not even accept sockets asynchronously. Instead, use the asynchronous I/O for when the socket is connected and use a synchronous loop in the main thread to accept sockets. This winds up being pretty much exactly what the Microsoft sample does anyway, but keeps all of the accept logic in the main thread instead of switching back and forth between the main thread (which starts the accept operation) and some IOCP thread that handles the completion.
Another option is to just give the main thread something else to do. For a simple example, this could be simply waiting for some user input to signal that the program should shut down. Of course, in a real program the main thread could be something useful (e.g. handling the message loop in a GUI program).
If the main thread is given something else to do, then you can use the asynchronous BeginAccept() in the way it was intended: you call the method to start the accept operation, and then don't call it again until that operation completes. The initial call happens when you initialize your server, but all subsequent calls happen in the completion callback.
In that case, your completion callback method looks more like this:
static void ConnectionAccepted(IAsyncResult IAr)
{
    Connection UserConnection = new Connection(MainSocket.EndAccept(IAr));
    MainSocket.BeginAccept(ConnectionAccepted, MainSocket);
}
That is, you simply call the BeginAccept() method in the completion callback itself. (Note that there's no need to create the AsyncCallback object explicitly; the compiler will implicitly convert the method name to the correct delegate type instance on your behalf).
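For completeness, the corresponding Main might look something like the sketch below: the loop and the ManualResetEvent disappear, the first BeginAccept is issued once at startup, and the main thread is given something else to do (here, trivially, waiting for Enter; in a real server it would do useful work). The endpoint is the one from the question; the Console calls are just placeholders:

static void Main(string[] args)
{
    MainSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
    MainSocket.Bind(new IPEndPoint(IPAddress.Parse("192.168.1.74"), 1626));
    MainSocket.Listen(10);

    // Start the first accept; ConnectionAccepted (shown above) queues each subsequent one.
    MainSocket.BeginAccept(ConnectionAccepted, MainSocket);

    // The main thread is now free for other work; here it simply waits for Enter.
    Console.WriteLine("Server listening. Press Enter to exit.");
    Console.ReadLine();
}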

Async/Await and Tasks

OK, I think I have understood the whole async/await thing. Whenever you await something, the function you're running returns, allowing the current thread to do something else while the async function completes. The advantage is that you don't start a new thread.
This is not that hard to understand, as it's somewhat how Node.JS works, except Node uses a lot of callbacks to make this happen. This is where I fail to understand the advantage, however.
The Socket class doesn't currently have any Async methods (that is, ones that work with async/await). I can of course pass a socket to the stream class and use the async methods there; however, this leaves a problem with accepting new sockets.
There are two ways of doing this, as far as I know. In both cases I accept new sockets in an infinite loop on the main thread. In the first case I can start a new task for every socket that I accept, and run the stream.ReceiveAsync within that task. However, won't an await actually block that task, since the task will have nothing else to do? That would again result in more threads spawned on the thread pool, which is no better than using synchronous methods inside a task.
My second option is to put all accepted sockets in one of several lists (one list per thread), and inside those threads run a loop, running await stream.ReceiveAsync for every socket. This way, whenever I hit an await on stream.ReceiveAsync, the loop can move on and start receiving from the other sockets.
I guess my real question is whether this is in any way more effective than a thread pool, and, in the first case, whether it really will be worse than just using the APM methods.
I also know you can wrap APM methods into functions using await/async, but the way I see it, you still get the "disadvantage" of APM methods, with the extra overhead of state machines in async/await.
The async socket API is not based around Task[<T>], so it isn't directly usable from async/await - but you can bridge that fairly easily - for example (completely untested):
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

public class AsyncSocketWrapper : IDisposable
{
    public void Dispose()
    {
        var tmp = socket;
        socket = null;
        if (tmp != null) tmp.Dispose();
    }
    public AsyncSocketWrapper(Socket socket)
    {
        this.socket = socket;
        args = new SocketAsyncEventArgs();
        args.Completed += args_Completed;
    }
    void args_Completed(object sender, SocketAsyncEventArgs e)
    {
        // might want to switch on e.LastOperation
        var source = (TaskCompletionSource<int>)e.UserToken;
        if (ShouldSetResult(source, args)) source.TrySetResult(args.BytesTransferred);
    }
    private Socket socket;
    private readonly SocketAsyncEventArgs args;
    public Task<int> ReceiveAsync(byte[] buffer, int offset, int count)
    {
        TaskCompletionSource<int> source = new TaskCompletionSource<int>();
        try
        {
            args.SetBuffer(buffer, offset, count);
            args.UserToken = source;
            if (!socket.ReceiveAsync(args))
            {
                // the operation completed synchronously, so no Completed event will fire
                if (ShouldSetResult(source, args))
                {
                    return Task.FromResult(args.BytesTransferred);
                }
            }
        }
        catch (Exception ex)
        {
            source.TrySetException(ex);
        }
        return source.Task;
    }
    static bool ShouldSetResult<T>(TaskCompletionSource<T> source, SocketAsyncEventArgs args)
    {
        if (args.SocketError == SocketError.Success) return true;
        var ex = new InvalidOperationException(args.SocketError.ToString());
        source.TrySetException(ex);
        return false;
    }
}
Note: you should probably avoid running the receives in a loop - I would advise making each socket responsible for pumping itself as it receives data. The only thing you need a loop for is to periodically sweep for zombies, since not all socket deaths are detectable.
Note also that the raw async socket API is perfectly usable without Task[<T>] - I use that extensively. While await may have uses here, it is not essential.
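With the await-based wrapper above, "each socket pumping itself" naturally becomes one small receive loop per socket, rather than one loop sweeping all sockets. This is a sketch only; ProcessData and the buffer size are placeholders:

// One pump per connected socket: it keeps issuing its own receive as data arrives.
public async Task PumpAsync(AsyncSocketWrapper wrapper)
{
    var buffer = new byte[4096];
    while (true)
    {
        int bytesRead = await wrapper.ReceiveAsync(buffer, 0, buffer.Length);
        if (bytesRead == 0) break;          // zero bytes means the peer closed the connection
        ProcessData(buffer, bytesRead);     // placeholder: hand the data off and keep this quick
    }
}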
This is not that hard to understand, as it's somewhat how Node.JS works, except Node uses a lot of callbacks to make this happen. This is where I fail to understand the advantage, however.
Node.js does use callbacks, but it has one other significant facet that really simplifies those callbacks: they are all serialized to the same thread. So when you're looking at asynchronous callbacks in .NET, you're usually dealing with multithreading as well as asynchronous programming (except for EAP-style callbacks).
Asynchronous programming using callbacks is called "continuation-passing style" (CPS). It's the only real option for Node.js but is one of many options on .NET. In particular, CPS code can get extremely complex and difficult to maintain, so the async/await compiler transform was introduced so you could write "normal-looking" code and the compiler would translate it to CPS for you.
In both cases I accept new sockets in an infinite loop on the main thread.
If you're writing a server, then yes, somewhere you will be repeatedly accepting new client connections. Also, you should be continuously reading from each connected socket, so each socket also has a loop.
In the first case I can start a new task for every socket that I accept, and run the stream.ReceiveAsync within that task.
You wouldn't need a new task. That's the whole point of asynchronous programming.
My second option is to put all accepted sockets in one of several lists (one list per thread), and inside those threads run a loop, running await stream.ReceiveAsync for every socket.
I'm not sure why you'd need multiple threads, or any dedicated threads at all.
You seem a bit confused about how async and await work. I recommend reading my own introduction, the MSDN overview, the Task-based Asynchronous Pattern guidance, and the async FAQ, in that order.
I also know you can wrap APM methods into functions using await/async, but the way I see it, you still get the "disadvantage" of APM methods, with the extra overhead of state machines in async/await.
I'm not sure what disadvantage you're referring to. The overhead of state machines, while non-zero, is negligible in the face of socket I/O.
If you're looking to do socket I/O, you have several options. For reads, you can either do them in an "infinite" loop using APM or Task wrappers around the APM or Async methods. Alternatively, you could convert them into a stream-like abstraction using Rx or TPL Dataflow.
Another option is a library I wrote a few years ago called Nito.Async. It provides EAP-style (event-based) sockets that handle all the thread marshaling for you, so you end up with something simpler like Node.js. Of course, like Node.js, this simplicity means it won't scale as well as a more complex solution.
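As a concrete illustration of the "loop per socket, no dedicated threads" shape with Task-based methods, here is a sketch under assumed names (it uses TcpListener/NetworkStream rather than the raw Socket API discussed above; HandleClientAsync and the port are illustrative placeholders):

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// One accept loop plus one read loop per client, all on await with no dedicated threads.
public async Task AcceptLoopAsync()
{
    var listener = new TcpListener(IPAddress.Any, 1626);
    listener.Start();
    while (true)
    {
        TcpClient client = await listener.AcceptTcpClientAsync();
        _ = HandleClientAsync(client); // start the per-client read loop; deliberately not awaited here
    }
}

async Task HandleClientAsync(TcpClient client)
{
    using (client)
    {
        NetworkStream stream = client.GetStream();
        var buffer = new byte[4096];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // process buffer[0..read) here
        }
    }
}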
