Can someone tell me what the best practice/proper way of doing this is?
I'm also using WPF, not a console or ASP.NET.
I'm using a TcpListener to accept clients and spinning off a new "thread" for each client that handles all the I/O and exception catching for that client.
Method 1: Fire and forget, and just throw it into a variable to get rid of the warning.
public static async Task Start(CancellationToken token)
{
    m_server = TcpListener.Create(33777);
    m_server.Start();
    running = true;
    clientCount = 0;
    // TODO: Add try... catch
    while (!token.IsCancellationRequested)
    {
        var client = await m_server.AcceptTcpClientAsync().ConfigureAwait(false);
        Client c = new Client(client);
        var _ = HandleClientAsync(c);
    }
}
Here's the Client Handler code:
public static async Task HandleClientAsync(Client c)
{
    // TODO: add try...catch
    while (c.connected)
    {
        string data = await c.reader.ReadLineAsync();
        // Now we will parse the data and update variables accordingly
        // Just Regex and some parsing that updates variables
        ParseAndUpdate(data);
    }
}
Method 2: The same thing... but with Task.Run()
var _ = Task.Run(() => HandleClientAsync(c));
Method 3: An intermediate non-async function (I doubt this is good; it should be async all the way down).
But this at least gets rid of the squiggly line without using the discard-variable trick, which kinda feels dirty.
while (!token.IsCancellationRequested)
{
    var client = await m_server.AcceptTcpClientAsync().ConfigureAwait(false);
    Client c = new Client(client);
    NonAsync(c);
}

public static void NonAsync(Client c)
{
    Task.Run(() => HandleClientAsync(c));
}
Method 4: Make HandleClientAsync async void instead of async Task (really bad)
public static async Task HandleClientAsync(Client c)
// Would change to
public static async void HandleClientAsync(Client c)
Questions:
Is it any better to use Task.Run() when doing a fire-and-forget task?
Is it just accepted that you need to use the var _ = FireAndForget() trick to do fire and forget? I could just ignore the warning, but something feels wrong about it.
If I wanted to update my UI from a Client, how would I do that? Would I just use a dispatcher?
Thanks guys
I've never been a fan of long-running background workers being run in a task. Tasks get scheduled to run on threads drawn from a pool; as you schedule these long-running tasks, the available pool gets smaller and smaller. Eventually all of the threads in the pool are busy running your tasks, and things get really slow and unmanageable.
My recommendation here? Use the Thread class and manage the threads yourself. That way you keep the thread pool and the overhead of tasks out of the picture.
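As a rough sketch of that approach (assuming a synchronous HandleClient counterpart to the question's HandleClientAsync):

while (!token.IsCancellationRequested)
{
    TcpClient tcpClient = m_server.AcceptTcpClient(); // blocking accept
    var c = new Client(tcpClient);
    var thread = new Thread(() => HandleClient(c)) // HandleClient: a synchronous handler
    {
        IsBackground = true // don't keep the process alive on shutdown
    };
    thread.Start();
}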
Addendum - Producer Consumer Model
Another interesting question to consider: Do you really need a thread for every client? Threads are reasonably costly to create and maintain in terms of memory overhead, and if your client interaction is such that the client threads spend the vast majority of their time waiting around on something to do, then perhaps a producer consumer model is more suited to your use case.
Example:
Client connects on listening thread, gets put in a client queue
Worker thread responsible for checking to see if the clients need anything comes along through that queue every so often and checks - does the client have a new message to service? If so, it services all messages the client has, then moves on
In this way, you limit the number of threads to just the number needed to manage the message queue. You can even get fancy and add worker threads dynamically based on how long it's been since all the clients were last serviced.
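A rough sketch of that shape, assuming the question's Client type and a hypothetical TryServiceMessages helper that drains a client's pending messages and reports whether it is still connected:

private static readonly ConcurrentQueue<Client> clientQueue = new ConcurrentQueue<Client>();

// Listening thread: accept clients and enqueue them (the producer).
static void AcceptLoop(TcpListener listener)
{
    while (true)
        clientQueue.Enqueue(new Client(listener.AcceptTcpClient()));
}

// Worker thread: sweep the queue, servicing any client that has work (the consumer).
static void WorkerLoop()
{
    while (true)
    {
        if (clientQueue.TryDequeue(out Client client))
        {
            bool stillConnected = TryServiceMessages(client); // hypothetical helper
            if (stillConnected)
                clientQueue.Enqueue(client); // back into the rotation
        }
        else
        {
            Thread.Sleep(50); // nothing queued; don't spin
        }
    }
}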
If you insist
If you really like what you have going, I suggest refactoring what you're doing a bit so that rather than HandleClientAsync you do something more akin to CreateServiceForClient(c);
This could be a synchronous method that returns something like a ClientService. ClientService could then create the task that does what your HandleClientAsync does now, and store that task as a member. It could also provide methods like
ClientService.WaitUntilEnd()
and
ClientService.Disconnect() (which could set a cancellation token, also stored as a member variable)
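A hedged sketch of that refactoring, reusing the question's Client, connected, reader, and ParseAndUpdate members:

class ClientService
{
    private readonly Task serviceTask; // the stored handler task
    private readonly CancellationTokenSource cts = new CancellationTokenSource();

    public ClientService(Client c)
    {
        // Does what HandleClientAsync does now, kept as a member.
        serviceTask = RunAsync(c, cts.Token);
    }

    public Task WaitUntilEnd() => serviceTask;

    public void Disconnect() => cts.Cancel();

    private static async Task RunAsync(Client c, CancellationToken token)
    {
        while (c.connected && !token.IsCancellationRequested)
        {
            string data = await c.reader.ReadLineAsync();
            if (data == null) break; // remote end closed the connection
            ParseAndUpdate(data);
        }
    }
}

// The synchronous factory suggested above:
public static ClientService CreateServiceForClient(Client c) => new ClientService(c);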
Related
I have an ASP.NET 5 Web API application which contains a method that takes objects from a List<T> and makes HTTP requests to a server, 5 at a time, until all requests have completed. This is accomplished using a SemaphoreSlim, a List<Task<T>>, and awaiting on Task.WhenAll(), similar to the example snippet below:
public async Task<ResponseObj[]> DoStuff(List<Input> inputData)
{
    const int maxDegreeOfParallelism = 5;
    var tasks = new List<Task<ResponseObj>>();
    using var throttler = new SemaphoreSlim(maxDegreeOfParallelism);
    foreach (var input in inputData)
    {
        tasks.Add(ExecHttpRequestAsync(input, throttler));
    }
    ResponseObj[] responses = await Task.WhenAll(tasks).ConfigureAwait(false);
    return responses;
}

private async Task<ResponseObj> ExecHttpRequestAsync(Input input, SemaphoreSlim throttler)
{
    await throttler.WaitAsync().ConfigureAwait(false);
    try
    {
        using var request = new HttpRequestMessage(HttpMethod.Post, "https://foo.bar/api");
        request.Content = new StringContent(JsonConvert.SerializeObject(input),
            Encoding.UTF8, "application/json");
        var response = await HttpClientWrapper.SendAsync(request).ConfigureAwait(false);
        var responseBody = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
        var responseObject = JsonConvert.DeserializeObject<ResponseObj>(responseBody);
        return responseObject;
    }
    finally
    {
        throttler.Release();
    }
}
This works well; however, I am looking to limit the total number of Tasks that are being executed in parallel globally throughout the application, so as to allow scaling up of this application. For example, if 50 requests to my API came in at the same time, this would start at most 250 tasks running in parallel. If I wanted to limit the total number of Tasks that are being executed at any given time to, say, 100, is it possible to accomplish this? Perhaps via a Queue<T>? Would the framework automatically prevent too many tasks from being executed? Or am I approaching this problem in the wrong way, and would I instead need to queue the incoming requests to my application?
I'm going to assume the code is fixed, i.e., Task.Run is removed and the WaitAsync / Release are adjusted to throttle the HTTP calls instead of List<T>.Add.
I am looking to limit the total number of Tasks that are being executed in parallel globally throughout the application, so as to allow scaling up of this application.
This does not make sense to me. Limiting your tasks limits your scaling up.
For example, if 50 requests to my API came in at the same time, this would start at most 250 tasks running parallel.
Concurrently, sure, but not in parallel. It's important to note that these aren't 250 threads, and that they're not 250 CPU-bound operations waiting for free thread pool threads to run on, either. These are Promise Tasks, not Delegate Tasks, so they don't "run" on a thread at all. It's just 250 objects in memory.
If I wanted to limit the total number of Tasks that are being executed at any given time to say 100, is it possible to accomplish this?
Since (these kinds of) tasks are just in-memory objects, there should be no need to limit them, any more than you would need to limit the number of strings or List<T>s. Apply throttling where you do need it; e.g., number of HTTP calls done simultaneously per request. Or per host.
Would the framework automatically prevent too many tasks from being executed?
The framework has nothing like this built-in.
Perhaps via a Queue? Or am I approaching this problem in the wrong way, and would I instead need to Queue the incoming requests to my application?
There's already a queue of requests. It's handled by IIS (or whatever your host is). If your server gets too busy (or gets busy very suddenly), the requests will queue up without you having to do anything.
If I wanted to limit the total number of Tasks that are being executed at any given time to say 100, is it possible to accomplish this?
What you are looking for is to limit the MaximumConcurrencyLevel of what's called the TaskScheduler. You can create your own task scheduler that regulates the MaximumConcurrencyLevel of the tasks it manages. I would recommend implementing a queue-like object that tracks incoming requests and currently working requests and waits for the current requests to finish before consuming more. The below information may still be relevant.
The task scheduler is in charge of how Tasks are prioritized, and in charge of tracking the tasks and ensuring that their work is completed, at least eventually.
The way it does this is actually very similar to what you mentioned, in general the way the Task Scheduler handles tasks is in a FIFO (First in first out) model very similar to how a ConcurrentQueue<T> works (at least starting in .NET 4).
Would the framework automatically prevent too many tasks from being executed?
By default the TaskScheduler that is created with most applications appears to default to a MaximumConcurrencyLevel of int.MaxValue. So theoretically yes.
The fact that there is practically no limit to the number of tasks (at least with the default TaskScheduler) might not be that big of a deal for your scenario.
Tasks are separated into two types, at least when it comes to how they are assigned to the available thread pool: local and global queues.
Without going too far into detail, the way it works is that if a task creates other tasks, those new tasks are part of the parent task's queue (a local queue). Tasks spawned by a parent task are limited to the parent's threads (unless the task scheduler takes it upon itself to move queues around).
If a task isn't created by another task, it's a top-level task and is placed into the global queue. These are normally assigned their own thread (if available); if one isn't available, they're treated in a FIFO model, as mentioned above, until their work can be completed.
This is important because, although you can limit the amount of concurrency that happens with the TaskScheduler, it may not be necessary: if, say, you have a top-level task that's marked as long-running and is in charge of processing your incoming requests, all the tasks spawned by that top-level task will be part of its local queue and therefore won't spam all the available threads in your thread pool.
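As a hedged sketch, one built-in way to get a bounded concurrency level without writing a TaskScheduler from scratch is ConcurrentExclusiveSchedulerPair (DoCpuBoundWork is a hypothetical placeholder):

// The pair's concurrent scheduler runs at most 100 queued work items at a time.
var pair = new ConcurrentExclusiveSchedulerPair(
    TaskScheduler.Default, maxConcurrencyLevel: 100);

// Queue delegate work to the throttled scheduler instead of the default one.
Task<ResponseObj> task = Task.Factory.StartNew(
    () => DoCpuBoundWork(input), // hypothetical CPU-bound delegate
    CancellationToken.None,
    TaskCreationOptions.DenyChildAttach,
    pair.ConcurrentScheduler);

Note this bounds queued delegate work only; promise-style (I/O) tasks never occupy a thread, so a scheduler cannot throttle them.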
When you have a bunch of items and you want to process them asynchronously and with limited concurrency, the SemaphoreSlim is a great tool for this job. There are two ways that it can be used. One way is to create all the tasks immediately and have each task acquire the semaphore before doing its main work, and the other is to throttle the creation of the tasks while the source is enumerated. The first technique is eager, and so it consumes more RAM, but it's more maintainable because it is easier to understand and implement. The second technique is lazy, and it's more efficient if you have millions of items to process.
The technique that you have used in your sample code is the second (lazy) one.
Here is an example of using two SemaphoreSlims in order to impose two maximum concurrency policies, one per request and one globally. First the eager approach:
private const int maxConcurrencyGlobal = 100;
private static SemaphoreSlim globalThrottler
    = new SemaphoreSlim(maxConcurrencyGlobal, maxConcurrencyGlobal);

public async Task<ResponseObj[]> DoStuffAsync(IEnumerable<Input> inputData)
{
    const int maxConcurrencyPerRequest = 5;
    var perRequestThrottler
        = new SemaphoreSlim(maxConcurrencyPerRequest, maxConcurrencyPerRequest);
    Task<ResponseObj>[] tasks = inputData.Select(async input =>
    {
        await perRequestThrottler.WaitAsync();
        try
        {
            await globalThrottler.WaitAsync();
            try
            {
                return await ExecHttpRequestAsync(input);
            }
            finally { globalThrottler.Release(); }
        }
        finally { perRequestThrottler.Release(); }
    }).ToArray();
    return await Task.WhenAll(tasks);
}
The Select LINQ operator provides an easy and intuitive way to project items to tasks.
And here is the lazy approach for doing exactly the same thing:
private const int maxConcurrencyGlobal = 100;
private static SemaphoreSlim globalThrottler
    = new SemaphoreSlim(maxConcurrencyGlobal, maxConcurrencyGlobal);

public async Task<ResponseObj[]> DoStuffAsync(IEnumerable<Input> inputData)
{
    const int maxConcurrencyPerRequest = 5;
    var perRequestThrottler
        = new SemaphoreSlim(maxConcurrencyPerRequest, maxConcurrencyPerRequest);
    var tasks = new List<Task<ResponseObj>>();
    foreach (var input in inputData)
    {
        await perRequestThrottler.WaitAsync();
        await globalThrottler.WaitAsync();
        Task<ResponseObj> task = Run(async () =>
        {
            try
            {
                return await ExecHttpRequestAsync(input);
            }
            finally
            {
                try { globalThrottler.Release(); }
                finally { perRequestThrottler.Release(); }
            }
        });
        tasks.Add(task);
    }
    return await Task.WhenAll(tasks);

    static async Task<T> Run<T>(Func<Task<T>> action) => await action();
}
This implementation assumes that the await globalThrottler.WaitAsync() will never throw, which is a given according to the documentation. This will no longer be the case if you decide later to add support for cancellation, and you pass a CancellationToken to the method. In that case you would need one more try/finally wrapper around the task-creation logic. The first (eager) approach could be enhanced with cancellation support without such considerations. Its existing try/finally infrastructure is already sufficient.
It is also important that the internal helper Run method is implemented with async/await. Eliding the async/await would be an easy mistake to make, because in that case any exception thrown synchronously by the ExecHttpRequestAsync method would be rethrown immediately, and it would not be encapsulated in a Task<ResponseObj>. Then the task returned by the DoStuffAsync method would fail without releasing the acquired semaphores, and also without awaiting the completion of the already started operations. That's another argument for preferring the eager approach. The lazy approach has too many gotchas to watch for.
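To make the gotcha concrete, compare the two versions of the helper (a sketch):

// Safe: if action() throws synchronously, the exception is captured in the
// returned task rather than thrown at the call site.
static async Task<T> Run<T>(Func<Task<T>> action) => await action();

// Risky (elided): a synchronous throw from action() propagates immediately,
// so no task is created and the caller's cleanup logic is bypassed.
static Task<T> ElidedRun<T>(Func<Task<T>> action) => action();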
I have a bunch of remoting calls that are all synchronous (3rd party library). Most of them take a lot of time, so I'm not able to use them more often than about 5 to 10 times per second. This is too slow because I need to call them at least 3000 times every couple of minutes, and many more if the service was stopped for some time. There is virtually no CPU work on the client. It gets the data, checks some simple conditions and makes another call that it has to wait for.
What would be the best way to make them async (call them in an async fashion; I guess I need some async wrapper) so that I can make more requests at the same time? Currently it's limited by the number of threads (which is four).
I was thinking about calling them with Task.Run, but every article I read says it's for CPU-bound work and that it uses thread-pool threads. If I get it correctly, with this approach I won't be able to break the thread limit, will I? So which approach would actually fit best here?
What about Task.FromResult? Can I await such methods asynchronously in a greater number than there are threads?
public async Task<Data> GetDataTakingLotsOfTime(object id)
{
    var data = remoting.GetData(id);
    return await Task.FromResult(data);
}
I was thinking about calling them with Task.Run but every article I read says it's for CPU bound work and that it uses thread-pool threads.
Yes, but when you're stuck with a sync API then Task.Run() might be your lesser evil, especially on a Client.
Your current version of GetDataTakingLotsOfTime() is not really async. The FromResult() merely helps to suppress the warning about that.
What about Task.FromResult? Can I await such methods asynchronously in a greater number than there are threads?
Not clear where your "number of threads" idea comes from, but yes, starting a Task method and awaiting it later essentially runs it on the ThreadPool. But Task.Run is clearer in that respect.
Note that that does not depend on the async modifier of the method - async is an implementation detail, the caller only cares that it returns a Task.
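A minimal sketch of that wrapper, reusing the remoting object from the question:

// An honest Task.Run wrapper: the synchronous call runs on a thread-pool thread.
public Task<Data> GetDataAsync(object id) => Task.Run(() => remoting.GetData(id));

// Callers can then fan out the calls, e.g.:
// Data[] results = await Task.WhenAll(ids.Select(GetDataAsync));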
Currently It's limited by the number of threads (which is four).
This needs some explaining. I don't get it.
You are executing a remote call, and your thread needs to wait idly for the result of the remote call. During this wait your thread could do useful things, like executing other remote calls.
Times when your thread is idly waiting for other processes to finish, like writing to a disk, querying a database or fetching information from the internet are typically situations where you'll see an async function next to a non-async function: Write and WriteAsync, Send and SendAsync.
If at the deepest level of your synchronous call you have access to an async version of the call, then your life would be easy. Alas it seems that you don't have such an async version.
Your proposed solution using Task.Run has the disadvantage of the overhead in starting a new thread (or running one from the thread pool).
You could lower this overhead by creating a workshop object. In the workshop, a dedicated thread (a worker), or several dedicated threads, wait at one input point for an order to do something. The thread performs the task and posts the result at the output point.
Users of the workshop have one access point (a front office?) where they post the request to do something and await the result.
For this I used System.Threading.Tasks.Dataflow.BufferBlock. Install the NuGet package TPL Dataflow.
You can dedicate your workshop to accept only work to GetDataTakingLotsOfTime; I made my workshop generic: I accept every job that implements interface IWork:
interface IWork
{
    void DoWork();
}
The WorkShop has two BufferBlocks: one for incoming work requests and one for finished work. The workshop has a thread (or several threads) that waits at the input BufferBlock until a job arrives, does the work, and when finished posts the job to the output BufferBlock.
class WorkShop
{
    public WorkShop()
    {
        this.workRequests = new BufferBlock<IWork>();
        this.finishedWork = new BufferBlock<IWork>();
        this.frontOffice = new FrontOffice(this.workRequests, this.finishedWork);
    }

    private readonly BufferBlock<IWork> workRequests;
    private readonly BufferBlock<IWork> finishedWork;
    private readonly FrontOffice frontOffice;

    public FrontOffice FrontOffice { get { return this.frontOffice; } }

    public async Task StartWorkingAsync(CancellationToken token)
    {
        while (await this.workRequests.OutputAvailableAsync(token))
        {
            // some work request at the input buffer
            IWork requestedWork = await this.workRequests.ReceiveAsync(token);
            requestedWork.DoWork();
            this.finishedWork.Post(requestedWork);
        }
        // if here: no work expected anymore:
        this.finishedWork.Complete();
    }

    // function to close the WorkShop
    public async Task CloseShopAsync()
    {
        // signal that no more work is to be expected:
        this.workRequests.Complete();
        // await until the worker has finished his last job for the day:
        await this.finishedWork.Completion;
    }
}
TODO: proper reaction on CancellationToken.CancellationRequested
TODO: proper reaction on exceptions thrown by work
TODO: decide whether to use several threads doing the work
FrontOffice has one async function that accepts work, sends the work to the workRequests buffer, and awaits the finished work:
public async Task<IWork> OrderWorkAsync(IWork work, CancellationToken token)
{
    await this.workRequests.SendAsync(work, token);
    IWork finishedWork = await this.finishedWork.ReceiveAsync(token);
    return finishedWork;
}
So your process created a WorkShop object and starts one or more threads that will StartWorking.
Whenever any thread (inclusive your main thread) needs some work to be performed in async-await fashion:
Create an object that holds the input parameters and the DoWork function
Ask the WorkShop for the FrontOffice
await OrderWorkAsync
class InformationGetter : IWork
{
    public int Id { get; set; }                   // the input Id
    public Data FetchedData { get; private set; } // the result from remoting.GetData(Id)

    public void DoWork()
    {
        this.FetchedData = remoting.GetData(this.Id);
    }
}
Finally, the async version of your remote call:
async Task<Data> RemoteGetDataAsync(int id)
{
    // create the job to get the information:
    InformationGetter infoGetter = new InformationGetter() { Id = id };
    // go to the front office of the workshop and order the job to be done
    await this.MyWorkShop.FrontOffice.OrderWorkAsync(infoGetter, CancellationToken.None);
    return infoGetter.FetchedData;
}
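A hedged usage sketch, assuming MyWorkShop is a field holding the shop:

var cts = new CancellationTokenSource();
this.MyWorkShop = new WorkShop();
// Start the dedicated worker loop (one worker in this sketch).
Task worker = this.MyWorkShop.StartWorkingAsync(cts.Token);

Data data = await RemoteGetDataAsync(42); // awaits without blocking the caller

await this.MyWorkShop.CloseShopAsync(); // no more orders; let the worker finish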
Background
We have a service operation that can receive concurrent asynchronous requests and must process those requests one at a time.
In the following example, the UploadAndImport(...) method receives concurrent requests on multiple threads, but its calls to the ImportFile(...) method must happen one at a time.
Layperson Description
Imagine a warehouse with many workers (multiple threads). People (clients) can send the warehouse many packages (requests) at the same time (concurrently). When a package comes in a worker takes responsibility for it from start to finish, and the person who dropped off the package can leave (fire-and-forget). The workers' job is to put each package down a small chute, and only one worker can put a package down a chute at a time, otherwise chaos ensues. If the person who dropped off the package checks in later (polling endpoint), the warehouse should be able to report on whether the package went down the chute or not.
Question
The question then is how to write a service operation that...
can receive concurrent client requests,
receives and processes those requests on multiple threads,
processes requests on the same thread that received the request,
processes requests one at a time,
is a one way fire-and-forget operation, and
has a separate polling endpoint that reports on request completion.
We've tried the following and are wondering two things:
Are there any race conditions that we have not considered?
Is there a more canonical way to code this scenario in C#.NET with a service oriented architecture (we happen to be using WCF)?
Example: What We Have Tried
This is the service code that we have tried. It works though it feels like somewhat of a hack or kludge.
static ImportFileInfo _inProgressRequest = null;

static readonly ConcurrentDictionary<Guid, ImportFileInfo> WaitingRequests =
    new ConcurrentDictionary<Guid, ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request)
{
    // Receive the incoming request
    WaitingRequests.TryAdd(request.OperationId, request);

    while (null != Interlocked.CompareExchange(ref _inProgressRequest, request, null))
    {
        // Wait for any previous processing to complete
        Thread.Sleep(500);
    }

    // Process the incoming request
    ImportFile(request);

    Interlocked.Exchange(ref _inProgressRequest, null);
    WaitingRequests.TryRemove(request.OperationId, out _);
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    !WaitingRequests.ContainsKey(operationId);
This is example client code.
private static async Task UploadFile(FileInfo fileInfo, ImportFileInfo importFileInfo)
{
    using (var proxy = new Proxy())
    using (var stream = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read))
    {
        importFileInfo.FileByteStream = stream;
        proxy.UploadAndImport(importFileInfo);
    }

    await Task.Run(() => Poller.Poll(timeoutSeconds: 90, intervalSeconds: 1, func: () =>
    {
        using (var proxy = new Proxy())
        {
            return proxy.UploadAndImportIsComplete(importFileInfo.OperationId);
        }
    }));
}
It's hard to write a minimum viable example of this in a Fiddle, but here is a start that gives a sense and that compiles.
As before, the above seems like a hack/kludge, and we are asking both about potential pitfalls in its approach and for alternative patterns that are more appropriate/canonical.
Simple solution using Producer-Consumer pattern to pipe requests in case of thread count restrictions.
You still have to implement a simple progress reporter or event. I suggest replacing the expensive polling approach with asynchronous communication, which is offered by Microsoft's SignalR library. It uses WebSockets to enable async behavior. The client and server can register their callbacks on a hub. Using RPC, the client can now invoke server-side methods and vice versa. You would post progress to the client by using the hub (client side). In my experience SignalR is very simple to use and very well documented. It has libraries for all the popular server-side languages (e.g. Java).
Polling, as I understand it, is the total opposite of fire-and-forget. You can't forget, because you have to check something based on an interval. Event-based communication, like SignalR, is fire-and-forget, since you fire and will get a reminder (because you forgot). The "event side" will invoke your callback instead of you waiting to do it yourself!
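As a hedged illustration, a minimal hub could look like this (assuming ASP.NET Core SignalR; the hub and method names are hypothetical):

using Microsoft.AspNetCore.SignalR;

public class ProgressHub : Hub
{
    // Broadcasts progress to every connected client; the server-side import
    // pipeline can do the same through an injected IHubContext<ProgressHub>.
    public Task ReportProgress(Guid operationId, int percent) =>
        Clients.All.SendAsync("progressChanged", operationId, percent);
}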
Requirement 5 is ignored since I didn't see the reason for it. Waiting for a thread to complete would eliminate the fire-and-forget character.
private BlockingCollection<ImportFileInfo> requestQueue = new BlockingCollection<ImportFileInfo>();
private bool isServiceEnabled;
private const int maxNumberOfThreads = 8;
private Semaphore semaphore = new Semaphore(maxNumberOfThreads, maxNumberOfThreads);
private readonly object syncLock = new object();

public void UploadAndImport(ImportFileInfo request)
{
    // Start the request handler background loop
    if (!this.isServiceEnabled)
    {
        this.requestQueue?.Dispose();
        this.requestQueue = new BlockingCollection<ImportFileInfo>();

        // Fire and forget (requirement 4)
        Task.Run(() => HandleRequests());
        this.isServiceEnabled = true;
    }

    // Cache multiple incoming client requests (requirement 1) (and enable throttling)
    this.requestQueue.Add(request);
}

private void HandleRequests()
{
    while (!this.requestQueue.IsCompleted)
    {
        // Wait while the thread limit is exceeded (some throttling)
        this.semaphore.WaitOne();

        // Process the incoming requests in a dedicated thread (requirement 2)
        // until the BlockingCollection is marked completed.
        Task.Run(() => ProcessRequest());
    }

    // Reset the request handler after the BlockingCollection was marked completed
    this.isServiceEnabled = false;
    this.requestQueue.Dispose();
}

private void ProcessRequest()
{
    ImportFileInfo request = this.requestQueue.Take();
    UploadFile(request);

    // You updated your question saying the method "ImportFile()" requires synchronization.
    // This is a bottleneck and will significantly drop performance when this method is long running.
    lock (this.syncLock)
    {
        ImportFile(request);
    }

    this.semaphore.Release();
}
Remarks:
BlockingCollection is IDisposable
TODO: You have to "close" the BlockingCollection by marking it completed:
"BlockingCollection.CompleteAdding()" or it will loop indeterminately waiting for further requests. Maybe you introduce a additional request methods for the client to cancel and/ or to update the process and to mark adding to the BlockingCollection as completed. Or a timer that waits an idle time before marking it as completed. Or make your request handler thread block or spin.
Replace Take() and Add(...) with TryTake(...) and TryAdd(...) if you want cancellation support
Code is not tested
Your "ImportFile()" method is a bottleneck in your multi threading environment. I suggest to make it thread safe. In case of I/O that requires synchronization, I would cache the data in a BlockingCollection and then write them to I/O one by one.
The problem is that your total bandwidth is very small-- only one job can run at a time-- and you want to handle parallel requests. That means that queue time could vary wildly. It may not be the best choice to implement your job queue in-memory, as it would make your system much more brittle, and more difficult to scale out when your business grows.
A traditional, scalable way to architect this would be:
An HTTP service to accept requests, load balanced/redundant, with no session state.
A SQL Server database to persist the requests in a queue, returning a persistent unique job ID.
A Windows service to process the queue, one job at a time, and mark jobs as complete. The worker process for the service would probably be single-threaded.
This solution requires you to choose a web server. A common choice is IIS running ASP.NET. On that platform, each request is guaranteed to be handled in a single-threaded manner (i.e. you don't need to worry about race conditions too much), but due to a feature called thread agility the request might end on a different thread, though in the original synchronization context, which means you will probably never notice unless you are debugging and inspecting thread IDs.
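A hedged sketch of the queue worker inside the Windows service; the ImportJobs table, its columns, and ProcessJob are hypothetical:

// Single-threaded worker: claim one pending job at a time from the SQL queue.
while (true)
{
    Guid jobId = Guid.Empty;
    string payload = null;

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"UPDATE TOP (1) ImportJobs SET Status = 'InProgress'
          OUTPUT inserted.JobId, inserted.Payload
          WHERE Status = 'Pending'", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                jobId = reader.GetGuid(0);
                payload = reader.GetString(1);
            }
        }
    }

    if (payload == null) { Thread.Sleep(1000); continue; } // queue is empty

    ProcessJob(jobId, payload); // processes the job and marks the row complete
}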
Given the constraints context of our system, this is the implementation we ended up using:
static ImportFileInfo _importInProgressItem = null;

static readonly ConcurrentQueue<ImportFileInfo> ImportQueue =
    new ConcurrentQueue<ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request)
{
    UploadFile(request);
    ImportFileSynchronized(request);
}

// Synchronize the file import,
// because the database allows a user to perform only one write at a time.
private void ImportFileSynchronized(ImportFileInfo request)
{
    ImportQueue.Enqueue(request);
    do
    {
        ImportQueue.TryPeek(out var next);
        if (null != Interlocked.CompareExchange(ref _importInProgressItem, next, null))
        {
            // Queue processing is already under way in another thread.
            return;
        }
        ImportFile(next);
        ImportQueue.TryDequeue(out _);
        Interlocked.Exchange(ref _importInProgressItem, null);
    }
    while (ImportQueue.Any());
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    ImportQueue.All(waiting => waiting.OperationId != operationId);
This solution works well for the loads we are expecting. That load involves a maximum of about 15-20 concurrent PDF file uploads. The batch of up to 15-20 files tends to arrive all at once and then to go quiet for several hours until the next batch arrives.
Criticism and feedback is most welcome.
I'm getting confused by await/async as I may still not get the point of its correct usage.
I have a simple WPF-UI and a ViewModel-method to start listening for clients which want to connect.
The following method is executed when the user clicks the button to start listening:
public void StartListening()
{
    _tcpListener.Start(); // TcpListener
    IsListening = true;   // bool
    Task.Factory.StartNew(DoStartListeningAsync, TaskCreationOptions.LongRunning);
}
The method DoStartListeningAsync which is called is defined like
private async Task DoStartListeningAsync()
{
    while (IsListening)
    {
        using (var newClient = await _tcpListener.AcceptTcpClientAsync() /*.WithWaitCancellation(_cts.Token)*/)
        {
            apiClient = new ApiClient();
            if (await apiClient.InitClientAsync()) // <-- here is the problem
            {
                // ... apiClient is now initialized
            }
            // ... do more and go back to await _tcpListener.AcceptTcpClientAsync()
        }
    }
}
The ApiClient class' InitClientAsync method is defined like:
public async Task<bool> InitClientAsync()
{
    using (var requestStream = await _apiWebRequest.GetRequestStreamAsync())
    {
        _apiStreamWriter = new StreamWriter(requestStream);
    }
    // ... do something with the _apiStreamWriter
    return true;
}
However, sometimes the InitClientAsync-call will get stuck at await _apiWebRequest.GetRequestStreamAsync() which then will freeze the execution of the DoStartListeningAsync-method at // <-- here is the problem.
In case the DoStartListeningAsync is stuck, no new connections will be handled which destroys my whole concept of handling multiple clients asynchronously.
Since you are using the await keyword along the code path, you won't actually serve multiple clients asynchronously.
The thing is, your code in the background thread serves clients one by one. Take a deeper look: in the while loop you get the request stream, wait till it is loaded, serve it, and then wait for the next request stream.
The async/await principle doesn't itself provide the ability to serve multiple actions at a time. The only thing it does is prevent the current thread from being blocked so it can be reused by other code. So when going with async/await, you allow the system to use your current task's thread while it is waiting for another async action to complete (like _apiWebRequest.GetRequestStreamAsync()).
But since you have one task, and you wait on every iteration of the while loop, your code works the same way as if you had written it completely synchronously. The only profit is that you are using a Task, so .NET can reuse its thread from the thread pool while you are waiting for async actions to complete.
If you want to serve multiple clients asynchronously, you should either start multiple tasks, or not wait until a request is completely served - so actually remove some awaits from your code.
You could move towards a design where you have one listening task/thread that does nothing except read requests and put them on some queue, and other tasks that serve requests, reading them from the queue.
If I understood you correctly, you are using TcpListener under the hood. So what you need is a loop where you accept new clients and start serving them in a different thread/task without any waiting, going directly back to accepting other clients. But you can and should use async/await inside those handlers that serve clients.
Take a look at this answer - not completely your case (since I don't know all the details of your implementation), but just to get the idea.
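A hedged sketch of that shape, reusing the members from the question (ServeClientAsync is a hypothetical handler name):

private async Task DoStartListeningAsync()
{
    while (IsListening)
    {
        TcpClient newClient = await _tcpListener.AcceptTcpClientAsync();
        _ = Task.Run(() => ServeClientAsync(newClient)); // hand off; don't await
    }
}

private async Task ServeClientAsync(TcpClient client)
{
    using (client)
    {
        var apiClient = new ApiClient();
        if (await apiClient.InitClientAsync())
        {
            // ... serve this client; the accept loop keeps running meanwhile
        }
    }
}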
I have a method in an ASP.NET application that takes quite a lot of time to complete. A call to this method might occur up to 3 times during one user request, depending on the cache state and the parameters that the user provides. Each call takes about 1-2 seconds to complete. The method itself is a synchronous call to the service, and there is no possibility to override the implementation.
So the synchronous call to the service looks something like the following:
public OutputModel Calculate(InputModel input)
{
    // do some stuff
    return Service.LongRunningCall(input);
}
And the usage of the method is (note that the call may happen more than once):
private void MakeRequest()
{
    // a lot of other stuff: preparing requests, sending/processing other requests, etc.
    var myOutput = Calculate(myInput);
    // stuff again
}
I tried to change the implementation from my side to provide simultaneous work of this method, and here is what I came to so far.
public async Task<OutputModel> CalculateAsync(InputModel input)
{
    return await Task.Run(() =>
    {
        return Calculate(input);
    });
}
Usage (part of "do other stuff" code runs simultaneously with the call to service):
private async Task MakeRequest()
{
    // do some stuff
    var task = CalculateAsync(myInput);
    // do other stuff
    var myOutput = await task;
    // some more stuff
}
My question: Is this the right approach to speed up the execution in an ASP.NET application, or am I doing unnecessary work trying to run synchronous code asynchronously?
Can anyone explain why the second approach is not an option in ASP.NET (if it really is not)?
Also, if such an approach is applicable, do I need to call such a method asynchronously if it is the only call we might perform at the moment (I have a case where there is nothing else to do while waiting for completion)?
Most of the articles on the net about this topic cover using the async-await approach with code that already provides awaitable methods, but that's not my case. Here is a nice article describing my case; it doesn't cover parallel calls and declines the option of wrapping a sync call, but in my opinion my situation is exactly the occasion to do it.
It's important to make a distinction between two different types of concurrency. Asynchronous concurrency is when you have multiple asynchronous operations in flight (and since each operation is asynchronous, none of them are actually using a thread). Parallel concurrency is when you have multiple threads each doing a separate operation.
The first thing to do is re-evaluate this assumption:
The method itself is synchronous call to the service and there is no possibility to override the implementation.
If your "service" is a web service or anything else that is I/O-bound, then the best solution is to write an asynchronous API for it.
I'll proceed with the assumption that your "service" is a CPU-bound operation that must execute on the same machine as the web server.
If that's the case, then the next thing to evaluate is another assumption:
I need the request to execute faster.
Are you absolutely sure that's what you need to do? Are there any front-end changes you can make instead - e.g., start the request and allow the user to do other work while it's processing?
I'll proceed with the assumption that yes, you really do need to make the individual request execute faster.
In this case, you'll need to execute parallel code on your web server. This is most definitely not recommended in general because the parallel code will be using threads that ASP.NET may need to handle other requests, and by removing/adding threads it will throw the ASP.NET threadpool heuristics off. So, this decision does have an impact on your entire server.
When you use parallel code on ASP.NET, you are making the decision to really limit the scalability of your web app. You also may see a fair amount of thread churn, especially if your requests are bursty at all. I recommend only using parallel code on ASP.NET if you know that the number of simultaneous users will be quite low (i.e., not a public server).
So, if you get this far, and you're sure you want to do parallel processing on ASP.NET, then you have a couple of options.
One of the easier methods is to use Task.Run, very similar to your existing code. However, I do not recommend implementing a CalculateAsync method since that implies the processing is asynchronous (which it is not). Instead, use Task.Run at the point of the call:
private async Task MakeRequest()
{
    // do some stuff
    var task = Task.Run(() => Calculate(myInput));
    // do other stuff
    var myOutput = await task;
    // some more stuff
}
Alternatively, if it works well with your code, you can use the Parallel type, i.e., Parallel.For, Parallel.ForEach, or Parallel.Invoke. The advantage to the Parallel code is that the request thread is used as one of the parallel threads, and then resumes executing in the thread context (there's less context switching than the async example):
private void MakeRequest()
{
    Parallel.Invoke(() => Calculate(myInput1),
                    () => Calculate(myInput2),
                    () => Calculate(myInput3));
}
I do not recommend using Parallel LINQ (PLINQ) on ASP.NET at all.
I found that the following code can force a Task to always run asynchronously:
private static async Task<T> ForceAsync<T>(Func<Task<T>> func)
{
    await Task.Yield();
    return await func();
}
and I have used it in the following manner
await ForceAsync(() => AsyncTaskWithNoAwaits())
This will execute any Task asynchronously so you can combine them in WhenAll, WhenAny scenarios and other uses.
You could also simply add the Task.Yield() as the first line of your called code.
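For example (a sketch; HeavySynchronousWork is a hypothetical stand-in for the method body):

public async Task<int> AsyncTaskWithNoAwaits()
{
    await Task.Yield(); // hop off the caller's call stack immediately
    return HeavySynchronousWork(); // the rest runs as an asynchronous continuation
}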
This is probably the easiest generic way in your case (note the task must actually be started; awaiting a cold new Task(...) would hang forever, so use Task.Run):

return await Task.Run(() =>
{
    // put your synchronous code here
});