We're planning our system to have a set of publicly accessible services which call into a set of internal services, all implemented using ServiceStack.
My question is: what is the best method (in terms of performance, stability and code maintainability) for this cross-service communication?
E.g. should my public services call the internal services using a ServiceStack client, or use the Rabbit / Redis messaging system? And if the latter, can I call two or more internal services asynchronously and await the responses from both?
For one-way communications, Messaging offers a lot of benefits; if installing a Rabbit MQ broker is an option, Rabbit MQ provides the more industrial-strength option.
For request/reply services where requests are transient and both endpoints are required to be up, the typed C# Service Clients allow for more direct/debuggable point-to-point communications with fewer moving parts.
Using the clients' async APIs lets you easily make multiple calls in parallel, e.g.:
//fire off 2 async requests simultaneously...
var task1 = client.GetAsync(new Request1 { ... });
var task2 = client.GetAsync(new Request2 { ... });
//additional processing if any...
//Continue when first response is received
var response1 = await task1;
//Continue after 2nd response; if it arrived before task1's response, the await returns instantly
var response2 = await task2;
The above code continues after Request1 has completed; you can also use Task.WhenAny() to process whichever request completes first.
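For example, a minimal sketch of the Task.WhenAny() variant, assuming the same client and the same placeholder Request1/Request2 DTOs as above:
//fire off both requests as before
var task1 = client.GetAsync(new Request1 { /* ... */ });
var task2 = client.GetAsync(new Request2 { /* ... */ });

//completes as soon as either response arrives
var firstCompleted = await Task.WhenAny(task1, task2);

if (firstCompleted == task1)
{
    var response1 = await task1; //already completed, returns immediately
    //handle Request1's response first...
}
else
{
    var response2 = await task2;
    //handle Request2's response first...
}

//await the remaining task whenever its result is needed
await Task.WhenAll(task1, task2);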
We have a function app that builds a large JSON payload (+- 2000 lines) every day and posts it to the API to be mapped and saved into a database.
We are using CQRS with MediatR, and it seems the API side takes exceptionally long to create and save all the necessary information.
The problem we have is that the function's PostAsJsonAsync waits for the API response and times out after a few minutes.
Any idea how to run this as a background task, or just post and forget? Our API is only concerned that it received the data.
Function side:
using (var client = new HttpClient())
{
    client.Timeout = new TimeSpan(0, 10, 0);
    var response = await client.PostAsJsonAsync($"{endpoint}/api/v1.0/BatchImport/Import", json); // <-- Times out waiting for the API
    response.EnsureSuccessStatusCode();
}
API MediatR handler side:
public async Task<Unit> Handle(CreateBatchOrderCommand request, CancellationToken cancellationToken)
{
    foreach (var importOrder in request.Payload) // <-- Takes long to process all the data
    {
        await PopulateImportDataAsync(importOrder, cancellationToken);
        await CreateOrderAsync(importOrder, cancellationToken);
    }

    return Unit.Value;
}
Cheers
The problem we have is that the function's PostAsJsonAsync waits for the API response and times out after a few minutes.
The easiest solution is going to be just increasing that timeout. If you are talking about Azure Functions, I believe you can increase the timeout to 10 minutes.
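If it is an Azure Function, the timeout lives in host.json; a minimal sketch, assuming the Consumption plan (where 10 minutes is the maximum):
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}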
Any idea how to run this as a background task or just post and forget? Our API is only concerned that it received data.
Any fire-and-forget solution is not going to end well; you'll end up with lost data. I recommend that you not use fire-and-forget at all, and this advice goes double as soon as you're in the cloud.
Assuming increasing the timeout isn't sufficient, your solution is to use a basic distributed architecture, as described on my blog:
Have your API place the incoming request into a durable queue.
Have a separate backend (e.g., Azure (Durable) Function) process that request from the queue.
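A rough sketch of that shape, assuming an Azure Storage queue named "batch-import", the in-process Functions model, and illustrative fields (_connectionString, _mediator) wired up elsewhere:
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using MediatR;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;

// API side: enqueue the raw payload and acknowledge immediately with 202.
[HttpPost("api/v1.0/BatchImport/Import")]
public async Task<IActionResult> Import([FromBody] JsonElement payload)
{
    var queue = new QueueClient(_connectionString, "batch-import");
    await queue.SendMessageAsync(payload.GetRawText());
    return Accepted(); // "we received it"; the slow work happens elsewhere
}

// Backend side: a queue-triggered function does the slow mapping/saving.
[FunctionName("ProcessBatchImport")]
public async Task Run([QueueTrigger("batch-import")] string message)
{
    var command = JsonSerializer.Deserialize<CreateBatchOrderCommand>(message);
    await _mediator.Send(command); // the existing handler runs off the request path
}
If the daily payload gets close to the storage queue's message size limit, the usual variant is to upload the JSON to blob storage and enqueue only a reference to it.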
Assuming you’re on .NET Core, you could stick incoming requests into a queued background task:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-6.0&tabs=visual-studio#queued-background-tasks
Keep in mind this chews up resources from servicing other web requests, so it will not scale well to millions of requests. The same basic principle, a queued message plus offline processing, can also be distributed across multiple services to take some of the load off the web service.
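For completeness, a minimal sketch of that queued-background-task shape, using a Channel as the in-memory queue and a BackgroundService as the worker (ImportQueue/ImportWorker are illustrative names):
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Register both as singletons; the controller enqueues and returns 202 Accepted.
public class ImportQueue
{
    private readonly Channel<CreateBatchOrderCommand> _channel =
        Channel.CreateBounded<CreateBatchOrderCommand>(capacity: 100);

    public ValueTask EnqueueAsync(CreateBatchOrderCommand command, CancellationToken ct = default)
        => _channel.Writer.WriteAsync(command, ct);

    public IAsyncEnumerable<CreateBatchOrderCommand> DequeueAllAsync(CancellationToken ct)
        => _channel.Reader.ReadAllAsync(ct);
}

public class ImportWorker : BackgroundService
{
    private readonly ImportQueue _queue;
    private readonly IServiceScopeFactory _scopeFactory;

    public ImportWorker(ImportQueue queue, IServiceScopeFactory scopeFactory)
    {
        _queue = queue;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Drain the queue for the lifetime of the app, one import at a time,
        // creating a fresh DI scope per message so scoped handlers/DbContexts work.
        await foreach (var command in _queue.DequeueAllAsync(stoppingToken))
        {
            using var scope = _scopeFactory.CreateScope();
            var mediator = scope.ServiceProvider.GetRequiredService<IMediator>();
            await mediator.Send(command, stoppingToken);
        }
    }
}
Note that this queue lives in memory, so anything still queued is lost if the process recycles; that is the trade-off versus the durable-queue approach above.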
I am trying to call a long-running task from a Web API like this:
[HttpGet]
public async Task<IActionResult> Get()
{
    await Task.Run(() => _report.TestFunctionAsync());
    return Accepted();
}
And this is the task
public async Task TestFunctionAsync()
{
    ProcessStatus = 0;
    await Task.Delay(TimeSpan.FromSeconds(30));
    ProcessStatus = 1; // writing to DB
    await Task.Delay(TimeSpan.FromSeconds(10));
    ProcessStatus = 2; // fetching from Excel
    await Task.Delay(TimeSpan.FromSeconds(20));
    ProcessStatus = 3; // processing
    await Task.Delay(TimeSpan.FromSeconds(50));
    ProcessStatus = 9; // finished
}
When googling I found that, from a UI perspective, this is async and the UI will never be blocked, but it is not the correct way to do it.
So please suggest a better way of implementing this. Also, is there any way to track the status of the async task (using the ProcessStatus property)?
Since ASP.NET MVC does not itself provide the ability to process long-running tasks, you have to create an external solution.
Create a Durable Queue in which to place your requests for long-running operations, RabbitMQ for example. Alternatively, write your requests to a requests table in your data store/database.
Create a Backend Service to execute your long-running tasks. It should read the requests from your Durable Queue or database table, and act on them accordingly. This can be a Windows Service, Linux Daemon or AWS Lambda, etc.
Create a notification mechanism so that the UI can be notified when the task completes, such as a web socket connection or polling. Or, provide an endpoint on the ASP.NET Web API that allows your web page to retrieve task status.
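A small sketch of that last option, as controller actions on the ASP.NET Web API; IJobStore stands in for the requests table / durable queue described above, and all names are illustrative:
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[HttpPost("api/reports")]
public async Task<IActionResult> StartReport()
{
    var jobId = Guid.NewGuid();
    await _jobStore.EnqueueAsync(jobId); // write the request to the durable queue / requests table
    return Accepted(new { jobId });      // 202 plus a handle the client can poll with
}

[HttpGet("api/reports/{jobId}/status")]
public async Task<IActionResult> GetStatus(Guid jobId)
{
    // e.g. Queued / WritingToDb / FetchingFromExcel / Processing / Finished,
    // updated by the backend service as it works through the request
    var status = await _jobStore.GetStatusAsync(jobId);
    if (status is null) return NotFound();
    return Ok(status);
}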
Microsoft suggests the following options in ASP.NET Core Performance Best Practices docs:
Handle long-running requests with background services. In ASP.NET Core, background tasks can be implemented as hosted services: a hosted service is a class with background task logic that implements the IHostedService interface.
Also, you can use an Azure Function to complete work out-of-process. This scenario is especially beneficial for CPU-intensive tasks.
To notify your clients, use real-time communication options, such as SignalR, to communicate with them asynchronously.
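For the SignalR option, a compact sketch (ReportHub, ReportNotifier and the "ReportFinished" method name are illustrative):
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients connect to this hub (mapped with app.MapHub<ReportHub>("/hubs/reports") at startup).
public class ReportHub : Hub { }

// The background worker calls this when the long-running task completes,
// instead of the page having to poll a status endpoint.
public class ReportNotifier
{
    private readonly IHubContext<ReportHub> _hub;

    public ReportNotifier(IHubContext<ReportHub> hub) => _hub = hub;

    public Task NotifyFinishedAsync(Guid jobId) =>
        _hub.Clients.All.SendAsync("ReportFinished", jobId);
}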
I have a long-running process that wakes up, performs some task, and may or may not need to publish a message via MessageSender to the Azure service bus. If I can make the MessageSender a singleton, that slightly simplifies my code, so I'd like to if it is viable.
To clarify, I expect that this process may go for very long periods of time (hours at least, potentially days) without sending any messages to the service bus.
Super-simplified example:
public async Task WorkLoop(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        var result = DoWork();
        if (result.shouldPublish)
        {
            var message = buildMessage(result);
            await _messageSender.SendAsync(message);
        }

        await Task.Delay(sleepDuration, token);
    }
}
Are there any consequences to keeping a MessageSender instance alive for a long time (days at least, possibly weeks or months)?
According to Best Practices for performance improvements using Service Bus Messaging:
It is recommended that you do not close messaging factories or queue, topic, and subscription clients after you send a message, and then re-create them when you send the next message. [...] You can safely use these client objects for concurrent asynchronous operations and from multiple threads.
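In other words, creating the sender once and reusing it is the intended usage. A minimal sketch with DI, assuming the Microsoft.Azure.ServiceBus package from the question and illustrative connection/queue names:
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Extensions.DependencyInjection;

// inside ConfigureServices / the host builder:
services.AddSingleton(_ =>
    new MessageSender("<service-bus-connection-string>", "work-items"));

// the work loop then takes MessageSender as a constructor dependency and never
// disposes it between sends; close it once on shutdown, if at all
The newer Azure.Messaging.ServiceBus library follows the same guidance: its ServiceBusClient and ServiceBusSender are likewise meant to be created once and cached for the lifetime of the application.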
I have a client program which sends a request to a Web API and gets a websocket back.
Once hooked up, I then send a second request to the API to start a number of tests.
The Web API then hooks into the websocket as well, sending messages as it completes the tasks. I do this using System.Net.WebSockets.ClientWebSocket:
var cts = new CancellationTokenSource();
var socket = new ClientWebSocket();
string wsUri = "wss://localhost:44301/api/v1/devices/registerForSocket";
await socket.ConnectAsync(new Uri(wsUri), cts.Token);
RunTests(deviceid, socket, cts);
Now I need to add the ability for the client to also poll for a current progress update. Basically, this will be a bool which is set to false when a process is being run and then reverted back to true once it's finished.
To do this I will have to run a separate ClientWebSocket for receiving the "PROGRESS" messages.
However, how can I share the progress between the two tasks? Is this the best way to go about it?
The project I'm working on is a client-server application with all services written in WCF and the client in WPF. There are cases where the server needs to push information to the client. I initially thought about using WCF Duplex Services, but after doing some research online, I figured a lot of people are avoiding it for many reasons.
The next thing I thought about was having the client create a host connection, so that the server could use that to make a service call to the client. The problem, however, is that the application is deployed over the internet, so that approach requires configuring the firewall to allow incoming traffic, and since most of the users are regular users, it might also require configuring the router to allow port forwarding, which again is a hassle for the user.
My third option is that the client spawns a background thread which makes a call to the GetNotifications() method on the server. This method on the server side then blocks until an actual notification is created; the thread is then notified (using an AutoResetEvent object maybe?) and the information gets sent to the client. The idea is something like this:
Client
private void InitializeListener()
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            var notification = server.GetNotifications();
            // Display the notification.
        }
    }, CancellationToken.None, TaskCreationOptions.LongRunning, TaskScheduler.Default);
}
Server
public NotificationObject GetNotifications()
{
    while (true)
    {
        notificationEvent.WaitOne();
        return someNotificationObject;
    }
}

private void NotificationCreated()
{
    // Inform the client of this event.
    notificationEvent.Set();
}
In this case, NotificationCreated() is a callback method called when the server needs to send information to the client.
What do you think about this approach? Is this scalable at all?
For each client you are going to hold a thread on the server. If you only have a few hundred clients and the server wouldn't otherwise use the memory, that may be fine. If there can be more clients, or you do not wish to burn 1 MB of stack per client, you should make some changes:
Use an async WCF action method. They allow you to unblock the request thread while the method is waiting.
Change the event model to an async one. SemaphoreSlim has async support. You can also use TaskCompletionSource.
That way you can scale up to many connections.
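A rough sketch of the TaskCompletionSource shape on the server, kept to one pending caller per instance for brevity; the WCF contract attributes and per-session tracking are omitted, and the names are illustrative:
using System.Threading;
using System.Threading.Tasks;

public class NotificationService
{
    private TaskCompletionSource<NotificationObject> _pending =
        new TaskCompletionSource<NotificationObject>(TaskCreationOptions.RunContinuationsAsynchronously);

    // Task-based service operation: awaiting the TCS releases the request thread
    // instead of parking it on an AutoResetEvent.
    public Task<NotificationObject> GetNotificationsAsync() => _pending.Task;

    // Called when the server has something to push to the waiting client.
    public void NotificationCreated(NotificationObject notification)
    {
        var completed = Interlocked.Exchange(
            ref _pending,
            new TaskCompletionSource<NotificationObject>(TaskCreationOptions.RunContinuationsAsynchronously));
        completed.TrySetResult(notification);
    }
}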