How do I do async RPC calls with RabbitMQ - C#

I'm building a REST API (ASP.NET Core) that calls the backend (C#) through RabbitMQ. To handle many requests I will need to call the backend asynchronously.
The example code from RabbitMQ does not look thread-safe to me, because it dequeues messages until the one with the matching correlation id is returned; all other messages are discarded. (link: https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html )
while (true)
{
    var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
    if (ea.BasicProperties.CorrelationId == corrId)
    {
        return Encoding.UTF8.GetString(ea.Body);
    }
}
I'm thinking of the following possibilities:
Possibility 1:
I could use the SimpleRpcClient and create a separate instance for each request. This means a new reply queue is created for every request.
Possibility 2:
Create my own RPC client that declares one reply queue (probably one per request type) and matches each response to the right request by its correlation id.
What is the best practice for making multiple calls asynchronously? Are there existing implementations of the second possibility, or do I need to implement it myself?
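For illustration, a rough sketch of possibility 2 could look like the following: one shared reply queue, with a ConcurrentDictionary mapping each correlation id to a TaskCompletionSource so every caller gets exactly its own response. Class and member names are illustrative (this is not an existing library), and it assumes the older RabbitMQ.Client API where ea.Body is a byte[]:

using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class AsyncRpcClient : IDisposable
{
    private readonly IModel channel;
    private readonly string replyQueueName;
    private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> pending =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    public AsyncRpcClient(IConnection connection)
    {
        channel = connection.CreateModel();
        replyQueueName = channel.QueueDeclare().QueueName; // server-named, exclusive reply queue

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            // Route the reply to whichever caller registered this correlation id;
            // replies with unknown ids are dropped instead of blocking other callers.
            TaskCompletionSource<string> tcs;
            if (pending.TryRemove(ea.BasicProperties.CorrelationId, out tcs))
                tcs.TrySetResult(Encoding.UTF8.GetString(ea.Body));
        };
        channel.BasicConsume(queue: replyQueueName, autoAck: true, consumer: consumer);
    }

    public Task<string> CallAsync(string routingKey, string message)
    {
        var correlationId = Guid.NewGuid().ToString();
        var tcs = new TaskCompletionSource<string>();
        pending[correlationId] = tcs;

        var props = channel.CreateBasicProperties();
        props.CorrelationId = correlationId;
        props.ReplyTo = replyQueueName;
        channel.BasicPublish(exchange: "", routingKey: routingKey,
                             basicProperties: props,
                             body: Encoding.UTF8.GetBytes(message));

        return tcs.Task; // completes when the matching reply arrives
    }

    public void Dispose()
    {
        channel.Dispose();
    }
}

Each API request would then simply await a shared instance, e.g. var reply = await rpcClient.CallAsync("rpc_queue", payload);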

Design a job queue. Push jobs to the queue from the generator and forget them, so that the job generator remains responsive.
Have a number of workers equal to the number of available CPU threads (for optimal performance) to process jobs.
Each worker dequeues a job from the main queue and puts it, along with its result, into a new queue.
Keep features for:
Not processing jobs that are too old.
Terminating long-running jobs.
Picking high-priority jobs first.
If permitted, design remote job-runner nodes. A minimal sketch of this design is shown below.
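A minimal in-process sketch of this design, using BlockingCollection as the queues; the Job shape, the age/timeout thresholds, and the priority note are illustrative assumptions:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class Job
{
    public int Priority;                          // picking high-priority jobs first would
                                                  // require a priority-ordered collection
    public DateTime EnqueuedUtc;
    public Func<CancellationToken, string> Work;  // the work must observe the token
}

public class JobRunner
{
    private readonly BlockingCollection<Job> jobs = new BlockingCollection<Job>();
    private readonly BlockingCollection<Tuple<Job, string>> results =
        new BlockingCollection<Tuple<Job, string>>();

    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(5);      // skip stale jobs
    private static readonly TimeSpan MaxRunTime = TimeSpan.FromSeconds(30); // cap long runners

    // Fire-and-forget for the generator: Add returns immediately.
    public void Enqueue(Job job) { jobs.Add(job); }

    public void Start()
    {
        // One worker per available CPU thread.
        for (int i = 0; i < Environment.ProcessorCount; i++)
            Task.Factory.StartNew(Worker, TaskCreationOptions.LongRunning);
    }

    private void Worker()
    {
        foreach (var job in jobs.GetConsumingEnumerable())
        {
            if (DateTime.UtcNow - job.EnqueuedUtc > MaxAge)
                continue; // too old: drop without processing

            using (var cts = new CancellationTokenSource(MaxRunTime))
            {
                try
                {
                    var result = job.Work(cts.Token);
                    results.Add(Tuple.Create(job, result)); // job + result into the new queue
                }
                catch (OperationCanceledException)
                {
                    // long-running job was terminated after MaxRunTime
                }
            }
        }
    }
}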


How to know if there is a message ready to consume

I am new to Kafka and am looking for a way to know whether a message is ready for consumption before calling the consume method.
I am doing a POC on integrating C# with Kafka. I previously did this for RabbitMQ, which has a "MessageCount" method, but I cannot find an equivalent for Kafka.
A Kafka consumer actually runs an infinite loop in which it calls the poll() function to fetch any new records from a partition.
The configuration max.poll.interval.ms specifies the interval after which, if poll() has not been called, the consumer is considered dead and a rebalance is triggered.
So to answer your question: Kafka always calls the poll() function to check whether a message is available to be consumed. However, there are consumer configurations that let you wait for a minimum amount of data before consuming:
fetch.min.bytes: wait until x bytes of messages are available before consuming them
fetch.max.wait.ms: how long to wait for fetch.min.bytes to be gathered
In practice, if you can check whether messages exist, you are already holding a connection to Kafka, so you might as well just call consume (with a timeout) and get the same performance.
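For example, with the Confluent.Kafka client (topic name and settings are illustrative), calling Consume with a timeout returns null when nothing arrived, which is the closest equivalent to asking whether a message is ready:

using System;
using Confluent.Kafka;

class Program
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "poc-consumer",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("my-topic");
            while (true)
            {
                // Blocks for up to one second; returns null if no message arrived.
                var result = consumer.Consume(TimeSpan.FromSeconds(1));
                if (result == null)
                    continue; // nothing ready, poll again

                Console.WriteLine("Consumed: " + result.Message.Value);
            }
        }
    }
}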

Azure function: limit the number of calls per second

I have an Azure Function triggered by queue messages. This function makes a request to a third-party API. Unfortunately this API has a limit of 10 transactions per second, but I might have more than 10 messages per second in the Service Bus queue. How can I limit the number of calls to the Azure Function to satisfy the third-party API's limitations?
Unfortunately there is no built-in option for this.
The only reliable way to limit concurrent executions is to run on a fixed App Service Plan (not the Consumption Plan) with just one instance running all the time. You will have to pay for this instance.
Then set the option in the host.json file:
"serviceBus": {
// The maximum number of concurrent calls to the callback the message
// pump should initiate. The default is 16.
"maxConcurrentCalls": 10
}
Finally, make sure your function takes about a second to execute (or some other minimal duration), and adjust the concurrent calls accordingly.
As @SeanFeldman suggested, see some other ideas in this answer. It's about Storage Queues, but it applies to Service Bus too.
You can try writing some custom logic, i.e. implement your own in-memory throttle in the Azure Function to queue up requests and limit the calls to the third-party API (a rough sketch follows). Until the call to the third-party API succeeds, you don't need to acknowledge the message in the queue, so reliability is also maintained.
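A minimal sketch of that idea, assuming a single function instance; CallThirdPartyApiAsync is a hypothetical stand-in, and note that a semaphore caps in-flight concurrency, which only approximates a strict calls-per-second limit:

using System.Threading;
using System.Threading.Tasks;

public static class ApiThrottle
{
    // At most 10 in-flight calls across this (single) instance.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(10);

    public static async Task CallApiThrottledAsync(string payload)
    {
        await Throttle.WaitAsync();
        try
        {
            await CallThirdPartyApiAsync(payload); // the rate-limited third-party call
        }
        finally
        {
            Throttle.Release();
        }
    }

    // Hypothetical stand-in for the real third-party call.
    private static Task CallThirdPartyApiAsync(string payload) => Task.Delay(100);
}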
The best way to maintain the integrity of the system is to throttle the consumption of the Service Bus messages. You can control how your QueueClient processes messages; see: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues#4-receive-messages-from-the-queue
Check out MaxConcurrentCalls:
static void RegisterOnMessageHandlerAndReceiveMessages()
{
    // Configure the message handler options in terms of exception handling,
    // number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(),
        // set to 1 for simplicity. Set it according to how many messages the
        // application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the
        // messages after returning from the user callback. False below indicates
        // the complete operation is handled by the user callback, as in
        // ProcessMessagesAsync().
        AutoComplete = false
    };

    // Register the function that processes messages.
    queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}
Do you want to discard the messages beyond the first 10 you receive in any one-second interval, or do you want to process every message while respecting the API's throttling limit? For the latter, you can have your function add the messages to another queue, from which a second, timer-triggered function reads a batch of 10 messages every second, as sketched below.
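For illustration, a hedged sketch of that second, timer-triggered function using Microsoft.Azure.ServiceBus (the connection string, queue name, and CallThirdPartyApiAsync helper are illustrative; creating a receiver per invocation is for brevity only):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ThrottledForwarder
{
    private const string ConnectionString = "<service-bus-connection-string>";
    private const string QueueName = "buffer-queue";

    [FunctionName("ThrottledForwarder")]
    public static async Task Run(
        [TimerTrigger("*/1 * * * * *")] TimerInfo timer, // fires every second
        ILogger log)
    {
        var receiver = new MessageReceiver(ConnectionString, QueueName);
        try
        {
            // Pull at most 10 messages, matching the API's 10 TPS limit.
            var batch = await receiver.ReceiveAsync(10, TimeSpan.FromMilliseconds(500));
            if (batch == null)
                return; // nothing queued this second

            foreach (var message in batch)
            {
                await CallThirdPartyApiAsync(Encoding.UTF8.GetString(message.Body));
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            }
        }
        finally
        {
            await receiver.CloseAsync();
        }
    }

    // Hypothetical stand-in for the real third-party call.
    private static Task CallThirdPartyApiAsync(string payload) => Task.Delay(50);
}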

Asp.Net MVC 5 - Long Running Task - How to ensure that worker thread won't be thrown away when IIS recycles the AppPool?

I have a data-processing MVC application that works with uploaded files ranging from 100MB to 2GB and contains a couple of long-running operations. Users upload the files, the data in those files gets processed, and finally some analysis of the data is sent to the related users/clients.
It will take at least a couple of hours to process the data, so to make sure the user doesn't have to wait all that time, I've spun up a separate task to do the long-running operation. This way, once the files are received by the server and stored on disk, the user gets a response back with a ReferenceID and can close the browser.
So far, it's been working well as intended but after reading up on issues with using Fire-and-Forget pattern in MVC and worker threads getting thrown away by IIS during recycling, I have concerns about this approach.
Is this approach still safe? If not, how can I ensure that the thread that is processing the data doesn't die before it finishes processing and sends the data to clients (in a relatively simple way)?
The app runs on .NET 4.5, so I don't think I will be able to use HostingEnvironment.QueueBackgroundWorkItem at the moment.
Does using async/await in the controller help?
I've also thought of using a message queue on the app server to store messages once the files are stored to disk, then making the DataProcessor a separate service/process that listens to the queue. If the queue is recoverable, that assures me the messages will eventually get processed, even if the server crashes or the thread gets thrown away before it finishes processing the data. Is this a better approach?
My current setup is something like below
Controller
public ActionResult ProcessFiles()
{
    HttpFileCollectionBase uploadedFiles = Request.Files;
    var isValid = ValidateService.ValidateFiles(uploadedFiles);
    if (!isValid)
    {
        return View("Error");
    }
    var referenceId = DataProcessor.ProcessFiles(uploadedFiles);
    return View(referenceId);
}
Business Logic
public class DataProcessor
{
    public static int ProcessFiles(HttpFileCollectionBase uploadedFiles)
    {
        var referenceId = GetUniqueReferenceIdForCurrentSession();
        var location = SaveIncomingFilesToDisk(referenceId, uploadedFiles);

        // ProcessData makes a DB call and takes a few hours to complete.
        Task.Factory.StartNew(() => ProcessData(referenceId, location))
            .ContinueWith((prevTask) =>
            {
                Log.Info("Completed Processing. Carrying on with other work");
                // The method below takes about 30 mins to an hour.
                SendDataToRelatedClients(referenceId);
            });

        return referenceId;
    }
}
References
http://blog.stephencleary.com/2014/06/fire-and-forget-on-asp-net.html
Apppool recycle and Asp.net with threads?
Is this approach still safe?
It was never safe.
Does using async/await in the controller help?
No.
The app runs on .NET 4.5, so I don't think I will be able to use HostingEnvironment.QueueBackgroundWorkItem at the moment.
I have an AspNetBackgroundTasks library that essentially does the same thing as QueueBackgroundWorkItem (with minor differences). However...
I've also thought of using a message queue on the app server to store messages once the files are stored to disk, then making the DataProcessor a separate service/process that listens to the queue. If the queue is recoverable, that assures me the messages will eventually get processed, even if the server crashes or the thread gets thrown away before it finishes processing the data. Is this a better approach?
Yes. This is the only reliable approach. It's what I call the "proper distributed architecture" in my blog post.
No, it is not safe. Create a service application on your server that handles these requests and publishes the results. If you are hosted on Azure, take advantage of their WebJobs service.
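For illustration, a hedged sketch of the queue-based handoff using the classic Azure Storage queue SDK (Microsoft.WindowsAzure.Storage; the connection string and queue name are illustrative). The controller only saves the files and enqueues a message; a separate service or WebJob does the hours-long processing:

public ActionResult ProcessFiles()
{
    HttpFileCollectionBase uploadedFiles = Request.Files;
    var referenceId = GetUniqueReferenceIdForCurrentSession();
    var location = SaveIncomingFilesToDisk(referenceId, uploadedFiles);

    // Hand off to the out-of-process worker via a durable queue.
    var queue = CloudStorageAccount.Parse(connectionString) // illustrative setting
        .CreateCloudQueueClient()
        .GetQueueReference("data-processing");
    queue.CreateIfNotExists();
    queue.AddMessage(new CloudQueueMessage(referenceId + "|" + location));

    return View(referenceId); // safe for the user to close the browser now
}

The worker then dequeues, calls ProcessData and SendDataToRelatedClients, and deletes the message only after both succeed, so a crash simply makes the message visible again for a retry.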

What is a safe overhead for RequestAdditionalTime()?

I have a Windows service that spawns a set of child activities on separate threads and that should only terminate when all those activities have successfully completed. I do not know in advance how long it might take to terminate an activity after a stop signal is received. During OnStop(), I wait in intervals for that stop signal and keep requesting additional time for as long as the system is willing to grant it.
Here is the basic structure:
class MyService : ServiceBase
{
    private CancellationTokenSource stopAllActivities;
    private CountdownEvent runningActivities;

    protected override void OnStart(string[] args)
    {
        // ... start a set of activities that signal runningActivities
        //     when they stop
        // ... initialize runningActivities to the number of activities
    }

    protected override void OnStop()
    {
        stopAllActivities.Cancel();
        while (!runningActivities.Wait(10000))
        {
            RequestAdditionalTime(15000); // NOTE: 5000 added for overhead
        }
    }
}
Just how much "overhead" should I be adding in the RequestAdditionalTime call? I'm concerned that the requests are cumulative, instead of based on the point in time when each RequestAdditionalTime call is made. If that's the case, adding overhead could result in the system eventually denying the request because it's too far out in the future. But if I don't add any overhead then my service could be terminated before it has a chance to request the next block of additional time.
This post wasn't exactly encouraging:
The MSDN documentation doesn’t mention this but it appears that the value specified in RequestAdditionalTime is not actually ‘additional’ time. Instead, it replaces the value in ServicesPipeTimeout. Worse still, any value greater than two minutes (120000 milliseconds) is ignored, i.e. capped at two minutes.
I hope that's not the case, but I'm posting this as a worst-case answer.
UPDATE: The author of that post was kind enough to post a very detailed reply to my comment, which I've copied below.
Lars, the short answer is no.
What I would say is that I now realise that Windows Services ought to be designed to start and terminate processing quickly when requested to do so.
As developers, we tend to focus on the implementation of the processing and then package it up and deliver it as a Windows Service.
However, this really isn't the correct approach to designing Windows services. Services must be able to respond quickly to requests to start and stop, not only when an administrator makes the request from the services console, but also when the operating system requests a start as part of its startup processing, or a stop because it is shutting down.
Consider what happens when Windows is configured to shut down when a UPS signals that the power has failed. It’s not appropriate for the service to respond with “I need a few more minutes…”.
It’s possible to write services that react quickly to stop requests even when they implement long running processing tasks. Usually a long running process will consist of batch processing of data and the processing should check if a stop has been requested at the level of the smallest unit of work that ensures data consistency.
As an example, the first service where I found the stop timeout was a problem involved the processing of a notifications queue on a remote server. The processing retrieved a notification from the queue, calling a web service to retrieve data related to the subject of the notification, and then writing a data file for processing by another application.
I implemented the processing as a timer driven call to a single method. Once the method is called it doesn’t return until all the notifications in the queue have been processed. I realised this was a mistake for a Windows Service because occasionally there might be tens of thousands of notifications in the queue and processing might take several minutes.
The method is capable of processing 50 notifications per second. So, what I should have done was implement a check to see if a stop had been requested before processing each notification. This would have allowed the method to return when it has completed the processing of a notification but before it has started to process the next notification. This would have ensured that the service responds quickly to a stop request and any pending notifications remained queued for processing when the service is restarted.
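A minimal sketch of that per-notification stop check (the queue accessor and helper methods are illustrative):

private void ProcessNotifications(CancellationToken stopRequested)
{
    Notification notification;
    // Test for a stop request at the smallest unit of work: one notification.
    while (!stopRequested.IsCancellationRequested
           && (notification = notificationQueue.TryDequeue()) != null)
    {
        // One complete unit of work; data stays consistent if we stop after it.
        var data = RetrieveRelatedData(notification);
        WriteDataFile(notification, data);
    }
    // Anything still queued will be processed when the service restarts.
}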

How to implement an IIS-like threadpool on a worker-server

EDIT I realised my question was not stated clearly enough and have edited it heavily.
This is a bit of an open ended question so apologies in advance.
In a nutshell, I want to implement IIS-style asynchronous request processing in an Azure worker role.
It may be very simple or it may be insanely hard - I am looking for pointers to where to research.
While my implementation will use Azure Workers and Service Bus Queues, the general principle is applicable to any scenario where a worker process is listening for incoming requests and then servicing them.
What IIS does
In IIS there is a fixed-size threadpool. If you deal with all requests synchronously, the maximum number of requests you can handle in parallel == maxthreads. However, if you have to do slow external I/O to serve requests, this is highly inefficient because you can end up with the server being idle while all threads are tied up waiting for external I/O to complete.
From MSDN:
On the Web server, the .NET Framework maintains a pool of threads that are used to service ASP.NET requests. When a request arrives, a thread from the pool is dispatched to process that request. If the request is processed synchronously, the thread that processes the request is blocked while the request is being processed, and that thread cannot service another request.
This might not be a problem, because the thread pool can be made large enough to accommodate many blocked threads. However, the number of threads in the thread pool is limited. In large applications that process multiple simultaneous long-running requests, all available threads might be blocked. This condition is known as thread starvation. When this condition is reached, the Web server queues requests. If the request queue becomes full, the Web server rejects requests with an HTTP 503 status (Server Too Busy).
In order to overcome this issue, IIS has some clever logic that allows you to deal with requests asynchronously:
When an asynchronous action is invoked, the following steps occur:
1. The Web server gets a thread from the thread pool (the worker thread) and schedules it to handle an incoming request. This worker thread initiates an asynchronous operation.
2. The worker thread is returned to the thread pool to service another Web request.
3. When the asynchronous operation is complete, it notifies ASP.NET.
4. The Web server gets a worker thread from the thread pool (which might be a different thread from the thread that started the asynchronous operation) to process the remainder of the request, including rendering the response.
The important point here is when the asynchronous request returns, the return action is scheduled to run on one of the same pool of threads that serves the initial incoming requests. This means that the system is limiting how much work it is doing concurrently and this is what I would like to replicate.
What I want to do
I want to create a Worker role that will listen for incoming work requests on Azure Service Bus Queues, and potentially on TCP sockets too. Like IIS, I want to have a maximum threadpool size and I want to limit how much actual work the worker does in parallel: if the worker is busy serving existing requests - whether new incoming ones or callbacks from previous async calls - I don't want to pick up any new incoming requests until some threads have been freed up.
It is not a problem to limit how many jobs I start concurrently - that is easy to control; the problem is limiting how many I am actually working on concurrently.
Let's assume a threadpool of 100 threads.
Say 100 requests to send an email come in, and each email takes 5 seconds to hand off to the SMTP server. If I limit my server to processing only 100 requests at the same time, then my server will be unable to do anything else for 5 seconds while the CPU is completely idle. So, I don't really mind starting to send 1,000 or 10,000 emails at the same time, because 99% of the "request process time" will be spent waiting for external I/O and my server will still be very quiet.
So, that particular scenario I could deal with by just keeping on accepting incoming requests with no limit (or only limit the start of the request until I fire off the async call; as soon as the BeginSend is called, I'll return and start serving another request).
Now, imagine instead that I have a type of request that goes to the database to read some data, does some heavy calculation on it, and then writes the result back to the database. There are two database requests there that should be made asynchronous, but 90% of the request processing time will be spent on my worker. So, if I follow the same logic as above - keep starting async calls and just let the continuations grab whatever threads they need - I will end up with a server that is very overloaded.
Somehow, what IIS does is make sure that when an async call returns it uses the same fixed-size thread pool. This means that if I fire off a lot of async calls and they then return and start using my threads, IIS will not accept new requests until those returns have finished. And that is perfect because it ensures a sensible load on the server, especially when I have multiple load-balanced servers and a queue system that the servers pick work from.
I have this sneaky suspicion that this might be very simple to do, there is just something basic I am missing. Or maybe it is insanely hard.
Creating a threadpool should be considered as independent of Windows Azure. Since a Worker Role instance is effectively Windows 2008 Server R2 (or SP2), there's nothing really different. You'd just need to set things up from your OnStart() or Run().
One thing you wanted to do was use queue length as a determining factor when scaling to more/less worker instances. Note that Service Bus Queues don't advertise queue length, where Windows Azure Queues (based on Storage, vs. Service Bus) do. With Windows Azure Queues, you'll need to poll synchronously for messages (whereas Service Bus Queues have long-polling operations). Probably a good idea to review the differences between Service Bus Queues and Windows Azure Queues, here.
Have you considered having a dedicated WCF instance (not WAS or IIS hosted) to buffer the long-running requests? It would have its own dedicated app pool, with a Max value setting separate from IIS, that won't contend with your ASP.NET HTTP requests (which are served by the shared ASP.NET thread pool).
Then use IIS Async methods to call WCF with the constrained app pool.
I've used the SmartThreadPool project in the past as a per-instance pool and, if I'm reading you correctly, it should have all the callback and worker-limiting functionality you need. My company actually has it running currently on Azure for the exact purpose you describe of reading message bus requests asynchronously.
I have been digging around in this and found that it is indeed relatively easy.
http://www.albahari.com/threading/ has got some good information and I actually ended up buying the book which that website is essentially promoting.
What I found out is that;
Your application has a ThreadPool available to it by default.
You can limit the number of threads available in the ThreadPool.
When you use QueueUserWorkItem or Task.Factory.StartNew, the job you start runs on a thread in the ThreadPool.
When you use one of the asynchronous IO calls in the framework (the Begin... methods, WebClient.DownloadStringAsync, etc.), the callbacks will also run on a thread from the ThreadPool (what happens with the IO request itself is outside the scope of this discussion).
So far, so good. The problem is that I can keep calling Task.Factory.StartNew as much as I like and the ThreadPool will simply queue up the work until there are free threads to service them. So, in the case of an Azure Worker, I could easily empty the Queue even though my worker is busy servicing existing requests (and callbacks from existing requests). That is the core of my problem. What I want is to not take anything out of the queue until I actually have some free threads to service the request.
This is a very simple example of how this could be achieved. In essence, I am using an AutoResetEvent to make sure that I don't start another task from the queue until the previous task has actually started. Granted, I do actually take stuff out of the queue before there is a free thread, but on balance this should avoid crazy overloads of the worker and allow me to spin up more workers to share the load.
ThreadPool.SetMaxThreads(5, 1000); // Limit to 5 concurrent threads
ThreadPool.SetMinThreads(5, 10);   // Ensure we spin up all threads

var jobStart = new AutoResetEvent(true);

// The "listen" loop
while (true)
{
    var job = this.jobQueue.Dequeue();
    jobStart.WaitOne(); // Wait until the previous job has actually been started

    Task.Factory.StartNew(
        () =>
        {
            jobStart.Set(); // Happens when the threadpool allocates this job to a thread
            this.Download(job);
        });
}
This can - and probably should - be made a lot more sophisticated, including having timeouts, putting the work item back in the queue if a thread can't be allocated within a reasonable time and so on.
An alternative would be to use ThreadPool.GetAvailableThreads to check whether there are free threads before reading from the queue, but that feels rather more error-prone.
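As a variation (my own sketch, not part of the original answer), a SemaphoreSlim can gate the dequeue itself, so nothing is taken off the queue until a worker slot is genuinely free:

var slots = new SemaphoreSlim(5); // match the thread pool limit above

while (true)
{
    slots.Wait(); // block the listen loop until a slot is free
    var job = this.jobQueue.Dequeue();
    Task.Factory.StartNew(() =>
    {
        try
        {
            this.Download(job);
        }
        finally
        {
            slots.Release(); // free the slot only when the work is done
        }
    });
}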
Somehow, what IIS does is make sure that when an async call returns
it uses the same fixed-size thread pool.
This is not true: when your code runs in response to an HTTP request, you decide on which threads the continuation function executes. Usually this is the thread pool, and the thread pool is an AppDomain-wide resource that is shared among all requests.
I think IIS does less "magic" than you think it does. All it does is limit the number of parallel HTTP requests and the backlog size. You decide what happens once you have been given control by ASP.NET.
If your code is not protected against overloading the server, you will overload the server even on IIS.
From what I understand, you want to constrain the number of threads used for processing a certain type of message at the same time.
One approach would be to simply wrap the message processor, invoked on a new thread, with something like:
try
{
    Interlocked.Increment(ref count);
    Process(message);
}
finally
{
    Interlocked.Decrement(ref count);
}
Before invoking the wrapper, simply check whether 'count' is less than your threshold count; stop polling/handling more messages until the count drops sufficiently.
EDIT Added more information based on comment
Frans, I'm not sure why you see the infrastructure and business code as coupled. Once you place your business process on a new thread to run asynchronously as a task, you need not worry about making additional IO-bound calls asynchronous. This is a simpler model to program in.
Here is what I am thinking.
// semi-pseudo-code

// Infrastructure – reads messages from the queue
// (independent thread, could be triggered by a timer)
while (count < maxCount && (message = Queue.GetMessage()) != null)
{
    Interlocked.Increment(ref count);
    // process the message asynchronously on a new thread
    Task.Factory.StartNew(() => ProcessWrapper(message));
}

// Glue / semi-infrastructure - deals with message deletion and exceptions
void ProcessWrapper(Message message)
{
    try
    {
        Process(message);
        Queue.DeleteMessage(message);
    }
    catch (Exception ex)
    {
        // Handle the exception here.
        // Log, write to a poison message queue, etc.
    }
    finally
    {
        Interlocked.Decrement(ref count);
    }
}

// Business process
void Process(Message message)
{
    // actual work done here
}
