Throttle/restrict a Service Bus queue's message delivery to a ServiceBusTrigger - C#

I have a Service Bus queue (SBQ) which receives a lot of message payloads.
I have a ServiceBusTrigger (SBT) with accessRights(Manage) which continuously polls messages from the SBQ.
The problem I am facing is:
My SBT (16 instances at once) picks up messages (16 messages individually) at one time and creates a request to another server (call it S1) for each.
If the SBT continuously creates 500-600 requests, server S1 stops responding.
What I am expecting:
A way to throttle/restrict how many messages are picked from the SBQ at once, so that I indirectly restrict how many requests are sent.
Please share your thoughts on what design I should follow. I couldn't google an exact solution.

Restrict the maximum concurrent calls of the Service Bus trigger.
In host.json, add configuration to throttle concurrency (the default is the 16 concurrent messages you have observed). Here is an example for a v2 function:
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                "maxConcurrentCalls": 8
            }
        }
    }
}
Restrict the Function host instance count. When the host scales out, each instance has one Service Bus trigger which reads multiple messages concurrently, as configured above.
If the trigger is on a dedicated App Service plan, scale the instance count in to some small value. For functions on the Consumption plan, add the app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT with a reasonable value (<= 5). Of course, you can set the count to 1 in order to control the behavior strictly.
If we have control over how the messages are sent, schedule the incoming messages to help decrease the request rate.
Use static clients to reuse connections to server S1.
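For example, a static HttpClient shared across function invocations avoids exhausting sockets under load. This is a minimal sketch; the class name and the S1 endpoint URL are placeholders:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class S1Forwarder
{
    // Created once per host instance and reused across invocations,
    // instead of creating a client (and a TCP connection) per message.
    private static readonly HttpClient Client = new HttpClient();

    public static Task<HttpResponseMessage> ForwardAsync(string payload)
    {
        // "https://s1.example.com/api/ingest" is a placeholder endpoint.
        return Client.PostAsync("https://s1.example.com/api/ingest",
                                new StringContent(payload));
    }
}
```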

Related

Batching to Event Hubs from an ASP .NET Application

I have an array of websites that (asynchronously) send event analytics into an ASP.NET website, which then should send the events into an Azure EventHubs instance.
The challenge I'm facing is that with requests exceeding 50,000 per second, I've noticed that my response times for serving these requests climb into the multi-second range, affecting total load times for the originating website. I have scaled up all parts; however, I recognize that sending one event per request is not very efficient, due to the overhead of opening an AMQP connection to Event Hubs and sending off the payload.
As a solution I've been trying to batch the Event Data that gets sent to my EventHubs instance however I've been running into some problems with synchronizing.
With each request, I add the event data into a static EventDataBatch (created via EventHubClient.CreateBatch()) using eventHubData.TryAdd(), then I check whether the quantity of events has reached a predefined threshold; if so, I send the events asynchronously via EventHubClient.SendAsync(). The challenge this creates is that, since this is an ASP.NET application, many threads could be serving requests at any given instant, any of which could be calling eventHubData.TryAdd() or EventHubClient.SendAsync() at the same point in time. As a crude attempt to resolve this, I tried calling lock(batch) prior to eventHubData.TryAdd(), but this does not resolve the issue, since I cannot also lock the asynchronous method EventHubClient.SendAsync().
What is the best way to implement this solution so that each request does not require its own request to Event Hubs and can take advantage of batching, while also preserving the integrity of the batch itself and not running into any deadlock issues?
Have a look at the source code for the Application Insights SDK to see how they have solved this problem - you can reuse the key parts to achieve the same thing with Event Hubs over AMQP.
The pattern is:
1) Buffer data. Define a buffer that you will share among threads with a maximum size. Multiple threads write data into the buffer
https://github.com/Microsoft/ApplicationInsights-dotnet/blob/develop/src/Microsoft.ApplicationInsights/Channel/TelemetryBuffer.cs
2) Prepare a transmission. You can transmit the items in the buffer either when the buffer is full, when some interval elapses, or whichever happens first. Take all the items from the buffer to send
https://github.com/Microsoft/ApplicationInsights-dotnet/blob/develop/src/Microsoft.ApplicationInsights/Channel/InMemoryTransmitter.cs
3) Do the transmission. Send all items as multiple data points in a single Event Hub message:
https://github.com/Microsoft/ApplicationInsights-dotnet/blob/develop/src/Microsoft.ApplicationInsights/Channel/Transmission.cs
Those are the three classes that combine to achieve this using HTTP to post to the Application Insights collection endpoint - you can see how the same pattern can be applied to collect, amalgamate, and transmit to Event Hubs.
You'll need to control the maximum message size, which is 256 KB per Event Hub message; you could do that by setting the telemetry buffer size, though it's up to your client logic to manage that.
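The three steps above can be sketched roughly as follows. This is a simplified illustration, not the SDK's actual implementation: it assumes the Microsoft.Azure.EventHubs client (EventHubClient, EventDataBatch), the class name and connection string are placeholders, and a production version would handle concurrent flushes and a full batch more carefully:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

public class BufferedEventSender
{
    private readonly EventHubClient _client;
    // 1) Buffer: many request threads write into this shared, lock-free queue.
    private readonly ConcurrentQueue<EventData> _buffer = new ConcurrentQueue<EventData>();
    private readonly int _maxBatchCount;
    private readonly Timer _flushTimer;

    public BufferedEventSender(string connectionString, int maxBatchCount = 100)
    {
        _client = EventHubClient.CreateFromConnectionString(connectionString);
        _maxBatchCount = maxBatchCount;
        // 2) Prepare a transmission on an interval, even if the buffer never fills.
        _flushTimer = new Timer(_ => FlushAsync().GetAwaiter().GetResult(),
                                null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }

    public void Enqueue(string json)
    {
        _buffer.Enqueue(new EventData(Encoding.UTF8.GetBytes(json)));
        // 2) ... or transmit early once the threshold is reached.
        if (_buffer.Count >= _maxBatchCount)
            _ = FlushAsync();
    }

    // 3) Do the transmission: drain the buffer and send a single batch.
    private async Task FlushAsync()
    {
        EventDataBatch batch = _client.CreateBatch();
        while (_buffer.TryDequeue(out EventData evt))
        {
            if (!batch.TryAdd(evt))
                break; // batch full; a real implementation would re-enqueue evt
        }
        if (batch.Count > 0)
            await _client.SendAsync(batch);
    }
}
```

The key point is that request threads only ever touch the thread-safe buffer, so no lock is needed around SendAsync.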

Azure function: limit the number of calls per second

I have an Azure Function triggered by queue messages. This function makes a request to a third-party API. Unfortunately, this API has a limit of 10 transactions per second, but I might have more than 10 messages per second in the Service Bus queue. How can I limit the number of calls to the Azure Function to satisfy the third-party API limitations?
Unfortunately there is no built-in option for this.
The only reliable way to limit concurrent executions would be to run on a fixed App Service Plan (not Consumption Plan) with just 1 instance running all the time. You will have to pay for this instance.
Then set the option in host.json file:
"serviceBus": {
    // The maximum number of concurrent calls to the callback that the
    // message pump should initiate. The default is 16.
    "maxConcurrentCalls": 10
}
Finally, make sure your function takes at least a second to execute (or some other minimal duration, and adjust the concurrent calls accordingly).
As @SeanFeldman suggested, see some other ideas in this answer. It's about Storage Queues, but it applies to Service Bus too.
You can try writing some custom logic, i.e. implement your own in-memory queue in the Azure Function to queue up requests and limit the calls to the third-party API. Until the call to the third-party API succeeds, you don't need to acknowledge the messages in the queue, so reliability is also maintained.
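For example, padding each invocation out to at least one second means that with maxConcurrentCalls set to 10, at most roughly ten API requests go out per second. A sketch, where CallThirdPartyAsync is a placeholder for the rate-limited call:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class ThrottledFunction
{
    public static async Task ProcessMessageAsync(string message)
    {
        var stopwatch = Stopwatch.StartNew();

        await CallThirdPartyAsync(message); // the rate-limited API call

        // Pad the invocation to at least 1 second so that, with
        // maxConcurrentCalls = 10, we make at most ~10 calls/second.
        TimeSpan remaining = TimeSpan.FromSeconds(1) - stopwatch.Elapsed;
        if (remaining > TimeSpan.Zero)
            await Task.Delay(remaining);
    }

    // Placeholder for the real third-party API call.
    private static Task CallThirdPartyAsync(string message) => Task.CompletedTask;
}
```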
The best way to maintain the integrity of the system is to throttle the consumption of the Service Bus messages. You can control how your QueueClient processes messages; see: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues#4-receive-messages-from-the-queue
Check out MaxConcurrentCalls:
static void RegisterOnMessageHandlerAndReceiveMessages()
{
    // Configure the message handler options in terms of exception handling,
    // number of concurrent messages to deliver, etc.
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback ProcessMessagesAsync(),
        // set to 1 for simplicity. Set it according to how many messages the
        // application wants to process in parallel.
        MaxConcurrentCalls = 1,

        // Indicates whether the message pump should automatically complete the
        // messages after returning from the user callback. False below indicates
        // that the complete operation is handled by the user callback, as in
        // ProcessMessagesAsync().
        AutoComplete = false
    };

    // Register the function that processes messages.
    queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}
Do you want to drop the N-10 messages you receive in a one-second interval, or do you want to treat every message in accordance with the API throttling limit? For the latter, you can add the messages to another queue, from which a second function (timer-triggered) reads a batch of 10 messages every second.
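That timer-triggered drain could look roughly like this, assuming the Microsoft.Azure.ServiceBus and Azure Functions v2 APIs; the queue name, function name, and "<connection-string>" are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;

public static class BatchDrainer
{
    // Connection string and queue name are placeholders.
    private static readonly MessageReceiver Receiver =
        new MessageReceiver("<connection-string>", "buffer-queue");

    // Runs once per second; pulls at most 10 messages each tick,
    // which keeps the downstream API at or under 10 calls/second.
    [FunctionName("DrainTenPerSecond")]
    public static async Task Run([TimerTrigger("*/1 * * * * *")] TimerInfo timer)
    {
        var messages = await Receiver.ReceiveAsync(maxMessageCount: 10);
        if (messages == null) return;

        foreach (var message in messages)
        {
            // Call the rate-limited API here, then complete the message.
            await Receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
    }
}
```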

Getting data from 50 service bus queue for a real time dashboard in a azure web app

Using the code shown here, I was able to create a web app that sent data to the client every 30 seconds using System.Threading.Timer.
I was able to add some code which received data from a Service Bus queue using MessagingFactory and MessageReceiver, and based on that sent data to the SignalR client instead of hard-coding it as in the mentioned example.
Now my real application gets data from 50 such queues.
Theoretically, I could create 50 timer objects calling 50 different methods, each of which would in turn call a Service Bus queue.
I would sincerely appreciate it if someone could suggest the right way to achieve my goal.
Thanks
The message pump pattern seems like it would be a good fit for this application. You create a separate client for each queue and configure each one to automatically listen for messages in its queue and process them as they come in.
foreach (var queueName in queueNames)
{
    var queueClient = QueueClient.CreateFromConnectionString(connectionString, queueName);
    queueClient.OnMessage(message =>
    {
        // Do work here
        Console.Out.WriteLine(string.Format("Received message {0} on queue {1}", message.MessageId, queueName));
    });
}

Azure Service Bus Subscriber regularly phoning home?

We have pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint.)
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code the receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes, the Receive calls all use long polling: the client calls out to the Service Bus endpoint and asks for a message. The Service Bus service gets that request, and if it has a message it returns it immediately. If it doesn't have a message, it holds the connection open for a period of time in case a message arrives; if one arrives within that window, it is returned to the client. If no message is available by the end of the window, a response is sent to the client indicating that no message was there (i.e., your null BrokeredMessage). If you call Receive with no overloads (as you've done here), it immediately makes another request. This loop continues until a message is received.
Thus, what you are seeing is the number of times the client requests a message when there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back-off for requests; there are lots of examples of how to do this. It cuts back on how often you need to check the queue and can reduce your transaction count.
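An exponential back-off around the receive loop might look like this. A sketch only, using the older Microsoft.ServiceBus.Messaging API that matches the question's code; the 1-second floor and 10-minute cap are illustrative, not recommendations:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public class BackoffReceiver
{
    private static readonly TimeSpan MinDelay = TimeSpan.FromSeconds(1);
    private static readonly TimeSpan MaxDelay = TimeSpan.FromMinutes(10);

    public async Task ReceiveLoopAsync(SubscriptionClient client)
    {
        TimeSpan delay = MinDelay;
        while (true)
        {
            BrokeredMessage message = await client.ReceiveAsync();
            if (message != null)
            {
                // Got a message: process it and reset the back-off.
                await message.CompleteAsync();
                delay = MinDelay;
            }
            else
            {
                // Empty result: wait before polling again, doubling the
                // delay up to the cap to cut down on idle transactions.
                await Task.Delay(delay);
                delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, MaxDelay.Ticks));
            }
        }
    }
}
```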
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus, you are charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque, given that Receive makes multiple calls in the background). Note that the docs point out that you are charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if a queue isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and go back to watching the queue depth whenever a receive call returns no messages. I've not tried that, and I'd think you could get throttled if you checked the queue depth too often.
If these are your production numbers then your subscription isn't really processing a lot of messages. It would likely be a really good idea to have a back off policy to a time that is acceptable to wait before it is processed. Like, if it is okay that a message sits for more than 10 minutes then create a back off approach that will eventually just be checking for a message every 10 minutes, then when it gets one process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local, then the client could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set in the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.

How can I throttle the amount of messages coming from ActiveMQ in my C# app?

I'm using ActiveMQ in a .Net program and I'm flooded with message-events.
In short, when I get a queue event 'onMessage(IMessage receivedMsg)', I put the message into an internal queue, out of which X threads do their thing.
At first I used 'AcknowledgementMode.AutoAcknowledge' when creating the session, so I'm guessing that all the messages in the queue got pulled down and put into the in-memory queue (which is risky, since on a crash everything is lost).
So then I used 'AcknowledgementMode.ClientAcknowledge' when creating the session, and when a worker is done with a message it calls the 'commit()' method on it. However, all the messages still get pulled down from the queue.
How can I configure it so that ONLY X messages are being processed or held in the internal queue, and not everything is 'downloaded' right away?
Are you on .NET 4.0? You could use a BlockingCollection. Set its maximum capacity; as soon as a thread tries to add an excess element, the Add operation will block until the collection falls below the threshold again.
Maybe that would do it for throttling?
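A sketch of that idea: a bounded BlockingCollection whose Add blocks the ActiveMQ dispatch thread once the cap is reached, naturally throttling delivery. IMessage is the Apache.NMS interface from the question; the cap of 100 and the class name are arbitrary:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Apache.NMS; // for IMessage

public class ThrottledPipeline
{
    // Bounded to 100 items: Add() blocks the ActiveMQ dispatch thread
    // once the buffer is full, which throttles message delivery.
    private readonly BlockingCollection<IMessage> _queue =
        new BlockingCollection<IMessage>(boundedCapacity: 100);

    // Called from the NMS onMessage callback.
    public void OnMessage(IMessage receivedMsg) => _queue.Add(receivedMsg);

    // X worker threads pull from the bounded buffer.
    public void StartWorkers(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Task.Run(() =>
            {
                foreach (IMessage msg in _queue.GetConsumingEnumerable())
                {
                    // Process msg, then acknowledge it so the broker
                    // can dispatch the next one (ClientAcknowledge mode).
                    msg.Acknowledge();
                }
            });
        }
    }
}
```

Note that this only bounds the in-memory buffer; combined with a small prefetch (see the other answer), it also bounds how many messages the broker pushes down.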
There is also an API for throttling in the Rx framework, but I do not know how it is implemented. If you implement your Queue source as Observable, this API would become available for you, but I don't know if this hits your needs.
You can set the client prefetch to control how many messages are sent to the client. When the session is in Auto Ack mode, the client only acks a message once it has been delivered to your app via the onMessage callback or through a synchronous receive. By default, the client prefetches 1000 messages from the broker; if the client goes down, these messages are redelivered to another client if this is a queue, whereas for a topic they are just discarded, as a topic is a broadcast-based channel. If you set the prefetch to one, your client is only sent one message from the server; then, each time your onMessage callback completes, a new message is dispatched, since the client acks the previous one (assuming the session is in Auto Ack mode).
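For example, in NMS the queue prefetch can be set as an option on the connection URI. A sketch; the broker host is a placeholder, and the exact option name/casing should be double-checked against the NMS configuration page linked below:

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Prefetch of 1: the broker dispatches one message at a time,
// sending the next only after the current one is acknowledged.
var factory = new ConnectionFactory(
    "activemq:tcp://localhost:61616?nms.PrefetchPolicy.QueuePrefetch=1");

using (IConnection connection = factory.CreateConnection())
using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
{
    connection.Start();
    // Create consumers as usual; each queue consumer now prefetches 1.
}
```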
Refer to the NMS configuration page for all the options:
http://activemq.apache.org/nms/configuring.html
Regards
Tim.
FuseSource.com
