I have an Azure Function deployed that listens for Service Bus messages.
Whilst I was testing I realised that the same Azure Function instance picked up the Service Bus message, as data was written to a file I was not expecting.
What I thought would happen is that a new instance of the Azure Function would launch for each new message added to the Service Bus. Am I wrong in this understanding? Or can this be achieved if I set some kind of flag so that, once my function dequeues a message, it stops listening and any future Service Bus entries get picked up by a new Azure Function instance?
Thanks
EDIT:
This is the function declaration
[FunctionName("ServiceBusShopifyTrigger")]
public async Task ServiceBusTrigger([ServiceBusTrigger("%serviceBusShopifyExtractQueue%", Connection = "serviceBusExtractConnectionString")] string msg, int deliveryCount, DateTime enqueuedTimeUtc, string messageId)
{
I have placed the following in the start of the function:
string instanceid = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
_logger.LogInformation($"****** Orch Shopify - Azure Function Instance ID: {instanceid}");
As I understand it, this should be unique for each Azure instance that spins up. However, I am seeing the same value each time the function executes.
I also have an HTTP Trigger function and again the instance ID is the same.
Looking at the monitoring I am seeing the following for HTTP Trigger (note the operation ID on closely triggered functions)
These 2 instances were triggered from 2 separate Http calls.
For the service bus monitoring I am seeing different operation ids for closely triggered functions.
Something that I have just noticed is the following:
The calls at 13:26 share the same WEBSITE_INSTANCE_ID.
The call at 13:40 shares the same WEBSITE_INSTANCE_ID as the calls at 13:26.
The call at 15:14 has a different WEBSITE_INSTANCE_ID.
Related
Is it possible to trigger the same azure function from multiple service bus triggers?
So the scenario I have is that I have two separate service buses that are, for all intents and purposes, sending the same message, just from different sources. I want to know if it is possible to have a single Azure Function subscribe to both buses.
Something akin to:
public class TheFunction
{
    public static async Task Run(
        [ServiceBusTrigger("%TopicNameA%", "%SubscriptionNameA%", Connection = "Default")]
        [ServiceBusTrigger("%TopicNameB%", "%SubscriptionNameB%", Connection = "Default")] SomeEventData eventData, ILogger log)
    {
        // do some shenanigans
    }
}
Now I am aware that the above isn't possible, but I hope it conveys the general idea of what I would like to achieve.
My fallback is just to have two separate but identical functions where each subscribes to one of the buses and just throws the event to some shared service that does the work.
NOTE: I am unable to send messages from different sources to the same bus. Not impossible, just not feasible to re-jig things within the given timeframe.
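To make that concrete, a rough sketch of the fallback could look something like the following (SomeEventData is the message type from above, and IEventHandler is a placeholder for whatever shared service actually does the work):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Two thin trigger functions, one per topic/subscription, both handing off to a shared handler.
public class TheFallbackFunctions
{
    private readonly IEventHandler _handler;   // shared service that does the actual work

    public TheFallbackFunctions(IEventHandler handler) => _handler = handler;

    [FunctionName("HandleFromSourceA")]
    public Task RunA(
        [ServiceBusTrigger("%TopicNameA%", "%SubscriptionNameA%", Connection = "Default")] SomeEventData eventData,
        ILogger log) => _handler.HandleAsync(eventData);

    [FunctionName("HandleFromSourceB")]
    public Task RunB(
        [ServiceBusTrigger("%TopicNameB%", "%SubscriptionNameB%", Connection = "Default")] SomeEventData eventData,
        ILogger log) => _handler.HandleAsync(eventData);
}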
Is it possible to trigger the same azure function from multiple
service bus triggers?
So the scenario I have is that I have two separate service buses that
are for all intents and purposes sending the same message just from
different sources. I want to know if it is possible to have a single
azure function subscribe to both buses. Something akin to:
First of all, it is not possible to have multiple triggers on a single Azure Function.
Just to offer an idea: you could use Event Grid integration with Service Bus to be notified about the messages inside the Service Bus entity.
Since an Azure Function can only use a Service Bus trigger, and there is no Service Bus input binding, you need to read the message yourself in the function body.
See the documentation below:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-to-event-grid-integration-concept#azure-portal-instructions
I have a scaled-out application, where each instance connects to an Azure Service Bus subscription with the same name. The end result is that only a single instance gets to act on any given message, because they are all listening to the same subscription.
Occasionally the application needs to place an instance into an idle state (Service Fabric ActiveSecondary replica). When this occurs, I need to close the subscription so that this instance no longer receives messages. If there were 2 instances originally, once one gets placed into the idle state all messages should go to the remaining instance. This is important so that all messages are handled by a properly configured primary instance.
When the instance becomes idle, a cancellation token is cancelled. I have code listening for the cancellation and calling Close() on the SubscriptionClient generated when I created the subscription originally.
The issue is, even after I call Close() on one instance, messages are still being randomly split between it and the primary.
Is the way I'm doing this inherently wrong, or is something else in my code causing this behavior?
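For reference, the kind of wiring described above might look roughly like this (a simplified sketch using the Microsoft.Azure.ServiceBus track 1 client; the connection string, entity names and the idle cancellation token are placeholders):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public class SharedSubscriptionListener
{
    public void Start(string connectionString, CancellationToken idleToken)
    {
        var client = new SubscriptionClient(connectionString, "my-topic", "shared-subscription");

        client.RegisterMessageHandler(
            async (message, token) =>
            {
                // ... handle the message ...
                await client.CompleteAsync(message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false, MaxConcurrentCalls = 1 });

        // When this replica is demoted to ActiveSecondary the token is cancelled and the
        // client is closed, so this instance should stop competing for messages.
        idleToken.Register(() => client.CloseAsync().GetAwaiter().GetResult());
    }
}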
The Azure Service Bus track 0 and 1 SDKs do not support CancellationTokens. If you're closing your client and messages aren't processed, they'll be picked up by another competing instance once they become visible again. That's where MaxLockDuration and MaxDeliveryCount are important to ensure messages have enough processing attempts to account for the situation you're describing without waiting for too long.
Disregard this post. Turns out I had the same subscription name used twice within a single instance, so they were competing for the events. The close() function works as expected.
I have an Azure Function triggered by Azure Service Bus queue - Azure Function App is hosted in consumption plan.
How long does it take at most to wake up an Azure Function when a new message appears in the queue? (Assuming there were no messages during the past 20 minutes.)
Is it specified anywhere? I've found in the Documentation that:
For non-HTTP triggers, new instances will only be allocated at most once every 30 seconds.
Does it mean that I won't be able to process the first message in the queue faster than within 30 seconds?
Will adding a Timer-triggered Azure Function to the same Function App (along with the Service Bus triggered one) help to keep the Azure Function instance up and running?
Another option for handling a "cold start" of the ServiceBusTrigger function is to use the eventing feature of Azure Event Grid, where an event is emitted immediately if there are no messages in the Service Bus entity and a new message arrives. See more details here.
Note that the event is emitted immediately when the first message arrives in the Service Bus entity. In the case when messages are already there, the next event is emitted based on the idle time of the listener/receiver. This idle (watchdog) time is periodically 120 seconds from the last time the listener/receiver was used on the Service Bus entity.
This is a push model with no listener on the Service Bus entity, so the AEG subscriber (an EventGridTrigger function) will be "balanced" against the ServiceBusTrigger function (its listener/receiver) when receiving messages.
Using the REST API for deleting/receiving a message from the Service Bus entity (queue), the subscriber can obtain it in a very easy and straightforward way.
So, using the AEG eventing on the topic providers/Microsoft.ServiceBus/namespaces/myNamespace and filtering on eventType = "Microsoft.ServiceBus.ActiveMessagesAvailableWithNoListeners", the messages can be received side by side with a ServiceBusTrigger function, solving the Azure Function "cold start" problem.
Note that only the Azure Service Bus premium tier is integrated with AEG.
The following code snippet shows an example of the AEG Subscriber for Service Bus using the EventGridTrigger function:
#r "Newtonsoft.Json"

using System;
using System.Threading.Tasks;
using System.Text;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static async Task Run(JObject eventGridEvent, ILogger log)
{
    log.LogInformation(eventGridEvent.ToString());

    // The event payload carries a requestUri pointing at the message in the Service Bus entity.
    string requestUri = $"{eventGridEvent["data"]?["requestUri"]?.Value<string>()}";
    if (!string.IsNullOrEmpty(requestUri))
    {
        using (var client = new HttpClient())
        {
            // SAS token for the Service Bus REST API, stored in an app setting.
            client.DefaultRequestHeaders.Add("Authorization", Environment.GetEnvironmentVariable("AzureServiceBus_token"));

            // Receive-and-delete the message via the REST API.
            var response = await client.DeleteAsync(requestUri);

            // status & headers
            log.LogInformation(response.ToString());

            // message body
            log.LogInformation(await response.Content.ReadAsStringAsync());
        }
    }
}
First of all, when using the Consumption plan, the function app goes idle after about 20 minutes of inactivity. The next time you use the function you will experience a cold start, which takes some time while the app wakes up.
For your question:
Does it mean that I won't be able to process the first message in the
queue faster than within 30 seconds?
It depends on the following (link is here):
1. Which language you're using (e.g. a function using C# starts faster than one using Java); the following is taken from the link above:
A typical cold start latency spans from 2 to 15 seconds. C# functions usually complete the start within 3 seconds, while JavaScript and Java have longer tails.
2. The number of dependencies in your function:
Adding dependencies and thus increasing the deployed package size will further increase the cold start durations.
will adding Timer Triggered Azure Function to the same Function App
(along with Service Bus triggered) help to keep the Azure Function
instance up and running ?
Yes, adding a Timer-triggered Azure Function to the same function app would keep the others warm (up and running).
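For example, a small Timer-triggered function along these lines (the function name and schedule are just illustrative; anything that fires more often than the ~20-minute idle timeout should do) would keep the function app loaded:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class KeepWarm
{
    // Fires every 5 minutes so the Consumption-plan instance stays loaded.
    [FunctionName("KeepWarm")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Keep-warm ping at {DateTime.UtcNow:O}");
    }
}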
Azure Functions can run on either a Consumption Plan or a dedicated App Service Plan.
If you run in a dedicated mode, you need to turn on the Always On setting for your Function App to run properly. The Function runtime will go idle after a few minutes of inactivity, so only HTTP triggers will actually "wake up" your functions. This is similar to how WebJobs must have Always On enabled.
For more information, see Always On in the Azure Functions Documentation.
The timeout duration of a function app is defined by the functionTimeout property in the host.json project file; the default and maximum values (in minutes) differ between the two plans and between the two runtime versions.
You can read more about cold start here.
https://azure.microsoft.com/en-in/blog/understanding-serverless-cold-start/
An HTTP trigger will help in your case by warming up your instance for sure, but the ideal way for a 24/7 type of requirement is to use a dedicated App Service plan. Hope it helps.
I have an Azure Function which triggers when we have a new message in a Service Bus topic.
In the Azure Function I check whether the customer API is available by calling it and checking its status code.
If the status code is 200 I need to process the message; otherwise I put the message into the dead-letter queue, and after some interval, when the customer API is available, process all the dead-letter messages as well.
public static class Function1
{
[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("customer-order", "customer-order", Connection = "")]string mySbMsg, ILogger log)
{
// 1.call customer api to check it is available or not
// 2.if it is up and running then process the message else put message into dead-letter
// and after some interval when customer api is available process dead-letter messages
log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
}
I am able to invoke the customer API using HttpClient and process the message as well, but how do I put the message into the dead-letter queue, and how do I process the dead-letter messages after some interval, when the customer API is available?
Proposed flow:
The Azure Function App triggers when there is a new message in the topic.
Start step - check whether the API is available or down. If the API is available, process the current message. If the API is down, do not process the message; here we have two choices:
1a. Put the current message into the dead-letter queue.
1b. Put the current message back into the main queue, but if we do this the Function App will trigger again (as it is message-triggered) and the start step will repeat.
Rather than putting this in the dead-letter queue, a better approach would be to defer the message for a certain duration.
If a message cannot be processed because a particular resource for
handling that message is temporarily unavailable but message
processing should not be summarily suspended, a way to put that
message on the side for a few minutes is to remember the
SequenceNumber in a scheduled message to be posted in a few minutes,
and re-retrieve the deferred message when the scheduled message
arrives.
See this answer for an example of how to do deferral in Azure Functions v2. Note that the trigger binding uses a message of type Message and also uses the injected MessageReceiver. Those are needed to be able to call DeferAsync. The template code for the Service Bus trigger sets the message type to string, so you would have to change the signature as described in that answer.
About deferred messages:
Deferred messages remain in the main queue along with all other active
messages (unlike dead-letter messages that live in a subqueue), but
they can no longer be received using the regular Receive/ReceiveAsync
functions. To retrieve a deferred message, its owner is responsible
for remembering the SequenceNumber as it defers it. Any receiver that
knows the sequence number of a deferred message can later receive the
message explicitly with Receive(sequenceNumber).
How to schedule messages with sequence number of deferred messages so that deferred message can be processed at a later time:
You can schedule messages either by setting the
ScheduledEnqueueTimeUtc property when sending a message through the
regular send path, or explicitly with the ScheduleMessageAsync API
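Putting the pieces together, a sketch along the following lines shows the defer-then-schedule pattern. This assumes Functions v2 with Microsoft.Azure.ServiceBus and autoComplete turned off in host.json; the topic/subscription names, connection setting, 5-minute delay and CustomerApiIsUpAsync check are all illustrative placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CustomerOrderFunction
{
    // Assumes autoComplete is false in host.json, so every path settles the triggering message explicitly.
    [FunctionName("CustomerOrderTrigger")]
    public static async Task Run(
        [ServiceBusTrigger("customer-order", "customer-order", Connection = "ServiceBusConnection")] Message message,
        MessageReceiver messageReceiver,
        [ServiceBus("customer-order", Connection = "ServiceBusConnection")] IAsyncCollector<Message> reminders,
        ILogger log)
    {
        bool apiIsUp = await CustomerApiIsUpAsync();   // hypothetical availability check

        if (message.Label == "deferred-reminder")
        {
            // A reminder carries the sequence number of a previously deferred message.
            long deferredSequenceNumber = BitConverter.ToInt64(message.Body, 0);

            if (apiIsUp)
            {
                // Fetch the deferred message explicitly by sequence number and process it.
                Message deferred = await messageReceiver.ReceiveDeferredMessageAsync(deferredSequenceNumber);
                log.LogInformation($"Processing deferred message {deferred.MessageId}");
                await messageReceiver.CompleteAsync(deferred.SystemProperties.LockToken);
            }
            else
            {
                // API still down: push the check out by another interval.
                await reminders.AddAsync(CreateReminder(deferredSequenceNumber));
            }

            await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
            return;
        }

        if (!apiIsUp)
        {
            // Set the message aside (it stays in the entity but is no longer delivered normally)
            // and schedule a reminder that remembers its sequence number.
            long sequenceNumber = message.SystemProperties.SequenceNumber;
            await messageReceiver.DeferAsync(message.SystemProperties.LockToken);
            await reminders.AddAsync(CreateReminder(sequenceNumber));
            return;
        }

        // Happy path: the customer API is available, process the message normally.
        log.LogInformation($"Processing message {message.MessageId}");
        await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
    }

    private static Message CreateReminder(long sequenceNumber) =>
        new Message(BitConverter.GetBytes(sequenceNumber))
        {
            Label = "deferred-reminder",
            ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5)   // illustrative delay
        };

    private static Task<bool> CustomerApiIsUpAsync() => Task.FromResult(true);   // placeholder for the real check
}

The reminder is just a scheduled message on the same entity whose body is the deferred message's sequence number, so whichever receiver sees it later knows which deferred message to fetch.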
How does the poison-message handling work for the Azure WebJobs SDK's ServiceBusTrigger? I am looking to push the Service Bus queue messages that have been dequeued more than 'x' times to a different Service Bus (or Storage) queue.
The online documentation here and here and the SDK samples from here do not have examples of how poison-message handling works for ServiceBusTrigger. Is this work in progress?
I tried implementing custom poison-message handling using the dequeueCount parameter, but it doesn't look like it is supported for ServiceBusTriggers, as I was getting a runtime exception: {"Cannot bind parameter 'dequeueCount' when using this trigger."}
public static void ProcessMessage(
    [ServiceBusTrigger(topicName: "abc", subscriptionName: "abc.gdp")] NotificationMessage message,
    [Blob("rox/{PayloadId}", FileAccess.Read)] Stream blobInput, Int32 dequeueCount)
{
    throw new ArgumentNullException();
}
While you cannot get the dequeueCount property for ServiceBus messages, you can always bind to BrokeredMessage instead of NotificationMessage and get the property from it.
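Something along these lines, for example (just a sketch: it assumes the payload was sent as a string, picks an arbitrary retry limit of 5, and uses a hypothetical poison-notifications Storage queue as the destination):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.ServiceBus.Messaging;

public static class Functions
{
    public static void ProcessMessage(
        [ServiceBusTrigger(topicName: "abc", subscriptionName: "abc.gdp")] BrokeredMessage message,
        [Queue("poison-notifications")] out string poisonMessage,
        TextWriter log)
    {
        poisonMessage = null;
        var body = message.GetBody<string>();   // assumes the NotificationMessage was sent as a string payload

        if (message.DeliveryCount > 5)
        {
            // Dequeued too many times: route the payload to a storage queue instead of processing it.
            poisonMessage = body;
            return;
        }

        log.WriteLine($"Processing attempt {message.DeliveryCount}: {body}");
    }
}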
It looks like WebJobs handles this internally at the moment.
Reference: How to use Azure Service Bus with the WebJobs SDK
Specific section:
How ServiceBusTrigger works
The SDK receives a message in PeekLock mode and calls Complete on the
message if the function finishes successfully, or calls Abandon if the
function fails. If the function runs longer than the PeekLock timeout,
the lock is automatically renewed.
Service Bus does its own poison queue handling, so that is neither
controlled by, nor configurable in, the WebJobs SDK.
Additional Reference
Poison message handling can't be controlled or configured in Azure Functions. Service Bus handles poison messages itself.
To add to Brendan Green's answer, the WebJobs SDK calls Abandon on messages that fail to process, and after the maximum number of retries these messages are moved to the dead-letter queue by Service Bus. The properties defining when a message will be moved into the dead-letter queue, such as maximum delivery count, time to live, and PeekLock duration, can be changed in Service Bus -> Queue -> Properties.
You can find more information on SB dead letter queue here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues