Benefits of an async ServiceBusTrigger - C#

I'm working on microservices (using Azure Function Apps) that contain ServiceBusTrigger-based Azure Functions that trigger when a message is inserted into a Service Bus Queue.
I'm trying to determine the best way of binding output values to multiple targets (e.g. CosmosDB and IoT Hub). Whether or not the method is marked as async will determine how I should approach this problem.
As far as I am aware, the way you would typically handle output binding with an async function is by using the [return: ...] annotation; however, in my use case, I need to return two different values to two separate targets (e.g. CosmosDb and IoT Hub). I don't think this is something I can achieve with return value binding or output variable binding, since you can't have an out param with an async method and you can't define multiple return values with the [return: ...] approach.
It would seem that my only option (if I went the async route) would be to manually invoke SDK methods in the Azure Function to call the services independent of any output values. I'm trying to avoid doing that, seeing as output binding is the preferred approach.
An observation that I have made when creating a brand new ServiceBusTrigger-based Azure Function is that the generated method signature is not marked as async by default.
This is different from an HttpTrigger, which is marked as async out of the box.
Can someone help me understand the reasoning for this? What are the scaling implications associated with one vs. the other?
I understand in a traditional sense why you typically mark an HttpTrigger as async; however, I don't understand the reasoning as to why the ServiceBusTrigger is not async.
I need to understand this bit before I can move on with solidifying my approach to outputs.

I don't think there is any particular reasoning behind which templates are generated with async functions and which aren't. Depending on your code, an async function may be more efficient.
Read this thread for more details on async/await in functions.
As for your main question, you just have to bind to different objects for the CosmosDB and IoT Hub output bindings.
For Cosmos DB, you will have to bind to IAsyncCollector instead, as shown in the docs:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

namespace CosmosDBSamplesV2
{
    public static class WriteDocsIAsyncCollector
    {
        [FunctionName("WriteDocsIAsyncCollector")]
        public static async Task Run(
            [QueueTrigger("todoqueueforwritemulti")] ToDoItem[] toDoItemsIn,
            [CosmosDB(
                databaseName: "ToDoItems",
                collectionName: "Items",
                ConnectionStringSetting = "CosmosDBConnection")]
            IAsyncCollector<ToDoItem> toDoItemsOut,
            ILogger log)
        {
            log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");

            foreach (ToDoItem toDoItem in toDoItemsIn)
            {
                log.LogInformation($"Description={toDoItem.Description}");
                await toDoItemsOut.AddAsync(toDoItem);
            }
        }
    }
}
For Event Hub, you will similarly have to bind to IAsyncCollector, as shown in the docs:
[FunctionName("EH2EH")]
public static async Task Run(
[EventHubTrigger("source", Connection = "EventHubConnectionAppSetting")] EventData[] events,
[EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<string> outputEvents,
ILogger log)
{
foreach (EventData eventData in events)
{
// do some processing:
var myProcessedEvent = DoSomething(eventData);
// then send the message
await outputEvents.AddAsync(JsonConvert.SerializeObject(myProcessedEvent));
}
}
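Putting the two together for the original scenario, here is a minimal sketch of a ServiceBusTrigger function with both output bindings (the queue, database, collection, and connection setting names are placeholders, and ToDoItem is reused from the sample above):

[FunctionName("ProcessQueueMessage")]
public static async Task Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string queueMessage,
    [CosmosDB(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection")] IAsyncCollector<ToDoItem> toDoItemsOut,
    [EventHub("dest", Connection = "EventHubConnectionAppSetting")] IAsyncCollector<string> outputEvents,
    ILogger log)
{
    // Deserialize the queue message and write to both outputs from the same invocation.
    ToDoItem item = JsonConvert.DeserializeObject<ToDoItem>(queueMessage);
    await toDoItemsOut.AddAsync(item);
    await outputEvents.AddAsync(JsonConvert.SerializeObject(item));
    log.LogInformation($"Processed ToDoItem: {item.Description}");
}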

Related

Azure Event Hubs - How to implement a consumer in .Net Core WebAPI?

So, I need to implement a consumer in a WebAPI (.NET Core 3.1) application, and after reading the Microsoft documentation and watching several videos about it, I got to this solution.
This is an extension method for IServiceCollection; I'm calling it from Startup.cs to instantiate my consumer (the connection strings and container names are there for tests only):
private static async Task AddPropostaEventHub(this IServiceCollection services)
{
    const string eventHubName = "EVENT HUB NAME";
    const string ehubNamespaceConnectionString = "EVENT HUB CONNECTION STRING";
    const string blobContainerName = "BLOB CONTAINER NAME";
    const string blobStorageConnectionString = "BLOB CONNECTION STRING";

    string consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;

    BlobContainerClient storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
    EventProcessorClient processor = new EventProcessorClient(storageClient, consumerGroup, ehubNamespaceConnectionString, eventHubName);

    processor.ProcessEventAsync += ProcessEvent.ProcessEventHandler;
    processor.ProcessErrorAsync += ProcessEvent.ProcessErrorHandler;

    await processor.StartProcessingAsync();
}
The ProcessEvent handler class:
public static class ProcessEvent
{
    public static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
    {
        var result = Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray());
        // DO STUFF
        await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
    }

    public static Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
    {
        // DO STUFF
        return Task.CompletedTask;
    }
}
This code is working, but my question is: is it okay to implement it like that? Is there a problem if the consumer never stops? Can it block other tasks (or requests) in my code?
Is there a better way to implement it using Dependency Injection in .NET Core?
I couldn't find any example of someone implementing this in a WebAPI; is there a reason for that?
As Jesse Squire mentioned, WebAPI isn't necessarily the correct method of implementation, but it primarily depends on what your goals are.
If you are making an API that also includes an Event Hub listener, you should implement it under the IHostedService interface. Your existing AddPropostaEventHub() method goes inside the interface's StartAsync(CancellationToken cancellationToken), which is what .NET Core uses to start up a background task. Then, inside your Startup.cs, you register the handler as services.AddHostedService<EventHubService>();. This ensures that the long-running listener is handled properly without blocking your incoming HTTP requests.
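As a rough sketch (the class name EventHubService is illustrative and the placeholder connection strings are carried over from the question), the setup code could move into a hosted service like this:

using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Hosting;

// A background service that owns the EventProcessorClient for the lifetime of the app.
public class EventHubService : IHostedService
{
    private EventProcessorClient _processor;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // Same setup as AddPropostaEventHub, now tied to the host's lifetime.
        var storageClient = new BlobContainerClient("BLOB CONNECTION STRING", "BLOB CONTAINER NAME");
        _processor = new EventProcessorClient(
            storageClient,
            EventHubConsumerClient.DefaultConsumerGroupName,
            "EVENT HUB CONNECTION STRING",
            "EVENT HUB NAME");

        _processor.ProcessEventAsync += ProcessEvent.ProcessEventHandler;
        _processor.ProcessErrorAsync += ProcessEvent.ProcessErrorHandler;

        await _processor.StartProcessingAsync(cancellationToken);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop pumping events when the application shuts down.
        if (_processor != null)
        {
            await _processor.StopProcessingAsync(cancellationToken);
        }
    }
}

// Registered in Startup.ConfigureServices:
// services.AddHostedService<EventHubService>();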
If you don't also have an API involved or are able to split the processes completely, then you should consider creating this as a console app instead of as a hosted service, which further separates the roles of API and event listener.
You didn't mention where you are deploying this, but if you happen to be deploying to an Azure App Service, you do have a few options for splitting the receiver from your API, and in that case I would definitely recommend doing so. Inside App Services, there is a feature called WebJobs which specifically exists to handle things like this without making it part of your API. A good alternative is Azure Functions. In that case you don't have to worry about setting up the DI for Event Hub at all; the host process takes care of that for you.

How to get an ILogger from an Activity Function?

I am using Durable Azure Functions in a prototype for a future project.
Basically, I have a Client Azure Function triggered by an HTTP POST request that starts the Orchestrator. Then, the Orchestrator decides to trigger an Activity. Nothing complicated.
Here is a sample of what I am doing:
[FunctionName("ClientFunction")]
public static async Task<HttpResponseMessage> OnHttpTriggerAsync([HttpTrigger(AuthorizationLevel.Anonymous, "post")]
HttpRequestMessage request, [OrchestrationClient] DurableOrchestrationClient starter, ILogger logger)
{
// Triggers the orchestrator.
string instanceId = await starter.StartNewAsync("OrchestratorFunction", null);
return new HttpResponseMessage(HttpStatusCode.OK);
}
[FunctionName("OrchestratorFunction")]
public static async Task DoOrchestrationThingsAsync([OrchestrationTrigger] DurableOrchestrationContext context, ILogger logger)
{
// Triggers some serious activity.
await context.CallActivityAsync("ActivityFunction", null);
}
[FunctionName("ActivityFunction")]
public static Task DoAnAwesomeActivity([ActivityTrigger] DurableActivityContext context)
{
// Short & Sweet activity...
// Wait... Where's my logger?
}
Both the Orchestrator and the Client functions are given an ILogger, but not the Activity function; as stated in the documentation, the Activity function only gets one parameter (either a specific type or the DurableActivityContext instance). And I am not under the impression that the static class in which these methods are declared could keep a reference to that ILogger.
I understand that the Activity function should perform one small job, but I would be more comfortable if I were able to log that the activity was called with the appropriate values if something goes wrong (and it will :) ).
Question
How can the Activity access the ILogger?
It is not possible to pass multiple parameters to an activity function directly. The recommendation in this case is to pass in an array of objects or to use ValueTuple objects in .NET.
The restriction you are concerned about refers to the parameters passed from the Orchestrator to the Activity function. It doesn't mean you can only have one parameter in the Activity method signature. Feel free to add an ILogger there and complete your job as needed.
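For example, a minimal sketch of the Activity from the question with an ILogger added (the log message is illustrative):

[FunctionName("ActivityFunction")]
public static Task DoAnAwesomeActivity([ActivityTrigger] DurableActivityContext context, ILogger logger)
{
    // The single-input restriction applies to what the orchestrator passes via the context,
    // not to extra bindings such as ILogger in the method signature.
    logger.LogInformation("ActivityFunction called with input: {Input}", context.GetInput<string>());
    return Task.CompletedTask;
}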

How to bind to MessageSender in an Azure function?

I'm working with a Service Bus queue and an Azure Function. Failures are handled differently depending on some business logic. Some messages need to be sent to the dead letter queue; others need to be modified and then explicitly added back to the queue.
Consider the following code:
[FunctionName("ServiceBusTask")]
public static async Task Run(
[ServiceBusTrigger("myQueue", Connection = "myConnectionString")]
Message message,
MessageReceiver messageReceiver,
//MessageSender messageSender,
ILogger log)
{
//some business logic that can fail
if( condition for dead letter)
{
await messageReceiver.DeadLetterAsync(message.SystemProperties.LockToken);
}
else if( condition for a manual retry)
{
QueueClient qc = new Queueclient("myQueue", "myConnectionString");
Message updatedMessage = GetUpdatedMessage(message);
await qc.SendAsync(updatedMessage);
//await messageSender.SendAsync(updatedMessage);
}
}
The messageReceiver works just fine to send messages to the dead letter queue but it bothers me that I need to create a QueueClient to send messages to the queue. Knowing that MessageSender exists, I tried to add it to the parameters but I'm getting an error saying:
Cannot bind parameter 'messageSender' to type MessageSender. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
I'm not too sure why it's telling me about startup code; I have no such thing, so I'm guessing the error message wasn't updated...
Reading this issue on the Azure webjobs SDK, I get the impression that it should be supported (do correct me if I'm reading it wrong!).
My question
Is it possible to use MessageSender like this and if so, what do I need to do to make it work?
Your function is using a ServiceBusTrigger, which supports MessageReceiver, but not the MessageSender binding - that's supported by the ServiceBus output binding, which you could add to your function. (example)
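For illustration, a sketch of what that could look like, assuming the Service Bus extension lets you bind the [ServiceBus] output attribute to MessageSender as discussed in the linked issue (the queue and connection names mirror the question):

[FunctionName("ServiceBusTask")]
public static async Task Run(
    [ServiceBusTrigger("myQueue", Connection = "myConnectionString")] Message message,
    MessageReceiver messageReceiver,
    [ServiceBus("myQueue", Connection = "myConnectionString")] MessageSender messageSender,
    ILogger log)
{
    // ...business logic that can fail...
    Message updatedMessage = GetUpdatedMessage(message);
    // Send through the output binding rather than constructing a QueueClient by hand.
    await messageSender.SendAsync(updatedMessage);
}

If binding to MessageSender turns out not to be available in your extension version, IAsyncCollector<Message> is a documented alternative for the same output binding.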

Can't find System.Runtime when I add async

I'm making an Azure Function, and I need it to do some async work. But I get a strange error that it can't load the assembly System.Runtime once I call an awaited method:
[FunctionName("MyTestFunction")]
async public static void Run(
[ServiceBusTrigger("testtopic", "testsubscription", AccessRights.Listen, Connection = "MyServiceBusConnection")]
string mySbMsg,
TraceWriter log)
{
client = new DocumentClient(new System.Uri(ENDPOINT), ENDPOINT_KEY);
await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DATABASE, COLLECTION_ID), string.Format("{ \"newmessage\": {0} }", mySbMsg));
log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}
What am I missing here?
There is a workaround for this problem - please have a look at this example.
Using Azure Functions' built-in functionality seems easier and doesn't require you to await anything, which might solve your problem.
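For illustration, a sketch of what the built-in output binding approach might look like here, assuming a [DocumentDB] output binding with placeholder database, collection, and connection setting names:

[FunctionName("MyTestFunction")]
public static void Run(
    [ServiceBusTrigger("testtopic", "testsubscription", AccessRights.Listen, Connection = "MyServiceBusConnection")]
    string mySbMsg,
    [DocumentDB("MyDatabase", "MyCollection", ConnectionStringSetting = "CosmosDBConnection")] out object newDocument,
    TraceWriter log)
{
    // Assign the output value; the runtime performs the Cosmos DB write after the function returns,
    // so no DocumentClient and no await are needed.
    newDocument = new { newmessage = mySbMsg };
    log.Info($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
}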
Additionally, the DocumentClient class is intended to be instantiated once and reused throughout the lifetime of an application. Please have a look at this article: Improper Instantiation antipattern.

Dynamically enable/ disable a triggered function in Azure WebJob

We have an Azure WebJob that has two methods in the Functions.cs file. Both jobs are triggered off different topics in Azure Service Bus.
As this uses reflection at run time to determine the functions that are to be run/triggered by messages hitting the topics, there are no references to these methods in code.
public static async Task DoWork([ServiceBusTrigger("topic-one", "%environmentVar%")] BrokeredMessage brokeredMessage, TextWriter log) {}
public static async Task DoOtherWork([ServiceBusTrigger("topic-two", "%environmentVar2%")] BrokeredMessage brokeredMessage, TextWriter log) {}
I need this web job to run either both methods or just one of them, based on a variable set at run time (it won't change once the job is running, but it is read in when the job starts). I can't simply wrap the internals of the methods in an if() based on the variable, as that would read and destroy the message.
Is it possible to use the JobHostConfiguration (an IServiceProvider) to achieve this, as that is built at run time? Is that what the JobHostConfiguration.IJobActivator can be used for?
Triggered Functions can be disabled when the Webjob starts.
You can have a look at this issue: Dynamic Enable/ Disable a function.
So the WebJobs SDK provides a DisableAttribute:
Can be applied at the Parameter/Method/Class level
Only affects triggered functions
[Disable("setting")] - If a config/environment value exists for the specified setting name, and its value is "1" or "True" (case insensitive), the function will be disabled.
[Disable(typeof(DisableProvider))] - custom Type declaring a function of signature bool IsDisabled(MethodInfo method). We'll call this method to determine if the function should be disabled.
This is a startup time only check. For disabled triggered functions, we simply skip starting the function listener. However, when you update app settings bound to these attributes, your WebJob will automatically restart and your settings will take effect.
Setting names can include binding parameters (e.g. {MethodName}, {MethodShortName}, %test%, etc.)
In your case you need to use the DisableAttribute with a DisableProvider.
public class DoWorkDisableProvider
{
    public bool IsDisabled(MethodInfo method)
    {
        // Check whether the DoWork function should be disabled.
        // Return true to disable it, false to leave it enabled.
        return true;
    }
}

public class DoOtherWorkDisableProvider
{
    public bool IsDisabled(MethodInfo method)
    {
        // Check whether the DoOtherWork function should be disabled.
        // Return true to disable it, false to leave it enabled.
        return true;
    }
}
And your functions should be decorated with the disable attribute
[Disable(typeof(DoWorkDisableProvider))]
public static async Task DoWork([ServiceBusTrigger("topic-one", "%environmentVar%")] BrokeredMessage brokeredMessage, TextWriter log) {}

[Disable(typeof(DoOtherWorkDisableProvider))]
public static async Task DoOtherWork([ServiceBusTrigger("topic-two", "%environmentVar2%")] BrokeredMessage brokeredMessage, TextWriter log) {}
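The IsDisabled body could then read the run-time variable mentioned in the question, for instance like this (the setting name RUN_DO_WORK is hypothetical):

public bool IsDisabled(MethodInfo method)
{
    // Disable DoWork unless the RUN_DO_WORK setting is "true" when the job starts.
    string value = Environment.GetEnvironmentVariable("RUN_DO_WORK");
    return !string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
}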
Otherwise, the JobHostConfiguration.IJobActivator is designed to inject dependencies into your functions. You can have a look at these related posts:
Dependency injection using Azure WebJobs SDK?
Azure Triggered Webjobs Scope for Dependency Injection
