I'm working with a Service Bus queue and an Azure Function. Failures are handled differently depending on some business logic: some messages need to be sent to the dead-letter queue, while others need to be modified and then explicitly added back to the queue.
Consider the following code:
[FunctionName("ServiceBusTask")]
public static async Task Run(
[ServiceBusTrigger("myQueue", Connection = "myConnectionString")]
Message message,
MessageReceiver messageReceiver,
//MessageSender messageSender,
ILogger log)
{
//some business logic that can fail
if( condition for dead letter)
{
await messageReceiver.DeadLetterAsync(message.SystemProperties.LockToken);
}
else if( condition for a manual retry)
{
QueueClient qc = new Queueclient("myQueue", "myConnectionString");
Message updatedMessage = GetUpdatedMessage(message);
await qc.SendAsync(updatedMessage);
//await messageSender.SendAsync(updatedMessage);
}
}
The messageReceiver works just fine for sending messages to the dead-letter queue, but it bothers me that I need to create a QueueClient to send messages back to the queue. Knowing that MessageSender exists, I tried to add it to the parameters, but I'm getting an error saying:
Cannot bind parameter 'messageSender' to type MessageSender. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
I'm not too sure why it's telling me about startup code; I have no such thing, so I'm guessing the error message just hasn't been updated...
Reading this issue on the Azure webjobs SDK, I get the impression that it should be supported (do correct me if I'm reading it wrong!).
My question
Is it possible to use MessageSender like this and if so, what do I need to do to make it work?
Your function is using a ServiceBusTrigger, which supports MessageReceiver but not the MessageSender binding; that's supported by the ServiceBus output binding, which you could add to your function. (example)
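A minimal sketch of what that could look like, reusing your queue and connection names; ShouldDeadLetter and ShouldRetry are hypothetical stand-ins for your business checks, and the parameter shape assumes the ServiceBus attribute's MessageSender binding:

[FunctionName("ServiceBusTask")]
public static async Task Run(
    [ServiceBusTrigger("myQueue", Connection = "myConnectionString")] Message message,
    MessageReceiver messageReceiver,
    [ServiceBus("myQueue", Connection = "myConnectionString")] MessageSender messageSender, // output binding supplies the sender
    ILogger log)
{
    if (ShouldDeadLetter(message))      // hypothetical business check
    {
        await messageReceiver.DeadLetterAsync(message.SystemProperties.LockToken);
    }
    else if (ShouldRetry(message))      // hypothetical business check
    {
        await messageSender.SendAsync(GetUpdatedMessage(message));
    }
}

The ServiceBus attribute can also bind to IAsyncCollector<Message> if you'd rather queue up sends and let the runtime flush them.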
Related
So, I need to implement a consumer in a WebAPI (.NET Core 3.1) application, and after reading the Microsoft documentation and watching several videos about it, I arrived at this solution.
This is an extension method for IServiceCollection, I'm calling it from the Startup.cs to instantiate my Consumer (the connection strings and container names are there for tests only):
private static async Task AddPropostaEventHub(this IServiceCollection services)
{
    const string eventHubName = "EVENT HUB NAME";
    const string ehubNamespaceConnectionString = "EVENT HUB CONNECTION STRING";
    const string blobContainerName = "BLOB CONTAINER NAME";
    const string blobStorageConnectionString = "BLOB CONNECTION STRING";

    string consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;
    BlobContainerClient storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
    EventProcessorClient processor = new EventProcessorClient(storageClient, consumerGroup, ehubNamespaceConnectionString, eventHubName);

    processor.ProcessEventAsync += ProcessEvent.ProcessEventHandler;
    processor.ProcessErrorAsync += ProcessEvent.ProcessErrorHandler;

    await processor.StartProcessingAsync();
}
The ProcessEvent handler class:
public static class ProcessEvent
{
    public static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
    {
        var result = Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray());
        // DO STUFF
        await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
    }

    public static Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
    {
        // DO STUFF
        return Task.CompletedTask;
    }
}
This code is working, but my question is: is it okay to implement it like that? Is there a problem if the consumer never stops? Can it block other tasks (or requests) in my code?
Is there a better way to implement it using dependency injection in .NET Core?
I couldn't find any example of someone implementing this in a WebAPI; is there a reason for that?
As Jesse Squire mentioned, WebAPI isn't necessarily the correct method of implementation, but it primarily depends on what your goals are.
If you are making an API that also includes an Event Hub listener, you should implement it under the IHostedService interface. Your existing AddPropostaEventHub() method goes inside the interface's StartAsync(CancellationToken cancellationToken), which is what .NET Core uses to start up a background task. Then, inside your Startup.cs, you register the handler as services.AddHostedService<EventHubService>();. This ensures that the long-running listener is handled properly without blocking your incoming HTTP requests.
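A minimal sketch of that shape, assuming the same placeholder connection strings and the ProcessEvent handlers from above:

using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Hosting;

public class EventHubService : IHostedService
{
    private EventProcessorClient _processor;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        var storageClient = new BlobContainerClient("BLOB CONNECTION STRING", "BLOB CONTAINER NAME");

        _processor = new EventProcessorClient(
            storageClient,
            EventHubConsumerClient.DefaultConsumerGroupName,
            "EVENT HUB CONNECTION STRING",
            "EVENT HUB NAME");

        _processor.ProcessEventAsync += ProcessEvent.ProcessEventHandler;
        _processor.ProcessErrorAsync += ProcessEvent.ProcessErrorHandler;

        await _processor.StartProcessingAsync(cancellationToken);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop pumping events so the processor shuts down gracefully with the host.
        if (_processor != null)
        {
            await _processor.StopProcessingAsync(cancellationToken);
        }
    }
}

Having a StopAsync is the main win over the extension-method approach: the host now has a hook to stop the processor cleanly on shutdown.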
If you don't also have an API involved or are able to split the processes completely, then you should consider creating this as a console app instead of as a hosted service, which further separates the roles of API and event listener.
You didn't mention where you are deploying this, but if you happen to be deploying to an Azure App Service, you do have a few options for splitting the receiver from your API, and in that case I would definitely recommend doing so. Inside of App Services, there is a feature called WebJobs which specifically exists to handle things like this without making it part of your API. A good alternative is Functions. In that case you don't have to worry about setting up the DI for Event Hub at all, the host process takes care of that for you.
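For illustration, the Functions flavor could be as small as this (the function name and connection setting name are placeholders); the EventHubTrigger binding replaces all of the EventProcessorClient and checkpoint-store wiring:

[FunctionName("ProcessaProposta")] // hypothetical name
public static void Run(
    [EventHubTrigger("EVENT HUB NAME", Connection = "EventHubConnection")] EventData[] events,
    ILogger log)
{
    foreach (EventData eventData in events)
    {
        var result = Encoding.UTF8.GetString(eventData.Body.ToArray());
        // DO STUFF
        log.LogInformation($"Processed: {result}");
    }
}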
I have an Azure Function with a Service Bus input attribute and a Service Bus output attribute set.
This means that whatever I return from this function will be sent to the 'return' queue.
However, I want to manually handle some messages: there is no point retrying them, so I just want to put them straight on the DLQ and carry on.
So I added MessageReceiver as a parameter, along with the lockToken.
All good.
But now, if I handle the message and send it to the DLQ, I have no way of ending the function execution gracefully, as it is expecting a return value. So I throw an exception. But now I have numerous error logs in the output like 'lock is invalid'.
I tried to set autoComplete to false, but then I have a different issue: how do I ensure I can return the message AND log it in a transaction, with rollback handled?
Any guidance?
Example:
public static class Function1
{
    [FunctionName("Function1")]
    [return: ServiceBus("returnQueueName", Connection = "myConnectionString")]
    public static async Task<ReturnMessage> RunAsync(
        [ServiceBusTrigger("mytopic", "mysubscription", Connection = "myConnectionString")] string mySbMsg,
        ILogger log,
        MessageReceiver messageReceiver,
        string lockToken)
    {
        log.LogInformation($"C# ServiceBus topic trigger function processed message: {mySbMsg}");
        await messageReceiver.DeadLetterAsync(lockToken);
        return new ReturnMessage();
    }
}

public class ReturnMessage
{
    public string Payload { get; set; }
}
Hopefully you can see that the input queue and output queue are handled by the hosting system (Azure). I like this because it looks after atomicity for me.
But if I include the line:
await messageReceiver.DeadLetterAsync(lockToken);
This will move the current message to the DLQ.
GREAT!!! That's what I want.
But I am now forced to return something OR throw an exception. Is there any way out of this? The error messages are misleading, as there is no real error to follow up on.
As far as I can see, you shouldn't use the automatic return Service Bus binding here. Instead, you should connect to the return topic/queue yourself and handle the message logistics manually.
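A minimal sketch of that, assuming autoComplete is set to false in host.json, reusing the names from the example, and binding the output to a MessageSender instead of using [return:]; ShouldDeadLetter is a hypothetical stand-in for the business check:

[FunctionName("Function1")]
public static async Task RunAsync(
    [ServiceBusTrigger("mytopic", "mysubscription", Connection = "myConnectionString")] string mySbMsg,
    [ServiceBus("returnQueueName", Connection = "myConnectionString")] MessageSender returnSender,
    MessageReceiver messageReceiver,
    string lockToken,
    ILogger log)
{
    if (ShouldDeadLetter(mySbMsg)) // hypothetical business check
    {
        await messageReceiver.DeadLetterAsync(lockToken);
        return; // nothing goes to the return queue, and no exception is needed
    }

    var payload = JsonConvert.SerializeObject(new ReturnMessage { Payload = mySbMsg });
    await returnSender.SendAsync(new Message(Encoding.UTF8.GetBytes(payload)));
    await messageReceiver.CompleteAsync(lockToken); // settle manually since autoComplete is off
}

Note this is at-least-once rather than transactional: a crash between SendAsync and CompleteAsync would redeliver the input message, so the handler should be idempotent.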
I'm working on microservices (using Azure Function Apps) that contain ServiceBusTrigger-based Azure Functions that trigger when a message is inserted into a Service Bus Queue.
I'm trying to determine the best way of binding output values to multiple targets (e.g. CosmosDB and IoT Hub). Whether or not the method is marked as async will determine how I should approach this problem.
As far as I am aware, the way that you would typically handle output binding with an async function is by using the [return: ...] annotation; however, in my use case, I need to return two different values to two separate targets (e.g. CosmosDB and IoT Hub). I don't think that this is something I can achieve with return value binding or output variable binding, since you can't have an out param with an async method and you can't define multiple return values with the [return: ...] approach.
It would seem that my only option (if I went the async route) would be to manually invoke SDK methods in the Azure Function to call the services independently of any output values. I'm trying to avoid doing that, seeing as output binding is the preferred approach.
An observation that I have made when creating a brand new ServiceBusTrigger-based Azure Function is that the generated method signature is not marked as async by default.
This is different from an HttpTrigger, which is marked as async out of the box.
Can someone help me understand the reasoning for this? What are the scaling implications associated with one vs. the other?
I understand in a traditional sense why you typically mark an HttpTrigger as async; however, I don't understand the reasoning as to why the ServiceBusTrigger is not.
I need to understand this bit before I can move on with solidifying my approach to outputs.
I don't think there is any particular reasoning behind which templates are generated with async signatures and which are not. Depending on your code, your function may well be more efficient as async.
Read this thread for more details on async/await in functions.
As for your main question, you just have to bind to different objects for the CosmosDB and IoT Hub output bindings.
For CosmosDB, you will have to bind to IAsyncCollector instead, as shown in the docs:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

namespace CosmosDBSamplesV2
{
    public static class WriteDocsIAsyncCollector
    {
        [FunctionName("WriteDocsIAsyncCollector")]
        public static async Task Run(
            [QueueTrigger("todoqueueforwritemulti")] ToDoItem[] toDoItemsIn,
            [CosmosDB(
                databaseName: "ToDoItems",
                collectionName: "Items",
                ConnectionStringSetting = "CosmosDBConnection")]
            IAsyncCollector<ToDoItem> toDoItemsOut,
            ILogger log)
        {
            log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");

            foreach (ToDoItem toDoItem in toDoItemsIn)
            {
                log.LogInformation($"Description={toDoItem.Description}");
                await toDoItemsOut.AddAsync(toDoItem);
            }
        }
    }
}
For Event Hub (an IoT Hub's built-in endpoint is Event Hub-compatible), you will have to bind to IAsyncCollector as well, as shown in the docs:
[FunctionName("EH2EH")]
public static async Task Run(
[EventHubTrigger("source", Connection = "EventHubConnectionAppSetting")] EventData[] events,
[EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<string> outputEvents,
ILogger log)
{
foreach (EventData eventData in events)
{
// do some processing:
var myProcessedEvent = DoSomething(eventData);
// then send the message
await outputEvents.AddAsync(JsonConvert.SerializeObject(myProcessedEvent));
}
}
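Putting it together for your scenario, a single async function can take both collectors side by side; a hypothetical sketch reusing the binding settings from the two snippets above (the queue name and connection settings are placeholders):

[FunctionName("MultiOutput")]
public static async Task Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
    [CosmosDB(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection")]
    IAsyncCollector<ToDoItem> documents,
    [EventHub("dest", Connection = "EventHubConnectionAppSetting")] IAsyncCollector<string> events,
    ILogger log)
{
    var item = JsonConvert.DeserializeObject<ToDoItem>(message);

    await documents.AddAsync(item);  // first target: Cosmos DB
    await events.AddAsync(message);  // second target: Event Hub
}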
Is it a valid solution to use different consumers of the same message type on a single receive endpoint or should we use a receive endpoint for each consumer?
cfg.ReceiveEndpoint(host, "MyQueue", e =>
{
    logger.LogInformation("Consuming enabled.");

    // register consumers with middleware components
    e.Consumer<MyConsumer>(context);
    e.Consumer<MyOtherConsumer>(context);
});

public class MyConsumer : IConsumer<MyMessage> {}
public class MyOtherConsumer : IConsumer<MyMessage> {}
The solution above works: each consumer receives the message, even if one of them fails with an exception.
Why do I ask? Our current solution has a single consumer for each message type. The consumer passes the received message to an internal custom extensible pipeline for processing. If the above solution is viable, we could drop our own custom pipeline and use MassTransit instead.
Yes, you can have multiple consumers registered on the same endpoint, for the same message type, and MassTransit will handle dispatching the message to those consumers.
You can also customize the endpoint pipeline, as well as each consumer's pipeline, so that different filters can be applied to different consumers.
ec.Consumer<MyConsumer>(context, c => c.UseRetry(r => r.Interval(2, 1000)));
ec.Consumer<MyOtherConsumer>(context, c => c.UseRetry(r => r.None()));
This was one of the core reasons MT was rewritten to be built around pipelines (this was years ago, but nonetheless) and how GreenPipes was created.
As a side note, you could put each consumer on a separate endpoint, and publish the message, which would give each consumer its own copy - and own execution context (including retry and broker error handling) if needed.
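A hypothetical sketch of that alternative, using the same registration style as the question (the queue names are placeholders):

// Each consumer gets its own queue; publishing MyMessage delivers an
// independent copy to each endpoint, with its own retry/error handling.
cfg.ReceiveEndpoint(host, "MyQueue", e =>
{
    e.Consumer<MyConsumer>(context);
});

cfg.ReceiveEndpoint(host, "MyOtherQueue", e =>
{
    e.Consumer<MyOtherConsumer>(context);
});

// Elsewhere, publish rather than send, so both endpoints receive a copy:
// await bus.Publish(new MyMessage());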
We have an azure web job that has two methods in the Functions.cs file. Both jobs are triggered off different Topics in Azure Service Bus.
As this uses reflection at run time to determine the functions that are to be run/triggered by messages hitting the topics, there are no references to these methods in code.
public static async Task DoWork([ServiceBusTrigger("topic-one", "%environmentVar%")] BrokeredMessage brokeredMessage, TextWriter log) {}
public static async Task DoOtherWork([ServiceBusTrigger("topic-two", "%environmentVar2%")] BrokeredMessage brokeredMessage, TextWriter log) {}
I have a need to have this web job run either both methods, or just one of them, based on a variable set at run time (it won't change once the job is running, but it is read in when the job starts). I can't simply wrap the internals of the methods in an if() based on the variable, as that would read and destroy the message.
Is it possible to use the JobHostConfiguration (an IServiceProvider) to achieve this, as that is built at run time? Is that what the JobHostConfiguration.IJobActivator can be used for?
Triggered functions can be disabled when the WebJob starts.
You can have a look at this issue: Dynamic Enable/Disable a function.
So the WebJobs SDK provides a DisableAttribute:
Can be applied at the Parameter/Method/Class level
Only affects triggered functions
[Disable("setting")] - If a config/environment value exists for the specified setting name, and its value is "1" or "True" (case insensitive), the function will be disabled.
[Disable(typeof(DisableProvider))] - custom Type declaring a function of signature bool IsDisabled(MethodInfo method). We'll call this method to determine if the function should be disabled.
This is a startup time only check. For disabled triggered functions, we simply skip starting the function listener. However, when you update app settings bound to these attributes, your WebJob will automatically restart and your settings will take effect.
Setting names can include binding parameters (e.g. {MethodName}, {MethodShortName}, %test%, etc.)
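For the settings-based form, usage could look like this (the setting name is a placeholder):

// Disabled whenever the app setting "Disable_DoWork" is "1" or "True".
[Disable("Disable_DoWork")]
public static async Task DoWork([ServiceBusTrigger("topic-one", "%environmentVar%")] BrokeredMessage brokeredMessage, TextWriter log) {}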
In your case you need to use the DisableAttribute with a DisableProvider.
public class DoWorkDisableProvider
{
    public bool IsDisabled(MethodInfo method)
    {
        // check if the function should be disabled
        // return true or false
        return true;
    }
}

public class DoOtherWorkDisableProvider
{
    public bool IsDisabled(MethodInfo method)
    {
        // check if the function should be disabled
        // return true or false
        return true;
    }
}
And your functions should be decorated with the disable attribute
[Disable(typeof(DoWorkDisableProvider))]
public static async Task DoWork([ServiceBusTrigger("topic-one", "%environmentVar%")] BrokeredMessage brokeredMessage, TextWriter log) {}

[Disable(typeof(DoOtherWorkDisableProvider))]
public static async Task DoOtherWork([ServiceBusTrigger("topic-two", "%environmentVar2%")] BrokeredMessage brokeredMessage, TextWriter log) {}
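For example, the providers could read the run-time variable you mentioned; a hypothetical sketch assuming a RUN_MODE environment variable that is read when the job starts:

public class DoWorkDisableProvider
{
    public bool IsDisabled(MethodInfo method)
    {
        // Hypothetical RUN_MODE setting: "Both", "WorkOnly" or "OtherWorkOnly".
        string mode = Environment.GetEnvironmentVariable("RUN_MODE") ?? "Both";
        return mode == "OtherWorkOnly"; // disable DoWork when only the other job should run
    }
}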
Otherwise, the JobHostConfiguration.IJobActivator is designed to inject dependencies into your functions. You can have a look at these related posts:
Dependency injection using Azure WebJobs SDK?
Azure Triggered Webjobs Scope for Dependency Injection