MessageLockLostException breaks Service Bus QueueClient - C#

I have a service that listens to a Service Bus queue. Once every few days I receive a MessageLockLostException. The exception itself is clear to me and is not the issue. However, the QueueClient seems to be broken after the exception is received and stops processing any further messages. I understand that I can add a try/catch to ProcessMessagesAsync, but I thought the ExceptionReceivedHandler should handle such cases and the QueueClient should try to process the same message again (or send it to the DLQ if it fails to process it multiple times).
public class QueueService
{
    private readonly QueueClient _queueClient;
    private readonly ILogger<QueueService> _logger;

    public QueueService(string connectionString, ILogger<QueueService> logger)
    {
        _logger = logger;
        var serviceBusConnectionStringBuilder = new ServiceBusConnectionStringBuilder(connectionString);
        _queueClient = new QueueClient(serviceBusConnectionStringBuilder);
        RegisterMessageHandler();
    }

    private Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
    {
        _logger.LogError("Message handler encountered an exception {Exception}",
            exceptionReceivedEventArgs.Exception);
        return Task.CompletedTask;
    }

    private async Task ProcessMessagesAsync(Message message, CancellationToken token)
    {
        // Handle message
        await _queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }

    private void RegisterMessageHandler()
    {
        var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
        {
            AutoComplete = false
        };
        _queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
    }
}

The ExceptionReceivedHandler handles the exceptions that occur during the retrieval and processing of messages.
The QueueClient should not break unless it is being throttled by overload.
I tried the sample you provided and it works fine for me; the ExceptionReceivedHandler handles the MessageLockLostException without breaking the client.
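For completeness, here is a minimal sketch (not from the original post) of how the registration and the message callback could be hardened if the lock is being lost because processing occasionally outruns the queue's lock duration. It assumes the Microsoft.Azure.ServiceBus SDK used in the question; MaxAutoRenewDuration keeps the lock renewed while the handler runs, and catching MessageLockLostException around CompleteAsync lets redelivery and the dead-letter path do their job:
private void RegisterMessageHandler()
{
    var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        AutoComplete = false,
        MaxConcurrentCalls = 1,
        // Keep renewing the lock while ProcessMessagesAsync is still running.
        MaxAutoRenewDuration = TimeSpan.FromMinutes(10)
    };
    _queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
}

private async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    try
    {
        // Handle message
        await _queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }
    catch (MessageLockLostException)
    {
        // The lock already expired; the message will be redelivered (and eventually
        // dead-lettered once MaxDeliveryCount is exceeded), so log and move on.
        _logger.LogWarning("Lock lost for message {MessageId}; it will be redelivered", message.MessageId);
    }
}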


Is there any way to communicate between an API and a Worker Service via HTTP requests (and vice versa)?

The idea is to have a worker service doing heavy tasks when requested from an API.
Example of communication:
API: posts data to the worker service.
Worker service: posts data back when it's done.
I've done some research but can't find any solution to what I'm looking for.
Is it possible? If not, is there any other way to do it?
As far as I know there is no built-in mechanism to do this, so you could use a database or a queue to request the work to be carried out; the worker can then poll this db/queue to handle the workload.
I use RabbitMQ to communicate between threads in Python on Linux, as well as from ASP.NET to Windows services (of course running in another process). The service/worker checks the message queue on every iteration of its loop and performs work based on what's in the message. The message gets pushed onto the queue by ASP.NET, usually through functionality in a controller.
A quick Google search will get you good results on RabbitMQ and ASP.NET. Not only can you use this with a worker, but also with other programs/services you have running on the system.
Here is a first link from Google that might help you along the way:
https://www.c-sharpcorner.com/article/rabbitmq-message-queue-using-net-core-6-web-api/
I have implemented this in my own project. I have a service on the ASP.NET side, which a controller calls to post messages onto the queue. The worker checks the queue on every loop and performs functions.
Here is the worker service
public class SchedulerWorkerService : BackgroundService
{
    private readonly string queueName = "SchedulerQueue";
    private readonly IConnectionFactory factory;
    private readonly IConnection connection;
    private readonly IModel channel;

    public SchedulerWorkerService()
    {
        factory = new ConnectionFactory { HostName = "localhost" };
        connection = factory.CreateConnection();
        channel = connection.CreateModel();
        channel.QueueDeclare(queueName, exclusive: false);
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            BasicGetResult result = channel.BasicGet(queueName, true);
            if (result != null && result.Body.Length > 0)
            {
                var message = Encoding.UTF8.GetString(result.Body.ToArray());
                // Do your work based on the message here.
                switch (message)
                {
                    case "PAUSE SERVICE":
                        // Stop scheduled task processing
                        break;
                    case "REFRESH":
                        // Perform logic to refresh all data stored in variables from the database
                        break;
                    case "SHUTDOWN":
                        // Perform any shutdown logic for running processes
                        break;
                }
            }
            await Task.Delay(1000, stoppingToken);
        }
    }

    public override void Dispose()
    {
        channel.Close();
        channel.Dispose();
        connection.Close();
        connection.Dispose();
        base.Dispose();
    }
}
Here is the service that pushes messages onto the queue
public class SchedulerWorkerCommService
{
    public void StopService()
    {
        // Not implemented in this example.
    }

    public void SendMessage(string message)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        // Dispose both the connection and the channel when the publish is done.
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();
        channel.QueueDeclare("SchedulerQueue", exclusive: false);
        channel.BasicPublish("", "SchedulerQueue", body: Encoding.UTF8.GetBytes(message));
    }
}
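As a usage illustration (hypothetical, not part of the answer above), a controller could take SchedulerWorkerCommService as a constructor dependency (assuming it is registered in DI) and push a message onto the queue from an action; the route and message text here are placeholders:
// Hypothetical ASP.NET Core controller that posts a message for the worker to pick up.
[ApiController]
[Route("api/[controller]")]
public class SchedulerController : ControllerBase
{
    private readonly SchedulerWorkerCommService _commService;

    public SchedulerController(SchedulerWorkerCommService commService)
    {
        _commService = commService;
    }

    [HttpPost("refresh")]
    public IActionResult Refresh()
    {
        // The worker picks this up on its next loop iteration.
        _commService.SendMessage("REFRESH");
        return Accepted();
    }
}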

How to handle updates the right way using SignalR?

I have an Angular client application and a SignalR hub, and I also have a service that takes a timestamp as a parameter.
I want to invoke a method in the hub when I press a start button in the client, and once the method is invoked I want to keep listening to all the changes (create a timer) until the client presses the stop button, at which point I will stop the timer.
So I want to ask which is better:
1. Call the method from the client with the timestamp and create a setInterval that keeps calling it; when the stop button is pressed I can stop the interval.
Pros:
It is easy to start and stop the timer.
Cons:
I am invoking the method every second and then checking on the client whether there is a response to update the UI.
2. Invoke the method once and create a timer for each client on the server; when the client presses the stop button I can invoke another method to stop the timer for that client.
Pros:
The timestamp is checked in the hub, and data is sent to the client only if the timestamp from the service is greater than the local timestamp.
Cons:
I actually don't know how to create a timer for each client, so if this is the right way, please help me.
You are using SignalR for real-time data communication. Invoking a method every second defeats the purpose of SignalR, so that is not a solution.
The best solution would be to use the groups feature.
Example:
Your start button adds the user to a group.
While the user is in the group it will receive all the data you need: await this.Clients.Group("someGroup").BroadcastMessage(message);
Your stop button removes the user from the group so it no longer receives data.
Some example code on the hub:
public async Task Start()
{
    // Add the user to the data group
    await this.Groups.AddToGroupAsync(this.Context.ConnectionId, "dataGroup");
}

public async Task Stop()
{
    // Remove the user from the data group
    await this.Groups.RemoveFromGroupAsync(this.Context.ConnectionId, "dataGroup");
}
Worker example that sends data to the users who pressed start and are receiving real-time data:
private readonly IHubContext<SignalRHub, ISignalRHub> hub;
private readonly IServiceProvider serviceProvider;
private readonly ILogger<Worker> logger;

public Worker(IServiceProvider serviceProvider, IHubContext<SignalRHub, ISignalRHub> hub, ILogger<Worker> logger)
{
    this.serviceProvider = serviceProvider;
    this.hub = hub;
    this.logger = logger;
}

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        await this.hub.Clients.Group("dataGroup").BroadcastMessage(DataManager.GetData());
        this.logger.LogDebug("Sent data to all users at {0}", DateTime.UtcNow);
        await Task.Delay(1000, stoppingToken);
    }
}
PS: Where you have the worker, I assume you have some manager that gets the data to be sent to the user.
Edit: If you don't want to use a worker, you can always just use a timer like:
public class TimerManager
{
    private Timer _timer;
    private AutoResetEvent _autoResetEvent;
    private Action _action;

    public DateTime TimerStarted { get; }

    public TimerManager(Action action)
    {
        _action = action;
        _autoResetEvent = new AutoResetEvent(false);
        _timer = new Timer(Execute, _autoResetEvent, 1000, 2000);
        TimerStarted = DateTime.Now;
    }

    public void Execute(object stateInfo)
    {
        _action();
        // Use TotalSeconds: the Seconds component never exceeds 59, so the timer would never stop.
        if ((DateTime.Now - TimerStarted).TotalSeconds > 60)
        {
            _timer.Dispose();
        }
    }
}
And then use it somewhere like:
var timerManager = new TimerManager(() => this.hub.Clients.Group("dataGroup").BroadcastMessage(DataManager.GetData()));
Option #1 isn't viable, since SignalR exists to remove the need for polling. Frequent polling doesn't scale either: if every client polled the server every second, the website would end up paying a lot of CPU and bandwidth for nothing. Business people don't like frequent polling either, as all hosters and cloud providers charge for egress.
The SignalR streaming examples use timed notifications as a simple example of streaming notifications using IAsyncEnumerable<T>. In the simplest example, a counter increments every delay milliseconds:
public class AsyncEnumerableHub : Hub
{
    public async IAsyncEnumerable<int> Counter(
        int count,
        int delay,
        [EnumeratorCancellation]
        CancellationToken cancellationToken)
    {
        for (var i = 0; i < count; i++)
        {
            // Check the cancellation token regularly so that the server will stop
            // producing items if the client disconnects.
            cancellationToken.ThrowIfCancellationRequested();
            yield return i;

            // Use the cancellationToken in other APIs that accept cancellation
            // tokens so the cancellation can flow down to them.
            await Task.Delay(delay, cancellationToken);
        }
    }
}
The client can call this action passing the desired delay and just start receiving notifications. SignalR knows this is a stream of notifications because the method returns IAsyncEnumerable<T>.
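As an illustration (not part of the original answer), a .NET client could consume this stream with HubConnection.StreamAsync from Microsoft.AspNetCore.SignalR.Client; the hub URL and the argument values below are placeholders:
// Minimal sketch of a .NET streaming client for the Counter method above.
var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/asyncEnumerableHub") // placeholder endpoint
    .Build();
await connection.StartAsync();

using var cts = new CancellationTokenSource();

// Ask the hub for 10 items, one every 500 ms; cancelling the token stops the server loop.
await foreach (var item in connection.StreamAsync<int>("Counter", 10, 500, cts.Token))
{
    Console.WriteLine(item);
}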
The next, more advanced example uses Channels to allow the publisher method WriteItemsAsync to send a stream of notifications to the hub.
The hub method itself is simpler; it just returns the Channel's reader:
public ChannelReader<int> Counter(
    int count,
    int delay,
    CancellationToken cancellationToken)
{
    var channel = Channel.CreateUnbounded<int>();

    // We don't want to await WriteItemsAsync, otherwise we'd end up waiting
    // for all the items to be written before returning the channel back to
    // the client.
    _ = WriteItemsAsync(channel.Writer, count, delay, cancellationToken);

    return channel.Reader;
}
The publisher method writes to the ChannelWriter instead of returning an IAsyncEnumerable:
private async Task WriteItemsAsync(
    ChannelWriter<int> writer,
    int count,
    int delay,
    CancellationToken cancellationToken)
{
    Exception localException = null;
    try
    {
        for (var i = 0; i < count; i++)
        {
            await writer.WriteAsync(i, cancellationToken);

            // Use the cancellationToken in other APIs that accept cancellation
            // tokens so the cancellation can flow down to them.
            await Task.Delay(delay, cancellationToken);
        }
    }
    catch (Exception ex)
    {
        localException = ex;
    }

    writer.Complete(localException);
}
This method can easily live in a different class. All that's needed is to pass the ChannelWriter to the publisher.
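A minimal sketch of that separation (the class and method names here are hypothetical): the hub creates the channel and hands the writer to an injected publisher.
// Hypothetical publisher class; only the ChannelWriter crosses the boundary.
public class CounterPublisher
{
    public async Task PublishAsync(ChannelWriter<int> writer, int count, int delay, CancellationToken token)
    {
        Exception error = null;
        try
        {
            for (var i = 0; i < count; i++)
            {
                await writer.WriteAsync(i, token);
                await Task.Delay(delay, token);
            }
        }
        catch (Exception ex)
        {
            error = ex;
        }
        writer.Complete(error);
    }
}

// Hub method, assuming a CounterPublisher (_publisher) is injected via the hub's constructor:
public ChannelReader<int> Counter(int count, int delay, CancellationToken cancellationToken)
{
    var channel = Channel.CreateUnbounded<int>();
    _ = _publisher.PublishAsync(channel.Writer, count, delay, cancellationToken);
    return channel.Reader;
}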

Synchronously call ProcessEventsAsync to receive messages in C#

I have a Windows Service written in C# which subscribes to an Event Hub and listens for messages.
The code I have is as follows:
public class SimpleEventProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }

    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }

    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }

    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
            DoSomethingWithMessage(); // typically takes 20-30 minutes to finish
        }
        return context.CheckpointAsync();
    }
}
This is the sample code mentioned in this document.
Now I have 8 partitions on my Event Hub, so all 8 partitions keep receiving messages as new messages arrive. Whenever a message is received, the method DoSomethingWithMessage() is called, which takes around 30 minutes to finish as it does some calculations.
I want my code to run synchronously, meaning that when one message is received, the service should first complete this method's execution and only then receive the next message. What is happening now is that even while DoSomethingWithMessage() is still executing, new messages are received and processed in parallel with the first message, which is creating calculation problems.
Is there a way I can receive messages one by one?

Azure Function: how to better implement a delay when retrying a queue message

My Azure Function should listen to a queue for messages, get a message, and try to call an external service with the value inside the message. If the external service returns "OK", we write the message to another queue (for the next Azure Function); if it returns "Fail", we have to return the message to our current queue and retry it with our Azure Function again after 5 minutes. How do I implement this? I did it with a timer, but I don't like this solution:
[FunctionName("FunctionOffice365VerificateDomain_and_AddService_and_GexMxRecord")]
public async static Task Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer,
[Queue("domain-verificate-Office365-add-services-get-mx-record", Connection = "StorageConnectionString")]CloudQueue listenQueue,
[Queue("domain-add-mx-record-to-registrator", Connection = "StorageConnectionString")]CloudQueue outputQueue,
ILogger log)
{
while (true)
{
// do "invisible" message for next 30 sec
var message = await listenQueue.GetMessageAsync();
if (message != null)
{
DomainForRegistration domainForRegistration = JsonConvert.DeserializeObject<DomainForRegistration>(message.AsString);
try
{
await _office365DomainService.VerifyDomainAsync(domainForRegistration.DomainName);
// remove message
await listenQueue.DeleteMessageAsync(message);
await _office365DomainService.UpdateIndicateSupportedServicesDomainAsync(domainForRegistration.DomainName);
var mxRecord = await _office365DomainService.GetMxRecordForDomainAsync(domainForRegistration.DomainName);
}
catch (DomainVerificationRecordNotFoundException)
{
// thrown when VerifyDomainAsync failed
}
}
else
break;
}
}
How can I do this more cleanly, without the while(true), but with a timeout after a failed validation?
I agree with @DavidG: try to use a queue trigger to achieve your goal.
We can rely on the queue host settings.
visibilityTimeout is the time interval between retries when processing of a message fails.
maxDequeueCount is the number of times to try processing a message before moving it to the poison queue.
{
    "version": "2.0",
    "extensions": {
        "queues": {
            "visibilityTimeout": "00:05:00",
            "maxDequeueCount": 2
        }
    }
}
In this way, the function would look like this:
public static async Task Run(
    [QueueTrigger("domain-verificate-Office365-add-services-get-mx-record")]string myQueueItem, ILogger log,
    [Queue("domain-add-mx-record-to-registrator", Connection = "StorageConnectionString")]IAsyncCollector<string> outputQueue
    )
{
    // do stuff, then output the message
    await outputQueue.AddAsync(myQueueItem);
}
If you don't want to throw the exception to the host, we can turn to the initialVisibilityDelay parameter of the CloudQueue AddMessageAsync method.
It specifies the interval of time from now during which the message will be invisible.
public static async Task Run(
    [QueueTrigger("domain-verificate-Office365-add-services-get-mx-record")]string myQueueItem, ILogger log,
    [Queue("domain-add-mx-record-to-registrator", Connection = "StorageConnectionString")]IAsyncCollector<string> outputQueue,
    [Queue("domain-verificate-Office365-add-services-get-mx-record", Connection = "StorageConnectionString")]CloudQueue listenQueue
    )
{
    try
    {
        // do stuff, then output the message
        await outputQueue.AddAsync(myQueueItem);
    }
    catch (DomainVerificationRecordNotFoundException)
    {
        // add the message back to the current queue; it will only become visible after 5 minutes
        await listenQueue.AddMessageAsync(new CloudQueueMessage(myQueueItem), null, TimeSpan.FromMinutes(5), null, null);
    }
}

Message from Service Bus queue disappears on error in activity function

I have developed an Azure Durable Functions app that triggers on new Service Bus queue messages. It works OK when no errors occur, but when an error occurs in an activity function, it logs the failure and the message is gone forever from the queue. What could be causing that, and how do I prevent the message from disappearing from the queue on error?
Here is the reproducible code. It's the code generated from a new Azure Function template in VS2017, except that an exception is thrown when the city is "Seattle" and it uses a ServiceBusTrigger instead of an HttpTrigger.
[FunctionName("Test")]
public static async Task<List<string>> RunOrchestrator(
[OrchestrationTrigger] DurableOrchestrationContext context)
{
var outputs = new List<string>();
// Replace "hello" with the name of your Durable Activity Function.
outputs.Add(await context.CallActivityAsync<string>("Test_Hello", "Tokyo"));
outputs.Add(await context.CallActivityAsync<string>("Test_Hello", "Seattle"));
outputs.Add(await context.CallActivityAsync<string>("Test_Hello", "London"));
// returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
return outputs;
}
[FunctionName("Test_Hello")]
public static string SayHello([ActivityTrigger] string name, ILogger log)
{
log.LogInformation($"Saying hello to {name}.");
if (name == "Seattle")
throw new Exception("An error occurs");
return $"Hello {name}!";
}
[FunctionName("Test_HttpStart")]
public static async Task ServiceBusStart(
[ServiceBusTrigger("somequeue", Connection = "ServiceBusQueueListenerConnectionString")]string queuemsg,
[OrchestrationClient]DurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
var msg = JsonConvert.DeserializeObject<IncomingMessage>(queuemsg);
string instanceId = await starter.StartNewAsync("Test", msg);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
}
Update: When the exception occurs in the orchestration client function, it does the right thing, retrying and putting the message on the dead-letter queue if retrying fails x times.
So I managed to work around this by updating the client function with this while loop, checking for failed/terminated/canceled status.
[FunctionName("Test_HttpStart")]
public static async Task ServiceBusStart(
[ServiceBusTrigger("somequeue", Connection = "ServiceBusQueueListenerConnectionString")]string queuemsg,
[OrchestrationClient]DurableOrchestrationClient starter,
ILogger log)
{
// Function input comes from the request content.
var msg = JsonConvert.DeserializeObject<IncomingMessage>(queuemsg);
string instanceId = await starter.StartNewAsync("Test", msg);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
var status = await starter.GetStatusAsync(instanceId);
while (status.RuntimeStatus != OrchestrationRuntimeStatus.Completed)
{
System.Threading.Thread.Sleep(1000);
status = await starter.GetStatusAsync(instanceId);
if (status.RuntimeStatus == OrchestrationRuntimeStatus.Failed
|| status.RuntimeStatus == OrchestrationRuntimeStatus.Terminated
|| status.RuntimeStatus == OrchestrationRuntimeStatus.Canceled)
{
throw new Exception("Orchestration failed with error: " + status.Output);
}
}
}
However, this seems like a hack to me, and I have not seen this type of code in any MS example code. I would expect this to be taken care of by the Durable Functions framework. Is there another way to make the Service Bus trigger work with Durable Functions?
This behavior is by design. Starting an orchestration is asynchronous - i.e. the StartNewAsync API will not automatically wait for the orchestration to run or complete. Internally, StartNewAsync just drops a message into an Azure Storage queue and writes an entry into an Azure Storage table. If that happens successfully, then your Service Bus function will continue running and complete successfully, at which point the message will be deleted.
Your workaround is acceptable if you truly need the Service Bus queue message to retry, but I question why you would need to do this. The orchestration itself can manage its own retries without relying on Service Bus. For example, you could use CallActivityWithRetryAsync to retry internally within the orchestration.
See the Error Handling topic of the Durable Functions documentation.
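For reference, a minimal sketch of what retrying inside the orchestrator could look like with CallActivityWithRetryAsync (the retry interval and attempt count here are illustrative, not from the original answer):
[FunctionName("Test")]
public static async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // Retry each activity up to 3 times, waiting 5 seconds before the first retry.
    var retryOptions = new RetryOptions(
        firstRetryInterval: TimeSpan.FromSeconds(5),
        maxNumberOfAttempts: 3);

    var outputs = new List<string>();
    outputs.Add(await context.CallActivityWithRetryAsync<string>("Test_Hello", retryOptions, "Tokyo"));
    outputs.Add(await context.CallActivityWithRetryAsync<string>("Test_Hello", retryOptions, "Seattle"));
    outputs.Add(await context.CallActivityWithRetryAsync<string>("Test_Hello", retryOptions, "London"));
    return outputs;
}
If all attempts fail, the orchestration itself fails, which the workaround above would then observe as a Failed runtime status.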
I know this is an old thread, but I wanted to share how I got this working with a ServiceBusTrigger and WaitForCompletionOrCreateCheckStatusResponseAsync.
[FunctionName(nameof(QueueTriggerFunction))]
public async Task QueueTriggerFunction(
    [ServiceBusTrigger("queue-name", Connection = "connectionstring-key")]string queueMessage,
    MessageReceiver messageReceiver,
    string lockToken,
    string messageId,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    // note: autocomplete is disabled
    try
    {
        // start the durable function
        var instanceId = await starter.StartNewAsync(nameof(OrchestratorFunction), queueMessage);

        // get the payload (we want to use the status uri)
        var payload = starter.CreateHttpManagementPayload(instanceId);

        // instruct QueueTriggerFunction to wait for the response
        await starter.WaitForCompletionOrCreateCheckStatusResponseAsync(new HttpRequestMessage(HttpMethod.Get, payload.StatusQueryGetUri), instanceId);

        // response ready, get the status
        var status = await starter.GetStatusAsync(instanceId);

        // act on the status
        if (status.RuntimeStatus == OrchestrationRuntimeStatus.Completed)
        {
            // complete the message
            await messageReceiver.CompleteAsync(lockToken);
            log.LogInformation($"{nameof(Functions)}.{nameof(QueueTriggerFunction)}: {nameof(OrchestratorFunction)} succeeded [MessageId={messageId}]");
        }
        else
        {
            // or dead-letter it
            await messageReceiver.DeadLetterAsync(lockToken);
            log.LogError($"{nameof(Functions)}.{nameof(QueueTriggerFunction)}: {nameof(OrchestratorFunction)} failed [MessageId={messageId}]");
        }
    }
    catch (Exception ex)
    {
        // not sure what went wrong, so let the lock expire and try again (until the max retry attempts are reached)
        log.LogError(ex, $"{nameof(Functions)}.{nameof(QueueTriggerFunction)}: handler failed [MessageId={messageId}]");
    }
}
The thing is, all examples on the internet use an HttpTrigger and use that trigger's HTTP request to check for completion, but you don't have that with the ServiceBusTrigger. Moreover, I don't think that's correct; you should use the status URI from the management payload, as I'm doing here with the instanceId of the orchestrator function.
