RabbitMQ - How to configure conditional DLX? - c#

I have an active queue which receives all messages from the publisher. My consumer reads those messages and acks/nacks them depending on the result of processing.
while (true)
{
    var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
    var body = ea.Body;
    var message = Encoding.UTF8.GetString(body);
    var processed = ProcessMessage(message);

    if (processed)
        channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
    else
        channel.BasicNack(deliveryTag: ea.DeliveryTag, multiple: false, requeue: true);
}
My questions are:
Is setting requeue to true when the message is nacked correct, or do we need to create a separate queue for retries?
Say I want to move a message to the DLX after it has been retried 10 times. How do I do that? Is it C# code, or can a rule be defined on the queue?
How do I know that a message has been retried 10 times? Does RabbitMQ provide a mechanism for this, or do I need to design the message object to carry a retry count myself?
Thanks for your inputs

Starting with release 3.5.2, RabbitMQ automatically adds an x-death header to dead-lettered messages with information such as:
the queue(s) which saw the message
the reason(s) it was dead-lettered
the number of times it was dead-lettered
timestamps
Look at the "Dead-Lettered Messages" section near the end of the DLX documentation for more details.
If you use an older version of RabbitMQ, then @Franklin's solution should work.
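If you want to drive retry logic from that header in C#, here is a minimal sketch of reading the x-death count inside a consumer (the helper class and method names are made up; note that AMQP header values arrive as tables, so entries are dictionaries and strings come back as byte arrays):
using System.Collections.Generic;
using System.Text;
using RabbitMQ.Client.Events;

static class RetryInfo
{
    // Returns the x-death "count" recorded for the given queue, or 0 if the
    // message has never been dead-lettered.
    public static long GetDeathCount(BasicDeliverEventArgs ea, string queueName)
    {
        if (ea.BasicProperties?.Headers == null ||
            !ea.BasicProperties.Headers.TryGetValue("x-death", out var raw))
            return 0;

        foreach (var entry in (IList<object>)raw)          // one entry per queue/reason pair
        {
            var death = (IDictionary<string, object>)entry;
            var queue = Encoding.UTF8.GetString((byte[])death["queue"]);
            if (queue == queueName)
                return (long)death["count"];
        }
        return 0;
    }
}
Keep in mind that x-death only grows when a message is actually dead-lettered (for example, rejected with requeue: false into a retry topology); a plain BasicNack with requeue: true does not increment it.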

If you set requeue to false, the message will go to any dead-letter exchange assigned to the queue; true will requeue the message.
What I have done for retry attempts is to create a hold exchange and queue. If you want to retry a message, return a positive ack for the original delivery, add a RetryAttempts header to the message, and publish it to the hold exchange with a TTL. Set the hold queue's dead-letter exchange to an exchange that routes the message back to the original queue. Then check the header in the consumer and nack without requeueing once the retry attempts get too large. A sketch of this wiring follows below.
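A minimal sketch of that topology with the RabbitMQ.Client API, reusing the channel and ea from the question's consumer loop and assuming made-up names (hold.exchange, hold.queue, work.exchange) plus a hypothetical GetRetryCount helper that reads the RetryAttempts header:
// Assumed names; adjust to your own exchanges and queues.
channel.ExchangeDeclare("hold.exchange", ExchangeType.Fanout, durable: true);
channel.QueueDeclare("hold.queue", durable: true, exclusive: false, autoDelete: false,
    arguments: new Dictionary<string, object>
    {
        // When the per-message TTL expires, the broker dead-letters the message to the work exchange.
        { "x-dead-letter-exchange", "work.exchange" }
    });
channel.QueueBind("hold.queue", "hold.exchange", routingKey: "");

// In the consumer: ack the original delivery, then republish to the hold exchange
// with an incremented retry header and a delay (TTL) before it comes back.
int retries = GetRetryCount(ea); // hypothetical helper reading the RetryAttempts header
channel.BasicAck(ea.DeliveryTag, multiple: false);

if (retries < 10)
{
    var props = channel.CreateBasicProperties();
    props.Persistent = true;
    props.Expiration = "30000"; // 30 seconds in the hold queue before it is routed back
    props.Headers = new Dictionary<string, object> { { "RetryAttempts", retries + 1 } };
    channel.BasicPublish("hold.exchange", routingKey: "", basicProperties: props, body: ea.Body);
}
else
{
    // Give up: route the message to a parking-lot/dead-letter exchange of your choosing instead.
}
One caveat with per-message Expiration: messages only expire at the head of the queue, so mixing different TTLs in the same hold queue can delay shorter ones; a fixed x-message-ttl on the hold queue avoids that.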

Related

How to tell RabbitMQ that a task fails in .NET Core

Let's imagine that we have a queue named "NotificationQ" and a consumer that gets a task from that queue and sends emails to customers.
The emailing process sends an email through the Mailgun API. That API request does not return 200 every time (the reason is not important). In that case I need to tell RabbitMQ that the task failed. I know there is a feature called autoAck, but if a request fails, how does the RabbitMQ client package understand that it failed?
Do I manually trigger an ack/nack to say that the request failed?
I am using the https://www.nuget.org/packages/RabbitMQ.Client/ package to handle RabbitMQ tasks.
var channel = RabbitPrepareFactory.GetConnectionFactory();
channel.BasicQos(0, 1, false);
var notificationPack = channel.BasicGet("notification", true);
var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);
var isEmailSendSuccessful = SomeFakeEmailSendFunctions(task.Email);
if (!isEmailSendSuccessful)
{
    // something to tell RabbitMQ that the task failed and not to delete that task from the queue
    .......
}
I think this could be useful. I would use something like a dead-letter exchange:
https://www.rabbitmq.com/dlx.html
So every time a message fails for whatever reason, you push it to that queue.
Once your message has been received by your consumer and the scope of the operation has finished, the message is acknowledged so that other consumers will not pick up an already processed message.
[Edit]
I don't think it's a good idea to process a message from a queue and afterwards leave it there if something happens to your back end. If you implement a dead-letter queue, you can try to reprocess those messages at some point (maybe with a cron job), or if you really don't want dead-letter queues, you can implement a retry mechanism in your client. Polly could work very well in your case: https://github.com/App-vNext/Polly
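A minimal sketch of that idea, reusing the question's snippet: fetch with autoAck set to false, ack on success, and reject without requeueing on failure so the broker moves the message to whatever dead-letter exchange is configured on the queue (via its x-dead-letter-exchange argument):
// autoAck is false, so the message is not removed until we decide what to do with it.
var notificationPack = channel.BasicGet("notification", false);
if (notificationPack != null)
{
    var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
    var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);

    if (SomeFakeEmailSendFunctions(task.Email))
    {
        // Success: remove the message from the queue.
        channel.BasicAck(notificationPack.DeliveryTag, multiple: false);
    }
    else
    {
        // Failure: requeue: false sends the message to the queue's dead-letter exchange (if one is
        // configured); requeue: true would put it straight back on "notification" instead.
        channel.BasicNack(notificationPack.DeliveryTag, multiple: false, requeue: false);
    }
}
A Polly retry policy around SomeFakeEmailSendFunctions can be layered on top before you finally give up and nack.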

The correct way to wait for a specific message in a Service Bus Queue in a multi threaded environment (Azure Functions)

I have created a solution based on Azure Functions and Azure Service Bus, where clients can retrieve information from multiple back-end systems using a single API. The API is implemented in Azure Functions, and based on the payload of the request it is relayed to a Service Bus Queue, picked up by a client application running somewhere on-premise, and the answer sent back by the client to another Service Bus Queue, the "reply-" queue. Meanwhile, the Azure Function is waiting for a message in the reply-queue, and when it finds the message that belongs to it, it sends the payload back to the caller.
The Azure Function Activity Root Id is attached to the Service Bus message as the CorrelationId. This way each running function knows which message contains the response to the caller's request.
My question is about the way I am currently retrieving the messages from the reply queue. Since multiple instances can be running at the same time, each Azure Function instance needs to get its response from the client without blocking other instances. Besides that, a timeout needs to be observed: the client is expected to respond within 20 seconds. While waiting, the Azure Function should not block other instances.
This is the code I have so far:
internal static async Task<(string, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
    log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
    var receiver = new MessageReceiver(_connectionString, queueName, ReceiveMode.PeekLock);
    try
    {
        var sw = Stopwatch.StartNew();
        while (sw.Elapsed < timeout)
        {
            var message = await receiver.ReceiveAsync(timeout.Subtract(sw.Elapsed));
            if (message != null)
            {
                if (message.CorrelationId == operationId)
                {
                    log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
                    var reply = Encoding.UTF8.GetString(message.Body);
                    var error = message.UserProperties.ContainsKey("ErrorCode");
                    await receiver.CompleteAsync(message.SystemProperties.LockToken);
                    return (reply, error);
                }
                else
                {
                    log.LogInformation("Ignoring message for operation {OperationId}", message.CorrelationId);
                }
            }
        }
        return (null, false);
    }
    finally
    {
        await receiver.CloseAsync();
    }
}
The code is based on a few assumptions. I am having a hard time trying to find any documentation to verify my assumptions are correct:
I expect subsequent calls to ReceiveAsync not to fetch messages I have previously fetched and not explicitly abandoned.
I expect new messages that arrive on the queue to be received by ReceiveAsync, even though they may have arrived after my first call to ReceiveAsync and even though there might still be other messages in the queue that I haven't received yet either. E.g. there are 10 messages in the queue, I start receiving the first few message, meanwhile new messages arrive, and after I have read the 10 pre-existing messages, I get the new messages too.
I expect that when I call ReceiveAsync for a second time, that the lock is released from the message I received with the first call, although I did not explicitly Abandon that first message.
Could anyone tell me if my assumptions are correct?
Note: please don't suggest Durable Functions; even though they were designed specifically for this, they simply do not fulfill the requirements. Most notably, Durable Functions are invoked by a process that polls a queue with a sliding interval, so after not having any requests for a few minutes, the first new request can take a minute to start, which is not acceptable for my use case.
I would consider session enabled topics or queues for this.
The Message sessions documentation explains this in detail but the essential bit is that a session receiver is created by a client accepting a session. When the session is accepted and held by a client, the client holds an exclusive lock on all messages with that session's session ID in the queue or subscription. It will also hold exclusive locks on all messages with the session ID that will arrive later.
This makes it perfect for facilitating the request/reply pattern.
When sending the message to the queue that the on-premises handlers receive messages on, set the ReplyToSessionId property on the message to your operationId.
Then, the on-premises handlers need to set the SessionId property of the messages they send to the reply queue to the value of the ReplyToSessionId property of the message they processed.
Finally, you can update your code to use a SessionClient and call its AcceptMessageSessionAsync() method to start listening for messages on that session.
Something like the following should work:
internal static async Task<(string?, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
    log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
    var sessionClient = new SessionClient(_connectionString, queueName, ReceiveMode.PeekLock);
    try
    {
        var receiver = await sessionClient.AcceptMessageSessionAsync(operationId);

        // message will be null if the timeout is reached
        var message = await receiver.ReceiveAsync(timeout);
        if (message != null)
        {
            log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
            var reply = Encoding.UTF8.GetString(message.Body);
            var error = message.UserProperties.ContainsKey("ErrorCode");
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
            return (reply, error);
        }
        return (null, false);
    }
    finally
    {
        await sessionClient.CloseAsync();
    }
}
Note: For all this to work, the reply queue will need Sessions enabled. This will require the Standard or Premium tier of Azure Service Bus.
Both queues and topic subscriptions support enabling sessions. The topic subscriptions allow you to mix and match session enabled scenarios as your needs arise. You could have some subscriptions with it enabled, and some without.
The queue used to send the message to the on-premises handlers does not need Sessions enabled.
Finally, when Sessions are enabled on a queue or a topic subscription, the client applications can no longer send or receive regular messages. All messages must be sent as part of a session (by setting the SessionId) and received by accepting the session.
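For completeness, a hedged sketch of the two sending sides, assuming the Microsoft.Azure.ServiceBus package and illustrative queue names ("requests" and "replies"): the function stamps the request with ReplyToSessionId, and the on-premises handler copies that value into SessionId on the reply so that it lands in the session the function is waiting on.
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

static class RequestReplySketch
{
    // Azure Function side: send the request, stamped with the session the reply should use.
    public static async Task SendRequestAsync(string connectionString, string operationId, string payload)
    {
        var requestClient = new QueueClient(connectionString, "requests");
        var request = new Message(Encoding.UTF8.GetBytes(payload))
        {
            ReplyToSessionId = operationId // the session WaitForMessageAsync will accept
        };
        await requestClient.SendAsync(request);
        await requestClient.CloseAsync();
    }

    // On-premises handler side: copy ReplyToSessionId into SessionId on the reply.
    public static async Task SendReplyAsync(string connectionString, Message processedMessage, byte[] answer)
    {
        var replyClient = new QueueClient(connectionString, "replies");
        var reply = new Message(answer)
        {
            SessionId = processedMessage.ReplyToSessionId,
            CorrelationId = processedMessage.ReplyToSessionId
        };
        await replyClient.SendAsync(reply);
        await replyClient.CloseAsync();
    }
}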
It seems that this feature cannot be achieved at the moment.
You can give your voice here; if others have the same demand, they will vote up your idea.

RabbitMQ Verify Message Was Sent

This question already exists and has been answered, but there is a dark side to the answers. My channel already has BasicAcks and BasicNacks handlers (in a poor way):
Channel.BasicAcks += (sender, eventArgs) =>
{
    Console.WriteLine("Basic Ack!");
};
Channel.BasicNacks += (sender, eventArgs) =>
{
    Console.WriteLine("Basic Nack!");
};
I have a message that published to a queue. so I use this code to do that:
Channel.BasicPublish("ExchangeName", "QueueName", messageProperties, payload);
Channel.WaitForConfirmOrDie();
Since WaitForConfirmOrDie is a void method, how can I know whether the message was received by the queue? Or, more precisely, how can I implement the ack handlers so that they give me a clear state of the published message, so that I don't send it to the queue again, or so that I do resend it in the case of a BasicNack?
Using the BasicAcks and BasicNacks event handlers is independent of calling Channel.WaitForConfirmOrDie.
Channel.WaitForConfirmOrDie is a convenience method that synchronously waits for message acknowledgements. So, if you publish messages one-by-one, you will wait for these acks one-by-one. As you can imagine, that is pretty inefficient.
What you should do is register for BasicAcks and BasicNacks like you have done, and define an "acceptable number of outstanding confirms". Here's one way to implement this (a code sketch follows the steps below):
Publish up to N messages without an ack/nack (N is up to you). If the next message would exceed N do not continue to publish messages.
While a message is outstanding, save it locally (in RAM or local disk). Remember that you can't be 100% sure a message is queued until you get an ack for it.
If the message is acked, remove it from local storage and decrease the count of outstanding messages, which allows publishing to continue (if publishing is blocked). Please remember that messages can be acked in batches.
If the message is nacked, you could re-try it up to a certain number of times, maybe with backoff. Once the re-try limit is exceeded, raise an application exception.
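A minimal sketch of that bookkeeping with the RabbitMQ.Client confirm API, correlating acks and nacks to published messages via the channel's NextPublishSeqNo (the class name and limit are illustrative; the exchange and queue names mirror the question):
using System.Collections.Concurrent;
using System.Threading;
using RabbitMQ.Client;

class ConfirmingPublisher
{
    private const int MaxOutstanding = 1000;   // "N": tune for your workload
    private readonly ConcurrentDictionary<ulong, byte[]> _outstanding = new ConcurrentDictionary<ulong, byte[]>();
    private readonly IModel _channel;

    public ConfirmingPublisher(IModel channel)
    {
        _channel = channel;
        _channel.ConfirmSelect();              // enable publisher confirms on this channel
        _channel.BasicAcks += (sender, ea) => Settle(ea.DeliveryTag, ea.Multiple, nacked: false);
        _channel.BasicNacks += (sender, ea) => Settle(ea.DeliveryTag, ea.Multiple, nacked: true);
    }

    public void Publish(IBasicProperties messageProperties, byte[] payload)
    {
        while (_outstanding.Count >= MaxOutstanding)
            Thread.Sleep(10);                  // crude back-pressure; a semaphore is nicer

        var seqNo = _channel.NextPublishSeqNo; // the sequence number the broker will confirm
        _outstanding[seqNo] = payload;         // keep a copy until the broker acks it
        _channel.BasicPublish("ExchangeName", "QueueName", messageProperties, payload);
    }

    private void Settle(ulong deliveryTag, bool multiple, bool nacked)
    {
        // Multiple == true settles every outstanding sequence number up to deliveryTag.
        foreach (var seq in _outstanding.Keys)
        {
            if (seq != deliveryTag && !(multiple && seq <= deliveryTag))
                continue;
            if (_outstanding.TryRemove(seq, out var payload) && nacked)
            {
                // Nacked: retry with backoff, persist for later, or raise after too many attempts.
            }
        }
    }
}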

Azure queue handling via ReceiveAsync returns null right away

The expected behaviour for the code below would be that ReceiveAsync watches the Azure queue for up to 1 minute before returning null, or returning a message if one is received. The intended use is an IoT Hub resource where multiple messages may be added to a queue intended for one of several DeviceClient objects. Each DeviceClient continuously polls this queue to receive the messages intended for it; messages for other DeviceClients are left in the queue for those others.
The actual behaviour is that ReceiveAsync immediately returns null each time it is called, with no delay. This happens regardless of the TimeSpan value that is passed, or if no parameter is given (and the default time is used).
So, rather than seeing 1 log item per minute stating that a null message was received, I'm getting 2 log items per second (!). This behaviour is different from a few months ago, so I started some research, with little result so far.
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Client;

public static TimeSpan receiveMessageWaitTime = new TimeSpan(0, 1, 0);
Microsoft.Azure.Devices.Client.Message receivedMessage = null;

deviceClient = DeviceClient.CreateFromConnectionString(Settings.lastKnownConnectionString, Microsoft.Azure.Devices.Client.TransportType.Amqp);

// This code runs inside an infinite loop/task with try/catch handling
if (deviceClient != null)
{
    receivedMessage = await deviceClient.ReceiveAsync(receiveMessageWaitTime);
    if (receivedMessage != null)
    {
        string Json = Encoding.ASCII.GetString(receivedMessage.GetBytes());
        // Handle the message
    }
    else
    {
        // Log the fact that we got a null message, and try again later
    }
    await Task.Delay(500); // Give the CPU some time, this is an infinite loop after all.
}
I looked at the Azure hub and noticed 8 messages in the queue. I then added 2 more; neither of the new messages was received, and the queue now holds 10 items.
I did notice this question: Azure ServiceBus: Client.Receive() returns null for messages > 64 KB
But I have no way to see whether there is indeed a message that big currently in the queue (since ReceiveAsync returns null...).
As such the questions:
Could you preview the messages in the queue?
Could you get a queue size, e.g. ask the number of messages in the queue before getting them?
Could you delete messages from the queue without getting them?
Could you create a callback based receive instead of an infinite loop? (I guess internally the code would just do a peek and the same as we are already doing)
Any help would be greatly appreciated.
If you use Azure Service Bus, I recommend using the Service Bus Explorer tool to preview messages and to get the number of messages in the queue. You can also delete messages without receiving them.
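If you would rather do this programmatically against a Service Bus queue, here is a minimal sketch assuming the Microsoft.Azure.ServiceBus package and an illustrative queue name (note this targets Service Bus entities, not the IoT Hub cloud-to-device queue that DeviceClient reads):
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.ServiceBus.Management;

public static class QueueInspection
{
    public static async Task InspectAsync(string connectionString, string queueName)
    {
        // Count messages (active, dead-lettered, scheduled) without touching them.
        var management = new ManagementClient(connectionString);
        var runtimeInfo = await management.GetQueueRuntimeInfoAsync(queueName);
        Console.WriteLine($"Active messages: {runtimeInfo.MessageCountDetails.ActiveMessageCount}");

        // Peek messages without locking or removing them.
        var receiver = new MessageReceiver(connectionString, queueName, ReceiveMode.PeekLock);
        var peeked = await receiver.PeekAsync(10);
        foreach (var msg in peeked)
            Console.WriteLine($"Peeked {msg.MessageId}, {msg.Size} bytes");

        await receiver.CloseAsync();
        await management.CloseAsync();
    }
}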

Getting, processing and deleting all queue messages instead of doing one by one?

I am working on Azure queue. I need to get, process and delete all queue messages. What I am doing right now is calling the GetMessage, process the message and calling DeleteMessage one by one.
var message = _queue.GetMessage();
if (message == null)
{
    return;
}
// process the message
_queue.DeleteMessage(message);
Is there a way to get all the messages first, then process them and delete all the processed messages?
You can't get all messages from a queue in a single call. Maximum number of messages you can Get from a queue in a single call is 32. So what you would need to do is something like:
var messages = _queue.GetMessages(32);
and then process these messages instead of getting one message at a time.
UPDATE
So a few things based on your comments:
A queue has a property called ApproximateMessageCount which tells you approximately how many messages there are in the queue. This should give you an idea about the total number of messages.
You can't delete 32 messages in one shot. You will need to delete one message at a time.
Based on these, do take a look at pseudo code below:
do
{
    var messages = _queue.GetMessages(32);
    foreach (var msg in messages)
    {
        ProcessMessage(msg);
        _queue.DeleteMessage(msg);
    }

    _queue.FetchAttributes(); // refreshes the cached ApproximateMessageCount
    var approximateMessagesCount = _queue.ApproximateMessageCount ?? 0;
    if (approximateMessagesCount == 0)
    {
        break;
    }
} while (true);
Basically you have to keep on fetching messages from the queue (32 at a time), process individual message and once the message is processed then delete it. Once these 32 messages have been processed and deleted, you have to check if there are any more messages in the queue. If there are messages, you would repeat this process. If there are no messages, then you would exit out of the loop.
