Scheduling Basic Consume in RabbitMQ - C#

I have been using RabbitMQ successfully for the last month. Messages are read from a queue using RabbitMQ's BasicConsume feature, and a message published to a queue is immediately consumed by the corresponding consumer.
Now I have created a new queue, DelayedMsg; messages published to this queue must be read only after a 5-minute delay. What should I do?

Add a current timestamp to the message when publishing it to the main queue from the publisher/sender, for example 'published_on' => 1476424186.
On the consumer side, first check the difference between the current timestamp and published_on.
If the difference is less than 5 minutes, send the message to another queue (a DLX holding queue), setting an expiration time (use the 'expiration' property of the AMQP message).
The expiration value should be the remaining delay, i.e. 5 minutes minus (current timestamp - published_on), expressed in milliseconds.
The message will then expire in the DLX queue exactly 5 minutes after it was originally published.
Make sure 'x-dead-letter-exchange' is set to your main queue's exchange and the DLX queue is bound to it, so that when the message expires it automatically gets queued back into the main queue. See Dead Letter Exchange for more details.
The consumer then gets the message back after 5 minutes and processes it normally, since the difference between the current timestamp and published_on will now be at least 5 minutes.
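To make this concrete, here is a minimal sketch of the consumer-side check, assuming the RabbitMQ.Client (6.x) library; channel, message (a BasicDeliverEventArgs), the "delayed_msg_hold" queue name, and ProcessNormally are placeholders of mine, not names from the question:

long publishedOn = Convert.ToInt64(message.BasicProperties.Headers["published_on"]);
long elapsed = DateTimeOffset.UtcNow.ToUnixTimeSeconds() - publishedOn;
const long delaySeconds = 5 * 60;

if (elapsed < delaySeconds)
{
    // Not yet due: park it in the DLX-backed holding queue for the remaining time.
    IBasicProperties props = channel.CreateBasicProperties();
    props.Headers = message.BasicProperties.Headers;
    props.Expiration = ((delaySeconds - elapsed) * 1000).ToString(); // remaining delay, in ms
    channel.BasicPublish(exchange: "", routingKey: "delayed_msg_hold",
                         basicProperties: props, body: message.Body);
}
else
{
    ProcessNormally(message); // due: at least 5 minutes have passed since publishing
}
channel.BasicAck(message.DeliveryTag, multiple: false);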

Avoid using DLX to implement delayed messages; it was more of a workaround before the "RabbitMQ Delayed Message Plugin" existed.
Now that we have this plugin, we should use it instead:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
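For reference, a sketch of the plugin-based approach with RabbitMQ.Client; the exchange, queue, and routing-key names below are placeholders of mine:

// Requires the rabbitmq_delayed_message_exchange plugin to be enabled on the broker.
var args = new Dictionary<string, object> { ["x-delayed-type"] = "direct" };
channel.ExchangeDeclare("delayed_exchange", type: "x-delayed-message",
                        durable: true, autoDelete: false, arguments: args);
channel.QueueDeclare("DelayedMsg", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("DelayedMsg", "delayed_exchange", routingKey: "delayed");

var props = channel.CreateBasicProperties();
props.Headers = new Dictionary<string, object> { ["x-delay"] = 5 * 60 * 1000 }; // delay in ms
channel.BasicPublish("delayed_exchange", "delayed", props,
                     Encoding.UTF8.GetBytes("consume me in five minutes"));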

Create 2 queues:
1. a work queue
2. a delay queue
On the delay queue, set the x-dead-letter-exchange/routing-key properties to route to the work queue, and set the TTL to 5 minutes.
Send messages to the delay queue and don't consume from it; after 5 minutes each message is dead-lettered into the work queue, so you just consume the work queue and process the messages there.
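A sketch of that setup with RabbitMQ.Client, assuming placeholder names "work_queue" and "delay_queue"; dead-lettering here goes through the default exchange with the work queue's name as the routing key:

channel.QueueDeclare("work_queue", durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare("delay_queue", durable: true, exclusive: false, autoDelete: false,
    arguments: new Dictionary<string, object>
    {
        ["x-message-ttl"] = 5 * 60 * 1000,           // 5 minutes, in milliseconds
        ["x-dead-letter-exchange"] = "",             // default exchange
        ["x-dead-letter-routing-key"] = "work_queue" // expired messages land here
    });

// Publish to the delay queue; consume only from the work queue.
channel.BasicPublish("", "delay_queue", null, Encoding.UTF8.GetBytes("delayed work"));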

Related

C# Producer/Consumer queue with delays

The problem is the classic N producers (with N possibly large) and X consumers with limited resources (X is currently 4). The producers' messages come in (say via MQTT) and get queued to be processed by the consumers on a FIFO basis. The important part of the processing is that each consumer may need to send back to the producers one or more "replies", and such replies should be at least some time apart (the exact delay is not important). The classic solution, where one starts X tasks that wait on the message queue, process, and loop, is easy to implement using, for example, System.Threading.Channels:
while (!cancellationToken.IsCancellationRequested && await queue.Reader.WaitToReadAsync()) {
    while (queue.Reader.TryRead(out IncomingMessage item)) {
        // Do some processing.
        SendResponse(1);
        // Do some more processing.
        if (needsToSend2Response) {
            await Task.Delay(500);
            SendResponse(2);
        }
    }
}
This works, and works well, except that if a task needs to delay, it can't process any more messages, and that's obviously bad.
Possible solutions I thought of:
Use an outbound queue that processes the messages and ensures there is at least a minimum delay between messages sent to the same producer (see the sketch below).
Don't use a queue: just start a new Task every time a new message comes in and arbitrate the limited resources using a semaphore. This works, but I don't see how to guarantee the FIFO requirement (sometimes messages from the same producer are processed in the wrong order).
Any other ideas?
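For what it's worth, a minimal sketch of the first option: a single sender task drains an outbound channel and enforces a minimum gap per producer. OutgoingMessage, ProducerId, and Send are hypothetical names, and the 500 ms gap is taken from the code above:

var outbound = Channel.CreateUnbounded<OutgoingMessage>();
var lastSent = new Dictionary<string, DateTime>(); // producer id -> last send time
var minGap = TimeSpan.FromMilliseconds(500);

await foreach (OutgoingMessage msg in outbound.Reader.ReadAllAsync(cancellationToken))
{
    if (lastSent.TryGetValue(msg.ProducerId, out DateTime last))
    {
        TimeSpan wait = last + minGap - DateTime.UtcNow;
        if (wait > TimeSpan.Zero)
            await Task.Delay(wait, cancellationToken); // throttle replies to this producer
    }
    Send(msg);
    lastSent[msg.ProducerId] = DateTime.UtcNow;
}

Note that this single sender serializes all outgoing traffic, so one producer's delay can hold back replies to others; keeping one channel (or at least one timestamp) per producer would avoid that.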

Azure service bus client updating scheduled messages (changing their enqueue schedule time)

I need to update the enqueue times of already scheduled messages in a Service Bus queue. I tried different approaches but wasn't successful at all. I tried peeking messages and then receiving the messages I'm looking for, or at least completing them, but messages cannot be completed when we only peek at them. Is there a function to get a message by its sequence number, or do you have any other approach or solution that could solve this problem?
You cannot update a scheduled message but what you can do is cancel the schedule and re-schedule the same message. This is done with the help of its sequence number.
Below is an example of cancelling a scheduled message:
var queueName = "<queue>";
var connectionString = "<connection-string>";
var client = new ServiceBusClient(connectionString);
var sender = client.CreateSender(queueName);
// Scheduling a new message
var sequenceNumber = await sender.ScheduleMessageAsync(new ServiceBusMessage(), new DateTimeOffset(DateTime.UtcNow.AddMinutes(10)));
// Cancelling the above scheduled message
await sender.CancelScheduledMessageAsync(sequenceNumber);
Now that you have cancelled the scheduled message, you can schedule it again with the same message details. In your case, you have to peek the message first to get its exact details, such as its sequence number, message body, etc.
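A hedged sketch of that flow with Azure.Messaging.ServiceBus; how you recognize "your" message among the peeked ones (by body, MessageId, etc.) is application-specific, and the 30-minute target time is a placeholder:

ServiceBusReceiver receiver = client.CreateReceiver(queueName);
ServiceBusReceivedMessage peeked = await receiver.PeekMessageAsync();

if (peeked != null /* and it is the message you are looking for */)
{
    // Cancel the existing schedule, then schedule a copy at the new time.
    await sender.CancelScheduledMessageAsync(peeked.SequenceNumber);
    var copy = new ServiceBusMessage(peeked.Body); // re-copy any other properties you need
    await sender.ScheduleMessageAsync(copy, DateTimeOffset.UtcNow.AddMinutes(30));
}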

What is the role of "MaxAutoRenewDuration" in azure service bus?

I'm using Microsoft.Azure.ServiceBus. (doc)
I was getting an exception of:
The lock supplied is invalid. Either the lock expired, or the message
has already been removed from the queue.
With the help of these questions:
1, 2, 3,
I am able to avoid the exception by setting AutoComplete to false and by increasing Azure's queue lock duration to its maximum (from 30 seconds to 5 minutes).
_queueClient.RegisterMessageHandler(ProcessMessagesAsync,
    new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        MaxConcurrentCalls = 1,
        MaxAutoRenewDuration = TimeSpan.FromSeconds(10),
        AutoComplete = false
    });

private async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    await ProccesMessage(message);
}

private async Task ProccesMessage(Message message)
{
    // The complete should be done before the long-running process
    await _queueClient.CompleteAsync(message.SystemProperties.LockToken);
    await DoFoo(message.Body); // some long-running process
}
My questions are:
This answer suggested that the exception was raised because the lock expired before the long-running process finished, but in my case I was marking the message as complete immediately (before the long-running process), so I'm not sure why changing the lock duration in Azure made any difference. When I change it back to 30 seconds, I see the exception again.
Not sure if it's related to the question, but what is the purpose of MaxAutoRenewDuration? The official docs say it is "The maximum duration during which locks are automatically renewed." In my case I have only one app receiving from this queue, so is it even needed, given that I do not need to lock the message against another app capturing it? And why should this value be greater than the longest message lock duration?
There are a few things you need to consider.
Lock duration
Total time since a message was acquired from the broker
The lock duration is simple: for how long a single competing consumer can lease a message without having that message leased to any other competing consumer.
The total time is a bit trickier. Your callback ProcessMessagesAsync, registered to receive messages, is not the only thing involved. In the code sample you've provided, you're setting the concurrency to 1. If prefetch is configured (the queue client gets more than one message with every request), the lock duration clock on the server starts ticking for all of those messages. So even if your processing stays slightly under MaxLockDuration, the last prefetched message may have been waiting too long to get processed; even if it is then handled within less than the lock duration, it may have lost its lock, and the exception will be thrown when attempting to complete that message.
This is where MaxAutoRenewDuration comes into play. What it does is extend the message lease with the broker, "re-locking" it for the competing consumer that is currently handling the message. MaxAutoRenewDuration should be set to the maximum processing time for which a lease could possibly be required. In your sample it's set to TimeSpan.FromSeconds(10), which is extremely low. It needs to be at least longer than MaxLockDuration and adjusted to the longest period of time ProccesMessage will need to run, taking prefetching into consideration.
To help visualize it, think of the client side as having an in-memory queue where messages are stored while you perform the serial processing of the messages one by one in your handler. The lease starts the moment a message arrives from the broker into that in-memory queue. If the total time in the in-memory queue plus the processing time exceeds the lock duration, the lease is lost. Your options are:
1. Enable concurrent processing by setting MaxConcurrentCalls > 1
2. Increase MaxLockDuration
3. Reduce message prefetch (if you use it)
4. Configure MaxAutoRenewDuration to renew the lock and overcome the MaxLockDuration constraint
A note about #4: it is not a guaranteed operation, so there's a chance a call to the broker will fail and the message lock will not be extended. I recommend designing your solution to work within the lock duration limit. Alternatively, persist message information so that your processing doesn't have to be constrained by the messaging.
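As a sketch of option #4 against the question's own handler registration (Microsoft.Azure.ServiceBus), where the 10-minute value is an assumption standing in for "longer than your worst-case processing time":

_queueClient.RegisterMessageHandler(ProcessMessagesAsync,
    new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        MaxConcurrentCalls = 1,
        // Longer than MaxLockDuration and than the longest ProccesMessage run.
        MaxAutoRenewDuration = TimeSpan.FromMinutes(10),
        AutoComplete = false
    });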

I have a long running process which I call in my Service Bus Queue. I want it to continue beyond 5 minutes

I have a long-running process which performs matching between millions of records. I invoke this code via a Service Bus message; however, when my process passes the 5-minute limit, Azure redelivers the message and the already-processed records start being processed from the beginning again.
How can I avoid this?
Here is my code:
private static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    long receivedMessageTrasactionId = 0;
    try
    {
        IQueueClient queueClient = new QueueClient(serviceBusConnectionString, serviceBusQueueName, ReceiveMode.PeekLock);

        // Process the message
        receivedMessageTrasactionId = Convert.ToInt64(Encoding.UTF8.GetString(message.Body));

        // My very long-running method
        await DataCleanse.PerformDataCleanse(receivedMessageTrasactionId);

        // Get transaction and metric details
        await queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }
    catch (Exception ex)
    {
        Log4NetErrorLogger(ex);
        throw; // rethrow without resetting the stack trace ("throw ex" would reset it)
    }
}
Messages are intended for notifications, not long-running processing.
You've got a few options:
1. Receive the message and rely on the receiver's RenewLock() operation to extend the lock (see the sketch below).
2. Use the user-callback API and specify the maximum processing time, if known, via the MessageHandlerOptions.MaxAutoRenewDuration setting to auto-renew the message's lock.
3. Record that processing has started, but do not complete the incoming message. Instead, leverage the message deferral feature: send yourself a new delayed message with a reference to the deferred message's SequenceNumber. This allows you to periodically receive a "reminder" message to check whether the work is finished. If it is, complete the deferred message by its SequenceNumber; otherwise, complete the "reminder" message and send a new one. This approach requires some redesign of your architecture.
4. Similar to option 3, but offload the processing to an external process that will report its status later. There are frameworks that can help you with that, such as MassTransit or NServiceBus; the latter has a sample you can download and play with.
Note that options 1 and 2 are not guaranteed, as those are client-side initiated operations.
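A sketch of option 1, reusing the question's names and assuming Microsoft.Azure.ServiceBus.Core's MessageReceiver; the 30-second renewal interval is an assumption chosen to stay under a typical lock duration:

var receiver = new MessageReceiver(serviceBusConnectionString, serviceBusQueueName, ReceiveMode.PeekLock);
Message message = await receiver.ReceiveAsync();

long transactionId = Convert.ToInt64(Encoding.UTF8.GetString(message.Body));
Task work = DataCleanse.PerformDataCleanse(transactionId);

while (!work.IsCompleted)
{
    await Task.WhenAny(work, Task.Delay(TimeSpan.FromSeconds(30)));
    if (!work.IsCompleted)
        await receiver.RenewLockAsync(message); // client-initiated; can still fail
}
await work; // propagate any exception from the long-running job
await receiver.CompleteAsync(message.SystemProperties.LockToken);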

Windows Services communicating via MSMQs - Do I need a service bus?

I have this problem where a system contains nodes (windows services) that push messages to be processed and others that pull messages and process them.
This has been designed so that the push nodes balance the load between queues by maintaining a round-robin list of queues and rotating queues after each send. Therefore message 1 goes to queue 1, message 2 to queue 2, etc. This part has been working great so far.
On the message-pull end we designed it such that messages are retrieved in a similar way: first from queue 1, then from queue 2, etc. In theory, each pull node sits on a different machine and, in practice, so far each has listened on a single queue. But a recent requirement made us have a pull node on one machine that listens to more than one queue: one that is typically extremely busy and filled with millions of messages, and one that generally contains only a handful of messages.
The problem we are facing is that, the way we originally architected it, the pull node goes from queue to queue until a message is found. If it times out (say, after a second), it moves on to the next queue.
This doesn't work anymore, because Q1 (filled with millions of messages) will be delayed by approximately a second per message: after each pull from Q1 we ask Q2 for a message (and if it doesn't contain any, we wait for a second).
So it goes like this:
Q1 contains 10 messages and Q2 contains none
Pull node asks for a message from Q1
Q1 returns a message immediately
Pull node asks for a message from Q2
------------ Waiting for a second ------------- (Q2 is empty and the request times out)
Pull node asks for a message from Q1
Q1 returns a message immediately
Pull node asks for a message from Q2
------------ Waiting for a second ------------- (Q2 is empty and the request times out)
etc.
So this is clearly wrong.
I guess I am looking for the best architectural solution here. Message processing does not need to be as real-time as possible, but it needs to be robust and no message should ever be lost!
I would like to hear your views on this problem.
Thanks in advance,
Yannis
Maybe you could use the ReceiveCompleted event in the MessageQueue class? No need to poll then.
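A sketch of that, using System.Messaging; the queue path and ProcessWorkItem are placeholders:

var queue = new MessageQueue(@".\private$\incoming");
queue.ReceiveCompleted += (sender, e) =>
{
    var mq = (MessageQueue)sender;
    Message msg = mq.EndReceive(e.AsyncResult); // completes the async receive
    ProcessWorkItem(msg);                       // hypothetical handler
    mq.BeginReceive();                          // re-arm for the next message
};
queue.BeginReceive(); // no polling or timeout loop needed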
I ended up creating a set of threads - one for each msmq that needs to be processed. In the constructor I initialize those threads:
Storages.ForEach(queue =>
{
    Task task = Task.Factory.StartNew(() =>
    {
        LoggingManager.LogInfo("Starting a local thread to read in mime messages from queue " + queue.Name, this.GetType());
        while (true)
        {
            WorkItem mime = queue.WaitAndRetrieve();
            if (mime != null)
            {
                _Semaphore.WaitOne();                   // block if the local buffer is full
                _LocalStorage.Enqueue(mime);
                lock (_locker) Monitor.Pulse(_locker);  // wake a worker waiting for items
                LoggingManager.LogDebug("Adding no. " + _LocalStorage.Count + " item in queue", this.GetType());
            }
        }
    });
});
_LocalStorage is a thread-safe queue implementation (the ConcurrentQueue introduced in .NET 4.0).
_Semaphore is a counting semaphore that controls inserts into _LocalStorage. _LocalStorage is basically a buffer of received messages, but we don't want it to get too large while the processing nodes are busy doing work. Otherwise we could end up retrieving ALL the MSMQ messages into _LocalStorage while actively processing only 5 or so of them. That is bad both in terms of resilience (if the program terminates unexpectedly, we lose all those messages) and in terms of performance, as the memory consumption of holding all those items would be huge. So we need to control how many items we hold in the _LocalStorage buffer queue.
We pulse the threads waiting for work (see below) with a simple Monitor.Pulse whenever a new item is added to the queue.
The code that dequeues work items from the queue is as follows:
lock (_locker)
    if (_LocalStorage.Count == 0)
        Monitor.Wait(_locker);   // sleep until a reader thread pulses

WorkItem result;
if (_LocalStorage.TryDequeue(out result))
{
    _Semaphore.Release();        // free a slot in the bounded buffer
    return result;
}
return null;
I hope this can help someone to sort out a similar issue.
