RabbitMQ EventingBasicConsumer not working - C#

BACKGROUND INFO
I have a queue (for emails) in RabbitMQ, and want to build a consumer for it. The queue is used by another .NET app for sending emails to customers. I wanted the emailing logic to sit outside of the .NET app, and also to get the durability and other benefits that RabbitMQ offers.
ISSUE
The .NET app is able to publish/push emails onto the queue, but I have difficulty building the consumer! Here's my code for the consumer:
// A console app that would be turned into a service via TopShelf
public void Start()
{
    using (_connection = _connectionFactory.CreateConnection())
    {
        using (var model = _connection.CreateModel())
        {
            model.QueueDeclare(_queueName, true, false, false, null);
            model.BasicQos(0, 1, false);

            var consumer = new EventingBasicConsumer(model);
            consumer.Received += (channelModel, ea) =>
            {
                var message = (Email) ea.Body.DeSerialize(typeof(Email));
                Console.WriteLine("----- Email Processed {0} : {1}", message.To, message.Subject);
                model.BasicAck(ea.DeliveryTag, false);
            };

            var consumerTag = model.BasicConsume(_queueName, false, consumer);
        }
    }
}
The code above should be able to grab messages off the queue and process them (according to this official guide), but this isn't happening.

The problem is premature connection disposal. People often think that BasicConsume is a blocking call, but it is not: it returns almost immediately, and the very next statement disposes (closes) the channel and connection, which of course cancels your subscription. To fix this, store the connection and model in private fields and dispose of them only when you are done consuming from the queue.
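A minimal sketch of that fix, reusing the fields and helpers from your code (the Stop method is where a TopShelf service would shut things down):

private IConnection _connection;
private IModel _model;

public void Start()
{
    _connection = _connectionFactory.CreateConnection();
    _model = _connection.CreateModel();
    _model.QueueDeclare(_queueName, true, false, false, null);
    _model.BasicQos(0, 1, false);

    var consumer = new EventingBasicConsumer(_model);
    consumer.Received += (sender, ea) =>
    {
        var message = (Email) ea.Body.DeSerialize(typeof(Email));
        Console.WriteLine("----- Email Processed {0} : {1}", message.To, message.Subject);
        _model.BasicAck(ea.DeliveryTag, false);
    };

    // BasicConsume returns immediately; the consumer keeps receiving
    // for as long as _connection and _model stay open.
    _model.BasicConsume(_queueName, false, consumer);
}

public void Stop()
{
    _model?.Dispose();
    _connection?.Dispose();
}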

You said the queue is used by another .NET app; is that another consumer? If it is, can you please confirm which exchange you are using? If you want multiple consumers to each receive a copy of the message, then go ahead with a fanout exchange.
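For reference, a fanout setup in the .NET client looks roughly like this (the exchange and queue names below are made up for illustration):

// Every queue bound to a fanout exchange receives a copy of each published message.
channel.ExchangeDeclare(exchange: "emails.fanout", type: ExchangeType.Fanout, durable: true);
channel.QueueDeclare(queue: "emails.consumer1", durable: true, exclusive: false, autoDelete: false, arguments: null);
channel.QueueBind(queue: "emails.consumer1", exchange: "emails.fanout", routingKey: "");

// Publishers then send to the exchange instead of directly to a queue:
channel.BasicPublish(exchange: "emails.fanout", routingKey: "", basicProperties: null, body: bodyBytes);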

Azure Service Bus Queue Trigger function is called more than once when deployed

I have two Azure Functions. One is HTTP triggered, let's call it the API, and the other is Service Bus Queue triggered, let's call that one the Listener.
The first one (the API) puts an HTTP request into a queue and the second one (the Listener) picks it up and processes it. The Functions SDK version is 3.0.7.
I have two projects in my solution for this. One contains the Azure Functions and the other one has the services. The API, once triggered, calls a service from the other project that puts the message into the queue, and the Listener, once it receives a message, calls a service from the service project to process it.
Any long-running process?
The Listener actually performs a lightweight workflow and it all happens very quickly considering the amount of work it executes. The average time of execution is 90 seconds.
What's the queue specs?
The queue that the Listener listens to and is hosted in an Azure ServiceBus namespace has the following properties set:
Max Delivery Count: 1
Message time to live: 1 day
Auto-delete: Never
Duplicate detection window: 10 min
Message lock duration: 5 min
The API puts the HTTP request into the queue using the following method:
public async Task ProduceAsync(string queueName, string jsonMessage)
{
    jsonMessage.NotNull();
    queueName.NotNull();

    IQueueClient client = new QueueClient(Environment.GetEnvironmentVariable("ServiceBusConnectionString"), queueName, ReceiveMode.PeekLock)
    {
        OperationTimeout = TimeSpan.FromMinutes(5)
    };

    await client.SendAsync(new Message(Encoding.UTF8.GetBytes(jsonMessage)));

    if (!client.IsClosedOrClosing)
    {
        await client.CloseAsync();
    }
}
And the Listener (the Service Bus Queue triggered Azure Function) has the following code to process the message:
[FunctionName(nameof(UpdateBookingCalendarListenerFunction))]
public async Task Run([ServiceBusTrigger(ServiceBusConstants.UpdateBookingQueue, Connection = ServiceBusConstants.ConnectionStringKey)] string message)
{
    var data = JsonConvert.DeserializeObject<UpdateBookingCalendarRequest>(message);
    _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarListenerFunction)} picked up a message at {DateTime.Now}. Data: {data}");
    await _workflowHandler.HandleAsync(data);
}
The Problem
The Listener function processes the same message 3 times, and I have no idea why! I've googled and read through a few StackOverflow threads, such as this one, and it looks like everybody advises making sure the lock duration is long enough for the process to complete. Although I've set the lock duration to 5 minutes, the problem keeps occurring. I'd really appreciate any help on this.
Just adding this here in case it is helpful for others.
After some more investigation I realized that in my particular case the issue had nothing to do with Azure Functions or Service Bus. In the workflow handler that UpdateBookingCalendarListenerFunction sends messages to, I was calling some external APIs in parallel, but for some reason (unknown to me at the time) the handler code was calling the external APIs one additional time, regardless of how many records it iterated over. The code below shows how I had implemented the parallel API calls, followed by the version that calls them one by one, which eventually resolved the issue.
My original code - calling APIs in parallel
public async Task<IEnumerable<StaffMemberGraphApiResponse>> AddAdminsAsync(IEnumerable<UpdateStaffMember> admins, string bookingId)
{
    var apiResults = new List<StaffMemberGraphApiResponse>();
    var adminsToAdd = admins.Where(ad => ad.Action == "add");
    _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarWorkflowDetailHandler)} Recognized {adminsToAdd.Count()} admins to add to booking with id: {bookingId}");

    var addAdminsTasks = adminsToAdd.Select(admin => _addStaffGraphApiHandler.HandleAsync(new AddStaffToBookingGraphApiRequest
    {
        BookingId = bookingId,
        DisplayName = admin.DisplayName,
        EmailAddress = admin.EmailAddress,
        Role = StaffMemberAllowedRoles.Admin
    }));

    if (addAdminsTasks.Any())
    {
        var addAdminsTasksResults = await Task.WhenAll(addAdminsTasks);
        apiResults = _populateUpdateStaffMemberResponse.Populate(addAdminsTasksResults, StaffMemberAllowedRoles.Admin).ToList();
    }

    return apiResults;
}
And my new code, without aggregating the API calls into the addAdminsTasks object and hence without await Task.WhenAll(addAdminsTasks):
public async Task<IEnumerable<StaffMemberGraphApiResponse>> AddStaffMembersAsync(IEnumerable<UpdateStaffMember> members, string bookingId, string targetRole)
{
    var apiResults = new List<StaffMemberGraphApiResponse>();

    foreach (var item in members.Where(v => v.Action == "add"))
    {
        _telemetryClient.TrackTrace($"{nameof(UpdateBookingCalendarWorkflowDetailHandler)} Adding {targetRole} to booking: {bookingId}. data: {JsonConvert.SerializeObject(item)}");
        apiResults.Add(_populateUpdateStaffMemberResponse.PopulateAsSingleItem(await _addStaffGraphApiHandler.HandleAsync(new AddStaffToBookingGraphApiRequest
        {
            BookingId = bookingId,
            DisplayName = item.DisplayName,
            EmailAddress = item.EmailAddress,
            Role = targetRole
        }), targetRole));
    }

    return apiResults;
}
I investigated the first approach and the number of tasks exactly matched the number of items in the IEnumerable input, yet the API was called one additional time. And within _addStaffGraphApiHandler.HandleAsync there is literally nothing more than an HttpClient object issuing a POST request. Anyway, using the second version of the code resolved the issue.
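One plausible explanation, going by the code shown (this is an inference, not something I verified): addAdminsTasks is a deferred LINQ query, so addAdminsTasks.Any() enumerates it just far enough to start the first HandleAsync call, and Task.WhenAll(addAdminsTasks) then re-enumerates the query and starts every call again, which would account for exactly one extra API call. If that is the cause, materializing the query once should make the parallel version safe, roughly:

// Sketch only: ToList() forces the Select to run exactly once,
// so Any() and WhenAll() observe the same set of tasks.
var addAdminsTasks = adminsToAdd.Select(admin => _addStaffGraphApiHandler.HandleAsync(new AddStaffToBookingGraphApiRequest
{
    BookingId = bookingId,
    DisplayName = admin.DisplayName,
    EmailAddress = admin.EmailAddress,
    Role = StaffMemberAllowedRoles.Admin
})).ToList();

if (addAdminsTasks.Any())
{
    var addAdminsTasksResults = await Task.WhenAll(addAdminsTasks);
    apiResults = _populateUpdateStaffMemberResponse.Populate(addAdminsTasksResults, StaffMemberAllowedRoles.Admin).ToList();
}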

The correct way to wait for a specific message in a Service Bus Queue in a multi threaded environment (Azure Functions)

I have created a solution based on Azure Functions and Azure Service Bus, where clients can retrieve information from multiple back-end systems using a single API. The API is implemented in Azure Functions, and based on the payload of the request it is relayed to a Service Bus Queue, picked up by a client application running somewhere on-premise, and the answer is sent back by the client to another Service Bus Queue, the "reply" queue. Meanwhile, the Azure Function is waiting for a message in the reply queue, and when it finds the message that belongs to it, it sends the payload back to the caller.
The Azure Function Activity Root Id is attached to the Service Bus message as the CorrelationId. This way each running function knows which message contains the response to the caller's request.
My question is about the way I am currently retrieving the messages from the reply queue. Since multiple instances can be running at the same time, each Azure Function instance needs to get its response from the client without blocking other instances. Besides that, a timeout needs to be observed: the client is expected to respond within 20 seconds. While waiting, the Azure Function should not block other instances.
This is the code I have so far:
internal static async Task<(string, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
    log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
    var receiver = new MessageReceiver(_connectionString, queueName, ReceiveMode.PeekLock);
    try
    {
        var sw = Stopwatch.StartNew();
        while (sw.Elapsed < timeout)
        {
            var message = await receiver.ReceiveAsync(timeout.Subtract(sw.Elapsed));
            if (message != null)
            {
                if (message.CorrelationId == operationId)
                {
                    log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
                    var reply = Encoding.UTF8.GetString(message.Body);
                    var error = message.UserProperties.ContainsKey("ErrorCode");
                    await receiver.CompleteAsync(message.SystemProperties.LockToken);
                    return (reply, error);
                }
                else
                {
                    log.LogInformation("Ignoring message for operation {OperationId}", message.CorrelationId);
                }
            }
        }
        return (null, false);
    }
    finally
    {
        await receiver.CloseAsync();
    }
}
The code is based on a few assumptions. I am having a hard time trying to find any documentation to verify my assumptions are correct:
I expect subsequent calls to ReceiveAsync not to fetch messages I have previously fetched and not explicitly abandoned.
I expect new messages that arrive on the queue to be received by ReceiveAsync, even though they may have arrived after my first call to ReceiveAsync and even though there might still be other messages in the queue that I haven't received yet. E.g. there are 10 messages in the queue, I start receiving the first few messages, meanwhile new messages arrive, and after I have read the 10 pre-existing messages, I get the new messages too.
I expect that when I call ReceiveAsync a second time, the lock on the message I received with the first call is released, even though I did not explicitly Abandon that first message.
Could anyone tell me if my assumptions are correct?
Note: please don't suggest that Durable Functions were designed specifically for this, because they simply do not fulfil the requirements. Most notably, Durable Functions are invoked by a process that polls a queue with a sliding interval, so after not having any requests for a few minutes, the first new request can take a minute to start, which is not acceptable for my use case.
I would consider session enabled topics or queues for this.
The Message sessions documentation explains this in detail but the essential bit is that a session receiver is created by a client accepting a session. When the session is accepted and held by a client, the client holds an exclusive lock on all messages with that session's session ID in the queue or subscription. It will also hold exclusive locks on all messages with the session ID that will arrive later.
This makes it perfect for facilitating the request/reply pattern.
When sending the message to the queue that the on-premises handlers receive messages on, set the ReplyToSessionId property on the message to your operationId.
Then, the on-premises handlers need to set the SessionId property of the messages they send to the reply queue to the value of the ReplyToSessionId property of the message they processed.
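As a rough sketch of those two steps (using the Microsoft.Azure.ServiceBus Message type; requestSender and replySender stand in for whatever senders you already have):

// Azure Function side: send the request and tell the handler which session to reply on.
var request = new Message(Encoding.UTF8.GetBytes(jsonPayload))
{
    CorrelationId = operationId,
    ReplyToSessionId = operationId
};
await requestSender.SendAsync(request);

// On-premises handler side: route the reply into the session the function is waiting on.
var reply = new Message(Encoding.UTF8.GetBytes(replyPayload))
{
    CorrelationId = request.CorrelationId,
    SessionId = request.ReplyToSessionId
};
await replySender.SendAsync(reply);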
Then finally you can update your code to use a SessionClient and call its AcceptMessageSessionAsync() method to start listening for messages on that session.
Something like the following should work:
internal static async Task<(string?, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
    log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
    var sessionClient = new SessionClient(_connectionString, queueName, ReceiveMode.PeekLock);
    try
    {
        var receiver = await sessionClient.AcceptMessageSessionAsync(operationId);

        // message will be null if the timeout is reached
        var message = await receiver.ReceiveAsync(timeout);
        if (message != null)
        {
            log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
            var reply = Encoding.UTF8.GetString(message.Body);
            var error = message.UserProperties.ContainsKey("ErrorCode");
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
            return (reply, error);
        }
        return (null, false);
    }
    finally
    {
        await sessionClient.CloseAsync();
    }
}
Note: For all this to work, the reply queue will need Sessions enabled. This will require the Standard or Premium tier of Azure Service Bus.
Both queues and topic subscriptions support enabling sessions. The topic subscriptions allow you to mix and match session enabled scenarios as your needs arise. You could have some subscriptions with it enabled, and some without.
The queue used to send the message to the on-premises handlers does not need Sessions enabled.
Finally, when Sessions are enabled on a queue or a topic subscription, the client applications can no longer send or receive regular messages. All messages must be sent as part of a session (by setting the SessionId) and received by accepting the session.
It seems that this feature cannot be achieved at the moment. You can give your feedback here; if others have the same need, they will vote up your idea.

RabbitMQ - How do you programmatically receive messages from the message queue?

Here is what I have so far. I've been able to successfully publish a message to a queue and see that it is there via RabbitMQ's management console.
However, when I try to receive it, it does not seem to trigger the callback function at all.
Here is the relevant code.
MessageQueue mq = new MessageQueue();
mq.Receive("task_queue", (model, ea) =>
{
    var message = Encoding.UTF8.GetString(ea.Body);
    System.Diagnostics.Debug.WriteLine(message);
});
Here is my Receive function in the MessageQueue class:
public void Receive(string queueName, EventHandler<BasicDeliverEventArgs> onReceived)
{
    using (IConnection connection = GetConnection(myLocalhost))
    {
        using (IModel channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: queueName, durable: true, exclusive: false, autoDelete: false, arguments: null);
            // Don't dispatch a new message to a consumer until it has processed and acknowledged the previous one.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel); // non-blocking
            // Set the event to be executed when receiving a message.
            consumer.Received += onReceived;
            // Register a consumer to listen to a specific queue.
            channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);
        }
    }
}
When I try to run the Receive function while there is something in the queue, nothing is printed to my output window.
Can anyone help me on this?
UPDATE
I took the code in the Receive function and placed it in the same file as the code that calls it. Still no luck, which I think rules out a scoping issue. I also tried setting the Received field to an actual event delegate (instead of the onReceived handler) and had it call another function in which I put a breakpoint. That function is never hit, leading me to believe that my event delegate callback is never being called at all.
I'm at a loss as to why this is. The message is still being consumed from the queue as the RabbitMQ management console shows me. I've also tried renaming the queue to something else to make sure no other phantom services are consuming from the same queue. No cigar.
UPDATE 2
I tried extracting the two using statements and calling my Receive function inside them in order to keep the scope, but that didn't work either. I even moved the code from the whole Receive block out into a main function, and now it doesn't even consume from the queue.
Looking at your code above, you have a pretty straightforward problem.
The instant after you call channel.BasicConsume, the whole thing (connection/channel) goes out of scope and is immediately disposed/destroyed via the using statement.
To prevent this from happening, you need to have an infinite loop immediately following the channel.BasicConsume, with appropriate logic of course to exit when you shut down the program.
while (_isRunning && channel.IsOpen)
{
    Thread.Sleep(1);
    // Other application logic here; e.g. periodically break out of the
    // loop to prevent unacknowledged messages from accumulating in the system
    // (if you don't, random effects will guarantee that they eventually build up).
}
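If the sleep loop feels awkward, an alternative sketch (reusing GetConnection and myLocalhost from your class; the _shutdown field is an illustrative addition) is to keep the connection and channel alive for the consumer's lifetime and block on a wait handle until shutdown:

private static readonly ManualResetEventSlim _shutdown = new ManualResetEventSlim(false);

public void Receive(string queueName, EventHandler<BasicDeliverEventArgs> onReceived)
{
    // Deliberately not wrapped in using blocks: the connection and channel
    // must stay open for the consumer to keep firing.
    var connection = GetConnection(myLocalhost);
    var channel = connection.CreateModel();
    channel.QueueDeclare(queue: queueName, durable: true, exclusive: false, autoDelete: false, arguments: null);
    channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += onReceived;
    channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

    _shutdown.Wait();   // released when shutdown code calls _shutdown.Set()
    channel.Close();
    connection.Close();
}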

RabbitMQ throws Shared Queue closed error

We have been using RabbitMQ as the messaging service in the project. We push a message onto a queue, which is received by the message consumer, and the consumer then tries to make an entry in the database. Once the values are entered into the database we send a positive acknowledgement back to the RabbitMQ server; if not, we send a negative acknowledgement.
I have created the message consumer as a Windows service. A message was successfully published and well received by the message consumer (the entry was made in the table), but with an exception log saying "Shared Queue closed".
Please find the code block.
while (true)
{
    try
    {
        if (!Connection.IsOpen || !Channel.IsOpen)
        {
            CreateConnection(existConnectionConfig, QueueName);
            consumer = new QueueingBasicConsumer(Channel);
            consumerTag = Channel.BasicConsume(QueueName, false, consumer);
        }

        BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        IBasicProperties props = e.BasicProperties;
        byte[] body = e.Body;
        bool ack = onMessageReceived(body);

        if (ack == true)
        {
            Channel.BasicAck(e.DeliveryTag, false);
        }
        else
        {
            Channel.BasicNack(e.DeliveryTag, false, true);
        }
    }
    catch (Exception ex)
    {
        // Logged the exception in a text file, where I could see the
        // message "Shared queue closed".
    }
}
I have searched the net too but couldn't figure out what the problem is. It would be helpful if anyone is able to help me out.
Thanks in advance,
Selva
In answer to your question, I have experienced the same problem when my web client reset the connection due to app pool recycling, or when the connection was dropped for some other underlying reason outside your control. You may need to build in a retry mechanism to cope with this, along the lines of the sketch below.
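A rough shape for such a retry, building on the loop from the question (_isRunning, the exception type, the back-off interval and the Log calls are illustrative placeholders):

while (_isRunning)
{
    try
    {
        if (Connection == null || !Connection.IsOpen || Channel == null || !Channel.IsOpen)
        {
            CreateConnection(existConnectionConfig, QueueName);
            consumer = new QueueingBasicConsumer(Channel);
            consumerTag = Channel.BasicConsume(QueueName, false, consumer);
        }

        BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        if (onMessageReceived(e.Body))
            Channel.BasicAck(e.DeliveryTag, false);
        else
            Channel.BasicNack(e.DeliveryTag, false, true);
    }
    catch (EndOfStreamException ex)
    {
        // The older client's QueueingBasicConsumer surfaces a closed shared queue
        // this way; back off briefly, then let the loop rebuild the connection.
        Log(ex);
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
    catch (Exception ex)
    {
        Log(ex);
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}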
You might want to look at MassTransit. I have used this with RabbitMQ and it makes things a lot easier by effectively providing a management layer over RabbitMQ. MassTransit takes away the headache of retry mechanisms - see Connection management. It also provides a nice multi-threaded concurrent consumer configuration.
This has the bonus of making your implementation more portable - you could easily switch to MSMQ should the requirement come along.

How do I receive Topic messages from a SignalR hub?

I have successfully created an Azure application that sends DbTransactions to a Service Bus Queue and then enqueues a 'notifying message' to a Service Bus Topic for other clients to monitor (so they can receive the updates automatically).
Now, I want to use SignalR to monitor for and receive the SubscriptionClient messages, and I have test code that works just fine on its own.
I have found many examples of sending messages to an Azure Queue (that is easy), and I have the code to receive a BrokeredMessage from a SubscriptionClient. However, I cannot get SignalR to continuously monitor my Distribute method.
How do I get SignalR to monitor the Topic?
CODE BEHIND: (updated)
public void Dequeue()
{
    SubscriptionClient subscription = GetTopicSubscriptionClient(TOPIC_NAME, SUBSCRIPTION_NAME);
    subscription.Receive();
    BrokeredMessage message = subscription.Receive();

    if (message != null)
    {
        try
        {
            var body = message.GetBody<string>();
            var contextXml = message.Properties[PROPERTIES_CONTEXT_XML].ToString();
            var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();

            Console.WriteLine("Body: " + body);
            Console.WriteLine("MessageID: " + message.MessageId);
            Console.WriteLine("Custom Property [Transaction]: " + transaction);

            var context = XmlSerializer.Deserialize<Person>(contextXml);
            message.Complete();
            Clients.All.distribute(context, transaction);
        }
        catch (Exception ex)
        {
            // Manage later
        }
    }
}
CLIENT-SIDE CODE:
// TEST: Hub - GridUpdaterHub
var hubConnection = $.hubConnection();
var gridUpdaterHubProxy = hubConnection.createHubProxy('gridUpdaterHub');

gridUpdaterHubProxy.on('hello', function (message) {
    console.log(message);
});

// I want this automated
gridUpdaterHubProxy.on('distribute', function (context, transaction) {
    console.log('It is working');
});

hubConnection.start().done(function () {
    // This is successful
    gridUpdaterHubProxy.invoke('hello', "Hello");
});
I would not do it like that. Your code is consuming and retaining ASP.NET thread pool threads for each incoming connection, so if you have many clients you are not scaling well at all. I do not know the internals of SignalR that deeply, but I'd guess that your never-ending method is preventing SignalR from invoking your callbacks, because that requires the server method to end properly. Just try changing while(true) to something that exits after, say, 3 messages in the queue; you should be called back 3 times, and probably those calls will all happen together when your method exits.
If that is right, then you can move to something different, like dedicating a specific thread to consuming the queue and having the callbacks invoked from there using GlobalHost.ConnectionManager.GetHubContext. Probably better, you could try a separate process consuming the queue and doing an HTTP POST to your web app, which in turn broadcasts to the clients.
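A sketch of the dedicated-thread option (GridUpdaterHub and StartTopicPump are assumed names; the Service Bus calls mirror the ones already in the question):

// Started once at application startup; broadcasts outside of any hub method.
public static void StartTopicPump(SubscriptionClient subscription)
{
    Task.Run(() =>
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<GridUpdaterHub>();
        while (true)
        {
            BrokeredMessage message = subscription.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;   // receive timed out, poll again

            try
            {
                var contextXml = message.Properties[PROPERTIES_CONTEXT_XML].ToString();
                var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();
                var context = XmlSerializer.Deserialize<Person>(contextXml);
                message.Complete();
                hubContext.Clients.All.distribute(context, transaction);
            }
            catch (Exception)
            {
                message.Abandon();   // give the message back for redelivery
            }
        }
    });
}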
