I am using the Google Pub/Sub messaging service to pull published messages, using the code sample from here (https://cloud.google.com/pubsub/docs/quickstart-client-libraries#receive_messages):
SubscriptionName subscriptionName = SubscriptionName.FromProjectSubscription(projectId, subscriptionId);
SubscriberClient subscriber = await SubscriberClient.CreateAsync(subscriptionName);
Task startTask = subscriber.StartAsync((PubsubMessage message, CancellationToken cancel) =>
{
//Do something
return Task.FromResult(SubscriberClient.Reply.Ack);
});
My question is: is it somehow possible to get a notification that the connection has been broken? Right now, if I disconnect the internet and then reconnect, my app simply stops receiving messages. It would be nice if some kind of exception occurred in such a scenario. Any ideas on how to solve this would be highly appreciated.
StartAsync's returned Task will complete when an unrecoverable error occurs (docs). If your subscriber remains disconnected for an extended period of time, I would expect the Task to notify you.
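For example, you could await that returned Task and treat its completion (or fault) as the signal to recreate the subscriber. A minimal sketch, assuming you want to restart on failure; the retry loop and 10-second delay are illustrative, not part of the client library:
while (true)
{
    SubscriberClient subscriber = await SubscriberClient.CreateAsync(subscriptionName);
    Task startTask = subscriber.StartAsync((PubsubMessage message, CancellationToken cancel) =>
    {
        // Do something with the message
        return Task.FromResult(SubscriberClient.Reply.Ack);
    });

    try
    {
        // Completes when StopAsync is called or an unrecoverable error occurs.
        await startTask;
        break; // clean shutdown was requested via subscriber.StopAsync(...)
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Subscriber faulted: {ex.Message}. Recreating in 10 seconds...");
        await Task.Delay(TimeSpan.FromSeconds(10));
    }
}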
Related
Let's imagine that we have a queue named "NotificationQ" and a consumer that gets a task from that queue and sends emails to customers.
The emailing process sends an email via the Mailgun API. That API request does not return 200 every time (the reason is not important). In that case I need to tell RabbitMQ that the task failed. I know there is a feature called autoAck, but if a request fails, how does the RabbitMQ client package know that it failed?
Do I manually trigger an ack (or nack) to say that the request failed?
I am using the https://www.nuget.org/packages/RabbitMQ.Client/ package to handle RabbitMQ tasks.
var channel = RabbitPrepareFactory.GetConnectionFactory();
channel.BasicQos(0, 1, false);
var notificationPack = channel.BasicGet("notification", true);
var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);
var isEmailSendSuccessful = SomeFakeEmailSendFunctions(task.Email);
if (!isEmailSendSuccessful)
{
//something to tell RabbitMQ that the task failed and should not be deleted from the queue
.......
}
I think this could be useful. I would use something like a dead letter exchange:
https://www.rabbitmq.com/dlx.html
So every time a message fails for whatever reason, you push the message to that queue.
Once your message has been received by your consumer and the operation has finished, that message is acknowledged so that other consumers will not take an already processed message.
[Edit]
I don't think it's a good idea to process a message from a queue and afterwards leave it there if something happens in your back end. If you implement the dead letter queue, you could try to reprocess those messages at some later time (maybe with a cron job), or if you really don't want dead letter queues, you could implement a retry mechanism in your client. Polly could work very well in your case: https://github.com/App-vNext/Polly
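To answer the literal ack/nack question: fetch the message with autoAck set to false, then acknowledge or negatively acknowledge it yourself depending on the outcome. A rough sketch based on the snippet in the question (the queue name, message type, and SomeFakeEmailSendFunctions are taken from that example; whether to requeue or dead-letter on failure is your choice):
var channel = RabbitPrepareFactory.GetConnectionFactory();
channel.BasicQos(0, 1, false);

// autoAck: false, so the message stays unacknowledged until we decide what to do with it
var notificationPack = channel.BasicGet("notification", false);
if (notificationPack != null)
{
    var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
    var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);

    var isEmailSendSuccessful = SomeFakeEmailSendFunctions(task.Email);
    if (isEmailSendSuccessful)
    {
        // Positive ack: RabbitMQ removes the message from the queue.
        channel.BasicAck(notificationPack.DeliveryTag, false);
    }
    else
    {
        // Negative ack with requeue: false routes the message to the configured
        // dead letter exchange (if any); requeue: true puts it back on the queue.
        channel.BasicNack(notificationPack.DeliveryTag, false, false);
    }
}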
I have created a solution based on Azure Functions and Azure Service Bus, where clients can retrieve information from multiple back-end systems using a single API. The API is implemented in Azure Functions, and based on the payload of the request it is relayed to a Service Bus Queue, picked up by a client application running somewhere on-premise, and the answer sent back by the client to another Service Bus Queue, the "reply-" queue. Meanwhile, the Azure Function is waiting for a message in the reply-queue, and when it finds the message that belongs to it, it sends the payload back to the caller.
The Azure Function Activity Root Id is attached to the Service Bus Message as the CorrelationId. This way each running function knows which message contains the response to the callers request.
My question is about the way I am currently retrieving the messages from the reply queue. Since multiple instances can be running at the same time, each Azure Function instance needs to get its response from the client without blocking other instances. Besides that, a timeout needs to be observed: the client is expected to respond within 20 seconds. While waiting, the Azure Function should not be blocking other instances.
This is the code I have so far:
internal static async Task<(string, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
var receiver = new MessageReceiver(_connectionString, queueName, ReceiveMode.PeekLock);
try
{
var sw = Stopwatch.StartNew();
while (sw.Elapsed < timeout)
{
var message = await receiver.ReceiveAsync(timeout.Subtract(sw.Elapsed));
if (message != null)
{
if (message.CorrelationId == operationId)
{
log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
var reply = Encoding.UTF8.GetString(message.Body);
var error = message.UserProperties.ContainsKey("ErrorCode");
await receiver.CompleteAsync(message.SystemProperties.LockToken);
return (reply, error);
}
else
{
log.LogInformation("Ignoring message for operation {OperationId}", message.CorrelationId);
}
}
}
return (null, false);
}
finally
{
await receiver.CloseAsync();
}
}
The code is based on a few assumptions. I am having a hard time trying to find any documentation to verify my assumptions are correct:
I expect subsequent calls to ReceiveAsync not to fetch messages I have previously fetched and not explicitly abandoned.
I expect new messages that arrive on the queue to be received by ReceiveAsync, even though they may have arrived after my first call to ReceiveAsync and even though there might still be other messages in the queue that I haven't received yet either. E.g. there are 10 messages in the queue, I start receiving the first few message, meanwhile new messages arrive, and after I have read the 10 pre-existing messages, I get the new messages too.
I expect that when I call ReceiveAsync a second time, the lock is released from the message I received with the first call, even though I did not explicitly Abandon that first message.
Could anyone tell me if my assumptions are correct?
Note: please don't suggest that Durable Functions were designed specifically for this, because they simply do not fulfil the requirements. Most notably, Durable Functions are invoked by a process that polls a queue with a sliding interval, so after not having any requests for a few minutes, the first new request can take a minute to start, which is not acceptable for my use case.
I would consider session enabled topics or queues for this.
The Message sessions documentation explains this in detail but the essential bit is that a session receiver is created by a client accepting a session. When the session is accepted and held by a client, the client holds an exclusive lock on all messages with that session's session ID in the queue or subscription. It will also hold exclusive locks on all messages with the session ID that will arrive later.
This makes it perfect for facilitating the request/reply pattern.
When sending the message to the queue that the on-premises handlers receive messages on, set the ReplyToSessionId property on the message to your operationId.
Then, the on-premises handlers need to set the SessionId property of the messages they send to the reply queue to the value of the ReplyToSessionId property of the message they processed.
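As a rough sketch of both sending sides (assuming the same Microsoft.Azure.ServiceBus library as the code in the question; the queue names and payload variables are placeholders):
// Azure Function side: tag the request with the operation id,
// so the handler knows which session to reply to.
var requestSender = new MessageSender(_connectionString, "requests");
await requestSender.SendAsync(new Message(Encoding.UTF8.GetBytes(requestPayload))
{
    ReplyToSessionId = operationId
});

// On-premises handler side: copy ReplyToSessionId into SessionId on the reply,
// so only the session receiver that accepted this session will see it.
var replySender = new MessageSender(_connectionString, "replies");
await replySender.SendAsync(new Message(Encoding.UTF8.GetBytes(replyPayload))
{
    SessionId = requestMessage.ReplyToSessionId,
    CorrelationId = requestMessage.ReplyToSessionId
});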
Then finally you can update your code to use a SessionClient and then use the 'AcceptMessageSessionAsync()' method on that to start listening for messages on that session.
Something like the following should work:
internal static async Task<(string?, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
var sessionClient = new SessionClient(_connectionString, queueName, ReceiveMode.PeekLock);
try
{
var receiver = await sessionClient.AcceptMessageSessionAsync(operationId);
// message will be null if the timeout is reached
var message = await receiver.ReceiveAsync(timeout);
if (message != null)
{
log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
var reply = Encoding.UTF8.GetString(message.Body);
var error = message.UserProperties.ContainsKey("ErrorCode");
await receiver.CompleteAsync(message.SystemProperties.LockToken);
return (reply, error);
}
return (null, false);
}
finally
{
await sessionClient.CloseAsync();
}
}
Note: For all this to work, the reply queue will need Sessions enabled. This will require the Standard or Premium tier of Azure Service Bus.
Both queues and topic subscriptions support enabling sessions. The topic subscriptions allow you to mix and match session enabled scenarios as your needs arise. You could have some subscriptions with it enabled, and some without.
The queue used to send the message to the on-premises handlers does not need Sessions enabled.
Finally, when Sessions are enabled on a queue or a topic subscription, the client applications can no longer send or receive regular messages. All messages must be sent as part of a session (by setting the SessionId) and received by accepting the session.
It seems that this feature cannot be achieved at the moment.
You can give your feedback here; if others have the same demand, they will vote up your idea.
I have a gRPC service defined like below:
service ConsumerService
{
rpc Consume (SubscriptionRequest) returns (stream Message);
}
Client looks like this:
var consumer = new ConsumerService.ConsumerServiceClient(<channel>);
AsyncServerStreamingCall<Message> proxy = consumer.ConsumeAsync(subRequest);
while (!gracefulShutdown && await proxy.ResponseStream.MoveNext(cts.Token))
{
    var message = proxy.ResponseStream.Current;
    // process message
}
// todo: tell the server that client is not going to read responses anymore..
await proxy.ResponseStream.CompleteAsync(); // I would like to have method like this
I need the client to gracefully stop reading responses when gracefulShutdown is set to true.
How do I do this?
If I just stop reading and close the channel, the server considers it an abort.
As far as I could find on the internet, there is no way to do it gracefully. I expected to use a CancellationTokenSource and just use try/catch. It looks like this is intended behaviour. Comment from the gRPC team:
Cancellations are to be used for forcing termination
of outstanding calls due to unexpected circumstances, you shouldn't rely on
them for handling the basic workflow in your application
Use a catch block like this to check for the exception:
catch (RpcException e) when (e.Status.StatusCode == StatusCode.Cancelled)
{
Console.WriteLine("Streaming was cancelled from the client!");
}
More info on GitHub: C# Clean Cancellation
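Putting that together, a minimal sketch for the client (using the types and method names from the question; the shutdown handling and logging are illustrative):
var cts = new CancellationTokenSource();
// Elsewhere, when graceful shutdown is requested: cts.Cancel();

using var proxy = consumer.ConsumeAsync(subRequest, cancellationToken: cts.Token);
try
{
    while (await proxy.ResponseStream.MoveNext(cts.Token))
    {
        var message = proxy.ResponseStream.Current;
        // process message...
    }
}
catch (RpcException e) when (e.Status.StatusCode == StatusCode.Cancelled)
{
    // Raised when cts.Cancel() aborts the call; this is the expected shutdown path.
    Console.WriteLine("Streaming was cancelled from the client!");
}
catch (OperationCanceledException)
{
    // MoveNext can also surface the cancellation this way, depending on timing.
}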
We have been using RabbitMQ as the messaging service in the project. We push a message into a queue, it is received by the message consumer, and we try to make an entry in the database. Once the values are entered into the database, we send a positive acknowledgement back to the RabbitMQ server; if not, we send a negative acknowledgement.
I have created the message consumer as a Windows service. The message was successfully received by the consumer and entered into the table, but with an exception logged: "Shared Queue closed".
Please find the code block.
while (true)
{
try
{
if (!Connection.IsOpen || !Channel.IsOpen)
{
CreateConnection(existConnectionConfig, QueueName);
consumer = new QueueingBasicConsumer(Channel);
consumerTag=Channel.BasicConsume(QueueName,false,consumer);
}
BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
IBasicProperties props = e.BasicProperties;
byte[] body = e.Body;
bool ack = onMessageReceived(body);
if (ack == true)
{
Channel.BasicAck(e.DeliveryTag, false);
}
else
Channel.BasicNack(e.DeliveryTag, false, true);
}
catch (Exception ex)
{
//Logged the exception in text file where i could see the
//message as "Shared queue closed"
}
}
I have searched the net too but couldn't figure out what the problem is. It would be helpful if anyone is able to help me out.
Thanks in advance,
Selva
In answer to your question, I have experienced the same problem when my web client has reset the connection due to app pool recycling, or when the connection has been dropped for some other underlying reason beyond your control. You may need to build in a retry mechanism to cope with this.
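If you stay on the raw RabbitMQ.Client, one simple option is the client's built-in automatic connection recovery (a sketch, assuming a reasonably recent RabbitMQ.Client version; the host name is a placeholder):
var factory = new ConnectionFactory
{
    HostName = "localhost", // placeholder
    // Reconnect automatically after network failures and restore channels/consumers.
    AutomaticRecoveryEnabled = true,
    TopologyRecoveryEnabled = true,
    NetworkRecoveryInterval = TimeSpan.FromSeconds(10)
};

using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();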
You might want to look at MassTransit. I have used this with RabbitMQ and it makes things a lot easier by effectively providing a management layer over RabbitMQ. MassTransit takes away the headache of retry mechanisms - see Connection management. It also provides a nice multi-threaded concurrent consumer configuration.
This has the bonus of your implementation being more portable - you could easily change things to MSMQ should the requirement come along.
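For illustration, a minimal MassTransit consumer with a retry policy might look like the following (a sketch assuming a MassTransit 7-style API; the message type, queue name, and retry values are placeholders):
public class MyMessage
{
    public string Text { get; set; }
}

public class MyMessageConsumer : IConsumer<MyMessage>
{
    public Task Consume(ConsumeContext<MyMessage> context)
    {
        // Do the database work here; throwing triggers the retry policy and,
        // once retries are exhausted, MassTransit moves the message to an error queue.
        Console.WriteLine($"Received: {context.Message.Text}");
        return Task.CompletedTask;
    }
}

var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("rabbitmq://localhost"); // placeholder broker address

    cfg.ReceiveEndpoint("my-queue", e =>
    {
        e.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(10)));
        e.Consumer<MyMessageConsumer>();
    });
});

await busControl.StartAsync();
// ... on shutdown:
await busControl.StopAsync();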
while (true)
{
BasicDeliverEventArgs e = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
IBasicProperties properties = e.BasicProperties;
byte[] body = e.Body;
Console.WriteLine("Recieved Message : " + Encoding.UTF8.GetString(body));
ch.BasicAck(e.DeliveryTag, false);
}
This is what we do when we retrieve messages by subscription. We use a while loop because we want the consumer to listen continuously. What if I want to make this event-based, so that the consumer consumes a message only when a new message arrives in the queue, or on a similar event?
Use RabbitMQ.Client.Events.EventingBasicConsumer for an event-driven consumer instead of a blocking one.
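A minimal sketch, reusing the channel (ch) from the question; the queue name is a placeholder, and this assumes a reasonably recent RabbitMQ.Client version:
var consumer = new EventingBasicConsumer(ch);
consumer.Received += (sender, e) =>
{
    byte[] body = e.Body.ToArray();
    Console.WriteLine("Received Message : " + Encoding.UTF8.GetString(body));
    ch.BasicAck(e.DeliveryTag, false);
};

// autoAck is false so we can ack explicitly in the handler above.
ch.BasicConsume("my-queue", false, consumer);
// No while loop needed; the Received event fires whenever a message arrives.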
You're currently blocking on the Consumer.Queue.Dequeue(). If I understand your question correctly, you want to asynchronously consume messages.
The standard way of doing this would be to write your own IBasicConsumer (probably by subclassing DefaultBasicConsumer) and set it as the consumer for the channel.
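For instance, a sketch of such a subclass (assuming a recent RabbitMQ.Client version, where the body arrives as ReadOnlyMemory&lt;byte&gt;; older versions pass byte[]; the handler body is a placeholder):
public class MyConsumer : DefaultBasicConsumer
{
    private readonly IModel _channel;

    public MyConsumer(IModel channel) : base(channel)
    {
        _channel = channel;
    }

    public override void HandleBasicDeliver(string consumerTag, ulong deliveryTag,
        bool redelivered, string exchange, string routingKey,
        IBasicProperties properties, ReadOnlyMemory<byte> body)
    {
        // Keep this short: no synchronous AMQP calls and no long-running work here.
        Console.WriteLine("Received Message : " + Encoding.UTF8.GetString(body.ToArray()));
        _channel.BasicAck(deliveryTag, false);
    }
}

// Usage: ch.BasicConsume("my-queue", false, new MyConsumer(ch));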
The trouble with this is that you have to be very careful about what you do in IBasicConsumer.HandleBasicDeliver. If you use any synchronous AMQP methods, such as basic.publish, you'll get a deadlock. If you do anything that takes a long time, you'll run into other problems.
If you do need synchronous methods or long-running actions, what you're doing is about the right way to do it. Have a look at Subscription; it's an IBasicConsumer that consumes messages and puts them on a queue for you.
If you need any more help, a great place to ask is the rabbitmq-discuss mailing list.
I had this problem and could not find an answer, so I created a demonstration project to have the RabbitMQ subscription raise .NET events when a message is received. The subscription runs on its own thread, leaving the UI (in my case) free to do its thing.
I amusingly call my project RabbitEars, as it listens out for messages from the mighty RabbitMQ.
I intend to share this with the RabbitMQ site, so if they think it's of value they can include a link / code in their examples.
Check it out at http://rabbitears.codeplex.com/
Thanks
Simon