Stop receiving messages from SubscriptionClient - c#

How do you stop receiving messages from a subscription client set up as an event-driven message pump? I currently have some code that works; however, when I run two tests consecutively the second one breaks. I'm fairly sure messages are still being pulled off the subscription by the first instance I created.
http://msdn.microsoft.com/en-us/library/dn130336.aspx
OnMessageOptions options = new OnMessageOptions();
options.AutoComplete = true; // Indicates if the message-pump should call complete on messages after the callback has completed processing.
options.MaxConcurrentCalls = 1; // Indicates the maximum number of concurrent calls to the callback the pump should initiate
options.ExceptionReceived += LogErrors; // Enables you to be notified of any errors encountered by the message pump
// Start receiving messages
Client.OnMessage((receivedMessage) => // Initiates the message pump and callback is invoked for each message that is received. Calling Close() on the client will stop the pump.
{
// Process the message
Trace.WriteLine("Processing", receivedMessage.SequenceNumber.ToString());
}, options);

You need to do two things.
First, call subscriptionClient.Close(), which will eventually (but not immediately) stop the message pump.
Second, on your message received callback, check if the client is closed, like so:
if (subscriptionClient.IsClosed)
return;
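Putting both steps together, a minimal sketch might look like the following (assuming a SubscriptionClient field named subscriptionClient created elsewhere; AutoComplete is turned off here so the callback can skip work on a closed client, and the names are illustrative):
OnMessageOptions options = new OnMessageOptions
{
    AutoComplete = false,      // complete manually so a closed client can skip the message
    MaxConcurrentCalls = 1
};
options.ExceptionReceived += (sender, args) => Trace.WriteLine(args.Exception);
subscriptionClient.OnMessage(receivedMessage =>
{
    // Close() only stops the pump eventually, so guard against late deliveries.
    if (subscriptionClient.IsClosed)
        return;
    Trace.WriteLine("Processing", receivedMessage.SequenceNumber.ToString());
    receivedMessage.Complete();
}, options);
// Later, e.g. in test teardown, stop pulling messages:
subscriptionClient.Close();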

You can call SubscriptionClient.Close() to stop further messages from being processed.
Also indicated in the comment in the code:
Calling Close() on the client will stop the pump.

I looked and saw the exact same behavior. The message pump does NOT stop trying to process messages even when you close the subscription client. If your message processing handler attempts to call .Complete or .Abandon on the message, though, it will throw an exception because the client is closed. We need a way to stop the pump.

Related

C# - ActiveMQ - Task in Consumer

Suppose we have a queue in ActiveMQ, a client that sends messages (producer) to that queue and a server that gets the messages (consumer) from the queue.
On the server side the consumer has a message listener, something like:
consumer.Listener += ConsumerOnListener;
and the implementation of ConsumerOnListener looks like the following:
private void ConsumerOnListener(IMessage message)
{
var textMessage = message as ITextMessage;
// validate textMessage
// more code here... e.g. save to database, logging etc. (part-a)
Task.Factory.StartNew(() =>
{
// do something else here (part-b)
});
}
The main idea behind the above is not to wait for part-b to be executed before processing the next message. Imagine that part-b does something completely on its own which may succeed or not (fire-and-forget).
So, the question here is whether it is OK or not to use Tasks inside ConsumerOnListener. Will this somehow "block" the queue?
Assuming that the Task is asynchronous, it shouldn't block either the execution of the listener or the queue itself. Typically concurrency in use-cases like this is increased simply by increasing the number of consumers/listeners, but it's also valid to have listeners kick off their own async threads, tasks, etc.
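As an illustration only, a sketch of the listener with the fire-and-forget part wired up to observe failures could look like this (SaveToDatabase, DoPartB and Log are hypothetical placeholders for part-a and part-b; IMessage/ITextMessage come from Apache.NMS):
private void ConsumerOnListener(IMessage message)
{
    var textMessage = message as ITextMessage;
    if (textMessage == null)
        return;
    // part-a runs synchronously on the listener thread, so the next message is not
    // dispatched to this consumer until it finishes.
    SaveToDatabase(textMessage.Text);                    // hypothetical helper
    // part-b is fire-and-forget; attach a continuation so failures are at least logged
    // instead of becoming unobserved task exceptions.
    Task.Run(() => DoPartB(textMessage.Text))            // hypothetical helper
        .ContinueWith(t => Log.Error(t.Exception),       // hypothetical logger
                      TaskContinuationOptions.OnlyOnFaulted);
}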

Begin reading(dequeue) from MSMQ then stop the dequeue process and then again start in C#

As per the current implementation, C# code adds messages to MSMQ, and after a particular operation is completed I need to dequeue and start processing them. The following code is used:
_queue.ReceiveCompleted += new ReceiveCompletedEventHandler(RecieveQ_ReceiveCompleted);
_queue.BeginReceive();
However, partway through the dequeue process I want to stop it and then start it again sometime later, depending on user input. I came across the EndReceive(IAsyncResult asyncResult) method, but could not implement it correctly.
BeginReceive() and EndReceive() are not for starting and stopping the queue like turning a tap (or faucet) on and off.
In MSMQ, when you call BeginReceive(), a second thread is spawned which waits for a message to enter the queue. When a message arrives, it calls your RecieveQ_ReceiveCompleted event handler.
Inside your event handler, you then call EndReceive() to fetch the item from the queue, and then do your processing. Note that if another item arrives in the queue, it will not be processed.
If you want to process queue items repeatedly, you have to call BeginReceive() again from within your event handler.
If you want to pause the processing after each item to wait for a signal from the user to process the next item, you will need to signal from the event handler that an item has been processed, and either the event handler or main thread will need to call BeginReceive() again.
Depending on your situation, you might find it easier to use the Receive() method instead of the asynchronous version to better control your order of operations.
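To illustrate the "only re-arm BeginReceive() when you want more messages" idea, here is a rough sketch of a pausable receiver (System.Messaging assumed; the class layout, queue path and names are illustrative, and Pause/Resume are assumed not to race with each other):
private readonly MessageQueue _queue = new MessageQueue(@".\private$\myQueue");
private volatile bool _paused;
public void Start()
{
    _queue.ReceiveCompleted += OnReceiveCompleted;
    _paused = false;
    _queue.BeginReceive();              // arms exactly one asynchronous receive
}
public void Pause()
{
    _paused = true;                     // the in-flight receive completes; no new one is armed
}
public void Resume()
{
    _paused = false;
    _queue.BeginReceive();              // re-arm the pump
}
private void OnReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
{
    Message message = _queue.EndReceive(e.AsyncResult);
    // ... process the message here ...
    if (!_paused)
        _queue.BeginReceive();          // keep receiving only while not paused
}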
References: https://msdn.microsoft.com/en-us/library/43h44x53(v=vs.110).aspx#Anchor_4
In asynchronous processing, you use BeginReceive to raise the ReceiveCompleted event when a message has been removed from the queue.
The MessageQueue can then access the message by calling EndReceive(IAsyncResult).
Once an asynchronous operation completes, you can call BeginPeek or BeginReceive again in the event handler to keep receiving notifications.
Hope this helps

Pop receipt changed on update message

Our application dequeues a message from an Azure queue and makes it invisible for a given period of time.
The worker then processes it and, when all the work is done, deletes it from the queue.
But sometimes the delete fails with a 404 (not found) error. The problem might be that the message's pop receipt has changed.
When the message is dequeued, a separate thread also runs and extends the message's invisibility to prevent it from being picked up by another consumer. But that thread calls UpdateMessage, which changes the pop receipt.
Because UpdateMessage and DeleteMessage might run at the same time, DeleteMessage sometimes fails because its PopReceipt is no longer valid.
Is there any way to avoid PopReceipt change on UpdateMessage?
Code sample:
TimerCallback extenderHandler = new TimerCallback(async state =>
{
try
{
var oldNextVisibleTime = queueMessage.NextVisibleTime.Value;
// extend message lease to prevent it from being picked by another worker instances
await returnMessage();
}
catch (Exception ex)
{
// NOTE: exceptions on Timer are not propagated to main thread; the error is only logged, because operation will be retried;
}
});
// start message extender timer which extends message lease time when it's nearing its max hold time, timer runs until it's disposed
using (var messageExtenderTimer = new System.Threading.Timer(extenderHandler, null, 0, (int)MessageLeaseCheckInterval.TotalMilliseconds))
{
processMessage();
}
In the returnMessage method, UpdateMessageAsync from Microsoft.WindowsAzure.Storage.Queue is called.
In the processMessage method the processing itself is done, and at the end the message is deleted using the DeleteMessage method from Microsoft.WindowsAzure.Storage.Queue.
Sometimes UpdateMessageAsync fails and sometimes DeleteMessage does. Because of that, I suspect that when these two concurrent threads make changes to the message, the message is changed in the queue before the PopReceipt is updated on the message object itself.
Is there any way to avoid PopReceipt change on UpdateMessage?
Unfortunately, no. Whenever a message is updated, a new PopReceipt will be returned. From the documentation link (item #4):
The message has been updated with a new visibility timeout. When the
message is updated, a new pop receipt will be returned.
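What you can do instead is make sure the lease renewal and the delete never run concurrently against the same CloudQueueMessage, so that DeleteMessage always uses the pop receipt produced by the most recent UpdateMessageAsync. A rough sketch, assuming the Microsoft.WindowsAzure.Storage.Queue types from the question (the method names and the 5-minute lease are illustrative):
private readonly SemaphoreSlim _messageLock = new SemaphoreSlim(1, 1);
private async Task RenewLeaseAsync(CloudQueue queue, CloudQueueMessage queueMessage)
{
    await _messageLock.WaitAsync();
    try
    {
        // UpdateMessageAsync refreshes queueMessage.PopReceipt as a side effect.
        await queue.UpdateMessageAsync(queueMessage, TimeSpan.FromMinutes(5),
                                       MessageUpdateFields.Visibility);
    }
    finally
    {
        _messageLock.Release();
    }
}
private async Task DeleteWhenDoneAsync(CloudQueue queue, CloudQueueMessage queueMessage)
{
    await _messageLock.WaitAsync();
    try
    {
        // No renewal can be in flight here, so the latest pop receipt is used.
        await queue.DeleteMessageAsync(queueMessage);
    }
    finally
    {
        _messageLock.Release();
    }
}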

BasicAck when processing messages asynchronously

I'm trying to set up a RabbitMQ messaging queue so that I can send a message to start a long running process and also be able to send a message to cancel that long running process if needed. So I started out with an EventingBasicConsumer and did something like this in my Received handler:
if (startProcess)
{
// start a long running process
}
else if (cancelProcess)
{
// cancel the currently running process
}
channel.BasicAck(ea.DeliveryTag, false);
And this doesn't work because the EventingBasicConsumer isn't multithreaded and can only handle one message at a time. So it can't handle the cancel message until it's finished with the long running process (at which point, there's no point, obviously). So next I tried this:
if (startProcess)
{
Task.Run(() => {
// start a long running process
});
}
else if (cancelProcess)
{
// cancel the currently running process
}
channel.BasicAck(ea.DeliveryTag, false);
And this works. I can now cancel the long running process... but I'm acknowledging the request to run the long running process immediately, rather than after it has completed. This means that if the long running process were to crash, the message would already have been removed. So this would require the original sender to keep track of things and the receiver to send messages back saying it's done, and it all gets a bit complicated.
So I thought maybe I could change EventingBasicConsumer to just always fire its Received event on a new thread. So I created something like this:
public class AsyncRabbitConsumer : DefaultBasicConsumer
{
// code all the same as EventingBasicConsumer except this bit:
public override void HandleBasicDeliver(string consumerTag,
ulong deliveryTag,
bool redelivered,
string exchange,
string routingKey,
IBasicProperties properties,
byte[] body)
{
base.HandleBasicDeliver(consumerTag,
deliveryTag,
redelivered,
exchange,
routingKey,
properties,
body);
if (Received != null)
{
var args = new BasicDeliverEventArgs(consumerTag,
deliveryTag,
redelivered,
exchange,
routingKey,
properties,
body);
Task.Run(() =>
{
Received(this, args);
});
}
}
}
Now in my first snippet of code, I can have it process the cancel message while the long running process is still running, and the long running process won't Ack and delete its message until it's actually finished (or cancelled). So that should be great... except when I cancel I get this:
An exception of type 'RabbitMQ.Client.Exceptions.AlreadyClosedException' occurred in RabbitMQ.Client.dll but was not handled in user code
Additional information: Already closed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - unknown delivery tag 3", classId=60, methodId=80, cause=
From the channel.BasicAck step of what appears to be the thread that started the long running process. So what's going on here? I think the acknowledgements (for the cancel message first and then the long running process message) are getting crossed here. Is there any decent way to straighten this out? Or am I barking up the wrong tree?
It's probably worth noting that cancelling the long running process is not instantaneous. It will cancel at the next convenient point, so it's almost certain that the cancel message will finish processing before the long running process has ended.
What you could do is have something like consumer pairs: the first one is the long running process, and the second is an agent that can kill the long running process. The first one would receive the messages, process them and ACK when finished processing, and would also ACK if a kill signal is detected. The agent in the pair would receive the cancel message, kill the first consumer, and spawn another instance of it. Clearly this requires the processes (consumers) to communicate outside of RMQ.
The other thing that comes to mind (though I have never tried anything like this) is to set the prefetch count to 2 on the consumer and, while processing a single data message, publish the second message back to the broker (forward it) unless it is the CANCEL message, in which case you ACK both of them, the CANCEL and the DATA message (to call them that), after you have aborted the processing.
Another option would perhaps be to have, within the "long running process", two consumer threads, each using its own channel.
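To sketch how a single-process variant of these ideas could fit together: let the cancel message only signal a CancellationTokenSource, ack the start message from the worker task only once the work has finished, and guard the shared channel with a lock, because an IModel instance is not meant to be used from several threads at once. The helper names below are hypothetical and the consumer must be registered with autoAck: false:
var channelLock = new object();
CancellationTokenSource runningCts = null;
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    if (IsStartMessage(ea))                          // hypothetical helper
    {
        var cts = new CancellationTokenSource();
        runningCts = cts;
        var deliveryTag = ea.DeliveryTag;
        Task.Run(() =>
        {
            RunLongProcess(cts.Token);               // hypothetical long running work
            lock (channelLock)                       // IModel is not thread-safe
            {
                channel.BasicAck(deliveryTag, false);     // ack only when the work is done
            }
        });
    }
    else if (IsCancelMessage(ea))                    // hypothetical helper
    {
        runningCts?.Cancel();                        // ask the running work to stop
        lock (channelLock)
        {
            channel.BasicAck(ea.DeliveryTag, false); // ack the cancel message itself
        }
    }
};
channel.BasicConsume(queue: "test", autoAck: false, consumer: consumer);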
I was facing the same error because in the BasicConsume method the autoAck flag was true. I have now changed the flag to false and I no longer get the error in the BasicAck method for the long running process.
channel.BasicConsume(queue: "test", autoAck: false, consumer: consumer);

How does MessageQueue.BeginReceive work and how to use it correctly?

I currently have a background thread. In this thread is an infinite loop.
This loop once in a while updates some values in a database, and then listens for 1 second on the MessageQueue (with queue.Receive(TimeSpan.FromSeconds(1))).
As long as no message comes in, this call then internally throws a MessageQueueException (Timeout) which is caught and then the loop continues. If there is a message the call normally returns and the message is processed, after which the loop continues.
This leads to a lot of first-chance exceptions (every second, except when there is a message to process), which spams the debug output and also breaks in the debugger when I forget to exclude MessageQueueExceptions.
So how is the async handling of the MessageQueue meant to be done correctly, while still ensuring that, as long as my application runs, the queue is monitored and the database is updated once in a while? Of course the thread should not use up 100% CPU.
I just need the big picture or a hint to some correctly done async processing.
Rather than looping in a thread, I would recommend registering a delegate for the ReceiveCompleted event of your MessageQueue, as described here:
using System;
using System.Messaging;
namespace MyProject
{
/// <summary>
/// Provides a container class for the example.
/// </summary>
public class MyNewQueue
{
//**************************************************
// Provides an entry point into the application.
//
// This example performs asynchronous receive operation
// processing.
//**************************************************
public static void Main()
{
// Create an instance of MessageQueue. Set its formatter.
MessageQueue myQueue = new MessageQueue(".\\myQueue");
myQueue.Formatter = new XmlMessageFormatter(new Type[]
{typeof(String)});
// Add an event handler for the ReceiveCompleted event.
myQueue.ReceiveCompleted += new
ReceiveCompletedEventHandler(MyReceiveCompleted);
// Begin the asynchronous receive operation.
myQueue.BeginReceive();
// Do other work on the current thread.
return;
}
//**************************************************
// Provides an event handler for the ReceiveCompleted
// event.
//**************************************************
private static void MyReceiveCompleted(Object source,
ReceiveCompletedEventArgs asyncResult)
{
// Connect to the queue.
MessageQueue mq = (MessageQueue)source;
// End the asynchronous Receive operation.
Message m = mq.EndReceive(asyncResult.AsyncResult);
// Display message information on the screen.
Console.WriteLine("Message: " + (string)m.Body);
// Restart the asynchronous Receive operation.
mq.BeginReceive();
return;
}
}
}
Source: https://learn.microsoft.com/en-us/dotnet/api/system.messaging.messagequeue.receivecompleted?view=netframework-4.7.2
Have you considered a MessageEnumerator, which is returned from MessageQueue.GetMessageEnumerator2()? (See the sketch below.)
You get a dynamic view of the queue's content and can examine and remove messages from the queue during the iteration.
If there are no messages, then MoveNext() will return false and you don't need to catch first-chance exceptions.
If new messages arrive after you started the iteration, they will be iterated over (if they are put after the cursor).
If new messages arrive before the cursor, you can just reset the enumerator or continue (if you don't need messages with lower priority at the moment).
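A minimal sketch of that approach, assuming System.Messaging and an illustrative queue path:
MessageQueue queue = new MessageQueue(@".\private$\myQueue");
queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
using (MessageEnumerator cursor = queue.GetMessageEnumerator2())
{
    // MoveNext() simply returns false when no message is waiting; no exception is thrown.
    while (cursor.MoveNext())
    {
        Message current = cursor.RemoveCurrent();   // removes the message under the cursor
        Console.WriteLine((string)current.Body);
    }
}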
Contrary to the comment by Jamie Dixon, the scenario IS exceptional. Note the naming of the method and its parameters: BeginReceive(TimeSpan timeout)
Had the method been named BeginTryReceive, it would've been perfectly normal if no message was received. Naming it BeginReceive (or Receive, for the sync version) implies that a message is expected to enter the queue. That the TimeSpan parameter is named timeout is also significant, because a timeout IS exceptional. A timeout means that a response was expected but none was given, and the caller chooses to stop waiting and assumes that an error has occurred. When you call BeginReceive/Receive with a 1 second timeout, you are stating that if no message has entered the queue by that time, something must have gone wrong and we need to handle it.
The way I would implement this, if I understand what you want to do correctly, is this:
Call BeginReceive either with a very large timeout, or even without a timeout if I don't see an empty queue as an error.
Attach an event handler to the ReceiveCompleted event, which 1) processes the message, and 2) calls BeginReceive again.
I would NOT use an infinite loop. This is both bad practice and completely redundant when using asynchronous methods like BeginReceive.
edit: To abandon a queue which isn't being read by any client, have the queue writers peek into the queue to determine if it is 'dead'.
edit: I have another suggestion. Since I don't know the details of your application I have no idea if it is either feasible or appropriate. It seems to me that you're basically establishing a connection between client and server, with the message queue as the communication channel. Why is this a 'connection'? Because the queue won't be written to if no one is listening. That's pretty much what a connection is, I think. Wouldn't it be more appropriate to use sockets or named pipes to transfer the messages? That way, the clients simply close the Stream objects when they are done reading, and the servers are immediately notified. As I said, I don't know if it can work for what you're doing, but it feels like a more appropriate communication channel.
