I am testing a project that uses a dead letter queue with Microsoft Service Bus. I send 26 messages (representing the alphabet) and use a program that, when receiving the messages, randomly puts some of them into the dead letter queue. The dead letter queue is always read in peek mode, so once messages land there they stay there. After a few runs, all 26 messages end up in the dead letter queue and remain there.
However, when reading them, sometimes only a few (e.g. 6) are read, sometimes all 26.
I use the following code:
const int maxToRead = 200; // It seems one needs to set this higher than
                           // the anticipated load, yet only some come back
IEnumerable<BrokeredMessage> dlIE =
    deadletterSubscriptionClient.ReceiveBatch(maxToRead);
There is an overload of ReceiveBatch that takes a timeout, but this doesn't help and probably only adds to the complexity.
Why doesn't it return all 26 messages every time, given that the dead letter queue is read in "peek" mode and the messages stay there?
I can use "Service Bus Explorer" to actually verify that all messages are in the deadletter queue and remain there.
This is mostly a testing example, but one would hope that ReceiveBatch would behave deterministically rather than in such a seemingly random manner...
This is only a partial answer or workaround: the following code reliably gets all the elements, but it doesn't use ReceiveBatch. Note that, as far as I can discern, Peek(i) operates on a one-based index. Also, depending on which server you run against, if you are charged per message pull this may (or may not) be more expensive, so use at your own risk:
List<BrokeredMessage> dlIE = new List<BrokeredMessage>();
BrokeredMessage potentialMessage = null;
int loopCount = 1; // Peek(i) appears to use a one-based index
while ((potentialMessage = deadletterSubscriptionClient.Peek(loopCount)) != null)
{
    dlIE.Add(potentialMessage);
    loopCount++;
}
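If the SDK version in use exposes PeekBatch on the subscription client, the loop above can probably be collapsed into a single call. A minimal sketch, assuming the same deadletterSubscriptionClient and maxToRead as above, and assuming PeekBatch is available in your Microsoft.ServiceBus.Messaging version:
// Peek (non-destructively) up to maxToRead messages in one call.
// Note: like ReceiveBatch, the service may still return fewer messages than requested.
IEnumerable<BrokeredMessage> peeked = deadletterSubscriptionClient.PeekBatch(maxToRead);
foreach (BrokeredMessage m in peeked)
{
    Console.WriteLine(m.MessageId);
}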
The expected behaviour of the code below is that ReceiveAsync waits on the Azure queue for up to 1 minute before returning either null or a message, if one arrives. The intended use is an IoT Hub resource where multiple messages may be added to a queue intended for one of several DeviceClient objects. Each DeviceClient continuously polls this queue to receive messages intended for it; messages for other DeviceClients are left in the queue for those others.
The actual behaviour is that ReceiveAsync immediately returns null each time it is called, with no delay. This happens regardless of the value given for the TimeSpan, and also if no parameter is given (so the default timeout is used).
So, rather than seeing 1 log item per minute stating that a null message was received, I'm getting 2 log items per second (!). This behaviour is different from a few months ago, so I started some research - with little result so far.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Client;

public static TimeSpan receiveMessageWaitTime = new TimeSpan(0, 1, 0);

Microsoft.Azure.Devices.Client.Message receivedMessage = null;
deviceClient = DeviceClient.CreateFromConnectionString(Settings.lastKnownConnectionString, Microsoft.Azure.Devices.Client.TransportType.Amqp);

// This code is within an infinite loop/task/with try/except code
if (deviceClient != null)
{
    receivedMessage = await deviceClient.ReceiveAsync(receiveMessageWaitTime);
    if (receivedMessage != null)
    {
        string Json = Encoding.ASCII.GetString(receivedMessage.GetBytes());
        // Handle the message
    }
    else
    {
        // Log the fact that we got a null message, and try again later
    }
    await Task.Delay(500); // Give the CPU some time, this is an infinite loop after all.
}
I looked at the Azure hub and noticed 8 messages in the queue. I then added 2 more; neither of the new messages was received, and the queue is now at 10 items.
I did notice this question: Azure ServiceBus: Client.Receive() returns null for messages > 64 KB
But I have no way to see whether there is indeed a message that big currently in the queue (since ReceiveAsync returns null...)
As such, the questions:
Could you preview the messages in the queue?
Could you get the queue size, i.e. ask for the number of messages in the queue before getting them?
Could you delete messages from the queue without getting them?
Could you create a callback-based receive instead of an infinite loop? (I guess internally the code would just do a peek and the same as we are already doing.)
Any help would be greatly appreciated.
If you are using Azure Service Bus, I recommend using the Service Bus Explorer to preview the messages and to get the number of messages in the queue. You can also use it to delete messages without receiving them.
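If you only need the number of pending cloud-to-device messages for a device, the service-side SDK can report it. A minimal sketch, assuming a service-side connection string with registry read rights; serviceConnectionString and deviceId are placeholders, and RegistryManager comes from the Microsoft.Azure.Devices package already referenced in the question:
using Microsoft.Azure.Devices;

// Assumed placeholders: serviceConnectionString, deviceId.
RegistryManager registry = RegistryManager.CreateFromConnectionString(serviceConnectionString);
Device device = await registry.GetDeviceAsync(deviceId);
// CloudToDeviceMessageCount reports how many C2D messages are waiting for this device.
Console.WriteLine($"Pending C2D messages: {device.CloudToDeviceMessageCount}");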
I'm using the current Apache.NMS 1.7.1 and Apache.NMS.ActiveMQ 1.7.2.
I'm using IndividualAcknowledge, so I'm trying to keep the number of loaded messages quite low, because it gets really slow if I have >>1000 messages loaded without acking them (it searches a linked list of all loaded messages each time).
I have the following code snippets:
using System.Collections.Concurrent;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using Apache.NMS.Util;

BlockingCollection<IMessage> _collection = new BlockingCollection<IMessage>();

var factory = new ConnectionFactory("activemq:tcp://localhost:61616");
var _connection = (Connection) factory.CreateConnection();
_connection.PrefetchPolicy.All = 1000;

var session = (Session) _connection.CreateSession(AcknowledgementMode.IndividualAcknowledge);
var destination = SessionUtil.GetDestination(session, "queue://testQueue");
var messageConsumer = (MessageConsumer) session.CreateConsumer(destination);
messageConsumer.Listener += message => _collection.Add(message);
_connection.Start();
The queue testQueue contains >>20_000 messages. After waiting some seconds, _collection contains all the messages, without me acknowledging any of them.
If I understand the documentation right, I should get at most 1000 until I start acknowledging them:
Once the broker has dispatched a prefetch limit number of messages to a consumer it will not dispatch any more messages to that consumer until the consumer has acknowledged at least 50% of the prefetched messages, e.g., prefetch/2, that it received. When the broker has received said acknowledgements it will dispatch a further prefetch/2 number of messages to the consumer to 'top-up', as it were, its prefetch buffer.
I also tried some variations, like only setting QueuePrefetch, or setting the policy in the URL:
activemq:tcp://localhost:61616?nms.prefetchPolicy.queuePrefetch=100
or in the queue:
queue://testQueue?consumer.prefetchSize=100
Regarding the slowness of the IndividualAcknowledge, I already tried several other options without much luck:
messageConsumer.OptimizeAcknowledge = true;
messageConsumer.OptimizeAcknowledgeTimeOut = 1000;
messageConsumer.OptimizedAckScheduledAckInterval = 500;
Though I'm not completely clear about the difference between the last two options.
Because you are using an asynchronous listener, the broker will keep sending you everything, as the client continues to grant credit to the broker on delivery of each message to your asynchronous event listener. To truly limit the number of messages delivered to the client at any given time, the client needs to use synchronous receive calls. Individual acknowledge is best paired with synchronous consumption, so that you can control how many messages are read and acknowledge them at some point when you are ready.
The optimized acknowledge settings don't apply in individual acknowledge mode, so they won't help with performance.
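A minimal sketch of the synchronous pattern described above, assuming the same session, destination and _connection as in the question; with synchronous Receive calls you pull one message at a time and the prefetch window only advances as you acknowledge:
// Synchronous consumption: ask for one message at a time instead of using the Listener event.
IMessageConsumer consumer = session.CreateConsumer(destination);
_connection.Start();

IMessage msg;
while ((msg = consumer.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // ... process the message here ...

    // Acknowledge each message individually once it has been handled,
    // so the broker can dispatch further messages.
    msg.Acknowledge();
}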
I have a C# application that sets up numerous MQ listeners (multiple threads and potentially multiple servers, each with their own listeners). Some messages that come off the queue I will want to leave on the queue and move past to the next message, but under some circumstances I will later want to go back and re-read those messages...
using IBM.XMS;

var connectionFactory = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ).CreateConnectionFactory();
connectionFactory.SetStringProperty(XMSC.WMQ_HOST_NAME, origination.Server);
connectionFactory.SetIntProperty(XMSC.WMQ_PORT, int.Parse(origination.Port));
connectionFactory.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, origination.QueueManager);
connectionFactory.SetStringProperty(XMSC.WMQ_CHANNEL, origination.Channel);

var connection = connectionFactory.CreateConnection(null, null);
_connections.Add(connection);

// ClientAcknowledge so that the message stays on the MQ until we're sure we've processed it
var session = connection.CreateSession(false, AcknowledgeMode.ClientAcknowledge);
_sessions.Add(session);

var destination = session.CreateQueue(origination.Queue);
_destinations.Add(destination);

var consumer = session.CreateConsumer(destination);
_consumers.Add(consumer);

Logging.LogDebugMessage(Constants.ListenerStart);
connection.Start();

ThreadPool.QueueUserWorkItem((o) => Receive(forOrigination, consumer));
Then I have...
if (OnMQMessageReceived != null)
{
    var message = consumer.Receive();
    var identifier = string.Empty;

    if (message is ITextMessage)
    {
        // do stuff with the message here
        // populates identifier from the message
    }
    else
    {
        // do stuff with the message here
        // populates identifier from the message
    }

    if (!string.IsNullOrWhiteSpace(identifier) && OnMQMessageReceived != null)
    {
        if ( /* some check to see if we should process the message now */ )
        {
            // process message here
            message.Acknowledge(); // this really pulls it off of the MQ
            // here is where I want to trigger the next read to be from the beginning of the MQ
        }
        else
        {
            // We actually want to do nothing here, as in do not Acknowledge.
            // This leaves the message on the MQ and we'll pick it up again later,
            // but we want to move on to the next message in the MQ.
        }
    }
    else
    {
        message.Acknowledge(); // this really pulls it off of the MQ... it's useless to us anyway
    }
}
else
{
    Thread.Sleep(0);
}
ThreadPool.QueueUserWorkItem((o) => Receive(forOrigination, consumer));
So a couple of questions:
If I do not acknowledge the message it stays on the MQ, right?
If the message is not acknowledged then by default when I read from the MQ again with the same listener it reads the next one and does not go to the beginning, right?
How do I change the listener so that the next time I read I start at the beginning of the queue?
Leaving messages on a queue is an anti-pattern. If you don't want to or cannot process the message at a certain point of your logic, then you have a number of choices:
Get it off the queue and put it on another queue/topic for delayed/different processing (a sketch of this option follows below).
Get it off the queue and dump to a database, flat file - whatever, if you want to process it outside of messaging flow, or don't want to process at all.
If it is feasible, you may want to change the message producer so it doesn't mix the messages with different processing requirements in the same queue/topic.
In any case, do not leave a message on the queue, and always move forward to the next message. This will make the application way more predictable and easier to reason about. You will also avoid all kinds of performance problems. If your application is or may ever become sensitive to the sequence of message delivery, then manual acknowledgement of selected messages will be at odds with it too.
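A minimal sketch of the first option (re-queueing to a separate destination for later processing), assuming the same XMS session and message as in the question; deferredQueueName is a placeholder for whatever queue holds the deferred work:
// Assumption: "session" and "message" come from the setup shown in the question.
IDestination deferred = session.CreateQueue(deferredQueueName);
IMessageProducer producer = session.CreateProducer(deferred);

producer.Send(message);  // forward the message for later processing
message.Acknowledge();   // then acknowledge it so it no longer blocks this queue
                         // (note: with ClientAcknowledge this acknowledges all messages
                         // received so far on the session, as discussed below)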
To your questions:
The JMS spec is vague regarding the behavior of unacknowledged messages - they may be delivered out of order, and it is undefined exactly when they will be delivered. Also, the acknowledge method call acknowledges all previously received and unacknowledged messages on the session - probably not what you had in mind.
If you leave messages behind, the listener may or may not go back to them immediately. If you restart it, it will of course start afresh, but while it is sitting there waiting for messages the behavior is implementation dependent.
So if you try to make your design work, you may get it kind of work under certain circumstances, but it will not be predictable or reliable.
I am working with an Azure Service Bus Topic currently and running into an issue receiving my messages using the ReceiveBatch method. The issue is that the results I actually get are not the results I expect. Here is the basic code setup; use cases are below:
SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, convoTopic, subName);
IEnumerable<BrokeredMessage> messageList = client.ReceiveBatch(100);

foreach (BrokeredMessage message in messageList)
{
    try
    {
        Console.WriteLine(message.GetBody<string>() + message.MessageId);
        message.Complete();
    }
    catch (Exception ex)
    {
        message.Abandon();
    }
}

client.Close();
MessageBox.Show("Done");
Using the above code, if I send 4 messages and then poll, on the first run-through I get the first message; on the second run-through I get the other 3. I expect to get all 4 at the same time. It seems to always return a single message on the first poll and the rest on subsequent polls (same result with 3 and 5, where I get 1 message on the first try and n-1 of n messages on the second).
If I have 0 messages to receive, the operation takes between ~30-60 seconds to get the messageList (that has a 0 count). I need this to return instantly.
If I change the code to IEnumerable<BrokeredMessage> messageList = client.ReceiveBatch(100, new TimeSpan(0,0,0)); then issue #2 goes away, but issue #1 still persists: I have to call the code twice to get all the messages.
I'm assuming that issue #2 is because of a default timeout value, which I override in #3 (though I find it confusing that if a message is there it responds immediately without waiting the default time). I am not sure why I never receive the full number of messages in a single ReceiveBatch, however.
The way I got ReceiveBatch() to work properly was to do two things:
Disable partitioning on the topic (I had to make a new topic for this, because you can't toggle that after creation).
Enable batched operations on each subscription created, like so:
SubscriptionDescription sd = new SubscriptionDescription(topicName, orgSubName);
sd.EnableBatchedOperations = true;
After I did those two things, I was able to get the topics to work as intended using IEnumerable<BrokeredMessage> messageList = client.ReceiveBatch(100, new TimeSpan(0,0,0));
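For completeness, a sketch of how that description is applied: it only takes effect when the subscription is created with it, so an existing subscription has to be recreated. This assumes NamespaceManager from the same Microsoft.ServiceBus package and the same connectionString, topicName and orgSubName as above:
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

SubscriptionDescription sd = new SubscriptionDescription(topicName, orgSubName);
sd.EnableBatchedOperations = true;

// The description is only honored at creation time.
if (!namespaceManager.SubscriptionExists(topicName, orgSubName))
{
    namespaceManager.CreateSubscription(sd);
}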
I'm having a similar problem with an ASB Queue. I discovered that I could mitigate it somewhat by increasing the PrefetchCount on the client prior to receiving the batch:
SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, convoTopic, subName);
client.PrefetchCount = 100;
IEnumerable<BrokeredMessage> messageList = client.ReceiveBatch(100);
From the Azure Service Bus Best Practices for Performance Improvements Using Service Bus Brokered Messaging:
Prefetching enables the queue or subscription client to load additional messages from the service when it performs a receive operation.
...
When using the default lock expiration of 60 seconds, a good value for SubscriptionClient.PrefetchCount is 20 times the maximum processing rates of all receivers of the factory. For example, a factory creates 3 receivers, and each receiver can process up to 10 messages per second. The prefetch count should not exceed 20*3*10 = 600.
...
Prefetching messages increases the overall throughput for a queue or subscription because it reduces the overall number of message operations, or round trips. Fetching the first message, however, will take longer (due to the increased message size). Receiving prefetched messages will be faster because these messages have already been downloaded by the client.
Just a few more pieces to the puzzle. I still couldn't get it to work even after enabling batching and disabling partitioning - I still had to do two ReceiveBatch calls. I did, however, find that:
Restarting the Service Bus services (I am using Service Bus for Windows Server) cleared up the issue for me.
Doing a single ReceiveBatch and taking no action (letting the message locks expire), and then doing another ReceiveBatch, caused all of the messages to come through at the same time. (Doing an initial ReceiveBatch and calling Abandon on all of the messages didn't cause that behavior.)
So it appears to be some sort of corruption/bug in Service Bus's in-memory cache.
My current setup includes a Windows service which picks up a message from the local queue, extracts the information, and puts it into my SQL database. According to my design:
The service picks up the message from the queue (I am using Peek() here).
It sends it to the database.
If for some reason I get an exception while saving it to the database, the message goes back into the queue, which to me is reliable.
I am logging the errors so that a user can know what the issue is and fix it.
Exception example: if the DB connection is lost while saving the messages to the database, the messages are not lost, as they stay in the queue. I don't commit until I get an acknowledgement from the DB that the message was inserted. So a user can look at the logs, make sure the DB connection exists, everything goes back to normal, and we don't lose any messages in the queue.
But consider another scenario: the messages I get in the queue come from a 3rd party according to a standard schema. The schema stays the same and there is no change in that. However, I have seen cases where I get format exceptions, and since the message is not committed it goes back to the queue. At that point this message becomes a bottleneck for me: the same message is picked up again, the service tries to process it, and it gets the same exception. This loops infinitely unless that message is removed or put at the end of the queue.
Looking at removing the message: as of now, if I go based only on the format exception, I might be wrong, since I might encounter other exceptions in the future.
Is there a way I can put these messages back at the end of the queue instead of at the beginning?
Need some advice on how to proceed further.
Note: the queue is transactional.
As far as I'm aware, MSMQ doesn't automatically dump messages to fail queues. Either way you handle it, it's only a few lines of code (Bill, Michael, and I all recommend a fail queue). As for the fail queue itself, you could simply create one named .\private$\queuename_fail.
Surviving poison messages in MSMQ is a decent article on this exact topic, with an example app and source code at the end.
using System.Messaging;

private readonly MessageQueue _failQueue;
private readonly MessageQueue _messageQueue;

/* Other code here (cursor, peek action, run method, initialization etc.) */

private void dumpToFailQueue(Message message)
{
    var oldId = message.Id;
    _failQueue.Send(message, MessageQueueTransactionType.Single);

    // Remove the poisoned message from the original queue
    _messageQueue.ReceiveById(oldId);
}

private void moveToEnd(Message message)
{
    var oldId = message.Id;
    _messageQueue.Send(message, MessageQueueTransactionType.Single);

    // Remove the original copy so only the re-sent one (now at the end) remains
    _messageQueue.ReceiveById(oldId);
}
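A minimal sketch of how these helpers might be driven with a simple retry counter (requires System.Collections.Generic). HandleSkippedMessage, the Label-based key, and the threshold of 5 are assumptions for illustration, not part of the article; the key is the Label rather than the Id because the MSMQ Id changes every time the message is re-sent, so the sender is assumed to set a stable Label:
private readonly Dictionary<string, int> _attempts = new Dictionary<string, int>();

private void HandleSkippedMessage(Message message)
{
    // Count how many times this message has been put back on the queue.
    int count;
    _attempts.TryGetValue(message.Label, out count);
    _attempts[message.Label] = ++count;

    if (count >= 5) // arbitrary example threshold
    {
        dumpToFailQueue(message); // give up and park it on the fail queue
        _attempts.Remove(message.Label);
    }
    else
    {
        moveToEnd(message); // re-queue it behind the other messages
    }
}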