MessageReceiver.ReceiveBatch() not working as intended - c#

I am trying to receive messages in batch from the ServiceBus using the ReceiveBatch method in the MessageReceiver:
IEnumerable<BrokeredMessage> messages;
var messagingfactory = MessagingFactory.CreateFromConnectionString("ConnectionString");
var msgrcvr = messagingfactory.CreateMessageReceiver("queueName", ReceiveMode.ReceiveAndDelete);
messages = msgrcvr.ReceiveBatch(20, TimeSpan.FromSeconds(timeoutInSecs));
I have checked that my queue contains 20 messages using the Service Bus Explorer.
This code returns only one message in the messages structure. Is there some property I am missing?

This is only a partial answer or workaround: the following code reliably gets all elements, but it doesn't use ReceiveBatch. Note that, as far as I can discern, Peek(i) operates on a one-based index. Also, depending on which server you are running on, if you are charged per message pull this may (or may not) be more expensive, so use at your own risk:
List<BrokeredMessage> dlIE = new List<BrokeredMessage>();
BrokeredMessage potentialMessage = null;
int loopCount = 1;
while ((potentialMessage = deadletterSubscriptionClient.Peek(loopCount)) != null)
{
    dlIE.Add(potentialMessage);
    loopCount++;
}
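
For completeness: ReceiveBatch returns at most the requested count, but only the messages the receiver already has available, so a single call can legitimately come back with just one. Enabling prefetch may help, and looping drains the rest. A minimal sketch reusing the variables from the question (requires using System.Linq for Any()):
msgrcvr.PrefetchCount = 20; // let the receiver buffer messages ahead of each call
var all = new List<BrokeredMessage>();
IEnumerable<BrokeredMessage> batch;
while ((batch = msgrcvr.ReceiveBatch(20, TimeSpan.FromSeconds(timeoutInSecs))) != null && batch.Any())
{
    all.AddRange(batch);
}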


Azure Service Bus Receive Messages continuously when ever new message placed in web application [duplicate]

I am using the Azure.Messaging.ServiceBus NuGet package to work with Azure Service Bus. We have created a topic and a subscription. The subscription has 100+ messages. We want to read all the messages and continue to read messages as they arrive.
The (now deprecated) Microsoft.Azure.ServiceBus package provided RegisterMessageHandler, which was used to process every incoming message. I am not able to find a similar option in the Azure.Messaging.ServiceBus NuGet package.
I am able to read one message at a time, but I have to call await receiver.ReceiveMessageAsync(); manually every time.
To receive multiple messages (a batch), you should use ServiceBusReceiver.ReceiveMessagesAsync() (note the plural "Messages", as opposed to the singular ReceiveMessageAsync). The method returns up to the requested number of messages that are immediately available. To ensure you retrieve all 100+ messages, you'll need to loop until no more messages come back, as sketched below.
If you'd like to use a processor, that's also available in the new SDK. See my answer to a similar question here.
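A minimal receive-loop sketch along those lines (connectionString, topicName, and subscriptionName assumed to match the question; the batch size and wait time are arbitrary):
// Sketch only: drain the subscription, stopping once a batch comes back empty.
await using var client = new ServiceBusClient(connectionString);
await using ServiceBusReceiver receiver = client.CreateReceiver(topicName, subscriptionName);
while (true)
{
    // Ask for up to 50 messages; wait at most 5 seconds for the first one.
    IReadOnlyList<ServiceBusReceivedMessage> batch =
        await receiver.ReceiveMessagesAsync(maxMessages: 50, maxWaitTime: TimeSpan.FromSeconds(5));
    if (batch.Count == 0)
        break; // nothing available right now
    foreach (ServiceBusReceivedMessage message in batch)
    {
        Console.WriteLine(message.Body.ToString());
        await receiver.CompleteMessageAsync(message);
    }
}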
As suggested by @Gaurav Mantri, I used the ServiceBusProcessor class to implement an event-based model for processing messages:
public async Task ReceiveAll()
{
    string connectionString = "Endpoint=sb://sb-test-today.servicebus.windows.net/;SharedAccessKeyName=manage;SharedAccessKey=8e+6SWp3skB3Aedsadsadasdwz5DU=;";
    string topicName = "topicone";
    string subscriptionName = "subone";

    await using var client = new ServiceBusClient(connectionString, new ServiceBusClientOptions
    {
        TransportType = ServiceBusTransportType.AmqpWebSockets
    });

    var options = new ServiceBusProcessorOptions
    {
        // By default (AutoCompleteMessages = true), the processor completes the
        // message after the message handler returns. Set AutoCompleteMessages to
        // false to settle messages on your own; see
        // https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock
        // In both cases, if the message handler throws an exception without
        // settling the message, the processor will abandon the message.
        AutoCompleteMessages = false,

        // Raise this to allow concurrent handler invocations (multi-threading).
        MaxConcurrentCalls = 1
    };

    await using ServiceBusProcessor processor = client.CreateProcessor(topicName, subscriptionName, options);
    processor.ProcessMessageAsync += MessageHandler;
    processor.ProcessErrorAsync += ErrorHandler;

    await processor.StartProcessingAsync();
    Console.ReadKey();
    await processor.StopProcessingAsync();
}

public async Task MessageHandler(ProcessMessageEventArgs args)
{
    string body = args.Message.Body.ToString();
    Console.WriteLine(body);

    // We can evaluate application logic here and use it to decide how to settle the message.
    await args.CompleteMessageAsync(args.Message);
}

public Task ErrorHandler(ProcessErrorEventArgs args)
{
    // The error source tells us at what point in the processing the error occurred.
    Console.WriteLine(args.ErrorSource);
    // The fully qualified namespace is available,
    Console.WriteLine(args.FullyQualifiedNamespace);
    // as well as the entity path.
    Console.WriteLine(args.EntityPath);
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
}

How to create a non durable queue in ActiveMQ Artemis?

I have an application in which I want one durable and one non-durable queue in ActiveMQ Artemis.
To connect to this message bus I use amqpnetlite.
var source = new Source();
if (durable)
{
    source.Address = amqpAddressConverter.GetSubscriberAddress(address, useLoadBalancing);
    source.Durable = 1;
    source.ExpiryPolicy = new Symbol("never");
    source.DistributionMode = new Symbol("copy");
}
else
{
    source.Address = amqpAddressConverter.GetSubscriberAddress(address);
    source.Durable = 0;
    source.ExpiryPolicy = new Symbol("never");
}
var receiverLink = new ReceiverLink(session, linkName, source, null);
So this is my receiver link. As shown, I set the Durable uint on the Source that is passed to the ReceiverLink. The ActiveMQ Artemis documentation describes Durable as a boolean, but in the amqpnetlite library it is a uint, so my understanding is that anything over 0 should mean true and 0 should mean false.
At first the behaviour was very strange: even when the Artemis web interface showed a queue as durable, it would be deleted as soon as no consumer was connected.
I found this:
ActiveMQ Artemis queue deleted after shutdown of consuming client
which describes that even durable queues get deleted because of the default behaviour.
So I edited broker.xml and set auto-delete-queues to false.
Since then the behaviour has completely switched:
both queues (durable = 1 and durable = 0) remain after the connection disconnects.
So how do I create a durable and a non-durable queue correctly?
The Artemis source carries a .NET example that creates a durable topic subscription and also shows how to later recover it using AmqpNetLite.
One key thing many folks miss is that your client needs to use a unique container ID, analogous to the JMS Client ID concept.
For queue-specific subscriptions the client should indicate in the link capabilities that it wants a queue-based address created, as the default is a multicast address, which won't behave the same; see the sketch after the two snippets below.
Source source = new Source()
{
    Address = address,
    Capabilities = new Symbol[] { "queue" },
};
vs the topic-specific source configuration:
Source source = new Source()
{
    Address = address,
    Capabilities = new Symbol[] { "topic" },
};
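Putting those two hints together, here is a minimal AmqpNetLite sketch (the broker address, credentials, and all names are placeholders, not taken from the question) that sets a unique container ID on the connection's Open frame and asks for a queue-style address:
using Amqp;
using Amqp.Framing;
using Amqp.Types;

// Placeholder broker address and container ID, for illustration only.
var address = new Address("amqp://guest:guest@localhost:5672");
var open = new Open()
{
    ContainerId = "my-unique-client-id" // analogous to the JMS Client ID
};
var connection = new Connection(address, null, open, null);
var session = new Session(connection);

var source = new Source()
{
    Address = "exampleQueue",
    Capabilities = new Symbol[] { "queue" } // ask for a queue-based (anycast) address
};
// Keep the link name stable across reconnects if the subscription must be recovered later.
var receiver = new ReceiverLink(session, "my-receiver-link", source, null);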

Kafka very high latency C#

I am doing some performance tests on Apache Kafka to compare it with others like RabbitMQ and ActiveMQ. The idea is to use it on a messaging system for agents' communication.
I am testing multiple scenarios (one-to-one, broadcast, and many-to-one) with different numbers of publishers and subscribers, and therefore different loads. Even in the lowest-load scenario, one-to-one with 10 pairs of agents sending 500 messages with a 1 ms delay between sends, I am experiencing very high latencies (an average of ~200 ms). And if we go to 100 pairs, the numbers rise to ~1500 ms. The same thing happens with broadcast and many-to-one.
I am using Windows with Kafka 2.12-2.5.0 and zookeeper 3.6.1 with C# .Net client Confluent.Kafka 1.4.2. I have already tried some properties like LingerMs = 0 according to some posts I found. I have both Kafka and zookeeper with default settings.
I made a simple test program in which the problem happens:
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Confluent.Kafka;

namespace KafkaSetupAgain
{
    class Program
    {
        static void Main(string[] args)
        {
            int numberOfMessages = 500;
            int numberOfPublishers = 10;
            int numberOfSubscribers = 10;
            int timeOfRun = 30000;

            List<MCVESubscriber> Subscribers = new List<MCVESubscriber>();
            for (int i = 0; i < numberOfSubscribers; i++)
            {
                int id = i; // copy the loop variable so each thread sees its own value
                MCVESubscriber ZeroMqSubscriber = new MCVESubscriber();
                new Thread(() =>
                {
                    ZeroMqSubscriber.read(id.ToString());
                }).Start();
                Subscribers.Add(ZeroMqSubscriber);
            }

            Thread.Sleep(10000); // to make sure all subscribers started

            for (int i = 0; i < numberOfPublishers; i++)
            {
                int id = i; // copy the loop variable so each thread sees its own value
                MCVEPublisher ZeroMqPublisherBroadcast = new MCVEPublisher();
                new Thread(() =>
                {
                    ZeroMqPublisherBroadcast.publish(numberOfMessages, id.ToString());
                }).Start();
            }

            Thread.Sleep(timeOfRun);

            foreach (MCVESubscriber Subscriber in Subscribers)
            {
                Subscriber.PrintMessages("file.csv");
            }
        }

        public class MCVEPublisher
        {
            public void publish(int numberOfMessages, string topic)
            {
                var config = new ProducerConfig
                {
                    BootstrapServers = "localhost:9092",
                    LingerMs = 0,
                    Acks = Acks.None,
                };

                var producer = new ProducerBuilder<Null, string>(config).Build();

                int success = 0;
                int failure = 0;

                Thread.Sleep(3500);

                for (int i = 0; i < numberOfMessages; i++)
                {
                    Thread.Sleep(1);
                    // Stopwatch ticks equal TimeSpan ticks only when Stopwatch.Frequency
                    // is 10 MHz; both ends use the same conversion, so deltas stay comparable.
                    long milliseconds = System.Diagnostics.Stopwatch.GetTimestamp() / TimeSpan.TicksPerMillisecond;
                    var t = producer.ProduceAsync(topic, new Message<Null, string> { Value = milliseconds.ToString() });
                    t.ContinueWith(task =>
                    {
                        if (task.IsFaulted)
                        {
                            Interlocked.Increment(ref failure); // continuations run on pool threads
                        }
                        else
                        {
                            Interlocked.Increment(ref success);
                        }
                    });
                }

                producer.Flush(TimeSpan.FromSeconds(10)); // wait for outstanding sends before reporting
                Console.WriteLine("Success: " + success + " Failure:" + failure);
            }
        }

        public class MCVESubscriber
        {
            private List<string> prints = new List<string>();

            public void read(string topic)
            {
                var consumerConfig = new ConsumerConfig()
                {
                    BootstrapServers = "localhost:9092",
                    GroupId = Guid.NewGuid().ToString(), // the consumer must be built from a config that carries a GroupId
                    AutoOffsetReset = AutoOffsetReset.Earliest,
                    EnableAutoCommit = false,
                    FetchErrorBackoffMs = 1,
                };

                using (var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build())
                {
                    consumer.Subscribe(new[] { topic });

                    while (true)
                    {
                        var consumeResult = consumer.Consume();
                        long milliseconds = System.Diagnostics.Stopwatch.GetTimestamp() / TimeSpan.TicksPerMillisecond;
                        prints.Add(consumeResult.Message.Value + ";" + milliseconds.ToString());
                    }
                }
            }

            public void PrintMessages(string path)
            {
                Console.WriteLine("printing " + prints.Count);
                File.AppendAllLines(path, prints);
            }
        }
    }
}
Does anyone know what the problem could be? What configs can I change to improve latency?
Thanks,
Davide Costa
Kafka is not really built for low latency message distribution, but for high availability. It can be configured to have lower latency, but you start losing a lot of the advantages Kafka offers.
A few tips/comments below:
On the KafkaProducer side, in general, you want to wait until there are enough messages to send, so as to batch messages more efficiently. That's the linger.ms property you already mentioned. Typically that is set to something like 50ms, so by setting it to zero you're effectively telling the producer to send data as fast as it gets it. This may make the producer more "chatty", but you have the assurance it will send the data to the cluster as soon as it gets it.
However, once a message is "produced" into Kafka, the producer waits until it gets an ACK from the lower layer that the broker has received the message successfully. There are multiple options here:
Consider a message "received" once it has been sent by the producer. That is, locally, once the network layer has finished sending it, the producer considers it "sent and acknowledged".
Wait for an ACK from the leader broker you're sending the message to (depending on which partition it gets assigned), so you at least know one broker has it. THIS IS THE DEFAULT.
Wait for an ACK from the leader broker you're sending the message to, PLUS an ACK from each of that partition's replicas on the other brokers. This means, if your cluster has a replication factor of 3, that the message is sent to broker 1, for example, which then replicates it to brokers 2 and 3 (which hold copies of the same partition), waits for those brokers to reply saying they got the message, and only THEN replies back to the producer saying the message has been ACK'd. This is typically used in environments where you never want the possibility of losing a single message, so you always guarantee there will be three copies of your message before the producer moves on.
Official acks explanation from the Kafka docs:
https://kafka.apache.org/25/documentation.html#acks
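In Confluent.Kafka terms, those three modes map onto the Acks enum on ProducerConfig; a quick sketch (the broker address is a placeholder):
var config = new ProducerConfig
{
    BootstrapServers = "localhost:9092",
    // Acks.None   -> option 1: fire-and-forget, lowest latency
    // Acks.Leader -> option 2: wait for the partition leader (the default described above)
    // Acks.All    -> option 3: wait for the leader plus all in-sync replicas
    Acks = Acks.None,
};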
There are other settings to consider, like producer compression and broker compression settings, that might add more latency/overhead, but if you're using the defaults (no producer compression, and the broker's compression.type left at "producer"), there should be no additional latency in those steps.
Having said all that, I would suggest you try setting the acks option in the producer to 0 and see how your latency changes. My guess is you will get much better latency, BUT also understand that there are no guarantees your messages are actually being received and stored correctly. A flaky network, a network partition, etc. could cause you to lose data. That might be OK for your use case, but just make sure you're aware of it.

How to retrieve Dead Letter Queue count?

Question
How do I get the dead letter queue length without receiving each message and counting how many messages I received?
My Current Implementation
public int GetDeadLetterQueueCount()
{
    // Ref: http://stackoverflow.com/questions/22681954/how-do-you-access-the-dead-letter-sub-queue-on-an-azure-subscription
    MessagingFactory factory = MessagingFactory.CreateFromConnectionString(CloudConnectionString);
    QueueClient deadLetterClient = factory.CreateQueueClient(QueueClient.FormatDeadLetterPath(_QueueClient.Path), ReceiveMode.PeekLock);

    BrokeredMessage receivedDeadLetterMessage;
    List<string> lstDeadLetterQueue = new List<string>();

    // Ref: https://code.msdn.microsoft.com/Brokered-Messaging-Dead-22536dd8/sourcecode?fileId=123792&pathId=497121593
    // Log the dead-lettered messages that could not be processed:
    while ((receivedDeadLetterMessage = deadLetterClient.Receive(TimeSpan.FromSeconds(10))) != null)
    {
        lstDeadLetterQueue.Add(String.Format("DeadLettering Reason is \"{0}\" and Deadlettering error description is \"{1}\"",
            receivedDeadLetterMessage.Properties["DeadLetterReason"],
            receivedDeadLetterMessage.Properties["DeadLetterErrorDescription"]));
        var locktime = receivedDeadLetterMessage.LockedUntilUtc;
    }
    return lstDeadLetterQueue.Count;
}
Problem with implementation
Because I am receiving each message in peek-lock mode, the messages have a lock duration set; during this time I cannot receive or even see the messages again until the lock has expired.
There must be an easier way of just getting the count without having to poll the queue?
I do not want to consume the messages either; I would just like the count of the total amount.
You can use the NamespaceManager's GetQueue() method, which returns a description with a MessageCountDetails property, which in turn has a DeadLetterMessageCount property. Something like:
var namespaceManager = Microsoft.ServiceBus.NamespaceManager.CreateFromConnectionString("<CONN_STRING>");
var messageDetails = namespaceManager.GetQueue("<QUEUE_NAME>").MessageCountDetails;
var deadLetterCount = messageDetails.DeadLetterMessageCount;
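If the dead-letter queue belongs to a topic subscription rather than a queue, the same pattern should work through GetSubscription; a sketch with placeholder names:
var namespaceManager = Microsoft.ServiceBus.NamespaceManager.CreateFromConnectionString("<CONN_STRING>");
var subDetails = namespaceManager.GetSubscription("<TOPIC_NAME>", "<SUBSCRIPTION_NAME>").MessageCountDetails;
var subDeadLetterCount = subDetails.DeadLetterMessageCount;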

Unable to get queue length / message count from Azure

I have a use case where I need to queue a select number of messages when the current queue length drops below a specified value. Since I'm running in Azure, I'm trying to use the RetrieveApproximateMessageCount() method to get the current message count. Every time I call it I get an exception stating "StorageClientException: The specified queue does not exist." Here is a review of what I've done:
Created the queue in the portal and have successfully queued messages to it.
Created the storage account in the portal and it is in the Created/Online state
Coded the query as follows (using http and https options):
var storageAccount = new CloudStorageAccount(
    new StorageCredentialsAccountAndKey(_messagingConfiguration.StorageName.ToLower(),
                                        _messagingConfiguration.StorageKey), false);
var queueClient = storageAccount.CreateCloudQueueClient();
var queue = queueClient.GetQueueReference(queueName.ToLower());

int messageCount;
try
{
    messageCount = queue.RetrieveApproximateMessageCount();
}
catch (Exception)
{
    // Booom!!!!! in every case
}

// ApproximateMessageCount is always null
messageCount = queue.ApproximateMessageCount == null ? 0 : queue.ApproximateMessageCount.Value;
I've confirmed the name is cased correctly with no special characters, numbers, or spaces, and the resulting queue URL appears to be correctly formed based on the API documentation (e.g. http://myaccount.queue.core.windows.net/myqueue).
Can anyone help shed some light on what I'm doing wrong?
EDIT
I've confirmed that using the MessagingFactory I can create a QueueClient and then enqueue/dequeue messages successfully. When I use the CloudStorageAccount, the queue is never present, so the counts and GetMessage routines never work. I am guessing these are not the same thing? Assuming I'm correct, what I need is to measure the length of the Service Bus queue. Is that possible?
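Since the EDIT asks about Service Bus rather than Storage: the Service Bus queue length is exposed through the same NamespaceManager approach shown in the dead-letter question above; a minimal sketch with placeholder names:
var namespaceManager = Microsoft.ServiceBus.NamespaceManager.CreateFromConnectionString("<CONN_STRING>");
long activeMessageCount = namespaceManager.GetQueue("<QUEUE_NAME>").MessageCountDetails.ActiveMessageCount;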
RetrieveApproximateMessageCount() has been deprecated.
If you want to use ApproximateMessageCount to get the result, try this:
CloudQueue q = queueClient.GetQueueReference(QUEUE_NAME);
q.FetchAttributes();
int? qCnt = q.ApproximateMessageCount;
The CloudQueue approach has been deprecated (along with the v11 SDK).
The following snippet is the current replacement (from the Azure docs):
//-----------------------------------------------------
// Get the approximate number of messages in the queue
//-----------------------------------------------------
public void GetQueueLength(string queueName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

    // Instantiate a QueueClient which will be used to manipulate the queue
    QueueClient queueClient = new QueueClient(connectionString, queueName);

    if (queueClient.Exists())
    {
        QueueProperties properties = queueClient.GetProperties();

        // Retrieve the cached approximate message count.
        int cachedMessagesCount = properties.ApproximateMessagesCount;

        // Display number of messages.
        Console.WriteLine($"Number of messages in queue: {cachedMessagesCount}");
    }
}
https://learn.microsoft.com/en-us/azure/storage/queues/storage-dotnet-how-to-use-queues?tabs=dotnet#get-the-queue-length
