RabbitMQ BasicAck makes the next message UnAcked - C#

Here is the scenario
At the start,
Ready Queue : 2
UnAcked: 0
Once the consumer.Queue.Dequeue(1000, out bdea); runs,
ReadyQueue: 1
UnAcked: 1
This is expected: we have read one message and have not acknowledged it yet.
The problem is that when channel.BasicAck(bdea.DeliveryTag, false); runs,
ReadyQueue: 0
UnAcked: 1
A message that was in the Ready state became UnAcked and the Ready queue becomes 0!
Now, in the while loop, when we look for the second message with consumer.Queue.Dequeue(1000, out bdea);, bdea comes back null as there is nothing in the Ready state.
This is the issue: whenever an Ack happens, it always drags a message from the Ready queue into UnAcked. So the next time around I am losing this UnAcked message, which was never dequeued.
But if I stop the process (the console app), the UnAcked message goes back to the Ready state.
Assume there are 10 messages in the Ready state at the start; at the end only 5 will have been processed and you will find 5 messages in the UnAcked state. Each Ack makes the next message UnAcked. If I stop and run again (5 messages in the Ready state), 3 messages get processed and 2 end up UnAcked. (Dequeue only picks up half of the messages.)
Here is my code (stripped down to just the RabbitMQ functionality; the issue reproduces if you try this code as well):
public class TestMessages
{
private ConnectionFactory factory = new ConnectionFactory();
string billingFileId = string.Empty;
private IConnection connection = null;
private IModel channel = null;
public void Listen()
{
try
{
#region CONNECT
factory.AutomaticRecoveryEnabled = true;
factory.UserName = ConfigurationManager.AppSettings["MQUserName"];
factory.Password = ConfigurationManager.AppSettings["MQPassword"];
factory.VirtualHost = ConfigurationManager.AppSettings["MQVirtualHost"];
factory.HostName = ConfigurationManager.AppSettings["MQHostName"];
factory.Port = Convert.ToInt32(ConfigurationManager.AppSettings["MQPort"]);
#endregion
RabbitMQ.Client.Events.BasicDeliverEventArgs bdea;
using (connection = factory.CreateConnection())
{
string jobId = string.Empty;
using (IModel channel = connection.CreateModel())
{
while (true) //KEEP LISTENING
{
if (!channel.IsOpen)
throw new Exception("Channel is closed"); //Exit the loop.
QueueingBasicConsumer consumer = new QueueingBasicConsumer(channel);
//Prefetch 1 message
channel.BasicQos(0, 1, false);
String consumerTag = channel.BasicConsume(ConfigurationManager.AppSettings["MQQueueName"], false, consumer);
try
{
//Pull out the message
consumer.Queue.Dequeue(1000, out bdea);
if (bdea == null)
{
//Empty Queue
}
else
{
IBasicProperties props = bdea.BasicProperties;
byte[] body = bdea.Body;
string message = System.Text.Encoding.Default.GetString(bdea.Body);
try
{
channel.BasicAck(bdea.DeliveryTag, false);
////Heavy work starts now......
}
catch (Exception ex)
{
//Log
}
}
}
catch (Exception ex)
{
//Log it
}
}
}
}
}
catch (Exception ex)
{
WriteLog.Error(ex);
}
finally
{
//CleanUp();
}
}
}
Am I missing something?

I tried with the Subscription class rather than using the channel directly, and it works now; it clears up the message queue. I referred to this post.
Here is the working code:
public void SubscribeListner()
{
Subscription subscription = null;
const string uploaderExchange = "myQueueExchange";
string queueName = "myQueue";
while (true)
{
try
{
if (subscription == null)
{
try
{
//CONNECT Code
//try to open connection
connection = factory.CreateConnection();
}
catch (BrokerUnreachableException ex)
{
//You probably want to log the error and cancel after N tries,
//otherwise start the loop over to try to connect again after a second or so.
//log.Error(ex);
continue;
}
//create channel
channel = connection.CreateModel();
// This instructs the channel not to prefetch more than one message
channel.BasicQos(0, 1, false);
// Create a new, durable exchange
channel.ExchangeDeclare(uploaderExchange, ExchangeType.Direct, true, false, null);
// Create a new, durable queue
channel.QueueDeclare(queueName, true, false, false, null);
// Bind the queue to the exchange
channel.QueueBind(queueName, uploaderExchange, queueName);
//create subscription
subscription = new Subscription(channel, uploaderExchange, false);
}
BasicDeliverEventArgs eventArgs;
var gotMessage = subscription.Next(250, out eventArgs);//250 millisecond
if (gotMessage)
{
if (eventArgs == null)
{
//This means the connection is closed.
//DisposeAllConnectionObjects();
continue; //move to the next iteration
}
//process message
subscription.Ack();
//channel.BasicAck(eventArgs.DeliveryTag, false);
}
}
catch (OperationInterruptedException ex)
{
//log.Error(ex);
//DisposeAllConnectionObjects();
}
catch(Exception ex)
{
}
}
}
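A likely explanation for the original behaviour (a side note, based on how QueueingBasicConsumer works): the first version called BasicConsume with a brand-new consumer on every loop iteration without cancelling the previous one, so after each Ack the broker could deliver the next message to an older, still-subscribed consumer whose internal queue was never read again. A minimal sketch of the same loop with a single consumer registered once, outside the loop (assuming the same configuration keys as the question):
using (IModel channel = connection.CreateModel())
{
    channel.BasicQos(0, 1, false); //prefetch one unacked message at a time
    var consumer = new QueueingBasicConsumer(channel);
    channel.BasicConsume(ConfigurationManager.AppSettings["MQQueueName"], false, consumer);
    while (true)
    {
        BasicDeliverEventArgs bdea;
        if (!consumer.Queue.Dequeue(1000, out bdea) || bdea == null)
            continue; //queue is empty, poll again
        string message = System.Text.Encoding.Default.GetString(bdea.Body);
        //...process the message...
        channel.BasicAck(bdea.DeliveryTag, false);
    }
}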

Related

Kafka consumer is not consuming messages

I am new to Kafka. The Kafka consumer is not reading messages from the given topic.
I am checking with the Kafka console consumer as well; it is not working. I do not understand the problem. It was working fine earlier.
public string MessageConsumer(string brokerList, List<string> topics, CancellationToken cancellationToken)
{
//ConfigurationManager.AutoLoadAppSettings("", "", true);
string logKey = string.Format("ARIConsumer.StartPRoducer ==>Topics {0} Key{1} =>", "", string.Join(",", topics));
string message = string.Empty;
var conf = new ConsumerConfig
{
BootstrapServers = "localhost:9092",
GroupId = "23",
EnableAutoCommit = false,
AutoOffsetReset = AutoOffsetResetType.Latest,
};
using (var c = new Consumer<Ignore, string>(conf))
{
try
{
c.Subscribe(topics);
bool consuming = true;
// The client will automatically recover from non-fatal errors. You typically
// don't need to take any action unless an error is marked as fatal.
c.OnError += (_, e) => consuming = !e.IsFatal;
while (consuming)
{
try
{
TimeSpan timeSpan = new TimeSpan(0, 0, 5);
var cr = c.Consume(timeSpan);
// Thread.Sleep(5000);
if (cr != null)
{
message = cr.Value;
Console.WriteLine("Thread" + Thread.CurrentThread.ManagedThreadId + "Message : " + message);
CLogger.WriteLog(ELogLevel.INFO, $"Consumed message Partition '{cr.Partition}' at: '{cr.TopicPartitionOffset} thread: { Thread.CurrentThread.ManagedThreadId}'. Message: {message}");
//Console.WriteLine($"Consumed message Partition '{cr.Partition}' at: '{cr.TopicPartitionOffset}'. Topic: { cr.Topic} value :{cr.Value} Timestamp :{DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture)} GrpId: { conf.GroupId}");
c.Commit();
}
Console.WriteLine($"Calling the next Poll ");
}
catch (ConsumeException e)
{
CLogger.WriteLog(ELogLevel.ERROR, $"Error occured: {e.Error.Reason}");
Console.WriteLine($"Error occured: {e.Error.Reason}");
}
//consuming = false;
}
// Ensure the consumer leaves the group cleanly and final offsets are committed.
c.Close();
}
catch (Exception ex)
{
}
}
return message;
}
What is the issue with this code, or is there an installation issue with Kafka?
Is there a Producer actively sending data?
Your consumer is starting from the latest offsets based on the AutoOffsetReset, so it wouldn't read existing data in the topic
The console consumer also defaults to the latest offset
And if you haven't changed the GroupId, then your consumer might have worked once: you consumed the data and committed the offsets for that group. When the consumer starts again in the same group, it will only resume from the end of the topic, or from the offset of the last commit
You also have an empty catch (Exception ex), which might be hiding some other error
Try removing the TimeSpan from the Consume method.
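For example (a minimal sketch, not from the original answer, assuming the same Confluent.Kafka version as the question and that you want to re-read data already in the topic), use a group id that has no committed offsets and start from the earliest offset:
var conf = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "my-fresh-group", //hypothetical: any group that has never committed offsets
    EnableAutoCommit = false,
    //Latest only picks up messages produced after the consumer joins;
    //Earliest replays whatever is already in the topic for a new group.
    AutoOffsetReset = AutoOffsetResetType.Earliest,
};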

RabbitMQ response is being lost in the controller

Good evening everyone. I've got a web app written using .NET and a mobile app.
I'm sending some values to the RabbitMQ server through my web app and this is working fine; I put them in a queue, but when the mobile app accepts the request, I don't get the returned value.
Here is my controller
public async Task<ActionResult> GetCollect(int id)
{
int PartnerId = 0;
bool SentRequest = false;
try
{
SentRequest = await RuleRabbitMQ.SentRequestRule(id);
if(SentRequest )
{
PartnerId = await RuleRabbitMQ.RequestAccepted();
}
}
catch (Exception Ex)
{
}
}
This is my RabbitMQ class
public class InteractionRabbitMQ
{
public async Task<bool> SentRequestRule(int id)
{
bool ConnectionRabbitMQ = false;
await Task.Run(() =>
{
try
{
ConnectionFactory connectionFactory = new ConnectionFactory()
{
//credentials go here
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.QueueDeclare("SolicitacaoSameDay", true, false, false, null);
string rpcResponseQueue = channel.QueueDeclare().QueueName;
string correlationId = Guid.NewGuid().ToString();
IBasicProperties basicProperties = channel.CreateBasicProperties();
basicProperties.ReplyTo = rpcResponseQueue;
basicProperties.CorrelationId = correlationId;
byte[] messageBytes = Encoding.UTF8.GetBytes(string.Concat(" ", id.ToString()));
channel.BasicPublish("", "SolicitacaoSameDay", basicProperties, messageBytes);
channel.Close();
connection.Close();
if (connection != null)
{
ConnectionRabbitMQ = true;
}
else
{
ConnectionRabbitMQ = false;
}
}
catch (Exception Ex)
{
throw new ArgumentException($"There was a problem with the RabbitMQ server. " +
$"Please contact support with error: {Ex.ToString()}");
}
});
return ConnectionRabbitMQ;
}
public async Task<int> RequestAccepted()
{
bool SearchingPartner= true;
int PartnerId = 0;
await Task.Run(() =>
{
try
{
var connectionFactory = new ConnectionFactory()
{
// credentials
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.BasicQos(0, 1, false);
var eventingBasicConsumer = new EventingBasicConsumer(channel);
eventingBasicConsumer.Received += (sender, basicDeliveryEventArgs) =>
{
string Response = Encoding.UTF8.GetString(basicDeliveryEventArgs.Body, 0, basicDeliveryEventArgs.Body.Length);
channel.BasicAck(basicDeliveryEventArgs.DeliveryTag, false);
if(!string.IsNullOrWhiteSpace(Response))
{
int Id = Convert.ToInt32(Response);
PartnerId = Id > 0 ? Id : 0;
SearchingPartner = false;
}
};
channel.BasicConsume("SolicitacaoAceitaSameDay", false, eventingBasicConsumer);
}
catch (Exception Ex)
{
// error message
}
});
return PartnerId;
}
}
I am not sure this works, as I can't build an infrastructure to test it quickly, but your issue is that RequestAccepted returns a Task which completes before the Received event is raised by the RabbitMQ client library.
Syncing the two could possibly resolve the issue; note however that this could potentially make your code wait a very long time for (or even never get) the response.
public Task<int> RequestAccepted()
{
bool SearchingPartner= true;
int PartnerId = 0;
var connectionFactory = new ConnectionFactory()
{
// credentials
};
IConnection connection = connectionFactory.CreateConnection();
IModel channel = connection.CreateModel();
channel.BasicQos(0, 1, false);
TaskCompletionSource<int> tcs = new TaskCompletionSource<int>();
var eventingBasicConsumer = new EventingBasicConsumer(channel);
eventingBasicConsumer.Received += (sender, basicDeliveryEventArgs) =>
{
string Response = Encoding.UTF8.GetString(basicDeliveryEventArgs.Body, 0, basicDeliveryEventArgs.Body.Length);
channel.BasicAck(basicDeliveryEventArgs.DeliveryTag, false);
if(!string.IsNullOrWhiteSpace(Response))
{
int Id = Convert.ToInt32(Response);
PartnerId = Id > 0 ? Id : 0;
SearchingPartner = false;
tcs.SetResult( PartnerId );
}
};
channel.BasicConsume("SolicitacaoAceitaSameDay", false, eventingBasicConsumer);
return tcs.Task;
}
There are a couple of issues with this approach.
First, there is no error handling.
Then, what if the message is sent by RabbitMQ before the consumer subscribes to it? The consumer will block, as it will never receive anything back.
And last, I don't think RabbitMQ consumers are intended to be created on every request to your controller and then never disposed. While this could work on your dev box where you create a couple of requests manually, it probably won't scale to a scenario where dozens or hundreds of concurrent users hit your website and multiple consumers compete against one another.
I don't think there is an easy way around it other than to completely separate the consumer out of your web app, put it in a Windows Service or a Hangfire job, let it collect responses to all possible requests, and serve web requests from that cache.
This is pure speculation, though, based on my understanding of what you are trying to do. I could be wrong here, of course.
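To avoid the caller waiting forever when no response arrives, one option (a minimal sketch, not from the original answer) is to race the TaskCompletionSource against a timeout instead of awaiting it directly:
//Hypothetical helper: waits for the consumer's result but gives up after a timeout.
public static async Task<int> WaitForPartnerAsync(TaskCompletionSource<int> tcs, TimeSpan timeout)
{
    var completed = await Task.WhenAny(tcs.Task, Task.Delay(timeout));
    if (completed == tcs.Task)
        return await tcs.Task; //the Received handler called tcs.SetResult(...)
    return 0; //timed out; no partner accepted the request in time
}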
byte[] messageBytes = Encoding.UTF8.GetBytes(string.Concat(" ", idColeta.ToString()));
I reckon 'idColeta' is blank.

Random Azure Function Apps failures: Host thresholds exceeded [Connections]

I have the following Function App
[FunctionName("SendEmail")]
public static async Task Run([ServiceBusTrigger("%EmailSendMessageQueueName%", AccessRights.Listen, Connection = AzureFunctions.Connection)] EmailMessageDetails messageToSend,
[ServiceBus("%EmailUpdateQueueName%", AccessRights.Send, Connection = AzureFunctions.Connection)]IAsyncCollector<EmailMessageUpdate> messageResponse,
//TraceWriter log,
ILogger log,
CancellationToken token)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {messageToSend}");
/* Validate input and initialise Mandrill */
try
{
if (!ValidateMessage(messageToSend, log)) // TODO: finish validation
{
log.LogError("Invalid or Unknown Message Content");
throw new Exception("Invalid message content.");
}
}
catch (Exception ex)
{
log.LogError($"Failed to Validate Message data: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
DateTime utcTimeToSend;
try
{
var envTag = GetEnvVariable("Environment");
messageToSend.Tags.Add(envTag);
utcTimeToSend = messageToSend.UtcTimeToSend.GetNextUtcSendDateTime();
DateTime utcExpiryDate = messageToSend.UtcTimeToSend.GetUtcExpiryDate();
DateTime now = DateTime.UtcNow;
if (now > utcExpiryDate)
{
log.LogError($"Stopping sending message because it is expired: {utcExpiryDate}");
throw new Exception($"Stopping sending message because it is expired: {utcExpiryDate}");
}
if (utcTimeToSend > now)
{
log.LogError($"Stopping sending message because it is not allowed to be send due to time constraints: next send time: {utcTimeToSend}");
throw new Exception($"Stopping sending message because it is not allowed to be send due to time constraints: next send time: {utcTimeToSend}");
}
}
catch (Exception ex)
{
log.LogError($"Failed to Parse and/or Validate Message Time To Send: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
/* Submit message to Mandrill */
string errorMessage = null;
IList<MandrillSendMessageResponse> mandrillResult = null;
DateTime timeSubmitted = default(DateTime);
DateTime timeUpdateRecieved = default(DateTime);
try
{
var mandrillApi = new MandrillApi(GetEnvVariable("Mandrill:APIKey"));
var mandrillMessage = new MandrillMessage
{
FromEmail = messageToSend.From,
FromName = messageToSend.FromName,
Subject = messageToSend.Subject,
TrackClicks = messageToSend.Track,
Tags = messageToSend.Tags,
TrackOpens = messageToSend.Track,
};
mandrillMessage.AddTo(messageToSend.To, messageToSend.ToName);
foreach (var passthrough in messageToSend.PassThroughVariables)
{
mandrillMessage.AddGlobalMergeVars(passthrough.Key, passthrough.Value);
}
timeSubmitted = DateTime.UtcNow;
if (String.IsNullOrEmpty(messageToSend.TemplateId))
{
log.LogInformation($"No Message Template");
mandrillMessage.Text = messageToSend.MessageBody;
mandrillResult = await mandrillApi.Messages.SendAsync(mandrillMessage, async: true, sendAtUtc: utcTimeToSend);
}
else
{
log.LogInformation($"Using Message Template: {messageToSend.TemplateId}");
var clock = new Stopwatch();
clock.Start();
mandrillResult = await mandrillApi.Messages.SendTemplateAsync(
mandrillMessage,
messageToSend.TemplateId,
async: true,
sendAtUtc: utcTimeToSend
);
clock.Stop();
log.LogInformation($"Call to mandrill took {clock.Elapsed}");
}
timeUpdateRecieved = DateTime.UtcNow;
}
catch (Exception ex)
{
log.LogError($"Failed to call Mandrill: {ex.Message} => {ex.ReportAllProperties()}");
errorMessage = ex.Message;
}
try
{
MandrillSendMessageResponse theResult = null;
SendMessageStatus status = SendMessageStatus.FailedToSendToProvider;
if (mandrillResult == null || mandrillResult.Count < 1)
{
if (String.IsNullOrEmpty(errorMessage))
{
errorMessage = "Invalid Mandrill result.";
}
}
else
{
theResult = mandrillResult[0];
status = FacMandrillUtils.ConvertToSendMessageStatus(theResult.Status);
}
var response = new EmailMessageUpdate
{
SentEmailInfoId = messageToSend.SentEmailInfoId,
ExternalProviderId = theResult?.Id ?? String.Empty,
Track = messageToSend.Track,
FacDateSentToProvider = timeSubmitted,
FacDateUpdateRecieved = timeUpdateRecieved,
FacErrorMessage = errorMessage,
Status = status,
StatusDetail = theResult?.RejectReason ?? "Error"
};
await messageResponse.AddAsync(response, token).ConfigureAwait(false);
}
catch (Exception ex)
{
log.LogError($"Failed to push message to the update ({AzureFunctions.EmailUpdateQueueName}) queue: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
}
When I queue up 100 messages, everything runs fine. When I queue up 500+ messages, 499 of them are sent but the last one is never sent. I also start to get the following errors.
The operation was canceled.
I have Application Insights set up and configured and I have logging running. I am not able to reproduce this locally, and based on the following End-to-end transaction details from Application Insights, I believe the issue is happening at this point:
await messageResponse.AddAsync(response, token).ConfigureAwait(false);
Application Insights End-to-end transaction
host.json
{
"logger": {
"categoryFilter": {
"defaultLevel": "Information",
"categoryLevels": {
"Host": "Warning",
"Function": "Information",
"Host.Aggregator": "Information"
}
}
},
"applicationInsights": {
"sampling": {
"isEnabled": true,
"maxTelemetryItemsPerSecond": 5
}
},
"serviceBus": {
"maxConcurrentCalls": 32
}
}
Likely related is this error from Application Insights as well (screenshot).
Has anyone else had this or similar issues?
If you follow the link from exception https://aka.ms/functions-thresholds, you will see the following limitation:
Connections : Number of outbound connections (limit is 300). For information on handling connection limits, see Managing Connections.
You are likely to have hit that one.
In each function call you create a new instance of MandrillApi. You haven't mentioned which library you are using, but I suspect it's creating a new connection for each instance of MandrillApi.
I checked Mandrill Dot Net and yes, it's creating a new HttpClient for each instance:
_httpClient = new HttpClient
{
BaseAddress = new Uri(BaseUrl)
};
Managing Connections recommends:
In many cases, this connection limit can be avoided by re-using client instances rather than creating new ones in each function. .NET clients like the HttpClient, DocumentClient, and Azure storage clients can manage connections if you use a single, static client. If those clients are re-instantiated with every function invocation, there is a high probability that the code is leaking connections.
Check the documentation of that library to see whether the API client is thread-safe, and reuse it between function invocations if so.
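For example (a minimal sketch; the class name is illustrative, and it assumes the MandrillApi client is thread-safe and reuses the GetEnvVariable helper from the question), the client can be created once per function host instance and shared by all invocations instead of being constructed inside Run:
public static class SendEmailFunction
{
    //Created once per host instance and reused by every invocation,
    //so the underlying HttpClient connections are pooled rather than leaked.
    private static readonly Lazy<MandrillApi> MandrillClient =
        new Lazy<MandrillApi>(() => new MandrillApi(GetEnvVariable("Mandrill:APIKey")));

    //...inside Run(), replace "new MandrillApi(...)" with MandrillClient.Value...
}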

MSMQ transactional queue exception handling

I'm trying out MSMQ with C#. I started by writing two simple console applications: one is the sender of a message (a class in a shared class library project) and one is the listener. That worked great.
Next, I wanted to learn what happens in case of a failure on the receiving side (the listener), so I got started with transactional messaging. On a failure I see that the message stays in the queue, but the listener keeps receiving it. What I actually want to achieve is that the message is retried X times, at some interval, and then moved to an error queue.
Now for my code...
Sender
Console.WriteLine("I am the sender");
Message newMessage;
MessageQueue queue = null;
string queueName = @".\Private$\MyQueue";
using (MessageQueueTransaction msgTx = new MessageQueueTransaction())
{
queue = new MessageQueue(queueName);
queue.DefaultPropertiesToSend.Recoverable = true;
while (Console.ReadLine() != null)
{
msgTx.Begin();
Person person = new Person
{
FirstName = "Shlomi",
LastName = "Or",
Birthday = new DateTime(1982, 5, 6)
};
queue.Send(person, "person_label", msgTx);
msgTx.Commit();
}
queue.Close();
}
Listener
Console.WriteLine("I am the reciever");
int counter = 0;
while (!Console.KeyAvailable)
{
Message newMessage;
MessageQueue queue = null;
string queueName = @".\Private$\MyQueue";
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
try
{
if (!MessageQueue.Exists(queueName))
queue = MessageQueue.Create(queueName, true);
else
queue = new MessageQueue(queueName);
if (queue.CanRead)
{
try
{
newMessage = queue.Receive(MessageQueueTransactionType.Automatic);
//newMessage.Formatter = new XmlMessageFormatter(new String[] { "System.String,mscorlib" });
//Console.WriteLine(newMessage.Body.ToString());
newMessage.Formatter = new XmlMessageFormatter(new Type[] { typeof(Person), typeof(Object) });
Person person = (Person)newMessage.Body;
Console.WriteLine(person.FirstName);
throw new Exception("Some error"); //simulate a processing failure so the receive is not committed
scope.Complete();
}
catch(Exception ex)
{
Console.WriteLine("exception occured " + (++counter));
}
}
}
finally
{
queue.Dispose();
}
}
}
When I run this code and send only one message, the error is thrown and the listener keeps receiving the same message over and over. What I want, as explained above, is some kind of retry limit and then a move to an error queue. How can I do that?
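One possible approach (a minimal sketch, not from the original thread, assuming a second transactional queue named .\Private$\MyQueueError already exists): carry a retry count on the message itself and, inside the same transaction as the receive, either put the message back on the source queue or forward it to the error queue once the limit is reached. A delay between retries would need a separate retry queue or scheduled re-send, which this sketch omits.
const int maxRetries = 3;
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
    Message msg = queue.Receive(MessageQueueTransactionType.Automatic);
    msg.Formatter = new XmlMessageFormatter(new Type[] { typeof(Person) });
    try
    {
        Person person = (Person)msg.Body;
        Console.WriteLine(person.FirstName);
        throw new Exception("Some error"); //simulated failure
    }
    catch (Exception)
    {
        //AppSpecific is an int slot on the message, reused here as a retry counter.
        msg.AppSpecific++;
        if (msg.AppSpecific < maxRetries)
        {
            //Put it back on the source queue for another attempt.
            queue.Send(msg, msg.Label, MessageQueueTransactionType.Automatic);
        }
        else
        {
            //Give up and park it on the error queue.
            using (var errorQueue = new MessageQueue(@".\Private$\MyQueueError"))
            {
                errorQueue.Send(msg, msg.Label, MessageQueueTransactionType.Automatic);
            }
        }
    }
    //Commit so the receive and the re-send/forward happen atomically.
    scope.Complete();
}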

RabbitMQ C# verify message was sent

I'm new to RabbitMQ and trying to write to a queue and verify that the message was sent. If it fails, I need to know about it.
I made a fake queue to watch it fail, but no matter what, I see no exceptions, and when I look for an ack I always get one. I never see the BasicNack.
I'm not even sure that BasicAcks is the right way to go.
private void button1_Click(object sender, EventArgs e)
{
var factory = new ConnectionFactory() { HostName = "localhost" };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
channel.QueueDeclare("task_queue", true, false, false, null);
var message = ("Helllo world");
var body = Encoding.UTF8.GetBytes(message);
channel.ConfirmSelect();
var properties = channel.CreateBasicProperties();
properties.SetPersistent(true);
properties.DeliveryMode = 2;
channel.BasicAcks += channel_BasicAcks;
channel.BasicNacks += channel_BasicNacks;
//fake queue should be task_queue
channel.BasicPublish("", "task_2queue", true, properties, body);
channel.WaitForConfirmsOrDie();
Console.WriteLine(" [x] Sent {0}", message);
}
}
}
void channel_BasicNacks(IModel model, BasicNackEventArgs args)
{
}
void channel_BasicAcks(IModel model, BasicAckEventArgs args)
{
}
For those looking for a C# answer - here is what you need.
https://rianjs.net/2013/12/publisher-confirms-with-rabbitmq-and-c-sharp
Something like this: (BasicAcks attaches an event handler - there is also BasicNacks)
using (var connection = FACTORY.CreateConnection())
{
var channel = connection.CreateModel();
channel.ExchangeDeclare(QUEUE_NAME, ExchangeType.Fanout, true);
channel.QueueDeclare(QUEUE_NAME, true, false, false, null);
channel.QueueBind(QUEUE_NAME, QUEUE_NAME, String.Empty, new Dictionary<string, object>());
channel.BasicAcks += (sender, eventArgs) =>
{
//implement ack handle
};
channel.ConfirmSelect();
for (var i = 1; i <= numberOfMessages; i++)
{
var messageProperties = channel.CreateBasicProperties();
messageProperties.SetPersistent(true);
var message = String.Format("{0}\thello world", i);
var payload = Encoding.Unicode.GetBytes(message);
Console.WriteLine("Sending message: " + message);
channel.BasicPublish(QUEUE_NAME, QUEUE_NAME, messageProperties, payload);
channel.WaitForConfirmsOrDie();
}
}
You need Publisher Confirms.
As you can read there, you can implement either of the following.
The transaction:
ch.txSelect(); <-- start transaction
ch.basicPublish("", QUEUE_NAME,
MessageProperties.PERSISTENT_BASIC,
"nop".getBytes());
ch.txCommit();<--commit transaction
The message is stored to the queue and to disk.
This way can be slow; if you need performance you shouldn't use it.
Alternatively, you can use the streaming (lightweight) publisher confirms, using:
ch.setConfirmListener(new ConfirmListener() {
public void handleAck(long seqNo, boolean multiple) {
if (multiple) {
unconfirmedSet.headSet(seqNo+1).clear();
} else {
unconfirmedSet.remove(seqNo);
}
}
public void handleNack(long seqNo, boolean multiple) {
// handle the lost messages somehow
}
});
I hope it helps
OK, you always get the ACK for the message you sent because every message is delivered to the default exchange successfully.
PS: you are not sending the message directly to a queue. Once the exchange receives the message it gives you the ACK, and only then does it route the message to all bound queues using the routing keys, if any.
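In other words, a publisher confirm only tells you that the broker (exchange) accepted the message, not that any queue received it. Since the code above already publishes with the mandatory flag set, an unroutable message can be caught via the BasicReturn event; a minimal sketch (attach it before publishing, on the same channel):
channel.BasicReturn += (sender, args) =>
{
    //Fired when a mandatory message could not be routed to any queue,
    //e.g. because the routing key "task_2queue" does not match an existing queue.
    Console.WriteLine(" [!] Returned: {0} - {1}", args.ReplyCode, args.ReplyText);
};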
