I am new to Kafka. My Kafka consumer is not reading messages from the given topic.
I checked with the Kafka console consumer as well; it is not reading either. I don't understand the problem; it was working fine earlier.
public string MessageConsumer(string brokerList, List<string> topics, CancellationToken cancellationToken)
{
    //ConfigurationManager.AutoLoadAppSettings("", "", true);
    string logKey = string.Format("ARIConsumer.StartProducer ==> Topics {0} Key {1} =>", "", string.Join(",", topics));
    string message = string.Empty;
    var conf = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",
        GroupId = "23",
        EnableAutoCommit = false,
        AutoOffsetReset = AutoOffsetResetType.Latest,
    };

    using (var c = new Consumer<Ignore, string>(conf))
    {
        try
        {
            c.Subscribe(topics);
            bool consuming = true;
            // The client will automatically recover from non-fatal errors. You typically
            // don't need to take any action unless an error is marked as fatal.
            c.OnError += (_, e) => consuming = !e.IsFatal;

            while (consuming)
            {
                try
                {
                    TimeSpan timeSpan = new TimeSpan(0, 0, 5);
                    var cr = c.Consume(timeSpan);
                    // Thread.Sleep(5000);
                    if (cr != null)
                    {
                        message = cr.Value;
                        Console.WriteLine("Thread" + Thread.CurrentThread.ManagedThreadId + "Message : " + message);
                        CLogger.WriteLog(ELogLevel.INFO, $"Consumed message Partition '{cr.Partition}' at: '{cr.TopicPartitionOffset} thread: {Thread.CurrentThread.ManagedThreadId}'. Message: {message}");
                        //Console.WriteLine($"Consumed message Partition '{cr.Partition}' at: '{cr.TopicPartitionOffset}'. Topic: {cr.Topic} value :{cr.Value} Timestamp :{DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture)} GrpId: {conf.GroupId}");
                        c.Commit();
                    }
                    Console.WriteLine($"Calling the next Poll ");
                }
                catch (ConsumeException e)
                {
                    CLogger.WriteLog(ELogLevel.ERROR, $"Error occurred: {e.Error.Reason}");
                    Console.WriteLine($"Error occurred: {e.Error.Reason}");
                }
                //consuming = false;
            }
            // Ensure the consumer leaves the group cleanly and final offsets are committed.
            c.Close();
        }
        catch (Exception ex)
        {
        }
    }
    return message;
}
What is the issue with this code, or is there an installation issue with Kafka?
Is there a Producer actively sending data?
Your consumer is starting from the latest offsets because of AutoOffsetReset, so it won't read data that already exists in the topic.
The console consumer also defaults to the latest offset; pass --from-beginning to read existing data.
And if you haven't changed the GroupId, your consumer might have worked once, consumed the data, and committed the offsets for that group. When a consumer starts again in the same group, it resumes from the offset of the last commit (or the end of the topic), not from the beginning.
You also have an empty catch (Exception ex), which might be hiding some other error.
Try removing the TimeSpan from the Consume call so it blocks until a message arrives.
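To rule out the offset/group-id issue, here is a minimal diagnostic sketch, written against the same pre-1.0 Confluent.Kafka API as your code (on 1.x you would build the consumer with ConsumerBuilder and use AutoOffsetReset.Earliest instead); the broker address and topics list are taken from your question:

// Diagnostic sketch: a brand-new GroupId has no committed offsets, so with
// AutoOffsetReset = Earliest the consumer must start from the beginning of
// the topic. If this prints nothing, the topic itself has no data.
var testConf = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = Guid.NewGuid().ToString(), // fresh group => no stored offsets
    AutoOffsetReset = AutoOffsetResetType.Earliest,
    EnableAutoCommit = false,
};

using (var c = new Consumer<Ignore, string>(testConf))
{
    c.Subscribe(topics);
    var cr = c.Consume(TimeSpan.FromSeconds(10));
    Console.WriteLine(cr == null ? "No messages in topic." : $"Got: {cr.Value}");
    c.Close();
}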
I am new to this parallel coding.
I am trying to fire off a list of tasks (in this case, emails to send).
The code below does work, but I am unsure what happens if ONE email fails to send, or a task never finishes for whatever reason.
Would my code just hang on the await line?
What is the solution here, or is it a non-problem?
I need to await OR intercept each task as it finishes, because I need to mark the email as sent correctly, retry it, etc.
Is awaiting them all safe, or should I have an event to intercept each response?
(I do not really care if the odd one takes a while; my key concern is not blocking up the rest of the program.)
Overall this will run in a 5-minute loop, and I wouldn't want future emails to be blocked up, so a timeout per task is an option.
Any help would be great, thank you.
// Create a list of emails to send
List<Task<string>> tasks = new List<Task<string>>();
foreach (var item in lstEmailList) // loop the list of emails to send
{
    tasks.Add(EmailLib.SendEmailOldSmtpAuth(item)); // add each email as a task
}

// now run the list as parallel tasks
await Task.Run(() => Parallel.ForEach(tasks, s =>
{
    //EmailLib.SendEmailOldSmtpAuth(item);
}));

foreach (var task in tasks)
{
    var result = ((Task<string>)task).Result;
    if (result.Contains("PH_OK") == true)
    {
        string strFindId = result.Replace("PH_OK=", "").ToString();
        int EmailID = Int32.Parse(strFindId);
        DbFunc.MarkEmailAsSent(ref SQLConnX, EmailID);
    }
}
In case it is helpful:
public async Task<string> SendEmailOldSmtpAuth(Data_PendingEmails objEmail)
{
    try
    {
        var emailMessage = new MimeMessage();
        string SendToName = "";
        if (objEmail.CarerID > 0)
        {
            SendToName = objEmail.carForename + " " + objEmail.carSurname;
        }
        else
        {
            SendToName = objEmail.cliForename + " " + objEmail.cliSurname;
        }
        if (SendToName == "")
        {
            SendToName = "User";
        }

        emailMessage.From.Add(new MailboxAddress(objEmail.EMailYourName, objEmail.EMailAddress));
        emailMessage.To.Add(new MailboxAddress(SendToName, objEmail.ToEmailAddress));
        emailMessage.Subject = objEmail.EmailSubject;
        emailMessage.Body = new TextPart("html") { Text = objEmail.EmailMessage };

        try
        {
            var client = new SmtpClient();
            await client.ConnectAsync(objEmail.SMTPServer, Int32.Parse(objEmail.SMTPPort), SecureSocketOptions.SslOnConnect);
            await client.AuthenticateAsync(objEmail.SMTPUserName, objEmail.SMTPPassword);
            await client.SendAsync(emailMessage);
            await client.DisconnectAsync(true);
            return "PH_OK=" + objEmail.EmailID.ToString();
        }
        catch (Exception ex)
        {
            var e = ex;
            return e.Message;
        }
    }
    catch (Exception SendEmailOldSmtpAuthOverAll)
    {
        return SendEmailOldSmtpAuthOverAll.Message.ToString();
    }
}
I will try to simulate your scenario with a much simpler example, but the main idea is the same; I do it this way so I can simulate the potential exception.
First of all, you need concurrent calls to a web service, so the best way is to use Task.WhenAll, because this is an I/O operation (as @Charlieface already mentioned).
Let's say that we have a list of emails:
var emailList = new List<string>() { "1", "2", "3", "4", "5", "6" };
Then we need to create an IEnumerable of tasks:
var tasks = emailList.Select(async email =>
{
    var response = await SendEmailAsync(email);
    Console.WriteLine(response);
});
Then we "mock" an exception and append it to task list in order to simulate an exception:
var problematicTask = ThrowExceptionAsync("Error from initial task");
var allTasks = tasks.Append(problematicTask);
Now, in order to fire the tasks and to catch the exceptions we need to do this:
var aggregateTasks = Task.WhenAll(allTasks);
try
{
    await aggregateTasks;
}
catch
{
    AggregateException aggregateException = aggregateTasks.Exception!;
    foreach (var ex in aggregateException.InnerExceptions)
    {
        Console.WriteLine(ex.Message);
    }
}
And as helpers for this example, I created these two arbitrary methods:
async Task<string> SendEmailAsync(string email)
{
    await Task.Delay(1500);
    if (email.Equals("7") || email.Equals("10"))
    {
        await ThrowExceptionAsync("Error from sendEmail");
    }
    return $"Email: {email} sent";
}

async Task ThrowExceptionAsync(string msg)
{
    throw new Exception(msg);
}
If you run this simple example, you can inspect every exception that may be thrown by each call.
Now, in regard to your particular example, I think you need to remove the try/catch blocks inside SendEmailOldSmtpAuth, so that failures surface in the aggregate exception instead of being swallowed, without blocking your app.
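Applied to your code, a rough sketch (reusing lstEmailList, EmailLib.SendEmailOldSmtpAuth, DbFunc.MarkEmailAsSent and SQLConnX from your question as-is) could look like this:

// Start all sends concurrently; no Parallel.ForEach is needed, because each
// task is already running once SendEmailOldSmtpAuth is called.
var tasks = lstEmailList.Select(item => EmailLib.SendEmailOldSmtpAuth(item)).ToList();

var allTasks = Task.WhenAll(tasks);
try
{
    await allTasks;
}
catch
{
    // One or more sends threw; inspect/log/retry them here.
    foreach (var ex in allTasks.Exception!.InnerExceptions)
        Console.WriteLine(ex.Message);
}

// Tasks that completed successfully can still be processed individually.
foreach (var task in tasks)
{
    if (task.Status != TaskStatus.RanToCompletion) continue;
    var result = task.Result; // safe: the task has already completed
    if (result.StartsWith("PH_OK"))
    {
        int emailId = Int32.Parse(result.Replace("PH_OK=", ""));
        DbFunc.MarkEmailAsSent(ref SQLConnX, emailId);
    }
}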
I have a list of offsets with their corresponding partitions, and I need to commit them manually.
To do so, I am looping through the list, assigning the partition to the consumer, and then seeking to the particular offset.
Then I consume the message and pass the resulting ConsumeResult to the Commit method.
Sometimes it executes smoothly, but sometimes it throws a "Local: Waiting for coordinator" exception.
In both cases, when I try consuming messages afterwards, I re-consume the same series of messages I already committed, or should I say tried to commit. Which means I never really could commit them :(
try
{
    foreach (var item in cmdparamslist)
    {
        Partition p = new Partition(Int16.Parse(item.PartitionID));
        TopicPartition tp = new TopicPartition(configuration.GetSection("KafkaSettings").GetSection("Topic").Value, p);
        Offset o = new Offset(long.Parse(item.Offset));
        TopicPartitionOffset tpo = new TopicPartitionOffset(tp, o);
        try
        {
            KafkaConsumer.Assign(tpo);
            await Task.Delay(TimeSpan.FromSeconds(1));
            KafkaConsumer.Seek(tpo);
            var cr = KafkaConsumer.Consume(cts.Token);
            try
            {
                KafkaConsumer.Commit(cr);
            }
            catch (TopicPartitionOffsetException e1)
            {
                Console.WriteLine("exception " + e1);
            }
            catch (KafkaException e)
            {
                Console.WriteLine("exception " + e);
            }
        }
        catch (KafkaException e)
        {
            Console.WriteLine("exception " + e);
        }
    }
    KafkaConsumer.Close();
}
catch (Exception e)
{
    Console.WriteLine("exception " + e);
}
Consumer / Client configuration:
var conf = new ConsumerConfig
{
    GroupId = Guid.NewGuid().ToString(),
    BootstrapServers = configuration.GetSection("KafkaSettings").GetSection("RemoteServers").Value,
    AutoOffsetReset = AutoOffsetReset.Earliest,
    SaslMechanism = SaslMechanism.Gssapi,
    SecurityProtocol = SecurityProtocol.SaslPlaintext,
    EnableAutoCommit = false
    //EnableAutoOffsetStore = false
};
I am using Confluent.Kafka 1.6.2 and .NET 5.
Could someone please help me?
I have the following Function App
[FunctionName("SendEmail")]
public static async Task Run([ServiceBusTrigger("%EmailSendMessageQueueName%", AccessRights.Listen, Connection = AzureFunctions.Connection)] EmailMessageDetails messageToSend,
[ServiceBus("%EmailUpdateQueueName%", AccessRights.Send, Connection = AzureFunctions.Connection)]IAsyncCollector<EmailMessageUpdate> messageResponse,
//TraceWriter log,
ILogger log,
CancellationToken token)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {messageToSend}");
/* Validate input and initialise Mandrill */
try
{
if (!ValidateMessage(messageToSend, log)) // TODO: finish validation
{
log.LogError("Invalid or Unknown Message Content");
throw new Exception("Invalid message content.");
}
}
catch (Exception ex)
{
log.LogError($"Failed to Validate Message data: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
DateTime utcTimeToSend;
try
{
var envTag = GetEnvVariable("Environment");
messageToSend.Tags.Add(envTag);
utcTimeToSend = messageToSend.UtcTimeToSend.GetNextUtcSendDateTime();
DateTime utcExpiryDate = messageToSend.UtcTimeToSend.GetUtcExpiryDate();
DateTime now = DateTime.UtcNow;
if (now > utcExpiryDate)
{
log.LogError($"Stopping sending message because it is expired: {utcExpiryDate}");
throw new Exception($"Stopping sending message because it is expired: {utcExpiryDate}");
}
if (utcTimeToSend > now)
{
log.LogError($"Stopping sending message because it is not allowed to be send due to time constraints: next send time: {utcTimeToSend}");
throw new Exception($"Stopping sending message because it is not allowed to be send due to time constraints: next send time: {utcTimeToSend}");
}
}
catch (Exception ex)
{
log.LogError($"Failed to Parse and/or Validate Message Time To Send: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
/* Submit message to Mandrill */
string errorMessage = null;
IList<MandrillSendMessageResponse> mandrillResult = null;
DateTime timeSubmitted = default(DateTime);
DateTime timeUpdateRecieved = default(DateTime);
try
{
var mandrillApi = new MandrillApi(GetEnvVariable("Mandrill:APIKey"));
var mandrillMessage = new MandrillMessage
{
FromEmail = messageToSend.From,
FromName = messageToSend.FromName,
Subject = messageToSend.Subject,
TrackClicks = messageToSend.Track,
Tags = messageToSend.Tags,
TrackOpens = messageToSend.Track,
};
mandrillMessage.AddTo(messageToSend.To, messageToSend.ToName);
foreach (var passthrough in messageToSend.PassThroughVariables)
{
mandrillMessage.AddGlobalMergeVars(passthrough.Key, passthrough.Value);
}
timeSubmitted = DateTime.UtcNow;
if (String.IsNullOrEmpty(messageToSend.TemplateId))
{
log.LogInformation($"No Message Template");
mandrillMessage.Text = messageToSend.MessageBody;
mandrillResult = await mandrillApi.Messages.SendAsync(mandrillMessage, async: true, sendAtUtc: utcTimeToSend);
}
else
{
log.LogInformation($"Using Message Template: {messageToSend.TemplateId}");
var clock = new Stopwatch();
clock.Start();
mandrillResult = await mandrillApi.Messages.SendTemplateAsync(
mandrillMessage,
messageToSend.TemplateId,
async: true,
sendAtUtc: utcTimeToSend
);
clock.Stop();
log.LogInformation($"Call to mandrill took {clock.Elapsed}");
}
timeUpdateRecieved = DateTime.UtcNow;
}
catch (Exception ex)
{
log.LogError($"Failed to call Mandrill: {ex.Message} => {ex.ReportAllProperties()}");
errorMessage = ex.Message;
}
try
{
MandrillSendMessageResponse theResult = null;
SendMessageStatus status = SendMessageStatus.FailedToSendToProvider;
if (mandrillResult == null || mandrillResult.Count < 1)
{
if (String.IsNullOrEmpty(errorMessage))
{
errorMessage = "Invalid Mandrill result.";
}
}
else
{
theResult = mandrillResult[0];
status = FacMandrillUtils.ConvertToSendMessageStatus(theResult.Status);
}
var response = new EmailMessageUpdate
{
SentEmailInfoId = messageToSend.SentEmailInfoId,
ExternalProviderId = theResult?.Id ?? String.Empty,
Track = messageToSend.Track,
FacDateSentToProvider = timeSubmitted,
FacDateUpdateRecieved = timeUpdateRecieved,
FacErrorMessage = errorMessage,
Status = status,
StatusDetail = theResult?.RejectReason ?? "Error"
};
await messageResponse.AddAsync(response, token).ConfigureAwait(false);
}
catch (Exception ex)
{
log.LogError($"Failed to push message to the update ({AzureFunctions.EmailUpdateQueueName}) queue: {ex.Message} => {ex.ReportAllProperties()}");
throw;
}
}
When I queue up 100 messages, everything runs fine. When I queue up 500+ messages, 499 of them are sent but the last one never is. I also start to get the following errors:
The operation was canceled.
I have Application Insights set up and configured, and I have logging running. I am not able to reproduce this locally, but based on the following end-to-end transaction details from Application Insights, I believe the issue is happening at this point:
await messageResponse.AddAsync(response, token).ConfigureAwait(false);
Application Insights End-to-end transaction
host.json
{
    "logger": {
        "categoryFilter": {
            "defaultLevel": "Information",
            "categoryLevels": {
                "Host": "Warning",
                "Function": "Information",
                "Host.Aggregator": "Information"
            }
        }
    },
    "applicationInsights": {
        "sampling": {
            "isEnabled": true,
            "maxTelemetryItemsPerSecond": 5
        }
    },
    "serviceBus": {
        "maxConcurrentCalls": 32
    }
}
Likely related is this error from Application Insights as well.
Has anyone else had this or similar issues?
If you follow the link from the exception (https://aka.ms/functions-thresholds), you will see the following limitation:
Connections : Number of outbound connections (limit is 300). For information on handling connection limits, see Managing Connections.
You are likely to have hit that one.
In each function call you create a new instance of MandrillApi. You haven't mentioned which library you are using, but I suspect it's creating a new connection for each instance of MandrillApi.
I checked Mandrill Dot Net and yes, it's creating a new HttpClient for each instance:
_httpClient = new HttpClient
{
    BaseAddress = new Uri(BaseUrl)
};
Managing Connections recommends:
In many cases, this connection limit can be avoided by re-using client instances rather than creating new ones in each function. .NET clients like the HttpClient, DocumentClient, and Azure storage clients can manage connections if you use a single, static client. If those clients are re-instantiated with every function invocation, there is a high probability that the code is leaking connections.
Check the documentation of that library to see whether the API client is thread-safe, and if so, reuse it between function invocations.
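A minimal sketch of that pattern, assuming MandrillApi is safe to share across calls (the static field and lazy initialization here are illustrative, not part of the library; GetEnvVariable is the helper from your question):

public static class SendEmailFunction
{
    // Created once per host instance and shared across invocations, so the
    // library's internal HttpClient (and its connections) is not re-created per call.
    private static readonly Lazy<MandrillApi> MandrillClient =
        new Lazy<MandrillApi>(() => new MandrillApi(GetEnvVariable("Mandrill:APIKey")));

    [FunctionName("SendEmail")]
    public static async Task Run(/* same trigger/output bindings as in the question */)
    {
        var mandrillApi = MandrillClient.Value; // reuse instead of 'new MandrillApi(...)'
        // ... build and send the message exactly as before ...
    }
}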
I have a process that reads a message from an Azure Service Bus queue and converts that message to a Video to be encoded by Azure Media Services. I noticed that if the process is kicked off several times in quick succession, the same video was encoded one time right after another. Here is my code that adds the Video to the queue:
public class VideoManager
{
    string _connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
    string _queueName = ConfigurationManager.AppSettings["ServiceBusQueueName"];
    QueueClient _client;

    public VideoManager()
    {
        var conStringBuilder = new ServiceBusConnectionStringBuilder(_connectionString)
        {
            OperationTimeout = TimeSpan.FromMinutes(120)
        };
        var messagingFactory = MessagingFactory.CreateFromConnectionString(conStringBuilder.ToString());
        _client = messagingFactory.CreateQueueClient(_queueName);
    }

    public void Approve(Video video)
    {
        // Set video to approved.
        video.ApprovalStatus = ApprovalStatus.Approved;
        var message = new BrokeredMessage(new VideoMessage(video, VideoMessage.MessageTypes.Approve, string.Empty));
        message.MessageId = video.RowKey;
        _client.Send(message);
    }
}
And the process that reads from the Queue
class Program
{
    static QueueClient client;

    static void Main(string[] args)
    {
        VideoManager videoManager = new VideoManager();
        var connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
        var conStringBuilder = new ServiceBusConnectionStringBuilder(connectionString)
        {
            OperationTimeout = TimeSpan.FromMinutes(120)
        };
        var messagingFactory = MessagingFactory.CreateFromConnectionString(conStringBuilder.ToString());
        client = messagingFactory.CreateQueueClient(ConfigurationManager.AppSettings["ServiceBusQueueName"]);

        Console.WriteLine("Starting: Broadcast Center Continuous Video Processing Job");

        OnMessageOptions options = new OnMessageOptions
        {
            MaxConcurrentCalls = 25,
            AutoComplete = false
        };

        client.OnMessageAsync(async message =>
        {
            bool shouldAbandon = false;
            try
            {
                await HandleMessage(message);
            }
            catch (Exception ex)
            {
                shouldAbandon = true;
                Console.WriteLine(ex.Message);
            }
            if (shouldAbandon)
            {
                await message.AbandonAsync();
            }
        }, options);

        while (true) { }
    }

    async static Task<int> HandleMessage(BrokeredMessage message)
    {
        VideoMessage videoMessage = message.GetBody<VideoMessage>();
        Console.WriteLine(String.Format("Message body: {0}", videoMessage.Video.Title));
        Console.WriteLine(String.Format("Message id: {0}", message.MessageId));
        VideoProcessingService vp = new VideoProcessingService(videoMessage.Video);
        Task task;
        switch (videoMessage.MessageType)
        {
            case VideoMessage.MessageTypes.CreateThumbnail:
                task = new Task(() => vp.ProcessThumbnail(videoMessage.TimeStamp));
                task.Start();
                while (!task.IsCompleted)
                {
                    await Task.Delay(15000);
                    message.RenewLock();
                }
                await task;
                Console.WriteLine(task.Status.ToString());
                Console.WriteLine("Processing Complete");
                Console.WriteLine("Awaiting Message");
                break;
            case VideoMessage.MessageTypes.Approve:
                task = new Task(() => vp.Approve());
                task.Start();
                while (!task.IsCompleted)
                {
                    await Task.Delay(15000);
                    message.RenewLock();
                }
                await task;
                Console.WriteLine(task.Status.ToString());
                Console.WriteLine("Processing Complete");
                Console.WriteLine("Awaiting Message");
                break;
            default:
                break;
        }
        return 0;
    }
}
What I see in the Console Window, if I kick off the process 3 times in a row, is the following:
Message id: 76aca19a-0698-449b-bf58-a24876fc4314
Message id: 76aca19a-0698-449b-bf58-a24876fc4314
Message id: 76aca19a-0698-449b-bf58-a24876fc4314
I thought maybe I did not have the duplicate detection settings correct, but they are there.
I am really at a loss here, as I would expect this to be fairly out of the box behavior. Does duplicate detection only work if the message has been completed, so I can't use OnMessageAsync()?
The issue is not the completion (as it was in the code), but the fact that you have, in essence, multiple consumers (25 concurrent callbacks), and it seems the LockDuration elapses faster than the processing takes. As a result, the message re-appears and is re-processed, and you see the same message ID logged more than once.
Possible solutions are (as I've outlined in a comment above):
Let the OnMessage API manage lock renewal for you (example below)
Manually renew the lock, as you've done, using BrokeredMessage.RenewLock
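For the first option, a minimal sketch (assuming the classic Microsoft.ServiceBus client, which exposes OnMessageOptions.AutoRenewTimeout; HandleMessage is the method from your question):

OnMessageOptions options = new OnMessageOptions
{
    MaxConcurrentCalls = 25,
    AutoComplete = false,
    // The client keeps renewing the peek-lock for up to this long while the
    // callback is still running, so LockDuration no longer has to outlast
    // the processing time.
    AutoRenewTimeout = TimeSpan.FromMinutes(30)
};

client.OnMessageAsync(async message =>
{
    await HandleMessage(message);
    await message.CompleteAsync(); // settle the message so it is not redelivered
}, options);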
There is a line of code missing from your HandleMessage code.
async static Task<int> HandleMessage(BrokeredMessage message)
{
    VideoMessage videoMessage = message.GetBody<VideoMessage>();
    await message.CompleteAsync(); // This line...
    Console.WriteLine(String.Format("Message id: {0}", message.MessageId));
    // Processes Message
}
So yes, you have to settle the message with Complete, Defer, etc.
Also see this answer; I also found this, which may be useful for understanding how duplicate detection works.
Here is the scenario.
At the start:
Ready: 2
UnAcked: 0
Once consumer.Queue.Dequeue(1000, out bdea); runs:
Ready: 1
UnAcked: 1
This is expected: we have read one message and not acknowledged it yet.
The problem is that when channel.BasicAck(bdea.DeliveryTag, false); runs:
Ready: 0
UnAcked: 1
A message that was in the Ready state became UnAcked, and the Ready count becomes "0"!
Now, in the while loop, when we look for the second message with consumer.Queue.Dequeue(1000, out bdea);, bdea returns null, as there is nothing in the Ready state.
This is the issue: whenever an Ack happens, it drags a message from the Ready queue to UnAcked. The next time around I lose this UnAcked message, which was never dequeued.
But if I stop the process (a console app), the UnAcked message goes back to the Ready state.
Assume there are 10 messages in the Ready state at the start; at the end, only 5 will be processed and you will find 5 messages in the UnAcked state. Each Ack makes the next message UnAcked. If I stop and run again (5 messages in Ready state), guess what: 3 messages get processed and 2 end up UnAcked. (Dequeue only picks up half of the messages.)
Here is my code (stripped down to just the RabbitMQ functionality; the issue reproduces with this code as well):
public class TestMessages
{
    private ConnectionFactory factory = new ConnectionFactory();
    string billingFileId = string.Empty;
    private IConnection connection = null;
    private IModel channel = null;

    public void Listen()
    {
        try
        {
            #region CONNECT
            factory.AutomaticRecoveryEnabled = true;
            factory.UserName = ConfigurationManager.AppSettings["MQUserName"];
            factory.Password = ConfigurationManager.AppSettings["MQPassword"];
            factory.VirtualHost = ConfigurationManager.AppSettings["MQVirtualHost"];
            factory.HostName = ConfigurationManager.AppSettings["MQHostName"];
            factory.Port = Convert.ToInt32(ConfigurationManager.AppSettings["MQPort"]);
            #endregion

            RabbitMQ.Client.Events.BasicDeliverEventArgs bdea;
            using (connection = factory.CreateConnection())
            {
                string jobId = string.Empty;
                using (IModel channel = connection.CreateModel())
                {
                    while (true) // KEEP LISTENING
                    {
                        if (!channel.IsOpen)
                            throw new Exception("Channel is closed"); // Exit the loop.

                        QueueingBasicConsumer consumer = new QueueingBasicConsumer(channel);
                        // Prefetch 1 message
                        channel.BasicQos(0, 1, false);
                        String consumerTag = channel.BasicConsume(ConfigurationManager.AppSettings["MQQueueName"], false, consumer);
                        try
                        {
                            // Pull out the message
                            consumer.Queue.Dequeue(1000, out bdea);
                            if (bdea == null)
                            {
                                // Empty queue
                            }
                            else
                            {
                                IBasicProperties props = bdea.BasicProperties;
                                byte[] body = bdea.Body;
                                string message = System.Text.Encoding.Default.GetString(bdea.Body);
                                try
                                {
                                    channel.BasicAck(bdea.DeliveryTag, false);
                                    //// Heavy work starts now......
                                }
                                catch (Exception ex)
                                {
                                    // Log
                                }
                            }
                        }
                        catch (Exception ex)
                        {
                            // Log it
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            WriteLog.Error(ex);
        }
        finally
        {
            //CleanUp();
        }
    }
}
Am I missing something?
I tried with the "Subscription" rather than the Channel and it works now, clears up the message queue. I referred to this post.
Here is the working code:
public void SubscribeListner()
{
    Subscription subscription = null;
    const string uploaderExchange = "myQueueExchange";
    string queueName = "myQueue";

    while (true)
    {
        try
        {
            if (subscription == null)
            {
                try
                {
                    // CONNECT code: try to open the connection
                    connection = factory.CreateConnection();
                }
                catch (BrokerUnreachableException ex)
                {
                    // You probably want to log the error and cancel after N tries,
                    // otherwise start the loop over to try to connect again after a second or so.
                    //log.Error(ex);
                    continue;
                }

                // Create channel
                channel = connection.CreateModel();
                // This instructs the channel not to prefetch more than one message
                channel.BasicQos(0, 1, false);
                // Create a new, durable exchange
                channel.ExchangeDeclare(uploaderExchange, ExchangeType.Direct, true, false, null);
                // Create a new, durable queue
                channel.QueueDeclare(queueName, true, false, false, null);
                // Bind the queue to the exchange
                channel.QueueBind(queueName, uploaderExchange, queueName);
                // Create the subscription on the queue (noAck: false)
                subscription = new Subscription(channel, queueName, false);
            }

            BasicDeliverEventArgs eventArgs;
            var gotMessage = subscription.Next(250, out eventArgs); // 250 milliseconds
            if (gotMessage)
            {
                if (eventArgs == null)
                {
                    // This means the connection is closed.
                    //DisposeAllConnectionObjects();
                    continue; // move on to a new iteration
                }
                // Process message
                subscription.Ack();
                //channel.BasicAck(eventArgs.DeliveryTag, false);
            }
        }
        catch (OperationInterruptedException ex)
        {
            //log.Error(ex);
            //DisposeAllConnectionObjects();
        }
        catch (Exception ex)
        {
        }
    }
}