I have created a newsletter system that allows me to specify which members should receive the newsletter. I then loop through the list of members that meet the criteria and, for each member, generate a personalized message and send them the email asynchronously.
When I send out the email, I am using ThreadPool.QueueUserWorkItem.
For some reason, a subset of the members are getting the email twice. In my last batch, I was only sending out to 712 members, yet a total of 798 messages ended up being sent.
I am logging the messages that get sent out and I was able to tell that the first 86 members received the message twice. Here is the log (in the order the messages were sent)
No. Member Date
1. 163992 3/8/2012 12:28:13 PM
2. 163993 3/8/2012 12:28:13 PM
...
85. 164469 3/8/2012 12:28:37 PM
86. 163992 3/8/2012 12:28:44 PM
87. 163993 3/8/2012 12:28:44 PM
...
798. 167691 3/8/2012 12:32:36 PM
Each member should receive the newsletter once; however, as you can see, member 163992 received messages #1 and #86, member 163993 received messages #2 and #87, and so on.
The other thing to note is that there was a 7 second delay between sending message #85 and #86.
I have reviewed the code several times and ruled out pretty much all of the code as being the cause of it, except for possibly the ThreadPool.QueueUserWorkItem.
This is the first time I have worked with ThreadPool, so I am not that familiar with it. Is it possible that some sort of race condition is causing this behavior?
Code sample:
foreach (var recipient in recipientsToEmail)
{
_emailSender.SendMemberRegistrationActivationReminder(eventArgs.Newsletter, eventArgs.RecipientNotificationInfo, previewEmail: string.Empty);
}
public void SendMemberRegistrationActivationReminder(DomainObjects.Newsletters.Newsletter newsletter, DomainObjects.Members.MemberEmailNotificationInfo recipientNotificationInfo, string previewEmail)
{
//Build message here .....
//Send the message
this.SendEmailAsync(fromAddress: _settings.WebmasterEmail,
toAddress: previewEmail.IsEmailFormat()
? previewEmail
: recipientNotificationInfo.Email,
subject: emailSubject,
body: completeMessageBody,
memberId: previewEmail.IsEmailFormat()
? null //if this is a preview message, do not mark it as being sent to this member
: (int?)recipientNotificationInfo.RecipientMemberPhotoInfo.Id,
newsletterId: newsletter.Id,
newsletterTypeId: newsletter.NewsletterTypeId,
utmCampaign: utmCampaign,
languageCode: recipientNotificationInfo.LanguageCode);
}
private void SendEmailAsync(string fromAddress, string toAddress, string subject, MultiPartMessageBody body, int? memberId, string utmCampaign, string languageCode, int? newsletterId = null, DomainObjects.Newsletters.NewsletterTypeEnum? newsletterTypeId = null)
{
var urlHelper = UrlHelper();
var viewOnlineUrlFormat = urlHelper.RouteUrl("UtilityEmailRead", new { msgid = "msgid", hash = "hash" });
ThreadPool.QueueUserWorkItem(state => SendEmail(fromAddress, toAddress, subject, body, memberId, newsletterId, newsletterTypeId, utmCampaign, viewOnlineUrlFormat, languageCode));
}
Are you sure the query you are running to get the list of members to send the email to does not have duplicates in it? Are you joining to another table? What you could do is:
List<DomainObjects.Members.MemberEmailNotificationInfo> list = GetListFromDatabase();
list = list.Distinct().ToList();
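Note that Distinct() on a class type compares references unless MemberEmailNotificationInfo overrides Equals/GetHashCode, so de-duplicating by a key is more reliable. A small sketch, keyed here on the Email property that appears in the posted code (a member id would work just as well):
// De-duplicate by a natural key instead of relying on reference equality.
list = list
    .GroupBy(r => r.Email)
    .Select(g => g.First())
    .ToList();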
Having 800+ threads running on the server is not a good practice!
Even though you are using the ThreadPool, work items are queued on the server and only run as earlier threads return to the pool and free up resources. This can take several minutes, and race conditions or other concurrency issues can occur during that time.
You could instead queue a single work item that processes one protected list:
lock (recipientsToEmail)
{
ThreadPool.QueueUserWorkItem(t =>
{
// enumerate recipientsToEmail and send email
});
}
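A rough sketch of that idea; SendEmail here is a hypothetical synchronous send helper standing in for whatever send logic you already have, and the point is that only one work item is queued for the whole batch:
// Snapshot the recipients, then queue a single work item for the entire batch.
var recipients = recipientsToEmail.ToList();

ThreadPool.QueueUserWorkItem(_ =>
{
    foreach (var recipient in recipients)
    {
        // Build and send the message synchronously here, rather than
        // queueing another work item per recipient.
        SendEmail(recipient); // hypothetical synchronous send helper
    }
});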
Things to check (I'm assuming you have a way to mock the sending of emails):
Is the number of duplicate emails always exactly the same? What if you increase/decrease the number of input values? Is it always the same user IDs which are duplicated?
Is SendEmail() doing anything of significance? (I don't see your code for it)
Is there a reason that you aren't using the framework's SendAsync() method?
Do you get the same behavior without multithreading?
For what it's worth, sending bulk email from your own site--even when it is completely legitimate--is not always worth the trouble. Spam blocking services are very aggressive and you don't want your domain to end up blacklisted. Third party services remove that risk, provide many tools, and also manage this part of the process for you.
If this code:
foreach (var recipient in recipientsToEmail)
{
_emailSender.SendMemberRegistrationActivationReminder(eventArgs.Newsletter
,eventArgs.RecipientNotificationInfo, previewEmail: string.Empty);
}
matches what you are actually doing... then you have an obvious bug: you are looping with foreach but never using the loop variable recipient, so you will send the same email to eventArgs.RecipientNotificationInfo once for each entry in recipientsToEmail.
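For illustration, the loop would need to pass the loop variable through, assuming the method accepts each recipient's notification info:
foreach (var recipient in recipientsToEmail)
{
    // Pass the current recipient, not the same eventArgs property every time.
    _emailSender.SendMemberRegistrationActivationReminder(
        eventArgs.Newsletter, recipient, previewEmail: string.Empty);
}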
One common cause of tasks getting performed twice in code that queues the task to a background thread is faulty error handling. You might double-check your code to make sure that, when an error occurs, you don't always retry regardless of the type of error (some errors warrant a retry; others don't).
Having said that, the code you've posted doesn't include enough information to definitively answer your question; there are many possibilities.
FWIW, are you aware that the SmtpClient class has a SendAsync() method that doesn't require the use of a separate worker thread?
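For reference, a minimal sketch using the Task-based SendMailAsync (SendAsync is its older event-based counterpart); the host and body values here are placeholders:
using System.Net.Mail;

var smtp = new SmtpClient("smtp.example.com");                             // placeholder host
var message = new MailMessage(fromAddress, toAddress, subject, bodyText);  // bodyText is a placeholder

// Returns a Task, so no manual ThreadPool work item is required.
await smtp.SendMailAsync(message);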
In your code sample, we can't see where your logging takes place.
Maybe the method that sends the email erroneously concluded that something went wrong, and the system then retried, which could result in an email being sent twice.
Also, as written in other answers and comments, I would check again that I don't get duplicate entries in the list of recipients, and test it in a non-parallel context.
If the processing of an Azure Service Bus message depends on another resource, e.g. an API or a database service, and this resource is not available, not calling CompleteMessageAsync() is not an option, because the message will be immediately received again until the Max Delivery Count is reached, and then put into the DLQ. If an API is down for maintenance, we want to wait a bit before retrying.
One of the answers to this question has the general steps for deferring and receiving deferred messages. This is a little better than Microsoft's documentation, but not enough for me to understand the intent of the API, and how it is to be implemented in a hosted service that basically sits in ServiceBusProcessor.StartProcessingAsync all day long.
This is the basic structure of my service:
public class ServiceBusWatcher : IHostedService, IDisposable
{
    private readonly string connectionString = "...";   // from configuration (not shown)
    private readonly string queueName = "...";          // from configuration (not shown)
    private ServiceBusProcessor processor;

    public Task StartAsync(CancellationToken stoppingToken)
    {
        ReceiveMessagesAsync();
        return Task.CompletedTask;
    }

    private async void ReceiveMessagesAsync()
    {
        ServiceBusClient client = new ServiceBusClient(connectionString);
        processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
        processor.ProcessMessageAsync += MessageHandler;
        // An error handler must be registered before the processor is started.
        processor.ProcessErrorAsync += args => Task.CompletedTask;
        await processor.StartProcessingAsync();
    }

    async Task MessageHandler(ProcessMessageEventArgs args)
    {
        // a dependency is not available that allows me to process a message. so:
        await args.DeferMessageAsync(args.Message);
    }

    // StopAsync and Dispose omitted
}
Once the message is deferred, it is my understanding that the processor will not get to it anymore (or will it?). Instead, I have to use ReceiveDeferredMessageAsync() to receive it, along with the sequence number of the originally received message.
In my case, it will make sense to wait minutes or hours before trying again.
This could be done with a separate service that uses a timer and an explicit call to ReceiveDeferredMessageAsync(), as opposed to using a ServiceBusProcessor. I also suppose that the deferred message sequence numbers will have to be persisted in non-volatile storage so that they don't get lost.
Does this sound like a viable approach? I don't like having to remember sequence numbers just to get to a message later. It goes against everything that using a message queue brings to the table in the first place.
Or, instead of deferring, I could just post a new "internal" message with the sequence number and use the ScheduledEnqueueTimeUtc property to delay receiving it. Once I receive this message, I could call ReceiveDeferredMessageAsync() with that sequence number to get to the original message. This seems elegant at the surface, but messages could quickly multiply if there is a longer outage of a dependency.
Another idea that could work without another service: I could complete and repost the payload of the message and set ScheduledEnqueueTimeUtc to a time in the future, as described in another answer to the question I mentioned earlier. Assuming that this works (Microsoft's documentation does not mention what this property is for), it seems simple and clean, and I like simple.
How have you solved this? Is there a better/preferred way that balances low complexity with high robustness without requiring a large amount of code?
Deferring a message works when you know which message you want to retrieve later and your receiver has the message sequence number saved so it can retrieve the deferred message. If the receiver has no ability to save the sequence number, then delaying the message is the better option. Delaying a message means copying the original message data into a newly scheduled message and completing the original. That way the consumer neither has to hold on to the message sequence number nor initiate the retrieval of a specific message.
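A minimal sketch of that delay-by-rescheduling idea inside a processor handler, using the Azure.Messaging.ServiceBus types from the question; DependencyIsAvailable and the 15-minute delay are placeholders, and the sender creation assumes access to the ServiceBusClient and queue name from the question's service:
async Task MessageHandler(ProcessMessageEventArgs args)
{
    if (!DependencyIsAvailable()) // hypothetical availability check
    {
        // Copy the payload and properties into a new message scheduled for later,
        // then complete the original so it is not redelivered immediately.
        var retry = new ServiceBusMessage(args.Message)
        {
            ScheduledEnqueueTime = DateTimeOffset.UtcNow.AddMinutes(15)
        };

        // In real code, reuse a long-lived sender instead of creating one per message.
        await using var sender = client.CreateSender(queueName);
        await sender.SendMessageAsync(retry);
        await args.CompleteMessageAsync(args.Message);
        return;
    }

    // Normal processing, then complete.
    await args.CompleteMessageAsync(args.Message);
}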
I am trying to create a command with DSharpPlus that will send multiple messages over time. However, the loop just stops after 5 messages have been sent. In order to test the fact that it wasn't an error in my code (at least an obvious one) I created another extremely simple loop, and once again, it maxed out at 5. The test I used is:
[Command("test")]
public async Task Test(CommandContext ctx)
{
for(int i = 0; i < 50; i++)
{
await ctx.RespondAsync(i.ToString());
}
}
So, if this were to work properly, the bot would send a message for every integer until reaching 50. However, it stops after the integer 4. How can I fix this?
Discord has this system in place called "Rate limits". They prevent you from overloading the server with too many requests (the HTTP error you'll receive is 429, too many requests).
To handle this, DSharpPlus has a built-in queueing system that takes Discord's rate-limit headers into account to make sure these messages are still sent.
Also, I recommend not sending so many messages like this. To avoid these rate-limit errors, send as much data as possible in one message instead of separating it into 50 messages sent in quick succession.
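For example, the test loop from the question can be collapsed into a single response, using only the RespondAsync call already shown (and keeping Discord's 2,000-character message limit in mind):
[Command("test")]
public async Task Test(CommandContext ctx)
{
    // Build one message instead of sending 50 small ones.
    var sb = new System.Text.StringBuilder();
    for (int i = 0; i < 50; i++)
    {
        sb.AppendLine(i.ToString());
    }

    await ctx.RespondAsync(sb.ToString());
}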
I hope this answers your question.
That aside, Thanks for using DSharpPlus :)
The normal, expected behaviour for the code below would be that ReceiveAsync looks at the Azure queue for up to 1 minute before returning null, or a message if one is received. The intended use for this is to have an IoT hub resource, where multiple messages may be added to a queue intended for one of several DeviceClient objects. Each DeviceClient continuously polls this queue to receive messages intended for it. Messages for other DeviceClients are thus left in the queue for those others.
The actual behaviour is that ReceiveAsync is immediately returning null each time it's called, with no delay. This is regardless of the value that is given with TimeSpan - or if no parameters are given (and the default time is used).
So, rather than seeing 1 log item per minute stating there was a null message received, I'm getting 2 log items per second (!). This behaviour is different from a few months ago, so I started some research - with little result so far.
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Client;
public static TimeSpan receiveMessageWaitTime = new TimeSpan(0, 1 , 0);
Microsoft.Azure.Devices.Client.Message receivedMessage = null;
deviceClient = DeviceClient.CreateFromConnectionString(Settings.lastKnownConnectionString, Microsoft.Azure.Devices.Client.TransportType.Amqp);
// This code is within an infinite loop/task/with try/except code
if(deviceClient != null)
{
receivedMessage = await deviceClient.ReceiveAsync(receiveMessageWaitTime);
if(receivedMessage != null)
{
string Json = Encoding.ASCII.GetString(receivedMessage.GetBytes());
// Handle the message
}
else
{
// Log the fact that we got a null message, and try again later
}
await Task.Delay(500); // Give the CPU some time, this is an infinite loop after all.
}
I looked at the Azure hub and noticed 8 messages in the queue. I then added 2 more; neither of the new messages was received, and the queue now holds 10 items.
I did notice this question: Azure ServiceBus: Client.Receive() returns null for messages > 64 KB
But I have no way to see whether there is indeed a message that big currently in the queue (since ReceiveAsync returns null...)
As such the questions:
Could you preview the messages in the queue?
Could you get a queue size, e.g. ask the number of messages in the queue before getting them?
Could you delete messages from the queue without getting them?
Could you create a callback based receive instead of an infinite loop? (I guess internally the code would just do a peek and the same as we are already doing)
Any help would be greatly appreciated.
If you are using Azure Service Bus, I recommend the Service Bus Explorer tool: with it you can preview messages, get the number of messages in a queue, and also delete messages without receiving them.
I'm using RabbitMQ in C# with the EasyNetQ library. I'm using a pub/sub pattern here. I still have a few issues that I hope anyone can help me with:
When there's an error while consuming a message, it's automatically moved to an error queue. How can I implement retries (so that it's placed back on the originating queue, and when it fails to process X times, it's moved to a dead letter queue)?
As far as I can see there's always 1 error queue that's used to dump messages from all other queues. How can I have 1 error queue per type, so that each queue has its own associated error queue?
How can I easily retry messages that are in an error queue? I tried Hosepipe, but it just republishes the messages to the error queue instead of the originating queue. I don't really like this option either because I don't want to be fiddling around in a console. Preferably I'd just program against the error queue.
Anyone?
The problem you are running into with EasyNetQ/RabbitMQ is that it's much more "raw" when compared to other messaging services like SQS or Azure Service Bus/Queues, but I'll do my best to point you in the right direction.
Question 1.
This will be on you to do. The simplest way is to no-ack a message in RabbitMQ/EasyNetQ, and it will be placed at the head of the queue for you to retry. This is not really advisable, because it will be retried almost immediately (with no time delay) and will also block other messages from being processed (if you have a single subscriber with a prefetch count of 1).
I've seen other implementations that use a "MessageEnvelope", i.e. a wrapper class: when a message fails, you increment a retry counter on the MessageEnvelope and redeliver the message back onto the queue. You would have to do this yourself and write the wrapping code around your message handlers; it would not be a function of EasyNetQ.
Using the above, I've also seen people use envelopes, but allow the message to be dead lettered. Once it's on the dead letter queue, there is another application/worker reading items from the dead letter queue.
All of the approaches above share a small issue in that there isn't really any nice way to have a logarithmic/exponential/any sort of increasing delay in processing the message. You can "hold" the message in code for some time before returning it to the queue, but it's not a nice workaround.
Out of all of these options, your own custom application reading the dead letter queue and deciding whether to reroute the message based on an envelope that contains the retry count is probably the best way.
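A minimal sketch of such an envelope; the type and property names here are illustrative and not part of EasyNetQ:
public class MessageEnvelope<T>
{
    // The original message being wrapped.
    public T Payload { get; set; }

    // Incremented each time handling fails and the envelope is republished.
    public int RetryCount { get; set; }

    public DateTime FirstFailedAtUtc { get; set; }
}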
Question 2.
You can specify a dead letter exchange per queue using the advanced API (https://github.com/EasyNetQ/EasyNetQ/wiki/The-Advanced-API#declaring-queues). However, this means you will have to use the advanced API pretty much everywhere, as the simple IBus subscribe/publish implementation looks for queues that are named based on both the message type and the subscriber name. Using a custom queue declare means you are going to be handling the naming of your queues yourself, which means that when you subscribe, you will need to know the name of what you want, etc. No more auto subscribing for you!
Question 3.
An Error Queue/Dead Letter Queue is just another queue. You can listen to this queue and do what you need to do with it. But there is not really any out of the box solution that sounds like it would fit your needs.
I've implemented exactly what you describe. Here are some tips based on my experience and related to each of your questions.
Q1 (how to retry X times):
For this, you can use IMessage.Body.BasicProperties.Headers. When you consume a message off an error queue, just add a header with a name that you choose. Look for this header on each message that comes into the error queue and increment it. This will give you a running retry count.
It's very important that you have a strategy for what to do when a message exceeds the retry limit of X. You don't want to lose that message. In my case, I write the message to disk at that point. It gives you lots of helpful debugging information to come back to later, because EasyNetQ automatically wraps your originating message with error info. It also has the original message so that you can, if you like, manually (or maybe automated, through some batch re-processing code) requeue the message later in some controlled way.
You can look at the code in the Hosepipe utility to see a good way of doing this. In fact, if you follow the pattern you see there then you can even use Hosepipe later to requeue the messages if you need to.
Q2 (how to create an error queue per originating queue):
You can use the EasyNetQ Advanced Bus to do this cleanly. Use IBus.Advanced.Container.Resolve<IConventions> to get at the conventions interface. Then you can set the conventions for the error queue naming with conventions.ErrorExchangeNamingConvention and conventions.ErrorQueueNamingConvention. In my case I set the convention to be based on the name of the originating queue so that I get a queue/queue_error pair of queues every time I create a queue.
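A rough sketch of setting those conventions; the exact delegate signatures have changed between EasyNetQ versions, so treat this as illustrative rather than copy-paste:
var conventions = bus.Advanced.Container.Resolve<IConventions>();

// Name the error exchange/queue after the originating queue so each queue
// gets its own queue/queue_error pair.
conventions.ErrorQueueNamingConvention = info => info.Queue + "_error";
conventions.ErrorExchangeNamingConvention = info => "ErrorExchange_" + info.Queue;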
Q3 (how to process messages in the error queues):
You can declare a consumer for the error queue the same way you do any other queue. Again, the AdvancedBus lets you do this cleanly by specifying that the type coming off of the queue is EasyNetQ.SystemMessage.Error. So, IAdvancedBus.Consume<EasyNetQ.SystemMessage.Error>() will get you there. Retrying simply means republishing to the original exchange (paying attention to the retry count you put in the header (see my answer to Q1, above), and information in the Error message that you consumed off the error queue can help you find the target for republishing.
I know this is an old post but - just in case it helps someone else - here is my self-answered question (I needed to ask it because existing help was not enough) that explains how I implemented retrying failed messages on their original queues. The following should answer your question #1 and #3. For #2, you may have to use the Advanced API, which I haven't used (and I think it defeats the purpose of EasyNetQ; one might as well use RabbitMQ client directly). Also consider implementing IConsumerErrorStrategy, though.
1) Since there can be multiple consumers of a message and not all of them may need to retry it, I have a Dictionary<consumerId, TryInfo> in the body of the message, as EasyNetQ does not (out of the box) support complex types in message headers.
public interface IMessageType
{
int MsgTypeId { get; }
Dictionary<string, TryInfo> MsgTryInfo {get; set;}
}
2) I have implemented a class RetryEnabledErrorMessageSerializer : IErrorMessageSerializer that just updates the TryCount and other information every time it is called by the framework. I attach this custom serializer to the framework on a per-consumer basis via the IoC support provided by EasyNetQ.
public class RetryEnabledErrorMessageSerializer<T> : IErrorMessageSerializer where T : class, IMessageType
{
    private readonly string _consumerId;

    public RetryEnabledErrorMessageSerializer(string consumerId)
    {
        _consumerId = consumerId; // the subscription id passed in at registration time
    }

    public string Serialize(byte[] messageBody)
    {
        string stringifiedMsgBody = Encoding.UTF8.GetString(messageBody);
        var objectifiedMsgBody = JObject.Parse(stringifiedMsgBody);
        // Add/update retry information in objectifiedMsgBody here.
        // I have a dictionary that saves <key: consumerId, value: TryInfo>
        return JsonConvert.SerializeObject(objectifiedMsgBody);
    }

    // Any remaining IErrorMessageSerializer members are omitted here for brevity.
}
And in my EasyNetQ wrapper class:
public void SetupMessageBroker(string givenSubscriptionId, bool enableRetry = false)
{
if (enableRetry)
{
_defaultBus = RabbitHutch.CreateBus(currentConnString,
serviceRegister => serviceRegister.Register<IErrorMessageSerializer>(serviceProvider => new RetryEnabledErrorMessageSerializer<IMessageType>(givenSubscriptionId))
);
}
else // EasyNetQ's DefaultErrorMessageSerializer will wrap error messages
{
_defaultBus = RabbitHutch.CreateBus(currentConnString);
}
}
public bool SubscribeAsync<T>(Func<T, Task> eventHandler, string subscriptionId)
{
IMsgHandler<T> currMsgHandler = new MsgHandler<T>(eventHandler, subscriptionId);
// Using the msgHandler allows to add a mediator between EasyNetQ and the actual callback function
// The mediator can transmit the retried msg or choose to ignore it
return _defaultBus.SubscribeAsync<T>(subscriptionId, currMsgHandler.InvokeMsgCallbackFunc).Queue != null;
}
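For context, using the wrapper above might look roughly like this; MessageBrokerWrapper and OrderPlacedMessage are hypothetical names, and the message type would implement IMessageType:
// Hypothetical wrapper class holding SetupMessageBroker and SubscribeAsync from above.
var broker = new MessageBrokerWrapper();
broker.SetupMessageBroker("order-report-consumer", enableRetry: true);

broker.SubscribeAsync<OrderPlacedMessage>(
    msg =>
    {
        // Handle the message; the MsgHandler mediator decides whether this
        // delivery is a valid try before this callback is invoked.
        return Task.CompletedTask;
    },
    subscriptionId: "order-report-consumer");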
3) Once the message is added to the default error queue, you can have a simple console app/windows service that periodically republishes existing error messages on their original queues. Something like:
var client = new ManagementClient(AppConfig.BaseAddress, AppConfig.RabbitUsername, AppConfig.RabbitPassword);
var vhost = client.GetVhostAsync("/").Result;
var aliveRes = client.IsAliveAsync(vhost).Result;
var errQueue = client.GetQueueAsync(Constants.EasyNetQErrorQueueName, vhost).Result;
var crit = new GetMessagesCriteria(long.MaxValue, Ackmodes.ack_requeue_false);
var errMsgs = client.GetMessagesFromQueueAsync(errQueue, crit).Result;
foreach (var errMsg in errMsgs)
{
var innerMsg = JsonConvert.DeserializeObject<Error>(errMsg.Payload);
var pubInfo = new PublishInfo(innerMsg.RoutingKey, innerMsg.Message);
pubInfo.Properties.Add("type", innerMsg.BasicProperties.Type);
pubInfo.Properties.Add("correlation_id", innerMsg.BasicProperties.CorrelationId);
pubInfo.Properties.Add("delivery_mode", innerMsg.BasicProperties.DeliveryMode);
var pubRes = client.PublishAsync(client.GetExchangeAsync(innerMsg.Exchange, vhost).Result, pubInfo).Result;
}
4) I have a MessageHandler class that contains a callback func. Whenever a message is delivered to the consumer, it goes to the MessageHandler, which decides if the message try is valid and calls the actual callback if so. If try is not valid (maxRetriesExceeded/the consumer does not need to retry anyway), I ignore the message. You can choose to Dead Letter the message in this case.
public interface IMsgHandler<T> where T: class, IMessageType
{
Task InvokeMsgCallbackFunc(T msg);
Func<T, Task> MsgCallbackFunc { get; set; }
bool IsTryValid(T msg, string refSubscriptionId); // Calls callback only
// if Retry is valid
}
Here is the mediator function in MsgHandler that invokes the callback:
public async Task InvokeMsgCallbackFunc(T msg)
{
if (IsTryValid(msg, CurrSubscriptionId))
{
await this.MsgCallbackFunc(msg);
}
else
{
// Do whatever you want
}
}
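A possible shape for IsTryValid, based on the MsgTryInfo dictionary described above; the retry limit and TryInfo's TryCount member are assumptions:
public bool IsTryValid(T msg, string refSubscriptionId)
{
    const int maxRetries = 3; // assumed limit

    // First delivery: no retry info has been recorded yet for this consumer.
    if (msg.MsgTryInfo == null || !msg.MsgTryInfo.ContainsKey(refSubscriptionId))
        return true;

    // TryCount is an assumed property of TryInfo. Additional checks
    // (e.g. "this consumer never retries") could also go here.
    return msg.MsgTryInfo[refSubscriptionId].TryCount <= maxRetries;
}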
Here, I have implemented a NuGet package (EasyDeadLetter) for this purpose, which can be adopted with minimal changes in any project.
All you need to do is follow these four steps:
First of all, decorate your class with the QueueAttribute.
[Queue("Product.Report", ExchangeName = "Product.Report")]
public class ProductReport { }
The second step is to define your dead-letter queue with the same QueueAttribute, and have the dead-letter class inherit from the main class.
[Queue("Product.Report.DeadLetter", ExchangeName = "Product.Report.DeadLetter")]
public class ProductReportDeadLetter : ProductReport { }
Now, it’s time to decorate your main queue object with the EasyDeadLetter attribute and set the type of dead-letter queue.
[EasyDeadLetter(DeadLetterType = typeof(ProductReportDeadLetter))]
[Queue("Product.Report", ExchangeName = "Product.Report")]
public class ProductReport { }
In the final step, you need to register EasyDeadLetterStrategy as the default error handler (IConsumerErrorStrategy).
services.AddSingleton<IBus>(RabbitHutch.CreateBus("connectionString",
    serviceRegister =>
    {
        serviceRegister.Register<IConsumerErrorStrategy, EasyDeadLetterStrategy>();
    }));
That’s all. From now on, any failed message will be moved to the related dead-letter queue.
See more details here:
GitHub Repository
NuGet Package
My current setup includes a Windows service which picks up a message from the local queue, extracts the information, and puts it into my SQL database. According to my design:
The service picks up the message from the queue (I am using Peek() here).
It sends the data to the database.
If for some reason I get an exception while saving to the database, the message is still in the queue, which to me is reliable.
I am logging the errors so that a user can see what the issue is and fix it.
Exception example: if the DB connection is lost while saving the messages to the database, the messages are not lost because they are still in the queue. I don't commit until I get an acknowledgement from the DB that the message was inserted, so a user can check the logs, restore the DB connection, and everything returns to normal without losing any messages from the queue.
But consider another scenario: the messages I get in the queue come from a 3rd party and follow a standard schema, and that schema does not change. However, I have seen cases where I get format exceptions, and since the message is never committed it stays at the front of the queue. At that point the message becomes a bottleneck: the service picks up the same message again, gets the same exception, and loops infinitely unless that message is removed or moved to the end of the queue.
Looking at removing the message: if I decide based only on the format exception, I might be wrong, since I might encounter other kinds of exceptions in the future.
Is there a way I can put this message at the end of the queue instead of back at the beginning?
Need some advice on how to proceed further.
Note: The queue is transactional.
As far as I'm aware, MSMQ doesn't automatically dump messages to fail queues. Either way you handle it, it's only a few lines of code (Bill, Michael, and I recommend a fail queue). As for the fail queue itself, you could simply create one named .\private$\queuename_fail.
Surviving poison messages in MSMQ is a decent article on this exact topic, and it has an example app and source code at the end.
private readonly MessageQueue _failQueue;
private readonly MessageQueue _messageQueue;
/* Other code here (cursor, peek action, run method, initialization etc) */
private void dumpToFailQueue(Message message)
{
var oldId = message.Id;
_failQueue.Send(message, MessageQueueTransactionType.Single);
// Remove the poisoned message
_messageQueue.ReceiveById(oldId);
}
private void moveToEnd(Message message)
{
var oldId = message.Id;
_messageQueue.Send(message, MessageQueueTransactionType.Single);
// Remove the poisoned message
_messageQueue.ReceiveById(oldId);
}