I am creating a mail (not email) messaging system on a website (along the same lines as Facebook's). I am looking at employing a queue for creating the messages. The problem I am facing, in terms of user experience and UI, is that when I create a new conversation/message it gets added to the queue, where it may sit for 30+ seconds until the next poll runs. As the list of messages being returned comes from the non-queue table, there are limited options for showing that the message has been sent.
I can only think of the following:
- When a message is created, show a "message sending" AJAX loader and start a JavaScript poll of the queue every 5 seconds. When the queue item no longer exists, reload the conversation list with the updated items.
- When a message is created, or the page loads, query the message table and join against the queue table for any messages created by the sender ID, so that to the user it looks as though the message has truly been sent. (The only issue with this is that it technically negates the reason for a queue.) A rough sketch of this option follows the list.
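For what it's worth, here is a minimal sketch of the second option, assuming hypothetical Messages and MessageQueue tables queried via ADO.NET (all table and column names are illustrative): pending queue items from the current sender are unioned in with a 'Sending' status so they show up in the conversation immediately.

    using System.Data.SqlClient;

    public static class ConversationLoader
    {
        // Returns sent messages plus the current sender's still-queued messages,
        // so the UI can render the latter with a "sending" indicator.
        public static SqlDataReader Load(SqlConnection conn, int conversationId, int senderId)
        {
            const string sql = @"
                SELECT Body, CreatedAt, 'Sent' AS Status
                FROM   Messages
                WHERE  ConversationId = @conversationId
                UNION ALL
                SELECT Body, CreatedAt, 'Sending' AS Status
                FROM   MessageQueue
                WHERE  ConversationId = @conversationId AND SenderId = @senderId
                ORDER BY CreatedAt;";

            var cmd = new SqlCommand(sql, conn);
            cmd.Parameters.AddWithValue("@conversationId", conversationId);
            cmd.Parameters.AddWithValue("@senderId", senderId);
            return cmd.ExecuteReader();
        }
    }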
Related
I have a WebJob getting messages from Service Bus, which is working fine. If the WebJob fails to finish or throws an exception, the message is sent back to the queue, which is fine in itself.
I have set MaxDequeueCount to 10, and it is not a problem if it fails in some cases, as the message will be processed again. The problem is that the message seems to be sent to the bottom of the queue, so other messages are handled before we get back to the first failed one.
I would like to handle one message at a time because I might have multiple updates to the same entity coming in a row. If the order changes, the entity would be updated in the wrong order.
Is it possible to send the message back to the front of the queue on error, or to keep working on the same message until MaxDequeueCount is reached?
Ideally, you should not be relying on message order.
Given your requirement, you could potentially go with the FIFO semantics of Azure Service Bus sessions. When a message is handled within a session and the message is abandoned, it will be handled once again rather than going to the end of the queue. You need to keep the following in mind:
- You can only process one message at a time.
- Sessions have to be used (messages need a SessionId, and the receiver has to accept a session).
- If a message is neither completed nor abandoned, it will be left hanging in the queue and will only be picked up again when the session is re-accepted.
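For reference, a minimal receiving sketch with the older Microsoft.ServiceBus.Messaging client; the connection string and queue name are placeholders, and the queue must have RequiresSession enabled.

    using System;
    using Microsoft.ServiceBus.Messaging;

    class SessionReceiverSketch
    {
        static void Main()
        {
            var client = QueueClient.CreateFromConnectionString("<connectionstring>", "<queueName>");

            // AcceptMessageSession locks the next available session; only this receiver
            // sees that session's messages until the session is closed.
            MessageSession session = client.AcceptMessageSession();

            BrokeredMessage message;
            while ((message = session.Receive(TimeSpan.FromSeconds(5))) != null)
            {
                try
                {
                    // ... apply the update to the entity here ...
                    message.Complete();
                }
                catch (Exception)
                {
                    // Abandon releases the lock; within the session the same message
                    // is offered again instead of dropping behind later messages.
                    message.Abandon();
                }
            }

            session.Close();
            client.Close();
        }
    }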
I'm using a WebJob to pull from my Service Bus queue via the trigger method, and it seems to work well. The problem is that I have a nightly job that pumps work into a queue, and I'd like another job to run at the end, once the queued work has all been processed, to email the results. My WebJob currently processes 16 items at a time, and I'll probably have to have multiple WebJobs running to handle the load, so I don't feel I can just check whether the queue is empty on every trigger.
Is there a way the ServiceBus can signal when it's empty? Should I just have another recurring process running that checks every 10 minutes and fires with a daily bit value to make sure it's done? Seems inefficient. Is there some Azure pattern I'm missing here?
Azure Service Bus will not signal you about empty queues. Making decisions based on the number of messages in a queue would probably be considered an anti-pattern. As Clemens Vasters said:
Anytime any #Azure #ServiceBus client code looks at QueueDescription.MessageCount to determine whether to call Receive - that's a bug. Don't
A queue can contain work items at any point in time; you never know when that will end. If you have messages that belong together as a group and you need to trigger an operation at the end of processing that group, you could have something that tracks what work has been accomplished and, when it's all done, sends another message. It could be "I've processed X messages for session Y and therefore this work is completed, sending a notification command".
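As a rough sketch of that idea (not a definitive implementation): assume the nightly job stamps each BrokeredMessage with hypothetical BatchId and BatchSize properties, and that BatchTracker is an atomic counter backed by shared storage (SQL, Azure Table, Redis) rather than the in-memory stand-in shown here.

    using System.Collections.Concurrent;
    using Microsoft.ServiceBus.Messaging;

    public class WorkItemProcessor
    {
        private readonly QueueClient _notificationQueue =
            QueueClient.CreateFromConnectionString("<connectionstring>", "Notifications");

        public void Process(BrokeredMessage message)
        {
            // ... handle the work item ...

            var batchId = (string)message.Properties["BatchId"];
            var batchSize = (int)message.Properties["BatchSize"];

            // Bump the per-batch counter; with multiple WebJob instances this must be
            // an atomic increment in shared storage.
            int processedSoFar = BatchTracker.IncrementProcessedCount(batchId);

            if (processedSoFar == batchSize)
            {
                // Last item in the batch: trigger the e-mail step via another message.
                var done = new BrokeredMessage { Label = "BatchCompleted" };
                done.Properties["BatchId"] = batchId;
                _notificationQueue.Send(done);
            }

            message.Complete();
        }
    }

    // In-memory stand-in for illustration only; replace with a shared, durable counter.
    public static class BatchTracker
    {
        private static readonly ConcurrentDictionary<string, int> Counts =
            new ConcurrentDictionary<string, int>();

        public static int IncrementProcessedCount(string batchId) =>
            Counts.AddOrUpdate(batchId, 1, (_, n) => n + 1);
    }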
You can do this by using an instance of NamespaceManager. It gives you the count of messages in the subscription.
    NamespaceManager nsManager = NamespaceManager.CreateFromConnectionString(<connectionstring>);
    var subscription = nsManager.GetSubscription(<topicName>, <subscriptionName>);
    if (subscription != null && subscription.MessageCount > 0)
    {
        // do something
    }
If you want to exclude the dead-letter queue count, you can use subscription.MessageCountDetails.ActiveMessageCount instead in the code above.
As Sean mentioned, it would be ideal to submit a message to another queue, say 'Emails', at the completion of group processing. Create a Logic App with a trigger on "when a new message is received in the 'Emails' queue" and an action to send out an email to the required recipients. It is pretty easy to achieve this without even a line of code.
I'm using an Azure Service Bus Queue with Session based messaging enabled. To consume from the queue I register an IMessageSessionAsyncHandler and then process the message in the OnMessageAsync method.
The issue I'm seeing is that if I abandon a message for whatever reason, rather than being received again immediately, I receive the next message in the session, and only after processing that message do I receive the first message again (assuming only two messages in the session).
As an example, let's say I have a queue with two messages on it, both with the same SessionId. The two messages have sequence numbers of 1 and 2 respectively. I start receiving and get the message with sequence number 1, as expected. If I then abandon this message using message.Abandon (the reason for abandoning is irrelevant), I immediately get the next message in the session (sequence number 2). Only after handling (or abandoning) this second message do I get the first message again.
The behaviour I'm seeing isn't what I'd expect from abandoning a message and isn't consistent with other ways of using the queue. I've tested this same example in the following scenarios:
- without the use of an IMessageSessionAsyncHandler, instead just manually accepting a message session;
- without the use of sessions, instead just having two independent messages on the queue.
In both scenarios I see the expected behaviour: when I abandon a message, it is always guaranteed to be the next message received, unless the max delivery count is exceeded and it is dead-lettered.
My question is this: is the behaviour I'm seeing with the use of an IMessageSessionAsyncHandler expected, or is this a bug in the Service Bus library? If it is not a bug, can anyone give me an explanation of why this behaves differently from the other ways of receiving?
When you register a session handler on the QueueClient, prefetch is turned on internally to improve the latency and throughput of the receivers. Unfortunately, for the IMessageSessionAsyncHandler scenario this behaviour cannot be overridden. One option is to abandon the session itself when you encounter a message that needs to be abandoned; this will ensure that the messages are delivered in order.
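For illustration, a rough sketch of that option (not verified against every client version): derive from MessageSessionAsyncHandler and, on failure, close the session instead of abandoning the individual message, so the session is re-acquired later with its messages still in order.

    using System;
    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Messaging;

    public class OrderedSessionHandler : MessageSessionAsyncHandler
    {
        protected override async Task OnMessageAsync(MessageSession session, BrokeredMessage message)
        {
            try
            {
                // ... process the message ...
                await message.CompleteAsync();
            }
            catch (Exception)
            {
                // Abandoning the message here would let the prefetched next message
                // jump ahead; giving up the whole session instead means its messages
                // become available again in sequence once the lock is released.
                await session.CloseAsync();
            }
        }
    }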
I have an application that uses an MSSQL database.
The application has a module that is used for sending messages between application users.
When one user sends a message to another, I insert the message into the database and set the message status to 1 (after the user reads the message, I update the database and set the message status to 0).
Right now I am using System.Timers.Timer to check the message status, and if the status is 1 the user gets an alert that he has a message in his inbox.
The problem is that this application can be used by many users, and if the timer runs every 5 minutes it will slow down the application and the database.
Is there any other solution for this that does not involve running a timer?
Thanks!
I don't think the solution using a timer which does polling is that bad, and 50 users is relatively few.
Does each user run a client app which connects directly to the database? Or is this an ASP.NET app? Or a service which connects to the DB and notifies client apps?
If you have client apps connecting directly to the DB, I'd stay with the timer and probably reduce the interval (the number of queries seems to be extremely low in your case).
Other options:
- Use SqlDependency/query notifications (MSDN); a sketch follows this list.
- Only if your message-processing logic gets more complex, take a look at Service Broker, especially if you need queuing behaviour. But as it seems, that would be far too complex here.
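Here is a minimal sketch of the SqlDependency option, assuming a hypothetical dbo.Messages table; the connection string is a placeholder, Service Broker must be enabled on the database, and the query has to follow the query-notification rules (two-part table name, explicit column list).

    using System;
    using System.Data.SqlClient;

    class NewMessageWatcher
    {
        private const string ConnectionString = "<connectionstring>";

        static void Main()
        {
            SqlDependency.Start(ConnectionString);
            Subscribe();
            Console.ReadLine();                    // keep the app alive while listening
            SqlDependency.Stop(ConnectionString);
        }

        static void Subscribe()
        {
            var conn = new SqlConnection(ConnectionString);
            var cmd = new SqlCommand(
                "SELECT MessageId, RecipientId FROM dbo.Messages WHERE Status = 1", conn);

            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the result set changes: alert the user here,
                // then re-subscribe because notifications are one-shot.
                Subscribe();
            };

            conn.Open();
            using (cmd.ExecuteReader()) { }        // executing the command registers the notification
            conn.Close();
        }
    }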
I wouldn't use a trigger.
Maybe you should look into having a "monitor" service, which is the only component looking at changes in the database and which then notifies the other applications (via a delegate) that data has been updated; they should then fetch their own data only when they receive that notification.
If you are always checking against the message table, you can instead add a column to your user table named HasNewMessage, which is updated by a trigger on the message table (a sketch of such a trigger follows the steps below).
To illustrate it:
- User 1 gets a new message.
- The message-table trigger sets HasNewMessage to 1 for user 1.
- You then check every 5 minutes whether user 1 has HasNewMessage set (this should be faster thanks to the indexed user table).
- When user 1 looks into his mailbox, you set HasNewMessage back to 0.
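A rough sketch of such a trigger, deployed from C# here to keep to one language; all object names (dbo.Messages, dbo.Users, RecipientId, HasNewMessage) are illustrative assumptions.

    using System.Data.SqlClient;

    class CreateHasNewMessageTrigger
    {
        private const string Ddl = @"
            CREATE TRIGGER dbo.trg_Messages_SetHasNewMessage
            ON dbo.Messages
            AFTER INSERT
            AS
            BEGIN
                SET NOCOUNT ON;
                -- Flag every recipient of the newly inserted messages.
                UPDATE u
                SET    u.HasNewMessage = 1
                FROM   dbo.Users u
                JOIN   inserted i ON i.RecipientId = u.UserId;
            END";

        static void Main()
        {
            using (var conn = new SqlConnection("<connectionstring>"))
            using (var cmd = new SqlCommand(Ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }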
Hope this helps
I have an NServiceBus application in which a given message may not be processable because some external event has not yet taken place. Because this other event is not an NSB event, I can't implement sagas properly.
However, rather than just re-queuing the message (which would cause a loop until that external event has occurred), I'm wrapping the message in another message (DelayMessage) and queuing that instead. The DelayMessage is picked up by a different service and placed in a database until the retry interval expires, at which point the delay service re-queues the message on the original queue so another attempt can be made.
However, this can happen more than once if that external event still hasn't taken place, and in the case where that event never happens, I want to limit the number of round trips the message takes. This means the DelayMessage has a MaxRetries property, but that is lost when the delay service queues the original message for the retry.
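For illustration, the rough shape of the wrapper described above (all names hypothetical). The retry counter lives only on the wrapper, which is why it disappears once the delay service sends just the inner message back to the original queue; one workaround would be to carry the counter on the inner message itself, e.g. as a transport header, before re-queuing it.

    using System;

    // Hypothetical wrapper queued to the delay service instead of re-queuing the
    // original message directly.
    public class DelayMessage
    {
        public object OriginalMessage { get; set; }   // the real command being delayed
        public TimeSpan RetryInterval { get; set; }   // how long to hold it in the database
        public int MaxRetries { get; set; }           // upper bound on round trips
        public int Attempt { get; set; }              // incremented each time it comes back
    }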
What other options am I missing? I'm happy to accept that there's a totally different solution to this problem.
Consider implementing a saga which stores that first message, holding on to it until the second message arrives. You might also want the saga to request a timeout so that your process won't wait indefinitely if that second message gets lost or something. A rough sketch follows.
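This sketch uses roughly NServiceBus 5-era syntax, with all message and property names invented for illustration; it assumes the external event can be turned into a message somehow, e.g. by a small poller/adapter that publishes ExternalEventOccurred when it detects the event.

    using System;
    using NServiceBus;
    using NServiceBus.Saga;

    public class OriginalCommand : ICommand
    {
        public string CorrelationId { get; set; }
        public string Payload { get; set; }
    }

    public class ExternalEventOccurred : IEvent
    {
        public string CorrelationId { get; set; }
    }

    public class GiveUpWaiting { }   // timeout state

    public class WaitForExternalEventSagaData : ContainSagaData
    {
        public virtual string CorrelationId { get; set; }
        public virtual string StoredPayload { get; set; }   // the first message, held until ready
    }

    public class WaitForExternalEventSaga :
        Saga<WaitForExternalEventSagaData>,
        IAmStartedByMessages<OriginalCommand>,
        IHandleMessages<ExternalEventOccurred>,
        IHandleTimeouts<GiveUpWaiting>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<WaitForExternalEventSagaData> mapper)
        {
            mapper.ConfigureMapping<OriginalCommand>(m => m.CorrelationId).ToSaga(s => s.CorrelationId);
            mapper.ConfigureMapping<ExternalEventOccurred>(m => m.CorrelationId).ToSaga(s => s.CorrelationId);
        }

        public void Handle(OriginalCommand message)
        {
            // Hold on to the work and give up after a day rather than waiting forever.
            Data.StoredPayload = message.Payload;
            RequestTimeout<GiveUpWaiting>(TimeSpan.FromHours(24));
        }

        public void Handle(ExternalEventOccurred message)
        {
            // The external precondition is met: do the real work with Data.StoredPayload,
            // or send the stored command on to its actual handler.
            MarkAsComplete();
        }

        public void Timeout(GiveUpWaiting state)
        {
            // The event never arrived in time; log, alert or dead-letter here.
            MarkAsComplete();
        }
    }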