Azure Service Bus automatic queue delete - C#

I am using MassTransit along with Azure Service Bus.
All consumer APIs automatically create queues in ASB, but if a consumer is removed, its queue stays behind.
Even worse, if the message type that was used for the removed consumer still exists and is being used throughout the system (for other consumers), the old queue still accepts and hoards messages.
An example would be:
Consumer 'A' accepts messages of type 'mA'.
Consumer 'B' also accepts messages of type 'mA'.
For both of these, a queue is created in ASB (Queue A and Queue B).
Now because of some changes, Consumer 'A' is no longer needed and is deleted from the API - but if message 'mA' is published, both Queue A and Queue B will still receive it.
An obvious solution would be to just have a pretty low Auto-delete setting, but this will only remove the unneeded messages, not the queues themselves.
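For reference, the settings in question would typically be configured per receive endpoint when using MassTransit's Azure Service Bus transport. The sketch below is only illustrative: the message type, consumer, queue name and connection string are made up, and property names may vary between MassTransit versions. In Azure Service Bus itself, AutoDeleteOnIdle removes the queue entity after an idle period, while a message time-to-live only expires messages.

    using System;
    using System.Threading.Tasks;
    using MassTransit;
    using Microsoft.Extensions.DependencyInjection;

    // Illustrative message and consumer, standing in for 'mA' and Consumer 'B'.
    public record MA(string Text);

    public class ConsumerB : IConsumer<MA>
    {
        public Task Consume(ConsumeContext<MA> context) => Task.CompletedTask;
    }

    public static class BusRegistration
    {
        public static void Register(IServiceCollection services)
        {
            services.AddMassTransit(x =>
            {
                x.AddConsumer<ConsumerB>();

                x.UsingAzureServiceBus((context, cfg) =>
                {
                    cfg.Host("<service-bus-connection-string>");

                    cfg.ReceiveEndpoint("queue-b", e =>
                    {
                        // Deletes the queue entity itself after it has been idle this long.
                        e.AutoDeleteOnIdle = TimeSpan.FromMinutes(30);

                        // Expires individual messages only; the queue itself remains.
                        e.DefaultMessageTimeToLive = TimeSpan.FromHours(1);

                        e.ConfigureConsumer<ConsumerB>(context);
                    });
                });
            });
        }
    }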

Related

Have ServiceBus respect transaction scope?

Within code I need to atomically:
//write to a database table
//publish EventA to service bus
//write to another database table
//publish EventB to service bus
If either of the db writes fails, I can easily roll everything back within a transaction scope. However if that happens, it's essential the events are never published to service bus.
Ideally I need the service bus library to 'wait until the transaction scope successfully completes' before publishing the message onto the bus.
I'm working with legacy .NET Framework code - I could easily write something which holds events to raise in memory, and only raises them once the scope completes. The problem even then is that if EventB fails to publish, EventA has already been published.
What's the easiest way to include service bus events within a transaction scope?
I don't think there's an easy way to address this. Azure Service Bus will not share a transaction with any other service, just as a database transaction cannot include a web service call. Any way to go about it will require some additional complexity compared to the simple DTC transactions that were possible (to an extent) on-premises.
But it can be done, with patterns such as Outbox (unit of work) and Inbox (idempotency), which are described very well in this post:
Outbox Pattern - This pattern ensures that a message is sent (e.g. to a queue) successfully at least once. With this pattern, instead of directly publishing a message to the queue, we store it in temporary storage (e.g. a database table). We wrap the entity save and the message storing in a Unit of Work (transaction). That way, we make sure that if the application data was stored, the message won't be lost. It will be published later by a background process. This process checks whether there are any unsent events in the table. When the worker finds such messages, it tries to send them. After it gets confirmation of publishing (e.g. an ACK from the queue), it marks the event as sent.
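To make the outbox flow concrete, here is a rough sketch assuming EF Core; OutboxMessage, OrderPlaced, Order, AppDbContext (with Orders and OutboxMessages sets) and IMessageBus are all illustrative names, not part of any particular library:

    using System;
    using System.Linq;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical outbox row - the "temporary storage" table mentioned above.
    public class OutboxMessage
    {
        public Guid Id { get; set; }
        public string Type { get; set; } = "";
        public string Payload { get; set; } = "";
        public DateTime? SentAtUtc { get; set; }
    }

    public record OrderPlaced(Guid OrderId);

    public class OrderService
    {
        // Business operation: save the entity AND the outbox row in one DB transaction.
        public async Task PlaceOrderAsync(AppDbContext db, Order order)
        {
            await using var tx = await db.Database.BeginTransactionAsync();

            db.Orders.Add(order);
            db.OutboxMessages.Add(new OutboxMessage
            {
                Id = Guid.NewGuid(),
                Type = nameof(OrderPlaced),
                Payload = JsonSerializer.Serialize(new OrderPlaced(order.Id)),
            });

            await db.SaveChangesAsync();
            await tx.CommitAsync(); // either both rows are stored, or neither
        }

        // Background worker: publish unsent rows and mark them only after the broker ACKs.
        public async Task PumpOutboxAsync(AppDbContext db, IMessageBus bus)
        {
            var pending = await db.OutboxMessages
                .Where(m => m.SentAtUtc == null)
                .OrderBy(m => m.Id)
                .ToListAsync();

            foreach (var msg in pending)
            {
                await bus.PublishAsync(msg.Type, msg.Payload); // returns once the queue ACKs
                msg.SentAtUtc = DateTime.UtcNow;
                await db.SaveChangesAsync();                   // mark as sent
            }
        }
    }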
Inbox Pattern - This pattern is similar to the Outbox Pattern, but it's used to handle incoming messages (e.g. from a queue). Accordingly, we have a table in which we store incoming events. Contrary to the outbox pattern, we first save the event in the database, and only then return an ACK to the queue. If the save succeeded but we didn't return the ACK, delivery will be retried; that's why we have at-least-once delivery again. After that, an outbox-like process runs: it calls the message handlers that perform the business logic.
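And a matching sketch of the inbox side, reusing the same hypothetical AppDbContext idea as above; the ackAsync callback stands in for whatever acknowledges the message back to the queue in your setup:

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // Illustrative inbox row; the broker's message id doubles as the idempotency key.
    public class InboxMessage
    {
        public Guid MessageId { get; set; }
        public string Payload { get; set; } = "";
        public DateTime? ProcessedAtUtc { get; set; }
    }

    public class InboxListener
    {
        // Step 1: persist the incoming event first, then ACK the queue message.
        // If the ACK fails, the broker redelivers and the FindAsync check makes
        // the second delivery a no-op (at-least-once, handled idempotently).
        public async Task OnMessageAsync(AppDbContext db, Guid messageId, string payload, Func<Task> ackAsync)
        {
            if (await db.InboxMessages.FindAsync(messageId) is null)
            {
                db.InboxMessages.Add(new InboxMessage { MessageId = messageId, Payload = payload });
                await db.SaveChangesAsync();
            }

            await ackAsync();
        }

        // Step 2: an outbox-like background process calls the business logic.
        public async Task PumpInboxAsync(AppDbContext db, Func<string, Task> handleAsync)
        {
            var pending = await db.InboxMessages
                .Where(m => m.ProcessedAtUtc == null)
                .ToListAsync();

            foreach (var msg in pending)
            {
                await handleAsync(msg.Payload);
                msg.ProcessedAtUtc = DateTime.UtcNow;
                await db.SaveChangesAsync();
            }
        }
    }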
To answer your specific question.
What's the easiest way to include service bus events within a transaction scope?
The simplest would be to use a library that does that already, has been tested thoroughly, and can integrate into your system: MassTransit, NServiceBus, Jasper, etc. Alternatively, build your own; I'd not advise that unless it's a pet project or this is the very core of your system.

How to prioritise one function over another in an Azure Function App?

I've created an Azure Function App which is triggered from Azure Service Bus Queues. The Service Bus has two queues in it and there is a function with a trigger for each of the queues. The Function App is developed using C# in Visual Studio and uses package deployment publishing.
What I would like to do is be able to indicate that one function/trigger should be processed before the other if they both have messages waiting. (They both do basically the same thing but one queue is for handling messages with a higher priority since queues are only FIFO.)
I have read that functions are processed in alphabetical order but that doesn't feel like something to rely on really.
Is there any way to explicitly indicate a priority (or even a scale-out preference) for one function/trigger over another?
(They both do basically the same thing but one queue is for handling messages with a higher priority since queues are only FIFO.)
The above scenario looks like competing consumers, for which there is a dedicated design pattern (the competing consumers pattern), and that pattern comes with a documented limitation around message ordering:
Consumer service instances may receive messages in any order, and this order need not correspond to the order in which the messages were created.
So unfortunately, it's not possible to prioritize one function over another function listening to the Service Bus queue.
With Azure Durable Functions you can control the ordering of activity functions and the orchestrator, but not the starter (trigger) function.
Microsoft Azure Service Bus Queues can implement guaranteed first-in-first-out ordering of messages by using message sessions. For more information, see Messaging Patterns Using Sessions.
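If message ordering (rather than cross-function priority) is what matters, a sessions-enabled trigger is roughly what that would look like; the queue name and connection setting below are made up, and the queue itself must be created with sessions enabled:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class HighPriorityFunctions
    {
        // IsSessionsEnabled makes the trigger use message sessions, so messages
        // sharing a SessionId are delivered in FIFO order.
        [FunctionName("ProcessHighPriority")]
        public static void Run(
            [ServiceBusTrigger("high-priority-queue", Connection = "ServiceBusConnection", IsSessionsEnabled = true)]
            string message,
            ILogger log)
        {
            log.LogInformation("Processing high-priority message: {Message}", message);
        }
    }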

Azure queue message should go to one receiver even if two receivers are listening to the queue, implementing in C#

I'm working with an Azure Service Bus queue and I have two listeners for the same queue. How can I make sure that receiver1 gets the queue messages all the time, and the other one never does?
Note: 1. All of the code is implemented in C#.
2. I'm aware of Topics; the above scenario is specific to a queue.

RabbitMQ, starting consumer before publisher

I have created two separate applications, a publisher and a consumer, that send and receive messages via a common queue. I started the consumer application first, and the exchange was not declared yet (it is declared and bound to the queue in the publisher), which is why I got the error.
So my question is,
Is it a good idea to declare and bind exchange to queue on consumer?
If the exchange is declared in the consumer, should the consumer know about the exchange properties and exchange type? In my case, the consumer only knows the exchange, queue, and routing key needed to receive messages from the particular queue.
First, it is important to note that starting a consumer and consuming from an existing queue should not give you any error, even if the queue is not bound to any exchange. It is not necessary for a queue to be bound to an exchange; it can exist on its own.
To answer your questions: this depends on your use case. It may be OK for the consumer to create the queue, create the exchange, and then bind the queue to the exchange. This allows the consumer to control which messages are routed to the queue and can be consumed by it. If the consumer is the party that should exercise this control, this is fine. But if the use case indicates that a party other than the consumer should control the routing, that other party should create the exchange and the binding.
Consider a topology where there are only simple exchanges bound to queues. In such a topology there would be only bindings from exchanges to queues but none from exchanges to exchanges. Such a topology can be created by the consumer.
But consider a different topology with two levels of exchanges. The exchanges on the lower level are bound to queues; this is similar to the first topology. But above this lower level there is a higher level of exchanges which are only bound to exchanges on the lower level. The exchanges on the higher level distribute messages based on rules that are not related to concrete consumers. In fact, the two levels of exchanges could exist without any queues and consumers. The creation of the exchanges and bindings in this topology cannot be done by a consumer.
A consumer could become a part of the second topology by declaring a queue for himself, binding this queue to the exchanges on the lower level he is interested in, and consume from the queue. The consumer would not create any exchanges, he would just bind his queue to them.
So to sum up: in trivial scenarios, it does not matter who declares exchanges, queues and bindings, as long as everything is done in the right order. But in more complex scenarios, the responsibility should be spread between the RabbitMQ admin, producers and consumers.
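As a concrete illustration of the simple topology where the consumer owns its queue and binding, here is a rough sketch using the RabbitMQ .NET client; the exchange, queue and routing-key names are made up:

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class ConsumerProgram
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Declarations are idempotent: if the publisher (or an admin) has already
            // declared the same exchange/queue with the same properties, these calls
            // are no-ops, so it is safe to start the consumer first.
            channel.ExchangeDeclare("orders", ExchangeType.Topic, durable: true);
            channel.QueueDeclare("order-consumer", durable: true, exclusive: false, autoDelete: false);
            channel.QueueBind("order-consumer", "orders", routingKey: "order.created");

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (_, ea) =>
            {
                var body = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine($"Received: {body}");
            };

            channel.BasicConsume("order-consumer", autoAck: true, consumer);
            Console.ReadLine();
        }
    }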

Lock a Service-Bus Queue and prevent others from accessing it

I have multiple queues that multiple clients insert messages into.
On the server side, I have multiple micro-services that access the queues and handle those messages. I want to lock a queue whenever a service is working on it, so that other services won't be able to work on that queue.
Meaning that if service A is processing a message from queue X, no other service can process a message from that queue, until service A has finished processing the message. Other services can process messages from any queue other than X.
Does anyone have an idea on how to lock a queue and prevent others from accessing it? Preferably, the other services would receive an exception or something so that they'll try again on a different queue.
UPDATE
Another way could be to assign the queues to the services, so that whenever a service is working on a queue, no other service is assigned to that queue until the work item has been processed. This is also something that isn't easy to achieve.
There are several built-in ways of doing this. If you only have a single worker, you can set MessageOptions.MaxConcurrentCalls = 1.
If you have multiple, you can use the Singleton attribute. This gives you the option of setting it in Listener mode or Function mode. The former gives the behavior you're asking for, a serially-processed FIFO queue. The latter lets you lock more granularly, so you can specifically lock around critical sections, ensuring consistency while allowing greater throughput, but doesn't necessarily preserve order.
My guess is they'd have implemented the Singleton attribute similarly to your Redis approach, so performance should be equivalent. I've done no testing with that, though.
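For reference, a minimal sketch of the Singleton attribute in Listener mode on a Service Bus triggered function; the queue and connection names are made up:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class SerialQueueFunctions
    {
        // Listener mode scopes the singleton lock to the trigger's listener, so only
        // one host instance pulls from this queue at a time across a scaled-out app.
        [Singleton(Mode = SingletonMode.Listener)]
        [FunctionName("ProcessQueueX")]
        public static void Run(
            [ServiceBusTrigger("queue-x", Connection = "ServiceBusConnection")] string message,
            ILogger log)
        {
            log.LogInformation("Processing message from queue-x: {Message}", message);
        }
    }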
You can achieve this by using Azure Service Bus message sessions
All messages in your queue must be tagged with the same SessionId. In that case, when a client receives a message, it locks not only this message but all messages with the same SessionId (effectively the whole queue).
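A rough sketch of that idea with the Azure.Messaging.ServiceBus client; the connection string, queue name and session id are made up, and the queue must be created with sessions required:

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    class SessionLockExample
    {
        const string ConnectionString = "<service-bus-connection-string>";
        const string QueueName = "queue-x";

        // Sender: tag every message with the same SessionId.
        static async Task SendAsync(string body)
        {
            await using var client = new ServiceBusClient(ConnectionString);
            ServiceBusSender sender = client.CreateSender(QueueName);
            await sender.SendMessageAsync(new ServiceBusMessage(body) { SessionId = "queue-x-lock" });
        }

        // Receiver: accepting the session takes an exclusive lock on it, so a second
        // receiver calling AcceptNextSessionAsync will wait (and eventually time out)
        // while another receiver holds it.
        static async Task ReceiveAsync()
        {
            await using var client = new ServiceBusClient(ConnectionString);
            ServiceBusSessionReceiver receiver = await client.AcceptNextSessionAsync(QueueName);

            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
            if (message != null)
            {
                Console.WriteLine(message.Body.ToString());
                await receiver.CompleteMessageAsync(message);
            }
        }
    }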
The solution was to use Azure Redis to store the locks in memory and have the micro-services manage those locks using the Redis store.
The lock() and unlock() operations are atomic and the lock has a TTL, so that a queue won't be locked indefinitely.
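One way to sketch that idea with StackExchange.Redis (not necessarily the poster's actual implementation): the LockTake/LockRelease helpers are atomic and take an expiry (TTL). The connection string, key and queue names below are made up:

    using System;
    using StackExchange.Redis;

    class QueueLockExample
    {
        static void Main()
        {
            var redis = ConnectionMultiplexer.Connect("<azure-redis-connection-string>");
            IDatabase db = redis.GetDatabase();

            string lockKey = "lock:queue-x";
            string owner = Environment.MachineName;  // identifies this service instance
            TimeSpan ttl = TimeSpan.FromMinutes(5);  // a crashed service can't hold the lock forever

            // Atomic "take the lock if nobody holds it" (SET key value NX PX ttl under the hood).
            if (db.LockTake(lockKey, owner, ttl))
            {
                try
                {
                    // ... receive and process a message from queue-x ...
                }
                finally
                {
                    db.LockRelease(lockKey, owner);  // released only by the owner
                }
            }
            else
            {
                // Someone else is working on queue-x - try a different queue.
            }
        }
    }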
Azure Service Bus is a broker with competing consumers. You can't have what you're asking for with a general queue that all instances of your service are using.
Put the work items into a relational database. You can still use queues to push work to workers, but the queue items can now be empty; when a worker receives an item, it knows to look into the database instead. The content of the message is disregarded.
That way messages are independent and idempotent. For queueing to work these two properties usually must hold.
That way you can more easily sequence actions that actually are sequential. You can use transactions as well.
Maybe you don't need queues at all. Maybe it is enough to have a fixed number of workers polling the database for work. This loses auto-scaling with queues, though.
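One way to sketch the database-backed work-item idea (not the answerer's actual code) is a claim query like the one below, assuming SQL Server and a hypothetical WorkItems table; READPAST lets several polling workers skip rows another worker has already locked:

    using System;
    using Microsoft.Data.SqlClient;

    class WorkItemStore
    {
        // Atomically claims the next pending work item (or returns null if there is none).
        // The queue message, if any, is only a signal to call this method.
        public static int? ClaimNextWorkItem(string connectionString)
        {
            const string sql = @"
                UPDATE TOP (1) WorkItems WITH (UPDLOCK, READPAST)
                SET Status = 'Processing'
                OUTPUT inserted.Id
                WHERE Status = 'Pending';";

            using var connection = new SqlConnection(connectionString);
            connection.Open();

            using var command = new SqlCommand(sql, connection);
            object id = command.ExecuteScalar();   // null when nothing is pending

            return id is null or DBNull ? (int?)null : (int)id;
        }
    }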
