JMS: updating message version / preventing certain messages from being queued - c#

I am trying to create a message-based application with ActiveMQ, using .NET clients.
Client 1: A Web Service (producer)
Client 2: A Windows Service (consumer)
My question is: Is it possible to prevent messages of a certain type or content from being queued by a Client?
The reason why I want to do this is Version Updating.
I think there will be a time, when I need to extend or change the message type.
My plan is to do that update in the following order:
Prevent messages of the old version from being queued.
Wait until the consumer has processed all messages of the old version.
Update producer and consumer software.
I would like the Web Service to still be available during the update process to report back to the caller, but it should not be able to queue new messages.
Of course if there is a better way of solving this problem altogether, please let me know.

As a general rule it is a good idea to only have one type of payload per queue. An easy way to do this is to use two different queues for the two different message versions. Something like:
mysystem.orders.1_0
mysystem.orders.1_1
The version should be the last part of the queue name, as it makes it easy to work with wildcards, which are used for a lot of the config options in ActiveMQ.
Splitting up different versions into different queues gets you around the problem of having to upgrade the producer and consumer at the same time, and also gives you some visibility as to whether all of the 1_0 messages have been consumed.
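For illustration, here is a minimal producer-side sketch assuming the Apache.NMS.ActiveMQ client and a broker on the default port; the queue name carries the version, so a 1_1 producer and a 1_0 consumer never touch the same queue:
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class VersionedProducer
{
    static void Main()
    {
        // One queue per message version, as suggested above.
        const string queueName = "mysystem.orders.1_1";

        IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            IDestination destination = session.GetQueue(queueName);
            using (IMessageProducer producer = session.CreateProducer(destination))
            {
                ITextMessage message = session.CreateTextMessage("<order>...</order>");
                producer.Send(message);
            }
        }
    }
}
The consumer for the old version simply keeps draining mysystem.orders.1_0 until it is empty, at which point it can be switched over to the new queue.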

Related

Extending a message deadline "indefinitely"

I have a worker attached to a subscription processing messages. Depending on the message payload, the worker could possibly take days to completely process the message.
I'm a bit confused by different properties that the client can set that control how the Pub/Sub client library automatically extends the deadline so that the worker can process the message without fearing that the message will be redelivered.
Properties from the documentation, which is not very clear:
MinimumAckDeadline, MaximumAckDeadline, DefaultAckDeadline, MinimumAckExtensionWindow, DefaultAckExtensionWindow, MinimumLeaseExtensionDelay, and DefaultMaxTotalAckExtension
I believe I want to set DefaultMaxTotalAckExtension to a large value (several days) to allow my subscriber to continue working on the message without getting some kind of time out.
But I think I also want to modify the AckDeadline so that Pub/Sub knows that the client is still alive. Not sure which one I would want to modify: MinimumAckDeadline, MaximumAckDeadline, DefaultAckDeadline.
Aside from those properties, I don't know if I need to set MinimumAckExtensionWindow, DefaultAckExtensionWindow, or MinimumLeaseExtensionDelay.
All of the properties you mentioned are default/limit properties for the SubscriberClient class itself. Note that they are static and only have getters, not setters. What you want to set is MaxTotalAckExtension, which controls the maximum amount of time a message's lease will be extended.
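As a rough sketch, assuming the Google.Cloud.PubSub.V1 package (the exact factory call and available settings vary between library versions, and the project/subscription names are placeholders), the instance-level value is passed in when the SubscriberClient is created:
using System;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

class LongRunningSubscriber
{
    static async Task Main()
    {
        var subscriptionName = SubscriptionName.FromProjectSubscription("my-project", "my-subscription");

        // The static Minimum*/Maximum*/Default* members on SubscriberClient are
        // read-only bounds and defaults; the value you control per client is
        // MaxTotalAckExtension on the Settings instance.
        var settings = new SubscriberClient.Settings
        {
            MaxTotalAckExtension = TimeSpan.FromHours(6) // total lease time per message
        };

        SubscriberClient subscriber = await SubscriberClient.CreateAsync(subscriptionName, settings: settings);

        await subscriber.StartAsync((PubsubMessage msg, CancellationToken ct) =>
        {
            // Long-running work goes here; the client library keeps extending the
            // ack deadline in the background until MaxTotalAckExtension runs out.
            return Task.FromResult(SubscriberClient.Reply.Ack);
        });
    }
}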
However, taking days to process a message is considered an anti-pattern for Cloud Pub/Sub and will very likely result in duplicate deliveries. If you are going to take that long to process a message, you probably need to look at other options like persisting it locally (in a file or in a database), acking it, and then processing it. At that point, you may consider just writing directly to a database instead of to Pub/Sub and scanning for rows that need to be processed by your subscribers.

RESTFUL web service v Message queue when using Scatter Gatherer

Say I have a scatter gather setup like this:
1) Web app
2) RabbitMQ
3) Scatter gather API 1
4) Scatter gather API 2
5) Scatter gather API x
Say each scatter gather API (and any new ones added in future) needs to supply/update an image to the web app, so that when the web app displays the results on screen it also displays the image. What is the best way to do this?
1) RESTFUL call from each API to web app adding/updating an image where necessary
2) Use message queue to send the image
I believe option two is best because I am using a microservices architecture. However, this would mean that the image could be processed by the web app after requests are made (if competing consumers are used). Therefore the image could be missing from the webpage?
The problem with option 1 is that the scatter gather APIs are tightly coupled with the web app.
What is the appropriate way to approach this?
The short answer: There is no right way to do this.
The long answer: Because there's no right way to do this, there's a danger that any answer I give you will be an opinion. Rather than do that, I'm going to help clarify the ramifications of each option you've proposed.
First thing to note: Unless there is already an image available at the time of the HTTP request, then your HTTP response will not be able to include an image. This means that your front-end will need to be updated after the HTTP request/response cycle has concluded. There are two ways to do this: polling via AJAX requests, or pushing via sockets.
The advantage of polling is that it is probably easier to integrate into an existing web app. The advantage of pushing the image to the client via sockets is that the client won't need to spam your server with polling requests.
Second thing to note: Reporting back the image from the scatter/gather workers could happen either via an HTTP endpoint, or via the message queue, as you suggest.
The advantage of the HTTP endpoint is that it would likely be simpler to set up. The advantage of the message queue is that the worker would not have to wait for the HTTP response (which could take a while if you're writing a large image file to disk) before moving on to the next job.
One more thing to note: If you choose to use an HTTP endpoint to create/update the images, it is possible that multiple scatter/gather workers will be trying to do this at the same time. You'll need to handle this to prevent multiple workers from trying to write to the same file at the same time. You could handle this by using a mutex to lock the file while one process is writing to it. If you choose to use a message queue, you'll have several options for dealing with this: you could use a mutex, or you could use a FIFO queue that guarantees the order of execution, or you could limit the number of workers on the queue to one, to prevent concurrency.
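For the mutex approach on a single host, a sketch along these lines would serve (the mutex name derived from the file name is only illustrative; workers on separate machines would need a distributed lock or a single writer instead):
using System;
using System.IO;
using System.Threading;

class ImageWriter
{
    public static void WriteImage(string path, byte[] imageBytes)
    {
        // A named mutex is visible across processes on the same machine, so two
        // workers on one host won't write the same file at once. Prefix the name
        // with "Global\" if the workers run in different sessions.
        string mutexName = "image-write-" + Path.GetFileName(path);

        using (var mutex = new Mutex(initiallyOwned: false, name: mutexName))
        {
            mutex.WaitOne();
            try
            {
                File.WriteAllBytes(path, imageBytes);
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}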
I do have experience with a similar system. My team and I chose to use a message queue. It worked well for us, given our constraints. But, ultimately, you'll need to decide which will work better for you given your constraints.
EDIT
The constraints we considered in choosing a message queue over HTTP included:
Not wanting to add private endpoints to a public facing web app
Not wanting to hold up a worker to wait on an HTTP request/response
Not wanting to make synchronous that which was asynchronous
There may have been other reasons. Those are the ones I remember off the top of my head.

Best practice for ServiceBus message versioning

I am setting up a system where we will transport messages between several internal services on ServiceBus Topics. The messages will hold serialized objects. The model objects are defined as quite complex trees of classes. This means it is not practical to maintain duplicate versions of the model structures in the code.
We expect the model structure to change so I have exposed the model version as a property on the brokered message.
What is the best way to handle the transition when we need to upgrade the model version?
I don't think we will really need to support two parallel model versions. But I am concerned that we don't lose messages during the transition. I assume it is a good strategy to upgrade the sending services first and let all subscribers continue to process messages. When all messages of the previous version are processed, then it is time to upgrade the subscribing services.
What is the best mechanism for skipping messages with a new version that the listening service is currently not handling?
I know I could go back to the old school and define parallel model versions by using schemas for JSON or XML, thus making it possible for the listening service to handle parallel versions. But that would be cumbersome, so I really want to avoid that.
I noticed the BrokeredMessage has a Defer method. Would that be useful? It looked promising until I realized the messages will be "moved" from the live queue into a separate state where they need to be pulled by referencing them by key. Not practical.
Is it possible to postpone the message by modifying delivery time? A couple of minutes would be fine. If the same service is still running by that time it can be postponed once again. (A working code example would be appreciated!)
Do I need to create separate subscriptions based on model version? So far we allow different message types to travel on the same topic so that would call for some redesign.
As a rule of thumb: upgrades on a live system are difficult. The easiest option that minimises risks of system downtime is:
add next message version support to the current code base
run two message versions concurrently
ensure all versions are supported and system runs without a problem
remove previous version
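To make "run two message versions concurrently" concrete, here is a minimal sketch of a consumer that handles both versions side by side, keyed off the custom Version property already exposed on the brokered message. It assumes the older Microsoft.ServiceBus.Messaging client that BrokeredMessage comes from; ModelV1/ModelV2 and the property values are placeholders:
using Microsoft.ServiceBus.Messaging;

class VersionedConsumer
{
    public static void Handle(BrokeredMessage message)
    {
        // Custom property stamped by the sender.
        var version = (string)message.Properties["Version"];

        switch (version)
        {
            case "1.0":
                ProcessV1(message.GetBody<ModelV1>());
                break;
            case "2.0":
                ProcessV2(message.GetBody<ModelV2>());
                break;
            default:
                message.DeadLetter("UnknownVersion", "No handler for version " + version);
                return;
        }

        message.Complete();
    }

    static void ProcessV1(ModelV1 m) { /* ... */ }
    static void ProcessV2(ModelV2 m) { /* ... */ }
}

class ModelV1 { /* v1 fields */ }
class ModelV2 { /* v2 fields */ }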
I have been looking at something similar but have yet to implement it, so I can't provide full guidance, but to answer your question on #3... I have messages which have a flag to re-queue the message to run again, e.g. to get a process to run every 5 minutes.
So during the process I extract the object from the BrokeredMessage:
var myObject = receivedMessage.GetBody<MyModel>();
I then complete that message to remove it from the queue, create a new BrokeredMessage based on that object, and set the ScheduledEnqueueTimeUtc field to something in the future:
BrokeredMessage brokeredMsg = new BrokeredMessage(myObject);
brokeredMsg.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
Client.Send(brokeredMsg);
So if you only want to process one model version at a time, you could assign a version number to your model and code something into your processor to look for a certain model version. If the version is higher, then re-queue it for a future time (until you have updated your code). If it is lower (a missed message), then perhaps have some exception handling.
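Putting that together with the version check, a rough sketch could look like the following (same assumptions about the Microsoft.ServiceBus.Messaging client; the ModelVersion property name is made up, and Clone is used so the too-new payload never has to be deserialized):
using System;
using Microsoft.ServiceBus.Messaging;

class VersionGate
{
    const int SupportedModelVersion = 1;

    public static void Handle(BrokeredMessage receivedMessage, QueueClient client)
    {
        // Custom property stamped by the sender.
        var version = (int)receivedMessage.Properties["ModelVersion"];

        if (version > SupportedModelVersion)
        {
            // Too new for this listener: re-send a clone a few minutes into the
            // future (Clone copies the body and properties) and complete the
            // original so it leaves the queue.
            BrokeredMessage requeued = receivedMessage.Clone();
            requeued.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
            client.Send(requeued);
            receivedMessage.Complete();
            return;
        }

        // Current (or older) version: process as normal.
        Process(receivedMessage.GetBody<MyModel>());
        receivedMessage.Complete();
    }

    static void Process(MyModel model) { /* ... */ }
}

class MyModel { /* model fields */ }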
Use a custom message property on the message, say Version.
Under the Service Bus topic, create a new subscription that will accept only messages with the new version (using rules), and modify the existing subscription(s) to NOT accept new-version messages.
Then you can upgrade the senders - new messages will be stored only in the new 'temporary' subscription.
After that, you upgrade the listeners and change the rules on the subscriptions (remove the version rule from the 'main' subscription, disable receive on the temporary subscription).
And now you have a choice:
using any tool, read the messages from the temporary subscription and write them back to the topic - they will arrive at the upgraded listeners.
temporarily start one more listener that reads the temporary subscription and processes all messages in it
other ways, specific to your architecture
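A sketch of the rule setup described above, using the older Microsoft.ServiceBus NamespaceManager API (the topic, subscription, property names, and values are placeholders):
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class SubscriptionSetup
{
    public static void CreateVersionedSubscription(string connectionString)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // Temporary subscription that only catches new-version messages.
        namespaceManager.CreateSubscription(
            new SubscriptionDescription("orders-topic", "orders-v2-staging"),
            new SqlFilter("Version = '2.0'"));

        // The existing 'main' subscription gets the complementary rule, e.g. via
        // SubscriptionClient.RemoveRule/AddRule with a filter like "Version <> '2.0'".

        // The sender stamps the property the filters match on.
        var client = TopicClient.CreateFromConnectionString(connectionString, "orders-topic");
        var message = new BrokeredMessage("payload");
        message.Properties["Version"] = "2.0";
        client.Send(message);
    }
}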

Check number of messages in input queue

Is it possible to get the number of messages inside my InputQueue using NServiceBus, or do I need to bypass it and use the native MSMQ interface?
It's not going to be complete monitoring; we've got a system comprising several NSB components, and they're monitored through Windows performance counters. What I'm trying to achieve is just a simple health check -> sending an NSB message to a component, whose response should contain, let's say, the DB access status and the number of MSMQ messages in its queue.
That's why I'd like to make it as simple as possible. So the question is: can I check the message count in a simple way, or do I need to read the performance counter?
You'd have to use the System.Messaging.MessageQueue.GetAllMessages() or one of its enumerator methods to get that information. NServiceBus doesn't expose this.
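A small sketch using System.Messaging directly (the queue path is a placeholder); the enumerator approach avoids materialising every message body the way GetAllMessages() would:
using System.Messaging;

class QueueHealth
{
    // e.g. CountMessages(@".\private$\myinputqueue")
    public static int CountMessages(string queuePath)
    {
        using (var queue = new MessageQueue(queuePath))
        {
            // Only fetch the lookup id so the enumeration stays cheap.
            queue.MessageReadPropertyFilter.ClearAll();
            queue.MessageReadPropertyFilter.LookupId = true;

            int count = 0;
            using (MessageEnumerator enumerator = queue.GetMessageEnumerator2())
            {
                while (enumerator.MoveNext())
                {
                    count++;
                }
            }
            return count;
        }
    }
}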

How to implement Message Queuing Solution

I have a scenario where about 10 different messages will need to be enqueued and then dequeued / processed. One subscriber will need all 10 messages, but another will only need 8 of the 10 messages. I am trying to understand what the best way is to set up this type of architecture. Do you create a queue for each message type so the subscriber(s) can just subscribe to the relevant queues, or do you dump them all into the same queue and ignore the messages that are not relevant to that subscriber? I want to ensure the solution is flexible / scalable, etc.
Process:
10 different xml messages will be enqueued to an IBM WebSphere MQ server.
We will use .NET (most likely WCF, since WebSphere MQ 7.1 has added WCF support).
We will dequeue the messages and load them into another backend DB (most likely SQL Server).
The solution needs to scale well because we will be processing a very large number of messages, and this could grow (probably 40-50,000/hr). At least a large amount for us.
As always greatly appreciate the info.
--S
Creating queues is relatively 'cheap' from a resource perspective, plus yes, it's better to use a queue for each specific purpose, so it's probably better in this case to separate them by target client if possible. Using a queue to pull messages selectively based on some criteria (correlation ID or some other thing) is usually a bad idea. The best performing scenario in messaging is the most straightforward one: simply pull messages from the queue as they arrive, rather than peeking and receiving selectively.
As to scaling, I can't speak for Websphere MQ or other IBM products, but 40-50K messages per hour isn't particularly hard for MSMQ on Windows Server to handle, so I'd assume IBM can do that as well. Usually the bottleneck isn't the queuing platform itself but rather the process of dequeuing and processing individual messages.
OK, based on the comments, here's a suggestion that will scale and doesn't require much change to the apps.
On the producer side, I'd copy the message selection criteria to a message property and then publish the message to a topic. The only change that is required here to the app is the message property. If for some reason you don't want to make it publish using the native functionality, you can define an alias over a topic. The app thinks it is sending messages but they are really publications.
On the consumer side you have a couple of choices. One is to create administrative subscriptions for each app and use a selector in the subscription. The messages are then funneled to a dedicated queue per consumer, based on the selection criteria. The apps think that they are simply consuming messages.
Alternatively the app can simply subscribe to the topic. This gives you the option of a dynamic subscription that doesn't receive messages when the app is disconnected (if in fact you wanted that) or a durable subscription that is functionally equivalent to the administrative subscription.
This solution will easily scale to the volumes you cited. Another option is that the producer doesn't use properties. Here, the consumer application consumes all messages, breaks open the message payload on each and decides whether to process or ignore the message. In this solution the producer is still publishing to a topic. Any solution involving straight queueing forces the producer to know all the destinations. Add another consumer, change the producer. Also, there's a PUT for each destination.
The worst case is a producer putting multiple messages and a consumer having to read each one to decide if it's going to be ignored. That option might have problems scaling, depending on how deep in the payload the selection criteria field lies. Really long XPath expression = poor performance and no way to tune WMQ to make up for it since the latency is all in the application at that point.
Best case, producer sets a message property and publishes. Consumers select on property in their subscription or an administrative subscription does this for them. Whether this solution uses application subscriptions or administrative subscriptions doesn't make any difference as far as scalability is concerned.
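For illustration, here is a producer-side sketch of that best case using the IBM MQ classes for .NET. This is only a sketch: the queue alias name, property name, and values are placeholders, and the exact calls should be checked against the version of the amqmdnet client you are running:
using IBM.WMQ;

class PropertyStampingProducer
{
    public static void Send(MQQueueManager queueManager, string payloadXml)
    {
        // "ORDERS.ALIAS" stands in for a queue alias defined over the topic, so
        // the app still "puts to a queue" while the messages are really publications.
        MQQueue destination = queueManager.AccessQueue(
            "ORDERS.ALIAS", MQC.MQOO_OUTPUT + MQC.MQOO_FAIL_IF_QUIESCING);

        var message = new MQMessage();
        message.WriteString(payloadXml);

        // Copy the selection criteria out of the payload into a message property
        // so subscriptions can select on it without parsing the XML.
        message.SetStringProperty("OrderType", "TYPE_A");

        destination.Put(message);
        destination.Close();
    }
}
On the consumer side, an administrative subscription with a selector on that property would then route each app's messages to its own dedicated queue, as described above.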
