In NServiceBus 7 you can set the concurrency level, which means you can decide how many messages from a queue your software processes in parallel.
This can be done at the NServiceBus endpoint level.
I have a few doubts about this concept:
Is the concurrency per queue rather than per message type?
If I use satellites, which means I'll have N different queues (for example, one per message type), will the concurrency still be per queue?
For example:
I have configured 1 endpoint (so 1 queue) and set the concurrency level to 10. I handle 5 different commands (5 handlers). All the commands are stored in the same queue, mixed. In this case the endpoint is able to take 10 commands at a time from the queue without considering the type, correct?
In a second scenario, I have 5 satellites which manage the 5 message types, with 1 dedicated queue per type. In this case, is each satellite able to take 10 messages at a time from its queue?
Satellites are an advanced feature for the raw processing of messages without all the benefits of the NServiceBus message processing pipeline. It's not normal to use them—they're most often used when implementing a message transport. For example, the RabbitMQ transport uses a Satellite for a feature that makes an endpoint instance individually addressable, so you have a QueueName queue plus a QueueName-InstanceName queue on the broker, so that another endpoint can do context.Reply() and have the reply go to the specific server that sent the original command. In any case, each satellite manages its concurrency separately, as it's a more low-level construct.
So yes, the concurrency through the main queue is for the endpoint instance, not per message type, because there's a 1:1 relationship between endpoint and queue, and you can't selectively pull messages off the queue by type.
As a result, the endpoint is your unit of scalability, both scaling up (by increasing the concurrency) and scaling out (by adding more endpoint instances on different servers).
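For reference, a minimal sketch of setting that limit in NServiceBus 7 (the endpoint name is illustrative):

    using NServiceBus;

    // Configure the endpoint; "Sales" is just an example name.
    var endpointConfiguration = new EndpointConfiguration("Sales");

    // Process at most 10 messages from the input queue in parallel,
    // regardless of message type.
    endpointConfiguration.LimitMessageProcessingConcurrencyTo(10);

    var endpointInstance = await Endpoint.Start(endpointConfiguration);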
This means you should be careful about what message types you process in the same endpoint. They should generally have the same SLA. You wouldn't want a bunch of messages that take 50ms to process held up behind a glut of messages that process for 20 seconds.
Some people will take this to an extreme and go with one endpoint per message type. This level of complexity is usually not necessary, but it does give you the ultimate control over scalability for every single message type.
In MassTransit, I'd like to make an ASP.NET Core health check that reports my app as being in a "degraded" state when the "prefetch buffer" is full and the number of consumes per second is low enough.
In other words, when there are still messages on the queue and the consumer is slow. I'll then use this to configure AWS to autoscale my consumers based on the health check.
Is there a way to access the amount of messages that have been prefetched, from the outside? Or is this entirely encapsulated within MassTransit?
I'm using ActiveMQ as the underlying transport.
The message counts are not exposed by MassTransit. You may be able to find something by monitoring ActiveMQ and looking at pending message counts for queues – I don't have any real experience monitoring ActiveMQ because I haven't used it in production.
If you are using metrics with MassTransit, for instance using Prometheus, you can observe the number of active consumers per queue per instance and use those thresholds to determine whether additional instances are needed to handle the load and scale up. That's the approach I would take, since you also get the ability to set up Grafana dashboards to observe system metrics in real time.
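If you do end up rolling your own check against the broker, a minimal ASP.NET Core health check might look like the sketch below, keyed off queue depth alone for brevity. IQueueDepthProvider is a hypothetical abstraction you would implement against the ActiveMQ management interface (since MassTransit does not expose the counts), and the threshold is something you would tune:

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Hypothetical abstraction over the broker's management interface.
    public interface IQueueDepthProvider
    {
        Task<long> GetPendingMessageCountAsync(string queueName, CancellationToken ct);
    }

    public class QueueBacklogHealthCheck : IHealthCheck
    {
        private readonly IQueueDepthProvider _provider;
        private const long DegradedThreshold = 1000; // tune for your workload

        public QueueBacklogHealthCheck(IQueueDepthProvider provider) => _provider = provider;

        public async Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            var pending = await _provider.GetPendingMessageCountAsync("my-queue", cancellationToken);
            return pending > DegradedThreshold
                ? HealthCheckResult.Degraded($"Queue backlog: {pending} messages")
                : HealthCheckResult.Healthy();
        }
    }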
I have multiple queues that multiple clients insert messages into.
On the server side, I have multiple micro-services that access the queues and handle those messages. I want to lock a queue whenever a service is working on it, so that other services won't be able to work on that queue.
Meaning that if service A is processing a message from queue X, no other service can process a message from that queue, until service A has finished processing the message. Other services can process messages from any queue other than X.
Does anyone have an idea on how to lock a queue and prevent others from accessing it? Preferably, the other services would receive an exception or something so that they'll try again on a different queue.
UPDATE
Another way could be to assign the queues to the services, so that whenever a service is working on a queue, no other service is assigned to that queue until the work item has been processed. This also isn't easy to achieve.
There are several built-in ways of doing this. If you only have a single worker, you can set MessageOptions.MaxConcurrentCalls = 1.
If you have multiple, you can use the Singleton attribute. This gives you the option of setting it in Listener mode or Function mode. The former gives the behavior you're asking for, a serially-processed FIFO queue. The latter lets you lock more granularly, so you can specifically lock around critical sections, ensuring consistency while allowing greater throughput, but doesn't necessarily preserve order.
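As a sketch, a Listener-mode singleton on an Azure WebJobs queue function might look like this (queue name is illustrative):

    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        // Listener mode: only one listener across all instances pulls from
        // the queue, so messages are processed one at a time, in order.
        [Singleton(Mode = SingletonMode.Listener)]
        public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message)
        {
            // process the message serially
        }
    }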
My guess is they've implemented the Singleton attribute similarly to your Redis approach, so performance should be equivalent. I've done no testing with that, though.
You can achieve this by using Azure Service Bus message sessions.
All messages in your queue must be tagged with the same SessionId. In that case, when a client receives a message, it locks not only that message but all messages with the same SessionId (effectively the whole queue).
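A minimal sketch with the Azure.Messaging.ServiceBus client (connection string, queue and session names are illustrative):

    using Azure.Messaging.ServiceBus;

    var client = new ServiceBusClient("<connection-string>");

    // Sender: tag every message for this logical queue with the same session id.
    var sender = client.CreateSender("work-queue");
    await sender.SendMessageAsync(new ServiceBusMessage("payload")
    {
        SessionId = "queue-x"
    });

    // Receiver: accepting the session locks it, so no other service can
    // receive messages with that session id until this receiver is closed.
    ServiceBusSessionReceiver receiver = await client.AcceptNextSessionAsync("work-queue");
    ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();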
The solution was to use Azure Redis to store the locks in memory, with the micro-services managing those locks through the Redis store.
The lock() and unlock() operations are atomic and the lock has a TTL, so that a queue won't be locked indefinitely.
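In case it helps anyone, the gist of it with StackExchange.Redis (key names, connection string and timeouts are illustrative):

    using System;
    using StackExchange.Redis;

    var redis = ConnectionMultiplexer.Connect("<azure-redis-connection-string>");
    var db = redis.GetDatabase();

    var lockKey = "lock:queue-x";
    var lockToken = Guid.NewGuid().ToString(); // identifies this lock owner

    // LockTake is atomic, and the TTL guarantees the queue is
    // never locked indefinitely if a service dies mid-work.
    if (db.LockTake(lockKey, lockToken, TimeSpan.FromSeconds(30)))
    {
        try
        {
            // process one message from queue X
        }
        finally
        {
            // only the owner holding the token can release the lock
            db.LockRelease(lockKey, lockToken);
        }
    }
    else
    {
        // lock held by another service: try a different queue
    }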
Azure Service Bus is a broker with competing consumers. You can't have what you're asking for with a general queue that all instances of your service are using.
Put the work items into a relational database. You can still use queues to push work to workers, but the queue items can now be empty. When a worker receives an item, it knows to look in the database instead. The content of the message is disregarded.
That way messages are independent and idempotent. For queueing to work these two properties usually must hold.
That way you can more easily sequence actions that actually are sequential. You can use transactions as well.
Maybe you don't need queues at all. Maybe it is enough to have a fixed number of workers polling the database for work. This loses auto-scaling with queues, though.
I'm using RabbitMQ for the following scenario. When a user uses a premium search feature, I send a message via RabbitMQ to one of a few server instances. They run the same routine (DB queries and billing). I want to make sure I don't process the same message more than once.
I've come across this great tutorial, but the exchange type presented in it is "Topic", which does not work for me because it results in the same message being processed more than once.
How can I implement the request-response pattern with worker queues in RabbitMQ so that each message is handled only once and there's load balancing?
Anton Gogolev's comment above is correct. You cannot guarantee a message will be processed only once, for many reasons. But, this is often a requirement of systems - to only produce the desired result once.
The way to do that is through idempotence - the idea that no matter how many times a given message is processed, it will only make the desired change once.
There are a lot of ways to do this. One simple example is to use a shared database that tracks which messages have been processed. When you receive a message, you check to see if it has been processed already. If not, you process it. If it has, you just ignore it and move on.
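A minimal sketch of that tracking, assuming a ProcessedMessages table with a unique constraint on MessageId (the table name, connectionString and ProcessMessage are illustrative placeholders supplied by your consumer):

    using System.Data.SqlClient;

    static void HandleOnce(string messageId, string payload, string connectionString)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var check = new SqlCommand(
            "SELECT COUNT(*) FROM ProcessedMessages WHERE MessageId = @id", connection);
        check.Parameters.AddWithValue("@id", messageId);

        if ((int)check.ExecuteScalar() > 0)
            return; // already processed: ignore and move on

        ProcessMessage(payload); // your DB queries and billing

        // the unique constraint on MessageId makes this throw if another
        // consumer finished the same message first
        using var mark = new SqlCommand(
            "INSERT INTO ProcessedMessages (MessageId) VALUES (@id)", connection);
        mark.Parameters.AddWithValue("@id", messageId);
        mark.ExecuteNonQuery();
    }

For true once-only effects you would wrap the processing and the insert in the same transaction; the unique constraint is what closes the race between two consumers checking at the same time.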
In your case, if you are doing request/response and want load balancing, you probably want multiple consumers on the same queue. You could have 2 or 10 or 300 instances of your request handler listening to the same queue, and you won't have to worry much about duplicate processing.
RabbitMQ will send a given message to a single consumer. It will wait for that consumer to say it is done processing, or if the consumer crashes or rejects the message, it will requeue the message for another consumer to try again.
In this way, you will generally have only 1 request handler per request. But it will always be possible for more than one to handle the same message, which is why idempotence is important.
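Concretely, with the RabbitMQ .NET client a competing consumer looks something like this (queue name is illustrative); the BasicQos call keeps each worker to one unacknowledged message at a time, which gives you the load balancing:

    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    channel.QueueDeclare("premium-search", durable: true, exclusive: false, autoDelete: false);
    channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        // idempotent processing goes here
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };
    channel.BasicConsume("premium-search", autoAck: false, consumer: consumer);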
Regarding the use of a topic exchange vs any other type of exchange - it doesn't make much difference. There will always be the possibility of more than one queue receiving the message that you are sending, because you can have multiple queues bound to the same exchange with the same binding keys.
There is a nice ASP.NET perf counter category and set of counters that can be used to track the request queue during perf test runs. However, I can't find a similar set for a WCF service not hosted through IIS. Our WCF services run as Windows services using net.tcp bindings. I've learned that there are a couple of binding parameters that control queuing (Binding.MaxConnections and Binding.ListenBacklog). It wasn't a very easy find. So I wonder, going forward, is there a way to track these two values in PerfMon?
Under the ServiceModelService performance counter category you can find the following set of queue performance counters:
Queue Dropped Messages
Queue Dropped Messages Per Second
Queued Poison Messages
Queued Poison Messages Per Second
Queued Rejected Messages
Queued Rejected Messages Per Second
None of these provide the information you're looking for. The performance counter I could find that is most closely related to what you want is:
Percent of Max Concurrent Calls
Which provides the number of concurrent calls as a percent of maximum concurrent calls.
A complete list of available WCF performance counters can be found in the WCF documentation on MSDN.
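Note that WCF performance counters are off by default; you enable them via the diagnostics element in the service's configuration file:

    <configuration>
      <system.serviceModel>
        <!-- "ServiceOnly" exposes the ServiceModelService counters;
             "All" adds the endpoint and operation categories as well -->
        <diagnostics performanceCounters="ServiceOnly" />
      </system.serviceModel>
    </configuration>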
I need to implement a queuing mechanism for WCF service requests. The service will be called by clients in a one-way manner. These request messages should be stored in a SQL Server database, and a Windows service processes the queued messages. The time at which the requests are processed will be configurable. If an error occurs while processing a message, it needs to be retried up to 100 times, and if it still fails it needs to be terminated.
There should also be a mechanism to monitor the number of transactions made per day and the number of failures.
QUESTIONS
If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint. But I am using SQL Server to store the request messages. How can the clients put the requests into SQL Server?
Is the solution feasible? Do we have any article/book that explains how to implement the above?
What are the steps to prevent the service and client from reaching a faulted state in this scenario?
What is the best method to store incoming message to database?
What is the best method to implement the retry mechanism? Does anything already exist, so that I don't have to reinvent the wheel?
Is there any book/article that explains this implementation?
NOTES
The content of the message will be complex XML, for example an employee's travel expense items or a list of employees.
READING
Logging WCF Request to Database
Guaranteed processing of data in WCF service
MSMQ vs. SQL Server Service Broker
Is it possible to persist and then forward WCF messages to destination services?
WCF 4 Routing Service - protocol bridging issue
https://softwareengineering.stackexchange.com/questions/134605/designing-a-scalable-and-robust-retry-mechanism
Integrating SQL Service Broker and NServiceBus
Can a subscriber also publish/send message in NServiceBus?
I'm a DBA, so that flavors my response, but here's what I'd do:
If you're using SQL 2005+, use Service Broker to store the messages in the database rather than storing them in a table. You get a queueing mechanism with this, so you can get rid of MSMQ. You'll also have a table, but it's just going to store the conversation handle (essentially, a pointer to the message) along with how many times that message has been attempted. Lastly, you'll want some sort of "dead letter box" where messages that reach your retry threshold go.
In your message processing code, do the following:
Begin a transaction
Receive a message off of the queue
If the retry count is greater than the threshold, move it to the dead letter box and commit
Increment the counter on the table for this message
Process the message
If the processing succeeded, commit the transaction
If the processing failed, put a new message on the queue with the same contents and then commit the transaction
Notice that there aren't any planned rollbacks. Rollbacks in Service Broker can be bad; if you rollback 5 times without a successful receive, the queue will become disabled for both enqueuing and dequeuing. But you still want to have transactions for the case when your message processor dies in the middle of processing (i.e. the server crashes).
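A rough T-SQL sketch of that loop (queue, table and column names are illustrative):

    BEGIN TRANSACTION;

    DECLARE @handle UNIQUEIDENTIFIER, @body XML;

    -- wait up to 5 seconds for a message to arrive
    WAITFOR (
        RECEIVE TOP (1)
            @handle = conversation_handle,
            @body   = CAST(message_body AS XML)
        FROM dbo.RequestQueue
    ), TIMEOUT 5000;

    IF @handle IS NOT NULL
    BEGIN
        -- bump the attempt counter kept in the side table; move the message
        -- to the dead letter box instead if it's over the threshold
        UPDATE dbo.MessageAttempts
           SET Attempts = Attempts + 1
         WHERE ConversationHandle = @handle;

        -- ... process @body here; on failure, send a copy of the message
        -- back onto the queue rather than rolling back ...
    END

    COMMIT TRANSACTION;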
1. If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint.
Yes - but they would need to know the MSMQ endpoint in order to send their message to the queue.
But I am using SQL Server to store the request messages. How can the clients put the requests into SQL Server?
The clients won't put their requests into SQL Server - that's what the service on the server will do. The clients just call a service method, and the code in there will store the request into the SQL Server table.
2. Is the solution feasible? Do we have any article/book that explains how to implement the above?
Sure, I don't see any big issue. The only point unclear to me right now is: how will the clients get their results? Do they need to go get them from another service or something?
3. What are the steps to prevent the service and client from reaching a faulted state in this scenario?
As always - just make sure your service code catches all exceptions and either handles them internally, or returns interoperable SOAP faults instead of .NET exceptions.
It sounds like what you want is queued, store-and-forward messaging between your service and your service consumers. In this case you can use netMsmqBinding.
The only thing you won't get out of the box is the retrying. However, if you make the queue transactional, this functionality can be implemented in your service code.
If there is a failure in your dequeue operation the message will not be removed from the queue. It will therefore be available for further dequeue attempts.
However, you would need to implement retry attempt threshold code which fails a message after a certain number of attempts.
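As a sketch, the service side of that looks something like this (contract and operation names are illustrative); the OperationBehavior attribute is what ties the dequeue to a transaction:

    using System.ServiceModel;

    [ServiceContract]
    public interface IRequestService
    {
        // one-way: clients fire-and-forget into the queue
        [OperationContract(IsOneWay = true)]
        void SubmitRequest(string requestXml);
    }

    public class RequestService : IRequestService
    {
        // The message is received inside a transaction; if this method
        // throws, the receive rolls back and the message stays on the
        // queue for another attempt (your retry-threshold code decides
        // when to give up).
        [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
        public void SubmitRequest(string requestXml)
        {
            // store to SQL Server / process the request
        }
    }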
I would suggest a different approach to the ones suggested here. If you are able to, I would consider introducing a messaging framework such as NServiceBus. It satisfies many of your requirements right out of the box. Let me try and address this in the context of your requirements.
The service will be called by clients in a one-way manner.
All communication between endpoints in NServiceBus is one way. The underlying transport NServiceBus uses is MSMQ, so much like your WCF approach, your client is communicating with queues, rather than specific service endpoints.
These request messages should be stored in a SQL Server database, and a Windows service processes the queued messages.
If you want to store your request messages in a database, you can configure NServiceBus to forward all messages sent to your request-processing endpoint to another "audit" queue, which you can use to persist them to the database. This has the added benefit of separating your application logic from your auditing implementation.
The time at which the requests are processed will be configurable.
NServiceBus allows you to defer when a message is sent. Normally a message is sent via the Send method of a Bus instance - Bus.Send(msg). You can use the Defer method to send the message some time in the future, e.g. Bus.Defer(DateTime.Now.AddDays(1), msg); There's nothing more you really have to do; NServiceBus will handle the message once the specified time has been reached.
If an error occurs while processing the message, it needs to be retried up to 100 times, and if it still fails it needs to be terminated.
By default, NServiceBus will enlist your message in a transaction as soon as your message leaves the queue. This ensures that, in the event of failure, the message is rolled back to the originating queue. In such an event, NServiceBus will automatically try to reprocess the message a configurable number of times, the default being 5. You can of course set this to whatever you want, although I am not sure why you would want to set it to 100. At any rate, NServiceBus uses this setting to stop an endless loop of automatic retries. Once the limit has been reached, the message is sent to an error queue where it sits until you fix whatever issue caused the exception, or until you decide to push the message back to the queue for processing. Either way, you are assured that the message is never lost.
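For what it's worth, in current NServiceBus versions those retry counts are set through the Recoverability API rather than app.config; a minimal sketch:

    var recoverability = endpointConfiguration.Recoverability();

    // immediate retries happen in-process, right after a failure
    recoverability.Immediate(immediate => immediate.NumberOfRetries(5));

    // delayed retries re-queue the message with an increasing delay
    recoverability.Delayed(delayed => delayed.NumberOfRetries(3));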
Also, there should be a mechanism to monitor the number of transactions made per day and the number of failures.
The beauty of using MSMQ as the transport is that performance monitoring can be achieved at the infrastructure level. How your applications perform can be measured by how long messages sit in the queue. NServiceBus comes with performance monitors that track the length of time a message is in the queue, and you can also add the perf counters that come built into Windows to track other activity. To monitor errors, all you need to do is check the number of messages in the error queue.
One of the main features of NServiceBus is reliability. WCF will only do so much for you, and then you are on your own. That's a lot of code and complexity, and frankly it's hugely error prone. The things I have described here are all standard features of NServiceBus, and I have barely scratched the surface of everything else you can do with it. I recommend you check it out.