I have a WCF service (the fact that it's WCF shouldn't matter) and I'm not looking for message queuing, but instead for an asynchronous work queue in which to place tasks, once a request / message is received. Requirements:
Must support persistent store that enables recovery of tasks in the case of Server / service process failure.
Supports re-running of failed jobs, up to a given limit (i.e. try re-running a job up to 5 times)
Able to record the failed job call along with its parameters, in an easily queried fashion. For example, I would query the store for failed jobs and receive a list of "job name, parameters".
Unfortunately cannot be a cloud-based / hosted solution.
Queues that I'm probably not looking for:
MSMQ (also RabbitMQ, AMQP). Too low-level; these are focused on message transport.
Quartz.NET. Has some of the above but its error-recording facilities are lacking. Geared more toward cron-like scheduling than async work and error reporting.
The default Task Scheduler of the .NET TPL. It has no persistence if the process owning it stops abruptly, and doesn't support re-running of tasks very well.
I think I'd be looking for something more along the lines of Celery, Resque, or even qless. I know Resque.NET exists (https://www.nuget.org/packages/Resque/), but not sure if there's something more mainstream, or if that could suffice.
What about Amazon SQS? You don't have to worry about infrastructure as you would with RabbitMQ/MSMQ. SQS is dirt cheap, too. Last time I checked, it was $0.01 per 10,000 messages. Why re-invent the wheel? Let Amazon (or other cloud providers with similar services, like Microsoft and Rackspace) do all the worrying.
I use Amazon SQS in production for all message-based services. Some of these messages act like cron jobs; an external process queues the message at a specific time. Some of them are acted upon immediately.
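For illustration, here is a minimal sketch of the producer/consumer pattern with the AWS SDK for .NET; the queue URL, payload and ProcessJob handler are placeholders, not part of the original answer:

using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class SqsWorkQueueSketch
{
    // Placeholder queue URL; credentials are assumed to come from the environment.
    const string QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue";

    static readonly IAmazonSQS Sqs = new AmazonSQSClient();

    // Producer: enqueue a unit of work.
    public static Task EnqueueAsync(string jobJson) =>
        Sqs.SendMessageAsync(new SendMessageRequest { QueueUrl = QueueUrl, MessageBody = jobJson });

    // Consumer: long-poll, process, and delete only after success so a crashed
    // worker's message becomes visible again and is retried.
    public static async Task PollOnceAsync()
    {
        var response = await Sqs.ReceiveMessageAsync(new ReceiveMessageRequest
        {
            QueueUrl = QueueUrl,
            MaxNumberOfMessages = 10,
            WaitTimeSeconds = 20 // long polling
        });

        foreach (var message in response.Messages)
        {
            ProcessJob(message.Body); // hypothetical handler for the job payload
            await Sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle);
        }
    }

    static void ProcessJob(string body) => Console.WriteLine("processing " + body);
}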
Related
I have started to work with micro-services and I need to create an event publishing mechanism.
I plan to use Amazon SQS.
The idea is quite simple. I store events in the database in the same transaction as aggregates.
If a user changes their email, a UserChangedEmail event is stored in the database.
I also have an event handler, such as UserChangedEmailHandler, which (in this case) is responsible for publishing this event to an SQS queue, so other services can know that the user changed their email.
My question is: what is the usual practice to achieve this? Should I have some kind of timed background process that scans the events table and publishes events to SQS?
Can this run within the WebApi application (preferably), or should it be a separate process?
One of the ideas was to use Hangfire, but it does not support cron jobs under a minute.
Any suggestions?
EDIT:
As suggested in one of the answers, I've looked into NServiceBus. One of the examples on the NServiceBus page shows the core of my concern.
In their example, they create a log entry that an order has been placed. What if the log or database entry is successfully committed, but the publish breaks and the event never gets published?
Here's the code for the event handler:
public class PlaceOrderHandler :
    IHandleMessages<PlaceOrder>
{
    static ILog log = LogManager.GetLogger<PlaceOrderHandler>();
    IBus bus;

    public PlaceOrderHandler(IBus bus)
    {
        this.bus = bus;
    }

    public void Handle(PlaceOrder message)
    {
        log.Info($"Order for Product:{message.Product} placed with id: {message.Id}");
        log.Info($"Publishing: OrderPlaced for Order Id: {message.Id}");

        var orderPlaced = new OrderPlaced
        {
            OrderId = message.Id
        };
        bus.Publish(orderPlaced); // <-- my concern
    }
}
Off the Shelf Suggestions
Rather than rolling your own, I recommend looking into off the shelf products, as there is a lot of complexity here that will not be apparent at the outset, e.g.
Managing the event subscriber list - an SQS queue is more naturally paired with an event consumer than with an event producer, because once a message is consumed it is no longer available on the queue; so if you want to support multiple subscribers for a given event (a massive benefit of event-driven architectures), how do you know which SQS queues to push the event message onto when it is first raised? (A common answer is SNS fan-out; see the sketch after this list.)
Retry semantics, error forwarding queues - handling temporary errors due to ephemeral infrastructure issues vs permanent errors due to business logic semantic issues
Audit trails of which messages were raised when and sent where
Security of messages sent via SQS (does your business case require them to be encrypted? SQS is an application service offered by Amazon which doesn't provide storage-level encryption)
Size of messages - SQS has a message size limit so you may eventually need to handle out-of-band transmission of large messages
And that's just off the top of my head...
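On the first of those points, the usual approach (and the one JustSaying below takes) is to publish each event to an SNS topic and let each subscribing service's SQS queue subscribe to that topic. A rough sketch with the AWS SDK for .NET, where the topic ARN, payload and class name are invented for illustration:

using System;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

// Sketch only: the producer publishes to one SNS topic per event type; each
// subscriber owns an SQS queue subscribed to that topic, so the producer never
// needs to know who the consumers are. (Each queue also needs a policy allowing
// the topic to deliver to it, which is omitted here.)
class UserChangedEmailPublisher
{
    // Hypothetical topic, created once per event type (e.g. at deployment time).
    const string TopicArn = "arn:aws:sns:us-east-1:123456789012:UserChangedEmail";

    static readonly IAmazonSimpleNotificationService Sns =
        new AmazonSimpleNotificationServiceClient();

    public static Task PublishAsync(Guid userId, string newEmail) =>
        Sns.PublishAsync(new PublishRequest
        {
            TopicArn = TopicArn,
            Message = $"{{ \"userId\": \"{userId}\", \"newEmail\": \"{newEmail}\" }}"
        });
}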
A few off the shelf systems that would assist:
NServiceBus provides a framework for managing command and event messaging, and it has a plugin framework permitting flexible transport types - NServiceBus.SQS offers SQS as a transport.
Offers comprehensive and flexible retry, audit and error handling
Opinionated use of commands vs events (command messages say "Do this" and are sent to a single service for processing, event messages say "Something happened" and are sent to an arbitrary number of flexible subscribers)
Outbox pattern provides transactionally consistent messaging even with non-transactionally consistent transports, such as SQS
Currently the SQS plugin uses default NServiceBus subscriber persistence, which requires an SQL Server for storing the event subscriber list (see below for an option that leverages SNS)
Built in support for sagas, offering a framework to ensure multi transaction eventual consistency with rollback via compensating actions
Timeouts supporting scheduled message handling
Commercial offering, so not free, but many plugins/extensions are open source
Mass Transit
Doesn't support SQS off the shelf, but does support Azure Service Bus and RabbitMq, so could be an alternative for you if that is an option
Similar offering to NServiceBus, but not 100% the same - NServiceBus vs MassTransit offers a comprehensive comparison
Fully open source/free
JustSaying
A lightweight open source messaging framework designed specifically for SQS/SNS-based messaging
SNS topic per event, SQS queue per microservice, using the native SNS-to-SQS queue subscription to achieve fan-out
Open source and free
There may be others, and I have the most personal experience with NServiceBus, but I strongly recommend looking into the off the shelf solutions - they will free you up to start designing your system in terms of business events, rather than worrying about the mechanics of event transmission.
Even if you do want to build your own as a learning exercise, reviewing how the above work may give you some tips on what's needed for reliable event driven messaging.
Transactional Consistency and the Outbox Pattern
The question has been edited to ask about what happens if parts of the operation succeed but the publish operation fails. I've seen this referred to as the transactional consistency of the messaging, and it generally means that within a transaction, all business side-effects are committed, or none. Business side effects may mean:
Database record updated
Another database record deleted
Message published to a message queue
Email sent
You generally don't want an email sent or a message published, if the database operation failed, and likewise, you don't want the database operation committed if the message publish failed.
So how to ensure consistency of messaging?
NServiceBus handles this in one of two ways:
Use a transactionally consistent message transport, such as MSMQ.
MSMQ is able to make use of Microsoft's DTC (Distributed Transaction Coordinator), and DTC can enroll the publishing of messages in a distributed transaction with SQL Server updates - this means that if your business transaction fails, your publish operation will be rolled back, and vice versa
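To make that concrete, here is a rough illustration of what DTC enlistment buys you, using raw System.Transactions and System.Messaging (this is not NServiceBus's internal code; the connection string, table and queue path are made up, and the queue must be transactional):

using System.Data.SqlClient;
using System.Messaging;
using System.Transactions;

class AtomicUpdateAndSend
{
    public static void ChangeEmail(string connectionString, int userId, string newEmail)
    {
        using (var scope = new TransactionScope())
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // auto-enlists in the ambient transaction
                using (var command = new SqlCommand(
                    "UPDATE Users SET Email = @email WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@email", newEmail);
                    command.Parameters.AddWithValue("@id", userId);
                    command.ExecuteNonQuery();
                }
            }

            using (var queue = new MessageQueue(@".\private$\user-events"))
            {
                // Automatic = enlist the send in the same (distributed) transaction.
                queue.Send(new Message("UserChangedEmail"), MessageQueueTransactionType.Automatic);
            }

            // If anything above threw, neither the update nor the send is committed.
            scope.Complete();
        }
    }
}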
The Outbox Pattern
With the outbox pattern, messages are not dispatched immediately - they are added to an Outbox table in a database, ideally the same database as your business data, as part of the same transaction
AFTER the transaction is committed, it attempts to dispatch each message, and only removes it from the outbox on successful dispatch
In the event of a failure of the system after dispatch but before delete, the message will be transmitted a second time. To compensate for this, when Outbox is enabled, NServiceBus will also do de-duplication of inbound messages, by maintaining a record of all inbound messages and discarding duplicates.
De-duplication is especially useful with Amazon SQS, as it is itself eventually consistent, and the same messages may be received twice.
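To make the pattern concrete, here is a simplified sketch of the outbox idea; the table and column names are invented, and NServiceBus's own Outbox feature does this work for you:

using System;
using System.Data.SqlClient;

class OutboxSketch
{
    // The event row is written in the SAME local transaction as the business
    // change; a dispatcher later pushes pending rows to SQS and deletes each
    // row only once the publish succeeds.
    public static void ChangeEmail(SqlConnection openConnection, Guid userId, string newEmail)
    {
        using (var tx = openConnection.BeginTransaction())
        {
            using (var update = new SqlCommand(
                "UPDATE Users SET Email = @email WHERE Id = @id", openConnection, tx))
            {
                update.Parameters.AddWithValue("@email", newEmail);
                update.Parameters.AddWithValue("@id", userId);
                update.ExecuteNonQuery();
            }

            using (var insert = new SqlCommand(
                "INSERT INTO Outbox (Id, Type, Payload) VALUES (@id, @type, @payload)",
                openConnection, tx))
            {
                insert.Parameters.AddWithValue("@id", Guid.NewGuid());
                insert.Parameters.AddWithValue("@type", "UserChangedEmail");
                insert.Parameters.AddWithValue("@payload",
                    $"{{ \"userId\": \"{userId}\", \"newEmail\": \"{newEmail}\" }}");
                insert.ExecuteNonQuery();
            }

            tx.Commit(); // business change and event row commit atomically
        }

        // A dispatcher then reads Outbox rows, publishes each to SQS, and deletes
        // the row on success; consumers must de-duplicate, because a crash between
        // publish and delete means the same event can be sent twice.
    }
}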
This is not far from the original concept in your question, but there are differences:
You were envisaging a timed background process to scan the events table (aka Outbox table) and publish events to SQS
NServiceBus executes handlers within a pipeline - with Outbox, the dispatch of messages to the transport (aka pushing messages into an SQS queue) is simply one of the last steps in the pipeline. So - whenever a message is handled, any outbound messages generated during the handling will be dispatched immediately after the business transaction is committed - no need for a timed scan of the events table.
Note: Outbox is only successful when there is an ambient NServiceBus Handler transaction - i.e. when you are handling a message within the NServiceBus pipeline. This will NOT be the case in some contexts, e.g. a WebAPI Request pipeline. For this reason, NServiceBus recommends using your API request to send a single Command message only, and then combining business data operations with further messaging within a transactionally consistent command handler in a backend endpoint service. Although point 3 in their doc is more relevant to the MSMQ than SQS transport.
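In code, that recommendation looks roughly like this; the command, controller and property names are invented for illustration:

using System;
using System.Web.Http;
using NServiceBus;

// Hypothetical command sent from the API to the backend endpoint.
public class ChangeUserEmail : ICommand
{
    public Guid UserId { get; set; }
    public string NewEmail { get; set; }
}

public class UserEmailController : ApiController
{
    readonly IBus bus;

    public UserEmailController(IBus bus)
    {
        this.bus = bus;
    }

    [HttpPut]
    public IHttpActionResult Put(Guid userId, string newEmail)
    {
        // The API request does nothing but send one command; the database change
        // and the resulting UserChangedEmail event happen in the backend handler,
        // where the Outbox guarantees apply.
        bus.Send(new ChangeUserEmail { UserId = userId, NewEmail = newEmail });
        return Ok();
    }
}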
Handler Semantics
One more comment about your proposal - by convention, UserChangedEmailHandler would more commonly be associated with the service that does something in response to the email being changed, rather than simply participating in the propagation of the information that the email has changed. When you have 50 events being published by your system, do you want 50 different handlers just to push those messages onto different queues?
The systems above use a generic framework to propagate messages via the transport, so you can reserve UserChangedEmailHandler for the subscribing system and include in it the business logic that should happen whenever a user changes their email.
In any case I'd go with stateful services. If you want to go a tad hands off, have a look at Azure Service Fabric.
In my case, I had my own set of microservices, and in a scenario like this I did the basic create/update operation on the database first (changing the email). I had an event entity and pushed an event into that collection (in this case MongoDB). A stateful service was polling the database and processing the events in batches.
Now in your case, if your web app process is persistent, you can opt to enqueue the message right away and keep a field in the event that states whether it was actually processed later by any service or not. I used MongoDB as the database and Azure Service Bus as the message broker. I think Amazon SQS would be similar.
Now, if your web app is a vanilla ASP.NET Web API or MVC process, you should only record the event in the database and leave it there; that way you don't have to create a message broker listener every time you get a request. One service can poll the database and use the message broker to let the other services know.
If you want a fully event-driven paradigm, you might want to look at Event Hubs.
I strongly suggest keeping track of whether each message from the message bus has been processed or not, just to make sure the system is reliable.
Hope it helps. :)
I am working to port an application which was designed to work in a non-Azure environment. One of the elements of the architecture is a singleton which does not scale, and which I'm hoping to replace w/ multiple worker processes serving the resource that the singleton currently provides.
I have the necessary changes in place to replace the singleton, and am at the point of constructing the communications framework to provide interconnection from the UI servers to the resource workers and I'm wondering if I should just use a TCP binding on a WCF service or whether using the Azure Service Bus would make more sense. The TCP/WCF is easy, but doesn't solve the complete problem: how do I ensure that only one worker processes a UI request?
From reading the available documentation, it sounds like the service bus will solve this, but I've yet to see a concrete example of implementation. I'm hoping someone here can assist and/or point me in the right direction.
Seems that Azure Service Bus queues are the right solution for you.
Azure Service Bus can be used in 3 different ways:
Queues
Topics
Relays
From the Windows Azure site:
Service Bus queues provide one-way asynchronous queuing. A sender sends a message to a Service Bus queue, and a receiver picks up that message at some later time. A queue can have just a single receiver
You can find more info at:
http://www.windowsazure.com/en-us/develop/net/fundamentals/hybrid-solutions/
Adding to Davide's answer.
Another alternative would be to use Windows Azure Queues. They are designed to facilitate asynchronous communication between web and worker roles. From your web role you push messages into a queue which is polled by your worker roles.
Your worker role can "Get" one or more messages from a queue and work on those messages. When you get a message from a queue, you can instruct the queue service to make those messages invisible to other callers for a certain amount of time (known as the message visibility timeout). That ensures that only one worker role instance gets to work on a given message.
Once the worker role has completed the work, it can simply delete the message. If there's an error in processing the message, the message automatically reappears in the queue once the visibility timeout period has expired. You may find this link helpful: http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/.
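Roughly, with the classic Azure Storage client library, the flow described above might look like this; the connection string, queue name and DoWork helper are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSketch
{
    public static void Run(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
        queue.CreateIfNotExists();

        // Web role: enqueue a request.
        queue.AddMessage(new CloudQueueMessage("process order 42"));

        // Worker role: dequeue with a 5-minute visibility timeout, so no other
        // worker sees the message while this one is processing it.
        var message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message != null)
        {
            DoWork(message.AsString);     // hypothetical processing step
            queue.DeleteMessage(message); // delete only on success; otherwise the
                                          // message reappears after the timeout
        }
    }

    static void DoWork(string payload) => Console.WriteLine("working on " + payload);
}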
Azure queues are not designed for inter-process communication, but for inter-application communication. The message delivery latency is substantial, and delivery timing cannot be guaranteed. WebSockets or NetTcpBinding is more suitable for applications that talk to each other in real time. Although, I must admit, you get some free stuff with queues, especially the locking mechanisms. Just my 2 cents.
What I'm looking to do is a Map/Reduce system using RabbitMQ as the transport to and from the worker agents, this is to allow a simple scaling roadmap but also allows for a single server implementation for development and testing.
I've seen a few articles on a single message with a timeout using RPC with RabbitMQ but these appear to be blocking. I need to be able to fire off a batch of requests, each could possibly be served by a different agent. I then need to be able to collate all of the responses so I can crunch to a single answer.
If I were to use RPC, I believe I'd end up with a very complicated serial processing of the tasks rather than in parallel.
I'd also like to be able to resend a request under certain circumstances (e.g. the agent reports a transient error), but this isn't essential.
I'm guessing I'm going to need to spawn threads in the master application, each making an RPC call, but as it's a web app and there could be in the region of 20 tasks to be served by the agents, I'm not keen on this approach and suspect it would quickly become a bottleneck.
I need to implement a queuing mechanism for WCF service requests. The service will be called by clients in a one-way manner. These request messages should be stored in a SQL Server database and a Windows Service queues the messages. The time at which the requests are processed will be configurable. If an error occurs in processing a message, it needs to be retried up to 100 times, and if it still fails it needs to be terminated.
Also, there should be a mechanism to monitor the number of transactions made in a day and the number of failures.
QUESTIONS
If I were using MSMQ, clients could forward the message to the queue without knowing the service endpoint. But I am using SQL Server to store the request messages. How can the clients put requests into SQL Server?
Is the solution feasible? Do we have any article/book that explains how to implement the above?
What are the steps to prevent service and client reaching faulted state in this scenario?
What is the best method to store incoming message to database?
What is the best method to implement the retry mechanism? Does anything already exist, so that I don't have to reinvent the wheel?
Is there any book/article that explains this implementation?
NOTES
The content of the message will be complex XML, for example the travel expense items of an employee, or a list of employees.
READING
Logging WCF Request to Database
Guaranteed processing of data in WCF service
MSMQ vs. SQL Server Service Broker
Is it possible to persist and then forward WCF messages to destination services?
WCF 4 Routing Service - protocol bridging issue
https://softwareengineering.stackexchange.com/questions/134605/designing-a-scalable-and-robust-retry-mechanism
Integrating SQL Service Broker and NServiceBus
Can a subscriber also publish/send message in NServiceBus?
I'm a DBA, so that flavors my response, but here's what I'd do:
If you're using SQL 2005+, use Service Broker to store the messages in the database rather than storing them in a table. You get a queueing mechanism with this, so you can get rid of MSMQ. You'll also have a table, but it's just going to store the conversation handle (essentially, a pointer to the message) along with how many times it attempted this message. Lastly, you'll want some sort of a "dead letter box" where messages that reach your retry threshold go.
In your message processing code, do the following:
Begin a transaction
Receive a message off of the queue
If the retry count is greater than the threshold, move it to the dead letter box and commit
Increment the counter on the table for this message
Process the message
If the processing succeeded, commit the transaction
If the processing failed, put a new message on the queue with the same contents and then commit the transaction
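A rough C# sketch of that loop, using ADO.NET to issue the Service Broker RECEIVE; the queue name, threshold and the helper methods for the counter table, dead letter box and re-enqueue are all invented placeholders for your own schema:

using System;
using System.Data.SqlClient;

class ServiceBrokerWorker
{
    const int MaxAttempts = 5;

    public static void ProcessOne(SqlConnection openConnection)
    {
        using (var tx = openConnection.BeginTransaction())
        {
            // 1-2. Receive one message off the queue (blocks up to 5 seconds).
            byte[] body;
            Guid conversation;
            using (var receive = new SqlCommand(
                "WAITFOR (RECEIVE TOP(1) conversation_handle, message_body FROM dbo.WorkQueue), TIMEOUT 5000;",
                openConnection, tx))
            using (var reader = receive.ExecuteReader())
            {
                if (!reader.Read()) { tx.Commit(); return; } // nothing to do
                conversation = reader.GetGuid(0);
                body = (byte[])reader[1];
            }

            // 3-4. Check and bump the attempt counter kept in the side table.
            int attempts = IncrementAttemptCount(openConnection, tx, conversation);
            if (attempts > MaxAttempts)
            {
                MoveToDeadLetterBox(openConnection, tx, conversation, body);
                tx.Commit();
                return;
            }

            try
            {
                ProcessMessage(body); // 5. your business logic
                tx.Commit();          // 6. success
            }
            catch
            {
                // 7. Re-enqueue a copy rather than rolling back, so the queue never
                //    hits Service Broker's poison-message disable threshold.
                Requeue(openConnection, tx, body);
                tx.Commit();
            }
        }
    }

    // The helpers below are placeholders for your own tables and SEND logic.
    static int IncrementAttemptCount(SqlConnection c, SqlTransaction tx, Guid conversation) => throw new NotImplementedException();
    static void MoveToDeadLetterBox(SqlConnection c, SqlTransaction tx, Guid conversation, byte[] body) => throw new NotImplementedException();
    static void ProcessMessage(byte[] body) => throw new NotImplementedException();
    static void Requeue(SqlConnection c, SqlTransaction tx, byte[] body) => throw new NotImplementedException();
}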
Notice that there aren't any planned rollbacks. Rollbacks in Service Broker can be bad; if you rollback 5 times without a successful receive, the queue will become disabled for both enqueuing and dequeuing. But you still want to have transactions for the case when your message processor dies in the middle of processing (i.e. the server crashes).
1. If I were using MSMQ, clients could forward the message to the queue without knowing the service endpoint.
Yes - but they would need to know the MSMQ endpoint in order to send their message to the queue.....
But I am using SQL Server to store the request messages. How can the clients put requests into SQL Server?
The clients won't put their requests into SQL Server - that's what the service on the server will do. The client just calls a service method, and the code in there will store the request into the SQL Server table.
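As a sketch (the contract name, table and connection string are invented), the service side can be as simple as a one-way operation that inserts the incoming XML into a table for the Windows Service to pick up:

using System.Data.SqlClient;
using System.ServiceModel;

[ServiceContract]
public interface IRequestIntake
{
    [OperationContract(IsOneWay = true)]
    void Submit(string requestXml);
}

public class RequestIntakeService : IRequestIntake
{
    const string ConnectionString = "Server=.;Database=Requests;Integrated Security=true"; // placeholder

    public void Submit(string requestXml)
    {
        // The client never touches SQL Server; it just calls Submit over WCF,
        // and the service persists the request for later processing.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "INSERT INTO RequestQueue (ReceivedAt, Payload) VALUES (GETUTCDATE(), @payload)",
            connection))
        {
            command.Parameters.AddWithValue("@payload", requestXml);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}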
2. Is the solution feasible? Do we have any article/book that explains how to implement the above?
Sure, I don't see any big issue. The only point unclear to me right now is: how will the clients get their results? Do they need to go fetch them from another service or something?
3. What are the steps to prevent service and client reaching faulted state in this scenario?
As always - just make sure your service code catches all exceptions and either handles them internally, or returns interoperable SOAP faults instead of .NET exceptions.
It sounds like what you want to do is similar to this:
In this case you can use netMsmqBinding between your service and your service consumers.
The only thing you won't get out of the box is the retrying. However if you make the queue transactional then this functionality can be implemented in your service code.
If there is a failure in your dequeue operation the message will not be removed from the queue. It will therefore be available for further dequeue attempts.
However, you would need to implement retry attempt threshold code which fails a message after a certain number of attempts.
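One way to implement that threshold yourself (a sketch, not the only option): with a transactional netMsmqBinding queue, WCF exposes the MSMQ message properties on the operation context, including how many times the current message has already been aborted, so the handler can park a poison message instead of retrying forever. The contract name, threshold and helper methods here are invented.

using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IExpenseService
{
    [OperationContract(IsOneWay = true)]
    void Submit(string reportXml);
}

public class ExpenseService : IExpenseService
{
    const int MaxAttempts = 100;

    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void Submit(string reportXml)
    {
        var msmqProperty = OperationContext.Current.IncomingMessageProperties[MsmqMessageProperty.Name]
                               as MsmqMessageProperty;

        if (msmqProperty != null && msmqProperty.AbortCount >= MaxAttempts)
        {
            StoreAsFailed(reportXml); // hypothetical: record the job and its parameters
            return;                   // completing the transaction removes the poison message
        }

        Process(reportXml); // hypothetical: throwing here rolls the message back for another attempt
    }

    void StoreAsFailed(string reportXml) { /* placeholder */ }
    void Process(string reportXml) { /* placeholder */ }
}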
I would suggest a different approach to the ones suggested here. If you are able to, I would consider introducing a messaging framework such as NServiceBus. It satisfies many of the requirements that you have right out of the box. Let me try and address this in the context of your requirements.
The service will be called by clients in a one-way manner.
All communication between endpoints in NServiceBus is one way. The underlying transport NServiceBus uses is MSMQ, so much like your WCF approach, your client is communicating with queues, rather than specific service endpoints.
These request messages should be stored in a SQL Server database and a Windows Service queues the messages.
If you wanted to store your request messages in a database then you can configure NServiceBus to forward all messages sent to your request processing endpoint to another "audit" queue, which you can use to persist to the database. This has the added benefit of separating your application logic from your auditing implementation.
The time at which the requests are processed will be configurable.
NServiceBus allows you to defer when a message is sent. Normally a message is sent via the Send method of a Bus instance - Bus.Send(msg). You can use the Defer method to send the message some time in the future, e.g. Bus.Defer(DateTime.Now.AddDays(1), msg); There's nothing more you really have to do; NServiceBus will handle the message once the specified time has been reached.
If an error occurs in processing a message, it needs to be retried up to 100 times, and if it still fails it needs to be terminated.
By default, NServiceBus will enlist your message in a transaction as soon as your message leaves the queue. This ensures that in the event of failure that the message is rolled back to the originating queue. In such an event, NServiceBus will automatically try to reprocess the message a configurable number of times. The default being 5. You can of course set this to whatever you want, although I am not sure why you would want to set this to 100. At any rate, NServiceBus uses this setting to stop an endless loop of automatic retries. Once the limit has been reached the message is sent to an error queue where it sits until you fix whatever issues caused the exception or until you decide to push the message back to the queue for processing. Either way, you are assured that the message is never lost.
Also, there should be a mechanism to monitor the number of transactions made in a day and the number of failures.
The beauty of using MSMQ as the transport is that performance monitoring can be achieved at an infrastructure level. How your applications perform can be measured by how long messages sit in the queue. NServiceBus comes with performance monitors that track the length of time a message is in the queue, and you can also add the perf counters that come built into Windows to track other activity. To monitor errors, all you need to do is check the number of messages in the error queue.
One of the main features of NServiceBus is reliability. WCF will only do so much for you, and then you are on your own. That's a lot of code, complexity and frankly hugely error prone. The things I have described here are all standard features of NServiceBus and I have barely scratched the surface with all the other things that you can do with it. I recommend you check it out.
I am looking for a message broker API to use with C#.
Normally, things are quite simple. I have a server that knows what jobs need to be done, and some clients that need to pick up these jobs.
And here are the special requirements I have:
If a client got a job but fails to answer within a specific time, then another client should do the work.
More than one queue and priorities
If possible it needs to work with big message queues (this way I could just load all the jobs for a month at once and forget about it).
Secured communication would be good.
An API for talking to the broker from C#. How much work is already done for me? What is still left to do?
The ability to delete some jobs...
If available, replication to another broker would be good.
The broker needs to run on windows
What is not an issue:
Low latency (it's no problem if a message takes minutes to be delivered)
Do you know such a message broker that is free to use?
RabbitMQ and several other AMQP implementations satisfy most of (if not all of) these requirements.
RabbitMQ allows clients to acknowledge receipt and/or processing of messages (see the consumer sketch after this list). As per http://www.rabbitmq.com/tutorials/amqp-concepts.html#message-acknowledge:
If a consumer dies without sending an acknowledgement the AMQP broker will redeliver it to another consumer or, if none are available at the time, the broker will wait until at least one consumer is registered for the same queue before attempting redelivery.
Many queues (and in fact many brokers) are supported, in a variety of different configurations
It scales particularly well, even for very large message queues: http://www.rabbitmq.com/faq.html#performance
Encryption is supported: http://www.rabbitmq.com/faq.html#channel-encryption
There is a .NET Client Users Guide and API docs: http://www.rabbitmq.com/documentation.html
There is live failover if a broker dies: http://www.rabbitmq.com/clustering.html
It runs on Windows, Linux, and probably anything else that has an Erlang implementation
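For reference, a small consumer sketch with the RabbitMQ .NET client showing the acknowledgement behaviour quoted above; the queue name and DoWork handler are placeholders:

using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("jobs", durable: true, exclusive: false, autoDelete: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var job = Encoding.UTF8.GetString(ea.Body.ToArray());
                DoWork(job);

                // Ack only after the work succeeded; if this process dies first,
                // the broker redelivers the message to another consumer.
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };

            channel.BasicConsume("jobs", autoAck: false, consumer);
            Console.ReadLine(); // keep the worker alive
        }
    }

    static void DoWork(string job) => Console.WriteLine("processing " + job);
}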