I would be grateful for some design suggestions concerning a Windows service (C#) for publishing reports to a SOAP service.
It fetches a limited set of reports from a database (the reports sit in an Oracle AQ table), aggregates them into a message and forwards this message to a WCF SOAP service.
Reports are marked as "sent" if they have been transmitted via SOAP successfully.
Otherwise they are added again to the AQ table (via a db job).
So I came up with the following designs.
What would be the best way to go?
Would the queue improve the design in terms of scalability, robustness, decoupling?
Is it a good idea to use queuing in this case?
Proposed design A:
Service with 1 to N threads.
Each thread processes reports synchronously (fetching reports, aggregating, translating, sending via SOAP)
Proposed design B:
Windows service with:
1 MSMQ message queue
1 to N producer threads (fetching reports, aggregating, enqueuing messages via MSMQ)
1 to N consumer threads (dequeuing, translating, dispatching via SOAP)
Proposed design C:
Windows service with producer threads (fetching reports, aggregating, enqueuing messages to a private MSMQ queue via WCF NetMsmqBinding Client)
IIS/WAS hosted MSMQ-enabled service (listens to the MSMQ queue, dequeuing, translating, dispatching via SOAP)
Is there a particular reason you chose MSMQ? If you use your proposed design B, you could use BlockingCollection.
I don't see that MSMQ provides a particular advantage in this scenario unless you want multiple processes or you're expecting to spread this out across multiple computers.
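To illustrate the in-process alternative, here is a minimal producer/consumer sketch along the lines of your design B, with BlockingCollection in place of MSMQ; FetchAndAggregate, SendViaSoap and ReportMessage are hypothetical stand-ins for your own code:

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

var pending = new BlockingCollection<ReportMessage>(boundedCapacity: 100);

// Producer: read reports from the AQ table, aggregate, enqueue in-process.
var producer = Task.Run(() =>
{
    foreach (var message in FetchAndAggregate())
        pending.Add(message);
    pending.CompleteAdding(); // tell the consumers no more work is coming
});

// Consumers: translate and dispatch to the SOAP service.
var consumers = Enumerable.Range(0, 4).Select(_ => Task.Run(() =>
{
    foreach (var message in pending.GetConsumingEnumerable())
        SendViaSoap(message);
})).ToArray();

Task.WaitAll(consumers);
```

You get the bounded queue and blocking semantics without any external infrastructure; the trade-off is that anything still in the collection is lost if the process dies, which is where MSMQ's durability would matter.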
But do you really even need multiple threads? It seems like your limiting factor here will be either database access time or communication to the WCF service. Unless the WCF service has to do some major processing before you can call the job successful.
So are you sure you can't just have:
while there are unsent jobs in the database
    get job
    send job to WCF
    if job sent successfully
        mark job as sent
end while
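Or, as a hedged C# rendering of that loop (GetNextUnsentJob, SendJob and MarkAsSent are placeholders for your data access and the generated WCF proxy):

```csharp
Job job;
while ((job = GetNextUnsentJob()) != null)
{
    bool sent = SendJob(job);   // call the WCF / SOAP service
    if (sent)
    {
        MarkAsSent(job);        // flag the row so the db job doesn't requeue it
    }
}
```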
Obviously my knowledge of your situation is limited to what you've posted in your question, so it's possible I've missed something important.
Related
I have a WCF service (the fact that it's WCF shouldn't matter) and I'm not looking for message queuing, but instead for an asynchronous work queue in which to place tasks, once a request / message is received. Requirements:
Must support a persistent store that enables recovery of tasks in the case of server / service process failure.
Supports re-running of failed jobs, up to a given limit (i.e. try re-running a job up to 5 times)
Able to record the failed job call along with its parameters, in an easily queried fashion. For example, I would query the store for failed jobs and receive a list of "job name, parameters".
Unfortunately cannot be a cloud-based / hosted solution.
Queues that I'm probably not looking for:
MSMQ (RabbitMQ, AMQP). Low-level, and focused on message transport.
Quartz.NET. Has some of the above but its error-recording facilities are lacking. Geared more toward cron-like scheduling than async work and error reporting.
The default Task Scheduler of the .NET TPL. It has no persistence if the process owning it stops abruptly, and it doesn't support re-running of tasks very well.
I think I'd be looking for something more along the lines of Celery, Resque, or even qless. I know Resque.NET exists (https://www.nuget.org/packages/Resque/), but I'm not sure if there's something more mainstream, or if that could suffice.
What about Amazon SQS? You don't have to worry about infrastructure as you would with RabbitMQ/MSMQ. SQS is dirt cheap, too. Last time I checked, it was $0.01 per 10,000 messages. Why re-invent the wheel? Let Amazon (or other cloud providers with similar services, like Microsoft and Rackspace) do all the worrying.
I use Amazon SQS in production for all message-based services. Some of these messages act like cron jobs; an external process queues the message at a specific time. Some of them are acted upon immediately.
I am working to port an application which was designed to work in a non-Azure environment. One of the elements of the architecture is a singleton which does not scale, and which I'm hoping to replace with multiple worker processes serving the resource that the singleton currently provides.
I have the necessary changes in place to replace the singleton, and am at the point of constructing the communications framework to provide interconnection from the UI servers to the resource workers. I'm wondering if I should just use a TCP binding on a WCF service or whether using the Azure Service Bus would make more sense. TCP/WCF is easy, but doesn't solve the complete problem: how do I ensure that only one worker processes a UI request?
From reading the available documentation, it sounds like the service bus will solve this, but I've yet to see a concrete example of implementation. I'm hoping someone here can assist and/or point me in the right direction.
Seems that Azure Service Bus queues are the right solution for you.
Azure Service Bus can be used in 3 different ways:
Queues
Topics
Relays
From the Windows Azure site:
Service Bus queues provide one-way asynchronous queuing. A sender sends a message to a Service Bus queue, and a receiver picks up that message at some later time. A queue can have just a single receiver.
You can find more info at:
http://www.windowsazure.com/en-us/develop/net/fundamentals/hybrid-solutions/
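To make the queue option concrete, here is a minimal competing-consumers sketch using the older Microsoft.ServiceBus.Messaging client; the queue name, connection string and HandleRequest are assumptions for illustration:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

string connectionString = "...";  // Service Bus connection string
var client = QueueClient.CreateFromConnectionString(connectionString, "ui-requests");

// UI server: enqueue a request.
client.Send(new BrokeredMessage("resize-image-42"));

// Worker: any number of instances can run this loop; each message is
// delivered to only one of them.
while (true)
{
    BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
    if (message == null) continue;                 // no message within the timeout

    try
    {
        HandleRequest(message.GetBody<string>());  // hypothetical worker logic
        message.Complete();                        // done - remove it from the queue
    }
    catch
    {
        message.Abandon();                         // make it visible to another worker
    }
}
```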
Adding to Davide's answer.
Another alternative would be to use Windows Azure Queues. They are designed to facilitate asynchronous communication between web and worker roles. From your web role you push messages into a queue which is polled by your worker roles.
Your worker role can "Get" one or more messages from a queue and work on those messages. When you get a message from a queue, you can instruct the queue service to make those messages invisible to other callers for a certain amount of time (known as the message visibility timeout). That would ensure that only one worker role instance gets to work on a message.
Once the worker role has completed the work, it can simply delete the message. If there's an error in processing the message, the message automatically reappears in the queue once the visibility timeout period has expired. You may find this link helpful: http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/.
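As a rough sketch of that get / process / delete cycle (using the classic Microsoft.WindowsAzure.Storage client; the queue name and ProcessWorkItem are assumptions):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("...");  // storage connection string
var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");

// Worker role loop: take a message and hide it from other workers for 5 minutes.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
if (message != null)
{
    ProcessWorkItem(message.AsString);  // hypothetical worker logic
    queue.DeleteMessage(message);       // delete only after successful processing
}
// If ProcessWorkItem throws and the message is never deleted, it reappears in
// the queue automatically once the 5-minute visibility timeout expires.
```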
Azure queues are not designed for inter-process communication, but for inter-application communication. The message delivery latency is substantial, and delivery timing cannot be guaranteed. WebSockets or NetTcpBinding are more suitable for applications that talk to each other in real time. Although I must admit, you get some free stuff with queues, especially the locking mechanisms. Just my 2 cents.
I need to implement a queuing mechanism for WCF service requests. The service will be called by clients in a one-way manner. These request messages should be stored in a SQL Server database and a Windows Service queues the messages. The time at which the requests are processed will be configurable. If an error occurs while processing a message, it needs to be retried up to 100 times, and if it still fails it needs to be terminated.
Also, there should be a mechanism to monitor the number of transactions made per day and the number of failures.
QUESTIONS
If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint. But I am using SQL Server to store the request messages. How can the clients put their requests into SQL Server?
Is the solution feasible? Do we have any article/book that explains how to implement the above?
What are the steps to prevent the service and client from reaching a faulted state in this scenario?
What is the best method to store incoming message to database?
What is the best method to implement the retry mechanism? Does anything already exist so that I don't have to reinvent the wheel?
Is there any book/article that explains this implementation?
NOTES
The content of the messages will be complex XML, for example the travel expense items of an employee or a list of employees.
READING
Logging WCF Request to Database
Guaranteed processing of data in WCF service
MSMQ vs. SQL Server Service Broker
Is it possible to persist and then forward WCF messages to destination services?
WCF 4 Routing Service - protocol bridging issue
https://softwareengineering.stackexchange.com/questions/134605/designing-a-scalable-and-robust-retry-mechanism
Integrating SQL Service Broker and NServiceBus
Can a subscriber also publish/send message in NServiceBus?
I'm a DBA, so that flavors my response, but here's what I'd do:
If you're using SQL 2005+, use Service Broker to store the messages in the database rather than storing them in a table. You get a queueing mechanism with this, so you can get rid of MSMQ. You'll also have a table, but it's just going to store the conversation handle (essentially, a pointer to the message) along with how many times this message has been attempted. Lastly, you'll want some sort of a "dead letter box" where messages that reach your retry threshold go.
In your message processing code, do the following:
Begin a transaction
Receive a message off of the queue
If the retry count is greater than the threshold, move it to the dead letter box and commit
Increment the counter on the table for this message
Process the message
If the processing succeeded, commit the transaction
If the processing failed, put a new message on the queue with the same contents and then commit the transaction
Notice that there aren't any planned rollbacks. Rollbacks in Service Broker can be bad; if you rollback 5 times without a successful receive, the queue will become disabled for both enqueuing and dequeuing. But you still want to have transactions for the case when your message processor dies in the middle of processing (i.e. the server crashes).
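For illustration, here is a rough C# sketch of that receive loop. The queue name (dbo.RequestQueue), the retry-tracking helpers (IncrementAttemptCount, MoveToDeadLetterBox, TryProcess, Requeue) and MaxRetries are assumptions, not a ready-made implementation:

```csharp
using System;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        Guid conversationHandle = Guid.Empty;
        byte[] body = null;

        var receive = new SqlCommand(
            @"WAITFOR (RECEIVE TOP (1) conversation_handle, message_body
                       FROM dbo.RequestQueue), TIMEOUT 5000;",
            connection, transaction);

        using (var reader = receive.ExecuteReader())
        {
            if (reader.Read())
            {
                conversationHandle = reader.GetGuid(0);
                body = reader.IsDBNull(1) ? null : (byte[])reader[1];
            }
        }

        if (body != null)
        {
            int attempts = IncrementAttemptCount(connection, transaction, conversationHandle);

            if (attempts > MaxRetries)
            {
                MoveToDeadLetterBox(connection, transaction, conversationHandle, body);
            }
            else if (!TryProcess(body))
            {
                // Don't roll back on failure; SEND a fresh copy back onto the queue instead.
                Requeue(connection, transaction, body);
            }
        }

        transaction.Commit(); // commit on every path to avoid poison-message rollbacks
    }
}
```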
1. If I were using MSMQ, clients could have forwarded the message to the queue without knowing the service endpoint.
Yes - but they would need to know the MSMQ endpoint in order to send their message to the queue.....
But I am using SQL Server to store the request messages. How can the clients put their requests into SQL Server?
The clients won't put their requests into SQL Server - that's what the service on the server will do. The client just calls a service method, and the code in there will store the request into the SQL Server table.
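A minimal sketch of what that service method could look like; the contract, table and column names are assumptions for illustration:

```csharp
using System;
using System.Data.SqlClient;
using System.ServiceModel;

[ServiceContract]
public interface IRequestService
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(string requestXml);
}

public class RequestService : IRequestService
{
    private const string ConnectionString = "...";  // your SQL Server connection string

    public void SubmitRequest(string requestXml)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "INSERT INTO dbo.RequestQueue (ReceivedUtc, Payload) VALUES (@now, @payload)",
            connection))
        {
            command.Parameters.AddWithValue("@now", DateTime.UtcNow);
            command.Parameters.AddWithValue("@payload", requestXml);
            connection.Open();
            command.ExecuteNonQuery(); // the Windows service picks rows up from here later
        }
    }
}
```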
2. Is the solution feasible? Do we have any article/book that explains how to implement the above?
Sure, I don't see any big issue. The only point unclear to me right now is: how will the clients know their results?? Do they need to go get results from another service or something??
3. What are the steps to prevent the service and client from reaching a faulted state in this scenario?
As always - just make sure your service code catches all exceptions and either handles them internally, or returns interoperable SOAP faults instead of .NET exceptions.
It sounds like what you want to do is something like the following: use netMsmqBinding between your service and your service consumers.
The only thing you won't get out of the box is the retrying. However if you make the queue transactional then this functionality can be implemented in your service code.
If there is a failure in your dequeue operation the message will not be removed from the queue. It will therefore be available for further dequeue attempts.
However, you would need to implement retry attempt threshold code which fails a message after a certain number of attempts.
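One possible way to sketch that threshold check (not the only approach) is to read the retry information WCF exposes for MSMQ-delivered messages via MsmqMessageProperty; ReportMessage, IReportService, MaxRetries, GiveUp and Process below are hypothetical placeholders:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

public class ReportService : IReportService   // IReportService is your MSMQ-bound contract
{
    private const int MaxRetries = 5;

    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitReport(ReportMessage message)
    {
        var msmqProperty = OperationContext.Current
            .IncomingMessageProperties[MsmqMessageProperty.Name] as MsmqMessageProperty;

        if (msmqProperty != null && msmqProperty.AbortCount >= MaxRetries)
        {
            GiveUp(message);   // e.g. log and return so the poison message leaves the queue
            return;
        }

        Process(message);      // throwing here aborts the transaction and the message is retried
    }
}
```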
I would suggest a different approach to the ones suggested here. If you are able to, I would consider the introduction of a messaging framework such as NServiceBus. It satisfies many of the requirements that you have right out of the box. Let me try and address this in the context of your requirements.
The service will be called by clients in a one-way manner.
All communication between endpoints in NServiceBus is one way. The underlying transport NServiceBus uses is MSMQ, so much like your WCF approach, your client is communicating with queues, rather than specific service endpoints.
These request messages should be stored in a SQL Server database and a Windows Service queues the messages.
If you want to store your request messages in a database, you can configure NServiceBus to forward all messages sent to your request processing endpoint to another "audit" queue, which you can use to persist to the database. This has the added benefit of separating your application logic from your auditing implementation.
The time at which the requests are processed will be configurable.
NServiceBus allows you to defer when a message is sent. Normally a message is sent via the Send method of a Bus instance - Bus.Send(msg). You can use the Defer method to send the message some time in the future, e.g. Bus.Defer(DateTime.Now.AddDays(1), msg); There's nothing more you really have to do; NServiceBus will handle the message once the specified time has been reached.
If an error occurs while processing the message, it needs to be retried up to 100 times and if it still fails it needs to be terminated.
By default, NServiceBus will enlist your message in a transaction as soon as your message leaves the queue. This ensures that in the event of failure that the message is rolled back to the originating queue. In such an event, NServiceBus will automatically try to reprocess the message a configurable number of times. The default being 5. You can of course set this to whatever you want, although I am not sure why you would want to set this to 100. At any rate, NServiceBus uses this setting to stop an endless loop of automatic retries. Once the limit has been reached the message is sent to an error queue where it sits until you fix whatever issues caused the exception or until you decide to push the message back to the queue for processing. Either way, you are assured that the message is never lost.
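To make that concrete, a hypothetical handler sketch (v3/v4-era API); ProcessRequestCommand and Process stand in for your own message type and processing logic:

```csharp
using NServiceBus;

public class ProcessRequestHandler : IHandleMessages<ProcessRequestCommand>
{
    public void Handle(ProcessRequestCommand message)
    {
        // If this throws, the transaction is rolled back, the message goes back to
        // the queue, and NServiceBus retries it up to the configured limit before
        // moving it to the error queue.
        Process(message);
    }
}
```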
Also there should be a mechanism to monitor the number of transactions made per day and the number of failures.
The beauty of using MSMQ as the transport is that performance monitoring can be achieved at an infrastructure level. How your applications perform can be measured by how long messages sit in the queue. NServiceBus comes with performance monitors that track the length of time a message is in the queue, and you can also add the perf mons that come built into Windows to track other activity. To monitor errors, all you need to do is check the number of messages in the error queue.
One of the main features of NServiceBus is reliability. WCF will only do so much for you, and then you are on your own. That's a lot of code and complexity, and frankly it's hugely error prone. The things I have described here are all standard features of NServiceBus, and I have barely scratched the surface of all the other things that you can do with it. I recommend you check it out.
I am looking for a message broker API to use with C#.
Normally things are quite simple. I have a server that knows which jobs need to be done, and I have some clients that need to get these jobs.
And here are the special requirements I have:
If a client gets a job but fails to answer within a specific time, then another client should do the work.
More than one queue, and priorities
If possible it needs to work with big message queues (this way I could just load all the jobs for a month and forget about it)
Secured communication would be good.
An API for talking to the broker from C#. How much work is done? What is still left to do?
Delete some jobs...
If available, replication to another broker would be good.
The broker needs to run on Windows
What is not an issue:
Low latency (it is not a problem if a message takes minutes)
Do you know such a message broker that is free to use?
RabbitMQ and several other AMQP implementations satisfy most of (if not all of) these requirements.
RabbitMQ allows clients to acknowledge receipt and/or processing of messages (see the consumer sketch after this list). As per http://www.rabbitmq.com/tutorials/amqp-concepts.html#message-acknowledge:
If a consumer dies without sending an acknowledgement the AMQP broker will redeliver it to another consumer or, if none are available at the time, the broker will wait until at least one consumer is registered for the same queue before attempting redelivery.
Many queues (and in fact many brokers) are supported, in a variety of different configurations
It scales particularly well, even for very large message queues: http://www.rabbitmq.com/faq.html#performance
Encryption is supported: http://www.rabbitmq.com/faq.html#channel-encryption
There is a .NET Client Users Guide and API docs: http://www.rabbitmq.com/documentation.html
There is live failover if a broker dies: http://www.rabbitmq.com/clustering.html
It runs on Windows, Linux, and probably anything else that has an Erlang implementation
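To illustrate the acknowledgement behaviour quoted above, here is a minimal consumer sketch using the RabbitMQ .NET client (exact signatures vary a little between client versions; DoJob is a hypothetical job-processing method):

```csharp
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare("jobs", true, false, false, null);   // durable queue

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        DoJob(ea.Body);                          // hypothetical job-processing code
        channel.BasicAck(ea.DeliveryTag, false); // explicit ack; unacked messages are redelivered
    };

    channel.BasicConsume("jobs", false, consumer); // false = manual acknowledgements
    Console.ReadLine();
}
```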
I have an email queue with emails to be sent. A web service calls a SOAP web service that processes the queue one by one.
We send email through an external vendor using their REST API. My problem is that calls to this API can take anywhere from 0.1 ms to 12 s. We send thousands of emails to customers that subscribe to our notices, and it is important that within each batch there is not too much delay between the first and the last in the queue (ideally they'd be sent simultaneously).
I've complained to the vendor but as they suck I'm quite sure they will not do anything about this.
Can I somehow "threadify" this process, making simultaneous calls to the server? The server is also my web server, so I can't use all the juice. How many threads would be appropriate? Is this a good idea? What's the best way to manage these threads generically?
You shouldn't be creating threads within an ASP.NET application. If you have a large enough queue to warrant multithreading, you should create a Windows service to handle the queue.
I would queue the emails in a database table and create a separate Windows service that reads from the table and spawns a thread for each email, up to some max thread limit. The database can also be used to capture throughput time.
You also should find out how many simultaneous web service requests your vendor can handle. BCC yourself on the emails to find out if simultaneous submissions on your end end up as a single-threaded transmission on their end. And perhaps start shopping for an alternative to this vendor (you did say they suck).
If you want to get fancy and offload the effort from your own server, you could send a batch of emails to a cloud service (Amazon Web Services, Microsoft Azure, or Google App Engine) and spawn a process in the cloud to spray the emails to your vendor simultaneously.
You can also send the emails directly from the cloud, at least you can with Amazon. They provide a default limit, but then here's a link on how to remove the limit: http://aws.amazon.com/contact-us/ec2-email-limit-request/.
I have had some success with ThreadPool.QueueUserWorkItem() for an ASP.NET app. You can google for some usage examples.
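For example (a tiny sketch; SendEmail, EmailMessage and emailBatch are hypothetical stand-ins for your own code):

```csharp
using System.Threading;

foreach (var email in emailBatch)
{
    // Each work item runs on a thread-pool thread; passing the email as state
    // avoids closure-capture surprises inside the loop.
    ThreadPool.QueueUserWorkItem(state => SendEmail((EmailMessage)state), email);
}
```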
There is no need to spawn threads yourself. The class generated by Visual Studio to access a web service already contains asynchronous methods. For each web service operation Foo, you will see that there is a BeginFoo and EndFoo method. The BeginFoo method will immediately return an IAsyncResult object while the web service call is done on another thread.
See this MSDN topic for more information on how to use IAsyncResult.
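A rough sketch of that pattern; EmailServiceClient, BeginSend and EndSend stand in for whatever proxy and operation names Visual Studio generated for your service reference:

```csharp
using System;
using System.Collections.Generic;

var client = new EmailServiceClient();          // the generated proxy
var pending = new List<IAsyncResult>();

foreach (var email in emailBatch)
{
    pending.Add(client.BeginSend(email, null, null));  // kick off the call, don't wait
}

foreach (var asyncResult in pending)
{
    asyncResult.AsyncWaitHandle.WaitOne();  // block until this particular call completes
    client.EndSend(asyncResult);            // harvest the result / surface any exception
}
```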