We have an existing WebSphere MQ queue manager (running fine, no issues). It has a pair of queues, request and response, for each "method".
We'd like to put a web service front end over this for the benefit of some apps we have that cannot call MQ but can call web services.
Of course, web services are synchronous but our MQ flow is asynchronous, and I am not sure how to get around this.
Example:
The app calls the web service; the web service waits for a response.
The web service puts the message on the MQ request queue.
Of course, the response will arrive on a different queue, so my thinking is that the web service would have to read all the messages on the response queue and remove only the correct one (identified by something such as a GUID).
Has anyone got any previous design knowledge on solving this?
The web service does not need to read all the response messages; you can perform a correlated get. When the request is put on the request queue, you take the request's message id and wait on the response queue for the response message carrying that value as its correlation id. MQ handles this very efficiently.
Here is another Stack Overflow answer that shows some code for performing a correlated get:
Issue in Correlating request message to resp message in Java Client to access MQ Series
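To make the idea concrete, here is a minimal in-memory sketch of the correlation pattern in plain Java (no MQ dependency, all names invented): the responder copies the request's message id into the reply's correlation id, and the requester blocks until the reply carrying that id arrives. With real MQ you would instead set the match options on the get call so the queue manager does the matching for you.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// In-memory sketch of a correlated get: the requester waits only for the
// reply whose correlation id equals its request's message id.
public class CorrelatedGet {
    static class Message {
        final String messageId, correlationId, body;
        Message(String messageId, String correlationId, String body) {
            this.messageId = messageId; this.correlationId = correlationId; this.body = body;
        }
    }

    // One pending future per outstanding request, keyed by the request's message id.
    static final Map<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();

    // The response-queue listener: complete the future whose key matches the
    // reply's correlation id; unmatched replies are simply dropped here.
    static void onResponse(Message reply) {
        CompletableFuture<Message> f = pending.remove(reply.correlationId);
        if (f != null) f.complete(reply);
    }

    // The web service side: put the request, then block until the matching reply.
    static Message sendAndWait(String body, long timeoutMs) throws Exception {
        String msgId = UUID.randomUUID().toString();   // MQ would assign this id
        CompletableFuture<Message> f = new CompletableFuture<>();
        pending.put(msgId, f);
        // Simulated backend: replies asynchronously with the correlation id set.
        CompletableFuture.runAsync(() ->
            onResponse(new Message(UUID.randomUUID().toString(), msgId, "reply to " + body)));
        return f.get(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndWait("getBalance", 2000).body);
    }
}
```

The map-of-futures is only needed because this sketch has no broker; with MQ the correlation match happens broker-side and the web service simply issues a blocking get with a wait interval.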
I have a problem statement where a product makes an outbound call to a self-hosted service synchronously and expects the response synchronously.
To produce that response, my self-hosted service has to do a lot of async operations and call out to on-prem. On-prem sends the response back to a receiver service, and when the response is sent back to the self-hosted service, I am not able to identify which request the response belongs to.
I have read about semaphores, and was wondering if one could be used to solve this.
For example:
Maintain the state of each in-flight request with a semaphore
Let the self-hosted service do its usual work, then wait on that semaphore
Once I have the response from on-prem and SNS sends an event, release the semaphore to wake the waiter and do the further processing
High-level architecture
Kindly note: I can't change the flow or the architecture
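The semaphore idea above can be sketched roughly as follows (plain Java, all names invented): each in-flight request parks on its own semaphore, and the callback that receives the on-prem/SNS event stores the response under the request id and releases that semaphore. The key assumption is that the request id travels with the outbound call so the callback can echo it back; that is the correlation piece the question is missing.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: one semaphore per in-flight request; the async callback releases it.
public class SemaphoreCorrelation {
    static final Map<String, Semaphore> waiters = new ConcurrentHashMap<>();
    static final Map<String, String> responses = new ConcurrentHashMap<>();

    // Called by the event listener when on-prem answers; the callback must
    // carry the original request id for this lookup to work.
    static void onCallback(String requestId, String response) {
        responses.put(requestId, response);
        Semaphore s = waiters.get(requestId);
        if (s != null) s.release();
    }

    // Called by the synchronous front end: kick off the async work, then block.
    static String callAndWait(String requestId, long timeoutMs) throws InterruptedException {
        Semaphore s = new Semaphore(0);
        waiters.put(requestId, s);
        // Simulated on-prem round trip answering asynchronously.
        CompletableFuture.runAsync(() -> onCallback(requestId, "done:" + requestId));
        try {
            if (!s.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS))
                return null;                         // timed out; caller decides what to do
            return responses.remove(requestId);
        } finally {
            waiters.remove(requestId);               // avoid leaking abandoned waiters
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callAndWait("req-42", 2000));
    }
}
```

A timeout on the acquire is essential: without it, a lost callback would block the synchronous caller forever.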
Using SQL Server 2008, ASP.NET, C#/VB/.NET
I have experience using SQL Server, but just starting with Service Broker. I've been looking at Service Broker to help me with the following scenario:
Customers come to our website and submit a request.
This request is sent to a Partner's web service for handling and we get back a request Id (GUID) which both of us can use in tracking the request.
We save info about the request in a Tracking table and wait for status updates from our Partner.
Over the next hours/days/weeks, our Partner sends us status updates on the request via a webhook listener we have running on our end.
For performance reasons, it was recommended this webhook just validate the update and queue it on a "processing queue" for later processing.
We have a processing application which retrieves the status updates from the processing queue and handles the update -- mostly updating the Tracking table and sometimes by requesting files from the Partner. Requesting files may take some time, hence having this processing separate from the webhook receiving the updates. And there is a good possibility we may need several instances of the processing application.
Since we are already using SQL Server, I was hoping to leverage Service Broker to provide the processing queue. Some of the issues I'm struggling with are:
I think we only need 1 processing queue. Most of the SB documentation shows an Initiator and a Target queue.
When the processing app receives an update, I'd like to continue to process more updates for that specific request (conversation groups?). Examples I've seen show receiving a message, sending a response, and then closing the conversation. I'd like to keep the conversation open until we receive a "request complete" update from our Partner.
After receiving an update from the queue, do I need to send a response message? If so, to whom? It looks like my consumer app now also has to be a producer.
I've read all the warnings about "conversation leaks" and "fire-and-forget" anti-patterns, so I want to follow best practices.
Are there any good examples of using Service Broker for a producer-consumer scenario like this?
If you have any questions or need more information, please let me know. Thanks!
randy
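Regarding point 2, what conversation groups buy you is ordered, serialized processing of all messages for one request even with several processing instances running. Here is a small in-memory analog of that idea in plain Java (all names invented; this is a sketch of the concept, not Service Broker code): updates are routed by request id to a single worker, so updates for one request stay in order while different requests run in parallel.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// One logical processing queue, several workers, but all updates for a
// given request id are handled in order by the same worker (the rough
// effect of a Service Broker conversation-group lock).
public class GroupedProcessing {
    static final int WORKERS = 4;   // "several instances of the processing application"
    static final ExecutorService[] workers = new ExecutorService[WORKERS];
    static {
        for (int i = 0; i < WORKERS; i++) workers[i] = Executors.newSingleThreadExecutor();
    }

    static final List<String> log = new CopyOnWriteArrayList<>();

    // Route every status update for a request id to the same worker.
    static Future<?> enqueue(String requestId, String update) {
        int worker = Math.floorMod(requestId.hashCode(), WORKERS);
        return workers[worker].submit(() -> log.add(requestId + ":" + update));
    }

    public static void main(String[] args) throws Exception {
        enqueue("req-1", "received");
        enqueue("req-1", "in-progress");
        enqueue("req-1", "complete").get();   // same worker, so waiting on the last suffices
        System.out.println(log);
    }
}
```

In Service Broker the analog is keeping one conversation per request open until the "request complete" update arrives, and receiving by conversation group so only one processor handles a given request at a time.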
I have a .NET Core 3.1 web API application. The app is a gateway that exposes various other services which communicate through Service Bus (queues, topics). We're using Service Bus for asynchronous messaging between services; I don't want any synchronous communication between the gateway and the other services.
So I'm trying to find a way to receive a request in a controller to, for example, create a resource, send that command to Service Bus, and then sit and wait.
Meanwhile, the service will save the command to its db and emit an event.
The gateway is listening on a topic to receive that message, and now, somehow, I need to pass this message back to the request in the controller that is waiting for it to finish its own execution.
As far as I know, there is no straightforward way to learn, while waiting in the controller (as in your case), whether the handler finished properly, but maybe you can work around it.
As you know, Service Bus has a queue or topic from which the listener gets the messages sent. It also has a dead-letter queue, where messages are moved if an unhandled exception occurs. Every message has an internal id (or your own payload can carry an id) that you can use to track it and look for it in both places. Maybe this can solve your problem, but it is not efficient at all, because it is not the intended use of this kind of service: its job is to tell you that your message has been queued, after which you should continue your execution and handle the message in your service.
You're creating your resource asynchronously, so don't try to give the appearance of synchronous behavior. Return an HTTP 202 (Accepted) status code from your API and let the client query for the created resource at a later time.
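The 202 pattern can be sketched as follows (plain Java rather than ASP.NET Core, and all names invented): the gateway records the command as pending, immediately returns an id the client can poll with, and a simulated downstream consumer later marks the resource created.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "return 202 and let the client poll": accept the command,
// hand back an id at once, and complete the work asynchronously.
public class AcceptedPattern {
    enum Status { PENDING, CREATED }
    static final Map<String, Status> resources = new ConcurrentHashMap<>();

    // POST handler logic: would return 202 plus this id (e.g. in a Location header).
    static String accept(String command) {
        String id = UUID.randomUUID().toString();
        resources.put(id, Status.PENDING);
        // Stand-in for the service-bus consumer finishing the work later.
        CompletableFuture.runAsync(() -> resources.put(id, Status.CREATED));
        return id;
    }

    // GET /resource/{id}/status handler logic backing the client's polling.
    static Status status(String id) { return resources.get(id); }

    public static void main(String[] args) throws Exception {
        String id = accept("create-order");
        while (status(id) != Status.CREATED) Thread.sleep(10);   // client-side poll loop
        System.out.println(status(id));
    }
}
```

In a real gateway the second map would be replaced by whatever store the event handler updates when the "resource created" event arrives on the topic.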
I am working with Pub/Sub for the first time and it's quite confusing. I just want to receive push notifications in my MVC application whenever I receive an email in my Gmail account. I have set up the project id (enabled the Pub/Sub API), created a topic with permissions (gmail-api-push@system.gserviceaccount.com), and added a subscriber to that topic, all from console.cloud.google.com, as I don't think I need to set these up from my code every time.
I am trying to set the delivery type to 'Push into an endpoint URL' with a URL of my choice (I tried localhost/home, also with SSL, then one of my online domains for testing) but keep getting this "generic:3" error in the bottom-left. I don't want to use 'Pull' as the delivery type.
There isn't a lot of help on this apart from developers.google.com, but I can't see the reason for this error. Any help would be highly appreciated.
Based on this documentation, if you want push notifications when there are changes to Gmail mailboxes, you need to use the Cloud Pub/Sub API. Note that with push delivery, the Pub/Sub server sends a request to the subscriber application at a preconfigured endpoint. The subscriber's HTTP response serves as an implicit acknowledgement: a success response indicates that the message has been successfully processed and the Pub/Sub system can delete it from the subscription; a non-success response indicates that the Pub/Sub server should resend it.
Usually, a generic error occurs when a transaction fails. By default, the API gateway returns a very basic error to the client when a message filter fails. You can try the workaround in this forum.
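As a rough sketch of that implicit-ack contract (plain Java, names invented; your real endpoint would be an MVC controller action): the push endpoint decodes the base64-encoded message.data field from the envelope and signals success or failure purely through the HTTP status it returns. Also note that push endpoints must be publicly reachable HTTPS URLs, which is likely why localhost did not work.

```java
import java.util.Base64;

// The HTTP status the push endpoint returns IS the acknowledgement:
// a success status acks the message, anything else triggers redelivery.
public class PushHandler {
    // Returns the status the endpoint should send back for one push request.
    static int handle(String base64Data) {
        try {
            String payload = new String(Base64.getDecoder().decode(base64Data));
            System.out.println("processing: " + payload);   // your real work goes here
            return 204;   // success => Pub/Sub deletes the message from the subscription
        } catch (RuntimeException e) {
            return 500;   // non-success => Pub/Sub will resend the message
        }
    }

    public static void main(String[] args) {
        System.out.println(handle("aGVsbG8="));   // valid base64 for "hello"
    }
}
```

Keep the handler fast: if processing can be slow, ack immediately and queue the work, since Pub/Sub treats a slow or failed response as a redelivery signal.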
In a scale-out scenario where one server consists of master+worker endpoints and another server consists of workers, is it safe to call bus.Publish from an endpoint when it finishes handling a given event? (Keeping in mind bus.Publish could be invoked from an endpoint sitting on the worker server).
My initial reaction is that it's not safe, since it sounds like the canonical example of why you should never call Publish from a web server...
We could certainly use the WCF wrapper and call out to a service that exists only on the master+worker endpoint server, but does anyone have any practical experience with this?
Thanks!
Each logical subscriber has a receiving endpoint. If you're using the distributor, this is the distributor endpoint, or distributor queue, if you will. So the subscriber will subscribe to specific events and specify its receiving endpoint. The publisher will have no idea whether it's a single endpoint instance or a distributor receiving the message.
The distributor will then send the message to a worker that is ready to process the message.
This is explained in more detail and with some clarifying images on this page: http://docs.particular.net/nservicebus/scalability-and-ha/distributor/publish-subscribe
In the end, we made our web apps "send-only endpoints" which essentially means they simply send commands directly to an endpoint via a chosen transport (in our case MSMQ). Once we need to scale, we will eventually implement "Sender Side Distribution" rather than utilizing the distributor.
From the NSB support team: "If you add more endpoints, Sender Side Distribution is the way to go. It acts as a round-robin mechanism running on the sender side which would send messages to a different 'worker' endpoint when you scale out."
https://docs.particular.net/transports/msmq/sender-side-distribution
If you only need to fire-and-forget messages from a website or some other app/service, I'd recommend this approach - it's quite simple.
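To make the round-robin mechanism concrete, here is a tiny sketch in plain Java (names invented; NServiceBus does this for you through its endpoint instance mapping, so this only illustrates the idea): the sending side itself cycles through the known worker queues, so no central distributor sits in the middle.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sender-side distribution in miniature: the sender picks the next worker
// queue for each message, round-robin, instead of routing via a distributor.
public class SenderSideDistribution {
    final List<String> workerQueues;
    final AtomicInteger next = new AtomicInteger();

    SenderSideDistribution(List<String> workerQueues) {
        this.workerQueues = workerQueues;
    }

    // Each send targets the next worker queue in turn.
    String routeNext() {
        int i = Math.floorMod(next.getAndIncrement(), workerQueues.size());
        return workerQueues.get(i);
    }

    public static void main(String[] args) {
        SenderSideDistribution router =
            new SenderSideDistribution(List.of("worker-a", "worker-b"));
        System.out.println(router.routeNext());   // worker-a
        System.out.println(router.routeNext());   // worker-b
        System.out.println(router.routeNext());   // worker-a again
    }
}
```

Scaling out then means adding another queue name to the list on the sender, which matches the support team's description of a round-robin mechanism running on the sending side.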