Let's imagine that we have a queue named "NotificationQ" and a consumer that takes tasks from that queue and sends emails to customers.
The emailing process sends an email through the Mailgun API. That API request does not return 200 every time (the reason is not important). When it fails, I need to tell RabbitMQ that the task failed. I know there is a feature called autoAck, but if a request fails, how does the RabbitMQ client package understand that it failed?
Do I manually trigger an ack to say that the request failed?
I am using the https://www.nuget.org/packages/RabbitMQ.Client/ package to handle RabbitMQ tasks.
var channel = RabbitPrepareFactory.GetConnectionFactory();
channel.BasicQos(0, 1, false);
var notificationPack = channel.BasicGet("notification", true);
var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);
var isEmailSendSuccessful = SomeFakeEmailSendFunctions(task.Email);
if (!isEmailSendSuccessful)
{
    // something to tell RabbitMQ that the task failed and should not delete that task from the queue
    .......
}
I think this could be useful. I would use something like a dead-letter exchange:
https://www.rabbitmq.com/dlx.html
So every time a message fails for whatever reason, you push the message to that queue.
Once your message is received by your consumer and the scope of the operation has finished, that message is acknowledged so that other consumers will not take an already processed message.
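As a minimal sketch building on the snippet from the question (and assuming the "notification" queue was declared with an `x-dead-letter-exchange` argument, which is not shown in the original code), fetching with autoAck turned off lets you ack on success and nack on failure so the broker dead-letters the message instead of deleting it:

    // Sketch only: assumes "notification" was declared with an x-dead-letter-exchange
    // argument so rejected messages are routed to a dead-letter queue.
    var notificationPack = channel.BasicGet("notification", false); // false = no autoAck
    if (notificationPack != null)
    {
        var message = System.Text.Encoding.UTF8.GetString(notificationPack.Body.ToArray());
        var task = JsonConvert.DeserializeObject<ForgetPasswordEmailNotification>(message);

        if (SomeFakeEmailSendFunctions(task.Email))
        {
            // Success: acknowledge so the message is removed from the queue.
            channel.BasicAck(notificationPack.DeliveryTag, false);
        }
        else
        {
            // Failure: reject without requeueing, so the message goes to the dead-letter exchange.
            channel.BasicNack(notificationPack.DeliveryTag, false, false);
        }
    }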
[Edit]
I don't think it's a good idea to process a message from a queue and then leave it there if something happens to your back end. If you implement the dead-letter queue, you could try to reprocess those messages at some later time (maybe with a cron job), or if you really don't want dead-letter queues, you could implement a retry mechanism in your client. Polly could work very well in your case: https://github.com/App-vNext/Polly
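For the retry route, a rough Polly sketch (assuming the same fake send function from the question and made-up retry counts) could look like this:

    // Sketch only: retry the email send up to 3 times with a short back-off
    // before giving up and nacking/dead-lettering the message.
    var retryPolicy = Policy
        .Handle<Exception>()
        .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(attempt));

    retryPolicy.Execute(() =>
    {
        // Turn the failed result into an exception so Polly retries it.
        if (!SomeFakeEmailSendFunctions(task.Email))
            throw new Exception("Email send failed");
    });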
In Azure Service Bus I need to listen for messages arriving from multiple subscriptions on different service buses at once.
To do this I created a list that contains objects with a connection string, a topic, a subscription name, and some other information (the list is called 'jobs').
For each item in this list I then create a separate task that creates the ServiceBusClient and the processor.
var jobs = GetAllServiceBusTopics();
Parallel.ForEach(jobs, async job =>
{
    var client = new ServiceBusClient(job.Environment.ServiceBusConnectionString);
    var options = new ServiceBusProcessorOptions();
    var processor = client.CreateProcessor(job.Environment.TopicName, _subscriptionName, options);
    try
    {
        processor.ProcessMessageAsync += MessageHandler;
        // Pass the job object somehow to the "MessageHandler" below.
        processor.ProcessErrorAsync += ErrorHandler;
        await processor.StartProcessingAsync();

        Console.WriteLine("Wait for a minute and then press any key to end the processing");
        Console.ReadKey();

        Console.WriteLine("\nStopping the receiver...");
        await processor.StopProcessingAsync();
        Console.WriteLine("Stopped receiving messages");
    }
    finally
    {
        await processor.DisposeAsync();
        await client.DisposeAsync();
    }
});
And the handler that is called if a new message arrives:
static async Task MessageHandler(ProcessMessageEventArgs args)
{
    // I need the "job" object from my loop above here.
}
I learned how the concept generally works from this Microsoft website.
My first question:
Is this approach okay, or am I running in the wrong direction? Can I do it like this?
But even if this is okay, I have another more important task:
I need to pass the "job" object from my loop somehow to the message handler - as a parameter.
But I currently have no idea how to achieve this. Any proposals on this?
Is this approach okay, or am I running in the wrong direction? Can I do it like this?
Yes, you can do this. One thing to keep in mind is that you instantiate multiple ServiceBusClient instances, each causing a new connection to be established rather than using the same connection. I don't know how big the number of topics (jobs) might be, but if it's large, you'll end up with connection starvation.
I need to pass the "job" object from my loop somehow to the message handler - as a parameter. But I currently have no idea how to achieve this. Any proposals on this?
That's not how ServiceBusProcessor is designed. It doesn't receive anything other than the incoming message that needs to be processed. If you need to have a job ID, that should be part of the message payload/metadata. If you need to know the entity it arrived from, you could add a subscription filter action to add a custom header with the identifier. An alternative approach would require wrapping the ServiceBusProcessor to retain the job ID/subscription identifier and use that in the event handler.
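A minimal sketch of that last option (assuming your existing loop, and a hypothetical `Job` type and handler signature) is to capture the job in a lambda and forward it to a handler that takes it as an extra parameter:

    // Sketch only: capture the current job in a closure so the handler can receive it.
    processor.ProcessMessageAsync += args => MessageHandler(args, job);

    static async Task MessageHandler(ProcessMessageEventArgs args, Job job)
    {
        // "job" is now available alongside the incoming message.
        var body = args.Message.Body.ToString();
        Console.WriteLine($"Message from {job.Environment.TopicName}: {body}");
        await args.CompleteMessageAsync(args.Message);
    }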
I am working on a POC project and trying to solve the following problem.
I have a Publisher which is sending messages to the Queue:
bus.PublishAsync<IBaseScenario>(new TestScenario())
    .ContinueWith(task =>
    {
        if (task.IsCompleted && !task.IsFaulted)
            Console.WriteLine("TestScenario queued with success.");
        else
            Console.WriteLine(task.Exception.Message);
    });
And some Consumers which are consuming the messages:
bus.SubscribeAsync<IBaseScenario>("test_1_consumer",
    message => Task.Factory.StartNew(() =>
    {
        var testScenario = message as TestScenario;
        var anotherTestScenario = message as AnotherTestScenario;
        ResolveScenario(testScenario);
        ResolveScenario(anotherTestScenario);
    }).ContinueWith(task =>
    {
        if (task.IsCompleted && !task.IsFaulted)
            Console.WriteLine("Task ended up with success.");
        else
            Console.WriteLine(task.Exception.Message);
    }));
At this point everything is working as needed, but here is what I would like to achieve.
My Message is some kind of Scenario which contains steps; each Scenario is sent to the Queue and then handled by a Consumer.
I would like to get some kind of ACK info from the Consumer sent to the Publisher every time each Step is done on the Consumer side (for example, whether it ended with success or not).
I would also like to get info about which Consumer got the Message.
Every Message (Scenario) should be treated as an atomic operation, so it should not be possible to do Steps on different Consumers, and if some Step ends without success, then the whole Scenario should be treated as failed.
Are these 2 requirements possible to solve using this architecture, or do I need to use something more?
The easiest thing to do would be to use EasyNetQ's request/response model, described here: https://github.com/EasyNetQ/EasyNetQ/wiki/Request-Response
In the response you can put the identity of the consumer that processed the message and the final status of the scenario. If one scenario is sent in one message, and that scenario contains all the steps necessary then all steps would be processed by a single consumer.
That said, message duplication is always a problem due to either sending the message twice or a message being requeued after a consumer fails. If it is critical that a scenario NEVER be processed more than once, then you will need to implement message deduplication or make each scenario idempotent. That is a general fact of life when working with RabbitMQ.
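A minimal sketch of that model (using the older EasyNetQ API where Request/Respond hang directly off IBus, as in the question's code, and hypothetical ScenarioRequest/ScenarioResult types and a RunAllSteps helper) might look like this:

    // Hypothetical DTOs: ScenarioRequest carries the scenario, ScenarioResult carries the outcome.

    // Consumer side: run every step of the scenario locally and reply with identity + status.
    bus.Respond<ScenarioRequest, ScenarioResult>(request =>
    {
        var success = RunAllSteps(request); // hypothetical helper that executes each step in order
        return new ScenarioResult { ConsumerId = Environment.MachineName, Success = success };
    });

    // Publisher side: send the scenario and wait for the consumer's response.
    var result = bus.Request<ScenarioRequest, ScenarioResult>(new ScenarioRequest { ScenarioName = "Test" });
    Console.WriteLine($"Handled by {result.ConsumerId}, success: {result.Success}");

Because the whole scenario travels in a single request message, one consumer processes it end to end, which matches the atomicity requirement as long as the scenario itself is not split across messages.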
while (true)
{
    BasicDeliverEventArgs e = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
    IBasicProperties properties = e.BasicProperties;
    byte[] body = e.Body;
    Console.WriteLine("Received Message : " + Encoding.UTF8.GetString(body));
    ch.BasicAck(e.DeliveryTag, false);
}
This is what we do when we retrieve messages by subscription. We use a while loop because we want the consumer to listen continuously. What if I want to make this event-based, so that the consumer consumes a message only when a new message arrives in the queue, or on some similar event?
Use RabbitMQ.Client.Events.EventingBasicConsumer for an event-driven consumer instead of a blocking one.
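A minimal sketch (reusing the channel `ch` from the question and a hypothetical queue name) could look like this:

    // Sketch only: event-driven consumption with EventingBasicConsumer.
    var consumer = new RabbitMQ.Client.Events.EventingBasicConsumer(ch);
    consumer.Received += (sender, e) =>
    {
        // Fires only when a message arrives; no blocking Dequeue() loop needed.
        Console.WriteLine("Received Message : " + Encoding.UTF8.GetString(e.Body.ToArray()));
        ch.BasicAck(e.DeliveryTag, false);
    };
    ch.BasicConsume("myQueue", false, consumer);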
You're currently blocking on the Consumer.Queue.Dequeue(). If I understand your question correctly, you want to asynchronously consume messages.
The standard way of doing this would be to write your own IBasicConsumer (probably by subclassing DefaultBasicConsumer) and set it as the consumer for the channel.
The trouble with this is that you have to be very careful about what you do in IBasicConsumer.HandleBasicDelivery. If you use any synchronous AMQP methods, such as basic.publish, you'll get a dead-lock. If you do anything that takes a long time, you'll run into some other problems.
If you do need synchronous methods or long-running actions, what you're doing is about the right way to do it. Have a look at Subscription; it's an IBasicConsumer that consumes messages and puts them on a queue for you.
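For reference, a rough sketch of that Subscription-based pattern (as it existed in older versions of the .NET client under RabbitMQ.Client.MessagePatterns, again reusing `ch` and a hypothetical queue name) looked roughly like this:

    // Sketch only: Subscription consumes on your behalf and buffers deliveries
    // until you pull them off with Next().
    var subscription = new RabbitMQ.Client.MessagePatterns.Subscription(ch, "myQueue", false);
    while (true)
    {
        BasicDeliverEventArgs e = subscription.Next(); // blocks until a message is available
        Console.WriteLine("Received Message : " + Encoding.UTF8.GetString(e.Body));
        subscription.Ack(e);
    }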
If you need any more help, a great place to ask is the rabbitmq-discuss mailing list.
I had this problem and could not find an answer, so I created a demonstration project that has the RabbitMQ subscription raise .NET events when a message is received. The subscription runs on its own thread, leaving the UI (in my case) free to do its thing.
Amusingly, I call my project RabbitEars, as it listens out for messages from the mighty RabbitMQ.
I intend to share this with the RabbitMQ site, so if they think it's of value they can include a link / code in their examples.
Check it out at http://rabbitears.codeplex.com/
Thanks
Simon
Having set up a ReferenceDataRequest, I send it along to an EventQueue:
Service refdata = _session.GetService("//blp/refdata");
Request request = refdata.CreateRequest("ReferenceDataRequest");
// append the appropriate symbol and field data to the request
EventQueue eventQueue = new EventQueue();
Guid guid = Guid.NewGuid();
CorrelationID id = new CorrelationID(guid);
_session.SendRequest(request, eventQueue, id);
long _eventWaitTimeout = 60000;
myEvent = eventQueue.NextEvent(_eventWaitTimeout);
Normally I can grab the message from the queue, but I'm hitting the situation now that if I'm making a number of requests in the same run of the app (normally around the tenth), I see a TIMEOUT EventType
if (myEvent.Type == Event.EventType.TIMEOUT)
    throw new Exception("Timed Out - need to rethink this strategy");
else
    msg = myEvent.GetMessages().First();
These are being made on the same thread, but I'm assuming that there's something somewhere along the line that I'm consuming and not releasing.
Anyone have any clues or advice?
There aren't many references on SO to BLP's API, but hopefully we can start to rectify that situation.
I just wanted to share something, thanks to the code you included in your initial post.
If you make a request for historical intraday data for a long duration (which results in many events generated by Bloomberg API), do not use the pattern specified in the API documentation, as it may end up making your application very slow to retrieve all events.
Basically, do not call NextEvent() on a Session object! Use a dedicated EventQueue instead.
Instead of doing this:
var cID = new CorrelationID(1);
session.SendRequest(request, cID);
Event eventObj;
do {
    eventObj = session.NextEvent();
    ...
} while (eventObj.Type != Event.EventType.RESPONSE);
Do this:
var cID = new CorrelationID(1);
var eventQueue = new EventQueue();
session.SendRequest(request, eventQueue, cID);
Event eventObj;
do {
    eventObj = eventQueue.NextEvent();
    ...
} while (eventObj.Type != Event.EventType.RESPONSE);
This can result in some performance improvement, though the API is known to not be particularly deterministic...
I didn't really ever get around to solving this question, but we did find a workaround.
Based on a small, apparently throwaway, comment in the Server API documentation, we opted to create a second session. One session is responsible for static requests, the other for real-time. e.g.
_marketDataSession.OpenService("//blp/mktdata");
_staticSession.OpenService("//blp/refdata");
This means one session operates in subscription mode, the other more synchronously - I think it was this duality which was at the root of our problems.
Since making that change, we've not had any problems.
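As a rough sketch of that setup (assuming the usual Bloomberglp.Blpapi session bootstrap with a default host and port, which is not part of the original snippet), the two sessions are simply started side by side:

    // Sketch only: one session for real-time subscriptions, one for static reference data.
    var options = new SessionOptions { ServerHost = "localhost", ServerPort = 8194 };

    var marketDataSession = new Session(options);
    marketDataSession.Start();
    marketDataSession.OpenService("//blp/mktdata");

    var staticSession = new Session(options);
    staticSession.Start();
    staticSession.OpenService("//blp/refdata");

    // Subscriptions go through marketDataSession; ReferenceDataRequests go through staticSession.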
My reading of the docs agrees that you need separate sessions for the "//blp/mktdata" and "//blp/refdata" services.
A client appeared to have a similar problem. I solved it by making hundreds of sessions rather than passing hundreds of requests in one session. Bloomberg may not be too happy with this BFI (brute force and ignorance) approach, as we are sending the field requests for each session, but it works.
Nice to see another person on Stack Overflow enjoying the pain of the Bloomberg API :-)
I'm ashamed to say I use the following pattern (I suspect copied from the example code). It seems to work reasonably robustly, but probably ignores some important messages. But I don't get your time-out problem. It's Java, but all the languages work basically the same.
cid = session.sendRequest(request, null);
while (true) {
    Event event = session.nextEvent();
    MessageIterator msgIter = event.messageIterator();
    while (msgIter.hasNext()) {
        Message msg = msgIter.next();
        if (msg.correlationID() == cid) {
            processMessage(msg, fieldStrings, result);
        }
    }
    if (event.eventType() == Event.EventType.RESPONSE) {
        break;
    }
}
This may work because it consumes all messages off each event.
It sounds like you are making too many requests at once. BB will only process a certain number of requests per connection at any given time. Note that opening more and more connections will not help, because there are limits per subscription as well. If you make a large number of time-consuming requests simultaneously, some may time out. Also, you should process each request completely (until you receive the RESPONSE message) or cancel it; a partial request that is outstanding is wasting a slot.

Since splitting into two sessions seems to have helped you, it sounds like you are also making a lot of subscription requests at the same time. Are you using subscriptions as a way to take snapshots, i.e. subscribe to an instrument, get initial values, and de-subscribe? If so, you should try to find a different design; this is not the way subscriptions are intended to be used. An outstanding subscription request also uses a request slot. That is why it is best to batch as many subscriptions as possible in a single subscription list instead of making many individual requests. Hope this helps with your use of the API.
By the way, I can't tell from your sample code, but while you are blocked on messages from your dedicated event queue, are you also reading from the main event queue? You must process all the messages out of the queue, especially if you have outstanding subscriptions. Responses can queue up really fast. If you are not processing messages, the session may hit some queue limits, which may be why you are getting timeouts. Also, if you don't read messages, you may be marked a slow consumer and not receive more data until you start consuming the pending messages. The API is async. Event queues are just a way to block on specific requests without having to process all messages from the main queue, in a context where blocking is OK and it would otherwise be difficult to interrupt the logic flow to process parts asynchronously.