Azure Service Bus Subscriber regularly phoning home? - c#

We have a pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via breakpoint).
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }
    Listen();
}

I think what you are seeing is the Receive call in the background. Behind the scenes the Receive calls all use long polling, which means they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message it will hold the connection open for a period of time in case a message arrives. If a message arrives within that window it is returned to the client. If no message is available by the end of the window, a response is sent to the client indicating that no message was there (aka, your null BrokeredMessage). If you call Receive with no overloads (like you've done here) it will immediately make another request. This loop continues to happen until a message is received.
Thus, what you are seeing is the client repeatedly requesting a message when there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back-off for requests. There are lots of examples out there of how to do this. This cuts back on how often you need to go check the queue and can reduce your transaction count.
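For illustration, a minimal back-off sketch around the same ReceiveAsync call might look like the following. The async rewrite and the delay bounds are my own assumptions, not part of the original code:

// Sketch of an exponential back-off polling loop (Microsoft.ServiceBus.Messaging).
// The 1 second / 10 minute bounds are arbitrary placeholder values.
private static readonly TimeSpan MinDelay = TimeSpan.FromSeconds(1);
private static readonly TimeSpan MaxDelay = TimeSpan.FromMinutes(10);

private async Task ListenWithBackOffAsync()
{
    TimeSpan delay = MinDelay;
    while (true)
    {
        BrokeredMessage message = await _subscriptionClient.ReceiveAsync();
        if (message != null)
        {
            await message.CompleteAsync();
            // Do some things...
            delay = MinDelay;                 // reset the back-off after real traffic
        }
        else
        {
            await Task.Delay(delay);          // idle: wait before polling again
            delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, MaxDelay.Ticks));
        }
    }
}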
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've heard recently: if you have a queue that isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and if a receive call returns no messages you could go back to watching the queue depth. I've not tried that, and I'd think it's possible you could get throttled if you did the queue depth check too often.
If these are your production numbers then your subscription isn't really processing a lot of messages. It would likely be a really good idea to have a back-off policy up to whatever wait is acceptable before a message is processed. For example, if it is okay for a message to sit for up to 10 minutes, create a back-off approach that eventually only checks for a message every 10 minutes; when it gets one, process it and immediately check again.
Oh, there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server timeout or a local timeout. If it is local then it could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set in the MessagingFactory settings used when creating the SubscriptionClient. You'd have to test that.
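If you want to experiment with that, a rough sketch of both (the Receive overload and the factory's OperationTimeout) might look like this. The key name, key, topic/subscription names and the 30-second values are placeholders, and I haven't verified which timeout wins:

// Sketch: setting OperationTimeout on the MessagingFactory and passing an explicit
// server wait time to Receive (Microsoft.ServiceBus.Messaging). Values are placeholders.
var settings = new MessagingFactorySettings
{
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key),
    OperationTimeout = TimeSpan.FromSeconds(30)
};
var factory = MessagingFactory.Create(new Uri("sb://namespace.servicebus.windows.net/"), settings);
var client = factory.CreateSubscriptionClient("your-topic", "your-subscription");

// Overload that takes an explicit wait time for a single receive call:
BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));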

Related

Azure Service Bus subscription.close() not working as intended

I have a scaled-out application, where each instance connects to an Azure Service Bus subscription with the same name. The end result is that only a single instance gets to act on any given message, because they are all listening to the same subscription.
Occasionally the application needs to place an instance into an idle state (Service Fabric ActiveSecondary replica). When this occurs, I need to close the subscription so that this instance no longer receives messages. If there were 2 instances originally, once one is placed into the idle state all messages should go to the remaining instance. This is important so that all messages are handled by a properly configured primary instance.
When the instance becomes idle, a cancellation token is cancelled. I have code listening for the cancellation and calling Close() on the SubscriptionClient generated when I created the subscription originally.
The issue is, even after I call Close() on one instance, messages are still being randomly split between it and the primary.
Is the way I'm doing this inherently wrong, or is something else in my code causing this behavior?
The Azure Service Bus track 0 and 1 SDKs do not support CancellationTokens. If you're closing your client and messages won't be processed, they'd be picked up by another competing instance when they become visible again. That's where MaxLockDuration and MaxDeliveryCount are important: they ensure messages get enough processing attempts to account for the situation you're describing without waiting too long.
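For reference, both settings live on the subscription itself; a minimal sketch using the older management API (the connection string, topic/subscription names and values are placeholders, not from the question) might be:

// Sketch: creating a subscription with an explicit lock duration and delivery count
// (Microsoft.ServiceBus NamespaceManager; names and values are placeholders).
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
var description = new SubscriptionDescription("my-topic", "my-subscription")
{
    LockDuration = TimeSpan.FromMinutes(1),   // how long a delivered message stays locked
    MaxDeliveryCount = 10                     // delivery attempts before dead-lettering
};
namespaceManager.CreateSubscription(description);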
Disregard this post. Turns out I had the same subscription name used twice within a single instance, so they were competing for the events. The close() function works as expected.

How to gracefully disconnect from rabbitmq queue

I am experiencing a race condition issue with my RabbitMQ client. My service has multiple instances listening on a single queue, storing received messages into a DB.
When they all get restarted at once, I sometimes see messages being redelivered and stored in the DB twice. This is normally handled on the client side by checking whether the correlation id has already been stored in the DB. This works 99.9% of the time (I am processing 5 million messages a day; it happens once or twice a day).
So as I said, I suspect a race condition is responsible for this. I think I receive the message again while my first message is still being processed, so when I check I don't see it stored in the DB and, in the end, store it twice.
I should note that this is largely a non-issue, but it has been bothering me because I can't really explain what happens.
I suspect that it happens when I restart the services. I think I disconnect from the queue while I am still processing the message, triggering RabbitMQ to redeliver it to another instance that is not shut down yet.
What I want to do when I am stopping the service is to:
tell RabbitMQ that I don't want to receive further messages
wait for all currently processing messages to finish
send acks / nacks
shut down
Right now I am first deregistering the Received event:
_consumerServer.Received -= MessageReceived;
then I am disposing the channel and the connection:
if (_channel != null)
{
    _channel.Close();
    _channel.Dispose();
}

if (_connectionServer != null)
{
    _connectionServer.Close();
    _connectionServer.Dispose();
}
The RabbitMQ team monitors their mailing list and only sometimes answers questions on StackOverflow.
Rather than trying to shut down a consumer so that messages won't be redelivered, you should handle redelivery correctly. Check for and handle the case where the redelivered flag is set on a message, and act appropriately. You should also try to store your messages in such a way that the store operation is idempotent, i.e. it can happen multiple times and you will only have one record in your database.
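As a rough illustration of both points (the consumer wiring is generic RabbitMQ.Client code, and SaveIfNotExists is a hypothetical idempotent store call, not from your code):

// Sketch: ack manually, watch the redelivered flag, and make the insert idempotent.
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += (sender, ea) =>
{
    string correlationId = ea.BasicProperties.CorrelationId;

    // SaveIfNotExists is hypothetical: an insert guarded by a unique index (or upsert)
    // on the correlation id, returning false if the row already exists.
    bool stored = SaveIfNotExists(correlationId, ea.Body);
    if (!stored && ea.Redelivered)
    {
        // A redelivery we already persisted: nothing to do beyond acknowledging it.
    }

    _channel.BasicAck(ea.DeliveryTag, false);
};
_channel.BasicConsume("my-queue", false, consumer);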
Please see the guidelines that the team have provided here:
https://www.rabbitmq.com/reliability.html#consumer

Calling Abandon on an Azure Service Bus re-queues the message at the back rather than the front of the queue

I'm using an Azure Service Bus Queue with Session based messaging enabled. To consume from the queue I register an IMessageSessionAsyncHandler and then process the message in the OnMessageAsync method.
The issue I'm seeing is that if I abandon a message for whatever reason, rather than it being received again immediately, I receive the next message in the session, and only after processing that message do I receive the first message again (assuming only two messages in the session).
As an example, let's say I have a queue with 2 messages on it, both with the same SessionId. The two messages have sequence numbers of 1 and 2 respectively. I start receiving and get the message with sequence 1, as expected. If I then abandon this message using message.Abandon (the reason for abandoning is irrelevant), I immediately get the next message in the session (sequence number 2). Only after handling (or abandoning) this second message do I get the first message again.
The behaviour I'm seeing isn't what I'd expect from abandoning a message and isn't consistent with other ways of using the queue. I've tested this same example in the following scenarios:
without the use of an IMessageSessionAsyncHandler and instead just manually accepting a message session.
without the use of sessions and instead just having two independent messages on the queue.
In both scenarios, I see the expected behaviour, in that when I abandon a message it is always guaranteed to be the next message received, unless the max delivery count is exceeded and it is dead-lettered.
My question is this: Is the behaviour I'm seeing with the use of an IMessageSessionAsyncHandler expected, or is this a bug in the Service Bus library? If it is not a bug, can anyone give me an explanation for why this behaves differently to the other ways of receiving?
When you register a session handler on the QueueClient, prefetch is turned on internally to improve the latency and throughput of the receivers. Unfortunately, for the IMessageSessionAsyncHandler scenario this behavior cannot be overridden. One option is to abandon the session itself when you encounter a message in a session which needs to be abandoned; this will ensure that the messages are delivered in order.

Suggestions for developing a TCP/IP based message client

I've got a server-side protocol that controls a telephony system. I've already implemented a client library that communicates with it, which is in production now; however, there are some problems with the system I have at the moment, so I am considering re-writing it.
My client library is currently written in Java, but I am thinking of re-writing it in both C# and Java to allow different clients to have access to the same back end.
The messages start with a keyword, have a number of bytes of metadata, and then some data. The messages are always terminated by an end-of-message character.
Communication is duplex between the client and the server, usually taking the form of a request from the client which provokes several responses from the server, but some messages are unsolicited notifications.
The messages are marked as being one of:
C: Command
P: Pending (server is still handling the request)
D: Data (data returned in response to a command)
R: Response
B: Busy (server is too busy to handle the request at the moment)
N: Notification
My current architecture has each message being parsed and a thread spawned to handle it; however, I'm finding that some of the Notifications are processed out of order, which is causing me trouble as they have to be handled in the same order they arrive.
The duplex messages tend to take the following message format:
Client -> Server: Command
Server -> Client: Pending (Optional)
Server -> Client: Data (optional)
Server -> Client: Response (2nd entry in message data denotes whether this is an error or not)
I've been using the protocol for over a year and I've never seen a Busy message, but that doesn't mean they don't happen.
The server can also send notifications to the client, and there are a few Response messages that are auto-triggered by events on the server, so they are sent without a corresponding Command being issued.
Some Notification messages will arrive as part of a sequence of related messages, for example:
NotificationName M00001
NotificationName M00001
NotificationName M00000
The string M0000X indicates either that there is more data to come or that this is the last message in the sequence.
At present the TCP client is fairly dumb: it just spawns a thread that notifies an event on a subscriber that the message has been received. The event is specific to the message keyword and the type of message (so Data, Response and Notification messages are handled separately). This works fairly effectively for Data and Response messages, but falls over with the Notification messages, as they seem to arrive in rapid sequence and a race condition sometimes causes the message end to be processed before the messages that carry the data, leading to lost message data.
Given this really badly written description of how the system works, how would you go about writing the client-side transport code?
The metadata does not have a message number, and I have no control over the underlying protocol as it's provided by a vendor.
The requirement that messages must be processed in the order in which they're received almost forces a producer/consumer design, where the listener gets requests from the client, parses them, and then places the parsed request into a queue. A separate thread (the consumer) takes each message from the queue in order, processes it, and sends a response to the client.
Alternately, the consumer could put the result into a queue so that another thread (perhaps the listener thread?) can send the result to the client. In that case you'd have two producer/consumer relationships:
Listener -> event queue -> processing thread -> output queue -> output thread
In .NET, this kind of thing is pretty easy to implement using BlockingCollection to handle the queues. I don't know if there is something similar in Java.
The possibility of a multi-message request complicates things a little bit, as it seems like the listener will have to buffer messages until the last part of the request comes in before placing the entire thing into the queue.
To me, the beauty of the producer/consumer design is that it forces a hard separation between different parts of the program, making each much easier to debug and minimizing the possibility of shared state causing problems. The only slightly complicated part here is that you'll have to include the connection (socket or whatever) as part of the message that gets shared in the queues so that the output thread knows where to send the response.
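A bare-bones sketch of that pipeline, assuming a ParsedMessage type and Parse/Process methods of your own (all invented names here), could look like:

// Listener -> queue -> processing thread, using BlockingCollection to preserve order.
// BlockingCollection<T> lives in System.Collections.Concurrent.
BlockingCollection<ParsedMessage> queue = new BlockingCollection<ParsedMessage>();

// Producer (listener) side: called for every complete message read off the socket.
void OnMessageReceived(byte[] rawFrame)
{
    queue.Add(Parse(rawFrame));              // enqueue in arrival order
}

// Consumer side: one thread drains the queue, so processing order matches arrival order.
new Thread(() =>
{
    foreach (ParsedMessage msg in queue.GetConsumingEnumerable())
    {
        Process(msg);                        // handle, then hand the result to the output queue
    }
}) { IsBackground = true }.Start();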
It's not clear to me if you have to process all messages in the order they're received or if you just need to process messages for any particular client in the proper order. For example, if you have:
Client 1 message A
Client 1 message B
Client 2 message A
Is it okay to process the first message from Client 2 before you process the second message from Client 1? If so, then you can increase throughput by using what is logically multiple queues--one per client. Your "consumer" then becomes multiple threads. You just have to make sure that only one message per client is being processed at any time.
I would have one thread per client which does the parsing and processing. That way the processing would be in the order it is sent/arrives.
As you have stated, the tasks cannot be performed in parallel safely. Performing the parsing and processing in different threads is likely to add as much overhead as you might save.
If your processing is relatively simple and doesn't depend on external systems, a single thread should be able to handle 1K to 20K messages per second.
Are there any other issues you would want to fix?
I can only recommend a Java-based solution.
I would use some already mature transport framework. By "some" I mean the only one I have worked with until now -- Apache MINA. However, it works and it's very flexible.
Regarding processing messages out of order: for messages which must be processed in the order they were received, you could build queues and put such messages into them.
To limit the number of queues, you could instantiate, say, 4 queues, and route each incoming message to a particular queue depending on the last 2 bits (indices 0-3) of a hash of the ordering part of the message (for example, the client_id contained in the message).
If you have more concrete questions, I can update my answer appropriately.

How can I throttle the amount of messages coming from ActiveMQ in my C# app?

I'm using ActiveMQ in a .NET program and I'm flooded with message events.
In short, when I get a queue event 'onMessage(IMessage receivedMsg)' I put the message into an internal queue, out of which X threads do their thing.
At first I had 'AcknowledgementMode.AutoAcknowledge' when creating the session, so I'm guessing that all the messages in the queue got sucked down and put into the in-memory queue (which is risky, since with a crash everything is lost).
So then I used 'AcknowledgementMode.ClientAcknowledge' when creating the session, and when a worker is done with the message it calls the 'commit()' method on the message. However, all the messages still get sucked down from the queue.
How can I configure it so that only X messages are being processed or held in the internal queue, and not everything is 'downloaded' right away?
Are you on .NET 4.0? You could use a BlockingCollection and set the maximum amount it may contain. As soon as a thread tries to put in an excess element, the Add operation will block until the collection falls below the threshold again.
Maybe that would do it for throttling?
There is also an API for throttling in the Rx framework, but I do not know how it is implemented. If you implement your queue source as an Observable, this API would become available to you, but I don't know if it meets your needs.
You can set the client prefetch to control how many messages the client will be sent. When the session is in Auto Ack mode, the client will only ack a message once it has been delivered to your app via the onMessage callback or through a synchronous receive. By default the client will prefetch 1000 messages from the broker; if the client goes down, these messages are redelivered to another client if this is a Queue, otherwise for a Topic they are just discarded, as a topic is a broadcast-based channel. If you set the prefetch to one, your client will only be sent one message from the server; then each time your onMessage callback completes a new message is dispatched, because the client acks the previous one (that is, if the session is in Auto Ack mode).
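For example, with the NMS ActiveMQ client a prefetch of one can be requested on the connection URI. The exact option name is worth double-checking against the configuration page linked below, and the broker address here is just a placeholder:

// Sketch: ask the broker to dispatch at most one message at a time.
// Apache.NMS / Apache.NMS.ActiveMQ; with AutoAcknowledge the ack goes out when the
// Listener callback returns, and only then is the next message dispatched.
IConnectionFactory factory = new Apache.NMS.ActiveMQ.ConnectionFactory(
    "activemq:tcp://localhost:61616?nms.PrefetchPolicy.QueuePrefetch=1");

IConnection connection = factory.CreateConnection();
ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
IMessageConsumer consumer = session.CreateConsumer(session.GetQueue("MY.QUEUE"));

consumer.Listener += message =>
{
    // Process the message; the next one arrives only after this callback returns.
};
connection.Start();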
Refer to the NMS configuration page for all the options:
http://activemq.apache.org/nms/configuring.html
Regards
Tim.
FuseSource.com
