Setup
I have a model of a distributed system consisting of a producer (P), a consumer (C) and 1, 2, 3, ..., n workers (Wn). All of these components communicate via the Microsoft Azure Service Bus (B). Inside the bus there is a topic (T) and a queue (Q).
(P) pushes messages into (T) at varying rates. The (Wn)'s [their number is a consequence of the rate of (P)'s messages] fetch these messages from there, alter them according to some pre-defined function, and then forward them to (Q), from which (C) picks them up and handles them according to plan.
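For reference, the entities are provisioned roughly like this (a sketch using the classic Microsoft.ServiceBus SDK; the names "T", "Q" and "workers" are placeholders, and the (W)'s all compete for messages on a single subscription of (T)):

    using Microsoft.ServiceBus;

    // Sketch: provision the entities of the model (names are illustrative).
    var ns = NamespaceManager.CreateFromConnectionString(connectionString);

    if (!ns.TopicExists("T"))
        ns.CreateTopic("T");

    // All workers receive from this one subscription, competing-consumer style.
    if (!ns.SubscriptionExists("T", "workers"))
        ns.CreateSubscription("T", "workers");

    if (!ns.QueueExists("Q"))
        ns.CreateQueue("Q");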
Purpose
The purpose of this model is to investigate the scalability of a system like this, with specific regard to the Azure Service Bus. The applications themselves are written in C#, and they are all executed from within the same system.
Questions
I have two concerns in regard to the functionality of the Azure Service Bus:
Is there a way either to tell (B) to be looser in terms of balancing, or perhaps to make the (W)'s more 'eager' to participate?
There seems to be a pre-determined order of message distribution, making the load balance uneven among the (W)'s.
Say, for instance, I have 3 (W)'s, or (W3): if (P) were to send 1,000 messages to (T), I would expect a somewhat even distribution, approaching one third of all messages for each (W). This is, however, not the case; it seems as if the rest of the (W)'s just sit there waiting for the busy (W) to handle message after message after message. Suddenly, perhaps after 15 to 20 messages, another (W) will receive a message, but the balance remains very uneven.
Consequently, I now have (W)'s just sitting around doing nothing (for varying periods of time).
Is there a way, either in (B)'s settings or in (W)'s code, to specifically set the duration of the PeekLock()?
I have experimented with Thread.Sleep(timeToSleep) in the (W)'s OnMessage() function. This would fit my needs, were it not for the concern raised in the first question.
My experimentation: whenever a message arrives at a (W), the work begins, and just before message.Complete() is sent to (B), I pull off a Thread.Sleep(2000) or something along those lines. Ideally, another (W) should pick up where the first (W) fell asleep, but they don't. The first (W) wakes up and grabs another message, and so the cycle continues, sometimes 15-20 times, until another (W) finally grabs a message.
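In code, the experiment looks roughly like this (a sketch, assuming the classic Microsoft.ServiceBus.Messaging OnMessage API; DoWorkAndForwardToQueue stands in for my pre-defined function, and the entity names are my placeholders):

    using System.Threading;
    using Microsoft.ServiceBus.Messaging;

    var client = SubscriptionClient.CreateFromConnectionString(
        connectionString, "T", "workers", ReceiveMode.PeekLock);

    client.OnMessage(message =>
    {
        DoWorkAndForwardToQueue(message);   // alter the message, push it to (Q)
        Thread.Sleep(2000);                 // fall asleep just before completing
        message.Complete();                 // only now is the message settled
    }, new OnMessageOptions { AutoComplete = false });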
Images
If you'll excuse my poor effort at explaining through drawings, this is the current scenario (figure 1) and the ideal, wanted scenario (figure 2):
Figure 1: current scenario
Figure 2: optimal/wanted scenario
I hope for some clarification on this matter. Thank you in advance!
The distribution of messages across consumers is handled in roughly the order in which requests for messages are made to Service Bus. There is no assurance of exactly even distribution at the message level, and the distribution will be affected by feature usage, including prefetch. In any actual workload situation you'll find that distribution is fair, because busy workers will not ask for more messages.
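In particular, if prefetch is enabled, a seemingly idle worker may already "own" the next batch of messages, which matches the 15-20 message runs described above. A sketch of receiver-side settings that tend to even out distribution, again assuming the classic Microsoft.ServiceBus.Messaging API (entity names are the question's placeholders):

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    var client = SubscriptionClient.CreateFromConnectionString(
        connectionString, "T", "workers", ReceiveMode.PeekLock);

    // With no prefetch (the default, but worth making explicit), a worker
    // only asks Service Bus for a message when it is actually free.
    client.PrefetchCount = 0;

    // The PeekLock duration is an entity setting, not a receive option:
    var ns = NamespaceManager.CreateFromConnectionString(connectionString);
    var sub = ns.GetSubscription("T", "workers");
    sub.LockDuration = TimeSpan.FromSeconds(30);
    ns.UpdateSubscription(sub);

This also speaks to the second question: the lock time is set through QueueDescription.LockDuration or SubscriptionDescription.LockDuration rather than in the OnMessage() handler.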
Related
I'm writing an ASP.NET Core application in which I use the Google Pub/Sub emulator, and I can both publish and subscribe to a topic. However, when I publish a "large" number of messages (1,000+), I would like to pull as many as possible per request.
I use the Google.Cloud.PubSub.V1 library, which provides SubscriberServiceApiClient to interact with their API. I pull asynchronously with the PullAsync method, which has the parameter maxMessages. According to their documentation this decides the maximum number of messages that can be pulled by each request; however, it may return fewer. If I provide an argument that specifies a maxMessages number above 100, it makes no difference. This means the maximum number of messages I can receive from each request is always 100, which seems low. I've also tried pulling through their REST API, which is likewise limited to 100 messages per pull.
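Roughly what I'm doing (a sketch; the project and subscription IDs are placeholders):

    using System;
    using System.Linq;
    using Google.Cloud.PubSub.V1;

    var client = await SubscriberServiceApiClient.CreateAsync();
    var subscription = SubscriptionName.FromProjectSubscription("my-project", "my-sub");

    // Ask for up to 1,000 messages; the service may return far fewer.
    PullResponse response = await client.PullAsync(
        subscription, returnImmediately: false, maxMessages: 1000);

    foreach (var received in response.ReceivedMessages)
        Console.WriteLine(received.Message.MessageId);

    // Acknowledge what was processed so it isn't redelivered.
    await client.AcknowledgeAsync(
        subscription, response.ReceivedMessages.Select(m => m.AckId));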
I'm unsure whether it is due to some limit or if I'm doing something wrong. I have tried searching in their documentation and elsewhere, but without luck.
In general, Google Cloud Pub/Sub cannot return more than 1,000 messages in a single PullAsync call. The limit may be even smaller when running through the emulator. The value of returnImmediately also affects how many messages are returned: if you want to maximize the number of messages per response, set returnImmediately to false. Even then, however, you'll not necessarily get maxMessages in each response; Cloud Pub/Sub tries to balance returning fuller responses against minimizing end-to-end latency, so it will not wait very long for a full batch.
In general, to maximize throughput you'll need multiple PullAsync calls active at once. Even better, though, is to use SubscriberClient, which handles the underlying requests behind the scenes for you and delivers messages to the function you specify as they arrive.
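A sketch of the SubscriberClient approach (subscription IDs are placeholders):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Google.Cloud.PubSub.V1;

    var subscription = SubscriptionName.FromProjectSubscription("my-project", "my-sub");
    var subscriber = await SubscriberClient.CreateAsync(subscription);

    // StartAsync keeps several streaming pulls open internally and invokes
    // the callback for each message as it arrives; it runs until StopAsync
    // is called elsewhere.
    await subscriber.StartAsync((PubsubMessage msg, CancellationToken ct) =>
    {
        Console.WriteLine(msg.Data.ToStringUtf8());
        return Task.FromResult(SubscriberClient.Reply.Ack);
    });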
Max messages is still capped at 1,000 as of November 2019; Pub/Sub does not allow fetching more messages per request. I tried pulling messages in a loop, 1,000 at a time; in about half of the requests it returned far fewer than the maximum. I managed to pull around 50,000 messages within the 9-minute maximum runtime of a Cloud Function.
An alternative solution is subscribing asynchronously to a Pub/Sub topic with google.cloud.pubsub_v1.SubscriberClient.subscribe(). However, this solution is better suited to a long-running process, which you could describe as a sort of collector sitting on a server.
This is a question about message passing. It relates specifically to an in-house application written in C# which has a home-grown "message passing" system resembling Erlang's.
Okay, we hoped it would be possible to learn from Erlang folks or documentation to find an elegant solution to a couple of message-passing challenges. But alas, after reading Erlang documentation online and forums, these topics don't seem to be addressed, as far as we can find.
So the question is: in Erlang, when does the queue for sending messages to a process get full? And how does Erlang handle the queue-full situation? Or are the queues for message passing in Erlang boundless, that is, limited only by system memory?
Well, our system involves processing a stream of financial data, with potentially billions of tuples of information being read from disk; each tuple of financial information is called a "tick" in the financial world.
So it was necessary to set a limit on the queue size of each "process" in our system. We arbitrarily selected a maximum of 1,000 items per queue. Those queues quickly get filled entirely by tick messages.
The problem is that the processes also need to send other types of messages to each other besides ticks, but the ticks fill up the queues, preventing any other message type from getting through.
As a "band aid" solution (which is messy) allow multiple queues per process for each message type. So a process will have a tick queue, and a command queue, and fill queue, and so on.
But Erlang seems so much cleaner, with a single queue per "process" that carries different message types. Again, though, how does it deal with the queue getting hogged by a flood of only one of the message types?
So perhaps this is a question about the internals of Erlang. Does Erlang internally have separate limits per message type in a queue? Or does it internally keep a separate queue per message type?
In any case, how do sending processes become aware that a queue is too full to receive certain types of messages? Does the send fail? Does that mean error handling becomes necessary in Erlang for the inability to send?
In our system, we track when queues get full and then prevent any process from running that would attempt to add to a full queue, until that queue has more space. This avoids messy error-handling logic, since a process, once invoked, is guaranteed to have room to send one message.
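A rough C# analogue of this scheme, using a bounded BlockingCollection so that a sender simply blocks when the queue is full (in our real system the process is descheduled instead; the Tick type is illustrative, and the capacity of 1,000 mirrors the limit above):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    record Tick(string Symbol, decimal Price);   // illustrative message type

    class BoundedQueueDemo
    {
        static void Main()
        {
            // Bounded queue: Add() blocks once 1,000 items are enqueued,
            // so a flooding producer can never overrun the consumer.
            var ticks = new BlockingCollection<Tick>(boundedCapacity: 1000);

            Task.Run(() =>
            {
                for (int i = 0; i < 10_000; i++)
                    ticks.Add(new Tick("EURUSD", 1.08m + i * 0.0001m)); // blocks when full
                ticks.CompleteAdding();
            });

            foreach (var tick in ticks.GetConsumingEnumerable())        // blocks when empty
                Console.WriteLine($"{tick.Symbol} {tick.Price}");
        }
    }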
But again, if we put multiple types of messages on that same queue, other message types that must get through will be blocked.
It has become my (perhaps mistaken) impression that Erlang wasn't designed to handle this situation, so perhaps it doesn't address the problem of a queue getting filled by a flood of a single message type.
But we hope someone knows how to answer this, or can point to good reference information or a book that covers this specific scenario.
Erlang sends all messages to a single per-process queue, with system memory being the upper limit on the queue size. If you want to prioritize messages, you have to scan the entire queue for the high-priority messages before fetching a low-priority one.
There are ways to get around this by spawning handler processes which throttle and prioritize traffic, but the Erlang VM as such has no support for it.
Answer to the additional question in the comment:
Even at Safari Books Online, the main ones never say how messages are passed in Erlang. It's clear they don't use "shared memory". So how do they communicate? Is it via loopback TCP/IP when on the same machine?
Within one virtual machine, messages are simply copied (except for big enough binaries; for them, pointers are copied) between memory areas assigned to processes. If you start several Erlang VMs on the same machine, they can communicate over TCP/IP.
I have a scenario where about 10 different messages will need to be enqueued and then dequeued/processed. One subscriber will need all 10 messages, but another will only need 8 of the 10. I am trying to understand the best way to set up this type of architecture. Do you create a queue for each message type so that subscribers can subscribe to just the relevant queues, or do you dump them all into the same queue and have each subscriber ignore the messages that are not relevant to it? I want to ensure the solution is flexible, scalable, etc.
Process:
10 different XML messages will be enqueued to an IBM WebSphere MQ server.
We will use .NET (most likely WCF, since WebSphere MQ 7.1 has added WCF support).
We will dequeue the messages and load them into another backend DB (most likely SQL Server).
The solution needs to scale well because we will be processing a very large number of messages, and this could grow (probably 40-50,000/hr). At least, a large amount for us.
As always greatly appreciate the info.
--S
Creating queues is relatively 'cheap' from a resource perspective, and yes, it's better to use a queue for each specific purpose, so it's probably better in this case to separate them by target client if possible. Pulling messages selectively from a queue based on some criterion (correlation ID or something else) is usually a bad idea. The best-performing scenario in messaging is the most straightforward one: simply pull messages from the queue as they arrive, rather than peeking and receiving selectively.
As to scaling, I can't speak for WebSphere MQ or other IBM products, but 40-50K messages per hour isn't particularly hard for MSMQ on Windows Server to handle, so I'd assume IBM can do that as well. Usually the bottleneck isn't the queuing platform itself but rather the process of dequeuing and processing individual messages.
OK, based on the comments, here's a suggestion that will scale and doesn't require much change to the apps.
On the producer side, I'd copy the message-selection criteria into a message property and then publish the message to a topic. The only change required to the app here is the message property. If for some reason you don't want to make it publish using the native functionality, you can define an alias over a topic: the app thinks it is sending messages, but they are really publications.
On the consumer side you have a couple of choices. One is to create administrative subscriptions for each app and use a selector in the subscription. The messages are then funneled to a dedicated queue per consumer, based on the selection criteria. The apps think that they are simply consuming messages.
Alternatively the app can simply subscribe to the topic. This gives you the option of a dynamic subscription that doesn't receive messages when the app is disconnected (if in fact you wanted that) or a durable subscription that is functionally equivalent to the administrative subscription.
This solution will easily scale to the volumes you cited. Another option is for the producer not to use properties. Here the consumer application consumes all messages, breaks open the payload of each, and decides whether to process or ignore the message. In this solution the producer is still publishing to a topic. Any solution involving straight queueing forces the producer to know all the destinations: add another consumer, change the producer. There is also a PUT for each destination.
The worst case is a producer putting multiple messages and a consumer having to read each one to decide whether to ignore it. That option might have problems scaling, depending on how deep in the payload the selection-criteria field lies: a really long XPath expression means poor performance, and there is no way to tune WMQ to make up for it, since the latency is all in the application at that point.
In the best case, the producer sets a message property and publishes, and consumers select on that property in their subscriptions, or an administrative subscription does this for them. Whether this solution uses application subscriptions or administrative subscriptions makes no difference as far as scalability is concerned.
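For the producer side, the sketch below shows the idea using the IBM MQ classes for .NET; treat it as an assumption-laden outline rather than working code, since the AccessTopic overloads, the topic string, and the property name all vary with your setup and client version:

    using IBM.WMQ;

    var qmgr = new MQQueueManager("QM1");       // queue manager name is illustrative

    // Open the topic for publication.
    MQTopic topic = qmgr.AccessTopic(
        "ORDERS/INBOUND", null, MQC.MQTOPIC_OPEN_AS_PUBLICATION, MQC.MQOO_OUTPUT);

    var msg = new MQMessage();
    msg.SetStringProperty("MsgType", "OrderCreated");  // the selection criterion
    msg.WriteString(xmlPayload);                       // the original XML body, unchanged

    topic.Put(msg);
    topic.Close();
    qmgr.Disconnect();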
I am working on two apps that use MSMQ as a message-bus mechanism so that A transfers messages to B. This clearly has to be robust, so initially we chose MSMQ to store and transfer the messages.
When testing the app we noticed that under real-world conditions, where MSMQ is called on to handle approximately 50,000 messages a minute (which sounds quite low to me), we quickly reach the maximum storage size of the MSMQ storage directory (which defaults to about 1.2 GB, I think).
We can increase that, but I was wondering whether there is a better approach for handling slow receivers and fast senders. Is there a better queue, or a better approach, to use in this case?
Actually, it isn't so much a problem of slow receivers, since MSMQ will keep the (received) messages in the storage directory for something like 6 hours, or until the service is restarted. So essentially, if we reach the 1 GB threshold in 5 minutes, then in a few hours we will reach terabytes of data!
Please read this blog, which I put together after years of supporting MSMQ at Microsoft, to understand how MSMQ uses resources.
It really does cover all the areas you need to know about.
If you have heard something about MSMQ that isn't in the blog, then it is almost certainly wrong, such as the 1.2 GB storage limit for MSMQ. The maximum size of the msmq\storage directory is the hard disk capacity: it's an NTFS folder!
You should be able to have a queue with millions of messages in it (assuming you have enough kernel memory, as mentioned in the blog).
Cheers
John Breakwell
You should apply an SLA to your subscribers: they have to read their messages within X amount of time or they lose them. You can scale this SLA to match the volume of messages that arrive.
For subscribers that cannot meet their SLA, then simply put, they don't really care about receiving their messages that quickly (if they did, they would be available). For these subscribers you can offer a slower channel, such as an XML dump of the messages in the last hour (or whatever granularity is required). You probably wouldn't store each individual message here, just an aggregate of changes (e.g., something that can be queried from a DB).
Use separate queues for each message type; this way you can apply different priorities depending on the importance of the message, and if one queue becomes full, messages of other types won't be blocked. It also makes it simpler to monitor whether each message is being processed within its SLA, by looking at the first message in the queue and seeing when it was added to determine how long it has been waiting (see NServiceBus). A sketch of this check follows below.
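A minimal sketch of that SLA check with System.Messaging (the queue path and threshold are illustrative):

    using System;
    using System.Messaging;

    var queue = new MessageQueue(@".\private$\orders");
    queue.MessageReadPropertyFilter.ArrivedTime = true;

    // Peek at the head of the queue without removing the message
    // (throws MessageQueueException if the queue stays empty).
    Message head = queue.Peek(TimeSpan.Zero);

    TimeSpan waiting = DateTime.Now - head.ArrivedTime;
    if (waiting > TimeSpan.FromMinutes(5))      // the SLA threshold, assumed
        Console.WriteLine($"SLA breach: oldest message has waited {waiting}.");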
From your metrics above (1 GB in 5 minutes at 50,000 messages/minute), each message works out to about 4 KB: 5 minutes at 50,000 messages/minute is 250,000 messages, and 1 GB / 250,000 ≈ 4 KB. That is quite large for a message, since messages should normally carry only top-level details about something happening, mostly the IDs of what changed and the intent of the change. Larger data is better served by some other out-of-band channel for transferring large blobs (e.g., file share, SFTP, etc.).
Also, since a service should encapsulate its own data, you shouldn't need to share much data between services. So large data within a service, with messages saying what happened, isn't unusual; large data passed between separate services using messages indicates that some boundaries are probably leaking.
I am creating a mass-mailer application, where a web application sets up an email template and then queues a bunch of email addresses for sending. The other side will be a Windows service (or exe) that polls this queue, picking up the messages for sending.
My question is, what would the advantage be of using SQL Service Broker (or MSMQ) over just creating my own custom queue table?
Everything I'm reading suggests I use Service Broker, but I really don't see the huge advantage over a flat table (which would be a lot simpler for me to work with). For reference, the application will be used to send 50,000-100,000 emails almost daily.
Do you know how to implement a queue over a flat table? This is not a silly question; implementing a queue over a table correctly is much harder than it sounds. Queue-like tables are notoriously deadlock-prone, and you need to carefully consider the table design and the enqueue and dequeue operations. Also, do you know how to scale your polling of the table? And how are you going to handle retries and timeouts (i.e., what timers are used for)?
I'm not saying you should use SSB. The learning curve is very steep, and it is primarily a distributed-application platform, not a local queueing product, so some features, like dialogs, will actually be obstacles for you rather than advantages. I'm just saying that you must also consider the difficulties of flat-table queues. If you have never implemented a flat-table queue, then be warned: there are many dragons under that bridge.
50k-100k messages per day is nothing; that is only about one message per second (100,000 messages / 86,400 seconds ≈ 1.2 per second). If you wanted 100k per minute, then we'd have something to talk about.
If you ever need to port to another vendor's database, you will have fewer problems if you used normal tables.
As you seem to have only one reader and one writer for your queue, I would tend to use a standard table until you hit problems. However, if you start to feel the need for "locking hints" and the like, that's the time to switch to Service Broker queues.
I would not use MSMQ if both the sender and the reader need a database connection to work. MSMQ would be good if the sender did not talk to the database at all, as it lets the sender keep working when the database is down. However, having to set up and maintain both MSMQ and the database is likely to be more work than it is worth for most systems.
For advantages of Service Broker see this link:
http://msdn.microsoft.com/en-us/library/ms166063.aspx
In general we try to use a tool or standard functionality rather than building things ourselves. This lowers the cost and can make upgrading easier.
I know this is an old question, but it is sufficiently abstract to remain relevant for a long time.
After using both paradigms, I would suggest the flat table. It is surprisingly scalable and nifty; the correct hints just need to be used.
Once the application goes distributed, or starts using multiple Always On availability groups with different RW and RO servers, Service Broker (or any other method of distributed communication) becomes a necessity.
Flat table
needs only a few hints (highly dependent on isolation level) to work scalably and reliably on the consumer side (READPAST, UPDLOCK, ROWLOCK); see the sketch after this list
the order of message processing is not set in stone
the consumer must make sure that the message stays in the queue if the processing fails
needs some polling mechanism (job, CDC (here lies madness :)), external application...)
turn off maintenance jobs and automatic statistics for the table
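A minimal sketch of the hinted dequeue, with the T-SQL embedded in C# (the table name and columns are illustrative; DELETE ... OUTPUT atomically removes and returns one message, and READPAST lets concurrent consumers skip rows that another consumer has already locked):

    using System.Data.SqlClient;

    const string dequeueSql = @"
        WITH next_msg AS (
            SELECT TOP (1) Id, Payload
            FROM dbo.EmailQueue WITH (ROWLOCK, UPDLOCK, READPAST)
            ORDER BY Id
        )
        DELETE FROM next_msg
        OUTPUT deleted.Id, deleted.Payload;";

    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var cmd = new SqlCommand(dequeueSql, conn);
    using var reader = cmd.ExecuteReader();
    if (reader.Read())
    {
        long id = reader.GetInt64(0);
        string payload = reader.GetString(1);
        // Process the message here; run the DELETE inside a transaction
        // if the message must survive a processing failure (see above).
    }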
Service broker
needs an extremely overblown "infrastructure" (message types, contracts, services, queues, activation procedures; it must be re-enabled after a database restore, and conversations need to be correctly created and dropped, ...); see the sketch after this list
is extremely opaque: we have spent ages trying to make it run again after it mysteriously stopped working
there is a predefined order of message processing
the tables it uses can cause deadlocks themselves if SB is overused
is the only way (except for linked servers...) to send messages directly from a database on the RW server of one HA group to a database that is RO in that HA group (without any external app)
is the only way to send messages between different servers (linked servers are a big NO-NO (unless they become a YES-YES; you know the drill: it depends)) (without any external app)
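To make the "infrastructure" point concrete, here is a sketch of the minimal object chain for the mail scenario above, executed from C# (all names are illustrative, and error handling, activation, and the receiving side are omitted):

    using System.Data.SqlClient;

    // Each statement is one piece of the required Service Broker plumbing.
    string[] brokerSetup =
    {
        "CREATE MESSAGE TYPE EmailMsg VALIDATION = WELL_FORMED_XML;",
        "CREATE CONTRACT EmailContract (EmailMsg SENT BY INITIATOR);",
        "CREATE QUEUE dbo.EmailSendQueue;",
        "CREATE QUEUE dbo.EmailRecvQueue;",
        "CREATE SERVICE EmailSender ON QUEUE dbo.EmailSendQueue (EmailContract);",
        "CREATE SERVICE EmailReceiver ON QUEUE dbo.EmailRecvQueue (EmailContract);"
    };

    using var conn = new SqlConnection(connectionString);
    conn.Open();
    foreach (var statement in brokerSetup)
        new SqlCommand(statement, conn).ExecuteNonQuery();

    // And every send still needs a correctly created (and later ended)
    // conversation; @body would be passed as a SqlParameter.
    const string sendSql = @"
        DECLARE @h UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE EmailSender
            TO SERVICE 'EmailReceiver'
            ON CONTRACT EmailContract
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @h MESSAGE TYPE EmailMsg (@body);";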