Message Passing Like Erlang in C# With Very Large Data Stream

This is a question about message passing. It relates specifically to an in-house application written in C#, which has a home-grown "message passing" system resembling Erlang's.
We hoped it would be possible to learn from Erlang folks or documentation and find an elegant solution to a couple of message-passing challenges. But alas, after reading Erlang documentation online and forums, these topics don't seem to be addressed--at least not that we can find.
So the question is: in Erlang, when does the queue used to send messages to a process get full? Does Erlang handle the queue-full situation? Or are Erlang's message queues unbounded--limited only by system memory?
Our system processes a stream of financial data, with potentially billions of tuples of information being read from disk; each tuple of financial information is called a "tick" in the financial world.
So it was necessary to set a limit on the queue size of each "process" in our system. We arbitrarily selected a maximum of 1000 items per queue. Those queues quickly fill entirely with tick messages.
The problem is that the processes also need to send other types of messages to each other besides ticks, but the ticks fill up the queues and prevent any other message type from getting through.
As a messy "band-aid" solution, we allow multiple queues per process, one for each message type. So a process will have a tick queue, a command queue, a fill queue, and so on.
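The band-aid can be sketched with bounded queues; the class name, capacities, and string message bodies here are illustrative, not the actual in-house code:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical per-type mailbox: each message type gets its own bounded
// queue, so a flood of ticks cannot starve command messages.
class TypedMailbox
{
    // Capacities are illustrative; the post uses 1000 per queue.
    private readonly BlockingCollection<string> _ticks =
        new BlockingCollection<string>(boundedCapacity: 1000);
    private readonly BlockingCollection<string> _commands =
        new BlockingCollection<string>(boundedCapacity: 100);

    // Add blocks the sender when the queue is full, which is the
    // back-pressure behaviour the post describes.
    public void SendTick(string tick) => _ticks.Add(tick);
    public void SendCommand(string cmd) => _commands.Add(cmd);

    // Commands are drained first, so they are never blocked by ticks.
    public string Receive()
    {
        string msg;
        if (_commands.TryTake(out msg)) return msg;
        if (_ticks.TryTake(out msg)) return msg;
        return null;
    }
}
```

Note that `Send*` blocks the caller when its queue is full, matching the "sender waits for room" scheme described below rather than failing the send.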
But Erlang seems so much cleaner, with a single queue per "process" that carries different message types. Then again, how does it deal with that queue getting hogged by a flood of only one of the message types?
So perhaps this is a question about Erlang internals. Does Erlang internally impose separate limits per message type in a queue? Or does it internally keep a separate queue per message type?
In any case, how do sending processes become aware that a queue is too full to receive certain types of messages? Does the send fail? Does that mean error handling becomes necessary in Erlang for the inability to send?
Our system tracks when queues get full and then prevents any process from running that would attempt to add to a full queue, until that queue has more space. This avoids messy error-handling logic, since a process, once invoked, is guaranteed to have room to send one message.
But again, if we put multiple types of messages on that same queue, other message types that must get through will be blocked.
My perhaps mistaken impression is that Erlang wasn't designed to handle this situation, so perhaps it doesn't address the problem of a queue getting filled with a flood of a single message type.
But we hope someone knows the answer, or can point to good reference information or a book that covers this specific scenario.

Erlang delivers all messages for a process to a single mailbox, with system memory being the upper limit on its size. If you want to prioritize messages, you have to scan the entire mailbox for the high-priority messages before fetching a low-priority one.
There are ways to get around this by spawning handler processes which throttle and prioritize traffic, but the Erlang VM as such has no support for it.
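A rough C# analogue of that mailbox scan (the types here are hypothetical; in Erlang this is what a selective `receive` matching a rare pattern costs):

```csharp
using System;
using System.Collections.Generic;

// Single mailbox carrying mixed message types, Erlang-style.
// Prioritizing means an O(n) scan of the mailbox on every receive.
class Mailbox
{
    private readonly LinkedList<(string Type, string Body)> _messages =
        new LinkedList<(string Type, string Body)>();

    public void Send(string type, string body) => _messages.AddLast((type, body));

    // Scan the whole mailbox for the preferred type first; fall back to
    // the head of the queue if none is found.
    public (string Type, string Body)? ReceivePreferring(string preferredType)
    {
        var node = _messages.First;
        while (node != null)
        {
            if (node.Value.Type == preferredType)
            {
                _messages.Remove(node);
                return node.Value;
            }
            node = node.Next;
        }
        if (_messages.First == null) return null; // mailbox empty
        var head = _messages.First.Value;
        _messages.RemoveFirst();
        return head;
    }
}
```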

Answer to the additional question in the comment:
Even on Safari Books Online, the main books never say how messages are passed in Erlang. It's clear they don't use "shared memory". So how do they communicate? Is it via loopback TCP/IP when on the same machine?
Within one virtual machine, messages are simply copied (except for big enough binaries; for them, pointers are copied) between memory areas assigned to processes. If you start several Erlang VMs on the same machine, they can communicate over TCP/IP.

Related

MSMQ: Is it possible to consume messages in the same order as they were sent?

Say I have a queue Q, where Q is the destination of the messages.
I know that MSMQ guarantees that multiple messages enclosed in one transaction are received in the same order in which
they were sent. But my application sends one message per transaction to Q (all messages have the same destination). Is message order still preserved when they reach Q?
Yes, you should be OK although it may depend on how flaky the network is. MSMQ uses internal ack messages to ensure messages are successfully delivered. The retry mechanism will handle any lost messages. It may be possible, depending on retry window size, etc., for a lost message to be delayed behind the following message.
I can also imagine it may be possible for one message to get ahead of another - for example, a 1kb message following a 4MB message - due to the time it takes for a message to be persisted from the network stack to the disk.
Both scenarios are edge cases, though.
Bottom line is that order is not guaranteed outside a transaction so, if your app depends on it, make sure the messages contain a sequence number of some kind.
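A minimal sketch of that sequence-number idea (the class and member names are assumptions for illustration, not an MSMQ API): the receiver buffers out-of-order arrivals and releases messages strictly in sequence.

```csharp
using System;
using System.Collections.Generic;

// Illustrative resequencer: each message carries a monotonically
// increasing sequence number stamped by the sender; the receiver holds
// early arrivals until the gap before them is filled.
class Resequencer
{
    private readonly SortedDictionary<long, string> _pending =
        new SortedDictionary<long, string>();
    private long _next = 1; // next sequence number expected

    // Returns the messages that become releasable, in order, after this arrival.
    public List<string> Accept(long seq, string body)
    {
        _pending[seq] = body;
        var ready = new List<string>();
        while (_pending.TryGetValue(_next, out var b))
        {
            ready.Add(b);
            _pending.Remove(_next);
            _next++;
        }
        return ready;
    }
}
```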
Make sure your ServiceBehavior is decorated with EnsureOrderedDispatch = true.
Here is the Microsoft documentation confirming this:
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/ordered-processing-of-messages-in-single-concurrency-mode
Have a look at this Stack Overflow question. It does refer to multiple clients, but the content of the post might be useful to you.

Check number of messages in input queue

Is it possible to get the number of messages in my input queue using NServiceBus, or do I need to bypass it and use the native MSMQ interface?
This isn't meant to be complete monitoring; we've got a system comprising several NSB components, and they're monitored through Windows performance counters. What I'm trying to achieve is just a simple health check: sending an NSB message to a component, whose response would contain, say, the DB access status and the number of MSMQ messages in its queue.
That's why I'd like to make it as simple as possible. So the question is: can I check the message count in a simple way, or do I need to read the performance counter?
You'd have to use the System.Messaging.MessageQueue.GetAllMessages() or one of its enumerator methods to get that information. NServiceBus doesn't expose this.
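For example, the count can be taken by peeking with a cursor rather than calling GetAllMessages(), which materializes every message. This is a sketch only: it needs the Windows MSMQ feature and the System.Messaging assembly, and the queue path is an assumption.

```csharp
using System.Messaging; // Windows-only; requires the MSMQ feature installed

static class QueueProbe
{
    // Counts messages by walking a peek cursor; message bodies are not read.
    public static int CountMessages(string path) // e.g. ".\\private$\\myinput"
    {
        using (var queue = new MessageQueue(path))
        {
            int count = 0;
            MessageEnumerator cursor = queue.GetMessageEnumerator2();
            while (cursor.MoveNext())
                count++;
            cursor.Close();
            return count;
        }
    }
}
```

Note the count is only a snapshot; messages may arrive or be consumed while the cursor walks the queue.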

How to implement Message Queuing Solution

I have a scenario where about 10 different messages will need to be enqueued and then dequeued/processed. One subscriber will need all 10 messages, but another will only need 8 of the 10. I am trying to understand the best way to set up this type of architecture. Do you create a queue for each message type so that subscribers can subscribe to just the relevant queues, or do you dump them all into the same queue and have subscribers ignore the messages that are not relevant to them? I want to ensure the solution is flexible, scalable, etc.
Process:
10 different xml messages will be enqueued to an IBM WebSphere MQ server.
We will use .Net (Most likely WCF since WebSphere MQ 7.1 has added in WCF support)
We will dequeue the messages and load them into another backend DB (Most likely SQL Server).
Solution needs to scale well because we will be processing a very large number of messages, and this could grow (probably 40-50,000/hr). At least, a large amount for us.
As always greatly appreciate the info.
--S
Creating queues is relatively 'cheap' from a resource perspective, plus yes, it's better to use a queue for each specific purpose, so it's probably better in this case to separate them by target client if possible. Using a queue to pull messages selectively based on some criteria (correlation ID or some other thing) is usually a bad idea. The best performing scenario in messaging is the most straightforward one: simply pull messages from the queue as they arrive, rather than peeking and receiving selectively.
As to scaling, I can't speak for Websphere MQ or other IBM products, but 40-50K messages per hour isn't particularly hard for MSMQ on Windows Server to handle, so I'd assume IBM can do that as well. Usually the bottleneck isn't the queuing platform itself but rather the process of dequeuing and processing individual messages.
OK, based on the comments, here's a suggestion that will scale and doesn't require much change on the apps.
On the producer side, I'd copy the message selection criteria to a message property and then publish the message to a topic. The only change that is required here to the app is the message property. If for some reason you don't want to make it publish using the native functionality, you can define an alias over a topic. The app thinks it is sending messages but they are really publications.
On the consumer side you have a couple of choices. One is to create administrative subscriptions for each app and use a selector in the subscription. The messages are then funneled to a dedicated queue per consumer, based on the selection criteria. The apps think that they are simply consuming messages.
Alternatively the app can simply subscribe to the topic. This gives you the option of a dynamic subscription that doesn't receive messages when the app is disconnected (if in fact you wanted that) or a durable subscription that is functionally equivalent to the administrative subscription.
This solution will easily scale to the volumes you cited. Another option is that the producer doesn't use properties. Here, the consumer application consumes all messages, breaks open the message payload on each and decides whether to process or ignore the message. In this solution the producer is still publishing to a topic. Any solution involving straight queueing forces the producer to know all the destinations. Add another consumer, change the producer. Also, there's a PUT for each destination.
The worst case is a producer putting multiple messages and a consumer having to read each one to decide if it's going to be ignored. That option might have problems scaling, depending on how deep in the payload the selection criteria field lies. Really long XPath expression = poor performance and no way to tune WMQ to make up for it since the latency is all in the application at that point.
Best case, producer sets a message property and publishes. Consumers select on property in their subscription or an administrative subscription does this for them. Whether this solution uses application subscriptions or administrative subscriptions doesn't make any difference as far as scalability is concerned.
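The cost difference between the best and worst cases above can be illustrated with a hypothetical message shape (not an IBM MQ API): checking a producer-set property is a cheap lookup, while "breaking open the payload" means parsing XML on every message just to decide whether to ignore it.

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

// Hypothetical inbound message: headers set by the producer, plus the
// raw XML payload.
class InboundMessage
{
    public Dictionary<string, string> Properties = new Dictionary<string, string>();
    public string XmlPayload;
}

static class ConsumerFilter
{
    // Best case: producer stamped the selection criteria into a property.
    public static bool WantedByProperty(InboundMessage msg, string type) =>
        msg.Properties.TryGetValue("MessageType", out var t) && t == type;

    // Worst case: no property, so the consumer must parse the payload
    // (here just checking the root element name; a deep XPath is worse).
    public static bool WantedByPayload(InboundMessage msg, string type) =>
        XDocument.Parse(msg.XmlPayload).Root?.Name.LocalName == type;
}
```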

Message queue considerations - MSMQ storage issue kills current app

I am working on two apps that use MSMQ as a message-bus mechanism so that A transfers messages to B. This clearly has to be robust, so initially we chose MSMQ to store and transfer the messages.
When testing the app we noticed that under real-world conditions, where MSMQ is asked to handle approximately 50,000 messages a minute (which sounds quite low to me), we quickly reach the maximum storage size of the msmq\storage directory (defaults to 1.2 GB, I think).
We can increase that, but I was wondering whether there is a better approach for handling slow receivers and fast senders. Is there a better queue or a better approach to use in this case?
Actually, it isn't so much a problem of slow receivers, since MSMQ will keep the (received) messages in the storage directory for something like 6 hours, or until the service is restarted. So essentially, if we reach the 1 GB threshold in 5 minutes, then in a few hours we will reach terabytes of data!
Please read this blog post, which I put together after years of supporting MSMQ at Microsoft, to understand how MSMQ uses resources.
It really does cover all the areas you need to know about.
If you have heard something about MSMQ that isn't in the blog then it is almost certainly wrong--such as the 1.2 GB storage limit for MSMQ. The maximum size of the msmq\storage directory is the hard disk capacity--it's an NTFS folder!
You should be able to have a queue with millions of messages in it (assuming you have enough kernel memory, as mentioned in the blog).
Cheers
John Breakwell
You should apply an SLA to your subscribers: they have to read their messages within X amount of time or they lose them. You can scale this SLA to match the volume of messages that arrive.
Subscribers that cannot meet their SLA don't, simply put, really care about receiving their messages that quickly (if they did, they would be available). For these subscribers you can offer a slower channel, such as an XML dump of the messages in the last hour (or whatever granularity is required). You probably wouldn't store each individual message here, just an aggregate of changes (e.g., something that can be queried from a DB).
Use separate queues for each message type. This way you can apply different priorities depending on the importance of each message, and if one queue becomes full, messages of other types won't be blocked. It also makes it simpler to monitor whether each message is being processed within its SLA: look at the first message in the queue and see when it was added to determine how long it has been waiting (see NServiceBus).
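That first-message check can be sketched against MSMQ roughly as follows (System.Messaging, Windows-only; the queue path is illustrative):

```csharp
using System;
using System.Messaging; // Windows-only MSMQ API

static class QueueHealth
{
    // Peeks (doesn't remove) the oldest message and reports how long it
    // has been waiting, or null if the queue is empty/unavailable.
    public static TimeSpan? OldestMessageWait(string path) // e.g. ".\\private$\\orders"
    {
        using (var queue = new MessageQueue(path))
        {
            // ArrivedTime isn't populated unless the filter asks for it.
            queue.MessageReadPropertyFilter.ArrivedTime = true;
            try
            {
                Message head = queue.Peek(TimeSpan.Zero); // don't block if empty
                return DateTime.Now - head.ArrivedTime;
            }
            catch (MessageQueueException)
            {
                return null; // timeout on empty queue, or queue unavailable
            }
        }
    }
}
```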
From your metrics above (1 GB in 5 minutes at 50,000 messages/minute), I calculate each message to be about 4 KB. This is quite large for a message, since messages should normally carry only top-level details about something happening--mostly the IDs of what was changed and the intent of the change. Larger data is better served by some out-of-band channel for transferring large blobs (e.g., file share, SFTP, etc.).
Also, since a service should encapsulate its own data, you shouldn't need to share much data between services. Large data within a service, with messages saying what happened, isn't unusual; large data passed between separate services indicates that some boundaries are probably leaking.

Buffer pool management using C#

We need to develop some kind of buffer management for an application we are developing using C#.
Essentially, the application receives messages from devices as and when they come in (there could be many in a short space of time). We need to queue them up in some kind of buffer pool so that we can process them in a managed fashion.
We were thinking of allocating a block of memory in 256 byte chunks (all messages are less than that) and then using buffer pool management to have a pool of available buffers that can be used for incoming messages and a pool of buffers ready to be processed.
So the flow would be "get a buffer", process it, then "release the buffer" or "leave it in the pool". We would also need to know when the buffer pool was filling up.
Potentially, we would also need a way to "peek" into the buffers to see what the highest priority buffer in the pool is rather than always getting the next buffer.
Is there already support for this in .NET or is there some open source code that we could use?
C#'s memory management is actually quite good, so instead of having a pool of buffers you could just allocate exactly what you need and stick it into a queue. Once you are done with a buffer, just let the garbage collector handle it.
One other option (knowing only very little about your application), is to process the messages minimally as you get them, and turn them into full fledged objects (with priorities and all), then your queue could prioritize them just by investigating the correct set of attributes or methods.
If your messages come in too fast even for minimal processing, you could have a two-queue system: one queue of unprocessed buffers, and a second queue of message objects built from the buffers.
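The two-queue system could be sketched like this (names and the "parse" step are assumptions; a real implementation would decode the actual device frame format):

```csharp
using System;
using System.Collections.Concurrent;

// Stage 1: raw buffers land in one queue as fast as they arrive.
// Stage 2: a parsing step drains them into a queue of message objects.
class TwoStagePipeline
{
    public readonly ConcurrentQueue<byte[]> RawBuffers =
        new ConcurrentQueue<byte[]>();
    public readonly ConcurrentQueue<ParsedMessage> Messages =
        new ConcurrentQueue<ParsedMessage>();

    // Minimal "parse"; assumes the first byte of the frame is a priority.
    public void DrainOnce()
    {
        while (RawBuffers.TryDequeue(out var buf))
            Messages.Enqueue(new ParsedMessage { Length = buf.Length, Priority = buf[0] });
    }
}

class ParsedMessage
{
    public int Length;
    public byte Priority; // hypothetical: first byte of the device frame
}
```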
I hope this helps.
@grieve: Networking is native, meaning that when buffers are used to receive/send data on the network, they are pinned in memory. See my comments below for elaboration.
Why wouldn't you just receive the messages, create a DeviceMessage (for lack of a better name) object, and put that object into a Queue? If prioritization is important, implement a PriorityQueue class that handles it automatically (by placing the DeviceMessage objects in priority order as they're inserted into the queue). That seems like a more OO approach, and it would simplify maintenance over time with regard to prioritization.
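A minimal sketch of that suggestion; DeviceMessage is the hypothetical wrapper from the answer, and modern .NET (6+) ships a PriorityQueue so a hand-rolled heap is only needed on older frameworks:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical message wrapper: priority plus the raw device payload.
class DeviceMessage
{
    public int Priority;   // lower value = more urgent
    public byte[] Payload;
}

// Thin wrapper over .NET 6+ PriorityQueue, keying on the message's own priority.
class DeviceMessageQueue
{
    private readonly PriorityQueue<DeviceMessage, int> _queue =
        new PriorityQueue<DeviceMessage, int>();

    public void Enqueue(DeviceMessage msg) => _queue.Enqueue(msg, msg.Priority);

    public bool TryDequeue(out DeviceMessage msg) =>
        _queue.TryDequeue(out msg, out _);
}
```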
I know this is an old post, but I think you should take a look at the memory pool implemented in the ILNumerics project. I think they did exactly what you need and it is a very nice piece of code.
Download the code at http://ilnumerics.net/ and take a look at the file ILMemoryPool.cs
I'm doing something similar. I have messages coming in on MTA threads that need to be serviced on STA threads.
I used a BlockingCollection (part of the Parallel Extensions) that is monitored by several STA threads (the count is configurable, defaulting to a multiple of the number of cores). Each thread tries to pop a message off the queue; it either times out and tries again, or successfully pops a message off and services it.
I've got it wired with perfmon counters to keep track of idle time, job lengths, incoming messages, etc, which can be used to tweak the queue's settings.
You'd have to implement a custom collection, or perhaps extend BlockingCollection, to implement queue-item priorities.
One of the reasons why I implemented it this way is that, as I understand it, queueing theory generally favors a single line with multiple servers (why do I feel like I'm going to catch crap for that?).
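The single-line, multiple-servers setup above can be sketched roughly like this (plain worker tasks rather than STA threads, and without the perfmon counters, so it is a simplification of the described system):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Several workers compete for items from one BlockingCollection,
// timing out on TryTake so they can notice shutdown and retry.
class Dispatcher
{
    private readonly BlockingCollection<string> _work =
        new BlockingCollection<string>();

    public void Post(string message) => _work.Add(message);

    public void CompleteAdding() => _work.CompleteAdding();

    public Task[] Start(int workers, Action<string> handle)
    {
        var tasks = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                // TryTake with a timeout mirrors the "time out and try
                // again" loop from the answer above.
                while (!_work.IsCompleted)
                {
                    if (_work.TryTake(out var msg, TimeSpan.FromMilliseconds(100)))
                        handle(msg);
                }
            });
        }
        return tasks;
    }
}
```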
