We are using MassTransit with RabbitMQ and part of our implementation includes an outbox pattern.
Now I'm trying to create a Docker container whose only purpose is to dispatch messages from outboxes in several databases.
The container gets a list of connection strings to the various databases and then starts to dispatch messages from their outboxes.
Currently we store the following information in our outbox (with examples):
MessageType: SomeNamespace.SomeType, SomeContract
MessageBody: {"SomeProperty":"MyValue"}
TransmitMethod: Send/Publish
QueueName: SomeQueueName
My question is whether it's possible to dispatch these messages without having access to the contract types.
I can add more information to the table if needed to make this happen.
You can look at how the MassTransit message scheduler support for Quartz.NET captures and ultimately sends the message on the transport. In this case, it's saving the serialized message from the transport and reloading the JSON into the message body at serialization time.
You might also find useful details in the relational outbox draft PR.
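For illustration, here is a minimal sketch of the transport-level approach, assuming the outbox stores the complete serialized MassTransit envelope (as the scheduler support does) rather than just the bare message body, and that QueueName maps to the exchange MassTransit binds to that queue. OutboxMessage and RawOutboxDispatcher are hypothetical names; this publishes with the RabbitMQ client directly, so no contract assemblies are needed:

using System.Text;
using RabbitMQ.Client;

// Hypothetical shape of an outbox row (matches the columns described above)
public record OutboxMessage(string MessageType, string MessageBody, string TransmitMethod, string QueueName);

public class RawOutboxDispatcher
{
    readonly IConnection _connection;

    public RawOutboxDispatcher(string host)
    {
        _connection = new ConnectionFactory { HostName = host }.CreateConnection();
    }

    public void Dispatch(OutboxMessage row)
    {
        using var channel = _connection.CreateModel();

        var props = channel.CreateBasicProperties();
        props.ContentType = "application/vnd.masstransit+json"; // MassTransit JSON envelope
        props.Persistent = true;

        // Send: target the exchange bound to the destination queue.
        // Publish would instead target the message-type exchange.
        channel.BasicPublish(
            exchange: row.QueueName,
            routingKey: "",
            basicProperties: props,
            body: Encoding.UTF8.GetBytes(row.MessageBody));
    }
}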
I have a web application that publishes messages to a topic, then several Windows services that subscribe to those topics, some with multiple instances. If the services are running when the messages are published, everything works correctly, but if they are not, the messages are retained on the queue(s) subscribing to that topic but aren't read when the services start back up.
The desired behavior:
When a message is published to the topic string MyTopic, it is read from MyTopicQueue only once. I use some wildcard topics, so each message is sent to multiple queues, but multiple instances of a service subscribe to the same topic string, and each message should be read by only one of those instances.
If the subscribers to the MyTopic topic aren't online when the message is published, then the messages are retained on MyTopicQueue.
When the Windows services subscribing to a particular topic come back online, each retained message is read from MyTopicQueue by only a single subscriber.
I've found some [typical for IBM] spotty documentation about the MQSUBRQ and MQSO_PUBLICATIONS_ON_REQUEST options, but I'm not sure how I should set them. Can someone please help me figure out what I need to do to get my desired behavior? [Other than switching back to RabbitMQ, which I can't do though I'd prefer it.]
My options:
private readonly int _openOptions = MQC.MQSO_CREATE | MQC.MQSO_FAIL_IF_QUIESCING | MQC.MQSO_MANAGED;
private readonly MQGetMessageOptions _messageOptions = new MQGetMessageOptions();
Code to open the Topic:
_topic = _queueManager.AccessTopic(_settings.TopicString, null,
MQC.MQTOPIC_OPEN_AS_SUBSCRIPTION, _openOptions);
The line of code that reads from the topic (taken from a loop):
_topic.Get(mqMessage, _messageOptions);
If you want the messages to accumulate while you are not connected, you need to make the subscription durable by adding MQC.MQSO_DURABLE. To be able to resume an existing subscription, add MQC.MQSO_RESUME in addition to MQC.MQSO_CREATE.
Be careful with terminology: what you are describing as retained messages is actually a durable subscription.
Retained publications are something else: MQ can retain the most recently published message on each topic, and that message will be delivered to new subscribers by default unless they use MQSO_NEW_PUBLICATIONS_ONLY to skip receiving the retained publication.
MQSO_PUBLICATIONS_ON_REQUEST allows a subscriber to only receive retained publications on request; it will not receive non-retained publications.
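As a hedged sketch against the options shown in the question, the open options for a durable, resumable subscription would look something like this (a durable subscription also needs a subscription name supplied when the topic is opened, so it can be resumed later):

// MQSO_MANAGED is kept so MQ manages the backing subscription queue;
// MQSO_DURABLE keeps the subscription (and its messages) while disconnected,
// and MQSO_RESUME lets a restarted service pick the same subscription back up.
private readonly int _openOptions =
    MQC.MQSO_CREATE | MQC.MQSO_RESUME | MQC.MQSO_DURABLE |
    MQC.MQSO_FAIL_IF_QUIESCING | MQC.MQSO_MANAGED;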
If you want multiple consumers to work together on a single subscription you have two options:
Look at shared subscribers in XMS.NET; see the CLONESUPP property.
Create a one-time durable subscription to a queue for the topics you want consumed, then have your consumers consume directly from that queue, not from the topic.
Introduction
We exchange income data with an external party. Each year income tax regulations change and a new message schema has to be implemented. Altogether we now have 8 different schema versions, each of which is deployed in a separate 'year income tax' application, and this number increases by one each year.
Because we pay our hosting company per installed application, we want to reduce the number of applications installed.
All these applications are functionally equal: we validate incoming messages and forward valid messages into a specific MQSeries queue. Each invalid message is routed to a response queue. Each application has its own 'valid' and 'invalid' message queues.
The plan
One generic application that processes all 8(+) messages. New schemas must be deployable without application changes or downtime for previous, running 'income year tax' flows.
So far...
I can receive multiple messages on the same BizTalk receive port (MessageType XmlDocument) and am able to validate these messages dynamically in an orchestration by calling a custom receive pipeline (XML Disassembler + XML Validator). Exceptions as well as valid messages are processed as prescribed. There are no references between the schemas and the generic application, so schemas can be deployed without the need to stop running processes. So far, so good.
The orchestration has 1 receive shape, and 2 send shapes (valid, invalid).
SSO contains the values for routing the 'valid' and 'invalid' messages to their correct queues. Based on the incoming message type, SSO is queried for the correct 'valid' or 'invalid' queue definition.
The problem
I have previously dealt with dynamic FTP, FILE, WCF and SMTP ports, which all worked flawlessly after supplying the adapter with the correct context properties. Even MSMQ seems to have a fairly straightforward approach to dynamically setting transport properties.
However, I cannot seem to find the MQSeries (MQMT) context properties to set the queue definition dynamically.
Microsoft does not provide much information on this, and extensive searches on the internet haven't provided me with anything useful (examples) either.
I tried matching IBM's docs with Microsoft's, but altogether I am now stuck.
I would suggest using the MQSC adapter for IBM MQ integration. It is part of the Host Integration Server MSI. It only requires the MQ client to be installed on the server, versus the MQ Server for Windows installation required by the MQSeries adapter.
Set the OutboundTransportLocation property in the following format: mqsc://{channelName}/tcp/{server}({port})/{queuemanager}/{queuename}
TransportType = MQSC
Context Properties - the schema can be found in the assembly MQSeriesEx.MQSPropertySchemaEx with the namespace http://schemas.microsoft.com/BizTalk/2003/mqs-properties.
There are only a few context properties you would need to set, if required at all:
Channel_HeartBeat
Channel_MaxMessageLength
Channel_UserId
Channel_Password
ConnectionTimeout
If additional properties are required, then use the MQSeries.MQSPropertySchema context properties.
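A hedged sketch of what this looks like in an orchestration Expression shape, with DynSendPort as a placeholder dynamic port name and placeholder channel/server/queue values; the address itself can be built from the queue definition retrieved from SSO, so adding a new year's flow only means adding configuration:

// set the dynamic send port's transport and address for the MQSC adapter
DynSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "MQSC";
DynSendPort(Microsoft.XLANGs.BaseTypes.Address) = "mqsc://MY.CHANNEL/tcp/myserver(1414)/QMGR/VALID.QUEUE";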
Thanks Vikas for your suggestion.
I followed your directions and found it works!
However, I found it a little more complicated than needed, as it required me to configure channel names for each flow.
The solution that best suited me was the one I had in mind all along, and it was right before me. My attempts had failed because I made a fatal mistake: I set the outgoing message's properties where I should have set the dynamic send port's properties.
SendPort(Microsoft.XLANGs.BaseTypes.Address)="MQS://SERVER/QMANAGER/QUEUENAME";
I'm currently building a system using MassTransit and RabbitMQ as my messaging layer. I'm trying to find a way to have a consumer that listens to all messages of all types on the bus. This is for our audit logging framework, and we want to log all of the events going across the message bus.
Is there a way to do this in MassTransit?
You would need to add some type of auditing interface to your messages that could be subscribed to for auditing purposes. For example, if you were to create a base interface:
public interface IAuditable
{
    DateTime Timestamp { get; }
    string Username { get; }
}
Or whatever properties must be commonly available for auditing. Then you can subscribe to that interface and get a copy of every message. Or you could make it an empty interface and just audit message headers. But the messages would need to implement it, and be published, for you to get a copy.
This seems like a generally bad idea, since you're creating copies of the messages all over the place...
Another approach would be to add an observer to message consumption and use that observer to either write to the audit storage, or to send a message to an audit queue and let that asynchronous consumer write to the audit storage.
The thing is, if you're auditing every message, and every message is sending an audit message, make sure you don't observe your audit consumer or you'll die the infinite death.
The observer option is my favorite, since it not only logs the message, but allows the disposition (success/fault) to be captured, as well as the host which consumed the message, processing duration, etc.
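As a hedged sketch of the observer approach (not the author's exact code): a consume observer connected to the bus that writes each consumed message and its disposition to an audit sink. IAuditSink is a hypothetical abstraction; it could write to a table or send to a dedicated audit queue (just not one this observer also observes):

using System;
using System.Threading.Tasks;
using MassTransit;

public interface IAuditSink
{
    Task WriteAsync(string messageType, object message, string disposition);
}

public class AuditConsumeObserver : IConsumeObserver
{
    readonly IAuditSink _sink;

    public AuditConsumeObserver(IAuditSink sink) => _sink = sink;

    // called before the consumer runs; nothing to record yet
    public Task PreConsume<T>(ConsumeContext<T> context) where T : class
        => Task.CompletedTask;

    // called after successful consumption
    public Task PostConsume<T>(ConsumeContext<T> context) where T : class
        => _sink.WriteAsync(typeof(T).FullName, context.Message, "success");

    // called when the consumer throws
    public Task ConsumeFault<T>(ConsumeContext<T> context, Exception exception) where T : class
        => _sink.WriteAsync(typeof(T).FullName, context.Message, "fault: " + exception.Message);
}

// connected once at bus configuration time:
// busControl.ConnectConsumeObserver(new AuditConsumeObserver(sink));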
MassTransit has built-in support for auditing.
See this link:
https://masstransit-project.com/advanced/audit.html
So you're better off using the built-in functionality instead of creating observers and other hacks.
Two main parts need to be saved for each message to provide a complete audit:
The message itself
Metadata
Message metadata includes:
Message id
Message type
Context type (Send, Publish or Consume)
Conversation id
Correlation id
Initiator id
Request id (for request/response)
Source address
Destination address
Response address (for request/response)
Fault address
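A hedged wiring sketch of that built-in support (ConsoleAuditStore is a hypothetical IMessageAuditStore that just dumps each message and part of its metadata; MassTransit also ships ready-made stores, e.g. for Entity Framework Core):

using System;
using System.Text.Json;
using System.Threading.Tasks;
using MassTransit.Audit;

public class ConsoleAuditStore : IMessageAuditStore
{
    public Task StoreMessage<T>(T message, MessageAuditMetadata metadata) where T : class
    {
        // metadata also carries the ids and addresses listed above
        Console.WriteLine($"{metadata.ContextType} {typeof(T).FullName}: {JsonSerializer.Serialize(message)}");
        return Task.CompletedTask;
    }
}

// when configuring the bus:
// var store = new ConsoleAuditStore();
// busControl.ConnectSendAuditObservers(store);    // audits Send/Publish
// busControl.ConnectConsumeAuditObserver(store);  // audits Consume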
I have been looking for a message bus with publish/subscribe functionality. I found that AWS SQS does not support FIFO, so I had to give up on it. Working with Azure Service Bus, I found that queues do support FIFO, but it seems like topics do not. And topics are what we need, with their pub-to-many-sub model :(
Is it just a setting I am missing? I tried sending 100 messages from my C# client, and the subscribers got the messages in the wrong order. Any tips would be appreciated.
Thanks!
You can use sessions to get an Azure topic to provide FIFO ordering, but that's not quite the same thing as guaranteeing the order in which messages are processed.
Sessions are not enough of a guarantee here. For example, if you are using PeekLock mode then a message that times out will return to the queue and be processed out of order. You can use ReceiveAndDelete mode to counter this behavior but that means you lose the transactional nature of message handling.
One of the reasons why documentation may be a little light in this area is that it isn't a common use case. Messaging is about decoupling through asynchronous communication, and ordering guarantees create temporal coupling between applications.
Ideally you should design your payloads so ordering doesn't matter. If that fails, use a timestamp that allows you to discard messages received out of order.
Discussed in more detail here: Don’t assume message ordering in Azure Service Bus
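If ordering really is required, here is a hedged sketch of the sessions approach using the classic WindowsAzure.ServiceBus SDK (the same one used in the snippet below); topic, subscription, and session names are placeholders:

// the subscription must require sessions
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
namespaceManager.CreateSubscription(new SubscriptionDescription("TestTopic", "MySubscription") { RequiresSession = true });

// publisher: stamp related messages with the same SessionId
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "TestTopic");
topicClient.Send(new BrokeredMessage("payload") { SessionId = "order-12345" });

// subscriber: accept the session and drain its messages in FIFO order
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, "TestTopic", "MySubscription");
var session = subscriptionClient.AcceptMessageSession("order-12345");
var message = session.Receive();
message.Complete();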
You should be able to achieve this by setting the property SupportOrdering to true:
// Configure Topic Settings
TopicDescription td = new TopicDescription("TestTopic");
td.SupportOrdering = true;
// Create a new Topic with custom settings
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
namespaceManager.CreateTopic(td);
I have a Redis instance that publishes messages via different topics. Instead of implementing a complex heartbeat mechanism (complex because the instance would stop publishing messages after some time if they are not consumed), is there a way to check whether published messages are consumed by anyone?
For example, instance RedisServer publishes messages to topic1 and topic2. RedisClient1 subscribes to topic1 and RedisClient2 subscribes to topic2. When RedisClient2, for whatever reason, stops consuming messages from topic2, I want RedisServer to know about it and decide when to stop publishing messages to topic2. The discontinuation of topic2 consumption is unpredictable, hence I am not able to inform RedisServer of the discontinuation/unsubscription.
I thought if there was a way for a redis instance to know whether messages of a certain topic are consumed or not then that would be very helpful information.
Any idea whether that is possible?
Given you are using a recent enough version of Redis (2.8.0 or newer), these two commands may help you:
PUBSUB CHANNELS [pattern]
which lists the currently active channels (i.e. channels having at least one subscriber) matching the pattern.
PUBSUB NUMSUB [chan1 ... chanN]
which returns the number of subscribers for the specified channels (it doesn't work for patterns, however).
Note: Neither command will tell you whether a message was truly processed! If you need to know about completion of tasks (if your messages are triggering something), I would recommend looking at a full-blown job queue (for example Resque, if you want to stick with Redis).
Edit: Here's the Redis doc. for all of the above: http://redis.io/commands/pubsub
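A hedged sketch of checking this from .NET with StackExchange.Redis (server address and channel names are placeholders):

using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost:6379");
var server = redis.GetServer("localhost", 6379);

// PUBSUB CHANNELS topic* -> channels that currently have at least one subscriber
foreach (var channel in server.SubscriptionChannels("topic*"))
    Console.WriteLine(channel);

// PUBSUB NUMSUB topic2 -> subscriber count for that exact channel
long subscribers = server.SubscriptionSubscriberCount("topic2");
if (subscribers == 0)
    Console.WriteLine("No one is listening on topic2");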
You can also use the result of PUBLISH. It will give you the number of subscribers that received the message: http://redis.io/commands/publish
This way you don't need to poll the PUBSUB command; just apply your "stop publishing" logic after you publish a message.
At most, you publish one message that no one is subscribed to.
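A hedged sketch of that approach with StackExchange.Redis (channel name is a placeholder); Publish returns how many subscribers received the message:

using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost:6379");
var publisher = redis.GetSubscriber();

long receivers = publisher.Publish("topic2", "payload");
if (receivers == 0)
{
    // no one consumed this message; stop publishing to topic2
}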