Lately we have set up EasyNetQ queues in a publish-subscribe and request-response fashion, with a common queue connection/endpoint. The goal is to configure EasyNetQ so that each developer can work on queue-based logic independently of the others. Currently every developer machine gets its own prefixed set of queues, but it appears that when two or more developers start subscribers, they read messages not only from their own queues but also from the other developers' queues.
How can EasyNetQ be configured in code to resolve this issue?
An easier way to isolate the activities of different developers, as you describe, would be to use a different virtual host (vhost in the connection string) for each developer. Vhosts are completely isolated from each other.
Queues could then have the same name for each developer. The fact that they are on a different vhost would separate them.
https://www.rabbitmq.com/vhosts.html
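A minimal sketch of what that could look like in code, assuming the standard RabbitHutch connection-string API and a hypothetical DEV_VHOST environment variable holding each developer's vhost name:

```csharp
using System;
using EasyNetQ;

// Each developer gets their own vhost (e.g. "dev-alice"), created once by an
// administrator: rabbitmqctl add_vhost dev-alice
// DEV_VHOST is a hypothetical environment variable holding that vhost name.
var vhost = Environment.GetEnvironmentVariable("DEV_VHOST") ?? "dev-local";

// Queue names stay identical for every developer; the vhost alone isolates them.
var bus = RabbitHutch.CreateBus(
    $"host=rabbitmq.example.local;virtualHost={vhost};username=dev;password=dev");
```

Permissions on the new vhost can then be granted with rabbitmqctl set_permissions -p dev-alice dev ".*" ".*" ".*".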
I am relatively new to MassTransit in .NET and its integration with AWS SNS/SQS. The documentation and tutorials are fine, but as usual, the devil is in the details.
I am especially unsure about the best way to configure MassTransit in a setup with multiple environments, e.g., development, staging, and production, and with multiple application instances in the production environment.
[Figure: multi-environment sketch]
Here is what I know so far:
For different environments, create a dedicated SNS topic for each message and each environment, e.g., development-my-event and production-my-event.
Endpoints need to be unique. Therefore, create a separate endpoint for each consumer in the application, e.g., development-consumer1-my-event. A second consumer in a different application will get an endpoint development-consumer2-my-event.
Now assuming that there are multiple people working simultaneously on the applications, in order to avoid messaging conflicts, they will need their own topics and queues as well, e.g., development-user123-my-event. Is that a valid way to go?
The production code runs in a clustered environment, i.e., there will be multiple instances of any application. Do we then have to make the endpoints of each application even more unique, e.g., by adding another identifier, so that the queue names do not conflict?
Hoping that this is not the case, I would assume that I can start many identical consumers listening to the same endpoint, which would (hopefully) also ensure that only one of them processes, e.g., a given command message.
Any insights to my thoughts highly appreciated, thanks!
So far:
Implemented custom entity and endpoint name formatters to distinguish different environments.
The formatters diverge further for different dev machines
Specified to use ".fifo" for command queues and topics
Yes, you'd need to add yet another discriminator for each developer. Not sure I would ever do this, I'd likely just use localstack and develop locally.
If by multiple instances, you mean scaled out to load balance by competing consumer on the same queue, that's the default behavior. If you need to fan out events to the same consumer in multiple applications, you'll need to either scope them or use some prefix on the endpoint name formatter.
As mentioned above, competing consumer is the default if a service is scaled out to multiple instances.
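A rough sketch of such a discriminator, assuming MassTransit's Amazon SQS transport and the prefix constructor of KebabCaseEndpointNameFormatter (present in recent versions); the configuration keys, consumer, and message types are hypothetical:

```csharp
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

// Hypothetical settings: "development"/"staging"/"production", plus a developer
// name that is only set on local dev machines.
var env = builder.Configuration["Messaging:Environment"] ?? "development";
var user = builder.Configuration["Messaging:Developer"];
var prefix = string.IsNullOrEmpty(user) ? env : $"{env}-{user}";

builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<MyEventConsumer>();

    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("eu-west-1", h => { /* credentials via the default AWS chain */ });

        // Queue names get the prefix, e.g. "development-user123-my-event";
        // topic names need a similarly prefixed entity name formatter, which the
        // question already implements.
        cfg.ConfigureEndpoints(context, new KebabCaseEndpointNameFormatter(prefix, false));
    });
});

await builder.Build().RunAsync();

public record MyEvent(string Text);

public class MyEventConsumer : IConsumer<MyEvent>
{
    public Task Consume(ConsumeContext<MyEvent> context) => Task.CompletedTask;
}
```

In staging and production the developer setting would simply be left empty, so all instances of a service share the same queue and compete for messages, which matches the default scale-out behavior described above.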
RabbitMQ allows for 'quorum queues'. As far as I have read in the documentation, 'quorum' queues are replicated across nodes within a RabbitMQ cluster, whereas 'classic' queues host a specific queue on a specific node. I understand that there will be higher latency when using 'quorum' rather than 'classic' queues.
I use ServiceStack to talk to RabbitMQ. The exchanges and the queues are created automatically - based around my requests and responses, and this all works well.
I am writing software for use in a highly available environment - I am writing C# code, using .NET 6 in a Linux environment (docker containers running in K8s), and am using ServiceStack 6.0.2. I would like to use 'quorum' queues rather than 'classic' queues if possible to help prevent message loss if one of the rabbit nodes in the cluster goes down.
Is it possible for ServiceStack to create 'quorum' queues? Having read through the documentation, searched SO, searched the ServiceStack forums, general web searching and experimentation in a stand-alone application, I can find no obvious way of creating these types of queues automatically via ServiceStack. By the looks of it, the queues are registered with various features, but always seem to be created as 'classic' queues.
Furthermore, will there be any problem with using ServiceStack and 'quorum' queues? The RabbitMQ documentation suggests that "a client library that can use regular mirrored queues will be able to use quorum queues", but I am unclear whether this is the case with ServiceStack.
No, ServiceStack doesn't support creating RabbitMQ quorum queues.
ServiceStack MQ is a messaging abstraction over multiple MQ implementations to enable alternative Reply and OneWay endpoints for invoking your Services.
You'll need to utilize the MQ libraries directly when you need additional MQ-specific features beyond this.
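For example, a quorum queue can be declared directly with the RabbitMQ.Client package (a sketch; the queue name below is only an illustration of ServiceStack's "mq:{Type}.inq" naming convention, and the API shown is the pre-7.x synchronous client):

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "rabbitmq.example.local" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// The queue type is fixed at declaration time via the x-queue-type argument;
// quorum queues must also be durable, non-exclusive and non-auto-delete.
channel.QueueDeclare(
    queue: "mq:MyRequest.inq",
    durable: true,
    exclusive: false,
    autoDelete: false,
    arguments: new Dictionary<string, object> { ["x-queue-type"] = "quorum" });
```

Note that if ServiceStack later re-declares the same queue with different arguments, RabbitMQ rejects the declaration (PRECONDITION_FAILED), so mixing this with ServiceStack's automatic queue registration would need care.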
I would like to find a solution for creating a pub/sub medium through which two microservices can talk to each other.
I am aware that I can use third-party tools, e.g., Redis or RabbitMQ:
Implementing event-based communication between microservices (integration events)
The challenge is that the client does not allow installing any third-party tools due to security reasons.
The Message Queuing (MSMQ) server in Windows is not allowed either.
I can only use the applications that already exist on the server.
Therefore I am asking whether there is any way I can create a simple app as a Windows service for this.
It is a one-to-many relationship: I have one service that deals with the data, and whenever there is an update it publishes to the services that are subscribed to it.
It seems my problem could be similar to
.NET Scalable Pub/Sub service implementation
WCF Pub/Sub with subscriber caching (the WCF pub-sub link is dead)
but I don't see any conclusive solutions there.
I was thinking of using the data notifications that MSSQL offers as a last alternative, but it seems like that could become a bottleneck when the applications scale up.
The internet is flooded with articles that use third-party tools.
Thanks
Check out the Rebus library, which allows using different transport methods to send and receive messages in just a line of code (so in the future you can change the transport without effort).
You could use SQL Server as the transport, or try to develop your own transport method.
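A rough sketch of the subscriber side with Rebus over SQL Server (assuming the Rebus and Rebus.SqlServer packages; the exact UseSqlServer/StoreInSqlServer overloads vary a little between versions, and the DataUpdated event is hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Rebus.Activation;
using Rebus.Config;

const string connectionString = "Server=.;Database=Messaging;Trusted_Connection=True";

using var activator = new BuiltinHandlerActivator();

// Handle the hypothetical DataUpdated event published by the data-owning service.
activator.Handle<DataUpdated>(evt =>
{
    Console.WriteLine($"Got update for record {evt.Id}");
    return Task.CompletedTask;
});

var bus = Configure.With(activator)
    .Transport(t => t.UseSqlServer(connectionString, "subscriber-queue"))
    .Subscriptions(s => s.StoreInSqlServer(connectionString, "Subscriptions", isCentralized: true))
    .Start();

await bus.Subscribe<DataUpdated>();

// The publishing service would simply call:
// await bus.Publish(new DataUpdated { Id = 42 });

Console.ReadLine();

public class DataUpdated { public int Id { get; set; } }
```

Both services only need a reachable SQL Server database, which fits the "no third-party broker allowed" constraint, at the cost of the polling-based latency of a database-backed transport.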
I have a straightforward, existing ASP.NET MVC web solution. The server-based stuff writes information to a database. I am now going to integrate/synchronize this system with a number of other 3rd-party systems. I want to separate the integration processing from the existing core processing, leaving the existing system as untouched as possible.
My plan is as follows:
whenever a database write occurs on the core system server I will write a message to an MSMQ queue.
an entirely separate server-based Windows service will poll that queue, look at each message, and write messages to one or more 'outbound' sync MSMQ queues (a rough sketch of this routing service follows after this list).
other Windows services will monitor the 'outbound' sync queues and will talk to the 3rd-party systems as necessary, managing the outbound synchronization.
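The routing service's core loop could look roughly like this, assuming .NET Framework's System.Messaging; the SyncMessage type, queue paths, and ResolveOutboundQueues mapping are all hypothetical placeholders:

```csharp
using System.Messaging;

// Hypothetical message type shared between the core system and the router.
public class SyncMessage
{
    public string EntityType;
    public int EntityId;
}

public static class RoutingService
{
    public static void Run()
    {
        // The queue the core system writes to after each database write.
        var inbound = new MessageQueue(@".\private$\core-writes")
        {
            Formatter = new XmlMessageFormatter(new[] { typeof(SyncMessage) })
        };

        while (true)
        {
            var message = inbound.Receive();           // blocks until a message arrives
            var body = (SyncMessage)message.Body;

            // Hypothetical mapping from a message to the 3rd-party systems
            // that need to be synchronized.
            foreach (var path in ResolveOutboundQueues(body))
            {
                using (var outbound = new MessageQueue(path))
                {
                    outbound.Send(body);               // serialized with the default XML formatter
                }
            }
        }
    }

    static string[] ResolveOutboundQueues(SyncMessage message)
    {
        return new[] { @".\private$\sync-systemA", @".\private$\sync-systemB" };
    }
}
```

In the actual Windows service this loop would run on a background thread started from OnStart, with error handling and poison-message handling wrapped around the Receive call.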
I have a couple of questions:
Should I have a single windows service doing all this, or should I have several services, one central 'routing' one and one for each 3rd-party system?
Should I use WCF for any of this? Does that buy me anything, given that the 'trigger' for writing to the initial queue is already happening in a server-based process?
Thanks very much.
To answer your questions:
Should I have a single windows service doing all this
Definitely not. What if you want to scale out the routing service, or relocate it?
Should I use WCF
If you have your heart set on MSMQ, then the only advantage WCF gives you is that it provides a convenient, proven way to design and host your service endpoints, and an alternative to mucking around in System.Messaging. I would say at this stage it doesn't matter that much.
Does that buy me anything
Not sure what you mean, but as Wiktor says in his post, you could choose not to use vanilla .NET or WCF and instead pick a service-bus framework such as MassTransit or NServiceBus.
The benefit here is that it abstracts you away from the messaging subsystem, so you could in theory move away from MSMQ in the future to RabbitMQ or Azure queues.
First, a separate Windows service is always safer than any attempt to integrate this with your ASP.NET runtime.
Second, do not write anything by yourself. Use
http://code.google.com/p/masstransit/
It is straightforward and does everything you need. Reference the library from their nuget package, read some tutorials and you will love it.
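To give a feel for the programming model, here is a sketch with a current MassTransit version over RabbitMQ (recent releases no longer ship an MSMQ transport); the message and consumer types are hypothetical:

```csharp
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<ThirdPartySyncConsumer>();

    // The transport is one configuration call, so moving from RabbitMQ to, say,
    // Azure Service Bus later stays localized to this block.
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq.example.local", "/", h => { });
        cfg.ConfigureEndpoints(context);
    });
});

await builder.Build().RunAsync();

// Hypothetical message published by the core system after each database write.
// The MVC side would inject IPublishEndpoint and call Publish(new DatabaseWriteOccurred(...)).
public record DatabaseWriteOccurred(string Entity, int Id);

public class ThirdPartySyncConsumer : IConsumer<DatabaseWriteOccurred>
{
    public Task Consume(ConsumeContext<DatabaseWriteOccurred> context)
    {
        // Push the change to the relevant 3rd-party system here.
        return Task.CompletedTask;
    }
}
```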
I am creating a new ASP.NET MVC order application in the Amazon (AWS) cloud with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume.
My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own c# application to process the incoming messages/events.
MVC (# Amazon) --> Event/POCO --> SQS --> QueueReader (# my datacenter) --> DB
Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full featured and mature product that would be perfect for me.
But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (Cloud to my Datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN like I need.
Can I do this?
MVC (# Amazon) --> Event/POCO --> NServiceBus over WAN --> NServiceBus Handler(s) --> DB
How can I use NServiceBus across the WAN? Or is there a better solution to handle queuing and message handling between Amazon and my local datacenter?
Using SQS as a transport for NServiceBus is an option; however, you have to be aware of the trade-offs as described here. This has been done with Azure queue storage, though I'm not aware of any great SQS implementations.
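For reference, wiring up an SQS-backed endpoint looks roughly like this (a sketch assuming the NServiceBus.AmazonSQS transport package and a v6+ endpoint API; the endpoint and message names are hypothetical, and the exact transport API has changed between versions):

```csharp
using NServiceBus;

// Region and credentials are taken from the standard AWS environment/profile chain.
var endpointConfiguration = new EndpointConfiguration("OrderIntake");
endpointConfiguration.UseTransport<SqsTransport>();

var endpointInstance = await Endpoint.Start(endpointConfiguration);

// From the MVC action that accepts an order (OrderAccepted is a hypothetical command):
// await endpointInstance.Send(new OrderAccepted { OrderId = id });
```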
Another option is to create a VPN between your datacenter and an AWS VPC. This would allow direct MSMQ communication between AWS servers and your data center, provided you open the appropriate ports in the corresponding security group. There are some caveats with this approach. The first concerns endpoint names: NServiceBus version 2.6 and below uses Environment.MachineName as the name of the endpoint, for which you would have to set up proper DNS. I believe later versions use the machine's IP address. Perhaps a more important caveat is that a VPN makes your systems more coupled.
Yet another way is to use the NServiceBus notion of a gateway. This, however, should be a logical business decision. A gateway is very similar to the regular transport, but it usually has a different business context behind it.
NServiceBus includes a Gateway component that handles bridging physically separated data centers.
http://docs.particular.net/nservicebus/gateway/
It basically moves the messaging to an HTTP channel and handles the retry logic and deduplication issues that you'd normally have with a web service.
If you download the full NServiceBus package (not just include it via NuGet) then you will see a folder full of samples, one of which covers usage of the Gateway; that is a great way to get started.
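To give a flavour of the sending side (a sketch based on the older IBus-era API; the site key, message type, and gateway channel configuration are all assumptions, and the exact setup differs per NServiceBus version, so follow the Gateway sample for yours):

```csharp
using NServiceBus;

// Hypothetical message carrying the order data to be persisted on-premises.
public class OrderAccepted : IMessage
{
    public int OrderId { get; set; }
}

public class OrderService
{
    readonly IBus bus;

    public OrderService(IBus bus)
    {
        this.bus = bus;
    }

    public void OrderPlaced(int orderId)
    {
        // "Datacenter" is a site key that the gateway configuration maps to the
        // on-premises gateway's HTTP(S) address; the gateway handles the retries
        // and de-duplication across the WAN.
        bus.SendToSites(new[] { "Datacenter" }, new OrderAccepted { OrderId = orderId });
    }
}
```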