MassTransit/RabbitMQ multiple distinct instances on the same machine - C#

I was wondering if it is possible to run multiple MassTransit or RabbitMQ instances on the same server. Basically we have a .NET app using MassTransit on top of RabbitMQ. Unfortunately a lot of our clients run both live and test environments on the same server, so in order to deploy to the real world we need either multiple instances or a way of segregating messages between live and test.
A few ideas I've had:
1) Do something like: https://lazareski.com/multiple-rabbitmq-instances-on-1-machine/
The problem here is that it relies on a lot of configuration at clients' sites.
2) I could include a header in all messages and have each consumer check for the presence of the correct header before consuming the message (e.g. the header is 'live' or 'test'). Obviously this means every instance receives all messages whether they are meant for it or not, which is far from ideal.
Ideally I would like to be able to do something with minimal setup at a client's site, like a virtual sub-instance or directory for each environment.

There are two ways to work around this issue.
The first way is the most obvious - you need to use virtual hosts.
From the documentation:
Virtual hosts provide logical grouping and separation of resources. Separation of physical resources is not a goal of virtual hosts and should be considered an implementation detail.
Create two virtual hosts in your RMQ instance, called test and prod. The only thing you would need to do on the MassTransit side is change the RMQ connection string:
Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost/test"), host =>
    {
        host.Username("username");
        host.Password("password");
    });
});
So you will use rabbitmq://localhost/prod for production. Naturally, those values should not be hard-coded but should come from the configuration.
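As a minimal sketch of that (the appSettings key names here are just placeholders, not anything MassTransit requires):

using System;
using System.Configuration;   // requires a reference to System.Configuration
using MassTransit;

// Read the environment-specific values from appSettings so the same binary
// can point at the "test" or "prod" virtual host.
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri(ConfigurationManager.AppSettings["RabbitMqHostUri"]), host =>
    {
        host.Username(ConfigurationManager.AppSettings["RabbitMqUser"]);
        host.Password(ConfigurationManager.AppSettings["RabbitMqPassword"]);
    });
});

The test environment's app.config would then contain something like <add key="RabbitMqHostUri" value="rabbitmq://localhost/test" />, and the production one rabbitmq://localhost/prod.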
I believe that virtual hosts cover your needs entirely.
If you really need to run the test environment completely separated, you can just run it in a Docker container. This option will give you the ability to kill the whole thing and start from scratch when you need a clean environment. You can easily remap default ports to avoid conflicts with the production instance:
docker run -d --name test-rmq -p 5673:5672 -p 8080:15672 rabbitmq:3-management
If you run the command above, the new instance will be accessible via AMQP on localhost:5673 and the management plugin will be on http://localhost:8080
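On the MassTransit side, pointing at that container is just a matter of including the remapped port in the host URI, along the lines of the snippet above (a sketch; the image's default credentials are guest/guest):

cfg.Host(new Uri("rabbitmq://localhost:5673/test"), host =>
{
    host.Username("guest");
    host.Password("guest");
});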

Related

Testing same test cases across multiple environments

Currently we have set up our UI test automation in one environment (Dev). We are using C#, .NET, Visual Studio, SpecFlow and MSTest.
CONFIG
We read an app.config file for environment specific variables. We use Azure Pipeline for CI and run these tests on a nightly build.
<configuration>
<appSettings>
<add key="DevApp" value="https://dev.websitename.com />
</appSettings>
<connectionStrings>
<add name="DevDatabase" connectionString="http://dev.url/" />
</connectionStrings>
</configuration>
Now we want to run these tests on our UAT environment as well. The setup will be the same, and we want to run the same tests with the same data. The only difference is that for UAT we will point to different URL's and a different database.
For example:
Dev env = https://dev.websitename.com
UAT env = https://uat.websitename.com
server name="DevDatabase" connectionString="http://dev.url/"
server name="UATDatabase" connectionString="http://uat.url/"
PASSWORDS
In terms of passwords, our application is an internal application and we use Windows auth. So in our dev and UAT environments we have the same password set up for all users: in Dev = devpassword and in UAT = uatpassword.
For both dev and UAT we are using the same users, with the password being the only difference. When testing we launch the browser using impersonation, i.e. we launch the browser as 'run as' for that user:
var service = ChromeDriverService.CreateDefaultService(driverpath);
if (user != null)
{
    var pwd = new SecureString();
    // (password characters appended to pwd here)
    service.StartDomain = Configurationhelper.Domain;
    service.StartupUserName = username;
    service.StartupPassword = pwd;
    service.StartupLoadUserProfile = true;
}
We store the domain, password and other environment variables in a separate config file as constants.
**Main issue:**
This won't work now, so I think it could be best to store passwords as secrets in Azure Pipeline variables? If so, how would I change this code?
For example: the server team, DB team and DevOps team have taken care of the server and DB setup, URLs etc.
So for me it's just configuring the test automation repo with my changes to configuration.
What would be an elegant approach to doing this?
AZURE PIPELINE
How could we run tests for both of these environments in parallel? By parallel I mean having both run on a nightly run. Our Azure pipeline has two separate clients, UAT and DEV, pointing to the same artifact. The tasks and variables are the same for both environments, obviously with different values.
Currently they both would run in isolation
Solutions to this problem come down to how the context (in your case the environment and all its associated connection strings and URLs) gets to the tests where it will be consumed. In your question you raised several orthogonal concerns:
using the same data
running in a different environment
running in parallel
Not mentioned is another concern
how to handle secrets (e.g. passwords in connection strings)
I'll explain one solution (a strategy really) that addresses these concerns, and why it appears to be a maintainable and extensible solution.
Using the same data
This can be very simple or very complex. The simple solution is to create a database of canonical and representative test data, and to then swap in that database to your environment. That can be done via your database's backup/restore or by creating the data programmatically. You would need to have a mechanism to restore or wipe the data whether the test(s) succeeds or fails.
Very rarely will using the environment's database "as is" lead to reliable tests; tests often modify state, and a database is the ultimate form of state; mutations of state will affect future tests.
Because of that, a full swap that occurs before/after each test is probably a) faster (occurring at a bulk/macro level with a quick swap function), b) more maintainable (data can be reviewed/created ahead of time) and c) less brittle.
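A minimal MSTest sketch of that swap, assuming SQL Server and entirely hypothetical names (the TestData database, backup path and connection string would come from your environment-specific configuration):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Data.SqlClient;

[TestClass]
public class CanonicalDataTests
{
    private const string MasterConnectionString =
        "Server=.;Database=master;Integrated Security=true";
    private const string BackupPath = @"C:\backups\canonical-test-data.bak";

    [TestInitialize]
    public void RestoreCanonicalData()
    {
        // Swap the canonical backup back in before every test so each test starts
        // from the same known state. (Existing connections to the database would
        // have to be dropped first in a real setup.)
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(
            "RESTORE DATABASE TestData FROM DISK = @path WITH REPLACE", connection))
        {
            command.Parameters.AddWithValue("@path", BackupPath);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}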
Running in a different environment
This, the heart of your question, is where you decide between multiple files and a single file. Using multiple files means you can take advantage of some of the built-in .NET configuration mechanisms that allow you to specify the environment; it also means duplicating the files and changing the values to reflect each environment.
The other way, as you mentioned, is storing all of this information in a single configuration file. If you do it this way, you need some sort of discriminator to disambiguate the entries, and your test needs to be able to pass the environment name to some API to pull the value. I prefer this mechanism personally, because when you add a new feature/test you can add all of its configuration in one place.
So the two mechanisms are roughly the same amount of work, except the latter leads to a more compact editing session when adding new config, and from a readability/maintainability standpoint you have fewer places to look.
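A sketch of what that single-file lookup could look like; the environment-variable name and key convention are placeholders, not anything the framework prescribes:

using System;
using System.Configuration;

public static class TestConfig
{
    // The pipeline (or a developer's machine) sets TEST_ENVIRONMENT to "DEV" or "UAT".
    public static string Env =>
        Environment.GetEnvironmentVariable("TEST_ENVIRONMENT") ?? "DEV";

    // Looks up "DEV.Some.Key" or "UAT.Some.Key" from the single appSettings file.
    public static string Get(string key) =>
        ConfigurationManager.AppSettings[$"{Env}.{key}"];
}

// Usage in a test: var baseUrl = TestConfig.Get("App.BaseUrl");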
Secrets
If you follow the single-source of configuration approach, you simply extend that to your secrets, but you select the appropriate secret store (e.g. a secrets file or Azure Key Vault, or some such... again with an environment-based discriminator). Here's an example:
{
    "DEV.Some.Key": "http://devhost/some/path",
    "UAT.Some.Key": "https://uathost/some/other/path"
    ...
}
Using a discriminator means far fewer changes to your DevOps pipeline, which is, from a developer/editing perspective, most likely slower and more cumbersome to change than a file or key vault.
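For the pipeline-secret part of the question specifically: Azure Pipelines secret variables are not exposed to scripts automatically, so you map them into environment variables on the test task and read them the same discriminated way. A sketch, reusing the TestConfig helper sketched above and assuming the secret is mapped as DEV_PASSWORD / UAT_PASSWORD:

using System;
using System.Security;

// Read the secret mapped by the pipeline (or set locally for debugging)
// and turn it into the SecureString the impersonation code expects.
string raw = Environment.GetEnvironmentVariable($"{TestConfig.Env}_PASSWORD") ?? string.Empty;
var pwd = new SecureString();
foreach (char c in raw)
{
    pwd.AppendChar(c);
}
pwd.MakeReadOnly();
// pwd can now be assigned to e.g. the StartupPassword used in the question's snippet.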
Running in parallel
While you could rotate the context and design your solution to run in parallel using the MSTest mechanisms, it is more elegant to hand this off to the pipeline itself and have enough resources (build agents and so on) to run those pipelines in parallel.
Conclusion
It comes down to which parts of the solution should be addressed by which resources. The solution above allocates environmental selection and test execution into the pipeline itself. Granular values such as connection strings and secrets are allocated to a single source to reduce the friction that occurs when having to edit these values.
Following this strategy might also better leverage your team's skills. Those with a DevOps mindset can most likely spin up new environments and parallelize more readily than those with a developer mindset, who will be more aware of what data needs to be set up and how to craft the tests.

WCF - how to connect to specific instance

Using WCF, .NET 4.5, Visual Studio 2015, and I want to use per-session instancing, not singleton. The services provided are to be full duplex, over net.tcp.
Suppose I have two machines, A & B...
B, as a client, connects to a "service" provided as a WCF service on the same machine B and starts talking to it; call it object “X”. It ALSO connects to another instance of the same service; call it object “Y”.
A, as a client, wants to connect to and use the exact same objects B is talking to, objects “X” and “Y”, except now it's remote-remote, not local-remote.
“X” and “Y” are actually video servers, and both have “state”.
Can I do this? How, when I’m a client, do I specify WHICH service instance I want to connect to?
Obviously, on machine "B", I could kludge this by having the services just be front-ends with no "state", which communicate with some processes running on "B", but that would require I write a bunch of interprocess code, which I hate.
Machine B is expected to be running 100's of these "video server" instances, each one being talked to by a local master (singleton) service, AND being talked to by end-user machines.
I realize this question is a bit generic, but it also addresses a question I could not find asked, or answered, on the Internets.
I just thought of one possible, but kludge-y solution: since the master service is a singleton, when service instance "X" is created by the end-user, it could connect to the singleton master service, through a proxy to the singleton. Then, the singleton can talk back to instance "X" over a callback channel. Yeah, that would work! messy, but possible.
I'd still like to know if end user A and end user B can both talk to the same (non-singleton) service instance on machine C through some funky channel manipulation or something. As I understand the rules of WCF, this simply isn't possible. Perhaps maybe if you're hosting the service yourself, instead of IIS, but even then, I don't think it's possible?
I've faced the same problem and solved it by creating two service references, one for the local service and one for the remote. Let's call them LocalServiceClient and RemoteServiceClient.
In a class, create a property called Client (or whatever you like to call it):
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
Okay, this covers only one of them. Now create the other one, and select which to use with a compiler flag:
#if DEBUG
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
#else
public RemoteServiceClient Client {
    get {
        return new RemoteServiceClient();
    }
}
#endif
Instantiate any instances of your Client using the var keyword so they are implicitly typed, or just use Client directly:
var client = Client;
client.DoSomething...
//or
Client.DoSomething...
This way, when you are working locally it will connect to the local service, and in the Release configuration (make sure you are on Release when publishing) it will compile against the remote one. Make sure you have the exact same signature/code for both services on the WCF side, though.
There are also ways to do this dynamically in code, or via web.config; they would work too, but they are usually overkill. You probably just need to connect to the local service while debugging and the remote one in production, and this approach gives you exactly that.
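For completeness, the "dynamically in code" option could look like the following sketch: generated WCF clients expose a constructor taking a binding and an EndpointAddress, so the target address can come from configuration rather than a compiler flag (the client class name, binding and URL here are placeholders):

using System.ServiceModel;

// Build the client against whichever host the configuration points at.
var binding = new NetTcpBinding();
var address = new EndpointAddress("net.tcp://remotehost:8523/MyService");
var client = new RemoteServiceClient(binding, address);
// For a duplex contract the generated client also takes an InstanceContext
// as its first constructor argument.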

consuming multiple clients of the same C# webservice from different remote locations

I'm working on two webservices
Call the first one ModelService
and the second one ConfigurationService
My goal is to have multiple servers running the ConfigurationService and one central server running the ModelService
So far what I have working is the ModelService has a ServiceReference added which points to http://localhost:4958/ConfigurationService.svc
And I access it as:
ConfigurationService.ConfigurationServiceClient svc = new ConfigurationService.ConfigurationServiceClient();
ConfigurationService.WrappedConfiguration config = svc.GetConfiguration();
I know there are constructors that take things like string endpointConfigurationName and string remoteAddress, which I'm guessing are how I will point to instances of the ConfigurationService on different servers.
What I can't get to work/don't understand is what I should add as a service reference to ModelService in order for it to be able to create ConfigurationService objects for each of the remote servers.
And how do I configure a ConfigurationService on a server so it knows what its endpoint is?
You can add the service reference from any of your servers running ConfigurationService. The important part is that you have to keep a list of those servers (URLs) somewhere in ModelService to be able to create a client for any of the "configuration servers". The constructor you mentioned will allow you to do that.
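A sketch of that, assuming the list of configuration servers lives in ModelService's own configuration and that the endpoint configuration name matches the one the service reference generated (both names below are placeholders):

// URLs of the servers running ConfigurationService, e.g. read from config.
var serverUrls = new[]
{
    "http://server1:4958/ConfigurationService.svc",
    "http://server2:4958/ConfigurationService.svc"
};

foreach (var url in serverUrls)
{
    // endpointConfigurationName comes from ModelService's config file;
    // remoteAddress overrides the address recorded when the reference was added.
    var svc = new ConfigurationService.ConfigurationServiceClient(
        "BasicHttpBinding_IConfigurationService", url);
    ConfigurationService.WrappedConfiguration config = svc.GetConfiguration();
    // ... use config ...
    svc.Close();
}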

Custom API requirement

We are currently working on an API for an existing system.
It basically wraps some web-requests as an easy-to-use library that 3rd party companies should be able to use with our product.
As part of the API, there is an event mechanism where the server can call back to the client via a constantly-running socket connection.
To minimize load on the server, we want to only have one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if you had multiple applications using the API.
So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem?
A couple options we thought of:
Write an out of process COM object (don't know if that works in .Net)
Include a second exe file that would be required for events, it would have to single-instance itself, and open a named pipe or something to communicate through multiple processes
Extract this exe file from an embedded resource and execute it
None of those really seem ideal.
Any better ideas?
Do you mean something like Net.TCP port sharing?
You could fix the client-side port when opening your socket, say to 45534. Since one port can be opened by only one process, only one process at a time would be able to open a socket connection to the server.
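A sketch of that idea; the server host and port are placeholders:

using System.Net;
using System.Net.Sockets;

// Bind the outgoing socket to a fixed local port. The constructor binds
// immediately, so a second process on the same machine attempting this gets a
// SocketException ("address already in use") and knows another instance
// already owns the server connection.
var client = new TcpClient(new IPEndPoint(IPAddress.Any, 45534));
client.Connect("api.example.com", 443);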
Well, there are many ways to solve this, as expressed in all the answers and comments, but maybe the simplest one is to keep a global status store in a place accessible to all users of the current machine (you might have various users logged in on the machine), where you record WHO currently has the right to keep the connection open. Something like a "lock", as it used to be called. That store can be a field in a local or intranet database, a simple file, or whatever. That way you don't need to build or distribute extra binaries.
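A sketch of the simple-file variant of that lock; the path is a placeholder, and whichever process manages the exclusive open is the one that owns the single connection:

using System.IO;

FileStream lockFile = null;
bool ownsServerConnection;
try
{
    // FileShare.None means only one process on the machine can hold this open.
    lockFile = new FileStream(@"C:\ProgramData\MyApi\connection.lock",
        FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    ownsServerConnection = true;   // keep lockFile open for the connection's lifetime
}
catch (IOException)
{
    ownsServerConnection = false;  // another process already holds the lock
}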
When a client connects to your server you create a new thread to handle it (not a process). You can store its IP address in a static dictionary (shared between all threads).
Something like:
using System.Collections.Generic;
using System.Net.Sockets;

static Dictionary<string, TcpClient> clients = new Dictionary<string, TcpClient>();

// This method is executed on a thread
void ProcessRequest(TcpClient client)
{
    string ip = null;
    // TODO: get client IP address
    lock (clients)
    {
        // ...
        if (clients.ContainsKey(ip))
        {
            // TODO: Deny connection
            return;
        }
        else
        {
            clients.Add(ip, client);
        }
    }
    // TODO: Answer the client
}
// TODO: Delete client from list on disconnection
The best solution we've come up with is to create a Windows service that opens up a named pipe to manage multiple client processes through one socket connection to the server.
Then our API will be able to detect if the service is installed and running, and otherwise fall back to creating its own connection for the client.
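The detection part can be done with ServiceController; a sketch, with the service name being a placeholder:

using System.Linq;
using System.ServiceProcess;   // requires a reference to System.ServiceProcess

// True if our broker service is installed and running; otherwise the API
// falls back to opening its own socket connection for this process.
bool brokerAvailable = ServiceController.GetServices()
    .Any(s => s.ServiceName == "MyApiConnectionBroker"
              && s.Status == ServiceControllerStatus.Running);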
3rd parties can decide if they want to bundle the service with their product or not, but core applications from our system will have it installed.
I will mark this as the answer in a few days if no one has a better option. I was hoping there was a way to execute our assembly as a new process, but all roads to do this do not seem very reliable.

Set service dependencies after install

I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc.). Often end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on start-up because it can't connect to the database. I know that I can specify dependencies for my service to help guide the Windows service manager to start the appropriate services before mine. However, I don't know what services to depend upon at install time (when my service is registered) since the user can change databases later on.
So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking? I've thought about trying to do something like wait 30 seconds after my service starts up before connecting to the database but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon start up since the database contains various pieces of vital info that my service needs when it first starts. Any ideas?
Dennis
What you're looking for is sc.exe. This is a command-line tool that users can use to configure services.
sc [ServerName] Command ServiceName [OptionName= OptionValue...]
More specifically, you would want to use:
sc [ServerName] config ServiceName depend= ServiceToDependOn
Here is a link to the command-line options for sc.exe:
http://msdn.microsoft.com/en-us/library/ms810435.aspx
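For the SQL Server case in the question, that could look like the following (assuming the default instance's service name, MSSQLSERVER; note that sc.exe needs the space after depend= and separates multiple dependencies with /):
sc config MyService depend= MSSQLSERVER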
A possible (far from ideal) code solution:
In your startup method, code it as a loop that terminates when you've got a connection. Inside that loop, trap any database connection errors and keep retrying, as the following pseudo-code illustrates:
bool connected = false;
while (!connected)
{
    try
    {
        connected = openDatabase(...);
    }
    catch (connection error)
    {
        // It might be worth waiting for some time here
    }
}
This means that your program doesn't continue until it has a connection. However, it could also mean that your program never gets out of this loop, so you'd need some way of terminating it - either manually or after a certain number of tries.
As you need your service to start in a reasonable time, this code can't go in the main initialisation. You have to arrange for your program to "start" successfully but not do any processing until this method has returned connected = true. You might achieve this by putting this code in a thread and then starting your actual application code on the "thread completed" event.
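A sketch of that arrangement (method names other than OnStart are placeholders for your own code):

using System.ServiceProcess;
using System.Threading;

public partial class MyService : ServiceBase
{
    private Thread _worker;

    protected override void OnStart(string[] args)
    {
        // Return quickly so the service control manager treats the start as
        // successful, and run the blocking connect-with-retry loop elsewhere.
        _worker = new Thread(() =>
        {
            WaitForDatabaseConnection();   // the retry loop shown above
            RunMainProcessing();           // real work starts only once connected
        });
        _worker.IsBackground = true;
        _worker.Start();
    }
}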
Not a direct answer, but some points you can look into:
A Windows service can be started automatically with a delay. You can check this question on SO for some information about it:
How to make Windows Service start as “Automatic (Delayed Start)”
Check this post How to: Code Service Dependencies
