I'm working on two web services. Call the first one ModelService and the second one ConfigurationService. My goal is to have multiple servers running the ConfigurationService and one central server running the ModelService.
So far, what I have working is that ModelService has a service reference added which points to http://localhost:4958/ConfigurationService.svc, and I access it as:
ConfigurationService.ConfigurationServiceClient svc = new ConfigurationService.ConfigurationServiceClient();
ConfigurationService.WrappedConfiguration config = svc.GetConfiguration();
I know there are constructors that take parameters like string endpointConfigurationName, string remoteAddress, which I'm guessing are how I will point to instances of the ConfigurationService on different servers.
What I can't get to work/don't understand is this: what do I add as a service reference to ModelService so that it can create ConfigurationService client objects for each of the remote servers?
And how do I configure a ConfigurationService on a server so it knows what its endpoint is?
You can add the service reference from any one of your servers running ConfigurationService. The important part is that you keep a list of those servers' URLs somewhere in ModelService, so that you can create a client for any of the "configuration servers". The constructor you mentioned lets you do exactly that.
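A minimal sketch of that pattern, assuming the generated client exposes the usual (endpointConfigurationName, remoteAddress) constructor and that the endpoint configuration in app.config is named "BasicHttpBinding_IConfigurationService" (both names are assumptions; check your generated config):

```csharp
// URLs of the servers running ConfigurationService; in practice these
// would come from configuration rather than being hard-coded.
var serverUrls = new[]
{
    "http://server1:4958/ConfigurationService.svc",
    "http://server2:4958/ConfigurationService.svc"
};

foreach (var url in serverUrls)
{
    // Reuse the single generated endpoint configuration, but point it
    // at a different remote address for each server.
    var svc = new ConfigurationService.ConfigurationServiceClient(
        "BasicHttpBinding_IConfigurationService", url);
    ConfigurationService.WrappedConfiguration config = svc.GetConfiguration();
    // ... use config ...
    svc.Close();
}
```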
I was wondering if it is possible to run multiple MassTransit or RabbitMQ instances on the same server. Basically, we have a .NET app using MassTransit on top of RabbitMQ. Unfortunately, a lot of our clients run both live and test environments on the same server, so in order to deploy to the real world we need either multiple instances or a way of segregating messages between live and test.
A few ideas I've had:
1) Do something like: https://lazareski.com/multiple-rabbitmq-instances-on-1-machine/
The problem here is that it relies on a lot of configuration at clients' sites.
2) Include a header in all messages and have each consumer check for the presence of the correct header before consuming the message (e.g. the header says 'live' or 'test'). Obviously this means every consumer receives all messages whether they are meant for it or not, which is far from ideal.
Ideally I would like to do something with minimal setup at a client's site, like a virtual sub-instance or directory for each environment.
There are two ways to work around this issue.
The first way is the most obvious - you need to use virtual hosts.
From the documentation:
Virtual hosts provide logical grouping and separation of resources.
Separation of physical resources is not a goal of virtual hosts and
should be considered an implementation detail.
Create two virtual hosts in your RMQ instance, called test and prod; the only thing you need to do on the MassTransit side is change the RMQ connection string:
Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost/test"), host =>
    {
        host.Username("username");
        host.Password("password");
    });
});
So you would use rabbitmq://localhost/prod for production. Naturally, those values should not be hard-coded but should come from configuration.
I believe that virtual hosts cover your needs entirely.
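Creating the two virtual hosts is a one-time setup step on the broker. With rabbitmqctl it looks like the following sketch (the username is an assumption; substitute the account your app connects with):

```shell
# Create the two virtual hosts
rabbitmqctl add_vhost test
rabbitmqctl add_vhost prod

# Grant the application user full configure/write/read permissions on both
rabbitmqctl set_permissions -p test username ".*" ".*" ".*"
rabbitmqctl set_permissions -p prod username ".*" ".*" ".*"
```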
If you really need to run the test environment completely separated, you can just run it in a Docker container. This option will give you the ability to kill the whole thing and start from scratch when you need a clean environment. You can easily remap default ports to avoid conflicts with the production instance:
docker run -d --name test-rmq -p 5673:5672 -p 8080:15672 rabbitmq:3-management
If you run the command above, the new instance will be accessible via AMQP on localhost:5673, and the management plugin will be on http://localhost:8080.
I have an SF application type consisting of two service types – stateless WebApi Gateway service type and stateless Worker service type. I am creating one application instance with default Gateway service instance. The Gateway service instance creates Worker service instances dynamically on demand by using code like this (the client variable is the System.Fabric.FabricClient instance):
var serviceDescription = new StatefulServiceDescription()
{
    ApplicationName = new Uri("fabric:/Gateway"),
    ServiceName = new Uri("fabric:/Gateway/Worker-" + SomeUniqueWorkerId),
    ServiceTypeName = "WorkerType",
    HasPersistedState = true,
    PartitionSchemeDescription = new UniformInt64RangePartitionSchemeDescription(),
    MinReplicaSetSize = 1,
    TargetReplicaSetSize = 1
};
await client.ServiceManager.CreateServiceAsync(serviceDescription);
When SF places two or more instances of the Worker service type onto one node, they all share the same process (i.e. Worker.exe). This is problematic because the different Worker service instances need to dynamically load different versions of assemblies from different file shares. Therefore, my question is:
Is it possible to force SF to host multiple service instances of the same type on one node in separate processes?
(I think that guest executables work that way.)
You can now specify the ServicePackageActivationMode when creating services.
With the default mode (ServicePackageActivationMode set to "SharedProcess"), all of these service objects run in the same process. By specifying ExclusiveProcess, each service object ends up created in its own process. Say you had these two stateless services deployed on a simple 5-node cluster:
With the Shared (default) mode, you'd get 5 processes, one per node, each with 2 service objects running inside it. With the Exclusive mode, you get 10 processes, 2 per node, each with 1 service object running inside it.
New-ServiceFabricService -Stateless -PartitionSchemeSingleton -ApplicationName "fabric:/App" -ServiceName "fabric:/App/svc1" -ServiceTypeName "T1" -InstanceCount -1 -ServicePackageActivationMode ExclusiveProcess
New-ServiceFabricService -Stateless -PartitionSchemeSingleton -ApplicationName "fabric:/App" -ServiceName "fabric:/App/svc2" -ServiceTypeName "T2" -InstanceCount -1 -ServicePackageActivationMode ExclusiveProcess
In your example above, all you need to do is add ServicePackageActivationMode = ServicePackageActivationMode.ExclusiveProcess to your ServiceDescription.
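Applied to the ServiceDescription from the question, that would look like the following sketch (Service Fabric SDK types assumed, as in the original code):

```csharp
var serviceDescription = new StatefulServiceDescription()
{
    ApplicationName = new Uri("fabric:/Gateway"),
    ServiceName = new Uri("fabric:/Gateway/Worker-" + SomeUniqueWorkerId),
    ServiceTypeName = "WorkerType",
    HasPersistedState = true,
    PartitionSchemeDescription = new UniformInt64RangePartitionSchemeDescription(),
    MinReplicaSetSize = 1,
    TargetReplicaSetSize = 1,
    // Each Worker service instance gets its own process on the node.
    ServicePackageActivationMode = ServicePackageActivationMode.ExclusiveProcess
};
await client.ServiceManager.CreateServiceAsync(serviceDescription);
```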
There is a good piece of docs that goes into more detail about each model and how to choose which is right for a given situation. Most commonly I see ExclusiveProcess used to avoid sharing statics that can't be factored out of the service code and owned at the host-process layer instead.
This is by design; today it is not possible to run multiple service instances of the same type in separate processes on the same node. We are working on making this an option, however.
For now, if you need process-level isolation, you have to use separate application instances. In your scenario, you can do this by separating the Web service and the Worker service into individual application types.
Using WCF, .NET 4.5, and Visual Studio 2015, I want to use per-session instancing, not singleton. The services provided are to be full duplex, over net.tcp.
Suppose I have two machines, A & B...
B, as a client, connects to a "service" provided as a WCF service on the same machine B and starts talking to it; call it object "X". It ALSO connects to another instance of the same service; call it object "Y".
A, as a client, wants to connect to and use the exact same objects B is talking to, objects "X" and "Y", except now it's remote-remote, not local-remote.
"X" and "Y" are actually video servers, and both have "state".
Can I do this? How, when I’m a client, do I specify WHICH service instance I want to connect to?
Obviously, on machine "B", I could kludge this by having the services just be front-ends with no "state", which communicate with some processes running on "B", but that would require I write a bunch of interprocess code, which I hate.
Machine B is expected to be running 100's of these "video server" instances, each one being talked to by a local master (singleton) service, AND being talked to by end-user machines.
I realize this question is a bit generic, but it also addresses a question I could not find asked, or answered, on the Internets.
I just thought of one possible, but kludge-y, solution: since the master service is a singleton, when service instance "X" is created by the end user, it could connect to the singleton master service through a proxy. Then the singleton can talk back to instance "X" over a callback channel. Yeah, that would work! Messy, but possible.
I'd still like to know whether end user A and end user B can both talk to the same (non-singleton) service instance on machine C through some funky channel manipulation or something. As I understand the rules of WCF, this simply isn't possible. Perhaps if you're hosting the service yourself instead of in IIS, but even then I don't think it's possible.
I've faced the same problem and solved it by creating two service references, one for the local service and one for the remote. Let's call them LocalServiceClient and RemoteServiceClient.
In a class, create a property called Client (or whatever you like to call it):
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
Okay, this is for only one of them. Now create the other, and select which one to use with a compiler flag:
#if DEBUG
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
#else
public RemoteServiceClient Client {
    get {
        return new RemoteServiceClient();
    }
}
#endif
Instantiate any instances of your Client using the var keyword, so they are implicitly typed, or just use Client directly:
var client = Client;
client.DoSomething...
//or
Client.DoSomething...
This way, when you are working locally it will connect to the local service, and in the Release configuration (make sure you are on Release when publishing) it will compile against the remote one. Make sure you have the exact same signature/code for both services on the WCF side, though.
There are also ways to do this dynamically in code, or via web.config; those would also work, but they are usually overkill. You probably need to connect to the local service while debugging and the remote one in production, and this approach gives you exactly that.
I am deploying a client app to a mobile laptop that is configured to use one of two network servers. The network servers are identical but with different IP addresses as each is in a different office.
When the client app first starts, it needs to determine, only once, which office it is in and therefore which data service to connect to. So, using the client machine's IP address, I wish to do something like this:
internal TYPE??? dataservice = ResolveDataService();

NovaDataServiceClient ResolveDataService()
{
    if (localip == xxx.xxx.xxx.xxx)
    {
        return new DataService.NovaDataServiceClient();
    }
    else
    {
        return new LibraryWebService.NovaDataServiceClient();
    }
}
Furthermore, since it only has to be done once, a static constructor would be preferred. But the real problem is that the namespaces "DataService" and "LibraryWebService" were given to Add Service Reference in the client project, so in the code above
internal TYPE??? dataservice
the type is not known until ResolveDataService is called.
How is this done correctly? Thanks
If these two services are exactly the same and differ only by IP address, the right thing to do is to have only one service reference and set the endpoint when you create the client. The easiest way in your case would probably be to add a second endpoint configuration with a different name attribute to the app.config and supply that name in the client's constructor.
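A sketch of that approach, assuming app.config contains two client endpoints whose name attributes are "NovaOfficeA" and "NovaOfficeB" (both names, and the IP check, are placeholders for illustration):

```csharp
internal static class DataServiceFactory
{
    // Resolved exactly once, when the type is first used.
    internal static readonly NovaDataServiceClient DataService = Resolve();

    static NovaDataServiceClient Resolve()
    {
        // Pick the endpoint configuration name by office; the generated
        // client has a constructor taking an endpointConfigurationName.
        string endpointName = IsOfficeA() ? "NovaOfficeA" : "NovaOfficeB";
        return new NovaDataServiceClient(endpointName);
    }

    static bool IsOfficeA()
    {
        // TODO: compare the local IP address against the office A subnet.
        return true;
    }
}
```

With a single service reference, both offices share one generated client type, so the "TYPE???" problem disappears.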
We are currently working on an API for an existing system.
It basically wraps some web-requests as an easy-to-use library that 3rd party companies should be able to use with our product.
As part of the API, there is an event mechanism where the server can call back to the client via a constantly-running socket connection.
To minimize load on the server, we want to have only one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if multiple applications use the API.
So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem?
A couple options we thought of:
Write an out-of-process COM object (I don't know if that works in .NET)
Include a second exe file that would be required for events; it would have to single-instance itself and open a named pipe or something to communicate across multiple processes
Extract this exe file from an embedded resource and execute it
None of those really seem ideal.
Any better ideas?
Do you mean something like Net.TCP port sharing?
You could fix the client-side port when opening your socket, say to 45534. Since a given port can be opened by only one process, only one process at a time would be able to open a socket connection to the server.
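As a sketch, binding the outgoing TcpClient to a fixed local port looks like this (the server host and port are placeholders); a second process on the same machine gets a SocketException when it tries to bind the same port:

```csharp
using System.Net;
using System.Net.Sockets;

// Bind the outgoing connection to a fixed local port. The OS lets only
// one process bind a given port, so only one process per machine can
// hold this connection at a time.
var localEndpoint = new IPEndPoint(IPAddress.Any, 45534);
try
{
    var client = new TcpClient(localEndpoint);
    client.Connect("server.example.com", 12345); // placeholder server
    // This process now owns the machine-wide connection.
}
catch (SocketException)
{
    // Another process on this machine already holds port 45534;
    // route events through it instead of connecting directly.
}
```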
Well, there are many ways to solve this, as expressed in the other answers and comments, but perhaps the simplest is to keep a global status store in a place accessible to all users of the machine (several users might be logged in), where you record WHO currently has the right to hold the connection open, like a "lock". That store can be a field in a local or intranet database, a simple file, or whatever. That way you don't need to build or distribute extra binaries.
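One minimal way to implement such a machine-wide "lock" with a simple file is to open it exclusively; the path here is an assumption, and any location writable by all users of the machine would do:

```csharp
using System.IO;

// Whoever manages to open the lock file exclusively owns the single
// server connection; everyone else must route through that process.
FileStream machineLock = null;
try
{
    machineLock = new FileStream(
        @"C:\ProgramData\MyApi\connection.lock", // shared, machine-wide path (assumed)
        FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    // We hold the lock: open the socket to the server here.
    // Keep machineLock open for the lifetime of the connection.
}
catch (IOException)
{
    // Another process holds the lock: ask it to forward events instead.
}
```

The lock is released automatically when the owning process exits and the file handle is closed, so a crashed owner does not leave the machine permanently locked.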
When a client connects to your server, you create a new thread to handle it (not a process). You can store its IP address in a static dictionary (shared between all threads).
Something like:
static Dictionary<string, TcpClient> clients = new Dictionary<string, TcpClient>();

// This method is executed in a thread
void ProcessRequest(TcpClient client)
{
    string ip = null;
    // TODO: get client IP address
    lock (clients)
    {
        // ...
        if (clients.ContainsKey(ip))
        {
            // TODO: Deny connection
            return;
        }
        else
        {
            clients.Add(ip, client);
        }
    }
    // TODO: Answer the client
}
// TODO: Delete client from list on disconnection
The best solution we've come up with is to create a windows service that opens up a named pipe to manage multiple client processes through one socket connection to the server.
Then our API will be able to detect whether the service is running/installed and fall back to creating its own connection for the client otherwise.
3rd parties can decide if they want to bundle the service with their product or not, but core applications from our system will have it installed.
I will mark this as the answer in a few days if no one has a better option. I was hoping there was a way to execute our assembly as a new process, but none of the approaches for doing that seem very reliable.