I'm having trouble hosting multiple websites on different sub-domains using Service Fabric and OWIN.
Let's say I have four HTTP(S) servers on my Service Fabric cluster. Each of them is running on a different port, so at some point during their initialization they will respectively call:
Microsoft.Owin.Hosting.WebApp.Start("http://+:80/", UnsecureStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://+:443/", MainStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://+:4431/", ApiStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://+:4432/", ExtrasStartup);
This all works fine. All requests are successfully fulfilled. All four ports serve the startups they've been assigned in their respective stateless services, and the HTTPS ones make use of the same certificate as set from ServiceManifest.xml in proper Service Fabric fashion. Here's a similar single-server setup.
We always planned to use sub-domains instead of different ports. Now we have our domain and we're trying to do the following:
Microsoft.Owin.Hosting.WebApp.Start("http://example.com:80/", UnsecureStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://example.com:443/", MainStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://api.example.com:443/", ApiStartup);
Microsoft.Owin.Hosting.WebApp.Start("https://extras.example.com:443/", ExtrasStartup);
The code above does run. All four stateless services start and go green in Service Fabric Explorer. Yet every single request (both to http:80 and to https:443) is met with the same response:
Service Unavailable
HTTP Error 503. The service is unavailable.
Why does Service Fabric allow us to have multiple Owin servers, even on the same port, if it's not possible for me to select one based on hostname? Does anyone have any idea of how we can make this work?
Any help is greatly appreciated.
In my understanding, hostnames do not work very well with SF.
You have the following options as I see it:
1) You use different ports on your internal services and put a WAF in front of your SF Cluster and let that one handle SSL offloading, URL routing and NATing to your internal ports. This way you will only need 1 public IP.
2) You add more public IPs and let your public load balancer handle it. You will need 3 IP addresses for this, but since the first 5 IPs are free in Azure, this won't cost you anything extra.
You probably want to go with option 1 as it gives you easier certificate management, better security and more flexibility, but at the cost of $.
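A third possibility, sketched here as an untested idea rather than a confirmed fix: keep a single strong-wildcard registration (`https://+:443/`) so http.sys never has to match hostnames, and branch on the Host header inside one OWIN pipeline. This assumes `MainStartup`, `ApiStartup` and `ExtrasStartup` are `Action<IAppBuilder>` methods, as in the question:

```csharp
using System;
using Owin;                      // MapWhen extension
using Microsoft.Owin.Hosting;

public static class GatewayStartup
{
    public static void Configuration(IAppBuilder app)
    {
        // Route api.example.com requests to the API pipeline.
        app.MapWhen(
            ctx => ctx.Request.Host.Value.StartsWith("api.", StringComparison.OrdinalIgnoreCase),
            branch => ApiStartup(branch));

        // Route extras.example.com requests to the extras pipeline.
        app.MapWhen(
            ctx => ctx.Request.Host.Value.StartsWith("extras.", StringComparison.OrdinalIgnoreCase),
            branch => ExtrasStartup(branch));

        // Everything else falls through to the main site.
        MainStartup(app);
    }
}

// Hosted with a single wildcard registration:
// WebApp.Start("https://+:443/", GatewayStartup.Configuration);
```

All three sub-domains then share one port, one certificate and one public IP, at the price of running the pipelines inside a single stateless service.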
Related
This seems to say that it is always necessary; otherwise you may resolve an incorrect service, as there is no guarantee that services won't move around, etc.
The default asp.net core service template uses
UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
Is there a reason for this? When is it okay to not use the ServiceFabricIntegration middleware?
Edit: I see these are actually a flags enum, so likely you should always use UseUniqueServiceUrl: https://github.com/Azure/service-fabric-aspnetcore/blob/develop/src/Microsoft.ServiceFabric.AspNetCore/WebHostBuilderServiceFabricExtension.cs
Respectfully, I don't think you have read the docs well enough. It is well explained here:
Services that use a dynamically-assigned port should make use of this middleware.
Services that use a fixed unique port do not have this problem in a cooperative environment. A fixed unique port is typically used for externally-facing services that need a well-known port for client applications to connect to. For example, most Internet-facing web applications will use port 80 or 443 for web browser connections. In this case, the unique identifier should not be enabled.
So summarized: when using Kestrel or WebListener you can choose to use a dynamic port or a fixed port. See the sections Use WebListener/Kestrel with a static port and Use WebListener/Kestrel with a dynamic port in the mentioned link. When you opt to use a dynamic port use ServiceFabricIntegrationOptions.UseUniqueServiceUrl, otherwise use ServiceFabricIntegrationOptions.None as the parameter for the middleware.
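As a sketch of what that choice looks like in the listener setup (type names are from Microsoft.ServiceFabric.AspNetCore.Kestrel; `Startup` and the endpoint name are assumptions):

```csharp
// Inside a stateless service class.
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener(serviceContext =>
            new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
                new WebHostBuilder()
                    .UseKestrel()
                    // Fixed, well-known port declared in ServiceManifest.xml:
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    // For a dynamically assigned port, use this instead:
                    // .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.UseUniqueServiceUrl)
                    .UseStartup<Startup>()
                    .UseUrls(url)
                    .Build()))
    };
}
```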
Now, as for why you need this unique service URL middleware in the case of a dynamic port, there is a scenario that describes the possible problem:
If services use dynamically-assigned application ports, a service replica may coincidentally use the same IP:port endpoint of another service that was previously on the same physical or virtual machine. This can cause a client to mistakenly connect to the wrong service. This can happen if the following sequence of events occurs:
1) Service A listens on 10.0.0.1:30000 over HTTP.
2) Client resolves Service A and gets address 10.0.0.1:30000.
3) Service A moves to a different node.
4) Service B is placed on 10.0.0.1 and coincidentally uses the same port 30000.
5) Client attempts to connect to Service A with the cached address 10.0.0.1:30000.
6) Client is now successfully connected to Service B, not realizing it is connected to the wrong service.
This can cause bugs at random times that can be difficult to diagnose.
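The other half of the fix is on the client side: re-resolve on failure instead of trusting a cached address. A hedged sketch using `ServicePartitionResolver` (the `fabric:/` URI is illustrative, and this belongs inside an async method):

```csharp
// Resolve the service's current address via the naming service.
ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();

ResolvedServicePartition partition = await resolver.ResolveAsync(
    new Uri("fabric:/MyApp/ServiceA"),
    ServicePartitionKey.Singleton,
    CancellationToken.None);

string address = partition.GetEndpoint().Address;

// If a call to `address` fails (or 404s, thanks to the unique URL suffix),
// pass the previous result back in so the naming service returns fresh
// data rather than the cached entry:
partition = await resolver.ResolveAsync(partition, CancellationToken.None);
```

With UseUniqueServiceUrl enabled, the stale-address scenario above produces a 404 instead of a silently wrong answer, which is what triggers this re-resolve path.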
I need multiple clients that talk to a WCF service, and the WCF service must also be able to connect to any one of the clients.
So - it sounds like the server and the clients need to have both a WCF server and client built into each one.
Is this correct or is there some way to do this?
I was looking at NetPeerTcpBinding, but that is obsolete. To be fair I'm not sure if that is a valid solution either.
Background:
I plan to have a Windows service installed on hundreds of machines in our network with a WCF service and a WCF client built in.
I will have one Windows service installed on a server with a WCF service and a client built in.
I will have a Windows Forms application
I will have a database
The clients on the network will connect to the service running on the server in order to insert some information on the database.
The user will use the Windows Forms application to connect to the Windows service on the server and this Windows service will connect to the relevant client on the factory floor (to allow remote browsing of files and folders).
Hence I believe the machines on the floor and the server both require a WCF client and service built in.
The reason people are recommending wsDualHttpBinding is that it is a secure and interoperable binding designed for use with duplex service contracts, allowing both services and clients to send and receive messages.
The type of communication mentioned, 'duplex', has several variations; half and full duplex are the simplest.
Half Duplex: Works like a walkie-talkie, one person may speak at any given time.
Full Duplex: Like a phone, any person may speak at any given time.
Each will introduce a benefit and a problem, they also provide ways to build this communication more effectively based upon your needs.
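To make the duplex idea concrete, here is a minimal WCF duplex contract sketch. All type names are illustrative, not taken from the question:

```csharp
using System.ServiceModel;

// The service contract names its callback contract, which WCF uses
// to call back into the connected client over the same channel.
[ServiceContract(CallbackContract = typeof(IAgentCallback))]
public interface IControlCenter
{
    [OperationContract]
    void Register(string machineName);
}

public interface IAgentCallback
{
    [OperationContract(IsOneWay = true)]
    void BrowseFolder(string path);   // the server calls this on the client
}

public class ControlCenter : IControlCenter
{
    public void Register(string machineName)
    {
        // Capture the client's callback channel so the service
        // can call the client later (e.g. to browse its files).
        IAgentCallback callback =
            OperationContext.Current.GetCallbackChannel<IAgentCallback>();
        callback.BrowseFolder(@"C:\");
    }
}
```

This is the pattern that lets both sides send and receive without each machine hosting two separate WCF services.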
I'm slightly confused, but I'll attempt to clarify.
You have an assortment of approaches that may work here. A Windows Communication Foundation (WCF) service requires the following:
Address
Binding
Contract
Those are essentially the "ABCs" of WCF. Putting those together, the service side will contain:
Host
Service
Client
The host houses the service, which the client consumes so that its service methods perform a desired task. For example, Client-1 might go through the Internet (HTTP, HTTPS, etc.) to reach the Host, which then has the service perform those tasks. Client-n, on the other hand, might consume the service locally, talking over TCP, for example.
The easiest way to remember: One service can be consumed by however many clients require those methods to perform a task. You can create very complex models using a service-oriented architecture (SOA).
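In code, the ABCs come together when the host adds an endpoint; a minimal sketch with illustrative names:

```csharp
using System.ServiceModel;

// One endpoint = one Address + one Binding + one Contract.
var host = new ServiceHost(typeof(CalculatorService));
host.AddServiceEndpoint(
    typeof(ICalculator),                   // Contract
    new BasicHttpBinding(),                // Binding
    "http://localhost:8000/calculator");   // Address
host.Open();
```

Any number of clients can then consume that single endpoint, which is the one-service-many-clients model described above.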
All WCF is, is a means to connect your application to a host or centralized location you may not have access to.
The Client communicates through a Service to the Host, which performs a series of tasks. WCF can talk over an array of protocols. Hopefully this provides a better understanding of how WCF is structured.
There are a lot of tutorials and even posts to get you started, as well as some excellent books such as "WCF Step by Step".
Essentially, you're looking for an asynchronous full-duplex connection, or a synchronous full-duplex service. As mentioned above, your task is, in essence, exactly what a service is for.
The question: How does this work best?
It will boil down to your design. There are limitations and structures that you will need to adhere to in order to truly optimize it for your goal.
Such obstacles may be:
Server Load
Communication Path
Security
Multiple Clients Altering UI / Same Data
Etc.
The list continues and continues. I'd really look up tutorials or a few books on WCF. Here are a few:
WCF Step by Step
WCF Multi-Tier Development
WCF Service Development
They will help you work with the service structure to adhere to your desired goal.
Remember the "ABCs" for the most success with WCF.
Use wsDualHttpBinding if you want your service to communicate with your clients.
Read WS Dual HTTP.
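A hedged sketch of the client side with this binding (type names are illustrative; the relevant pieces are `DuplexChannelFactory` and `ClientBaseAddress`):

```csharp
using System;
using System.ServiceModel;

// The client passes an InstanceContext wrapping its callback implementation.
var callback = new AgentCallback();        // implements the callback contract
var context = new InstanceContext(callback);

var binding = new WSDualHttpBinding();
// Where the service delivers callbacks to this client:
binding.ClientBaseAddress = new Uri("http://localhost:8001/client");

var factory = new DuplexChannelFactory<IControlCenter>(
    context, binding, new EndpointAddress("http://server:8000/control"));

IControlCenter proxy = factory.CreateChannel();
proxy.Register(Environment.MachineName);
```

Note that the client must be reachable at its ClientBaseAddress, which can be awkward through firewalls and NAT; that is one reason netTcpBinding is often suggested instead.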
You might want to try out creating a WCF service using netTcpBinding. It will work for your requirements. You can use the article How to: Use netTcpBinding with Windows Authentication and Transport Security in WCF Calling from Windows Forms as a start:
Also, there are many examples included within the WCF Samples package which you can use.
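As a sketch of the kind of setup the linked article walks through (service and contract names here are assumptions): netTcpBinding with transport security and Windows credentials:

```csharp
using System;
using System.ServiceModel;

// Transport-level security with Windows authentication.
var binding = new NetTcpBinding(SecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

var host = new ServiceHost(typeof(AgentService), new Uri("net.tcp://localhost:9000"));
host.AddServiceEndpoint(typeof(IAgentService), binding, "agent");
host.Open();
```

netTcpBinding also supports duplex contracts natively over the single TCP connection, so the clients behind the factory-floor firewall do not need an inbound HTTP listener.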
I am very, very new to WCF; in fact, this is my first shot at it. I have a service that works just fine on my local system, and I have the following requirement: I need to host the service in a clustered server environment. I will have a group of, let's say, 3 servers. How should I go about hosting the service? I want to host it in IIS. Do I host the service on all three servers? If so, how would they fail over in case the primary server is down for some reason? Can I have one single endpoint address pointing to the active server?
Thanks
If you write the code without using session state you can use a load balancer in front of the 3 servers to handle the clustering. Be sure to write the code in a way that it does not matter what server you are on when the call is executed, since it could be different one call to the next.
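One way to keep that promise in the service itself is per-call instancing, so no state can accidentally survive between requests (names here are illustrative):

```csharp
using System.ServiceModel;

// A fresh instance is created for every call and discarded afterwards,
// so any server behind the load balancer can handle the next request.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService : IOrderService
{
    public void Insert(OrderInfo order)
    {
        // Everything goes through shared storage (the database);
        // nothing is kept in memory between calls.
    }
}
```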
I have a need to install an "agent" (I'm thinking it will run as a Windows Service) on many servers in my network. This agent will host a WCF service with several operations to perform specific tasks on the server. This I can handle.
The second part is to build a control center where I can browse which servers are available (the agents will "register" themselves with my central database). Most of the servers will probably be running the most recent version of my service, but I'm sure there will be some servers which fail to update properly and may run an outdated version for some time (if I get it right, the service contract won't change much, so this shouldn't be a big deal).
Most of my WCF development has been many clients to a single WCF service; now I'm doing the reverse. How should I manage all of these endpoints in my control center app? In the past, I've always had a single endpoint mapped in my App.config. What would some code look like that builds a WCF endpoint on the fly, based on, say, a set of string ip; int port; variables I read from my database?
This article has some code examples on how to create an end point on the fly:
http://en.csharp-online.net/WCF_Essentials%E2%80%94Programmatic_Endpoint_Configuration
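For the concrete question (string ip; int port; read from the database), a minimal sketch might look like this; the contract name and net.tcp scheme are assumptions:

```csharp
using System.ServiceModel;

// Values read from the agents table in the database.
string ip = "10.0.0.42";
int port = 9000;

// Build the endpoint address and channel entirely in code,
// with nothing mapped in App.config.
var address = new EndpointAddress(string.Format("net.tcp://{0}:{1}/agent", ip, port));
var factory = new ChannelFactory<IAgentService>(new NetTcpBinding(), address);
IAgentService proxy = factory.CreateChannel();
```

The control center can keep a ChannelFactory per binding and only vary the EndpointAddress per agent, since factories are comparatively expensive to create.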
WCF4 has a Discovery API built-in that might just do everything you need.
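For example, ad-hoc discovery over UDP looks roughly like this (the contract name is assumed; types live in System.ServiceModel.Discovery):

```csharp
using System;
using System.ServiceModel.Discovery;

// Probe the local network for services implementing IAgentService.
var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
FindResponse response = discoveryClient.Find(new FindCriteria(typeof(IAgentService)));

foreach (EndpointDiscoveryMetadata endpoint in response.Endpoints)
{
    Console.WriteLine(endpoint.Address);   // each discovered agent's endpoint
}
```

This could replace or supplement the database registration step, at least on networks where UDP multicast is allowed.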
Is there anyway to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster.
Specifically I am using the TCP/IP binding for speed, but on the rare occurrence that the machine is not available I would like to redirect traffic to the failover server. Not too bothered about losing messages. I'd just prefer not to write the code to handle re-routing.
You need to use a layer 4 load balancer in front of the two endpoints. It's probably best to stick with a dedicated piece of hardware.
Without trying to sound too vague but I think Windows Network Load Balancing (NLB) should handle this for you.
I haven't done it yet with WCF, but I plan to have a local DNS entry pointing to our Network Load Balancing (NLB) virtual IP address, which will direct all traffic to one of our servers hosting services within IIS. I have used NLB for this exact scenario in the past for web sites and see no reason why it would not work well with WCF.
The beauty of it is that you can take servers in and out of the virtual cluster at will and NLB takes care of all the ugly re-directing to an available node. It also comes with a great price tag: $FREE with your Windows Server license.
We've had good luck with BigIP as a solution, though it's not cheap or easy to set up.
One nice feature is it allows you to set up your SSL certificate (and backdoor to the CA) at the load balancer's common endpoint. Then you can use protocols to transfer the requests back to the WCF servers so the entire transmission is encrypted.