In our ServiceStack (v3)-based API, we have some services that are for internal use only, so we've put a [Restrict(InternalOnly = true)] attribute on all of our internal request DTOs.
The problem is that we use load balancing, and the restricted services become publicly accessible to everyone, because the IP address calling the API is always the load balancer's, which is an internal IP.
Is there any way to circumvent this, so that the internal services are restricted to internal IPs EXCEPT the load balancer's IP?
I haven't seen a built-in way (see the [Restrict] tests) to restrict access based on specific IPs. However, you can trivially filter the requests yourself using a custom attribute:
public class AllowLocalExcludingLBAttribute : RequestFilterAttribute
{
    public override void Execute(IHttpRequest req, IHttpResponse res, object requestDto)
    {
        // If the request is not local, or it comes from the load balancer, then throw an exception
        if (!req.IsLocal || req.RemoteIp == "10.0.0.1")
            throw new HttpError(System.Net.HttpStatusCode.Forbidden, "403", "Service can only be accessed internally");
    }
}
Then you simply add [AllowLocalExcludingLB] on your services or action methods where you would otherwise have used the [Restrict] attribute, or use it in conjunction with other restrictions. Replace 10.0.0.1 with your load balancer's IP.
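For example, here is a minimal sketch of applying it (the InternalReport DTO and service below are hypothetical, using the v3 API):

[Route("/internal/report")]
public class InternalReport { }

[AllowLocalExcludingLB] // used here instead of [Restrict(InternalOnly = true)]
public class InternalReportService : Service
{
    public object Any(InternalReport request)
    {
        // Only reachable from internal IPs, excluding the load balancer.
        return new { Status = "ok" };
    }
}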
I have an application where we communicate with hundreds of HTTPS endpoints. The application is a proxy of sorts.
When testing with Polly, I've noticed that if one endpoint, say api.endpoint1.com, fails, then calls to api.endpoint2.com and api.endpoint3.com are also blocked because the circuit is open.
This makes sense as I've only defined one policy, but what is the recommended approach to handling this scenario so that calls to unrelated endpoints are not blocked due to another having performance issues?
Do I create a collection of policies, one for each endpoint, or is there a way to supply a context key of sorts (i.e., the hostname) to scope the failures to a given host endpoint?
I've reviewed Polly's docs regarding context keys, and it appears these are a way to exchange data back and forth, not what I'm looking for here.
var policy = Policy
    .Handle<TimeoutException>()
    .CircuitBreaker(1, TimeSpan.FromSeconds(1));

// dynamic, large list of endpoints.
var message = new HttpRequestMessage(HttpMethod.Post, "https://api.endpoint1.com")
{
    Content = new StringContent("some JSON data here", Encoding.UTF8, "application/json")
};

policy.Execute(() => HTTPClientWrapper.PostAsync(message));
Yes, your best bet is to create a separate policy per endpoint. This is better than doing it per host because an endpoint may respond slowly for a reason specific to that endpoint (e.g., a slow stored procedure).
I've used a Dictionary<string, Policy> with the endpoint URL as the key.
if (!_circuitBreakerPolicies.ContainsKey(url))
{
    CircuitBreakerPolicy policy = Policy.Handle<Exception>().AdvancedCircuitBreakerAsync(
        onBreak: ...
    );
    _circuitBreakerPolicies.Add(url, policy);
}

await _circuitBreakerPolicies[url].ExecuteAsync(async () => ... );
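If it helps, here is a fuller sketch of the same per-endpoint idea using a thread-safe ConcurrentDictionary. This is only an illustration, not the original answer's code: it assumes Polly v7-style async generic policies, and the PerEndpointCircuitBreaker class name, exception choices and threshold values are made up.

using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

public class PerEndpointCircuitBreaker
{
    // One circuit breaker per endpoint URL; GetOrAdd keeps the lookup thread-safe.
    private readonly ConcurrentDictionary<string, AsyncCircuitBreakerPolicy<HttpResponseMessage>> _policies =
        new ConcurrentDictionary<string, AsyncCircuitBreakerPolicy<HttpResponseMessage>>();

    private readonly HttpClient _client = new HttpClient();

    public Task<HttpResponseMessage> PostAsync(string url, HttpContent content)
    {
        var policy = _policies.GetOrAdd(url, _ =>
            Policy<HttpResponseMessage>
                .Handle<HttpRequestException>()
                .Or<TimeoutException>()
                .CircuitBreakerAsync(
                    handledEventsAllowedBeforeBreaking: 2,        // illustrative value
                    durationOfBreak: TimeSpan.FromSeconds(30)));  // illustrative value

        // Only failures against this particular URL count towards this breaker,
        // so a broken endpoint1 cannot open the circuit for endpoint2.
        return policy.ExecuteAsync(() => _client.PostAsync(url, content));
    }
}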
Here is an alternative solution which does not maintain a collection of policies (either via an IDictionary or via an IConcurrentPolicyRegistry); rather, it takes advantage of named typed clients. (Yes, you have read correctly: named and typed HttpClients.)
The named and typed clients
Most probably you have heard of (or even used) named or typed clients. But I'm certain that you haven't used named and typed clients. It is a less-documented feature of the HttpClientFactory + HttpClient combo.
If you look at the different overloads of the AddHttpClient extension method then you can spot this one:
public static IHttpClientBuilder AddHttpClient<TClient,TImplementation>
(this IServiceCollection services, string name, Action<HttpClient> configureClient)
where TClient : class where TImplementation : class, TClient;
It allows us to register a typed client and give it a logical name. But how can we get the proper instance? That's where ITypedHttpClientFactory comes into the picture: it allows us to create a typed client from a named client. Wait, what??? I hope this sentence will make sense by the end of this post. :)
The typed client
For the sake of simplicity let me use this typed client as an example:
public interface IResilientClient
{
    Task GetAsync();
}

public class ResilientClient : IResilientClient
{
    private readonly HttpClient client;

    public ResilientClient(HttpClient client)
    {
        this.client = client;
    }

    public Task GetAsync()
    {
        // TODO: implement it properly
        return Task.CompletedTask;
    }
}
The named and typed clients registration
Let's suppose you have a list of downstream system URLs (urls). Then you can register multiple typed client instances with different unique names and base URLs:
foreach (string url in urls)
{
    builder.Services
        .AddHttpClient<IResilientClient, ResilientClient>(url,
            client => client.BaseAddress = new Uri(url))
        .AddPolicyHandler(GetCircuitBreakerPolicy());
}
Here I have used the URL as the unique name.
So, we can get the appropriate instance based on the downstream URL.
The policy definition
private IAsyncPolicy<HttpResponseMessage> GetCircuitBreakerPolicy()
    => Policy<HttpResponseMessage>
        .Handle<TimeoutException>()
        .CircuitBreakerAsync(1, TimeSpan.FromSeconds(1));
I have modified the policy to support async: .CircuitBreakerAsync
I've also amended it to be suitable for AddPolicyHandler: Policy<HttpResponseMessage>
It is defined as a function so that each registered named typed client gets its own Circuit Breaker instance
The usage
This is a bit clumsy, but I think it is okay. So, wherever you want to use one of the named typed clients you have to inject two interfaces:
IHttpClientFactory: To be able to create a named HttpClient
ITypedHttpClientFactory<ResilientClient>: To be able to create a typed client from the named HttpClient
public XYZService(
    IHttpClientFactory namedClientFactory,
    ITypedHttpClientFactory<ResilientClient> namedTypedClientFactory)
{
    var namedClient = namedClientFactory.CreateClient(xyzUrl);
    var namedTypedClient = namedTypedClientFactory.CreateClient(namedClient);
}
Please note that you have to use the ResilientClient concrete class as the type parameter, not the interface IResilientClient.
If you used the interface instead, you would receive the following runtime error:
InvalidOperationException: A suitable constructor for type 'IResilientClient' could not be located. Ensure the type is concrete and all parameters of a public constructor are either registered as services or passed as arguments. Also ensure no extraneous arguments are provided.
Summary
With the named and typed client feature of AddHttpClient we can register multiple instances of the same typed client
With IHttpClientFactory we can retrieve a registered named client, which has the proper BaseAddress and is decorated with a Circuit Breaker
With ITypedHttpClientFactory we can convert the named client into a typed client, hiding the low-level API usage
Related sample application's github repository
I'm trying to implement HTTPS on selected pages of my site. Using the RequireHttps attribute works, but it causes problems when testing because we don't have a cert installed locally.
The solution I'm looking for needs to ignore localhost and one test server, while working on our second test server where we do have a cert in place.
Some further background on this: the aim is to move the site to HTTPS gradually. It's an ecommerce site, so obviously portions are already secure, and I know that for many reasons moving the entire site to HTTPS is a good thing. I also know that once you move from page A to page B, where B is secure, it won't go back to HTTP when you return to A; that's fine.
I want to move the site in stages just in case there are problems with things like mismatched content, site maps, SEO, google ranking etc.
Among the various solutions I have tried, I've implemented a class derived from the RequireHttpsAttribute as follows:
public class CustomRequireHttps : RequireHttpsAttribute
{
    protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
    {
        if (filterContext.HttpContext.Request.Url != null
            && !String.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase)
            && !String.Equals(filterContext.HttpContext.Request.HttpMethod, "HEAD", StringComparison.OrdinalIgnoreCase)
            && !filterContext.HttpContext.Request.Url.Host.Contains("localhost")
            && !filterContext.HttpContext.Request.Url.Host.Contains("testing"))
        {
            base.HandleNonHttpsRequest(filterContext);
        }
    }
}
I have applied this attribute to one page, but it hasn't worked as intended: it either applies HTTPS to all pages on the site or doesn't work at all.
I have also tried this solution, which works, but only on localhost and not on the two test servers:
#if !DEBUG
[RequireHttps]
#endif
Then I tried overriding the OnAuthorization method like so:
public override void OnAuthorization(AuthorizationContext filterContext)
{
    if (filterContext == null)
    {
        throw new ArgumentNullException("filterContext");
    }

    if (filterContext.HttpContext != null && filterContext.HttpContext.Request.IsLocal)
    {
        return;
    }

    base.OnAuthorization(filterContext);
}
It worked locally, but once I got it onto the server with the test cert, suddenly every page was HTTPS, which I do not understand since I've only used this derived attribute on one page.
So, what I'm looking to achieve is to implement HTTPS on a select number of pages on my site. This HTTPS requirement needs to be ignored on localhost and the first test server, but it must NOT be ignored on the second test server, which has a cert.
So far it either doesn't work at all or is applied to every page on the site.
However, and this is the kicker, if I use the RequireHttps attribute it works perfectly on the second test server but causes problems on all servers without a cert. By 'works perfectly' I mean it implements HTTPS only on the pages where I've used that attribute and does not suddenly switch all pages to secure.
Any ideas what I'm doing wrong here?
There can be a lot going on here. For example, when your links are relative, once a switch is made to HTTPS all subsequent pages stay on HTTPS (not applying RequireHttps doesn't switch back to HTTP). From a security standpoint, you should serve all pages over HTTPS when you need it for a subset of pages (otherwise you might share secure cookies / login tokens over unencrypted HTTP). So your attribute probably is being applied, and all subsequent requests are served over SSL.
Secondly, testing the request URI for localhost will serve the page over HTTP on your second server. My suggestion for this problem is to create a switch in your web.config that says whether pages should be served over HTTPS, and check that switch in your global FilterConfig:
public static class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        var useSsl = Convert.ToBoolean(ConfigurationManager.AppSettings["useSsl"]);
        if (useSsl)
        {
            filters.Add(new RequireHttpsAttribute());
        }
    }
}
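The switch itself is then just an appSettings entry in web.config (the key name useSsl here is only an example and must match whatever your code reads); set it to false on localhost and the first test server, and true on the server with the cert:

<appSettings>
  <add key="useSsl" value="true" />
</appSettings>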
I am developing a single-tenant web application that will be deployed in client data centers, and for security reasons we would like to disable metadata exchange on the application's WCF services. Is it possible to do this programmatically within our service application, or through another mechanism besides the web.config? We want to prevent more technically minded clients from going into the web.config and turning metadata exchange back on.
You can disable the metadata exchange programmatically by setting the HttpGetEnabled/HttpsGetEnabled to false.
First, create a derived host from ServiceHost.
public class DerivedHost : ServiceHost
{
    public DerivedHost(Type t, params Uri[] baseAddresses)
        : base(t, baseAddresses)
    {
        DisableMetadataExchange();
    }

    private void DisableMetadataExchange()
    {
        var metadata = Description.Behaviors.Find<ServiceMetadataBehavior>();
        if (metadata != null)
        {
            // This code will disable metadata exchange
            metadata.HttpGetEnabled = false;
            metadata.HttpsGetEnabled = false;
        }
    }
}
Second, create a derived factory from ServiceHostFactory.
public class DerivedFactory : ServiceHostFactory
{
    public override ServiceHost CreateServiceHost(Type t, Uri[] baseAddresses)
    {
        return new DerivedHost(t, baseAddresses);
    }
}
Third, create or edit your .svc file markup and apply your derived factory.
<%@ ServiceHost Factory="DerivedFactory" Service="MyService" %>
Fourth, test your service in the browser; you should see a message containing "Metadata publishing for this service is currently disabled".
If you want more details about this implementation, kindly visit this link.
Yes. If you code your WCF service as "self describing", which basically means using a WCF intercept layer to handle all the incoming requests to an endpoint, you can just return null from the MEX request.
Making this work is a bit tricky, but in my experience it leads to a much cleaner implementation than all those voluminous web.config entries. This is described here: WCF Configuration without a config file.
I need a way for one service on a well-known endpoint to return strings which are relative addresses. The client can then connect to endpoints using these relative addresses.
Clearly this resembles REST in some ways, but in this case I'm running a Windows Service using NetNamedPipeBinding for IPC, so there's no need for HTTP.
I don't want to create the endpoints ahead of time, since there will be a potentially large number of relative addresses, only some of which the client will be interested in.
All contracts are known in advance.
I tried to find a solution with AddressFilterMode but wasn't sure how to provision a new binding so that the client could connect to it; UriTemplate looked relevant, but I don't want to use the HTTP framework. I haven't looked into RoutingService because I'm constrained to .NET 3.5.
Pseudocode for the client would be something like this:
namespace Testing
{
    class RunTest
    {
        static void Test()
        {
            NetNamedPipeBinding namedpipe = new NetNamedPipeBinding();

            ChannelFactory<Contracts.IRoot> factoryRoot =
                new ChannelFactory<Contracts.IRoot>(
                    namedpipe,
                    new EndpointAddress("net.pipe://localhost/root"));

            Contracts.IRoot root = factoryRoot.CreateChannel();
            ICommunicationObject commsRoot = root as ICommunicationObject;
            commsRoot.Open();

            // Service examines address and creates Endpoint dynamically.
            string address = root.SomeFunctionWhichGetsARelativeAddress();

            // IBar service routes endpoint requests internally based on
            // the "address" variable.
            ChannelFactory<Contracts.IBar> factoryBar =
                new ChannelFactory<Contracts.IBar>(
                    namedpipe,
                    new EndpointAddress("net.pipe://localhost/root/IBar/" + address));

            Contracts.IBar bar = factoryBar.CreateChannel();
            bar.DoSomething();
        }
    } // Ends class RunTest
} // Ends namespace Testing
Message Filters are the way to go. You can use "Prefix" or create a custom one.
WCF Addressing In Depth
From the Message Filters section of the article:
...it uses message filters to determine the matching endpoint, if one exists. You can choose which message filter to use or you can provide your own. This flexibility allows you to break free from the traditional dispatching model when using Windows Communication Foundation to implement things other than traditional SOAP—for instance, the techniques described here enable you to implement REST/POX-style services on the Windows Communication Foundation messaging foundation.
Nice question, by the way. I learned something trying to figure this out.
AddressFilterMode.Prefix might suffice. The actual Endpoint used can be inspected in Service methods via
OperationContext.Current.IncomingMessageHeaders.To
Helper code can parse the endpoint and do any necessary internal processing from there.
Hopefully there's some extensibility on the server side which can simplify that code.
Pseudocode for host:
namespace Services
{
    [System.ServiceModel.ServiceBehavior(AddressFilterMode =
        System.ServiceModel.AddressFilterMode.Prefix)]
    class BarService : Contracts.IBar
    {
        #region IBar Members

        public void DoSomething()
        {
            System.Uri endpoint = System.ServiceModel.OperationContext.Current.IncomingMessageHeaders.To;
            Console.WriteLine("DoSomething endpoint: {0}", endpoint);
        }

        #endregion
    } // Ends class BarService
} // Ends namespace Services
class RunHost
{
    static void HostIBar()
    {
        System.Uri uriBase = new System.Uri("net.pipe://localhost");
        System.ServiceModel.NetNamedPipeBinding namedpipeBinding =
            new System.ServiceModel.NetNamedPipeBinding();

        System.ServiceModel.ServiceHost hostBar =
            new System.ServiceModel.ServiceHost(
                typeof(Services.BarService),
                uriBase);

        hostBar.AddServiceEndpoint(
            typeof(Contracts.IBar)  // Type implementedContract
            , namedpipeBinding      // System.ServiceModel.Channels.Binding binding
            , "root/IBar"           // string address
        );

        hostBar.Open();

        Console.WriteLine("Press <ENTER> to stop...");
        Console.ReadLine();
    }
}
Correction: I'd originally said that this wouldn't treat "net.pipe://localhost/root/IBar/1" and "net.pipe://localhost/root/IBar/2" as distinct endpoints, but it does. Each causes its own WCF Service instance to be created and called.
An additional change was to encode the data as URL-style query parameters rather than embedding it in the path, e.g. "net.pipe://localhost/root/IBar?something=1&somethingelse=11" and "net.pipe://localhost/root/IBar?something=2&somethingelse=22", parsed with HttpUtility.ParseQueryString.
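As a rough sketch of that helper code (it assumes the query-string style addresses above and needs a reference to System.Web, which is available on .NET 3.5):

using System;
using System.ServiceModel;
using System.Web; // for HttpUtility

static class IncomingAddress
{
    // Reads a single query-string parameter from the address the client called.
    public static string GetParameter(string name)
    {
        Uri to = OperationContext.Current.IncomingMessageHeaders.To;
        return HttpUtility.ParseQueryString(to.Query)[name]; // null if not supplied
    }
}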
I'm using something like this on my server:
TcpServerChannel channel = new TcpServerChannel(settings.RemotingPort);
ChannelServices.RegisterChannel(channel, true);
RemotingServices.Marshal(myRemoteObject, "myRemoteObject");
I would like to subscribe to some kind of event so that whenever a remote client connects to myRemoteObject, I can check the Thread.CurrentPrincipal.Identity.Name to decide whether to authorize him.
Currently I'm doing the authorization check in every exposed remote method of myRemoteObject, which is messy...
In my remoting application I defined a special object/interface where clients first need to authorize. If the client successfully authorizes, the special object then returns the remote object, so you have the authorization in one place.
It looks something like this:
public interface IPortal
{
    object SignIn(string name, string password);
}

public class Portal : MarshalByRefObject, IPortal
{
    private object _remoteObject;

    public Portal()
    {
        _remoteObject = new RemoteObject();
    }

    public object SignIn(string name, string password)
    {
        // Authorization
        // return your remote object
        return _remoteObject;
    }
}
In your application you host the Portal object:
TcpServerChannel channel = new TcpServerChannel(settings.RemotingPort);
ChannelServices.RegisterChannel(channel, true);

Portal portal = new Portal();
RemotingServices.Marshal(portal, "portal");
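On the client side, usage could look roughly like this (the host name, port and channel security settings are assumptions and must match your actual setup):

using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

class Client
{
    static void Main()
    {
        // ensureSecurity = true to match the server's RegisterChannel(channel, true)
        ChannelServices.RegisterChannel(new TcpClientChannel(), true);

        // Hypothetical address; adjust to your deployment.
        IPortal portal = (IPortal)Activator.GetObject(
            typeof(IPortal), "tcp://yourserver:8080/portal");

        // Authorization happens in this single call; afterwards the client
        // works directly with the returned remote object.
        object remoteObject = portal.SignIn("name", "password");
    }
}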
You could use something like PostSharp to factor out the check from every method - just do it in the AOP advice. (You apply this to the class which is exposing its methods, not to the client connection.)
This approach is independent of whatever transport you use for remoting - it just factors out the cross-cutting concern of authorization across all the methods in your remoted class.
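For illustration, such an aspect might look roughly like this (this assumes PostSharp's OnMethodBoundaryAspect API; the attribute name and the exact check are made up):

using System;
using System.Security;
using System.Threading;
using PostSharp.Aspects;

// Apply [RequireAuthorizedCaller] to the remoted class; the check then runs on
// entry to every method instead of being repeated by hand in each one.
[Serializable]
public sealed class RequireAuthorizedCallerAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        var principal = Thread.CurrentPrincipal;

        // e.g. compare principal.Identity.Name against an allowed list here
        if (principal == null || principal.Identity == null || !principal.Identity.IsAuthenticated)
        {
            throw new SecurityException("Caller is not authorized.");
        }
    }
}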