Resource request validation, service or business layer responsibility? - C#

Assume you have a Business Layer that you will be using for both a front-end, external-facing web application and a back-end, internal-facing application. The external application will always have the user's logged-in identity/profile in session. The back-end application is only for internal administrators.
Now suppose you have the business layer method SensitiveInfoManager.GetResource(id). You can imagine that when external users call this method you would want some sort of validation to ensure that the id passed in does in fact belong to the user requesting it (assume the database structure lets you establish a link from the requesting user to the resource they are requesting). You can also imagine that a back-end website administrator should be able to call the same method; that user is in no way tied to the resource, but by virtue of being an internal administrator should simply be able to request whatever resource they want.
The question is: how do you accomplish this with maximum reuse and the best separation of concerns? Do you incorporate this validation into the business layer, setting some sort of flag at class level that says "validate me" or "don't validate me" depending on who the consumer is? Or do you front your business layer with a Service Layer, tasking it with authorization of the requested resources, forcing the front-end application to channel requests through the service layer while the back-end application may go to the Business Layer directly?

I think that the Service Layer is the most natural place for the Authorization process.
If, however, you decide to add the authorization functionality to the Business Layer, then I would create an interface IAuthorizationAuthority that contains all the functionality to check permissions. I would create two classes that implement this interface (one for the external application and one for the admin application) and use a Dependency Injection library so that you can decide at the application level which implementation should be used.
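A minimal C# sketch of that approach; the interface name comes from the answer, while the ownership lookup and the method signatures are illustrative assumptions:

    using System;

    public interface IAuthorizationAuthority
    {
        bool CanAccessResource(Guid userId, int resourceId);
    }

    // Hypothetical lookup that maps resources to their owners.
    public interface IResourceOwnershipRepository
    {
        bool IsOwnedBy(int resourceId, Guid userId);
    }

    // Registered in the external, user-facing application.
    public class ExternalAuthorizationAuthority : IAuthorizationAuthority
    {
        private readonly IResourceOwnershipRepository _ownership;

        public ExternalAuthorizationAuthority(IResourceOwnershipRepository ownership)
            => _ownership = ownership;

        public bool CanAccessResource(Guid userId, int resourceId)
            => _ownership.IsOwnedBy(resourceId, userId);   // only the owner may read it
    }

    // Registered in the internal admin application: administrators may request anything.
    public class AdminAuthorizationAuthority : IAuthorizationAuthority
    {
        public bool CanAccessResource(Guid userId, int resourceId) => true;
    }

    public class SensitiveInfoManager
    {
        private readonly IAuthorizationAuthority _authority;

        public SensitiveInfoManager(IAuthorizationAuthority authority)
            => _authority = authority;

        public object GetResource(Guid userId, int id)
        {
            if (!_authority.CanAccessResource(userId, id))
                throw new UnauthorizedAccessException();
            // ... load and return the resource
            return null;
        }
    }

The business layer depends only on the interface; each application's composition root decides which implementation to register.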

Related

In a layered architecture, how can the Application layer know about Web URLs?

I'm currently working on a .NET 5 app that is using a layered architecture (Web / Application / Infrastructure / Domain). If I am to follow the onion/clean architecture pattern, the dependencies should flow in one direction only, e.g.:
Web -> Application -> Infrastructure -> Domain
I now find myself needing to send several emails from the Application layer containing specific front-end URLs. This means that the Application layer will know about the Web layer, breaking the dependency flow.
A sample use case flow would be:
User makes a request, gets handled by a controller in the Web layer
Controller calls a handler on the Application layer
The Application layer uses an email service from the Infrastructure layer to send an email
On step #3 I'm in the Application layer but need Web URLs to construct the email body.
How can I solve this issue?
I've recently solved this problem within my organization. In our case we have an API "market place" used by the company as a whole, then a reverse proxy used by closely integrated clients, and finally an internal load balancer for the API containers.
That's three layers of URL knowledge that my Web and App layers shouldn't know about. Even the Web layer shouldn't know this, because that would give it more than one responsibility rather than just being a router (e.g. via MediatR).
Using Rewriters in the Infrastructure
This is what Z. Danev's answer is all about. This works, but you must maintain all the rules for each of these layers, and each of those rewrites may add overhead. Also, those rules could get tricky depending on the complexity of the data you return.
It is a valid solution though. Depending on your organization this may be an easy thing, or it may be a hard one because the infrastructure is maintained by other teams, needs work tickets, and so on.
Well, if you can't or don't want to do that, then...
Application Layer Dependency Inversion and Patterns
Disclaimer: This solution works great for us, but it does have one drawback: at some level, you have to maintain something that knows about the layers above. So caveat emptor.
The situation I described above is roughly analogous to your problem, though perhaps more complex (you can do the same but simplify it). Without violating your architectural principles, you need to provide an interface (or more than one) that can be injected into your application layer as an application service.
We called ours ILinkBuilderService and created a LinkBuilderService that itself can be wired up through a DI container with individual ILinkBuilder implementations. Each of these implementations could be a MarketPlaceBuilder, a GatewayBuilder, etc., and they are arranged according to the Chain of Responsibility and Strategy patterns, from the outermost proxy to the innermost.
In this way, the builders inspect the web context (headers, request, etc.) to determine which one should handle the responsibility of building links. Your application layer (e.g. your email sender) simply calls the link building service interface with key data, and this is used to generate client-facing URLs without exposing the application layer to the web context.
Without going into too many details, these builders inspect headers like X-Forwarded-For, custom headers, and other details provided by proxies as the HTTP request hits each endpoint. Chain of Responsibility is key, because it allows the application layer to generate the correct URL no matter which layer the request originated from.
So how does this not break the one-way flow?
Well, you push these builders one-way down into your application layer. Technically, they do reach back up to the web layer for context, but that is encapsulated. This is ok and does not violate your architecture. This is what dependency inversion is all about.
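A rough sketch of the shape this takes; ILinkBuilderService, LinkBuilderService, and the builder names come from the answer, but the member signatures, the LinkContext type, and the header checks are assumptions:

    using System;
    using System.Collections.Generic;

    // Key data captured at the edge (host, scheme, headers) so the Application
    // layer never touches HttpContext directly.
    public record LinkContext(string Host, string Scheme, IDictionary<string, string> Headers);

    // Individual builders, e.g. MarketPlaceBuilder, GatewayBuilder, each implement this.
    public interface ILinkBuilder
    {
        bool CanHandle(LinkContext context);   // e.g. checks X-Forwarded-For or a custom header
        string BuildLink(LinkContext context, string routeName, object routeValues);
    }

    public interface ILinkBuilderService
    {
        string BuildLink(LinkContext context, string routeName, object routeValues);
    }

    // Chain of Responsibility: builders are registered from outermost proxy to
    // innermost, and the first one that recognizes the request builds the URL.
    public class LinkBuilderService : ILinkBuilderService
    {
        private readonly IEnumerable<ILinkBuilder> _builders;

        public LinkBuilderService(IEnumerable<ILinkBuilder> builders) => _builders = builders;

        public string BuildLink(LinkContext context, string routeName, object routeValues)
        {
            foreach (var builder in _builders)
            {
                if (builder.CanHandle(context))
                    return builder.BuildLink(context, routeName, routeValues);
            }
            throw new InvalidOperationException("No link builder could handle this request.");
        }
    }

The application layer (the email sender, for example) only sees ILinkBuilderService; the Web layer populates LinkContext and registers the builders.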
Consider configuring "well-known URLs" at the web infrastructure level (a gateway or load balancer, for example) so you can have "mycompany.com/user-action-1" in the email, and that will translate to the proper endpoint of your web app.

Do I need an access token for my web api service?

I have an application for managing user data. All the business logic is encapsulated within a separate Web API service, which the user-management web application (among others) calls into. At the moment all Web API calls are exposed (they are anonymous). However, the Web API sits on a separate domain and is only accessible to the applications that call into it.
Is there any benefit to adding bearer tokens and enforcing authentication for each API call?
If the Web API service is on a separate domain and adequately protected from the internet, then you don't need to authenticate at the service level for external security (over and above any application logins you have).
However, that is not to say that your application is not internally exposed: it could be called maliciously, or accidentally by an incorrectly configured application; for example, someone points a load test at production. For this reason I would secure it, at least with an HMAC if you don't want to implement full-blown authentication.
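A minimal sketch of HMAC request signing on modern .NET; the canonical string, the header name, and the shared-secret handling are assumptions, and a real implementation would also add a timestamp or nonce to resist replay:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class RequestSigner
    {
        // The caller and the service share this secret out of band (configuration, key vault, ...).
        public static string Sign(string method, string path, string body, byte[] sharedSecret)
        {
            var canonical = $"{method}\n{path}\n{body}";
            using var hmac = new HMACSHA256(sharedSecret);
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical)));
        }

        // The service recomputes the signature from the incoming request and compares it,
        // in constant time, with the value the caller sent (e.g. in an X-Signature header).
        public static bool Verify(string method, string path, string body, byte[] sharedSecret, string signature)
        {
            var expected = Convert.FromBase64String(Sign(method, path, body, sharedSecret));
            var received = Convert.FromBase64String(signature);
            return CryptographicOperations.FixedTimeEquals(expected, received);
        }
    }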
EDITED: To add that with any public-facing web real estate you should classify your data and decide the appropriate level of security to apply. In some circumstances you may not want to secure GETs of low-sensitivity data. On the flip side, exposing GETs allows someone to attempt denial-of-service attacks (by calling your API in a loop from multiple servers / a botnet). When it comes to POSTs, the risk is higher, since consumers will be inserting into your datastore.
It's also always good to keep the OWASP Top 10 in mind when dealing with security.

Domain Driven Design Windows Azure Web Job

Certain behaviors of my domain model qualify to be delegated to an Azure WebJob. If I continue to use the same domain model classes across the website and the WebJob,
it seems like a violation of separation of concerns: there will be tight coupling between two different processes.
Should a background process, a WebJob in this case, always have its own dedicated domain model, with the behaviors it exposes consumed by only one process?
If your domain model is free of any dependencies on the environment (which it should be), then I don't see a problem with that.
On the contrary: using the same domain model within a bounded context is preferable, because you are able to capture the business rules in one place. This way, you can be sure you don't run into impedance-mismatch problems between two models.
If you are using domain events, you already have a basis for the communication with the web job. This is exactly what we've been doing for over a year, and it works great:
Web apps publish domain events to an Azure Storage Queue
The web job receives them and performs the background processing on the same model
So all you need to do is create two separate application / service layers (one for the web application, one for the background worker) and make sure all domain logic is in a reusable library.
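A minimal sketch of that setup with the Azure.Storage.Queues SDK and the WebJobs SDK; the queue name, the event type, and the serialization are assumptions, and the exact message encoding (e.g. Base64) depends on the SDK versions in play:

    using System;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Azure.Storage.Queues;            // Azure.Storage.Queues package
    using Microsoft.Azure.WebJobs;         // WebJobs SDK

    // A domain event defined in the shared, reusable domain library.
    public record UserRegistered(Guid UserId, string Email);

    // Web application side: publish the event once the domain operation has committed.
    public class QueueDomainEventPublisher
    {
        private readonly QueueClient _queue;

        public QueueDomainEventPublisher(string connectionString)
            => _queue = new QueueClient(connectionString, "domain-events");

        public Task PublishAsync(UserRegistered @event)
            => _queue.SendMessageAsync(JsonSerializer.Serialize(@event));
    }

    // WebJob side: triggered by the same queue, runs the background processing
    // against the same domain model via its own application/service layer.
    public class DomainEventFunctions
    {
        public void ProcessUserRegistered([QueueTrigger("domain-events")] string message)
        {
            var @event = JsonSerializer.Deserialize<UserRegistered>(message);
            // ... call into the shared domain logic here
        }
    }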

Exposing existing Service Layer using ASP.NET WebApi

I currently have a layered architecture that is as follows:
Service Layer - This is the main interaction point with the domain. Contains all the business rules, validation, etc.
Data/Repository Layer - This is the layer that handles all persistence of the data. Contains no business logic or validation. Contains basically Repository<T>, UnitOfWork (EF-specific), and all the EF things like DbContext, EntityTypeConfigurations, etc.
Entity Framework
SQL Server
I am using an Anemic Domain Model, so basic POCOs that represent the problem domain.
I have a couple questions about exposing this via ASP.NET WebApi.
Where does the security live? Basically things like: does a user have access to edit a record or a type of record, can a user perform a specific action, etc., as well as things like authentication and role-based authorization.
Should I use the WebApi as the actual service layer, or use it to expose my existing service layer over HTTP in a RESTful manner?
Given a basic example of, say, changing the name of a category, where do I enforce that the current user has authority to change said record? Do I rely on Thread.CurrentPrincipal to get the identity and check for a given role? And do I set that in the WebApi or in the MVC application?
Are there any good examples out there that show this type of situation I am talking about?
BTW - I am using ASP.NET MVC 5 to serve up the shell of the application (SPA) and then the front-end is going to be all AngularJS.
Regarding your first question about the level of security your services should have: the correct answer is what I believe should be a principle in all applications:
Services should have enough security to protect the data from unwanted users.
Once you create a service and make it public, you are exposed to possible attacks. Of course, having complex security rules may increase development time and, in some situations, decrease performance; measure the level of the threat and plan your security accordingly.
WebApi was created with the intention of providing services over HTTP/REST, and all of its built-in principles and features were made with that intention. So, regarding your second question, and as you inferred at the end of it, it is a service layer, but an HTTP/REST service layer.
WebApi uses the Authorize attribute to enforce security, and, as is normal with .NET frameworks, you can inherit from it and extend it. You can learn more about it here.
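A minimal sketch of extending it in ASP.NET Web API 2; the derived attribute name, the role name, and the rule it enforces are illustrative assumptions:

    using System.Net;
    using System.Net.Http;
    using System.Web.Http;
    using System.Web.Http.Controllers;

    // Falls back to the base role/user checks, but also lets a specific role through.
    public class CategoryEditAuthorizeAttribute : AuthorizeAttribute
    {
        protected override bool IsAuthorized(HttpActionContext actionContext)
        {
            var principal = actionContext.RequestContext.Principal;
            if (principal?.Identity?.IsAuthenticated != true)
                return false;

            // Hypothetical rule: users in the "CategoryEditor" role may edit categories.
            return principal.IsInRole("CategoryEditor") || base.IsAuthorized(actionContext);
        }

        protected override void HandleUnauthorizedRequest(HttpActionContext actionContext)
        {
            actionContext.Response =
                actionContext.Request.CreateResponse(HttpStatusCode.Forbidden);
        }
    }

    // Usage on a controller action:
    // [CategoryEditAuthorize]
    // public IHttpActionResult PutCategory(int id, CategoryDto category) { ... }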
And since you are using AngularJS, and even though you will need MVC 5 to use WebApi, my recommendation is that you do not use MVC Razor or any other server-side technology to render your pages.

Where to locate custom membership, roles, profile providers in a 3-tier setup?

I have a 3-tier ASP.NET MVC 3 project that has a data layer, a service layer, and a presentation layer which calls the service layer to get data. I'm actually using the DoFactory Patterns in Action solution.
I want to implement custom membership, role, and profile providers, but I'm not sure exactly where to put them. I was thinking of putting them in the service layer and then having the providers call the DAO objects to get the info.
Any other ideas?
You're thinking pretty well. Though the UI layer interacts with the client and takes their password, your service layer should process attempts to enter the system.
Your action methods pass along the information to the service objects responsible for authorization.
Your service layer would have no idea whether it is in a web application or not.
The data layer is just the place where that information is stored, not where it is processed.
You might choose to keep the ID of the user in the UI layer, in session. On login, the service layer would take the username/password/whatever and return a user ID. Or, your action methods could pass a session key into the service layer each time to get the user information.
Edit due to comment: I'm doing this in my current project (a couple-million-dollar scope). I have my security decisions in the action methods. (Though of course the tools for making this simple are objects from the Service Layer.) For example, if the current user doesn't have this role or that role, then redirect them to a rejection page; otherwise, do the thing. MyServiceLayerObject.DoThing() has no security inside it.
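A minimal sketch of that pattern in an ASP.NET MVC controller; MyServiceLayerObject is the answer's placeholder, and the role names, routes, and stub body are assumptions:

    using System.Web.Mvc;

    // Stub standing in for the answer's service-layer object; it contains no security checks.
    public class MyServiceLayerObject
    {
        public void DoThing(int id) { /* business behavior lives here */ }
    }

    public class ThingController : Controller
    {
        private readonly MyServiceLayerObject _service;

        public ThingController(MyServiceLayerObject service)
        {
            _service = service;
        }

        [HttpPost]
        public ActionResult DoThing(int id)
        {
            // The security decision lives in the action method, not in the service layer.
            if (!User.IsInRole("Editor") && !User.IsInRole("Admin"))
                return RedirectToAction("Rejected", "Home");

            _service.DoThing(id);
            return RedirectToAction("Index");
        }
    }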
It's the simplest way for my app and many others. ("Simplest" means it will be screwed up the least. When it comes to security, simple is good!) Since the action method is the gateway to the functionality, having security in the service layer would just cause extra work and actually obscure what security was happening. Now, that's my app, where there is usually one place where each action takes place.
Your app may be different. The more different action methods and (especially) different components are using your Service Layer's functionality, the more you'd want your Service Layer functionality locked down with your authorization scheme. Many people feel that security should always be in the service layer, and that any additional security actions in the UI layer would be bonus redundancy. I don't agree with that.
Here is an existing implementation of membership providers in a 3-tier world that I found when looking for the same thing...
http://elysianonline.com/programming/wcf-wrapper-for-asp-net-membership/
And here ...
http://elysianonline.com/programming/using-the-wcf-membership-provider/
