Currently I have a Web API project with FluentValidation tied in to verify the requests that come in. This is working fine to make sure that the requests make sense.
My next step is to verify the request against business rules. What I mean by this is that some POST (create) requests link to existing entities and may require the following checks:
I need to verify that the linked entities belong to the current user
Check to see if the user already has an 'Active' entity of the same type requested.
Check that the linked entities support the requested entity
How should I be doing these checks? I don't want to tie them into my FluentValidation, as that should just validate the requests themselves, and I don't want to make trips to the DB if I'm going to return a Bad Request due to validation anyway.
I could add these checks into each method in the controller, but that doesn't seem very nice. Is there an action filter or something similar that I can plug in which will be called after FluentValidation does its thing but before the request hits the controller?
Thanks
Alex
It is possible to create custom Action Filters to do these checks, but in my experience it doesn't typically make sense to do so unless the thing you're trying to check is applicable to almost every request (e.g. make sure the user is logged in).
I would just put the logic for the kinds of checks you're talking about into separate utility classes where it can be easily reused, and make it the responsibility of each action to call the appropriate utility methods based on what checks need to occur for that action.
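As a rough sketch of what that could look like (the repository interface, request model, and check names below are all made up for illustration, not from your project):

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

// Minimal shapes assumed for the example.
public class LinkedEntity
{
    public int OwnerId { get; set; }
    public string Type { get; set; }
}

public interface IEntityRepository
{
    LinkedEntity GetById(int id);
    IEnumerable<LinkedEntity> GetActiveByUser(int userId);
}

public class CreateThingRequest
{
    public int LinkedEntityId { get; set; }
    public string EntityType { get; set; }
}

// Reusable utility class holding the business-level checks.
public class LinkedEntityChecks
{
    private readonly IEntityRepository _repository;

    public LinkedEntityChecks(IEntityRepository repository)
    {
        _repository = repository;
    }

    // Does the linked entity belong to the current user?
    public bool BelongsToUser(int entityId, int userId)
    {
        var entity = _repository.GetById(entityId);
        return entity != null && entity.OwnerId == userId;
    }

    // Does the user already have an 'Active' entity of the requested type?
    public bool HasActiveEntityOfType(int userId, string entityType)
    {
        return _repository.GetActiveByUser(userId).Any(e => e.Type == entityType);
    }
}

// Each action calls only the checks it needs, after FluentValidation has
// already confirmed the request is well formed.
public class ThingsController : ApiController
{
    private readonly LinkedEntityChecks _checks;

    public ThingsController(LinkedEntityChecks checks)
    {
        _checks = checks;
    }

    public IHttpActionResult Post(CreateThingRequest request)
    {
        var userId = 0; // resolve the current user's id from User.Identity here

        if (!_checks.BelongsToUser(request.LinkedEntityId, userId))
            return StatusCode(HttpStatusCode.Forbidden);

        if (_checks.HasActiveEntityOfType(userId, request.EntityType))
            return Conflict();

        // ... create the entity ...
        return Ok();
    }
}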
Related
I'm new and learning how to build an MVC application. While implementing CRUD operations, I'm wondering if the logic for Edit and Create should be placed inside a single method, as some tutorials seem to suggest.
Is this a good practice, and why?
Does this respect SOLID principles?
Thank you so much in advance <3
You could use the same request model for both create and update methods; however, they should still be separate endpoints for a few key reasons:
Security: You may want to implement additional controller logic around the editing of existing entities.
Different input required: To update an existing entity, you need to know the ID of the entity to be updated. Generally, this is passed in the form of a URL parameter in a PUT request. For example, to update user 2, you would send a PUT request to /api/users/2 with a request body containing the JSON of your create/edit user request model.
Clearer logging: By utilizing separate request types, it is much easier to interpret basic access logs. For example, if you see several 500 response codes in access logs for PUT requests, you are able to focus your investigation on the update logic.
With that said, there isn't really one right way. However, most development teams (including one-person teams) will opt for separate methods for some of the reasons above.
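For illustration only (the model, controller, and service names here are invented), the two endpoints can share a request model while staying separate:

using System.Web.Http;

// Shared request model used by both create and update.
public class SaveUserRequest
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Assumed application service.
public interface IUserService
{
    int Create(SaveUserRequest request);
    bool Exists(int id);
    void Update(int id, SaveUserRequest request);
}

public class UsersController : ApiController
{
    private readonly IUserService _userService;

    public UsersController(IUserService userService)
    {
        _userService = userService;
    }

    // POST /api/users - create a new user.
    public IHttpActionResult Post(SaveUserRequest request)
    {
        var id = _userService.Create(request);
        return Created("/api/users/" + id, request);
    }

    // PUT /api/users/2 - update user 2; the ID comes from the URL,
    // the rest of the data from the same request model.
    public IHttpActionResult Put(int id, SaveUserRequest request)
    {
        if (!_userService.Exists(id))
            return NotFound();

        _userService.Update(id, request);
        return Ok();
    }
}

Separate methods also keep the create-only and update-only concerns (security checks, logging, status codes) from leaking into each other.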
So I have been coming across stuff over the last few weeks which is making me improve my understanding of REST.
At work we are having issues with a REST API resource that has some pretty complex access rules, so I was wondering if someone could help me understand if/what we are doing wrong and what the right way to go about it is.
So the issue we're having is that we have an endpoint for getting all orders, e.g. /orders. This endpoint has pagination, filters, etc. and returns a list of orders.
We have two main types of users (admin and account).
If you are an admin, you are able to see all the orders; if you are an account user, it gets a bit more tricky, because account users can see orders based on a set of permissions.
So by default, an account user can see all orders that they have placed. They will also be able to see any orders that were placed for them (the distinction here is that orders are placed by a user but can be for another user, e.g. I might order something for another user).
Due to the way the application is designed, users are able to see orders for other branches as well, one example would be a branch reporter that deals with the orders from all branches and collates reports etc.
So an example for all of this would be:
If you are an administrator, you see all orders by default. If you are an account user and have permissions for branches x and y, you will see all orders for x and y as well as any orders placed by yourself.
Is the domain design faulty here, is that what's stopping me from seeing a feasible solution?
I have been reading about user contexts a bit, so that might be a way to split some of the issues. The idea is that different users see different resources in different ways, so to avoid a one-size-fits-all solution (which this definitely is right now) you would build, say, 3 APIs. If I did that I could definitely separate admin from account, but I don't know what to do with the complexity of accounts.
I have a suspicion that the key to this relates to moving these permission checks out of the database and into a permissions layer, but I am not sure how one would handle dynamic permissions.
I am sorry if a lot of this is a rant or if it doesn't make sense. Any help would be appreciated, even if it's only to put me on the right path.
Like I said, I have been trying to understand how to design the REST API while trying to forget about the underlying database, but a problem as bonkers as this stumps me.
I think you are mixing two important aspects: the first is the style you want your API to follow, the second is the authorisation logic you want to apply.
REST
If you want to build a consistent, clear, and maintainable REST API, first you need to understand your domain.
So what is an Order for you? Who is going to consume your API? Does your code base have to remain a single unit, or can it be split up, reducing complexity and coupling and increasing isolation?
If you have a single understanding of Order, just leave it as it is.
Authorisation
The authorisation rules are just another example of business logic, and these are often subject to change as the business evolves.
I suggest you treat authorisation as you treat any other logic, like when you calculate an order total.
So if you have a service layer, create an Authorisation service where you check what a user is allowed to see.
You can also do it in a filter, so that before returning the list of orders, you apply the security rules and you remove all orders that the user is not supposed to view.
You don't like this approach? Then you need to move the "authorisation filters" down the stack so that the query itself takes them into account.
Based on what you said, there are various aspects which you need to account for when performing a GetOrderByUserID(x). If you use EF, you can generate the authorisation filters as lambda expressions and add them to your where clause, making sure you add the necessary joins so that the branch, accounts and so on are taken into account.
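A loose sketch of that idea (the property names on Order and User are my own placeholders, not from your question):

using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins for the sketch; use your real entities.
public class Order
{
    public int PlacedByUserId { get; set; }
    public int PlacedForUserId { get; set; }
    public int BranchId { get; set; }
}

public class User
{
    public int Id { get; set; }
    public bool IsAdmin { get; set; }
    public List<int> PermittedBranchIds { get; set; }
}

public static class OrderQueryExtensions
{
    // Applies the authorisation rules to the orders query before it executes,
    // so the filtering happens in the database rather than in memory.
    public static IQueryable<Order> VisibleTo(this IQueryable<Order> orders, User user)
    {
        if (user.IsAdmin)
            return orders; // admins see everything

        var branchIds = user.PermittedBranchIds; // e.g. branches x and y

        return orders.Where(o =>
            o.PlacedByUserId == user.Id ||      // orders they placed
            o.PlacedForUserId == user.Id ||     // orders placed for them
            branchIds.Contains(o.BranchId));    // orders for permitted branches
    }
}

// Usage: db.Orders.VisibleTo(currentUser), then apply paging/filters as usual.

EF will translate the Contains call into an IN clause, so the same /orders endpoint can serve both admins and account users.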
Conclusion
It is down to you how you want your system to implement the security, but unless you recognise the existence of multiple domain objects (Order, BranchOrder, DelegatedOrder, AccountOrder and so on), there should be no evidence in the request of the security that will be applied afterwards. The only requirement is that the REST request carries enough details, likely in the headers, to identify who is requesting the resources.
Also note that even if you break down your domain object (Order) into more specific types of order, you will still need to apply security, and therefore you will still need to create and maintain the security rules.
Your question is a bit broad, with just enough information to give you hints. If you want a more specific answer, you need to update your question to be more specific as well.
If you have code-level access at the "orders" end, then you can design it this way.
Make the user's ID a mandatory input at the "Orders" end, so whenever any user calls the API, they make the request with their user ID as a parameter.
Once you get the user ID at the "orders" end, check the user's authorization, such as whether the user is an admin and which branches the user has access to.
Now limit the data in the response accordingly.
In other words, make the changes at the "Orders" end to authorize, filter the data, and send the response accordingly.
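A bare-bones version of that endpoint (the repository names and the role/branch properties are placeholders for whatever your permission store provides) might look like:

using System.Linq;
using System.Web.Http;

public class OrdersController : ApiController
{
    private readonly IUserRepository _users;    // assumed lookup for users
    private readonly IOrderRepository _orders;  // assumed order data access

    public OrdersController(IUserRepository users, IOrderRepository orders)
    {
        _users = users;
        _orders = orders;
    }

    // GET /orders?userId=42 - the user ID is a mandatory parameter.
    public IHttpActionResult Get(int userId)
    {
        var user = _users.GetById(userId);
        if (user == null)
            return Unauthorized();

        // Admins get everything; account users get a query already scoped to
        // the orders they placed and their permitted branches.
        var orders = user.IsAdmin
            ? _orders.Query()
            : _orders.QueryVisibleTo(user);

        return Ok(orders.ToList());
    }
}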
I have a particular scenario where an aggregate has behavior to check whether an address is valid. This validation is triggered on the aggregate via inline ajax form validation on a web site. In between the aggregate and the web site is an application service which orchestrates the two.
As it stands, I create what is essentially an empty aggregate and set the address property so the check can be done. Based on this, I return true or false back to the web site (ASP.NET MVC). This doesn't seem like the right approach in the context of DDD.
public bool IsAddressAvailable(string address)
{
    var aggregate = new Aggregate
    {
        Address = address
    };
    return aggregate.IsAddressValid();
}
What options do I have that would work better using DDD? I was considering separating it out into a domain service. Any advice would be appreciated!
Normally your aggregates should not expose Get- methods; you always want to follow the Tell-Don't-Ask principle.
If something needs to be done, then you call an aggregate method and it gets it done.
But you normally don't want to ask an aggregate if the data is valid or not. Especially if you already have a service that does this job for you, why mix this "validation" into aggregates?
The rule of thumb is:
If something is not needed for the aggregate's behavior, it doesn't need to be a part of the aggregate.
You only pass valid data into your domain. It means that when you call an aggregate behavior asking it to do something for you, the data you pass is already validated. You don't want to pollute your domain with data validation / if-else branches, etc. Keep it straight and simple.
In your case, as far as I understand, you only need to validate the user's input, so you don't need to bother your domain to do it, for two reasons:
You aren't doing anything that changes the system's state. It is considered a "read" operation, so do it in the most straightforward way (call your service, validate against some tables, etc.).
You cannot rely on the validation result. Right now it tells you "correct", and 10 milliseconds later (while you get the response over the wire, while the HTML is rendered in the browser, etc.) it is already history; it MAY change at any time. So this validation is just a guidance, no more.
Therefore if you only need "read-only" validation just do it against your service.
If you need to validate the user's data as part of an operation, then do it before you call the domain (perhaps in your command handler).
And be aware of race conditions (DB unique constraints can help).
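For example (the service, command, and repository names below are all invented for the sketch):

using System;

// Read-only check used by the AJAX endpoint: it goes straight to a service
// and never touches the aggregate.
public class AddressAvailabilityService
{
    private readonly IAddressLookup _addresses; // assumed lookup against your tables

    public AddressAvailabilityService(IAddressLookup addresses)
    {
        _addresses = addresses;
    }

    public bool IsAvailable(string address)
    {
        return !_addresses.Exists(address);
    }
}

// Command-time check: validate before the domain is invoked, so the aggregate
// only ever sees valid data.
public class RegisterAddressHandler
{
    private readonly AddressAvailabilityService _availability;
    private readonly IAggregateRepository _aggregates; // assumed

    public RegisterAddressHandler(AddressAvailabilityService availability,
                                  IAggregateRepository aggregates)
    {
        _availability = availability;
        _aggregates = aggregates;
    }

    public void Handle(RegisterAddressCommand command)
    {
        if (!_availability.IsAvailable(command.Address))
            throw new InvalidOperationException("Address is not available.");

        var aggregate = _aggregates.Get(command.AggregateId);
        aggregate.AssignAddress(command.Address); // tell, don't ask
        _aggregates.Save(aggregate);

        // A unique constraint in the DB still guards against the race where the
        // same address is taken between the check and the save.
    }
}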
You should also consider reading this to think deeper about set validation: http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
We have an MVC3 application in which we have created many small actions and views so we can place data wherever we need to. For instance, if it were a blog and we wanted to show comments, we would have a comment action and view that we can place wherever we want: a user profile view, a blog post view, etc.
The problem this has caused is that each small view or action needs to make a call, usually to the same service, multiple times per page load because of all the other small views we have in our application. So on a very large page containing these small views, we could have 80+ SQL calls, with 40% of them being duplicates, and the page slows down. Our current solution is to cache some data and pass some data around in the ViewBag where we can, so if you want, say, a user's profile, you check the cache or the ViewBag first and only ask the service for it if it isn't there.
That feels really, really dirty as a design pattern, and the ViewBag approach seems awful since it has to be passed from the top down. We've added some data into HttpContext.Current.Items to make it per-request (instead of caching, since that data can change), but surely there has to be some clean solution that doesn't feel wrong?
EDIT
I've been asked to be more specific, and while this is an internal business application, I can't give away too much of the specifics.
So to put this into a software analogy, let's compare this to Facebook. Imagine this MVC app had an action for each Facebook post, then under that action another action for the like button and number of comments, then another action for showing the top comments to the user. The way our app is designed, we would get the current user's profile in each action (thus at least 4 times in the above situation), and then each child action would get the parent wall post to verify that you have permission to see it. Now you can consider caching the calls to each security check, wall post, etc., but I feel like caching is for things that will be needed over the lifetime of the app, not just little pieces here and there to correct a mistake in how your application is architected.
Are you able to replace any of your @Html.Action() calls with @Html.Partial() calls, passing in the model data instead of relying on an action method to get it from the DB?
You could create a CompositeViewModel that contains your other ViewModels as properties. For example, it might contain a UserViewModel property, a BlogPostViewModel property, and a Collection<BlogComment> property.
In your action method that returns the container / master view, you can optimize the data access. It sounds like you already have a lot of the repeatable code abstracted through a service, but you didn't post any code so I'm not sure how DRY this approach would be.
But if you can do this without repeating a lot of code from your child actions, you can then use @Html.Partial("~/Path/to/view.cshtml", Model.UserViewModel) in your master view, and keep the child action method for other pages that don't have such a heavy load.
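Roughly (the view model and service names are placeholders for whatever your real ones are):

using System.Collections.Generic;
using System.Web.Mvc;

// Composite model assembled once per request by the master action.
public class CompositeViewModel
{
    public UserViewModel UserViewModel { get; set; }
    public BlogPostViewModel BlogPost { get; set; }
    public ICollection<BlogComment> Comments { get; set; }
}

public class BlogController : Controller
{
    private readonly IBlogService _service; // assumed service you already have

    public BlogController(IBlogService service)
    {
        _service = service;
    }

    public ActionResult Post(int id)
    {
        // One optimized round of data access instead of one call per child action.
        var model = new CompositeViewModel
        {
            UserViewModel = _service.GetCurrentUserProfile(),
            BlogPost = _service.GetPost(id),
            Comments = _service.GetTopComments(id)
        };
        return View(model);
    }
}

The master view then renders @Html.Partial("~/Views/Shared/_UserProfile.cshtml", Model.UserViewModel) instead of @Html.Action(...), so nothing in the partial triggers another trip to the service.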
I see two potential places your code might be helped based on my understanding of your problem.
You have too many calls per page. In other words your division of work is too granular. You might be able to combine calls to your service by making objects that contain more information. If you have a comments object and an object that has aggregate data about comments, maybe combine them into one object/one call. Just a thought.
Caching more effectively. You said you're already trying to cache the data, but want a possibly better way to do this. On a recent project I worked on I used an AOP framework to do caching on WCF calls. It worked out really well for development, but was ultimately too slow in a heavy traffic production website.
The code would come out like this for a WCF call (roughly):
[Caching(300)]
Comment GetComment(int commentId);
You'd just put a decorator on the WCF call with a time interval and the AOP would take care of the rest as far as caching. Granted we also used an external caching framework (AppFabric) to store the results of the WCF calls.
Aspect-Oriented Programming (AOP): http://en.wikipedia.org/wiki/Aspect-oriented_programming
We used Unity for AOP: Enterprise Library Unity vs Other IoC Containers
I would strongly consider trying to cache the actual service calls though, so that you can call them to your heart's content.
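If a full AOP framework is more than you want, a plain decorator over the service interface gives the same effect. Here is a rough sketch using MemoryCache (the interface shape, key format, and five-minute lifetime are my own assumptions):

using System;
using System.Runtime.Caching;

// Assumed shape of the service your views/actions already depend on.
public interface ICommentService
{
    Comment GetComment(int commentId);
}

// Decorator that memoizes results; register it in your IoC container in place
// of the real implementation.
public class CachingCommentService : ICommentService
{
    private readonly ICommentService _inner;
    private readonly MemoryCache _cache = MemoryCache.Default;

    public CachingCommentService(ICommentService inner)
    {
        _inner = inner;
    }

    public Comment GetComment(int commentId)
    {
        var key = "comment:" + commentId;
        var cached = _cache.Get(key) as Comment;
        if (cached != null)
            return cached;

        var comment = _inner.GetComment(commentId);
        if (comment != null)
            _cache.Set(key, comment, DateTimeOffset.UtcNow.AddMinutes(5));
        return comment;
    }
}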
The best thing to do is to create an ActionFilter that will create and tear down your persistence mechanism. This will ensure that the most expensive part of data access (i.e. creating the connection) is limited to once per request.
public class SqlConnectionActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var sessionController = filterContext.Controller;

        if (filterContext.IsChildAction)
            return;

        // Create your SqlConnection here
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.IsChildAction)
            return;

        // Commit transaction & tear down SqlConnection here
    }
}
The thing is: if you are doing the query 80 times, then you are hitting the DB 80 times. Putting a request-scoped cache in place is the best solution. The most elegant way of implementing it is through AOP, so your code doesn't need to care about that problem.
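If you'd rather not bring in AOP, a tiny helper over HttpContext.Items gives you the request-scoped part on its own (the helper name and key are made up):

using System;
using System.Web;

// Per-request memoization: values live in HttpContext.Items, so each one is
// computed at most once per request and thrown away when the request ends.
public static class RequestCache
{
    public static T GetOrAdd<T>(string key, Func<T> factory) where T : class
    {
        var items = HttpContext.Current.Items;
        var cached = items[key] as T;
        if (cached != null)
            return cached;

        var value = factory();
        items[key] = value;
        return value;
    }
}

// Usage from any action or child action:
// var profile = RequestCache.GetOrAdd("user-profile", () => service.GetCurrentUserProfile());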
I'm only a newcomer to ASP.NET MVC and am not sure how to achieve a certain task the "right way".
Essentially, I store the logged in userId in HttpContext.User.Identity and have written an EnhancedAuthorizeAttribute to perform some custom authorization.
In the overridden OnAuthorization method, my domain model hits the database to ensure the current user ID can access the passed-in route value "BatchCode". The prototype is:
ReviewGroup GetReviewGroupFromBatchCode(string batchCode);
It will return null if the user can't access the ReviewGroup and the OnAuthorization then denies access.
Now, I know the decorated action method will only get executed if OnAuthorization passes, but I don't want to hit the database a second time to get the ReviewGroup again.
I am thinking of storing the ReviewGroup in HttpContext.Items["reviewGroup"] and accessing this from the controller at the moment.
Is this a feasible solution, or am I on the wrong path?
Thanks!
HttpContext.Items is alive only for the duration of the request. If you want to persist it for longer, you should put it in:
a) session - good
b) profile - don't see the advantage
c) cookie - not recommended
d) hit the database every time - should be OK
Store it in filterContext.RouteData.DataTokens?
Alternatively, one of the best and easiest ways to avoid hitting the database is caching.
Retrieve it, stick it in a cache. If it's needed again, it's already in memory and no DB hit is required. If not, then when the cache entry goes out of scope, so will the object.
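A small sketch of that, assuming the standard ASP.NET cache; the key format and five-minute lifetime are arbitrary choices:

using System;
using System.Web;
using System.Web.Caching;

public static class ReviewGroupCache
{
    // Key the entry by user and batch code so one user's result is never
    // served to another.
    private static string Key(string userId, string batchCode)
    {
        return "reviewGroup:" + userId + ":" + batchCode;
    }

    public static void Store(string userId, string batchCode, ReviewGroup group)
    {
        HttpContext.Current.Cache.Insert(Key(userId, batchCode), group, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
    }

    public static ReviewGroup TryGet(string userId, string batchCode)
    {
        return HttpContext.Current.Cache[Key(userId, batchCode)] as ReviewGroup;
    }
}

OnAuthorization calls Store after it has loaded the ReviewGroup, and the action calls TryGet first, only falling back to GetReviewGroupFromBatchCode when the entry has expired.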
Unless you're doing a VERY big select from ReviewGroup or you have an enormous database, a second database hit shouldn't be too much of an issue. Modern databases are very efficient at selects, especially with properly indexed tables.
In my experience, this is the best way of doing authorisation, and it is similar to how I authorize specific actions in my own applications.
So in short, I wouldn't worry at all about the second database hit.