I'm new to building MVC applications, and while implementing CRUD operations I'm wondering whether the logic for Edit and Create should be placed in one single method, as some tutorials seem to suggest.
Is this a good practice, and why?
Does this respect SOLID principles?
Thank you so much in advance <3
You could use the same request model for both create and update methods; however, they should still be separate endpoints for a few key reasons:
Security: You may want to implement additional controller logic around the editing of existing entities.
Different input required: To update an existing entity, you need to know the ID of the entity to be updated. Generally, this is passed in the form of a URL parameter in a PUT request. For example, to update user 2, you would send a PUT request to /api/users/2 with a request body containing the JSON of your create/edit user request model.
Clearer logging: By utilizing separate request types, it is much easier to interpret basic access logs. For example, if you see several 500 response codes in access logs for PUT requests, you are able to focus your investigation on the update logic.
With that said, there isn't really one right way. However, most development teams (including one-person teams) will opt for separate methods for some of the reasons above.
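As a minimal sketch of the separate-endpoint approach, assuming ASP.NET Web API 2 and invented names (UserRequest, UsersController are placeholders, not from your code):

using System.Web.Http;

public class UserRequest
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class UsersController : ApiController
{
    // POST /api/users
    public IHttpActionResult Post(UserRequest request)
    {
        // Create-specific logic here (defaults, duplicate checks, ...)
        return Ok();
    }

    // PUT /api/users/2
    public IHttpActionResult Put(int id, UserRequest request)
    {
        // Update-specific logic here (does the entity exist? may this user edit it?)
        return Ok();
    }
}

Both actions accept the same request model, but each endpoint keeps its own logic, its own URL shape, and its own entry in the access logs.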
What should we do when our UI is not task-based, with tasks corresponding to our entity methods, which in turn correspond to the ubiquitous language?
For example, let's say we have a domain model for WorkItem with the properties: StartDate, DueDate, AssignedToEmployeeId, WorkItemType, Title, Description, CreatedbyEmployeeId.
Now, some things about the WorkItem can change, and broken down it boils down to methods like:
WorkItem.ReassignToAnotherEmployee(string employeeId)
WorkItem.Postpone(DateTime newDateTime)
WorkItem.ExtendDueDate(DateTime newDueDate)
WorkItem.Describe(string description)
But on our UI side there is just one form with fields corresponding to our properties and a single Save button. So, a CRUD UI. Obviously, that leads to having a single CRUD REST API endpoint like PUT domain.com/workitems/{id}.
The question is: how do we handle requests that come to this endpoint from the domain model perspective?
OPTION 1
Have a CRUD-like method WorkItem.Update(...)? (This, obviously, defeats the whole purpose of the ubiquitous language and DDD.)
OPTION 2
Have the application service that is called by the endpoint controller expose a method WorkItemService.Update(...), but within that service call each of the domain model methods that correspond to the ubiquitous language? Something like:
public class WorkItemService {
    ...
    public void Update(UpdateWorkItemRequest request) {
        WorkItem item = _workItemRepository.Get(request.WorkItemId);
        // I am leaving out the check for which properties actually changed,
        // as it's not crucial for this example.
        item.ReassignToAnotherEmployee(request.EmployeeId);
        item.Postpone(request.NewDateTime);
        item.ExtendDueDate(request.NewDueDate);
        item.Describe(request.Description);
        _workItemRepository.Save(item);
    }
}
Or maybe some third option?
Is there some rule of thumb here?
[UPDATE]
To be clear, the question can be rephrased this way: should a CRUD-like WorkItem.Update() ever become part of our model, even if our domain experts express it as "we want to be able to update a WorkItem"? Or should we always avoid it and ask what "update" actually means for the business?
Is your domain/sub-domain inherently CRUD?
"if our domain experts express it in a way we want to be able update a
WorkItem"
If your sub-domain aligns well with CRUD you shouldn't try to force a domain model. CRUD is not an anti-pattern and can actually be the perfect fit for certain sub-domains. CRUD becomes problematic when business experts are expressing rich business processes that are wrongly translated to CRUD UIs & backends by developers, leading to code/UL misalignment.
Note that business processes can also be expensive to discover & model explicitly. Sometimes (e.g. lack of resources) it may be acceptable to let those live in the heads of domain experts. They will drive a simple CRUD UI from paper-based processes as opposed to having the system guide them. CRUD may be perfectly fine here since although processes are complex, we aren't trying to model them in the system which remains simple.
I can't tell whether or not your domain is inherently CRUD, but I just wanted to point out that if it is, then embrace it and go for simpler business logic patterns (Active Record, Transaction Script, etc.). If you find yourself constantly wanting to map every bit of data with a single method call then you may be in a CRUD domain.
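For instance, if the sub-domain really is CRUD, a plain update in a transaction-script style service is perfectly respectable. This is only a sketch with invented names (WorkItemCrudService, IWorkItemRepository, WorkItemDto):

// A plain, CRUD-style update for a genuinely CRUD sub-domain: load, overwrite, save.
public class WorkItemCrudService
{
    private readonly IWorkItemRepository _repository;

    public WorkItemCrudService(IWorkItemRepository repository)
    {
        _repository = repository;
    }

    public void Update(int id, WorkItemDto dto)
    {
        var item = _repository.Get(id);
        item.Title = dto.Title;
        item.Description = dto.Description;
        item.DueDate = dto.DueDate;
        item.AssignedToEmployeeId = dto.AssignedToEmployeeId;
        _repository.Save(item);
    }
}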
Isolate corruption
If you decide that a domain model will benefit your system, then you should stop corruption from spreading through the system as early as you can. This is done with an anti-corruption layer, which in your case would be responsible for interpreting CRUD calls and transforming them into more meaningful business processes.
The anti-corruption layer should sit between the parts of the system you want to protect and the legacy/misbehaving/etc part. That would be option #2. In this case the anti-corruption code will most likely have to compare the current state with the new state to try and figure out what changes were done and how to correlate these to more explicit business processes.
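A rough sketch of what that anti-corruption code could look like; the comparison logic, the UpdateWorkItemRequest name, and which field Postpone maps to are my assumptions, not taken from your code:

public class WorkItemService
{
    private readonly IWorkItemRepository _workItemRepository;

    public WorkItemService(IWorkItemRepository workItemRepository)
    {
        _workItemRepository = workItemRepository;
    }

    // Translate a generic CRUD update into explicit, UL-named domain operations
    // by comparing the incoming values with the current state.
    public void Update(UpdateWorkItemRequest request)
    {
        WorkItem item = _workItemRepository.Get(request.WorkItemId);

        if (item.AssignedToEmployeeId != request.AssignedToEmployeeId)
            item.ReassignToAnotherEmployee(request.AssignedToEmployeeId);

        if (item.StartDate != request.StartDate)
            item.Postpone(request.StartDate);

        if (item.DueDate != request.DueDate)
            item.ExtendDueDate(request.DueDate);

        if (item.Description != request.Description)
            item.Describe(request.Description);

        _workItemRepository.Save(item);
    }
}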
Like you said, option 1 is pretty much against the ruleset. Additionally, offering a generic update is no good to the clients of your domain entity.
I would go with a 2-ish option: have an application-level service, but reflect the UL in it. Your controller would need to call a meaningful application service method with a meaningful parameter/command that changes the state of a domain model.
I always try to think from the view of a client of my service/domain model code. As this client, I want to know exactly what I am calling. A CRUD-like Update is counter-intuitive, doesn't help you follow the UL, and is more confusing to the clients: they would need to know the code behind that Update method to know what they are changing.
On your Update: no, don't include a generic update (at least not with the name Update); always reflect business rules/processes. A client of your code would never know what it does.
If this is a specific business process that gets triggered from a specific controller API endpoint, you can name it that way. Let's say your Update is actually the business process DoAWorkItemReassignAndPostponeDueToEmployeeWentOnVacation(); then you could bulk this operation, but don't go with the generic Update. Always reflect the UL.
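As a sketch of that idea, with a task-based endpoint instead of a generic update (the controller, command, and service names below are invented for illustration, assuming Web API):

using System.Web.Http;

public class WorkItemsController : ApiController
{
    private readonly IWorkItemService _workItemService;

    public WorkItemsController(IWorkItemService workItemService)
    {
        _workItemService = workItemService;
    }

    // PUT /workitems/{id}/postpone : a UL-named operation, not a generic Update
    [HttpPut]
    public IHttpActionResult Postpone(int id, PostponeWorkItemCommand command)
    {
        _workItemService.Postpone(id, command.NewStartDate);
        return Ok();
    }
}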
Currently I have a Web API project with FluentValidation tied in to verify the requests that come in. This is working fine to make sure that the requests make sense.
My next step is to validate the request against existing data. What I mean by this is that some POST (create) requests link to existing entities and may require the following checks:
I need to verify that the linked entities belong to the current user
Check to see if the user already has an 'Active' entity of the same type requested.
Check that the linked entities support the requested entity
How should I be doing these checks? I don't want to tie them into my FluentValidation, as that should just validate the requests themselves, and I don't want to make trips to the DB if I'm going to return a Bad Request due to validation anyway.
I could add these checks to each method in the controller, but that doesn't seem very nice. Is there an action filter or something similar that I can plug in which will be called after FluentValidation does its thing but before it hits the controller?
Thanks
Alex
It is possible to create custom Action Filters to do these checks, but in my experience it doesn't typically make sense to do so unless the thing you're trying to check is applicable to almost every request (e.g. make sure the user is logged in).
I would just put the logic for the kinds of checks you're talking about into separate utility classes where it can be easily reused, and make it the responsibility of each action to call the appropriate utility methods based on what checks need to occur for that action.
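For example, a sketch of what I mean; EntityLinkChecker, IEntityRepository, and the controller members here are all hypothetical names, not a real API:

// A reusable utility/service class for the entity-level checks,
// called explicitly from each action that needs them.
public class EntityLinkChecker
{
    private readonly IEntityRepository _repository;

    public EntityLinkChecker(IEntityRepository repository)
    {
        _repository = repository;
    }

    // Check 1: do the linked entities belong to the current user?
    public bool LinkedEntitiesBelongTo(int userId, IEnumerable<int> linkedEntityIds)
    {
        foreach (var id in linkedEntityIds)
        {
            if (_repository.GetOwnerId(id) != userId)
                return false;
        }
        return true;
    }

    // Check 2: does the user already have an 'Active' entity of this type?
    public bool HasActiveEntityOfType(int userId, string entityType)
    {
        return _repository.ExistsActive(userId, entityType);
    }
}

// In the controller action, after FluentValidation has already passed:
public IHttpActionResult Post(CreateEntityRequest request)
{
    if (!_checker.LinkedEntitiesBelongTo(CurrentUserId, request.LinkedEntityIds))
        return BadRequest("Linked entities do not belong to the current user.");

    if (_checker.HasActiveEntityOfType(CurrentUserId, request.EntityType))
        return BadRequest("An active entity of this type already exists.");

    // ... proceed with the create
    return Ok();
}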
I want to build a server for a Point of Sale system on top of ASP.NET Web API.
My plan is to have one controller for bills.
For handling bills from the Web API POST to SQL Server, I am planning to use the micro-ORM PetaPoco.
One bill is going to be written to the database across three tables.
PetaPoco pushes me to have three POCOs, one for each table.
I want to write these three POCOs to the database inside a transaction.
How should I design my controller and classes so they look nice and also work well?
Should I:
Make my controller accept three classes as parameters? Is this possible in ASP.NET Web API at all? Can I deserialize three different classes from one request?
Make my controller accept one class, and then on the server side create the three POCOs from that class and write them to the database? Can someone post what that class, which is going to be split into three parts, would look like?
Make my controller have three methods for posting the separate data (bills-header, bills-payment, bills-articles) one by one?
It will probably be hard in this case to have one transaction spanning three separate calls.
Any other approach ?
I would definitely go with option 2, since your web client should be agnostic of the implementation details; whether you are persisting to one table or three shouldn't really matter to the client.
The controller or the service method would look like this (obviously the naming is not great - you'll have to modify it according to your domain lingo):
public void AddBill(BillDTO bill)
{
    // Map the DTO to your entities
    var bill1 = mapper1.Map(bill);
    var bill2 = mapper2.Map(bill);
    var bill3 = mapper3.Map(bill);

    // Open the transaction (PetaPoco's GetTransaction gives you a disposable scope)
    using (var scope = db.GetTransaction())
    {
        // Do transacted updates here
        db.Save(bill1);
        db.Save(bill2);
        db.Save(bill3);

        // Commit
        scope.Complete();
    }
}
You should read about the DTO pattern; it would answer some of your questions:
1. Web API supports it.
2. That sounds like a DTO, so it is a good solution, as you hide your persistence model from the consumer (see the sketch below).
3. There's no point forcing the consumer to make three calls; each call has its own 'infrastructure' cost, so it is better to have one infrastructure cost instead of three.
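To illustrate point 2, a hypothetical request DTO that the server splits into the three PetaPoco POCOs might look roughly like this (all names are invented; the actual mapping and transaction are the AddBill method shown in the other answer):

// One DTO received by the controller; the client never sees the three-table split.
public class BillDto
{
    public BillHeaderDto Header { get; set; }
    public List<BillPaymentDto> Payments { get; set; }
    public List<BillArticleDto> Articles { get; set; }
}

// Server-side, this gets mapped to the three table POCOs
// (e.g. BillHeader, BillPayment, BillArticle) and saved inside one transaction.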
I'm trying to understand the Repository pattern in order to implement it in my app, and I'm stuck with it in some ways.
Here is a simplified algorithm of how the app accesses the data:
The first time, the app has no data. It needs to connect to a web service to get it. So all the low-level logic of interacting with the web service will be hidden behind the WebServiceRepository class. All the data passed from the web service to the app will be cached.
The next time the app requests the data, the cache will be searched before requesting it from the web service. The cache is represented by a database and XML files and will be accessed through the CacheRepository.
The cached data can be in three states: valid (can be shown to the user), invalid (old data that can't be shown), and partly valid (can be shown but must be updated as soon as possible).
a) If the cached data is valid, then after we get it we can stop.
b) If the cached data is invalid or partly valid, we need to access the WebServiceRepository. If the access to the web service succeeds, the requested data will be cached and then shown to the user (I think this must be implemented as a second call to the CacheRepository).
c) So the entry point of the data access is the CacheRepository. The web service will be called only if there is no fully valid cache.
I can't figure out where to place the logic that verifies the cache (valid/invalid/partly valid), or where to place the call to the WebServiceRepository. I think this logic can't be placed in either of the repositories, because that would violate the Single Responsibility Principle (SRP) from SOLID.
Should I implement some sort of RepositoryService and put all the logic in it? Or maybe there is a way to link the CacheRepository and the WebServiceRepository?
What patterns and approaches are there to implement that?
Another question is how to get partly valid data from the cache and then request the web service in one method call. I am thinking of using delegates and events. Are there other approaches?
Please give some advice: what is the correct way to link all the functionality listed above?
P.S. Maybe I described all this a bit confusingly. I can give some additional clarifications if needed.
P.P.S. By CacheRepository (and WebServiceRepository) I mean a set of repositories: CustomerCacheRepository, ProductCacheRepository and so on. Thanks #hacktick for the comment.
If your web service gives you CRUD methods for different entities, create a repository for every entity root.
If there are customers, create a CustomerRepository. If there are documents with attachments as children, create a DocumentRepository that returns documents with attachments as a property.
A repository is only responsible for a specific type of entity (i.e. customers or documents). Repositories are not used for "cross-cutting concerns" such as caching (i.e. your example of a CacheRepository).
Inject (e.g. via StructureMap) an IDataCache instance into every repository.
A call to Repository.GetAll() returns all entities for the current repository. Every entity is registered in the cache; note the id of each object in the cache.
A call to Repository.FindById() checks the cache first for the id. If the object is valid, return it.
Notifications about invalidation of an object are routed to the cache. You could implement client-side invalidation or push messages from the server to the client, for example via message queues.
Information about whether an object is currently valid should not be stored in the entity object itself, but rather only in the cache.
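Putting those pieces together, a rough sketch of such a repository; IDataCache, ICustomerWebService and their members are invented names for the injected cache and web service client, not an existing library:

// Repository for one entity root; caching is injected, not a separate "CacheRepository".
public class CustomerRepository
{
    private readonly IDataCache _cache;
    private readonly ICustomerWebService _webService;

    public CustomerRepository(IDataCache cache, ICustomerWebService webService)
    {
        _cache = cache;
        _webService = webService;
    }

    public Customer FindById(int id)
    {
        // Check the cache first; validity is tracked by the cache, not by the entity.
        Customer cached;
        if (_cache.TryGetValid(id, out cached))
            return cached;

        // Cache miss or invalid/partly valid entry: go to the web service and refresh the cache.
        Customer fresh = _webService.GetCustomer(id);
        _cache.Put(id, fresh);
        return fresh;
    }
}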
We have an MVC3 application in which we have created many small actions and views to place data wherever we need it. For instance, if it were a blog and we wanted to show comments, we would have a comment action and view that we can place wherever we want: a user profile view, a blog post view, etc.
The problem this has caused is that each small view or action needs to make a call, usually to the same service, multiple times per page load because of all the other small views in our application. So on a very large page containing these small views, we could have 80+ SQL calls, with 40% of them being duplicates, and the page slows down. Our current solution is to cache some data and pass some data around in the ViewBag where we can: if you want, say, a user's profile, you check to see if it's in the cache or the ViewBag, and if it isn't, you ask for it.
That feels really dirty as a design pattern, and the ViewBag approach seems awful since it has to be passed from the top down. We've added some data into HttpContext.Current.Items to make it per-request (instead of caching, since that data can change), but there has to be a solution that doesn't feel wrong and is actually clean.
EDIT
I've been asked to be more specific, and while this is an internal business application, I can't give away too much of the specifics.
So to put this into a software analogy, let's compare this to Facebook. Imagine this MVC app had an action for each Facebook post, then under that action another action for the like button and number of comments, then another action for showing the top comments to the user. The way our app is designed, we would get the current user's profile in each action (thus at least four times in the situation above), and then the child action would get the parent wall post to verify that you have permission to see it. Now you could consider caching the calls to each security check, wall post, etc., but I feel like caching is for things that will be needed over the lifetime of the app, not just little pieces here and there to correct a mistake in how your application is architected.
Are you able to replace any of your @Html.Action() calls with @Html.Partial() calls, passing in the model data instead of relying on an action method to get it from the db?
You could create a CompositeViewModel that contains your other ViewModels as properties. For example, it might contain a UserViewModel property, a BlogPostViewModel property, and a Collection<BlogComment> property.
In your action method that returns the container / master view, you can optimize the data access. It sounds like you already have a lot of the repeatable code abstracted through a service, but you didn't post any code so I'm not sure how DRY this approach would be.
But if you can do this without repeating a lot of code from your child actions, you can then use @Html.Partial("~/Path/to/view.cshtml", Model.UserViewModel) in your master view, and keep the child action method for other pages that don't have such a heavy load.
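A sketch of that idea; the view model, property, and partial view names below are placeholders:

// Composite view model assembled once by the master action:
public class BlogPostPageViewModel
{
    public UserViewModel UserViewModel { get; set; }
    public BlogPostViewModel BlogPostViewModel { get; set; }
    public Collection<BlogComment> Comments { get; set; }
}

Then the master view renders partials from the already-loaded model instead of hitting the database again via child actions:

@Html.Partial("_UserProfile", Model.UserViewModel)
@Html.Partial("_Comments", Model.Comments)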
I see two potential places your code might be helped based on my understanding of your problem.
You have too many calls per page. In other words your division of work is too granular. You might be able to combine calls to your service by making objects that contain more information. If you have a comments object and an object that has aggregate data about comments, maybe combine them into one object/one call. Just a thought.
Caching more effectively. You said you're already trying to cache the data, but want a possibly better way to do this. On a recent project I worked on I used an AOP framework to do caching on WCF calls. It worked out really well for development, but was ultimately too slow in a heavy traffic production website.
The code would come out like this for a WCF call (roughly):
[Caching(300)]
Comment GetComment(int commentId);
You'd just put a decorator on the WCF call with a time interval, and the AOP framework would take care of the rest as far as caching goes. Granted, we also used an external caching framework (AppFabric) to store the results of the WCF calls.
Aspect Oriented Framework (AOP): http://en.wikipedia.org/wiki/Aspect-oriented_programming
We used Unity for AOP: Enterprise Library Unity vs Other IoC Containers
I would strongly consider trying to cache the actual service calls though, so that you can call them to your heart's content.
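If a full AOP framework is more than you want to take on, a plain caching decorator around the service interface achieves a similar effect. This is only a sketch: ICommentService and the wrapping class are invented names, with System.Runtime.Caching.MemoryCache as the store.

using System;
using System.Runtime.Caching;

// Hand-rolled alternative to AOP: wrap the service client in a caching decorator.
public class CachingCommentService : ICommentService
{
    private readonly ICommentService _inner;
    private readonly MemoryCache _cache = MemoryCache.Default;

    public CachingCommentService(ICommentService inner)
    {
        _inner = inner;
    }

    public Comment GetComment(int commentId)
    {
        string key = "comment:" + commentId;
        var cached = (Comment)_cache.Get(key);
        if (cached != null)
            return cached;

        // Cache miss: call the real service and keep the result for 300 seconds.
        var comment = _inner.GetComment(commentId);
        _cache.Set(key, comment, DateTimeOffset.UtcNow.AddSeconds(300));
        return comment;
    }
}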
The best thing to do is to create an ActionFilter that will create and tear down your persistence mechanism. This will ensure that the most expensive part of data access (i.e. creating the connection) happens only once per request.
public class SqlConnectionActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var sessionController = filterContext.Controller;
        if (filterContext.IsChildAction)
            return;
        // Create your SqlConnection here
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.IsChildAction)
            return;
        // Commit transaction & tear down SqlConnection
    }
}
The thing is: if you are doing the query 80 times, then you are hitting the db 80 times. Putting a request-scoped cache in place is the best solution. The most elegant way of implementing it is through AOP, so your code doesn't need to worry about that problem.
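If you'd rather not bring in an AOP framework, a small helper over HttpContext.Current.Items gives you a request-scoped cache directly; this is a sketch, and the RequestCache/GetOrAdd names are mine, not a framework API:

using System;
using System.Web;

// Request-scoped cache: entries live in HttpContext.Current.Items,
// so they are shared by all child actions of one request and discarded afterwards.
public static class RequestCache
{
    public static T GetOrAdd<T>(string key, Func<T> factory)
    {
        var items = HttpContext.Current.Items;
        if (!items.Contains(key))
            items[key] = factory();
        return (T)items[key];
    }
}

// Usage inside any action or child action (names hypothetical):
// var profile = RequestCache.GetOrAdd("user-profile-" + userId,
//     () => _userService.GetProfile(userId));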