I am using EF Code First in one of my MVC 3 projects. I have a question about which patterns to use when passing a complex EF POCO object to and from the views.
For example, a customer object has a list of orders, and each order has a list of items. The customer object is sent to the view. The view updates the customer object and its nested objects (orders, items), then sends it back to the controller. The controller uses EF to persist the customer object.
My questions are the following:
Should I serialize the EF POCO object to a JSON object so I can use it inside the view?
How can I re-construct the customer object when I receive updates from the view?
After the customer object is reconstructed, is it possible to save the entire object graph (customer, orders, items) in one shot?
Thanks
I tend to stay away from using the EF POCO objects as the models for my views. I generally create view models from one or more POCO objects, since what I need in a view rarely matches a single EF POCO object exactly. The view models then create the EF objects that are saved to the DB.
Should I serialize the EF POCO object to a JSON object so I can use it inside the view?
No.
How can I re-construct the customer object when I receive updates from the view? Don't. Let the default model binder materialize the POSTed data into a viewmodel (or editmodel), and use that data to issue commands to a lower layer.
After the customer object is reconstructed, Is it possible to save the entire object graph (customer, orders, items) in one shot? It is, but you shouldn't. Instead, deal with each update individually based on your use cases.
Follow mojo722 and Pluc's advice here. Don't use EF POCO entities in your MVC layer. Use viewmodels. Here's how it would work:
Controller needs data, it asks a lower layer. The lower layer gets the data and returns entities (or better yet, entity views).
Controller converts entities to viewmodels (AutoMapper is good for this, but you can map manually as well).
Controller passes viewmodels to view.
View sends HTTP POST data from HTML form.
Default model binder converts HTTP POSTed form data to a viewmodel.
Controller receives viewmodel data, issues a command to the lower layer.
Lower layer uses EF to save new entity state.
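The flow above could be sketched roughly like this. Note that `CustomerViewModel`, `ICustomerService`, and the specific action names are illustrative assumptions, not something from the original answer:

```csharp
using System.Web.Mvc;
using AutoMapper;

// Sketch only: ICustomerService represents the "lower layer";
// CustomerViewModel is a flat class shaped for this one view.
public class CustomerController : Controller
{
    private readonly ICustomerService _service;

    public CustomerController(ICustomerService service)
    {
        _service = service;
    }

    public ActionResult Edit(int id)
    {
        var customer = _service.GetCustomer(id);            // lower layer returns the entity
        var vm = Mapper.Map<CustomerViewModel>(customer);   // entity -> viewmodel (AutoMapper)
        return View(vm);                                    // viewmodel -> view
    }

    [HttpPost]
    public ActionResult Edit(CustomerViewModel vm)          // default model binder fills the viewmodel
    {
        if (!ModelState.IsValid)
            return View(vm);

        _service.UpdateCustomer(vm.Id, vm.Name);            // issue a command, not an entity graph
        return RedirectToAction("Index");
    }
}
```

The key point is that the entity never crosses the controller boundary in either direction; the viewmodel does.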
I'm new to C# and .NET Core.
I'm wondering why, when editing a model using [Bind], model binding creates a new model instance and binds the attributes from the POST, but if you do not include all your fields in the form POST as hidden inputs, the binding nulls them out.
Shouldn't it load the model, update the bound properties, and leave the others alone?
For example if I'm updating a person and person has
Id, Name, Age, Updated, Created
Edit(int id, [Bind("Id,Name,Age")] Person p)
When I go to _context.Update(p), it nulls out Updated and Created because they weren't bound.
WHY does it work like that?
How can I make it only update the bound parameters without nulling out the ones I don't need to load?
What you pass in is a deserialized block of data that MVC maps onto an entity definition. It doesn't auto-magically open a DbContext, load the entity, and overwrite values; it just creates an instance of the entity and copies the values across. Everything else is left at its defaults.
As a general rule I advise against ever passing entities from the client to the server, to avoid confusion about what is being sent back. When performing an update, accept a view model with the applicable properties to update; ideally both the data model and the view model should include a row version (i.e. a Timestamp column in SQL Server, which can be converted to/from a Base64 string to send and compare in the view model).
From there, when performing an update, you fetch the entity by ID, compare the timestamp, and then leverage AutoMapper to copy the data from the view model into the entity, or copy the values across manually.
That way, when your ViewModel comes back with the data to update:
using (var context = new AppDbContext())
{
    var entity = context.Persons.Single(x => x.Id == id);
    if (entity.TimestampBase64 != viewModel.TimestampBase64)
    {
        // Handle the fact that the data has changed since the client last loaded it.
    }

    // AutoMapper's Map(source, destination): copy the view model values onto the tracked entity.
    Mapper.Map(viewModel, entity);
    context.SaveChanges();
}
You could use the entity definition as-is and still load the existing entity and use AutoMapper to copy values from the transit entity class to the DbContext-tracked one; however, it's better to avoid confusing tracked "real" entity instances with potentially incomplete, untracked transit instances. Code will often have methods that accept entities as parameters to do things like validation, calculations, etc., and it can get confusing if those methods assume they will get "real" entities but are called from somewhere that only has a transient DTO flavour of an entity.
It might seem simpler to take an entity in and just call DbContext.Update(entity) with it, however this leaves you with a number of issues, including:
You need to pass everything about the entity to the client so that the client can send it back to the server. This requires hidden inputs, or serializing the entire entity in the page, which exposes more of your domain model to the browser and increases the payload size both to the client and back.
Because you need to serialize everything, "quick fixes" like serializing the entire entity in a <script> block for later use can lead to lazy-load "surprises", as the serializer will attempt to touch all navigation properties.
Passing an entity back to the server to perform an Update() means trusting the data coming back from the client. Browser debug tools can intercept a form submit or Ajax POST and tamper with the payload, which opens the door to unexpected tampering. DbContext.Update also results in an UPDATE statement that overwrites all columns whether anything changed or not, whereas change-tracked entities build UPDATE statements that include only the values that actually changed, and only when something changed at all.
I am trying to pass a complex type to Web API, having this on my ApiController:
[HttpPost]
public void DoSomeCrud(JObject data)
{
    ComplexModel item = data.ToObject<ComplexModel>();
    // Do some logic here
}
My issue is that one of the properties inside my ComplexModel is an Entity Framework entity. I have no problem passing that entity if it is detached; however, as soon as I get the entity from the DbContext, the model cannot be passed to Web API as expected.
My question is: is there any way to detach my entity while preserving its references to foreign keys? I need those references on the Web API side.
Thanks
It is not best practice to use models from Entity Framework as data transfer objects (DTOs) for Web API, because you can run into serialization problems: the objects EF hands you are actually proxies that support lazy loading and navigation properties (unless you detach them).
Best practice, for separation of concerns, is to define your own DTO objects instead of using entity models directly from EF.
As a simple example, if you have a Customer entity, you should also have a CustomerDto that projects whichever properties of Customer you want.
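A minimal sketch of such a projection (property names like `OrderCount` and the `context.Customers` set are illustrative assumptions):

```csharp
using System.Linq;

// Plain object sent over the wire; no proxies, no navigation properties.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int OrderCount { get; set; } // flattened instead of exposing the Orders collection
}

public class CustomerQueries
{
    // Projecting inside the LINQ-to-Entities query means EF never
    // materializes proxy instances for the DTO at all.
    public System.Collections.Generic.List<CustomerDto> GetCustomers(MyDbContext context)
    {
        return context.Customers
            .Select(c => new CustomerDto
            {
                Id = c.Id,
                Name = c.Name,
                OrderCount = c.Orders.Count()
            })
            .ToList();
    }
}
```

Because the DTO is a plain class, it serializes cleanly regardless of whether the source entity was attached or detached.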
I have to show an object (a POCO class) in a form.
In my controller, I get the object's data from the object's repository.
But in the form I have to show some extra data about the object as well, like the country name instead of the CountryId, the number of persons assigned (fetched from a 1:N relation), the history of edits (fetched from yet another table), and the bit CanBeCancelled.
The question is: where should I put this logic?
I came up with these alternatives:
The repository itself: create an extra function which returns this exact viewmodel
A conversion service, which converts the class to the viewmodel (it knows where to get the data)
The controller: it knows what data to show in the view(model), so it should get all the data from the different repositories
What is a good way to place this logic? (By "this logic" I mean the knowledge that the number of persons is fetched by repository A, the history by repository B, the country name by the CountryRepository, and the boolean CanBeCancelled by the StateEngine service.)
If there are no other constraints, I would follow the simple rule stated by the Single Responsibility Principle: each layer does its own job and presumes that the other layers do theirs properly. In this case, repositories return the business objects, services process the business objects, and the controller only knows how to display the object properly. In detail:
The number of persons, the history, and the country name are already in storage and should come from there. So the repository should return a complete object, as long as the operation concerns a single entity.
When several entities are involved in the process, the service is responsible for calling the corresponding repositories and constructing the object.
Things that are figured out according to business rules are the job of the service object as well.
The controller receives the complete object by calling a single method on a service, and displays it.
The benefits of this approach become evident once you decide to change something, say the business rule about when the object is allowed to be cancelled. This has nothing to do with access to the database and does not involve the application UI, so the only place you need to change is the service implementation. This approach allows you to do just that, without altering the code of the repositories and controllers.
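The layering described above could be sketched like this. All interface and type names here (`IPersonRepository`, `IStateEngine`, `ObjectDetailsViewModel`, etc.) are assumptions invented to match the question, not part of it:

```csharp
// The service is the only place that knows WHICH repository supplies each piece.
public class ObjectDetailsService
{
    private readonly IPersonRepository _persons;     // "repository A": assigned persons
    private readonly IHistoryRepository _history;    // "repository B": edit history
    private readonly ICountryRepository _countries;  // country names
    private readonly IStateEngine _stateEngine;      // business rule: CanBeCancelled

    public ObjectDetailsService(IPersonRepository persons, IHistoryRepository history,
                                ICountryRepository countries, IStateEngine stateEngine)
    {
        _persons = persons;
        _history = history;
        _countries = countries;
        _stateEngine = stateEngine;
    }

    // The controller calls this one method and just displays the result.
    public ObjectDetailsViewModel GetDetails(MyObject obj)
    {
        return new ObjectDetailsViewModel
        {
            CountryName = _countries.GetById(obj.CountryId).Name,
            AssignedPersonCount = _persons.CountAssignedTo(obj.Id),
            History = _history.GetEditsFor(obj.Id),
            CanBeCancelled = _stateEngine.CanBeCancelled(obj)
        };
    }
}
```

If the cancellation rule changes later, only `IStateEngine`'s implementation changes; repositories and controllers are untouched.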
Just curious.
Say I have a Base entity and I'm deriving about 10 different child entities from it using the Table Per Type method. I also have a generic repository which can fetch me data from each of these child entities. I eventually want to map each of the child entities to a separate view model and link each of the view models to its own grid (JqGrid) on my website, with each grid having its own Create, Read, Update, Delete methods. I can do all of that, but I'm not sure what's the proper way to go about it while keeping code to a minimum.
Right now, I have every field (from both the parent and child entity) defined in each of my view models. Is it better to have a "parent" view model and derive the child view models from it in order to mimic the inheritance structure of the entities? I wouldn't think so... having inheritance in view models doesn't make much sense to me.
Also, I really don't want to duplicate CRUD operations for each grid. Is that considered good practice? Should each view model have its own set of CRUD operations in this case?
Take 'Read' for instance. I'm basically returning JSON data based on the ID (key) field of the view model for each grid. And since all grids will have this ID column (part of the parent entity), should I have only one function that takes care of this for all grids? Should I be using reflection? Should I be making use of polymorphic properties of the parent/child entities?
Or is it better to keep these operations separate for each grid?
Hmmm..
It depends.
On top of all rules I would say: Keep it simple and don't repeat yourself.
Some comments:
Say I have a Base entity and I'm deriving about 10 different child entities from it using the Table Per Type method.
Only as a side note: you are aware of the poor performance of TPT (at least for EF < 5), right? It is something to keep in mind, especially if the tables can be large or you have a deep inheritance hierarchy (entities derived from derived entities, etc.).
I eventually want to map each of the child entities to a separate view model
Which is in my opinion a good idea, if only for the different validation rules you might apply to the ViewModels of each derived entity.
Is it better to have a "parent" view model and then deriving the child view models from it in order to mimic the inheritance structure of the entities?
Mimicking the inheritance of the entities is not a reason, in my opinion. But if you have, for example, view validation rules on base model properties that apply to all derived entities, why not keep those rules in one place, like a base ViewModel? Otherwise you would have to repeat them in every derived ViewModel.
Should each view model have its own set of CRUD operations in this case?
If the derived entities are "flat" (have only scalar properties and no navigation properties) you only would need something like:
Read: context.BaseEntities.OfType<T>().Where(...)...
Add: context.BaseEntities.Add(entity);
Delete: context.BaseEntities.Remove(entity);
Update: context.Entry(entity).State = EntityState.Modified;
All these methods work for both base and derived entities, so why would you want to create such methods for each entity separately? You might need separate methods in more complex situations, though; for example, if derived entity number 7 has a navigation property to another entity and your view for that entity allows changing relationships to it. So, it depends. I would not start by duplicating methods that all do the same thing; rather, I'd refactor later when I see that I need special handling (unless you can foresee from the beginning that special handling will be needed at some point during the project's evolution).
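Those four operations can live in a single generic repository. A sketch, where `BaseEntity`, `MyContext`, and the method names are assumed for illustration:

```csharp
using System.Data.Entity; // EF 5/6-style DbContext API
using System.Linq;

// One repository serves the TPT root and all ~10 derived types.
public class BaseEntityRepository
{
    private readonly MyContext _context;

    public BaseEntityRepository(MyContext context)
    {
        _context = context;
    }

    // Read: constrain the generic parameter to the hierarchy root and filter by type.
    public IQueryable<TChild> Query<TChild>() where TChild : BaseEntity
    {
        return _context.BaseEntities.OfType<TChild>();
    }

    public TChild Find<TChild>(int id) where TChild : BaseEntity
    {
        return _context.BaseEntities.OfType<TChild>().Single(e => e.Id == id);
    }

    // Add/Remove/Update work polymorphically on the base set.
    public void Add(BaseEntity entity)
    {
        _context.BaseEntities.Add(entity);
    }

    public void Remove(BaseEntity entity)
    {
        _context.BaseEntities.Remove(entity);
    }

    public void MarkModified(BaseEntity entity)
    {
        _context.Entry(entity).State = EntityState.Modified;
    }
}
```

Derived entity number 7 with its navigation property would then get its own specialized method (e.g. with an `Include`) only when that need actually materializes.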
I'm basically returning JSON data based on the ID (key) field of the view model for each grid. And since all grids will have this ID column (part of the parent entity), should I only have one function that takes care of this for all grids?
On the repository/service side, yes, if only scalar properties are loaded for each derived entity. If you need navigation properties for derived entity 7, you may need something special (maybe an Include). Projecting the data into the ViewModels might be specific to each entity, because you have separate ViewModels per entity.
Should I be using reflection?
WTF? Why that? Better not.
Should I be making use of polymorphic properties of the parent/child entities?
??? (<- this is supposed to be a "Confused"-Emoticon)
I have to create a complex "read model" (cart) with CartLine and some other information. At the moment I have a ViewModel based on many other objects (Cart, Operation...); the logic to build this object is spread across a repository and a controller (not in an aggregate), and I want to refactor this code so that the repository returns the "read model" directly (with formatted text, prices...).
I am only allowed to use stored procedures (client's policy) with Dapper. I am looking for a better way to create this read model:
1. Call the existing stored procedures, map the stored proc results onto DTOs, and then map the results again onto my read model
public class Cart
{
    public Cart(CartDb cartDb, IEnumerable<CartDetailDb> cartDetailsDb,
                OperationDB operationDb)
    {
        // Code
    }
}
-> This gives two levels of objects, which I think is a mess
2.Create stored procedures that will map directly to my read model (to avoid the DTOs)
-> I don't like this method because I could end up putting some logic in the stored procedures
3.Use ViewModel
Other suggestions?
If I understand you correctly, your data model for this entity does not line up exactly with your domain model for this read model. You would also like your repository layer to return the domain-model version directly, without requiring some intermediary DTO layer.
In my opinion, option #1 makes the most sense. Because there is an impedance mismatch between your data model and your domain model, mapping logic will be required somewhere to translate between the two. The most appropriate place for this logic is your repository layer, since the entire purpose of that layer is to map objects between the domain and their persistence.
This does not mean you need a DTO layer mapped from your stored procedure results that you then just turn around and remap to a domain object. You can perform the translation logic directly on the result sets returned by your data access layer and turn them into a domain object in one step.
In the case of data access, the need for DTOs is largely determined by what technology you are using. For example, if you are using the ADO.NET libraries (SqlCommand, SqlConnection, etc.) then a DTO is probably not required. However, if you're using an ORM like Entity Framework or NHibernate, then it may make sense to use the objects generated by those tools strictly as DTOs and map them to full-fledged domain objects. Of course, since those objects are generated for you, that pretty much eliminates any maintenance issues with having a DTO layer.
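With Dapper (which the question is constrained to), the one-step translation can use `QueryMultiple` to map a stored procedure's result sets straight onto the read model. A sketch, where the procedure name, parameter, and column alignment with `Cart`/`CartLine` are all assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public class CartReadRepository
{
    private readonly string _connectionString;

    public CartReadRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Assumes a hypothetical proc "GetCartWithDetails" returning two result
    // sets whose columns line up with Cart and CartLine respectively.
    public Cart GetCart(int cartId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var multi = connection.QueryMultiple(
            "GetCartWithDetails",
            new { CartId = cartId },
            commandType: CommandType.StoredProcedure))
        {
            var cart = multi.Read<Cart>().Single();       // first result set -> Cart
            cart.Lines = multi.Read<CartLine>().ToList(); // second result set -> its lines
            return cart;
        }
    }
}
```

This keeps the mapping in the repository, uses the existing stored procedures, and skips the intermediate `CartDb`/`CartDetailDb` DTO hop.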
This also does not mean you should place the translation logic in your stored procedures in order to make the data layer return result sets that exactly match your domain model. I would avoid this at all costs; as you said yourself, it puts domain logic in your database.
Finally, you mention that the domain object contains "formatted text" such as price. I would point out that most of the time, text formatting is actually part of the UI layer, not your domain model. That means that if you have a property on your model such as Price, it should be represented as a Double or Decimal, not a String formatted as currency. In other words, the value should be 5.00 and not "$5.00".
For handling formatting translations like these, it's appropriate to wrap your domain object with a ViewModel, like you mentioned, to handle the translation from the domain to the UI layer. Using a strict separation of concerns in situations like these helps create a more robust system that is easier to maintain.
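A minimal sketch of that wrapping (the class and property names are illustrative, and the fixed en-US culture is an assumption for the example):

```csharp
using System.Globalization;

// Domain model keeps the raw value.
public class CartLine
{
    public decimal Price { get; set; } // stored as 5.00m, never "$5.00"
}

// The ViewModel owns the presentation.
public class CartLineViewModel
{
    private readonly CartLine _line;

    public CartLineViewModel(CartLine line)
    {
        _line = line;
    }

    // Currency formatting happens only at the UI boundary.
    public string DisplayPrice
    {
        get { return _line.Price.ToString("C", CultureInfo.GetCultureInfo("en-US")); }
    }
}
```

The domain stays culture-agnostic and testable, and swapping display cultures later touches only the ViewModel.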