I hear that for a small project DTOs are not recommended, for example here and here. I wonder if it is OK for a considerably small project (team-wise) to merge non-persistent properties into the domain models? E.g.:
namespace Domain.Entities
{
    public class Candidate : BaseEntity
    {
        public Candidate()
        {
            // some construction code
        }

        #region persistent properties
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public bool? IsMale { get; set; }
        public DateTime BirthDate { get; set; }
        // other properties ...
        #endregion

        #region non-persistent properties
        public string FullName => $"{FirstName} {LastName}";
        #endregion
    }
}
Is this just keeping it simple, or am I losing anything valuable this way?
I'm not advocating a particular approach, just sharing information...
I wouldn't put your computation of FullName in a DTO. A DTO is just a simple object, really more of a struct, and shouldn't have any logic in it. The purpose of a DTO is to move data from one layer/tier to another and to create a layer of indirection that allows your domain model to evolve independently of your clients. FullName as a non-persistent property on your entity makes more sense here than in the DTO. If you want to go full enterprise, it would live in a transformer/adapter.
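For illustration, here is a minimal sketch of that split (CandidateDto and CandidateAdapter are made-up names, not from the question): the DTO stays logic-free, and the adapter owns the FullName computation.

public class CandidateDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get; set; } // plain data, populated by the adapter
}

public static class CandidateAdapter
{
    public static CandidateDto ToDto(Candidate entity)
    {
        return new CandidateDto
        {
            FirstName = entity.FirstName,
            LastName = entity.LastName,
            FullName = $"{entity.FirstName} {entity.LastName}" // the logic lives here, not in the DTO
        };
    }
}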
If your project is really small, and is likely never going to grow, then abandoning the DTO can be acceptable. Just keep in mind, that if your project grows you may have to do some refactoring, and there are some other things to consider...
Another benefit of the DTO is keeping data where it needs to stay. For example, if you have sensitive data in your entity object and you don't put something in place to prevent it from being returned in a web request, you have just leaked information from your app-server layer (think of the password field in your user entity). A DTO requires you to think about what is being sent to/from the client and makes including data an explicitly intentional act rather than an unintentional one. DTOs also make it easier to document what is really required for a client request.
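To make that concrete, a hypothetical sketch (User and UserDto are made-up names): returning the entity directly would serialize the password hash along with everything else, while the DTO makes the omission an explicit, reviewable decision.

public class User
{
    public int Id { get; set; }
    public string Email { get; set; }
    public string PasswordHash { get; set; } // must never leave the server
}

public class UserDto
{
    public int Id { get; set; }
    public string Email { get; set; }
    // no PasswordHash: leaving it out here is deliberate and visible in code review
}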
That being said, each DTO is now code you have to write and maintain, which is the main reason to avoid them, and a model change can have a noticeable ripple effect through the system.
It comes down to deciding how you want to handle potential data leakage, how you want to manage your clients (if you can), and how complex your model may get.
I'm a novice trying to wrap my head around MVVM. I'm trying to build something and have not found an answer on how to deal with this:
I have several models/entities, some of which have logical connections, and I am wondering where/when to bring it all together nicely.
Assume we have a PersonModel:
public class PersonModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    ...
}
And a ClubModel:
public class ClubModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    ...
}
And we have MembershipModel (a Person can have several Club memberships):
public class MembershipModel
{
    public int Id { get; set; }
    public int PersonId { get; set; }
    public int ClubId { get; set; }
}
All these models are stored somewhere, and the models are persisted "as is" in that data storage.
Assume we have separate repositories in place for each of these models that supply the standard CRUD operations.
Now I want to create a view model to manage all Persons, e.g. renaming, adding memberships, etc. -> PersonManagementViewModel.
In order to nicely bind a Person with all its properties and memberships, I would also create a PersonView(?)Model that can be used in the PersonManagementViewModel. It could contain, e.g., view-relevant properties and also the memberships:
public class PersonViewModel : PersonModel
{
    public Color BkgnColor { get { return SomeLogic(); } }
    public IEnumerable<MembershipModel> Memberships { get; set; }
    ...
}
My question here is: how would I smartly go about getting the membership info into the PersonViewModel? I could of course create an instance of the MembershipRepo directly in the PersonViewModel, but that doesn't seem nice, especially if you have a lot of Persons. I could also create all repositories in the PersonManagementViewModel and then pass references into the PersonViewModel.
Or does it make more sense to create another layer (e.g. a "service" layer) that primarily returns the PersonViewModel, uses the individual repositories for that, and is called from the PersonManagementViewModel (thus removing the burden from it and allowing for re-use of the service elsewhere)?
I'd be happy to have conceptual mistakes pointed out, or to get suggestions for further reading.
Thanks
You are creating a separate model for each table, I guess. That in itself is fine, but your models are fragmented. You could consider putting related data together using an Aggregate Root, with a repository per aggregate root instead of per model. This concept is discussed under DDD. But, as you said, you are new to MVVM, and there is already a lot to learn; involving DDD at this stage will only complicate things.
If you decide to keep things as they are, the best and quickest approach I can suggest is what you are doing now: get an instance of the model from the data store in the view model (or wherever), and map it somehow. Tools like AutoMapper are good, but they do not fit every situation. Do not hesitate to map by hand if needed. You can also use a mixed approach (AutoMapper + mapping by hand) to simplify things.
About a service layer: sure, why not. It totally depends on you. If used, this layer typically contains your business logic, mapping, formatting of data, validations, etc. Again, each of those things is up to you.
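To sketch what such a service could look like (the repository interfaces below are assumptions, not from the question): it composes a person and their memberships and maps by hand, and the PersonManagementViewModel then only talks to the service.

using System.Collections.Generic;

public interface IPersonRepository
{
    PersonModel GetById(int id);
}

public interface IMembershipRepository
{
    IEnumerable<MembershipModel> GetByPersonId(int personId);
}

public class PersonService
{
    private readonly IPersonRepository _persons;
    private readonly IMembershipRepository _memberships;

    public PersonService(IPersonRepository persons, IMembershipRepository memberships)
    {
        _persons = persons;
        _memberships = memberships;
    }

    public PersonViewModel GetPerson(int personId)
    {
        // map by hand: copy the flat properties, then attach the related data
        PersonModel person = _persons.GetById(personId);
        return new PersonViewModel
        {
            Id = person.Id,
            Name = person.Name,
            Memberships = _memberships.GetByPersonId(personId)
        };
    }
}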
My suggestions:
Focus on your business objectives first.
Design patterns are good and helpful. They are the distilled experience of many exceptionally capable developers solving specific problems. Do use them, but do not stick to them unnecessarily (see the suggestion above). In short: avoid over-engineering. Design patterns were created to solve specific problems; if you do not have the problem, do not clutter your code with an unnecessary pattern.
Read about Aggregate Root, DDD, Repository etc.
Try your best to avoid the Generic Repository pattern.
In DDD it is customary to protect an entity's properties like this:
public class Customer
{
    private Customer() { }
    public Customer(int id, string name) { /* ...populate properties... */ }
    public int Id { get; private set; }
    public string Name { get; private set; }
    // and so on...
}
EF uses reflection, so it can handle all those private members.
But what if you need to attach an entity without loading it (a very common thing to do):
var customer = new Customer { Id = getIdFromSomewhere() }; // can't do this!
myContext.Set<Customer>().Attach(customer);
This won't work because the Id setter is private.
What is a good way to deal with this mismatch between the language and DDD?
Ideas:
make Id public (and break DDD)
create a constructor/method to populate a dummy object (makes no sense)
use reflection ("cheat")
???
I think the best compromise is to use reflection to set that private Id property, just like EF does. Yes, it's reflection, and slow, but still much faster than loading from the database. And yes, it's cheating, but at least as far as the domain is concerned, there is officially no way to instantiate that entity without going through the constructor.
How do you handle this scenario?
PS: I did a simple benchmark, and it takes about 10s to create a million instances using reflection. So compared to hitting the database, or to the reflection performed by EF itself, the extra overhead is tiny.
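For reference, a benchmark along those lines could be as simple as this sketch (not the exact code used; Customer is the class from above, and System and System.Diagnostics are assumed imported):

var idProperty = typeof(Customer).GetProperty("Id");
var sw = Stopwatch.StartNew();
for (int i = 0; i < 1_000_000; i++)
{
    // non-public parameterless constructor, then the private setter, both via reflection
    var customer = (Customer)Activator.CreateInstance(typeof(Customer), nonPublic: true);
    idProperty.SetValue(customer, i);
}
sw.Stop();
Console.WriteLine(sw.Elapsed); // roughly 10s on the machine described above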
"customary" implicitly means it's not a hard set rule, so if you have specific reasons to break those rules in your application, go for it. Making the property setter public would be better than going into reflection for this: not only because of performance issues, but also because it makes it much easier to put unwanted side-effects in your application. Reflection just isn't the way to deal with this.
But I think the first question here is why you would want the ID of an object to be set from the outside in the first place. EF uses the ID primarily to identify objects; you should not use the ID for any other logic in your application.
Assuming you have a strong reason to want to change the ID, I actually think you gave the answer yourself in the source you put in the comments:
So you would have methods to control what happens to your objects and
in doing so, constrain the properties so that they are not exposed to
be set or modified “willy nilly”.
You can keep the private setter and use a method to set the ID.
EDIT:
After reading this, I did some more testing myself, and you could have the following:
public class Customer
{
    private Customer() { }
    public Customer(int id) { /* only sets id */ }
    public Customer(int id, string name) { /* ...populate properties... */ }

    public int Id { get; private set; }
    public string Name { get; private set; }
    // and so on...

    public void SetName(string name)
    {
        // set name, perhaps check a condition first
    }
}
public class MyController
{
    //...
    public ActionResult AttachCustomer() // hypothetical action, just for illustration
    {
        // myContext, order, and getIdFromSomewhere() are assumed available in the surrounding code
        var customer = new Customer(getIdFromSomewhere());
        myContext.Set<Customer>().Attach(customer);
        order.SetCustomer(customer);

        // sets the customer on the order and saves it, without actually changing
        // the customer: it is still tracked as Unchanged
        myContext.SaveChanges();
    }
    //...
}
This code leaves the private setters as they were (you will of course need methods for editing), and only the required changes are pushed to the db afterwards. As is also explained in the link above, only changes made after attaching are used, and you should make sure you don't manually set the state of the object to Modified, or else all properties are pushed (potentially emptying your object).
This is what I'm doing, using reflection. I think it's the best bad option.
var customer = CreateInstanceFromPrivateConstructor<Customer>();
SetPrivateProperty(p => p.ID, customer, 10);
myContext.Set<Customer>().Attach(customer);

//...and all the above was just for this:
order.SetCustomer(customer);
myContext.SaveChanges();
The implementations of those two reflection methods aren't important. What is important:
EF uses reflection for lots of stuff
Database reads are much slower than these reflection calls (the benchmark I mentioned in the question shows how insignificant this perf hit is: about 10s to create a million instances)
The domain stays fully DDD: you can't create an entity in a weird state, or create one without going through the constructor (I did that above, but I cheated for a specific case, just like EF does)
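For completeness, a minimal sketch of what those two helpers might look like (the real implementations may differ; the expression handling here is deliberately simplistic, and with "using static ReflectionHelpers;" the calls read exactly as in the snippet above):

using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ReflectionHelpers
{
    public static T CreateInstanceFromPrivateConstructor<T>()
    {
        // invokes the non-public parameterless constructor
        return (T)Activator.CreateInstance(typeof(T), nonPublic: true);
    }

    public static void SetPrivateProperty<T, TProp>(
        Expression<Func<T, TProp>> property, T instance, TProp value)
    {
        // resolve the property from the lambda (p => p.ID), then invoke its private setter
        var member = (MemberExpression)property.Body;
        var propertyInfo = (PropertyInfo)member.Member;
        propertyInfo.SetValue(instance, value);
    }
}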
I'm fairly new to using ViewModels, and I wonder: is it acceptable for a ViewModel to contain instances of domain models as properties, or should the properties of those domain models be properties of the ViewModel itself? For example, if I have a class Album.cs:
public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public virtual Genre Genre { get; set; }
    public virtual Artist Artist { get; set; }
}
Would you typically have the ViewModel hold an instance of the Album class, or would you have the ViewModel expose properties for each of the Album class's properties?
public class AlbumViewModel
{
    public Album Album { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
Or:
public class AlbumViewModel
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
tl;dr
Is it acceptable for a ViewModel to contain instances of domain models?
Basically not, because you are literally mixing two layers and tying them together. I must admit, I see it happen a lot, and it depends a bit on the quick-win level of your project, but we can state that it does not conform to the Single Responsibility Principle of SOLID.
The fun part: this is not limited to view models in MVC; it's actually a matter of separating the good old data, business, and UI layers. I'll illustrate this later, but for now, keep in mind that it applies to MVC, and to many other design patterns as well.
I'll start by pointing out some generally applicable concepts, and zoom in on some actual scenarios and examples later.
Let's consider some pros and cons of not mixing the layers.
What it will cost you
There is always a catch. I'll list them, explain them later, and show why they are usually not applicable:
duplicate code
adds extra complexity
extra performance hit
What you'll gain
There is always a win. I'll name it, explain it later, and show why this actually makes sense:
independent control of the layers
The costs
duplicate code
It's not DRY!
You will need an additional class, which is probably exactly the same as the other one.
This is an invalid argument. The different layers have well-defined, different purposes; therefore, a property that lives in one layer has a different purpose than a property in the other, even if the properties have the same name!
For example:
This is not repeating yourself:
public class FooViewModel
{
    public string Name { get; set; }
}

public class DomainModel
{
    public string Name { get; set; }
}
On the other hand, defining a mapping twice is repeating yourself:
public void Method1(FooViewModel input)
{
    // duplicate code: same mapping twice, see Method2
    var domainModel = new DomainModel { Name = input.Name };
    // logic
}

public void Method2(FooViewModel input)
{
    // duplicate code: same mapping twice, see Method1
    var domainModel = new DomainModel { Name = input.Name };
    // logic
}
It's more work!
Really, is it? If you start coding, more than 99% of the models will overlap. Grabbing a cup of coffee will take more time ;-)
"It needs more maintenance"
Yes, it does; that's why you need to unit test your mapping (and remember: don't repeat the mapping).
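One way to keep that maintenance small is to define the mapping exactly once, for example as an extension method (just a sketch; one of several options):

public static class FooMappings
{
    // the single place where FooViewModel -> DomainModel is defined (and unit tested)
    public static DomainModel ToDomainModel(this FooViewModel input)
    {
        return new DomainModel { Name = input.Name };
    }
}

Method1 and Method2 from the earlier example would then both call input.ToDomainModel().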
adds extra complexity
No, it does not. It adds an extra layer, which makes the system more complicated. It does not add complexity.
A smart friend of mine, once stated it like this:
"A flying plane is a very complicated thing. A falling plane is very complex."
He is not the only one using such a definition; the difference is in predictability, which has an actual relation to entropy, a measure of chaos.
In general: patterns do not add complexity. They exist to help you reduce complexity. They are solutions to well-known problems. Obviously, a poorly implemented pattern doesn't help, so you need to understand the problem before applying the pattern. Ignoring the problem doesn't help either; it just adds technical debt that has to be repaid sometime.
Adding a layer gives you well-defined behavior, which, due to the obvious extra mapping, will be a bit more complicated. Mixing layers for various purposes, on the other hand, leads to unpredictable side effects when a change is applied. Rename a database column and you get a mismatch in a key/value lookup in your UI, which makes your UI issue a call to a non-existent API. Now think about how that relates to your debugging efforts and maintenance costs.
extra performance hit
Yes, extra mapping costs extra CPU cycles. This, however (unless you have a Raspberry Pi connected to a remote database), is negligible compared to fetching the data from the database. Bottom line: if this ever becomes an issue, use caching.
The win
independent control of the layers
What does this mean?
Any combination of this (and more):
creating a predictable system
altering your business logic without affecting your UI
altering your database, without affecting your business logic
altering your UI without affecting your database
being able to change your actual data store
totally independent functionality with isolated, well-testable behavior that is easy to maintain
coping with change and empowering the business
In essence: you are able to make a change by altering a well-defined piece of code, without worrying about nasty side effects.
beware: business countermeasures!
"this is to reflect change, it's not going to change!"
Change will come: the trillions of US dollars spent annually cannot simply pass by.
Well, that's nice. But face it: as a developer, the day you don't make any mistakes is the day you stop working. The same applies to business requirements.
fun fact: software entropy
"my (micro) service or tool is small enough to cope with it!"
This might be the toughest one, since there is actually a good point here. If you develop something for one-time use, it probably does not need to cope with change at all, and if you ever do reuse it, you will have to rebuild it anyway. Nevertheless, for everything else: "change will come", so why make the change more complicated? Also note that leaving out layers in your minimalistic tool or service usually puts a data layer closer to the (user) interface. If you are dealing with an API, your implementation will require a version update, which needs to be distributed among all your clients. Can you do that during a single coffee break?
"lets do it quick-and-simple, just for the time being...."
Is your job "for the time being"? Just kidding ;-) but; when are you going to fix it? Probably when your technical debt forces you to. At that time it cost you more than this short coffee break.
"What about 'closed for modification and open for extension'? That's also a SOLID principle!"
Yes, it is! But this doesn't mean you shouldn't fix typos, or that every applied business rule can be expressed as a sum of extensions, or that you are not allowed to fix things that are broken. Or, as Wikipedia states it:
A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding)
which actually promotes separation of layers.
Now, some typical scenarios:
ASP.NET MVC
Since this is what you are using in your actual question:
Let me give an example. Imagine the following view model and domain model:
note: this is also applicable to other layer types, to name a few: DTO, DAO, Entity, ViewModel, Domain, etc.
public class FooViewModel
{
    public string Name { get; set; }
    // hey, a domain model class!
    public DomainClass Genre { get; set; }
}

public class DomainClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}
So, somewhere in your controller you populate the FooViewModel and pass it on to your view.
Now, consider the following scenarios:
1) The domain model changes.
In this case you'll probably need to adjust the view as well, which is bad practice in the context of separation of concerns.
If you have separated the ViewModel from the DomainModel, a minor adjustment in the mappings (ViewModel => DomainModel (and back)) would be sufficient.
2) The DomainClass has nested properties and your view just displays the "GenreName"
I have seen this go wrong in real-life scenarios.
In this case a common problem is that the use of @Html.EditorFor will lead to inputs for the nested object. This might include IDs and other sensitive information, which means leaking implementation details! Your actual page is tied to your domain model (which is probably tied to your database somewhere). Following this course, you'll find yourself creating hidden inputs. If you combine this with server-side model binding or AutoMapper, it gets harder to block manipulation of those hidden IDs with tools like Firebug, and forgetting to set an attribute on a property will expose it in your view.
Although it's possible, maybe even easy, to block some of those fields, the more nested domain/data objects you have, the trickier it becomes to get this part right. And what if you are "using" this domain model in multiple views? Will they all behave the same? Also bear in mind that you might want to change your DomainModel for reasons that have nothing to do with the view. So with every change to your DomainModel, you would have to check whether it affects the view(s) and the security aspects of the controller.
3) In ASP.NET MVC it is common to use validation attributes.
Do you really want your domain to contain metadata about your views, or to apply view logic to your data layer? Is your view validation always the same as the domain validation? Does it have the same fields (or are some of them a concatenation)? Does it have the same validation logic? Are you using your domain models cross-application? Etc.
I think it's clear this is not the route to take.
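To illustrate the separation, a sketch (a variation on the FooViewModel/DomainClass pair from above): the view-specific metadata lives on the view model, while the domain enforces its own rules in code.

using System;
using System.ComponentModel.DataAnnotations;

public class FooViewModel
{
    [Required]                      // view validation: metadata for the UI only
    [StringLength(50)]
    [Display(Name = "Genre name")]
    public string Name { get; set; }
}

public class DomainClass
{
    public string Name { get; }

    public DomainClass(string name)
    {
        // domain validation lives here, independent of any view
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required", nameof(name));
        Name = name;
    }
}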
4) More
I could give you more scenarios, but it's just a matter of taste which is more appealing. I hope that by this point you get the point :)
Now, for really dirty and quick wins it will work, but I don't think you should want it.
It's just a little more effort to build a view model, which usually is 80+% similar to the domain model. This might feel like doing unnecessary mappings, but when the first conceptual difference arises, you'll find that it was worth the effort :)
So as an alternative, I propose the following setup for a general case:
create a viewmodel
create a domainmodel
create a datamodel
use a library like AutoMapper to create the mappings from one to the other (this will help map Foo.FooProp to OtherFoo.FooProp; see the sketch below)
The benefits are, e.g., that if you add an extra field to one of your database tables, it won't affect your view. It might hit your business layer or mappings, but it will stop there. Of course, most of the time you want to change your view as well, but in this case you don't need to. It therefore keeps the problem isolated in one part of your code.
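For example, a minimal AutoMapper setup might look like this sketch (class names reused from the earlier example for illustration; MapperConfiguration and CreateMapper are AutoMapper's API):

using AutoMapper;

public static class MappingConfig
{
    // define each mapping exactly once, at the composition root
    public static IMapper CreateMapper()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<DomainModel, FooViewModel>();
            // add datamodel <-> domainmodel maps here as well
        });
        return config.CreateMapper();
    }
}

// usage:
// var mapper = MappingConfig.CreateMapper();
// var viewModel = mapper.Map<FooViewModel>(domainModel);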
Web API / data-layer / DTO
First a note: here's a nice article on how DTOs (which are not view models) can be omitted in some scenarios, with which my pragmatic side fully agrees ;-)
Another concrete example of how this will work in a Web-API / ORM (EF) scenario:
Here it's more intuitive: especially when the consumer is a third party, it's unlikely that your domain model matches the implementation of your consumer, so a view model is more likely to be fully self-contained.
note: The name "domain model", is sometimes mixed with DTO or "Model"
Please note that in a Web (or HTTP or REST) API, communication is often done via a data-transfer object (DTO), which is the actual "thing" exposed on the HTTP endpoints.
So, where should we put these DTOs, you might ask? Between the domain models and the view models? Well, yes; we have already seen that treating them as view models would be hard, since the consumer is likely to implement a customized view.
Would the DTOs be able to replace the domain models, or do they have a reason to exist on their own? In general, the concept of separation applies to the DTOs and domain models as well. But then again, you can ask yourself (and this is where I tend to be a bit pragmatic): is there enough logic within the domain to explicitly define a domain layer? I think you'll find that as your services get smaller and smaller, the actual logic that is part of the domain models decreases as well and may be left out altogether, and you'll end up with:
EF/(ORM) Entities ↔ DTO/DomainModel ↔ Consumers
disclaimer / note
As @mrjoltcola stated: there is also component over-engineering to keep in mind. If none of the above applies, and the users/programmers can be trusted, you are good to go. But keep in mind that maintainability and re-usability will decrease due to the DomainModel/ViewModel mixing.
Opinions vary, ranging from technical best practices to personal preferences.
There is nothing wrong with using domain objects in your view models, or even using domain objects as your model, and many people do. Some feel strongly about creating view models for every single view, but personally, I feel many apps are over-engineered by developers who learn and repeat one approach that they are comfortable with. The truth is there are several ways to accomplish the goal using newer versions of ASP.NET MVC.
The biggest risk, when you use a common domain class for your view model and your business and persistence layer, is that of model injection. Adding new properties to a model class can expose those properties outside the boundary of the server. An attacker can potentially see properties he should not see (serialization) and alter values he should not alter (model binders).
To guard against injection, use secure practices that fit your overall approach. If you plan to use domain objects, then make sure to use white lists or black lists (inclusion/exclusion) in the controller or via model-binder annotations. Black lists are more convenient, but lazy developers writing future revisions may forget about them or not be aware of them. White lists ([Bind(Include=...)]) are obligatory, requiring attention when new fields are added, so they act as an inline view model.
Example:
[Bind(Exclude = "CompanyId,TenantId")]
public class CustomerModel
{
    public int Id { get; set; }
    public int CompanyId { get; set; } // user cannot inject
    public int TenantId { get; set; }  // ..
    public string Name { get; set; }
    public string Phone { get; set; }
    // ...
}
or
public ActionResult Edit([Bind(Include = "Id,Name,Phone")] CustomerModel customer)
{
    // ...
}
The first sample is a good way to enforce multitenant safety across the application. The second sample allows customizing each action.
Be consistent in your approach and clearly document the approach used in your project for other developers.
I recommend you always use view models for login/profile-related features to force yourself to "marshal" the fields between the web controller and the data-access layer as a security exercise.
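For instance, a login view model would deliberately carry only what the form needs (a sketch; the names are illustrative):

using System.ComponentModel.DataAnnotations;

// Nothing from the user entity (password hash, roles, tenant ids) can round-trip through this.
public class LoginViewModel
{
    [Required]
    public string UserName { get; set; }

    [Required]
    [DataType(DataType.Password)]
    public string Password { get; set; }
}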
My name is Aderson, and at the moment I have a question about composition as it relates to performance. In this model,
I have a simple UserBase and DepartmentBase. UserBase has a property of type DepartmentBase, and DepartmentBase has a list property of type DepartmentBase.
When I create an instance of UserBase, it loads the information about its department, but then that DepartmentBase loads information about its departments too!
Now, when I load a list of UserBase objects for all users, this loading process repeats for every user. Is this good practice, or is there a better approach?
(Class diagram: http://img146.imageshack.us/img146/3949/diagram.jpg)
I don't know if it is a better (or even applicable) approach, but I sometimes make brief versions of objects that I use for references from other objects. The brief version acts as a base class for the full version of the object and will typically contain the information that would be visible in a listing of such objects. It will often not contain lists of other objects, and any references to other classes will usually refer to the brief version of that class. This eliminates some unnecessary data loading, as well as some cases of circular references. Example:
public class DepartmentBrief
{
    public string Name { get; set; }
}

public class Department : DepartmentBrief
{
    public Department()
    {
        Departments = new List<DepartmentBrief>();
    }

    public IEnumerable<DepartmentBrief> Departments { get; private set; }
}

public class UserBase
{
    public DepartmentBrief Department { get; set; }
}
One difference between this approach and having full object references paired with lazy loading is that you will need to explicitly load extra data when it is needed. If you have a UserBase instance and you need the department list of its Department, you will need to write some code to fetch the Department object that the DepartmentBrief in UserBase identifies. This could be considered a downside, but I personally like that it is clear from the code exactly when it will hit the data store.
It depends: if you need all the department data directly after loading the user list, then this is the best approach. If you don't need it immediately, you'd better use lazy loading for the department data. This means you postpone loading the department data until an explicit method (or property) is called.
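In C#, one simple way to express that lazy loading is with Lazy<T> (a sketch; the loader delegate is an assumption about how the data store is accessed):

using System;
using System.Collections.Generic;

public class Department
{
    private readonly Lazy<IReadOnlyList<Department>> _subDepartments;

    public Department(Func<IReadOnlyList<Department>> loadSubDepartments)
    {
        // nothing is fetched yet; the delegate runs on first access only
        _subDepartments = new Lazy<IReadOnlyList<Department>>(loadSubDepartments);
    }

    public string Name { get; set; }

    // first access triggers the load; later accesses reuse the cached result
    public IReadOnlyList<Department> SubDepartments => _subDepartments.Value;
}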