ASP.NET MVC: Custom Validation by DataAnnotation depending on configuration - c#

I'm using DataAnnotation for client and server validation of my view model. I would like to ask you about the best practices of using custom validation.
I have two forms, which use the same view model:
public class RecipientViewModel
{
    [Required]
    public string Address1 { get; set; }

    public string Address2 { get; set; }
}
What I want to achieve is that the first form should validate the Address2 field, but the second form should not. Of course my view model is much bigger, and I want to make this as generic as possible.
Is there any way to pass a list of fields to be validated, and how? For example, could the view pass it to the view model somehow?

Please clarify your question or show more code.
In general, data annotations are very good for checking expected structure or a certain kind of expected data, like length, presence, and type.
For more complicated, complex business cases, a good implementation will have a business layer or a domain design which handles those use cases.
So define your use cases and think about a layer between the controller and the data model, maybe something like a validation service which gets injected into the view model.
HTH
EDIT: You probably want to take a look at Validating with a Service Layer from the ASP.NET site - http://www.asp.net/mvc/overview/older-versions-1/models-(data)/validating-with-a-service-layer-cs - which shows some of the concepts. The technology might have changed slightly since the article is from 2009, but you get the idea.
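To make that a bit more concrete, here is a minimal sketch of such a validation service sitting between the controller and the view model. The names (IRecipientValidator, FormKind) and the per-form rules are assumptions for illustration, not part of ASP.NET MVC or the question's code:

using System.Collections.Generic;

public enum FormKind { FullAddress, ShortAddress }

public interface IRecipientValidator
{
    IEnumerable<string> Validate(RecipientViewModel model, FormKind form);
}

public class RecipientValidator : IRecipientValidator
{
    public IEnumerable<string> Validate(RecipientViewModel model, FormKind form)
    {
        if (string.IsNullOrWhiteSpace(model.Address1))
            yield return "Address1 is required.";

        // Only the first form treats Address2 as mandatory.
        if (form == FormKind.FullAddress && string.IsNullOrWhiteSpace(model.Address2))
            yield return "Address2 is required.";
    }
}

In a controller action you could then feed the returned messages into ModelState (ModelState.AddModelError(string.Empty, error) for each one), so they show up alongside the DataAnnotation errors.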

Related

DDD: guidance on updating multiple properties of entities

So, I decided to learn DDD as it seems to solve some architectural problems I have been facing. While there are lots of videos and sample blogs, I have not encountered one that guides me through the following scenario:
Suppose I have the entity:
public class EventOrganizer : IEntity
{
    public Guid Id { get; }
    public string Name { get; }
    public PhoneNumber PrimaryPhone { get; }
    public PhoneNumber AlternatePhone { get; private set; }
    public Email Email { get; private set; }

    public EventOrganizer(string name, PhoneNumber primaryPhoneNr)
    {
        #region validations
        if (primaryPhoneNr == null) throw new ArgumentNullException(nameof(primaryPhoneNr));
        // validates minimum length, nullity and special characters
        Validator.AsPersonName(name);
        #endregion

        Id = Guid.NewGuid(); // note: new Guid() would produce an empty Guid
        Name = name;
        PrimaryPhone = primaryPhoneNr;
    }
}
My problem is: suppose this will be converted and fed to an MVC view, and the user wants to update the AlternatePhone, the Email, and a lot of other properties that make sense to exist within this entity for the given bounded context (not shown for brevity).
I understand that the correct guidance is to have a method for each operation, but (and I know it's kind of an anti-pattern) I can't help but wonder if this won't end up triggering multiple update calls on the database.
How is this handled? Somewhere down the line, will there be something that maps my EventOrganizer to something else - say DbEventOrganizer - and gathers all changes made to the domain entity so they are applied in a single go?
DDD is better suited for task-based UIs. What you describe is very CRUD-oriented. In your case, individual properties are treated as independent data fields where one or many of these can be updated by a single generic business operation (update).
You will have to perform a deeper analysis of your domain than this if you want to be successful with DDD.
Why would someone update all those fields together? What implicit business operation is the user trying to achieve by doing that? Is there a more concrete business process that is expressed by changing PrimaryPhone, AlternatePhone and Email together?
Perhaps that is changing the ContactInformation of an EventOrganizer? If that's the case then you could model a single ChangeContactInformation operation on EventOrganizer. Your UI would then send a ChangeContactInformation command rather than an update command.
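As a rough illustration (not code from the question), such a task-based operation could live on the aggregate itself. Only the relevant members are shown, and PrimaryPhone is assumed to gain a private setter:

using System;

public class EventOrganizer
{
    public PhoneNumber PrimaryPhone { get; private set; }
    public PhoneNumber AlternatePhone { get; private set; }
    public Email Email { get; private set; }

    // One business operation instead of a generic property-by-property update.
    public void ChangeContactInformation(PhoneNumber primaryPhone,
                                         PhoneNumber alternatePhone,
                                         Email email)
    {
        if (primaryPhone == null) throw new ArgumentNullException(nameof(primaryPhone));

        PrimaryPhone = primaryPhone;
        AlternatePhone = alternatePhone;
        Email = email;
    }
}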
As for the persistence of your aggregate roots (AR), this is usually handled by an ORM like NHibernate if you are using an RDBMS. However, there are other ways to persist your ARs, like Event Sourcing, NoSQL DBs, or even storing JSON or any other data interchange format in an RDBMS.
Your question is quite broad!
EventOrganizer itself should not be updating anything. You should keep your update code quite separate from the entity. A different class would take an EventOrganizer object and update the DB. This is called 'persistence ignorance' and makes the code a lot more modular and cohesive.
It would be common to create a View Model - a class whose purpose is to provide the View with the exact data it needs in the exact form it needs. You would create the View Model from your EventOrganizer, after which the View can update it - programmatically or with binding. When you're ready to save the changes, you update your EventOrganizer from the View Model and pass it on to the updater. This seems like a layer you don't need when the project is small and simple, but it becomes invaluable as the complexity builds.
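A sketch of what that round trip might look like; the class names here (EventOrganizerEditViewModel, IEventOrganizerUpdater) are illustrative assumptions, not code from the question:

using System;

// Shaped for the edit screen, nothing more.
public class EventOrganizerEditViewModel
{
    public Guid Id { get; set; }
    public string AlternatePhone { get; set; }
    public string Email { get; set; }
}

// A separate class owns persistence, keeping the entity ignorant of it.
public interface IEventOrganizerUpdater
{
    void Update(EventOrganizer organizer); // issues the actual DB call(s)
}

In the controller or an application service you would load the EventOrganizer, apply the values from the view model to it, and then hand it to the updater in one go.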

Why does the Model have only declarations in MVC?

I am new to MVC. According to the MVC tutorials, models are the classes which contain business logic. But in all the examples I have referred to, the model contains only declarations (using an interface). Why can the model not contain the definition of the business logic? I compared this with MVVM, where the model contains the definition.
Why does the model look like this?
public interface IDBModel
{
    void addRecord();
    void deleteRecord();
}
Instead of like below:
public class DBModel
{
    void addRecord()
    {
        // Insert logic
    }

    void deleteRecord()
    {
        // Delete logic
    }
}
Kindly help me understand the purpose of the "Model" in MVC and MVVM with some real-world examples.
A model is meant to encapsulate data, making it easier to transfer between different logical areas of your application. The first example you give is incorrect, in that you're defining an interface with methods. You're more likely to see a model that looks like this:
public class Person {
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName() {
        return string.Format("{0} {1}", FirstName, LastName);
    }
}
Notice that I'm using properties as a way to transfer data, but have a method that performs lightweight logic (this could also have been done as a read-only property). 90% of the time this is what your models will look like.
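For completeness, the read-only property variant mentioned above would look like this (same behavior, just expressed as a getter):

public string FullName
{
    get { return string.Format("{0} {1}", FirstName, LastName); }
}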
I would treat the M in MVC more like a view model. It contains all properties and formatting logic needed for the view to display itself. No need to have interfaces for it.
The controller is responsible for building that view model based on the models it receives from the services.
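A rough sketch of that shape, where PersonService and PersonViewModel are illustrative names rather than anything from the question:

using System.Web.Mvc;

public class PersonController : Controller
{
    private readonly PersonService _service;

    public PersonController(PersonService service)
    {
        _service = service;
    }

    public ActionResult Details(int id)
    {
        Person person = _service.GetPerson(id);

        // The controller shapes the data for the view; formatting concerns
        // live in the view model, not in the domain model.
        var viewModel = new PersonViewModel
        {
            FullName = person.FullName()
        };

        return View(viewModel);
    }
}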
I think you misunderstood the Model.
Wrong: "Models are the classes which contain business logic" - they are not the business logic.
Models: Model objects are the parts of the application that implement the logic for the application's data domain. Often, model objects retrieve and store model state in a database. For example, a Product object might retrieve information from a database, operate on it, and then write updated information back to a Products table in SQL Server.
Take a look at the official ASP.NET MVC site.
Why does the model look like this?
Your application may follow certain different patterns other than MVVM.
Real Time Examples / Basic Understandings : Click Here

ViewModels in MVC / MVVM / Separation of layers - best practices?

I'm fairly new to using ViewModels and I wonder: is it acceptable for a ViewModel to contain instances of domain models as properties, or should the properties of those domain models be properties of the ViewModel itself? For example, if I have a class Album.cs:
public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public virtual Genre Genre { get; set; }
    public virtual Artist Artist { get; set; }
}
Would you typically have the ViewModel hold an instance of the Album.cs class, or would you have the ViewModel expose properties for each of the Album.cs class's properties?
public class AlbumViewModel
{
    public Album Album { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
public class AlbumViewModel
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
tl;dr
Is it acceptable for a ViewModel to contain instances of domain models?
Basically not, because you are literally mixing two layers and tying them together. I must admit I see it happen a lot, and it depends a bit on the quick-win level of your project, but we can state that it does not conform to the Single Responsibility Principle of SOLID.
The fun part: this is not limited to view models in MVC; it's actually a matter of separating the good old data, business and UI layers. I'll illustrate this later, but for now, keep in mind that it applies to MVC, but also to many other design patterns.
I'll start by pointing out some generally applicable concepts and zoom in on some actual scenarios and examples later.
Let's consider some pros and cons of not mixing the layers.
What it will cost you
There is always a catch. I'll list them, explain them later, and show why they are usually not applicable:
duplicate code
adds extra complexity
extra performance hit
What you'll gain
There is always a win. I'll list it, explain it later, and show why this actually makes sense:
independent control of the layers
The costs
duplicate code
It's not DRY!
You will need an additional class, which is probably exactly the same as the other one.
This is an invalid argument. The different layers have well defined, different purposes. Therefore, the properties which live in one layer have a different purpose than properties in the other - even if the properties have the same name!
For example:
This is not repeating yourself:
public class FooViewModel
{
    public string Name { get; set; }
}

public class DomainModel
{
    public string Name { get; set; }
}
On the other hand, defining a mapping twice, is repeating yourself:
public void Method1(FooViewModel input)
{
    //duplicate code: same mapping twice, see Method2
    var domainModel = new DomainModel { Name = input.Name };
    //logic
}

public void Method2(FooViewModel input)
{
    //duplicate code: same mapping twice, see Method1
    var domainModel = new DomainModel { Name = input.Name };
    //logic
}
It's more work!
Really, is it? If you start coding, more than 99% of the models will overlap. Grabbing a cup of coffee will take more time ;-)
"It needs more maintenance"
Yes, it does; that's why you need to unit test your mapping (and remember, don't repeat the mapping).
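One way to keep that mapping in a single, testable place is a small extension method; this is just an illustration, not code from the answer:

public static class FooMappings
{
    // The only place where FooViewModel -> DomainModel is defined,
    // so Method1 and Method2 above can both call it.
    public static DomainModel ToDomainModel(this FooViewModel input)
    {
        return new DomainModel { Name = input.Name };
    }
}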
adds extra complexity
No, it does not. It adds an extra layer, which makes it more complicated. It does not add complexity.
A smart friend of mine, once stated it like this:
"A flying plane is a very complicated thing. A falling plane is very complex."
He is not the only one using such a definition; the difference is in predictability, which has an actual relation to entropy, a measure of chaos.
In general: patterns do not add complexity. They exist to help you reduce complexity. They are solutions to well known problems. Obviously, a poorly implemented pattern doesn't help; therefore you need to understand the problem before applying the pattern. Ignoring the problem doesn't help either; it just adds technical debt which has to be repaid sometime.
Adding a layer gives you well defined behavior, which, due to the obvious extra mapping, will be a bit more complicated. Mixing layers for various purposes will lead to unpredictable side effects when a change is applied. Renaming your database column will result in a mismatch in the key/value lookup in your UI, which makes you perform a non-existent API call. Now, think of this and how it will relate to your debugging efforts and maintenance costs.
extra performance hit
Yes, extra mapping will lead to extra CPU power being consumed. This, however (unless you have a Raspberry Pi connected to a remote database), is negligible compared to fetching the data from the database. Bottom line: if this is an issue, use caching.
The win
independent control of the layers
What does this mean?
Any combination of this (and more):
creating a predictable system
altering your business logic without affecting your UI
altering your database, without affecting your business logic
altering your UI, without affecting your database
able to change your actual data store
totally independent functionality, isolated and well-testable behavior, and easy maintenance
cope with change and empower business
In essence: you are able to make a change, by altering a well defined piece of code without worrying about nasty side effects.
beware: business countermeasures!
"this is to reflect change, it's not going to change!"
Change will come: the trillions of US dollars spent annually cannot simply pass you by.
Well, that's nice. But face it: as a developer, the day you don't make any mistakes is the day you stop working. The same applies to business requirements.
Fun fact: software entropy
"my (micro) service or tool is small enough to cope with it!"
This might be the toughest one, since there is actually a good point here. If you develop something for one-time use, it probably cannot cope with change at all and you'll have to rebuild it anyway, provided you are actually going to reuse it. Nevertheless, for all other things: "change will come", so why make the change more complicated? And please note that leaving out layers in your minimalistic tool or service will usually put a data layer closer to the (user) interface. If you are dealing with an API, your implementation will require a version update which needs to be distributed among all your clients. Can you do that during a single coffee break?
"lets do it quick-and-simple, just for the time being...."
Is your job "for the time being"? Just kidding ;-) But when are you going to fix it? Probably when your technical debt forces you to. At that point it will cost you more than a short coffee break.
"What about 'closed for modification and open for extension'? That's also a SOLID principle!"
Yes, it is! But this doesn't mean you shouldn't fix typos, or that every applied business rule can be expressed as a sum of extensions, or that you are not allowed to fix things that are broken. Or, as Wikipedia states it:
A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding)
which actually promotes separation of layers.
Now, some typical scenarios:
ASP.NET MVC
Since, this is what you are using in your actual question:
Let me give an example. Imagine the following view model and domain model:
note: this is also applicable to other layer types, to name a few: DTO, DAO, Entity, ViewModel, Domain, etc.
public class FooViewModel
{
    public string Name { get; set; }

    //hey, a domain model class!
    public DomainClass Genre { get; set; }
}

public class DomainClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}
So, somewhere in your controller you populate the FooViewModel and pass it on to your view.
Now, consider the following scenarios:
1) The domain model changes.
In this case you'll probably need to adjust the view as well, which is bad practice in the context of separation of concerns.
If you have separated the ViewModel from the DomainModel, a minor adjustment in the mappings (ViewModel => DomainModel (and back)) would be sufficient.
2) The DomainClass has nested properties and your view just displays the "GenreName"
I have seen this go wrong in real-life scenarios.
In this case a common problem is that the use of @Html.EditorFor will lead to inputs for the nested object. This might include Ids and other sensitive information, which means leaking implementation details! Your actual page is tied to your domain model (which is probably tied to your database somewhere). Following this course, you'll find yourself creating hidden inputs. If you combine this with server-side model binding or AutoMapper, it gets harder to block the manipulation of hidden Ids with tools like Firebug, and forgetting to set an attribute on your property will make it available in your view.
Although it's possible, maybe even easy, to block some of those fields, the more nested domain/data objects you have, the trickier it becomes to get this part right. And what if you are "using" this domain model in multiple views? Will they behave the same? Also bear in mind that you might want to change your domain model for a reason that doesn't necessarily target the view. So with every change in your domain model you should be aware that it might affect the view(s) and the security aspects of the controller.
3) In ASP.NET MVC it is common to use validation attributes.
Do you really want your domain to contain metadata about your views? Or to apply view logic to your data layer? Is your view validation always the same as the domain validation? Does it have the same fields (or are some of them a concatenation)? Does it have the same validation logic? Are you using your domain models across applications? Etc.
I think it's clear this is not the route to take.
4) More
I can give you more scenarios, but it's just a matter of taste as to what's more appealing. I'll just hope at this point you get the point :)
For really dirty and quick wins mixing the layers will work, but I don't think you should want it.
It's just a little more effort to build a view model, which is usually 80+% similar to the domain model. This might feel like doing unnecessary mappings, but when the first conceptual difference arises, you'll find that it was worth the effort :)
So as an alternative, I propose the following setup for a general case:
create a viewmodel
create a domainmodel
create a datamodel
use a library like AutoMapper to create mappings from one to the other (this will help to map Foo.FooProp to OtherFoo.FooProp); see the sketch below
The benefits are, e.g.; if you create an extra field in one of your database tables, it won't affect your view. It might hit your business layer or mappings, but there it will stop. Of course, most of the time you want to change your view as well, but in this case you don't need to. It therefore keeps the problem isolated in one part of your code.
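As a hedged sketch of that last step: the MapperConfiguration style shown here is from more recent AutoMapper versions (older versions used the static Mapper.CreateMap API), so adjust it to the version you use:

using AutoMapper;

// One central place for the mapping, so it is defined (and tested) only once.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Album, AlbumViewModel>();
    cfg.CreateMap<AlbumViewModel, Album>();
});

IMapper mapper = config.CreateMapper();

// given an existing Album instance named album (assumption for the sketch):
AlbumViewModel viewModel = mapper.Map<AlbumViewModel>(album);
Album domainModel = mapper.Map<Album>(viewModel);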
Web API / data-layer / DTO
First a note: here's a nice article on how a DTO (which is not a view model) can be omitted in some scenarios - and my pragmatic side fully agrees with that ;-)
Another concrete example of how this will work in a Web-API / ORM (EF) scenario:
Here it's more intuitive: especially when the consumer is a third party, it's unlikely your domain model matches the implementation of your consumer, so a view model is more likely to be fully self-contained.
note: the name "domain model" is sometimes mixed up with DTO or "Model"
Please note that in a Web (or HTTP or REST) API, communication is often done via a data transfer object (DTO), which is the actual "thing" that's being exposed on the HTTP endpoints.
So where should we put these DTOs, you might ask? Are they between domain models and view models? Well, yes; we have already seen that treating them as view models would be hard, since the consumer is likely to implement a customized view.
Would the DTOs be able to replace the domain models, or do they have a reason to exist on their own? In general, the concept of separation is applicable to the DTOs and domain models as well. But then again, you can ask yourself (and this is where I tend to be a bit pragmatic): is there enough logic within the domain to explicitly define a domain layer? I think you'll find that as your services get smaller and smaller, the actual logic, which is part of the domain models, decreases as well and may be left out altogether, and you'll end up with:
EF/(ORM) Entities ↔ DTO/DomainModel ↔ Consumers
disclaimer / note
As @mrjoltcola stated: there is also component over-engineering to keep in mind. If none of the above applies, and the users/programmers can be trusted, you are good to go. But keep in mind that maintainability and reusability will decrease due to the domain model/view model mixing.
Opinions vary, ranging from technical best practices to personal preferences.
There is nothing wrong with using domain objects in your view models, or even using domain objects as your model, and many people do. Some feel strongly about creating view models for every single view, but personally, I feel many apps are over-engineered by developers who learn and repeat one approach that they are comfortable with. The truth is there are several ways to accomplish the goal using newer versions of ASP.NET MVC.
The biggest risk, when you use a common domain class for your view model and your business and persistence layer, is that of model injection. Adding new properties to a model class can expose those properties outside the boundary of the server. An attacker can potentially see properties he should not see (serialization) and alter values he should not alter (model binders).
To guard against injection, use secure practices that are relevant to your overall approach. If you plan to use domain objects, then make sure to use white lists or black lists (inclusion/exclusion) in the controller or via model binder annotations. Black lists are more convenient, but lazy developers writing future revisions may forget about them or not be aware of them. White lists ([Bind(Include=...)]) are obligatory, requiring attention when new fields are added, so they act as an inline view model.
Example:
[Bind(Exclude = "CompanyId,TenantId")]
public class CustomerModel
{
    public int Id { get; set; }
    public int CompanyId { get; set; } // user cannot inject
    public int TenantId { get; set; }  // ..
    public string Name { get; set; }
    public string Phone { get; set; }
    // ...
}
or
public ActionResult Edit([Bind(Include = "Id,Name,Phone")] CustomerModel customer)
{
    // ...
}
The first sample is a good way to enforce multitenant safety across the application. The second sample allows customizing each action.
Be consistent in your approach and clearly document the approach used in your project for other developers.
I recommend you always use view models for login/profile related features, to force yourself to "marshall" the fields between the web controller and the data access layer as a security exercise.

View model validation vs domain model validation

If client validation is done, when is it necessary to do domain-level validation?
I use ASP.NET MVC for my web applications. I like to distinguish between my domain models and view models. My domain models contain the data that comes from my database and my view models contain the data on my views/pages.
Let's say I am working with customer data.
I will have a table in my database called Customers.
I will have a customer class which could look something like this:
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}
And I will create a customer view model to represent only the data that I have on my view:
[Validator(typeof(CustomerCreateViewModelValidator))]
public class CustomerCreateViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}
I will have a create view that accepts my CustomerCreateViewModel and binds my input fields to my view model:
@model MyProject.ViewModels.Customers.CustomerCreateViewModel

@using (Html.BeginForm())
{
    <table>
        <tr>
            <td>
                @Html.TextBoxFor(x => x.FirstName)
                @Html.ValidationMessageFor(x => x.FirstName)
            </td>
        </tr>
        <tr>
            <td>
                @Html.TextBoxFor(x => x.LastName)
                @Html.ValidationMessageFor(x => x.LastName)
            </td>
        </tr>
    </table>
    <button id="SaveButton" type="submit">Save</button>
}
As you can see I have a CustomerCreateViewModelValidator that contains my validation rules. After the user has entered some data into the text boxes he will click the submit button. If some of the fields are empty then validation fails. If all the required fields are entered then validation succeeds. I will then map the data from my view model to my domain model like this:
Customer customer = Mapper.Map<Customer>(viewModel);
I take this customer domain model and pass it on to my repository layer, which adds the data to my table.
When does validation need to be done on a domain model? I do all my validation on my view model. I could validate my data in my domain model just before I add it to the database, but seeing that it was already validated on the view model, wouldn't that just be replicating the same validation?
Could someone please shed some light on this validation matter?
Always validate at both levels.
You need to validate the view models because you want to feed back to the user as quickly and easily as possible if they've done something wrong. You also don't want to be bothering the rest of your domain logic if the model is invalid.
But you will also want to verify that everything's happy in the domain once the view model has been validated. For simple models these checks may be the same, so it does look like duplicating logic; however, as soon as your application grows so that you have multiple user interfaces, or many different applications using the same domain models, it becomes very important to check within the domain.
For example, if your application grows so you end up providing an API to customers to interact directly with the application programmatically, it becomes a necessity to validate the domain models, since you cannot guarantee that the user interface used has validated the data to the standard that you need (or even validated it at all). There is an argument to say that the data received by APIs should be validated in much the same way as the view models are validated, and that's probably a good idea since that is achieving the same goal as the view model validation is. But regardless of the route in (either from a UI or the API), you will want to always guarantee that the data is valid, so defining that in a central place is ideal.
The aims of the two levels of validation are different, too. I would expect view model validation to inform me of all problems (such as a missing first name, a last name that is too long, a DoB that is not a date). However, I think it would be OK for the domain logic to fail on the first error and just report that one. Again, for simple models it may be possible to collect all errors and report them all back, but the more complex an application gets, the harder it gets to anticipate all errors, especially if the logic changes depending on the data. But as long as only good data gets past, that should be fine!
As a general rule I consider the domain model to be the most important code and therefore management of its state holy. For that reason I would never assume for the domain model to be in a valid state just because it was operated on by a presentation layer that is supposed to enforce validity. This would mean your domain layer is tightly coupled to your presentation layer.
It is best to start thinking from the domain model outwards (onion architecture). The reasoning behind all this is that the domain model is the least likely to change over time and acts as a core to the application, insulating the layers from each other's flaws.
So, starting with a domain model that enforces its own validity, you are left with the question of duplication of validation code. There are some ways to avoid this. Your view model may, for example, try to create a domain object and translate any exceptions thrown into validation failures. Validators can also be extracted and reused. Depending on your use cases you have to see what works best for you; just be careful to keep it simple. Perhaps, if your use cases are not too tough, it might be most maintainable to simply duplicate the validation. Remember that deduplication increases complexity.
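One hedged sketch of that "translate exceptions into validation failures" idea, shown here in a controller action; the Customer constructor, the ICustomerRepository abstraction and the controller itself are assumptions for illustration, not code from the question:

using System;
using System.Web.Mvc;

public interface ICustomerRepository { void Add(Customer customer); } // assumed abstraction

public class CustomersController : Controller
{
    private readonly ICustomerRepository _repository;

    public CustomersController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Create(CustomerCreateViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        try
        {
            // Assumes a constructor that enforces the domain's own rules.
            var customer = new Customer(viewModel.FirstName, viewModel.LastName, viewModel.DateOfBirth);
            _repository.Add(customer);
        }
        catch (ArgumentException ex)
        {
            // The domain rejected the data: surface it as a validation message.
            ModelState.AddModelError(string.Empty, ex.Message);
            return View(viewModel);
        }

        return RedirectToAction("Index");
    }
}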
I have seen code bases in which only the domain layer handled validation and code bases in which validation was handled in both the domain and presentation layers. At this point I have a preference for simply duplicating the validation logic, because I have seen how hard it is to meaningfully map domain validation errors to a contextual user interface.
I tend to think of client validation as more sanitizing the data at the UI level. In other words, checking that, for example, an input field that is a number is given a number by the user. Or whether the length of a text input meets the minimum length requirement. Stuff like that.
At the domain level, you should be checking business domain rules. For example, if the user is entering details about a new Product, does the product name already exist? Or maybe checking that the user has selected a valid Department when configuring a new User, based on that User's skills? These are just off-the-cuff examples, but I hope they give an idea of what I mean.
You would need to have a model validator if you have several clients for your model. For instance, if you have both ASP.NET MVC and a WPF application calling your model, it makes sense to have the validation logic on the model. But in your case, where you have only one client, that would be overkill.

Model validation based on the DataType

I have a project with a classic 3-tier structure: DataStore, BusinessLogic, Web-Frontend.
In the DataStore I have a Model (simplified) e.g. ConfigModel.cs:
public class ConfigModel
{
    [DataType(DataType.EmailAddress)]
    public string DefaultSenderEmail { get; set; }

    public IPAddress FallbackDNS { get; set; }
}
Here comes the question:
What's an elegant way to programmatically add validators according to either the actual data type or the DataType attribute?
A few answers that I have considered myself so far but did not find satisfactory:
Add an [EmailAddress] validation attribute to the parameter: I don't want duplication and I don't want any reference to MVC specific code in my DataStore Layer.
Make separate ViewModels and use AutoMapper: Since some of my models are a lot more complex than that, I'd hate to make specific ViewModels.
Thanks!
I would also consider using AutoMapper, though that is not an answer to this question.
Maybe you can consider this: http://weblogs.asp.net/srkirkland/archive/2011/02/15/adding-client-validation-to-dataannotations-datatype-attribute.aspx
That is not duplication. DataType is used for different purposes, and validation is a different thing. Although they may sound the same (specifically for EmailAddress), you should not consider using both to be duplication. Moreover, validation is automatically handled for non-nullable types - they are treated as Required. DateTimes are also checked for a valid format automatically.
What you will definitely hate is controlling which properties of your domain model can be edited by users via the Bind attribute, and controlling different validations on the same model when using different views. So go for ViewModels! Decorate them with all the attributes needed for your web application and map back to domain models using AutoMapper, as sketched below.
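A small sketch of that suggestion, assuming a hypothetical ConfigViewModel and the static Mapper.Map API used earlier in this thread:

using System.ComponentModel.DataAnnotations;

// The view model owns the web-facing validation metadata...
public class ConfigViewModel
{
    [Required]
    [EmailAddress]
    public string DefaultSenderEmail { get; set; }

    [Required]
    public string FallbackDNS { get; set; } // entered as text, parsed into IPAddress later
}

// ...while ConfigModel in the DataStore layer stays free of MVC concerns:
// ConfigModel config = Mapper.Map<ConfigModel>(viewModel);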
You may also want to check out FluentValidation.
