Background
We've been migrating a large amount of legacy code & systems to ASP.NET MVC forms. I've already coded up a number of CRUD type interfaces with MVC 4, using model binding, validation, attributes etc., so I'm fairly familiar with that whole paradigm. So far, all of these forms have been on our backend administration & management applications, where it pays to have very strict input validation. We are launching our first consumer facing application in MVC and are faced with a different type of problem.
The Problem
Our legacy forms in this area are the main revenue engine for our company. Usability of the consumer experience is the rule of the day. To that end, we want our forms to be as lenient as possible - the legacy system did a number of things to automatically correct user input (in entirely custom, non-standard ways each time, of course). In other words, we don't so much want input validation as input sanitization.
Examples
We ask the user for numerical inputs which have a unit of measure implied. Common ones are currency amounts or square footage. The input label makes it clear that they don't need to provide units or formatting:
What is the approximate square footage? (example: 2000)
What is your budget? (example: 150)
People being people, not everyone follows the directions, and we frequently get answers like:
approx 2100
1500 sq. ft.
$47.50, give or take
(Okay, I exaggerate on the last one.) The model we ultimately hand to our business logic uses numeric types for these fields (e.g. int and float). We can of course use datatype validator attributes (for example, [DataType(DataType.Currency)] for the budget input, or simply making the square footage field an integer) to clearly indicate to the user that they are doing it wrong, providing helpful error messages such as:
The square footage must be numbers only.
A better user experience, however, would be to attempt to interpret their response as leniently as possible, so they may complete the form with as little interruption as possible. (Note we have an extensive customer service side who can sort out mistakes on our system afterwards, but we have to get the user to complete the form before we can make contact.) For the square footage examples above, that would just mean stripping out non-digit characters. For the budget, it would mean stripping out everything that's not a digit or a decimal point. Only then would we apply the rest of the validation (is a number, greater than 0, less than 50000 etc.)
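For concreteness, here's a rough sketch of the kind of scrubbing we have in mind (the helper class and method names are just illustrations, not anything we already have in place):

using System.Text.RegularExpressions;

static class InputScrubber
{
    // Square footage: keep digits only ("1500 sq. ft." -> "1500").
    public static string ScrubToDigits(string input)
    {
        return Regex.Replace(input ?? "", @"[^0-9]", "");
    }

    // Budget: keep digits and a decimal point ("$47.50, give or take" -> "47.50").
    public static string ScrubToDecimal(string input)
    {
        return Regex.Replace(input ?? "", @"[^0-9.]", "");
    }
}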
We're stuck on the best approach to take to accomplish this.
Potential Solutions
We've considered custom attributes, custom model bindings, and a separate scrubber service class that would live between the model and the database. Here are some of the considerations we've taken into account trying to decide upon the approach.
Custom Validation Attributes
I've read a number of helpful resources on this. (They have varying degrees of relevancy and recency. A lot of stuff I found searching for this was written for MVC2 or MVC3 and is available with standard attributes in MVC4.)
Extending ASP.NET MVC’s Validation
Custom Validation Attribute in ASP.NET MVC3
A lot of questions & topics on input sanitization which were focused on Cross-site scripting attacks or database injection.
What I haven't found is anyone doing what I want to do, which would be changing the model value itself. I could obviously create a local copy of the value, sanitize it and provide a pass/fail, but this would result in a lot of duplicate code. I would still have to sanitize any input values again before saving to the database.
Changing the model value itself has 3 benefits:
It affects subsequent validation rules, which would improve their acceptance rate.
The value is closer to what will be put into the database, reducing the additional prep & mapping overhead needed before storage.
In the event of the form being rejected for another reason, it gently suggests to the user, "You're trying too hard on these fields."
Is this a valid approach? Is there someone out there who has used validation attributes in this way that I just missed?
Custom Model Binding
I read Splitting DateTime - Unit Testing ASP.NET MVC Custom Model Binders which focuses on custom date time input fields with custom validation & parsing done at the model binding layer. This lives a lot closer to the model itself, so it seems like a more appropriate place to be modifying the model values. In fact, the example class DateAndTimeModelBinder : IModelBinder does exactly that in a few places.
However, the controller action signature provided for this example does not make use of an overall model class. It looks like this
public ActionResult Edit(int id,
[DateAndTime("year", "mo", "day", "hh","mm","secondsorhwatever")]
DateTime foo) {
Rather than this
public ActionResult Edit(
MyModelWithADateTimeProperty model) {
Shortly before that, the article does say
First, usage. You can either put this Custom Model Binder in charge of all your DateTimes by registering it in the Global.asax:
ModelBinders.Binders[typeof(DateTime)] =
new DateAndTimeModelBinder() { Date = "Date", Time = "Time" };
Would that be sufficient to invoke the model binding for the date time field on the single-parameter model example MyModelWithADateTimeProperty?
The other potential drawback that I see here is that the model binder operates on a type, rather than an attribute you can apply to the standard data types. So, for example, each set of validation rules I wanted to apply would necessitate a new custom type. This isn't necessarily bad, but it could get messy and cause a lot of repeated code. Imagine:
public class MyDataModel {
    [Required]
    public CurrencyType BudgetRange { get; set; }

    public PositiveOnlyCurrencyType PaymentAmount { get; set; }

    [Required]
    public StripNonDigitsIntegerType SquareFootage { get; set; }
}
Not the ugliest model code I've ever seen, but not the prettiest either.
Custom, External scrubber class
This has the fewest questions for me, but it has the most drawbacks as well. I've done a few things like this before, only to really regret it for one of the following reasons:
Being separate from the controller and model, it is nigh impossible to elegantly extend its validation rules to the client side.
It thoroughly obfuscates what is and what isn't an acceptable input for the different model fields.
It creates some very cumbersome hoops for displaying errors back to the user. You have to pass in your model state to the scrubber service, which makes your scrubber service uniquely tied to the MVC framework. OR you have to make your scrubber service capable of returning errors in a format that the controller can digest, which is rather more logic than is usually recommended for a controller.
The Question
Which approach would you take (or, have you taken) to accomplish this type of sanitization? What problems did it solve for you? What problems did you run into?
I would take the ModelBinder approach.
When form data comes in, it goes through the model binding infrastructure. There you can override the decimal model binder to clean up the input. After that it flows into the usual validation routines, with no need to write specific validation attributes or anything like that.
You can also use one intelligent model binder that switches on type internally, or override the ModelBinderProvider, so your code won't be bloated with ModelBinderAttribute. Here is Jimmy Bogard's article about this. You also gain some flexibility, because you can use attributes to declare whether a model uses strict or adaptive binding.
Overall, IMHO, validation attributes are not supposed to alter input; they should validate it. Model binders are in fact responsible for converting all the weird stuff that comes in into something usable in your system, and your third approach duplicates model binder functionality.
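For example, a minimal sketch of such a binder might look like this (the class name and error message are illustrative, not a drop-in implementation):

using System.Globalization;
using System.Text.RegularExpressions;
using System.Web.Mvc;

public class LenientDecimalModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (result == null || string.IsNullOrWhiteSpace(result.AttemptedValue))
            return null;

        // Strip everything that is not a digit or a decimal point ("$47.50, give or take" -> "47.50").
        var cleaned = Regex.Replace(result.AttemptedValue, @"[^0-9.]", "");

        decimal value;
        if (decimal.TryParse(cleaned, NumberStyles.Number, CultureInfo.InvariantCulture, out value))
            return value;

        bindingContext.ModelState.AddModelError(bindingContext.ModelName, "Please enter a number.");
        return null;
    }
}

// Registered once in Global.asax, in the same way as the DateAndTimeModelBinder above:
// ModelBinders.Binders[typeof(decimal)] = new LenientDecimalModelBinder();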
Hope this helps, and sorry for my English.
Related
I have a question regarding read models. I use read models when I get data from the database, and equivalent entity/aggregate models for use in repositories. My question is: can a read model class have a constructor which checks its properties? For instance, could I have the following read model class? On the other hand, I already have such checks in the equivalent domain model, EmployeeModel, so I am not convinced, as it would be a bit of duplication. An additional question: my EmployeeModel (domain) has a non-nullable EmploymentDate; can I mark it nullable in the read model? In other words, can the read model differ from the equivalent domain model?
class EmployeeReadModel
{
    public DateTime? EmploymentDate { get; set; }
}
Can I add a constructor with such a check to this read model?
class EmployeeReadModel
{
    public DateTime? EmploymentDate { get; set; }

    public EmployeeReadModel(DateTime? employeeDate)
    {
        EmploymentDate = employeeDate ?? throw new Exception();
    }
}
A read model is something that I see as going over-the-wire. As such it should be easily serializable and methods usually present a problem. Also, if there isn't a default constructor then you also have issues.
Since a read model represents existing data there isn't too much sense in validating it. I would rather leave the validation to the domain model.
Given that a read model is more of a data transfer object chances are that once it leaves your system the receiving system is going to use it plainly as data. For instance, even a web front-end would parse a json representation of the data to consume it.
If you really would like methods on your read model classes then perhaps consider extension methods as these don't interfere with any serialization.
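For example, a trivial sketch of that idea (the extension method is purely illustrative, not from the question):

public static class EmployeeReadModelExtensions
{
    // Lives outside the read model, so serialization of EmployeeReadModel is unaffected.
    public static bool HasEmploymentDate(this EmployeeReadModel model)
    {
        return model.EmploymentDate.HasValue;
    }
}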
Can a domain-driven-design read model have basic logic?
You won't normally have domain logic, in the "state machine" sense, in the read model.
However, you may have constraints to satisfy that are inconsistent with the data you have available.
For example, suppose I'm sent a query with ID:12345, and I'm supposed to respond with a message using the Foo schema, which includes a Bar member that is restricted to the integer values 0-9. We look in the book of record using ID:12345, and discover that the domain model has decided "this one goes to eleven".
So the data that is available doesn't match the required pre-conditions. Now what?
One thing to notice in this sort of setting is that you've got conflicting requirements; if you manage to get all the way to production without discovering that conflict, then you've failed at a number of quality inspection points in your pipeline.
In other words, you're supposed to not have this problem by having discovered it and fixed it a long time ago.
One of the nice things about crash on conflict is that it pulls the Andon cord hard -- everything screeches to a halt. Bonus - that's really easy to detect. The downside, of course, is that you lose revenue until you get a fix deployed.
The other downside is that a lot of things can get caught in the blast radius of the crash. In particular, if your monitoring and repair tools can't run because you are crashing on conflict, it's going to be a real pain to fix.
In other words, we want to be very precise - it's not the responsibility of the read model to detect whether the write model or the human operators are behaving correctly; it's only the job of the read model to determine whether it can satisfy its own requirements with the data it has been given.
In my domain each Domain Entity may have many Value Objects. I have created value objects to represent money, weight, count, length, volume, percentage, etc.
Each of these value objects contains both a numeric value and a unit of measure. E.g. money contains the monetary value and the currency ($, euro, ...), weight contains the numeric value and the unit of weight (kilo, pound, ...).
In the user interface these are displayed side-by-side as well: field name, its value followed by its accompanying unit, typically in a properties panel. The domain entities have equivalent DTOs that are exposed to the UI.
I have been searching for the best way to transfer the value objects inside the DTOs to the UI.
Do I simply expose the specific value object as a part of the DTO?
Do I expose a generic "value object"-equivalent that provides name/value/unit in a DTO?
Do I split it into separate name/value/unit members inside the DTO, just to reassemble them in the UI?
Do I transfer them as a KeyValuePair or Tuple inside the DTO?
Something else?
I have searched intensively but no other question seems to quite address this issue. Greatly appreciate any suggestions!
EDIT:
In the UI, both values and units can be changed and are then sent back to the domain to be updated.
I would be inclined to agree with debuggr's comment above if these are one-way transfers; Value Objects aren't really Domain objects - they have no behaviour that can change their state and therefore in many ways they are only specialised "bit-buckets" in that you can serialise them without losing context.
However, if you have followed DDD practices (or if your back-end is using multi-threading, etc.) then your Value Objects are immutable, i.e. they perhaps look something like this:
public class Money
{
    readonly decimal _amount;
    readonly string _currency;

    public decimal Amount { get { return _amount; } }
    public string Currency { get { return _currency; } }

    public Money(decimal amount, string currency)
    {
        // validity checks here and then
        _amount = amount;
        _currency = currency;
    }
}
Now if you need to send these back from the client, you can't easily re-use them directly in DTO objects unless whatever DTO mapping system you have (custom WebAPI model binder, AutoMapper, etc.) can easily let you bind the DTO to a Value Object using constructors... which may or may not be a problem for you; it could get messy :)
I would tend to stay away from "generic" DTO objects for things like this though; bear in mind that on the UI you still want some semblance of the "Domain" for the client-side code to work with (regardless of whether that's JavaScript on a web page or C# on a Form/Console, or whatever). Plus, it tends to be only a matter of time before you find an exceptional Value Object that has Name/Value/Unit/Plus One Weird Property specific to that Value concept.
The only "fool-proof"*** way of handling this is one DTO per Value Object; although this is extra work you can't really go wrong - if you have lots and lots of these Value Objects, you can always write a simple DTO generation tool or use a T4 template to generate them for you, based on the public properties of your Value Objects.
***not a guarantee
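As a rough sketch of that option, using the Money example above (the DTO and mapper names are made up):

public class MoneyDto
{
    public decimal Amount { get; set; }
    public string Currency { get; set; }
}

public static class MoneyDtoMapper
{
    public static MoneyDto ToDto(Money money)
    {
        return new MoneyDto { Amount = money.Amount, Currency = money.Currency };
    }

    // Mapping back goes through the Value Object's constructor,
    // so its validity checks still run on whatever the client sent.
    public static Money ToValueObject(MoneyDto dto)
    {
        return new Money(dto.Amount, dto.Currency);
    }
}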
DDD is all about behavior and explicitly expressing intent, next to clearly identifying the bounded contexts (transactional and organizational boundaries) for the problem you are trying to solve. This is far more important than the type of "structural" questions for which you are requesting answers.
I.e. starting from the "Domain Entities" that may have "Value Objects", where "Domain Entities" are mapped as a "DTO" to show/be edited in a UI is a statement about how you have structured things, that says nothing about what a user is trying to achieve in this UI, nor what the organization is required to do in response to this (i.e. the real business rules, such as awarding discounts, changing a shipping address, recommending other products a user might be interested in, changing a billing currency, etc).
It appears from your description that you have a domain model that mirrors what needs to be viewed/edited on a UI. That is kinda putting the cart before the horse. Now you have a lot of "tiers" that provide no added value and add a lot of complexity.
Let me try to explain what I mean, using the (simplified) example that was mentioned on having an "Order" with "Money". Using the approach that was mentioned, trying to show this on screen would likely involve the following steps:
Read the "Order Entity" for a given OrderId and its related "Money" values (likely in Order Lines for specific Product Types with a given Quantity and Unit Price). This would require a SQL statement with several joins (if using a SQL DB).
Map each of these somehow to a mirroring "domain objects" structure.
Map these again to mirroring a "DTO" object hierarchy.
Map these "DTO" objects to "View" or "ViewModel" objects in the UI.
That is a lot of work which, in this example, has not yielded any of the benefit of having a model that is supposed to capture and execute business logic.
Now as the next step, the user is editing fields in a UI. And you somehow have to marshal this back to your domain entity using the reverse route and try to infer the user's intent from the fields that were changed and subsequently apply business rules to that.
So say for instance that the user changes the currency on the "MoneyDTO" of a line item. What could be the user's intent? Make this the new Billing Currency and change it for all other line items as well? And how does this relate to the business rules? Do you need to look up the exchange rate and change the "Moneys" for all line items? Is there different business logic for more volatile currencies? Do you need to switch to new rules regarding VAT?
Those are the types of questions that seem to be more relevant for your domain, and would likely lead to a structure of domain entities and services that is different from the model which is viewed/modified on a UI.
Why not simply store the viewmodel in your database (e.g. as JSON, so it can be retrieved with a single query and rendered directly), so that you do not need additional translation layers to show it to a user? Also, why not structure your UI to reveal intent, and map this to commands to be sent to your domain service? E.g. a "change shipping address" command is likely relevant in the "shipping" bounded context of your organisation, while "change billing currency" is relevant in the "billing" bounded context.
Also, if you complement this with domain events that are generated from your domain, denoting something that "has happened" you get additional benefits. For example the "order line added" event could be picked up by the "Additional Products A User Might Be Interested In" service, that in response updates the "Suggested Products" viewmodel in the UI for the user.
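To make that concrete, a minimal sketch of such messages (all names are illustrative, not from your domain):

using System;

// Intent-revealing command, handled in the "billing" bounded context.
public class ChangeBillingCurrency
{
    public Guid OrderId { get; set; }
    public string NewCurrency { get; set; }   // e.g. "EUR"
}

// Domain event denoting something that has happened; a "Suggested Products"
// projection could subscribe to it and update its own viewmodel.
public class OrderLineAdded
{
    public Guid OrderId { get; set; }
    public Guid ProductId { get; set; }
    public int Quantity { get; set; }
}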
I would recommend having a look at concepts from CQRS as one possible means for dealing with these types of problems. As a very basic introduction with some more detailed references, you could check out Martin Fowler's take on this: http://martinfowler.com/bliki/CQRS.html
I have a view model which should check that the label of a new entity is unique (i.e. not in the DB yet).
At the moment I've done it in the view model class:
public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
{
if (PowerOrDuty != null)
{
if (PowerOrDuty.Identifier == null)
{
using (var db = new PowersAndDutiesContext())
{
var existingLabels = db.PowersAndDuties.Select(pod => pod.Label);
if (existingLabels.Contains(PowerOrDuty.Label))
{
yield return new ValidationResult("Cannot create a new power or duty because another power or duty with this label already exists");
}
}
}
......
Please note that this is a small internal app with small DB and my time is limited, so the code is not perfect.
I feel that DB access from view models might be a bad practice. Should view model have direct DB access? Should it be able to call a repository to get the available labels? Should validation requiring DB access be done in a controller instead?
Should view model have direct DB access?
I think this should be avoided at all costs.
Should it be able to call a repository to get the available labels?
This is not the concern of a ViewModel.
This would introduce some complexity into the testing of your ViewModel (which should need almost none); I'd take that as a sign of trouble coming.
Should validation requiring DB access be done in a controller instead?
Maybe, if by "DB" you mean "Repository". But what comes to mind is a separate custom validation class that you would be able to (un)plug, test, and reuse, for example in another controller for Ajax validation.
I think that accessing the DB from the VM is not wrong... AFAIK it does not break the MVC concept (since the view model is a presentation-layer concept). That said, it would be better to have the Validate method provided by a Service Layer.
But all the logic related to the content of the ViewModel is better kept in the VM than in the Controller. Cleaner controllers are better.
Your view model should not be tied to your context; it only cares about displaying data and validating it after a submit. You can perform validation such as a required field or a value in range, but you can't know whether a label already exists in your database.
Nor can you fetch a list of "forbidden labels" before displaying your form in order to test the label afterwards, because that list could have changed in the meantime (another user updating your database).
In my opinion, validation at the model level should focus on what it can validate without knowledge of the data source, and let your database notify you of errors such as submitting a duplicate value for a field which has a unique constraint. You'll catch the exceptions coming from your database for errors like those and handle them accordingly.
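As a rough sketch of what that can look like with Entity Framework (assuming EF 5/6 and the DbContext from the question; real code would inspect the exception to confirm which constraint actually failed):

using System.Data.Entity.Infrastructure;

// Inside the controller action that saves the entity (sketch only):
try
{
    db.PowersAndDuties.Add(powerOrDuty);
    db.SaveChanges();
}
catch (DbUpdateException)
{
    // A unique-constraint violation surfaces here; translate it into a
    // model-state error the form can display next to the field.
    ModelState.AddModelError("PowerOrDuty.Label",
        "Another power or duty with this label already exists.");
    return View(model);
}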
Anyway, I think there's no straightforward answer for a problem like this.
I personally like the ViewModels to be anemic -- simply classes with properties.
For custom server-side validation like this, I prefer it to go either in a service, with the consumption of the service in your controller, or even behind a custom validator.
With a custom validator, you could even (optionally) execute the validation remotely. That gets a little more complex, but I've done it using a generic remote validator that consumes an Ajax action method to perform the validation, and wired that up through both the client validator and the remote validator (to ensure the validation logic lives in a single method).
But whichever way you go, I think it is more common -- and in my opinion, cleaner -- to keep all logic out of your ViewModel. Even in a simple app, your ViewModel should be dumb to your database context. Ideally, only services (not necessarily web services, but just an abstraction layer) are aware of your database context.
This, to me, should be done regardless of the size of application. I think the effort and complexity (it only adds another assembly to your solution) is worth the abstraction you get. Down the road, if you happen to decide to consume your services from another application, or if you decide to swap out your database context, it's much easier with that abstraction in place.
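As a hedged sketch of the custom-validator route (the attribute name, the repository interface, and the service wiring are all assumptions; ValidationContext only resolves services if you supply a service provider or otherwise wire one up):

using System.ComponentModel.DataAnnotations;

// Assumed abstraction over the labels table.
public interface IPowerOrDutyRepository
{
    bool LabelExists(string label);
}

public class UniqueLabelAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        var label = value as string;
        if (string.IsNullOrEmpty(label))
            return ValidationResult.Success;

        // ValidationContext acts as an IServiceProvider, so the repository can be
        // resolved here instead of newing up a DbContext inside the view model.
        var repository = (IPowerOrDutyRepository)validationContext.GetService(typeof(IPowerOrDutyRepository));
        if (repository != null && repository.LabelExists(label))
            return new ValidationResult("A power or duty with this label already exists.");

        return ValidationResult.Success;
    }
}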
Say my phone number is stored in the database as a 10-digit string:
0000000000
And I want to format this phone number when presenting it to the user as:
(000) 000-0000
And I have an extension method in a utility assembly that handles this formatting:
static string ToPhoneNumber(this string value)
{
    return Regex.Replace(value, @"(\d{3})(\d{3})(\d{4})", "($1) $2-$3");
}
My question is, at what point do I apply this conversion?
1) In the view:
@Model.PhoneNumber.ToPhoneNumber()
2) In the view model:
public string FormattedPhoneNumber
{
    get
    {
        return this.PhoneNumber.ToPhoneNumber();
    }
}
3) In the controller:
userModel.FormattedPhoneNumber = userModel.PhoneNumber.ToPhoneNumber();
4) In the domain model (same implementation as #2)
5) In the service (same implementation as #3)
Also, does the answer depend whether it's a global formatting need (like phone number) vs. an isolated one-time formatting on a single view?
I would give my thoughts, but don't want to influence any answers.
I personally like to keep things in my ViewModel because what you end up with is strange looking code in your view if you don't. Let's take your example.
Razor View:
@using MyNamespace.Models.Extensions
@model MyNamespace.Models.ViewModels.IndexViewModel

@if (!string.IsNullOrWhiteSpace(Model.PhoneNumber)) {
    <div> @Model.PhoneNumber.ToPhoneNumber() </div>
}
Versus the alternative:
Razor View:
@model MyNamespace.Models.ViewModels.IndexViewModel
@Model.FormattedPhoneNumber
ViewModel:
public string FormattedPhoneNumber {
get {
return PhoneNumber.IsEmpty()
? "Not Available"
: PhoneNumber.ToPhoneNumber();
}
}
You could definitely improve my code, but the point is that it keeps your views simpler and less cluttered with branching logic.
Also, I never claimed to be a saint, so I don't always follow my own advice, but I should. Do as I say, not as I do :)
I think it is the view's responsibility to decide how to display data, because only the view knows what presentation options are available. On the other hand, it is probably easier to do it in the controller, and the controller would know about the user's locale. Overall, I think it makes very little difference.
First off, with architectural patterns in general, and especially those dealing with "separation of concerns", the final arbiter is always "what is the best approach in my scenario" - I strongly believe that dogmatic adherence to a set of rules without considering your own plans and needs is a horrible practice. Not to mention the fact there is no clear consensus here: depending on your variety of XYZ (MVC, MVP, MVVM) you'll find opposing thoughts on what goes where all over the internets.
That said, my quick-twitch answer to the question is "Use your judgement".
Arguments for "in the view":
it deals with presentation, therefore it is the view's responsibility
Arguments for "in the view model":
generally, the role of the view model is to provide "ready to data bind" representations of the model - hence, transforming model data into a form directly consumable by the view is the responsibility of the view model
Arguments for the model:
this could be an exceedingly common representation for the model data; therefore, following DRY, the model will assume responsibility for this representation
Arguments for the controller:
... Ok, can't think of a reasonable one here. Controllers typically respond to actions, so it's a stretch to justify it belonging here.
The point I'm trying to make is that so long as a single point of your system accepts and takes on the responsibility and that responsibility is handled solely by that component/layer/class, you've accomplished the primary goal, which is to prevent dilution/repetition/low cohesion.
My personal opinion, fwiw, would probably fall on the view or view model. If this were WPF I'd almost certainly say the view (via the format providers available to WPF data binding). In the web world, I'd probably lean towards the view, although a strong argument for the model exists - say you now want to expose this data via a REST/JSON/etc. service: you can easily handle this change (assuming you want to return the formatted data, that is).
TL/DR: It really depends; follow common sense and use your judgement. Just keeping all the related logic in a single place is the important part, and question any dogmatic/commandment-style "Thou Shalt" statements.
It depends on a few things, such as your definition of ViewModel: are you following a (self-coined) MVCVM* approach, where you'd have a ViewModel specific to your view in addition to your domain models?
If so, the VM could certainly contain the formatting logic; that is the whole point of having this ViewModel in the first place: to model the View. So Option 2.
That said, the reasoning behind this is that formatting it yourself would begin to violate the DRY principle if you were formatting like this:
@Regex.Replace(Model.PhoneNumber, @"(\d{3})(\d{3})(\d{4})", "($1) $2-$3");
Since you've got an extension method, it's not that much of a problem to call the formatter in your view at all, but I'd still prefer to do it in the dedicated VM.
If your VM is really just the domain model containing the raw data (see this, pattern 1), then it should definitely be in your View, so Option 1. Just a note: if you're using this pattern I'd suggest against it, as it couples your view strongly to a low-level object; you're better off abstracting this out into what you need, ensuring that the coupling between Domain Model and View Model is checked at compile time, not runtime!
Above all: this should certainly not go into your domain model.
* Model, View, Controller, ViewModel. Where the ViewModel contains the data that is to be consumed in your View, in the format of which it requires.
I would place this in the viewmodel and not in the view. The view is intended to just present the data/information to the end user. Keeping up the separation of concerns makes sure every object is as independent as possible. If you pass the formatted numbers to the view, the view has no concerns about what is to be displayed; it just displays the formatted numbers.
I would argue that this is modeling, not formatting. If the receiving application needs to re-format the order, spacing or capitalization of these fields it must first split the single field into several separate fields.
This should be the responsibility of the Services layer. We are talking schema, not format. The application requires that the fields be split as part of its data contract. Where this split happens should be in the Application Services layer that your app consumes.
The services layer should try to add metadata to the information through schema.
For example, if I receive a phone number from a data contract like this:
1234567890
The requirements for presentation are as follows:
(123) 456-7890
The services tier should be breaking the phone number apart into its elements:
<PhoneNumber>
<CountryCode>1</CountryCode>
<Area>123</Area>
<Prefix>456</Prefix>
<LineNumber>7890</LineNumber>
</PhoneNumber>
Option 1 is the best, followed by 2.
In the controller, you should actually remove the formatting before sending it to the service layer, so neither the domain model nor the service model is aware of the formatting.
I'm new to MVC / MVP and learning it by creating a WinForms application.
I have to some extent created the Models, Presenters and Views... Now where do my validations fit?
I think the initial datatype validation (like allowing only numbers in the Age field) should be done by the view, whereas the other validations (like whether the age is within 200) should be done by the model.
Regarding datatype validation, my view exposes the values as properties:
public int? Age
{
    get
    {
        int val;
        if (Int32.TryParse(TbxAge.Text, out val))
        {
            return val;
        }
        return null;
    }
    set
    {
        // Nullable<int>.ToString() returns an empty string when the value is null.
        TbxAge.Text = value.ToString();
    }
}
I can perform validation separately, but how do I inform the presenter that validation is still pending when it tries to access the Age property? Particularly when the field is optional.
Is it good to throw a ValidationPending exception? But then the presenter must catch it at every point.
Is my understanding correct, or am I missing something?
Update (for the sake of clarity): In this simple case where the age field is optional, what should I do when the user has typed his name instead of a number? I can't pass null, as that would mean the field has been left empty by the user. So how do I inform the presenter that invalid data has been entered?
Coming from the MVP side (I believe it's more appropriate for WinForms), the answer to your question is debatable. However, the key to my understanding was that at any time you should be able to swap your view. That is, I should be able to provide a new WinForms view to display your application, or hook it up to an ASP.NET MVC front end.
Once you realise this, where the validation belongs becomes apparent. The application itself (the business logic) should throw exceptions, handle errors and so forth. The UI logic should be dumb. In other words, for a WinForms view you should ensure the field is not empty, and so forth; many of the control properties in the Visual Studio properties panel allow this. Coding validation in the GUI, to the point of throwing exceptions, is a big no-no. If you were to have validation in both the view and the model you'd be duplicating code; all the view requires is simple validation, such as controls not being empty. Let the application itself perform the real validation.
Imagine if I switched your view to an ASP.NET MVC front end. I would not have those controls, and thus some form of client-side scripting would be required. The point I'm making is that the only code you should need to write is for the views; do not try to generalise the UI validation across views, as it will defeat the purpose of separating your concerns.
Your core application should have all your logic within it. The specialised view logic (WinForms properties, JavaScript, etc.) should be unique per view. Having properties and interfaces that each view must validate against is wrong in my opinion.
If your "view exposes the values as properties", I suspect that you are missing something. The principal distinction between MVP/MVC and some of the other UI decoupling patterns is that they include a model, which is intended to be the main container for data shared by the view and presenter or controller.
If you are using the model as a data container, the responsibility for validation becomes quite clear. Since only the presenter (or controller) actually does anything besides display the data, it is the one responsible for verifying that the data is in an acceptable state for the operation which it is about to perform.
Addition of visual indicators of data validation problems to an editor form/window during editing is certainly nice to have. However, it should be considered more or less equivalent to view "eye candy", and it should be treated as an add-on to the real validation logic (even if it is based on shared metadata or code). The presenter (or controller) should remain the true authority for data validity, and its validation code should not be hosted in the view.
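A minimal sketch of that division of responsibility, loosely based on the Age example from the question (all type and member names here are illustrative):

public class EmployeeModel
{
    public int? Age { get; set; }
}

public interface IEmployeeView
{
    string AgeText { get; }               // raw user input; the view does no parsing
    void ShowAgeError(string message);    // the view only displays what the presenter decides
}

public class EmployeePresenter
{
    private readonly IEmployeeView _view;
    private readonly EmployeeModel _model;

    public EmployeePresenter(IEmployeeView view, EmployeeModel model)
    {
        _view = view;
        _model = model;
    }

    public void Save()
    {
        var text = _view.AgeText;

        if (string.IsNullOrWhiteSpace(text))
        {
            _model.Age = null;            // optional field genuinely left empty
        }
        else
        {
            int age;
            if (!int.TryParse(text, out age))
            {
                _view.ShowAgeError("Age must be a number.");
                return;                   // invalid input never reaches the model
            }
            _model.Age = age;
        }

        // ...apply the remaining model-level rules (e.g. age within 200) and persist.
    }
}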
I believe view validation is only relevant as JavaScript, since the view does not run any code on post; only the controller does.
But I would never trust JavaScript validation alone, as a malicious user could bypass it, or a user might simply have JS disabled, so repeat any JS validation in server-side code in the controller.
The view might have means to display any errors, though.