After playing around with Asp.Net MVC for some time I have decided to actually use it in a project. One of the issues that came up is that the frontend site might have different validation rules for a given model than the admin panel.
I am aware of the MetadataType attribute, but since we have more than one context it would not work for us out of the box.
In order to solve this I implemented a custom ModelMetadataProvider that redirects the default ModelMetadataProvider to a different type based on the request's execution context. This works pretty well for displaying the needed UI.
The part of this solution I do not like is that I ended up reading the call stack from my custom model metadata provider to determine whether a given call comes from model binding. When I did not do that, the call to TryUpdateModel from the controller would fail with "Object does not match target type", because the model binder was trying to use properties from type A to set values on an instance of type B.
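Roughly, the idea looks like this (a simplified sketch rather than my exact code: the buddy types, the admin-path check, and the omitted model-binding detection are placeholders):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

// Buddy types carrying context-specific DataAnnotations (placeholders):
public class AdminProductMetadata { /* [Required] etc. on mirrored properties */ }
public class FrontendProductMetadata { }

public class ContextAwareMetadataProvider : DataAnnotationsModelMetadataProvider
{
    protected override ModelMetadata CreateMetadata(
        IEnumerable<Attribute> attributes, Type containerType,
        Func<object> modelAccessor, Type modelType, string propertyName)
    {
        // Choose the buddy type from the request's execution context
        // (a path check stands in for the real detection logic).
        Type buddyType = HttpContext.Current.Request.Path.StartsWith("/admin")
            ? typeof(AdminProductMetadata)
            : typeof(FrontendProductMetadata);

        // Merge the buddy type's attributes for this property, if any.
        var buddyProperty = propertyName == null ? null : buddyType.GetProperty(propertyName);
        if (buddyProperty != null)
            attributes = attributes.Concat(
                buddyProperty.GetCustomAttributes(true).Cast<Attribute>());

        return base.CreateMetadata(attributes, containerType, modelAccessor,
            modelType, propertyName);
    }
}

The provider is registered in Application_Start with ModelMetadataProviders.Current = new ContextAwareMetadataProvider();.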
Is reading the call stack such a bad idea for production?
Is there a way to replicate the MetadataTypeAttribute behavior selectively without using attributes?
Thanks in advance,
John
This is one of those instances where you wish the ASP.NET MVC Team hadn't sealed a class - I'm sure they had their reasons. I was going to suggest simply creating your own attribute, derived from MetadataTypeAttribute.
One way to go about this is to take the source of the attribute and write your own:
http://dotnetinside.com/framework/v4.0.30319/framework/v4.0.30319/System.ComponentModel.DataAnnotations/MetadataTypeAttribute
Although, of course, this makes your code less maintainable.
I would assert that, to the best of my knowledge, you are already making the right decision with a ModelMetadataProvider as your solution. I'm a little nervous for you regarding analysing the call stack though: change locations, move something to an Area, you get my drift. It would be very easy for a build-time change to break that code in a way that isn't found until runtime, or beyond QA.
You haven't said how the context is determined, but I would personally tackle that by adding a property to the class itself: an enum (finite possibilities and design-time breakage) listing the possible contexts. Populate it during spin-up of the class, ready for the provider to execute, and have the provider pass through the correct metadata type based on the value of the enum.
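Sketched out, that might look like this (names are purely illustrative, reusing the buddy types from the sketch above):

using System;

public enum MetadataContext
{
    Frontend,
    Admin
}

public class Product
{
    // Populated once, when the entity is spun up, so a wrong or missing
    // context fails loudly rather than deep inside the provider.
    public MetadataContext Context { get; set; }

    public string Name { get; set; }
}

public static class MetadataContextMap
{
    // The provider switches on the enum instead of inspecting the call stack.
    public static Type BuddyTypeFor(MetadataContext context)
    {
        switch (context)
        {
            case MetadataContext.Admin: return typeof(AdminProductMetadata);
            case MetadataContext.Frontend: return typeof(FrontendProductMetadata);
            default: throw new ArgumentOutOfRangeException("context");
        }
    }
}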
Many ways to skin this cat, but something that is going to break on build will serve you best, IMHO.
Unless you are using MVC 6 you may find ModelMetadata Fluent Configuration useful.
Some nice examples of how to use it can be found here and here.
What is really important is that it is just code, completely under your control. Thus, once you have different contexts, you may decide to define different configurations, or you may go a bit further and make (a set of) different registrations for different contexts.
What really helps is "decorating" (the term is used on purpose!) properties of a base class; at least, nothing seems to stop you from doing it.
EDIT: Model Metadata shouldn't be confused with WCF RIA Services Contrib.
I've got a simple factory, built in C#, that instantiates and configures validators built in ASP.NET and JavaScript. I want a way to test whether I'm accidentally trying to set a validator twice on the same Control (for example, having two RequiredValueValidators is not a great idea and could cause UI/UX problems), while making sure that validators that use the same building mechanisms, but in a different way, are preserved (such as two RegularExpressionValidators that use different regular expressions, but not two that use the same one).
I've tried a few different techniques, which I'll detail as answers below, but in essence I need a way to describe how to compare two validators of the same base type and discern whether they are equal (N.B. 'equal' is not 'identical': they could have different IDs and so on but still do the same job), in a form that is interpretable at runtime and accessible to the other areas of my C# .dll that actually run the check.
My answers will be community wiki, with the intent that the errors/pitfalls I fell into will be edited out/corrected/discussed by the community rather than merely downvoted for being initially incorrect, so that others won't suffer the same fate.
One attempt I've made is to set a predicate as an attribute on the method in my factory that builds the validator. This would then be accessed via reflection somewhere else and used to compare two potential validators.
A major flaw in this is that you cannot use predicates (or delegates, for that matter) as attribute arguments.
A possible work-around is to give each validator an individual property (containing the predicate delegate or an IEquatable<> implementation) and retrieve that. However, there are a lot of different things to consider when comparing validators (what type, configuration, whether it relies on other controls, etc.), so unless you can create a base class or interface that can deal with different kinds of IEquatable<ValidatorType>, this is also impossible...
I've also tried creating a small, static switch-case method within my factory that simply outputs a small configuration class built by the switch-case. This is in essence much simpler than the previous approach, but it's not without its problems. For example, I cannot define my return and parameter types correctly so that a RegularExpressionValidator check and a ValidDateValidator check can live in the same code block.
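For illustration, one direction a comparison helper could take (a hedged sketch; AreEquivalent is a hypothetical helper, while the validator types are the standard WebForms ones):

using System.Web.UI.WebControls;

public static class ValidatorComparer
{
    // True when two validators would do the same job, even though
    // their IDs may differ.
    public static bool AreEquivalent(BaseValidator a, BaseValidator b)
    {
        if (a.GetType() != b.GetType()) return false;
        if (a.ControlToValidate != b.ControlToValidate) return false;

        // For most validators, same type + same target control already
        // means a duplicate (two required-field checks on one control, say);
        // regex validators additionally need the same expression.
        var regexA = a as RegularExpressionValidator;
        if (regexA != null)
            return regexA.ValidationExpression ==
                   ((RegularExpressionValidator)b).ValidationExpression;

        return true;
    }
}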
As I design the models for a domain, they almost always end up having some .IsSomething functionality on them. IsNew and IsDirty are common for data persistence purposes, IsValid for business rule validation, even IsFraudulent in a current project (more business rule validation), etc. Whenever I see these implemented by others, they are almost invariably done so as methods. But I find myself wondering if there's a particular reason for that.
I tend to see properties as describing an object and methods as performing some kind of action. These don't really perform an action. They involve code because they're dynamically determined when called, and they're clearly read-only, but to me they still fit as properties rather than methods.
There could potentially be a serialization issue with properties, I suppose. Though a rich domain model tends not to serialize well anyway given that it contains logic and functionality, so any time I need to move something across a service boundary I generally flatten it into a defined DTO structure first anyway.
But I wonder if anybody else has any insight on the subject? Is there a good reason to implement these as methods rather than as properties?
(Tangentially related, though an answer has already been given, extension properties would really help with consistency on something like this. I have a number of IsSomething() extension methods, usually on System.String, for implementing domain-specific logic. But even if properties are the way to go, I may want to stick with methods just for consistency with the extensions.)
Assuming that accessing the property:
Has no side-effects
Is "reasonably speedy" (yeah, very woolly...)
then I see no reason not to make it a property. The serialization shouldn't be an issue - most serialization schemes provide ways of marking a property as transient (i.e. not-to-be-serialized).
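For instance, with the XML serializer:

using System.Xml.Serialization;

public class Customer
{
    public string Name { get; set; }

    // Computed on demand; marked so the XML serializer skips it.
    [XmlIgnore]
    public bool IsValid
    {
        get { return !string.IsNullOrEmpty(Name); }
    }
}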
I would use a property because:
It describes the object in some way, so conceptually it is a characteristic, a property of the object
It does not ask for any parameters
It basically just retrieves certain data and does not perform any standalone actions or modifications
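A small sketch of what that looks like in practice (names are illustrative):

public class Invoice
{
    private decimal _amount;

    public decimal Amount
    {
        get { return _amount; }
        set { _amount = value; IsDirty = true; }  // mutation flips the flag
    }

    // Parameterless, read-only, side-effect free: reads naturally as a property.
    public bool IsDirty { get; private set; }

    public bool IsValid
    {
        get { return _amount > 0m; }
    }
}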
Where should I place the Validation logic of the Domain objects in my solution? Should I put them in the Domain classes, Business layer or else?
I would also like to make use of Validation Application Block and Policy Injection Application Block from Microsoft Enterprise Library for this.
What validation strategy should I be using to fit all these together nicely?
Thanks all in advance!
It depends. First, you need to understand what you are validating.
You might validate:
that a value you retrieve from an HTTP POST can be parsed as a DateTime,
that Customer.Name is no longer than 100 symbols,
that the Customer has enough money to purchase stuff.
As you can see, these validations are different in nature, so they should be separated. Their importance varies too (see the "All rules aren't created equal" paragraph).
One thing you might want to consider is not allowing a domain object to be in an invalid state.
That greatly reduces complexity, because at any given moment you know the object is valid and you only need to validate the things related to the current task in order to advance.
Also, you should avoid using tools in your domain model, because it should be as infrastructure-free as possible.
Another thing: embrace value objects. They are great for encapsulating validation.
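For example (a minimal sketch, using the 100-symbol rule from above):

using System;

public sealed class CustomerName
{
    public string Value { get; private set; }

    public CustomerName(string value)
    {
        // The invariant lives in one place: a CustomerName that exists is valid.
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Name is required.", "value");
        if (value.Length > 100)
            throw new ArgumentException("Name cannot exceed 100 symbols.", "value");
        Value = value;
    }
}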
You can do either, depending on your needs.
Putting it in the domain classes makes sure the validation is always done, but can make the classes bloated. It can also go against the single responsibility principle, depending on how you interpret it (it adds the responsibility to validate). Putting it in the domain classes also restricts you to one kind of validation, and, unless you use inheritance, the same rule might have to be implemented multiple times in related classes (violating DRY). With this approach, validation is spread throughout your domain.
External validation (you can get a validation object through DI, factories, the business layer, or context) makes sure you can swap out the validation rules depending on context. For example, for a long-running process you want to save in a partially finished state, you could have one validation object just to be able to save, and another to check whether the domain class is really valid and ready to be used. Your domain classes will be simpler (fewer responsibilities, though you'd still have to do minimal checks, like null checks, to prevent runtime errors), and you can reuse rule sets for related classes as well. Validation is centred in a small area of your domain model this way. By the way, you can inject the external validation into the domain class itself, making sure the classes do validate themselves; they just don't know what they are validating.
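A sketch of that context-specific flavour (the types here are illustrative, not from any particular library):

using System.Collections.Generic;

public class Order
{
    public int CustomerId { get; set; }
    public IList<string> Lines { get; private set; }
    public Order() { Lines = new List<string>(); }
}

public interface IValidator<T>
{
    IEnumerable<string> Validate(T instance);
}

// Minimal rules: just enough to persist a half-finished draft.
public class DraftOrderValidator : IValidator<Order>
{
    public IEnumerable<string> Validate(Order order)
    {
        if (order.CustomerId == 0)
            yield return "An order must belong to a customer.";
    }
}

// Full rules: the order must actually be ready for use.
public class SubmitOrderValidator : IValidator<Order>
{
    public IEnumerable<string> Validate(Order order)
    {
        if (order.CustomerId == 0)
            yield return "An order must belong to a customer.";
        if (order.Lines.Count == 0)
            yield return "An order needs at least one line.";
    }
}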
I can't comment on the Validation Application Block, though. As always, you have to weigh the pros against the cons; there is never one valid solution.
First off, I agree with #i8abug.
But I did want to go a bit further and talk architecture. Every one of those design architectures, like domain-driven design, should be taken as nothing more than a suggestion and viewed with scrutiny.
At every step you should ask yourself what the benefits and drawbacks of the point in question are with regard to your application.
A lot of these involve adding a tremendous amount of code and seriously complicating projects with very little benefit.
The validation point is a prime example. As Stefan said, the single responsibility principle basically says you need to create a whole set of other classes whose only purpose is to validate the state of the original objects. Obviously this adds a LOT of code to the app. Maybe it's generated for you, maybe you have to hand-write it. Regardless, more code generally equates to less robustness and certainly equates to being harder to understand.
The benefit of separating all of that is that you can swap out validation rules. Ok, fine. The drawback is that you now have two files to look at and create for each class definition, i.e. more work. Does your app need to swap out validation rules? Probably not. I'd even wager that very, very few do.
Quite frankly, if you go down this path then you may as well define everything as a struct and let all of those "helper" classes creep back in to take care of validation, persistence, setting properties, etc., as being a full-blown class buys you almost nothing.
All of that said, I tend towards self-contained classes. In other words, they know how their properties relate to each other and know what acceptable values are. They can also perform operations on themselves and their children. In other words, they know what they are. This tends to lead to simplified coding and implementation. It also means knowing exactly where to go for a modification or change. The only separation I really make here is implementing Inversion of Control for persistence, which allows me to swap out data providers at runtime; that has been a requirement on several applications I've done.
Point is, think through what you are doing and decide if it's really the best way to go in your particular situation. All of these programming "rules" are just suggestions after all.
I generally put it in the domain objects. This is because the domain objects are the things that I am concerned about validating so if a rule for a specific object changes, I know where to update it rather than having to search through a bunch of unrelated entity rules in some specific validation class/file.
I realize this may not be considered POCO, but every project has specific exceptions and this one often makes sense to me. Likewise, in some projects it makes sense to have your domain entities referenced from the views and, therefore, implement INotifyPropertyChanged rather than constantly copying values from entities to a whole other set of view-specific objects.
The old way I did validation was with an IValidator interface like the one below, which each entity implemented.
public interface IValidator
{
    IList<RuleViolation> GetViolations();
}
Now I do this using NHibernate Validator (you don't need to use the NHibernate ORM to take advantage of the validation library). It is done simply through attributes.
//I can't remember the exact syntax, but it is very similar to this
public class MyEntity
{
    [Length(Min = 1, Max = 10)]
    public String Name { get; set; }
}

//... and then later ...
var invalidValues = new ValidatorEngine().Validate(myEntity);
Edit: I removed my comment about not having been a huge fan of Enterprise Library in general, since Chris informed me that it is now very similar to NHibernate Validator.
I apologise for this question, as it is rather fuzzy and combines several questions, but they are so closely related that I did not want to break them apart into several submissions.
I am currently thinking about how to test for configuration errors in an application. There are different options available, and one which has been used before is the IDataErrorInfo interface. I am not extremely happy with how this implementation looks; not because it doesn't work (it does), but because I don't fully agree with the actual implementation. I have been searching around this site (all 52 related questions) and others to see why Microsoft decided that using the keyword "this" with an index would be a good idea. It is typically used for indexing items in a collection, and even though one could consider the classes I implement a collection of errors, I do not really agree that the "this[]" keyword should implicitly be used to test for them. (Side note: does this mean that a custom collection class cannot have configuration errors of its own?) Why is this not a method call like "TestErrorState(string propertyName)", or even an indexed property? And how often is the "string Error { get; }" member actually used? To me it looks like kind of a hack and is not really easy to use.
One of the practical issues I have with this implementation is that I have objects relating to other objects, and I would like the error states to propagate. This turns out ugly, as the class on display in the user interface should be in an "error state" because of a related object that is not necessarily shown to the user (unless the user clicks an option on the interface and moves "down" one level in the hierarchy of objects). This means I need to extend the error tests with my own methods for propagating these errors, and then I start questioning whether I should implement something completely different and not use the IDataErrorInfo interface at all.
Please let me know where I can find some good information about why IDataErrorInfo is the way it is. And if you can provide me with a nice idea for how to have error states that propagate through a hierarchy of objects, that would be just brilliant! When I say propagate I don't mean as an exception, as that feels like an event; it is just that when an object is asked for configuration errors it should also ask all its children for errors and then pass on the children's error messages as well.
The members of the IDataErrorInfo interface typically have no meaning outside of the interface itself, because you hardly ever want to request the validation errors of an entity this way yourself. The IDataErrorInfo interface is meant to be consumed by UI technologies such as MVC and WPF.
Because there is no need to call the Error property or the this[string] indexer directly, the IDataErrorInfo members can typically be implemented explicitly. This prevents them from showing up on the class itself and allows you to implement your own (more useful) this[] indexer yourself.
I agree that having an indexer on that interface probably wasn't the best design, but IDataErrorInfo is probably designed with explicit implementation in mind anyway, so it doesn't really matter.
Here is an example of how to implement IDataErrorInfo explicitly.
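In outline (a minimal sketch):

using System.ComponentModel;

public class Person : IDataErrorInfo
{
    public string Name { get; set; }

    // Explicit implementation keeps Error and this[] off Person's public surface.
    string IDataErrorInfo.Error
    {
        get { return string.Empty; }
    }

    string IDataErrorInfo.this[string columnName]
    {
        get
        {
            if (columnName == "Name" && string.IsNullOrEmpty(Name))
                return "Name is required.";
            return string.Empty;
        }
    }
}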
I am not sure whether you are using MVVM, but it looks like it. I agree that this[] puts an extra, unnecessary constraint on the implementation when it could have been a GetError(string propertyName) method. I think this is a COM hangover and that this interface is exposed through COM, but I will check and confirm.
In practice, I have never found this[] to cause any problems. It would almost never be implemented on a collection, since binding for collections usually uses an ObservableCollection and only the individual items implement IDataErrorInfo.
If you have a hierarchy (e.g. Customer and Orders), you would typically implement a view model for both, and the order view model would have a reference to its parent customer view model so that it can propagate the error. In the scenarios I have had, this approach always worked. If you have a specific problem, post a new question or update yours.
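A hedged sketch of the aggregation side of that (the view model names are illustrative):

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

public class OrderViewModel : IDataErrorInfo
{
    public int Quantity { get; set; }

    string IDataErrorInfo.Error
    {
        get { return Quantity <= 0 ? "Quantity must be positive." : string.Empty; }
    }

    string IDataErrorInfo.this[string columnName]
    {
        get { return columnName == "Quantity" ? ((IDataErrorInfo)this).Error : string.Empty; }
    }
}

public class CustomerViewModel : IDataErrorInfo
{
    public ObservableCollection<OrderViewModel> Orders { get; private set; }

    public CustomerViewModel()
    {
        Orders = new ObservableCollection<OrderViewModel>();
    }

    string IDataErrorInfo.Error
    {
        get
        {
            // Fold the children's errors into the parent's error state.
            var childErrors = Orders
                .Select(o => ((IDataErrorInfo)o).Error)
                .Where(e => !string.IsNullOrEmpty(e));
            return string.Join("; ", childErrors);
        }
    }

    string IDataErrorInfo.this[string columnName]
    {
        get { return string.Empty; }
    }
}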
This is mostly a request for comments on whether there is a reason I should not go down this road.
I have a multi-tiered, CodeSmith-generated application. At the UI level there need to be some fields that are required, and the required fields will vary depending on field values in the bound entity. What I am thinking of doing is adding a "PropertyRequired" custom attribute to each property in the entities, which I can set to true or false when I load the entity in its manager. Then I will use reflection to query the property and give visual feedback to the user at the UI level, and I can validate that all the required properties have a valid value in the manager before I save. I've worked this out as a proof of concept with one property in one entity, but before I try to extend it to the rest of the application I'd like to ask someone with more experience to either tell me to go for it, or explain why I won't like it when I scale up. If this is a bad idea, or if you can suggest a better approach, please offer your opinion.
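In outline, the proof of concept looks something like this (a sketch; the attribute, entity, and helper names are placeholders):

using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class PropertyRequiredAttribute : Attribute
{
    public bool IsRequired { get; set; }
}

public class Client
{
    [PropertyRequired(IsRequired = true)]
    public string Name { get; set; }
}

public static class RequiredChecker
{
    // Manager-side check before save: names of required properties without a value.
    public static IEnumerable<string> MissingRequired(object entity)
    {
        foreach (PropertyInfo prop in entity.GetType().GetProperties())
        {
            var attr = (PropertyRequiredAttribute)Attribute.GetCustomAttribute(
                prop, typeof(PropertyRequiredAttribute));
            if (attr != null && attr.IsRequired && prop.GetValue(entity, null) == null)
                yield return prop.Name;
        }
    }
}

One caveat: attribute instances handed back by reflection are fresh copies, so toggling IsRequired when the entity is loaded means the mutated instances have to be cached somewhere; out of the box, the values are fixed at compile time (a point the first answer below also raises).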
It is a pretty reasonable way to do it (I've done something very similar before) - but there are always downsides:
any code needing the entity will need the extra reference (assuming that the attribute and entity are in different assemblies)
the values (unless you are clever about it) must be determined at compile-time
you can't use it on entities outside of your control
In most cases the above aren't a problem. If they are an issue, you might want to support an external metadata model - but unless you need it, this would be overkill. Don't do it unless you must (meaning: go ahead and use attributes; they are usually fine).
There is no inherent reason to avoid custom attributes. They are a supported CLR feature and the backbone of many available products (Code Contracts, FxCop, etc.).
This is not an unreasonable approach and healthier than baking this stuff into a UI tier. There are a couple of points worth considering before taking the full dive:
You are tightly coupling business logic with the business entity itself. Are there circumstances where a field being required, or its valid values, could change? You may be limiting yourself, or you may end up with an inconsistent validation mechanism
Dynamic assignment is possible but trickier, i.e. when you set a field to be required, that's what it stays unless you override it
Custom attributes can be quite inflexible if further down the line you want to do something more complicated, namely if you need to pass state into an attribute-driven validation scheme. Attributes favour declarative assignment. Only having a true/false required property shouldn't be an issue here, though
Just playing devil's advocate really; in general, for a fairly simple application where you only care about required fields, this is quite a tidy way of doing it.