I am doing validation using DataAnnotation attributes on the Model classes, and the Model class is used for validation on both the Client and Server side of the application.
My problem is, I can't figure out how to lazy load my Model's properties without causing circular references.
The libraries involved are:
WCF Service Library
Client-Side DataAccess Library
Models Library
Because the Models library is used on both the Client and Server side for data validation, I cannot reference the DataAccess library from within the Models library. Given that, how can I set up lazy loading?
For example, I have a ConsumerModel which has a property of PhoneNumbers which should be lazy loaded. How can I load the PhoneNumberModels from within the ConsumerModel without referencing the Client-Side DAL?
Client-side DAL:
using MyModels;

public class ConsumerDataAccess
{
    public ConsumerModel GetConsumerById(int id)
    {
        ConsumerDTO dto = WCFService.GetConsumer(id);
        return new ConsumerModel(dto);
    }
}
ConsumerModel:
public class ConsumerModel
{
    private ObservableCollection<PhoneNumberModel> _phoneNumbers;

    public ObservableCollection<PhoneNumberModel> PhoneNumbers
    {
        get
        {
            if (_phoneNumbers == null)
            {
                // Can't reference the DataAccess library here, since that
                // would cause a circular reference.
            }
            return _phoneNumbers;
        }
    }
}
What are some alternative ways I could make this architecture work?
I would prefer to keep Validation with the Models, and to use the models from both the Client and Server side for validation. I would also prefer to keep using DataAnnotation for Validation.
EDIT
Here's my final solution based on Lawrence Wenham's answer if anyone is interested. I ended up using a delegate instead of an event.
DAL:
public class ConsumerDataAccess
{
    public ConsumerModel GetConsumerById(int id)
    {
        ConsumerDTO dto = WCFService.GetConsumer(id);
        ConsumerModel rtnValue = new ConsumerModel(dto);
        rtnValue.LazyLoadData = LazyLoadConsumerData; // set on the instance
        return rtnValue;
    }

    private object LazyLoadConsumerData(string key, object args)
    {
        switch (key)
        {
            case "Phones":
                return PhoneDataAccess.GetByConsumerId((int)args);
            default:
                return null;
        }
    }
}
Model Library:
public class ConsumerModel
{
    public delegate object LazyLoadDataDelegate(string key, object args);

    public LazyLoadDataDelegate LazyLoadData { get; set; }

    private ObservableCollection<PhoneNumberModel> _phoneNumbers;

    public ObservableCollection<PhoneNumberModel> PhoneNumbers
    {
        get
        {
            if (_phoneNumbers == null && LazyLoadData != null)
            {
                _phoneNumbers = (ObservableCollection<PhoneNumberModel>)
                    LazyLoadData("Phones", ConsumerId);
            }
            return _phoneNumbers;
        }
    }
}
One way might be to raise an event in the get {} of your Model classes' properties, and then implement a lazy-loading manager on the client side that holds a reference to your DAL. E.g.:
public class LazyLoadEventArgs : EventArgs
{
    public object Data { get; set; }
    public string PropertyName { get; set; }
    public int Key { get; set; }
}
Then in your Model classes:
public event EventHandler<LazyLoadEventArgs> LazyLoadData;

public ObservableCollection<PhoneNumberModel> PhoneNumbers
{
    get
    {
        if (_phoneNumbers == null && LazyLoadData != null)
        {
            LazyLoadEventArgs args = new LazyLoadEventArgs
            {
                PropertyName = "PhoneNumbers",
                Key = this.Id
            };
            LazyLoadData(this, args);
            if (args.Data != null)
                this._phoneNumbers = args.Data as ObservableCollection<PhoneNumberModel>;
        }
        return _phoneNumbers;
    }
}
The handler for the LazyLoadData event has the job of fetching the data from the client side's DAL and storing it in the .Data property of LazyLoadEventArgs. E.g.:
private void Model_HandleLazyLoadData(object sender, LazyLoadEventArgs e)
{
    switch (e.PropertyName)
    {
        case "PhoneNumbers":
            e.Data = DAL.LoadPhoneNumbers(e.Key);
            break;
        ...
    }
}
Do not use "lazy loading" with WCF. Network communication is expensive. If you plan to use PhoneNumbers, your service should expose a method that returns the Customer together with its phone numbers. Another approach is WCF Data Services, which offers client-side LINQ queries with the ability to define eager loading via the Expand method.
You should reduce service calls to a minimum.
After reading your question again, I don't understand why you share the model between the service and the client. The model is strictly a client concern; the only shared part should be the DTOs.
A little introduction: we have a complex entity and overgrown business logic related to it, with various fields that we change ourselves and fields that are updated from external project management software (PMS) like MS Project and others.
The problem is that it's hard to centralize the business logic for changing each field, because changing one field can affect other fields, some fields are calculated but should only be calculated in certain business scenarios, and the different synchronization processes use different business logic depending on the external data of the specific PMS.
At this moment we have such ways to change the fields in our solution:
Constructor with parameters and private parameterless constructor
public class SomeEntity
{
    public string SomeField;

    private SomeEntity()
    {
    }

    public SomeEntity(string someField)
    {
        SomeField = someField;
    }
}
Private set with public method to change field value
public class SomeEntity
{
    public string SomeField { get; private set; }

    public void SetSomeField(string newValue)
    {
        // there may be some checks
        if (string.IsNullOrEmpty(newValue))
        {
            throw new Exception();
        }
        SomeField = newValue;
    }
}
Event methods that perform operations and set some fields
public class SomeEntity
{
    public string SomeField { get; private set; }
    public string SomePublishedField { get; private set; }

    public void PublishEntity(string publishValue)
    {
        SomeField = publishValue;
        SomePublishedField = $"{publishValue} {DateTime.Now}";
    }
}
Public setters
public class SomeEntity
{
    public string SomeField { get; set; }
}
Services that implement business logic:
public class SomeService : ISomeService
{
    private DbContext _dbContext;
    private ISomeApprovalsService _approvalsService;

    public async Task UpdateFromMspAsync(MspSomeEntity mspEntity,
        CancellationToken cancellationToken = default)
    {
        var entity = await _dbContext.SomeEntities
            .Include(e => e.Process)
            .SingleAsync(e => e.MspId == mspEntity.Id, cancellationToken);

        switch (mspEntity.Status)
        {
            case MspStatusEnum.Cancelled:
                entity.Process.State = ProcessStateEnum.Rejected;
                entity.Status = EntityStatusEnum.Stopped;
                break;
            case MspStatusEnum.Accepted:
                _approvalsService.SendApprovals(entity.Process);
                entity.Status = EntityStatusEnum.Finished;
                break;
        }

        await _dbContext.SaveChangesAsync(cancellationToken);
    }
}
State machine inside entity
public class SomeEntity
{
    private StateMachine<StateEnum, TriggerEnum> _stateMachine;

    public SomeEntity()
    {
        ConfigureStateMachine();
    }

    public string SomeField1 { get; set; }
    public string SomeField2 { get; set; }
    public string SomeField3 { get; set; }

    private void ConfigureStateMachine()
    {
        _stateMachine = new StateMachine<StateEnum, TriggerEnum>(StateEnum.Processing);
        _stateMachine.Configure(StateEnum.Processing)
            .OnEntry(s => SomeField1 = null)
            .Permit(TriggerEnum.Approve, StateEnum.Approved);
        _stateMachine.Configure(StateEnum.Approved)
            .OnEntry(s => SomeField1 = SomeField2 + SomeField3)
            .Permit(TriggerEnum.Publish, StateEnum.Finished)
            .Permit(TriggerEnum.Cancel, StateEnum.Canceled);
        // etc
    }

    public void Trigger(TriggerEnum trigger) => _stateMachine.Fire(trigger);
}
State machine as a service, to prevent business logic from leaking into the entity.
var machine = _services.GetService<IStateMachine<SomeEntity, TriggerEnum>>();
var entity = await _dbContext.SomeEntities.FirstAsync();
IAttachedStateMachine<TriggerEnum> attachedMachine = machine.AttachToEntity(entity);
attachedMachine.Trigger(TriggerEnum.Publish);
Having so many ways of changing values is architecturally wrong, and we want to refactor this; but to change the approach, a best practice must be chosen.
Please share your experience of resolving similar situations.
Update: I found the DDD approach called "aggregate root". It looks good, but only on paper (in theory), and it works well with simple examples like "user, customer, shopping cart, order". In practice, for every private setter you end up creating a setter method (like in #2 of my examples), plus different methods for every system you work with. Not even talking about business logic inside a database entity, which violates SOLID's single responsibility principle.
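For reference, the aggregate-root style mentioned in the update can be sketched like this (all type and member names are invented for illustration): raw setters are private, and every state change goes through a named business operation that enforces its own invariant.

```csharp
using System;

public enum TaskState { Draft, Accepted, Cancelled }

public class ProjectTask
{
    public string Title { get; private set; }
    public TaskState State { get; private set; }

    public ProjectTask(string title)
    {
        if (string.IsNullOrWhiteSpace(title))
            throw new ArgumentException("Title is required.", nameof(title));
        Title = title;
        State = TaskState.Draft;
    }

    // One method per business operation; the invariant lives with the data.
    public void Accept()
    {
        if (State != TaskState.Draft)
            throw new InvalidOperationException("Only draft tasks can be accepted.");
        State = TaskState.Accepted;
    }

    public void Cancel() => State = TaskState.Cancelled;
}
```

This is exactly the pattern the update complains about: each new business scenario (each PMS integration) would add another method to the entity, which is the trade-off to weigh.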
I have a simple scenario using the Entity Framework in C#. I have an Entity Post:
public class Post
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}
In my PostManager I have these methods:
public int AddPost(string name, string description)
{
    var post = new Post() { Name = name, Description = description };
    using (var db = new DbContext())
    {
        var res = db.Posts.Add(post);
        res.Validate();
        db.SaveChanges();
        return res.Id;
    }
}

public void UpdatePost(int postId, string newName, string newDescription)
{
    using (var db = new DbContext())
    {
        var data = (from post in db.Posts
                    where post.Id == postId
                    select post).FirstOrDefault();
        data.Name = newName;
        data.Description = newDescription;
        data.Validate();
        db.SaveChanges();
    }
}
The Validate() method refers to this class:
public static class Validator
{
    public static void Validate(this Post post)
    {
        if (/* some check */)
            throw new SomeException();
    }
}
I call the Validate method before SaveChanges(), but after adding the object to the context. What's the best practice for validating data in this simple scenario? Is it better to validate the arguments instead? And what happens to the post object if the Validate method throws an exception after the object has been added to the context?
UPDATE:
I have to throw a custom set of exceptions depending on the data validation errors.
I strongly recommend (if at all possible) modifying your entity so the setters are private (don't worry, EF can still set them on proxy creation), marking the default constructor as protected (EF can still do lazy loading/proxy creation), and making the only public constructors available check their arguments.
This has several benefits:
You limit the number of places where the state of an entity can be changed, leading to less duplication
You protect your class' invariants. By forcing creation of an entity to go via a constructor, you ensure that it is IMPOSSIBLE for an object of your entity to exist in an invalid or unknown state.
You get higher cohesion. By putting the constraints on data closer to the data itself, it becomes easier to understand and reason about your classes.
Your code becomes self-documenting to a higher degree. One never has to wonder "is it OK if I set a negative value on this int property?" if it is impossible to even do it in the first place.
Separation of concerns. Your manager shouldn't know how to validate an entity, this just leads to high coupling. I've seen many managers grow into unmaintainable monsters because they simply do everything. Persisting, loading, validation, error handling, conversion, mapping etc. This is basically the polar opposite of SOLID OOP.
I know it is really popular nowadays to make all "models" into dumb property bags with getters and setters and only a default constructor because (bad) ORMs have forced us to do this, but this is no longer the case, and there are many issues with that approach, imo.
Code example:
public class Post
{
    protected Post() // this constructor is only for EF proxy creation
    {
    }

    public Post(string name, string description)
    {
        if (/* validation check, inline or delegate */)
            throw new ArgumentException();

        Name = name;
        Description = description;
    }

    public int Id { get; private set; }
    public string Name { get; private set; }
    public string Description { get; private set; }
}
Then your PostManager code becomes trivial:
using (var db = new DbContext())
{
    var post = new Post(name, description); // possibly try-catch here
    db.Posts.Add(post);
    db.SaveChanges();
    return post.Id;
}
If the creation/validation logic is extremely intricate this pattern lends itself very well for refactoring to a factory taking care of the creation.
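A minimal sketch of that factory refactoring (the `PostFactory` name and the specific rules are illustrative, not from the original code): the factory owns the intricate creation rules, and the entity keeps a constructor that is only reachable through it.

```csharp
using System;

public class Post
{
    public string Name { get; private set; }
    public string Description { get; private set; }

    // internal: creation is funneled through the factory below
    internal Post(string name, string description)
    {
        Name = name;
        Description = description;
    }
}

public class PostFactory
{
    public Post Create(string name, string description)
    {
        // Intricate creation/validation rules live here, in one place.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.", nameof(name));
        if (name.Length > 50)
            throw new ArgumentException("Name must be 50 characters or fewer.", nameof(name));
        return new Post(name, description);
    }
}
```
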
I would also note that encapsulating data in entities exposing a minimal state-changing API leads to classes that are several orders of magnitude easier to test in isolation, if you care at all about that sort of thing.
As I mentioned in the comments above, you might want to check out .NET System.ComponentModel.DataAnnotations namespace.
Data Annotations (DA) allows you to specify attributes on properties to describe what values are acceptable. It's important to know that DA is completely independent of databases and ORM APIs such as Entity Framework so classes decorated with DA attributes can be used in any tier of your system whether it be the data tier; WCF; ASP.NET MVC or WPF.
In the example below, I define a Muppet class with a series of properties.
Name is required and has a max length of 50.
Scaryness takes an int but it must be in the range of {0...100}.
Email is decorated with an imaginary custom validator for validating strings that should contain an e-mail.
Example:
public class Muppet
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    public Color Color { get; set; }

    [Range(0, 100)]
    public int Scaryness { get; set; }

    [MyCustomEmailValidator]
    public string Email { get; set; }
}
In my project I have to throw a custom exception when I validate the data. Is it possible to do that using Data Annotations?
Yes, you can. To validate this object at any time in your application (regardless of whether it has reached EF or not), just do this:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
// ...

Post post = ... // fill it in
Validator.Validate(post);

public static class Validator
{
    public static void Validate(this Post post)
    {
        // uses the extension method GetValidationErrors defined below
        if (post.GetValidationErrors().Any())
        {
            throw new MyCustomException();
        }
    }
}

public static class ValidationHelpers
{
    public static IEnumerable<ValidationResult> GetValidationErrors(this object obj)
    {
        var validationResults = new List<ValidationResult>();
        var context = new ValidationContext(obj, null, null);
        // fully qualified so it doesn't collide with the Validator class above
        System.ComponentModel.DataAnnotations.Validator.TryValidateObject(obj, context, validationResults, true);
        return validationResults;
    }
    // ...
If you want to get the validation error messages you could use this method:
/// <summary>
/// Gets the validation error messages for the object.
/// </summary>
/// <param name="obj">The object.</param>
/// <returns></returns>
public static string GetValidationErrorMessages(this object obj)
{
    var error = "";
    var errors = obj.GetValidationErrors();
    var validationResults = errors as ValidationResult[] ?? errors.ToArray();
    if (!validationResults.Any())
    {
        return error;
    }
    foreach (var validationResult in validationResults)
    {
        foreach (var memberName in validationResult.MemberNames)
        {
            error += memberName + ": " + validationResult.ErrorMessage + "; ";
        }
    }
    return error;
}
As a bonus, the validation attributes will also be detected once the object reaches EF, where it will be validated again in case you forgot, or the object has changed since.
I think you should be working with Data Annotations, as #Micky says above. Your current approach validates manually after the object has already been added.
using System.ComponentModel.DataAnnotations;

// Your class
public class Post
{
    [Required]
    public int Id { get; set; }

    [Required, MaxLength(50)]
    public string Name { get; set; }

    [Required, MinLength(15), MyCustomCheck] // << Here is your custom validator
    public string Description { get; set; }
}

// Your factory methods
public class MyFactory
{
    public bool AddPost()
    {
        var post = new Post() { Id = 1, Name = null, Description = "This is my test post" };
        try
        {
            using (var db = new DbContext())
            {
                db.Posts.Add(post);
                db.SaveChanges();
                return true;
            }
        }
        catch (System.Data.Entity.Validation.DbEntityValidationException e)
        {
            Console.WriteLine("Something went wrong....");
        }
        catch (MyCustomException e)
        {
            Console.WriteLine("A custom exception was triggered from a custom data annotation...");
        }
        return false;
    }
}

// The custom attribute
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field, AllowMultiple = false)]
public sealed class MyCustomCheckAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        if (value is string)
        {
            throw new MyCustomException("The custom exception was just triggered....");
        }
        return true;
    }
}

// Your custom exception
public class MyCustomException : Exception
{
    public MyCustomException(string message) : base(message) { }
}
See also:
DbEntityValidationException class: https://msdn.microsoft.com/en-us/library/system.data.entity.validation.dbentityvalidationexception(v=vs.113).aspx
Default data annotations
http://www.entityframeworktutorial.net/code-first/dataannotation-in-code-first.aspx
Building your custom data annotations (validators):
https://msdn.microsoft.com/en-us/library/cc668224.aspx
I always use two validations:
client side - using jQuery Unobtrusive Validation in combination with Data Annotations
server-side validation - here it depends on the application: validation is performed in controller actions or deeper in the business logic. A nice place to do it is to override the OnSave method in your context and do it there
Remember that you can write custom Data Annotation attributes which can validate whatever you need.
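The "validate centrally on save" idea from the bullets above can be sketched with plain Data Annotations and no EF dependency (`SaveGuard` is a stand-in for the context hook, not a real EF API): every tracked entity is validated in one place before persisting, instead of sprinkling checks through the business logic.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Post
{
    [Required, StringLength(50)]
    public string Name { get; set; }
}

// Stand-in for a context's save hook: validate every tracked entity once,
// centrally, using the Data Annotation attributes on each class.
public static class SaveGuard
{
    public static void ValidateAll(IEnumerable<object> trackedEntities)
    {
        foreach (var entity in trackedEntities)
        {
            var results = new List<ValidationResult>();
            var ctx = new ValidationContext(entity, null, null);
            // validateAllProperties: true also checks [StringLength], [Range], etc.
            if (!Validator.TryValidateObject(entity, ctx, results, true))
                throw new ValidationException(results[0].ErrorMessage);
        }
    }
}
```

In a real context you would call something like this from your save override and translate the failure into your own exception type.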
You can modify the code in this way:

public int AddPost(string name, string description)
{
    var post = new Post() { Name = name, Description = description };
    if (post.Validate())
    {
        using (var db = new DbContext())
        {
            var res = db.Posts.Add(post);
            db.SaveChanges();
            return res.Id;
        }
    }
    else
    {
        return -1; // if not successful
    }
}

public static bool Validate(this Post post)
{
    bool isValid = false;
    // validate the post and set isValid to true on success
    return isValid;
}
After adding data to the DbContext and before calling SaveChanges(), you can call the GetValidationErrors() method of DbContext and check its count to quickly see whether there are any errors. You can then enumerate the errors and get the details of each one. I have bundled the error conversion from ICollection to string in the GetValidationErrorsString() extension method.
if (db.GetValidationErrors().Count() > 0)
{
    var errorString = db.GetValidationErrorsString();
}
public static string GetValidationErrorsString(this DbContext dbContext)
{
    var validationErrors = dbContext.GetValidationErrors();
    string errorString = string.Empty;
    foreach (var error in validationErrors)
    {
        foreach (var innerError in error.ValidationErrors)
        {
            errorString += string.Format("Property: {0}, Error: {1}<br/>",
                innerError.PropertyName, innerError.ErrorMessage);
        }
    }
    return errorString;
}
I am using a Domain Service to fetch data from the database for a Silverlight client.
In DomainService1.cs, I have added the following:
[EnableClientAccess()]
public class Product
{
    public int productID;
    public string productName;
    public List<Part> Parts = new List<Part>(); // Part is already present in the model designer
}
In the DomainService1 class I added a new method to retrieve a collection of the custom class objects:
[EnableClientAccess()]
public class DomainService1 : LinqToEntitiesDomainService<HELPERDBNEWEntities1>
{
    ...
    public List<Product> GetProductsList(...)
    {
        List<Product> resultProducts = new List<Product>();
        ...
        return resultProducts;
    }
}
From the Silverlight client I am trying to access that method:
DomainService1 ds1 = new DomainService1();
var allproductList = ds1.GetProductsList(...);
ds1.Load<SLProduct>(allproductList).Completed += new EventHandler(Load_Completed); //Not correct usage
However, this is not the correct way to call the new method. The reason I added a new class Product in DomainService1.cs is to get efficient grouping; I cannot achieve the same using the model classes auto-generated by the Entity Framework.
How can I call the new method from the client?
I believe there is a similar question with an answer here:
Can a DomainService return a single custom type?
Also, here is some discussion about the overall problem of adding custom methods in a Domain Service:
http://forums.silverlight.net/t/159292.aspx/1
While I don't know what you mean by "it is not the correct way to call the new method", or if you're getting any errors, I thought maybe posting some working code might help.
My POCO
public class GraphPointWithMeta
{
    [Key]
    public Guid PK { get; set; }
    public string SeriesName { get; set; }
    public string EntityName { get; set; }
    public double Amount { get; set; }

    public GraphPointWithMeta(string seriesName, string entityName, double amount)
    {
        PK = Guid.NewGuid();
        SeriesName = seriesName;
        EntityName = entityName;
        Amount = amount;
    }

    // Default ctor required.
    public GraphPointWithMeta()
    {
        PK = Guid.NewGuid();
    }
}
A method in the domain service (EnableClientAccess decorates the class)
public IEnumerable<GraphPointWithMeta> CallingActivityByCommercial()
{
    List<GraphPointWithMeta> gps = new List<GraphPointWithMeta>();
    // ...
    return gps;
}
Called from the Silverlight client like
ctx1.Load(ctx1.CallingActivityByCommercialQuery(), CallingActivityCompleted, null);
Client callback method:
private void CallingActivityCompleted(LoadOperation<GraphPointWithMeta> lo)
{
    // lo.Entities is an IEnumerable<GraphPointWithMeta>
}
I am not sure whether your Product class is an actual entity. From the way it is defined, it does not appear to be one, so my answer assumes it is not. You will need to apply the DataMemberAttribute to your Product properties, and you wouldn't use Load for the product list - Load is for entity queries (IQueryable on the service side). Instead, you would invoke it like this (client side):
void GetProductList(Action<InvokeOperation<List<Product>>> callback)
{
    DomainService1 ds1 = new DomainService1();
    ds1.GetProductsList(callback, null); // invoke operation call
}
And the domain service's (server side) method needs the InvokeAttribute and would look like this:
[EnableClientAccess]
public class MyDomainService
{
    [Invoke]
    public List<Product> GetProductsList()
    {
        var list = new List<Product>();
        ...
        return list;
    }
}
And here is how your Product class might be defined (if it is not an entity):
public class Product
{
[DataMember]
public int productID;
[DataMember]
public string productName;
[DataMember]
public List<Part> Parts = new List<Part>(); // you might have some trouble here.
//not sure if any other attributes are needed for Parts,
//since you said this is an entity; also not sure if you
//can even have a list of entities or it needs to be an
//entity collection or what it needs to be. You might
//have to make two separate calls - one to get the products
//and then one to get the parts.
}
Like I said, I am not sure what Product inherits from. Hope this helps.
I'm currently creating objects for an application of mine when this question came to mind. I know that using DBMLs instead of manually created classes (see the class below) can speed up development, but I'm confused about the other advantages and disadvantages of DBMLs compared with manual creation. Thanks to everyone who helps. :)
[Serializable]
public class Building
{
    public Building()
    {
        LastEditDate = DateTime.Now.Date;
        LastEditUser = GlobalData.CurrentUser.FirstName + " " + GlobalData.CurrentUser.LastName;
    }

    public int BuildingID { get; set; }
    public string BuildingName { get; set; }
    public bool IsActive { get; set; }
    public DateTime LastEditDate { get; set; }
    public string LastEditUser { get; set; }

    public static bool CheckIfBuildingNameExists(string buildingName, int buildingID = 0)
    {
        return BuildingsDA.CheckIfBuildingNameExists(buildingName, buildingID);
    }

    public static Building CreateTwin(Building building)
    {
        return CloningUtility.DeepCloner.CreateDeepClone(building);
    }

    public static List<Building> GetBuildingList()
    {
        return BuildingsDA.GetBuildingList();
    }

    public static List<Building> GetBuildingList(bool flag)
    {
        return BuildingsDA.GetBuildingList(flag).ToList();
    }

    public static Building SelectBuildingRecord(int buildingId)
    {
        return BuildingsDA.SelectBuilding(buildingId);
    }

    public static void InsertBuildingRecord(Building building)
    {
        BuildingsDA.InsertBuilding(building);
    }

    public static void UpdateBuildingRecord(Building building)
    {
        BuildingsDA.UpdateBuilding(building);
    }

    public static void DeleteBuildingRecord(int building)
    {
        BuildingsDA.DeleteBuilding(building);
    }
}
and my DAL is like this:
internal static class BuildingsDA
{
    internal static Building SelectBuilding(int buildingId)
    {
        SqlCommand commBuildingSelector = ConnectionManager.MainConnection.CreateCommand();
        commBuildingSelector.CommandType = CommandType.StoredProcedure;
        commBuildingSelector.CommandText = "Rooms.asp_RMS_Building_Select";
        commBuildingSelector.Parameters.AddWithValue("BuildingID", buildingId);

        // using ensures the reader is closed even if an exception is thrown
        using (SqlDataReader dreadBuilding = commBuildingSelector.ExecuteReader())
        {
            if (dreadBuilding.HasRows)
            {
                dreadBuilding.Read();
                Building building = new Building();
                building.BuildingID = int.Parse(dreadBuilding.GetValue(0).ToString());
                building.BuildingName = dreadBuilding.GetValue(1).ToString();
                building.IsActive = dreadBuilding.GetValue(2).ToString() == "Active";
                building.LastEditDate = dreadBuilding.GetValue(3).ToString() != string.Empty
                    ? DateTime.Parse(dreadBuilding.GetValue(3).ToString())
                    : DateTime.MinValue;
                building.LastEditUser = dreadBuilding.GetValue(4).ToString();
                return building;
            }
            return null;
        }
    }

    // ...
}
I would also like to know which of the two approaches is faster. Thanks. :)
DBML
Pros:
You can get your job done fast!
Cons:
You can't shape your entities the way you want. For example, if you need 5 columns from a table that has 10, you will get all of them (at least in the schema). This may not matter if you don't care much about data volume.
Your client side will have a dependency on the DAL (Data Access Layer); if you change a property name or type in the DAL, you need to change it in both the BLL (Business Logic Layer) and the client (Presentation Layer).
If you create the classes manually it may take a little more time to code, but you get more flexibility: your client code will not depend on your DAL, and changes to the DAL will not cause problems in client code.
By creating your model classes manually you can put additional attributes on properties (which cannot be done with DBML) and apply your own data validation (as far as I remember, this is possible with DBML using partial methods).
With many tables and associations, a DBML can become hard to read.
The disadvantage of creating model classes manually is that you have to do all the DBML work yourself (attributes and a lot of code).
If you want to create model classes manually, take a look at Entity Framework Code First or Fluent NHibernate. Both allow creating a model easily.
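As an illustration, a minimal Entity Framework Code First mapping might look like this (EF 6 naming; the `FacilitiesContext` name, table name, and column sizes are invented): the model stays a plain class you wrote yourself, and the mapping configuration lives in the context.

```csharp
using System.Data.Entity;

// The Building POCO from the question, trimmed to two columns.
public class Building
{
    public int BuildingID { get; set; }
    public string BuildingName { get; set; }
}

public class FacilitiesContext : DbContext
{
    public DbSet<Building> Buildings { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent mapping: the POCO carries no ORM attributes at all.
        modelBuilder.Entity<Building>()
            .ToTable("Buildings")
            .HasKey(b => b.BuildingID);

        modelBuilder.Entity<Building>()
            .Property(b => b.BuildingName)
            .IsRequired()
            .HasMaxLength(100);
    }
}
```

This keeps the client-visible class free of DAL dependencies, which addresses the main con listed for DBML above.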
Working through the NerdDinner Tutorial, I'm trying to figure out a good way to perform validation on properties that isn't dependent on a LINQ-to-SQL generated partial class. Here's some example code of what I've done so far:
public abstract class DomainEntity
{
    public IEnumerable<ValidationError> ValidationErrors { get; private set; }

    public bool Validate()
    {
        bool isValid = false;
        if (this.ValidationErrors != null)
            this.ValidationErrors = null;
        this.ValidationErrors = this.GetValidationErrors();
        if (this.ValidationErrors.Count() == 0)
            isValid = true;
        return isValid;
    }

    protected abstract IEnumerable<ValidationError> GetValidationErrors();
}
public partial class Email : DomainEntity
{
    protected override IEnumerable<ValidationError> GetValidationErrors()
    {
        if (!this.ValidateAddress())
            yield return new ValidationError("Address", DomainResources.EmailAddressValidationErrorMessage);
        yield break;
    }

    partial void OnValidate(ChangeAction action)
    {
        bool isValid = this.Validate();
        if (!isValid)
            throw new InvalidEmailException(this);
    }

    private bool ValidateAddress()
    {
        // TODO: Use a regex to validate the email address.
        return !string.IsNullOrEmpty(this.Address);
    }
}
Where Email is a LINQ-to-SQL generated type based off an Email table. Since the Email table is but one of several entities related to a domain model class (say, "User"), the ideal is to create a "User" domain model class and use the Validation Application Block attributes to validate properties. In other words, I'd like to use this:
public class User
{
    private Email emailEntity;

    [EmailAddressValidator]
    public string EmailAddress
    {
        get { return emailEntity.Address; }
        set { emailEntity.Address = value; }
    }
}
So that if I change my database schema, and the changes fall through my LINQ-to-SQL generated classes, I don't have these orphaned partial classes (like partial class Email). I also want the benefit from integrating the Validation Application Block attributes, so that I don't have to maintain a collection of regexes, as is done in the NerdDinner tutorial. Plus, User as a domain class is going to be the functional unit in the domain, not Email and other entities, for creating view models, rendering views, etc. However, there's no way to capture the Validation call without doing something like:
public abstract class DomainEntity
{
    public event EventHandler Validation;

    protected void OnValidation()
    {
        if (this.Validation != null)
            this.Validation(this, EventArgs.Empty);
    }
}

public partial class Email
{
    partial void OnValidate(ChangeAction action)
    {
        this.OnValidation();
    }
}
And then having User hook into that event and handle all the validation within User. Would that even work well with the Validation Application Block? How do I perform validation in aggregated domain classes like User in a sensible way?
Treat validation as a service rather than as a responsibility of the entity. This lets you separate the implementation of the validation from the definition of what is valid, and turns validation into an explicit operation rather than an implicit one (managed by LINQ to SQL).
Have a look at FluentValidation for .NET (http://www.codeplex.com/FluentValidation) for a good implementation of this approach.
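The validation-as-a-service idea can be sketched without any particular library (the interface and names below are illustrative; FluentValidation's actual API centers on `AbstractValidator<T>` and `RuleFor` instead):

```csharp
using System.Collections.Generic;

// The entity stays a plain data class...
public class Email
{
    public string Address { get; set; }
}

// ...and validation is an explicit, separately testable service.
public interface IValidator<T>
{
    IEnumerable<string> Validate(T instance);
}

public class EmailValidator : IValidator<Email>
{
    public IEnumerable<string> Validate(Email email)
    {
        if (string.IsNullOrEmpty(email.Address))
            yield return "Address is required.";
        else if (!email.Address.Contains("@"))
            yield return "Address does not look like an e-mail address.";
    }
}
```

Because the validator is just another dependency, the aggregating User class from the question can run it explicitly wherever validation should happen, instead of waiting for LINQ to SQL's OnValidate hook.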