AutoMapper vs ValueInjecter [closed] - c#

Every time I look for AutoMapper material on Stack Overflow, I read something about ValueInjecter.
Can somebody tell me the pros and cons of each (performance, features, API usage, extensibility, testing)?

As the creator of ValueInjecter, I can tell you that I wrote it because I wanted something simple and very flexible.
I really don't like writing lots of repetitive monkey code like:
Prop1.Ignore(), Prop2.Ignore(), etc.
CreateMap<Foo, Bar>(); CreateMap<Tomato, Potato>(); etc.
ValueInjecter is something like Mozilla with its plugins: you create ValueInjections and use them.
There are built-in injections for flattening and unflattening, and some that are intended to be inherited.
It works more in an aspect-oriented way; you don't have to specify all properties 1-to-1. Instead you say something like: take all the int properties from the source whose names end with "Id", transform each value, and set it on a property of the target object that has the same name without the "Id" suffix and whose type inherits from Entity.
So one obvious difference: ValueInjecter is used even in Windows Forms, with flattening and unflattening (mapping from an object to form controls and back); that's how flexible it is.
AutoMapper is not usable in Windows Forms and has no unflattening, but it has good stuff like collection mapping, so if you need that with ValueInjecter you just do something like:
foos.Select(o => new Bar().InjectFrom(o));
You can also use ValueInjecter to map from anonymous and dynamic objects.
Differences:
AutoMapper: you create a configuration for each mapping possibility with CreateMap().
ValueInjecter: you inject from any object to any object (there are even cases where you inject from an object to a value type).
AutoMapper has flattening built in, but only for simple types or the same type, and it has no unflattening.
With ValueInjecter you opt in only when you need it: target.InjectFrom<FlatLoopValueInjection>(source); likewise <UnflatLoopValueInjection>. And if you want to map from Foo.Bar.Name of type String to FooBarName of type Class1, you inherit FlatLoopValueInjection and specify that.
AutoMapper maps properties with the same name by default; for the rest you have to specify them one by one, doing things like Prop1.Ignore(), Prop2.Ignore(), etc.
ValueInjecter has a default injection, .InjectFrom(), that handles properties with the same name and type; for everything else you create custom ValueInjections with individual mapping logic/rules, more like aspects, e.g. from all properties of type Foo to all properties of type Bar.
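The aspect-style rule described above (take every int property whose name ends in "Id" and fill the matching entity-typed property on the target) isn't tied to either library; here is a minimal sketch of the idea using plain reflection. All class and property names here are hypothetical, invented for illustration:

```csharp
using System;
using System.Linq;

public class Entity
{
    public int Id { get; set; }
}

public class Country : Entity { }

// Hypothetical source: carries foreign-key ids
public class PersonDto
{
    public int CountryId { get; set; }
}

// Hypothetical target: carries entity-typed properties
public class Person
{
    public Country Country { get; set; }
}

public static class ConventionMapper
{
    // For every int source property ending in "Id", find a target property
    // with the same name minus the "Id" suffix whose type derives from Entity,
    // and assign an entity carrying that id (a real app would look it up instead).
    public static void MapIds(object source, object target)
    {
        var idProps = source.GetType().GetProperties()
            .Where(p => p.PropertyType == typeof(int) && p.Name.EndsWith("Id"));
        foreach (var sp in idProps)
        {
            var targetName = sp.Name.Substring(0, sp.Name.Length - 2);
            var tp = target.GetType().GetProperty(targetName);
            if (tp == null || !typeof(Entity).IsAssignableFrom(tp.PropertyType))
                continue;
            var entity = (Entity)Activator.CreateInstance(tp.PropertyType);
            entity.Id = (int)sp.GetValue(source);
            tp.SetValue(target, entity);
        }
    }
}
```

So ConventionMapper.MapIds(new PersonDto { CountryId = 7 }, person) leaves person.Country.Id equal to 7. ValueInjecter packages exactly this kind of rule as a reusable ValueInjection.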

Since I've never used any of the other tools, I can only talk about AutoMapper. I had a few goals in mind for building AutoMapper:
Support flattening to dumb DTO objects
Support obvious scenarios out of the box (collections, enumerations etc.)
Be able to easily verify mappings in a test
Allow for edge cases for resolving values from other places (custom type->type mapping, individual member mapping, and some really crazy edge cases).
If you want to do these things, AutoMapper works very well for you. Things AutoMapper doesn't do well are:
Filling existing objects
Unflattening
The reason being I've never needed to do these things. For the most part, our entities don't have setters, don't expose collections, etc. so that's why it's not there. We use AutoMapper to flatten to DTOs and map from UI models to command messages and the like. That's where it works really, really well for us.

I tried both and prefer ValueInjecter because it's so simple:
myObject.InjectFrom(otherObject);
That's all there is to know for the vast majority of my injection needs. It can't get any simpler or more elegant than that.

This is a question I've been researching too, and for my use case ValueInjecter wins hands down. It requires no prior setup to use (that may hurt performance, I guess, although a smart implementation could cache the mappings for future invocations rather than reflecting each time), so you don't need to predefine any mappings before using them.
Most importantly, however, it allows reverse mapping. Now, I may be missing something here, as Jimmy mentions that he sees no use case where it's necessary, so maybe I have the pattern wrong, but my use case is this: I create a ViewModel object from my ORM and display it on my web page. Once the user finishes, I get the ViewModel back in an HTTP POST; how does that get converted back to the original ORM classes? I'd love to know the pattern with AutoMapper. With ValueInjecter it is trivial, and it will even unflatten. For example, creating a new entity:
The model created by Entity Framework (model first):
public partial class Family
{
    public int Id { get; set; }
    public string FamilyName { get; set; }
    public virtual Address Address { get; set; }
}

public partial class Address
{
    public int Id { get; set; }
    public string Line1 { get; set; }
    public string Line2 { get; set; }
    public string TownCity { get; set; }
    public string County { get; set; }
    public string Postcode { get; set; }
    public virtual Family Family { get; set; }
}
The ViewModel (which I can decorate with validators):
public class FamilyViewModel
{
    public int Id { get; set; }
    public string FamilyName { get; set; }
    public int AddressId { get; set; }
    public string AddressLine1 { get; set; }
    public string AddressLine2 { get; set; }
    public string AddressTownCity { get; set; }
    public string AddressCounty { get; set; }
    public string AddressPostcode { get; set; }
}
The controller:
//
// GET: /Family/Create
public ActionResult Create()
{
    return View();
}

//
// POST: /Family/Create
[HttpPost]
public ActionResult Create(FamilyViewModel familyViewModel)
{
    try
    {
        Family family = new Family();
        family.InjectFrom<UnflatLoopValueInjection>(familyViewModel);
        db.Families.Add(family);
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}
To my mind, it doesn't get much simpler than that.
(So this begs the question: what's wrong with the pattern that I, and it seems many others, run into, such that it's not seen as valuable to AutoMapper?)
However, if the pattern described here is one you want to use, then my vote is ValueInjecter by a country mile.


Best way to project ViewModel back into Model

Consider having a ViewModel:
public class ViewModel
{
    public int id { get; set; }
    public int a { get; set; }
    public int b { get; set; }
}
and an original Model like this:
public class Model
{
    public int id { get; set; }
    public int a { get; set; }
    public int b { get; set; }
    public int c { get; set; }
    public virtual Object d { get; set; }
}
Each time I get the view model back, I have to copy all of its properties into the Model one by one. Something like:
var model = Db.Models.Find(viewModel.Id);
model.a = viewModel.a;
model.b = viewModel.b;
Db.SaveChanges();
This always causes lots of problems. I sometimes even forget to set some properties, and then disaster happens!
I was looking for something like:
Mapper.Map(model, viewModel);
BTW: I use AutoMapper only to convert Model to ViewModel; in the opposite direction I always face errors.
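For what it's worth, AutoMapper does have a two-argument Map overload that copies onto an existing destination instance, which is the shape the question asks for; whether mapping back onto entities is a good idea at all is exactly what the answers here debate. A minimal sketch using the question's types and the instance-based AutoMapper API (the configuration line is an assumption about how the mapper is set up):

```csharp
// One-time configuration, typically done at application startup.
var config = new MapperConfiguration(cfg => cfg.CreateMap<ViewModel, Model>());
var mapper = config.CreateMapper();

// Map onto the existing, tracked entity instead of creating a new one.
var model = Db.Models.Find(viewModel.id);
mapper.Map(viewModel, model);   // overwrites id, a, b; c and d have no source and are left untouched
Db.SaveChanges();
```

Note that members with no matching source (c, the virtual d) are simply left alone at runtime; configuration validation would flag them, so an explicit Ignore() is usually added for such members.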
Overall this might not be the answer you are looking for, but here's a quote from the AutoMapper author:
I can’t for the life of me understand why I’d want to dump a DTO
straight back in to a model object.
I believe the best way to map from ViewModel to Entity is not to use AutoMapper at all. AutoMapper is a great tool for mappings that don't involve any classes other than static ones. Otherwise the code gets messier and messier with each added service, and at some point you won't be able to track what caused your field update, collection update, etc.
Specific issues often faced:
Need for non-static classes to do the mapping for your entities
You might need to use a DbContext to load and reference entities, and you might need other classes too: a tool that uploads images to your file storage, a non-static class that does hashing/salting for passwords, and so on. You either have to pass these to AutoMapper somehow, or inject/create them inside an AutoMapper profile, and both practices cause trouble.
Possible need for multiple mappings over the same ViewModel (DTO) -> Entity pair
You might need different mappings for the same viewmodel-entity pair, depending on whether the entity is an aggregate or not, and on whether you need to reference it or to reference and update it. Overall this is solvable, but it adds a lot of unneeded noise to the code and is even harder to maintain.
Really dirty code that's hard to maintain
This one is about automatically mapping primitives (strings, integers, etc.) while manually mapping references, transformed values, and so on. The code looks really weird with AutoMapper: you have to define maps for some properties (or not, if you prefer implicit mapping, which is also destructive when paired with an ORM) AND use AfterMap, BeforeMap, conventions, ConstructUsing, etc. for the other properties, which complicates things even more.
Complex mappings
When you have to do complex mappings, such as mapping from two or more source classes to one destination class, you have to overcomplicate things even more, probably with code like:
var target = new Target();
Mapper.Map(source1, target);
Mapper.Map(source2, target);
//etc..
That code causes errors, because you cannot map source1 and source2 together, and the result might depend on the order in which the source classes are mapped to the target. And that's before you forget to do one of the mappings, or your maps write conflicting values to the same property, overwriting each other.
These issues might seem small, but on several projects where I saw an auto-mapping library used to map ViewModels/DTOs to entities, it caused much more pain than if it had never been used.
Here are some links for you:
Jimmy Bogard, the author of AutoMapper, on 2-way mapping for your entities
A small article with comments about problems faced when mapping ViewModel->Entity with code examples
Similar question in SO: Best Practices For Mapping DTO to Domain Object?
For this purpose we have written a simple mapper. It maps by name and ignores virtual properties (so it works with Entity Framework). If you want to ignore certain properties, add a PropertyCopyIgnoreAttribute.
Usage:
PropertyCopy.Copy<ViewModel, Model>(vm, dbmodel);
PropertyCopy.Copy<Model, ViewModel>(dbmodel, vm);
Code:
public static class PropertyCopy
{
    public static void Copy<TDest, TSource>(TDest destination, TSource source)
        where TSource : class
        where TDest : class
    {
        // Only readable, writable, non-virtual properties not marked with [PropertyCopyIgnore]
        var destProperties = destination.GetType().GetProperties()
            .Where(x => !x.CustomAttributes.Any(y => y.AttributeType == typeof(PropertyCopyIgnoreAttribute))
                        && x.CanRead && x.CanWrite && !x.GetGetMethod().IsVirtual);
        var sourceProperties = source.GetType().GetProperties()
            .Where(x => !x.CustomAttributes.Any(y => y.AttributeType == typeof(PropertyCopyIgnoreAttribute))
                        && x.CanRead && x.CanWrite && !x.GetGetMethod().IsVirtual);
        // Pair source and destination properties by name, then copy the values across
        var copyProperties = sourceProperties.Join(destProperties, x => x.Name, y => y.Name, (x, y) => x);
        foreach (var sourceProperty in copyProperties)
        {
            var prop = destProperties.First(x => x.Name == sourceProperty.Name);
            prop.SetValue(destination, sourceProperty.GetValue(source));
        }
    }
}
I want to address a specific point in your question, regarding "forgetting some properties and disaster happens". The reason this happens is that you do not have a constructor on your model, you just have setters that can be set (or not) from anywhere. This is not a good approach for defensive coding.
I use constructors on all my Models like so:
public User(Person person, string email, string username, string password, bool isActive)
{
    Person = person;
    Email = email;
    Username = username;
    Password = password;
    IsActive = isActive;
}

public Person Person { get; }
public string Email { get; }
public string Username { get; }
public string Password { get; }
public bool IsActive { get; }
As you can see I have no setters, so object construction must be done via constructor. If you try to create an object without all the required parameters the compiler will complain.
With this approach it becomes clear that tools like AutoMapper don't make sense when going from ViewModel to Model, as Model construction under this pattern is no longer simple mapping; it's about constructing your object.
Also, as your Models become more sophisticated, you will find that they differ significantly from your ViewModels. ViewModels tend to be flat, with simple properties like string, int, and bool. Models, on the other hand, often include custom objects. You will notice in my example there is a Person object, whereas UserViewModel uses primitives instead, like so:
public class UserViewModel
{
    public int Id { get; set; }
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public string Email { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public bool IsActive { get; set; }
}
So mapping from primitives to complex objects limits AutoMapper's usefulness.
My approach is always manual construction in the ViewModel-to-Model direction. In the other direction, Model to ViewModel, I often use a hybrid approach: I manually map Person to FirstName and LastName, but use a mapper for the simple properties.
Edit: Based on the discussion below, AutoMapper is better at unflattening than I believed. Though I will refrain from recommending it one way or the other, if you do use it, take advantage of features like construction and configuration validation to help prevent silent failures.
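Configuration validation looks roughly like this in AutoMapper (a sketch; UserViewModel and User are the types from this answer, and with a constructor-only User, AutoMapper would also need to resolve the constructor parameters):

```csharp
// Throws AutoMapperConfigurationException at startup if any destination
// member of User cannot be matched to a source member on UserViewModel,
// instead of silently leaving it at its default value at runtime.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<UserViewModel, User>();
});
config.AssertConfigurationIsValid();
```

Running this once at startup turns "I forgot a property" from a silent data bug into an immediate, loud failure.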
Use Newtonsoft.Json to serialize the viewmodel first, then deserialize it into the model.
First, serialize the viewmodel:
var viewmodel = JsonConvert.SerializeObject(companyInfoViewModel);
Then deserialize it into the model:
var model = JsonConvert.DeserializeObject<CompanyInfo>(viewmodel);
Hence all the data is passed from viewmodel to model easily.
As one line:
var company = JsonConvert.DeserializeObject<CompanyInfo>(JsonConvert.SerializeObject(companyInfoViewModel));

Using get/set in asp.net c# - there are too many properties

I am used to creating a properties class where I would include all my fields and write all the get/set properties, and then another Database class where I would make all my database calls.
Properties Class
private int _intCard;

public int IntCard
{
    get { return _intCard; }
    set { _intCard = value; }
}

// constructor here
Right now this does not feel like the right approach, as I have over 120 properties to deal with, and it seems really time-consuming to write each one of those properties out. I chose this way because I need to add validation on some of the properties; I could validate them in the set method. Can anyone suggest an alternative approach I could look into that achieves the same result?
********************---------------*******************
Given the comments, I understand my design is flawed; that is what I figured coming into this question. I have an idea of how to fix this, but I do not know if it is the correct approach. I searched for object design principles and read up on them, but will need more time to grasp what they are teaching me. For now I would like to know if this approach is the correct way.
I am keeping track of an applicant's name, address, phone, fax number, cell phone, alternate phone, and alternate address, the same for the spouse, and then children, references, company information... and so on.
I am not going to lie: I do not understand abstract classes yet. If that is the approach I should take, I will take more time to learn them, but for now I was hoping this would be suitable.
The property classes would be as follows:
applicant.cs, applicantspouse.cs, applicantcontactinfo.cs, appreferences.cs......
Is this along the lines of what I should be doing?
Thanks again
I can't help thinking your object modelling isn't right here. If you have a class with 120 properties then you've not divided that object up into separate roles/responsibilities. I would look at (dramatically) increasing the number of classes you're creating; that way your solution becomes more manageable.
That won't reduce the number of properties you have to handle, though. It may be worth considering immutable objects (do you need to set these properties other than during construction?), and/or the Builder pattern to aid construction.
Finally, do you need to expose these properties at all? A key part of OO is telling objects to do things for you, rather than getting their contents and doing things for them. If you can tell an object to do something for you, you quite likely don't need to expose its (internal) fields.
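That "tell, don't ask" idea can be sketched quickly; the Account type here is hypothetical, invented purely for illustration:

```csharp
using System;

public class Account
{
    private decimal _balance; // internal state, never exposed as a property

    public Account(decimal openingBalance)
    {
        _balance = openingBalance;
    }

    // Callers tell the object what to do...
    public void Withdraw(decimal amount)
    {
        if (amount > _balance)
            throw new InvalidOperationException("Insufficient funds");
        _balance -= amount;
    }

    // ...or ask it a question it can answer itself,
    // instead of reading the balance and deciding outside.
    public bool CanAfford(decimal amount)
    {
        return _balance >= amount;
    }
}
```

No caller ever reads or writes the balance directly, so the invariant (no overdraft) lives in exactly one place.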
By reading the comments, it looks like you need at least two classes, Person and Address, something like:
public class Person
{
    Guid Id { get; set; }
    string Name { get; set; }
    // ad infinitum: the truly unique things that relate to an individual
    Address BusinessAddress { get; set; }
    Address HomeAddress { get; set; }
    Person Spouse { get; set; }
}

public class Address
{
    Guid Id { get; set; }
    string Line1 { get; set; }
    // ad infinitum: all the truly unique things that relate to an address
}
The above is essentially pseudo-code and shouldn't be read as "this is exactly how to do it"; I haven't, for instance, stated whether the properties are private/public/protected, or provided a constructor.
But it does show how you can use other classes as properties and, in the case of Spouse, create quite rich and deep object hierarchies (Spouse could contain addresses and potentially another spouse - circular reference ahoy!) that can be populated and used to make code more readable. It also separates out responsibility: the code that encapsulates a concept/entity/domain lives in a single unit whose job it is to be that specific thing. It's probably worth looking at OOP concepts such as encapsulation and inheritance (basically the four tenets of OO) to get a feel for what an object should represent; the link below has a brief intro and should help you decide how to break out the classes and construct more useful objects.
http://codebetter.com/raymondlewallen/2005/07/19/4-major-principles-of-object-oriented-programming/
In modern C# versions there's a super-compact syntax for properties:
public class Properties
{
    public int IntCard { get; set; }
}
Here C# handles the private backing field for you, so you can avoid a lot of keystrokes. For validation you can use Data Annotations. More info here.
Hope it helps.
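Since validation was the stated reason for writing the setters by hand, here is a hedged sketch of pairing an auto-property with Data Annotations and checking it via the framework's Validator class; the CardInfo type and the chosen range are made up for illustration:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class CardInfo
{
    // Auto-property: no hand-written backing field; the attribute
    // carries the validation rule instead of code in the setter.
    [Range(1, 999, ErrorMessage = "IntCard must be between 1 and 999")]
    public int IntCard { get; set; }
}

public static class Demo
{
    // Runs all Data Annotation attributes on the object's properties.
    public static bool IsValid(object obj)
    {
        var results = new List<ValidationResult>();
        return Validator.TryValidateObject(obj, new ValidationContext(obj), results, validateAllProperties: true);
    }
}
```

For example, Demo.IsValid(new CardInfo { IntCard = 0 }) fails the range check, while IntCard = 5 passes; MVC runs these same attributes automatically during model binding.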
Totally agree with Brian Agnew that if you have that many properties in one class then you probably need to do some refactoring, as you almost certainly don't have enough separation of concerns.
However, even after some refactoring you will still have the properties, so it would be worth looking at the data validation attributes. For example, here is a walkthrough of using them with MVC: http://www.asp.net/mvc/tutorials/older-versions/models-(data)/validation-with-the-data-annotation-validators-cs. You could then use auto-implemented properties:
public int IntCard { get; set; }
Please note that this does not address your design issues. If your database is on SQL Server, to avoid typing you could use a query like the one below (please modify it for your requirements) to generate the property list with data types, and then copy and paste the results. SQL SERVER DEMO
SELECT 'public ' + CASE DATA_TYPE WHEN 'smallint' THEN 'short'
WHEN 'bit' THEN 'bool'
WHEN 'smalldatetime' THEN 'System.DateTime'
WHEN 'datetime' THEN 'System.DateTime'
WHEN 'date' THEN 'System.DateTime'
WHEN 'uniqueidentifier' THEN 'System.Guid'
WHEN 'varchar' THEN 'string'
WHEN 'int' THEN 'int'
WHEN 'numeric' THEN 'decimal'
ELSE DATA_TYPE END
+ CASE IS_NULLABLE WHEN 'NO' THEN '' ELSE '?' END
+ ' ' + COLUMN_NAME
+ ' { get; set; }' AS def
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTableName'
ORDER BY IS_NULLABLE, ORDINAL_POSITION
Tim,
Based on your edit, it looks like you are on the right track. You should be breaking the properties down into specific items, for example:
public class Person
{
    public string GivenName { get; set; }
    public string Surname { get; set; }
    public ContactInfo ContactInformation { get; set; }
}

public class Applicant : Person
{
    public Person Spouse { get; set; }
    public List<Person> Children { get; set; }
    public List<Reference> References { get; set; }
}

public class ContactInfo
{
    [Required]
    [DataType(DataType.PhoneNumber)]
    public string PhoneNumber { get; set; }

    [DataType(DataType.EmailAddress)]
    public string EmailAddress { get; set; }

    public Address PrimaryAddress { get; set; }
    public Address AlternativeAddress { get; set; }
}
So the key points for you here are that:
the classes are broken down into manageable, reusable chunks
Data Annotations (Required & DataType in the ContactInfo class) are used to validate properties
The properties no longer need explicit private variables
P.S. A bit more info about data annotations: http://msdn.microsoft.com/en-us/library/dd901590(v=vs.95).aspx

Rich domain model with behaviours and ORM

After watching the NDC12 presentation "Crafting Wicked Domain Models" by Jimmy Bogard (http://ndcoslo.oktaset.com/Agenda), I was wondering how to persist that kind of domain model.
This is sample class from presentation:
public class Member
{
    List<Offer> _offers;

    public Member(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
        _offers = new List<Offer>();
    }

    public string FirstName { get; set; }
    public string LastName { get; set; }

    public IEnumerable<Offer> AssignedOffers
    {
        get { return _offers; }
    }

    public int NumberOfOffers { get; private set; }

    public Offer AssignOffer(OfferType offerType, IOfferValueCalc valueCalc)
    {
        var value = valueCalc.CalculateValue(this, offerType);
        var expiration = offerType.CalculateExpiration();
        var offer = new Offer(this, offerType, expiration, value);
        _offers.Add(offer);
        NumberOfOffers++;
        return offer;
    }
}
So there are some rules contained in this domain model:
- A Member must have a first and last name
- The number of offers can't be changed from outside
- The Member is responsible for creating a new offer, calculating its value, and assigning it
If I try to map this to an ORM like Entity Framework or NHibernate, it will not work.
So, what's the best approach for mapping this kind of model to a database with an ORM?
For example, how do I load AssignedOffers from the DB if there's no setter?
The only thing that makes sense to me is a command/query architecture: queries always return DTOs, not domain entities, and commands are executed on domain models. Also, event sourcing is a perfect fit for behaviours on a domain model. But this kind of CQS architecture may not be suitable for every project, especially brownfield ones. Or is it?
I'm aware of similar questions here, but couldn't find concrete example and solution.
This is actually a very good question and something I have contemplated. It is potentially difficult to create proper domain objects that are fully encapsulated (i.e. no property setters) and have an ORM build the domain objects directly.
In my experience there are 3 ways of solving this issue:
As already mentioned by Luka, NHibernate supports mapping to private fields rather than property setters.
If you're using EF (which I don't think supports the above), you could use the memento pattern to restore state to your domain objects: e.g. you use Entity Framework to populate 'memento' objects which your domain entities accept in order to set their private fields.
As you have pointed out, using CQRS with event sourcing eliminates this problem. This is my preferred method of crafting perfectly encapsulated domain objects, that also have all the added benefits of event sourcing.
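The memento approach (option 2) might look something like the following; the types are an illustrative sketch based on the Member example above, not a definitive EF implementation:

```csharp
using System.Collections.Generic;

public class Offer { } // placeholder for the real Offer entity

// A plain 'memento' state object that EF can populate freely.
public class MemberMemento
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int NumberOfOffers { get; set; }
}

public class Member
{
    private readonly List<Offer> _offers = new List<Offer>();

    public string FirstName { get; private set; }
    public string LastName { get; private set; }
    public int NumberOfOffers { get; private set; }

    public Member(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // The entity rehydrates itself from the memento,
    // so no public setters are ever exposed.
    public static Member FromMemento(MemberMemento state)
    {
        return new Member(state.FirstName, state.LastName)
        {
            NumberOfOffers = state.NumberOfOffers
        };
    }
}
```

The repository queries MemberMemento through EF and hands it to Member.FromMemento, keeping the domain object fully encapsulated.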
Old thread, but there's a more recent post (late 2014) by Vaughn Vernon that addresses just this scenario, with particular reference to Entity Framework. Given that I somehow struggled to find such information, maybe it is helpful to post it here as well.
Basically the post advocates having the Product domain (aggregate) object wrap a ProductState EF POCO data object for the "data bag" side of things. The domain object still adds all its rich domain behaviour through domain-specific methods/accessors, but it resorts to the inner data object when it has to get/set its properties.
Copying a snippet straight from the post:
public class Product
{
    public Product(
        TenantId tenantId,
        ProductId productId,
        ProductOwnerId productOwnerId,
        string name,
        string description)
    {
        State = new ProductState();
        State.ProductKey = tenantId.Id + ":" + productId.Id;
        State.ProductOwnerId = productOwnerId;
        State.Name = name;
        State.Description = description;
        State.BacklogItems = new List<ProductBacklogItem>();
    }

    internal Product(ProductState state)
    {
        State = state;
    }

    //...

    private readonly ProductState State;
}

public class ProductState
{
    [Key]
    public string ProductKey { get; set; }

    public ProductOwnerId ProductOwnerId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public List<ProductBacklogItemState> BacklogItems { get; set; }

    ...
}
The repository would use the internal constructor in order to instantiate (load) an entity instance from its DB-persisted version.
The one bit I can add myself is that the Product domain object should probably be dirtied with one more accessor, just for the purpose of persistence through EF: in the same way as new Product(productState) allows a domain entity to be loaded from the database, the opposite direction should be allowed through something like:
public class Product
{
    // ...

    internal ProductState State
    {
        get
        {
            // return this.State as is, if you trust the caller (repository),
            // or deep clone it and return it
        }
    }
}

// inside repository.Add(Product product):
dbContext.Add(product.State);
For AssignedOffers : if you look at the code you'll see that AssignedOffers returns value from a field. NHibernate can populate that field like this: Map(x => x.AssignedOffers).Access.Field().
Agree with using CQS.
When doing DDD, the first thing is to ignore persistence concerns. The ORM is tightly coupled to an RDBMS, so it's a persistence concern.
An ORM models the persistence structure, NOT the domain. Basically the repository must 'convert' the received aggregate root into one or more persistence entities. The bounded context matters a lot, since the aggregate root changes according to what you are trying to accomplish as well.
Let's say you want to save the Member in the context of a new offer being assigned. Then you'll have something like this (of course this is only one possible scenario):
public interface IAssignOffer
{
    int OwnerId { get; }
    Offer AssignOffer(OfferType offerType, IOfferValueCalc valueCalc);
    IEnumerable<Offer> NewOffers { get; }
}

public class Member : IAssignOffer
{
    /* implementation */
}

public interface IDomainRepository
{
    void Save(IAssignOffer member);
}
Next, the repo will get only the data required to change the NH entities, and that's all.
About event sourcing, I think you have to see if it fits your domain. I don't see any problem with using event sourcing only for storing domain aggregate roots, while the rest (mainly infrastructure) is stored in the ordinary way (relational tables). I think CQRS gives you great flexibility in this matter.

EF Code First - How to Model This?

I'm working on a web app (MVC) utilizing Entity Framework code first, and I'm trying to figure out how to model this. I could certainly add 15 bool values to a class (bits in the database), but that seems like a pathetic way to go about it. I currently have a customer object that will contain an object for the policies shown in the image below.
I want my view to look just like the one above, and while there are currently no plans to add a 6th policy, architecting the model to support that possibility would be important.
public class customer
{
    // some random properties like Id, Name, Owner, etc.
    // I could put 15 bools here for the policies in the image
    // I could put a policy object here?
}
Here is a design that is simple, self-describing, scalable, normalized, and extensible. You can add additional policy types or patient types without recompiling the system. You didn't state which database engine you are using, so to make it work across most database platforms I'd suggest you use TPC (table per concrete class).
A patient is just a role that a person (aka party) plays in the system. You can have other roles such as "doctor", "employee", "policy holder" and so forth, each with their own data. It is important to note that roles are temporal, meaning a single role can be voided while the person performs other roles in the system.
If "Existing", "AgeIn", and "NewPatient" can be determined by looking at properties of the Role or Party, then there is no need for a PatientType. I added it because it is unclear how the types of patients are defined. You may very well just have a property on Patient to define that.
A party represents any legal entity. Parties have relationships, which are often important for a business. So when "Sam" (a person) comes to the "Doctor" (a person playing a role), it is important to know that a "policy" of her dad Bob (a person) will be paying the bill. Hence the reason a Person is mapped in a different table.
PolicyType defines what type of policy a policy really is. In your case, you may have 18 different policy types, like ExistingOriginalMediCare, AgeInOriginalMediCare, and so forth. This is where you can store data that influences the "rules" of your policy. For example, some types of policies are only available to people living in California. One system I worked on had thousands of policy types, each with hundreds of properties that applications used to infer business rules. This allowed the business to create new policy types and "rules" without recompiling the system and everything that depended on it.
However, one can simplify this by taking out the inheritance while maintaining the same capabilities. Here we assume that there will be no role other than "patient" and no party other than a "person".
That said, it really depends on whether the data will be reused by other applications, and on how temporal the data and associations really are. Feel free to adapt. I often reference these books when designing systems:
Enterprise Patterns and MDA: Building Better Software with Archetype Patterns and UML
Enterprise Model Patterns: Describing the World (UML Version)
The Data Model Resource Book, Volume 3: Universal Patterns for Data Modeling
They have fundamentally changed the way I look at "data".
You could take a look at TPT (Table Per Type) for this; see http://blogs.microsoft.co.il/blogs/gilf/archive/2010/01/22/table-per-type-inheritance-in-entity-framework.aspx
This would mean that you could have a table for each of these different concepts, each extending a base table. The bonus of doing it this way is that later on you can add additional info to a specific type.
E.g., Customer would be your root table, extended with concepts such as OriginalMedicareCustomer.
If you want to normalize it, I recommend going about it like so:
public class Customer
{
    // id, name, owner, etc.
    public virtual IList<CustomerPolicy> Policies { get; set; }
}

public class CustomerPolicy
{
    // id, name, etc.
    public bool ExistingPatient { get; set; }
    public bool AgeInPatient { get; set; }
    public bool NewPatient { get; set; }
}
Without knowing more about your application I can't say for sure, but I'm guessing that the three booleans for each policy are mutually exclusive? If so, I would instead do something like this:
public enum PatientType { Existing, AgeIn, NewPatient };
public class CustomerPolicy {
    // id, name, etc.
    public PatientType PatientType { get; set; }
}
I'm not entirely sure about your data requirements, but I'd keep it simple and within a table or two, something like this...
public class Customer
{
    public int CustomerID { get; set; }
    // or implement via an enum, like PolicyBits below for the policy type
    public bool Existing { get; set; }
    public bool AgeIn { get; set; }
    public bool New { get; set; }
    // note: 'virtual' enables lazy loading; whether the association is
    // required or optional is configured separately (nullability / fluent API)
    public Policy Policy { get; set; }
}
[Flags]
public enum PolicyBits
{
    None = 0x00,
    ExistingOriginalMediCare = 0x01,
    // ...
    AgeInOriginalMediCare = 0x100,
    // ...
}
public class Policy
{
    public int PolicyID { get; set; }
    public int PolicyTypeValue { get; set; }
    [NotMapped]
    public PolicyBits PolicyType
    {
        get { return (PolicyBits)PolicyTypeValue; }
        set { PolicyTypeValue = (int)value; }
    }
}
...an enum would help you scale down on the number of 'bits', but enum properties aren't officially supported in EF yet; support is planned for the next version and so far exists only in experimental builds for VS 2011 and .NET 4.5 (as I recall).
In the meantime, you can work around it with the int-backed property shown above.
As for the table model, I'm not sure how you want to switch between existing, new, or age-in users, or whether a customer could be two or all three at the same time. Since they are all bits, one field should be enough. You might still put it in a separate table, mostly for separation, so you can redefine it, add new flags, or introduce new records later.
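Since the PolicyBits values are powers of two, one field really can record several policy types at once. A small sketch of combining and testing the bits (the helper method is illustrative, not from the answer):

```csharp
using System;

[Flags]
public enum PolicyBits
{
    None = 0x00,
    ExistingOriginalMediCare = 0x01,
    AgeInOriginalMediCare = 0x100,
}

public static class PolicyBitsDemo
{
    // A customer flagged for several policy types is just the bitwise OR
    // of the individual values; membership is a bitwise AND test.
    public static bool HasAgeIn(PolicyBits bits)
        => (bits & PolicyBits.AgeInOriginalMediCare) != 0;
}
```

For example, `PolicyBits.ExistingOriginalMediCare | PolicyBits.AgeInOriginalMediCare` stores both types in a single int-backed column, and `HasAgeIn` returns true for it.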

c# MongoDB (noRM) - Repository pattern with embedded documents

I’m developing an application with a model similar to Stack Overflow (question / answer etc...)
Modelling a NoSQL Forum Application with C# / ASP.net MVC
The model looks something like this (simplified)
class Question
{
    public string Title { get; set; }
    public string Body { get; set; }
    public DateTime DateCreated { get; set; }
    public string UserName { get; set; }
    public List<Answer> Replies { get; set; }
}
class Answer
{
    public string Body { get; set; }
    public DateTime DateCreated { get; set; }
    public string UserName { get; set; }
}
So each question is a single document, with its answers embedded in it.
I'm trying to design my repositories for this approach.
Should I have two separate repositories? For example:
interface IQuestionRepository
{
    void PutQuestion(Question question);
    Question GetQuestion(string questionID);
}
interface IAnswerRepository
{
    void PutAnswer(string questionID, Answer answer);
    Answer GetAnswer(string answerID);
}
Or something like this:
interface IPostRepository
{
    void PutQuestion(Question question);
    Question GetQuestion(string questionID);
    void PutAnswer(string questionID, Answer answer);
    Answer GetAnswer(string answerID);
}
Your model is inherently flawed.
Question should be a root document.
Answer should be a root document.
While written with RavenDB in mind, the document-modeling advice here applies directly to your situation: http://codeofrob.com/archive/2010/12/21/ravendb-document-design-with-collections.aspx
Edit: FWIW, the reason your model is flawed is that with document databases you want your documents to model transaction boundaries. Think of the editing scenario on Stack Overflow: multiple people adding and updating answers, all of which alter the root document, while the poster is updating the question. Maintaining consistency would be a nightmare, and the amount of contention on that single object would be very problematic.
RavenDB provides what it calls "patching", which lets you manipulate part of a document structure instead of the entire document, exactly to solve problems like this. But this design is best avoided up front, rather than made to work by greatly increasing the complexity of your persistence model with partial updates and elaborate concurrency handling.
And to answer your specific question: given that model, you would have an AnswersRepository and a QuestionsRepository.
I think it would be better to create a repository for each aggregate root (i.e., only for the Question document).
You don't need an Answer repository. From a domain point of view, you should just add the answer to your Question object. The Question repository should do the job, as Question looks like an aggregate root, and you should have a repository per aggregate root (not per entity).
You should be careful not to create an Anemic Domain Model.
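A minimal in-memory sketch of the one-repository-per-aggregate-root idea; the repository type and its dictionary storage are illustrative assumptions (a real implementation would persist through the Mongo driver), but the shape shows answers being added through the Question aggregate and saved with it:

```csharp
using System;
using System.Collections.Generic;

public class Answer
{
    public string Body { get; set; }
    public string UserName { get; set; }
}

public class Question
{
    public string Id { get; set; }
    public string Title { get; set; }
    public List<Answer> Replies { get; set; } = new List<Answer>();

    // Behavior lives on the aggregate rather than on an Answer repository,
    // which also helps avoid an anemic domain model.
    public void AddAnswer(Answer answer) => Replies.Add(answer);
}

// One repository for the aggregate root only: persisting the Question
// persists its embedded answers along with it.
public class QuestionRepository
{
    private readonly Dictionary<string, Question> _store = new Dictionary<string, Question>();

    public void Put(Question question) => _store[question.Id] = question;
    public Question Get(string id) => _store[id];
}
```

In a document database the whole Question, answers included, is written as one document, so a Put after AddAnswer is the entire unit of work.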
