Recommended ASP.NET MVC model design approach - C#

I'm trying to decide the best approach for a new project I'm about to start on, when it comes to my model design (and I'm using Dapper.net).
I like the idea of having my models hold objects rather than foreign-key properties, i.e.
public Post LastPost { get; set; }
vs
public int LastPostId { get; set; }
However, if I implement this sort of nice clean approach, I have to multi-map to all the objects, which leads to potential circular references (objects within objects), or I have to stop multi-mapping at a certain point and end up with null objects somewhere down the object tree. Also, if I do multi-map to an extent, I'm perhaps causing unnecessary work, performing joins when they're not always going to be needed.
Or, if I decide to use multi-mapping to populate my objects within objects on an as-needed basis (some repository methods perform multi-mapping because it's needed; others don't bother populating the objects), then it feels kind of dirty, in that I can never be sure whether an object (within an object) is null or not.
I've used NHibernate (or at least some of its more basic functionality) in the past and never had this dilemma: I always had objects within my models, and if/when they were needed I could rely on lazy loading to go get them. Without that lazy loading in Dapper.net, I'm really unsure of the best approach to take.

Why not have the best of both worlds?

private bool _lastPostLoaded;
private Post _lastPost;

public Post LastPost
{
    get
    {
        if (!_lastPostLoaded)
        {
            _lastPost = cnn.Query<Post>(
                "select * from Posts where Id = @lastPostId",
                new { lastPostId }).SingleOrDefault();
            _lastPostLoaded = true;
        }
        return _lastPost;
    }
    set
    {
        _lastPost = value;
        _lastPostLoaded = true;
    }
}

This allows you to eager load when needed with multi-mapping, and lazy load when you are lazy.
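For the eager side, Dapper's multi-mapping can populate the nested object in a single joined query. A minimal sketch, assuming an open connection cnn, a thread id, and a hypothetical ForumThread parent class with a LastPost property (table and column names are illustrative):

// eager load: one query, one join, LastPost populated up front
const string sql = @"select t.*, p.*
                     from Threads t
                     join Posts p on p.Id = t.LastPostId
                     where t.Id = @id";

var thread = cnn.Query<ForumThread, Post, ForumThread>(
    sql,
    (t, p) => { t.LastPost = p; return t; },
    new { id },
    splitOn: "Id").SingleOrDefault();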

Good, it is the lazy-loading proxy pattern.

Related

ViewModels in MVC / MVVM / Separation of layers- best practices?

I'm fairly new to using ViewModels, and I wonder: is it acceptable for a ViewModel to contain instances of domain models as properties, or should the properties of those domain models be properties of the ViewModel itself? For example, if I have a class Album.cs
public class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public virtual Genre Genre { get; set; }
    public virtual Artist Artist { get; set; }
}
Would you typically have the ViewModel hold an instance of the Album class, or would you have the ViewModel expose properties for each of the Album class's properties, as in the two versions below?
public class AlbumViewModel
{
    public Album Album { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
public class AlbumViewModel
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public string Price { get; set; }
    public IEnumerable<SelectListItem> Genres { get; set; }
    public IEnumerable<SelectListItem> Artists { get; set; }
    public int Rating { get; set; }
    // other properties specific to the View
}
tl;dr
Is it acceptable for a ViewModel to contain instances of domain models?
Basically no, because you are literally mixing two layers and tying them together. I must admit, I see it happen a lot, and it depends a bit on the quick-win level of your project, but we can state that it does not conform to the Single Responsibility Principle of SOLID.
The fun part: this is not limited to view models in MVC; it's actually a matter of separating the good old data, business and UI layers. I'll illustrate this later, but for now, keep in mind that it applies to MVC, but also to many other design patterns.
I'll start by pointing out some generally applicable concepts and zoom in on some actual scenarios and examples later.
Let's consider some pros and cons of not mixing the layers.
What it will cost you
There is always a catch. I'll sum them up, explain later, and show why they are usually not applicable:
duplicate code
adds extra complexity
extra performance hit
What you'll gain
There is always a win. I'll sum it up, explain later, and show why this actually makes sense:
independent control of the layers
The costs
duplicate code
It's not DRY!
You will need an additional class, which is probably exactly the same as the other one.
This is an invalid argument. The different layers have well-defined, different purposes. Therefore, a property that lives in one layer has a different purpose than a property in the other, even if the properties have the same name!
For example:
This is not repeating yourself:
public class FooViewModel
{
    public string Name { get; set; }
}

public class DomainModel
{
    public string Name { get; set; }
}
On the other hand, defining a mapping twice is repeating yourself:
public void Method1(FooViewModel input)
{
    // duplicate code: same mapping twice, see Method2
    var domainModel = new DomainModel { Name = input.Name };
    // logic
}

public void Method2(FooViewModel input)
{
    // duplicate code: same mapping twice, see Method1
    var domainModel = new DomainModel { Name = input.Name };
    // logic
}
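To stay DRY here, define the mapping exactly once and call it from both methods; for example as an extension method (a sketch; the ToDomainModel name is just an illustration):

public static class FooMappings
{
    // the one and only place where the FooViewModel -> DomainModel mapping lives
    public static DomainModel ToDomainModel(this FooViewModel input)
    {
        return new DomainModel { Name = input.Name };
    }
}

public void Method1(FooViewModel input)
{
    var domainModel = input.ToDomainModel(); // no duplicated mapping code
    // logic
}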
It's more work!
Really, is it? If you start coding, more than 99% of the models will overlap. Grabbing a cup of coffee will take more time ;-)
"It needs more maintenance"
Yes, it does; that's why you need to unit test your mappings (and remember, don't repeat the mapping).
adds extra complexity
No, it does not. It adds an extra layer, which makes the system more complicated. It does not add complexity.
A smart friend of mine, once stated it like this:
"A flying plane is a very complicated thing. A falling plane is very complex."
He is not the only one using such a definition; the difference is in predictability, which has an actual relation to entropy, a measure of chaos.
In general: patterns do not add complexity. They exist to help you reduce complexity. They are solutions to well known problems. Obviously, a poorly implemented pattern doesn't help therefore you need to understand the problem before applying the pattern. Ignoring the problem doesn't help either; it just adds technical debt which has to be repaid sometime.
Adding a layer gives you well-defined behavior, which, due to the obvious extra mapping, will be a bit more complicated. Mixing layers for various purposes will lead to unpredictable side effects when a change is applied. Renaming your database column will result in a mismatched key/value lookup in your UI, which makes you perform a nonexistent API call. Now, think about how this relates to your debugging efforts and maintenance costs.
extra performance hit
Yes, extra mapping will consume extra CPU. This, however (unless you have a Raspberry Pi connected to a remote database), is negligible compared to fetching the data from the database. Bottom line: if this is an issue, use caching.
The win
independent control of the layers
What does this mean?
Any combination of this (and more):
creating a predictable system
altering your business logic without affecting your UI
altering your database, without affecting your business logic
altering your ui, without affecting your database
able to change your actual data store
totally independent functionality, with isolated, well-testable behavior that is easy to maintain
cope with change and empower business
In essence: you are able to make a change, by altering a well defined piece of code without worrying about nasty side effects.
beware: business counter measures!
"this is to reflect change, it's not going to change!"
Change will come: an industry that spends trillions of US dollars annually cannot simply stand still.
Well, that's nice. But face it: as a developer, the day you don't make any mistakes is the day you stop working. The same applies to business requirements.
Fun fact: software entropy.
"my (micro) service or tool is small enough to cope with it!"
This might be the toughest one, since there is actually a good point here. If you develop something for one-time use, it probably won't be able to cope with change at all, and you'll have to rebuild it anyway, provided you are actually going to reuse it. Nevertheless, for all other things: "change will come", so why make the change more complicated? And please note that leaving out layers in your minimalistic tool or service will usually put a data layer closer to the (user) interface. If you are dealing with an API, your implementation will require a version update which needs to be distributed among all your clients. Can you do that during a single coffee break?
"lets do it quick-and-simple, just for the time being...."
Is your job "for the time being"? Just kidding ;-) but when are you going to fix it? Probably when your technical debt forces you to. At that point it will cost you more than this short coffee break.
"What about 'closed for modification and open for extension'? That's also a SOLID principle!"
Yes, it is! But this doesn't mean you shouldn't fix typos, or that every applied business rule can be expressed as a sum of extensions, or that you are not allowed to fix things that are broken. Or, as Wikipedia states it:
A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding)
which actually promotes separation of layers.
Now, some typical scenarios:
ASP.NET MVC
Since this is what you are using in your actual question, let me give an example. Imagine the following view model and domain model:
note: this is also applicable to other layer types, to name a few: DTO, DAO, Entity, ViewModel, Domain, etc.
public class FooViewModel
{
    public string Name { get; set; }

    // hey, a domain model class!
    public DomainClass Genre { get; set; }
}

public class DomainClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}
So, somewhere in your controller you populate the FooViewModel and pass it on to your view.
Now, consider the following scenarios:
1) The domain model changes.
In this case you'll probably need to adjust the view as well, which is bad practice in the context of separation of concerns.
If you have separated the ViewModel from the DomainModel, a minor adjustment in the mappings (ViewModel => DomainModel (and back)) would be sufficient.
2) The DomainClass has nested properties and your view just displays the "GenreName"
I have seen this go wrong in real-life scenarios.
In this case a common problem is that the use of @Html.EditorFor will lead to inputs for the nested object. This might include Ids and other sensitive information, which means leaking implementation details! Your actual page is tied to your domain model (which is probably tied to your database somewhere). Following this course, you'll find yourself creating hidden inputs. If you combine this with server-side model binding or AutoMapper, it gets harder to block the manipulation of hidden Ids with tools like Firebug, and forgetting to set an attribute on a property will make it available in your view.
Although it's possible, maybe even easy, to block some of those fields, the more nested domain/data objects you have, the trickier it becomes to get this part right. And what if you are "using" this domain model in multiple views? Will they behave the same? Also bear in mind that you might want to change your domain model for reasons that don't necessarily target the view. So with every change in your domain model, you should be aware that it might affect the view(s) and the security aspects of the controller.
3) In ASP.NET MVC it is common to use validation attributes.
Do you really want your domain to contain metadata about your views? Or to apply view logic to your data layer? Is your view validation always the same as the domain validation? Does it have the same fields (or are some of them a concatenation)? Does it have the same validation logic? Are you using your domain models across applications? etc.
I think it's clear this is not the route to take.
4) More
I can give you more scenarios, but it's just a matter of taste as to what's more appealing. I'll just hope that at this point you get the point :)
Now, for really dirty and quick-wins it will work, but I don't think you should want it.
It's just a little more effort to build a view model, which is usually 80+% similar to the domain model. This might feel like doing unnecessary mappings, but when the first conceptual difference arises, you'll find that it was worth the effort :)
So as an alternative, I propose the following setup for a general case:
create a viewmodel
create a domainmodel
create a datamodel
use a library like AutoMapper to create the mappings from one to the other (this will help map Foo.FooProp to OtherFoo.FooProp)
The benefits include, for example: if you create an extra field in one of your database tables, it won't affect your view. It might hit your business layer or mappings, but it will stop there. Of course, most of the time you'll want to change your view as well, but in that case you don't need to. It therefore keeps the problem isolated in one part of your code.
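For illustration, a minimal AutoMapper setup for the Album example above could look like this (a sketch using AutoMapper's MapperConfiguration API; older versions used the static Mapper.CreateMap instead):

using AutoMapper;

// define the mappings once, at application startup
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Album, AlbumViewModel>();   // domain -> view model
    cfg.CreateMap<AlbumViewModel, Album>();   // view model -> domain
});
var mapper = config.CreateMapper();

// usage in a controller action:
AlbumViewModel vm = mapper.Map<AlbumViewModel>(album);

Properties that exist on only one side (Genres, Artists, Rating) are simply left for you to fill in by hand, which is exactly where the view-specific logic belongs.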
Web API / data-layer / DTO
First a note: there's a nice article on how a DTO (which is not a view model) can be omitted in some scenarios, and my pragmatic side fully agrees with it ;-)
Another concrete example of how this will work in a Web-API / ORM (EF) scenario:
Here it's more intuitive: especially when the consumer is a third party, it's unlikely that your domain model matches the implementation of your consumer, so a view model is more likely to be fully self-contained.
note: the name "domain model" is sometimes mixed up with DTO or "Model"
Please note that in Web (or HTTP or REST) API; communications is often done by a data-transfer-object (DTO), which is the actual "thing" that's being exposed on the HTTP-endpoints.
So, where should we put these DTOs, you might ask. Are they between the domain models and the view models? Well, yes; we have already seen that treating them as view models would be hard, since the consumer is likely to implement a customized view.
Would the DTOs be able to replace the domain models, or do they have a reason to exist on their own? In general, the concept of separation applies to the DTOs and domain models as well. But then again, you can ask yourself (and this is where I tend to be a bit pragmatic): is there enough logic within the domain to explicitly define a domain layer? I think you'll find that as your services get smaller and smaller, the actual logic that is part of the domain models decreases as well, and may be left out altogether, and you'll end up with:
EF/(ORM) Entities ↔ DTO/DomainModel ↔ Consumers
disclaimer / note
As @mrjoltcola stated: there is also component over-engineering to keep in mind. If none of the above applies and the users/programmers can be trusted, you are good to go. But keep in mind that maintainability and reusability will decrease due to the domain-model/view-model mixing.
Opinions vary, from a mix of technical best practices to personal preferences.
There is nothing wrong with using domain objects in your view models, or even using domain objects as your model, and many people do. Some feel strongly about creating view models for every single view, but personally, I feel many apps are over-engineered by developers who learn and repeat one approach that they are comfortable with. The truth is there are several ways to accomplish the goal using newer versions of ASP.NET MVC.
The biggest risk, when you use a common domain class for your view model and your business and persistence layer, is that of model injection. Adding new properties to a model class can expose those properties outside the boundary of the server. An attacker can potentially see properties he should not see (serialization) and alter values he should not alter (model binders).
To guard against injection, use secure practices that are relevant to your overall approach. If you plan to use domain objects, then make sure to use whitelists or blacklists (inclusion/exclusion) in the controller or via model-binder annotations. Blacklists are more convenient, but lazy developers writing future revisions may forget about them or not be aware of them. Whitelists ([Bind(Include=...)]) are obligatory, requiring attention whenever new fields are added, so they act as an inline view model.
Example:
[Bind(Exclude = "CompanyId,TenantId")]
public class CustomerModel
{
    public int Id { get; set; }
    public int CompanyId { get; set; } // user cannot inject
    public int TenantId { get; set; }  // ..
    public string Name { get; set; }
    public string Phone { get; set; }
    // ...
}
or
public ActionResult Edit([Bind(Include = "Id,Name,Phone")] CustomerModel customer)
{
    // ...
}
The first sample is a good way to enforce multitenant safety across the application. The second sample allows customizing each action.
Be consistent in your approach and clearly document the approach used in your project for other developers.
I recommend you always use view models for login/profile-related features, to force yourself to "marshal" the fields between the web controller and the data-access layer as a security exercise.

Is it OK to use a C# property like this?

One of my fellow developers has code similar to the following snippet:
class Data
{
    public string Prop1
    {
        get
        {
            // return the value stored in the database via a query
        }
        set
        {
            // Save the data to local variable
        }
    }

    public void SaveData()
    {
        // Write all the properties to a file
    }
}

class Program
{
    public void SaveData()
    {
        Data d = new Data();
        // Fetch the information from database and fill the local variable
        d.Prop1 = d.Prop1;
        d.SaveData();
    }
}
Here the Data class's properties fetch information from the DB dynamically. When there is a need to save the data to a file, the developer creates an instance and fills the properties using self-assignment, then finally calls save. I tried arguing that this usage of properties is not correct, but he is not convinced.
These are his points:
There are nearly 20 such properties.
Fetching all the information is not required except for saving.
Instead of self-assignment, writing a utility method to fetch everything would duplicate the code already in the properties.
Is this usage correct?
I don't think another developer who works with the same code will be happy to see:
d.Prop1 = d.Prop1;
Personally, I would never do that. It is also not the best idea to use a property to load data from the DB.
I would have a method that loads the data from the DB into a local variable; then you can get that data using the property. Also, get and set should logically work with the same data. It is strange to use get to fetch data from the DB but set to work with a local variable.
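A minimal sketch of that load-method shape, reusing the Data class from the question (LoadFromDatabase is a name invented for illustration):

class Data
{
    public string Prop1 { get; private set; }
    // ... the other ~20 properties

    // one explicit, well-named DB round trip instead of a getter that hides one
    public void LoadFromDatabase(System.Data.IDbConnection connection)
    {
        // run a single query here and populate all the properties at once;
        // the actual data access is omitted, this is just the shape
    }

    public void SaveData()
    {
        // write all the properties to a file, as in the original snippet
    }
}

// usage: explicit, and no "d.Prop1 = d.Prop1" trick
// var d = new Data();
// d.LoadFromDatabase(connection);
// d.SaveData();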
Properties should really be as lightweight as possible.
When other developers are using properties, they expect them to be intrinsic parts of the object (that is, already loaded and in memory).
The real issue here is that of symmetry - the property get and set should mirror each other, and they don't. This is against what most developers would normally expect.
Having the property load up from database is not recommended - normally one would populate the class via a specific method.
This is pretty terrible, imo.
Properties are supposed to be quick / easy to access; if there's really heavy stuff going on behind a property it should probably be a method instead.
Having two utterly different things going on behind the same property's getter and setter is very confusing. d.Prop1 = d.Prop1 looks like a meaningless self-assignment, not a "Load data from DB" call.
Even if you do have to load twenty different things from a database, doing it this way forces it to be twenty different DB trips; are you sure multiple properties can't be fetched in a single call? That would likely be much better, performance-wise.
"Correct" is often in the eye of the beholder. It also depends how far or how brilliant you want your design to be. I'd never go for the design you describe, it'll become a maintenance nightmare to have the CRUD actions on the POCOs.
Your main issue is the absense of separations of concerns. I.e., The data-object is also responsible for storing and retrieving (actions that need to be defined only once in the whole system). As a result, you end up with duplicated, bloated and unmaintainable code that may quickly become real slow (try a LINQ query with a join on the gettor).
A common scenario with databases is to use small entity classes that only contain the properties, nothing more. A DAO layer takes care of retrieving and filling these POCOs with data from the database and defined the CRUD actions only ones (through some generics). I'd suggest NHibernate for the ORM mapping. The basic principle explained here works with other ORM mappers too and is explained here.
The reasons, esp. nr 1, should be a main candidate for refactoring this into something more maintainable. Duplicated code and logic, when encountered, should be reconsidered strongly. If the gettor above is really getting the database data (I hope I misunderstand that), get rid of it as quickly as you can.
Overly simplified example of separations of concerns:
class Data
{
    public string Prop1 { get; set; }
    public string Prop2 { get; set; }
}

class Dao<T>
{
    public void SaveEntity(T data)
    {
        // use reflection to save the properties (this is what any ORM does for you)
    }

    public IList<T> GetAll()
    {
        // use reflection to retrieve all data of this type (again, an ORM does this for you)
    }
}

// usage:
Dao<Data> myDao = new Dao<Data>();
IList<Data> allData = myDao.GetAll();
// modify, query etc. using the Dao; lazy evaluation and caching are done by the ORM for performance,
// but more importantly, this design keeps your code clean, readable and maintainable.
EDIT:
One question you should ask your co-worker: what happens when you have many Data rows in the database, or when a property is the result of a joined query (foreign-key table)? Have a look at Fluent NHibernate if you want a smooth transition from the one situation (unmaintainable) to the other (maintainable) that's easy enough for anybody to understand.
If I were you, I would write serialize/deserialize functions, then provide properties as lightweight wrappers around the in-memory results.
Take a look at the ISerializable interface: http://msdn.microsoft.com/en-us/library/system.runtime.serialization.iserializable.aspx
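A minimal sketch of that idea, mirroring the Data class from the question (standard .NET serialization wiring, nothing DB-specific):

using System;
using System.Runtime.Serialization;

[Serializable]
class Data : ISerializable
{
    // a lightweight property over an in-memory value; no hidden DB trip
    public string Prop1 { get; set; }

    public Data() { }

    // called by the serializer when writing the object out
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Prop1", Prop1);
    }

    // called by the serializer when reading the object back in
    protected Data(SerializationInfo info, StreamingContext context)
    {
        Prop1 = info.GetString("Prop1");
    }
}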
This would be very hard to work with. If you set Prop1 and then get Prop1, you could end up with different results, e.g.:

// set Prop1 to "abc"
d.Prop1 = "abc";

// if the data source holds "xyz" for Prop1
string myString = d.Prop1;

// myString will equal "xyz"

Reading the code without the comments, you would expect myString to equal "abc", not "xyz"; this could be confusing. It would also make working with the properties very difficult, requiring a save every time you change a property just to keep things consistent.
As well as agreeing with what everyone else has said about this example: what happens if there are other fields in the Data class, i.e. Prop2, Prop3, etc.? Do they all go back to the database each time they are accessed in order to "return the value stored in the database via a query"? Reading 10 properties would mean 10 database hits; setting 10 properties, 10 writes to the database. That's not going to scale.
In my opinion, that's an awful design. Using a property getter to do "magic" makes the system awkward to maintain. If I joined your team, how would I know about the magic behind those properties?
Create a separate method whose name says what it does.

Creating a large form with multiple dropdowns and text fields in ASP.NET MVC

In my continuing journey through ASP.NET MVC, I am now at the point where I need to render an edit/create form for an entity.
My entity consists of enums and a few other models, created in a repository via LINQ to SQL.
What I am struggling with right now is finding a decent way to render the edit/create forms which will contain a few dropdown lists and a number of text fields. I realize this may not be the most user-friendly approach, but it is what I am going with right now :).
I have a repository layer and a business layer. The controllers interface with the service layer.
Is it best to simply create a viewmodel like so?
public class EventFormViewModel
{
    IEventService _eventService;

    public IEvent Event { get; private set; }
    public IEnumerable<EventCampaign> Campaigns { get; private set; }
    public IEnumerable<SelectListItem> Statuses { get; private set; }
    // Other tables/dropdowns go here

    // Constructor
    public EventFormViewModel(IEventService eventService, IEvent ev)
    {
        _eventService = eventService;
        Event = ev;

        // Initialize Collections
        Campaigns = eventService.getCampaigns().ToSelectList(); // extn method maybe?
        Statuses = eventService.getStatus().ToSelectList();     // extn for each table type?
    }
}
So this will give me a new EventFormViewModel, which I'll bind to a view. But is this the best way? I'd essentially be pulling all the data back from the database for a few different tables and converting it to IEnumerables. This doesn't seem overly efficient, but I suppose I could cache the contents of the dropdowns.
Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?
The last part of my question: for the ToSelectList() extension method, would it be possible to write one method and use it generically across tables, even if some tables have different columns ("Id" and "Name" versus "Id" and "CampaignName")?
Forgive me if this is too general, I'm just trying to avoid going down a dead-end road - or one that will have a lot of potholes.
I wouldn't provide an IEventService for my view model object. I prefer to think of the view model object as a dumb data transfer object. I would let the controller take care of asking the IEventService for the data and passing it on to the view model.
"I'd essentially be pulling all data back from the database for a few different tables and converting them to an IEnumerable"
I don't see why this would be inefficient. You obviously shouldn't pull all the data from the tables; perform the filtering and joining you need in the database as usual, and put the result in the view model.
"Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?"
If your application is very simple, then a service layer may be an unneeded layer of abstraction/indirection. But if your application is even a bit complex (from what you've posted above, I would guess this is the case), weigh what you save by taking the shortcut straight to the repository against what you win in maintainability and testability by using a service layer.
The worst thing you could do would be to go through a service layer only when you feel there is a need for it, and go straight to the repository when the service layer would not provide any extra logic. Whatever you do, be consistent (which almost always means: go through a service layer, even when your application is simple. It won't stay simple).
I would say that if you're thinking of "skipping" a layer, then you're not really ready to use MVC. The whole point of the layers, even when they're thin, is to facilitate unit testing and to enforce separation of concerns.
As for generic methods: is there some reason you can't just use the out-of-the-box objects and then extend them (with extension methods) when they fail to meet your needs?
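On the ToSelectList() part of the question: one generic option is an extension method that takes selector delegates, so each table can supply its own value/text columns (a sketch; only the ToSelectList name comes from the question, the rest is illustrative):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public static class SelectListExtensions
{
    // works for any entity type: the caller says which members supply the value and the text
    public static IEnumerable<SelectListItem> ToSelectList<T>(
        this IEnumerable<T> items,
        Func<T, string> value,
        Func<T, string> text)
    {
        return items.Select(x => new SelectListItem
        {
            Value = value(x),
            Text = text(x)
        });
    }
}

// usage:
// Campaigns = eventService.getCampaigns().ToSelectList(c => c.Id.ToString(), c => c.CampaignName);
// Statuses  = eventService.getStatus().ToSelectList(s => s.Id.ToString(), s => s.Name);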

Handling collection properties in a class and NHibernate entities

I was wondering what is the recommended way to expose a collection within a class and if it is any different from the way of doing that same thing when working with NHibernate entities.
Let me explain... I never had a specific problem with my classes exposing collection properties like:
IList<SomeObjType> MyProperty { get; set; }
Having the setter as protected or private sometimes gives me a bit more control over how I want to handle the collection.
I recently came across this article by Davy Brion:
http://davybrion.com/blog/2009/10/stop-exposing-collections-already/
Davy clearly recommends exposing collections as IEnumerables instead of, let's say, Lists, in order to keep users from directly manipulating the contents of those collections. I can understand his point, but I'm not entirely convinced, and judging by the comments on his post, I'm not the only one.
When it comes to NHibernate entities though, it makes much sense to hide the collections in the way he proposes especially when cascades are in place. I want to have complete control of an entity that is in session and its collections, and exposing AddXxx and RemoveXxx for collection properties makes much more sense to me.
The problem is how to do it?
If I have the entity's collections as IEnumerables, I have no way of adding/removing elements without either calling ToList() (which makes a new list, so nothing gets persisted) or casting them to Lists (which is a pain because of proxies and lazy loading).
The overall idea is to not allow an entity to be retrieved and have its collections manipulated (add/remove elements) directly, but only through the methods I expose, while honouring the cascades for collection persistence.
Your advice and ideas will be much appreciated.
How about...
private IList<string> _mappedProperty;

public IEnumerable<string> ExposedProperty
{
    get { return _mappedProperty.AsEnumerable(); }
}

public void Add(string value)
{
    // Apply business rules, raise events, queue message, etc.
    _mappedProperty.Add(value);
}
This solution is possible if you let NHibernate map to the private field, i.e. _mappedProperty. You can read more about how to do this in the access and naming strategies documentation.
In fact, I prefer to map all my classes like this. It's better that the developer decides how to define the public interface of the class, not the ORM.
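For example, with Fluent NHibernate (mentioned elsewhere on this page), field access can be configured roughly like this. This is a sketch assuming a hypothetical Customer entity whose read-only Orders property is backed by a private _orders field:

using FluentNHibernate.Mapping;

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);

        // map the collection through the private backing field (_orders)
        // rather than the read-only property, so NHibernate can populate it
        HasMany(x => x.Orders)
            .Access.CamelCaseField(Prefix.Underscore);
    }
}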
How about exposing them as a ReadOnlyCollection?

private IList<SomeObjType> _mappedProperty;

public ReadOnlyCollection<SomeObjType> ExposedProperty
{
    get
    {
        return new ReadOnlyCollection<SomeObjType>(_mappedProperty);
    }
}
I am using NHibernate and I usually keep the collections as ISet and make the setter protected.
ISet<SomeObjType> MyProperty { get; protected set; }
I also provide AddXxx and RemoveXxx methods for collection properties where they are required. This has worked quite satisfactorily for me most of the time, but I will say that there have been instances where it made sense to allow client code to add items to the collection directly.
Basically, what I have seen is that if I follow the principle of "Tell, Don't Ask" in my client code, without worrying too much about enforcing rigid access constraints on my domain object properties, then I always end up with a good design.
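Put together, that pattern looks something like this (a sketch; the Order/OrderLine names are illustrative):

using System;
using System.Collections.Generic;

public class OrderLine { }

public class Order
{
    // NHibernate maps this set; the protected setter keeps callers out
    public virtual ISet<OrderLine> Lines { get; protected set; }

    public Order()
    {
        Lines = new HashSet<OrderLine>();
    }

    // "Tell, Don't Ask": callers tell the entity what to do,
    // they never reach into the collection themselves
    public virtual void AddLine(OrderLine line)
    {
        // business rules / invariants would go here
        Lines.Add(line);
    }

    public virtual void RemoveLine(OrderLine line)
    {
        Lines.Remove(line);
    }
}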

ORM and layers

Sorry for this post being all over the place... I feel like a dog chasing its tail, and I'm all confused at this point.
I'm trying to see the cleanest way of developing a 3-tiered solution (UL, BL, DL) where the DL uses an ORM to abstract access to a DB.
Everywhere I've looked, people use either LinqToSQL or LLBLGen Pro to generate objects which represent the DB tables, and refer to those classes in all 3 layers.
It seems like 40 years of coding patterns have been ignored, or a paradigm shift has happened and I missed the explanation of why it's perfectly OK to do so.
Yet there still appears to be some basis for wanting to be data-storage-mechanism agnostic. Look at what just happened to LinqToSQL: a lot of code was written against it, only for MS to drop it... So I would like to isolate the ORM part as best I can; I just don't know how.
So, going back to absolute basics, here are the basic parts that I wish to have assembled in a very very clean way:
The Assemblies I'm starting from:
UL.dll
BL.dll
DL.dll
The main classes:
A Message class that has a property exposing a collection (called MessageAddresses) of MessageAddress objects:
class Message
{
    public MessageAddress From { get; }
    public MessageAddresses To { get; }
}
The functions per layer:
The BL exposes a method to the UI called GetMessage(Guid id), which returns an instance of Message.
The BL in turn wraps the DL.
The DL has a ProviderFactory which wraps a Provider instance.
The DL.ProviderFactory exposes (possibly...part of my questions) two static methods called
GetMessage(Guid id), and
SaveMessage(Message message)
The ultimate goal would be to be able to swap out a provider that was written for Linq2SQL for one for LLBLGen Pro, or another provider that is not working against an ORM (eg VistaDB).
Design Goals:
I would like layer separation.
I would like each layer to only have dependency on layer below it, rather than above it.
I would like ORM generated classes to be in DL layer only.
I would like UL to share Message class with BL.
Therefore, does this mean that:
a) Message is defined in BL
b) The Db/Orm/Manual representation of the DB Table ('DbMessageRecord', or 'MessageEntity', or whatever else ORM calls it) is defined in DL.
c) BL has dependency on DL
d) Before calling DL methods, which do not reference or know about the BL, the BL has to convert its entities to DL entities (e.g. DbMessageRecord)?
UL:
Main()
{
    id = 1;
    Message m = BL.GetMessage(id);
    Console.Write(string.Format("{0} to {1} recipients...", m.From, m.To.Count));
}
BL:
static class MessageService
{
    public static Message GetMessage(Guid id)
    {
        DbMessageRecord message = DLManager.GetMessage(id);
        DbMessageAddressRecord[] messageAddresses = DLManager.GetMessageAddresses(id);
        return MapMessage(message, messageAddresses);
    }

    static Message MapMessage(DbMessageRecord dbMessage, DbMessageAddressRecord[] dbAddresses)
    {
        Message m = new Message(dbMessage.From);
        foreach (DbMessageAddressRecord dbAddressRecord in dbAddresses)
        {
            m.To.Add(new MessageAddress(dbAddressRecord.Name, dbAddressRecord.Address));
        }
        return m;
    }
}
DL:
static class MessageManager
{
    public static DbMessageRecord GetMessage(Guid id);
    public static DbMessageAddressRecord[] GetMessageAddresses(Guid id);
}
Questions:
a) Obviously this is a lot of work sooner or later.
b) More bugs.
c) Slower.
d) Since the BL now depends on the DL and references classes in the DL (e.g. DbMessageRecord), and since these are defined by the ORM, it seems you can't rip out one provider and replace it with another, which makes the whole exercise pointless; you might as well use the ORM's classes throughout the BL.
e) Or... another assembly is needed between the BL and DL, and another mapping is required, in order to leave the BL independent of the underlying DL classes.
Wish I could ask the questions more clearly... but I'm really just lost at this point. Any help would be greatly appreciated.
That is a little all over the place, and it reminds me of my first forays into ORM and DDD.
I personally use core domain objects, messaging objects, message handlers and repositories.
So my UI sends a message to a handler, which in turn hydrates my objects via repositories and executes the business logic in the domain object. I use NHibernate for my data access and Fluent NHibernate for typed binding rather than loosey-goosey .hbm config.
So the messaging is all that is shared between my UI and my handlers, and all the BL lives in the domain.
I know I might have opened myself up for punishment with my explanation; if it's not clear, I will defend it later.
Personally, I am not a big fan of code-generated objects.
I have to keep adding to this answer.
Try to think of your messaging as a command rather than as a data entity representing your DB. I'll give you an example of one of my simple classes and an infrastructure decision that worked very well for me, one that I can't take credit for:
[Serializable]
public class AddMediaCategoryRequest : IRequest<AddMediaCategoryResponse>
{
    private readonly Guid _parentCategory;
    private readonly string _label;
    private readonly string _description;

    public AddMediaCategoryRequest(Guid parentCategory, string label, string description)
    {
        _parentCategory = parentCategory;
        _description = description;
        _label = label;
    }

    public string Description
    {
        get { return _description; }
    }

    public string Label
    {
        get { return _label; }
    }

    public Guid ParentCategory
    {
        get { return _parentCategory; }
    }
}

[Serializable]
public class AddMediaCategoryResponse : Response
{
    public Guid ID;
}

public interface IRequest<T> : IRequest where T : Response, new() {}

[Serializable]
public class Response
{
    protected bool _success;
    private string _failureMessage = "This is the default error message. If a failure has been reported, it should have overwritten this message.";
    private Exception _exception;

    public Response()
    {
        _success = false;
    }

    public Response(bool success)
    {
        _success = success;
    }

    public Response(string failureMessage)
    {
        _failureMessage = failureMessage;
    }

    public Response(string failureMessage, Exception exception)
    {
        _failureMessage = failureMessage;
        _exception = exception;
    }

    public bool Success
    {
        get { return _success; }
    }

    public string FailureMessage
    {
        get { return _failureMessage; }
    }

    public Exception Exception
    {
        get { return _exception; }
    }

    public void Failed(string failureMessage)
    {
        _success = false;
        _failureMessage = failureMessage;
    }

    public void Failed(string failureMessage, Exception exception)
    {
        _success = false;
        _failureMessage = failureMessage;
        _exception = exception;
    }
}

public class AddMediaCategoryRequestHandler : IRequestHandler<AddMediaCategoryRequest, AddMediaCategoryResponse>
{
    private readonly IMediaCategoryRepository _mediaCategoryRepository;

    public AddMediaCategoryRequestHandler(IMediaCategoryRepository mediaCategoryRepository)
    {
        _mediaCategoryRepository = mediaCategoryRepository;
    }

    public AddMediaCategoryResponse HandleRequest(AddMediaCategoryRequest request)
    {
        MediaCategory parentCategory = null;
        MediaCategory mediaCategory = new MediaCategory(request.Description, request.Label, false);
        Guid id = _mediaCategoryRepository.Save(mediaCategory);

        if (request.ParentCategory != Guid.Empty)
        {
            parentCategory = _mediaCategoryRepository.Get(request.ParentCategory);
            parentCategory.AddCategoryTo(mediaCategory);
        }

        AddMediaCategoryResponse response = new AddMediaCategoryResponse();
        response.ID = id;
        return response;
    }
}
I know this goes on and on, but this basic system has served me very well over the last year or so. You can see that the handler then allows the domain object to handle the domain-specific logic.
The concept you seem to be missing is IoC/DI (Inversion of Control / Dependency Injection). Instead of using static methods, each of your layers should depend only on an interface to the layer below, with the actual instance injected into the constructor. You can call your DL a repository, a provider or anything else, as long as it's a clean abstraction of the underlying persistence mechanism.
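A minimal sketch of what that means for the Message example above (the interface and class names are illustrative; Message is the class from the question):

using System;

// the BL depends only on this abstraction, never on a concrete ORM class
public interface IMessageRepository
{
    Message GetMessage(Guid id);
    void SaveMessage(Message message);
}

public class MessageService
{
    private readonly IMessageRepository _repository;

    // the concrete repository (Linq2SQL, LLBLGen Pro, VistaDB...) is injected here
    public MessageService(IMessageRepository repository)
    {
        _repository = repository;
    }

    public Message GetMessage(Guid id)
    {
        return _repository.GetMessage(id);
    }
}

Swapping ORMs then means writing a new IMessageRepository implementation and changing only the composition root, not the BL.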
As for the objects that represent the entities (roughly mapping to tables), I strongly advise against having two sets of objects (one database-specific and one not). It is OK for them to be referenced by all three layers as long as they are POCOs (they should not really know they're persisted), or even DTOs (pure structures with no behavior whatsoever). Making them DTOs fits your BL concept better; however, I prefer having my business logic spread across my domain objects ("the OOP style") rather than having a notion of the BL ("the Microsoft style").
Not sure about LLBLGen, but NHibernate plus any IoC container like Spring.NET or Windsor provides a pretty clean model that supports this.
This is probably too indirect an answer, but last year I wrestled with these sorts of questions in the Java world and found Martin Fowler's Patterns of Enterprise Application Architecture quite helpful (also see his pattern catalog). Many of the patterns deal with the same issues you're struggling with. They are all nicely abstract and helped me organize my thinking to be able to see the problem at a higher level.
I chose an approach that used the iBatis SQL mapper to encapsulate our interactions with the database. (An SQL mapper drives the programming language data model from the SQL tables, whereas an ORM like yours goes the other way around.) The SQL mapper returns lists and hierarchies of Data Transfer Objects, each of which represents a row of some query result. Parameters to queries (and inserts, updates, deletes) are passed in as DTOs too. The BL layer makes calls on the SQL Mapper (run this query, do that insert, etc.) and passes around DTOs. The DTOs go up to the presentation layer (UI) where they drive the template expansion mechanisms that generate XHTML, XML, and JSON representations of the data. So for us, the only DL dependency that flowed up to the UI was the set of DTOs, but they made the UI a lot more streamlined than passing up unpacked field values would.
If you couple the Fowler book with the specific help other posters can give, you'll do fine. This is an area with a lot of tools and prior experience, so there should be many good paths forward.
Edit: @Ciel, you're quite right: a DTO instance is just a POCO (or in my case a Java POJO). A Person DTO could have a first_name field of "Jim" and so on. Each DTO basically corresponds to a row of a database table and is just a bundle of fields, nothing more. This means it's not coupled closely with the DL and is perfectly appropriate to pass up to the UI. Fowler talks about these on p. 401 (not a bad first pattern to cut your teeth on).
Now, I'm not using an ORM, which takes your data objects and creates the database from them; I'm using an SQL mapper, which is just a very efficient and convenient way to package and execute database queries in SQL. I designed my SQL first (I happen to know it pretty well), then I designed my DTOs, and then set up my iBatis configuration to say that "select * from Person where personid = #personid#" should return me a Java List of Person DTO objects. I've not yet used an ORM (e.g. Hibernate in the Java world), but with one of those you'd create your data-model objects first and the database is built from them.
If your data model objects have all sorts of ORM-specific add-ons, then I can see why you would think twice before exposing them up to the UI layer. But there you could create a C# interface that only defines the POCO get and set methods, and use that in all your non-DL APIs, and create an implementation class that has all the ORM-specific stuff in it:
interface Person ...
class ORMPerson : Person ...
Then if you change your ORM later, you can create alternate POCO implementations:
class NewORMPerson : Person ...
and that would only affect your DL layer code, because your BL and UI code uses Person.
@Zvolkov (below) suggests taking this approach of "coding to interfaces, not implementations" up to the next level, recommending that you write your application in such a way that all your code uses Person objects, and use a dependency-injection framework to dynamically configure your application to create either ORMPersons or NewORMPersons depending on which ORM you want to use that day.
Try centralizing all data access using the repository pattern. As far as your entities are concerned, you can try implementing some kind of translation layer that maps your entities, so it won't break your app. This is just temporary and will allow you to refactor your code slowly.
Obviously, I do not know the full scope of your code base, so weigh the pain against the gain.
My opinion only, YMMV.
When I'm messing with any new technology, I figure it should meet two criteria or I'm wasting my time. (Or I don't understand it well enough.)
It should simplify things, or worst case make them no more complicated.
It should not increase coupling or reduce cohesion.
It sounds like you feel like you're headed in the opposite direction, which I know is not the intention for either LINQ or ORMs.
My own perception of the value of this new stuff is that it helps a developer move the boundary between the DL and the BL into slightly more abstract territory: the DL looks less like raw tables and more like objects. That's it. (I usually work pretty hard to do this anyway with slightly heavier SQL and stored procedures, but I'm probably more comfortable with SQL than average.) But if LINQ and ORMs aren't helping you with this yet, I'd say keep at it; that's where the end of the tunnel is: simplification, and moving the abstraction boundary a bit.
