For example, suppose I am creating a 3-layer application (data / business / UI) and the data layer is grabbing single or multiple records. Do I convert everything from the data layer into generic lists/collections before sending it to the business layer? Is it OK to send DataTables? What about sending info back to the data layer?
If I use objects/lists, are these members of the Data or Business layers? Can I use the same objects to pass to and from the layers?
Here is some pseudo code:
object user with email / password
in the UI layer, the user inputs an email / password. The UI layer does validation and then, I assume, creates a new user object to pass to the business layer, which does further validation and passes the same object to the data layer to insert the record. Is this correct?
I am new to .NET (come from 8+ years of ASP VBScript background) and trying to get up to speed on the 'right' way to do things.
I am updating this answer because comments left by Developr seem to indicate that he would like a bit more detail.
The short answer to your question is yes: you'll want to use class instances (objects) to mediate the interface between your UI and your Business Logic Layer. The BLL and DAL will communicate as discussed below. You should not be passing DataTables or SqlDataReaders around.
The simple reasons as to why: objects are type-safe, offer Intellisense support, permit you to make additions or alterations at the Business Layer that aren't necessarily found in the database, and give you some freedom to unlink the application from the database so that you can maintain a consistent BLL interface even as the database changes (within limits, of course). It is simply good programming practice.
The big picture is that, for any page in your UI, you'll have one or more "models" that you want to display and interact with. Objects are the way to capture the current state of a model. In terms of process: the UI will request a model (which may be a single object or a list of objects) from the Business Logic Layer (BLL). The BLL then creates and returns this model - usually using the tools from the Data Access Layer (DAL). If changes are made to the model in the UI, then the UI will send the revised object(s) back to the BLL with instructions as to what to do with them (e.g. insert, update, delete).
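To make this concrete, here is a minimal sketch (with illustrative names, not taken from the question) of the kind of model/logic surface this process implies:
using System;
using System.Collections.Generic;

// Hypothetical model: the UI only ever sees instances of this class.
public class UserModel
{
    public Guid Id { get; set; }
    public string Email { get; set; }
}

// Hypothetical BLL class the UI would call.
public class UserLogic
{
    public List<UserModel> GetUsers()
    {
        // In practice, filled via the DAL.
        return new List<UserModel>();
    }

    public void SaveUser(UserModel user)   { /* insert or update via the DAL */ }
    public void DeleteUser(UserModel user) { /* delete via the DAL */ }
}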
.NET is great for this kind of Separation of Concerns because the Generic container classes - and in particular the List<> class - are perfect for this kind of work. Not only do they let you pass the data around, they integrate easily with sophisticated UI controls like grids, lists, etc. via the ObjectDataSource class. You can implement the full range of operations that you need to develop the UI using ObjectDataSource: "Fill" operations with parameters, CRUD operations, sorting, etc.
Because this is fairly important, let me make a quick diversion to demonstrate how to define an ObjectDataSource:
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    OldValuesParameterFormatString="original_{0}"
    SelectMethod="GetArticles"
    OnObjectCreating="OnObjectCreating"
    TypeName="MotivationBusinessModel.ContentPagesLogic">
    <SelectParameters>
        <asp:SessionParameter DefaultValue="News" Name="category"
            SessionField="CurPageCategory" Type="String" />
    </SelectParameters>
</asp:ObjectDataSource>
Here, MotivationBusinessModel is the namespace for the BLL and ContentPagesLogic is the class implementing the logic for, well, Content Pages. The method for pulling data is "GetArticles" and it takes a Parameter called CurPageCategory. In this particular case, the ObjectDataSource returns a list of objects that is then used by a grid. Note that I need to pass session state information to the BLL class so, in the code behind, I have a method "OnObjectCreating" that lets me create the object and pass in parameters:
public void OnObjectCreating(object sender, ObjectDataSourceEventArgs e)
{
    e.ObjectInstance = new ContentPagesLogic(sessionObj);
}
So, this is how it works. But that begs one very big question - where do the Models / Business Objects come from? ORMs like Linq to SQL and Subsonic offer code generators that let you create a class for each of your database tables. That is, these tools say that the model classes should be defined in your DAL and map directly onto database tables. Linq to Entities lets you define your objects in a manner quite distinct from the layout of your database, but it is correspondingly more complex (that is why there is a distinction between Linq to SQL and Linq to Entities). In essence, it is a BLL solution. Joel and I have said in various places on this thread that, really, the Business Layer is generally where the Models should be defined (although I use a mix of BLL and DAL objects in reality).
Once you decide to do this, how do you implement the mapping from models to the database? Well, you write classes in the BLL to pull the data (using your DAL) and fill the object or list of objects. It is Business Logic because the mapping is often accompanied by additional logic to flesh out the Model (e.g. defining the value of derived fields).
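As a small illustration of a derived field (the names here are invented for this example), the business logic runs as part of the mapping:
public class OrderModel
{
    public decimal Subtotal { get; set; }  // filled from the query
    public decimal TaxRate { get; set; }   // filled from the query
    public decimal Total { get; set; }     // derived: never stored in the database
}

public class OrderLogic
{
    public OrderModel Fill(OrderModel m)
    {
        m.Total = m.Subtotal * (1 + m.TaxRate);  // business rule applied during mapping
        return m;
    }
}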
Joel creates static Factory classes to implement the model-to-database mapping. This is a good approach as it uses a well-known pattern and places the mapping right in the construction of the object(s) to be returned. You always know where to go to see the mapping and the overall approach is simple and straightforward.
I've taken a different approach. Throughout my BLL, I define Logic classes and Model classes. These are generally defined in matching pairs: both classes live in the same file and their names differ only by suffix (e.g. ClassModel and ClassLogic). The Logic classes know how to work with the Model classes - doing things like Fill, Save ("Upsert"), Delete, and generating feedback for a Model instance.
In particular, to do the Fill, I leverage methods found in my primary DAL class (shown below) that let me take any class and any SQL query and find a way to create/fill instances of the class using the data returned by the query (either as a single instance or as a list). That is, the Logic class just grabs a Model class definition, defines a SQL Query and sends them to the DAL. The result is a single object or list of objects that I can then pass on to the UI. Note that the query may return fields from one table or multiple tables joined together. At the mapping level, I really don't care - I just want some objects filled.
Here is the first function. It will take an arbitrary class and map it automatically to all matching fields extracted from a query. The matching is performed by finding fields whose name matches a property in the class. If there are extra class fields (e.g. ones that you'll fill using business logic) or extra query fields, they are ignored.
public List<T> ReturnList<T>() where T : new()
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        SqlDataReader nwReader = myCommand.ExecuteReader();
        Type objectType = typeof(T);
        PropertyInfo[] typeFields = objectType.GetProperties();
        if (nwReader != null)
        {
            while (nwReader.Read())
            {
                T obj = new T();
                for (int i = 0; i < nwReader.FieldCount; i++)
                {
                    foreach (PropertyInfo info in typeFields)
                    {
                        // Because the class may have fields that are *not* being filled,
                        // I don't use nwReader[info.Name] in this function.
                        if (info.Name == nwReader.GetName(i))
                        {
                            if (!nwReader[i].Equals(DBNull.Value))
                                info.SetValue(obj, nwReader[i], null);
                            break;
                        }
                    }
                }
                fdList.Add(obj);
            }
            nwReader.Close();
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
This is used in the context of my DAL but the only thing that you have to have in the DAL class is a holder for the QueryString, a SqlCommand object with an open Connection and any parameters. The key is just to make sure the ExecuteReader will work when this is called. A typical use of this function by my BLL thus looks like:
return qry.Command("Select AttendDate, Count(*) as ClassAttendCount From ClassAttend")
          .Where("ClassID", classID)
          .ReturnList<AttendListDateModel>();
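For context, the model being filled here only needs settable properties whose names match the returned columns; a plausible (assumed) definition would be:
// Assumed shape of AttendListDateModel: the property names match the
// query's column names (AttendDate, ClassAttendCount), which is how
// ReturnList<T> finds and fills them.
public class AttendListDateModel
{
    public DateTime AttendDate { get; set; }
    public int ClassAttendCount { get; set; }
}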
You can also implement support for anonymous classes like so:
public List<T> ReturnList<T>(T sample)
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        SqlDataReader nwReader = myCommand.ExecuteReader();
        var properties = TypeDescriptor.GetProperties(sample);
        if (nwReader != null)
        {
            while (nwReader.Read())
            {
                int objIdx = 0;
                object[] objArray = new object[properties.Count];
                for (int i = 0; i < nwReader.FieldCount; i++)
                {
                    foreach (PropertyDescriptor info in properties)
                    {
                        if (info.Name == nwReader.GetName(i))
                        {
                            objArray[objIdx++] = nwReader[info.Name];
                            break;
                        }
                    }
                }
                fdList.Add((T)Activator.CreateInstance(sample.GetType(), objArray));
            }
            nwReader.Close();
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
A call to this looks like:
var qList = qry.Command("Select QueryDesc, UID, StaffID From Query")
               .Where("SiteID", sessionObj.siteID)
               .ReturnList(new { QueryDesc = "", UID = 0, StaffID = 0 });
Now qList is a generic list of dynamically-created class instances defined on the fly.
Let's say you have a function in your BLL that takes a pull-down list as an argument and is asked to fill it with data. Here is how you could fill the pull-down with the results retrieved above:
foreach (var queryObj in qList)
{
    pullDownList.Add(new ListItem(queryObj.QueryDesc, queryObj.UID.ToString()));
}
In short, we can define anonymous Business Model classes on the fly and then fill them just by passing some (on the fly) SQL to the DAL. Thus, the BLL is very easy to update in response to evolving needs in the UI.
One last note: If you are concerned that defining and passing around objects wastes memory, you shouldn't be: if you use a SqlDataReader to pull the data and place it into the objects that make up your list, you'll only have one in-memory copy (the list) as the reader iterates through in a read-only, forward-only fashion. Of course, if you use DataAdapter and Table classes (etc.) at your data access layer then you would be incurring needless overhead (which is why you shouldn't do it).
In general, I think it is better to send objects rather than data tables. With objects, each layer knows what it is receiving (which objects with what properties, etc.). You get compile-time safety with objects - you can't accidentally misspell a property name - and they force an inherent contract between the two tiers.
Joshua also brings up a good point: by using your custom object, you are also decoupling the other tiers from the data tier. You can always populate your custom object from another data source and the other tiers will be none the wiser. With a SQL data table, this would probably not be so easy.
Joel also made a good point. Having your data layer aware of your business objects is not a good idea for the same reason as making your business and UI layers aware of the specifics of your data layer.
There are nearly as many "correct" ways to do this as there are programming teams in the world. That said, what I like to do is build a factory for each of my business objects that looks something like this:
public static class SomeBusinessObjectFactory
{
    public static SomeBusinessObject FromDataRow(IDataRecord row)
    {
        return new SomeBusinessObject() { Property1 = row["Property1"], Property2 = row["Property2"] ... };
    }
}
I also have a generic translation method that I use to call these factories:
public static IEnumerable<T> TranslateQuery<T>(IEnumerable<IDataRecord> source, Func<IDataRecord, T> factory)
{
    foreach (IDataRecord item in source)
        yield return factory(item);
}
Depending on what your team prefers, the size of the project, etc, these factory objects and translator can live with the business layer or data layer, or even an extra "translation" assembly/layer.
Then my data layer will have code that looks like this:
private SqlConnection GetConnection()
{
    var conn = new SqlConnection( /* connection string loaded from config file */ );
    conn.Open();
    return conn;
}
private static IEnumerable<IDataRecord> ExecuteEnumerable(SqlCommand command)
{
    using (var rdr = command.ExecuteReader())
    {
        while (rdr.Read())
        {
            yield return rdr;
        }
    }
}

public IEnumerable<IDataRecord> SomeQuery(int SomeParameter)
{
    string sql = " .... ";
    using (var cn = GetConnection())
    using (var cmd = new SqlCommand(sql, cn))
    {
        cmd.Parameters.Add("@SomeParameter", SqlDbType.Int).Value = SomeParameter;

        // Yield from inside the using blocks so the connection stays open
        // while the caller enumerates (iterator blocks execute lazily).
        foreach (var record in ExecuteEnumerable(cmd))
            yield return record;
    }
}
And then I can put it all together like this:
SomeGridControl.DataSource = TranslateQuery(SomeQuery(5), SomeBusinessObjectFactory.FromDataRow);
I'd add a new layer, an ORM (Object-Relational Mapping) layer, with the responsibility of transforming data from the data layer into business object collections. I think that using objects in your business model is the best practice.
Whatever means you use to pass data between the layers of your application, just be sure that the implementation details of each layer do not leak into the others. You should be able to change how the data in the relational database is stored without modifying any of the code in the business objects layers (other than serialization of course).
A tight coupling between the design of the business objects and the relational data model is extremely irritating and is a waste of a good RDBMS.
There are a lot of great answers here, I would just add that before you spend a lot of time creating translation layers and factories it's important to understand the purpose and future of your application.
Somewhere (whether it's a config mapping file, a factory, or directly in your data/business/UI layer) some object/file/class/etc. is going to have to know what transpires between each layer. If swapping out layers is realistic, then creating the translation layers is useful. Other times, it just makes sense to have some layer (I usually make it the business layer) know about all the interfaces (or at least enough to broker between data and UI).
Again, this isn't to say all of that stuff is bad, just that it's possibly YAGNI. Some DI and ORM frameworks make this stuff so easy that it's stupid not to do it. If you're using one, then it probably makes sense to use it for all it's worth.
I strongly suggest that you do it with objects. Another approach is to make only your interfaces public while keeping your implementations internal, expose your objects through factories, and then couple your factories behind a façade so that you have a single, unique entry point to your library. Then only data objects pass through your façade, so you always know what to expect both inside and outside of it.
This way, any UI could call your library's façade, and the only thing left to code is your UI.
Here's a link which I find personally very interesting that explains in summary the different design patterns: GoF .NET Design Patterns for C# and VBNET.
If you'd rather see a code sample illustrating what I'm describing, please feel free to ask.
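In the meantime, here is one hedged sketch of the façade/factory arrangement described above (all names are illustrative):
public interface IUserService
{
    void Register(string email, string password);
}

internal class UserService : IUserService        // implementation stays internal
{
    public void Register(string email, string password) { /* ... */ }
}

internal static class UserServiceFactory
{
    public static IUserService Create() { return new UserService(); }
}

public static class LibraryFacade                // the single public entry point
{
    public static IUserService Users
    {
        get { return UserServiceFactory.Create(); }
    }
}
Callers only ever see IUserService and LibraryFacade, so the implementation can change freely behind them.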
The application that I'm working on now is fairly old (in .NET terms) and uses strongly typed datasets to pass data between the data layer and the business layer. In the business layer, the data in the datasets is manually "or mapped" to business objects before being passed to the front end.
This is probably not a popular design decision though because strongly typed dataset were always somewhat controversial.
I'm developing an app for Windows Phone with SQLite and have a lot of custom SQL queries, like:
string query = @"SELECT distinct(destinations.name) as Destinations
                 FROM destinations, flights
                 WHERE destinations.d_ID = flights.d_ID
                 AND flights.Date = #" + date.ToShortDateString() + "#";
then run:
var result = (Application.Current as App).db.Query(query);
For working with SQLite I'm using http://dotnetslackers.com/articles/silverlight/Windows-Phone-7-Native-Database-Programming-via-Sqlite-Client-for-Windows-Phone.aspx#s2-introduction-to-sqlite-client-for-windows-phone and their DBHelper.
I want all queries to be in one place so I can change them quickly.
I wanted to ask how to do this correctly:
create one static class
create an Enum or Dictionary with a collection of queries
create some XML or similar file with the collection
Thanks for the advice.
I don't think any of the approaches are valid, for the following reasons:
Create one static class
This is a God object and is considered an anti-pattern; best to stay away from it. It's just going to be a nightmare to maintain.
create Enum or Dictionary with queries collection
Instead of a God object you now have a God collection, and you are really just implementing the same anti-pattern in a different way.
Additionally, you'll have string keys (or enum keys), and there's no strong link between the key and the query (what if the dictionary doesn't populate for some reason?).
create some XML or similar file with collection
It could be argued that you're doing the same thing you would be doing with a dictionary: you'd have to key the query somehow and then look it up. It's a very brittle approach.
Possible solution
I recommend that you first abstract out your data layer into logical units. Create a class for data operations which are related.
For example, if you have a few queries and operations that are related to destinations, create an interface that exposes those operations:
public interface IDestinationDataOperations
{
    // Get destinations by date.
    IEnumerable<string> GetDestinationsByDate(DateTime asOf);
}
Then, create a class that implements this interface and is specific to SQLite. Where you want to make the calls, declare the variable as the interface type.
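A hedged sketch of what such an implementation might look like - DBHelper stands in for the helper from the article the question links to, and its QueryList method is an assumption, not a real API:
public class SqliteDestinationDataOperations : IDestinationDataOperations
{
    private readonly DBHelper _db;  // hypothetical helper from your SQLite client

    public SqliteDestinationDataOperations(DBHelper db)
    {
        _db = db;
    }

    public IEnumerable<string> GetDestinationsByDate(DateTime asOf)
    {
        // The SQL lives here, in one place; callers only see the interface.
        string sql = @"SELECT DISTINCT destinations.name
                       FROM destinations, flights
                       WHERE destinations.d_ID = flights.d_ID
                       AND flights.Date = @date";
        return _db.QueryList<string>(sql, asOf);  // assumed helper signature
    }
}
The calling code then holds an IDestinationDataOperations variable and never sees the SQL.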
The benefits of this are:
If you change the implementation from SQLite to some other underlying data store (a web service call, a JSON REST call, whatever), you only have to change where you populate the interface variable (this is where dependency injection begins to be of use), since all of your calls are made against the abstraction
The interface is more easily testable:
You can test the direct implementation against any test data you want
For items that rely on the interface, you can mock the interface any way you like and not have an actual database underneath for testing.
Then, for other data operations, you can wash, rinse, and repeat.
For bonus points, you can separate out the interface into a unit-of-work for writes and a repository for reads, depending on what best suits your needs.
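Schematically, that split might look like this (interface names are illustrative):
public interface IDestinationReader       // repository-style: reads
{
    IEnumerable<string> GetDestinationsByDate(DateTime asOf);
}

public interface IDestinationWriter       // unit-of-work style: queued writes
{
    void AddDestination(string name);
    void Commit();                        // apply all pending changes at once
}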
I am trying to create a small personal project which uses EF to handle data access. My project architecture has a UI layer, a service layer, a business layer, and a data access layer. EF is contained within the DAL. I don't think it's right to reference my DAL from my UI, so I want to create custom 'business object' classes which are shared between all my layers.
Example: I have a User table. EF creates a User entity. I have a method, say, GetListOfUsers(). The presentation layer shouldn't rely on a list of EF entities, as the UI would then have a direct link to the DAL. Instead, I need to expose a method from the DAL that looks something like:
List<MyUserObject> GetListOfUsers();
That would then call my internal method GetListOfUsers, which returns a list of user entities and transforms them into MyUserObjects, which are then passed back through the layers to my UI.
Is that correct design? I don't feel the UI, or the business layer for that matter, should have any knowledge of Entity Framework.
What this may mean, though, is maybe I need a 'Transformation layer' between my DAL and my Business layer, which transforms my entities into my custom objects?
Edit:
Here is an example of what I am doing:
I have a data access project, which will contain the Entity Framework. In this project, I will have a method to get me a list of states.
public class DataAccessor
{
    taskerEntities te = new taskerEntities();

    public List<StateObject> GetStates()
    {
        var transformer = new Transformer();
        var items = (from s in te.r_state select s).ToList();
        var states = new List<StateObject>();
        foreach (var rState in items)
        {
            var s = transformer.State(rState);
            states.Add(s);
        }
        return states;
    }
}
My UI/Business/Service projects mustn't know about Entity Framework objects. They must instead know about my custom-built State objects. So, I have a Shared Library project containing my custom-built objects:
namespace SharedLib
{
    public class StateObject
    {
        public int stateId { get; set; }
        public string description { get; set; }
        public Boolean isDefault { get; set; }
    }
}
So, my DAL gets the items into a list of entity objects, and then I pass them through my transformation method to make them into custom-built objects. The transformation takes an EF object and outputs a custom object.
public class Transformer
{
    public StateObject State(r_state state)
    {
        var s = new StateObject
        {
            description = state.description,
            isDefault = state.is_default,
            stateId = state.state_id
        };
        return s;
    }
}
This seems to work. But is it a valid pattern?
So, at some point, your UI will have to work with the data and business objects that you have. It's a fact of life. You could try to abstract further, but that would only succeed in deferring the interaction elsewhere.
I agree that business processes should stand alone from the UI. I also agree that your UI should not interact directly with how you access your data. What you have suggested (something along the lines of "GetListOfUsers()") is known as the Repository pattern.
The purpose of the repository pattern is to:
separate the logic that retrieves the data and maps it to the entity model from the business logic that acts on the model. The business logic should be agnostic to the type of data that comprises the data source layer
My recommendation is to use the Repository pattern to hide HOW you're accessing your data (and allow a better separation of concerns) and just be concerned with the fact that you "just want a list of users" or you "just want to calculate the sum of all time sheets" or whatever it is that you want your application to actually focus on. Read the link for a more detailed description.
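As a rough sketch of that idea (only GetListOfUsers comes from the question; the companion methods are assumed for illustration):
public interface IUserRepository
{
    List<MyUserObject> GetListOfUsers();
    MyUserObject GetUserById(int id);     // assumed companion method
    void SaveUser(MyUserObject user);     // assumed companion method
}
The UI and business layers program against IUserRepository; the EF-specific class that implements it stays in the DAL.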
First, do you really need all those layers in your 'small personal project'?
Second, I think your suggested architecture is a bit unclear.
If I understand you correctly, you want to decouple your UI from your DAL. For that purpose you can, for example, extract an interface from MyUserObject (defined in the DAL, obviously), let's call it IMyUserObject, and instead of referencing the DAL from the UI, reference some abstract project where all types are data-agnostic. Also, I suggest having a service layer that provides your presentation layer (UI) with concrete objects. If you use MVC, you can have a link to the services from your controller class. The service layer, in turn, can use a Repository or some other technique to deal with the DAL (it depends on the complexity you choose).
Considering a transformation layer: people typically deal with mapping from one type to another when they have one simple model (DTO) to communicate with the DB, another one - the domain model - that deals with all the subtleties of business logic, and another one - the presentation model - that is best suited to user interaction. Such layering separates concerns to good measure, making each task simpler, but it makes the app more complicated in general.
So you may end up having MyUserObjectDTO, MyUserObject, and MyUserObjectView, plus some mapping or transformation between them.
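In code, those three shapes might look like this (all members are assumed for illustration):
public class MyUserObjectDTO        // persistence-facing: mirrors the table
{
    public int Id { get; set; }
    public string Email { get; set; }
}

public class MyUserObject           // domain: business rules live here
{
    public int Id { get; private set; }
    public string Email { get; private set; }

    public MyUserObject(int id, string email)
    {
        Id = id;
        Email = email;
    }
}

public class MyUserObjectView       // presentation: shaped for the screen
{
    public string DisplayEmail { get; set; }
}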
In my continuing journey through ASP.NET MVC, I am now at the point where I need to render an edit/create form for an entity.
My entity consists of enums and a few other models, created in a repository via LINQtoSQL.
What I am struggling with right now is finding a decent way to render the edit/create forms which will contain a few dropdown lists and a number of text fields. I realize this may not be the most user-friendly approach, but it is what I am going with right now :).
I have a repository layer and a business layer. The controllers interface with the service layer.
Is it best to simply create a viewmodel like so?
public class EventFormViewModel
{
    IEventService _eventService;

    public IEvent Event { get; private set; }
    public IEnumerable<EventCampaign> Campaigns { get; private set; }
    public IEnumerable<SelectListItem> Statuses { get; private set; }
    // Other tables/dropdowns go here

    // Constructor
    public EventFormViewModel(IEventService eventService, IEvent ev)
    {
        _eventService = eventService;
        Event = ev;

        // Initialize collections
        Campaigns = eventService.getCampaigns().ToSelectList(); // extn method maybe?
        Statuses = eventService.getStatus().ToSelectList();     // extn for each table type?
    }
}
So this will give me a new EventFormViewModel which I'll bind to a view. But is this the best way? I'd essentially be pulling all data back from the database for a few different tables and converting them to an IEnumerable. This doesn't seem overly efficient, but I suppose I could cache the contents of the dropdowns.
Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?
The last part of my question: for the ToSelectList() extension method, would it be possible to write one method and use it generically across tables, even if some tables have different columns ("Id" and "Name" versus "Id" and "CampaignName")?
Forgive me if this is too general, I'm just trying to avoid going down a dead-end road - or one that will have a lot of potholes.
I wouldn't provide an IEventService for my view model object. I prefer to think of the view model object as a dumb data transfer object. I would let the controller take care of asking the IEventService for the data and passing it on to the view model.
I'd essentially be pulling all data back from the database for a few different tables and converting them to an IEnumerable
I don't see why this would be inefficient. You obviously shouldn't pull all data from the tables; perform the filtering and joining you need in the database as usual, and put the result in the view model.
Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?
If your application is very simple, then a service layer may be an unneeded layer of abstraction / indirection. But if your application is just a bit complex (from what you've posted above, I would guess that this is the case), consider what you gain by taking a shortcut straight to a repository, and compare that to what you would win in maintainability and testability by using a service layer.
The worst thing you could do would be to go through a service layer only when you feel there is a need for it, and go straight to the repository when the service layer would not provide any extra logic. Whatever you do, be consistent (which almost always means: go through a service layer, even when your application is simple. It won't stay simple).
I would say if you're thinking of "skipping" a layer, then you're not really ready to use MVC. The whole point of the layers, even when they're thin, is to facilitate unit testing and to try to enforce separation of concerns.
As for generic methods, is there some reason you can't just use the out-of-the-box objects and then extend them (with extension methods) when they fail to meet your needs?
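For what it's worth, here is one hedged way to write the ToSelectList() extension the question mentions - the caller supplies lambdas for the value and text columns, so tables with different column names ("Name" versus "CampaignName") can share one generic method:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;  // SelectListItem

public static class SelectListExtensions
{
    public static IEnumerable<SelectListItem> ToSelectList<T>(
        this IEnumerable<T> items,
        Func<T, string> value,
        Func<T, string> text)
    {
        return items.Select(x => new SelectListItem { Value = value(x), Text = text(x) });
    }
}
Usage would then be something like eventService.getCampaigns().ToSelectList(c => c.Id.ToString(), c => c.CampaignName), assuming those property names exist.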
I've commonly seen examples like this on business objects:
public void Save()
{
    if (this.id > 0)
    {
        ThingyRepository.UpdateThingy(this);
    }
    else
    {
        int id = 0;
        ThingyRepository.AddThingy(this, out id);
        this.id = id;
    }
}
So why here, on the business object? This seems contextual or data-related, more so than business logic.
For example, a consumer of this object might go through something like this...
...Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
thingy.Save();
Or, something like this for an update...
... Get form values from a web app...
Thingy thingy = Thingy.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["name"].Value;
thingy.Save();
So why is this? Why not contain actual business logic such as calculations, business specific rules, etc., and avoid retrieval/persistence?
Using this approach, the code might look like this:
... Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
int id;
ThingyRepository.AddThingy(ref thingy, out id);
Or, something like this for an update...
... get form values from a web app ...
Thingy thingy = ThingyRepository.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["Name"].Value;
ThingyRepository.UpdateThingy(ref thingy);
In both of these examples, the consumer, who knows best what is being done to the object, calls the repository and either requests an ADD or an UPDATE. The object remains DUMB in that context, but still provides its core business logic as pertains to itself, not how it is retrieved or persisted.
In short, I am not seeing the benefit of consolidating the GET and SAVE methods within the business object itself.
Should I just stop complaining and conform, or am I missing something?
This leads into the Active Record pattern (see P of EAA p. 160).
Personally I am not a fan. Tightly coupling business objects and persistence mechanisms so that changing the persistence mechanism requires a change in the business object? Mixing data layer with domain layer? Violating the single responsibility principle? If my business object is Account then I have the instance method Account.Save but to find an account I have the static method Account.Find? Yucky.
That said, it has its uses. For small projects with objects that directly conform to the database schema and have simple domain logic and aren't concerned with ease of testing, refactoring, dependency injection, open/closed, separation of concerns, etc., it can be a fine choice.
Your domain objects should have no reference to persistence concerns.
Create a repository interface in the domain that represents a persistence service, and implement it outside the domain (you can implement it in a separate assembly).
This way your aggregate root doesn't need to reference the repository (since it's an aggregate root, it should already have everything it needs), and it will be free of any dependency or persistence concern. Hence it is easier to test, and domain focused.
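A minimal sketch of that arrangement, with Account standing in as a hypothetical aggregate root:
using System;

public class Account                      // domain object: no persistence references
{
    public Guid Id { get; private set; }
    public Account(Guid id) { Id = id; }
}

// Lives in the domain assembly: a contract only.
public interface IAccountRepository
{
    Account Find(Guid id);
    void Save(Account account);
}

// Lives outside the domain (e.g. an infrastructure assembly).
public class SqlAccountRepository : IAccountRepository
{
    public Account Find(Guid id) { /* query the database here */ throw new NotImplementedException(); }
    public void Save(Account account) { /* persist here */ }
}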
While I have no deep understanding of DDD, it makes sense to have one method that does an UPSERT (insert if the record doesn't exist, update otherwise).
The user of the class can stay dumb: they call Save whether the record is new or existing, and the right thing happens.
Having one point of action is much clearer.
EDIT: The decision of whether to do an INSERT or UPDATE is better left to the repository. The user can call Repository.Save(....), which results in a new record (if the record is not already in the DB) or an update.
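A sketch of that repository-side decision, assuming Thingy exposes its id as a property:
public class ThingyRepository
{
    public void Save(Thingy thingy)
    {
        if (thingy.Id > 0)
            Update(thingy);               // existing row
        else
            thingy.Id = Insert(thingy);   // new row: capture the generated key
    }

    private void Update(Thingy thingy) { /* UPDATE ... WHERE id = @id */ }
    private int Insert(Thingy thingy)  { /* INSERT ...; return the new id */ return 0; }
}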
If you don't like their approach, make your own. Personally, Save() instance methods on business objects smell really good to me. One less class name I need to remember. However, I don't have a problem with a factory save, and I don't see why it would be so difficult to have both, i.e.:
class myObject
{
    public void Save()
    {
        myObjectFactory.Save(this);
    }
}
...
class myObjectFactory
{
    public static void Save(myObject obj)
    {
        // Upsert myObject
    }
}
Sorry for this point being all over the place here...but I feel like a dog chasing my tail and I'm all confused at this point.
I'm trying to see the cleanest way of developing a 3-tiered solution (UL, BL, DL) where the DL uses an ORM to abstract access to a DB.
Everywhere I've seen, people use either LinqToSQL or LLBLGen Pro to generate objects which represent the DB Tables, and refer to those classes in all 3 layers.
It seems like 40 years of coding patterns have been ignored - or a paradigm shift has happened and I missed the explanation as to why it's perfectly OK to do so.
Yet it appears there is still some basis for wanting to be data-storage-mechanism agnostic - look what just happened to LinqToSQL: a lot of code was written against it, only for MS to drop it... So I would like to isolate the ORM part as best I can, I just don't know how.
So, going back to absolute basics, here are the basic parts that I wish to have assembled in a very very clean way:
The Assemblies I'm starting from:
UL.dll
BL.dll
DL.dll
The main classes:
A Message class that has a property exposing collection (called MessageAddresses) of MessageAddress objects:
class Message
{
    public MessageAddress From { get; }
    public MessageAddresses To { get; }
}
The functions per layer:
The BL exposes a Method to the UI called GetMessage (Guid id) which returns an instance of Message.
The BL in turn wraps the DL.
The DL has a ProviderFactory which wraps a Provider instance.
The DL.ProviderFactory exposes (possibly...part of my questions) two static methods called
GetMessage(Guid id), and
SaveMessage(Message message)
The ultimate goal would be to be able to swap out a provider that was written for Linq2SQL for one for LLBLGen Pro, or another provider that is not working against an ORM (eg VistaDB).
Design Goals:
I would like layer separation.
I would like each layer to only have dependency on layer below it, rather than above it.
I would like ORM generated classes to be in DL layer only.
I would like UL to share Message class with BL.
Therefore, does this mean that:
a) Message is defined in BL
b) The Db/Orm/Manual representation of the DB Table ('DbMessageRecord', or 'MessageEntity', or whatever else ORM calls it) is defined in DL.
c) BL has dependency on DL
d) Before calling DL methods, which do not reference or know about the BL, the BL has to convert its objects into DL entities (e.g. DbMessageRecord)?
UL:
Main()
{
    id = 1;
    Message m = BL.GetMessage(id);
    Console.Write(string.Format("{0} to {1} recipients...", m.From, m.To.Count));
}
BL:
static class MessageService
{
    public static Message GetMessage(Guid id)
    {
        DbMessageRecord message = DLManager.GetMessage(id);
        DbMessageAddressRecord[] messageAddresses = DLManager.GetMessageAddresses(id);
        return MapMessage(message, messageAddresses);
    }

    private static Message MapMessage(DbMessageRecord dbMessage, DbMessageAddressRecord[] dbAddresses)
    {
        Message m = new Message(dbMessage.From);
        foreach (DbMessageAddressRecord dbAddressRecord in dbAddresses)
        {
            m.To.Add(new MessageAddress(dbAddressRecord.Name, dbAddressRecord.Address));
        }
        return m;
    }
}
DL:
static class MessageManager
{
    public static DbMessageRecord GetMessage(Guid id);
    public static DbMessageAddressRecord[] GetMessageAddresses(Guid id);
}
Questions:
a) Obviously this is a lot of work sooner or later.
b) More bugs
c) Slower
d) Since the BL now has a dependency on the DL and references classes defined there (e.g. DbMessageRecord), it seems that, because these are defined by the ORM, you can't rip out one provider and replace it with another, ...which makes the whole exercise pointless...might as well use the ORM's classes all through the BL.
e) Or ...another assembly is needed in between the BL and DL and another mapping is required in order to leave BL independent of underlying DL classes.
Wish I could ask the questions clearer...but I'm really just lost at this point. Any help would be greatly appreciated.
That is a little all over the place, and it reminds me of my first forays into ORM and DDD.
I personally use core domain objects, messaging objects, message handlers, and repositories.
So my UI sends a message to a handler, which in turn hydrates my objects via repositories and executes the business logic in that domain object. I use NHibernate for my data access and FluentNHibernate for typed binding rather than loosey-goosey .hbm config.
So the messaging is all that is shared between my UI and my handlers, and all BL is on the domain.
I know I might have opened myself up for punishment with this explanation; if it's not clear I will defend it later.
Personally I am not a big fan of code-generated objects.
I have to keep adding onto this answer.
Try to think of your messaging as a command rather than as a data entity representing your DB. I'll give you an example of one of my simple classes and an infrastructure decision that worked very well for me, which I can't take credit for:
[Serializable]
public class AddMediaCategoryRequest : IRequest<AddMediaCategoryResponse>
{
    private readonly Guid _parentCategory;
    private readonly string _label;
    private readonly string _description;

    public AddMediaCategoryRequest(Guid parentCategory, string label, string description)
    {
        _parentCategory = parentCategory;
        _description = description;
        _label = label;
    }

    public string Description
    {
        get { return _description; }
    }

    public string Label
    {
        get { return _label; }
    }

    public Guid ParentCategory
    {
        get { return _parentCategory; }
    }
}

[Serializable]
public class AddMediaCategoryResponse : Response
{
    public Guid ID;
}
public interface IRequest { }  // non-generic marker interface, implied by the declaration below

public interface IRequest<T> : IRequest where T : Response, new() { }
[Serializable]
public class Response
{
    protected bool _success;
    private string _failureMessage = "This is the default error message. If a failure has been reported, it should have overwritten this message.";
    private Exception _exception;

    public Response()
    {
        _success = false;
    }

    public Response(bool success)
    {
        _success = success;
    }

    public Response(string failureMessage)
    {
        _failureMessage = failureMessage;
    }

    public Response(string failureMessage, Exception exception)
    {
        _failureMessage = failureMessage;
        _exception = exception;
    }

    public bool Success
    {
        get { return _success; }
    }

    public string FailureMessage
    {
        get { return _failureMessage; }
    }

    public Exception Exception
    {
        get { return _exception; }
    }

    public void Failed(string failureMessage)
    {
        _success = false;
        _failureMessage = failureMessage;
    }

    public void Failed(string failureMessage, Exception exception)
    {
        _success = false;
        _failureMessage = failureMessage;
        _exception = exception;
    }
}
// Assumed handler contract, implied by the class below.
public interface IRequestHandler<TRequest, TResponse>
{
    TResponse HandleRequest(TRequest request);
}

public class AddMediaCategoryRequestHandler : IRequestHandler<AddMediaCategoryRequest, AddMediaCategoryResponse>
{
    private readonly IMediaCategoryRepository _mediaCategoryRepository;

    public AddMediaCategoryRequestHandler(IMediaCategoryRepository mediaCategoryRepository)
    {
        _mediaCategoryRepository = mediaCategoryRepository;
    }

    public AddMediaCategoryResponse HandleRequest(AddMediaCategoryRequest request)
    {
        MediaCategory parentCategory = null;
        MediaCategory mediaCategory = new MediaCategory(request.Description, request.Label, false);
        Guid id = _mediaCategoryRepository.Save(mediaCategory);
        if (request.ParentCategory != Guid.Empty)
        {
            parentCategory = _mediaCategoryRepository.Get(request.ParentCategory);
            parentCategory.AddCategoryTo(mediaCategory);
        }
        AddMediaCategoryResponse response = new AddMediaCategoryResponse();
        response.ID = id;
        return response;
    }
}
I know this goes on and on, but this basic system has served me very well over the last year or so.
You can see that the handler then allows the domain object to handle the domain-specific logic.
The concept you seem to be missing is IoC / DI (i.e. Inversion of Control / Dependency Injection). Instead of using static methods, each of your layers should depend only on an interface of the layer below it, with the actual instance injected into the constructor. You can call your DL a repository, a provider, or anything else, as long as it's a clean abstraction of the underlying persistence mechanism.
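A minimal constructor-injection sketch of that layering, reusing the Message type from the question (the interface name is illustrative):
using System;

public interface IMessageRepository       // abstraction of the DL
{
    Message GetMessage(Guid id);
    void SaveMessage(Message message);
}

public class MessageService               // BL depends only on the abstraction
{
    private readonly IMessageRepository _repository;

    public MessageService(IMessageRepository repository)
    {
        _repository = repository;
    }

    public Message GetMessage(Guid id)
    {
        return _repository.GetMessage(id);
    }
}
Swapping Linq2SQL for LLBLGen Pro (or VistaDB) then means writing a new IMessageRepository implementation and changing only the composition root.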
As for the objects that represent the entities (roughly mapping to tables), I strongly advise against having two sets of objects (one database-specific and one not). It is OK for them to be referenced by all three layers as long as they are POCOs (they should not really know they're persisted) or even DTOs (pure structures with no behavior whatsoever). Making them DTOs fits your BL concept better; however, I prefer having my business logic spread across my domain objects ("the OOP style") rather than having the notion of a BL ("the Microsoft style").
Not sure about Llblgen, but NHibernate plus any IoC container like SpringFramework.NET or Windsor provides a pretty clean model that supports this.
This is probably too indirect an answer, but last year I wrestled with these sorts of questions in the Java world and found Martin Fowler's Patterns of Enterprise Application Architecture quite helpful (also see his pattern catalog). Many of the patterns deal with the same issues you're struggling with. They are all nicely abstract and helped me organize my thinking to be able to see the problem at a higher level.
I chose an approach that used the iBatis SQL mapper to encapsulate our interactions with the database. (An SQL mapper drives the programming language data model from the SQL tables, whereas an ORM like yours goes the other way around.) The SQL mapper returns lists and hierarchies of Data Transfer Objects, each of which represents a row of some query result. Parameters to queries (and inserts, updates, deletes) are passed in as DTOs too. The BL layer makes calls on the SQL Mapper (run this query, do that insert, etc.) and passes around DTOs. The DTOs go up to the presentation layer (UI) where they drive the template expansion mechanisms that generate XHTML, XML, and JSON representations of the data. So for us, the only DL dependency that flowed up to the UI was the set of DTOs, but they made the UI a lot more streamlined than passing up unpacked field values would.
If you couple the Fowler book with the specific help other posters can give, you'll do fine. This is an area with a lot of tools and prior experience, so there should be many good paths forward.
Edit: @Ciel, you're quite right, a DTO instance is just a POCO (or in my case a Java POJO). A Person DTO could have a first_name field of "Jim" and so on. Each DTO basically corresponds to a row of a database table and is just a bundle of fields, nothing more. This means it's not coupled closely with the DL and is perfectly appropriate to pass up to the UI. Fowler talks about these on p. 401 (not a bad first pattern to cut your teeth on).
Now I'm not using an ORM, which takes your data objects and creates the database. I'm using an SQL mapper, which is just a very efficient and convenient way to package and execute database queries in SQL. I designed my SQL first (I happen to know it pretty well), then I designed my DTOs, and then set up my iBatis configuration to say that, "select * from Person where personid = #personid#" should return me a Java List of Person DTO objects. I've not yet used an ORM (Hibernate, eg, in the Java world), but with one of those you'd create your data model objects first and the database is built from them.
If your data model objects have all sorts of ORM-specific add-ons, then I can see why you would think twice before exposing them up to the UI layer. But there you could create a C# interface that only defines the POCO get and set methods, and use that in all your non-DL APIs, and create an implementation class that has all the ORM-specific stuff in it:
interface Person ...
class ORMPerson : Person ...
Then if you change your ORM later, you can create alternate POCO implementations:
class NewORMPerson : Person ...
and that would only affect your DL layer code, because your BL and UI code uses Person.
@Zvolkov (below) suggests taking this approach of "coding to interfaces, not implementations" up to the next level, by recommending that you write your application in such a way that all your code uses Person objects, and then use a dependency injection framework to dynamically configure your application to create either ORMPersons or NewORMPersons depending on which ORM you want to use that day.
Try centralizing all data access using a repository pattern. As far as your entities are concerned, you can try implementing some kind of translation layer that maps your entities, so it won't break your app. This is just temporary and will allow you to refactor your code slowly.
Obviously I do not know the full scope of your code base, so consider the pain and the gain.
My opinion only, YMMV.
When I'm messing with any new technology, I figure it should meet two criteria or I'm wasting my time. (Or I don't understand it well enough.)
It should simplify things, or worst case make them no more complicated.
It should not increase coupling or reduce cohesiveness.
It sounds like you feel like you're headed in the opposite direction, which I know is not the intention for either LINQ or ORMs.
My own perception of the value of this new stuff is that it helps a developer move the boundary between the DL and the BL into slightly more abstract territory. The DL looks less like raw tables and more like objects. That's it. (I usually work pretty hard to do this anyway with a little heavier SQL and stored procedures, but I'm probably more comfortable with SQL than average.) But if LINQ and ORM aren't helping you with this yet, I'd say keep at it, but that's where the end of the tunnel is: simplification, and moving the abstraction boundary a bit.