I'm learning to program using C# and ASP.NET with a SQL Server database. I have developed a system to store and view trades taken on a financial market. Basic functionality is:
Add/Update/Delete an Order
Add/Update/Delete a Trade (a trade comprises one or more orders)
View trades
View orders
There are other entities as well, things like Brokers, Accounts, Strategies, etc that support the main Order and Trade entities.
I have designed my program to have a database utility class called DBUtil which has all the interfaces to the database. For example, to add a new trade I would call DBUtil.InsertTrade(<params>), to add an order DBUtil.InsertOrder(<params>), to update a trade DBUtil.UpdateTrade(<params>), and so on. I was wondering if it would be better to create a Trade class, an Order class, a Broker class, etc. Would that improve the elegance, quality and maintainability of the program? It seems like adding a lot more code for no benefit; at least, I can't see the benefits of such an approach right now.
As far as I can see adding a Trade class would simply create an extra layer of code, because I would have to call DBUtil.InsertTrade() from the Trade class anyway when adding a trade, for example.
Yes, it will improve the maintainability of your code, because your business objects will be strongly typed. In addition, you can create test scenarios without having to connect to a real database by using mocks of your business objects. One disadvantage is that more code has to be written, of course, but it will pay off as the application expands in the future.
Usually, if you use LINQ to SQL or Entity Framework, Visual Studio can generate these classes for you.
Edit:
See also this question Why do we need a business logic layer?
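As a rough sketch of what those strongly typed business objects could look like (all class and member names below are hypothetical, not taken from your post), the key point is that an interface in front of the DBUtil-style calls is what makes mocking for tests possible:

using System;
using System.Collections.Generic;

// Hypothetical strongly typed business objects.
public class Order
{
    public int Id { get; set; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class Trade
{
    public int Id { get; set; }
    public DateTime OpenedOn { get; set; }
    public IList<Order> Orders { get; set; }
}

// Abstraction over the DBUtil-style calls; a test can supply an in-memory fake.
public interface ITradeRepository
{
    void Insert(Trade trade);
    void Update(Trade trade);
    Trade GetById(int id);
}

The concrete implementation of ITradeRepository can still delegate to DBUtil internally; the rest of the code only ever sees Trade, Order and the interface.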
It really depends on what the application will grow into and who will maintain it.
If you are happy with it at the moment, then why change it?
I would advise you to read up on software development patterns. At the moment it sounds like you are using the Active Record pattern, and that is OK:
http://en.wikipedia.org/wiki/Active_record_pattern
What you are thinking of is moving to a Domain Driven Design solution.
http://en.wikipedia.org/wiki/Domain-driven_design
Yes, domain objects will be useful in this case.
Related
In my quest to understand the MVP-VM design pattern in preparation for a new project which will be using DevExpress, I have been unable to find an example that includes a database.
Looking through DevExpress, I am unable to locate an example of this design pattern either.
So, my question is this: can anyone with experience of this design pattern (experience of it with DevExpress would be even better) give me an example of connecting to a database, mapping data from the database (query/table) to the model, and saving data back to the database?
I appreciate this may sound like a rather elementary question, but with this pattern I do not know the best practice for taking data from the view and persisting it to the database, and vice versa. Therefore, I do not wish to go off and do it incorrectly.
Thanks
Classic architectures define the following parts:
DAL (Data Access Layer) : Deals with db operations, contains no business logic.
BLL (Business Logic Layer) : Deals with business logic and constraints, but no database operations.
PL (Presentation Layer) : Deals with presentation and user interaction.
MVP/MVVM deals with the PL only. The big question is: what exactly is the M(odel) in these patterns? One example is using Entity Framework or DevExpress XPO to map objects to the database. But what are these objects? Are they the VMs? Are they the Model(s)? Or are they just simple DTOs of the DAL? If you look at the previous definitions, the last one matches best. But that means you have to map these DTOs to your model objects (where the logic resides) and to your VMs (where the data binding happens), or first to your model and then in a second step from there to your VMs. And what about the filtering/sorting/paging capabilities of the data grid, which only work if you give it direct access to your DAL?
In my opinion, true separation of the three aspects can only be achieved with CQRS or similar architectures; at least I don't know of others. But that is only for extremely complex applications with a lot of business logic, and it comes with a lot of overhead and trade-offs.
So, for your app, you have to decide which barriers you will break to suit your needs and to avoid needless complexity for your project. A simple CRUD application could use Entity Framework/XPO and use the objects directly as VMs for data binding; you have no BLL at all. If it grows more complex, you can move a little bit in the direction of CQRS: you define views and use EF/XPO and their objects directly for data binding, but for complex operations you create a BLL (domain) model with objects (also loaded via EF/XPO) that execute these operations (requiring you to reload your VMs afterwards). And if it is even more complex, you could go all the way to CQRS, perhaps with event sourcing and perhaps modeling with DDD.
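As a minimal illustration of the simplest end of that spectrum (entity and property names invented for the example), an EF/XPO-mapped object can be bound directly for plain CRUD, and a dedicated view model is only introduced once the screen needs more than the entity offers:

// Hypothetical ORM-mapped entity; for a simple CRUD screen it can be bound directly.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Introduced only when the screen outgrows the raw entity.
public class CustomerViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }

    public static CustomerViewModel FromEntity(Customer customer)
    {
        return new CustomerViewModel { Id = customer.Id, DisplayName = customer.Name };
    }
}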
I seem to be missing something and extensive use of google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence-ignorant manner. For example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
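To make that concrete, here is a minimal sketch of the repository idea (all type names invented; the mapping is written by hand here, which is exactly the copy work AutoMapper would take over):

// Data entity as the ORM sees it.
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Persistence-ignorant domain object: no default constructor, no virtual members.
public class Customer
{
    public Customer(int id, string name)
    {
        Id = id;
        Name = name;
    }

    public int Id { get; private set; }
    public string Name { get; private set; }
}

public class CustomerRepository
{
    public Customer GetById(int id)
    {
        CustomerEntity entity = LoadEntity(id);   // ORM query, omitted here
        return new Customer(entity.Id, entity.Name);
    }

    private CustomerEntity LoadEntity(int id)
    {
        // Placeholder for the ORM call.
        return new CustomerEntity { Id = id, Name = "example" };
    }
}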
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question that matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. They also easily implement concepts like lazy loading: hydrating an object at the last minute before that object is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects, with no knowledge of the persistence mechanism behind, then you will likely end up doing an end run around most of the power of an ORM, because if the domain doesn't have virtual members or child collections that can be replaced with proxies, then you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
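For anyone unfamiliar with the virtual-member point, this is roughly what it looks like (type names invented): the ORM subclasses the entity at runtime and overrides the virtual members with proxies that hit the database on first access.

using System.Collections.Generic;

public class Invoice
{
    public int Id { get; set; }

    // virtual lets the ORM replace this with a lazy-loading proxy collection.
    public virtual ICollection<InvoiceLine> Lines { get; set; }
}

public class InvoiceLine
{
    public int Id { get; set; }
    public decimal Amount { get; set; }

    // virtual back-reference, also proxied.
    public virtual Invoice Invoice { get; set; }
}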
While ORMs do require the domain design to carry some knowledge of the persistence mechanism, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These require you to hard-code the get and set between ADO DataReaders and your objects; in the average case an ORM mapping greatly simplifies this process (see the sketch after this list). The DAOs also have to know about each other: to persist an Invoice you need the DAO for the Invoice, which needs DAOs for the InvoiceLine, Customer and GeneralLedger objects as well. And a common, abstracted transaction control mechanism has to be built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
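To make the second option concrete, here is a rough sketch of one hand-written DAO (the table, columns and connection handling are assumptions for the example). Every type in the domain needs a class like this, plus the cross-DAO coordination and transaction control described above.

using System.Data.SqlClient;

public class CustomerDao
{
    private readonly string _connectionString;

    public CustomerDao(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer GetById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Customer WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return null;

                // Every column is copied by hand; an ORM mapping removes this boilerplate.
                return new Customer
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}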
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce knowledge of the domain about its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
In short - it is not solved
All good points.
I don't have an answer (but the comment got too long when I decided to add something about stored procs), except to say that my philosophy seems to be identical to yours, and I either hand-code or code-generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing" as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises and they are, frankly, too leaky of an abstraction for me. And I'm not big on having a lot of dupe classes because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side goes, stored procs are a necessity in my view. I know that ORMs support them, but the tendency among most ORM users is not to use them, and that is a huge negative for me - because they talk about best practice and then couple to a table-based design, even if it is created from a code-first model. It seems to me they should look at an object datastore if they don't want to use a relational database in a way that utilizes its strengths. I believe in Code AND Database first - i.e. model the database and the object model simultaneously, back and forth, and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code an ORM against your tables, your app is going to have problems living for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis - which is super-painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are - USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
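To illustrate the "well-defined interface" point, here is what the calling side looks like when the application only ever executes stored procedures (the procedure and parameter names are invented; any real procedure would do):

using System.Data;
using System.Data.SqlClient;

public static class TradeWriter
{
    public static void InsertTrade(string connectionString, int accountId, decimal quantity)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.usp_InsertTrade", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@AccountId", accountId);
            command.Parameters.AddWithValue("@Quantity", quantity);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

The tables behind dbo.usp_InsertTrade can be refactored freely, and EXECUTE rights can be granted per procedure instead of managing per-column GRANTs on tables.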
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
Use EF 4.1 with the virtual keyword on our business objects and our own custom implementation of the T4 template, wrapping the ObjectContext behind an interface for repository-style data access.
Use AutoMapper to convert between BO and DTO.
Use AutoMapper to convert between ViewModel and DTO.
You would think that view models, DTOs and business objects are the same thing, and they might look the same, but they have a very clear separation of concerns.
View models are about the UI screen, DTOs are about the task you are accomplishing, and business objects are primarily concerned with the domain.
There are some compromises along the way, but if you want EF, then the benefits outweigh the things you give up.
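For what it's worth, the two mapping hops look roughly like this (types invented; this uses AutoMapper's classic static API from that era, while current versions configure a MapperConfiguration/IMapper instance instead):

using AutoMapper;

public class OrderBo        { public int Id { get; set; } public decimal Price { get; set; } }
public class OrderDto       { public int Id { get; set; } public decimal Price { get; set; } }
public class OrderViewModel { public int Id { get; set; } public string PriceDisplay { get; set; } }

public static class MappingSetup
{
    public static void Configure()
    {
        Mapper.CreateMap<OrderBo, OrderDto>();               // BO -> DTO (task boundary)
        Mapper.CreateMap<OrderDto, OrderViewModel>()         // DTO -> VM (screen boundary)
              .ForMember(vm => vm.PriceDisplay,
                         opt => opt.MapFrom(dto => dto.Price.ToString("C")));
    }
}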
Over a year later, I have now solved these problems for myself.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism I can now live with it.
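For reference, the compromise looks roughly like this (names invented): every public member is virtual so NHibernate can generate a lazy-loading proxy subclass.

public class TradeRecord
{
    public virtual int Id { get; protected set; }
    public virtual string Symbol { get; set; }
    public virtual decimal Quantity { get; set; }
}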
For the default constructor I also found a solution I can live with. I add the constructors I need as public constructors and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}
I'm working on a small application from scratch and using it to try to teach myself architecture and design concepts. It's a .NET 3.5, WPF application, and I'm using Sql Compact Edition as my data store.
I'm working on the business logic layer, and have just now begun to write the DAL. I'm just using SqlCeCommands to send over simple queries and SqlCeResultSet to get at the results. I'm starting to design my Insert and Update methods, and here's the issue - I don't know the best way to get the necessary data from the BLL into the DAL. Do I pass in a generic collection? Do I have a massive parameter list with all the data for the database? Do I simply pass in the actual business object (thus tying my DAL to the concrete stuff in the BLL)?
I thought about using interfaces - simply passing IBusinessObjectA into the DAL, which provides the simplicity I'm looking for without tying me TOO tightly to current implementations. What do you guys think?
I don't think there is a simple answer to your questions because there are many options depending on the circumstances. I have found it helpful to read the two books below to help me understand the problems you describe better.
MS .NET: Architecting Applications for the Enterprise (Esposito, Saltarello)
MS Application Architecture Guide, 2nd edition.
The second book is available online. Look here.
I think it is OK to pass the business object to the Data Access Layer. I think the BLL's job is just to work with its objects: to check whether all the rules are being followed about what can be saved, by whom, on what fields, when, etc.
Once it has done that, it should pass the object to the DAL, and I think it is the DAL's job to figure out how to convert what it got into something that can be persisted; it won't check what is being persisted or read, or by whom, it will just do it. This could be straightforward, a la LINQ, but if your logic models do not match your data model 1:1, then the DAL should do all the conversion.
Regarding tying your DAL to the stuff in the BLL, I think you should worry about the other way around: tying your BLL to your DAL. I would use an interface to represent your DAL (as in IRepository); that way you can make your BLL call any kind of persistence mechanism just by changing the type of IRepository it is using (extra points if you use IoC :P). The concrete classes that implement IRepository would be tied to the business objects, but they have to know what it is they are saving, don't they? The BLL, on the other hand, does NOT have to know what is doing the saving.
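A minimal sketch of that arrangement (names invented): the BLL depends only on the interface, and the concrete LINQ to SQL, NHibernate or in-memory test implementation is supplied from outside, e.g. by an IoC container.

public class Order
{
    public int Id { get; set; }
}

public interface IRepository<T>
{
    T GetById(int id);
    void Save(T item);
    void Delete(T item);
}

public class OrderService
{
    private readonly IRepository<Order> _orders;

    // The persistence mechanism is injected; the BLL never creates a DAL class itself.
    public OrderService(IRepository<Order> orders)
    {
        _orders = orders;
    }

    public void PlaceOrder(Order order)
    {
        // ... business rules checked here ...
        _orders.Save(order);
    }
}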
Passing the business object to the DAL is the simpler and faster method. It works in small projects, but it has some disadvantages:
1) Business objects are part of the BLL, and if you pass BLL objects to the DAL then the DAL becomes dependent on the BLL. A lower layer knows about an upper one, which contradicts the idea of layering altogether.
2) Business objects are usually too complex to save directly to the DB. In that case it is better to introduce a new intermediate "Mappers" layer.
To overcome these issues I usually make the DAL interface independent of business objects. I use "Row" classes instead: representations of one record in the database or XML. In .NET 3.5, the LINQ to SQL autogenerated classes can be used for this purpose.
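A small sketch of the "Row" idea (names invented): the DAL exposes flat record classes that mirror one database row, and a mapper on the BLL side converts between rows and the richer business objects.

// One database record, nothing more.
public class OrderRow
{
    public int Id;
    public int AccountId;
    public decimal Price;
    public int Quantity;
}

// The DAL interface knows only about rows, never about BLL types.
public interface IOrderDal
{
    OrderRow GetOrder(int id);
    void SaveOrder(OrderRow row);
}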
If I was in your position, I'd probably use LINQ to SQL to define my data access layer - it'll save you lots of work maintaining all that SqlCeFooBar stuff and give you a designer (of sorts) for maintaining your database that you would otherwise lack, using SQL CE.
So in that case, I'd probably couple the business logic layer pretty tightly to the entities exposed by the L2S layer. The justification being that the entities are the business objects, albeit devoid of any services.
I probably wouldn't let the entities get as far up the hierarchy as the UI though. At that level, it makes much more sense to use a model specifically for the view - especially given that you're using WPF.
Of course, all of this depends upon the size and complexity of your application. I'm assuming it's a fairly small scale application (single user?) given that you're using SQL CE.
We are currently revamping the architecture and design of our application. We have just completed the design of the Data Access Layer, which is generic in the sense that it uses XML and reflection to persist data.
Anyway, now we are in the phase of designing the business layer. We have read some books on enterprise architecture and design and have found that there are a few patterns that can be applied to the business layer; the Table pattern and the Domain Model are examples of such patterns. We have also come across Domain-Driven Design.
Earlier we decided to build entities against table objects, but we found that there is a difference between Entities and Value Objects when it comes to DDD. For those of you who have gone through such a design, please guide me regarding patterns, practices and samples.
Thank you in advance! Also, please feel free to ask if any of my points are unclear.
#Adil, this is not an answer to your original question, but I would advise you to revise your decision to roll your own data access layer. You note that you'd like to go to NHibernate: just do it now.
IMO, writing an ORM is a waste of time unless you have some very specific restrictions. There is a wealth of options out there, with hundreds of hours of effort already poured into them. Leverage it! LINQ2SQL, Entity Framework, NHibernate, SubSonic and LLBLGen are all good, and there are more out there.
Note too that if you roll your own you won't get to use the goodness that is LINQ without a lot of effort.
As far as layering goes, try not to go nuts: keep the number of layers in check and concentrate instead on building a worthwhile interface between them to guard against your abstractions leaking.
I've seen a number of very "patterned", beautifully layered projects that in use end up with logic everywhere and persistence abstractions leaked all over the place. Keep it simple!
CSLA.NET works pretty well as a base for the business layer.
#Adil,
I'm not a very experienced user; anyway, this is the kind of model I'm using (also with NHibernate):
GUI - with all the web forms and so on
BLL - The catalogs that are responsible for creating instances of new objects
DAL - The place where classes responsible for interaction with NHibernate are implemented. The NHibernate mapping files are here.
Model - Class library used by the BLL and DAL for the data transfer objects passed between them.
Different patterns are used. For example, the BLL and DAL have a Factory class that allows access through an interface. The catalogs are Singleton classes. All of the catalogs are accessible through a master Singleton class representing my business logic top object (for example, "Enterprise" => "Enterprise.PeopleCatalog").
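Very roughly, and with all names invented, the catalog arrangement could be sketched like this:

public sealed class Enterprise
{
    private static readonly Enterprise _instance = new Enterprise();
    public static Enterprise Instance { get { return _instance; } }

    private Enterprise()
    {
        PeopleCatalog = new PeopleCatalog();
    }

    public PeopleCatalog PeopleCatalog { get; private set; }
}

public class PeopleCatalog
{
    public Person CreatePerson(string name)
    {
        return new Person { Name = name };   // persistence through the DAL omitted
    }
}

public class Person
{
    public string Name { get; set; }
}

Usage would then read Enterprise.Instance.PeopleCatalog.CreatePerson("Ann").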
Anyway, hope it helped...
#AngryHacker, thanks for the tip, could you give an example of CSLA.NET?
I am developing an app in ASP.NET C# and came across the following scenario:
I will have to create some maintenance screens for different entities (tables)
Those entities will basically have the same behaviour within the UI: Search, GetById, Save, Create and GetAll
The entities may have different structure i.e. different properties (fields)
As I am talking about 20-plus admin screens, which design pattern could I take advantage of in order to minimize the amount of code I will have to write?
I thought of the Bridge pattern, but I am a little confused about how to implement it...
A little bit of the technology background I am using:
ASP.NET classic (n-tier)
LINQ to SQL and DAO objects
SQL Server 2005
For a set of admin screens that are just doing CRUD (Create, Read, Update, Delete) operations, with little in the way of business logic, I'd be quite tempted to more or less eschew design patterns and take a look at ASP.NET Dynamic Data. This is especially true if you want to minimise the amount of code you have to write.
This is not a design pattern... but I would strongly suggest using Dynamic Data. Jonathan Carter has some great articles about it: http://lostintangent.com/index.php?s=dynamic+data
If you're really just doing some basic stuff like this - Search, GetById, Save, Create and GetAll - I would recommend you use repositories. If done wrong, repositories can get really bad and nasty, but if you're primarily limited to this set of operations, then what you've described is basically a repository with that set of operations.
You'll want to look at ways to extract the extra logic, for example for searching, so that you're not creating duplicate logic.
Repositories are nice and testable as long as you make sure not to let them get out of control. I give you this warning only because I've seen far too many people create monster classes out of repositories.
The repositories work with your objects. They are basically the intermediary which handles the persistence of your data. This abstraction allows you to hide from the rest of your code how you're persisting your data. In this case the implementations of your repositories will be using LinqToSql as I believe that is what you said you were using.
There are plenty of resources explaining the repository pattern.
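If it helps, here is one minimal shape such a repository could take over LINQ to SQL. The method names follow the operations listed in the question; the key lookup is expressed as a predicate because LINQ to SQL has no generic find-by-key, and the entity types are whatever your DBML designer generated.

using System;
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

public class Repository<T> where T : class
{
    private readonly DataContext _context;

    public Repository(DataContext context)
    {
        _context = context;
    }

    public T GetById(Expression<Func<T, bool>> idPredicate)
    {
        return _context.GetTable<T>().SingleOrDefault(idPredicate);
    }

    public IEnumerable<T> GetAll()
    {
        return _context.GetTable<T>().ToList();
    }

    public IEnumerable<T> Search(Expression<Func<T, bool>> predicate)
    {
        return _context.GetTable<T>().Where(predicate).ToList();
    }

    public void Create(T entity)
    {
        _context.GetTable<T>().InsertOnSubmit(entity);
    }

    public void Save()
    {
        _context.SubmitChanges();   // persists pending inserts and tracked updates
    }
}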
What you want is not a design pattern. You are looking for an ORM with scaffolding. I have used and highly recommend SubSonic - http://subsonicproject.com. You can read about its scaffolding features here: http://subsonicproject.com/web-forms-controls/the-scaffold/