I have a bunch of entities that reflect the data flow in my application. The same data goes like this:
1. fluent nHibernate mapping
2. Database Access Object
3. Entity (something like "clean data on server side")
4. DTO object, which is the Entity plus maybe some additional fields, used for client-server interaction.
These all must support the same set of fields, and I also must have AutoMapper mappings set up between DAO and Entity, and between DTO and Entity. As you can see, it's a lot of manual copy-paste. Is there any software to aid in the automatic generation of many similar objects from a list of fields?
I use C#.
If there are truly a bunch of common properties between each of the layers, you can put them on a common denominator base type that will be extended by each object in the flow.
If, on the other hand, each layer will be slightly but predictably different, consider writing a LINQPad script to generate the needed code for the other three layers based on one of them. LINQPad is a good choice for this kind of work because you don't have to deal with parsing; you can simply use reflection to examine a compiled assembly containing one of the layers when generating the others.
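For illustration, a hedged sketch of that generation idea (run as a "C# Program" in LINQPad with a reference to the assembly containing your entities; the "Entity"/"Dto" naming convention is an assumption):

void Main()
{
    // Any type from the entity assembly gives us a handle to it.
    var assembly = typeof(CustomerEntity).Assembly;
    foreach (var type in assembly.GetTypes().Where(t => t.Name.EndsWith("Entity")))
    {
        var sb = new StringBuilder();
        sb.AppendLine("public class " + type.Name.Replace("Entity", "Dto"));
        sb.AppendLine("{");
        foreach (var prop in type.GetProperties())
        {
            // One auto-property per field; tweak the template per layer.
            sb.AppendLine("    public " + prop.PropertyType.Name + " " + prop.Name + " { get; set; }");
        }
        sb.AppendLine("}");
        sb.ToString().Dump(); // Dump() is LINQPad's output method
    }
}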
Ultimately, though, I would recommend trying to reduce the number of layers you're dealing with. For example, for layer (4), can you simply "add" the additional properties using containment instead of copy-paste? For example:
public class CustomerDto {
    public CustomerEntity Entity { get; set; }
    public int SomeProp { get; set; }
}
I'm not too familiar with NHibernate, but do you really need different types for (1) and (2)?
Another way to approach things is to focus on the data that individual features need to work with rather than creating a whole host of classes for each "table" in the database. That way, you can write a LINQ query to go straight from the data into the type you'll actually use.
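As a hedged sketch of that feature-focused projection (all the types and names here are hypothetical; the same query shape works against an ORM-backed IQueryable):

using System;
using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    public string Number { get; set; }
    public string CustomerName { get; set; }
    public DateTime DueDate { get; set; }
    public decimal Total { get; set; }
}

// A small view type holding only what this feature needs.
public class OverdueInvoiceView
{
    public string InvoiceNumber { get; set; }
    public string CustomerName { get; set; }
    public decimal AmountDue { get; set; }
}

IEnumerable<OverdueInvoiceView> GetOverdue(IQueryable<Invoice> invoices)
{
    // Project straight from the data source into the type you'll use.
    return from invoice in invoices
           where invoice.DueDate < DateTime.Today
           select new OverdueInvoiceView
           {
               InvoiceNumber = invoice.Number,
               CustomerName = invoice.CustomerName,
               AmountDue = invoice.Total
           };
}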
Adding layers to an application often seems nice in theory, but in practice I've found that it often fails to provide much abstraction while adding a lot of coding overhead.
I'm building a workflow for some forms that would route to users for approval. I have an abstract 'FormBase' class which stores a LinkedList of 'Approver' objects and has some helpers that modify the approver list, move the form to the next person, and so on. These helpers are all 'virtual' so that a given form can override them with custom behaviours.
From this 'base' will come instances of various forms, each of which will differ in the data it contains. It'll be the normal set of lists, strings, and numeric values you'd find in a normal form. So I might have
class MaterialForm : FormBase
class CustomerForm : FormBase
etc, and a new instance is created when a user creates and submits a form.
I'd like to persist the field details in EF6 (or 5), in a flexible way so I could create more forms deriving from FormBase without too much fiddling. Ideally I'd like all behaviour specific to that form type to live in the MaterialForm derived class. I figure I can't persist this derived class to EF (unless I'm wrong!). I'm considering:
a) JSON-ifying the field details and storing them as a string in the class that gets stored to EF. I'd probably do the same for the approver list; each time I need to modify the list, I'd pull them out, modify the list, and push them back.
b) Including a 'FormData' property in my abstract class, then including a derived version of that in each concrete implementation (e.g. MaterialFormData, CustomerFormData). However, the abstract class didn't seem to like my use of a derived type in this way. Also unclear is how the DbSets would be set up in this case, as you'd probably need a new table for each type.
I feel I'm misunderstanding something fundamental about how the 'real' classes relate to those stored in EF. What would you recommend as an architecture for this case?
When it comes to Entity Framework you have three supported models for inheritance: Table Per Type (TPT), Table Per Hierarchy (TPH), and Table Per Concrete Class (TPC).
The use of TPC is generally avoided, and choosing between the other two comes down to several factors including performance and flexibility. There is a great article outlining the differences here.
I'd also recommend reading this and this for more information and examples on how these patterns work.
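For orientation, here is a minimal Code First sketch of TPH using the classes from the question (the property names are assumptions). EF maps the whole hierarchy to a single table with a discriminator column, and a query like context.Forms.OfType<MaterialForm>() narrows to one form type:

using System.Collections.Generic;
using System.Data.Entity;

public abstract class FormBase
{
    public int Id { get; set; }
    public virtual ICollection<Approver> Approvers { get; set; }
}

public class Approver
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

public class MaterialForm : FormBase
{
    public string MaterialCode { get; set; } // form-specific data and behaviour live here
}

public class CustomerForm : FormBase
{
    public string CustomerName { get; set; }
}

public class WorkflowContext : DbContext
{
    // One DbSet for the whole hierarchy; TPH is EF's default strategy.
    public DbSet<FormBase> Forms { get; set; }
}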
However, in your example it sounds like the issue is at the 'design' stage, in terms of coming up with a suitable 'model' for your application. My advice is that if you feel the class structure you have come up with cannot accurately represent the model you are working with, then either the structure needs to change, your model is massively complex, or you are constrained by an outside system (a database schema that you have no way of changing, for example).
In this case, have you considered doing a class diagram? Even if you use the EF designer, try and visualize the model, as that's usually the best way of determining where improvements can be made, and also gives you a good start in the design (especially if you're going code first).
Try that, and post that back here if necessary. We can then help redesign the model if required. My feeling is that there's nearly always some good way of representing your requirement with a decent OO perspective, and it's best to analyse that before looking at more elaborate options! The key here is to avoid thinking of creating new class types dynamically if that can be avoided.
I am trying to create a 3-tier WinForms application. Since this is my first attempt at a 3-tier design, I got stuck and have a few questions.
The application will support attaching multiple sqlite db files.
So I created a class like this:
public class Database
{
    public string Name { get; set; }
    public string FilePath { get; set; }
    public bool IsAttached { get; private set; }
}
Now I want to have collection of those objects.
Should I create another class like the DatabaseList below, or is it enough to just create a List<Database>?
public class DatabaseList : List<Database>
{
    ...
}
vs
List<Database> myDatabases;
What should be created in Form1.cs?
For example, I assume the collection above should be created in the BusinessLayer and not in Form1.cs, and that only the BusinessLayer class is created in Form1.cs. Is this correct?
Where to put Attach Method?
The method would be like this:
public void AttachDB(Database db)
{
    MySqliteHelper.Attach(db.Name, db.FilePath);
    this.Add(db);
}
Do I put the method in DatabaseList class (if this is the way to create collection) or should it be in BusinessLayer?
How to make the Attach method support additional relational databases like MS SQL Compact Edition, which also resides in a single file?
I was thinking of creating another general database helper class with the same methods as MySqliteHelper, and the AttachDB method would call that instead. Something like
MyDBHelper.Attach(db.Name, db.FilePath);
Or is this where Dependency Injection containers like Ninject can be helpful? I have never used one before, and all I recall from Ninject is a samurai having different weapons, so it seems kind of similar to my problem of having different specific database classes.
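To illustrate the idea in the question, a hedged sketch of that abstraction (IDbHelper and SqlCeHelper are hypothetical names; MySqliteHelper is from the code above). A DI container like Ninject would then bind IDbHelper to one of the implementations:

public interface IDbHelper
{
    void Attach(string name, string filePath);
}

public class SqliteHelper : IDbHelper
{
    public void Attach(string name, string filePath)
    {
        MySqliteHelper.Attach(name, filePath); // delegate to the existing helper
    }
}

public class SqlCeHelper : IDbHelper
{
    public void Attach(string name, string filePath)
    {
        // hypothetical SQL Server Compact equivalent would go here
    }
}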
I'm going to tackle this question in parts because it covers a lot of ground.
What qualifies as a 3-tier architecture?
A 3-tier (or n-tier, tiered) architecture is basically any design where the interface doesn't directly communicate with the database, no matter how thin the actual tiers are. You could create a single class with functions to get and save data, and it would still qualify as a 3-tier architecture. That being said, what I'm going to explain below is probably the most common implementation of a 3-tier architecture.
Layer vs. Tier: What's the difference?
To understand the 3-tier architecture, it's important to first make a distinction between a layer and a tier. An application can have many physical layers and still contain only three logical tiers. If a picture really is worth a million words, the diagram below should clear that up for you.
In the diagram above, the Business/Middle Tier is comprised of business logic, business objects, and data access objects. The purpose of this tier is to serve as the middle man between the user interface and the database.
The Data Access Layer (DAL)
The data access layer is comprised of a data access component (see below) and one or more data access objects. Depending on the need, the data access objects are usually set up in one of two ways:
One Data Access Object for each Business Object
One Data Access Object shared by many Business Objects
It sounds like you're going to be dealing with several databases, so it would probably make sense to go with the one-to-one option. Doing it this way you'll have the flexibility to specify which database/connection corresponds to which business object.
Data Access Component
Your data access component should be a very generic class containing only the bare-bones methods needed to connect and interact with a database. In the diagram above, that component is represented by the dbConnection class.
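As a rough sketch of such a component (assuming ADO.NET provider factories, so the same class can talk to SQLite, SQL Server Compact, and so on via a provider name and connection string):

using System.Data;
using System.Data.Common;

public class DataAccessComponent
{
    private readonly DbProviderFactory _factory;
    private readonly string _connectionString;

    public DataAccessComponent(string providerName, string connectionString)
    {
        // e.g. "System.Data.SQLite" or "System.Data.SqlServerCe.4.0"
        _factory = DbProviderFactories.GetFactory(providerName);
        _connectionString = connectionString;
    }

    public DataTable ExecuteQuery(string sql)
    {
        using (var connection = _factory.CreateConnection())
        using (var command = connection.CreateCommand())
        {
            connection.ConnectionString = _connectionString;
            command.CommandText = sql;
            connection.Open();
            var table = new DataTable();
            using (var reader = command.ExecuteReader())
            {
                table.Load(reader); // materialize the results
            }
            return table;
        }
    }
}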
Questions & Answers
What should be created in Form1.cs?
The only things the front end deals with are the business objects and the business logic. Sometimes it's not that black and white, but that's the idea.
Where to put Attach Method?
Instead of an Attach method, pass a connection string into your data access component. A connection string can be used to attach and/or connect to pretty much any database.
How to make the Attach method support additional relational databases like MS SQL Compact Edition, which also resides in a single file?
See above.
Should I create another class like DatabaseList below, or is it enough to just create a List<Database>?
Honestly, this is up to you and doesn't affect the validity of the 3-tier architecture. You know the specific requirements that you're trying to meet, so do it if it makes sense. Give consideration to how your Data Access Object(s) will interact with this class though, because you will need to expose the methods for executing queries and non-queries on whatever database is selected from the list.
What you lack is thinking in terms of objects and their responsibility.
What object is responsible for creating instances of your database descriptions? Should it be Form1?
OOP tells you that if you have such doubts, you can follow the Pure Fabrication principle and just create another class to be responsible for this. It is as simple as that.
So you can create a class, let's call it DatabaseManager, and put your list of databases there plus the Attach method. You probably also want this manager to be an ambient class (the same instance shared among other classes), so you could build a Singleton out of it (but this is not necessary).
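A minimal sketch of that manager, reusing the Database class and MySqliteHelper from the question (the singleton wiring is left out, since it is optional):

using System.Collections.Generic;

public class DatabaseManager
{
    private readonly List<Database> _databases = new List<Database>();

    // Expose the attached databases read-only to callers.
    public IEnumerable<Database> Databases
    {
        get { return _databases; }
    }

    public void AttachDB(Database db)
    {
        MySqliteHelper.Attach(db.Name, db.FilePath);
        _databases.Add(db);
    }
}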
DI containers could probably help you to organize services and manage their lifetime but I recommend you start with a good book on this before you misuse the idea. Mark Seemann's "Dependency Injection in .NET" is fine.
You need to think in terms of modularity and abstraction. You have multiple entities to be passed across the layers.
Here are some examples:
1. The presentation layer will create an object of the business layer or business facade, but it will expect the logical entity from the business layer.
2. The business layer will create the object of DataAccess and will expect the logical entity from DataAccess to perform its business operations.
3. DataAccess will do whatever it needs to do to get the information from the database; whether it connects to Oracle, SQL Server, SQLite, or the file system, it will convert, or rather initialize, the logical entity (an entity being a class consisting only of properties).
So every layer has its own responsibility and performs only the operations it is responsible for.
So I think your database-related operations will go in DataAccess.
I seem to be missing something, and extensive use of Google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence ignorant manner, for example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
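For example, a minimal sketch of that repository translation step (all names are hypothetical; ISession is NHibernate's, and the CustomerEntity-to-Customer map is assumed to be configured in AutoMapper):

using AutoMapper;
using NHibernate;

public class CustomerRepository
{
    private readonly ISession _session;

    public CustomerRepository(ISession session)
    {
        _session = session;
    }

    public Customer GetById(int id)
    {
        // The ORM-mapped data entity never leaves the data layer;
        // AutoMapper builds the persistence-ignorant domain object.
        CustomerEntity entity = _session.Get<CustomerEntity>(id);
        return Mapper.Map<Customer>(entity);
    }
}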
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question that matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. ORMs also easily implement concepts like lazy loading: hydrating an object at the last minute, right before that object is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects, with no knowledge of the persistence mechanism behind, then you will likely end up doing an end run around most of the power of an ORM, because if the domain doesn't have virtual members or child collections that can be replaced with proxies, then you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
While ORMs do require some knowledge in the domain of the persistence mechanism in terms of domain design, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These types require you to hard-code the get and set between ADO DataReaders and your objects; in the average case a mapping greatly simplifies the process. The DAOs also have to know about each other; to persist an Invoice you need the DAO for the Invoice, which needs a DAO for the InvoiceLine, Customer and GeneralLedger objects as well. And, there must be a common, abstracted transaction control mechanism built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce knowledge of the domain about its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
In short - it is not solved
All good points.
I don't have an answer (but this comment got too long when I decided to add something about stored procs), except to say that my philosophy seems to be identical to yours, and I code or code-generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing" as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises and they are, frankly, too leaky of an abstraction for me. And I'm not big on having a lot of dupe classes because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side, stored procs are a necessity in my view. I know that ORMs support them, but the tendency is not to do so by most ORM users and that is a huge negative for me - because they talk about a best practice and then they couple to a table-based design even if it is created from a code-first model. Seems to me they should look at an object datastore if they don't want to use a relational database in a way which utilizes its strengths. I believe in Code AND Database first - i.e. model the database and the object model simultaneously back and forth and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code ORM against your tables, your app is going to have problems being able to live for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis, which is super-painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are: USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
We used EF 4.1 with virtual keywords for our business objects and a custom implementation of the T4 template, wrapping the ObjectContext behind an interface for repository-style data access.
We used AutoMapper to convert between BO and DTO,
and AutoMapper again to convert between ViewModel and DTO.
You would think that ViewModels, DTOs, and business objects are the same thing, and they might look the same, but they have a very clear separation in terms of concerns.
View models are about the UI screen, DTOs are about the task you are accomplishing, and business objects are primarily concerned with the domain.
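For context, a hedged sketch of those two mapping steps with AutoMapper's classic static API (the class names are hypothetical):

// Configure once at startup.
Mapper.CreateMap<CustomerBo, CustomerDto>();
Mapper.CreateMap<CustomerDto, CustomerViewModel>();

// Then, at each boundary:
CustomerDto dto = Mapper.Map<CustomerDto>(businessObject);
CustomerViewModel viewModel = Mapper.Map<CustomerViewModel>(dto);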
There are some compromises along the way, but if you want EF, then the benefits outweigh the things you give up.
Over a year later, I have solved these problems for me now.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
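For illustration, a hedged sketch of an IUserType that persists a bool as a "Y"/"N" string (a common example; the signatures follow the classic NHibernate 2.x/3.x interface):

using System;
using System.Data;
using NHibernate;
using NHibernate.SqlTypes;
using NHibernate.UserTypes;

public class YesNoType : IUserType
{
    public SqlType[] SqlTypes
    {
        get { return new[] { SqlTypeFactory.GetString(1) }; }
    }

    public Type ReturnedType { get { return typeof(bool); } }
    public bool IsMutable { get { return false; } }

    public new bool Equals(object x, object y) { return object.Equals(x, y); }
    public int GetHashCode(object x) { return x == null ? 0 : x.GetHashCode(); }

    public object NullSafeGet(IDataReader rs, string[] names, object owner)
    {
        var dbValue = NHibernateUtil.String.NullSafeGet(rs, names[0]) as string;
        return dbValue == "Y";
    }

    public void NullSafeSet(IDbCommand cmd, object value, int index)
    {
        NHibernateUtil.String.NullSafeSet(cmd, true.Equals(value) ? "Y" : "N", index);
    }

    // The type is immutable, so copy/cache operations are pass-through.
    public object DeepCopy(object value) { return value; }
    public object Replace(object original, object target, object owner) { return original; }
    public object Assemble(object cached, object owner) { return cached; }
    public object Disassemble(object value) { return value; }
}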
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism, I can now live with it.
For the default constructor, I also found a solution I can live with: I add the constructors I need as public constructors, and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}
I'm building a Web application I would like to scale to many users. Also, I need to expose functionality to trusted third parties via Web Services.
I'm using LLBLGen to generate the data access layer (using SQL Server 2008). The goal is to build a business logic layer that shields the Web App from the details of DAL and, of course, to provide an extra level of validation beyond the DAL. Also, as far as I can tell right now, the Web Service will essentially be a thin wrapper over the BLL.
The DAL, of course, has its own set of entity objects, for instance CustomerEntity, ProductEntity, and so forth. However, I don't want the presentation layer to have access to these objects directly, as they contain DAL-specific methods, the assembly is specific to the DAL, and so on. So the idea is to create Data Transfer Objects (DTOs): essentially, plain old C#/.NET objects that have all the fields of, say, a CustomerEntity that correspond to the database table Customer, but none of the other stuff, except maybe some IsChanged/IsDirty properties. So there would be CustomerDTO, ProductDTO, etc. I assume these would inherit from a base DTO class. I believe I can generate these with some template for LLBLGen, but I'm not sure about it yet.
So, the idea is that the BLL will expose its functionality by accepting and returning these DTO objects. I think the Web Service will handle converting these objects to XML for the third parties using it, many may not be using .NET (also, some things will be script callable from AJAX calls on the Web App, using JSON).
I'm not sure the best way to design this and exactly how to go forward. Here are some issues:
1) How should this be exposed to the clients (the presentation tier and the Web Service code)?
I was thinking that there would be one public class that has these methods; every call would be an atomic operation:
InsertDTO, UpdateDTO, DeleteDTO, GetProducts, GetProductByCustomer, and so forth ...
Then the clients would just call these methods and pass in the appropriate arguments, typically a DTO.
Is this a good, workable approach?
2) What to return from these methods? Obviously, the Get/Fetch sort of methods will return DTO. But what about Inserts? Part of the signature could be:
InsertDTO(DTO dto)
However, when inserting, what should be returned? I want to be notified of errors. Also, I use autoincrementing primary keys for some tables (though a few tables have natural keys, particularly many-to-many ones).
One option I thought about was a Result class:
class Result
{
    public Exception Error { get; set; }
    public DTO AffectedObject { get; set; }
}
So, on an insert, the DTO would get its ID property (like CustomerDTO.CustomerID) set and then be put in this result object. The client will know there was an error if Result.Error != null, and it would then know the ID from the Result.AffectedObject property.
Is this a good approach? One problem is that it seems to pass a lot of redundant data back and forth (when it's just the ID). I don't think adding an "int NewID" property would be clean, because some inserts will not have an autoincrementing key like that. Another issue is that I don't think Web Services would handle this well: I believe they would just return the base DTO for AffectedObject in the Result class, rather than the derived DTO. I suppose I could solve this by having a LOT of different kinds of Result objects (maybe derived from a base Result, inheriting the Error property), but that doesn't seem very clean.
All right, I hope this isn't too wordy but I want to be clear.
1: That is a pretty standard approach, that lends itself well to a "repository" implementation for the best unit-testable approach.
2: Exceptions (which should be declared as "faults" on the WCF boundary, btw) will get raised automatically; you don't need to handle that directly. For data, there are three common approaches:
use ref on the contract (not very pretty)
return the (updated) object - i.e. public DTO SomeOperation(DTO item);
return just the updated identity information (primary-key / timestamp / etc)
One thing about all of these is that it doesn't necessitate a different type per operation (contrast your Result class, which would need to be duplicated per DTO).
Q1: You can think of your WCF Data Contract composite types as DTOs to solve this problem. This way your UI layer only has access to the DataContract's DataMember properties. Your atomic operations would be the methods exposed by your WCF Interface.
Q2: Configure your Response data contracts to return a new custom type with your primary keys etc... WCF can also be configured to bubble exceptions back to the UI.
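To make that concrete, a hedged sketch of the WCF shape being described (all names here are hypothetical):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public int CustomerId { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // Returning the updated DTO (option 2 above) avoids needing a
    // separate Result type per operation; the server fills in the ID.
    [OperationContract]
    CustomerDto InsertCustomer(CustomerDto item);
}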
I'm relatively new to NHibernate, but have been using it for the last few programs and I'm in love. I've come to a situation where I need to aggregate data from 4-5 databases into a single database. Specifically it is serial number data. Each database will have its own mapping file, but ultimately the entities all share the same basic structure (Serial class).
I understand NHibernate wants a mapping per class, and so my initial thought was to have a base Serial Class and then inherit from it for each different database and create a unique mapping file (the inherited class would have zero content). This should work great for grabbing all the data and populating the objects. What I would then like to do is save these inherited classes (not sure what the proper term is) to the base class table using the base class mapping.
The problem is I have no idea how to force NHibernate to use a specific mapping file for an object. Casting the inherited class to the base class does nothing when using session.Save() (it complains of no mapping).
Is there a way to explicitly specify which mapping to use? Or is there just some OOP principle I am missing to more specifically cast an inherited class to the base class? Or is this idea just a bad one?
All of the inheritance stuff I could find with regards to NHibernate (Chapter 8) doesn't seem to be totally applicable to this function, but I could be wrong (the table-per-concrete-class looks maybe useful, but I can't wrap my head around it totally with regards to how NHibernate figures out what to do).
I don't know if this'll help, but I wouldn't be trying to do that, basically.
Essentially, I think you're possibly suffering from "golden hammer" syndrome: when you have a REALLY REALLY nice hammer (i.e. NHibernate (and I share your opinion on it; it's a MAGNIFICENT tool)), everything looks like a nail.
I'd generally try to simply have a "manual conversion" class, i.e. one which has constructors that take the NHibernate classes for your individual Serial classes and simply copy the data over to its own specific format; then NHibernate can serialize it to the (single) database using its own mapping.
Effectively, the reason I think this is a better solution is that what you're trying to do is asymmetric serialization in your class: read from one database in your derived class, write to another database in your base class. Nothing too horrible about that, really, except that it's fundamentally a unidirectional process; if you really want conversion from one database to the other, simply do the conversion and be done with it.
This might help:
Using NHibernate with Multiple Databases
From the article:
Introduction

...described using NHibernate with ASP.NET; it offered guidelines for communicating with a single database. But it is sometimes necessary to communicate with multiple databases concurrently. For NHibernate to do this, a session factory needs to exist for each database that you will be communicating with. But, as is often the case with multiple databases, some of the databases are rarely used. So it may be a good idea to not create session factories until they're actually needed. This article picks up where the previous NHibernate with ASP.NET article left off and describes the implementation details of this simple-sounding approach. Although the previous article focused on ASP.NET, the below suggestion is supported in both ASP.NET and .NET.

...

The first thing to do when working with multiple databases is to configure proper communications. Create a separate config file for each database, put them all into a central config folder, and then reference them from the web/app.config.

...
I'm not 100% sure this will do what I need, but I found this googling today about NHibernate and anonymous types:
http://infozerk.com/averyblog/refactoring-using-object-constructors-in-hql-with-nhibernate/
The interesting part (to me; I'm new to this) is the 'new' keyword in the HQL select clause. So what I could do is select the SerialX from DatabaseX using mappingX, and pass it to a constructor for SerialY (the general/base Serial). So now I have SerialY generated from mappingX/databaseX, and (hopefully) I could then call session.Save() and NHibernate will use mappingY/databaseY.
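A hedged sketch of that idea (SerialX/SerialY and the sessions are from this discussion; SerialY must expose a matching public constructor and have its own mapping in the target session factory):

// Build SerialY instances directly in the HQL select clause...
var serials = sessionX
    .CreateQuery("select new SerialY(s.SerialNumber, s.ItemNumber, s.OrderNumber) from SerialX s")
    .List<SerialY>();

// ...then persist them through the other session/mapping.
foreach (var serial in serials)
{
    sessionY.Save(serial);
}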
The reason I like this is simply not having two classes with the same data persisted (I think!). There is really no functional difference between this and returning a list of SerialX, iterating through it, generating SerialY objects, and adding them to a new list (the first and best answer given).
This doesn't have the more general benefit of making useful cases for NHibernate mappings with inheritance, but I think it will do the limited stuff I want.
While it's true you will need a mapping file/class for each of those tables, there's nothing that stops you from making all of those classes implement a common interface.
You can then aggregate them all together into a single collection in your application layer (i.e. a List of that common interface type, where each of those classes implements the interface).
You will probably have to write some plumbing to keep track of which session to store each object under (since you're targeting multiple databases) if you wish to do updates. But the process for doing that will vary based on how you have things set up.
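A hedged sketch of that aggregation (ISerial, the property set, and the sessions are hypothetical names):

public interface ISerial
{
    string SerialNumber { get; set; }
}

public class SerialA : ISerial { public string SerialNumber { get; set; } }
public class SerialB : ISerial { public string SerialNumber { get; set; } }

// In the application layer, pull from each database's session and merge.
var all = new List<ISerial>();
all.AddRange(sessionA.CreateCriteria<SerialA>().List<SerialA>());
all.AddRange(sessionB.CreateCriteria<SerialB>().List<SerialB>());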
I wrote a really long post, with code and everything, to respond to Dan. As it turned out, I think I missed the obvious.
public class Serial
{
    public string SerialNumber { get; set; }
    public string ItemNumber { get; set; }
    public string OrderNumber { get; set; }
}
...
Serial serial = sessionX.Get<Serial>(someID);
sessionY.Save(serial);
NHibernate should use mappingX for the Get and mappingY for the Save, since the sessions aren't being shared and the mapping is tied to the session. So I can have two mappings pointing to the same class, because in any particular session there is only a single mapping-to-class relationship.
At least I think that is the case (can't test atm).
Unfortunately, this specific case is really boring and not useful. In a different program in the same domain, I derive from the base class for a specific portion of business logic. I didn't want to create a mapping file, since it was just to make a small chunk of code easier. Anyway, I couldn't make it work in NHibernate for the same reasons as in my first question, and I ended up using the method McWafflestix describes to get around it (since it was minor).
That said I have found this via google:
http://jira.nhibernate.org/browse/NH-662
That is exactly the same situation, and it appears to be (possibly) addressed in NH 2.1+? I haven't followed up on it yet.
(Note: Dan, in my case I am getting from several DBs and only writing to one. I'm still interested in your suggestion about the interface, because I think that is a good idea for other cases. Would you define the mapping against the interface? If I try to save a class that implements the interface but doesn't have a mapping definition, would NHibernate use the interface mapping? Or would I have to declare empty subclasses in the mapping for each class that implements the interface mapping?)