I'm relatively new to NHibernate, but I've been using it for my last few programs and I'm in love. I've come to a situation where I need to aggregate data from 4-5 databases into a single database. Specifically, it is serial number data. Each database will have its own mapping file, but ultimately the entities all share the same basic structure (a Serial class).
I understand NHibernate wants a mapping per class, so my initial thought was to have a base Serial class, inherit from it for each different database, and create a unique mapping file for each (the inherited classes would have zero content). This should work great for grabbing all the data and populating the objects. What I would then like to do is save these inherited classes (not sure what the proper term is) to the base class's table using the base class mapping.
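Roughly what I have in mind (class and mapping file names here are just placeholders):

// Base class holding the shared fields; each empty subclass exists only so it
// can be pointed at a different database via its own mapping file.
public class Serial
{
    public string SerialNumber { get; set; }
    public string ItemNumber { get; set; }
    public string OrderNumber { get; set; }
}

public class SerialDatabaseA : Serial { }   // mapped by SerialDatabaseA.hbm.xml
public class SerialDatabaseB : Serial { }   // mapped by SerialDatabaseB.hbm.xml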
The problem is I have no idea how to force NHibernate to use a specific mapping file for an object. Casting the inherited class to the base class does nothing when using session.Save() (it complains that there is no mapping).
Is there a way to explicitly specify which mapping to use? Or is there some OOP principle I'm missing for more specifically casting an inherited class to its base class? Or is this idea just a bad one?
All of the inheritance material I could find with regard to NHibernate (Chapter 8) doesn't seem to be totally applicable to this situation, but I could be wrong (table-per-concrete-class looks like it might be useful, but I can't totally wrap my head around how NHibernate figures out what to do with it).
I don't know if this'll help, but I wouldn't be trying to do that, basically.
Essentially, I think you're possibly suffering from "golden hammer" syndrome: when you have a REALLY REALLY nice hammer (i.e. Hibernate (and I share your opinion on it; it's a MAGNIFICENT tool)), everything looks like a nail.
I'd generally try to simply have a "manual conversion" class, i.e. one which has constructors that take the Hibernate classes for your individual Serial classes and simply copy the data over to its own specific format; then Hibernate can simply serialize it to the (single) database using its own mapping.
The reason I think this is a better solution is that what you're effectively trying to do is have asymmetric serialization in your class; i.e. read from one database in your derived class, write to another database in your base class. Nothing too horrible about that, really, except that it's fundamentally a unidirectional process; if you really want conversion from one database to the other, simply do the conversion and be done with it.
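Something along these lines (all names here are placeholders, and I'm only showing one source class):

// One plainly-mapped class per source database...
public class SerialFromDbX
{
    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }
    public virtual string OrderNumber { get; set; }
}

// ...and the aggregate class with a conversion constructor that just copies
// the fields over; Hibernate then persists it with its own (single) mapping.
public class Serial
{
    public Serial() { }

    public Serial(SerialFromDbX source)
    {
        SerialNumber = source.SerialNumber;
        ItemNumber = source.ItemNumber;
        OrderNumber = source.OrderNumber;
    }

    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }
    public virtual string OrderNumber { get; set; }
}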
This might help:
Using NHibernate with Multiple Databases
From the article:
Introduction

... described using NHibernate with ASP.NET; it offered guidelines for communicating with a single database. But it is sometimes necessary to communicate with multiple databases concurrently. For NHibernate to do this, a session factory needs to exist for each database that you will be communicating with. But, as is often the case with multiple databases, some of the databases are rarely used. So it may be a good idea to not create session factories until they're actually needed. This article picks up where the previous NHibernate with ASP.NET article left off and describes the implementation details of this simple-sounding approach. Although the previous article focused on ASP.NET, the below suggestion is supported in both ASP.NET and .NET.

...

The first thing to do when working with multiple databases is to configure proper communications. Create a separate config file for each database, put them all into a central config folder, and then reference them from the web/app.config.

...
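A minimal sketch of that idea, assuming one NHibernate config file per database (the file names are just placeholders):

using NHibernate;
using NHibernate.Cfg;

public static class SessionFactories
{
    // Factories are built lazily, so rarely-used databases don't pay the
    // startup cost of building a session factory until they're needed.
    private static ISessionFactory _factoryX;
    private static ISessionFactory _factoryY;

    public static ISessionFactory FactoryX
    {
        get { return _factoryX ?? (_factoryX = Build("DatabaseX.cfg.xml")); }
    }

    public static ISessionFactory FactoryY
    {
        get { return _factoryY ?? (_factoryY = Build("DatabaseY.cfg.xml")); }
    }

    private static ISessionFactory Build(string configFile)
    {
        return new Configuration().Configure(configFile).BuildSessionFactory();
    }
}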
I'm not 100% sure this will do what I need, but I found this googling today about NHibernate and anonymous types:
http://infozerk.com/averyblog/refactoring-using-object-constructors-in-hql-with-nhibernate/
The interesting part (to me, I'm new to this) is the 'new' keyword in the HQL select clause. So what I could do is select the SerialX from DatabaseX using mappingX, and pass it to a constructor for SerialY (the general/base Serial). So now I have SerialY generated from mappingX/databaseX, and (hopefully) I could then call session.Save() and NHibernate would use mappingY/databaseY.
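Something like this, if I understand it right (SerialX/SerialY are the per-database and base classes from my question; SerialY needs a matching constructor, and I believe the target class has to be importable by HQL, e.g. via an import in a mapping file):

// Read from databaseX with mappingX, construct SerialY instances in the query
// itself, then save them through the session bound to databaseY/mappingY.
var converted = sessionX
    .CreateQuery("select new SerialY(s.SerialNumber, s.ItemNumber, s.OrderNumber) from SerialX s")
    .List<SerialY>();

foreach (var serial in converted)
    sessionY.Save(serial);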
The reason I like this is simply not having two classes with the same data persisted (I think!). There is really no functional difference between this and returning a list of SerialX, iterating through it and generating SerialY and adding it to a new list (the first and best answer given).
This doesn't have the more general benefit of making useful cases for NHibernate mappings with inheritance, but I think it will do the limited stuff I want.
While it's true you will need a mapping file/class for each of those tables, there's nothing that stops you from making all of those classes implement a common interface.
You can then aggregate them all together into a single collection in your application layer (i.e. a List<ISerial>, where ISerial is the common interface each of those classes implements).
You will probably have to write some plumbing to keep track of which session to store each item under (since you're targeting multiple databases) if you wish to do updates. But the process for doing that will vary based on how you have things set up.
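Roughly (interface and class names are just for illustration):

using System.Collections.Generic;

// The common interface all of the per-database classes implement.
public interface ISerial
{
    string SerialNumber { get; set; }
    string ItemNumber { get; set; }
    string OrderNumber { get; set; }
}

// One of these per database, each with its own mapping file.
public class SerialFromDbA : ISerial
{
    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }
    public virtual string OrderNumber { get; set; }
}

// In the application layer you can then work with one collection:
//   var all = new List<ISerial>();
//   all.AddRange(sessionA.CreateCriteria<SerialFromDbA>().List<SerialFromDbA>());
//   ...remembering which session each item came from if you need to update it.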
I wrote a really long post with code and everything to respond to Dan. In the end, I think I missed the obvious.
public class Serial
{
    public string SerialNumber { get; set; }
    public string ItemNumber { get; set; }
    public string OrderNumber { get; set; }
}
...
Serial serial = sessionX.Get<Serial>(someID);
sessionY.Save(serial);
NHibernate should use mappingX for the Get and mappingY for the Save, since the sessions aren't shared and each mapping is tied to the session factory that created its session. So I can have two mappings pointing to the same class, because within any particular session there is only a single mapping-to-class relationship.
At least I think that is the case (can't test atm).
Unfortunately this specific case is really boring and not useful. In a different program in the same domain, I derive from the base class for a specific portion of business logic. I didn't want to create a mapping file, since it was just to make a small chunk of code easier. Anyways, I couldn't make it work in NHibernate for the same reasons as in my first question, and I did use the method McWafflestix describes to get around it (since it was minor).
That said I have found this via google:
http://jira.nhibernate.org/browse/NH-662
That is exactly the same situation, and it appears (possibly) addressed in NH 2.1+? I haven't followed up on it yet.
(Note: Dan, in my case I am getting from several DBs and only writing to one. I'm still interested in your suggestion about the interface because I think that is a good idea for other cases. Would you define the mapping against the interface? If I try to save a class that implements the interface but doesn't have a mapping definition of its own, would NHibernate use the interface mapping? Or would I have to declare empty subclasses in the mapping for each class that implements the interface?)
Related
Our team has just started using SqlMetal and I have been playing around with it for two days. While doing this, I have noticed a couple of things.
When we run a command like the following:
sqlmetal /code:ps.cs /server:devapp042dbs /database:promotionalsponsorship /namespace:DAL
It creates a "LINQ to SQL SqlMetal" object model. Now, this is not our regular class. It has a lot of autogenerated code, and it almost smells like LINQ/EF with a lot of autogenerated properties and methods.
I have used micro-ORMs like Dapper and OrmLite from ServiceStack, and the wonderful thing about those is that they work with the simple object model that we have created rather than auto-generating their own object model.
My question is: can we use these SqlMetal mapping classes as the models of our application, or do we have to create simple wrapper classes around them from which we can extract all the information we need?
To clarify my point, the following are samples of what I call a SqlMetal class and a simple model class.
Although this question will possibly be closed, as the answer is subjective, the short answer is yes, it is perfectly valid to use such an autogenerated set of classes as your model. There are plenty of successful apps built this way.
Since these classes are partial, you can even extend your domain model by adding custom properties/methods/events.
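For example (the class and property names below are made up; they would be whatever SqlMetal generated for you):

using System;

namespace DAL
{
    // Stand-in for the SqlMetal-generated half of the class (normally in ps.cs).
    public partial class Promotion
    {
        public DateTime StartDate { get; set; }
        public DateTime EndDate { get; set; }
    }

    // Your own file: the compiler merges the two partial declarations, so the
    // generated code stays untouched while you add domain behaviour.
    public partial class Promotion
    {
        public bool IsActive(DateTime asOf)
        {
            return StartDate <= asOf && asOf <= EndDate;
        }
    }
}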
If you are concerned that the autogenerated code is not clean enough, consider the code first approach of Entity Framework, NHibernate, or any other ORM that supports this scenario. That way you start from a clean POCO model and just define its mapping to a relational structure.
Let's say I have a project where I use Entity Framework, but I want to use my own classes instead of the EF classes.
Reasons for using my own classes:
Easy to add properties in code
Easy to derive and inherit
Less binding to the database
Now, my database has table names like User and Conference.
However, in my domain project, I also call my files User.cs and Conference.cs.
That means I suddenly have two objects with the same naming, which is usually very annoying to work with, because you have to use namespaces all the time to know the difference.
My question is how to solve this problem?
My ideas:
Prefix all database tables with 'db'. I usually do this, but in this case, I cannot change the database
Prefix or postfix all C# classes with "Poco" or something similar
I just don't like any of my ideas.
How do you usually do this?
It's difficult to tell without more background, but it sounds like you are using the Entity Framework designer to generate EF classes from the existing database (the "Database First" workflow). Have you considered using the Code First / Code Only workflow? When doing code first you can have POCO classes that have no knowledge of the database, EF, or data annotations. The mapping between the database and your POCOs can be done externally in the DbContext or in EntityTypeConfiguration classes.
You should be able to achieve your goal of decoupling from EF with just one set of objects via code first.
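A minimal sketch of what that looks like with EF 5/6 code first (names are illustrative):

using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Plain POCO: no attributes, no base class, no knowledge of EF.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// All mapping to the existing table lives here, outside the domain class.
public class UserMap : EntityTypeConfiguration<User>
{
    public UserMap()
    {
        ToTable("User");                        // existing table name
        HasKey(u => u.Id);
        Property(u => u.Name).HasColumnName("UserName");
    }
}

public class MyContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new UserMap());
    }
}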
To extend the above answer, the database table name User (or Users as many DB designers prefer) is the identifier for the persistence store for the object User that's defined in your code file User.cs. None of these identifiers share the same space, so there should be no confusion. Indeed, they are named similarly to create a loose coupling across spaces (data store, code, development environment) so you can maintain sanity and others can read your code.
I have a bunch of entities that reflect the data flow in my application. The same data goes through layers like this:
1. fluent nHibernate mapping
2. Database Access Object
3. Entity (something like "clean data on server side")
4. DTO object, that is Entity plus maybe some additional fields, used for client-server interaction.
These must all support the same set of fields, and I also must have AutoMapper mappings set up between DAO and Entity and between DTO and Entity. As you can see, it's a lot of manual copy-paste. Is there any software to aid in the automatic generation of many similar objects from a list of fields?
I use C#.
If there are truly a bunch of common properties between each of the layers, you can put them on a common denominator base type that will be extended by each object in the flow.
If, on the other hand, each layer will be slightly but predictably different, consider writing a LinqPad script to generate the needed code for the other three layers based on one of the layers. LinqPad is a good choice for this kind of work because you don't have to deal with parsing; you can simply use reflection to examine a compiled assembly containing one of the layers when generating the others.
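As a rough illustration of that LinqPad approach (everything here is hypothetical; you would point it at your real entity assembly and type):

using System;
using System.Reflection;
using System.Text;

// Reflect over an existing layer's type and emit the source for a sibling
// class (e.g. a DTO) with the same properties; run this in LinqPad with a
// reference to the compiled assembly that contains the entity.
static class DtoGenerator
{
    public static string Generate(Type sourceType, string newClassName)
    {
        var sb = new StringBuilder();
        sb.AppendLine("public class " + newClassName);
        sb.AppendLine("{");
        foreach (PropertyInfo p in sourceType.GetProperties())
        {
            sb.AppendLine("    public " + p.PropertyType.Name + " " + p.Name + " { get; set; }");
        }
        sb.AppendLine("}");
        return sb.ToString();
    }
}

// Example entity standing in for one of your real layer classes.
class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Usage: Console.WriteLine(DtoGenerator.Generate(typeof(CustomerEntity), "CustomerDto"));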
Ultimately, though, I would recommend trying to reduce the number of layers you're dealing with. For example, for layer (4), can you simply "add" the additional properties using containment instead of copy-paste? For example:
public class CustomerDto
{
    public CustomerEntity Entity { get; set; }
    public int SomeProp { get; set; }
}
I'm not too familiar with NHibernate, but do you really need different types for (1) and (2)?
Another way to approach things is to focus on the data that individual features need to work with rather than creating a whole host of classes for each "table" in the database. That way, you can write a LINQ query to go straight from the data into the type you'll actually use.
Adding layers to an application often seems nice in theory, but in practice I've found that it often fails to provide much abstraction while adding a lot of coding overhead.
I'm building a workflow for some forms that would be routed to users for approval. I have an abstract 'FormBase' class which stores a LinkedList of 'Approver' objects and has some helpers that modify the approver list, move the form to the next person, and so on. These helpers are all 'virtual' so that a given form can override them with custom behaviours.
From this 'base' will come instances of various forms, each of which will differ in the data it contains. It'll be the normal set of lists, strings, and numeric values you'd find in a normal form. So I might have
class MaterialForm : FormBase
class CustomerForm : FormBase
etc, and a new instance is created when a user creates and submits a form.
I'd like to persist the field details in EF6 (or 5) in a flexible way, so I could create more forms deriving from FormBase without too much fiddling. Ideally I'd like all behaviour specific to a form type to live in the MaterialForm derived class. I figure I can't persist this derived class to EF (unless I'm wrong!). I'm considering:
a) JSONifying the field details and storing them as a string in the class that gets stored to EF. I'd probably do the same for the approver list, and each time I need to modify the list I'd pull it out, modify it, and push it back.
b) Including a 'FormData' property in my abstract class, then including a derived version of that in each concrete implementation (e.g. MaterialFormData, CustomerFormData). However, the abstract class didn't seem to like my use of a derived type in this way. It is also unclear how the DbSets would be set up in this case, as you'd probably need a new table for each type.
I feel I'm misunderstanding something fundamental about how the 'real' classes relate to those stored in EF. What would you recommend as an architecture for this case?
When it comes to Entity Framework you have three supported models for inheritance: Table Per Type (TPT), Table Per Hierarchy (TPH), and Table Per Concrete Class (TPC).
The use of TPC is generally avoided, and choosing between the other two comes down to several factors including performance and flexibility. There is a great article outlining the differences here.
I'd also recommend reading this and this for more information and examples on how these patterns work.
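For example, a TPH sketch along the lines of your form hierarchy (property names are mine): EF stores all the derived types in one table and adds a discriminator column to tell them apart.

using System.Collections.Generic;
using System.Data.Entity;

public abstract class FormBase
{
    public int Id { get; set; }
    public string Title { get; set; }
    public virtual ICollection<Approver> Approvers { get; set; }
}

public class MaterialForm : FormBase
{
    public string MaterialCode { get; set; }
}

public class CustomerForm : FormBase
{
    public string CustomerName { get; set; }
}

public class Approver
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

public class WorkflowContext : DbContext
{
    // One set for the whole hierarchy; query a specific form type with
    // context.Forms.OfType<MaterialForm>().
    public DbSet<FormBase> Forms { get; set; }
}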
However, in your example it sounds like the issue is at the 'design' stage, in terms of coming up with a suitable 'model' for your application. My advice is that if you feel the class structure you have come up with cannot accurately represent the model you are working with, then either you need to change the structure, your model is massively complex, or you are constrained by an outside system (a database schema that you have no way of changing, for example).
In this case, have you considered doing a class diagram? Even if you use the EF designer, try and visualize the model, as that's usually the best way of determining where improvements can be made, and also gives you a good start in the design (especially if you're going code first).
Try that, and post that back here if necessary. We can then help redesign the model if required. My feeling is that there's nearly always some good way of representing your requirement with a decent OO perspective, and it's best to analyse that before looking at more elaborate options! The key here is to avoid thinking of creating new class types dynamically if that can be avoided.
I seem to be missing something and extensive use of google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence ignorant manner, for example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question that matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. They also easily implement concepts like lazy loading: hydrating an object at the last minute before that object is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects, with no knowledge of the persistence mechanism behind, then you will likely end up doing an end run around most of the power of an ORM, because if the domain doesn't have virtual members or child collections that can be replaced with proxies, then you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
While ORMs do require the domain design to have some knowledge of the persistence mechanism, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These types require you to hard-code the get and set between ADO DataReaders and your objects; in the average case a mapping greatly simplifies the process. The DAOs also have to know about each other; to persist an Invoice you need the DAO for the Invoice, which needs a DAO for the InvoiceLine, Customer and GeneralLedger objects as well. And, there must be a common, abstracted transaction control mechanism built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce knowledge of the domain about its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
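To illustrate that last rule (type names are mine): with everything virtual, the ORM can hand back lazy-loading proxies instead of forcing you to eager-load the whole graph.

using System.Collections.Generic;

public class Invoice
{
    public virtual int Id { get; protected set; }
    public virtual string Number { get; set; }

    // Only fetched from the database when first accessed, via the proxy.
    public virtual ICollection<InvoiceLine> Lines { get; set; }
}

public class InvoiceLine
{
    public virtual int Id { get; protected set; }
    public virtual decimal Amount { get; set; }

    // Backreference of the kind discussed above.
    public virtual Invoice Invoice { get; set; }
}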
In short - it is not solved
(here goes additional useless characters to post my awesome answer)
All good points.
I don't have an answer (but the comment got too long when I decided to add something about stored procs) except to say my philosophy seems to be identical to yours and I code or code generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing" as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises and they are, frankly, too leaky of an abstraction for me. And I'm not big on having a lot of dupe classes because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side, stored procs are a necessity in my view. I know that ORMs support them, but the tendency is not to do so by most ORM users and that is a huge negative for me - because they talk about a best practice and then they couple to a table-based design even if it is created from a code-first model. Seems to me they should look at an object datastore if they don't want to use a relational database in a way which utilizes its strengths. I believe in Code AND Database first - i.e. model the database and the object model simultaneously back and forth and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code ORM against your tables, your app is going to have problems being able to live for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis - which is super-painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are - USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
Use EF 4.1 with the virtual keyword on our business objects, along with our own custom implementation of a T4 template, wrapping the ObjectContext behind an interface for repository-style data access.
Use AutoMapper to convert between BO and DTO.
Use AutoMapper to convert between ViewModel and DTO.
You would think that ViewModels, DTOs, and business objects are the same thing, and they might look the same, but they have a very clear separation in terms of concerns.
View models are more about the UI screen, DTOs are more about the task you are accomplishing, and business objects are primarily concerned with the domain.
There are some compromises along the way, but if you want EF, then the benefits outweigh the things you give up.
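Roughly, the AutoMapper part looks like this (type names are illustrative; older AutoMapper versions expose the static Mapper.CreateMap API shown here, newer ones use MapperConfiguration instead):

using AutoMapper;

// Business object on the EF side.
public class Order
{
    public virtual int Id { get; set; }
    public virtual decimal Total { get; set; }
}

// DTO handed to the service layer / WCF.
public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class MappingConfig
{
    public static void Configure()
    {
        Mapper.CreateMap<Order, OrderDto>();
        Mapper.CreateMap<OrderDto, Order>();
    }
}

// Usage: MappingConfig.Configure(); var dto = Mapper.Map<OrderDto>(order);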
Over a year later, I have solved these problems for me now.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism, I can now live with it.
For the default constructor I also found a solution I can live with. I add the constructors I need as public constructors and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}