I have a Candidate class. When somebody deletes a Candidate, I want a DeletedCandidate which is derived from Candidate to be stored in a separate table.
How can I model this in EF, Code First? I think my best option is TPC (table per concrete type), but when I use the following in my context, a System.Data.MappingException is thrown.
// Inside OnModelCreating: map the subclass to its own table,
// copying the inherited columns into it (table per concrete type).
modelBuilder.Entity<DeletedCandidate>().Map(d =>
{
    d.ToTable("DeletedCandidate");
    d.MapInheritedProperties();
});
I would not use inheritance for archiving-like tasks. You will always have to distinguish between Candidates and DeletedCandidates in your mainstream application code; with EF you'd have to do that by always retrieving candidates with OfType<Candidate>().
I would make a separate class (and table) for DeletedCandidates. Thus, you can always get to them when needed, but they never get in harm's way in the rest of your code. A drawback may be that you always have to keep the properties (and columns) of the two in sync. This could be relieved by having both classes implement a common interface (which you can easily do with code-first).
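For illustration, here is a minimal sketch of that setup. The property names and the DeletedOn audit column are invented; the shared interface is what keeps the two classes in sync:

using System;

public interface ICandidateData
{
    string Name { get; set; }
    string Email { get; set; }
}

public class Candidate : ICandidateData
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Deliberately NOT derived from Candidate: it maps to its own table
// and never shows up in queries for Candidate.
public class DeletedCandidate : ICandidateData
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime DeletedOn { get; set; }
}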
If you need to preserve foreign key relationships to DeletedCandidates, it's a different story. In that case I think the best you can do is to use a deleted flag (but then you're going to need filtering to get the active candidates).
Just a piece of advice :D.
I know that the underlying ORM used in Orchard is NHibernate, and it supports so-called ClassMapping, which may help customize the mappings the way we want.
However, I'm not sure how Orchard utilizes the mapping methods supported by NHibernate. It seems to always use a strategy similar to Table Per Type in EF (and some other ORMs): the base class is mapped to a common table, while each derived class is mapped to another table containing only its own properties (those not declared in the base class). The two tables have a one-to-one relationship.
Now I really want it to use a strategy similar to Table Per Concrete Type, in which the base and derived classes are mapped to 2 different tables, with all properties (including inherited ones) mapped to columns. Those 2 tables have no relationship, so querying columns in just one table will not unexpectedly generate an INNER JOIN (for the one-to-one relationship).
Actually that requirement makes sense when we just need to partition our data (from 1 big table into 2 or more small tables sharing the same schema). We don't want to re-declare or maintain duplicate model classes (with different names); instead we just create a new model class and let it inherit from one base model class containing all the necessary properties.
With the current code like this:
public class ARecord {
    // properties ...
}

public class BRecord : ARecord {
    // empty here
}
Currently we cannot use BRecord because it is understood as another part of ARecord; the auto-generated query (always with an INNER JOIN) fails because of table or column names that do not exist.
How can I solve this?
You're not going to like it ;) In a nutshell, the answer is: don't do inheritance at all. Orchard was very deliberately designed around the idea of composition, steering well clear of inheritance in its content models. Perhaps the central idea in Orchard is to make the content part the "atom of content", and to design those basic units as very simple and composable pieces of functionality that do one thing well.
After all these years, the concept has held up remarkably well, and I've yet to see an example of a content model where inheritance would have been more elegant and appropriate. This is reflected in the way NHibernate is customized and used in Orchard, as you've discovered.
So the solution to your problem can likely be one of two things:
You're modeling content, and you should re-think your approach in terms of composition of parts. If you give a little more detail about your specific scenario (maybe in a new question), I'm happy to help specifically in this direction.
You're modeling non-content data, in which case you might want to consider opting out of Orchard's content-specialized NHibernate idiosyncrasies and doing things closer to the metal. Again, if you give more specifics about your scenario, I'm happy to take a look and give some pointers.
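For that second case, if you end up mapping directly with NHibernate, a table-per-concrete-class mapping via mapping-by-code might look roughly like this. It is only a sketch under assumptions: it bypasses Orchard's record mappings entirely, ARecord is assumed to have a Guid Id and a Name property, and TPC needs an id generator that is unique across tables (hence guid.comb rather than identity):

using NHibernate.Mapping.ByCode;

var mapper = new ModelMapper();
mapper.Class<ARecord>(m =>
{
    m.Table("ARecord");
    // identity won't work for TPC: ids must be unique across both tables
    m.Id(x => x.Id, id => id.Generator(Generators.GuidComb));
    m.Property(x => x.Name);
});
// union-subclass = table per concrete class: BRecord gets its own table
// with all inherited columns and no join back to ARecord
mapper.UnionSubclass<BRecord>(m => m.Table("BRecord"));

var mapping = mapper.CompileMappingForAllExplicitlyAddedEntities();
configuration.AddMapping(mapping); // configuration is NHibernate.Cfg.Configuration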
I have a question about proxies in NHibernate. Inside my log file I have a lot of entries like: Narrowing proxy to - this operation breaks ==
There are a few other questions on the web, with different answers:
Stack Overflow, NHibernate narrowing proxy warning: Whether it is a big deal or not depends on the level of risk you are willing to accept. Since there is always going to be a disconnect between your code and your database, you cannot always be sure that the cast will work. This can lead to bugs that might be difficult to diagnose and may not be resolvable without changes to the database or the code.
Another post, from the Hibernate forums:
Narrowing problem: Don't worry about this warning, just put the following in your log configuration and you shouldn't see it anymore...
Why it happens?
Suppose you have a Product with a many-to-one association to Address. Both are entities, and Address has a ShippingAddress subclass.
Let's Session.Get(...) a Product from the db that has a ShippingAddress as its association. Because the many-to-one is lazy, it will return an Address proxy. Note that this is an Address proxy and not a ShippingAddress proxy, as the proxy always matches the type declared on Product (see the Hibernate book for details).
This proxy is stored by Hibernate in its proxy cache.
Now we Session.Get(...) the same ShippingAddress from the db, the one associated with the Product that we fetched before.
Hibernate will see that it already contains a proxy for this ShippingAddress and will return it. However, it will notice that the types aren't the same, so a "downcast" must occur. Because that isn't possible with proxies, it will create a new one and return it...
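In code, the sequence that produces the warning looks roughly like this (a sketch reusing the Product/ShippingAddress example; sessionFactory and productId are assumed to exist):

using (var session = sessionFactory.OpenSession())
{
    // Lazy many-to-one: product.Address is a proxy typed as Address,
    // even though the underlying row is a ShippingAddress.
    var product = session.Get<Product>(productId);
    var viaProduct = product.Address;

    // Loading the same row as ShippingAddress makes NHibernate
    // "narrow" the cached Address proxy: it creates a new instance,
    // so reference equality between the two is broken.
    var direct = session.Get<ShippingAddress>(viaProduct.Id);
    bool same = ReferenceEquals(viaProduct, direct); // false
}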
As you can see, nothing to worry about.
You could consider making the Address a value type...
In my case that's not an option.
And a last one
ProxyWarnLog Removed in Hibernate 4?
Here is the code from NHibernate: StatefulPersistenceContext.cs -> NarrowProxy(..)
So, is it a problem or not? I always work with detached objects in my program.
I hope someone can help me.
Thanks a lot.
This can be trouble if you use the == operator to compare entities and decide whether they are the same one or not.
Something similar occurs (but with Equals) when you add an entity to a collection mapped as a set. If the set already happens to contain the entity, but instantiated through another proxy type, the add may fail to respect its contract: it may add the entity again, and the set would then contain two occurrences of the same entity.
This is a may, not a will, because you can avoid this trouble by overriding Equals (and GetHashCode, since it is mandatory to return the same hash code for objects that are equal) on your entities, in order to compare them by their primary key and entity type. For sets this is sufficient, since a set does not use the == operator but the Equals method, preferring the one from IEquatable<T> if your type implements it.
For ==, you then need to define the == and != operators on your class so they use your Equals implementation.
Read here or here for more on this. Beware: their example implementations are just examples, and are not suitable for a domain model using inheritance. Read here for an overriding example that handles inheritance. (But it does not handle transient entities: better to append a final test on natural ids, if they have one, rather than yielding false when both are considered transient but are not the same reference.) And this blog provides a base class with == redefined (found in this Stack Overflow question).
Working with detached entities increases this risk. If you have not already overridden those two methods on your entities, it would be safer to do so, regardless of this warning.
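Putting that together, a base-class sketch might look like this (an illustration only, assuming Guid ids; NHibernateUtil.GetClass unwraps proxies so the real entity types are compared):

using System;
using NHibernate;

public abstract class Entity
{
    public virtual Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        if (ReferenceEquals(other, null)) return false;
        if (ReferenceEquals(this, other)) return true;
        // Transient entities (no id yet) are equal only by reference.
        if (Id == Guid.Empty || other.Id == Guid.Empty) return false;
        // Compare the real types, unwrapping any NHibernate proxy.
        return Id == other.Id
            && NHibernateUtil.GetClass(this) == NHibernateUtil.GetClass(other);
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }

    public static bool operator ==(Entity a, Entity b)
    {
        if (ReferenceEquals(a, b)) return true;
        if (ReferenceEquals(a, null)) return false;
        return a.Equals(b);
    }

    public static bool operator !=(Entity a, Entity b)
    {
        return !(a == b);
    }
}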
I have a frustrating situation owing to this little quirk of EF. Here's a simple demo of the behavior. First the DB schema:
As you can see, RestrictedProduct is a special case of Product, which I intend to make a subclass of Product with some special code.
Now I import to an EF data model:
Oops! EF saw that RestrictedProduct had only 2 fields, both FKs, so it mapped it as a one-to-many relationship between Product and Restriction. So I go back to the database and add a Dummy field to RestrictedProduct, and now my EF model looks much better:
But that Dummy field is silly and pointless. Maybe I could delete it? I blow away the field from the DB table and the entity model, then refresh the model from the DB...
Oh, no! The Product-Restriction association is back, under a new name (RestrictedProduct1)! Plus, it won't compile:
Error 3034: Problem in mapping fragments starting at lines (x, y) :Two entities with possibly different keys are mapped to the same row. Ensure these two mapping fragments map both ends of the AssociationSet to the corresponding columns.
Is there any way to prevent this behavior, short of keeping the Dummy field on the RestrictedProduct table?
I just came across the same issue. As an alternative to putting the dummy field in your RestrictedProduct table to force the creation of an entity, you can also make your RestrictedProduct.RestrictionId field nullable, and EF will then generate an entity for it. You can then modify it to use inheritance, and any subsequent "Update model from database" will not create undesired nav properties. Not really a nice solution, but a workaround.
Let's walk slowly through your problem.
The first thing you need to decide is whether the restricted product is really a special case of product, or a possible extension of each product.
From your original DB schema it seems that any product may have a relation to a single restriction, while a single restriction can be shared among many products. So this is a simple one-to-many situation, which means that restricted product is NOT a special case of product! Restriction is an independent entity which has nothing to do with product in any specific way.
Therefore EF is correct in the first import of your schema:
1. a product can have 0 or 1 restrictions.
2. a restriction is another entity which can be related to many products.
I do not see your problem.
Currently our new database design is changing rapidly and I don't always have time to keep up to date with the latest changes being made. Therefore I would like to create some basic integration tests that are basically sanity checks on my mappings against the database.
Here are a few of the things I'd like to accomplish in these tests:
Detect columns I have not defined in my mapping but exist in the database
Detect columns I have mapped but do NOT exist in the database
Detect columns that I have mapped where the data types between the database and my business objects no longer match
Detect column name changes between database and my mapping
I found the following article by Ayende, but I just want to see what other people out there are doing to handle this sort of thing. Basically I'm looking for simplified tests that cover a lot of my mappings but do not require me to write separate queries for every business object in my mappings.
I'm happy with this test, which builds on the one Ayende proposed:
[Test]
public void PerformSanityCheck()
{
    foreach (var s in NHHelper.Instance.GetConfig().ClassMappings)
    {
        Console.WriteLine(" *************** " + s.MappedClass.Name);
        NHHelper.Instance.CurrentSession
            .CreateQuery(string.Format("from {0} e", s.MappedClass.Name))
            .SetFirstResult(0).SetMaxResults(50)
            .List();
    }
}
I'm using a plain old HQL query since this version comes from a very old project and I'm too lazy to update it to QueryOver or Linq2NH or something else...
It basically pings all the mapped entities in the configuration and grabs some data too, in order to see that all is OK. It does not catch a field that exists in the table but not in the mapping, which can cause persistence problems if the column is not nullable.
I'm aware that Fabio Maulo has something possibly more accurate.
As a personal consideration, if you are thinking about improvements, I would try the following strategy: since mappings are browsable via the API, look for any explicit/implicit table declaration in the map, and ping it against the database using the standard schema helper classes you have inside NH (they eventually use the ADO.NET schema classes, but they insulate all the configuration stuff we already did in NH itself). By playing a little with the naming strategy, we can get a one-by-one table/field checklist. Another improvement could be, in the case of an unmatched field, to look for a candidate by applying the Levenshtein distance to all the available names and choosing one if some threshold requirements are satisfied. This of course is useless in class-first scenarios where the DB schema is generated by NH itself.
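On that note, NHibernate also ships a schema validator that compares the mapping metadata against the live database. It flags mapped tables or columns that are missing or have incompatible types, though, like the test above, it will not flag extra unmapped columns. A sketch, reusing the NHHelper from the test above:

using NHibernate.Tool.hbm2ddl;

[Test]
public void PerformSchemaValidation()
{
    var cfg = NHHelper.Instance.GetConfig();
    // Throws if a mapped table or column is missing from the database
    // or has an incompatible type.
    new SchemaValidator(cfg).Validate();
}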
I use this one too:
Verifying NHibernate Entities Contain Only Virtual Members
I'm currently facing a performance problem with creating POCO objects from my database. I'm using Entity Framework 4 as my OR mapper.
The whole application is a prototype for now.
Let's assume I want to have some business objects like the classes 'Printer' or 'Scanner'. Both inherit from a base class called Product.
The business classes exist.
I'm trying to use a more generic database approach. I don't want to create tables for "Printer" or "Scanner". I want to have 3 tables: one called Product, plus Property and PropertyValue (which stores all the values assigned to a specific Product).
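As a sketch, those three tables map to classes roughly like this (names and key layout are my assumptions, since the actual model is not shown):

public class Product
{
    public int Id { get; set; }
    public string ProductType { get; set; }   // e.g. "Printer", "Scanner"
}

public class Property
{
    public int Id { get; set; }
    public string Name { get; set; }          // e.g. "Resolution"
}

public class PropertyValue
{
    public int Id { get; set; }
    public int ProductId { get; set; }        // FK to Product
    public int PropertyId { get; set; }       // FK to Property
    public string Value { get; set; }
}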
In my business layer I do create a specific object like this:
public Printer GetPrinter(int IDProduct)
{
    Printer item = new Printer();
    // get the product object with EF
    // get all PropertyValues
    // (with reflection) foreach property in item.GetType().GetProperties()
    // {
    //     property.SetValue(item, specificValue, null);
    // }
    return item;
}
This is what the EF model looks like:
Works fine so far. For now I'm doing performance tests for retrieving multiple sets.
I've created a prototype and improved it several times to increase the performance. It is still far away from being usable.
It takes 919 ms to create 300 objects that contain only 3 properties each.
The reason for choosing such a DB design is to keep the database generic: adding new properties should only require changes in the business model.
Am I just failing to see a performant way of retrieving this many objects, or is my approach totally wrong? As far as I understand OR mappers, they basically do the same thing?
I think you missed the whole point of ORM. The reason people use an ORM is to be able to persist business objects and easily retrieve them. You are using the ORM to get just the data for your business objects' factories. The factories use reflection to build business objects from the materialized classes retrieved by the ORM. This will always be very slow because:
Query compilation is slow (you can precompile it; see the sketch after this list)
Object materialization is slow (you can't avoid it)
Reflection is slow (you can't avoid it)
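On the first point, EF4 lets you compile a query once and reuse the delegate. A sketch (the MyEntities context and entity set names are invented; the context must derive from ObjectContext):

using System;
using System.Data.Objects;
using System.Linq;

static readonly Func<MyEntities, int, IQueryable<PropertyValue>> ValuesByProduct =
    CompiledQuery.Compile((MyEntities ctx, int productId) =>
        ctx.PropertyValues.Where(pv => pv.ProductId == productId));

// usage: var values = ValuesByProduct(context, someProductId).ToList();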
IMO if you want to follow this DB design, with generic tables absolutely independent of your business objects, you don't need an ORM, or at least you don't need EF.
The reason for your performance problems is that the generic approach is not followed in your business model, so somewhere you must convert generic data to specific data = a slow operation.
If you want to improve performance, define a set of shared properties and place them in Product. Then either use your current PropertyValue and Property for the additional non-shared properties, or simply use an ExtendedProperties table storing key/value pairs. Your entities will be of type Product with an inner type property, the shared properties, and a collection of extended properties. That is a generic approach.
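A sketch of that shape (names are illustrative):

using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string ProductType { get; set; }  // "Printer", "Scanner", ...
    public string Name { get; set; }         // shared property
    public decimal Price { get; set; }       // shared property
    public virtual ICollection<ExtendedProperty> ExtendedProperties { get; set; }
}

public class ExtendedProperty
{
    public int Id { get; set; }
    public int ProductId { get; set; }       // FK back to Product
    public string Key { get; set; }
    public string Value { get; set; }
}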
Firstly, it's not clear to me what you have in the way of POCOs. Did you hand-code these and your context, or did T4 generate them? There are some great articles here that benchmark performance with no POCOs, T4-generated POCOs/context, and hand-coded POCOs/context. As expected, there are HUGE performance savings going the POCO route (more than a 15-fold boost in that benchmark) over classes generated by the Entity Framework. You don't say which DBMS... if MSSQL, have you turned on the profiler to see what's being generated?