I am new to NHibernate/FluentNHibernate. I use FNH for my own code now, as I find it easier to use. However, I am working with an existing code base written in plain NHibernate. Today I found a bug where the database wasn't getting updated as expected. After about 30 minutes I discovered that I had added a new class member but hadn't updated the mapping XML, so that column in the table was never written. My question is: is there an easy way to identify such incomplete mappings in NHibernate, so that I don't have to check the mappings manually every time something goes wrong? For example, a warning message when I update an object that has non-default data in a field that isn't mapped?
Take a look at the PersistenceSpecification class in FluentNHibernate: http://wiki.fluentnhibernate.org/Persistence_specification_testing
You could wrap this up using reflection to test every property if that makes sense for your system.
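A minimal sketch of such a test, assuming a hypothetical mapped `Product` entity and a `SessionFactoryHolder` helper that exposes your session factory (both names are illustrative, not part of the question's code):

```csharp
// Round-trips each checked property through the database and compares values,
// so a property you forgot to map will fail the test.
using FluentNHibernate.Testing;
using NHibernate;
using NUnit.Framework;

[TestFixture]
public class ProductMappingTests
{
    [Test]
    public void Product_mapping_round_trips_every_property()
    {
        using (ISession session = SessionFactoryHolder.Factory.OpenSession())
        {
            new PersistenceSpecification<Product>(session)
                .CheckProperty(p => p.Name, "Widget")
                .CheckProperty(p => p.Price, 9.99m)
                .VerifyTheMappings(); // saves, reloads and compares each checked property
        }
    }
}
```

The reflection-based wrapper mentioned above would enumerate the entity's properties and call `CheckProperty` for each, so new properties are covered automatically.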
You could also use the NHibernate mapping metadata and search for unmapped properties via reflection in a unit test.
By using the metadata, it is transparent to your application whether you are using FluentNHibernate or some other means to create the NHibernate mapping.
If you test your mappings in unit tests, you will find out at test time, not at application startup, whether your mappings are correct.
This question seems to be related and this shows how to query the metadata.
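A sketch of such a metadata-driven test (the `SessionFactoryHolder` helper is an assumption; `MappedClass` is the NHibernate 4+ property, older versions use `GetMappedClass(EntityMode.Poco)`):

```csharp
// Compares each entity's public properties against the properties
// NHibernate actually mapped, and fails on any that were forgotten.
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NUnit.Framework;

[TestFixture]
public class MappingCompletenessTests
{
    [Test]
    public void Every_public_property_is_mapped()
    {
        ISessionFactory factory = SessionFactoryHolder.Factory; // your factory here

        foreach (var entry in factory.GetAllClassMetadata())
        {
            var metadata = entry.Value;

            var mapped = new HashSet<string>(metadata.PropertyNames);
            if (metadata.IdentifierPropertyName != null)
                mapped.Add(metadata.IdentifierPropertyName);

            var unmapped = metadata.MappedClass.GetProperties()
                .Select(p => p.Name)
                .Where(name => !mapped.Contains(name))
                .ToList();

            Assert.IsEmpty(unmapped,
                entry.Key + " has unmapped properties: " + string.Join(", ", unmapped));
        }
    }
}
```

Intentionally unmapped properties (computed values, etc.) would need an exclusion list, so treat this as a starting point rather than a drop-in test.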
The bug where the database did not get updated can be caused by issues other than an unmapped field or property. There may be other mapping mistakes that are impossible to catch using reflection. What if you used the wrong cascade or the wrong ID generator? Or forgot an association mapping?
If you want to catch the majority of mapping issues, you should create an integration test that executes against a real or in-memory database. A good overview of this approach is here.
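A sketch of such an integration test against an in-memory SQLite database, assuming FluentNHibernate configuration and a hypothetical `Product`/`ProductMap` pair (names are illustrative; the `SchemaExport.Execute` overload shown is the one taking an open connection, which in-memory SQLite requires):

```csharp
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;
using NUnit.Framework;

[TestFixture]
public class ProductPersistenceTests
{
    [Test]
    public void Saved_product_can_be_reloaded_with_the_same_values()
    {
        Configuration cfg = null;
        var factory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ProductMap>())
            .ExposeConfiguration(c => cfg = c)
            .BuildSessionFactory();

        using (var session = factory.OpenSession())
        {
            // The in-memory database lives only as long as this connection,
            // so the schema must be exported on the session's own connection.
            new SchemaExport(cfg).Execute(false, true, false, session.Connection, null);

            object id;
            using (var tx = session.BeginTransaction())
            {
                id = session.Save(new Product { Name = "Widget", Price = 9.99m });
                tx.Commit();
            }

            session.Clear(); // force a real reload instead of a first-level cache hit

            var reloaded = session.Get<Product>(id);
            Assert.AreEqual("Widget", reloaded.Name);
            Assert.AreEqual(9.99m, reloaded.Price);
        }
    }
}
```

Unlike the reflection check, a round trip like this also exercises generators, cascades, and column types.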
I'm working on a C# project with several tiers, each with its own data type encapsulations. Whenever I add a field to a model in a top-level layer (say, the application service), I need to remember everywhere else I have to make a change to keep the application working properly.
I'm looking for a pattern or method to prevent the logical errors that follow from forgetting to update my mapping classes. I think the problem would be solved if I could require my mapping classes to resolve newly added fields (for example, by throwing an exception when they're not resolved).
So, any ideas for a solution, or for how I could implement my own idea?
You can use a library like AutoMapper, which will give you an error if not all properties are mapped properly (http://docs.automapper.org/en/stable/Configuration-validation.html); as a bonus, it saves you from writing all the code to map each object.
If you don't want to use a library, make sure to wrap the mappings in factories so that, at least, the code is centralised and easily discoverable; that's still error prone, though. Using constructors instead of object initialisers also helps you find incomplete mappings at compile time.
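A minimal sketch of AutoMapper's configuration validation, using hypothetical `OrderEntity`/`OrderDto` types:

```csharp
using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<OrderEntity, OrderDto>();
});

// Call this once at startup or from a unit test: if OrderDto gains a
// property that OrderEntity cannot supply, this line throws an
// AutoMapperConfigurationException naming the unmapped member.
config.AssertConfigurationIsValid();

var mapper = config.CreateMapper();
var dto = mapper.Map<OrderDto>(someOrderEntity);
```

Putting `AssertConfigurationIsValid()` in a unit test gives exactly the "throw if a new field isn't resolved" behaviour the question asks for.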
To the point: is there any way to customize the pluralization service for database-first EF models?
Specifically, I'd like to use the *Set suffix notation, wherein the entity sets and collection navigation properties are named accordingly:
User to UserSet
Report to ReportSet
etc.
I know I've seen this made possible with code-first, however I'm stuck with database-first as the development process.
I'm aware of IPluralizationService, but can't figure out how to substitute my custom implementation.
Currently, I'm manually working through the entity sets and collection properties in the model browser (VS2015) and appending "Set" to each of them; this is fine to do once, but it becomes quite the pain in my ass every time I regenerate the model.
Any suggestions?
You could write something that will update the edmx file to the new names.
I was also going to suggest altering the T4 script (the .tt files), but I think that would break the mapping with the EDMX file in a database-first situation.
But I think you should reconsider code first: you can run the code-first generator multiple times; just clean out the context class and the connection string in the config, and make a new context with the same name (it will overwrite the table classes). You can NuGet EntityFramework.CodeTemplates.CSharp and alter the T4 templates it downloads to include "Set", and that is what it will use to generate the classes.
Then you don't fall into EDMX hell; EDMX files are a pain once you start trying to maintain them instead of letting them just be what is generated.
I ended up writing a script (PHP of all things) to perform XML transformations on the EDMX file. I lose support for some of the more obscure features due to the way the transformation is performed, however it was the only way I could do it without sacrificing kittens to an omniscient force. Most importantly, it maintains the mappings as expected.
I couldn't figure out a way to work the transformation script into the generation pipeline yet; though I may look at invoking it from the text template.
I'm aware that the question and its answers are 4 years old, but:
In EF6 you can implement a pluralization convention, and replace the default English pluralization with your own.
The original implementation uses a convention, which calls a service. I'm not sure whether you can simply register a service for IPluralizationService in the DependencyResolver, but you can definitely write your own convention.
The only caveat is that the GitHub code relies on internal methods which you need to copy or substitute, e.g.
var entitySet = model.StoreModel.Container.EntitySets.SingleOrDefault(
e => e.ElementType == GetRootType(item));
replacing original
model.StoreModel.GetEntitySet(item);
and the method GetRootType().
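On the service-registration route mentioned above, EF6's `DbConfiguration` does expose a `SetPluralizationService` hook; a sketch (class names are illustrative, and note this affects code-first name generation, so the designer's database-first T4 generation may not pick it up):

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure.Pluralization;

// Pluralizes by appending "Set", so User -> UserSet, Report -> ReportSet.
public class SetSuffixPluralizationService : IPluralizationService
{
    public string Pluralize(string word)
    {
        return word + "Set";
    }

    public string Singularize(string word)
    {
        return word.EndsWith("Set") ? word.Substring(0, word.Length - 3) : word;
    }
}

// EF discovers a DbConfiguration subclass in the same assembly as the context.
public class CustomDbConfiguration : DbConfiguration
{
    public CustomDbConfiguration()
    {
        SetPluralizationService(new SetSuffixPluralizationService());
    }
}
```

Whether this or a model convention is the better fit depends on where in the pipeline the names are generated in your setup.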
I must keep the same domain running in two places at the same time. One end must be able to run "offline", while still receiving data from, and sending data to, the other end from time to time when "online". Basically, we have a central server which aggregates data coming from the clients and serves some updated data (like the latest price of a product, new products, etc.). I'm using NHibernate to take care of persistence.
I'm trying to use NHibernate's Replicate method
session.Replicate(detached, ReplicationMode.LatestVersion);
to take the object coming from the other end and incorporate/merge/attach it to the "local" database.
It fails to execute because it can't cascade the references and collections. Reviewing the cascade options in FluentNHibernate (and even looking directly at the NHibernate source code), I could not find the REPLICATE cascade type. From Hibernate's documentation:
CascadeType.REPLICATE
My question is: does anybody knows why FluentNHibernate is missing such option? Is there a different/better way to set this kind of cascade behaviour?
I tried the Cascade.Merge() option together with session.Merge(detached), and although the cascade works just fine, it gave me some headaches, mainly because of id generation and optimistic locking (versioning).
EDIT: NHibernate's source code DOES have a ReplicateCascadeStyle class that maps to the string "replicate". The Cascade / CascadeConverter classes (from the Mapping.ByCode namespace) DO NOT have Replicate as an option. So NHibernate itself supports cascade on replicate, but only through manual mapping, I guess.
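Since the "replicate" cascade style exists at the NHibernate level, a hand-written hbm.xml mapping can set it even where the fluent API offers no equivalent; a sketch with illustrative entity and column names:

```xml
<!-- hbm.xml fragment: cascade="replicate" is the style NHibernate's
     ReplicateCascadeStyle maps to, even though FluentNHibernate
     exposed no Cascade.Replicate() option at the time. -->
<many-to-one name="Product" column="ProductId" cascade="replicate" />

<bag name="Items" cascade="replicate">
  <key column="OrderId" />
  <one-to-many class="OrderItem" />
</bag>
```

Mixing a few XML mappings for the replicated aggregates with fluent mappings for the rest is one way to avoid converting everything.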
OK, as I'm using Fluent NHibernate to map 100+ classes, switching to XML mappings is not an option for me.
So I forked Fluent NHibernate on GitHub, added the missing Cascade.Replicate option and sent a pull request.
Hope it helps someone.
OK, this is probably not simple, but I figured I would throw it out there:
I get the idea of extending a model-first entity in EF with a partial class to add data annotations, something like this:
[Required]
public string MyString { get; set; }
However, in a multi-tenant system where I may want to customize which fields are actually required for each end client, can I dynamically set the annotation depending on how the client has configured the setting, say in another table?
Update: In the multi-tenant system there are at least two databases: one that stores system configuration information, plus an individual database for each customer. The system DB controls routing and selects the proper customer database from there.
Any insights or ideas anyone has on how to accomplish this would be great!
Thanks,
Brent
If you are using EF 4.1, you could create different DbContexts referencing the same entities but providing different mappings using the Fluent API.
Here is a link to a video that describes using the api.
Fluent Api
Note: your database would need to be set up to accommodate all the different configurations. For example, if "FirstName" is required in one context and not in another, your DB should allow NULL in order to cope with both situations.
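A sketch of that approach: two contexts over the same entity, one marking a property required and the other optional (the `Customer` type and context names are illustrative):

```csharp
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

public class StrictContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>()
                    .Property(c => c.FirstName)
                    .IsRequired(); // tenant A: FirstName must be supplied
    }
}

public class RelaxedContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>()
                    .Property(c => c.FirstName)
                    .IsOptional(); // tenant B: FirstName may be NULL
    }
}
```

The tenant-resolution code would then pick which context type to instantiate, based on the configuration stored in the system database.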
You can't change attributes dynamically.
One option would be to create the types dynamically, probably inheriting some class (or implementing an interface) that you actually work with, although I'm not sure this would work with EF.
Another possibility would be if EF had some other way to tell it the same thing, but I don't know EF well enough to say whether something like that exists.
I'm trying to implement a repository method for removing entities using only the primary key, mainly because in a web app I usually only know the primary key when a "delete request" is invoked from a web page.
Because of the ORM, the option today is to fetch the entity from the database and then delete it, which costs me an extra roundtrip.
I could use an HQL delete, but since I want to create a generic delete method for all entities, that won't fly unless I use reflection to find out which field is the primary key (doable, but it doesn't feel right).
Or is it in the nature of NHibernate to need the entity in order to correctly handle cascades?
I tried the following approach, with the assumption that it would not load the entity unless explicitly necessary, but I haven't had time to test it yet. Maybe someone can shed some light on how this is handled?
var entity = session.Load<T>( primaryKey );
session.Delete( entity );
EDIT: I have now tested it, and it seems that it still does a full select on the entity before deletion.
Load may return a proxy object but it isn't guaranteed. Your mapping may contain cascade deletes that will force NHibernate to load the object from the database in order to determine how to delete the object and its graph.
I would implement this using Load as you are doing. For some objects NHibernate may not need to do a select first. In cases where it does, that's the [usually] trivial price you pay for using an o/r mapper.
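A sketch of what that looks like as a generic repository method (the `Repository<T>` shape is illustrative, not from the question):

```csharp
using NHibernate;

public class Repository<T> where T : class
{
    private readonly ISession session;

    public Repository(ISession session)
    {
        this.session = session;
    }

    public void DeleteById(object id)
    {
        // Load does not hit the database immediately; it hands back a proxy
        // unless NHibernate needs the state (e.g. to resolve cascades), so
        // for simple entities the delete can skip the extra SELECT.
        var entity = session.Load<T>(id);
        session.Delete(entity);
    }
}
```

No reflection over key fields is needed, since `Load<T>` already knows the identifier from the mapping.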
This was already asked and answered before: How to delete an object by using PK in nhibernate?
I even have two blog posts about it:
http://sessionfactory.blogspot.com/2010/08/deleting-by-id.html
http://sessionfactory.blogspot.com/2010/08/delete-by-id-gotchas.html
nHibernate is an O(bject)RM. I agree with you that it probably needs the objects to resolve dependencies.
You can of course use direct ADO.NET calls to delete your objects, but that presents its own problems, since you'll have to take care of any cascading issues yourself. If you do go down this road, don't forget to evict from the nHibernate session any objects you delete this way.
But, if this delete is in a really sensitive part of your system, that might be the way to go.
I'd make 100% sure, though, that this is the case. Throwing away everything nHibernate gives you because of this would not be wise.
I get the sense you know this, and you're looking for a strictly nHibernate answer, and I don't think it exists, sorry.
Disclaimer: I am not able to test this right now, but wouldn't the following work:
Person entity = new Person();
entity.Id = primaryKey;
session.Delete( entity );
Don't load the entity; instead, build an entity having just the primary key. I would have loved to test this, but right now my environment is not working.