Removing an item from a collection (NHibernate) - c#

I have a parent-child relationship between two entities (Parent and Child).
My Parent mapping is as follows:
<class name="Parent" table="Parents">
...
<bag name="Children" cascade="all">
<key column="ParentID"></key>
<one-to-many class="Child"></one-to-many>
</bag>
</class>
I would like to execute the following:
someParent.Children.Remove(someChild);
The Child class also has a reference to another parent class, Type.
Due to this relationship, when the above code is called, an UPDATE query is generated instead of a DELETE: it removes the ParentID from the Child table (sets it to null).
Is it possible to force NHibernate to delete the child record completely, when removed from the Parent.Children collection?
UPDATE
#Spencer's Solution
Very attractive solution, as this is something that can be implemented in future classes. However, due to the way sessions are handled (in my particular case) in the repository pattern, this is nearly impossible, as we would have to pass session types (CallSessionContext/WebSessionContext) depending on the application.
#Jamie's Solution
Simple and quick to implement; however, I've hit another roadblock. My child entity looks as follows:
When using the new method, NHibernate generates an update statement setting the TypeID and ParentID to null, instead of a single outright delete. If I missed something in the implementation, let me know, as this method would be painless to move forward with.
#The One-Shot-Delete solution described here outlines the idea of dereferencing the collection to force a single delete. Same result as above, however: an update statement is issued.
// Instantiate a new collection and add the persisted items
List&lt;Child&gt; children = new List&lt;Child&gt;();
children.AddRange(parent.Children);

// Find and remove the requested items from the new collection
var childrenToRemove = children
    .Where(c =&gt; c.Type.TypeID == 1)
    .ToList();
foreach (var c in childrenToRemove) { children.Remove(c); }

parent.Children = null;

// Set the persisted collection to the new list
parent.Children = children;
Solution
Took a bit of digging, but Jamie's solution came through with some additional modifications. For future readers, based on my class model above:
Type mapping - Inverse = true, Cascade = all
Parent mapping - Inverse = true, Cascade = all-delete-orphan
The Remove methods described in Jamie's solution work. This does produce a single delete statement per orphaned item, so there is room for tuning in the future, but the end result is successful.
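For future readers, the final mappings described above would look roughly like this in hbm.xml (the Types table name and TypeID column are assumptions extrapolated from the question's mapping):

```xml
<class name="Parent" table="Parents">
  <bag name="Children" inverse="true" cascade="all-delete-orphan">
    <key column="ParentID" />
    <one-to-many class="Child" />
  </bag>
</class>

<class name="Type" table="Types">
  <bag name="Children" inverse="true" cascade="all">
    <key column="TypeID" />
    <one-to-many class="Child" />
  </bag>
</class>
```

With inverse="true" on both collections, the Child side owns the foreign keys, and all-delete-orphan on Parent is what turns a collection removal into a DELETE.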

Instead of exposing the IList<Child>, control access to the collection through a method:
public void RemoveChild(Child child)
{
    Children.Remove(child);
    child.Parent = null;
    child.Type.RemoveChild(child);
}
Type.RemoveChild would look similar, but you would have to be careful not to let the two RemoveChild methods call each other in an infinite loop.
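A minimal sketch of the two guarded RemoveChild methods (the Type class is renamed ParentType here to avoid clashing with System.Type; the "Remove returned false" check is what breaks the mutual recursion):

```csharp
using System.Collections.Generic;

public class Child
{
    public Parent Parent;
    public ParentType Type;
}

public class Parent
{
    public List<Child> Children = new List<Child>();

    public void RemoveChild(Child child)
    {
        if (!Children.Remove(child)) return;   // already removed: stop the recursion
        child.Parent = null;
        child.Type?.RemoveChild(child);
    }
}

public class ParentType   // the question's "Type" class
{
    public List<Child> Children = new List<Child>();

    public void RemoveChild(Child child)
    {
        if (!Children.Remove(child)) return;   // already removed: stop the recursion
        child.Type = null;
        child.Parent?.RemoveChild(child);
    }
}
```

Calling either side unlinks the child from both parents exactly once.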

I don't think this is exactly possible, because NHibernate has no way of knowing whether the record has been orphaned. It could check whether any other classes relate to the child class, but then it would be assuming it's aware of the entire DB structure, which may not be the case.
You're not completely out of luck however. By using the IList interface in conjunction with a custom ICascadeDeleteChild interface you'd create you can come up with a rather seamless solution. Here are the basic steps.
Create a class that implements IList and IList&lt;T&gt; and call it CascadeDeleteList or something along those lines.
Create a private .NET List inside this class and simply proxy the various method calls to it.
Create an Interface called ICascadeDeleteChild and give it a method Delete()
In the Delete method of your CascadeDeleteList, check the type of the object to be deleted. If it is of type ICascadeDeleteChild, call its Delete method.
Change your Child class to implement the ICascadeDeleteChild interface.
I know it seems like a pain but once this is done these interfaces should be simple to port around.
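A condensed sketch of those steps, assuming the proxying can be had for free by deriving from Collection&lt;T&gt; rather than hand-implementing every IList member (the interface and list names follow the answer; TrackedChild and the flag inside Delete() are illustrative stand-ins for a real session call):

```csharp
using System.Collections.ObjectModel;

public interface ICascadeDeleteChild
{
    void Delete();   // e.g. flags the entity for deletion in the session
}

public class CascadeDeleteList<T> : Collection<T>
{
    protected override void RemoveItem(int index)
    {
        // If the child opted in, cascade the delete before removing it.
        if (this[index] is ICascadeDeleteChild child) child.Delete();
        base.RemoveItem(index);
    }
}

// Example child that opts in to cascade deletion.
public class TrackedChild : ICascadeDeleteChild
{
    public bool Deleted;
    public void Delete() => Deleted = true;   // real code would involve the session
}
```

Because Collection&lt;T&gt; routes Remove, RemoveAt, and Clear through the protected virtual methods, one override covers every removal path.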

Related

EF 6 Saving multiple levels of child entities and multiple parents

Given this model:
I would like to be able to save the relations in one SaveChanges call. That means I either have a new or updated ContainerParent and multiple first-level children, and each of those can go 1 or 2 levels deeper.
The thing is, the children have both a self-referencing key, for finding their parent, and a key to the container, so the container can get all its children independently of their hierarchical level.
With this pseudo-code (in the case where all entities are created, not updated):
var newContainerParent = context.ContainerParents.Add(new ContainerParent());
var rootChild = context.Children.Add(new Child());
var secondLevelChild = new Child();
var thirdLevelChild = new Child();
secondLevelChild.Children.Add(thirdLevelChild);
rootChild.Children.Add(secondLevelChild);
newContainerParent.Children.Add(rootChild);
context.SaveChanges();
The problem with this code is that only the rootChild will have the FK to the container set. I also tried adding the children both to their parent child AND to the container:
rootChild.Children.Add(secondLevelChild);
newContainerParent.Children.Add(rootChild);
newContainerParent.Children.Add(secondLevelChild);
newContainerParent.Children.Add(thirdLevelChild);
I have the same problem while updating an existing container with new children. I set all the children with the already-existing key of the parent, but when SaveChanges is called the key is not saved; it's reverted to null.
I fixed it by doing all this in 2 steps: saving once, then getting all the newly created children and updating them with the parent key, then calling SaveChanges again.
I have a feeling I'm missing something, that I should not need to save twice.
The number or frequency of SaveChanges calls has no real implications, for performance or otherwise. So why do you want to minimize it?
Actually, storing such a self-referencing table with one SaveChanges is not possible, because the ID of a new entity is generated when it is saved. So you first need to save it, and then you get the ID that you can store in another entity. This may require further UPDATE commands to the entity you just stored.
You have two options to solve this.
1) Manually generated IDs: handle it all yourself, and you know the ID before you store it.
2) If you have no circularity in your dependencies, i.e. a perfect tree structure, save the items top-down, level by level. I assume the children have a reference to their parents, so the root has no reference to any other item; you save that first, then the first-level children, and so on.
This requires multiple SaveChanges calls, but that is not a disadvantage. It is one INSERT SQL command per entity anyway, no matter whether you do it in 1 SaveChanges or in 100.
Both solutions avoid UPDATE commands to the entities; they do INSERTs only.
Entity Framework could actually figure out these dependencies itself and compute an insertion order for new entities, but this is not implemented today, or not perfectly, especially for self-referencing tables. The order of saving items is effectively random, so you have to enforce the order with intermediate SaveChanges calls.
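The top-down ordering from option 2 can be sketched independently of EF: group the tree level by level and issue one SaveChanges per group, so every child's parent already has its generated ID when the child is inserted (the Node shape below is a hypothetical stand-in for the entities):

```csharp
using System.Collections.Generic;

public class Node
{
    public List<Node> Children = new List<Node>();
}

public static class TreeSaver
{
    // Returns the nodes grouped level by level; each group maps to one
    // SaveChanges call, after which that level's IDs are known.
    public static List<List<Node>> Levels(Node root)
    {
        var levels = new List<List<Node>>();
        var current = new List<Node> { root };
        while (current.Count > 0)
        {
            levels.Add(current);
            var next = new List<Node>();
            foreach (var n in current) next.AddRange(n.Children);
            current = next;
        }
        return levels;
    }
}
```

In real code you would call context.SaveChanges() after adding each returned group, then set the ParentId on the next group from the now-populated IDs.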

NHibernate, Add parent if not exists when adding child

I'm using this code to determine if it should create a parent or if the parent already exists:
var id = 1;
var parent = Session.Get<Parent>(id);
if (parent == null)
parent = new Parent();
var child = new Child();
child.Parent = parent;
parent.Children.Add(child);
Session.Save(parent);
Right now this seems very inefficient; this method hits the database with 3 separate SQL queries every time a child is added:
Get parent based on id
Insert child
Insert/update parent (depending if the parent did exist)
Could i do this in a better way?
There are, in fact, two scenarios.
In the first case, when we really do not know whether there is a parent with the provided id, there is no other way: such a solution will always require that many SQL statements, to find out whether there is a parent and to insert one if not.
In the second scenario, if we do know that there is a parent in the DB (with the provided id), we can make it more efficient with built-in support: Load&lt;Parent&gt;(id)
9.2. Loading an object
... Load() returns an object that is an uninitialized proxy and
does not actually hit the database until you invoke a method of the
object...
Get more details here:
NHibernate difference between Query<T>, Get<T> and Load<T>
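The difference can be illustrated without NHibernate at all; in this pure-C# analogy, Lazy&lt;T&gt; plays the role of the uninitialized proxy Load returns, and the fetch function is a stand-in for the real SELECT:

```csharp
using System;

public static class GetVsLoadDemo
{
    public static int DbHits;   // counts simulated SELECTs

    static string FetchFromDb(int id) { DbHits++; return "parent-" + id; }

    // Get: hits the database immediately.
    public static string Get(int id) => FetchFromDb(id);

    // Load: returns a "proxy" that defers the hit until first use.
    public static Lazy<string> Load(int id) => new Lazy<string>(() => FetchFromDb(id));
}
```

If the returned object is only ever assigned to a foreign key (as in child.Parent = parent above), the deferred query never fires at all, which is exactly the saving Load offers.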
Given that you are in a regular business logic development, I wouldn't mind these queries.
Get is quite fast, because it usually performs a lookup by primary key, which usually is a clustered index (don't know about SQLite).
Existing parents need to be found to be linked anyway. You can postpone the actual query by using Load, but in my experience you need the parent anyway.
Updating the parent may be unnecessary. What does it update? Is there a missing inverse-mapping?
If you are doing the whole thing for a lot of records (not only one), you may consider other options. (Batches, pre-fetching, Futures, whatever.)
If you think you need a highly optimized implementation for exactly this code, you should consider avoiding the ORM and implementing it in plain SQL (and recheck whether it would really use fewer queries). Writing object-oriented code requires having objects in memory, which sometimes requires getting data from the database that a highly optimized implementation wouldn't need.

Workflow for data from Backbone through NHibernate

This question isn't code-centric, it's a question about idioms. I'm using Backbone/Marionette on the front and C#, NHibernate in the back.
I've got a few tables mapped and working for creates and updates. For simplicity, say I've got two tables, a parent and a child, and the child has potentially many rows. The child's reference to the parent is not-nullable, so I've got an Inverse relationship, and that's all working. The flow of data from Backbone to the Controller to NHibernate is pretty simple, and I'm using ISession.SaveOrUpdate() at the end. I can post mappings and so forth if necessary. I will say that the mappings Fluent NHibernate generates use a Bag on the parent.
Here's a concrete example of the situation I'm trying to understand. Say I've got a parent entry with two rows in the child table. I manipulate the data so that one of the children is removed, but no other changes are made. Javascript sends over an object "tree" with the parent entry and the one child row that's left. The mappings are all handled fine, but the sql that's generated is a bunch of (unnecessary, but whatever) update statements. What I would like to happen instead is that NHibernate notices that there is only one child relationship in this new object, but there are two children in the actual database, and then NHibernate deletes that other child. The 'cascade-delete-orphans' option is not working, because the other child isn't actually being orphaned. It still has a reference to the parent in the fk column, and that column is non-nullable anyway, which is why I used the Inverse mapping option.
Is that possible to setup in the mappings? If not, what is a good way to tackle this situation?
Since you are sending an object from the client side, then creating the entity from that object and trying to persist it, NHibernate will not automatically delete the child entity, since it does not know that the child object was deleted (it only sees that you are trying to update one parent entity and one child entity), which is correct in my opinion. For example, if you just wanted to update a field on the parent entity, you would otherwise have to load the entire object graph to do it, or NHibernate would delete all children, since they were not loaded.
What you should do here is load the parent entity, remove the missing (deleted) child entities from it, and then persist (instead of mapping the entity). The code should look like the following:
void Update(ParentDto parentDto)
{
    Parent parent = _session.Get&lt;Parent&gt;(parentDto.Id);
    // update parent fields

    // find the removed child from parent (placeholder for your own lookup)
    var childRemoved = FindRemovedChild(parent, parentDto);
    parent.Children.Remove(childRemoved);

    _session.SaveOrUpdate(parent);
}
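The "find removed child" step is essentially a set difference over IDs: children present on the loaded entity but absent from the incoming DTO are the ones the client deleted. A sketch (the ID-based shapes are assumptions about ParentDto/Child):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ChildDiff
{
    // IDs on the loaded entity that are missing from the DTO were
    // deleted on the client and should be removed from the collection.
    public static List<int> RemovedIds(IEnumerable<int> loadedChildIds,
                                       IEnumerable<int> dtoChildIds)
        => loadedChildIds.Except(dtoChildIds).ToList();
}
```

With cascade="all-delete-orphan" on the parent's bag, removing those children from parent.Children is what produces the DELETEs.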

Best way to implement parent child relationship

I am working on a CAD application, where I have a block entity. Each block entity has a list of child entities. When these entities are rendered, every block entity knows about its child entities (as it can find them in its list), and hence when a block entity is selected, the whole block entity along with its child entities gets selected. However, a child entity does not know its parent block or the other child entities of that block, and because of this, when a child entity is selected, I cannot get the whole block entity with all its child entities selected.
As a fix to this problem, I created a property in the child entities to hold a reference to the parent block entity. But then there might be issues with cross-referencing, making my data structures error-prone.
For example: with a Copy command, somebody working on these data structures a few days from now might just copy the same parent while creating a copy of a child entity. However, the new copy should belong to some other parent block.
Please suggest a better way to implement this relationship, so that when a child entity is selected I can select the whole block entity along with all its child entities.
public class BlockEntity
{
public List<ChildEntity> Children = new List<ChildEntity>();
}
public class ChildEntity
{
public readonly BlockEntity Parent;
}
I have recently come across this issue. I, and others I talked with, came up with two choices:
Do what you are doing with the Parent<-->Child relationship, both knowing about each other.
Have a Parent-->Child relationship and make everything be handled at the parent.
Both are viable solutions, but both have their issues. The first case, what you are doing, seems better, and Microsoft seems to use it, for example with their TreeView/TreeNode and DataTable/DataRow/etc. objects, because they can each reference back to their respective parents.
Maybe add constraints to the parent, such as not allowing access to the parent's child collection directly, but only through an AddChild function in which you can do the necessary linking. Or do what #Irfan suggests and have the child require the parent to be passed to its constructor. Constrain your copy method as well, but always document everything to remove as much confusion as possible.
The latter of the above examples is a little easier, as everything is always accessed from the parent. This was our solution. We have many functions in the parent to check for and manage the children within it. So in this case you would select the child in the CAD app, then go to the parents and check their collections to see whether the child exists there. If it does, you select the parent and the rest of the children.
It's up to you, but in each case you need to add constraints and error checking to make sure things happen as close to your desired way as possible. Hope this helps.
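The constraints suggested above might look like this: the child list stays private, AddChild does the linking (including re-parenting if the child already belongs to another block), and Copy deliberately leaves the clone orphaned so the caller must attach it. All names and details are illustrative, not the asker's actual code:

```csharp
using System.Collections.Generic;

public class ChildEntity
{
    public BlockEntity Parent { get; internal set; }

    // Copy deliberately leaves Parent null; the new owner must AddChild it.
    public ChildEntity Copy() => new ChildEntity();
}

public class BlockEntity
{
    private readonly List<ChildEntity> _children = new List<ChildEntity>();
    public IReadOnlyList<ChildEntity> Children => _children;

    public void AddChild(ChildEntity child)
    {
        child.Parent?.RemoveChild(child);   // detach from the old parent, if any
        child.Parent = this;
        _children.Add(child);
    }

    public void RemoveChild(ChildEntity child)
    {
        if (_children.Remove(child)) child.Parent = null;
    }
}
```

Because the only way in or out of the collection is through AddChild/RemoveChild, the back-reference can never disagree with the list, which addresses the copy-command worry from the question.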
What reference problems will you get by creating a reference to the parent? With this readonly reference, which can only be set during construction of the object, I see no problem at all.
The constructor (I am sure you know) looks like:
public ChildEntity(BlockEntity p)
{
    Parent = p;
}

// Test just to show that Parent cannot be assigned elsewhere
public void Test()
{
    // the line below will show a compile error
    Parent = new BlockEntity();
}
Do you think there would be some problem with this? The list is a loose reference, so there is no stack-overflow exception.

Illegal attempt to associate a collection with two open sessions error when deleting via services

There are several posts related to this error but I'm running into something different.
Very simple NHibernate scenario: parent and child tables with a one-to-many relationship. One parent can have multiple children.
I need to delete a Parent record with child records so I put together very basic code which works fine:
var parentRecord = ParentRecordRepository.Get(parentRecordId);
var childRecordList = ChildRecordRepository.GetAll()
    .Where(c =&gt; c.ParentRecord.Id == parentRecord.Id)
    .ToList();
foreach (var childRecord in childRecordList)
{
    ChildRecordRepository.Delete(childRecord);
}
ParentRecordRepository.Delete(parentRecord);
Works. Deletes child and the parent records.
If I take the logic above and turn it into a service method, DeleteRecord(ParentRecord parentRecord), it starts failing with the "Illegal attempt to associate a collection with two open sessions" error on ParentRecordRepository.Delete(parentRecord);
Services are called by instantiating a service class and then calling the DeleteRecord method:
var parentRecord = ParentRecordRepository.Get(id);
var recordService = new RecordService();
recordService.DeleteRecord(parentRecord);
Can't figure out why. Help?
Based on your working example, I'm a bit suspicious about what your ParentRepository is doing to populate its children. If you have cascade options set up correctly and the mapping includes the child object definitions with the parent, then you shouldn't be deleting children independently; deleting the parent would work, deleting the children as expected. If I had to guess, I'd expect to see something like:
ChildRecordRepository.GetAll().Where(c=>c.ParentId == Id);
somewhere in the parent Repository.Get call stack, where the parent and child repositories are using different Session instances.
Perhaps provide the mapping configuration for parent and child, and the contents of the parent's Get() method.
I tried creating a session by instantiating an instance of the repository and then performing operations on an object pulled from it.
Then I would open a new repository session within the service layer and use it to try to delete the object that came from the controller-created session. That was the problem.
The bottom line is that the same session has to be used to both get and delete an object.
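One hypothetical way to honour that rule is for the controller and the service to share the same repository instances (and therefore the same underlying session), instead of the service constructing its own. The IRepository shape and the in-memory stand-in below are assumptions, not the question's actual repository classes:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ParentRecord { public int Id; }
public class ChildRecord { public ParentRecord ParentRecord; }

public interface IRepository<T>
{
    IEnumerable<T> GetAll();
    void Delete(T item);
}

// In-memory stand-in so the wiring can be exercised without a database.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> _items = new List<T>();
    public void Add(T item) => _items.Add(item);
    public IEnumerable<T> GetAll() => _items.ToList();
    public void Delete(T item) => _items.Remove(item);
    public int Count => _items.Count;
}

public class RecordService
{
    private readonly IRepository<ParentRecord> _parents;
    private readonly IRepository<ChildRecord> _children;

    // The repositories come in from the caller; nothing is newed up here,
    // so every operation runs against the same session.
    public RecordService(IRepository<ParentRecord> parents, IRepository<ChildRecord> children)
    {
        _parents = parents;
        _children = children;
    }

    public void DeleteRecord(ParentRecord parentRecord)
    {
        var toDelete = _children.GetAll()
            .Where(c => c.ParentRecord.Id == parentRecord.Id)
            .ToList();
        foreach (var child in toDelete) _children.Delete(child);
        _parents.Delete(parentRecord);
    }
}
```

The design choice is simply dependency injection: because the service never opens its own session, an entity loaded in the controller can never end up associated with two open sessions.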
