EF 6.1.3.
I have a domain which contains many instances of a "Header/Item" type pattern, where a Header can have many Items (1 to many) and also has a "current" or "latest" Item.
This is represented as follows:
Header
    Guid Id
    Guid CurrentItemId
    Item CurrentItem
    ICollection<Item> AllItems

Item
    HeaderId
    Id
The PK of the Items is always the HeaderID + ItemID. The reason being that, by far, the most common access pattern for items is to list all items related to a given header, and having HeaderID be the first part of the PK/clustered index means we get that data with clustered index seeks.
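For reference, a composite key like that is configured along these lines in EF6 fluent configuration (a sketch, not necessarily our exact code):

// HeaderId is listed first so the composite PK (and clustered index) leads with it.
modelBuilder.Entity<Item>()
    .HasKey(i => new { i.HeaderId, i.Id });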
Our problem is that when we use the CurrentItem navigation property, it only ever uses the ItemID to do the lookup, which results in not so great query plans.
I assume this is because EF's conventions cause it to use only the CurrentItemId to look up the CurrentItem. My question is: is there a way for me to tell EF to always perform its joins for CurrentItem by mapping Header.Id, Header.CurrentItemId -> Item.HeaderId, Item.Id?
I believe this is a slightly different scenario than the one described here: composite key as foreign key
In my case, I have a one-to-one mapping, not a one-to-many, and there doesn't seem to be a WithforeignKey method available for that scenario.
We ended up not being able to get EF to generate the SQL the way we wanted - so we wrote a db command interceptor to dynamically find instances of this join and re-write the join to match our designed composite key.
We configure this at the DbContext level like so:
this.ModifyJoin<Item, Header>(
    (i) => new Header() { CurrentItemId = i.Id },                   // What to find
    (i) => new Header() { CurrentItemId = i.Id, Id = i.HeaderId }); // What to replace with
This information is attached to the context instance itself, so when the command interceptor sees the overrides, it uses them to re-write the SQL.
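For illustration, here is a heavily simplified sketch of the interceptor side (ModifyJoin above is our own extension; the hard-coded fragments and plain string replacement below are hypothetical stand-ins for the real alias-aware rewrite):

using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;

// Sketch only: swaps the convention-based join predicate for the composite-key one.
public class CompositeJoinInterceptor : IDbCommandInterceptor
{
    private const string ConventionJoin =
        "[Extent2].[Id] = [Extent1].[CurrentItemId]";
    private const string CompositeJoin =
        "[Extent2].[Id] = [Extent1].[CurrentItemId] AND [Extent2].[HeaderId] = [Extent1].[Id]";

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        // Naive rewrite; the real implementation resolves the overrides registered
        // via ModifyJoin on the context and has to respect EF's table aliasing.
        command.CommandText = command.CommandText.Replace(ConventionJoin, CompositeJoin);
    }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}

// Registered once at startup:
// DbInterception.Add(new CompositeJoinInterceptor());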
This ends up working well for most scenarios, but there are some, such as when additional filtering is done on the Item table as part of the LINQ statement, where the aliasing rules used by EF become too complex to follow without writing a full SQL parser.
For our use, this results in the ideal join about 90% of the time, which is good enough for us.
The code to do all this isn't difficult, but it's too big to put here. Add a comment if you want a copy and I'll put it up on GitHub.
I am developing an application using EF6 with Fluent API and I have an issue managing a many-to-many relationship.
For some internal reasons the join table has a specific format including 4 fields:
- Left Id (FK)
- Right Id (FK)
- StartDate (datetime)
- EndDate (datetime)
Deleting a link is in fact setting the EndDate as not null, but I don't know how to configure that in EF6.
On the other hand, when reading links, records with a non-NULL EndDate shouldn't be considered.
Can you give me a solution?
Thank you.
Join tables and EF
EF automates some things for you. For this, it uses convention over configuration. If you stick to the convention, you can skip a whole lot of common configuration.
For example, if your entity has a property named Id, EF will inherently assume that this is the PK.
Similarly, if two entity types have nav props that refer to each other (and only one direct link between the two entities exists), then EF will automatically assume that these nav props are the two sides of a single many-to-many relationship. EF will create a join table in the database, but it will keep this hidden from you and let you deal with the two entity types themselves.
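As a minimal illustration of that convention (hypothetical class names): with nothing more than these two classes, EF 6 infers the many-to-many relationship and generates a hidden join table for it.

using System.Collections.Generic;

public class Student
{
    public int Id { get; set; }   // convention: a property named Id becomes the PK
    public virtual ICollection<Course> Courses { get; set; }
}

public class Course
{
    public int Id { get; set; }
    public virtual ICollection<Student> Students { get; set; }
}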
For some internal reasons the Join table has a specific format including 4 fields - Left Id (FK) - Right Id (FK) - StartDate (dateTime) - EndDate (datetime)
Your join table no longer conforms to the shape of a conventional, automatically generated EF join table. You are expecting a level of custom configurability that EF cannot provide by blind convention, which means you have to configure this explicitly.
Secondly, the fact that you have these additional columns implies that you wish to use this data at some point (presumably to show the historical relations between two entities). Therefore, it doesn't make sense to rely on EF's automatic join tables, as the join table and its content would be hidden from the application/developer.
It's possible that the second consideration is invalid for you, if you don't need the application to ever fetch the ended entries. But the overall point still stands.
The solution here is to make the join record an explicit entity of its own. In essence, you are not dealing with a many-to-many here, you are dealing with a specific entity (the join element) with two one-to-many relationships (one for each of the two entity types).
This enables you to achieve exactly what you want. Your expectation of what EF can automate for you simply doesn't apply in this case.
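A rough sketch of what that explicit join entity and its EF6 fluent configuration could look like (all names here are placeholders; Left and Right stand for your two entity types, each assumed to have a Joins collection):

public class JoinEntity
{
    public int LeftId { get; set; }
    public int RightId { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime? EndDate { get; set; }   // null means the link is still active

    public virtual Left Left { get; set; }
    public virtual Right Right { get; set; }
}

// In OnModelCreating:
modelBuilder.Entity<JoinEntity>()
    .HasKey(j => new { j.LeftId, j.RightId, j.StartDate });   // assuming a link can recur over time

modelBuilder.Entity<JoinEntity>()
    .HasRequired(j => j.Left)
    .WithMany(l => l.Joins)
    .HasForeignKey(j => j.LeftId);

modelBuilder.Entity<JoinEntity>()
    .HasRequired(j => j.Right)
    .WithMany(r => r.Joins)
    .HasForeignKey(j => j.RightId);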
Soft delete
Deleting a link is in fact setting the EndDate as not null, but I don't know how to configure that in EF6.
In general, this is known as "soft delete" behavior, albeit maybe slightly differently here. In a regular soft delete pattern, when an entry is deleted, the database secretly retains the entry but the application doesn't know that and doesn't see the entry again.
It's unclear if you intend for ended entries to still show up in the application, e.g. the relational history. If this is not the case, then your situation is exactly soft delete behavior.
This isn't something you configure at the model level, but rather something you override in your DbContext's SaveChanges behavior. A simple example of how I implement a soft delete:
public override int SaveChanges()
{
    // Get all entries of the change tracker (of a given type)
    var entries = ChangeTracker.Entries<IAuditedEntity>().ToList();

    // Filter the entries that are being deleted
    foreach (var entry in entries.Where(entry => entry.State == EntityState.Deleted))
    {
        // Change the entry so it updates the entity instead of deleting it
        entry.Entity.DeletedOn = DateTime.Now;
        entry.State = EntityState.Modified;
    }

    return base.SaveChanges();
}
This allows you to prevent deletions to the entities that you want this to apply to, which is the safest way to implement a soft delete as this serves as a catch-all for database deletes coming from whichever consumer uses this db context.
The solution to your question is pretty much the same. Assuming you named your join entity (see previous chapter) JoinEntity:
public override int SaveChanges()
{
    var entries = ChangeTracker.Entries<JoinEntity>().ToList();

    // Filter the entries that are being deleted
    foreach (var entry in entries.Where(entry => entry.State == EntityState.Deleted))
    {
        // Change the entry so it updates the entity instead of deleting it
        entry.Entity.EndDate = DateTime.Now;
        entry.State = EntityState.Modified;
    }

    return base.SaveChanges();
}
Word of warning
Soft deletes tend to be a catch-all for all entities (or at least a significant chunk of your database). Therefore, it makes sense to catch this at the db context level as I did here.
However, if this entity is unique in that it is soft deleted, then this is more of a business logic implementation than a DAL architecture concern. If you start writing many custom rules for different types of entities, the db context logic is going to get cluttered and it's not going to be nice to work with, because you need to account for multiple possible operations happening during the SaveChanges.
Take note to not push what is supposed to be a business logic decision to the DAL. I can't draw this line for you, it depends on your context. But evaluate whether the db context is the best place to implement this behavior.
Can you give me a solution?
If your linking table has extra columns, you have to model it as an entity of its own, and the EndDate logic for navigation needs to be explicit. EF won't do any of that for you.
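And for the reading side (ended links should not be considered), the filter also has to be explicit. A sketch, assuming the JoinEntity shape from the answer above and a DbSet named JoinEntities:

// Only links that have not been ended yet.
var activeLinks = context.JoinEntities
    .Where(j => j.EndDate == null)
    .ToList();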
I'm trying to get all the HotFix records and include all the details associated with them where the property Available is 1. This is my code:
public static IList<HotFix> GetAllHotFix()
{
    using (Context context = new Context())
    {
        return context.HotFix
            .Include(h => h.AssociatedPRs)
            .Include(h => h.Detail.Where(d => d.Available = 1))
            .ToList();
    }
}
And I'm getting that error. I tried using .ThenInclude but couldn't solve it.
Inside HotFix I have:
[Required]
public virtual List<HotFixDetail> Detail { get; set; }
Although you forgot to write your class definitions, it seems that you have a HotFix class. Every HotFix has a sequence of zero or more AssociatedPRs and a sequence of zero or more Details.
Every Detail has at least a numeric property Available.
You want all HotFixes, each with all its AssociatedPRs, and only those Details that have an Available value equal to 1 (didn't you mean that Available is a Boolean?).
When using Entity Framework, people tend to use Include to get an item with its sub-items. This is not always the most efficient method, as it fetches complete rows from the table, including all the properties that you do not plan to use.
For instance, if you have a one-to-many relationship, Schools with their Students, then each Student will have a foreign key to the School that this Student attends.
So if School [10] has 1000 Students, then every Student will have a foreign key with value 10. If you use Include to fetch School [10] with its Students, then this foreign key value is also selected and sent 1000 times. You already know it will equal the School's primary key value, hence it is a waste of processing power to transport this value 10 a total of 1001 times.
When querying data, always use Select, and Select only the properties you actually plan to use. Only use Include if you plan to update the fetched data.
Another piece of advice: use plurals to describe sequences and singulars to describe one item in a sequence.
Your query will be:
var result = context.HotFixes.Select(hotfix => new
{
    // Select only the hotfix properties you actually plan to use:
    Id = hotfix.Id,
    Date = hotfix.Date,
    ...

    AssociatedPRs = hotfix.AssociatedPRs.Select(associatedPr => new
    {
        // again, select only the associatedPr properties that you plan to use
        Id = associatedPr.Id,
        Name = associatedPr.Name,
        ...

        // foreign key not needed, you already know the value
        // HotFixId = associatedPr.HotFixId
    })
    .ToList(),

    Details = hotfix.Details
        .Where(detail => detail.Available == 1)
        .Select(detail => new
        {
            Id = detail.Id,
            Description = detail.Description,
            ...

            // not needed, you know the value:
            // Available = detail.Available,
            // not needed, you know the value:
            // HotFixId = detail.HotFixId,
        })
        .ToList(),
});
I used an anonymous type. You can only use it within the procedure in which the anonymous type is defined. If you need to return the fetched data, you'll need to put the selected data in a class.
return context.HotFixes.Select(hotfix => new HotFix()
{
    Id = hotfix.Id,
    Date = hotfix.Date,
    ...

    AssociatedPRs = hotfix.AssociatedPRs.Select(associatedPr => new AssociatedPr()
    {
        ... // etc
Note: you still don't have to fill all the fields, unless your function's requirements specifically state this.
It might be confusing for users of your function to not know which fields will actually be filled and which ones will not. On the other hand: when adding items to your database they are already accustomed not to fill in all fields, for instance the primary and foreign keys.
As a solution to the fact that not all fields are filled, some developers add an extra layer: the repository layer (using the repository pattern). For this they create classes that represent the data that callers want to put into storage and fetch back from it. Usually those callers are not interested in the fact that the data is saved in a relational database, with foreign keys and the like, so the repository classes won't have the foreign keys.
The advantage of the repository pattern is that the repository layer hides the actual structure of your storage system. It even hides that it is a relational database; it might just as well be a JSON file. If the database changes, users of the repository layer don't have to know about it, and probably don't need to change either.
A repository pattern also makes it easier to mock the database for unit testing: since users don't know that the data is in a relational database, for the unit test you can store the data in a JSON file, or a CSV file, or whatever.
The disadvantage is that you need to write extra classes that hold the data that is to be put into the repository or fetched from it.
Whether it is wise to add this extra layer or not, depends on how often you expect your database to change layout in the future, and how good your unit tests need to be.
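If you do go that route, a minimal sketch of what such a repository boundary could look like for this case (interface, DTO, and property names are invented for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

// Storage-agnostic shape handed to callers: no foreign keys, no EF types.
public class HotFixSummary
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public List<string> AvailableDetails { get; set; }
}

public interface IHotFixRepository
{
    IList<HotFixSummary> GetAllHotFixes();
}

// EF-backed implementation; a unit test could swap in an in-memory fake instead.
public class EfHotFixRepository : IHotFixRepository
{
    public IList<HotFixSummary> GetAllHotFixes()
    {
        using (var context = new Context())
        {
            return context.HotFixes
                .Select(hotfix => new
                {
                    hotfix.Id,
                    hotfix.Date,
                    Details = hotfix.Details
                        .Where(detail => detail.Available == 1)
                        .Select(detail => detail.Description),
                })
                .AsEnumerable()   // the SQL part is done; shape the DTO in memory
                .Select(hotfix => new HotFixSummary
                {
                    Id = hotfix.Id,
                    Date = hotfix.Date,
                    AvailableDetails = hotfix.Details.ToList(),
                })
                .ToList();
        }
    }
}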
I have a couple of questions about update functionality using NHibernate.
I have Customer and Location entities with a 1:n relationship. Customer has a Location property. While creating/updating a Customer entity, I just assigned the Location property and committed the changes.
new Location() { Id = ViewModel.LocationId };
Is this the proper way to do it, or do I need to retrieve the Location entity from the db and attach it again, like below?
newCust.Location = GetlocationfromDB(ViewModel.LocationId);
And how does this work with m:n relationships? I have Order and OrderItems entities. So, if a new group is added/deleted, do I need to check which group was added, get it from the db, and attach it, or will just the group id do fine?
This isn't the right way to do it. It might work if you have your unsaved-value mapping right for the primary key, but the proper way to do it is to use session.Load<Location>(ViewModel.LocationId); see http://ayende.com/blog/3988/nhibernate-the-difference-between-get-load-and-querying-by-id
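In code, that is roughly (a sketch; type names taken from the question):

// Load returns a proxy for the existing Location without hitting the database,
// so the FK can be set on the new Customer without a separate SELECT.
newCust.Location = session.Load<Location>(ViewModel.LocationId);
session.SaveOrUpdate(newCust);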
There are a number of ways of dealing with this, but it sounds like you want your relationship to be mapped as a set (to prevent duplicates) rather than a bag. If you map it as a set and use ISet for the property type of the relationship, the duplicates will be handled for you. If however you use a bag, you would need to remove duplicates in your own code. Again, you should be using session.Load to get the group if it's an already existing group.
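A hedged sketch of the set-based side of that, assuming Fluent NHibernate and an Order/Group pair of entities like the ones described (depending on the NHibernate version, ISet comes from System.Collections.Generic or Iesi.Collections.Generic):

public class Order
{
    public virtual int Id { get; set; }

    // A set prevents the same Group from appearing twice in the collection.
    public virtual ISet<Group> Groups { get; set; }
}

// In a ClassMap<Order>:
HasManyToMany(x => x.Groups)
    .Table("OrderGroup")
    .AsSet();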
Hello, I'm trying to do the impossible, apparently.
I need a self-referencing table with a many-to-many relationship to itself that also has a specific order, in C# Entity Framework (4.2), database first.
Think of it like Friends having Friends, where they rank their friendships from Best Friend to Worst Friend.
Is there any way to do this without using the "FriendToFriend" relationship entity? I would like to be able to use Friend.Friends (removing the order column creates it), but with a default order based on their friendshipOrder. My workaround is looking like extending the generated classes to have a new property for Friends in order.
Anyone else have any better ideas?
Entity framework does not support ordered collections. This is one of many situations where EF shows its immaturity.
Try NHibernate if it is a viable option. It supports ordered collections.
With EF you will have to map the intermediate table with the extra column and manually adjust the ordering according to your logic.
I know I'm late to this, but when designing this as a data model, I would prefer to add a relationship table, and that relationship table should have a property that defines the order (for example, worst friend is 0, best is 100).
Then, in EF, I would explicitly order by that property, if the list I'm retrieving should be of that order.
That means that whatever method you use to query the data, that relationship can be used consistently. So if you were using EF, you could use it (although, yes, it's not as handy as Friend.Friends, the code would be clearer about its intention: Friend.FriendRelationships.OrderBy(p => p.OrderValue).Select(p => p.Friend)), and if you were using direct SQL, you could use it too.
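For reference, a rough sketch of the relationship entity that query assumes (property names invented here):

public class FriendRelationship
{
    public int OwnerId { get; set; }      // the friend who owns the ranked list
    public int FriendId { get; set; }     // the friend being ranked
    public int OrderValue { get; set; }   // e.g. 0 = worst friend, 100 = best friend

    public virtual Friend Owner { get; set; }
    public virtual Friend Friend { get; set; }
}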
If I came across Friend.Friends in code, I would have no idea what ordering would be applied to it.
If you must have it though, you could always add it as a non-db property -
public class Friend
{
    public virtual List<FriendRelationship> UnorderedFriendList { get; set; }

    [NotMapped]
    public IEnumerable<Friend> Friends
    {
        get
        {
            // order by the relationship's OrderValue first, then project to the Friend
            return UnorderedFriendList.OrderByDescending(p => p.OrderValue).Select(p => p.Friend);
        }
    }
}
I have an entity that maps to a table called Rule. The table for this entity has an FK to another table called Category. I'm trying to figure out how to pull in a property from Category in my Rule entity. I'm pretty sure I want to use a join in my entity mapping, but I can't figure out how to configure it so that it works. Here is my mapping:
Join("Category", x =>
{
x.Map(i => i.CategoryName, "Name");
x.KeyColumn("CategoryId");
x.Inverse();
});
Here is the SQL that it's generating...
SELECT ...
FROM Rule rules0_ left outer join Category rules0_1_ on rules0_.Id=rules0_1_.CategoryId
WHERE ...
Here is the SQL that I want.
SELECT ...
FROM Rule rules0_ left outer join Category rules0_1_ on rules0_.CategoryId=rules0_1_.Id
WHERE ...
I can't seem to find anything on the JoinPart that will let me do this. Subselect looks promising from the little bit of documentation I've found, but I can't find any examples of how to use it. Any advice on this problem would be much appreciated. Thanks!
"Join" is poorly named. a "join" in an NHibernate mapping implies a zero-to-one relationship based on a relation of the primary keys of the two tables. You would use a join if, for instance, you had a User table and a UserAdditionalInfo table, with zero or one record per User. The UserAdditionalInfo table would likely reference the PK from User as both a foreign key and its own primary key. This type of thing is common when a DBA has to religiously maintain a schema for a legacy app, but a newer app needs new fields for the same conceptual record.
What you actually need in your situation is a References relationship, where a record has a foreign key relationship to zero or one other records. You'd set it up fluently like so:
References(x => x.Category)
    .Column("CategoryId")
    .Inverse()
    .Cascade.None();
The problem with this is that Category must now be mapped; it is a separate entity which is now related to yours. Your options are to live with this model; to "flatten" it by making the entity reference non-public, changing the mapping to access it that way, and coding "pass-throughs" to the properties you want public; or to use a tool like AutoMapper to project this deep domain model into a flat DTO at runtime for general use. They all have pros and cons.
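As an example of the "flatten" option, the entity could keep the reference non-public and expose only pass-throughs (a sketch; the References mapping above would then have to be configured to access the non-public member, which Fluent NHibernate supports via its Reveal helper):

public class Rule
{
    public virtual int Id { get; set; }

    // The related entity stays out of the public surface of the class...
    protected virtual Category Category { get; set; }

    // ...and only the properties you actually want are passed through.
    public virtual string CategoryName
    {
        get { return Category == null ? null : Category.Name; }
    }
}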