I’m trying to copy/clone an entity graph with EF 6.1 and I’m getting duplicate entities.
Below is a piece of my model, which consists of a Template that I want to modify, copy and assign to different users, something like a Save As function.
Here is my entity model:
What I’m doing is:
var newTemplate = ctx.Templates
.Where(t => t.TemplateId == SelectedTemplate.TemplateId)
.Include(t => t.Properties.Select(p => p.PropertyCollections))
.Include(t => t.Properties.Select(p => p.Values))
.AsNoTracking()
.First();
newTemplate.TemplateName = newTemplateName;
ctx.Templates.Add(newTemplate);
ctx.SaveChanges();
And what I get is shown below, where “Template1” is the source and “Template2” is the copy, in which every PropertyCollection has a duplicate entry for each Property.
Result after copy:
I understand that with AsNoTracking there is no identity map, which is the reason behind this, but I can’t find even a custom solution.
I didn't really test your code, but I think your entities might really get messed up when doing it that way. Maybe this approach would work for you; it's for EF4 but might still apply.
You are adding the whole graph, so EF is inserting everything. You are using AsNoTracking to "trick" EF rather than for its original purpose.
I would suggest writing a few lines of code to actually implement your business requirement, which is to create a new Template based on an existing one.
So, get the template (without AsNoTracking) and create a new template, initializing its properties from the original template's values. Then add the new template to the context. EF will insert the new template and reference the existing dependent entities.
This is also a safer way to implement this, as in the future you might need to set some properties to different values in the new template.
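A minimal sketch of that approach, assuming the entity and navigation names from the question (Template, Properties, PropertyCollections, Values) plus hypothetical scalar properties (Name, Data) standing in for whatever your model actually has:
// Load the source graph with tracking, so the existing PropertyCollections stay attached
var source = ctx.Templates
    .Where(t => t.TemplateId == SelectedTemplate.TemplateId)
    .Include(t => t.Properties.Select(p => p.PropertyCollections))
    .Include(t => t.Properties.Select(p => p.Values))
    .First();

var copy = new Template
{
    TemplateName = newTemplateName,
    Properties = source.Properties.Select(p => new Property
    {
        Name = p.Name, // hypothetical scalar; copy whatever scalars your model has
        // Reuse the existing, tracked PropertyCollection entities so EF links to
        // them instead of inserting duplicates
        PropertyCollections = p.PropertyCollections.ToList(),
        Values = p.Values.Select(v => new Value
        {
            Data = v.Data // hypothetical scalar
        }).ToList()
    }).ToList()
};

ctx.Templates.Add(copy); // only the new Template/Property/Value rows are inserted
ctx.SaveChanges();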
I apologise if this has been asked already; I am struggling greatly with the terminology of what I am trying to find out about, as it conflicts with functionality in Entity Framework.
What I am trying to do:
I would like to create an application that, on setup, gives the user the option to use one database as a "trial"/"startup" database, i.e. a non-production database. This would allow a user to trial the application, but it would not have backups etc.; in no way would this be a "production" database. This could be SQLite, for example.
When the user is then ready, they could click "convert to production" (or similar) and give it the target database machine/database. This would be considered the "production" environment. This could be something like MySQL, SQL Server or... whatever else EF connects to these days.
The question:
Does EF support this type of migration/data transfer live? Would it need another app where you could configure the EF source and EF destination, which would then run through the process of conversion/seeding/population from one data source to the other?
Why I have asked here:
I have tried to search for things around this topic, but transferring/migration brings up totally unrelated subjects, so any help would be much appreciated.
From what you describe, I don't think there is anything out of the box to support that. You can map a DbContext to either database, so it would be a matter of fetching and detaching entities from the evaluation DbContext and attaching them to the production one.
For a relatively simple schema / object graph this would be fairly straightforward to implement.
ICollection<Customer> customers = new List<Customer>();
using (var context = new AppDbContext(evalConnectionString))
{
    customers = context.Customers.AsNoTracking().ToList();
}
using (var context = new AppDbContext(productionConnectionString))
{
    // Assuming an empty database...
    context.Customers.AddRange(customers);
    context.SaveChanges(); // persist the copied rows
}
For more complex models, though, this could take some work, especially when dealing with things like existing lookups/references. Where you want to move objects that might share a reference to another object, you need to query the destination DbContext for existing relatives and substitute them before saving the "parent" entity.
ICollection<Order> orders = new List<Order>();
using (var context = new AppDbContext(evalConnectionString))
{
    orders = context.Orders
        .Include(x => x.Customer)
        .AsNoTracking()
        .ToList();
}
using (var context = new AppDbContext(productionConnectionString))
{
    var customerIds = orders.Select(x => x.Customer.CustomerId)
        .Distinct().ToList();
    var existingCustomers = context.Customers
        .Where(x => customerIds.Contains(x.CustomerId))
        .ToList();
    foreach (var order in orders)
    {
        // Substitute any customer that already exists in the target DB so EF
        // references the existing row instead of inserting a duplicate.
        var existingCustomer = existingCustomers.SingleOrDefault(x => x.CustomerId == order.Customer.CustomerId);
        if (existingCustomer != null)
            order.Customer = existingCustomer;
        else
            existingCustomers.Add(order.Customer); // track it so later orders reuse the same reference
        context.Orders.Add(order);
    }
    context.SaveChanges(); // persist the new orders (and any new customers)
}
This is a very simple example to outline how to handle scenarios where you may be inserting data with references that may, or may not, already exist in the target DbContext. If we are copying across Orders and want to deal with their respective Customers, we first need to check whether a tracked customer reference exists and use that reference, to avoid a duplicate row being inserted or an exception being thrown.
Normally, loading the orders and related references from one DbContext ensures that multiple orders referencing the same Customer entity all share the same entity reference. However, to get detached entities that we can associate with the new DbContext we load them with AsNoTracking(), and detached references to the same record will not be the same object reference, so we need to treat them with care.
For example where there are 2 orders for the same customer:
var ordersA = context.Orders.Include(x => x.Customer).ToList();
Assert.AreSame(ordersA[0].Customer, ordersA[1].Customer); // Passes

var ordersB = context.Orders.Include(x => x.Customer).AsNoTracking().ToList();
Assert.AreSame(ordersB[0].Customer, ordersB[1].Customer); // Fails
Even though in the second example both orders are for the same customer, each will have a Customer reference with the same ID but a different object reference, because the DbContext is not tracking the references used. This is one of the several "gotchas" with detached entities and efforts to boost performance. Using tracked references isn't ideal either, since those entities will still think they are associated with another DbContext. We can detach them, but that means diving through the object graph and detaching all references. (Doable, but messy compared to just loading them detached.)
Where it can also get complicated is when migrating data in batches (disposing of a DbContext regularly to avoid performance pitfalls with larger data volumes) or synchronizing data over time. It is generally advisable to first check the destination DbContext for matching records and use those, to avoid duplicate data being inserted (or exceptions being thrown).
So for simple data models this is fairly straightforward. For more complex ones, where there is more data to bring across and more relationships between that data, it's more complicated. For those systems I'd probably look at generating a database-to-database migration, such as creating INSERT statements for the desired target DB from the data in the source database. Then it is just a matter of inserting the data in relational order to comply with the constraints, either using a tool or rolling your own script generation (see the sketch below).
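As a rough illustration of the script-generation idea, here is a naive sketch for a single hypothetical Customers table (the Name property is an assumption; real code would need proper escaping, type handling, identity-insert handling, and emitting tables in dependency order):
var sql = new StringBuilder();
using (var context = new AppDbContext(evalConnectionString))
{
    foreach (var c in context.Customers.AsNoTracking())
    {
        // Naive value formatting: numeric key, single-quoted text with quotes doubled
        sql.AppendLine($"INSERT INTO Customers (CustomerId, Name) VALUES ({c.CustomerId}, '{c.Name.Replace("'", "''")}');");
    }
}
File.WriteAllText("seed-customers.sql", sql.ToString());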
I've got the following code snippet in a repository class, using Dapper to query and Slapper.AutoMapper to map:
class MyPocoClass
{
    public int MyPocoClassId { get; set; }
    ...
}
//later:
var results = connection.Query<dynamic>("select MyPocoClassID, ...");
return AutoMapper.MapDynamic<MyPocoClass>(results).ToList();
results above has many items, but the list returned by AutoMapper.MapDynamic has only one item (which is clearly wrong). However, I found that adding the following configuration to AutoMapper fixes the problem:
AutoMapper.Configuration.AddIdentifier(typeof(MyPocoClass), "MyPocoID");
Why does Slapper.AutoMapper need to know the key of my class simply to map one list to another? Is it trying to eliminate duplicates? I'll also note that this only happens while mapping one particular POCO of mine (so far)... and I can't figure out why this POCO is special.
Turns out this is a bug in Slapper.AutoMapper.
The library supports case-insensitive mapping and convention-based keys. The SQL result set has MyPocoClassID while the class itself has MyPocoClassId, which is not a problem for Slapper.AutoMapper as far as mapping goes. But internally, Slapper.AutoMapper identifies (by convention) that MyPocoClass has MyPocoClassId as its identifier, and it can't find that field in the result set. The library uses that key to eliminate duplicates in the output list, and since the key values all come back null/empty, we get only one record.
I may submit a pull request to fix this problem, but since the library appears to be unmaintained, I don't think it will help.
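If patching the library isn't an option, a workaround consistent with that explanation is to alias the column in SQL so it matches the identifier name Slapper.AutoMapper detects by convention (a sketch; the rest of the select list is elided as in the question):
// Alias MyPocoClassID to the conventional identifier name MyPocoClassId
var results = connection.Query<dynamic>("select MyPocoClassID as MyPocoClassId, ...");
return AutoMapper.MapDynamic<MyPocoClass>(results).ToList();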
I have a situation where I need to add a new item to a property for a group of objects that has a many-to-many relationship. Is there any way to do this in bulk using EntityFramework.Extended? Something like...
Ctx.Foos
.Where(f => fooIds.Contains(f.FooId))
.Update(f => f.Bars.Add(bar)) // <-- what would go here???
Obviously, the Update() part is not correct. For the time being, I've worked around it by retrieving the set and looping through it (as sketched below). Just wondering if there is a better way.
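For reference, that retrieve-and-loop workaround looks something like this with plain EF, assuming the Foo/Bar model from the snippet above:
var foos = Ctx.Foos
    .Where(f => fooIds.Contains(f.FooId))
    .ToList();

foreach (var foo in foos)
    foo.Bars.Add(bar); // EF inserts one junction-table row per Foo

Ctx.SaveChanges();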
The short answer is: NO
The Update() method only allows updating properties directly on the entity. Since you want to update a many-to-many relation (not a property), this library does not allow it.
Disclaimer: I'm the owner of the project Entity Framework Extensions
The best and fastest way, if you need performance, is to use the Entity Framework Extensions library with the BulkSaveChanges method:
Ctx.Foos
.Where(f => fooIds.Contains(f.FooId))
.ToList()
.ForEach(f => f.Bars.Add(bar))
Ctx.BulkSaveChanges();
Given the DB below, I would like to retrieve all Bricks in C# and include the Workshops on those BrickBacks that have one.
I managed to retrieve all the Bricks and include the BrickBacks by simply doing
context.Bricks.Include(b=>b.Back).ToList()
But in this case BrickBack is an abstract class whose subclasses may contain a Workshop, though this is not always the case.
Unfortunately I can't just do
context.Bricks.Include(b=>b.Back).Include(b=>b.Back.Workshop).ToList()
How can this be done?
This could work: context.Bricks.Include("Back").Include("Workshop").ToList()
Workshop will be null if Workshop_Id is null in the database.
Not possible. Maybe you can approach it from a different angle:
context.ConcreteBacks.Include(b => b.Workshop)
    .Include(b => b.Bricks)
    .AsEnumerable()
    .SelectMany(b => b.Bricks)
This will pull all ConcreteBacks and the included data from the database, and then return the Bricks as a flattened list.
The AsEnumerable() is necessary because EF only applies Includes off the root entity of the result set. Without AsEnumerable() the root would be Brick, and the Includes on ConcreteBack would be ignored. This way, EF only knows about the part before AsEnumerable(), so it includes everything off ConcreteBack.
I've run into a scenario where I essentially need to write the changes of a child entity of a one-to-many association to the database, but not save any changes made to the parent entity.
The Entity Framework currently deals with database commits at the context scope (EntityContext.SaveChanges()), which makes sense for enforcing relationships, etc. But I'm wondering if there is some best practice or recommended way to do fine-grained database commits on individual entities instead of the entire context.
Best practices? Do you mean, besides, "Don't do it!"?
I don't think there is a best practice for making an ObjectContext differ from the state of the database.
If you must do this, I would new up a second ObjectContext and make the changes to the child entity there. That way, both contexts stay internally consistent.
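A minimal sketch of that suggestion, using hypothetical context and entity names: the child is loaded and saved through its own short-lived context, so the parent's pending changes in the original context are untouched.
// The original context still holds unsaved changes to the parent entity.
// A second, short-lived context saves only the child's change.
using (var childContext = new MyEntities())
{
    var child = childContext.ChildItems.First(c => c.ChildItemId == childId);
    child.Status = newStatus;   // hypothetical property to update
    childContext.SaveChanges(); // persists the child change only
}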
I have a similar need. The solution I am considering is to implement wrapper properties on all entities that store any property changes privately, without affecting the actual entity property. I would then add a SaveChanges() method to the entity, which would write the buffered changes to the entity and then call SaveChanges() on the context.
The problem with this approach is that you need to make all your entities conform to the pattern. But it seems to work pretty well. It does have another downside: if you make a lot of changes to a lot of objects with a lot of data, you end up with extraneous copies in memory. (A sketch of the pattern follows.)
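A rough sketch of that wrapper-property pattern, with a hypothetical Customer entity and a single buffered Name property; writes are held privately until the entity-level SaveChanges flushes them:
public partial class Customer
{
    private readonly Dictionary<string, object> _pending = new Dictionary<string, object>();

    // Wrapper around the mapped Name property: reads fall through, writes are buffered
    public string NameBuffered
    {
        get { return _pending.ContainsKey("Name") ? (string)_pending["Name"] : Name; }
        set { _pending["Name"] = value; }
    }

    public void SaveChanges(MyEntities context)
    {
        // Flush the buffered values onto the real mapped properties, then save.
        // Other entities' buffered changes were never applied, so they are not persisted.
        foreach (var kvp in _pending)
            GetType().GetProperty(kvp.Key).SetValue(this, kvp.Value, null);
        _pending.Clear();
        context.SaveChanges();
    }
}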
The only other solution I can think of is, upon saving, to record the states of all changed/added/deleted entities, set them all to Unchanged except the one you're saving, save the changes, and then restore the other entities' states. But that sounds potentially slow; a sketch is below.
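A sketch of that save/restore idea against the ObjectContext API. Caveats: independent-association relationship entries are skipped here, and reverting a Modified entry to Unchanged via ChangeState behaves like AcceptChanges (original values are lost), so treat this as an outline only.
// Snapshot every pending entity entry except the one we want to save
var otherEntries = context.ObjectStateManager
    .GetObjectStateEntries(EntityState.Added | EntityState.Modified | EntityState.Deleted)
    .Where(e => !e.IsRelationship && e.Entity != entityToSave)
    .ToList();
var savedStates = otherEntries.ToDictionary(e => e, e => e.State);

foreach (var entry in otherEntries)
    entry.ChangeState(EntityState.Unchanged); // temporarily park the other changes

context.SaveChanges(); // persists only entityToSave's changes

foreach (var pair in savedStates)
    pair.Key.ChangeState(pair.Value); // restore the parked states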
This can be accomplished by using AcceptAllChanges().
Make your changes to the parent entity, call AcceptAllChanges(), then make your changes to the related entities and call SaveChanges(). The changes you made to the parent will not be saved, because they have been "committed" to the entity but not saved to the database.
using (AdventureWorksEntities adv = new AdventureWorksEntities())
{
    var completeHeader = (from o in adv.SalesOrderHeader.Include("SalesOrderDetail")
                          where o.DueDate > System.DateTime.Now
                          select o).First();

    completeHeader.ShipDate = System.DateTime.Now;
    adv.AcceptAllChanges(); // the header change is now "accepted" and will NOT be saved

    var details = completeHeader.SalesOrderDetail.Where(x => x.UnitPrice > 10.0m);
    foreach (SalesOrderDetail d in details)
    {
        d.UnitPriceDiscount += 5.0m;
    }
    adv.SaveChanges(); // saves only the detail changes
}
This worked for me (EF Core 5.0 or later). Use the ChangeTracker.Clear() method to clear out tracked changes for other entities:
_contextICH.ChangeTracker.Clear(); // stop tracking all entities and their pending changes

var entry = _contextICH.UnitOfMeasure.Attach(parameterModel);
entry.State = (parameterModel.ID != null)
    ? Microsoft.EntityFrameworkCore.EntityState.Modified
    : Microsoft.EntityFrameworkCore.EntityState.Added;

_contextICH.SaveChanges();