I'm using EF 6 in my .NET MVC app. I have these classes:
public class Member
{
public int ID { get; set; }
public string Name { get; set; }
public int FactoryID { get; set; }
public Factory Factory { get; set; }
}
public class Factory
{
public int ID { get; set; }
public string Name { get; set; }
public virtual ICollection<Member> Members { get; set; }
}
code to add:
var newMember = new Member();
newMember.Name = "Member 1";
newMember.FactoryID = 2;
context.Members.Add(newMember);
context.SaveChanges();
code to get:
var member = context.Members.SingleOrDefault(x => x.ID == id);
var factory = member.Factory;
So when I add a Member in one API call and then get the Member in a separate API call, Member.Factory is populated.
When I try to get the Member right away after adding it, Member.Factory is null.
What is the reason for this, and how can it be resolved?
The reason it works sometimes but not others is that EF will populate related entities it already knows about when you reference entities by ID. If lazy loading is enabled, EF will go to the DB to pull back any related entities that it doesn't know about. When it comes to serializing responses, though, lazy loading can result in performance issues or cyclic reference errors.
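For reference, lazy loading in EF6 is toggled on the context's configuration. A minimal sketch (the context class name is illustrative, not from the question):

```csharp
public class MyContext : DbContext
{
    public MyContext()
    {
        // With lazy loading off, navigation properties are only populated from
        // entities the context already tracks, or via explicit Include() calls.
        Configuration.LazyLoadingEnabled = false;
        // Also avoids EF proxy types leaking into serialized responses.
        Configuration.ProxyCreationEnabled = false;
    }

    public DbSet<Member> Members { get; set; }
    public DbSet<Factory> Factories { get; set; }
}
```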
For example, with lazy loading turned off:
If I do something like:
using (var context = new MyContext())
{
var member = new Member
{
    FactoryId = 3,
    // ...
};
context.Members.Add(member);
context.SaveChanges();
return member;
}
the returned Member's "Factory" reference will be null, because that EF context had no notion of what Factory ID 3 actually was. The insert will succeed provided a data record for Factory ID 3 exists, but the context does not know about it.
If in another example I do something like this:
using (var context = new MyContext())
{
// Call some code using this context that results in the following running...
var factory = context.Factories.Single(x => x.FactoryId == 3);
// more code...
var member = new Member
{
    FactoryId = 3,
    // ...
};
context.Members.Add(member);
context.SaveChanges();
return member;
}
In this case, EF will return Factory #3 along with the member because the context instance knew about Factory #3. When member was saved, the known reference was automatically associated.
The above example uses a DbContext in a using block, which makes the scenario seem obvious. With code that uses an IoC container to scope a DbContext to a request, however, it can be much less clear cut for any given scenario, across the various methods that may be called, which entities the DbContext may or may not know about.
When you want to return details about entities and their relations, or when subsequent code would benefit from accessing those references, my typical advice is to set references, not FKs, in your entities. This way you ensure the entity you are creating is in a complete, fit-for-purpose state.
For instance:
using (var context = new MyContext())
{
var factory = context.Factories.Single(x => x.FactoryId == factoryId);
var member = new Member
{
    Factory = factory,
    // ...
};
context.Members.Add(member);
context.SaveChanges();
return member;
}
I avoid exposing FKs entirely within my entities to enforce using the references, and I use shadow properties (EF Core) and mapping (EF6) to ensure FKs are not accessible in my entities. The trouble with FKs is that when editing entities that have both a reference and a FK column, there are two sources of truth. Does updating the Factory reference win, or does updating the FactoryId? What if I have a Factory reference pointing at Factory ID 3, but I change FactoryId on the Member to 4? Some code may depend on the FK, while other code may go to the Factory reference.
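As a sketch of what hiding the FK looks like (entity and column names are illustrative; these two OnModelCreating overloads belong in an EF Core and an EF6 context respectively, not the same class):

```csharp
// EF Core: with no FactoryId property declared on Member, the FK is
// mapped as a shadow property that only the model knows about.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Member>()
        .HasOne(m => m.Factory)
        .WithMany(f => f.Members)
        .HasForeignKey("FactoryId"); // shadow property; not exposed on Member
}

// EF 6: map the association to a FK column without exposing a property.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Member>()
        .HasRequired(m => m.Factory)
        .WithMany(f => f.Members)
        .Map(m => m.MapKey("FactoryId"));
}
```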
Explicitly working with references means that related entities are asserted at that point (rather than waiting for any number of FK violations on SaveChanges). It will use any loaded references the context has already loaded, or go to the DB if needed.
Where I do use FKs over references is for bulk operations where I just want to update or insert a large amount of information as quickly as possible. In these cases I use a bounded context with bare bones simple entity definitions with FKs and no references to create, set FKs, and save. No need for returning complete data and references.
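A sketch of what such a bare-bones bounded context might look like (all names are illustrative):

```csharp
// Minimal entity: FKs only, no navigation properties to load or fix up.
public class MemberRecord
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int FactoryID { get; set; }
}

// Bounded context used only for bulk insert/update work.
public class MemberBulkContext : DbContext
{
    public DbSet<MemberRecord> Members { get; set; }
}

// Usage: set FKs directly and save in one batch.
// using (var context = new MemberBulkContext())
// {
//     context.Members.AddRange(newRecords);
//     context.SaveChanges();
// }
```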
Related
I'm using Entity Framework and want to use lazy loading on properties, so I'm making the properties virtual.
An example:
public class Child
{
public string Name { get; set; }
}
public class Parent
{
public Parent()
{
Child = new Child();
}
public virtual Child Child { get; set; }
}
When I do that, I get a CA2214 warning:
Severity Code Description Project File Line Suppression State
Warning CA2214 'Parent.Parent()' contains a call chain that results in a call to a virtual method defined by the class. Review the following call stack for unintended consequences:
Parent..ctor()
Parent.set_Child(Child):Void Project C:\Parent.cs 18 Active
I'd like to remove this warning, but if I mark Parent as sealed I get the expected error:
Severity Code Description Project File Line Suppression State
Error CS0549 'Parent.Child.get' is a new virtual member in sealed class 'Parent' Project C:\Parent.cs 24 N/A
So how can I resolve this warning (without ignoring it) and still use virtual?
Use an initializer on the property rather than the constructor.
public class Parent
{
public virtual Child Child { get; set; } = new Child();
}
Edit: Regarding the above and that
...there are times when I would need to set properties for Child in
the constructor...
The simple rule is "you probably shouldn't". An entity's role is to represent data state for that entity, nothing more. Initializing an entity "graph" should not be done in a top-level entity's constructor, but rather via a factory pattern. For instance, I use a Repository pattern with EF in which I manage not only getters, but which also serves as the factory providing Create methods, as well as handling Delete for soft-delete scenarios. This helps ensure that an entity with dependencies is always created in a "minimally complete" and valid state.
Even the above, I would say, is a bad example. Though it doesn't trip the compiler warning, the entity at the point of the parent's construction isn't in a complete and valid state. Suppose I do something like:
using (var context = new MyContext())
{
var parent = new Parent();
parent.Name = "Myself";
context.Parents.Add(parent); // assuming a Parents DbSet
context.SaveChanges();
}
If the Parent auto-initializes a Child, that SaveChanges will also try to save the new Child, and since nothing ensured that all required fields on the Child were set, the SaveChanges call will fail. The Child isn't in a complete enough state.
The only place I would advocate auto-initializing would be collections:
public class Parent
{
public virtual ICollection<Child> Children { get; internal set; } = new List<Child>();
}
The above is still "complete" in that an empty collection won't attempt to save anything for children if I populate a new parent without adding any children. It is also convenient so that when I create a new parent, I have the option to immediately start adding/associating children without tripping a null reference exception if I had no children.
To initialize an object graph with a factory method helps ensure that entities are always created in a minimally complete state, which means they can be saved immediately without error. As I mentioned above, I generally use my Repository to serve as the entity factory since it's already wired up with the DbContext through the unit of work to resolve dependencies as needed.
As an example if I have an Order entity that I can create that consists of an order number, customer reference, and one or more order lines for products which are required to save a valid order, my OrderRepository might have a CreateOrder method something like this:
public Order CreateOrder(int customerId, IEnumerable<OrderedProductViewModel> orderedProducts)
{
if (!orderedProducts.Where(x => x.Quantity > 0).Any())
throw new ArgumentException("No products selected.");
var customer = Context.Customers.Single(x => x.CustomerId == customerId);
var products = Context.Products.Where(x => orderedProducts.Where(o => o.Quantity > 0).Select(o => o.ProductId).Contains(x.ProductId)).ToList();
if (products.Count < orderedProducts.Count(o => o.Quantity > 0))
throw new ArgumentException("Invalid products included in order.");
var order = new Order
{
Customer = customer,
OrderLines = orderedProducts
    .Where(x => x.Quantity > 0)
    .Select(x => new OrderLine
    {
        Product = products.Single(p => p.ProductId == x.ProductId),
        Quantity = x.Quantity
    }).ToList()
};
Context.Orders.Add(order);
return order;
}
This is a contextual example of a factory method I might use, and some of the basic validation. OrderedProductViewModel represents effectively a tuple of a ProductId and a Quantity. It along with a Customer ID represent the minimum state of an order I would allow to be saved. There may be other optional details that might be set outside of this method before an Order is considered complete enough to ship, but the factory ensures it is complete enough to save.
I could have calling code like:
using (var contextScope = ContextScopeFactory.Create())
{
var order = OrderRepository.Create(selectedCustomerId, selectedProducts);
contextScope.SaveChanges();
}
And that would be happy. Or I could continue to set available information on the order before calling SaveChanges. My OrderRepository would not have any business logic, because that business logic may depend on configuration for the client, and the repository has no business knowing or caring about that; but it could take a dependency on something like an IOrderValidator, to which the newly proposed Order is passed so the business logic can run across it and assert that the Order is valid enough to be saved. The repository/factory asserts it is complete enough, and it can be tied back to a validator in the business logic to assert it is valid enough (i.e. minimum/maximum order size, value, etc.).
This, coupled with DDD where I make all setters internal and use action methods on my entities, helps ensure that my entities are always maintained in a complete state. I'm guessing this is something you are trying to ensure through the constructors, so I thought I'd share the above as an example to provide some possible ideas and alternatives to accomplish that.
Following is the action that is adding a Loan request to the database:
[HttpPost]
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
loanViewModel.LoanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId); // <-- don't want to add to this table in database
loanViewModel.Borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId); //<-- don't want to add to this table in database
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loanService.AddNewLoan(loan);
return RedirectToAction("Index");
}
Following is the AddNewLoan() method:
public int AddNewLoan(Models.Loans.Loan loan)
{
loan.LoanStatus = Models.Loans.LoanStatus.PENDING;
_LoanService.Insert(loan);
return 0;
}
And here is the code for Insert()
public virtual void Insert(TEntity entity)
{
if (entity == null)
throw new ArgumentNullException(nameof(entity));
try
{
entity.DateCreated = entity.DateUpdated = DateTime.Now;
entity.CreatedBy = entity.UpdatedBy = GetCurrentUser();
Entities.Add(entity);
context.SaveChanges();
}
catch (DbUpdateException exception)
{
throw new Exception(GetFullErrorTextAndRollbackEntityChanges(exception), exception);
}
}
It is adding one row to the Loans table successfully, but it is also adding rows to the LoanProduct and Borrower tables, as I noted in the comments in the first code block.
I checked the possibility of multiple calls to this action and Insert method but they are called once.
UPDATE
I am facing a similar, but functionally opposite, problem here: Entity not updating using Code-First approach.
I think the two have the same root cause in change tracking; one is adding when it shouldn't, the other is not updating.
The following code seems a bit odd:
var loanViewModel = loanEditorViewModel.LoanViewModel;
loanViewModel.LoanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId); // <-- don't want to add to this table in database
loanViewModel.Borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId); //<-- don't want to add to this table in database
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
You are setting entity references on the view model, then calling automapper. ViewModels should not hold entity references, and automapper should effectively be ignoring any referenced entities and only map the entity structure being created. Automapper will be creating new instances based on the data being passed in.
Instead, something like this should work as expected:
// Assuming these will throw if not found? Otherwise assert that these were returned.
var loanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId);
var borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId);
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loan.LoanProduct = loanProduct;
loan.Borrower = borrower;
Edit:
The next thing to check is that your services are using the exact same DbContext reference. Are you using dependency injection with an IoC container such as Autofac or Unity? If so, make sure the DbContext is registered as Instance Per Request or a similar lifetime scope. If the services effectively new up their own DbContext, then the LoanService DbContext will not know about the instances of the Product and Borrower that were fetched by another service's DbContext.
If you are not using a DI library, then you should consider adding one. Otherwise you will need to update your services to accept a single DbContext with each call or leverage a Unit of Work pattern such as Mehdime's DbContextScope to facilitate the services resolving their DbContext from the Unit of Work.
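A sketch of the per-request registration with Autofac's ASP.NET MVC integration (the service and interface names are assumptions, not from the question):

```csharp
var builder = new ContainerBuilder();

// Every service resolved during a request shares this one instance, so
// entities loaded by one service are tracked for the others as well.
builder.RegisterType<AppDbContext>().AsSelf().InstancePerRequest();

builder.RegisterType<LoanProductService>().As<ILoanProductService>().InstancePerRequest();
builder.RegisterType<BorrowerService>().As<IBorrowerService>().InstancePerRequest();
builder.RegisterType<LoanService>().As<ILoanService>().InstancePerRequest();

builder.RegisterControllers(typeof(MvcApplication).Assembly);
DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
```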
For example to ensure the same DbContext:
using (var context = new MyDbContext())
{
var loanProduct = LoanProductService.GetLoanProductById(context, loanViewModel.LoanProductId);
var borrower = BorrowerService.GetBorrowerById(context, loanViewModel.BorrowerId);
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loan.LoanProduct = loanProduct;
loan.Borrower = borrower;
LoanService.AddNewLoan(context, loan);
}
If you are sure that the services are all provided the same DbContext instance, then there may be something odd happening in your Entities.Add() method. Honestly, your solution looks to have far too much abstraction around something as simple as a CRUD create-and-associate operation. This looks like a case of premature optimization for DRY without starting from the simplest solution. The code can simply scope a DbContext, fetch the applicable entities, create the new instance, associate, add to the DbSet, and SaveChanges. There's no benefit to abstracting out calls for rudimentary operations such as fetching a reference by ID.
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
using (var context = new AppContext())
{
    var loanProduct = context.LoanProducts.Single(x => x.LoanProductId == loanViewModel.LoanProductId);
    var borrower = context.Borrowers.Single(x => x.BorrowerId == loanViewModel.BorrowerId);
    var loan = AutoMapper.Mapper.Map<Loan>(loanEditorViewModel.LoanViewModel);
    loan.LoanProduct = loanProduct;
    loan.Borrower = borrower;
    context.Loans.Add(loan); // the new loan must be added to the context before saving
    context.SaveChanges();
}
return RedirectToAction("Index");
}
Sprinkle with some exception handling and it's done and dusted. No layered service abstractions. From there you can aim to make the action testable by using an IoC container like Autofac to manage the context and/or introducing a repository/service layer with a UoW pattern. The above serves as a minimum viable solution for the action; any abstraction should be applied afterwards. Sketch with pencil before cracking out the oils. :)
Using Mehdime's DbContextScope it would look like:
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
using (var contextScope = ContextScopeFactory.Create())
{
var loanProduct = LoanRepository.GetLoanProductById(loanViewModel.LoanProductId).Single();
var borrower = LoanRepository.GetBorrowerById(loanViewModel.BorrowerId);
var loan = LoanRepository.CreateLoan(loanViewModel, loanProduct, borrower).Single();
contextScope.SaveChanges();
}
return RedirectToAction("Index");
}
In my case I leverage a repository pattern that uses the DbContextScopeLocator to resolve its ContextScope to get a DbContext. The repo manages fetching data and ensuring that entity creation is given all the data necessary to create a complete and valid entity. I opt for a repository-per-controller rather than a generic pattern or a repository/service per entity because IMO this better upholds the Single Responsibility Principle: the code has only one reason to change (it serves the controller, rather than being shared between many controllers with potentially different concerns). Unit tests can mock out the repository to serve expected data state. Repo get methods return IQueryable so that the consuming logic can determine how it wants to consume the data.
Finally, with the help of the link shared by @GertArnold (Duplicate DataType is being created on every Product Creation):
Since all my models inherit a BaseModel class, I modified my Insert method like this:
public virtual void Insert(TEntity entity, params BaseModel[] unchangedModels)
{
if (entity == null)
throw new ArgumentNullException(nameof(entity));
try
{
entity.DateCreated = entity.DateUpdated = DateTime.Now;
entity.CreatedBy = entity.UpdatedBy = GetCurrentUser();
Entities.Add(entity);
if (unchangedModels != null)
{
foreach (var model in unchangedModels)
{
_context.Entry(model).State = EntityState.Unchanged;
}
}
_context.SaveChanges();
}
catch (DbUpdateException exception)
{
throw new Exception(GetFullErrorTextAndRollbackEntityChanges(exception), exception);
}
}
And called it like this:
_LoanService.Insert(loan, loan.LoanProduct, loan.Borrower);
By far the simplest way to tackle this is to add the two primitive foreign key properties to the Loan class, i.e. LoanProductId and BorrowerId. For example like this (I obviously have to guess the types of LoanProduct and Borrower):
public int LoanProductId { get; set; }
[ForeignKey("LoanProductId")]
public Product LoanProduct { get; set; }
public int BorrowerId { get; set; }
[ForeignKey("BorrowerId")]
public User Borrower { get; set; }
Without the primitive FK properties you have so-called independent associations, which can only be set by assigning objects whose state must be managed carefully. Adding the FK properties turns them into foreign key associations, which are much easier to set. AutoMapper will simply set these properties when the names match and you're done.
Check Models.Loans.Loan. Is it a joined model of the Loans, LoanProduct and Borrower tables?
You have to add the new entity explicitly, for example:
var lentity = new Loans { FirstName = "William", LastName = "Shakespeare" };
context.Add<Loans>(lentity);
context.SaveChanges();
I am using .net core.
My Goal: I want to be able Edit a SalesOrder just after Creating.
Right now I am able to Create and Edit. But it is throwing an error
The instance of entity type 'SalesOrder' cannot be tracked because
another instance of this type with the same key is already being
tracked. When adding new entities, for most key types a unique
temporary key value will be created if no key is set (i.e. if the key
property is assigned the default value for its type). If you are
explicitly setting key values for new entities, ensure they do not
collide with existing entities or temporary values generated for other
new entities. When attaching existing entities, ensure that only one
entity instance with a given key value is attached to the context.
When I try editing just after creating.
My Save() function:
public class SalesOrdersController : Controller
{
private readonly ApplicationDbContext _dbContext;
public SalesOrdersController(ApplicationDbContext dbContext){
_dbContext = dbContext;
}
// ...other Controller actions
public JsonResult Save([FromBody]SalesOrderViewModel salesOrderViewModel)
{
SalesOrder salesOrder = new SalesOrder();
salesOrder.document_id = salesOrderViewModel.document_id;
salesOrder.customer = salesOrderViewModel.customer;
salesOrder.document_status_id = salesOrderViewModel.document_status_id;
// ...
salesOrder.object_state = salesOrderViewModel.object_state;
_dbContext.Entry(salesOrder).State = Helpers.ConvertState(salesOrder.object_state);
_dbContext.SaveChanges();
salesOrderViewModel.document_id = salesOrder.document_id;
salesOrderViewModel.object_state = ObjectState.Unchanged;
return Json(new { salesOrderViewModel });
}
}
And a function to update states depending on the request:
public static EntityState ConvertState(ObjectState objectState){
switch (objectState){
case ObjectState.Added:
return EntityState.Added;
case ObjectState.Modified:
return EntityState.Modified;
case ObjectState.Deleted:
return EntityState.Deleted;
default:
return EntityState.Unchanged;
}
}
I understand that it is a problem with refreshing the entity state just after creating. How can I resolve that error?
You said you understand the problem... so the solution is to get the original entity from the database, update its properties directly, and then save it. That is, what you need to do is avoid calling
context.Update(entity);
where entity is the object in your model.
So one solution would be something like the following, which, I agree, may not be the best way of solving it.
Let's assume you are using a generic repository (which is harder than a non-generic one, because you do not know the fields beforehand):
public void Edit(TBusinessObject entity)
{
var originalEntity = context.Set<TBusinessObject>().AsNoTracking().FirstOrDefault(r => r.Id.Equals(entity.Id));
EntityEntry<TBusinessObject> original = context.Entry(originalEntity);
EntityEntry<TBusinessObject> client = context.Entry(entity);
foreach (var property in original.OriginalValues.Properties)
{
var dbMember = original.Member(property.Name);
var clientMember = client.Member(property.Name);
if (!property.IsPrimaryKey() && !Equals(dbMember.CurrentValue, clientMember.CurrentValue) && clientMember.CurrentValue != null)
{
dbMember.CurrentValue = clientMember.CurrentValue;
dbMember.IsModified = true;
}
}
context.Update(originalEntity);
context.SaveChanges(true);
}
Again, this code could be optimized, and it would be far simpler if it were not a generic repository, where you would know the names and types of the fields.
Update 1:
I found that EF Core, although not yet fully fledged with all the features EF6 supported, leans towards modern development practices. The example you posted uses EF Core to implement the traditional repository mentality. If you switch to a Unit of Work or CQRS approach you will not face these problems; changes like updates, and CRUD in general, will be smoother than ever. I pass an object to the context, and the context itself can figure out what table it belongs to and how to handle it. I therefore recommend changing the way you utilize EF Core.
Try this simplest implementation:
public void Commit()
{
using (var context = new ApplicationDbContext())
{
context.UpdateRange(Changed);
context.AddRange(Added);
context.RemoveRange(Deleted);
context.SaveChanges();
ClearAllChanges();
}
}
where Changed, Added and Deleted are just lists (for which you might consider a thread-safe collection such as ConcurrentBag<T>).
I'm trying to get my head around this issue where I am using the Entity Framework (6) in an N-tier application. Since data from the repository (which contains all communication with the database) should be used in a higher tier (the UI, services etc), I need to map it to DTOs.
In the database there are quite a few many-to-many relationships going on, so the data structure can/will get complex somewhere along the line of the application's lifetime. What I stumbled upon is that I am repeating the exact same code when writing the repository methods. An example of this is my FirmRepository, which contains a GetAll() method and a GetById(int firmId) method.
In the GetById(int firmId) method, I have the following code (incomplete since there's a lot more relations that needs to be mapped to DTOs):
public DTO.Firm GetById(int id)
{
// Return result
var result = new DTO.Firm();
try
{
// Database connection
using (var ctx = new MyEntities())
{
// Get the firm from the database
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
// If a firm was found, start mapping to DTO object
if (firm != null)
{
result.Address = firm.Address;
result.Address2 = firm.Address2;
result.VAT = firm.VAT;
result.Email = firm.Email;
// Map Zipcode and City
result.City = new DTO.City()
{
CityName = firm.City.City1,
ZipCode = firm.City.ZipCode
};
// Map ISO code and country
result.Country = new DTO.Country()
{
CountryName = firm.Country.Country1,
ISO = firm.Country.ISO
};
// Check if this firm has any exclusive parameters
if (firm.ExclusiveParameterType_Product_Firm.Any())
{
var exclusiveParamsList = new List<DTO.ExclusiveParameterType>();
// Map Exclusive parameter types
foreach (var param in firm.ExclusiveParameterType_Product_Firm)
{
// Check if the exclusive parameter type isn't null before proceeding
if (param.ExclusiveParameterType != null)
{
// Create a new exclusive parameter type DTO
var exclusiveParameter = new DTO.ExclusiveParameterType()
{
ID = param.ExclusiveParameterType.ID,
Description = param.ExclusiveParameterType.Description,
Name = param.ExclusiveParameterType.Name
};
// Add the new DTO to the list
exclusiveParamsList.Add(exclusiveParameter);
}
}
// A lot more objects to map....
// Set the list on the result object
result.ExclusiveParameterTypes = exclusiveParamsList;
}
}
}
// Return DTO
return result;
}
catch (Exception e)
{
// Log exception
Logging.Instance.Error(e);
// Simply return null
return null;
}
}
This is just one method. The GetAll() method will then have the exact same mapping logic, which results in duplicated code. Also, when more methods get added, e.g. a Find or Search method, the same mapping needs to be copied again. This is, of course, not ideal.
I have read a lot about the famous AutoMapper framework that can map entities to/from DTOs, but since I have these many-to-many relations it quickly feels bloated with AutoMapper config code. I've also read this article, which makes sense in my eyes: http://rogeralsing.com/2013/12/01/why-mapping-dtos-to-entities-using-automapper-and-entityframework-is-horrible/
Is there any other way of doing this without copy/pasting the same code over and over again?
Thanks in advance!
You can make an extension method on the entity (DB.Firm) like this:
public static class Extensions
{
public static DTO.Firm ToDto(this DB.Firm firm)
{
var result = new DTO.Firm();
result.Address = firm.Address;
result.Address2 = firm.Address2;
//...
return result;
}
}
Then you can convert a DB.Firm object anywhere in your code with firm.ToDto();
An alternate strategy is to use a combination of the class constructor and explicit and/or implicit conversion operators. This allows you to cast one user-defined entity to another. It also has the added benefit of abstracting the process so you aren't repeating yourself.
In your DTO.Firm class, define either an explicit or implicit operator (Note: I am making assumptions about the name of your classes):
public class Firm {
public Firm(DB.Firm firm) {
Address = firm.Address;
Email = firm.Email;
City = new DTO.City() {
    CityName = firm.City.City1,
    ZipCode = firm.City.ZipCode
};
// etc.
}
public string Address { get; set;}
public string Email { get; set; }
public DTO.City City { get; set; }
// etc.
public static explicit operator Firm(DB.Firm f) {
return new Firm(f);
}
}
You can then use it in your repository code like this:
public DTO.Firm GetById(int id) {
using (var ctx = new MyEntities()) {
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
return (DTO.Firm)firm;
}
}
public List<DTO.Firm> GetAll() {
    using (var ctx = new MyEntities()) {
        // Enumerable.Cast does not apply user-defined conversion operators,
        // so materialize first, then apply the explicit cast per element.
        return ctx.Firms.AsEnumerable().Select(f => (DTO.Firm)f).ToList();
    }
}
Here's the reference in MSDN.
About mapping: it does not really matter whether you use AutoMapper or prepare your mappings completely manually in some method (an extension method, or an explicit casting operator as mentioned in other answers) - the point is to have it in one place for reusability.
Just remember: you used the FirstOrDefault method, so you actually called the database for a Firm entity. Now, when you use the properties of this entity, especially collections, they will be lazy loaded. If you have a lot of them (as you suggest in your question), you may face a huge number of additional calls, which can be a problem, especially in a foreach loop. You may end up with dozens of calls and heavy performance issues just to retrieve one DTO. Rethink whether you really need to get such a big object with all its relations.
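One way to sidestep both the duplication and the lazy-load round trips is to project to the DTO inside the query itself, so EF fetches exactly what is needed in one round trip. A sketch based on the question's DTO shapes (the two-stage anonymous projection keeps the query translatable by EF6):

```csharp
public DTO.Firm GetById(int id)
{
    using (var ctx = new MyEntities())
    {
        // Stage 1: a translatable projection; EF generates a single SQL query.
        var data = ctx.Firms
            .Where(f => f.ID == id)
            .Select(f => new
            {
                f.Address,
                f.Address2,
                CityName = f.City.City1,
                f.City.ZipCode,
                Parameters = f.ExclusiveParameterType_Product_Firm
                    .Where(p => p.ExclusiveParameterType != null)
                    .Select(p => new
                    {
                        p.ExclusiveParameterType.ID,
                        p.ExclusiveParameterType.Name,
                        p.ExclusiveParameterType.Description
                    })
            })
            .FirstOrDefault();

        if (data == null)
            return null;

        // Stage 2: shape the in-memory result into the DTO.
        return new DTO.Firm
        {
            Address = data.Address,
            Address2 = data.Address2,
            City = new DTO.City { CityName = data.CityName, ZipCode = data.ZipCode },
            ExclusiveParameterTypes = data.Parameters
                .Select(p => new DTO.ExclusiveParameterType { ID = p.ID, Name = p.Name, Description = p.Description })
                .ToList()
        };
    }
}
```

The projection itself can still live in one reusable place (e.g. an `Expression<Func<DB.Firm, ...>>` shared by GetById and GetAll), so the mapping is not duplicated.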
For me, your problem is much deeper and concerns application architecture. I must say, I personally do not like the repository pattern with Entity Framework, especially in combination with the Unit of Work pattern. It seems to be very popular (at least if you take a look at the Google results for the query), but for me it does not fit very well with EF. Of course, it's just my opinion; you may not agree with me. For me it's just building another abstraction over an already implemented Unit of Work (the DbContext) and repositories (the DbSet objects). I found this article very interesting on this topic. The command/query separation way of doing things seems much more elegant to me, and it also fits the SOLID rules much better.
As I said, it's just my opinion and you may or may not agree with me. But I hope it gives you some perspective here.
I have a database context with lazy loading disabled. I am using eager loading to load all of my entities. I cannot update many to many relationships.
Here's the repository.
public class GenericRepository<TEntity> : IGenericRepository<TEntity>
where TEntity : class
{
... other code here...
public virtual void Update(TEntity t)
{
Set.Attach(t);
Context.Entry(t).State = EntityState.Modified;
}
...other code here...
}
Here's the User model.
public partial class User
{
public User()
{
this.Locks = new HashSet<Lock>();
this.BusinessModels = new HashSet<BusinessModel>();
}
public int UserId { get; set; }
public string Username { get; set; }
public string Name { get; set; }
public string Phone { get; set; }
public string JobTitle { get; set; }
public string RecoveryEmail { get; set; }
public Nullable<double> Zoom { get; set; }
public virtual ICollection<Lock> Locks { get; set; }
public virtual ICollection<BusinessModel> BusinessModels { get; set; }
}
If I modify the business models collection, it does not save the business models collection although I have attached the entire entity.
Worker.UserRepository.Update(user);
I'm not sure what is going on. I don't want to break my generic repository/unit of work pattern just to update many-to-many relationships.
Edit 2: I've got this working... but it is extremely different from the pattern I'm going for. Having hard implementations means I will need to create a method for each type that has a many-to-many relationship. I am investigating now to see if I can make this a generic method.
Edit 3: So the previous implementation I had did not work like I thought it would. But now, I have a slightly working implementation. If someone would please help me so I can move on from this, I will love you forever.
public virtual void Update(TEntity updated,
    IEnumerable<object> set,
    string navigationProperty,
    Expression<Func<TEntity, bool>> filter,
    Type propertyType)
{
    // Find the existing item
    var existing = Context.Set<TEntity>().Include(navigationProperty).FirstOrDefault(filter);

    // Iterate through every item in the many-to-many relationship
    foreach (var o in set)
    {
        // Attach it if it's unattached
        if (Context.Entry(o).State == EntityState.Detached)
            // Exception: "an object with the same key already exists"
            // This is due to the Include statement up above. That statement
            // is necessary in order to edit the entity's navigation
            // property.
            Context.Set(propertyType).Attach(o);
    }

    // Set the new value on the navigation property.
    Context.Entry(existing).Collection(navigationProperty).CurrentValue = set;

    // Set new primitive property values.
    Context.Entry(existing).CurrentValues.SetValues(updated);
    Context.Entry(existing).State = EntityState.Modified;
}
I then call it like this:
Worker.UserRepository.Update(user, user.BusinessModels, "BusinessModels", i => i.UserId == user.UserId, typeof (BusinessModel));
Extremely messy, but it lets me update many-to-many relationships with generics. My big problem is the exception thrown when I go to attach new values that are already tracked. They're already loaded because of the Include statement.
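One way to avoid that "same key" exception is to not Attach the incoming detached objects at all, and instead resolve each key through DbSet.Find, which returns the instance the Include has already loaded when there is one and only queries the database otherwise. A sketch of the replacement loop, assuming an int Id key on the related type:

// Resolve each incoming object to the tracked instance instead of
// attaching it. Find checks the context's local cache before querying,
// so it never creates a second tracked copy of the same key.
var resolved = new List<object>();
foreach (var o in set)
{
    var key = (int)o.GetType().GetProperty("Id").GetValue(o, null);
    resolved.Add(Context.Set(propertyType).Find(key));
}

// Assign the resolved, tracked instances to the navigation property.
// Note: depending on the property's declared type, EF may require this
// to be a typed List<T> rather than a List<object>.
Context.Entry(existing).Collection(navigationProperty).CurrentValue = resolved;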
After many painful hours, I have finally found a way to update many-to-many relationships with a completely generic repository. This will allow me to create (and save) many different types of entities without creating boilerplate code for each one.
This method assumes that:
Your entity already exists
Your many-to-many relationship is stored in a join table with a composite key
You are using eager loading to load your relationships into context
You are using a unit-of-work/generic repository pattern to save your entities.
Here's the Update generic method.
public virtual void Update(Expression<Func<TEntity, bool>> filter,
    IEnumerable<object> updatedSet,   // Updated many-to-many relationships
    IEnumerable<object> availableSet, // Lookup collection
    string propertyName)              // The name of the navigation property
{
    // Get the generic type of the set
    var type = updatedSet.GetType().GetGenericArguments()[0];

    // Get the previous entity from the database based on repository type
    var previous = Context
        .Set<TEntity>()
        .Include(propertyName)
        .FirstOrDefault(filter);

    /* Create a container that will hold the values of
     * the generic many-to-many relationships we are updating.
     */
    var values = CreateList(type);

    /* For each object in the updated set, find the existing
     * entity in the database. This keeps Entity Framework
     * from creating new objects or throwing an error because
     * the object is already attached.
     */
    foreach (var entry in updatedSet
        .Select(obj => (int)obj
            .GetType()
            .GetProperty("Id")
            .GetValue(obj, null))
        .Select(value => Context.Set(type).Find(value)))
    {
        values.Add(entry);
    }

    /* Get the collection where the previous many-to-many relationships
     * are stored and assign the new ones.
     */
    Context.Entry(previous).Collection(propertyName).CurrentValue = values;
}
Here's a helper method I found online which allows me to create generic lists based on whatever type I give it.
public IList CreateList(Type type)
{
    var genericList = typeof(List<>).MakeGenericType(type);
    return (IList)Activator.CreateInstance(genericList);
}
And from now on, this is what calls to update many-to-many relationships look like:
Worker.UserRepository.Update(u => u.UserId == user.UserId,
    user.BusinessModels,                  // Many-to-many relationship to update
    Worker.BusinessModelRepository.Get(), // Full set
    "BusinessModels");                    // Property name
Of course, in the end you will need to somewhere call:
Context.SaveChanges();
I hope this helps anyone who never truly found how to use many-to-many relationships with generic repositories and unit-of-work classes in Entity Framework.
@dimgl Your solution worked for me. What I've done in addition was to replace the hard-coded type and name of the primary key with dynamically retrieved ones:
ObjectContext objectContext = ((IObjectContextAdapter)context).ObjectContext;
ObjectSet<TEntity> set = objectContext.CreateObjectSet<TEntity>();
IEnumerable<string> keyNames = set.EntitySet.ElementType.KeyMembers.Select(k => k.Name);
var keyName = keyNames.FirstOrDefault();
var keyType = typeof(TEntity).GetProperty(keyName).PropertyType;

foreach (var entry in updatedSet
    .Select(obj => Convert.ChangeType(obj.GetType()
        .GetProperty(keyName)
        .GetValue(obj, null), keyType))
    .Select(value => context.Set<TEntity>().Find(value)))
{
    values.Add(entry);
}
This way your code won't depend on the entity key's name or type.