EntityFramework - Entity proxy error - c#

I have been working on a system using Entity Framework for over 12 months now, and the project had been going well until yesterday, when I hit a strange error that I cannot explain.
I am doing nothing different from what I have done before, but once I load the entity in question and try to access any child entities I get the following error:
The entity wrapper stored in the proxy does not reference the same proxy
Can anyone shed any light on what this actually means and what would cause this?
Showing all of my code wouldn't really help, so here is a simplified version:
var quote = new QuoteHelper().GetById(orderId);
var updatedQuotes = new Provider().GetExportQuotes(quote.DeparturePoint.Id, quote.DestinationPoint.Id);
The error occurs when accessing DeparturePoint and DestinationPoint but Quote loads correctly, and all properties are loaded.
The entity Quote looks a little like this:
public class Quote : BaseQuote, ICloneable
{
    public Guid DeparturePointId { get; set; }
    public virtual LocationPoint DeparturePoint { get; set; }
    public Guid DestinationPointId { get; set; }
    public virtual LocationPoint DestinationPoint { get; set; }
}

This happened to me too when I tried to implement ICloneable on my entity and cloned it using MemberwiseClone. Worked great when I was using entities that I instantiated myself. However, when I used this to clone an entity that had been loaded using EF, I got this error whenever I tried to add it to a DbSet (or in various other parts).
After some digging, I found that when you clone an EF-loaded entity, you're cloning the proxy class as well. One of the things a proxy class carries around is a reference to the wrapper for the given entity. Because a shallow copy only copies that reference, you suddenly have two entities sharing the same wrapper instance.
At this point, EF thinks you've created or borrowed a different proxy class for your entity which it assumes is for purposes of mischief and blocks you.
Edit
Here's a snippet I created to work around this problem. It does a fair job of copying just the EF properties, but it's not perfect: you'll need to modify it if you have private fields that must be copied as well. Still, you get the idea.
/// <summary>
/// Makes a shallow copy of an entity object. This works much like a MemberwiseClone
/// but directly instantiates a new object and copies only properties that work with
/// EF and don't have the NotMappedAttribute.
/// </summary>
/// <typeparam name="TEntity">The entity type.</typeparam>
/// <param name="source">The source entity.</param>
public static TEntity ShallowCopyEntity<TEntity>(TEntity source) where TEntity : class, new()
{
    // Get properties from EF that are read/write and not marked with the NotMappedAttribute
    var sourceProperties = typeof(TEntity)
        .GetProperties()
        .Where(p => p.CanRead && p.CanWrite &&
                    p.GetCustomAttributes(typeof(System.ComponentModel.DataAnnotations.NotMappedAttribute), true).Length == 0);
    var newObj = new TEntity();
    foreach (var property in sourceProperties)
    {
        // Copy value
        property.SetValue(newObj, property.GetValue(source, null), null);
    }
    return newObj;
}
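As an aside, the copying mechanism itself is easy to exercise without EF. Here's a minimal, self-contained sketch; the Person class and the NotCopied attribute are invented stand-ins (the real code above keys off EF's NotMappedAttribute instead):

```csharp
using System;
using System.Linq;

// Stand-in for NotMappedAttribute so the sample has no EF dependency.
[AttributeUsage(AttributeTargets.Property)]
public class NotCopiedAttribute : Attribute { }

public class Person
{
    public Guid Id { get; set; }
    public string Name { get; set; }

    [NotCopied]
    public string Cached { get; set; } // deliberately skipped by the copier
}

public static class EntityCopier
{
    // Same idea as above: instantiate directly (no proxy, no shared
    // wrapper) and copy only the read/write, non-excluded properties.
    public static TEntity ShallowCopyEntity<TEntity>(TEntity source)
        where TEntity : class, new()
    {
        var sourceProperties = typeof(TEntity)
            .GetProperties()
            .Where(p => p.CanRead && p.CanWrite &&
                        p.GetCustomAttributes(typeof(NotCopiedAttribute), true).Length == 0);

        var newObj = new TEntity();
        foreach (var property in sourceProperties)
            property.SetValue(newObj, property.GetValue(source, null), null);
        return newObj;
    }
}
```

Because the copy is created with `new TEntity()` rather than MemberwiseClone, it is a plain POCO with its own identity, which is exactly what avoids the shared-wrapper problem.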

The above solution can lead to other errors, such as "Conflicting changes to the role x of the relationship y have been detected". I ran into that error and worked around it with this method:
public virtual TEntity DetachEntity(TEntity entityToDetach)
{
    if (entityToDetach != null)
        context.Entry(entityToDetach).State = EntityState.Detached;
    context.SaveChanges();
    return entityToDetach;
}
I hope it'll work for you too.

I solved it this way.
using (var ctx = new MyContext())
{
    ctx.Configuration.ProxyCreationEnabled = false;
    return ctx.Deferrals.AsNoTracking().Where(r =>
        r.DeferralID.Equals(deferralID)).FirstOrDefault();
}

Related

Updating many-to-many relationships with a generic repository

I have a database context with lazy loading disabled. I am using eager loading to load all of my entities. I cannot update many-to-many relationships.
Here's the repository.
public class GenericRepository<TEntity> : IGenericRepository<TEntity>
    where TEntity : class
{
    // ... other code here ...

    public virtual void Update(TEntity t)
    {
        Set.Attach(t);
        Context.Entry(t).State = EntityState.Modified;
    }

    // ... other code here ...
}
Here's the User model.
public partial class User
{
    public User()
    {
        this.Locks = new HashSet<Lock>();
        this.BusinessModels = new HashSet<BusinessModel>();
    }

    public int UserId { get; set; }
    public string Username { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
    public string JobTitle { get; set; }
    public string RecoveryEmail { get; set; }
    public Nullable<double> Zoom { get; set; }

    public virtual ICollection<Lock> Locks { get; set; }
    public virtual ICollection<BusinessModel> BusinessModels { get; set; }
}
If I modify the business models collection, it does not save the business models collection although I have attached the entire entity.
Worker.UserRepository.Update(user);
I'm not sure what is going on. I don't want to break my generic repository/unit of work pattern just to update many-to-many relationships.
Edit 2: I've got this working...but it is extremely different from the pattern that I'm going for. Having hard implementations means I will need to create a method for each type that has a many to many relationship. I am investigating now to see if I can make this a generic method.
Edit 3: So the previous implementation I had did not work like I thought it would. But now, I have a slightly working implementation. If someone would please help me so I can move on from this, I will love you forever.
public virtual void Update(TEntity updated,
    IEnumerable<object> set,
    string navigationProperty,
    Expression<Func<TEntity, bool>> filter,
    Type propertyType)
{
    // Find the existing item
    var existing = Context.Set<TEntity>().Include(navigationProperty).FirstOrDefault(filter);

    // Iterate through every item in the many-to-many relationship
    foreach (var o in set)
    {
        // Attach it if it's unattached
        if (Context.Entry(o).State == EntityState.Detached)
        {
            // Exception: "an object with the same key already exists"
            // This is due to the Include statement up above. That statement
            // is necessary in order to edit the entity's navigation
            // property.
            Context.Set(propertyType).Attach(o);
        }
    }

    // Set the new value on the navigation property.
    Context.Entry(existing).Collection(navigationProperty).CurrentValue = set;

    // Set new primitive property values.
    Context.Entry(existing).CurrentValues.SetValues(updated);
    Context.Entry(existing).State = EntityState.Modified;
}
I then call it like this:
Worker.UserRepository.Update(user, user.BusinessModels, "BusinessModels", i => i.UserId == user.UserId, typeof (BusinessModel));
Extremely messy, but it lets me update many-to-many relationships with generics. My big problem is the exception when I go to attach new values that already exist. They're already loaded because of the include statement.
After many painful hours, I have finally found a way to update many-to-many relationships with a completely generic repository. This will allow me to create (and save) many different types of entities without creating boilerplate code for each one.
This method assumes that:
Your entity already exists
Your many to many relationship is stored in a table with a composite key
You are using eager loading to load your relationships into context
You are using a unit-of-work/generic repository pattern to save your entities.
Here's the Update generic method.
public virtual void Update(Expression<Func<TEntity, bool>> filter,
    IEnumerable<object> updatedSet,   // Updated many-to-many relationships
    IEnumerable<object> availableSet, // Lookup collection
    string propertyName)              // The name of the navigation property
{
    // Get the generic type of the set
    var type = updatedSet.GetType().GetGenericArguments()[0];

    // Get the previous entity from the database based on repository type
    var previous = Context
        .Set<TEntity>()
        .Include(propertyName)
        .FirstOrDefault(filter);

    /* Create a container that will hold the values of
     * the generic many-to-many relationships we are updating.
     */
    var values = CreateList(type);

    /* For each object in the updated set, find the existing
     * entity in the database. This keeps Entity Framework
     * from creating new objects or throwing an error because
     * the object is already attached.
     */
    foreach (var entry in updatedSet
        .Select(obj => (int)obj
            .GetType()
            .GetProperty("Id")
            .GetValue(obj, null))
        .Select(value => Context.Set(type).Find(value)))
    {
        values.Add(entry);
    }

    /* Get the collection where the previous many-to-many relationships
     * are stored and assign the new ones.
     */
    Context.Entry(previous).Collection(propertyName).CurrentValue = values;
}
Here's a helper method I found online which allows me to create generic lists based on whatever type I give it.
public IList CreateList(Type type)
{
    var genericList = typeof(List<>).MakeGenericType(type);
    return (IList)Activator.CreateInstance(genericList);
}
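For what it's worth, this helper is easy to verify in isolation; the runtime type it produces really is a closed List&lt;T&gt;, which is what lets it later be assigned as the navigation property's CurrentValue:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class ListFactory
{
    // Builds a List<T> when T is only known at runtime,
    // exactly as the helper above does.
    public static IList CreateList(Type type)
    {
        var genericList = typeof(List<>).MakeGenericType(type);
        return (IList)Activator.CreateInstance(genericList);
    }
}
```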
And from now on, this is what calls to update many-to-many relationships look like:
Worker.UserRepository.Update(u => u.UserId == user.UserId,
user.BusinessModels, // Many-to-many relationship to update
Worker.BusinessModelRepository.Get(), // Full set
"BusinessModels"); // Property name
Of course, in the end you will still need to call somewhere:
Context.SaveChanges();
I hope this helps anyone who never truly found how to use many-to-many relationships with generic repositories and unit-of-work classes in Entity Framework.
@dimgl Your solution worked for me. What I've done in addition was to replace the hard-coded type and name of the primary key with dynamically retrieved ones:
ObjectContext objectContext = ((IObjectContextAdapter)context).ObjectContext;
ObjectSet<TEntity> set = objectContext.CreateObjectSet<TEntity>();
IEnumerable<string> keyNames = set.EntitySet.ElementType.KeyMembers.Select(k => k.Name);
var keyName = keyNames.FirstOrDefault();
var keyType = typeof(TEntity).GetProperty(keyName).PropertyType;

foreach (var entry in updatedSet
    .Select(obj =>
        Convert.ChangeType(obj.GetType()
            .GetProperty(keyName)
            .GetValue(obj, null), keyType))
    .Select(value => context.Set<TEntity>().Find(value)))
{
    values.Add(entry);
}
This way your code won't depend on the entity key's name and type.
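The EF-independent core of that snippet, reading a key property by name and converting the value to the key's declared type, can be sketched on its own (the Customer class and its long key are invented for illustration):

```csharp
using System;

public class Customer
{
    public long CustomerId { get; set; } // hypothetical key with a non-int type
}

public static class KeyReader
{
    // Reads the named key property from an entity and converts the value
    // to the property's declared type, as the snippet above does before
    // handing the value to Find.
    public static object GetKeyValue(object entity, string keyName)
    {
        var property = entity.GetType().GetProperty(keyName);
        var keyType = property.PropertyType;
        return Convert.ChangeType(property.GetValue(entity, null), keyType);
    }
}
```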

Filter all navigation properties before they are loaded (lazy or eager) into memory

For future visitors: for EF6 you are probably better off using filters, for example via this project: https://github.com/jbogard/EntityFramework.Filters
In the application we're building we apply the "soft delete" pattern where every class has a 'Deleted' bool. In practice, every class simply inherits from this base class:
public abstract class Entity
{
    public virtual int Id { get; set; }
    public virtual bool Deleted { get; set; }
}
To give a brief example, suppose I have the classes GymMember and Workout:
public class GymMember : Entity
{
    public string Name { get; set; }
    public virtual ICollection<Workout> Workouts { get; set; }
}

public class Workout : Entity
{
    public virtual DateTime Date { get; set; }
}
When I fetch the list of gym members from the database, I can make sure that none of the 'deleted' gym members are fetched, like this:
var gymMembers = context.GymMembers.Where(g => !g.Deleted);
However, when I iterate through these gym members, their Workouts are loaded from the database without any regard for their Deleted flag. While I cannot blame Entity Framework for not picking up on this, I would like to configure or intercept lazy property loading somehow so that deleted navigational properties are never loaded.
I've been going through my options, but they seem scarce:
Going to Database First and use conditional mapping for every object for every one-to-many property.
This is simply not an option, since it would be too much manual work. (Our application is huge and growing every day.) We also do not want to give up the advantages of using Code First (of which there are many).
Always eagerly loading navigation properties.
Again, not an option. This configuration is only available per entity. Always eagerly loading entities would also impose a serious performance penalty.
Applying the Expression Visitor pattern that automatically injects .Where(e => !e.Deleted) anywhere it finds an IQueryable<Entity>, as described here and here.
I actually tested this in a proof of concept application, and it worked wonderfully.
This was a very interesting option, but alas, it fails to apply filtering to lazily loaded navigation properties. This is obvious, as those lazy properties would not appear in the expression/query and as such cannot be replaced. I wonder if Entity Framework would allow for an injection point somewhere in their DynamicProxy class that loads the lazy properties.
I also fear other consequences, such as the possibility of breaking the Include mechanism in EF.
Writing a custom class that implements ICollection but filters the Deleted entities automatically.
This was actually my first approach. The idea would be to use a backing property for every collection property that internally uses a custom Collection class:
public class GymMember : Entity
{
    public string Name { get; set; }

    private ICollection<Workout> _workouts;
    public virtual ICollection<Workout> Workouts
    {
        get { return _workouts ?? (_workouts = new CustomCollection()); }
        set { _workouts = new CustomCollection(value); }
    }
}
While this approach is actually not bad, I still have some issues with it:
It still loads all the Workouts into memory and filters the Deleted ones when the property setter is hit. In my humble opinion, this is much too late.
There is a logical mismatch between executed queries and the data that is loaded.
Imagine a scenario where I want a list of the gym members that did a workout since last week:
var gymMembers = context.GymMembers.Where(g => g.Workouts.Any(w => w.Date >= DateTime.Now.AddDays(-7).Date));
This query might return a gym member that only has workouts that are deleted but also satisfy the predicate. Once they are loaded into memory, it appears as if this gym member has no workouts at all!
You could say that the developer should be aware of the Deleted and always include it in his queries, but that's something I would really like to avoid. Maybe the ExpressionVisitor could offer the answer here again.
It's actually impossible to mark a navigation property as Deleted when using the CustomCollection.
Imagine this scenario:
var gymMember = context.GymMembers.First();
gymMember.Workouts.First().Deleted = true;
context.SaveChanges();
You would expect that the appropriate Workout record is updated in the database, and you would be wrong! Since gymMember is being inspected by the ChangeTracker for changes, the property gymMember.Workouts will suddenly return one fewer workout. That's because CustomCollection automatically filters deleted instances, remember? So now Entity Framework thinks the workout needs to be deleted, and EF will try to set the FK to null or actually delete the record (depending on how your DB is configured). This is what we were trying to avoid with the soft delete pattern in the first place!
I stumbled upon an interesting blog post that overrides the default SaveChanges method of the DbContext so that any entries with an EntityState.Deleted are changed back to EntityState.Modified but this again feels 'hacky' and rather unsafe. However, I'm willing to try it out if it solves problems without any unintended side effects.
So here I am StackOverflow. I've researched my options quite extensively, if I may say so myself, and I'm at my wits end. So now I turn to you. How have you implemented soft deletes in your enterprise application?
To reiterate, these are the requirements I'm looking for:
Queries should automatically exclude the Deleted entities on the DB level
Deleting an entity and calling 'SaveChanges' should simply update the appropriate record and have no other side effects.
When navigational properties are loaded, whether lazy or eager, the Deleted ones should be automatically excluded.
I am looking forward to any and all suggestions, thank you in advance.
After much research, I've finally found a way to achieve what I wanted.
The gist of it is that I intercept materialized entities with an event handler on the object context, and then inject my custom collection class in every collection property that I can find (with reflection).
The most important part is intercepting the "DbCollectionEntry", the class responsible for loading related collection properties. By wiggling myself in between the entity and the DbCollectionEntry, I gain full control over what's loaded when and how. The only downside is that this DbCollectionEntry class has little to no public members, which requires me to use reflection to manipulate it.
Here is my custom collection class that implements ICollection and contains a reference to the appropriate DbCollectionEntry:
public class FilteredCollection<TEntity> : ICollection<TEntity> where TEntity : Entity
{
    private readonly DbCollectionEntry _dbCollectionEntry;
    private readonly Func<TEntity, Boolean> _compiledFilter;
    private readonly Expression<Func<TEntity, Boolean>> _filter;
    private ICollection<TEntity> _collection;
    private int? _cachedCount;

    public FilteredCollection(ICollection<TEntity> collection, DbCollectionEntry dbCollectionEntry)
    {
        _filter = entity => !entity.Deleted;
        _dbCollectionEntry = dbCollectionEntry;
        _compiledFilter = _filter.Compile();
        _collection = collection != null ? collection.Where(_compiledFilter).ToList() : null;
    }

    private ICollection<TEntity> Entities
    {
        get
        {
            if (_dbCollectionEntry.IsLoaded == false && _collection == null)
            {
                IQueryable<TEntity> query = _dbCollectionEntry.Query().Cast<TEntity>().Where(_filter);
                _dbCollectionEntry.CurrentValue = this;
                _collection = query.ToList();

                object internalCollectionEntry =
                    _dbCollectionEntry.GetType()
                        .GetField("_internalCollectionEntry", BindingFlags.NonPublic | BindingFlags.Instance)
                        .GetValue(_dbCollectionEntry);
                object relatedEnd =
                    internalCollectionEntry.GetType()
                        .BaseType.GetField("_relatedEnd", BindingFlags.NonPublic | BindingFlags.Instance)
                        .GetValue(internalCollectionEntry);
                relatedEnd.GetType()
                    .GetField("_isLoaded", BindingFlags.NonPublic | BindingFlags.Instance)
                    .SetValue(relatedEnd, true);
            }
            return _collection;
        }
    }

    #region ICollection<T> Members

    void ICollection<TEntity>.Add(TEntity item)
    {
        if (_compiledFilter(item))
            Entities.Add(item);
    }

    void ICollection<TEntity>.Clear()
    {
        Entities.Clear();
    }

    Boolean ICollection<TEntity>.Contains(TEntity item)
    {
        return Entities.Contains(item);
    }

    void ICollection<TEntity>.CopyTo(TEntity[] array, Int32 arrayIndex)
    {
        Entities.CopyTo(array, arrayIndex);
    }

    Int32 ICollection<TEntity>.Count
    {
        get
        {
            if (_dbCollectionEntry.IsLoaded)
                return _collection.Count;
            return _dbCollectionEntry.Query().Cast<TEntity>().Count(_filter);
        }
    }

    Boolean ICollection<TEntity>.IsReadOnly
    {
        get { return Entities.IsReadOnly; }
    }

    Boolean ICollection<TEntity>.Remove(TEntity item)
    {
        return Entities.Remove(item);
    }

    #endregion

    #region IEnumerable<T> Members

    IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
    {
        return Entities.GetEnumerator();
    }

    #endregion

    #region IEnumerable Members

    IEnumerator IEnumerable.GetEnumerator()
    {
        return ((this as IEnumerable<TEntity>).GetEnumerator());
    }

    #endregion
}
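Stripped of the DbCollectionEntry plumbing and reflection, the observable contract of the wrapper boils down to "deleted items are invisible and cannot be added". Here's a minimal in-memory analogue for illustration only (the ISoftDeletable interface and Item class are invented; no EF involved):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

public interface ISoftDeletable
{
    bool Deleted { get; set; }
}

public class Item : ISoftDeletable
{
    public string Name { get; set; }
    public bool Deleted { get; set; }
}

// In-memory analogue of FilteredCollection: deleted items are hidden
// from every read path and rejected on Add.
public class FilteredList<T> : ICollection<T> where T : ISoftDeletable
{
    private readonly List<T> _inner;

    public FilteredList(IEnumerable<T> items)
    {
        _inner = items.ToList();
    }

    private IEnumerable<T> Visible
    {
        get { return _inner.Where(i => !i.Deleted); }
    }

    public void Add(T item)
    {
        if (!item.Deleted)
            _inner.Add(item);
    }

    public bool Remove(T item) { return _inner.Remove(item); }
    public void Clear() { _inner.Clear(); }
    public bool Contains(T item) { return Visible.Contains(item); }
    public void CopyTo(T[] array, int arrayIndex) { Visible.ToList().CopyTo(array, arrayIndex); }
    public int Count { get { return Visible.Count(); } }
    public bool IsReadOnly { get { return false; } }
    public IEnumerator<T> GetEnumerator() { return Visible.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

The real class above adds lazy loading and server-side filtering on top of this contract; the in-memory version only shows the filtering behaviour itself.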
If you skim through it, you'll find that the most important part is the "Entities" property, which lazy loads the actual values. In the constructor of the FilteredCollection I pass an optional ICollection for scenarios where the collection was already eagerly loaded.
Of course, we still need to configure Entity Framework so that our FilteredCollection is used everywhere where there are collection properties. This can be achieved by hooking into the ObjectMaterialized event of the underlying ObjectContext of Entity Framework:
(this as IObjectContextAdapter).ObjectContext.ObjectMaterialized +=
    delegate(Object sender, ObjectMaterializedEventArgs e)
    {
        if (e.Entity is Entity)
        {
            var entityType = e.Entity.GetType();
            IEnumerable<PropertyInfo> collectionProperties;
            if (!CollectionPropertiesPerType.TryGetValue(entityType, out collectionProperties))
            {
                CollectionPropertiesPerType[entityType] = (collectionProperties = entityType.GetProperties()
                    .Where(p => p.PropertyType.IsGenericType && typeof(ICollection<>) == p.PropertyType.GetGenericTypeDefinition()));
            }
            foreach (var collectionProperty in collectionProperties)
            {
                var collectionType = typeof(FilteredCollection<>).MakeGenericType(collectionProperty.PropertyType.GetGenericArguments());
                DbCollectionEntry dbCollectionEntry = Entry(e.Entity).Collection(collectionProperty.Name);
                dbCollectionEntry.CurrentValue = Activator.CreateInstance(collectionType, new[] { dbCollectionEntry.CurrentValue, dbCollectionEntry });
            }
        }
    };
It all looks rather complicated, but what it does essentially is scan the materialized type for collection properties and change the value to a filtered collection. It also passes the DbCollectionEntry to the filtered collection so it can work its magic.
This covers the whole 'loading entities' part. The only downside so far is that eagerly loaded collection properties will still include the deleted entities, but they are filtered out in the 'Add' method of the FilterCollection class. This is an acceptable downside, although I have yet to do some testing on how this affects the SaveChanges() method.
Of course, this still leaves one issue: there is no automatic filtering on queries. If you want to fetch the gym members who did a workout in the past week, you want to exclude the deleted workouts automatically.
This is achieved through an ExpressionVisitor that automatically applies a '.Where(e => !e.Deleted)' filter to every IQueryable it can find in a given expression.
Here is the code:
public class DeletedFilterInterceptor : ExpressionVisitor
{
    public Expression<Func<Entity, bool>> Filter { get; set; }

    public DeletedFilterInterceptor()
    {
        Filter = entity => !entity.Deleted;
    }

    protected override Expression VisitMember(MemberExpression ex)
    {
        return !ex.Type.IsGenericType ? base.VisitMember(ex) : CreateWhereExpression(Filter, ex) ?? base.VisitMember(ex);
    }

    private Expression CreateWhereExpression(Expression<Func<Entity, bool>> filter, Expression ex)
    {
        var type = ex.Type; //.GetGenericArguments().First();
        var test = CreateExpression(filter, type);
        if (test == null)
            return null;
        var listType = typeof(IQueryable<>).MakeGenericType(type);
        return Expression.Convert(Expression.Call(typeof(Enumerable), "Where", new Type[] { type }, (Expression)ex, test), listType);
    }

    private LambdaExpression CreateExpression(Expression<Func<Entity, bool>> condition, Type type)
    {
        var lambda = (LambdaExpression)condition;
        if (!typeof(Entity).IsAssignableFrom(type))
            return null;

        var newParams = new[] { Expression.Parameter(type, "entity") };
        var paramMap = lambda.Parameters.Select((original, i) => new { original, replacement = newParams[i] }).ToDictionary(p => p.original, p => p.replacement);
        var fixedBody = ParameterRebinder.ReplaceParameters(paramMap, lambda.Body);
        lambda = Expression.Lambda(fixedBody, newParams);
        return lambda;
    }
}

public class ParameterRebinder : ExpressionVisitor
{
    private readonly Dictionary<ParameterExpression, ParameterExpression> _map;

    public ParameterRebinder(Dictionary<ParameterExpression, ParameterExpression> map)
    {
        _map = map ?? new Dictionary<ParameterExpression, ParameterExpression>();
    }

    public static Expression ReplaceParameters(Dictionary<ParameterExpression, ParameterExpression> map, Expression exp)
    {
        return new ParameterRebinder(map).Visit(exp);
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        ParameterExpression replacement;
        if (_map.TryGetValue(node, out replacement))
            node = replacement;
        return base.VisitParameter(node);
    }
}
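The ParameterRebinder only needs System.Linq.Expressions, so its effect is easy to check in isolation. Here's a small sketch (repeating the class verbatim so the sample compiles on its own) that rebinds a lambda onto a fresh parameter and compiles the result:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public class ParameterRebinder : ExpressionVisitor
{
    private readonly Dictionary<ParameterExpression, ParameterExpression> _map;

    public ParameterRebinder(Dictionary<ParameterExpression, ParameterExpression> map)
    {
        _map = map ?? new Dictionary<ParameterExpression, ParameterExpression>();
    }

    public static Expression ReplaceParameters(
        Dictionary<ParameterExpression, ParameterExpression> map, Expression exp)
    {
        return new ParameterRebinder(map).Visit(exp);
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        ParameterExpression replacement;
        if (_map.TryGetValue(node, out replacement))
            node = replacement;
        return base.VisitParameter(node);
    }
}

public static class RebindDemo
{
    // Rebinds "x => x > 2" onto a fresh parameter "y" and compiles it.
    public static Func<int, bool> Rebind()
    {
        Expression<Func<int, bool>> f = x => x > 2;
        var y = Expression.Parameter(typeof(int), "y");
        var map = new Dictionary<ParameterExpression, ParameterExpression>
        {
            { f.Parameters[0], y }
        };
        var body = ParameterRebinder.ReplaceParameters(map, f.Body);
        return Expression.Lambda<Func<int, bool>>(body, y).Compile();
    }
}
```

The compiled delegate behaves exactly like the original lambda; only the parameter node inside the tree has been swapped, which is what the interceptor relies on when retargeting the base filter onto a concrete entity type.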
I am running a bit short on time, so I'll get back to this post later with more details, but the gist of it is written down, and for those of you eager to try everything out, I've posted the full test application here: https://github.com/amoerie/TestingGround
However, there might still be some errors, as this is very much a work in progress. The conceptual idea is sound though, and I expect it to fully function soon once I've refactored everything neatly and find the time to write some tests for this.
Have you considered using views in your database to load your problem entities with the deleted items excluded?
It does mean you will need to use stored procedures to map INSERT/UPDATE/DELETE functionality, but it would definitely solve your problem if Workout maps to a View with the deleted rows omitted. Also - this may not work the same in a code first approach...
One possible way might be using specifications, with a base specification that checks the soft-deleted flag for all queries, together with an include strategy.
I’ll illustrate an adjusted version of the specification pattern that I've used in a project (which had its origin in this blog post)
public abstract class SpecificationBase<T> : ISpecification<T>
    where T : Entity
{
    private readonly IPredicateBuilderFactory _builderFactory;
    private IPredicateBuilder<T> _predicateBuilder;

    protected SpecificationBase(IPredicateBuilderFactory builderFactory)
    {
        _builderFactory = builderFactory;
    }

    public IPredicateBuilder<T> PredicateBuilder
    {
        get { return _predicateBuilder ?? (_predicateBuilder = BuildPredicate()); }
    }

    protected abstract void AddSatisfactionCriterion(IPredicateBuilder<T> predicateBuilder);

    private IPredicateBuilder<T> BuildPredicate()
    {
        var predicateBuilder = _builderFactory.Make<T>();
        predicateBuilder.Check(candidate => !candidate.IsDeleted);
        AddSatisfactionCriterion(predicateBuilder);
        return predicateBuilder;
    }
}
The IPredicateBuilder is a wrapper to the predicate builder included in the LINQKit.dll.
The specification base class is responsible for creating the predicate builder. Once created, the criteria that should apply to every query can be added. The predicate builder is then passed to the inheriting specifications so they can add further criteria. For example:
public class IdSpecification<T> : SpecificationBase<T>
    where T : Entity
{
    private readonly int _id;

    public IdSpecification(int id, IPredicateBuilderFactory builderFactory)
        : base(builderFactory)
    {
        _id = id;
    }

    protected override void AddSatisfactionCriterion(IPredicateBuilder<T> predicateBuilder)
    {
        predicateBuilder.And(entity => entity.Id == _id);
    }
}
The IdSpecification's full predicate would then be:
entity => !entity.IsDeleted && entity.Id == _id
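If LINQKit isn't to hand, the composition the builder performs can be sketched with plain expression trees: rebind the second lambda's parameter onto the first's and join the bodies with Expression.AndAlso. This is only a simplified stand-in for what IPredicateBuilder does, using nothing but System.Linq.Expressions:

```csharp
using System;
using System.Linq.Expressions;

public class Entity
{
    public int Id { get; set; }
    public bool IsDeleted { get; set; }
}

public static class PredicateCombiner
{
    // Combines two predicates over T into one AndAlso predicate,
    // rebinding the right lambda's parameter onto the left's so the
    // resulting tree references a single parameter.
    public static Expression<Func<T, bool>> AndAlso<T>(
        Expression<Func<T, bool>> left, Expression<Func<T, bool>> right)
    {
        var parameter = left.Parameters[0];
        var rightBody = new Rebinder(right.Parameters[0], parameter).Visit(right.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.AndAlso(left.Body, rightBody), parameter);
    }

    private class Rebinder : ExpressionVisitor
    {
        private readonly ParameterExpression _from;
        private readonly ParameterExpression _to;

        public Rebinder(ParameterExpression from, ParameterExpression to)
        {
            _from = from;
            _to = to;
        }

        protected override Expression VisitParameter(ParameterExpression node)
        {
            return node == _from ? _to : base.VisitParameter(node);
        }
    }
}
```

Calling `PredicateCombiner.AndAlso<Entity>(e => !e.IsDeleted, e => e.Id == 42)` produces exactly the `entity => !entity.IsDeleted && entity.Id == _id` shape shown above.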
The specification can then be passed to the repository which uses the PredicateBuilder property to build up the where clause:
public IQueryable<T> FindAll(ISpecification<T> spec)
{
    return context.AsExpandable().Where(spec.PredicateBuilder.Complete()).AsQueryable();
}
AsExpandable() is part of the LINQKit.dll.
Regarding including/lazy loading properties, one can extend the specification with a further property for includes. The specification base adds the base includes, child specifications add their own, and the repository applies the includes from the specification before fetching from the database.
public IQueryable<T> Apply<T>(IDbSet<T> context, ISpecification<T> specification)
{
    if (specification.IncludePaths == null)
        return context;
    return specification.IncludePaths.Aggregate<string, IQueryable<T>>(context, (current, path) => current.Include(path));
}
Let me know if something is unclear. I tried not to make this a monster post so some details might be left out.
Edit: I realized that I didn't fully answer your question(s) about navigation properties. What if you made the navigation properties internal (using this post to configure that) and created non-mapped public properties that are IQueryable? The non-mapped properties could carry a custom attribute, and the repository would add the base specification's predicate to the where clause without eagerly loading anything. When someone does apply an eager operation, the filter will apply. Something like:
public T Find(int id)
{
    var entity = Context.SingleOrDefault(x => x.Id == id);
    if (entity != null)
    {
        foreach (var property in entity.GetType()
            .GetProperties()
            .Where(info => info.CustomAttributes.OfType<FilteredNavigationProperty>().Any()))
        {
            var collection = (property.GetValue(entity) as IQueryable<IEntity>);
            collection = collection.Where(spec.PredicateBuilder.Complete());
        }
    }
    return entity;
}
I haven't tested the above code but it could work with some tweaking :)
Edit 2: Deletes.
If you're using a general/generic repository you could simply add some further functionality to the delete method:
public void Delete(T entity)
{
    var castedEntity = entity as Entity;
    if (castedEntity != null)
    {
        castedEntity.IsDeleted = true;
    }
    else
    {
        _context.Remove(entity);
    }
}

Where to put Created date and Created by in DDD?

I use Entity Framework and want to follow DDD principles. However, some information on the entities sits on the borderline between logging/persistence concerns and information about the domain objects.
In my situation, these are put in an abstract base class that all entities inherit from:
public abstract class BaseEntity : IBaseEntity
{
    /// <summary>
    /// The unique identifier
    /// </summary>
    public int Id { get; set; }

    /// <summary>
    /// The user that created this instance
    /// </summary>
    public User CreatedBy { get; set; }

    /// <summary>
    /// The date and time the object was created
    /// </summary>
    public DateTime CreatedDate { get; set; }

    /// <summary>
    /// Which user was the last one to change this object
    /// </summary>
    public User LastChangedBy { get; set; }

    /// <summary>
    /// When was the object last changed
    /// </summary>
    public DateTime LastChangedDate { get; set; }

    /// <summary>
    /// This is the status of the entity. See EntityStatus documentation for more information.
    /// </summary>
    public EntityStatus EntityStatus { get; set; }

    /// <summary>
    /// Sets the default values for a new object
    /// </summary>
    protected BaseEntity()
    {
        CreatedDate = DateTime.Now;
        EntityStatus = EntityStatus.Active;
        LastChangedDate = DateTime.Now;
    }
}
Now a domain object can't be instantiated without setting the date and time. However, it feels like the wrong place for it; I can argue both ways. Maybe it should not be mixed into the domain at all?
Since I'm using EF Code First, it makes sense to put it there; otherwise I would need to create new classes inheriting from the base class in the DAL as well, duplicating code and requiring mapping to both domain objects and MVC models, which seems messier than the approach above.
The question(s):
Is it OK to use DateTime.Now in the domain model at all? Where do you put this kind of information using DDD and EF Code First? Should User be set in the domain object, or should the Business Layer require it?
Update
I think jgauffin has the right answer here, but it is quite a fundamental change. However, on my search for an alternate solution I almost had it solved with the following. I used ChangeTracker.Entries to find out whether an entity is added or modified and set the fields accordingly, in my UnitOfWork Save() method.
The problem is loading navigation properties, like User (DateTime is set correctly). It might be because the user is a property on the abstract base class the entity inherits from. I also don't like putting strings in there, but it might solve some simple scenarios for someone, so I post the solution here:
public void SaveChanges(User changedBy)
{
    foreach (var entry in _context.ChangeTracker.Entries<BaseEntity>())
    {
        if (entry.State == EntityState.Added)
        {
            entry.Entity.CreatedDate = DateTime.Now;
            entry.Entity.LastChangedDate = DateTime.Now;
            entry.Entity.CreatedBy = changedBy;
            entry.Entity.LastChangedBy = changedBy;
        }
        if (entry.State == EntityState.Modified)
        {
            entry.Entity.CreatedDate = entry.OriginalValues.GetValue<DateTime>("CreatedDate");
            entry.Entity.CreatedBy = entry.OriginalValues.GetValue<User>("CreatedBy");
            entry.Entity.LastChangedDate = DateTime.Now;
            entry.Entity.LastChangedBy = changedBy;
        }
    }
    _context.SaveChanges();
}
Is it Ok to use DateTime.Now in the Domain model at all?
Yes.
Where do you put this kind of information using DDD and EF Code First? Should User be set in the domain object or required in the Business Layer?
Well. First of all: A DDD model is always in a valid state. That's impossible with public setters. In DDD you work with the models using methods since the methods can make sure that all required information has been specified and is valid.
For instance, if you can mark an item as completed it's likely that the UpdatedAt date should be changed too. If you let the calling code make sure of that it's likely that it will be forgotten somewhere. Instead you should have something like:
public class MyDomainModel
{
    public void MarkAsCompleted(User completedBy)
    {
        if (completedBy == null) throw new ArgumentNullException("completedBy");
        State = MyState.Completed;
        UpdatedAt = DateTime.Now;
        CompletedAt = DateTime.Now;
        CompletedBy = completedBy;
    }
}
Read my blog post about that approach: http://blog.gauffin.org/2012/06/protect-your-data/
Update
How to make sure that no one changes "CreatedBy" and "CreatedDate" later on
I usually have two constructors for the models, which also fits the DB: a protected one which can be used by my persistence layer, and one which requires the mandatory fields. Take createdBy in that constructor and set CreatedDate in it:
public class YourModel
{
    public YourModel(User createdBy)
    {
        CreatedDate = DateTime.Now;
        CreatedBy = createdBy;
    }

    // for persistence
    protected YourModel()
    {
    }
}
Then have private setters for those fields.
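Putting both pieces together, a minimal sketch of what that looks like (property names follow the examples above; the null check is an addition for illustration):

```csharp
public class YourModel
{
    // Private setters: only the constructors (and EF, via reflection/proxies)
    // can assign these, so calling code cannot overwrite the audit fields.
    public DateTime CreatedDate { get; private set; }
    public User CreatedBy { get; private set; }

    public YourModel(User createdBy)
    {
        if (createdBy == null) throw new ArgumentNullException("createdBy");
        CreatedDate = DateTime.Now;
        CreatedBy = createdBy;
    }

    // for persistence
    protected YourModel()
    {
    }
}
```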
I get a lot of R# warnings, "Virtual member call in constructor"; I've read about it before and it is not supposed to be good practice.
That's usually not a problem. Read here: Virtual member call in a constructor
Is it Ok to use DateTime.Now in the Domain model at all?
It isn't terrible, but the problem is that you will end up having to duplicate code and it will be more difficult to achieve consistency.
Where do you put this kind of information using DDD and EF Code First?
You are correct to assert that this type of information doesn't belong in your domain. It is typically called an audit log or trail. There are a few ways to implement auditing with EF. Take a look at AuditDbContext - Entity Framework Auditing Context for instance, or just search around for EF auditing implementations. The idea is that before EF persists changes to an entity, it raises an event which you can listen to and assign the required audit values.
Should User be set in the domain object or required in the Business Layer?
It is best to handle this at the infrastructure/repository level with an auditing implementation as stated above. This is the final stop before data is persisted and thus is the perfect place to take care of this.
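As a sketch of that idea (assuming an EF 4.1+ DbContext; how CurrentUser gets populated is an assumption - the infrastructure layer would set it per request):

```csharp
public class AuditingContext : DbContext
{
    // Hypothetical: assigned by the infrastructure/repository layer per request.
    public User CurrentUser { get; set; }

    public override int SaveChanges()
    {
        // Last stop before persistence: stamp audit fields here so the
        // domain model never has to know about them.
        foreach (var entry in ChangeTracker.Entries<BaseEntity>())
        {
            if (entry.State == EntityState.Added)
            {
                entry.Entity.CreatedDate = DateTime.Now;
                entry.Entity.CreatedBy = CurrentUser;
            }
            if (entry.State == EntityState.Added || entry.State == EntityState.Modified)
            {
                entry.Entity.LastChangedDate = DateTime.Now;
                entry.Entity.LastChangedBy = CurrentUser;
            }
        }
        return base.SaveChanges();
    }
}
```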

Loaded from another DataContext?

In my previous applications when I used linq-to-sql I would always use one class to put my linq-to-sql code in, so I would only have one DataContext.
My current application though is getting too big and I started splitting my code up in different classes (One for Customer, one for Location, one for Supplier...) and they all have their own DataContext DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
Now when I try to save a contact with a location (which I got from a different DataContext) I get the following error:
"An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported."
I assume this is because I create a DataContext for every class, but I wouldn't know how to do this differently.
I'm looking for any ideas, thanks.
My classes look like the following:
public class LocatieManagement
{
    private static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();

    public static void addLocatie(locatie nieuweLocatie)
    {
        dc.locaties.InsertOnSubmit(nieuweLocatie);
        dc.SubmitChanges();
    }

    public static IEnumerable<locatie> getLocaties()
    {
        var query = (from l in dc.locaties
                     select l);
        IEnumerable<locatie> locaties = query;
        return locaties;
    }

    public static locatie getLocatie(int locatie_id)
    {
        var query = (from l in dc.locaties
                     where l.locatie_id == locatie_id
                     select l).Single();
        locatie locatie = query;
        return locatie;
    }
}
That happens if the entity is still attached to the original datacontext. Turn off deferred loading (dc.DeferredLoadingEnabled = false):
partial class SomeDataContext
{
    partial void OnCreated()
    {
        this.DeferredLoadingEnabled = false;
    }
}
You may also need to serialize/deserialize it once (e.g. using DataContractSerializer) to disconnect it from the original DataContext; here's a clone method that uses DataContractSerializer:
internal static T CloneEntity<T>(T originalEntity) where T : SomeEntityBaseClass
{
    Type entityType = typeof(T);
    DataContractSerializer ser = new DataContractSerializer(entityType);
    using (MemoryStream ms = new MemoryStream())
    {
        // Round-trip through the serializer to get a detached copy
        ser.WriteObject(ms, originalEntity);
        ms.Position = 0;
        return (T)ser.ReadObject(ms);
    }
}
This happens because you're trying to manage data from different contexts - you would need to properly detach and attach your objects to proceed. However, I would suggest removing the need to do this altogether.
So, first things first: remove the data context instances from your entity classes.
From there, create 'operational' classes that expose the CRUD operations for a specific type of entity class, with each method using a dedicated data context for that unit of work - perhaps overloading each to accept a current context for when a unit of work spans several operations.
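A minimal sketch of that shape (class and method names are illustrative, modeled on the LocatieManagement class from the question):

```csharp
public class LocatieOperations
{
    // Each method creates and disposes its own context: one unit of work.
    public void AddLocatie(locatie nieuweLocatie)
    {
        using (var dc = new DatabaseDesignDataContext())
        {
            AddLocatie(nieuweLocatie, dc);
            dc.SubmitChanges();
        }
    }

    // Overload that joins an existing unit of work: the caller owns the
    // context and decides when to call SubmitChanges.
    public void AddLocatie(locatie nieuweLocatie, DatabaseDesignDataContext dc)
    {
        dc.locaties.InsertOnSubmit(nieuweLocatie);
    }
}
```

Because every entity is loaded and saved through a single context per unit of work, the "loaded from another DataContext" error never arises.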
I know everybody probably gets tired of hearing this, but you really should look at using Repositories for Data Access (and using the Unit of Work pattern to ensure that all of the repositories that are sharing a unit of work are using the same DataContext).
You can read up on how to do things here: Revisiting the Repository and Unit of Work Patterns with Entity Framework (the same concepts apply to LINQ to SQL as well).
Another solution I found for this is to create one parent class DataContext
public class DataContext
{
public static DatabaseDesignDataContext dc = new DatabaseDesignDataContext();
}
And let all my other classes inherit this one.
public class LocatieManagement : DataContext
{
public static void addLocatie(locatie nieuweLocatie)
{
dc.locaties.InsertOnSubmit(nieuweLocatie);
dc.SubmitChanges();
}
}
Then all the classes use the same DataContext.

Updating LINQ to SQL - my approach via reflection. Asking for improvement hints

A lot of people have trouble updating entities "automatically" - that is, without rewriting each value separately.
Imagine the following situation: we have a WCF service where a client receives an Order entity via WCF, changes some of the properties, then sends it back. To update such an object we would normally need to rewrite each of the properties manually, and I don't want to write separate code for every class (and rewrite it when properties change, for example). I have tried Linq2SQLEntityBase but somehow I can't get it to work, even though I studied the example thoroughly. So here is my proposition:
public class Alterator<TEntity, TDataContext>
    where TDataContext : DataContext, new()
{
    /// <summary>
    /// Updates a group of entities, performing an atomic operation for each of them
    /// (the lambda is executed for each entity separately).
    /// </summary>
    /// <param name="entities">Any kind of IEnumerable of entities to insert or update.</param>
    /// <param name="findOriginalLambda">A lambda expression that should return the original TEntity if the entity is to be updated, or null if it is new.</param>
    public static void AlterAll(IEnumerable<TEntity> entities, Func<TDataContext, TEntity> findOriginalLambda)
    {
        foreach (TEntity newEntity in entities)
        {
            // A fresh DataContext per entity is required for this to work correctly
            using (TDataContext dataContext = new TDataContext())
            {
                dataContext.DeferredLoadingEnabled = false;
                Type entityType = typeof(TEntity);
                ITable tab = dataContext.GetTable(entityType);
                TEntity originalEntity = findOriginalLambda(dataContext);
                // If the lambda returned no matching existing record, create a new one.
                // No need to if-check for existence beforehand as long as your lambda is properly built
                // (I suggest using SingleOrDefault() or FirstOrDefault() in queries).
                if (originalEntity == null)
                {
                    tab.InsertOnSubmit(newEntity);
                }
                else
                {
                    foreach (PropertyInfo p in entityType.GetProperties().Where(k => k.CanWrite))
                    {
                        var newValue = p.GetValue(newEntity, null);
                        var originalValue = p.GetValue(originalEntity, null);
                        // Copy only properties that actually changed
                        if (newValue != null && originalValue != null && !newValue.Equals(originalValue))
                            p.SetValue(originalEntity, newValue, null);
                    }
                }
                // Submit in both cases so inserts are persisted as well as updates
                dataContext.SubmitChanges();
            }
        }
    }

    /// <summary>
    /// Updates a single entity if the lambda expression returns a valid original entity. Inserts one if the lambda returns null.
    /// </summary>
    /// <param name="newOrModifiedEntity">Entity to update or insert.</param>
    /// <param name="findOriginalLambda">A lambda expression that should return the original TEntity if the entity is to be updated, or null if it is new.</param>
    public static void Alter(TEntity newOrModifiedEntity, Func<TDataContext, TEntity> findOriginalLambda)
    {
        AlterAll(new TEntity[] { newOrModifiedEntity }, findOriginalLambda);
    }
}
And an update code sample. It finds an order based on its key - OrderItemID. If null is returned, a new entry will be created.
public void AlterOrderItem(OrderItem o)
{
    Alterator<OrderItem, JDataContext>.Alter(o,
        t => t.OrderItems.Where(k => k.OrderItemID == o.OrderItemID).SingleOrDefault());
}
Is there a better way to do this? I've been trying over a week now.
I suggest taking a look at the West Wind Business Framework. It's a business object wrapper that makes life easier.
Look at attaching entities: after you have returned an object from your service, when the client updates and returns the object, in the WCF service you can simply attach the object to the data context and then save the changes.
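A minimal sketch of that attach-and-save flow (LINQ to SQL; the type names follow the earlier example, and attaching as modified assumes the table has a timestamp/version column or appropriate UpdateCheck settings so the UPDATE can be generated without original values):

```csharp
public void UpdateOrderItem(OrderItem modified)
{
    using (var dc = new JDataContext())
    {
        // Attach the detached entity as modified so LINQ to SQL
        // generates an UPDATE for it instead of treating it as new.
        dc.OrderItems.Attach(modified, true);
        dc.SubmitChanges();
    }
}
```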
