I have a Provider class and an Article class. Article has an int ProviderId { get; set; } property and a public virtual Provider Provider { get; set; } navigation property.
I know about lazy loading and why I can't access the Provider property of an Article outside the context, but I have a generic method that returns the next T, like this:
public static T Next<T>(T currentElement) where T : class, IModel {
    T data;
    if (currentElement.Id >= GetLastId<T>())
        return currentElement;
    using (DatabaseEntities context = new DatabaseEntities()) {
        data = context.Set<T>().Single(el => el.Id == currentElement.Id + 1);
    }
    return data;
}
But I can't retrieve child entities like Provider in the Article class. How can I include all related entities? Should I update the method, or write one per entity?
I read about eager loading and explicit loading, but I don't know how to implement these in my method.
Note:
Not all my entities have child entities, and I have more methods like Previous<T>(), First<T>() or Last<T>() that do the work you expect.
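For reference, this is roughly what eager loading and explicit loading look like for the Article/Provider case outside my generic method (someId is just a placeholder; the lambda Include needs a using System.Data.Entity directive):
using (DatabaseEntities context = new DatabaseEntities())
{
    // eager loading: fetch the Provider together with the Article in one query
    Article eager = context.Set<Article>().Include(a => a.Provider).Single(a => a.Id == someId);

    // explicit loading: load the Provider on demand for an already-loaded Article
    Article article = context.Set<Article>().Single(a => a.Id == someId);
    context.Entry(article).Reference(a => a.Provider).Load();
}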
You could create an overload of your Next method that accepts an Expression<Func<T, object>>:
public static T Next<T>(T currentElement, Expression<Func<T, object>> navigationProperty) where T : class, IModel
{
    T data;
    if (currentElement.Id >= GetLastId<T>())
        return currentElement;
    using (DatabaseEntities context = new DatabaseEntities())
    {
        IQueryable<T> dbQuery = context.Set<T>();
        if (navigationProperty != null)
            dbQuery = dbQuery.Include<T, object>(navigationProperty);
        data = dbQuery.AsNoTracking().Single(el => el.Id == currentElement.Id + 1);
    }
    return data;
}
Usage:
var entity = Next(instance, x => x.Provider);
Please refer to the following blog post for more information and a complete example.
Implementing a generic data access layer using Entity Framework: https://blog.magnusmontin.net/2013/05/30/generic-dal-using-entity-framework/
You could extend your method to take a list of possible includes. For example:
public static T Next<T>(T currentElement, params Expression<Func<T, object>>[] includes)
    where T : class, IModel
{
    if (currentElement.Id >= GetLastId<T>())
        return currentElement;
    using (DatabaseEntities context = new DatabaseEntities())
    {
        IQueryable<T> query = context.Set<T>();
        // Deferred execution allows us to build up the query
        foreach (var include in includes)
        {
            query = query.Include(include);
        }
        return query.Single(el => el.Id == currentElement.Id + 1);
    }
}
And use it like this:
var someEntity = ...;
var nextEntity = Next(someEntity,
    e => e.Childcollection1,
    e => e.Childcollection2,
    //
    e => e.ChildcollectionN);
As an additional point, to get your next entity you shouldn't rely on the ID values being sequential; for example, they will get out of sequence if entries are deleted. Instead, consider something like this:
return query.OrderBy(el => el.Id).First(el => el.Id > currentElement.Id);
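If you adopt that, the GetLastId<T>() guard also becomes unnecessary; one possible sketch of the lookup inside the using block:
return query.OrderBy(el => el.Id)
            .FirstOrDefault(el => el.Id > currentElement.Id)
       ?? currentElement; // fall back to the current element when it is already the last one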
So you have a one-to-many relation between Provider and Article: Every Provider has zero or more Articles, and every Article belongs to exactly one Provider.
If you have configured this one-to-many relationship correctly in Entity Framework, a Provider should have a reference to its many Articles:
public virtual ICollection<Article> Articles{get; set;}
Furthermore you have a function Next, that takes as input an object of class T, which is a class that implements IModel. Next returns the next element from the DbSet of Ts, assuming you have a proper definition for the next element.
Apparently you want to adapt function Next such that you can use this function to get the next Provider with all its Articles.
More generic: if T is a type with properly designed one-to-many relationships, and you have an object of type T, you want the T with the first Id higher than the Id of the current T, including all its ICollections.
I've divided this into several smaller functions:
A function that, given a type T, returns all properties that implement ICollection
A function that, given a DbSet, returns all elements of the DbSet including all their ICollections
Given those two, and functions like OrderBy and FirstOrDefault, you will be able to get the first Provider with an Id larger than the current provider's, with all its Articles.
static class QueryableExtensions
{
    // returns true if a Type is an ICollection<...>
    public static bool ImplementsICollection(this Type type)
    {
        return type.IsGenericType
            && type.GetGenericTypeDefinition() == typeof(ICollection<>);
    }

    // returns all readable & writable PropertyInfos of type T whose property type is an ICollection<...>
    public static IEnumerable<PropertyInfo> CollectionProperties<T>()
    {
        return typeof(T).GetProperties()
            .Where(prop => prop.CanRead
                && prop.CanWrite
                && prop.PropertyType.ImplementsICollection());
    }

    // given an IQueryable<T>, adds an Include for every ICollection<...> that T has
    public static IQueryable<T> IncludeICollection<T>(this IQueryable<T> query)
    {
        var collectionProperties = CollectionProperties<T>();
        foreach (var collectionProperty in collectionProperties)
        {
            query = query.Include(collectionProperty.Name);
        }
        return query;
    }

    // given an IQueryable<T> and a T, return the first T in the query with an Id higher than the current one
    // for this we need to be sure every T has a comparable property Id,
    // so T should implement interface IId (see below)
    public static T Next<T>(this IQueryable<T> query, T currentItem)
        where T : IId // makes sure every T has a property Id
    {
        T nextElement = query
            // keep only those DbSet items that have larger Ids:
            .Where(item => currentItem.Id < item.Id)
            // sort by ascending Id:
            .OrderBy(item => item.Id)
            // keep only the first element of this sorted collection:
            .FirstOrDefault();
        return nextElement;
    }
}
I need to be sure that every T has an Id, otherwise you can't get the next Id. Probably you have this in your IModel interface:
public interface IId
{
    int Id { get; }
}
After these functions your query will be like:
public static T Next<T>(T currentElement) where T : class, IModel, IId
{
    if (currentElement.Id >= GetLastId<T>())
        return currentElement;
    using (DatabaseEntities context = new DatabaseEntities())
    {
        return context.Set<T>().IncludeICollection().Next(currentElement);
    }
}
I'm not looking to rehash an answer, but here is a function that combines a little bit of reflection to automagically load all foreign key related data based on model declarations. I've written it as a way to load an object by something other than its primary key, but it can be modified to do so if needed.
private IQueryable<T> GetFullSet()
{
    IEnumerable<IEntityType> entities = _dbContext.Model.GetEntityTypes();
    IEntityType thisEntity = entities.SingleOrDefault(x => x.ClrType.Name == typeof(T).Name);
    IQueryable<T> set = _dbContext.Set<T>();
    foreach (IForeignKey fKey in thisEntity.GetReferencingForeignKeys())
    {
        set = set.Include(fKey.PrincipalToDependent.Name);
    }
    return set;
}
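Usage might look something like this (a sketch; it assumes the method lives in a generic repository where _dbContext and T are in scope and that T exposes an Id):
// fetch the next record with all referencing collections populated
var next = GetFullSet()
    .OrderBy(e => e.Id)
    .FirstOrDefault(e => e.Id > currentElement.Id);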
Related
Tried to search a lot for this but couldn't come up with an answer that works. Here's what I am trying to do:
I have Entity ObjectA that has a Navigation property ObjectB (not a Collection type, it's a virtual property that gets loaded via lazy loading). When I do a Where query for A, I want to also expand property B in the expression using another Expression that maps B.
Here's some code to demonstrate; the question is in the ToObjectADto() function:
public static Expression<Func<ObjectB, ObjectBDto>> ToObjectBDto()
{
    return b => new ObjectBDto
    {
        Prop1 = b.Prop1,
        Prop2 = b.Prop2
    };
}
public static Expression<Func<ObjectA, ObjectADto>> ToObjectADto()
{
    return a => new ObjectADto
    {
        Name = a.Name,
        SomeProperty = a.SomeProperty,
        ObjectB = /* How can I call the ToObjectBDto Expression here without re-writing it? */
    };
}
var aDto = _dbContext.ObjectAs.Where(q => q.SomeProperty > 0).Select(ToObjectADto());
I tried to create a compiled Expression:
private static readonly Func<ObjectB, ObjectBDto> _toBDtoCompiled = ToObjectBDto().Compile();
and then call it in ToObjectADto() like below, but I get the error "There is already an open DataReader associated with this Command" because the ObjectB mapping now runs client-side.
public static Expression<Func<ObjectA, ObjectADto>> ToObjectADto()
{
    return a => new ObjectADto
    {
        Name = a.Name,
        SomeProperty = a.SomeProperty,
        ObjectB = _toBDtoCompiled(a.ObjectB)
    };
}
My recommendation would be to save yourself the work and leverage AutoMapper. The benefit here is that Automapper can feed off EF's IQueryable implementation via ProjectTo to build queries and populate DTO graphs.
var config = new MapperConfiguration(cfg =>
{
cfg.CreateMap<ObjectA, ObjectADto>();
cfg.CreateMap<ObjectB, ObjectBDto>();
});
var aDto = _dbContext.ObjectAs.Where(q => q.SomeProperty > 0).ProjectTo<ObjectADto>(config);
Any particular mapping that cannot be inferred between Object and DTO can be set up in the mapping. The benefit of ProjectTo vs. custom mapping is that it will build a related query without tripping out lazy load hits or risking triggering code that EF cannot translate into SQL. (One query to populate all relevant DTOs)
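For example, a member that cannot be inferred by name could be configured like this (DisplayName and the mapping from Prop1 are hypothetical):
cfg.CreateMap<ObjectB, ObjectBDto>()
   .ForMember(dest => dest.DisplayName, opt => opt.MapFrom(src => src.Prop1));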
Automapper can also assist with copying values from a DTO back into either a new entity or an existing one:
var config = new MapperConfiguration(cfg =>
{
cfg.CreateMap<NewObjectADto, ObjectA>();
cfg.CreateMap<UpdateObjectADto, ObjectA>();
});
var mapper = config.CreateMapper();
New..
var objectA = mapper.Map<ObjectA>(dto);
_dbContext.ObjectAs.Add(objectA);
... or Update existing.
var objectA = _dbContext.ObjectAs.Single(x => x.ObjectAId == dto.ObjectAId);
mapper.Map(dto, objectA);
Where the DTO reflects either the data needed to create a new object, or the data that the client is allowed to update on the existing object. The goal is to keep update/add/delete operations as atomic as possible, rather than passing large complex objects with relatives to be updated all at once, i.e. actions like "AddObjectBToA" and "RemoveObjectBFromA" rather than resolving all operations via a single "UpdateObjectA".
It's a pity that C# doesn't handle compiling a lambda into an expression, where one expression calls another. Particularly since an expression tree can represent this case. But then EF Core 3 or higher won't look through invoke expressions anyway.
Automapper is probably easier. But if you don't want to use 3rd party code, you'll have to inline the expression yourself, including replacing any ParameterExpression with the argument to the method.
public static R Invoke<T, R>(this Expression<Func<T, R>> expression, T argument) => throw new NotImplementedException();
// etc for expressions with more parameters
public class InlineVisitor : ExpressionVisitor {
    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        if (node.Method.Name == "Invoke"
            && node.Object == null
            && node.Arguments.Count >= 1)
        {
            // The first argument is the expression being "invoked". In practice it is usually a
            // captured variable (a field access on a closure) rather than a lambda literal, so
            // evaluate it to obtain the actual LambdaExpression before inlining.
            var expr = node.Arguments[0] as LambdaExpression
                ?? Expression.Lambda(node.Arguments[0]).Compile().DynamicInvoke() as LambdaExpression;
            if (expr != null)
                return Visit(
                    new ReplacingExpressionVisitor(
                        expr.Parameters.ToArray(),
                        node.Arguments.Skip(1).ToArray())
                    .Visit(expr.Body)
                );
        }
        return base.VisitMethodCall(node);
    }
}
// usage:
public static Expression<Func<ObjectA, ObjectADto>> ToObjectADto()
{
    var ToBorNotToB = ToObjectBDto();
    Expression<Func<ObjectA, ObjectADto>> expr = a => new ObjectADto
    {
        Name = a.Name,
        SomeProperty = a.SomeProperty,
        ObjectB = ToBorNotToB.Invoke(a.ObjectB)
    };
    return new InlineVisitor().VisitAndConvert(expr, "");
}
I would like to selectively ignore a property from a table.
I have an API which exposes the following methods.
public interface IReadService
{
FullDTO Get();
HeaderDTO[] GetList();
}
My data structure looks like so:
public class ServiceDTO : ServiceHeaderDTO
{
    public string LargeXMLData { get; set; }
}
public class ServiceHeaderDTO
{
    public int Id { get; set; }
    public string Description { get; set; }
    //.... Other properties
}
I have a few services with similar issues, so I would like to be able to ignore the XML property in some cases rather than spend extra time sending a large string that will be thrown away anyway.
Normally you might write something like this to hide a property
var entities = context.Services.Select(x =>
    new Service { Id = x.Id, Description = x.Description, LargeXMLData = "" }).ToArray();
var dtos = this.AdaptToDTO(entities);
Now this would be fine if I had to do this in a single service, but when you have 20 services duplicating the logic it gets annoying.
I would like to be able to just say:
var entities = context.Services.Excluding(x => x.LargeXMLData).ToArray();
var dtos = this.AdaptToHeaderDTO(entities);
Edit: I'm not using AutoMapper. A lot of our code has mappings which cannot translate to expressions. I do not want to have to specify maps.
Is there a simple way I can exclude a property from a query? Without having to manually build maps.
Preferably a way which uses the existing mappings internal to EF which maps the entity to the db object
Normally you might write something like this to hide a property
var entities = context.Services.Select(x =>
    new Service { Id = x.Id, Description = x.Description, LargeXMLData = "" })
If you can do that manually, it should be doable automatically using the exact same concept, with a little reflection and the Expression APIs.
But note that this would work only for EF Core, since EF6 does not support projecting to entity types, like new Service { ... } here, and projecting to dynamic types at runtime is not trivial and would also break the DTO mapping.
With that being said, following is a sample implementation of the aforementioned concept:
public static partial class QueryableExtensions
{
public static IQueryable<T> Excluding<T>(this IQueryable<T> source, params Expression<Func<T, object>>[] excludeProperties)
{
var excludeMembers = excludeProperties
.Select(p => ExtractMember(p.Body).Name)
.ToList();
if (excludeMembers.Count == 0) return source;
// Build selector like (T e) => new T { Prop1 = e.Prop1, Prop2 = e.Prop2, ... }
// for each public property of T not included in the excludeMembers list,
// which then will be used as argument for LINQ Select
var parameter = Expression.Parameter(typeof(T), "e");
var bindings = typeof(T).GetProperties()
.Where(p => p.CanWrite && !excludeMembers.Contains(p.Name))
.Select(p => Expression.Bind(p, Expression.MakeMemberAccess(parameter, p)));
var body = Expression.MemberInit(Expression.New(typeof(T)), bindings);
var selector = Expression.Lambda<Func<T, T>>(body, parameter);
return source.Select(selector);
}
static MemberInfo ExtractMember(Expression source)
{
// Remove Convert if present (for value type properties cast to object)
if (source.NodeType == ExpressionType.Convert)
source = ((UnaryExpression)source).Operand;
return ((MemberExpression)source).Member;
}
}
The usage would be exactly as desired:
var entities = context.Services.Excluding(x => x.LargeXMLData).ToArray();
The problem with this though is that it will automatically "include" navigation properties and/or unmapped properties.
So it would be better to use EF model metadata instead of reflection. The problem is that currently EF Core does not provide a good public way of plugging into its infrastructure, or of getting access to the DbContext (and thus the Model) from an IQueryable, so it has to be passed as an argument to the custom method:
public static IQueryable<T> Excluding<T>(this IQueryable<T> source, DbContext context, params Expression<Func<T, object>>[] excludeProperties)
{
var excludeMembers = excludeProperties
.Select(p => ExtractMember(p.Body).Name)
.ToList();
if (excludeMembers.Count == 0) return source;
// Build selector like (T e) => new T { Prop1 = e.Prop1, Prop2 = e.Prop2, ... }
// for each property of T not included in the excludeMembers list,
// which then will be used as argument for LINQ Select
var parameter = Expression.Parameter(typeof(T), "e");
var bindings = context.Model.FindEntityType(typeof(T)).GetProperties()
.Where(p => p.PropertyInfo != null && !excludeMembers.Contains(p.Name))
.Select(p => Expression.Bind(p.PropertyInfo, Expression.MakeMemberAccess(parameter, p.PropertyInfo)));
var body = Expression.MemberInit(Expression.New(typeof(T)), bindings);
var selector = Expression.Lambda<Func<T, T>>(body, parameter);
return source.Select(selector);
}
which makes the usage not so elegant (but doing the job):
var entities = context.Services.Excluding(context, x => x.LargeXMLData).ToArray();
Now the only remaining potential problem are shadow properties, but they cannot be handled with projection, so this technique simply cannot be used for entities with shadow properties.
Finally, the pure EF Core alternative to the above is to put the LargeXMLData into a separate single-property "entity" and use table splitting to map it to the same table. Then you can use the regular Include method to include it where needed (by default it would be excluded).
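A rough sketch of that table-splitting setup (the ServiceXmlData type, the XmlData navigation property and the "Services" table name are assumptions):
public class ServiceXmlData
{
    public int Id { get; set; }
    public string LargeXMLData { get; set; }
}
// in OnModelCreating: map both entity types to the same table
modelBuilder.Entity<Service>(b =>
{
    b.ToTable("Services");
    b.HasOne(s => s.XmlData).WithOne()
        .HasForeignKey<ServiceXmlData>(x => x.Id);
});
modelBuilder.Entity<ServiceXmlData>().ToTable("Services");
// usage: the XML column is only read when explicitly included
var withXml = context.Services.Include(s => s.XmlData).Single(s => s.Id == id);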
I needed to double-check this before answering, but are you using Automapper or some other mapping provider for the ProjectTo implementation? Automapper's ProjectTo extension method requires a mapper configuration, so it may be that your mapping implementation is materializing the entities prematurely.
With Automapper, your example projecting to a DTO that does not contain the large XML field would result in a query to the database that does not return the large XML without needing any new "Exclude" method.
For instance, if I were to use:
var config = new MapperConfiguration(cfg => cfg.CreateMap<Service, ServiceHeaderDTO>());
var services = context.Services
    .ProjectTo<ServiceHeaderDTO>(config)
    .ToList();
The resulting SQL would not return the XMLData because ServiceHeaderDTO does not request it.
It is equivalent to doing:
var services = context.Services
.Select(x => new ServiceHeaderDTO
{
ServiceId = x.ServiceId,
// ... remaining fields, excluding the XML Data
}).ToList();
As long as I don't reference x.LargeXMLData, it will not be returned by my resulting query. Where you can run into big data coming back is if something like the following happens behind the scenes:
var services = context.Services
.ToList()
.Select(x => new ServiceHeaderDTO
{
ServiceId = x.ServiceId,
// ... remaining fields, excluding the XML Data
}).ToList();
That extra .ToList() call will materialize the complete Service entity to memory, including the XMLData field. Now Automapper's ProjectTo does not work against IEnumerable, only IQueryable, so it is unlikely that any query fed to it was doing this; but if you are using a home-grown mapping implementation where ProjectTo is materializing the entities before mapping, then I would strongly recommend using Automapper, as its IQueryable implementation avoids this problem for you automatically.
Edit: Tested with EF Core and Automapper just in case the behaviour changed, but it also excludes anything not referenced in the mapped DTO.
Suppose I have the following aggregate root:
public class Aggregate
{
public int Id {get; set;}
public List<Entity> Entities {get; set;}
}
And the following repository:
public class AggregateRepository
{
    public Aggregate GetPaged(int id)
    {
        return db.Aggregate
            .Include(x => x.Entities)
            .SingleOrDefault(x => x.Id == id);
    }
}
Question: how can I get a paged and sorted list of entities? Which is the best approach to get the entities paged and sorted, but also with the aggregate information?
Edited:
What do you think about the following approach?
public class AggregateRepository
{
    public IEnumerable<Entity> GetEntitiesPaged(int id)
    {
        return db.Aggregate
            .Where(x => x.Id == id)
            .SelectMany(x => x.Entities)
            .Include(x => x.Aggregate)
            .Take(20);
    }
}
Instead of returning an aggregate object, I receive a list of entities (20 in this case) with the aggregate object included. Is this a good approach when working with an aggregate in the DDD pattern?
The short answer is that you should avoid querying your domain model.
Rather use a specialized query layer with a read model if required; else something more raw such as DataRow.
Update:
You should try not to create aggregates when querying. This means not accessing a repository. A query layer would look something like this:
public interface ISomethingQuery
{
    IEnumerable<SomethingDto> GetPage(SearchSpecification specification, int pageNumber);
    // -or-
    IEnumerable<DataRow> GetPage(SearchSpecification specification, int pageNumber);
}
You would then use an implementation of this query interface to get the required data for display/reporting.
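Hypothetical usage from a report or controller (the names are placeholders):
// somethingQuery is an injected ISomethingQuery implementation
var page = somethingQuery.GetPage(new SearchSpecification { /* filter values */ }, pageNumber: 1);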
First of all, you should separate your write side (commands) from your read side (queries), which is called CQRS. You can take a look at this example.
But if you just want to get a paged and sorted list of entities, you can use the following approach.
public ICollection<Aggregate> GetSortedAggregates(AggregateListFilter filter, out int rowCount)
{
var query = (base.Repository.CurrentSession() as ISession).QueryOver<Aggregate>();
query = query.And(q => q.Status != StatusType.Deleted);
if (!string.IsNullOrWhiteSpace(filter.Name))
query = query.And(q => q.Name == filter.Name);
rowCount = query.RowCount();
switch (filter.OrderColumnName)
{
case ".Name":
query = filter.OrderDirection == OrderByDirections.Ascending ? query.OrderBy(x => x.Name).Asc : query.OrderBy(x => x.Name).Desc;
break;
default:
query = filter.OrderDirection == OrderByDirections.Ascending ? query.OrderBy(x => x.Id).Asc : query.OrderBy(x => x.Id).Desc;
break;
}
if (filter.CurrentPageIndex > 0)
{
return query
.Skip((filter.CurrentPageIndex - 1) * filter.PageSize)
.Take(filter.PageSize)
.List();
}
return query.List();
}
I have recently moved from coding in Java to c# and I am still learning the various elements of c#.
To access an existing database, which I cannot redesign, I am using Entity Framework 6 and 'Code First from database' to generate contexts and types representing the database tables. I am using LINQ to retrieve the data from the database, which is heavily denormalized.
My current task is to create a report where each section is read from various tables, which all have a relationship to one base table.
This is my working example:
using (var db = new PaymentContext())
{
    var result = from pay in db.Payment
                 join typ in db.Type on new { pay.ID, pay.TypeID } equals
                      new { typ.ID, typ.TypeID }
                 join bse in db.BaseTable on
                      new { pay.Key1, pay.Key2, pay.Key3, pay.Key4, pay.Key5 } equals
                      new { bse.Key1, bse.Key2, bse.Key3, bse.Key4, bse.Key5 }
                 where
                      bse.Cancelled.Equals("0") &&
                      bse.TimeStamp.CompareTo(startTime) > 0 &&
                      bse.TimeStamp.CompareTo(endTime) < 1 &&
                      .
                      (other conditions)
                      .
                 group new { pay, typ } by new { typ.PaymentType } into grp
                 select new
                 {
                     name = grp.Key,
                     count = grp.Count(),
                     total = grp.Sum(x => x.pay.Amount)
                 };
}
There will be a large number of sections in the report and each section will generate a where clause which will contain the conditions shown. In some sections, the required data will be extracted from tables up to five levels below the BaseTable.
What I want to do is create a reusable where clause for each report section, to avoid a lot of duplicated code.
After a lot of searching, I tried to use the solution suggested here, but this has been superseded in Entity Framework 6.
How do I avoid duplicating code unnecessarily?
I did try to use the extension methods you suggested, but my generated classes do not extend the BaseTable, so I had to explicitly define the link through the navigation property. As only a small number of tables will be common in the queries, I decided to apply the filters directly to each table, defining them as required.
krillgar suggested moving to straight LINQ syntax, which seems like good advice. We intend to redesign our database in the near future and this will remove some of the SQL dependency. I merged the suggested filters and full LINQ syntax to access my data.
// A class to hold all the possible conditions applied for the report
// Can be applied at various levels within the select
public class WhereConditions
{
public string CancelledFlag { get; set; } = "0"; // <= default setting
public DateTime StartTime { get; set; }
public DateTime EndTime { get; set; }
}
// Class to define all the filters to be applied to any level of table
public static class QueryExtensions
{
    public static IQueryable<BaseTable> ApplyCancellationFilter(this IQueryable<BaseTable> baseQuery, WhereConditions clause)
    {
        return baseQuery.Where(bse => bse.CancelFlag.Equals(clause.CancelledFlag));
    }
    public static IQueryable<BaseTable> ApplyTimeFilter(this IQueryable<BaseTable> baseQuery, WhereConditions clause)
    {
        return baseQuery.Where(bse => bse.TimeStamp.CompareTo(clause.StartTime) > 0 &&
                                      bse.TimeStamp.CompareTo(clause.EndTime) < 1);
    }
}
And the query is composed as follows:
using (var db = new PaymentContext())
{
    IQueryable<BaseTable> filter = db.BaseTable.ApplyCancellationFilter(clause).ApplyTimeFilter(clause);
    var result = db.Payment.
        Join(
            filter,
            pay => new { pay.Key1, pay.Key2, pay.Key3, pay.Key4, pay.Key5 },
            bse => new { bse.Key1, bse.Key2, bse.Key3, bse.Key4, bse.Key5 },
            (pay, bse) => new { Payment = pay, BaseTable = bse }).
        Join(
            db.Type,
            pay => new { pay.Payment.TypeKey1, pay.Payment.TypeKey2 },
            typ => new { typ.TypeKey1, typ.TypeKey2 },
            (pay, typ) => new { name = typ.Description, amount = pay.Payment.Amount }).
        GroupBy(x => x.name).
        Select(y => new { name = y.Key,
                          count = y.Count(),
                          amount = y.Sum(z => z.amount) });
}
And then finally execute the composed query.
var reportDetail = result.ToArray(); // <= Access database here
As this query is the simplest I will have to apply, future queries will become much more complicated.
The nice thing about LINQ is that methods like Where() return an IEnumerable<T> that you can feed into the next method.
You could refactor the where clauses into extension methods akin to:
public static class PaymentQueryExtensions {
    public static IQueryable<T> ApplyNotCancelledFilter<T>(
        this IQueryable<T> payments)
        where T : BaseTable {
        // no explicit 'join' needed to access properties of the base class in the EF model
        return payments.Where(p => p.Cancelled.Equals("0"));
    }
    public static IQueryable<T> ApplyTimeFilter<T>(
        this IQueryable<T> payments, DateTime startTime, DateTime endTime)
        where T : BaseTable {
        return payments.Where(p => p.TimeStamp.CompareTo(startTime) > 0
            && p.TimeStamp.CompareTo(endTime) < 1);
    }
    public static IQueryable<IGrouping<Typ, T>> GroupByType<T>(
        this IQueryable<T> payments)
        where T : BaseTable {
        // assuming the relationship Payment -> Typ has been set up with a backlink property Payment.Typ
        // e.g. for EF fluent API:
        // ModelBuilder.Entity<Typ>().HasMany(t => t.Payment).WithRequired(p => p.Typ);
        return payments.GroupBy(p => p.Typ);
    }
}
And then compose your queries using these building blocks:
IQueryable<Payment> payments = db.Payment
    .ApplyNotCancelledFilter()
    .ApplyTimeFilter(startTime, endTime);
if (renderSectionOne) {
    payments = payments.ApplySectionOneFilter();
}
var paymentsByType = payments.GroupByType();
var result = paymentsByType.Select(grp => new
{
    name = grp.Key,
    count = grp.Count(),
    total = grp.Sum(x => x.Amount)
});
Now that you have composed the query, execute it by enumerating. No DB access has happened until now.
var output = result.ToArray(); // <- DB access happens here
Edit After the suggestion of Ivan, I looked at our codebase. As he mentioned, the extension methods should work on IQueryable instead of IEnumerable. Just take care that you only use expressions that can be translated to SQL, i.e. do not call any custom code like an overridden ToString() method.
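For instance, something like this compiles but fails when the query is translated (FormatLabel is a made-up ordinary C# method, used purely as an illustration):
// LINQ to Entities cannot translate arbitrary .NET methods, so this throws at runtime
var broken = db.Payment.Where(p => p.FormatLabel() == "Refund").ToList();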
Edit 2 If Payment and other model classes inherit BaseTable, the filter methods can be written as generic methods that accept any child type of BaseTable. Also added example for grouping method.
For future visitors: for EF6 you are probably better off using filters, for example via this project: https://github.com/jbogard/EntityFramework.Filters
In the application we're building we apply the "soft delete" pattern where every class has a 'Deleted' bool. In practice, every class simply inherits from this base class:
public abstract class Entity
{
public virtual int Id { get; set; }
public virtual bool Deleted { get; set; }
}
To give a brief example, suppose I have the classes GymMember and Workout:
public class GymMember: Entity
{
public string Name { get; set; }
public virtual ICollection<Workout> Workouts { get; set; }
}
public class Workout: Entity
{
public virtual DateTime Date { get; set; }
}
When I fetch the list of gym members from the database, I can make sure that none of the 'deleted' gym members are fetched, like this:
var gymMembers = context.GymMembers.Where(g => !g.Deleted);
However, when I iterate through these gym members, their Workouts are loaded from the database without any regard for their Deleted flag. While I cannot blame Entity Framework for not picking up on this, I would like to configure or intercept lazy property loading somehow so that deleted navigational properties are never loaded.
I've been going through my options, but they seem scarce:
Going to Database First and using conditional mapping for every object for every one-to-many property.
This is simply not an option, since it would be too much manual work. (Our application is huge and getting huger every day). We also do not want to give up the advantages of using Code First (of which there are many)
Always eagerly loading navigation properties.
Again, not an option. This configuration is only available per entity. Always eagerly loading entities would also impose a serious performance penalty.
Applying the Expression Visitor pattern that automatically injects .Where(e => !e.Deleted) anywhere it finds an IQueryable<Entity>, as described here and here.
I actually tested this in a proof of concept application, and it worked wonderfully.
This was a very interesting option, but alas, it fails to apply filtering to lazily loaded navigation properties. This is obvious, as those lazy properties would not appear in the expression/query and as such cannot be replaced. I wonder if Entity Framework would allow for an injection point somewhere in their DynamicProxy class that loads the lazy properties.
I also fear for for other consequences, such as the possibility of breaking the Include mechanism in EF.
Writing a custom class that implements ICollection but filters the Deleted entities automatically.
This was actually my first approach. The idea would be to use a backing property for every collection property that internally uses a custom Collection class:
public class GymMember: Entity
{
public string Name { get; set; }
private ICollection<Workout> _workouts;
public virtual ICollection<Workout> Workouts
{
get { return _workouts ?? (_workouts = new CustomCollection()); }
set { _workouts = new CustomCollection(value); }
}
}
While this approach is actually not bad, I still have some issues with it:
It still loads all the Workouts into memory and filters the Deleted ones when the property setter is hit. In my humble opinion, this is much too late.
There is a logical mismatch between executed queries and the data that is loaded.
Imagine a scenario where I want a list of the gym members that did a workout since last week:
var gymMembers = context.GymMembers.Where(g => g.Workouts.Any(w => w.Date >= DateTime.Now.AddDays(-7).Date));
This query might return a gym member that only has workouts that are deleted but also satisfy the predicate. Once they are loaded into memory, it appears as if this gym member has no workouts at all!
You could say that the developer should be aware of the Deleted and always include it in his queries, but that's something I would really like to avoid. Maybe the ExpressionVisitor could offer the answer here again.
It's actually impossible to mark a navigation property as Deleted when using the CustomCollection.
Imagine this scenario:
var gymMember = context.GymMembers.First();
gymMember.Workouts.First().Deleted = true;
context.SaveChanges();
You would expect that the appropriate Workout record is updated in the database, and you would be wrong! Since the gymMember is being inspected by the ChangeTracker for changes, the property gymMember.Workouts will suddenly return one fewer workout. That's because CustomCollection automatically filters deleted instances, remember? So now Entity Framework thinks the workout needs to be deleted, and EF will try to set the FK to null or actually delete the record (depending on how your DB is configured). This is exactly what we were trying to avoid with the soft delete pattern in the first place!
I stumbled upon an interesting blog post that overrides the default SaveChanges method of the DbContext so that any entries with an EntityState.Deleted are changed back to EntityState.Modified but this again feels 'hacky' and rather unsafe. However, I'm willing to try it out if it solves problems without any unintended side effects.
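The idea boils down to something like this (a sketch adapted to my Entity base class, untested):
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries<Entity>()
                                       .Where(e => e.State == EntityState.Deleted))
    {
        entry.State = EntityState.Modified; // turn the hard delete into an update
        entry.Entity.Deleted = true;        // and flag the row as soft deleted
    }
    return base.SaveChanges();
}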
So here I am, StackOverflow. I've researched my options quite extensively, if I may say so myself, and I'm at my wits' end. So now I turn to you. How have you implemented soft deletes in your enterprise application?
To reiterate, these are the requirements I'm looking for:
Queries should automatically exclude the Deleted entities on the DB level
Deleting an entity and calling 'SaveChanges' should simply update the appropriate record and have no other side effects.
When navigational properties are loaded, whether lazy or eager, the Deleted ones should be automatically excluded.
I am looking forward to any and all suggestions, thank you in advance.
After much research, I've finally found a way to achieve what I wanted.
The gist of it is that I intercept materialized entities with an event handler on the object context, and then inject my custom collection class in every collection property that I can find (with reflection).
The most important part is intercepting the "DbCollectionEntry", the class responsible for loading related collection properties. By wiggling myself in between the entity and the DbCollectionEntry, I gain full control over what's loaded when and how. The only downside is that this DbCollectionEntry class has little to no public members, which requires me to use reflection to manipulate it.
Here is my custom collection class that implements ICollection and contains a reference to the appropriate DbCollectionEntry:
public class FilteredCollection <TEntity> : ICollection<TEntity> where TEntity : Entity
{
private readonly DbCollectionEntry _dbCollectionEntry;
private readonly Func<TEntity, Boolean> _compiledFilter;
private readonly Expression<Func<TEntity, Boolean>> _filter;
private ICollection<TEntity> _collection;
private int? _cachedCount;
public FilteredCollection(ICollection<TEntity> collection, DbCollectionEntry dbCollectionEntry)
{
_filter = entity => !entity.Deleted;
_dbCollectionEntry = dbCollectionEntry;
_compiledFilter = _filter.Compile();
_collection = collection != null ? collection.Where(_compiledFilter).ToList() : null;
}
private ICollection<TEntity> Entities
{
get
{
if (_dbCollectionEntry.IsLoaded == false && _collection == null)
{
IQueryable<TEntity> query = _dbCollectionEntry.Query().Cast<TEntity>().Where(_filter);
_dbCollectionEntry.CurrentValue = this;
_collection = query.ToList();
object internalCollectionEntry =
_dbCollectionEntry.GetType()
.GetField("_internalCollectionEntry", BindingFlags.NonPublic | BindingFlags.Instance)
.GetValue(_dbCollectionEntry);
object relatedEnd =
internalCollectionEntry.GetType()
.BaseType.GetField("_relatedEnd", BindingFlags.NonPublic | BindingFlags.Instance)
.GetValue(internalCollectionEntry);
relatedEnd.GetType()
.GetField("_isLoaded", BindingFlags.NonPublic | BindingFlags.Instance)
.SetValue(relatedEnd, true);
}
return _collection;
}
}
#region ICollection<T> Members
void ICollection<TEntity>.Add(TEntity item)
{
if(_compiledFilter(item))
Entities.Add(item);
}
void ICollection<TEntity>.Clear()
{
Entities.Clear();
}
Boolean ICollection<TEntity>.Contains(TEntity item)
{
return Entities.Contains(item);
}
void ICollection<TEntity>.CopyTo(TEntity[] array, Int32 arrayIndex)
{
Entities.CopyTo(array, arrayIndex);
}
Int32 ICollection<TEntity>.Count
{
get
{
if (_dbCollectionEntry.IsLoaded)
return _collection.Count;
return _dbCollectionEntry.Query().Cast<TEntity>().Count(_filter);
}
}
Boolean ICollection<TEntity>.IsReadOnly
{
get
{
return Entities.IsReadOnly;
}
}
Boolean ICollection<TEntity>.Remove(TEntity item)
{
return Entities.Remove(item);
}
#endregion
#region IEnumerable<T> Members
IEnumerator<TEntity> IEnumerable<TEntity>.GetEnumerator()
{
return Entities.GetEnumerator();
}
#endregion
#region IEnumerable Members
IEnumerator IEnumerable.GetEnumerator()
{
return ( ( this as IEnumerable<TEntity> ).GetEnumerator() );
}
#endregion
}
If you skim through it, you'll find that the most important part is the "Entities" property, which will lazy load the actual values. In the constructor of the FilteredCollection I pass an optional ICollection for scenario's where the collection is already eagerly loaded.
Of course, we still need to configure Entity Framework so that our FilteredCollection is used everywhere where there are collection properties. This can be achieved by hooking into the ObjectMaterialized event of the underlying ObjectContext of Entity Framework:
(this as IObjectContextAdapter).ObjectContext.ObjectMaterialized +=
delegate(Object sender, ObjectMaterializedEventArgs e)
{
if (e.Entity is Entity)
{
var entityType = e.Entity.GetType();
IEnumerable<PropertyInfo> collectionProperties;
if (!CollectionPropertiesPerType.TryGetValue(entityType, out collectionProperties))
{
CollectionPropertiesPerType[entityType] = (collectionProperties = entityType.GetProperties()
.Where(p => p.PropertyType.IsGenericType && typeof(ICollection<>) == p.PropertyType.GetGenericTypeDefinition()));
}
foreach (var collectionProperty in collectionProperties)
{
var collectionType = typeof(FilteredCollection<>).MakeGenericType(collectionProperty.PropertyType.GetGenericArguments());
DbCollectionEntry dbCollectionEntry = Entry(e.Entity).Collection(collectionProperty.Name);
dbCollectionEntry.CurrentValue = Activator.CreateInstance(collectionType, new[] { dbCollectionEntry.CurrentValue, dbCollectionEntry });
}
}
};
It all looks rather complicated, but what it does essentially is scan the materialized type for collection properties and change the value to a filtered collection. It also passes the DbCollectionEntry to the filtered collection so it can work its magic.
This covers the whole 'loading entities' part. The only downside so far is that eagerly loaded collection properties will still include the deleted entities, but they are filtered out in the 'Add' method of the FilteredCollection class. This is an acceptable downside, although I have yet to do some testing on how this affects the SaveChanges() method.
Of course, this still leaves one issue: there is no automatic filtering on queries. If you want to fetch the gym members who did a workout in the past week, you want to exclude the deleted workouts automatically.
This is achieved through an ExpressionVisitor that automatically applies a '.Where(e => !e.Deleted)' filter to every IQueryable it can find in a given expression.
Here is the code:
public class DeletedFilterInterceptor: ExpressionVisitor
{
public Expression<Func<Entity, bool>> Filter { get; set; }
public DeletedFilterInterceptor()
{
Filter = entity => !entity.Deleted;
}
protected override Expression VisitMember(MemberExpression ex)
{
return !ex.Type.IsGenericType ? base.VisitMember(ex) : CreateWhereExpression(Filter, ex) ?? base.VisitMember(ex);
}
private Expression CreateWhereExpression(Expression<Func<Entity, bool>> filter, Expression ex)
{
var type = ex.Type;//.GetGenericArguments().First();
var test = CreateExpression(filter, type);
if (test == null)
return null;
var listType = typeof(IQueryable<>).MakeGenericType(type);
return Expression.Convert(Expression.Call(typeof(Enumerable), "Where", new Type[] { type }, (Expression)ex, test), listType);
}
private LambdaExpression CreateExpression(Expression<Func<Entity, bool>> condition, Type type)
{
var lambda = (LambdaExpression) condition;
if (!typeof(Entity).IsAssignableFrom(type))
return null;
var newParams = new[] { Expression.Parameter(type, "entity") };
var paramMap = lambda.Parameters.Select((original, i) => new { original, replacement = newParams[i] }).ToDictionary(p => p.original, p => p.replacement);
var fixedBody = ParameterRebinder.ReplaceParameters(paramMap, lambda.Body);
lambda = Expression.Lambda(fixedBody, newParams);
return lambda;
}
}
public class ParameterRebinder : ExpressionVisitor
{
private readonly Dictionary<ParameterExpression, ParameterExpression> _map;
public ParameterRebinder(Dictionary<ParameterExpression, ParameterExpression> map)
{
_map = map ?? new Dictionary<ParameterExpression, ParameterExpression>();
}
public static Expression ReplaceParameters(Dictionary<ParameterExpression, ParameterExpression> map, Expression exp)
{
return new ParameterRebinder(map).Visit(exp);
}
protected override Expression VisitParameter(ParameterExpression node)
{
ParameterExpression replacement;
if (_map.TryGetValue(node, out replacement))
node = replacement;
return base.VisitParameter(node);
}
}
I am running a bit short on time, so I'll get back to this post later with more details, but the gist of it is written down and for those of you eager to try everything out; I've posted the full test application here: https://github.com/amoerie/TestingGround
However, there might still be some errors, as this is very much a work in progress. The conceptual idea is sound though, and I expect it to fully function soon once I've refactored everything neatly and find the time to write some tests for this.
Have you considered using views in your database to load your problem entities with the deleted items excluded?
It does mean you will need to use stored procedures to map INSERT/UPDATE/DELETE functionality, but it would definitely solve your problem if Workout maps to a View with the deleted rows omitted. Also - this may not work the same in a code first approach...
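For example (a sketch, assuming a database view named ActiveWorkouts that already filters out soft-deleted rows, plus custom stored procedures for the writes):
// Code First: map the Workout entity onto the filtered view instead of the raw table
modelBuilder.Entity<Workout>().ToTable("ActiveWorkouts");
// and route writes through stored procedures that target the underlying table
modelBuilder.Entity<Workout>().MapToStoredProcedures();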
One possible way might be using specifications, with a base specification that checks the soft-deleted flag for all queries, together with an include strategy.
I’ll illustrate an adjusted version of the specification pattern that I've used in a project (which had its origin in this blog post)
public abstract class SpecificationBase<T> : ISpecification<T>
where T : Entity
{
private readonly IPredicateBuilderFactory _builderFactory;
private IPredicateBuilder<T> _predicateBuilder;
protected SpecificationBase(IPredicateBuilderFactory builderFactory)
{
_builderFactory = builderFactory;
}
public IPredicateBuilder<T> PredicateBuilder
{
get
{
return _predicateBuilder ?? (_predicateBuilder = BuildPredicate());
}
}
protected abstract void AddSatisfactionCriterion(IPredicateBuilder<T> predicateBuilder);
private IPredicateBuilder<T> BuildPredicate()
{
var predicateBuilder = _builderFactory.Make<T>();
predicateBuilder.Check(candidate => !candidate.IsDeleted);
AddSatisfactionCriterion(predicateBuilder);
return predicateBuilder;
}
}
The IPredicateBuilder is a wrapper to the predicate builder included in the LINQKit.dll.
The specification base class is responsible for creating the predicate builder. Once created, the criteria that should be applied to every query can be added. The predicate builder can then be passed to the inherited specifications for adding further criteria. For example:
public class IdSpecification<T> : SpecificationBase<T>
where T : Entity
{
private readonly int _id;
public IdSpecification(int id, IPredicateBuilderFactory builderFactory)
: base(builderFactory)
{
_id = id;
}
protected override void AddSatisfactionCriterion(IPredicateBuilder<T> predicateBuilder)
{
predicateBuilder.And(entity => entity.Id == _id);
}
}
The IdSpecification's full predicate would then be:
entity => !entity.IsDeleted && entity.Id == _id
The specification can then be passed to the repository which uses the PredicateBuilder property to build up the where clause:
public IQueryable<T> FindAll(ISpecification<T> spec)
{
return context.AsExpandable().Where(spec.PredicateBuilder.Complete()).AsQueryable();
}
AsExpandable() is part of the LINQKit.dll.
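Usage would then look roughly like this (assuming a repository typed for GymMember and an IPredicateBuilderFactory instance):
var spec = new IdSpecification<GymMember>(memberId, builderFactory);
var member = repository.FindAll(spec).SingleOrDefault();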
In regards to including/lazy loading properties, one can extend the specification with a further property for includes. The specification base can add the base includes, and child specifications then add their own includes. The repository can then apply the includes from the specification before fetching from the db.
public IQueryable<T> Apply<T>(IDbSet<T> context, ISpecification<T> specification)
{
if (specification.IncludePaths == null)
return context;
return specification.IncludePaths.Aggregate<string, IQueryable<T>>(context, (current, path) => current.Include(path));
}
Let me know if something is unclear. I tried not to make this a monster post so some details might be left out.
Edit: I realized that I didn't fully answer your question(s) regarding navigation properties. What if you make the navigation property internal (using this post to configure it) and create non-mapped public properties that are IQueryable? The non-mapped properties can have a custom attribute, and the repository adds the base specification's predicate to the where without eagerly loading it. When someone does apply an eager operation, the filter will apply. Something like:
public T Find(int id)
{
var entity = Context.SingleOrDefault(x => x.Id == id);
if (entity != null)
{
foreach (var property in entity.GetType()
    .GetProperties()
    .Where(info => info.GetCustomAttributes(typeof(FilteredNavigationProperty), true).Any()))
{
    var collection = (property.GetValue(entity) as IQueryable<IEntity>);
collection = collection.Where(spec.PredicateBuilder.Complete());
}
}
return entity;
}
I haven't tested the above code but it could work with some tweaking :)
Edit 2: Deletes.
If you're using a general/generic repository you could simply add some further functionality to the delete method:
public void Delete(T entity)
{
var castedEntity = entity as Entity;
if (castedEntity != null)
{
castedEntity.IsDeleted = true;
}
else
{
_context.Remove(entity);
}
}