Entity Framework Concurrency control using "Timestamp" value check - c#

For concurrency control I wrote a "VersionCheck" function in my Context class. I need to dynamically load a Context object and check whether its version is the same as the current context object's RowVersion; right now I use a switch statement (code below).
Also, is there a more convenient way to do version control?
P.S. RowVersion is a Timestamp type in the database.
public class SchoolContext : DbContext
{
    public DbSet<Group> Groups { get; set; }
    public DbSet<Person> Persons { get; set; }

    public bool VersionCheck(string objName)
    {
        var dbc = new SchoolContext();
        byte[] bt1 = null;
        byte[] bt2 = null;
        switch (objName)
        {
            case "Person":
                dbc.Persons.Load();
                bt1 = dbc.Persons.SingleOrDefault(p => p.Id == 1).RowVersion;
                bt2 = this.Persons.Local.SingleOrDefault(p => p.Id == 1).RowVersion;
                break;
            case "Group":
                dbc.Groups.Load();
                bt1 = dbc.Groups.SingleOrDefault(p => p.Id == 1).RowVersion;
                bt2 = this.Groups.Local.SingleOrDefault(p => p.Id == 1).RowVersion;
                break;
        }
        if (bt1 == null || bt2 == null)
        {
            throw new Exception("One of the variables is null!");
        }
        for (int i = 0; i < bt1.Length; i++)
        {
            if (bt1[i] != bt2[i])
            {
                MessageBox.Show("Current object changed!");
                return false;
            }
        }
        return true;
    }
}

Optimistic concurrency explained
The described approach looks like data corruption waiting to happen.
Unless you are locking the row or table between reading the rowversion and checking it, it can change after you have read it to check its value.
Use the optimistic concurrency paradigm properly.
e.g. https://stackoverflow.com/a/14718991/1347784
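For reference, a minimal sketch of the built-in EF6 approach (names follow the question; the conflict-handling choice shown is only illustrative): mark RowVersion as a concurrency token and let SaveChanges detect conflicts instead of comparing byte arrays by hand.

// using System.ComponentModel.DataAnnotations;
// using System.Data.Entity.Infrastructure;
// using System.Linq;
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp] // maps to a rowversion/timestamp column; EF adds it to the WHERE clause of UPDATE/DELETE
    public byte[] RowVersion { get; set; }
}

// Saving then looks like this instead of a manual VersionCheck
// (context is your SchoolContext instance):
try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    var entry = ex.Entries.Single();

    // "Database wins": discard local changes and reload current values.
    // entry.Reload();

    // "Client wins": refresh the stored original values (including RowVersion)
    // so a retried SaveChanges() overwrites the row.
    entry.OriginalValues.SetValues(entry.GetDatabaseValues());
    MessageBox.Show("Current object changed!");
}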

Related

EF Core how to implement audit log of changes to value objects

I am using EF Core/.NET Core 2.1 and following DDD. I need to implement an audit log of all changes to my entities, and have done so using code from this blog post (relevant code from the post is included below). The code works and tracks changes to any properties; however, when it logs changes to my value objects, it only lists the new values and no old values.
Some code:
public class Item
{
    protected Item() { }

    //truncated for brevity
    public Weight Weight { get; private set; }
}

public class Weight : ValueObject<Weight>
{
    public WeightUnit WeightUnit { get; private set; }
    public double WeightValue { get; private set; }

    protected Weight() { }

    public Weight(WeightUnit weightUnit, double weight)
    {
        this.WeightUnit = weightUnit;
        this.WeightValue = weight;
    }
}
and the audit tracking code from my context class
public class MyContext : DbContext
{
    //truncated for brevity
    public override int SaveChanges(bool acceptAllChangesOnSuccess)
    {
        var auditEntries = OnBeforeSaveChanges();
        var result = base.SaveChanges(acceptAllChangesOnSuccess);
        OnAfterSaveChanges(auditEntries);
        return result;
    }

    private List<AuditEntry> OnBeforeSaveChanges()
    {
        if (!this.AuditingAndEntityTimestampingEnabled)
        {
            return null;
        }
        ChangeTracker.DetectChanges();
        var auditEntries = new List<AuditEntry>();
        foreach (var entry in ChangeTracker.Entries())
        {
            if (entry.Entity is Audit || entry.State == EntityState.Detached || entry.State == EntityState.Unchanged)
            {
                continue;
            }
            var auditEntry = new AuditEntry(entry)
            {
                TableName = entry.Metadata.Relational().TableName
            };
            auditEntries.Add(auditEntry);
            foreach (var property in entry.Properties)
            {
                if (property.IsTemporary)
                {
                    // value will be generated by the database, get the value after saving
                    auditEntry.TemporaryProperties.Add(property);
                    continue;
                }
                string propertyName = property.Metadata.Name;
                if (property.Metadata.IsPrimaryKey())
                {
                    auditEntry.KeyValues[propertyName] = property.CurrentValue;
                    continue;
                }
                switch (entry.State)
                {
                    case EntityState.Added:
                        auditEntry.NewValues[propertyName] = property.CurrentValue;
                        break;
                    case EntityState.Deleted:
                        auditEntry.OldValues[propertyName] = property.OriginalValue;
                        break;
                    case EntityState.Modified:
                        if (property.IsModified)
                        {
                            auditEntry.OldValues[propertyName] = property.OriginalValue;
                            auditEntry.NewValues[propertyName] = property.CurrentValue;
                        }
                        break;
                }
            }
        }
        // Save audit entities that have all the modifications
        foreach (var auditEntry in auditEntries.Where(_ => !_.HasTemporaryProperties))
        {
            Audits.Add(auditEntry.ToAudit());
        }
        // keep a list of entries where the value of some properties are unknown at this step
        return auditEntries.Where(_ => _.HasTemporaryProperties).ToList();
    }
}
Here is a screenshot of how the audit changes persist to the database. The non-value-object properties on Item have their old/new values listed, whereas changes to value objects only list the new values:
Is there a way to get the previous values of the value objects?
UPDATE:
So, the reason the OldValues column is null for changes to my value objects is that the State of the value object is Added when it has been changed. I added a check for IsOwned() to the switch statement and tried to grab property.OriginalValue inside it, like this:
case EntityState.Added:
    if (entry.Metadata.IsOwned())
    {
        auditEntry.OldValues[propertyName] = property.OriginalValue;
    }
    auditEntry.NewValues[propertyName] = property.CurrentValue;
    break;
However, this simply logs the current value that the value object is being updated to.
So the question still stands - is there any way to get the previous value of a value object using the EF Core ChangeTracker, or do I need to re-think my use of DDD Value Objects due to my audit requirement?
Instead of
auditEntry.OldValues[propertyName] = property.OriginalValue;
use
auditEntry.OldValues[propertyName] = entry.GetDatabaseValues().GetValue<object>(propertyName).ToString();
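Applied to the switch in OnBeforeSaveChanges above, the Added case might then look roughly like this (a sketch; note that GetDatabaseValues() issues an extra query per entry, so you may want to fetch it once outside the property loop):

case EntityState.Added:
    if (entry.Metadata.IsOwned())
    {
        // The owned instance is tracked as Added, so take the "old" value
        // from what is currently stored in the database.
        auditEntry.OldValues[propertyName] =
            entry.GetDatabaseValues()?.GetValue<object>(propertyName)?.ToString();
    }
    auditEntry.NewValues[propertyName] = property.CurrentValue;
    break;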

Best way to find values not in two lists c#

I have two lists which I need to compare (carOptions and custOptions).
Both of these lists are in my Customer class like below:
public class CustomerDTO
{
    public int CustomerId { get; set; }
    //other props removed for brevity
    public List<OptionDTO> SelectedCarOptions { get; set; }
    public List<OptionDTO> SelectedCustomerOptions { get; set; }
}
var existingData = _myRepository.GetDataByCustomer(customerId, year);
var existingCarOptions = existingData.Select(f => f.SelectedCarOptions);
var existingCustomerOptions = existingData.Select(f => f.SelectedCustomerOptions);
existingData is an IEnumerable of CustomerDTO, and existingCarOptions and existingCustomerOptions are each an IEnumerable<List<OptionDTO>>.
In the method, I have a list of IEnumerable<OptionDTO> options that gets passed in. I then break this down into car or customer based on the Enum as below:
var newCarOptions = options.Where(o => o.OptionTypeID == OptionType.CarOptions);
var newCustomerOptions = options.Where(o => o.OptionTypeID == OptionType.CustomerOptions).ToList();
What I need to do is find which options are in one collection but not in the other.
I tried the code below but I am getting an error on the Except call (maybe I need to create my own static method in that class), and I am not sure this is the best approach anyway.
if (existingCarOptions.Count() != newCarOptions.Count())
{
    //var test = newCarOptions.Except(existingCarOptions);
}
if (existingCustomerOptions.Count() != newCustomerOptions.Count())
{
    //var test2 = newCustomerOptions.Except(existingCustomerOptions);
}
It is also quite a bit of code in the method; I could split it out into separate methods if required, but perhaps there is a simpler way I could achieve this?
I'm assuming OptionDTO has a property called Id that uniquely identifies an option (you have to change the code accordingly if this is not the case). You can use HashSets to quickly find unmatched OptionDTOs while keeping the overall time cost O(n), where n is the maximum number of combined options.
Create the sets of existing option IDs:
var existingCarOptions = existingData.SelectMany(d => d.SelectedCarOptions).Select(o => o.Id);
var existingCustomerOptions = existingData.SelectMany(d => d.SelectedCustomerOptions).Select(o => o.Id);
var existingCarOptionsIds = new HashSet<int>(existingCarOptions);
var existingCustomerOptionsIds = new HashSet<int>(existingCustomerOptions);
Then extract the options missing from the existing sets:
var unmatchedCarOptions = newCarOptions.Where(o => !existingCarOptionsIds.Contains(o.Id));
var unmatchedCustomerOptions = newCustomerOptions.Where(o => !existingCustomerOptionsIds.Contains(o.Id));
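If you also need the reverse direction (existing options that are no longer present in the new set), the same idea applies, for example:

var newCarOptionIds = new HashSet<int>(newCarOptions.Select(o => o.Id));
var removedCarOptions = existingData
    .SelectMany(d => d.SelectedCarOptions)
    .Where(o => !newCarOptionIds.Contains(o.Id));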
If you want to compare two classes, you can use an IEqualityComparer<OptionDTO>:
public class OptionComparer : IEqualityComparer<OptionDTO>
{
    public bool Equals(OptionDTO x, OptionDTO y)
    {
        if (object.ReferenceEquals(x, y))
        {
            return true;
        }
        if (object.ReferenceEquals(x, null) ||
            object.ReferenceEquals(y, null))
        {
            return false;
        }
        return x.OptionTypeID == y.OptionTypeID;
    }

    public int GetHashCode(OptionDTO obj)
    {
        if (obj == null)
        {
            return 0;
        }
        return obj.OptionTypeID.GetHashCode();
    }
}
Using this, you can define what equality means for these classes.
Now we can find the differing values:
public List<OptionDTO> CalculateDiffBetweenLists(List<OptionDTO> left, List<OptionDTO> right)
{
    List<OptionDTO> optionDiff = left.Except(right, new OptionComparer()).ToList();
    return optionDiff;
}
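Usage might then look like this (note that OptionComparer as written treats two options with the same OptionTypeID as equal; swap the comparison to a unique Id if that better matches your notion of identity):

var existingCarOptionsList = existingData.SelectMany(d => d.SelectedCarOptions).ToList();
var newCarOptionsList = newCarOptions.ToList();

// Options present in one list but not the other, in both directions.
var addedCarOptions = CalculateDiffBetweenLists(newCarOptionsList, existingCarOptionsList);
var removedCarOptions = CalculateDiffBetweenLists(existingCarOptionsList, newCarOptionsList);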

how to check many to many table for duplicates thru Entity Framework

For example, take the following database tables: Students, Courses, and StudentsCourses.
How do I, in Entity Framework, make sure when adding a new course for a student that the course doesn't already exist for that student? To me, that means checking the StudentsCourses table.
Do I need to write straight sql to check that?
using (var context = new StudentContext())
{
    var alreadyExists = context.StudentsCourses.Any(x => x.StudentId == studentId && x.CourseId == courseId);
}
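Putting the check and the insert together might look roughly like this (a sketch; it assumes a StudentCourse join entity exposed through the StudentsCourses DbSet):

using (var context = new StudentContext())
{
    bool alreadyExists = context.StudentsCourses
        .Any(x => x.StudentId == studentId && x.CourseId == courseId);

    if (!alreadyExists)
    {
        context.StudentsCourses.Add(new StudentCourse
        {
            StudentId = studentId,
            CourseId = courseId
        });
        context.SaveChanges();
    }
}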
If you have a simple many-to-many relationship, you may not have a StudentCourse entity. I like this pattern for adding to a many-to-many relationship:
public class Student
{
    private ICollection<Course> _Courses = new List<Course>();

    public int ID { get; set; }

    public virtual ICollection<Course> Courses
    {
        get { return _Courses; }
        protected set { _Courses = value; }
    }

    public void AddCourse(Course course)
    {
        //And you can add your duplicate check here
        if (!Courses.Any(c => c.ID == course.ID))
            Courses.Add(course);
    }
}
Unfortunately the Courses property is not a read-only collection so it doesn't prevent someone bypassing that method elsewhere with:
student.Courses.Add(course);
But then the methods suggested in the other answers don't prevent that either.
Create a method:
public bool IsStudentOnCourse(int studentId, int courseId)
{
    using (var db = new DBContext()) // replace with your real context
    {
        return db.StudentsCourses.Any(x => x.StudentId == studentId && x.CourseId == courseId);
    }
}
BenjaminPaul's answer works very well. Another option is to try to retrieve the record, and if it does not exist, create a new one.
You could create a method like this
public StudentCourse CreateOrUpdate(VM_StudentCourse studentCourse)
{
    StudentCourse dbStudentCourse;
    using (var context = new StudentContext())
    {
        dbStudentCourse = context.StudentsCourses.FirstOrDefault(x =>
            x.StudentId == studentCourse.StudentId && x.CourseId == studentCourse.CourseId);
        if (dbStudentCourse == null)
        {
            dbStudentCourse = new StudentCourse();
            dbStudentCourse.StudentId = studentCourse.StudentId;
            dbStudentCourse.CourseId = studentCourse.CourseId;
            context.StudentsCourses.Add(dbStudentCourse);
        }
        dbStudentCourse.OtherProperty1 = studentCourse.SomeProp;
        dbStudentCourse.OtherProperty2 = studentCourse.SomeOtherProp;
        context.SaveChanges();
    }
    return dbStudentCourse;
}

How can I log all entities change, during .SaveChanges() using EF code first?

I'm using EF code first. I'm using a base Repository for all my repositories and an IUnitOfWork that is injected into the repositories:
public interface IUnitOfWork : IDisposable
{
    IDbSet<TEntity> Set<TEntity>() where TEntity : class;
    int SaveChanges();
}

public class BaseRepository<T> where T : class
{
    protected readonly DbContext _dbContext;
    protected readonly IDbSet<T> _dbSet;

    public BaseRepository(IUnitOfWork uow)
    {
        _dbContext = (DbContext)uow;
        _dbSet = uow.Set<T>();
    }

    //other methods
}
e.g. my OrderRepository looks like this:
class OrderRepository : BaseRepository<Order>
{
    IUnitOfWork _uow;
    IDbSet<Order> _order;

    public OrderRepository(IUnitOfWork uow)
        : base(uow)
    {
        _uow = uow;
        _order = _uow.Set<Order>();
    }

    //other methods
}
And I use it in this way:
public void Save(Order order)
{
    using (IUnitOfWork uow = new MyDBContext())
    {
        OrderRepository repository = new OrderRepository(uow);
        try
        {
            repository.ApplyChanges<Order>(order);
            uow.SaveChanges();
        }
        catch
        {
            // exception handling omitted for brevity
            throw;
        }
    }
}
Is there any way to log the change history of all entities (including their navigation properties) during .SaveChanges()? I want to log the original values (before the save occurs) and the changed values (after the save occurs).
You can get the before and after values for all changed entities by going through DbContext.ChangeTracker. Unfortunately the API is a little verbose:
var changeInfo = context.ChangeTracker.Entries()
.Where (t => t.State == EntityState.Modified)
.Select (t => new {
Original = t.OriginalValues.PropertyNames.ToDictionary (pn => pn, pn => t.OriginalValues[pn]),
Current = t.CurrentValues.PropertyNames.ToDictionary (pn => pn, pn => t.CurrentValues[pn]),
});
You can modify that to include things like the type of the entity if you need that for your logging. There is also a ToObject() method on the DbPropertyValues (the type of OriginalValues and CurrentValues) you could call if you already have a way to log whole objects, although the objects returned from that method will not have their navigation properties populated.
You can also modify that code to get all entities in the context by taking out the Where clause, if that makes more sense given your requirements.
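For example, to also capture the entity type name in the same projection, the query above could be extended roughly like this:

var changeInfo = context.ChangeTracker.Entries()
    .Where(t => t.State == EntityState.Modified)
    .Select(t => new
    {
        EntityType = t.Entity.GetType().Name,
        Original = t.OriginalValues.PropertyNames.ToDictionary(pn => pn, pn => t.OriginalValues[pn]),
        Current = t.CurrentValues.PropertyNames.ToDictionary(pn => pn, pn => t.CurrentValues[pn])
    });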
I have overridden the default SaveChanges method to log changes for add/update/delete on entities, though it does not cover navigation property changes.
Based on this article: Using entity framework for auditing
public int SaveChanges(string userId)
{
    int objectsCount;
    List<DbEntityEntry> newEntities = new List<DbEntityEntry>();

    // Get all Added/Deleted/Modified entities (not Unmodified or Detached)
    foreach (var entry in this.ChangeTracker.Entries().Where
        (x => (x.State == System.Data.EntityState.Added) ||
              (x.State == System.Data.EntityState.Deleted) ||
              (x.State == System.Data.EntityState.Modified)))
    {
        if (entry.State == System.Data.EntityState.Added)
        {
            newEntities.Add(entry);
        }
        else
        {
            // For each changed record, get the audit record entries and add them
            foreach (AuditLog changeDescription in GetAuditRecordsForEntity(entry, userId))
            {
                this.AuditLogs.Add(changeDescription);
            }
        }
    }

    // Default save changes call to actually save changes to the database
    objectsCount = base.SaveChanges();

    // We don't have recordId for insert statements that's why we need to call this method again.
    foreach (var entry in newEntities)
    {
        // For each changed record, get the audit record entries and add them
        foreach (AuditLog changeDescription in GetAuditRecordsForEntity(entry, userId, true))
        {
            this.AuditLogs.Add(changeDescription);
        }
        // TODO: Think about performance here. We are calling db twice for one insertion.
        objectsCount += base.SaveChanges();
    }
    return objectsCount;
}
#endregion
#region Helper Methods
/// <summary>
/// Helper method to create record description for Audit table based on operation done on dbEntity
/// - Insert, Delete, Update
/// </summary>
/// <param name="dbEntity"></param>
/// <param name="userId"></param>
/// <returns></returns>
private List<AuditLog> GetAuditRecordsForEntity(DbEntityEntry dbEntity, string userId, bool insertSpecial = false)
{
    List<AuditLog> changesCollection = new List<AuditLog>();
    DateTime changeTime = DateTime.Now;

    // Get Entity Type Name.
    string tableName1 = dbEntity.GetTableName();

    // http://stackoverflow.com/questions/2281972/how-to-get-a-list-of-properties-with-a-given-attribute
    // Get primary key value (If we have more than one key column, this will need to be adjusted)
    string primaryKeyName = dbEntity.GetAuditRecordKeyName();
    int primaryKeyId = 0;
    object primaryKeyValue;

    if (dbEntity.State == System.Data.EntityState.Added || insertSpecial)
    {
        primaryKeyValue = dbEntity.GetPropertyValue(primaryKeyName, true);
        if (primaryKeyValue != null)
        {
            Int32.TryParse(primaryKeyValue.ToString(), out primaryKeyId);
        }
        // For Inserts, just add the whole record
        // If the dbEntity implements IDescribableEntity,
        // use the description from Describe(), otherwise use ToString()
        changesCollection.Add(new AuditLog()
            {
                UserId = userId,
                EventDate = changeTime,
                EventType = ModelConstants.UPDATE_TYPE_ADD,
                TableName = tableName1,
                RecordId = primaryKeyId, // Again, adjust this if you have a multi-column key
                ColumnName = "ALL", // To show all column names have been changed
                NewValue = (dbEntity.CurrentValues.ToObject() is IAuditableEntity) ?
                    (dbEntity.CurrentValues.ToObject() as IAuditableEntity).Describe() :
                    dbEntity.CurrentValues.ToObject().ToString()
            }
        );
    }
    else if (dbEntity.State == System.Data.EntityState.Deleted)
    {
        primaryKeyValue = dbEntity.GetPropertyValue(primaryKeyName);
        if (primaryKeyValue != null)
        {
            Int32.TryParse(primaryKeyValue.ToString(), out primaryKeyId);
        }
        // With deletes use whole record and get description from Describe() or ToString()
        changesCollection.Add(new AuditLog()
            {
                UserId = userId,
                EventDate = changeTime,
                EventType = ModelConstants.UPDATE_TYPE_DELETE,
                TableName = tableName1,
                RecordId = primaryKeyId,
                ColumnName = "ALL",
                OriginalValue = (dbEntity.OriginalValues.ToObject() is IAuditableEntity) ?
                    (dbEntity.OriginalValues.ToObject() as IAuditableEntity).Describe() :
                    dbEntity.OriginalValues.ToObject().ToString()
            });
    }
    else if (dbEntity.State == System.Data.EntityState.Modified)
    {
        primaryKeyValue = dbEntity.GetPropertyValue(primaryKeyName);
        if (primaryKeyValue != null)
        {
            Int32.TryParse(primaryKeyValue.ToString(), out primaryKeyId);
        }
        foreach (string propertyName in dbEntity.OriginalValues.PropertyNames)
        {
            // For updates, we only want to capture the columns that actually changed
            if (!object.Equals(dbEntity.OriginalValues.GetValue<object>(propertyName),
                               dbEntity.CurrentValues.GetValue<object>(propertyName)))
            {
                changesCollection.Add(new AuditLog()
                    {
                        UserId = userId,
                        EventDate = changeTime,
                        EventType = ModelConstants.UPDATE_TYPE_MODIFY,
                        TableName = tableName1,
                        RecordId = primaryKeyId,
                        ColumnName = propertyName,
                        OriginalValue = dbEntity.OriginalValues.GetValue<object>(propertyName) == null ? null : dbEntity.OriginalValues.GetValue<object>(propertyName).ToString(),
                        NewValue = dbEntity.CurrentValues.GetValue<object>(propertyName) == null ? null : dbEntity.CurrentValues.GetValue<object>(propertyName).ToString()
                    }
                );
            }
        }
    }
    // Otherwise, don't do anything, we don't care about Unchanged or Detached entities
    return changesCollection;
}
You have scared people away with the extra requirement:
include their navigation properties
This is simply a non-trivial exercise, and if it is important, you should manage/track changes across references in code.
Here is a sample covering this topic:
Undo changes in entity framework entities
There is a sample doing close to what you want here:
undo changes
It can easily be converted to load before and after images elsewhere.
Given the ObjectState entry after DetectChanges is called, you can implement a simple entity-by-entity option, per UoW. But the navigation/references version makes this very complex as you worded the requirement.
EDIT: How to access the change list
public class Repository<TPoco> where TPoco : class
{
    //.... constructor and other members omitted

    public DbEntityEntry<TPoco> Entry(TPoco entity) { return Context.Entry(entity); }

    public virtual IList<ChangePair> GetChanges(object poco)
    {
        var changes = new List<ChangePair>();
        var thePoco = (TPoco)poco;
        foreach (var propName in Entry(thePoco).CurrentValues.PropertyNames)
        {
            var curr = Entry(thePoco).CurrentValues[propName];
            var orig = Entry(thePoco).OriginalValues[propName];
            if (curr != null && orig != null)
            {
                if (curr.Equals(orig))
                {
                    continue;
                }
            }
            if (curr == null && orig == null)
            {
                continue;
            }
            var aChangePair = new ChangePair { Key = propName, Current = curr, Original = orig };
            changes.Add(aChangePair);
        }
        return changes;
    }

    ///... partial repository shown
}

// FYI the simple return structure
public class ChangePair
{
    public string Key { get; set; }
    public object Original { get; set; }
    public object Current { get; set; }
}
DbContext has a ChangeTracker property, and you can override .SaveChanges() in your context and log the changes there.
I don't think Entity Framework can do it for you out of the box; otherwise you would probably have to detect changes directly in your model classes.
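A minimal sketch of that idea for EF6 (needs System.Data.Entity, System.Diagnostics, and System.Linq; swap Debug.WriteLine for your logger of choice):

public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries()
                                       .Where(e => e.State == EntityState.Modified))
    {
        foreach (var name in entry.OriginalValues.PropertyNames)
        {
            var original = entry.OriginalValues[name];
            var current = entry.CurrentValues[name];
            if (!Equals(original, current))
            {
                // Replace with your logging sink of choice.
                Debug.WriteLine($"{entry.Entity.GetType().Name}.{name}: {original} -> {current}");
            }
        }
    }
    return base.SaveChanges();
}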
I've expanded on Steve's answer to provide a check for Changed, Added, and Deleted entities and print them in a sensible way.
(My use case is to ensure there are no unsaved changes before disposing of a DbContext instance, but this check could be done at any point)
/// <summary>Helper method that checks whether the DbContext had any unsaved changes before it was disposed.</summary>
private void CheckForUnsavedChanges(DbContext dbContext)
{
    try
    {
        List<DbEntityEntry> changedEntityEntries = dbContext.ChangeTracker.Entries()
            .Where(t => t.State != EntityState.Unchanged && t.State != EntityState.Detached).ToList();

        if (!changedEntityEntries.Any())
            return;

        throw new Exception("Detected that there were unsaved changes made using a DbContext. This could be due to a missing call to `.SaveChanges()` or possibly " +
            "some read-only operations that modified the returned entities (in which case you might wish to use `.AsNoTracking()` in your query). Changes:\n " +
            String.Join("\n ", changedEntityEntries.Select(entry => $"{entry.Entity.GetType()} {entry.State}:\n " + String.Join("\n ",
                entry.State == EntityState.Modified ? entry.CurrentValues.PropertyNames
                    // Only output properties whose values have changed (and hope they have a good ToString() implementation).
                    // Use Equals rather than != so boxed values are compared by value, not by reference.
                    .Where(pn => !Equals(entry.OriginalValues?[pn], entry.CurrentValues[pn]))
                    .Select(pn => $"{pn} ({entry.OriginalValues?[pn]} -> {entry.CurrentValues[pn]})") :
                // Added or Deleted entities are output in their entirety
                entry.State == EntityState.Added ? entry.CurrentValues.PropertyNames.Select(pn => $"{pn} = {entry.CurrentValues[pn]}") :
                /* entry.State == EntityState.Deleted ? */ entry.CurrentValues.PropertyNames.Select(pn => $"{pn} = {entry.OriginalValues[pn]}")))));
    }
    catch (Exception ex)
    {
        _logger.Error("Issue encountered when checking for unsaved changes.", ex);
    }
}

Entity Framework (6) Performance Optimisation advice

I have an ADO.Net Data Access layer in my application that uses basic ADO.Net coupled with CRUD stored procedures (one per operation e.g. Select_myTable, Insert_myTable). As you can imagine, in a large system (like ours), the number of DB objects required by the DA layer is pretty large.
I've been looking at the possibility of refactoring the layer classes into EF POCO classes. I've managed to do this, but when I try to performance test it, the results are pretty horrific. Using the class below (create the object, set the Key to the desired value, call DataSelect), 100,000 runs of data loading take about 47 seconds (there are only a handful of records in the DB), whereas the stored procedure method takes about 7 seconds.
I'm looking for advice on how to optimise this - as a point of note, I cannot change the exposed functionality of the layer - only how it implements the methods (i.e. I can't pass responsibility for context ownership to the BO layer)
Thanks
public class DAContext : DbContext
{
    public DAContext(DbConnection connection, DbTransaction trans)
        : base(connection, false)
    {
        this.Database.UseTransaction(trans);
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        //Stop Pluralising the Object names for table names.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        //Set any property ending in "Key" as a key type.
        modelBuilder.Properties().Where(prop => prop.Name.ToLower().EndsWith("key")).Configure(config => config.IsKey());
    }

    public DbSet<MyTable> MyTable { get; set; }
}
public class MyTable : DataAccessBase
{
    #region Properties
    public int MyTableKey { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public bool Active { get; set; }
    public int CreatedBy { get; set; }
    public DateTime CreatedDate { get; set; }
    public int ModifiedBy { get; set; }
    public DateTime ModifiedDate { get; set; }
    #endregion

    #region constructors
    public MyTable()
    {
        //Set Default Values.
        Active = true;
        Name = string.Empty;
        CreatedDate = DateTime.MinValue;
        ModifiedDate = DateTime.MinValue;
    }
    #endregion

    #region Methods
    public override void DataSelect(System.Data.SqlClient.SqlConnection connection, System.Data.SqlClient.SqlTransaction transaction)
    {
        using (DAContext ctxt = new DAContext(connection, transaction))
        {
            var limitquery = from C in ctxt.MyTable
                             select C;
            //TODO: Sort the Query
            limitquery = FilterQuery(limitquery);
            var limit = limitquery.FirstOrDefault();
            if (limit != null)
            {
                this.Name = limit.Name;
                this.Description = limit.Description;
                this.Active = limit.Active;
                this.CreatedBy = limit.CreatedBy;
                this.CreatedDate = limit.CreatedDate;
                this.ModifiedBy = limit.ModifiedBy;
                this.ModifiedDate = limit.ModifiedDate;
            }
            else
            {
                throw new ObjectNotFoundException(string.Format("No MyTable with the specified Key ({0}) exists", this.MyTableKey));
            }
        }
    }

    private IQueryable<MyTable> FilterQuery(IQueryable<MyTable> limitQuery)
    {
        if (MyTableKey > 0) limitQuery = limitQuery.Where(C => C.MyTableKey == MyTableKey);
        if (!string.IsNullOrEmpty(Name)) limitQuery = limitQuery.Where(C => C.Name == Name);
        if (!string.IsNullOrEmpty(Description)) limitQuery = limitQuery.Where(C => C.Description == Description);
        if (Active) limitQuery = limitQuery.Where(C => C.Active == true);
        if (CreatedBy > 0) limitQuery = limitQuery.Where(C => C.CreatedBy == CreatedBy);
        if (ModifiedBy > 0) limitQuery = limitQuery.Where(C => C.ModifiedBy == ModifiedBy);
        if (CreatedDate > DateTime.MinValue) limitQuery = limitQuery.Where(C => C.CreatedDate == CreatedDate);
        if (ModifiedDate > DateTime.MinValue) limitQuery = limitQuery.Where(C => C.ModifiedDate == ModifiedDate);
        return limitQuery;
    }
    #endregion
}
Selects are slow with tracking on. You should definitely turn off tracking and measure again.
Take a look at my benchmarks
http://netpl.blogspot.com/2013/05/yet-another-orm-micro-benchmark-part-23_15.html
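For the read-only lookup in DataSelect above, that is a one-line change (sketch; AsNoTracking comes from the System.Data.Entity namespace in EF6):

var limitquery = from C in ctxt.MyTable.AsNoTracking()
                 select C;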
This might be just a hunch, but... In your stored procedure the filters are well defined and the SP is in a compiled state with a decent execution plan, whereas your EF query gets constructed from scratch and recompiled on every use. So the task becomes devising a way to compile and preserve your EF queries between uses.
One way would be to rewrite FilterQuery so it does not rely on a fluent conditional method chain. Instead of appending (or not) a new condition every time your parameter set changes, convert it into a single query where each filter is either applied when its condition is met, or overridden by something like 1.Equals(1) when not. This way your query can be compiled and made available for re-use. The backing SQL will look funky, but execution times should improve. Alternatively you could devise an Aspect Oriented Programming approach, where compiled queries are re-used based on parameter values. If I have the time, I will post a sample on Code Project.
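A sketch of that rewrite: every filter clause is always present and collapses to true when its value is not set, so the expression's shape (and therefore the cached translation) stays the same between calls, with the filter values ending up as parameters:

private IQueryable<MyTable> FilterQuery(IQueryable<MyTable> limitQuery)
{
    // Each clause degenerates to "true" when the corresponding filter value
    // is not set, so the expression tree never changes shape.
    return limitQuery.Where(C =>
        (MyTableKey <= 0 || C.MyTableKey == MyTableKey) &&
        (string.IsNullOrEmpty(Name) || C.Name == Name) &&
        (string.IsNullOrEmpty(Description) || C.Description == Description) &&
        (!Active || C.Active) &&
        (CreatedBy <= 0 || C.CreatedBy == CreatedBy) &&
        (ModifiedBy <= 0 || C.ModifiedBy == ModifiedBy) &&
        (CreatedDate == DateTime.MinValue || C.CreatedDate == CreatedDate) &&
        (ModifiedDate == DateTime.MinValue || C.ModifiedDate == ModifiedDate));
}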
