Effort-supported unit tests conflict - C#

I've inherited some tests along with this project. They were working when run against the SQL database, but slowly. I'm trying to switch over to using Effort.
.NET 4.5, EF 6.2, Effort 1.3.10.
I have two possibly related issues with my unit tests.
It doesn't matter if I run the tests in parallel or not.
1) If I run more than one at a time, I get
Saving or accepting changes failed because more than one entity of type 'Center.Shared.Person' have the same primary key value. Ensure that explicitly set primary key values are unique. Ensure that database-generated primary keys are configured correctly in the database and in the Entity Framework model. Use the Entity Designer for Database First/Model First configuration. Use the 'HasDatabaseGeneratedOption' fluent API or 'DatabaseGeneratedAttribute' for Code First configuration. ---> System.InvalidOperationException: Saving or accepting changes failed because more than one entity of type 'Center.Shared.Person' have the same primary key value. Ensure that explicitly set primary key values are unique. Ensure that database-generated primary keys are configured correctly in the database and in the Entity Framework model. Use the Entity Designer for Database First/Model First configuration. Use the 'HasDatabaseGeneratedOption' fluent API or 'DatabaseGeneratedAttribute' for Code First configuration.
So it appears that the tests are not properly isolated.
Tracing through the code, I can see that CreateTransient is called, but it apparently isn't transient enough.
public DbConnection CreateConnection(string nameOrConnectionString)
{
    lock (_lock)
    {
        if (_connection == null)
        {
            _connection = Effort.DbConnectionFactory.CreateTransient();
        }
        return _connection;
    }
}
In the TestInitialize routine I try to reset the database.
[TestInitialize]
public override void Initialize()
{
    db.Database.Delete();
    db.Database.CreateIfNotExists();
    db.Database.Initialize(true);
}
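(As it turns out below, the fix is to hand each test its own transient connection rather than resetting a shared one. A minimal sketch, assuming an EntitiesDb constructor that accepts a DbConnection, which is what I eventually added:)
[TestInitialize]
public override void Initialize()
{
    // Each CreateTransient() call is backed by a brand-new in-memory
    // database, so no state leaks between tests.
    _connection = Effort.DbConnectionFactory.CreateTransient();
    db = new EntitiesDb(_connection);
}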
This is highly convoluted code, so if we need to post more, it will be a long trip down the rabbit hole. It's probably better to create a PoC.
2) If I run the tests independently, I get a different problem. Again, these tests passed against SQL but fail against Effort.
[TestMethod]
public void ClientAccessorTests_Find()
{
    Client result;
    Client client = new Client()
    {
        Complete = false,
        HeadOfHousehold = true,
        PersonID = _person.PersonID
    };
    _accessor.Create(client, _accessor.DefaultConnectionContext);
    result = _accessor.Find(new object[] { client.ClientID }, _accessor.DefaultConnectionContext);
    Assert.IsNotNull(result); // Fails with Assert.IsNotNull failed.
}
Create consists of
public virtual EntityType Create(EntityType entity, ConnectionContext connectionContext)
{
    IsContextValid(connectionContext);
    if (entity == null) throw new ArgumentException("", "entity");
    using (var db = CreateDbContext<DbContextType>(connectionContext))
    {
        db.Set<EntityType>().Add(entity);
        db.SaveChanges();
    }
    return entity;
}
Find consists of
public virtual EntityType Find(object[] primaryKey, ConnectionContext connectionContext)
{
    IsContextValid(connectionContext);
    if (primaryKey == null || primaryKey.Length == 0) throw new ArgumentException("", "primaryKey");
    using (var db = CreateDbContext<DbContextType>(connectionContext))
    {
        return db.Set<EntityType>().Find(primaryKey);
    }
}
I know it is calling CreateDbContext, but tracing into the code, as far as I can tell it appears to be the same database with the same ID.
What is it that should cause the tests to be isolated?
And any ideas on why Find would quit working when using an in-memory database?

I was trying to use the implicit approach, where everything is hooked up via settings in the app.config file.
I started having better luck once I abandoned that approach and created the database connection and set it explicitly.
System.Data.Common.DbConnection connection = new EffortProviderFactory("").CreateConnection("");
_accessor = new ClientAccessor();
_accessor.Connection = connection;
db = new EntitiesDb(connection);
The base Accessor creates copies of the DB at every turn, which is fine so long as it uses the same DbConnection. So I set that on the accessor and then use it here:
if (_connection == null) // this is the path for the application
{
    if (connectionContext == null)
    {
        ret = new T();
    }
    else
    {
        ret = (T)Activator.CreateInstance(typeof(T), new object[] { connectionContext });
    }
}
else // this is the path for unit tests
{
    ret = (T)Activator.CreateInstance(typeof(T), new object[] { _connection });
}
Finally, I had to add a constructor that takes a DbConnection to the DbContext and its descendants.
public EntitiesDb(DbConnection connection) : base(connection) { }
'Find' now works and the tests don't interfere with each other.
Next step is pushing this all down to the base classes.
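A rough sketch of that push-down (names here are illustrative, not from the actual codebase): a shared base test class can own the Effort connection and hand it to both the context and the accessors:
public abstract class EffortTestBase
{
    protected DbConnection Connection;
    protected EntitiesDb Db;

    [TestInitialize]
    public virtual void Initialize()
    {
        // One transient connection per test: isolated from other tests, and
        // shared by every context the accessors create during this test.
        Connection = Effort.DbConnectionFactory.CreateTransient();
        Db = new EntitiesDb(Connection);
    }
}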

Related

How to avoid entity tracking on Save with EF Core?

I am writing a system that has a concept of idempotent operations: if clients give the system an operation ID more than once, the system will reject those "duplicated" operations immediately.
I want to implement this by storing UUID primary-key values in a table in SQL Server such that SQL Server will simply reject duplicated writes, as expected. My problem comes when EF Core tries to be smart about these values and caches them: EF Core will reject the addition of the entity without ever pinging SQL Server, because it knows there's already a tracked entity with that same PK. This behavior is ideal in most scenarios, but in my specific scenario it will become very memory-intensive very quickly. I don't want this behavior.
This is the code that I'm using to manually trigger the specific error I need to give clients of the system:
Action throwIdempotentOpError = () => {
    throw new ExecutionError("The operation you are trying to perform was already performed, please try again with a new client mutation id");
};
if (opsRepo.IdempotentOperations.Local.Any(op => op.ClientMutationId == mutationGuid)) {
    throwIdempotentOpError();
}
var operation = new IdempotentOperation {
    ClientMutationId = mutationGuid,
    CreatedAt = DateTime.UtcNow,
    UpdatedAt = DateTime.UtcNow,
    RawDocument = context.Document.OriginalQuery,
    Status = IdempotentOperationStatus.Started
};
try {
    opsRepo.IdempotentOperations.Add(operation);
    await opsRepo.SaveChangesAsync();
} catch (DbUpdateException ex) {
    if (ex.InnerException != null && ex.InnerException.Message.StartsWith("Violation of PRIMARY KEY constraint 'PK_IdempotentOperations'")) {
        throwIdempotentOpError();
    }
    throw;
}
Ideally I would only have to throw the error inside the catch block.
How can I disable that entity tracking behavior on the .Add call?
For context: opsRepo is a DbContext
One option is to cache the inserted IDs somewhere external, for example Redis, check new IDs against the cache, and reject duplicates before they ever reach the database. But to solve the problem within the DbContext itself, you should detach the inserted items like this:
...
try {
    opsRepo.IdempotentOperations.Add(operation);
    await opsRepo.SaveChangesAsync();
    // detach inserted entities
    ClearDbContextState();
}
...
public void ClearDbContextState()
{
    var entities = opsRepo.ChangeTracker.Entries<IdempotentOperation>()
        .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified)
        .ToList();
    foreach (var entry in entities)
        entry.State = EntityState.Detached;
}
I ended up using IHttpContextAccessor as a way to scope the injection of my DbContext implementation, side-stepping the issue @Panagiotis Kanavos commented on.

Updating existing data in EF 6 throws exception - "...entity of the same type already has the same primary key value."

I am trying to update a record using Entity Framework 6, code-first, with no fluent mapping or a tool like AutoMapper.
The entity (Employee) has other composite properties associated with it, like Address (a collection) and Department.
It also inherits from a base class called User.
The save method is as follows, with _dbContext being the DbContext implementation:
public bool UpdateEmployee(Employee employee)
{
    var entity = _dbContext.Employees.Where(c => c.Id == employee.Id).AsQueryable().FirstOrDefault();
    if (entity == null)
    {
        _dbContext.Employees.Add(employee);
    }
    else
    {
        _dbContext.Entry(employee).State = EntityState.Modified; // <- Exception raised here
        _dbContext.Employees.Attach(employee);
    }
    return _dbContext.SaveChanges() > 0;
}
I keep getting the error:
Attaching an entity of type failed because another entity of the same type already has the same primary key value. This can happen when using the 'Attach' method or setting the state of an entity to 'Unchanged' or 'Modified' if any entities in the graph have conflicting key values. This may be because some entities are new and have not yet received database-generated key values. In this case use the 'Add' method or the 'Added' entity state to track the graph and then set the state of non-new entities to 'Unchanged' or 'Modified' as appropriate.
I have tried the following:
Attaching before setting to EntityState.Modified
Adding AsNoTracking() to the query that checks whether the object exists (no exception, but the DB is not updated) - https://stackoverflow.com/a/23228001/919426
Saving using the base entity _dbContext.Users instead of the Employee entity - https://stackoverflow.com/a/25575634/919426
None of these is working for me.
What could I have gotten wrong for some of those solutions not to work in my situation?
EF already includes a way to map properties without resorting to AutoMapper, assuming you do not have navigation properties to update:
public bool UpdateEmployee(Employee employee)
{
    var entity = _dbContext.Employees.Where(c => c.Id == employee.Id).AsQueryable().FirstOrDefault();
    if (entity == null)
    {
        _dbContext.Employees.Add(employee);
    }
    else
    {
        _dbContext.Entry(entity).CurrentValues.SetValues(employee);
    }
    return _dbContext.SaveChanges() > 0;
}
This usually generates a better SQL statement since it will only update the properties that have changed.
If you still want to use the original method, you'll need to get the entity out of the context, either by using AsNoTracking (not sure why it didn't update in your case; it should have no effect there, so the problem might be something else) or by modifying your query so it never materializes the entity in the first place, using something like bool exists = dbContext.Employees.Any(c => c.Id == employee.Id) for example.
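A minimal sketch of that existence-check variant; since no tracked copy of the entity is ever materialized, Attach no longer conflicts:
public bool UpdateEmployee(Employee employee)
{
    bool exists = _dbContext.Employees.Any(c => c.Id == employee.Id);
    if (!exists)
    {
        _dbContext.Employees.Add(employee);
    }
    else
    {
        // Safe now: no other instance with this key is being tracked.
        _dbContext.Employees.Attach(employee);
        _dbContext.Entry(employee).State = EntityState.Modified;
    }
    return _dbContext.SaveChanges() > 0;
}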
This worked for me:
var aExists = _db.Model.Find(newOrOldOne.id);
if (aExists == null)
{
    _db.Model.Add(newOrOldOne);
}
else
{
    _db.Entry(aExists).State = EntityState.Detached;
    _db.Entry(newOrOldOne).State = EntityState.Modified;
}
I've encountered the same thing when using a repository and unit of work pattern (as documented in the MVC 4 with EF 5 tutorial).
The GenericRepository contains an Update(TEntity) method that attempts to Attach and then set Entry.State = Modified. The up-voted 'answer' above doesn't resolve this if you are going to stick to the UoW/repo pattern.
I did attempt the detach process prior to the attach, but it still failed for the same reason as indicated in the initial question.
The reason for this, it turns out, is that I was checking whether a record existed, then using AutoMapper to generate an entity object from my DTO prior to calling Update().
By checking for the existence of that record, I put the entity object in scope and wasn't able to detach it (which is also the reason the initial questioner wasn't able to detach). It tracked the record and didn't allow any changes after I AutoMapper'ed the DTO into an entity and then attempted to update.
Here's the generic repo's implementation of update:
public virtual void Update(TEntity entityToUpdate)
{
    dbSet.Attach(entityToUpdate);
    context.Entry(entityToUpdate).State = EntityState.Modified;
}
This is my PUT method (I'm using Web API with Angular):
[HttpPut]
public IHttpActionResult Put(int id, Product product)
{
    IHttpActionResult ret;
    try
    {
        // remove pre-check because it locks the record
        // var e = unitOfWork.ProductRepository.GetByID(id);
        // if (e != null) {
        var toSave = _mapper.Map<ProductEntity>(product);
        unitOfWork.ProductRepository.Update(toSave);
        unitOfWork.Save();
        var p = _mapper.Map<Product>(toSave);
        ret = Ok(p);
        // }
        // else
        //     ret = NotFound();
    }
    catch (DbEntityValidationException ex)
    {
        ret = BadRequest(ValidationErrorsToMessages(ex));
    }
    catch (Exception ex)
    {
        ret = InternalServerError(ex);
    }
    return ret;
}
As you can see, I've commented out my check to see if the record exists. I guess I'll see how it works if I attempt to update a record that no longer exists, as I no longer have a NotFound() return opportunity.
So to answer the initial question, I'd say don't look for entity == null before making the attempt, or come up with another methodology. Maybe in my case I could dispose of my UnitOfWork after discovering the object and then do my update.
You need to detach to avoid a duplicate primary key exception whilst invoking SaveChanges:
db.Entry(entity).State = EntityState.Detached;

Update only works in debug mode

I'm new to using Entity Framework as a data layer between MVC and SQL Server, so I apologize up front if what I'm doing is bad practice.
Let me start by sharing the code that is handling the update.
Update Delivery:
public bool One(Delivery toUpdate)
{
    using (var dbContext = new FDb())
    {
        try
        {
            var deliveryInDb = this.dbTable(dbContext).Single(x => x.DeliveryId == toUpdate.DeliveryId);
            dbContext.Entry(deliveryInDb).CurrentValues.SetValues(toUpdate);
            // removal first
            List<DeliveryDay> currentDays = FEngineCore.DeliveryDay.Get.ForValue((x => x.DeliveryId), toUpdate.DeliveryId);
            List<DeliveryTime> currentTimes = FEngineCore.DeliveryTime.Get.ForValue((x => x.DeliveryId), toUpdate.DeliveryId);
            // remove delivery days that are not needed
            foreach (var curDay in currentDays)
            {
                if (!toUpdate.DeliveryDays.Select(x => x.DeliveryDayId).Contains(curDay.DeliveryDayId))
                {
                    FEngineCore.DeliveryDay.Delete.One((x => x.DeliveryDayId), curDay.DeliveryDayId);
                    deliveryInDb.DeliveryDays.Remove(curDay);
                }
            }
            // remove delivery times that are not needed
            foreach (var curTime in currentTimes)
            {
                if (!toUpdate.DeliveryTimes.Select(x => x.DeliveryTimeId).Contains(curTime.DeliveryTimeId))
                {
                    FEngineCore.DeliveryTime.Delete.One((x => x.DeliveryTimeId), curTime.DeliveryTimeId);
                    deliveryInDb.DeliveryTimes.Remove(curTime);
                }
            }
            foreach (var day in toUpdate.DeliveryDays)
            {
                if (day.DeliveryDayId == 0)
                {
                    dbContext.DeliveryDays.Add(day);
                }
                else
                {
                    if (dbContext.DeliveryDays.Local.Any(e => e.DeliveryDayId == day.DeliveryDayId))
                    {
                        dbContext.Entry(dbContext.DeliveryDays.Local.First(e => e.DeliveryDayId == day.DeliveryDayId)).CurrentValues.SetValues(day);
                        dbContext.Entry(dbContext.DeliveryDays.Local.First(e => e.DeliveryDayId == day.DeliveryDayId)).State = EntityState.Modified;
                    }
                    else
                    {
                        DeliveryDay modDay = new DeliveryDay
                        {
                            DayOfWeek = day.DayOfWeek,
                            DeliveryDayId = day.DeliveryDayId,
                            DeliveryId = day.DeliveryId,
                            Interval = day.Interval
                        };
                        dbContext.DeliveryDays.Attach(modDay);
                        dbContext.Entry(modDay).State = EntityState.Modified;
                    }
                    deliveryInDb.DeliveryDays.Add(day);
                }
            }
            foreach (var time in toUpdate.DeliveryTimes)
            {
                if (time.DeliveryTimeId == 0)
                {
                    dbContext.DeliveryTimes.Add(time);
                }
                else
                {
                    if (dbContext.DeliveryTimes.Local.Any(e => e.DeliveryTimeId == time.DeliveryTimeId))
                    {
                        dbContext.Entry(dbContext.DeliveryTimes.Local.First(e => e.DeliveryTimeId == time.DeliveryTimeId)).CurrentValues.SetValues(time);
                        dbContext.Entry(dbContext.DeliveryTimes.Local.First(e => e.DeliveryTimeId == time.DeliveryTimeId)).State = EntityState.Modified;
                    }
                    else
                    {
                        DeliveryTime modTime = new DeliveryTime
                        {
                            DeliveryId = time.DeliveryId,
                            DeliveryLocationId = time.DeliveryLocationId,
                            DeliveryTimeId = time.DeliveryTimeId,
                            DropoffTime = time.DropoffTime
                        };
                        dbContext.DeliveryTimes.Attach(modTime);
                        dbContext.Entry(modTime).State = EntityState.Modified;
                    }
                    deliveryInDb.DeliveryTimes.Add(time);
                }
            }
            dbContext.SaveChanges();
            dbContext.Entry(deliveryInDb).State = EntityState.Detached;
            return true;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException);
            return false;
        }
    }
}
Let me continue by explaining that the delivery object has two children: DeliveryTime and DeliveryDay. The issue arises when I try to remove one deliveryTime and modify nothing else. The end result of running the code normally (not in debug) is that the deliveryTime is in fact not removed. Here's the interesting thing: when I debug it and step through the breakpoints, everything works as expected!
Let me continue by posting the code that runs behind the removal method for the deliveryTime (actually for all entity objects in my system).
public bool One<V>(Expression<Func<T, V>> property, V value) where V : IComparable
{
    using (var dbContext = new FoodsbyDb())
    {
        try
        {
            T toDelete;
            // get the body as a property that represents the property of the entity object
            MemberExpression entityPropertyExpression = property.Body as MemberExpression;
            // get the parameter that is representing the entity object
            ParameterExpression entityObjectExpression = (ParameterExpression)entityPropertyExpression.Expression;
            // represent the value being checked against as an expression constant
            Expression valueAsExpression = Expression.Constant(value);
            // check the equality of the property and the value
            Expression equalsExpression = Expression.Equal(entityPropertyExpression, valueAsExpression);
            // create an expression that takes the entity object as a parameter, and checks the equality using the equalsExpression variable
            Expression<Func<T, bool>> filterLambda = Expression.Lambda<Func<T, bool>>(equalsExpression, entityObjectExpression);
            toDelete = this.dbTable(dbContext).SingleOrDefault(filterLambda);
            if (toDelete != null)
            {
                this.dbTable(dbContext).Remove(toDelete);
                dbContext.SaveChanges();
                return true;
            }
            return false;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException);
            return false;
        }
    }
}
The code above is obviously generic, and it handles all my entity objects. I have tested it inside and out and know for sure the problem does not lie there. I thought it would be helpful to post it so you all have a full understanding of what's going on.
Here's my best guess as to what's going on:
The reference to the removed deliveryTime still exists when the database context is saved, but when I debug, the system has enough time to complete the removal.
Here was one of my attempted solutions:
Remove all references to the children objects immediately after setting currentDays and currentTimes, and then proceed to add them back to deliveryInDb while enumerating through them.
Because I am new to all of this, if you see some bad practice along with the solution, I wouldn't mind constructive criticism to improve my programming method.
I actually encountered this issue in a project at work. The project is an older MVC 4 project using EF 6.1.
In our situation, a simple update attempting to set a related entity property to null was failing to actually set it to null while running the web app normally (in debug mode). With a breakpoint set on the line of code that sets the property to null, though, the database would be updated as expected. So, the update worked when a breakpoint was in place but not when the code was allowed to run normally.
Using an EF interceptor, we could see that, with the breakpoint in place, the update query was going through as expected.
Now, in our situation the related entity was marked with the virtual keyword to allow for lazy loading. I think this is the root of the issue. When a breakpoint is present, EF has enough time to lazily load that related entity, evaluate whatever it needs to evaluate, and finally set it to null. When running without a breakpoint, I think EF gets caught up trying to lazily load that entity and therefore fails to realize it needs to be updated. To be clear, I was both accessing the related entity property for the first time and setting it to null in a single line of code.
foo.Bar = null;
I resolved this issue, in our scenario, by accessing that property at least once prior to setting it to null, so that EF is forced to load it. With it loaded, setting it to null works as intended. So again, to be clear: I think the issue is a combination of lazy loading and a one-liner of code that both accesses the property for the first time and assigns null to it.
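A minimal sketch of that workaround (foo and Bar stand in for the real object and its lazily loaded navigation property):
var loaded = foo.Bar; // touch the property once so EF is forced to lazy-load it
foo.Bar = null;       // now the assignment is tracked and persisted by SaveChanges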
It appears that you're using multiple instances of your DbContext that are not synchronized.
The solution would be to use a single instance and pass it between your methods.

DbContext + ObjectContext in TransactionScope cause MDTC Exception

I have an old ObjectContext and a few new DbContexts in my project (i.e. bounded contexts for different purposes).
Sometimes I need to commit changes from several of them in one transaction. In some cases I need to persist data from both an ObjectContext and a DbContext.
In EF 5.0, to avoid MSDTC, I wrote a wrapper:
public class ContextUnitOfWork
{
    List<IContext> ContextList;

    public ContextUnitOfWork()
    {
        ContextList = new List<IContext>();
    }

    public void RegisterContext(IContext Context)
    {
        ContextList.Add(Context);
    }

    public bool IsDisposed
    {
        get
        {
            return ContextList.Any(x => x.IsDisposed);
        }
    }

    public bool HasChangedEntities
    {
        get
        {
            return ContextList.Any(x => x.HasChangedEntities);
        }
    }

    public void Commit()
    {
        bool HasDbContext = ContextList.OfType<System.Data.Entity.DbContext>().Any();
        try
        {
            if (HasDbContext)
            {
                ContextList.ForEach(x =>
                {
                    if (x is System.Data.Entity.DbContext)
                    {
                        (x as System.Data.Entity.DbContext).Database.Connection.Open();
                    }
                    else if (x is System.Data.Objects.ObjectContext)
                    {
                        ((System.Data.Objects.ObjectContext)x).Connection.Open();
                    }
                });
            }
            using (var scope = new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required,
                new System.Transactions.TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted }))
            {
                ContextList.ForEach(x => x.Commit());
                scope.Complete();
            }
        }
        catch (System.Data.UpdateException uex)
        {
            var ErrorList = uex.StateEntries.Select(x => x.Entity).ToList();
        }
        finally
        {
            if (HasDbContext)
            {
                ContextList.ForEach(x =>
                {
                    if (x is System.Data.Entity.DbContext)
                    {
                        (x as System.Data.Entity.DbContext).Database.Connection.Close();
                    }
                    else if (x is System.Data.Objects.ObjectContext)
                    {
                        ((System.Data.Objects.ObjectContext)x).Connection.Close();
                    }
                });
            }
        }
    }
}
But in Entity Framework 6.0.1 it doesn't work. The ObjectContext commits successfully, but when the DbContext calls SaveChanges(), an EntityException is thrown with the text
"The underlying provider failed on EnlistTransaction." The inner exception contains: {"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
Any idea how to commit the contexts in one transaction and avoid the MSDTC exception?
You are attempting to run everything in a local transaction, which is very tricky even with multiple contexts of the same type. The reason is that you cannot have multiple open connections in the same local transaction, and very often a new connection will be opened for the next context while the previous context is still alive. This triggers a promotion of the local transaction to a distributed transaction.
My experience with EF is that it only re-uses the current connection when the connection string (the normal one inside the entity connection string) is EXACTLY identical. If there is a single difference, the transaction will be promoted to a distributed transaction, which must be enabled by the system, and in your case it is not.
Also, if you are already executing a query and are still reading results from it, then starting another query at the same time will (of course) require another connection, and again the local transaction will be promoted to a distributed one.
Can you check whether the connection strings are identical? I would still be surprised if the current connection is re-used, though.
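A sketch of one way around this in EF6 (a sketch only; FirstDbContext and SecondDbContext are hypothetical stand-ins, and it assumes your contexts expose a constructor forwarding an existing DbConnection, since DbContext's own overload is protected): share a single connection and a single local transaction via Database.UseTransaction, so nothing is ever promoted to MSDTC:
using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        // contextOwnsConnection: false keeps EF from closing the shared connection
        using (var ctx1 = new FirstDbContext(connection, contextOwnsConnection: false))
        using (var ctx2 = new SecondDbContext(connection, contextOwnsConnection: false))
        {
            ctx1.Database.UseTransaction(transaction);
            ctx2.Database.UseTransaction(transaction);
            // ... make changes on both contexts ...
            ctx1.SaveChanges();
            ctx2.SaveChanges();
        }
        transaction.Commit(); // one local transaction, no MSDTC
    }
}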

Protecting critical sections based on a condition in C#

I'm dealing with a curious scenario.
I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is I need to check the database to see whether a record with a particular key has already been created, in order to set a field value: executing if it exists, or pending if it's new. Those records are identified by a unique GUID.
I've solved this problem by taking a lock, since I know the same entity will not be present in any other process; in other words, I will not have the same GUID in different processes, and it seems to be working fine. It looks something like this:
static readonly object LockableObject = new object();

static void SaveElement(Entity e)
{
    lock (LockableObject)
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
        {
            Repository.Update(e);  // the key already exists
        }
        else
        {
            Repository.Insert(e);  // the key is new
        }
    }
}
But this implies that when I have a huge amount of requests to save, they will be queued.
I wonder if there is something like this (please take it just as an idea):
static void SaveElement(Entity e)
{
    using (var protector = new ThisWouldBeAClassToProtectBasedOnACondition(x => x.UniqueId))
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
        {
            Repository.Update(e);
        }
        else
        {
            Repository.Insert(e);
        }
    }
}
The idea would be to have a kind of protection that locks based on a condition, so each entity e would get its own lock keyed on its e.UniqueId property.
Any idea?
Don't use application locks where database transactions or constraints are needed.
The use of a lock to prevent duplicate entries in a database is not a good idea. It limits the scalability of your application by forcing only a single instance to ever exist that can add or update such records. Or worse, someone will eventually try to scale the application to multiple processes or servers, and it will cause data corruption (since locks are local to a single process).
What you should consider instead is using a combination of unique constraints in the database and transactions to ensure that no two attempts to add the same entry can both succeed. One will succeed; the other will be forced to roll back.
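A minimal sketch of that approach in EF6: insert optimistically and let the unique constraint arbitrate. It assumes a unique constraint on Entity.UniqueId, and IsUniqueViolation is a hypothetical helper that inspects the inner SqlException for error numbers 2627/2601 (unique-key violations):
static void SaveElement(Entity e)
{
    try
    {
        Repository.Insert(e);   // optimistic: assume the key is new ("pending")
    }
    catch (System.Data.Entity.Infrastructure.DbUpdateException ex) when (IsUniqueViolation(ex))
    {
        // hypothetical helper decided the unique constraint fired,
        // so the key already existed ("executing")
        Repository.Update(e);
    }
}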
This might work for you; you can just lock on the instance of e:
lock (e)
{
    Entity e2 = Repository.FindByKey(e);
    if (e2 != null)
    {
        Repository.Update(e);
    }
    else
    {
        Repository.Insert(e);
    }
}
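Note that locking on e only works if every thread shares the same object instance, which is rarely the case. A sketch of a per-key lock instead (an illustration only, and the first answer's warning still applies: this only guards a single process):
using System.Collections.Concurrent;

static readonly ConcurrentDictionary<Guid, object> Locks =
    new ConcurrentDictionary<Guid, object>();

static void SaveElement(Entity e)
{
    // All entities carrying the same UniqueId contend on the same lock object.
    object keyLock = Locks.GetOrAdd(e.UniqueId, _ => new object());
    lock (keyLock)
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 != null)
            Repository.Update(e);
        else
            Repository.Insert(e);
    }
}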
