Can someone please tell me which is the better way to update a table from a list of values? Which one should be used when, and why?
Method 1:
public void SaveDetails(List<MyTable> list)
{
_Entity.MyTable.AddOrUpdate(list.ToArray());
try
{
_Entity.SaveChanges();
}
catch (DbUpdateException ex)
{
Console.WriteLine(ex);
}
}
Method 2:
public void SaveDetails(List<MyTable> list)
{
foreach (var file in list)
{
_Entity.MyTable.Add(file);
_Entity.Entry(file).State = EntityState.Modified;
try
{
_Entity.SaveChanges();
}
catch (DbUpdateException ex)
{
Console.WriteLine(ex);
}
}
}
Method 1
The AddOrUpdate method is in the System.Data.Entity.Migrations namespace because it is intended for seeding data in migrations. If it works for you then it may be a good option, but be aware that this is not how it was designed to be used.
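For illustration, a minimal sketch of how it is typically used (assuming MyTable has an Id key property; the parameterless overload matches on the primary key, while the overload below lets you name the matching property explicitly):

using System.Data.Entity.Migrations; // AddOrUpdate lives here

public void SaveDetails(List<MyTable> list)
{
    // Matches existing rows on Id and issues an UPDATE for them, an INSERT otherwise
    _Entity.MyTable.AddOrUpdate(t => t.Id, list.ToArray());
    _Entity.SaveChanges();
}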
Method 2
This approach assumes that all entities already exist in the database, and it will fail if they do not. If you know that is the case, then it will be more efficient because it doesn't need to check first.
Do not call SaveChanges() inside the loop, as this causes one database round trip per entity. Instead, call it once at the end and all the entities will be updated at once. Also, you should replace Add (which implies adding a new object) with Attach (which tells the context to track an existing entity). This makes your intent easier to follow.
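A sketch of Method 2 adjusted along those lines (assuming every entity in the list already exists in the database):

public void SaveDetails(List<MyTable> list)
{
    foreach (var file in list)
    {
        // Attach tells the context this is an existing row; Modified marks all its columns for update
        _Entity.MyTable.Attach(file);
        _Entity.Entry(file).State = EntityState.Modified;
    }
    try
    {
        // A single round trip sends all the updates together
        _Entity.SaveChanges();
    }
    catch (DbUpdateException ex)
    {
        Console.WriteLine(ex);
    }
}

Note that setting the state to Modified attaches a detached entity anyway, so the explicit Attach call mainly documents intent.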
If you can't rely on all entities already existing in the database, then you'll need to manually check:
var existing = _Entity.MyTable.Select(o => o.Id).ToList();
foreach (var file in list)
{
if (existing.Contains(file.Id))
{
_Entity.MyTable.Attach(file);
_Entity.Entry(file).State = EntityState.Modified;
}
else
{
_Entity.MyTable.Add(file);
}
}
_Entity.SaveChanges();
If you want to delete rows from the database table which are not in your list, that will require additional logic.
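That additional logic could look roughly like this (a sketch assuming Id is the key and that list contains the rows you want to keep):

// Remove every row whose Id does not appear in the incoming list
var keepIds = list.Select(x => x.Id).ToList();
var toDelete = _Entity.MyTable.Where(o => !keepIds.Contains(o.Id)).ToList();
_Entity.MyTable.RemoveRange(toDelete);
_Entity.SaveChanges();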
Conclusion
There is no way of knowing for sure which approach is best for your scenario. If both methods work, and they're both fast enough, then choose the one you prefer. If speed is an issue, then benchmark them and choose the faster option.
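If you do benchmark, a plain Stopwatch around each method with a representative list is usually enough (testList here stands in for whatever sample data you use):

var sw = System.Diagnostics.Stopwatch.StartNew();
SaveDetails(testList); // run Method 1, then repeat the measurement with Method 2
sw.Stop();
Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");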
I have a method that updates many items. Because of concurrency, some items can be updated correctly, others can't be updated because the user sent incorrect data, and others can't be updated because another process deleted the item the user wants to update.
So I was wondering how to tell the consumer of the library which items could be updated, which items had errors, and which items couldn't be updated because they were deleted.
I was thinking of a method like the one below, but I have the feeling it is a bit of a code smell.
public List<MyType> UpdateItems(IEnumerable<MyType> paramItems)
{
List<MyType> myLstCorrectItems = new List<MyType>();
List<Exception> myLstExceptions = new List<Exception>();
foreach (MyType item in paramItems)
{
try
{
item.Update(newValue);
myLstCorrectItems.Add(item);
}
catch (System.Exception ex)
{
Exception myException = new Exception("ERROR " + item.ID + ex.Message);
myLstExceptions.Add(myException);
}
}
if(myLstExceptions.Count == 0)
{
return myLstCorrectItems;
}
foreach(MyType iterator in myLstCorrectItems)
{
Exception myException = new Exception("OK " + iterator.ID);
myLstExceptions.Add(myException);
}
throw new AggregateException("ERRORS", myLstExceptions);
}
The idea of the code is that if there are no items with exceptions, it returns the list of all the items that were updated. When the consumer receives the list, it can check whether some item is missing from it, which means it was deleted, and warn the user. If the item exists, it has the new values because of the update.
If there is at least one item with errors, I create an aggregate exception to which I add all the exceptions. I also create a "fake exception" for the correct items, so the consumer receives all the processed items. That way, if some item is not in the aggregate exception, the consumer knows it was deleted by another process.
Perhaps it would be better to use a custom exception, to avoid parsing the error string to see if it is OK or ERROR, but that is not my doubt.
My doubt is whether the general idea of using a "fake exception" to include the correct items is good or not.
If there are better or alternative ways to notify the final result of each item, I would be glad to know them.
Thanks.
EDIT: Solution 1: return correct and incorrect items:
public (List<MyType> CorrectItems, List<MyType> IncorrectItems) UpdateItems(IEnumerable<MyType> paramItems)
{
List<MyType> myLstCorrectItems = new List<MyType>();
List<MyType> myLstIncorrectItems = new List<MyType>();
foreach (MyType item in paramItems)
{
try
{
item.Update(newValue);
myLstCorrectItems.Add(item);
}
catch (System.Exception)
{
myLstIncorrectItems.Add(item);
}
}
return (myLstCorrectItems, myLstIncorrectItems);
}
It is a bad idea to create these fake exceptions to mark the correct items. This is very confusing and I certainly would not expect it. Another solution is to create your own type for the result:
public UpdateItemsResult UpdateItems(IEnumerable<MyType> paramItems)
{
// ...
}
Your type might look like this:
public class UpdateItemsResult
{
public List<int> IdsOfUpdatedItems {get;set;}
public List<int> IdsOfFailedItems {get;set;}
}
The first list contains the IDs of the items for which the update worked. The second list contains the IDs of the items for which an error occurred. Of course, you can adapt the type to your needs.
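A minimal sketch of UpdateItems filling such a result, reusing the loop from the question (item.ID is assumed to be an int here, and newValue comes from the surrounding context as in the original code):

public UpdateItemsResult UpdateItems(IEnumerable<MyType> paramItems)
{
    var result = new UpdateItemsResult
    {
        IdsOfUpdatedItems = new List<int>(),
        IdsOfFailedItems = new List<int>()
    };
    foreach (MyType item in paramItems)
    {
        try
        {
            item.Update(newValue); // same update call as in the question
            result.IdsOfUpdatedItems.Add(item.ID);
        }
        catch (System.Exception)
        {
            result.IdsOfFailedItems.Add(item.ID);
        }
    }
    return result;
}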
What I know is that EF creates a transaction for DbContext.SaveChanges.
But I need a block of operations including inserts, and I need the identity results from them to complete my sequence.
So what I do looks like this:
using var dbTransaction = context.Database.BeginTransaction();
try {
context.Add(myNewEntity);
context.SaveChanges();
otherEntity.RefId = myNewEntity.Id;
context.Update(otherEntity);
// some other inserts and updates
context.SaveChanges();
dbTransaction.Commit();
}
catch {
dbTransaction.Rollback();
throw;
}
So I call SaveChanges after the inserts to get the identities and not break the relations.
It looks like transactions within transactions. Is that correct? Is this how it should be done? I mean, does Commit not require its own SaveChanges? I assume it just saves the changes, but I want to be sure.
Your code will work properly, but I prefer to do it this way (with the same BeginTransaction as in your code):
using var dbTransaction = context.Database.BeginTransaction();
try {
context.Add(myNewEntity);
var result = context.SaveChanges();
if (result == 0) {
dbTransaction.Rollback();
// ... return error
}
otherEntity.RefId = myNewEntity.Id;
context.Update(otherEntity);
// some other inserts and updates
result = context.SaveChanges();
if (result == 0) {
dbTransaction.Rollback();
// ... return error
}
dbTransaction.Commit();
}
catch {
dbTransaction.Rollback();
throw;
}
It is very useful if, for example, you update, add, or delete several records. In that case result returns the number of affected records, and instead of result == 0 I usually check whether result is less than the number of affected records I expect.
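For example (expectedCount being however many rows you expect the batch to affect):

var result = context.SaveChanges();
if (result < expectedCount)
{
    // Fewer rows were written than expected, so undo the whole block
    dbTransaction.Rollback();
    // ... return error
}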
I'm on a team using an EF code-first approach with ODP.NET (Oracle). We need to attempt to write updates to multiple rows in a table and store any exceptions in a collection to be bubbled up to a handler (so writing doesn't halt because one record can't be written). However, this code throws an exception saying
System.InvalidOperationException: The operation cannot be completed because the DbContext has been disposed.
I'm not sure why. The same behavior occurs if the method is changed to be a synchronous method and uses .Find().
InvModel _model;
public InvoiceRepository(InvModel model)
{
_model = model;
}
public async Task SetStatusesToSent(IEnumerable<Invoice> invoices)
{
var exceptions = new List<Exception>();
foreach (var id in invoices)
{
try
{
var iDL = await _model.INVOICES.FindAsync(id);/*THROWS A DBCONTEXT EXCEPTION HERE*/
iDL.STATUS = Statuses.Sent; // get value from Statuses and assign
_model.SaveChanges(); //save changes to the model
}
catch (Exception ex)
{
exceptions.Add(ex);
continue; //not necessary but makes the intent more legible
}
}
}
Additional detail update: _model is injected by DI.
Remember that LINQ executes lazily, that is, only when you actually use the information.
The problem might be that your DbContext has gone out of scope by then.
Use .ToList() or .ToArray() to force execution at that time.
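For example, a sketch based on the code above: materialize the sequence while the context that produced it is still alive, then iterate the in-memory list.

// Force execution now, while the DbContext behind the query is still in scope
var invoiceList = invoices.ToList();
foreach (var invoice in invoiceList)
{
    // ... work with invoice; no deferred query runs after the context is disposed
}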
I am porting an app from EF 6 to EF Core 2.2. I have an object with some related objects inside, each one with a database-generated ID and GUID (the database is PostgreSQL).
I'm trying to create a generic method that adds a whole object graph with all related objects, the same way as in EF 6, like this:
var res = context.Set<T>().Add(entity);
Before the insert, EF creates temporary IDs, which are replaced with the real database IDs.
Because different objects may contain exactly the same inner objects (for context, my subject area is medicine: several different analyses are performed on the same sample), in EF Core I can't add the whole object graph like this; I get errors such as:
Key ("ID")=(5) already exists
But in the EF 6 version everything used to work: all objects were inserted, including the inner ones, with correct IDs and GUIDs and without duplicates.
In both versions the temporary IDs of identical objects are equal, but only in the EF Core version do I get this error.
I have tried adding the attribute
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
and tried configuring it in the DbContext:
modelBuilder.Entity<Sample>().Property(e => e.ID).ValueGeneratedOnAdd();
but neither works for me; I think the problem is not there.
I also found this article in the Microsoft docs, which says
If the graph does contain duplicates, then it will be necessary to process the graph before sending it to EF to consolidate multiple instances into one.
but I'm not sure whether this applies to my case.
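If it does apply, I guess the consolidation would look something like this sketch (Sample and its Guid property are just stand-ins from my model):

// Keep a single instance per GUID so the graph never contains two copies of the same entity
var seen = new Dictionary<Guid, Sample>();

Sample Consolidate(Sample candidate)
{
    if (seen.TryGetValue(candidate.Guid, out var existing))
        return existing; // reuse the instance that is already part of the graph
    seen[candidate.Guid] = candidate;
    return candidate;
}

// Every reference in the graph would be replaced with Consolidate(x) before calling Add.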
Am I doing this wrong or is it impossible in EF Core 2.2?
Magic sauce: create an interface and implement it on the objects you don't want to save as part of the object graph, then simply do not mark those objects as modified. The paradigm I was failing to understand was that I never really wanted to save a 'defining' object during a save when that object was only being used to define the object being saved.
I save the defining objects with a separate process. It works perfectly.
public virtual T InsertOrUpdate(T oneObject)
{
T output = null;
if (oneObject.Id == Guid.Empty)
{
output = this.Insert(oneObject);
}
else
{
try
{
_dbContext.ChangeTracker.TrackGraph(oneObject, e =>
{
if (e.Entry.IsKeySet)
{
// See if the entity implements IStaticObject
List<Type> x = e.Entry.Entity.GetType().GetInterfaces().ToList();
if (x.Contains(typeof(IStaticObject)))
{
_logger.LogWarning($"Not tracking entry {e.Entry}");
}
else
{
e.Entry.State = EntityState.Modified;
}
}
else
{
e.Entry.State = EntityState.Added;
}
});
_dbContext.Entry(oneObject).State = EntityState.Modified;
_dbContext.SaveChanges();
output = oneObject;
}
catch (Exception ex)
{
_logger.LogError(ex, $"Problem updating object {oneObject.Id}");
}
}
return output;
}
public virtual T Insert(T oneObject)
{
try
{
_dbContext.Attach(oneObject);
_dbContext.SaveChanges();
}
catch (Exception error)
{
_logger.LogError(error, error.Message);
}
return oneObject;
}
public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
var uncachedRecordIDs = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
if (!uncachedRecordIDs.Any()) return;
using (var db = new DBContext())
{
.....
}
}
The above method is repeated regularly throughout the project (except with different generics passed in). I'm looking to avoid repeated usages of the if (!uncachedRecordIDs.Any()) return; lines.
In short, is it possible to make LocalCacheController.GetUncachedRecordIDs return from the CacheUncachedMessageIDs method?
This would guarantee a new data context is not created unless it needs to be (and stops me accidentally forgetting to add the return line in the parent method).
It is not possible for a called method to return from its parent method.
You could throw an unhandled exception inside GetUncachedRecordIDs, which would do the trick, but exceptions are not meant for this, so it creates confusion. Moreover, it is very slow.
Another discouraged option is some goto magic, which also creates confusion because goto leads to unexpected behaviour in the program's execution flow.
Your best bet is to return a result object with a simple bool HasUncachedRecordIDs field and then check it: if nothing is uncached, return. This replaces the Any() call in the parent method:
var uncachedRecordIDsResult = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
if (!uncachedRecordIDsResult.HasUncachedRecordIDs) return;
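The result type itself only needs to carry the flag (and, if useful, the IDs); the names here are just a sketch:

public class GetUncachedRecordIDsResult
{
    public bool HasUncachedRecordIDs { get; set; }
    public List<int> UncachedRecordIDs { get; set; }
}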
My reasoning for why the language lacks this feature is that calling GetUncachedRecordIDs in basically any function could then unexpectedly end that parent function without warning. It would also couple the two functions closely, whereas good practice favours loose coupling of classes and methods.
You could pass an Action to your GetUncachedRecordIDs method which you only invoke if you need to. Rough sketch of the idea:
// LocalCacheController
void GetUncachedRecordIDs<T>(List<int> messageIDs, Action<List<int>> action)
{
// ...
if (!cached) {
action(recordIds);
}
}
// ...
public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs, uncachedRecordIDs => {
using (var db = new DBContext())
{
// ...
}
});
}