I have an issue: I want to continue inserting data after SQL Server raises an exception.
I have a unique index on 3 different columns of the table to detect duplicates.
For example, I am trying to insert 2 rows; the first one is a duplicate, the second one is not.
When the duplicate is detected, execution goes into the catch block, where I do nothing. But when it reaches the second row, which is not a duplicate, the exception is raised again for the previous row.
This is my code:
public async Task<IEnumerable<Result>> Handle(NewResultCommandDTO requests, CancellationToken cancellationToken)
{
    var results = new List<Result>();
    ...
    for (var j = 0; j < resultDetails.Count(); j++)
    {
        var rd = resultDetails.ElementAt(j);
        var newResult1 = new Result
        {
            AthleteFEIID = rd.AthleteFEIID,
            CompetitionCode = competition.CompetitionCode,
            HorseId = horse.Id,
        };
        results.Add(newResult1);
        try
        {
            await _resultsService.AddResultAsync(newResult1);
            await _resultsService.CompleteAsync();
        }
        catch (Exception ex)
        {
            var x = ex;
        }
    }
}
public async Task AddResultAsync(Result result)
{
    await Context.Results.AddAsync(result);
}
public async Task CompleteAsync()
{
    await Context.SaveChangesAsync().ConfigureAwait(false);
}
Thank you for your help!
await _resultsService.CompleteAsync(); is the statement throwing the SQL exception.
await _resultsService.AddResultAsync(newResult1); already adds the entity to the DB context. Even if the next statement throws and execution goes to the catch block, the duplicate entity is still tracked by the context. So when you add the next entity to the context and try to save it, the exception is thrown again because of the previous duplicate entity, which was never removed from the context.
One solution is to remove the duplicate entity from the context in the catch block:
try
{
    await _resultsService.AddResultAsync(newResult1);
    await _resultsService.CompleteAsync();
}
catch (Exception ex)
{
    var x = ex;
    _resultsService.RemoveResult(newResult1);
}

public void RemoveResult(Result result)
{
    Context.Results.Remove(result);
}
Another solution is to check whether the row already exists in the table before adding it. For that you will have to write a get method that queries by the unique indexed columns.
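A minimal sketch of that pre-check, assuming the three indexed columns are AthleteFEIID, CompetitionCode and HorseId (as in the Result entity above) and that the context is EF Core; ResultExistsAsync is a made-up helper name:

```csharp
// Hypothetical helper on the results service: returns true when a row with the
// same unique-index key (AthleteFEIID, CompetitionCode, HorseId) already exists.
public Task<bool> ResultExistsAsync(Result result)
{
    return Context.Results.AnyAsync(r =>
        r.AthleteFEIID == result.AthleteFEIID &&
        r.CompetitionCode == result.CompetitionCode &&
        r.HorseId == result.HorseId);
}

// Usage inside the loop, instead of relying on the unique-index exception:
if (!await _resultsService.ResultExistsAsync(newResult1))
{
    await _resultsService.AddResultAsync(newResult1);
    await _resultsService.CompleteAsync();
}
```

Note that this check is not race-free under concurrent writers; the unique index remains the final safety net, so keeping the catch block is still advisable.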
I am using the transaction scope from System.Transactions.
I have a method that performs two insertions in the database. The first one (the Localization) is inserted, but then rolled back because the second insertion fails.
The error is not with the data I send; the data is good. When I remove the transaction scope, it works.
I get this error:
System.InvalidOperationException: A root ambient transaction was completed before the nested transaction. The nested transactions should be completed first.
It also enters the second catch block and disposes the scope. What could be the problem?
This is my code:
public async Task InsertCategory(InsertCategoryRequest request)
{
    using var scope = new TransactionScope();
    int localizationId;
    try
    {
        localizationId = await _localizationRepository.InsertLocalization(new Localization
        {
            English = request.NameEN,
            Albanian = request.NameAL,
            Macedonian = request.NameMK
        });
    }
    catch (Exception e)
    {
        scope.Dispose();
        Log.Error("Unable to insert localization {#Exception}", e);
        throw ExceptionHandler.ThrowException(ErrorCode.Localization_UnableToInsert);
    }
    try
    {
        await _categoryRepository.InsertCategory(new Category
        {
            Name = request.NameEN,
            LocalizationId = localizationId
        });
    }
    catch (Exception e)
    {
        scope.Dispose();
        Log.Error("Unable to insert category {#Exception}", e);
        throw ExceptionHandler.ThrowException(ErrorCode.Category_UnableToInsert);
    }
    scope.Complete();
    scope.Dispose();
}
I found the answer. I looked for such a long time, but right after posting I found it.
I just added TransactionScopeAsyncFlowOption.Enabled when constructing the TransactionScope.
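For reference, the only change needed is in the constructor call; with async/await the ambient transaction must be allowed to flow across continuations:

```csharp
// Without this option, the ambient transaction does not flow across awaits,
// which produces the "root ambient transaction was completed before the
// nested transaction" error shown above.
using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

// ... the two awaited inserts as before ...

scope.Complete();
```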
I have tried to implement an optimistic-concurrency 'worker'.
The goal is to read batches of data from the same database table (a single table with no relations) with multiple parallel workers. This seemed to work so far: I get optimistic concurrency exceptions here and there, catch them and retry.
So far so good, and the function that fetches the data runs stably on my local setup. When I move the application to a test environment, however, I get a strange timeout exception which, even when caught, ends the async function (breaks the while loop). Does someone see a flaw in the implementation? What could cause the timeout? What could cause the async function to end?
public async IAsyncEnumerable<List<WorkItem>> LoadBatchedWorkload([EnumeratorCancellation] CancellationToken token, int batchSize, int runID)
{
    DataContext context = null;
    try
    {
        context = GetNewContext(); // create a new dbContext
        List<WorkItem> workItems;
        bool loadSuccessInner;
        while (true)
        {
            if (token.IsCancellationRequested) break;
            loadSuccessInner = false;
            context.Dispose();
            context = GetNewContext(); // create a new dbContext
            RunState currentRunState = context.Runs.Where(a => a.Id == runID).First().Status;
            try
            {
                // Error happens on the following line: Microsoft.Data.SqlClient.SqlException: Timeout
                workItems = context.WorkItems.Where(a => a.State == ProcessState.ToProcess).Take(batchSize).ToList();
                loadSuccessInner = true;
            }
            catch (Exception ex)
            {
                workItems = new List<WorkItem>();
            }
            if (workItems.Count == 0 && loadSuccessInner)
            {
                break;
            }
            //... update to a different RunState
            //... if set successful yield the result
            //... else cleanup and retry
        }
    }
    finally
    {
        if (context != null) context.Dispose();
    }
}
I verified that Entity Framework (here with the MS SQL Server adapter) runs the query fully server-side; it translates to a simple query like this: SELECT TOP 10 field_1, field_2 FROM WorkItems WHERE field_2 = 0
The query usually takes <1 ms, and the timeout is left at the default of 30 s.
I verified that no cancellation requests are fired.
This also happens when there is only a single worker and no one else is accessing the database. I'm aware that a timeout can happen when the resource is busy or blocked, but until now I have never seen a timeout on any other query.
(I'll update this answer whenever more information is being provided.)
Does someone see a flaw in the implementation?
Generally, your code looks fine.
What could cause the end of the async function?
Nothing in the code you showed should normally be an issue. Start by putting another try-catch block inside the loop to ensure that no other exceptions are thrown anywhere else (especially later, in the code you have not shown):
public async IAsyncEnumerable<List<WorkItem>> LoadBatchedWorkload([EnumeratorCancellation] CancellationToken token, int batchSize, int runID)
{
    DataContext context = null;
    try
    {
        context = GetNewContext();
        List<WorkItem> workItems;
        bool loadSuccessInner;
        while (true)
        {
            try
            {
                // ... (the inner loop code)
            }
            catch (Exception e)
            {
                // TODO: Log the exception here using your favorite method.
                throw;
            }
        }
    }
    finally
    {
        if (context != null) context.Dispose();
    }
}
Take a look at your log and ensure that it does not show any exceptions being thrown. Then additionally log every possible exit condition (break and return) in the loop, to find out how and why the code exits it.
If there are no other break or return statements in your code, then the only way it can exit the loop is if zero workItems are successfully returned from the database.
What could cause the timeout?
Make sure that any Task-returning/async methods you call are awaited.
To track down where the exceptions are actually coming from, deploy a Debug build with .pdb files to get a full stack trace with source-code line references.
You can also implement a DbCommandInterceptor and trace failing commands yourself:
public class TracingCommandInterceptor : DbCommandInterceptor
{
    public override void CommandFailed(DbCommand command, CommandErrorEventData eventData)
    {
        LogException(eventData);
    }

    public override Task CommandFailedAsync(DbCommand command, CommandErrorEventData eventData, CancellationToken cancellationToken = default)
    {
        LogException(eventData);
        return Task.CompletedTask;
    }

    private static void LogException(CommandErrorEventData eventData)
    {
        if (eventData.Exception is SqlException sqlException)
        {
            // -2 = Timeout error
            // See https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/cc645611(v=sql.105)?redirectedfrom=MSDN
            if (sqlException.Number == -2)
            {
                var stackTrace = new StackTrace();
                var stackTraceText = stackTrace.ToString();
                // TODO: Do some logging here and output the stackTraceText
                // and other helpful information like the command text etc.
                // -->
            }
        }
    }
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseLoggerFactory(LoggingFactory);
    optionsBuilder.UseSqlServer(connectionString);
    optionsBuilder.EnableSensitiveDataLogging();
    optionsBuilder.EnableDetailedErrors();
    // Add the command interceptor.
    optionsBuilder.AddInterceptors(new TracingCommandInterceptor());
    base.OnConfiguring(optionsBuilder);
}
Additionally logging the command text of the failed command in the interceptor is also a good idea.
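A hedged sketch of what that extra logging could look like: the interceptor's LogException helper would need the DbCommand passed through as well (an assumption; the version above only receives the event data):

```csharp
// Variant of the logging helper that also receives the failed DbCommand,
// so its SQL text and parameter values can be logged on a timeout.
private static void LogException(DbCommand command, CommandErrorEventData eventData)
{
    if (eventData.Exception is SqlException sqlException && sqlException.Number == -2)
    {
        // Replace Console.WriteLine with your actual logging framework.
        Console.WriteLine($"Timeout for command: {command.CommandText}");
        foreach (DbParameter parameter in command.Parameters)
        {
            Console.WriteLine($"  {parameter.ParameterName} = {parameter.Value}");
        }
    }
}
```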
I'm trying to insert some potentially poor-quality data into the system, and I need a per-row report of what happened, so I've been trying to do this:
public async Task<IEnumerable<Result<Invoice>>> AddAllAsync(IEnumerable<Invoice> invoices, Guid bucketId)
{
    var results = new List<Result<Invoice>>();
    log.Debug(invoices.ToJson());
    foreach (var invoice in invoices)
    {
        try
        {
            results.Add(new Result<Invoice> { Success = true, Item = await AddAsync(invoice, bucketId), Message = "Imported Successfully" });
            await SaveChangesAsync();
        }
        catch (Exception ex)
        {
            results.Add(new Result<Invoice> { Message = ex.Message, Item = invoice });
        }
    }
    return results;
}
My problem is that after a single Add call fails, the attempted entity is left in the change tracker, so calling Add again with a different item raises the exception again for the first item.
Is there a way (without rebuilding the context) to do "batch inserts" and get details at a per-row/entity level of all the issues, not just the first?
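One possible approach, sketched under the assumption that the context is EF Core: detach the failed entity from the change tracker in the catch block, so subsequent saves no longer include it. If AddAsync maps the invoice to a different entity type, detach that entity instead.

```csharp
catch (Exception ex)
{
    results.Add(new Result<Invoice> { Message = ex.Message, Item = invoice });

    // Detach everything still pending as Added, so the next SaveChangesAsync
    // starts from a clean tracker and only retries the rows that come after.
    foreach (var entry in _context.ChangeTracker.Entries()
                 .Where(e => e.State == EntityState.Added)
                 .ToList())
    {
        entry.State = EntityState.Detached;
    }
}
```

Detaching via ChangeTracker.Entries() rather than a single Entry(invoice) call covers the case where the save batched more than one pending row.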
I have a process that periodically retrieves records from a database and runs 3 operations on each. For each record, the 3 operations must either all succeed or none at all. In case one operation fails, I want the operations already processed for the previous records to be committed, so that the next time the process runs, it picks up at the record for which one of the 3 operations previously failed.
I thought of wrapping the 3 operations in a transaction per record and looping over the records, but I want to ensure that using a database transaction in this scenario is efficient. The following is what I have in mind. Is it correct?
public async Task OrderCollectionProcessorWorker()
{
    using (var context = new DbContext())
    {
        try
        {
            IList<Order> ordersToCollect =
                await context.Orders.Where(
                        x => x.OrderStatusId == OrderStatusCodes.DeliveredId)
                    .ToListAsync(_cancellationTokenSource.Token);
            await ProcessCollectionsAsync(context, ordersToCollect);
        }
        catch (Exception ex)
        {
            Log.Error("Exception in OrderCollectionProcessorWorker", ex);
        }
    }
}

/// <summary>
/// For each order to collect, perform 3 operations
/// </summary>
/// <param name="context">db context</param>
/// <param name="ordersToCollect">List of Orders for collection</param>
private async Task ProcessCollectionsAsync(DbContext context, IList<Order> ordersToCollect)
{
    if (ordersToCollect.Count == 0) return;
    Log.Debug($"ProcessCollections: processing {ordersToCollect.Count} orders");
    foreach (var order in ordersToCollect)
    {
        // group the 3 operations in one transaction for each order
        // so that if one operation fails, the operations performed on the previous orders
        // are committed
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                // *************************
                // run the 3 operations here
                // operations consist of updating the order itself, and other database updates
                Operation1(order);
                Operation2(order);
                Operation3(order);
                // *************************
                await context.SaveChangesAsync();
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction?.Rollback();
                Log.Error("General exception when executing ProcessCollectionsAsync on Order " + order.Id, ex);
                throw new Exception("ProcessCollections failed on Order " + order.Id, ex);
            }
        }
    }
}
It seems like a correct way of doing it, apart perhaps from the fact that in the catch block you should rethrow the exception or do something else to stop the loop from progressing (if I understood your requirements correctly). It is not even necessary to use
var transaction = context.Database.BeginTransaction()
because
await context.SaveChangesAsync();
creates its own transaction. Every change you make is stored in the context, and when you call SaveChanges a transaction is opened and all the changes are written as one batch. If something fails, all the changes are rolled back. Another call to SaveChanges will open another transaction for the new changes. Please bear in mind, however, that if a transaction fails you should no longer use the same context but create a new one. To summarize, I would write your method as follows:
private async Task ProcessCollectionsAsync(DbContext context, IList<Order> ordersToCollect)
{
    if (ordersToCollect.Count == 0) return;
    Log.Debug($"ProcessCollections: processing {ordersToCollect.Count} orders");
    foreach (var order in ordersToCollect)
    {
        // SaveChangesAsync wraps the 3 operations in its own transaction for
        // each order, so if one operation fails, the changes committed for the
        // previous orders are preserved
        try
        {
            // *************************
            // run the 3 operations here
            // operations consist of updating the order itself, and other database updates
            Operation1(order);
            Operation2(order);
            Operation3(order);
            // *************************
            await context.SaveChangesAsync();
        }
        catch (Exception ex)
        {
            Log.Error("General exception when executing ProcessCollectionsAsync on Order " + order.Id, ex);
            throw;
        }
    }
}
I'm testing a bulk import using EF Core and trying to save asynchronously.
I'm trying to add 100 entities at a time, then asynchronously save, and repeat until they are all saved. I occasionally get a PK error because it tries to add two entities with the same id to the database. None of the entities being added have an id set; the ids come from auto-generated sequences.
The code:
public async Task<bool> BulkAddAsync(IEnumerable<VehicleCatalogModel> models)
{
    _dbContext.ChangeTracker.AutoDetectChangesEnabled = false;
    try
    {
        var entities = models.Select(ToEntity);
        _dbContext.Set<VehicleCatalog>().AddRange(entities);
        await _dbContext.SaveChangesAsync();
    }
    catch (Exception ex)
    {
        _logger.LogError(0, ex, "An Error occurred during the import");
        return false;
    }
    return true;
}
I am calling the method in an xUnit test that generates a list of test data and calls the import:
var result = manager.BulkAddAsync(modelsToAdd.AsEnumerable());
var counter = 0;
while (!result.IsCompleted && counter < 10)
{
    Thread.Sleep(6000);
    counter++;
}
Assert.True(result.IsCompleted && result.Result);
It should hold up execution after 100 entities have been added until they are saved, and then add more, but I am still occasionally getting this error. Is there something else I need to add to get this to work correctly? Or a better method of bulk insert?
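One thing worth checking, offered tentatively since the rest of the test is not shown: BulkAddAsync is never awaited, and polling Task.IsCompleted with Thread.Sleep can race with other test code sharing the context. xUnit supports async test methods, so the call can simply be awaited:

```csharp
// Sketch of an async xUnit test that awaits the bulk import directly
// instead of polling Task.IsCompleted (names follow the question;
// GenerateTestModels is a hypothetical test-data helper).
[Fact]
public async Task BulkAddAsync_ImportsAllModels()
{
    var modelsToAdd = GenerateTestModels();
    var result = await manager.BulkAddAsync(modelsToAdd.AsEnumerable());
    Assert.True(result);
}
```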