I'm on a team using an EF code-first approach with ODP.NET (Oracle). We need to attempt to write updates to multiple rows in a table, and store any exceptions in a collection to be bubbled up to a handler (so writing doesn't halt because one record can't be written). However, this code throws an exception saying
System.InvalidOperationException: The operation cannot be completed because the DbContext has been disposed.
I'm not sure why. The same behavior occurs if the method is changed to be a synchronous method and uses .Find().
InvModel _model;

public InvoiceRepository(InvModel model)
{
    _model = model;
}

public async Task SetStatusesToSent(IEnumerable<Invoice> invoices)
{
    var exceptions = new List<Exception>();
    foreach (var invoice in invoices)
    {
        try
        {
            var iDL = await _model.INVOICES.FindAsync(invoice.ID); /*THROWS A DBCONTEXT EXCEPTION HERE*/
            iDL.STATUS = Statuses.Sent; // get value from Statuses and assign
            _model.SaveChanges();       // save changes to the model
        }
        catch (Exception ex)
        {
            exceptions.Add(ex);
            continue; // not necessary, but makes the intent more legible
        }
    }
}
Additional detail update: _model is injected by DI.
Remember that LINQ executes lazily, that is, only when you actually use the results.
The problem might be that your DbContext has gone out of scope by the time the query executes.
Use .ToList() or .ToArray() to force execution at that point.
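For example, a minimal sketch against the question's model (the GetUnsentInvoices name and the Where filter are illustrative, not from the question):

public IList<Invoice> GetUnsentInvoices()
{
    // Materialize the query while _model is still alive; without ToList(),
    // enumeration could happen later, after the DI scope has disposed the context.
    return _model.INVOICES
                 .Where(i => i.STATUS != Statuses.Sent)
                 .ToList(); // forces execution here
}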
I have the following logic:
public async Task UpdateData(DbContext context)
{
    try
    {
        await LongUpdate(context);
    }
    catch (Exception e)
    {
        try
        {
            await context.Database.ExecuteSqlCommandAsync(
                @"update d set d.UpdatedAt = GETDATE() from SomeTable d where id > 11");
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}

// this operation takes about a minute
private static async Task<int> LongUpdate(DbContext context)
{
    context.Database.CommandTimeout = 5; // change this to 15 to see the MultipleActiveResultSets exception
    return await context.Database.SqlQuery<int>(
        @"update otherTable set UpdatedAt = GETDATE(); SELECT @@ROWCOUNT").FirstOrDefaultAsync();
}
As presented above, there are two update operations, both awaited.
LongUpdate takes more than a minute.
When the timeout is set to 5 s:
LongUpdate throws a timeout exception, and the second update executes successfully.
When I increase the timeout to 15 s or more:
LongUpdate throws a timeout exception, but the second update immediately throws: System.InvalidOperationException: The connection does not support MultipleActiveResultSets.
Shouldn't await prevent this exception?
Why does this depend on the timeout value?
According to the EF docs, the Database property should not be used the way you are using it. Since the usage is incorrect in the first place, there is little point in analyzing exactly what is happening. All your database operations should go through the context using DbSet<T>, followed by a SaveChanges or SaveChangesAsync call on the DbContext after changing the data sets. Of course, you can still execute raw SQL, but in a different way, for example:
public static IList<StockQuote> GetLast(this DbSet<StockQuote> dataSet, int stockId)
{
    IList<StockQuote> lastQuote = dataSet
        .FromSqlRaw("SELECT * FROM stockquote WHERE StockId = {0} ORDER BY Timestamp DESC LIMIT 1", stockId)
        .ToList();
    return lastQuote;
}
To create a DbContext with a command timeout (for MySQL, in the example below), you could use something like this:
public static class ServiceCollectionExtension
{
    public static IServiceCollection ConfigureMySqlServerDbContext<TContext>(
        this IServiceCollection serviceCollection, string connectionString,
        ILoggerFactory loggerFactory, int timeout = 600)
        where TContext : DbContext
    {
        return serviceCollection.AddDbContext<TContext>(options => options
            .UseQueryTrackingBehavior(QueryTrackingBehavior.TrackAll)
            .UseLoggerFactory(loggerFactory)
            .UseMySql(connectionString, ServerVersion.AutoDetect(connectionString),
                      sqlOptions => sqlOptions.CommandTimeout(timeout))
            .UseLazyLoadingProxies());
    }
}
Then just call services.ConfigureMySqlServerDbContext<ModelContext>(Settings.ConnectionString, loggerFactory);
I think if you change your approach, you will get rid of these exceptions.
Shouldn’t await prevent this exception?
It depends on your pattern. We need to ensure that all access is sequential. In other words, the second asynchronous request on the same DbContext instance must not start before the first request finishes (and that's the whole point). Although this is typically done by using the await keyword on each async operation, in some cases we may not achieve it. In your case, the first part of the LongUpdate method, context.Database.SqlQuery<int>(), is not an async method itself; it provides its results synchronously to FirstOrDefaultAsync(). I don't think this is a problem with EF's async behavior.
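To illustrate what non-sequential access on one context looks like (a hypothetical sketch using the thread's EF6 APIs; TableA/TableB are placeholders):

// Anti-pattern: two commands racing on the same DbContext instance.
private static async Task RacyUpdates(DbContext context)
{
    // Neither call is awaited before the next starts, so the second command
    // can begin while the first is still consuming the connection.
    var first  = context.Database.ExecuteSqlCommandAsync("update TableA set Touched = 1");
    var second = context.Database.ExecuteSqlCommandAsync("update TableB set Touched = 1");
    await Task.WhenAll(first, second);
}

// Safe: each operation completes before the next begins.
private static async Task SequentialUpdates(DbContext context)
{
    await context.Database.ExecuteSqlCommandAsync("update TableA set Touched = 1");
    await context.Database.ExecuteSqlCommandAsync("update TableB set Touched = 1");
}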
Why does it depend on the timeout value?
After a certain amount of time, the SQL query execution enters a critical state that it can't leave without spending more time than what you set as CommandTimeout; your code moves forward regardless, and the exception happens.
Note that applications with I/O-related contention will benefit the most from asynchronous queries and save operations, according to "Performance considerations for EF 4, 5, and 6". The page "EF async methods are slower than non-async" lists some notable points.
The command timeout is distinct from the connection timeout. A value set with this API for the command timeout overrides any value set in the connection string. The Database.CommandTimeout property gets or sets the timeout value, in seconds, for all context operations.
private static async Task<int> LongUpdate(DbContext context)
{
    context.Database.CommandTimeout = 5; // change this to 15 to see the MultipleActiveResultSets exception
    return await context.Database.SqlQuery<int>(
        @"update otherTable set UpdatedAt = GETDATE(); SELECT @@ROWCOUNT").FirstOrDefaultAsync();
}
Here you set CommandTimeout. If your query does not execute within 5 seconds, a timeout exception is thrown, and after that you try to execute another query in the catch block. But you use the same context there, which has already timed out, and it throws System.InvalidOperationException.
So to fix this, you have to initialize your context again.
public async Task UpdateData(DbContext context)
{
    try
    {
        await LongUpdate(context);
    }
    catch (Exception e)
    {
        try
        {
            context = new MyContext(); // initialize your DbContext here
            await context.Database.ExecuteSqlCommandAsync(
                @"update d set d.UpdatedAt = GETDATE() from SomeTable d where id > 11");
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}
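A slightly safer variant of the same idea (a sketch; MyContext stands in for your concrete context type) disposes the fallback context when it's done:

public async Task UpdateData(DbContext context)
{
    try
    {
        await LongUpdate(context);
    }
    catch (Exception)
    {
        // The original context may be left in an unusable state after the
        // timeout, so run the fallback statement on a fresh, short-lived context.
        using (var fallback = new MyContext())
        {
            await fallback.Database.ExecuteSqlCommandAsync(
                @"update d set d.UpdatedAt = GETDATE() from SomeTable d where id > 11");
        }
    }
}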
I have code like this:
try
{
    Member member = database.Members.Where(m => m.ID == 1).FirstOrDefault();
    member.Name = "NewMemberName";
    database.Entry(member).State = EntityState.Modified;
    database.SaveChanges();
}
catch (Exception ex)
{
    database.Logs.Add(new Log() { Value = ex.ToString() });
    database.SaveChanges();
}
And the entity:
[StringLength(5)]
public string Name { get; set; }
If Name is longer than 5 characters, SaveChanges() throws and the exception is caught, but when I then add a log entry and save again, the exception from SaveChanges() still remains. What should I do? (I can't change the schema.)
the exception from SaveChanges() still remains
Well, if this throws an exception:
database.SaveChanges();
Then there's a pretty good chance that this will also throw an exception:
database.SaveChanges();
Basically, in your catch block you shouldn't immediately retry the operation that failed a millisecond ago. Instead, log the failure and handle the exception:
catch (Exception ex)
{
// DO NOT call SaveChanges() here.
}
Of course, if writing to the database is failing, then logging to the database is also likely to fail. Suppose for example that the connection string is wrong or the database is down or timing out. You can't log that.
I recommend using a logging framework (log4net, NLog, etc.) as a separate dependency from your Entity Framework data access layer. It's a small learning curve, but you end up with a pretty robust logging system that can much more effectively handle problems. And can be easily configured to log to multiple places, so if writing to one error log (the database) fails then you still have another one (a file, for example).
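For instance, a minimal log4net sketch (the InvoiceService and MyContext names are illustrative; appender configuration is assumed to live in app.config or startup code):

using System;
using log4net;

public class InvoiceService
{
    // log4net logger; where it writes (file, DB, console...) is decided by configuration.
    private static readonly ILog Log = LogManager.GetLogger(typeof(InvoiceService));

    public void Save(MyContext database)
    {
        try
        {
            database.SaveChanges();
        }
        catch (Exception ex)
        {
            // Logging goes through the configured appenders, so a database
            // outage doesn't silently swallow the error.
            Log.Error("Failed to save changes", ex);
        }
    }
}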
At the very least, if persisting your data context fails, you'll need to log to a new data context. Otherwise the part that failed is still there.
Something structurally more like this:
try
{
    using (var database = new DbContext())
    {
        Member member = database.Members.Where(m => m.ID == 1).FirstOrDefault();
        member.Name = "NewMemberName";
        database.Entry(member).State = EntityState.Modified;
        database.SaveChanges();
    }
}
catch (Exception ex)
{
    using (var database = new DbContext())
    {
        database.Logs.Add(new Log() { Value = ex.ToString() });
        database.SaveChanges();
    }
}
Situation:
My application needs to process the first step in the business rules (the initial try-catch statement). If a certain error occurs when the process calls the helper method during that step, I need to switch to a second process in the catch statement. The backup process uses the same helper method. If the same error occurs during the second process, I need to stop the entire process and throw the exception.
Implementation:
I was going to insert another try-catch statement into the catch statement of the first try-catch statement.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    HelperMethod(s1);
}
catch (Exception e1)
{
    //backup information if first process generates an exception in the helper method
    string s2 = "value 2";
    //try-catch statement for second process
    try
    {
        HelperMethod(s2);
    }
    catch (Exception e2)
    {
        throw e2;
    }
}
What would be the correct design pattern to avoid code smells in this implementation?
I caused some confusion and left out that when the first process fails and switches to the second process, it will send different information to the helper method. I have updated the scenario to reflect the entire process.
If the HelperMethod needs a second try, there is nothing directly wrong with this, but your code in the catch tries to do way too much, and it destroys the stack trace from e2.
You only need:
try
{
    //call helper method
    HelperMethod();
}
catch (Exception e1)
{
    // maybe log e1; it is getting lost here
    HelperMethod();
}
I wouldn't say it is bad, although I'd almost certainly refactor the second block of code into a second method, to keep it comprehensible. And probably catch something more specific than Exception. A second try is sometimes necessary, especially for things like Dispose() implementations that might themselves throw (WCF, I'm looking at you).
The general idea putting a try-catch inside the catch of a parent try-catch doesn't seem like a code-smell to me. I can think of other legitimate reasons for doing this - for instance, when cleaning up an operation that failed where you do not want to ever throw another error (such as if the clean-up operation also fails). Your implementation, however, raises two questions for me: 1) Wim's comment, and 2) do you really want to entirely disregard why the operation originally failed (the e1 Exception)? Whether the second process succeeds or fails, your code does nothing with the original exception.
Generally speaking, this isn't a problem, and it isn't a code smell that I know of.
With that said, you may want to look at handling the error within your first helper method instead of just throwing it (and, thus, handling the call to the second helper method in there). That's only if it makes sense, but it is a possible change.
Yes, a more general pattern is to have the basic method include an overload that accepts an int attempt parameter, and then conditionally call itself recursively.
private void MyMethod(ParameterList parameterList)
{
    MyMethod(parameterList, 0);
}

private void MyMethod(ParameterList parameterList, int attempt)
{
    try { HelperMethod(); }
    catch (SomeSpecificException)
    {
        if (attempt < MAXATTEMPTS)
            MyMethod(parameterList, ++attempt);
        else
            throw;
    }
}
It shouldn't be that bad. Just document clearly why you're doing it, and most DEFINITELY try catching a more specific Exception type.
If you need some retry mechanism, which it looks like, you may want to explore different techniques, looping with delays etc.
It would be a little clearer if you called a different function in the catch so that a reader doesn't think you're just retrying the same function, as is, over again. If there's state happening that's not being shown in your example, you should document it carefully, at a minimum.
You also shouldn't rethrow with throw e2; like that: if you're going to work with the exception you caught at all, you should simply throw;. If not, you shouldn't try/catch.
Where you do not reference e1, you should simply catch (Exception), or better still catch (YourSpecificException).
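A small self-contained demo of that difference (the names are hypothetical; run it and compare the traces):

using System;

class RethrowDemo
{
    static void Fail() => throw new InvalidOperationException("boom");

    static void Main()
    {
        try
        {
            try
            {
                Fail();
            }
            catch (InvalidOperationException)
            {
                // "throw;" rethrows the original exception and keeps the stack
                // trace pointing at Fail(); "throw ex;" would make the trace
                // start at this line instead.
                throw;
            }
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine(ex.StackTrace); // still names Fail() as the origin
        }
    }
}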
If you're doing this to try and recover from some sort of transient error, then you need to be careful about how you implement this.
For example, in an environment where you're using SQL Server Mirroring, it's possible that the server you're connected to may stop being the master mid-connection.
In that scenario, it may be valid for your application to try and reconnect, and re-execute any statements on the new master - rather than sending an error back to the caller immediately.
You need to be careful to ensure that the methods you're calling don't have their own automatic retry mechanism, and that your callers are aware there is an automatic retry built into your method. Failing to ensure this can result in scenarios where you cause a flood of retry attempts, overloading shared resources (such as Database servers).
You should also ensure you're catching exceptions specific to the transient error you're trying to retry. So, in the example I gave, catch SqlException, and then examine it to see whether the error was that the SQL connection failed because the host was no longer the master.
If you need to retry more than once, consider placing an 'automatic backoff' retry delay - the first failure is retried immediately, the second after a delay of (say) 1 second, then doubled up to a maximum of (say) 90 seconds. This should help prevent overloading resources.
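A minimal sketch of that doubling backoff (the helper name and async shape are illustrative, and the blanket catch stands in for your specific transient exception):

using System;
using System.Threading.Tasks;

static class Retry
{
    public static async Task WithBackoffAsync(Func<Task> action, int maxAttempts = 5)
    {
        var delay = TimeSpan.Zero; // first retry is immediate
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await action();
                return; // success
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                await Task.Delay(delay);
                // 0 s, 1 s, 2 s, 4 s, ... capped at 90 s
                delay = delay == TimeSpan.Zero
                    ? TimeSpan.FromSeconds(1)
                    : TimeSpan.FromSeconds(Math.Min(delay.TotalSeconds * 2, 90));
            }
        }
    }
}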
I would also suggest restructuring your method so that you don't have an inner-try/catch.
For example:
bool helper_success = false;
bool automatic_retry = false;

//run initial process
try
{
    //call helper method
    HelperMethod();
    helper_success = true;
}
catch (Exception e)
{
    // check if e is a transient exception. If so, set automatic_retry = true
}

if (automatic_retry)
{
    //try-catch statement for second process
    try
    {
        HelperMethod();
    }
    catch (Exception e)
    {
        throw;
    }
}
Here's another pattern:
// set up state for first attempt
if (!HelperMethod(false))
{
    // set up state for second attempt
    HelperMethod(true);
    // no need to try/catch since you're just throwing anyway
}
Here, HelperMethod is
bool HelperMethod(bool throwOnFailure)
and the return value indicates whether or not success occurred (i.e., false indicates failure and true indicates success). You could also do:
// could wrap in try/catch
HelperMethod(2, stateChanger);
where HelperMethod is
void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
where numberOfTries indicates the number of times to try before throwing an exception and StateChanger[] is an array of delegates that will change the state for you between calls (i.e., stateChanger[0] is called before the first attempt, stateChanger[1] is called before the second attempt, etc.)
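A sketch of that second overload (everything here is illustrative, including the StateChanger delegate shape and the DoTheActualWork stand-in):

using System;

class RetryWithState
{
    // Hypothetical delegate that mutates whatever state the next attempt needs.
    delegate void StateChanger();

    static void DoTheActualWork()
    {
        // stand-in for the real lookup; assumed to throw on failure
    }

    static void HelperMethod(int numberOfTries, StateChanger[] stateChangers)
    {
        for (int attempt = 0; attempt < numberOfTries; attempt++)
        {
            stateChangers[attempt](); // prepare state for this attempt
            try
            {
                DoTheActualWork();
                return; // success: stop retrying
            }
            catch (Exception) when (attempt < numberOfTries - 1)
            {
                // swallow and fall through to the next attempt;
                // on the last attempt the filter is false, so the exception propagates
            }
        }
    }
}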
This last option indicates that you might have a smelly setup though. It looks like the class that is encapsulating this process is responsible for both keeping track of state (which employee to look up) as well as looking up the employee (HelperMethod). By SRP, these should be separate.
Of course, you need to catch a more specific exception than you currently are (don't catch the base class Exception!) and you should just throw instead of throw e if you need to rethrow the exception after logging, cleanup, etc.
You could emulate C#'s TryParse method signatures:
class Program
{
    static void Main(string[] args)
    {
        Exception ex;
        Console.WriteLine("trying 'ex'");
        if (TryHelper("ex", out ex))
        {
            Console.WriteLine("'ex' worked");
        }
        else
        {
            Console.WriteLine("'ex' failed: " + ex.Message);
            Console.WriteLine("trying 'test'");
            if (TryHelper("test", out ex))
            {
                Console.WriteLine("'test' worked");
            }
            else
            {
                Console.WriteLine("'test' failed: " + ex.Message);
                throw ex;
            }
        }
    }

    private static bool TryHelper(string s, out Exception result)
    {
        try
        {
            HelperMethod(s);
            result = null;
            return true;
        }
        catch (Exception ex)
        {
            // log here to preserve stack trace
            result = ex;
            return false;
        }
    }

    private static void HelperMethod(string s)
    {
        if (s.Equals("ex"))
        {
            throw new Exception("s can be anything except 'ex'");
        }
    }
}
Another way is to flatten the try/catch blocks, useful if you're using some exception-happy API:
public void Foo()
{
    try
    {
        HelperMethod("value 1");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }

    try
    {
        HelperMethod("value 2");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }
    // ... more here if needed
}
An option for retry (that most people will probably flame) would be to use a goto. C# doesn't have filtered exceptions (prior to the when clause added in C# 6), but this could be used in a similar manner.
const int MAX_RETRY = 3;

public static void DoWork()
{
    //Do Something
}

public static void DoWorkWithRetry()
{
    var @try = 0;
retry:
    try
    {
        DoWork();
    }
    catch (Exception)
    {
        @try++;
        if (@try < MAX_RETRY)
            goto retry;
        throw;
    }
}
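For comparison, a sketch of the same retry using a C# 6 exception filter and a loop instead of the goto (DoWork and MAX_RETRY as defined above):

public static void DoWorkWithRetryFiltered()
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            DoWork();
            return; // success
        }
        catch (Exception) when (attempt < MAX_RETRY)
        {
            // the filter runs before the stack unwinds; loop around and retry.
            // On the last attempt the filter is false, so the exception propagates.
        }
    }
}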
In this case you know this "exception" probably will happen, so I would prefer a simple approach and leave exceptions for the unknown events.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    if (!HelperMethod(s1))
    {
        //backup information if first process generates an exception in the helper method
        string s2 = "value 2";
        if (!HelperMethod(s2))
        {
            return ErrorOfSomeKind;
        }
    }
    return Ok;
}
catch (ApplicationException ex)
{
    throw;
}
I know that I've used the above nested try-catch recently to handle decoding data where two third-party libraries throw exceptions on failure to decode (try JSON decode, then try base64 decode), but my preference is to have functions return a value which can be checked.
I generally only use the throwing of exceptions to exit early and notify something up the chain about the error if it's fatal to the process.
If a function is unable to provide a meaningful response, that is not typically a fatal problem (Unlike bad input data).
It seems like the main risk in nested try catch is that you also end up catching all the other (maybe important) exceptions that might occur.
My code looks something like this:
try
{
    using (TransactionScope iScope = new TransactionScope())
    {
        try
        {
            isInsertSuccess = InsertProfile(account);
        }
        catch (Exception ex)
        {
            throw;
        }

        if (isInsertSuccess)
        {
            iScope.Complete();
            retValue = true;
        }
    }
}
catch (TransactionAbortedException tax)
{
    throw;
}
catch (Exception ex)
{
    throw;
}
Now what happens is that even if my value is TRUE, a TransactionAbortedException occurs randomly, but the data gets inserted/updated in the DB.
Any idea what went wrong?
As the TransactionAbortedException documentation says,
This exception is also thrown when an attempt is made to commit the transaction and the transaction aborts.
This is why you see the exception even after calling Transaction.Complete: the Complete method is not the same thing as Commit:
calling this method [TransactionScope.Complete] does not guarantee a commit of the transaction. It is merely a way of informing the transaction manager of your status
The transaction isn't committed until you exit the using statement: see the CommittableTransaction.Commit documentation for details. At that point any actions participating in the transaction may vote to abort the transaction and you'll get a TransactionAbortedException.
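To make the timing concrete, here is a sketch based on the question's code (InsertProfile and account are the question's; the comments mark where the commit actually happens):

using System;
using System.Transactions;

try
{
    using (var iScope = new TransactionScope())
    {
        var isInsertSuccess = InsertProfile(account); // enlists in the ambient transaction
        if (isInsertSuccess)
        {
            iScope.Complete(); // only marks the scope "consistent"; nothing is committed yet
        }
    } // <-- Dispose() runs here; the real commit happens now and can still abort
}
catch (TransactionAbortedException tax)
{
    // this is where the abort surfaces, e.g. if the transaction lost a deadlock
    Console.WriteLine(tax.InnerException);
}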
To debug the underlying problem you need to analyze the exception details and stack trace. As Mark noted in a comment, it may well be caused by a deadlock or another interaction with other database processes.