I am experiencing a "Transaction timeout exceeded" issue in the code snippet below
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted, Timeout = new TimeSpan(1, 0, 0) }))
{
try
{
segregationAssignment = new SegregationAssignment(dbContext).Assign(rmaU);
dbContext.SaveChanges();
scope.Complete();
scope.Dispose();
}
catch (DbUpdateException eb)
{
scope.Dispose();
return RedirectToAction("Details", details).WithErrorMessage(string.Format(Validations.not_possible_to_operation,
details.IsConfirmSegregate ? Buttons.segregate.ToLower() : Buttons.refuse.ToLower(), Models.rma_u));
}
catch (Exception ex)
{
scope.Dispose();
return RedirectToAction("Details", details).WithErrorMessage(string.Format(Validations.not_possible_to_operation,
details.IsConfirmSegregate ? Buttons.segregate.ToLower() : Buttons.refuse.ToLower(), Models.rma_u));
}
}
Inside the Assign method we only add some objects to the context, nothing more. I do not understand why it times out in a process that takes at most 10 minutes.
If you can help me, I'll be grateful. We are using Entity Framework 5.
What does the Rollback method in EF Core actually do? If I never call Commit, I shouldn't need it anyway, and if I did call Commit, the transaction has already been completed.
using (var context = new AppDbContext())
{
using (var transaction = context.Database.BeginTransaction())
{
try
{
var myObjectOne = new MyObjectOne() { Name = "Book" };
context.MyObjectOnes.Add(myObjectOne);
context.SaveChanges();
var myVal = myObjectOne.Id * 3.14;
var myObjectTwo = new MyObjectTwo() { Name = "Notebook", Price = 100, ReferenceId = myVal };
context.MyObjectTwos.Add(myObjectTwo);
context.SaveChanges();
transaction.Commit();
}
catch (Exception ex)
{
transaction.Rollback();
}
}
}
What does the RollBack Method do? C# EF Core.
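For context, here is a minimal sketch of the same pattern without the explicit Rollback call, relying on the documented EF Core behavior that disposing a transaction that was never committed rolls it back. The class names are the ones from the snippet above:
using (var context = new AppDbContext())
using (var transaction = context.Database.BeginTransaction())
{
    try
    {
        context.MyObjectOnes.Add(new MyObjectOne() { Name = "Book" });
        context.SaveChanges();
        transaction.Commit();
    }
    catch (Exception)
    {
        // No explicit Rollback here: disposing the uncommitted transaction at the
        // end of the using block rolls it back automatically. Calling Rollback
        // yourself mainly makes the intent explicit and lets you roll back (and
        // handle rollback errors) before the using block ends.
        throw;
    }
}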
I have some tasks (nWorkers = 3):
var taskFactory = new TaskFactory(cancellationTokenSource.Token,
TaskCreationOptions.LongRunning, TaskContinuationOptions.LongRunning,
TaskScheduler.Default);
for (int i = 0; i < nWorkers; i++)
{
var task = taskFactory.StartNew(() => this.WorkerMethod(parserItems,
cancellationTokenSource));
tasks[i] = task;
}
And the following method called by the tasks:
protected override void WorkerMethod(BlockingCollection<ParserItem> parserItems,
CancellationTokenSource cancellationTokenSource)
{
//...log-1...
using (var connection = new OracleConnection(connectionString))
{
OracleTransaction transaction = null;
try
{
cancellationTokenSource.Token.ThrowIfCancellationRequested();
connection.Open();
//...log-2...
transaction = connection.BeginTransaction();
//...log-3...
using (var cmd = connection.CreateCommand())
{
foreach (var parserItem in parserItems.GetConsumingEnumerable(
cancellationTokenSource.Token))
{
cancellationTokenSource.Token.ThrowIfCancellationRequested();
try
{
foreach (var statement in this.ProcessRecord(parserItem))
{
cmd.CommandText = statement;
try
{
cmd.ExecuteNonQuery();
}
catch (OracleException ex)
{
//...log-4...
if (!this.acceptedErrorCodes.Contains(ex.Number))
{
throw;
}
}
}
}
catch (FormatException ex)
{
log.Warn(ex.Message);
}
}
if (!cancellationTokenSource.Token.IsCancellationRequested)
{
transaction.Commit();
}
else
{
throw new Exception("DBComponent has been canceled");
}
}
}
catch (Exception ex)
{
//...log-5...
cancellationTokenSource.Cancel();
if (transaction != null)
{
try
{
transaction.Rollback();
//...log-6...
}
catch (Exception rollbackException)
{
//...log-7...
}
}
throw;
}
finally
{
if (transaction != null)
{
transaction.Dispose();
}
connection.Close();
//...log-8...
}
}
//...log-9...
}
There is a producer of ParserItem objects and these tasks are the consumers. Normally it works fine; sometimes there is an Oracle connection timeout, but in those cases I can see the exception message and everything works as designed.
But sometimes the process gets stuck. When it does, in the log file I can see the log-1 message and, roughly 15 seconds later, the log-8 message, but what is driving me nuts is that I can see neither the log-5 exception message nor the log-9 message.
Since the cancellationTokenSource.Cancel() method is never called, the producer of items for the bounded collection is stuck until a timeout two hours later.
It is compiled for .NET Framework 4 and I'm using the Oracle.ManagedDataAccess library for the Oracle connection.
Any help would be greatly appreciated.
First, you should not dispose a transaction or connection explicitly when it is already wrapped in a using block. Second, you should rarely rely on an exception-based programming style. Your code is rewritten below:
using (var connection = new OracleConnection(connectionString))
{
// The connection must be open before BeginTransaction is called.
connection.Open();
using (var transaction = connection.BeginTransaction())
{
//...log-2...
using (var cmd = connection.CreateCommand())
{
foreach (var parserItem in parserItems.GetConsumingEnumerable(cancellationTokenSource.Token))
{
if (!cancellationTokenSource.IsCancellationRequested)
{
try
{
foreach (var statement in ProcessRecord(parserItem))
{
cmd.CommandText = statement;
try
{
cmd.ExecuteNonQuery();
}
catch (OracleException ex)
{
//...log-4...
if (!acceptedErrorCodes.Contains(ex.Number))
{
log.Warn(ex.Message);
}
}
}
}
catch (FormatException ex)
{
log.Warn(ex.Message);
}
}
}
if (!cancellationTokenSource.IsCancellationRequested)
{
transaction.Commit();
}
else
{
transaction.Rollback();
throw new Exception("DBComponent has been canceled");
}
}
}
}
//...log-9...
Let me know if this helps.
I can confirm everything you're saying (program stuck, low CPU usage, Oracle connection timeouts, etc.).
One workaround is to use Threads instead of Tasks.
UPDATE: after careful investigation I found out that when you use a high number of Tasks, the ThreadPool worker threads queued by the Oracle driver become slow to start, which ends up causing a (fake) connect timeout.
A couple of solutions for this:
Solution 1: Increase the ThreadPool's minimum number of threads, e.g.:
ThreadPool.SetMinThreads(50, 50); // YMMV
OR
Solution 2: Configure your connection to use pooling and set its minimum size appropriately.
var ocsb = new OracleConnectionStringBuilder();
ocsb.DataSource = "MyTnsAlias"; // placeholder: use your actual TNS alias or connect descriptor
ocsb.UserID = "myuser";
ocsb.Password = "secret";
ocsb.Pooling = true;
ocsb.MinPoolSize = 20; // YMMV
IMPORTANT: before calling any routine that creates a high number of tasks, open (and close) a single connection to "warm up" the pool:
using(var oc = new OracleConnection(ocsb.ToString()))
{
oc.Open();
oc.Close();
}
Note: Oracle indexes its connection pools by the connection string (with the password removed), so if you want additional connections to come from the warmed-up pool you must always use the exact same connection string.
I have code that adds data to two Entity Framework 6 contexts, like this:
using(var scope = new TransactionScope())
{
using(var requestsCtx = new RequestsContext())
{
using(var logsCtx = new LogsContext())
{
var req = new Request { Id = 1, Value = 2 };
requestsCtx.Requests.Add(req);
var log = new LogEntry { RequestId = 1, State = "OK" };
logsCtx.Logs.Add(log);
try
{
requestsCtx.SaveChanges();
}
catch(Exception ex)
{
log.State = "Error: " + ex.Message;
}
logsCtx.SaveChanges();
}
}
}
There is an insert trigger on the Requests table that rejects some values using RAISERROR. This situation is normal and should be handled by the try-catch block where the SaveChanges method is invoked. If the second SaveChanges call fails, however, the changes to both contexts must be reverted entirely, hence the transaction scope.
Here is the error: when requestsCtx.SaveChanges() throws an exception, the whole Transaction.Current has its state set to Aborted, and the later logsCtx.SaveChanges() fails with the following:
TransactionException:
The operation is not valid for the state of the transaction.
Why is this happening, and how do I tell EF that the first exception is not critical?
Really not sure if this will work, but it might be worth trying.
private void SaveChanges()
{
using(var scope = new TransactionScope())
{
var log = CreateRequest();
bool saveLogSuccess = CreateLogEntry(log);
if (saveLogSuccess)
{
scope.Complete();
}
}
}
private LogEntry CreateRequest()
{
var req = new Request { Id = 1, Value = 2 };
var log = new LogEntry { RequestId = 1, State = "OK" };
using(var requestsCtx = new RequestsContext())
{
requestsCtx.Requests.Add(req);
try
{
requestsCtx.SaveChanges();
}
catch(Exception ex)
{
log.State = "Error: " + ex.Message;
}
}
return log; // log.State now reflects any error from SaveChanges
}
private bool CreateLogEntry(LogEntry log)
{
using(var logsCtx = new LogsContext())
{
try
{
logsCtx.Logs.Add(log);
logsCtx.SaveChanges();
}
catch (Exception)
{
return false;
}
return true;
}
}
From the documentation on TransactionScope: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope%28v=vs.110%29.aspx
If no exception occurs within the transaction scope (that is, between
the initialization of the TransactionScope object and the calling of
its Dispose method), then the transaction in which the scope
participates is allowed to proceed. If an exception does occur within
the transaction scope, the transaction in which it participates will
be rolled back.
Basically, as soon as an exception is encountered, the transaction is rolled back (as it seems you're aware). I think this might work but am really not sure and can't test to confirm. It seems like this goes against the intended use of transaction scope, and I'm not familiar enough with exception handling/bubbling, but maybe it will help! :)
I think I finally figured it out. The trick was to use an isolated transaction for the first SaveChanges:
using(var requestsCtx = new RequestsContext())
using(var logsCtx = new LogsContext())
{
var req = new Request { Id = 1, Value = 2 };
requestsCtx.Requests.Add(req);
var log = new LogEntry { RequestId = 1, State = "OK" };
logsCtx.Logs.Add(log);
using(var outerScope = new TransactionScope())
{
using(var innerScope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
try
{
requestsCtx.SaveChanges();
innerScope.Complete();
}
catch(Exception ex)
{
log.State = "Error: " + ex.Message;
}
}
logsCtx.SaveChanges();
outerScope.Complete();
}
}
Warning: most articles about the RequiresNew mode discourage using it for performance reasons. It works perfectly for my scenario; however, if there are any side effects that I'm unaware of, please let me know.
I'm really not experienced in this subject, and I can't quite figure out where to start.
I'm trying to read data from a list on SharePoint 2013 (Office 365 Preview) into a WinRT app. I added a Service Reference to mysite.sharepoint.com/_vti_bin/listdata.svc and it was added correctly. From there I built this wrapper for getting a list asynchronously:
private Task<IEnumerable<MyListItems>> GetMyListAsync()
{
var tcs = new TaskCompletionSource<IEnumerable<MyListItems>>();
var sharepointContext =
new WelcomescreentestTeamSiteDataContext(
new Uri("https://mysite.sharepoint.com/_vti_bin/listdata.svc"))
{
Credentials = new NetworkCredential("user.name", "pass.word", "mysite.onmicrosoft.com")
};
try
{
sharepointContext.MyList.BeginExecute(asyncResult =>
{
try
{
var result = sharepointContext.MyList.EndExecute(asyncResult);
tcs.TrySetResult(result);
}
catch (OperationCanceledException ex)
{
tcs.TrySetCanceled();
}
catch (Exception ex)
{
if (!tcs.TrySetException(ex))
{
throw;
}
}
}, new object());
}
catch (Exception ex)
{
tcs.TrySetException(ex);
tcs.SetCanceled();
}
return tcs.Task;
}
I've changed the username / domain around quite a bit, but nothing seems to work.
What's the right approach here?
I've built in a SAML-based security approach which works, but I'm still wondering why this isn't working.
I have code like this:
class Importer
{
private DatabaseContext m_context;
public Importer()
{
m_context = new DatabaseContext();
m_context.CommandTimeout = 5400; //This is seconds
}
public bool Import (ref String p_outErrorMsg)
{
List<SomeData> dataToImport = new List<SomeData>();
getSomeData(ref dataToImport);
bool result = false;
try
{
using(TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(2, 0, 0)))
{ //Two hours timeout
result = importDatas(dataToImport);
if (result == true)
{
scope.Complete();
}
}
}
catch (TransactionAbortedException ex)
{
p_outErrorMsg = String.Format("TransactionAbortedException Message: {0}", ex.Message);
}
catch (ApplicationException ex)
{
p_outErrorMsg = String.Format("ApplicationException Message: {0}", ex.Message);
}
return result;
}
bool importDatas(List<SomeData> p_DataToImport)
{
foreach (SomeData data in p_DataToImport)
{ // There can be something like 3000 iterations
if (!importSimpleData(data))
{
return false;
}
}
return true;
}
bool importSimpleData(SomeData p_Data)
{
// creation of some object o1
try
{
m_context.objetc1s.InsertOnSubmit(o1);
m_context.SubmitChanges();
}
catch (Exception e)
{
// Error handling
return false;
}
// creation of some object o2
o2.id_o1 = o1.id_o1;
try
{
m_context.objetc2s.InsertOnSubmit(o2);
m_context.SubmitChanges();
}
catch (Exception e)
{
// Error handling
return false;
}
// creation of some object o3
o3.id_o2 = o2.id_o2;
try
{
m_context.objetc3s.InsertOnSubmit(o3);
m_context.SubmitChanges();
}
catch (Exception e)
{
// Error handling
return false;
}
// creation of some object o4
o4.id_o1 = o1.id_o1;
try
{
m_context.objetc4s.InsertOnSubmit(o4);
m_context.SubmitChanges();
}
catch (Exception e)
{
// Error handling
return false;
}
return true;
}
}
If the list has around 500 records, everything is written fine.
But when the list is close to 1000 records, I always get this exception:
TransactionAbortedException.Message = "the transaction has aborted".
At first I thought the timeout was too small, so I added these two lines to the code:
...
m_context.CommandTimeout = 5400; //This is seconds (1.5 hour)
...
using(TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(2, 0, 0))) { //Two hours timeout
...
As you can see in the code presented above.
The same exception still occurs. Did I miss something?
What am I doing wrong?
I should add that the database is remote (not local).
Thanks in advance for the help!
I'd have to dig up the documentation again, but the 2-hour transaction timeout you set may not actually be taking effect. There is a cap on how long a transaction timeout can be, which comes down through machine.config, and if you specify more than that cap it is quietly ignored.
I ran into this a long time ago, and found a reflection-based way, described by Matt Honeycutt, to tweak that setting and make sure you're really getting the timeout you specify.
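For illustration, here is a minimal sketch of that kind of override. The private field names ("_cachedMaxTimeout" and "_maximumTimeout" on System.Transactions.TransactionManager) are an assumption on my part for the .NET Framework; this is a reflection hack and may break on other runtimes:
using System;
using System.Reflection;
using System.Transactions;

static class TransactionManagerHelper
{
    // Overrides the maxTimeout cap that machine.config imposes on TransactionScope.
    // Assumes the private static fields "_cachedMaxTimeout" and "_maximumTimeout"
    // exist on TransactionManager (the case in the .NET Framework; not guaranteed elsewhere).
    public static void OverrideMaximumTimeout(TimeSpan timeout)
    {
        var type = typeof(TransactionManager);
        var cachedMaxTimeout = type.GetField("_cachedMaxTimeout",
            BindingFlags.NonPublic | BindingFlags.Static);
        var maximumTimeout = type.GetField("_maximumTimeout",
            BindingFlags.NonPublic | BindingFlags.Static);
        cachedMaxTimeout.SetValue(null, true);
        maximumTimeout.SetValue(null, timeout);
    }
}
If the field names are right, calling TransactionManagerHelper.OverrideMaximumTimeout(new TimeSpan(2, 0, 0)) before creating the TransactionScope lets the 2-hour value take effect; otherwise the effective timeout is the smaller of your value and machine.config's maxTimeout (10 minutes by default).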
It seems that importSimpleData fails on some row, so importDatas returns false. In that case you never call scope.Complete(), and that is why the transaction rolls back.
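To make that flow visible, here is a small self-contained sketch (hypothetical names, using System.Transactions) of the same pattern: when any record fails, Complete() is skipped, and disposing the scope rolls the transaction back, which later surfaces as TransactionAbortedException:
using System;
using System.Collections.Generic;
using System.Transactions;

class ImportSketch
{
    public bool ImportAll(IEnumerable<int> records)
    {
        using (var scope = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(2, 0, 0)))
        {
            foreach (var record in records)
            {
                if (!ImportOne(record))
                {
                    return false; // scope.Complete() is skipped, so Dispose rolls the transaction back
                }
            }
            scope.Complete(); // only reached when every record imported successfully
            return true;
        }
    }

    private bool ImportOne(int record)
    {
        // Hypothetical per-record work; returns false when a record fails.
        return record >= 0;
    }
}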