(SQL SERVER 2008)
If a Transaction Timeout error occurs within a TransactionScope (on .Complete()), would you expect the transaction to be rolled back?
Update:
The error is actually being thrown at the closing curly brace (i.e. in .Dispose()), not in .Complete(). The full error is:
System.Transactions.TransactionAbortedException: The transaction has aborted. ---> System.TimeoutException: Transaction Timeout
--- End of inner exception stack trace ---
at System.Transactions.TransactionStateAborted.BeginCommit(InternalTransaction tx, Boolean asyncCommit, AsyncCallback asyncCallback, Object asyncState)
at System.Transactions.CommittableTransaction.Commit()
at System.Transactions.TransactionScope.InternalDispose()
at System.Transactions.TransactionScope.Dispose()
As far as I can tell the transaction was not rolled back, and the tables remained locked until I issued a KILL against the SPID/session_id.
I used DBCC OPENTRAN to get the oldest transaction and then KILL it.
I have tried KILL <spid> WITH STATUSONLY but get a message that no status is available because nothing is being rolled back. The status of the SPID/session_id in sys.dm_exec_sessions is 'sleeping'. Code snippet:
try
{
    using (var transaction = new TransactionScope())
    {
        // Lots of work carried out with LINQ entities / SubmitChanges() etc.
        transaction.Complete(); // Transaction timeout occurs here
    }
    return result;
}
catch (Exception ex)
{
    logger.ErrorException(ex.Message, ex);
    result.Fail(ex.Message);
    return result;
}
UPDATE:
The problem is not entirely solved, but here is further information should anyone else hit this.
I am using LINQ to SQL, and within the transaction scope I call context.SubmitChanges(). I am carrying out a lot of inserts; SQL Server Profiler indicates that a separate INSERT statement is issued for each one.
In development, if I sleep the thread for 60 seconds (the default TransactionScope timeout) BEFORE calling SubmitChanges(), then I get a different error when calling TransactionScope.Complete(): 'The operation is not valid for the state of the transaction.'
If I sleep for 60 seconds AFTER .SubmitChanges() and just before .Complete(), then I get
'The transaction has aborted - System.TimeoutException: Transaction Timeout'
NOTE however that on my dev machine no open transactions are found when using DBCC OPENTRAN, which is what you would expect, as you would expect the transaction to roll back.
If I then add configuration that increases the TransactionScope timeout to 2 minutes (see the sketch after this paragraph; the website wouldn't let me insert the snippet in the original post), things start working again (research indicates that if this doesn't work, there may be a lower setting in machine.config taking precedence).
Whilst this stops the transaction aborting, given the nature of the updates it means that locks on a core business table could be held for up to 2 minutes, so other SELECT commands using the default SqlCommand timeout of 30 seconds will time out. Not ideal, but better than an open transaction sitting there and totally holding up the application.
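The config change was presumably along these lines (a sketch reconstructed from the description above, since the original snippet was omitted; it goes in the application's app.config or web.config):
<system.transactions>
  <defaultSettings timeout="00:02:00" />
</system.transactions>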
A few days ago we had a disastrous release in which we ran out of disk space mid-upgrade (!), so we ended up using the shrink database functionality, which apparently can cause performance problems afterwards.
I feel a rebuild of the database and a rethink of some business functionality coming on...
I'm thinking that the TransactionAbortedException is actually a timeout. If so, you should find that the InnerException of the TransactionAbortedException is a TimeoutException.
You should be able to get rid of it by making sure that the timeout of the TransactionScope is longer than the command timeout.
Try changing the transaction scope to something like this:
new TransactionScope(TransactionScopeOption.Required, TimeSpan.FromSeconds(60))
And also set an explicit timeout on your context. Should be something like:
myContext.CommandTimeout = 30; //This is seconds
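Putting the two together, a minimal sketch (assuming System.Transactions is referenced; MyDataContext is a hypothetical LINQ to SQL DataContext subclass):
// Keep the scope timeout comfortably above the command timeout so a slow
// statement surfaces as a SqlException instead of aborting the whole scope.
using (var scope = new TransactionScope(TransactionScopeOption.Required, TimeSpan.FromMinutes(2)))
using (var context = new MyDataContext()) // hypothetical LINQ to SQL context
{
    context.CommandTimeout = 30; // seconds

    // LINQ inserts/updates go here, then:
    context.SubmitChanges();

    scope.Complete();
}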
I resolved this problem by modifying the physical machine.config file.
1. Locate the file:
32-bit: C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config
64-bit: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config
2. Add the following code:
<system.transactions>
<defaultSettings timeout="00:59:00" />
</system.transactions>
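A note worth verifying for your .NET version: an application can usually set the same defaultSettings element in its own app.config or web.config, but the machineSettings maxTimeout in machine.config (10 minutes by default) caps whatever value is set there, which may explain the precedence behaviour mentioned in the question above.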
Related
I am suddenly getting this error in production, in my log file:
System.Web.HttpException (0x80004005): Exception of type 'System.Web.HttpException' was thrown.
System.Web.HttpException (0x80004005): Unable to connect to SQL Server session database.
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I looked at a lot of other answers, like here, but I don't have a connection leak, and I don't want to just set the max pool size to 200 or so, because I want to understand why I suddenly get this exception...
These are my connection strings:
<!--Nhibernate-->
<add name="name"
connectionString="Server= servername;Initial Catalog=DBname;UID=username;Password=password;MultipleActiveResultSets=true"
providerName="System.Data.SqlClient" />
<!--Entity Framework-->
<add name="name"
connectionString= "metadata=res://*/Models.Model1.csdl|res://*/Models.Model1.ssdl|res://*/Models.Model1.msl;provider=System.Data.SqlClient;provider connection string="data source=servername;initial catalog=DBname;user id=userName;password=password;multipleactiveresultsets=True;App=EntityFramework""
providerName="System.Data.EntityClient" />
Update
An example of using the db without connection leaks:
using (var db = new dbName())
{
    using (var connection = db.Database.Connection)
    {
        var command = connection.CreateCommand();
        ...
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            ...
            reader.NextResult();
            ...
            reader.NextResult();
            ...
            reader.NextResult();
            ...
        }
        connection.Close();
    }
}
UPDATE
It turns out that I did indeed have a connection leak in Entity Framework:
a place that didn't use using, where the connections were never closed!
Example:
private DbContext context = new DbContext();
...
List<dbObject> SeriesEvents = context.dbObject.Where(e => e.RecurrenceId == entity.RecurrenceId).ToList();
The context variable is never disposed. One more thing: this query resulted in a lot of DB queries, more than 100.
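A minimal sketch of the fix, reusing the names from the snippet above: wrap the context in a using block so it is disposed.
using (var context = new DbContext())
{
    List<dbObject> seriesEvents = context.dbObject
        .Where(e => e.RecurrenceId == entity.RecurrenceId)
        .ToList();
    // work with seriesEvents while the context is in scope
} // Dispose() releases the underlying connection back to the pool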
Usually this kind of connection pooling issue comes with connection leaks. An exception during a DB operation can mean the connection is never closed properly.
Please add a try/finally block and close the connection in the finally block.
If you use a using block, the try/finally is implicit and the runtime itself will close the connection, so you need not close it explicitly; in that case, remove connection.Close() from your code.
The using statement is actually syntactic sugar for try/finally, where the connection is closed and disposed in the finally block by the runtime.
try
{
connection.Open();
// DB Operations
}
finally
{
connection.Close();
}
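For comparison, a sketch of the equivalent using form, which compiles down to much the same try/finally:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // DB Operations
} // Dispose() closes the connection even if an exception is thrown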
The error message suggests the problem occurs with the session state database but the information you've provided is for the application connections. There is a different pool for session state due to the different connection string.
Common causes of session state issues are 1) the instance hosting the session state database is undersized for the workload, or 2) the session state cleanup job causes long-term blocking.
The cleanup job in earlier .NET versions deleted all expired sessions in a single batch and was notorious for causing long-term blocking on a busy site, especially when session state is used heavily:
CREATE PROCEDURE dbo.DeleteExpiredSessions
AS
DECLARE @now datetime
SET @now = GETUTCDATE()
DELETE [ASPState].dbo.ASPStateTempSessions
WHERE Expires < @now
RETURN 0
Later .NET versions use a cursor for the delete to greatly improve concurrency.
CREATE PROCEDURE dbo.DeleteExpiredSessions
AS
SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW
DECLARE @now datetime
SET @now = GETUTCDATE()
CREATE TABLE #tblExpiredSessions
(
    SessionId nvarchar(88) NOT NULL PRIMARY KEY
)
INSERT #tblExpiredSessions (SessionId)
SELECT SessionId
FROM [ASPState].dbo.ASPStateTempSessions WITH (READUNCOMMITTED)
WHERE Expires < @now
IF @@ROWCOUNT <> 0
BEGIN
    DECLARE ExpiredSessionCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
    FOR SELECT SessionId FROM #tblExpiredSessions
    DECLARE @SessionId nvarchar(88)
    OPEN ExpiredSessionCursor
    FETCH NEXT FROM ExpiredSessionCursor INTO @SessionId
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DELETE FROM [ASPState].dbo.ASPStateTempSessions WHERE SessionId = @SessionId AND Expires < @now
        FETCH NEXT FROM ExpiredSessionCursor INTO @SessionId
    END
    CLOSE ExpiredSessionCursor
    DEALLOCATE ExpiredSessionCursor
END
DROP TABLE #tblExpiredSessions
RETURN 0
If the problem is due to an undersized instance and scale up isn't feasible, consider scaling out using session state partitioning.
There is a weird problem with a deployed Windows application that uses a remote connection to SQL Server 2012.
When inserting records, SQL Server times out after a relatively short time with "The wait operation timed out". I'm not able to debug the deployed application to find out why it is happening or where in the code it occurs.
However, I don't get this error when using the same database on the development machine with a local connection.
Generally the code used is:
void MapData( SqlTransaction transaction, Dictionary<int, IDataObject> items )
{
foreach ( var i in items )
{
transaction.Save( "CHECKPOINT" );
try
{
ImportItem( transaction, i );
}
catch ( Exception e )
{
transaction.Rollback( "CHECKPOINT" );
}
}
ReportStatus();
}
While this code has been working, I am uncertain about remote connections. We have only this single case where it does NOT work.
What can it be?
Is there a more solid or performant approach than using Save() and Rollback() in a loop?
I don't want to use TransactionScope to spawn new "child" transactions.
Thanks!
Your transaction is taking too long (not sure whether it's the commit or the rollback). To understand why, you'd have to run a trace and gather performance metrics.
But to get it working you could increase your timeouts. Set SqlCommand.CommandTimeout to a larger value, or 0 for no timeout. Also, the connection timeout is used for the transaction timeout, which is usually an issue only on expensive rollbacks; you specify it in the connection string, e.g. Connection Timeout=30.
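As a hedged sketch of what that might look like inside the ImportItem call from the question (the table, column and value are illustrative):
// Enlist the command in the passed-in transaction and give it a generous timeout.
using (var cmd = new SqlCommand("INSERT INTO Items (Id) VALUES (@id)",
                                transaction.Connection, transaction))
{
    cmd.CommandTimeout = 0; // 0 = no timeout; or a suitably large number of seconds
    cmd.Parameters.AddWithValue("@id", 42); // illustrative value
    cmd.ExecuteNonQuery();
}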
I'm running into an issue where changes are being rolled back even though none of the queries throws an exception. It's strange, since the code works in one environment but isn't committing changes in another.
Here is the function that handles the transaction. When I put a breakpoint on the commit, I hit the commit and can see the changes in the database, but when the transaction is disposed the changes are rolled back.
UPDATE: Additional tests show that it isn't a problem with the transaction. If the transaction is completely removed from the code below, the app behaves the same way: the changes are undone when the connection closes.
public bool Transaction(List<string> sqlStatements)
{
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlTransaction tran = conn.BeginTransaction())
{
try
{
foreach (string query in sqlStatements)
{
SqlCommand cmd = new SqlCommand(query, conn, tran);
cmd.CommandTimeout = 300;
cmd.ExecuteNonQuery();
}
tran.Commit();
return true;
}
catch (SqlException sqlError)
{
tran.Rollback();
//Log Exception
return false;
}
}
}
}
Though I was sure about it, I tried your code at my end and it worked as expected. I repeat that the method is good enough for transaction handling, and once the transaction is committed, it can't be rolled back.
In the above method, transaction disposal has nothing to do with any rollback. I think you have been debugging in the wrong direction. You may paste the original method here, though, as you might be doing some other database operations.
Just out of curiosity, what kind of queries have you been firing? Do note that DDL commands are auto-committed, so the transaction won't be effective for them.
When you say you "can see the changes in the database", how are you determining this? I would expect them to be "in the database" if the following query returns the data that was "committed" (run this T-SQL after stepping over the Commit call, in SQL Server Management Studio for example):
-- Force the isolation level to "read committed" so we
-- guarantee we are getting data that has definitely been committed.
-- If the data changes back, it must have been from a separate operation.
set transaction isolation level read committed
begin tran
select * from MyTableWithExpectedChanges;
-- You aren't changing anything so this can be rollback or commit
rollback tran
If the data did indeed commit, I would run a SQL Server Profiler session and see what is causing the data to revert. It sounds like something separate is triggering and restoring the data in that scenario.
If the data didn't commit, you have some sort of transaction count mismatch, as per other comments.
This issue was eventually tracked back to a trigger that had recently been updated to include a transaction.
We solved the issue by removing the transaction from the trigger.
Given:
A BenchMark class that lets me know when something has completed.
A very large XML file (~120MB) that has been parsed into multiple Lists
Some code:
SqlConnection con = null;
SqlTransaction transaction = null;
try
{
    con = getCon(); // gets a new connection object
    con.Open();
    transaction = con.BeginTransaction();

    var bulkCopy = new SqlBulkCopy(con, SqlBulkCopyOptions.Default, transaction)
    {
        BatchSize = 1000,
        DestinationTableName = "Table1"
    };

    // assume that the BenchMark class is working
    var b = new BenchMark("Table1");
    bulkCopy.WriteToServer(_insertTable1s.AsDataReader()); // _insertTable1s is a List<Table1>
    b.Complete();
    LogHelper.WriteLogItem(b);

    b = new BenchMark("Table2");
    bulkCopy.DestinationTableName = "Table2";
    bulkCopy.WriteToServer(_insertTable2s.AsDataReader()); // _insertTable2s is a List<Table2>
    b.Complete();
    LogHelper.WriteLogItem(b);

    // etc... this code does a batch insert into about 7 tables, each having about 40,000 records inserted.

    b = new BenchMark("Transaction Commit");
    transaction.Commit();
    b.Complete();
}
catch (Exception e)
{
    if (transaction != null)
        transaction.Rollback();
    LogHelper.WriteLogItem(
        LogLevel.Critical,
        LogType.DataProcessing,
        e.ToString());
}
finally
{
    if (con != null)
        con.Close();
}
The Problem:
On my local development environment, everything is fine. It's when I run this operation in the cloud that it hangs. Using the LogHelper.WriteLogItem method, I can watch the progress of this process, and I observe it hang randomly on a particular table. No exception is thrown, so the transaction isn't rolled back. Say it hangs on the Table2 bulk insert: using SQL Server Management Studio, I can run queries on Table1, Table2 and Table3 with no issue (does this mean that the transaction was aborted?).
Since it hangs, I go and rerun the process. This time it hangs sooner, so I might get logs like this:
7755 Benchmark LoadXML took 00:00:04.2432816
7756 Benchmark Table1 took 00:00:06.3961230
7757 Benchmark Table2 took 00:00:05.2566890
7758 Benchmark Table3 took 00:00:08.4900921
7759 Benchmark Table4 took 00:00:02.0000123
... it hangs on Table5 (because the BenchMark never completed). I go to run it again and the rest of the log looks like:
7780 Benchmark LoadXML took 00:00:04.1203923
... and it hangs here now.
I'm using Rackspace cloud hosting, if that helps. I have been able to fix this in the past by deleting all the tables from my dbml file and re-adding them, but this time it's not working. I'm wondering if the amount of data being processed is causing the problem?
EDIT: The code in this example runs on an asynchronous thread. I've found that the thread is aborting for an unknown reason, and I need to find out why in order to solve this problem.
If you have admin rights on your server or database, you can run
SELECT * FROM sys.dm_tran_session_transactions
to see which transactions are currently active (from Pinal).
Additionally, you can run sp_lock to make sure there isn't something blocking your transaction.
Because this process is done asynchronously (i.e. a thread is kicked off to handle it), the thread hits a problem which aborts it, and that is why I get the strange behavior where the code stalls in different places. I've solved this by running the task synchronously; it works, but it's not ideal.
I guess the real issue is why my thread is aborting, since I'm not aborting it anywhere in my code. I believe it's due to the amount of data being processed, but I could be wrong.
Either way, I've solved my problem.
I have a problem with timeouts: when I run a command through the app, a timeout exception is thrown, but when I run it directly in SQL there is no timeout exception!
My stored procedure takes about 11 minutes when I run it directly.
To solve this issue, I found the code below here, but it doesn't work properly:
immediately after BeginExecuteNonQuery, IAsyncResult.IsCompleted becomes true!
Where is the problem?
IAsyncResult result = command.BeginExecuteNonQuery();
int count = 0;
while (!result.IsCompleted)
{
Console.WriteLine("Waiting ({0})", count++);
System.Threading.Thread.Sleep(1000);
}
Console.WriteLine("Command complete. Affected {0} rows.",
command.EndExecuteNonQuery(result));
Increase the command timeout instead (SqlCommand.CommandTimeout), which by default is 30 seconds.
A connection string defaults to a 15-second timeout. See MSDN.
You can change the timeout in the connection string to last longer (Connection Timeout=600 for a 10-minute timeout).
See this site for more about connection strings.
Having said that, you should look at optimizing your database and/or stored procedure. Eleven minutes for a stored procedure is very, very long. Do you have the correct indexes on your tables? Is your stored procedure written in the most optimal way?
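A sketch of what that might look like for an 11-minute procedure (the proc name and connection string values are illustrative; assumes System.Data and System.Data.SqlClient):
using (var conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true;Connection Timeout=600"))
using (var cmd = new SqlCommand("dbo.MyLongProcedure", conn)) // hypothetical proc name
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 900; // 15 minutes, comfortably above the observed 11
    conn.Open();
    cmd.ExecuteNonQuery(); // blocks until the procedure finishes or times out
}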
Update:
Have you made sure you are using the correct command and that the results are correct? IsCompleted being true almost immediately suggests that the command has indeed finished.