Greetings
I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:
using (var tran = MyDataLayer.Transaction())
{
    MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
    MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
    tran.Commit();
}
I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() creates a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly.
However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" with no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way records should exist in the tables I'm looking at is via SprocTheFirst, and it's only ever called in this one function, so if it was called and succeeded I would expect CallSomethingThatEventuallyDoesLinqToSql to also be called and succeed, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created.
So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
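For reference, MyDataLayer.Transaction() presumably builds the scope along these lines (a hypothetical reconstruction; the real helper isn't shown here):

using System;
using System.Transactions;

// Hypothetical reconstruction of the data layer helper described above.
public sealed class DataLayerTransaction : IDisposable
{
    private readonly TransactionScope _scope;

    public DataLayerTransaction()
    {
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };
        // Required: join an ambient transaction if one exists, otherwise start a new one.
        _scope = new TransactionScope(TransactionScopeOption.Required, options);
    }

    // tran.Commit() in the snippet above presumably maps to TransactionScope.Complete().
    public void Commit() => _scope.Complete();

    public void Dispose() => _scope.Dispose();
}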
We have spotted the same issue. I have recreated it here - https://github.com/DavidBetteridge/MSMQStressTest
For us the issue shows up when reading from the queue rather than writing to it. Our solution was to change the isolation level of the first read in the subscriber to Serializable.
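Roughly, the subscriber's first read ended up looking like this (a sketch; the queue path and MSMQ details are placeholders):

using System.Messaging;
using System.Transactions;

var queue = new MessageQueue(@".\private$\myQueue"); // placeholder queue path

var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Automatic: enlist the receive in the ambient TransactionScope.
    var message = queue.Receive(MessageQueueTransactionType.Automatic);
    // ... process the message ...
    scope.Complete();
}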
No, but Snapshot isolation level isn't the same as Serializable.
Snapshotted row versions are stored in tempdb until the transaction commits,
so some other transaction can read the old data just fine.
At least that's how I understood your problem. If not, please provide more info, like a graph of the timeline or something similar.
Can you verify that CallSomethingThatEventuallyDoesLinqToSql is using the same connection as the first call? Does the second call read data that the first filed into the db... and if it were unable to "see" that data, would that cause the second to skip a few steps and not do its job?
Just because you have it wrapped in a .NET transaction doesn't mean the data as seen in the db is the same between connections. You could, for instance, have connections to two different databases and want to roll back both if one failed, or file data to a DB and post a message to MSMQ... if the MSMQ operation failed, it would roll back the DB operation too. The .NET transaction takes care of this multi-technology coordination for you.
I do remember a problem in early versions of ADO.NET (maybe .NET 3.0) where the pooled connection code would allocate a new db connection rather than use the current one when a .NET-level TransactionScope was used. I believe it was fully implemented by 3.5 (I may have my versions wrong... it might be 3.5 and 3.5 SP1). It could also be caused by MyDataLayer and how it allocates connections.
Use SQL Profiler to trace these operations and make sure the work is being done on the same SPID.
It sounds like your connection may not be enlisted in the transaction. When do you create your connection object? If it is created before the TransactionScope, it will not be enlisted in the transaction.
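In other words, the order matters. A minimal sketch with a raw SqlConnection (the connection string is a placeholder):

using System.Data.SqlClient;
using System.Transactions;

// Enlisted: the connection is opened *inside* the ambient TransactionScope.
using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString)) // placeholder connection string
{
    conn.Open(); // auto-enlists in the ambient transaction
    // ... commands on conn are now transactional ...
    scope.Complete();
}

// Not enlisted: a connection created and opened *before* the scope exists
// does not automatically join a TransactionScope created afterwards.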
Related
I have a use case where I am processing multiple configurations within a function. Each configuration is processed within its own transaction, and that transaction gets committed if everything is fine. Now, if anything goes wrong while processing a later configuration, I want to revert all of the already-committed transactions. Can anyone please help me with a code snippet? My application is on .NET.
To the best of my knowledge, NH doesn't support nested transactions.
You can use a transaction at the root of your use case, or at any point along the way, but it's all or nothing, AFAIK.
It's not a matter of using nested transactions. It's a matter of ensuring that you have a transaction that surrounds all the relevant code - so it should be opened/closed "higher up". Each individual section should then either not care about transactions at all, or it should "piggy-back" on any existing transaction and only open a new transaction when one does not already exist.
As a guideline, transaction management is an overall concern that should be handled in different sorts of wrapper methods and applied as needed by the application - not hidden away in specific low-level support routines.
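With TransactionScope, that piggy-backing is exactly what TransactionScopeOption.Required gives you. A minimal sketch (configurations and ProcessConfiguration are hypothetical stand-ins for the sections described above):

using System.Transactions;

// Opened "higher up": a single transaction surrounds all the sections.
using (var scope = new TransactionScope(TransactionScopeOption.Required))
{
    foreach (var config in configurations) // hypothetical collection
    {
        // Each section uses Required internally, so it piggy-backs on this
        // ambient transaction instead of opening its own.
        ProcessConfiguration(config); // hypothetical per-configuration routine
    }
    scope.Complete(); // only reached if every section succeeded
}
// If any section throws, Complete() is never called and disposing
// the scope rolls back all the work together.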
I am attempting to update the RavenDB storage for Hangfire to RavenDB 4, and I sometimes receive the following exception:
Raven.Client.Exceptions.RavenException: 'System.InvalidOperationException: A write transaction is already opened by this thread
I checked for unclosed sessions, but all sessions except one use using, and that last one is special because it is part of a class that acts like a transaction builder and is disposed on commit. I was unable to find what operations might run longer in the background or what else could cause it.
I'd appreciate a little help with narrowing down the cause, because I have absolutely no idea and the documentation didn't help much.
After upgrading to a nightly version of RavenDB 4 instead of RavenDB 4.0.0-rc-40025 (after Ayende Rahien suggested it could be a server issue), I never got this exception again. I scheduled thousands of jobs before posting this as an answer, to make sure it really was a server-side issue.
Before the upgrade I got the exception pretty much every time I scheduled many jobs.
In our dev team we have an interesting discussion regarding opening transactions during reads in Entity Framework.
The case is this: we have a unit of work in an MVC app which spans action methods - we simply open an EF transaction before executing the action and commit it if no error appears during execution. This is fine, and maybe some of you use a UoW pattern with EF in that way.
The interesting part is what to do about actions that perform only reads (no modification of any entity, for example a get by id). Should a transaction be opened for reads as well? What would the difference be if we don't open a transaction and, during our read, there is an active transaction on the same table we are reading from? Suppose that we have set the default transaction isolation level to read committed.
I was pro opening a transaction, which keeps reads consistent, but there are arguments against it, such as transactions slowing down reads (which is true, but I don't know by how much).
What are your thoughts? I know that some of you will answer like old architects saying "it depends", but I need strong arguments, not hate :)
For SQL Server at READ COMMITTED isolation there is no difference between a SELECT inside a transaction and one outside a transaction.
With legacy READ COMMITTED the S locks are released at the end of each query even in a transaction.
With READ COMMITTED SNAPSHOT (which is the default for EF Code First) there are no S locks taken, and row versions provide only a statement-level point-in-time view of the database.
At SNAPSHOT isolation, the whole transaction would see the database at a single point-in-time, still with no locking.
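So if a read-only action really does need a consistent view across multiple queries, one option is to opt into SNAPSHOT explicitly. A sketch around EF reads (the context and entity names are placeholders, and ALLOW_SNAPSHOT_ISOLATION must be enabled on the database):

using System.Linq;
using System.Transactions;

var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var db = new MyDbContext()) // placeholder EF context
{
    // Both queries see the database as of the same point in time, with no S locks.
    var order = db.Orders.Find(orderId);
    var lines = db.OrderLines.Where(l => l.OrderId == orderId).ToList();
    scope.Complete();
}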
I am stress testing my website. It uses Entity Framework 6.
I have 10 threads. This is what they are doing:
1. Fetch some data from the web.
2. Create a new database context.
3. Create/update records in the database, using Database.SqlQuery(sql).ToList() to read and Database.ExecuteSqlCommand(sql) to write (about 200 records/second).
4. Close the context.
It crashes within 2 minutes with a database deadlock exception (consistently on a read!).
I have tried wrapping steps 2-4 in a Transaction, but this did not help.
I have read that as of EF6, ExecuteSqlCommand is wrapped in a transaction by default (https://msdn.microsoft.com/en-us/data/dn456843.aspx). How do I turn this behavior off?
I don't even understand why my transactions are deadlocked; they are reading/writing independent rows.
Is there a database setting I can flip somewhere to increase the size of my pending transaction queue?
I doubt EF has anything to do with it. Even though you are reading/writing independent rows, locks can escalate and lock pages. If you are not careful with your database design, and with how you perform the reads and writes (order is important), you can deadlock with EF or any other access technique.
What transaction type is being used?
.NET's TransactionScope defaults to SERIALIZABLE, at least in my applications, which admittedly do not use EF. SERIALIZABLE transactions deadlock much more easily in my experience than other types such as ReadCommitted.
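If that's what is happening here, requesting a less aggressive isolation level explicitly may help (a sketch; whether it resolves the deadlocks depends on your actual access pattern):

using System.Transactions;

// Ask for READ COMMITTED explicitly instead of the Serializable default.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... steps 2-4: create the context, read via SqlQuery, write via ExecuteSqlCommand ...
    scope.Complete();
}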
My DBA says that there are way too many connections open, and he thinks it is my code in .NET that is leaving them open.
I am using LINQ queries and EF Code First.
Example Method:
public List<Stuff> GetStuff()
{
    var db = new DBContext();
    var results = db.Stuff.ToList();
    return results;
}
Do I need to dispose of the db variable once I am done? My understanding was that I didn't need to with EF and LINQ. Please point me to Microsoft documentation about managing connections in code, or best practices for LINQ/EF and db connections.
Update:
I added
db.Connection.Close();
db.Dispose();
and I still see the open connection in SQL after the two lines were executed. Is there a reason why it wouldn't close when I force it to close?
You should listen to your DBA! Yes, use a using. Do not leave connections open unnecessarily. You should connect, do your business with the db, and close that connection, freeing it up for another process. This is especially true in high-volume systems.
Edit: let me further explain with my own experiences here. In low-volume processing it probably isn't an issue, but it's a bad habit not to dispose of something explicitly, or not to wrap it in a using, when it clearly implements IDisposable.
In high-volume situations, this is just asking for disaster. SQL Server will allot only so many connections per application (this can be specified in the connection string). What happens is that processes spend time waiting for connections to free up if they're not promptly closed. This generally leads to timeouts, or deadlocks in some situations.
Sure, you can tweak SQL Server connection management and such, but every time you tweak a setting you're making a compromise. You must consider backups running, other jobs running, etc. This is why a wise developer will listen to their DBA's warnings. It's not always all about the code...
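Applied to the method in the question, that just means wrapping the context in a using so the connection goes back to the pool promptly:

public List<Stuff> GetStuff()
{
    using (var db = new DBContext())
    {
        // The context (and its connection) is disposed when the block exits,
        // even if the query throws, so the connection returns to the pool.
        return db.Stuff.ToList();
    }
}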
I just asked this same question over on Programmers.SE. Robert Harvey gave a great answer.
In general, you don't need to use using statements with Entity Framework data contexts. Lazy collections are one of the reasons why.
I encourage you to read the entire answer on Programmers.SE as well as the links Robert provides in the answer.
Entity Framework uses, as far as I know, connection pooling by default to reduce the overhead of creating new connections every time.
Are the connections closed when you close your application?
If so, you could try to decrease the Max Pool Size in your connection string or disable connection pooling entirely.
See here for a reference of possible options in your connection string.
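For example, pooling is controlled through standard SqlClient connection-string keywords (server and database names here are placeholders):

// Cap the pool at 50 connections per distinct connection string:
"Server=.;Database=MyDb;Integrated Security=true;Max Pool Size=50"

// Or disable pooling entirely (normally only for diagnosis; it is slow):
"Server=.;Database=MyDb;Integrated Security=true;Pooling=false"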
By default DbContext automatically manages the connection for you. So you shouldn't have to explicitly call Dispose.
Blog post on the subject: Link
But I believe not disposing can cause performance issues if you're processing a lot of requests. You should add a using statement to see whether or not it's causing a problem in your case.
Yes, if your method defines a Unit of Work; no, if something more primitive. (P.S. something somewhere in your code should define a Unit of Work, and that thing should be wrapped in a using (var context = new DbContext()) {} or equivalent.)
And if you belong to the school of thought that your DbContext is your Unit of Work, then you'll always be wrapping that bad boy in a using block: the local caching of data previously fetched during the context lifetime, together with the SaveChanges method, acts as a sort of lightweight transaction, and Dispose (without calling SaveChanges) is your Rollback, whereas SaveChanges is your Commit.
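In that view, the shape is always the same (a sketch; the context and entity names are placeholders):

using (var context = new MyDbContext()) // placeholder for your derived DbContext
{
    var widget = context.Widgets.Find(id); // reads are cached by the context
    widget.Name = "updated";               // changes are only tracked locally

    context.SaveChanges(); // "Commit": flush all tracked changes in one batch
}
// Exiting the block without calling SaveChanges is the "Rollback":
// nothing was sent to the database, and the tracked changes are discarded.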
Check this out, here's a standard protocol on how to use IDisposable objects.
https://msdn.microsoft.com/en-us/library/yh598w02.aspx
It says:
"As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement."
As they may hold unmanaged resources, you should always consider a using statement.