I have a few methods that get called sequentially, and they all execute queries against a MySQL database:
UpdateInvoice() --> UpdateOrderItems(connection)
              |---> UpdateGrandTotal(connection)
              |---> UpdateAdvances(connection)
I have using blocks for the connection and the transaction as follows:
using (var connection = ConnectionManager.GetConnection())
{
    using (var transaction = connection.BeginTransaction())
    {
        UpdateOrderItems(connection);
        UpdateGrandTotal(connection);
        UpdateAdvances(connection);
    }
}
My question is: once I have created a transaction via connection.BeginTransaction(), do I need to pass the transaction object around for the queries to execute atomically? My understanding is that since BeginTransaction() was called on the connection, the connection is in transaction mode, and a single connection can have only one transaction at a time.
Am I getting something wrong?
P.S. I am using Dapper to execute the queries inside these methods.
You can only have one transaction per connection (regardless of IsolationLevel, if you are wondering). However, you can nest multiple transactions by using System.Transactions.TransactionScope.
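For illustration, here is a minimal sketch of that approach, assuming ConnectionManager.GetConnection() returns a connection that is opened inside the scope and that the connection string has auto-enlistment enabled (the default):

// requires a reference to System.Transactions
using (var scope = new TransactionScope())
using (var connection = ConnectionManager.GetConnection())
{
    // The connection enlists in the ambient transaction when it is opened inside the scope.
    UpdateOrderItems(connection);
    UpdateGrandTotal(connection);
    UpdateAdvances(connection);

    // If Complete() is never reached (e.g. an exception is thrown), everything rolls back on dispose.
    scope.Complete();
}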
I hope this sheds some light.
Related
I have an ASP.NET Web API instance that uses a MySQL database for storage. I have written an ActionFilter that creates a TransactionScope for the lifetime of a single endpoint request.
public async Task<HttpResponseMessage> ExecuteActionFilterAsync(
    HttpActionContext actionContext,
    CancellationToken cancellationToken,
    Func<Task<HttpResponseMessage>> continuation)
{
    var transactionScopeOptions = new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted };
    using (var transaction = new TransactionScope(TransactionScopeOption.RequiresNew, transactionScopeOptions, TransactionScopeAsyncFlowOption.Enabled))
    {
        var handledTask = await continuation();
        transaction.Complete();
        return handledTask;
    }
}
Then, throughout the endpoints, I have different queries/commands that open and close connections, relying on the AutoEnlist=true behaviour of DbConnection. An example endpoint might be:
public async Task<IHttpActionResult> CreateStuffAsync()
{
    var query = await this.queryService.RetrieveAsync();
    // logic to do stuff
    var update = this.updateService.Update(query);
    return this.Ok();
}
I don't create a single DbConnection at the top and pass it around, as this is a simplified example; in practice, passing the connection between services would require a large refactor (although it can be done if necessary). I have also read that it is better to open and close connections as needed, i.e. keep them open for as little time as possible. The queryService and updateService open and close DbConnections via using statements:
var factory = DbProviderFactories.GetFactory("MySql.Data.MySqlClient");
using (var connection = factory.CreateConnection())
{
    connection.ConnectionString = "Data Source=localhost;Initial Catalog=MyDatabase;User ID=user;Password=password;Connect Timeout=300;AutoEnlist=true;";
    if (connection.State != ConnectionState.Open)
    {
        connection.Open();
    }

    var result = await connection.QueryAsync(Sql).ConfigureAwait(false);
    return result;
}
The same DbConnection is generally not used for multiple queries within the same API endpoint request -- but the same connection string is.
Intermittently I am seeing an exception thrown when attempting to open the connection:
"ExceptionType": "System.NotSupportedException",
"ExceptionMessage": "System.NotSupportedException: MySQL Connector/Net does not currently support distributed transactions.\r\n at MySql.Data.MySqlClient.ExceptionInterceptor.Throw(Exception exception)\r\n at MySql.Data.MySqlClient.MySqlConnection.EnlistTransaction(Transaction transaction)\r\n at MySql.Data.MySqlClient.MySqlConnection.Open()"
I do not understand why it is attempting to escalate the transaction to a distributed transaction, when all of the connections are against the same database. Or am I misunderstanding/misusing the TransactionScope and DbConnection instances?
The System.Transactions.Transaction object makes the determination of whether to escalate to a distributed transaction based on how many separate "resource managers" (e.g., a database) have enlisted in the transaction.
It does not draw a distinction between connections to different physical databases (that do require a distributed transaction) and multiple MySqlConnection connections that have the same connection string and connect to the same database (which might not). (It would be very difficult for it to determine that two separate "resource managers" ① represent the same physical DB and ② are being used sequentially, not in parallel.) As a result, when multiple MySqlConnection objects enlist in a transaction, it will always escalate to a distributed transaction.
When this happens, you run into MySQL bug #70587: distributed transactions aren't supported in Connector/NET.
Workarounds would be:
Make sure only one MySqlConnection object is opened within any TransactionScope (see the sketch after this list).
Change to a separate connector that does support distributed transactions. You could use MySqlConnector (NuGet, GitHub) as a drop-in replacement for Connector/NET. I've heard that dotConnect for MySQL supports them also (but haven't tried that one).
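A rough sketch of the first workaround, assuming the services can be given an already-open connection to reuse (the overloads shown here are hypothetical, not your real service signatures), inside an async endpoint:

// requires MySql.Data.MySqlClient and System.Transactions
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
using (var connection = new MySqlConnection(connectionString))
{
    // This is the only connection opened inside the scope, so the transaction is not escalated.
    await connection.OpenAsync();

    var data = await this.queryService.RetrieveAsync(connection);   // hypothetical overload taking a connection
    await this.updateService.UpdateAsync(connection, data);         // hypothetical overload taking a connection

    scope.Complete();
}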
I know this question has been asked many times; however, none of the answers fit my issue.
I have a thread timer firing every 30 seconds that queries an MSSQL database that is under heavy load. If I need to update the data in the console app I'm using, I use LINQ to SQL to update the data stored in memory.
My problem is that sometimes I get the error "ExecuteReader requires an open and available Connection".
The code in the thread timer fires Thread.Run(reload()).
The connection string is:
//example code
void reload(...)
{
    string connstring = string.Format("Data Source={0},{1};Initial Catalog={2};User ID={3};Password={4};Application Name={5};Connect Timeout=120;MultipleActiveResultSets=True;Max Pool Size=1524;Pooling=true;", ...);
    settings = new ConnectionStringSettings("sqlServer", connstring, "System.Data.SqlClient");

    using (var tx = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted }))
    {
        using (SwitchDataDataContext data = new SwitchDataDataContext(settings.ConnectionString))
        {
            data.CommandTimeout = 560;
Then I do many LINQ to SQL searches. The exceptions happen from time to time, but not always on the same queries; it's as if the connection is opened and then forcibly closed.
Sometimes the exception says the current state is Open, Closed, or Connecting. I also added a larger pool for the SQL database, but nothing seems to help.
I also use ADO.NET in other parts of the program without any issues.
I believe that your problem is that the transaction scope also has a timeout. The default timeout is 1 minute according to this answer, so the transaction times out long before your command does (560 seconds is about 9.3 minutes). You will need to set the Timeout property on the TransactionOptions instance you are creating:
new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadUncommitted,
Timeout = new TimeSpan(0,10,0) /* 10 Minutes */
}
You can verify that this is indeed the issue by setting the TransactionScope timeout to a small value to force it to time out.
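For completeness, here is a sketch of how that could slot into the code from the question (nothing new beyond the Timeout property):

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadUncommitted,
    Timeout = TimeSpan.FromMinutes(10) // give the scope at least as long as the command timeout
};

using (var tx = new TransactionScope(TransactionScopeOption.Required, options))
using (SwitchDataDataContext data = new SwitchDataDataContext(settings.ConnectionString))
{
    data.CommandTimeout = 560;
    // ... LINQ to SQL queries ...
    tx.Complete();
}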
I changed LINQ to SQL to Entity Framework and received the same type of message. I believe the issue is lazy loading: I was using the collections on a different thread before they were ready. I just added .Include("Lab") to my query to load the entire collection, and that seems to have fixed the issue.
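As an illustration only (the context and entity names here are made up apart from "Lab"):

// Eager-load the related Lab entities so the collection is fully populated
// before it is handed to the other thread.
var items = context.Items.Include("Lab").ToList();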
I'm using code like this:
Database db1 = new Database(); // init 1 db connection
db1.BeginTransaction();

// this function is used to check for an existing customer;
// inside it, I also use Database db2 = new Database(); ... db2.Close();
CheckExistCustomer();

InsertCustomer(db1, strInsert); // this function works correctly, uses db1

if (iErrorCode == ErrorStatus.SUCCESSED)
    db1.CommitTransaction();
else
    db1.RollbackTransaction();
As you can see, I have two DB connections. Can I use them like this? When db2.Close() is called, it won't affect the current db1, right? Or should I use only one DB connection (db1)?
When I run CheckExistCustomer(), the program hangs. I don't know why. Any clue?
Please advise.
I'm very grateful for your help.
You are doing transaction management in this code. Inside the transaction you are performing operations through two DB connections; that is a distributed transaction. I guess the BeginTransaction() method does not support distributed transactions, so please use the TransactionScope class.
TransactionScope: Avoiding Distributed Transactions
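A rough sketch of what that could look like with the code from the question, assuming both Database helpers open their connections inside the scope and use the same connection string (otherwise the scope escalates to a distributed transaction):

using (var scope = new TransactionScope())
{
    CheckExistCustomer();              // may open and close its own connection (db2)
    InsertCustomer(db1, strInsert);    // uses db1

    if (iErrorCode == ErrorStatus.SUCCESSED)
        scope.Complete();              // otherwise the scope disposes without Complete() and rolls back
}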
I have a fairly big database with tables created for different business modules.
We decided to create separate edmx files for the different modules respectively.
However, how can I prevent the use of MSDTC when implementing a TransactionScope around a logical action that writes to multiple tables in different edmx models? Again, the underlying database is the same; I wouldn't want to use MSDTC for this scenario.
Is there any way to pass in an opened SQL connection with an active transaction?
Thanks in advance for your help.
Regards,
William
TransactionScope enlists the MSDTC when the databases are different and/or the connection strings are different.
Rick Strahl has a great article on this (his perspective is LINQ to SQL, but it's applicable to EF). The money paragraphs:
TransactionScope is a high level Transaction wrapper that makes it real easy to wrap any code into a transaction without having to track transactions manually. Traditionally TransactionScope was a .NET wrapper around the Distributed Transaction Coordinator (DTC) but its functionality has expanded somewhat. One concern is that the DTC is rather expensive in terms of resource usage and it requires that the DTC service is actually running on the machine (yet another service which is especially bothersome on a client installation).

However, recent updates to TransactionScope and the SQL Server Client drivers make it possible to use TransactionScope class and the ease of use it provides without requiring DTC as long as you are running against a single database and with a single consistent connection string. In the example above, since the transaction works with a single instance of a DataContext, the transaction actually works without involving DTC. This is in SQL Server 2008.
See also this SO question/answer where I found the link to Rick's blog.
So if you're connecting to the same database and are using the same connection string, the DTC should not be involved.
Thanks for all the replies above!
By the way, I just managed to find a solution, which is to use EntityConnection and EntityTransaction explicitly. A sample is like this:
string theSqlConnStr = "data source=TheSource;initial catalog=TheCatalog;persist security info=True;user id=TheUserId;password=ThePassword";

EntityConnectionStringBuilder theEntyConnectionBuilder = new EntityConnectionStringBuilder();
theEntyConnectionBuilder.Provider = "System.Data.SqlClient";
theEntyConnectionBuilder.ProviderConnectionString = theSqlConnStr;
theEntyConnectionBuilder.Metadata = @"res://*/";

using (EntityConnection theConnection = new EntityConnection(theEntyConnectionBuilder.ToString()))
{
    theConnection.Open();

    EntityTransaction theET = null;
    try
    {
        theET = theConnection.BeginTransaction();

        DataEntities1 DE1 = new DataEntities1(theConnection);
        // DE1 does some things...

        DataEntities2 DE2 = new DataEntities2(theConnection);
        // DE2 does some things...

        DataEntities3 DE3 = new DataEntities3(theConnection);
        // DE3 does some things...

        theET.Commit();
    }
    catch (Exception ex)
    {
        if (theET != null) { theET.Rollback(); }
    }
    finally
    {
        theConnection.Close();
    }
}
With the explicit use of EntityConnection and EntityTransaction, I can share a single connection and transaction across multiple ObjectContexts against a single database, without incurring the use of MSDTC.
Hope this info is helpful. Good luck!
I am using several SQL stored procedures in an application.
The first procedure inserts some rows, then there is some processing in C# code; then a second procedure does some updates, followed by more code processing; then a third procedure deletes some records and inserts new ones. When all of this is done on Server 1, the data is fetched from this server and sent to Server 2, where records are deleted and new records are inserted.
If there is an error at any stage, in any procedure, on either server, I want to roll back all of the changes.
I cannot use BEGIN TRANSACTION because the processing takes time and I cannot block the tables, since other users are working with the same tables in parallel. So kindly tell me how I can achieve this without blocking the tables for other users.
Thanks in advance.
Edited (added code example):
I tried TransactionScope, but I am getting an exception while opening the connection. I configured MSDTC, but maybe it is not configured properly:
"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    try
    {
        dl.SetBookReadyToLive(13570, false);
        //SetBookReadyToLive
        dl.AddTestSubmiitedTitleID(23402);
        dl.AddBookAuthorAtLIve(13570, 1);
        ts.Complete();
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message);
    }
}
public void SetBookReadyToLive(long BookID, bool status)
{
    try
    {
        if (dbConMeta.State != ConnectionState.Open)
            dbConMeta.Open();

        SqlCommand cmd = new SqlCommand("spSetBookReadyToLive", dbConMeta);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Clear();
        cmd.Parameters.Add("@BookID", BookID);
        cmd.Parameters.Add("@status", status);
        cmd.ExecuteNonQuery();

        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
    catch
    {
        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
}
I get the exception when opening the connection in the method above.
I am using SQL Server 2000. I have configured MSDTC on the machine where SQL Server is installed and also on my PC, from where I am running the code, but I still get the same exception.
Kindly help me configure it.
You can use the TransactionScope class. It generally works well, but in the case of distributed SQL Servers, as in your case, it requires MSDTC to be enabled on both servers and configured properly (security has to be granted for network access, distributed transactions, and so on).
Here is a copy-paste of an example from MSDN; you could "almost" use it like this... :)
// Create the TransactionScope to execute the commands, guaranteeing
// that both commands can commit or roll back as a single unit of work.
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection connection1 = new SqlConnection(connectString1))
    {
        // Opening the connection automatically enlists it in the
        // TransactionScope as a lightweight transaction.
        connection1.Open();

        // Create the SqlCommand object and execute the first command.
        SqlCommand command1 = new SqlCommand(commandText1, connection1);
        returnValue = command1.ExecuteNonQuery();
        writer.WriteLine("Rows to be affected by command1: {0}", returnValue);

        // If you get here, this means that command1 succeeded. By nesting
        // the using block for connection2 inside that of connection1, you
        // conserve server and network resources as connection2 is opened
        // only when there is a chance that the transaction can commit.
        using (SqlConnection connection2 = new SqlConnection(connectString2))
        {
            // The transaction is escalated to a full distributed
            // transaction when connection2 is opened.
            connection2.Open();

            // Execute the second command in the second database.
            returnValue = 0;
            SqlCommand command2 = new SqlCommand(commandText2, connection2);
            returnValue = command2.ExecuteNonQuery();
            writer.WriteLine("Rows to be affected by command2: {0}", returnValue);
        }
    }

    // The Complete method commits the transaction. If an exception has been thrown,
    // Complete is not called and the transaction is rolled back.
    scope.Complete();
}
source: TransactionScope Class
To minimize locks you could specify the IsolationLevel with the constructor overload that takes a TransactionOptions; the default is Serializable, and if you are fine with it you could set it to ReadCommitted.
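For example (this just adapts the MSDN snippet above, it is not part of it):

var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... same connections and commands as above ...
    scope.Complete();
}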
Note: personally, I would not use this unless absolutely needed, because it's a bit of a pain to have the DTC always configured, and distributed transactions are in general slower than local ones; but it really depends on your BL/DAL logic.
Short answer: the same way you would do it in MS SQL Management Studio.
1. You open a connection to a server.
2. You open a transaction for that specific server.
3. You run your queries related to this server.
4. You make sure to keep your connection alive while you... [go back to 1 for the next server]
5. If all your queries worked, commit all your changes; else, roll back all your queries (a rough sketch follows after these steps).
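A rough sketch of those steps with plain ADO.NET objects (the connection strings and the bodies of the commands are placeholders, and the method name is made up):

// requires System.Data.SqlClient
void RunAcrossTwoServers(string connString1, string connString2)
{
    using (var conn1 = new SqlConnection(connString1))
    using (var conn2 = new SqlConnection(connString2))
    {
        conn1.Open();
        conn2.Open();

        using (var tx1 = conn1.BeginTransaction())
        using (var tx2 = conn2.BeginTransaction())
        {
            try
            {
                // run the Server 1 procedures with commands bound to conn1 and tx1,
                // then the Server 2 procedures with commands bound to conn2 and tx2
            }
            catch
            {
                tx1.Rollback();
                tx2.Rollback();
                throw;
            }

            tx1.Commit();
            tx2.Commit(); // note: if this second commit fails, the Server 1 changes are already committed
        }
    }
}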
Warning: the first table will most likely be locked until you're done with all your servers/queries. What you could do here to help with this: if you have a lot of data, you can transfer the data to temporary tables on every server before doing step 2. Once this is done, you open the transactions, do your fast operations, then commit/rollback as soon as possible.
Note: I know you asked how to achieve this without locking the tables, which is why I added the idea in the «warning» part.