Best way to do connection management in dapper? - c#

I am using the following way of querying in dapper on a MySQL database.
using (var db = new MySqlConnection(ConfigurationHandler.GetSection<string>(StringConstants.ConnectionString)))
{
    resultSet = db.Execute(UpdateQuery, new { _val = terminalId }, commandType: CommandType.Text);
    db.Close();//should i call this or not
    db.Dispose();//should i call this or not
}
Is it good practice to explicitly call db.Close() and db.Dispose()? My application could be handling hundreds of requests per second.

A using block is a convenience around the IDisposable interface. It ensures that the Dispose method is called at the end of the block.
See: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-statement.
In your case you can remove the explicit calls to db.Close() and db.Dispose() because you are not re-using the connection object.
using (var db = new MySqlConnection(ConfigurationHandler.GetSection<string>(StringConstants.ConnectionString)))
{
    resultSet = db.Execute(UpdateQuery, new { _val = terminalId }, commandType: CommandType.Text);
}
The following link provides further details about .Close vs .Dispose: https://stackoverflow.com/a/61171/1028323
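If you are handling hundreds of requests per second, short-lived pooled connections like this are fine. A small factory helper can keep the call sites tidy; a minimal sketch, reusing the ConfigurationHandler/StringConstants helpers from the question:

private static MySqlConnection CreateConnection()
{
    // Each call returns a new connection object, but with connection pooling enabled
    // (the default for the MySQL providers) the underlying physical connection is
    // reused, so opening and disposing per request is cheap.
    return new MySqlConnection(ConfigurationHandler.GetSection<string>(StringConstants.ConnectionString));
}

// Usage: Dapper opens a closed connection itself and closes it again when the call
// completes; the using block only needs to guarantee Dispose().
using (var db = CreateConnection())
{
    resultSet = db.Execute(UpdateQuery, new { _val = terminalId }, commandType: CommandType.Text);
}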

Related

how to rollback all SqlCommands which already executed with SqlTransaction?

I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Command = new SqlCommand(Query, Connection))
        {
            if (Parameters.Length > 0)
            {
                Command.Parameters.Clear();
                Command.Parameters.AddRange(Parameters);
            }
            Command.ExecuteNonQuery();
        }
    }
}
The method may be called two or three times for different queries, but in the same manner.
For example:
Insert an Employee
Insert Employee Certificates
Update Degree of Employee in another table [a failure can occur here, for example]
If point [3] fails, the commands that have already executed shouldn't take effect and must be rolled back.
I know I can put a SqlTransaction above this and use the Commit() method. But what about the 3rd point if it fails? I think only point 3 would roll back, and points 1 and 2 would not. How do I solve this, and what approach should I take?
Should I use SqlCommand[] arrays? What should I do?
I only found a similar question, but on CodeProject:
See Here
Without changing your Execute method, you can do this:
var tranOpts = new TransactionOptions()
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.MaximumTimeout
};

using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
    Execute("INSERT ...");
    Execute("INSERT ...");
    Execute("UPDATE ...");

    tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the Transaction and reuse it for each call to Execute. So you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves the least code change and the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, you keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control for your procedures.
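For example, a sketch of such a batched query passed to the existing Execute method (the table and column names here are illustrative, not taken from the question):

// SET XACT_ABORT ON makes SQL Server abort and roll back the whole transaction
// if any statement in the batch raises an error.
string createEmployeeBatch = @"
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    INSERT INTO Employees (Name) VALUES (@Name);
    INSERT INTO EmployeeCertificates (EmployeeName, Certificate) VALUES (@Name, @Certificate);
    UPDATE EmployeeDegrees SET Degree = @Degree WHERE EmployeeName = @Name;
COMMIT TRANSACTION;";

Execute(createEmployeeBatch,
    new SqlParameter("@Name", "John"),
    new SqlParameter("@Certificate", "MCSD"),
    new SqlParameter("@Degree", "BSc"));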
But it still feels kind of hackish.
One way to reduce that effect is marking the Execute() method private. Then, have an additional method in the class for each query. In this way, the long SQL strings are isolated, and using the database feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where the core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure to do all the work, and call it when ready.
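For example, a sketch of the call site (usp_CreateEmployee and its parameters are hypothetical; the procedure itself would contain the BEGIN TRAN / COMMIT / ROLLBACK logic):

using (var Connection = new SqlConnection(Configuration.ConnectionString))
using (var Command = new SqlCommand("usp_CreateEmployee", Connection))
{
    // CommandType.StoredProcedure tells ADO.NET to call the procedure by name.
    Command.CommandType = CommandType.StoredProcedure;
    Command.Parameters.AddWithValue("@Name", "John");
    Command.Parameters.AddWithValue("@Certificate", "MCSD");
    Command.Parameters.AddWithValue("@Degree", "BSc");

    Connection.Open();
    Command.ExecuteNonQuery();
}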
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        // A SqlTransaction is created from the open connection rather than constructed directly.
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    // Associate each command with both the connection and the transaction.
                    using (var Command = new SqlCommand(Queries[i], Connection, Transaction))
                    {
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.Clear();
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch (Exception)
            {
                Transaction.Rollback();
                throw; //I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
Though I'm not sure how the params keyword works with an array of arrays... I've just not tried that option, but something along these lines would work. The weakness here is also that it's not trivial to have a later query depend on a result from an earlier query, and even queries with no parameters would still need a Parameters entry as a placeholder.
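For what it's worth, a call against that hypothetical overload would look something like this; queries without parameters still need an empty placeholder array so the indexes line up:

Execute("CreateEmployee",
    new[]
    {
        "INSERT INTO Employees ...",
        "INSERT INTO EmployeeCertificates ...",
        "UPDATE EmployeeDegrees ..."
    },
    new[] { new SqlParameter("@Name", "John") },   // parameters for query 0
    new SqlParameter[0],                           // query 1 has no parameters
    new[] { new SqlParameter("@Degree", "BSc") }); // parameters for query 2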
A final option is extending the type holding your Execute() method to support transactions. The trick here is it's common (and desirable) to have this type be static, but supporting transactions requires re-using common connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means both instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    SqlCommand command = connection.CreateCommand();
    SqlTransaction transaction;

    connection.Open();
    transaction = connection.BeginTransaction("Transaction");

    command.Connection = connection;
    command.Transaction = transaction;

    try
    {
        if (Parameters.Length > 0)
        {
            command.Parameters.Clear();
            command.Parameters.AddRange(Parameters);
        }
        command.ExecuteNonQuery();
        transaction.Commit();
    }
    catch (Exception e)
    {
        try
        {
            transaction.Rollback();
        }
        catch (Exception ex2)
        {
            //trace
        }
    }
}
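A minimal sketch of that wrapper idea (the class name and shape are hypothetical, not an existing API):

// Hypothetical unit-of-work wrapper around the Execute() pattern: it holds one
// connection/transaction pair for its lifetime and disposes both at the end.
public sealed class SqlUnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public SqlUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Execute(string query, params SqlParameter[] parameters)
    {
        using (var command = new SqlCommand(query, _connection, _transaction))
        {
            if (parameters.Length > 0)
            {
                command.Parameters.AddRange(parameters);
            }
            command.ExecuteNonQuery();
        }
    }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        // Disposing an uncommitted transaction rolls it back.
        _transaction.Dispose();
        _connection.Dispose();
    }
}

Callers would create one instance per logical transaction, call Execute() for each statement, call Commit() at the end, and let the using block dispose it (rolling back automatically if Commit() was never reached).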

Parallel.Invoke(), TransactionScope() and SqlBulkCopy

I have multiple methods inside a Parallel.Invoke() that need to run inside of a transaction. These methods all invoke instances of SqlBulkCopy. The use-case is "all-or-none", so if one method fails, nothing gets committed. I am getting a TransactionAbortedException ({"Transaction Timeout"}) when I call the Complete() method on the parent transaction.
This is the parent transaction:
using (var ts = new TransactionScope())
{
    var saveClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveErrorsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveADClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveEnrollmentsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

    Parallel.Invoke(_options, () =>
    {
        Save(data, saveClone);
    },
    () =>
    {
        SaveErrors(saveErrorsClone);
    },
    () =>
    {
        SaveEnrollments(data, saveEnrollmentsClone);
    });

    ts.Complete();
}//***** GET THE EXCEPTION HERE *****
Here's a dependent transaction that makes use of SqlBulkCopy (they're all the same structure). I'm passing in the parent and assigning it to the child's TransactionScope:
private void Save(IDictionary<string, string> data, Transaction transaction)
{
    var dTs = (DependentTransaction)transaction;
    if (transaction.TransactionInformation.Status != TransactionStatus.Aborted)
    {
        using (var ts = new TransactionScope(dTs))
        {
            _walmartData.Save(data);
            Debug.WriteLine("Completed Processing XML - {0}", _stopWatch.Elapsed);
            ts.Complete();
        }
    }
    else
    {
        Debug.WriteLine("Save Not Executed - Transaction Aborted - {0}", _stopWatch.Elapsed);
        dTs.Complete();
    }
    dTs.Complete();
}
EDIT (added my SqlBulkCopy method...notice null for the transaction param)
private void SqlBulkCopy(DataTable dt, SqlBulkCopyColumnMappingCollection mappings)
{
    try
    {
        using (var sbc = new SqlBulkCopy(_conn, SqlBulkCopyOptions.TableLock, null))
        {
            sbc.BatchSize = 100;
            sbc.BulkCopyTimeout = 0;
            sbc.DestinationTableName = dt.TableName;
            foreach (SqlBulkCopyColumnMapping mapping in mappings)
            {
                sbc.ColumnMappings.Add(mapping);
            }
            sbc.WriteToServer(dt);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
Besides fixing the error, I'm open to alternatives. Thanks.
You're creating a form of deadlock with your choice of DependentCloneOption.BlockCommitUntilComplete.
Parallel.Invoke blocks the calling thread until all of its processing is complete. The jobs being run by Parallel.Invoke are all blocking while waiting for the parent transaction to complete (due to the DependentCloneOption). So the two are waiting on each other: a deadlock. The parent transaction eventually times out and releases the dependent transactions from blocking, which unblocks your calling thread.
Can you use DependentCloneOption.RollbackIfNotComplete?
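That would just change how the dependent clones are created, e.g. (a sketch against the code in the question):

// With RollbackIfNotComplete the dependent clone no longer blocks the parent's
// commit; an incomplete dependent transaction simply causes a rollback instead.
var saveClone = Transaction.Current.DependentClone(DependentCloneOption.RollbackIfNotComplete);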
http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.complete.aspx says that TransactionScope.Complete only commits the transaction it contains if it was the one that created it. Since you are creating the scope from an existing transaction I believe you will need to commit the transaction before calling complete on the scope.
From MSDN:
The actual work of commit between the resources manager happens at the End Using statement if the TransactionScope object created the transaction. If it did not create the transaction, the commit occurs whenever Commit is called by the owner of the CommittableTransaction object. At that point the Transaction Manager calls the resource managers and informs them to either commit or rollback, based on whether this method was called on the TransactionScope object.
After a lot of pain, research, and lack of a valid answer, I've got to believe that it's not possible with the stack that I described in my question. The pain-point, I believe, is between TransactionScope and SqlBulkCopy. I put this answer here for the benefit of future viewers. If someone can prove that it can be done, I'll gladly remove this as the answer.
I believe that how you create your _conn instance matters a lot: if you create and open it within your TransactionScope instance, any SqlBulkCopy-related issues should be solved.
Have a look at Can I use SqlBulkCopy inside Transaction and Is it possible to use System.Transactions.TransactionScope with SqlBulkCopy? and see if it helps you.
void MyMainMethod()
{
    using (var ts = new TransactionScope())
    {
        Parallel.InvokeOrWhatNotOrWhatEver(() => DoStuff());
        ts.Complete(); // finish the transaction, ie commit
    }
}

void DoStuff()
{
    using (var sqlCon = new SqlConnection(conStr))
    {
        sqlCon.Open(); // ensure to open it before SqlBulkCopy can open it in another transaction scope.
        using (var bulk = new SqlBulkCopy(sqlCon))
        {
            // Do your stuff
            bulk.WriteToServer...
        }
    }
}
In short:
Create transaction scope
Create sql-connection and open it under the transaction scope
Create and use SqlBulkCopy-instance with above created connection
Call transaction.Complete()
Dispose of everything :-)

Entity Framework not picking up Transaction Scope

I have pretty much standard EF 6.1 'create object in a database' code wrapped in a transaction scope. For whatever reason, the data persists in the database even though the transaction fails to complete.
Code:
using (var db = this.Container.Resolve<SharedDataEntities>()) // << new instance of DbContext
{
    using (TransactionScope ts = new TransactionScope())
    {
        SubscriptionTypes st = this.SubscriptionType.Value;
        if (st == SubscriptionTypes.Lite && this.ProTrial)
            st = SubscriptionTypes.ProTrial;

        Domain domain = new Domain()
        {
            Address = this.Address.Trim(),
            AdminUserId = (Guid)user.ProviderUserKey,
            AdminUserName = user.UserName,
            Description = this.Description.TrimSafe(),
            DomainKey = Guid.NewGuid(),
            Enabled = !masterSettings.DomainEnableControlled.Value,
            Name = this.Name.Trim(),
            SubscriptionType = (int)st,
            Timezone = this.Timezone,
            Website = this.Website.TrimSafe(),
            IsPrivate = this.IsPrivate
        };

        foreach (var countryId in this.Countries)
        {
            domain.DomainCountries.Add(new DomainCountry() { CountryId = countryId, Domain = domain });
        }

        db.Domains.Add(domain);
        db.SaveChanges(); // << This is the Saving that should not be committed until we call 'ts.Complete()'

        this.ResendActivation(domain); // << This is where the Exception occurs

        using (TransactionScope ts2 = new TransactionScope(TransactionScopeOption.Suppress))
        {
            this.DomainMembership.CreateDomainUser(domain.Id, (Guid)user.ProviderUserKey, user.UserName, DomainRoles.DomainSuperAdmin | DomainRoles.Driver);
            ts2.Complete();
        }

        this.Enabled = domain.Enabled;

        ts.Complete(); // << Transaction commit never happens
    }
}
After SaveChanges(), an exception is thrown inside ResendActivation(...), so the changes should not be saved. However, the records stay in the database.
There is no other TransactionScope wrapping the code that I've pasted, it's triggered by an MVC Action call.
After more investigation, it turns out that something, probably an Entity Framework upgrade or a database update process, had put
Enlist=false;
into the database connection string. That effectively stops EF from picking up the TransactionScope.
So the solution is to set it to true, or remove it entirely; I believe it is true by default.
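For example (a sketch; everything in the connection string other than the Enlist keyword is illustrative):

// Enlist defaults to true for SqlClient, so removing Enlist=false or setting it
// explicitly lets EF enlist in the ambient TransactionScope again.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;Enlist=true";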
Try using the transaction from the db instance itself, db.Database.BeginTransaction() if I recall correctly, instead of using the transaction scope.
using (var ts = db.Database.BeginTransaction())
{
..
}
This assumes that db is your Entity Framework context.
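A slightly fuller sketch of that approach, with explicit commit and rollback (assuming db is the EF 6 context from the question):

using (var transaction = db.Database.BeginTransaction())
{
    try
    {
        // ... add entities and do any other work that must be atomic ...
        db.SaveChanges();
        transaction.Commit();
    }
    catch
    {
        // Anything already sent to the database in this transaction is undone.
        transaction.Rollback();
        throw;
    }
}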
The context class supports transactions by default, but with every new instance of the context class a new transaction is created. This new transaction is a nested transaction, and it gets committed once SaveChanges() on the associated context class is called.
In the given code, it seems we are calling a method that is responsible for creating a domain user, i.e. CreateDomainUser, and perhaps that method has its own context object; hence this problem.
If that is the case (that this method has its own context), then perhaps we don't even need TransactionScope here. We can simply pass the same context (the one we are using before this call) to the function that creates the domain user. We can then check the results of both operations and, if they are successful, simply call SaveChanges().
TransactionScope is usually needed when we are mixing ADO.NET calls with Entity Framework. We can use it with the context as well, but that would be overkill, since the context class already has a transaction that we can use. If that is what is happening in the above code, the trick to fix the issue is to let the context class know that you want to use your own transaction. Since the transaction is associated with the connection object in the scope, we need to use the context class with the connection that is associated with the transaction scope. We also need to let the context class know that it does not own the connection, because it is owned by the calling code. So we can do something like:
using (var scope = new TransactionScope(TransactionScopeOption.Required))
{
    using (var conn = new SqlConnection("..."))
    {
        conn.Open();

        var sqlCommand = new SqlCommand();
        sqlCommand.Connection = conn;
        sqlCommand.CommandText =
            @"UPDATE Blogs SET Rating = 5" +
            " WHERE Name LIKE '%Entity Framework%'";
        sqlCommand.ExecuteNonQuery();

        using (var context =
            new BloggingContext(conn, contextOwnsConnection: false))
        {
            var query = context.Posts.Where(p => p.Blog.Rating > 5);
            foreach (var post in query)
            {
                post.Title += "[Cool Blog]";
            }
            context.SaveChanges();
        }
    }
    scope.Complete();
}
See: http://msdn.microsoft.com/en-us/data/dn456843.aspx
You should be able to use TransactionScope with EF; I know our projects do. However, I think you want the EF context instantiated inside the transaction scope; that is, I believe you need to swap the outermost two using statements, as the answer from #rahuls suggests.
Even if it works the other way ... if you have a service / app / business layer method that needs to update several tables, and you want those updates to be atomic, you'd need to do it this way. So for the sake of consistency (and your own sanity), I would recommend transaction scope first, context second.
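In other words, something along these lines (a sketch using the names from the question):

using (var ts = new TransactionScope())
{
    using (var db = this.Container.Resolve<SharedDataEntities>())
    {
        // ... build the Domain, db.Domains.Add(domain), db.SaveChanges(), etc. ...
    }
    ts.Complete(); // the work is only committed here
}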

Using ServiceStack.Redis, how can I run strongly-typed read-only queries in a transaction

I'm aware that I can increase performance of Redis queries by executing them in a transaction (and even more so in a dedicated pipeline).
The problem is that using the ServiceStack Redis client, I can execute the reads in a transaction but I cannot access the results. This is the code I have below:
User user = null;
Group group = null;

using (IRedisClient cacheClient = new RedisClient())
using (var trans = cacheClient.CreateTransaction())
{
    trans.QueueCommand(x =>
    {
        var userClient = x.As<User>();
        var userHash = userClient.GetHash<string>("Users");
        user = userClient.GetValueFromHash(userHash, userKey);
    });
    trans.QueueCommand(y =>
    {
        // Retrieve modules from cache
        var groupClient = y.As<Group>();
        var groupHash = groupClient.GetHash<string>("Groups");
        group = groupClient.GetValueFromHash(groupHash, groupKey);
    });
    trans.Commit();
}
The problem here is that the user and group variables are not set with the output from the transaction.
So, how can I run a series of different strongly-typed read queries in a transaction (or pipeline) and retrieve the results?
Thanks.
You cannot do any dependent reads (operations relying on other read values) within a Redis transaction, since the way transactions work in Redis is that all the commands get batched, sent down to Redis, and executed as a single unit, so there's no opportunity to use a read value from within the same transaction.
The way you can ensure integrity is to use the Redis WATCH command to watch any keys before a transaction; if any of the watched keys changed before the transaction completes, it will throw when the transaction is committed. E.g.:
var cacheClient = new RedisClient();
cacheClient.Watch("Users", "Groups");

var userHash = cacheClient.As<User>().GetHash<string>("Users");
var groupHash = cacheClient.As<Group>().GetHash<string>("Groups");

using (var trans = cacheClient.CreateTransaction())
{
    trans.QueueCommand(x =>
    {
        user = x.As<User>().GetValueFromHash(userHash, userKey);
    });
    trans.QueueCommand(y =>
    {
        group = y.As<Group>().GetValueFromHash(groupHash, groupKey);
    });
    trans.Commit();
}
Another alternative to transactions is to use server-side Lua scripting in Redis.

Connection lifetime and prepared statements in ADO.NET

I have a long-running windows service that constantly receives data and processes it, then puts it into the database. I use stored procedures for the complex operations, but some of them have many parameters.
I'm aware that this is the suggested 'best practice':
using (IDbConnection connection = GetConnection())
{
    connection.Open();
    // Do stuff
    connection.Close();
}
This results in short-lived connections, but takes full advantage of connection pooling. However, this practice really seems to negate the benefits of a stored procedure. I have something like this at the moment:
while (true)
{
    var items = GetData(); // network I/O

    using (IDbConnection connection = GetConn())
    {
        connection.Open();
        var tran = connection.BeginTransaction();

        var preparedStatement1 = SQL.Prepare(connection, "...", ...);
        var preparedStatement2 = SQL.Prepare(connection, "...", ...);
        var preparedStatement3 = SQL.Prepare(connection, "...", ...);

        foreach (var item in items)
        {
            // loop which calls SQL statements.
        }

        connection.Close();
    }
}
I really feel that I should open the connection outside of the while loop so it stays alive a long time; and prepare the statements before entering the loop. That would give me the full benefit of using the stored procedures:
using (IDbConnection connection = GetConn())
{
    connection.Open();
    var tran = connection.BeginTransaction();

    var preparedStatement1 = SQL.Prepare(connection, "...", ...);
    var preparedStatement2 = SQL.Prepare(connection, "...", ...);
    var preparedStatement3 = SQL.Prepare(connection, "...", ...);

    while (!service.IsStopped)
    {
        var items = GetData(); // network I/O
        foreach (var item in items)
        {
            // loop which calls SQL statements.
        }
    }

    connection.Close();
}
So the question is, does the performance benefit of stored procedures outweigh the "risks" of leaving connections open for a long time? The best practices never seem to mention prepared statements, and the MSDN documentation (I'm using SQL Server) seems to suggest Prepare()ing will sometimes be a no-op: SQLCommand.Prepare()
