I'm not sure what a good way to manage this transaction in code would be.
Say I have the following:
Service layer (non-static class)
Repository layer (static class)
// Service layer class

/// <summary>
/// Accept offer to stay
/// </summary>
public bool TxnTest()
{
    using (SqlConnection conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        SqlTransaction txn = conn.BeginTransaction();
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = txn;
            try
            {
                DoThis(cmd);
                DoThat(cmd);
                txn.Commit();
                return true;
            }
            catch (SqlException)
            {
                txn.Rollback();
                return false;
            }
        }
    }
}
// Repo Class

/// <summary>
/// Update Rebill Date
/// </summary>
public static void DoThis(SqlCommand cmd)
{
    cmd.Parameters.Clear();
    cmd.Parameters.AddWithValue("@SomeParam", 1);
    cmd.CommandText = "Select * from sometable1";
    cmd.CommandType = CommandType.Text;
    cmd.ExecuteNonQuery();
}

/// <summary>
/// Update Rebill Date
/// </summary>
public static void DoThat(SqlCommand cmd)
{
    cmd.Parameters.Clear();
    cmd.Parameters.AddWithValue("@SomeParam", 2);
    cmd.CommandText = "Select * from sometable2";
    cmd.CommandType = CommandType.Text;
    cmd.ExecuteNonQuery();
}
Is the above approach any good? Is it wise to use a static class for the repository or will that create problems?
Is there a way to do this without having to pass a command (cmd) object around?
You might want to take a look at the unit of work pattern.
The unit of work pattern defines exactly what it suggests, a unit of work that is committed all at once, or not at all.
This works by defining an interface that has two parts:
1. Methods that handle your insert, update, and delete operations (note: you don't have to expose all of these operations, and you aren't limited to one entity type).
2. A method to commit (if you want to roll back, you simply don't call commit). This is where you would handle the transaction, as well as the inserting, updating, and/or deleting of all the entities that were registered to be changed.
Then, you would pass an implementation of this interface around, and commit the changes at the outer boundaries (your service) when all the operations are complete.
Note that the ObjectContext class in LINQ-to-Entities and the DataContext class in LINQ-to-SQL are both examples of units of work (you perform operations and then save them in a batch).
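To make the shape of this concrete, here is a minimal sketch of such an interface (the names and the Employee entity are illustrative, not from any particular library):

    // A minimal unit-of-work sketch: operations are registered first,
    // then committed together; rolling back means never calling Commit().
    public interface IUnitOfWork
    {
        void RegisterInsert(Employee entity);
        void RegisterUpdate(Employee entity);
        void RegisterDelete(Employee entity);

        // Opens the connection, wraps all registered operations in one
        // transaction, and commits them as a single batch.
        void Commit();
    }

Your service would receive an IUnitOfWork implementation, let the repository methods register changes against it, and call Commit() once at the end when all operations succeeded.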
Related
I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Command = new SqlCommand(Query, Connection))
        {
            if (Parameters.Length > 0)
            {
                Command.Parameters.Clear();
                Command.Parameters.AddRange(Parameters);
            }
            Command.ExecuteNonQuery();
        }
    }
}
The method may be called two or three times for different queries, but in the same manner.
For example:
Insert an Employee
Insert Employee Certificates
Update Degree of Employee in another table [ a failure can occur here, for example ]
If step 3 fails, the commands that have already executed must not take effect; everything must be rolled back.
I know I can wrap this in a SqlTransaction and call its Commit() method. But what if step 3 fails? Won't only step 3 be rolled back, while steps 1 and 2 stay applied? How do I solve this, and what approach should I take?
Should I use an array of SqlCommands? What should I do?
I've only found a similar question on CodeProject:
See Here
Without changing your Execute method, you can do this:
// requires: using System.Transactions; (and a reference to the System.Transactions assembly)
var tranOpts = new TransactionOptions()
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.MaximumTimeout
};

using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
    Execute("INSERT ...");
    Execute("INSERT ...");
    Execute("UPDATE ...");

    tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the Transaction and reuse it for each call to Execute. So you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves changing the least code and involves the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control with your procedures.
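For example, a single Query string passed to the existing Execute method could look like the following. This is a sketch only: the table names, columns, and parameter values are invented for illustration, and THROW requires SQL Server 2012 or later.

    string query = @"
        BEGIN TRANSACTION;
        BEGIN TRY
            DECLARE @EmployeeId int;
            INSERT INTO Employee (Name) VALUES (@Name);
            SET @EmployeeId = SCOPE_IDENTITY();
            INSERT INTO EmployeeCertificate (EmployeeId, CertName) VALUES (@EmployeeId, @CertName);
            UPDATE EmployeeDegree SET Degree = @Degree WHERE EmployeeId = @EmployeeId;
            COMMIT;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK;
            THROW; -- surfaces the error to ExecuteNonQuery as a SqlException
        END CATCH";

    Execute(query,
        new SqlParameter("@Name", "Alice"),
        new SqlParameter("@CertName", "MCSD"),
        new SqlParameter("@Degree", "BSc"));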
But it still feels kind of hackish.
One way to reduce that effect is marking the Execute() method private. Then, have an additional method in the class for each query. In this way, the long SQL strings are isolated, and when you're using the database it feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where the core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
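For instance (illustrative names; this InsertEmployee wrapper is hypothetical):

    // The long SQL string lives here, behind a method that reads like a local API call.
    public void InsertEmployee(string name)
    {
        Execute("INSERT INTO Employee (Name) VALUES (@Name)",
            new SqlParameter("@Name", name));
    }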
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure do all the work, and call it when ready.
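Assuming a hypothetical procedure dbo.SaveEmployeeAndDegree that wraps all three statements in its own transaction, the call from C# stays a one-liner against the existing method:

    Execute("EXEC dbo.SaveEmployeeAndDegree @Name, @CertName, @Degree",
        new SqlParameter("@Name", "Alice"),
        new SqlParameter("@CertName", "MCSD"),
        new SqlParameter("@Degree", "BSc"));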
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        // SqlTransaction has no public constructor; it must come from BeginTransaction
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    using (var Command = new SqlCommand(Queries[i], Connection))
                    {
                        Command.Transaction = Transaction;
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch (Exception ex)
            {
                Transaction.Rollback();
                throw; //I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
(The params keyword does work with an array of arrays: each additional argument is itself a SqlParameter[]; see the usage sketch below.) The weakness here is that it's not trivial to have a later query depend on a result from an earlier query, and even a query with no parameters still needs an empty parameter array as a placeholder.
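Usage would then look something like this (a sketch; the queries are abbreviated and the values made up):

    Execute("SaveEmployee",
        new[]
        {
            "INSERT INTO Employee (Name) VALUES (@Name)",
            "INSERT INTO EmployeeCertificate (EmployeeId, CertName) VALUES (@EmployeeId, @CertName)",
            "UPDATE EmployeeDegree SET Degree = @Degree WHERE EmployeeId = @EmployeeId"
        },
        new[] { new SqlParameter("@Name", "Alice") },
        new[] { new SqlParameter("@EmployeeId", 1), new SqlParameter("@CertName", "MCSD") },
        new[] { new SqlParameter("@EmployeeId", 1), new SqlParameter("@Degree", "BSc") });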
A final option is extending the type holding your Execute() method to support transactions. The trick here is that it's common (and desirable) to have this type be static, but supporting transactions requires reusing common connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means both instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    SqlCommand command = connection.CreateCommand();
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction("Transaction");
    command.Connection = connection;
    command.Transaction = transaction;
    try
    {
        if (Parameters.Length > 0) // Parameters is assumed to come from the enclosing method
        {
            command.Parameters.Clear();
            command.Parameters.AddRange(Parameters);
        }
        command.ExecuteNonQuery();
        transaction.Commit();
    }
    catch (Exception e)
    {
        try
        {
            transaction.Rollback();
        }
        catch (Exception ex2)
        {
            // the rollback itself can fail if the connection has died; trace/log here
        }
    }
}
I'm designing a small desktop app that fetches data from SQL Server. I used BackgroundWorker to make the query execute in the background. The code that fetches data generally comes down to this:
public static DataTable GetData(string sqlQuery)
{
    DataTable t = new DataTable();
    using (SqlConnection c = new SqlConnection(GetConnectionString()))
    {
        c.Open();
        using (SqlCommand cmd = new SqlCommand(sqlQuery))
        {
            cmd.Connection = c;
            using (SqlDataReader r = cmd.ExecuteReader())
            {
                t.Load(r);
            }
        }
    }
    return t;
}
Since the query can take 10-15 minutes, I want to implement a cancellation request and pass it from the GUI layer to the DAL. BackgroundWorker's cancellation procedure won't let me cancel SqlCommand.ExecuteReader(), because it only stops when the data has been fetched from the server or an exception is thrown by the data provider.
I tried to use Task and async/await with SqlCommand.ExecuteReaderAsync(CancellationToken), but I am confused about where to use it in a multi-layer app (GUI -> BLL -> DAL).
Have you tried using the SqlCommand.Cancel() method?
Approach: encapsulate that GetData method in a Thread/Worker, and when you cancel/stop that thread, call the Cancel() method on the SqlCommand that is being executed.
Here is an example of how to use it on a thread:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

class Program
{
    private static SqlCommand m_rCommand;

    public static SqlCommand Command
    {
        get { return m_rCommand; }
        set { m_rCommand = value; }
    }

    public static void Thread_Cancel()
    {
        Command.Cancel();
    }

    static void Main()
    {
        string connectionString = GetConnectionString();
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();

                Command = connection.CreateCommand();
                Command.CommandText = "DROP TABLE TestCancel";
                try
                {
                    Command.ExecuteNonQuery();
                }
                catch { } // ignore the error if the table does not exist yet

                Command.CommandText = "CREATE TABLE TestCancel(co1 int, co2 char(10))";
                Command.ExecuteNonQuery();
                Command.CommandText = "INSERT INTO TestCancel VALUES (1, '1')";
                Command.ExecuteNonQuery();

                Command.CommandText = "SELECT * FROM TestCancel";
                SqlDataReader reader = Command.ExecuteReader();

                Thread rThread2 = new Thread(new ThreadStart(Thread_Cancel));
                rThread2.Start();
                rThread2.Join();

                reader.Read();
                System.Console.WriteLine(reader.FieldCount);
                reader.Close();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    static private string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        return "Data Source=(local);Initial Catalog=AdventureWorks;"
            + "Integrated Security=SSPI";
    }
}
You can only do cancellation checking and progress reporting between distinct lines of code. Usually both require that you dissect the code down to the lowest loop level, so you can do both of these things between/in the loop iterations. When I took my first steps with BGW, I had the advantage that I needed to write the loop anyway, so it was no extra work. You have one of the worse cases: pre-existing code that you can only replicate or use as-is.
Ideal case:
This operation should not take nearly as long as it does. 10-15 minutes indicates that there is something rather wrong with your design.
If the bulk of the time is transmission of data, then you are probably retrieving way too much data. Retrieving everything and then filtering in the GUI is a very common mistake; do as much filtering in the query as possible. Using a distributed database might also help with transmission performance.
If the bulk of the time is processing as part of the query operation (complex conditions), something in your general approach might have to change. There are various ways to trade complex calculation for a bit of memory on the DBMS side. Views, as far as I know, can cache the results of operations while still maintaining transactional consistency.
But it really depends on what your backend DB/DBMS and use case are. Many of them use SQL as the query language, so that alone does not let us predict which options you have.
Second best case:
If you cannot cut the runtime down, the second best thing would be to get the actual DB access code down to the lowest loop and do progress reporting/cancellation checking there. That way you could use the existing cancellation system inherent in BGW.
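A sketch of what that could look like, assuming the reading loop runs inside the worker's DoWork handler so that worker and e are in scope (the manual loop replaces DataTable.Load, which is what makes per-row checks possible):

    using (SqlDataReader r = cmd.ExecuteReader())
    {
        // Build the DataTable's columns once, from the reader's schema.
        for (int i = 0; i < r.FieldCount; i++)
            t.Columns.Add(r.GetName(i), r.GetFieldType(i));

        int count = 0;
        while (r.Read())
        {
            // Between iterations we can honor cancellation and report progress.
            if (worker.CancellationPending)
            {
                e.Cancel = true;
                return;
            }

            var row = t.NewRow();
            for (int i = 0; i < r.FieldCount; i++)
                row[i] = r[i];
            t.Rows.Add(row);

            if (++count % 1000 == 0)
                worker.ReportProgress(0, count); // requires WorkerReportsProgress = true
        }
    }

Note this only helps once rows start arriving; it cannot interrupt ExecuteReader itself while the server is still computing the result.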
Everything else:
Using any other approach to cancellation is really a fallback. I wrote a lot on why it is bad, but felt this answer would work better if I focused on the core issue: likely something wrong in the design of the DB and/or the query, because fixing those might well eliminate the issue altogether.
I am working on a setup where a scalable WCF Service Component is connected to a single MS SQL Server Database. The RESTful service allows users to save data into the DB as well as get data from it.
Whilst implementing a class handling the database connections / methods, I started struggling with correctly reusing prepared SqlCommands and the connection. I read up on the MSDN about connection pooling as well as how to use SqlCommand and SqlParameter.
My initial version of the class looks like this:
public class SqlRepository : IDisposable
{
    private object syncRoot = new object();
    private SqlConnection connection;
    private SqlCommand saveDataCommand;
    private SqlCommand getDataCommand;

    public SqlRepository(string connectionString)
    {
        // establish sql connection
        connection = new SqlConnection(connectionString);
        connection.Open();

        // save data
        saveDataCommand = new SqlCommand("INSERT INTO Table (Operation, CustomerId, Data, DataId, CreationDate, ExpirationDate) VALUES (@Operation, @CustomerId, @Data, @DataId, @CreationDate, @ExpirationDate)", connection);
        saveDataCommand.Parameters.Add(new SqlParameter("Operation", SqlDbType.NVarChar, 20));
        saveDataCommand.Parameters.Add(new SqlParameter("CustomerId", SqlDbType.NVarChar, 50));
        saveDataCommand.Parameters.Add(new SqlParameter("Data", SqlDbType.NVarChar, 50));
        saveDataCommand.Parameters.Add(new SqlParameter("DataId", SqlDbType.NVarChar, 50));
        saveDataCommand.Parameters.Add(new SqlParameter("CreationDate", SqlDbType.DateTime));
        saveDataCommand.Parameters.Add(new SqlParameter("ExpirationDate", SqlDbType.DateTime));
        saveDataCommand.Prepare();

        // get data
        getDataCommand = new SqlCommand("SELECT TOP 1 Data FROM Table WHERE CustomerId = @CustomerId AND DataId = @DataId AND ExpirationDate > @ExpirationDate ORDER BY CreationDate DESC", connection);
        getDataCommand.Parameters.Add(new SqlParameter("CustomerId", SqlDbType.NVarChar, 50));
        getDataCommand.Parameters.Add(new SqlParameter("DataId", SqlDbType.NVarChar, 50));
        getDataCommand.Parameters.Add(new SqlParameter("ExpirationDate", SqlDbType.DateTime));
        getDataCommand.Prepare();
    }
    public void SaveData(string customerId, string dataId, string operation, string data, DateTime expirationDate)
    {
        lock (syncRoot)
        {
            saveDataCommand.Parameters["Operation"].Value = operation;
            saveDataCommand.Parameters["CustomerId"].Value = customerId;
            saveDataCommand.Parameters["CreationDate"].Value = DateTime.UtcNow;
            saveDataCommand.Parameters["ExpirationDate"].Value = expirationDate;
            saveDataCommand.Parameters["Data"].Value = data;
            saveDataCommand.Parameters["DataId"].Value = dataId;
            saveDataCommand.ExecuteNonQuery();
        }
    }

    public string GetData(string customerId, string dataId)
    {
        lock (syncRoot)
        {
            getDataCommand.Parameters["CustomerId"].Value = customerId;
            getDataCommand.Parameters["DataId"].Value = dataId;
            getDataCommand.Parameters["ExpirationDate"].Value = DateTime.UtcNow;

            using (var reader = getDataCommand.ExecuteReader())
            {
                if (reader.Read())
                {
                    string data = reader.GetFieldValue<string>(0);
                    return data;
                }
                else
                {
                    return null;
                }
            }
        }
    }
    public void Dispose()
    {
        try
        {
            if (connection != null)
            {
                connection.Close();
                connection.Dispose();
            }
            DisposeCommand(saveDataCommand);
            DisposeCommand(getDataCommand);
        }
        catch { }
    }

    private void DisposeCommand(SqlCommand command)
    {
        try
        {
            command.Dispose();
        }
        catch (Exception)
        {
        }
    }
}
There are several aspects important to know:
I am using SqlCommand.Prepare() to speed up the execution of the commands.
Reusing the commands avoids creating new objects with every call to the GetData and SaveData methods, so the garbage collector has less to do.
There is only one instance of the SqlRepository class, used by the WCF service.
There are many, many calls per minute to this service, so keeping a connection to the DB open is what I want.
Now I read up a bit more about connection pooling and the fact that it is highly recommended to wrap the SqlConnection object in a using statement to ensure disposal. To my understanding, the connection pooling technology takes care of keeping the underlying connection open even though the Dispose() method of SqlConnection has been called by the using statement.
The way to use this would be to have a using (SqlConnection connection = new SqlConnection(connectionString)) inside the GetData and SaveData methods. However, then (at least to my intuition) I would need to create the SqlCommands inside the GetData / SaveData methods as well. Or not? I could not find any documentation on how to reuse the commands that way. Also, wouldn't the call to SqlCommand.Prepare() be meaningless if I need to prepare a new command every time I enter the GetData / SaveData methods?
How do I properly implement the SqlRepository class? The way it is now, I believe that if the connection breaks (maybe because the DB server goes down for a while and reboots), the SqlRepository class will not automatically recover and keep functioning. To my best knowledge, these sorts of failsafe scenarios are handled by the pooling technology.
Thanks for ideas and feedback!
Christian
Do not reuse the SqlCommand instances.
You are synchronizing database access.
With your implementation, you are reusing a small object (which is no problem for the GC, even if there are thousands of them) in exchange for serializing all concurrent DB operations.
Remove the synchronization locks.
Create new instances of SqlCommands for each database operation.
Do not call Prepare. Prepare speeds up DB operations, but after ExecuteReader() is called on a SqlCommand with CommandType = Text and a non-zero number of parameters, the command is unprepared internally anyway.
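A minimal sketch of the recommended shape, reusing the query from the question and relying on the connection pool to make the open cheap (connectionString would be stored by the constructor, as before):

    public string GetData(string customerId, string dataId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT TOP 1 Data FROM Table WHERE CustomerId = @CustomerId AND DataId = @DataId " +
            "AND ExpirationDate > @ExpirationDate ORDER BY CreationDate DESC", connection))
        {
            command.Parameters.AddWithValue("@CustomerId", customerId);
            command.Parameters.AddWithValue("@DataId", dataId);
            command.Parameters.AddWithValue("@ExpirationDate", DateTime.UtcNow);

            connection.Open();
            return command.ExecuteScalar() as string; // null when no row matches
        }
    }

No lock is needed: each call gets its own command, and the pool handles broken connections (a dead pooled connection is replaced on the next open).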
I'm using System.Data.SQLite, and I have some code where I'm wrapping a series of commands in a SQLite transaction.
Is it required to set the Transaction property to the transaction instance? It seems as though the command automatically picks it up from the connection.
The reason this is important to me is determining what I have to pass down to helper functions: just a reference to the connection, or the transaction as well.
Edit:
Here's a simplified example.
private void ExecuteSql(SQLiteConnection conn, IEnumerable<string> sqlStatements)
{
    using (var trans = conn.BeginTransaction())
    {
        try
        {
            foreach (var sql in sqlStatements) ExecuteSql(conn, sql);
        }
        catch
        {
            trans.Rollback();
            throw; // pass up to higher level
        }
        trans.Commit();
    }
}

private void ExecuteSql(SQLiteConnection conn, string sqlStatement)
{
    using (var cmd = new SQLiteCommand(conn)
    {
        //Transaction = trans, -- necessary?
        CommandText = sqlStatement,
    })
    {
        cmd.ExecuteNonQuery();
    }
}
No, you don't need to set the Transaction property manually.
Everything you do with the connection while the transaction is open will be tied to that transaction. However, this might not hold true for other database engines (although in general I think it does), so be careful if you ever change your DB.
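If you ever want to be explicit anyway (say, for portability to another provider), passing the transaction down to the helper and assigning it costs almost nothing:

    private void ExecuteSql(SQLiteConnection conn, SQLiteTransaction trans, string sqlStatement)
    {
        using (var cmd = new SQLiteCommand(conn)
        {
            Transaction = trans,      // explicit, though System.Data.SQLite infers it
            CommandText = sqlStatement,
        })
        {
            cmd.ExecuteNonQuery();
        }
    }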
I find the ADO.NET interfaces convoluted for no good reason; there's lots of redundancy, and providers often have to jump through hoops to get optimal performance with certain parts of the interface (e.g. the IDataParameter/IDataParameterCollection monstrosity).
I have a Database class that abstracts the ExecuteNonQuery() and ExecuteReader() of SqlCommand. Because the SqlConnection and SqlCommand are wrapped in using blocks, the SqlDataReader gets closed after CustomExecuteReader() returns, so I can't read SqlReaderResultSet at the business layer. Code below. Thanks guys for the feedback.
public static SqlDataReader SqlReaderResultSet { get; set; }

public static SqlDataReader CustomExecuteReader(string storedProc)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        var cmd = new SqlCommand(storedProc, conn) { CommandType = CommandType.StoredProcedure };
        try
        {
            conn.Open();
            SqlReaderResultSet = cmd.ExecuteReader();
        }
        catch (InvalidOperationException)
        {
            if (conn.State.Equals(ConnectionState.Closed))
                conn.Open();
        }
        finally
        {
            conn.Close();
        }
    }
    return SqlReaderResultSet;
}
"I can't read the SqlReaderResultSet at the business level layer" - and you shouldn't. Data should be passed using data transfer objects, never through a low level data access structure.
I recommend changing your approach so that the method described above iterates the records in the data reader and creates a list of objects. That list of objects is what should be returned and worked on.
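A sketch of that approach, with a hypothetical Employee DTO standing in for whatever shape the stored procedure returns:

    public static List<Employee> GetEmployees(string storedProc)
    {
        var results = new List<Employee>();
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(storedProc, conn) { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Map each record to a DTO while the reader is still open.
                    results.Add(new Employee
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1),
                    });
                }
            }
        }
        return results;
    }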
Iterator Blocks can be a way around this. It is legal and generally safe to do the following:
IEnumerable<MyFancyData> ResultSet
{
    get
    {
        using (DbConnection conn = ...)
        using (DbCommand cmd = ...)
        {
            conn.Open();
            using (DbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return new MyFancyData(reader[0], reader[42] ...);
                }
            }
        }
    }
}
Each time you enumerate the ResultSet property, the connection will be constructed again and disposed of afterwards (foreach and other IEnumerator<> consumers will appropriately call the Dispose() method of the generator, allowing the using blocks to do their thing).
This approach retains the lazy, as-you-need-it evaluation of the items from the data reader (which can be relevant when your data set becomes large), while still cleanly abstracting SQL-level details away from the public API.