I am using stored procedures in SQL Server. I have several stored procedure calls written in C#, and I want to wrap them inside a transaction:
//Begin Transaction here
sp1Call();
sp2Call();
sp3Call();
//Commit here
//Rollback if failed
Is there any way to do it?
Update:
I am using the Enterprise Library. An example of sp1Call():
public static void sp1Call(string itemName)
{
    DbCommand command = db.GetStoredProcCommand("dbo.sp1_insertItem");
    db.AddInParameter(command, "@item_name", DbType.String, itemName);
    db.ExecuteNonQuery(command);
}
You'll want to check out SqlTransaction. A quick search turns up DbTransaction:
using (DbConnection connection = db.CreateConnection())
{
    connection.Open();
    DbTransaction transaction = connection.BeginTransaction();
    try
    {
        DbCommand command = db.GetStoredProcCommand("dbo.sp1_insertItem");
        db.AddInParameter(command, "@item_name", DbType.String, itemName);
        db.ExecuteNonQuery(command, transaction);
        transaction.Commit();
    }
    catch
    {
        // Roll back the transaction, then rethrow so the caller sees the failure.
        transaction.Rollback();
        throw;
    }
}
Related
I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Command = new SqlCommand(Query, Connection))
        {
            if (Parameters.Length > 0)
            {
                Command.Parameters.Clear();
                Command.Parameters.AddRange(Parameters);
            }
            Command.ExecuteNonQuery();
        }
    }
}
The method may be called two or three times for different queries, but in the same manner. For example:
Insert an employee
Insert the employee's certificates
Update the employee's degree in another table [a failure can occur here, for example]
If point 3 fails, the commands that already executed must not take effect and must be rolled back.
I know I can wrap this in a SqlTransaction and call Commit(). But what about point 3 if it fails? I think only point 3 would roll back, and points 1 and 2 would not. How do I solve this, and what approach should I take?
Should I use a SqlCommand[] array? What should I do?
The only similar question I found is on CodeProject.
Without changing your Execute method, you can do this:
var tranOpts = new TransactionOptions()
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.MaximumTimeout
};

using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
    Execute("INSERT ...");
    Execute("INSERT ...");
    Execute("UPDATE ...");
    tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the transaction and reuse it for each call to Execute, so you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves changing the least code and involves the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control with your procedures.
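For instance, a single batch covering the three steps from the question might look like this. This is a sketch that reuses the Execute method shown above; the table and column names are made up for illustration:

```csharp
// Sketch only: three statements run as one atomic batch via the existing
// Execute(string, params SqlParameter[]) method. Table/column names are hypothetical.
string batch = @"
BEGIN TRANSACTION;
BEGIN TRY
    INSERT INTO Employee (Name) VALUES (@Name);
    INSERT INTO EmployeeCertificate (EmployeeName, Certificate) VALUES (@Name, @Cert);
    UPDATE EmployeeDegree SET Degree = @Degree WHERE EmployeeName = @Name;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW; -- surface the error to ADO.NET as a SqlException
END CATCH;";

Execute(batch,
    new SqlParameter("@Name", "Alice"),
    new SqlParameter("@Cert", "Azure Fundamentals"),
    new SqlParameter("@Degree", "MSc"));
```

If any statement in the batch fails, the CATCH block rolls everything back and the error surfaces in C# as a SqlException, so none of the three steps take effect.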
But it still feels kind of hackish.
One way to reduce that effect is marking the Execute() method private. Then, have an additional method in the class for each query. In this way, the long SQL strings are isolated, and when you're using the database it feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where the core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure to do all the work, and call it when ready.
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        // A SqlTransaction can only come from the connection; it has no public constructor.
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    using (var Command = new SqlCommand(Queries[i], Connection, Transaction))
                    {
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.Clear();
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch
            {
                Transaction.Rollback();
                throw; //I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
Though I'm not sure how the params keyword behaves with an array of arrays (I haven't tried that option), something along these lines would work. The weakness here is that it's not trivial for a later query to depend on a result from an earlier query, and even queries with no parameters still need a Parameters array as a placeholder.
A final option is extending the type holding your Execute() method to support transactions. The trick here is that it's common (and desirable) to have this type be static, but supporting transactions requires reusing shared connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means using instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    connection.Open();
    SqlCommand command = connection.CreateCommand();
    SqlTransaction transaction = connection.BeginTransaction("Transaction");
    command.Connection = connection;
    command.Transaction = transaction;
    try
    {
        if (Parameters.Length > 0)
        {
            command.Parameters.Clear();
            command.Parameters.AddRange(Parameters);
        }
        command.ExecuteNonQuery();
        transaction.Commit();
    }
    catch (Exception)
    {
        try
        {
            transaction.Rollback();
        }
        catch (Exception)
        {
            // The rollback itself can fail (e.g. lost connection); trace it.
        }
    }
}
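The disposable wrapper described above might be sketched like this. All names here (TransactionalExecutor, the reuse of Configuration.ConnectionString) are hypothetical, and this is only one way to shape it:

```csharp
// Sketch of a disposable type that holds one connection and one transaction
// for its lifetime, so multiple Execute calls share the same transaction.
public sealed class TransactionalExecutor : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public TransactionalExecutor(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Execute(string query, params SqlParameter[] parameters)
    {
        using (var command = new SqlCommand(query, _connection, _transaction))
        {
            if (parameters.Length > 0)
                command.Parameters.AddRange(parameters);
            command.ExecuteNonQuery();
        }
    }

    public void Commit() => _transaction.Commit();

    public void Dispose()
    {
        // Disposing an uncommitted SqlTransaction rolls it back,
        // so an exception before Commit() undoes all the work.
        _transaction.Dispose();
        _connection.Dispose();
    }
}
```

Usage would then mirror the TransactionScope example: create one instance in a using block, call Execute for each statement, and call Commit() last; any exception skips the commit and the Dispose-time rollback takes over.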
I write stored procedures with the READ UNCOMMITTED transaction isolation level, but when the connection times out, the rollback in the CATCH block in SQL doesn't run.
When I use SqlTransaction in my business layer in .NET, the problem is solved,
and I can handle any errors that occur in SQL in my try...catch in .NET.
Is what I did right?
using (SqlConnection conn = new SqlConnection(Data.DataProvider.Instance().ConnectionString))
{
    conn.Open();
    SqlTransaction tran = conn.BeginTransaction(IsolationLevel.ReadUncommitted, "Trans");
    try
    {
        object ores = SqlHelper.ExecuteScalar(.......);
        string res = ores.ToString();
        if (string.IsNullOrEmpty(res))
        {
            tran.Rollback();
            info.TrackingCode = "";
            return 0;
        }
        else if (res == "-4")
        {
            tran.Rollback();
            return -4;
        }
        else if (res == "-1")
        {
            tran.Rollback();
            return -1;
        }
        else
        {
            tran.Commit();
            return 1;
        }
    }
    catch (Exception)
    {
        tran.Rollback();
        info.TrackingCode = "";
        return 0;
    }
}
As per your requirement, there are two ways to define a transaction:
SQL Server-side transaction (inside the stored procedure)
C#/.NET (business layer) SqlTransaction
(The two cannot be mixed.)
In your case, you tried to define the transaction at the business layer, so you should call the stored procedure from the business layer as well. The business layer will then manage the SqlTransaction, and the time-out error won't occur.
So first wrap your stored procedure call in a command execution (at the business layer) and then execute it.
Change your code as below, with your required conditions.
// Command object for the transaction
SqlCommand cmd1 = new SqlCommand("YourStoredProcedureName", cnn);
cmd1.CommandType = CommandType.StoredProcedure;
cmd1.Transaction = transaction; // enlist the command in the pending transaction
// If you are using a parameter for the stored procedure
cmd1.Parameters.Add(new SqlParameter("@Param1", SqlDbType.NVarChar, 50));
cmd1.Parameters["@Param1"].Value = paramValue1;
// Execute the stored procedure through C# code
cmd1.ExecuteNonQuery();
transaction.Commit();
catch (SqlException sqlEx)
{
    if (sqlEx.Number == -2)
    {
        // Handle the timeout.
    }
    transaction.Rollback();
}
When a client timeout event occurs (.NET CommandTimeout, for example), the client sends an attention ("ABORT") signal to SQL Server, which then simply abandons the query processing; no transaction is rolled back and no locks are released. I solved this problem by using SqlTransaction in my .NET code instead of in SQL, and by handling failures through SqlException.
Both ways are equivalent, but in the business layer you can handle the transaction across multiple query executions.
Also, you should handle conn.Open() for possible exceptions, and check that SqlHelper uses your created connection and transaction everywhere in its code.
I am trying to understand what happens in the background when a client executes a simple SELECT query.
I am using C# ASP.NET WebForms, and I checked the traffic with Wireshark.
public DBC(string procedureName, params object[] procParams)
{
    strError = null;
    using (MySqlConnection connection = new MySqlConnection(GetConnectionString()))
    {
        try
        {
            connection.Open();
            MySqlCommand cmd = new MySqlCommand(procedureName, connection);
            cmd.CommandType = CommandType.StoredProcedure;
            // If we use params for the stored procedure
            if (procParams != null)
            {
                int i = 1;
                foreach (object paramValue in procParams)
                {
                    cmd.Parameters.Add(new MySqlParameter("@param_" + i, paramValue.ToString()));
                    i++;
                }
            }
            if (procedureName.Contains("get"))
            {
                dtLoaded = new DataTable();
                dtLoaded.Load(cmd.ExecuteReader());
            }
            else
            {
                cmd.ExecuteNonQuery();
            }
        }
        catch (Exception ex)
        {
            strError = ErrorHandler.ErrorToMessage(ex);
        }
        finally
        {
            connection.Close();
            connection.Dispose();
        }
    }
}
This is a simple SELECT * FROM TABLE query in a try-catch statement. In the finally block, the connection is closed and disposed.
Why does it cause 43 processes? I don't understand why there are so many. Could somebody explain?
Many thanks!
I assume you're using Oracle's Connector/NET. It performs a lot of not-strictly-necessary queries after opening a connection, e.g., SHOW VARIABLES to retrieve some server settings. (In 8.0.17 and later, this has been optimised slightly.)
Executing a stored procedure requires retrieving information about the stored procedure (to align parameters); it's more "expensive" than just executing a SQL statement directly. (You can disable this with CheckParameters=false, but I wouldn't recommend it.)
You can switch to MySqlConnector if you want a more efficient .NET client library. It's been tuned for performance (in both client CPU time and network I/O) and won't perform as much unnecessary work when opening a connection and executing a query. (MySqlConnector is the client library used for the .NET/MySQL benchmarks in the TechEmpower Framework Benchmarks.)
I read a lot of articles on transactions, but I want to build my own example of nested transactions to see how they really work in C#. I already have a good idea about them in SQL, but C# is giving me a tough time, so I came here for an example that explains how nested transactions work.
I tried the following code to check whether the inner transaction would be committed or not. Since the TransactionScope option is RequiresNew, the inner transaction should execute independently, but I introduced a deliberate unique key violation in the outer transaction and my inner transaction didn't execute. Why? Is my concept messed up?
Database _Cataloguedatabase = DatabaseFactory.CreateDatabase();

public void TransferAmountSuppress(Account a)
{
    var option = new TransactionOptions();
    option.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    try
    {
        using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.RequiresNew, option))
        {
            using (DbCommand cmd = _Cataloguedatabase.GetStoredProcCommand(SpUpdateCredit))
            {
                _Cataloguedatabase.AddInParameter(cmd, CreditAmountParameter, DbType.String, a.Amount);
                _Cataloguedatabase.AddInParameter(cmd, CodeParameter, DbType.String, a.Code);
                _Cataloguedatabase.ExecuteNonQuery(cmd);
            }
            using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.RequiresNew, option))
            {
                using (DbCommand cmd = _Cataloguedatabase.GetStoredProcCommand(SpUpdateDebit))
                {
                    _Cataloguedatabase.AddInParameter(cmd, DebitAmountParameter, DbType.String, a.Amount);
                    _Cataloguedatabase.ExecuteNonQuery(cmd);
                }
                innerScope.Complete();
            }
            outerScope.Complete();
        }
    }
    catch (Exception ex)
    {
        throw new FaultException(new FaultReason(new FaultReasonText(ex.Message)));
    }
}
We use an application on a server with SQL Server. It receives many SDF files from clients to update the SQL Server database; the update touches many tables, and I use a transaction for this. The application on the server is used by multiple users, so two updates can run at the same time, both with transactions, and the application throws an exception that the table is being used by another transaction:
New transaction is not allowed because there are other threads running in the session.
The users also have other tasks to do in the app, with transactions too; for the moment, if an update is running, they have to wait until it finishes.
Any ideas?
Update: here is some code for my function. These functions run with a transaction in a BackgroundWorker with a timer:
DbTransaction trans = ConnectionClass.Instance.MasterConnection.BeginTransaction();
try
{
    ClientDalc.UpdateClient(ChosenClient, trans);
    // Other functions use the same transaction.
    trans.Commit();
}
catch (Exception ex)
{
    trans.Rollback();
    return false;
}
Update Client:
public static bool UpdateClient(ClientEntity ChosenClient, DbTransaction trans)
{
    bool retVal = true;
    string sql = "UPDATE ClientTable Set ........";
    try
    {
        using (DbCommand cmd = DatabaseClass.GetDbCommand(sql))
        {
            cmd.Transaction = trans;
            cmd.Parameters.Add(DatabaseClass.GetParameter("@Name", ChosenClient.Name));
            //
            cmd.ExecuteNonQuery();
        }
    }
    catch (Exception ex)
    {
        retVal = false;
    }
    return retVal;
}
If I run anything else while the BackgroundWorker is in progress, I get an exception even when it's not a transaction:
ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
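That error means that once BeginTransaction has been called on a connection, every command using that connection must have its Transaction property set, even a plain SELECT. A minimal sketch of the fix, reusing the (hypothetical) helper names from the code above:

```csharp
// Sketch: while MasterConnection has a pending local transaction, any command
// on it must be enlisted, even read-only ones. Helper names are from the post.
DbTransaction trans = ConnectionClass.Instance.MasterConnection.BeginTransaction();
using (DbCommand cmd = DatabaseClass.GetDbCommand("SELECT COUNT(*) FROM ClientTable"))
{
    cmd.Transaction = trans; // without this line, the error above is thrown
    object count = cmd.ExecuteScalar();
}
```

An alternative is to give each operation its own connection instead of sharing a single MasterConnection, so unrelated queries never run on a connection with a pending transaction.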
Is this a totally borked posting?
The only reference I can find to that error points to Entity Framework, NOT SQL Server, and I have never seen it in SQL Server; in fact, SQL Server has no appropriate concept of a session at all. They are called connections there.
SqlException from Entity Framework - New transaction is not allowed because there are other threads running in the session
In that case, your description and the tagging make ZERO sense, and the only answer is to use Entity Framework PROPERLY, i.e., not multi-threaded. Open separate sessions per thread.
Try to follow the sample C# code
using (SqlConnection con = new SqlConnection("Your Connection String"))
{
    using (SqlCommand cmd = new SqlCommand("Your Stored Procedure Name", con))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        SqlParameter param = new SqlParameter();
        param.ParameterName = "@ParameterName";
        param.Value = "Value";
        param.SqlDbType = SqlDbType.VarChar;
        param.Direction = ParameterDirection.Input;
        cmd.Parameters.Add(param);
        con.Open();
        cmd.ExecuteNonQuery();
    }
}
Try to follow the sample stored procedure:
Create Proc YourStoredProcedureName
    @ParameterName DataType(Length)
As
Begin
    Set NoCount On
    Set XACT_ABORT ON
    Begin Try
        Begin Tran
            --Your SQL code...
        Commit Tran
    End Try
    Begin Catch
        Rollback Tran
    End Catch
End
Suggestions
Keep your transaction as short as possible.
When you are adding a record to a table, it locks the resource until the transaction completes, so other users will have to wait.
For updates, you can use concurrency controls.
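As one example of such a concurrency control, optimistic concurrency with a SQL Server rowversion column makes the UPDATE conditional on the version the user originally read. This is a sketch; the table, column, and variable names are hypothetical:

```csharp
// Sketch: optimistic concurrency using a rowversion column.
// 'originalVersion' (byte[]) was read earlier along with the row being edited.
const string sql = @"
UPDATE ClientTable
SET Name = @Name
WHERE Id = @Id AND RowVersion = @OriginalVersion;";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@Name", newName);
    cmd.Parameters.AddWithValue("@Id", id);
    cmd.Parameters.AddWithValue("@OriginalVersion", originalVersion);
    conn.Open();
    if (cmd.ExecuteNonQuery() == 0)
    {
        // Zero rows affected: someone else changed the row since it was read.
        throw new DBConcurrencyException("The record was modified by another user.");
    }
}
```

This keeps locks short (no transaction is held while the user edits) and turns a conflicting update into a handleable exception instead of a blocked session.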