I have a problem at work with a simple insert method occasionally timing out due to a scheduled clean-up task on a database table. This task runs every ten minutes and during its execution my code often records an error in the event log due to 'the wait operation timed out'.
One of the solutions I'm considering is to make the code calling the stored procedure asynchronous, and in order to do this I first started looking at the BeginExecuteNonQuery method.
I've tried using the BeginExecuteNonQuery method but have found that it quite often does not insert the row at all. The code I've used is as follows:
SqlConnection conn = daService.CreateSqlConnection(dataSupport.DBConnString);
SqlCommand command = daService.CreateSqlCommand("StoredProc");

try {
    command.Connection = conn;
    command.Parameters.AddWithValue("page", page);
    command.Parameters.AddWithValue("Customer", customerId);
    conn.Open();
    command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
        SqlCommand c = (SqlCommand)ar.AsyncState;
        c.EndExecuteNonQuery(ar);
        c.Connection.Close();
    }, command);
} catch (Exception ex) {
    LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
} finally {
    command.Connection.Close();
    command.Dispose();
    conn.Dispose();
}
Obviously, I'm not expecting an instant insert, but I am expecting the row to be there after five minutes on a low-usage development database.
I've now switched to the following code, which does do the insert:

System.Threading.ThreadPool.QueueUserWorkItem(delegate {
    using (SqlConnection conn = daService.CreateSqlConnection(dataSupport.DBConnString)) {
        using (SqlCommand command = daService.CreateSqlCommand("StoredProcedure")) {
            command.Connection = conn;
            command.Parameters.AddWithValue("page", page);
            command.Parameters.AddWithValue("customer", customerId);
            conn.Open();
            command.ExecuteNonQuery();
        }
    }
});
I've got a few questions, some of them are assumptions:
As my insert method's signature is void, I'm presuming code that calls it doesn't wait for a response. Is this correct?
Is there a reason why BeginExecuteNonQuery doesn't run the stored procedure? Is my code wrong?
Most importantly, if I use QueueUserWorkItem (or a well-behaved BeginExecuteNonQuery), am I right in thinking this will have the desired result? That is, an attempt to run the stored procedure while the scheduled task is running will see the code execute after the task completes, rather than timing out as it does now?
Edit
This is the version I'm using now in response to the comments and answers I've received.
SqlConnection conn = daService.CreateSqlConnection(
    string.Concat("Asynchronous Processing=True;", dataSupport.DBConnString));
SqlCommand command = daService.CreateSqlCommand("StoredProc");

command.Connection = conn;
command.Parameters.AddWithValue("page", page);
command.Parameters.AddWithValue("customer", customerId);
conn.Open();
command.BeginExecuteNonQuery(delegate(IAsyncResult ar) {
    SqlCommand c = (SqlCommand)ar.AsyncState;
    try {
        c.EndExecuteNonQuery(ar);
    } catch (Exception ex) {
        LogService.WriteExceptionEntry(ex, EventLogEntryType.Error);
    } finally {
        c.Connection.Close();
        c.Dispose();
        conn.Dispose();
    }
}, command);
Is there a reason why BeginExecuteNonQuery doesn't run the stored procedure? Is my code wrong?
Probably you didn't add Asynchronous Processing=True to the connection string.
Also, there can be a situation where, by the time the response from SQL is ready, the ASP.NET response has already been sent. That's why you need to use Page.RegisterAsyncTask (plus AsyncTimeout).
(If you use WebForms asynchronous pages, you should add Async="True" to the page directive.)
P.S. The call to System.Threading.ThreadPool.QueueUserWorkItem is dangerous in ASP.NET apps; you have to take care that the response has not already been sent.
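For illustration, a rough sketch of that suggestion on a WebForms page, assuming the page directive carries Async="true" (and optionally AsyncTimeout), and assuming conn and command are the question's objects held as page fields; the handler names here are invented:

protected void Page_Load(object sender, EventArgs e)
{
    // Register the async work so ASP.NET keeps the response alive until it completes
    Page.RegisterAsyncTask(new PageAsyncTask(BeginInsert, EndInsert, null, null));
}

private IAsyncResult BeginInsert(object sender, EventArgs e, AsyncCallback cb, object state)
{
    conn.Open();
    return command.BeginExecuteNonQuery(cb, state);
}

private void EndInsert(IAsyncResult ar)
{
    command.EndExecuteNonQuery(ar);
    conn.Close();
}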
Related
I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
using (var Connection = new SqlConnection(Configuration.ConnectionString))
{
Connection.Open();
using (var Command = new SqlCommand(Query, Connection))
{
if (Parameters.Length > 0)
{
Command.Parameters.Clear();
Command.Parameters.AddRange(Parameters);
}
Command.ExecuteNonQuery();
}
}
}
The method may be called two or three times for different queries, but in the same manner.
For example:
Insert an employee
Insert the employee's certificates
Update the employee's degree in another table [failure can occur here, for example]
If point 3 fails, the commands that already executed shouldn't stay applied and must be rolled back.
I know I can put a SqlTransaction above this and use the Commit() method. But what about point 3 if it fails? I think only point 3 will roll back and points 1 and 2 will not? How do I solve this, and what approach should I take?
Should I use a SqlCommand[] array? What should I do?
I've only found a similar question on CodeProject:
See Here
Without changing your Execute method, you can do this:
var tranOpts = new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadCommitted,
Timeout = TransactionManager.MaximumTimeout
};
using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
Execute("INSERT ...");
Execute("INSERT ...");
Execute("UPDATE ...");
tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the Transaction and reuse it for each call to Execute. So you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves changing the least code and involves the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control with your procedures.
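As a sketch of that approach, a call through the existing Execute method might look like the following; the table names and parameter values are invented for illustration, and THROW needs SQL Server 2012 or later:

Execute(@"
BEGIN TRANSACTION;
BEGIN TRY
    INSERT INTO Employee (Name) VALUES (@name);
    INSERT INTO EmployeeCertificate (EmployeeName, Certificate) VALUES (@name, @cert);
    UPDATE EmployeeDegree SET Degree = @degree WHERE EmployeeName = @name; -- the step that may fail
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    THROW; -- rethrow so the caller sees the failure
END CATCH;",
    new SqlParameter("@name", "Alice"),
    new SqlParameter("@cert", "MCSD"),
    new SqlParameter("@degree", "BSc"));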
But it still feels kind of hackish.
One way to reduce that effect is marking the Execute() method private. Then, have an additional method in the class for each query. In this way, the long SQL strings are isolated, and when you're using the database it feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where the core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure to do all the work, and call it when ready.
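By way of illustration, calling such a do-everything procedure is straightforward; the procedure name and parameters below are invented, and the commit/rollback logic would live inside the procedure body:

using (var connection = new SqlConnection(Configuration.ConnectionString))
using (var command = new SqlCommand("dbo.SaveEmployee", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@name", "Alice");
    command.Parameters.AddWithValue("@degree", "BSc");
    connection.Open();
    command.ExecuteNonQuery(); // one round trip; the transaction is handled in the procedure
}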
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    // pass the transaction to each command so every statement enlists in it
                    using (var Command = new SqlCommand(Queries[i], Connection, Transaction))
                    {
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.Clear();
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch (Exception)
            {
                Transaction.Rollback();
                throw; //I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
Though I'm not sure how well the params keyword works with an array of arrays... I've just not tried that option, but something along these lines would work. The weakness here is also that it's not trivial to have a later query depend on a result from an earlier query, and even a query with no parameters would still need a Parameters array as a placeholder.
A final option is extending the type holding your Execute() method to support transactions. The trick here is it's common (and desirable) to have this type be static, but supporting transactions requires re-using common connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means both instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    connection.Open();
    SqlCommand command = connection.CreateCommand();
    SqlTransaction transaction = connection.BeginTransaction("Transaction");
    command.Connection = connection;
    command.Transaction = transaction;
    try
    {
        if (Parameters.Length > 0)
        {
            command.Parameters.Clear();
            command.Parameters.AddRange(Parameters);
        }
        command.ExecuteNonQuery();
        transaction.Commit();
    }
    catch (Exception)
    {
        try
        {
            transaction.Rollback();
        }
        catch (Exception)
        {
            // trace: the rollback itself can fail, e.g. if the connection dropped
        }
    }
}
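To make the wrapper idea concrete, here is a rough sketch; the class name and shape are invented for illustration, not an established API:

public sealed class DbTransactionScope : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;
    private bool _committed;

    public DbTransactionScope(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Execute(string query, params SqlParameter[] parameters)
    {
        using (var command = new SqlCommand(query, _connection, _transaction))
        {
            if (parameters.Length > 0)
                command.Parameters.AddRange(parameters);
            command.ExecuteNonQuery();
        }
    }

    public void Commit()
    {
        _transaction.Commit();
        _committed = true;
    }

    public void Dispose()
    {
        if (!_committed)
            _transaction.Rollback(); // anything left uncommitted is rolled back
        _transaction.Dispose();
        _connection.Dispose();
    }
}

Usage would then mirror the TransactionScope example above: create the scope in a using block, call Execute as many times as needed, and call Commit() as the last step.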
I am trying to understand what happens in the background when a simple SELECT query is executed by the client.
I am using C# ASP.NET WebForms, and I inspected the traffic with Wireshark.
public DBC(string procedureName, params object[] procParams)
{
    strError = null;
    using (MySqlConnection connection = new MySqlConnection(GetConnectionString()))
    {
        try
        {
            connection.Open();
            MySqlCommand cmd = new MySqlCommand(procedureName, connection);
            cmd.CommandType = CommandType.StoredProcedure;
            // if we use params for the stored procedure
            if (procParams != null)
            {
                int i = 1;
                foreach (object paramValue in procParams)
                {
                    cmd.Parameters.Add(new MySqlParameter("@param_" + i, paramValue.ToString()));
                    i++;
                }
            }
            if (procedureName.Contains("get"))
            {
                dtLoaded = new DataTable();
                dtLoaded.Load(cmd.ExecuteReader());
            }
            else
            {
                cmd.ExecuteNonQuery();
            }
        }
        catch (Exception ex)
        {
            strError = ErrorHandler.ErrorToMessage(ex);
        }
        finally
        {
            connection.Close();
            connection.Dispose();
        }
    }
}
This is a simple SELECT * FROM TABLE query inside a try-catch statement. In the finally block, the connection is closed and disposed.
Why does it cause 43 packets? I don't understand why there are so many. Could somebody explain it to me?
Many thanks!
I assume you're using Oracle's Connector/NET. It performs a lot of not-strictly-necessary queries after opening a connection, e.g., SHOW VARIABLES to retrieve some server settings. (In 8.0.17 and later, this has been optimised slightly.)
Executing a stored procedure requires retrieving information about the stored procedure (to align parameters); it's more "expensive" than just executing a SQL statement directly. (You can disable this with CheckParameters=false, but I wouldn't recommend it.)
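For example, the option can be set through the connection-string builder; treat the exact option name as an assumption to verify against the Connector/NET version you're running:

var csb = new MySqlConnectionStringBuilder
{
    Server = "localhost",
    Database = "mydb",
    UserID = "user",
    Password = "pass",
    CheckParameters = false // skips the procedure-metadata queries, at the cost of client-side parameter checking
};
using (var connection = new MySqlConnection(csb.ConnectionString))
{
    connection.Open();
    // execute the stored procedure as before
}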
You can switch to MySqlConnector if you want a more efficient .NET client library. It's been tuned for performance (in both client CPU time and network I/O) and won't perform as much unnecessary work when opening a connection and executing a query. (MySqlConnector is the client library used for the .NET/MySQL benchmarks in the TechEmpower Framework Benchmarks.)
I'm calling two stored procedures from a Windows service developed in C#. It should write two records into a given table in the DB.
The stored procedures have been tested and executed from SQL Server Management Studio and they ALWAYS work well, so my problem is with the calls to them.
The weird part is that it works randomly. Sometimes it works just fine, but most of the time the service doesn't execute the procedures. I have debugged it, and the result of BeginExecuteNonQuery() is always "Ran To Completion", so it says it ran OK.
I'm including the code of the methods which make the calls. I don't include the stored procedures' code because they are huge, and as I said they always work fine when you execute them from Management Studio passing NULL as the parameters. Of course, I don't have any connection or stored-procedure naming problem.
public void Process()
{
if (!_initialized)
Initialize();
Stopped = false;
try
{
// Calling Sales sp
DoOutboundProcedure("procedure1", null, null, null);
// Calling Returns sp
DoOutboundProcedure("procedure2", null, null, null);
}
catch (Exception ex)
{
_logger.Error(ex.Message);
}
}
public void DoOutboundProcedure(string procedureName, object i_TraceOn, object i_Validate, DateTime? i_NextDateLastModified)
{
using (SqlConnection con = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["DatabaseConnection"].ConnectionString))
{
using (SqlCommand cmd = new SqlCommand(procedureName, con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#i_TraceOn", SqlDbType.TinyInt).Value = i_TraceOn;
cmd.Parameters.Add("#i_Validate", SqlDbType.Bit).Value = i_Validate;
cmd.Parameters.Add("#i_NextDateLastModified", SqlDbType.DateTime).Value = i_NextDateLastModified;
con.Open();
_logger.Trace("Calling store procedure \"{0}\".", procedureName);
var result = cmd.BeginExecuteNonQuery();
_logger.Trace("Stored procedure calling finished.");
con.Close();
}
}
}
Since you're trying to do this operation asynchronously with BeginExecuteNonQuery(), you have to finish the operation with EndExecuteNonQuery().
Per MSDN:
When you call BeginExecuteNonQuery to execute a Transact-SQL statement, you must call EndExecuteNonQuery in order to complete the operation. If the process of executing the command has not yet finished, this method blocks until the operation is complete. Users can verify that the command has completed its operation by using the IAsyncResult instance returned by the BeginExecuteNonQuery method. If a callback procedure was specified in the call to BeginExecuteNonQuery, this method must be called.
An example of how to use this, as taken from MSDN:
SqlCommand command = new SqlCommand(commandText, connection);
connection.Open();
int count = 0; // loop counter for the progress message
IAsyncResult result = command.BeginExecuteNonQuery();
while (!result.IsCompleted)
{
    Console.WriteLine("Waiting ({0})", count++);
    // Wait for 1/10 second, so the counter
    // does not consume all available resources
    // on the main thread.
    System.Threading.Thread.Sleep(100);
}
Console.WriteLine("Command complete. Affected {0} rows.",
    command.EndExecuteNonQuery(result));
I use the following approach to execute queries against the database and read data:
using(SqlConnection connection = new SqlConnection("Connection string"))
{
connection.Open();
using(SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection))
{
using (SqlDataReader reader = command.ExecuteReader())
{
// read and process data somehow (possible source of exceptions)
} // <- reader hangs here if exception occurs
}
}
While reading and processing data, some exceptions can occur. The problem is that when an exception is thrown, the DataReader hangs on the Close() call. Do you have any idea why? And how can this issue be solved properly? The problem went away when I wrote a try..catch..finally block instead of using and called command.Cancel() before disposing the reader in finally.
Working version:
using(SqlConnection connection = new SqlConnection("Connection string"))
{
connection.Open();
using(SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection))
{
SqlDataReader reader = command.ExecuteReader();
try
{
// read and process data somehow (possible source of exceptions)
}
catch(Exception ex)
{
// handle exception somehow
}
finally
{
command.Cancel(); // !!!
reader.Dispose();
}
}
}
When an exception occurs you stop processing data before all data is received. You can reproduce this issue even without exceptions if you abort processing after a few rows.
When the command or reader is disposed, the query is still running on the server. ADO.NET just reads all remaining rows and result sets like mad and throws them away. It does that because the server is sending them and the protocol requires receiving them.
Calling SqlCommand.Cancel sends an "attention" to SQL Server causing the query to truly abort. It is the same thing as pressing the cancel button in SSMS.
To summarize, this issue occurs whenever you stop processing rows although many more rows are inbound. Your workaround (calling SqlCommand.Cancel) is the correct solution.
About the Dispose method of the SqlDataReader, MSDN (link) has this to say:
Releases the resources used by the DbDataReader and calls Close.
Emphasis added by me. And if you then go look at the Close method (link), it states this:
The Close method fills in the values for output parameters, return values and RecordsAffected, increasing the time that it takes to close a SqlDataReader that was used to process a large or complex query. When the return values and the number of records affected by a query are not significant, the time that it takes to close the SqlDataReader can be reduced by calling the Cancel method of the associated SqlCommand object before calling the Close method.
So if you need to stop iterating through the reader, it's best to cancel the command first just like your working version is doing.
I would not format it that way.
Open() is not in a try block, and it can throw an exception.
ExecuteReader() is not in a try block, and it can throw an exception.
I like reader.Close, because that is what I see in MSDN samples.
And I catch SqlException, as it has error numbers (like for timeouts).
SqlConnection connection = new SqlConnection();
SqlDataReader reader = null;
try
{
connection.Open(); // you are missing this as a possible source of exceptions
SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection);
reader = command.ExecuteReader(); // you are missing this as a possible source of exceptions
// read and process data somehow (possible source of exceptions)
}
catch (SqlException ex)
{
    // SqlException has a Number you can check (e.g. -2 is a timeout)
}
catch (Exception ex)
{
// handle exception somehow
}
finally
{
if (reader != null) reader.Close();
connection.Close();
}
I'm making a system that should run 24/7, with timers to control it. There are many calls to the database, and at some point two methods try to open a connection at the same time and one of them fails. I've tried to write a retry method so that my methods would succeed. With the help of Michael S. Scherotter's and Steven Sudit's methods in Better way to write retry logic without goto, my method looks like this:
int MaxRetries = 3;
Product pro = new Product();
SqlConnection myCon = DBcon.getInstance().conn();
string query = string.Format("SELECT * FROM Product WHERE Barcode = @barcode");
for (int tries = MaxRetries; tries >= 0; tries--) // <-- 'tries' at the end is unreachable?
{
    try
    {
        myCon.Open();
        SqlCommand com = new SqlCommand(query, myCon);
        com.Parameters.AddWithValue("@barcode", barcode);
        SqlDataReader dr = com.ExecuteReader();
        if (dr.Read())
        {
            pro.Barcode = dr.GetString(0);
            pro.Name = dr.GetString(1);
        }
        break;
    }
    catch (Exception ex)
    {
        if (tries == 0)
            Console.WriteLine("Exception: " + ex);
        throw;
    }
}
myCon.Close();
return pro;
When running the code, the program stops at the for (...) loop with the exception: "The connection was not closed. The connection's current state is open." This problem is the reason I'm trying to write this method in the first place! If anyone knows how to resolve this problem, please write. Thanks.
You do
myCon.Open();
inside the for loop, but
myCon = DBcon.getInstance().conn();
outside of it. This way you try to open the same connection multiple times. If you want to protect against loss of the DB connection, you need to put both inside the loop.
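A minimal sketch of that suggestion, assuming DBcon.getInstance().conn() hands back a fresh SqlConnection on each call, so each attempt gets its own connection and disposes it:

for (int tries = MaxRetries; tries >= 0; tries--)
{
    try
    {
        using (SqlConnection myCon = DBcon.getInstance().conn())
        using (SqlCommand com = new SqlCommand(query, myCon))
        {
            myCon.Open();
            com.Parameters.AddWithValue("@barcode", barcode);
            using (SqlDataReader dr = com.ExecuteReader())
            {
                if (dr.Read())
                {
                    pro.Barcode = dr.GetString(0);
                    pro.Name = dr.GetString(1);
                }
            }
        }
        break; // success - stop retrying
    }
    catch (Exception)
    {
        if (tries == 0)
            throw; // out of retries - give up
    }
}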
You should move the call to myCon.Open outside the for statement, or wrap myCon.Open() in a check of the connection state before re-opening the connection:
if (myCon.State != ConnectionState.Open)
{
myCon.Open();
}
Edited for new information
How about using transactions to preserve data integrity, getting on-the-fly connections for multiple access, and wrapping them in using statements to ensure connections are closed? E.g.
using (SqlConnection myCon = new SqlConnection("ConnectionString"))
{
    myCon.Open();
    var transaction = myCon.BeginTransaction();
    try
    {
        // ... do some DB stuff - build your command with SqlCommand but use your transaction and your connection
        var sqlCommand = new SqlCommand(CommandString, myCon, transaction);
        sqlCommand.Parameters.Add(new SqlParameter("@param", paramValue)); // Build up your params
        sqlCommand.ExecuteNonQuery(); // Or whatever type of execution is best
        transaction.Commit(); // Yayy!
    }
    catch (Exception ex)
    {
        transaction.Rollback(); // D'oh!
        // ... Some logging
    }
    myCon.Close();
}
This way, even if you forget to Close the connection, it will still be done implicitly when the connection reaches the end of its using statement.
Have you tried adding
myCon.Close();
into a finally block? It looks like it is never hit if you have an exception. I would highly recommend that you wrap the connection, command object, etc. in using statements. This will ensure they are disposed of properly and the connection is closed.
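For instance, a small sketch of the finally suggestion applied to the question's myCon:

try
{
    myCon.Open();
    // ... execute the command
}
catch (SqlException ex)
{
    Console.WriteLine("Exception: " + ex);
    throw;
}
finally
{
    myCon.Close(); // runs whether or not an exception was thrown
}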