The command timeout does not throw - C#

private void NpgSqlGetContracts(IList<Contract> con)
{
    var conn = (NpgsqlConnection)Database.GetDbConnection();
    List<Contract> contracts = new List<Contract>();
    using (var cmd = new NpgsqlCommand("SELECT * FROM \"Contracts\";", conn))
    {
        cmd.CommandTimeout = 1;
        cmd.Prepare();
        int conCount = cmd.ExecuteNonQuery();
        using (var reader = cmd.ExecuteReader(CommandBehavior.SingleResult))
        {
            while (reader.Read())
            {
                contracts.Add(MapDataReaderRowToContract(reader));
            }
        }
    }
}
Here I have this code to try out the command timeout against Postgres. I have tried debugging it locally with a breakpoint in Visual Studio, using both ExecuteNonQuery and ExecuteReader. The query takes more than 1 second to load all the data (there are over 3 million rows here), but the command timeout is set to 1 second. I wonder why it does not throw any exception here. What did I configure wrong?
Thank you :)

As @hans-kesting wrote above, the command timeout isn't cumulative for the entire command, but rather applies to each individual I/O-producing call (e.g. Read). In that sense, it's meant to help with queries that run for too long without producing any results, or with network issues.
You may also want to take a look at PostgreSQL's statement_timeout, which is a PG-side timeout for the entire command. It too has its issues, and Npgsql never sets it implicitly for you - but you can set it yourself.
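For illustration, a minimal sketch of setting it yourself on the same connection (the 1000 ms value mirrors the 1-second CommandTimeout above; this is a hedged example, not the only way to do it):
// statement_timeout is a PostgreSQL session setting, measured in
// milliseconds, that applies to each statement run on this connection.
using (var set = new NpgsqlCommand("SET statement_timeout = 1000;", conn))
{
    set.ExecuteNonQuery();
}
using (var cmd = new NpgsqlCommand("SELECT * FROM \"Contracts\";", conn))
using (var reader = cmd.ExecuteReader())
{
    // Once the server-side timeout elapses, Npgsql surfaces it as a
    // PostgresException with SQLSTATE 57014 (query_canceled).
    while (reader.Read())
    {
        // map rows here
    }
}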

Connection leak (C#/ADO.NET) even though SqlConnection created with using

I have a program that loads a large quantity of data (~800K-1M rows per iteration) in a Task running on the threadpool (see offending code sample below); no more than 4 tasks running concurrently. This is the only place in the program that a connection is made to this database. When running the program on my laptop (and other coworkers identical laptops), the program functions perfectly. However, we have access to another workstation via remote desktop that is substantially more powerful than our laptops. The program fails about 1/3 to 1/2 of the way through its list. All of the tasks return an exception.
The first exception was: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached." I've tried googling, binging, searching on StackOverflow, and banging my head against the table trying to figure out how this can be the case. With no more than 4 tasks running at once, there shouldn't be more than 4 connections at any one time.
In response to this, I tried two things: (1) I added a try/catch around the conn.Open() line that would clear the pool if an InvalidOperationException appeared--that appeared to work [I didn't let it run all the way through, but it got substantially past where it failed before], but at the cost of performance. (2) I changed ConnectionTimeout to 30 seconds instead of 15, which did not work (but let it proceed a little further). I also tried at one point to set ConnectRetryInterval=4 (mistakenly choosing this instead of ConnectRetryCount)--this led to a different error, "The maximum number of requests is 4,800", which is strange because we still shouldn't be anywhere near 4,800 requests or connections.
In short, I'm at a loss because I can't figure out what is causing this connection leak only on a higher speed computer. I am also unable to get Visual Studio on that computer to debug directly--any thoughts anyone might have on where to look to try and resolve this would be much appreciated.
(Follow-up to c# TaskFactory ContinueWhenAll unexpectedly running before all tasks complete)
private void LoadData()
{
    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
    builder.DataSource = "redacted";
    builder.UserID = "redacted";
    builder.Password = "redacted";
    builder.InitialCatalog = "redacted";
    builder.ConnectTimeout = 30;
    using (SqlConnection conn = new SqlConnection(builder.ConnectionString))
    {
        //try
        //{
        //    conn.Open();
        //} catch (InvalidOperationException)
        //{
        //    SqlConnection.ClearPool(conn);
        //    conn.Open();
        //}
        conn.Open();
        string monthnum = _monthsdict.First((x) => x.Month == _month).MonthNum;
        string yearnum = _monthsdict.First((x) => x.Month == _month).YearNum;
        string nextmonthnum = _monthsdict[Array.IndexOf(_monthsdict, _monthsdict.First((x) => x.Month == _month)) + 1].MonthNum;
        string nextyearnum = _monthsdict[Array.IndexOf(_monthsdict, _monthsdict.First((x) => x.Month == _month)) + 1].YearNum;
        SqlCommand cmd = new SqlCommand();
        cmd.Connection = conn;
        cmd.CommandText = @"redacted";
        cmd.Parameters.AddWithValue("@redacted", redacted);
        cmd.Parameters.AddWithValue("@redacted", redacted);
        cmd.Parameters.AddWithValue("@redacted", redacted);
        cmd.CommandTimeout = 180;
        SqlDataReader reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Data data = new Data();
            int col1 = reader.GetOrdinal("col1");
            int col2 = reader.GetOrdinal("col2");
            int col3 = reader.GetOrdinal("col3");
            int col4 = reader.GetOrdinal("col4");
            data.redacted = redacted;
            data.redacted = redacted;
            data.redacted = redacted;
            data.redacted = redacted;
            data.redacted = redacted;
            data.Calculate();
            _data.Add(data); // not a mistake, referring to another class variable
        }
        reader.Close();
        cmd.Dispose();
        conn.Close();
        conn.Dispose();
    }
}
This turned out to be a classic case of not reading the documentation closely enough. I was trying to cap max Threads at 4, using ThreadPool.SetMaxThreads, but max Threads cannot be less than the number of processors. On the workstation it failed on, it has 8 processors. So, there was never a cap, it was running as many tasks as the Task Scheduler felt appropriate, and it was eventually hitting the Connection Pool limit.
https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool.setmaxthreads
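If the intent is a hard cap of 4 concurrent loads regardless of processor count, one common alternative to ThreadPool.SetMaxThreads is throttling with a SemaphoreSlim. A minimal sketch under that assumption (names are illustrative, not from the original program):
// At most 4 LoadData calls run at once, so at most 4 pooled
// connections are in use at any one time.
private static readonly SemaphoreSlim _gate = new SemaphoreSlim(4, 4);

private async Task LoadAllAsync(int iterations)
{
    var tasks = new List<Task>();
    for (int i = 0; i < iterations; i++)
    {
        tasks.Add(Task.Run(async () =>
        {
            await _gate.WaitAsync();
            try
            {
                LoadData(); // the existing synchronous load
            }
            finally
            {
                _gate.Release();
            }
        }));
    }
    await Task.WhenAll(tasks);
}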

How to roll back all SqlCommands which were already executed with SqlTransaction?

I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Command = new SqlCommand(Query, Connection))
        {
            if (Parameters.Length > 0)
            {
                Command.Parameters.Clear();
                Command.Parameters.AddRange(Parameters);
            }
            Command.ExecuteNonQuery();
        }
    }
}
The method may be called 2 or 3 times for different queries, but in the same manner.
For example:
Insert an employee
Insert the employee's certificates
Update the employee's degree in another table [a failure can occur here, for example]
If point [3] fails, the commands that have already executed shouldn't take effect and must be rolled back.
I know I can open a SqlTransaction above this and use its Commit() method. But what about point 3 if it fails? I think only point 3 will be rolled back, while points 1 and 2 will not. How do I solve this, and what approach should I take?
Should I use SqlCommand[] arrays? What should I do?
I only found a similar question, but on CodeProject:
See Here
Without changing your Execute method you can do this:
var tranOpts = new TransactionOptions()
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.MaximumTimeout
};
using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
    Execute("INSERT ...");
    Execute("INSERT ...");
    Execute("UPDATE ...");
    tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the Transaction and reuse it for each call to Execute. So you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves changing the least code and involves the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control with your procedures.
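For example, the existing Execute method could be handed one batched string along these lines (table and column names here are placeholders, not from the question):
// SET XACT_ABORT ON makes SQL Server roll the whole transaction back
// automatically if any statement in the batch raises a run-time error.
string batch = @"
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    INSERT INTO Employee (Name) VALUES (@Name);
    INSERT INTO EmployeeCertificate (EmployeeId, Certificate) VALUES (@EmployeeId, @Certificate);
    UPDATE EmployeeDegree SET Degree = @Degree WHERE EmployeeId = @EmployeeId;
    COMMIT TRANSACTION;";
Execute(batch, new SqlParameter("@Name", name) /* , ...remaining parameters... */);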
But it still feels kind of hackish.
One way to reduce that effect is to mark the Execute() method private. Then, have an additional method in the class for each query. That way, the long SQL strings are isolated, and using the database feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure to do all the work, and call it when ready.
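A hedged sketch of the call site, assuming a procedure (hypothetically named dbo.SaveEmployee) that wraps all three statements in its own BEGIN TRAN/COMMIT with error handling:
using (var connection = new SqlConnection(Configuration.ConnectionString))
using (var command = new SqlCommand("dbo.SaveEmployee", connection))
{
    command.CommandType = CommandType.StoredProcedure; // System.Data
    command.Parameters.AddWithValue("@Name", name);
    // ...remaining parameters...
    connection.Open();
    command.ExecuteNonQuery();
}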
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    using (var Command = new SqlCommand(Queries[i], Connection))
                    {
                        Command.Transaction = Transaction;
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.Clear();
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch
            {
                Transaction.Rollback();
                throw; // I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
Though I'm not sure how the params keyword works with an array of arrays... I've just not tried that option, but something along these lines would work. The weakness here is also that it's not trivial to have a later query depend on a result from an earlier query, and even queries with no parameters would still need a Parameters array as a placeholder.
A final option is extending the type holding your Execute() method to support transactions. The trick here is it's common (and desirable) to have this type be static, but supporting transactions requires re-using common connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means both instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    SqlCommand command = connection.CreateCommand();
    SqlTransaction transaction;
    connection.Open();
    transaction = connection.BeginTransaction("Transaction");
    command.Connection = connection;
    command.Transaction = transaction;
    try
    {
        if (Parameters.Length > 0)
        {
            command.Parameters.Clear();
            command.Parameters.AddRange(Parameters);
        }
        command.ExecuteNonQuery();
        transaction.Commit();
    }
    catch (Exception e)
    {
        try
        {
            transaction.Rollback();
        }
        catch (Exception ex2)
        {
            // trace
        }
    }
}

Cancelling SQL Query in background in .NET

I'm designing a small desktop app that fetches data from SQL server. I used BackgroundWorker to make the query execute in background. The code that fetches data generally comes down to this:
public static DataTable GetData(string sqlQuery)
{
    DataTable t = new DataTable();
    using (SqlConnection c = new SqlConnection(GetConnectionString()))
    {
        c.Open();
        using (SqlCommand cmd = new SqlCommand(sqlQuery))
        {
            cmd.Connection = c;
            using (SqlDataReader r = cmd.ExecuteReader())
            {
                t.Load(r);
            }
        }
    }
    return t;
}
Since the query can take 10-15 minutes, I want to implement a cancellation request and pass it from the GUI layer to the DAL. The cancellation mechanism of BackgroundWorker won't let me cancel SqlCommand.ExecuteReader(), because it only stops once the data has been fetched from the server or an exception is thrown by the data provider.
I tried to use Task and async/await with SqlCommand.ExecuteReaderAsync(CancellationToken), but I am confused about where to use it in a multi-layer app (GUI -> BLL -> DAL).
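For what it's worth, one way to wire that up is to create the CancellationTokenSource in the GUI and pass only its token down through the BLL to the DAL. A minimal sketch with illustrative names (note that a cancelled command can also surface as a SqlException rather than an OperationCanceledException, depending on the provider version):
// DAL
public static async Task<DataTable> GetDataAsync(string sqlQuery, CancellationToken ct)
{
    DataTable t = new DataTable();
    using (SqlConnection c = new SqlConnection(GetConnectionString()))
    {
        await c.OpenAsync(ct);
        using (SqlCommand cmd = new SqlCommand(sqlQuery, c))
        using (SqlDataReader r = await cmd.ExecuteReaderAsync(ct))
        {
            t.Load(r);
        }
    }
    return t;
}

// BLL: just forwards the token.
public Task<DataTable> GetReportAsync(CancellationToken ct)
    => Dal.GetDataAsync("SELECT ...", ct);

// GUI
private CancellationTokenSource _cts;

private async void LoadButton_Click(object sender, EventArgs e)
{
    _cts = new CancellationTokenSource();
    try
    {
        grid.DataSource = await _bll.GetReportAsync(_cts.Token);
    }
    catch (OperationCanceledException)
    {
        // user cancelled
    }
}

private void CancelButton_Click(object sender, EventArgs e) => _cts?.Cancel();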
Have you tried using the SqlCommand.Cancel() method?
Approach: encapsulate that GetData method in a Thread/Worker, and when you cancel/stop that thread, call the Cancel() method on the SqlCommand that is being executed.
Here is an example of how to use it on a thread:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

class Program
{
    private static SqlCommand m_rCommand;
    public static SqlCommand Command
    {
        get { return m_rCommand; }
        set { m_rCommand = value; }
    }

    public static void Thread_Cancel()
    {
        Command.Cancel();
    }

    static void Main()
    {
        string connectionString = GetConnectionString();
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Command = connection.CreateCommand();
                Command.CommandText = "DROP TABLE TestCancel";
                try
                {
                    Command.ExecuteNonQuery();
                }
                catch { }
                Command.CommandText = "CREATE TABLE TestCancel(co1 int, co2 char(10))";
                Command.ExecuteNonQuery();
                Command.CommandText = "INSERT INTO TestCancel VALUES (1, '1')";
                Command.ExecuteNonQuery();
                Command.CommandText = "SELECT * FROM TestCancel";
                SqlDataReader reader = Command.ExecuteReader();
                Thread rThread2 = new Thread(new ThreadStart(Thread_Cancel));
                rThread2.Start();
                rThread2.Join();
                reader.Read();
                System.Console.WriteLine(reader.FieldCount);
                reader.Close();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    static private string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        return "Data Source=(local);Initial Catalog=AdventureWorks;"
            + "Integrated Security=SSPI";
    }
}
You can only do cancellation checking and progress reporting between distinct lines of code. Usually both require that you dissect the code down to the lowest loop level, so you can do both of these things between/in the loop iterations. When I wrote my first step into BGW, I had the advantage that I needed to write the loop anyway, so it was no extra work. You have one of the worse cases - pre-existing code that you can only replicate or use as-is.
Ideal case:
This operation should not take nearly as long as it does. 5-10 minutes indicates that there is something rather wrong with your design.
If the bulk of the time is transmission of data, then you are probably retrieving way too much data. Retrieving everything and then filtering in the GUI is a very common mistake. Do as much filtering in the query as possible. Using a distributed database might also help with transmission performance.
If the bulk of the time is processing as part of the query operation (complex conditions), something in your general approach might have to change. There are various ways to trade off complex calculation against a bit of memory on the DBMS side. Views, as far as I know, can cache the results of operations while still maintaining transactional consistency.
But it really depends on what your backend DB/DBMS and use case are. Many of them just use SQL as the query language, which does not allow us to predict which options you have.
Second best case:
The second best thing, if you cannot cut the time down, would be to get the actual DB access code down to the lowest loop and do progress reporting/cancellation checking there. That way you could actually use the existing cancellation token system inherent in BGW.
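In code, that second-best case might look like this sketch (assuming you can bring the reader loop into the worker; names are illustrative):
private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    var worker = (BackgroundWorker)sender;
    string sqlQuery = (string)e.Argument;
    using (SqlConnection c = new SqlConnection(GetConnectionString()))
    {
        c.Open();
        using (SqlCommand cmd = new SqlCommand(sqlQuery, c))
        using (SqlDataReader r = cmd.ExecuteReader())
        {
            int rows = 0;
            while (r.Read())
            {
                if (worker.CancellationPending)
                {
                    cmd.Cancel(); // also stops the server-side work
                    e.Cancel = true;
                    return;
                }
                // ...consume the row...
                if (++rows % 1000 == 0)
                    worker.ReportProgress(0, rows); // needs WorkerReportsProgress = true
            }
        }
    }
}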
Everything else
Using any other approach to cancellation is really a fallback. I wrote a lot on why it is bad, but felt that this answer might work better if I focused on the core issue - likely something wrong in the design of the DB and/or query, because fixing those might well eliminate the issue altogether.

ADO.NET hurting performance: how can I prevent DataReader from taking so long?

An Oracle stored procedure runs in the database and returns some rows of data, taking 30 seconds.
Now, calling this procedure and filling the DataAdapter to then populate the DataSet takes 1m40s in a C# .NET application.
Testing, I noticed that using a DataReader and reading sequentially with the Read() function after calling the stored procedure again takes a total time of approximately 1m40s.
Any idea what could be causing these bottlenecks and how to get rid of them?
Thanks in advance!
Edit: Added the code
OracleConnection oracleConnection = Connect();
OracleCommand oracleCommand = CreateCommand(2);
OracleDataReader oracleDataReader = null;
if (oracleConnection != null)
{
    try
    {
        if (oracleConnection.State == ConnectionState.Open)
        {
            oracleCommand.Connection = oracleConnection;
            oracleDataReader = oracleCommand.ExecuteReader();
            DateTime dtstart = DateTime.Now;
            if (oracleDataReader.HasRows)
            {
                while (oracleDataReader.Read())
                {
                    /* big bottleneck here ... */
                    // Parse the fields
                }
            }
            DateTime dtEnd = DateTime.Now;
            TimeSpan ts = new TimeSpan(dtEnd.Ticks - dtstart.Ticks);
            lblDuration2.Text = "Duration: " + ts.ToString();
            Disconnect(oracleConnection);
        }
    } // (catch/finally omitted from the excerpt in the question)
}
This might help, though without more information on how you're actually using the reader it's hard to say.
using (var cnx = Connect())
using (var cmd = CreateCommand(2)) {
    try {
        if (cnx.State == ConnectionState.Closed) cnx.Open();
        // The following line allows more time for the command to
        // execute. The smaller the amount, the sooner the
        // command times out, so be sure to leave enough room for the
        // command to execute successfully.
        cmd.CommandTimeout = 600;
        // The below-specified CommandBehavior allows sequential
        // access against the underlying database. This means rows are
        // streamed through your reader instance: while the
        // program reads from the reader, the reader continues to read
        // from the database, instead of waiting until the full result
        // set is returned by the database before continuing to work on
        // the data.
        using (var reader = cmd.ExecuteReader(
            CommandBehavior.SequentialAccess)) {
            if (reader.HasRows)
                while (reader.Read()) {
                    // Perhaps the bottleneck will disappear here...
                    // Without seeing proper usage of your reader,
                    // no one can help.
                }
        }
    } catch (OracleException ex) {
        // Log the exception or whatever;
        // otherwise it might be best to let go and rethrow.
    } finally {
        if (cnx.State == ConnectionState.Open) cnx.Close();
    }
}
For more detailed information on command behaviours: Command Behavior Enumeration.
Directly from MSDN:
Sequential Access
Provides a way for the DataReader to handle rows that contain columns with large binary values. Rather than loading the entire row, SequentialAccess enables the DataReader to load data as a stream. You can then use the GetBytes or GetChars method to specify a byte location to start the read operation, and a limited buffer size for the data being returned.
When you specify SequentialAccess, you are required to read from the columns in the order they are returned, although you are not required to read each column. Once you have read past a location in the returned stream of data, data at or before that location can no longer be read from the DataReader. When using the OleDbDataReader, you can reread the current column value until reading past it. When using the SqlDataReader, you can read a column value only once.
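To make the ordering constraint concrete, a small hypothetical sketch (column types and positions invented):
using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    while (reader.Read())
    {
        // With SequentialAccess you must read columns strictly
        // left to right:
        int id = reader.GetInt32(0);       // column 0 first
        string name = reader.GetString(1); // then column 1
        // Re-reading column 0 here would fail: the reader has already
        // streamed past it.
    }
}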
As for increasing the CommandTimeout property, have a look at this post:
Increasing the Command Timeout for SQL command
When you expect the command to take a certain amount of time, you need a timeout long enough for the command to return before it expires. When a timeout does occur, it takes a few seconds to recover from it; all of this can be avoided. You might want to measure the time the command actually requires and set the timeout as close as possible to that real requirement, as an overly long timeout might hide other underlying problems that would then go undetected. When a command timeout occurs, ask yourself how you could deal with a smaller result set, or how you could improve your query to run faster.

SQL server and .NET memory constraints, allocations, and garbage collection

I am running .NET 3.5 (C#) and SQL Server 2005 (for our clients). The code that we run does some regression math and is a little complicated. I get the following error when I run multiple pages on our site:
.NET Framework execution was aborted by escalation policy because of out of memory.
System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.
System.InvalidOperationException:
I'm trying to figure out the root cause of this: is it a database issue or my C# code? Or is it concurrency with locks when running queries? Or something else?
The code is erroring here:
Server.ScriptTimeout = 300;
string returnCode = string.Empty;
using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MainDll"].ToString())) {
    connection.Open();
    using (SqlCommand command = new SqlCommand(sql.ToString(), connection)) {
        command.CommandType = CommandType.Text;
        command.CommandTimeout = 300;
        returnCode = (string)command.ExecuteScalar();
        //Dispose();
    }
    //Dispose();
}
Our contractor wrote a bunch of code to help with SQL connections in an App_Code/sqlHelper.s file. Some of them are like this:
public static SqlDataReader GetDataReader(string sql, string connectionString, int connectionTime) {
    lock (_lock) {
        SqlConnection connection = null;
        try {
            connection = GetConnection(connectionString);
            //connection.Open();
            using (SqlCommand cmd = new SqlCommand(sql, connection)) {
                cmd.CommandTimeout = connectionTime;
                WriteDebugInfo("GetDataReader", sql);
                return cmd.ExecuteReader(CommandBehavior.CloseConnection);
            }
        }
        catch (Exception e) {
            if (connection != null)
                connection.Dispose();
            throw new DataException(sql, connectionString, e);
        }
    }
}
Should there be some deallocation of memory somewhere?
The problem is that, for some reason, your DataReader isn't being closed. An exception? The caller didn't remember to close the DataReader?
A function that returns a DataReader to be used outside its body leaves the responsibility of closing it to outer code, so there's no guarantee that the Reader will be closed. If you don't close the reader, you cannot reuse the connection in which it was opened.
So returning a DataReader from a function is a very bad idea!
You can see a whole discussion on this subject here.
Look for the usages of this function (GetDataReader) and check whether there's a guarantee that the reader is getting closed. And, most importantly, that there is no possibility that this code re-enters and uses the same connection to open a new DataReader before the first one is closed. (Don't be misled by CommandBehavior.CloseConnection. This only takes care of closing the connection when the DataReader is closed... and only if you don't fail to close the reader.)
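One safer shape, sketched under the assumption that you can change the helper's signature: keep the reader's whole lifetime inside the method and hand each row to the caller instead.
public static void ExecuteReader(string sql, string connectionString,
                                 int commandTimeout, Action<IDataRecord> onRow)
{
    using (var connection = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, connection))
    {
        cmd.CommandTimeout = commandTimeout;
        connection.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                onRow(reader); // the caller never owns the reader
        }
        // reader, command and connection are all disposed here,
        // even if onRow throws.
    }
}
This also removes the need for the lock, since each call gets its own connection from the pool.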
This is because your data reader is already filled in. It is always better to release the data reader, command, data set and data table, and to close the connection, in a finally block. Make use of the Dispose() and Close() methods.
