I have some code that periodically runs a query against a SQL Server database and stores the rows in a dictionary. This code has been running fine in our production environment for about 3 years. Just recently, it has been crashing with an unhandled exception. For troubleshooting purposes, I've removed everything but the column reads, and wrapped everything in a try-catch. Here is the exception:
A first chance exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadNetworkPacket()
at System.Data.SqlClient.TdsParserStateObject.ReadByteArray(Byte[] buff, Int32 offset, Int32 len)
at System.Data.SqlClient.TdsParser.SkipValue(SqlMetaDataPriv md, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.SkipRow(_SqlMetaDataSet columns, Int32 startCol, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.CleanPartialRead()
at System.Data.SqlClient.SqlDataReader.ReadInternal(Boolean setTimeout)
I'm using something very similar to:
// query is a simple select from 1 table, not long running by any means
string query = "SELECT col1, col2, col3, col4 FROM db.dbo.tbl_name WITH (nolock)"; // connection times out
//string query = "SELECT col1, col2, col3, col4 FROM db.dbo.tbl_name WITH (nolock) order by col1, col2"; // connection does not time out
SqlCommand command = new SqlCommand(query, connection);
SqlDataReader reader = command.ExecuteReader();
while (!reader.IsClosed && reader.Read())
{
    try
    {
        string test0 = reader[0].ToString();
        string test1 = reader[1].ToString();
        string test2 = reader[2].ToString();
        string test3 = reader[3].ToString();
        // here is where I would normally process and store into the dictionary
    }
    catch (Exception e)
    {
        // make some noise
    }
}
When I run the query with other methods, it returns almost instantly (well under a second), but just to see what would happen, I increased the CommandTimeout to 60 seconds (from the default 30), which just increased the amount of time my program would hang before throwing an exception.
At @frisbee's suggestion I added an order by clause to the query, which stops the connection from timing out.
What I think is happening is that one of the Read() operations is not returning, which then causes the connection to time out, but I have no idea what would cause this. It usually happens on a certain row when reading column 3, but not always. The query returns just under 50k rows; sometimes it will make it through all of them, and sometimes only through 15k.
Why don't you go ahead and set the CommandTimeout property of your SqlCommand instance to a high number? This will get your code working.
On the other end, you'll need to debug whatever's taking the server so long to finish its work. You can't do anything about that from the .NET side. You'll have to step through the underlying code that is executed on the SQL Server.
Why are you checking whether the reader is closed in your while loop?
Try using a using block to ensure things are disposed correctly. The following should make everything flow smoothly for you.
using (SqlConnection connection = MyDBConnection)
{
connection.Open();
SqlCommand command = new SqlCommand("SQL Query Here", connection) { CommandTimeout = 0 };
using (var reader = command.ExecuteReader())
{
try
{
while (reader.Read())
{
string test0 = reader[0].ToString();
string test1 = reader[1].ToString();
string test2 = reader[2].ToString();
string test3 = reader[3].ToString();
}
}
catch (Exception e)
{
//Make some Noise
}
}
}
Every other run? Are you possibly sharing a command?
If you are using MARS and sharing a connection, don't:
using (SqlCommand cmd = con.CreateCommand())
{
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        // read here; give each command its own connection and reader
    }
}
private void NpgSqlGetContracts(IList<Contract> con)
{
var conn = (NpgsqlConnection)Database.GetDbConnection();
List<Contract> contracts = new List<Contract>();
using (var cmd = new NpgsqlCommand("SELECT * FROM \"Contracts\";", conn))
{
cmd.CommandTimeout = 1;
cmd.Prepare();
int conCount = cmd.ExecuteNonQuery();
using (var reader = cmd.ExecuteReader(CommandBehavior.SingleResult))
{
while (reader.Read())
{
contracts.Add(MapDataReaderRowToContract(reader));
}
}
}
}
Here I have this code to try the command timeout in Postgres. I have tried debugging it locally with breakpoints in Visual Studio, using both ExecuteNonQuery and ExecuteReader. The query takes more than 1 second to load all the data (there are over 3 million rows here), but the command timeout is set to 1 second. I wonder why it does not throw any exception here. What did I configure wrong?
Thank you :)
As @hans-kesting wrote above, the command timeout isn't cumulative for the entire command, but rather for each individual I/O-producing call (e.g. Read). In that sense, it's meant to help with queries running for too long (without producing any results), or network issues.
You may also want to take a look at PostgreSQL's statement_timeout, which is a PG-side timeout for the entire command. It too has its issues, and Npgsql never sets it implicitly for you - but you can set it yourself.
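For example, a minimal sketch of setting it yourself per session (the connection string, table name, and the 1-second value below are placeholders of my own, not something Npgsql configures for you):
using Npgsql;

using (var conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me;Password=secret"))
{
    conn.Open();

    // Server-side limit for every statement on this session, in milliseconds.
    using (var set = new NpgsqlCommand("SET statement_timeout = 1000;", conn))
    {
        set.ExecuteNonQuery();
    }

    using (var cmd = new NpgsqlCommand("SELECT * FROM \"Contracts\";", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Map rows here; if the whole statement runs longer than 1 second the
            // server cancels it and Npgsql surfaces a PostgresException (SQLSTATE 57014).
        }
    }
}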
I have a program that loads a large quantity of data (~800K-1M rows per iteration) in a Task running on the threadpool (see offending code sample below); no more than 4 tasks running concurrently. This is the only place in the program that a connection is made to this database. When running the program on my laptop (and other coworkers identical laptops), the program functions perfectly. However, we have access to another workstation via remote desktop that is substantially more powerful than our laptops. The program fails about 1/3 to 1/2 of the way through its list. All of the tasks return an exception.
The first exception was: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached." I've tried googling, binging, searching on StackOverflow, and banging my head against the table trying to figure out how this can be the case. With no more than 4 tasks running at once, there shouldn't be more than 4 connections at any one time.
In response to this, I tried two things: (1) I added a try/catch around the conn.Open() line that would clear the pool if an InvalidOperationException appeared--that appeared to work [didn't let it run all the way through, but got substantially past where it did before], but at the cost of performance. (2) I changed ConnectionTimeout to 30 seconds instead of 15, which did not work (but let it proceed a little more). I also tried at one point to set ConnectRetryInterval=4 (mistakenly choosing this instead of ConnectRetryCount)--this led to a different error, "The maximum number of requests is 4,800", which is strange because we still shouldn't be anywhere near 4,800 requests or connections.
In short, I'm at a loss because I can't figure out what is causing this connection leak only on a higher speed computer. I am also unable to get Visual Studio on that computer to debug directly--any thoughts anyone might have on where to look to try and resolve this would be much appreciated.
(Follow-up to c# TaskFactory ContinueWhenAll unexpectedly running before all tasks complete)
private void LoadData()
{
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = "redacted";
builder.UserID = "redacted";
builder.Password = "redacted";
builder.InitialCatalog = "redacted";
builder.ConnectTimeout = 30;
using (SqlConnection conn = new SqlConnection(builder.ConnectionString))
{
//try
//{
// conn.Open();
//} catch (InvalidOperationException)
//{
// SqlConnection.ClearPool(conn);
// conn.Open();
//}
conn.Open();
string monthnum = _monthsdict.First((x) => x.Month == _month).MonthNum;
string yearnum = _monthsdict.First((x) => x.Month == _month).YearNum;
string nextmonthnum = _monthsdict[Array.IndexOf(_monthsdict, _monthsdict.First((x) => x.Month == _month))+1].MonthNum;
string nextyearnum = _monthsdict[Array.IndexOf(_monthsdict, _monthsdict.First((x) => x.Month == _month)) + 1].YearNum;
SqlCommand cmd = new SqlCommand();
cmd.Connection = conn;
cmd.CommandText = @"redacted";
cmd.Parameters.AddWithValue("@redacted", redacted);
cmd.Parameters.AddWithValue("@redacted", redacted);
cmd.Parameters.AddWithValue("@redacted", redacted);
cmd.CommandTimeout = 180;
SqlDataReader reader = cmd.ExecuteReader();
while(reader.Read())
{
Data data = new Data();
int col1 = reader.GetOrdinal("col1");
int col2 = reader.GetOrdinal("col2");
int col3 = reader.GetOrdinal("col3");
int col4 = reader.GetOrdinal("col4");
data.redacted = redacted;
data.redacted = redacted;
data.redacted = redacted;
data.redacted = redacted;
data.redacted = redacted;
data.Calculate();
_data.Add(data); //not a mistake, referring to another class variable
}
reader.Close();
cmd.Dispose();
conn.Close();
conn.Dispose();
}
}
This turned out to be a classic case of not reading the documentation closely enough. I was trying to cap the maximum number of threads at 4 using ThreadPool.SetMaxThreads, but the maximum cannot be set lower than the number of processors. The workstation it failed on has 8 processors. So there was never a cap; it was running as many tasks as the task scheduler felt appropriate, and it eventually hit the connection pool limit.
https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool.setmaxthreads
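To illustrate (a sketch of my own, not the original program; the work inside Task.Run is a placeholder for the data-loading call), SetMaxThreads reports the failure through its return value, and a SemaphoreSlim caps concurrency no matter how many processors the machine has:
using System;
using System.Threading;
using System.Threading.Tasks;

class ConcurrencyCapDemo
{
    static void Main()
    {
        // On an 8-processor workstation this returns false and changes nothing,
        // because the maximum cannot be set below the number of processors.
        bool applied = ThreadPool.SetMaxThreads(4, 4);
        Console.WriteLine("SetMaxThreads(4, 4) applied: " + applied);

        // Capping concurrent work explicitly does not depend on the thread pool size.
        var gate = new SemaphoreSlim(4);
        var tasks = new Task[20];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(async () =>
            {
                await gate.WaitAsync();
                try { /* open the connection and load the data here */ }
                finally { gate.Release(); }
            });
        }
        Task.WaitAll(tasks);
    }
}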
So I'm trying to use a MySqlDataReader to acquire data from my database. I know that the database does in fact respond (insert, delete, and update all work fine from my program).
using (MySqlConnection conn = new MySqlConnection(connectionString))
{
// Open a connection
conn.Open();
MySqlCommand command = conn.CreateCommand();
command.CommandText = "select * from cs3500_u0848199.PairedGames";
// Execute the command and cycle through the DataReader object
using (MySqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{ /*do something here*/}
}
}
The problem does not appear to be with the command itself, as the command works in MySQL Workbench. Anyway, upon executing this line of code
using (MySqlConnection conn = new MySqlConnection(connectionString))
the VS debugger notes that the following exception was thrown
System.FormatException was unhandled by user code
HResult=-2146233033
Message=Guid should contain 32 digits with 4 dashes (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).
Source=mscorlib
StackTrace:
at System.Guid.TryParseGuidWithNoStyle(String guidString, GuidResult& result)
at System.Guid.TryParseGuid(String g, GuidStyles flags, GuidResult& result)
at System.Guid..ctor(String g)
at MySql.Data.Types.MySqlGuid.MySql.Data.Types.IMySqlValue.ReadValue(MySqlPacket packet, Int64 length, Boolean nullVal)
at MySql.Data.MySqlClient.NativeDriver.ReadColumnValue(Int32 index, MySqlField field, IMySqlValue valObject)
at MySql.Data.MySqlClient.Driver.ReadColumnValue(Int32 index, MySqlField field, IMySqlValue value)
at MySql.Data.MySqlClient.ResultSet.ReadColumnData(Boolean outputParms)
at MySql.Data.MySqlClient.ResultSet.NextRow(CommandBehavior behavior)
at MySql.Data.MySqlClient.MySqlDataReader.Read()
at ToDoList.BoggleService.GetBriefStatus(String gameToken) in d:\repositories\x0848199\PS11\ToDoService\BoggleService.svc.cs:line 443
at SyncInvokeGetBriefStatus(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
InnerException:
I'm really unsure as to why it's telling me about the Guid format, since I'm not using Guids in this code.
Any help will be greatly appreciated.
It appears that appending ;old guids=true; to the connection string resolved the issue.
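For example (server, user, and password below are placeholders; only the last option is the relevant part):
// "old guids=true" changes which column types Connector/NET maps to Guid,
// so Read() no longer tries to parse a Guid out of a column that isn't one.
string connectionString =
    "server=localhost;database=cs3500_u0848199;uid=someuser;pwd=secret;old guids=true;";

using (MySqlConnection conn = new MySqlConnection(connectionString))
{
    conn.Open();
    // ... CreateCommand / ExecuteReader exactly as in the question.
}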
I'm getting the following error intermittently.
There is already an open DataReader associated with this Command which must be closed first.
I read that this can happen when there are nested DataReaders on the same connection, but in my case I'm using the following code to execute all queries.
private SqlTransaction Transaction { get; set; }
private SqlConnection Connection { get; set; }
private DbRow Row {get; set;}
public Row Exec(string sql)
{
    SqlDataReader reader = null;
    try
    {
//Begin connection/transaction
Connection = new SqlConnection(connectionString);
Connection.Open();
Transaction = Connection.BeginTransaction("SampleTransaction");
//create command
SqlCommand command = new SqlCommand(sql, Connection);
command.Transaction = Transaction;
//execute reader and close it
//HERE IS THE PROBLEM, THE READER ALWAYS READ UNTIL THE END
//BEFORE ANOTHER CAN BE OPENED
reader = command.ExecuteReader();
while (reader.Read())
{
object[] value = new object[reader.FieldCount];
reader.GetValues(value);
List<object> values = new List<object>(value);
Rows.Add(values);
}
reader.Close();
Transaction.Commit();
Connection.Dispose();
Connection = null;
}
catch
{
Transaction.Rollback();
Connection.Dispose();
Connection = null;
}
finally
{
if (reader != null && !reader.IsClosed) reader.Close();
}
}
This way, the result is stored in an object and there are no nested readers.
I also read that adding 'MultipleActiveResultSets=True' to connection string can solve the problem when using nested readers.
Does this solution also solve my problem?
As the error is intermittent and only happens in the production environment, I can't test it many times.
There is already an open DataReader associated with this Command which must be closed first.
at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)
at System.Data.SqlClient.SqlInternalTransaction.Rollback()
at System.Data.SqlClient.SqlTransaction.Rollback()
at Application.Lib.DB.DBSQLServer.Rollback()
at Application.Lib.DB.DBSQLServer.Execute(String sql, Dictionary`2 parameters, Nullable`1 timeout, Boolean useTransaction)
at Application.UtilDB.Execute(String sql, Dictionary`2 parameters, Nullable`1 timeout, Boolean useTransaction) in c:\Application\DBUtil.cs:line 37
at Application.A.CollectionFromDataBase(Int32 cenId, IDB db, Int32 includeId, Boolean allStatus) in c:\Application\Activities.cs:line 64
at Application.ActivitiesController.CheckForConflictsBeforeSave(String aulId, String insId) in c:\Application\AlocController.cs:line 212
The problem was that, when a query fails, the transaction can't be rolled back because the data reader is already open to process the query.
A second exception is thrown and the first one is lost.
I just placed the rollback inside a try catch block and used the AggregateException class to throw both exceptions.
try
{
Transaction.Rollback();
Connection.Dispose();
Connection = null;
}
catch (Exception ex2)
{
throw new AggregateException(new List<Exception>() { e, ex2 });
}
Although the transaction will be rolled back anyway, you can also try closing the data reader before the rollback; that will probably work as well.
if (reader != null && !reader.IsClosed)
reader.Close();
Transaction.Rollback();
Since this happens only in production, it's more likely that the bug is outside the code you attached.
The most common way to prevent this is to always code in the following fashion:
reader = command.ExecuteReader();
try
{
for (int i = 0; i < reader.FieldCount; i++)
{
dbResult.Columns.Add(reader.GetName(i));
dbResult.Types.Add(reader.GetDataTypeName(i));
}
while (reader.Read())
{
object[] value = new object[reader.FieldCount];
reader.GetValues(value);
List<object> values = new List<object>(value);
Rows.Add(values);
}
}
finally
{
reader.Close();
}
Notice the finally block: it makes sure the reader is closed no matter what. I am under the impression that something in your code leaves the reader open, but the bug isn't visible in the code you've posted.
I recommend you enclose it in the try/finally block above; your bug is quite likely to be resolved.
Edit, to clarify: this may not resolve whatever bug exists outside the scope of the originally shown code, but it will prevent data readers from being left open. The finally block I suggested won't swallow any exceptions; they will be propagated to whatever handler you employ outside of it.
I use the following approach to execute queries over database and read data:
using(SqlConnection connection = new SqlConnection("Connection string"))
{
connection.Open();
using(SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection))
{
using (SqlDataReader reader = command.ExecuteReader())
{
// read and process data somehow (possible source of exceptions)
} // <- reader hangs here if exception occurs
}
}
While reading and processing data, some exceptions can occur. The problem is that when an exception is thrown, the DataReader hangs on the Close() call. Do you have any ideas why, and how to solve this issue in a proper way? The problem went away when I wrote a try..catch..finally block instead of using and called command.Cancel() before disposing the reader in finally.
Working version:
using(SqlConnection connection = new SqlConnection("Connection string"))
{
connection.Open();
using(SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection))
{
SqlDataReader reader = command.ExecuteReader();
try
{
// read and process data somehow (possible source of exceptions)
}
catch(Exception ex)
{
// handle exception somehow
}
finally
{
command.Cancel(); // !!!
reader.Dispose();
}
}
}
When an exception occurs you stop processing data before all data is received. You can reproduce this issue even without exceptions if you abort processing after a few rows.
When the command or reader is disposed, the query is still running on the server. ADO.NET just reads all remaining rows and result sets like mad and throws them away. It does that because the server is sending them and the protocol requires receiving them.
Calling SqlCommand.Cancel sends an "attention" to SQL Server causing the query to truly abort. It is the same thing as pressing the cancel button in SSMS.
To summarize, this issue occurs whenever you stop processing rows although many more rows are inbound. Your workaround (calling SqlCommand.Cancel) is the correct solution.
About the Dispose method of the SqlDataReader, MSDN (link) has this to say:
Releases the resources used by the DbDataReader and calls Close.
Emphasis added by me. And if you then go look at the Close method (link), it states this:
The Close method fills in the values for output parameters, return values and RecordsAffected, increasing the time that it takes to close a SqlDataReader that was used to process a large or complex query. When the return values and the number of records affected by a query are not significant, the time that it takes to close the SqlDataReader can be reduced by calling the Cancel method of the associated SqlCommand object before calling the Close method.
So if you need to stop iterating through the reader, it's best to cancel the command first just like your working version is doing.
I would not format it that way.
Open() is not in a try block, and it can throw an exception.
ExecuteReader() is not in a try block, and it can throw an exception.
I like reader.Close, because that is what I see in the MSDN samples.
And I catch SqlException, as it has error numbers (like the one for a timeout).
SqlConnection connection = new SqlConnection();
SqlDataReader reader = null;
try
{
connection.Open(); // you are missing this as a possible source of exceptions
SqlCommand command = new SqlCommand("SELECT * FROM TableName", connection);
reader = command.ExecuteReader(); // you are missing this as a possible source of exceptions
// read and process data somehow (possible source of exceptions)
}
catch (SqlException ex)
{
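    // ex.Number identifies the specific error; -2, for example, is the
    // number ADO.NET reports when a command times out, so it can be
    // handled (logged, retried) separately from other SqlExceptions.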
}
catch (Exception ex)
{
// handle exception somehow
}
finally
{
if (reader != null) reader.Close();
connection.Close();
}