Slow SQL data retrieval with SqlDataReader.Read() in C# vs SSMS - c#

I am doing a simple SQL query to get lots of data.
The complexity of the query is not an issue. It takes around 200ms to execute.
However the amount of data seems to be the issue.
We retrieve around 40k rows.
Each row has 8 columns and carries a few hundred bytes of data, so we download roughly 15 MB in total for this query.
What boggles my mind is that:
when I execute the query from basic C# code it takes 1 min 44 s.
But when I do it from SSMS it takes 10 seconds. Of course I do this from the same machine, and I'm using the same database.
And I can clearly see the SSMS grid being populated in real time: in 10 seconds the whole result grid is full.
We tried:
to set the same SET options as the ones SSMS uses,
to change the transaction isolation level,
to bypass the cached execution plan (with OPTION(RECOMPILE)),
to ignore locks (with WITH(NOLOCK)).
None of it changes anything.
Makes sense: it's the read that is slow. Not the query (IMHO).
It is the while(reader.Read()) that takes time.
And we tried with an empty while loop, so this rules out boxing/unboxing or materializing the results in memory.
Here is a test program we made to figure out it was the Read() that is taking time:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Transactions;
namespace SqlPerfTest
{
class Program
{
const int GroupId = 1234;
static readonly DateTime DateBegin = new DateTime(2017, 6, 19, 0, 0, 0, DateTimeKind.Utc);
static readonly DateTime DateEnd = new DateTime(2017, 10, 20, 0, 0, 0, DateTimeKind.Utc);
const string ConnectionString = "CENSORED";
static void Main(string[] args)
{
TransactionOptions transactionOptions = new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
connection.Open();
SetOptimizations(connection);
ShowUserOptions(connection);
DoPhatQuery(connection).Wait(TimeSpan.FromDays(1));
}
transactionScope.Complete();
}
}
static void SetOptimizations(SqlConnection connection)
{
SqlCommand cmd = connection.CreateCommand();
Console.WriteLine("===================================");
cmd.CommandText = "SET QUOTED_IDENTIFIER ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_NULL_DFLT_ON ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_PADDING ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_WARNINGS ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ANSI_NULLS ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET CONCAT_NULL_YIELDS_NULL ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET ARITHABORT ON";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET DEADLOCK_PRIORITY -1";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET QUERY_GOVERNOR_COST_LIMIT 0";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
cmd.CommandText = "SET TEXTSIZE 2147483647";
cmd.ExecuteNonQuery();
Console.WriteLine(cmd.CommandText);
}
static void ShowUserOptions(SqlConnection connection)
{
SqlCommand cmd = connection.CreateCommand();
Console.WriteLine("===================================");
cmd.CommandText = "DBCC USEROPTIONS";
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
Console.WriteLine(cmd.CommandText);
while (reader.HasRows)
{
while (reader.Read())
{
Console.WriteLine("{0} = {1}", reader.GetString(0), reader.GetString(1));
}
reader.NextResult();
}
}
}
static async Task DoPhatQuery(SqlConnection connection)
{
Console.WriteLine("===================================");
SqlCommand cmd = connection.CreateCommand();
cmd.CommandText =
@"SELECT
p.[Id],
p.[UserId],
p.[Text]
FROM [dbo].[Post] AS p WITH (NOLOCK)
WHERE p.[Visibility] = @visibility
AND p.[GroupId] = @groupId
AND p.[DatePosted] >= @dateBegin
AND p.[DatePosted] < @dateEnd
ORDER BY p.[DatePosted] DESC
OPTION(RECOMPILE)";
cmd.Parameters.Add("@visibility", SqlDbType.Int).Value = 0;
cmd.Parameters.Add("@groupId", SqlDbType.Int).Value = GroupId;
cmd.Parameters.Add("@dateBegin", SqlDbType.DateTime).Value = DateBegin;
cmd.Parameters.Add("@dateEnd", SqlDbType.DateTime).Value = DateEnd;
Console.WriteLine(cmd.CommandText);
Console.WriteLine("===================================");
DateTime beforeCommit = DateTime.UtcNow;
using (SqlDataReader reader = await cmd.ExecuteReaderAsync(CommandBehavior.CloseConnection))
{
DateTime afterCommit = DateTime.UtcNow;
Console.WriteLine("Query time = {0}", afterCommit - beforeCommit);
DateTime beforeRead = DateTime.UtcNow;
int currentRow = 0;
while (reader.HasRows)
{
while (await reader.ReadAsync())
{
if (currentRow++ % 1000 == 0)
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
}
await reader.NextResultAsync();
}
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
DateTime afterRead = DateTime.UtcNow;
Console.WriteLine("Read time = {0}", afterRead - beforeRead);
}
}
}
}
As you can see above, we reproduce the same SET options as SSMS.
We also tried all the tricks known to mankind to speed up everything.
Using async, using WITH(NOLOCK) and OPTION(RECOMPILE), defining a bigger packet size in the connection string, and using a sequential-access reader: none of it helped (a sketch of the packet-size setting is just below).
Still, SSMS is 50 times faster.
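For reference, the packet-size change was nothing more than a connection-string setting along these lines (a sketch reusing the ConnectionString constant from the test program above; 16384 is only an example value, the SqlClient default being 8000 bytes):
// Sketch: bump the network packet size via the connection string.
var builder = new SqlConnectionStringBuilder(ConnectionString)
{
    PacketSize = 16384   // example value; the default is 8000
};
using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open();
    // ... same query and Read() loop as in the test program above
}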
More info
Our database is an Azure database. We actually have 2 databases, one in Europe and one in West US.
Since we are located in Europe, the same query is faster when we use the Europe database. But it still takes around 30 seconds from C#, while it feels almost instant in SSMS.
The data transfer speed does influence this, but it's not the main issue.
We can also reduce the data transfer time by projecting fewer columns. That does speed up the Read() iteration, of course. Say we retrieve only our ID column: then the while(Read()) loop lasts 5 seconds.
But it's not an option as we need all these columns.
We know how to 'solve' this issue: we can approach our problem differently, and make small queries daily and cache these results in an Azure Table or something.
But we want to know WHY SSMS is faster. What's the trick.
We tried Entity Framework, Dapper, and the plain ADO.NET example above. I have seen a few people on the interwebz with potentially the same issue. To me, it feels like it's the SqlDataReader that is slow.
Like, it doesn't pipeline the download of the rows using multiple connections or something.
Question
So my question here is this: how the hell does Management Studio manage to be 50 times faster at downloading the result of our query? What's the trick?
Thanks guys.

What boggles my mind is that: when I execute the query from basic C# code it takes 1 min 44 s. But when I do it from SSMS it takes 10 secs
You can't execute a parameterized query directly in SSMS so you're comparing different things. When you use local variables instead of parameters in SSMS, SQL Server estimates row counts using overall average density statistics. With a parameterized query, SQL Server uses the statistics histogram and supplied parameter values for initial compilation. Different estimates can result in different plans, although the estimates from the histogram are usually more accurate and yield a better plan (theoretically).
Try updating statistics and executing the query from SSMS using sp_executesql and parameters. I would expect the same performance as the app code, good or bad.
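If you want to sanity-check that theory from the application side, one rough diagnostic is to time the same statement once with the values inlined as literals (roughly what an ad-hoc SSMS run sees) and once parameterized, then compare. This is only a sketch built from the query in the question; the hard-coded values are examples, and inlining literals is for this test only, never for production code:
// Diagnostic sketch: compare an ad-hoc (literal) batch with the parameterized batch.
static void CompareLiteralVsParameterized(SqlConnection connection)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    using (var adHoc = connection.CreateCommand())
    {
        // Same query as the question, with the parameter values inlined as literals.
        adHoc.CommandText = @"SELECT p.[Id], p.[UserId], p.[Text]
            FROM [dbo].[Post] AS p
            WHERE p.[Visibility] = 0 AND p.[GroupId] = 1234
            AND p.[DatePosted] >= '20170619' AND p.[DatePosted] < '20171020'
            ORDER BY p.[DatePosted] DESC";
        using (var reader = adHoc.ExecuteReader())
            while (reader.Read()) { /* drain rows, discard values */ }
    }
    Console.WriteLine("Ad-hoc literals: {0}", sw.Elapsed);

    sw.Restart();
    using (var parameterized = connection.CreateCommand())
    {
        // Same query, parameterized the way SqlClient normally sends it (via sp_executesql).
        parameterized.CommandText = @"SELECT p.[Id], p.[UserId], p.[Text]
            FROM [dbo].[Post] AS p
            WHERE p.[Visibility] = @visibility AND p.[GroupId] = @groupId
            AND p.[DatePosted] >= @dateBegin AND p.[DatePosted] < @dateEnd
            ORDER BY p.[DatePosted] DESC";
        parameterized.Parameters.Add("@visibility", SqlDbType.Int).Value = 0;
        parameterized.Parameters.Add("@groupId", SqlDbType.Int).Value = 1234;
        parameterized.Parameters.Add("@dateBegin", SqlDbType.DateTime).Value = new DateTime(2017, 6, 19);
        parameterized.Parameters.Add("@dateEnd", SqlDbType.DateTime).Value = new DateTime(2017, 10, 20);
        using (var reader = parameterized.ExecuteReader())
            while (reader.Read()) { /* drain rows, discard values */ }
    }
    Console.WriteLine("Parameterized:   {0}", sw.Elapsed);
}
If the ad-hoc version is dramatically faster, the difference is plan-related rather than anything wrong with SqlDataReader itself.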

For grins, have you tried ditching the DataReader and slapping the results into a DataTable? I have seen the DataReader be slow in certain situations.
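A minimal sketch of that approach, assuming a SqlCommand set up as in the question:
// Sketch: let DataTable.Load drain the reader instead of a manual Read() loop.
static DataTable LoadToDataTable(SqlCommand cmd)
{
    var table = new DataTable();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        table.Load(reader);   // pulls every row into memory in one call
    }
    return table;
}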

Related

Stored procedure causing timeout execution error when fetching large data

I am calling a SQL Server stored procedure to get a huge number of records from the database, but it causes timeout errors when run. How can I make this faster?
I am calling the stored procedure through Entity Framework. It takes several input parameters, and an output parameter returns the total record count for pagination in the frontend. I want to fetch the data 10 records per page. The table is Player_Account_Flow and the stored procedure call is shown below. I map to my DB model as the data loads.
using (SqlConnection conn = dbContext.Database.GetDbConnection() as SqlConnection)
{
conn.Open();
using (SqlCommand cmd = new SqlCommand())
{
cmd.Connection = conn;
cmd.CommandText = "Pro_GetPageData";
cmd.CommandType = CommandType.StoredProcedure;
// input parameters to procedure
cmd.Parameters.AddWithValue("#TableName", "Player_Account_Flow");
cmd.Parameters.AddWithValue("#OrderString", "Create_Time Desc");
cmd.Parameters.AddWithValue("#ReFieldsStr", "*");
cmd.Parameters.AddWithValue("#PageIndex", 1);
cmd.Parameters.AddWithValue("#PageSize", 10);
cmd.Parameters.AddWithValue("#WhereString", query);
cmd.Parameters.AddWithValue("#TotalRecord", SqlDbType.Int);
cmd.Parameters["#TotalRecord"].Direction = ParameterDirection.Output;
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
var map = new PlayerAccountFlow();
map.Id = (Int32)reader["Id"];
map.AccountId = (Int32)reader["AccountId"];
map.Type = (byte)reader["Type"];
map.BeforeAmout = (decimal)reader["BeforeAmout"];
map.AfterAmout = (decimal)reader["AfterAmout"];
map.Amout = (decimal)reader["Amout"];
map.Source = (Int32)reader["Source"];
map.Memo = reader["Memo"].ToString();
map.CreateTime = (DateTime)reader["Create_Time"];
playerAccountFlowsList.Add(map);
}
}
obj = cmd.Parameters["@TotalRecord"].Value;
}
}
PagingData<PlayerAccountFlow> pagingData = new PagingData<PlayerAccountFlow>();
pagingData.Countnum = (Int32)obj;
pagingData.Data = playerAccountFlowsList;
return pagingData;
There are multiple things to look into here. You could set a longer command timeout than the default, and you could look into the stored procedure to see why it runs so slowly.
For CommandTimeout, just give it a value in seconds; in the example below that is 2 minutes. It would be a bad user experience to make users wait minutes for their results, though, so I would advise optimizing the stored procedure first (look at indexes, multiple joins, subqueries, and so on). Also, are you actually displaying that huge dataset in the user interface, or are you splitting it into pages? You could change the stored procedure to return just the current page, and call it again when the user goes to the next page. A well-written stored procedure should be a lot faster when you only return a small subset of the whole data.
using (SqlCommand cmd = new SqlCommand())
{
cmd.Connection = conn;
cmd.CommandTimeout = 120;
cmd.CommandText = "Pro_GetPageData";
cmd.CommandType = CommandType.StoredProcedure;
In this case you should revisit your stored procedure, for example its multiple joins. You should also ask yourself whether all the data really needs to be loaded immediately. I would suggest displaying the initial data and loading further data on user request. That means you can run a simple COUNT on the database, calculate the number of pages from the result, and then fetch only the rows for the selected range, as in the sketch below.
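A rough sketch of that idea with plain ADO.NET, assuming SQL Server 2012+ so OFFSET/FETCH is available, and reusing the table and column names from the question (requires using System.Collections.Generic):
// Sketch: fetch a single page plus the total count, instead of the whole table.
static List<PlayerAccountFlow> GetPage(SqlConnection conn, int pageIndex, int pageSize, out int totalRecords)
{
    var list = new List<PlayerAccountFlow>();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = @"
            SELECT COUNT(*) FROM dbo.Player_Account_Flow;
            SELECT * FROM dbo.Player_Account_Flow
            ORDER BY Create_Time DESC
            OFFSET (@PageIndex - 1) * @PageSize ROWS
            FETCH NEXT @PageSize ROWS ONLY;";
        cmd.Parameters.AddWithValue("@PageIndex", pageIndex);
        cmd.Parameters.AddWithValue("@PageSize", pageSize);
        using (var reader = cmd.ExecuteReader())
        {
            reader.Read();
            totalRecords = reader.GetInt32(0);   // first result set: total count
            reader.NextResult();                 // second result set: the requested page
            while (reader.Read())
            {
                list.Add(new PlayerAccountFlow
                {
                    Id = (int)reader["Id"],
                    AccountId = (int)reader["AccountId"],
                    CreateTime = (DateTime)reader["Create_Time"]
                    // ... map the remaining columns as in the question
                });
            }
        }
    }
    return list;
}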

Improve querying speed on Sqlite [duplicate]

I recently read about SQLite and thought I would give it a try. When I insert one record it performs okay. But when I insert one hundred it takes five seconds, and as the record count increases so does the time. What could be wrong? I am using the SQLite Wrapper (system.data.SQlite):
dbcon = new SQLiteConnection(connectionString);
dbcon.Open();
//---INSIDE LOOP
SQLiteCommand sqlComm = new SQLiteCommand(sqlQuery, dbcon);
nRowUpdatedCount = sqlComm.ExecuteNonQuery();
//---END LOOP
dbcon.close();
Wrap BEGIN \ END statements around your bulk inserts. Sqlite is optimized for transactions.
dbcon = new SQLiteConnection(connectionString);
dbcon.Open();
SQLiteCommand sqlComm;
sqlComm = new SQLiteCommand("begin", dbcon);
sqlComm.ExecuteNonQuery();
//---INSIDE LOOP
sqlComm = new SQLiteCommand(sqlQuery, dbcon);
nRowUpdatedCount = sqlComm.ExecuteNonQuery();
//---END LOOP
sqlComm = new SQLiteCommand("end", dbcon);
sqlComm.ExecuteNonQuery();
dbcon.close();
I read everywhere that creating transactions is the solution to slow SQLite writes, but it can be long and painful to rewrite your code and wrap all your SQLite writes in transactions.
I found a much simpler, safe and very efficient method: I enable a (disabled by default) SQLite 3.7.0 optimisation: the Write-Ahead Log (WAL).
The documentation says it works on all Unix (i.e. Linux and OS X) and Windows systems.
How? Just run the following commands after initializing your SQLite connection:
PRAGMA journal_mode = WAL
PRAGMA synchronous = NORMAL
My code now runs ~600% faster : my test suite now runs in 38 seconds instead of 4 minutes :)
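With System.Data.SQLite that boils down to something like this right after opening the connection (a sketch, assuming a connection variable named dbcon as in the snippets above):
// Sketch: enable WAL and relax synchronous right after opening the connection.
using (var pragma = new SQLiteCommand("PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL;", dbcon))
{
    pragma.ExecuteNonQuery();
}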
Try wrapping all of your inserts (aka, a bulk insert) into a single transaction:
string insertString = "INSERT INTO [TableName] ([ColumnName]) Values (@value)";
SQLiteCommand command = new SQLiteCommand();
command.Parameters.AddWithValue("@value", value);
command.CommandText = insertString;
command.Connection = dbConnection;
SQLiteTransaction transaction = dbConnection.BeginTransaction();
try
{
//---INSIDE LOOP
SQLiteCommand sqlComm = new SQLiteCommand(sqlQuery, dbcon);
nRowUpdatedCount = sqlComm.ExecuteNonQuery();
//---END LOOP
transaction.Commit();
return true;
}
catch (SQLiteException ex)
{
transaction.Rollback();
}
By default, SQLite wraps every insert in its own transaction, which slows down the process:
INSERT is really slow - I can only do few dozen INSERTs per second
Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second.
Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite..
By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.
See "Optimizing SQL Queries" in the ADO.NET help file SQLite.NET.chm. Code from that page:
using (SQLiteTransaction mytransaction = myconnection.BeginTransaction())
{
using (SQLiteCommand mycommand = new SQLiteCommand(myconnection))
{
SQLiteParameter myparam = new SQLiteParameter();
int n;
mycommand.CommandText = "INSERT INTO [MyTable] ([MyId]) VALUES(?)";
mycommand.Parameters.Add(myparam);
for (n = 0; n < 100000; n ++)
{
myparam.Value = n + 1;
mycommand.ExecuteNonQuery();
}
}
mytransaction.Commit();
}

To optimize the query performance

I have a stored procedure which returns a minimum of around 40K rows. It takes about 20 seconds in SSMS 2008 R2 (the database resides in SQL Azure), but when I run the same SP from my C# application, using EF 5 or plain ADO.NET, it takes 70-80 seconds.
The table has a non-clustered index on ScenarioID.
The SP is just a SELECT statement with a WHERE condition: select * from Cost where ScenarioID = @ID
using (SqlConnection con = new SqlConnection(constr))
{
using (SqlCommand cmd = new SqlCommand("sp_GetActCostsByID", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#ID", SqlDbType.VarChar).Value = ID;
con.Open();
DataTable dt = new DataTable();
DateTime timee = DateTime.Now;
Console.WriteLine(timee);
dt.Load(cmd.ExecuteReader());
timee = DateTime.Now;
Console.WriteLine(timee);
}
}
Is there any way to increase the performance?
My execution Plan:
Your nonclustered index on ScenarioID might not be helping, if you're not INCLUDEing all the columns you're trying to return, as it will need to do lookups to get those other columns - if there are lots of rows for that Scenario, you could end up with an ordinary table scan. And this comes down to statistics, so can vary from server to server.
If you can avoid the need for lookups, you'll get more consistent performance.
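For illustration only, a covering index would look something like the sketch below. The INCLUDE list is a placeholder, since the actual columns of the Cost table are not shown in the question, and you would normally run this DDL directly in SSMS rather than from application code:
// Sketch: a covering index so the SELECT can avoid key lookups.
// The INCLUDE column names are placeholders; list the columns the query actually returns.
using (var cmd = new SqlCommand(
    @"CREATE NONCLUSTERED INDEX IX_Cost_ScenarioID_Covering
    ON dbo.Cost (ScenarioID)
    INCLUDE (CostValue, CostDate)", con))
{
    cmd.CommandTimeout = 600;   // building an index on a large table can take a while
    cmd.ExecuteNonQuery();
}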

Improving SQLite Performance

Well, I have a file.sql that contains 20,000 INSERT commands.
A sample from the .sql file:
INSERT INTO table VALUES
(1,-400,400,3,154850,'Text',590628,'TEXT',1610,'TEXT',79);
INSERT INTO table VALUES
(39,-362,400,3,111659,'Text',74896,'TEXT',0,'TEXT',14);
And I am using the following code to create an in-memory SQLite database, pull the values into it, and then measure the elapsed time:
using (var conn = new SQLiteConnection(#"Data Source=:memory:"))
{
conn.Open();
var stopwatch = new Stopwatch();
stopwatch.Start();
using (var cmd = new SQLiteCommand(conn))
{
using (var transaction = conn.BeginTransaction())
{
cmd.CommandText = File.ReadAllText(#"file.sql");
cmd.ExecuteNonQuery();
transaction.Commit();
}
}
var timeelapsed = stopwatch.Elapsed.TotalSeconds <= 60
? stopwatch.Elapsed.TotalSeconds + " seconds"
: Math.Round(stopwatch.Elapsed.TotalSeconds/60) + " minutes";
MessageBox.Show(string.Format("Time elapsed {0}", timeelapsed));
conn.Close();
}
Things I have tried:
Using a file database instead of the in-memory one.
Using begin transaction and commit transaction [AS SHOWN IN MY CODE].
Using Firefox's SQLite Manager extension to test whether the slowdown comes from the script itself; however, I was surprised that the same 20,000 lines were pulled into the database in JUST 4 ms!!!
Using PRAGMA synchronous = OFF, as well as PRAGMA journal_mode = MEMORY.
Appending begin transaction; and commit transaction; to the beginning and end of the .sql file respectively.
As the SQLite documentation says, SQLite is capable of processing 50,000 commands per second. And that is real: I made sure of it using SQLite Manager [AS DESCRIBED IN THE THIRD THING I'VE TRIED]. However, my 20,000 commands take 4 minutes, which tells me something is wrong.
QUESTION: What is the problem I am facing? Why is the execution so slow?
SQLite.Net documentation recommends the following construct for transactions
using (SQLiteConnection conn = new SQLiteConnection(@"Data Source=:memory:"))
{
conn.Open();
using (SQLiteTransaction trans = conn.BeginTransaction())
{
using (SQLiteCommand cmd = new SQLiteCommand(conn))
{
cmd.CommandText = File.ReadAllText(@"file.sql");
cmd.ExecuteNonQuery();
}
trans.Commit();
}
conn.Close();
}
Are you able to manipulate the text file contents to something like:
INSERT INTO table (col01, col02, col03, col04, col05, col06, col07, col08, col09, col10, col11)
SELECT 1,-400,400,3,154850,'Text',590628,'TEXT',1610,'TEXT',79
UNION ALL
SELECT 39,-362,400,3,111659,'Text',74896,'TEXT',0,'TEXT',14
;
Maybe try "batching them" into groups of 100 as a initial test.
http://sqlite.org/lang_select.html
SqlLite seems to support the UNION ALL statement.
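A sketch of what that batching could look like from C#, assuming the row values are already available as pre-formatted tuples like the ones in the file (the InsertInBatches helper and the valueTuples input are hypothetical, not from the question; requires using System.Linq and System.Collections.Generic):
// Sketch: group pre-formatted value tuples (e.g. "1,-400,400,3,154850,'Text',590628,'TEXT',1610,'TEXT',79")
// into multi-row INSERT ... SELECT ... UNION ALL batches, all inside one transaction.
static void InsertInBatches(SQLiteConnection conn, IReadOnlyList<string> valueTuples, int batchSize = 100)
{
    using (var transaction = conn.BeginTransaction())
    {
        for (int start = 0; start < valueTuples.Count; start += batchSize)
        {
            var batch = valueTuples.Skip(start).Take(batchSize);
            string sql = "INSERT INTO [table] (col01, col02, col03, col04, col05, col06, col07, col08, col09, col10, col11) "
                + string.Join(" UNION ALL ", batch.Select(v => "SELECT " + v));
            using (var cmd = new SQLiteCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
        transaction.Commit();
    }
}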

