I have a stored procedure that returns a minimum of about 40K rows. Run from SSMS 2008 R2 against a database in SQL Azure it takes around 20 seconds, but when I run the same SP from my C# application, using EF 5 or plain ADO.NET, it takes 70-80 seconds.
The table has a non-clustered index on ScenarioID.
The SP is just a SELECT with a WHERE condition: select * from Cost where ScenarioID = @ID
using (SqlConnection con = new SqlConnection(constr))
{
using (SqlCommand cmd = new SqlCommand("sp_GetActCostsByID", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#ID", SqlDbType.VarChar).Value = ID;
con.Open();
DataTable dt = new DataTable();
DateTime timee = DateTime.Now;
Console.WriteLine(timee);
dt.Load(cmd.ExecuteReader());
timee = DateTime.Now;
Console.WriteLine(timee);
}
}
Is there any way to improve the performance?
My execution plan:
Your nonclustered index on ScenarioID might not be helping if you're not INCLUDE-ing all the columns you're trying to return, as it will need to do lookups to get those other columns; if there are lots of rows for that scenario, the optimizer may fall back to an ordinary table scan. And this comes down to statistics, so it can vary from server to server.
If you can avoid the need for lookups, you'll get more consistent performance.
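For illustration, a covering index along these lines avoids the lookups. This is a sketch with assumed column names (the question doesn't show the Cost table's columns), so INCLUDE whatever columns the query actually returns:
-- Sketch: cover the query so no key lookups are needed.
-- The INCLUDE column names are assumptions; list the columns your SELECT returns.
CREATE NONCLUSTERED INDEX IX_Cost_ScenarioID_Covering
ON dbo.Cost (ScenarioID)
INCLUDE (CostValue, CostDate, Description);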
Related
I am calling a SQL Server stored procedure to fetch a huge number of records from the database, but it causes execution timeout errors when run. How can I make it fast?
I am calling a stored procedure through Entity Framework. The stored procedure takes input parameters, plus an output parameter that returns the total record count for pagination in the front end. I want to page through all the data at 10 records per page. The table is Player_Account_Flow, the stored procedure name is shown below, and I map the rows to my DB model as the data loads.
using (SqlConnection conn = dbContext.Database.GetDbConnection() as SqlConnection)
{
conn.Open();
using (SqlCommand cmd = new SqlCommand())
{
cmd.Connection = conn;
cmd.CommandText = "Pro_GetPageData";
cmd.CommandType = CommandType.StoredProcedure;
// input parameters to procedure
cmd.Parameters.AddWithValue("#TableName", "Player_Account_Flow");
cmd.Parameters.AddWithValue("#OrderString", "Create_Time Desc");
cmd.Parameters.AddWithValue("#ReFieldsStr", "*");
cmd.Parameters.AddWithValue("#PageIndex", 1);
cmd.Parameters.AddWithValue("#PageSize", 10);
cmd.Parameters.AddWithValue("#WhereString", query);
cmd.Parameters.AddWithValue("#TotalRecord", SqlDbType.Int);
cmd.Parameters["#TotalRecord"].Direction = ParameterDirection.Output;
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
var map = new PlayerAccountFlow();
map.Id = (Int32)reader["Id"];
map.AccountId = (Int32)reader["AccountId"];
map.Type = (byte)reader["Type"];
map.BeforeAmout = (decimal)reader["BeforeAmout"];
map.AfterAmout = (decimal)reader["AfterAmout"];
map.Amout = (decimal)reader["Amout"];
map.Source = (Int32)reader["Source"];
map.Memo = reader["Memo"].ToString();
map.CreateTime = (DateTime)reader["Create_Time"];
playerAccountFlowsList.Add(map);
}
}
obj = cmd.Parameters["#TotalRecord"].Value;
}
}
PagingData<PlayerAccountFlow> pagingData = new PagingData<PlayerAccountFlow>();
pagingData.Countnum = (Int32)obj;
pagingData.Data = playerAccountFlowsList;
return pagingData;
There are multiple things to look into here. You could set a longer command timeout than the default, and you could look into the stored procedure to see why it is running so slowly.
For the CommandTimeout, just give it a value in seconds; in the example below that is 2 minutes. That said, it would be a bad user experience to make users wait minutes for their results, so I would advise optimizing the stored procedure first (look at indexes, multiple joins, subqueries, and so on). Also, are you actually displaying that huge dataset in the user interface, or are you splitting it into pages? You could change the stored procedure to return just the current page, and call it again when the user goes to the next page. A well-written stored procedure should be a lot faster when it only returns a small subset of the whole data.
using (SqlCommand cmd = new SqlCommand())
{
cmd.Connection = conn;
cmd.CommandTimeout = 120;
cmd.CommandText = "Pro_GetPageData";
cmd.CommandType = CommandType.StoredProcedure;
In this case you should revisit your stored procedure (multiple joins, for example). You should also ask yourself whether all the data really needs to be loaded immediately. I would suggest displaying the initial data and loading further data on user request: run a simple COUNT on the database, calculate the pages from the result, then fetch only the selected range, as in the sketch below.
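A minimal sketch of that paging idea, assuming SQL Server 2012+ for OFFSET/FETCH. The table and column names follow the question, but treat this as an assumed shape rather than the real Pro_GetPageData:
CREATE PROCEDURE Pro_GetPageData_Sketch
    @PageIndex INT,
    @PageSize INT,
    @TotalRecord INT OUTPUT
AS
BEGIN
    -- total count for the pager
    SELECT @TotalRecord = COUNT(*) FROM dbo.Player_Account_Flow;
    -- only the requested page crosses the wire
    SELECT *
    FROM dbo.Player_Account_Flow
    ORDER BY Create_Time DESC
    OFFSET (@PageIndex - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;
END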
I am doing a simple SQL query to get lots of data.
The complexity of the query is not an issue. It takes around 200ms to execute.
However the amount of data seems to be the issue.
We retrieve around 40k rows.
Each row has 8 columns and carries a few hundred bytes of data; in total we download around 15 MB for this query.
What boggles my mind is that:
when I execute the query from basic C# code it takes 1 min 44 s,
but when I run it from SSMS it takes 10 seconds. Of course I do this from the same machine, against the same database,
and I can clearly see the rows being populated in the UI in real time: in 10 seconds the whole data table is full.
We tried:
to set the same SET options as the ones from SSMS,
to change the transaction isolation level,
to force a fresh execution plan (with OPTION(RECOMPILE)),
to ignore locks (with WITH(NOLOCK)).
It doesn't change anything.
Makes sense: it's the read that is slow. Not the query (IMHO).
It is the while(reader.Read()) that takes time.
We even tried with an empty while loop, which rules out boxing/unboxing or materializing the results in memory.
Here is the test program we wrote to confirm that it is the Read() that takes the time:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Transactions;
namespace SqlPerfTest
{
class Program
{
const int GroupId = 1234;
static readonly DateTime DateBegin = new DateTime(2017, 6, 19, 0, 0, 0, DateTimeKind.Utc);
static readonly DateTime DateEnd = new DateTime(2017, 10, 20, 0, 0, 0, DateTimeKind.Utc);
const string ConnectionString = "CENSORED";
static void Main(string[] args)
{
TransactionOptions transactionOptions = new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
connection.Open();
SetOptimizations(connection);
ShowUserOptions(connection);
DoPhatQuery(connection).Wait(TimeSpan.FromDays(1));
}
transactionScope.Complete();
}
}
static void SetOptimizations(SqlConnection connection)
{
    SqlCommand cmd = connection.CreateCommand();
    Console.WriteLine("===================================");
    // replay the same session options SSMS sets by default
    string[] setStatements =
    {
        "SET QUOTED_IDENTIFIER ON",
        "SET ANSI_NULL_DFLT_ON ON",
        "SET ANSI_PADDING ON",
        "SET ANSI_WARNINGS ON",
        "SET ANSI_NULLS ON",
        "SET CONCAT_NULL_YIELDS_NULL ON",
        "SET ARITHABORT ON",
        "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED",
        "SET DEADLOCK_PRIORITY -1",
        "SET QUERY_GOVERNOR_COST_LIMIT 0",
        "SET TEXTSIZE 2147483647"
    };
    foreach (string statement in setStatements)
    {
        cmd.CommandText = statement;
        cmd.ExecuteNonQuery();
        Console.WriteLine(cmd.CommandText);
    }
}
static void ShowUserOptions(SqlConnection connection)
{
SqlCommand cmd = connection.CreateCommand();
Console.WriteLine("===================================");
cmd.CommandText = "DBCC USEROPTIONS";
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
Console.WriteLine(cmd.CommandText);
while (reader.HasRows)
{
while (reader.Read())
{
Console.WriteLine("{0} = {1}", reader.GetString(0), reader.GetString(1));
}
reader.NextResult();
}
}
}
static async Task DoPhatQuery(SqlConnection connection)
{
Console.WriteLine("===================================");
SqlCommand cmd = connection.CreateCommand();
cmd.CommandText =
#"SELECT
p.[Id],
p.[UserId],
p.[Text]
FROM [dbo].[Post] AS p WITH (NOLOCK)
WHERE p.[Visibility] = @visibility
AND p.[GroupId] = @groupId
AND p.[DatePosted] >= @dateBegin
AND p.[DatePosted] < @dateEnd
ORDER BY p.[DatePosted] DESC
OPTION(RECOMPILE)";
cmd.Parameters.Add("#visibility", SqlDbType.Int).Value = 0;
cmd.Parameters.Add("#groupId", SqlDbType.Int).Value = GroupId;
cmd.Parameters.Add("#dateBegin", SqlDbType.DateTime).Value = DateBegin;
cmd.Parameters.Add("#dateEnd", SqlDbType.DateTime).Value = DateEnd;
Console.WriteLine(cmd.CommandText);
Console.WriteLine("===================================");
DateTime beforeCommit = DateTime.UtcNow;
using (SqlDataReader reader = await cmd.ExecuteReaderAsync(CommandBehavior.CloseConnection))
{
DateTime afterCommit = DateTime.UtcNow;
Console.WriteLine("Query time = {0}", afterCommit - beforeCommit);
DateTime beforeRead = DateTime.UtcNow;
int currentRow = 0;
while (reader.HasRows)
{
while (await reader.ReadAsync())
{
if (currentRow++ % 1000 == 0)
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
}
await reader.NextResultAsync();
}
Console.WriteLine("[{0}] Rows read = {1}", DateTime.UtcNow, currentRow);
DateTime afterRead = DateTime.UtcNow;
Console.WriteLine("Read time = {0}", afterRead - beforeRead);
}
}
}
}
As you can see above, we reproduce the same SET options as SSMS.
We also tried every trick known to mankind to speed things up:
async reads, WITH(NOLOCK), OPTION(RECOMPILE), a bigger PacketSize in the connection string, and a sequential-access reader. None of it helped.
Still, SSMS is 50 times faster.
More info
Our database is an Azure database. We actually have 2 databases, one in Europe and one in West US.
Since we are located in Europe, the same query is faster against the Europe database, but it still takes around 30 seconds while being nearly instant in SSMS.
The data transfer speed does influence this, but it's not the main issue.
We can also reduce the data transfer time by projecting fewer columns, which does quicken the Read() iteration: if we retrieve only our ID column, the while(Read()) loop lasts 5 seconds.
But it's not an option as we need all these columns.
We know how to 'solve' this issue: we can approach our problem differently, and make small queries daily and cache these results in an Azure Table or something.
But we want to know WHY SSMS is faster. What's the trick.
We tried Entity Framework, Dapper, and the plain ADO.NET code above. I have seen a few people on the interwebz with potentially similar issues. To me, it feels like SqlDataReader itself is slow,
as if it doesn't pipeline the download of the rows.
Question
So my question here is this: how the hell does Management Studio manage to be 50 times faster at downloading the result of our query? What's the trick?
Thanks guys.
What boggles my mind is that: when I execute the query from basic C# code it takes 1 min 44 s. But when I do it from SSMS it takes 10 secs
You can't execute a parameterized query directly in SSMS so you're comparing different things. When you use local variables instead of parameters in SSMS, SQL Server estimates row counts using overall average density statistics. With a parameterized query, SQL Server uses the statistics histogram and supplied parameter values for initial compilation. Different estimates can result in different plans, although the estimates from the histogram are usually more accurate and yield a better plan (theoretically).
Try updating statistics and executing the query from SSMS using sp_executesql and parameters. I would expect the same performance as the app code, good or bad.
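For example, something along these lines in SSMS should compile with the same parameter-sniffed plan as the application. The query shape mirrors the test program above, trimmed for brevity:
EXEC sp_executesql
    N'SELECT p.[Id], p.[UserId], p.[Text]
      FROM [dbo].[Post] AS p
      WHERE p.[Visibility] = @visibility AND p.[GroupId] = @groupId
      ORDER BY p.[DatePosted] DESC',
    N'@visibility int, @groupId int',
    @visibility = 0, @groupId = 1234;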
For grins, have you tried ditching the DataReader and slapping the results into a DataTable instead? I have seen DataReader be slow in certain situations. Something like the sketch below.
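A minimal sketch of that variant, reusing the cmd from the test program above:
// Let DataTable.Load drive the reader instead of a manual Read() loop.
DataTable results = new DataTable();
using (SqlDataReader reader = cmd.ExecuteReader())
{
    results.Load(reader);
}
Console.WriteLine("Rows read = {0}", results.Rows.Count);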
I would like to insert a large list of IDs into a SQL table. The following way works, but it takes very long. What is the best (or at least a better) way to do this to increase the speed?
using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString))
{
string query = "";
foreach (var id in ids) // count = 60000
{
{
query += "INSERT INTO [table] (id) VALUES (" + id + ");";
}
}
SqlCommand command = new SqlCommand(query, connection);
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
reader.Close();
}
connection.Close();
}
You can use SqlBulkCopy to insert large amounts of data; something like this:
// define a DataTable with the columns of your target table
DataTable tblToInsert = new DataTable();
tblToInsert.Columns.Add(new DataColumn("SomeValue", typeof (int)));
// insert your data into that DataTable
for (int index = 0; index < 60000; index++)
{
DataRow row = tblToInsert.NewRow();
row["SomeValue"] = index;
tblToInsert.Rows.Add(row);
}
// set up your SQL connection
using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString))
{
// define your SqlBulkCopy
SqlBulkCopy bulkCopy = new SqlBulkCopy(connection);
// give it the name of the destination table WHICH MUST EXIST!
bulkCopy.DestinationTableName = "BulkTestTable";
// measure time needed
Stopwatch sw = new Stopwatch();
sw.Start();
// open connection, bulk insert, close connection
connection.Open();
bulkCopy.WriteToServer(tblToInsert);
connection.Close();
// stop time measurement
sw.Stop();
long milliseconds = sw.ElapsedMilliseconds;
}
On my system (PC, 32GB RAM, SQL Server 2014) I get those 60'000 rows inserted in 135 - 185 milliseconds.
Consider Table-Valued Parameters. They are an easy way to send a batch of data into a stored procedure that then handles it on the SQL side, and they aren't restricted in the ways most of the other approaches you will see are (insert limits, etc.).
In the database create a custom Type that has the schema of your table.
CREATE TYPE dbo.TableType AS TABLE
( ID int )
Create a DataTable that matches your table schema (including column name and order).
DataTable newTableRecords = new DataTable();
// Insert your records, etc.
Create a stored procedure that receives a table parameter, and inserts the records from that parameter into your real table.
CREATE PROCEDURE usp_InsertTableRecords
(@tvpNewTableRecords dbo.TableType READONLY)
AS
BEGIN
INSERT INTO dbo.Table(ID)
SELECT tvp.ID FROM @tvpNewTableRecords AS tvp;
END
Call the procedure from your application code, passing in your data table as a parameter.
using (connection)
{
// Configure the SqlCommand and SqlParameter.
SqlCommand insertCommand = new SqlCommand(
"usp_InsertTableRecords", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(
"#tvpNewTableRecords", newTableRecords);
tvpParam.SqlDbType = SqlDbType.Structured;
// Execute the command.
insertCommand.ExecuteNonQuery();
}
I've had really great performance at very large volumes with this approach, and it is nice because it keeps everything set-based, without the arbitrary insert limits of the INSERT INTO (Table) VALUES (1),(2),(3)... approach.
I'm using a datatable as the datasource of some dropdowns on a page, but have noticed that the page is very slow during the postbacks.
I've tracked it through to here:
DataTable dt = new DataTable();
dt.Load(sqlCmd.ExecuteReader()); // this takes ages
The SQL command is a parametrised query, not a stored procedure (the returned values and WHERE clause are quite 'dynamic', so that wouldn't be practicable), but it is nevertheless a simple SELECT UNION query.
Usually returns between 5 and 20 options per dropdown, depending on what's been selected on the other dropdowns.
When I run the query in Management Studio, it's done in under a second. Here it can take up to 7 seconds per dropdown; with 6 dropdowns on the page, it soon adds up.
I have also tried with a SqlDataAdapter:
SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
sqlDa.Fill(dt); // this takes ages
but this was just as slow.
I have this on 2 different systems and on both have the same performance issues.
If anyone knows a better (faster) method, or knows why this is so slow, that would be great.
Not the best thread I've seen on the issue, but there are good links inside, and it's in my post history:
SQL Query that runs fine in SSMS runs very slow in ASP.NET
The SQL optimizer sometimes likes to decide what's best, and you'll have to dig into your query with some tracing and logging of execution plans. It may very well be something as buried as a bad index, or your query code might need optimization. Since we don't have the query code (and having it may or may not be helpful), I'd recommend you follow the guides linked in the post above and close your question.
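As a first tracing step, one option is to run the dropdown query in SSMS with the statistics switches on and compare the timings with what the ASP.NET page sees; a sketch:
-- Show compile/execution times and logical reads for the statement that follows.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- paste the dropdown's SELECT ... UNION ... query here
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;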
Here is an example of how you can load a DataTable very quickly; notice how I select only the specific columns that I want to return.
private DataTable GetTableData()
{
string sql = "SELECT Id, FisrtName, LastName, Desc FROM MySqlTable";
using (SqlConnection myConnection = new SqlConnection(connectionString))
{
using (SqlCommand myCommand = new SqlCommand(sql, myConnection))
{
myConnection.Open();
using (SqlDataReader myReader = myCommand.ExecuteReader())
{
DataTable myTable = new DataTable();
myTable.Load(myReader);
myConnection.Close();
return myTable;
}
}
}
}
If you want to use a DataAdapter to fill the DataTable, here is a simple example:
private void FillAdapter()
{
using (SqlConnection conn = new SqlConnection(yourConnectionString)) // supply your connection string
{
conn.Open();
using (SqlDataAdapter dataAdapt = new SqlDataAdapter("SELECT * FROM EmployeeIDs", conn))
{
DataTable dt = new DataTable();
dataAdapt.Fill(dt);
// dataGridView1.DataSource = dt;//if you want to display data in DataGridView
}
}
}
I am working on a SQL Server monitoring product, and I have a database query that fetches the table details of all the databases on a SQL Server instance.
For this I have two options.
Option 1: Fire a query on the database from code, such as select name from [master].sys.sysdatabases, to get the names of all the databases first, then fire my main query on each DB
using "USE <fetched DB name>;" + "mainQuery".
Please check the following code for the same.
public DataTable GetResultsOfAllDB(string query)
{
SqlConnection con = new SqlConnection(_ConnectionString);
string locleQuery = "select name from [master].sys.sysdatabases";
DataTable dtResult = new DataTable("Result");
SqlCommand cmdData = new SqlCommand(locleQuery, con);
cmdData.CommandTimeout = 0;
SqlDataAdapter adapter = new SqlDataAdapter(cmdData);
DataTable dtDataBases = new DataTable("DataBase");
adapter.Fill(dtDataBases);
foreach (DataRow drDB in dtDataBases.Rows)
{
if (dtResult.Rows.Count >= 15000)
break;
locleQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
cmdData = new SqlCommand(locleQuery, con);
adapter = new SqlDataAdapter(cmdData);
DataTable dtTemp = new DataTable();
adapter.Fill(dtTemp);
dtResult.Merge(dtTemp);
}
return dtResult;
}
Option 2: Use the system stored procedure sp_MSforeachdb (EXEC sp_MSforeachdb), store the fetched data in a table variable, then SELECT from it.
Check the following query for the same:
Declare @TableDetail table
(
field1 varchar(500),
field2 int,
field3 varchar(500),
field4 varchar(500),
field5 decimal(18,2),
field6 decimal(18,2)
)
INSERT @TableDetail EXEC sp_MSforeachdb 'USE [?]; QUERY/COMMAND FOR ALL DATABASES'
Select
field1, field2, field3, field4, field5, field6 FROM @TableDetail
Note: the second option takes time because, if the number of databases and tables is huge, it has to wait until every database has finished.
Now my question: which of the above two options is better, and why? Or is there another solution for the same?
Thanks in advance.
One key difference is that the second option blocks until everything is done; all of the work happens on the SQL Server side. That means you can't give feedback to the user as it runs, it can potentially time out, and it isn't resilient to network blips. On the other hand, it can be run as a pure SQL script (some SQL admins like that), whereas the first needs a program.
In the first example, the client is doing iterative, more granular tasks, so you can give feedback to the user. You can also retry after network blips without redoing all of the work. In the first example, you can also use SqlConnectionStringBuilder instead of USE concatenation.
If performance is a concern, you could also potentially parallelize the first one, with some locking around the shared result table, as sketched below.
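A rough sketch of that idea, with illustrative names: each database gets its own connection and its own DataTable, so only the shared Merge needs the lock, and SqlConnectionStringBuilder replaces the USE concatenation as suggested above. It assumes the _ConnectionString field from the question and a pre-fetched list of database names.
// Sketch: query each database on its own task and merge results under a lock.
public DataTable GetResultsOfAllDBParallel(string query, IEnumerable<string> dbNames)
{
    DataTable dtResult = new DataTable("Result");
    object mergeLock = new object();
    Parallel.ForEach(dbNames, new ParallelOptions { MaxDegreeOfParallelism = 4 }, dbName =>
    {
        var builder = new SqlConnectionStringBuilder(_ConnectionString) { InitialCatalog = dbName };
        using (var con = new SqlConnection(builder.ConnectionString))
        using (var cmd = new SqlCommand(query, con) { CommandTimeout = 0 })
        using (var adapter = new SqlDataAdapter(cmd))
        {
            var dtTemp = new DataTable();
            adapter.Fill(dtTemp); // Fill opens and closes the connection itself
            lock (mergeLock)
            {
                dtResult.Merge(dtTemp);
            }
        }
    });
    return dtResult;
}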
Both suck: they are both serial.
Use the first, get rid of the heavyweight objects (DataSet), and use Tasks to parallelize X databases at the same time, X being determined by trying out how much load the server can handle.
Finished.
If your queries are simple enough, you can try to generate a single script instead of executing queries in each DB one by one:
select 'DB1' as DB, Field1, Field2, ...
from [DB1]..[TableOrViewName]
union all
select 'DB2' as DB, Field1, Field2, ...
from [DB2]..[TableOrViewName]
union all
...
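If the per-database query shape is fixed, a small helper can generate that script from the database list. A sketch with illustrative names, assuming every database exposes the same table or view with identical columns:
// Build one UNION ALL script across all databases; dbNames comes from
// the sysdatabases query shown earlier. Field names are placeholders.
static string BuildUnionScript(IEnumerable<string> dbNames, string tableOrViewName)
{
    var parts = dbNames.Select(db => string.Format(
        "select '{0}' as DB, Field1, Field2 from [{0}]..[{1}]", db, tableOrViewName));
    return string.Join(Environment.NewLine + "union all" + Environment.NewLine, parts);
}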
Everything looks fine. I would just add using statements for the IDisposable objects:
public DataTable GetResultsOfAllDB(string query)
{
    using (SqlConnection con = new SqlConnection(_ConnectionString))
    {
        string locleQuery = "select name from [master].sys.sysdatabases";
        DataTable dtResult = new DataTable("Result");
        DataTable dtDataBases = new DataTable("DataBase");
        using (SqlCommand cmdData = new SqlCommand(locleQuery, con))
        using (SqlDataAdapter adapter = new SqlDataAdapter(cmdData))
        {
            cmdData.CommandTimeout = 0;
            adapter.Fill(dtDataBases);
        }
        foreach (DataRow drDB in dtDataBases.Rows)
        {
            if (dtResult.Rows.Count >= 15000)
                break;
            locleQuery = " Use [" + Convert.ToString(drDB[0]) + "]; " + query;
            // dispose the per-database command and adapter too, not just the first ones
            using (SqlCommand cmdData = new SqlCommand(locleQuery, con))
            using (SqlDataAdapter adapter = new SqlDataAdapter(cmdData))
            using (DataTable dtTemp = new DataTable())
            {
                adapter.Fill(dtTemp);
                dtResult.Merge(dtTemp);
            }
        }
        return dtResult;
    }
}