Why is an INSERT statement slow with SqlCommand? - c#

I am using SqlCommand to insert multiple records into the database, but it takes a long time to insert 2000 records. This is my code:
using (SqlConnection sql = new SqlConnection(connectionString))
{
    using (SqlCommand cmd = new SqlCommand(query, sql))
    {
        sql.Open();
        int ff = 0;
        while (ff < 2000)
        {
            cmd.ExecuteNonQuery(); // takes approximately 139 milliseconds per call
            ff++;
            Console.WriteLine(ff);
        }
    }
}
But when I execute the following script in SSMS (SQL Server Management Studio), the 2000 records are stored in 15 seconds:
declare @i int;
set @i = 1;
while (@i <= 2000)
begin
    set @i = @i + 1;
    INSERT INTO Fulls (ts, MOTOR, HMI, SET_WEIGHT) VALUES ('2018-07-04 02:56:57', '0', '0', '0');
end
What's going on?
Why is it so slow to execute the statement?
Additional information:
- The database is an Azure SQL Database hosted in Microsoft Azure.
- My internet connection speed is 20 Mbit/s.
- The above query is not the real query; the real query contains 240 columns and 240 values.
- I tried to do a transaction following this example: https://msdn.microsoft.com/en-us/library/86773566(v=vs.110).aspx
- The sql variable is of type SqlConnection.
Thanks for your help.

It sounds like there is a high latency between your SQL server and your application server. When I do this locally, the pure TSQL version runs in 363ms, and the C# version with 2000 round trips takes 1061ms (so: about 0.53ms per round-trip). Note: I took the Console.WriteLine away, because I didn't want to measure how fast Console isn't!
For 2000 inserts, this is a pretty fair comparison. If you're seeing something massively different, then I suspect:
- your SQL server is horribly under-powered - it should not take 15s (from the question) to insert 2000 rows, under any circumstances (my 363ms timing is on my desktop PC, not a fast server)
- as already suggested, you have high latency
Note there are also things like "DTC" which might impact the performance based on the connection string and ambient transactions (TransactionScope), but I'm assuming those aren't factors here.
If you need to improve the performance here, the first thing to do would be to find out why it is so horribly bad - i.e. the raw server performance is terrible, and the latency is huge. Neither of those is a coding question: those are infrastructure questions.
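If you want to put a number on the latency part before blaming the code, a quick check is to time a trivial round trip from the application server. This is only a sketch; the connection string is a placeholder:
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class LatencyCheck
{
    static void Main()
    {
        var connectionString = "..."; // placeholder: point at the same Azure SQL database
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT 1;", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // warm up the connection

            const int iterations = 100;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                cmd.ExecuteNonQuery(); // one round trip doing almost no server-side work
            }
            sw.Stop();
            Console.WriteLine("Average round trip: {0:F1} ms",
                sw.Elapsed.TotalMilliseconds / iterations);
        }
    }
}
If that average is tens of milliseconds, the 2000-execute loop can never be fast, no matter how the insert itself is written.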
If you can't fix those, then you can code around them. Table-valued parameters or bulk insert (SqlBulkCopy) both provide ways to transfer multiple rows without having to pay a round-trip per execute. You can also use "MARS" (multiple active result sets) and pipelined inserts, but that is quite an advanced topic (and most people tend to recommend not enabling MARS).
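As a rough illustration of the table-valued parameter route (the table type name and the narrow four-column shape below are invented for the example; the real table has 240 columns), the idea is to ship all 2000 rows in a single ExecuteNonQuery:
using System.Data;
using System.Data.SqlClient;

// Assumes a user-defined table type already exists on the server, e.g.
//   CREATE TYPE dbo.FullsRowType AS TABLE
//     (ts datetime, MOTOR varchar(10), HMI varchar(10), SET_WEIGHT varchar(10));
// (hypothetical - adjust to the real schema)
static void InsertBatch(string connectionString, DataTable rows)
{
    const string sql =
        "INSERT INTO Fulls (ts, MOTOR, HMI, SET_WEIGHT) " +
        "SELECT ts, MOTOR, HMI, SET_WEIGHT FROM @rows;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        SqlParameter p = cmd.Parameters.AddWithValue("@rows", rows); // DataTable shaped like the type
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.FullsRowType";

        conn.Open();
        cmd.ExecuteNonQuery(); // one round trip for the whole batch
    }
}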

Make sure to minimize the number of indexes on your table, and use SqlBulkCopy as below:
DataTable sourceData = new DataTable();
using (var sqlBulkCopy = new SqlBulkCopy(_connString))
{
    sqlBulkCopy.DestinationTableName = "DestinationTableName";
    sqlBulkCopy.WriteToServer(sourceData);
}
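The sourceData table above still has to be populated and shaped to match the destination table; a minimal sketch (the column names and destination table here are assumed, not taken from the question) might look like this:
using System;
using System.Data;
using System.Data.SqlClient;

DataTable sourceData = new DataTable();
sourceData.Columns.Add("Col1", typeof(int));
sourceData.Columns.Add("Col2", typeof(string));
// ...one DataColumn per column you intend to copy
sourceData.Rows.Add(1, "example");

using (var sqlBulkCopy = new SqlBulkCopy(_connString))
{
    sqlBulkCopy.DestinationTableName = "DestinationTableName";
    // Map explicitly if the source column names or order differ from the destination
    sqlBulkCopy.ColumnMappings.Add("Col1", "Col1");
    sqlBulkCopy.ColumnMappings.Add("Col2", "Col2");
    sqlBulkCopy.WriteToServer(sourceData);
}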

Related

Same query with the same query plan takes ~10x longer when executed from ADO.NET vs. SSMS

My query is fairly complex, but I have simplified it to figure out this problem and now it is a simple JOIN that I'm running on a SQL Server 2014 database. The query is:
SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1
When I run this query from SSMS and watch it in SQL Profiler, it takes around 350ms to execute. When I run the same query inside my application using Entity Framework or ADO.NET (I've tried both), it takes 4500ms to execute.
ADO.NET Code:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var cmdA = new SqlCommand("SET ARITHABORT ON", connection);
    cmdA.ExecuteNonQuery();
    var query = "SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1";
    var cmd = new SqlCommand(query, connection);
    cmd.ExecuteNonQuery();
}
I've done an extensive Google search and found this awesome article and several StackOverflow questions (here and here). In order to make the session parameters identical for both queries I call SET ARITHABORT ON in ADO.NET and it makes no difference. This is a straight SQL query, so there is not a parameter sniffing problem. I've simplified the query and the indexes down to their most basic form for this test. There is nothing else running on the server and there is nothing else accessing the database during the test. There are no computed columns in the Cars or SportsCars table, just INTs and VARCHARs.
The SportsCars table has about 170k records and 4 columns, and the Cars table has about 1.2M records and 7 columns. The resulting data set (SportsCars of Type=1) has about 2600 records and 11 columns. I have a single non-clustered index on the Cars table, on the [Type] column that includes all the columns of the cars table. And both tables have a clustered index on the CarID column. No other indexes exist on either table. I'm running as the same database user in both cases.
When I view the data in SQL Profiler, I see that both queries are using the exact same, very simple query plan. In SQL Profiler, I'm using the Performance Event Class and the ShowPlan XML Statistics Profile, which I believe to be the proper event to monitor and capture the actual execution plan. The # of reads is the same for both queries (2596).
How can two of the exact same queries with the exact same query plan take 10x longer in ADO.NET vs. SSMS?
Figured it out:
Because I'm using Entity Framework, the connection string in my application has MultipleActiveResultSets=True. When I remove this from the connection string, the queries have the same performance in ADO.NET and SSMS.
Apparently there is an issue with this setting causing queries to respond slowly when connected to SQL Server via WAN. I found this link and this comment:
MARS uses "firehose mode" to retrieve data. Firehose mode means that
the server will produce data as fast as possible. This also means that
your client application must receive inbound data at the same speed as
it comes in. If it doesn't the data storage buffers on the server will
fill up and the processing will stop until those buffers empty.
So what? You may ask... But as long as the processing is stopped the
resources on the SQL server are in use and are tied up. This includes
the worker thread, schema and data locks, memory, etc. So it is
crucial that your client application consumes the inbound results as
quickly as they arrive.
I have to use this setting with Entity Framework otherwise lazy loading will generate exceptions. So I'm going to have to figure out some other workaround. But at least I understand the issue now.
How can two of the exact same queries with the exact same query plan take 10x longer in ADO.NET vs. SSMS?
First we need to be clear about what is considered "same" with regards to queries and query plans. Assuming that the query at the very top of the question is a copy-and-paste, then it is not the same query as the one being submitted via ADO.NET. For two queries to be the same, they need to be byte-by-byte the same, which includes all white-space, capitalization, punctuation, comments, etc.
The two queries shown are definitely very similar. And they might even share the same execution plan. But how was "same"ness determined for those? Was the XML the same in both cases? Or just what was shown graphically in SSMS when viewing the plans? If they were determined to be the same based on their graphical representation then that is sometimes misleading. The XML itself needs to be checked. Even if two query plans have the same query hash, there are still (sometimes) parts of a query plan that are variable and changes do not change the plan hash. One example is the evaluation of expressions. Sometimes they are calculated and their result is embedded into the plan as a constant. Sometimes they are calculated at the start of each execution and stored and reused within that particular execution, but not for any subsequent executions.
One difference between SSMS and ADO.NET is the default session properties for each. I thought I had seen a chart years ago showing the defaults for ADO / OLEDB / SQLNCLI but can't find it now. Either way, it doesn't need to be guesswork, since it can be discovered using the SESSIONPROPERTY function. Just run this query in the C# code instead of your current SELECT, and inspect the results in the debugger or print them out. For example, run something like this:
SELECT SESSIONPROPERTY('ANSI_NULLS') AS [AnsiNulls],
SESSIONPROPERTY('ANSI_PADDING') AS [AnsiPadding],
SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') AS [ConcatNullYieldsNull],
...;
Make sure to get all of the settings noted in the linked MSDN page. Now, in SSMS, go to the "Query" menu, select "Query Options...", and go to "Execution" | "ANSI". The settings coming back from the C# code need to match the ones shown in SSMS. Any setting that differs requires adding something like this to the beginning of your ADO.NET query string:
SET ANSI_NULLS ON;
{rest of query}
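One way to apply that from the client, assuming the SESSIONPROPERTY comparison showed (say) ANSI_NULLS and ANSI_PADDING differing, is to run the SET statements once per connection before the query; this is just a sketch:
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Align the session settings with what SSMS reported; which SETs you
    // actually need depends on the SESSIONPROPERTY comparison above.
    using (var setup = new SqlCommand("SET ANSI_NULLS ON; SET ANSI_PADDING ON;", connection))
    {
        setup.ExecuteNonQuery();
    }

    var query = "SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1";
    using (var cmd = new SqlCommand(query, connection))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* consume the rows */ }
    }
}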
Now, if you want to eliminate the DataTable loading as a possible suspect, just replace:
var cars = new DataTable();
cars.Load(reader);
with:
while(reader.Read());
And lastly, why not just put the query into a Stored Procedure? The session settings (i.e. ANSI_NULLS, etc) that typically matter the most are stored with the proc definition so they should work the same whether you EXEC from SSMS or from ADO.NET (again, we aren't dealing with any parameters here).

Fastest way to insert 1 million rows in SQL Server [duplicate]

This question already has answers here:
Insert 2 million rows into SQL Server quickly
(8 answers)
Closed 8 years ago.
I am writing a stored procedure to insert rows into a table. The problem is that in some operations we might want to insert more than 1 million rows, and we want to make it fast. Another thing is that one of the columns is NVARCHAR(MAX), and we might put an average of 1000 characters in it.
Firstly, I wrote a proc to insert row by row. Then I generated some random data for the insert, with the NVARCHAR(MAX) column set to a string of 1000 characters, and used a loop to call the proc to insert the rows. The performance is very bad: it takes 48 minutes if I log on to the database server and run the insert there. If I use C# to connect to the server from my desktop (which is what we usually want to do), it takes more than 90 minutes.
Then I changed the proc to take a table-type parameter as the input. I prepared the rows, put them in the table-type parameter, and did the insert with the following command:
INSERT INTO tableA SELECT * from #tableTypeParameterB
I tried batch sizes of 1000 rows and 3000 rows (putting 1000-3000 rows in #tableTypeParameterB to be inserted at one time). The performance is still bad: it takes about 3 minutes to insert 1 million rows if I run it on the SQL server, and about 10 minutes if I use a C# program to connect from my desktop.
The tableA has a clustered index with 2 columns.
My target is to make the insert as fast as possible (my ideal target is within 1 minute). Is there any way to optimize it?
Just an update:
I tried the bulk copy insert which was suggested by some people below. I tried using SqlBulkCopy to insert 1000 rows and 10000 rows at a time. The performance is still 10 minutes to insert 1 million rows (every row has a column with 1000 characters). There is no performance improvement. Are there any other suggestions?
An update based on what the comments asked for:
The data is actually coming from the UI. The user will use the UI to bulk-select, say, one million rows and change one column from the old value to a new value. This operation will be done in a separate procedure, but here what we need to do is have the mid-tier service get the old value and new value from the UI and insert them into the table. The old value and new value may be up to 4000 characters, and the average is 1000 characters. I think the long old/new string values slow things down, because when I change the test data's old/new values to 20-50 characters the insert is very fast, no matter whether I use SqlBulkCopy or a table-type variable.
I think what you are looking for is Bulk Insert if you prefer using SQL.
Or there is also the ADO.NET for Batch Operations option, so you keep the logic in your C# application. This article is also very complete.
Update
Yes, I'm afraid BULK INSERT will only work with imported files (read from within the database).
I have experience from a Java project where we needed to insert millions of rows (the data came from outside the application, by the way).
The database was Oracle, so of course we used Oracle's multi-row insert. It turned out that the Java batch update was much faster than Oracle's multi-valued insert (the so-called "bulk updates").
My suggestion is:
Compare the performance of the multi-value insert in SQL Server code (which you can then run from inside your database, in a procedure if you like) with the ADO.NET batch insert.
If the data you are going to manipulate is coming from outside your application (if it is not already in the database), I would say just go for the ADO.NET batch inserts. I think that is your case.
Note: Keep in mind that batch inserts usually operate with the same query. That is what makes them so fast.
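To make the "same query" idea concrete, here is a hedged sketch of a single parameterized INSERT that carries many rows per round trip (the table and column names are placeholders, not the poster's schema):
using System.Data.SqlClient;
using System.Text;

// Builds one INSERT carrying a whole batch:
//   INSERT INTO tableA (OldValue, NewValue) VALUES (@o0,@n0),(@o1,@n1),...
// SQL Server allows at most 1000 row value expressions per INSERT and roughly
// 2100 parameters per command, so keep batches at or below 1000 rows here.
static void InsertValuesBatch(SqlConnection conn, SqlTransaction tx, string[] oldValues, string[] newValues)
{
    var sql = new StringBuilder("INSERT INTO tableA (OldValue, NewValue) VALUES ");
    using (var cmd = new SqlCommand { Connection = conn, Transaction = tx })
    {
        for (int i = 0; i < oldValues.Length; i++)
        {
            if (i > 0) sql.Append(",");
            sql.AppendFormat("(@o{0},@n{0})", i);
            cmd.Parameters.AddWithValue("@o" + i, oldValues[i]);
            cmd.Parameters.AddWithValue("@n" + i, newValues[i]);
        }
        cmd.CommandText = sql.ToString();
        cmd.ExecuteNonQuery(); // one round trip for the whole batch
    }
}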
Calling a prc in a loop incurs many round trips to SQL.
Not sure what batching approach you used, but you should look into table-valued parameters: docs are here. You'll still want to batch your writes.
You'll also want to consider memory on your server. Batching (say 10K at a time) might be a bit slower but might keep memory pressure lower on your server since you're buffering and processing a set at a time.
Table-valued parameters provide an easy way to marshal multiple rows
of data from a client application to SQL Server without requiring
multiple round trips or special server-side logic for processing the
data. You can use table-valued parameters to encapsulate rows of data
in a client application and send the data to the server in a single
parameterized command. The incoming data rows are stored in a table
variable that can then be operated on by using Transact-SQL.
Another option is bulk insert. However, TVPs benefit from re-use, so it depends on your usage pattern. The first link has a note about comparing the two:
Using table-valued parameters is comparable to other ways of using
set-based variables; however, using table-valued parameters frequently
can be faster for large data sets. Compared to bulk operations that
have a greater startup cost than table-valued parameters, table-valued
parameters perform well for inserting less than 1000 rows.
Table-valued parameters that are reused benefit from temporary table
caching. This table caching enables better scalability than equivalent
BULK INSERT operations.
Another comparison here: Performance of bcp/BULK INSERT vs. Table-Valued Parameters
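If you stay with SqlBulkCopy for the 1-million-row case, it may also help to set an explicit batch size and stream the rows from an IDataReader instead of materializing a huge DataTable full of 1000-character strings. A sketch, with the destination table name and the reader source assumed:
using System.Data;
using System.Data.SqlClient;

// 'source' is any IDataReader producing rows shaped like tableA - for example a
// SqlDataReader over staging data, or a custom reader over in-memory objects.
static void BulkLoad(string connectionString, IDataReader source)
{
    using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
    {
        bulk.DestinationTableName = "tableA";
        bulk.BatchSize = 10000;       // rows sent per batch
        bulk.BulkCopyTimeout = 0;     // no timeout for a long-running load
        bulk.EnableStreaming = true;  // .NET 4.5+: don't buffer the whole reader in memory
        bulk.WriteToServer(source);
    }
}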
Here is an example of what I've used before with SqlBulkCopy. Granted, I was only dealing with around 10,000 records, but it inserted them within a few seconds after the query ran. My field names were the same, so it was pretty easy. You might have to modify the DataTable field names. Hope this helps.
private void UpdateMemberRecords(Int32 memberId)
{
    string sql = string.Format("select * from Member where mem_id > {0}", memberId);
    try
    {
        DataTable dt = new DataTable();
        using (SqlDataAdapter da = new SqlDataAdapter(new SqlCommand(sql, _sourceDb)))
        {
            da.Fill(dt);
        }
        Console.WriteLine("Member Count: {0}", dt.Rows.Count);
        using (SqlBulkCopy sqlBulk = new SqlBulkCopy(ConfigurationManager.AppSettings["DestDb"], SqlBulkCopyOptions.KeepIdentity))
        {
            sqlBulk.BulkCopyTimeout = 600;
            sqlBulk.DestinationTableName = "Member";
            sqlBulk.WriteToServer(dt);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
If you have SQL Server 2014, then the speed of In-Memory OLTP is amazing:
http://msdn.microsoft.com/en-au/library/dn133186.aspx
Depending on your end goal, it may be a good idea to look into Entity Framework (or similar). This abstracts out the SQL so that you don't really have to worry about it in your client application, which is how things should be.
Eventually, you could end up with something like this:
using (DatabaseContext db = new DatabaseContext())
{
    for (int i = 0; i < 1000000; i++)
    {
        db.Table.Add(new Row() { /* column data goes here */ });
    }
    db.SaveChanges();
}
The key part here (and it boils down to a lot of the other answers) is that Entity Framework handles building the actual insert statement and committing it to the database.
In the above code, nothing will actually be sent to the database until SaveChanges is called and then everything is sent.
I can't quite remember where I found it, but there is research around that suggests it is worthwhile to call SaveChanges every so often. From memory, I think every 1000 entries is a good choice for committing to the database; committing every entry, compared to every 100 entries, doesn't provide much performance benefit, and 10,000 takes it past the sweet spot. Don't take my word for that though; the numbers could be wrong. You seem to have a good grasp on the testing side of things, so have a play around with it.
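A rough sketch of that periodic-SaveChanges idea, reusing the DatabaseContext/Row names from the example above (the batch size and the context recreation are tuning guesses, not measured results):
const int batchSize = 1000; // rough figure from the discussion above; measure for your own workload
var db = new DatabaseContext();
try
{
    for (int i = 0; i < 1000000; i++)
    {
        db.Table.Add(new Row() { /* column data goes here */ });

        if ((i + 1) % batchSize == 0)
        {
            db.SaveChanges();
            // Recreating the context keeps the change tracker small; tracking a
            // million entities in one context gets progressively slower.
            db.Dispose();
            db = new DatabaseContext();
        }
    }
    db.SaveChanges(); // flush any remainder
}
finally
{
    db.Dispose();
}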

Low-Memory Traversal of Database Table

I have a database with a large number of records that are Date/Time stamped. I need to traverse through these records (in chronological order) and perform some analysis on them.
The database is too large to pull in every record at once, so I thought of pulling in a few weeks/days/hours/etc at a time. The problem I'm having is that no matter what I've tried, the database (SQL Server) just uses all the memory on my machine. Even after the application is closed, sqlservr.exe is still using all of my memory. It typically uses about 1.8 GB of memory, no matter if my "batches" only contain 10 records or 1,000,000.
The question is: How can I query the database to get "batches" of records at a time, without the database consuming every bit of memory?
I am using the System.Data.SqlClient libraries. Here is a bit of pseudo-code:
String file = "C:\\db.mdf";
String connString = #"Data Source=.\SQLExpress;AttachDbFilename="C:\db.mdf";Integrated Security=True;User Instance=True";
SqlConnection conn = new SqlConnection(connString);
conn.Open();
DateTime start = DateTime.MinValue;
DateTime end = DateTime.MaxValue;
while()
{
// This should query for 1 hour at a time (but I should be able to change the time interval)
// I would like for the memory usage to be proportional to the time interval
String query = "SELECT * From MyTable WHERE Date BETWEEN '" + start.ToString() + "' AND '" + end.ToString() + "'";
SqlCommand cmd = new SqlCommand(query, conn);
SqlDataReader reader = command.ExecuteReader();
while(reader.Read())
ProcessRecord(ref reader);
start = end;
end = end.AddHours(1);
}
conn.Close();
C#
.NET 3.5
SQL Server 2008
Thanks.
This is normal, SQL Server will use all available memory unless configured differently.
Sql Server Express will release the memory when your other applications request more, but it will try to use all the memory it can to cache query plans and data.
Quote from the linked article:
The following example sets the max server memory option to 4 GB:
exec sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
exec sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
exec sp_configure 'show advanced options', 0;
RECONFIGURE;
GO
Do note that SqlConnection, SqlCommand and SqlDataReader implement IDisposable, so you would usually want to wrap them in a using statement.
Filip's answer is correct: that's how SQL Server is set up to behave.
The reason that's the default is that database servers typically run on a dedicated machine that runs almost nothing except the database, and the #1 concern is database speed. You usually want to keep as much as possible in memory so the server minimizes how often it needs to hit the disk.
As an alternative to programmatically configuring the setting, you can use SQL Server Management Studio (SSMS) to change the configuration. Connect to your database, right-click on it and go to Properties. On the Memory page you can configure the maximum memory the database will use.
A SqlDataReader will stream the results. As long as you don't hold on to the data returned by a record from the reader, the .NET garbage collector will collect all that data (at non-deterministic times). In other words, your while (reader.Read()) ProcessRecord(reader); will work just fine. .NET will not load the complete set into memory (unless you do so explicitly, for instance by using a DataSet or DataTable).
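Putting those pieces together, here is a sketch of the hourly batch loop with a parameterized, streamed read. It keeps the poster's MyTable and ProcessRecord names, but the start date and the ProcessRecord signature are assumptions:
using System;
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connString))
{
    conn.Open();

    DateTime start = new DateTime(2010, 1, 1); // wherever your data actually begins
    DateTime stop = DateTime.UtcNow;

    while (start < stop)
    {
        DateTime end = start.AddHours(1); // the batch interval; adjust as needed

        using (var cmd = new SqlCommand(
            "SELECT * FROM MyTable WHERE Date >= @start AND Date < @end ORDER BY Date", conn))
        {
            cmd.Parameters.Add("@start", SqlDbType.DateTime).Value = start;
            cmd.Parameters.Add("@end", SqlDbType.DateTime).Value = end;

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    ProcessRecord(reader); // rows are streamed; only the current row is held
            }
        }

        start = end;
    }
}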

SQL Transaction with ADO.Net

I am new to database interaction with C#. I am trying to write 10000 records to the database in a loop using SqlCommand and SqlConnection objects with a SqlTransaction, committing after every 5000 records. It takes 10 seconds to process.
SqlConnection myConnection = new SqlConnection("..Connection String..");
myConnection.Open();
SqlCommand myCommand = new SqlCommand();
myCommand.CommandText = "exec StoredProcedureInsertOneRowInTable Param1, Param2........";
myCommand.Connection = myConnection;
SqlTransaction myTrans = myConnection.BeginTransaction();
myCommand.Transaction = myTrans; // the command must be enlisted in the transaction
for (int i = 0; i < 10000; i++)
{
    myCommand.ExecuteNonQuery();
    if (i % 5000 == 0)
    {
        myTrans.Commit();
        myTrans = myConnection.BeginTransaction();
        myCommand.Transaction = myTrans;
    }
}
The above code gives me only 1000 row writes/sec in the database.
But when I implemented the same logic in SQL and executed it against the database in SQL Server Management Studio, it gave me 10000 writes/sec.
When I compare the behaviour of the two approaches, I see that the ADO.NET execution produces a large number of logical reads.
My questions are:
1. Why are there logical reads in the ADO.NET execution?
2. Does a transaction involve some handshaking?
3. Why are they not present in the Management Studio case?
4. If I want very fast insert transactions on the DB, what approach should I take?
Updated information about database objects:
Table: tbl_FastInsertTest
No primary key, only 5 fields: the first three are of type int (F1, F2, F3) and the last 2 (F4, F5) are of type varchar(30).
Stored procedure:
create proc stp_FastInsertTest
(
    @nF1 int,
    @nF2 int,
    @nF3 int,
    @sF4 varchar(30),
    @sF5 varchar(30)
)
as
begin
    set nocount on

    insert into tbl_FastInsertTest
    (
        [F1],
        [F2],
        [F3],
        [F4],
        [F5]
    )
    values
    (
        @nF1,
        @nF2,
        @nF3,
        @sF4,
        @sF5
    )
end
--------------------------------------------------------------------------------------
SQL block executed in SSMS
-- When I execute the following code in SSMS it gives me more than 10000 writes per second, but when I execute the same stored proc via ADO.NET it gives me 1000 to 1200 writes per second
-- while reading, no locks
begin tran
declare @i int
set @i = 0
while (1 <> 0)
begin
    exec stp_FastInsertTest 1, 2, 3, 'vikram', 'varma'
    set @i = @i + 1
    if (@i = 5000)
    begin
        commit tran
        set @i = 0
        begin tran
    end
end
If you are running something like:
exec StoredProcedureInsertOneRowInTable 'blah', ...
exec StoredProcedureInsertOneRowInTable 'bloop', ...
exec StoredProcedureInsertOneRowInTable 'more', ...
in SSMS, that is an entirely different scenario, where all of that is a single batch. With ADO.NET you are paying a round-trip per ExecuteNonQuery - I'm actually impressed it managed 1000/s.
Re the logical reads, that could just be looking at the query-plan cache, but without knowing more about StoredProcedureInsertOneRowInTable it is impossible to comment on whether something query-specific is afoot. But I suspect you have some different SET conditions between SSMS and ADO.NET that is forcing it to use a different plan - this is in particular a problem with things like persisted calculated indexed columns, and columns "promoted" out of a sql-xml field.
Re making it faster - in this case it sounds like a table-valued parameter is exactly the thing, but you should also review the other options here
For performant inserts, take a look at the SqlBulkCopy class; if it works for you, it should be fast.
As Sean said, using parameterized queries is always a good idea.
Using a StringBuilder, batching a thousand INSERT statements into a single query and committing the transaction is a proven way of inserting data:
var sb = new StringBuilder();
for (int i = 0; i < 1000; i++)
{
    sb.AppendFormat("INSERT INTO Table(col1,col2) VALUES({0},{1});", values1[i], values2[i]);
}
sqlCommand.CommandText = sb.ToString();
Your code doesn't look right to me, you are not committing transactions at each batch. Your code keeps opening new transactions.
It is always a good practice to drop indexes while inserting a lot of data, and adding them later. Indexes will slow down your writes.
SQL Server Management Studio does not wrap your statements in a transaction for you, but T-SQL supports explicit transactions; try this:
BEGIN TRANSACTION MyTransaction
    INSERT INTO Table(Col1,Col2) VALUES(Val10,Val20);
    INSERT INTO Table(Col1,Col2) VALUES(Val11,Val21);
    INSERT INTO Table(Col1,Col2) VALUES(Val12,Val23);
COMMIT TRANSACTION
You need to use a parameterized query so that the execution plan can get processed and cached. Since you're using string concatenation (shudder, this is bad, google sql injection) to build the query, SQL Server treats those 10,000 queries as separate, individual queries and builds an execution plan for each one.
MSDN: http://msdn.microsoft.com/en-us/library/yy6y35y8.aspx although you're going to want to simplify their code a bit and you'll have to reset the parameters on the command.
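A minimal sketch of that pattern against the stp_FastInsertTest proc shown earlier, creating one command and only resetting parameter values per row (the values fed in are made up):
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection("..Connection String.."))
{
    conn.Open();
    SqlTransaction tx = conn.BeginTransaction();

    using (var cmd = new SqlCommand("stp_FastInsertTest", conn, tx))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@nF1", SqlDbType.Int);
        cmd.Parameters.Add("@nF2", SqlDbType.Int);
        cmd.Parameters.Add("@nF3", SqlDbType.Int);
        cmd.Parameters.Add("@sF4", SqlDbType.VarChar, 30);
        cmd.Parameters.Add("@sF5", SqlDbType.VarChar, 30);

        for (int i = 0; i < 10000; i++)
        {
            // only the values change; the command and its plan are reused
            cmd.Parameters["@nF1"].Value = i;
            cmd.Parameters["@nF2"].Value = i * 2;
            cmd.Parameters["@nF3"].Value = i * 3;
            cmd.Parameters["@sF4"].Value = "vikram";
            cmd.Parameters["@sF5"].Value = "varma";
            cmd.ExecuteNonQuery();

            if ((i + 1) % 5000 == 0)
            {
                tx.Commit();
                tx = conn.BeginTransaction();
                cmd.Transaction = tx;
            }
        }
        tx.Commit();
    }
}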
If you really, really want to get the data into the db fast, think about using bcp... but you had better make sure the data is clean first (as there's no real error checking/handling on it).

Searching SQL Server with LIKE Operator

I have a problem when I try to read rows from SQL Server 2005 from C# code.
The idea:
In my database (SQL Server 2005 Express) there is a table with a column (of datatype ntext) containing HTML code.
In my C# application the user can enter a sentence (HTML code) and search for the rows which contain this sentence.
The query generated from my app is:
USE test
SELECT
al.aal_Id As ID,
al.aal_Description As Opis,
au.au_Title As Tytul_szablonu,
au.au_Note As Nazwa_szablonu
FROM dbo.au_Allegro al
LEFT OUTER JOIN dbo.au__Auction au ON (al.aal_AuctionId = au.au_Id)
WHERE
au.au_Type = 11
AND al.aal_Description COLLATE SQL_Latin1_General_CP1_CS_AS LIKE '%%' ESCAPE '\'
In my app I'm converting special characters (e.g. ') and adding the escape character.
The user tries to search for a very long sentence (about 7000+ chars); when he does this, the sqlserver.exe process consumes all of his RAM and the search takes 30+ minutes (he has about 1000+ rows in this table).
The query returns 0 rows.
When he runs the (same) query in SQL Server Management Studio, the database shows results in a few seconds (with rows).
In my app I use SqlDataAdapter:
System.Data.DataTable dt = new System.Data.DataTable();
System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
cmd.CommandTimeout = 0;
cmd.Connection = conn;
System.Data.SqlClient.SqlDataAdapter da = new System.Data.SqlClient.SqlDataAdapter(kwerenda, conn);
try
{
    da.Fill(dt);
}
I tried SqlDataReader:
dr = cmd.ExecuteReader();
while (dr.Read())
{
    string id = dr["ID"].ToString();
    string opis = dr["Opis"].ToString();
    string tytul = dr["Tytul_szablonu"].ToString();
    string nazwa = dr["Nazwa_szablonu"].ToString();
    dt.Rows.Add(id, opis, tytul, nazwa);
}
When I tried to simulate this in my test database I didn't have any problems searching for (these same) sentences.
Have you got any tips for me? I can't make any changes to the user's data table, and I can't go to him and check what happens.
Is the SQL command executing a stored procedure? If so you might be getting different query plans, which may explain the timing difference between the apps. Your ADO.Net call might be affected by something known as parameter sniffing, which can cause radically different query execution times.
There are a couple of things you can do to avoid this problem and yield consistent results.
- Convert parameters to local variables inside of the stored procedure.
- Disable the feature on the SQL Server altogether.
Also, your syntax looks suspect, as John pointed out. It would be better to use the NVARCHAR(MAX) datatype for that column if possible; NTEXT should be avoided as it has been deprecated.
A better alternative to doing LIKE searches on a non-indexed column like this is to use SQL Server's Full-Text Search, which is optimized for these types of queries.
http://msdn.microsoft.com/en-us/library/ms142571.aspx
http://www.developer.com/article.php/3446891
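If a full-text index already exists on aal_Description (creating one is a separate, prerequisite step), the search could be issued from C# as a parameterized CONTAINS query instead of a 7000-character LIKE literal. A rough sketch, where conn is the existing connection and userPhrase is the user's search text:
using System.Data;
using System.Data.SqlClient;

// Prerequisite: a full-text index on dbo.au_Allegro(aal_Description).
string sql = @"
SELECT al.aal_Id AS ID,
       al.aal_Description AS Opis,
       au.au_Title AS Tytul_szablonu,
       au.au_Note AS Nazwa_szablonu
FROM dbo.au_Allegro al
LEFT OUTER JOIN dbo.au__Auction au ON (al.aal_AuctionId = au.au_Id)
WHERE au.au_Type = 11
  AND CONTAINS(al.aal_Description, @search);";

using (var cmd = new SqlCommand(sql, conn))
{
    // Full-text search conditions have their own syntax; a simple phrase is double-quoted.
    cmd.Parameters.Add("@search", SqlDbType.NVarChar, 4000).Value = "\"" + userPhrase + "\"";
    using (var dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            // same column handling as in the reader loop above
        }
    }
}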
A couple of things you might want to do.
First, don't use nText. SQL 2005 has a datatype called nvarchar(max). It's MUCH better for storing large amounts of text. Further, ntext was deprecated so save yourself some trouble and convert it now. See this link on how to successfully do this.
Second, the query you posted is unusual. You have a left outer join, but you have a where clause on the outer joined table. Because of the where clause it's being converted (hopefully) into an inner join. You should just write it that way OR move the au.au_type = 11 to be part of the join construct. I doubt you want the latter.
Third, when the client runs the query the first time through your app it is generating a query plan based on those parameters. Running the exact same query shortly thereafter in Management Studio is going to reuse that plan and cached data. Therefore the second pass will be fast so no surprise there.
Fourth, I don't think you posted the actual query that was run. I suspect there is some data in the parameter you are comparing which either isn't escaping properly OR is using one of the reserved characters such as '[', ']', ^, etc.
