OFFSET command not recognized in Table Adapter query (C#)

I am attempting to build the following TableAdapter query in Visual Studio 2019:
SELECT * FROM Vendors
ORDER BY VendorID
OFFSET 5 ROWS
FETCH NEXT 5 ROWS ONLY
It gives the error "Unable to parse query text" at the OFFSET clause.
I know this query works on the database itself, as I can run it successfully on SQL Server (2019 Express).
Will Visual Studio TableAdapter queries not recognize the OFFSET clause, or is the syntax different in some way?

TableAdapters are pretty old these days, but they remain a serviceable data access strategy. The designer attempts to parse the query you entered and doesn't like the extra OFFSET/FETCH clauses.
I recommend you try:
right-click your dataset surface
Add >> TableAdapter
choose "SELECT which returns rows"
put the query in as SELECT * FROM Vendors
finish the wizard
click the Fill,GetData() line of the adapter
in the Properties grid, paste the extra OFFSET/FETCH clause onto the end of the query text
say "No" to "do you want to update the other queries?"
You should be left with a usable adapter; a minimal usage sketch follows below. See the footnote, though.
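Assuming the wizard produced a typed dataset called VendorsDataSet with a VendorsTableAdapter (both names are hypothetical and depend on what you called things), a sketch of consuming the adapter after pasting in the OFFSET/FETCH clause looks like this:
using (var adapter = new VendorsDataSetTableAdapters.VendorsTableAdapter())
{
    // GetData() now runs the full query, including the pasted OFFSET/FETCH clause,
    // so it returns only that page of Vendors rows.
    VendorsDataSet.VendorsDataTable page = adapter.GetData();
    foreach (VendorsDataSet.VendorsRow row in page)
    {
        Console.WriteLine(row.VendorID);
    }
}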
You have other options if you want to use a lot of syntax the designer doesn't like. Most notably, you can choose "Create New Stored Procedures" when going through the wizard, give it a base query of SELECT * FROM Vendors, let VS generate the sprocs, and then edit the OFFSET/FETCH clause into the stored procedure in SSMS.
Footnote: I personally still use TAs a lot and have never needed such syntax, but generally I make the first query in a TA of the form SELECT * FROM x WHERE ID = @y so it only ever pulls one row anyway and is very "plain SQL" - the datatable schema is driven from it and it "just works".
Then other queries have defined uses, such as SELECT * FROM x WHERE Name LIKE ... If you adopt this approach of making the first query a "normal" one that the designer can cope with, then you can certainly add another query with unsupported syntax. You will get an error "unable to parse query text", but you can ignore it, finish the wizard anyway, and it will work fine.

Related

Same query with the same query plan takes ~10x longer when executed from ADO.NET vs. SSMS

My query is fairly complex, but I have simplified it to figure out this problem and now it is a simple JOIN that I'm running on a SQL Server 2014 database. The query is:
SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1
When I run this query from SSMS and watch it in SQL Profiler, it takes around 350ms to execute. When I run the same query inside my application using Entity Framework or ADO.NET (I've tried both), it takes 4500ms to execute.
ADO.NET Code:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var cmdA = new SqlCommand("SET ARITHABORT ON", connection);
    cmdA.ExecuteNonQuery();
    var query = "SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1";
    var cmd = new SqlCommand(query, connection);
    cmd.ExecuteNonQuery();
}
I've done an extensive Google search and found this awesome article and several StackOverflow questions (here and here). In order to make the session parameters identical for both queries, I call SET ARITHABORT ON in ADO.NET, and it makes no difference. This is a straight SQL query, so there is no parameter-sniffing problem. I've simplified the query and the indexes down to their most basic form for this test. There is nothing else running on the server and nothing else accessing the database during the test. There are no computed columns in the Cars or SportsCars table, just INTs and VARCHARs.
The SportsCars table has about 170k records and 4 columns, and the Cars table has about 1.2M records and 7 columns. The resulting data set (SportsCars of Type=1) has about 2600 records and 11 columns. I have a single non-clustered index on the Cars table, on the [Type] column, that includes all the columns of the Cars table. Both tables have a clustered index on the CarID column. No other indexes exist on either table. I'm running as the same database user in both cases.
When I view the data in SQL Profiler, I see that both queries are using the exact same, very simple query plan. In SQL Profiler, I'm using the Performance Event Class and the ShowPlan XML Statistics Profile, which I believe to be the proper event to monitor and capture the actual execution plan. The # of reads is the same for both queries (2596).
How can two identical queries with the exact same query plan take 10x longer in ADO.NET vs. SSMS?
Figured it out:
Because I'm using Entity Framework, the connection string in my application has MultipleActiveResultSets=True. When I remove this from the connection string, the queries have the same performance in ADO.NET and SSMS.
Apparently there is an issue with this setting causing queries to respond slowly when connected to SQL Server via WAN. I found this link and this comment:
MARS uses "firehose mode" to retrieve data. Firehose mode means that the server will produce data as fast as possible. This also means that your client application must receive inbound data at the same speed as it comes in. If it doesn't, the data storage buffers on the server will fill up and the processing will stop until those buffers empty.
So what? You may ask... But as long as the processing is stopped, the resources on the SQL server are in use and are tied up. This includes the worker thread, schema and data locks, memory, etc. So it is crucial that your client application consumes the inbound results as quickly as they arrive.
I have to use this setting with Entity Framework otherwise lazy loading will generate exceptions. So I'm going to have to figure out some other workaround. But at least I understand the issue now.
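For anyone wanting to confirm the MARS effect in isolation (outside Entity Framework), here is a minimal timing sketch; the server and database names are placeholders, and the query is the one from the question:
// Sketch: time the same query with and without MARS (placeholder server/database).
// Needs System, System.Data.SqlClient, System.Diagnostics.
static void TimeQuery(string connectionString)
{
    const string query =
        "SELECT * FROM SportsCars as sc INNER JOIN Cars AS c ON c.CarID = sc.CarID WHERE c.Type = 1";
    using (var connection = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(query, connection))
    {
        connection.Open();
        var sw = Stopwatch.StartNew();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { }   // drain the results so the server's buffers never back up
        }
        Console.WriteLine($"{sw.ElapsedMilliseconds} ms  ({connectionString})");
    }
}

// TimeQuery("Server=myServer;Database=Test;Integrated Security=true;MultipleActiveResultSets=True");
// TimeQuery("Server=myServer;Database=Test;Integrated Security=true");   // MARS off (the default)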
How can two identical queries with the exact same query plan take 10x longer in ADO.NET vs. SSMS?
First we need to be clear about what is considered "same" with regards to queries and query plans. Assuming that the query at the very top of the question is a copy-and-paste, then it is not the same query as the one being submitted via ADO.NET. For two queries to be the same, they need to be byte-by-byte the same, which includes all white-space, capitalization, punctuation, comments, etc.
The two queries shown are definitely very similar. And they might even share the same execution plan. But how was "same"ness determined for those? Was the XML the same in both cases? Or just what was shown graphically in SSMS when viewing the plans? If they were determined to be the same based on their graphical representation then that is sometimes misleading. The XML itself needs to be checked. Even if two query plans have the same query hash, there are still (sometimes) parts of a query plan that are variable and changes do not change the plan hash. One example is the evaluation of expressions. Sometimes they are calculated and their result is embedded into the plan as a constant. Sometimes they are calculated at the start of each execution and stored and reused within that particular execution, but not for any subsequent executions.
One difference between SSMS and ADO.NET is the default session properties for each. I thought I had seen a chart years ago showing the defaults for ADO / OLEDB / SQLNCLI but can't find it now. Either way, it doesn't need to be guesswork, as the settings can be discovered using the SESSIONPROPERTY function. Just run this query in the C# code instead of your current SELECT, and inspect the results in the debugger, print them out, or whatever. Run something like this:
SELECT SESSIONPROPERTY('ANSI_NULLS') AS [AnsiNulls],
SESSIONPROPERTY('ANSI_PADDING') AS [AnsiPadding],
SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') AS [ConcatNullYieldsNull],
...;
Make sure to get all of the settings noted in the linked MSDN page. Now, in SSMS, go to the "Query" menu, select "Query Options...", and go to "Execution" | "ANSI". The settings coming back from the C# code need to match the ones shown in SSMS. Anything set differently requires adding something like this to the beginning of your ADO.NET query string:
SET ANSI_NULLS ON;
{rest of query}
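A minimal sketch of running that check from the C# side (the column list is abbreviated; add the remaining SESSIONPROPERTY names from the linked page):
// Sketch: inspect the session settings your ADO.NET connection actually gets.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT SESSIONPROPERTY('ANSI_NULLS')              AS AnsiNulls, " +
    "       SESSIONPROPERTY('ANSI_PADDING')            AS AnsiPadding, " +
    "       SESSIONPROPERTY('ARITHABORT')              AS ArithAbort, " +
    "       SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') AS ConcatNullYieldsNull;", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        if (reader.Read())
        {
            for (int i = 0; i < reader.FieldCount; i++)
            {
                Console.WriteLine($"{reader.GetName(i)} = {reader.GetValue(i)}");
            }
        }
    }
}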
Now, if you want to eliminate the DataTable loading as a possible suspect, just replace:
var cars = new DataTable();
cars.Load(reader);
with:
while(reader.Read());
And lastly, why not just put the query into a Stored Procedure? The session settings (i.e. ANSI_NULLS, etc) that typically matter the most are stored with the proc definition so they should work the same whether you EXEC from SSMS or from ADO.NET (again, we aren't dealing with any parameters here).
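If you go the stored-procedure route, the ADO.NET side is only a CommandType change. A sketch, with dbo.GetSportsCars as a hypothetical procedure name that simply wraps the SELECT from the question:
// Hypothetical proc name (dbo.GetSportsCars). CommandType lives in System.Data.
using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetSportsCars", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    connection.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { }   // or load into a DataTable as before
    }
}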

The schema returned by the new query differs from the base query (C#/SQL - VS 2012)

For a homework task I have to develop a C# application to interface with a SQL Server database file (.mdf), providing a DataGridView to browse the contents and several buttons to execute queries.
It has been going well; I have found out how to add queries to my table adapter, how to call them, etc.
Now I am having problems making a query that returns the maximum value of hourlyPayRate.
I have a table employee that contains the following attributes: employeeID, name, position, hourlyPayRate.
My query is
SELECT MAX(hourlyPayRate)
FROM employee
I right-click employeeTableAdapter, click "Add Query...", name it Max, and put in the query. When I click OK I get the following error message:
The schema returned by the new query differs from the base query.
The query executes correctly in the query builder, it is only when I click "OK" to save it that I receive the error.
Looking around SE there are no definitive answers to this question.
Thanks, Michael.
The solution has been found, for all those wondering.
The problem is that the query returns a result with a different number of columns than the base query.
Usually this is not an issue in most DBMSs, but for some reason Visual Studio's designer will have none of that.
The solution was this query:
SELECT employeeID, name, position, hourlyPayRate
FROM employee
WHERE (hourlyPayRate =
(SELECT MAX(hourlyPayRate) AS MaxRate
FROM employee AS TempTable))
Then trim the unneeded parts from the result as you like. For me this was as simple as having a label that derives its data only from the hourlyPayRate attribute.
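For what it's worth, a sketch of consuming that query from the TableAdapter; the query name (Max) and label name are hypothetical, so use whatever you named them in the designer:
// Hypothetical names: "Max" is the query added to employeeTableAdapter, lblMaxPay is a form label.
var rows = this.employeeTableAdapter.Max();   // returns the highest-paid employee row(s)
if (rows.Count > 0)
{
    lblMaxPay.Text = rows[0].hourlyPayRate.ToString();
}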
The actual reason for this is that the base query returns more or fewer columns than the query you are adding. Visual Studio cares about this, even though arguably it should not, since it is a new query.
NOTE: We are talking about query columns, not database table columns. The error is about the base query. For example, your base query might be a Fill, and perhaps you want a FillBy that hides a foreign-key ID column; because your base query outputs that column and your added query does not, the new query differs from the base query in that the columns are not the same. (I think this is done to ensure bound objects, such as a DataGridView, stay properly bound, but I really do not know.)
So for example
Fill() Query
SELECT Id, Name, FKtblThing_ID
FROM ITEMS
Adding this query works:
FillByID() Query
SELECT Id, Name, FKtblThing_ID
FROM ITEMS
WHERE (FKtblThing_ID = @FKtbl_ThingID)
If instead you tried this, it would not work:
FillByID() Query
SELECT Id, Name
FROM ITEMS
WHERE (FKtblThing_ID = @FKtbl_ThingID)
This is the error you would receive:
The schema returned by the new query differs from the base query.
Everyone is wrong here: you don't edit the SQL in the table adapter, you edit the SQL in the dataset.

Insert into Access from SQL Server

I'm looking to copy a few thousand records from SQL Server into Access in C#. The other direction works using SqlBulkCopy. Is there anything in place to do this in reverse?
I'm trying my best to stay away from looping through each field in each record and building a heinous Insert statement that not only would take forever to run, but would likely crash horribly if anything changes.
This will run against the MS Access OleDbConnection:
SELECT fld1, fld2 INTO accessTable FROM [sql connection string].sqltable
For example:
SELECT * INTO newtable
FROM
[ODBC;Description=Test;DRIVER=SQL Server;SERVER=server\SQLEXPRESS;UID=uid;Trusted_Connection=Yes;DATABASE=Test].table_1
Or to append
INSERT INTO newtable
SELECT *
FROM [ODBC;Description=Test;DRIVER=SQL Server;SERVER=server\SQLEXPRESS;UID=uid;Trusted_Connection=Yes;DATABASE=Test].table_1;
Or with FileDSN
INSERT INTO newtable
SELECT *
FROM [ODBC;FileDSN=z:\docs\test.dsn].table_1;
You will need to find the right driver to suit your setup, for example:
ODBC;Driver={SQL Server Native Client 11.0};Server=myServerAddress;Database=myDataBase; Uid=myUsername;Pwd=myPassword;
The connection strings from http://connectionstrings.com work for me, but check your client version.
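If you want to drive this from C#, here is a sketch of executing the pass-through INSERT over an OleDb connection to the Access file; the provider, file path, table names, and SQL Server details are all placeholders:
// Requires System.Data.OleDb. File path, table names, and SQL Server details are placeholders.
string accessCs = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\Test.accdb;";
string sql = @"INSERT INTO newtable
               SELECT *
               FROM [ODBC;Driver={SQL Server Native Client 11.0};Server=server\SQLEXPRESS;Database=Test;Trusted_Connection=Yes;].table_1;";

using (var conn = new System.Data.OleDb.OleDbConnection(accessCs))
using (var cmd = new System.Data.OleDb.OleDbCommand(sql, conn))
{
    conn.Open();
    int copied = cmd.ExecuteNonQuery();   // number of rows copied into the Access table
}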

SQL Server: way to see final query with filled parameters

Is there a way to see the final query that is passed to the SQL Server database from my C# app?
For example, I have this query:
SELECT * FROM mytable WHERE x = @yyyy;
This creates an SqlCommand object:
SqlCommand cmd = new SqlCommand("SELECT * FROM mytable WHERE x = @yyyy");
Plus I need to pass a parameter:
cmd.Parameters.Add("@yyyy", "MyValue");
What I want to see (in debug in C# or somewhere in SQL Server Management Studio) is this:
SELECT * FROM mytable WHERE x = MyValue
Where can I find such a query?!
Best regards
Where can I find such a query?!
You can't. Such a query never exists. The values are not substituted into the SQL.
I think sp_executesql is actually called, and this procedure accepts the parameters separately from the SQL. You can check this using SQL Profiler to see the actual SQL.
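If all you need is something readable while debugging, you can dump the command text and the parameter values yourself. A sketch (this is only an approximation for logging, not what the server receives):
// Debug helper sketch: shows the command text plus each parameter separately,
// which is roughly the shape sp_executesql sees.
static string DescribeCommand(SqlCommand cmd)
{
    var sb = new System.Text.StringBuilder(cmd.CommandText);
    foreach (SqlParameter p in cmd.Parameters)
    {
        sb.AppendLine().Append("-- ").Append(p.ParameterName).Append(" = ").Append(p.Value);
    }
    return sb.ToString();
}

// Usage: Console.WriteLine(DescribeCommand(cmd));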
Update:
ORDER BY @descOrAsc
Your problem is that parameters can only be used in certain places where expressions are allowed. DESC is not an expression - it is a reserved word. You cannot use a parameter containing the string "DESC" instead of writing the keyword DESC in the query.
Also, you haven't specified which column to order by.
You can run SQL Server Profiler and see all the queries that get executed, to see what's happening (and copy-paste them into SQL Server Management Studio to run tests, etc.).
I would expect the query to be passed to SQL Server with the parameters. There should be no need for anything to ever create a full SQL-only query. It makes no sense to do so, as it just means more conversions for the client, the server, or both. On the server side, the query processor is going to want to parse the query into clauses with values; if the command can pass those values directly, where's the advantage in converting them into the SQL statement, only to have the server parse them into separate values again?
1. You can use SQL Profiler (here you can see the whole process).
2. You can log all your queries to a SQL Server table, and then you can always retrieve them from that table.

What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server.
The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batched SQL statements.
The auto-generated table adapter command looks like this...
this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";
What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that?
Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE?
I cannot regenerate these table adapters, as the people who knew how have long since left.
It is just updating the object with the latest values from the database after an update. Always seemed a little unnecessary to me, but hey...
These are a nuisance from a maintenance point of view - if you have the option, you'll save yourself a lot of hassle by abstracting this all out to a proper data layer.
It allows for the possibility that the field values might be altered by trigger(s) on the table (or filled in by defaults or identity columns). Sensible enough, I'd have thought, in auto-generated boilerplate.
Though the SELECT statement is a tad wacky to assume that Field1 is the primary key... but maybe the auto-generation makes sure it is before producing this bit of code.
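If you decide the refreshed values aren't needed (no identity, default, or trigger round-trip to pick up), one low-touch option is to trim the trailing SELECT at runtime rather than hand-editing every generated command. A sketch against the command shown above:
// Sketch: strip the batched refresh SELECT so SQL CE only sees the INSERT/UPDATE.
// Only do this if you don't rely on the refreshed values coming back into the DataTable.
static void StripRefreshSelect(System.Data.IDbCommand command)
{
    string text = command.CommandText;
    int selectAt = text.IndexOf("SELECT", StringComparison.OrdinalIgnoreCase);
    if (selectAt > 0)   // > 0 so a command that *is* just a SELECT is left alone
    {
        command.CommandText = text.Substring(0, selectAt).TrimEnd(';', '\r', '\n', ' ');
    }
}

// e.g. StripRefreshSelect(this._adapter.InsertCommand);
//      StripRefreshSelect(this._adapter.UpdateCommand);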
