I have a temp table in SQL Server that I have created. I have the CommandText of the SqlCommand object set to insert from the temp table into another table. My question is: does it insert the rows that come before an error row?
So for example, let's say there are 1,000 rows in the temp table and 0 in tableA. I do an insert from the temp table to tableA. There is an error on row 999 and an exception is thrown. Does tableA then have 998 rows in it? Or is it 0?
I have tried googling this question, but I haven't found anything. I have also read the documentation on SqlCommand.ExecuteNonQuery() and haven't found an answer. I would appreciate any help or leads.
Since your INSERT is a single statement, no: there will be 0 rows in tableA. An individual statement is atomic, so it either completes in full or leaves the table untouched.
If you had multiple statements in a batch, then each successfully executed statement will perform the requested modifications EXCEPT the statement that errors out, which will leave the tables in the state they were in when the prior statement completed.
If you have the multiple-statement batch mentioned above wrapped inside a TRANSACTION then, generally speaking, if one of the statements errors you can roll back the entire batch to the state prior to any of the statements executing.
Note: again, this is generally speaking. There are many external factors that can leave your data in an inconsistent state (server failure, I/O corruption, etc.), in which case SQL Server will try to roll back your data from the transaction log.
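As a minimal sketch of that pattern from ADO.NET (the table names, columns, and connection string here are hypothetical), an explicit transaction makes the two inserts succeed or fail together:

using System.Data.SqlClient;

// Minimal sketch: two statements run atomically inside one transaction.
// Table/column names and the connection string are hypothetical.
static void InsertBoth(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction())
        {
            try
            {
                var cmd = new SqlCommand(
                    "INSERT tableA (col1, col2, col3) SELECT col1, col2, col3 FROM stagingA; " +
                    "INSERT tableB (col1, col2, col3) SELECT col1, col2, col3 FROM stagingB;",
                    conn, tran);
                cmd.ExecuteNonQuery();
                tran.Commit();   // both inserts become visible together
            }
            catch
            {
                tran.Rollback(); // neither insert is applied
                throw;
            }
        }
    }
}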
This is a single statement:
INSERT tableA (col1,col2,col3)
SELECT col1,col2,col3
FROM #tmpTable;
An error here (such as a data type mismatch, a NULL value in a NOT NULL column, etc.) will result in 0 rows being inserted into tableA.
While writing a stored procedure, I removed SET NOCOUNT ON and checked the rows-affected messages to verify whether the table values were actually changed.
Then I realized this gives bad performance.
So I implemented it with @@ROWCOUNT instead.
That is a good approach for checking one table.
But in this stored procedure, I update more than one table and delete from more than one table.
How can I efficiently return to the caller (where I will use .ExecuteScalar) whether the values were updated/deleted?
Variant 1: Create a trigger to log changes into a table, and afterwards read the information from that table.
Variant 2: Use the system variable @@ROWCOUNT to get the number of rows affected.
Variant 3: Capture the number of rows affected into your own variable, or use an OUTPUT clause or a final SELECT that returns the number of changed rows (see the sketch after this list).
Variant 4: Rewrite your code to the pattern: int numberOfRecords = comm.ExecuteNonQuery();
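A minimal sketch combining Variants 2 and 3 might look like this; the procedure and table names are hypothetical, and the @@ROWCOUNT bookkeeping shown in the comment would live inside your procedure:

using System.Data.SqlClient;

// Sketch: the procedure sums @@ROWCOUNT after each UPDATE/DELETE and
// returns the total as a one-row result set, so the caller can read it
// with ExecuteScalar. Procedure and table names are hypothetical.
//
// Inside the stored procedure (T-SQL):
//   DECLARE @total int; SET @total = 0;
//   UPDATE Orders SET ... ;           SET @total = @total + @@ROWCOUNT;
//   DELETE FROM OrderLines WHERE ...; SET @total = @total + @@ROWCOUNT;
//   SELECT @total;  -- read by ExecuteScalar
static int RunMaintenance(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_Maintenance", conn))
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        conn.Open();
        return (int)cmd.ExecuteScalar(); // total rows updated + deleted
    }
}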
According to most of the SQL texts that I have seen,
SET NOCOUNT ON
adds to database performance, and DBAs don't like to see it OFF.
However, in the ASP.NET application I'm dealing with, this causes calls to the stored procedures using ExecuteNonQuery to always return -1.
Is this a known issue and if so what is the workaround?
So the question is: how can I have SET NOCOUNT ON and still get ExecuteNonQuery to return the number of rows affected?
This question is for ExecuteNonQuery only. I know I can use ExecuteScalar and get @@ROWCOUNT.
What else did you expect? You explicitly tell the database not to count the rows, and then you ask why the counted rows are not returned?
The database does not deliver the number of rows affected because you turned it off. That's neither an issue nor a bug; that's the whole point of what you did.
Since you asked the database not to return that information, it does exactly what you said.
You could try to prefix your queries with SET NOCOUNT OFF and suffix them with SET NOCOUNT ON to get the number of rows in specific cases.
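A minimal sketch of that idea (the table name and connection string are hypothetical):

using System.Data.SqlClient;

// Sketch: turn the count messages back on for just this statement so
// ExecuteNonQuery can report the rows affected. Names are hypothetical.
static int DeleteInactive(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SET NOCOUNT OFF; DELETE FROM Customers WHERE Inactive = 1; SET NOCOUNT ON;",
        conn))
    {
        conn.Open();
        return cmd.ExecuteNonQuery(); // rows affected by the DELETE
    }
}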
I am using ADO.NET to call stored procedures that perform insertions into a Sybase database, and I want to include unit tests to ensure that the insert calls are working correctly.
According to the spec:
IDbCommand.ExecuteNonQuery()
Returns: The number of rows affected.
As each stored procedure is inserting one row, I am checking that the value returned == 1.
However, each procedure returns a different value, ranging from 3 to 8.
So, what exactly does the phrase 'The number of rows affected' mean?
I should specify that the only statements in the procs are SET, INSERT, PRINT, and RETURN, and the only statement that doesn't seem to affect the return value is the INSERT!
Thanks.
When your procedure does something, it can fire triggers and cause system updates, depending on your inner logic, and this number is the sum of all those affected rows.
More info:
http://www.sqlhacks.com/Retrieve/Row_Count
Without any knowledge of the Sybase provider, a likely answer is that ExecuteNonQuery returns the cumulative sum of 'The number of rows affected' messages returned by the SQL statement. In SQL Server, @@ROWCOUNT returns the number of rows affected by the last statement.
I found this comment in the SqlCommand.cs source on the ExecuteNonQuery return value, but didn't actually verify it:
// sql reader will pull this value out for each NextResult call. It is not cumulative
// _rowsAffected is cumulative for ExecuteNonQuery across all rpc batches
internal int _rowsAffected = -1; // rows affected by the command
In SQL Server, you can use SET NOCOUNT in stored procedures to control how many count messages are returned. ExecuteNonQuery returns -1 if no messages are returned, which is what happens when you put SET NOCOUNT ON at the beginning.
SET and RETURN do not send messages to the output stream where the count messages go, but PRINT does. Printed strings shouldn't affect ExecuteNonQuery's integer return value, but I can't say for certain.
The point is that it is better to use T-SQL output parameters to report the interesting row counts than to rely on the ExecuteNonQuery return value. An alternative to output parameters is to use a SELECT result set and ExecuteScalar.
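For illustration, a minimal output-parameter sketch against SQL Server (the procedure and parameter names are hypothetical; the same shape should carry over to the Sybase provider):

using System.Data;
using System.Data.SqlClient;

// Sketch: the procedure reports its own count through an output parameter
// instead of the ExecuteNonQuery return value. Names are hypothetical.
// Inside the procedure (T-SQL):
//   CREATE PROCEDURE dbo.usp_InsertThing @RowsInserted int OUTPUT AS
//   SET NOCOUNT ON;
//   INSERT ... ;
//   SET @RowsInserted = @@ROWCOUNT;
static int InsertThing(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_InsertThing", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        SqlParameter rows = cmd.Parameters.Add("@RowsInserted", SqlDbType.Int);
        rows.Direction = ParameterDirection.Output;
        conn.Open();
        cmd.ExecuteNonQuery();
        return (int)rows.Value; // the count reported by the procedure itself
    }
}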
See Also
Overriding rows affected in SQL Server using ExecuteNonQuery?
According to MSDN, the return value of ExecuteNonQuery is not relevant for stored procedure execution:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. For all other types of statements, the return value is -1.
You can instead use stored procedure return values or output parameters to obtain the desired value.
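For example, a minimal sketch of the return-value approach (the procedure name is hypothetical, assuming the proc ends with RETURN @@ROWCOUNT):

using System.Data;
using System.Data.SqlClient;

// Sketch: read the procedure's RETURN value rather than relying on
// ExecuteNonQuery. Assumes the proc ends with: RETURN @@ROWCOUNT;
// The procedure name is hypothetical.
static int InsertRow(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_InsertRow", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        SqlParameter ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
        ret.Direction = ParameterDirection.ReturnValue;
        conn.Open();
        cmd.ExecuteNonQuery();
        return (int)ret.Value; // value from the proc's RETURN statement
    }
}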
Just verify your SQL routine's logic, in case it has some dependent logic that adds more than one row under certain conditions.
I have some complex stored procedures that may return many thousands of rows, and take a long time to complete.
Is there any way to find out how many rows are going to be returned before the query executes and fetches the data?
This is with Visual Studio 2005, a WinForms application, and SQL Server 2005.
You mentioned your stored procedures take a long time to complete. Is the majority of the time taken up during the process of selecting the rows from the database or returning the rows to the caller?
If it is the latter, maybe you can create a mirror version of your SP that just gets the count instead of the actual rows. If it is the former, well, there isn't really that much you can do since it is the act of finding the eligible rows which is slow.
A solution to your problem might be to re-write the stored procedure so that it limits the result set to some number, like:
SELECT TOP 1000 * FROM tblWHATEVER
in SQL Server, or
SELECT * FROM tblWHATEVER WHERE ROWNUM <= 1000
in Oracle. Or implement a paging solution so that the result set of each call is acceptably small; one possible shape is sketched below.
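A rough paging sketch that works on SQL Server 2005 via ROW_NUMBER() (table and column names are hypothetical):

using System.Data.SqlClient;

// Sketch of ROW_NUMBER() paging (SQL Server 2005+). Table and column
// names are hypothetical; the caller disposes the returned reader.
const string PagedSql = @"
    WITH Numbered AS (
        SELECT col1, col2,
               ROW_NUMBER() OVER (ORDER BY col1) AS rn
        FROM tblWHATEVER
    )
    SELECT col1, col2
    FROM Numbered
    WHERE rn BETWEEN @first AND @last;";

static SqlDataReader GetPage(SqlConnection openConn, int first, int last)
{
    var cmd = new SqlCommand(PagedSql, openConn);
    cmd.Parameters.AddWithValue("@first", first);
    cmd.Parameters.AddWithValue("@last", last);
    return cmd.ExecuteReader(); // one acceptably small page per call
}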
Make a stored proc that counts the rows first:
SELECT COUNT(*) FROM table
Unless there's some aspect of the business logic of your app that allows calculating this, no. The database is going to have to do all the WHERE and JOIN logic to figure out how many rows there are, and that's the vast majority of the time spent in the SP.
You can't get the rowcount of a procedure without executing the procedure.
You could make a different procedure that accepts the same parameters, the purpose of which is to tell you how many rows the other procedure should return. However, the steps required by this procedure would normally be so similar to those of the main procedure that it should take just about as long as just executing the main procedure.
You would have to write a different version of the stored procedure to get a row count. This one would probably be much faster, because you could eliminate joins to tables you aren't filtering against, remove ordering, etc. For example, if your stored proc executed SQL such as:
select firstname, lastname, email, orderdate
from customer
inner join productorder on customer.customerid = productorder.customerid
where orderdate > @orderdate
order by lastname, firstname;
your counting version would be something like:
select count(*) from productorder where orderdate > @orderdate;
Not in general.
Through knowledge about the operation of the stored procedure, you may be able to get either an estimate or an accurate count (for instance, if the count on the "core" or "base" table of the query can be calculated quickly, while it is the complex joins and/or summaries which drive the time upwards).
But you would have to call the counting SP first and then the data SP or you could look at using a multiple result set SP.
It could take as long to get a row count as to get the actual data, so I wouldn't advocate performing a count in most cases.
Some possibilities:
1) Does SQL Server expose its query optimiser findings in some way? i.e. can you parse the query and then obtain an estimate of the rowcount? (I don't know SQL Server).
2) Perhaps based on the criteria the user gives you can perform some estimations of your own. For example, if the user enters 'S%' in the customer surname field to query orders you could determine that that matches 7% (say) of the customer records, and extrapolate that the query may return about 7% of the order records.
Going on what Tony Andrews said in his answer, you can get the estimated query plan for the call to your query with:
SET SHOWPLAN_TEXT OFF
GO
SET SHOWPLAN_ALL ON
GO
-- Replace with the call to your stored procedure
select * from MyTable
GO
SET SHOWPLAN_ALL OFF
GO
This should return one or more result sets from which you can read the estimated row count of your query (the EstimateRows column).
You need to analyze the returned data set to determine a logical (meaningful) primary key for the result set that is being returned. In general this WILL be much faster than the complete procedure, because the server is not constructing a result set from the data in all the columns of each row of each table; it is simply counting the rows. In general, it may not even need to read the actual table rows off disk to do this; it may simply need to count index nodes.
Then write another SQL statement that only includes the tables necessary to generate those key columns (hopefully this is a subset of the tables in the main SQL query), with the same WHERE clause and the same filtering predicate values.
Then add another optional parameter to the stored proc called, say, @CountsOnly, with a default of false (0), like so:
Alter Procedure <storedProcName>
    @param1 Type,
    -- other current params
    @CountsOnly TinyInt = 0
As
Set NoCount On
If @CountsOnly = 1
    Select Count(*)
    From TableA A
    Join TableB B On etc. etc...
    Where <here put all filtering predicates>
Else
    <here put old SQL that returns complete resultset with all data>
Return 0
You can then call the same stored proc with @CountsOnly set to 1 to get just the count of records. Old code that calls the proc will still function as it used to, since the parameter defaults to false (0) when it is not included.
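From the calling side, a minimal sketch (the proc name is hypothetical, and you would add the proc's normal filter parameters as usual):

using System.Data;
using System.Data.SqlClient;

// Sketch: the same proc serves both purposes. With @CountsOnly = 1 it
// returns just a count, read with ExecuteScalar; without it, the old
// full result set. The proc name here is hypothetical.
static int GetCountOnly(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.MyStoredProc", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        // ... add the proc's normal filter parameters here ...
        cmd.Parameters.AddWithValue("@CountsOnly", 1);
        conn.Open();
        return (int)cmd.ExecuteScalar(); // runs only the Count(*) branch
    }
}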
It's at least technically possible to run a procedure that puts the result set into a temporary table. Then you can find the number of rows before you move the data from server to application, and you save having to create the result set twice.
But I doubt it's worth the trouble unless creating the result set takes a very long time, and in that case it may be big enough that the temp table itself becomes a problem. Almost certainly the time to move a big result set over the network will be many times what is needed to create it.
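For what it's worth, a rough sketch of the idea over a single open connection (table, column, and filter names are hypothetical); note the temp table has to be created in the outer batch, because a #temp table created inside a procedure is dropped when the procedure returns:

using System.Data.SqlClient;

// Sketch: materialize the result once in a session-scoped temp table,
// read the count first, then fetch the rows over the same connection.
// Table, column, and filter names are hypothetical.
static void CountThenFetch(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        var fill = new SqlCommand(
            "SELECT col1, col2 INTO #results FROM tblWHATEVER WHERE col2 > 0; " +
            "SELECT COUNT(*) FROM #results;", conn);
        int rowCount = (int)fill.ExecuteScalar(); // row count before fetching

        var fetch = new SqlCommand("SELECT col1, col2 FROM #results;", conn);
        using (SqlDataReader reader = fetch.ExecuteReader())
        {
            // ... move the data from server to application ...
        }
    }
}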