I noticed some interesting behaviour recently.
When an MS SQL stored procedure is run using SqlCommand.ExecuteScalar(), my application seems to be completely unaware of any SQL errors or PRINT output that appears after the SELECT has produced its result.
The most probable explanation is that control returns to C# immediately after the first SELECT result appears, without waiting for the stored procedure to finish (though the stored procedure continues executing silently underneath).
The obvious advantage is a performance gain (no need to wait, since the result is already known); unfortunately, the C# app is then unaware of any SQL exceptions that occur after that point.
Could anyone confirm my explanation? Could this behaviour be altered?
The ExecuteNonQuery method will call ExecuteReader and immediately call Close on the returned reader object. ExecuteScalar will call Read once, pick out the first value (index 0) and then call Close.
Since the DataReader is essentially nothing more than a specialized network stream, any information returned after its current position (at the moment Close is called) will simply never reach the actual client components, even though the server might have sent it. The implementation is like this to avoid returning a huge amount of data when none is required.
In your case, I see two solutions to this problem.
Make sure you use ExecuteReader instead, and read all the way through the result:
using (var reader = command.ExecuteReader())
{
    do
    {
        while (reader.Read()) { /* whatever */ }
    } while (reader.NextResult());
}
If you can control the server side, it will help to move the actual "send-to-client" select to the end of the procedure or batch in question. Like this:
create proc Demo
as
declare @result int
select top 1 @result = Id from MyTable where Name = 'testing'
print 'selected result...'
select @result Id -- will send a column called "Id" with the previous value
go
Related
So I have sproc1, which does some things and returns some rows. The important thing is that it does some things. I also have sproc2, which does some things, calls sproc1 (which does its own things) and returns its own rows. The problem is that when I call sproc2 I get two result sets: the first comes from sproc1 and the second from sproc2.
Is it possible to easily suppress the sproc1 when calling it in sproc2?
I have two ways to do this as far as I can tell:
use a temporary table to catch the output of the exec sproc.
in C# navigate to the last result set and use that while ignoring the first one(s).
Neither of these methods is easily reusable, because:
the first requires me to CREATE a temporary table that matches the output of the stored procedure;
the second requires me to iterate through the result sets to reach the last one, without knowing which is the last unless I try to move to the next one and fail via .NextResult().
The easy way would be if SQL Server allowed me to exec a stored procedure within another stored procedure but suppress the output of the inner one. Or if SqlCommand allowed an ExecuteReader(CommandBehavior.LastResult) that would navigate to the last result set by itself.
Can any of the two be achieved in an easy and reusable manner?
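To illustrate, the second approach looks roughly like this (a sketch; `cmd` stands for the SqlCommand that EXECs the outer sproc, and the row-buffering is just one way to keep the last result set):

```csharp
// Drain every result set, keeping only the rows of the last one.
var lastResultRows = new List<object[]>();
using (var reader = cmd.ExecuteReader())
{
    do
    {
        lastResultRows.Clear(); // a new result set starts; discard the previous one
        while (reader.Read())
        {
            var row = new object[reader.FieldCount];
            reader.GetValues(row);
            lastResultRows.Add(row);
        }
    } while (reader.NextResult()); // returns false after the last result set
}
// lastResultRows now holds only the rows of the final result set.
```

The obvious downside is that every intermediate result set is still transferred over the wire and buffered, only to be thrown away.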
The real solution would be to refactor the inner stored procedures into write and read components. Or add a @param to the inner stored procedures that prevents the final results from being selected. But I'm trying to be lazy here!
So (for now, unless I find a better answer or something gets improved) I ended up adding this argument with a default value so I don't have to think about it at all in the C# side:
,@_Suppress bit = 0 -- prevent output via select
and right before the select I add:
if @_Suppress is null or @_Suppress = 0
select -- output results
This method also requires you to refactor any insert ... output code to output into a temporary table, and only select from it at the end if output is not suppressed.
This is the easiest method to handle things, but there really should be built-in functionality for cases like these:
begin suppress
exec some_sproc;
end suppress;
or some special syntax like sexec (as in "suppressed exec"), or a general-use NULL table that accepts inserts of any column format and just discards them.
I'll probably add this argument from now on to all my sprocs that produce results and refactor the old ones impacted by this issue.
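Putting the pieces together, the pattern looks roughly like this (a sketch; the proc, parameter and table names are made up for illustration):

```sql
create proc InnerProc
    @SomeParam int
    ,@_Suppress bit = 0 -- prevent output via select
as
begin
    -- ... do the actual work (inserts/updates) here ...

    if @_Suppress is null or @_Suppress = 0
        select Result = @SomeParam; -- output results only when not suppressed
end
go

-- caller (e.g. inside sproc2): suppress the inner result set
exec InnerProc @SomeParam = 1, @_Suppress = 1;
```

With the default of 0, existing callers keep their result sets and nothing changes on the C# side.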
This is going to top some of the weirdest things I've seen. I've tried looking up "simple t-sql delete causing timeout", but all the titles are misleading: they say simple but are not. They deal with deleting millions of records or have complex relationships set up. I do not.
I have four tables:
tblInterchangeControl,
tblFunctionalGroup,
tblTransactionSet,
tblSegment
The latter 3 all associate to tblInterchangeControl via InterchangeControlID. There are no relationships set up. Like I said, as simple as one could get.
The procedure runs a delete statement on all 4 tables like so...
DELETE FROM tblSegment
WHERE (ID_InterchangeControlID = @InterchangeControlID)

DELETE FROM tblTransactionSet
WHERE (ID_InterchangeControlID = @InterchangeControlID)

DELETE FROM tblFunctionalGroup
WHERE (ID_InterchangeControlID = @InterchangeControlID)

DELETE FROM tblInterchangeControl
WHERE (InterchangeControlID = @InterchangeControlID)
The weird part is that if I leave these in the procedure it times out, and if I remove them, it does not. I've pinned these delete statements down as the cause. But why?!
I included C# because I'm calling this procedure from a C# application. I don't think this is the issue, but maybe. I only say that because my code works just fine when I remove the delete statements inside the stored procedure. Then if I put them back, an exception is thrown saying it has timed out.
In case my comment is the answer.
Most likely you have some locks holding those deletes up.
If you run a query from a command-line SQL tool or from SQL Management Studio, it will take whatever time it needs to complete the query. So yes, most likely it's a client-side issue. And, because you mentioned C#, it's probably the ADO.NET command timeout.
Also, I'd suggest profiling the queries by inspecting their execution plans. If you don't have indexes (primary/unique key constraints), the deletes will result in full table scans, i.e. O(n) operations you don't want.
Update:
OK, it looks like an ADO.NET timeout. In your code, just prior to executing the command, increase the timeout:
var myCommand = new SqlCommand("EXEC ..."); // you create it something like this
....
myCommand.CommandTimeout = 300; // in seconds, i.e. 5 minutes
myCommand.ExecuteNonQuery(); // assuming your SP doesn't return anything
I am creating an Oracle user (which then shows up in the dba_users view) using the C# code below, where I use OleDbCommand and ExecuteNonQuery. The user is created successfully, but ExecuteNonQuery always returns "0".
So I am doing validation in my code as (IsUserCreated==0). Am I correct with my coding here?
int IsUserCreated = oleCreateUserCommands.ExecuteNonQuery();
if (IsUserCreated == 0)
{
    //TBD code
    Response.Write("User Created Successfully");
}
else
{
    //TBD Code
    Response.Write("User creation failed with some error");
}
No, basically. That 0 doesn't mean much - in fact, the main thing it tells me is that you probably have SET NOCOUNT ON somewhere, or this is a sproc without a RETURN - otherwise I would expect 1 to be returned to indicate 1 row impacted. Either way: it does not indicate the lack of an error. The lack of an exception indicates the lack of an error. Returning 1 is useful as a "yes, exactly 1 row was updated" check, if it is enabled.
As Marc said, you can't rely on the return value. The return value is not consistent or portable: across different databases and statement types you may see -1 or 0 for success for non-DML, and 0, 1 or greater for DML, in my experience. As for his comment about SET NOCOUNT ON: Oracle doesn't support that, it's a SQL Server feature.
Incidentally, for a CREATE USER statement, I always see -1 (I develop several desktop database tools and I've done a lot of tracing) though I don't use OleDb much. I am surprised you see 0, you should double check.
Regardless, you must use exceptions to handle error cases for ExecuteNonQuery, ExecuteScalar and their siblings. It is not possible to write robust code otherwise. The lack of an exception implies success. As for the return code, it is really useless for validation except in DML: how would you write a generic algorithm that accepts -1, 0, 1 or N as valid? I simply check it when I know I am issuing DML and need to return the row count to the user.
Your code should be in a using block (all IDisposable types in ADO.NET should typically be disposed via a using statement).
You should have a try/catch, or at least a try/finally.
If you don't like repeating yourself, then wrap ExecuteNonQuery in your own function that will handle exception and return a bool true/false. In certain cases, I like to write extension methods for the connection or reader classes.
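Such a wrapper could be sketched like this (the method name and out-parameter are invented for illustration; adapt the logging to your application):

```csharp
// Hypothetical helper: runs a non-query and reports success/failure as a bool
// instead of letting the exception escape to every call site.
public static bool TryExecuteNonQuery(OleDbCommand command, out int rowsAffected)
{
    try
    {
        rowsAffected = command.ExecuteNonQuery();
        return true; // no exception => the statement succeeded
    }
    catch (OleDbException)
    {
        // log the error here as appropriate
        rowsAffected = 0;
        return false;
    }
}
```

The caller then writes `if (TryExecuteNonQuery(cmd, out var n)) ...` and never inspects the raw return value for success.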
I am experiencing a very strange issue: the stored procedure doesn't return, even though the data is inserted properly and promptly. The most staggering thing is that the record's timestamp field is always populated within milliseconds, so the data seems to be getting into the tables very fast. But the return never happens.
The important part is that it only happens under load -- individual requests are just fine. Only when the DB is stressed enough does this start happening.
Any ideas are welcome because I have very little understanding what can be wrong.
Here is simplified C# part of it:
try
{
    using (var conn = new SqlConnection(connString))
    {
        conn.Open();
        using (var cmd = new SqlCommand("dbo.Blah", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
            // THIS NEVER EXECUTES:
            ReportSuccess();
        }
    }
}
catch (SqlException) // the command timeout surfaces as a SqlException
{
    // EXCEPTION HERE
}
And the stored procedure:
CREATE PROCEDURE dbo.Blah
AS
BEGIN
    INSERT dbo.MyTable VALUES (...)
    INSERT dbo.MyTable2 ...
    -- Here is where everything stops.
END
UPDATE: we did our best to correlate the timeouts with SQL Server activity, and it appears non-app user activity was causing locks. The process is designed to allow very quick inserts and very quick reads. However, some individuals would execute quite expensive queries without using a dirty-read policy, which was tipping over the fragile hardware load balance. Thanks for all your tips though.
Based on the information provided, my best guess is that there is a concurrency problem in your stored procedure.
Try using a transaction in your stored procedure. If my theory is correct, no record will be inserted.
Check if there are any locks being acquired on MyTable2. If you are doing a big select on that table elsewhere, ensure you are using NOLOCK.
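For example, a big reporting query against that table could read past the insert locks like this (a sketch; the column list is assumed):

```sql
-- NOLOCK = READ UNCOMMITTED for this table only:
-- the select no longer waits on (or blocks) the inserts,
-- at the cost of possibly seeing dirty/uncommitted rows.
SELECT *
FROM dbo.MyTable2 WITH (NOLOCK)
WHERE CreatedOn >= DATEADD(day, -1, GETDATE());
```

The same effect can be had session-wide with SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED.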
When you use a SqlDataReader, is the return set completely determined by the ExecuteReader step, or can you influence what you get by writing to the source table(s) while reading? Here is an example in very rough pseudo code.
sc = new SqlCommand("select * from people order by last, first", db);
sdr = sc.ExecuteReader();
while (sdr.Read())
{
    l = (string) sdr["last"];
    k = (string) sdr["key"];
    if (l.Equals("Adams"))
    {
        sc2 = new SqlCommand("update people set last = @nm where key = @key");
        sc2.Parameters.Add(new SqlParameter("@nm", "Ziegler"));
        sc2.Parameters.Add(new SqlParameter("@key", k));
        sc2.ExecuteNonQuery();
    }
}
I've seen a lot of bad errors in other environments caused by writing to the table you are reading. Here record k gets bumped from the top of the list (Adams) to the bottom (Ziegler). I've assumed (ha!) that SqlDataReader is immune. True? False?
It depends on your transaction isolation level or other locking hints, but IIRC, by default, reading from a table in SQL Server locks those records. Therefore the code you posted will either deadlock (sc2 will eventually time out), or the updates will go into the transaction log and none will be written until your reader is finished. I don't remember which off the top of my head.
One issue I see is that while the reader is open, it owns the database connection; nothing else can use it until the reader is closed. So the only way to do this is to use a different database connection for the updates, and even then it would depend on the transaction isolation level.
If you want to ensure that the read data is not altered by those updates, could you read the data into a temporary object container and then, after all the reading is done, do your updates? It would make the issue moot.
Of course, I did find the question interesting from a "how does this really work" standpoint.
If you want to do updates while you're iterating on the query results you could read it all into a DataSet.
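That could look roughly like this (a sketch reusing the question's `db` connection and `people` table; parameter handling via AddWithValue for brevity):

```csharp
// Fill a disconnected DataSet first; the reader is opened, drained
// and closed entirely inside Fill, so the connection is free afterwards.
var adapter = new SqlDataAdapter("select * from people order by last, first", db);
var ds = new DataSet();
adapter.Fill(ds);

foreach (DataRow row in ds.Tables[0].Rows)
{
    if ((string)row["last"] == "Adams")
    {
        // Safe: no reader is holding the connection while we update.
        using (var update = new SqlCommand(
            "update people set last = @nm where key = @key", db))
        {
            update.Parameters.AddWithValue("@nm", "Ziegler");
            update.Parameters.AddWithValue("@key", row["key"]);
            update.ExecuteNonQuery();
        }
    }
}
```

Since the iteration happens over the in-memory copy, the updates cannot affect what is being read.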
I know you didn't ask about this, and I also know this is pseudo-code, but be sure to wrap your sc, sdr, and sc2 variables in using () statements to ensure they're disposed properly.