So I have sproc1, which does some things and returns some rows. The important thing is that it does some things. I also have sproc2, which does some things, calls sproc1 (which does its own things), and returns its own rows. The problem is that when I call sproc2 I get two result sets: the first comes from sproc1 and the second from sproc2.
Is it possible to easily suppress the output of sproc1 when calling it from sproc2?
I have two ways to do this as far as I can tell:
use a temporary table to catch the output of the inner exec;
in C#, navigate to the last result set and use it while ignoring the first one(s).
Neither of these methods is easily reusable, because the:
first requires me to CREATE a temporary table that matches the output of the stored procedure;
second requires me to iterate through the result sets without knowing which one is the last, until moving to the next fails via .NextResult().
The easy way would be if SQL Server allowed me to exec a stored procedure within another stored procedure but suppress the output of the inner one. Or if SqlCommand offered an ExecuteReader(CommandBehavior.LastResult) that would navigate to the last result by itself.
Can any of the two be achieved in an easy and reusable manner?
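For the client-side option, one possible sketch (helper name and shape are mine, not an existing API) is to buffer each result set as it streams past, so that whatever was buffered when .NextResult() finally fails is the last one:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

static List<object[]> ReadLastResultSet(SqlCommand cmd)
{
    var last = new List<object[]>();
    using (var reader = cmd.ExecuteReader())
    {
        do
        {
            // Buffer the current result set's rows.
            var current = new List<object[]>();
            while (reader.Read())
            {
                var row = new object[reader.FieldCount];
                reader.GetValues(row);
                current.Add(row);
            }
            last = current; // overwritten until NextResult() returns false
        } while (reader.NextResult());
    }
    return last;
}
```

This keeps only one result set in memory at a time, but it still has to pull every earlier result set over the wire.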
The real solution would be to refactor inner stored procedures into write and read components, or to add a @param to inner stored procedures that prevents the final results from being selected. But I'm trying to be lazy here!
So (for now, unless I find a better answer or something gets improved) I ended up adding this argument with a default value so I don't have to think about it at all in the C# side:
,@_Suppress bit = 0 -- prevent output via select
and right before the select I add:
if @_Suppress is null or @_Suppress = 0
select -- output results
This method also requires you to refactor insert ... output code to output into a temporary table, and only select from it at the end if not suppressed.
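Put together, the pattern could look roughly like this (a sketch only; the table and column names are invented):

```sql
CREATE PROCEDURE dbo.sproc1
    @SomeInput int
   ,@_Suppress bit = 0  -- prevent output via select
AS
BEGIN
    DECLARE @out TABLE (Id int, Name nvarchar(50));

    -- refactored: INSERT ... OUTPUT now lands in a table variable
    -- instead of going straight to the client
    INSERT INTO dbo.SomeTable (Name)
    OUTPUT inserted.Id, inserted.Name INTO @out
    VALUES (N'example');

    IF @_Suppress IS NULL OR @_Suppress = 0
        SELECT Id, Name FROM @out; -- output results only when not suppressed
END
```

With the default in place, existing callers (including the C# side) keep working unchanged, and sproc2 calls `EXEC dbo.sproc1 @SomeInput = 1, @_Suppress = 1;` to silence it.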
This is the easiest method to handle things but there should be internal functionality for these cases like:
begin suppress
exec some_sproc;
end suppress;
or some special syntax like sexec (as in suppressed exec) or a general use NULL table that can accept any insert columns format and just discard it.
I'll probably add this argument from now on to all my sprocs that produce results and refactor the old ones impacted by this issue.
Related
In our .net application, we have a tool that allows you to type SQL in a browser and submit it, for testing. In this context, though, I need to be able to prevent testers from writing to specific tables. So, based on the parameter passed from the controller (InSupportTool = true, or something), I need to know if SQL Server is allowed to make updates or inserts to, say, an accounts table.
Things I've tried so far:
I have tried looking into triggers, but there is no before trigger available, and I've heard people don't recommend using them if you can help it.
Parsing the passed SQL string to look for references to inserts or updates on that table. This is even more fragile, and I'm sure there are countless ways of getting around it if someone wanted to.
Check constraint, which is the closest I feel I've gotten but I can't quite put it together.
For check constraints, I have this:
ALTER TABLE Accounts WITH NOCHECK
ADD CONSTRAINT chk_read_only_accounts CHECK(*somehow this needs to be dynamic based on parameters passed from C# controller*)
The above works to prevent updates to that table, but only if I put a check like 1 = 0. I've seen a post where people said you could use a function as the check, and pass parameters that way, but I'm at the limit of my familiarity with SQL/.net.
Given what I'm looking to do, does anyone have experience with something like this? Thanks!
Since the application is running under a different account than the end user, you could specify your application name in the connection string (e.g. Application Name=SupportTool) and check that in an after trigger, rolling back the transaction as needed:
CREATE TABLE dbo.example(
col1 int
);
GO
CREATE TRIGGER tr_example
ON dbo.example
AFTER INSERT, UPDATE, DELETE
AS
IF APP_NAME() = N'SupportTool'
BEGIN
ROLLBACK;
THROW 50000, 'This update is not allowed using the support tool', 1;
END;
GO
INSERT INTO dbo.example VALUES(1);
GO
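On the client side, the support tool's connection just needs the matching application name; a sketch (server and database names are placeholders):

```csharp
using System.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "myserver",        // placeholder server name
    InitialCatalog = "mydb",        // placeholder database name
    IntegratedSecurity = true,
    ApplicationName = "SupportTool" // what APP_NAME() returns in the trigger
};

using (var conn = new SqlConnection(builder.ConnectionString))
{
    conn.Open();
    // Any INSERT/UPDATE/DELETE against dbo.example on this connection
    // will now be rolled back by tr_example.
}
```

Note that the application name is purely advisory (any client can claim any name), so this is a guard rail against accidents, not a security boundary.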
When a query returns multiple results, we iterate among them using NextResult() on the SqlDataReader. This way we access the results sequentially.
Is there any way to access the results in a random / non-sequential way? For example, jump first to the third result, then to the first, etc.
I am searching for something like rdr.GetResult(1), or a workaround.
Since I was asked why I want something like this:
First of all, I have no access to the query, so I cannot change it; in my client I get the results in the sequence the server writes / produces them.
To process (build collections, entities --> business logic) the first one, I need information from both the second and the third.
Again, since modifying that code is not an option, I cannot (without writing a lot of code) 'store' the connection info (e.g. ids) in order to connect the two result sets in a later step.
The most 'elegant' solution (for sure not the only one) is to process the result sets in a non-sequential way. That is why I am asking if there is such a way.
Update 13/6
While Jeroen Mostert's answer gives a thoughtful explanation of why, Think2ceCode1ce's answer shows the right direction for a workaround. The content of the link in the comments shows how an additional DataSet could be utilized to work in an async way. IMHO this would be the way to go if I were writing a general solution. However, in my case I based my solution on the nature of my data and the logic behind them. In short: (1) I read the data as they come, sequentially, using the SqlDataReader; (2) while reading the result set that is first in order but second in logic, I store some of the data I need in a dictionary and a collection; (3) while reading the result set that is third in order but first in logic, I iterate through the collection I built earlier and, based on the dictionary data, build my final result.
The final code seems more efficient and it is more maintainable than using the async DataAdapter. However this is a very specific solution based on my data.
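As a rough sketch of that sequence (all names invented for illustration; the real column layout and logic are specific to my data):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Three result sets arrive in server order 1, 2, 3
// but are needed in logical order 3, 1, 2.
var lookup = new Dictionary<int, string>();
var buffered = new List<int>();

using (var reader = cmd.ExecuteReader())
{
    // Result set #1 (second in logic): buffer ids and build the dictionary.
    while (reader.Read())
    {
        int id = reader.GetInt32(0);
        lookup[id] = reader.GetString(1);
        buffered.Add(id);
    }

    reader.NextResult();
    while (reader.Read()) { /* result set #2: read/buffer similarly */ }

    reader.NextResult();
    // Result set #3 (first in logic): combine with the buffered data.
    while (reader.Read())
    {
        foreach (int id in buffered)
        {
            string text = lookup[id];
            // ... build the final entities from text + the current row ...
        }
    }
}
```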
Provides a way of reading a forward-only stream of rows from a SQL
Server database.
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader(v=vs.110).aspx
You need to use a DataAdapter for disconnected and non-sequential access. To use it you only have to change a bit of your ADO.NET code.
Instead of
SqlDataReader sqlReader = sqlCmd.ExecuteReader();
You need
DataTable dt = new DataTable();
SqlDataAdapter sqlAdapter = new SqlDataAdapter(sqlCmd);
sqlAdapter.Fill(dt);
If your SQL returns multiple result sets, you would use DataSet instead of DataTable, and then access result sets like ds.Tables[index_or_name].
https://msdn.microsoft.com/en-us/library/bh8kx08z(v=vs.110).aspx
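For a command returning several result sets, the DataSet variant looks like this (the index values are just for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

var ds = new DataSet();
using (var sqlAdapter = new SqlDataAdapter(sqlCmd))
{
    // Fill creates one DataTable per result set, in arrival order.
    sqlAdapter.Fill(ds);
}

// Non-sequential access: process the third result set first, then the first.
DataTable third = ds.Tables[2];
DataTable first = ds.Tables[0];
foreach (DataRow row in third.Rows)
{
    // process row[0], row["SomeColumn"], ...
}
```

The trade-off is that everything is materialized in memory up front, which is exactly what makes the random access possible.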
No, this is not possible. The reason why is quite elementary: if a batch returns multiple results, it must return them in order -- the statement that returns result set #2 does not run before the one that returns result set #1, nor does the client have any way of saying "please just skip that first statement entirely" (as that could have dire consequences for the batch as a whole). Indeed, there's not even any way in general to tell how many result sets a batch will produce -- all of this is done at runtime, the server doesn't know in advance what will happen.
Since there's no way, server-side, to skip or index result sets, there's no meaningful way to do it client-side either. You're free to ignore the result sets streamed back to you, but you must still process them in order before you can move on -- and once you've moved on, you can't go back.
There are two possible global workarounds:
If you process all data and cache it locally (with a DataAdapter, for example) you can go back and forth in the data as you please, but this requires keeping all data in memory.
If you enable MARS (Multiple Active Result Sets) you can execute another query even as the first one is still processing. This does require splitting up your existing single batch code into individual statements (which, if you really can't change anything about the SQL at all, is not an option), but you could go through result sets at will (without caching). It would still not be possible for you to "go back" within a single result set, though.
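As a sketch of the MARS route: it is switched on in the connection string, after which two commands can be active on the same connection at once (server, database, and the trivial statements here are placeholders):

```csharp
using System.Data.SqlClient;

var cs = "Server=myserver;Database=mydb;Integrated Security=true;" +
         "MultipleActiveResultSets=True"; // enables MARS

using (var conn = new SqlConnection(cs))
{
    conn.Open();
    var cmd1 = new SqlCommand("SELECT 1", conn); // formerly statement 1 of the batch
    var cmd2 = new SqlCommand("SELECT 2", conn); // formerly statement 2 of the batch

    using (var r1 = cmd1.ExecuteReader())
    using (var r2 = cmd2.ExecuteReader()) // only legal with MARS enabled
    {
        r2.Read(); // consume the "second" result before finishing the first
        r1.Read();
    }
}
```

Without MARS, opening the second reader while the first is still open throws an InvalidOperationException.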
I have the following table
CREATE TABLE MYTABLE (MYID VARCHAR2(5), MYGEOM MDSYS.SDO_GEOMETRY );
AND the sql statement below:
INSERT INTO MYTABLE (MYID,MYGEOM) VALUES
( 255, SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY(-34.921816571,-8.00119170599993,
...,-34.921816571,-8.00119170599993)));
Even after reading several articles about possible solutions, I couldn't find out how to insert this SDO_GEOMETRY object.
Oracle complains with this message:
ORA-00939 - "too many arguments for function"
I know that it's not possible to insert more than 999 values at once.
I tried stored procedure solutions, but I'm not Oracle expert, and maybe I missed something.
Could someone give me an example of code in C# or PL/SQL (or both), with or without a stored procedure, that inserts this row?
I'm using Oracle 11g and OracleDotNetProvider v 12.1.400 on VS2015. My source of spatial data is an external JSON (so, no database-to-database copying), and I can only use solutions that go through this provider, without data files or direct database handling.
I'm using SQL Developer to test the queries.
Please don't point me to articles unless you are sure they work with this row/value.
I finally found an effective solution. Here: Constructing large sdo_geometry objects in Sql Developer and SqlPlus. Pls-00306 Error
The limitation you see is old. It is based on the idea that no-one would ever write a function that would have more than 1000 parameters (actually 999 input parameters and 1 return value).
However, with the advent of multi-valued attributes (VARRAYs) and objects, this is no longer true. In particular for spatial types, the SDO_ORDINATES attribute is really an object type (implemented as a VARRAY), and the reference to SDO_ORDINATE_ARRAY is the constructor of that object type. Its input can be an array (when used from a programming language) or a list of numbers, each one being considered a parameter to a function (hence the limit of 999 numbers).
That happens only if you hard-code the numbers in your SQL statement. But that is a bad practice generally. The better practice is to use bind variables, and object types are no exception. The proper way is to construct an array with the coordinates you want to insert and pass those to the insert statement. Or construct the entire SDO_GEOMETRY object as a bind variable.
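As an illustration of the array/bind approach, here is a minimal PL/SQL sketch (table name and SRID taken from the question, coordinates shortened to two points): the ordinates are filled into a variable programmatically, so no single SQL statement ever lists hundreds of literal arguments.

```sql
DECLARE
  v_ords SDO_ORDINATE_ARRAY := SDO_ORDINATE_ARRAY();
BEGIN
  -- In real code this would loop over the coordinates parsed from
  -- the external JSON; two points here for brevity.
  v_ords.EXTEND(4);
  v_ords(1) := -34.921816571;  v_ords(2) := -8.00119170599993;
  v_ords(3) := -34.921816571;  v_ords(4) := -8.00119170599993;

  INSERT INTO MYTABLE (MYID, MYGEOM)
  VALUES ('255',
          SDO_GEOMETRY(2003, 2554, NULL,
                       SDO_ELEM_INFO_ARRAY(1, 1003, 1),
                       v_ords));
  COMMIT;
END;
/
```

From C#, the equivalent idea is to bind the whole SDO_GEOMETRY (or at least the ordinate array) as a parameter via the provider's UDT support rather than concatenating the numbers into the SQL text.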
And of course, the very idea of constructing a complex geometry entirely manually by hardcoding the coordinates is absurd. That shape will either be loaded from a file (and a loading tool will take care of that), or captured by someone drawing a shape over a map - and then your GIS/capture tool will pass the coordinates to your application for insertion into your database.
In other words, that limitation to 999 attributes / numbers is rarely seen in real life. When it does, it reflects misunderstandings on how those things work.
I'm working with a legacy codebase and need to call a stored procedure that I'm not allowed to modify. This stored procedure returns a row or multiple rows of validation data.
Example of result set (two columns, code and text):
0 "success"
OR
3 "short error"
4 "detailed error"
In the procedure itself, the message is selected simply as:
Select 0 as code, 'success' as text
Problem:
I'm using Entity Framework to map the result of this stored procedure to a custom class:
public class ValidationResult
{
public int code { get; set; }
public string text { get; set; }
}
The call itself:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
I've written some integration tests and have noticed that when the procedure returns the success message, the 0 comes across as a short; when it returns a non-zero message, it comes across as an int. I assumed that with code declared as an int, the short would fit into it. Unfortunately, I get the following exception for my success test:
The specified cast from a materialized 'System.Int16' type to the 'System.Int32' type is not valid.
When I switch code to a short to make my success test pass, my failure test fails with the following exception:
The specified cast from a materialized 'System.Int32' type to the 'System.Int16' type is not valid.
ADO.NET is an answer
One solution is to fall back to ADO.NET's SqlDataReader object, so I have that as a fallback solution. I'm wondering if there is something I can do on the EF side to get this working, though.
(This is a follow-up to my previous answer. It is only relevant for sql-server-2012 and later.)
Short answer:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
Approach taken in this answer:
This answer will follow in your footsteps and use SqlQuery to execute your stored procedure. (Why not an altogether different approach? Because there might not be any alternative. I'll go into this further below.)
Let's start with an observation about your current code:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
The query text "old_sproc" is really abbreviated T-SQL for "EXECUTE old_sproc". I am mentioning this because it's easy to think that SqlQuery somehow treats the name of a stored procedure specially; but no, this is actually a regular T-SQL statement.
In this answer, we will modify your current SQL only a tiny bit.
Implicit type conversions with the WITH RESULT SETS clause:
So let's stay with what you're already doing: EXECUTE the stored procedure via SqlQuery. Starting with SQL Server 2012, the EXECUTE statement supports an optional clause called WITH RESULT SETS that allows you to specify what result sets you expect to get back. SQL Server will attempt to perform implicit type conversions if the actual result sets do not match that specification.
In your case, you might do this:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
The added clause states that you expect to get back one result set having a code INT and a text VARCHAR(MAX) column. The important bit is code INT: If the stored procedure happens to produce SMALLINT values for code, SQL Server will perform the conversion to INT for you.
Implicit conversions could take you even further: For example, you could specify code as VARCHAR(…) or even NUMERIC(…) (and change your C# properties to string or decimal, respectively).
If you're using Entity Framework's SqlQuery method, it's unlikely to get any neater than that.
For quick reference, here are some quotes from the linked-to MSDN reference page:
"The actual result set being returned during execution can differ from the result defined using the WITH RESULT SETS clause in one of the following ways: number of result sets, number of columns, column name, nullability, and data type."
"If the data types differ, an implicit conversion to the defined data type is performed."
Do I have to write a SQL query? Isn't there another (more ORM) way?
None that I am aware of.
Entity Framework has been evolving in a "Code First" direction in the recent past (it's at version 6 at the time of writing), and that trend is likely to continue.
The book "Programming Entity Framework Code First" by Julie Lerman & Rowan Miller (published in 2012 by O'Reilly) has a short chapter "Working with Stored Procedures", which contains two code examples; both of which use SqlQuery to map a stored procedure's result set.
I guess that if these two EF experts do not show another way of mapping stored procedures, then perhaps EF currently does not offer any alternative to SqlQuery.
(P.S.: Admittedly the OP's main problem is not stored procedures per se; it's making EF perform an automatic type conversion. Even then, I am not aware of another way than the one shown here.)
If you can't alter the stored procedure itself, you could create a wrapper stored procedure which alters the data in some way, and have EF call that.
Not ideal of course, but may be an option.
(Note: If you're working with SQL Server 2012 or later, see my follow-up answer, which shows a much shorter, neater way of doing the same thing described here.)
Here's a solution that stays in EF land and does not require any database schema changes.
Since you can pass any valid SQL to the SqlQuery method, nothing stops you from passing it a multi-statement script that:
DECLAREs a temporary table;
EXECUTEs the stored procedure and INSERTs its result into the temporary table;
SELECTs the final result from that temporary table.
The last step is where you can apply any further post-processing, such as a type conversion.
const string sql = @"DECLARE @temp TABLE ([code] INT, [text] VARCHAR(MAX));
INSERT INTO @temp EXECUTE [old_sproc];
SELECT CONVERT(INT, [code]) AS [code], [text] FROM @temp;";
// ^^^^^^^^^^^^^ ^^^^^^^^^^^
// this conversion might not actually be necessary
// since @temp.code is already declared INT, i.e.
// SQL Server might already have coerced SMALLINT
// values to INT values during the INSERT.
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
In the Entity Framework data modeler page (Model Browser), either change the function import mapping to a specific int that works for the ValidationResult class, or create a new function import result class that has the appropriate int and use that as the resulting DTO class.
I leave this process a touch vague because I do not have access to the actual database; instead I describe the process of either creating a new function import mapping or modifying an existing one. Trial and error will help you overcome the incorrect mapping.
Another trick to have EF generate the right information is temporarily drop the stored proc and have a new one return a stub select such as:
select 1 AS Code, 'Text' as text
RETURN @@ROWCOUNT
The reasoning for this is that sometimes EF can't determine what the stored procedure ultimately returns. If that is the case, temporarily creating the stub return and generating EF from it provides a clear picture for the mappings. Then returning the sproc to its original code after an update sometimes does the trick.
Ignore the int/short. The text is always the same for the same number, right? Get just the text and use a switch statement. Yes, it's a hack, but unless you can fix the root of the problem (and you say you are not allowed to), you should go with the hack that takes the least amount of time to create and will not cause problems down the road for the next person maintaining the code. If this stored proc is legacy, it will not return any new kinds of results in the future, and this solution, together with a nice comment, solves the problem and lets you go back to creating value somewhere else.
Cast the static message code to an int:
Select cast(0 as int) as code, 'success' as text
This ensures the literal returned is consistent with the int returned by the other query. Leave the ValidationResult.code declared as an int.
Note: I know I missed the part in the question about the SP can't be modified, but given that this makes the answer quite complicated, I'm leaving this here for others who may have the same problem, but are able to solve it much more easily by modifying the SP. This does work if you have a return type inconsistency in the SP and modifying is an option.
There is a workaround you could use if you don't find a better solution. Let it be an int; that will work for all error codes. If you get a cast exception, you know the result was a success, so you can add a try/catch for that specific exception. It's not pretty and, depending on how often this runs, it might impact performance.
Another idea, have you tried changing the type of code to object?
I noticed some interesting behaviour recently.
When an MS SQL stored procedure is run using SqlCommand.ExecuteScalar(), my application seems to be completely unaware of any SQL errors or PRINT output that appears after the SELECT is done.
The most probable explanation is that flow control is given back to C# immediately after the first SELECT result appears, without waiting for the stored procedure to finish (though the stored procedure continues executing silently underneath).
The obvious advantage is a performance gain (no need to wait, since the result is already known); unfortunately, the C# app is unaware of any SQL exceptions that happen after that point.
Could anyone confirm my explanation? Could this behaviour be altered?
The ExecuteNonQuery method will call "ExecuteReader" and immediately call "Close" on the returned reader object. ExecuteScalar will call "Read" once, pick out the first value (index 0), and then call "Close".
Since the DataReader is essentially nothing more than a specialized network stream, any information returned after its current location (at the time Close is called) will simply never reach the actual client components, even though the server might have sent it. The implementation is like this to avoid returning a huge amount of data when none is required.
In your case, I see two solutions to this problem.
Make sure that you use ExecuteReader instead, and read all the way through the results:
using(var reader = command.ExecuteReader())
{
do
{
while (reader.Read()) { /* whatever */ };
} while (reader.NextResult());
}
If you can control the server side, it will help to move the actual "send-to-client" select to the end of the procedure or batch in question. Like this:
create proc Demo
as
declare @result int
select top 1 @result = Id from MyTable where Name = 'testing'
print 'selected result...'
select @result Id -- will send a column called "Id" with the previous value
go