I have a stored procedure that inserts data into a SQL Server database. I'm using a WCF service which takes data from the client and inserts it into the DB. I have a unique key constraint on the Name column of my table.
If there is a unique key constraint violation, I can either:
Handle it in the stored procedure, e.g. using an IF EXISTS check in SQL Server: if the value already exists, return -1; otherwise insert the row.
Handle it in my C# code (the WCF service): catch the SqlException and return its error code to the client.
With the first solution I think there is a performance issue, because the unique key constraint is effectively checked twice: first manually within the stored procedure, and then again by the constraint itself during the insert. So one value is checked twice.
With the second solution the exception is handled by the WCF service in C# code, and I've heard that relying on exceptions in WCF is not good practice.
What's the best solution?
"Better" is a little bit subjective here, and it kind of depends on which "better" you like.
If you mean from a performance perspective, I'd be willing to bet that Option 1 is actually more performant, since the process of checking an index for an existing value in SQL Server (even twice) will probably be dwarfed by the time it takes to raise and propagate an exception back into your code. (Not to mention that you don't have to check it twice at all, you can try/catch in T-SQL itself and return -1 on a Unique key violation)
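That server-side handling might look something like the sketch below (illustrative only; the procedure, table, and column names are invented):

```sql
-- Hypothetical procedure: dbo.Contacts and its unique Name column are placeholders.
CREATE PROCEDURE dbo.InsertContact
    @Name NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO dbo.Contacts (Name) VALUES (@Name);
        RETURN 0;
    END TRY
    BEGIN CATCH
        -- 2627 = unique constraint violation; 2601 = duplicate key in a unique index
        IF ERROR_NUMBER() IN (2627, 2601)
            RETURN -1;
        THROW; -- re-raise anything unexpected (THROW needs SQL Server 2012+)
    END CATCH
END
```

This way uniqueness is only ever checked once, by the constraint itself, and the exception never has to propagate out of the database engine.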
However, if you mean from a design and maintenance point of view, then Option 2 is in my opinion far more desirable, since it is very clear what is going on.
In other words, as a developer I would rather read (pseudo-code)
// assuming you have a connection open, a command prepared, etc.
try
{
    var result = command.ExecuteNonQuery();
}
catch (SqlException ex)
{
    if (ex.Number == 2627)
    {
        // Unique key constraint violation.
        // Error 2627 is documented and well known to be a unique key constraint violation,
        // so it's immediately obvious what's going on here.
    }
}
than
var result = command.ExecuteNonQuery();
if (result == -1)
{
    // this means a unique key violation
    // what if that comment got removed? I'd have no clue what
    // -1 meant...
}
Even though at first glance the second is shorter and more succinct.
Note: as MetroSmurf pointed out, catching the exception here is not ideal, it should be handled by the caller, so this is just for illustrative purposes.
This is because the -1 here is pretty arbitrary on the part of the stored procedure; so unless you can guarantee that you can document it and that the documentation will never go out of date, etc., you could be placing the burden on the next developer to go look up the stored procedure and figure out what exactly -1 means in this context.
Plus since it's possible for someone to change the SP without touching your C#, what are you going to do if the SP suddenly starts returning "42" for Unique Key Violations? Of course you may be in full control of the SP but will that always be the case?
Related
In our .net application, we have a tool that allows you to type SQL in a browser and submit it, for testing. In this context, though, I need to be able to prevent testers from writing to specific tables. So, based on the parameter passed from the controller (InSupportTool = true, or something), I need to know if SQL Server is allowed to make updates or inserts to, say, an accounts table.
Things I've tried so far:
I have tried looking into triggers, but there is no before trigger available, and I've heard people don't recommend using them if you can help it.
Parsing the passed SQL string to look for references to inserting or updating on that table. This is even more fragile and has countless ways, I'm sure, of getting around it if someone wanted to.
Check constraint, which is the closest I feel I've gotten but I can't quite put it together.
For check constraints, I have this:
ALTER TABLE Accounts WITH NOCHECK
ADD CONSTRAINT chk_read_only_accounts CHECK (/* somehow this needs to be dynamic based on parameters passed from the C# controller */);
The above works to prevent updates to that table, but only if I put a check like 1 = 0. I've seen a post where people said you could use a function as the check, and pass parameters that way, but I'm at the limit of my familiarity with SQL/.net.
Given what I'm looking to do, does anyone have experience with something like this? Thanks!
Since the application is running under a different account than the end user, you could specify your application name in the connection string (e.g. Application Name=SupportTool) and check that in an after trigger, rolling back the transaction as needed:
CREATE TABLE dbo.example(
    col1 int
);
GO

CREATE TRIGGER tr_example
ON dbo.example
AFTER INSERT, UPDATE, DELETE
AS
IF APP_NAME() = N'SupportTool'
BEGIN
    ROLLBACK;
    THROW 50000, 'This update is not allowed using the support tool', 1;
END;
GO

INSERT INTO dbo.example VALUES(1);
GO
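On the C# side, the application name is just a connection-string keyword; APP_NAME() in the trigger returns whatever you set there. A sketch (the server and database names are placeholders):

```csharp
// Hypothetical connection string; only the Application Name keyword matters here.
var connectionString =
    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;" +
    "Application Name=SupportTool";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Any INSERT/UPDATE/DELETE issued over this connection will now be
    // rolled back by an AFTER trigger that checks APP_NAME().
}
```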
I'm working with a legacy codebase and need to call a stored procedure that I'm not allowed to modify. This stored procedure returns a row or multiple rows of validation data.
Example of result set (two columns, code and text):
0 "success"
OR
3 "short error"
4 "detailed error"
In the procedure itself, the message is selected simply as:
Select 0 as code, 'success' as text
Problem:
I'm using Entity Framework to map the result of this stored procedure to a custom class:
public class ValidationResult
{
public int code { get; set; }
public string text { get; set; }
}
The call itself:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
I've written some integration tests and have noticed that when the procedure returns the success message, the 0 comes across as a short; when it returns a non-zero message, it comes across as an int. I assumed that with code declared as an int, the short would fit. Unfortunately, I get the following exception for my success test:
The specified cast from a materialized 'System.Int16' type to the 'System.Int32' type is not valid.
When I switch code to a short to make my success test pass, my failure test fails with the following exception:
The specified cast from a materialized 'System.Int32' type to the 'System.Int16' type is not valid.
ADO.NET is an answer
One solution is to fall back to ADO.NET's SqlDataReader object, so I have that as a fallback solution. I'm wondering if there is something I can do on the EF side to get this working, though.
(This is a follow-up to my previous answer. It is only relevant for sql-server-2012 and later.)
Short answer:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
Approach taken in this answer:
This answer will follow in your footsteps and use SqlQuery to execute your stored procedure. (Why not an altogether different approach? Because there might not be any alternative. I'll go into this further below.)
Let's start with an observation about your current code:
var result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
The query text "old_sproc" is really abbreviated T-SQL for "EXECUTE old_sproc". I am mentioning this because it's easy to think that SqlQuery somehow treats the name of a stored procedure specially; but no, this is actually a regular T-SQL statement.
In this answer, we will modify your current SQL only a tiny bit.
Implicit type conversions with the WITH RESULT SETS clause:
So let's stay with what you're already doing: EXECUTE the stored procedure via SqlQuery. Starting with SQL Server 2012, the EXECUTE statement supports an optional clause called WITH RESULT SETS that allows you to specify what result sets you expect to get back. SQL Server will attempt to perform implicit type conversions if the actual result sets do not match that specification.
In your case, you might do this:
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code INT, text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
The added clause states that you expect to get back one result set having a code INT and a text VARCHAR(MAX) column. The important bit is code INT: If the stored procedure happens to produce SMALLINT values for code, SQL Server will perform the conversion to INT for you.
Implicit conversions could take you even further: For example, you could specify code as VARCHAR(…) or even NUMERIC(…) (and change your C# properties to string or decimal, respectively).
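For instance, the string-typed variant might look like this (a sketch; the alternative DTO class is invented for illustration):

```csharp
// Hypothetical DTO with code mapped as a string instead of an int.
public class ValidationResultText
{
    public string code { get; set; }
    public string text { get; set; }
}

// SQL Server converts the numeric code to VARCHAR before EF ever sees it.
var sql = "EXECUTE old_sproc WITH RESULT SETS ((code VARCHAR(11), text VARCHAR(MAX)))";
var result = context.Database.SqlQuery<ValidationResultText>(sql).ToList();
```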
If you're using Entity Framework's SqlQuery method, it's unlikely to get any neater than that.
For quick reference, here are some quotes from the linked-to MSDN reference page:
"The actual result set being returned during execution can differ from the result defined using the WITH RESULT SETS clause in one of the following ways: number of result sets, number of columns, column name, nullability, and data type."
"If the data types differ, an implicit conversion to the defined data type is performed."
Do I have to write a SQL query? Isn't there another (more ORM) way?
None that I am aware of.
Entity Framework has been evolving in a "Code First" direction in the recent past (it's at version 6 at this time of writing), and that trend is likely to continue.
The book "Programming Entity Framework Code First" by Julie Lerman & Rowan Miller (published in 2012 by O'Reilly) has a short chapter "Working with Stored Procedures", which contains two code examples; both of which use SqlQuery to map a stored procedure's result set.
I guess that if these two EF experts do not show another way of mapping stored procedures, then perhaps EF currently does not offer any alternative to SqlQuery.
(P.S.: Admittedly the OP's main problem is not stored procedures per se; it's making EF perform an automatic type conversion. Even then, I am not aware of another way than the one shown here.)
If you can't alter the stored procedure itself, you could create a wrapper stored procedure which alters the data in some way, and have EF call that.
Not ideal of course, but may be an option.
(Note: If you're working with SQL Server 2012 or later, see my follow-up answer, which shows a much shorter, neater way of doing the same thing described here.)
Here's a solution that stays in EF land and does not require any database schema changes.
Since you can pass any valid SQL to the SqlQuery method, nothing stops you from passing it a multi-statement script that:
DECLAREs a temporary table;
EXECUTEs the stored procedure and INSERTs its result into the temporary table;
SELECTs the final result from that temporary table.
The last step is where you can apply any further post-processing, such as a type conversion.
const string sql = @"DECLARE @temp TABLE ([code] INT, [text] VARCHAR(MAX));
                     INSERT INTO @temp EXECUTE [old_sproc];
                     SELECT CONVERT(INT, [code]) AS [code], [text] FROM @temp;";
// The CONVERT(INT, [code]) might not actually be necessary,
// since @temp.code is already declared INT, i.e.
// SQL Server might already have coerced SMALLINT
// values to INT values during the INSERT.

var result = context.Database.SqlQuery<ValidationResult>(sql).ToList();
In the entity framework data modeler page (Model Browser), either change the functional mapping to a specific int which works for the ValidationResult class or create a new functional mapping result class which has the appropriate int and use that as the resulting DTO class.
I leave this process a touch vague because I do not have access to the actual database; instead I provide the process to either create a new functional mapping or modify an existing one. Trial and error will help you overcome the incorrect functional mapping.
Another trick to have EF generate the right information is temporarily drop the stored proc and have a new one return a stub select such as:
select 1 AS Code, 'Text' as text
RETURN @@ROWCOUNT
The reasoning for this is that sometimes EF can't determine what the stored procedure ultimately returns. If that is the case, temporarily creating the stub return and generating EF from it provides a clear picture for the mappings. Then returning the sproc to its original code after an update sometimes does the trick.
Ignore the int/short issue. The text is always the same for the same number, right? Get just the text and use a switch statement on it. Yes, it's a hack, but unless you can fix the root of the problem (and you say you are not allowed to), you should go with the hack that takes the least amount of time to create and will not cause problems down the road for the next person maintaining the code. If this stored proc is legacy, it will not return any new kinds of results in the future, and this solution, together with a nice comment, solves the problem and lets you get back to creating value somewhere else.
Cast the static message code to an int:
Select cast(0 as int) as code, 'success' as text
This ensures the literal returned is consistent with the int returned by the other query. Leave the ValidationResult.code declared as an int.
Note: I know I missed the part in the question about the SP not being modifiable, but given that that restriction makes the answer quite complicated, I'm leaving this here for others who may have the same problem but are able to solve it much more easily by modifying the SP. This does work if you have a return type inconsistency in the SP and modifying it is an option.
There is a workaround you could use if you don't find a better solution. Declare code as an int; that will work for all the error codes. If you get the cast exception, you know the result was a success, so you can add a try/catch for that specific exception. It's not pretty, and depending on how often this runs it might impact performance.
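A sketch of that workaround (illustrative only; the exact exception type EF throws for the failed materialization cast is an assumption here, so verify it before relying on this):

```csharp
List<ValidationResult> result;
try
{
    result = context.Database.SqlQuery<ValidationResult>(@"old_sproc").ToList();
}
catch (InvalidOperationException)
{
    // The Int16 -> Int32 materialization failure only happens on the
    // success row, so reaching this catch implies the call succeeded.
    result = new List<ValidationResult>
    {
        new ValidationResult { code = 0, text = "success" }
    };
}
```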
Another idea: have you tried changing the type of code to object?
I am creating an Oracle user (which then shows up in the dba_users view) with the C# code below, using an OleDbCommand and ExecuteNonQuery. The user is successfully created, but ExecuteNonQuery always returns 0.
So I am doing validation in my code as (IsUserCreated == 0). Am I correct with my coding here?
int IsUserCreated = oleCreateUserCommands.ExecuteNonQuery();
if (IsUserCreated == 0)
{
    //TBD code
    Response.Write("User Created Successfully");
}
else
{
    //TBD Code
    Response.Write("User creation failed with some error");
}
No, basically. That 0 doesn't mean much - in fact, the main thing it tells me is that you probably have SET NOCOUNT ON somewhere, or this is a sproc without a RETURN - otherwise I would expect 1 to be returned to indicate 1 row impacted. Either way: it does not indicate the lack of an error. The lack of an exception indicates the lack of an error. Returning 1 is useful as a "yes, exactly 1 row was updated" check, if it is enabled.
As Marc said, you can't rely on the return value. The return value is actually not consistent or portable across different databases and statement types: you may see -1 or 0 for success for non-DML, and 0, 1 or greater for DML, in my experience. Per his comment about SET NOCOUNT ON, Oracle doesn't support that; it's a SQL Server feature.
Incidentally, for a CREATE USER statement, I always see -1 (I develop several desktop database tools and I've done a lot of tracing) though I don't use OleDb much. I am surprised you see 0, you should double check.
Regardless, you must use exceptions to handle error cases for ExecuteNonQuery and ExecuteScalar and its siblings. It is not possible to write robust code otherwise. The lack of exception implies success. As far as the return code, it is really useless for validation, except in DML. How do you write a generic algorithm that can accept -1, 0 or 1, or N as valid? I simply check it when I know I issue a possible DML, and need to return the row count to the user.
Your code should be in a using block (all IDisposable types in ADO should typically be disposed in a using statement)
You should have a try/catch or at least a try/finally
If you don't like repeating yourself, then wrap ExecuteNonQuery in your own function that will handle exception and return a bool true/false. In certain cases, I like to write extension methods for the connection or reader classes.
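A minimal sketch of such a wrapper (the helper name is made up; adapt the provider types and logging to your code):

```csharp
// Hypothetical helper: returns true when the command ran without error.
// All IDisposable ADO.NET objects live inside using blocks.
static bool TryExecuteNonQuery(string connectionString, string commandText)
{
    try
    {
        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand(commandText, connection))
        {
            connection.Open();
            command.ExecuteNonQuery(); // return value deliberately ignored
            return true;
        }
    }
    catch (OleDbException)
    {
        // Log as appropriate for your application before swallowing.
        return false;
    }
}
```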
This question already has answers here:
combination of two field must be unique in Entity Framework code first approach. how it would?
(2 answers)
Closed 8 years ago.
I want to check a unique key constraint but I do not know which method to use.
Which method is better?
I use C#, EF, and SQL Server.
First Method:
bool contactExists = _db.Contacts.Any(contact => contact.ContactName.Equals(ContactName));
if (contactExists)
{
    return -1;
}
else
{
    _db.Contacts.Add(myContact);
    _db.SaveChanges();
    return myContact.ContactID;
}
Second Method:
Handle the unique constraint violation exception.
The third method:
check with T-SQL
IF EXISTS (
    SELECT Name
    FROM dbo.ContentPage
    WHERE Name = RTRIM(LTRIM(N'page 1'))
)
BEGIN
    SELECT 'True'
END
ELSE
BEGIN
    SELECT 'False'
END
I rarely check unique key constraints in code. In fact, it is really a waste of time if multiple clients can be updating data at the same time. Say you check to make sure you could add the employee 'Saeid Mirzaei' and found the key was not in use, so you add it. You can still get a duplicate key problem if someone else inserts that name in the meantime, and you end up getting the exception anyway. You can handle the exception in T-SQL or C#, but you pretty much need to use exception handling for robust code.
It depends on your requirements, but usually handling the exception is the best choice. This will be the most efficient and reliable. I assume you mean by this the exception that will be thrown because of the unique constraint.
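A sketch of that exception-based approach in EF 6 (error numbers 2627/2601 are the documented unique violation codes; the entity names come from the question):

```csharp
try
{
    _db.Contacts.Add(myContact);
    _db.SaveChanges();
    return myContact.ContactID;
}
catch (DbUpdateException ex)
{
    // Drill down to the underlying SqlException, if there is one.
    var sqlEx = ex.GetBaseException() as SqlException;
    if (sqlEx != null && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
    {
        return -1; // duplicate name
    }
    throw; // anything else is unexpected
}
```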
I prefer method 1, check in your code AND add UNIQUE constraint in your column.
I suggest you create a unique index on the Name column in SQL Server, and add an INSTEAD OF INSERT trigger that saves RTRIM(LTRIM(Inserted.Name)) instead of the raw Name value.
You can also add a client-side check to reduce round trips to your database.
I need to be able to change the primary keys in a table. The problem is, some of the keys will be changing to existing key values, e.g. record1.ID 3=>4 and record2.ID 4=>5. I need to keep these as primary keys, as they are set as foreign keys (which cascade on update). Is there a reasonable way to accomplish this, or am I attempting SQL heresy?
As for the why, I have data from one set of tables linked by this primary key that are getting inserted/updated into another set of similarly structured tables. The insertion is in parts, as it is part of a deduping process, and if I could simply update all of the tables that are to be inserted with the new primary key, life would be easier.
One solution is to start the indexing on the destination table higher than the incoming table's row count will ever reach (the incoming table gets re-indexed every time), but I'd still like to know if the above is possible otherwise.
TIA
You are attempting SQL heresy. I'm actually pretty open-minded and know that a lot of times one must do things that seem crazy. It annoys me when people arrogantly answer with "you should do that differently" when they have no idea what the situation is. However, I must tell you that you should do this differently. Heh heh.
No, there is no way to do this elegantly with SQL or a DataAdapter. You could do it through ADO.NET with a series of T-SQL commands. You have to, every time, turn on identity-overwrite mode (SET IDENTITY_INSERT theTable ON), run your query where all the values on that table are incremented up one, and then turn identity-overwrite mode off. But then you would need to increment all the other tables that use this as a foreign key. But wait, it gets worse:
You would need to do all this in a transaction, because you cannot have anything else happening to these tables during this time, and because if there was a failure you would most definitely need to roll back. This could be a good-sized chunk of processing; your tables would be locked for a good bit.
If you have any foreign key constraints between these tables, you would need to turn them off before you do this, and re-implement them afterwards.
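A rough sketch of those steps in T-SQL (the table, column, and constraint names are all invented; this is illustrative, not a drop-in script):

```sql
-- Hypothetical schema: dbo.Parent(ID PK) referenced by dbo.Child(ParentID FK).
BEGIN TRANSACTION;

-- 1. Disable the foreign key while the keys are in flux.
ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Parent;

-- 2. Shift the key values. (Identity columns cannot be UPDATEd directly;
--    for those you would re-insert rows under SET IDENTITY_INSERT ON instead.)
UPDATE dbo.Parent SET ID = ID + 1 WHERE ID >= 3;
UPDATE dbo.Child SET ParentID = ParentID + 1 WHERE ParentID >= 3;

-- 3. Re-enable the constraint and re-validate existing rows.
ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_Parent;

COMMIT TRANSACTION;
```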
If you find yourself starting to think about update primary key values, alarm bells should start ringing.
It may seem easier, but I'd class it as more of a hack than a solution. Personally, I'd be having a rethink and try to address the real problem - may seem harder now, but it will be much better to maintain and reduce potential horrible issues down the line.