Basically, Commands have a Parameters collection, and the collection has methods like Add, AddWithValue, etc. In all the tutorials I've seen, I noticed that they usually use Add instead of AddWithValue.
.Parameters.Add("#ID", SqlDbType.Int)
vs
.Parameters.AddWithValue("#ID", 1)
Is there a reason NOT to use AddWithValue? I'd prefer to use it over
Parameters.Add("@ID", SqlDbType.Int, 4).Value = 1
since it saves coding time. So which is better to use, and which is safer? Does it improve performance?
With the Add() method you can restrict user input by specifying the type and length of the data, which is especially useful for varchar columns.
.Parameters.Add("#name",SqlDbType.VarChar,30).Value=varName;
With the AddWithValue() method (which infers the type from the value), a string is implicitly sent to the database as nvarchar.
I believe there are also some cons to using AddWithValue that affect the SQL Server plan cache; see the Parameter Length section here.
Using AddWithValue() adds the parameter with the length of the current value. If the length of your parameter value varies often, this means a new plan is generated every time. This makes your queries run slower (additional time for parsing and compiling) and also causes higher server load.
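As an illustration, here is a minimal sketch (the Customers table and its Name varchar(30) column are hypothetical) of how an explicit type and length keep the parameter declaration, and therefore the cached plan, stable across calls:

using System.Data;
using System.Data.SqlClient;

static int? FindCustomer(SqlConnection conn, string name)
{
    using var cmd = new SqlCommand("SELECT Id FROM Customers WHERE Name = @name", conn);
    // Always declared as varchar(30), so every execution reuses the same cached plan.
    cmd.Parameters.Add("@name", SqlDbType.VarChar, 30).Value = name;
    // AddWithValue would declare nvarchar(<length of name>) instead,
    // creating a new plan-cache entry per distinct value length:
    // cmd.Parameters.AddWithValue("@name", name);
    return (int?)cmd.ExecuteScalar();
}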
I'd use AddWithValue for normal cases, and use Add(name, dbType, ...) only when your column type is different from how .NET converts the CLR type.
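For example, suppose a hypothetical ProductCode column of type char(8); AddWithValue would infer nvarchar from the C# string, so the explicit overload keeps the types aligned:

cmd.Parameters.Add("@code", SqlDbType.Char, 8).Value = productCode;
// AddWithValue("@code", productCode) would send it as nvarchar and force a conversion.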
Related
I've been suffering from this for a while now. In SQL Server Profiler, I confirmed that using SqlDataAdapter.Fill produces an sp_executesql command. Sometimes the query takes too long to execute, and the cause used to be one of the following two cases:
All of my text database fields are varchar. We know that sp_executesql only accepts Unicode types (nvarchar, nchar). Specifying SqlDbType.VarChar for the parameter type in C# sometimes causes a big performance problem (for certain queries), because an implicit conversion takes place to turn the varchar parameter into the nvarchar needed for sp_executesql. Apparently, that conversion was taking place on every row scan.
So I specified SqlDbType.NVarChar instead, and the problem was solved temporarily. One day I ended up having the same performance problem (for a different query) from the same cause, but the other way around: an implicit conversion was taking place to convert all the varchar values in the specified database column to nvarchar. Reverting the type to SqlDbType.VarChar fixed the problem in that case.
I know the problem is not due to parameter sniffing, because I tried using sp_updatestats to refresh the cached plans and OPTION (RECOMPILE) to make sure a new plan gets generated, but that did not solve the problem.
I cannot wrap my head around this and I wish to avoid using sp_executesql altogether if possible. Any idea how to avoid the undesired implicit conversions?
EDIT: It's worth noting that the generated queries are dynamic (coming from a query-builder feature). Also, the problem only showed up when a "contains" operand (in SQL: LIKE '%...%') was used.
EDIT: This good article emphasizes the same problem. However, the author sadly declares that there is no solution... yet, hopefully.
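For what it's worth, a sketch of the usual mitigation for the second case (the Comments table and its varchar Body column are hypothetical): declare the parameter as varchar with a fixed maximum length, so it matches the column's type, no per-row conversion is needed, and the declaration sp_executesql emits stays identical across calls:

using var cmd = new SqlCommand("SELECT Id FROM Comments WHERE Body LIKE @pattern", conn);
// varchar matches the varchar column; the fixed length 8000 keeps the plan stable.
cmd.Parameters.Add("@pattern", SqlDbType.VarChar, 8000).Value = "%" + term + "%";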
cmd.Parameters.Add("#blah", SqlDbType.VarChar).Value = blah;
In this code, is the length of the parameter compulsory or not?
Also, if we don't specify the length in this method, can any performance or SQL injection related issues occur?
Please advise.
Thanks
In this code, is the length of the parameter compulsory or not?
I don't think it is compulsory, but it is good practice to specify it. SqlParameterCollection.Add(String, SqlDbType) takes SqlDbType as its second parameter, and the length of the parameter is not required. Just a tip: if your column is varchar(max), you should still use VarChar as the db type.
And if we don't specify the length in this method, can any performance or SQL injection related issues occur?
A performance issue looks unlikely, because the length is not a must. And since you use parameterized SQL in your queries, you should not worry about SQL injection attacks.
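To illustrate why the parameter protects you (the Users table here is hypothetical): the value travels as typed data, not as part of the SQL text, so its content cannot alter the statement:

// Unsafe: concatenated input becomes part of the SQL text and can rewrite the query.
// var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = '" + input + "'", conn);

// Safe: the value is sent as a parameter, whatever its length or content.
var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn);
cmd.Parameters.Add("@name", SqlDbType.VarChar).Value = input;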
Since it is passed as a parameter, I don't think any issue will arise with regard to injection. Regarding length: if you use varchar(max), that will lead to a performance issue, as it is internally kept as text.
Finally, it is not required to specify the length when passing a parameter.
If you omit the length argument, the parameter is created with the size of your data:
cmd.Parameters.Add("#blah", SqlDbType.VarChar).Value = "this is 22 chars long.";
This creates a parameter of type VarChar(22). It is possible that SQL Server uses that parameter information before doing any work, to check whether the data fits the column (I would).
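Note the flip side (a small hypothetical sketch): if you pass an explicit size that is smaller than the value, the string is truncated to that size before it is sent:

var p = cmd.Parameters.Add("@blah", SqlDbType.VarChar, 10);
p.Value = "this is 22 chars long.";
// Only the first 10 characters ("this is 22") reach the server.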
Here, a field in my data records can exceed the 8000-character limit of nvarchar, so I'm looking for a considerably larger data type, e.g. around 9000 characters. Any ideas?
At first I was using NVarChar(8000); after finding that some records could pass this boundary, I switched to NText to see what would happen next. With Entity Framework it seemed to do the job as expected, without defining any insert statement or data adapter. During development the system changed to a data adapter, and I have to do the job with an insert command. Now the parameter definition looks like this:
cmdIns.Parameters.Add("#story", SqlDbType.NText, 16, "Story")
It seems that the limit of 16 is increased automatically when EF is used, but not with the data adapter (which just inserts 16 characters of the data).
I really can't remember whether the EF test passed even for items larger than 8000 characters.
If so, I'm curious about the reason.
The situation is deciding on the proper data type, and its equivalent working parameter, to use at the insertion point of this large data field.
Note: SQL Server CE is used here.
Edit:
Sorry, I had to go at that time.
The data type that should be used is NTEXT, with no alternative here,
but defining the **insert statement and parameter** is a bit of a hassle;
unfortunately, none of the suggested methods could do the desired job like the snippet I gave.
Without defining the length, it gives errors at run time!
And with AddWithValue I couldn't use the DataAdapter and do the insertion in bulk.
Maybe I should put this in another question, but it is part of this question, and a working answer here could be the complete one.
Any ideas?
If I understood your question correctly, you should be fine doing something like this, omitting the size as it isn't necessary:
cmdIns.Parameters.Add(new SqlParameter("@story", SqlDbType.NText)
{
    Value = yourVariable
});
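If the parameter also has to feed the DataAdapter's bulk insert, it needs a source-column mapping as well. A sketch under that assumption (SqlClient types shown, and the "Stories"/"Story" names are hypothetical; the SqlServerCe types take the same shape):

var insert = new SqlCommand("INSERT INTO Stories (Story) VALUES (@story)", conn);
var p = insert.Parameters.Add("@story", SqlDbType.NText);
p.SourceColumn = "Story"; // the adapter reads each DataRow's value from this column
// No explicit size, so the actual value length is used on each execute;
// if the provider demands a size at run time, a suitably large one is the fallback.

var adapter = new SqlDataAdapter { InsertCommand = insert };
adapter.Update(dataTable); // inserts every added row, however long the story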
Use AddWithValue whenever you want to add a parameter by specifying only its name and value, like this: command.Parameters.AddWithValue("@story", story);
I'm using Dapper against a database where strings are stored primarily in VarChar columns. By default Dapper uses NVarChar parameters when generating queries, and while I can wrap each and every string parameter I use with DbString, it'd be great to use AnsiStrings by default and use DbString for the NVarChar case.
I tried changing the type map in the Dapper source from DbType.String to DbType.AnsiString; however, that seems to cause an error in the IL generation for the parameters delegate (it throws an InvalidProgramException).
Is there an easier way to do this?
Update
Just changing the typeMap was not sufficient; I needed to alter some if (dbType == DbType.String) checks too. Now it works!
You can accomplish this without modifying the source code.
Dapper.SqlMapper.AddTypeMap(typeof(string), System.Data.DbType.AnsiString);
Setting this once will adjust all of your strings to varchar.
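For example, a sketch of how this might look in practice (the Users table and its columns are hypothetical):

using System.Data;
using System.Linq;
using Dapper;

// Once at startup: CLR strings now go out as varchar (AnsiString) parameters.
SqlMapper.AddTypeMap(typeof(string), DbType.AnsiString);

// Matches the varchar column directly; no implicit conversion, no index scan.
var user = conn.Query<User>("SELECT * FROM Users WHERE Name = @name", new { name }).FirstOrDefault();

// For the occasional nvarchar column, opt back in per parameter with DbString:
var other = conn.Query<User>(
    "SELECT * FROM Users WHERE DisplayName = @dn",
    new { dn = new DbString { Value = displayName, IsAnsi = false } }).FirstOrDefault();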
To use AnsiStrings by default, I had to (referring to the Dapper 1.3 source from NuGet):
Alter the type map to use DbType.AnsiString on L164 instead of DbType.String.
In the method CreateParamInfoGenerator, change the checks on L960, L968, and L973 to include DbType.AnsiString as well as DbType.String.
The problem with the invalid IL seemed to be that the later branch of the code on L1000 checks for typeof(string), whereas the preceding branches use DbType.
With that done, everything is peachy again - no more index scans!
Our data access layer uses command objects to communicate with SQL Server.
In most cases I've hard-coded the field size (that matches the column size in sql server) into the command param builder.
Such as:
SqlParameter param = new SqlParameter("@name", SqlDbType.NVarChar, 4000);
What's the advantage of specifying a value here (4000 in this example) versus just leaving it as 0? It's a pain to have to recompile when a column size changes.
It actually is quite important. Identical requests issued with different parameter lengths end up as different queries in the procedure cache. In time, this leads to cache pollution and over-active compilation events. This issue is one of the major design flaws in how both Linq2SQL and the EF providers were implemented; e.g., see How Data Access Code Affects Database Performance. Also see Query performance and plan cache issues when parameter length not specified correctly for a related problem.
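To see the effect for yourself, a quick sketch (hypothetical Users table): run the same statement with values of different lengths and no explicit size, then inspect the plan cache:

// Same statement, but each inferred nvarchar(n) produces its own cache entry.
foreach (var name in new[] { "al", "alice", "alexandra" })
{
    using var cmd = new SqlCommand("SELECT Id FROM Users WHERE Name = @name", conn);
    cmd.Parameters.AddWithValue("@name", name); // declares nvarchar(2), (5), (9)
    cmd.ExecuteScalar();
}
// sys.dm_exec_cached_plans joined to sys.dm_exec_sql_text will now show three
// entries whose text differs only in the declared parameter length.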
There is no performance or execution-time advantage - size is inferred if it is not explicitly passed:
The Size is inferred from the value of the dbType parameter if it is not explicitly set in the size parameter.
I guess you could say that by explicitly declaring the size of the parameter to match the size of the field in SQL Server you are better informing readers of your code as to the limits of the data model. How useful that may be is in the eye of the beholder.
If you specify a size that matches the width of the SQL column, then presumably you have another layer that detects and/or prevents data loss. (What happens when a user enters or an application generates more characters than can be stored in the database?)
Perhaps the problem is related to all those Microsoft Buffer Overflows?