I have a stored procedure which looks like the following:
alter procedure [dbo].[zsp_deleteEndedItems]
(
@ItemIDList nvarchar(max)
)
as
delete from
SearchedUserItems
WHERE EXISTS (SELECT 1 FROM dbo.SplitStringProduction(@ItemIDList,',') S1 WHERE ItemID=S1.val)
The parameter @ItemIDList is passed like the following:
124125125,125125125...etc etc
And the split-string function looks like the following:
ALTER FUNCTION [dbo].[SplitStringProduction]
(
@string nvarchar(max),
@delimiter nvarchar(5)
) RETURNS @t TABLE
(
val nvarchar(500)
)
AS
BEGIN
declare @xml xml
set @xml = N'<root><r>' + replace(@string,@delimiter,'</r><r>') + '</r></root>'
insert into @t(val)
select
r.value('.','varchar(500)') as item
from @xml.nodes('//root/r') as records(r)
RETURN
END
This is supposed to delete all items from the table "SearchedUserItems" with the IDs:
124125125 and 125125125
But for some reason after I do a select to check it out:
select * from SearchedUserItems
where itemid in('124125125','125125125')
The records are still there...
What am I doing wrong here? Can someone help me out?
As mentioned in the comments, a different option would be to use a table-valued parameter. This makes a couple of assumptions (some commented); however, it should get you on the right path:
CREATE TYPE dbo.IDList AS TABLE (ItemID int NOT NULL); --Assumed int datatype;
GO
ALTER PROC dbo.zsp_deleteEndedItems @ItemIDList dbo.IDList READONLY AS
DELETE SUI
FROM dbo.SearchedUserItems SUI
JOIN @ItemIDList IDL ON SUI.ItemID = IDL.ItemID;
GO
--Example of usage
DECLARE @ItemList dbo.IDList;
INSERT INTO @ItemList
VALUES(123456),(123457),(123458);
EXEC dbo.zsp_deleteEndedItems @ItemList;
GO
In regards to the question of an inline table-valued function, one such example is below, which I quickly wrote up; it provides a tally table of the next 1,000 numbers:
CREATE FUNCTION dbo.NextThousand (@Start int)
RETURNS TABLE
AS RETURN
WITH N AS(
SELECT N
FROM (VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL)) N(N)
)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) -1 + @Start AS I
FROM N N1 --10
CROSS JOIN N N2 --100
CROSS JOIN N N3; --1,000
GO
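Usage is then as simple as, for example:
SELECT I FROM dbo.NextThousand(1);     -- returns 1 through 1,000
SELECT I FROM dbo.NextThousand(5000);  -- returns 5,000 through 5,999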
The important thing about an iTVF is that it has only one statement, and that is the RETURN statement. Declaring a table-type return variable, inserting data into it, and returning that variable turns it into a multi-statement TVF, which performs far more slowly.
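For example, the splitter from the first question could be rewritten as an inline TVF along these lines (a sketch only; the function name is illustrative, and on SQL Server 2016+ the built-in STRING_SPLIT would do the same job):
CREATE FUNCTION dbo.SplitStringInline
(
    @string nvarchar(max),
    @delimiter nvarchar(5)
)
RETURNS TABLE
AS RETURN
    -- single RETURN statement: no table variable, no INSERT
    SELECT r.value('.', 'nvarchar(500)') AS val
    FROM (SELECT CAST(N'<root><r>'
                      + REPLACE(@string, @delimiter, N'</r><r>')
                      + N'</r></root>' AS xml) AS x) AS src
    CROSS APPLY src.x.nodes('/root/r') AS records(r);
GO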
I have a similar problem as described in
EF can't infer return schema from Stored Procedure selecting from a #temp table
and I have created my stored procedure solution based on the solution described there, BUT I am still getting a similar EF error and I really don't know why, or how I can fix it.
A member of the type, 'rowNum', does not have a corresponding column in the data reader with the same name.
My specific error:
The data reader is incompatible with the specified 'TestModel.sp_SoInfoDocs_Result'. A member of the type, 'rowNum', does not have a corresponding column in the data reader with the same name.
My stored procedure:
ALTER PROCEDURE [dbo].[sp_SoInfoDocs]
@searchText nvarchar(200),
@PageNumber int,
@PageSize int
AS
BEGIN
IF 1 = 2
BEGIN
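-- This dummy SELECT never executes (1 = 2); it exists only so that tools reading the
-- procedure's metadata (here, EF's function import) can infer the result-set columns.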
SELECT
cast(null as int ) as rowNum
,cast(null as text) as serverName
,cast(null as text) as jobName
,cast(null as DATETIME) as oDate
,cast(null as int) as runCount
,cast(null as nvarchar(10)) as orderID
,cast(null as text) as applicationName
,cast(null as text) as memberName
,cast(null as text) as nodeID
,cast(null as nvarchar(10)) as endStatus
,cast(null as int) as returnCode
,cast(null as DATETIME) as startTime
,cast(null as DATETIME) as endTime
,cast(null as nvarchar(50)) as status
,cast(null as text) as owner
,cast(null as bit) as existsNote
WHERE
1 = 2
END
DECLARE @LowerLimit int;
SET @LowerLimit = (@PageNumber - 1) * @PageSize;
DECLARE @UpperLimit int;
SET @UpperLimit = @PageNumber * @PageSize;
PRINT CAST (@LowerLimit as varchar)
PRINT CAST (@UpperLimit as varchar)
SELECT ROW_NUMBER() over (order by Expr1) as rowNum, *
into #temp
from (
SELECT dbo.SOInfo.jobName, dbo.SOInfo.nodeID, dbo.SOInfo.nodeGroup, dbo.SOInfo.endStatus, dbo.SOInfo.returnCode, dbo.SOInfo.startTime, dbo.SOInfo.endTime,
dbo.SOInfo.oDate, dbo.SOInfo.orderID, dbo.SOInfo.status, dbo.SOInfo.runCount, dbo.SOInfo.owner, dbo.SOInfo.cyclic, dbo.SOInfo.soInfoID, dbo.SOInfo.docInfoID,
dbo.SOInfo.existsNote, dbo.SOInfo.noSysout, dbo.serverInfo.serverName, dbo.Groups.label AS applicationName, Groups_1.label AS memberName,
Groups_2.label AS groupName, Groups_3.label AS scheduleTableName, dbo.SOInfo.serverInfoID, dbo.SOInfo.applicationID, dbo.SOInfo.groupID,
dbo.SOInfo.memberID, dbo.SOInfo.scheduleTableID, dbo.docFile.docFileID, dbo.docInfo.docInfoID AS Expr1, dbo.docFile.docFileObject
FROM dbo.SOInfo INNER JOIN
dbo.serverInfo ON dbo.SOInfo.serverInfoID = dbo.serverInfo.serverInfoID INNER JOIN
dbo.docInfo ON dbo.SOInfo.docInfoID = dbo.docInfo.docInfoID INNER JOIN
dbo.docFile ON dbo.docInfo.docFileID = dbo.docFile.docFileID LEFT OUTER JOIN
dbo.Groups AS Groups_3 ON dbo.SOInfo.scheduleTableID = Groups_3.ID LEFT OUTER JOIN
dbo.Groups AS Groups_1 ON dbo.SOInfo.memberID = Groups_1.ID LEFT OUTER JOIN
dbo.Groups AS Groups_2 ON dbo.SOInfo.groupID = Groups_2.ID LEFT OUTER JOIN
dbo.Groups ON dbo.SOInfo.applicationID = dbo.Groups.ID
WHERE CONTAINS (docfileObject,@searchText)
) tbl
SELECT Count(1) FROM #temp
SELECT rowNum, serverName, jobName ,oDate,runCount,orderID,applicationName,memberName,nodeID, endStatus, returnCode,startTime,endTime,status,owner,existsNote
FROM #temp WHERE rowNum > @LowerLimit AND rowNum <= @UpperLimit
END
My overall goals are:
1. search through a clustered-indexed table (docInfo) and find all rows that contain a specific search string value
2. at the same time, capture metadata from other tables associated with each docInfo object
The results of actions (1) and (2) above are written to a #temp table, to which I then add a rowNum column to enable paging, i.e.
introduce paging for the number of metadata results that can be returned at any one time, based on the supplied PageNumber and PageSize variables.
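(For context, on SQL Server 2012 and later the same paging can be expressed without a rowNum column at all, using OFFSET/FETCH; a sketch against dbo.SOInfo alone, reusing the @PageNumber and @PageSize parameters and an illustrative column list:)
SELECT jobName, oDate, orderID, startTime, endTime
FROM dbo.SOInfo
ORDER BY orderID
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;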
What does work
I am able to successfully create the stored procedure.
Within SSMS I am able to successfully execute the stored procedure and it delivers the results I expect
Within EF I have been able to update and import the stored procedure by updating the model from the database
Within EF I am then able to see the Function Imports and can see the Mapping
Within EF I am then able to see the generated complex types
I use the following code to call the procedure:
using (TestEntities context = new TestEntities())
{
List<sp_SoInfoDocs_Result> lst = context.sp_SoInfoDocs(searchText, 1, 10).ToList();
}
I compile and run my solution and get the following error from EF
'System.Data.Entity.Core.EntityCommandExecutionException' occurred in EntityFramework.SqlServer.dll
Additional information: The data reader is incompatible with the specified 'SysviewModel.sp_SoInfoDocs_Result'. A member of the type, 'rowNum', does not have a corresponding column in the data reader with the same name.
I am very much a novice when it comes to both SSMS/SQL and EF; this has stretched me about as far as I can go, and I really don't know where to turn next in order to resolve this problem.
I've searched extensively through SO and can see others who have had similar problems, and I have tried the solutions suggested, but nothing seems to work for me.
I really would be very grateful to anyone who could help me understand:
what it is that I've done wrong?
is there a better approach to achieve what I need?
ideas as to how I can fix this.
Thanks in advance
The solution I have found is to use a SQL table variable instead of a temporary table, which then lets me expose the table columns via the final SELECT statement and consume them as metadata in EF using the "Add Function Import" function.
Here's an example of my successfully working sp.
USE [TestDB]
GO
/****** Object: StoredProcedure [dbo].[sp_SoInfoDocs_Archive] Script Date: 09/07/2015 10:35:43 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[sp_SoInfoDocs_Archive]
@searchText nvarchar(200),
@PageNumber int,
@PageSize int,
@out int = 0 output
AS
BEGIN
DECLARE @LowerLimit int;
SET @LowerLimit = (@PageNumber - 1) * @PageSize;
DECLARE @UpperLimit int;
SET @UpperLimit = @PageNumber * @PageSize;
-- Declare a table variable to hold the intermediate result
DECLARE @temp TABLE
(
rowNum INT,
jobName text,
nodeID nvarchar(50),
nodeGroup text,
endStatus nvarchar(10),
returnCode int,
startTime datetime,
endTime datetime,
oDate smalldatetime,
orderID nvarchar(10),
status nvarchar(50),
runCount int,
owner text,
cyclic text,
soInfoID int,
docInfoID int,
existsNote bit,
noSysout bit,
serverName varchar(256),
applicationName nvarchar(255),
memberName nvarchar(255),
groupName nvarchar(255),
scheduleTableName nvarchar(255),
serverInfoID int,
applicationID int,
groupID int,
memberID int,
scheduleTableID int,
docFileID int,
Expr1 int,
docFileObject varbinary(MAX)
)
INSERT INTO @temp
SELECT ROW_NUMBER() over (order by Expr1) as rowNum, *
from (
SELECT dbo.SOInfoArchive.jobName,
dbo.SOInfoArchive.nodeID,
dbo.SOInfoArchive.nodeGroup,
dbo.SOInfoArchive.endStatus,
dbo.SOInfoArchive.returnCode,
dbo.SOInfoArchive.startTime,
dbo.SOInfoArchive.endTime,
dbo.SOInfoArchive.oDate,
dbo.SOInfoArchive.orderID,
dbo.SOInfoArchive.status,
dbo.SOInfoArchive.runCount,
dbo.SOInfoArchive.owner,
dbo.SOInfoArchive.cyclic,
dbo.SOInfoArchive.soInfoID,
dbo.SOInfoArchive.docInfoID,
dbo.SOInfoArchive.existsNote,
dbo.SOInfoArchive.noSysout,
dbo.serverInfo.serverName,
dbo.Groups.label AS applicationName,
Groups_1.label AS memberName,
Groups_2.label AS groupName,
Groups_3.label AS scheduleTableName,
dbo.SOInfoArchive.serverInfoID,
dbo.SOInfoArchive.applicationID,
dbo.SOInfoArchive.groupID,
dbo.SOInfoArchive.memberID,
dbo.SOInfoArchive.scheduleTableID,
dbo.docFile.docFileID,
dbo.docInfo.docInfoID AS Expr1,
dbo.docFile.docFileObject
FROM dbo.SOInfoArchive INNER JOIN
dbo.serverInfo ON dbo.SOInfoArchive.serverInfoID = dbo.serverInfo.serverInfoID INNER JOIN
dbo.docInfo ON dbo.SOInfoArchive.docInfoID = dbo.docInfo.docInfoID INNER JOIN
dbo.docFile ON dbo.docInfo.docFileID = dbo.docFile.docFileID LEFT OUTER JOIN
dbo.Groups AS Groups_3 ON dbo.SOInfoArchive.scheduleTableID = Groups_3.ID LEFT OUTER JOIN
dbo.Groups AS Groups_1 ON dbo.SOInfoArchive.memberID = Groups_1.ID LEFT OUTER JOIN
dbo.Groups AS Groups_2 ON dbo.SOInfoArchive.groupID = Groups_2.ID LEFT OUTER JOIN
dbo.Groups ON dbo.SOInfoArchive.applicationID = dbo.Groups.ID
WHERE CONTAINS (docfileObject,@searchText)
) tbl
-- Select enables me to consume the following columns as meta data in EF
SELECT rowNum,
serverName,
jobName ,
oDate,
runCount,
orderID,
applicationName,
memberName,
nodeID,
endStatus,
returnCode,
startTime,
endTime,
status,
owner,
existsNote,
docFileID
FROM @temp WHERE rowNum > @LowerLimit AND rowNum <= @UpperLimit
END
So to recap, I can now
1) Import the stored procedure into my EDMX.
2) The "Add Function Import" successfully creates
a) sp_SoInfoDocs Function Imports
b) sp_SoInfoDocs Complex Types
I can now successfully call my stored procedure as follows
using (TestEntities context = new TestEntities())
{
string searchText = "rem";
ObjectParameter total = new ObjectParameter("out",typeof(int));
List<sp_SoInfoDocs_Result> lst = context.sp_SoInfoDocs(searchText, 1, 10, total).ToList();
foreach (var item in lst)
{
Console.WriteLine(item.jobName + " " + item.oDate + " " + item.serverName + " " + item.startTime + " " + item.endTime);
}
Console.ReadLine();
}
And an example of the results returned.
I am now successfully using the basis of this process to import and display the metadata in a dynamically created HTML table in my View.
If anyone else is experiencing similar problems and I have neglected to explain fully why I adopted this solution and how I made it work ~ then please feel free to P.M. me and I'll do my best to explain.
Why use a temporary table or table variable at all? Table variables have many performance drawbacks, for example:
Table variables do not have distribution statistics, and they will not
trigger recompiles. Therefore, in many cases, the optimizer will build
a query plan on the assumption that the table variable has no rows.
For this reason, you should be cautious about using a table variable
if you expect a larger number of rows (greater than 100). Temp tables
may be a better solution in this case. Alternatively, for queries that
join the table variable with other tables, use the RECOMPILE hint,
which will cause the optimizer to use the correct cardinality for the
table variable.
table variables are not supported in the SQL Server optimizer's
cost-based reasoning model. Therefore, they should not be used when
cost-based choices are required to achieve an efficient query plan.
Temporary tables are preferred when cost-based choices are required.
This typically includes queries with joins, parallelism decisions, and
index selection choices.
Queries that modify table variables do not generate parallel query
execution plans. Performance can be affected when very large table
variables, or table variables in complex queries, are modified. In
these situations, consider using temporary tables instead. For more
information, see CREATE TABLE (Transact-SQL). Queries that read table
variables without modifying them can still be parallelized.
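(For example, the RECOMPILE hint mentioned above is applied like this; the table and column names are borrowed from earlier in this page purely for illustration:)
DECLARE @ids TABLE (ItemID int PRIMARY KEY);
INSERT INTO @ids (ItemID) VALUES (124125125), (125125125);

SELECT s.*
FROM dbo.SearchedUserItems AS s
JOIN @ids AS i ON i.ItemID = s.ItemID
OPTION (RECOMPILE);   -- lets the optimizer see the table variable's actual row count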
Use a simple CTE instead:
ALTER PROCEDURE [dbo].[sp_SoInfoDocs_Archive]
@searchText NVARCHAR(200),
@PageNumber INT,
@PageSize INT,
@out INT = 0 OUTPUT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @LowerLimit INT = (@PageNumber - 1) * @PageSize;
DECLARE @UpperLimit INT = @PageNumber * @PageSize;
;WITH cte AS
(
SELECT
dbo.SOInfoArchive.jobName,
dbo.SOInfoArchive.nodeID,
dbo.SOInfoArchive.nodeGroup,
dbo.SOInfoArchive.endStatus,
dbo.SOInfoArchive.returnCode,
dbo.SOInfoArchive.startTime,
dbo.SOInfoArchive.endTime,
dbo.SOInfoArchive.oDate,
dbo.SOInfoArchive.orderID,
dbo.SOInfoArchive.status,
dbo.SOInfoArchive.runCount,
dbo.SOInfoArchive.owner,
dbo.SOInfoArchive.cyclic,
dbo.SOInfoArchive.soInfoID,
dbo.SOInfoArchive.docInfoID,
dbo.SOInfoArchive.existsNote,
dbo.SOInfoArchive.noSysout,
dbo.serverInfo.serverName,
dbo.Groups.label AS applicationName,
Groups_1.label AS memberName,
Groups_2.label AS groupName,
Groups_3.label AS scheduleTableName,
dbo.SOInfoArchive.serverInfoID,
dbo.SOInfoArchive.applicationID,
dbo.SOInfoArchive.groupID,
dbo.SOInfoArchive.memberID,
dbo.SOInfoArchive.scheduleTableID,
dbo.docFile.docFileID,
dbo.docInfo.docInfoID AS Expr1,
dbo.docFile.docFileObject
FROM dbo.SOInfoArchive
JOIN dbo.serverInfo
ON dbo.SOInfoArchive.serverInfoID = dbo.serverInfo.serverInfoID
JOIN dbo.docInfo
ON dbo.SOInfoArchive.docInfoID = dbo.docInfo.docInfoID
JOIN dbo.docFile
ON dbo.docInfo.docFileID = dbo.docFile.docFileID
LEFT JOIN dbo.Groups AS Groups_3
ON dbo.SOInfoArchive.scheduleTableID = Groups_3.ID
LEFT JOIN dbo.Groups AS Groups_1
ON dbo.SOInfoArchive.memberID = Groups_1.ID
LEFT JOIN dbo.Groups AS Groups_2
ON dbo.SOInfoArchive.groupID = Groups_2.ID
LEFT JOIN dbo.Groups
ON dbo.SOInfoArchive.applicationID = dbo.Groups.ID
WHERE CONTAINS (docfileObject,@searchText)
),
cte2 AS
(
SELECT ROW_NUMBER() OVER (ORDER BY Expr1) AS rowNum, *
FROM cte
)
SELECT rowNum,
serverName,
jobName ,
oDate,
runCount,
orderID,
applicationName,
memberName,
nodeID,
endStatus,
returnCode,
startTime,
endTime,
status,
owner,
existsNote,
docFileID
FROM cte2
WHERE rowNum > @LowerLimit
AND rowNum <= @UpperLimit
END
Probably nobody cares anymore, but the reason it didn't work is that ROW_NUMBER() returns a bigint, while your code defines the column this way: cast(null as int) as rowNum
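A minimal fix is to make the two sides agree; either of these lines (each a fragment of the procedure shown above) works, as long as the alias stays rowNum:
-- option 1: in the dummy IF 1 = 2 block, declare the column as bigint
cast(null as bigint) as rowNum

-- option 2: in the real query, cast ROW_NUMBER() down to int so it matches the declared int
CAST(ROW_NUMBER() OVER (ORDER BY Expr1) AS int) AS rowNum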
I want to let a user search through all the columns in a table for a set of phrases defined in a textbox (split terms with whitespace).
So what first came to mind was finding a way in SQL to concatenate all the columns and just use the LIKE operator (for each phrase) on the result.
The other solution I thought of is writing an algorithm which takes all the phrases searched, and match them with all the columns.
So I ended up with the following:
String [] columns = {"col1", "col2", "col3", "col4"};
String [] phrases = textBox.Text.Split(' ');
I then took all the possible combinations of columns and phrases and put them into a WHERE-clause format for SQL, and the result was:
"(col1 LIKE '%prase1%' AND col1 LIKE '%phrase2%') OR
(col1 LIKE '%phrase1%' AND col2 LIKE '%phrase2%') OR
(col1 LIKE '%phrase2%' AND col2 LIKE '%phrase1%') OR
(col2 LIKE '%phrase1%' AND col3 LIKE '%phrase2%')"
The above is just an example snippet of the output; the number of conditions created by this algorithm is
conditions = columns ^ (phrases + 1)
So I observed that having 2 search phrases still gives good performance, but more than that decreases performance drastically.
What is the best practice when searching all the columns for the same data?
Edwin,
I didn't know you were using Oracle. My solution uses SQL Server; hopefully you will get the gist of it and can translate it into PL/SQL.
Hopefully this is useful to you.
I am manually populating the #search temp table. You will need to do that somehow, or look for a split function that takes the delimited string and returns a table.
IF OBJECT_ID('tempdb..#keywords') IS NOT NULL
DROP TABLE #keywords;
IF OBJECT_ID('tempdb..#search') IS NOT NULL
DROP TABLE #search;
DECLARE @search_count INT
-- Populate # search with all my search strings
SELECT *
INTO #search
FROM (
SELECT '%ST%' AS Search
UNION ALL
SELECT '%CL%'
) T1
SELECT @search_count = COUNT(*)
FROM #search;
PRINT @search_count
-- Populate my #keywords table with all column values from my table with table id and values
-- I just did a select id, value union with all fields
SELECT *
INTO #keywords
FROM (
SELECT client_id AS id
,First_name AS keyword
FROM [CLIENT]
UNION
SELECT client_id
,last_name
FROM [CLIENT]
) AS T1
-- see what is in there
SELECT *
FROM #search
SELECT *
FROM #keywords
-- I am doing a count(distinct #search.Search). This will get me a count,
--so if I put in 3 search values my count should equal 3 and that tells me all search strings have been found
SELECT #keywords.id
,COUNT(DISTINCT #search.Search)
FROM #keywords
INNER JOIN #search ON #keywords.keyword LIKE #search.Search
GROUP BY #keywords.id
HAVING COUNT(DISTINCT #search.Search) = @search_count
SELECT *
FROM [CLIENT]
WHERE [CLIENT].client_id IN (
SELECT #keywords.id
FROM #keywords
INNER JOIN #search ON #keywords.keyword LIKE #search.Search
GROUP BY #keywords.id
HAVING COUNT(DISTINCT #search.Search) = @search_count
)
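(On SQL Server 2016 and later, the manual population of #search above could instead use STRING_SPLIT; a sketch, assuming the terms arrive as a single comma-delimited string:)
DECLARE @terms nvarchar(max) = N'ST,CL';   -- illustrative input

SELECT N'%' + LTRIM(RTRIM(value)) + N'%' AS Search
INTO #search
FROM STRING_SPLIT(@terms, ',');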
You could create a stored procedure or function in PL/SQL to dynamically search the table for the search terms and then bring back the primary key and column of any matches. The code sample below should be enough to tailor to your requirements.
create table text_table(
col1 varchar2(32),
col2 varchar2(32),
col3 varchar2(32),
col4 varchar2(32),
col5 varchar2(32),
pk varchar2(32)
);
insert into text_table(col1, col2, col3, col4, col5, pk)
values ('the','quick','brown','fox','jumped', '1');
insert into text_table(col1, col2, col3, col4, col5, pk)
values ('over','the','lazy','dog','!', '2');
commit;
declare
rc sys_refcursor;
cursor_num number;
col_count number;
desc_tab dbms_sql.desc_tab;
vs_column_value varchar2(4000);
search_terms dbms_sql.varchar2a;
matching_cols dbms_sql.varchar2a;
empty dbms_sql.varchar2a;
key_value varchar2(32);
begin
--words to search for (i.e. from the text box)
search_terms(1) := 'fox';
search_terms(2) := 'box';
open rc for select * from text_table;
--Get the cursor number
cursor_num := dbms_sql.to_cursor_number(rc);
--Get the column definitions
dbms_sql.describe_columns(cursor_num, col_count, desc_tab);
--You must define the columns first
for i in 1..col_count loop
dbms_sql.define_column(cursor_num, i, vs_column_value, 4000);
end loop;
--loop through the rows
while ( dbms_sql.fetch_rows(cursor_num) > 0 ) loop
matching_cols := empty;
for i in 1 .. col_count loop --loop across the cols
--Get the column value
dbms_sql.column_value(cursor_num, i, vs_column_value);
--Get the value of the primary key based on the column name
if (desc_tab(i).col_name = 'PK') then
key_value := vs_column_value;
end if;
--Scan the search terms array for a match
for j in 1..search_terms.count loop
if (vs_column_value like '%'||search_terms(j)||'%') then
matching_cols(nvl(matching_cols.last,0) + 1) := desc_tab(i).col_name;
end if;
end loop;
end loop;
--Print the result matches
if matching_cols.last is not null then
for i in 1..matching_cols.last loop
dbms_output.put_line('Primary Key: '|| key_value||'. Matching Column: '||matching_cols(i));
end loop;
end if;
end loop;
end;
I have a GridView in the front end where the grid has two columns, ID and Order, like this:
ID Order
1 1
2 2
3 3
4 4
Now the user can update the order in the front-end GridView:
ID Order
1 2
2 4
3 1
4 3
Now if the user clicks the save button, the ID and order data are sent to the stored procedure as @sID = (1,2,3,4) and @sOrder = (2,4,1,3).
Now I want to update the order and save it to the database. Through the stored procedure, how can I update the table so that a subsequent select gives me results like:
ID Order
1 2
2 4
3 1
4 3
There is no built-in function to parse these comma-separated strings (prior to SQL Server 2016's STRING_SPLIT). However, you can use the XML functions in SQL Server to do this. Something like:
DECLARE @sID VARCHAR(100) = '1,2,3,4';
DECLARE @sOrder VARCHAR(10) = '2,4,1,3';
DECLARE @sIDASXml xml = CONVERT(xml,
'<root><s>' +
REPLACE(@sID, ',', '</s><s>') +
'</s></root>');
DECLARE @sOrderASXml xml = CONVERT(xml,
'<root><s>' +
REPLACE(@sOrder, ',', '</s><s>') +
'</s></root>');
;WITH ParsedIDs
AS
(
SELECT ID = T.c.value('.','varchar(20)'),
ROW_NUMBER() OVER(ORDER BY (SELECT 1)) AS RowNumber
FROM @sIDASXml.nodes('/root/s') T(c)
), ParsedOrders
AS
(
SELECT "Order" = T.c.value('.','varchar(20)'),
ROW_NUMBER() OVER(ORDER BY (SELECT 1)) AS RowNumber
FROM @sOrderASXml.nodes('/root/s') T(c)
)
UPDATE t
SET t."Order" = p."Order"
FROM #tableName AS t
INNER JOIN
(
SELECT i.ID, p."Order"
FROM ParsedOrders p
INNER JOIN ParsedIDs i ON p.RowNumber = i.RowNumber
) AS p ON t.ID = p.ID;
Then you can put this inside a stored procedure or whatever.
Note: you shouldn't need to do all of this manually; there should be a way to make the GridView update the underlying data table automatically through data binding. You may want to search for that approach instead of going through all this pain.
You could use a table-valued parameter to avoid sending delimiter-separated values or even XML to the database. To do this you need to:
Declare a parameter type in the database, like this:
CREATE TYPE UpdateOrderType AS TABLE (ID int, [Order] int)
After that you can define the procedure to use the parameter as
CREATE PROCEDURE UpdateOrder (@UpdateOrderValues UpdateOrderType READONLY)
AS
BEGIN
UPDATE t
SET t.[Order] = tvp.[Order]
FROM <YourTable> t
INNER JOIN @UpdateOrderValues tvp ON t.ID=tvp.ID
END
As you can see, the SQL is trivial compared to parsing XML or delimited strings.
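For a quick test from T-SQL (the variable name @orders below is just illustrative), usage looks like:
DECLARE @orders UpdateOrderType;
INSERT INTO @orders (ID, [Order]) VALUES (1, 2), (2, 4), (3, 1), (4, 3);
EXEC UpdateOrder @UpdateOrderValues = @orders;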
Use the parameter from C#:
using (SqlCommand command = connection.CreateCommand()) {
command.CommandText = "dbo.UpdateOrder";
command.CommandType = CommandType.StoredProcedure;
//create a table from your gridview data
DataTable paramValue = CreateDataTable(orderedData);
SqlParameter parameter = command.Parameters
.AddWithValue("@UpdateOrderValues", paramValue);
parameter.SqlDbType = SqlDbType.Structured;
parameter.TypeName = "dbo.UpdateOrderType";
command.ExecuteNonQuery();
}
where CreateDataTable is something like:
//assuming the source data has ID and Order properties
private static DataTable CreateDataTable(IEnumerable<OrderData> source) {
DataTable table = new DataTable();
table.Columns.Add("ID", typeof(int));
table.Columns.Add("Order", typeof(int));
foreach (OrderData data in source) {
table.Rows.Add(data.ID, data.Order);
}
return table;
}
(code lifted from this question)
As you can see, this approach (specific to SQL Server 2008 and up) makes it easier and more formal to pass structured data as a parameter to a procedure. What's more, you're working with type safety all the way, so many of the parsing errors that tend to crop up in string/XML manipulation are not an issue.
You can use CHARINDEX in a loop, like this (note that both input strings end with a trailing comma, which the loop relies on to consume every value):
DECLARE @id VARCHAR(MAX)
DECLARE @order VARCHAR(MAX)
SET @id='1,2,3,4,'
SET @order='2,4,1,3,'
WHILE CHARINDEX(',',@id) > 0
BEGIN
DECLARE @tmpid VARCHAR(50)
SET @tmpid=SUBSTRING(@id,1,(charindex(',',@id)-1))
DECLARE @tmporder VARCHAR(50)
SET @tmporder=SUBSTRING(@order,1,(charindex(',',@order)-1))
UPDATE dbo.Test SET
[Order]=@tmporder
WHERE ID=convert(int,@tmpid)
SET @id = SUBSTRING(@id,charindex(',',@id)+1,len(@id))
SET @order=SUBSTRING(@order,charindex(',',@order)+1,len(@order))
END
I was wondering if the below scenario will work? I am having trouble with it.
I have a SqlDataSource (configured via the smart tag) with a query like this:
SELECT [col1], [col2], [col3] FROM [Table1] WHERE (@SubType = @SubID) ORDER BY [col1] ASC
No matter where or how I set the @SubType parameter, it does not work, yet if I change the query to WHERE [col1] = @SubID (removing @SubType) it works fine.
Can I set a parameter as a field name to compare against, like my query does?
That's not how parameters work. Parameters are not string replacement; they supply values, not database object names (columns, tables, etc.).
The solution is to first assemble the SQL query with the desired columns (in code-behind) and then set the parameter values.
If you want to dynamically replace the items in your WHERE clause, then you will want to look at using dynamic SQL; you can build your SQL as a string and execute it.
Code sample from http://www.sommarskog.se/dynamic_sql.html
DECLARE @sql nvarchar(2000)
SELECT @sql = 'SELECT O.OrderID, SUM(OD.UnitPrice * OD.Quantity)
FROM dbo.Orders O
JOIN dbo.[Order Details] OD ON O.OrderID = OD.OrderID
WHERE O.OrderDate BETWEEN @from AND @to
AND EXISTS (SELECT *
FROM dbo.[Order Details] OD2
WHERE O.OrderID = OD2.OrderID
AND OD2.ProductID = @prodid)
GROUP BY O.OrderID'
EXEC sp_executesql @sql, N'@from datetime, @to datetime, @prodid int',
'19980201', '19980228', 76
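Applied to the original query, the pattern is to validate the column name against a known list, splice it into the SQL text (QUOTENAME guards the identifier), and keep the compared value as a real parameter. A sketch, assuming @SubID is an int:
DECLARE @SubType sysname = N'col1';   -- the column chosen by the user
DECLARE @sql nvarchar(2000);

IF @SubType NOT IN (N'col1', N'col2', N'col3')
    RAISERROR('Invalid column name', 16, 1);
ELSE
BEGIN
    SET @sql = N'SELECT [col1], [col2], [col3] FROM [Table1] WHERE '
             + QUOTENAME(@SubType) + N' = @SubID ORDER BY [col1] ASC';
    EXEC sp_executesql @sql, N'@SubID int', @SubID = 42;
END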
Another helpful link:
Dynamic WHERE Clause