I have a list of conditions.
These conditions can be grouped.
When creating groups, I want to avoid duplicates, meaning two groups with exactly the same members.
The IDs are unique identifiers, and there is a unique constraint on the Name of the condition.
For example, I have 3 conditions, being "Painted", "Oiled", "Protected".
I want to make groups of Painted & Protected and Oiled & Protected. Since there are multiple applications, it is possible that two of them try to create the same group at the same moment.
In code, I first try to retrieve the group with the matching members, and if no group is found, I create it. With multiple threads or applications this introduces a race condition where duplicate groups can be created in the database.
Is there a way to avoid this in SQL or in code?
This is a kind of relational-division-without-remainder problem. A set-based approach is as follows:
A) Assume that (conditiongroup_id, condition_id) pairs are unique
B) Convert the set of values to be inserted into a table:
CREATE TABLE #temp (condition_id int NOT NULL UNIQUE);
INSERT INTO #temp (condition_id) VALUES
(1),
(2),
(3);
C) Perform relational division:
SELECT conditiongroup_condition.conditiongroup_id
FROM conditiongroup_condition
LEFT JOIN #temp AS temp ON conditiongroup_condition.condition_id = temp.condition_id
GROUP BY conditiongroup_condition.conditiongroup_id
HAVING COUNT(temp.condition_id) = (SELECT COUNT(*) FROM #temp)
AND COUNT(conditiongroup_condition.condition_id) = (SELECT COUNT(*) FROM #temp);
If this query returns one or more rows, then each conditiongroup_id returned contains exactly those three conditions. This logic could be coded inside your application or inside an insert trigger. A table value constructor (VALUES) could be used instead of the temporary table.
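For example, a minimal sketch of the same check with the member set inlined as a table value constructor instead of #temp (still assuming condition IDs 1, 2 and 3):
SELECT cgc.conditiongroup_id
FROM conditiongroup_condition AS cgc
LEFT JOIN (VALUES (1), (2), (3)) AS wanted(condition_id)
    ON cgc.condition_id = wanted.condition_id
GROUP BY cgc.conditiongroup_id
HAVING COUNT(wanted.condition_id) = 3   -- every wanted condition is present
   AND COUNT(cgc.condition_id) = 3;     -- and the group has no extra members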
This could help you.
IsDuplicate is a UDF that checks ConditionGroupMembers for duplicate member sets; you then reference that UDF from a check constraint.
create function dbo.IsDuplicate()
returns int as
begin
    declare @result int = 0
    ;with cte as
    (
        select ConditionGroupId,
               string_agg(ConditionGroupMemberId, ', ') WITHIN GROUP (ORDER BY ConditionGroupMemberId) as Members
        from dbo.ConditionGroupMembers gm
        group by ConditionGroupId
    )
    select @result = 1
    from cte c1
    inner join cte c2 on c1.Members = c2.Members and c1.ConditionGroupId <> c2.ConditionGroupId
    return @result
end

alter table dbo.ConditionGroupMembers
add constraint CheckDuplicateMembers CHECK (dbo.IsDuplicate() = 0)
Note: The above code works, but you can also use a Trigger instead of a Check Constraint.
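For completeness, a rough sketch of such a trigger, reusing the dbo.IsDuplicate function above (the trigger name and error message are illustrative):
create trigger trg_ConditionGroupMembers_NoDuplicates
on dbo.ConditionGroupMembers
after insert, update
as
begin
    set nocount on;
    -- if the change produced two groups with identical member sets, undo it
    if dbo.IsDuplicate() = 1
    begin
        rollback transaction;
        throw 50000, 'A condition group with exactly these members already exists.', 1;
    end
end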
Related
My database contains three tables called Object_Table, Data_Table and Link_Table. The link table just contains two columns, the identity of an object record and an identity of a data record.
I want to copy the data from Data_Table where it is linked to one given object identity, and insert corresponding records into Data_Table and Link_Table for a different given object identity.
I can do this by selecting into a table variable and then looping through it, doing two inserts for each iteration.
Is this the best way to do it?
Edit: I want to avoid a loop for two reasons. The first is that I'm lazy: a loop/temp table requires more code, and more code means more places to make a mistake. The second is a concern about performance.
I can copy all the data in one insert but how do get the link table to link to the new data records where each record has a new id?
In one statement: No.
In one transaction: Yes
BEGIN TRANSACTION
DECLARE @DataID int;
INSERT INTO DataTable (Column1 ...) VALUES (....);
SELECT @DataID = scope_identity();
INSERT INTO LinkTable VALUES (@ObjectID, @DataID);
COMMIT
The good news is that the above code is guaranteed to be atomic, and it can be sent to the server from a client application as one SQL string in a single function call, as if it were one statement. You could also apply a trigger to one table to get the effect of a single insert; however, it's ultimately still two statements, and you probably don't want to run the trigger for every insert.
You still need two INSERT statements, but it sounds like you want to get the IDENTITY from the first insert and use it in the second, in which case, you might want to look into OUTPUT or OUTPUT INTO: http://msdn.microsoft.com/en-us/library/ms177564.aspx
The following sets up the situation I had, using table variables.
DECLARE @Object_Table TABLE
(
    Id INT NOT NULL PRIMARY KEY
)
DECLARE @Link_Table TABLE
(
    ObjectId INT NOT NULL,
    DataId INT NOT NULL
)
DECLARE @Data_Table TABLE
(
    Id INT NOT NULL Identity(1,1),
    Data VARCHAR(50) NOT NULL
)
-- create two objects '1' and '2'
INSERT INTO @Object_Table (Id) VALUES (1)
INSERT INTO @Object_Table (Id) VALUES (2)
-- create some data
INSERT INTO @Data_Table (Data) VALUES ('Data One')
INSERT INTO @Data_Table (Data) VALUES ('Data Two')
-- link all data to first object
INSERT INTO @Link_Table (ObjectId, DataId)
SELECT Objects.Id, Data.Id
FROM @Object_Table AS Objects, @Data_Table AS Data
WHERE Objects.Id = 1
Thanks to another answer that pointed me towards the OUTPUT clause I can demonstrate a solution:
-- now I want to copy the data from object 1 to object 2 without looping
INSERT INTO @Data_Table (Data)
OUTPUT 2, INSERTED.Id INTO @Link_Table (ObjectId, DataId)
SELECT Data.Data
FROM @Data_Table AS Data
INNER JOIN @Link_Table AS Link ON Data.Id = Link.DataId
INNER JOIN @Object_Table AS Objects ON Link.ObjectId = Objects.Id
WHERE Objects.Id = 1
It turns out, however, that it is not that simple in real life, because of the following error:
the OUTPUT INTO clause cannot be on either side of a (primary key, foreign key) relationship
I can still OUTPUT INTO a temp table and then finish with a normal insert. So I can avoid my loop, but I cannot avoid the temp table.
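A rough sketch of that workaround, shown against the same table variables for continuity (in real life the targets are the actual tables whose foreign key blocks OUTPUT INTO; the temp table name is illustrative):
-- capture the new Data ids in a temp table that has no constraints
CREATE TABLE #NewData (DataId INT NOT NULL);

INSERT INTO @Data_Table (Data)
OUTPUT INSERTED.Id INTO #NewData (DataId)
SELECT Data.Data
FROM @Data_Table AS Data
INNER JOIN @Link_Table AS Link ON Data.Id = Link.DataId
INNER JOIN @Object_Table AS Objects ON Link.ObjectId = Objects.Id
WHERE Objects.Id = 1;

-- finish with a normal insert into the link table
INSERT INTO @Link_Table (ObjectId, DataId)
SELECT 2, DataId FROM #NewData;

DROP TABLE #NewData;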
I want to stress the importance of using
SET XACT_ABORT ON;
for MSSQL transactions with multiple SQL statements.
See: https://msdn.microsoft.com/en-us/library/ms188792.aspx
They provide a very good example.
So, the final code should look like the following:
SET XACT_ABORT ON;
BEGIN TRANSACTION
DECLARE @DataID int;
INSERT INTO DataTable (Column1 ...) VALUES (....);
SELECT @DataID = scope_identity();
INSERT INTO LinkTable VALUES (@ObjectID, @DataID);
COMMIT
It sounds like the Link table captures the many:many relationship between the Object table and Data table.
My suggestion is to use a stored procedure to manage the transaction. When you want to insert into the Object or Data table, perform your inserts, get the new IDs, and insert them into the Link table.
This allows all of your logic to remain encapsulated in one easy-to-call sproc.
If you want the actions to be more or less atomic, I would make sure to wrap them in a transaction. That way you can be sure both happened or both didn't happen as needed.
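A minimal sketch of such a procedure, reusing the DataTable/LinkTable names from the earlier answer (the column names are illustrative):
CREATE PROCEDURE dbo.InsertDataForObject
    @ObjectID int,
    @Data varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;

    -- insert the data row and capture its new identity
    DECLARE @DataID int;
    INSERT INTO DataTable (Data) VALUES (@Data);
    SET @DataID = SCOPE_IDENTITY();

    -- link it to the object
    INSERT INTO LinkTable (ObjectID, DataID) VALUES (@ObjectID, @DataID);

    COMMIT;
END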
You might create a View selecting the column names required by your insert statement, add an INSTEAD OF INSERT Trigger, and insert into this view.
Before being able to do a multitable insert in Oracle, you could use a trick involving an insert into a view that had an INSTEAD OF trigger defined on it to perform the inserts. Can this be done in SQL Server?
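It can; SQL Server supports INSTEAD OF INSERT triggers on views. A rough sketch of the idea, reusing the DataTable/LinkTable example from the earlier answers (the view, column names, and single-row assumption are illustrative):
CREATE VIEW dbo.ObjectData
AS
SELECT l.ObjectID, d.Data
FROM DataTable AS d
INNER JOIN LinkTable AS l ON l.DataID = d.ID
GO

CREATE TRIGGER dbo.ObjectData_InsteadOfInsert
ON dbo.ObjectData
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- sketch assumes single-row inserts; a production version would handle multi-row inserted sets
    DECLARE @ObjectID int, @Data varchar(50), @DataID int;
    SELECT @ObjectID = ObjectID, @Data = Data FROM inserted;

    INSERT INTO DataTable (Data) VALUES (@Data);
    SET @DataID = SCOPE_IDENTITY();

    INSERT INTO LinkTable (ObjectID, DataID) VALUES (@ObjectID, @DataID);
END
GO

-- the caller then performs a single logical INSERT against the view:
INSERT INTO dbo.ObjectData (ObjectID, Data) VALUES (2, 'Data Three');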
Insert can only operate on one table at a time. Multiple Inserts have to have multiple statements.
I don't think you need to loop through a table variable - can't you just do a mass insert into one table, then a mass insert into the other?
By the way - I am guessing you mean copy the data from Object_Table; otherwise the question does not make sense.
//if you want to insert the same as first table
$qry = "INSERT INTO table (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry);
$qry2 = "INSERT INTO table2 (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry2);
//or if you want to insert certain parts of table one
$qry = "INSERT INTO table (one, two, three) VALUES('$one','$two','$three')";
$result = @mysql_query($qry);
$qry2 = "INSERT INTO table2 (two) VALUES('$two')";
$result = @mysql_query($qry2);
//I know it looks too good to be right, but it works, and you can keep adding queries - just change the $qry number and the number in @mysql_query($qry).
This has worked for me across 17 tables.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE InsetIntoTwoTable
(
    @name nvarchar(50),
    @Email nvarchar(50)
)
AS
BEGIN
    SET NOCOUNT ON;
    insert into dbo.info(name) values (@name)
    insert into dbo.login(Email) values (@Email)
END
GO
I don't know whether this is possible or not.
I have a table that keeps records for book issue/return. There are two relevant columns: [status] and [bookid]. I want to add a constraint in SQL that prevents the user from inserting a duplicate record with status = 'issue' for the same bookid.
For example, if there is already a record with status = 'issue' and bookid = 1, it must not allow another record with status = 'issue' and bookid = 1 to be inserted; however, there can be multiple records with other statuses, e.g. status = 'return' and bookid = 1 may occur any number of times.
Alternatively, there may be a solution using LINQ to SQL in C#.
In general, you do not need a user-defined function. In SQL Server (and many other databases) you can just use a filtered unique index:
create unique index unq_bookissue
    on t(bookid)
    where status = 'issue';
In earlier versions of SQL Server you can do this with a computed column, assuming that you have a table with columns such as:
BookId, which is repeated across rows.
Status, which should be unique when the value is 'issue'.
BookIssueId, which uniquely identifies each row.
Then, add a computed column to keep track of status = 'issue':
alter table t add computed_bookissueid as (case when status = 'issue' then -1 else BookIssueId end);
Now add a unique index on this column and BookId:
create unique index unq_bookid_issue on t(BookId, computed_bookissueid);
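A quick sketch of how this behaves end to end, assuming a minimal table t (the column types are illustrative):
create table t
(
    BookIssueId int identity(1,1) primary key,
    BookId      int not null,
    Status      varchar(20) not null
);

alter table t add computed_bookissueid as (case when status = 'issue' then -1 else BookIssueId end);
create unique index unq_bookid_issue on t(BookId, computed_bookissueid);

insert into t (BookId, Status) values (1, 'issue');   -- succeeds
insert into t (BookId, Status) values (1, 'return');  -- succeeds: computed value is the row's own BookIssueId
insert into t (BookId, Status) values (1, 'return');  -- succeeds again
insert into t (BookId, Status) values (1, 'issue');   -- fails: duplicate (1, -1) in the unique index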
You have a complex condition here, so a UNIQUE constraint won't help you. You will need a CHECK constraint.
You first need a function to do your check:
CREATE FUNCTION dbo.IsReturnDuplicate
(
    @id INT,
    @bookid INT,
    @status VARCHAR(MAX)
)
RETURNS BIT
AS
BEGIN
    RETURN (SELECT COUNT(*)
            FROM bookreturns
            WHERE (id <> @id) AND (status = @status) AND (bookid = @bookid) AND (status = 'issue'))
END
This will return 1 if there is already another row in the table (with a different id) that has the same bookid and status 'issue'.
You can then create a CHECK constraint using this function
CREATE TABLE bookreturns (
    --...
    CONSTRAINT CK_bookreturns_status CHECK (dbo.IsReturnDuplicate(id, bookid, status) = 0)
)
Using Gordon's answer:
create unique index unq_bookissuedReference
on Books(Book_id) where [status] = 'issued'
Works for me
I have tried, but I get this error:
Subqueries are not allowed in this context.
I have two tables, Product and Category, and I want to look up the CategoryID based on the Category_Name.
The query is
Insert into Product(Product_Name,Product_Model,Price,Category_id)
values(' P1','M1' , 100, (select CategoryID from Category where Category_Name=Laptop))
Please tell me a solution with code.
(you didn't clearly specify what database you're using - this is for SQL Server but should apply to others as well, with some minor differences)
The INSERT command comes in two flavors:
(1) either you have all your values available, as literals or SQL Server variables - in that case, you can use the INSERT .. VALUES() approach:
INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN)
VALUES(Value1, Value2, #Variable3, #Variable4, ...., ValueN)
Note: I would recommend always explicitly specifying the list of columns to insert data into - that way, you won't have any nasty surprises if your table suddenly has an extra column, or if your table has an IDENTITY or computed column. Yes, it's a tiny bit more work - once - but then your INSERT statement is as solid as it can be, and you won't have to constantly fiddle with it if your table changes.
(2) if you don't have all your values as literals and/or variables, but instead you want to rely on another table, multiple tables, or views, to provide the values, then you can use the INSERT ... SELECT ... approach:
INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN)
SELECT
SourceColumn1, SourceColumn2, #Variable3, #Variable4, ...., SourceColumnN
FROM
dbo.YourProvidingTableOrView
Here, you must define exactly as many items in the SELECT as your INSERT expects - and those can be columns from the table(s) (or view(s)), or those can be literals or variables. Again: explicitly provide the list of columns to insert into - see above.
You can use one or the other, but you cannot mix the two: you cannot use VALUES(...) and then have a SELECT query in the middle of your list of values. Pick one of the two and stick with it.
So in your concrete case, you'll need to use:
INSERT INTO dbo.Product(Product_Name, Product_Model, Price, Category_id)
SELECT
' P1', 'M1', 100, CategoryID
FROM
dbo.Category
WHERE
Category_Name = 'Laptop'
Try it like this:
Insert into Product
(
Product_Name,
Product_Model,
Price,Category_id
)
Select
'P1',
'M1' ,
100,
CategoryID
From
Category
where Category_Name='Laptop'
Try this:
DECLARE @CategoryID BIGINT = (select top 1 CategoryID from Category where Category_Name='Laptop')
Insert into Product(Product_Name,Product_Model,Price,Category_id)
values(' P1','M1' , 100, @CategoryID)
I'm trying to manually map some rows to instances of their appropriate classes. I know that I need to use every column of every table, and map all of those columns from one table into a given class.
However, I was wondering if there would be an easier way to do it. Right now, I have a class called School and a class called User. Each of these classes has a Name property, and other properties (but the ´Name` one is the important one, since it is a mutual name for both classes).
Right now, I am doing the following to map them down.
SELECT u.SomeOtherColumn, u.Name AS userName, s.SomeOtherColumn, s.Name AS schoolName FROM User AS u INNER JOIN School AS s ON something
I would love to do the following, but I can't, since Name exists in both tables.
SELECT u.*, s.* FROM User AS u INNER JOIN School AS s ON something
This however generates an error since they both have the column Name. Can I prefix them somehow? Like this for instance?
u.user_*, s.school_*
So that every column of each of those tables have a prefix? For instance user_Name and school_Name?
Years ago I wrote a bunch of functions and procedures to help me with developing automatic code-generation routines for SQL Servers and applications using dynamic SQL. Here is the one that I think would be most helpful to your situation:
Create FUNCTION [dbo].[ColumnString2]
(
    @TableName As SYSNAME,      --table or view whose column names you want
    @Template As NVarchar(MAX), --replaces '{c}' with the name for every column
    @Between As NVarchar(MAX)   --puts this string between every column string
)
RETURNS NVarchar(MAX) AS
BEGIN
    DECLARE @str As NVarchar(MAX);

    SELECT TOP 999
        @str = COALESCE(
            @str + @Between + REPLACE(@Template, N'{c}', COLUMN_NAME),
            REPLACE(@Template, N'{c}', COLUMN_NAME)
        )
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = COALESCE(PARSENAME(@TableName, 2), N'dbo')
      AND TABLE_NAME = PARSENAME(@TableName, 1)
    ORDER BY ORDINAL_POSITION

    RETURN @str;
END
This allows you to format all of the column names of a table or view any way that you want. Simply pass it a table name, and a Template string with '{c}' everywhere that you want the column name inserted for each column. It will do this for every column in @TableName, and add the @Between string in between them.
Here is an example of how to vertically format all of the column names for a table, renaming them with a prefix in a way that is suitable for inclusion into a SELECT query:
SELECT dbo.[ColumnString2](N'yourTable', N'
{c} As prefix_{c}', N',')
This function was intended for use with dynamic SQL, but you can use it too by executing it in Management Studio with your output set to Text (instead of Grid). Then cut and paste the output into your desired query, view or code text. (Be sure to change your SSMS Query options for Text Results to raise the "maximum number of characters displayed" from 256 to the max (8000). If that still gets cut off for you, then you can change this function to output each column as a separate row, instead of as one single large string.)
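For instance, assuming a hypothetical dbo.School table with columns Id and Name, a call like the one above would emit text roughly like this:
SELECT dbo.[ColumnString2](N'School', N'
    {c} As school_{c}', N',')

-- produces (as a single string):
--     Id As school_Id,
--     Name As school_Name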
I have a list of objects (created from several text files) in C#.net that I need to store in a SQL2005 database file. Unfortunately, Table-Valued Parameters began with SQL2008 so they won't help. I found from MSDN that one method is to "Bundle multiple data values into delimited strings or XML documents and then pass those text values to a procedure or statement" but I am rather new to stored procedures and need more help than that. I know I could create a stored procedure to create one record then loop through my list and add them, but that's what I'm trying to avoid. Thanks.
Input file example (Other files contain pricing and availability):
Matnr ShortDescription LongDescription ManufPartNo Manufacturer ManufacturerGlobalDescr GTIN ProdFamilyID ProdFamily ProdClassID ProdClass ProdSubClassID ProdSubClass ArticleCreationDate CNETavailable CNETid ListPrice Weight Length Width Heigth NoReturn MayRequireAuthorization EndUserInformation FreightPolicyException
10000000 A&D ENGINEERING SMALL ADULT CUFF FOR UA-767PBT UA-279 A&D ENGINEERING A&D ENG 093764011542 GENERAL General TDINTERNL TD Internal TDINTERNL TD Internal 2012-05-13 12:18:43 N 18.000 .350 N N N N
10000001 A&D ENGINEERING MEDIUM ADULT CUFF FOR UA-767PBT UA-280 A&D ENGINEERING A&D ENG 093764046070 GENERAL General TDINTERNL TD Internal TDINTERNL TD Internal 2012-05-13 12:18:43 N 18.000 .450 N N N N
Some DataBase File fields:
EffectiveDate varchar(50)
MfgName varchar(500)
MfgPartNbr varchar(500)
Cost varchar(200)
QtyOnHand varchar(200)
You can split multiple values from a single string quite easily. Say you can bundle the string like this, using a comma to separate "columns", and a semi-colon to separate "rows":
foo, 20120101, 26; bar, 20120612, 32
(This assumes that commas and semi-colons can't appear naturally in the data; if they can, you'll need to choose other delimiters.)
You can build a split routine like this, which includes an output column that allows you to determine the order the value appeared in the original string:
CREATE FUNCTION dbo.SplitStrings
(
    @List NVARCHAR(MAX),
    @Delimiter NVARCHAR(255)
)
RETURNS TABLE
AS
RETURN (SELECT Number = ROW_NUMBER() OVER (ORDER BY Number),
    Item FROM (SELECT Number, Item = LTRIM(RTRIM(SUBSTRING(@List, Number,
    CHARINDEX(@Delimiter, @List + @Delimiter, Number) - Number)))
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY [object_id])
    FROM sys.all_objects) AS n(Number)
    WHERE Number <= CONVERT(INT, LEN(@List))
    AND SUBSTRING(@Delimiter + @List, Number, 1) = @Delimiter
    ) AS y);
GO
Then you can query it like this (for simplicity and illustration I'm only handling 3 properties but you can extrapolate this for 11 or n):
DECLARE @x NVARCHAR(MAX); -- a parameter to your stored procedure
SET @x = N'foo, 20120101, 26; bar, 20120612, 32';
;WITH x AS
(
SELECT ID = s.Number, InnerID = y.Number, y.Item
-- parameter and "row" delimiter here:
FROM dbo.SplitStrings(@x, ';') AS s
-- output and "column" delimiter here:
CROSS APPLY dbo.SplitStrings(s.Item, ',') AS y
)
SELECT
prop1 = x.Item,
prop2 = x2.Item,
prop3 = x3.Item
FROM x
INNER JOIN x AS x2
ON x.InnerID = x2.InnerID - 1
AND x.ID = x2.ID
INNER JOIN x AS x3
ON x2.InnerID = x3.InnerID - 1
AND x2.ID = x3.ID
WHERE x.InnerID = 1
ORDER BY x.ID;
Results:
prop1 prop2 prop3
------ -------- -------
foo 20120101 26
bar 20120612 32
We use XML data types like this...
declare @contentXML xml
set @contentXML = convert(xml, N'<ROOT><V a="124694"/><V a="124699"/><V a="124701"/></ROOT>')
SELECT content_id
FROM dbo.table c WITH (nolock)
JOIN @contentXML.nodes('/ROOT/V') AS R ( v ) ON c.content_id = R.v.value('@a', 'INT')
Here is what it would look like if calling a stored procedure...
DbCommand dbCommand = database.GetStoredProcCommand("MyStoredProcedure");
database.AddInParameter(dbCommand, "dataPubXML", DbType.Xml, dataPublicationXml);
CREATE PROC dbo.usp_get_object_content
(
    @contentXML XML
)
AS
BEGIN
    SET NOCOUNT ON
    SELECT content_id
    FROM dbo.tblIVContent c WITH (nolock)
    JOIN @contentXML.nodes('/ROOT/V') AS R ( v ) ON c.content_id = R.v.value('@a', 'INT')
END
SQL Server does not parse XML very quickly so the use of the SplitStrings function might be more performant. Just wanted to provide an alternative.
I can think of a few options, but as I was typing, one of them (the Split option) was posted by Mr. @Bertrand above. The only problem with it is that SQL just isn't that good at string manipulation.
So, another option would be to use a #Temp table that your sproc assumes will be present. Build dynamic SQL to the following effect:
Start a transaction, CREATE TABLE #InsertData with the shape you need, then loop over the data you are going to insert, using INSERT INTO #InsertData SELECT <values> UNION ALL SELECT <values>....
There are some limitations to this approach, one of which is that as the data set becomes very large you may need to split the INSERTs into batches. (I don't recall the specific error I got when I learned this myself, but for very long lists of values I have had SQL complain.) The solution, though, is simple: just generate a series of INSERTs with a smaller number of rows each. For instance, you might do 10 INSERT SELECTs with 1000 UNION ALLs each instead of 1 INSERT SELECT with 10000 UNION ALLs. You can still pass the entire batch as a part of a single command.
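As a rough illustration (the table shape, row values, and final target table are made up from the sample fields above), the dynamic SQL generated on the C# side might look something like this:
BEGIN TRANSACTION;

CREATE TABLE #InsertData
(
    MfgPartNbr varchar(500),
    Cost       varchar(200),
    QtyOnHand  varchar(200)
);

-- one batch of rows (keep each batch to a modest size, e.g. ~1000 rows)
INSERT INTO #InsertData (MfgPartNbr, Cost, QtyOnHand)
SELECT 'UA-279', '18.000', '5' UNION ALL
SELECT 'UA-280', '18.000', '3';

-- ...repeat INSERT INTO #InsertData ... for further batches...

-- finish with a single set-based statement against the real table (name is hypothetical)
INSERT INTO dbo.ProductStaging (MfgPartNbr, Cost, QtyOnHand)
SELECT MfgPartNbr, Cost, QtyOnHand FROM #InsertData;

COMMIT;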
The advantage of this (despite its various disadvantages - the use of temporary tables, long command strings, etc.) is that it offloads all the string processing to the much more efficient C# side of the equation and doesn't require an additional persistent database object (the Split function; though, again, who doesn't need one of these sometimes?).
If you DO go with a Split() function, I'd encourage you to offload this to a SQLCLR function, and NOT a T-SQL UDF (for the performance reasons illustrated by the link above).
Finally, whatever method you choose, note that you'll have more problems if your data can include strings that contain the delimiter. For instance, in Aaron's answer you run into problems if the data is:
'I pity the foo!', 20120101, 26; 'bar, I say, bar!', 20120612, 32
Again, because C# is better at string handling than T-SQL, you'll be better off without using a T-SQL UDF to handle this.
Edit
Please note the following additional point to think about for the dynamic INSERT option.
You need to decide whether any input here is potentially dangerous and needs to be cleaned before use. You cannot easily parameterize this data, so this is a significant concern. In the place I used this strategy, I already had strong guarantees about the type of the data (in particular, I used it for seeding a table with a list of integer IDs to process, so I was iterating over integers and not over arbitrary, untrusted strings). If you don't have similar assurances, be aware of the dangers of SQL injection.