How to insert Duplicated data only into an Event Log? - c#

I need help regarding a SQL query problem. I have a query that deletes the duplicates, but I also need to record the duplicated data being deleted in an EventLog table, and I am not sure how to do that. Below is an example of my Student table. From the table below, you can see that only Alpha and Bravo are duplicated.
id Name Age Group
-----------------------
1 Alpha 11 A
2 Bravo 12 A
3 Alpha 11 B
4 Bravo 12 B
5 Delta 11 B
As I am copying data from Group A to Group B, I need to find and delete the duplicated data in Group B. Below is my query for deleting duplicates from Group B.
DELETE Student WHERE id
IN (SELECT tb.id
FROM Student AS ta
JOIN Student AS tb ON ta.name=tb.name AND ta.age=tb.age
WHERE ta.GroupName='A' AND tb.GroupName='B')
Here is an example of my EventLog and what I want the result of the query I execute to look like.
id Name Age Group Status
------------------------------------------
1 Alpha 11 B Delete
2 Bravo 11 B Delete
Instead of inserting the entire Group B data into the eventlog, is there any query that can just insert the Duplicated Data into the event log?

If we are speaking about Microsoft SQL Server, the key is the OUTPUT clause; more details here: https://msdn.microsoft.com/en-us/library/ms177564.aspx
declare @Student table
    ( id int, name nvarchar(20), age int, groupname char(1))
insert into @Student values (1, 'Alpha', 11, 'A'),
    (2, 'Bravo', 12, 'A'),
    (3, 'Alpha', 11, 'B'),
    (4, 'Bravo', 12, 'B'),
    (5, 'Delta', 11, 'B')
declare @Event table
    ( id int, name nvarchar(20), age int, groupname char(1), [Status] nvarchar(20))
select * from @Student
DELETE @Student
output deleted.*, 'Deleted' into @Event
WHERE id
IN (SELECT tb.id
    FROM @Student AS ta
    JOIN @Student AS tb ON ta.name=tb.name AND ta.age=tb.age
    WHERE ta.GroupName='A' AND tb.GroupName='B')
select * from @Event
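With the sample rows above, the final select from @Event should return ids 3 (Alpha, 11, B) and 4 (Bravo, 12, B) flagged as Deleted, while Delta stays in @Student because it has no Group A counterpart.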

Run this before the DELETE above. I'm not sure how you decide which row is the duplicate, but you can use ROW_NUMBER to list them with the non-duplicate as 1 and then insert everything with a ROW_NUMBER > 1 (ordering the partition by [Group] keeps the Group A row as number 1).
;WITH cte AS
(
    SELECT Name
          ,Age
          ,[Group]
          ,RID = ROW_NUMBER() OVER (PARTITION BY Name, Age ORDER BY [Group])
    FROM Student
)
INSERT INTO EventLog (Name, Age, [Group], [Status])
SELECT Name, Age, [Group], 'Delete'
FROM cte
WHERE RID > 1

You need to create a basic AFTER DELETE trigger on the student table. It will be executed after any delete on the student table and will insert the deleted records into log_table:
create trigger deleted_records
on student_table
after delete
as
begin
    insert into log_table
    select d.id, d.Name, d.Age, d.[Group], 'DELETED'
    from DELETED d;
end
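For completeness, a hedged sketch of how the pieces fit together; the table and column names (student_table, log_table, [Group]) follow the trigger above and are assumptions. Running the question's duplicate-removing DELETE now also fills log_table automatically:
DELETE student_table
WHERE id IN (SELECT tb.id
             FROM student_table AS ta
             JOIN student_table AS tb ON ta.Name = tb.Name AND ta.Age = tb.Age
             WHERE ta.[Group] = 'A' AND tb.[Group] = 'B');
SELECT * FROM log_table; -- the deleted Group B duplicates, flagged 'DELETED'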

Related

Assigning even number of FK's in a table based on an even number

I have two tables where I have a situation like this:
Table1
column1
column2
column3
column4
column5
Table 2 structure:
Table2
Col1
Col2
Col3
Table1FK
My table 1 currently has 3 records in it, and table 2 will contain more records than table 1 (a one-to-many relationship between the two). I want to assign an even number of records from Table 2 to Table 1's FKs.
For example:
If table 2 has 20 records and if table 1 has 3 records
I will divide these two counts, which gives 6.66 in this case.
So the table1 PK's should be assigned like
6-6-8
or
7 7 6 (this one is more even)
And then the table 1 PK with identity, let's say 1500, would have 7 of its corresponding FKs in table 2, 7 for identity 1501, and 6 for identity 1502.
Starting point is that I should divide these:
var evenAmountOfFKs = table2.Count()/table1.Count();
What would be the next step here, and how could I achieve this ?
Can someone help me out?
No matter how weird the requirement seems, the math was fun.
First, we have to determine the number of parents:
DECLARE @NoParents int = (SELECT COUNT(*) FROM Table1);
Both parents and children can be numbered starting with 0, and then a child can be assigned to parent x with x = ChildNo % @NoParents:
DECLARE @NoParents int = (SELECT COUNT(*) FROM Table1);
WITH
Parents AS (
    SELECT column1, column2, column3, column4, column5,
           ROW_NUMBER() OVER(ORDER BY column1) - 1 AS ParentNo
    FROM Table1
),
Children AS (
    SELECT Col1, Col2, Col3,
           (ROW_NUMBER() OVER(ORDER BY Col1) - 1) % @NoParents AS ParentNo
    FROM Table2
)
SELECT p.column1, p.column2, p.column3, p.column4, p.column5, c.Col1, c.Col2, c.Col3
FROM Parents p
INNER JOIN Children c ON p.ParentNo = c.ParentNo;
This will produce an "even" assignment.
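As a quick sanity check with the question's hypothetical counts (20 children, 3 parents), the modulo assignment really does produce the 7-7-6 split; sys.all_objects is used here only as a convenient row source:
DECLARE @NoParents int = 3;
SELECT ChildNo % @NoParents AS ParentNo, COUNT(*) AS AssignedChildren
FROM (SELECT TOP (20) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS ChildNo
      FROM sys.all_objects) AS c
GROUP BY ChildNo % @NoParents
ORDER BY ParentNo;
-- returns 7, 7 and 6 children for parents 0, 1 and 2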

Strange order of line insertion

I have a stored procedure that inserts a row into a table. This table has an auto-incremented int primary key and a datetime2 column named CreationDate. I am calling it in a for loop from my C# code, and the loop is inside a transaction scope.
I ran the program twice, first with a for loop that iterated 6 times and then with a for loop that iterated 2 times. When I executed this select on SQL Server I got a strange result:
SELECT TOP 8
RequestId, CreationDate
FROM
PickupRequest
ORDER BY
CreationDate DESC
What I didn't get is the order of insertion: for example, the row with Id=58001 would have to have been inserted after the one with Id=58002, but that is not the case. Is that because I put my loop in a transaction scope? Or is the precision of datetime2 not enough?
It is a question of speed and statement scope as well...
Try this:
--This will create a @numbers table with 1 million numbers:
DECLARE @numbers TABLE(Nbr BIGINT);
WITH N(N) AS
(SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1)
,MoreN(N) AS
(SELECT 1 FROM N AS N1 CROSS JOIN N AS N2 CROSS JOIN N AS N3 CROSS JOIN N AS N4 CROSS JOIN N AS N5 CROSS JOIN N AS N6)
INSERT INTO @numbers(Nbr)
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM MoreN;
--This is a dummy table for inserts:
CREATE TABLE Dummy(ID INT IDENTITY, CreationDate DATETIME);
--Play around with the value for @Count. You can insert 1 million rows in one go. Although this runs a while, all rows will have the same datetime value:
--Use a small number here and below: both inserts still get the same time value
--Use a big count here and a small one below: the second insert will show a slightly later value
DECLARE @Count INT = 1000;
INSERT INTO Dummy (CreationDate)
SELECT GETDATE()
FROM (SELECT TOP(@Count) 1 FROM @numbers) AS X(Y);
--A second insert
SET @Count = 10;
INSERT INTO Dummy (CreationDate)
SELECT GETDATE()
FROM (SELECT TOP(@Count) 1 FROM @numbers) AS X(Y);
SELECT * FROM Dummy;
--Clean up
GO
DROP TABLE Dummy;
You did your insertions pretty fast so the actual CreationDate values inserted in one program run had the same values. In case you're using datetime type, all the insertions may well occur in one millisecond. So ORDER BY CreationDate DESC by itself does not guarantee the select order to be that of insertion.
To get the desired order you need to sort by the RequestId as well:
SELECT TOP 8 RequestId, CreationDate
FROM PickupRequest
ORDER BY CreationDate DESC, RequestId DESC

How to speed up a select on 3 joined tables

SELECT it.uid,it.Name,COALESCE(sum(i.Qty),0)-COALESCE(sum(s.Qty),0) as stock
FROM items it
left outer join sales_items s on it.uid=s.ItemID
left outer join inventory i on it.uid=i.uid
group by s.ItemID,i.uid,it.UID;
This is my query. It takes 59 seconds. How can I speed it up?
My tables:
items
UID    Item
5089   JAM100GMXDFRUT
5090   JAM200GMXDFRUT
5091   JAM500GMXDFRUT
5092   JAM800GMXDFRUT
sales_items
slno   ItemID   Item             Qty
9      5089     JAM100GMXDFRUT   5
10     5090     JAM200GMXDFRUT   2
11     5091     JAM500GMXDFRUT   1
inventory
slno   uid    Itemname         Qty
102    5089   JAM100GMXDFRUT   10
200    5091   JAM500GMXDFRUT   15
205    5092   JAM800GMXDFRUT   20
This table has more than 6000 rows.
Put indexes on the join columns: sales_items.ItemID and inventory.uid.
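For example (index names are made up; this plain CREATE INDEX syntax works in both MySQL and SQL Server):
CREATE INDEX idx_sales_items_itemid ON sales_items (ItemID);
CREATE INDEX idx_inventory_uid ON inventory (uid);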
If I were designing something like this, I would have a query and schema that look like this. Take note of my IDX1 indexes. I don't know about MySQL, but SQL Server will make use of those indexes for the SUM aggregation; this is called a covered query.
select Item.ItemID, Item.Name, IsNull(sum(inv.Quantity), 0) - IsNull(sum(s.Quantity), 0) as stock
from Item
Left Join Inventory inv
On Item.ItemID = inv.ItemID
Left Join Sales s
On Item.ItemID = s.ItemID
Group by Item.ItemID, Item.Name
Create Table dbo.Location
(
LocationID int not null identity constraint LocationPK primary key,
Name NVarChar(256) not null
)
Create Table dbo.Item
(
ItemID int not null identity constraint ItemPK primary key,
Name NVarChar(256) not null
);
Create Table dbo.Inventory
(
InventoryID int not null identity constraint InventoryPK primary key,
LocationID int not null constraint InventoryLocationFK references dbo.Location(LocationID),
ItemID int not null constraint InventoryItemFK references dbo.Item(ItemID),
Quantity int not null,
Constraint AK1 Unique(LocationID, ItemID)
);
Create Index InventoryIDX1 on dbo.Inventory(ItemID, Quantity);
Create Table dbo.Sales
(
SaleID int not null identity constraint SalesPK primary key,
ItemID int not null constraint SalesItemFK references dbo.Item(ItemID),
Quantity int not null
);
Create Index SalesIDX1 on dbo.Sales(ItemID, Quantity);
Aside from indexes on the tables to optimize the joins, you are also grouping by s.ItemID and i.uid instead of just it.UID, even though it.UID is the join basis and comes from the main FROM table of the query. If there is an index on that items column, use it and you are done; there is no need to reference the sales_items or inventory columns in the GROUP BY.
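A sketch of that trimmed GROUP BY (MySQL assumed, column names taken from the question; it.Name is added to the grouping so strict SQL modes accept it, and the Cartesian issue described next still applies):
SELECT it.uid, it.Name,
       COALESCE(SUM(i.Qty), 0) - COALESCE(SUM(s.Qty), 0) AS stock
FROM items it
LEFT OUTER JOIN sales_items s ON it.uid = s.ItemID
LEFT OUTER JOIN inventory i ON it.uid = i.uid
GROUP BY it.uid, it.Name;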
Now, that being said, another problem you will run into with the query as written is a Cartesian result if you have more than one record for the same item id in the sales_items and inventory tables you are summing. I have put together an extremely simplified example:
CREATE TABLE items (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(5) NOT NULL,
PRIMARY KEY (`uid`)
);
CREATE TABLE sales_items (
`sid` int(11) NOT NULL AUTO_INCREMENT,
`itemid` int(11),
`qty` int(5) NOT NULL,
PRIMARY KEY (`sid`),
KEY byItemAndQty (`itemid`,`qty`)
);
CREATE TABLE inventory (
`iid` int(11) NOT NULL AUTO_INCREMENT,
`uid` int(11) NOT NULL,
`qty` int(5) NOT NULL,
PRIMARY KEY (`iid`),
KEY byItemAndQty (`uid`,`qty`)
);
insert into items ( uid, name ) values ( 1, 'test' );
INSERT INTO sales_items ( sid, itemid, qty ) VALUES ( 1, 1, 1 );
INSERT INTO sales_items ( sid, itemid, qty ) VALUES ( 2, 1, 2 );
INSERT INTO inventory ( iid, uid, qty ) VALUES ( 1, 1, 13 );
INSERT INTO inventory ( iid, uid, qty ) VALUES ( 2, 1, 35 );
A simple single item.
sales_items has 2 records for item 1: qty 1 and 2, total 3.
inventory has 2 records for item 1: qty 13 and 35, total 38.
SELECT
it.uid,
it.Name,
sum(i.Qty) as iQty,
sum(s.Qty) as sQty,
COALESCE( sum(i.Qty),0) - COALESCE(sum(s.Qty),0) as stock
FROM
items it
left outer join sales_items s
on it.uid = s.ItemID
left outer join inventory i
on it.uid = i.uid
group by
it.uid
So, the result you MIGHT EXPECT from the query is
uid name iQty sQty stock
1 test 48 3 45
but in reality becomes
1 test 96 6 90
NOW... PLEASE TAKE NOTE OF MY ASSUMPTIONS, because I regularly see similar sum()s or count()s pulled from multiple tables like this. I am assuming the items table has one record per item.
The sales_items table actually has more columns than shown (such as sales details, with every date/sale tracked) and MAY CONTAIN multiple sales quantities for a given item id (which matches my sample). Similarly, the inventory table could have more than one record per item, such as purchases of incoming inventory tracked by date, and thus multiple records for a given item id (also matching my example).
To prevent this kind of Cartesian result, which can also increase speed, I would pre-aggregate each secondary table and join to that:
SELECT
it.uid,
it.Name,
i.iQty,
s.sQty,
COALESCE( i.iQty,0) - COALESCE(s.sQty,0) as stock
FROM
items it
left join ( select itemid, sum( qty ) as SQty
from sales_items
group by itemid ) s
on it.uid = s.ItemID
left join ( select uid, sum( qty ) as IQty
from inventory
group by uid ) i
on it.uid = i.uid
group by
it.uid
And you get the correct values of
uid name iQty sQty stock
1 test 48 3 45
Yes, this was only for a single item ID to prove the point, but it still applies to however many inventory items you have and to whatever sales/inventory records may (or may not) exist for each of them.

Recursively update values based on a row's parent id - SQL Server

I have the following table structure
id   parentID   count1
2    -1         1
3     2         1
4     2         0
5     3         1
6     5         0
I increase the count values from my source code, but I also need the increase to bubble up to each parent row until the parent ID is -1.
E.g. if I were to increase count1 on row ID #6 by 1, row ID #5 would increase by 1, ID #3 would increase by 1, and ID #2 would increase by 1.
Rows also get deleted, and then the opposite needs to happen, basically subtracting the deleted row's value from each parent.
Thanks in advance for your insight.
I'm using SQL Server 2008, and C# asp.net.
If you really just want to update the counts, you could write a stored procedure to do so:
create procedure usp_temp_update
(
    @id int,
    @value int = 1
)
as
begin
    with cte as (
        -- Take record
        select t.id, t.parentid from temp as t where t.id = @id
        union all
        -- And all parents recursively
        select t.id, t.parentid
        from cte as c
        inner join temp as t on t.id = c.parentid
    )
    update temp set
        cnt = cnt + @value
    where id in (select id from cte)
end
SQL FIDDLE EXAMPLE
So you could call it after you insert and delete rows. But if your count field depends only on the data in the table itself, I would suggest triggers that recalculate the values.
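For example (a hedged usage sketch against the procedure above):
-- increment row 6 and all of its ancestors by 1
EXEC usp_temp_update @id = 6, @value = 1;
-- before deleting row 6, pass a negative value so its ancestors are decremented
EXEC usp_temp_update @id = 6, @value = -1;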
You want to use a recursive CTE for this:
with cte as (
select id, id as parentid, 1 as level
from t
union all
select cte.id, t.parentid, cte.level + 1
from t join
cte
on t.id = cte.parentid
where cte.parentid <> -1
) --select parentid from cte where id = 6
update t
set count1 = count1 + 1
where id in (select parentid from cte where id = 6);
Here is the SQL Fiddle.

Selecting multiple rows from one row in SQL

I have the following output with me from multiple tables
id b c b e b g
abc 2 123 3 321 7 876
abd 2 456 3 452 7 234
abe 2 0 3 123 7 121
abf 2 NULL 3 535 7 1212
Now I want to insert these values into another table and the insert query for a single command is as follows:
insert into resulttable values (id,b,c), (id,b,e) etc.
For that I need to do a select such that it gives me
id,b,c
id,b,e etc
I don't mind getting rid of b too, as it can be selected using a C# query.
How can I achieve this using a single query in SQL? Again, please note it's not a table, it's output from different tables.
My query should look as follows: from the above I need to do something like
select b.a, b.c
union all
select b.d,b.e from (select a,c,d,e from <set of join>) b
But unfortunately that does not work
INSERT resulttable
SELECT id, b, c
FROM original
UNION
SELECT id, b, e
FROM original
Your example has several columns named 'b' which isn't allowed...
Here, #tmporigin refers to your original query that produces the data in the question. Just replace the table name with a subquery.
insert into resulttable
select
    o.id,
    case a.n when 1 then b1 when 2 then b2 else b3 end,
    case a.n when 1 then c when 2 then e else g end
from #tmporigin o
cross join (select 1 as n union all select 2 union all select 3) a
The original answer below uses a CTE and UNION ALL, which requires the CTE to be evaluated 3 times.
I have the following output with me from multiple tables
So set that query up as a Common Table Expression
;WITH CTE AS (
-- the query that produces that output
)
select id,b1,c from CTE
union all
select id,b2,e from CTE
union all
select id,b3,g from CTE
NOTE - Contrary to popular belief, your CTE, while conveniently written once, is run three times in the above query, once for each part of the UNION ALL.
NOTE ALSO that if you actually name 3 columns "b" (literally), there is no way to identify which b you are referring to in anything that tries to reference the results - in fact SQL Server will not let you use such a query in a CTE or subquery.
The following example shows how to perform the above, and (if you show the execution plan) it also reveals that the CTE is run 3 times! (The lines between ---- BELOW HERE and ---- ABOVE HERE are a mock of the original query that produces the output in the question.)
if object_id('tempdb..#eav') is not null drop table #eav
;
create table #eav (id char(3), b int, v int)
insert #eav select 'abc', 2, 123
insert #eav select 'abc', 3, 321
insert #eav select 'abc', 7, 876
insert #eav select 'abd', 2, 456
insert #eav select 'abd', 3, 452
insert #eav select 'abd', 7, 234
insert #eav select 'abe', 2, 0
insert #eav select 'abe', 3, 123
insert #eav select 'abe', 7, 121
insert #eav select 'abf', 3, 535
insert #eav select 'abf', 7, 1212
;with cte as (
---- BELOW HERE
select id.id, b1, b1.v c, b2, b2.v e, b3, b3.v g
from
(select distinct id, 2 as b1, 3 as b2, 7 as b3 from #eav) id
left join #eav b1 on b1.b=id.b1 and b1.id=id.id
left join #eav b2 on b2.b=id.b2 and b2.id=id.id
left join #eav b3 on b3.b=id.b3 and b3.id=id.id
---- ABOVE HERE
)
select b1, c from cte
union all
select b2, e from cte
union all
select b3, g from cte
order by b1
You would be better off storing the data into a temp table before doing the union all select.
Instead of this which does not work as you know
select b.a, b.c
union all
select b.d,b.e from (select a,c,d,e from <set of join>) b
You can do this. Union with repeated sub-select
select b.a, b.c from (select a,c,d,e from <set of join>) b
union all
select b.d, b.e from (select a,c,d,e from <set of join>) b
Or this. Repeated use of cte.
with cte as
(select a,c,d,e from <set of join>)
select b.a, b.c from cte b
union all
select b.d, b.e from cte b
Or use a temporary table variable.
declare @T table (a int, c int, d int, e int)
insert into @T
select a,c,d,e from <set of join>
select b.a, b.c from @T b
union all
select b.d, b.e from @T b
This code is not tested so there might be any number of typos in there.
I'm not sure if I understood your problem correctly, but I have been using something like this for some time:
let's say we have a table
ID Val1 Val2
1 A B
2 C D
to obtain a result like
ID Val
1 A
1 B
2 C
2 D
You can use a query:
select ID, case when i=1 then Val1 when i=2 then Val2 end as Val
from table
left join ( select 1 as i union all select 2 as i ) table_i on i=i
which will simply join the table with a subquery containing two values and create a Cartesian product. In effect, all rows will be doubled (or multiplied by however many values the subquery has). You can vary the number of values depending on how many versions of each row you need. Depending on the value of i, Val will be Val1 or Val2 from the original table. If you look at the execution plan, there will be a warning that the join has no join predicate (because of i=i), but that is fine - we want it.
This makes queries a bit large (in terms of text) because of all the CASE WHENs, but they are quite easy to read if formatted right. I needed it for awkward tables like "BigID, smallID1, smallID2...smallID11" that were spread across many columns, I don't know why.
Hope it helps.
Oh, I use a static table with 10000 numbers, so I just use
join tab10k on i<=10
to get 10x the rows.
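In other words (a hedged sketch: tab10k is assumed to be a one-column numbers table with i running from 1 to 10000, and MyTable stands for the example table above):
select ID,
       case when i = 1 then Val1 when i = 2 then Val2 end as Val
from MyTable
join tab10k on i <= 2
order by ID, i;
-- i <= 2 doubles every row; i <= 10 would give 10 copies per row, and so on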
I apologize for stupid formatting, I'm new here.
