I have a stored procedure that inserts a row into a table. The table has an auto-incremented int primary key and a datetime2 column named CreationDate. I call the procedure in a for loop from my C# code, and the loop runs inside a transaction scope.
I ran the program twice, first with a for loop that iterated 6 times and then with one that iterated 2 times. When I executed this SELECT on SQL Server, I got a strange result:
SELECT TOP 8
RequestId, CreationDate
FROM
PickupRequest
ORDER BY
CreationDate DESC
What I don't get is the order of insertion: for example, the row with Id=58001 should have been inserted after the one with Id=58002, but that is not the case. Is that because I put my loop inside a transaction scope, or is the precision of datetime2 not enough?
It is a question of speed and statement scope as well...
Try this:
--This will create a @numbers table variable with one million numbers:
DECLARE @numbers TABLE(Nbr BIGINT);
WITH N(N) AS
(SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1)
,MoreN(N) AS
(SELECT 1 FROM N AS N1 CROSS JOIN N AS N2 CROSS JOIN N AS N3 CROSS JOIN N AS N4 CROSS JOIN N AS N5 CROSS JOIN N AS N6)
INSERT INTO @numbers(Nbr)
SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM MoreN;
--This is a dummy table for inserts:
CREATE TABLE Dummy(ID INT IDENTITY,CreationDate DATETIME);
--Play around with the value for @Count. You can insert one million rows in one go. Although that takes a while, all of them get the same datetime value:
--Use a small number here and below: the time values are still identical.
--Use a big count here and a small one below to see a slightly later value on the second insert.
DECLARE @Count INT = 1000;
INSERT INTO Dummy (CreationDate)
SELECT GETDATE()
FROM (SELECT TOP(@Count) 1 FROM @numbers) AS X(Y);
--A second insert
SET @Count = 10;
INSERT INTO Dummy (CreationDate)
SELECT GETDATE()
FROM (SELECT TOP(@Count) 1 FROM @numbers) AS X(Y);
SELECT * FROM Dummy;
--Clean up
GO
DROP TABLE Dummy;
You did your insertions so fast that the CreationDate values within one program run were all the same. With the datetime type in particular, all of the insertions can easily fall within a single millisecond. So ORDER BY CreationDate DESC by itself does not guarantee that the result order matches the insertion order.
To get the desired order you need to sort by the RequestId as well:
SELECT TOP 8 RequestId, CreationDate
FROM PickupRequest
ORDER BY CreationDate DESC, RequestId DESC
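As a hedged aside: switching the column to a higher precision does not fully solve this, either. Even datetime2(7) values drawn from SYSDATETIME() can tie across a fast loop, because the underlying clock ticks more coarsely than the column can store. A minimal sketch to see this for yourself (the #Probe table and the row count are arbitrary):
CREATE TABLE #Probe (ID INT IDENTITY, CreationDate DATETIME2(7));

DECLARE @i INT = 0;
WHILE @i < 1000
BEGIN
    INSERT INTO #Probe (CreationDate) VALUES (SYSDATETIME());
    SET @i += 1;
END;

--Count how many rows share each timestamp; any count > 1 is a tie.
SELECT CreationDate, COUNT(*) AS RowsWithSameValue
FROM #Probe
GROUP BY CreationDate
HAVING COUNT(*) > 1;

DROP TABLE #Probe;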
Related
Given this query, if I want to pull the rank of a specific individual where I know their $name and $score, and return the rows above/below that rank (say +/- 4), how would I go about doing that?
$query = "SELECT @curRank := @curRank + 1 AS Rank,
                 uniqueID,
                 name,
                 score
          FROM scores, (SELECT @curRank := 0) r
          ORDER BY score DESC";
I'm coding in PHP, using MySQL, and C# in Unity. My game makes a call to the server and runs the PHP code. The goal is to echo the information and parse it back in the game.
Any help would be much appreciated :)
Based off of your :=, I'm assuming you are using PostgreSQL, correct? I'm more familiar with T-SQL syntax, but regardless, both PostgreSQL and T-SQL have windowing functions. You could implement something similar to the following (I left out the variables for you to fill in):
$query = "WITH scoreOrder
AS
(
SELECT uniqueID,
name,
score,
ROW_NUMBER() OVER (ORDER BY score DESC, uniqueID DESC) AS RowNum
FROM scores
)
SELECT ns.*
FROM scoreOrder ms --Your matching score
INNER JOIN scoreOrder ns --Your nearby scores
ON ms.name = /* your name variable */
AND ms.score = /* your score variable */
AND ns.RowNum BETWEEN ms.RowNum - /* your offset */ and ms.RowNum + /* your offset */";
Explanation: First, we're creating a common table expression called scoreOrder, and projecting a RowNum column for your scores. In short, ROW_NUMBER() OVER (ORDER BY score DESC, uniqueID DESC) is just saying, "I am returning the row number of this record ordered by score and uniqueID, both descending and in that order." Then, you join that CTE with itself... ms will be your score that you match with, and you join that with ns where the ns.RowNum will be between your ms.RowNum, plus or minus your offset.
There are a ton of other windowing functions. Here are some that could be more or less appropriate for your scenario (see the sketch after this list for how they differ):
ROW_NUMBER() - the row number of the record
RANK() - the rank of the record; ties share a rank and leave gaps (i.e., if 2nd place ties, you would have 1st, 2nd, 2nd, 4th)
DENSE_RANK() - same as RANK(), except that it fills in the gaps (i.e., if 2nd place ties, you would have 1st, 2nd, 2nd, 3rd)
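A small sketch showing the difference on tied scores (T-SQL syntax; the #scores table and its values are made up for illustration):
CREATE TABLE #scores (name VARCHAR(10), score INT);
INSERT INTO #scores VALUES ('a', 100), ('b', 90), ('c', 90), ('d', 80);

SELECT name, score,
       ROW_NUMBER() OVER (ORDER BY score DESC) AS RowNum,    -- 1, 2, 3, 4
       RANK()       OVER (ORDER BY score DESC) AS Rnk,       -- 1, 2, 2, 4
       DENSE_RANK() OVER (ORDER BY score DESC) AS DenseRnk   -- 1, 2, 2, 3
FROM #scores;

DROP TABLE #scores;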
For more info, check the PostgreSQL documentation on windowing functions and their tutorial
Update:
Unfortunately, MySQL does not support windowing functions or common table expressions. In your scenario, you will have to put the results of your previous query into a temp table and then do a similar join as demonstrated above. For example...
CREATE TEMPORARY TABLE IF NOT EXISTS allRankings AS
(
SELECT @curRank := @curRank + 1 AS Rank,
       uniqueID,
       name,
       score
FROM scores, (SELECT @curRank := 0) r
ORDER BY score DESC, uniqueID
);
SELECT r.*
FROM allRankings r
INNER JOIN allRankings myRank
ON r.Rank BETWEEN myRank.Rank - <your offset> AND myRank.Rank + <your offset>
AND myRank.name = <your name>
AND myRank.score = <your score>
ORDER by r.Rank;
Here is a SQLFiddle link for an example. (I'm not using a temp table on SQLFiddle because you have to build tables in the Build Schema window).
In our database we have a table that lacks an identity column. There is an Id column, but it is manually populated when a record is inputted. Any item with an Id over 90,000 is reserved and is populated globally across all customer databases.
I'm building a tool to handle bulk insertions into this table using Entity Framework. I need to figure out what the most efficient method of finding the first available Id is (under 90,000) on the fly without iterating over every single row. It is highly likely that in many of the databases, someone has simply selected a random number that wasn't taken and used it to insert the row.
What is my best recourse?
Edit
After seeing some of the solutions listed, I attempted to replicate the SQL logic in LINQ. I doubt it's perfect, but it seems incredibly fast and efficient.
var availableIds = Enumerable.Range(1, 89999)
                             .Except(db.Table.Where(n => n.Id <= 89999)
                                             .Select(n => n.Id))
                             .ToList();
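For comparison, a hedged set-based analogue of that LINQ that keeps all the work on the server (the sys.all_objects cross join is just a convenient row source, as in the answer below; YourTable stands in for the real table):
--Generate 1..89999, then remove the Ids already in use; what remains
--are the available Ids, smallest first.
WITH nums AS (
    SELECT TOP (89999) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
)
SELECT n FROM nums
EXCEPT
SELECT Id FROM YourTable WHERE Id <= 89999
ORDER BY n;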
Have you considered something like:
SELECT
min(RN) AS FirstAvailableID
FROM (
SELECT
row_number() OVER (ORDER BY Id) AS RN,
Id
FROM
YourTable
) x
WHERE
RN <> Id
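One edge case worth hedging against: when the Ids contain no gap (or the table is empty), MIN(RN) comes back NULL. A variant that also respects the 90,000 ceiling from the question and falls back to MAX(Id) + 1 might look like:
SELECT COALESCE(
    (SELECT MIN(x.RN)
     FROM (SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RN, Id
           FROM YourTable
           WHERE Id < 90000) AS x
     WHERE x.RN <> x.Id),
    --No gap found: take the next number after the current maximum.
    (SELECT COALESCE(MAX(Id), 0) + 1 FROM YourTable WHERE Id < 90000)
) AS FirstAvailableID;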
To answer your implied question of how to get a list of available numbers to use: easy, make a list of all possible numbers, then delete the ones that are in use.
--This builds a list of numbers from 1 to 89999
SELECT TOP (89999) n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id]))
INTO #AvailableNumbers
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);

CREATE UNIQUE CLUSTERED INDEX n ON #AvailableNumbers(n);

--Start a serializable transaction so we can be sure no one else grabs a number
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

--Remove numbers that are in use.
DELETE #AvailableNumbers WHERE n IN (SELECT Id FROM YourTable);

/*
Do your insert here using numbers from #AvailableNumbers
*/

COMMIT TRANSACTION;
Here is how you would do it via Entity Framework:
using(var context = new YourContext(connectionString))
using(var transaction = context.Database.BeginTransaction(IsolationLevel.Serializable))
{
var query = @"
SELECT TOP (89999) n = CONVERT(INT, ROW_NUMBER() OVER (ORDER BY s1.[object_id]))
INTO #AvailableNumbers
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);

CREATE UNIQUE CLUSTERED INDEX n ON #AvailableNumbers(n);

--Remove numbers that are in use.
DELETE #AvailableNumbers WHERE n IN (SELECT Id FROM YourTable);

--Select the numbers out to the result set.
SELECT n FROM #AvailableNumbers ORDER BY n;

DROP TABLE #AvailableNumbers;";
List<int> availableIDs = context.Database.SqlQuery<int>(query).ToList();
/*
Use the list of IDs here
*/
context.SaveChanges();
transaction.Commit();
}
getName_as_Rows is an array which contains some names.
I want to set an int value to 1 if a record is found in the database.
for(int i = 0; i<100; i++)
{
using (var command = new SqlCommand("select some column from some table where column = @Value", con1))
{
command.Parameters.AddWithValue("@Value", getName_as_Rows[i]);
con1.Open();
command.ExecuteNonQuery();
}
}
I am looking for:
bool recordexist;
if the above record exists, then bool = 1, else 0, within the loop.
I have to do some other stuff if the record exists.
To avoid making N queries to the database, which could be very expensive in terms of processing, network, and so forth, I suggest you join only once using a trick I learned. First you need a function in your database that splits a string into a table.
CREATE FUNCTION [DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "zero base" and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT 0 UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@pString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@pString,t.N,1) = @pDelimiter OR t.N = 0)
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY s.N1),
Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000))
FROM cteStart s
GO
Second, concatenate your 100 values into one delimited string:
'Value1,Value2,Value3,...'
In SQL Server you can then join those values with your table:
SELECT somecolumn FROM sometable t
INNER JOIN dbo.[DelimitedSplit8K](@DelimitedString, ',') v ON v.Item = t.somecolumn
So you can check 100 strings at a time with only one query.
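And if you need the 1/0 flag per value that the question asks for, a LEFT JOIN variant (a sketch reusing the same hypothetical sometable/somecolumn names) returns every input value with an existence flag instead of only the matches:
DECLARE @DelimitedString VARCHAR(8000) = 'Value1,Value2,Value3';

SELECT v.Item AS InputValue,
       --1 when a matching row exists, 0 otherwise
       CASE WHEN t.somecolumn IS NULL THEN 0 ELSE 1 END AS RecordExists
FROM dbo.[DelimitedSplit8K](@DelimitedString, ',') v
LEFT JOIN sometable t ON t.somecolumn = v.Item;
If somecolumn is not unique, add a GROUP BY v.Item (with MAX on the flag) to keep one row per input value.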
Use var result = command.ExecuteScalar() and check if result != null.
But a better option than looping would be to use a single SELECT statement like
SELECT COUNT(*) FROM TABLE WHERE COLUMNVAL >= 0 AND COLUMNVAL < 100
and run ExecuteScalar on that; if the value is > 0, set your variable to 1.
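A hedged alternative to COUNT(*): EXISTS can stop at the first matching row, so it is often cheaper when all you need is a yes/no answer. Using the same placeholder table/column names as above:
SELECT CASE WHEN EXISTS (SELECT 1
                         FROM [TABLE]   --placeholder name from the statement above
                         WHERE COLUMNVAL >= 0 AND COLUMNVAL < 100)
            THEN 1 ELSE 0
       END AS RecordExists;
ExecuteScalar on that returns 1 or 0 directly, which maps straight onto the flag the question asks for.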
I have the following table structure
| id | parentID | count1 |
|----|----------|--------|
| 2  | -1       | 1      |
| 3  | 2        | 1      |
| 4  | 2        | 0      |
| 5  | 3        | 1      |
| 6  | 5        | 0      |
I increase count values from my source code, but I also need the increase in value to bubble up to each parent row until the parent id is -1.
E.g., if I were to increase count1 on row ID #6 by 1, row ID #5 would increase by 1, ID #3 would increase by 1, and ID #2 would increase by 1.
Rows also get deleted, and the opposite would need to happen, basically subtracting the deleted row's value from each parent.
Thanks in advance for your insight.
I'm using SQL Server 2008 and C# ASP.NET.
If you really just want to update the counts, you could write a stored procedure to do so:
create procedure usp_temp_update
(
    @id int,
    @value int = 1
)
as
begin
    with cte as (
        -- Take the record itself
        select t.id, t.parentid from temp as t where t.id = @id
        union all
        -- And all of its parents, recursively
        select t.id, t.parentid
        from cte as c
        inner join temp as t on t.id = c.parentid
    )
    update temp set
        count1 = count1 + @value
    where id in (select id from cte)
end
SQL FIDDLE EXAMPLE
So you could call it after you insert and delete rows. But if your count field depends only on your table, I would suggest creating triggers that recalculate the values.
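For the delete side, here is a hedged sketch of that trigger idea, reusing usp_temp_update with a negated value (the trigger name is made up, and it assumes the same temp(id, parentid, count1) table as the procedure):
CREATE TRIGGER trg_temp_after_delete ON temp
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @parentid INT, @count1 INT, @neg INT;
    --Walk the deleted rows one at a time for clarity; a set-based
    --version would be faster for bulk deletes.
    DECLARE del CURSOR LOCAL FAST_FORWARD FOR
        SELECT parentid, count1 FROM deleted WHERE parentid <> -1;
    OPEN del;
    FETCH NEXT FROM del INTO @parentid, @count1;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @neg = -@count1;
        --Subtract the deleted row's value from its parent and all ancestors.
        EXEC usp_temp_update @id = @parentid, @value = @neg;
        FETCH NEXT FROM del INTO @parentid, @count1;
    END;
    CLOSE del;
    DEALLOCATE del;
END;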
You want to use a recursive CTE for this:
with cte as (
select id, id as parentid, 1 as level
from t
union all
select cte.id, t.parentid, cte.level + 1
from t join
cte
on t.id = cte.parentid
where cte.parentid <> -1
) --select parentid from cte where id = 6
update t
set count1 = count1 + 1
where id in (select parentid from cte where id = 6);
Here is the SQL Fiddle.
I am trying to show the last order for a specific customer in a grid view. What I did shows all orders for the customer, but I need only the last one.
Here is my SQL code:
SELECT orders.order_id, orders.order_date,
orders.payment_type, orders.cardnumber, packages.Package_name,
orders.package_id, packages.package_price
FROM orders INNER JOIN packages ON orders.package_id = packages.Package_ID
WHERE (orders.username = @username)
@username gets its value from a cookie. Now, how can I choose only the last order for a given cookie value, "Tony" for example?
To generalize (and fix a little bit) Mitch's answer, you need a SELECT embellished with TOP(@N) and ORDER BY ... DESC. Note that I use TOP(@N), not TOP N, which means you can pass @N as an argument to the stored procedure and return, say, not 1 but the N most recent orders:
CREATE PROCEDURE ...
@N int
...
SELECT TOP(@N) ...
ORDER BY ... DESC
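Filled in against the orders/packages schema used below, such a procedure might look like this sketch (the procedure name and the parameter's type/length are assumptions):
CREATE PROCEDURE GetLastOrders   -- hypothetical name
    @username NVARCHAR(50),      -- assumed type/length
    @N INT = 1
AS
BEGIN
    SELECT TOP(@N)
           orders.order_id, orders.order_date, orders.payment_type,
           orders.cardnumber, packages.Package_name,
           orders.package_id, packages.package_price
    FROM orders
    INNER JOIN packages ON orders.package_id = packages.Package_ID
    WHERE orders.username = @username
    ORDER BY orders.order_date DESC;
END;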
SELECT top 1
orders.order_id,
orders.order_date,
orders.payment_type,
orders.cardnumber,
packages.Package_name,
orders.package_id,
packages.package_price
FROM orders
INNER JOIN packages ON orders.package_id = packages.Package_ID
WHERE (orders.username = @username)
ORDER BY orders.order_date DESC
In fact, assuming orders.order_id is an identity column:
SELECT top 1
orders.order_id,
orders.order_date,
orders.payment_type,
orders.cardnumber,
packages.Package_name,
orders.package_id,
packages.package_price
FROM orders
INNER JOIN packages ON orders.package_id = packages.Package_ID
WHERE (orders.username = @username)
ORDER BY orders.order_id DESC