Reset rowid in SQLite after deleting a row - C#

When I delete a row from my table, its rowID goes with it, which leaves gaps so the rowIDs are no longer consecutive.
In that case I want to reset the rowIDs so that the numbering is sorted and consecutive again.
I tried to do this with ALTER TABLE:
ALTER TABLE my_table DROP ID;
ALTER TABLE my_table AUTO_INCREMENT = 1;
ALTER TABLE my_table ADD ID int UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
... but this doesn't work. How can I get the consecutive numbers?

It is technically possible, if somewhat inadvisable - see Set start value for AUTOINCREMENT in SQLite. But it sounds like you are using autoincrement wrong: you definitely should not have to change the id value of every row each time a row is deleted from the table. For one thing, it makes joining other tables in a query very difficult, and it will be horrendously slow on a large table. Why does it matter if the ids are 1, 2, 4 and not 1, 2, 3 anyway? You can still ORDER BY them the same way.
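For reference, the linked approach boils down to editing SQLite's internal sqlite_sequence table, which only exists for tables declared with AUTOINCREMENT. A minimal sketch, assuming the my_table name from the question and that it uses AUTOINCREMENT:
-- lower the stored counter so the next insert continues from the current highest rowid
UPDATE sqlite_sequence
SET seq = (SELECT IFNULL(MAX(rowid), 0) FROM my_table)
WHERE name = 'my_table';
-- or remove the entry entirely to restart numbering from 1 once the table is empty
DELETE FROM sqlite_sequence WHERE name = 'my_table';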

Related

SQL Server: allow duplicates in any column, but not all columns

I've searched through numerous threads trying to find an answer to this, but every answer I've found suggests using a unique constraint on a single column or on multiple columns.
My problem is that I'm writing an application in C# with a SQL Server back end. One of the features is to allow a user to import a .CSV file into the database after a little pre-processing. I need to find the quickest method of preventing the user from importing the same data more than once. The data will look something like:
ID -- will be auto-generated in SQL Server (PK)
Date Time(datetime)
Machine(nchar)
...
...
...
Name(nchar)
Age(int)
I want to allow any number of the columns to hold duplicate values, as long as the entire record is not a duplicate.
I was thinking of creating another column in the database, obtained by hashing all of the columns together, and making it unique, but I wasn't sure whether that would be the most efficient method or whether the resulting hash would be guaranteed to be unique. The CSV files will only be around 60 MB, but there will be tens of thousands of them.
Any help would be appreciated.
Thanks
You should be able to resolve this by creating a unique constraint which includes all the columns.
create table #a (col1 varchar(10), col2 varchar(10))
ALTER TABLE #a
ADD CONSTRAINT UQ UNIQUE NONCLUSTERED
(col1, col2)
-- Works, duplicate entries in columns
insert into #a (col1, col2)
values ('a', 'b')
,('a', 'c')
,('b', 'c')
-- Fails, full duplicate record:
insert into #a (col1, col2)
values ('a1', 'b1')
,('a1', 'b1')
The code below can work to ensure that you don't duplicate the [Date Time], Machine, [Name] and Age columns when you insert the data.
It's important to ensure that, at the time the code runs, each row of the incoming dataset has a unique ID. The query simply skips any rows whose ID is selected in the subquery, i.e. rows where all four of the other values are already present in the destination table.
INSERT INTO MAIN_TABLE ([Date Time],Machine,[Name],Age)
SELECT [Date Time],Machine,[Name],Age
FROM IMPORT_TABLE WHERE ID NOT IN
(
SELECT I.ID FROM IMPORT_TABLE I INNER JOIN MAIN_TABLE M
ON I.[Date Time]=M.[Date Time]
AND I.Machine=M.Machine
AND I.[Name]=M.[Name]
AND I.Age=M.Age
)

How to reorder auto increment column values, after deleting any row other than last row?

I want to know whether there is any SQL query for ASP.NET/C# that can just rearrange auto-increment column values.
e.g., deleting 2 from the table:
sno
1
2
3
4
gives:
sno
1
3
4
but I want this re-arrangement:
sno
1
2
3
Note:
I don't want to do the numbering manually.
The query to create the table is like this:
CREATE TABLE uid (sno int IDENTITY(1,1) PRIMARY KEY, qpname nvarchar(500), mob int, tm int)
Let your table be named Parent and let the table that will hold the backup be called Backup. They should have identical columns.
INSERT INTO dbo.Backup
SELECT * FROM dbo.Parent
Now truncate the parent table
TRUNCATE TABLE dbo.Parent
Now you can insert the data back using the first command with the table names reversed.
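A sketch of that final step, using the uid columns from the question's CREATE TABLE; sno, the IDENTITY column, is left out of the column list so SQL Server regenerates consecutive values after TRUNCATE has reset the seed (this assumes Backup stores sno as a plain int, or that IDENTITY_INSERT was handled when backing up):
INSERT INTO dbo.Parent (qpname, mob, tm)
SELECT qpname, mob, tm
FROM dbo.Backup;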
Remember that this may not work in all cases. You may have ON DELETE CASCADE enabled, and if that is the case you would also lose data from other tables that reference the parent table. You should never use this approach if any foreign key references this table.
The following queries should be run one after the other to get this working. This can easily be done in C# by executing them with a generic ExecuteNonQuery().
DELETE FROM TBL1 WHERE sno = @sno;
UPDATE TBL1
SET sno = sno - 1
WHERE sno > @sno;

Good practice to avoid duplicate records

I have an application where users can add, update, and delete records, and I wanted to know the best ways to avoid duplicate records. To avoid duplicates in this application I created an index on the table; is that good practice, or are there other approaches?
There are a few ways to do this. If you have a unique index on a field and you try to insert a duplicate value, SQL Server will throw an error. My preferred way is to test for existence before the insert by using:
IF NOT EXISTS (SELECT ID FROM MyTable WHERE MyField = @ValueToBeInserted)
BEGIN
INSERT INTO MyTable (Field1, Field2) VALUES (@Value1, @Value2)
END
You can also return a value to let you know whether the INSERT took place by adding an ELSE to the above code.
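A sketch of that variation, using the same illustrative names as above:
IF NOT EXISTS (SELECT ID FROM MyTable WHERE MyField = @ValueToBeInserted)
BEGIN
INSERT INTO MyTable (Field1, Field2) VALUES (@Value1, @Value2)
SELECT 1 AS Inserted -- the row was inserted
END
ELSE
BEGIN
SELECT 0 AS Inserted -- duplicate, nothing inserted
END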
If you choose to index a field, you can set IGNORE_DUP_KEY to simply ignore any duplicate inserts. If you insert multiple rows, any duplicates are ignored and the non-duplicates are still inserted.
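For example, a unique index with that option might look like this (a sketch, reusing the MyTable/MyField names from above):
CREATE UNIQUE NONCLUSTERED INDEX IX_MyTable_MyField
ON MyTable (MyField)
WITH (IGNORE_DUP_KEY = ON);
-- duplicate values of MyField are now discarded with a warning instead of raising an error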
You can use UNIQUE constraints on columns or on a set of columns that you don't want to be duplicated; see also http://www.w3schools.com/sql/sql_unique.asp.
Here is an example for both a single-column and a multi-column unique constraint:
CREATE TABLE [Person]
(
…
[SSN] VARCHAR(…) UNIQUE, -- only works for single-column UNIQUE constraint
…
[Name] NVARCHAR(…),
[DateOfBirth] DATE,
…
UNIQUE ([Name], [DateOfBirth]) -- works for any number of columns
)
In my opinion, an id column is almost compulsory for a table. To avoid duplicates when inserting a row, you can simply use:
INSERT IGNORE INTO Table(id, name) VALUES (null, "blah")
This works in MySQL; I'm not sure about SQL Server.
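SQL Server has no INSERT IGNORE, but a rough equivalent is a conditional insert; a sketch reusing the Table/name example above:
INSERT INTO [Table] (name)
SELECT 'blah'
WHERE NOT EXISTS (SELECT 1 FROM [Table] WHERE name = 'blah');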

Improve SQL performance for populating List<T>

I have 200,000 records in a database with the PK as a varchar(50)
Every 5 minutes I do a SELECT COUNT(*) FROM TABLE
If that result is greater than the List.Count I then execute
"SELECT * FROM TABLE WHERE PRIMARYKEY NOT IN ( " + myList.ToCSVString() + ")"
The reason I do this is because records are being added to the table via another process.
This query takes a long time to run, and I also believe it's throwing an OutOfMemoryException.
Is there a better way to implement this?
Thanks
SQL Server has a solution for this: add a timestamp column. Every time you touch any row in the table, its timestamp value grows.
Add an index for the timestamp column.
Instead of just storing ids in memory, store ids and last timestamp.
To update:
select max timestamp
select all the rows between old max timestamp and current max timestamp
merge that into the list
Handling deletions is a bit trickier, but it can be done if you tombstone rows instead of deleting them.
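A sketch of the setup and the polling query in T-SQL (the MyTable and LastChange names are illustrative; timestamp is called rowversion in current SQL Server versions):
-- one-time setup
ALTER TABLE dbo.MyTable ADD LastChange ROWVERSION;
CREATE NONCLUSTERED INDEX IX_MyTable_LastChange ON dbo.MyTable (LastChange);
-- every 5 minutes: fetch only rows touched since the last poll,
-- where @lastSeenVersion holds MAX(LastChange) from the previous run
SELECT *
FROM dbo.MyTable
WHERE LastChange > @lastSeenVersion;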
Can you change the table?
If so, you might want to add a new auto-incremented column, TableId, to serve as the PK.
On each SELECT, save the max id, and on the next select add WHERE TableId > maxId.
Create an INT PK, and use something like this:
"SELECT * FROM TABLE WHERE MY_ID > " + myList.Last().Id;
If you can't change your PK, create another column of a date type with NOW() as the default value, and use it to query for new items.
Create another table in the database with a single column for the primary key. When your application starts, insert the PKs into this table. Then you can detect added keys directly with a select rather than checking the count:
select PrimaryKey from Table where PrimaryKey not in (select PrimaryKey from OtherTable)
If this CSV list is large, I would recommend loading it into a temp table, putting an index on it, and doing a left join where null:
select tbl.*
from table tbl
left join #tmpTable tmp on tbl.primarykey = tmp.primarykey
where tmp.primarykey is null
Edit: a primary key should not be a varchar. It should almost always be an auto-incrementing int/bigint; that would have made this a lot easier: select * from table where primarykey > @lastknownkey
Smack the DB programmer who designed this.. :p
This design would also cause index fragmentation because rows won't be inserted in a linear fashion.

Problem inserting data into a table in 24-073110-XX format

I need help inserting an id into the database in ASP.NET MVC (C#). Here the id is the primary key, and it should be in the format 24-073110-XX, where XX represents a numeric value that should be incremented by 1.
How should I insert the id in this format?
As Rob said - don't store the whole big identifier in your table - just store the part that changes - the consecutive number.
If you really need that whole identifier in your table, e.g. for displaying it, you could use a computed column:
ALTER TABLE dbo.MyTable
ADD DisplayID AS '24-073110-' + RIGHT('00' + CAST(ID AS VARCHAR(2)), 2) PERSISTED
This way, your INT IDENTITY will be used as an INT and always contains the numerical value, and it will be automatically incremented by SQL Server.
Your DisplayID field will then contain values like:
ID DisplayID
1 24-073110-01
2 24-073110-02
12 24-073110-12
13 24-073110-13
21 24-073110-21
Since it's a persisted field, it's now part of your table, and you can query on it, and even put an index on it to make queries faster:
SELECT (fields) FROM dbo.MyTable WHERE DisplayID = '24-073110-59'
Update:
I would definitely not use DisplayID as your primary key - that's what the ID IDENTITY column is great for.
Creating an index on DisplayID is no different from creating an index on any other column in your table, really:
CREATE NONCLUSTERED INDEX SomeIndex ON dbo.MyTable(DisplayID)
If the 24-073110- part of the data is always going to be the same, there's little to no point in storing it in the database. Given that you've said that the XX component is a numeric value that increments by one, I'd suggest having your table created similarly to this:
CREATE TABLE [dbo].[MyTable]
(
MyTableId INT IDENTITY(1,1) NOT NULL,
/*
Other columns go here
*/
)
This way, you can let the database worry about inserting unique automatically incrementing values for your primary key.
