I have a SQL table that stores various data, with an integer primary key that is incremented by 1 each time new data is entered. As long as we keep adding rows it works fine, but when we delete a row in the middle or at the end it causes problems.
For example, I have added 5 rows to the table, so the sr_num column holds the values 1 through 5. When I delete the 4th record, the sr_num column is left like this: 1, 2, 3, 5.
I want it to be 1, 2, 3, 4: as soon as I delete the 4th entry, I want the 5th one to take the 4th position and the 4th number as well.
This should happen for all rows after the deleted one.
No. That is not what your primary key is for. It exists only for logical reference, to guarantee uniqueness; you should mentally ignore the fact that it uses an integer. @Adriano and @marc_s are both correct: let go of the idea that you could or should renumber your primary key values. There are rare occasions when you might consider it, but this is not one of them.
Instead, you could set up a query (or view) that uses ROW_NUMBER() (as @Adriano mentioned). Then you will have your consecutive numbers without touching your primary key values. People usually refer to this as an ordinal column, or simply Ord.
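As a minimal sketch of that idea (the view and table names here are hypothetical; sr_num is taken from the question):
-- sr_num stays untouched; Ord is recomputed on every read
CREATE VIEW dbo.MyTableOrdered AS
SELECT sr_num,
       ROW_NUMBER() OVER (ORDER BY sr_num) AS Ord
FROM dbo.MyTable;
After deleting sr_num = 4, this view would show Ord values 1, 2, 3, 4 while the stored keys remain 1, 2, 3, 5.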
What you want to do is a bad idea.
For example: if sr_num is referenced by a foreign key in another table, then once you update sr_num you also need to update that other table with the same value.
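Conceptually, every renumbering of the parent row drags the referencing tables along with it. A rough sketch (the Orders table and its sr_num_fk column are hypothetical, and with an enforced constraint you would additionally need ON UPDATE CASCADE or similar):
UPDATE MyTable SET sr_num = 4 WHERE sr_num = 5;       -- renumber the parent row
UPDATE Orders  SET sr_num_fk = 4 WHERE sr_num_fk = 5; -- keep the child table in sync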
Related
I have recently been working with Entity Framework Core and I have a question about the relationship between the primary key and the indexes.
To be more concrete, I found out that in a table with a composite primary key, an index is created for the second property of the key.
You can see an example here.
Can you explain whether I should manually create another index for the first property? Or is a clustered index already created for that first property?
Generally, an index on a set of columns can be used even if a query only searches some of those columns, with the restriction that the query must use the indexed columns from left to right, with no gaps.
Thus if the columns A, B, C, D are indexed together, this index can still be used to answer queries that filter on A; on A and B; or on A, B and C.
Thus you don't need to index NoteID separately, because the index that backs the primary key (NoteID, CategoryID) can be used by queries asking only for NoteID. It cannot, however, be used to answer queries asking only for CategoryID, hence the separate index being created.
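As a sketch (the Notes table name is assumed from the key columns in the question):
-- can use the (NoteID, CategoryID) primary key index: the leading column is present
SELECT * FROM Notes WHERE NoteID = 42;
SELECT * FROM Notes WHERE NoteID = 42 AND CategoryID = 7;
-- cannot seek on that index: the leading column is missing, hence the extra index on CategoryID
SELECT * FROM Notes WHERE CategoryID = 7;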
As an aside, you might find, in some cases, that you can supply values in a WHERE clause that have no purpose other than to encourage use of an index that covers them. Suppose, for example, that a table has an index on Name, Gender, Age, and you want all 20-year-old people named Steven. If you can reasonably assert that Steven is always male, you can write WHERE Name = 'Steven' AND Gender = 'M' AND Age = 20; even though Gender = 'M' is redundant, specifying it lets the DB engine use that index. Omitting it means the DB has a much harder job of deciding whether to use the index or not.
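For instance, assuming a hypothetical People table carrying that (Name, Gender, Age) index:
-- Gender = 'M' is logically redundant, but it keeps the predicate aligned
-- with the (Name, Gender, Age) index from left to right
SELECT * FROM People
WHERE Name = 'Steven' AND Gender = 'M' AND Age = 20;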
You can also re-arrange index ordering to help your application perform, and give the DB fewer indexes to maintain. If all your queries will only ever ask for A, A+C, or A+B+C, it would be better to specify the index columns in the order A, C, B; then a single index can cover all the queries, rather than having to maintain an index on A+B+C and another on A+C.
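A sketch of that consolidation (MyTable and the column names are placeholders):
-- one index in the order (A, C, B) serves all three query shapes:
--   WHERE A = ...
--   WHERE A = ... AND C = ...
--   WHERE A = ... AND C = ... AND B = ...
CREATE INDEX IX_MyTable_A_C_B ON MyTable (A, C, B);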
You don't need to create an index on NoteID because it is the first column of the primary key. Generally, you want all foreign keys to be the first column in at least one index. In your case, NoteID is the first column in the primary key which acts as a clustered unique index.
I have a SQL Server database with a table. Its columns are:
1stAP_TB, 2ndAP_TB, 3rdAP_TB, 4thAP_TB, 1steng_TB, 2ndeng_TB, 3rdeng_TB, 4theng_TB
The values all sit in the same row, and each number is computed individually in its specific column. Now I need to know how I am going to get the average of 1stAP_TB, 2ndAP_TB, 3rdAP_TB and 4thAP_TB while they are laid out like this.
Also, multiple records will be saved in the database. I am using the C# programming language.
Try the methods below.
create table aveexample
(a1stAP_TB int,
a2ndAP_TB int,
a3rdAP_TB int,
a4thAP_TB int,
a1steng_TB int,
a2ndeng_TB int,
a3rdeng_TB int,
a4theng_TB int
)
Sample data
insert into aveexample values(1,2,3,4,5,6,7,8)
insert into aveexample values(11,22,33,44,55,66,77,78)
insert into aveexample values(2,3,1,4,10,10,45,5)
Method 1
-- unpivot the eight columns with a table value constructor, then average them per row
select *, (select AVG(totaldata)
           from (values (a1stAP_TB), (a2ndAP_TB), (a3rdAP_TB), (a4thAP_TB),
                        (a1steng_TB), (a2ndeng_TB), (a3rdeng_TB), (a4theng_TB)
                ) total(totaldata)) as average
from aveexample
Method 2
-- add the eight columns and divide by 8; note that with int columns this is
-- integer division, so multiply by 1.0 first if you need a decimal average
select (a1stAP_TB + a2ndAP_TB + a3rdAP_TB + a4thAP_TB +
        a1steng_TB + a2ndeng_TB + a3rdeng_TB + a4theng_TB) / 8 as Average
from aveexample
It is difficult to give concrete advice given the very limited description in the question, but from the description and comments so far, it seems to me like the database needs to be redesigned to better fit your requirements. First, you have no ID field, so there is no way to differentiate one row from the next. Then, what you are left with is a series of repeated values. The clue here is that you have "1st", "2nd", "3rd" in the column names. That's probably a sign that those columns need to be moved into rows of a related table. It may not instantly seem to be the best approach, but this is called "First Normal Form" and is a typical best practice with SQL databases. See also Database Normalization Basics.
It seems to me that what you have here is some entity (which you haven't mentioned in your question) that has a number of values associated with it. The 'entity' here should be given a unique ID and then all of the values for that entity stored with its ID.
You might have a table with the following columns:
CREATE TABLE MyItems (
    ID int NOT NULL,
    Sequence int NOT NULL,
    Value int NOT NULL,
    CONSTRAINT PK_MyValues_ID_Sequence PRIMARY KEY (ID, Sequence)
)
Note: ID + sequence forms the unique primary key for the table and makes every row unique. This also lets you keep track of what order the items were added in. This may or may not be important to you but every table should probably have a unique primary key.
Your data table would then look something like this (the example represents two different entities, the first having 4 values and the second having 3 values):
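(The specific values below are only an illustration.)
ID  Sequence  Value
1   1         10
1   2         20
1   3         30
1   4         40
2   1         5
2   2         15
2   3         25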
It's difficult to show a sensible example without knowing more about the application and what it does... but with this table design you have a basis from which to add values one at a time, as you said you needed, and a way to query them back. You can use grouping to produce things like totals and averages, or you can do that in code by iterating over the results of a query or in a LINQ statement.
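For example, a grouped query over this table might look like this sketch:
-- total and average of the values stored for each entity
SELECT ID, SUM(Value) AS Total, AVG(Value) AS Average
FROM MyItems
GROUP BY ID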
You can then compute the average for an entity of a given ID using a LINQ query along the lines of:
var average = MyItems.Where(p=>p.ID == 1).Average(q=>q.Value);
As an example of the flexibility of this sort of approach, you could just as easily compute the average of every second value entered across the entire database:
var averageOfSecondItems = MyItems.Where(p => p.Sequence == 2).Average(q => q.Value);
The example I've shown deals with one type of value. In your question it appears that you might have two different types of value. There are several ways you could handle that - for example you could add another column to the table if the values are always entered in pairs, or you could create a second table to hold the separate values. Again, it's hard to make a recommendation based on the limited information given.
If putting your data into First Normal Form seems like a lot of work, then your application might be a better fit for a document database ("NoSQL" database), but that is really a different question. In the question, a SQL database was specified so I've concentrated on that.
With LINQ, I'm trying to delete a row selected in a datagrid from the database (created with code first) using db.Dishes.Remove(Dish);
But when I delete the item and then insert a new one, the primary key (id) of the new item "jumps" a value.
E.g.
1 Shoes
2 Jeans //I delete this item
When adding a new item:
1 Shoes
3 T-Shirt //the id jumps a value
I've also tried this in my DBContext.cs:
modelBuilder.Entity<Cart>()
.HasOptional(i => i.Item)
.WithMany()
.WillCascadeOnDelete(true);
But it's not working
Is there a better way to delete an item from database?
The thing is that when we use DELETE it removes the row from the table, but the counter is not changed (if the deleted row has an auto-increment PK); see DELETE vs TRUNCATE.
So, if you want to reuse the key value then you could do something along these lines:
1) Handle the auto-increment part of the key in your own code.
2) If you have access to the DB, or want to query it, something like the following might help (SQL Server):
DBCC CHECKIDENT ('tablename', RESEED, newseed)
To do this from code you could, after the delete, run:
db.ExecuteCommand("DBCC CHECKIDENT('tablename', RESEED, newseed);")
where 'newseed' is the id of the deleted row, e.g. if newseed is 0 then the next insert will get 1, and if it is 10 then the next insert will get 11. To get the new seed value you could also read the max id value currently in your DB and work from there. Better check out what approaches you can take if you decide to go down that road.
From Reset autoincrement in SQL Server and how to use it in code.
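A sketch of the "reseed from the current max id" idea (tablename and id are placeholders; SQL Server syntax):
-- the next identity value handed out will be @newseed + 1
DECLARE @newseed int = (SELECT ISNULL(MAX(id), 0) FROM tablename);
DBCC CHECKIDENT ('tablename', RESEED, @newseed);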
If your primary key is an auto-increment integer, you cannot avoid this behavior; it is simply how the database works. If you want to control the int value, do NOT make it the primary key and do not use auto-increment. Instead, use a uniqueidentifier as your primary key and make your int a normal field. Then, when you create your new records, you need a robust mechanism to get the next index, lock it so nobody else can steal it, and then write your record.
This is not trivial in a multi-threaded environment! You should do some research on the topic and come up with a good scheme. Personally, I'd NEVER attempt to do this and would use a repeatable process to generate numbers that are non-sequential or unique to a thread.
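For what it's worth, a rough sketch of one such scheme (all names here are hypothetical, and it glosses over much of the care the multi-threading warning above calls for):
-- the uniqueidentifier is the real primary key; DisplayNumber is just data
CREATE TABLE Items (
    ItemId uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    DisplayNumber int NOT NULL,
    Name nvarchar(100) NOT NULL
);
-- one-row counter table; a single UPDATE claims the next number atomically
CREATE TABLE ItemCounter (NextNumber int NOT NULL);
INSERT INTO ItemCounter VALUES (0);

DECLARE @n int;
UPDATE ItemCounter SET @n = NextNumber = NextNumber + 1;  -- @n receives the newly claimed value
INSERT INTO Items (DisplayNumber, Name) VALUES (@n, N'Example');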
The primary key has to be unique (by definition), and you have also defined it as an identity column.
So when you delete a row and create a new one, that new one will take the next available key (3 in your case).
If you don't want this behaviour you will have to manage the uniqueness of the primary key yourself.
I am creating a survey. It is long enough that I want to give people a chance to save what they have so far, and I am wondering what the best practice is for saving the data. Do I turn off foreign key constraints, so that if they haven't selected everything yet the foreign key constraint errors are ignored? In this example I use an ID to link the documents table to the table that holds what they have selected; if they haven't selected a document yet, a -1 is inserted as a placeholder. Or do I create a second table to hold the saved-place data? Or is there a third option?
There is a 3rd option. You can generate the primary key right when the user begins the survey. There are two ways to do this:
1) Generate a database record and read back the primary key (assumes it's generated by the database); see the sketch below.
2) Change the primary key to be a GUID and simply generate a GUID in code.
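A minimal sketch of option 1 (the Surveys table, its columns, and the database-generated SurveyId are hypothetical; SQL Server syntax):
-- create the row as soon as the user starts, and read the generated key back
INSERT INTO Surveys (StartedAt)
OUTPUT INSERTED.SurveyId
VALUES (GETDATE());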
OK, using -1 this way means that you have to have a document in the documents table with an id of -1. If you don't, a better structure would be to define the field as allowing nulls; then you pass in a null value.
You might want to read this:
Can a foreign key be NULL and/or duplicate?
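A sketch of that nullable foreign key approach (the table and column names are hypothetical):
-- DocumentId may stay NULL until the user actually picks a document
CREATE TABLE SurveyAnswers (
    AnswerId int NOT NULL PRIMARY KEY,
    DocumentId int NULL,
    CONSTRAINT FK_SurveyAnswers_Documents
        FOREIGN KEY (DocumentId) REFERENCES Documents (DocumentId)
);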
I am creating an application that uses a MySQL database in C#. I want to delete a row and update the auto-incremented id values in the table. For example, I have a table with two columns, id and station, and the table is a station list. Something like this:
id station
1 pt1
2 pt2
3 pt3
If I delete the second row, the table looks something like this afterwards:
id station
1 pt1
3 pt3
Is there any way to update the ids in the table so that, for this example, the id in the third row has the value 2 instead of 3?
Thanks in advance!
An auto-increment column, by definition, should not be changed manually.
What happens if some other table uses this ID (3) as a foreign key to refer to that record in this table? That table would have to be changed accordingly.
(Think about it: your example is simple, but what happens if you delete ID = 2 in a table where max(ID) is 100000? How many updates in the main table and in the referring tables?)
And in the end there is no real problem if you have gaps in your numbering.
I suggest you don't do anything special when a row is deleted. Yes you will have gaps in the ids, but why do you care? It is just an id.
If you change the value of id_station, you would also need to update the value in all tables that have an id_station field. It causes more unnecessary UPDATES.
The only way to change the value of the id column in other rows is with an UPDATE statement. There is no built-in mechanism to accomplish what you want.
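For completeness, a sketch of what that manual UPDATE might look like for the example in the question (station_list is an assumed table name; MySQL syntax), with all the caveats from the other answers:
-- shift every id above the deleted one down by 1, lowest first to avoid key collisions
UPDATE station_list SET id = id - 1 WHERE id > 2 ORDER BY id;
-- optionally realign the auto-increment counter afterwards
ALTER TABLE station_list AUTO_INCREMENT = 3;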
I concur with the other answers here; normally, we do not change the value of an id column in other rows when a row is deleted. Normally, that id column is a primary key and, ideally, that primary key value is immutable (it is assigned once and it doesn't change). If it does change, then any references to it also need to change. (The ON UPDATE CASCADE option of a foreign key will propagate the change to a child table, for storage engines like InnoDB that support foreign keys, but not with MyISAM.)
Basically, changing an id value causes way more problems than it solves.
There is no "automatic" mechanism that changes the value of a column in other rows when a row is deleted.
With that said, there are times in the development cycle where I have had "static" data and wanted control over the id values, and I have made changes to id values. But that is an administrative exercise, not a function performed by an application.