I have a DbContext object which consists of:
- Employee
- CompanyAddress (PK: AddressFirstLine, City)
Note: one Employee can have many CompanyAddress records.
A record is added to the CompanyAddress table only if that address doesn't already exist there.
Say I have two DbContext objects created from the database, Snapshot1 and Snapshot2.
When both of these snapshots were taken, there were no records in the CompanyAddress table.
When changes were made to Snapshot1 and saved, records were written to the CompanyAddress table.
When changes were made to Snapshot2 and saved using
mydataContext.SaveChanges();
an exception occurs:
System.Data.Entity.Infrastructure.DbUpdateException: An error occurred while updating the entries
System.Data.SqlClient.SqlException: Violation of PRIMARY KEY constraint 'PK_CompanyAddress'. Cannot insert duplicate key in object 'dbo.CompanyAddress'
It seems that saving Snapshot1 made Snapshot2 stale, because when both were saved back to the database, both tried to insert the same CompanyAddress records.
What other calls or settings can I make on the DbContext object to avoid this error?
Thank you!
Your error has nothing to do with the DbContext objects. Your problem is that you are trying to insert a record with a duplicate primary key. That is what your exception message says.
Look at how you create your CompanyAddress objects and what their keys are when you save them - this will give you the clues.
Edit: it is also a bad idea to use a natural key as the primary key, i.e. you should not make the city and address the primary key. You should use a Guid or an integer as a primary key that does not depend on the information stored in the row.
And to enforce uniqueness, check whether the record already exists before you save to the DB, and add a unique index to the table on the address columns.
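For the unique-index part, a minimal T-SQL sketch, assuming a surrogate integer key and illustrative column lengths:

-- Surrogate integer primary key; uniqueness of the natural key
-- (AddressFirstLine, City) is enforced by a separate unique index.
CREATE TABLE dbo.CompanyAddress (
    Id               INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_CompanyAddress PRIMARY KEY,
    AddressFirstLine NVARCHAR(200) NOT NULL,
    City             NVARCHAR(100) NOT NULL
);

-- Rejects a duplicate address even when two contexts race each other.
CREATE UNIQUE INDEX UX_CompanyAddress_AddressFirstLine_City
    ON dbo.CompanyAddress (AddressFirstLine, City);

Note that the second SaveChanges would then fail on the unique index instead of the primary key, so the existence check before saving is still needed.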
I'm working on a database design in Microsoft SQL Server Management Studio and I have a small problem. A LibraryItem should have a required Category, tied with a foreign key CategoryId mapped to Id in the Category table.
I need help tying CategoryId (FK) to Id (PK on the Category table). I just don't know how to do it exactly.
You'll need to add the reference to the script that creates the table and add a name to the constraint like so:
CONSTRAINT FK_LibraryItem_Category_CategoryId FOREIGN KEY ([CategoryId]) REFERENCES [dbo].[Category] ([Id])
Note: I've defaulted to the dbo schema. You will need to change that if it's different for the Category table you are creating.
That will create a Foreign Key for your LibraryItem table and link the CategoryId to the respective record in the Category table.
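For context, a minimal create script with that constraint in place might look like this (the Title column and the data types are illustrative assumptions):

CREATE TABLE [dbo].[LibraryItem] (
    [Id]         INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_LibraryItem PRIMARY KEY,
    [Title]      NVARCHAR(200) NOT NULL, -- placeholder column
    [CategoryId] INT NOT NULL,
    CONSTRAINT FK_LibraryItem_Category_CategoryId
        FOREIGN KEY ([CategoryId]) REFERENCES [dbo].[Category] ([Id])
);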
Another thing to note: this will throw an error if your value for the FK doesn't match an Id in the Category table.
To elaborate on the errors:
Let's say you add a CategoryId of 2 to a record in your LibraryItem table, but a record with an Id of 2 doesn't exist in your Category table. The insert will throw an error similar to this:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_LibraryItem_Category_CategoryId". The conflict occurred in database "foo", table "dbo.LibraryItem". The statement has been terminated.
This can be easily solved by ensuring the IDs match in both tables.
I recently asked a question related to this and found a solution, but realized I may have a bigger problem. If anyone can tell me whether I'm able to do what I describe below without making changes to the database, it would be greatly appreciated! Note: I'm new to Entity Framework.
I am trying to insert duplicate SettingsId values into this table (Agreement Settings) for a new agreement (associated with an agreementId column in that table).
However, a SettingsId is also stored in another table, Algorithm Settings, whose Id column represents a SettingsId and is the primary key of that table.
I only want to update the Agreement Settings table (the former table above) with these new duplicate SettingsId values and leave the latter table alone. That way I will have agreements that have duplicate SettingsId GUIDs, but only one unique representation of each GUID in the Algorithm Settings table.
When I try to insert into the database using Entity Framework:
dataTransferAgreement = (await _dataTransferContext.Agreements
.AddAsync(dataTransferAgreement))
.Entity;
I get brand new GUIDs returned for the SettingsIds, although the dataTransferAgreement object has the duplicate GUIDs as properties beforehand (they are replaced). I assume this is because Entity Framework sees these foreign keys in the Agreement Settings table and their association with the Algorithm Settings table (the primary key) and automatically updates the primary key, and thus the associated foreign keys, on its own.
I of course can't add the Algorithm Settings table properties to dataTransferAgreement, as that would cause a primary key conflict.
The question: is there any way to manually (or otherwise) insert these duplicate foreign key values into Agreement Settings table without touching the Algorithm Settings table in Entity Framework (code first)? Currently, the entity property that inserts the primary key Id for SettingsId is decorated with [Key,DatabaseGenerated(DatabaseGeneratedOption.Identity)], which is used in numerous other places in this project, so I assume I cannot change that.
Also, the entity property that associates this table in the code:
[ForeignKey(nameof(SettingsId))]
public AlgorithmSetting AlgorithmSetting { get; set; }
is not needed in my case (since I don't want to do anything with it), but I can't just remove it due to it being a domain model (again, I'm an Entity Framework newbie so if I'm wrong in any way please correct me).
In your agreement settings table, add a primary key "id" alongside the other two columns you already have. Neither Entity Framework nor a relational database will allow two rows with the same primary key value.
If you need to query the agreement table in the future, you can do so with any column values and just "ignore" the new primary key you added.
Happy to help further if needed.
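A minimal T-SQL sketch of that change (the exact table name, and the assumption that the table has no primary key yet, are mine):

-- Surrogate key so duplicate (AgreementId, SettingsId) pairs can coexist.
ALTER TABLE dbo.AgreementSettings
    ADD Id INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_AgreementSettings PRIMARY KEY;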
I have 3 tables in my database: one for students, another for courses, and a third to store which courses every student selects. I want to prevent a student from selecting the same course more than once. What condition should I provide in the INSERT statement on the third table?
Thanks
Your StudentCourse table should have a unique constraint on the (StudentId, CourseId) columns.
As an alternative, you can create the Primary Key on your StudentCourse table as a composite key on (StudentId, CourseId).
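A T-SQL sketch of both options (the Student and Course table names and their Id columns are assumptions):

-- Option 1: surrogate key plus a unique constraint on the pair.
CREATE TABLE dbo.StudentCourse (
    Id        INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_StudentCourse PRIMARY KEY,
    StudentId INT NOT NULL
        CONSTRAINT FK_StudentCourse_Student REFERENCES dbo.Student (Id),
    CourseId  INT NOT NULL
        CONSTRAINT FK_StudentCourse_Course REFERENCES dbo.Course (Id),
    CONSTRAINT UQ_StudentCourse_StudentId_CourseId UNIQUE (StudentId, CourseId)
);

-- Option 2: drop the surrogate Id and make the pair the key instead:
-- CONSTRAINT PK_StudentCourse PRIMARY KEY (StudentId, CourseId)

Either way the database itself rejects a second identical (StudentId, CourseId) pair, so the INSERT statement needs no special condition.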
While every table in your database should have a primary key constraint, it is often an auto-generated value, useful when carrying out most database maintenance tasks. However, the primary key itself will not protect you from user-generated or user-captured data that may contain duplicates. Enter the "Unique" constraint! This is a very powerful table-level constraint that you can apply to a chosen column, and it can greatly assist in preventing duplicates in your data. For example, say you have a Users table with an EmailAddress column; surely it would be strange to capture two users who have an identical email address.
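For instance, a sketch against that Users table (the data type of EmailAddress is whatever the column already uses):

-- One row per email address; a second user with the same address is rejected.
ALTER TABLE dbo.Users
    ADD CONSTRAINT UQ_Users_EmailAddress UNIQUE (EmailAddress);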
I have used SqlBulkCopy in my previous program and have enjoyed the speed-of-light advantage of its INSERTS. But then, I was only inserting things in one table only.
I now have two tables with a one-to-many association, i.e. table A's primary key appears as a foreign key in table B. So each record in B carries an id that is the result of an insert into A.
I am wondering if there is a solution for this?
Example:
I will give a better example of this and hope we find a good solution eventually.
We have a table called Contacts. Since each contact can have zero or more email addresses, we will store those emails in a separate table called ContactEmails. So Contacts.Id becomes an FK on ContactEmails (say ContactEmails.ContactId).
Let's say we would like to insert 1000 Contacts and each will have zero or more Emails. And we of course want to use SqlBulkCopy for both tables.
The problem is that it is only when we insert a new Contact that we know his/her Id. Once the Contact is inserted, we know the inserted Id is e.g. 15. So we insert 3 emails for this contact, and all three will have a ContactEmails.ContactId value of 15. But we have no knowledge of 15 before the contact is inserted into the database.
We can insert all contacts as a bulk into the table. But when it comes to their emails, the connection is lost, because the emails do not know their own contacts.
Disable the (foreign key) constraints before the bulk insert, then enable them again.
Make sure you do not have referential integrity violations.
You can disable FK and CHECK constraints using the queries below:
ALTER TABLE foo NOCHECK CONSTRAINT ALL
or
ALTER TABLE foo NOCHECK CONSTRAINT CK_foo_column
Primary keys and unique constraints cannot be disabled, but this should be OK if I've understood you correctly.
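To turn them back on after the bulk insert, a sketch; WITH CHECK makes SQL Server re-validate the rows that were inserted while the constraints were disabled:

ALTER TABLE foo WITH CHECK CHECK CONSTRAINT ALL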
Suppose a table "Person" having:
- "SSN"
- "Name"
- "Address"
and another table "Contacts" having:
- "Contact_ID"
- "Contact_Type"
- "SSN" (the primary key of Person)
and similarly a table "Records" having:
- "Record_ID"
- "Record_Type"
- "SSN" (the primary key of Person)
Now I want that when I change or update an SSN in the Person table, it changes accordingly in the other 2 tables.
Can anyone help me with a trigger for that, or with how to set up foreign key constraints between the tables?
Just add ON UPDATE CASCADE to the foreign key constraint.
Preferably the primary key of a table should never change. If you expect the SSN to change you should use a different primary key and have the SSN as a normal data column in the person table. If it's already too late to make this change, you can add ON UPDATE CASCADE to the foreign key constraint.
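If you do go the cascade route, a T-SQL sketch (the constraint name is my own):

ALTER TABLE dbo.Contacts
    ADD CONSTRAINT FK_Contacts_Person_SSN
        FOREIGN KEY (SSN) REFERENCES dbo.Person (SSN)
        ON UPDATE CASCADE;

-- And likewise for dbo.Records.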
If you have PKs that change, you need to look at the table design and use a surrogate PK, like an identity column.
In your question you have a Person table, which could be referenced by FKs from many, many tables. In that case an ON UPDATE CASCADE could cause some serious problems. The database I'm working on has well over 300 references (FKs) to our equivalent table; we track all the various work that a person does in each different table. If I insert a row into our Person table and then try to delete it back out again (it will not be used in any other tables, it is new), the delete fails with: "Msg 8621, Level 17, State 2, Line 1. The query processor ran out of stack space during query optimization. Please simplify the query." As a result, I can't imagine an ON UPDATE CASCADE would work either once you have many FKs on your PK.
I would never make sensitive data like an SSN a PK. Health care companies used to do this and had a painful switch because of privacy. I hope you don't have a web app with a GET or POST variable called SSN carrying the actual value! Or display the SSN on every report; will you shred all the old printed reports and limit who can view each report, etc.?
Well, assuming the SSN is the primary key of the Person table, I would just (in a transaction of course):
create a brand new row with the new SSN, copying all other details from the old row.
update the columns in the other tables to point to the new row.
delete the old row.
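A T-SQL sketch of those three steps (the variables are placeholders to be set to the real values):

DECLARE @OldSSN CHAR(9), @NewSSN CHAR(9); -- set to the old and new values

BEGIN TRANSACTION;

-- 1. Create the new row, copying all other details from the old one.
INSERT INTO dbo.Person (SSN, Name, Address)
SELECT @NewSSN, Name, Address
FROM dbo.Person
WHERE SSN = @OldSSN;

-- 2. Point the other tables at the new row.
UPDATE dbo.Contacts SET SSN = @NewSSN WHERE SSN = @OldSSN;
UPDATE dbo.Records  SET SSN = @NewSSN WHERE SSN = @OldSSN;

-- 3. Delete the old row.
DELETE FROM dbo.Person WHERE SSN = @OldSSN;

COMMIT TRANSACTION;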
Now this is actually a good example of why you shouldn't use real data as table cross-references, if that data can change. If you'd used an artificial column to tie them together (and only stored the SSN in one place), you wouldn't have the problem.
Cascade update and delete are very dangerous to use. If you have a million child records, you could end up with a serious locking problem. You should code the updates and deletes instead.
You should never use a PK with the potential to change if it can be avoided. Nor should you ever use the SSN as a PK, because it should never be stored unencrypted in your database. Never, unless your company likes being sued when it is the cause of an identity theft incident. This is not a design flaw to shrug off as "this is legacy, we don't have time to fix it." This is a design flaw that could bankrupt your company if someone steals your backup tapes or gets the SSNs out of the system in another manner (most of these types of thefts are internal, BTW). This is an urgent, must-fix-now design flaw.
The SSN is also a bad candidate because it changes (people change them when they are victims of identity theft, for instance). Plus, an integer PK will perform faster than a nine-digit PK.