This is likely a much broader SQL topic than Entity Framework, and I'm very much a newbie in both these arenas, but I'll ask it in terms of Entity Framework.
I would like to enforce a many-to-8 relationship. My setup is this:
A PersonGroup needs 8 (unique) Persons.
A Person can be in many different PersonGroups.
The order of Persons within a PersonGroup matters (the first needs to remain first, etc.).
Easy access to all people in a PersonGroup and all PersonGroups a Person is in.
I've tried the following:
1) Add 8 1..many associations between Person and PersonGroup. This certainly prevents more than 8 Persons per group. However, to find all groups a person is in I need to iterate over 8 navigation properties on Person, which is clunky.
2) Add 8 ids to PersonGroup that each match up with a Person. Once again, I can guarantee at most 8 persons per group, but there is no automatic link back through a Person->PersonGroup association. I now need to be sure to update two places.
3) Just do a many-to-many relationship and handle it in code. There are two problems with this: I cannot guarantee only 8 persons per group, and I'm unsure whether I can ensure the order remains the same.
So, which is the best, or what solution am I missing?
An n:m relationship with a "catch":
Person
------
PersonId
PRIMARY KEY (PersonId)
PersonGroup
-----------
GroupId
PRIMARY KEY (GroupId)
Belongs
-------
GroupId
PersonId
Ordering
PRIMARY KEY (GroupId, PersonId)
FOREIGN KEY (GroupId)
REFERENCES PersonGroup (GroupId)
FOREIGN KEY (PersonId)
REFERENCES Person (PersonId) --- all normal up to here
UNIQUE KEY (GroupId, Ordering) --- the "catch"
CONSTRAINT Ordering_chk --- ensuring only up to 8 persons
CHECK (Ordering IN (1,2,3,4,5,6,7,8)) --- per group
You should make sure that the CHECK constraint is actually enforced in the SQL engine you'll use (MySQL, for example, would trick you into believing it has such constraints but simply ignores them; SQL Server does not return an error but happily accepts a NULL in the checked column if you try to insert one, because a CHECK only rejects rows for which the condition is false, not unknown).
There is a limitation to this approach. The Ordering field has to be NOT NULL, because if NULLs are allowed, more than 8 rows (with NULL there) could be inserted (except in SQL Server, whose unique constraints allow only one NULL, so you would get up to 9 rows: eight with values and one with NULL).
To ensure a maximum of 8 rows while allowing NULLs in Ordering, you could write a more complex constraint like the one described on the MSDN page CHECK Constraints (if your RDBMS has such a feature), but I'm not at all sure about the performance of such a beast:
CREATE FUNCTION dbo.CheckMax8PersonPerGroup()
RETURNS int
AS
BEGIN
    DECLARE @retval int
    SELECT @retval = CASE WHEN EXISTS
        ( SELECT *
          FROM Belongs
          GROUP BY GroupId
          HAVING COUNT(*) > 8
        )
        THEN 0
        ELSE 1
        END
    RETURN @retval
END;
GO
ALTER TABLE Belongs
    ADD CONSTRAINT Ordering_chk
    CHECK (dbo.CheckMax8PersonPerGroup() = 1);
GO
The constraint could alternatively be created as a FOREIGN KEY to a reference table with 8 rows. (If you use MySQL, that's the only way to have the equivalent of CHECK.)
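For illustration, a minimal sketch of that reference-table alternative might look like this (the AllowedOrdering table name and Ordering_fk constraint name are mine, not from the original):
CREATE TABLE AllowedOrdering (
    Ordering INT NOT NULL,
    PRIMARY KEY (Ordering)
);
INSERT INTO AllowedOrdering (Ordering) VALUES (1),(2),(3),(4),(5),(6),(7),(8);
ALTER TABLE Belongs
    ADD CONSTRAINT Ordering_fk
    FOREIGN KEY (Ordering) REFERENCES AllowedOrdering (Ordering);
The UNIQUE KEY (GroupId, Ordering) from above still caps each group at 8 rows, since only 8 Ordering values exist to pair with any one GroupId.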
A variation would be to use (GroupId, Ordering) as the primary key and not have any constraint on the (GroupId, PersonId) combination. This would allow a Person to hold multiple positions in a Group (but still up to 8).
Many-to-many seems OK to me. You can easily make sure there are no more than 8 persons per group by implementing triggers. Also, you can add an order column to the join table if you think it's important for your logic.
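As a rough T-SQL sketch of such a trigger, reusing the Belongs join table from the answer above (my assumption of the schema, not code from the question):
CREATE TRIGGER Belongs_Max8
ON Belongs
AFTER INSERT
AS
BEGIN
    -- Reject the statement if any affected group now exceeds 8 members.
    IF EXISTS (
        SELECT 1
        FROM Belongs b
        JOIN inserted i ON i.GroupId = b.GroupId
        GROUP BY b.GroupId
        HAVING COUNT(DISTINCT b.PersonId) > 8
    )
    BEGIN
        RAISERROR ('A PersonGroup cannot have more than 8 Persons.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;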
Related
I have a problem when I try to add constraints to my tables. I get the error:
Introducing FOREIGN KEY constraint 'FK74988DB24B3C886' on table 'Employee' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints.
My constraint is between a Code table and an employee table. The Code table contains Id, Name, FriendlyName, Type and a Value. The employee has a number of fields that reference codes, so that there can be a reference for each type of code.
I need for the fields to be set to null if the code that is referenced is deleted.
Any ideas how I can do this?
SQL Server does simple counting of cascade paths and, rather than trying to work out whether any cycles actually exist, it assumes the worst and refuses to create the referential actions (CASCADE). You can and should still create the constraints without the referential actions. If you can't alter your design (or doing so would compromise things), then you should consider using triggers as a last resort.
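For example, the foreign key from the question can still be created with the actions disabled (the constraint and column names here are assumed, since the table definitions weren't posted):
ALTER TABLE Employee
    ADD CONSTRAINT FK_Employee_Code
    FOREIGN KEY (CodeId) REFERENCES Code (Id)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION;
Setting the Employee columns to NULL when a Code row is deleted would then have to be done in a trigger or in application code.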
FWIW, resolving cascade paths is a complex problem. Other SQL products will simply ignore the problem and allow you to create cycles, in which case it is a race to see which one overwrites the value last, probably to the ignorance of the designer (ACE/Jet does this, for example). I understand some SQL products will attempt to resolve simple cases. The fact remains, SQL Server doesn't even try, plays it ultra safe by disallowing more than one path, and at least it tells you so.
Microsoft itself advises the use of triggers instead of FK constraints.
A typical situation with multiple cascading paths is this:
A master table with two details, say "Master", "Detail1" and "Detail2". Both details are cascade delete. So far no problem. But what if both details have a one-to-many relation with some other table (say "SomeOtherTable")? SomeOtherTable has a Detail1ID column AND a Detail2ID column.
Master { ID, masterfields }
Detail1 { ID, MasterID, detail1fields }
Detail2 { ID, MasterID, detail2fields }
SomeOtherTable {ID, Detail1ID, Detail2ID, someothertablefields }
In other words: some of the records in SomeOtherTable are linked to Detail1 records and some of the records in SomeOtherTable are linked to Detail2 records. Even if it is guaranteed that SomeOtherTable records never belong to both details, it is now impossible to make SomeOtherTable's records cascade delete for both details, because there are multiple cascading paths from Master to SomeOtherTable (one via Detail1 and one via Detail2).
Now you may already have understood this. Here is a possible solution:
Master { ID, masterfields }
DetailMain { ID, MasterID }
Detail1 { DetailMainID, detail1fields }
Detail2 { DetailMainID, detail2fields }
SomeOtherTable {ID, DetailMainID, someothertablefields }
All ID fields are key fields and auto-increment. The crux lies in the DetailMainID fields of the detail tables. These fields are both key and referential constraint. It is now possible to cascade delete everything by only deleting master records. The downside is that for each Detail1 record AND for each Detail2 record, there must also be a DetailMain record (which is actually created first to get the correct and unique id).
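A rough DDL sketch of that restructured schema (types and cascade placement are my assumptions):
CREATE TABLE Master (
    ID INT IDENTITY(1,1) PRIMARY KEY
    -- masterfields
);
CREATE TABLE DetailMain (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    MasterID INT NOT NULL REFERENCES Master (ID) ON DELETE CASCADE
);
CREATE TABLE Detail1 (
    DetailMainID INT PRIMARY KEY REFERENCES DetailMain (ID) ON DELETE CASCADE
    -- detail1fields
);
CREATE TABLE Detail2 (
    DetailMainID INT PRIMARY KEY REFERENCES DetailMain (ID) ON DELETE CASCADE
    -- detail2fields
);
CREATE TABLE SomeOtherTable (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    DetailMainID INT NOT NULL REFERENCES DetailMain (ID) ON DELETE CASCADE
    -- someothertablefields
);
Every table is now reached from Master along exactly one path, so SQL Server accepts all the cascades.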
I would point out that (functionally) there's a BIG difference between cycles and/or multiple paths in the SCHEMA and in the DATA. While cycles and perhaps multipaths in the DATA could certainly complicate processing and cause performance problems (the cost of "properly" handling them), the cost of these characteristics in the schema should be close to zero.
Since most apparent cycles in RDBs occur in hierarchical structures (org chart, part, subpart, etc.) it is unfortunate that SQL Server assumes the worst; i.e., schema cycle == data cycle. In fact, if you're using RI constraints you can't actually build a cycle in the data!
I suspect the multipath problem is similar; i.e., multiple paths in the schema don't necessarily imply multiple paths in the data, but I have less experience with the multipath problem.
Of course if SQL Server did allow cycles it'd still be subject to a depth of 32, but that's probably adequate for most cases. (Too bad that's not a database setting however!)
"Instead of Delete" triggers don't work either. The second time a table is visited, the trigger is ignored. So, if you really want to simulate a cascade you'll have to use stored procedures in the presence of cycles. The Instead-of-Delete-Trigger would work for multipath cases however.
Celko suggests a "better" way to represent hierarchies that doesn't introduce cycles, but there are tradeoffs.
There is an article available which explains how to handle multiple cascade paths using triggers. Maybe this is useful for complex scenarios:
http://www.mssqltips.com/sqlservertip/2733/solving-the-sql-server-multiple-cascade-path-issue-with-a-trigger/
By the sounds of it, you have an OnDelete/OnUpdate action on one of your existing foreign keys that will modify your Codes table.
So by creating this foreign key, you'd be creating a cyclic problem.
E.g. updating Employees causes Codes to be changed by an ON UPDATE action, which causes Employees to be changed by an ON UPDATE action... etc.
If you post your table definitions for both tables, and your foreign key/constraint definitions, we should be able to tell you where the problem is...
This can happen because Employee might have a collection of another entity, say Qualifications, and Qualification might have some other collection, Universities.
e.g.
public class Employee {
    public virtual ICollection<Qualification> Qualifications { get; set; }
}
public class Qualification {
    public Employee Employee { get; set; }
    public virtual ICollection<University> Universities { get; set; }
}
public class University {
    public Qualification Qualification { get; set; }
}
On the DbContext it could look like below:
protected override void OnModelCreating(DbModelBuilder modelBuilder) {
    modelBuilder.Entity<Qualification>().HasRequired(x => x.Employee).WithMany(e => e.Qualifications);
    modelBuilder.Entity<University>().HasRequired(x => x.Qualification).WithMany(e => e.Universities);
}
In this case there is a chain from Employee to Qualification and from Qualification to University, so it was throwing the same exception for me.
It worked for me when I changed
modelBuilder.Entity<Qualification>().HasRequired(x => x.Employee).WithMany(e => e.Qualifications);
To
modelBuilder.Entity<Qualification>().HasOptional(x => x.Employee).WithMany(e => e.Qualifications);
A trigger is a solution for this problem:
IF OBJECT_ID('dbo.fktest2', 'U') IS NOT NULL
drop table fktest2
IF OBJECT_ID('dbo.fktest1', 'U') IS NOT NULL
drop table fktest1
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'fkTest1Trigger' AND type = 'TR')
DROP TRIGGER dbo.fkTest1Trigger
go
create table fktest1 (id int primary key, anQId int identity)
go
create table fktest2 (id1 int, id2 int, anQId int identity,
FOREIGN KEY (id1) REFERENCES fktest1 (id)
ON DELETE CASCADE
ON UPDATE CASCADE/*,
FOREIGN KEY (id2) REFERENCES fktest1 (id) this causes compile error so we have to use triggers
ON DELETE CASCADE
ON UPDATE CASCADE*/
)
go
CREATE TRIGGER fkTest1Trigger
ON fkTest1
AFTER INSERT, UPDATE, DELETE
AS
if @@ROWCOUNT = 0
    return
set nocount on
-- This code is a replacement for the foreign key cascade (auto update of the field in the
-- destination table when the primary key it references in the source table changes).
-- The compiler complains only when you use multiple cascades. It throws this compile error:
-- Introducing FOREIGN KEY constraint on table may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION,
-- or modify other FOREIGN KEY constraints.
IF ((UPDATE (id) and exists(select 1 from fktest1 A join deleted B on B.anqid = A.anqid where B.id <> A.id)))
begin
    update fktest2 set id2 = i.id
    from deleted d
    join fktest2 on d.id = fktest2.id2
    join inserted i on i.anqid = d.anqid
end
if exists (select 1 from deleted)
    DELETE one FROM fktest2 one LEFT JOIN fktest1 two ON two.id = one.id2 where two.id is null -- drop all rows from the destination table which are not in the source table
GO
insert into fktest1 (id) values (1)
insert into fktest1 (id) values (2)
insert into fktest1 (id) values (3)
insert into fktest2 (id1, id2) values (1,1)
insert into fktest2 (id1, id2) values (2,2)
insert into fktest2 (id1, id2) values (1,3)
select * from fktest1
select * from fktest2
update fktest1 set id=11 where id=1
update fktest1 set id=22 where id=2
update fktest1 set id=33 where id=3
delete from fktest1 where id > 22
select * from fktest1
select * from fktest2
This error relates to database trigger and cascade policies. A trigger is code and can add some intelligence or conditions to a cascade relation, like cascade deletion. You may need to adjust the related tables' options around this, like turning off CascadeOnDelete:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<TableName>().HasMany(i => i.Member).WithRequired().WillCascadeOnDelete(false);
}
Or turn this convention off completely:
modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
Some databases, most notably SQL Server, have limitations on the cascade behaviors that form cycles.
There are two ways to handle this situation:
1. Change one or more of the relationships to not cascade delete.
2. Configure the database without one or more of these cascade deletes, then ensure all dependent entities are loaded so that EF Core can perform the cascading behavior.
Please refer to this link:
Database cascade limitations
Mass database update to offset PKs: make a copy of the database instead.
Special use case: company A uses a database with the same schema as company B. Because they have merged, they want to use a single database. Hence, many tables from company B's database must have their primary keys offset to avoid collisions with company A's records.
One solution could have been to define the foreign keys as ON UPDATE CASCADE and offset the primary keys, having the foreign keys follow. But there are many hurdles if you do that (Msg 1785, Msg 8102, ...).
So a better idea that occurs to me is simply to make a copy of the database, DROP and re-CREATE the tables that must have their PKs/FKs offset, and copy the data (and while doing so, offset the primary keys and the foreign keys).
This avoids all the hassle.
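As a hedged illustration of the copy-with-offset step (the database, table, and column names are invented; @Offset must clear company A's key range):
DECLARE @Offset INT = 1000000;
-- If OrderID is an IDENTITY column, bracket this with
-- SET IDENTITY_INSERT Merged.dbo.Orders ON / OFF.
INSERT INTO Merged.dbo.Orders (OrderID, CustomerID, OrderDate)
SELECT o.OrderID + @Offset,
       o.CustomerID + @Offset,  -- foreign keys get the same offset so they still line up
       o.OrderDate
FROM CompanyB.dbo.Orders AS o;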
My solution to this problem encountered using ASP.NET Core 2.0 and EF Core 2.0 was to perform the following in order:
Run the update-database command in the Package Manager Console (PMC) to create the database (this results in the "Introducing FOREIGN KEY constraint ... may cause cycles or multiple cascade paths." error)
Run the script-migration -Idempotent command in the PMC to create a script that can be run regardless of the existing tables/constraints
Take the resulting script and find ON DELETE CASCADE and replace with ON DELETE NO ACTION
Execute the modified SQL against the database
Now, your migrations should be up-to-date and the cascading deletes should not occur.
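For step 3, the edit amounts to turning statements like the first one below into the second (the constraint, table, and column names are placeholders, not taken from any real migration):
ALTER TABLE [Employee] ADD CONSTRAINT [FK_Employee_Codes_CodeId]
    FOREIGN KEY ([CodeId]) REFERENCES [Codes] ([Id]) ON DELETE CASCADE;
-- becomes
ALTER TABLE [Employee] ADD CONSTRAINT [FK_Employee_Codes_CodeId]
    FOREIGN KEY ([CodeId]) REFERENCES [Codes] ([Id]) ON DELETE NO ACTION;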
Too bad I was not able to find any way to do this in Entity Framework Core 2.0.
Good luck!
I am recently working with Entity Framework Core and I have some issue about the relation between the primary key and the indexes.
To be more concrete, I found out that in a table containing composite primary keys an index is created for the second property of the key.
You can see an example here
Can you explain me if I should manually create another index for the first one? Or is a clustered index created for that first property?
Generally an index on a set of columns can be used even if a query is only searching on some of the columns, with the restriction that the query must use columns from the index left to right, with no gaps.
Thus if a set of columns A,B,C,D is indexed, this index can still be used to answer queries that filter on A, on A and B, or on A and B and C.
Thus you don't need to index NoteID separately, because the index that backs the primary key (NoteID, CategoryID) can be used by queries calling for just NoteID. It cannot, however, be used to answer queries calling for just CategoryID, hence the separate index being created.
As an aside, you might find, in some cases, that you can supply values in a where clause that have no purpose other than to encourage use of an index that covers them. Suppose, for example, that a table has an index on Name, Gender, Age, and you want all 20 year old people named Steven. If you can reasonably assert that Steven is always male, you can WHERE Name = 'Steven' AND Gender = 'M' AND Age = 20 - even though the Gender of M is redundant, specifying it will let the DB engine use that index. Omitting it means the DB will have a much harder job of figuring out whether to use the index or not
You can also re-arrange index ordering to help your application perform, and give the DB fewer indexes to maintain. If all your queries will only ever ask for A, A+C, or A+B+C, it would be better to specify the index for the columns in the order A,C,B; then a single index can cover all queries, rather than having to maintain an index on A+B+C and another on A+C.
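A small T-SQL illustration of the prefix rule, using the NoteID/CategoryID pair from the question (the table layout is assumed):
CREATE TABLE Note (
    NoteID     INT NOT NULL,
    CategoryID INT NOT NULL,
    Body       NVARCHAR(MAX),
    PRIMARY KEY (NoteID, CategoryID)  -- backed by an index on (NoteID, CategoryID)
);
-- Can seek on the primary key's index (NoteID is a left-to-right prefix):
SELECT * FROM Note WHERE NoteID = 42;
-- Cannot seek on it for CategoryID alone, hence the separate index:
CREATE INDEX IX_Note_CategoryID ON Note (CategoryID);
SELECT * FROM Note WHERE CategoryID = 7;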
You don't need to create an index on NoteID because it is the first column of the primary key. Generally, you want all foreign keys to be the first column in at least one index. In your case, NoteID is the first column in the primary key which acts as a clustered unique index.
The following columns are set to auto-increment in SQL Server with IDENTITY(1,1), and I wanted similar behavior in SQLite: Tenant.TenantID, Project.ProjectID, and Credits.CreditsID. There is AUTOINCREMENT in SQLite, and I have tried it, but it only works on tables with a single primary key column. I have tried the following testing:
By the way, I used Microsoft.EntityFrameworkCore.Sqlite 2.1.4 for this testing
Explicitly assign value for these columns set to auto-increment:
Tenant.TenantID
a. -99 : remains -99 after saving
b. 0 : becomes 1 after saving
c. 99 : remains 99 after saving
For Project.ProjectID & Credits.CreditsID
a. -99 & 99 values remain the same after saving changes to the DbContext. But I do not want to explicitly assign these values because there is a bunch of test data from my DbContext.
b. Assigning explicit value 0 throws this error: Microsoft.Data.Sqlite.SqliteException : SQLite Error 19: 'NOT NULL constraint failed: Credits.CreditsID'.
I'd really be grateful for someone who can help me out with this one. It's been days that this bothers me.
With SQLite you probably do not want to use AUTOINCREMENT. It does not actually set the column to auto-increment; rather, it adds a constraint that the value, if not set explicitly, must be higher than any value that has already been allocated.
Simply defining a column as INTEGER PRIMARY KEY makes the column auto-increment when you do not set the value explicitly. Note that there can only be one such column per table.
Note that SQLite DOES NOT guarantee incrementing by 1; rather it guarantees a unique integer identifier, which may even be lower than the current maximum (but only after an id of 9223372036854775807 has been assigned; see SQLite Autoincrement). In that case, using AUTOINCREMENT will fail with an SQLite Full exception, whilst without AUTOINCREMENT SQLite will try to find an unused id.
Looking at your diagram, I believe the Credits table would not need the TennantID, as this is available via the Project referencing the Tennant.
Ignoring all but the columns that make up the relationships (and adding the optional foreign key constraints that would enforce referential integrity), I believe you could use something along the lines of:
DROP TABLE IF EXISTS credits;
DROP TABLE IF EXISTS project;
DROP TABLE IF EXISTS tennant;
CREATE TABLE IF NOT EXISTS tennant (tennant_id INTEGER PRIMARY KEY, Name TEXT, other_columns TEXT);
CREATE TABLE IF NOT EXISTS project (project_id INTEGER PRIMARY KEY, tennant_reference INTEGER REFERENCES tennant(tennant_id), Title TEXT);
CREATE TABLE IF NOT EXISTS credits (credit_id INTEGER PRIMARY KEY, project_reference INTEGER REFERENCES project(project_id), other_columns TEXT);
INSERT INTO tennant VALUES(1,'Fred','other data'); -- Explicit ID 1
INSERT INTO tennant (Name,other_columns) VALUES('Mary','Mary''s other data'),('Anne','Anne''s other data'); -- Implicit IDs (2 and 3 most likely)
INSERT INTO project VALUES (99,1,'Project001 for Fred'); -- Explicit project ID 99 - tennant 1 = Fred
INSERT INTO project (tennant_reference,Title) VALUES(1,'Project002 for Fred'),(2,'Project003 for Mary'),(3,'Project004 for Anne'); -- 3 implicit project IDs 100, 101 and 102 (most likely)
-- Result 1
SELECT * FROM project JOIN tennant ON tennant_reference = tennant.tennant_id;
INSERT INTO credits VALUES(199,99,'Other credit columns'); -- Explicit credit ID of 199 for Project001 (tennant implied)
INSERT INTO credits VALUES(0,99,'Other credit columns credit_id = 0'); -- Explicit credit ID of 0 for Project001
INSERT INTO credits (project_reference,other_columns) VALUES (100,'for Project002'),(100,'another for Project002'),(102,'for Project004');
SELECT * FROM credits JOIN project ON project_reference = project_id JOIN tennant ON tennant_reference = tennant_id;
This drops all the existing tables to make testing simpler.
The 3 tables are then created.
Rows are inserted both explicitly and implicitly (the recommended way) into the Tennant table and then into the Project table (note that rows that reference a non-existent tennant cannot be inserted into the Project table due to the foreign key constraint)
The Projects, along with the joined tennant details are then listed (see Results)
Rows are then inserted into the Credits table using explicit and implicit credit ids (note that 199 is explicitly defined and then 0).
As you can see, when ids are autogenerated they are generally 1 greater than the greatest value used to date.
Results
First query (Projects with related Tennant)
Second query (Credits with related Project and the underlying related Tennant)
I am designing a database and C# app in which records get saved to the database. Now say we have three sales persons, and each should be assigned records in strict rotation so they get to work on an equal number of records.
What I have done so far is to create one table called Records and one called SalesPerson; Records has the salesperson id as a foreign key, plus another column that says which agent it is assigned to, and I would increment this column.
Do you think this is a good design, if not can you give any ideas?
To do this I would use the analytical functions ROW_NUMBER and NTILE (assuming your RDBMS supports them). This way you can allocate each available sales person a pseudo id incrementing upwards from 1, then randomly allocate each unassigned record one of these pseudo ids to assign them equally between sales people. Using pseudo ids rather than actual ids allows for the SalesPersonID values not being contiguous. E.g.
-- CREATE SOME SAMPLE DATA
DECLARE @SalesPerson TABLE (SalesPersonID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY, Name VARCHAR(50) NOT NULL, Active BIT NOT NULL)
DECLARE @Record TABLE (RecordID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY, SalesPersonFK INT NULL, SomeOtherInfo VARCHAR(100))
INSERT @SalesPerson VALUES ('TEST1', 1), ('TEST2', 0), ('TEST3', 1), ('TEST4', 1);
INSERT @Record (SomeOtherInfo)
SELECT Name
FROM Sys.all_Objects
With this sample data the first step is to find the number of available sales people to allocate records to:
DECLARE @Count INT = (SELECT COUNT(*) FROM @SalesPerson WHERE Active = 1)
Next using CTEs to contain the window functions (as they can't be used in join clauses)
;WITH Records AS
(   SELECT *,
        NTILE(@Count) OVER (ORDER BY NEWID()) [PseudoSalesPersonID]
    FROM @Record
    WHERE SalesPersonFK IS NULL -- UNALLOCATED RECORDS
), SalesPeople AS
(   SELECT SalesPersonID,
        ROW_NUMBER() OVER (ORDER BY SalesPersonID) [RowNumber]
    FROM @SalesPerson
    WHERE Active = 1 -- ACTIVE SALES PEOPLE
)
Finally, update the Records CTE with the actual sales person ID rather than the pseudo id:
UPDATE Records
SET SalesPersonFK = SalesPeople.SalesPersonID
FROM Records
INNER JOIN SalesPeople
ON PseudoSalesPersonID = RowNumber
ALL COMBINED IN AN SQL FIDDLE
This is quite confusing, as I suspect you're using the database term 'record' as well as an object/entity 'Record'.
The simple concept of having a unique identifier in one table that also features as a foreign key in another table is fine though, yes. It avoids redundancy.
Basics of normalisation
It's mostly as DeeMac said. But if your Record is an object (i.e. it has all the work details, or it's a sale or a transaction) then you need to separate that table. Have a table Record with all the details of that particular object. Have another table Salesman with all the details about the sales person. (In a good design, you would only add the business-related attributes of the position to this table. All the personal details would go in a different table.)
Now for your problem, you can build two separate tables. One would be Record_Assignment where you will assign a Record to a Salesman. This table will hold all the active jobs. Another table will be Archived_Record_Assignment which will hold all the past jobs. You move all the completed jobs here.
For equal assignment of work, you said you want circular assignment. I am not sure if you want to spread work amongst all sales persons available or only a certain number. Usually assignments are given by team. Create a table (say SalesTeam) with the Salesman ids of the sales persons you want to assign the jobs to (add a team id if you have multiple teams working on their own assigned work areas or customers; that's usually the case). When you want to assign a new job, query the Record_Assignment table for the last record, get the Salesman id, and assign the job to the next salesman in the SalesTeam table, as sketched below. The assignment will be done through business logic (coding).
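A hedged T-SQL sketch of that lookup (the Record_Assignment and SalesTeam columns, including AssignmentID, are assumed from the description above):
-- Who received the most recent assignment?
DECLARE @LastSalesmanID INT =
    (SELECT TOP 1 SalesmanID
     FROM Record_Assignment
     ORDER BY AssignmentID DESC);
-- Pick the next salesman in SalesTeam order; if this returns no row,
-- wrap around to the first salesman to restart the rotation.
SELECT TOP 1 SalesmanID
FROM SalesTeam
WHERE SalesmanID > @LastSalesmanID
ORDER BY SalesmanID;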
I am not fully aware of your scenario. These are all my speculations so if you see something off according to your scenario, let me know.
Good Luck!
Suppose a
Table "Person" having
"SSN",
"Name",
"Address"
and another
Table "Contacts" having
"Contact_ID",
"Contact_Type",
"SSN" (primary key of Person)
similarly
Table "Records" having
"Record_ID",
"Record_Type",
"SSN" (primary key of Person)
Now I want that when I change or update the SSN in the Person table, it changes accordingly in the other 2 tables.
Can anyone help me with a trigger for that?
Or how do I set up foreign key constraints for the tables?
Just add ON UPDATE CASCADE to the foreign key constraint.
Preferably the primary key of a table should never change. If you expect the SSN to change you should use a different primary key and have the SSN as a normal data column in the person table. If it's already too late to make this change, you can add ON UPDATE CASCADE to the foreign key constraint.
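For illustration, with the tables from the question (column types assumed), the constraint on Contacts would look like:
ALTER TABLE Contacts
    ADD CONSTRAINT FK_Contacts_Person
    FOREIGN KEY (SSN) REFERENCES Person (SSN)
    ON UPDATE CASCADE;
-- An update to the parent row now propagates automatically:
UPDATE Person SET SSN = '987-65-4321' WHERE SSN = '123-45-6789';
The Records table would get the same kind of constraint.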
If you have PKs that change, you need to look at the table design and use a surrogate PK, like an identity.
In your question you have a Person table, which could be an FK to many, many tables. In that case an ON UPDATE CASCADE could have some serious problems. The database I'm working on has well over 300 references (FKs) to our equivalent table; we track all the various work that a person does in each different table. If I insert a row into our Person table and then try to delete it back out again (it will not be used in any other tables; it is new), the delete fails with: Msg 8621, Level 17, State 2, Line 1 The query processor ran out of stack space during query optimization. Please simplify the query. As a result I can't imagine that an ON UPDATE CASCADE would work either when you have that many FKs on your PK.
I would never make sensitive data like an SSN a PK. Health care companies used to do this and had a painful switch because of privacy. I hope you don't have a web app with a GET or POST variable called SSN with the actual value in it!! Or display the SSN on every report. Or will you shred all old printed reports and limit who can view each report, etc.?
Well, assuming the SSN is the primary key of the Person table, I would just (in a transaction of course, and as sketched after the list):
create a brand new row with the new SSN, copying all other details from the old row.
update the columns in the other tables to point to the new row.
delete the old row.
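A sketch of those three steps with made-up SSN values (assuming the columns from the question):
BEGIN TRANSACTION;
-- 1. Clone the Person row under the new SSN.
INSERT INTO Person (SSN, Name, Address)
SELECT '987-65-4321', Name, Address
FROM Person
WHERE SSN = '123-45-6789';
-- 2. Repoint the child rows.
UPDATE Contacts SET SSN = '987-65-4321' WHERE SSN = '123-45-6789';
UPDATE Records  SET SSN = '987-65-4321' WHERE SSN = '123-45-6789';
-- 3. Remove the old row.
DELETE FROM Person WHERE SSN = '123-45-6789';
COMMIT TRANSACTION;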
Now this is actually a good example of why you shouldn't use real data as table cross-references, if that data can change. If you'd used an artificial column to tie them together (and only stored the SSN in one place), you wouldn't have the problem.
Cascade update and delete are very dangerous to use. If you have a million child records, you could end up with a serious locking problem. You should code the updates and deletes instead.
You should never use a PK with the potential to change if it can be avoided. Nor should you ever use SSN as a PK, because it should never be stored unencrypted in your database. Never, unless your company likes to be sued when they are the cause of an identity theft incident. This is not a design flaw to shrug off as "this is legacy, we don't have time to fix it". This is a design flaw that could bankrupt your company if someone steals your backup tapes or gets the SSNs out of the system in another manner (most of these types of thefts are internal, BTW). This is an urgent, must-fix-now design flaw.
SSN is also a bad candidate because it changes (people change them when they are victims of identity theft, for instance). Plus an integer PK will perform faster than a nine-digit character PK.