I have an existing app/database. I have been tasked with adding Entity Framework as part of an upgrade.
I hit a problem where, when I generate (or regenerate) the EDMX, the code no longer recognises the foreign keys in the database tables, and when the code runs, it complains about missing IDs; I assume it is 'guessing' what the foreign keys should be.
I can get round this by adding the following attribute to the auto-generated model definitions.
[ForeignKey("NavigationProperty")]
But then, if/when the EDMX is regenerated, all of this gets blown away and has to be re-added.
Although the generated class is partial, these attributes are being added to existing members, so I cannot move them to a separate file.
So, how do I get round this? Ideally I'd like to ensure that when the EDMX is generated it picks up the foreign keys, so that this issue is fixed permanently. If that can't be done, the next question is whether there is some way of programmatically generating these associations, so it only has to be done once.
Thanks
edit - Added sample table definition
Here is the code auto-generated by SSMS. Is there anything wrong with the foreign key definition?
CREATE TABLE [dbo].[ShopProductTypes](
[id] [int] IDENTITY(1,1) NOT NULL,
[Shop_Id] [int] NOT NULL,
[Product_Id] [int] NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[CancelledDate] [datetime] NULL,
[Archived] [bit] NOT NULL,
CONSTRAINT [PK_ShopProductTypes] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[ShopProductTypes] WITH CHECK ADD CONSTRAINT [FK_ShopProductTypes_Shop] FOREIGN KEY([Shop_Id])
REFERENCES [dbo].[Shops] ([Id])
GO
I found this:
http://blogs.msdn.com/b/dsimmons/archive/2007/09/01/ef-codegen-events-for-fun-and-profit-aka-how-to-add-custom-attributes-to-my-generated-classes.aspx
It's a bit more involved.
Related
I have a primary key as a foreign key in Entity Framework.
public class RailcarTrip
{
[Key, ForeignKey("WaybillRailcar")]
public int WaybillRailcarId { get; set; }
public WaybillRailcar WaybillRailcar { get; set; }
// Etc.
}
This seems to work fine, and generates the following table.
CREATE TABLE [dbo].[RailcarTrips](
[WaybillRailcarId] [int] NOT NULL,
[StartDate] [datetime2](7) NOT NULL,
[DeliveryDate] [datetime2](7) NULL,
[ReleaseDate] [datetime2](7) NULL,
[ReturnDate] [datetime2](7) NULL,
[DeliveryEta] [datetime2](7) NULL,
[ReleaseEta] [datetime2](7) NULL,
[ReturnEta] [datetime2](7) NULL,
[ReturnCity] [nvarchar](80) NULL,
[ReturnState] [nvarchar](2) NULL,
[TripType] [int] NOT NULL,
CONSTRAINT [PK_RailcarTrips] PRIMARY KEY CLUSTERED
(
[WaybillRailcarId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[RailcarTrips] WITH CHECK ADD CONSTRAINT [FK_RailcarTrips_WaybillRailcars_WaybillRailcarId] FOREIGN KEY([WaybillRailcarId])
REFERENCES [dbo].[WaybillRailcars] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[RailcarTrips] CHECK CONSTRAINT [FK_RailcarTrips_WaybillRailcars_WaybillRailcarId]
GO
But I get an error when I try to change this PK/FK so that it references a different record.
The property 'RailcarTrip.WaybillRailcarId' is part of a key and so cannot be modified or marked as modified. To change the principal of an existing entity with an identifying foreign key, first delete the dependent and invoke 'SaveChanges', and then associate the dependent with the new principal.
I don't understand why this is a problem. The primary key is not set as an identity/auto-generated value. This should be a simple update of a FK; I don't want to have to delete anything. Can anyone explain why it's an issue?
This appears to be an Entity Framework error and not a SQL Server error.
EF doesn't support modifying primary keys, so you need to delete and re-insert (Remove + SaveChanges + Add + SaveChanges) the RailcarTrip to move it to a different WaybillRailcar. Alternatively, you can update the PK/FK directly in T-SQL.
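A minimal sketch of the direct T-SQL route; the parameter names are hypothetical, and since WaybillRailcarId is also the primary key, the new value must not already exist in RailcarTrips:

-- Bypass EF and repoint the trip's identifying PK/FK in the database.
-- @OldId / @NewId are placeholders for the current and target WaybillRailcar IDs.
UPDATE dbo.RailcarTrips
SET WaybillRailcarId = @NewId
WHERE WaybillRailcarId = @OldId;

Any RailcarTrip entity already tracked by a DbContext will be stale after this, so reload it (or use a fresh context) before continuing.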
If I understood you correctly, that is, you want to change the value of the primary key of your entity (RailcarTrip), then you have answered your own question: EF is informing you that the primary key is not a modifiable value, which is correct for a SQL database.
It does not matter that this key is also a foreign key.
To modify a primary key value, one has to delete the corresponding entry and recreate it with the new key value.
I have a special case where the Id of the table is defined as a computed column like this:
CREATE TABLE [BusinessArea](
[Id] AS (isnull((CONVERT([nvarchar],[CasaId],(0))+'-')+CONVERT([nvarchar],[ConfigurationId],(0)),'-')) PERSISTED NOT NULL,
[CasaId] [int] NOT NULL,
[ConfigurationId] [int] NOT NULL,
[Code] [nvarchar](4) NOT NULL,
[Name] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_BusinessArea] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO
Usually when I have a computed column, I configure it like this:
builder.Entity<MyEntity>()
.Property(p => p.MyComputed).HasComputedColumnSql(null);
With .HasComputedColumnSql(), the value of MyComputed is refreshed after an insert/update on the entity.
However this trick doesn't work if the computed column is a PK.
Any idea on how to make that work also with a PK?
It can be made to work, but only for inserts, by setting the property's BeforeSaveBehavior to Ignore:
modelBuilder.Entity<BusinessArea>().Property(e => e.Id)
.Metadata.BeforeSaveBehavior = PropertySaveBehavior.Ignore;
But in general, such a design will cause problems with EF Core because it doesn't support mutable keys (primary or alternate), which means it would never read the Id back from the database after an update. You can verify that by marking the property as ValueGeneratedOnAddOrUpdate (which is the normal behavior for computed columns):
modelBuilder.Entity<BusinessArea>().Property(e => e.Id)
.ValueGeneratedOnAddOrUpdate();
If you do so, EF Core will throw an InvalidOperationException saying
The property 'Id' cannot be configured as 'ValueGeneratedOnUpdate' or 'ValueGeneratedOnAddOrUpdate' because the key value cannot be changed after the entity has been added to the store.
Are SQL Server computed primary key columns supported when inserting/updating records with Entity Framework 4.0? I know they're not in LINQ to SQL.
Every time I try to insert a record into a table that has a computed primary key column defined in SQL Server, I get the error
"Modifications to tables where a primary key column has property 'StoreGeneratedPattern' set to 'Computed' are not supported. Use 'Identity' pattern instead. "
even when the key's StoreGeneratedPattern property is set to Identity or Computed.
Example Table:
CREATE TABLE [dbo].[Table](
[B1] [varchar](10) NOT NULL,
[L1] [varchar](5) NOT NULL,
[L2] [varchar](5) NULL,
[E1] [varchar](7) NOT NULL,
[D1] [varchar](50) NULL,
[R1] [varchar](max) NULL,
[Z1] [varchar](5) NULL,
[D2] [varchar](5) NULL,
[Key] AS (([E1]+[B1])+[L1]),
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
[Key] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Example EDMX: (screenshot with the Key properties displayed)
I would've just set up a normal auto-increment ID column in the database, but I've inherited this project and have been told to leave the database structure alone.
I'm trying to use SSIS to move some data from one SQL Server to my destination SQL Server. The source has a table "Parent" with an identity field ID that is a foreign key in the "Child" table.
1 - N relation
The question is simple: what is the best way to transfer the data to a different SQL Server while keeping the parent-child relation?
Note: both IDs (Parent and Child) are identity fields that we do not want to migrate, since the destination won't necessarily need them.
Please share your comments and ideas.
FYI: We created .NET (C#) code that does this. We have a query that gets the parent data and a query that gets the child data; using LINQ we join the data, and we loop over the parents, getting the new ID and inserting it as the reference in the second table. This works, but we want to build the same in SSIS to be able to scale later.
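For what it's worth, that row-by-row remapping loop can also be done set-based in T-SQL, assuming both databases are reachable from one connection. This is only a sketch: SourceDb is a hypothetical name, and the table/column layout is an assumed Tbl_Parent/Tbl_Child pair.

-- Insert parents WITHOUT keeping their identity values, capture the
-- old->new ID mapping via MERGE ... OUTPUT (a plain INSERT ... OUTPUT
-- cannot reference source columns), then remap the children through it.
DECLARE @IdMap TABLE (OldId int, NewId int);

MERGE INTO dbo.Tbl_Parent AS tgt
USING SourceDb.dbo.Tbl_Parent AS src
    ON 1 = 0                      -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (src.Name)
OUTPUT src.ID, inserted.ID INTO @IdMap (OldId, NewId);

INSERT INTO dbo.Tbl_Child (Parent_ID, Name)
SELECT m.NewId, c.Name
FROM SourceDb.dbo.Tbl_Child AS c
JOIN @IdMap AS m ON m.OldId = c.Parent_ID;

This sidesteps the Keep Identity question entirely, because the destination generates fresh parent IDs.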
You have to import the Parent table before the Child table.
First, you have to create the tables on the destination server; you can do this with a query like the following:
CREATE TABLE [dbo].[Tbl_Child](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Parent_ID] [int] NULL,
[Name] [varchar](50) NULL,
CONSTRAINT [PK_Tbl_Child] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Tbl_Parent](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NULL,
CONSTRAINT [PK_Tbl_Parent] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Tbl_Child] WITH CHECK ADD CONSTRAINT [FK_Tbl_Child_Tbl_Parent] FOREIGN KEY([Parent_ID])
REFERENCES [dbo].[Tbl_Parent] ([ID])
GO
ALTER TABLE [dbo].[Tbl_Child] CHECK CONSTRAINT [FK_Tbl_Child_Tbl_Parent]
GO
Add two OLE DB connection managers (source and destination).
Next, add a Data Flow Task to import the Parent table data from the source. You have to check the Keep Identity option.
Then add a Data Flow Task to import the Child table data from the source, again with the Keep Identity option checked.
The package may look like the following (screenshot not reproduced here).
Workaround: you can disable the constraint, import the data, then re-enable it, by adding an Execute SQL Task before and after the import.
Disable Constraint:
ALTER TABLE Tbl_Child NOCHECK CONSTRAINT FK_Tbl_Child_Tbl_Parent
Enable Constraint:
ALTER TABLE Tbl_Child CHECK CONSTRAINT FK_Tbl_Child_Tbl_Parent
If you use this workaround, it is not necessary to follow any particular order when importing.
I've just taken over a project at work, and my boss has asked me to make it run faster. Great.
So I've identified one of the major bottlenecks: searching through one particular table on our SQL Server, which can take up to a minute, sometimes longer, for a SELECT query with some filters on it. Below is the SQL generated by C# Entity Framework (minus all the GO statements):
CREATE TABLE [dbo].[MachineryReading](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Location] [geometry] NULL,
[Latitude] [float] NOT NULL,
[Longitude] [float] NOT NULL,
[Altitude] [float] NULL,
[Odometer] [int] NULL,
[Speed] [float] NULL,
[BatteryLevel] [int] NULL,
[PinFlags] [bigint] NOT NULL, -- Deprecated field, this is now stored in a separate table
[DateRecorded] [datetime] NOT NULL,
[DateReceived] [datetime] NOT NULL,
[Satellites] [int] NOT NULL,
[HDOP] [float] NOT NULL,
[MachineryId] [int] NOT NULL,
[TrackerId] [int] NOT NULL,
[ReportType] [nvarchar](1) NULL,
[FixStatus] [int] NOT NULL,
[AlarmStatus] [int] NOT NULL,
[OperationalSeconds] [int] NOT NULL,
CONSTRAINT [PK_dbo.MachineryReading] PRIMARY KEY NONCLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
ALTER TABLE [dbo].[MachineryReading] ADD DEFAULT ((0)) FOR [FixStatus]
ALTER TABLE [dbo].[MachineryReading] ADD DEFAULT ((0)) FOR [AlarmStatus]
ALTER TABLE [dbo].[MachineryReading] ADD DEFAULT ((0)) FOR [OperationalSeconds]
ALTER TABLE [dbo].[MachineryReading] WITH CHECK ADD CONSTRAINT [FK_dbo.MachineryReading_dbo.Machinery_MachineryId] FOREIGN KEY([MachineryId])
REFERENCES [dbo].[Machinery] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[MachineryReading] CHECK CONSTRAINT [FK_dbo.MachineryReading_dbo.Machinery_MachineryId]
ALTER TABLE [dbo].[MachineryReading] WITH CHECK ADD CONSTRAINT [FK_dbo.MachineryReading_dbo.Tracker_TrackerId] FOREIGN KEY([TrackerId])
REFERENCES [dbo].[Tracker] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[MachineryReading] CHECK CONSTRAINT [FK_dbo.MachineryReading_dbo.Tracker_TrackerId]
The table has indexes on MachineryId, TrackerId, and DateRecorded:
CREATE NONCLUSTERED INDEX [IX_MachineryId] ON [dbo].[MachineryReading]
(
[MachineryId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
CREATE NONCLUSTERED INDEX [IX_MachineryId_DateRecorded] ON [dbo].[MachineryReading]
(
[MachineryId] ASC,
[DateRecorded] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
CREATE NONCLUSTERED INDEX [IX_TrackerId] ON [dbo].[MachineryReading]
(
[TrackerId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
When we select from this table, we are almost always interested in one machinery or tracker, over a given date range:
SELECT *
FROM MachineryReading
WHERE MachineryId = 2127 AND
DateRecorded > '2016-12-08 00:00:10.009' AND DateRecorded < '2016-12-11 18:32:41.734'
As you can see, it's quite a basic setup. The main problem is the sheer amount of data we put into it - about one row every ten seconds per tracker, and we have over a hundred trackers at the moment. We're currently sitting somewhere around 10-15 million rows. So this leaves me with two questions.
Am I thrashing the database if I insert 10 rows per second (without batching them)?
Given that this is historical data, so once it is inserted it will never change, is there anything I can do to speed up read access?
You have too many non-clustered indexes on the table, which will increase the size of the DB.
If you have an index on (MachineryId, DateRecorded), you don't really need a separate one on MachineryId alone.
With your 3 non-clustered indexes, there are 3 more (partial) copies of the data.
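Dropping the redundant single-column index would look like this (using the index name from the question):

-- The composite (MachineryId, DateRecorded) index already serves seeks on
-- MachineryId alone, so the single-column index only adds write overhead.
DROP INDEX [IX_MachineryId] ON [dbo].[MachineryReading];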
Clustered VS Non-Clustered
No Include on the Non-Clustered index
When SQL Server executes your SQL, it first searches the non-clustered index for the required data, then goes back to the original table (a bookmark lookup) and gets the rest of the columns, because you are doing SELECT * and the non-clustered index doesn't have all the columns. (That is what I think is happening; I can't really tell without the query plan.)
Include columns in non-clustered index: https://stackoverflow.com/a/1308325/1910735
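A covering variant might look like the sketch below. The INCLUDE list is an assumption: you would include whichever columns your queries actually select (and ideally stop using SELECT *), not necessarily these.

-- Non-clustered index that covers the query: the seek happens on the key
-- columns, and the INCLUDEd columns are stored at the leaf level, so no
-- bookmark lookup back to the base table is needed.
CREATE NONCLUSTERED INDEX [IX_MachineryId_DateRecorded_Covering]
ON [dbo].[MachineryReading] ([MachineryId] ASC, [DateRecorded] ASC)
INCLUDE ([Latitude], [Longitude], [Speed], [OperationalSeconds]);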
You should maintain your indexes by creating a maintenance plan that checks for fragmentation and rebuilds or reorganizes your indexes on a weekly basis.
I really think you should have a clustered index on (MachineryId, DateRecorded) instead of a non-clustered one. A table can only have one clustered index (it defines the order the data is stored on disk); since most of your queries filter by MachineryId and DateRecorded, it will be better to store the rows that way.
Also, if you really are searching by TrackerId in any query, try adding it to the same clustered index.
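Because the table's primary key is already declared NONCLUSTERED, a clustered index on the query columns could be added along these lines (the index name is made up; expect this to take a while and lock the table on 10-15 million rows):

-- Physically orders the rows by machine and time, so the equality + range
-- predicate in the WHERE clause becomes one contiguous range scan of the
-- clustered index, with every column available (no lookups).
CREATE CLUSTERED INDEX [CIX_MachineryReading_MachineryId_DateRecorded]
ON [dbo].[MachineryReading] ([MachineryId] ASC, [DateRecorded] ASC);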
IMPORTANT NOTE: delete the non-clustered index in a TEST environment before going LIVE.
Create a clustered index instead of your non-clustered index, run different queries, and check the performance by comparing the query plans and STATISTICS IO.
Some resources for Index and SQL Query help:
Subscribe to the newsletter here and download the first responder kit:
https://www.brentozar.com/?s=first+responder
It is now open source, but I don't know if it still includes the actual PDF getting-started and help files (subscribe via the link above anyway for weekly articles/tutorials):
https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit
Tuning is per query, but in any case -
I see you have no partitions and no suitable index for this query, which means that, no matter what you do, it always results in a full table scan.
For your specific query -
create index ix_MachineryReading_MachineryId_DateRecorded
on MachineryReading (MachineryId, DateRecorded)
First, 10 inserts per second is very feasible under almost any reasonable circumstances.
Second, you need an index. For this query:
SELECT *
FROM MachineryReading
WHERE MachineryId = 2127 AND
DateRecorded > '2016-12-08 00:00:10.009' AND DateRecorded < '2016-12-11 18:32:41.734';
You need an index on MachineryReading(MachineryId, DateRecorded). That will probably solve your performance problem.
If you have similar queries for tracker, then you want an index on MachineryReading(TrackerId, DateRecorded).
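Sketched as DDL (index names are made up; note the first mirrors the IX_MachineryId_DateRecorded index the question already defines):

-- Composite indexes whose leading equality column (MachineryId / TrackerId)
-- plus trailing range column (DateRecorded) match the WHERE clause shape.
CREATE NONCLUSTERED INDEX IX_MachineryReading_MachineryId_DateRecorded
    ON MachineryReading (MachineryId, DateRecorded);
CREATE NONCLUSTERED INDEX IX_MachineryReading_TrackerId_DateRecorded
    ON MachineryReading (TrackerId, DateRecorded);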
These will slightly impede the inserts, but the overall improvement should be so great that it will be a big net win.