I've always been a database-oriented programmer; to this day I've used a database-driven approach to development, and I feel pretty confident in T-SQL and SQL Server.
I'm trying to wrap my head around the Entity Framework 6 code-first approach - and frankly - I'm struggling.
I have an existing database - so I did Add New Item > ADO.NET Entity Data Model > Code First from Database and got a bunch of C# classes representing my existing database. So far so good.
What I'm trying to do now is explore how to handle ongoing database upgrades - both in schema and in "static" (pre-populated) lookup data. My first gripe is that the entities that were reverse-engineered from the database are configured with the Fluent API, while it seems more natural to me to define the new tables I want as C# classes with data annotations. Are there any problems or issues with "mixing" those two approaches? Or can I tell the reverse-engineering step to use data annotation attributes instead of the Fluent API altogether?
My second and even bigger gripe: I'm trying to create nice, small migrations - one for each set of features I'm adding (e.g. a new table, a new index, a few new columns, etc.) - but it seems I cannot have more than a single "pending" migration. When I have one, modify my model classes further, and try to generate a second migration using Add-Migration (name of migration), I'm greeted with:
Unable to generate an explicit migration because the following explicit migrations are pending: [201510061539107_CreateTableMdsForecast]. Apply the pending explicit migrations before attempting to generate a new explicit migration.
Seriously?! I cannot have more than one single pending migration? Do I need to run Update-Database after every single tiny migration I add?
That seems like a rather big drawback! I'd much rather create my 10 or 20 small, compact, easy-to-understand migrations and then apply them all in one swoop - is there really no way to do this? This is hard to believe... is there any way around it?
It is true that you can only have one pending migration open at a time during development. To understand why, you have to understand how migrations are generated. The generator works by comparing the current state of your database (the schema) with the current state of your model code. It then effectively creates a "script" (a C# class) which changes the schema of the database to match the model. You would not want more than one of these pending at the same time, or the scripts would conflict with each other. Let's take a simple example:
Let's say I have a class Widget:
class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}
and a matching table Widgets in the database:
Widgets
-------
Id (int, PK, not null)
Name (nvarchar(100), not null)
Now I decide to add a new property Size to my class.
class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Size { get; set; } // added
}
When I create my migration, the generator looks at my model, compares it with the database and sees that my Widget model now has a Size property while the corresponding table does not have a Size column. So the resulting migration ends up looking like this:
public partial class AddSizeToWidget : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Widgets", "Size", c => c.Int());
    }

    public override void Down()
    {
        DropColumn("dbo.Widgets", "Size");
    }
}
Now, imagine that it were allowed to create a second migration while the first is still pending. I haven't yet run the Update-Database command, so my baseline database schema is still the same. Now I decide to add another property, Color, to Widget.
When I create a migration for this change, the generator compares my model to the current state of the database and sees that I have added two columns. So it creates the corresponding script:
public partial class AddColorToWidget : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Widgets", "Size", c => c.Int());
        AddColumn("dbo.Widgets", "Color", c => c.Int());
    }
    ...
}
So now I have two pending migrations, and both of them are going to try to add a Size column to the database when they are ultimately run. Clearly, that is not going to work. So that is why there is only one pending migration allowed to be open at a time.
So, the general workflow during development is:
Change your model
Generate a migration
Update the database to establish a new baseline
Repeat
If you make a mistake, you can roll back the database to a previous migration using the –TargetMigration parameter of the Update-Database command, then delete the errant migration(s) from your project and generate a new one. (You can use this as a way to combine several small migrations into a larger chunk if you really want to, although I find in practice it is not worth the effort).
Update-Database –TargetMigration PreviousMigrationName
Now, when it comes time to update a production database, you do not have to apply each migration manually, one at a time. That is the beauty of migrations - they are applied automatically whenever you run your updated code against the database. During initialization, EF looks at the target database and checks the migration level (this is stored in the special __MigrationHistory table, which was created when you enabled migrations on the database). Any migrations in your code that have not yet been applied are then run in order, bringing the database up to date.
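For example (a minimal sketch, assuming the automatic behavior described above comes from the MigrateDatabaseToLatestVersion initializer; MyDbContext and the MyApp.Migrations namespace are placeholders for your own context and the generated Configuration class):

using System.Data.Entity;
using MyApp.Migrations; // hypothetical namespace containing the migrations Configuration class

// Apply any pending migrations automatically the first time MyDbContext is used.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());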
Hope this helps clear things up.
Are there any problems or issues with "mixing" those two approaches?
No, there is no problem with mixing them.
You can do more with fluent config than with data annotations.
Fluent config overrides data annotations when constructing the migration script.
You can use data annotations to generate DTOs and front-end/UI constraints dynamically - this saves a lot of code.
The Fluent API has the EntityTypeConfiguration class, which lets you build domains (in the DDD sense) of objects dynamically and store them - this speeds up work with DbContext a lot.
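To illustrate the mixing, here is a rough sketch (the entity name echoes the CreateTableMdsForecast migration from the question, but the properties and the context are invented for the example): data annotations cover the simple constraints, while an EntityTypeConfiguration handles details annotations cannot express, such as decimal precision.

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Data annotations on the entity itself...
public class MdsForecast
{
    [Key]
    public int Id { get; set; }

    [Required, StringLength(50)]
    public string Region { get; set; }

    public decimal Amount { get; set; }
}

// ...plus a fluent configuration class for details annotations can't express.
public class MdsForecastConfiguration : EntityTypeConfiguration<MdsForecast>
{
    public MdsForecastConfiguration()
    {
        Property(x => x.Amount).HasPrecision(18, 4); // no data annotation for precision in EF6
    }
}

public class MyDbContext : DbContext
{
    public DbSet<MdsForecast> MdsForecasts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Register the fluent configuration alongside the annotated entity.
        modelBuilder.Configurations.Add(new MdsForecastConfiguration());
    }
}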
I cannot have more than a single "pending" migration
Not 100% true. (Maybe 50%, but this is not a showstopper.)
Yes, the DbMigrator compares your model "hash" to the database model "hash" when it generates the migration - so it blocks you before you create your next small migration. But this is not a reason to think you cannot make small migration steps. I do only small migration steps, all the time.
When you develop an app against your local db, you apply the small migrations one by one, gradually, as you develop functionality. At the end you deploy all your small migrations to staging/production in one DLL along with the new functionality - and they are applied one by one.
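As a sketch of that deployment step (assuming the standard generated migrations Configuration class; you could equally use migrate.exe or an initializer), all pending migrations can be applied in one go with DbMigrator:

using System.Data.Entity.Migrations;
using MyApp.Migrations; // hypothetical namespace of the migrations Configuration class

// Run every migration that has not yet been applied to the target database, in order.
var migrator = new DbMigrator(new Configuration());
migrator.Update();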
Related
I have a specific EF Core 6.x question.
If a column is removed from the SQL table, then EF Core will throw a SqlException saying that it's an invalid column name, unless I also update the C# model.
For example,
Create Table User
(
    FirstName varchar(200)
    ,MiddleName varchar(200) null -- tried to remove this column after table is created
    ,LastName varchar(200)
)
I tried deleting the MiddleName column from the SQL Table. When I run a simple read call using EF Core 6, I get the error.
C# model
public class User
{
    public virtual string FirstName { get; set; }
    public virtual string? MiddleName { get; set; }
    public virtual string LastName { get; set; }
}
var db = new EFDbContext(connectionString);
var data = db.Users.ToList(); // SqlException here after column removal
Is there any way to remove columns from the table without needing to update the c# class as well?
I tried making the C# property MiddleName not virtual.
Update:
If I have an existing application, I would need to modify the C# model even if the codebase doesn't refer to the removed column anywhere. Alternatively, I can decorate the property with [NotMapped] or use the Ignore() method on the model builder.
Both approaches mean a rebuild of the assembly and downtime during deployment.
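For reference, a minimal sketch of the two workarounds mentioned above (only the mapping-related parts are shown; the key configuration and the context constructor are omitted, as in the question):

using System.ComponentModel.DataAnnotations.Schema;
using Microsoft.EntityFrameworkCore;

public class User
{
    public virtual string FirstName { get; set; }

    [NotMapped] // option 1: keep the property but exclude it from the EF model
    public virtual string? MiddleName { get; set; }

    public virtual string LastName { get; set; }
}

public class EFDbContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // option 2: leave the class untouched and ignore the property here instead
        modelBuilder.Entity<User>().Ignore(u => u.MiddleName);
    }
}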
NHibernate's mapping can be done using an XML file, so all it would take is a simple config file update.
I can't seem to find anything in EF Core that will reduce the headache of maintaining older codebases when schema changes occur.
EF creates a data model mapping internally to track the database schema and your code models. By removing the column in your database table, your code model no longer matches the database. Hence, the exception occurs.
This is definitely not the answer you're looking for, but as far as I know EF Core needs consistency between the models and the DB schema to work.
I can think of 2 things here:
Maybe you could benefit from using a different ORM (did you give Dapper a chance?).
You might be facing an architectural issue. If there's more than one team working with the same database, and more than one system calling that database, the best way to avoid headaches in the future would be to isolate the data access layer and expose an API that serves all the involved systems.
That way, if the database changes, you just need to rebuild the data access layer - no downtime for your clients.
And finally... in my opinion the ideal solution is a combination of both: create a decoupled data access layer, map things there, and expose an API with the models your application needs.
I am developing an ASP.NET Core 5 application and I just made a modification to one of the model classes as follows
...
public long OfferId { get; set; }
[ForeignKey("OfferId")]
public RequestOffer Offer { get; set; }
...
This requires that I add a migration and update the database.
However, when I try to run update-database I get the following error
Column names in each table must be unique. Column name 'Discriminator' in table 'AspNetUserTokens' is specified more than once
I have tried to run the migration with the -IgnoreChanges flag, as I saw in a solution proposed on a similar Stack Overflow question, but it did not make a difference.
My worry is that I never made any changes to that table (AspNetUserTokens) in this update. However, in the migration file I see that all the code for the previous migrations is repeated, as if I were rerunning the migrations afresh, which I am not doing.
So it looks like all the database tables are being recreated. I was expecting the migration file to contain code only for the changes I just made, but it instead contains all the changes from the previous migrations as well, starting from the very first migration I ran.
I would appreciate any guidance to help me resolve this so I can update my database and continue with the project.
Thank you
I need to map to a view when using EF6 with migrations.
The view pivots 2 other tables to enable a simple summary view of the underlying data, the idea being it allows us to use this in a summary index view.
The issue is that I am unable to create a migration that either deploys the view (the ideal goal) or deploys the DB without the view for later manual deployment.
In most attempts, following other SO questions, I end up either deadlocking the Add-Migration and Update-Database commands or generally causing an error that breaks one or the other.
What is the current best way to use EF6 to access views without causing errors with migrations, even if I lose the ability to deploy the views automatically with the migrations?
Further detail
The Db contains 2 tables Reports and ReportAnswers. The view ReportView combines these two and pivots ReportAnswers to allow some of the rows to become columns in this summary view.
Reports and ReportAnswers were deployed via EF migrations. The view is currently a script that needs to be added to the deployment somehow.
Reports, ReportAnswers, and ReportView are accessible from the db context:
public virtual DbSet<ReportAnswer> ReportAnswers { get; set; }
public virtual DbSet<Report> Reports { get; set; }
public virtual DbSet<ReportView> ReportView { get; set; }
I have tried using Add-Migration Name -IgnoreChanges to create a blank migration and then manually adding the view to the Up() and Down() methods but this just deadlocks the migration and update commands, each wanting the other to run first.
I have also tried using modelBuilder.Ignore<ReportView>(); to ignore the type when running the migrations but this proved incredibly error prone, even though it did seem to work at least once.
I came across an interesting article about using views with EF Core a few days ago, and I also found the very same approach for EF6.
You may want to use the Seed method instead of the migration Up and Down methods.
// Requires System.IO and System.Reflection; the placeholders in braces are yours to fill in.
protected override void Seed({DbContextType} context)
{
    // Resolve the folder the migrations assembly was loaded from.
    string codeBase = Assembly.GetExecutingAssembly().CodeBase;
    UriBuilder uri = new UriBuilder(codeBase);
    string path = Uri.UnescapeDataString(uri.Path);

    // Build the full path to the SQL script that creates the view.
    var scriptPath = Path.GetDirectoryName(path) + "\\Migrations\\{CreateViewSQLScriptFilename}.sql";

    // Execute the script against the database.
    context.Database.ExecuteSqlCommand(File.ReadAllText(scriptPath));
}
Your SQL command should look like the sample below.
IF NOT EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[{ViewName}]'))
    EXEC dbo.sp_executesql @statement = N'CREATE VIEW [dbo].[{ViewName}]
    AS
    SELECT {SelectCommand}'
It is not perfect, but I hope it is at least helpful.
I found another blog post about this topic, and the writer says to use Sql(@"CREATE VIEW dbo.{ViewName} AS ...") in the Up method and Sql(@"DROP VIEW dbo.{ViewName};") in the Down method. I added this since you didn't supply the code from your Up and Down migration methods. It might also be a good idea to use SqlFile instead of the Sql method.
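A minimal sketch of that Up/Down approach (the migration class name is invented, and {SelectCommand} stands in for the pivoting SELECT that builds your ReportView):

public partial class AddReportView : DbMigration
{
    public override void Up()
    {
        // Create the view as part of the migration.
        Sql(@"CREATE VIEW dbo.ReportView AS SELECT {SelectCommand}");
    }

    public override void Down()
    {
        // Remove the view when the migration is rolled back.
        Sql("DROP VIEW dbo.ReportView;");
    }
}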
There is also the option to create a customized code or SQL generator and plug it into migrations, but I guess that is not what you are looking for.
Let me know in a comment in case you need additional help.
Related links:
Using Views with Entity Framework Code First
EF CODE FIRST - VIEWS AND STORED PROCEDURES
Leveraging Views in Entity Framework
DbMigration.Sql Method (String, Boolean, Object)
DbMigration.SqlFile Method (String, Boolean, Object)
I am currently working towards implementing a charting library with a database that contains a large amount of data. For the table I am using, the raw data is spread out across 148 columns of data, with over 1000 rows. As I have only created models for tables that contain a few columns, I am unsure how to go about implementing a model for this particular table. My usual method of creating a model and using the Entity Framework to connect it to a database doesn't seem practical, as implementing 148 properties for each column does not seem like an efficient method.
My questions are:
What would be a good method to implement this table into an MVC project so that there are read actions that allow one to pull the data from the table?
How would one structure a model so that one could read 148 columns of data from it without having to declare 148 properties?
Is the Entity Framework an efficient way of achieving this goal?
Entity Framework Database First sounds like the perfect solution for your problem.
Database First models are just what they sound like: the data exists before the code does. Entity Framework will create the models as partial classes for you based on the tables you direct it to.
Additionally, exceptions won't be thrown if the table changes (as long as nothing is accessing a field that doesn't exist), which can be extremely beneficial in a lot of cases. Migrations are not necessary. Instead, all you have to do is right click on the generated model and click "Update Model from Database" and it works like magic. The whole process can be significantly faster than Code First.
Here is another tutorial to help you.
Yes, with Database First you can create the entities very quickly. Also remember that it is good practice to return only the fields you really need: your entity has 148 columns, but if your app needs only 10 fields, convert the original entity to a model or view model and use that.
One excellent tool that can help you is AutoMapper.
Regards,
Wow, that's a lot of columns!
Given your circumstances a few thoughts come to mind:
1: If your problem is the legwork of creating that many properties, you could look at Entity Framework Power Tools. It is able to reverse-engineer a database and create the necessary models/entity relation mappings for you, saving you a lot of the grunt work.
To save you from pulling all of that data out in one go, you can then use projections like so:
var result = DbContext.ChartingData.Select(x => new PartialDto {
    Property1 = x.Column1,
    Property50 = x.Column50,
    Property109 = x.Column109
});
A tool like AutoMapper will allow you to do this with ease via simple, configurable mapping profiles:
var result = DbContext.ChartingData.Project().To<PartialDto>().ToList();
2: If you have concerns with the performance of manipulating such large entities through Entity Framework then you could also look at using something like Dapper (which will happily work alongside Entity Framework).
This would save you the hassle of modelling the entities for the larger tables but allow you to easily query/update specific columns:
public class ModelledDataColumns
{
    public string Property1 { get; set; }
    public string Property50 { get; set; }
    public string Property109 { get; set; }
}

const string sqlCommand = "SELECT Property1, Property50, Property109 FROM YourTable WHERE Id = @Id";
IEnumerable<ModelledDataColumns> collection = connection.Query<ModelledDataColumns>(sqlCommand, new { Id = 5 }).ToList();
Ultimately if you're keen to go the Entity Framework route then as far as I'm aware there's no way to pull that data from the database without having to create all of the properties one way or another.
Okay, so I've studied C# and ASP.NET long enough and would like to know how all these custom classes I've created relate to the database. For example,
I have a class called Employee:
public class Employee
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string EmailAddress { get; set; }
}
and I have a database table with the following 4 fields:
ID
Name
EmailAddress
PhoneNumber
It seems like the custom class mirrors my database, and in ASP.NET I can simply run LINQ to SQL against my database and get the whole schema as a class without typing out a custom class with getters and setters.
So let's just say that I am now running a query to retrieve a list of employees. I would like to know: how does my application map my Employee class to my database?
By itself, it doesn't. But add any ORM or similar, and you start to get closer. For example, with LINQ-to-SQL (which I mention because it is easy to get working with Visual Studio), you typically get (given to you by the tooling) a custom "data context" class, which you use as:
using(var ctx = new MyDatabase()) {
    foreach(var emp in ctx.Employees) {
        ....
    }
}
This is generating TSQL and mapping the data to objects automatically. By default the tooling creates a separate Employee class, but you can tweak this via partial classes. This also supports inserts, data changes and deletion.
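For instance, a small illustrative sketch of the partial-class tweak (DisplayName is an invented property; the generated Employee entity and its Name/EmailAddress columns come from the question):

// Extends the tooling-generated Employee entity without touching the generated code.
public partial class Employee
{
    public string DisplayName
    {
        get { return Name + " <" + EmailAddress + ">"; }
    }
}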
There are also tools that allow re-use of your existing domain objects; either approach can be successful - each has advantages and disadvantages.
If you only want to read data, then it is even easier; a micro-ORM such as dapper-dot-net allows you to use your type with TSQL that you write, with it handling the tedious materialisation code.
Your question is a little vague, imo. But what you are referring to is the Model of the MVC (Model-View-Controller) architecture.
The Model - your Employee class - manages the data of the application. It can not only get and set (save/update) your data, but it can also be used to notify of a data change (usually to the view).
You mentioned you were using SQL, so more than likely you could create and save an entire employee record by sending an associative array of the table data to the database. Your class's setters would handle the specific SQL syntax to INSERT the data. In larger MVC frameworks, the Model of your application inherits from several other classes that handle saving properly to different types of backends other than MS SQL.
Models will also normally have functions to handle finding and updating records. This is usually done by specifying a search field and having it return the record, which would include the ID; you would then pass this back into a save/update function to make changes to the record. You could also tie into this level of the Model to create revisions of the data you are saving.
So how the Model directly correlates to your SQL structure depends on how you write it, or which framework you decide to use. I believe a common one for ASP.NET is Microsoft's ASP.NET MVC.
Your class cannot be directly mapped to the database without an ORM tool. The ORM tool reads your configuration and maps your class to a DB row automatically, as per your mappings. That means you don't need to read the row and set the class fields explicitly, but you do have to provide mapping files and go through the ORM framework to load the entities; the framework takes care of the rest.
You can check out NHibernate; here is a getting-started guide on NHibernate.
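As a rough sketch of what such a mapping might look like (using NHibernate's mapping-by-code rather than the XML .hbm.xml files; the table name and identity generator are assumptions), a mapping for the Employee class above could be:

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Tells NHibernate how the Employee class maps to the Employees table.
public class EmployeeMap : ClassMapping<Employee>
{
    public EmployeeMap()
    {
        Table("Employees");
        Id(x => x.ID, m => m.Generator(Generators.Identity));
        Property(x => x.Name);
        Property(x => x.EmailAddress);
    }
}

The mapping class is then registered with NHibernate's Configuration (for example via a ModelMapper) before building the session factory, and the framework handles loading and saving Employee rows from there.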