Can I use an EF database three ways? - C#

I have an MVC web application for which I've used Entity Framework code-first to build the database.
I also need a console app that manages data based on timeframes, so it will also need to access this database, which I understand I can do with a database-first model.
However, I also need to build another website as a management dashboard, which I understand will also work database-first.
Can I do this without EF nuking the database, in one of the two circumstances, whenever I need to make a change to the model?

The short answer: no. You cannot implement both code-first and database-first EF on the same database without encountering a bona fide logistical nightmare.
Converting from one to the other is not quite as difficult as you might think, however, if your application is not overly complex. Based on the tables you've already created, database-first EF should produce objects that are reasonably compatible with your existing code.
Your next steps should look like this:
Pick one approach for EF
If necessary, convert existing projects to that paradigm
Move EF code into a shared class library (as suggested by snow; see the sketch after this list)
Implement new projects using that class library to ensure consistency and reduce redundancy
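Roughly, that shared library could look like the sketch below (EF6 code-first, with placeholder names); all three applications then reference this one assembly instead of each generating their own model:

    using System.Data.Entity;

    namespace MyApp.Data   // hypothetical shared class library
    {
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class AppDbContext : DbContext
        {
            // One context, one EF model, one migration history for every consumer.
            public AppDbContext() : base("name=DefaultConnection") { }

            public DbSet<Customer> Customers { get; set; }
        }
    }

In the console app and the dashboard you can additionally call Database.SetInitializer<AppDbContext>(null); at startup, so EF never tries to create or modify the schema from those entry points.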

Related

How to do migration & seed with MongoDB.Driver on Asp.Net Core?

We are building an ASP.NET Core web API that must use MongoDB on the backend. From what I've found, it is recommended to use MongoDB.Driver directly, because it already does most of the job and it makes less sense to use an ORM (EF Core) with a NoSQL DB.
One thing I'm not sure about:
Is there a way to do "migrations" as we would with Entity Framework? Same thing for data seeding? I could imagine some way of doing it myself, but it feels like re-inventing the wheel.
So, how should we handle potential data migrations?
P.S. I understand that if we just add a property we might not need an update, but there will be occasions where there are real structural changes.
Some of the changes can be handled without explicit migrations; for some examples, see this link.
For others, we executed some code at startup that created indexes, set up collections or performed migrations. In the best case, this code is idempotent, so that it can be run multiple times. Otherwise we stored a migration marker in the database so that only the necessary migrations are run and parallel executions of the code are avoided.
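A minimal sketch of that marker approach, assuming MongoDB.Driver and made-up collection and migration names:

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class StartupMigrations
    {
        public static void Run(IMongoDatabase db)
        {
            var applied = db.GetCollection<BsonDocument>("_migrations");

            RunOnce(applied, "2020-01-15_add-email-index", () =>
            {
                var users = db.GetCollection<BsonDocument>("users");
                users.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(
                    Builders<BsonDocument>.IndexKeys.Ascending("email")));
            });
        }

        private static void RunOnce(IMongoCollection<BsonDocument> applied, string id, Action migration)
        {
            // Skip the migration if its marker is already stored; otherwise run it and record the marker.
            // A production version would also need to guard against parallel startups (e.g. a lock document).
            if (applied.Find(Builders<BsonDocument>.Filter.Eq("_id", id)).Any()) return;

            migration();
            applied.InsertOne(new BsonDocument("_id", id));
        }
    }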
If you need a more sophisticated approach with documents storing their version and the option to perform an on-the-fly-migration on a per-document-basis, you could check out this package.
Up to now, it was sufficient for us to perform the migrations at startup so that we haven't used this package yet.

How can I structure an ASP.NET MVC application with a "Core" database and individual derived databases using Entity Framework?

I had a hard time naming and wording this question, as there's a lot to unpack, so I apologize in advance - for anyone who spends the time to review and respond to this, I very much appreciate you.
Background:
I have a relatively large ASP.NET MVC5 application using Entity Framework 6 and a SQL Server database. Currently, the solution is split into a few projects, mostly by layer (business, data, etc). There is a single .edmx file and dbContext for the application, and it points to a single database at the moment.
The code/solution above represents the "core" of the system being built. However, this application is customized per client, therefore each client could have their own modules, pages, logic, etc. Due to this, we have a project in the solution for each client (only a couple right now, but will eventually be 50+ - is that an issue? Split the solution up maybe?). The intention is to be able to deploy just that client's code along with the core, or to be able to deploy just the core as well.
In addition to the custom modules in the code, clients may also have their own custom database, again derived from a Core database. The custom database will always be kept up to date with the core DB, but may have additional objects (tables, stored procedures, etc). One thing to note: I do not have the option of veering away from this approach - each client will definitely have their own copy of the "core", kept up to date by a push tool developed in-house.
Problem/Question:
With that, each client will have their own database, which will essentially be the Core database with the potential for extra objects added in for that client's implementation.
The issue I'm struggling with is how to implement this in Entity Framework in a way which does not require me to add all of those custom db objects to the Core database, or at the very least keep them logically separated, relegated to the client projects. What would be the best way to go about this?
My Idea For Implementation
This is definitely where I am struggling at the moment. I am not really sure if my current idea will work, but I am still investigating and trying to come up with better options.
My current idea is as follows... Since I can target a specific schema when generating an EDMX, I would place client-specific objects in a schema for their project, and use those to generate a dbContext in each client project/database, which inherits from the Core's dbContext implementation (containing all the "core" objects). This would mean ClientA's project would have an edmx file with just their custom tables/objects, inheriting all of the core's objects, but keeping them separate from other clients' objects.
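To make that concrete, a rough sketch of the inheritance idea (written code-first style purely for illustration; all names are made up):

    using System.Data.Entity;

    public class Order { public int Id { get; set; } }                                           // a "core" entity
    public class ClientAReport { public int Id { get; set; } public int OrderId { get; set; } }  // ClientA-only entity

    // Core project: everything every client gets.
    public class CoreDbContext : DbContext
    {
        public CoreDbContext(string connectionString) : base(connectionString) { }

        public DbSet<Order> Orders { get; set; }
    }

    // ClientA project: only ClientA's extra objects, plus everything inherited from the core.
    public class ClientADbContext : CoreDbContext
    {
        public ClientADbContext(string connectionString) : base(connectionString) { }

        public DbSet<ClientAReport> Reports { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
            // Keep ClientA's objects in their own schema, separate from other clients.
            modelBuilder.Entity<ClientAReport>().ToTable("Reports", "clienta");
        }
    }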
I'm not completely certain whether this approach will work (playing with it now), my initial concerns are that Entity Framework doesn't appear to generate foreign keys between the contexts. For example, if ClientA's table has a foreign key pointing to a core table, the generation tool doesn't appear to generate that relationship. That said, could I manually implement this effectively? The core code is database first, however I could implement the smaller, client specific items code-first, which I believe would give me far more flexibility. Would this be an effective approach? If not, is there a better approach out there I could use?
As a developer in a very similar situation (6 years on a project for multiple clients) I can say that your approach is full of pain. Customising your code per client is a road to hell.
You need to deploy the same code to every client. Core stays the same. Satellite modules developed for a specific client should be done as generic as possible (so you can re-sell them multiple times) and also deployed to everyone. The trick is to have a good toggle system that will enable only the right functionality per client.
I.e. there is a controller that saves, for example, company information. Everyone gets the same code, but if customer BobTheBuilder Ltd. requires special validation for companies, then that code goes into the MyApp.BobTheBuilder.* namespace, and your configuration code should know that it should be executed instead of your general code. Needless to say, this should be done via a DI container, and implementations should be replaced by injecting objects that implement the common interface.
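A sketch of that toggle-by-registration idea (all names are hypothetical, and the container shown is Microsoft.Extensions.DependencyInjection - any container your MVC stack supports works the same way):

    using System.Collections.Generic;
    using Microsoft.Extensions.DependencyInjection;

    public class Company { public string Name { get; set; } }

    public interface ICompanyValidator
    {
        void Validate(Company company);
    }

    public class DefaultCompanyValidator : ICompanyValidator
    {
        public void Validate(Company company) { /* rules everyone gets */ }
    }

    // Lives in MyApp.BobTheBuilder.*, deployed to every tenant but only resolved
    // when the tenant's toggle enables it.
    public class BobTheBuilderCompanyValidator : ICompanyValidator
    {
        public void Validate(Company company) { /* extra rules for this tenant */ }
    }

    public class TenantToggles
    {
        private readonly HashSet<string> _enabled;
        public TenantToggles(IEnumerable<string> enabled) { _enabled = new HashSet<string>(enabled); }
        public bool IsEnabled(string toggle) { return _enabled.Contains(toggle); }
    }

    public static class CompanyModule
    {
        public static void Register(IServiceCollection services, TenantToggles toggles)
        {
            // The toggle decides which implementation is injected; the controller only sees ICompanyValidator.
            if (toggles.IsEnabled("BobTheBuilder.CompanyValidation"))
                services.AddScoped<ICompanyValidator, BobTheBuilderCompanyValidator>();
            else
                services.AddScoped<ICompanyValidator, DefaultCompanyValidator>();
        }
    }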
As for the database - you can have multiple DbContexts that represent your database modules. They can live in the same database, but it is best to separate modules by schema name. So yes, all those objects go into your codebase. Only not every tenant will get all the tables - only enabled modules should be activated and create their tenant tables.
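With EF6 that separation can be as simple as giving each module's context its own default schema (module and entity names here are made up):

    using System.Data.Entity;

    public class Invoice { public int Id { get; set; } }

    public class BillingContext : DbContext
    {
        public DbSet<Invoice> Invoices { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Every table belonging to this module lands in the "billing" schema.
            modelBuilder.HasDefaultSchema("billing");
        }
    }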
As for a project per customer - that's also a big pain. Imagine you have more than 10 customers and need to update the Newtonsoft.Json package - that usually takes a bit more than forever! We tried that and fell back to namespace-per-customer overrides.
Generally, here is our setup:
Tenants all get the same codebase deployed to them, but functionality is disabled by toggles.
Tenants each get their own database with all the tables and enabled schemas (modules).
Do not customise your core per tenant. All customisations go into modules.
CQRS is recommended, but you can live without it. Though life is a lot easier when you have only a handful of interfaces to think about.
DI is a must. You can't make all that happen without a good container that supports multi-tenancy.
There are modules that do some specific stuff, developed per customer. Each module has its own toggles and is very configurable - so multiple tenants can get the same module, but it can be re-configured independently for each of them.
You can implement inheritance with the Entity Framework in an ASP.NET MVC Application:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-inheritance-with-the-entity-framework-in-an-asp-net-mvc-application
There are a few approaches: Table-per-Hierarchy (TPH) inheritance, Table-per-Type (TPT) inheritance and Table-per-Concrete-Class (TPC) inheritance.
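For example, a small Table-per-Type sketch in EF6 code-first (entities are invented for illustration); each derived type gets its own table joined to the base table by its key:

    using System.ComponentModel.DataAnnotations.Schema;

    [Table("People")]
    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // TPT: Instructors and Students each get their own table,
    // holding only their extra columns plus the shared key.
    [Table("Instructors")]
    public class Instructor : Person
    {
        public string Department { get; set; }
    }

    [Table("Students")]
    public class Student : Person
    {
        public int EnrollmentYear { get; set; }
    }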
You might also consider a microservice architecture if you're concerned about how the different schemas will integrate.
"Entity Framework doesn't appear to generate foreign keys between the contexts."
That approach sounds painful. Using microservices to encapsulate the Core and client DBs as their own entities, you could then use message queues to broker communication between them.

Continuous delivery and database schema changes with entity framework

We want to progress towards being able to do continuous delivery of our application into production. We currently deploy to Azure and use table/blob storage and an Azure SQL database, which we access with Entity Framework.
As the database schema changes we want to be able to automatically apply the schema changes to the production database, but as this will happen whilst the application is live and the code changes are being deployed to many nodes at the same time, we are not sure what the correct approach is.
After some reading it seems (and this makes sense) that the application needs to be tolerant of the 2 different database schema versions, so that it doesn't matter whether an old or a new version of the code sees the database. However, I'm not sure of the best way to handle this in the application using Entity Framework.
Should we have versioned instances of the EF generated classes in the code which know how to access a specific version of the schema? What happens when the schema is updated and an old version of the code is running against the database?
Our Entity Framework classes are mapped to views in specific schemas in the DB and nothing is mapped to the underlying tables, so potentially this could allow us to create v1 views which the old code uses and v2 views which the new code uses, but maintaining this feels like it would be a bit of a nightmare (it's already enough of a pain simply maintaining the EF mappings to views rather than tables).
So what are best practices in this area? What do others do to solve this problem?
Whether you use EF or not, maintaining the code's ability to work with 2 consecutive versions of the database is a good (and perhaps the only viable) approach here.
Here are some ways we handle specific types of migrations:
When adding a column, we can typically just add the column (with a default constraint if it is non-nullable) and not worry about the code. EF will never issue a "SELECT *", so it will be able to continue to function properly while ignoring the new column (a sketch of this case follows this list). Similarly, adding a table is easy.
When removing a column or table, simply keep that column around one version longer than you otherwise would.
For more complex migrations (e.g. completely changing the structure of a table or a segment of the data model), deploy the new model alongside backwards-compatibility views (or tables with triggers to keep them in sync), which live as long as the code that references them does. As you say, this can be a lot of work depending on the complexity of the migration, but it sounds like you are already well-positioned to do this because your EF entities point to views anyway. On the other hand, the benefit of this work is that you have more time to do the code migration. If you have a large codebase, this could be really beneficial in allowing you to migrate the data model to fit the needs of new features while still supporting old features without major code changes.
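For the simple additive case, the change might look like this whether you script it by hand or use EF6 code-first migrations (hypothetical table and column; nullable so that old code and existing rows keep working):

    using System.Data.Entity.Migrations;

    public partial class AddMiddleNameToCustomers : DbMigration
    {
        public override void Up()
        {
            // Old deployments never map this column, so they keep running unchanged.
            AddColumn("dbo.Customers", "MiddleName", c => c.String(nullable: true));
        }

        public override void Down()
        {
            DropColumn("dbo.Customers", "MiddleName");
        }
    }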
As a side note, the difficulty of data migration often makes us push developing a finalized data model as far back as possible in the development schedule. With EF, you can write and test a lot of code before the data model is finalized (we use code-first to generate a sample SQL Express database in unit tests, even though our production database is not maintained by code-first). That way, we make fewer incremental changes to the production data model once a new feature is released.

How to represent a MySQL database schema in C#?

The title is not so accurate, but I couldn't come up with a better one.
I'm trying to write a MySQL connector for MS' Forefront Identity Manager (FIM is basically a sync engine that synchronizes identities between various data sources using a meta directory). But I'm having difficulty coming up with an appropriate design.
Let’s say I want to import user data from a db into FIM’s metaverse. A user object has various attributes like firstname, lastname, address etc. In the database these attributes can be distributed between multiple tables. FIM ultimately needs these attributes to be merged into one object. So the user needs to configure the connector to tell it how the data is stored in the DB.
I was wondering what would be the “best” way to represent this configuration. Two alternatives come to (my) mind:
I could just save a select query that merges/joins the data, so that the result is a single "table" with all the desired attributes. The problem with this is that I think I would have to do some kind of parsing on this query string to create a FIM-compatible schema out of it (which is basically the name of the object type (e.g. "person") and a list of attributes). This schema needs to be creatable from the query string alone without actually executing the query (I could execute some fake queries if that would simplify the process).
I could create some classes to represent the database schema, i.e. the tables and relationships. Since I’m not that experienced with MySQL (or databases at all for that matter) I’m running the risk of missing some special cases. Also it might be some kind of overkill, since the schema can be assumed as fixed once it's configured.
Does anyone have some advice on which alternative to choose and how to tackle the problems that would come with it? Or is there another - better - alternative I didn't think of? Any advice would be greatly appreciated!
If something is not clear, please let me know.
Edit: Since there have been some questions on the use case, I'm going to elaborate a bit:
As I've said, I'm developing a Management Agent for FIM. FIM provides a so called Extensible Connectivity Management Agent, which is basically one single class implementing a few interfaces. (See this technet guide for a sample implementation).
Since I want to develop a generic agent for managing identities in a MySQL database, I don't know the database layout at compile time. When the end user wants to use the management agent, he needs to decide which attributes of the identities he'd like to manage. So I need to give the user some way to configure the management agent. My main question is how to design the classes to save this configuration.
Let's look at a simple example:
Say you want to manage employee identities. To keep it simple, we have three attributes:
firstName
lastName
department
In this example it could be, e.g., just one single table with 4 columns (the attributes plus an id). But it could also be the much better design that uses two tables, one user table and one department table, with a 1:1 relation to define the user's department.
FIM requires me to consolidate these attributes in one object. It provides a class CSEntryChange which has an AttributeChanges collection member. I would then create some instances of AttributeChange (which basically contains the attribute name and its value) and add them to the collection. So the user-editable configuration must tell the management agent how it can get the users with all defined attributes from the DB, and how to create and modify users in that database.
So ideally I'd have an instance of some "MySQLSchema" class (configured by the user up front) that could return a List<CSEntryChange> (I wouldn't actually use the CSEntryChange class, for the sake of decoupling, but you should get the point) containing all users in the DB (pagination might be a requirement, but I can figure that out later). In addition I'd like to be able to pass it a CSEntryChange, which would result in the corresponding database entries being updated (or created if not yet present).
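To make that shape a bit more concrete, here is a rough sketch of the kind of configuration classes I have in mind (all names hypothetical):

    using System.Collections.Generic;

    // One managed object type, e.g. "person".
    public class ObjectTypeConfig
    {
        public string Name { get; set; }                 // "person"
        public string AnchorColumn { get; set; }         // e.g. "users.id"
        public List<AttributeMapping> Attributes { get; set; }
    }

    // Where a single FIM attribute comes from in the database.
    public class AttributeMapping
    {
        public string AttributeName { get; set; }        // "department"
        public string Table { get; set; }                // "departments"
        public string Column { get; set; }               // "name"
        public string JoinCondition { get; set; }        // "users.department_id = departments.id"
    }

From such a configuration the agent could both emit the FIM schema (object type name plus attribute list) and generate the SELECT with the necessary joins, without having to parse a free-form query string.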
I hope this clears it up a bit more :)
I think that your real question is, "How to access MySQL entities over C#?"
To begin with, I hope you are building this as an MVC application.
I would suggest sticking to a full Microsoft stack for purposes of learning and ease of implementation.
With this in mind, you will want to create an Entity Framework MySQL data provider with the following steps:
Create a new project and add EntityFramework either through the NuGet package manager UI or the package manager console by typing Install-Package EntityFramework -Version 6.0.2 (and add a reference to this project from your web project). Look halfway down the page for "Configure EntityFramework to work with a MySQL database".
Install the MySQL provider for Entity Framework through the NuGet package manager UI or by typing Install-Package MySql.Data.Entity in the package manager console.
The next step requires an understanding of DB configuration changes, which are nicely detailed here - Configure EntityFramework to work with a MySQL database.
You should end up with a nice class structure which will allow you to traverse your entities' navigation properties through EF.
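Once wired up, the context might look roughly like this (connection string name and entities are placeholders; the MySqlEFConfiguration attribute comes with the MySql.Data.Entity package):

    using System.Collections.Generic;
    using System.Data.Entity;
    using MySql.Data.Entity;

    [DbConfigurationType(typeof(MySqlEFConfiguration))]
    public class FimSourceContext : DbContext
    {
        public FimSourceContext() : base("name=MySqlConnection") { }

        public DbSet<User> Users { get; set; }
        public DbSet<Department> Departments { get; set; }
    }

    public class User
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int DepartmentId { get; set; }
        public virtual Department Department { get; set; }   // navigation property
    }

    public class Department
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual ICollection<User> Users { get; set; }
    }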
Depending on the level of security your application requires, you may also want to create data transfer objects (DTOs) that contain only the data required for your remote calls - keeping your data calls efficient.
This is by no means a definitive guide on how to do this, but hopefully gives you a start in the right direction.
With regards to your step #1 above:
I could just save a select query that merges/joins the data, so that the result is a single "table" with all the desired attributes. The problem with this is that I think I would have to do some kind of parsing on this query string to create a FIM-compatible schema out of it (which is basically the name of the object type (e.g. "person") and a list of attributes). This schema needs to be creatable from the query string alone without actually executing the query (I could execute some fake queries if that would simplify the process).
I am slightly confused by this. Are you saying that you want to dynamically update your database schema based on application requests?
You can use NHibernate with MySQL; NHibernate is a full-featured ORM where your C# classes map to your MySQL tables, and the rest will be a breeze once you get the hang of NHibernate.
A sample is here for your reference.
http://www.codeproject.com/Articles/26123/NHibernate-and-MySQL-A-simple-example
When you use the MySQL Connector/Net you can also use Entity Framework, as in this example from MSDN:
    // BloggingContext is a DbContext with a DbSet<Blog>, defined elsewhere in the MSDN sample.
    using (var db = new BloggingContext())
    {
        // Create and save a new Blog
        Console.Write("Enter a name for a new Blog: ");
        var name = Console.ReadLine();
        var blog = new Blog { Name = name };
        db.Blogs.Add(blog);
        db.SaveChanges();
    }
I have some experience with .NET <-> MySQL communication and I've used Entity Framework for it in the past - I had a lot of problems and performance issues with it and soon came to regret using it (this was 1-2 years ago, so maybe they have fixed it up since). Of course, using an ORM framework adds a layer on top of your DB communication, which in my case proved to be undesirable in terms of performance and flexibility.
Finally, I chose to take the following approach:
1) Create models with POCO classes as you would with Entity Framework. Those models may or may not include relationships - it is up to your preference. I prefer to only add the relationships when I actually need them (so some objects may have their DB relationships in the POCOs and some may not). I chose this because it lowers the complexity of deciding when to pre-load the relationships and when not to. Basically, if you don't need it - don't add it.
2) Create a DAL layer (for example, using the repository pattern) that accepts and works with those objects and fires direct queries at MySQL. No EF is required for this - you just need to install Connector/NET for MySQL and you are ready to go.
A quick example of this would be the following (note: the example is off the top of my head and is just to illustrate the classes; note the command parameters, which prevent SQL injection):
    using MySql.Data.MySqlClient;

    public class Person
    {
        public string Name { get; set; }
    }

    public interface IPersonRepository
    {
        void AddPerson(Person p);
    }

    public class PersonRepository : IPersonRepository
    {
        public void AddPerson(Person p)
        {
            using (var connection = new MySqlConnection("some connection string"))
            {
                connection.Open();
                // Parameterised to avoid SQL injection.
                var command = new MySqlCommand("insert into Person (Name) values (@name)", connection);
                command.Parameters.AddWithValue("@name", p.Name);
                command.ExecuteNonQuery();
            }
        }
    }
The benefits of this approach for me are:
Performance - my application needs to insert large amounts of data into MySQL, and Entity Framework could not cope with this. If your application doesn't handle a lot of data you might be all right with EF.
Flexibility - writing my own queries gives me better control over the communication. You can choose, for example, to use bulk inserts in MySQL (from file - really powerful and fast when you need to handle large amounts of data), for which you would need to bypass Entity Framework. I also found that EF generates some funky queries.
The main drawback is, of course, more work - you will get some things for "free" with the Entity Framework.
So, I can recommend the following:
Consider the amounts of data that you need to handle and make a small exercise application with those amounts. How does EF (or any other ORM) handle it? What about direct queries to the database? That will give you a somewhat accurate idea of how the communication will perform.
Consider how much time you have for building this application - if you are looking for a quick solution and are willing to sacrifice a bit of performance - go for EF or another ORM framework. If you have more time on your hands and would like to make a flexible solution - go for direct queries to the database.
Good luck!
Use Entity Framework Code First.
http://msdn.microsoft.com/en-us/data/jj193542.aspx
It is still a lot of work, but I think this is the quickest approach.
Create C# classes according to the user's configuration and create the DB schema from those classes.

Updating database for desktop application (patching)

I wonder what you use to update a client database when your program is patched?
Let's take a look at this scenario:
You have a desktop application (.NET, Entity Framework) which uses a SQL Server Compact database.
You release a new version of your application which uses an extended database schema.
The user downloads a patch with the modified files.
How do you update the database?
I wonder how you handle this process. I have some ideas, but I think more experienced people can give me better, tried-and-tested solutions or advice.
You need a migration framework.
There are existing OSS libraries like FluentMigrator
project page
wiki
long "Getting started" blogpost
Entity Framework Code First will also get its own migration framework, but it's still in beta:
Code First Migrations: Beta 1 Released
Code First Migrations: Beta 1 ‘No-Magic’ Walkthrough
Code First Migrations: Beta 1 ‘With-Magic’ Walkthrough (Automatic Migrations)
You need to provide a DB upgrade mechanism, either explicit or hidden in your code, and thus implement something like a DB versioning chain.
There are a couple of aspects to it.
First is versioning. You need some way of tying the version of the DB to the version of the program; it could be something as simple as a table with a version number in it. You need to check it when the application starts as well.
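A minimal sketch of that check, assuming a single-row version table and an ordered set of upgrade scripts (all names and SQL are hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Data;

    public static class SchemaUpgrader
    {
        // Upgrade scripts keyed by the schema version they upgrade *to*, applied in order.
        private static readonly SortedDictionary<int, string> Upgrades = new SortedDictionary<int, string>
        {
            { 2, "ALTER TABLE Customers ADD Email NVARCHAR(256) NULL" },
            // { 3, "..." },
        };

        public static void UpgradeTo(IDbConnection connection, int codeVersion)
        {
            int dbVersion = GetVersion(connection);
            if (dbVersion > codeVersion)
                throw new InvalidOperationException("Database is newer than this build; refusing to run.");

            foreach (var step in Upgrades)
            {
                if (step.Key <= dbVersion || step.Key > codeVersion) continue;
                Execute(connection, step.Value);
                Execute(connection, "UPDATE SchemaVersion SET Version = " + step.Key);
            }
        }

        private static int GetVersion(IDbConnection connection)
        {
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "SELECT Version FROM SchemaVersion";
                return Convert.ToInt32(cmd.ExecuteScalar());
            }
        }

        private static void Execute(IDbConnection connection, string sql)
        {
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = sql;
                cmd.ExecuteNonQuery();
            }
        }
    }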
One fun scenario: you 'update' the application and DB successfully, and then for some operational reason the customer restores a previous version of the DB. Or, if you are on a frequent patch cycle, do you have to apply each patch in order, or can they catch up? Do you want to handle application-only and database-only upgrades differently?
There's no one right way for this, you have to look at what sort of changes you make, and what level of complexity you are prepared to maintain in order to cope with everything that could go wrong.
A couple of things worth looking at:
Two databases, one for static 'read-only' data, and one for more dynamic stuff. Upgrading the static data can then simply be a restore from a resource within the upgrade package.
The other is how much you can do with metadata stored in DB tables. For instance, a version-based XSD to describe your objects instead of a concrete class. That goes in your read-only DB; now you've updated code and application with a restore and possibly some transforms.
Lots of ways to go, just remember
'users' will always find some way of making you look like an eejit, by doing something you never thought they would.
The more complex you make the system, the more chance of the above.
And last but not least, don't take short cuts on data version conversions, if you lose data integrity, everything else you do will be wasted.
