By adopting ASP.NET Identity, I mean reusing as many parts of it as possible, including the IdentityDbContext and the related built-in models such as IdentityUser, IdentityUserClaim, ...
However, when writing code with Entity Framework (or any other framework), we commonly have our own BaseEntity class as the root of all other entities. It may even be empty for now, but we may need to add some common properties to it later.
So I've found that those two things cannot live together: I prefer to have a BaseEntity class, but reusing ASP.NET Identity has its own benefits.
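To make the conflict concrete (Product is just an illustrative entity):

// What I want: my own root class for every entity.
public abstract class BaseEntity
{
    // Empty for now; common properties may be added here later.
}

public class Product : BaseEntity
{
    public int Id { get; set; }
}

// What ASP.NET Identity wants: the user class must derive from IdentityUser.
// C# has single inheritance, so ApplicationUser cannot also derive from BaseEntity.
public class ApplicationUser : Microsoft.AspNet.Identity.EntityFramework.IdentityUser
{
}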
What would you do in this case? I have a feeling that ASP.NET Identity has a design fault; it's not perfect, and it leaves me confused when choosing an approach. (When we have just one choice we can take it easy, but when we have several choices we may need days to evaluate and weigh them before arriving at a final, hard decision.)
I had a hard time naming and wording this question, as there's a lot to unpack, so I apologize in advance. To anyone who spends the time to review and respond to this: I very much appreciate you.
Background:
I have a relatively large ASP.NET MVC5 application using Entity Framework 6 with a SQL Server database. Currently, the solution is split into a few projects, mostly by layer (business, data, etc.). There is a single .edmx file and dbContext for the application, and it points to a single database at the moment.
The code/solution above represents the "core" of the system being built. However, this application is customized per client, therefore each client could have their own modules, pages, logic, etc. Due to this, we have a project in the solution for each client (only a couple right now, but will eventually be 50+ - is that an issue? Split the solution up maybe?). The intention is to be able to deploy just that client's code along with the core, or to be able to deploy just the core as well.
In addition to the custom modules in the code, clients may also have their own custom database, again derived from the Core database. The custom database will always be kept up to date with the core db, but may have additional objects (tables, stored procedures, etc.). One thing to note: I do not have the option of veering away from this approach. Each client will definitely have their own copy of the "core", but it will be kept up to date using a push tool developed in-house.
Problem/Question:
With that, each client's database will essentially be the Core database, with the potential for extra objects added in for that client's implementation.
The issue I'm struggling with is how to implement this in Entity Framework in a way which does not require me to add all of those custom db objects to the Core database, or at the very least keep them logically separated, relegated to the client projects. What would be the best way to go about this?
My Idea For Implementation
This is definitely where I am struggling at the moment. I am not really sure if my current idea will work, but I am still investigating and trying to come up with better options.
My current idea is as follows: since I can target a specific schema when generating an EDMX, I can place client-specific objects in a schema for their project and use those to generate a dbContext in each client project/database, which inherits from the Core's dbContext implementation (containing all the "core" objects). This would mean ClientA's project would have an edmx file with just their custom tables/objects, inheriting all of the core's objects but keeping them separate from other clients' objects.
I'm not completely certain whether this approach will work (playing with it now); my initial concern is that Entity Framework doesn't appear to generate foreign keys between the contexts. For example, if ClientA's table has a foreign key pointing to a core table, the generation tool doesn't appear to generate that relationship. That said, could I manually implement this effectively? The core code is database-first, but I could implement the smaller, client-specific items code-first, which I believe would give me far more flexibility. Would this be an effective approach? If not, is there a better approach I could use?
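For what it's worth, here is a rough code-first sketch of the inheritance idea (all names are placeholders, and I haven't verified this end to end):

using System.Data.Entity;

public class Customer { public int Id { get; set; } }       // a "core" table
public class ClientAWidget { public int Id { get; set; } }  // a ClientA-only table

// Core project: one context containing all "core" objects.
public class CoreDbContext : DbContext
{
    public CoreDbContext() : base("name=CoreConnection") { }
    protected CoreDbContext(string nameOrConnectionString) : base(nameOrConnectionString) { }

    public DbSet<Customer> Customers { get; set; }
}

// ClientA project: inherits every core DbSet and adds client-specific ones.
public class ClientADbContext : CoreDbContext
{
    public ClientADbContext() : base("name=ClientAConnection") { }

    public DbSet<ClientAWidget> Widgets { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        // Keep ClientA's objects in their own schema, logically separated from the core.
        modelBuilder.Entity<ClientAWidget>().ToTable("Widgets", "ClientA");
    }
}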
As a developer in a very similar situation (6 years on a project for multiple clients), I can say that your approach is full of pain. Customising your code per client is a road to hell.
You need to deploy the same code to every client. Core stays the same. Satellite modules developed for a specific client should be done as generic as possible (so you can re-sell them multiple times) and also deployed to everyone. The trick is to have a good toggle system that will enable only the right functionality per client.
For example, say there is a controller that saves company information. Everyone gets the same code, but if a customer, BobTheBuilder Ltd., requires special validation for companies, then that code goes into the MyApp.BobTheBuilder.* namespace, and your configuration code should know that this code should be executed instead of your general code. Needless to say, this should be done via a DI container, with implementations replaced by injecting objects that implement the common interface.
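A container-agnostic sketch of that idea (ICompanyValidator, ITenantConfig and the toggle name are all made up for illustration; a real multi-tenant DI container would do this wiring per request):

using System;

public class Company { public string Name { get; set; } }

// Common contract that both the general and the tenant-specific code implement.
public interface ICompanyValidator
{
    void Validate(Company company);
}

// General validation that every tenant gets by default.
public class DefaultCompanyValidator : ICompanyValidator
{
    public void Validate(Company company)
    {
        if (string.IsNullOrEmpty(company.Name))
            throw new ArgumentException("Company name is required.");
    }
}

namespace MyApp.BobTheBuilder
{
    // Tenant-specific override; deployed to everyone, enabled only for this tenant.
    public class BobTheBuilderCompanyValidator : ICompanyValidator
    {
        public void Validate(Company company)
        {
            // BobTheBuilder Ltd.'s special validation rules go here.
        }
    }
}

public interface ITenantConfig
{
    bool IsEnabled(string toggle);
}

public static class CompanyValidatorFactory
{
    public static ICompanyValidator Create(ITenantConfig tenantConfig)
    {
        // The toggle system picks the implementation; callers only see the interface.
        return tenantConfig.IsEnabled("BobTheBuilderValidation")
            ? (ICompanyValidator)new MyApp.BobTheBuilder.BobTheBuilderCompanyValidator()
            : new DefaultCompanyValidator();
    }
}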
As for the database - you can have multiple DB Contexts that represent your database modules. They can live in the same database, but it's best to separate modules by schema name. So yes, all those objects go into your codebase. Only not every tenant will get all the tables - only enabled modules should be activated and create their tenant tables.
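In EF6 terms, a module context can claim its own schema like this (BillingContext and Invoice are just illustrative names):

using System.Data.Entity;

public class Invoice { public int Id { get; set; } }

// One context per module; each module's tables live in its own schema.
public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Everything this module maps lands in the "billing" schema.
        modelBuilder.HasDefaultSchema("billing");
    }
}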
As for a project per customer - that's also a big pain. Imagine you have more than 10 customers and need to update the Newtonsoft.Json package - that usually takes a bit more than forever! We tried that and fell back to namespace-per-customer overrides.
Generally, here is our scheme:
Tenants all get the same codebase deployed to them, but functionality is disabled by toggles
Tenants each get their own database with all the tables and enabled schemas (modules)
Do not customise your core per tenant. All customisations go into modules.
CQRS is recommended, but you can live without it. Though life is a lot easier when you have only a handful of interfaces to think about.
DI is a must. Can't make all that happen without a good container that supports multi-tenancy.
There are modules that do specific stuff developed per customer. Each module has its own toggles and is very configurable, so multiple tenants can get the same module but re-configure it independently.
You can implement inheritance with the Entity Framework in an ASP.NET MVC Application:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-inheritance-with-the-entity-framework-in-an-asp-net-mvc-application
There are a few approaches: Table-per-Hierarchy (TPH) inheritance, Table-per-Type (TPT) inheritance, and Table-per-Concrete-Class (TPC) inheritance.
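For instance, TPH (the EF default) keeps a whole hierarchy in one table with a discriminator column; a minimal sketch along the lines of that tutorial:

using System;
using System.Data.Entity;

public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Student : Person
{
    public DateTime EnrollmentDate { get; set; }
}

public class Instructor : Person
{
    public DateTime HireDate { get; set; }
}

public class SchoolContext : DbContext
{
    // Students and Instructors share one People table, distinguished by a discriminator.
    public DbSet<Person> People { get; set; }
}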
You might also consider a microservice architecture if you're concerned about how the different schemas will integrate.
Entity Framework doesn't appear to generate foreign keys between the contexts.
That approach sounds painful. By using microservices to encapsulate the Core and client DBs as their own entities, you could then use message queues to broker communication between them.
VS2013, EF6 code first, MVC, (VB)
I wanted to better understand the pros and cons of using either a single context, or splitting DbSets into multiple contexts. I have been reading through some of the old SO posts on multiple DbContexts and didn't really find what I was looking for: a comprehensive statement on when and where to use or not use multiple DbContexts.
In the case of a single user running a program such as Windows Forms on their own hardware, it would seem there is no reason to have multiple contexts for ease of working with the code.
In the case of a web application that runs a major business program for multiple businesses it would seem multiple DbContexts are essential for security and administration.
But I'd like to get confirmation that I'm thinking about this question correctly. All I can think of is the following, but then I'm quite new to this environment:
Pros of a single context:
Coding only has a single context to deal with
(Are there issues with relationships across contexts?)
Migrations are easier because there is only one migration folder and process
Easier to get a comprehensive diagram constructed in SSMS or EDMX
(Link here for getting EDMX diagrams when using code first)
Cons of a single context:
Security might be an issue for multiple web clients on an enterprise app
(Is this an issue for simple websites that have simple memberships?)
Some SO posts seem to suggest response time is an issue
(What is the mechanism here?)
That's all I have. I don't know enough to fully understand the two sides, and given the different environments we can be working in, it would seem the answer to one or multiple contexts will be different.
I'm currently working on a website that will have memberships, and also a downloadable app which will be a personal app running on the user's hardware. In this case I think a single context for both makes sense, but before I get too deep into it, I thought I would ask for some discussion on this. I presume others who are somewhat new to the environment will continue to have the same questions.
I also note that Microsoft saw fit to add multiple context capability to EF in EF6 and higher, so clearly there must be some programming environments that give rise to compelling reasons to have multiple contexts.
Thanks for the input.
Best Regards,
Alan
The only good reason to have multiple contexts, in my opinion, is if you have multiple databases in play. One application I work with has 3 contexts, for example. Two contexts are for existing databases that the application is not directly responsible for, while the third is the application-specific context with migrations.
There's no real benefit to splitting a context. Erik suggests that large contexts have performance issues, but I've worked with a single context with 50+ object sets in it, and have noticed no performance problems at all.
However, on the flip side, there are real detriments to working with multiple contexts. For one, you lose the ability to work with multiple objects seamlessly unless they all reside in the same context. Also, multiple contexts tend to confuse the heck out of green developers because of Entity Framework's object graph tracking. For example, let's say you had two entities, Foo and Bar, both in separate contexts. If you created a relationship to Bar on Foo:
public class Foo
{
    public virtual Bar Bar { get; set; }
}
Well, guess what? Both Foo and Bar are now tracked by Foo's context. If you then try to run migrations on both contexts, you'll get errors because Bar is managed in two, count 'em, two contexts.
Mistakes like this are ridiculously easy to make, and you'll drive yourself nuts trying to keep everything totally excluded. Plus, my argument has always been that if you have entities in your project that you can completely exclude from others, then that's an argument for a totally separate project, not just a different context.
I saw in the comments that you mentioned learning Domain Driven Design. One of the concepts in DDD is that of Bounded Contexts (be sure to check out the linked resource on bubble contexts to see how to deal with two contexts that share Entities).
It makes absolute sense to map your bounded contexts using a separate DbContext for each. There are certain gotchas to be wary of when doing this, but following a DDD approach should help you avoid them. The primary one is shared entities: one context should be responsible for controlling the lifecycle of the shared entities; the other should only query those entities and not make any changes to them.
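A rough sketch of that ownership split (the contexts and entities are illustrative; the reading context maps the shared Customer but never modifies it):

using System.Data.Entity;
using System.Linq;

public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class Order { public int Id { get; set; } }
public class Shipment { public int Id { get; set; } }

// This bounded context owns the Customer lifecycle (create/update/delete).
public class OrderingContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

// This bounded context only reads Customers and never saves changes to them.
public class ShippingContext : DbContext
{
    public DbSet<Shipment> Shipments { get; set; }

    public IQueryable<Customer> Customers
    {
        // AsNoTracking keeps the shared entity read-only in this context.
        get { return Set<Customer>().AsNoTracking(); }
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the shared entity so it can be queried here.
        modelBuilder.Entity<Customer>();
    }
}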
Separating your domain into bounded contexts will allow you to more easily manage a large/complex domain. It also avoids loading parts of the domain unnecessarily if you don't need them (in an SOA, you can deploy each bounded context autonomously as a service, something that Udi Dahan calls an Autonomous Business Component).
I wouldn't advocate doing the split until you have to. For example, multiple teams working with different parts of the domain at the same time might present a good opportunity to make the split, but at some point it will definitely make sense to do so.
I'm trying to implement Identity 2.0 in my ASP.NET MVC 5 solution that abides the onion architecture.
I have an ApplicationUser in my core.
namespace Core.DomainModel
{
    public class ApplicationUser {...}
}
In my Data Access Layer I'm using Entity Framework 6.1, and my context derives from IdentityDbContext; herein lies the problem: ApplicationUser needs to derive from Microsoft.AspNet.Identity.EntityFramework.IdentityUser.
namespace Infrastructure.DAL
{
    public class TestContext : IdentityDbContext<ApplicationUser> {...}
}
My domain model shouldn't reference Microsoft.AspNet.Identity.EntityFramework; that would go against the idea of the onion.
What's a good solution?
Yep, this is the big problem with the Identity framework, and I have found no good solution for it yet.
I contemplated adding EF to my domain project but decided against it in one project: domain models are not aware of ApplicationUser, only using the Id of the current user, which they get from
ClaimsPrincipal.Current.Claims
    .FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier)
    .Value
In that project I kept all Identity code in Web and Data projects.
In my other project I have added Identity and EF all over the place, including the Domain project. And guess what? Nothing bad happened.
I have also looked at solutions like the already-provided link to Imran Baloch's blog. It looked like a lot of work to me for no customer value.
Just to repeat myself: there is no good solution for separating EF from Identity without rewriting a pile of code (don't like it). So either add EF to your Domain project (don't like it) or keep your Identity code in the Web/Data projects (sometimes not possible, so I also don't like it).
Sorry to say, but this is a low-level limitation of .NET.
You can inherit IUser from the Core namespace and the UserManager will be happy. You will need to replace the IUserStore with your own implementation. Then initialize the user manager with something like:
new UserManager<ApplicationUser>(new YourNameSpace.UserStore<ApplicationUser>())
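A minimal sketch of such a custom store (method bodies are stubs; a real implementation would call into your own data access code):

using System.Threading.Tasks;
using Microsoft.AspNet.Identity;

public class UserStore<TUser> : IUserStore<TUser> where TUser : class, IUser<string>
{
    public Task CreateAsync(TUser user) { /* insert via your own DAL */ return Task.FromResult(0); }
    public Task UpdateAsync(TUser user) { /* update via your own DAL */ return Task.FromResult(0); }
    public Task DeleteAsync(TUser user) { /* delete via your own DAL */ return Task.FromResult(0); }
    public Task<TUser> FindByIdAsync(string userId) { /* look up by id */ return Task.FromResult<TUser>(null); }
    public Task<TUser> FindByNameAsync(string userName) { /* look up by name */ return Task.FromResult<TUser>(null); }
    public void Dispose() { }
}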
The problem is that you are trying to use the Onion pattern, whose very foundation is that dependencies always point inward toward the core.
Strive for single responsibility in the models you are creating. You can do this easily by following Domain Driven Design properly and implementing individual models per layer:
BusinessLogic.Models.ApplicationUser
Presentation.Models.ApplicationUser
DAL.Models.ApplicationUser
Note that all of those models are different classes, even if they have 100% the same properties (although it is never 100%). The drawback is that you may need to map from one model to another, but if you truly aim for a clean, modular and extensible architecture, that is the way.
Hint: you can use AutoMapper (or ExpressMapper) to avoid the code needed for mapping.
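A small sketch of that per-layer mapping (using AutoMapper's older static API from this era; the models are illustrative):

using AutoMapper;

namespace DAL.Models
{
    public class ApplicationUser
    {
        public string Id { get; set; }
        public string Email { get; set; }
    }
}

namespace BusinessLogic.Models
{
    public class ApplicationUser
    {
        public string Id { get; set; }
        public string Email { get; set; }
    }
}

namespace Startup
{
    public static class MappingConfig
    {
        public static void Configure()
        {
            // Identical property names are matched automatically by convention.
            Mapper.CreateMap<DAL.Models.ApplicationUser, BusinessLogic.Models.ApplicationUser>();
        }
    }
}

// Usage: var domainUser = Mapper.Map<BusinessLogic.Models.ApplicationUser>(dalUser);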
I'm trying to create a project from scratch. I'll be using ASP.NET MVC 4 (with ASP.NET Web API) and Entity Framework 5 for data access (all the latest technologies).
Since it's a fresh start, I was thinking of centering my design on my model rather than creating the database first and then creating the EF model, so I thought I'd go with a code-first approach.
The problem with code first (as far as I can see) is that you lose all the scaffolding that EF does for you in a model-first scenario (designer support, easily generating and maintaining entity relationships: 1-1, 1-*, *-*, etc.).
The question is: what tools, templates or snippets can I use to make my life easier when designing my model? I want this process to be as painless as possible, since it involves a lot of repetition (FK relationships, for example, are always the same).
Should I use DbContext or something else? Is there some way to start code first but at the same time maintain an edmx model, or are those mutually exclusive?
thanks!
The great thing about EF Code First is that you don't need any scaffolding. You don't need an EDMX model; you don't even need to specify the exact nature of relationships, because it's all based on conventions. For example, your classes must have a property called Id, which will be taken to be the primary key of the table. All string-based fields are generated as nvarchar(max). Of course, some conventions might not be what you want, and Code First supports this through pluggable conventions (you can remove most conventions and create your own).
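For example, a minimal sketch of those conventions in action (Blog/Post are illustrative):

using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class Blog
{
    public int Id { get; set; }                          // convention: "Id" becomes the primary key
    public string Title { get; set; }                    // convention: strings map to nvarchar(max)
    public virtual ICollection<Post> Posts { get; set; } // convention: 1-* relationship
}

public class Post
{
    public int Id { get; set; }
    public int BlogId { get; set; }                      // convention: foreign key to Blog
    public virtual Blog Blog { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Conventions are pluggable: remove the ones you don't want.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}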
You should do some of the basic tutorials to get an idea of how the Code First flow works as it's an entirely different proposition to the Db First approach.
I seem to be missing something and extensive use of google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence ignorant manner, for example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question whether matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. ORMs also easily implement concepts like lazy loading: hydrating an object at the last minute before it is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects, with no knowledge of the persistence mechanism behind, then you will likely end up doing an end run around most of the power of an ORM, because if the domain doesn't have virtual members or child collections that can be replaced with proxies, then you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
While ORMs do require the domain to know a little about the persistence mechanism in terms of domain design, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These require you to hard-code the get and set between ADO DataReaders and your objects; in the average case a mapping greatly simplifies the process. The DAOs also have to know about each other: to persist an Invoice you need the DAO for the Invoice, which needs DAOs for the InvoiceLine, Customer and GeneralLedger objects as well. And there must be a common, abstracted transaction-control mechanism built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce knowledge of the domain about its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
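To make that closing rule concrete, a tiny illustrative example (class names are made up):

using System.Collections.Generic;

public class Customer { public int Id { get; set; } }
public class InvoiceLine { public int Id { get; set; } }

// Every public member is virtual, so the ORM can substitute lazy-loading proxies.
public class Invoice
{
    public virtual int Id { get; set; }
    public virtual Customer Customer { get; set; }              // hydrated on first access
    public virtual ICollection<InvoiceLine> Lines { get; set; } // proxied collection
}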
In short: it is not solved.
All good points.
I don't have an answer (this comment got too long when I decided to add something about stored procs), except to say that my philosophy seems to be identical to yours, and I either code by hand or code-generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing", as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises, and they are, frankly, too leaky an abstraction for me. And I'm not big on having a lot of duplicate classes, because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side, stored procs are a necessity in my view. I know that ORMs support them, but the tendency of most ORM users is not to use them, and that is a huge negative for me: they talk about best practice and then couple to a table-based design, even if it is created from a code-first model. It seems to me they should look at an object datastore if they don't want to use a relational database in a way that utilizes its strengths. I believe in Code AND Database first, i.e. model the database and the object model simultaneously, back and forth, and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code an ORM against your tables, your app is going to have problems living for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis, which is super-painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are: USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
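For what it's worth, later EF versions (EF6 and up) can route inserts, updates and deletes through stored procedures in code-first, giving the database a proc-based interface instead of direct table DML; a sketch (Invoice is illustrative):

using System.Data.Entity;

public class Invoice { public int Id { get; set; } }

public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Generates Invoice_Insert/Invoice_Update/Invoice_Delete procedures
        // and maps all create/update/delete operations to them.
        modelBuilder.Entity<Invoice>().MapToStoredProcedures();
    }
}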
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
We used EF 4.1 with virtual keywords on our business objects, plus our own custom implementation of the T4 template, wrapping the ObjectContext behind an interface for repository-style data access.
We used AutoMapper to convert between BO and DTO.
We used AutoMapper to convert between ViewModel and DTO.
You would think that view models, DTOs and business objects are the same thing, and they might look the same, but they have a very clear separation in terms of concerns.
View models are about the UI screen, DTOs are about the task you are accomplishing, and business objects are primarily concerned with the domain.
There are some compromises along the way, but if you want EF, the benefits outweigh the things you give up.
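As an illustration of the repository-style wrapper mentioned above (a sketch only; we wrapped ObjectContext, but the same shape works with DbContext, and all names here are made up):

using System.Data.Entity;
using System.Linq;

// Narrow interface so callers never touch the EF context directly.
public interface IRepository<T> where T : class
{
    IQueryable<T> Query();
    void Add(T entity);
    void Remove(T entity);
    void SaveChanges();
}

public class EfRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;

    public EfRepository(DbContext context)
    {
        _context = context;
    }

    public IQueryable<T> Query() { return _context.Set<T>(); }
    public void Add(T entity) { _context.Set<T>().Add(entity); }
    public void Remove(T entity) { _context.Set<T>().Remove(entity); }
    public void SaveChanges() { _context.SaveChanges(); }
}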
Over a year later, I have now solved these problems for myself.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism, I can now live with it.
For the default constructor I also found a solution I can live with: I add the constructors I need as public constructors, and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}