Are Entity Framework Models Reusable? - c#

How reusable are the results of using EF?
Currently, I use stored procedures for 100% of my data access. The main reason I may be looking to do things differently for my newest project is maintainability: adding an attribute to a table means manually altering dozens of stored procedures. If I understand EF correctly, I should be able to add an attribute to an entity in my EF model and then ask EF to update my CRUD methods for me. Awesome.
However, there is one thing holding me back: reusability. I like that I can make the SPs for a given database once and be done with them; I can make 12 applications that all use that database, and getting that data will be as easy as calling the correct SP.
Will the same be true if I switch to a more EF-centric approach?
Can I import an existing EF data model and have it work without too much trouble?
Can I make it so that I alter the Data Model once, and that change is felt across all applications?

Ad1. You can easily reuse complex EF queries if you follow the Repository pattern. Repositories are where you encapsulate your data access and, yes, repositories are easily reused between different modules/applications.
(not that you can't reuse code without repositories! Repositories are just a common way of doing it for the data access layer)
Ad2. I am not sure what you mean by "import existing EF model" (import it where?), but usually EF models are straightforward and can be reused.
Ad3. Just have your model in a separate assembly.
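For example, a minimal sketch of a repository living in a shared class library; the Customer model and ShopContext DbContext are hypothetical names used only for illustration:

// Shared class library, referenced by every application
// that needs access to this database.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

public class CustomerRepository : ICustomerRepository
{
    private readonly ShopContext _context;   // EF DbContext defined in the same assembly

    public CustomerRepository(ShopContext context)
    {
        _context = context;
    }

    public Customer GetById(int id)
    {
        return _context.Customers.Find(id);   // DbSet<Customer>.Find by primary key
    }

    public void Add(Customer customer)
    {
        _context.Customers.Add(customer);
        _context.SaveChanges();
    }
}

Each of your 12 applications would then just reference that assembly; a schema change only touches the model and this one project.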

A real benefit to using EF is getting away from stored procedures.
The problem that exists with using stored procedures for all your data access is that you are forced to put business logic into your data layer.
Check out "Should the data access layer contain business logic?" While it's not true in every case, generally keeping your business logic in your business layer gives you better separation of concerns.
I have an EF project that I use as the data layer for several applications. This allows me to change it once and have all the other projects get the benefits. Granted, sometimes supporting code needs to be changed in these other projects, but you'd have that same problem in a stored procedure model as well.

Any N-tier design would solve this problem by simply creating a class (in a class library) that understands how to access the data by using Entity Framework. Then any application that wants to access the data uses the class library, configures a connection string in its app.config or web.config, and you're done.
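As a rough sketch of that setup (MyDbContext, Order and the "MyDatabase" connection string name are assumptions for illustration, EF6 code-first style):

using System.Collections.Generic;
using System.Data.Entity;   // EF6
using System.Linq;

// Class library project. Each consuming application only needs a
// "MyDatabase" connection string in its own app.config / web.config.
public class MyDbContext : DbContext
{
    public MyDbContext() : base("name=MyDatabase") { }

    public DbSet<Order> Orders { get; set; }
}

public class OrderData
{
    public List<Order> GetOpenOrders()
    {
        using (var context = new MyDbContext())
        {
            return context.Orders.Where(o => !o.IsClosed).ToList();
        }
    }
}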

Related

DDD: Is it OK to generate/update my entity classes from changes in the database schema?

Some time ago, at work, we had to change our main system to be "cross-RDBMS". I'm not sure if this is the correct term, but basically the system worked only with MSSQLServer, and in order to accommodate a new client we had to make it possible for the system to work with both MSSQLServer and Oracle.
We don't use an ORM, because of reasons. Instead, we use a custom ADO-based data access layer.
Before this change, we relied heavily on stored procedures, database functions, triggers, etc. A substantial amount of business logic was located in the database itself.
We decided to get rid of all stored procedures, triggers and the like, and basically reduce the database to a mere storage layer.
To handle migrations, we created a .json file which contains a representation of our database schema: tables, columns, indexes, constraints, etc. A simple application was created to edit this file. By using it, we're able to edit existing tables and columns and add new ones.
This JSON file is versioned in our repository. When the application is executed, a routine reads the file, constructing a representation of the database in memory. It then reads the metadata from the actual database, compares it to the in-memory representation, and generates scripts based on the differences found.
Finally, the scripts are executed, updating the database schema.
So, now comes my real problem. When a new column is added, the developer needs to (a sketch follows this list):
- add a new property to the POCO class that represents a row in that table;
- edit the method which maps the table columns to the class properties, adding the new column/property mapping;
- edit the class which handles database commands, creating a new parameter corresponding to the new column.
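For illustration, here is a minimal sketch of those three edits; the Customer class, the Email column and the surrounding reader/command fragments are hypothetical:

// 1. The POCO that represents a row:
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }   // new property
}

// 2. The method that maps table columns to properties:
customer.Email = reader["Email"] as string;   // new column/property mapping

// 3. The class that builds the database command:
command.Parameters.Add(
    new SqlParameter("@Email", (object)customer.Email ?? DBNull.Value));   // new parameter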
When this approach was initially implemented, I thought about auto-generating and updating the POCO classes based on changes in the JSON file. This would keep the classes in sync with the database schema, and we wouldn't have to worry about developers forgetting to update the classes after creating new columns.
This feature wasn't implemented, though, and now I'm having serious doubts about it, mostly because I've been studying Clean Architecture/Onion Architecture and Domain-Driven Design.
From a DDD perspective, everything should be about the Domain, which in turn should be totally ignorant of its persistence.
So, my question is basically: how can I maintain my domain model and my database schema in sync, without violating DRY and without using a "database-centric" approach?
DDD puts the focus on the domain language and its representation in domain classes. DB issues are not the primary concern of DDD.
Therefore, generating domain classes from the database schema is the wrong direction if the intention is to apply DDD.
This question is more about finding a decent way to manage DB upgrades, which has little to do with DDD. Unit/integration tests for basic read/write DB operations may go a long way in helping developers remember to edit the required files when DB columns are altered.
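As a sketch of such a test (NUnit-style; it reuses the hypothetical Customer/Email example from above, and CustomerRepository/TestConnectionString are assumed helpers, not part of the original code):

[Test]
public void Customer_survives_a_write_read_round_trip()
{
    var repository = new CustomerRepository(TestConnectionString);
    var original = new Customer { Name = "Acme", Email = "contact@acme.example" };

    var id = repository.Insert(original);
    var reloaded = repository.GetById(id);

    // Fails as soon as a new column (e.g. Email) exists in the schema
    // but is missing from the mapping or command code.
    Assert.AreEqual(original.Email, reloaded.Email);
}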

Design of Data Access Layer with multiple databases

I have read many posts concerning the issue of having several databases and how to design a DAL efficiently in this case. In many cases, the forum suggests applying the repository pattern, which works fine in most cases.
However, I find myself in a different situation. I have 3 different databases: Oracle, OLE DB and SQL Server. Currently, there exists a single DAL with many different classes sending SQL queries down to a layer below to be executed against the corresponding database. The way the system works is that two of the databases are only used to read information from, and the other one is used to either store this same information or read it later on. I have to propose a better design than the current implementation, but it seems as if a common interface for all three databases is not plausible from an architectural point of view.
Is there any design pattern that solves this situation? Should I have three different DALs? Or perhaps it is possible (and advisable) to apply the repository pattern to this problem?
Answers to your question will probably be very subjective. These are some thoughts.
You could apply command-query separation. The query side integrates directly with your data layer, bypassing any business or domain layer, and the returned entities are optimized for reads and projected from your databases. This layer could also be responsible for merging results from different database calls.
The command side consists of command handlers, using domain or business entities, which are mapped from your R/W database.
By doing this, the interface that you expose will be clearer and more business-oriented.
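A rough sketch of what those two sides could look like (all types and readers here are hypothetical placeholders, not a prescribed design):

// Query side: bypasses the business layer, returns read-optimized DTOs and can
// merge data coming from the different databases.
public class CompanyOverviewQueries
{
    private readonly IOracleReader _oracle;
    private readonly ISqlServerReader _sqlServer;

    public CompanyOverviewQueries(IOracleReader oracle, ISqlServerReader sqlServer)
    {
        _oracle = oracle;
        _sqlServer = sqlServer;
    }

    public CompanyOverviewDto Get(int companyId)
    {
        var master = _oracle.GetCompany(companyId);        // read-only source
        var billing = _sqlServer.GetBilling(companyId);    // read/write store
        return new CompanyOverviewDto { Name = master.Name, Balance = billing.Balance };
    }
}

// Command side: command handlers working with business entities mapped from
// the read/write database.
public class CloseAccountHandler
{
    private readonly IAccountRepository _accounts;

    public CloseAccountHandler(IAccountRepository accounts)
    {
        _accounts = accounts;
    }

    public void Handle(CloseAccountCommand command)
    {
        var account = _accounts.GetById(command.AccountId);
        account.Close();
        _accounts.Save(account);
    }
}

The read side can talk to the read-only sources directly, while the command side only ever touches the read/write database.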
I'm not sure that completely abstracting the data access layer away with custom units of work and repositories is really needed: do the advantages outweigh the disadvantages? They rarely do, because will you ever really change the database technology? And if you do, that probably means a rewrite anyway. Also, if you use Entity Framework Code First, you already have a unit of work and an abstraction on top of your database, plus the flexibility of LINQ.
Bottom line - try not to over-engineer/over-abstract things; or make things super-generic.
Your core/business code should never be dependent on any contract/interface/class that is placed in the DAL layer of the application.
Accessing data is something the business/core layer of your application needs to be able to do, and it should be able to do this without any dependency on SQL statements and without any knowledge of the underlying data access technology.
I think you need to remove any SQL statements from the core part of the application. SQL is vendor-dependent, and any dependency on a specific database engine needs to be cleaned out of your core and moved to the DAL where it belongs. Then you need to create interfaces that reside outside of the DAL(s), which you then create implementation classes for in one or more DAL modules/classes. Your DAL can depend on your core, but not the other way around.
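For example (hypothetical Invoice/IInvoiceStore names), the contract lives in the core and a vendor-specific implementation lives in a DAL assembly:

// Defined in the core/business assembly: no SQL, no knowledge of the engine.
public interface IInvoiceStore
{
    Invoice GetById(int id);
    void Save(Invoice invoice);
}

// Implemented in a DAL assembly. This one targets SQL Server; another DAL
// module could implement the same interface for Oracle or the OLE DB source.
public class SqlServerInvoiceStore : IInvoiceStore
{
    private readonly string _connectionString;

    public SqlServerInvoiceStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Invoice GetById(int id)
    {
        // vendor-specific SQL / ADO.NET code lives here
        throw new NotImplementedException();
    }

    public void Save(Invoice invoice)
    {
        throw new NotImplementedException();
    }
}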
I don't see why the repository layer can't be used in this case. When I have a database which I can only read from, I usually let the name of the repository interface indicate this, like ICompanyRepositoryRead.
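A possible shape for such a read-only interface (Company is a placeholder type):

// Read-only access to one of the source databases; the name states the intent.
public interface ICompanyRepositoryRead
{
    Company GetById(int id);
    IEnumerable<Company> GetAll();
}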

Do I need Unit of Work and Repository patterns when working with Entity Framework?

So at my job I was pointed to http://www.codeproject.com/Articles/990492/RESTful-Day-sharp-Enterprise-Level-Application#_Toc418969121 and was told to learn these patterns and implement them in my solution.
What confused me was that these things predate Entity Framework 6, and from what I understood, Unit of Work is used to optimize database performance by grouping queries together. Since EF6 already has these optimizations, should I still implement these layers? I get that the layering helps with modularization and switching of data sources. Does that mean that EF6 is too complex to implement with these patterns, and that ADO.NET should be used directly, or something like that?
EDIT: I've noticed that this added layer allows the usage of mock data sources. Not sure how useful this is, because of the need to add another layer of abstraction.
"Unit of Work is used to optimize database performance by grouping queries together." - This is not correct. Unit of Work is there to collect related operations together into a single transaction which is then committed or rolled back as a whole. It tracks changes made to objects so that required database operations can be deduced automatically and performed on your behalf.
When you work with Entity Framework, you use it to create a DbContext from your model. That class is both the Repository and the Unit of Work, so you don't have to do anything special. Things only become more complicated when your project grows so large that the DbContext becomes more of a burden.
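A small EF6 code-first sketch of that idea (ShopContext, Order and OrderStatus are hypothetical):

public void ShipOrder(int orderId)
{
    using (var context = new ShopContext())
    {
        var order = context.Orders.Find(orderId);   // DbSet<Order> acts as the repository
        order.Status = OrderStatus.Shipped;         // change is tracked by the context

        context.Orders.Add(new Order { Number = "A-42" });

        context.SaveChanges();                      // unit of work: all tracked changes
                                                    // are committed together
    }
}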
A Repository is used to abstract your application from the data source, but since Entity Framework implements this pattern by itself and gives you the possibility to change the data source seamlessly, there is no necessity to add one more layer of abstraction.
You will just limit EF's possibilities by creating something like GenericRepository<T>. And even then you won't be able to replace EF with another library without changes to your code. (Some queries written for EF will fail for NHibernate, for example.)
Just don't use the DbContext everywhere in your application (inside UI code, at least); use it from your data access layer (services with business-oriented methods, or something along those lines).
Even for scenarios where some cloud data storage is used (which EF won't be able to handle seamlessly), there is no necessity for that layer; it's better to introduce separate classes and use them explicitly, because you cannot fit DB and cloud interaction into one abstraction: it will start leaking at some point.
Entity Framework is a UnitOfWork/Repository pattern itself. If you need to abstract yourself from EF, then you could implement a layer on top of EF with your own UoW/Rep pattern.
This is good if you want to replace EF at some point in your project.
The cons? I think that building a UoW on top of EF gives you a redundant architecture, and you will end up writing more code for something that maybe will never change.
In my current project, the main structure is very common, with a data layer (with a sublayer for the entities) for EF, a logic layer (where I put all the business logic) and the view layer (it can be web, or whatever). With that structure I directly invoke EF in the logic layer.

Don't we need a DAL layer in MVC3 architecture? [duplicate]

Possible Duplicate:
Where to write Database and Business logic in MVC?
I have just started with the MVC3 pattern. How do we do data access in MVC3? Do we make the 'MODEL' the data access layer, or do we add another 'DAL' layer and call it from the 'MODEL' layer?
Your model should be independent of data-access stuff, which will allow you to change your DAL strategy in the future.
You should be feeding the model from the DAL, but the model shouldn't know how it is being constructed, and certainly shouldn't have any database-specific code in it.
If you take the approach I suggest, look at AutoMapper - a very useful tool for mapping data between DAL and model classes.
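A minimal AutoMapper sketch (CustomerEntity and CustomerModel are placeholder types; newer AutoMapper versions use an explicit MapperConfiguration like this):

// Map a DAL entity to a model class; AutoMapper matches properties by name.
var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerEntity, CustomerModel>());
var mapper = config.CreateMapper();

CustomerModel model = mapper.Map<CustomerModel>(customerEntity);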
When I was working with my last MVC3 project, my understanding from the various samples (such as GeekDinner) was that the Entity Framework serves as the Data Access Layer.
Your models can be directly mapped data access objects, but they don't necessarily have to be. They could just as well be proxies through to your backend DAL, which may well be the better option depending on your requirements and the longevity of the project.
The way I tend to handle it for larger projects is to have a separate namespace called Project.Entities which contains my Entity Framework data models. My Project.Models would contain models which use the Entities as a backing store for their data, and provide common methods (where necessary) to manipulate that data. It may not be the best way to do it, but it provides the most flexibility and keeps the data models separate from the backing store, which allows more abstraction. For example, you can always switch out the underlying data layer to in-memory storage, a DAL other than Entity Framework, or whatever else.
For smaller/temporary/test projects, my Entity Framework data models will be straight in Project.Models and used directly as it's quicker and doesn't require so much thought.
No, the model is not data access. The model is a bunch of classes that hold data, and it generally does not contain code other than, possibly, code to verify that the assigned values are permitted.
You access data from controllers. How you do that is completely up to you; MVC is not concerned with it.
Model is your view model, not your domain model.
If you want to do DAL activity, I would tend to wrap it in repository/service that can be injected into your controllers.
This stops your controllers getting bloated and also allows you to mock your DAL layer for unit testing the controllers.
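For example (IProductRepository and ProductsController are hypothetical), constructor injection keeps the controller thin and lets a unit test pass in a fake repository:

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    // The repository is injected, so tests can supply a mock implementation.
    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        var products = _repository.GetAll();
        return View(products);
    }
}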

Please help on choosing the right architecture for an n-tier web application

Please help on choosing the right way to use the entities in an n-tier web application.
At the present moment I have the following assemblies in it:
The Model (custom entities) describes the fields of the classes that the application uses.
The Validation validates the data integrity coming from the UI using the reflection attributes method (it checks data in all layers).
The BusinessLogicLayer is a business facade for additional logic and caching that uses abstract data providers from the DataAccessLayer.
The DataAccessLayer overrides the abstract data providers using a LINQ to SQL data context and LINQ queries. And here is the point that makes me feel I am going wrong...
My data layer, right before it sends data to the business layer, maps (converts) the data retrieved from the DB to the Model classes (custom entities) using mappers. It looks like this:
internal static model.City ToModel(this City city)
{
    if (city == null)
    {
        return null;
    }

    return new model.City
    {
        Id = city.Id,
        CountryId = city.CountryId,
        AddedDate = city.AddedDate,
        AddedBy = city.AddedBy,
        Title = city.Title
    };
}
So the mapper maps the data object to the corresponding model. Is that the right and common way to work with entities, or do I have to use the data objects as entities (to save time)? Am I clear enough?
You could use your data entities in your project if they are POCOs. Otherwise I would create separate models as you have done. But do keep them in a separate assembly (not in the DataAccess project).
But I would not expose them through a webservice.
Other suggestions
IMHO people overuse layers. Most applications do not need a lot of layers. My current client had an architecture like yours for all their applications. The problem was that only the data access layer and the presentation layer had logic in them; all other layers just took data from the layer below, transformed it, and sent it to the layer above.
The first thing I did was to tell them to scrap all the layers and instead use something like this (requires an IoC container):
Core (contains business rules and data access through an ORM)
Specification (separated interface pattern; contains service interfaces and models)
User interface (might be a web service, WinForms, web app)
That works for most applications. If you find that Core grows and becomes too large to handle, you can split it up without affecting any of the user interfaces.
You are already using an ORM; have you thought about using a validation block (FluentValidation or DataAnnotations) for validation? It makes it easy to validate your models in all layers.
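A small DataAnnotations sketch (the City class mirrors the one from the question; the ModelValidator helper is just an illustration, not a required piece):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class City
{
    [Required, StringLength(100)]
    public string Title { get; set; }

    [Range(1, int.MaxValue)]
    public int CountryId { get; set; }
}

public static class ModelValidator
{
    // The same attribute-based rules can be checked in any layer, not just the UI.
    public static bool IsValid(object model, List<ValidationResult> results)
    {
        return Validator.TryValidateObject(
            model, new ValidationContext(model), results, validateAllProperties: true);
    }
}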
It may be a common practice to send out DTOs from a service boundary (WCF service, etc.), but if you are directly using your "entities" in your presentation model, I don't see any benefit in doing that.
As to the code snippet you have provided, why not use AutoMapper? It helps by eliminating the writing of boilerplate mapping code, and does that for you if you have a set of conventions in place.
Get rid of the model now, because removing it later will require refactoring the whole application. The last project I worked on used this architecture, and maintaining the DTO layer and the mappings to the database model layer is a huge pain in the arse and offers no useful benefits. One of the main annoyances is that LinqToSql does not effectively support a disconnected data model. You cannot update a database table by creating a new DB entity with a primary key matching an existing record and then sticking it into the data context. You have to first retrieve the entity from the database, update it, then commit the changes. Managing this results in really nasty update methods to map all the properties from your DTOs to your LinqToSql classes. It also breaks the whole deferred execution model of LinqToSql. Don't even get me started on the problems it causes with properties on parent classes that are collections of child DTOs (e.g. a customer DTO with an Orders property that contains a collection of order DTOs); managing those mappings is really, really fiddly. I had to do some extensive optimisations because retrieving a few hundred records ended up causing LinqToSql to make 200,000 database calls (admittedly there was also some pretty dumbass code as well, but you get the picture).
The only valid reason to use DTOs is if you want to have multiple pluggable data access layers, e.g. LinqToSql and NHibernate for supporting different DB servers. That way you can swap out the data access layer later without having to change any other layers. If you don't need to do this, then save yourself a world of pain and just use the LinqToSql entities.
