I have complex mapping logic between my database objects and my domain objects. Hence, I am not using a third-party library like AutoMapper. I am wondering which would be the better code design:
Option #1. Have the mapper abstracted as an IServiceNameMapper and injected as a dependency into IServiceName.
Option #2. Write extension methods on the database objects and domain objects to convert between them.
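To make the two options concrete, here is roughly what I have in mind (Customer and CustomerEntity are placeholder names):

```csharp
// Placeholder types for illustration only.
public class CustomerEntity { public int Id { get; set; } public string Name { get; set; } }
public class Customer { public int Id { get; set; } public string Name { get; set; } }

// Option #1: mapping abstracted behind an interface, injected into the service.
public interface ICustomerMapper
{
    Customer ToDomain(CustomerEntity entity);
    CustomerEntity ToEntity(Customer domain);
}

public class CustomerService
{
    private readonly ICustomerMapper _mapper;
    public CustomerService(ICustomerMapper mapper) { _mapper = mapper; }

    // The service uses the mapper without knowing how the mapping is done.
    public Customer GetCustomer(CustomerEntity entity) => _mapper.ToDomain(entity);
}

// Option #2: extension methods on the objects themselves.
public static class MappingExtensions
{
    public static Customer ToDomain(this CustomerEntity e) =>
        new Customer { Id = e.Id, Name = e.Name };

    public static CustomerEntity ToEntity(this Customer c) =>
        new CustomerEntity { Id = c.Id, Name = c.Name };
}
```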
I am leaning towards option #1, as I get a nice separation of concerns: the mapping logic is abstracted away from the core service logic and injected as a dependency into the service. This also gives me the ability to extend or change the mapping logic in the future without having to change the service implementation.
Any thoughts?
From a purely architectural point of view, option #1 is better. However, sometimes this can be overdesign. For example, I have no interest in separating my DTOs from my entities, so what I do is just accept the entity in the constructor of the DTO, or create a ToEntity() method on the DTO, and keep the mapping in the DTO. Theoretically this is bad, because the DTOs cannot be used in another project without pulling in the entities, but in practice I have estimated that the risk that I will ever need that is pretty low, since Silverlight is now dead :)

Unless you are into extreme unit testing, I think the extension methods are fine. Basically the only problem that can occur is that your unit tests will test both the method and the mapping logic, and you won't be able to write a test that only tests the method. Big deal!
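For reference, the constructor/ToEntity() approach I described looks something like this (Order and OrderDto are made-up names):

```csharp
// Made-up names; the mapping lives with the DTO itself.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }

    public OrderDto() { }

    // Accept the entity in the constructor...
    public OrderDto(Order entity)
    {
        Id = entity.Id;
        Total = entity.Total;
    }

    // ...or map back with a ToEntity() method.
    public Order ToEntity() => new Order { Id = Id, Total = Total };
}
```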
So at my job I was pointed to http://www.codeproject.com/Articles/990492/RESTful-Day-sharp-Enterprise-Level-Application#_Toc418969121 and was told to learn these patterns and implement them in my solution.
What confused me was that these patterns predate Entity Framework 6, and from what I understood, Unit of Work is used to optimize database performance by grouping queries together. Since EF6 already has these optimizations, should I still implement these layers? I get that the layering helps with modularization and with switching the data source. Does that mean that EF6 is too complex to implement with these patterns, and that ADO.NET should be used directly, or something like that?
EDIT: I've noticed that this added layer allows the use of mock data sources. I'm not sure how useful this is, given the need to add another layer of abstraction.
"Unit of Work is used to optimize database performance by grouping queries together." - This is not correct. Unit of Work is there to collect related operations together into a single transaction which is then committed or rolled back as a whole. It tracks changes made to objects so that required database operations can be deduced automatically and performed on your behalf.
When you work with Entity Framework, you use it to create a DbContext from your model. That class is both the Repository and the Unit of Work, so you don't have to do anything special. Things only become more complicated when your project grows so large that the DbContext becomes more of a burden.
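In other words (a minimal sketch; Blog and BlogContext are made-up names):

```csharp
using System.Linq;
using System.Data.Entity; // EF6

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public class Demo
{
    public void RenameAndAdd(int blogId)
    {
        using (var context = new BlogContext())
        {
            // DbSet<Blog> plays the role of the Repository...
            var blog = context.Blogs.Single(b => b.Id == blogId);
            blog.Title = "Renamed";
            context.Blogs.Add(new Blog { Title = "New blog" });

            // ...and DbContext plays the role of the Unit of Work:
            // the update and the insert are committed together.
            context.SaveChanges();
        }
    }
}
```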
The Repository is used to abstract your application from the data source, but since Entity Framework implements this pattern itself and gives you the ability to change the data source seamlessly, there is no necessity to add one more layer of abstraction.
By creating something like GenericRepository<T>, you will just limit what EF can do. And even if you implement such a layer, you still won't be able to replace EF with another library without changes to your code (some queries written for EF will fail in NHibernate, for example).
Just don't use DbContext everywhere in your application (at least not inside UI code); use it behind your data access layer (services with business-oriented methods, or something along those lines).
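Something like this, for example (made-up names), so the UI talks to the service and never to the DbContext:

```csharp
using System.Data.Entity;

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

// A business-oriented service: the UI calls this and never sees DbContext.
public class OrderService
{
    public void PlaceOrder(int customerId)
    {
        using (var db = new ShopContext())
        {
            db.Orders.Add(new Order { CustomerId = customerId });
            db.SaveChanges();
        }
    }
}
```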
Even for scenarios where some cloud data storage is used (which EF won't be able to handle seamlessly), there is no necessity for that layer. It's better to introduce separate classes and use them explicitly, because you cannot fit database and cloud interaction into one abstraction; it will start leaking at some point.
Entity Framework is a Unit of Work/Repository implementation itself. If you need to abstract yourself from EF, then you could implement a layer on top of EF with your own UoW/Repository pattern.

This is good if you want to replace EF at some point in your project.

The cons? I think that building a UoW on top of EF gives you a redundant architecture, and you will end up writing more code for something that may never change.

In my current project, the main structure is very common, with a Data layer (with a sublayer for the entities) for EF, a Logic layer (where I put all the business logic), and a View layer (it can be web, or whatever). With that structure I directly invoke EF in the Logic layer.
I am facing the same issue as the one below
Entity Framework 6 and Unit Of Work… Where, When? Is it like transactions in ado.net?
As per the answer, I shouldn't create an abstraction layer over EF, but I want to keep my business layer independent. So I decided to go with the last option, adding a TransactionScope. But I have read that it affects performance. I have kept the IsolationLevel at ReadCommitted, but I am not sure about the performance.

So how can I use EF without adding a dependency on it to the business layer?

My business objects are different from my entity objects.
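For reference, this is roughly how I am using TransactionScope at the moment (simplified):

```csharp
using System.Transactions;

public class OrderWorkflow
{
    public void PlaceOrder()
    {
        // Business layer controls the transaction without referencing EF.
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... call the data access code here; it enlists automatically ...
            scope.Complete(); // commit; disposing without Complete() rolls back
        }
    }
}
```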
If you don't want to take a dependency on EF, you're going to have to abstract it away. The poster in the response you mentioned believes this is too much code, but if you want to decouple the data layer implementation from the business layer, it's a necessary evil.
Historically I've gone with a relatively generic IRepository implementation that's generated from an IUnitOfWork such as:
uow.Get<IRepositoryType>()
By using an IoC container (TinyIoC, lately) we're able to swap out our implementations easily, and keep our domain objects separate from our data objects.
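In outline, the shapes involved look something like this (a simplified sketch; the method names are illustrative):

```csharp
using System;

// Simplified sketch of the shapes involved; real implementations vary.
public interface IRepository<T> where T : class
{
    void Add(T item);
    // ... Find, Remove, and so on.
}

public interface IUnitOfWork : IDisposable
{
    TRepository Get<TRepository>(); // resolve a repository bound to this unit of work
    void Commit();                  // e.g. DbContext.SaveChanges() underneath
}
```

The concrete implementation resolves repositories through the IoC container, which is what makes it easy to swap in fakes for tests.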
I've built a web application with Entity Framework using POCO.
I'm using these POCO classes as my business objects and not just for persisting data, which works fine until...

Now I need to add some logic to these classes to do things like totalling up sales, order lines, etc.

Should I add methods to my POCO classes to enable this functionality, or leave them purely for persisting data and create some kind of 'processor' whereby I pass in the business objects and get the values I require out?
Is there a best practice for this?
What is the architectural design you are using or want to use?
For example, if these are your domain entities, you should put as much logic as possible in them. If they are merely data containers and you don't have a real architecture in place, your logic would probably live in some business component.
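For example, if Order were a domain entity, the totalling logic could live directly on it (a quick sketch with made-up names):

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public class Order
{
    public List<OrderLine> Lines { get; } = new List<OrderLine>();

    // Domain behaviour on the entity itself, not in an external "processor".
    public decimal Total() => Lines.Sum(l => l.UnitPrice * l.Quantity);
}
```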
So if you provide your question with some more details, we can help you better.
I'm pretty new to IoC, Dependency Injection and Unit Testing. I'm starting a new pet project and I'm trying to do it right.
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4).
I'm reading in "Dependency Injection in .NET" by Mark Seemann that doing so creates an important dependency and will definitely complicate testing (that's why he uses POCO objects in a library project).

I don't understand why. Although the objects are created by a LINQ to Entities context, I can create them simply by calling the constructor, as if they were normal objects. So I assume that it's possible to create fake repositories that deliver these objects to the caller.
I'm also concerned about the automatic generation of POCO classes, which is not very easy.
Can somebody shed some light? Are POCO objects truly necessary for a decoupled and testable project?
**EDIT:** Thanks to Yuck, I understand that it's better to avoid auto-generation with templates, which brings me to a design question. If I come from a big legacy database whose tables take on a variety of responsibilities (which doesn't fit well with the concept of a class with a single responsibility), what's the best way to deal with that?

Deleting the database is not an option ;-)
No, they're not necessary; they just make things easier and cleaner.
The POCO library won't have any knowledge that it's being used by Entity Framework. This allows it to be used in other ways - in place of a view model, for instance. It also allows you to use the same project on both sides of a WCF service, which eliminates the need to create data transfer objects (DTOs).
Those are just two examples from personal experience, but there are surely more. In general, the less a particular object or piece of code knows about who is using it or how it's being used, the more adaptable and generic it will be for other situations.
You also mention automatic generation of POCO classes. I don't recommend doing this. Were you planning to generate the class definitions from your database structure?
"I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4)."
The default classes (not the POCOs) EF generates contain proxies for lazy loading and are tied at the hip to Entity Framework. That means any other class that wants to use those classes will have to reference the required EF assemblies.
This is the dependency Mark Seemann is talking about. Since you are now dependent on these non-abstract types, which in turn are dependent on EF, you cannot simply change the implementation of your repository to something different (e.g. just using your own persistence store) without addressing this change in the classes that depend on these types.
If you are truly only interested in the public properties of the EF-generated types, then you can have the partial classes generated by EF implement a base interface. Put all the properties you need in that base interface and pass the dependency around as the base interface - now you only depend on the base interface, and not on EF anymore.
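For example (Customer and ICustomer are made-up names; the first partial below stands in for what EF would generate):

```csharp
// Stand-in for the EF-generated half of the partial class.
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-written interface with just the properties consumers need.
public interface ICustomer
{
    int Id { get; set; }
    string Name { get; set; }
}

// A second partial declaration bolts the interface on without
// touching the generated code.
public partial class Customer : ICustomer { }

// Consumers depend only on ICustomer, never on EF types.
public class InvoicePrinter
{
    private readonly ICustomer _customer;
    public InvoicePrinter(ICustomer customer) { _customer = customer; }
}
```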
I recently began reading Pro ASP.NET MVC Framework.
The author talks about creating repositories, and using interfaces to set up quick automated tests, which sounds awesome.
But it carries the problem of having to declare all the fields for each table twice yourself: once in the actual database, and once in the C# code, instead of auto-generating the C# data access classes with an ORM.
I do understand that this is a great practice, and enables TDD which also looks awesome. But my question is:
Isn't there any workaround for having to declare fields twice, both in the database and in the C# code? Can't I use something that auto-generates the C# code but still allows me to do TDD, without having to manually create all the business logic in C# and create a repository (and a fake one too) for each table?
I understand what you mean: most of the POCO classes that you're declaring to be retrieved by repositories look a whole lot like the classes that get auto-generated by your ORM framework. It is tempting, therefore, to reuse those data-access classes as if they were business objects.
But in my experience, it is rare for the data I need in a piece of business logic to be exactly like the data-access classes. Usually I either need some specific subset of data from a data object, or some combination of data produced by joining a few data objects together. If I'm willing to spend another two minutes actually constructing the POCO that I have in mind, and creating an interface to represent the repository methods I plan to use, I find that the code ends up being much easier to refactor when I need to change business logic.
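For instance, a sketch with made-up names: instead of reusing the ORM's data-access class, the business logic gets exactly the shape it needs, behind an interface that is trivial to fake:

```csharp
// A purpose-built POCO: just the data this piece of business logic needs,
// possibly joined together from several data-access classes.
public class CustomerOrderSummary
{
    public string CustomerName { get; set; }
    public int OpenOrderCount { get; set; }
    public decimal OutstandingBalance { get; set; }
}

// An interface covering only the repository methods this logic will use;
// easy to implement with a fake in a unit test.
public interface ICustomerOrderRepository
{
    CustomerOrderSummary GetSummary(int customerId);
}
```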
If you use Entity Framework 4, you can generate POCO objects automatically from the database (link).
Then you can implement a generic IRepository and a generic SqlRepository; this will allow you to have a repository for all your objects. This is explained here: http://msdn.microsoft.com/en-us/ff714955.aspx
This is a clean way to achieve what you want: you declare your objects only once, in your database, generate them automatically, and can easily access them through your repository (in addition, you can do IoC and unit testing :) )
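In outline, the shapes look something like this (a simplified sketch of the idea; the article's actual code differs in its details):

```csharp
using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class
{
    IQueryable<T> Query();
    void Add(T entity);
}

// One EF-backed implementation serves every entity type.
public class SqlRepository<T> : IRepository<T> where T : class
{
    private readonly DbContext _context;
    public SqlRepository(DbContext context) { _context = context; }

    public IQueryable<T> Query() => _context.Set<T>();
    public void Add(T entity) => _context.Set<T>().Add(entity);
}
```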
I recommend reading the second edition of this book, which is pure gold and updated with the new features introduced in MVC 2:
http://www.amazon.com/ASP-NET-Framework-Second-Experts-Voice/dp/1430228865/ref=sr_1_1?s=books&ie=UTF8&qid=1289851862&sr=1-1
And you should also read about the new features introduced in MVC 3, which is now in RC (there is a new view engine that is really useful): http://weblogs.asp.net/scottgu/archive/2010/11/09/announcing-the-asp-net-mvc-3-release-candidate.aspx
You are not declaring the business logic twice. It's just that this business logic is abstracted behind an interface, and in the implementation of this interface you can do whatever you want: hit the database, read from the filesystem, aggregate information from web addresses, ... This interface allows weaker coupling between the controller and the implementation of the repository, and among other things simplifies TDD. Think of it as a contract between the controller and the business layer.
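For example (made-up names), the controller sees only the contract, and a test can hand it a fake:

```csharp
using System.Collections.Generic;

// Made-up types for illustration.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The contract between the controller and the business side.
public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductsController
{
    private readonly IProductRepository _repository;

    // Whatever is injected here can hit SQL, the filesystem,
    // a web service, or an in-memory fake in a unit test.
    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }
}
```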