I have a project that I'm working on that I'm using LINQ to SQL for and I have set up business objects/models to use in my web app. I am looking for some feedback on how I've set all this up to see if it makes sense and if there's anything I should be doing differently.
Here's an example model:
public class User
{
    private MyDataContext _db = new MyDataContext();
    private MyLINQUserClass _user = new MyLINQUserClass();

    public string Name
    {
        get
        {
            return _user.Name;
        }
        set
        {
            _user.Name = value;
        }
    }

    public User(int UserID)
    {
        _user = _db.MyLINQUserTable.Where(u => u.UserID == UserID).FirstOrDefault();
        if (_user == null)
        {
            _user = new MyLINQUserClass();
        }
    }

    internal User(MyLINQUserClass user, MyDataContext db)
    {
        _db = db;
        _user = user;
    }

    public void Save()
    {
        _db.SubmitChanges();
    }

    public static User Add(string Name)
    {
        MyDataContext _db = new MyDataContext();
        MyLINQUserClass _user = new MyLINQUserClass();
        _user.Name = Name;
        _db.MyLINQUserTable.InsertOnSubmit(_user);
        _db.SubmitChanges();
        return new User(_user, _db);
    }

    public static IList<User> Get()
    {
        MyDataContext _db = new MyDataContext();
        return _db.MyLINQUserTable.Select(u => new User(u, _db)).ToList();
    }
}
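For reference, here is a minimal usage sketch of how a model like this gets consumed (my own example, not taken from the project):

// Create, tweak, persist:
User user = User.Add("Alice");
user.Name = "Alice Smith";
user.Save();

// Load a single user, or list them all:
User existing = new User(42);
IList<User> everyone = User.Get();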
For clarity, I am using this type of model already quite heavily in the project (the above is just an example I threw together for the post on the fly) and it works very well. My question is more of a "learning" question ... I know it works. I'm just not sure if there is something I should be doing differently that is better and if so, why.
Thoughts?
I suppose there are no right answers to this kind of question. It is a matter of design, preference and requirements. I will try to show my view...
I have always liked the Repository pattern for keeping concerns separated. I would use a repository of type T to retrieve the T entities (speaking in generics). These would be the entities participating in my business model. In your case, I would have a UsersRepository class returning User entities. This data access layer (DAL) would handle my data access concern.
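As a rough sketch of what I mean (IRepository&lt;T&gt; and UsersRepository are names I made up, reusing the types from your question):

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Save();
}

public class UsersRepository : IRepository<MyLINQUserClass>
{
    private readonly MyDataContext _db = new MyDataContext();

    public MyLINQUserClass GetById(int id)
    {
        return _db.MyLINQUserTable.FirstOrDefault(u => u.UserID == id);
    }

    public IEnumerable<MyLINQUserClass> GetAll()
    {
        return _db.MyLINQUserTable.ToList();
    }

    public void Add(MyLINQUserClass user)
    {
        _db.MyLINQUserTable.InsertOnSubmit(user);
    }

    public void Save()
    {
        _db.SubmitChanges();
    }
}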
My business model would use the entities to do its business. In simple CRUD applications, maybe no objects other than the entities returned by my repositories would be needed. In more complicated applications, new classes would be needed, using the repositories of the DAL to retrieve data as entities. This business layer would handle my main business functionality concern (calculations etc.).
Then, for display purposes, you might need yet another structure. For instance, if you follow the MVC pattern (you could also see the Microsoft article), you would need to create another model to fit your display purposes. This GUI layer following the MVC pattern would handle my graphical display concern.
Hope I helped!
This is the so-called Data Access Object (DAO) pattern. The User is a DAO for MyLINQUserClass, which might be called the domain class.
The DAO pattern is designed for single responsibility: only the DAO "knows" the data layer while the domain class can concentrate on business logic. The domain class is persistence ignorant. So far, so good.
However, there are (at least) three great drawbacks of this pattern:
It tends to create lots of boilerplate code.
It is hard to compose object graphs, because a DAO represents only one row in the database and fetching object graphs easily degenerates into one query per object or collection of child objects.
It is hard to work transactionally, because a DAO can't manage a transaction spanning an entire object graph. So you need some overarching layer to handle transactions.
Most ORMs, however, have a different persistence-ignorance model than DAO. They have repositories and units of work. In L2S, the Table<T> is a basic repository and the context a unit of work. The "domain" classes, like MyLINQUserClass, can be considered persistence-ignorant. (Admittedly, they are stuffed with boilerplate code that serves persistence and change tracking, but it is generated and can practically be ignored.) So all responsibilities for CRUD operations have been assigned, and there is no need for other objects to carry these responsibilities.
The way you implement it makes it extra hard to work with object graphs and transactions, because each DAO has its own context: you can't compose LINQ queries involving multiple DAOs in a way that translates into one SQL statement, and doing multiple save operations in one transaction is a challenge.
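To illustrate the difference (a sketch reusing the names from the question): with a single context acting as the unit of work, LINQ queries compose into one SQL statement and one SubmitChanges covers the whole transaction:

using (var db = new MyDataContext())                        // unit of work
{
    var users = db.MyLINQUserTable                          // Table<T> = basic repository
                  .Where(u => u.Name.StartsWith("A"));      // still IQueryable, nothing executed yet

    foreach (var u in users)                                // one SQL statement runs here
    {
        u.Name = u.Name.Trim();
    }

    db.SubmitChanges();                                     // all updates in one local transaction
}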
Conclusion
By using DAO in a LINQ-to-SQL environment you're mixing CRUD responsibilities. You get all the disadvantages of the DAO pattern and can't exploit the power of the repository/UoW pattern to the full. I would strongly recommend choosing one or the other, and I would choose L2S (well, actually I would choose Entity Framework).
Related
My goal is async loading of related entities using DbContext.
Let's imagine two projects. The first, named MyApp.Domain, contains the domain entities.
namespace MyApp.Domain
{
    public class PlanPage
    {
        public Guid Id { get; set; }
    }
}

namespace MyApp.Domain
{
    public class PlanPageDay
    {
        public Guid Id { get; set; }
        public Guid PlanPageId { get; set; }
    }
}
The second project, named MyApp.Infrastructure.EntityFramework, contains the configuration that maps the entities to the database. It also contains a class which extends the domain entity and implements the Entity Framework specific logic.
namespace MyApp.Infrastructure.EntityFramework.Models
{
    public class PlanPageEntity : PlanPage
    {
        private readonly ApplicationDbContext _applicationDbContext;

        protected PlanPageEntity(ApplicationDbContext applicationDbContext)
        {
            _applicationDbContext = applicationDbContext;
        }

        public ICollection<PlanPageDay>? Days { get; set; }

        public async Task<ICollection<PlanPageDay>> GetDays()
        {
            return Days ??= await _applicationDbContext.PlanPageDays
                .Where(pd => pd.PlanPageId == Id)
                .ToListAsync();
        }
    }
}
The purpose of this example is simple: we separate infrastructure code from domain code. Here is how we plan to use this concept:
// Entity initializing code, placed somewhere in the domain logic.
var plan = new PlanPage(/*some constructor arguments*/);

// Entity loading code, placed somewhere in the infrastructure implementation.
public async Task<PlanPage> GetPlanPage(Guid id)
{
    return await _applicationDbContext.Set<PlanPageEntity>().FindAsync(id);
}
Note that we tell Entity Framework to use the child class (PlanPageEntity) so it can handle all the specific things it needs to.
The question is: is it possible to configure EF so that it allows us to use this concept?
As requested, here are a few more details on the opinion I stated in the comments.
The main reason why I think your current approach is a bad idea is that it violates the separation of concerns design principle: when you mix domain models with data access models, you make your domain logic completely dependent on how you model the data in your database. This quickly limits your options, because the database may have restrictions on how you can model your data that don't fit the domain logic you want to implement, and it makes maintenance difficult. E.g. if you decide to split one DB table into two, you might have a big task ahead of you to make your domain logic work with the two new models/tables. Additionally, making performance optimizations in your database easily becomes a nightmare if not thought through ahead of time, and you shouldn't spend time thinking about optimizing your system before it's necessary.
I know this is a little abstract since I don't know much about your domain but I'm sure I could find more arguments against it.
Instead, separating your data access models (and in general all external data models) from your domain models makes the system much easier to maintain: if you need to make changes to your database, you simply update the logic that maps the data access models to your domain model; nothing in your domain logic needs to change.
In the examples you have given, you have already logically separated your domain models and data access models into two separate projects. So why not follow through with that thought and separate the two with a binding/mapping layer in between?
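A minimal sketch of that mapping layer, reusing the types from your example (the PlanPageDbModel and PlanPageMapper names are my invention):

namespace MyApp.Infrastructure.EntityFramework
{
    // Persistence model: shaped for the database, not for the domain.
    public class PlanPageDbModel
    {
        public Guid Id { get; set; }
    }

    public static class PlanPageMapper
    {
        public static PlanPage ToDomain(PlanPageDbModel dbModel)
        {
            return new PlanPage { Id = dbModel.Id };
        }

        public static PlanPageDbModel ToDbModel(PlanPage domain)
        {
            return new PlanPageDbModel { Id = domain.Id };
        }
    }
}

If you later split the underlying table in two, only ToDomain/ToDbModel need to change; the domain logic keeps working against PlanPage.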
Is it possible to configure EF so that it allows us to use this concept?
Yes. Essentially you have DTOs, and your entities derive from your DTOs. So when you fetch an entity you can return it directly. But you wouldn't be able to attach a non-entity, so you'd have to map it. It's going to be inconvenient and, like 99.999% of bespoke entity and repository designs, will ultimately be a waste of time.
This is somewhat similar to what EF already does for you: start with persistence-ignorant entity classes, and introduce persistence-aware runtime subtypes for the scenarios that require them, which is basically just lazy loading.
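If lazy loading is all you really need from the subtype, EF Core can generate those runtime subtypes for you with proxies. A sketch (requires the Microsoft.EntityFrameworkCore.Proxies package; the connection string is a placeholder):

// In ApplicationDbContext:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseLazyLoadingProxies()                 // EF creates a runtime subtype of PlanPage
        .UseSqlServer("<connection string>");
}

// The navigation must be virtual so the proxy can intercept access to it:
public virtual ICollection<PlanPageDay>? Days { get; set; }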
I'm fairly new to DDD but I am trying to cram as much as possible as fast as possible. I followed https://github.com/dotnet-architecture/eShopOnContainers as a guide for how to structure my code with MediatR and EF Core.
Fortunately for that application, the persistence model and the domain model are the same. Unfortunately for me, my data layer does not match our domain model, as it is a legacy DB.
So I am separating the domain from persistence, which is well and good. But I am having a hard time understanding the following: if I do this code block in a command handler (trying to keep it simple and clear)...
var aggregate = repo.GetById(1234);
aggregate.AddItemToList(item);
repo.SaveChanges();
...how can I make the underlying database context of the repo aware of the changes that were applied? The only thing I can think of is a repo.Update(aggregate) call that would then issue DB calls to update various places in the DB.
This seems like a smell to me.
Any insights would be great.
Thank you!
Edit:
Should the repository pattern with separate Domain and Persistence layers return the persistence layer's model or the domain's?
For example:
I have an aggregate Company, and I have a database table called CompanyLegacy which is modeled in the persistence layer using Entity Framework Core.
Should my repository be CompanyLegacyRepository or CompanyRepository? If CompanyRepository, that would mean I query the CompanyLegacy table, map it to a Company domain model and then return it. This model would not be change tracked; this is where my issue comes from.
But if I'm supposed to use a CompanyLegacyRepository, then it seems that doesn't adhere to the DDD guideline that all actions are applied to the aggregate root.
Should the repository pattern with a separate Domain and Persistence layer return the persistence layer's model or the domain's?
A Repository should return your Domain model. If you are using DTOs (such as CompanyLegacy) in your Infrastructure layer, it is the responsibility of your Repository to map them to your Domain models. Please note that in a Layered Architecture, the Application layer is not supposed to know about the DTOs used in the Infrastructure layer; it's your Domain models which are the heart of your application. See this question, which is closely related to yours.
Your Repository should be called CompanyRepository. You can define an interface for this repository like:
public interface ICompanyRepository
{
    // Company is your domain model, not the DTO (i.e. your legacy model)
    Company GetById(int id);
    void Add(Company company);
    void Update(Company company);
}
Change Tracking
Entity Framework change tracking has its limitations, and this question is an example of one of those disconnected scenarios where we cannot rely on EF change tracking (because of the DTOs). The implementation of the above repository would look like:
public class CompanyRepository : ICompanyRepository
{
    private MyDbContext _context;

    public CompanyRepository(MyDbContext myDbContext) { _context = myDbContext; }

    public Company GetById(int id)
    {
        var companyLegacy = _context
            .CompanyLegacy
            .AsNoTracking()
            .Where(c => c.Id == id)
            .FirstOrDefault();

        return MyMapper.ToCompany(companyLegacy);
    }

    public void Add(Company company)
    {
        var companyLegacy = MyMapper.ToLegacy(company);
        _context.Add(companyLegacy);
        _context.SaveChanges();
    }

    public void Update(Company company)
    {
        var companyLegacy = MyMapper.ToLegacy(company);
        _context.Update(companyLegacy);
        _context.SaveChanges();
    }
}
This tutorial is helpful for more advanced operations, and you can find more info about EF Core change tracking here.
This answer relates to EF 4/5/6 (not Core) but gives you an idea of using a unique identifier to decide whether an entity should be Added or Updated.
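That idea carries over here: since the key tells you whether the entity already exists, the repository can decide between Add and Update itself (a sketch; assumes an unsaved Company has the default key value 0):

public void AddOrUpdate(Company company)
{
    var companyLegacy = MyMapper.ToLegacy(company);

    if (companyLegacy.Id == 0)
        _context.Add(companyLegacy);        // no key yet, treat as new
    else
        _context.Update(companyLegacy);     // attach and mark as modified

    _context.SaveChanges();
}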
I'm a beginner with repositories and layered applications, and I don't understand well what the interaction and relationship between the repositories and the business layer classes should be.
Here is an example of a purchase order in 3 layers. I would like you to review whether it is correct or not, and give me your corrections.
For the data access layer, here is the order repository:
namespace Dal
{
    public class RepositoryOrder
    {
        POrderContext context = new POrderContext();

        public IEnumerable<Order> GetAll()
        {
            return context.Orders;
        }

        // Following code
    }
}
For the order item repository:
namespace Dal
{
    public class RepositoryOrderItem
    {
        POrderContext context = new POrderContext();

        public IEnumerable<OrderItem> GetAllItemById(Order o)
        {
            return context.OrderItems.Where(i => i.OrderId == o.Id);
        }

        public OrderItem GetItemById(int id)
        {
            return context.OrderItems.Find(id);
        }

        // Following code
    }
}
For the business layer, here is the OrderBLL class:
namespace BLL
{
    public class OrderBLL
    {
        RepositoryOrder repoOrder = new RepositoryOrder();
        RepositoryOrderItem repoItem = new RepositoryOrderItem();

        public IList<Order> GetAll()
        {
            return repoOrder.GetAll().ToList();
        }

        public decimal GetPrixTotal(Order order)
        {
            return repoItem.GetAllItemById(order)
                           .Sum(item => item.Prix * item.Quantite);
        }
    }
}
Should the total price calculation be done at the repository level or in the BLL (we could write that LINQ query against the context in the repository)?
CRUD methods are implemented in the repository and called from the BLL, is that right?
Does the Where method in LINQ belong to the business logic or to the repository (data access layer), given that it encodes certain business rules?
I'm sure this question will be voted down as "primarily opinion based" but before that happens I'll jump in to give my "primarily opinion based" answer :-)
There are two ways to partition a database application, and they depend on how complex and large it will be. Entity Framework examples tend to give a very simplistic model, where the EF data classes are exposed to the business layer, which then exposes them to the view model or other layers. This may be correct for simplistic applications, but it is too simple for more complex ones, for ones where the data storage method is not an RDBMS (i.e. NoSQL), or where you want to create separation between business and repository data structures.
The repository layer should have a set of classes which describe how the data is accessed from the repository. If you have an RDBMS these might be EF POCO classes, but if you have a web-service endpoint as your repository these may be SOAP documents, REST structures, or other Data Transfer Objects. For an RDBMS like SQL Server that uses exclusively stored procedures for accessing its data, the repository layer might simply be a set of classes which mirror the naming, parameters, and data sets returned by the stored procedures. Note that the data structures returned by anything other than an RDBMS might not be coherent, i.e. a "Customer" concept returned by one method call in the repository might be a different data structure to a "Customer" returned by a different call. In this case the repository classes would not suit EF.
Moving to the business object layer: this is where you create a model of the business domain, using data classes, validation classes, and process class models. For instance, a process class for recording a sales order might combine Business Customer, Business Sales Order, and Business Product Catalog data concepts, and tie in a number of validation classes to form a single atomic business process. These classes might (if you are doing a very lightweight application) be similar to the data at the repository layer, but they should be defined separately. It's in this layer you hold calculated concepts such as "Sales Order Total", "VAT Calculation" or "Shipping Cost". They might, or might not, get stored in your repository, but the definition of what they mean is modelled in the business layer.
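For example (a sketch with invented names), a calculated concept like "Sales Order Total" belongs on the business class, not in the repository:

public class BusinessSalesOrder
{
    public List<BusinessSalesOrderLine> Lines { get; } = new List<BusinessSalesOrderLine>();

    // Calculated business concepts are defined here, whether or not they are ever persisted.
    public decimal SalesOrderTotal
    {
        get { return Lines.Sum(line => line.UnitPrice * line.Quantity); }
    }

    public decimal VatAmount(decimal vatRate)
    {
        return SalesOrderTotal * vatRate;
    }
}

public class BusinessSalesOrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}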
The business layer provides the classes whose data is copied across into a view model. These classes again can be very similar to (and in the simplest of cases, identical to) the repository classes, but their job is to model the user interface and user interaction. They might contain only some of the data from the business data classes, depending on the requirements of the UI. These classes should carry out user-interface-based validation, which they might delegate to the business tier, or to which they might add additional validation. The job of these classes is to manage the state machine that is the user interface.
My summary is that in a large-scale system you have three sets of classes: data repository interaction, business model interaction, and user interface interaction. Only in the simplest of systems are they modelled as a single set of physical POCO classes.
I am writing a proof of concept application.
When it comes to the data layer, we need the ability to connect to different databases, and different technologies might be used:
ADO.NET (SqlCommand etc.)
Entity Framework.
NHibernate.
What I am saying is that whatever calls our RepositoryService class should be ignorant of the provider used, e.g. Entity Framework, raw ADO.NET, NHibernate, etc.
Is there an example out there, or an empty shell I can look at, or a code snippet from you? Just to give an idea of how you would go about it.
Noddy implementation to give you an idea (possible IoC etc. omitted):
public class BusinessService
{
    public List<CustomerDto> GetCustomers()
    {
        RepositoryService repositoryService = new RepositoryService();
        List<CustomerDto> customers = repositoryService.GetCustomers().ToList();
        return customers;
    }
}

public class RepositoryService : IRepository
{
    private string dbProvider;

    public RepositoryService()
    {
        // In here, determine the provider from the config file (e.g. SQL, EF etc.)
        // and call the appropriate repository.
        // dbProvider = ???
    }

    public IEnumerable<CustomerDto> GetCustomers()
    {
        // Get the customers from the chosen repository
        throw new NotImplementedException();
    }
}

public interface IRepository
{
    IEnumerable<CustomerDto> GetCustomers();
}

public class SqlRepository : IRepository
{
    public IEnumerable<CustomerDto> GetCustomers()
    {
        throw new NotImplementedException();
    }
}

public class EFRepository : IRepository
{
    public IEnumerable<CustomerDto> GetCustomers()
    {
        throw new NotImplementedException();
    }
}

public class CustomerDto
{
    public string Name { get; set; }
    public string Surname { get; set; }
}
Many thanks
You should be clearer about your objectives (and those of your manager). Accessing your data through some repository interfaces is a first step. The second step is to have a shared object representation of your data table rows (or of your entities, if you want to refine the table mappings).
The idea behind the scenes may be:
a) We don't know ORM technologies well and want to try them without taking the risk of poor performance.
b) Our database is very large and we manipulate huge amounts of data.
c) Our database contains many thousands of tables.
d) ...
The general answer may be:
1) Use the chosen ORM when possible.
2) Downgrade to ADO.NET or even to stored procedures when performance is poor.
Entity Framework and NHibernate use a high-level entity mapping abstraction. Do you want to use this? If not, you may use lightweight object mappers like Dapper or PetaPoco.
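To give a feel for the lightweight option: Dapper is just extension methods over an open ADO.NET connection (a sketch; the table and column names are invented, CustomerDto is from the question):

using (var connection = new SqlConnection(connectionString))
{
    // Dapper maps the result set straight onto the DTO properties.
    var customers = connection.Query<CustomerDto>(
        "SELECT Name, Surname FROM Customers WHERE Surname = @surname",
        new { surname = "Smith" }).ToList();
}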
ORMs are a good way to lower the development cost of the database access code by 70% to 80% (95% if you just read data). Choosing to support all of them will ensure that the potential cost gains are lost.
PetaPoco is very interesting for a first experiment because it includes the very light mapper source code in your C# project and generates table objects with an easy-to-understand T4 transform file (all the source code is small and included in your data access layer). Its major drawback is that its author has not had time to work on it in recent years.
While ORM technologies can make programs easier to write and scale, they have drawbacks:
1) Because you work outside the database, operations between in-memory (or not-yet-persisted) objects and database data can easily become very costly: if fetching the data for one object generates one query, an operation on a collection of objects will generate as many queries as there are items in the collection (see the sketch after this list).
2) Because of the complex change-tracking mechanisms in high-level ORMs, saving data can become very slow if you don't take care of this.
3) The more functionality the ORM offers, the longer your learning curve.
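A sketch of point 1), the classic N+1 problem (EF-style; the names are invented):

// One query to load the orders...
var orders = context.Orders.ToList();
foreach (var order in orders)
{
    // ...then one extra query per order when Items is lazy-loaded:
    var total = order.Items.Sum(i => i.Price * i.Quantity);
}

// Eager loading collapses this back into a single round trip:
var ordersWithItems = context.Orders.Include(o => o.Items).ToList();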
The way that I generally accomplish this task is to have different concrete implementations of your repository interfaces, so you can have an EFRepository, an NHibernateRepository, an AdoNetRepository, or an InMemoryDatabaseRepository implementation.
As long as you encapsulate the construction of your repository (through a factory or dependency injection or whatever), the types that are consuming your repository don't have to know exactly what kind of repository they are working with.
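A minimal sketch of that encapsulation, building on the interfaces from the question (the config key name is my invention):

public static class RepositoryFactory
{
    public static IRepository Create()
    {
        // e.g. <add key="dbProvider" value="EF" /> in app.config/web.config
        string dbProvider = ConfigurationManager.AppSettings["dbProvider"];

        switch (dbProvider)
        {
            case "EF":  return new EFRepository();
            case "Sql": return new SqlRepository();
            default:    throw new NotSupportedException("Unknown provider: " + dbProvider);
        }
    }
}

// The caller never learns which provider it got:
IRepository repository = RepositoryFactory.Create();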
Getting deeper into Entity Framework and repositories in order to enable better testing. Wondering if this is wise though?
public interface IRepository
{
    int SaveChanges();
    void Dispose();
}

using (MyContext context = new MyContext())
{
    TransactionRepository txns = new TransactionRepository(context); // TransactionRepository implements IRepository
    MappingRepository maps = new MappingRepository(context);         // MappingRepository implements IRepository

    SomeCommand command = new SomeCommand(txns, maps);
    command.Execute();
}
Each of the repositories is logically different, so in theory they could be in different data sources. For now, they all use the same database though. Each of the repository classes implements IRepository, notably SaveChanges(), along with some query methods that I've not shown for brevity.
What's a good practice for utilizing multiple repositories?
+1 gorilla, some good points made. I would add the following thoughts.
In a web/MVC scenario, I use dozens of repositories and inject the context into these repositories. I use a repository base class.
I also have UoW classes which take a context in the constructor.
The unit of work classes contain references to all supported repositories for the context. I also use bounded contexts. Here is a sample blog from Julie Lerman on the subject:
http://www.goodreads.com/author/show/1892325.Julia_Lerman/blog
So yes, it makes perfect sense to use multiple contexts and multiple repositories.
You may even have multiple Unit of Work classes, although concurrent use of UoW classes is another discussion.
Adding sample code as requested:
This sample is one of several LuW classes that inherit from a base LuW class.
The current state and the DbContext to be used are injected (or defaulted).
The repositories are interfaces from the CORE project. The LuW classes are in the DAL project.
The base LuW is something like:
public interface ILuw : ILuwEvent, IDisposable
{
    IBosCurrentState CurrentState { get; set; }
    OperationStatus Commit();
}
The LuW class itself:
namespace XYZ.DAL
{
    public class LuwBosMaster : Luw, ILuwBosMaster
    {
        public LuwBosMaster(DbContext context, IBosCurrentState currentState)
        {
            base.Initialise(context, currentState);
        }

        public LuwBosMaster()
        {
            base.Initialise(GetDefaultContext(), BosGlobal.BGA.IBosCurrentState);
        }

        public static DbContextBosMaster GetDefaultContext()
        {
            return new DbContextBosMaster("BosMaster");
        }

        // MasterUser with its own repository class
        private IRepositoryMasterUser _repositoryMasterUser;
        public IRepositoryMasterUser RepMasterUser
        { get { return _repositoryMasterUser ?? (_repositoryMasterUser = new RepositoryMasterUser(Context, CurrentState)); } }

        // 20 other repositories declared and available within this LuW.
        // Some repositories might address several tables, others single tables only.
        // The repositories are based on a base class with common generic behavior for each MODEL object.
    }
}
I'm sure you get the basic idea...
This really comes down to design decisions on your part. If you're following the Unit of Work pattern, then each repository is probably going to have its own context, mainly because according to UoW, each repository call should create its context, do its work, and then dispose of its context.
There are other good reasons to share a context though. One of them (IMHO) is that the context has to track the state of an entity: if you get an entity, dispose of the context, make some modifications to the entity, and then attach it to a new context, this new context has to go hit the database so it can figure out the state of the entity. Likewise, if you're working with graphs of entities (invoices and all their invoice items), then the new context would have to fetch all the entities in the graph to determine their state.
Now, if you're working with web pages or services where you do not or cannot maintain state, then the UoW pattern is sort of implied, and it's generally accepted good practice.
The most important thing has been forgotten: the database connection is not shared between multiple DbContext instances. That means you have to use distributed transactions if you would like several repositories to participate in the same transaction. That's a large performance degradation compared to a local transaction.
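Concretely (a sketch reusing the question's types, and assuming EF6 or later where Database.BeginTransaction is available): sharing one context keeps everything on one connection and one local transaction:

using (MyContext context = new MyContext())
using (var transaction = context.Database.BeginTransaction())    // one connection, local transaction
{
    var txns = new TransactionRepository(context);
    var maps = new MappingRepository(context);

    new SomeCommand(txns, maps).Execute();

    context.SaveChanges();
    transaction.Commit();                                        // no distributed transaction needed
}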