Dependency Injection - where to put my repository object - C#

I am currently in the process of converting a large web service to use the repository pattern and dependency injection. We are expanding our team, and the benefits of reliable unit testing outweigh the effort required to refactor the code.
I have chosen Ninject as my framework based on the recommendation of a colleague and have begun refactoring my code. This has involved creating a "Common" project that contains the objects themselves, a Repository.Database project to contain the data access logic, and a web service that uses both. I have used convention based mapping so that IPersonRepository should map to my concrete PersonRepository class.
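The convention binding is set up along these lines in the web service's composition root (this sketch uses the Ninject.Extensions.Conventions extension; the exact assembly filter is illustrative):

var kernel = new StandardKernel();
kernel.Bind(x => x
    .FromAssembliesMatching("*.Repository.Database.dll")  // scan the data access assembly
    .SelectAllClasses()
    .BindDefaultInterface());                              // PersonRepository -> IPersonRepository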
I am currently taking the approach of creating a "Repository" property on each class with the [Inject] attribute, then replacing my constructors to use said repository, but have run into my first stumbling block and am not convinced I am doing things the right way. Before I started all this, I would instantiate an object like so:
var p = new Person(ID);
And using the format I suggested, my class looks something like this:
[Inject]
public IPersonRepository Repository { get; set; }
public string Name;
public Person(int ID)
{
// This feels wrong
var p = Repository.Get(ID);
Name = p.Name;
}
You can probably see my conundrum. How can I use a constructor without having to return a new object from the repository and then map each field to my current object? I can't replace "this", and while I could use something like AutoMapper to map each field in one go, it feels like I am doing something inherently wrong here.
I could use a static method instead of an injector:
[Inject]
public static IPersonRepository Repository { get; set; }
public string Name;
public static Person GetByID(int ID)
{
return Repository.Get(ID);
}
But as you can see it requires making Repository static, and it feels like I should be using a constructor rather than a static "GetByID" method. That could just be because I am so used to using a constructor.
Alternatively I can pass the Repository into the Person constructor, but again, it feels messy to do that every time I instantiate a Person in code.
What I am trying to achieve is to have my existing WCF project load all of its data using one repository, and for my Unit Test project to load all its data using another. I don't want to have to pass concrete implementations of IPersonRepository in either. Is this achievable and even recommended?

Your entities should not need to know anything about where or how they are stored. The idea of the repository pattern is to take the responsibility of persistence away from the business logic. Practically this means you would set up your service as follows:
Create separate repositories for each "top level" entity
Inject the repositories into your web service/controller/business object
Define your entities as regular business objects with state and behavior
Have the web service, etc., use the repository to fetch entities as needed by ID
You shouldn't need to call a constructor with an ID from the business logic... that's a bad pattern.
You would still use the constructor to instantiate a new entity that you would then submit to the repository e.g. personRepository.Add(new Person("Bob"))
This would then have the added benefit of making your web service testable by injecting a mocked repository and not having to worry about how the entities would get retrieved in your code itself.
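A rough sketch of that shape (the PersonService class here is illustrative; the repository's Get method is the one already used in the question):

public class PersonService
{
    private readonly IPersonRepository _personRepository;

    // Ninject supplies the concrete repository here; a test passes in a mock instead
    public PersonService(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GetPersonName(int id)
    {
        // The service asks the repository for the entity; the entity itself stays persistence-ignorant
        var person = _personRepository.Get(id);
        return person.Name;
    }
}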

I've created a small helper class, "EntityResolver", which implements the "IValueResolver" interface that was introduced recently in AutoMapper. This helper class can retrieve an entity from the repository when an Id is supplied.
1) AutoMapper configuration to map a ViewModel to an Entity is defined as follows:
Mapper.CreateMap<EmployeeVM, Employee>()
.ForMember(e => e.EmployeeNumber, opt => opt.MapFrom(vm => vm.Number))
// Other properties omitted
.ForMember(e => e.Company, opt => opt.ResolveUsing<EntityResolver<Company>>().FromMember(vm => vm.CompanyId))
;
2) Code for the EntityResolver
public class EntityResolver<TEntity> : IValueResolver where TEntity : class, IEntity, new()
{
public ResolutionResult Resolve(ResolutionResult source)
{
return source.New(ResolveObject(source));
}
private object ResolveObject(ResolutionResult source)
{
if (!source.Context.Options.Items.ContainsKey("Services")) return null;
var services = (List<object>)source.Context.Options.Items["Services"];
var item = services.FirstOrDefault(s => s is IBaseService<TEntity>);
if (item == null) return null;
var id = (long)source.Value;
if (id <= 0) return null;
var service = (IBaseService<TEntity>)item;
return service.GetById(id);
}
}
3) When mapping the ViewModel to an Entity, I supply additional IMappingOperationOptions
Mapper.Map<TEntity>(viewModel, opt => opt.Items["Services"] = GetServices());
4) The GetServices method just returns all services needed to resolve the entities used in the Employee object.
protected override List<object> GetServices()
{
var services = base.GetServices();
services.Add(_companyService);
services.Add(_functionService);
services.Add(_subfunctionService);
services.Add(_countryService);
return services;
}
For more details see my test project.

Related

How do you avoid passing a DI container in a unidirectional api design?

I have a business layer with business entities designed using Active Record, and a unidirectional API surface. I have two distinct problems:
Without complicating the code, how should runtime values be handled, such as passing in an id value from the DAL to a constructed object? Would this be done with parameter overrides?
How do I create other business entities and pass dependencies if I am not passing the container down as well (which would make it more of an anti-pattern / service locator)?
Product is the root that wraps the container and acts as our application facade, and entry point to the rest of the BAL. The piece I am trying to solve is in Product.FindCustomer and Customer.FindDocument
public class Product
{
private IUnityContainer container;
public void RegisterType<T>() ...
public void RegisterType<TFrom, TTo>() ...
public Customer FindCustomer(string customerNumber)
{
var id = context.Customers
.Where(p => p.CustomerNumber == customerNumber)
.Select(p => p.Id)
.Single();
var customer = container.Resolve<Customer>(...); // param override?
customer.Load();
return customer;
}
}
public class Customer : BusinessEntity<Data.Customer, Guid>
{
private readonly IDocumentFileProvider provider;
public Customer(IDataContext context, IDocumentFileProvider provider) : base(context)
{
this.provider = provider;
}
public Customer(IDataContext context, IDocumentFileProvider provider, Guid id) : base(context, id)
{
this.provider = provider;
}
public Document FindDocument(string code)
{
var id = context.Documents
.Where(p => p.Code == code)
.Select(p => p.Id)
.Single();
var document = new Document(context, provider, id); // Here is the issue
document.Load();
return document;
}
}
public class Document : BusinessEntity<Data.Document, Guid>
{
private readonly IDocumentFileProvider provider;
public Document(IDataContext context, IDocumentFileProvider provider) : base(context)
{
this.provider = provider;
}
public Document(IDataContext context, IDocumentFileProvider provider, Guid id) : base(context, id)
{
this.provider = provider;
}
public IDocumentFile GetFile()
{
return provider.GetFile();
}
}
Here, briefly, are the other classes.
public abstract class ActiveRecord<TEntity, TKey>
{
protected ActiveRecord(IDataContext context)
{
}
public virtual void Load() ...
public virtual void Save() ...
public virtual void Delete() ...
}
public abstract class BusinessEntity<TEntity, TKey> : ActiveRecord<TEntity, TKey>
{
protected BusinessEntity(IDataContext context) : base(context)
{
}
protected BusinessEntity(IDataContext context, TKey id) : this(context)
{
}
...
}
The hierarchies can be quite deep, but a shorter example:
var customer = product.FindCustomer("123");
var account = customer.FindAccount("321");
var document = account.FindDocument("some_code");
var file = document.GetFile();
One of my goals is to A) model the domain, and B) provide a very easy-to-understand API. Currently our BAL uses a service locator, but I am experimenting with replacing that with proper IoC/DI and a container.
The deeper the API goes and the more dependencies are needed, the longer the constructors of the higher-up classes become, and they may no longer seem cohesive.
While DI can be squeezed into most application designs using half-measures, the unfortunate truth is that not all application designs are particularly DI friendly. Creating "smart entities" seems like magic when it comes to API design, but the fact of the matter is that at their core they violate the SRP (load and save are separate responsibilities regardless of how you slice it).
You basically have 4 options:
Find a design that is more conducive to DI and use its object model for your API
Find a design that is more conducive to DI and create a facade object model for your API
Use property injection to load your dependencies and give the end user control over the constructor
Use a service locator
I ran into a similar wall when trying to use CSLA in conjunction with DI and, after many attempts, finally decided that it was CSLA that needed to go and that a better design approach was needed.
For a time, I tried using option 3. In this case you can create a facade wrapper around the DI container, and only expose its BuildUp() method through a static accessor. This prevents the use of the container as a service locator.
[Dependency]
public ISomeDependency SomeDepenency { get; set; }
public Customer()
{
Ioc.BuildUp(this);
}
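A minimal sketch of such a facade over Unity (the Ioc class name is illustrative; only BuildUp is exposed, not Resolve):

public static class Ioc
{
    private static IUnityContainer container;

    // Called once from the composition root
    public static void SetContainer(IUnityContainer configuredContainer)
    {
        container = configuredContainer;
    }

    // Property/field injection only - the container cannot be used as a service locator
    public static T BuildUp<T>(T instance)
    {
        return container.BuildUp(instance);
    }
}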
Some DI containers can inject properties using fluent configuration instead of attributes (so your business model doesn't need to reference the container), but this can make the DI configuration very complex. Another option is to build your own attributes.
Options 1 and 2 would be similar. You basically make every responsibility into its own class and separate your "entities" out into dumb data containers. An approach that works well for this is Command Query Responsibility Segregation (CQRS).
public class FindCustomer : IDataQuery<Customer>
{
public string CustomerNumber { get; set; }
}
public class FindCustomerHandler : IQueryHandler<FindCustomer, Customer>
{
private readonly DbContext context;
public FindCustomerHandler(DbContext context)
{
if (context == null)
throw new ArgumentNullException("context");
this.context = context;
}
public Customer Handle(FindCustomer query)
{
return (from customer in context.Customers
where customer.CustomerNumber == query.CustomerNumber
select new Customer
{
Id = customer.Id,
Name = customer.Name,
Addresses = customer.Addresses.Select(a =>
new Address
{
Id = a.Id,
Line1 = a.Line1,
Line2 = a.Line2,
Line3 = a.Line3
})
.OrderBy(x => x.Id)
}).FirstOrDefault();
}
}
Using option 1, the end user would create an instance of FindCustomer and call queryProcessor.Handle(findCustomer) (the queryProcessor is injected).
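For illustration, the consumer side of option 1 could look roughly like this (the IQueryProcessor interface and the CustomerLookup class are illustrative names, not part of the answer above):

public interface IQueryProcessor
{
    TResult Handle<TResult>(IDataQuery<TResult> query);
}

public class CustomerLookup
{
    private readonly IQueryProcessor queryProcessor;

    public CustomerLookup(IQueryProcessor queryProcessor)
    {
        this.queryProcessor = queryProcessor;
    }

    public Customer Find(string customerNumber)
    {
        // FindCustomerHandler is resolved behind IQueryProcessor by the container
        return queryProcessor.Handle(new FindCustomer { CustomerNumber = customerNumber });
    }
}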
Using option 2, you would then need to create a wrapper API. You could use a fluent builder approach (more info here) to provide logical default dependencies, but allow the end user to call methods to supply their own.
var customer = new CustomerBuilder().Build(); // defaults
var customer = new CustomerBuilder(c =>
c.WithSomeDependency(new SomeDependency())).Build(); // overridden dependency
Unfortunately, the main issue with this is that control of the lifetime of objects is no longer up to the DI container, so dependencies like DbContext need special handling.
Another variant of this would be to make each entity into a humble object that internally builds up its own DI container using the other (loosely coupled) API objects. This is the recommended approach for legacy frameworks (such as web forms) that are difficult to use with DI.
Finally, there is making a static service locator that all of your API objects use to resolve their dependencies. While this best accomplishes the goal, it is something that should be considered a last resort. The biggest issue is that you lose the ability to quickly and easily understand what dependencies a class requires. So, you are either forced to create (and update) documentation telling the end user what the dependencies are, or end users will have to go digging through the source code to find out. Whether using a service locator is acceptable depends on your target audience and how frequently you expect them to need to customize dependencies beyond the defaults. If custom dependencies are a once-in-a-blue-moon thing, it may work, but if 25% of your user base needs to add custom dependencies, a service locator is probably not the right approach.
The bottom line is that if maintainability is your main goal, then option 1 is the clear winner. But if you are married to this particular API design, you will need to choose one of the other options and live with the extra maintenance involved in supporting such an API.
References:
DI Friendly Library
Dependency Injection in .NET.

How to implement FIND method of EF in Unit Test?

I have a Web API 2.0 project that I am unit testing. My controllers have a Unit of Work. The Unit of Work contains numerous Repositories for various DbSets. I have a Unity container in the Web API and I am using Moq in the test project. Within the various repositories, I use the Find method of Entity Framework to locate an entity based on its key. Additionally, I am using Entity Framework 6.0.
Here is a very general example of the Unit of Work:
public class UnitOfWork
{
private IUnityContainer _container;
public IUnityContainer Container
{
get
{
return _container ?? UnityConfig.GetConfiguredContainer();
}
}
private ApplicationDbContext _context;
public ApplicationDbContext Context
{
get { return _context ?? Container.Resolve<ApplicationDbContext>(); }
}
private GenericRepository<ExampleModel> _exampleModelRepository;
public GenericRepository<ExampleModel> ExampleModelRepository
{
get { return _exampleModelRepository ??
Container.Resolve<GenericRepository<ExampleModel>>(); }
}
//Numerous other repositories and some additional methods for saving
}
The problem I am running into is that I use the Find method for some of my LINQ queries in the repositories. Based on this article, MSDN: Testing with your own test doubles (EF6 onwards), I have to create a TestDbSet<ExampleModel> to test the Find method. I was thinking about customizing the code to something like this:
namespace TestingDemo
{
class TestDbSet<TEntity> : DbSet<TEntity> where TEntity : BaseModel
{
public override TEntity Find(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefault(b => b.Id == id);
}
}
}
I figured I would have to customize my code so that TEntity is a type of some base class that has an Id property. That's my theory, but I'm not sure this is the best way to handle this.
So I have two questions. Is the approach listed above valid? If not, what would be a better approach for overriding the Find method in the DbSet with the SingleOrDefault method? Also, this approach only really works if there is only one primary key. What if my model has a compound key of different types? I would assume I would have to handle those individually. Okay, that was three questions?
To expand on my comment earlier, I'll start with my proposed solution, and then explain why.
Your problem is this: your repositories have a dependency on DbSet<T>. You are unable to test your repositories effectively because they depend on DbSet<T>.Find(params object[]), so you have decided to substitute your own variant of DbSet<T> called TestDbSet<T>. This is unnecessary; DbSet<T> implements IDbSet<T>. Using Moq, we can very cleanly create a stub implementation of this interface that returns a hard-coded value.
class MyRepository
{
private readonly IDbSet<MyType> dbSet;
public MyRepository(IDbSet<MyType> dbSet)
{
this.dbSet = dbSet;
}
public MyType FindEntity(int id)
{
return this.dbSet.Find(id);
}
}
By switching the dependency from DbSet<T> to IDbSet<T>, the test now looks like this:
public void MyRepository_FindEntity_ReturnsExpectedEntity()
{
var id = 5;
var expectedEntity = new MyType();
var dbSet = Mock.Of<IDbSet<MyType>>(set => set.Find(id) == expectedEntity);
var repository = new MyRepository(dbSet);
var result = repository.FindEntity(id);
Assert.AreSame(expectedEntity, result);
}
There - a clean test that doesn't expose any implementation details or deal with nasty mocking of concrete classes and lets you substitute out your own version of IDbSet<MyType>.
On a side note, if you find yourself testing DbContext - don't. If you have to do that, your DbContext is too far up the stack and it will hurt if you ever try and move away from Entity Framework. Create an interface that exposes the functionality you need from DbContext and use that instead.
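As a rough sketch, such an interface exposes only what the repositories actually need (the IMyDataContext and MyDataContext names are illustrative; MyType comes from the earlier example):

public interface IMyDataContext
{
    IDbSet<MyType> MyTypes { get; }
    int SaveChanges();
}

// The EF-backed implementation just forwards to DbContext; tests supply their own fake
public class MyDataContext : DbContext, IMyDataContext
{
    public IDbSet<MyType> MyTypes { get; set; }
}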
Note: I used Moq above. You can use any mocking framework, I just prefer Moq.
If your model has a compound key (or has the capability to have different types of keys), then things get a bit trickier. The way to solve that is to introduce your own interface. This interface should be consumed by your repositories, and the implementation should be an adapter to transform the key from your composite type into something that EF can deal with. You'd probably go with something like this:
interface IGenericDbSet<TKeyType, TObjectType>
{
TObjectType Find(TKeyType keyType);
}
This would then translate under the hood in an implementation to something like:
class GenericDbSet<TKeyType, TObjectType> : IGenericDbSet<TKeyType, TObjectType> where TObjectType : class
{
private readonly IDbSet<TObjectType> dbset;
public GenericDbSet(IDbSet<TObjectType> dbset)
{
this.dbset = dbset;
}
public TObjectType Find(TKeyType key)
{
// TODO: Convert key into something a regular dbset can understand
return this.dbset.Find(key);
}
}
I realise this is an old question, but after coming up against this issue myself when mocking data for unit tests, I wrote this generic version of the Find method that can be used in the TestDbSet implementation explained on MSDN.
Using this method means you don't have to create concrete types for each of your DbSets. One point to note is that this implementation works if your entities have primary keys in one of the following forms (I'm sure you could modify it to suit other forms easily):
'Id'
'ID'
'id'
classname +'id'
classname +'Id'
classname + 'ID'
public override T Find(params object[] keyValues)
{
// Build the predicate a => a.<KeyProperty> == keyValues[0] as an expression tree
ParameterExpression _ParamExp = Expression.Parameter(typeof(T), "a");
Expression _BodyExp = null;
Expression _Prop = null;
Expression _Cons = null;
// Locate the key property: "Id" (any casing) or "<ClassName>Id" (any casing)
PropertyInfo[] props = typeof(T).GetProperties();
var typeName = typeof(T).Name.ToLower() + "id";
var key = props.Where(p => (p.Name.ToLower().Equals("id")) || (p.Name.ToLower().Equals(typeName))).Single();
_Prop = Expression.Property(_ParamExp, key.Name);
_Cons = Expression.Constant(keyValues.Single(), key.PropertyType);
_BodyExp = Expression.Equal(_Prop, _Cons);
var _Lamba = Expression.Lambda<Func<T, Boolean>>(_BodyExp, new ParameterExpression[] { _ParamExp });
return this.SingleOrDefault(_Lamba);
}
Also, from a performance point of view it's not going to be as quick as the recommended method, but for my purposes it's fine.
So based on the example, I did the following to be able to Unit Test my UnitOfWork.
Had to make sure my UnitOfWork was implementing IApplicationDbContext. (Also, when I say UnitOfWork, my controller's UnitOfWork is of type IUnitOfWork.)
I left all of the DbSets in my IApplicationDbContext alone. I chose this pattern once I noticed IDbSet didn't include RemoveRange and FindAsync, which I use throughout my code. Also, with EF6, the DbSet properties can be declared virtual, and this was recommended on MSDN, so that made sense.
I followed the Creating the in-memory test doubles example to create the TestDbContext and all the recommended classes (e.g. TestDbAsyncQueryProvider, TestDbAsyncEnumerable, TestDbAsyncEnumerator). Here is the code:
public class TestContext : DbContext, IApplicationDbContext
{
public TestContext()
{
this.ExampleModels = new TestBaseClassDbSet<ExampleModel>();
//New up the rest of the TestBaseClassDbSets that are needed for testing
//Created an internal method to load the data
_loadDbSets();
}
public virtual DbSet<ExampleModel> ExampleModels { get; set; }
//....List of remaining DbSets
//Local property to see if the save method was called
public int SaveChangesCount { get; private set; }
//Override the SaveChanges method for testing
public override int SaveChanges()
{
this.SaveChangesCount++;
return 1;
}
//...Override more of the DbContext methods (e.g. SaveChangesAsync)
private void _loadDbSets()
{
_loadExampleModels();
}
private void _loadExampleModels()
{
//ExpectedGlobals is a static class of the expected models
//that should be returned for some calls (e.g. GetById)
this.ExampleModels.Add(ExpectedGlobal.Expected_ExampleModel);
}
}
As I mentioned in my post, I needed to implement the FindAsync method, so I added a class called TestBaseClassDbSet, which is an alteration of the TestDbSet class in the example. Here is the modification:
//BaseModel is a class that has a key called Id that is of type string
public class TestBaseClassDbSet<TEntity> :
DbSet<TEntity>
, IQueryable, IEnumerable<TEntity>
, IDbAsyncEnumerable<TEntity>
where TEntity : BaseModel
{
//....copied all the code from the TestDbSet class that was provided
//Added the missing functions
public override TEntity Find(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefault(b => b.Id == id);
}
public override Task<TEntity> FindAsync(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefaultAsync(b => b.Id == id);
}
}
Created an instance of TestContext and passed that into my Mock.
var context = new TestContext();
var userStore = new Mock<IUserStore>();
//ExpectedGlobal contains a static variable call Expected_User
//to be used as to populate the principle
// when mocking the HttpRequestContext
userStore
.Setup(m => m.FindByIdAsync(ExpectedGlobal.Expected_User.Id))
.Returns(Task.FromResult(ExpectedGlobal.Expected_User));
var mockUserManager = new Mock(userStore.Object);
var mockUnitOfWork =
new Mock(mockUserManager.Object, context)
{ CallBase = false };
I then inject the mockUnitOfWork into the controller, and voila. This implementation seems to be working perfectly. That said, based on some feedback I have read online, it will probably be scrutinized by some developers, but I hope others find it useful.

How do I mock an interface with Moq or the Ninject mocking kernel

I just waded through questions and blogs on the subject of mocking and dependency injection, and have come to the conclusion that I just need to mock the interface that is consumed by the client. I want to test a simple use case here but have no idea how.
The Contract
public interface IApplicationService
{
bool DeleteApplication(int id);
ApplicationDto AddApplication(ApplicationDto application);
IEnumerable<ApplicationDto> GetApplications();
}
Implementation ( I am going to mock )
public class ApplicationService : IApplicationService
{
private EntityFrameworkRepo repo;
public ApplicationService()
{
repo = new EntityFrameworkRepo();
}
public ApplicationDto Add(ApplicationDto dto)
{
//add to dbcontext and commit
}
}
Mocking Code
[Test(Description = "Test If can successfully add application")]
public void CanAddApplication()
{
//create a mock application service
var applicationService = new Mock<IApplicationService>();
//create a mock Application Service to be used by business logic
var applicationDto = new Mock<ApplicationDto>();
//How do I set this up?
applicationService.Setup(x => x.GetApplications()).Returns(IEnumerable<applicationDto.Object>);
}
And I, for one, am sure I need to test the business logic rather than mock it. So what exactly do I have to do to test my ApplicationService while keeping Entity Framework out?
By the way, speaking of ApplicationService, it uses constructor injection with Ninject. So will mocking this with Ninject.MockingKernel set up the dependency chain?
There is little or no benefit to using a dependency injection (IoC) container in unit testing. Dependency injection helps you create loosely coupled components, and loosely coupled components are easier to test; that's it.
So if you want to test some service, just create mocks of its dependencies and pass them to that service as usual (there is no need to involve an IoC container here; I can hardly imagine that you will need features of IoC containers - like contextual binding, interception, etc. - inside a unit test).
If you want your ApplicationService to be easy to test, it should look more like:
public class ApplicationService: IApplicationService
{
private readonly IEntityFrameworkRepo repo;
// dependency passed by constructor
public ApplicationService(IEntityFrameworkRepo repo)
{
this.repo = repo;
}
// save to db when DTO is eligible
public ApplicationDto Add(ApplicationDto dto)
{
// some business rule
if(dto.Id > 0 && dto.Name.Contains(string.Empty)){
//add to dbcontext and commit
}else{
throw new NotEligibleException();
}
}
}
Here the dependency is passed via the constructor. In your application code you will use it together with an IoC container to do constructor injection (the IoC container will be responsible for creating instances of IEntityFrameworkRepo).
But in a unit test, you can just pass in an instance of some implementation of IEntityFrameworkRepo created on your own.
ApplicationDto
As long as ApplicationDto is some object that can be created by hand, I can use it directly in the unit test (creating instances by hand). Otherwise I would have to wrap it in an interface like IApplicationDto in order to be able to mock it with Moq.
public class ApplicationDto{
public int Id {get; set;}
public string Name {get; set;}
}
Here is how the unit test could look:
In the unit test I will use a mocked implementation of IEntityFrameworkRepo, because I do not want to configure database connections, web services, etc., and my primary intention is to test the ApplicationService, not the underlying repository. Another advantage is that the test will be runnable without machine-specific configuration. To mock up a DB repository I can use e.g. a List.
[Test(Description = "Test If can successfully add application")]
public void CanAddApplicationIfEligible()
{
var repo = GetRepo();
var appService = new ApplicationService(repo);
var testAppDto = new ApplicationDto() { Id = 155, Name = "My Name" };
var currentItems = repo.ApplicationDtos.Count();
appService.Add(testAppDto);
Assert.AreEqual(currentItems + 1, repo.ApplicationDtos.Count());
var justAdded = repo.ApplicationDtos.Where(x => x.Id == 155).FirstOrDefault();
Assert.IsNotNull(justAdded);
///....
}
private static IEntityFrameworkRepo GetRepo() {
// create a mock repository
var listRepo = new List<ApplicationDto>{
new ApplicationDto {Id=1, Name="MyName"}
};
var repo = new Mock<IEntityFrameworkRepo>();
// setup the methods you know you will need for testing
// returns the initialized list instead of a DB queryable like in the real impl.
repo.Setup(x => x.ApplicationDtos)
.Returns(listRepo.AsQueryable());
// adds an instance of ApplicationDto to list
repo.Setup(x => x.Add(It.IsAny<ApplicationDto>()))
.Callback<ApplicationDto>(a => listRepo.Add(a));
return repo.Object;
}
Note:
A ninject.mockingkernel extension has been released. The approach described in the example on the wiki can make your unit-test code a bit tidier, but the approach described there is definitely not dependency injection (it is a service locator).

Implementing a Repository<T> that returns a domain model mapped from an EF Entity

I have the following application structure, based on the onion architecture by Jeffrey Palermo (ref link). So my Core is not dependent on anything, and my Infrastructure is dependent on my Core.
My Core has the repository contract and my Infrastructure implements it. The implementation gets injected by my IoC container.
Core
-Interfaces
--IRepository<TDomainModel>
-Domain
--Person
Infrastructure
-Data
--Repository<TDomainModel> (Implementation)
-Entities
--Ef.edmx
So this wouldn't be a problem if I wrote out a concrete repository implementation (e.g. a PersonRepository) because I would know what type to project / map to.
Example Concrete Implementation:
public class PersonRepository
{
...
public IQueryable<PersonDomainClass> GetByName(string name)
{
return Dbcontext.Person.Where(x => x.name == name).Select(x => new Person());
}
...
}
What I would like:
public class Repository<TDomainModel> : IRepository<TDomainModel>
{
//Problem 1. We can't set the DbSet to a Domain Model
private DbSet<TDomainModel> dbEntity;
...
public IQueryable<TDomainModel> GetWhere(Expression<Func<TDomainModel, bool>> predicate)
{
//Problem 2. I Don't think this will work because the predicate is ofType TDomainModel
//and not an EF Entity!?
var entities = dbEntity.Where(predicate);
var domainObjects = Mapper.Map <IQueryable<TDomainModel>, IQueryable<TEntityModel>> (entities);
return domainObjects;
}
...
}
I might be going about this the wrong way so I open other implementations.
UPDATE
Well, thank you all for your thoughts and advice. usr made a very good point that I had overlooked - if I abstract over my ORM, I will lose all the benefits that the ORM provides.
I am using EF Database First development. So my Entities are in my infrastructure along with my repository implementations.
The Domain is separate to this as I am building my application based on the onion architecture.
If I were doing Code First, it seems the thing to do is to build your domain first and, using EF Code First, translate this to a database.
I can't do code first :(
So, in steps the Entity Framework DbContext POCO generator from the EF team at Microsoft. This generates persistence-ignorant POCO classes based on my edmx file.
This seems great so far: I have all the benefits of lazy loading and change tracking, and even better, my domain is generated for me and Entity Framework handles the mapping internally. This has simplified my application :)
So this is now a high-level view of my architecture:
Core
-Interfaces
--IRepository<TEntity>
---IPersonRepository<Person>
---IFooRepository<Foo>
-Domain
--Person (Auto Generated)
--Foo (Auto Generated)
Infrastructure
-Data
--Repository<TEntity> (Implementation)
---PersonRepository<Person>
---FooRepository<Foo>
-Entities
--Ef.edmx
The repository pattern is used to provide an abstraction over the data layer, right?
With that in mind, let's think about LINQ to SQL (no matter whether it's through EF, NHibernate or anything else). There is no LINQ-to-SQL provider which is 100% compatible with LINQ; there are always cases which cannot be handled. Hence LINQ to SQL is a leaky abstraction.
That means that if you use a repository interface which exposes IQueryable<TDomainModel> or Expression<Func<TDb, bool>>, you have to be aware of those limitations. It's therefore not a complete abstraction.
Instead I recommend that you just provide a basic generic repository like this:
interface IRepository<TEntity, TKey>
{
TEntity Get(TKey key);
void Save(TEntity entity);
void Delete(TEntity entity);
}
And then create root aggregate specific interfaces:
interface IUserRepository : IRepository<User, int>
{
User GetByUserName(string userName);
IEnumerable<User> FindByLastName(string lastName);
}
which means that the implementation would look like:
public class UserRepository : EfRepository<User>, IUserRepository
{
//implement the interfaces declared in IUserRepository here
}
It's now a 100% working abstraction where it's much easier to tell what functions the repository provides. You have to write a little bit more code now, but you don't have to struggle with a leaky abstraction later on.
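For completeness, a rough sketch of the EF-backed base class used above (assuming EF's DbContext and, to keep the example simple, an int key):

public class EfRepository<TEntity> : IRepository<TEntity, int>
    where TEntity : class
{
    protected readonly DbContext context;

    public EfRepository(DbContext context)
    {
        this.context = context;
    }

    public TEntity Get(int key)
    {
        return context.Set<TEntity>().Find(key);
    }

    public void Save(TEntity entity)
    {
        // Kept deliberately naive: new entities are added, then persisted
        context.Set<TEntity>().Add(entity);
        context.SaveChanges();
    }

    public void Delete(TEntity entity)
    {
        context.Set<TEntity>().Remove(entity);
        context.SaveChanges();
    }
}

UserRepository then only has to add the members declared in IUserRepository (GetByUserName, FindByLastName) on top of this base.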
You could also switch to queries like I demonstrate here: http://blog.gauffin.org/2012/10/griffin-decoupled-the-queries/
Take a look at AutoMapper. This may help with the generic mapping.
You are going in the right direction, but I would suggest adding additional abstractions to your solution:
public abstract class RepositoryBase<T, TDb> where T : new() where TDb : class, new()
{
protected IQueryable<T> GetBy(Expression<Func<TDb, bool>> where = null,
PagingSortOptions? pagingSortOptions = null)
{
//GetDbSet basic method to get DbSet in generic way
IQueryable<TDb> query = GetDbSet();
if (where != null)
{
query = query.Where(where);
}
if (pagingSortOptions != null)
{
query = query.InjectPagingSortQueryable(pagingSortOptions);
}
return query.Select(GetConverter());
}
protected virtual Expression<Func<TDb, T>> GetConverter()
{
return dbEntity => Mapper.Map<TDb, T>(dbEntity);
}
}
public class CountryRepository : RepositoryBase<CountryDomainModel, CountryDb>, ICountryRepository
{
public Country GetByName(string countryName)
{
return GetBy(_ => _.Name == countryName).First();
}
}
public interface ICountryRepository : IRepository<CountryDomainModel>
{
Country GetByName(string countryName);
}
public interface IRepository<TDomainModel>
{
//some basic metods like GetById
}
Then, outside the database layer, you will use ICountryRepository.

MVC repository architecture and accessing different tables

Thank you for helping me understand some of this stuff:
Say I have 2 controllers in an MVC application -
1 controls viewModels related to salespeople
1 controls viewModels related to sales
Each has its own repository, which accesses data using Entity Framework (code first).
Both repositories are set up to handle dependency injection, but also have zero-argument constructors that default to using the appropriate EF data access.
The salespeople controller uses a _salesPeopleRepository.getAllSalesPeople() function, which returns a List of salespeople to populate an index view.
The sales controller needs to access the same list to populate a dropdown list.
There are several ways to get the information to the sales controller, and I was wondering which option would be considered best practice:
a) In the controller
db = new DataContext();
_saleRepos = new SalesRepository(db);
_salesPeople = new SalesPeopleRepository(db);
.....
modelA.SalePeopleSelectList = new SelectList(_salesPeople.getAllSalesPeople(), "id", "name");
b) in the SalesRepository - either using EF itself:
public IEnumerable<salesPerson> getAllSalesPeople()
{
return _db.SalesPeople.ToList();
}
c) or instantiating and injecting the same data access object before calling the function
public IEnumerable<salesPerson> getAllSalesPeople()
{
return (new SalesPersonRepository(_db)).getAllSalesPeople();
}
Edit
If the answer is a), how should custom business logic be called from one repository - say, for instance, a sale has a storeId and the repository checks that the storeId entered for the sale matches the storeId for the salesPerson? Should the salesPerson object used for business logic purposes (within the salesRepository) be accessed via the salesPerson repository, or directly from the dataContext object?
Thank you for your thoughts and experience
It doesn't make sense to have your SalesRepository retrieving data from the SalesPerson table. That data access logic ought to be centralized in the SalesPeopleRepository. Duplicating the method across repositories will just muddy the water, in my opinion.
So why not have both the SalesRepository and SalesPeopleRepository used in your Sales Controller? I would just instantiate an instance of the SalesPeopleRepository and use the method already defined in there.
Also, if your Controllers are using dependency injection, you could just have the repositories passed in to the constructor:
public SalesController (ISalesRepository salesRepository, ISalesPeopleRepository salesPeopleRepository)
{
this._salesRepository = salesRepository;
this._salesPeopleRepository = salesPeopleRepository;
}
Best Practice always depends on the context, but a combination of the Repository and Unit Of Work patterns are regularly used on top of EF.
Usage:
using (var uow = new DataContext()) {
var salesPeople = new SalesPeopleRepository(uow);
// ...
uow.Commit(); // If changes must be committed back to the database
}
Implementation:
public interface IUnitOfWork {
void Commit();
}
public class DataContext : DbContext, IUnitOfWork {
public void Commit() {
this.SaveChanges();
}
}
public class SalesPeopleRepository {
private DataContext _db;
public SalesPeopleRepository(IUnitOfWork uow) {
_db = uow as DataContext;
}
public IEnumerable<SalesPerson> GetAllSalesPeople() {
return _db.SalesPeople.ToList();
}
}
First, the C# naming convention should be followed: getAllSalesPeople() should be GetAllSalesPeople(). Second, an IoC container and dependency injection would be the best practice in this case.
Option a) should be avoided because the DataContext and repositories are created directly in the controller; this violates dependency injection, tightly couples your code to the repositories and DataContext, and leaves no way to mock them for unit testing. Instead, repositories should be injected into controllers and the DataContext should be injected into repositories.
public Repository(DataContext dataContext)
{
_dataContext = dataContext;
}
public SalesController(ISalesRepository salesRepository,
ISalesPeopleRepository salesPeopleRepository)
{
_salesRepository = salesRepository;
_salesPeopleRepository = salesPeopleRepository;
}
The DataContext should be registered with a per-request lifetime in the IoC container instead of being created directly in the controller; most IoC containers support this. I don't know which IoC container you use, but my favourites are Autofac and Windsor.
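For example, with Autofac's MVC integration the registrations might look roughly like this (a sketch, not a complete composition root; MvcApplication is the usual Global.asax class name):

var builder = new ContainerBuilder();

// One DataContext per HTTP request, shared by every repository resolved in that request
builder.RegisterType<DataContext>().InstancePerRequest();

builder.RegisterType<SalesRepository>().As<ISalesRepository>();
builder.RegisterType<SalesPeopleRepository>().As<ISalesPeopleRepository>();

builder.RegisterControllers(typeof(MvcApplication).Assembly);
DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));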
For item c), you are letting business logic leak into the repository layer; instead, business logic should live in the controller or in a separate layer. Repositories should just deal with CRUD operations against your database.
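Applied to the edit in your question, the store check would then live in the controller (or a dedicated service) that has both repositories injected. A rough sketch - the repository methods and view model members shown here are illustrative:

[HttpPost]
public ActionResult Create(SaleViewModel model)
{
    // Business rule from the edit: the sale's store must match the salesperson's store
    var salesPerson = _salesPeopleRepository.GetById(model.SalesPersonId);
    if (salesPerson == null || salesPerson.StoreId != model.StoreId)
    {
        ModelState.AddModelError("StoreId", "The store does not match the salesperson's store.");
        return View(model);
    }

    _salesRepository.Add(model.ToSale());
    return RedirectToAction("Index");
}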
