How to configure dependency injection container with Func<T, Result>? - c#

BusinessAction is used to represent an action that can be performed by a user. Each action is related to a specific entity, so if, for example, that entity is Order, business actions could be CancelOrder, IssueRefund, etc.
public abstract class BusinessAction<T>
{
public Guid Id { get; init; }
public Func<T, bool> IsEnabledFor { get; init; }
}
public class CancelOrderAction : BusinessAction<Order>
{
public CancelOrderAction ()
{
Id = Guid.Parse("0e07d05c-6298-4c56-87d7-d2ca339fee1e");
IsEnabledFor = o => o.Status == OrderStatus.Active;
}
}
Then I need to group all actions related to the specific type.
public interface IActionRegistry
{
Task<IEnumerable<Guid>> GetEnabledActionIdsForAsync(Guid entityId);
}
public class ActionRegistry<T> : IActionRegistry
where T : BaseEntity
{
private readonly IEnumerable<BusinessAction<T>> _actions;
private readonly IRepository<T> _repository;
public ActionRegistry(IEnumerable<BusinessAction<T>> actions, IRepository<T> repository)
{
_actions = actions;
_repository = repository;
}
public async Task<IEnumerable<Guid>> GetEnabledActionIdsForAsync(Guid entityId)
{
var entity = await _repository.FindByIdAsync(entityId);
return entity == null
? Enumerable.Empty<Guid>()
: _actions.Where(a => a.IsEnabledFor(entity)).Select(a => a.Id);
}
}
Finally, there is an API endpoint that receives an entity type (some enumeration that is later mapped to a real .NET type) and the ID of an entity. The API endpoint is responsible for returning the action IDs that are enabled for the current state of the entity.
public class RequestHandler : IRequestHandler<Request, IEnumerable<Guid>>
{
private readonly Func<Type, IActionRegistry> _registryFactory;
public RequestHandler(Func<Type, IActionRegistry> registryFactory)
{
_registryFactory = registryFactory;
}
public async Task<IEnumerable<Guid>> Handle(Request request, CancellationToken cancellationToken)
{
var type = request.EntityType.GetDotnetType();
var actionRegistry = _registryFactory(type);
var enabledActions = await actionRegistry.GetEnabledActionIdsForAsync(request.EntityId);
return enabledActions;
}
}
The question is: How can I configure the dependency injection container in ASP.NET (using default option or Autofac) so that Func<Type, IActionRegistry> can be resolved?
For parameters in ActionRegistry<T> I guess I can do:
builder.RegisterAssemblyTypes(typeof(CancelOrderAction).Assembly).AsClosedTypesOf(typeof(BusinessAction<>));
builder.RegisterGeneric(typeof(Repository<>))
.As(typeof(IRepository<>))
.InstancePerLifetimeScope();
But, how can I configure Func<Type, IActionRegistry> so that I am able to automatically connect a request for Order with ActionRegistry<Order>? Is there a way to do that, or will I need to manually configure the factory by writing some switch statement based on type (and how would that look)?
Is there a better way to achieve what I need here? The end goal is that once I have runtime type, I can get a list of business actions related to that type as well as a repository (so that I can fetch entity from DB).

What you're trying to do is possible, but it's not a common thing and isn't something magic you'll get out of the box. You'll have to write code to implement it.
Before I get to that... for future reference, you might get help faster and more eyes on your question if your repro is far more minimal. The whole BusinessAction<T> isn't really needed; the RequestHandler isn't needed... honestly, all you need to repro what you're doing is:
public interface IActionRegistry
{
}
public class ActionRegistry<T> : IActionRegistry
{
}
If the other stuff is relevant to the question, definitely include it... but in this case, it's not, so adding it in here just makes the question harder to read through and answer. I know I, personally, will sometimes just skip questions where there's a lot of extra stuff because there are only so many hours in the day, you know?
Anyway, here's how you'd do it, in working example form:
var builder = new ContainerBuilder();
// Register the action registry generic but not AS the interface.
// You can't register an open generic as a non-generic interface.
builder.RegisterGeneric(typeof(ActionRegistry<>));
// Manually build the factory method. Going from reflection
// System.Type to a generic ActionRegistry<Type> is not common and
// not directly supported.
builder.Register((context, parameters) =>
{
// Capture the lifetime scope or you'll get an exception about
// the resolve operation already being over.
var scope = context.Resolve<ILifetimeScope>();
// Here's the factory method. You can add whatever additional
// enhancements you need, like better error handling.
Func<Type, IActionRegistry> factory = type =>
{
var closedGeneric = typeof(ActionRegistry<>).MakeGenericType(type);
return scope.Resolve(closedGeneric) as IActionRegistry;
};
return factory;
});
var container = builder.Build();
// Now you can resolve it and use it.
var factory = container.Resolve<Func<Type, IActionRegistry>>();
var instance = factory(typeof(DivideByZeroException));
Assert.Equal("ActionRegistry`1", instance.GetType().Name);
Assert.Equal("DivideByZeroException", instance.GetType().GenericTypeArguments[0].Name);

Related

Resolving multiple implementations of an interface

I have an application that has 3 services:
A validator service that validates content based on type
A boss service that converts stuff in regard to boss people
An employee service that converts stuff in regard to employee people
It looks like this, simplified:
public interface IValidatorService<T>
{
void ValidateContent(AbstractValidator<T> validator, List<T> content);
}
public class ValidatorService<T> : IValidatorService<T>
{
public void ValidateContent(AbstractValidator<T> validator, List<T> content)
{ ...does its job... }
}
public interface IPeopleService<T>
{
List<T> Convert(string json);
}
public class BossService : IPeopleService<People>
{
IValidatorService<Boss> _validatorService;
public BossService (IValidatorService<Boss> validatorService)
{
_validatorService = validatorService;
}
public List<People> Convert(string json)
{ ...does its job for boss...}
}
public class EmployeeService : IPeopleService<People>
{
IValidatorService<Employee> _validatorService;
public EmployeeService (IValidatorService<Employee> validatorService)
{
_validatorService = validatorService;
}
public List<People> Convert(string json)
{ ...does its job for employee...}
}
Now in my main I am ok if I do this :
var serviceProvider = new ServiceCollection()
.AddSingleton<IValidatorService<Boss>, ValidatorService<Boss>>()
.AddSingleton<IPeopleService<People>, BossService>()
.BuildServiceProvider();
var bosses = serviceProvider.GetService<IPeopleService<People>>().Convert(json);
But in fact I want to do something like the following. The problem is: how can the container know which implementation to use for bosses and which for employees, since both are registered as the same service type, IPeopleService<People>?
var serviceProvider = new ServiceCollection()
.AddSingleton<IValidatorService<Boss>, ValidatorService<Boss>>()
.AddSingleton<IPeopleService<People>, BossService>()
.AddSingleton<IValidatorService<Employee>, ValidatorService<Employee>>()
.AddSingleton<IPeopleService<People>, EmployeeService>()
.BuildServiceProvider();
var bosses = serviceProvider.GetService<IPeopleService<People>>().Convert(json);
var employees = serviceProvider.GetService<IPeopleService<People>>().Convert(json);
You can't do that directly using the built-in container. You will have to introduce a factory that acts as an intermediary and inject the factory instead. This allows you to specify additional context to get the right service (e.g. by name).
Take a look at the GitHub repo and NuGet package for DependencyInjection.Extensions.
This solution was inspired by a question about named resolution.
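For illustration, a hand-rolled factory over the built-in container could look roughly like this; the PeopleKind enum and the delegate registration are assumptions for the sketch, not part of the original code:

public enum PeopleKind { Boss, Employee }

var serviceProvider = new ServiceCollection()
    .AddSingleton<IValidatorService<Boss>, ValidatorService<Boss>>()
    .AddSingleton<IValidatorService<Employee>, ValidatorService<Employee>>()
    .AddSingleton<BossService>()
    .AddSingleton<EmployeeService>()
    // The factory supplies the missing context (which kind of people service).
    .AddSingleton<Func<PeopleKind, IPeopleService<People>>>(sp => kind =>
        kind == PeopleKind.Boss
            ? (IPeopleService<People>)sp.GetRequiredService<BossService>()
            : sp.GetRequiredService<EmployeeService>())
    .BuildServiceProvider();

var factory = serviceProvider.GetRequiredService<Func<PeopleKind, IPeopleService<People>>>();
var bosses = factory(PeopleKind.Boss).Convert(json);
var employees = factory(PeopleKind.Employee).Convert(json);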
It doesn't. Depending on which DI engine you use, that would throw some kind of ambiguous resolution exception or some lamer implementations might take the first one. Some others even allow you to get all implementations of IPeopleService<People> through an IEnumerable mechanism.
And why would the DI engine know which one you want? You haven't given it any context with which to make a decision.
You'd have to make one IPeopleService<Boss> and one IPeopleService<Employee>.
EDIT: Kit makes a good point... some DI engines support resolving by name or other kinds of context.
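To make the closed-interface suggestion concrete, here is a sketch; it assumes BossService and EmployeeService are changed to implement IPeopleService<Boss> and IPeopleService<Employee> respectively:

var serviceProvider = new ServiceCollection()
    .AddSingleton<IValidatorService<Boss>, ValidatorService<Boss>>()
    .AddSingleton<IValidatorService<Employee>, ValidatorService<Employee>>()
    .AddSingleton<IPeopleService<Boss>, BossService>()
    .AddSingleton<IPeopleService<Employee>, EmployeeService>()
    .BuildServiceProvider();

// Each resolution is now unambiguous.
var bosses = serviceProvider.GetService<IPeopleService<Boss>>().Convert(json);
var employees = serviceProvider.GetService<IPeopleService<Employee>>().Convert(json);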

How do you avoid passing a DI container in a unidirectional api design?

I have a business layer with business entities designed using Active Record, and a unidirectional api surface. I have two distinct problems:
Without complicating the code, how should runtime values be handled, such as passing in an id value from the DAL to a constructed object? Would this be done with parameter overrides?
How do I create other business entities and pass dependencies along if I am not passing the container down as well (which would make it more of an anti-pattern / service locator)?
Product is the root that wraps the container and acts as our application facade, and entry point to the rest of the BAL. The piece I am trying to solve is in Product.FindCustomer and Customer.FindDocument
public class Product
{
private IUnityContainer container;
public void RegisterType<T>() ...
public void RegisterType<TFrom, TTo>() ...
public Customer FindCustomer(string customerNumber)
{
var id = context.Customers
.Where(p => p.CustomerNumber == customerNumber)
.Select(p => p.Id)
.Single();
var customer = container.Resolve<Customer>(...); // param override?
customer.Load();
return customer;
}
}
public class Customer : BusinessEntity<Data.Customer, Guid>
{
private readonly IDocumentFileProvider provider;
public Customer(IDataContext context, IDocumentFileProvider provider) : base(context)
{
this.provider = provider;
}
public Customer(IDataContext context, IDocumentFileProvider provider, Guid id) : base(context, id)
{
this.provider = provider;
}
public Document FindDocument(string code)
{
var id = context.Documents
.Where(p => p.Code == code)
.Select(p => p.Id)
.Single();
var document = new Document(context, provider, id); // Here is the issue
document.Load();
return document;
}
}
public class Document : BusinessEntity<Data.Document, Guid>
{
private readonly IDocumentFileProvider provider;
public Document(IDataContext context, IDocumentFileProvider provider) : base(context)
{
this.provider = provider;
}
public Document(IDataContext context, IDocumentFileProvider provider, Guid id) : base(context, id)
{
this.provider = provider;
}
public IDocumentFile GetFile()
{
return provider.GetFile();
}
}
Here is briefly the other classes.
public abstract class ActiveRecord<TEntity, TKey>
{
protected ActiveRecord(IDataContext context)
{
}
public virtual void Load() ...
public virtual void Save() ...
public virtual void Delete() ...
}
public abstract class BusinessEntity<TEntity, TKey> : ActiveRecord<TEntity, TKey>
{
protected BusinessEntity(IDataContext context) : base(context)
{
}
protected BusinessEntity(IDataContext context, TKey id) : this(context)
{
}
...
}
The hierarchies can be quite deep, but a shorter example:
var customer = product.FindCustomer("123");
var account = customer.FindAccount("321");
var document = account.FindDocument("some_code");
var file = document.GetFile();
One of my goals is to A) model the domain, and B) provide a very easy to understand API. Currently our BAL uses Service Locator, but I am experimenting on replacing that with proper IoC/DI and a container.
The deeper the API goes, and the more dependencies that are needed, the longer the constructors of the higher-level classes become, and they may no longer seem cohesive.
While DI can be squeezed into most application designs using half-measures, the unfortunate truth is that not all application designs are particularly DI friendly. Creating "smart entities" seems like magic when it comes to API design, but the fact of the matter is that at their core they violate the SRP (load and save are separate responsibilities regardless of how you slice it).
You basically have 4 options:
Find a design that is more conducive to DI and use its object model for your API
Find a design that is more conducive to DI and create a facade object model for your API
Use property injection to load your dependencies and give the end user control over the constructor
Use a service locator
I ran into a similar wall when trying to use CSLA in conjunction with DI and after many attempts, finally decided that it was CSLA that needed to go and to find a better design approach.
For a time, I tried using option 3. In this case you can create a facade wrapper around the DI container, and only expose its BuildUp() method through a static accessor. This prevents the use of the container as a service locator.
[Dependency]
public ISomeDependency SomeDependency { get; set; }
public Customer()
{
Ioc.BuildUp(this);
}
Some DI containers can inject properties using fluent configuration instead of attributes (so your business model doesn't need to reference the container), but this can make the DI configuration very complex. Another option is to build your own attributes.
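For example, with Unity (the container used in the question) the property could be wired up fluently instead of with the [Dependency] attribute; a sketch, assuming the Customer.SomeDependency property from the snippet above:

// Fluent property injection keeps the attribute (and the container reference)
// out of the business model.
container.RegisterType<Customer>(
    new InjectionProperty(nameof(Customer.SomeDependency)));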
Options 1 and 2 would be similar. You basically make every responsibility into its own class and separate your "entities" out into dumb data containers. An approach that works well for this is to use Command Query Segregation.
public class FindCustomer : IDataQuery<Customer>
{
public string CustomerNumber { get; set; }
}
public class FindCustomerHandler : IQueryHandler<FindCustomer, Customer>
{
private readonly DbContext context;
public FindCustomerHandler(DbContext context)
{
if (context == null)
throw new ArgumentNullException("context");
this.context = context;
}
public Customer Handle(FindCustomer query)
{
return (from customer in context.Customers
where customer.CustomerNumber == query.CustomerNumber
select new Customer
{
Id = customer.Id,
Name = customer.Name,
Addresses = customer.Addresses.Select(a =>
new Address
{
Id = a.Id,
Line1 = a.Line1,
Line2 = a.Line2,
Line3 = a.Line3
})
.OrderBy(x => x.Id)
}).FirstOrDefault();
}
}
Using option 1, the end user would create an instance of FindCustomer and call queryProcessor.Handle(findCustomer) (the queryProcessor is injected).
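A consumer under option 1 might then look roughly like this; IQueryProcessor here is an assumed dispatcher that locates and invokes the registered IQueryHandler<TQuery, TResult>:

public class CustomerController
{
    private readonly IQueryProcessor queryProcessor;

    public CustomerController(IQueryProcessor queryProcessor)
    {
        this.queryProcessor = queryProcessor;
    }

    public Customer Get(string customerNumber)
    {
        // The query object carries the parameters; the processor finds the handler.
        var query = new FindCustomer { CustomerNumber = customerNumber };
        return this.queryProcessor.Handle(query);
    }
}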
Using option 2, you would then need to create a wrapper API. You could use a fluent builder approach (more info here) to provide logical default dependencies, but allow the end user to call methods to supply their own.
var customer = new CustomerBuilder().Build(); // defaults
var customer = new CustomerBuilder(c =>
c.WithSomeDependency(new SomeDependency())).Build(); // overridden dependency
Unfortunately, the main issue with this is that control of the lifetime of objects is no longer up to the DI container, so dependencies like DbContext need special handling.
Another variant of this would be to make each entity into a humble object that internally builds up its own DI container using the other (loosely coupled) API objects. This is the recommended approach for legacy frameworks (such as web forms) that are difficult to use with DI.
Finally, there is making a static service locator that all of your API objects use to resolve their dependencies. While this best accomplishes the goal, it is something that should be considered a last resort. The biggest issue is that you lose the ability to quickly and easily understand what dependencies a class requires. So, you are either forced to create (and update) documentation indicating what the dependencies are to the end user, or end users will have to go digging through the source code to find out. Whether using a service locator is acceptable depends on your target audience and how frequently you expect them to need to customize dependencies beyond the defaults. If custom dependencies are a once-in-a-blue-moon thing, it may work, but if 25% of your user base needs to add custom dependencies, service locator is probably not the right approach.
The bottom line is that if maintainability is your main goal, then option 1 is the clear winner. But if you are married to this particular API design, you will need to choose one of the other options and live with the extra maintenance involved in supporting such an API.
References:
DI Friendly Library
Dependency Injection in .NET.

WebApi: Per Request Per Action DbSession using IoC, how?

Our existing database deployment has a single 'master' and a read-only replica. Using ASP.NET Web API 2 and an IoC container, I want to create controller actions whose attribute (or lack thereof) indicates which database connection is to be used for that request (see Controller and Services usage below)...
public MyController : ApiController
{
public MyController(IService1 service1, IService2 service2) { ... }
// this action just needs the read only connection
// so no special attribute is present
public Foo GetFoo(int id)
{
var foo = this.service1.GetFoo(id);
this.service2.GetSubFoo(foo);
return foo;
}
// This attribute indicates a readwrite db connection is needed
[ReadWriteNeeded]
public Foo PostFoo(Foo foo)
{
var newFoo = this.service1.CreateFoo(foo);
return newFoo;
}
}
public class Service1 : IService1
{
// The dbSession instance injected here will be
// based off of the action invoked for this request
private readonly IDbSession dbSession;
public Service1(IDbSession dbSession) { this.dbSession = dbSession; }
public Foo GetFoo(int id)
{
return this.dbSession.Query<Foo>(...);
}
public Foo CreateFoo(Foo newFoo)
{
this.dbSession.Insert<Foo>(newFoo);
return newFoo;
}
}
I know how to setup my IoC (structuremap or Autofac) to handle per request IDbSession instances.
However, I'm not sure how I would go about making the type of IDbSession instance created for the request key off the indicator attribute (or lack thereof) on the matching controller action. I assume I will need to create an ActionFilter that will look for the indicator attribute and, with that information, identify or create the correct type of IDbSession (read-only or read-write). But how do I make sure that the created IDbSession's lifecycle is managed by the container? You don't inject instances into the container at runtime, that would be silly. I know filters are created once at startup (making them singleton-ish), so I can't inject a value into the filter's ctor.
I thought about creating an IDbSessionFactory that would have 'CreateReadOnlyDbSession' and 'CreateReadWriteDbSession' methods, but don't I need the IoC container (and its framework) to create the instance, otherwise it can't manage its lifecycle (call Dispose when the HTTP request is complete)?
Thoughts?
PS During development, I have just been creating a ReadWrite connection for every action, but I really want to avoid that long-term. I could also split out the Services methods into separate read-only and read-write classes, but I'd like to avoid that as well as placing GetFoo and WriteFoo in two different Service implementations just seems a bit wonky.
UPDATE:
I started to use Steven's suggestion of making a DbSessionProxy. That worked, but I was really looking for a pure IoC solution. Having to use HttpContext and/or (in my case) Request.Properties just felt a bit dirty to me. So, if I had to get dirty, I might as well go all the way, right?
For IoC I used Structuremap and WebApi.Structuremap. The latter package sets up a nested container per Http Request plus it allows you to inject the current HttpRequestMessage into a Service (this is important). Here's what I did...
IoC Container Setup:
For<IDbSession>().Use(() => DbSession.ReadOnly()).Named("ReadOnly");
For<IDbSession>().Use(() => DbSession.ReadWrite()).Named("ReadWrite");
For<ISampleService>().Use<SampleService>();
DbAccessAttribute (ActionFilter):
public class DbAccessAttribute : ActionFilterAttribute
{
private readonly DbSessionType dbType;
public DbAccessAttribute(DbSessionType dbType)
{
this.dbType = dbType;
}
public override bool AllowMultiple => false;
public override void OnActionExecuting(HttpActionContext actionContext)
{
var container = (IContainer)actionContext.GetService<IContainer>();
var dbSession = this.dbType == DbSessionType.ReadOnly ?
container.GetInstance<IDbSession>("ReadOnly") :
container.GetInstance<IDbSession>("ReadWrite");
// if this is a ReadWrite HttpRequest start an Request long
// database transaction
if (this.dbType == DbSessionType.ReadWrite)
{
dbSession.Begin();
}
actionContext.Request.Properties["DbSession"] = dbSession;
}
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
var dbSession = (IDbSession)actionExecutedContext.Request.Properties["DbSession"];
if (this.dbType == DbSessionType.ReadWrite)
{
// if we are responding with 'success' commit otherwise rollback
if (actionExecutedContext.Response != null &&
actionExecutedContext.Response.IsSuccessStatusCode &&
actionExecutedContext.Exception == null)
{
dbSession.Commit();
}
else
{
dbSession.Rollback();
}
}
}
}
Updated Service1:
public class Service1 : IService1
{
private readonly HttpRequestMessage request;
public Service1(HttpRequestMessage request)
{
// WARNING: Never attempt to access request.Properties["DbSession"]
// in the ctor, it won't be set yet.
this.request = request;
}
private IDbSession Db => (IDbSession)request.Properties["DbSession"];
public Foo GetFoo(int id)
{
return this.Db.Query<Foo>(...);
}
public Foo CreateFoo(Foo newFoo)
{
this.Db.Insert<Foo>(newFoo);
return newFoo;
}
}
I assume I will need to create an ActionFilter that will look for the indicator attribute and with that information identify, or create, the correct type of IDbSession (read-only or read-write).
With your current design, I would say an ActionFilter is the way to go. I do think however that a different design would serve you better: one where business operations are modelled more explicitly behind a generic abstraction, since you can in that case place the attribute on the business operation, and when you explicitly separate read operations from write operations (CQS/CQRS), you might not even need this attribute at all. But I'll consider this out of scope of your question right now, so that means an ActionFilter is the way to go for you.
But how do I make sure that the created IDbSession's lifecycle is managed by the container?
The trick is to let the ActionFilter store information about which database to use in a request-global value. This allows you to create a proxy implementation for IDbSession that is able to switch between a readable and a writable implementation internally, based on this setting.
For instance:
public class ReadWriteSwitchableDbSessionProxy : IDbSession
{
private readonly IDbSession reader;
private readonly IDbSession writer;
public ReadWriteSwitchableDbSessionProxy(
IDbSession reader, IDbSession writer) { ... }
// Session operations
public IQueryable<T> Set<T>() => this.CurrentSession.Set<T>();
private IDbSession CurrentSession
{
get
{
var write = (bool)HttpContext.Current.Items["WritableSession"];
return write ? this.writer : this.reader;
}
}
}
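The attribute from the question then only has to flip that request-global flag; a sketch, assuming a default of false is set elsewhere (for example in a global message handler) for actions without the attribute:

public class ReadWriteNeededAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // The proxy above reads this flag to pick the writable session.
        HttpContext.Current.Items["WritableSession"] = true;
    }
}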

How and where to implement automapper in WPF application

I have a BusinessLayer, a DTO library, a DataService, and an EntityModel (where the EDMX sits). The DTO library is referenced by both the business and data layers. I am trying to implement AutoMapper in the data layer; I want to map entity objects to DTO objects and return DTOs from the DataService library.
Currently I am doing it this way:
public class DataService
{
private MapperConfiguration config;
public DataService()
{
IMapper _Mapper = config.CreateMapper();
}
public List<Dto.StudentDto> Get()
{
using(var context = new DbContext().GetContext())
{
var studentList = context.Students.ToList();
config = new MapperConfiguration(cfg => {
cfg.CreateMap<Db.Student, Dto.StudentDto>();
});
var returnDto = Mapper.Map<List<Db.Student>, List<Dto.StudentDto>>(studentList);
return returnDto;
}
}
}
How can I move all the mappings to one class, and how can AutoMapper initialize automatically when a call to the DataService is made?
Is it good practice to use AutoMapper in the data layer?
Yes.
How can I move all the mappings to one class and have AutoMapper initialize automatically when a call to the DataService is made?
You could just create a static class that creates the mappings once:
public static class MyMapper
{
private static bool _isInitialized;
public static void Initialize()
{
if (!_isInitialized)
{
Mapper.Initialize(cfg =>
{
cfg.CreateMap<Db.Student, Dto.StudentDto>();
});
_isInitialized = true;
}
}
}
Make sure that you use this class in your data service:
public class DataService
{
public DataService()
{
MyMapper.Initialize();
}
public Dto.StudentDto GetStudent(int id)
{
using (var context = new DbContext().GetContext())
{
var student = context.Students.FirstOrDefault(x => x.Id == id);
var returnDto = Mapper.Map<Dto.StudentDto>(student);
return returnDto;
}
}
}
Depending on how you actually host the DAL, you might be able to call the Initialize() method of your custom mapper class from the Main() method of an executable, or from somewhere other than the constructor of your DataService class.
Use AutoMapper.Mapper.CreateMap in OnAppInitialize. You can of course put the implementation in its own static class for better style.
There really is no more magic to this - you only have to register (CreateMap) the mappings one time.
initialize automatically when a call to the DataService is made?
You can of course register it in the constructor too.
Here you can take a look at another sample of how to use the registration in one or two of the many extended ways.
In the end AutoMapper should make your life easier, not harder. In my opinion the best way is to register everything at one point - when starting the application.
But you can also do it on demand, for example by separating each CreateMap into the constructor.
Either way - just make sure you call it once.
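One common way to keep all mappings in one place is an AutoMapper Profile that is registered once at startup; a sketch, where the profile name is illustrative:

public class StudentMappingProfile : Profile
{
    public StudentMappingProfile()
    {
        CreateMap<Db.Student, Dto.StudentDto>();
    }
}

// At application startup (e.g. Main or OnAppInitialize):
Mapper.Initialize(cfg => cfg.AddProfile<StudentMappingProfile>());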

How to implement FIND method of EF in Unit Test?

I have a Web API 2.0 project that I am unit testing. My controllers have a Unit of Work. The Unit of Work contains numerous Repositories for various DbSets. I have a Unity container in the Web API and I am using Moq in the test project. Within the various repositories, I use the Find method of Entity Framework to locate an entity based on its key. Additionally, I am using Entity Framework 6.0.
Here is a very general example of the Unit of Work:
public class UnitOfWork
{
private IUnityContainer _container;
public IUnityContainer Container
{
get
{
return _container ?? UnityConfig.GetConfiguredContainer();
}
}
private ApplicationDbContext _context;
public ApplicationDbContext Context
{
get { return _context ?? (_context = Container.Resolve<ApplicationDbContext>()); }
}
private GenericRepository<ExampleModel> _exampleModelRepository;
public GenericRepository<ExampleModel> ExampleModelRepository
{
get { return _exampleModelRepository ??
(_exampleModelRepository = Container.Resolve<GenericRepository<ExampleModel>>()); }
}
//Numerous other repositories and some additional methods for saving
}
The problem I am running into is that I use the Find method for some of my LINQ queries in the repositories. Based on this article, MSDN: Testing with your own test doubles (EF6 onwards), I have to create a TestDbSet<ExampleModel> to test the Find method. I was thinking about customizing the code to something like this:
namespace TestingDemo
{
class FindableTestDbSet<TEntity> : TestDbSet<TEntity>
where TEntity : BaseModel
{
public override TEntity Find(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefault(b => b.Id == id);
}
}
}
I figured I would have to customize my code so that TEntity is a type of some base class that has an Id property. That's my theory, but I'm not sure this is the best way to handle this.
So I have two questions. Is the approach listed above valid? If not, what would be a better approach for overriding the Find method in the DbSet with the SingleOrDefault method? Also, this approach only really works if there is only one primary key. What if my model has a compound key of different types? I would assume I would have to handle those individually. Okay, that was three questions?
To expand on my comment earlier, I'll start with my proposed solution, and then explain why.
Your problem is this: your repositories have a dependency on DbSet<T>. You are unable to test your repositories effectively because they depend on DbSet<T>.Find(int[]), so you have decided to substitute your own variant of DbSet<T> called TestDbSet<T>. This is unnecessary; DbSet<T> implements IDbSet<T>. Using Moq, we can very cleanly create a stub implementation of this interface that returns a hard coded value.
class MyRepository
{
private readonly IDbSet<MyType> dbSet;
public MyRepository(IDbSet<MyType> dbSet)
{
this.dbSet = dbSet;
}
public MyType FindEntity(int id)
{
return this.dbSet.Find(id);
}
}
By switching the dependency from DbSet<T> to IDbSet<T>, the test now looks like this:
public void MyRepository_FindEntity_ReturnsExpectedEntity()
{
var id = 5;
var expectedEntity = new MyType();
var dbSet = Mock.Of<IDbSet<MyType>>(set => set.Find(id) == expectedEntity);
var repository = new MyRepository(dbSet);
var result = repository.FindEntity(id);
Assert.AreSame(expectedEntity, result);
}
There - a clean test that doesn't expose any implementation details or deal with nasty mocking of concrete classes and lets you substitute out your own version of IDbSet<MyType>.
On a side note, if you find yourself testing DbContext - don't. If you have to do that, your DbContext is too far up the stack and it will hurt if you ever try and move away from Entity Framework. Create an interface that exposes the functionality you need from DbContext and use that instead.
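Such an interface can stay very small; a sketch, with an illustrative member list:

public interface IMyDataContext
{
    IDbSet<MyType> MyTypes { get; }
    int SaveChanges();
}

Your repositories then depend on IMyDataContext, and the real DbContext implements it.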
Note: I used Moq above. You can use any mocking framework, I just prefer Moq.
If your model has a compound key (or has the capability to have different types of keys), then things get a bit trickier. The way to solve that is to introduce your own interface. This interface should be consumed by your repositories, and the implementation should be an adapter to transform the key from your composite type into something that EF can deal with. You'd probably go with something like this:
interface IGenericDbSet<TKeyType, TObjectType>
{
TObjectType Find(TKeyType keyType);
}
This would then translate under the hood in an implementation to something like:
class GenericDbSet<TKeyType, TObjectType> : IGenericDbSet<TKeyType, TObjectType>
where TObjectType : class
{
private readonly IDbSet<TObjectType> dbset;
public GenericDbSet(IDbSet<TObjectType> dbset)
{
this.dbset = dbset;
}
public TObjectType Find(TKeyType key)
{
// TODO: Convert key into something a regular DbSet.Find can understand
return this.dbset.Find(key);
}
}
I realise this is an old question, but after coming up against this issue myself when mocking data for unit tests, I wrote this generic version of the 'Find' method that can be used in the TestDbSet implementation that is explained on MSDN.
Using this method means you don't have to create concrete types for each of your DbSets. One point to note is that this implementation works if your entities have primary keys in one of the following forms (I'm sure you could modify it to suit other forms easily):
'Id'
'ID'
'id'
classname +'id'
classname +'Id'
classname + 'ID'
public override T Find(params object[] keyValues)
{
ParameterExpression _ParamExp = Expression.Parameter(typeof(T), "a");
Expression _BodyExp = null;
Expression _Prop = null;
Expression _Cons = null;
PropertyInfo[] props = typeof(T).GetProperties();
var typeName = typeof(T).Name.ToLower() + "id";
var key = props.Where(p => (p.Name.ToLower().Equals("id")) || (p.Name.ToLower().Equals(typeName))).Single();
_Prop = Expression.Property(_ParamExp, key.Name);
_Cons = Expression.Constant(keyValues.Single(), key.PropertyType);
_BodyExp = Expression.Equal(_Prop, _Cons);
var _Lamba = Expression.Lambda<Func<T, Boolean>>(_BodyExp, new ParameterExpression[] { _ParamExp });
return this.SingleOrDefault(_Lamba);
}
Also, from a performance point of view it's not going to be as quick as the recommended method, but for my purposes it's fine.
So based on the example, I did the following to be able to Unit Test my UnitOfWork.
Had to make sure my UnitOfWork was implementing IApplicationDbContext. (Also, when I say UnitOfWork, my controller's UnitOfWork is of type IUnitOfWork.)
I left all of the DbSets in my IApplicationDbContext alone. I chose this pattern once I noticed IDbSet didn't include RemoveRange and FindAsync, which I use throughout my code. Also, with EF6, the DbSet can be set to virtual and this was recommended in MSDN, so that made sense.
I followed the Creating the in-memory test doubles example to create the TestDbContext and all the recommended classes (e.g. TestDbAsyncQueryProvider, TestDbAsyncEnumerable, TestDbAsyncEnumerator). Here is the code:
public class TestContext : DbContext, IApplicationDbContext
{
public TestContext()
{
this.ExampleModels= new TestBaseClassDbSet<ExampleModel>();
//New up the rest of the TestBaseClassDbSet that are need for testing
//Created an internal method to load the data
_loadDbSets();
}
public virtual DbSet<ExampleModel> ExampleModels{ get; set; }
//....List of remaining DbSets
//Local property to see if the save method was called
public int SaveChangesCount { get; private set; }
//Override the SaveChanges method for testing
public override int SaveChanges()
{
this.SaveChangesCount++;
return 1;
}
//...Override more of the DbContext methods (e.g. SaveChangesAsync)
private void _loadDbSets()
{
_loadExampleModels();
}
private void _loadExampleModels()
{
//ExpectedGlobals is a static class of the expected models
//that should be returned for some calls (e.g. GetById)
this.ExampleModels.Add(ExpectedGlobal.Expected_ExampleModel);
}
}
As I mentioned in my post, I needed to implement the FindAsync method, so I added a class called TestBaseClassDbSet, which is an alteration of the TestDbSet class in the example. Here is the modification:
//BaseModel is a class that has a key called Id that is of type string
public class TestBaseClassDbSet<TEntity> :
DbSet<TEntity>
, IQueryable, IEnumerable<TEntity>
, IDbAsyncEnumerable<TEntity>
where TEntity : BaseModel
{
//....copied all the code from the TestDbSet class that was provided
//Added the missing functions
public override TEntity Find(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefault(b => b.Id == id);
}
public override Task<TEntity> FindAsync(params object[] keyValues)
{
var id = (string)keyValues.Single();
return this.SingleOrDefaultAsync(b => b.Id == id);
}
}
Created an instance of TestContext and passed that into my Mock.
var context = new TestContext();
var userStore = new Mock<IUserStore>();
//ExpectedGlobal contains a static variable call Expected_User
//to be used as to populate the principle
// when mocking the HttpRequestContext
userStore
.Setup(m => m.FindByIdAsync(ExpectedGlobal.Expected_User.Id))
.Returns(Task.FromResult(ExpectedGlobal.Expected_User));
var mockUserManager = new Mock(userStore.Object);
var mockUnitOfWork =
new Mock(mockUserManager.Object, context)
{ CallBase = false };
I then inject the mockUnitOfWork into the controller, and voila. This implementation seems to be working perfect. That said, based on some feeds I have read online, it will probably be scrutinized by some developers, but I hope some others find this to be useful.
