I am currently using a DbContext similar to this:
namespace Models
{
    public class ContextDB : DbContext
    {
        public DbSet<User> Users { get; set; }
        public DbSet<UserRole> UserRoles { get; set; }

        public ContextDB()
        {
        }
    }
}
I am then using the following line at the top of ALL my controllers that need access to the database. I'm also using it in my UserRepository class, which contains all methods relating to the user (such as getting the active user, checking what roles they have, etc.):
ContextDB _db = new ContextDB();
Thinking about this, there are occasions when one visitor can have multiple DbContexts active at once, for instance when visiting a controller that uses the UserRepository. That might not be the best of ideas.
When should I make a new DbContext? Alternatively, should I have one global context that is passed around and reused in all places? Would that cause a performance hit? Suggestions of alternative ways of doing this are also welcome.
I use a base controller that exposes a Database property that derived controllers can access.
public abstract class BaseController : Controller
{
    public BaseController()
    {
        Database = new DatabaseContext();
    }

    protected DatabaseContext Database { get; set; }

    protected override void Dispose(bool disposing)
    {
        Database.Dispose();
        base.Dispose(disposing);
    }
}
All of the controllers in my application derive from BaseController and are used like this:
public class UserController : BaseController
{
    [HttpGet]
    public ActionResult Index()
    {
        return View(Database.Users.OrderBy(p => p.Name).ToList());
    }
}
Now to answer your questions:
When should I make a new DbContext / should I have one global context
that I pass around?
The context should be created per request. Create the context, do what you need to do with it, then get rid of it. With the base class solution I use, you only have to worry about using the context.
Do not try and have a global context (this is not how web applications work).
Can I have one global Context that I reuse in all places?
No. If you keep a context around, it will keep tracking all the updates, additions, deletions, etc., and this will slow your application down and may even cause some pretty subtle bugs to appear.
You should probably choose to expose either your repository or your context to your controller, but not both. Having two contexts accessed from the same method is going to lead to bugs if they have different ideas about the current state of the application.
Personally, I prefer to expose DbContext directly as most repository examples I have seen simply end up as thin wrappers around DbContext anyway.
Does this cause a performance hit?
The first time a DbContext is created is pretty expensive, but once this has been done a lot of the information is cached, so subsequent instantiations are a lot quicker. You are more likely to see performance problems from keeping a context around than from instantiating one each time you need access to your database.
How is everyone else doing this?
It depends.
Some people prefer to use a dependency injection framework to pass a concrete instance of their context to their controller when it is created. Both options are fine. Mine is more suitable for a small scale application where you know the specific database being used isn't going to change.
Some may argue that you can't know this, and that is why the dependency injection method is better, as it makes your application more resilient to change. My opinion is that it probably won't change (SQL Server and Entity Framework are hardly obscure) and that my time is best spent writing the code that is specific to my application.
I'll try to answer from my own experience.
1. When should I make a new DbContext / should I have one global context that I pass around?
The context should be injected by dependency injection and should not be instantiated by yourself. Best practice is to have it created as a scoped service by the dependency injection container. (See my answer to question 4.)
Please also consider using a properly layered application structure like Controller > BusinessLogic > Repository. In that case it would not be your controller that receives the db-context, but the repository instead. Injecting or instantiating a db-context in a controller tells me that your application architecture mixes many responsibilities in one place, which I cannot recommend under any circumstances.
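As a sketch of that layering (the type and property names here are illustrative, not taken from the question), the repository receives the context and the controller only ever sees the repository:

```csharp
// Hypothetical sketch: the db-context lives in the repository layer only.
public class UserRepository : IUserRepository
{
    private readonly UserDbContext _context; // scoped instance, injected by the container

    public UserRepository(UserDbContext context)
    {
        _context = context;
    }

    public List<User> GetActiveUsers()
    {
        // IsActive is an assumed property for illustration purposes.
        return _context.Users.Where(u => u.IsActive).ToList();
    }
}
```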
2. Can I have one global context that I reuse in all places?
Yes, you can, but the question should be "Should I have..." -> No. The context is meant to be used per request to change your repository, and then it's thrown away again.
3. Does this cause a performance hit?
Yes, it does, because the DbContext is simply not made to be global. It stores all the data that has been entered into it or queried through it until it is destroyed. That means a global context will get larger and larger, and operations on it will get slower and slower, until you get an out-of-memory exception or you die of old age because everything slowed to a crawl.
You will also get exceptions and many errors when multiple threads access the same context at once.
4. How is everyone else doing this?
DBContext injected through dependency-injection by a factory; scoped:
services.AddDbContext<UserDbContext>(o => o.UseSqlServer(this.settings.DatabaseOptions.UserDBConnectionString));
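The scoped context registered above is then consumed through constructor injection; the container creates one instance per request and disposes it when the request ends. A minimal sketch (the controller name is illustrative):

```csharp
public class UsersController : Controller
{
    private readonly UserDbContext _context;

    // The same scoped UserDbContext instance is shared by everything
    // resolved within this request, and disposed by the container afterwards.
    public UsersController(UserDbContext context)
    {
        _context = context;
    }
}
```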
I hope my answers were of help.
From a performance point of view, a DbContext should be created only when it is actually needed. For example, when you need a list of users inside your business layer, you create an instance of your DbContext and dispose of it immediately once your work is done:
using (var context = new MyDbContext())
{
    var users = context.Users.Where(x => x.ClassId == 4).ToList();
}
The context instance will be disposed after leaving the using block.
But what would happen if you did not dispose it immediately?
A DbContext is essentially a cache: the more queries you make, the more memory it occupies.
This becomes much more noticeable when concurrent requests flood your application; in that case, every millisecond you occupy a memory block matters, let alone a second.
The longer you postpone disposing unnecessary objects, the closer your application is to crashing!
Of course in some cases you need to preserve your DbContext instance and use it in another part of your code but in the same Request Context.
I refer you to the following link to get more info regarding managing DbContext:
dbcontext Scope
You should dispose the context immediately after each Save() operation. Otherwise each subsequent Save will take longer. I had a project that created and saved complex database entities in a cycle. To my surprise, the operation became three times faster after I moved
using (var ctx = new MyContext()){...}
inside the cycle.
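To illustrate, assuming a loop that saves batches of entities (the type names are placeholders, not from the project described), moving the using inside the loop keeps each context's change tracker small:

```csharp
foreach (var batch in batches)
{
    // A fresh context per iteration: the change tracker starts empty,
    // so SaveChanges() does not slow down as the loop progresses.
    using (var ctx = new MyContext())
    {
        ctx.Entities.AddRange(batch);
        ctx.SaveChanges();
    } // tracked entries are released here instead of accumulating
}
```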
Right now I am trying this approach, which avoids instantiating the context when you call actions that don't use it.
public abstract class BaseController : Controller
{
    public BaseController() { }

    private DatabaseContext _database;

    protected DatabaseContext Database
    {
        get
        {
            if (_database == null)
                _database = new DatabaseContext();

            return _database;
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (_database != null)
            _database.Dispose();

        base.Dispose(disposing);
    }
}
This is obviously an older question, but if you're using DI you can do something like this and scope all your objects to the lifetime of the request:
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var context = IoC.CurrentNestedContainer.GetInstance<DatabaseContext>();
        context.BeginTransaction();
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionContext)
    {
        var context = IoC.CurrentNestedContainer.GetInstance<DatabaseContext>();
        context.CloseTransaction(actionContext.Exception);
    }
}
I am building a WPF application (a storage system) that will be running for a very long time, and therefore I am interested in which option I should go with when using Entity Framework with the Repository pattern.
What I did, is that I have separated my Data Access Layer (DAL) into a separate class library referenced by the main project. For the technologies, I have decided to use Entity Framework 6 & SQLite database (plus SQLite plugins for EF).
In the main project, except for other things (such as Prism) I am also using Unity for IoC while working with my ViewModels.
This is how I am registering my repository in the Bootstrapper.cs class:
Container.RegisterType<IUserRepository, UserRepository>(new TransientLifetimeManager());
The reason I decided to use TransientLifetimeManager was to make sure that this class is re-created every time a ViewModel is created, thus ensuring that fresh data is fetched from the database.
Here is an (oversimplified) repository with only two methods:
public interface IUserRepository
{
    UserLevel TryLoginUser(User user);
    string RegisterNewUser(User user);
    void Dispose();
}
With a sample implementation:
public class UserRepository : IUserRepository
{
    public enum UserLevel { Regular, Supervisor, Admin, Invalid };

    private readonly DataContext _context;

    public UserRepository()
    {
        _context = new DataContext();
    }

    public UserLevel TryLoginUser(User user)
    {
        return UserLevel.Invalid;
    }

    public string RegisterNewUser(User user)
    {
        return null;
    }
}
Users will be switching between two (or three) main windows throughout the lifetime of the application. To make sure that the data Entity Framework holds is in sync with the database, I decided to initialize a new DataContext in the constructor of the repository itself (which reloads the data).
When I explored memory usage while navigating back and forth between my windows (and calling various repositories), I noticed that memory usage keeps going up, which means that, clearly, something is wrong with my method of initializing and injecting repositories.
Finally, in regard to this topic, I have also tried to register my DataContext.cs (which is very simple and just inherits from DbContext) so that it could be injected into the repositories; however, I had problems even registering it (since it lacks an interface definition, Unity didn't let me register it).
I would highly appreciate any help or recommendations in this matter.
Say I have a controller like below:
public class MyController : Controller
{
    private MyDBContext db = new MyDBContext();

    public ActionResult Index()
    {
        return View(db.Items.ToList());
    }
    ...
Typically when I need to make EF calls I will instantiate the DBContext in the function I am using it in and wrap it in a using statement like so:
public ActionResult Index()
{
    using (MyDBContext db = new MyDBContext())
    {
        return View(db.Items.ToList());
    }
}
I found the first example on the www.asp.net website, which seems like a reputable source (right?), but I'm concerned about the context not being manually disposed of after each use.
Is having a context that is defined outside of the function scope without a using statement a bad practice?
Not if you have something like this:
protected override void Dispose(bool disposing)
{
    db.Dispose();
    base.Dispose(disposing);
}
Since Controller is disposable, it's OK.
But generally I try to have a better separation between my controllers and my model.
The first way is the "recommended" way because the context will be disposed when the controller is disposed. Since the controller survives the life of the request, you're ensured that your context will hang around the entire time as well. Using using with contexts is dangerous as the context is disposed at a different point in the request and could result in problems if it's accessed after it's been disposed. You're probably okay here since the return is inside the using block, but assuming you did something like the following, instead:
List<Item> items;
using (MyDBContext db = new MyDBContext())
{
    items = db.Items.ToList();
}
return View(items);
You'd be in a world of hurt the first time you accessed a navigation property that happened to be lazy-loaded. While you didn't make that mistake in your code, it's far too easy to make in general. If you avoid using blocks altogether with your contexts, then you're always good.
That said, the best way is to actually use dependency injection. For all intents and purposes, your context should be treated as a singleton within each request; you don't want multiple instances floating around, as that's a recipe for disaster. Using a DI container is a good way to ensure you achieve that no matter where and in how many different ways your context is used. Which DI container to use is a highly personal choice. I prefer Ninject, but there are quite a few other choices that may work better for your personal style. Regardless of which you go with, there should be some sort of option for "request scope". That is what you'll want to use with your context, as it ensures that there's only one instance per request, but every request gets its own instance.
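With Ninject, for example, the request-scoped binding looks roughly like this (this sketch assumes the Ninject.Web.Common package, which provides InRequestScope):

```csharp
// In the Ninject module / bootstrapper:
kernel.Bind<MyDBContext>().ToSelf().InRequestScope();

// The controller then simply declares the dependency:
public class MyController : Controller
{
    private readonly MyDBContext db;

    public MyController(MyDBContext db)
    {
        this.db = db; // one instance per request, disposed by Ninject
    }
}
```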
I've been working with the Entity Framework for a little bit now, and I've come across several scenarios where two contexts would try to access the same entity, etc., so I'm wondering whether I'm opening and closing my DbContexts in the best way.
Currently I basically open a DbContext in each of my controllers the way the basic MVC app was set up originally, which means I have a private DbContext field on the controller, and I override the controller's Dispose method to call Dispose on the context.
However I also sometimes make queries against the db in some of my other classes, which can get called from within the controller, which also has a context open.
Is there a way to access an open context without an explicit handler? It wouldn't really make sense for me to pass around a DbContext reference through a dozen different methods.
Using Dependency Injection
As others have said and will probably reiterate, the 'right way' is often considered to be dependency injection.
In my latest project, now almost done, DI has been so effortless that I'm doing it myself (rather than using an injector). One major factor in that has been adhering fairly strictly to this structure:
WebProject
| |
| DataServices
| | |
ViewModels EntityModels
Access to all data services during one unit of work occurs through a single DataServiceFactory instance, which requires an instance of MyDbContext. Another factor has been an entirely RESTful application design; it means I don't have to intersperse persistence functionality throughout my code.
Without Dependency Injection
That said, maybe DI isn't right for you on this project. Maybe:
you don't plan to write unit tests
you need more time to understand DI
your project structure already has EF deeply integrated
In ASP.NET MVC, the unit of work often coincides entirely with the request lifetime, i.e. HttpContext.Current. As a result, you can lazily instantiate a repository 'singleton' per request instead of using DI. Here is a classic singleton pattern with the current context as the backing store, holding your DbContext:
public class RepositoryProxy {
    private static HttpContext Ctx { get { return HttpContext.Current; } }
    private static Guid repoGuid = typeof(MyDbContext).GUID;

    public static MyDbContext Context {
        get {
            MyDbContext repo = (MyDbContext)Ctx.Items[repoGuid];
            if (repo == null) {
                repo = new MyDbContext();
                Ctx.Items[repoGuid] = repo;
            }
            return repo;
        }
    }

    public static void SaveIfContext() {
        MyDbContext repo = (MyDbContext)Ctx.Items[repoGuid];
        if (repo != null) repo.SaveChanges();
    }
}
You can SaveChanges automatically too, if you are feeling especially lazy (you'll still need to call it manually to inspect side-effects, of course, like retrieving the id for a new item):
public abstract class ExtendedController : Controller {
    protected MyDbContext Context {
        get { return RepositoryProxy.Context; }
    }

    protected override void OnActionExecuted(ActionExecutedContext filterContext) {
        RepositoryProxy.SaveIfContext();
        base.OnActionExecuted(filterContext);
    }
}
The "best" way to handle DbContext instances is to make it a parameter on each method that needs it (or, if an entire class needs it, a parameter to the constructor).
This is a basic part of IoC (Inversion of Control) which allows the caller to specify dependencies for a method thus allowing the caller to "control" the behavior of the called method.
You can then add a Dependency Injection framework as well. It can be configured to provide a single, appropriately scoped instance of your DbContext and inject that instance into any classes that need it.
Instead of passing an instance of DbContext, I would suggest you pass an instance of some kind of factory class that will create a DbContext instance for you.
For most scenarios you can just create an interface that your DbContext will implement, something like this:
public interface IDbContext
{
    IDbSet<TEntity> Set<TEntity>() where TEntity : class;
    DbSet Set(Type entityType);
    DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class;
    int SaveChanges();
}
and then a factory to return your instance:
public interface IContextFactory
{
    IDbContext Retrieve();
}
The factory can easily be mocked to return whatever implementation you want, which is a good approach, especially if you intend to test the application. Your DbContext will just implement the IDbContext interface with the necessary methods and properties. This approach assumes that, to tell EF about your database tables, you use the OnModelCreating(DbModelBuilder modelBuilder) method with modelBuilder.Configurations.Add(new EntityTypeConfigurationOfSomeType()); instead of having properties of type DbSet<T> all around your context.
To abstract away from Entity Framework completely, you can create yet another layer of abstraction, for example with the UnitOfWork or Repository pattern, but I guess the factory approach covers your concerns for now.
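A minimal concrete factory might look like this (MyDbContext standing in for whatever concrete context implements IDbContext in your project):

```csharp
public class ContextFactory : IContextFactory
{
    public IDbContext Retrieve()
    {
        // Hand out a fresh context; the caller owns its lifetime
        // and is responsible for disposing it.
        return new MyDbContext();
    }
}
```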
I have an UnitOfWork attribute, something like this:
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public IDataContext DataContext { get; set; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Controller.ViewData.ModelState.IsValid)
        {
            DataContext.SubmitChanges();
        }

        base.OnActionExecuted(filterContext);
    }
}
As you can see, it has a DataContext property, which is injected by Castle.Windsor. DataContext has a lifestyle of PerWebRequest, meaning a single instance is reused within each request.
The thing is, from time to time I get a "DataContext is disposed" exception in this attribute, and I remember that ASP.NET MVC 3 tries to cache action filters somehow; may that be causing the problem?
If so, how do I solve the issue? By not using any properties and using a ServiceLocator inside the method instead?
Is it possible to tell ASP.NET MVC to not cache filter if it does cache it?
I would strongly advise against using such a construct. For a couple of reasons:
It is not the responsibility of the controller (or an attribute decorating the controller) to commit the data context.
This would lead to lots of duplicated code (you'll have to decorate lots of methods with this attribute).
At that point in the execution (in the OnActionExecuted method), you cannot know whether it is actually safe to commit the data.
Especially the third point should have drawn your attention. The mere fact that the model is valid, doesn't mean that it is okay to submit the changes of the data context. Look at this example:
[UnitOfWorkAttribute]
public View MoveCustomer(int customerId, Address address)
{
    try
    {
        this.customerService.MoveCustomer(customerId, address);
    }
    catch { }

    return View();
}
Of course this example is a bit naive. You would hardly ever swallow each and every exception, that would just be plain wrong. But what it does show is that it is very well possible for the action method to finish successfully, when the data should not be saved.
But besides this, is committing the transaction really a concern of MVC? And if you decide it is, do you really want to decorate all action methods with this attribute? Wouldn't it be nicer if you could implement this without having to do anything at the controller level? Because which attributes are you going to add after this one? Authorization attributes? Logging attributes? Tracing attributes? Where does it stop?
What you can try instead is to model all business operations that need to run in a transaction in a way that allows you to add this behavior dynamically, without needing to change any existing code or add new attributes all over the place. A way to do this is to define an interface for these business operations. For instance:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
Using this interface, your controller would look like this:
private readonly ICommandHandler<MoveCustomerCommand> handler;

// constructor
public CustomerController(
    ICommandHandler<MoveCustomerCommand> handler)
{
    this.handler = handler;
}

public View MoveCustomer(int customerId, Address address)
{
    var command = new MoveCustomerCommand
    {
        CustomerId = customerId,
        Address = address,
    };

    this.handler.Handle(command);
    return View();
}
For each business operation in the system, you define a class (a DTO and Parameter Object); in the example, the MoveCustomerCommand class. This class contains merely the data. The implementation is defined in a class that implements ICommandHandler<MoveCustomerCommand>. For instance:
public class MoveCustomerCommandHandler
    : ICommandHandler<MoveCustomerCommand>
{
    private readonly IDataContext context;

    public MoveCustomerCommandHandler(IDataContext context)
    {
        this.context = context;
    }

    public void Handle(MoveCustomerCommand command)
    {
        // TODO: Put logic here.
    }
}
This looks like an awful lot of extra useless code, but this is actually really useful (and if you look closely, it isn't really that much extra code anyway).
Interesting about this is that you can now define one single decorator that handles the transactions for all command handlers in the system:
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly IDataContext context;
    private readonly ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(IDataContext context,
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.context = context;
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        this.decoratedHandler.Handle(command);
        this.context.SubmitChanges();
    }
}
This is not much more code than your UnitOfWorkAttribute, but the difference is that this handler can be wrapped around any implementation and injected into any controller, without the controller knowing about it. And directly after executing a command is really the only safe place where you actually know whether you can save the changes or not.
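Wiring the decorator up by hand makes the mechanics visible (a DI container would normally do this composition for you; context here is assumed to be an IDataContext instance):

```csharp
ICommandHandler<MoveCustomerCommand> handler =
    new TransactionalCommandHandlerDecorator<MoveCustomerCommand>(
        context,
        new MoveCustomerCommandHandler(context));

// Runs the business logic first; SubmitChanges() is called only
// when the inner handler completes without throwing.
handler.Handle(new MoveCustomerCommand
{
    CustomerId = customerId,
    Address = address,
});
```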
You can find more information about this way of designing your application in this article: Meanwhile... on the command side of my architecture
Today I half-accidentally found the root cause of the problem.
As can be seen from the question, the filter has a property that is injected by Castle.Windsor. Those who use ASP.NET MVC know that for this to work you need an IFilterProvider implementation that can use the IoC container for dependency injection.
So I started to look at its implementation and noticed that it derives from FilterAttributeFilterProvider, which has the constructor:
public FilterAttributeFilterProvider(bool cacheAttributeInstances)
So you can control whether or not your attribute instances are cached.
After disabling this cache, the site blew up with NullReferenceExceptions, which led me to one more thing that had been overlooked and was causing undesired side effects.
The thing was that the original filter provider was not removed after we added the Castle.Windsor filter provider. When caching was enabled, the IoC filter provider created the instances and the default filter provider reused them, so all dependency properties were filled with values; that was not clearly noticeable, except for the fact that filters were running twice. After caching was disabled, the default provider had to create instances by itself, so the dependency properties were left unfilled, which is why the NullReferenceExceptions occurred.
Okay, I'm going to try and go short and straight to the point. I am trying to develop a loosely-coupled, multi-tier service application that is testable and supports dependency injection. Here's what I have:
At the service layer, I have a StartSession method that accepts some key data required to, well, start the session. My service class is a facade and delegates to an instance of the ISessionManager interface that is injected into the service class constructor.
I am using the Repository pattern in the data access layer. So I have an ISessionRepository that my domain objects will work with and that I implement using the data access technology du jour. ISessionRepository has methods for GetById, Add and Update.
Since my service class is just a facade, I think it is safe to say that my ISessionManager implementation is the actual service class in my architecture. This class coordinates the operations with my Session domain/business object. And here's where the shell game and the problem come in.
In my SessionManager class (the concrete ISessionManager), here's how I have StartSession implemented:
public ISession StartSession(object sessionStartInfo)
{
    var session = Session.GetSession(sessionStartInfo);

    if (session == null)
        session = Session.NewSession(sessionStartInfo);

    return session;
}
I have three problems with this code:
First, obviously I could move this logic into a StartSession method in my Session class but I think that would defeat the purpose of the SessionManager class which then simply becomes a second facade (or is it still considered a coordinator?). Alas, the shell game.
Second, SessionManager has a tightly coupled dependence upon the Session class. I considered creating an ISessionFactory/SessionFactory that could be injected into SessionManager, but then I'd have the same tight coupling inside the factory. But maybe that's okay?
Finally, it seems to me that true DI and factory methods don't mix. After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us. And true DI says that we should not reference the container directly. So, how then do I get the concrete ISessionRepository class injected into my Session domain object? Do I have it injected into the factory class then manually pass it into Session when constructing a new instance (using "new")?
Keep in mind that this is also only one operation and I also need to perform other tasks such as saving a session, listing sessions based on various criteria plus work with other domain objects in my solution. Plus, the Session object also encapsulates business logic for authorization, validation, etc. so (I think) it needs to be there.
The key to what I am looking to accomplish is not only functional but testable. I am using DI to break dependencies so we can easily implement unit tests using mocks as well as give us the ability to make changes to the concrete implementations without requiring changes in multiple areas.
Can you help me wrap my head around the best practices for such a design and how I can best achieve my goals for a solid SOA, DDD and TDD solution?
UPDATE
I was asked to provide some additional code, so as succinctly as possible:
[ServiceContract()]
public class SessionService : ISessionService
{
    public SessionService(ISessionManager manager) { Manager = manager; }

    public ISessionManager Manager { get; private set; }

    [OperationContract()]
    public SessionContract StartSession(SessionCriteriaContract criteria)
    {
        var session = Manager.StartSession(Mapper.Map<SessionCriteria>(criteria));
        return Mapper.Map<SessionContract>(session);
    }
}
public class SessionManager : ISessionManager
{
    public SessionManager() { }

    public ISession StartSession(SessionCriteria criteria)
    {
        var session = Session.GetSession(criteria);

        if (session == null)
            session = Session.NewSession(criteria);

        return session;
    }
}
public class Session : ISession
{
    public Session(ISessionRepository repository, IValidator<ISession> validator)
    {
        Repository = repository;
        Validator = validator;
    }

    // ISession Properties

    public static ISession GetSession(SessionCriteria criteria)
    {
        return Repository.FindOne(criteria);
    }

    public static ISession NewSession(SessionCriteria criteria)
    {
        var session = ????;
        // Set properties based on criteria object
        return session;
    }

    public Boolean Save()
    {
        if (!Validator.IsValid(this))
            return false;

        return Repository.Save(this);
    }
}
And, obviously, there is an ISessionRepository interface and concrete XyzSessionRepository class that I don't think needs to be shown.
2nd UPDATE
I added the IValidator dependency to the Session domain object to illustrate that there are other components in use.
The posted code clarifies a lot. It looks to me like the session class holds state (with behavior), and the service and manager classes strictly perform actions/behavior.
You might look at removing the Repository dependency from the Session and adding it to the SessionManager. So instead of the Session calling Repository.Save(this), your Manager class would have a Save(ISession session) method that would then call Repository.Save(session). This would mean the session itself would not need to be managed by the container, and it would be perfectly reasonable to create it via "new Session()" (or using a factory that does the same). I think the fact that the Get- and New- methods on the Session are static is a clue/smell that they may not belong on that class (does this code compile? It seems like you are using an instance property within a static method).
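In code, the suggested move might look like this sketch (an assumption about your types, not a drop-in implementation):

```csharp
public class SessionManager : ISessionManager
{
    private readonly ISessionRepository repository;

    public SessionManager(ISessionRepository repository)
    {
        this.repository = repository;
    }

    // Persistence lives here now, so Session itself no longer
    // needs a Repository dependency and can be new'ed up freely.
    public bool Save(ISession session)
    {
        return repository.Save(session);
    }
}
```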
Finally, it seems to me that true DI and factory methods don't mix. After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us. And true DI says that we should not reference the container directly. So, how then do I get the concrete ISessionRepository class injected into my Session domain object? Do I have it injected into the factory class then manually pass it into Session when constructing a new instance (using "new")?
This question gets asked a LOT when it comes to managing classes that mix state and service via an IOC container. As soon as you use an abstract factory that uses "new", you lose the benefits of a DI framework from that class downward in the object graph. You can get away from this by completely separating state and service, and having only your classes that provide service/behavior managed by the container. This leads to passing all data through method calls (aka functional programming). Some containers (Windsor for one) also provide a solution to this very problem (in Windsor it's called the Factory Facility).
Edit: wanted to add that functional programming also leads to what Fowler would call "anemic domain models". This is generally considered a bad thing in DDD, so you might have to weigh that against the advice I posted above.
Just some comments...
After all, we want to avoid "new"ing an instance of an object and let the container return the instance to us.
This isn't 100% true. You want to avoid "new"ing only across so-called seams, which are basically the lines between layers. If you try to abstract persistence with repositories, that's a seam; if you try to decouple the domain model from the UI (the classic one: a System.Web reference), there's a seam. If you are within the same layer, decoupling one implementation from another sometimes makes little sense and just adds complexity (useless abstraction, IoC container configuration, etc.). Another (obvious) reason to abstract something is when you already need polymorphism right now.
And true DI says that we should not reference the container directly.
This is true, but another concept you might be missing is the so-called composition root (it's good for things to have a name :). This concept resolves the confusion about when to use a service locator. The idea is simple: you should compose your dependency graph as early as possible, and there should be only one place where you actually reference the IoC container.
E.g., in an ASP.NET MVC application, the common point for composition is the ControllerFactory.
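As a sketch, a composition root in ASP.NET MVC can be a custom controller factory that is the single place referencing the container (the container interface and Resolve call are placeholders for your IoC library of choice):

```csharp
public class IocControllerFactory : DefaultControllerFactory
{
    private readonly IYourContainer container; // placeholder abstraction over your IoC container

    public IocControllerFactory(IYourContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(
        RequestContext requestContext, Type controllerType)
    {
        // The only spot in the application that talks to the container.
        return controllerType == null
            ? base.GetControllerInstance(requestContext, controllerType)
            : (IController)container.Resolve(controllerType);
    }
}
```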
Do I have it injected into the factory class then manually pass it into Session when constructing a new instance
As far as I've seen, factories are generally good for two things:
1. Creating complex objects (the Builder pattern helps significantly)
2. Resolving violations of the open/closed and single responsibility principles
public void PurchaseProduct(Product product)
{
    if (product.HasSomething) order.Apply(new FirstDiscountPolicy());
    if (product.HasSomethingElse) order.Apply(new SecondDiscountPolicy());
}
becomes:
public void PurchaseProduct(Product product)
{
    order.Apply(DiscountPolicyFactory.Create(product));
}
That way, the class that holds PurchaseProduct won't need to be modified if a new discount policy appears, and PurchaseProduct becomes responsible only for purchasing the product instead of also knowing which discount to apply.
P.S. If you are interested in DI, you should read "Dependency Injection in .NET" by Mark Seemann.
I thought I'd post the approach I ended up following while giving due credit above.
After reading some additional articles on DDD, I finally came across the observation that our domain objects should not be responsible for their own creation or persistence, as well as the notion that it is okay to "new" an instance of a domain object from within the Domain Layer (as Arnis alluded to).
So, I retained my SessionManager class but renamed it SessionService so it would be clearer that it is a Domain Service (not to be confused with the SessionService in the facade layer). It is now implemented like:
public class SessionService : ISessionService
{
    public SessionService(ISessionFactory factory, ISessionRepository repository)
    {
        Factory = factory;
        Repository = repository;
    }

    public ISessionFactory Factory { get; private set; }
    public ISessionRepository Repository { get; private set; }

    public ISession StartSession(SessionCriteria criteria)
    {
        var session = Repository.GetSession(criteria);

        if (session == null)
            session = Factory.CreateSession(criteria);
        else if (!session.CanResume)
            throw new InvalidOperationException("Cannot resume the session.");

        return session;
    }
}
The Session class is now more of a true domain object only concerned with the state and logic required when working with the Session, such as the CanResume property shown above and validation logic.
The SessionFactory class is responsible for creating new instances and allows me to still inject the ISessionValidator instance provided by the container without directly referencing the container itself:
public class SessionFactory : ISessionFactory
{
    public SessionFactory(ISessionValidator validator)
    {
        Validator = validator;
    }

    public ISessionValidator Validator { get; private set; }

    public Session CreateSession(SessionCriteria criteria)
    {
        var session = new Session(Validator);
        // Map properties
        return session;
    }
}
Unless someone can point out a flaw in my approach, I'm pretty comfortable that this is consistent with DDD and gives me full support for unit testing, etc., which is everything I was after.