Hi, I created an extension method to control the lifecycle of an EF context. My code is below:
public static Entities GetCentralRepositoryContext(this HttpContext httpcontext)
{
    // Use the HttpContext the method was called on, rather than HttpContext.Current,
    // so the extension actually honours its parameter.
    if (httpcontext.Items["context"] == null)
    {
        httpcontext.Items["context"] = new Entities();
    }
    return (Entities)httpcontext.Items["context"];
}
I've created many layers in my solution as projects and have started to think about IoC. The above code sits in my BL (business logic) layer project, but for it to work I need a reference to my DL (data access) layer, as that's where the Entities class resides. How can I remove the reference to my DL layer and inject it into my extension method? Is this even possible?
The approach you are taking has several problems. First of all, static methods tend to be a problem for loose coupling, and you'll notice this quickly when trying to unit test your code. Besides this, your business layer has a dependency on System.Web, which makes your business layer technology-specific. That will make it very hard to move part of the system to, for instance, a Windows service, and again makes unit testing almost impossible.
Instead of doing this, start injecting your Entities class into the constructor of all types that need it. At the beginning of each request you can build up the dependency graph of services in your application, specific to that request. At that point you can specify that the Entities instance should live for the duration of the web request.
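For example, here is a minimal sketch of what constructor injection looks like (CustomerManager, the Customers set, and the Id property are illustrative names, not part of your code):

using System.Linq;

public class CustomerManager
{
    private readonly Entities _entities;

    // The context is handed in from the outside; this class no longer
    // knows or cares how it was created or when it is disposed.
    public CustomerManager(Entities entities)
    {
        _entities = entities;
    }

    public Customer GetCustomer(int id)
    {
        return _entities.Customers.SingleOrDefault(c => c.Id == id);
    }
}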
This, however, will start to get cumbersome without a DI framework. Or at least, a DI framework will make it much easier to do.
When you start writing unit tests, you'll find it very hard when using the EF ObjectContext directly in your application. This article might give you some ideas on how to abstract the ObjectContext behind a testable interface.
My application is built around "Services" like this:
public class ProfileService : BaseService {
    private CommService _commService;

    public ProfileService(CommService commService) {
        _commService = commService;
    }

    public ApiResponseDto GetProfile(string profileUsername) {
        using (var db = new AppContext()) {
            return db.DoStuff(); // placeholder for the actual query
        }
    }
}
What I would like to do is push the db instantiation into BaseService, but I don't want to create a DbContext and incur that cost when I don't need it. So I'm thinking about doing something like this:
public class BaseService {
    private AppContext _db;

    // Lazily create the context on first use and cache it,
    // so repeated calls reuse the same instance.
    public AppContext db() {
        return _db ?? (_db = new AppContext());
    }
}
And then all of my methods would access the db via db().DoStuff().
I don't like the idea of parentheses everywhere, but I do like the idea of cleaning up my services' footprints.
My question is: if I create an instance of DbContext and don't use it, is there any cost? Or is the object just instantiated for future use? I hate to ask for opinions here as I know it's not allowed, but is this a step in the right direction for keeping things DRY?
Unit of Work Pattern
DbContext is effectively an implementation of the 'unit of work' pattern - once the DbContext is created, all changes made to its DbSets are then persisted in one go when you call SaveChanges.
So the further question you need to answer in order to properly answer your question is: What is the scope of the changes that make up your unit of work? In other words - what set of changes need to be made atomically - all succeed, or all fail?
A practical example of this: say you have an API endpoint that exposes an operation allowing the client to submit an order. The controller uses OrderService to submit the order, and then InventoryService to update the inventory associated with the items in the order. If each service has its own DbContext, you run the risk that OrderService will succeed in persisting the order submission, but InventoryService will fail to persist the inventory update.
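To make that concrete, here is a hedged sketch of the alternative being argued for (OrderService, InventoryService, AppContext, and the method names are all illustrative): when both services share one context, their changes form a single unit of work.

// Both services were constructed with the same AppContext instance,
// so their staged changes live or die together in one SaveChanges call.
using (var context = new AppContext())
{
    var orderService = new OrderService(context);
    var inventoryService = new InventoryService(context);

    // 'request' is the incoming order DTO from the controller
    var order = orderService.SubmitOrder(request);  // stages order rows
    inventoryService.UpdateInventory(order);        // stages inventory rows

    context.SaveChanges();  // both persist atomically, or neither does
}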
Dependency Injection
To combat this, a common pattern is to create a context per request: let your IoC container create and dispose the context, and make it available for injection into services during that request. This blog post gives a few options for DbContext management, and includes an example of configuring Ninject to do it.
What this means is your ctor will look like:
public ProfileService(CommService commService, AppContext context) {
    _commService = commService;
    _context = context;
}
And you can safely use the context there without having to worry about how it was created or where it came from.
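For reference, the per-request registration in Ninject (assuming the Ninject.Web.Common extension is installed) can be as small as this:

// The container builds one AppContext per web request and disposes it
// when the request ends.
kernel.Bind<AppContext>().ToSelf().InRequestScope();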
Mehdi's DbContextScopeFactory
However, my preferred approach for more complex applications is an excellent open-source library documented here: http://mehdi.me/ambient-dbcontext-in-ef6/. Injecting a DbContext per request will work fine for simpler applications, but as your application gets more involved (e.g. multiple contexts per application, multiple databases, etc.), the finer-grained control offered by his IDbContextScopeFactory is invaluable.
Edit to Add - Pros and Cons of Injection vs Construction
Following your comment asking for the pros and cons of the approach you proposed: generally, injection of dependencies (including the DbContext) is a far more flexible and powerful approach, and it can still achieve the goal of ensuring your devs don't have to be concerned with DbContext lifecycle management.
The pros and cons are generally the same for all instances of dependency injection, not just the DbContext, but here are a few concrete issues with constructing the context within the service (even in a base service):
Each service will have its own instance of the DbContext - this can lead to consistency problems where your unit of work spans tasks carried out by multiple services (see the example above).
It will be much more difficult to unit test your services, as they construct their own dependency. Injecting the DbContext means you can mock it in your unit tests and test functionality without hitting the database.
It introduces unmanaged state into your services. If you are using dependency injection, you want the IoC container to manage the lifecycle of services. When a service has no per-request dependencies, the IoC container will create a single instance of it for the whole application, which means the DbContext saved to the private member will be used for all requests and threads - this can be a big problem and should be avoided.
(Note: this is less of an issue if you are not using DI and are constructing new instances of the services within controllers, but then you are losing the benefits of DI at the controller level as well...)
All services are now locked to using the same DbContext type - what if, in the future, you decide to split your database and some services need to access a different DbContext? Would you create two different BaseServices? Or pass in configuration data to allow the base service to switch? DI would take care of that: you would just register the two different context classes, and the container would provide each service with the context it needs.
Are you returning IQueryables anywhere? If you are, you run the risk that the IQueryable will cause a database hit after the DbContext has gone out of scope - by then the context may already have been disposed and will not be available (see the sketch below).
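A hedged illustration of that last point (Order, CreatedOn, and the injected _context field are illustrative names): materialize the query while the context is still alive rather than handing the caller a live IQueryable.

public IList<Order> GetRecentOrders()
{
    // ToList() executes the query now, inside the context's lifetime;
    // returning the bare IQueryable would defer execution to the caller,
    // possibly after the per-request context has been disposed.
    return _context.Orders
                   .Where(o => o.CreatedOn > DateTime.UtcNow.AddDays(-7))
                   .ToList();
}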
From a dev perspective, I think nothing is simpler than the DI approach - simply specify the DbContext in your constructor, and let the DI container take care of the rest.
If you are using DbContext per request, you don't even have to create or dispose the context, and you can be confident that IQueryables will be resolvable at any point in the request call stack.
If you use Mehdi's approach, you do have to create a DbContextScope, but that approach is more appropriate if you are going down the repository-pattern path and want explicit control over the context scope.
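For a flavour of what that looks like, here is a rough sketch based on the examples in the linked article (check the article for the exact API; AppContext and DoStuff are placeholders):

// One explicit scope: code called inside it resolves the same ambient
// context, and SaveChanges on the scope commits the whole unit of work.
using (var dbContextScope = _dbContextScopeFactory.Create())
{
    var db = dbContextScope.DbContexts.Get<AppContext>();
    db.DoStuff();
    dbContextScope.SaveChanges();
}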
As you can see, I'm far less concerned about the computational cost of constructing a DbContext when it's not needed (as far as I can tell, the cost is fairly low until you actually use it to hit the db), and more concerned about an application architecture that permits unit testing and decouples you from your dependencies.
I have an application that will have several presentation layers (web, mobile, WPF, WCF, a Windows service doing background work, etc.), and we are using NHibernate to persist the domain objects. We will have repositories (a class library) to persist data, and a service layer that uses these repositories to persist according to business rules. My question is: we do not know how to implement transaction management in this service layer. We will probably use more than one repository in the same service-layer method, and we need to control the transaction at the service layer. I would like to implement something like this (by attributes):
public class DomainObjectService
{
    [Transactional]
    public bool CreateDomainObject(DomainObject domainObject, /* other parameters */)
    {
        foreach (var item in /* collection */)
        {
            _itemRepository.Save(item);
        }
        if (/* some condition */)
        {
            /* change the domainObject here */
        }
        _domainObjectRepository.Save(domainObject);
        return true;
    }
}
This Transactional attribute would control my transaction, committing on success and rolling back when we get errors. Is this possible? Or is there another solution to achieve this?
Thank you
What you have asked does not have a straightforward answer.
The behavior you wish to have sounds like you need to implement a unit of work pattern.
NHibernate's own ISession is in fact an implementation of a unit of work. I personally recommend implementing your own unit of work so that you have greater control over what your specific application considers a unit of work.
The use of attributes in a service-layer class really doesn't make a lot of sense to me personally. I have seen people create custom controller attributes in an MVC application that handle transactions, but I've never personally agreed with that kind of implementation.
You mentioned using more than one repository in the service layer. This is quite a common practice, but it also means that each of those repositories will need to operate within the same unit of work. If your application is using dependency injection, then one option is to have each repository accept an ISession in its constructor. Your dependency injection framework of choice can be set up to inject the same ISession into all of the repositories, and the setup can begin a new transaction every time a new ISession is created, as sketched below.
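As a hedged sketch of that setup (assuming Ninject with the Ninject.Web.Common extension; BuildSessionFactory() is a hypothetical helper standing in for your NHibernate configuration code):

// Every repository resolved during a single web request receives the same
// ISession; a transaction is opened as soon as the session is created.
kernel.Bind<ISessionFactory>()
      .ToMethod(ctx => BuildSessionFactory())  // hypothetical NHibernate bootstrap
      .InSingletonScope();

kernel.Bind<ISession>()
      .ToMethod(ctx =>
      {
          var session = ctx.Kernel.Get<ISessionFactory>().OpenSession();
          session.BeginTransaction();
          return session;
      })
      .InRequestScope();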
You also mentioned different presentation layers such as web, mobile, WPF, etc. How you deal with sessions and transactions in each of those application types can be quite different. That is why I always point people in the unit-of-work direction: each of those application types can have a completely different definition of what it considers a unit of work. For a web application, you would typically go with a new unit of work for each web request. For a WPF application, the unit of work could be per screen, or last until the user hits the save button, etc. Also, by implementing a unit of work, you can reuse the same implementation more easily across those different application types.
Again, this is not a question with a straightforward answer, but in general I make use of a custom unit of work and a dependency injection framework to make this problem much easier to deal with.
Here are some helpful links that you may wish to investigate:
http://nhibernate.info/doc/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.html
Correct use of the NHibernate Unit Of Work pattern and Ninject
Unit of work/repository managers for NHibernate?
In an application that uses the Repository and Service patterns, how do you make sure the service layer is always called, and not the repositories directly?
Example:
class OrderRepository
{
    void CreateOrder(Order o)
    ...
}

class OrderService
{
    void CreateOrder(Order o)
    {
        // make some business logic tests
        ...
        // call repository
        _orderRepository.CreateOrder(o);
    }
}
I see two issues:
A programmer may call the repository directly because they don't know the service exists (sometimes it's not as simple as in this example, where 1 service = 1 repository with the same method names; some applications are not well documented, or someone in a hurry can forget to check whether a corresponding service exists).
Something totally different: a long time ago, someone created views and controllers that use the order repository directly. At the time there was no need for business-logic checks or additional operations; only the order repository existed (because nothing more was needed). If, later on, additional operations are needed when creating an order, a service will be created, and the problem is that every controller making the old repository calls will need to be changed. Isn't the repository idea (and separating code into layers) supposed to make the parts independent of each other?
You can structure your solution so that all repositories and services are in their own respective projects, e.g. Repositories and Services.
The only project that should reference Repositories is Services. This way, other projects won't have access to the repositories. Of course, nothing stops a developer from adding a reference to the Repositories project from the controllers project, but hopefully at that point they'll ask themselves why it wasn't referenced in the first place.
Static analysis tools can help in this respect.
nDepend is a commercial tool that can be integrated into your build process and raise an error on such a condition (any non-service class calling a repository class directly).
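As a very rough illustration of what such a rule could look like in nDepend's CQLinq (syntax written from memory and not verified against the current nDepend docs, so treat every identifier here as an assumption):

// Warn when any method outside the Services namespace calls a repository.
warnif count > 0
from m in Application.Methods
where !m.ParentType.FullName.Contains(".Services.")
let repositoryCalls = m.MethodsCalled
    .Where(called => called.ParentType.NameLike("Repository$"))
where repositoryCalls.Any()
select new { m, repositoryCalls }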
I am new to IoC in general, and I'm struggling a little to understand whether what I am trying to do makes any sense. I have a Web Forms application in which I want to create one module to define some bindings for me. The bindings will be used to inject repositories into my business manager classes, allowing me to unit test the business managers. I would also like to use the container to inject the Entity Framework context into my repositories, so that they all share the same context per HTTP request. So here is what I am wondering:
I understand that I need to have the same kernel instance managing my object creation and lifetimes. For example, if I want a one-per-HTTP-request scenario, the kernel instance needs to be available for that period of time. What if I need a singleton? Then it has to be application-scoped somehow. So where exactly do I store the IKernel instance? It seems I might want to make it a static in my Global.asax; is that the right approach, and is thread safety a concern?
Since I am using Bind<> to define my bindings, how do I go about making that definition in the Web/UI layer when I shouldn't be referencing my data access layer from the UI? My references look like .Web --> .Business --> .DataAccess. It seems like I want to tell the kernel, "hey, manage my data access instances, but don't have a compile-time reference to them." A binding such as this:
//Any object requesting an instance of AdventureWorksEntities will get an instance per request
Bind<AdventureWorksEntities>().ToSelf().InRequestScope();
I feel like I might be approaching this incorrectly. Thank you.
Re part 1: have a look at the Ninject.Web extension - it keeps a kernel at application level. You can then manage other resources with shorter lifetimes within that too.
Also, have a look around here for questions and examples on EF and L2S DataContext management with regard to Ninject and DI in general (it's come up in the last few weeks).
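Re part 2: one hedged way to avoid the compile-time reference is to put the binding in a NinjectModule that lives in the DataAccess project and load it by assembly name from the composition root (the module and assembly names here are illustrative):

// In the .DataAccess project: the module knows the concrete types.
public class DataAccessModule : NinjectModule
{
    public override void Load()
    {
        Bind<AdventureWorksEntities>().ToSelf().InRequestScope();
    }
}

// In the Web project's composition root (e.g. Global.asax): load the module
// by file pattern, so .Web needs no compile-time reference to .DataAccess.
kernel.Load("MyApp.DataAccess.dll");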
UPDATE: This answer to another question from the same OP is far more concrete (there's a KernelContainer class with an .Inject(object) and a .Kernel).
It really depends on the complexity of your web app.
It sounds like you have a business and a data access layer; I would personally have an 'infrastructure' layer where I would store my DI repository and helper classes.
I have some static classes with extension methods which add 'business logic' to entities using the repository pattern.
Sometimes I need to create a new IRepository in these extension methods.
I'm currently working around it by accessing my Ninject kernel through the object I'm extending, but it's really ugly:
public static IEnumerable<ISomething> GetSomethings(this IEntity entity)
{
    using (var dataContext = entity.kernel.Get<IDataContext>())
    {
        return dataContext.Repository<ISomething>().ToList();
    }
}
I could also make a static constructor, accessing the Ninject kernel somehow from a factory - is there already infrastructure for that in Ninject 2?
Does anybody know a better solution? And does anybody have comments on this way of implementing business logic?
On the issue of extension methods and how they get their dependencies, you have two approaches:
Service location - have a global kernel and drop down to service location (which is different from dependency injection). The problem here, though, is that your entity (or its extensions) should not assume its context but should instead demand it explicitly.
Since you are in an extension method, have the caller pass in what you need, as sketched below.
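A minimal sketch of that second option (the names mirror the question's code; IDataContext and Repository<T> are taken from there):

// The dependency comes in as a parameter, so no global kernel is needed
// and the caller stays in control of the context's lifetime.
public static IEnumerable<ISomething> GetSomethings(
    this IEntity entity, IDataContext dataContext)
{
    return dataContext.Repository<ISomething>().ToList();
}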
As you've more or less guessed, having a global kernel that becomes a dumping ground is something that Ninject tries to dissuade you from. In general, the extension for whatever host you're using (e.g., MVC or WCF) will provide something if it's appropriate. For example, the WCF extension has http://github.com/ninject/ninject.extensions.wcf/blob/master/source/Ninject.Extensions.Wcf/NinjectServiceHost.cs
The larger issue here is that dependencies like this should probably not propagate down to the entity level - they should stay at the service level and be propagated from there (using DDD vocabulary).
You may find this answer of mine interesting, as it covers this ground a bit (more from a Ninject-techniques than an architectural-concepts perspective).