https://github.com/int6/CoiniumServ/blob/develop/src/CoiniumServ/Pools/Pool.cs
This is my Pool class. I want all of its dependencies to stop working and dispose themselves when I dispose the class.
I tried implementing IDisposable on every dependency and disposing them, but it doesn't work.
I also ran the work on a dedicated thread and tried to destroy it with Thread.Abort, but that doesn't work either.
Is there any other way to do this?
A component should not dispose any injected dependencies. The main reasons for this are:
that component didn't create them, and therefore has no idea whether those dependencies should be disposed or not.
consumers shouldn't even be aware that dependencies are disposable.
It is very common for a component to depend on a service with a longer lifestyle. In case the consuming component disposes that dependency, the application will break, because the dependency can no longer be used, while it is still configured to be used. Here's a simple example:
// Singleton
private static readonly IRepository<User> repository = new UserRepository();

public IController CreateController(Type controllerType) {
    if (controllerType == typeof(UserController)) {
        return new UserController(repository);
    }

    // ...
}
This example contains a singleton UserRepository and a transient UserController. For each request, a new UserController is created (just picture an ASP.NET MVC application, and this will start to make sense). If the UserController disposed the UserRepository, the next request would get a UserController that depends on an already disposed UserRepository. This would obviously be bad.
But besides this, IRepository<T> should not implement IDisposable. Implementing IDisposable means that the abstraction is leaking implementation details and therefore violates the Dependency Inversion Principle, which states:
Abstractions should not depend on details. Details should depend on
abstractions.
Implementing IDisposable on an abstraction only makes sense if you are absolutely 100% sure that every implementation of that abstraction you'll ever write needs to dispose something. But this is hardly ever the case. Just imagine having a FakeRepository<T> implementation in your unit tests. Such a fake implementation never needs disposal, so not all implementations need disposal and you're leaking implementation details.
This simply means that you should move the IDisposable interface to the implementation. For instance:
public interface IRepository<T> { }
public class UserRepository : IRepository<User>, IDisposable { }
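To illustrate the point about test fakes, a minimal sketch (the FakeRepository<T> name is only illustrative) could look like this:
// A test fake that satisfies the abstraction without holding any resources.
// Because IRepository<T> does not implement IDisposable, nothing forces this
// fake to carry a meaningless Dispose() method.
public class FakeRepository<T> : IRepository<T>
{
    // In-memory behaviour would go here; no disposal logic is needed.
}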
Note that having the IDisposable interface on an abstraction, while not all consumers are expected to call Dispose, also means you are violating the Interface Segregation Principle, which states that:
no client should be forced to depend on methods it does not use.
An advantage of this is that it becomes impossible for consuming components (such as the UserController) to accidentally call Dispose() and thereby possibly break the system.
Another advantage is that since components don't need to dispose their dependencies, for most components there will be no disposal logic left, making the system considerably simpler and more maintainable.
I've been using Dependency Injection for a while, and now I want to give a talk about IoC and DI to a group of new developers. I remember explaining it to one guy personally and he asked me:
"Why not just use:
private IMyInterface _instance = new MyImplementation();
instead of going through all the DI trouble. "
My answer was: "Unit testing requires mocks and stubs." But we do not write unit tests at my company, so it did not convince him. I told him that a concrete implementation is bad since you are tightly coupled to one implementation; changing one component will cause a change in another.
Can you give an example for such code?
Can you give me more reasons why this code is bad?
It seems so obvious to me that I have trouble explaining it :-)
The problem with the following coupling
public class MyClass
{
    private IMyInterface _instance = new MyImplementation();
    ...
is that any time MyClass is created (whether directly or by an IoC container), it will always immediately create a concrete MyImplementation and bind its dependency _instance to this concrete implementation. In turn, it is likely that MyImplementation has other dependencies, which are coupled in the same way.
Benefits of decoupling of classes such that MyClass is only dependent on interfaces to its dependencies, and not concrete implementations of the dependencies (i.e. the D of SOLID principles) include:
for Unit Testing - as you've mentioned, in order to test MyClass in isolation with new'ed dependencies, you would need to resort to nasty things like Moles/Fakes to mock out the hard-wired MyImplementation dependency.
for Substitution - by coupling only to an interface, you can now swap out different concrete implementations of IMyInterface (e.g. via configuring your IoC bootstrapping) without changing any code in MyClass.
for making dependencies explicit and obvious in your system - the IMyInterface dependency may have further dependencies of its own, which need to be resolved (and may need configuration as well). If MyClass hides the IMyInterface dependency internally, the caller cannot see what the dependencies of MyClass are. Although in classic 1990s OO this was commonplace (i.e. encapsulation + composition), it can obscure the implementation, since deployment of all dependencies still needs to be done. And even if consumers coupled to MyClass only through an IMyClass interface, that interface would again hide the dependency on IMyInterface, since constructors are not visible on interfaces.
for configurable dependency lifespan control - by injecting IMyInterface instead of newing MyImplementation, you allow additional configuration options with respect to the lifespan management of the MyImplementation object. When the hardwired creation of MyImplementation was done inside MyClass, MyClass effectively took ownership of MyImplementation's lifespan, with a 1:1 relationship between the two class instances. By leaving this to the IoC container, you can now play with other options for MyImplementation's lifespan, which might be more efficient; e.g. if MyImplementation instances are thread-safe, you may elect to share a single instance across multiple instances of MyClass.
In summary, here's how I believe the refactoring should look to be suitable for IoC constructor dependency injection:
public class MyClass
{
    // Coupled only to the interface. The dependency can be mocked and substituted.
    private readonly IMyInterface _instance;

    public MyClass(IMyInterface instance)
    {
        _instance = instance;
    }
    ...
The IoC container bootstrapping will define WHICH implementation of IMyInterface needs to be bound, and will also define the lifespan of the dependency, e.g. in Ninject:
Bind<IMyInterface>()
    .To<SomeConcreteDependency>() // Which implements IMyInterface
    .InSingletonScope();
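For completeness, here's a hedged sketch of what resolving MyClass from a standard Ninject kernel would then look like (only the kernel setup is new here; the type names come from the example above):
using Ninject;

var kernel = new StandardKernel();
kernel.Bind<IMyInterface>().To<SomeConcreteDependency>().InSingletonScope();

// The kernel sees the IMyInterface constructor parameter on MyClass and
// injects the configured singleton for us.
var myClass = kernel.Get<MyClass>();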
I have a class that uses constructor injection.
public class MyClass : IDisposable
{
    private readonly IInterface1 _interface1;

    public MyClass(IInterface1 interface1)
    {
        _interface1 = interface1;
    }

    public void Dispose()
    {
        _interface1.Dispose();
    }
}
interface1 will be injected by DI. But sometimes I need to create MyClass manually.
public class MyOtherClass
{
    private readonly IInterface1 _interface1;

    public MyOtherClass()
    {
        _interface1 = new Interface1();
    }

    public void Foo()
    {
        var foo = new MyClass(_interface1);
        var bar = new MyClass(_interface1);
    }
}
In my Dispose method, interface1 is always disposed when MyClass is destroyed. The problem is that interface1 is owned by MyOtherClass and might still be used by other instances, so it shouldn't be disposed. How can I resolve this?
You should not call
_interface1.Dispose();
inside MyClass.
If interface1 is created by the DI container, it will be released there.
If you are creating it explicitly as in MyOtherClass, Dispose it in MyOtherClass.
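In other words, ownership could look roughly like this (a sketch that only rearranges the question's own types; the IDisposable cast is there so the code compiles regardless of whether IInterface1 exposes Dispose):
// MyOtherClass created the Interface1 instance, so MyOtherClass owns it
// and is the single place where it gets disposed.
public class MyOtherClass : IDisposable
{
    private readonly IInterface1 _interface1 = new Interface1();

    public void Foo()
    {
        // MyClass instances merely use the dependency; they never dispose it.
        var foo = new MyClass(_interface1);
        var bar = new MyClass(_interface1);
    }

    public void Dispose()
    {
        // Safe here: all consumers created by this class are done with it.
        (_interface1 as IDisposable)?.Dispose();
    }
}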
There are two problems with your code.
First of all, MyClass 'illegally' takes ownership of the IInterface1, while it has no idea what the lifetime of that instance is. This means that if IInterface1 is reused, the system breaks.
The basic rule of ownership is that "he who creates an instance is responsible for disposing it" (the RAII idiom). Since MyClass didn't create IInterface1, it should not dispose it. MyClass should therefore not implement a Dispose method and not call IInterface1.Dispose().
Second, by letting the IInterface1 implement IDisposable, your code violates the Dependency Inversion Principle (DIP), because the DIP states:
Abstractions should not depend on details. Details should depend on
abstractions.
Your IInterface1, however, depends on an implementation detail, because whether or not some component has unmanaged resources that need to be disposed is an implementation detail. It is very unlikely that every implementation of IInterface1 will always need to dispose resources, and because of that your interface leaks implementation details of one specific implementation.
So instead of letting IInterface1 implement IDisposable, you simply let the given implementation implement IDisposable. What's nice about this is that it minimizes IInterface1's API, which makes it easier to work with and possibly allows you to conform to the Interface Segregation Principle.
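Applied to the types in your question, that would look roughly like this (a sketch; Interface1 stands in for whichever concrete implementation actually holds resources):
// The abstraction stays free of IDisposable...
public interface IInterface1
{
    // ...the members of the contract...
}

// ...and only the implementation that actually owns resources opts into disposal.
public class Interface1 : IInterface1, IDisposable
{
    public void Dispose()
    {
        // release managed/unmanaged resources here
    }
}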
If you do that, the problem goes away immediately, since MyClass has no clue whether or not IInterface1 can be disposed (which is good), which means it can't accidentally call Dispose in the first place.
This of course leaves the disposal of that instance up to the part of the system that created it (which is good). If this instance is created on your behalf by a DI library (if you use one), the DI library is usually responsible for disposing that instance. If you don't use a container, you will have to ensure disposal yourself, obviously.
Do note that not all containers track all instances. For instance, Unity and Simple Injector do not track (and dispose) transient instances automatically. In most cases, however, disposable components should be registered with a scoped lifestyle (per web request or something similar). I think in that case all containers dispose instances that are registered with such a lifestyle.
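As an illustration of such a scoped registration, here is a sketch assuming Simple Injector's API (version 4 or later; the type names are the ones from this question):
using SimpleInjector;
using SimpleInjector.Lifestyles;

var container = new Container();
container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();

// The container owns the disposable Interface1 and disposes it when the scope ends.
container.Register<IInterface1, Interface1>(Lifestyle.Scoped);
container.Register<MyClass>();

using (AsyncScopedLifestyle.BeginScope(container))
{
    var consumer = container.GetInstance<MyClass>();
    // use consumer...
} // Interface1 is disposed here by the container, not by MyClass.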
I've got the IUnitOfWork interface (with its implementation, which I won't show you):
public interface IUnitOfWork : IDisposable
{
...
}
Note the IDisposable inheritance. I also have the service with the appropriate implementation:
public interface IBusinessLogicService
{
    ...
}

public sealed class BusinessLogicService : IBusinessLogicService
{
    // Dependency is auto-injected by Ninject
    // because of the custom injection heuristic.
    public IUnitOfWork UnitOfWork { get; set; }
    ...
}
Here is the Ninject binding:
kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InRequestScope();
kernel.Bind<IBusinessLogicService>().To<BusinessLogicService>();
As you can see, Ninject will automatically deactivate the IUnitOfWork instance at the end of the request and also dispose it.
Now, the question,
will Ninject also deactivate (and reactivate on the next web request) the instances (like IBusinessLogicService) that depend on the deactivated object?
No, Ninject will deactivate an object when the scope is collected if there is a scope. Transient objects are not tracked by Ninject and are considered to be managed externally [and fresh instances are created wherever one is required].
The deactivation point for a scope worth of objects is normally dictated by when the scoping object gets garbage collected.
For InRequestScope the deactivation takes place deterministically at the end of a request using an ASP.NET pipeline hook.
See also Ninject.Extensions.NamedScope for more options in this space
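If you do want BusinessLogicService itself to be deactivated together with the request, one option (a sketch, not something from the question's code base) is to give it a request scope as well:
// Transient bindings are never tracked, so give the dependent service a scope too.
// Ninject will then deactivate (and dispose, where applicable) the instance when
// the web request ends, along with its IUnitOfWork.
kernel.Bind<IBusinessLogicService>().To<BusinessLogicService>().InRequestScope();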
Ninject is only responsible for dependency injection. It will instantiate the interface variable with a class instance, depending on the kernel binding. It is not responsible for deactivating instances created using Ninject.
The deactivation of objects will follow the normal ASP.NET framework's object destruction cycle.
We are building an ASP.NET project, and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using; that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I. E.g. here are a couple of the interfaces:
IEmployeeService
IDepartmentService
IOrderService
...
All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about Unit Testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services:
public class EmployeeService : IEmployeeService
{
    private readonly IOrderService _orderSvc;
    private readonly IDepartmentService _deptSvc;
    private readonly IEmployeeRepository _empRep;

    public EmployeeService(IOrderService orderSvc,
                           IDepartmentService deptSvc,
                           IEmployeeRepository empRep)
    {
        _orderSvc = orderSvc;
        _deptSvc = deptSvc;
        _empRep = empRep;
    }

    // methods down here
}
This really isn't usually a problem, but I wonder why not set up a factory class that we pass around instead?
i.e.
public abstract class ServiceFactory
{
    public abstract IEmployeeService GetEmployeeService();
    public abstract IDepartmentService GetDepartmentService();
    public abstract IOrderService GetOrderService();
}
Then instead of calling:
_orderSvc.CalcOrderTotal(orderId)
we'd call
_svcFactory.GetOrderService().CalcOrderTotal(orderId)
What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more.
Thanks for your thoughts!
One argument against this is that it doesn't make your dependencies clear. It shows that you depend on "some of the stuff in the service factory" but not which services. For refactoring purposes it can be helpful to know exactly what depends on what.
Dependency injection should make this kind of thing easy, if you're using an appropriate framework - it should just be a matter of creating the right constructor, defining what implements which interface, and letting it sort everything out.
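For example, with a container such as Ninject (purely illustrative; the concrete class names below are assumed, only the interfaces appear in the question), the wiring lives in one composition root and the services just declare constructor parameters:
// One place defines which implementation satisfies each interface.
kernel.Bind<IEmployeeService>().To<EmployeeService>();
kernel.Bind<IDepartmentService>().To<DepartmentService>();
kernel.Bind<IOrderService>().To<OrderService>();
kernel.Bind<IEmployeeRepository>().To<EmployeeRepository>();

// The container builds the whole graph; EmployeeService's three
// constructor dependencies are resolved automatically.
var employeeService = kernel.Get<IEmployeeService>();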
Such a factory is essentially a Service Locator, and I consider it an anti-pattern because it obscures your dependencies and make it very easy to violate the Single Responsibility Principle (SRP).
One of the many excellent benefits we derive from Constructor Injection is that it makes violations of the SRP so glaringly obvious.
If most of your classes depend on these three interfaces, you could pass around an object that wraps them together, BUT: if most of the classes depend on just one or two of them, then it's not a good idea, since those classes will have access to objects they don't need and have no business with, and some programmers will always call the code they are not supposed to call just because it's available.
By the way, it's not a factory unless you always create a new object in the Get[...]Service() methods, and doing that just to pass a few methods around is bad. I'd just call it ServiceWrapper and turn those methods into the properties EmployeeService, DepartmentService and OrderService.
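A sketch of that wrapper (the property types come from the question; the class itself is just an illustration):
// Not a factory: it creates nothing and only groups already-resolved services.
public class ServiceWrapper
{
    public ServiceWrapper(IEmployeeService employeeService,
                          IDepartmentService departmentService,
                          IOrderService orderService)
    {
        EmployeeService = employeeService;
        DepartmentService = departmentService;
        OrderService = orderService;
    }

    public IEmployeeService EmployeeService { get; }
    public IDepartmentService DepartmentService { get; }
    public IOrderService OrderService { get; }
}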
I'm finally wrapping my head around IoC and DI in C#, and am struggling with some of the edges. I'm using the Unity container, but I think this question applies more broadly.
Using an IoC container to dispense instances that implement IDisposable freaks me out! How are you supposed to know if you should Dispose()? The instance might have been created just for you (and therefore you should Dispose() it), or it could be an instance whose lifetime is managed elsewhere (and therefore you'd better not). Nothing in the code tells you, and in fact this could change based on configuration!!! This seems deadly to me.
Can any IoC experts out there describe good ways to handle this ambiguity?
You definitely do not want to call Dispose() on an object that was injected into your class. You can't make the assumption that you are the only consumer. Your best bet is to wrap your unmanaged object in some managed interface:
public class ManagedFileReader : IManagedFileReader
{
    public string Read(string path)
    {
        // File.OpenText returns a StreamReader over the file.
        using (StreamReader reader = File.OpenText(path))
        {
            return reader.ReadToEnd();
        }
    }
}
That is just an example; I would use File.ReadAllText(path) if I were trying to read a text file into a string.
Another approach is to inject a factory and manage the object yourself:
public void DoSomething()
{
    using (var resourceThatShouldBeDisposed = injectedFactory.CreateResource())
    {
        // do something
    }
}
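The injected factory in that snippet could be as small as this (a sketch; the names IResourceFactory, DisposableResource and CreateResource are placeholders, not a specific library's API):
// Injecting a factory makes it explicit that the caller creates, owns and
// therefore disposes every resource it asks for.
public interface IResourceFactory
{
    DisposableResource CreateResource();
}

public sealed class DisposableResource : IDisposable
{
    public void Dispose()
    {
        // release the underlying handle/connection here
    }
}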
AutoFac handles this by allowing the creation of a nested container. When the container is finished with, it automatically disposes of all IDisposable objects within it:
.. As you resolve services, Autofac tracks disposable (IDisposable) components that are resolved. At the end of the unit of work, you dispose of the associated lifetime scope and Autofac will automatically clean up/dispose of the resolved services.
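A minimal sketch of that pattern (IMyService and MyService are placeholder names; the Autofac calls themselves are standard):
var builder = new ContainerBuilder();
builder.RegisterType<MyService>().As<IMyService>();
var container = builder.Build();

// Everything resolved from this nested scope that implements IDisposable
// is disposed automatically when the scope is disposed.
using (var scope = container.BeginLifetimeScope())
{
    var service = scope.Resolve<IMyService>();
    // use service...
} // service and its disposable dependencies are cleaned up here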
This has puzzled me frequently as well. Though not happy about it, I always came to the conclusion that never returning an IDisposable object in a transient way was best.
Recently, I rephrased the question for myself: is this really an IoC issue, or a .NET Framework issue? Disposing is awkward anyway. It has no meaningful functional purpose, only a technical one. So it's more a framework issue that we have to deal with than an IoC issue.
What I like about DI is that I can ask for a contract providing me functionality, without having to bother about the technical details. I'm not the owner. No knowledge about which layer it's in. No knowledge about which technologies are required to fulfil the contract, no worries about lifetime. My code looks nice and clean, and is highly testable. I can implement responsibilities in the layers where they belong.
So if there's an exception to this rule that does require me to organise the lifetime, let's make that exception, whether I like it or not. If the object implementing the interface requires me to dispose it, I want to know about it, since then I am prompted to keep the object alive for as short a time as possible. A trick like resolving it from a child container which is disposed some time later might still cause me to keep the object alive longer than I should. The allowed lifetime of the object is determined when registering the object, not by the functionality that creates a child container and holds on to it for a certain period.
So as long as we developers need to worry about disposing (will that ever change?) I will try to inject as few transient disposable objects as possible.
1. I try to make the object not IDisposable, for example by not keeping disposable objects at class level, but in a smaller scope.
2. I try to make the object reusable so that a different lifetime manager can be applied.
If neither is feasible, I use a factory to indicate that the user of the injected contract is the owner and should take responsibility for it.
There is one caveat: changing a contract implementer from non-disposable to disposable will be a breaking change, because from that point on it is no longer the interface that gets registered, but the factory for that interface. But I think this applies to other scenarios as well. Forgetting to use a child container will, from that moment on, cause memory issues; the factory approach will instead cause an IoC resolve exception.
Some example code:
using System;
using Microsoft.Practices.Unity;

namespace Test
{
    // Unity configuration
    public class ConfigurationExtension : UnityContainerExtension
    {
        protected override void Initialize()
        {
            // Container.RegisterType<IDataService, DataService>(); Use factory instead
            Container.RegisterType<IInjectionFactory<IDataService>, InjectionFactory<IDataService, DataService>>();
        }
    }

    #region General utility layer

    public interface IInjectionFactory<out T>
        where T : class
    {
        T Create();
    }

    public class InjectionFactory<T2, T1> : IInjectionFactory<T2>
        where T1 : T2
        where T2 : class
    {
        private readonly IUnityContainer _iocContainer;

        public InjectionFactory(IUnityContainer iocContainer)
        {
            _iocContainer = iocContainer;
        }

        public T2 Create()
        {
            return _iocContainer.Resolve<T1>();
        }
    }

    #endregion

    #region data layer

    public class DataService : IDataService, IDisposable
    {
        public object LoadData()
        {
            return "Test data";
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                /* Dispose stuff */
            }
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }
    }

    #endregion

    #region domain layer

    public interface IDataService
    {
        object LoadData();
    }

    public class DomainService
    {
        private readonly IInjectionFactory<IDataService> _dataServiceFactory;

        public DomainService(IInjectionFactory<IDataService> dataServiceFactory)
        {
            _dataServiceFactory = dataServiceFactory;
        }

        public object GetData()
        {
            var dataService = _dataServiceFactory.Create();
            try
            {
                return dataService.LoadData();
            }
            finally
            {
                var disposableDataService = dataService as IDisposable;
                if (disposableDataService != null)
                {
                    disposableDataService.Dispose();
                }
            }
        }
    }

    #endregion
}
I think in general the best approach is to simply not Dispose of something which has been injected; you have to assume that the injector is doing the allocation and deallocation.
This depends on the DI framework. Some frameworks allow you to specify whether you want a shared instance (always using the same reference) for every dependency injected. In this case, you most likely do not want to dispose.
If you can specify that you want a unique instance injected, then you will want to dispose (since it was constructed for you specifically). I'm not as familiar with Unity - you'd have to check the docs as to how to make this work there. It's part of the import attribute with MEF and some others I've tried, though.
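With MEF, for example, the creation policy is stated right on the import (a sketch; Consumer and IMyDependency are illustrative names):
using System.ComponentModel.Composition;

public class Consumer
{
    // NonShared asks MEF for an instance created specifically for this import,
    // which is the case where disposing it yourself is reasonable.
    [Import(RequiredCreationPolicy = CreationPolicy.NonShared)]
    public IMyDependency Dependency { get; set; }
}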
Putting a facade in front of the container can resolve this as well. Plus, you can extend it to keep track of a richer life cycle, like service shutdowns and startups or ServiceHost state transitions.
My container tends to live in an IExtension that implements the IServiceLocator interface. It is a facade for Unity, and allows for easy access in WCF services. Plus, I have access to the service events from the ServiceHostBase.
The code you end up with will check whether any registered singleton, or any type it creates, implements one of the interfaces that the facade keeps track of.
This still does not allow for disposing in a timely manner, as you are tied to these events, but it helps a bit.
If you want to dispose in a timely manner (i.e. now vs. upon service shutdown), you need to know that the item you get is disposable; it is part of the business logic to dispose of it, so IDisposable should be part of the interface of the object. And there probably should be unit tests verifying the expectation that Dispose gets called.
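To make the idea concrete, here's a rough sketch of such a facade over Unity that tracks disposables (entirely illustrative; the setup described above hooks into ServiceHost events rather than a simple Dispose):
using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity;

// A thin facade that remembers which resolved instances are IDisposable,
// so they can be cleaned up when the host shuts down.
public class ContainerFacade : IDisposable
{
    private readonly IUnityContainer _container;
    private readonly List<IDisposable> _tracked = new List<IDisposable>();

    public ContainerFacade(IUnityContainer container)
    {
        _container = container;
    }

    public T Resolve<T>()
    {
        var instance = _container.Resolve<T>();
        var disposable = instance as IDisposable;
        if (disposable != null)
        {
            _tracked.Add(disposable);
        }
        return instance;
    }

    public void Dispose()
    {
        // Wire this up to the ServiceHost closing/faulted events.
        _tracked.ForEach(d => d.Dispose());
        _tracked.Clear();
    }
}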
In the Unity framework, there are two ways to register the injected classes: as singletons (you always get the same instance of the class when you resolve it), or such that you get a new instance of the class on each resolution.
In the latter case, you have the responsibility of disposing the resolved instance once you don't need it any more (which is a quite reasonable approach). On the other hand, when you dispose the container (the class that handles object resolutions), all the singleton objects are automatically disposed as well.
Therefore, there are apparently no issues with injected disposable objects in the Unity framework. I don't know about other frameworks, but I suppose that as long as a dependency injection framework is solid enough, it handles this issue in one way or another.
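In Unity terms (a sketch assuming the classic Microsoft.Practices.Unity API; IDataService/DataService are example names reused from earlier on this page), the two registration styles look like this:
var container = new UnityContainer();

// Singleton: the container holds the instance and disposes it
// when the container itself is disposed.
container.RegisterType<IDataService, DataService>(
    new ContainerControlledLifetimeManager());

// Alternatively, transient (the default): a new instance per Resolve,
// which the caller is responsible for disposing when done with it.
container.RegisterType<IDataService, DataService>(
    new TransientLifetimeManager());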