I'm using Simple Injector successfully with constructor injection and I love the auto-wiring of constructor arguments. Is there a similar auto-wired way to call an initialize method?
So, instead of providing arguments to the constructor (where doing async work is usually impossible), dependencies would be provided to an Initialize() method with auto-wiring:
public class A
{
    // only the default constructor ()

    [Init] // perhaps a marker attribute is needed?
    void /*async*/ Initialize(IC c, IB b)
    {
        // gets called automatically by the container, with the parameters auto-wired
        // maybe do async work with b
    }
}
// in Composition root (for simplicity):
container.RegisterSingleton<A>();
So async support would be a plus.
[Update]
So async is understandably not a feature you want during object graph creation, so let's not focus on that. But sometimes it may still be desirable to use an init method (e.g. when a parameterless constructor is required). I realize that having Simple Injector call Initialize() automatically leads to nonsense. I just sometimes see the need to split object creation and initialization, and auto-wiring is just way too cool. How would one then activate or implement that, so that Simple Injector auto-wires it for you? I hope I do not have to write an "initializer class" where I have to 'pipe through' all the dependencies. That is the actual direction of the question (getting hold of Magic.AutoWire()).
public class SomeClassWithA
{
    A _a;

    public SomeClassWithA(A a)
    {
        _a = a;
    }

    void CalledLater()
    {
        Magic.AutoWire(_a);
        // Initialize was called with auto-wired parameters
    }
}
Although it is possible to extend Simple Injector to call an initialize method during object graph construction, it is impossible to make this truly asynchronous, simply because Simple Injector lacks an asynchronous API. Without a "Task<T> GetInstanceAsync<T>()" method, the application would still have to wait for I/O by blocking a thread instead of using an I/O completion port, which would completely waste the benefits of async.
Simple Injector deliberately lacks such an async feature (and, as far as I know, most DI libraries lack it for the exact same reason). The construction of object graphs should be fast and should not involve any I/O. Constructors (or initializers with dependencies) should do nothing more than store the incoming dependencies, as Mark Seemann clearly stated here.
This basically means that you should delay any I/O until after object graph construction, to the moment an object is used for the first time. This not only keeps object construction simple, fast and reliable, it also makes it possible to verify the construction of your object graph in isolation (without the existence of any I/O components such as a database or the file system).
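That said, if all you need is the synchronous, auto-wired Initialize call from the question, a minimal sketch using Simple Injector's RegisterInitializer could look like this (B and C are assumed implementations of IB and IC, and Initialize is assumed to be accessible, e.g. public):

// Composition Root: let the container call Initialize(IC, IB) right after it creates A.
container.Register<IB, B>();
container.Register<IC, C>();
container.RegisterSingleton<A>();
container.RegisterInitializer<A>(a =>
    a.Initialize(container.GetInstance<IC>(), container.GetInstance<IB>()));

Keep in mind that, per the advice above, such an initializer should still not perform any I/O.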
Related
It's fairly established that doing work in ctors for types that are resolved using SimpleInjector is bad practice. Although this often leads to certain late initializations of such types, a particularly interesting case is Reactive Extensions subscriptions.
Take for instance an observable sequence that exhibits Replay(1) semantics (actually BehaviorSubject if we take the StartWith into account), e.g.
private readonly IObservable<Value> _myObservable;

public MyType(IService service)
{
    _myObservable = service.OtherObservable
        .StartWith(service.Value)
        .Select(x => SomeTransform())
        .Replay(1)
        .RefCount();
}

public IObservable<Value> MyObservable => _myObservable;
Assume now, that SomeTransform is computationally expensive. From the point of view of SimpleInjector, the above is bad practice. Ok, so we need some kind of Initialize() method to call after SimpleInjector is finished. But what about our replay semantics and our StartWith()? Our consumers expect a value when they Subscribe (assume now that this is guaranteed to happen after initialization)!
How do we get around these restrictions in a nice way while still satisfying SimpleInjector? Here's a summary of requirements:
Don't do extensive work in the ctor (i.e. SomeTransform should not run)
_myObservable should be readonly
MyObservable should exhibit Replay(1) semantics
We should always have an initial value (hence the StartWith)
We do not want to Subscribe inside MyType and cache the value (we like immutability)
I experimented with creating an additional observable that starts with false and then gets set to true on initialize, and then merging that together with _myObservable, but couldn't quite get it to work. Additionally, it doesn't seem like the best solution. In essence, all I want to do is delay until Initialize() is done. There must be some way to do this that I'm not seeing?
One easy solution that comes to mind is the use of Lazy<T>
This could look like:
private readonly Lazy<IObservable<Value>> _lazyMyObservable;

public MyType(IService service)
{
    _lazyMyObservable = new Lazy<IObservable<Value>>(() => this.InitObservable(service));
}

private IObservable<Value> InitObservable(IService service)
{
    return service.OtherObservable
        .StartWith(service.Value)
        .Select(x => SomeTransform())
        .Replay(1)
        .RefCount();
}

public IObservable<Value> MyObservable => _lazyMyObservable.Value;
This initializes the field _lazyMyObservable without actually calling SomeTransform(). When a consumer asks for MyType.MyObservable, the InitObservable code is called once and only once. This postpones the initialization to the point where the code is actually used.
This keeps your constructor nice and clean, with no need for separate initialization logic.
Note that the constructor of Lazy<T> has several overloads that you can use if you run into multithreading issues.
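For example, this overload (part of the BCL) makes the thread-safety choice explicit; ExecutionAndPublication guarantees that InitObservable runs at most once even when several threads hit MyObservable concurrently (it is also the default of the single-argument constructor):

_lazyMyObservable = new Lazy<IObservable<Value>>(
    () => this.InitObservable(service),
    LazyThreadSafetyMode.ExecutionAndPublication);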
Injection constructors should be simple and reliable. This means that the following practices are frowned upon:
Doing any I/O operations inside the constructor. I/O operations can fail and make construction of the object graph unreliable.
Using the class's dependencies inside the constructor. Not only could a called dependency cause I/O of its own, sometimes injected dependencies are not (yet) fully initialized, and their final initialization happens at a later point in time, perhaps after the object graph has been constructed.
Considering how Reactive Extensions work, your MyType constructor doesn't seem to do any I/O. Its SomeTransform method is not called during the creation of MyType. Instead, the observable is configured to call SomeTransform when objects are pushed. This means that from a DI perspective, your injection is still 'simple' and fast. Sometimes your classes need some initialization on top of storing incoming dependencies. Creating and storing a Lazy<T>, for instance, is a good example. It allows delaying doing some I/O while still having more code than merely "receiving the dependencies."
But you are still accessing a dependency inside your constructor, which might cause trouble if that dependency, or its dependencies, are not fully initialized. Furthermore, with Reactive Extensions you create a runtime dependency from IService back to MyType (you already have a design-time dependency from MyType to IService). This is very similar to working with events in .NET. A consequence is that IService could keep MyType alive, even when MyType's lifetime is expected to be shorter.
So, strictly speaking, from a DI perspective this configuration might be troublesome. But it's hard to imagine a different model when working with Reactive Extensions. The alternative would be to move this configuration of the observables out of the constructors and do it after the object graph has been constructed. But that would likely mean having to open up your classes so the Composition Root has access to the methods that need to be called. It also causes Temporal Coupling.
In other words, when using Reactive Extensions, it is probably good to have some design rules in place to prevent trouble. These rules could be:
All exposed IObservable<T> properties should always be fully initialized and usable after its type's construction.
All observers and observables should have the same lifetime.
As I understand it, part of writing (unit-)testable code is that a constructor should not do real work and should only assign fields. This has worked pretty well so far, but I came across a problem and I'm not sure what the best way to solve it is. See the code sample below.
class SomeClass
{
    private IClassWithEvent classWithEvent;

    public SomeClass(IClassWithEvent classWithEvent)
    {
        this.classWithEvent = classWithEvent;

        // (1) attach event handler in ctor.
        this.classWithEvent.Event += OnEvent;
    }

    public void ActivateEventHandling()
    {
        // (2) attach event handler in method
        this.classWithEvent.Event += OnEvent;
    }

    private void OnEvent(object sender, EventArgs args)
    {
    }
}
For me option (1) sounds fine, but then the constructor does more than just assign fields. Option (2) feels a bit "too much".
Any help is appreciated.
A unit test would test SomeClass at most. Therefore you would typically mock classWithEvent. Using some kind of injection for classWithEvent in the ctor is fine.
Just as Thomas Weller said, wiring is field assignment.
Option 2 is actually bad, IMHO. If you omit the call to ActivateEventHandling, you end up with an improperly initialized class, and you have to transport the knowledge that ActivateEventHandling must be called in comments or somewhere else, which makes the class harder to use. It also probably results in a class usage you never tested: you called ActivateEventHandling and tested that path, but an uninformed user who omits the activation didn't, and you have certainly not tested your class with ActivateEventHandling left uncalled, right? :)
Edit: There may be alternative approaches here which are worth mentioning.
Depending on the paradigm, it may be wise to avoid wiring events in the class at all. I need to qualify my comment on Stephen Byrne's answer.
Wiring can be regarded as context knowledge. The single responsibility principle says a class should do only one task. Furthermore, a class can be used in more contexts if it does not depend on something else. A very loosely coupled system would provide many classes which have events and handlers and do not know about other classes.
The environment is then responsible for wiring all the classes together to connect events properly with handlers.
The environment would create the context in which the classes interact with each-other in a meaningful way.
A class in this case therefore does not know to whom it will be bound, and it actually does not care. If it requires a value, it asks for it; whom it asks should be unknown to it. In that case there wouldn't even be an interface injected into the ctor, to avoid a dependency. This concept is similar to neurons in a brain: they also emit messages to the environment and expect answers without knowing their neighbouring neurons.
However, I regard a dependency on an interface, if it is injected by some means of a dependency injection container, as just another paradigm, neither more nor less wrong.
The non-trivial task of having the environment wire up all the classes on start may lead to runtime errors (which can be mitigated by very good coverage with functional and integration tests, which may be a hard task for large projects), and it gets very annoying if you need to wire dozens of classes and probably hundreds of events manually on startup.
While I agree that wiring in an environment and not in the class itself can be nice, it is not practical for large scale code.
Ralf Westphal (one of the founders of the Clean Code Developer initiative; sorry, German only) has written software that performs the wiring automatically, in a concept called "event based components" (a term not necessarily coined by himself). It uses naming conventions and signature matching with reflection to bind events and handlers together.
Wiring events is field assignment (because delegates are nothing but simple reference variables that point to methods).
So option (1) is fine.
The point of a constructor is not to "assign fields". It is to establish the invariants of your object, i.e. things that never change during its lifetime.
So if the other methods of the class depend on always being subscribed to some object, you'd better do the subscription in the constructor.
On the other hand, if subscriptions come and go (probably not the case here), you can move this code to another method.
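To illustrate the two situations, here is a small sketch reusing IClassWithEvent from the question (the class names are illustrative):

// Being subscribed is an invariant: establish it in the constructor and never change it.
public class AlwaysSubscribed
{
    public AlwaysSubscribed(IClassWithEvent source)
    {
        source.Event += OnEvent; // subscribed for the whole lifetime
    }

    private void OnEvent(object sender, EventArgs args) { }
}

// Subscriptions come and go: expose an explicit start/stop pair instead.
public class SometimesSubscribed
{
    private readonly IClassWithEvent _source;

    public SometimesSubscribed(IClassWithEvent source)
    {
        _source = source; // constructor only stores the dependency
    }

    public void Start() { _source.Event += OnEvent; }
    public void Stop()  { _source.Event -= OnEvent; }

    private void OnEvent(object sender, EventArgs args) { }
}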
The single responsibility principle dictates that the wiring should be avoided. Your class should not care how, or from where, it receives data. It would make sense to rename the OnEvent method to something more meaningful and make it public.
Then some other class (bootstrapper, configurator, whatever) should be responsible for the wiring. Your class should only be responsible for what happens when new data comes in.
Pseudo code:
public interface IEventProvider // your IClassWithEvent
{
    event EventHandler MyEvent;
}

public interface IEventResponder
{
    void OnEvent(object sender, EventArgs args);
}

public class EventResponder : IEventResponder
{
    public void OnEvent(object sender, EventArgs args) { /* ... */ }
}

public class Bootstrapper
{
    public void WireEvent(IEventProvider eventProvider, IEventResponder eventResponder)
    {
        eventProvider.MyEvent += eventResponder.OnEvent;
    }
}
Note that the above is pseudo code, meant only to describe the idea.
How your bootstrapper is actually implemented depends on many things. It can be your "main" method, your global.asax, or whatever you have in place to configure and prepare your application.
The idea is that whatever is responsible for preparing the application to run should compose it, not the classes themselves; they should be as single-purpose as possible and should not care too much about how and where they are used.
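For example, the wiring could then happen once at startup (ClassWithEvent is a hypothetical concrete IEventProvider, used only to show the idea):

public class ClassWithEvent : IEventProvider
{
    public event EventHandler MyEvent;

    public void Raise() => MyEvent?.Invoke(this, EventArgs.Empty);
}

// somewhere in Main, global.asax, or whatever bootstraps the application:
var provider = new ClassWithEvent();
var responder = new EventResponder();
new Bootstrapper().WireEvent(provider, responder);
provider.Raise(); // responder.OnEvent runs, yet neither class knows about the other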
I have a custom MVC framework in which I'm overhauling the routing API. I'm trying to think of a clean way to segregate "setup" and "execution" in my framework which makes extensive use of delegates and generics. Right now I envision this from the calling side:
//setup
MyRouter.AddRoute("/foo", () => new MyHandler(), (h) => h.MyMethod);
//h.MyMethod only exists in MyHandler, not in HttpHandler
//execution
MyRouter.Execute(HttpContext);
I can make the AddRoute method signature "work" currently:
delegate T HandlerInvoker<T>();
delegate string HandlerCallMethod<T>(T handler);
...
public void AddRoute<T>(string pattern, HandlerInvoker<T> invoker, HandlerCallMethod<T> caller) where T : HttpHandler {...}
If I didn't need to store the invoker and caller and could do it all right then, this would work fine. But, I do need to store the invoker and caller to later execute.
Current things I've thought about doing:
Storing them in a List<object> and then using reflection to call them. This seems extremely complicated and probably not great for performance.
Moving AddRoute to execution. This can make it harder for people using my API, but might end up being my only "good" choice
Ask a SO question :)
Is there any good way of storing these generic types without a ton of painful reflection?
You could store an anonymous delegate that performs all the conversion for you.
It looks like the following would work (not tested in any way):

List<Action> handlers = new List<Action>();
handlers.Add(() => caller(invoker()));

Note that this wouldn't work if you wanted to cache the result of invoker.
In that case you need to preserve the value; Lazy<T> should do the trick.

List<Action> handlers = new List<Action>();
Lazy<T> lazy = new Lazy<T>(() => invoker());
handlers.Add(() => caller(lazy.Value));

The latter creates the return value of invoker only once per call to AddRoute. And since lazy is a local variable, it is automatically captured in a closure class, which is kept alive as long as handlers holds a reference for you.
Note that I ignored pattern, but it seems you don't need any help there.
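Putting it together, a hedged sketch of what the route table could look like (the Router class, the dictionary-based lookup and the string-based Execute are simplifying assumptions, not the asker's actual framework):

public class Router
{
    private readonly Dictionary<string, Func<string>> _routes =
        new Dictionary<string, Func<string>>();

    public void AddRoute<T>(string pattern, HandlerInvoker<T> invoker, HandlerCallMethod<T> caller)
        where T : HttpHandler
    {
        var lazy = new Lazy<T>(() => invoker());     // the handler is created at most once
        _routes[pattern] = () => caller(lazy.Value); // the closure erases T, so no reflection is needed
    }

    public string Execute(string pattern)
    {
        return _routes[pattern]();                   // invoke the stored, already-typed route
    }
}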
When looking at the source code of a couple of projects, I found a pattern I cannot quite understand.
For instance, in FubuMVC and Common Service Locator, a Func is used when a static provider is changed.
Can anyone explain what the benefit is of using:
private static Func<IServiceLocator> currentProvider;

public static IServiceLocator Current
{
    get { return currentProvider(); }
}

public static void SetLocatorProvider(Func<IServiceLocator> newProvider)
{
    currentProvider = newProvider;
}

instead of:

private static IServiceLocator current;

public static IServiceLocator Current
{
    get { return current; }
}

public static void SetLocator(IServiceLocator newInstance)
{
    current = newInstance;
}
The major advantage of the first model over the second is what's called "lazy initialization". In the second example, as soon as SetLocator is called, you must have an IServiceLocator instance loaded in memory and ready to go. If such instances are expensive to create, and/or created along with a bunch of other objects at once (like on app startup), it's a good idea to try to delay actual creation of the object to reduce noticeable delays to the user. Also, if the dependency may not be used by the dependent class (say it's only needed for certain operations, and the class can do other things that don't require the dependency), it would be a waste to instantiate one.
The solution is to provide a "factory method" instead of an actual instance. When the instance is actually needed, the factory method is called, and the instance is created at the last possible moment before it's used. This reduces front-end loading times and avoids creating unneeded dependencies.
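A small illustration of the difference (ExpensiveServiceLocator is a made-up IServiceLocator implementation; SetLocatorProvider and Current are the static members from the first snippet):

// Nothing is created at registration time; only a factory method is stored.
SetLocatorProvider(() => new ExpensiveServiceLocator());

// ...much later, on first access, the locator is actually created:
IServiceLocator locator = Current;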
Good answer by #KeithS. Another thing to note here is what happens under the covers of the initialization of certain instances. Keeping a reference to intentionally volatile objects can be tricky.
FubuMVC, for instance, spins up a nested StructureMap container per HTTP request which scopes all service location to that specific request. If you have classes running within that pipeline that have been built up, you'll want to use the contextual injection provided to you via THAT instance of IServiceLocator.
There's a lot more flexibility for the implementer of newProvider. They can lazy load, load asynchronously (and if it isn't loaded by the time the func is called, the func can wait), they can let the instance change based on runtime parameters, etc.
A func allows several things:
The locator creation can be delayed until it is needed. It is therefore lazy.
The provider object does not contain any state. It is not its responsibility to shut down the locator or do anything with it except return the current locator when needed.
When the locator is reconfigured at run time, or the provider decides that a different instance is needed, the provider can control the lifetime of the locator, as long as the calling code does not store a reference to the locator.
Since the locator is returned by a method, there is more flexibility, e.g. to create a thread-local locator so that many objects can be created in each thread without coordinating object creation through one global object, which could become a bottleneck when many threads are involved (see the sketch below).
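As a sketch of that last point (CreateLocatorForCurrentThread is a hypothetical factory; ThreadLocal<T> comes from System.Threading):

// Each thread lazily gets its own locator, while consumers keep reading Current as before.
var perThread = new ThreadLocal<IServiceLocator>(() => CreateLocatorForCurrentThread());
SetLocatorProvider(() => perThread.Value);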
I am sure the designers could give you more reasons than I did why it can be a good idea to abstract away "simple" things like returning an instance of a service locator.
Yours,
Alois Kraus
I am refactoring a class so that the code is testable (using NUnit and RhinoMocks as the testing and isolation frameworks) and have found myself with a method that is dependent on another (i.e. it depends on something which is created by that other method). Something like the following:
public class Impersonator
{
    private ImpersonationContext _context;

    public void Impersonate()
    {
        ...
        _context = GetContext();
        ...
    }

    public void UndoImpersonation()
    {
        if (_context != null)
            _someDepend.Undo();
    }
}
Which means that to test UndoImpersonation, I need to set it up by calling Impersonate (Impersonate already has several unit tests to verify its behaviour). This smells bad to me but in some sense it makes sense from the point of view of the code that calls into this class:
public void ExerciseClassToTest(Impersonator c)
{
    try
    {
        if (NeedImpersonation())
        {
            c.Impersonate();
        }
        ...
    }
    finally
    {
        c.UndoImpersonation();
    }
}
I wouldn't have worked this out if I didn't try to write a unit test for UndoImpersonation and found myself having to set up the test by calling the other public method. So, is this a bad smell and if so how can I work around it?
Code smell has got to be one of the vaguest terms I have ever encountered in the programming world. For a group of people that pride themselves on engineering principles, it ranks right up there in terms of unmeasurable rubbish, and is about as useless a measure as LOC per day for programmer efficiency.
Anyway, that's my rant, thanks for listening :-)
To answer your specific question, I don't believe this is a problem. If you test something that has pre-conditions, you need to ensure the pre-conditions have been set up first for the given test case.
One of the tests should be what happens when you call it without first setting up the pre-conditions - it should either fail gracefully or set up its own pre-conditions if the caller hasn't bothered to do so.
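With NUnit (which the question mentions), that test could look like the following sketch, assuming Impersonator has a parameterless constructor:

[Test]
public void UndoImpersonation_WithoutPriorImpersonate_DoesNotThrow()
{
    var impersonator = new Impersonator();

    // The pre-condition (a prior call to Impersonate) is deliberately left out.
    Assert.DoesNotThrow(() => impersonator.UndoImpersonation());
}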
Well, there is a bit too little context to tell; it looks like _someDepend should be initialized in the constructor.
Initializing fields in an instance method is a big NO for me. A class should be fully usable (i.e. all methods work) as soon as it is constructed; so the constructor(s) should initialize all instance variables. See e.g. the page on single step construction in Ward Cunningham's wiki.
The reason initializing fields in an instance method is bad is mainly that it imposes an implicit ordering on how you can call methods. In your case, TheMethodIWantToTest will do different things depending on whether DoStuff was called first. This is generally not something a user of your class would expect, so it's bad :-(.
That said, sometimes this kind of coupling may be unavoidable (e.g. if one method acquires a resource such as a file handle, and another method is needed to release it). But even that should be handled within one method if possible.
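As a generic example (not the asker's code), the acquire/release pair can often be kept inside a single method with a using block, so callers cannot get the ordering wrong:

public void ProcessFile(string path)
{
    using (var stream = File.OpenRead(path)) // acquire the file handle
    {
        // ... work with the stream ...
    } // the handle is released here, even if an exception is thrown
}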
What applies to your case is hard to tell without more context.
Provided you don't consider mutable objects a code smell by themselves, having to put an object into the state needed for a test is simply part of the set-up for that test.
This is often unavoidable, for instance when working with remote connections - you have to call Open() before you can call Close(), and you don't want Open() to automatically happen in the constructor.
However, when doing this you want to be very careful that the pattern is readily understood - for instance I think most users accept this kind of behaviour for anything transactional, but might be surprised when they encounter DoStuff() and TheMethodIWantToTest() (whatever they're really called).
It's normally best practice to have a property that represents the current state - again look at remote or DB connections for an example of a consistently understood design.
The big no-no is for this to ever happen with properties. Properties should never care what order they are called in. If you have a simple value that does depend on the order of method calls, then it should be a parameterless method instead of a property getter.
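A small sketch of the "state as a property" convention (the names are illustrative, not from any specific library):

public class RemoteConnection
{
    // The current state is exposed as a property, so callers and tests can check it.
    public bool IsOpen { get; private set; }

    public void Open()
    {
        // ... acquire the underlying resource ...
        IsOpen = true;
    }

    public void Close()
    {
        if (!IsOpen) return;
        // ... release the underlying resource ...
        IsOpen = false;
    }
}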
Yes, I think there is a code smell in this case. Not because of dependencies between methods, but because of the vague identity of the object. Rather than having an Impersonator which can be in different persona states, why not have an immutable Persona?
If you need a different Persona, just create a new one rather than changing the state of an existing object. If you need to do some cleanup afterwards, make Persona disposable. You can keep the Impersonator class as a factory:
using (var persona = impersonator.createPersona(...))
{
    // do something with the persona
}
To answer the title: having methods call each other (chaining) is unavoidable in object-oriented programming, so in my view there is nothing wrong with testing a method that calls another. A unit test can be a class after all; it's a "unit" you're testing.
The level of chaining depends on the design of your object - you can either fork or cascade.
Forking:
classToTest1.SomeDependency.DoSomething()
Cascading:
classToTest1.DoSomething() (which internally would call SomeDependency.DoSomething)
But as others have mentioned, definitely keep your state initialisation in the constructor, which, from what I can tell, will probably solve your issue.