IoC and "hiding implementation details" - c#

I implemented DI in my project through constructor injection; the composition root is now where all resolving takes place (that is, in the web project). My question is whether the idea of creating an additional project that just handles the resolving is insane.
The reasoning behind this is that while I would still have the implementation assemblies in the build directory (because they would still be referenced by the "proxy" project), I wouldn't need to reference them at the web project level. That in turn would mean the implementations of these interfaces wouldn't be accessible from anywhere other than where they're implemented (unless explicitly referenced, which would quickly pinpoint that something is wrong: you don't want to be doing this).
Is this a purposeless effort likely to become error prone, or is it a reasonable thing to do?
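To make the idea concrete, here is a rough sketch of the kind of "proxy" project I have in mind, using Autofac-style registrations as an example (all type names are made up):

    // In the separate composition project, which references both the
    // interface assemblies and the implementation assemblies:
    using Autofac;

    public static class CompositionRoot
    {
        public static IContainer Build()
        {
            var builder = new ContainerBuilder();
            // Only this project ever sees the concrete types.
            builder.RegisterType<SqlOrderRepository>().As<IOrderRepository>();
            builder.RegisterType<SmtpMailer>().As<IMailer>();
            return builder.Build();
        }
    }

The web project would then reference only this project and the interface assemblies, and call CompositionRoot.Build() at startup.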

There are pros and cons to this. As BrokenGlass said, it is a litmus test; on the flip side, you really have to be careful that you deploy all of the assemblies. Since dependencies of referenced libraries are not copied into the bin folder of the web app, you'll need to make sure they aren't missed, although you would notice this on first run and the fix would ideally be easy.
This is indeed a matter of personal preference; for ease I like to include the references in the web app, but again, the separate project can ensure those dependencies don't leak into the web app. However, if your project is organized in such a way that your controllers always inject what they require, then the chances of that happening are lower. For example, if you take an IContext in every controller, then you are less likely to write using (var context = new Context()) in your app, since the standard has been set.
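A minimal sketch of that convention (IContext and the controller are placeholder names):

    public class OrdersController : Controller
    {
        private readonly IContext _context;

        // The container supplies the implementation; the controller never
        // references the concrete Context class.
        public OrdersController(IContext context)
        {
            _context = context;
        }
    }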

This is not insane at all - it is a very good litmus test to make sure no dependencies have sneaked in and very useful as such. This would only work though if your abstractions / interfaces are defined in a different assembly than the concrete classes that implement those interfaces.
Having said that, personally I have always kept the composition root within the main web app assembly; there is extra effort involved in this extra assembly, and since I for the most part only inject interfaces, I am not too worried about it, because my main concern is really testability. There might be projects, though, for which this is a worthwhile approach.

You could do some post-build processing to ensure the implementation doesn't leak out.
Cheers
Tymek

Is it bad to register more classes than needed with Autofac?

I would like to use a common Autofac module in several different web projects.
One of these projects does not require all the classes registered in my common module (it uses about half of them). My guess was that if a class is registered but never called, it will not be resolved and so it will not use up extra memory.
Is this OK, or bad practice? Thanks
I think that the amount of extra memory consumed by Autofac (or any DI container) is minimal -if- those types are never resolved. Containers have a lazy-loading mechanism, which prevents slow startups. When a type is resolved for the first time, containers often generate a lot of code (and allocate memory) in the background to be able to do fast resolves on any later request for that type. Do note, though, that containers that have some sort of 'verify' feature often force an instance to be created, which triggers the whole building and compilation process. So if you call this verify feature during startup, you lose the lazy-loading benefits.
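For example, Simple Injector is one container with such a verify feature; calling Verify() eagerly builds every registration, trading lazy startup for early failure detection (the repository types below are invented):

    var container = new SimpleInjector.Container();
    container.Register<IOrderRepository, SqlOrderRepository>();

    // Forces all registrations to be built and instances to be created,
    // so misconfigurations surface at startup instead of at first resolve.
    container.Verify();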
Some developers even go a step further and tell the container to reflect over all assemblies and register every type it finds by its interfaces. When doing this, you might see a lot of types ending up in the container that are never used and can actually never be resolved (because they weren't intended to be created by the container). The idea is that this keeps the container configuration very simple, and they don't care about the extra garbage.
Although this can simplify the container's configuration, the downside of this approach is that it makes it much harder to have a simple integration test that verifies the correctness of the DI configuration, because there will be a lot of false positives; the test will constantly fail because there are a lot of invalid registrations in the container. And it gets even worse if your container has some sort of diagnostic service that detects common misconfigurations. Again, such a configuration will trigger lots of false positives, or might force you to disable that feature altogether (depending on the framework you use).
That's why I usually advise against doing this type of batch registration (although I'm not against batch registration itself).
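For reference, the scan-everything style being cautioned against looks roughly like this in Autofac (SomeType is a placeholder for any type in the scanned assembly):

    var builder = new ContainerBuilder();

    // Registers every concrete type in the assembly against all interfaces
    // it implements, whether or not anything ever resolves them.
    builder.RegisterAssemblyTypes(typeof(SomeType).Assembly)
           .AsImplementedInterfaces();

    var container = builder.Build();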
Performance-wise, probably not, but...
...this means that you also have to add unneeded references to your project. I would avoid it, to keep the number of dependencies as low as possible. In my view, registering dependencies is something that belongs to the application itself, not something shared across multiple applications. After all, things like lifetime may vary from one application to another.

How to use Ninject across assemblies

I can see that similar questions have been asked previously, but being totally new to DI and .NET, I am not able to grasp the entire solution, or may not have found the right source...
I have assemblies WebAPI, BL, and DL.
WebAPI is dependent on BL,
BL is dependent on DL,
WebAPI DOES NOT reference DL, and I would like to keep it that way. There are a few more assemblies, but this is sufficient to illustrate the issue.
WebAPI has an application start section, so I can use it to initialize the Ninject kernel and register dependencies for the WebAPI project.
How could I achieve the same for BL and other assemblies?
There are a couple of different ways: you can use Ninject Conventions to automagically resolve every ISomething to an implementation that has the same name (e.g. IThing -> Thing), or you can create a Ninject module in each assembly which registers that assembly's dependencies (the module in your BL could load the module in your DL).
The approach you take would depend on whether you need to define different scopes for different objects; for example, if you want some things resolved as singletons, that may affect which method you use.
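A minimal sketch of the module-per-assembly approach (type names invented for illustration):

    using Ninject;
    using Ninject.Modules;

    // In the DL assembly:
    public class DataModule : NinjectModule
    {
        public override void Load()
        {
            Bind<IOrderRepository>().To<SqlOrderRepository>();
        }
    }

    // In the BL assembly; loading the DL module here means WebAPI never
    // needs a reference to DL:
    public class BusinessModule : NinjectModule
    {
        public override void Load()
        {
            Bind<IOrderService>().To<OrderService>();
            Kernel.Load(new DataModule());
        }
    }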
I think Mark Seemann's advice about this is great: make the composition root at the highest possible layer of your application. For web apps, that means in Global.asax. I could expound on the good reasons for this, but the linked blog post does a better job.
This does break the layering you are trying to achieve, but only barely, and in what I think is an appropriate way. If your web layer is appropriately thin (i.e., you could replace it with a thick client fairly easily), then it isn't a big loss. If you are really averse to that, you could create a composition root in the BL for the DL.
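If you go the module route, the composition root in Global.asax then stays tiny (a sketch, reusing the hypothetical modules above):

    protected void Application_Start()
    {
        // The web layer references only BL; BusinessModule pulls in the
        // DL bindings on its own.
        var kernel = new StandardKernel(new BusinessModule());
        // ... hand the kernel to your dependency resolver here
    }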

What is the use of Spring.Net?

We are developing an application using Silverlight and WCF services. Is using Spring.Net beneficial for us?
>> "Is using Spring.Net is beneficial for us?"
I think the spirit of your question is really geared more towards questioning the benefit of using an IoC/DI framework versus manually managing dependencies as needed. My response will focus more on the why and why not of IoC/DI and not so much on which specific framework to use.
As Martin Fowler mentioned at a recent conference, DI allows you to separate configuration from usage. For me, thinking about DI in the light of configuration and usage as separate concerns is a great way to start asking the right questions. Does your application need multiple configurations for its dependencies? Does it need the ability to modify behavior by configuration? Keep in mind, this means that dependencies are resolved at runtime and typically require an XML configuration file, which is nice because changes can be made without recompiling the assembly. Personally, I'm not a fan of XML-based configuration of dependencies, because the entries end up being consumed as "magic strings": there is the danger of introducing runtime errors if you misspell a class name, for instance. But if you need the ability to configure on the fly, this is probably the best solution today.
On the other hand, there are DI frameworks like Ninject and StructureMap that allow fluent, in-code dependency definitions. You lose the ability to change definitions on the fly, but you get the added benefit of compile-time validation, which I prefer. If all you want from a DI framework is to resolve dependencies, then you can eliminate the XML-based frameworks from the equation.
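The fluent style looks roughly like this in Ninject (the mailer types are invented):

    var kernel = new StandardKernel();

    // Renaming SmtpMailSender now breaks the build instead of breaking
    // a production deployment via a stale XML string.
    kernel.Bind<IMailSender>().To<SmtpMailSender>().InSingletonScope();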
From a Silverlight perspective, DI can be used in various ways. The most obvious is to define the relationship of Views to ViewModels. Going deeper, however, you can define validation and RIA context dependencies, etc. Having all of the dependencies defined in a configuration class keeps the code free from needing to know how to get/create instances, letting it focus on usage instead. Don't forget that the container can manage the lifetime of each object instance based on your config. So if you need to share an instance of a type (e.g. Singleton, ManagedThread, etc.), this is supported by declaring the lifetime scope of each type registered with the container.
I just realized at this point I'm ranting and I apologize. Hope this helps!
Personally, I'd recommend using either Castle or Unity, as I've had great success with both and found them both, while different, excellent IoC frameworks.
Besides the IoC component, they also provide other nifty tools (AOP in Castle, interface interception in Unity, for example) which you will no doubt find a use for in the future; and having an IoC framework in place from the start is ALWAYS a hell of a lot easier than trying to retrofit it.
It's incredibly easy to set up and configure, although personally I'm not a huge fan of the XML way of doing things, as some of those config files can turn into a total nightmare. A lot of people will tell you that it's only worth doing if you intend to swap components in and out, but why not do it anyway, IN CASE you decide you need to later? It's better to have it and not need it than to need it and not have it. If you're worried about the performance hit, there are plenty of blog posts around the web comparing the various IoC frameworks for speed, and unless you're building brain-surgery robots or the US missile defence platform, it won't be an issue.
A DI framework might be of use if you want to change big chunks of your application without having to rewrite your constructors. For example, you might expose a Comet streaming service through an interface, and later decide that you'd rather use a dedicated messaging system such as MQ or Rendezvous. You would then write an adapter for MQ that respects the common facade and just change the Spring config to use the MQ implementation rather than the Comet one.
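A sketch of that swap, with made-up names:

    public interface IStreamFeed
    {
        void Publish(string topic, string message);
    }

    public class CometFeed : IStreamFeed
    {
        public void Publish(string topic, string message) { /* push over Comet */ }
    }

    // Later: an adapter that satisfies the same facade over MQ. Only the
    // container configuration changes to point at this class.
    public class MqFeedAdapter : IStreamFeed
    {
        public void Publish(string topic, string message) { /* forward to the MQ client */ }
    }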
But for the love of Tony the Pony, don't use Spring.Net to create your MVVM/MVP/MVC bindings for each and every view, or you'll enter a world of pain.
DI is a great tool when used with parsimony; please don't end up with 243 Spring configuration files, for your devs' sanity.
Using an IoC container such as Spring.Net is beneficial, as it will enable you to unit test parts of your UI by swapping in mocked or special test implementations of the application's interfaces. In the long run, this should make your application more maintainable for future developers.
I think if you do more in code rather than using markup to do bindings, etc., and have a BAL/DAL, DI can help there, because it can inject the correct business component reference (as one example). DI has many other practical advantages, but then you have to do more in code and less in markup.

Real world solutions using Dependency Injection

I was reading about DI thoroughly, and it seems interesting. So far, I'm totally living without it.
All the examples I have seen relate to JNDI and how DI helps you be more flexible.
What are real-life applications/problems that you solved with DI that would have been hard to solve in other ways?
UPDATE
All the answers so far are educational, but to rephrase the question: I'm looking for examples from your own programming life that made you say "this problem would be best solved with a DI framework".
Just the other day, I decided to read up on dependency injection. Until then, I only knew the word. Honestly, my reaction to Martin Fowler's article was, "That's it?"
I have to agree with James Shore:
"Dependency Injection" is a 25-dollar term for a 5-cent concept.
That doesn't mean at all that it's a bad concept. But seriously, when an instance A needs to work with another instance B, it comes down to these choices:
let A find B:
That means B must be global. Evil.
let A create B:
Fine, if only A needs B. As soon as C also needs B, replace A by C in this list. Note that a test case would be a C, so if you do want to test, this choice is gone as well.
give B to A:
That's dependency injection.
Am I missing something? (Note that I'm from the Python world, so maybe there are language specific points I'm not seeing.)
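The three choices, condensed into C# (names are purely illustrative):

    class B { public void DoStuff() { } }

    // 1. Let A find B: B is effectively global. Evil.
    static class Globals { public static B B = new B(); }
    class A1 { public void Work() { Globals.B.DoStuff(); } }

    // 2. Let A create B: fine until some C (e.g. a test) also needs to supply B.
    class A2 { private readonly B _b = new B(); }

    // 3. Give B to A: dependency injection.
    class A3
    {
        private readonly B _b;
        public A3(B b) { _b = b; }
    }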
Yesterday I found a mobile phone on the bus. The person who lost it had no idea who now possessed it. I called her dad and told him I had his daughter's mobile. So I injected the dependency on me into him. Typically a case of the Hollywood principle: "Don't call us (because you can't!), we'll call you." Later he came and picked up his daughter's phone.
I'd call that a real-world problem which I solved by dependency injection, wouldn't you?
In my opinion, DI is not THE way to solve problems for which we would have no other solution. Factories are another way to solve such problems.
So there is no real answer to your question, because DI is just one way among others. It is just a pretty hip, although very elegant, way.
I really enjoyed DI when I had these DAOs that needed an SQLMapper: I just had to inject the different mappers into the parent class once, and the rest was done by configuration. It saved me a lot of time and LOCs, but I still can't call this a problem for which there is no other solution.
I use dependency injection for testing all the time. It's also extremely helpful when you have a bunch of large systems that you do not want to directly tie together (extremely loose coupling).
If you're using Java, I would recommend Google Guice, since it rocks so much. For C++, I recommend Qt IOC. For .NET, the Castle Project provides a nice IOC system. There is also a Spring implementation basically everywhere, but that's boring.
DI allows you to create applications that can be configured and reconfigured without touching the codebase itself. Not just URLs or settings, though: generic objects can be written in code and then "customized" or configured via XML files to achieve the specific result desired for a given case.
For example, I can create a RegexDetective class where the actual regex it looks for is provided through a setter, and then in my Spring DI XML file define one actual regex for RegexDetective.setRegex() for a deployment of SleuthApp going to London. A few days later I can go back and update the regex in the XML file for another deployment of SleuthApp shipping out to Siberia.
DI also allows one to define specific implementations of interfaces in a similar fashion, outside of the codebase in XML, to modify the behavior of an application without actually touching the code, such as choosing the AngryDetective or ArcticDetective implementation of the Detective interface in the DI XML file.
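Translated into C# terms, the class being configured might look like this (a sketch mirroring the hypothetical RegexDetective above; the pattern would come from the container's XML rather than from code):

    public interface IDetective
    {
        bool Matches(string clue);
    }

    public class RegexDetective : IDetective
    {
        // Populated by the container from configuration, so each
        // deployment can ship a different pattern without a rebuild.
        public string Regex { get; set; }

        public bool Matches(string clue)
        {
            return System.Text.RegularExpressions.Regex.IsMatch(clue, Regex);
        }
    }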
We run a multimedia service for different countries. We run the same code for everyone, but sometimes business rules differ from one country to another. In those cases, we inject a different Spring MVC interceptor for one client or another.
Then, in the deploy phase, a script "chooses" which DI file is needed, based on the suffix of the file names (fr for France, ch for Switzerland, etc...):
application-context.xml-fr
application-context.xml-ch
etc...
That's the only good use I see for DI. I'm not a fan of DI, though.
I've used Spring's IoC (DI) container for the last three web apps I've developed. I don't think it's suited to one particular type of problem; rather, it's a different way of solving a problem. As you've said, it's a more flexible approach for large systems. My personal favourite feature of DI is that you can write better unit tests, because your classes are highly decoupled. Also important for me is code reuse: since I use the container in many apps, I can use the same components and know that their dependencies will be fed in externally.
In large multi-module applications using DI, a module depends only on the interfaces of its classes' collaborators, which cuts compile-time dependency graphs.
In the context of frameworks (which call "your" functional code, as opposed to your functional code calling a library), this is required. Your code can have compile-time dependencies on the framework, but we all know the framework (out of your hands) cannot have compile-time dependencies on your code ;-)
I use DI primarily for ease of testing. Additionally, it fosters the model of stubbing out your service calls to provide isolation and the ability to implement independently of service development.

What is the overhead cost associated with IoC containers like StructureMap?

After attending a recent Alt.NET group on IoC, I got to thinking about the tools available and how they might work. StructureMap in particular uses both attributes and bootstrapper concepts to map requests for IThing to ConcreteThing. Attributes automatically throw up flags for me that either reflection or IL injection is going on. Does anyone know exactly how this works (for StructureMap or other IoC tools) and what the associated overhead might be either at run-time or compile-time?
I can't say much about other IoC toolkits, but I use Spring.Net and have found that there is a one-off initial performance penalty at startup. Once the container has been configured, the application runs unaffected.
I use Windsor from the Castle Project and have found it immensely useful in reducing dependencies. I haven't noticed a performance issue yet, but one thing I do find is that the configuration can get a bit cumbersome. To help in this regard, I'm starting to look at Binsor, which is a DSL for Windsor written in Boo.
Another thing to be aware of is that when navigating code, you won't be able to jump to the code that will actually execute at runtime.
The major problem is that code becomes hard to understand. It can become pure magic if one overuses IoC. Another problem is performance. In most cases the performance loss is not noticeable, but when you start creating most of your objects via the IoC container, it can suddenly drop below sea level.
I built a very lightweight and basic IoC, here:
http://blogs.microsoft.co.il/blogs/shay/archive/2008/09/30/building-custom-object-mapper.aspx
It's not an alternative to the libraries you mentioned, but if all you need is to resolve a type by its interface, it might be a perfect solution.
I don't handle instantiation types (singleton, transient, thread, pool...); all objects are instantiated as singletons. You call it like:
IRepository _repository = ObjectFactory.BuildFactory<IRepository>();
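For illustration, the core of such a factory can be as small as a dictionary from interface type to instance (a sketch, not the actual code from the linked post):

    using System;
    using System.Collections.Generic;

    public static class ObjectFactory
    {
        private static readonly Dictionary<Type, object> instances =
            new Dictionary<Type, object>();

        // Everything registered here lives as a singleton.
        public static void Register<TInterface>(TInterface instance)
        {
            instances[typeof(TInterface)] = instance;
        }

        public static TInterface BuildFactory<TInterface>()
        {
            return (TInterface)instances[typeof(TInterface)];
        }
    }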
Shay
