We have a Web API project that references a library which is in turn shared between many different systems.
The library exposes a RegisterDependencies(IWindsorContainer container) method that, when called, registers all of the dependencies for that particular library.
So when the Web API application starts, it does its own registrations and then calls the shared library, something like the following pseudocode:
public void SetupWindsor()
{
    IWindsorContainer container = new WindsorContainer();
    container.Register(Component.For<IFoo>().ImplementedBy<Foo>());
    SharedLibrary.RegisterDependencies(container);
}
The shared library function looks a bit like this:
public void RegisterDependencies(IWindsorContainer container)
{
    container.Register(Component.For<IBar>().ImplementedBy<Bar>());
    // In this instance Bar is internal, so it could not be registered by the API project.
}
We now want the Web API project to use the PerWebRequest lifestyle, but other systems may want to use a different lifestyle. (This decision will be down to whoever writes the client app; the shared library will be used in console and Windows apps, so PerWebRequest isn't suitable for them.)
We want to change the RegisterDependencies method so that we can pass in the lifestyle.
Is it possible to do this with the PerWebRequest lifestyle? A few of us had a look and couldn't see how to do it, despite having done something similar without issues with Unity in the past.
We are looking for a way to solve the problem as described, but if there is a better way to approach this, that would also be helpful.
Edit to clarify: the shared library defines the interface IBar as public but the class Bar as internal, so the API cannot do the registration (unless there is some magic in Windsor that I am unaware of).
the shared library defines the interface IBar as public but the class Bar as internal so the API cannot do the registration
Let's examine that design decision for a moment. While no external callers can create a Bar object, they can obtain an IBar instance:
IWindsorContainer container = new WindsorContainer();
SharedLibrary.RegisterDependencies(container);
IBar bar = container.Resolve<IBar>();
There can be various reasons to keep the Bar class internal, but since you can already run code like the above, why not simply let the shared library expose a factory?
IBar bar = SharedLibrary.CreateBar();
That'd be much simpler, and it would decouple the shared library from the container. In general, that's the best advice I can give regarding Dependency Injection and reusable libraries: Use Dependency Injection patterns, not containers. See my article DI-Friendly Library for more details.
If you still want to use a DI Container in your host project (e.g. the Web API project), you can register IBar against that static factory method.
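As a minimal sketch, assuming the CreateBar factory from above, that registration could look like this in the host project (UsingFactoryMethod is Windsor's hook for delegating creation):

var container = new WindsorContainer();

// The container delegates creation to the shared library's public factory,
// so Bar can stay internal; the host also picks whatever lifestyle it wants.
container.Register(
    Component.For<IBar>()
        .UsingFactoryMethod(() => SharedLibrary.CreateBar())
        .LifestylePerWebRequest());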
Internal classes
There are legitimate reasons why one would want to keep some classes internal, but I often see code bases where classes are internal for no apparent reason. My experience with decades of C# code is that the cases where classes are internal for no good reason far outweigh the cases where there's a good reason.
Does Bar really have to be internal? Why?
From the OP it's clear that IBar is implemented by Bar. This could be an artefact of reducing the question to essentials (good job doing that, BTW), but I get the impression that Bar is the only class that implements IBar. Is that the case?
If so, the Bar class must expose the methods defined by IBar. Thus, the API of the class is already public, so why hide the class? The only thing hidden, then, is the constructor...
The perfect storm
Sometimes when I give advice like the above, I'm met with the response that that's not what the question was about; the question is about how to change Castle Windsor lifestyles (which, by the way, you can).
This question, on the other hand, is almost the perfect (little) storm that illustrates why reusable libraries shouldn't depend on DI Containers:
The library should be usable in more than one context (web, console, desktop, etc.)
The library depends on Castle Windsor
Interfaces are public, but implementation classes aren't
This complicates things - hence the question here on Stack Overflow.
My best advice is to keep things simple. Removing the dependency on Castle Windsor will make library development and reuse easier, not harder.
It'd be even easier to use the library if you make most library classes public.
You can set the lifestyle on the component registration using LifeStyle.Is(LifestyleType).
Something like this will allow you to pass the lifestyle in:
public void RegisterDependencies(IWindsorContainer container, Castle.Core.LifestyleType lifestyleType)
{
    container.Register(Component.For<IBar>().ImplementedBy<Bar>().LifeStyle.Is(lifestyleType));
}
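Each host can then choose the lifestyle that suits it, for example:

// Web API project: per-web-request lifetime.
SharedLibrary.RegisterDependencies(container, LifestyleType.PerWebRequest);

// Console or desktop client: a lifestyle appropriate to that host.
SharedLibrary.RegisterDependencies(container, LifestyleType.Singleton);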
Alternatively, take a look at registering components by convention over the assembly; this would allow control of the lifestyle from within the API project without needing a public implementation (see the sketch after the link below):
https://github.com/castleproject/Windsor/blob/master/docs/registering-components-by-conventions.md
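As a hedged sketch of that idea (assuming Bar implements IBar and lives in the shared library's assembly), the API project could scan the assembly itself and choose the lifestyle locally:

// Host project: scan the shared library's assembly, including internal types,
// and decide the lifestyle here rather than inside the library.
container.Register(
    Classes.FromAssemblyContaining<IBar>()
        .IncludeNonPublicTypes()   // picks up internal classes such as Bar
        .BasedOn<IBar>()
        .WithServiceBase()
        .LifestylePerWebRequest());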
Recently I've read Mark Seemann's article about the Service Locator anti-pattern.
The author points out two main reasons why Service Locator is an anti-pattern:
API usage issue (which I'm perfectly fine with)
When a class employs a Service Locator, it is very hard to see its dependencies because, in most cases, the class has only one parameterless constructor.
In contrast with Service Locator, the DI approach exposes dependencies explicitly via constructor parameters, so they are easily seen in IntelliSense.
Maintenance issue (which puzzles me)
Consider the following example.
We have a class MyType which employs the Service Locator approach:
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();
    }
}
Now we want to add another dependency to class MyType:
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();

        // new dependency
        var dep2 = Locator.Resolve<IDep2>();
        dep2.DoSomething();
    }
}
And here is where my misunderstanding starts. The author says:
It becomes a lot harder to tell whether you are introducing a breaking change or not. You need to understand the entire application in which the Service Locator is being used, and the compiler is not going to help you.
But wait a second: if we were using the DI approach, we would introduce the dependency as another constructor parameter (in the case of constructor injection), and the problem would still be there. If we may forget to set up the Service Locator, then we may equally forget to add a new mapping in our IoC container, and the DI approach would have the same run-time problem.
Also, the author mentioned unit-testing difficulties. But won't we have issues with the DI approach too? Won't we need to update all the tests that instantiate the class? We would update them to pass a new mocked dependency just to make the tests compile, and I don't see any benefit from that update and the time spent on it.
I'm not trying to defend the Service Locator approach. But this misunderstanding makes me think that I'm missing something very important. Could somebody dispel my doubts?
UPDATE (SUMMARY):
The answer to my question "Is Service Locator an anti-pattern?" really depends on the circumstances. I definitely wouldn't suggest crossing it off your tool list; it might come in very handy when you start dealing with legacy code. If you're lucky enough to be at the very beginning of your project, then the DI approach might be a better choice, as it has some advantages over Service Locator.
And here are the main differences that convinced me not to use Service Locator in my new projects:
Most obvious and important: Service Locator hides class dependencies
If you are using an IoC container, it will likely scan all constructors at startup to validate all dependencies and give you immediate feedback on missing mappings (or wrong configuration); this is not possible if you're using your IoC container as a Service Locator
For details read excellent answers which are given below.
If you define patterns as anti-patterns just because there are some situations where they do not fit, then YES, it's an anti-pattern. But by that reasoning, all patterns would also be anti-patterns.
Instead we have to look at whether there are valid usages of the patterns, and for Service Locator there are several use cases. But let's start by looking at the examples that you have given.
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();

        // new dependency
        var dep2 = Locator.Resolve<IDep2>();
        dep2.DoSomething();
    }
}
The maintenance nightmare with that class is that the dependencies are hidden. If you create and use that class:
var myType = new MyType();
myType.MyMethod();
You cannot tell that it has dependencies, because they are hidden by the service location. Now, if we instead use dependency injection:
public class MyType
{
    private readonly IDep1 dep1;
    private readonly IDep2 dep2;

    public MyType(IDep1 dep1, IDep2 dep2)
    {
        this.dep1 = dep1;
        this.dep2 = dep2;
    }

    public void MyMethod()
    {
        dep1.DoSomething();
        // new dependency
        dep2.DoSomething();
    }
}
You can directly spot the dependencies and cannot use the classes before satisfying them.
In a typical line of business application you should avoid the use of service location for that very reason. It should be the pattern to use when there are no other options.
Is the pattern an anti-pattern?
No.
For instance, inversion of control containers would not work without service location. It's how they resolve the services internally.
But a much better example is ASP.NET MVC and WebApi. What do you think makes the dependency injection possible in the controllers? That's right -- service location.
Your questions
But wait a second: if we were using the DI approach, we would introduce the dependency as another constructor parameter (in the case of constructor injection), and the problem would still be there.
There are two more serious problems:
With service location you are also adding another dependency: the service locator itself.
How do you tell which lifetime the dependencies should have, and how/when they should get cleaned up?
With constructor injection using a container you get that for free.
If we may forget to set up the Service Locator, then we may equally forget to add a new mapping in our IoC container, and the DI approach would have the same run-time problem.
That's true. But with constructor injection you do not have to scan the entire class to figure out which dependencies are missing.
Some of the better containers also validate all dependencies at startup (by scanning all constructors), so with those containers you get the runtime error directly at startup rather than at some later point.
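Simple Injector is one container that works this way; as a brief sketch (Dep1 here is a placeholder implementation):

var container = new SimpleInjector.Container();
container.Register<IDep1, Dep1>();
// ...IDep2 registration forgotten by mistake...

// Throws at startup for any unresolvable constructor parameter,
// instead of failing at the first resolve deep inside the app.
container.Verify();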
Also, the author mentioned unit-testing difficulties. But won't we have issues with the DI approach too?
No, because you do not have a dependency on a static service locator. Have you tried to get parallel tests working with static dependencies? It's not fun.
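To illustrate the parallel-test problem, here is a hedged sketch (Locator and its Register method are hypothetical, and the fake classes are placeholders): two tests mutating the same static locator race when run in parallel.

[Test]
public void ProcessesOrder()
{
    Locator.Register<IDep1>(new FakeDep1());      // global mutation
    new MyType().MyMethod();                      // may observe the other test's fake
}

[Test]
public void RejectsOrder()
{
    Locator.Register<IDep1>(new ThrowingDep1());  // races with the test above
    new MyType().MyMethod();
}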
I would also like to point out that IF you are refactoring legacy code, the Service Locator pattern is not only not an anti-pattern, it is a practical necessity. No one is ever going to wave a magic wand over millions of lines of code so that it all suddenly becomes DI-ready. So if you want to start introducing DI to an existing code base, it is often the case that you will convert things into DI services slowly, and the code that references those services will often NOT itself use DI. Hence THAT code will need to use the Service Locator to get instances of the services that HAVE been converted to use DI.
So when refactoring large legacy applications to start using DI concepts, I would say that not only is Service Locator NOT an anti-pattern, it is the only way to apply DI concepts gradually to the code base.
From the testing point of view, Service Locator is bad. See Miško Hevery's Google Tech Talk for a nice explanation with code examples (http://youtu.be/RlfLCWKxHJ0, starting at minute 8:45). I liked his analogy: if you need $25, ask for the money directly rather than handing over your wallet for the money to be taken from it. He also compares a Service Locator to a haystack that contains the needle you need and knows how to retrieve it. Classes using a Service Locator are hard to reuse because of it.
Maintenance issue (which puzzles me)
There are two different reasons why using a Service Locator is bad in this regard.
In your example, you are hard-coding a static reference to the service locator into your class. This tightly couples your class directly to the service locator, which in turn means it won't function without the service locator. Furthermore, your unit tests (and anybody else who uses the class) are also implicitly dependent on the service locator. One thing that seems to have gone unnoticed here is that when using constructor injection you don't need a DI container when unit testing, which considerably simplifies your unit tests (and developers' ability to understand them). That is the realized unit-testing benefit you get from using constructor injection.
As for why constructor IntelliSense is important, people here seem to have missed the point entirely. A class is written once, but it may be used in several applications (that is, several DI configurations). Over time, it pays dividends if you can look at the constructor definition to understand a class's dependencies, rather than consulting the (hopefully up-to-date) documentation or, failing that, going back to the original source code (which might not be handy) to determine what those dependencies are. The class with the service locator is generally easier to write, but you more than pay the cost of this convenience in ongoing maintenance of the project.
Plain and simple: A class with a service locator in it is more difficult to reuse than one that accepts its dependencies through its constructor.
Consider the case where you need to use a service from LibraryA that its author decided would use ServiceLocatorA and a service from LibraryB whose author decided would use ServiceLocatorB. We have no choice other than using 2 different service locators in our project. How many dependencies need to be configured is a guessing game if we don't have good documentation, source code, or the author on speed dial. Failing these options, we might need to use a decompiler just to figure out what the dependencies are. We may need to configure 2 entirely different service locator APIs, and depending on the design, it may not be possible to simply wrap your existing DI container. It may not be possible at all to share one instance of a dependency between the two libraries. The complexity of the project could even be further compounded if the service locators don't happen to actually reside in the same libraries as the services we need - we are implicitly dragging additional library references into our project.
Now consider the same two services made with constructor injection. Add a reference to LibraryA. Add a reference to LibraryB. Provide the dependencies in your DI configuration (by analyzing what is needed via Intellisense). Done.
Mark Seemann has a StackOverflow answer that clearly illustrates this benefit in graphical form, which not only applies when using a service locator from another library, but also when using foreign defaults in services.
My knowledge is not good enough to judge this, but in general, I think that if something has a use in a particular situation, it does not necessarily follow that it cannot be an anti-pattern. Especially when you are dealing with third-party libraries, you don't have full control over all aspects, and you may end up using a solution that is not the very best.
Here is a paragraph from Adaptive Code via C#:
"Unfortunately, the service locator is sometimes an unavoidable anti-pattern. In some application types— particularly Windows Workflow Foundation— the infrastructure does not lend itself to constructor injection. In these cases, the only alternative is to use a service locator. This is better than not injecting dependencies at all. For all my vitriol against the (anti-) pattern, it is infinitely better than manually constructing dependencies. After all, it still enables those all-important extension points provided by interfaces that allow decorators, adapters, and similar benefits."
-- Hall, Gary McLean. Adaptive Code via C#: Agile coding with design patterns and SOLID principles (Developer Reference) (p. 309). Pearson Education.
I can suggest considering a generic approach to avoid the demerits of the Service Locator pattern. It allows declaring class dependencies explicitly and substituting mocks, and it doesn't depend on a particular DI container.
Possible drawbacks of this approach are:
It makes your control classes generic.
It is not easy to override some particular interface.
First, declare the interface:
public interface IResolver<T>
{
    T Resolve();
}
Then create a 'flattened' class that implements IResolver<T> for the most frequently used interfaces from the DI container, and register it. This short example uses a Service Locator, but only before the composition root; the alternative is to inject each interface through the constructor:
public class FlattenedServices :
    IResolver<I1>,
    IResolver<I2>,
    IResolver<I3>
{
    private readonly DIContainer diContainer;

    public FlattenedServices(DIContainer diContainer)
    {
        this.diContainer = diContainer;
    }

    I1 IResolver<I1>.Resolve()
        => diContainer.Resolve<I1>();

    I2 IResolver<I2>.Resolve()
        => diContainer.Resolve<I2>();

    I3 IResolver<I3>.Resolve()
        => diContainer.Resolve<I3>();
}
Constructor injection on some MyType class should then look like this:
public class MyType<T> : IResolver<T>
    where T : class, IResolver<I1>, IResolver<I3>
{
    private readonly T servicesContext;

    public MyType(T servicesContext)
    {
        this.servicesContext = servicesContext
            ?? throw new ArgumentNullException(nameof(servicesContext));
        _ = (servicesContext as IResolver<I1>).Resolve() ?? throw new ArgumentNullException(nameof(I1));
        _ = (servicesContext as IResolver<I3>).Resolve() ?? throw new ArgumentNullException(nameof(I3));
    }

    public void MyMethod()
    {
        var dep1 = ((IResolver<I1>)servicesContext).Resolve();
        dep1.DoSomething();

        var dep3 = ((IResolver<I3>)servicesContext).Resolve();
        dep3.DoSomething();
    }

    T IResolver<T>.Resolve() => servicesContext;
}
P.S. If you don't need to pass servicesContext further along in MyType, you can declare it as object servicesContext; and make only the constructor generic, not the class.
P.P.S. The FlattenedServices class could be considered the main DI container, and a branded container a supplementary one.
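For completeness, a hypothetical wiring at the composition root (diContainer stands in for whatever configured container you use):

var flattened = new FlattenedServices(diContainer);
var myType = new MyType<FlattenedServices>(flattened);
myType.MyMethod();  // resolves I1 and I3 through the flattened context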
The author reasons that "the compiler won't help you", and that is true. When you design a class, you will want to choose its interface carefully, among other goals to make it as independent as makes sense.
By having the client accept the reference to a service (a dependency) through an explicit interface, you implicitly get checking, so the compiler "helps".
You're also removing the need for the client to know something about the "Locator" or similar mechanisms, so the client is actually more independent.
You're right that DI has its issues / disadvantages, but the mentioned advantages outweigh them by far ... IMO.
You're right that with DI a dependency is introduced in the interface (the constructor), but this is hopefully the very dependency that you need, made visible and checkable.
Yes, Service Locator is an anti-pattern: it violates encapsulation and SOLID.
Service Locator (SL)
Service Locator addresses the [DIP + DI] issue. It lets you satisfy needs by interface name. A Service Locator can be a singleton or can be passed into the constructor.
class A
{
    IB ib;

    void Init()
    {
        ib = ServiceLocator.Resolve<IB>();
    }
}
The problem here is that it is not clear exactly which classes (implementations of IB) are used by the client (A).
Proposal: pass the dependency explicitly.
class A
{
    IB ib;

    A(IB ib)
    {
        this.ib = ib;
    }
}
SL vs. DI IoC container (framework)
SL is about storing instances, whereas a DI IoC container (framework) is more about creating instances.
SL works as a PULL mechanism: dependencies are retrieved inside the constructor.
A DI IoC container (framework) works as a PUSH mechanism: dependencies are put into the constructor.
I think that the author of the article shot himself in the foot when proving that it's an anti-pattern, with the update written 5 years later. There, it is said that this is the correct way:
public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
{
    if (validator == null)
        throw new ArgumentNullException("validator");
    if (shipper == null)
        throw new ArgumentNullException("shipper");

    this.validator = validator;
    this.shipper = shipper;
}
Then below it is said:
Now it's clear that all three objects are required before you can call the Process method; this version of the OrderProcessor class advertises its preconditions via the type system. You can't even compile client code unless you pass arguments to constructor and method (you can pass null, but that's another discussion).
Let me stress the last part again:
you can pass null, but that's another discussion
Why is that another discussion? It is a huge deal. An object that receives its dependencies as arguments fully depends on the previous execution of the application (or the tests) to provide those objects as valid references/pointers. It is not "encapsulated" in the terms the author expressed, as it depends on a lot of external machinery running in a satisfactory manner for the object to be constructed at all, and later to work properly when it needs to use the other class.
The author claims that it's the Service Locator which is not encapsulated, because it depends on an additional object that you can't isolate from in the tests. But that other object could very well be a trivial map or vector, so it's pure data with no behavior. In C++, for example, containers are not part of the language, so you rely on containers (vectors, hash maps, strings, etc.) for all non-trivial classes. Are those classes not isolated because they rely on containers? I don't think so.
I think that with both manual dependency injection and a service locator, objects are not really isolated from the rest: they need their dependencies one way or another; the dependencies are just provided differently. I for one think that a locator even helps with the DRY principle, as it's error-prone and repetitive to pass pointers over and over through the application. The service locator can also be more flexible, in that an object might retrieve its dependencies when needed (if needed at all), not only via the constructor.
The problem of the dependencies not being explicit enough via the constructor (for an object using the service locator) is solved by the very thing I stressed before: passing null pointers. The two systems can even be mixed and matched: if the pointer is null, use the service locator; otherwise, use the pointer. Now it is enforced via the type system, and it's obvious to the user of the class. But we can do even better.
One additional solution that is surely doable in C++ (I don't know about Java/C#, but I suppose it could be done as well) is to write a helper class to be instantiated like LocatorChecker<IOrderValidator, IOrderShipper>. That object could check in its constructor/destructor that the service locator holds a valid instance of the required classes, hence being also less repetitive than the example provided by Mark Seemann.
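A hedged C# translation of that LocatorChecker idea (Locator is a hypothetical static locator with a Resolve<T>() method):

// Fails fast at construction if any required service is missing from the locator.
public sealed class LocatorChecker<T1, T2>
    where T1 : class
    where T2 : class
{
    public LocatorChecker()
    {
        if (Locator.Resolve<T1>() == null)
            throw new InvalidOperationException(typeof(T1).Name + " is not registered.");
        if (Locator.Resolve<T2>() == null)
            throw new InvalidOperationException(typeof(T2).Name + " is not registered.");
    }
}

// Usage: new LocatorChecker<IOrderValidator, IOrderShipper>();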
I'm working on a project at the moment and it's going to be primarily library-based.
I want the library to be consumed using dependency injection, but I want the library to be largely agnostic towards the container being used.
I wrote a "bridge" library a while back to make this sort of thing easier, but I wasn't sure if this was actually the right approach. (Library: https://github.com/clintkpearson/IoCBridge)
I don't want to reference the DI technology (Ninject, Windsor, etc.) directly from my library, as that makes it inflexible for people using it.
There are a few other questions on SO in a similar vein but none of them seem to actually address the problem satisfactorily.
As a side note: I realise I could just make sure the library adheres to the general idiom and uses interfaces & ctor arguments for dependencies, and then just leave it up to the consuming app to register the types in the containers.
The only issue I can see with this (and correct me if I'm wrong) is that this requires the consuming app to actually know which types link to which interfaces, whether some need to be registered as singletons etc... and from a plug-and-play usage perspective that's pretty poor.
It's a bit controversial, but I suggest using Poor Man's Injection. I'm not saying it's great, but it has some valid use cases (just like Service Locator) under certain constraints. It will require a little more maintenance, but it will save you from depending on another library for IoC container registration. You should read Mark Seemann's article on the subject.
I recently implemented this approach in a very simple library of mine. Basically, you write two constructors for the public classes of the library:
private readonly IActionResultFactory _actionResultFactory;
private readonly IBaseUrlProvider _baseUrlProvider;

internal SitemapProvider(IActionResultFactory actionResultFactory, IBaseUrlProvider baseUrlProvider)
{
    _actionResultFactory = actionResultFactory;
    _baseUrlProvider = baseUrlProvider;
}

public SitemapProvider() : this(new ActionResultFactory(), new BaseUrlProvider()) { }
As you can see, only the second constructor is public, and you fill in the dependencies yourself. This also provides encapsulation at the assembly level. You can still test this class by adding an InternalsVisibleTo attribute to the assembly, and you can use dependency injection in your library freely. The user can also create instances with the new keyword or add this class's interface to their IoC registration.
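For reference, the attribute lives in System.Runtime.CompilerServices and goes in the library's assembly metadata ("MyLibrary.Tests" is a placeholder for your test assembly's name):

// AssemblyInfo.cs (or any source file) in the library project:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]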
I don't know of a widely adopted IoC container registration library in .NET. I thought about writing one myself, but each container has its unique features, and it gets more complicated with object life cycles. Also, people will be uneasy about depending on yet another library for this.
A good DI implementation should enable DI on any object, regardless of the latter being DI-agnostic or not.
Prism is a bad example: the last time I used it (2 years ago) it required objects to be DI-aware by enforcing use of the [Injection] attribute. A good counter-example is the Spring Framework (an extremely popular DI framework for Java, with a .NET port called Spring.NET), which allows enabling DI via so-called context files: XML files that describe dependencies. These need not be part of your library, leaving it a completely independent DLL.
The example of Spring shows that you should not need any specific configuration, prerequisites, or patterns in order to make an object injectable, or to allow objects to be injected into it, besides the programming-to-interfaces paradigm and programmatic access to suitable constructors and property setters.
That does not mean every DI framework supports plain CLR (.NET) objects, a.k.a. POCOs. Some frameworks rely only on their own specific mechanisms and may not be suitable for use with DI-independent code. Usually they require a direct dependency on the DI framework from the library, which I think you want to (and probably should) avoid.
I think you have slightly misinterpreted the scope of Dependency Injection. DI is a pattern and a subset of IoC; IoC containers make DI easy and convenient by assisting with dependency resolution. IoC can be categorised as a superset of several methodologies, of which DI is one part.
You do not need IoC frameworks in order to make Dependency Injection work.
If you really insist on using an IoC container instead of leveraging regular DI (i.e. constructor parameters or mandatory property setting), then you should nominate the container/framework; don't try to be all things to all people by kludging together adapters or bridges. Be cautious about over-engineering: a library, by its very definition, has a limited and well defined set of functionality, and therefore should not need a large number of dependencies injected.
They'll most likely want to implement their own versions of some of the interfaces
You don't need an IoC framework to achieve this. If your constructors have their parameters defined as interfaces, then in effect you've already achieved DI: the dependency is injected at construction time, and you know nothing about the actual concrete implementation of it. Let the calling code worry about the nitty-gritty details of which implementation of that interface it wants to pass in.
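A minimal sketch of that, with hypothetical names:

// The library declares only the abstraction it needs:
public interface IDataSource
{
    string Read();
}

public class ReportGenerator
{
    private readonly IDataSource source;

    public ReportGenerator(IDataSource source)
    {
        this.source = source;
    }
}

// The consuming app picks the implementation (SqlDataSource is the caller's own class):
// var generator = new ReportGenerator(new SqlDataSource());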
I am creating a library that integrates our product with another 3rd party product.
The design involves an interface that abstracts the core operations I'd like to integrate into our product, so that when a new 3rd-party API is released, we can transparently switch to it without modifying existing code.
To this end, the code that returns a concrete instance of the object interacting with the 3rd-party API needs to decide which implementation to select.
For simple needs, I assume an entry in the configuration file giving the fully qualified name of the implementing class would suffice.
Should I use an IoC container in this case, or should I use the Factory Method pattern and code it myself (using reflection to instantiate the object, etc.)?
What are the pros and cons of each? Am I missing anything?
Quoting Mark Seemann:
Applications should depend on containers. Frameworks should not.
An IoC container sounds like overkill for your problem. If you have only one dependency to inject, doing it via the config file should be just fine.
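A hedged sketch of the config-file approach (the "ThirdPartyClient" key and the IThirdPartyClient interface are assumptions for illustration):

using System;
using System.Configuration;

// appSettings holds the assembly-qualified type name of the implementation, e.g.
// <add key="ThirdPartyClient" value="MyCompany.Integration.V2Client, MyCompany.Integration" />
string typeName = ConfigurationManager.AppSettings["ThirdPartyClient"];
Type type = Type.GetType(typeName, throwOnError: true);
var client = (IThirdPartyClient)Activator.CreateInstance(type);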
An IoC container is never overkill. You WILL eventually expand on this app if it's successful, or at least use it regularly, and you will get requests to add more.
I'm a Castle user, and it's darn easy to add Castle with NuGet, then just create a new WindsorContainer() at startup and register your interfaces and classes.
It's like asking whether TDD is overkill for a simple app. You should always TDD the app (if you can) and use interfaces rather than concrete types in your classes. IoC is so easy to set up now, and you're only adding a few lines of code, so why not? It will be much harder later on if you didn't originally use an IoC container and you have interfaces and newed-up classes all over your project to organize back into one.
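For scale, the Windsor setup being described is only a few lines (the service names here are illustrative):

// Startup:
var container = new WindsorContainer();
container.Register(Component.For<IGreetingService>().ImplementedBy<GreetingService>());

// Anywhere you would otherwise new up the class:
var service = container.Resolve<IGreetingService>();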
I am maintaining an ASP.NET MVC project. In the project the original developer has an absolute ton of interfaces. For example: IOrderService, IPaymentService, IEmailService, IResourceService. The thing I am confused about is each of these is only implemented by a single class. In other words:
OrderService : IOrderService
PaymentService : IPaymentService
My understanding of interfaces has always been that they are used to create an architecture in which components can be interchanged easily. Something like:
Square : IShape
Circle : IShape
Furthermore, I don't understand how these are being created and used. Here is the OrderService:
public class OrderService : IOrderService
{
    private readonly ICommunicationService _communicationService;
    private readonly ILogger _logger;
    private readonly IRepository<Product> _productRepository;

    public OrderService(ICommunicationService communicationService, ILogger logger,
        IRepository<Product> productRepository)
    {
        _communicationService = communicationService;
        _logger = logger;
        _productRepository = productRepository;
    }
}
These objects never seem to be created directly, as in OrderService orderService = new OrderService(); the code always uses the interface. I don't understand why the interfaces are being used instead of the classes implementing them, or how that even works. Is there something major I am missing about interfaces that my Google skills aren't uncovering?
This particular design pattern is typically to facilitate unit testing, as you can now replace OrderService with a TestOrderService, both of which are only referenced as IOrderService. This means you can write TestOrderService to provide specific behavior to a class under test, then sense whether the class under test is doing the correct things.
In practice, the above is often accomplished by using a Mocking framework, so that you don't actually hand-code a TestOrderService, but rather use a more concise syntax to describe how it should behave for a typical test, then have the mocking framework dynamically generate an implementation for you.
As for why you never see 'new OrderService' in the code, it's likely that your project is using some form of Inversion of Control container, which facilitates automatic Dependency Injection. In other words, you don't have to construct OrderService directly, because somewhere you've configured that any use of IOrderService should automatically be fulfilled by constructing a singleton OrderService and passing it in to the constructor. There are a lot of subtleties here and I'm not exactly sure how your dependency injection is being accomplished (it doesn't have to be automatic; you can also just construct the instances manually and pass them in through the constructors.)
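Manual wiring, for contrast, might look like this; the concrete class names are assumptions, since only the interfaces appear in the question:

// Hand-wired composition root: every dependency is constructed and passed explicitly.
IOrderService orderService = new OrderService(
    new CommunicationService(),   // assumed concrete ICommunicationService
    new FileLogger(),             // assumed concrete ILogger
    new ProductRepository());     // assumed concrete IRepository<Product>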
That's not the only use of interfaces; in MVC they are being used to decouple contract from implementation. To understand MVC you need to read up a bit on related topics such as separation of concerns and inversion of control (IoC). The actual act of creating the objects passed to the OrderService constructor is handled by the IoC container, based on some predefined mapping.
These objects never seem to be created directly, as in OrderService orderService = new OrderService()
So what?
The point is that SOMEONE calls the OrderService constructor, and THE CALLER is responsible for creating the dependencies. The caller hands them over.
I don't understand why the interfaces are being used instead of the classes implementing them
Because you do not want to know the class: it may change, be external, or be configurable through an IoC container, and the programmer decided not to require even a common base class. The fewer assumptions you make about how someone implements the utility classes you use, the better.
Is there something major I am missing about interfaces that my Google skills aren't uncovering?
No, but a good book about OO programming would help more than random Google snippets. This basically falls into the architecture area and .NET basics (for the first part).
It's good practice to program against interfaces rather than concrete classes. This question gives good reasons why, including allowing the implementation to change (e.g. for testing).
Just because there's currently only 1 class that implements the interface doesn't mean that it can't change in the future.
Furthermore, I don't understand how these are being created and used.
This is called dependency injection, and it basically means that the class doesn't need to know how or where to instantiate its dependencies; someone else handles that.
These are service interfaces, which encapsulate some kind of externality. You often have just a single implementation of them in your main project, but your tests use simpler implementations that don't depend on that external stuff.
For example if your payment service contacts paypal to verify payments, you don't want to do that in a test of unrelated code. Instead you might replace them with a simple implementation that always returns "payment worked" and check that the order process goes through, and another implementation that returns "payment failed" and check that the order process fails too.
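As a sketch, such a test double can be tiny (IPaymentService's members are assumed here for illustration):

// Hypothetical stub: always reports success, never contacts PayPal.
class AlwaysApprovesPaymentService : IPaymentService
{
    public bool VerifyPayment(Order order) => true;  // "payment worked"
}

// A companion stub returning false would cover the "payment failed" path.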
To avoid depending on the implementation, you don't create instances yourself, you accept them in the constructor. Then the IoC container that creates your class will fill them in. Look up Inversion of Control.
Your project has probably some code that sets up the IoC container in its startup code. And that code contains information about which class to create when you want an implementation of a certain interface.
What would be the best way to implement NLog in my Prism / CAL WPF application? This might be an amateur question; I am a bit new to the whole Prism framework :)
I thought about putting the reference to the NLog DLL in the Infrastructure module and making a wrapper singleton class, e.g. MyLogger. My thinking was to have the reference to one logger implementation in a central place that everything already references, and the only such place I know of in Prism is the Infrastructure module.
The obvious other way is to add a reference to NLog to each module, but I think that would defeat the purpose of decoupling and all of that.
Any ideas would be most helpful
Regards
I would recommend something similar to your first idea, although it leverages an already existing interface in Prism.
While I'm not sure of the exact method signatures available to you in NLog, you may want to consider using Prism's ILoggerFacade interface, which is typically defined in your Bootstrapper (see the StockTraderRI application for an example of how this is set up). Typically this acts as a pass-through to Microsoft's composite logging interface, but there's no reason why you can't use it to hook into your own logger.
A few reasons to consider this approach:
It uses the already existing ILoggerFacade interface in the Prism framework, which other developers will be familiar with
If you later decide to go to a different logging framework, you just have to replace the object behind the ILoggerFacade implementation
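A hedged sketch of such a hook: Prism 4's ILoggerFacade exposes a single Log(string, Category, Priority) method (the namespace differs between Prism and the older CAL), which can be routed to NLog like this:

// using Microsoft.Practices.Prism.Logging;  // Prism 4; older CAL uses Microsoft.Practices.Composite.Logging

public class NLogLoggerFacade : ILoggerFacade
{
    private static readonly NLog.Logger Logger = NLog.LogManager.GetCurrentClassLogger();

    public void Log(string message, Category category, Priority priority)
    {
        switch (category)
        {
            case Category.Debug:     Logger.Debug(message); break;
            case Category.Warn:      Logger.Warn(message);  break;
            case Category.Exception: Logger.Error(message); break;
            default:                 Logger.Info(message);  break;
        }
    }
}

// In the Bootstrapper: protected override ILoggerFacade CreateLogger() => new NLogLoggerFacade();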
The other approach would be to do as you suggest: create an interface that defines a logging service for NLog (or expose an existing NLog interface) in your Infrastructure DLL, and register the implementation of that service in your bootstrapper. You can then use your dependency injection container to get a reference to the logger service in your modules. Note, however, that this really just reproduces what the ILoggerFacade interface already gives you.