I am developing an internal library to be used by other developers in the company I'm working for. I am applying SOLID principles and following the best practices described in the article "Dependency Injection (DI) Friendly Library".
My end users would be developers of different applications. Some of them are complex legacy applications with no DI, and others are newer apps that have DI and TDD.
Now, I am trying to figure out how to call this DI-friendly library from a legacy ASP.NET WebForms application that has no DI implemented in it, and obviously I can't revise 250+ .aspx pages to support constructor injection, because that is out of the scope of my project. (Yes, I have read Introducing an IoC Container to Legacy Code.)
One idea that I had was creating a static global wrapper for Common Service Locator to automatically resolve dependencies throughout the app:
public static class GlobalResolver
{
    public static T Resolve<T>()
    {
        return ServiceLocator.Current.GetInstance<T>();
    }
}
The nice thing about this approach is that I can use any IoC library in my composition root (I currently use Unity). I would use this GlobalResolver like this:
protected void OnClick(object sender, EventArgs e)
{
    IMailMessage message = MessageFactory.Create("Jack.Daniels@jjj.com", "John.Doe@jjj.com", "subject", "Body", true, MailPriority.High);
    GlobalResolver.Resolve<IMailer>().SendMail(message);
}
I like this approach and I think it's clean, but novice developers in my company might get confused by this GlobalResolver.Resolve<IMailer> line, so I'm trying to see if there is an alternative to this out there.
One thing that comes to my mind is something like this:
public static class CommonCatalog
{
    public static IMailer Mailer => ServiceLocator.Current.GetInstance<IMailer>();
    public static IMailMessageFactory MessageFactory => ServiceLocator.Current.GetInstance<IMailMessageFactory>();
    public static IFtpSecureClientFactory FTPClientFactory => ServiceLocator.Current.GetInstance<IFtpSecureClientFactory>();
    // And so on...
}
And simply use it like this: CommonCatalog.Mailer.SendMail(message);. Developers at my company are used to seeing static methods, and I think this approach might be desirable for them.
My questions are:
Is this the best solution for my problem?
Am I violating any of the best practices?
Is there a design pattern that describes the CommonCatalog class? Is it a "Facade" or a "Proxy"?
TLDR: Developers at my company like to use Static methods, but static methods are incompatible with DI and SOLID practices. Is there any way to trick people into thinking that they are using static methods, but behind the scenes call DI code?
If you want to avoid the Service Locator anti-pattern (which you should, because it's an anti-pattern), then the first option with a GlobalResolver is out of the question, because it's definitely a Service Locator.
The catalog of services is closer to the Facades I recommend in my expanded article on DI-friendly libraries, although I usually prefer not having an aggregated catalog of objects. It always makes me uncomfortable when I don't know how to name objects, and a name like CommonCatalog seems too devoid of meaning.
Rather, I'd prefer making instance-based Facades with the Fluent Builder pattern as described in the article, since it tends to be more flexible when you, down the line, discover that you need to add various options and switches to the facade.
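For illustration, a minimal sketch of such an instance-based Facade with a Fluent Builder might look like the following. All the names here (MailerBuilder, SmtpMailer, the With* options) are hypothetical, not taken from the article or any real library:

```csharp
// Hypothetical fluent-builder facade; every name here is illustrative.
public interface IMailer
{
    void SendMail(string to, string body);
}

public class SmtpMailer : IMailer
{
    private readonly string host;
    private readonly int port;

    public SmtpMailer(string host, int port)
    {
        this.host = host;
        this.port = port;
    }

    public void SendMail(string to, string body)
    {
        // Real SMTP plumbing omitted for the sketch.
        System.Console.WriteLine("Sending via " + host + ":" + port + " to " + to);
    }
}

public class MailerBuilder
{
    // Sensible defaults; each With* method returns the builder so calls chain.
    private string host = "localhost";
    private int port = 25;

    public MailerBuilder WithHost(string host) { this.host = host; return this; }
    public MailerBuilder WithPort(int port) { this.port = port; return this; }

    public IMailer Create()
    {
        return new SmtpMailer(host, port);
    }
}
```

A caller writes `new MailerBuilder().WithHost("smtp.example.com").Create()`, and new options can later be added to the builder without breaking existing callers.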
If you absolutely must, though, you can add a static method for each of the Facades. Something like this:
public static class Mailer
{
    public static IMailer Default
    {
        get { return new MailerBuilder().Create(); }
    }
}
If the instance conceivably has Singleton lifetime, you can instead use the Singleton design pattern. You could implement a default MessageFactory that way:
public static class MailMessageFactory
{
    public static IMailMessageFactory Default { get; } =
        new MailMessageFactoryBuilder().Create();
}
Notice that this doesn't use a Service Locator for the implementation, either.
To be clear, though, what's behind such Facades could easily be implemented according to the SOLID principles, but the calling code would still have a hard time doing that.
Related
I have a class which contains a few dependencies (all interfaces). Basically the behavior of the class is defined through the implementation of those interfaces. I want to be able to have a "builder" which can create instances of this class with different implementations of the interfaces (or parts of them). Something like this:
public class API
{
    private readonly ISomeInterface _someInterface;
    private readonly ISomeOtherInterface _someOtherInterface;
    private readonly ISomeAnotherInterface _someAnotherInterface;

    API(ISomeInterface someInterface, ISomeOtherInterface someOtherInterface, ISomeAnotherInterface someAnotherInterface)
    { /* implementation omitted */ }

    // Example method
    public void DoSomethingWhichDependsOnOneOrMoreInterfaces()
    {
        // some code
        if (_someInterface != null)
            _someInterface.SomeMethod();
    }
}

public class MyApiBuilder
{
    // implementation omitted
    API CreateAPI(SomeEnum type)
    {
        switch (type)
        {
            case SomeEnum.SpecificAPI32:
                var specificImplementationOfSomeInterface = new ImplementsISomeInterface();
                specificImplementationOfSomeInterface.Setup("someSetup");
                var specificImplementationOfOtherInterface = new ImplementsISomeOtherInterface();
                return new API(specificImplementationOfSomeInterface, specificImplementationOfOtherInterface, null);
            default:
                throw new ArgumentException("Unknown API type");
        }
    }
}
What is the most elegant way of implementing this (if this makes sense at all)? I was first thinking of the Builder Design Patterns but as far as I understood it, its slightly different.
[Edit]
As pointed out, the way I am implementing it is a factory method, but I am not fully satisfied with it. The API can contain a variety of different interfaces which can be totally independent of each other, but some may depend on others (not mandatorily, though). I would like to give the user (the developer using this "API") as much freedom as possible in creating the API he wants to use. Let me try to explain what I am basically up to:
Let's say I am developing a plugin for a game engine which can post achievements and other stuff to various social media channels. So basically there could be an interface which implements the access to Twitter, Facebook, YouTube, whatever, or some custom server. This custom server could need some kind of authentication process. The user should be able to build the API at start-up in a nice (hmm, fluent is nice...) way. So basically something like this:
var myTotallyForMyNeedsBuildAPI = API.CreateCustomApi().With(Api.Twitter).And(Api.Facebook).And(Api.Youtube).And(Api.CustomServer).With(Security.Authentification);
I actually do not know how to make that fluent but something like this would be nice.
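One way to get that fluent shape is sketched below, under the assumption that each channel is just an enum value and authentication is a boolean switch. All the names are made up for illustration:

```csharp
using System.Collections.Generic;

// Illustrative only: channels as enum values, authentication as a switch.
public enum Channel { Twitter, Facebook, Youtube, CustomServer }

public class SocialApi
{
    private readonly List<Channel> channels;

    internal SocialApi(List<Channel> channels, bool authenticated)
    {
        this.channels = channels;
        Authenticated = authenticated;
    }

    public IReadOnlyList<Channel> Channels { get { return channels; } }
    public bool Authenticated { get; private set; }
}

public class ApiBuilder
{
    private readonly List<Channel> channels = new List<Channel>();
    private bool authenticated;

    // Each call returns the builder itself; that is what makes the chain read fluently.
    public ApiBuilder With(Channel channel) { channels.Add(channel); return this; }
    public ApiBuilder And(Channel channel) { return With(channel); } // alias for readability
    public ApiBuilder WithAuthentication() { authenticated = true; return this; }

    public SocialApi Build() { return new SocialApi(channels, authenticated); }
}
```

With this in place, the wish above reads almost verbatim: `new ApiBuilder().With(Channel.Twitter).And(Channel.Facebook).WithAuthentication().Build()`.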
It's a good practice to use Dependency Injection as you want to give the programmer the ability to compose the object with desired configuration.
Check MEF and Unity frameworks which are great for this job.
For example in Unity you can write this:
// Introducing an implementation for ISomeInterface
container.RegisterType<ISomeInterface, SomeImplementation>();
// Introducing an implementation for ISomeOtherInterface
container.RegisterType<ISomeOtherInterface, SomeOtherImplementation>();
// Introducing an implementation for ISomeAnotherInterface
container.RegisterType<ISomeAnotherInterface, SomeAnotherImplementation>();
container.RegisterType<API, API>();
// and finally unity will compose it for you with desired configurations:
var api = container.Resolve<API>();
In this scenario the api will be composed with desired implementations.
What you have implemented is the Factory method pattern.
It's perfectly fine for what you are trying to do, but you could have a look at the other factory patterns (e.g. here), based on your context and how you think your code will evolve in the future.
Anyway, I would also consider not tying these three interfaces together in a single factory. If they are really so tightly coupled that they must be consumed and built together, maybe they should not be three different interfaces in the first place, or at least they should all be implemented by the same class, so that your factory builds the appropriate class with the proper implementations of these.
Probably what you are after is the Decorator pattern.
In your API class you invoke each interface if they have been provided to the API instance, which is the behaviour of the Decorator pattern.
With this pattern you obtain a modular implementation that allow you to add multiple behaviours to your API.
I am currently weighing up the advantages and disadvantages between DI and SL. However, I have found myself in the following catch 22 which implies that I should just use SL for everything, and only inject an IoC container into each class.
DI Catch 22:
Some dependencies, like Log4Net, simply do not suit DI. I call these meta-dependencies and feel they should be opaque to calling code. My justification being that if a simple class 'D' was originally implemented without logging, and then grows to require logging, then dependent classes 'A', 'B', and 'C' must now somehow obtain this dependency and pass it down from 'A' to 'D' (assuming 'A' composes 'B', 'B' composes 'C', and so on). We have now made significant code changes just because we require logging in one class.
We therefore require an opaque mechanism for obtaining meta-dependencies. Two come to mind: Singleton and SL. The former has known limitations, primarily with regards to rigid scoping capabilities: at best a Singleton will use an Abstract Factory which is stored at application scope (ie. in a static variable). This allows some flexibility, but is not perfect.
A better solution would be to inject an IoC container into such classes, and then use SL from within that class to resolve these meta-dependencies from the container.
Hence catch 22: because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
I would greatly appreciate your thoughts :)
Because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
Using the service locator pattern completely defeats one of the main points of dependency injection. The point of dependency injection is to make dependencies explicit. Once you hide those dependencies by not making them explicit parameters in a constructor, you're no longer doing full-fledged dependency injection.
These are all constructors for a class named Foo (set to the theme of the Johnny Cash song):
Wrong:
public Foo() {
    this.bar = new Bar();
}
Wrong:
public Foo() {
    this.bar = ServiceLocator.Resolve<Bar>();
}
Wrong:
public Foo(ServiceLocator locator) {
    this.bar = locator.Resolve<Bar>();
}
Right:
public Foo(Bar bar) {
    this.bar = bar;
}
Only the latter makes the dependency on Bar explicit.
As for logging, there's a right way to do it without it permeating into your domain code (it shouldn't but if it does then you use dependency injection period). Amazingly, IoC containers can help with this issue. Start here.
Service Locator is an anti-pattern, for reasons excellently described at http://blog.ploeh.dk/2010/02/03/ServiceLocatorIsAnAntiPattern.aspx. In terms of logging, you could treat it as a dependency just like any other and inject an abstraction via constructor or property injection.
The only difference with log4net, is that it requires the type of the caller that uses the service. Using Ninject (or some other container) How can I find out the type that is requesting the service? describes how you can solve this (it uses Ninject, but is applicable to any IoC container).
Alternatively, you could think of logging as a cross cutting concern, which isn't appropriate to mix with your business logic code, in which case you can use interception which is provided by many IoC containers. http://msdn.microsoft.com/en-us/library/ff647107.aspx describes using interception with Unity.
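The decorator variant of that idea can also be hand-rolled without any container. The sketch below uses invented IOrderService names, and a plain Action<string> stands in for a real logging abstraction:

```csharp
using System;

// Invented names for the example; the point is the wrapping, not the domain.
public interface IOrderService
{
    void Place(string orderId);
}

public class OrderService : IOrderService
{
    public void Place(string orderId)
    {
        // Business logic only; no logging concerns in here.
    }
}

// Decorator: same interface, wraps the real service, adds logging around the call.
public class LoggingOrderService : IOrderService
{
    private readonly IOrderService inner;
    private readonly Action<string> log; // stand-in for a real logger abstraction

    public LoggingOrderService(IOrderService inner, Action<string> log)
    {
        this.inner = inner;
        this.log = log;
    }

    public void Place(string orderId)
    {
        log("Placing order " + orderId);
        inner.Place(orderId);
        log("Placed order " + orderId);
    }
}
```

Only the composition root decides whether to wrap; the business class itself never changes when logging is added or removed.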
My opinion is that it depends. Sometimes one is better and sometimes the other. But I'd say that generally I prefer DI. There are a few reasons for that.
When a dependency is injected into a component, it can be treated as part of the component's interface. That makes it easier for the component's user to supply these dependencies, because they are visible. With an injected SL or a static SL, those dependencies are hidden, and using the component is a bit harder.
Injected dependencies are also better for unit testing, because you can simply mock them. With SL you have to set up the Locator and then mock the dependencies again, so it is more work.
Sometimes logging can be implemented using AOP, so that it doesn't mix with business logic.
Otherwise, the options are:
use an optional dependency (such as a setter property), and don't inject any logger in unit tests. The IoC container will take care of setting it automatically for you in production.
When you have a dependency that almost every object of your app is using (a "logger" object being the most common example), it's one of the few cases where the Singleton anti-pattern becomes a good practice. Some people call these "good singletons" an Ambient Context:
http://aabs.wordpress.com/2007/12/31/the-ambient-context-design-pattern-in-net/
Of course this context has to be configurable, so that you can use stub/mock for unit testing.
Another suggested use of AmbientContext is to put the current date/time provider there, so that you can stub it during unit tests and accelerate time if you want.
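A minimal sketch of such an ambient date/time context might look like this (the TimeProvider name is illustrative): production code uses the real clock by default, while a test swaps in a fixed one and resets it afterwards.

```csharp
using System;

// Ambient Context for the current time. Defaults to the real clock,
// but a test can substitute a frozen clock and restore the default later.
public static class TimeProvider
{
    private static Func<DateTime> current = () => DateTime.UtcNow;

    public static DateTime UtcNow
    {
        get { return current(); }
    }

    public static void Set(Func<DateTime> provider) { current = provider; }
    public static void Reset() { current = () => DateTime.UtcNow; }
}
```

A unit test would call `TimeProvider.Set(() => new DateTime(2000, 1, 1))`, exercise the code under test, and then call `TimeProvider.Reset()`.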
This is regarding the 'Service Locator is an Anti-Pattern' by Mark Seeman.
I might be wrong here. But I just thought I should share my thoughts too.
public class OrderProcessor : IOrderProcessor
{
    public void Process(Order order)
    {
        var validator = Locator.Resolve<IOrderValidator>();
        if (validator.Validate(order))
        {
            var shipper = Locator.Resolve<IOrderShipper>();
            shipper.Ship(order);
        }
    }
}
The Process() method of OrderProcessor does not actually follow the Inversion of Control principle. It also breaks the Single Responsibility Principle at the method level. Why should a method be concerned with instantiating the objects (via new or any S.L. class) it needs to accomplish anything?
Instead of having the Process() method create the objects, the constructor can take parameters for the respective objects (read: dependencies), as shown below. Then HOW is a Service Locator any different from an IoC container? AND it will aid in unit testing as well.
public class OrderProcessor : IOrderProcessor
{
    public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
    {
        this.validator = validator;
        this.shipper = shipper;
    }

    public void Process(Order order)
    {
        if (this.validator.Validate(order))
        {
            shipper.Ship(order);
        }
    }
}

// Caller
public static void Main() // this can be unit test code too
{
    var validator = Locator.Resolve<IOrderValidator>(); // similar to an IoC container
    var shipper = Locator.Resolve<IOrderShipper>();
    var orderProcessor = new OrderProcessor(validator, shipper);
    orderProcessor.Process(order);
}
I have used the Google Guice DI framework in Java, and discovered that it does much more than make testing easier. For example, I needed a separate log per application (not class), with the further requirement that all my common library code use the logger in the current call context. Injecting the logger made this possible. Admittedly, all the library code needed to be changed: the logger was injected in the constructors. At first, I resisted this approach because of all the coding changes required; eventually I realized that the changes had many benefits:
The code became simpler
The code became much more robust
The dependencies of a class became obvious
If there were many dependencies, it was a clear indication that a class needed refactoring
Static singletons were eliminated
The need for session or context objects disappeared
Multi-threading became much easier, because the DI container could be built to contain just one thread, thus eliminating inadvertent cross-contamination
Needless to say, I am now a big fan of DI, and use it for all but the most trivial applications.
We've landed on a compromise: use DI but bundle top-level dependencies into an object avoiding refactoring hell should those dependencies change.
In the example below, we can add to 'ServiceDependencies' without having to refactor all derived dependencies.
Example:
public class ServiceDependencies
{
    public ILogger Logger { get; private set; }

    public ServiceDependencies(ILogger logger)
    {
        this.Logger = logger;
    }
}

public abstract class BaseService
{
    public ILogger Logger { get; private set; }

    public BaseService(ServiceDependencies dependencies)
    {
        this.Logger = dependencies.Logger; // don't expose 'dependencies'
    }
}

public class DerivedService : BaseService
{
    public DerivedService(ServiceDependencies dependencies,
        ISomeOtherDependencyOnlyUsedByThisService additionalDependency)
        : base(dependencies)
    {
        // set local dependencies here.
    }
}
I know people really say DI is the only good IoC pattern, but I don't get this. I will try to sell SL a bit. I will use the new MVC Core framework to show you what I mean. First, DI engines are really complex. What people really mean when they say DI is: use some framework like Unity, Ninject, or Autofac that does all the heavy lifting for you, whereas SL can be as simple as making a factory class. For a small, fast project this is an easy way to do IoC without learning a whole framework for proper DI; the frameworks might not be that difficult to learn, but still.
Now on to the problems that DI can bring. I will use a quote from the MVC Core docs.
"ASP.NET Core is designed from the ground up to support and leverage dependency injection." Most people also say about DI: "99% of your code base should have no knowledge of your IoC container." So why would they need to design it from the ground up if only 1% of the code should be aware of it? Didn't old MVC support DI? Well, this is the big problem of DI: it depends on DI. Making everything work "AS IT SHOULD BE DONE" takes a lot of work. If you look at the new Action Injection, is that not depending on DI if you use the [FromServices] attribute? Now DI people will say NO, you are supposed to go with Factories, not this stuff, but as you can see, not even the people making MVC did it right. The problem of DI is visible in Filters as well; look at what you need to do to get DI into a filter:
public class SampleActionFilterAttribute : TypeFilterAttribute
{
    public SampleActionFilterAttribute() : base(typeof(SampleActionFilterImpl))
    {
    }

    private class SampleActionFilterImpl : IActionFilter
    {
        private readonly ILogger _logger;

        public SampleActionFilterImpl(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<SampleActionFilterAttribute>();
        }

        public void OnActionExecuting(ActionExecutingContext context)
        {
            _logger.LogInformation("Business action starting...");
            // perform some business logic work
        }

        public void OnActionExecuted(ActionExecutedContext context)
        {
            // perform some business logic work
            _logger.LogInformation("Business action completed.");
        }
    }
}
Whereas if you used SL you could have done this with var _logger = Locator.Get<ILogger>();. And then we come to the Views. With all their good will regarding DI, they had to use SL for the views: the new syntax @inject StatisticsService StatsService is the same as var StatsService = Locator.Get<StatisticsService>();.
The most advertised part of DI is unit testing. But what people end up doing is just testing their mock services with no purpose, or having to wire up their DI engine to do real tests. And I know that you can do anything badly, but people end up making an SL-style locator even if they don't know what it is, whereas not many people do DI without reading up on it first.
My biggest problem with DI is that the user of the class must be aware of the inner workings of the class in order to use it.
SL can be used in a good way and has some advantages, above all its simplicity.
I know this question is a little old, I just thought I would give my input.
In reality, 9 times out of 10 you really don't need SL and should rely on DI. However, there are some cases where you should use SL. One area that I find myself using SL (or a variation, thereof) is in game development.
Another advantage of SL (in my opinion) is the ability to pass around internal classes.
Below is an example:
internal sealed class SomeClass : ISomeClass
{
    internal SomeClass()
    {
        // Add the service to the locator
        ServiceLocator.Instance.AddService<ISomeClass>(this);
    }

    // Maybe remove the service within a finalizer or dispose method if needed.
    internal void SomeMethod()
    {
        Console.WriteLine("The user of my library doesn't know I'm doing this, let's keep it a secret");
    }
}

public sealed class SomeOtherClass
{
    private ISomeClass someClass;

    public SomeOtherClass()
    {
        // Get the service and call a method
        someClass = ServiceLocator.Instance.GetService<ISomeClass>();
        someClass.SomeMethod();
    }
}
As you can see, the user of the library has no idea this method was called, because we didn't use DI; not that we'd be able to anyway.
If the example only takes log4net as dependency, then you only need to do this:
ILog log = LogManager.GetLogger(typeof(Foo));
There is no point in injecting the dependency, as log4net provides granular logging by taking the type (or a string) as a parameter.
Also, DI is not correlated with SL. IMHO the purpose of a Service Locator is to resolve optional dependencies.
E.g.: if the SL provides an ILog interface, I will write logging data.
For DI, do you need to have a hard reference to the injected type's assembly? I don't see anyone talking about that. With SL, I can tell my resolver where to load my type dynamically when it is needed, from a config.json or similar. Also, if your assembly contains several thousand types and their inheritance hierarchies, do you need thousands of cascading calls to the service collection provider to register them? That is a point I don't see discussed much. Most talk is about the benefit of DI and what it is in general; when it comes to how to implement it in .NET, the answer presented is an extension method that adds a reference to a hard-linked type's assembly. That's not very decoupled to me.
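To illustrate the dynamic-loading point, here is a sketch of resolving an implementation from a type name that could come from a config file, so the consumer holds no compile-time reference to the implementing assembly. All names are hypothetical:

```csharp
using System;

// Sketch: the consumer asks for an interface; the concrete type name comes
// from configuration, so no compile-time reference to the implementation is needed.
public interface IGreeter
{
    string Greet();
}

public class DefaultGreeter : IGreeter
{
    public string Greet() { return "hello"; }
}

public static class DynamicResolver
{
    // In a real app the assembly-qualified name would be read from config.json
    // or an appSettings entry rather than passed in directly.
    public static T Resolve<T>(string assemblyQualifiedName)
    {
        var type = Type.GetType(assemblyQualifiedName, throwOnError: true);
        return (T)Activator.CreateInstance(type);
    }
}
```

Swapping the implementation then means editing the config value, not recompiling the consumer.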
Summary: I want to know the best design for creating cross-platform (eg. desktop, web, and Silverlight) classes in C#, with no duplication of code, with the pros and cons of each design.
I'm often writing new, useful classes for one application domain; there's no reason why they won't work across domains. How can I structure my code to make it ideally cross-platform?
For example, let's say I wanted to make a generic "MyTimer" class with an interval and on-tick event. In desktop, this would use the built-in .NET timer. In Silverlight, I would use a DispatchTimer.
Design #1 might be "create a class and use pre-processor directives for conditional compilation," e.g. "#if SILVERLIGHT ...". However, this leads to code that is less understandable, readable, and maintainable.
Design #2 might be "create subclasses called DesktopTimer and SilverlightTimer and consume those from MyTimer." How would that work?
While this is a trivial case, I may have more complicated classes that, for example, consume platform-specific classes (IsolatedStorage, DispatchTimer, etc.) but aren't directly replacing them.
What other designs/paradigms can I use?
I would suggest writing interfaces that you then implement for your platform-specific code. The interfaces ensure that your code respects the contract given by each interface; otherwise the build breaks (if one member is not implemented).
Besides, within the library where your specific timer classes reside, to stick to your example, I would create a class for each platform, thus using the DispatchTimer for Silverlight and the built-in .NET timer for the desktop version.
In the end, you use only one interface, and only its implementers know how to fulfil the contract on the underlying platform.
EDIT #1
Conditional compilation is not an option for a good design. Here is a tool that will help you deal with Dependency Injection; it is called Unity Application Block, and it is used to deal with scenarios like yours.
You only use an XML configuration, which is very versatile, to "tell" what has to be instantiated when this or that interface is needed. The UnityContainer then consults the configuration you have made and instantiates the right class for you. This ensures a good design approach and architecture.
EDIT #2
I'm not very familiar with Dependency Injection, and not at all familiar with Unity Application Block. Can you point to some resources or explain these a bit further?
Microsoft Enterprise Library 5.0 - April 2010;
Microsoft Unity 2.0 – April 2010;
Microsoft Unity 2.0 Documentation for Visual Studio 2008;
Are there good tutorial/walkthroughs for unity that don't use configuration files? (SO question on the topic that should provide valuable hints to start with Unity);
Specifying Types in the Configuration File;
Walkthrough: The Unity StopLight QuickStart;
Walkthrough: The Unity Event Broker Extension QuickStart.
I think these resources shall guide you through your learnings. If you need further assistance, please let me know! =)
EDIT #3
But anyway, the StopLight quickstart [...] seems to imply that the dependency mapping of interface to concrete class is done in code (which won't work for me).
In fact, you can do both code and XML dependency mapping, the choice is yours! =)
Here are some examples that should perhaps inspire you to make the StopLight quickstart use XML configuration instead of the coded mapping.
Testing Your Unity XML Configuration;
Using Design-Time Configuration;
Source Schema for the Unity Application Block.
If this doesn't help you get through, let me know. I shall then provide a simple example using XML dependency mapping. =)
1) Interfaces with platform-specific classes in their own assemblies: ITimer in a shared assembly, and a "WebAssembly" containing WebTimer, for example. Then "WebAssembly.dll" or "DesktopAssembly.dll" is loaded on demand. This turns it into more of a deployment/configuration issue, and everything compiles. Dependency Injection or MEF becomes a great help here.
2) Interfaces (again), but with conditional compilation. This makes it less of a deployment issue and more of a compilation concern. WebTimer would have #if WEB_PLATFORM around it, and so on.
Personally, I'd lean towards #1, but in a complicated application you'll most likely end up having to use both, because of slight differences in the available parts of the .NET Framework between Silverlight and everything else. You may even want different behavior in core parts of your app purely for performance reasons.
I think interfaces are a good choice here (defining what a timer will do without actually implementing it)
public interface ITimer
{
    void CreateTimer(int _interval, TimerDelegate _delegate);
    void StopTimer();
    // etc...
} // eo interface ITimer
From this, you derive your concrete timers:
public class DesktopTimer : ITimer
{
} // eo class DesktopTimer

public class SilverlightTimer : ITimer
{
} // eo class SilverlightTimer

public class WebTimer : ITimer
{
} // eo class WebTimer
Then comes the fun part. How do we create the right timer? Here you could implement some kind of platform factory that returns the right timer depending on what platform it is running on. Here is a quick and dirty idea (I would make it more dynamic than this, and perhaps implement one factory for multiple kinds of classes, but this is an example):
public enum Platform
{
    Desktop,
    Web,
    Silverlight
} // eo enum Platform

public class TimerFactory
{
    private class ObjectInfo
    {
        private string m_Assembly;
        private string m_Type;

        // ctor
        public ObjectInfo(string _assembly, string _type)
        {
            m_Assembly = _assembly;
            m_Type = _type;
        } // eo ctor

        public ITimer Create()
        {
            return (ITimer)AppDomain.CurrentDomain.CreateInstanceAndUnwrap(m_Assembly, m_Type);
        } // eo Create
    } // eo class ObjectInfo

    private Dictionary<Platform, ObjectInfo> m_Types = new Dictionary<Platform, ObjectInfo>();

    public TimerFactory()
    {
        m_Types[Platform.Desktop] = new ObjectInfo("Desktop", "MyNamespace.DesktopTimer");
        m_Types[Platform.Silverlight] = new ObjectInfo("Silverlight", "MyNameSpace.SilverlightTimer");
        // ...
    } // eo ctor

    public ITimer Create()
    {
        // based on platform, create appropriate ObjectInfo
    } // eo Create
} // eo class TimerFactory
As I mentioned above, I would not have a factory for every type of object, but make a generic platform factory that could handle timers, containers, and whatever else you want. This is just an example.
The Model-View-Presenter pattern is a really good approach if you want to separate all of your user interface logic from the actual GUI framework you are using. Read Michael Feathers' article "The Humble Dialog Box" to get an excellent explanation of how it works:
http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf
The original article was made for C++, if you want a C# example, look here:
http://codebetter.com/blogs/jeremy.miller/articles/129546.aspx
The Pros are:
you will make your GUI logic reusable
your GUI logic becomes applicable for unit testing
The Cons:
if your program does not need more than one GUI framework, this approach produces more lines of code, and you have to deal with more complexity, since you have to decide throughout your coding which parts of your code belong in the view and which in the presenter
Go with all the OOD you know. I'd suggest creating a platform-agnostic (Windows, Mono/desktop, web) domain model. Use abstract classes to model platform-dependent stuff (like the Timer). Use Dependency Injection and/or Factory patterns to use specific implementations.
EDIT: at some point you have to specify which concrete classes to use, but using the above-mentioned patterns can bring all that code into one place without using conditional compilation.
EDIT: an example of DI/Factory. Of course you can use one of the existing frameworks, which will give you more power and expressiveness. For this simple example it may seem like overkill, but the more complicated the code, the bigger the gain from using the patterns.
// Common.dll
public interface IPlatformInfo
{
    string PlatformName { get; }
}

public interface PlatformFactory
{
    IPlatformInfo CreatePlatformInfo();
    // other...
}

public class WelcomeMessage
{
    private IPlatformInfo platformInfo;

    public WelcomeMessage(IPlatformInfo platformInfo)
    {
        this.platformInfo = platformInfo;
    }

    public string GetMessage()
    {
        return "Welcome at " + platformInfo.PlatformName + "!";
    }
}

// WindowsApp.exe
public class WindowsPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Windows"; }
    }
}

public class WindowsPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new WindowsPlatformInfo();
    }
}

public class WindowsProgram
{
    public static void Main(string[] args)
    {
        var factory = new WindowsPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}

// MonoApp.exe
public class MonoPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Mono"; }
    }
}

public class MonoPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new MonoPlatformInfo();
    }
}

public class MonoProgram
{
    public static void Main(string[] args)
    {
        var factory = new MonoPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}
As others have suggested, interfaces are the way to go here. I would alter the interface from Moo-Juice's suggestion slightly...

public interface ITimer
{
    void StopTimer(); // etc...
    void StartTimer(); // etc...
    TimeSpan Duration { get; }
} // eo interface ITimer
Now you would need to get the ITimer into the class that uses it. The simplest way to do this is called dependency injection. The most common approach to achieve this is called constructor injection.
So when you create a class that needs a timer, you pass the timer in at construction time.
Basically you do:
var foo = new Foo(new WebTimer());
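To make that line concrete, here is a sketch of what Foo itself could look like. Foo and the FakeTimer implementation are hypothetical names (the question does not show Foo's internals); the point is only that the dependency is declared in the constructor:

```csharp
using System;

// ITimer as defined above, plus a fake implementation used for illustration.
public interface ITimer
{
    void StartTimer();
    void StopTimer();
    TimeSpan Duration { get; }
}

public class FakeTimer : ITimer
{
    public void StartTimer() { }
    public void StopTimer() { }
    public TimeSpan Duration { get { return TimeSpan.FromSeconds(1); } }
}

// Hypothetical consumer: the "Foo" from new Foo(new WebTimer()) above.
public class Foo
{
    private readonly ITimer timer;

    // The dependency arrives from outside; Foo never creates a timer itself.
    public Foo(ITimer timer)
    {
        this.timer = timer;
    }

    public TimeSpan MeasureWork()
    {
        timer.StartTimer();
        // ... the work being measured ...
        timer.StopTimer();
        return timer.Duration;
    }
}
```

Because Foo only sees the interface, a test can hand it a FakeTimer while production code hands it a WebTimer.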
Since that will get complicated quite fast, you can utilize some helpers. This pattern is called inversion of control, and there are frameworks that will help you, like Ninject or Castle Windsor.
Both are inversion of control (IoC) containers. (That's the secret sauce.)
Basically you "register" your timer in the IoC container, and also register your "Foo". When you need a "Foo", you ask your IoC container to create one. The container looks at the constructor, finds that it needs an ITimer, creates one for you, passes it into the constructor, and finally hands you the complete object.
Inside your class you don't need any knowledge about the ITimer or how to create it, since all of that has moved outside.
For different applications you now only need to register the correct components, and you are done...
P.S.: Be careful not to confuse an IoC container with a service locator...
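The register/resolve cycle described above can be sketched with a toy container. Real containers like Ninject or Castle Windsor are far more capable; all names and the API below are invented purely to show the mechanics (reflect over the constructor, resolve each parameter, invoke):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// A deliberately minimal IoC container, for illustration only.
public class TinyContainer
{
    private readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

    // Map an abstraction to the concrete type that should satisfy it.
    public void Register<TService, TImpl>() where TImpl : TService
    {
        registrations[typeof(TService)] = typeof(TImpl);
    }

    public T Resolve<T>() { return (T)Resolve(typeof(T)); }

    private object Resolve(Type service)
    {
        // Use the registered implementation, or the type itself if concrete.
        Type impl = registrations.TryGetValue(service, out var t) ? t : service;
        // Pick the greediest constructor and recursively resolve its parameters.
        ConstructorInfo ctor = impl.GetConstructors()
                                   .OrderByDescending(c => c.GetParameters().Length)
                                   .First();
        object[] args = ctor.GetParameters()
                            .Select(p => Resolve(p.ParameterType))
                            .ToArray();
        return ctor.Invoke(args);
    }
}

// Example types wired through the container.
public interface IGreeter { string Greet(); }
public class EnglishGreeter : IGreeter { public string Greet() { return "Hello"; } }

public class App
{
    private readonly IGreeter greeter;
    public App(IGreeter greeter) { this.greeter = greeter; }
    public string Run() { return greeter.Greet() + ", world"; }
}
```

Usage: `container.Register<IGreeter, EnglishGreeter>();` then `container.Resolve<App>()` builds App with an EnglishGreeter injected, without App knowing anything about the wiring.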
Links:
http://ninject.org/download
http://www.castleproject.org/container/index.html
http://www.pnpguidance.net/Category/Unity.aspx
Why not have a configuration section that tells your library about the platform of the host application? You set it only once in the host's config file (web.config or app.config), and the library can then use the factory method as suggested by Moo-Juice. The platform detail is then available to the entire functionality of the library.
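A minimal sketch of that idea. The appSettings key name and the provider class are assumptions; in the host you would read the value once via `ConfigurationManager.AppSettings` from System.Configuration, so here the mapping logic takes the configured string directly:

```csharp
using System;

// Stand-ins for the IPlatformInfo / PlatformFactory types shown earlier.
public interface IPlatformInfo { string PlatformName { get; } }
public interface PlatformFactory { IPlatformInfo CreatePlatformInfo(); }

public class WindowsPlatformInfo : IPlatformInfo
{
    public string PlatformName { get { return "Windows"; } }
}

public class WindowsPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo() { return new WindowsPlatformInfo(); }
}

// Maps the configured platform name to a concrete factory. The host config
// would carry, e.g.:
//   <appSettings><add key="PlatformName" value="Windows"/></appSettings>
// read once with ConfigurationManager.AppSettings["PlatformName"].
public static class PlatformFactoryProvider
{
    public static PlatformFactory Create(string configuredPlatform)
    {
        switch (configuredPlatform)
        {
            case "Windows": return new WindowsPlatformFactory();
            // case "Mono": return new MonoPlatformFactory(); etc.
            default:
                throw new InvalidOperationException(
                    "Unknown platform: " + configuredPlatform);
        }
    }
}
```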
A month ago I finished reading the book "Art of Unit Testing" and today I finally had time to start using Rhino Mocks with unit testing a service that sends/receives messages to devices (UDP) and saves/loads data from the database.
Of course I want to isolate the database and the UDP communication.
For example for database access, we have some classes with static methods, let's call them:
AreaADBAccess
AreaBDBAccess
AreaCDBAccess
These classes had static methods that executed the database access.
To be able to stub them, I made their methods public virtual.
Then I started refactoring the code to be able to replace the instances of these classes in the code.
After trying different things, I ended up with a factory class that seems to make things so easy.
public class ObjectFactory
{
    private static Dictionary<Type, object> Instances = new Dictionary<Type, object>();

    public static T GetInstance<T>() where T : new()
    {
        // Return a registered (e.g. stubbed) instance if one was set...
        if (Instances.ContainsKey(typeof(T)))
            return (T)Instances[typeof(T)];
        // ...otherwise create a fresh one (note: it is not cached).
        return new T();
    }

    public static void SetInstance<T>(object obj)
    {
        Instances[typeof(T)] = obj;
    }
}
Then on the code, I can use
private AreaADBAccess DBAccess = ObjectFactory.GetInstance<AreaADBAccess>();
And on the test method I do something like
AreaADBAccess dbAccess = mocks.Stub<AreaADBAccess>();
using (mocks.Record())
{
    ...
}
ObjectFactory.SetInstance<AreaADBAccess>(dbAccess);
//Invoke the test method
...
This is a first solution and I realize that it can be done in a better way.
Can you now comment it and point me to the best practices?
Can you also explain to me why I would want to have an interface instead of defining the methods as virtual? It seems useless to repeat the method headers in 2 places.
Thanks!
Probably you could make use of IoC containers like Windsor, StructureMap or Ninject, which would make your life much easier; in fact they work like the "factory" you have created manually, but are much more powerful.
There are also auto-mocking containers which will automatically create mock dependencies for your classes, so you can save additional lines of code and make your tests less fragile.
As for the question about needing an interface instead of classes: it's basically the Dependency Inversion Principle, which states that higher-level modules (classes) should not depend on lower-level modules (classes); both of them should depend on abstractions. Interfaces are those abstractions.
I suggest you take a look at the S.O.L.I.D. principles, which will make your life much easier; at least they did for me.
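To make the interface-vs-virtual point concrete, here is a sketch using names invented from the question. With an interface, the consumer depends only on the abstraction; unlike a class with virtual methods, an interface carries no implementation, so a test stub cannot accidentally run real constructor, static, or non-virtual code:

```csharp
// Hypothetical names based on the question's AreaADBAccess classes.
public interface IAreaADBAccess
{
    string LoadData(int id);
}

public class AreaADBAccess : IAreaADBAccess
{
    public string LoadData(int id)
    {
        // real database access would go here
        return "row " + id;
    }
}

// The service under test depends only on the abstraction, so the real
// DB class and a hand-written (or Rhino Mocks) stub are interchangeable.
public class DeviceService
{
    private readonly IAreaADBAccess db;

    public DeviceService(IAreaADBAccess db)
    {
        this.db = db;
    }

    public string Describe(int id)
    {
        return "Device: " + db.LoadData(id);
    }
}
```

In a test you would pass a stub implementing IAreaADBAccess; in production, the real AreaADBAccess. No ObjectFactory indirection is needed.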
Right now we use DI/IOC and when we need to pass extra parameters to a constructor we use a factory class e.g.
public class EmailSender
{
    internal EmailSender(string toEmail, string subject, string body, ILogger emailLogger)
    {.....}
}

public class EmailSenderFactory
{
    ILogger emailLogger;

    public EmailSenderFactory(ILogger emailLogger)
    {
        this.emailLogger = emailLogger;
    }

    public EmailSender Create(string toEmail, string subject, string body)
    {
        return new EmailSender(toEmail, subject, body, emailLogger);
    }
}
Now the problem with this is that we end up creating a whole lotta factory classes, and people don't always know to use them (they sometimes new them up themselves). What are the biggest negatives of coding the class like this:
public class EmailSender
{
    ILogger logger = IoC.Resolve<ILogger>();

    internal EmailSender(string toEmail, string subject, string body)
    {.....}
}
Pro: we now can use the constructor safely without needing a factory class
Con: we have to reference the service locator (I'm not worried about testability; it's easy to use a mock container as the backing service for the container).
Is there some big stinker of a reason out there why we shouldn't do this?
edit: after a bit of thought, I twigged that by having a private constructor and nesting the Factory class, I could keep the implementation and factory together and prevent people from creating instances improperly, so the question has become somewhat moot. All the points about SL being dirty are of course true, so the solution below keeps me happy:
public class EmailSender
{
    public class Factory
    {
        ILogger emailLogger;

        public Factory(ILogger emailLogger)
        {
            this.emailLogger = emailLogger;
        }

        public EmailSender Create(string toEmail, string subject, string body)
        {
            return new EmailSender(toEmail, subject, body, emailLogger);
        }
    }

    private EmailSender(string toEmail, string subject, string body, ILogger emailLogger)
    {
    }
}
Yes - it is bad.
Why write all that code when you can have the framework do the work? All the IoC.Resolve() calls are superfluous and you shouldn't have to write them.
Another, even more important aspect is that your components are tied to your service locator. You're now unable to instantiate them just like that - you need a completely set-up service locator in place every time you use a component.
Last but not least, your SL code is sprinkled all over your codebase, which is not a good thing: when you want to change something, you have to look in multiple places.
The largest reason I can think of (without just looking at issues with Service Locators in general) is that it is not what I as a user of your class would expect.
Meta discussion:
Some DI frameworks (Guice for example) will build the factory for you.
Some people advocate separating the "newable" from the "injectable".
I'm not quite sure about the strong "it is bad" answer given by Krzysztof. I think there's some trade-off and preference in there without absolutely categorizing this as bad or good.
I don't think writing those IoC.Resolve() calls is any more superfluous than writing specific constructors or properties for the injection mechanism.
On this one, I have to agree that you are tied to a service locator and you have to set it up before instantiating a class. However:
You can segregate your service locator with more specific interfaces. Thereby reducing the coupling with a huge service locator for every services of your system
Yeah, if you use DI mechanism, you will remove all those IoC.Resolve(), but you would still have to use a kind of container to instantiate your "main" services. The DI has to intercept those calls, no?
Your service locator could (should?) be "auto-configurable" or at least, pretty easy to set up.
See above the "segregate your service locator with more specific interfaces..." point.
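The "segregate your service locator with more specific interfaces" point can be sketched like this (all names are invented for illustration). Instead of one global locator, the class takes a narrow "role" interface exposing only the services it actually uses:

```csharp
public interface ILogger
{
    void Log(string message);
}

// A segregated locator: only the services messaging code needs.
public interface IMessagingServices
{
    ILogger Logger { get; }
}

public class RecordingLogger : ILogger
{
    public string Last;
    public void Log(string message) { Last = message; }
}

public class MessagingServices : IMessagingServices
{
    private readonly ILogger logger;
    public MessagingServices(ILogger logger) { this.logger = logger; }
    public ILogger Logger { get { return logger; } }
}

public class EmailSender
{
    private readonly ILogger logger;

    public EmailSender(IMessagingServices services)
    {
        // Coupled only to the small IMessagingServices contract,
        // not to a system-wide service locator.
        logger = services.Logger;
    }

    public void Send(string to)
    {
        logger.Log("Sending to " + to);
    }
}
```

The narrow interface keeps the dependency visible in the constructor signature while avoiding coupling to a huge locator for every service in the system.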
I think using a service locator does indeed hide your dependencies inside the class instead of exposing them through constructors. And that is an inconvenience in my opinion, because you will not know your class is missing something until the service locator is called without having been configured.
But the DI thing is not free of that kind of code darkness. When you use DI, it is really not obvious to understand how those dependencies just "appeared" (DI magic) in your constructor. By using a SL, you at least can see where those dependencies are coming from.
But still, when testing a class that exposes its dependencies in its constructor, you (almost) can't miss them. That is not the case when using a service locator.
I'm not saying Krzysztof was wrong, because I agree with him for the most. But I'm pretty sure using a service locator is not necessarily a bad "design" and certainly not simply bad.
Phil