Dependency Injection when the concrete class needs to be swapped - c#

I am planning to write a shared file that could be used across multiple projects.
My Projects:
Write code to eat Ice cream. (Because ice cream is my favorite food).
(Please note that my favorite food might change to Mango in the future)
Write code to eat Mango. (Because mango is my favorite food).
(Please note that my favorite food might change to Ice cream in the future)
Following is the interface
interface IEatFavoriteFood
{
    void eat();
}
Following will be my concrete classes.
class EatMango : IEatFavoriteFood
{
    public void eat()
    {
        // Some code to eat mango
    }
}
class EatIceCream : IEatFavoriteFood
{
    public void eat()
    {
        // Some code to eat ice cream
    }
}
So, I will need two separate Main programs for my two projects, since each will initialize a different concrete class behind the IEatFavoriteFood interface before calling eat().
static void Main()
{
    IEatFavoriteFood iff = new EatMango();
    iff.eat();
}
and
static void Main()
{
    IEatFavoriteFood iff = new EatIceCream();
    iff.eat();
}
In addition to that, if sometime in the future the favorite food changes to Mango in the first project, then I will have to rewrite the initialization and recompile the project. Is there a better way to implement this functionality? Does using a config file make better sense in this case?

For dependency injection to work, you need some place where the concrete types are listed, which knows which concrete class to plug into for your possible dependencies (the interface).
In dependency injection frameworks, this is often done with some kind of configuration, either via code or with some external files (e.g. some XML configuration).
For your purpose, it’s probably easiest to start simple and have some “central entity” that provides you with your dependencies. This could, for example, be a FavoriteFoodFactory:
public class FavoriteFoodFactory
{
    public static IEatFavoriteFood GetFavoriteFood()
    {
        return new EatIceCream();
    }
}
So in your Main, you would then just ask that factory to give you whatever favorite food is currently configured:
static void Main()
{
    IEatFavoriteFood iff = FavoriteFoodFactory.GetFavoriteFood();
    iff.eat();
}
Of course that just shifts the responsibility for creating a concrete object somewhere else, but that’s exactly the point of dependency injection: something at the very top takes care of how to resolve a dependency, removing that knowledge from the components further down.
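To address the config-file part of the question: here is a minimal sketch of a configuration-driven factory, assuming an appSettings key named "FavoriteFood" holding an assembly-qualified type name (the key and type names are illustrative, not from the original post):
using System;
using System.Configuration; // requires a reference to the System.Configuration assembly

public class FavoriteFoodFactory
{
    public static IEatFavoriteFood GetFavoriteFood()
    {
        // App.config would contain e.g. <add key="FavoriteFood" value="MyApp.EatMango, MyApp" />
        string typeName = ConfigurationManager.AppSettings["FavoriteFood"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (IEatFavoriteFood)Activator.CreateInstance(type);
    }
}
With something like this, switching the favorite food becomes a config change instead of a code change and a recompile, which is essentially what the configuration support of IoC containers gives you in a more general form.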

I would suggest using an IoC container like Autofac (http://autofac.org/), Ninject (http://www.ninject.org/) or whatever you like most.
The container will provide you with different ways to handle that kind of scenario.
For example, in Autofac you can create a module (that is, a class containing the registrations of the concrete classes that will be used by the container) that can be loaded/selected via web.config or code.
You can see detailed documentation with examples here:
http://docs.autofac.org/en/latest/configuration/modules.html
Please note that while using an IoC container can be easy, in most cases you must be aware of component lifecycles in order to avoid problems and memory leaks.
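As a rough sketch of that idea (reusing the IEatFavoriteFood types from the question; the module name is illustrative), the only per-project difference is which module gets registered:
using Autofac;

public class MangoModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<EatMango>().As<IEatFavoriteFood>();
    }
}

public class Program
{
    public static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new MangoModule()); // or an IceCreamModule, or a module chosen from configuration
        var container = builder.Build();

        IEatFavoriteFood iff = container.Resolve<IEatFavoriteFood>();
        iff.eat();
    }
}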

Related

Looking for a Design pattern which can create different instances of a class with different interface implementations

I have a class which contains a few dependencies (all interfaces). Basically the behavior of the class is defined through the implementation of those interfaces. I want to be able to have a "builder" which can create instances of this class with different implementations of the interfaces (or parts of them). Something like this:
public class API
{
    private readonly ISomeInterface _someInterface;
    private readonly ISomeOtherInterface _someOtherInterface;
    private readonly ISomeAnotherInterface _someAnotherInterface;

    API(ISomeInterface someInterface, ISomeOtherInterface someOtherInterface, ISomeAnotherInterface someAnotherInterface)
    { /* implementation omitted */ }

    // Example method
    public void DoSomethingWhichDependsOnOneOrMoreInterfaces()
    {
        // some code
        if (_someInterface != null)
            _someInterface.SomeMethod();
    }
}

public class MyApiBuilder
{
    // implementation omitted
    API CreateAPI(SomeEnum type)
    {
        switch (type)
        {
            case SomeEnum.SpecificAPI32:
                var specificImplementationOfSomeInterface = new ImplementsISomeInterface();
                specificImplementationOfSomeInterface.Setup("someSetup");
                var specificImplementationOfOtherInterface = new ImplementsISomeOtherInterface();
                return new API(specificImplementationOfSomeInterface, specificImplementationOfOtherInterface, null);
            default:
                throw new ArgumentOutOfRangeException("type");
        }
    }
}
What is the most elegant way of implementing this (if this makes sense at all)? I was first thinking of the Builder design pattern, but as far as I understand it, it's slightly different.
[Edit]
As pointed out, the way I am implementing it is a factory method, but I am not fully satisfied with it. The API can contain a variety of different interfaces which can be totally independent of each other, but some may depend on others (though not mandatorily). I would like to give the user (the developer using this "API") as much freedom as possible in creating the API he wants to use. Let's try to explain what I am basically up to:
Let's say I am developing a plugin for a game engine which can post achievements and other stuff to various social media channels. So basically there could be an interface which implements the access to Twitter, Facebook, YouTube, whatever, or some custom server. This custom server could need some kind of authentication process. The user should be able to build the API at start in a nice (hmm, fluent is nice..) way. So basically something like this:
var myTotallyForMyNeedsBuildAPI = API.CreateCustomApi().With(Api.Twitter).And(Api.Facebook).And(Api.Youtube).And(Api.CustomServer).With(Security.Authentification);
I actually do not know how to make that fluent, but something like this would be nice.
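As a hedged sketch of how such a fluent builder could be wired up (the Api and Security enums, the builder members, and the entry-point class are illustrative assumptions, not part of the original code): each step records a choice and returns the builder itself, which is what makes the calls chainable.
using System;
using System.Collections.Generic;

public enum Api { Twitter, Facebook, Youtube, CustomServer }
public enum Security { Authentication }

public class ApiBuilder
{
    private readonly List<Api> channels = new List<Api>();
    private readonly List<Security> security = new List<Security>();

    public ApiBuilder With(Api channel) { channels.Add(channel); return this; }
    public ApiBuilder And(Api channel) { return With(channel); }
    public ApiBuilder With(Security option) { security.Add(option); return this; }

    public API Build()
    {
        // Translate the recorded choices into concrete interface implementations here,
        // then call the API constructor from the question, e.g. new API(someImpl, otherImpl, null).
        throw new NotImplementedException();
    }
}

public static class ApiEntryPoint
{
    // Plays the role of API.CreateCustomApi() from the desired syntax above.
    public static ApiBuilder CreateCustomApi() { return new ApiBuilder(); }
}

// Usage, close to the syntax sketched in the question:
// var api = ApiEntryPoint.CreateCustomApi().With(Api.Twitter).And(Api.Facebook).With(Security.Authentication).Build();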
It's a good practice to use Dependency Injection, as you want to give the programmer the ability to compose the object with the desired configuration.
Check the MEF and Unity frameworks, which are great for this job.
For example in Unity you can write this:
var container = new UnityContainer();

// Introducing an implementation for ISomeInterface
container.RegisterType<ISomeInterface, SomeImplementation>();
// Introducing an implementation for ISomeOtherInterface
container.RegisterType<ISomeOtherInterface, SomeOtherImplementation>();
// Introducing an implementation for ISomeAnotherInterface
container.RegisterType<ISomeAnotherInterface, SomeAnotherImplementation>();
container.RegisterType<API, API>();

// and finally Unity will compose it for you with the desired configuration:
var api = container.Resolve<API>();
In this scenario the api will be composed with desired implementations.
What you have implemented is the Factory Method pattern.
It's perfectly fine for what you are trying to do, but you could have a look at the other factory patterns (i.e. here) based on your context and how you think your code will evolve in the future.
Anyway, I would also consider not tying these three interfaces together in a single factory. If they are really so tightly coupled that they are consumed and built together, maybe they should not be three different interfaces in the first place, or at least all three should be implemented by the same class, so your factory will build the appropriate class with the proper implementation of these.
Probably what you are after is the Decorator pattern.
In your API class you invoke each interface only if it has been provided to the API instance, which is the behaviour of the Decorator pattern.
With this pattern you obtain a modular implementation that allows you to add multiple behaviours to your API.
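For illustration, a minimal sketch of the Decorator idea (ISomeInterface is the interface referenced in the question; CoreImplementation and LoggingDecorator are illustrative names): each decorator implements the same interface, wraps another implementation, and adds its own behaviour around the delegated call.
using System;

public interface ISomeInterface // the interface referenced in the question
{
    void SomeMethod();
}

public class CoreImplementation : ISomeInterface
{
    public void SomeMethod() { /* base behaviour */ }
}

// Wraps any ISomeInterface and adds behaviour before/after delegating to it.
public class LoggingDecorator : ISomeInterface
{
    private readonly ISomeInterface inner;

    public LoggingDecorator(ISomeInterface inner) { this.inner = inner; }

    public void SomeMethod()
    {
        Console.WriteLine("before");
        inner.SomeMethod();
        Console.WriteLine("after");
    }
}

// Behaviours are stacked by nesting decorators:
// ISomeInterface composed = new LoggingDecorator(new CoreImplementation());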

What is an IOC container actually doing for me here?

So I've refactored completely to constructor injection, and now I have a bootstrapper class that looks similar to this:
var container = new UnityContainer();
container.RegisterType<Type1, Impl1>();
container.RegisterType<Type2, Impl2>();
container.RegisterType<Type3, Impl3>();
container.RegisterType<Type4, Impl4>();
var type4Impl = container.Resolve(typeof(Type4)) as Type4;
type4Impl.Run();
I stared at it for a second before realizing that Unity is really not doing anything special here for me. Leaving out the ctor sigs, the above could be written as:
Type1 type1Impl = new Impl1();
Type2 type2Impl = new Impl2();
Type3 type3Impl = new Impl3(type1Impl, type2Impl);
Type4 type4Impl = new Impl4(type1Impl, type3Impl);
type4Impl.Run();
The constructor injection refactoring is great and really opens up the testability of the code. However, I'm doubting the usefulness of Unity here. I realize I may be using the framework in a limited manner (i.e. not injecting the container anywhere, configuring in code rather than XML, not taking advantage of lifetime management options), but I am failing to see how it is actually helping in this example. I've read more than one comment with the sentiment that DI is better off simply used as a pattern, without a container. Is this a good example of that situation? What other benefits does this solution provide that I am missing out on?
I have found that a DI container becomes valuable when you have many types in the container that are dependent on each other. It is at that point that the auto-wire-up capability of a container shines.
If you find that you are referring to the container when you are getting objects out of it, then you are really following the Service Locator pattern.
To some extent you're right. Inversion of control does not need to mean using IoC container at all. If your object graph is small enough and convenient enough to be created at once in some kind of bootstrapping code, that's inversion of control, too.
But using IoC tools simplifies object creation in more complex scenarios. Using IoC tools you can manage object lifecycles, compose your object graph from different configurations or when the whole graph is not known at compile time, easily defer object creation, and so on.
There is no general solution; everything depends on your specific needs. For a simple project with few classes, using IoC can be more annoying than helpful. For a big project I can't even imagine what the bootstrapping code would need to look like.
See my post here for an extensive response to this question.
Most of the other answers here are correct, and say pretty much the same thing. I would add that most IoC containers allow you to auto-bind types to themselves, or use binding by convention. If you set up Unity to do that, then you can get rid of all that binding code entirely.
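For example, here is a hedged sketch of convention-based registration using Autofac's assembly scanning (Unity has a similar registration-by-convention facility); it assumes Type1..Type4 are interfaces implemented by Impl1..Impl4 in the executing assembly:
using System.Reflection;
using Autofac;

public static class ConventionBootstrapper
{
    public static void Run()
    {
        var builder = new ContainerBuilder();

        // Register every concrete type in this assembly as the interfaces it implements,
        // so Impl1 is picked up as Type1 automatically, with no per-type binding lines.
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .AsImplementedInterfaces();

        var container = builder.Build();
        container.Resolve<Type4>().Run();
    }
}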
The difference is that you are doing the dependency injection instead of Unity doing the dependency injection. In your example, you have to know which types need to be created, coupling your code to those types. You now need to know in your code that Impl1 should be created whenever you need a Type1.
Here's a simple code illustration of what others have said (albeit taking a few liberties: property injection instead of constructor injection, assuming you've registered your types, etc.).
public interface IFoo { }
public interface IBar { IFoo FooImpl { get; set; } }
public interface IBaz { IBar BarImpl { get; set; } }
public interface IBat { IBaz BazImpl { get; set; } }
As your object graph grows and dependencies are nested further and further down the graph, you'll have to provide the whole tree:
var bat = new Bat {
    BazImpl = new BazImpl() {
        BarImpl = new BarImpl() {
            FooImpl = new FooImpl()
        }
    }
};
However, if you use the container correctly, all of that resolution comes based on what you've registered:
var bat = container.Resolve<IBat>()
Much like the other answers have probably stated, an IoC container is not required to perform dependency injection. It simply provides for automated dependency injection. If you don't get much of an advantage from the automation, then don't worry too much about a container, especially at the entry point of your application where you're injecting the initial objects.
There are however some things an IoC can make easier:
Lazy initialization. Autofac and a few others (not sure about Unity) can detect a constructor that takes a Func<IMyDependency> and, given a registration for IMyDependency, will automatically generate the appropriate factory method (see the sketch after this list). This reduces the front-loading often required in a DI system, where a lot of big objects like repositories have to be initialized and passed into the top-level object.
Sub-dependency hiding. Say class A needs to instantiate a class B, and B needs C, but A shouldn't know about C. Maybe even class Z which created A can't even know about C. This is the thing for which IoCs were created; throw A, B and C into the container, shake it up and resolve a fully-hydrated B to give to A, or a factory method which can be injected into A (automatically) and which the A can use to create all the B references it wants.
Simple "singletoning". Instead of creating and using a static singleton, an IoC can be told to create and return one and only one instance of any registered dependency no matter how many times that dependency is asked for. This allows the developer to turn any ordinary instance class into a singleton for use in the container, with no code change to the class itself required.
Your example is very simple, and the object graph would be very easily manageable without using a DI framework. If this is really the extent of what is needed, doing manual DI would work fine.
The value of using a DI framework goes up very quickly as the dependency graph becomes more complex.

Question about interfaces in C#

Here is my question...
I work in Telecom industry and have a piece of software which provides the best network available for a given service number or a site installation address. My company uses the network of the wholesale provider and we have our own network as well. To assess what services a customer might be able to get, I call a webservice to find out the services available on a given telephone exchange and based on the services available, I need to run some checks against either our network or the network of the wholesale provider.
My question is how this can be modelled using interfaces in C#? The software that I have does not make use of any interfaces and whatever classes are there are just to satisfy the fact that code cannot live outside classes.
I am familiar with the concept of interfaces, at least on theoretical level, but not very familiar with the concept of programming to interfaces.
What I am thinking is along the following lines:
Create an interface called IServiceQualification which will have an operation defined: void Qualify(). Have two classes called QualifyByNumber and QualifyByAddress, both of which implement the interface and define the details of the operation Qualify. Am I thinking along the right lines, or is there a different/better way of approaching this issue?
I have read a few examples of programming to interfaces, but would like to see this utilized in a work situation.
Comments/suggestions are most welcome.
I would probably make it go a little bit deeper, but you are on the right track. I would personally create IServiceQualification with a Qualify method, and then below that an abstract class called ServiceQualification with an abstract Qualify method that any kind of qualifier class could implement. This lets you define common behavior among your qualifiers (there is bound to be some) while still creating the separation of concerns at a high level.
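A minimal sketch of that layering (following the bool Qualifies(Service) shape used in the other answer below; the exact members are assumptions): the interface stays the public contract, while the abstract base holds behavior shared by all qualifiers.
public interface IServiceQualification
{
    bool Qualifies(Service serv);
}

public abstract class ServiceQualification : IServiceQualification
{
    // Each concrete qualifier supplies its own check.
    public abstract bool Qualifies(Service serv);

    // Common behavior shared by all qualifiers can live here.
    protected bool HasValue(string value)
    {
        return !string.IsNullOrEmpty(value);
    }
}

public class QualifyByNumber : ServiceQualification
{
    public override bool Qualifies(Service serv)
    {
        return HasValue(serv.TelNumber) && serv.TelNumber.StartsWith("01234");
    }
}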
Interfaces have a defined purpose, and using them properly lets you implement them in any way you want without having your code depend on a particular implementation. So, we can create a service method that looks something like:
public bool ShouldQualify(IServiceQualification qualification)
And no matter the implementation we send it, this method will work. It becomes something you never have to change or modify once its working. Additionally, it leads you directly to bugs. If someone reports that qualifications by address aren't working, you know EXACTLY where to look.
Take a look at the strategy design pattern. Both the problem and the approach that you have described sound like a pretty close fit.
http://www.dofactory.com/Patterns/PatternStrategy.aspx
You should think of interfaces in terms of a contract. It specifies that a class implements certain function signatures, meaning your class can call them with known parameters and expect a certain object back; what happens in the middle is up to the developer of the interface to decide. This loose coupling makes your class system a lot more flexible (it has nothing to do with saving keystrokes).
Here's an example which is roughly aimed at your situation (but will require more modelling).
public interface IServiceQualification
{
    bool Qualifies(Service serv);
}

public class ClientTelephoneService : IServiceQualification
{
    public bool Qualifies(Service serv)
    {
        return serv.TelNumber.Contains("01234");
    }
}

public class ClientAddressService : IServiceQualification
{
    public bool Qualifies(Service serv)
    {
        return serv.Address.Contains("ABC");
    }
}

// just a dummy service
public class Service
{
    public string TelNumber = "0123456789";
    public string Address = "ABC";
}

// implementation of a checker which has a list of all available services and takes a client who
// implements the interface (meaning we know we can call the Qualifies method)
public class ClassThatReturnsTheAvailableServices
{
    private List<Service> services = new List<Service>(); // your list of all services (e.g. populated in the ctor)

    public List<Service> CheckServices(IServiceQualification clientServiceDetails)
    {
        var servicesThatQualify = new List<Service>();
        foreach (var service in services)
        {
            if (clientServiceDetails.Qualifies(service))
            {
                servicesThatQualify.Add(service);
            }
        }
        return servicesThatQualify;
    }
}

Dependency Injection vs Service Location

I am currently weighing up the advantages and disadvantages between DI and SL. However, I have found myself in the following catch 22 which implies that I should just use SL for everything, and only inject an IoC container into each class.
DI Catch 22:
Some dependencies, like Log4Net, simply do not suit DI. I call these meta-dependencies and feel they should be opaque to calling code. My justification being that if a simple class 'D' was originally implemented without logging, and then grows to require logging, then dependent classes 'A', 'B', and 'C' must now somehow obtain this dependency and pass it down from 'A' to 'D' (assuming 'A' composes 'B', 'B' composes 'C', and so on). We have now made significant code changes just because we require logging in one class.
We therefore require an opaque mechanism for obtaining meta-dependencies. Two come to mind: Singleton and SL. The former has known limitations, primarily with regards to rigid scoping capabilities: at best a Singleton will use an Abstract Factory which is stored at application scope (ie. in a static variable). This allows some flexibility, but is not perfect.
A better solution would be to inject an IoC container into such classes, and then use SL from within that class to resolve these meta-dependencies from the container.
Hence catch 22: because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
I would greatly appreciate your thoughts :)
Because the class is now being injected with an IoC container, then why not use it to resolve all other dependencies too?
Using the service locator pattern completely defeats one of the main points of dependency injection. The point of dependency injection is to make dependencies explicit. Once you hide those dependencies by not making them explicit parameters in a constructor, you're no longer doing full-fledged dependency injection.
These are all constructors for a class named Foo (set to the theme of the Johnny Cash song):
Wrong:
public Foo() {
    this.bar = new Bar();
}
Wrong:
public Foo() {
    this.bar = ServiceLocator.Resolve<Bar>();
}
Wrong:
public Foo(ServiceLocator locator) {
    this.bar = locator.Resolve<Bar>();
}
Right:
public Foo(Bar bar) {
    this.bar = bar;
}
Only the latter makes the dependency on Bar explicit.
As for logging, there's a right way to do it without it permeating into your domain code (it shouldn't, but if it does, then you use dependency injection, period). Amazingly, IoC containers can help with this issue. Start here.
Service Locator is an anti-pattern, for reasons excellently described at http://blog.ploeh.dk/2010/02/03/ServiceLocatorIsAnAntiPattern.aspx. In terms of logging, you could either treat that as a dependency just like any other, and inject an abstraction via constructor or property injection.
The only difference with log4net is that it requires the type of the caller that uses the service. The question "Using Ninject (or some other container), how can I find out the type that is requesting the service?" describes how you can solve this (it uses Ninject, but is applicable to any IoC container).
Alternatively, you could think of logging as a cross-cutting concern which isn't appropriate to mix with your business logic code, in which case you can use interception, which is provided by many IoC containers. http://msdn.microsoft.com/en-us/library/ff647107.aspx describes using interception with Unity.
My opinion is that it depends. Sometimes one is better and sometimes the other. But I'd say that generally I prefer DI. There are a few reasons for that.
When a dependency is injected into a component, it can be treated as part of its interface. Thus it's easier for the component's user to supply those dependencies, because they are visible. In the case of an injected SL or a static SL, those dependencies are hidden and usage of the component is a bit harder.
Injected dependencies are better for unit testing because you can simply mock them. In the case of SL you have to set up the Locator plus mock the dependencies again. So it is more work.
Sometimes logging can be implemented using AOP, so that it doesn't mix with business logic.
Otherwise, the options are:
use an optional dependency (such as a setter property), and for unit tests you don't inject any logger. The IoC container will take care of setting it automatically for you if you run in production.
When you have a dependency that almost every object of your app is using (a "logger" object being the most common example), it's one of the few cases where the singleton anti-pattern becomes a good practice. Some people call these "good singletons" an Ambient Context:
http://aabs.wordpress.com/2007/12/31/the-ambient-context-design-pattern-in-net/
Of course this context has to be configurable, so that you can use a stub/mock for unit testing.
Another suggested use of Ambient Context is to put the current date/time provider there, so that you can stub it during unit tests and accelerate time if you want.
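A minimal sketch of such an ambient date/time context (the names are illustrative and not taken from any particular library):
using System;

public abstract class TimeProvider
{
    private static TimeProvider current = new DefaultTimeProvider();

    // Production code reads TimeProvider.Current.UtcNow; tests assign a stub to Current.
    public static TimeProvider Current
    {
        get { return current; }
        set { current = value; }
    }

    public abstract DateTime UtcNow { get; }

    private class DefaultTimeProvider : TimeProvider
    {
        public override DateTime UtcNow
        {
            get { return DateTime.UtcNow; }
        }
    }
}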
This is regarding 'Service Locator is an Anti-Pattern' by Mark Seemann.
I might be wrong here. But I just thought I should share my thoughts too.
public class OrderProcessor : IOrderProcessor
{
    public void Process(Order order)
    {
        var validator = Locator.Resolve<IOrderValidator>();
        if (validator.Validate(order))
        {
            var shipper = Locator.Resolve<IOrderShipper>();
            shipper.Ship(order);
        }
    }
}
The Process() method of OrderProcessor does not actually follow the 'Inversion of Control' principle. It also breaks the Single Responsibility Principle at the method level. Why should a method be concerned with instantiating the objects (via new or any S.L. class) it needs to accomplish anything?
Instead of having the Process() method create the objects, the constructor can actually take parameters for the respective objects (read: dependencies), as shown below. Then HOW is a Service Locator any different from an IoC container? AND it will aid in unit testing as well.
public class OrderProcessor : IOrderProcessor
{
    private readonly IOrderValidator validator;
    private readonly IOrderShipper shipper;

    public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
    {
        this.validator = validator;
        this.shipper = shipper;
    }

    public void Process(Order order)
    {
        if (this.validator.Validate(order))
        {
            shipper.Ship(order);
        }
    }
}

// Caller
public static void Main() // this can be unit test code too
{
    var validator = Locator.Resolve<IOrderValidator>(); // similar to an IoC container
    var shipper = Locator.Resolve<IOrderShipper>();
    var order = new Order(); // the order to process
    var orderProcessor = new OrderProcessor(validator, shipper);
    orderProcessor.Process(order);
}
I have used the Google Guice DI framework in Java, and discovered that it does much more than make testing easier. For example, I needed a separate log per application (not class), with the further requirement that all my common library code use the logger in the current call context. Injecting the logger made this possible. Admittedly, all the library code needed to be changed: the logger was injected in the constructors. At first, I resisted this approach because of all the coding changes required; eventually I realized that the changes had many benefits:
The code became simpler
The code became much more robust
The dependencies of a class became obvious
If there were many dependencies, it was a clear indication that a class needed refactoring
Static singletons were eliminated
The need for session or context objects disappeared
Multi-threading became much easier, because the DI container could be built to contain just one thread, thus eliminating inadvertent cross-contamination
Needless to say, I am now a big fan of DI, and use it for all but the most trivial applications.
We've landed on a compromise: use DI but bundle top-level dependencies into an object avoiding refactoring hell should those dependencies change.
In the example below, we can add to 'ServiceDependencies' without having to refactor all derived dependencies.
Example:
public class ServiceDependencies
{
    public ILogger Logger { get; private set; }

    public ServiceDependencies(ILogger logger)
    {
        this.Logger = logger;
    }
}

public abstract class BaseService
{
    public ILogger Logger { get; private set; }

    public BaseService(ServiceDependencies dependencies)
    {
        this.Logger = dependencies.Logger; // don't expose 'dependencies'
    }
}

public class DerivedService : BaseService
{
    public DerivedService(ServiceDependencies dependencies,
        ISomeOtherDependencyOnlyUsedByThisService additionalDependency)
        : base(dependencies)
    {
        // set local dependencies here.
    }
}
I know that people are really saying DI is the only good IoC pattern, but I don't get this. I will try to sell SL a bit. I will use the new MVC Core framework to show you what I mean. First, DI engines are really complex. What people really mean when they say DI is to use some framework like Unity, Ninject, Autofac... that does all the heavy lifting for you, whereas SL can be as simple as making a factory class. For a small, fast project this is an easy way to do IoC without learning a whole framework for proper DI; they might not be that difficult to learn, but still.
Now to the problem that DI can become. I will use a quote from the MVC Core docs.
"ASP.NET Core is designed from the ground up to support and leverage dependency injection." Most people also say about DI that "99% of your code base should have no knowledge of your IoC container." So why would they need to design it from the ground up if only 1% of the code should be aware of it? Didn't old MVC support DI? Well, this is the big problem of DI: it depends on DI. Making everything work "AS IT SHOULD BE DONE" takes a lot of work. If you look at the new Action Injection, is it not depending on DI if you use the [FromServices] attribute? Now DI people will say NO, you are supposed to go with factories, not this stuff, but as you can see, not even the people making MVC did it right. The problem of DI is visible in filters as well; look at what you need to do to get DI into a filter:
public class SampleActionFilterAttribute : TypeFilterAttribute
{
    public SampleActionFilterAttribute() : base(typeof(SampleActionFilterImpl))
    {
    }

    private class SampleActionFilterImpl : IActionFilter
    {
        private readonly ILogger _logger;

        public SampleActionFilterImpl(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<SampleActionFilterAttribute>();
        }

        public void OnActionExecuting(ActionExecutingContext context)
        {
            _logger.LogInformation("Business action starting...");
            // perform some business logic work
        }

        public void OnActionExecuted(ActionExecutedContext context)
        {
            // perform some business logic work
            _logger.LogInformation("Business action completed.");
        }
    }
}
Whereas if you used SL, you could have done this with var _logger = Locator.Get<ILogger>();. And then we come to the views. With all their good will regarding DI, they had to use SL for the views. The new syntax @inject StatisticsService StatsService is the same as var StatsService = Locator.Get<StatisticsService>();.
The most advertised part of DI is unit testing. But what people end up doing is just testing their mock services with no purpose, or having to wire up their DI engine to do real tests. And I know that you can do anything badly, but people end up making an SL locator even if they don't know what it is, whereas not a lot of people do DI without ever reading about it first.
My biggest problem with DI is that the user of the class must be aware of the inner workings of the class in order to use it.
SL can be used in a good way and has some advantages, most of all its simplicity.
I know this question is a little old, I just thought I would give my input.
In reality, 9 times out of 10 you really don't need SL and should rely on DI. However, there are some cases where you should use SL. One area that I find myself using SL (or a variation, thereof) is in game development.
Another advantage of SL (in my opinion) is the ability to pass around internal classes.
Below is an example:
internal sealed class SomeClass : ISomeClass
{
    internal SomeClass()
    {
        // Add the service to the locator
        ServiceLocator.Instance.AddService<ISomeClass>(this);
    }

    // Maybe remove the service within a finalizer or dispose method if needed.

    internal void SomeMethod()
    {
        Console.WriteLine("The user of my library doesn't know I'm doing this, let's keep it a secret");
    }
}

public sealed class SomeOtherClass
{
    private ISomeClass someClass;

    public SomeOtherClass()
    {
        // Get the service and call a method
        someClass = ServiceLocator.Instance.GetService<ISomeClass>();
        someClass.SomeMethod();
    }
}
As you can see, the user of the library has no idea this method was called, because we didn't use DI, not that we'd be able to anyway.
If the example only takes log4net as a dependency, then you only need to do this:
ILog log = LogManager.GetLogger(typeof(Foo));
There is no point injecting the dependency, as log4net provides granular logging by taking the type (or a string) as a parameter.
Also, DI is not correlated with SL. IMHO the purpose of a Service Locator is to resolve optional dependencies.
E.g.: if the SL provides an ILog interface, I will write logging data.
For DI, do you need to have a hard reference to the assembly of the injected type? I don't see anyone talking about that. With SL, I can tell my resolver where to load my type dynamically when it is needed, from a config.json or similar. Also, if your assembly contains several thousand types and their inheritance, do you need thousands of cascading calls to the service collection provider to register them? That is something I don't see discussed much. Most are talking about the benefit of DI and what it is in general; when it comes to how to implement it in .NET, they present an extension method for adding a reference to a hard-linked type's assembly. That's not very decoupling to me.

Design for Cross-Platform Classes in C#

Summary: I want to know the best design for creating cross-platform (eg. desktop, web, and Silverlight) classes in C#, with no duplication of code, with the pros and cons of each design.
I'm often writing new, useful classes for one application domain; there's no reason why they won't work across domains. How can I structure my code to make it ideally cross-platform?
For example, let's say I wanted to make a generic "MyTimer" class with an interval and on-tick event. In desktop, this would use the built-in .NET timer. In Silverlight, I would use a DispatcherTimer.
Design #1 might be "create a class and use pre-processor directives for conditional compilation," e.g. "#if SILVERLIGHT ...". However, this leads to code that is less understandable, readable, and maintainable.
Design #2 might be "create subclasses called DesktopTimer and SilverlightTimer and consume those from MyTimer." How would that work?
While this is a trivial case, I may have more complicated classes that, for example, consume platform-specific classes (IsolatedStorage, DispatcherTimer, etc.) but aren't directly replacing them.
What other designs/paradigms can I use?
I would suggest writing interfaces that you then implement for your platform-specific code. The interfaces ensure that your code respects the contract they define; otherwise there will be a compile error (if a member is not implemented).
Besides, within the library where your specific timer classes reside, to stick to your example, I would create a class for each platform, thus using the DispatcherTimer for Silverlight and the built-in .NET timer for the desktop version.
In the end, you would end up using only one interface, and only its implementers know how to fulfil the contract for the underlying platform.
EDIT #1
Conditional compilation is not an option for a good design. Here is a tool that will help you deal with Dependency Injection; it is called the Unity Application Block, and it is used to deal with scenarios like yours.
You only use an XML configuration, which is very versatile, to "tell" it what has to be instantiated when this or that interface is needed. Then, the UnityContainer consults the configuration you have made and instantiates the right class for you. This ensures a good design approach and architecture.
EDIT #2
I'm not very familiar with Dependency Injection, and not at all familiar with Unity Application Block. Can you point to some resources or explain these a bit further?
Microsoft Enterprise Library 5.0 - April 2010;
Microsoft Unity 2.0 – April 2010;
Microsoft Unity 2.0 Documentation for Visual Studio 2008;
Are there good tutorial/walkthroughs for unity that don't use configuration files? (SO question on the topic that should provide valuable hints to start with Unity);
Specifying Types in the Configuration File;
Walkthrough: The Unity StopLight QuickStart;
Walkthrough: The Unity Event Broker Extension QuickStart.
I think these resources shall guide you through your learnings. If you need further assistance, please let me know! =)
EDIT #3
But anyway, the StopLight quickstart [...] seems to imply that the dependency mapping of interface to concrete class is done in code (which won't work for me).
In fact, you can do both code and XML dependency mapping; the choice is yours! =)
Here are some examples that you should perhaps draw inspiration from to make the StopLight quickstart use the XML configuration instead of the coded mapping.
Testing Your Unity XML Configuration;
Using Design-Time Configuration;
Source Schema for the Unity Application Block.
If this doesn't help you get through, let me know. I shall then provide a simple example using XML dependency mapping. =)
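As a rough, hedged illustration of the code side of XML-based mapping (assuming Unity 2.0 with the Microsoft.Practices.Unity.Configuration assembly referenced and a <register type="..." mapTo="..."/> entry for ITimer in the <unity> section of the config file; the class and method names here are illustrative):
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration; // provides the LoadConfiguration() extension method

public static class TimerBootstrapper
{
    public static ITimer CreateTimer()
    {
        var container = new UnityContainer();

        // Reads the <unity> section of app.config/web.config, where the XML maps
        // ITimer to e.g. SilverlightTimer or DesktopTimer per platform.
        container.LoadConfiguration();

        return container.Resolve<ITimer>();
    }
}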
1) Interfaces with platform-specific classes in their own assemblies: ITimer in a shared assembly, and a "WebAssembly" containing WebTimer, for example. Then "WebAssembly.dll" or "DesktopAssembly.dll" is loaded on demand. This turns it into more of a deployment/configuration issue, and everything compiles. Dependency Injection or MEF become a great help here.
2) Interfaces (again), but with conditional compilation. This makes it less of a deployment issue and more of a compilation problem. WebTimer would have #if WEB_PLATFORM around it, and so on.
Personally, I'd lean towards #1, but in a complicated application you'll most likely end up having to use both because of slight differences in the available parts of the .NET framework between Silverlight and everything else. You may even want different behavior in core parts of your app just for performance reasons.
I think interfaces are a good choice here (defining what a timer will do without actually implementing it)
public interface ITimer
{
    void CreateTimer(int _interval, TimerDelegate _delegate);
    void StopTimer();
    // etc...
} // eo interface ITimer
From this, you derive your concrete timers:
public class DesktopTimer : ITimer
{
} // eo DesktopTimer
public class SilverlightTimer : ITimer
{
} // eo class SilverlightTimer
public class WebTimer : ITimer
{
} // eo class WebTimer
Then comes the fun part: how do we create the right timer? Here you could implement some kind of platform factory that returns the right timer depending on what platform it is running on. Here is a quick and dirty idea (I would make it more dynamic than this, and perhaps implement one factory for multiple kinds of classes, but this is an example):
public enum Platform
{
    Desktop,
    Web,
    Silverlight
} // eo enum Platform

public class TimerFactory
{
    private class ObjectInfo
    {
        private string m_Assembly;
        private string m_Type;

        // ctor
        public ObjectInfo(string _assembly, string _type)
        {
            m_Assembly = _assembly;
            m_Type = _type;
        } // eo ctor

        public ITimer Create()
        {
            return (ITimer)AppDomain.CurrentDomain.CreateInstanceAndUnwrap(m_Assembly, m_Type);
        }
    } // eo class ObjectInfo

    Dictionary<Platform, ObjectInfo> m_Types = new Dictionary<Platform, ObjectInfo>();

    public TimerFactory()
    {
        m_Types[Platform.Desktop] = new ObjectInfo("Desktop", "MyNamespace.DesktopTimer");
        m_Types[Platform.Silverlight] = new ObjectInfo("Silverlight", "MyNameSpace.SilverlightTimer");
        // ...
    } // eo ctor

    public ITimer Create(Platform _platform)
    {
        // based on platform, create via the appropriate ObjectInfo
        return m_Types[_platform].Create();
    } // eo Create
} // eo class TimerFactory
As I mentioned above, I would not have a factory for every type of object, but make a generic platform factory that could handle timers, containers and whatever else you want. This is just an example.
The Model-View-Presenter pattern is a really good approach if you want to separate all of your user interface logic from the actual GUI framework you are using. Read Michael Feathers' article "The Humble Dialog Box" to get an excellent explanation of how it works:
http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf
The original article was made for C++, if you want a C# example, look here:
http://codebetter.com/blogs/jeremy.miller/articles/129546.aspx
The Pros are:
you will make your GUI logic reusable
your GUI logic becomes applicable for unit testing
The Cons:
if your program does not need more than one GUI framework, this approach produces more lines of code, and you have to deal with more complexity, since you have to decide throughout your coding which parts of your code belong in the view and which in the presenter
Go with all the OOD you know. I'd suggest creating a platform-agnostic (Windows, Mono/desktop, web) domain model. Use abstract classes to model platform-dependent stuff (like the timer). Use Dependency Injection and/or Factory patterns to plug in the specific implementations.
EDIT: at some point you have to specify which concrete classes to use, but using the above-mentioned patterns can bring all that code into one place without using conditional compilation.
EDIT: an example of DI/Factory. Of course you can use one of the existing frameworks, which will give you more power and expressiveness. For the simple example it seems like overkill, but the more complicated the code, the bigger the gain from using the patterns.
// Common.dll
public interface IPlatformInfo
{
    string PlatformName { get; }
}

public interface PlatformFactory
{
    IPlatformInfo CreatePlatformInfo();
    // other...
}

public class WelcomeMessage
{
    private IPlatformInfo platformInfo;

    public WelcomeMessage(IPlatformInfo platformInfo)
    {
        this.platformInfo = platformInfo;
    }

    public string GetMessage()
    {
        return "Welcome at " + platformInfo.PlatformName + "!";
    }
}

// WindowsApp.exe
public class WindowsPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Windows"; }
    }
}

public class WindowsPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new WindowsPlatformInfo();
    }
}

public class WindowsProgram
{
    public static void Main(string[] args)
    {
        var factory = new WindowsPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}

// MonoApp.exe
public class MonoPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Mono"; }
    }
}

public class MonoPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new MonoPlatformInfo();
    }
}

public class MonoProgram
{
    public static void Main(string[] args)
    {
        var factory = new MonoPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}
As others have suggested, interfaces are the way to go here. I would alter the interface from Moo-Juice's suggestion slightly...
public interface ITimer
{
    void StopTimer();  // etc...
    void StartTimer(); // etc...
    TimeSpan Duration { get; }
} // eo interface ITimer
Now you would need to get the ITimer into the class that is using it. The simplest way to do this is called dependency injection. The most common approach to achieve this is called constructor injection.
So when creating a class that needs a timer, you pass a timer into the class when creating it.
Basically you do:
var foo = new Foo(new WebTimer());
Since that will get complicated quite fast, you can utilize some helpers. This pattern is called inversion of control. There are some frameworks that will help you, like Ninject or Castle Windsor.
Both are inversion of control (IoC) containers. (That's the secret sauce.)
Basically you "register" your timer in the IOC, and also register your "Foo". When you need a "Foo", you ask your IOC Container to create one. The container looks at the constructor, finds that it needs a ITimer. It will then create an ITimer for you, and pass it into the constructor, and finally hand you the complete class.
Inside you class you dont need to have any knowledge about the ITimer, or how to create it, since all that was moved to the outside.
For different Applications you now only need to register the correct components, and you are done...
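For example, here is a minimal sketch with Ninject (using the ITimer, WebTimer and Foo types from the snippets above; Ninject can resolve the concrete Foo without an explicit binding):
using Ninject;

public static class CompositionRoot
{
    public static void Main()
    {
        var kernel = new StandardKernel();

        // Per application, register the platform-appropriate timer behind the interface.
        kernel.Bind<ITimer>().To<WebTimer>();

        // Ninject inspects Foo's constructor, sees it needs an ITimer, and supplies the WebTimer.
        var foo = kernel.Get<Foo>();
    }
}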
P.S.: Be careful and don't confuse the IoC container with a service locator...
Links:
http://ninject.org/download
http://www.castleproject.org/container/index.html
http://www.pnpguidance.net/Category/Unity.aspx
Why not have a configuration section which tells your library about the platform of the host application? You have to set this only once in the host config file (web.config or app.config), and for the rest you can use the Factory method as suggested by Moo-Juice. You can use the platform detail across the entire functionality of the library.
