Automapper inheritance and custom type converters - c#

First of all, I've already scoured the web and SO for a solution, couldn't find anything.
I'm working on a highly decoupled solution, with all classes designed for a DI approach (requesting interfaces as dependencies, of course). We use AutoMapper in the MVC layer to transform server-side POCOs into flattened ViewModels. So far nothing strange.
The complexity comes from the fact that some properties of the ViewModels must be created using services that are registered in the IoC container (Ninject) and are not available during application start, where AutoMapper is configured. Telling AutoMapper which method to use to resolve types was easy: just one line of configuration, as per the documentation.
For simple cases, where a single property needs this behaviour, we create a custom ValueResolver for that property, expose the dependencies in the constructor and use them to return the desired value.
The problem arises when we have, let's say, 20 properties which need the same behaviour, but different output values, while all other ViewModel properties are fine with the default "automapping".
I can't find a way to tell automapper "to resolve those properties, use this class, for all the others just use your convention".
I tried several approaches, including putting the desired properties on an interface and then using inheritance mapping as described on the wiki. It works as long as I don't use a TypeConverter, which defeats the purpose, because I need one (or something like it) to obtain the required services through DI/IoC.
An example:
public interface IMyInterface
{
    string MyUrl1 { get; set; }
    // snip: many other URLs
}

public class MyViewModel : IMyInterface
{
    public string MyUrl1 { get; set; }
    // snip: many other URLs

    public string MyAutomapperProperty1 { get; set; }
    // snip: many other properties where I want to use the conventions
}
What I need is something like this
public class MyTypeConverter : TypeConverter<MyPoco, IMyInterface>
{
    // here, in the overridden method, return an instance of MyViewModel
    // with just the properties exposed by the interface
}
Then in automapper config:
Mapper.CreateMap<MyPoco, IMyInterface>()
    .Include<MyPoco, MyViewModel>()
    .ConvertUsing<MyTypeConverter>();
Mapper.CreateMap<MyPoco, MyViewModel>();
This doesn't work. If I remove the call to ConvertUsing and add inline mappings with the ForMember method, everything works, and the ForMember declarations are correctly inherited by the second mapping; it's just the ConvertUsing that isn't. But I can't have DI with the ForMember method: I'm in application start, which is static, and it would also mean instantiating those objects at application level and keeping them alive for the whole application lifetime, instead of creating and disposing them when needed.
I know one solution is putting the related properties in a separate object and creating a TypeConverter for that object directly, but refactoring those POCOs is not an option right now; besides, it seems strange that there isn't a way to make inheritance and DI work together.
Hoping that someone can help me out, thanks for reading.
UPDATE
As requested, I'll try to be clearer about what I want to achieve.
We have some "services", known at application level only through their interfaces. Something like:
public interface IUrlResolver { }
It exposes a bunch of methods to get URLs related to our application, based on parameters we pass, among other things. The implementation may in turn depend on other services (interfaces) and so on; thus we use DI and let the IoC container resolve the chain of dependencies.
Now let's say my ViewModel has 50 properties, of which 30 are fine with the convention-based mapping of AutoMapper, while the other 20 all need to be resolved by different methods of my IUrlResolver interface. To get the facts straight:
I can't have DI where I configure automapper because it happens in Application_Start (or some static method called by application start)
I wouldn't want my IUrlResolver resolved at application level even if it were possible, because it would be stale for the entire application lifetime
If I use a ValueResolver for a single property, everything is fine. The ValueResolver constructor requests IUrlResolver which gets injected at runtime when automapper transformation occurs.
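For reference, a single-property resolver of that shape might look like this (a sketch; MyPoco's Id property and the GetProductUrl method are illustrative assumptions, not from the question):

```csharp
// A sketch of a single-property resolver with an injected dependency;
// IUrlResolver.GetProductUrl and MyPoco.Id are assumed names.
public class ProductUrlResolver : ValueResolver<MyPoco, string>
{
    private readonly IUrlResolver _urlResolver;

    // The dependency is requested here and supplied by the container
    // when AutoMapper constructs the resolver at mapping time.
    public ProductUrlResolver(IUrlResolver urlResolver)
    {
        _urlResolver = urlResolver;
    }

    protected override string ResolveCore(MyPoco source)
    {
        return _urlResolver.GetProductUrl(source.Id);
    }
}

// Registration: only the resolver type is named; no instance is created here.
Mapper.CreateMap<MyPoco, MyViewModel>()
    .ForMember(d => d.MyUrl1, o => o.ResolveUsing<ProductUrlResolver>());
```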
I could create 20 different ValueResolver classes for my 20 properties, but that clutters the code with a ton of duplication and no reusability; a mess of bad practices.
What I want is the ability to partially map an object with a custom class (like a TypeConverter) while having all the other properties mapped by the default convention engine.
I don't have a "sample syntax" to post because the closest way I tried is the one I posted before. I'm open to a complete different approach if it exists, providing it allows me to keep things decoupled and doesn't involve refactoring those properties in a separate class.

I can't have DI where I configure automapper because it happens in
Application_Start (or some static method called by application start)
That sounds like a rather artificial constraint. Both DI container wire-up and AutoMapper configuration are presumably happening early in the application launch phase. It should be at least theoretically feasible to run your AutoMapper configuration after your DI container is configured. If there are practical reasons why this is not the case, could you please elaborate?
Addendum:
If the core problem is that you want to enable resolution of dependencies from your IoC container at mapping time, AutoMapper does provide hooks for this. See https://github.com/AutoMapper/AutoMapper/wiki/Custom-value-resolvers#custom-constructor-methods and https://github.com/AutoMapper/AutoMapper/wiki/Containers for two candidate approaches that you could use.
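Concretely, the container hook amounts to a single line once the kernel exists at configuration time (a sketch; MyServicesModule is an illustrative module name):

```csharp
// Sketch: configure AutoMapper after the Ninject kernel is built, and
// route all service construction through the container.
var kernel = new StandardKernel(new MyServicesModule());

Mapper.Initialize(cfg =>
{
    // Every ValueResolver/TypeConverter AutoMapper needs is now created
    // by the container, so constructor dependencies such as IUrlResolver
    // are injected, and can be scoped by the container rather than kept
    // alive for the whole application.
    cfg.ConstructServicesUsing(type => kernel.Get(type));
});
```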

Related

How do I inject into base class with Castle Windsor?

I have a series of core services that I want to configure with Castle Windsor, things like Logging, Caching, Email config, etc. Making these services easily configurable by an app.config change would be a great boon (e.g. even just for development/testing it's great to be able to tell the app to route all the email traffic through some other mechanism than the actual mail server).
Two questions:
Many of the classes that need access to these services all inherit from an abstract base class (it contains core logic used by all subclasses), so it would seem ideal to inject the core services into this base class somehow, so that all the children inherit the references to the services. Note that these subclasses also all implement an interface, so that may be the better path to go down?
I also have a scenario where unrelated objects in other assemblies also need to be able to tap into the core services. These objects are not instantiated by me but by other libraries (I'm implementing the interface of some 3rd party library that then uses my implementation in its framework). If I need access to email or logging or some other core service in this code, how do I get a reference?
I hope that makes sense, thank you.
Regarding your first point, use property injection.
You have two choices for injecting dependencies: via the constructor or via properties. Since you don't want to pass dependencies down the constructor chain, the only other way is via property injection. This has the advantage that if a base class needs to add, remove, or change a dependency, it doesn't affect everything that inherits from it.
Some folks (myself included) shy away from property injection because it makes dependencies non-obvious and can imply that they are optional. This can make unit testing (you're doing that, right?) difficult because you have to inspect the class to see what dependencies are needed. If they were in the constructor, it'd be obvious.
However, if you can make sane null-object implementations of your services so that they are optional, or the unit-testing implications don't faze you, then this is a good route to go down.
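A minimal sketch of that null-object variant, using an illustrative ILogger service (not from the question):

```csharp
public interface ILogger
{
    void Log(string message);
}

// Null object: a do-nothing implementation, so the dependency is optional.
public class NullLogger : ILogger
{
    public void Log(string message) { }
}

public abstract class CoreServiceConsumerBase
{
    private ILogger _logger = new NullLogger();

    // Windsor performs property injection on public settable properties
    // it can resolve, so registering an ILogger replaces the null object;
    // otherwise subclasses still run safely with the default.
    public ILogger Logger
    {
        get { return _logger; }
        set { _logger = value; }
    }
}
```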
As to your second question, if you can't control how the class gets created, you can't expect Windsor to supply any of its dependencies. At best, you can resolve the dependencies individually (i.e. call container.Resolve<IYourDependency>()) and assign them to the properties of your implementation.
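That fallback might be sketched like this, assuming a hypothetical static holder that the application assigns at start-up; IEmailService and the 3rd-party interface are illustrative names:

```csharp
// Hypothetical holder the application sets once during start-up.
public static class ContainerHolder
{
    public static IWindsorContainer Container { get; set; }
}

public class MyThirdPartyAdapter : IThirdPartyExtension // 3rd-party interface
{
    private readonly IEmailService _email;

    public MyThirdPartyAdapter() // the framework forces a parameterless ctor
    {
        // Service-locator style resolution: not ideal, but the only option
        // when you don't control instantiation.
        _email = ContainerHolder.Container.Resolve<IEmailService>();
    }
}
```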

What are the benefits of using Dependency Injection when retrieving implementations of an interface at runtime?

public List<IBusinessObject> RetrieveAllBusinessObjects()
{
    var businessObjectType = typeof(IBusinessObject);

    List<Type> implementationsOfBusinessObject = AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(s => s.GetTypes())
        .Where(businessObjectType.IsAssignableFrom)
        .ToList();

    return implementationsOfBusinessObject
        .Select(t => (IBusinessObject)Activator.CreateInstance(t))
        .ToList();
}
A user on Stack Overflow suggested that I check out dependency injection as an alternative to the above snippet. What would be the benefits of this?
Just a little overview on what the scenario is:
Our database has little to no stored procedures, so we have begun implementing C# business objects for our more complicated tables. As we are hoping to switch databases some time soon, this seems to be the best option. All of the business objects must be loaded using reflection at runtime to help manage them. All of these business objects implement the interface IBusinessObject.
The suggestion to use dependency injection came from this question
EDIT:
The RetrieveAllBusinessObjects method is in a class behind an interface so is directly testable
We use AutoFac if that changes anything. We don't use a separate config file.
Instead of using the code above, you simply use DI, configured in the app's config file or programmatically. Sometimes you can also decorate a property or a method parameter, which will then be automatically injected (by the mapping set up either programmatically or via config) when that object is accessed or the method is invoked.
It also makes things a bit more testable: you can create different concrete types which implement an interface, and then instead of having to recompile the code, you just flip the mappings in the config file and voilà, it all works.
DI would do the above without you having to write code to do it so there's less opportunity for you to introduce bugs.
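Since you mention AutoFac: the hand-rolled reflection loop above can be expressed as a couple of registration calls (a sketch; as in the original code, it assumes the implementations have parameterless constructors):

```csharp
using System;
using System.Collections.Generic;
using Autofac;

// Autofac's assembly scanning replaces the manual reflection loop; each
// concrete IBusinessObject implementation found is registered once.
var builder = new ContainerBuilder();

builder.RegisterAssemblyTypes(AppDomain.CurrentDomain.GetAssemblies())
       .Where(t => typeof(IBusinessObject).IsAssignableFrom(t))
       .As<IBusinessObject>();

var container = builder.Build();

// Resolving IEnumerable<IBusinessObject> yields one instance per
// registered implementation.
var allBusinessObjects = container.Resolve<IEnumerable<IBusinessObject>>();
```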
DI gives you many more benefits such as
Making it easier to test individual units of code. Dependencies can be mocked, so you can limit the code being tested
Easy to understand the dependencies within your code. Dependencies generally get injected in certain places, usually the constructor.
Linked to point 1 above: because you have now defined interfaces between your components, when your requirements change and you need to rewrite a component, you can do so with higher confidence that it will work with your existing code-base.
There are other benefits that others can probably describe better, but you'll need to evaluate those according to your needs.

How should common references be passed around in an assembly?

I am trying to get rid of static classes, static helper methods, and singleton classes in my code base. Currently, they are pretty much spread over the whole code, especially so for the utility classes and the logging library. This is mainly due to the need for mocking ability, as well as object-oriented design and development concerns, e.g. extensibility. I might also need to introduce some form of dependency injection in the future and would like to leave an open door for that.
Basically, the problem I have encountered is about the method of passing the commonly used references around. These are objects that are used by almost every class in the code base, such as the logging interface, the utility (helper) class interface and maybe an instance of a class that holds an internal common state for the assembly which most classes relate to.
There are two options, as far as I'm aware. One is to define a class (or an interface) that stores the common references, a context if you will, and pass the context to each object that is created. The other option is to pass each common reference to almost every class as a separate parameter which would increase the number of parameters of the class constructors.
Which one of these methods is better, what are the pros and cons of each, and is there a better method for this task?
I generally go with the context object approach, and pass the context object either to an object's constructor, or to a method -- depending on which one makes the most sense.
The context object pattern can take a few forms.
You can define an interface that has exactly the members you need, or you can generate a sort of container class. For example, when writing loosely-coupled components, I tend to have each component I implement have a matching interface, so that it can be reimplemented if desired. Then I register the objects on a "manager" object, something like this:
public interface IServiceManager
{
    T GetService<T>();
    T RequireService<T>();
    void RegisterService<T>(T service);
    void UnregisterService<T>(T service);
}
Behind the scenes there is a map from type to object, which allows me to extremely quickly assemble a large set of diverse components into a working whole. Each component asks for the others by interface, and the manager object is what glues them together. (If you correctly author your components, you can even swap out one service for another while the process is running!)
One would register a service something along these lines:
class FooService : IFooService { }
// During process start-up:
serviceManager.RegisterService<IFooService>(new FooService());
There is more overhead with this approach than with the flat-interface approach due to the dictionary lookup, but it has allowed me to build very sophisticated systems that can be easily redeployed with different service implementations. (And, as is usual, any bottlenecks I encounter are never in looking up a service object from a dictionary, but somewhere else such as the database.)
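The behind-the-scenes map mentioned above can be sketched like this (error handling and thread safety elided; the dictionary lookup is the whole trick):

```csharp
using System;
using System.Collections.Generic;

public class ServiceManager : IServiceManager
{
    private readonly Dictionary<Type, object> _services =
        new Dictionary<Type, object>();

    public void RegisterService<T>(T service)
    {
        _services[typeof(T)] = service;
    }

    public void UnregisterService<T>(T service)
    {
        _services.Remove(typeof(T));
    }

    // GetService returns the type's default (null for interfaces) when
    // the service is missing...
    public T GetService<T>()
    {
        object service;
        return _services.TryGetValue(typeof(T), out service)
            ? (T)service
            : default(T);
    }

    // ...whereas RequireService throws, making a missing registration obvious.
    public T RequireService<T>()
    {
        return (T)_services[typeof(T)];
    }
}
```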
You're going to get varied opinions, but generally passing a separate parameter to the constructor for each dependency is preferred for a few reasons:
It clearly defines the actual dependencies for a class - with a "context" you don't know which parts of the context are used without digging into the code.
Generally having a lot of parameters to a constructor is a design smell, so using constructor injection helps you sniff out design flaws.
When testing you can mock out individual dependencies versus having to mock an entire context
I would suggest passing each dependency as a parameter to the constructor. This has great advantages for both dependency injection and unit testability (mocking).

2 Product lines sharing same code

We are working on two product lines that will share the same code.
For functionality that differs, I have both product lines implement the same interface (or base classes in some case) and these types will be created in the Main class (which is separate for both product lines) and passed further downstream.
For code that is deep inside the business logic, it is very hard to have product-line-specific code. We do not want to use an if (ProductLine == "ProductLine1") / else methodology.
So I am planning to implement a Factory class which will have static methods to return NewObject1(), NewObject2() and so on. This Factory class will be registered in the Main class as Factory.RegisterClient(ProductLine1).
So with the above approach, the factory (which internally contains a ProductLine1Factory and a ProductLine2Factory) knows which type of objects to create.
Do you know a better approach to this problem? Please note that ProductLine1 already exists and ProductLine2 is new (but is 90% similar to ProductLine1). We cannot do drastic refactoring just so both product lines can coexist; we want code changes to be as minimally invasive as possible.
The factory approach typically exposes an interface, but the problem with interfaces is that I cannot expose static members, which are also needed.
I would really appreciate if some experts would shed some light.
Your approach sounds fine.
Instead of a custom-crafted factory, why don't you use a fully fledged IoC framework like Ninject or Unity? You could have the service implemented twice, once per product line, and select one statically in a container configuration file. This way you don't even need to change a single line of your code if you add yet another implementation; you just reconfigure, i.e. make some changes in the XML file.
Anyway, an IoC container is just a tool, use it or not, it just replaces your factory (IoC containers are sometimes called "factories on steroids").
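For example, with Ninject the per-product-line selection might look like this (a sketch; the interface and class names are illustrative, not taken from the question):

```csharp
// One module per product line, selecting that line's implementations.
public class ProductLine1Module : NinjectModule
{
    public override void Load()
    {
        Bind<IWidgetFactory>().To<ProductLine1WidgetFactory>();
    }
}

// Each product line's Main installs its own module; everything
// downstream sees only the interface.
var kernel = new StandardKernel(new ProductLine1Module());
var factory = kernel.Get<IWidgetFactory>();
```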

How to Use Ninject

I have been trying to use Ninject today and have a couple of questions. First of all, do I need to use the Inject attribute on all constructors that I want to use injection for? This seems like a really lame design. Secondly, do I need to create a Kernel and then use it everywhere I pass in an injected class?
The best way to get started with Ninject is to start small. Look for a 'new'.
Somewhere in the middle of your application, you're creating a class inside another class. That means you're creating a dependency. Dependency Injection is about passing in those dependencies, usually through the constructor, instead of embedding them.
Say you have a class like this, used to automatically create a specific type of note in Word. (This is similar to a project I've done at work recently.)
class NoteCreator
{
    public NoteHost Create()
    {
        var docCreator = new WordDocumentCreator();
        docCreator.CreateNewDocument();
        [etc.]
WordDocumentCreator is a class that handles the specifics of creating a new document in Microsoft Word (create an instance of Word, etc.). My class, NoteCreator, depends on WordDocumentCreator to perform its work.
The trouble is, if someday we decide to move to a superior word processor, I have to go find all the places where WordDocumentCreator is instantiated and change them to instantiate WordPerfectDocumentCreator instead.
Now imagine that I change my class to look like this:
class NoteCreator
{
    WordDocumentCreator docCreator;

    public NoteCreator(WordDocumentCreator docCreator) // constructor injection
    {
        this.docCreator = docCreator;
    }

    public NoteHost Create()
    {
        docCreator.CreateNewDocument();
        [etc.]
My code hasn't changed that much; all I've done within the Create method is remove the line with the new. But now I'm injecting my dependency. Let's make one more small change:
class NoteCreator
{
    IDocumentCreator docCreator;

    public NoteCreator(IDocumentCreator docCreator) // change to interface
    {
        this.docCreator = docCreator;
    }

    public NoteHost Create()
    {
        docCreator.CreateNewDocument();
        [etc.]
Instead of passing in a concrete WordDocumentCreator, I've extracted an IDocumentCreator interface with a CreateNewDocument method. Now I can pass in any class that implements that interface, and all NoteCreator has to do is call the method it knows about.
Now the tricky part. I should now have a compile error in my app, because somewhere I was creating NoteCreator with a parameterless constructor that no longer exists. Now I need to pull out that dependency as well. In other words, I go through the same process as above, but now I'm applying it to the class that creates a new NoteCreator. When you start extracting dependencies, you'll find that they tend to "bubble up" to the root of your application, which is the only place where you should have a reference to your DI container (e.g. Ninject).
The other thing I need to do is configure Ninject. The essential piece is a class that looks like this:
class MyAppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IDocumentCreator>()
            .To<WordDocumentCreator>();
    }
}
This tells Ninject that when I attempt to create a class that, somewhere down the line, requires an IDocumentCreator, it should create a WordDocumentCreator and use that. The process Ninject goes through looks something like this:
Create the application's MainWindow. Its constructor requires a NoteCreator.
OK, so create a NoteCreator. But its constructor requires an IDocumentCreator.
My configuration says that for an IDocumentCreator, I should use WordDocumentCreator. So create a WordDocumentCreator.
Now I can pass the WordDocumentCreator to the NoteCreator.
And now I can pass that NoteCreator to the MainWindow.
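The chain above is kicked off with a couple of lines at the application's entry point (a sketch using the example types from this answer):

```csharp
// Composition root: the only place that references the container.
var kernel = new StandardKernel(new MyAppModule());

// A single Get<> at the root; Ninject walks the constructor chain
// (MainWindow -> NoteCreator -> IDocumentCreator -> WordDocumentCreator).
var mainWindow = kernel.Get<MainWindow>();
```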
The beauty of this system is threefold.
First, if you fail to configure something, you'll know right away, because your objects are created as soon as your application is run. Ninject will give you a helpful error message saying that your IDocumentCreator (for instance) can't be resolved.
Second, if management later mandates the use of a superior word processor, all you have to do is:
Write a WordPerfectDocumentCreator that implements IDocumentCreator.
Change MyAppModule above, binding IDocumentCreator to WordPerfectDocumentCreator instead.
Third, if I want to test my NoteCreator, I don't have to pass in a real WordDocumentCreator (or whatever I'm using). I can pass in a fake one. That way I can write a test that assumes my IDocumentCreator works correctly, and only tests the moving parts in NoteCreator itself. My fake IDocumentCreator will do nothing but return the correct response, and my test will make sure that NoteCreator does the right thing.
For more information about how to structure your applications this way, have a look at Mark Seemann's recent book, Dependency Injection in .NET. Unfortunately, it doesn't cover Ninject, but it does cover a number of other DI frameworks, and it talks about how to structure your application in the way I've described above.
Also have a look at Working Effectively With Legacy Code, by Michael Feathers. He talks about the testing side of the above: how to break out interfaces and pass in fakes for the purpose of isolating behavior and getting it under test.
First of all, do I need to use the Inject attribute on all constructors
that I want to use injection for? This seems like a really lame design.
No, you shouldn't have to do this at all, actually. Since you work with ASP.NET MVC, you can just install the Ninject.MVC3 NuGet package. This will get you started with a NinjectMVC3 class in the App_Start folder. You can use the RegisterServices method to register your interfaces/classes with Ninject. All controllers that have dependencies on those interfaces will then be automatically resolved by Ninject; there is no need for the Inject attribute.
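A sketch of what that RegisterServices method might contain (IUserRepository and SqlUserRepository are illustrative names, not from the question):

```csharp
private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<IUserRepository>().To<SqlUserRepository>();

    // Any controller with a constructor parameter of type IUserRepository
    // is now resolved by Ninject automatically; no [Inject] attribute needed.
}
```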
Do I need to create a Kernel then use that everywhere I pass in an
injected class?
No. What you are describing sounds more like the Service Locator pattern, not dependency injection. You will want to pass in your dependencies, ideally through the constructor, instead of resolving them within particular classes using the kernel. There should be just one central composition root where the resolving is done: either the RegisterServices method mentioned above or a separate Ninject module instantiated there. The latter approach allows you a little more flexibility and modularity (no pun intended) in changing how you resolve your dependencies.
Here's a good beginner's tutorial on dependency injection with Ninject and MVC3.
Don't forget there are docs on the Ninject wiki, including an intro that I feel would be very appropriate given the sort of questions you are asking. You're just making life hard for yourself if you try to use Ninject without reading it end to end.
Stick the table of contents on your bookmark bar for a bit.
I can also highly recommend Mark Seemann's Dependency Injection in .NET as a companion book for DI-based architecture (even though it doesn't directly cover Ninject).
