In my everlasting quest to suck less I'm currently checking out MVC Turbine to do the IoC dirty work.
I'm using the MVC Turbine Nerd Dinner example as a lead and things look rather logical thus far.
Although I'm referring to the Turbine project here, I'm guessing the philosophy behind it is general to the pattern.
Save for some reading and the rare podcast, I am new to the IoC concept and I have a few questions.
So far I have an IServiceRegistration entry for each IRepository I want to register.
For example:
public class UserRepositoryRegistration : IServiceRegistration
{
    public void Register(IServiceLocator locator)
    {
        locator.Register<IUserRepository, UserRepository>();
    }
}
The concrete implementation of IUserRepository needs some configuration, though: something like a connection string or, in this case, a path to the db4o file to use.
Where and to whom should I supply this information?
Both Robert and Lucas hit the nail on the head with their answers. All the "extra stuff" for the account would live within the UserRepository class. This is currently the way the Turbine Nerd Dinner example is implemented.
However, nothing stops you from creating a new class called ConnectionStringProvider that can then be 'injected' into your UserRepository and will provide the connection string (whether it be hard coded or read from a config file).
The code can be as follows:
public class ConnectionStringProvider
{
    public string ConnectionString
    {
        get
        {
            // your impl here
        }
    }
}

public class UserRepository
{
    public UserRepository(ConnectionStringProvider provider)
    {
        // set internal field here to use later
        // with db connection
    }
}
From here, you add a registration for ConnectionStringProvider within the UserRepositoryRegistration class and Turbine will handle the rest for you.
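For example, the registration might then look something like this (a sketch, assuming the container can resolve a concrete type registered against itself with the same Register<TService, TImplementation> overload used above):

public class UserRepositoryRegistration : IServiceRegistration
{
    public void Register(IServiceLocator locator)
    {
        // Register the provider so the container can supply it to
        // UserRepository's constructor.
        locator.Register<ConnectionStringProvider, ConnectionStringProvider>();
        locator.Register<IUserRepository, UserRepository>();
    }
}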
In general, this is solely the concern of the concrete UserRepository that requires the connection string or database path. You would do just fine by dropping the path into the application configuration file and having your concrete repository pull the configuration data out directly.
Not all repositories are going to require this information, which is one of the reasons you have the abstraction in the first place. For example, a fast in-memory concrete IUserRepository will not require a path to the database, or likely any additional configuration, to work.
Similar to Robert, I would recommend putting this into the application configuration file, but with specific entries for each injection type. That way your connection string or path can be customized for each injection.
Related
I'm using .NET Core Dependency Injection to instantiate a SqlConnection object during application startup, which I'm then planning to inject into my repository. This SqlConnection will be used by Dapper to read/write data from the database within my repository implementation. I am going to use async calls with Dapper.
The question is: should I inject the SqlConnection as transient or as a singleton? Given that I want to use async, my thought would be to use transient, unless Dapper implements some isolation internally and my singleton's scope would still be wrapped within whatever scope Dapper uses internally.
Are there any recommendations/best practices regarding the lifetime of the SqlConnection object when working with Dapper? Are there any caveats I might be missing?
Thanks in advance.
If you provide the SqlConnection as a singleton you won't be able to serve multiple requests at the same time unless you enable MARS, which also has its limitations. The best practice is to use a transient SqlConnection and ensure it is properly disposed.
In my applications I pass a custom IDbConnectionFactory to repositories, which is used to create the connection inside a using statement. In this case the repository itself can be a singleton, to reduce allocations on the heap.
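For illustration, a minimal sketch of that approach might look like this (the IDbConnectionFactory interface, the User model and the SQL are assumptions for the example, not code from the question):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;

// Hypothetical factory abstraction.
public interface IDbConnectionFactory
{
    IDbConnection CreateConnection();
}

public class SqlConnectionFactory : IDbConnectionFactory
{
    private readonly string connectionString;

    public SqlConnectionFactory(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IDbConnection CreateConnection() => new SqlConnection(connectionString);
}

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The repository can be registered as a singleton; every call creates and
// disposes a short-lived connection, so no connection state is shared.
public class UserRepository
{
    private readonly IDbConnectionFactory connectionFactory;

    public UserRepository(IDbConnectionFactory connectionFactory)
    {
        this.connectionFactory = connectionFactory;
    }

    public async Task<IEnumerable<User>> GetAllAsync()
    {
        using (var connection = connectionFactory.CreateConnection())
        {
            // Dapper opens the connection if it is closed.
            return await connection.QueryAsync<User>("SELECT Id, Name FROM Users");
        }
    }
}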
Great question, and already two great answers. I was puzzled by this at first and came up with the following solution, which encapsulates the repositories in a manager. The manager itself is responsible for extracting the connection string and injecting it into the repositories.
I've found this approach makes testing the repositories individually, say in a mock console app, much simpler, and I've had much luck following this pattern on several larger-scale projects. Though I am admittedly not an expert at testing, dependency injection, or, well, anything really!
The main question I'm left asking myself is whether the DbService should be a singleton or not. My rationale was that there isn't much point constantly creating and destroying the various repositories encapsulated in DbService, and since they are all stateless I didn't see much problem in allowing them to "live". Though this could be entirely invalid logic.
EDIT: Should you want a ready-made solution, check out my Dapper repository implementation on GitHub.
The repository manager is structured as follows:
/*
 * Db Service
 */
public interface IDbService
{
    ISomeRepo SomeRepo { get; }
}

public class DbService : IDbService
{
    readonly string connStr;
    ISomeRepo someRepo;

    public DbService(string connStr)
    {
        this.connStr = connStr;
    }

    public ISomeRepo SomeRepo
    {
        get
        {
            if (someRepo == null)
            {
                someRepo = new SomeRepo(this.connStr);
            }
            return someRepo;
        }
    }
}
A sample repository would be structured as follows:
/*
 * Mock Repo
 */
public interface ISomeRepo
{
    IEnumerable<SomeModel> List();
}

public class SomeRepo : ISomeRepo
{
    readonly string connStr;

    public SomeRepo(string connStr)
    {
        this.connStr = connStr;
    }

    public IEnumerable<SomeModel> List()
    {
        //work to return list of SomeModel
    }
}
Wiring it all up:
/*
 * Startup.cs
 */
public IConfigurationRoot Configuration { get; }

public void ConfigureServices(IServiceCollection services)
{
    //...rest of services

    // DbService needs a connection string, so register it with a factory rather
    // than letting the container try to resolve the string parameter.
    // ("DefaultConnection" is an example name for the connection string entry.)
    services.AddSingleton<IDbService>(provider =>
        new DbService(Configuration.GetConnectionString("DefaultConnection")));

    //...rest of services
}
And finally, using it:
public class SomeController : Controller
{
    IDbService dbService;

    public SomeController(IDbService dbService)
    {
        this.dbService = dbService;
    }

    public IActionResult Index()
    {
        return View(dbService.SomeRepo.List());
    }
}
I agree with @Andrii Litvinov, both answer and comment.
In this case I would go with the approach of a data-source-specific connection factory.
With the same approach, I am mentioning a different way: UnitOfWork.
Refer to DalSession and UnitOfWork from this answer. This handles the connection.
Refer to BaseDal from this answer. This is my implementation of a Repository (actually BaseRepository).
UnitOfWork is injected as transient.
Multiple data sources can be handled by creating a separate DalSession for each data source.
UnitOfWork is injected into BaseDal.
Are there any recommendations/best practices regarding the lifetime of the SqlConnection object when working with Dapper?
One thing most developers agree on is that the connection should be as short-lived as possible. I see two approaches here:
Connection per action.
This, of course, gives the shortest possible lifespan for the connection: you enclose the connection in a using block for each action (see the sketch at the end of this answer). This is a good approach as long as you do not want to group actions. Even when you do want to group actions, you can use a transaction in most cases.
The problem is when you want to group actions across multiple classes/methods. You cannot use a using block there. The solution is a UnitOfWork, as below.
Connection per Unit of Work.
Define your unit of work. This will differ per application; in a web application, "connection per request" is a widely used approach.
This makes more sense because there is usually (most of the time) a group of actions we want to perform as a whole. This is explained in the two links I provided above.
Another advantage of this approach is that the application (that uses the DAL) gets more control over how the connection is used. And in my understanding, the application knows better than the DAL how the connection should be used.
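As a rough illustration of the "connection per action" approach with a couple of grouped actions, here is a sketch (the SQL, the parameter objects and the connection string are placeholders, not code from the question):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        // Two actions grouped in one short-lived connection via a transaction (Dapper Execute).
        connection.Execute(
            "INSERT INTO Orders (Id, Total) VALUES (@Id, @Total)",
            order, transaction);
        connection.Execute(
            "UPDATE Inventory SET Quantity = Quantity - 1 WHERE ProductId = @ProductId",
            item, transaction);
        transaction.Commit();
    }
    // The connection is disposed here, as soon as the action completes.
}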
I'm working on a system where I'd like to have my layers decoupled as much as possible, you know, some kind of modular application, to be able to switch databases and such without serious modification of the rest of the system.
So, I've been watching for the x-th time one of Robert C. Martin's talks about good practices, clean code, decoupling, architecture, etc., to get some inspiration. What I find kind of weird is his description of the Fitnesse system and the way they've implemented the store/load methods for WikiPages. I'm linking the video as well: Robert C. Martin - Clean Architecture and Design
What he's describing (at least from my understanding) is that the entity is aware of the mechanism for storing and loading itself from some persistence layer. When he wanted to store WikiPages in memory, he simply derived from WikiPage and created a new InMemoryWikiPage. When he wanted to store them in a database, he did the same thing...
So, one of my questions is: what is this approach called? I've been learning the whole time about the Repository pattern and the like, and why classes like this should be persistence-ignorant, but I can't seem to find any materials on this thing he did. Because my application will consist of modules, I think this may help solve my problems without the need to create some centralized store for my entities... Every module would simply take care of itself, including persistence of its entities.
I think the code would look something like this:
public class Person : IEntity
{
    public int ID { get; set; }
    public string Name { get; set; }

    public void Save()
    {
        ..
    }

    public void Update()
    {
    }

    public void Delete()
    {
    }

    ...
}
Seems a bit weird, but... Or maybe I misunderstood what he said in the video?
My second question would be: if you don't agree with this approach, what path would you take in such a modular application?
Please provide an example if possible with some explanation.
I'll answer your second question. I think you will also be interested in Dependency Injection.
I'm not an expert on DI but I'll try to explain it as clearly as I can.
First off, from Wikipedia:
Dependency injection is a software design pattern that allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time.
The primary purpose of the dependency injection pattern is to allow selection among multiple implementations of a given dependency interface at runtime, or via configuration files, instead of at compile time.
There are many libraries around that help you implement this design pattern: AutoFac, SimpleInjector, Ninject, Spring .NET, and many others.
In theory, this is what your code would look like (AutoFac example)
var containerBuilder = new ContainerBuilder();
//This is your container builder. It will be used to register interfaces
// with concrete implementations
Then, you register concrete implementations for interface types:
containerBuilder.RegisterType<MockDatabase>().As<IDatabase>().InstancePerDependency();
containerBuilder.RegisterType<Person>().As<IPerson>().InstancePerDependency();
In this case, InstancePerDependency means that whenever you resolve IPerson, you'll get a new instance. It could instead be, for example, SingleInstance, so whenever you resolved IPerson you would get the same shared instance.
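For comparison, the shared-instance registration would look like this (same Autofac builder as above):

// Every resolve of IPerson now returns the same shared Person instance.
containerBuilder.RegisterType<Person>().As<IPerson>().SingleInstance();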
Then you build your container, and use it:
var container = containerBuilder.Build();
IPerson myPerson = container.Resolve<IPerson>(); //This will retrieve the object based on whatever implementation you registered for IPerson
myPerson.Id = 1;
myPerson.Save(); //Save your changes
The model I used in this example:
interface IEntity
{
    int Id { get; set; }
    string TableName { get; }
    //etc
}

interface IPerson : IEntity
{
    void Save();
}

interface IDatabase
{
    void Save(IEntity entity);
}

class SQLDatabase : IDatabase
{
    public void Save(IEntity entity)
    {
        //Your sql execution (very simplified)
        //yada yada INSERT INTO entity.TableName VALUES (entity.Id)
        //If you use EntityFramework it will be even easier
    }
}

class MockDatabase : IDatabase
{
    public void Save(IEntity entity)
    {
        return;
    }
}

class Person : IPerson
{
    IDatabase _database;

    public Person(IDatabase database)
    {
        this._database = database;
    }

    public void Save()
    {
        _database.Save(this);
    }

    public int Id { get; set; }

    public string TableName
    {
        get { return "Person"; }
    }
}
Don't worry, AutoFac will automatically resolve any of Person's dependencies, such as IDatabase.
This way, in case you wanted to switch your database, you could simply do this:
containerBuilder.RegisterType<SQLDatabase>().As<IDatabase>().InstancePerDependency();
I wrote oversimplified code (not suitable for production) which serves just as a kick-start; google "Dependency Injection" for further information. I hope this helps. Good luck.
The pattern you posted is Active Record.
The difference between the Repository and Active Record patterns is that in Active Record the data query and persistence and the domain object live in one class, whereas in Repository the data persistence and query are decoupled from the domain object itself.
Another pattern that you may want to look into is the Query Object. Unlike the repository pattern, whose number of methods grows with every possible query (filter, sorting, grouping, etc.), a query object can use a fluent interface to be expressive [1] or be dedicated, in which case you pass parameters to it [2].
Lastly, you may look at Command Query Responsibility Segregation architecture for ideas. I personally loosely followed it, just picked up ideas that can help me.
Hope this helps.
Update based on comment
One variation of the Repository pattern is this:
public interface IUserRepository
{
    IEnumerable<User> GetAllUsers();
    IEnumerable<User> GetAllByStatus(Status status);
    User GetUserById(int id);
    ...
}
This one does not scale, since the repository keeps growing with every additional query that may be requested.
Another variation is to pass a query object as a parameter for data retrieval:
public interface IUserRepository
{
    IEnumerable<User> GetAll(QueryObject queryObject);
    User GetUserById(int id);
    ...
}

var query = new UserQueryObject(status: Status.Single);
var singleUsers = userRepo.GetAll(query);
In the .NET world, a Linq expression is sometimes passed instead of a QueryObject:
var singleUsers = userRepo.GetAll(user => user.Status == Status.Single);
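In that variation, the repository signature might look something like this (a sketch; the expression-based overload is an assumption, not code from the answer above):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface IUserRepository
{
    // The caller supplies the filter as an expression, e.g. user => user.Status == Status.Single.
    IEnumerable<User> GetAll(Expression<Func<User, bool>> predicate);
    User GetUserById(int id);
}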
Another variation is to dedicate the Repository to retrieving a single entity by its unique identifier and saving it, while a query object is used for all other data retrieval, just like in CQRS.
Update 2
I suggest you get familiar with the SOLID principles. These principles are very helpful in guiding you toward a loosely coupled, highly cohesive architecture.
The Los Techies compilation on SOLID principles contains good introductory articles on the subject.
I'm using Simple Injector, but maybe what I need is more of a conceptual answer.
Here's the deal, suppose I have an interface with my application settings:
public interface IApplicationSettings
{
    bool EnableLogging { get; }
    bool CopyLocal { get; }
    string ServerName { get; }
}
Then, one would usually have a class which implements IApplicationSettings, getting each field from a specified source, for instance:
public class AppConfigSettings : IApplicationSettings
{
    private bool? enableLogging;

    public bool EnableLogging
    {
        get
        {
            if (enableLogging == null)
            {
                enableLogging = Convert.ToBoolean(ConfigurationManager.AppSettings["EnableLogging"]);
            }
            return enableLogging.Value;
        }
    }

    ...
}
HOWEVER! Let's say I want to get EnableLogging from app.config, CopyLocal from the database, and ServerName from another implementation which gets the current computer name. I want to be able to mix and match my app configuration without having to create 9 implementations, one for each combination.
I'm assuming that I can't pass any parameters because the interfaces are resolved by the injector (container).
I thought of this, initially:
public interface IApplicationSettings<TEnableLogging, TCopyLocal, TServerName>
    where TEnableLogging : IGetValue<bool>
    where TCopyLocal : IGetValue<bool>
    where TServerName : IGetValue<string>
{
    TEnableLogging EnableLog { get; }
    TCopyLocal CopyLocal { get; }
    TServerName ServerName { get; }
}

public class ApplicationSettings<TEnableLogging, TCopyLocal, TServerName>
    where TEnableLogging : IGetValue<bool>
    where TCopyLocal : IGetValue<bool>
    where TServerName : IGetValue<string>
{
    private bool? enableLogging;

    public bool EnableLogging
    {
        get
        {
            if (enableLogging == null)
            {
                enableLogging = Container.GetInstance<TEnableLogging>().Value;
            }
            return enableLogging.Value;
        }
    }
}
However, with this I have one main problem: how do I know how to create an instance of TEnableLogging (which is an IGetValue<bool>)? Oh, and assume that IGetValue<bool> is an interface with a Value property, which will be implemented by the concrete class. But the concrete class may need some specifics (like the name of the key in app.config) or not (I may simply want to always return true).
I'm relatively new to dependency injection, so maybe I'm thinking in a wrong way. Does anyone have any ideas on how to accomplish this?
(You may answer using another DI library, I won't mind. I think I just need to grab the concept of it.)
You are definitely heading the wrong way here.
Some years ago I built an application that contained an interface much like your IApplicationSettings. I believe I named it IApplicationConfiguration, but it likewise contained all the application's configuration values.
Although it helped me make my application testable at first, after some time the design started to get in the way. A lot of implementations depended on that interface, but it kept changing, and with it the implementation and the test version.
Just like you, I implemented some lazy loading, but this had a terrible downside. When one of the configuration values was missing, I only found out when the value was requested for the first time. This resulted in a configuration that was hard to verify.
It took me a couple of iterations of refactoring to find out what the core of the problem was: big interfaces. My IApplicationConfiguration interface was violating the Interface Segregation Principle, and the result was poor maintainability.
In the end I found out that this interface was completely useless. Besides violating the ISP, those configuration values describe an implementation detail, and instead of making an application-wide abstraction, it is much better to supply each implementation directly with the configuration values it needs, and only the values it needs.
When you do this, the easiest thing to do is to wrap those values into a Parameter Object (even if it is just one value) and inject those configuration values into the constructor. Here's an example:
var enableLogging =
Convert.ToBoolean(ConfigurationManager.AppSettings["EnableLogging"]);
container.RegisterSingleton(new LoggerSettings(loggingEnabled: enableLogging));
In this case, LoggerSettings is a configuration object specific to Logger, which requires it as a constructor argument.
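To make that shape concrete, here is a sketch of the parameter object and its consumer (the Logger class body is illustrative; only LoggerSettings and its registration appear in the answer above):

using System;

public class LoggerSettings
{
    public bool LoggingEnabled { get; }

    public LoggerSettings(bool loggingEnabled)
    {
        LoggingEnabled = loggingEnabled;
    }
}

public class Logger
{
    private readonly LoggerSettings settings;

    // The configuration value arrives as a constructor argument,
    // already read and validated at application startup.
    public Logger(LoggerSettings settings)
    {
        this.settings = settings;
    }

    public void Log(string message)
    {
        if (settings.LoggingEnabled)
        {
            Console.WriteLine(message);
        }
    }
}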
When doing this, the enableLogging value is read from the configuration file just once, during application startup. This makes it fast and makes it fail at application startup when the value is missing.
Summary: I want to know the best design for creating cross-platform (eg. desktop, web, and Silverlight) classes in C#, with no duplication of code, with the pros and cons of each design.
I'm often writing new, useful classes for one application domain; there's no reason why they won't work across domains. How can I structure my code to make it ideally cross-platform?
For example, let's say I wanted to make a generic "MyTimer" class with an interval and on-tick event. On the desktop, this would use the built-in .NET timer. In Silverlight, I would use a DispatcherTimer.
Design #1 might be "create a class and use pre-processor directives for conditional compilation," e.g. "#if SILVERLIGHT ...". However, this leads to code that is less understandable, readable, and maintainable.
Design #2 might be "create subclasses called DesktopTimer and SilverlightTimer and consume those from MyTimer." How would that work?
While this is a trivial case, I may have more complicated classes that, for example, consume platform-specific classes (IsolatedStorage, DispatcherTimer, etc.) but aren't directly replacing them.
What other designs/paradigms can I use?
I would suggest writing interfaces that you simply implement with your platform-specific code. The interfaces ensure that your code respects the contract given by the interface; otherwise the build breaks (if a member is not implemented).
Besides, within the library where your specific timer classes reside, to stick to your example, I would create a class for each platform, using the DispatcherTimer for Silverlight and the built-in .NET timer for the desktop version.
In the end, you consume a single interface, and only its implementers know how to fulfil the contract for the underlying platform.
EDIT #1
Conditional compilation is not an option for a good design. Here is a tool that will help you deal with Dependency Injection, called Unity Application Block, which is used to deal with scenarios like yours.
You only use an XML configuration, which is very versatile, to "tell" it what has to be instantiated when this or that interface is needed. Then the UnityContainer consults the configuration you have made and instantiates the right class for you. This ensures a good design approach and architecture.
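As a rough sketch of what the code side of that looks like, assuming an ITimer abstraction for the timer example and that the type mappings live in the <unity> section of app.config/web.config:

using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

public static class Bootstrapper
{
    public static ITimer CreateTimer()
    {
        var container = new UnityContainer();
        container.LoadConfiguration();       // reads the <unity> section from app/web.config
        return container.Resolve<ITimer>();  // concrete type is decided by the configuration
    }
}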
EDIT #2
I'm not very familiar with Dependency Injection, and not at all familiar with Unity Application Block. Can you point to some resources or explain these a bit further?
Microsoft Enterprise Library 5.0 - April 2010;
Microsoft Unity 2.0 – April 2010;
Microsoft Unity 2.0 Documentation for Visual Studio 2008;
Are there good tutorial/walkthroughs for unity that don't use configuration files? (SO question on the topic that should provide valuable hints to start with Unity);
Specifying Types in the Configuration File;
Walkthrough: The Unity StopLight QuickStart;
Walkthrough: The Unity Event Broker Extension QuickStart.
I think these resources shall guide you through your learnings. If you need further assistance, please let me know! =)
EDIT #3
But anyway, the StopLight quickstart [...] seems to imply that the dependency mapping of interface to concrete class is done in code (which won't work for me).
In fact, you can do both code and XML dependency mapping, the choice is yours! =)
Here are some examples you could take inspiration from to make the StopLight quickstart use XML configuration instead of the coded mapping:
Testing Your Unity XML Configuration;
Using Design-Time Configuration;
Source Schema for the Unity Application Block.
If this doesn't help you get through, let me know. I shall then provide a simple example using XML dependency mapping. =)
1) Interfaces with platform-specific classes in their own assemblies: ITimer in a shared assembly, and a "WebAssembly" containing WebTimer, for example. Then "WebAssembly.dll" or "DesktopAssembly.dll" is loaded on demand. This turns it into more of a deployment/configuration issue, and everything compiles. Dependency Injection or MEF become a great help here.
2) Interfaces (again), but with conditional compilation (see the sketch below). This makes it less of a deployment issue and more of a compilation problem. WebTimer would have #if WEB_PLATFORM around it, and so on.
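A small sketch of what option 2 could look like (the ITimer implementations and compilation symbols are illustrative; the symbols would be defined per project or build configuration):

public static class TimerFactory
{
    public static ITimer Create()
    {
#if SILVERLIGHT
        return new SilverlightTimer();
#elif WEB_PLATFORM
        return new WebTimer();
#else
        return new DesktopTimer();
#endif
    }
}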
Personally, I'd lean toward #1, but in a complicated application you'll most likely end up using both, because of slight differences in the available parts of the .NET Framework between Silverlight and everything else. You may even want different behavior in core parts of your app just for performance reasons.
I think interfaces are a good choice here (defining what a timer will do without actually implementing it)
public interface ITimer
{
    void CreateTimer(int _interval, TimerDelegate _delegate);
    void StopTimer();
    // etc...
} // eo interface ITimer
From this, you derive your concrete timers:
public class DesktopTimer : ITimer
{
} // eo class DesktopTimer

public class SilverlightTimer : ITimer
{
} // eo class SilverlightTimer

public class WebTimer : ITimer
{
} // eo class WebTimer
Then comes the fun part: how do we create the right timer? Here you could implement some kind of platform factory that returns the right timer depending on what platform it is running on. Here is a quick and dirty idea (I would make it more dynamic than this, and perhaps implement one factory for multiple kinds of classes, but this is an example):
public enum Platform
{
    Desktop,
    Web,
    Silverlight
} // eo enum Platform

public class TimerFactory
{
    private class ObjectInfo
    {
        private string m_Assembly;
        private string m_Type;

        // ctor
        public ObjectInfo(string _assembly, string _type)
        {
            m_Assembly = _assembly;
            m_Type = _type;
        } // eo ctor

        public ITimer Create()
        {
            return (ITimer)AppDomain.CurrentDomain.CreateInstanceAndUnwrap(m_Assembly, m_Type);
        }
    } // eo class ObjectInfo

    Dictionary<Platform, ObjectInfo> m_Types = new Dictionary<Platform, ObjectInfo>();

    public TimerFactory()
    {
        m_Types[Platform.Desktop] = new ObjectInfo("Desktop", "MyNamespace.DesktopTimer");
        m_Types[Platform.Silverlight] = new ObjectInfo("Silverlight", "MyNameSpace.SilverlightTimer");
        // ...
    } // eo ctor

    public ITimer Create()
    {
        // based on platform, create appropriate ObjectInfo
    } // eo Create
} // eo class TimerFactory
As I mentioned above, I would not have a factory for every type of object, but make a generic platform factory that could handle timers, containers, and whatever else you want. This is just an example.
The Model-View-Presenter pattern is a really good approach if you want to separate all of your user interface logic from the actual GUI framework you are using. Read Michael Feathers' article "The Humble Dialog Box" for an excellent explanation of how it works:
http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf
The original article was made for C++, if you want a C# example, look here:
http://codebetter.com/blogs/jeremy.miller/articles/129546.aspx
The Pros are:
you will make your GUI logic reusable
your GUI logic becomes amenable to unit testing
The Cons:
if your program does not need more than one GUI framework, this approach produces more lines of code, and you have to deal with more complexity, since throughout your coding you have to decide which parts of your code belong in the view and which in the presenter
Go with all the OOD you know. I'd suggest creating a platform-agnostic (Windows, Mono/desktop, web) domain model. Use abstract classes to model platform-dependent stuff (like the timer). Use Dependency Injection and/or Factory patterns to pick the specific implementations.
EDIT: at some point you have to specify which concrete classes to use, but the above-mentioned patterns can bring all that code into one place without using conditional compilation.
EDIT: an example of DI/Factory. Of course you can use one of the existing frameworks, which will give you more power and expressiveness. For this simple example it seems like overkill, but the more complicated the code, the bigger the gain from using these patterns.
// Common.dll
public interface IPlatformInfo
{
    string PlatformName { get; }
}

public interface PlatformFactory
{
    IPlatformInfo CreatePlatformInfo();
    // other...
}

public class WelcomeMessage
{
    private IPlatformInfo platformInfo;

    public WelcomeMessage(IPlatformInfo platformInfo)
    {
        this.platformInfo = platformInfo;
    }

    public string GetMessage()
    {
        return "Welcome at " + platformInfo.PlatformName + "!";
    }
}

// WindowsApp.exe
public class WindowsPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Windows"; }
    }
}

public class WindowsPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new WindowsPlatformInfo();
    }
}

public class WindowsProgram
{
    public static void Main(string[] args)
    {
        var factory = new WindowsPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}

// MonoApp.exe
public class MonoPlatformInfo : IPlatformInfo
{
    public string PlatformName
    {
        get { return "Mono"; }
    }
}

public class MonoPlatformFactory : PlatformFactory
{
    public IPlatformInfo CreatePlatformInfo()
    {
        return new MonoPlatformInfo();
    }
}

public class MonoProgram
{
    public static void Main(string[] args)
    {
        var factory = new MonoPlatformFactory();
        var message = new WelcomeMessage(factory.CreatePlatformInfo());
        Console.WriteLine(message.GetMessage());
    }
}
As others have suggested, interfaces are the way to go here. I would alter the interface from Moo-Juice's suggestion slightly...

public interface ITimer
{
    void StopTimer();   // etc...
    void StartTimer();  // etc...
    TimeSpan Duration { get; }
} // eo interface ITimer
Now you need to get the ITimer into the class that uses it. The simplest way to do this is called dependency injection, and the most common approach to achieve it is constructor injection.
So when creating a class that needs a timer, you pass a timer into the class when constructing it.
Basically you do:
var foo = new Foo(new WebTimer());
Since that will get complicated quite fast, you can use some helpers. This pattern is called inversion of control. There are frameworks that will help you, like Ninject or Castle Windsor.
Both are inversion of control (IoC) containers. (That's the secret sauce.)
Basically you "register" your timer in the IoC container, and also register your "Foo". When you need a "Foo", you ask your IoC container to create one. The container looks at the constructor, finds that it needs an ITimer, creates one for you, passes it into the constructor, and finally hands you the completed class.
Inside your class you don't need any knowledge about the ITimer or how to create it, since all of that has been moved outside.
For different applications you now only need to register the correct components, and you are done (see the sketch below)...
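For example, with Ninject the wiring might look roughly like this (assuming the ITimer and WebTimer types from above, and that Foo takes an ITimer in its constructor):

using Ninject;

public static class CompositionRoot
{
    public static Foo CreateFoo()
    {
        var kernel = new StandardKernel();

        // Per application, bind the platform-appropriate implementation.
        kernel.Bind<ITimer>().To<WebTimer>();

        // Ninject inspects Foo's constructor, sees it needs an ITimer,
        // builds a WebTimer and passes it in.
        return kernel.Get<Foo>();
    }
}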
P.S.: Be careful not to confuse the IoC container with a service locator...
Links:
http://ninject.org/download
http://www.castleproject.org/container/index.html
http://www.pnpguidance.net/Category/Unity.aspx
Why not have a configuration section which tells your library about the platform of the host application? You have to set this only once, in the host application's config file (web.config or app.config), and the rest you can handle using a factory method as suggested by Moo-Juice. You can then use the platform detail across the entire functionality of the library.
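A rough sketch of that idea (the "Platform" key name and the factory overload taking a Platform argument are assumptions, not part of the earlier answers):

using System;
using System.Configuration;

// Read the platform once from the host's app.config/web.config, e.g.
//   <appSettings><add key="Platform" value="Silverlight" /></appSettings>
public static class PlatformConfig
{
    public static Platform Current
    {
        get
        {
            var value = ConfigurationManager.AppSettings["Platform"];
            return (Platform)Enum.Parse(typeof(Platform), value, ignoreCase: true);
        }
    }
}

// Usage: hand the configured platform to the factory from the earlier answer,
// assuming its Create method is extended to take a Platform argument.
// var timer = new TimerFactory().Create(PlatformConfig.Current);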
I am having difficulty figuring out when a dependency should be injected. Let's just work with a simple example from my project:
class CompanyDetailProvider : ICompanyDetailProvider
{
    private readonly FilePathProvider provider;

    public CompanyDetailProvider(FilePathProvider provider)
    {
        this.provider = provider;
    }

    public IEnumerable<CompanyDetail> GetCompanyDetailsForDate(DateTime date)
    {
        string path = this.provider.GetCompanyDetailFilePathForDate(date);
        var factory = new DataReaderFactory();
        Func<IDataReader> sourceProvider = () => factory.CreateReader(
            DataFileType.FlatFile,
            path);
        var hydrator = new Hydrator<CompanyDetail>(sourceProvider);
        return hydrator;
    }
}
(Not production quality!)
ICompanyDetailProvider is responsible for providing instances of CompanyDetail to consumers. The concrete implementation, CompanyDetailProvider, does it by hydrating instances of CompanyDetail from a file using a Hydrator<T>, which uses reflection to populate instances of T sourced from an IDataReader. Clearly CompanyDetailProvider depends on DataReaderFactory (which returns instances of OleDbDataReader given a path to a file) and Hydrator. Should these dependencies be injected? Is it right to inject FilePathProvider? What qualities do I examine to decide whether they should be injected?
I evaluate dependencies' points of use through the intent/mechanism lens: is this code clearly communicating its intent, or do I have to extract that from a pile of implementation details?
If the code indeed looks like a pile of implementation details, I determine the inputs and outputs and create an entirely new dependency to represent the why behind all the how. I then push the complexity into the new dependency, making the original code simpler and clearer.
When I read the code in this question, I clearly see the retrieval of a file path based on a date, followed by an opaque set of statements which don't clearly communicate the goal of reading an entity of a certain type from a certain path. I can work my way through it, but that breaks my stride.
I suggest you raise the level of abstraction of the second half of the method, after you get the path. I would start by defining a dependency which represents the code's inputs and outputs:
public interface IEntityReader
{
    IEnumerable<T> ReadEntities<T>(string path);
}
Then, rewrite the original class using this intention-revealing interface:
public sealed class CompanyDetailProvider : ICompanyDetailProvider
{
    private readonly IFilePathProvider _filePathProvider;
    private readonly IEntityReader _entityReader;

    public CompanyDetailProvider(IFilePathProvider filePathProvider, IEntityReader entityReader)
    {
        _filePathProvider = filePathProvider;
        _entityReader = entityReader;
    }

    public IEnumerable<CompanyDetail> GetCompanyDetailsForDate(DateTime date)
    {
        var path = _filePathProvider.GetCompanyDetailsFilePathForDate(date);
        return _entityReader.ReadEntities<CompanyDetail>(path);
    }
}
Now you can sandbox the gory details, which become quite cohesive in isolation:
public sealed class EntityReader : IEntityReader
{
    private readonly IDataReaderFactory _dataReaderFactory;

    public EntityReader(IDataReaderFactory dataReaderFactory)
    {
        _dataReaderFactory = dataReaderFactory;
    }

    public IEnumerable<T> ReadEntities<T>(string path)
    {
        Func<IDataReader> sourceProvider =
            () => _dataReaderFactory.CreateReader(DataFileType.FlatFile, path);
        return new Hydrator<T>(sourceProvider);
    }
}
As shown in this example, I think you should abstract the data reader factory away and directly instantiate the hydrator. The distinction is that EntityReader uses the data reader factory, while it only creates the hydrator. It isn't actually dependent on the instance at all; instead, it serves as a hydrator factory.
I tend to be on the more liberal side of injecting dependencies, so I would definitely want to inject both the data reader factory (to get rid of the new DataReaderFactory()) and the Hydrator. It keeps everything more loosely coupled, which of course makes it easier to maintain.
Another benefit that is easy to attain right away is better testability. You can create mocks of the data reader factory and Hydrator to isolate your unit tests to just the GetCompanyDetailsForDate method, and not have to worry about what happens inside the data reader and hydrator.
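As a rough illustration of that isolation, a test against the refactored CompanyDetailProvider from the previous answer might look like this (Moq and xUnit are assumed, as is a parameterless constructor on CompanyDetail; the path and values are arbitrary):

using System;
using Moq;
using Xunit;

public class CompanyDetailProviderTests
{
    [Fact]
    public void GetCompanyDetailsForDate_ReadsEntitiesFromProvidedPath()
    {
        var date = new DateTime(2015, 1, 1);

        // Mock the collaborators so only GetCompanyDetailsForDate is exercised.
        var pathProvider = new Mock<IFilePathProvider>();
        pathProvider.Setup(p => p.GetCompanyDetailsFilePathForDate(date))
                    .Returns("companies.txt");

        var entityReader = new Mock<IEntityReader>();
        entityReader.Setup(r => r.ReadEntities<CompanyDetail>("companies.txt"))
                    .Returns(new[] { new CompanyDetail() });

        var sut = new CompanyDetailProvider(pathProvider.Object, entityReader.Object);

        var result = sut.GetCompanyDetailsForDate(date);

        Assert.Single(result);
    }
}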
How to determine if a class should use dependency injection
Does this class require an external dependency?
If yes, inject.
If no, there is nothing to inject.
To answer "Is it right to inject FilePathProvider?": yes, it is.
Edit: For clarification, an external dependency arises wherever you call into an unrelated but depended-upon class, especially when it involves physical resources such as reading file paths from disk, but it also covers any kind of service or model class that performs logic independent of the core functionality of the class.
Generally this can be summarized as: any time you call the new operator. In most circumstances you want to refactor away all usages of the new operator that deal with any class other than a data transfer object. When the class is internal to the usage location, a new statement can be fine if it reduces complexity, as with the new DataReaderFactory(); however, that does also appear to be a very good candidate for constructor injection.