Setter Injection or Ambient Context pattern - C#

I have some global components and I am not sure how to fit them into my design. For example:
Settings class: it exposes the initial settings of the program, which could come from app.config (one-way), web.config (one-way), hard-coded values (one-way), or a SQL database (two-way) behind the scenes.
Language class: it contains different language sets, and again I could have resx files (one-way), hard-coded values (one-way), or a SQL database (two-way) behind it.
My first question is: should I make these setter-injected properties (I use Windsor):
public ISettings Settings {set;}
public ILanguage Language {set;}
Or should I make them ambient context:
string DoSomethingAndReportIt()
{
    // do something ...
    var param = Settings.Current.SomeParam;
    // report it ...
    return Language.Current.SomeClass_SomeMethod_Job_Done;
}
I notice there are a few components in the .NET library that actually use the ambient context pattern, e.g. System.Security.Principal, System.Web.Profile.ProfileBase, Thread.CurrentThread.CurrentCulture ...
Do you think there is any harm in making my global classes, such as Settings and Language, ambient contexts? If so, why is DI preferred? Does DI have an advantage in unit testing compared to ambient context?
My second question is: if DI is better (I have a feeling the DI pattern is preferred), what is a good way to proxy the existing ambient classes such as Security.Principal or Profile so that they follow the DI pattern?

Ambient context is OK when you need to implement functionality that spans multiple layers (in your case you say that the two objects are global). Such functionality is known as a cross-cutting concern. As you noticed, many classes in .NET are implemented as ambient context, like IPrincipal. To get a working implementation of ambient context, you will need to provide some default value for your Settings and Language objects. My assumption is that you will provide default implementations of ILanguage and ISettings, and considering that you will use them globally, they are good candidates for ambient context.
On the other hand, how often do you plan to use those objects that implement these two interfaces? And, is the existence of the two objects crucial, meaning Settings != null and Language != null? If you really intend to use them in one or two classes, and/or if the existence of the objects is not really important, you might want to go with the setter injection. The setter injection does not really need a default value, so your object can be null.
Personally, I am not a fan of ambient context. However, I would use it if it turned out to be the most acceptable solution. In your case I would do something like this: since you need to initialize the objects which implement the two interfaces once, and in one location only, you could start with ambient context. If you realize that you are using it in only a very small number of locations, think about refactoring it to setter injection. If the existence of the objects is important, consider constructor injection.
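To make the trade-off concrete, here is a minimal sketch of such an ambient context with a guaranteed default, so the static access point is never null. All names here are illustrative, not from your code:

```csharp
using System;

public interface ISettings
{
    string Get(string key);
}

// Hypothetical default implementation so Settings.Current is never null.
public class DefaultSettings : ISettings
{
    public string Get(string key) => string.Empty;
}

// Ambient context: a static access point guaranteeing a usable default.
public static class Settings
{
    private static ISettings _current = new DefaultSettings();

    public static ISettings Current
    {
        get => _current;
        set => _current = value ?? throw new ArgumentNullException(nameof(value));
    }
}
```

The null check in the setter is what keeps the ambient context's invariant: consumers may always read `Settings.Current` without a guard.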


What is the point of dependency injection container? [duplicate]

With .NET Core you can register "services", which, as I understand it, simply means you can map types to concrete classes.
As such, I decided it's about time I learnt DI and practised it. I understand the concept, and with testing it is massively beneficial. However, what confuses me is the idea of registering services and whether it's actually needed.
For example, if I have:
public class MyClass
{
    private readonly IDataContext _dataContext;

    public MyClass(IDataContext dataContext)
    {
        _dataContext = dataContext; // store it
    }
}
Then this means I can inject any class that implements IDataContext, allowing for fakes and mocks in testing. But why would I register a service and map IDataContext to a concrete class in the startup? Is there something wrong with just using the following in other methods:
DataContext dc = new DataContext(); // concrete
var c = new MyClass(dc);
Edit
This question was about the point of using the container (services), not about why to use an interface in the constructor.
Now those classes where you put this code
public class MyService
{
    public void DoSomething()
    {
        DataContext dc = new DataContext(); // concrete
        var c = new MyClass(dc);
        c.DoSomething();
    }
}
have a hard dependency on DataContext and MyClass. So you can't test MyService in isolation. Classes shouldn't care how other classes do what they do, they should only care that they do what they say they're going to do. That's why we use interfaces. This is separation of concerns. Once you've achieved this, you can unit test any piece of code in isolation without depending on the behavior of outside code.
Registering your dependencies up front in one location is also cleaner and means you can swap dependencies out by changing one location instead of hunting down all the usages and changing them individually.
In my code example at the top, MyService requires the usage of both DataContext and MyClass. Instead, it should be like this:
public class MyService
{
    private readonly IMyClass _myClass;

    public MyService(IMyClass myClass)
    {
        _myClass = myClass;
    }

    public void DoSomething()
    {
        _myClass.DoSomething();
    }
}

public interface IMyClass
{
    void DoSomething();
}

public class MyClass : IMyClass
{
    private readonly IDataContext _context;

    public MyClass(IDataContext context)
    {
        _context = context;
    }

    public void DoSomething()
    {
        _context.SaveSomeData();
    }
}
Now MyService isn't dependent on DataContext at all; it doesn't need to worry about it because that's not its job. It does need something that fulfills IMyClass, but it doesn't care how it's implemented. MyService.DoSomething() can now be unit tested without depending on the behavior of other code.
If you weren't using a container to handle satisfying the dependencies, then you're probably introducing hard dependencies into your classes, which defeats the entire point of coding against an interface in the first place.
Testing in isolation is important. It's not a unit test if you're testing more than one finite piece of code; it's an integration test (which has its own value for different reasons). Unit tests make it quick and easy to verify a finite block of code works as expected. When a unit test isn't passing, you know right where the problem is and don't have to search hard to find it. So if a unit test depends on other types, or even other systems (likely in this case, DataContext is specific to a particular database), then we can't test MyService without touching a database. And that means the database must be in a particular state for testing, which means the test likely isn't repeatable (you can't run it over and over and expect the same results).
For more information, I suggest you watch Deep Dive into Dependency Injection and Writing Decoupled Quality Code and Testable Software by Miguel Castro. The best point he makes is that if you have to use new to create an instance of an object, you've tightly coupled things. Avoid using new, and Dependency Injection is a pattern that enables you to avoid it. (using new isn't always bad, I'm comfortable with using new for POCO models).
You can inject your dependencies manually. However, this can become a very tedious task. As your services get bigger, you will have more dependencies, and each dependency can have multiple dependencies of its own.
If you change your dependencies, you need to adjust all usages. One of the main advantages of a DI container is, that the container will do all dependency resolving. No manual work required. Just register the service and use it wherever you want and how often you want.
For small projects this seems like too much overhead, but if your project grows a little, you will really appreciate this.
For fixed dependencies, which are strongly related and not likely to change, injecting them manually is fine.
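To see what the container saves you from, here is a toy resolver sketch: map abstractions to concrete types once, then let reflection build whole constructor graphs recursively. Real containers (Windsor, Ninject, the built-in .NET Core one) do this plus lifetimes, disposal and much more; the example types at the bottom are purely illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A toy sketch of what a DI container does for you: register mappings in
// one place, then resolve entire dependency graphs automatically.
public class TinyContainer
{
    private readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

    public void Register<TService, TImpl>() where TImpl : TService =>
        _map[typeof(TService)] = typeof(TImpl);

    public T Resolve<T>() => (T)Resolve(typeof(T));

    private object Resolve(Type service)
    {
        // Use the registered mapping if there is one, else the type itself.
        var impl = _map.TryGetValue(service, out var mapped) ? mapped : service;
        var ctor = impl.GetConstructors().Single();
        // Recursively resolve every constructor parameter.
        var args = ctor.GetParameters()
                       .Select(p => Resolve(p.ParameterType))
                       .ToArray();
        return ctor.Invoke(args);
    }
}

// Example graph: App depends on IGreeter; only the mapping is registered.
public interface IGreeter { string Greet(); }
public class Greeter : IGreeter { public string Greet() => "hi"; }
public class App
{
    public readonly IGreeter Greeter;
    public App(IGreeter greeter) => Greeter = greeter;
}
```

Notice that the caller asks only for `App`; the container works out that an `IGreeter` is needed and which concrete class satisfies it.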
Using a DI container has another advantage: the container controls the life cycle of its services. A service can be a singleton, transient (each request gets a new instance), or have a scoped lifetime.
For example, suppose you have a transactional workflow. The scope could match the transaction: while inside the transaction, requests for a service return the same instance.
The next transaction opens a new scope and therefore gets new instances.
This allows you to either discard or commit all instances of one transaction, and prevents a following transaction from using resources from the previous one.
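One way to picture that scoped lifetime, as a sketch rather than a real container API: within one scope every request for a type returns the same instance, and a new scope starts fresh.

```csharp
using System;
using System.Collections.Generic;

// Sketch of scoped lifetime (illustrative, not a real container API).
public class Scope : IDisposable
{
    private readonly Dictionary<Type, object> _instances = new Dictionary<Type, object>();

    public T Get<T>() where T : new()
    {
        // First request creates the instance; later requests reuse it.
        if (!_instances.TryGetValue(typeof(T), out var instance))
            _instances[typeof(T)] = instance = new T();
        return (T)instance;
    }

    // End of scope: commit or discard everything created in this scope
    // (e.g. the end of one transaction or one web request).
    public void Dispose() => _instances.Clear();
}
```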
You're right, you can create all instances manually. In small projects that's common practice. The place in your project where you link the classes together is called the Composition Root, and what you are doing is Constructor Injection.
IoC libraries can simplify this code, especially for complex cases like lifetime scopes and group registration.
Inversion of control (of constructing an object)
The idea of this pattern is that, when you want to construct an object, you only need to know the type of the object and nothing about its dependencies or parameters.
Dependency injection
This pattern takes the inversion-of-control pattern a step further by enabling you to inject an object directly into, for example, a constructor.
Again, you only need to know the type of the object you want to get, and the dependency container will inject an object.
You also don't need to know whether a new object is constructed or you get an already existing reference.
The most commonly used type of dependency injection is constructor injection, but you can inject your type into other places, such as methods.
Separation of concerns
Generally you register a type by an interface to get rid of the dependency on the concrete type.
This is very helpful for mocking types when testing, and it helps you follow the open/closed principle.
Martin Fowler on "Inversion of Control Containers and the Dependency Injection pattern".

Architecture to avoid creating repositories inside testable methods

Working on an MVVM application. Each ViewModel class has a constructor which accepts a Repository class so that it can be mocked out for unit testing.
The application is designed to be operated across several windows at once, so it contains a number of "View"- or "Open"-style methods which create new ViewModels and place them into new windows. Because these are triggered via the UI, they are often inside existing ViewModels. For instance:
public void ViewQuote(Quote quote)
{
    if (quote.CreatedOn == null)
    {
        quote.CreatedOn = DateTime.Now;
    }

    NavigationHelper.NewWindow(this, new QuoteViewModel(quote, new Repository()));
}
Now, that flow control statement looks worth testing to ensure that quotes passed with a null CreatedOn date get assigned one. However, my test for this fails because although the parent ViewModel has a mocked Repository, the NewWindow method spins up a new ViewModel with a real-life Repository inside it. This then throws an error when it's used inside the constructor of that class.
There are two obvious options.
One is to pull out the date assignment into a stand-alone function to test. That'll work, but it seems too simplistic for its own function. Plus if I do it all over the application it risks creating too much fragmentation for easy readability.
The other is to somehow change the constructor code for ViewModels to not use the Repository directly. That might be an option here, but it's unlikely to be workable for every possible scenario.
Or is there a third way to design this better so I can pass a mocked Repository into the constructor of my new ViewModel?
Newing up services (or service-like objects such as repositories) is a design smell. And the problems that you're experiencing are the consequence.
In other words, you are lacking a clear and well-defined Composition Root.
Solution: Use proper dependency injection
The only clean solution to this is to inject services through the constructor. Repositories usually have a shorter lifecycle than the application itself, so in this case you would inject a factory that is able to create the repository.
Note that clear dependency trees are good design, but using a DI framework such as Autofac is only one technical solution to implement such a design. You can completely solve your problems and create a clean composition root without using a DI framework.
So although this is probably a lot of work, you should redesign your application to have a clear composition root. Otherwise, you will run into small issues over and over again, especially in the testing area.
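As a sketch of the factory-injection idea (the `IRepository` name follows the question; everything else is illustrative), the ViewModel can take a `Func<IRepository>` instead of newing up the repository itself:

```csharp
using System;

public interface IRepository { }
public class Repository : IRepository { }

// Sketch: the parent ViewModel receives a repository factory through its
// constructor, so it never calls "new Repository()" directly.
public class ParentViewModel
{
    private readonly Func<IRepository> _repositoryFactory;

    public ParentViewModel(Func<IRepository> repositoryFactory) =>
        _repositoryFactory = repositoryFactory;

    // Each new child window gets a freshly created repository.
    public IRepository CreateRepositoryForChild() => _repositoryFactory();
}
```

In production the composition root passes `() => new Repository()`; a test passes a factory that returns a mock, so flow-control logic like the `CreatedOn` check becomes testable without touching a real repository.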
You could work around it by adding a public Repository NewRepository property and changing your ViewQuote method to look like this:
public void ViewQuote(Quote quote)
{
    if (quote.CreatedOn == null)
    {
        quote.CreatedOn = DateTime.Now;
    }

    if (NewRepository == null) NewRepository = new Repository();
    NavigationHelper.NewWindow(this, new QuoteViewModel(quote, NewRepository));
}
Then in your mock, just ensure that the public NewRepository property is assigned to before the test is run on that bit of code.
Not very elegant, but it would require the least amount of changing in my opinion.

Avoid a global state

Imagine some SOA. We have a few different services, of which the OperationContext is extended by some SecureOperationContext which makes sure certain security demands are met.
Assume furthermore that sometimes we might need to know a certain property from this SecureOperationContext somewhere else, in a place where there is no SecureOperationContext and never will be. For example, a username for some sort of logging purpose.
Currently we're using something that looks and smells plain dirty; the fat's dripping off, in my opinion.
In some 'Common' library, there is a class defined with a ThreadStatic property: Username. I guess you can catch my drift: the security code sets this static global variable and, lo and behold, we have it available for logging purposes.
This bugs me, but on the other hand, what else can I do? I was thinking about creating a method that takes the username as a string parameter, but then all my methods would still need to read that username property, which is not DRY.
So on one hand everything is handled in the background this way, but I'm not quite happy having to maintain some (global) class just to achieve this.
Any tips?
I'm not sure how to put it in less abstract terms, but here goes (in pseudo):
public WebService
{
    public Save(Car car)
    {
        // Some SecurityContext is known here; it holds top-secret info,
        // like the username, and sets it into the global helper class
        // UserNameManager.

        // car has, for example, a CreatedDate property (from an interface),
        // but I don't want to handle this property in every Create method;
        // it can be handled in some general piece of code.
        efcontainer.AddObject(car);
        efcontainer.SaveChanges();
        // Now SaveChanges will check the objects in the ObjectStateManager
        // and set the appropriate property via the global thing.
    }
}
Now, how do I rid myself of this global variable? Passing a username to SaveChanges is undesirable as well, since we'd still have to repeat that manually for everything.
Encapsulate the global property in a service. Define an interface for that service. Now, depend on that interface everywhere you need the data by having a constructor parameter of that type.
This is called dependency injection and is a very important concept when you want to avoid problems as the one you currently have. A dependency injection container such as Autofac can help if you have a big application, but is not strictly required.
The most important thing is to understand dependency injection and have a well-defined composition root, no matter whether you use a DI container or do it yourself.
The security stuff sets this static global variable and lo and behold we have it available for logging purposes.
This sounds like the data is determined dynamically. Note that you can still use a service to track the value. That service also knows whether the value is available or not. This way, you can better manage the temporal coupling that you have at the moment.
Edit: You can further improve the design by creating the client objects through a factory. That factory can ensure that the value is available, so it couples the lifetime of the client objects to the availability of the value. This way, you are sure to always act in a context where the value can be safely accessed.
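A sketch of what such a service could look like, here implemented as an adapter over the existing thread-static field (all names are hypothetical):

```csharp
using System;

// Hypothetical service interface replacing direct reads of the global.
public interface IUserContext
{
    bool IsAvailable { get; }
    string Username { get; }
}

// One possible implementation: an adapter over the existing thread-static
// field, so callers depend on the interface, not on the global itself.
public class ThreadStaticUserContext : IUserContext
{
    [ThreadStatic] private static string _username;

    // Called by the security layer when the context is established.
    public static void Set(string username) => _username = username;

    public bool IsAvailable => _username != null;

    // The temporal coupling is now explicit instead of a silent null.
    public string Username =>
        _username ?? throw new InvalidOperationException("No user has been set yet.");
}
```

Consumers take an `IUserContext` through their constructor, so tests can substitute a stub and the global is confined to one adapter class.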

Should factories set model properties?

As part of an overall S.O.L.I.D. programming effort I created a factory interface & an abstract factory within a base framework API.
People are already starting to overload the factory's Create method. The problem is that they are overloading Create with model properties (and thereby expecting the factory to populate them).
In my opinion, property setting should not be done by the factory. Am I wrong?
public interface IFactory
{
    I Create<C, I>();
    I Create<C, I>(long id); // <-- I feel doing this is incorrect

    IFactoryTransformer Transformer { get; }
    IFactoryDataAccessor DataAccessor { get; }
    IFactoryValidator Validator { get; }
}
UPDATE - For those unfamiliar with SOLID principles, here are a few of them:
Single Responsibility Principle
It states that every object should have a single responsibility, and that responsibility should be entirely encapsulated by the class
Open/Closed Principle
The meaning of this principle is that when you get a request for a feature that needs to be added to your application, you should be able to handle it without modifying old classes, only by adding subclasses and new implementations.
Dependency Inversion Principle
It says that you should decouple your software modules. To achieve that you’d need to isolate dependencies.
Overall:
I'm 90% sure I know the answer. However, I would like some good discussion from people already using SOLID. Thank you for your valuable opinions.
UPDATE - So what do I think a SOLID factory should do?
IMHO a SOLID factory serves up appropriate object instances...but does so in a manner that hides the complexity of object instantiation. For example, if you have an Employee model...you would ask the factory to get you the appropriate one. The DataAccessorFactory would give you the correct data-access object, the ValidatorFactory would give you the correct validation object, etc.
For example:
var employee = Factory.Create<ExxonMobilEmployee, IEmployee>();
var dataAccessorLdap = Factory.DataAccessor.Create<LDAP, IEmployee>();
var dataAccessorSqlServer = Factory.DataAccessor.Create<SqlServer, IEmployee>();
var validator = Factory.Validator.Create<ExxonMobilEmployee, IEmployee>();
Taking the example further we would...
var audit = new Framework.Audit();   // or have the factory hand it to you
var result = new Framework.Result(); // or have the factory hand it to you

// Save your audit info
audit.Username = "prisonerzero";

// Get from LDAP (example only)
employee.Id = 10;
result = dataAccessorLdap.Get(employee, audit);
employee = result.Instance; // all operations use the same Result object

// Update the model
employee.FirstName = "Scooby";
employee.LastName = "Doo";

// Validate
result = validator.Validate(employee);

// Save to SQL
if (!result.HasErrors)
    dataAccessorSqlServer.Add(employee, audit);
UPDATE - So why am I adamant about this separation?
I feel segregating responsibilities makes for smaller objects, smaller Unit Tests and it enhances reliability & maintenance. I recognize it does so at the cost of creating more objects...but that is what the SOLID Factory protects me from...it hides the complexity of gathering and instantiating said objects.
I'd say it's sticking to the DRY principle, and as long as it's simple value wiring I don't see it as a problem or a violation. Instead of having
var model = this.factory.Create();
model.Id = 10;
model.Name = "X20";
scattered all around your code base, it's almost always better to have it in one place. Future contract changes, refactorings or new requirements will be much easier to handle.
It's worth noting that if such object creation followed immediately by property setting is common, then that's a pattern your team has evolved, and developers adding overloads are only responding to that fact (notably, in a good way). Introducing an API to simplify this process is the right thing to do.
And again, if it narrows down to simple assignments (like in your example) I wouldn't hesitate to keep the overloads, especially if it's something you notice often. When things get more complicated, it would be a sign of new patterns being discovered and perhaps then you should resort to other standard solutions (like the builder pattern for example).
Assuming that your factory interface is used from application code (as opposed to infrastructural Composition Root), it actually represents a Service Locator, which can be considered an anti-pattern with respect to Dependency Injection. See also Service Locator: roles vs. mechanics.
Note that code like the following:
var employee = Factory.Create<ExxonMobilEmployee, IEmployee>();
is just syntactic sugar. It doesn't remove the dependency on the concrete ExxonMobilEmployee implementation.
You might also be interested in Weakly-typed versus Strongly-typed Message Channels and Message Dispatching without Service Location (those illustrate how such interfaces violate the SRP) and other publications by Mark Seemann.
After about six months of experience with Dependency Injection, I've discovered only a few cases where factories should set properties:
If the setter is marked as internal, and the properties are expected to be set once, by the factory only. This usually happens with interfaces that have getter-only properties whose implementations are expected to be created through a factory.
When the model uses property injection. I rarely see classes that use property injection (I also try to avoid building them), but when I do see one, and the needed service is only available elsewhere, you have no choice.
Bottom line: leave public setters out of factories, and only set properties that are marked internal. Let the clients decide which properties they need to set, if they are allowed to do so. This will keep your factories clean of unneeded functions.
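A minimal sketch of that internal-setter case (the Widget types are hypothetical): clients of the assembly see a read-only Id through the interface, while the factory, living in the same assembly, is the only code that can assign it.

```csharp
// Clients program against the interface: Id is read-only to them.
public interface IWidget
{
    long Id { get; }
}

public class Widget : IWidget
{
    // internal setter: only code in the defining assembly (the factory)
    // can assign the Id; the public surface stays getter-only.
    public long Id { get; internal set; }
}

public class WidgetFactory
{
    public IWidget Create(long id) => new Widget { Id = id };
}
```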

Constructor injection overuse

I am looking for best practices for avoiding constructor injection overuse. For example, I have a Meeting entity which has a few sub-entities, as shown below:
Meeting
MeetingContacts
MeetingAttendees
MeetingType
Address
MeetingCompanies
MeetingNotes
MeetingService class looks like below:
public class MeetingService
{
    private readonly IMeetingContactRepository _meetingContactRepository;
    private readonly IMeetingAttendeeRepository _meetingAttendeeRepository;
    private readonly IMeetingTypeRepository _meetingTypeRepository;
    private readonly IAddressRepository _addressRepository;
    private readonly IMeetingCompanyRepository _meetingCompanyRepository;
    private readonly IMeetingNoteRepository _meetingNoteRepository;
    private readonly IMeetingRepositoy _meetingReposity;

    public MeetingService(IMeetingRepositoy meetingReposity,
        IMeetingContactRepository meetingContactRepository,
        IMeetingAttendeeRepository meetingAttendeeRepository,
        IMeetingTypeRepository meetingTypeRepository,
        IAddressRepository addressRepository,
        IMeetingCompanyRepository meetingCompanyRepository,
        IMeetingNoteRepository meetingNoteRepository)
    {
        _meetingReposity = meetingReposity;
        _meetingContactRepository = meetingContactRepository;
        _meetingAttendeeRepository = meetingAttendeeRepository;
        _meetingTypeRepository = meetingTypeRepository;
        _addressRepository = addressRepository;
        _meetingCompanyRepository = meetingCompanyRepository;
        _meetingNoteRepository = meetingNoteRepository;
    }

    public void SaveMeeting(Meeting meeting)
    {
        _meetingReposity.Save();
        if (Condition1())
            _meetingContactRepository.Save();
        if (Condition2())
            _meetingAttendeeRepository.Save();
        if (Condition3())
            _meetingTypeRepository.Save();
        if (Condition4())
            _addressRepository.Save();
        if (Condition5())
            _meetingCompanyRepository.Save();
        if (Condition6())
            _meetingNoteRepository.Save();
    }

    // ... other methods
}
There are just seven dependencies here, but the real code contains many more. I have used the different techniques described in "Dependency Injection Constructor Madness", but I have not found how to deal with repository dependencies.
Is there any way how I can reduce the number of dependencies and keep the code testable?
Constructor overuse is just a symptom - it seems you are approximating a unit of work by having a "master" class that knows about the various elements of message persistence and plugs them into the overall save.
The downside is that each repository communicates its independence of the others by exposing a dedicated Save method; this is incorrect, though, as SaveMeeting explicitly states that the repositories are not independent.
I suggest identifying or creating a type that the repositories consume; this centralizes your changes and allows you to save them from a single place. Examples include DataContext (LINQ to SQL), ISession (NHibernate), and ObjectContext (Entity Framework).
You can find more information on how the repositories might work in a previous answer of mine:
Advantage of creating a generic repository vs. specific repository for each object?
Once you have the repositories, you would identify the context in which they would act. This generally maps to a single web request: create an instance of the common unit of work at the beginning of the request and hand it to all the repositories. At the end of the request, save the changes in the unit of work, leaving the repositories free to worry about what data to access.
This neatly captures and saves everything as a single unit. This is very similar to the working copy of your source control system: you pull the current state of the system into a local context, work with it, and save the changes when you're done. You don't save each file independently - you save them all at the same time as a discrete revision.
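A stripped-down sketch of that idea (names are hypothetical; in practice the shared type would be a DataContext, ISession, or ObjectContext as mentioned above): repositories only register changes against a shared unit of work, and a single Commit persists them all together.

```csharp
using System.Collections.Generic;

// Sketch of a shared unit of work: repositories register changes,
// one Commit at the end of the request persists them as a single unit.
public class UnitOfWork
{
    private readonly List<object> _pending = new List<object>();

    public void RegisterSave(object entity) => _pending.Add(entity);

    // A real implementation would flush everything in one transaction;
    // here we just report how many changes were committed together.
    public int Commit()
    {
        var count = _pending.Count;
        _pending.Clear();
        return count;
    }
}

// A repository no longer commits on its own; it only records intent.
public class MeetingNoteRepository
{
    private readonly UnitOfWork _unitOfWork;
    public MeetingNoteRepository(UnitOfWork unitOfWork) => _unitOfWork = unitOfWork;
    public void Save(object note) => _unitOfWork.RegisterSave(note);
}
```

Like the source-control analogy: each repository is a file edit, and Commit is the single revision that saves them all at once.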
To expand a little bit on my comment above:
Since this question is directed towards how to manage repository dependencies, I have to assume that the MeetingService is managing some sort of persistent commit. In the past, when I have seen classes like MeetingService with that many dependencies, it is clear they are doing too much. So, you have to ask yourself, "what is my transaction boundary". In other words, what is the smallest commit that you can make that means that a meeting has been successfully saved.
If the answer is that a meeting is successfully saved after a call to meetingReposity.Save(); then that is all that MeetingService should be managing (for the commit).
Everything else is, essentially, a side effect of the fact that a meeting has been saved (notice that now we are speaking in the past tense). At this point, event subscription for each of the other repositories makes more sense.
This also has the nice effect of separating the logic in all of the conditions into subscriber classes that follow SRP to handle that logic. This becomes important when the logic of when the contact repository commits goes through a change, for example.
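A sketch of that event-based shape (types are hypothetical; the repository calls are stubbed as comments): the service performs the one true commit, then raises an event, and each subscriber applies its own condition in its own class.

```csharp
using System;

// The service commits the meeting itself, then publishes the fact.
public class MeetingSavedPublisher
{
    public event Action<object> MeetingSaved;

    public void SaveMeeting(object meeting)
    {
        // _meetingRepository.Save(meeting);  // the transaction boundary
        MeetingSaved?.Invoke(meeting);        // everything else is a side effect
    }
}

// One subscriber per side effect keeps each condition in an SRP-friendly class.
public class MeetingContactSubscriber
{
    public int HandledCount;

    public void OnMeetingSaved(object meeting)
    {
        // if (Condition1()) _meetingContactRepository.Save();
        HandledCount++;
    }
}
```

Wiring (`publisher.MeetingSaved += subscriber.OnMeetingSaved;`) happens in the composition root, so MeetingService no longer needs seven repository dependencies.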
Hope this helps.
Each of the first three answers gives important suggestions and ideas for dealing with the problem in the abstract. However, I may be reading too much into your example above, but this looks like a problem of too many aggregate roots, not too many dependencies per se. It has to do with either a lack in the persistence mechanism underlying your repository-injection infrastructure, or a misconfiguration of the same.
Simply, the Contacts, Attendees, Notes, &c. should be composite properties of the Meeting itself (if only as links to separately managed Contact, &c. objects/data); therefore, your persistence mechanism should be saving them automatically.
Heeding Bryan Watts' adage that "constructor overuse is just a symptom," a couple of other possibilities:
Your persistence mechanism should be handling persistence of the Meeting graph automatically, and is either misconfigured or lacks the capability to do this (all three that Bryan suggests do this, and I would add DbContext (EF 4.1+)). In this case, there should really only be one dependency--IMeetingRepositoy--and it can handle atomic saves of the meeting and its composites itself.
SaveMeeting() is saving not just links to other objects (Contacts, Attendees, &c.) but is also saving those objects as well, in which case I would have to agree with dtryon that MeetingService and SaveMeeting() are doing far more than the names imply, and his mechanism could alleviate it.
Do you really need the repository functionality to be split into that many interfaces? Do you need to mock them separately? If not, you could have fewer interfaces, with more methods.
But let's assume your class really needs that many dependencies. In that case you could:
Create a configuration object (MeetingServiceBindings) that provides all the dependencies. You could have a single configuration object per whole module, not just single service. I don't think there's anything wrong with this solution.
Use a dependency-injection tool, like Ninject. It's quite simple: you can configure your dependencies in code, in one place, and you don't need any crazy XML files.
