Lately I've been trying to follow the TDD methodology, and this results in a lot of small classes so that one can easily mock dependencies, etc.
For example, I could have, say, a RecurringProfile, which in turn has operations that can be applied to it, like MarkAsCancel, RenewProfile, MarkAsExpired, etc. Due to TDD, these are implemented as 'small' classes, like MarkAsCancelService.
Does it make sense to create a 'facade' (singleton) for the various operations that can be performed on, say, a RecurringProfile? For example, a RecurringProfileFacade class, which would contain methods that delegate to the actual service classes, e.g.:
public class RecurringProfileFacade
{
    public void MarkAsCancelled(IRecurringProfile profile)
    {
        MarkAsCancelledService service = new MarkAsCancelledService();
        service.MarkAsCancelled(profile);
    }

    public void RenewProfile(IRecurringProfile profile)
    {
        RenewProfileService service = new RenewProfileService();
        service.Renew(profile);
    }
    ...
}
Note that the above code is not actual code, and the actual code would use constructor-injected dependencies. The idea behind this is that the consumer of such code would not need to know the inner details about which classes/sub-classes they need to call, but just access the respective 'Facade'.
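For illustration, a constructor-injected version of such a facade might look like the following sketch; the interface names and the minimal service bodies are invented here so the example is self-contained:

```csharp
// Hypothetical sketch: the facade receives its services via constructor
// injection instead of newing them up, so each service can be mocked.
public interface IRecurringProfile
{
    string Status { get; set; }
}

public class RecurringProfile : IRecurringProfile
{
    public string Status { get; set; } = "Active";
}

public interface IMarkAsCancelledService { void MarkAsCancelled(IRecurringProfile profile); }
public interface IRenewProfileService { void Renew(IRecurringProfile profile); }

// Minimal concrete services, invented so the sketch runs standalone.
public class MarkAsCancelledService : IMarkAsCancelledService
{
    public void MarkAsCancelled(IRecurringProfile profile) => profile.Status = "Cancelled";
}

public class RenewProfileService : IRenewProfileService
{
    public void Renew(IRecurringProfile profile) => profile.Status = "Active";
}

public class RecurringProfileFacade
{
    private readonly IMarkAsCancelledService _cancel;
    private readonly IRenewProfileService _renew;

    public RecurringProfileFacade(IMarkAsCancelledService cancel, IRenewProfileService renew)
    {
        _cancel = cancel;
        _renew = renew;
    }

    // Each method simply delegates to the corresponding service.
    public void MarkAsCancelled(IRecurringProfile profile) => _cancel.MarkAsCancelled(profile);
    public void RenewProfile(IRecurringProfile profile) => _renew.Renew(profile);
}
```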
First of all, is this the 'Facade' pattern, or is it some other form of design pattern?
The other question which would follow if the above makes sense is - would you do unit-tests on such methods, considering that they do not have any particular business logic function?
I would only create a facade like this if you intend to expose your code to others as a library: the facade becomes the interface everyone else uses.
This gives you some freedom to change the implementation later.
If this is not the case, then what purpose does this facade serve? If a piece of code wants to call one method on the facade, it will have a dependency on the entire facade. It is best to keep dependencies small, so calling code would be better off with a dependency on MarkAsCancelledService than one on RecurringProfileFacade.
In my opinion, this is a kind of facade pattern, since you are abstracting your services behind simple methods, though a facade usually has more logic behind its methods. That is because the purpose of the facade pattern is to offer a simplified interface over a larger body of code.
As for your second question, I always unit test everything. In your case it depends: does cancelling or renewing a profile change the state of your domain? If so, you could assert that the state changed as you expected.
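As a sketch of such a state-based test, assuming the profile exposes an observable status (all names are invented, and a real suite would use xUnit/NUnit attributes instead of a plain method that throws):

```csharp
// Invented types for illustration: a profile with observable state and
// a cancellation service that mutates it.
public enum ProfileStatus { Active, Cancelled }

public class RecurringProfile
{
    public ProfileStatus Status { get; set; } = ProfileStatus.Active;
}

public class MarkAsCancelledService
{
    public void MarkAsCancelled(RecurringProfile profile) =>
        profile.Status = ProfileStatus.Cancelled;
}

public static class MarkAsCancelledServiceTests
{
    // With a test framework this would be a [Fact]/[Test] method;
    // here it simply throws on failure.
    public static void MarkAsCancelled_SetsStatusToCancelled()
    {
        var profile = new RecurringProfile();
        new MarkAsCancelledService().MarkAsCancelled(profile);
        if (profile.Status != ProfileStatus.Cancelled)
            throw new System.Exception("Expected status to be Cancelled");
    }
}
```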
If your design "tells" you that you could use a Singleton to do some work for you, then it's probably bad design. TDD should lead you far away from thinking about using singletons.
Reasons why it's a bad idea (or can be an OK one) can be found on Wikipedia.
My answer to your questions is: look at other patterns! For example Unit of Work, Strategy, and Mediator; try to achieve the same functionality with these patterns and you'll be able to compare the benefits of each one. You'll probably end up with a UnitOfStrategicMediationFacade or something ;-)
Consider posting this question over at Code Review for more in-depth analysis.
When facing that kind of issue, I usually try to reason from a YAGNI/KISS perspective:
Do you need MarkAsCancelService, MarkAsExpiredService and the like in the first place? Wouldn't these operations have a better home in RecurringProfile itself?
You say these services are byproducts of your TDD process, but TDD 1. doesn't mean stripping business entities of all logic, and 2. if you do externalize some logic, it doesn't have to go into a Service. If you can't come up with a better name than [BehaviorName]Service for your extracted class, it's often a sign that you should stop and reconsider whether the behavior should really be extracted.
In short, your objects should remain cohesive, which means they shouldn't encapsulate too many responsibilities, but not become anemic either.
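For illustration, here is a sketch of that more cohesive alternative, with the behavior living on the entity itself; the status values and the renewal invariant are invented for the example:

```csharp
// Invented sketch: operations belong to RecurringProfile rather than
// to external [BehaviorName]Service classes.
public enum ProfileStatus { Active, Cancelled, Expired }

public class RecurringProfile
{
    public ProfileStatus Status { get; private set; } = ProfileStatus.Active;

    public void MarkAsCancelled() => Status = ProfileStatus.Cancelled;

    public void Renew()
    {
        // Illustrative business rule: a cancelled profile cannot be renewed.
        if (Status == ProfileStatus.Cancelled)
            throw new System.InvalidOperationException("Cannot renew a cancelled profile.");
        Status = ProfileStatus.Active;
    }
}
```

The entity stays testable without mocks: tests construct a profile, call the operation, and assert on the resulting status.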
Assuming these services are justified, do you really need a Facade for them? If it's only a convenient shortcut for developers, is it worth the additional maintenance (a change in one of the services will generate a change in the facade)? Wouldn't it be simpler if each consumer of one of the services knew how to leverage that service directly?
The other question which would follow if the above makes sense is -
would you do unit-tests on such methods, considering that they do not
have any particular business logic function?
Unit testing boilerplate code can be a real pain indeed. Some will take that pain, others consider it not worthy. Due to the repetitive and predictable nature of such code, one good compromise is to generate your tests automatically, or even better, all your boilerplate code automatically.
I have seen in many places that when C# programmers use 3-tier architecture, they tend to use interfaces between each layer. For example, if the solution is like:
SampleUI
Sample.Business.Interface
Sample.Business
Sample.DataAccess.Interface
Sample.DataAccess
Here UI calls the business layer through the interface and business calls the data access in the same fashion.
If the point of this approach is to reduce the dependency between the layers, isn't that already achieved by the separate class libraries, without the additional interface assemblies?
The code sample is below,
Sample.Business
public class SampleBusiness
{
    public string GetSampleData()
    {
        ISampleDataAccess dataAccess = Factory.GetInstance<ISampleDataAccess>();
        return dataAccess.GetSampleData();
    }
}
Sample.DataAccess.Interface
public interface ISampleDataAccess
{
    string GetSampleData();
}
Sample.DataAccess
public class SampleDataAccess : ISampleDataAccess
{
    public string GetSampleData()
    {
        return data; // data from the database
    }
}
Does this interface in between really accomplish anything?
What if I use new SampleDataAccess().GetSampleData() and remove the interface class library completely?
Code Contract
There is one remarkable advantage of using interfaces as part of the design process: It is a contract.
Interfaces are specifications of contracts in the sense that:
If I use (consume) the interface, I limit myself to using what the interface exposes. Well, unless I want to play dirty (reflection, et al.), that is.
If I implement the interface, I limit myself to providing what the interface exposes.
Doing things this way has the advantage that it eases dividing work among layers in the development team. It allows the developers of a layer to provide a (cough) interface (cough) that the next layer can use to communicate with it… even before that interface has been implemented.
Once they have agreed on the interface, or at least on a minimum viable interface, they can start developing the layers in parallel, knowing that the other team will uphold their part of the contract.
Mocking
A side effect of using interfaces this way is that it allows you to mock the implementation of a component, which eases the creation of unit tests. This way you can test the implementation of a layer in isolation, so you can easily distinguish when a layer is failing because it has a defect from when it is failing because the layer below it has one.
For projects that are developed by a single individual, or by a group that doesn't bother much with drawing clear lines to separate work, the ability to mock might be the main motivation to implement interfaces.
Consider, for example, that you want to test whether your presentation layer handles paging correctly… but you need to request data to fill those pages. It could be the case that:
The layer below is not ready.
The database does not have data to provide yet.
The layer below is failing, and they do not know whether the paging code is incorrect or the defect comes from deeper in the code.
Etc…
Either way the solution is mocking. In addition, mocking is easier if you have interfaces to mock.
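As a sketch of that situation, a hand-rolled fake behind a data-access interface lets the paging logic be tested before the real layer or its data exist (the interface, the fake, and the pager class are all invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// Invented interface standing in for the not-yet-ready layer below.
public interface ICustomerDataAccess
{
    IReadOnlyList<string> GetCustomerNames();
}

// Hand-rolled fake: deterministic data, no database required.
public class FakeCustomerDataAccess : ICustomerDataAccess
{
    public IReadOnlyList<string> GetCustomerNames() =>
        Enumerable.Range(1, 25).Select(i => $"Customer {i}").ToList();
}

// The paging logic under test, which only sees the interface.
public class CustomerPager
{
    private readonly ICustomerDataAccess _dataAccess;
    public CustomerPager(ICustomerDataAccess dataAccess) => _dataAccess = dataAccess;

    public IReadOnlyList<string> GetPage(int pageIndex, int pageSize) =>
        _dataAccess.GetCustomerNames()
                   .Skip(pageIndex * pageSize)
                   .Take(pageSize)
                   .ToList();
}
```

If a paging test fails against the fake, the defect is in the paging code, not in the layer below.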
Changing the implementation
If, for whatever reason, some of the developers decide they want to change the implementation of their layer, they can do so trusting the contract imposed by the interface. This way, they can swap implementations without having to change the code of the other layers.
What reason?
Perhaps they want to test a new technology. In this case, they will probably create an alternative implementation as an experiment. In addition, they will want to have both versions working so they can test which one works better.
Addendum: Not only for testing both versions, but also to ease rolling back to the main version. Of course, they might accomplish this with source version control. Because of that, I will not consider rolling back as a motivation to use interfaces. Yet, it might be an advantage for anybody not using version control. For anybody not using it… Start using it!
Or perhaps they need to port the code to a different platform, or a different database engine. In this case, they probably do not want to throw away the old code either… For example, if they have clients that run Windows and SQL Server and others that run Linux and Oracle, it makes sense to maintain both versions.
Of course, in either case, you want to be able to implement those changes with the minimum possible work. Therefore, you do not want to change the layer above to target a different implementation; instead, you will probably have some form of factory or inversion of control container that you can configure to inject the implementation you want.
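A minimal sketch of such a factory, with invented class names; a real application would drive the choice from configuration or an IoC container rather than a hard-coded string:

```csharp
// Shared contract between the layers.
public interface ISampleDataAccess
{
    string GetSampleData();
}

// Two interchangeable implementations (invented for illustration).
public class SqlServerDataAccess : ISampleDataAccess
{
    public string GetSampleData() => "data from SQL Server";
}

public class OracleDataAccess : ISampleDataAccess
{
    public string GetSampleData() => "data from Oracle";
}

public static class DataAccessFactory
{
    // The single place where a concrete class is chosen; the layer
    // above only ever sees ISampleDataAccess.
    public static ISampleDataAccess Create(string provider) =>
        provider == "Oracle"
            ? new OracleDataAccess()
            : (ISampleDataAccess)new SqlServerDataAccess();
}
```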
Mitigating change propagation
Of course, they may decide to change the actual interfaces. If the developers working on a layer need something additional in the interface, they can add it (given whatever methodology the team has set up to approve such changes) without messing with the code of the classes the other team is working on. Source version control will also ease merging those changes.
In the end, the purpose of using a layered architecture is separation of concerns, which implies separating reasons for change… If you need to change the database, your changes should not propagate into code dedicated to presenting information to the user. Sure, the team can accomplish this with concrete classes, yet interfaces provide a good, evident, well-defined, language-supported barrier to stop the propagation of change, in particular if the team has good rules about responsibility (no, I do not mean code concerns; I mean which developer is responsible for doing what).
You should always use an abstraction of the layer to have the ability
to mock the interfaces in unit tests
to use fake implementations for faster development
to easily develop alternative implementations
to switch between different implementations
...
I have a Business Layer in which only one class should be visible to the outside world, so I have marked all classes internal except that one. Since that class requires some internal classes to instantiate, I need those classes to be marked public, and they in turn depend on other classes, and so on. So ultimately almost all of my internal classes end up public.
How do you handle such scenarios?
Also, today there is just one class exposed to the outside world, but in the future there may be two or three; does that mean I need three facades?
Thanks
Correct, all of your injected dependencies must be visible to your Composition Root. It sounds like you're asking this question: Ioc/DI - Why do I have to reference all layers/assemblies in entry application?
To quote part of that answer from Mark Seeman:
you don't have to add hard references to all required libraries. Instead, you can use late binding either in the form of convention-based assembly-scanning (preferred) or XML configuration.
Also this, from Steven:
If you are very strict about protecting your architectural boundaries using assemblies, you can simply move your Composition Root to a separate assembly.
However, you should ask yourself why doing so would be worth the effort. If it is merely to enforce architectural boundaries, there is no substitute for discipline. My experience is that that discipline is also more easily maintained when following the SOLID principles, for which dependency injection is the "glue".
After doing a lot of research I am writing up my findings, so that they may be of some help to newcomers to Dependency Injection.
Misunderstandings regarding my current design and Dependency Injection:
Initial approach and problem(s) associated with it:
My business layer had a composition root inside it, whereas it should live outside the business layer, near the application entry point. In the composition root I essentially had a big factory, referred to as Poor Man's DI by Mark Seemann. At my application's starting point, I created an instance of this factory class and then created my only (intended to be) visible class to the outside world. This decision clearly works against the idea that every dependency should be replaceable (which I took from Liskov's substitution principle). I had a modular design, but my previous approach was tightly coupled, so I wasn't able to reap many benefits from it beyond some code cleanliness and maintainability.
A better approach is:
A very helpful link given by Facio Ratio.
The composition root should live near the application root, and all dependency classes should be made public, which I initially saw as a problem; by making them public I am introducing loose coupling and enabling substitution, which is good.
You can change the public class to an interface, and all other parts of the program will only know about the interface. Here's some sample code to illustrate this:
public interface IFacade
{
    void DoSomething();
}

internal class FacadeImpl : IFacade
{
    public FacadeImpl(Alfa alfa, Bravo bravo)
    {
    }

    public void DoSomething()
    {
    }
}

internal class Alfa
{
}

internal class Bravo
{
}
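One follow-up detail worth showing: since FacadeImpl, Alfa and Bravo are internal, the assembly also needs a public way to hand out an IFacade. A small public factory (an invented addition to the sample above) can compose the internal classes so callers never see them:

```csharp
// The public contract, as in the sample above.
public interface IFacade
{
    void DoSomething();
}

internal class Alfa { }
internal class Bravo { }

internal class FacadeImpl : IFacade
{
    public FacadeImpl(Alfa alfa, Bravo bravo) { }
    public void DoSomething() { }
}

// Invented for illustration: the only other public type, a factory
// that wires up the internal classes behind the interface.
public static class FacadeFactory
{
    public static IFacade Create() => new FacadeImpl(new Alfa(), new Bravo());
}
```

Callers write `IFacade facade = FacadeFactory.Create();` and never reference the internal types.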
I can see three solutions, none of them really good. You might want to combine them in some way. But…
First, put some simple parameters (numeric, perhaps) in the constructor that let the caller say what he wants to do, and that the new public class instance can use to grab internal class objects (to self-inject). (You could use special public classes/interfaces used solely to convey information here too.) This makes for an awkward and limited interface, but is great for encapsulation. If the caller prefers adding a few quick parameters to constructing complex injectable objects anyway this might work out well. (It's always a drag when a method wants five objects of classes you never heard of before when the only option you need, or even want, is "read-only" vs "editable".)
Second, you could make your internal classes public. Now the caller has immense power and can do anything. This is good if the calling code is really the heart of the system, but bad if you don't quite trust that code or if the caller really doesn't want to be bothered with all the picky details.
Third, you might find you can pull some classes from the calling code into your assembly. If you're really lucky, the class making the call might work better on the inside (hopefully without reintroducing this problem one level up).
Response to comments:
As I understand it, you have a service calling a method in a public class in your business layer. To make the call, it needs objects of other classes in the business layer. These other classes are and should be internal. For example, you want to call a method called GetAverage and pass it an instance of the (internal) class RoundingPolicy so it knows how to round. My first answer is that you should take an integer value instead of a class: a constant value such as ROUND_UP, ROUND_DOWN, NEAREST_INTEGER, etc. GetAverage would then use this number to generate the proper RoundingPolicy instance inside the business layer, keeping RoundingPolicy internal.
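That first suggestion might be sketched as follows; the Rounding enum, the policy classes, and the Statistics.GetAverage signature are all invented for illustration:

```csharp
using System;
using System.Linq;

// The public, primitive-ish parameter the service passes in.
public enum Rounding { Up, Down, NearestInteger }

// The policy classes stay internal to the business layer.
internal abstract class RoundingPolicy
{
    public abstract decimal Apply(decimal value);
}

internal class RoundUpPolicy : RoundingPolicy
{
    public override decimal Apply(decimal value) => Math.Ceiling(value);
}

internal class RoundDownPolicy : RoundingPolicy
{
    public override decimal Apply(decimal value) => Math.Floor(value);
}

internal class NearestIntegerPolicy : RoundingPolicy
{
    public override decimal Apply(decimal value) => Math.Round(value);
}

public static class Statistics
{
    // The enum is mapped to the proper internal policy inside the layer.
    public static decimal GetAverage(decimal[] values, Rounding rounding)
    {
        RoundingPolicy policy = rounding switch
        {
            Rounding.Up => new RoundUpPolicy(),
            Rounding.Down => new RoundDownPolicy(),
            _ => new NearestIntegerPolicy()
        };
        return policy.Apply(values.Average());
    }
}
```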
My first answer is the one I'm suggesting. However, it gives the service a rather primitive interface, so my second two answers suggest alternatives.
The second answer is actually what you are trying to avoid. My thinking was that if all those internal classes were needed by the service, maybe there was no way around the problem. In my example above, if the service is using 30 lines of code to construct just the right RoundingPolicy instance before passing it, you're not going to fix the problem with just a few integer parameters. You'd need to give the overall design a lot of thought.
The third answer is a forlorn hope, but you might find that the calling code is doing work that could just as easily be done inside the business layer. This is actually similar to my first answer. Here, however, the interface might be more elegant. My first answer limits what the service can do. This answer suggests the service doesn't want to do much anyway; it's always using one identical RoundingPolicy instance, so you don't even need to pass a parameter.
I may not fully understand your question, but I hope there's an idea here somewhere that you can use.
Still more: Fourth Answer:
I considered this a sort of part of my first answer, but I've thought it through and think I should state it explicitly.
I don't think the class you're making the call to needs an interface, but you could make interfaces for all the classes you don't want to expose to the service. IRoundingPolicy, for instance. You will need some way to get real instances of these interfaces, because new IRoundingPolicy() isn't going to work. Now the service is exposed to all the complexities of the classes you were trying to hide (down side) but they can't see inside the classes (up side). You can control exactly what the service gets to work with--the original classes are still encapsulated. This perhaps makes a workable version of my second answer. This might be useful in one or two places where the service needs more elaborate options than my first answer allows.
I have ended up with a constructor that looks like this whilst attempting to end up with an object I can easily test:
public UserProvider(
IFactory<IContainer> containerFactory,
IRepositoryFactory<IUserRepository> userRepositoryFactory,
IFactory<IRoleProvider> roleProviderFactory,
IFactory<IAuthenticationProvider> authenticationProviderFactory,
IFactory<IEmailAdapter> emailAdapterFactory,
IFactory<IGuidAdapter> guidAdapterFactory,
IRepositoryFactory<IVehicleRepository> vehicleRepositoryFactory,
IRepositoryFactory<IUserVehicleRepository> userVehicleRepositoryFactory,
IFactory<IDateTimeAdapter> dateTimeAdapterFactory)
These are all the dependencies the object has, and this is the busiest constructor I have. But if someone saw this, would it really raise a big WTF?
My aim was to end up with logic that is easy to test. Whilst it requires a good number of mocks, it is certainly very easy to verify my logic. However, I am concerned that I may have ended up with too much of a good thing.
I am curious whether this is normal for most people implementing IoC.
There are several simplifications I can make, such as not passing in factories for several of the adapters; I could just pass each adapter in directly, as it has no internal state. But I am really asking about the number of parameters.
Or, more to the point, I am looking for assurance that I am not going overboard ;)
But I am beginning to get the impression that the UserProvider class should be broken down a bit; then again, that means even more plumbing, which is what is driving this concern.
I guess a sub-question is: should I be considering the Service Locator pattern if I have these concerns?
When using DI and constructor injection, violations of the SRP become very visible. This is actually a good thing, and it is not DI/IoC's fault. If you were not using constructor injection, the class would have the same dependencies; they would just not be as visible.
What you could do in your concrete example is hide some of the related dependencies behind facades. For example IVehicleRepository and IUserVehicleRepository could be hidden behind an IVehicle facade. It might also make sense to put IUserRepository, IRoleProvider and IAuthenticationProvider behind a facade.
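As a sketch, such a facade over the two vehicle repositories might look like this (all member names are invented, and the in-memory implementations exist only to make the example self-contained):

```csharp
// The two repository interfaces the UserProvider currently depends on.
public interface IVehicleRepository { int CountVehicles(); }
public interface IUserVehicleRepository { int CountVehiclesForUser(int userId); }

// One facade interface hiding both, so UserProvider takes a single dependency.
public interface IVehicles
{
    int CountVehicles();
    int CountVehiclesForUser(int userId);
}

public class VehiclesFacade : IVehicles
{
    private readonly IVehicleRepository _vehicles;
    private readonly IUserVehicleRepository _userVehicles;

    public VehiclesFacade(IVehicleRepository vehicles, IUserVehicleRepository userVehicles)
    {
        _vehicles = vehicles;
        _userVehicles = userVehicles;
    }

    public int CountVehicles() => _vehicles.CountVehicles();
    public int CountVehiclesForUser(int userId) => _userVehicles.CountVehiclesForUser(userId);
}

// Trivial in-memory implementations, invented so the sketch runs standalone.
public class InMemoryVehicleRepository : IVehicleRepository
{
    public int CountVehicles() => 3;
}

public class InMemoryUserVehicleRepository : IUserVehicleRepository
{
    public int CountVehiclesForUser(int userId) => 1;
}
```

The constructor then shrinks from two repository parameters to one IVehicles parameter.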
In my opinion that is a lot of parameters for a constructor. Here's how I would handle this to get good testability and reduce "code smell."
Instead of passing in factories to create instances of your classes, just pass in the classes themselves. This automatically cuts your dependencies in half, because the UserProvider is no longer concerned with creating the objects it needs (and subsequently disposing of them if necessary); it just uses what is given to it instead of using factories to create the instances it needs.
Remove your adapters from the constructor and just create instances of those interfaces inside the UserProvider. Think about how often you are going to need to change the way you format a GUID, for example. This is still testable as long as your adapters don't have many dependencies.
The point I'm making is to strike a good balance between testability and practicality. When implementing IoC, try to determine where you've had trouble with testability in the past, and where you've had issues maintaining and changing code because there were too many dependencies. That is where you'll see the most benefit.
I have used a lot of static methods in Data Access Layer (DAL) like:
public static DataTable GetClientNames()
{
    return CommonDAL.GetDataTable("....");
}
But I found out that some developers don't like the idea of static methods inside the DAL. I just need some reasons to use, or not use, static methods inside the DAL.
Thanks
Tony
From a purist's point of view, this violates all kinds of best practices (dependency on implementation, tight coupling, opaque dependencies, etc.). I would have said this myself, but recently I tend to move towards simpler solutions without diving too much into "enterprizey" features and buzzwords. Therefore, if it's fine with you to write code like this, if this architecture allows for fast development and is testable and, most importantly, solves your business problem, it's just fine.
If I had to pick one reason not to use static methods, that would be that it limits your ability to write unit tests against your code. For example creating mocks for your DAL will be more difficult because there is not an actual interface to code against, just a bunch of static methods. This further limits you if/when you decide to adopt frameworks that require interfaces to support things like IoC, dependency injection etc.
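As a sketch of the instance-based alternative, the same query can sit behind an interface so callers depend on an abstraction and tests can substitute a fake (the repository names are invented; the production body is left as a placeholder where the CommonDAL call would go):

```csharp
using System.Collections.Generic;

// Invented interface: the abstraction callers and IoC containers code against.
public interface IClientRepository
{
    IReadOnlyList<string> GetClientNames();
}

// Production implementation; in the original code this is where
// CommonDAL.GetDataTable("....") would be called.
public class ClientRepository : IClientRepository
{
    public IReadOnlyList<string> GetClientNames() =>
        new List<string> { /* rows from the database would go here */ };
}

// A hand-rolled fake for unit tests: no database required.
public class FakeClientRepository : IClientRepository
{
    public IReadOnlyList<string> GetClientNames() =>
        new List<string> { "Acme", "Contoso" };
}
```

Code that previously called the static method now receives an IClientRepository, typically through its constructor.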
That's Unit of Work, just static, isn't it?
I've noticed that when I'm doing TDD it often leads to a very large amount of interfaces. For classes that have dependencies, they are injected through the constructor in the usual manner:
public class SomeClass
{
    public SomeClass(IDependencyA first, IDependencyB second)
    {
        // ...
    }
}
The result is that almost every class will implement an interface.
Yes, the code will be decoupled and can be tested very easily in isolation, but there will also be extra levels of indirection that just makes me feel a little...uneasy. Something doesn't feel right.
Can anyone share other approaches that doesn't involve such heavy use of interfaces?
How are the rest of you guys doing?
Your tests are telling you to redesign your classes.
There are times when you can't avoid passing complex collaborators that need to be stubbed to make your classes testable, but you should look for ways to provide them with the outputs of those collaborators instead and think about how you could re-arrange their interactions to eliminate complex dependencies.
For example, instead of providing a TaxCalculator with a ITaxRateRepository (that hits a database during CalculateTaxes), obtain those values before creating your TaxCalculator instance and provide them to its constructor:
// Bad! (If necessary on occasion)
public TaxCalculator(ITaxRateRepository taxRateRepository) {}

// Good!
public TaxCalculator(IDictionary<Locale, TaxRate> taxRateDictionary) {}
Sometimes this means you have to make bigger changes, adjust object lifetimes or restructure large swaths of code, but I've often found low-lying fruit once I started looking for it.
For an excellent roundup of techniques for reducing your dependency on dependencies, see Mock Eliminating Patterns.
Don't use interfaces! Most mocking frameworks can mock concrete classes.
That's the drawback of mock-based testing approaches. This is as much a test-boundary discussion as it is about mocking. By having a 1:1 ratio of test cases to domain classes, your test boundary is very small, and a result of a small test boundary is a proliferation of interfaces and of tests that depend on them. Refactoring becomes more difficult due to the number of interactions you are mocking and stubbing out.
By testing clusters of classes with a single test, refactoring becomes easier and you use fewer interfaces. Beware, however, that you can test too many classes at once: the more complexity your classes have, the more code paths you need to test. This can lead to a combinatorial explosion, and you can't possibly test them all.
Listen to the code and tests; they're telling you something about your code. If you see the complexity increasing, it's probably a good time to introduce a new test case and interface/implementation, and mock it out in your original test.
If you are feeling uneasy about the number of interfaces being passed into a particular class, it is probably a sign that you are introducing too many disparate dependencies.
If SomeClass depends on IDependencyA, IDependencyB, and IDependencyC, this is an opportunity to see if you can extract out the logic that the class performs with those three interfaces into another class/interface, IDependencyABC.
Then when you are writing your tests for SomeClass, you only need to mock out the logic that IDependencyABC now provides.
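A sketch of that extraction, with entirely invented interfaces and trivial stand-in implementations so it runs standalone:

```csharp
// The three original dependencies (invented for illustration).
public interface IDependencyA { int PartA(); }
public interface IDependencyB { int PartB(); }
public interface IDependencyC { int PartC(); }

// The extracted abstraction: the one thing SomeClass actually needs.
public interface IDependencyAbc
{
    int Combined();
}

public class DependencyAbc : IDependencyAbc
{
    private readonly IDependencyA _a;
    private readonly IDependencyB _b;
    private readonly IDependencyC _c;

    public DependencyAbc(IDependencyA a, IDependencyB b, IDependencyC c)
    {
        _a = a; _b = b; _c = c;
    }

    public int Combined() => _a.PartA() + _b.PartB() + _c.PartC();
}

// SomeClass now has one collaborator to mock instead of three.
public class SomeClass
{
    private readonly IDependencyAbc _abc;
    public SomeClass(IDependencyAbc abc) => _abc = abc;
    public int DoWork() => _abc.Combined();
}

// Trivial stand-ins so the sketch is self-contained.
public class FixedA : IDependencyA { public int PartA() => 1; }
public class FixedB : IDependencyB { public int PartB() => 2; }
public class FixedC : IDependencyC { public int PartC() => 3; }
```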
In addition, if you are still uncomfortable, maybe it is not an interface you require. For example, classes that contain state (parameters being passed around, for instance) could probably just be created and passed around as concrete classes instead. Jeff's answer alluded to this, where he mentions passing into your constructor ONLY what you need. This gives you less coupling between your constructs and is a better indication of the intent of your class's needs. Just be careful passing around data structures (IDictionary<,>).
In the end, TDD is working when you get that warm fuzzy feeling during your cycles. If you feel uneasy, watch for some of the code smells and fix some of those issues to get back on track.