I'm trying to do dependency injection throughout the app tier and am running into a scenario I'm sure others have seen. There are some 3rd party web services that we consume and the clients were auto-generated with a base class. The clients do not have an interface and the data types are in the same file/project.
The obvious problem is if I want to do unit testing I'll need to mock the service. I'll need to extract an interface and move the data types into a "contracts" project that is available to both real/mock clients. However, the next time the client is auto-generated that work needs to be redone. Creating a proxy at runtime wouldn't help much because then we would have to manually create the interfaces and data types from the WSDL. Is there a better way to handle this?
Extracting an interface from the implementation is not going to help you much anyway, as it's going to be a poor abstraction.
Interfaces should be defined and owned by the clients that consume them. As Agile Principles, Patterns, and Practices explains, "clients […] own the abstract interfaces" (chapter 11). Thus, any attempt to define a data-centric interface, such as extracting interfaces from auto-generated web service clients, is bound to cause problems sooner or later, because it violates SOLID principles like the Dependency Inversion Principle and the Interface Segregation Principle.
Instead, your client code should define the interfaces it requires. You can then always implement those interfaces with the auto-generated web service clients. If you've used Microsoft's tooling (Visual Studio, wsdl.exe, etc.), the relevant auto-generated class should already be a partial class, which means you should be able to add behaviour to it without touching the auto-generated part of it.
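As a sketch of that approach - all names here are illustrative, not from your generated code - the consumer defines a small interface, and a separate partial class file makes the generated client implement it:

```csharp
// Owned by the consuming code: it exposes only what this client needs.
public interface IWeatherLookup
{
    decimal GetTemperature(string city);
}

// In a separate file from the generated code. Because the generated
// client is a partial class, this compiles into the same type without
// touching the generated file, so it survives regeneration.
public partial class WeatherServiceClient : IWeatherLookup
{
    decimal IWeatherLookup.GetTemperature(string city)
    {
        // GetTemperatureData is a stand-in for the generated operation.
        return this.GetTemperatureData(city);
    }
}
```

When the client is regenerated, only this small adapter file needs to be revisited, and only if the generated operation's signature changed.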
Related
I have seen in many places that when C# programmers use 3-tier architecture, they tend to use interfaces between each layer. For example, if the solution looks like
SampleUI
Sample.Business.Interface
Sample.Business
Sample.DataAccess.Interface
Sample.DataAccess
Here the UI calls the business layer through the interface, and the business layer calls the data access layer in the same fashion.
If the purpose of this approach is to reduce the dependency between the layers, isn't that already in place with separate class libraries, without the additional use of interfaces?
The code sample is below,
Sample.Business
public class SampleBusiness
{
    public string GetSampleData()
    {
        ISampleDataAccess dataAccess = Factory.GetInstance<ISampleDataAccess>();
        return dataAccess.GetSampleData();
    }
}
Sample.DataAccess.Interface
public interface ISampleDataAccess
{
    string GetSampleData();
}
Sample.DataAccess
public class SampleDataAccess : ISampleDataAccess
{
    public string GetSampleData()
    {
        return data; // data from database
    }
}
Does this interface in between do any great job?
What if I use new SampleDataAccess().GetSampleData() and remove the interface class library completely?
Code Contract
There is one remarkable advantage of using interfaces as part of the design process: It is a contract.
Interfaces are specifications of contracts in the sense that:
If I use (consume) the interface, I am limiting myself to what the interface exposes. Well, unless I want to play dirty (reflection, et al.), that is.
If I implement the interface, I am limiting myself to provide what the interface exposes.
Doing things this way has the advantage that it eases dividing work in the development team among layers. It allows the developers of one layer to provide a *cough* interface *cough* that the next layer can use to communicate with it… even before that interface has been implemented.
Once they have agreed on the interface - at least on a minimum viable interface - they can start developing the layers in parallel, knowing that the other team will uphold its part of the contract.
Mocking
A side effect of using interfaces this way is that it allows you to mock the implementation of a component, which eases the creation of unit tests. This way you can test the implementation of a layer in isolation, so you can easily distinguish when a layer is failing because it has a defect from when it is failing because the layer below it has a defect.
For projects that are developed by a single individual - or by a group that does not bother too much with drawing clear lines to separate work - the ability to mock might be the main motivation to implement interfaces.
Consider, for example, that you want to test whether your presentation layer can handle paging correctly… but you need to request data to fill those pages. It could be the case that:
The layer below is not ready.
The database does not have data to provide yet.
It is failing, and you do not know whether the paging code is incorrect or the defect comes from somewhere deeper in the code.
Etc…
Either way the solution is mocking. In addition, mocking is easier if you have interfaces to mock.
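Using the data access interface from the question above (repeated here for completeness), and assuming SampleBusiness is changed to take its dependency through the constructor, a hand-rolled mock might look like this:

```csharp
public interface ISampleDataAccess
{
    string GetSampleData();
}

// A fake implementation that never touches the database.
public class FakeDataAccess : ISampleDataAccess
{
    public string GetSampleData()
    {
        return "canned test data";
    }
}

// The business class receives its dependency, so a test can
// substitute the fake for the real data access layer.
public class SampleBusiness
{
    private readonly ISampleDataAccess dataAccess;

    public SampleBusiness(ISampleDataAccess dataAccess)
    {
        this.dataAccess = dataAccess;
    }

    public string GetSampleData()
    {
        return dataAccess.GetSampleData();
    }
}

// In a unit test, the fake isolates the business layer:
// var business = new SampleBusiness(new FakeDataAccess());
// Assert.AreEqual("canned test data", business.GetSampleData());
```

Mocking libraries can generate the fake for you, but the principle is the same: the interface is the seam that makes the substitution possible.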
Changing the implementation
If - for whatever reason - some of the developers decide they want to change the implementation of their layer, they can do so trusting the contract imposed by the interface. This way, they can swap implementations without having to change the code of the other layers.
What reason?
Perhaps they want to test a new technology. In this case, they will probably create an alternative implementation as an experiment. In addition, they will want to have both versions working so they can test which one works better.
Addendum: not only for testing both versions, but also to ease rolling back to the main version. Of course, they might accomplish this with version control, so I will not count rolling back as a motivation to use interfaces. Yet it might be an advantage for anybody not using version control. For anybody not using it… start using it!
Or perhaps they need to port the code to a different platform or a different database engine. In this case, they probably do not want to throw away the old code either… For example, if they have clients that run Windows and SQL Server and others that run Linux and Oracle, it makes sense to maintain both versions.
Of course, in either case, you want to be able to implement those changes with the minimum possible work. Therefore, you do not want to change the layer above to target a different implementation. Instead, you will probably have some form of factory or inversion of control container that you can configure to inject the implementation you want.
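A minimal sketch of that idea, with hypothetical names and a boolean standing in for whatever configuration mechanism or IoC container you actually use:

```csharp
public interface ICustomerStore
{
    string FindName(int id);
}

public class SqlServerCustomerStore : ICustomerStore
{
    public string FindName(int id) { /* query SQL Server */ return "sql:" + id; }
}

public class OracleCustomerStore : ICustomerStore
{
    public string FindName(int id) { /* query Oracle */ return "oracle:" + id; }
}

public static class StoreFactory
{
    // In a real application the choice would come from configuration
    // or an IoC container, not a hard-coded flag.
    public static ICustomerStore Create(bool useOracle)
    {
        return useOracle
            ? (ICustomerStore)new OracleCustomerStore()
            : new SqlServerCustomerStore();
    }
}

// The layer above only ever sees the interface, so swapping the
// implementation requires no changes to its code:
// ICustomerStore store = StoreFactory.Create(useOracle: false);
```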
Mitigating change propagation
Of course, they may decide to change the actual interfaces. If the developers working on a layer need something additional on the interface, they can add it (following whatever process the team has set up to approve such changes) without messing with the code of the classes that the other team is working on. In version control, this will ease merging changes.
In the end, the purpose of using a layered architecture is separation of concerns, which implies separating the reasons for change… If you need to change the database, your changes should not propagate into code dedicated to presenting information to the user. Sure, the team can accomplish this with concrete classes. Yet interfaces provide a good, evident, well-defined, language-supported barrier to stop the propagation of change, in particular if the team has good rules about responsibility (no, I do not mean code concerns; I mean which developer is responsible for doing what).
You should always use an abstraction of the layer to have the ability
to mock the interfaces in unit tests
to use fake implementations for faster development
to easily develop alternative implementations
to switch between different implementations
...
Are there two or more ways to couple and decouple the service type and service contract?
The book I'm reading speaks of coupling and decoupling in two different ways. Am I reading it wrong? I can provide examples, but I'm trying not to overcomplicate this post.
1) She speaks of coupling and decoupling the service contract and the service type. Basically, they are coupled when you don't define an interface for the service contract; you just use the type. They are decoupled when you use an interface as the contract and a type to implement the interface.
2) But she also seems to talk about coupling the service contract (interface) and service type (class implementation) by placing them both in the same assembly. I understand assembly to mean class file (.cs).
Are both of these coupling/decoupling scenarios? How do you differentiate between the two in conversation? Do they have different names?
I can copy the actual text if necessary, but I was hoping you might understand without all the extra reading.
Thanks!
I am trying to hold to the DRY principle in developing WCF services for our application, but I seem to be going down a lot of rabbit holes. My original idea was to have an abstract base class to hold code common to all services and have derived classes for each concrete service, but I cannot seem to get VS2012 to play nice.
Whenever you create a service class, it INSISTS on putting the contract (interface) and implementation classes in the same project, and trying to pull them apart seems to hose up the wiring that VS has done under the hood, so then things break.
I guess all my years of "classic" OO design are getting in the way. I wanted to have the concrete services derive from the interface AND the abstract base class, but I'm not having a lot of luck. I have found questions/blogs on having polymorphic DATA types used by services, but have not found examples of polymorphic SERVICE types. Can anyone point me in the right direction?
Thanks,
Peter
UPDATE: Perhaps I am over-thinking the whole thing. I am actually NOT trying to have inheritance for OPERATIONS, since a composite approach would make more sense; I just want to keep common code in one place (obviously...), and the whole "static helper class" approach always feels "dirty" to me, kind of defeating the whole OO approach... I am hoping I can simply have the concrete service classes inherit from an abstract base class that is NOT necessarily the implementation of any particular service contract, but is just a way to keep the code DRY...
ALSO: I am trying to use the Template pattern for the service classes, since the overall structure of the services is so similar (devil is always in the details...)
You can separate the interface and implementation classes into different projects. One easy way to do this is to create the projects manually and write/copy the code as you would for any .NET OO solution.
The following is a set of samples provided by Microsoft...
http://www.microsoft.com/en-us/download/details.aspx?id=21459
You should be able to dig into the samples and find one that meets your requirement.
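As a rough sketch of the layout you describe - the names below are mine, not from the samples - the contract lives in one project, the shared code sits in an abstract base class that is not itself a contract implementation, and the concrete service ties the two together:

```csharp
using System.ServiceModel;

// Contracts project: only the service contract lives here.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string GetCustomerName(int id);
}

// Implementation project: common plumbing shared by all services,
// deliberately not tied to any particular contract.
public abstract class ServiceBase
{
    protected void LogCall(string operation)
    {
        // shared logging/auditing code goes here
    }
}

// Concrete service: derives from the base class AND implements the contract.
public class CustomerService : ServiceBase, ICustomerService
{
    public string GetCustomerName(int id)
    {
        LogCall("GetCustomerName");
        return "customer-" + id;
    }
}
```

Since ServiceBase carries no [ServiceContract] attribute, WCF only sees the contract through ICustomerService, and the inheritance is purely an implementation detail.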
I have a project structure like so :-
CentralRepository.BL
CentralRepository.BO
CentralRepository.DataAccess
CentralRepository.Tests
CentralRepository.Webservices
and there are an awful lot of dependencies between these. I want to leverage Unity to reduce the dependencies, so I'm going to create interfaces for my classes. My question is: in which project should the interfaces reside? My thought is they should be in the BO layer. Can someone give me some guidance on this, please?
On a combinatorial level, you have three options:
Define interfaces in a separate library
Define interfaces together with their consumers
Define interfaces together with their implementers
However, the last option is a really bad idea because it tightly couples the interface to the implementer (or the other way around). Since the whole point of introducing an interface in the first place is to reduce coupling, nothing is gained by doing that.
Defining the interface together with the consumer is often sufficient, and personally I only take the extra step of defining the interface in a separate library when disparate consumers are in play (which mostly tends to happen if you're shipping a public API).
BO is essentially your domain objects, or at least that is my assumption. In general, unless you are using a pattern like ActiveRecord, they are state objects only. An interface, on the other hand, specifies behavior. Mixing behavior and state is not a good idea according to many "best practices". Now I will likely ramble a bit, but I think the background may help.
Now, to the question of where interfaces should exist. There are a couple of choices.
Stick the interfaces in the library they belong to.
Create a separate contract library
The simpler option is to stick them in the same library, but then your mocks rely on that library, as well as your tests. Not a huge deal, but it has a slight odor to it.
My normal method is to set up projects like this:
{company}.{program/project}.{concern (optional)}.{area}.{subarea (optional)}
The first two to three bits of the name are covered in yours by the word "CentralRepository". In my case it would be MyCompany.CentralRepository or MyCompany.MyProgram.CentralRepository, but naming convention is not the core part of this post.
The "area" portions are the thrust of this post, and I generally use the following.
Set up a domain object library (your BO): CentralRepository.Domain.Models
Set up a domain exception library: CentralRepository.Domain.Exceptions
All/most other projects reference the above two, as they represent the state in the application. Certainly ALL business libraries use these objects. The persistence library (or libraries) may have a different model, and I may have a view model in the experience library (or libraries).
Set up the core library next: CentralRepository.Core (may have subareas?). This is where the business logic lives (the actual application, as persistence and experience changes should not affect core functionality).
Set up a test library for Core: CentralRepository.Core.Test.Unit.VS (I use Unit.VS to show these are unit tests rather than integration tests built with a unit test library, and VS to indicate MSTest - others will have different naming).
Create tests and then build the business functionality. As needed, set up interfaces. Example:
You need data from a DAL, so an interface and mock are set up for the Core tests to use. The name here would be something like CentralRepository.Persist.Contracts (you may also use a subarea if there are multiple types of persistence).
The core concept here is "Core as Application" rather than n-tier (they are compatible, but thinking of business logic only, as a paradigm, keeps you loosely coupled with persistence and experience).
Now, back to your question. The way I set up interfaces is based on the location of the classes being interfaced. So I would likely have:
CentralRepository.Core.Contracts
CentralRepository.Experience.Service.Contracts
CentralRepository.Persist.Service.Contracts
CentralRepository.Persist.Data.Contracts
I am still working with this, but the core concept is that my IoC and testing should both be considered, and I should be able to isolate testing, which is better achieved if I can isolate the contracts (interfaces). Logical separation is fine (a single library), but I don't generally head that way, because I have at least a couple of green developers who find it difficult to see logical separation without physical separation. Your mileage may vary. :-0
Hope this rambling helps in some way.
I would suggest keeping interfaces wherever their implementers are in the majority of cases, if you're talking assemblies.
Personally, when I'm using a layered approach, I tend to give each layer its own assembly and give it a reference to the layer below it. In each layer, most of the public things are interfaces. So, in the data access layer, I might have ICustomerDao and IOrderDao as public interfaces. I'll also have public DAO factories in the DAO assembly. I'll then have specific implementations marked as internal - CustomerDaoMySqlImpl or CustomerDaoXmlImpl - that implement the public interface. The public factory then provides implementations to users (i.e. the domain layer) without the users knowing exactly which implementation they're getting: they provide information to the factory, and the factory turns around and hands them an ICustomerDao that they use.
The reason I mention all this is to lay the foundation for understanding what interfaces really are - contracts between the servicer and the client of an API. As such, from a dependency standpoint, you want to define the contract where the servicer is. If you define it elsewhere, you're potentially not really managing your dependencies with interfaces, and are instead just introducing a non-useful layer of indirection.
So anyway, I'd say think of your interfaces as what they are -- a contract to your clients as to what you're going to provide, while keeping private the details of how you're going to provide it. That's probably a good heuristic that will make it more intuitive where to put the interfaces.
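A minimal sketch of that arrangement inside the DAO assembly (the names are illustrative):

```csharp
// Public contract: the only thing clients of this assembly depend on.
public interface IOrderDao
{
    int CountOrders(string customerId);
}

// Implementation stays internal, invisible outside the assembly.
internal class XmlOrderDao : IOrderDao
{
    public int CountOrders(string customerId)
    {
        // read from the XML store here
        return 0;
    }
}

// Public factory hands out implementations without exposing them.
public static class OrderDaoFactory
{
    public static IOrderDao Create()
    {
        return new XmlOrderDao();
    }
}
```

Because the implementation class is internal, callers in the domain layer physically cannot couple themselves to it; the interface and the factory are the whole public surface.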
Recently I was hacking out some code to communicate with an external web API. The API is a set of GET/POST operations that return XML, called from my application via HttpWebRequest and manipulated on my side using LINQ to XML.
There are logical groupings of API methods that form multiple services, and I have created classes to represent each of these services. Each service has to contact the same base URI and parse through the same set of response headers. That is, there is a series of methods shared between all of my service classes. I performed an Extract Superclass refactoring on these common methods and had all my service classes inherit from the new superclass. All the methods involved in the refactoring deal with configuring the underlying connection to the remote API and handling the raw data coming back, such as deserializing the XML into POCOs.
I've just been asked why I'm using inheritance to reuse methods from the base class instead of injecting that code in. Frankly, I've got no good answer, so I want to understand why this question was asked and what the merits of injection over inheritance are. I'm aware that basic OO design tenets say we should favour composition over inheritance, and I can see how to refactor toward composition, but I'm unsure what advantages that would gain me.
My colleague has said "not only is it more testable... well, it's easier to test". I can see his argument, but I'd like to know more. Hopefully I've given enough info to get a sensible response.
It Depends :)
If your child classes provide specialisation over the base class, then this favours inheritance, while if the base class provides more of a utility function, I would go with composition.
If your child classes are just adapting the POCOs to the format needed for the web calls, then inheritance could be the way to go, but as with most things in software development, the cat can be skinned many ways.
Injecting the common code allows you to test it in isolation, whereas with inheritance you would need to subclass just to test the common code.
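For example - with hypothetical names, since I haven't seen your code - injecting the shared plumbing lets a test substitute a stub for the real connection:

```csharp
// The common code extracted behind an interface instead of a base class.
public interface IApiConnection
{
    string Get(string relativePath); // returns raw XML from the remote API
}

// The service composes the connection rather than inheriting it.
public class AccountService
{
    private readonly IApiConnection connection;

    public AccountService(IApiConnection connection)
    {
        this.connection = connection;
    }

    public string GetAccountXml(int id)
    {
        return connection.Get("accounts/" + id);
    }
}

// In a test, a stub IApiConnection returns canned XML, so the service
// logic is exercised without subclassing or any network access.
```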
Hope I've been able to help.