IoC and interfaces - C#

I have a project structure like so:
CentralRepository.BL
CentralRepository.BO
CentralRepository.DataAccess
CentralRepository.Tests
CentralRepository.Webservices
and there are an awful lot of dependencies between these. I want to leverage Unity to reduce the dependencies, so I'm going to create interfaces for my classes. My question is: in which project should the interfaces reside? My thought is that they should be in the BO layer. Can someone give me some guidance on this, please?

On a combinatorial level, you have three options:
Define interfaces in a separate library
Define interfaces together with their consumers
Define interfaces together with their implementers
However, the last option is a really bad idea because it tightly couples the interface to the implementer (or the other way around). Since the whole point of introducing an interface in the first place is to reduce coupling, nothing is gained by doing that.
Defining the interface together with the consumer is often sufficient, and personally I only take the extra step of defining the interface in a separate library when disparate consumers are in play (which mostly tends to happen if you're shipping a public API).
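As a rough sketch of the second option with Unity (all type names here are invented for illustration, not taken from the question), the interface can live beside its consumer in the BL, with DataAccess referencing BL to supply the implementation:

// CentralRepository.BL - the consumer owns the contract
public interface ICustomerSource
{
    string GetCustomerName(int id);
}

public class CustomerService
{
    private readonly ICustomerSource source;

    public CustomerService(ICustomerSource source)
    {
        this.source = source;
    }

    public string Describe(int id) => "Customer: " + source.GetCustomerName(id);
}

// CentralRepository.DataAccess - references BL and implements its contract
public class SqlCustomerSource : ICustomerSource
{
    public string GetCustomerName(int id) { /* query the database */ return "stub"; }
}

// Composition root (e.g. in CentralRepository.Webservices), using Unity
var container = new UnityContainer();
container.RegisterType<ICustomerSource, SqlCustomerSource>();
var service = container.Resolve<CustomerService>();

With this arrangement, DataAccess depends on BL rather than the other way around, and only the composition root knows about both.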

BO is essentially your domain objects, or at least that is my assumption. In general, unless you are using a pattern like ActiveRecord, they are state objects only. An interface, on the other hand, specifies behavior. By many "best practices", mixing behavior and state is not a good idea. Now I will likely ramble a bit, but I think the background may help.
Now, to the question of where interfaces should exist. There are a couple of choices.
Stick the interfaces in the library they belong to.
Create a separate contract library
The simpler option is to stick them in the same library, but then your mocks rely on that library, as do your tests. Not a huge deal, but it has a slight odor to it.
My normal method is to set up projects like this:
{company}.{program/project}.{concern (optional)}.{area}.{subarea (optional)}
The first two to three bits of the name are covered in yours by the word "CentralRepository". In my case it would be MyCompany.CentralRepository or MyCompany.MyProgram.CentralRepository, but naming convention is not the core part of this post.
The "area" portions are the thrust of this post, and I generally use the following.
Set up a domain object library (your BO): CentralRepository.Domain.Models
Set up a domain exception library: CentralRepository.Domain.Exceptions
All/most other projects reference the above two, as they represent the state in the application. Certainly ALL business libraries use these objects. The persistence library(s) may have a different model, and I may have a view model on the experience library(s).
Set up the core library next: CentralRepository.Core (may have subareas?). This is where the business logic lives (the actual application, as persistence and experience changes should not affect core functionality).
Set up a test library for core: CentralRepository.Core.Test.Unit.VS (I have Unit.VS to show these are unit tests, not integration tests with a unit test library, and I am using VS to indicate MSTest - others will have different naming).
Create tests and then set up business functionality. As needed, set up interfaces. Example:
Need data from the DAL? Then an interface and mock are set up for Core tests to use. The name here would be something like CentralRepository.Persist.Contracts (may also use a subarea, if there are multiple types of persistence). A sketch follows below.
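A minimal sketch of that arrangement (the member names are mine, not a prescription):

// CentralRepository.Domain.Models
public class Person
{
    public int Id { get; set; }
}

// CentralRepository.Persist.Contracts - just the contract
public interface IPersonStore
{
    Person GetPerson(int id);
}

// CentralRepository.Core - business logic depends on the contract only
public class PersonFinder
{
    private readonly IPersonStore store;

    public PersonFinder(IPersonStore store) { this.store = store; }

    public Person Find(int id) => store.GetPerson(id);
}

// CentralRepository.Core.Test.Unit.VS - a hand-rolled mock, so Core tests
// run without referencing any real persistence library
public class InMemoryPersonStore : IPersonStore
{
    public Person GetPerson(int id) => new Person { Id = id };
}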
The core concept here is "Core as Application" rather than n-tier (they are compatible, but thinking of business logic only, as a paradigm, keeps you loosely coupled with persistence and experience).
Now, back to your question. The way I set up interfaces is based on the location of the "interfaced" classes. So, I would likely have:
CentralRepository.Core.Contracts
CentralRepository.Experience.Service.Contracts
CentralRepository.Persist.Service.Contracts
CentralRepository.Persist.Data.Contracts
I am still working with this, but the core concept is that my IoC and testing should both be considered, and I should be able to isolate testing, which is better achieved if I can isolate the contracts (interfaces). Logical separation (a single library) is fine, but I don't generally head that way, due to having at least a couple of green developers who find it difficult to see logical separation without physical separation. Your mileage may vary. :-0
Hope this rambling helps in some way.

I would suggest keeping interfaces wherever their implementers are in the majority of cases, if you're talking assemblies.
Personally, when I'm using a layered approach, I tend to give each layer its own assembly and give it a reference to the layer below it. In each layer, most of the public things are interfaces. So, in the data access layer, I might have ICustomerDao and IOrderDao as public interfaces. I'll also have public DAO factories in the DAO assembly. I'll then have specific implementations marked as internal - CustomerDaoMySqlImpl or CustomerDaoXmlImpl - that implement the public interface. The public factory then provides implementations to users (i.e. the domain layer) without the users knowing exactly which implementation they're getting: they provide information to the factory, and the factory turns around and hands them an ICustomerDao that they use.
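To sketch that shape in code (the implementation details and factory signature here are assumptions, not a spec):

// Public contract in the DAO assembly
public interface ICustomerDao
{
    string FindCustomerName(int id);
}

// Internal implementations: invisible to other assemblies
internal class CustomerDaoMySqlImpl : ICustomerDao
{
    public string FindCustomerName(int id) { /* MySQL query */ return "stub"; }
}

internal class CustomerDaoXmlImpl : ICustomerDao
{
    public string FindCustomerName(int id) { /* read an XML store */ return "stub"; }
}

// Public factory: callers receive an ICustomerDao and never learn the concrete type
public static class CustomerDaoFactory
{
    public static ICustomerDao Create(string backingStore) =>
        backingStore == "mysql"
            ? (ICustomerDao)new CustomerDaoMySqlImpl()
            : new CustomerDaoXmlImpl();
}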
The reason I mention all this is to lay the foundation for understanding what interfaces are really supposed to be -- contracts between the servicer and client of an API. As such, from a dependency standpoint, you want to define the contract generally where the servicer is. If you define it elsewhere, you're potentially not really managing your dependencies with interfaces and instead just introducing a non-useful layer of indirection.
So anyway, I'd say think of your interfaces as what they are -- a contract to your clients as to what you're going to provide, while keeping private the details of how you're going to provide it. That's probably a good heuristic that will make it more intuitive where to put the interfaces.

Related

Does this example count as cyclic referencing?

I am confused about cyclic references and communication patterns between view and controller (logic) classes.
The image below is a simple UML diagram in which logic and view hold references to each other and can talk with each other freely.
Cyclic reference UML example
But as far as I know, cyclic references are BAD, very BAD. My friend showed me the UML below as a solution to overcome this issue: we can inherit these classes from interfaces and reference the interfaces instead of the classes. As he said, it doesn't count as cyclic referencing, but in my opinion it's just an illusion.
Basic solution UML example
Back to the questions:
Is coupling scripts still bad if they are cohesive little structures in the big chunk of our code?
Can the second UML be evaluated as a better design than the first one? And is it cyclic referencing?
Even for these two cohesive, coupled classes (decoupled from all the other classes in the codebase), should I implement a better solution?
I've read this blog post, which offers better solutions: https://www.sebaslab.com/the-truth-behind-inversion-of-control-part-i-dependency-injection/ , in the "Object Communication and Dependency Injection" section. The blogger mentions 4 different communication methods. The second UML is, I think, the first one he mentioned.
Do you guys have better solution, both for view-logic object communication and general whole codebase object communication?
I would not say that cyclic dependencies are always bad. They imply a strong coupling between components, but in some cases it can be useful to have multiple strongly coupled classes instead of a single much larger class.
If you want to reduce coupling, interfaces are the typical way to go. But a single interface might be enough, i.e. either IController or IView; that should be sufficient to remove any cycles.
However, the common model when writing things like UIs is Model-View-ViewModel, where the view references the view model, which in turn references the model. This does not really remove the circular references, since the view needs to know when the view model has changed, but the back reference is abstracted by the use of events.
Events and delegates can often be used as a lighter-weight alternative to interfaces. There are also patterns using event managers, where you can send and listen to events of a specific type without needing direct references between all the objects, but I'm somewhat skeptical of this, since I suspect it could be difficult to get an overview of which objects send and listen to some event type.
In the end it usually depends on what specifically you want to do. Events are good when you want fairly loose coupling, but they are not well suited for complex interactions. For such cases interfaces might be better, but they do not really add much if classes are strongly coupled. As a rule of thumb when adding an interface, ask yourself: what other implementations of this interface will be made? If the answer is none, then don't add it, at least not for the moment.
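As a small illustration of the event option (names invented for the example), the view can depend on the logic while the logic stays unaware of any view, so there is no cycle:

using System;

public class CounterLogic
{
    // the logic raises an event; it holds no reference to any view
    public event Action<int> CountChanged;

    private int count;

    public void Increment()
    {
        count++;
        CountChanged?.Invoke(count);
    }
}

public class CounterView
{
    // one-way reference: view -> logic
    public CounterView(CounterLogic logic)
    {
        logic.CountChanged += Render;
    }

    private void Render(int count) => Console.WriteLine("Count: " + count);
}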
I would also not put too much stock in technical interviewers, since poor interview questions seem fairly common.

Solution structure with Repository, DAL, BAL

We would like to create a new project with a clean architecture. So our team decided to have:
Repository pattern
Data Access Layer
Business Access Layer
Common Layer (Abstractions such as IPersonRepository, IPersonService, ICSVExport)
Some core services, such as creating CSV files.
UnitTests
Now what we have is:
PersonsApp.Solution
--PersonsApp.WebUI
-- Controllers (PersonController)
--PersonApp.Persistence
--Core folder
-IGenericRepository.cs (Abstraction)
-IUnitOfWork.cs (Abstraction)
--Infrastructure folder
-DbFactory.cs (Implementation)
-Disposable.cs (Implementation)
-IDbFactory.cs (Abstraction)
-RepositoryBase.cs (Abstraction)
--Models folder
- Here we have the DbContext and EF models (Implementation)
--Repositories
- PersonRepository.cs (Implementation)
--PersonApp.Service
--Core folder
-IPersonService.cs (Abstraction)
-ICSVService.cs (Abstraction)
--Business
-PersonService.cs (Implementation)
--System
-CSVService.cs (Implementation)
--PersonApp.Test
In my view, our structure is a little bit messy.
The first problem is:
PersonApp.Service has abstractions (interfaces) and implementations in one class library.
The second problem is:
PersonApp.Persistence has abstractions (RepositoryBase) and implementations in one class library. But if I move RepositoryBase, IGenericRepository, and IUnitOfWork into a class library called PersonApp.Abstractions, then I will get circular reference errors between PersonApp.Abstractions and PersonApp.Persistence.
What is the best way to organize our solution?
This is probably not a good S.O. question given that it's asking something opinion-based. When planning out project structure, I aim to keep things simple. If an abstraction is for polymorphism, I will consider moving interfaces into a separate "common" assembly. For example, if I want to provide several possible implementations of a thing, I will have a common assembly that declares the interface, then separate assemblies for the specific implementations. In most cases, though, I use interfaces as contracts so that I can substitute the real classes with mocks. In those cases I keep the interfaces nested beneath the concrete implementation. I use a VS add-in called NestIn to provide nesting support, which keeps the project structure nice and compact. One caveat, however: if you are using .NET Standard libraries, file nesting doesn't appear to be supported. (Hopefully this changes / has changed.)
So for a SomeService, my folder project structure would look like:
Services [folder]
SomeService.cs [concrete]
SomeService.dependencies.cs [partial] [nested]
ISomeService.cs [nested]
The .dependencies.cs file is a partial class where I put all dependencies and the constructor. This keeps them tucked out of the way while I'm working on the implementation. I used to rely on #regions way back, but frankly I cannot stand them now; partial classes are much better IMO.
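A quick sketch of that layout (ISomeRepository here is a stand-in for whatever dependencies the service actually has):

// ISomeService.cs [nested under SomeService.cs]
public interface ISomeService
{
    string DoWork();
}

// SomeService.dependencies.cs [partial, nested] - constructor and wiring only
public partial class SomeService
{
    private readonly ISomeRepository repository;

    public SomeService(ISomeRepository repository)
    {
        this.repository = repository;
    }
}

// SomeService.cs - the behaviour, free of constructor noise
public partial class SomeService : ISomeService
{
    public string DoWork() => repository.LoadSomething();
}

// stand-in dependency for the sketch
public interface ISomeRepository
{
    string LoadSomething();
}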
My repositories live alongside my entities in a Domain assembly.
Entities [folder]
Configuration [folder]
OrderConfiguration.cs
Order.cs
Repositories [folder]
OrderManagementRepository.cs
OrderManagementRepository.dependencies.cs
IOrderManagementRepository.cs
MySystemDbContext.cs
I don't use generic repositories; rather, repositories are designed to pair up with the Controllers or Services that they serve. I might have some general-purpose repositories that service more than one consumer (stuff like lookups, etc.). This pattern evolved for me from wanting to satisfy SRP. The biggest issue with things like generic repositories is that they need to serve multiple masters. While an OrderRepository might serve a single responsibility in being worried solely about Orders, the problem I see is that many different places will need access to Order information. This means different criteria, and wanting different amounts of data. So instead, if I have an OrderManagementService that deals with orders, order lines, etc. and touches on Products and other bits and bobs in the process of placing orders, I will use an OrderManagementRepository to serve virtually all data needed by the service, and to manage the wrapping of supported operations for managing an order. This means my service typically needs only 1 repository dependency (rather than an OrderRepository, ProductRepository, etc.) and my OrderManagementRepository has only 1 reason to change. (But that's getting off topic. :)
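Sketched out (the member names are illustrative, not from the original), the pairing looks something like this:

// One repository shaped around the service it serves
public interface IOrderManagementRepository
{
    void AddOrderLine(int orderId, int productId, int quantity);
    decimal GetOrderTotal(int orderId);
}

public class OrderManagementService
{
    // a single repository dependency instead of Order/Product/... repositories
    private readonly IOrderManagementRepository repository;

    public OrderManagementService(IOrderManagementRepository repository)
    {
        this.repository = repository;
    }

    public void AddToOrder(int orderId, int productId, int quantity) =>
        repository.AddOrderLine(orderId, productId, quantity);
}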
I started relying on nesting a while back, when you needed ReSharper or the like to get access to "Go to Implementation" for interfaces. Go to Definition would take you to the interface, which, when in a separate namespace or assembly, made navigating around dependencies a pain. By nesting interfaces under their concrete implementations, it's a quick click through from the interface to its concrete implementation and back. I make use of tracking the current code file in the solution manager, so as I navigate through code, my project view highlights/expands to the currently viewed file.
Ultimately, your project structure should reflect how you prefer to navigate through the code, to make it as intuitive as possible to get around and find the bits you need. That will be different for different people, so partial classes and nesting work really well for me, as I am a very visual person who uses the project view a lot. It might not serve any benefit for people who are hotkey navigation wizards. Ultimately, though, I'd say keep it simple and adaptable. Trying to plan it out too much in the early stages is like premature optimization. Don't be afraid to move things around as a project grows. A project that grows simply by adding code will invariably turn into an unstable, confusing, tangled mess, no matter how well you try to plan ahead. Good code comes from constant refactoring, which is moving things around and deleting as well as adding. When your style is adaptable and you are building in a way that is constantly refining, and code is getting better through natural selection, the structure is free to evolve.
Hopefully that might give some food for thought. Good luck in the green fields!
Edit: Regarding polymorphic interfaces vs. contract interfaces: with polymorphic interfaces, where I want to have multiple substitutable concrete implementations, the interface (and any applicable base class) would reside in a separate assembly. The nesting solution applies to cases where the only substitution is for mocking purposes (unit testing). A recent example of a polymorphic case was when I needed to replace an in-built SMS service wrapper to support a new SMS provider. This resulted in refactoring a hard-coded concrete class from the original code into an SMSCore assembly containing the ISMSProvider interface and some general common definitions, then two assemblies for the implementations: SMSByMessageMedia and SMSBySoprano.
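The rough shape of that refactoring (the member names are my guess, not the actual API):

// SMSCore assembly: the shared contract and common definitions
public interface ISMSProvider
{
    void Send(string phoneNumber, string message);
}

// SMSByMessageMedia assembly
public class MessageMediaSmsProvider : ISMSProvider
{
    public void Send(string phoneNumber, string message) { /* call MessageMedia */ }
}

// SMSBySoprano assembly
public class SopranoSmsProvider : ISMSProvider
{
    public void Send(string phoneNumber, string message) { /* call Soprano */ }
}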
Other cases that come up might be around customizations. For instance, I have a number of personal libraries for general-purpose code, and when implementing them for a client there might be some client-specific "isms" I want to accommodate. These cases are typically resolved by extending the general-purpose implementation (Open-Closed Principle) through overriding, or by implementing a provided interface for the custom dependency that the general-purpose code can consume. In both of these cases, the client project is going to have a reference to the concrete implementation(s) anyway, so having extendable classes and dependency interfaces in that assembly/namespace doesn't pose any issues, and it saves needing to add several different namespaces and assembly references.

Why do we require interfaces between UI, business, and data access in C#?

I have seen in many places that when C# programmers use 3-tier architecture, they tend to use interfaces between each layer. For example, if the solution is like:
SampleUI
Sample.Business.Interface
Sample.Business
Sample.DataAccess.Interface
Sample.DataAccess
Here the UI calls the business layer through the interface, and business calls data access in the same fashion.
If this approach is meant to reduce the dependency between the layers, isn't that already in place with separate class libraries, without the additional use of the interface?
The code sample is below,
Sample.Business
public class SampleBusiness
{
    ISampleDataAccess dataAccess = Factory.GetInstance<ISampleDataAccess>();

    public string FetchSampleData() => dataAccess.GetSampleData();
}
Sample.DataAccess.Interface
public interface ISampleDataAccess
{
    string GetSampleData();
}
Sample.DataAccess
public class SampleDataAccess : ISampleDataAccess
{
    public string GetSampleData()
    {
        return data; // data from the database
    }

    private string data; // populated from the database
}
Does this interface in between really do any great job?
What if I use new SampleDataAccess().GetSampleData() and remove the interface class library completely?
Code Contract
There is one remarkable advantage of using interfaces as part of the design process: It is a contract.
Interfaces are specifications of contracts in the sense that:
If I use (consume) the interface, I am limiting myself to what the interface exposes. Well, unless I want to play dirty (reflection, et al.), that is.
If I implement the interface, I am committing myself to provide what the interface exposes.
Doing things this way has the advantage that it eases dividing work among layers in the development team. It allows the developers of a layer to provide an *cough* interface *cough* that the next layer can use to communicate with it… even before such an interface has been implemented.
Once they have agreed on the interface - at least on a minimum viable interface - they can start developing the layers in parallel, knowing that the other team will uphold their part of the contract.
Mocking
A side effect of using interfaces this way is that it allows you to mock the implementation of a component, which eases the creation of unit tests. This way you can test the implementation of a layer in isolation, so you can distinguish with ease when a layer is failing because it has a defect and when a layer is failing because the layer below it has a defect.
For projects that are developed by a single individual - or by a group that doesn't bother too much with drawing clear lines to separate work - the ability to mock might be the main motivation to implement interfaces.
Consider, for example, that you want to test whether your presentation layer can handle paging correctly… but you need to request data to fill those pages. It could be the case that:
The layer below is not ready.
The database does not have data to provide yet.
It is failing, and they do not know whether the paging code is incorrect or the defect comes from a point deeper in the code.
Etc…
Either way, the solution is mocking. In addition, mocking is easier if you have interfaces to mock.
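For instance, with the ISampleDataAccess interface from the question, a hand-rolled fake is enough to exercise the business layer with no database at all. (This sketch assumes a constructor-injected variant of SampleBusiness, which the question's static-factory version is not.)

public class FakeSampleDataAccess : ISampleDataAccess
{
    public string GetSampleData() => "canned test data";
}

// hypothetical constructor-injected variant of SampleBusiness
public class TestableSampleBusiness
{
    private readonly ISampleDataAccess dataAccess;

    public TestableSampleBusiness(ISampleDataAccess dataAccess)
    {
        this.dataAccess = dataAccess;
    }

    public string FetchSampleData() => dataAccess.GetSampleData();
}

// in a unit test:
var business = new TestableSampleBusiness(new FakeSampleDataAccess());
bool ok = business.FetchSampleData() == "canned test data";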
Changing the implementation
If - for whatever reason - some of the developers decide they want to change the implementation of their layer, they can do so trusting the contract imposed by the interface. This way, they can swap implementations without having to change the code of the other layers.
What reason?
Perhaps they want to test a new technology. In this case, they will probably create an alternative implementation as an experiment. In addition, they will want to have both versions working so they can test which one works better.
Addendum: not only for testing both versions, but also to ease rolling back to the main version. Of course, they might accomplish this with source version control; because of that, I will not consider rolling back a motivation to use interfaces. Yet it might be an advantage for anybody not using version control. For anybody not using it… start using it!
Or perhaps they need to port the code to a different platform, or a different database engine. In this case, they probably do not want to throw away the old code either… For example, if they have clients that run Windows and SQL Server and others that run Linux and Oracle, it makes sense to maintain both versions.
Of course, in either case, you want to be able to implement those changes by doing the minimum possible work. Therefore, you do not want to change the layer above to target a different implementation. Instead, you will probably have some form of factory or inversion of control container that you can configure to do dependency injection with the implementation you want.
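Continuing the question's example, a tiny factory is one hedged way to let configuration pick the implementation without touching the layer above (OracleSampleDataAccess is invented for the sketch):

public class OracleSampleDataAccess : ISampleDataAccess
{
    public string GetSampleData() { /* Oracle query */ return "from Oracle"; }
}

public static class DataAccessFactory
{
    // "provider" would typically come from a config file
    public static ISampleDataAccess Create(string provider) =>
        provider == "oracle"
            ? (ISampleDataAccess)new OracleSampleDataAccess()
            : new SampleDataAccess();
}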
Mitigating change propagation
Of course, they may decide to change the actual interfaces. If the developers working on a layer need something additional on the interface, they can add it (given whatever methodology the team has set up to approve these changes) without messing with the code of the classes that the other team is working on. In source version control, this will ease merging changes.
In the end, the purpose of using a layered architecture is separation of concerns, which implies separation of reasons for change… If you need to change the database, your changes should not propagate into code dedicated to presenting information to the user. Sure, the team can accomplish this with concrete classes. Yet interfaces provide a good, evident, well-defined, language-supported barrier to stop the propagation of change - in particular if the team has good rules about responsibility (no, I do not mean code concerns; I mean which developer is responsible for doing what).
You should always use an abstraction of the layer to have the ability:
to use fake implementations for faster development
to easily develop alternative implementations
to switch between different implementations
...

In a multi-tier architecture, is it OK to have adjacent layers reference one another?

If I have an app that consists of multiple layers, defined as multiple projects in a solution, is it OK to have a layer reference the layer directly above/below it? Or should one use dependency injection to eliminate this need?
I am building on a more specific question that I asked here, but I would like more general advice.
How would I go about setting up a project like this in VS2010? Would I need a third project to house the DI stuff? (I am using Ninject.)
EDIT: example
Here is an example of my two layers. The first layer has an IUnitOfWork interface, and the second layer has a class that implements said interface. Set up in this manner, the project will not build unless layer 2 has a reference to layer 1. How can I avoid this? Or should I not even be worried about references, and leave it alone since the layers are adjacent to one another?
Layer 1
public interface IUnitOfWork
{
    void Save();
}
Layer 2
public class DataContext : IUnitOfWork
{
    public void Save()
    {
        SaveChanges(); // ...
    }
}
General advice is to decouple layers with interfaces and use Dependency Injection and IoC containers for a great level of flexibility while maintaining an application.
But sometimes this could be overkill for small applications, so to give you a more specific recommendation you would have to provide at least a description of the application and its layers.
Regarding the DI stuff, I would suggest encapsulating it in a separate assembly.
See the great article by Martin Fowler: Inversion of Control Containers and the Dependency Injection pattern
EDIT: Answer to comments regarding interface
The only way to get rid of such coupling is to store common interfaces/classes in a separate assembly. In your case, create a separate assembly and put the IUnitOfWork interface there.
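A minimal Ninject sketch of that layout, assuming a third project (the composition root) that references both layers plus the new contracts assembly:

using Ninject;

// in the composition root (e.g. the application entry point)
var kernel = new StandardKernel();
kernel.Bind<IUnitOfWork>().To<DataContext>(); // contracts assembly -> layer 2 class

IUnitOfWork unitOfWork = kernel.Get<IUnitOfWork>();
unitOfWork.Save();

Layer 1 then consumes IUnitOfWork from the contracts assembly, layer 2 implements it, and neither layer needs to reference the other.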
EDIT: Ninject projects reference
There are 147 Ninject projects; I would suggest downloading and investigating the ones most interesting from your point of view: Ninject projects
This is the known "tightly coupled vs loosely coupled" dilemma, and there is no general recommendation for it. It depends very much on how large your components are, how they interact, how often they change, which teams work on them, and what your build times are.
My general advice would be to keep balance. Do not go crazy decoupling every mini class, and on the other hand do not create a monolith where one small modification causes a rebuild of the whole world.
Where change is expected, favor loosely coupled components.
Where stability is expected, favor tightly coupled components.
It is always a trade-off; there are costs associated with each decision.
The costs of having to make changes across tightly coupled components are well known:
- The change is invasive
- It may take a lot of work to determine everything in the dependency chain
- It's easy to miss dependencies
- It's difficult to ensure quality
On the other hand, the costs of over-engineering are bad too:
- Code bloat
- Complexity
- Slower development
- Difficulty for new developers to become productive
To your example:
Take a look at the Stoplight example that comes along with the Microsoft Unity Application Blocks.
People have already answered your question. I would just add that the "below" layer should not reference the "above" layer: the below layer's purpose is to provide certain functionality that is consumed by the above layer, so it should not know anything about the above layer - just like in the nice layered model of the TCP/IP stack.

Common definitions in loosely coupled design

I'm trying to put together a very granular, loosely coupled design.
But I can't decide how to handle common definitions.
Right now I separate concerns by adding each one as an external dll. Through injection and interfaces, my domain can use my business logic without knowing the implementation.
The problem I'm having is that for all my components to be loosely coupled, they need to implement the same interfaces. My solution was a separate project (dll) with just all the definitions.
This started out well, but it seems to be getting bloated, and it chains all code together on this one dll dependency.
What's the most pragmatic way to go about this?
Thanks!
EDIT
Sorry, I think I initially misunderstood your question. So you have one assembly which contains your interfaces, and you have your implementations in other assemblies, using DI to create the dependent objects. I tend to create a core assembly in my application which holds the main behaviours of the app (smart entities, enums, and interfaces). This assembly depends on little but is heavily depended on by the rest of the application. Check out this project as an example: whocanhelpme.codeplex.com. You could call this core bloated, but it, by definition, needs to be very rich.
You might find that many of your abstract units follow common design patterns. Here is a site that gives a good description of each one - you may be able to derive names from these (Observer, Factory, Adapter, etc.):
http://www.dofactory.com/Patterns/Patterns.aspx
I would say that a layer should only know about the next layer and its interfaces, so it is fine to place interfaces along with their implementations and then add references between the layers (assemblies) in the chain.
You can configure DI using the bootstrapper pattern and resolve through the locator. Regarding cross-cutting concerns like logging, caching, etc., there should be a separate assembly referenced by each layer. Here you can also employ contracts, and in the future perhaps replace these cross-cutting functionalities with another assembly implementing the same contracts.
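For example, a logging contract could live in its own small assembly that every layer references, with the implementation swappable later (names here are illustrative):

// MyApp.CrossCutting.Contracts - referenced by every layer
public interface ILogger
{
    void Log(string message);
}

// MyApp.CrossCutting - today's implementation; any future assembly that
// honours the same contract can replace it without touching the layers
public class ConsoleLogger : ILogger
{
    public void Log(string message) => System.Console.WriteLine(message);
}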
Hope this helps at least a bit :)
