How to test a .NET project with Ninject in a proper way - C#

We have a project with a separated business layer. It consists of many services (classes) in a separate project in the solution. We also use Ninject to manage dependencies.
All classes in the business-layer project are internal, and it communicates with the «another world» through interfaces.
If we create a new project to contain the tests, it won't see the internal classes (though we can use the hack of exposing internals to the test assembly via InternalsVisibleTo in AssemblyInfo).
What I really need to know is what's necessary to test:
We can create a test environment for everything and test only through the exposed interfaces (there is no «clear» DAL, we are using LINQ to SQL, but it can be mocked).
This way looks good, because we know nothing about the internal business-layer structure and test only the «contract» functionality. The downside is that the system has lots of options, settings and bindings, and it seems impossible, or at least pretty hard, to check all possible combinations of them.
We can place the tests in the same project, or set the attribute that makes internals visible, so we'd be able to test the internal classes. This is good because we can test almost everything, but it's hard to control the bindings, since it would be nice for Ninject to do that, with us overriding only the bindings we need in a concrete test.
It's also not clear how to test classes implementing the same interface (and doing similar things). For example, we have a few implementations of Cache, but each implementation keeps its data in a different place (MS SQL, a key-value DB, the ASP.NET cache, etc.), so the tests for each implementation would effectively be the same.
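The duplication concern in the last point is commonly handled with a shared "contract" test fixture: an abstract base class holds the tests, and each implementation gets a tiny subclass. A sketch, assuming NUnit; the ICache interface and InMemoryCache class are illustrative names, not from the question:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public interface ICache
{
    void Put(string key, string value);
    string Get(string key);
}

// A trivial implementation so the sketch is self-contained.
internal class InMemoryCache : ICache
{
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();
    public void Put(string key, string value) => _data[key] = value;
    public string Get(string key) => _data[key];
}

// All contract tests live here, written once against the interface.
public abstract class CacheContractTests
{
    protected abstract ICache CreateCache();

    [Test]
    public void StoredValueCanBeReadBack()
    {
        var cache = CreateCache();
        cache.Put("answer", "42");
        Assert.AreEqual("42", cache.Get("answer"));
    }
}

// Each implementation (MS SQL, key-value DB, ...) only overrides CreateCache().
[TestFixture]
public class InMemoryCacheTests : CacheContractTests
{
    protected override ICache CreateCache() => new InMemoryCache();
}
```

Each new Cache implementation then costs one small subclass rather than a copy of the whole test suite.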

As you say, you need to have access to the classes in order to test. So make only the internals that are exposed to the outside through interfaces accessible from the outside.
Write your tests only against the behaviour that is exposed to the outside, the «another world» as you call it.
Write the more generic test cases first and then go into details as needed.
As this will be an ongoing process together with the development/change of the actual functionality, you'll then be able to decide how many fine-grained scenarios you actually need.
Also, take a look at the Ninject MockingKernel extension: https://github.com/ninject/ninject.mockingkernel
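The MockingKernel directly addresses the binding-control concern: unbound interfaces are resolved as mocks automatically, and a test overrides only what it cares about. A sketch, assuming the Moq flavour of the extension is installed; ICache and ReportService are illustrative types, not from the question:

```csharp
using Moq;
using Ninject;
using Ninject.MockingKernel.Moq;

public interface ICache
{
    string Get(string key);
}

public class ReportService
{
    private readonly ICache _cache;
    public ReportService(ICache cache) { _cache = cache; }
    public string Render(string key) => "report:" + _cache.Get(key);
}

public static class Demo
{
    public static void Main()
    {
        using (var kernel = new MoqMockingKernel())
        {
            // ICache has no binding, so the kernel supplies a Moq mock;
            // set up only the behaviour this particular test needs.
            kernel.GetMock<ICache>().Setup(c => c.Get("k")).Returns("v");

            var service = kernel.Get<ReportService>();
            System.Diagnostics.Debug.Assert(service.Render("k") == "report:v");
        }
    }
}
```

A real test would typically load the production Ninject modules first and then rebind only the dependencies under its control.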

Related

Why do we require interfaces between UI, Business and Data Access in C#

I have seen in many places that when C# programmers use 3-tier architecture, they tend to use interfaces between each layer. For example, if the solution is like
SampleUI
Sample.Business.Interface
Sample.Business
Sample.DataAccess.Interface
Sample.DataAccess
Here UI calls the business layer through the interface and business calls the data access in the same fashion.
If the purpose of this approach is to reduce the dependency between the layers, that is already achieved by the class-library separation, without the additional use of interfaces.
The code sample is below,
Sample.Business
public class SampleBusiness {
    ISampleDataAccess dataAccess = Factory.GetInstance<ISampleDataAccess>();

    public string GetSampleData() {
        return dataAccess.GetSampleData();
    }
}
Sample.DataAccess.Interface
public interface ISampleDataAccess {
    string GetSampleData();
}
Sample.DataAccess
public class SampleDataAccess : ISampleDataAccess {
    public string GetSampleData() {
        return data; // data from database
    }
}
Does this interface in between provide any great benefit?
What if I use new SampleDataAccess().GetSampleData() and remove the interface class library completely?
Code Contract
There is one remarkable advantage of using interfaces as part of the design process: It is a contract.
Interfaces are specifications of contracts in the sense that:
If I use (consume) the interface, I am limiting myself to use what the interface exposes. Well, unless I want to play dirty (reflection, et al.), that is.
If I implement the interface, I am limiting myself to provide what the interface exposes.
Doing things this way has the advantage that it eases dividing work among layers within the development team. It allows the developers of a layer to provide an - cough - interface - cough - that the next layer can use to communicate with it… even before that interface has been implemented.
Once they have agreed on the interface - at least on a minimum viable interface - they can start developing the layers in parallel, knowing that the other team will uphold its part of the contract.
Mocking
A side effect of using interfaces this way is that it allows you to mock the implementation of a component, which eases the creation of unit tests. This way you can test the implementation of a layer in isolation, so you can distinguish with ease when a layer is failing because it has a defect, and when it is failing because the layer below it has a defect.
For projects that are developed by a single individual - or by a group that doesn't bother much with drawing clear lines to separate work - the ability to mock might be the main motivation to implement interfaces.
Consider for example, if you want to test if your presentation layer can handle paging correctly… But you need to request data to fill those pages. It could be the case that:
The layer below is not ready.
The database does not have data to provide yet.
It is failing, and they do not know whether the paging code is incorrect or the defect comes from a point deeper in the code.
Etc…
Either way the solution is mocking. In addition, mocking is easier if you have interfaces to mock.
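To make this concrete with the question's own sample: a sketch using Moq (assumed available), and assuming SampleBusiness is refactored to receive its dependency through the constructor rather than from the factory:

```csharp
using Moq;

public interface ISampleDataAccess
{
    string GetSampleData();
}

public class SampleBusiness
{
    private readonly ISampleDataAccess _dataAccess;

    // Constructor injection makes the dependency replaceable in tests.
    public SampleBusiness(ISampleDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public string GetSampleData() => _dataAccess.GetSampleData();
}

public static class Demo
{
    public static void Main()
    {
        // No database, no lower layer: the mock plays the DAL's part.
        var mock = new Mock<ISampleDataAccess>();
        mock.Setup(d => d.GetSampleData()).Returns("fixture data");

        var business = new SampleBusiness(mock.Object);
        System.Diagnostics.Debug.Assert(business.GetSampleData() == "fixture data");
    }
}
```

If SampleBusiness failed this test, the defect would demonstrably be in the business layer, not in the data access below it.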
Changing the implementation
If - for whatever reason - some of the developers decide they want to change the implementation of their layer, they can do so trusting the contract imposed by the interface. This way, they can swap implementations without having to change the code of the other layers.
What reason?
Perhaps they want to test a new technology. In this case, they will probably create an alternative implementation as an experiment. In addition, they will want to have both versions working so they can test which one works better.
Addendum: Not only for testing both versions, but also to ease rolling back to the main version. Of course, they might accomplish this with source version control. Because of that, I will not consider rolling back as a motivation to use interfaces. Yet, it might be an advantage for anybody not using version control. For anybody not using it… Start using it!
Or perhaps they need to port the code to a different platform, or a different database engine. In this case, they probably do not want to throw away the old code either… For example, if they have clients that run Windows and SQL Server and others that run Linux and Oracle, it makes sense to maintain both versions.
Of course, in either case, you might want to be able to implement those changes by doing the minimum possible work. Therefore, you do not want to change the layer above to target a different implementation. Instead you will probably have some form of factory or inversion of control container, that you can configure to do dependency injection with the implementation you want.
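With an IoC container, swapping the implementation reduces to changing one binding. A sketch using Ninject (the container from the first question); the concrete DAL types are illustrative:

```csharp
using Ninject;

public interface ISampleDataAccess
{
    string GetSampleData();
}

public class SqlServerDataAccess : ISampleDataAccess
{
    public string GetSampleData() => "sql";
}

public class OracleDataAccess : ISampleDataAccess
{
    public string GetSampleData() => "oracle";
}

public static class Demo
{
    public static void Main()
    {
        var kernel = new StandardKernel();
        kernel.Bind<ISampleDataAccess>().To<SqlServerDataAccess>();
        // Porting to Oracle later only means changing this binding;
        // no consumer of ISampleDataAccess is touched:
        // kernel.Rebind<ISampleDataAccess>().To<OracleDataAccess>();

        var dal = kernel.Get<ISampleDataAccess>();
        System.Diagnostics.Debug.Assert(dal.GetSampleData() == "sql");
    }
}
```

The layer above only ever asks the container for ISampleDataAccess, so it is oblivious to which engine is behind it.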
Mitigating change propagation
Of course, they may decide to change the actual interfaces. If the developers working on a layer need something additional on the interface, they can add it to the interface (given whatever methodology the team has set up to approve these changes) without having to mess with the code of the classes that the other team is working on. In source version control, this will ease merging changes.
In the end, the purpose of using a layered architecture is separation of concerns, which implies separation of reasons for change… If you need to change the database, your changes should not propagate into code dedicated to presenting information to the user. Sure, the team can accomplish this with concrete classes. Yet interfaces provide a good, evident, well-defined, language-supported barrier to stop the propagation of change. In particular if the team has good rules about responsibility (no, I do not mean code concerns; I mean which developer is responsible for doing what).
You should always use an abstraction of the layer to have the ability
to mock the interfaces in unit tests
to use fake implementations for faster development
to easily develop alternative implementations
to switch between different implementations
...

Design of package with facades

I'm coming from Java and I'm fairly new to C#, so please bear with me in case I don't use the correct terms or I'm thinking too "java-ish".
Situation
I'm working on a package (all classes in the same namespace) that provides facades to consume data from a few remote services. For each facade there is an interface (like IEventServiceGateway) and an implementation (like EventServiceGateway), all declared public. Each of these implementations consumes data from at least one service, all a bit verbose, so I wrote a client class (like UserServiceClient) for each that provides common operations to all implementations. Because nobody outside of the package should use them, I declared them internal. Furthermore, I did the same with the WCF proxies.
I have two assemblies, one for the facades and the clients and one for the unit tests, the namespace of both is the same.
Problems
I cannot unit test the internal classes, because they are not accessible in the assembly containing the unit tests. I know there are "hacks" to circumvent these restrictions, but using such hacks usually means that I'm using things not as supposed. Not testing the clients does not seem like a sensible solution, because the code paths would be much too complex when testing only through the facades. Furthermore, I would be testing the same things, like edge cases, over and over again.
I cannot inject the internal classes into the facades using constructor injection, because the visibility of the argument type is "lower" than the visibility of the constructor. But hard-coding the WCF proxies reduces testability.
So I have the feeling that either my design is borked because I totally misunderstood the thing with the facades (that only the facades and their implementations should be accessible, nothing else), that my project setup is flawed or that something else is wrong.
I would greatly appreciate if somebody could enlighten me.
You could use the InternalsVisibleTo attribute:
When you add it to the assembly that contains the internal types, you can specify other assemblies that get access they normally wouldn't have.
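A minimal sketch of how this looks; the assembly name is an example, not from the question:

```csharp
// In AssemblyInfo.cs (or any source file) of the assembly that holds
// the internal client classes and WCF proxies:
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.Gateways.Tests")]
```

Note that if the facade assembly is strong-named, the test assembly must be strong-named too, and the attribute argument must include its full public key.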

IOC and interfaces

I have a project structure like so :-
CentralRepository.BL
CentralRepository.BO
CentralRepository.DataAccess
CentralRepository.Tests
CentralRepository.Webservices
and there are an awful lot of dependencies between these. I want to leverage Unity to reduce the dependencies, so I'm going to create interfaces for my classes. My question is: in which project should the interfaces reside? My thought is they should be in the BO layer. Can someone give me some guidance on this, please?
On a combinatorial level, you have three options:
Define interfaces in a separate library
Define interfaces together with their consumers
Define interfaces together with their implementers
However, the last option is a really bad idea because it tightly couples the interface to the implementer (or the other way around). Since the whole point of introducing an interface in the first place is to reduce coupling, nothing is gained by doing that.
Defining the interface together with the consumer is often sufficient, and personally I only take the extra step of defining the interface in a separate library when disparate consumers are in play (which mostly tends to happen if you're shipping a public API).
BO is essentially your domain objects, or at least that is my assumption. In general, unless you are using a pattern like ActiveRecord, they are state-only objects. An interface, on the other hand, specifies behavior. Mixing behavior and state is, by many "best practices", not a good idea. Now I will likely ramble a bit, but I think the background may help.
Now, to the question of where interfaces should exist. There are a couple of choices.
Stick the interfaces in the library they belong to.
Create a separate contract library
The simpler is to stick them in the same library, but then your mocks rely on the library, as well as your tests. Not a huge deal, but it has a slight odor to it.
My normal method is to set up projects like this:
{company}.{program/project}.{concern (optional)}.{area}.{subarea (optional)}
The first two to three bits of the name are covered in yours by the word "CentralRepository". In my case it would be MyCompany.CentralRepository or MyCompany.MyProgram.CentralRepository, but naming convention is not the core part of this post.
The "area" portions are the thrust of this post, and I generally use the following.
Set up a domain object library (your BO): CentralRepository.Domain.Models
Set up a domain exception library: CentralRepository.Domain.Exceptions
All/most other projects reference the above two, as they represent the state in the application. Certainly ALL business libraries use these objects. The persistence library(s) may have a different model, and I may have a view model in the experience library(s).
Set up the core library next: CentralRepository.Core (may have subareas?). This is where the business logic lies (the actual application, as persistence and experience changes should not affect core functionality).
Set up a test library for core: CentralRepository.Core.Test.Unit.VS (I have Unit.VS to show these are unit tests, not integration tests with a unit test library, and I am using VS to indicate MSTest - others will have different naming).
Create tests and then set up business functionality. As needed, set up interfaces. Example:
You need data from a DAL, so an interface and mock are set up for data to use in Core tests. The name here would be something like CentralRepository.Persist.Contracts (you may also use a subarea if there are multiple types of persistence).
The core concept here is "Core as Application" rather than n-tier (they are compatible, but thinking of business logic only, as a paradigm, keeps you loosely coupled with persistence and experience).
Now, back to your question. The way I set up interfaces is based on the location of the "interfaced" classes. So, I would likely have:
CentralRepository.Core.Contracts
CentralRepository.Experience.Service.Contracts
CentralRepository.Persist.Service.Contracts
CentralRepository.Persist.Data.Contracts
I am still working with this, but the core concept is my IoC and testing should both be considered and I should be able to isolate testing, which is better achieved if I can isolate the contracts (interfaces). Logical separation is fine (single library), but I don't generally head that way due to having at least a couple of green developers who find it difficult to see logical separation without physical separation. Your mileage may vary. :-0
Hope this rambling helps in some way.
I would suggest keeping interfaces wherever their implementers are in the majority of cases, if you're talking assemblies.
Personally, when I'm using a layered approach, I tend to give each layer its own assembly and give it a reference to the layer below it. In each layer, most of the public things are interfaces. So, in the data access layer, I might have ICustomerDao and IOrderDao as public interfaces. I'll also have public DAO factories in the DAO assembly. I'll then have specific implementations marked as internal - CustomerDaoMySqlImpl or CustomerDaoXmlImpl - that implement the public interface. The public factory then provides implementations to users (i.e. the domain layer) without the users knowing exactly which implementation they're getting - they provide information to the factory, and the factory turns around and hands them an ICustomerDao that they use.
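The layout described above can be sketched like this; the Customer type, the implementation bodies, and the backend-selection argument are illustrative assumptions:

```csharp
public class Customer
{
    public int Id;
    public string Name;
}

// Public contract: the only thing consumers in the domain layer see.
public interface ICustomerDao
{
    Customer GetById(int id);
}

// Concrete implementations stay internal to the DAO assembly.
internal class CustomerDaoMySqlImpl : ICustomerDao
{
    public Customer GetById(int id) => new Customer { Id = id, Name = "from MySQL" };
}

internal class CustomerDaoXmlImpl : ICustomerDao
{
    public Customer GetById(int id) => new Customer { Id = id, Name = "from XML" };
}

// Public factory: callers supply information, get back an ICustomerDao,
// and never learn which implementation they received.
public static class CustomerDaoFactory
{
    public static ICustomerDao Create(string backend) =>
        backend == "mysql"
            ? (ICustomerDao)new CustomerDaoMySqlImpl()
            : new CustomerDaoXmlImpl();
}
```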
The reason I mention all this is to lay the foundation for understanding what interfaces are really supposed to be -- contracts between the servicer and client of an API. As such, from a dependency standpoint, you want to define the contract generally where the servicer is. If you define it elsewhere, you're potentially not really managing your dependencies with interfaces and instead just introducing a non-useful layer of indirection.
So anyway, I'd say think of your interfaces as what they are -- a contract to your clients as to what you're going to provide, while keeping private the details of how you're going to provide it. That's probably a good heuristic that will make it more intuitive where to put the interfaces.

Remove dependency to logging code

I have more of a design question, as I'm refactoring a quite big piece of code that I took over.
It's not modular; basically it's pseudo-object-oriented code. It contains hard-coded dependencies, no interfaces, multiple responsibilities, etc. Just mayhem.
Among other things it contains a great deal of internal calls to a class called Audit, which contains methods like Log, Info, LogError, etc. That class has to be configured in the application config in order to work; otherwise it crashes. And that's the main pain for me. Please let's focus on that issue in the responses, namely making client code independent of logging classes/solutions/frameworks.
And now, I would like those classes, that have that Audit class dependency hardcoded, refactored in order to obtain several benefits:
First, to extract them nicely into different assemblies, as I will need some functionality available in other applications (for instance the attachment-generating code - let's call it the AttachmentsGenerator class - which until now was specific to one application, but could now be used in many places)
Remove internal dependencies, so that other applications can take advantage of my AttachmentsGenerator class without needing to add references to other assemblies
Do a magic trick that allows the AttachmentsGenerator class to report some audit info, traces, etc., but without a hardcoded implementation. As a matter of fact, I don't want logging to be mandatory, so it should be possible to use AttachmentsGenerator without internal logging configured, and without the client code needing to reference additional assemblies in order to use logging. Bottom line: if client code wants to use AttachmentsGenerator, it adds a reference to the assembly that contains that class, then uses the new operator, and that's all.
What kind of approach can I use, in terms of design patterns etc., to achieve this? I would appreciate some links to articles that address this issue - as it can be time-consuming to elaborate ideas in an answer. Or, if you can, suggest a simple interface/class/assembly sketch.
Thanks a lot,
Paweł
Edit 1: As my question is not quite clear, I'll rephrase it once again: this is my plan - are there other interesting ways to do this?
Seems like the easiest way to do this would be to use dependency injection.
Create a generic ILogger interface with methods for logging.
Create a concrete implementation of ILogger that just does nothing for all the methods (e.g. NullLogger)
Create another concrete implementation that actually does logging via whatever framework you choose (e.g. log4net)
Use a DI tool (Spring.NET, StructureMap, etc.) to inject the appropriate implementation depending on whether or not you want logging enabled.
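The steps above can be sketched as follows; ILogger and NullLogger are the names from this answer, while the AttachmentsGenerator wiring is an assumption about the asker's class:

```csharp
using System;

// Step 1: a generic logging interface, owned by your own assembly,
// so clients never reference any logging framework.
public interface ILogger
{
    void Info(string message);
    void Error(string message, Exception ex);
}

// Step 2: a do-nothing implementation, so everything works when
// no logging is configured at all.
public sealed class NullLogger : ILogger
{
    public void Info(string message) { }
    public void Error(string message, Exception ex) { }
}

// Step 3 would be an adapter over a real framework (e.g. log4net),
// living in a separate assembly that only logging-enabled hosts reference.

public class AttachmentsGenerator
{
    private readonly ILogger _log;

    // "new AttachmentsGenerator()" just works; a real logger is opt-in,
    // whether supplied by hand or by a DI container (step 4).
    public AttachmentsGenerator(ILogger log = null)
    {
        _log = log ?? new NullLogger();
    }

    public string Generate(string name)
    {
        _log.Info("Generating attachment " + name);
        return name + ".pdf";
    }
}
```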
Implement logging (and any other cross-cutting concerns) as a Decorator. That's way more SOLID than having to inject some ILogger interface into each and every service (which would violate both the Single Responsibility Principle and DRY).
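A sketch of that decorator alternative: logging wraps the service from the outside, so neither AttachmentsGenerator nor its clients know logging exists. The interface and wiring are illustrative assumptions:

```csharp
using System;

public interface IAttachmentsGenerator
{
    string Generate(string name);
}

// The real service: no logging concern anywhere in it.
public class AttachmentsGenerator : IAttachmentsGenerator
{
    public string Generate(string name) => name + ".pdf";
}

// The cross-cutting concern lives in the decorator, added (or not)
// at composition time.
public class LoggingAttachmentsGenerator : IAttachmentsGenerator
{
    private readonly IAttachmentsGenerator _inner;
    private readonly Action<string> _log;

    public LoggingAttachmentsGenerator(IAttachmentsGenerator inner, Action<string> log)
    {
        _inner = inner;
        _log = log;
    }

    public string Generate(string name)
    {
        _log("Generating " + name);
        var result = _inner.Generate(name);
        _log("Generated " + result);
        return result;
    }
}
```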

Tramp Data vs. Testability

I'm not doing much new development right now, but a lot of refactoring of older C# subsystems whose original requirements no longer cover new - and, I'll add, unexpected - requirements. I'm also now using Rhino Mocks and unit tests where possible (VS 2008).
The dilemma for me is that to make the methods testable and mockable, I need to define clear "contracts" using interfaces. However, if I do this, a lot of the global data that many of the classes use turns into tramp data, passed from method to method until it gets to its intended user; this looks ugly and goes against my sensibilities, but… it can be mocked. Making a mixed-bag class with a lot of static global properties is a more attractive option, but it is not Rhino-testable. Is there a middle ground between the two? Testable, but not too trampy? A pattern, perhaps?
You should also understand that these applications run on an in-house corporate developed platform, so there are a lot of helper classes and services that are instantiated once per application, and then are used throughout the application, for example a database accessor helper class. Another example is using configuration files that are read once, and used throughout the application by different methods for various reasons.
Your thoughts appreciated.
What you might want to look at here is some form of the Service Locator pattern. Make those classes find their own tramps.
Some other reasonable options would include wrapping up the bulk of the commonly used stuff in an "application context" class of some sort.
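The "application context" idea can be sketched like this: bundle the once-per-application helpers behind one interface, so each method takes a single mockable parameter instead of a trail of tramp data. All member names here are illustrative assumptions:

```csharp
// The once-per-application helpers the question mentions.
public interface IDatabaseHelper
{
    string Query(string sql);
}

public interface IConfigSettings
{
    string Get(string key);
}

// One context object bundles them; only this one thing gets passed around
// (and mocked, e.g. with Rhino Mocks).
public interface IAppContext
{
    IDatabaseHelper Database { get; }
    IConfigSettings Settings { get; }
}

public class OrderProcessor
{
    private readonly IAppContext _ctx;

    public OrderProcessor(IAppContext ctx)
    {
        _ctx = ctx;
    }

    public string Describe(int orderId)
    {
        // One dependency to stub in tests instead of many globals.
        var prefix = _ctx.Settings.Get("order.prefix");
        return prefix + orderId;
    }
}
```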
You also might wish to look into dependency injection if you haven't done so yet.
