Design of a package with facades - C#

I'm coming from Java and I'm fairly new to C#, so please bear with me in case I don't use the correct terms or I'm thinking too "Java-ish".
Situation
I'm working on a package (all classes in the same namespace) that provides facades to consume data from a few remote services. For each facade there is an interface (like IEventServiceGateway) and an implementation (like EventServiceGateway), all declared public. Each of these implementations consumes data from at least one service, and the services are all a bit verbose to use, so for each service I wrote a client class (like UserServiceClient) that provides the operations common to all implementations. Because nobody outside of the package should use these clients, I declared them internal. Furthermore, I did the same with the WCF proxies.
I have two assemblies, one for the facades and the clients and one for the unit tests, the namespace of both is the same.
Problems
I cannot unit test the internal classes, because they are not accessible from the assembly containing the unit tests. I know there are "hacks" to circumvent these restrictions, but using such hacks usually means I'm using things in a way they weren't intended. Not testing the clients does not seem like a sensible solution, because the code paths would be much too complex to cover by testing only the facades. Furthermore, I would end up testing the same things, such as edge cases, over and over again.
I cannot inject the internal classes into the facades using constructor injection, because the visibility of the argument type is "lower" than the visibility of the constructor. But hard-coding the WCF proxies reduces testability.
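A minimal sketch of that visibility clash (hypothetical names mirroring the ones above); the compiler rejects the public constructor because its parameter type is internal:

public interface IEventServiceGateway { }

internal class UserServiceClient { }

public class EventServiceGateway : IEventServiceGateway
{
    private readonly UserServiceClient client;

    // error CS0051: Inconsistent accessibility: parameter type
    // 'UserServiceClient' is less accessible than the constructor
    public EventServiceGateway(UserServiceClient client)
    {
        this.client = client;
    }

    // One way out: make the constructor internal as well, and expose
    // the internals to the test assembly (see the answer below).
    // internal EventServiceGateway(UserServiceClient client) { ... }
}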
So I have the feeling that either my design is borked because I totally misunderstood the point of facades (that only the facades and their implementations should be accessible, nothing else), that my project setup is flawed, or that something else is wrong.
I would greatly appreciate it if somebody could enlighten me.

You could use the InternalsVisibleToAttribute: when you apply it to the assembly that contains the internal types, you can name other assemblies that get access they normally wouldn't have.
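For example (the assembly name is a placeholder; if your assemblies are strong-named, the attribute must also include the full public key):

// In the facade assembly (e.g. in Properties/AssemblyInfo.cs):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.Gateways.Tests")]

// The test assembly can now see internal types such as UserServiceClient,
// and internal constructors on the facades.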

Related

How to test a .NET project with Ninject in the proper way

We have a project with a separate business layer: lots of services (classes) in a separate project in the solution. We also use Ninject to manage dependencies.
All classes in the business layer project are internal, and it communicates with "another world" through interfaces.
If we create a new project to contain the tests, it won't see the internal classes (though yes, we can do the hack of making internals visible to the test assembly via an attribute in AssemblyInfo).
What I really need to know is what's necessary to test:
We can create a test environment for everything and test only through the exposed interfaces (there is no "clean" DAL; we are using LINQ to SQL, but it can be mocked).
This way looks good, because we know nothing about the internal structure of the business layer and test only the "contract" functionality. The bad side is that the system has lots of options, settings, and bindings, and it seems impossible, or at least pretty hard, to check all possible variants of them.
We can place the tests in the same project, or set the attribute that lets internals be seen as public, so we'd be able to test the internal classes. That's good because we can test almost everything, but it's hard to control the bindings: it would be nice for Ninject to manage them, with each concrete test overriding only the bindings it needs.
It's also not clear how to test classes implementing the same interface (and doing similar things). For example, we have a few implementations of Cache, but each implementation keeps its data in a different place (MS SQL, a key-value DB, the ASP.NET cache, etc.), so the tests for each implementation would actually be the same.
As you say, you need to have access to the classes in order to test them. So make accessible from the outside only the things that are exposed through interfaces.
Write your tests only against the behaviour that is exposed to the outside, the "another world" as you call it.
Write the more generic test cases first and then go into details as needed.
As this will be an ongoing process alongside the development and change of the actual functionality, you'll be able to decide how many fine-grained scenarios you actually need.
Also take a look at the Ninject MockingKernel extension: https://github.com/ninject/ninject.mockingkernel
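A minimal sketch of what that extension gives you, assuming the Moq flavor (Ninject.MockingKernel.Moq); the service name here is invented:

using Moq;
using Ninject;
using Ninject.MockingKernel.Moq;

public interface IBillingService
{
    void Charge(int customerId, decimal amount);
}

public class BillingTests
{
    public void ChargesTheCustomer()
    {
        // The mocking kernel resolves unbound interfaces as mocks,
        // so a test only overrides the bindings it cares about.
        var kernel = new MoqMockingKernel();
        var billingMock = kernel.GetMock<IBillingService>();
        billingMock.Setup(b => b.Charge(42, 9.99m));

        kernel.Get<IBillingService>().Charge(42, 9.99m);

        billingMock.VerifyAll();
    }
}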

IoC and "hiding implementation details"

I implemented DI in my project through constructor injection, and now the composition root is where all resolving takes place (that is, in the web project). My question is whether the idea of creating an additional project that just handles the resolving is insane.
The reasoning behind this is that while I would still have the implementation assemblies in the build directory (they would still be referenced by the "proxy" project), I wouldn't need to reference them at the web project level, which in turn would mean that the implementations of these interfaces wouldn't be accessible from anywhere other than where they're implemented (unless explicitly referenced, which would quickly pinpoint that something is wrong: you don't want to be doing this).
Is this a purposeless effort likely to become error prone or is it a reasonable thing to do?
There are pros and cons to this. As BrokenGlass said, it is a litmus test; on the flip side, you really have to be careful to deploy all of the assemblies. Since the dependencies of the included libraries are not copied into the web app's bin folder, you'll need to ensure they aren't missed, although you would notice this on first run and the fix would ideally be easy.
This is indeed a matter of personal preference. For ease I like to reference everything from the web app, but again, the separate project can ensure those dependencies don't leak into the web app. However, if your project is organized in such a way that your controllers always inject what they require, the chances of that happening are lower. For example, if you take IContext in every controller, then you are less likely to write using (var context = new Context()) in your app, since the standard has been set.
This is not insane at all - it is a very good litmus test to make sure no dependencies have sneaked in and very useful as such. This would only work though if your abstractions / interfaces are defined in a different assembly than the concrete classes that implement those interfaces.
Having said that, personally I have always kept the composition root within the main web app assembly. There is extra effort involved in this extra assembly, and since I for the most part only inject interfaces, I am not too worried about it; my main concern is really testability. There might be projects, though, for which this is a worthwhile approach.
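A rough sketch of the layout being discussed (all names invented): the web project references only the abstractions assembly and the composition-root assembly, while the implementations stay hidden behind the latter:

// MyApp.Abstractions (referenced by the web project)
public interface IOrderService { void Place(int orderId); }
public interface IOrderRepository { void Save(int orderId); }

// MyApp.Implementations (NOT referenced by the web project)
public class SqlOrderRepository : IOrderRepository
{
    public void Save(int orderId) { /* write to the database */ }
}

public class OrderService : IOrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void Place(int orderId)
    {
        repository.Save(orderId);
    }
}

// MyApp.CompositionRoot - the only project referencing MyApp.Implementations.
// The web project calls this and never sees the concrete types.
public static class CompositionRoot
{
    public static IOrderService CreateOrderService()
    {
        return new OrderService(new SqlOrderRepository());
    }
}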
You could do some post-build processing to ensure the implementation doesn't leak out.

IOC and interfaces

I have a project structure like so:
CentralRepository.BL
CentralRepository.BO
CentralRepository.DataAccess
CentralRepository.Tests
CentralRepository.Webservices
and there are an awful lot of dependencies between these. I want to leverage Unity to reduce the dependencies, so I'm going to create interfaces for my classes. My question is which project the interfaces should reside in. My thought is that they should be in the BO layer. Can someone give me some guidance on this, please?
On a combinatorial level, you have three options:
Define interfaces in a separate library
Define interfaces together with their consumers
Define interfaces together with their implementers
However, the last option is a really bad idea because it tightly couples the interface to the implementer (or the other way around). Since the whole point of introducing an interface in the first place is to reduce coupling, nothing is gained by doing that.
Defining the interface together with the consumer is often sufficient, and personally I only take the extra step of defining the interface in a separate library when disparate consumers are in play (which mostly tends to happen if you're shipping a public API).
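A sketch of the "interface with its consumer" option (invented types): the consuming layer owns the abstraction, and the implementing layer references the consumer rather than the other way around:

// CentralRepository.BL - the consumer defines the contract it needs:
public interface ICustomerSource
{
    string GetCustomerName(int id);
}

public class GreetingService
{
    private readonly ICustomerSource customers;

    public GreetingService(ICustomerSource customers)
    {
        this.customers = customers;
    }

    public string Greet(int id)
    {
        return "Hello, " + customers.GetCustomerName(id);
    }
}

// CentralRepository.DataAccess references CentralRepository.BL
// and supplies an implementation:
public class DbCustomerSource : ICustomerSource
{
    public string GetCustomerName(int id)
    {
        return "customer " + id; // stand-in for a real query
    }
}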
BO is essentially your domain objects, or at least that is my assumption. In general, unless you are using a pattern like ActiveRecord, they are state objects only. An interface, on the other hand, specifies behavior. By many "best practices", mixing behavior and state is not a good idea. Now I will likely ramble a bit, but I think the background may help.
Now, to the question of where interfaces should exist. There are a couple of choices.
Stick the interfaces in the library they belong to.
Create a separate contract library
The simpler option is to stick them in the same library, but then your mocks, as well as your tests, rely on that library. Not a huge deal, but it has a slight odor to it.
My normal method is to set up projects like this:
{company}.{program/project}.{concern (optional)}.{area}.{subarea (optional)}
The first two to three bits of the name are covered in yours by the word "CentralRepository". In my case it would be MyCompany.CentralRepository or MyCompany.MyProgram.CentralRepository, but naming convention is not the core part of this post.
The "area" portions are the thrust of this post, and I generally use the following.
Set up a domain object library (your BO): CentralRepository.Domain.Models
Set up a domain exception library: CentralRepository.Domain.Exceptions
All/most other projects reference the above two, as they represent the state in the application. Certainly ALL business libraries use these objects. The persistence libraries may have a different model, and I may have a view model in the experience libraries.
Set up the core library next: CentralRepository.Core (may have subareas?). This is where the business logic lives (the actual application, as persistence and experience changes should not affect core functionality).
Set up a test library for core: CentralRepository.Core.Test.Unit.VS (I have Unit.VS to show these are unit tests, not integration tests with a unit test library, and I am using VS to indicate MSTest - others will have different naming).
Create tests and then set up business functionality. As needed, set up interfaces. Example:
You need data from a DAL, so an interface and a mock are set up for the Core tests to use. The name here would be something like CentralRepository.Persist.Contracts (you may also use a subarea if there are multiple types of persistence). A sketch follows.
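For instance (hypothetical types, following the naming scheme above):

// CentralRepository.Domain.Models
public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// CentralRepository.Persist.Contracts - the interface Core depends on:
public interface IWidgetRepository
{
    Widget GetWidget(int id);
}

// In the Core unit tests, a hand-rolled mock replaces real persistence:
public class FakeWidgetRepository : IWidgetRepository
{
    public Widget GetWidget(int id)
    {
        return new Widget { Id = id, Name = "test widget" };
    }
}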
The core concept here is "Core as Application" rather than n-tier (they are compatible, but thinking of business logic only, as a paradigm, keeps you loosely coupled with persistence and experience).
Now, back to your question. The way I set up interfaces is based on the location of the "interfaced" classes. So, I would likely have:
CentralRepository.Core.Contracts
CentralRepository.Experience.Service.Contracts
CentralRepository.Persist.Service.Contracts
CentralRepository.Persist.Data.Contracts
I am still working with this, but the core concept is my IoC and testing should both be considered and I should be able to isolate testing, which is better achieved if I can isolate the contracts (interfaces). Logical separation is fine (single library), but I don't generally head that way due to having at least a couple of green developers who find it difficult to see logical separation without physical separation. Your mileage may vary. :-0
Hope this rambling helps in some way.
I would suggest keeping interfaces wherever their implementers are in the majority of cases, if you're talking assemblies.
Personally, when I'm using a layered approach, I tend to give each layer its own assembly and give it a reference to the layer below it. In each layer, most of the public things are interfaces. So, in the data access layer, I might have ICustomerDao and IOrderDao as public interfaces. I'll also have public DAO factories in the DAO assembly. I'll then have specific implementations marked as internal -- CustomerDaoMySqlImpl or CustomerDaoXmlImpl -- that implement the public interface. The public factory then provides implementations to users (i.e. the domain layer) without the users knowing exactly which implementation they're getting -- they provide information to the factory, and the factory turns around and hands them an ICustomerDao that they use.
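In code, that shape looks roughly like this (simplified; the MySQL and XML bodies are stubbed out):

public class Customer { /* state only */ }

// Public contract - the only thing the domain layer sees:
public interface ICustomerDao
{
    Customer FindById(int id);
}

// Public factory in the DAO assembly; callers never learn
// which implementation they were handed:
public static class CustomerDaoFactory
{
    public static ICustomerDao Create(string backingStore)
    {
        return backingStore == "xml"
            ? (ICustomerDao)new CustomerDaoXmlImpl()
            : new CustomerDaoMySqlImpl();
    }
}

// Specific implementations stay internal to the assembly:
internal class CustomerDaoMySqlImpl : ICustomerDao
{
    public Customer FindById(int id) { /* query MySQL */ return new Customer(); }
}

internal class CustomerDaoXmlImpl : ICustomerDao
{
    public Customer FindById(int id) { /* read the XML store */ return new Customer(); }
}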
The reason I mention all this is to lay the foundation for understanding what interfaces are really supposed to be -- contracts between the servicer and client of an API. As such, from a dependency standpoint, you want to define the contract generally where the servicer is. If you define it elsewhere, you're potentially not really managing your dependencies with interfaces and instead just introducing a non-useful layer of indirection.
So anyway, I'd say think of your interfaces as what they are -- a contract to your clients as to what you're going to provide, while keeping private the details of how you're going to provide it. That's probably a good heuristic that will make it more intuitive where to put the interfaces.

Tramp Data vs. Testability

I'm not doing much new development right now, but a lot of refactoring of older C# subsystems whose original requirements no longer cover new, and I'll add unexpected, requirements. I'm also now using Rhino Mocks and unit tests where possible (VS 2008).
The dilemma for me is that to make the methods testable and mockable, I need to define clear "contracts" using interfaces. However, if I do this, a lot of the global data that many of the classes use turns into tramp data, passed from method to method until it gets to its intended user; this looks ugly, and is against my sensibilities, but ... can be mocked. Making a mixed-bag class with a lot of static global properties is a more attractive option, but it isn't testable with Rhino. Is there a middle ground between the two? Testable but not too trampy? A pattern, perhaps?
You should also understand that these applications run on an in-house corporate developed platform, so there are a lot of helper classes and services that are instantiated once per application, and then are used throughout the application, for example a database accessor helper class. Another example is using configuration files that are read once, and used throughout the application by different methods for various reasons.
Your thoughts appreciated.
What you might want to look at here is some form of the Service Locator Pattern. Make them classes find their own tramps.
Some other reasonable options would include wrapping up the bulk of the commonly used stuff in an "application context" class of some sort.
You also might wish to look into dependency injection if you haven't done so yet.
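A sketch of the "application context" idea combined with constructor injection (all names invented): bundle the long-lived helpers behind one interface, so a class takes a single mockable dependency instead of trailing tramp data through every call:

public interface IDbAccessor { /* the once-per-application DB helper */ }

public interface IConfigStore
{
    string Get(string key);
}

public interface IAppContext
{
    IDbAccessor Database { get; }
    IConfigStore Config { get; }
}

public class ReportBuilder
{
    private readonly IAppContext context;

    // One constructor argument replaces the tramp parameters,
    // and Rhino Mocks can stub IAppContext in tests.
    public ReportBuilder(IAppContext context)
    {
        this.context = context;
    }

    public string Title()
    {
        return context.Config.Get("report.title");
    }
}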

Can anyone explain to me, at length, how to use IOC containers?

I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big - I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.
Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that...it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible not only for creating and injecting dependencies, but also for managing their lifetimes, determining their lifestyle (more on this in a sec), and offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice...it's solidly written, very functional, and extensible. Other IoC containers exist that are simpler if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations. If you follow the classic approach and create your dependencies directly within the class that has them, then that class is forced to break two very important rules of software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned with both its intrinsic purpose and with determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, the dependencies its dependencies may need change, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or possibly achieve them where dependencies previously made that impossible). Since an external entity, the IoC container, now controls how your dependencies are created and injected, you also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the lifestyles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which has little or no state (say, factories), you can give it a pooled lifestyle, which tells the container to maintain an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
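As an illustration, with Castle Windsor the fluent registration might look like this (a sketch; the component types are placeholders, and the exact API varies between Windsor versions):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IParserFactory { }
public class ParserFactory : IParserFactory { }

public interface IEventService { }
public class EventService : IEventService { }

public static class Bootstrap
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            // Frequently created, nearly stateless: pool it.
            Component.For<IParserFactory>()
                     .ImplementedBy<ParserFactory>()
                     .LifestylePooled(initialSize: 5, maxSize: 20),
            // One instance for the whole application.
            Component.For<IEventService>()
                     .ImplementedBy<EventService>()
                     .LifestyleSingleton());
        return container;
    }
}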
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit-testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing as the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class is usually, if not always, reduced to a single responsibility of properly separated concern, with fully mockable dependencies.
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)
If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you may do the following:
public interface IMyInterface
{
    // ...
}

// A hypothetical implementation, purely for illustration:
public class SomeImplementation : IMyInterface { }

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param helps the factory decide which object to return
        // (only an example; there may not be any parameter at all)
        if (param == "some param")
            return new SomeImplementation();
        throw new ArgumentException("No implementation registered for: " + param);
    }
}

// ...
// You do not depend on a particular implementation here
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC container to retrieve the objects if you like, but you'll have to register each of the classes that implement the given interface and associate them with keys (and use those keys as the parameter to the GetObject() method); see the sketch below.
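With Castle Windsor, for instance, the keyed registrations inside such a factory could look like this (a sketch; FirstImpl and SecondImpl are placeholders):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class FirstImpl : IMyInterface { }
public class SecondImpl : IMyInterface { }

public static class KeyedFactory
{
    private static readonly IWindsorContainer Container = new WindsorContainer();

    static KeyedFactory()
    {
        Container.Register(
            Component.For<IMyInterface>().ImplementedBy<FirstImpl>().Named("first"),
            Component.For<IMyInterface>().ImplementedBy<SecondImpl>().Named("second"));
    }

    public static IMyInterface GetObject(string key)
    {
        // The key chooses among the registered implementations.
        return Container.Resolve<IMyInterface>(key);
    }
}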
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInteface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object is needed to get several objects of different types, and with no keys (the interfaces themselves are the keys). If you need one object to hand out several different objects that all implement the same interface, an IoC container won't help you very much.
In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add logging to, or take it away from, any number of my classes without adding any sort of trace or logging statement. The ability to weave additional functionality into my classes without changing those classes is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
Castle, at least, is so helpful when wiring up classes to dependencies that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference not set to an instance of an object.
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future that changes my mind on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.
Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx
I have no links but can provide you with an example:
You have a web controller that needs to call a service which has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets this dependency up, and then recompile / test / deploy. But if you were using an IoC container, you would just change the class in the configuration and restart the application.
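A bare-bones version of that idea, without any particular container (the type names and config key are invented):

using System;
using System.Configuration;

public interface IDao { }
public class SqlDao : IDao { }

public static class DaoLoader
{
    public static IDao Load()
    {
        // App.config holds e.g.:
        //   <add key="daoType" value="MyApp.SqlDao, MyApp" />
        // Swapping the DAO means editing that value and restarting,
        // not recompiling.
        string typeName = ConfigurationManager.AppSettings["daoType"];
        return (IDao)Activator.CreateInstance(Type.GetType(typeName));
    }
}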
Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.
IoC containers usually do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a complete list of services besides an IoC container: dynamic proxies, transaction management, and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests is easier. You just write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to our code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's as if every technical aspect of our application is handled by an application framework, and we have time to think about what customers really want.
