Tramp Data vs. Testability - C#

I'm not doing much new development right now, but a lot of refactoring of older C# subsystems whose original requirements no longer cover new (and, I'll add, unexpected) requirements. I'm also now using Rhino Mocks and unit tests where possible (VS 2008).
The dilemma for me is that to make the methods testable and mockable, I need to define clear "contracts" using interfaces. However, if I do this, a lot of the global data that many of the classes use turns into tramp data, passed from method to method until it reaches its intended user. This looks ugly and goes against my sensibilities, but ... it can be mocked. Making a mixed-bag class with a lot of static global properties is a more attractive option, but it is not Rhino-testable. Is there a middle ground between the two? Testable but not too trampy? A pattern, perhaps?
You should also understand that these applications run on an in-house corporate-developed platform, so there are a lot of helper classes and services that are instantiated once per application and then used throughout it, for example a database accessor helper class. Another example is configuration files that are read once and then used throughout the application by different methods for various reasons.
Your thoughts appreciated.

What you might want to look at here is some form of the Service Locator pattern. Make the classes find their own tramps.
Some other reasonable options would include wrapping up the bulk of the commonly used stuff in an "application context" class of some sort.
You also might wish to look into dependency injection if you haven't done so yet.
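One middle ground for the once-per-application helpers described above is the "application context" idea: bundle the shared services behind interfaces in a single object that is injected where needed, so the globals travel as one constructor parameter instead of many tramp arguments. A minimal sketch; all the names here (IDatabaseAccessor, IAppContext, ReportBuilder) are hypothetical, not from the original code:

```csharp
using System;

// Hypothetical seam: once-per-application helpers behind small interfaces.
public interface IDatabaseAccessor { string Query(string sql); }
public interface IConfigProvider { string Get(string key); }

// The "application context" bundles them into one injectable object.
public interface IAppContext
{
    IDatabaseAccessor Database { get; }
    IConfigProvider Config { get; }
}

public class DefaultAppContext : IAppContext
{
    private readonly IDatabaseAccessor _database;
    private readonly IConfigProvider _config;

    public DefaultAppContext(IDatabaseAccessor database, IConfigProvider config)
    {
        _database = database;
        _config = config;
    }

    public IDatabaseAccessor Database { get { return _database; } }
    public IConfigProvider Config { get { return _config; } }
}

// A consumer depends only on the context interface, so a mocking
// framework (Rhino Mocks, Moq, ...) can stub the whole thing.
public class ReportBuilder
{
    private readonly IAppContext _ctx;
    public ReportBuilder(IAppContext ctx) { _ctx = ctx; }

    public string Build()
    {
        // One parameter in, yet both "globals" are reachable and mockable.
        return _ctx.Database.Query(_ctx.Config.Get("ReportSql"));
    }
}
```

One constructor parameter is not trampy, and each test can hand in a context wired with stubs.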

Related

why do we require interfaces between UI,Business and Data access in C#

I have seen in many places that when C# programmers use 3-tier architecture, they tend to use interfaces between each layer. For example, if the solution is like
SampleUI
Sample.Business.Interface
Sample.Business
Sample.DataAccess.Interface
Sample.DataAccess
Here UI calls the business layer through the interface and business calls the data access in the same fashion.
If this approach is meant to reduce the dependency between the layers, that is already in place with separate class libraries, without the additional use of interfaces.
The code sample is below,
Sample.Business
public class SampleBusiness {
    ISampleDataAccess dataAccess = Factory.GetInstance<SampleDataAccess>();

    public string GetData() {
        return dataAccess.GetSampleData();
    }
}
Sample.DataAccess.Interface
public interface ISampleDataAccess {
    string GetSampleData();
}
Sample.DataAccess
public class SampleDataAccess : ISampleDataAccess {
    public string GetSampleData() {
        return data; // data from database
    }
}
Does this interface in between actually do any great job?
What if I use new SampleDataAccess().GetSampleData() and remove the interface class library completely?
Code Contract
There is one remarkable advantage of using interfaces as part of the design process: It is a contract.
Interfaces are specifications of contracts in the sense that:
If I use (consume) the interface, I am limiting myself to what the interface exposes. Well, unless I want to play dirty (reflection, et al.), that is.
If I implement the interface, I am limiting myself to provide what the interface exposes.
Doing things this way has the advantage that it eases dividing work in the development team among layers. It allows the developers of one layer to provide an interface (cough, cough) that the next layer can use to communicate with it… even before that interface has been implemented.
Once they have agreed on the interface - at least on a minimum viable interface - they can start developing the layers in parallel, knowing that the other team will uphold their part of the contract.
Mocking
A side effect of using interfaces this way is that it allows you to mock the implementation of a component, which eases the creation of unit tests. This way you can test the implementation of a layer in isolation, so you can distinguish with ease when a layer is failing because it has a defect and when it is failing because the layer below it has a defect.
For projects that are developed by a single individual - or by a group that doesn't bother much with drawing clear lines to separate work - the ability to mock might be the main motivation to implement interfaces.
Consider for example, if you want to test if your presentation layer can handle paging correctly… But you need to request data to fill those pages. It could be the case that:
The layer below is not ready.
The database does not have data to provide yet.
It is failing, and they do not know whether the paging code is incorrect or the defect comes from a point deeper in the code.
Etc…
Either way, the solution is mocking. And mocking is easier if you have interfaces to mock.
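To make that concrete with the question's own types (method names normalized), here is a hand-rolled fake; a framework like Rhino Mocks or Moq would generate one for you, but the principle is the same:

```csharp
using System;

public interface ISampleDataAccess
{
    string GetSampleData();
}

// Hand-rolled fake: fixed data instead of a database round-trip.
public class FakeSampleDataAccess : ISampleDataAccess
{
    public string GetSampleData() { return "canned row"; }
}

public class SampleBusiness
{
    private readonly ISampleDataAccess _dataAccess;

    // The business layer receives the interface; it never news up the DAL.
    public SampleBusiness(ISampleDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public string Describe()
    {
        return "Data: " + _dataAccess.GetSampleData();
    }
}
```

`new SampleBusiness(new FakeSampleDataAccess()).Describe()` then exercises the business layer with no database (and no lower layer) in sight.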
Changing the implementation
If - for whatever reason - some of the developers decide they want to change the implementation of their layer, they can do so trusting the contract imposed by the interface. This way, they can swap implementations without having to change the code of the other layers.
What reason?
Perhaps they want to test a new technology. In this case, they will probably create an alternative implementation as an experiment. In addition, they will want to have both versions working so they can test which one works better.
Addendum: Not only for testing both versions, but also to ease rolling back to the main version. Of course, they might accomplish this with source version control. Because of that, I will not consider rolling back as a motivation to use interfaces. Yet, it might be an advantage for anybody not using version control. For anybody not using it… Start using it!
Or perhaps they need to port the code to a different platform, or a different database engine. In this case, they probably do not want to throw away the old code either… For example, if they have clients that run Windows and SQL Server and others that run Linux and Oracle, it makes sense to maintain both versions.
Of course, in either case, you might want to be able to implement those changes by doing the minimum possible work. Therefore, you do not want to change the layer above to target a different implementation. Instead, you will probably have some form of factory or inversion-of-control container that you can configure to do dependency injection with the implementation you want.
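A minimal factory sketch of that idea (the class names are invented for illustration; a real system would drive the choice from configuration or an IoC container rather than a mutable field):

```csharp
using System;

public interface IDataAccess { string GetSampleData(); }

public class SqlServerDataAccess : IDataAccess
{
    public string GetSampleData() { return "from SQL Server"; }
}

public class OracleDataAccess : IDataAccess
{
    public string GetSampleData() { return "from Oracle"; }
}

public static class DataAccessFactory
{
    // The swappable seam: callers only ever see IDataAccess.
    public static Func<IDataAccess> Create = () => new SqlServerDataAccess();
}
```

Switching platforms is then one line, `DataAccessFactory.Create = () => new OracleDataAccess();`, with no change to the layers above.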
Mitigating change propagation
Of course, they may decide to change the actual interfaces. If the developers working on a layer need something additional on the interface, they can add it (given whatever methodology the team has set up to approve these changes) without messing with the code of the classes the other team is working on. In source version control, this will ease merging changes.
In the end, the purpose of using a layered architecture is separation of concerns, which implies separating the reasons for change… If you need to change the database, your changes should not propagate into code dedicated to presenting information to the user. Sure, the team can accomplish this with concrete classes. Yet interfaces provide a good, evident, well-defined, language-supported barrier to stop the propagation of change. In particular if the team has good rules about responsibility (no, I do not mean code concerns; I mean which developer is responsible for doing what).
You should always use an abstraction of the layer to have the ability
to mock the interfaces in unit tests
to use fake implementations for faster development
to easily develop alternative implementations
to switch between different implementations
...

Design issue: static class only initializes once, breaks unit testing

I have a static Configuration class responsible for data settings for my entire system. It loads certain values from the registry in its constructor, and all of its methods are based on these values. If it cannot get the values from the registry (which is possible if the application hasn't been activated yet), it throws an exception, which translates to a TypeInitializationException, which is fine by me.
I wrote unit tests using NUnit to make sure that Configuration's constructor handles all cases correctly - normal values, blank values, Null value. Each test initializes the registry using the relevant values and then calls some method inside Configuration.
And here's the problem: NUnit has decided to run the Null test first. It clears the registry, initializes Configuration, throws an exception - all is well. But then, because this is a static class whose constructor just failed - it doesn't re-construct the class again for the other tests, and they all fail.
I would have a problem even without the Null test, because Configuration probably (I'm guessing) gets initialized once for all classes that use it.
My question is: Should I use reflection to re-construct the class for each test, or should I re-design this class to check the registry in a property rather than the constructor?
My advice is to re-design your Configuration class a bit. Because your question is theoretical in nature, with a little practical aspect (failure of unit test), I'll provide some link to back-up my ideas.
First, make Configuration a non-static class. Miško Hevery, an engineer at Google, has a very good talk about OO Design for Testability where he specifically calls out global state as a bad design choice, especially if you want to test it.
Your Configuration could accept a RegistryProvider instance through its constructor (I assume you have heard about Dependency Injection principles). RegistryProvider's responsibility would be just reading values from the registry, and that is the only thing it should do. Now when you test Configuration, you will provide a RegistryProvider stub (if you don't know what stubs and mocks are - google it, they are simple in nature), in which you hardcode values for specific registry entries.
Now, benefits:
you have good unit tests, because you don't rely on the registry
you don't have global state (testability)
your tests don't depend on each other (each has a separate Configuration instance)
your tests don't rely on the environment in which they are executed (you may not have permission to access the registry, but you are still able to test your Configuration class)
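A sketch of that redesign (IRegistryProvider, the stub, and the "LicenseKey" value are illustrative names, not the asker's actual code):

```csharp
using System;
using System.Collections.Generic;

public interface IRegistryProvider
{
    string Read(string key); // null when the value is missing
}

public class Configuration
{
    private readonly IRegistryProvider _registry;

    public Configuration(IRegistryProvider registry)
    {
        _registry = registry;
    }

    public bool IsActivated
    {
        get { return _registry.Read("LicenseKey") != null; }
    }
}

// Test stub: a hardcoded "registry" - no permissions, no global state.
public class StubRegistryProvider : IRegistryProvider
{
    private readonly Dictionary<string, string> _values;

    public StubRegistryProvider(Dictionary<string, string> values)
    {
        _values = values;
    }

    public string Read(string key)
    {
        string value;
        return _values.TryGetValue(key, out value) ? value : null;
    }
}
```

Each test constructs its own Configuration around a fresh stub, so the Null-value test can no longer poison the other tests.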
If you feel like you are not quite good at Dependency Injection, I would recommend a marvelous piece of art and engineering, provided to mortal souls by the genius of Mark Seemann, called Dependency Injection in .NET. It is one of the best books I've read about class design, and it is oriented to .NET developers.
To make my answer more concrete:
Should I use reflection to re-construct the class for each test?
No, you should never use reflection in your tests (unless there is no other way). Reflection will make your tests:
fragile
hard to understand
hard to maintain
Use object-oriented practices in conjunction with encapsulation to hide the implementation. Then test only external behavior and don't rely on internal implementation details. This will make your tests depend only on external behavior and not on the internal implementation, which can change a lot.
should I re-design this class to check the registry in a property
rather than the constructor?
Designing your class as described in my answer will let you test your class without accessing the registry at all. This is a cornerstone of unit tests - not relying on external systems (databases, file systems, the web, the registry, etc.). If you want to test whether you can access the registry at all, write separate integration tests, where you write to the registry and read from it.
Now I don't have enough information to tell you whether you should read the registry via RegistryProvider in the Configuration constructor, or lazily read the registry on demand; that's a tricky question. But I can definitely say: try to avoid global state as much as you can, try to minimize dependency on implementation details in your tests (this relates to OO principles as a whole), and try to test your objects in isolation, i.e. without accessing external systems. Then you can mimic exceptional cases; for example, does your class behave as expected when the registry is not available? It is really hard to re-create such a scenario if you access the registry directly via static members of a Configuration class.
Static classes/methods are notoriously hard to unit-test.
(Also notice that what you're currently doing isn't unit testing at all; it's integration testing, since you're changing registry values for your tests.)
I'm afraid you'll have to choose between your class being testable and it being static.
A compromise you could make is to move the 'logical' bits (i.e. validation etc.) to a different, non-static class, which is then called by the main static class.
That non-static class can be then easily tested.
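That compromise can look like this (the class names and the validation rule are invented for illustration): the static class keeps its existing call sites but only delegates, while the logic lives in an instance class that tests construct directly.

```csharp
using System;

// All the 'logical bits' live here; unit tests instantiate this directly.
public class ConfigurationLogic
{
    public bool IsValidKey(string key)
    {
        // Illustrative rule: a key is exactly sixteen non-blank characters.
        return !string.IsNullOrEmpty(key) && key.Trim().Length == 16;
    }
}

// The static class survives for existing callers, but only delegates.
public static class ConfigurationFacade
{
    private static readonly ConfigurationLogic Logic = new ConfigurationLogic();

    public static bool IsValidKey(string key)
    {
        return Logic.IsValidKey(key);
    }
}
```

The static facade never needs testing on its own; it contains no logic to get wrong.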
I would opt for redesign.
Also, having a TypeInitializationException thrown whenever anything goes wrong could confuse the user/developer.
I would suggest adapting your code to use the singleton pattern as a first step.

How to test .net project with Ninject in proper way

We have a project with a separated business layer. It consists of lots of services (classes) in a separate project in the solution. We also use Ninject to manage dependencies.
All classes in the business-layer project are internal, and it communicates with the «outside world» through interfaces.
If we create a new project to contain the tests, it won't see the internal classes (though we can use the InternalsVisibleTo attribute in AssemblyInfo as a workaround).
What I really need to know is what's necessary to test:
We can create a test environment for everything and test only through the produced interfaces (there is no «clean» DAL, we are using LINQ to SQL, but it is possible to mock it).
This way looks good, because we know nothing about the internal business-layer structure and test only the «contract» functionality. The bad side is that the system has lots of options, settings and bindings, and it seems impossible, or pretty hard, to check all the possible variants.
We can place the tests in the same project, or set an attribute to make internals visible as if public, so we'd be able to test the internal classes. That's good because we can test almost everything, but it's hard to control the bindings; it would be nice for Ninject to do it, with us overriding only the bindings we need in a concrete test.
Also it's not clear how to test classes implementing the same interface (and doing similar things). For example, we have a few implementations of Cache, but each implementation keeps its data in a different place (MSSQL, a key-value DB, the ASP cache, etc.), so the tests for each implementation would actually be the same.
As you say, you need to have access to the classes in order to test them. So make only the internals that are exposed to the outside through interfaces accessible from the outside.
Write your tests only against the behaviour that is exposed to the outside, "another world" as you call it.
Write the more generic test cases first and then go into details as needed.
As this will be an ongoing process together with the development/change of the actual functionality, you'll then be able to decide how many fine-grained scenarios you actually need.
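For the "same tests for each Cache implementation" part of the question, a shared contract fixture avoids duplicating the tests: each implementation gets a tiny subclass that only supplies the instance under test. ICache and InMemoryCache below are illustrative; with NUnit the base class would carry [TestFixture] and the method [Test] (shown as comments so the sketch stands alone without the NUnit package):

```csharp
using System;
using System.Collections.Generic;

public interface ICache
{
    void Put(string key, object value);
    object Get(string key); // null when absent
}

// [TestFixture] - with NUnit, mark the abstract base; subclasses inherit the tests.
public abstract class CacheContractTests
{
    protected abstract ICache CreateCache();

    // [Test]
    public void Stored_value_can_be_read_back()
    {
        ICache cache = CreateCache();
        cache.Put("k", 42);
        if (!42.Equals(cache.Get("k"))) throw new Exception("stored value lost");
    }
}

// One concrete implementation for the sketch.
public class InMemoryCache : ICache
{
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();
    public void Put(string key, object value) { _store[key] = value; }
    public object Get(string key) { object v; _store.TryGetValue(key, out v); return v; }
}

// One subclass per implementation (MSSQL, key-value DB, ASP cache, ...):
public class InMemoryCacheTests : CacheContractTests
{
    protected override ICache CreateCache() { return new InMemoryCache(); }
}
```

Each new Cache backend then costs one small subclass, not a copy of the whole test suite.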
Also take a look at the Ninject Mocking Kernel extension: https://github.com/ninject/ninject.mockingkernel
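For overriding only the bindings a given test needs, Ninject's kernel supports rebinding: load the production modules, then replace a single binding with a test double. ISampleService and the implementations below are illustrative, and the Ninject calls are shown in comments so the sketch compiles without the package:

```csharp
using System;

public interface ISampleService { string Ping(); }

public class RealService : ISampleService
{
    public string Ping() { return "real"; }
}

public class FakeService : ISampleService
{
    public string Ping() { return "fake"; }
}

// Production wiring, inside a NinjectModule's Load():
//     Bind<ISampleService>().To<RealService>();
//
// In a test, keep the production modules and override just one binding:
//     var kernel = new StandardKernel(new ProductionModule());
//     kernel.Rebind<ISampleService>().ToConstant(new FakeService());
//     kernel.Get<ISampleService>().Ping();   // resolves the fake
```

The rest of the graph is still wired exactly as in production, which is what makes this useful for testing "lots of options, settings and bindings".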

Simplest, fastest way to break out all dependencies from a class

When working with legacy code, and trying to create tests, I often break out dependencies from classes or methods so I can write unit tests using mocks for these dependencies. Dependencies most often come in the form of calls to static classes and objects created using the new keyword in the constructor or other locations in that class.
In most cases, static calls are handled either by wrapping the static dependency, or, if it is a singleton pattern (or similar) in the form of StaticClass.Current.MethodCall(), by passing that dependency by its interface to the constructor instead.
In most cases, uses of the new keyword in the constructor is simply replaced by passing that interface in the constructor instead.
In most cases, uses of the new keyword in other parts of the class are handled either by the same method as above or, if needed, by creating a factory and passing the factory's interface in the constructor.
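The break-outs described above, in miniature (all names invented for illustration): a `new PaymentGateway()` inside a method becomes an injected factory, and an `AuditLog.Current.Write(...)`-style static call becomes an injected interface.

```csharp
using System;
using System.Collections.Generic;

public interface IPaymentGateway { void Charge(decimal amount); }
public interface IPaymentGatewayFactory { IPaymentGateway Create(); }
public interface IAuditLog { void Write(string message); }

public class OrderProcessor
{
    private readonly IPaymentGatewayFactory _factory;
    private readonly IAuditLog _audit;

    // Both formerly hidden dependencies now arrive through the constructor.
    public OrderProcessor(IPaymentGatewayFactory factory, IAuditLog audit)
    {
        _factory = factory;
        _audit = audit;
    }

    public void Process(decimal amount)
    {
        _audit.Write("processing " + amount);  // was: AuditLog.Current.Write(...)
        _factory.Create().Charge(amount);      // was: new PaymentGateway().Charge(...)
    }
}
```

A test then passes in a spy factory and a list-backed audit log and asserts on what the class did, with no static state involved.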
I always use ReSharper's refactoring tools to help me with all of these break-outs; however, most of it is still manual labour (which could be automated), and for some legacy classes and methods that can be a very, very tedious process. Are there any other refactoring plugins and/or tools which would help me in this process? Is there a "break out all dependencies from this class in a single click" refactoring tool? =)
It sounds to me like all these steps are common to many developers and a common problem, and before I attempt writing a plugin for ReSharper or CodeRush, I have to ask, because someone has probably already attempted this...
ADDED:
In response to the answers below: even if you might not want to break out everything at once (a one-click total break-out might cause more problems than it helps), being able to easily break out one method's dependencies, or just one or two dependencies, would make a big difference.
Also, refactoring code has a measure of "try and see what happens, just to learn how everything fits together", and a one-click total break-out would help that process tremendously, even if you don't check that code in...
I don't think there is any tool that can automate this for you. Working with legacy code means - as you know - changing code in small steps. The steps are often deliberately small to prevent errors from being made. Usually the first change you should make is one that makes the code testable. After you've written the tests, you change that part of the code in such a way that you fix the bug or implement the RFC.
Because you should take small steps I believe it is hard to use a refactoring tool to magically make all your dependencies disappear. With legacy systems you would hardly ever want to make big changes at once, because the risk of breaking (and not finding out because of the lack of tests) is too big. This however, doesn’t mean refactoring tools aren’t useful in this scenario. On the contrary; they help a lot.
If you haven't already, I'd advise you to read Michael Feathers' book Working Effectively with Legacy Code. It describes in great details a series of patterns that help you refactor legacy code to a more testable system.
Good luck.
When it comes to static call dependencies, you might want to check out Moles. It's able to do code injection at run-time to stub out any static or non-virtual method call with your own test implementation. This is handy for testing legacy code that wasn't designed using testable dependency-injected interfaces.

Is it a good practice to create wrapper over 3rd party components like MS enterprise Library or Log4net?

This is more of a good-practice question. I want to offer different generic libraries like logging, caching etc. There are lots of third-party libraries like MS Enterprise Library, log4net, NCache etc. for these.
I wanted to know if it is good practice to use these directly, or to create a wrapper over each service and use DI to inject that service into the code.
Regards
This is subjective, but also depends on the library.
For instance, log4net and NHibernate have strictly interface-based APIs. There is no need to wrap them. Other libraries will make your classes hard to test if you don't have any interfaces in between; there it might be advisable to write a clean interface.
It is sometimes good advice to let only a small part of the code access APIs like NHibernate or a GUI library. (Note: this is not wrapping; this is building abstraction layers.) On the other hand, it does not make sense to restrict access to log4net calls; logging will be used in the whole application.
log4net is probably a good example where wrapping is just overkill. Some people suffer from "wrappitis", which is an anti-pattern (sometimes referred to as "wrapping wool in cotton") and just produces more work. log4net has such a simple API and is so highly customizable that they made it better than your wrapper most probably will be.
You will find out that wrapping a library will not allow you to just exchange it for another product. Upgrading to newer versions will not be easier either; rather, you will need to update your wrapper for nothing.
If you want to be able to swap implementations of those concepts, creating a wrapper is the way to go.
For logging there already is something like that available Common.Logging.
Using wrapping interfaces does indeed make unit testing much easier, but what's equally important, it makes it possible to use mocks.
As an example, the PRISM framework for Silverlight and WPF defines an ILoggerFacade interface with a simple method named Log. Using this concept, here's how I define a mocked logger (using Moq) in my unit tests:
var loggerMock = new Mock<ILoggerFacade>(MockBehavior.Loose);
loggerMock.Setup(lg => lg.Log(It.IsAny<string>(), It.IsAny<Category>(), It.IsAny<Priority>()))
.Callback((string s, Category c, Priority p) => Debug.Write(string.Format("**** {0}", s)));
Later you can pass loggerMock.Object to the tested object via constructor or property, or configure a dependency injector that uses it.
It sounds like you are thinking of wrapping the logging implementation and then sharing with different teams. Based on that here are some pros and cons.
Advantages of Wrapping
Can abstract away interfaces and dependencies from the implementation. This provides some measure of protection against breaking changes in the implementation library.
Can make standards enforcement easier and align different projects' implementations.
Disadvantages of Wrapping
Additional development work.
Potential additional documentation work (how to use the new library).
More likely to have bugs in the wrapping code than in the mature library. (Deploying your bug fixes can be a big headache!)
Developers need to learn the new library (even if it is very simple).
Can sometimes be difficult to wrap an entire library to avoid leaking implementation interfaces. These types of wrapper classes usually offer no value other than obfuscation, e.g. a MyDbCommand class that wraps some other DbCommand class.
I've wrapped part of Enterprise Library before and I didn't think it added much value. I think you would be better off:
Documenting the best practices and usage
Providing a reference implementation
Verifying compliance (code reviews etc.)
This is more of a subjective question, but IMO it's a good thing to wrap any application/library-specific usage into a service-model design with well-thought-out interfaces, so you can easily use DI and, if you ever need to switch from, say, the EntLib Data Application Block to NHibernate, you don't need to re-architect your whole application.
I generally create a "helper" or "service" class that can be called statically to wrapper common functionality of these libraries.
I don't know that there is a tremendous amount of risk in directly referencing/calling these, since they are definitely mature projects (at least EntLib and log4net), but having a wrapper isolates you from the churn of version changes, etc., and gives you more options in the future, at a fairly low cost.
I think it's better to use a wrapper, personally, simply because these are things you don't want to be running when your unit tests run (assuming you're unit testing also).
Yes if being able to replace the implementation is a requirement now or in a reasonable future.
No otherwise.
Defining the interface that your application will use for all logging/enterprising/... purposes is the core work here. Writing the wrapper is merely a way to make the compiler enforce use of this interface rather than the actual implementation.
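A sketch of such an application-owned interface and one implementation (the names are invented; a log4net-backed class would implement the same interface by delegating to log4net's ILog, so swapping libraries touches a single class rather than the whole application):

```csharp
using System;

// The application's own logging contract - the only thing callers see.
public interface IAppLogger
{
    void Info(string message);
    void Error(string message, Exception error);
}

// One implementation; console-backed so the sketch stands alone.
public class ConsoleLogger : IAppLogger
{
    public void Info(string message)
    {
        Console.WriteLine("INFO  " + message);
    }

    public void Error(string message, Exception error)
    {
        Console.WriteLine("ERROR " + message + ": " + error.Message);
    }
}
```

Unit tests can then inject a no-op or recording IAppLogger, and the compiler enforces that production code never calls the library directly.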
