Is it a test smell to mix in real implementation and mocks? - c#

I have a consumer class responsible for consuming a string and deciding what to do with it. It can either parse the email and insert the parsed data into a database, or notify an administrator.
Below is my implementation.
public void Consume(string email)
{
    if (_emailValidator.IsLocate(email))
    {
        var parsedLocate = _parser.Parse(email);
        // Insert locate in database
    }
    else if (_emailValidator.IsGoodNightCall(email))
    {
        // Notify the administrator that a locate email requires attention.
        _notifier.Notify();
    }
}
Below is my unit test.
// Arrange
var validator = new EmailValidator();
var parser = new Mock<IParser>();
var notifier = new Mock<INotifier>();
var consumer = new LocateConsumer(validator, parser.Object, notifier.Object);
var email = EmailLiterals.Locate;
// Act
consumer.Consume(email);
// Assert
parser.Verify(x => x.Parse(email), Times.Once());
Is it a code smell to mix mocks and real implementations in unit tests? Also, what about always having to verify that method abc() ran exactly once? It doesn't seem right that I have to add a new unit test every time I add a call inside my if block. It seems like if I continue adding to my Consume method I'm creating a trap.
Thank you.

To be nitpicking, a unit test is an automated test that tests a unit in isolation. If you combine two or more units, it's not a unit test any more, it's an integration test.
However, depending on the type of units you integrate, having lots of that type of integration tests may be quite okay.
Krzysztof Kozmic recently wrote a blog post about this where he describes how Castle Windsor has very few unit tests, but lots of integration tests. AutoFixture also has a large share of those types of integration tests. I think the most important point is that as a general rule the integration must not cross library boundaries.
In any case you can view the actual implementation as an extreme end of the Test Double Continuum, so just as there are scenarios where it makes sense to use Stubs, Mocks, Spies, or Fakes, there are also scenarios where the actual implementation may make sense.
However, just keep in mind that you are no longer testing the unit in isolation, so you do introduce a coupling between the units that makes it more difficult to vary each independently.
To conclude, I still consider it a smell because it should always be an occasion to stop and think. However, a smell indicates no more than that, and sometimes, once you've thought it over, you can decide to move along.
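For comparison, here is a rough sketch of what the fully isolated test might look like. It assumes the consumer is changed to depend on an IEmailValidator interface rather than the concrete EmailValidator (that interface is my assumption, not code from the question):

// Arrange: every collaborator is a test double, so only LocateConsumer's own logic is exercised.
var email = EmailLiterals.Locate;
var validator = new Mock<IEmailValidator>();            // assumed interface, extracted from EmailValidator
validator.Setup(v => v.IsLocate(email)).Returns(true);  // drive the Parse branch without real validation logic
var parser = new Mock<IParser>();
var notifier = new Mock<INotifier>();
var consumer = new LocateConsumer(validator.Object, parser.Object, notifier.Object);
// Act
consumer.Consume(email);
// Assert
parser.Verify(x => x.Parse(email), Times.Once());
notifier.Verify(x => x.Notify(), Times.Never());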

I would say a strong yes. Unit testing should be free of dependencies among components.

> Is it a test smell to mix in real implementation and mocks?
This is an integration test (combining two or more modules) and not a unit test (testing one module in isolation).
My answer is no: I think it is OK to have mocks in an integration test.

Related

Dependency between two unit tests in c#

Suppose that we have two unit tests that are dependent on each other. TestA depends on TestB. Now we want to change the code so when we run TestA, TestB will automatically be run.
[TestMethod]
public void TestA()
{
    string id = "123456789";
    NewUser user = new NewUser();
    Boolean idCheck = user.checkID(id);
    Assert.IsFalse(idCheck);
}

[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:1776/Login.aspx")]
[AspNetDevelopmentServerHost("$(SolutionDir)\\project", "/")]
public void TestB()
{
    Page page = _testContextInstance.RequestedPage;
    Button button = page.FindControl("BNewUser") as Button;
    PrivateObject po = new PrivateObject(page);
    po.Invoke("BNewUser_Click", button, EventArgs.Empty);
    Assert.IsFalse(page.Visible);
}
Unit tests should be F.I.R.S.T., where the I means Isolated (not only from external resources, but from other tests as well). TestB should have a single reason to fail: the requirement it verifies is not implemented. In your case it can also fail if TestA was not run beforehand, even though the requirement for TestB is implemented. So you can never tell the real reason for a test failure.
If you need some preconditions to be set before running TestB, then you should add that precondition setup to the Arrange part of TestB.
UPDATE: The Reuse of Unit Test Artifacts. Allow Us to Dream article is just dreaming about re-using unit tests for integration testing:
In theory it looks interesting, but in practice unit tests and integration tests are very different. The former should be isolated; the latter are quite the reverse - they should use real dependencies and external resources. Let's imagine you use some dependency injection framework to provide different implementations of dependencies to your SUT - mocked for unit tests and real for integration tests. Sounds good. But it will make unit tests very hard to maintain - you will not know the setup of the mocked objects in the current test, because you have moved the arrange part of the test out of the test and into the dependency injection framework. All unit tests will have only act and assert parts, which has some value, but they will be really hard to understand and maintain.
The next part is even worse - how would you configure the dependency injection framework to provide a different setup of mocks for every different unit test? Also, integration tests require additional setup and tear-down steps which don't exist in separate unit tests (you have to clear and fill the database, etc.). And I can't even imagine how long it would take to run several thousand integration tests which require a real database, services and files. Unit tests use mocks, thus they are fast. Integration tests are slow.
As you can see, these types of tests are very different by their nature. So just don't try to mix them; use each one as it is supposed to be used.
I think you may want to use a common initialization for your unit tests. You can validate the initialization inside TestB before you continue.
Take a look at
c# unit test with common code repeated
this may answer your question.
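For example, a common [TestInitialize] method lets each test build its own state instead of depending on another test having run. A minimal sketch based on the TestA shown above (the shared field name is illustrative):

private NewUser _user;

[TestInitialize]
public void Setup()
{
    // Runs before every test in the class, so each test starts from the same known state
    // without depending on any other test having run first.
    _user = new NewUser();
}

[TestMethod]
public void TestA()
{
    Assert.IsFalse(_user.checkID("123456789"));
}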
As a rule, I hold the opinion that unit tests should not have dependencies, if you find that you cannot separate the two sets of functionality, you may consider re-factoring it.
Tim

What qualifies as a unit test & what qualifies as a functional/integration test?

I am a bit confused about where the limits are between unit tests and integration/functional tests. Are there any definite boundaries between these?
Let me start with a scenario ....
Say I have a set of classes that perform a process. Each process comprises a few tasks.
ProcessA {
    var a = do TaskA;
    var c = do TaskC;
    var b = do TaskB;
}

ProcessB {
    var c = do TaskC;
    var d = do TaskD;
}

ProcessC {
    var a = do TaskA;
    var d = do TaskD;
}
If we take the above design, then I can write unit tests to test out each of the Tasks to make sure that they do what they are supposed to. I am cool with that.
My problem arises with the fact that I would like to write unit tests - or I think they would be unit tests - for all the Processes. I want to make sure that the tasks are in the proper order, and that the business rules are correct within the process itself.
The definition of a unit test is that it tests out a bite-sized chunk of code. What I am trying to test is larger: the 'ProcessA', 'ProcessB' code. In those tests, I will still be decoupling from the datastore & services. Would that still be considered a unit test or an integration/functional test?
UPDATE
Based on all the comments, I guess the proper question to ask is: if all the external dependencies are mocked in the 'ProcessA', 'ProcessB', etc. classes, would a test for those classes be considered a unit test or an integration test?
And thanks for being patient with me ....
As soon as you said "I want to make sure that the tasks are in the proper order, and the business rules are correct within the process itself," you stopped talking about unit tests and began talking about integration tests.
Even though you are decoupling from the datastore and services, you are testing business rules. If your organization is anything like mine, business rules can (and often do) change. It is this volatile nature that makes it an integration test.
The base rule I've used is "unit tests are modular, independent pieces of code not dependent upon external data sources, whereas integration tests are tests that require the use of external data sources, either mocked up or from production."
The Art of Unit Testing, Second Edition, defines integration tests as follows:
I consider integration tests as any tests that aren’t fast and consistent and that use one or more real dependencies of the units under test. For example, if the test uses the real system time, the real filesystem, or a real database, it has stepped into the realm of integration testing.
and unit tests with the following (emphasis mine):
A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It’s trustworthy, readable, and maintainable. It’s consistent in its results as long as production code hasn't changed.
Based upon the same book as above, a unit test:
Is an automated piece of code that invokes a different method and then checks some assumptions about the logical behaviour of that method or class.
This is true if you can reduce ProcessA/B/C into small chunks that can be tested independently of one another.
Can be executed repeatedly by anyone on the development team.
If someone from your team needs a set of values or a cheat sheet to determine whether the test passes or not, it is not a unit test.
Now, based upon your edit, it depends. Integration tests and unit tests can overlap, and other developers may have different ideas as to how they would label the tests.
The answer to your question is to determine the best practice for you and your development team. If the consensus it's a unit test, then consider it as such.
Unit and integration tests should help the process, not hinder it. Please don't let semantics get in the way of that.
Setting aside tools etc., a unit test simply tests a specific piece of code and would typically be performed by the programmer.
Functional and integration testing are typically performed by a QA team:
Functional Testing is testing if the code does what it was intended to do.
Integration Testing is about testing code and measuring how it interacts with other modules/parts of the system.
I would say that a unit test is a test that tests one class only (a unit), whereas integration tests test the integration of multiple classes/assemblies and/or even external data sources. Functional tests should explicitly test the functionality exposed to the users (which may be internal users, outside of the dev team etc.)
EDIT:
The only way to implement unit tests for composite classes is to use dependency injection. This means that you should define an interface (possibly more than one), e.g.:
interface ITask
{
    void Perform();
}
and inject it into your Process classes:
class ProcessC
{
    private readonly ITask taskA;
    private readonly ITask taskD;

    public ProcessC(ITask taskA, ITask taskD)
    {
        this.taskA = taskA;
        this.taskD = taskD;
    }

    public void Run() //change to your real methods
    {
        taskA.Perform(); //capture results
        taskD.Perform();
    }
}
Now you can mock the injected interface ITask (using a mocking framework), so that you can isolate the behaviour of a process from the behaviour of the tasks.
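A minimal sketch of such a test for the ProcessC above, using Moq (it assumes the constructor and Run method shown are accessible to the test project):

[TestMethod]
public void Run_PerformsTaskAAndTaskD()
{
    // Arrange: the tasks are mocked, so only ProcessC's orchestration logic is under test.
    var taskA = new Mock<ITask>();
    var taskD = new Mock<ITask>();
    var process = new ProcessC(taskA.Object, taskD.Object);

    // Act
    process.Run();

    // Assert: the process delegated to each task exactly once.
    taskA.Verify(t => t.Perform(), Times.Once());
    taskD.Verify(t => t.Perform(), Times.Once());
}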

Using TestContext to share information between unit tests

I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the most important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests; for instance, a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep Test1 as it is. It only focuses on the functionality of the registering process. You don't have to save that ServiceKey anywhere; just assert inside the test method.
For the second test you have to set up (fake) everything you need for it to run successfully. It is generally a good idea to follow the "Arrange Act Assert" principle, where you set up the data to test, act upon it, and then check that everything worked as intended (it also adds more clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get in the first test run. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight framework for mocking; you should check it out if you are not yet using any mocking utilities.
Hope this helps.
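For instance, here is a sketch of what Test2 might look like when it arranges its own fake ServiceKey with Moq (IRegistrationService, DataService, and AddData are hypothetical names standing in for your CRUD system):

[TestMethod]
public void AddData_WithServiceKey_AddsSuccessfully()
{
    // Arrange: fake the registration result instead of relying on Test1 having run first.
    var registration = new Mock<IRegistrationService>();
    registration.Setup(r => r.Register(It.IsAny<string>())).Returns("fake-service-key");
    var sut = new DataService(registration.Object);

    // Act
    var added = sut.AddData("fake-service-key", "some payload");

    // Assert
    Assert.IsTrue(added);
}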

How to ensure that database cleanup is always performed after a test?

Consider the following example of a unit test. The comments pretty much explain my problem.
[TestMethod]
public void MyTestMethod()
{
    //generate some objects in the database
    ...
    //make an assert that fails sometimes (for example purposes, this fails always)
    Assert.IsTrue(false);
    //TODO: how do we clean up the data generated in the database now that the test has ended here?
}
There are two ways to do this. One is using TestInitialize and TestCleanup attributes on methods in the test class. They will always be run before and after the test, respectively.
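A minimal sketch of the first approach (TestDatabase is a hypothetical helper wrapping your real data-access code):

private int _testFooId;

[TestInitialize]
public void SetUp()
{
    // Runs before every test: create the rows the test needs.
    _testFooId = TestDatabase.InsertFoo();
}

[TestCleanup]
public void TearDown()
{
    // Runs after every test, even when an Assert failed,
    // so the generated data never leaks into the next test.
    TestDatabase.DeleteFoo(_testFooId);
}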
Another way is to use the fact that test failures are propagated to the test runner via exceptions. This means that a try { } finally { } block in your test can be used to clean up anything after an assert fails.
[TestMethod]
public void FooTest()
{
    try
    {
        // setup some database objects
        Foo foo = new Foo();
        Bar bar = new Bar(foo);
        Assert.Fail();
    }
    finally
    {
        // remove database objects.
    }
}
The try/finally cleanup can get really messy if there are a lot of objects to clean up. What my team has leaned towards is a helper class which implements IDisposable. It tracks what objects have been created and pushes them onto a stack. When Dispose is called, the items are popped off the stack and removed from the database.
[TestMethod]
public void FooTest()
{
    using (FooBarDatabaseContext context = new FooBarDatabaseContext())
    {
        // setup some db objects.
        Foo foo = context.NewFoo();
        Bar bar = context.NewBar(foo);
        Assert.Fail();
    } // calls dispose. deletes bar, then foo.
}
This has the added benefit of wrapping the constructors in method calls. If constructor signatures change we can easily modify the test code.
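A stripped-down sketch of such a helper might look like the following; the insert/delete bodies are placeholders for your real data-access code:

using System;
using System.Collections.Generic;

public sealed class FooBarDatabaseContext : IDisposable
{
    // Cleanup actions are pushed as objects are created and run in reverse order on Dispose.
    private readonly Stack<Action> _cleanupActions = new Stack<Action>();

    public Foo NewFoo()
    {
        var foo = new Foo();
        // placeholder: insert foo into the database here
        _cleanupActions.Push(() => { /* placeholder: delete foo from the database */ });
        return foo;
    }

    public Bar NewBar(Foo foo)
    {
        var bar = new Bar(foo);
        // placeholder: insert bar into the database here
        _cleanupActions.Push(() => { /* placeholder: delete bar from the database */ });
        return bar;
    }

    public void Dispose()
    {
        // Last created, first removed, so foreign-key dependencies unwind cleanly (bar before foo).
        while (_cleanupActions.Count > 0)
            _cleanupActions.Pop()();
    }
}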
I think the best answer in situations like this is to think very carefully about what you are trying to test. Ideally a unit test should be trying to test a single fact about a single method or function. When you start combining many things together it crosses over into the world of integration tests (which are equally valuable, but different).
For unit testing purposes, to enable you to test only the thing you want to test, you will need to design for testability. This typically involves additional use of interfaces (I'm assuming .NET from the code you showed) and some form of dependency injection (but doesn't require an IoC/DI container unless you want one). It also benefits from, and encourages you to create very cohesive (single purpose) and decoupled (soft dependencies) classes in your system.
So when you are testing business logic that depends on data from a database, you would typically use something like the Repository Pattern and inject a fake/stub/mock IXXXRepository in for unit testing. When you are testing the concrete repository, you either need to do the kind of database cleanup you are asking about or you need to shim/stub the underlying database call. That is really up to you.
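For example, business logic that depends on a repository can be unit tested without touching the database at all. In this sketch, IOrderRepository, Order, and OrderService are hypothetical names standing in for your own types:

[TestMethod]
public void GetOverdueOrders_ReturnsOnlyOverdueOrders()
{
    // Arrange: the repository is mocked, so no database is involved.
    var repository = new Mock<IOrderRepository>();
    repository.Setup(r => r.GetAll()).Returns(new List<Order>
    {
        new Order { DueDate = DateTime.Today.AddDays(-1) }, // overdue
        new Order { DueDate = DateTime.Today.AddDays(1) }   // not overdue
    });
    var service = new OrderService(repository.Object);

    // Act
    var overdue = service.GetOverdueOrders();

    // Assert
    Assert.AreEqual(1, overdue.Count);
}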
When you do need to create/populate/cleanup the database, you might consider taking advantage of the various setup and teardown methods available in most testing frameworks. But be careful, because some of them are run before and after each test, which can seriously impact the performance of your unit tests. Tests that run too slowly will not be run very often, and that is bad.
In MS-Test, the attributes you would use to declare setup/teardown are ClassInitialize, ClassCleanup, TestInitialize, and TestCleanup. Other frameworks have similarly named constructs.
There are a number of frameworks that can help you with the mocking/stubbing: Moq, Rhino Mocks, NMock, TypeMock, Moles and Stubs (VS2010), VS11 Fakes (VS11 Beta), etc. If you are looking for dependency injection frameworks, look at things like Ninject, Unity, Castle Windsor, etc.
A couple of responses:
If it's using an actual database, I'd argue that it's not a "unit test" in the strictest sense of the term. It's an integration test. A unit test should have no such side-effects. Consider using a mocking library to simulate the actual database. Rhino Mocks is one, but there are plenty of others.
If, however, the whole point of this test is to actually interact with a database, then you'll want to interact with a transient test-only database. In that case part of your automated testing would include code to build the test database from scratch, then run the tests, then destroy the test database. Again, the idea is to have no external side-effects. There are probably multiple ways to go about this, and I'm not familiar enough with unit testing frameworks to really give a concrete suggestion. But if you're using the testing that's built in to Visual Studio then perhaps a Visual Studio Database Project would be of use.
Your question is a little bit too general. Usually you should clean up after every single test. You cannot rely on all tests always being executed in the same order, and you have to be sure about what is in your database. For general setup or cleanup, most unit test frameworks provide setUp and tearDown methods that you can override and that will automatically be called. I don't know how that works in C#, but e.g. in JUnit (Java) you have these methods.
I agree with David. Your tests usually should have no side effects. You should set up a new database for every single test.
You'll have to do manual cleanup in this circumstance, i.e., the opposite of the "generate some objects in the database" step.
The alternative is to use mocking tools such as Rhino Mocks so that the database is just an in-memory database.
It depends on what you are actually testing. Judging by the comments I would say yes, though it's difficult to tell from comments alone. By cleaning up the objects you just inserted you are, in practice, resetting the state for the next test. So if you clean up, every test starts from a clean system.
I think the clean-up depends on how you're building the data, so if "old test data" doesn't interact with future test runs, I think it's fine to leave it behind.
An approach I've been taking when writing integration tests is to have the tests run against a different db than the application db. I tend to rebuild the test db as a precondition to each test run. That way you don't need a granular clean-up scheme for each test as each test run gets a clean slate between runs.
Most of my development is done using SQL Server, but in some cases I have run my tests against a SQL Compact Edition db, which is fast and efficient to rebuild between runs.
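A rough sketch of that rebuild step with MSTest (the connection string, database name, and the choice of doing it once per run via AssemblyInitialize are all assumptions; adapt them to your own setup):

using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestDatabaseSetup
{
    [AssemblyInitialize]
    public static void RebuildTestDatabase(TestContext context)
    {
        // Drop and recreate the test database once per run so every run starts from a clean slate.
        using (var connection = new SqlConnection(@"Server=(localdb)\MSSQLLocalDB;Integrated Security=true"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "IF DB_ID('MyAppTests') IS NOT NULL DROP DATABASE MyAppTests; CREATE DATABASE MyAppTests;";
                command.ExecuteNonQuery();
            }
            // placeholder: run schema and seed scripts against MyAppTests here
        }
    }
}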
mbUnit has a very handy attribute Rollback that cleans up the database after finishing the test. However, you'll have to configure DTC (Distributed Transaction Coordinator) in order to be able to use it.
I was having a comparable issue where one test's assertion was preventing cleanup and causing other tests to fail.
Hopefully this is of use to somebody, sometime.
[Test]
public void Collates_Blah_As_Blah()
{
    Assert.False(SINGLETON.Collection.Any());
    for (int i = 0; i < 2; i++)
        Assert.That(PROCESS(ValidRequest) == Status.Success);
    try
    {
        Assert.AreEqual(1, SINGLETON.Collection.Count);
    }
    finally
    {
        SINGLETON.Collection.Clear();
    }
}
The finally block will execute whether the assertion passes or fails, and it doesn't introduce the risk of false passes, which a catch would!
The following is a skeleton of a test method that I use. This allows me to use a try/catch/finally to run the cleanup code in the finally block without losing the asserts that fail.
[TestMethod]
public void TestMethod1()
{
    Exception defaultException = new Exception("No real exception.");
    try
    {
        #region Setup
        #endregion

        #region Tests
        #endregion
    }
    catch (Exception exc)
    {
        /* if an Assert fails, this catches its exception so that it can be rethrown
           in the finally block */
        defaultException = exc;
    }
    finally
    {
        #region Cleanup
        //cleanup code goes here
        if (!defaultException.Message.Equals("No real exception."))
        {
            throw defaultException;
        }
        #endregion
    }
}

When to use mocking versus faking in C# unit testing?

Can anyone come up with guidelines suggesting the ideal scenarios to choose mocking versus faking, i.e., setting up the essentials manually?
I am a bit confused with how to approach this situation.
Well you have a few things you need to sort out. You have two basic things you'll need to know: Nomenclature and Best Practices.
First I want to give you a great video resource from a great tester, Roy Osherove:
Unit Testing Reviews by Roy Osherove
He starts out by saying that he has done some reviews of test harnesses shipped with several open source projects. You can find those here:
http://weblogs.asp.net/rosherove/archive/tags/TestReview/default.aspx
These are basically video reviews where he walks you through these test harnesses and tells you what is good and what is bad. Very helpful.
Roy also has a book that I understand is very good.
Nomenclature
This podcast will help out immensely: http://www.hanselminutes.com/default.aspx?showID=187
I'll paraphrase the podcast, though (that Hanselminutes intro music is dreadful):
Basically everything you do with an isolation framework (like Moq, Rhino Mocks, TypeMock, etc.) is called a fake.
A fake is an object in use during a test that the code you are testing can call in place of production code. A fake is used to isolate the code you are trying to test from other parts of your application.
There are (mainly) two types of fakes: stubs and mocks.
A mock is a fake that you put in place so that the code you are testing can call out to it, and you assert that the call was made with the correct parameters. The sample below does just this using the Moq isolation framework:
[TestMethod]
public void CalculateTax_ValidTaxRate_DALCallIsCorrect()
{
    //Arrange
    Mock<ITaxRateDataAccess> taxDALMock = new Mock<ITaxRateDataAccess>();
    taxDALMock.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
        .Returns(0.08).Verifiable();
    TaxCalculator calc = new TaxCalculator(taxDALMock.Object);
    //Act
    decimal result = calc.CalculateTax("75001", 100.00);
    //Assert
    taxDALMock.VerifyAll();
}
A stub is almost the same as a mock, except that you put it in place to make sure the code you are testing gets back consistent data from its call (for instance, if your code calls a data access layer, a stub would return back fake data), but you don't assert against the stub itself. That is, you don't care to verify that the method called your fake data access layer - you are trying to test something else. You provide the stub to get the method you are trying to test to work in isolation.
Here’s an example with a stub:
[TestMethod]
public void CalculateTax_ValidTaxRate_TaxValueIsCorrect()
{
    //Arrange
    Mock<ITaxRateDataAccess> taxDALStub = new Mock<ITaxRateDataAccess>();
    taxDALStub.Setup(taxDAL => taxDAL.GetTaxRateForZipCode("75001"))
        .Returns(0.08);
    TaxCalculator calc = new TaxCalculator(taxDALStub.Object);
    //Act
    decimal result = calc.CalculateTax("75001", 100.00);
    //Assert
    Assert.AreEqual(result, 8.00);
}
Notice here that we are testing the output of the method, rather than the fact that the method made a call to another resource.
Moq doesn't really make an API distinction between a mock and a stub (notice both were declared as Mock<T>), but the usage here is important in determining the type.
Hope this helps set you straight.
There are at least 5 different kinds of test doubles: dummies, stubs, mocks, spies and fakes. A good overview is at http://code.google.com/testing/TotT-2008-06-12.pdf and they are also categorized at http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
You want to test a chunk of code, right? Let's say a method. Your method downloads a file from an http url, then saves the file on disk, and then mails out that the file is on disk. All three of these actions are of course performed by service classes your method calls, because that makes them easy to mock. If you don't mock them, your test will download stuff, access the disk, and mail a message every time you run it. Then you are not just testing the code in the method; you are also testing the code that downloads, writes to disk and sends mail. If you mock these, you are testing just the method's code. You are also able to simulate a download failure, for instance, to see that your method's code behaves correctly.
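As a sketch of that scenario, the IDownloader, IFileStore, and IMailer interfaces and the FileFetcher class below are made-up names for the three services just described:

[TestMethod]
public void Fetch_WhenDownloadFails_DoesNotWriteOrMail()
{
    // Arrange: all three collaborators are mocks, and the download is set up to fail.
    var downloader = new Mock<IDownloader>();
    downloader.Setup(d => d.Download(It.IsAny<string>())).Throws(new Exception("simulated download failure"));
    var fileStore = new Mock<IFileStore>();
    var mailer = new Mock<IMailer>();
    var fetcher = new FileFetcher(downloader.Object, fileStore.Object, mailer.Object);

    // Act
    var succeeded = fetcher.Fetch("http://example.com/report.csv");

    // Assert: on failure, nothing should be written to disk and no mail should be sent.
    Assert.IsFalse(succeeded);
    fileStore.Verify(f => f.Save(It.IsAny<string>(), It.IsAny<byte[]>()), Times.Never());
    mailer.Verify(m => m.Send(It.IsAny<string>()), Times.Never());
}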
Now as for faking, I usually fake classes that just hold values and don't have much logic. If you are sending in an object that holds some values that get changed in the method, you can read it back in the test to see that the method did the right thing.
Of course the rules can (and sometimes must) be bent a bit, but the general way of thinking is test your code, and your code only.
The Little Mocker, from Bob Martin, is a very good reading on the topic.
[...] long ago some very smart people wrote a paper that introduced and defined the term Mock Object. Lots of other people read it and started using that term. Other people, who hadn't read the paper, heard the term and started using it with a broader meaning. They even turned the word into a verb. They'd say, "Let's mock that object out.", or "We've got a lot of mocking to do."
The article explains the differences between mocks, fakes, spies and stubs.
