Suppose we have two unit tests that depend on each other: TestA depends on TestB. We want to change the code so that when we run TestA, TestB is automatically run as well.
[TestMethod]
public void TestA()
{
    string id = "123456789";
    NewUser user = new NewUser();
    bool idCheck = user.checkID(id);
    Assert.IsFalse(idCheck);
}
[TestMethod]
[HostType("ASP.NET")]
[UrlToTest("http://localhost:1776/Login.aspx")]
[AspNetDevelopmentServerHost("$(SolutionDir)\\project", "/")]
public void TestB()
{
    Page page = _testContextInstance.RequestedPage;
    Button button = page.FindControl("BNewUser") as Button;
    PrivateObject po = new PrivateObject(page);
    po.Invoke("BNewUser_Click", button, EventArgs.Empty);
    Assert.IsFalse(page.Visible);
}
Unit tests should be F.I.R.S.T., where the I means isolated (not only from external resources, but from other tests as well). TestB should have a single reason to fail: the requirement it verifies is not implemented. In your case it can also fail when that requirement is implemented but TestA was not run first, so you can never tell the real reason for a test failure.
If you need some preconditions to be set before running TestB, you should add that precondition setup to the Arrange part of TestB.
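For illustration, a minimal sketch of that idea (the CreateUnregisteredUser helper is hypothetical; NewUser and checkID come from the question):

[TestMethod]
public void TestB_WithExplicitArrange()
{
    // Arrange: establish the precondition explicitly instead of relying on TestA.
    NewUser user = CreateUnregisteredUser("123456789");

    // Act
    bool idCheck = user.checkID("123456789");

    // Assert: the test now has a single reason to fail.
    Assert.IsFalse(idCheck);
}

// Hypothetical helper; any test needing this precondition calls it in its Arrange part.
private NewUser CreateUnregisteredUser(string id)
{
    return new NewUser();
}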
UPDATE: The article Reuse of Unit Test Artifacts. Allow Us to Dream is just dreaming about reusing unit tests for integration testing:
In theory it looks interesting, but in practice unit tests and integration tests are very different. The former should be isolated; the latter are quite the reverse: they should use real dependencies and external resources. Let's imagine you use some dependency injection framework to provide different implementations of dependencies to your SUT: mocked for unit tests and real for integration tests. Sounds good. But it will make unit tests very hard to maintain: you will not know the setup of the mocked objects in the current test, because the Arrange part has moved out of the test and into the dependency injection framework. All unit tests will have only Act and Assert parts, which will have some value, but they will be really hard to understand and maintain.
The next part is even worse: how would you configure the dependency injection framework to provide a different mock setup for every single unit test? Also, integration tests require additional setup and tear-down steps which don't exist in separate unit tests (you need to clear and fill the database, etc.). And I can't even imagine how long it would take to run several thousand integration tests which require a real database, services, and files. Unit tests use mocks, thus they are fast; integration tests are slow.
As you can see, these types of tests are very different by nature. So don't try to mix them; use each one as it is supposed to be used.
I think you may want to use a common initialization for your unit tests. You can validate that initialization inside TestB before you continue.
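For illustration, a minimal MSTest sketch (NewUser comes from the question; the test class layout and field names are mine):

[TestClass]
public class UserTests
{
    private NewUser _user;

    [TestInitialize]
    public void Setup()
    {
        // Runs before every test, so each test starts from the same known state.
        _user = new NewUser();
    }

    [TestMethod]
    public void TestA()
    {
        Assert.IsFalse(_user.checkID("123456789"));
    }

    [TestMethod]
    public void TestB()
    {
        // Validate the shared initialization before continuing.
        Assert.IsNotNull(_user);
    }
}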
Take a look at c# unit test with common code repeated; this may answer your question.
As a rule, I hold the opinion that unit tests should not have dependencies. If you find that you cannot separate the two sets of functionality, you may consider refactoring it.
Tim
Related
The following is the exact scenario in my application:
There are several C# methods in the codebase which use Entity Framework to talk to a SQL database.
Unit tests are written against all methods and cover all possible permutations and combinations based on method signatures, input requirements, and return values.
The unit tests are working fine and fail when they should (i.e., in cases where some validation or an expected return value is changed but the unit tests are not updated to reflect it).
There are cases where a developer changes the SQL schema and updates the entity in the C# code. In this case the unit tests pass, which is absolutely fine, because only the underlying logic has changed, not the inputs, validations, or return values.
However, I want some unit tests to fail when the database schema and entities are changed but the unit tests are not. That is, I want developers to fix the unit tests whenever they change the database schema and entities.
Can anyone please suggest how to achieve this?
Any help on this would be much appreciated.
Thanks
I moved this last paragraph up because it appears to be the X of your XY problem. I do suggest reading all the other paragraphs, because you seem to be misunderstanding what a unit test is and what purpose it serves.
Unit tests do not test multiple codebases!
I just noticed this in the comment you wrote:
So if a developer of one project changes the database schema, we want to alert them that this schema change should be reflected in other project as well.
Unit tests should not be used as a way to synchronize two otherwise completely unrelated codebases.
This is a simple application of DRY vs. WET: if these applications share the same database schema, the database schema (and the related EF entities) should live in a single source which both applications depend on.
Don't develop the same thing twice, because you end up constantly having to sync one when the other changes (which is what you're dealing with now).
For example, you can host the "shared data" project on a NuGet server and have both applications refer to that self-hosted NuGet package. When configured correctly, whenever an application is built it will fetch the latest version of the "shared data" project, thus ensuring that both applications always work with the latest version of the database schema.
A second way to do this (if you don't want to create a shared dependency) is to have your version control system watch certain files (e.g. the entity class folder) and alert you if any change is made to those files. When you receive an alert, you'll know to check whether the change impacts the other application's codebase.
And if I understand you correctly, you're really just trying to get an alert, right?
There are other ways to solve this, and the decision depends on your infrastructure and what technologies your team uses. But it should definitely not be done by manually scripting/faking unit test failures.
Don't forget the "unit" in "unit test"
I want some unit tests to fail when the database schema and entities are changed
Unit tests test one thing (hence "unit"). It seems like you're writing tests that test two things: the business logic and the database. That's an integration test. Just as a rule of thumb:
Unit tests test one component.
Integration tests test the interaction between multiple components.
Don't test external dependencies
If you were to write a unit test for EF specifically (which you shouldn't, but more on that later), you would not involve an actual database in the test. At best, you'd assert the SQL that is generated by EF, without running that query against the database.
Repeating my point, unit tests test one component. Entity Framework and the database server are two different things.
When using external components (such as a database server), you are again delving into the realm of integration tests.
Only test your own code
As mentioned before, you shouldn't be testing Entity Framework. Unit testing a library is basically doing the library developer's work for them. You shouldn't occupy yourself with how a library works; if anything, you use a library specifically because you don't want to know how it works internally.
Consider this test:
[TestMethod]
public void TestForAddition()
{
    var expectedValue = 2;
    var testValue = 1 + 1;
    Assert.AreEqual(expectedValue, testValue); // expected value first, then actual
}
What you're doing here is testing the + operator, which is provided by the C#/.NET framework. That is not part of your code, and therefore you shouldn't be testing it. The idea is the same as the EF argument: it's not your code, and it's not up to you to test it. You have to trust the dependencies you use.
However, if you have overloaded the + operator, you should test that (because it's your code):
public static Person operator +(Person a, Person b)
{
    return new Person() { Name = a.Name + " " + b.Name };
}

[TestMethod]
public void TestForPersonAddition()
{
    var expectedValue = "Billie Jean";
    var billie = new Person() { Name = "Billie" };
    var jean = new Person() { Name = "Jean" };
    // Compare the resulting Name, not the Person instance itself.
    Assert.AreEqual(expectedValue, (billie + jean).Name);
}
What this means for your EF-centric example is that you should not be unit testing Entity Framework. You should only have unit tests for your code, not that of EF.
You can, however, write integration tests for this. And this can achieve what you want: you run the integration tests against an existing database. If the integration tests pass, you'll know that the new codebase is compatible with the existing database. If they don't, then you know an issue has occurred that warrants a developer's attention.
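As a hedged sketch of what such an integration test might look like (AppDbContext, its Users set, and the connection-string name are hypothetical EF6-style stand-ins, and the usual System.Linq and Entity Framework usings are assumed):

[TestMethod]
public void Entities_AreCompatibleWithExistingSchema()
{
    // Runs against the real test database named in the config file.
    using (var context = new AppDbContext("name=IntegrationTestDb"))
    {
        // Executing a real query surfaces any schema/entity mismatch as an exception.
        var firstUser = context.Users.FirstOrDefault();
        Assert.IsTrue(context.Database.Exists());
    }
}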
Unit tests test behavior!
However, I want some unit tests to fail when the database schema and entities are changed but the unit tests are not. That is, I want developers to fix the unit tests whenever they change the database schema and entities.
So you want your developers to fix unit tests that were never broken in the first place?
Your intention doesn't make sense to me. You've changed some code and the unit tests are not failing; that is a good thing. But you want to make them fail, so that your developers are then forced to "fix" the unit tests (which weren't broken in the first place).
If the unit tests pass and yet you still claim that there is something wrong in the behavior of the application, then your unit tests are not doing the job they're supposed to do.
You're misusing unit tests. The issue isn't trying to figure out how to force unit tests to fail; the issue is what you're expecting a unit test's outcome to tell you.
Unit tests should not be used to test whether the internal system has changed. Unit tests test whether the application still behaves the same way (even if some of the code has changed).
If I rewrite your codebase from scratch, then run the same (untouched) unit tests on my new codebase and all unit tests pass, that means my code behaves the same way as yours and our codebases are functional equivalents of each other (at least with regard to the behavior you've written tests for).
TL;DR - I mixed up "Integration Tests" with "Unit Tests".
I'm confused about Unit Testing and IoC containers... :(
I've read this article about how you should not use IoC containers in unit testing. This seems to be the opinion of many people on SO and in various other articles. In unit tests, you test your methods, but any dependencies should be mocked.
Using the aforementioned article, I'd like to ask some questions.
To put it another way, if component A calls component B then from a unit testing perspective, we cannot let component A call the actual implementation of component B. Instead, component B must be mocked.
But... why?
We use a fake instead of the real component B so that 1) our tests do not rely on code in any other class, 2) component B returns the same data every time, and 3) we can intercept calls to component B so we can check how and when it is being called.
ad. 1) Instead of testing what would happen in the real application, I'm now needlessly faking component B... to what end? So that I know that component A is tested in isolation? But my application uses both components together, and these components work together.
The citation implies that I have to unit test component A and component B in isolation, and that I should test only the business logic of each component.
But that undermines the whole point of automated testing, in which I create assurances about the functioning of the application: that the application will not crash when using these two components together. Not about its internal units in an isolated context.
ad. 2) I know that everything I test is deterministic: for various inputs X it will return some Y, throw an exception, or do something else. That's what I'm actually testing.
ad. 3) I can imagine this makes sense in complex tests...
To me, mocking makes sense if component B is 3rd-party code I cannot easily create in a testing class without duplicating an awful lot of code, or if I have reasons not to call the actual implementation of component B, such as not wanting to make actual changes in a database, not actually sending emails, not actually moving/writing/reading/deleting files, etc.
But then, I'd mock using a different IoC container, where instead of Bind<ISomeService>().To<BusinessImplementation>() I would write Bind<ISomeService>().To<TestImplementation>() (code example in Ninject).
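For example, a minimal Ninject sketch of that two-container setup (the module names are mine; ISomeService, BusinessImplementation, and TestImplementation are the names from the snippet above):

// Production container setup.
public class ProductionModule : NinjectModule
{
    public override void Load()
    {
        Bind<ISomeService>().To<BusinessImplementation>();
    }
}

// Test container setup: same interface, fake implementation.
public class TestModule : NinjectModule
{
    public override void Load()
    {
        Bind<ISomeService>().To<TestImplementation>();
    }
}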
By testing, I want to make assurances about the application (what would happen in the deployed application), and by mocking dependencies without good reason, I test in a very different context.
When the application starts, it uses the IoC container as I wrote it. The dependencies of the application are resolved using the IoC container.
I believe I'm probably wrong about something but I can't see it yet...
The purpose isn't to replace integration tests that will test modules from a higher level. Unit tests are intended to test a discrete class in isolation, mostly to confirm that the design and coding of that class is complete.
Instead, component B must be mocked. But... why?
Simply put, unit testing is for testing a unitary piece of functionality. If the test fails, will it be because of component A or component B?
What is the test about?
Does component B pass its own battery of tests?
Testing component A along with a real instance of B will not answer these questions. On the contrary, it will raise more questions than it answers.
Instead of testing what would happen in the real application, I'm now needlessly faking component B... to what end?
Actually, it is to isolate component A from component B so that any misbehavior can only be caused by component A. This is to reduce the complexity around testing and to make it clear what you are testing, hence the "unit" in unit testing.
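As a hedged illustration (IComponentB, ComponentA, and their members are hypothetical), mocking B with Moq pins its behaviour down so that a failure can only come from A:

// Arrange: B is a mock, so its reply is fixed and its calls are observable.
var componentB = new Mock<IComponentB>();
componentB.Setup(b => b.GetData()).Returns("known value");
var componentA = new ComponentA(componentB.Object);

// Act
var result = componentA.Process();

// Assert: a failure here can only be caused by component A.
Assert.AreEqual("expected result", result);
componentB.Verify(b => b.GetData(), Times.Once());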
To me, mocking makes sense if component B is 3rd-party code I cannot easily create in a testing class without duplicating an awful lot of code...
Basically, you can do it this way, but that is not the only thing that matters in unit testing. Using dependency injection, one should mock every reference type so as to isolate component A from any external influence, that is, to make sure that component A behaves according to expectations.
The day you want to test component A with a real instance of component B, you are no longer doing unit testing; that is called integration testing, and those tests should be written after you are sure that every component behaves correctly in its unitary form.
Please see the answers to this question: Unit Testing or Functional Testing?
On a side note, it is not recommended to use DI containers in unit testing. It complicates the tests for no added value.
I am a bit confused about what the limits are between unit tests and integration/functional tests. Are there any definite boundaries between them?
Let me start with a scenario ....
Say I have a set of classes that perform processes, where each process comprises a few tasks.
ProcessA {
    var a = do TaskA;
    var c = do TaskC;
    var b = do TaskB;
}

ProcessB {
    var c = do TaskC;
    var d = do TaskD;
}

ProcessC {
    var a = do TaskA;
    var d = do TaskD;
}
If we take the above design, then I can write unit tests to test out each of the Tasks to make sure that they do what they are supposed to. I am cool with that.
My problem arises from the fact that I would like to write unit tests (or I think they would be unit tests) for all the processes. I want to make sure that the tasks run in the proper order and that the business rules within each process are correct.
The definition of a unit test is that it tests a bite-sized chunk of code, but what I am trying to test is larger: 'ProcessA', 'ProcessB', and so on. In those tests I will still decouple from the datastore and services. Would that still be considered a unit test, or an integration/functional test?
UPDATE
Based on all the comments, I guess the proper question to ask is this: if all the external dependencies are mocked in the 'ProcessA', 'ProcessB', etc. classes, would a test for those classes be considered a unit test or an integration test?
And thanks for being patient with me ....
As soon as you said "I want to make sure that the tasks are in the proper order, and the business rules are correct within the process itself," you stopped talking about unit tests and began talking about integration tests.
Even though you are decoupling from the datastore and services, you are testing business rules. If your organization is anything like mine, business rules can (and often do) change. It is this volatile nature that makes it an integration test.
The base rule I've used is: "unit tests are modular, independent pieces of code not dependent upon external data sources, whereas integration tests are tests that require the use of external data sources, either mocked up or from production."
The Art of Unit Testing, Second Edition defines integration tests as follows:
I consider integration tests as any tests that aren’t fast and consistent and that use one or more real dependencies of the units under test. For example, if the test uses the real system time, the real filesystem, or a real database, it has stepped into the realm of integration testing.
and unit tests with the following (emphasis mine):
A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It’s trustworthy, readable, and maintainable. It’s consistent in its results as long as production code hasn't changed.
Based upon the same book as above, a unit test:
Is an automated piece of code that invokes the method being tested and then checks some assumptions about the logical behaviour of that method or class.
This is true if you can reduce ProcessA/B/C into small chunks that can be tested independently of one another.
Can be executed repeatedly by anyone on the development team.
If someone from your team requires specific values or a cheat sheet to determine whether the test passes, it is not a unit test.
Now, based upon your edit, it depends. Integration tests and unit tests can overlap, and other developers may have different ideas on how they would label the tests.
The answer to your question is to determine the best practice for you and your development team. If the consensus is that it's a unit test, then consider it as such.
Unit and integration tests should help the process, not hinder it. Please don't let semantics get in the way of that.
Setting aside tools etc., a unit test simply tests a specific piece of code and would typically be performed by the programmer.
Functional and integration tests are typically performed by a QA team:
Functional testing checks whether the code does what it was intended to do.
Integration testing checks how the code interacts with other modules/parts of the system.
I would say that a unit test is a test that tests one class only (a unit), whereas integration tests test the integration of multiple classes/assemblies and/or even external data sources. Functional tests should explicitly test the functionality exposed to the users (which may be internal users, outside of the dev team etc.)
EDIT:
The only way to implement unit tests for composite classes is to use dependency injection. This means that you should define an interface (possibly more than one), e.g.:
interface ITask
{
    void Perform();
}
and inject it into your Process classes:
class ProcessC
{
    private ITask taskA;
    private ITask taskD;

    public ProcessC(ITask taskA, ITask taskD)
    {
        this.taskA = taskA;
        this.taskD = taskD;
    }

    public void Run() // change to your real methods
    {
        taskA.Perform(); // capture results
        taskD.Perform();
    }
}
Now you can mock the injected interface ITask (using a mocking framework), so that you can isolate the behaviour of a process from the behaviour of the tasks.
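For instance, a hedged Moq sketch against the ProcessC above; MockSequence is one way to also assert the ordering the question asks about (it requires strict mocks):

[TestMethod]
public void Run_PerformsTaskAThenTaskD()
{
    // Arrange: strict mocks enrolled in a sequence enforce the call order.
    var sequence = new MockSequence();
    var taskA = new Mock<ITask>(MockBehavior.Strict);
    var taskD = new Mock<ITask>(MockBehavior.Strict);
    taskA.InSequence(sequence).Setup(t => t.Perform());
    taskD.InSequence(sequence).Setup(t => t.Perform());
    var process = new ProcessC(taskA.Object, taskD.Object);

    // Act
    process.Run();

    // Assert: each task ran exactly once; order is enforced by the sequence.
    taskA.Verify(t => t.Perform(), Times.Once());
    taskD.Verify(t => t.Perform(), Times.Once());
}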
I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests; one of the very important properties of good unit tests is independence. Tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests. For instance, here is a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep your Test1 as it is. It focuses only on the registration process; you don't have to save that ServiceKey anywhere, just assert inside the test method.
For the second test you have to set up (fake) everything it needs to run successfully. It is generally a good idea to follow the "Arrange, Act, Assert" principle, where you set up the data to test, act upon it, and then check that everything worked as intended (it also adds clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get in the first test run. This way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
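For example, a hedged Moq sketch (IRegistrationService, DataService, and their members are hypothetical stand-ins for the asker's CRUD system):

[TestMethod]
public void AddData_WithFakedServiceKey_Succeeds()
{
    // Arrange: fake the ServiceKey instead of obtaining it from Test1.
    var registration = new Mock<IRegistrationService>();
    registration.Setup(r => r.Register(It.IsAny<string>()))
                .Returns("fake-service-key");
    var sut = new DataService(registration.Object);

    // Act
    bool added = sut.AddData("fake-service-key", "some data");

    // Assert
    Assert.IsTrue(added);
}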
Hope this helps.
I have a consumer class responsible for consuming a string and deciding what to do with it. It can either parse the string and insert the parsed data into a database, or notify an administrator.
Below is my implementation.
public void Consume(string email)
{
    if (_emailValidator.IsLocate(email))
    {
        var parsedLocate = _parser.Parse(email);
        // Insert the parsed locate into the database
    }
    else if (_emailValidator.IsGoodNightCall(email))
    {
        // Send a notification that a locate email requires attention
        _notifier.Notify();
    }
}
Below is my unit test.
// Arrange
var validator = new EmailValidator();
var parser = new Mock<IParser>();
var notifier = new Mock<INotifier>();
var consumer = new LocateConsumer(validator, parser.Object, notifier.Object);
var email = EmailLiterals.Locate;

// Act
consumer.Consume(email);

// Assert
parser.Verify(x => x.Parse(email), Times.Once());
Is it a code smell to mix mocks and real implementations in unit tests? Also, what about always having to verify that a method like abc() ran exactly once? It doesn't seem right that I have to add a new unit test every time I add a call inside my if block. It seems like if I keep adding to my Consume method, I'm creating a trap.
Thank you.
To be nitpicking, a unit test is an automated test that tests a unit in isolation. If you combine two or more units, it's not a unit test any more; it's an integration test.
However, depending on the type of units you integrate, having lots of that type of integration tests may be quite okay.
Krzysztof Kozmic recently wrote a blog post about this where he describes how Castle Windsor has very few unit tests, but lots of integration tests. AutoFixture also has a large share of those types of integration tests. I think the most important point is that as a general rule the integration must not cross library boundaries.
In any case you can view the actual implementation as an extreme end of the Test Double Continuum, so just as there are scenarios where it makes sense to use Stubs, Mocks, Spies, or Fakes, there are also scenarios where the actual implementation may make sense.
However, just keep in mind that you are no longer testing the unit in isolation, so you do introduce a coupling between the units that makes it more difficult to vary each independently.
To conclude, I still consider it a smell because it should always be an occasion to stop and think. However, a smell indicates no more than that, and sometimes, once you've thought it over, you can decide to move along.
I would say a strong yes. Unit testing should be free of dependencies among components.
Is it a test smell to mix in real implementation and mocks?
This is an integration test (combining two or more modules) and not a unit test (testing one module in isolation).
My answer is no: I think it is OK to have mocks in integration tests.