I am working on a test library using NUnit, and am generating a custom report while the tests run.
In the TearDown of my tests I call a method that reports the result of the test. It works just fine if the test passes, but the call is never reached if the test fails, is ignored, or is inconclusive.
To make things more difficult, the "//do stuff" in the TearDown can also cause the test to fail, in which case the failure would still need to be logged. But Assert throws an exception, which means execution leaves the block and never reaches the reporting code.
[SetUp]
public void ExampleSetup()
{
    //whatever
}

[Test]
public void ExampleTest()
{
    //whatever
}

[TearDown]
public void ExampleTearDown()
{
    //do stuff
    someObject.ReportTestResult();
}
More Info
- Using NUnit 3.2.0
Looking at the NUnit documentation's Framework Extensibility chapter, we can see that you need to use the Action Attribute.
The Action Attribute extension point is:
designed to better enable composability of test logic by creating
attributes that encapsulate specific actions to be taken before or
after a test is run.
Summary of a basic implementation of this extension point
You need to implement the ITestAction interface in your attribute class, say FooBarActionAttribute.
Given this, you implement BeforeTest(), AfterTest(), and the Targets property.
For a basic scenario you perform your custom operations in those two methods.
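A minimal sketch of such an attribute, assuming NUnit 3's TestContext API (the reporting call is the hypothetical someObject.ReportTestResult() from the question):

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class FooBarActionAttribute : Attribute, ITestAction
{
    // Apply this action around individual tests.
    public ActionTargets Targets => ActionTargets.Test;

    public void BeforeTest(ITest test)
    {
        // custom pre-test work goes here
    }

    public void AfterTest(ITest test)
    {
        // The test's outcome (Passed, Failed, Inconclusive, ...) is available here.
        var outcome = TestContext.CurrentContext.Result.Outcome;
        // someObject.ReportTestResult(outcome); // hypothetical reporter from the question
    }
}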
The final thing you need to do is to use this attribute, for example as:
[Test]
[FooBarActionAttribute]
public void Should_Baz_a_FooBar()
{
    ...
}
This will get executed right before and after the test method runs.
For more advanced techniques, please consult the linked documentation; it's pretty straightforward.
Frankly, this is just a bad idea. TearDown is part of your test. That's why it can fail. (But note that NUnit treats an assert failure in teardown as an error rather than a failure)
Since a test can do anything - good or bad, right or wrong - putting your logging into the test code is unreliable. You should be logging in the code that runs tests, not in the tests themselves. As stated above, the TearDown is a part of your test. So is any ActionAttribute you may define, as suggested in another answer. Don't do logging there.
Unfortunately, there is a history of doing logging within the tests because NUnit didn't supply an alternative - at least not one that was easy to use. For NUnit V2, you could create a test event listener addin for exactly this purpose. You still can if that's the version of NUnit you are using.
For NUnit 3.0 and higher, you should create a listener extension that works with the TestEngine. The TestEngine and its extensions are completely separate from your tests. Once you have a well-tested extension, you will be able to use it with all your tests and the tests won't be cluttered with logging code.
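As an illustration only (check the exact API against the NUnit.Engine documentation for your version), such a listener extension looks roughly like this:

using System;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

// A minimal TestEngine listener extension. The engine hands OnTestEvent
// an XML fragment for each event; filter for the ones you care about.
[Extension(Description = "Custom test report listener", EngineVersion = "3.4")]
public class ReportingListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // e.g. <test-case ... result="Failed" ...> when a test finishes
        if (report.StartsWith("<test-case", StringComparison.Ordinal))
        {
            Console.WriteLine(report); // replace with your custom reporting
        }
    }
}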
Related
We are using Moq 4 for our unit tests, specifically controller unit tests. We are using Moq's Verify() method to make sure an error was logged. The problem is the tests can pass but then fail the very next run.
We are using Serilog for our logger.
The Action method looks like
public IActionResult Index()
{
    try
    {
        var data = _repository.GetData();
        return View(data);
    }
    catch (Exception e)
    {
        Log.Error(e.Message);
        return View("Error"); // an error result is needed on this path for the method to compile
    }
}
So the unit test is using
_mockLogger.Verify(x => x.Write(LogEventLevel.Error, It.IsAny<string>()));
mockLogger is set up in the test's constructor like
var _mockLogger = new Mock<Serilog.ILogger>();
Log.Logger = _mockLogger.Object;
//...
and the repository is mocked to throw an exception when invoked.
When it fails, we are getting the error message
"Moq.MoqException expected invocation on the Mock at least once but was never peformed x=>x.Write(LogEventLevel.Error,It.IsAny<string>())"
Any Ideas?
It's not entirely possible to see what the problem is from the posted code. And I appreciate how hard it can be to make a MCVE for this. So I'm going to take two guesses.
Guess 1: I suspect the cause of your issue is the use of statics in your code, specifically to do with the logger.
I suspect what's happening is that other tests (not shown in the post) are also modifying/defining how the logger should behave, and since the logger is static, the tests are interfering with each other.
Try redesigning the code so that the logging functionality is dependency-injected into the class under test, using Serilog's ILogger interface: store it in a readonly field and use that field when you want to log.
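A minimal sketch of that shape (the IRepository definition and the error view name are assumptions based on the question's code):

using System;
using Microsoft.AspNetCore.Mvc;
using Serilog;

public interface IRepository
{
    object GetData(); // assumed shape, based on the question's _repository.GetData()
}

public class HomeController : Controller
{
    private readonly IRepository _repository;
    private readonly ILogger _logger; // Serilog.ILogger, injected per instance

    public HomeController(IRepository repository, ILogger logger)
    {
        _repository = repository;
        _logger = logger;
    }

    public IActionResult Index()
    {
        try
        {
            var data = _repository.GetData();
            return View(data);
        }
        catch (Exception e)
        {
            _logger.Error(e, "Failed to load data for Index");
            return View("Error"); // hypothetical error view
        }
    }
}

With this shape, each test creates its own mocks and passes them to the constructor, so tests can no longer interfere with each other through the static Log.Logger.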
Guess 2: You haven't said (or tagged) which testing framework you're using, but the part of the post which says "...setup in the test's constructor" stands out. The handful of frameworks I've used prefer you to do this kind of setup in attributed methods rather than in the constructor of the test. For example, NUnit has OneTimeSetUp (runs before any of the tests in that class), SetUp (before each test in that class), TearDown (after each test in that class), and OneTimeTearDown (after all of the tests in that class), as sketched below. It's possible that the constructors of your tests are being called in an order that you're not expecting, and which is not supported by your testing framework, whereas the attributed-method sequence is guaranteed by the framework.
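For reference, a minimal sketch of that NUnit lifecycle:

using NUnit.Framework;

[TestFixture]
public class LifecycleExample
{
    [OneTimeSetUp]
    public void BeforeAllTests() { /* runs once, before any test in this class */ }

    [SetUp]
    public void BeforeEachTest() { /* runs before every test */ }

    [TearDown]
    public void AfterEachTest() { /* runs after every test, pass or fail */ }

    [OneTimeTearDown]
    public void AfterAllTests() { /* runs once, after all tests in this class */ }
}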
I have tests written in XUnit using the InlineData and MemberData attributes. I would like to run tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes I would just call the methods directly like any other normal method. The asserts are still checked and it functions fine. But if I call a method directly that has the attributes, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test runner class in XUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces for example). There's plenty there to test and I don't want to copy paste them for each concrete implementer (could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test, a subject to test on.
But this leaves a problem. XUnit sees the test and wants to run it, but it can't because there are no concrete implementations available at this layer, there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations, and then invoke the interface tests passing in the new subjects. I can easily do this for tests that only accept 1 argument, the subject, but for tests where I'm using InlineData or MemberData too I would like to reuse those test cases already provided and just add the subject as the first argument.
Available for reference is the GitHub issue How to programmatically run XUnit tests from the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
// testAssembly, typeName, finished, and result are defined elsewhere in the
// sample: the assembly path, an optional type-name filter, a ManualResetEvent,
// and the exit code respectively.
using (var runner = AssemblyRunner.WithAppDomain(testAssembly))
{
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;

    Console.WriteLine("Discovering...");
    runner.Start(typeName);

    finished.WaitOne(); // a ManualResetEvent set in OnExecutionComplete
    finished.Dispose();

    return result;
}
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Nevermind, I figured it out. Taking a closer look at XUnit's attribute hierarchy I found that the data attributes (InlineData, MemberData, etc.) have a GetData method you can call to retrieve the set of data rows they represent. With a little reflection I can easily find all the tests in my test class, call the test methods, invoke the data attribute's GetData method where present, and perform the tests via my own code that way. The GetData part would have been much harder if I had to roll my own version of it. Thank you XUnit authors for not forcing me to do that.
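Something along these lines, assuming xUnit 2.x, where DataAttribute.GetData(MethodInfo) returns the rows as IEnumerable<object[]>:

using System.Linq;
using System.Reflection;
using Xunit.Sdk;

// Sketch: run every data-driven test method on a test class by hand,
// expanding InlineData/MemberData rows via DataAttribute.GetData.
public static class ManualTestRunner
{
    public static void RunDataDrivenTests(object testInstance)
    {
        foreach (var method in testInstance.GetType().GetMethods())
        {
            var dataAttributes = method.GetCustomAttributes<DataAttribute>().ToList();
            if (dataAttributes.Count == 0)
                continue; // not a data-driven test

            foreach (var row in dataAttributes.SelectMany(a => a.GetData(method)))
                method.Invoke(testInstance, row); // asserts inside still throw on failure
        }
    }
}

Prepending the test subject as the first argument is then just a matter of concatenating it onto each row before the Invoke call.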
Consider the following example of a unit test. The comments pretty much explain my problem.
[TestMethod]
public void MyTestMethod()
{
    // generate some objects in the database
    ...

    // make an assert that fails sometimes (for example purposes, this fails always)
    Assert.IsTrue(false);

    // TODO: how do we clean up the data generated in the database now that the test has ended here?
}
There are two ways to do this. One is using TestInitialize and TestCleanup attributes on methods in the test class. They will always be run before and after the test, respectively.
Another way is to use the fact that test failures are propagated to the test runner via exceptions. This means that a try { } finally { } block in your test can be used to clean up anything after an assert fails.
[TestMethod]
public void FooTest()
{
    try
    {
        // setup some database objects
        Foo foo = new Foo();
        Bar bar = new Bar(foo);
        Assert.Fail();
    }
    finally
    {
        // remove database objects.
    }
}
The try/finally cleanup can get really messy if there are a lot of objects to clean up. What my team has leaned towards is a helper class which implements IDisposable (sketched after the example below). It tracks what objects have been created and pushes them onto a stack. When Dispose is called the items are popped off the stack and removed from the database.
[TestMethod]
public void FooTest()
{
    using (FooBarDatabaseContext context = new FooBarDatabaseContext())
    {
        // setup some db objects.
        Foo foo = context.NewFoo();
        Bar bar = context.NewBar(foo);
        Assert.Fail();
    } // calls dispose. deletes bar, then foo.
}
This has the added benefit of wrapping the constructors in method calls. If constructor signatures change we can easily modify the test code.
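For reference, a sketch of what such a disposable context might look like (Foo, Bar, and the delete call are placeholders for your own entities and database code):

using System;
using System.Collections.Generic;

public class FooBarDatabaseContext : IDisposable
{
    // Cleanup actions are pushed as objects are created and popped in reverse
    // order on Dispose, so dependents are deleted before their dependencies
    // (bar before foo).
    private readonly Stack<Action> _cleanup = new Stack<Action>();

    public Foo NewFoo()
    {
        var foo = new Foo(); // insert the row into the database here
        _cleanup.Push(() => DeleteFromDatabase(foo));
        return foo;
    }

    public Bar NewBar(Foo foo)
    {
        var bar = new Bar(foo); // insert the row into the database here
        _cleanup.Push(() => DeleteFromDatabase(bar));
        return bar;
    }

    public void Dispose()
    {
        while (_cleanup.Count > 0)
            _cleanup.Pop()();
    }

    private void DeleteFromDatabase(object entity)
    {
        // issue the DELETE for the tracked entity here
    }
}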
I think the best answer in situations like this is to think very carefully about what you are trying to test. Ideally a unit test should be trying to test a single fact about a single method or function. When you start combining many things together it crosses over into the world of integration tests (which are equally valuable, but different).
For unit testing purposes, to enable you to test only the thing you want to test, you will need to design for testability. This typically involves additional use of interfaces (I'm assuming .NET from the code you showed) and some form of dependency injection (but doesn't require an IoC/DI container unless you want one). It also benefits from, and encourages you to create very cohesive (single purpose) and decoupled (soft dependencies) classes in your system.
So when you are testing business logic that depends on data from a database, you would typically use something like the Repository Pattern and inject a fake/stub/mock IXXXRepository for unit testing. When you are testing the concrete repository, you either need to do the kind of database cleanup you are asking about or you need to shim/stub the underlying database call. That is really up to you.
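For example, a minimal sketch using Moq and a hypothetical ICustomerRepository (all names here are illustrative, not from the question):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// The service depends on an abstraction instead of a concrete database.
public interface ICustomerRepository
{
    int CountActiveCustomers();
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public bool HasActiveCustomers()
    {
        return _repository.CountActiveCustomers() > 0;
    }
}

[TestClass]
public class CustomerServiceTests
{
    [TestMethod]
    public void HasActiveCustomers_IsTrue_WhenRepositoryReportsRows()
    {
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.CountActiveCustomers()).Returns(3);

        var service = new CustomerService(repository.Object);

        Assert.IsTrue(service.HasActiveCustomers()); // no database touched, nothing to clean up
    }
}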
When you do need to create/populate/cleanup the database, you might consider taking advantage of the various setup and teardown methods available in most testing frameworks. But be careful, because some of them are run before and after each test, which can seriously impact the performance of your unit tests. Tests that run too slowly will not be run very often, and that is bad.
In MS-Test, the attributes you would use to declare setup/teardown are ClassInitialize, ClassCleanup, TestInitialize, and TestCleanup. Other frameworks have similarly named constructs.
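A skeleton showing where each attribute fits:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DatabaseTests
{
    [ClassInitialize]
    public static void BeforeAllTests(TestContext context)
    {
        // runs once, before any test in this class
    }

    [TestInitialize]
    public void BeforeEachTest()
    {
        // runs before every test
    }

    [TestCleanup]
    public void AfterEachTest()
    {
        // runs after every test, even one that failed an Assert
    }

    [ClassCleanup]
    public static void AfterAllTests()
    {
        // runs once, after all tests in this class
    }
}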
There are a number of frameworks that can help you with the mocking/stubbing: Moq, Rhino Mocks, NMock, TypeMock, Moles and Stubs (VS2010), VS11 Fakes (VS11 Beta), etc. If you are looking for dependency injection frameworks, look at things like Ninject, Unity, Castle Windsor, etc.
A couple of responses:
If it's using an actual database, I'd argue that it's not a "unit test" in the strictest sense of the term. It's an integration test. A unit test should have no such side-effects. Consider using a mocking library to simulate the actual database. Rhino Mocks is one, but there are plenty of others.
If, however, the whole point of this test is to actually interact with a database, then you'll want to interact with a transient test-only database. In that case part of your automated testing would include code to build the test database from scratch, then run the tests, then destroy the test database. Again, the idea is to have no external side-effects. There are probably multiple ways to go about this, and I'm not familiar enough with unit testing frameworks to really give a concrete suggestion. But if you're using the testing that's built in to Visual Studio then perhaps a Visual Studio Database Project would be of use.
Your question is a little too general. Usually you should clean up after every single test: you cannot rely on all tests always being executed in the same order, and you have to be sure about what is in your database. For general setup or cleanup, most unit test frameworks provide setUp and tearDown methods that you can override and that will be called automatically. I don't know how that works in C#, but e.g. in JUnit (Java) you have these methods.
I agree with David. Your tests usually should have no side effects. You should set up a new database for every single test.
You'll have to do manual cleanup in this circumstance, i.e. the opposite of the "generate some objects in the database" step.
The alternative is to use mocking tools such as Rhino Mocks, so that the database is just an in-memory fake.
It depends on what you are actually testing. Judging by the comments I would say yes, although it's difficult to tell from comments alone. By cleaning up the objects you just inserted you are, in practice, resetting the state for the next test. So if you clean up, every test starts from a clean system.
I think the clean-up depends on how you're building the data, so if "old test data" doesn't interact with future test runs, I think it's fine to leave it behind.
An approach I've been taking when writing integration tests is to have the tests run against a different db than the application db. I tend to rebuild the test db as a precondition to each test run. That way you don't need a granular clean-up scheme for each test as each test run gets a clean slate between runs.
Most of my development is done using SQL Server, but in some cases I have run my tests against a SQL Server Compact Edition db, which is fast and efficient to rebuild between runs.
mbUnit has a very handy attribute Rollback that cleans up the database after finishing the test. However, you'll have to configure DTC (Distributed Transaction Coordinator) in order to be able to use it.
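If memory serves, usage is just an extra attribute on the test (treat this as a sketch and check the mbUnit documentation for your version):

using MbUnit.Framework;

[Test]
[Rollback] // runs the test inside a distributed transaction and rolls it back afterwards
public void Inserts_A_Customer_Row()
{
    // database changes made here are undone when the test completes
}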
I was having a comparable issue where one test's assertion was preventing cleanup and causing other tests to fail.
Hopefully this is of use to somebody, sometime.
[Test]
public void Collates_Blah_As_Blah()
{
    Assert.False(SINGLETON.Collection.Any());

    for (int i = 0; i < 2; i++)
        Assert.That(PROCESS(ValidRequest) == Status.Success);

    try
    {
        Assert.AreEqual(1, SINGLETON.Collection.Count);
    }
    finally
    {
        SINGLETON.Collection.Clear();
    }
}
The finally block will execute whether the assertion passes or fails, and it doesn't introduce the risk of false passes - which a catch block would!
The following is a skeleton of the test method pattern I use. It allows me to use a try/catch/finally to run the cleanup code in the finally block without losing the Assert failures.
[TestMethod]
public void TestMethod1()
{
    Exception defaultException = new Exception("No real exception.");
    try
    {
        #region Setup
        #endregion

        #region Tests
        #endregion
    }
    catch (Exception exc)
    {
        // If an Assert fails, this catches its exception so that it can be
        // rethrown in the finally block after the cleanup has run.
        defaultException = exc;
    }
    finally
    {
        #region Cleanup
        // cleanup code goes here
        if (!defaultException.Message.Equals("No real exception."))
        {
            throw defaultException;
        }
        #endregion
    }
}
I've written a class and want to test that it works well. For now I think the best way to do it is to create a new console application referencing the main project, then make a new instance of my class and mess with it. Unlike other approaches, this one gives me IntelliSense, plain keywords (no fully-qualified class names), and debugging.
Does anyone know a more convenient way to do this without making a new console app?
Using a console app to test your class is what I would call a "poor man's unit test."
You are on the right track in wanting to do this sort of testing and I (and most others on SO) would suggest using a unit testing framework of some sort to help you out. (Mocking may be important and useful to you as well.)
Here's the thing though. Regardless of what framework you use, or if you go with a console app to test your code, you do have to create a separate project, or a separate, significant chunk of code of some sort, to be able to execute tests properly and independently. That's just part of the process. It is an investment, but don't let the extra work keep you from doing it. It will save a lot of time, and your skin, a little while into the future. Maybe even next week.
Also, while you're looking up unit testing, make sure to also study test-driven development (TDD).
Unit testing is absolutely the way to go. Depending on what version of VS you are running, there may be unit testing functionality built in, or you may have to use an additional tool such as NUnit. Both options are good and will allow you to fully test your classes.
Bear in mind also, that a comprehensive suite of unit tests will make refactoring much easier in the long run as well. If you refactor and break your unit tests, you know you've made a boo-boo somewhere. :)
Unit testing is the way forward here; this is a good introductory article.
The basic concept of a unit test is that you isolate and invoke a specific portion of code and assert that the results are expected and within reason. For example, let's say you have a simple method to return the square of a floating point number:
public float Square(float value)
{
    return value * value;
}
A reasonable unit test would be that the method returns the correct value, handles negative values, etc.:
Assert.AreEqual(25, Square(5));
Assert.AreEqual(100, Square(-10));
Unit tests are also a good way to see how your code handles edge cases:
Assert.AreEqual(float.PositiveInfinity, Square(float.MaxValue)); // float overflow saturates to infinity rather than throwing
If you are using VS 2010, check out Pex and Moles...
http://research.microsoft.com/en-us/projects/pex/
The Console App approach is more of a test harness for your class, which is fine.
But you can also use a unit testing framework to test your class. Visual Studio has one built in or you can leverage something like NUnit.
Also, try the Object Test Bench in Visual Studio. It should allow you to create a new instance, modify and view properties, and call some methods. It usually only works with very simple apps, though.
If you use Visual Studio 2008 or higher you will be able to test your code using the MSTest framework:
1. Open the Test View window: Test / Windows / Test View;
2. Add a new unit test project: right-click on the Solution in Solution Explorer / Add / New Project / Test Project;
3. Remove all files apart from the UnitTest.cs file in the created test project;
4. Write your unit test in a method under the [TestMethod] attribute:
[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestMethod1()
    {
        var ranges = new Ranges();
        int sum = ranges.CountOccurrences(11);
        Assert.AreEqual(128, sum);
    }
}
5. Run your test from the Test View window added in step 1;
6. See the test results in the Test / Windows / Test Results window.
I've got a LOT of tests written for a piece of software (which is a GREAT thing) but it was built essentially as a standalone test in C#. While this works well enough, it suffers from a few shortcomings, not the least of which is that it isn't using a standard testing framework and ends up requiring the person running the test to comment out calls to tests that shouldn't be run (when it isn't desired to run the entire test 'suite'). I'd like to incorporate it into my automated testing process.
I saw that the Test Edition of VS 2008 has the notion of a 'Generic Test' that might do what I want, but we're not in a position to spend the money on that version currently. I recently started using the VS 2008 Pro version.
These test methods follow a familiar pattern:
Do some setup for the test.
Execute the test.
Reset for the next test.
Each of them returns a bool (pass/fail) and a string ref to a fail reason, filled in if it fails.
On the bright side, at least the test methods are consistent.
I am sitting here tonight contemplating the approach I might take tomorrow morning to migrate all this test code to a testing framework and, frankly, I'm not all that excited about the idea of poring over 8-9K lines of test code by hand to do the conversion.
Have you had any experience undertaking such a conversion? Do you have any tips? I think I might be stuck slogging through it all doing global search/replaces and hand-changing the tests.
Any thoughts?
If you use NUnit (which you should), you'll need to create a new test method for each of your current test methods. NUnit uses reflection to query the test class for methods marked with the [Test] attribute, which is how it builds its list of the tests that show up in the UI, and the test classes use the NUnit Assert method to indicate whether they've passed or failed.
It seems to me that if your test methods are as consistent as you say, all of those NUnit methods would look something like this:
[Test]
public void MyTest()
{
    string msg;
    bool result = OldTestClass.MyTest(out msg);
    if (!result)
    {
        Console.WriteLine(msg);
    }
    Assert.IsTrue(result, msg); // reports the failure reason when the old test fails
}
Once you get that to work, your next step is to write a program that uses reflection to get all of the test method names on your old test class and produces a .cs file that has an NUnit method for each of your original test methods (see the sketch below).
Annoying, maybe, but not extremely painful. And you'll only need to do it once.
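A rough sketch of such a one-off generator, assuming the old methods are public, static, return bool, and fill an out string as in the wrapper above (adjust the filter and the emitted call to match your actual signatures):

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;

public static class NUnitWrapperGenerator
{
    public static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendLine("using System;");
        sb.AppendLine("using NUnit.Framework;");
        sb.AppendLine();
        sb.AppendLine("[TestFixture]");
        sb.AppendLine("public class GeneratedTests");
        sb.AppendLine("{");

        // Treat every public static bool-returning method as a legacy test.
        var testMethods = typeof(OldTestClass)
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => m.ReturnType == typeof(bool));

        foreach (var method in testMethods)
        {
            sb.AppendLine("    [Test]");
            sb.AppendLine("    public void " + method.Name + "()");
            sb.AppendLine("    {");
            sb.AppendLine("        string msg;");
            sb.AppendLine("        bool result = OldTestClass." + method.Name + "(out msg);");
            sb.AppendLine("        Assert.IsTrue(result, msg);");
            sb.AppendLine("    }");
        }

        sb.AppendLine("}");
        File.WriteAllText("GeneratedTests.cs", sb.ToString());
    }
}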
You're about to live through the idiom "an ounce of prevention is worth a pound of cure". It's especially true in programming.
You make no mention of NUnit (which I think was bought by Microsoft for 2008, but don't hold me to that). Is there a particular reason you didn't just use NUnit in the first place?