We are using Moq 4 for our unit tests, specifically controller unit tests. We are using Moq's Verify() method to make sure an error was logged. The problem is the tests can pass but then fail the very next run.
We are using Serilog for our logger.
The Action method looks like
public IActionResult Index()
{
    try
    {
        var data = _repository.GetData();
        return View(data);
    }
    catch (Exception e)
    {
        Log.Error(e.Message);
        return View("Error"); // some result is needed here, or the method won't compile
    }
}
So the unit test is using
_mockLogger.Verify(x => x.Write(LogEventLevel.Error, It.IsAny<string>()));
The mock logger is set up in the test's constructor like
var _mockLogger = new Mock<Serilog.ILogger>();
Log.Logger = _mockLogger.Object;
//...
and the repository is mocked to throw an exception when invoked.
When it fails, we are getting the error message
"Moq.MoqException expected invocation on the Mock at least once but was never peformed x=>x.Write(LogEventLevel.Error,It.IsAny<string>())"
Any Ideas?
It's not really possible to see what the problem is from the posted code, and I appreciate how hard it can be to make an MCVE for this, so I'm going to take two guesses.
Guess 1: I suspect the cause of your issue is the use of statics in your code, specifically to do with the logger.
I suspect what's happening is that other tests (not shown in the post) are also modifying/defining how the logger should behave, and since the logger is static, the tests are interfering with each other.
Try redesigning the code so that the logging functionality is dependency-injected into the class under test, using Serilog's ILogger interface: store it in a readonly field and log through that field rather than through the static Log class.
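A minimal sketch of that shape (assuming an ASP.NET Core controller named HomeController and a repository interface IRepository, neither of which is shown in the question):

using System;
using Microsoft.AspNetCore.Mvc;
using Serilog;

public class HomeController : Controller
{
    private readonly IRepository _repository;
    private readonly ILogger _logger; // Serilog.ILogger, injected per instance

    public HomeController(IRepository repository, ILogger logger)
    {
        _repository = repository;
        _logger = logger;
    }

    public IActionResult Index()
    {
        try
        {
            var data = _repository.GetData();
            return View(data);
        }
        catch (Exception e)
        {
            _logger.Error(e, "Failed to load data");
            return View("Error");
        }
    }
}

The test then verifies against the mock it passed into the constructor, e.g. _mockLogger.Verify(x => x.Error(It.IsAny<Exception>(), It.IsAny<string>())), with no static state for other tests to trample on.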
Guess 2: Based on the part of the post which says "...setup in the test's constructor": you haven't said (or tagged) which testing framework you're using, but the handful I've used prefer you to do this kind of thing in attributed methods rather than in the test class's constructor. For example, NUnit has OneTimeSetUp (run before any of the tests in that class), SetUp (before each test in that class), TearDown (after each test in that class), and OneTimeTearDown (after all of the tests in that class). It's possible that the constructors of your tests are being called in an order you're not expecting, and which is not supported by your testing framework, whereas the sequencing of the attributed methods is guaranteed by the framework.
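If you do keep the static Log.Logger, at the very least recreate the mock before each test so one test's expectations can't leak into the next. A sketch using NUnit's attributes (assuming the fixture owns the mock as a field):

private Mock<Serilog.ILogger> _mockLogger;

[SetUp]
public void SetUp()
{
    // Fresh mock per test; assign it to the static logger
    _mockLogger = new Mock<Serilog.ILogger>();
    Log.Logger = _mockLogger.Object;
}

[TearDown]
public void TearDown()
{
    // Resets Log.Logger to the silent default so other fixtures aren't affected
    Log.CloseAndFlush();
}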
Related
public bool A(UserRequest foo)
{
    ClientRequest boo = B(foo); // mapping local model to client model
    var result = C(boo);
    return result;
}
I want to write a unit test for method A that exercises method B, but I don't want the test to call method C. Method C is a private method, and it makes calls to a third-party client. I am unable to set up method C in my unit test because the type ClientRequest isn't referenced in the test assembly. How can this be done without also adding a reference to the client DLL to my test assembly? How do I skip calling method C?
C is a private method
Things are private for a reason. They are implementation details to which consuming code shouldn't be coupled. And unit tests are consuming code.
it makes calls to a third party client
Therein lies the problem with your unit tests. Don't try to break apart the class being tested, digging into its internals and ultimately modifying what it does, which would invalidate the tests in the first place.
Instead, isolate and mock the dependency. Somewhere in C() this class has an external dependency. Instead of obscuring that dependency deep within the class, wrap it in an interface/implementation and provide that implementation to the class. (This is called Dependency Injection. There are frameworks which provide rich functionality around the concept, but the concept itself can be achieved manually for simple cases as well.)
So when application code uses this class, instances are provided with an implementation of the dependency which calls the external service. And when unit tests use this class, instances are provided with a mock implementation that pretends to call the external service.
Then your tests can include mocking the results of that service as well, triggering controlled failure responses to test how the class handles them.
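For example, a sketch of that arrangement (all names here are invented for illustration; the question doesn't show the real client API):

// The interface speaks in your own types, so the test assembly never
// needs a reference to the third-party DLL.
public interface IThirdPartyGateway
{
    bool Submit(UserRequest request);
}

// The only class that references the third-party client: it maps the
// local model to the client model and makes the external call.
public class ThirdPartyGateway : IThirdPartyGateway
{
    public bool Submit(UserRequest request)
    {
        ClientRequest boo = Map(request); // mapping local model to client model
        // ... call the third-party client with boo here ...
        return true;
    }

    private static ClientRequest Map(UserRequest request) => new ClientRequest();
}

public class MyService
{
    private readonly IThirdPartyGateway _gateway;

    public MyService(IThirdPartyGateway gateway) => _gateway = gateway;

    // What used to be B + C now goes through the injected dependency
    public bool A(UserRequest foo) => _gateway.Submit(foo);
}

// In the test assembly (no client DLL reference needed):
var gateway = new Mock<IThirdPartyGateway>();
gateway.Setup(g => g.Submit(It.IsAny<UserRequest>())).Returns(true);
var sut = new MyService(gateway.Object);
// assert sut.A(new UserRequest()) is true with your test framework of choice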
I am working on a test library using NUnit, and am generating a custom report while the tests run.
In the TearDown of my tests I call a method that will report the result of the test. It works just fine if the test passes, but is never reached if the test fails, is ignored, or inconclusive.
To make things more difficult, the "//do stuff" in the TearDown can also cause the test to fail, in which case it would still need to be logged. But Assert throws an exception, which means execution leaves the block and never reaches the reporting code.
[SetUp]
public void ExampleSetup()
{
    //whatever
}

[Test]
public void ExampleTest()
{
    //whatever
}

[TearDown]
public void ExampleTearDown()
{
    //do stuff
    someObject.ReportTestResult();
}
More Info
- Using NUnit 3.2.0
Looking at the Framework Extensibility chapter of the NUnit documentation, we can see that you need to use the Action Attribute.
The Action Attribute extension point is:
designed to better enable composability of test logic by creating
attributes that encapsulate specific actions to be taken before or
after a test is run.
Summary of a basic implementation of this extension point
You need to implement the ITestAction interface in your own attribute class, say FooBarActionAttribute.
Given that, you implement BeforeTest(), AfterTest(), and the Targets property.
For a basic scenario you perform your custom operations in those two methods, as sketched below.
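A minimal sketch of such an attribute (the reporting line is a stand-in for the someObject.ReportTestResult() call from the question):

using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public class FooBarActionAttribute : Attribute, ITestAction
{
    // Run this action around each individual test case
    public ActionTargets Targets => ActionTargets.Test;

    public void BeforeTest(ITest test)
    {
        // custom setup before the test runs
    }

    public void AfterTest(ITest test)
    {
        // Runs even when the test failed, was ignored, or was inconclusive;
        // the outcome is available from the current test context.
        var outcome = TestContext.CurrentContext.Result.Outcome;
        // someObject.ReportTestResult(test.FullName, outcome);
    }
}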
The final thing you need to do is to use this attribute, for example as:
[Test][FooBarActionAttribute()]
public void Should_Baz_a_FooBar() {
...
}
These will get executed right before and after the test method runs.
For more advanced techniques please consult the linked documentation, it's pretty straightforward.
Frankly, this is just a bad idea. TearDown is part of your test; that's why it can fail. (But note that NUnit treats an assert failure in teardown as an error rather than a failure.)
Since a test can do anything - good or bad, right or wrong - putting your logging into the test code is unreliable. You should be logging in the code that runs tests, not in the tests themselves. As stated above, the TearDown is a part of your test. So is any ActionAttribute you may define, as suggested in another answer. Don't do logging there.
Unfortunately, there is a history of doing logging within the tests, because NUnit didn't supply an alternative, or at least not one that was easy to use. For NUnit V2, you could create a test event listener addin for exactly this purpose. You still can, if that's the version of NUnit you are using.
For NUnit 3.0 and higher, you should create a listener extension that works with the TestEngine. The TestEngine and its extensions are completely separate from your tests. Once you have a well-tested extension, you will be able to use it with all your tests and the tests won't be cluttered with logging code.
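The skeleton of such an engine extension looks roughly like this (a sketch against the NUnit 3 engine API; the listener lives in its own assembly, separate from the test code):

using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension(Description = "Writes test results to a custom report")]
public class CustomReportListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // Each event arrives as an XML fragment (e.g. a <test-case> element
        // carrying the result); parse it and append to the custom report.
    }
}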
I have tests written in xUnit using the InlineData and MemberData attributes. I would like to run the tests via code elsewhere in my project and have the attributes automatically fill in test data like they normally do when run through the VS test runner.
If it weren't for the attributes, I would just call the methods directly like any other normal method; the asserts are still checked and it works fine. But if I call a method that has the attributes directly, the attributes are ignored and I must provide all the test data manually through code. Is there some sort of test-runner class in xUnit that I can reuse to accomplish this? I've been trying to dig through their API to no avail.
Why I want to do this will take some explanation, but bear with me. I'm writing tests against specific interfaces rather than their concrete implementations (think standard collection interfaces, for example). There's plenty there to test, and I don't want to copy-paste the tests for each concrete implementer (there could be dozens). I write the tests once and then pass each concrete implementation of the interface as the first argument to the test: a subject to test on.
But this leaves a problem. xUnit sees the tests and wants to run them, but it can't, because there are no concrete implementations available at this layer; there's only the interface. So I want to write tests at the higher layer that just new up the concrete implementations and then invoke the interface tests, passing in the new subjects. I can easily do this for tests that only accept one argument (the subject), but for tests where I'm also using InlineData or MemberData I would like to reuse the test cases already provided and just add the subject as the first argument.
Available for reference is the GitHub issue How to programmatically run XUnit tests from the xUnit.net project.
The class AssemblyRunner is now part of Xunit.Runner.Utility.
From the linked issue, xUnit.net contributor Brad Wilson provided a sample runner in the samples.xunit project on GitHub. This program demonstrates the techniques described in the issue. Namely, the portion responsible for running the tests after they have been discovered is as follows:
using (var runner = AssemblyRunner.WithAppDomain(testAssembly))
{
    runner.OnDiscoveryComplete = OnDiscoveryComplete;
    runner.OnExecutionComplete = OnExecutionComplete;
    runner.OnTestFailed = OnTestFailed;
    runner.OnTestSkipped = OnTestSkipped;

    Console.WriteLine("Discovering...");
    runner.Start(typeName);

    finished.WaitOne();  // a ManualResetEvent, set in OnExecutionComplete
    finished.Dispose();

    return result;
}
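The callbacks themselves are ordinary methods; in the sample program they look roughly like this (a sketch following the samples.xunit code, where finished is a static ManualResetEvent and result is the eventual exit code):

static void OnDiscoveryComplete(DiscoveryCompleteInfo info)
{
    Console.WriteLine($"Running {info.TestCasesToRun} of {info.TestCasesDiscovered} tests...");
}

static void OnTestFailed(TestFailedInfo info)
{
    Console.WriteLine($"[FAIL] {info.TestDisplayName}: {info.ExceptionMessage}");
    result = 1;
}

static void OnTestSkipped(TestSkippedInfo info)
{
    Console.WriteLine($"[SKIP] {info.TestDisplayName}: {info.SkipReason}");
}

static void OnExecutionComplete(ExecutionCompleteInfo info)
{
    Console.WriteLine($"Finished: {info.TotalTests} tests, {info.TestsFailed} failed");
    finished.Set();  // unblocks the WaitOne() in the snippet above
}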
For a deeper dive, he describes a method using XunitFrontController and TestDiscoveryVisitor to find and run tests. This is what AssemblyRunner does for its implementation.
Never mind, I figured it out. Taking a closer look at xUnit's attribute hierarchy, I found that the data attributes (InlineData, MemberData, etc.) have a GetData method you can call to retrieve the set of data rows they represent. With a little reflection I can easily find all the tests in my test class and call the test methods, invoking the data attributes' GetData method where present, and perform the tests via my own code that way. The GetData part would have been much harder if I had to roll my own version of it. Thank you, xUnit authors, for not forcing me to do that.
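For reference, a sketch of that approach (xUnit 2.x, where DataAttribute.GetData(MethodInfo) yields the argument rows; the subject parameter is the interface implementation prepended as the first argument, per the question):

using System.Reflection;
using Xunit.Sdk;

static void RunInterfaceTest(object testClassInstance, MethodInfo testMethod, object subject)
{
    // InlineData, MemberData, etc. all derive from DataAttribute
    foreach (DataAttribute dataAttribute in testMethod.GetCustomAttributes<DataAttribute>())
    {
        foreach (object[] row in dataAttribute.GetData(testMethod))
        {
            var args = new object[row.Length + 1];
            args[0] = subject;
            row.CopyTo(args, 1);
            testMethod.Invoke(testClassInstance, args);  // failed asserts throw
        }
    }
}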
I'm writing a set of unit tests to test a CRUD system.
I need to register a user in Test1 - which returns a ServiceKey
I then need to add data in Test2 for which I need the ServiceKey
What is the best way to pass the ServiceKey? I tried to set it in the TestContext, but it just seems to disappear between the tests.
You should not share any state between unit tests. One of the very important properties of good unit tests is independence: tests should not affect each other.
See this StackOverflow post: What Makes a Good Unit Test?
EDIT: Answer to comment
To share logic/behaviour (a method), you can extract the common code into a helper method and call it from different tests; for instance, a helper method which creates a user mock:
private IUser CreateUser(string userName)
{
    var userMock = MockRepository.GenerateMock<IUser>();
    userMock.Expect(x => x.UserName).Return(userName);
    return userMock;
}
The idea of unit tests is that each test checks one piece of functionality. If you create dependencies between your tests, it is no longer certain that they will pass all the time (they might get executed in a different order, etc.).
What you can do in your specific case is keep Test1 as it is; it only focuses on the functionality of the registration process. You don't have to save that ServiceKey anywhere; just assert inside the test method.
For the second test you have to set up (fake) everything it needs to run successfully. It is generally a good idea to follow the "Arrange, Act, Assert" principle, where you set up your test data, act upon it, and then check that everything worked as intended (it also adds clarity and structure to your tests).
Therefore it is best to fake the ServiceKey you would get in the first test, as sketched below; this way it is also much easier to control the data you want to test. Use a mocking framework (e.g. Moq, or Fakes in VS2012) to arrange your data the way you need it. Moq is a very lightweight mocking framework; you should check it out if you are not yet using any mocking utilities.
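For example, the second test in Arrange-Act-Assert form (NUnit-style, with Moq; every name here is invented, since the question doesn't show the CRUD API):

[Test]
public void AddData_WithValidServiceKey_ReturnsTrue()
{
    // Arrange: a fabricated key stands in for the one Test1 would have produced
    var serviceKey = Guid.NewGuid();              // hypothetical key type
    var repository = new Mock<IDataRepository>(); // hypothetical dependency
    var sut = new DataService(repository.Object);

    // Act
    var result = sut.AddData(serviceKey, "some data");

    // Assert
    Assert.IsTrue(result);
}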
Hope this helps.
We are using Moq to perform some unit tests, and there's some strange behaviour; perhaps it's some configuration issue that I'm missing.
Basically, I have two tests (which call a Windows Workflow, which calls custom activities that call a method using Invoke; I don't know if this helps, but I want to give as much info as I can). The tests run OK when executed alone, but if I execute them in the same run, the first one passes and the second one fails (it doesn't matter if I change their order, the second one always fails).
The mock is recreated every time and loaded using Unity, e.g.:
MockProcessConfigurator = MockFactory.Create<IProcessConfigurator>();
MockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
[...]
InversionOfControl.Instance.Register<IProcessConfigurator>(MockProcessConfigurator.Object);
The invoked call (WF custom activity) is
var invoker = new WorkflowInvoker(new MyWorkflow());
invoker.Invoke(inputParameter);
The call (Invoked call) is
MyModuleService.ProcessConfigurator.MyMethod(inputOrder);
When debugging, I see that the ProcessConfigurator is always mocked.
The call that fails in the test is as simple as this:
MockEnvironment.MockProcessConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
When debugging, I can see that the method is actually called every time, so I suspect there's something wrong with the mock instance. I'm a bit lost here, because things seem to be implemented correctly, but for some reason when the tests run one after the other, there's a problem.
This type of error commonly occurs when the two tests share something.
For example, you set up your mock with an expectation that a method will be called 1 time in your test setup, and then two tests each call that method 1 time - your expectation will fail because it has now been called 2 times.
This suggests that you should be moving the set up of expectations into each test.
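Concretely, using the names from the question, each test would arrange (and verify) its own mock instance, along the lines of this sketch:

[Test]
public void Workflow_InvokesProcessConfigurator_ExactlyOnce()
{
    // Arrange: a fresh mock, registered just for this test
    var processConfigurator = new Mock<IProcessConfigurator>();
    processConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
    InversionOfControl.Instance.Register<IProcessConfigurator>(processConfigurator.Object);

    // Act
    var invoker = new WorkflowInvoker(new MyWorkflow());
    invoker.Invoke(new Dictionary<string, object> { /* workflow inputs */ });

    // Assert against this test's own mock, not a shared one
    processConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
}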
A generic troubleshooting approach for this type of problem is to try to isolate the dependency between the two tests:
Move any setup-code to inside the tests.
Move any tear-down code to inside the tests.
Move any field initializers to inside the tests. (Those are only run once per fixture!)
This should make both your tests pass when run together. Once you have the green lights, you can start moving the duplicated stuff back out into initializers/setup one piece at a time, running the tests after each change you make.
You should be able to learn what is causing this coupling between the tests. Good luck!
Thought I'd add an additional situation and solution I just came across:
If you are running tests from two separate projects at the same time and each project is using a different version of Moq, this same problem can happen.
For me, I had TestProjectA using Moq 4.2.14 and TestProjectB using Moq 4.2.15 by accident (courtesy of NuGet). When running a test from A and a test from B simultaneously, the first test succeeded and the second silently failed.
Adjusting both projects to use the same version solved the issue.
To expand on the answer Sohnee gave, I would check the setup/teardown methods to make sure you're tidying everything up properly. I've had similar issues when running tests in bulk that came down to not having tidied up my mock objects.
I don't know if this is relevant, but MockFactory.Create seems odd. I normally create mocks as follows:
var mockProcessConfigurator = new Mock<IProcessConfigurator>();
When using a MockFactory (which I have never needed), you would normally create an instance of it. From the Moq quickstart:
var factory = new MockFactory(MockBehavior.Strict) { DefaultValue = DefaultValue.Mock };
Calling a static method on MockFactory seems to defeat the purpose. If you have a nonstandard naming convention where MockFactory is actually a variable of type MockFactory, that's probably not your issue (but it will be a constant source of confusion). If MockFactory is a property of your test class, ensure that it is recreated on SetUp.
If possible I would eliminate the factory, as it is a potential source of shared state.
EDIT: As an aside, WorkflowInvoker.Invoke takes an Activity as a parameter, so rather than creating an entire workflow to test a custom activity, you can just pass an instance of the custom activity. If that's what you want to test, it keeps your unit test more focused.
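That is, something like this (assuming the custom activity is named MyCustomActivity):

// Exercise the custom activity in isolation instead of hosting a whole workflow
var outputs = WorkflowInvoker.Invoke(new MyCustomActivity
{
    // set the activity's input arguments here
});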