I am currently trying to set up some integration tests in order to verify that my RavenDB queries return what I expect.
I followed the documentation and have a single test up and running.
The problem starts when I add an additional test method to the same class. I get the following exception:
System.InvalidOperationException : Cannot configure server after it was started. Please call 'ConfigureServer' method before any 'GetDocumentStore' is called.
So basically it tells me the server is already running. But this confuses me, since xUnit creates a new instance of the test class for every discovered test method in that class. Additionally, it calls Dispose() on any instance that implements IDisposable, which is indirectly implemented by the base class RavenTestDriver.
So what I think is happening:
xUnit creates a new instance of my test class
xUnit calls my test method
My test method calls ConfigureServer; the RavenDB Embedded Server is started
My test method finishes
xUnit calls Dispose() on my instance of the test class; the RavenDB Embedded Server is stopped
Rinse and repeat for the next test method
But it looks like my assumption in step 5 is wrong. The RavenDB Embedded Server seems never to be stopped. Additionally, I cannot find a way to stop it manually. I tried to dispose it manually with EmbeddedServer.Instance.Dispose(), but that doesn't change anything. (The .Instance gives a clue that the EmbeddedServer is probably a singleton, which may be part of the problem here.)
I also tried to move the ConfigureServer call into the constructor of the test class, since xUnit calls the constructor for every test method (just like the setup method in JUnit). But then I get the same result.
But the interesting part is: calling ConfigureServer in two different classes works just fine.
I have created a small reproducer repository.
So does anybody have an idea how to set up RavenDB in a unit/integration test environment where you want to run multiple tests against it?
Remove the ConfigureServer call from all tests and the constructor. Calling GetDocumentStore() will create the embedded server.
https://github.com/ravendb/ravendb/blob/e8f08f191e1b085421ca5b4db191b199e7a8fc69/src/Raven.TestDriver/RavenTestDriver.cs#L272
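For example, a minimal sketch (the MovieTests class and Movie type are illustrative, not from the question):

using System.Linq;
using Raven.TestDriver;
using Xunit;

public class MovieTests : RavenTestDriver
{
    [Fact]
    public void CanQueryMovies()
    {
        // GetDocumentStore() starts the embedded server on first use,
        // so no explicit ConfigureServer call is needed.
        using (var store = GetDocumentStore())
        {
            using (var session = store.OpenSession())
            {
                session.Store(new Movie { Title = "Heat" });
                session.SaveChanges();
            }

            using (var session = store.OpenSession())
            {
                var movies = session.Query<Movie>()
                    .Customize(x => x.WaitForNonStaleResults())
                    .ToList();
                Assert.Single(movies);
            }
        }
    }

    private class Movie
    {
        public string Title { get; set; }
    }
}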
If you want to configure the server, then you should do it in a static constructor:
static MovieTests_ConfigureInConstructor()
{
    ConfigureServer(new TestServerOptions
    {
        CommandLineArgs = new System.Collections.Generic.List<string> { "--RunInMemory=true" },
        FrameworkVersion = null,
    });
}
We are writing integration tests for our application (a minimal API built with .NET 6). We are working with xUnit and we have defined a fixture for the collection of tests (all tests belong to the same collection) rather than a class fixture. This way, the fixture is created only once and not for every class. The fixture contains a property that we call SUT, which holds the WebApplicationFactory instance used to get a client and make requests to our API.

We want to test how the application behaves as certain config values vary. To do this, we wrote a test that we will refer to as "ProblematicTest" from now on. ProblematicTest should not run in parallel with the rest (it should be in the same collection). As part of ProblematicTest's code, and only in that test's code, we ask the fixture to recreate the WebApplicationFactory instance with new config values in order to test the application behavior with them (we implemented a method in the fixture class to replace the WebApplicationFactory instance with a new one, and in the creation of this new instance we use AddJsonFile to modify the relevant config values).

And here comes the problem, described by the following facts:
When ProblematicTest is run alone (with no prior or subsequent execution of any other test), it works well, even though by default, before the test executes, the fixture contains a WebApplicationFactory instance created with the original config values. This means that the new instance of WebApplicationFactory is successfully created with the new relevant config values and everything works OK.
If one test is run before ProblematicTest (meaning a request to the tested API is made using the original WebApplicationFactory instance), then, when accessing the configuration during the execution of ProblematicTest, we can see that the application does have the new values in the config, as expected. But there is a third-party component that somehow still "remembers" the previous config values of the original WebApplicationFactory and works with them, causing the test to fail.
We think that even if the third-party component reads the config at the beginning of the application execution and never again, the new in-memory application should work with the new config values from the beginning, since we are working with a new instance of WebApplicationFactory. This capacity to "remember" the original config values is utterly unexpected to us, unless we have a wrong understanding of how WebApplicationFactory works internally.
If we run ProblematicTest before a second test, the execution of the former also affects the execution of the latter in relation to the third-party component, because it remembers the first values it used (the ones defined for ProblematicTest) and the second test starts failing.
And here come my questions:
Are we missing something regarding the expected behavior of a WebApplicationFactory instance when it is recreated? Shouldn't the creation of a new instance of this class cause a total reset of the application? What could explain this weird and unexpected behavior of a component of the application remembering old config values, even when the application, including that third-party component, is supposedly completely recreated between the execution of the two tests?
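For reference, a minimal sketch of the fixture pattern described above (all names are illustrative; AddInMemoryCollection stands in for the AddJsonFile call mentioned in the question):

using System.Collections.Generic;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.Configuration;
using Xunit;

// Collection fixture: created once for the whole test collection.
public class ApiFixture
{
    public WebApplicationFactory<Program> Sut { get; private set; } =
        new WebApplicationFactory<Program>();

    // Called from ProblematicTest to swap in a factory with new config values.
    public void RecreateSut(Dictionary<string, string> configOverrides)
    {
        Sut.Dispose();
        Sut = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(builder =>
                builder.ConfigureAppConfiguration(config =>
                    config.AddInMemoryCollection(configOverrides)));
    }
}

[CollectionDefinition("Api collection")]
public class ApiCollection : ICollectionFixture<ApiFixture> { }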
I have multiple unit tests where I check whether a method was called.
I use the NSubstitute mocking library to check whether a method was called, with the help of the Received() method, like this:
MessageHandling.Received().Submit(Messages.DATA_EXPORT_SUCCESS);
The tests work fine when I run them individually, but when I run them all, some of them fail for no apparent reason. When I debug the code, I see that the method that should be called is indeed called, but the Received() method from NSubstitute says there was no call at all.
I also call ClearReceivedCalls() in my TearDown method:
MessageHandling.ClearReceivedCalls();
But this does not seem to help.
Is there something else I should take care of, when using the Received() method?
My test functions are a bit more complicated than just checking for a call, but that check is the only reason my tests fail.
I assume MessageHandling is initialized as a single instance property that is used in every test? Try to make your test class stateless by initializing a new mocked instance in every test.
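A minimal sketch of that approach with NSubstitute and NUnit (Messages.DATA_EXPORT_SUCCESS comes from the question; the IMessageHandling interface name and the Exporter SUT are assumptions for illustration):

using NSubstitute;
using NUnit.Framework;

[TestFixture]
public class ExportTests
{
    private IMessageHandling _messageHandling; // assumed interface name

    [SetUp]
    public void SetUp()
    {
        // A fresh substitute per test: received calls cannot leak
        // from one test into the next.
        _messageHandling = Substitute.For<IMessageHandling>();
    }

    [Test]
    public void Export_Success_SubmitsSuccessMessage()
    {
        var sut = new Exporter(_messageHandling); // hypothetical SUT

        sut.Export();

        _messageHandling.Received().Submit(Messages.DATA_EXPORT_SUCCESS);
    }
}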
I have a setup with some VS projects which depend on each other. Simplified, imagine these projects:
API Project (ASP.NET WebAPI 2)
DummyBackendServer (WinForms with a WCF Service)
API.Test Project (Unit Test Project which should Test API-Project's Controllers)
Usually there would be (for example) an Android client, which connects to the API server (1) and requests some information. The API server would then connect to the DummyBackendServer (2) to request this information, generate an appropriate format, and send an answer to the Android client.
Now I need to create some unit tests for the API server. My problem is that I can't find a way to tell VS to start the DummyBackendServer (2) when it runs the unit tests. When I start the DummyServer first, I can't run the tests because the menu option is grayed out.
So, is there any way to tell VS to start another project the tests depend on?
For anyone who doesn't want to unit test the "correct" way and just wants to run multiple projects in the same solution, here is the answer.
Right-click the backend project -> Debug -> Start Without Debugging.
The interface will not be grayed out, so you can start other projects.
Start the tests with right-click -> Run Tests.
Or run your frontend with debugging as usual by setting it as the startup project (bold in the Solution Explorer) and pressing F5 or the green arrow (Start With Debugging).
Divide and conquer!
If the test (some will say these are not unit tests, but that is not part of this question) requires some services to be up, make that happen! Have them deployed to a dev or staging environment; then you only need to configure the connection from the API test assembly.
I would split the solution in two and call these integration tests. If you want them to be unit tests, you have what you need from the post above.
You should use an IoC container or something similar in your project, so you can use mocks of your other projects while running the unit tests.
Which one you select is up to you; personally I use Rhino.Mocks:
Create a mock repository:
MockRepository mocks = new MockRepository();
Add a mock object to the repository:
ISomeInterface robot = (ISomeInterface)mocks.CreateMock(typeof(ISomeInterface));
//If you're using C# 2.0, you may use the generic version and avoid upcasting:
ISomeInterface robot = mocks.CreateMock<ISomeInterface>();
"Record" the methods that you expect to be called on the mock object:
// this method has a return type, so wrap it with Expect.Call
Expect.Call(robot.SendCommand("Wake Up")).Return("Groan");
// this method has void return type, so simply call it
robot.Poke();
//Note that the parameter values provided in these calls represent the values we
//expect our mock to be called with. Similarly, the return value represents the value
//that the mock will return when this method is called.
//You may expect a method to be called multiple times:
// again, methods that return values use Expect.Call
Expect.Call(robot.SendCommand("Wake Up")).Return("Groan").Repeat.Twice();
// when no return type, any extra information about the method call
// is provided immediately after via static methods on LastCall
robot.Poke();
LastCall.On(robot).Repeat.Twice();
Set the mock object to a "Replay" state where, when called, it will replay the operations just recorded.
mocks.ReplayAll();
Invoke code that uses the mock object.
theButler.GetRobotReady();
Check that all calls were made to the mock object.
mocks.VerifyAll();
We are using Moq to perform some unit tests, and there's some strange behaviour; perhaps it's some configuration issue that I'm missing.
Basically, I have 2 tests (which call a Windows Workflow, which calls custom activities that call a method using Invoke; I don't know if this helps, but I want to give as much info as I can). The tests run OK when executed alone, but if I execute them in the same run, the first one passes and the second fails (it doesn't matter if I change their order; the second one always fails).
The mock is recreated every time and registered using Unity, e.g.:
MockProcessConfigurator = MockFactory.Create<IProcessConfigurator>();
MockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
[...]
InversionOfControl.Instance.Register<IProcessConfigurator>(MockProcessConfigurator.Object);
The invoked call (the WF custom activity) is:
var invoker = new WorkflowInvoker(new MyWorkflow());
invoker.Invoke(inputParameter);
The invoked call itself is:
MyModuleService.ProcessConfigurator.MyMethod(inputOrder);
When debugging, I see that the ProcessConfigurator is always mocked.
The call that fails in the test is something as simple as this:
MockEnvironment.MockProcessConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
When debugging, the method is actually called every time, so I suspect that there's something wrong with the mock instance. I'm a bit lost here, because things seem to be implemented correctly, but for some reason when they're run one after the other, there's a problem.
This type of error commonly occurs when the two tests share something.
For example, you set up your mock in your test setup with an expectation that a method will be called once, and then two tests each call that method once; the expectation fails because the method has now been called twice.
This suggests that you should move the setup of expectations into each test.
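For example, a sketch of per-test setup (IProcessConfigurator, Order, MyWorkflow, and InversionOfControl come from the question; the test class and names are illustrative):

using System.Activities;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ProcessConfiguratorTests
{
    private Mock<IProcessConfigurator> _mockProcessConfigurator;

    [SetUp]
    public void SetUp()
    {
        // A fresh mock per test, so call counts cannot accumulate across tests.
        _mockProcessConfigurator = new Mock<IProcessConfigurator>();
        _mockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));

        // Re-register the new mock so the container does not hold on
        // to the instance from a previous test.
        InversionOfControl.Instance.Register<IProcessConfigurator>(
            _mockProcessConfigurator.Object);
    }

    [Test]
    public void Workflow_CallsMyMethodExactlyOnce()
    {
        var invoker = new WorkflowInvoker(new MyWorkflow());
        invoker.Invoke();

        _mockProcessConfigurator.Verify(
            x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
    }
}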
A generic troubleshooting approach for this type of problem is to isolate the dependency between the two tests:
Move any setup-code to inside the tests.
Move any tear-down code to inside the tests.
Move any field initializers to inside the tests. (Those are only run once per fixture; see the sketch below!)
This should make both your tests pass when run together. Once you get green lights, you can start moving the duplicated stuff back out to initializers/setup one piece at a time, running the tests after each change you make.
You should be able to learn what is causing this coupling between the tests. Good luck!
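To illustrate the field-initializer pitfall from the list above (a sketch assuming NUnit, which by default reuses a single fixture instance for all tests in a class; the type names come from the question):

using Moq;
using NUnit.Framework;

[TestFixture]
public class IsolationExample
{
    // Pitfall: this initializer runs only once for the whole fixture,
    // so the mock (and its recorded calls) is shared by every test.
    private readonly Mock<IProcessConfigurator> _sharedMock =
        new Mock<IProcessConfigurator>();

    [Test]
    public void IsolatedTest()
    {
        // Safe: a mock created inside the test cannot be polluted
        // by calls made in other tests.
        var localMock = new Mock<IProcessConfigurator>();
        // ... exercise the code under test with localMock.Object ...
        localMock.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Never());
    }
}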
Thought I'd add an additional situation and solution I just came across:
If you are running tests from two separate projects at the same time and each project is using a different version of Moq, this same problem can happen.
For me, I had TestProjectA using Moq 4.2.14 and TestProjectB using Moq 4.2.15 by accident (courtesy of NuGet). When running a test from A and a test from B simultaneously, the first test succeeded and the second silently failed.
Adjusting both projects to use the same version solved the issue.
To expand on Sohnee's answer, I would check the setup/teardown methods to make sure you're tidying everything up properly. I've had similar issues when running tests in bulk that were caused by not tidying up my mock objects.
I don't know if this is relevant, but MockFactory.Create seems odd. I normally create mocks as follows:
var mockProcessConfigurator = new Mock<IProcessConfigurator>();
When using a MockFactory (which I have never needed), you would normally create an instance of it. From the Moq QuickStart:
var factory = new MockFactory(MockBehavior.Strict) { DefaultValue = DefaultValue.Mock };
Calling a static method on MockFactory seems to defeat the purpose. If you have a nonstandard naming convention where MockFactory is actually a variable of type MockFactory, that's probably not your issue (but it will be a constant source of confusion). If MockFactory is a property of your test class, ensure that it is recreated on SetUp.
If possible I would eliminate the factory, as it is a potential source of shared state.
EDIT: As an aside, WorkflowInvoker.Invoke takes an Activity as a parameter. Rather than creating an entire workflow to test a custom activity, you can just pass an instance of the custom activity. If that's what you want to test, it keeps your unit test more focused.
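For example (a sketch; MyCustomActivity and the input name are illustrative):

using System.Activities;
using System.Collections.Generic;

// Pass the custom activity directly instead of wrapping it in a workflow.
var inputs = new Dictionary<string, object> { { "InputOrder", inputOrder } };
IDictionary<string, object> outputs =
    WorkflowInvoker.Invoke(new MyCustomActivity(), inputs);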
How can I test the following method?
It is a method on a concrete class implementation of an interface.
I have wrapped the Process class with an interface that only exposes the methods and properties I need. The ProcessWrapper class is the concrete implementation of this interface.
public void Initiate(IEnumerable<Cow> cows)
{
    foreach (Cow c in cows)
    {
        c.Process = new ProcessWrapper(c);
        c.Process.Start();
        count++;
    }
}
There are two ways to get around this. The first is to use dependency injection. You could inject a factory and have Initiate call its Create method to get the kind of ProcessWrapper you need for your test.
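A sketch of the factory approach (IProcessWrapperFactory and IProcess are names I'm inventing for the question's unnamed wrapper interface, and CowProcessor is a stand-in for the class that owns Initiate):

using System.Collections.Generic;

public interface IProcessWrapperFactory
{
    // IProcess: the question's wrapper interface (name assumed).
    IProcess Create(Cow cow);
}

public class CowProcessor
{
    private readonly IProcessWrapperFactory factory;
    private int count;

    public CowProcessor(IProcessWrapperFactory factory)
    {
        this.factory = factory;
    }

    public void Initiate(IEnumerable<Cow> cows)
    {
        foreach (Cow c in cows)
        {
            // The injected factory decides which implementation to create,
            // so a test can supply a factory that returns mocks.
            c.Process = factory.Create(c);
            c.Process.Start();
            count++;
        }
    }
}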
The other solution is to use a mocking framework such as TypeMock, that will let you work around this. TypeMock basically allows you to mock anything, so you could use it to provide a mock object instead of the actual ProcessWrapper instances.
I'm not familiar with C# (I prefer mine without the hash), but you need some sort of interface to the process (IPC or whatever is the most convenient method) so you can send it test requests and get results back. At the simplest level, you would just send a message to the process and receive the result. Or you could have more granularity and send more specific commands from your test harness. It depends on how you have set up your unit testing environment, more precisely how you send the test commands, how you receive them and how you report the results.
I would personally have a test object inside the process that simply receives, runs, and reports the unit test results, and have the test code inside that object.
What does your process do? Is there any way you could check that it is doing what it's supposed to do? For example, it might write to a file or a database table. Or it might expose an API (IPC, web-service, etc.) that you could try calling with test data.
From a TDD perspective, it might make sense to plug in a "mock/test process" that performs some action you can easily check. (This may require code changes to allow your test code to inject something.) This way, you're only testing your invocation code, not necessarily an actual business process. You could then have different unit tests to test your business process.