A lot of our ViewModel unit tests create a ViewModel in the Arrange phase, call OnNavigatedTo() in the Act phase, and assert some things that should have happened after OnNavigatedTo finished (e.g. some properties contain certain values).
When the implementation of OnNavigatedTo() contains asynchronous method calls (like loading data from the backend), we have a problem: OnNavigatedTo returns void, so we cannot await its completion.
In our unit tests, we mock the calls to the backend, so they return immediately and the problem almost never shows up. However, we had a case where exactly this scenario caused an issue on the build server (running on Linux), and I suspect this always creates a kind of race condition which just works by accident most of the time.
One workaround we came up with was to provide a public method returning a Task which contains the implementation of OnNavigatedTo and is used by the unit tests, but it's generally not a good idea to extend the public API surface of the system under test just for the sake of testing.
Moving all the code to IInitializeAsync.InitializeAsync is, in my opinion, not an option either, as those two lifecycle hooks aren't equivalent and InitializeAsync might not get called on every page navigation.
So my question is: How can we reliably unit test code in INavigationAware.OnNavigatedTo which makes async calls?
One workaround we came up with was to provide a public method returning a Task which contains the implementation of OnNavigatedTo and is used by the unit tests
I'd do exactly this, except that I'd use internal (with InternalsVisibleTo if the fixture must be in another assembly) or even private (if you can make the fixture a nested class).
Alternatively, define your own INavigatedToAsyncForTest (with methods that return Task) and implement it explicitly to limit the discoverability of these methods.
but it's generally not a good idea to extend the public API surface of the system under test just for the sake of testing.
True, but making the method internal screams "this is not public API", especially if the type itself is internal (which view models should be, unless you have to put your views in a different assembly), so you can somewhat mitigate this.
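For illustration, here is a minimal, self-contained sketch of the internal-Task idea. All names are made up, and NavigationContext/INavigationAware are local stand-ins for Prism's types; the point is that OnNavigatedTo stores the Task it starts, so a test (given InternalsVisibleTo) can await it before asserting:

```csharp
using System;
using System.Threading.Tasks;

// Stand-ins for Prism's types so the sketch compiles on its own.
public class NavigationContext { }
public interface INavigationAware { void OnNavigatedTo(NavigationContext context); }

public class OrdersViewModel : INavigationAware
{
    // Captures the async work started by OnNavigatedTo. Marked internal so
    // only the test assembly (via InternalsVisibleTo) can await it.
    internal Task? NavigationTask { get; private set; }

    public string Status { get; private set; } = "new";

    public void OnNavigatedTo(NavigationContext context)
    {
        NavigationTask = LoadAsync();
    }

    private async Task LoadAsync()
    {
        await Task.Delay(10); // stands in for an async backend call
        Status = "loaded";
    }
}

public static class Program
{
    public static async Task Main()
    {
        // What the unit test would do: act, then await the captured task.
        var vm = new OrdersViewModel();
        vm.OnNavigatedTo(new NavigationContext());
        await vm.NavigationTask!; // no race: assertions run after completion
        Console.WriteLine(vm.Status);
    }
}
```

The public surface stays exactly what INavigationAware demands; only the test assembly sees the extra property.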
I've been reading a lot about unit testing recently. I'm currently reading The Art of Unit Testing by Roy Osherove. But one problem isn't properly addressed in the book: How do you ensure that your stubs behave exactly like the "real thing" does?
For example, I'm trying to create an ImageTagger application. There, I have a class ImageScanner whose job it is to find all images inside a folder. The method I want to test has the following signature: IEnumerable<Image> FindAllImages(string folder). If there aren't any images inside that folder, the method is supposed to return null.
The method itself issues a call to System.IO.Directory.GetFiles(..) in order to find all the images inside that folder.
Now, I want to write a test which ensures that FindAllImages(..) returns null if the folder is empty. As Osherove writes in his book, I should extract an interface IDirectory which has a single method GetFiles(..). This interface is injected into my ImageScanner class. The actual implementation just calls System.IO.Directory.GetFiles(..). The interface, however, allows me to create stubs which can simulate Directory.GetFiles() behavior.
Directory.GetFiles returns an empty array if there aren't any files present. So my stub would look like this:
class EmptyFolderStub : IDirectory
{
    public string[] GetFiles(string path)
    {
        return new string[] { };
    }
}
I can just inject EmptyFolderStub into my ImageScanner class and test whether it returns null.
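Such a test could look roughly like this (an NUnit-style sketch; that ImageScanner takes the IDirectory through its constructor is an assumption about the design):

```csharp
using NUnit.Framework;

[TestFixture]
public class ImageScannerTests
{
    [Test]
    public void FindAllImages_EmptyFolder_ReturnsNull()
    {
        // EmptyFolderStub simulates a folder containing no files
        var scanner = new ImageScanner(new EmptyFolderStub());

        var result = scanner.FindAllImages(@"C:\some\folder");

        Assert.IsNull(result);
    }
}
```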
Now, what if I decide that there's a better library for searching for files? But its GetFiles(..) method throws an exception or returns null if there are no files to be found. My test still passes since the stub simulates the old GetFiles(..) behavior. However, the production code will fail since it isn't prepared to handle a null or an exception from the new library.
Of course, you could say that by extracting an interface IDirectory there also exists a contract which guarantees that the IDirectory.GetFiles(..) method is supposed to return an empty array. So technically I have to test whether the actual implementation satisfies that contract. But apparently, aside from integration testing, there's no way to tell if the method actually behaves this way. I could, of course, read the API specification and make sure that it returns an empty array, but I don't think this is the point of unit testing.
How can I overcome this problem? Is this even possible with unit testing or do I have to rely on integration tests to capture edge cases like this? I feel like I'm testing something which I don't have any control over. But I would at least expect that there is a test which breaks when I introduce a new incompatible library.
In unit testing, identifying the SUT (system under test) is very important. As soon as it is identified, its dependencies should be replaced with stubs.
Why? Because we want to pretend that we are living in a perfect world of bug-free collaborators, and under this condition we want to check how the SUT alone behaves.
Your SUT is surely FindAllImages. Stick to this to avoid getting lost. All stubs are actually replacements of dependencies (collaborators) that should work perfectly, without any failure (this is the reason for their existence). Stubs cannot fail a test. Stubs are imaginary perfect objects.
Pay attention: this configuration has an important meaning. There is a philosophy behind it:
If a test passes, given that all of its dependencies work fine (whether stubs or actual objects), the SUT is guaranteed to behave as expected.
In other words, if the dependencies work, then the SUT works; hence a green test retains its meaning in any environment: If A --> Then B
But it doesn't say whether the SUT's test should pass or fail if a dependency fails. IMHO, any further logical interpretation is misleading.
If Not A --> Then ??? (we can't say anything)
In summary:
How the SUT should react to a failing dependency is another test scenario. You may design a stub which throws an exception and check for the SUT's expected behavior.
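For instance, a second stub could simulate the failing dependency (a sketch; the exception type and the SUT's expected reaction are assumptions for illustration):

```csharp
using System.IO;
using NUnit.Framework;

// A stub that pretends the underlying file API failed
class ThrowingFolderStub : IDirectory
{
    public string[] GetFiles(string path)
        => throw new IOException("simulated I/O failure");
}

[TestFixture]
public class ImageScannerFailureTests
{
    [Test]
    public void FindAllImages_DirectoryThrows_PropagatesException()
    {
        var scanner = new ImageScanner(new ThrowingFolderStub());

        // Or assert a null/empty result instead, if swallowing the
        // exception is the behavior you decide the SUT should have.
        Assert.Throws<IOException>(
            () => scanner.FindAllImages(@"C:\some\folder"));
    }
}
```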
Of course you could say that by extracting an Interface IDirectory there also exists a contract which guarantees that the IDirectory.GetFiles(..) method is supposed to return an empty array. ... I could, of course, read the API specification and make sure that it returns an empty array, but I don't think this is the point of unit testing.
Yes, I would say that. You didn't change the signature, but you're changing the contract. An interface is a contract, not just a signature. And that's why that is the point of unit testing - if the contract does its part, this unit does its part.
If you wanted extra peace of mind, you could write unit tests against IDirectory, and in your example, those would break (expecting empty string array). This would alert you to contract changes if you change implementations (or a new version of current implementation breaks it).
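One way to set that up is an abstract contract fixture that every IDirectory implementation must pass (an NUnit-style sketch; the SystemIoDirectory class name is made up):

```csharp
using System.IO;
using NUnit.Framework;

// Contract tests: every IDirectory implementation inherits these.
public abstract class DirectoryContract
{
    protected abstract IDirectory CreateDirectory();
    protected abstract string CreateEmptyFolder();

    [Test]
    public void GetFiles_OnEmptyFolder_ReturnsEmptyArray()
    {
        var directory = CreateDirectory();

        var files = directory.GetFiles(CreateEmptyFolder());

        Assert.IsNotNull(files); // breaks for a null-returning library
        Assert.IsEmpty(files);   // (a throwing library fails the test too)
    }
}

// Concrete fixture for the System.IO-based implementation
public class SystemIoDirectoryTests : DirectoryContract
{
    protected override IDirectory CreateDirectory() => new SystemIoDirectory();

    protected override string CreateEmptyFolder() =>
        Directory.CreateDirectory(
            Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())).FullName;
}
```

Swapping in a new file-search library then means adding one more subclass of DirectoryContract; if the new library returns null or throws on empty folders, that fixture goes red before production does.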
I'm new to testing and I just started testing my MVC application.
Currently I'm testing whether my controller's action methods are calling the right repository methods, which in turn read or write data from the database.
I'm also testing whether the return type of the action method is View, PartialView, RedirectToRoute, etc.
I've got some comments saying that testing whether the controller's action method calls the right function in the repository doesn't really make sense. Is that true?
What should I include in my unit tests for my MVC application, which uses the Repository pattern?
It could make sense to check whether your action calls the correct method on your repository, but you'll need to mock the repository to avoid accessing the database. Unit tests should be isolated from external components.
Although it's not ideal, you could replace your "real" database with a lightweight in-memory SQLite database to avoid mocking your database access in your tests.
I personally use Moq as my mocking framework, but there are plenty of mature mocking frameworks for .NET.
Take into account that testing whether a method is called checks behavior instead of state. This makes the test more fragile, as it becomes dependent on the internal implementation, but depending on your scenario it can be perfectly valid.
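A sketch of both styles in one test, using Moq (the controller and repository names are made up; ViewResult comes from ASP.NET MVC):

```csharp
using System.Collections.Generic;
using System.Web.Mvc;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ProductControllerTests
{
    [Test]
    public void Index_ReturnsViewAndQueriesRepository()
    {
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetAll()).Returns(new List<Product>());

        var controller = new ProductController(repository.Object);

        var result = controller.Index() as ViewResult;

        Assert.IsNotNull(result);                         // state: a view came back
        repository.Verify(r => r.GetAll(), Times.Once()); // behavior: repo was hit once
    }
}
```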
Unit testing is about testing a component's behavior in isolation, meaning that while testing a specific component, this component doesn't interact with any external components.
Usually, the way to do that is using mocks. All your dependencies must be mocked so you can control them.
Testing whether a method has been called is valid. If the logic is not in your tested component, then your job is done: your component calls one function in one case and x, y, and z in another case. Is the behavior correct? That's good enough.
If you have difficulties testing because you have a database dependency, that's usually a design problem. Usually this is solved by putting a database abstraction in front of the database, whose only job is to make calls to the database and return its values. That abstraction can be mocked and injected into your tested class. That way, you can even return pre-configured values to your tested class and continue the process.
This depends on the scenario. For example, in a controller you have one action, bool SaveEmployee(), which internally calls a service and then the database layer to save. Testing whether the employee is actually saved in the database does not make sense here, as that belongs in a separate unit test for the corresponding database-layer function. Here, you just need to verify the resulting status: success, failure, duplicate, or an exception thrown. You can simply mock the function, return a bool or string (like "Success", as appropriate), and compare the actual output with the expected output.
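A sketch of that idea with Moq (the service and controller names are illustrative, not from any real codebase):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class EmployeeControllerTests
{
    [Test]
    public void SaveEmployee_WhenServiceSucceeds_ReturnsTrue()
    {
        // The database is never touched; the service layer is mocked.
        var service = new Mock<IEmployeeService>();
        service.Setup(s => s.Save(It.IsAny<Employee>())).Returns(true);

        var controller = new EmployeeController(service.Object);

        // We only verify the controller's reaction to the mocked outcome.
        Assert.IsTrue(controller.SaveEmployee(new Employee()));
    }
}
```

Whether Save actually persists anything correctly is the database layer's own test's concern.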
We are using Moq to perform some unit tests, and there's some strange behaviour; perhaps it's some configuration issue that I'm missing.
Basically, I have 2 tests (which call a Windows Workflow, which calls custom activities that call a method using Invoke; I don't know if this helps, but I want to give as much info as I can). The tests run OK when executed alone, but if I execute them in the same run, the first one passes and the second fails (it doesn't matter if I change their order; the 2nd one always fails).
The mock is recreated every time and registered using Unity, e.g.:
MockProcessConfigurator = MockFactory.Create<IProcessConfigurator>();
MockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
[...]
InversionOfControl.Instance.Register<IProcessConfigurator>(MockProcessConfigurator.Object)
The invoked call (WF custom activity) is
var invoker = new WorkflowInvoker(new MyWorkflow());
invoker.Invoke(inputParameter);
The invoked call is
MyModuleService.ProcessConfigurator.MyMethod(inputOrder);
When debugging, I see that the ProcessConfigurator is always mocked.
The call that fails in the test is something as simple as this :
MockEnvironment.MockProcessConfigurator.Verify(x => x.MyMethod(It.IsAny<Order>()), Times.Exactly(1));
When debugging, I can see the method is actually called every time, so I suspect that there's something wrong with the mock instance. I'm a bit lost here, because things seem to be implemented correctly, but for some reason when they're run one after the other, there's a problem.
This type of error commonly occurs when the two tests share something.
For example, you set up your mock with an expectation that a method will be called 1 time in your test setup, and then two tests each call that method 1 time - your expectation will fail because it has now been called 2 times.
This suggests that you should be moving the set up of expectations into each test.
A generic troubleshooting approach for this type of problem is to try to isolate the dependency between the two tests:
Move any setup-code to inside the tests.
Move any tear-down code to inside the tests.
Move any field initializers to inside the tests. (Those are only run once per fixture!)
This should make both your tests pass when run together. Once you've got the green lights, you can start moving the duplicated stuff back out to initializers/setup one piece at a time, running the tests after each change you make.
You should be able to learn what is causing the coupling between the tests. Good luck!
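Applied to the code in the question, the end state would look something like this (reusing the question's own InversionOfControl registration; NUnit's [SetUp] runs before every test):

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class ProcessConfiguratorTests
{
    private Mock<IProcessConfigurator> _mockProcessConfigurator;

    [SetUp]
    public void SetUp()
    {
        // Recreated before EVERY test, so no call counts or setups
        // can leak from one test into the next.
        _mockProcessConfigurator = new Mock<IProcessConfigurator>();
        _mockProcessConfigurator.Setup(x => x.MyMethod(It.IsAny<Order>()));
        InversionOfControl.Instance.Register<IProcessConfigurator>(
            _mockProcessConfigurator.Object);
    }
}
```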
Thought I'd add an additional situation and solution I just came across:
If you are running tests from two separate projects at the same time and each project is using a different version of Moq, this same problem can happen.
For me, I had TestProjectA using Moq 4.2.14 and TestProjectB using Moq 4.2.15 by accident (courtesy of NuGet). When running a test from A and a test from B simultaneously, the first test succeeded and the second silently failed.
Adjusting both projects to use the same version solved the issue.
To expand on the answer Sohnee gave, I would check my setup/teardown methods to make sure you're tidying everything up properly. I've had similar issues when running tests in bulk and not having tidied up my mock objects.
I don't know if this is relevant, but MockFactory.Create seems odd. I normally create mocks as follows:
var mockProcessConfigurator = new Mock<IProcessConfigurator>();
When using a MockFactory (which I've never needed), you would normally create an instance of it. From the Moq QuickStart:
var factory = new MockFactory(MockBehavior.Strict) { DefaultValue = DefaultValue.Mock };
Calling a static method on MockFactory seems to defeat the purpose. If you have a nonstandard naming convention where MockFactory is actually a variable of type MockFactory, that's probably not your issue (but will be a constant source of confusion). If MockFactory is a property of your test class, ensure that it is recreated on SetUp.
If possible I would eliminate the factory, as it is a potential source of shared state.
EDIT: As an aside, WorkflowInvoker.Invoke takes an Activity as a parameter. Rather than creating an entire workflow to test a custom activity, you can just pass an instance of the custom activity. If that's what you want to test, it keeps your unit test more focused.
My guess is that the current semantics of unit testing involve calling the test method directly, i.e., if I have a method MyTest(), then that's what gets called. My question is this: is it possible to somehow change the pipeline of how tests are executed (preferably without recompiling the test runner) so that, say, instead of calling the method directly it's called via a wrapper I provide (i.e., MyWrapper(MyTest))?
Thanks.
If you use MbUnit then there's a lot of stuff you can customize by defining custom attributes.
The easiest way to do this is to create a subclass of TestDecoratorAttribute and override the SetUp, TearDown or Execute methods to wrap them with additional logic of your choice.
However if you need finer control, you can instead create a subclass of TestDecoratorPatternAttribute and override the DecorateTest method with logic to add additional test actions or test instance actions.
For example, the MbUnit [Repeat] attribute works by wrapping the test's body run action (which runs all phases of the test) with a loop and some additional bookkeeping to run the test repeatedly.
Here's the code for RepeatAttribute: http://code.google.com/p/mb-unit/source/browse/trunk/v3/src/MbUnit/MbUnit/Framework/RepeatAttribute.cs
It depends on how the unit testing framework provides interception and extensibility capabilities.
Most frameworks (MSTest, NUnit etc.) allow you to define Setup and Teardown methods that are guaranteed to run before and after the test.
xUnit.NET has more advanced extensibility mechanisms where you can define custom attributes you can use to decorate your test methods to change the way they are invoked. As an example, there's a TheoryAttribute that allows you to define Parameterized Tests.
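For example, xUnit.NET's [Theory] runs the same test body once per data row, effectively wrapping the plain "call the method" pipeline:

```csharp
using Xunit;

public class MathTests
{
    // The framework invokes this method three times,
    // once per [InlineData] row.
    [Theory]
    [InlineData(2, 4)]
    [InlineData(3, 9)]
    [InlineData(-5, 25)]
    public void Square_ReturnsInputTimesItself(int input, int expected)
    {
        Assert.Equal(expected, input * input);
    }
}
```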
I don't know MbUnit, so I can't say whether it supports these scenarios or not.
Told by my boss to use Moq and that is it.
I like it, but it seems that unlike MSTest or MbUnit etc. you cannot test internal methods.
So I am forced to make public some internal implementation in my interface so that I can test it.
Am I missing something?
Can you test internal methods using Moq?
Thanks a lot
You can use the InternalsVisibleTo attribute to make the methods visible to Moq.
http://geekswithblogs.net/MattRobertsBlog/archive/2008/12/16/how-to-make-a-quotprotectedquot-method-available-for-quotpartialquot-mocking-and-again.aspx
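For reference, the attribute lives in the assembly under test and names the test assembly (the "MyApp.Tests" name here is a placeholder for your own test project):

```csharp
// In the production assembly (AssemblyInfo.cs or any source file):
using System.Runtime.CompilerServices;

// Lets the test project see internal members:
[assembly: InternalsVisibleTo("MyApp.Tests")]

// If Moq must generate a proxy for an internal type, the Castle
// DynamicProxy assembly needs access as well:
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]

// For strong-named (signed) assemblies, the full public key must be
// appended to the assembly name:
// [assembly: InternalsVisibleTo("MyApp.Tests, PublicKey=<full public key>")]
```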
There is nothing wrong with making the internals visible to other classes for testing. If you need to test the internals of a class, by all means do so. Just because the methods are not public does not mean you should ignore them and test only the public ones. A well-designed application will actually have the majority of its code encapsulated within its classes in such a way that it is not public, so ignoring the non-public methods in your testing is a big mistake IMHO. One of the beauties of unit testing is that you test all the parts of your code, no matter how small; when your tests are all up and running at 100%, it is a very reasonable assumption that when all these parts are put together, your application will work properly for the end user. Of course, verifying that latter part is where integration-level tests come in, which is a different discussion. So test away!!!
If you have a lot of code that isn't tested through the public methods, you probably have code that should be moved to other classes.
As said in another answer, you can use the InternalsVisibleTo attribute for that. But that doesn't mean you should do it.
From my point of view, mocking should be used to simulate some behavior that we depend on but are not setting out to test. Hence:
Q: Am I missing something?
- No, you're not missing anything; Moq is missing the ability to mock private behaviors.
Q: Can you test internal methods using Moq?
- If the result of the private behavior is visible publicly, then yes, you can test the internal method, but it's not because of Moq that you can test it. The point I'd like to make here is that mocking is not the ability to test, but rather the ability to simulate behaviors that we are not testing but depend on.
C: A main benefit with TDD is that your code becomes easy to change. If you start testing internals, then the code becomes rigid and hard to change
- I don't agree with this comment, for 2 main reasons:
1: It is not a beginner's misconception, as TDD is not just about the ability to code faster but also about better quality code. Hence the more tests we can do, the better.
2: It doesn't make the code any harder to change if you can somehow test the internal methods.
Your initial presumption that it is necessary to test internal methods is a common beginner's misconception about unit testing.
Granted, there may exist cases where private methods should be tested in isolation, but the 99% common case is that the private methods are being tested implicitly because they make the public methods pass their tests. The public methods call the private methods.
Private methods are there for a reason. If they do not result in external testable behaviour, then you don't need them.
Do any of your public tests fail if you just flat out delete them? If yes, then they are already being tested. If not, then why do you need them? Find out what you need them for and then express that in a test against the public interface.
A main benefit with TDD is that your code becomes easy to change. If you start testing internals, then the code becomes rigid and hard to change.
InternalsVisibleTo is your friend for testing internals.
Remember to sign your assemblies and you're safe.