In an NUnit test, can we assert that if the test fails under a certain circumstance then it is a pass (i.e. it is expected to try and fail)?
But it should pass under all other circumstances.
The thing is that the test could fall apart before it can reach its assertion section.
I mean something along the lines of
[SetUp]
public void Init()
{
    if (condition)
    {
        Assert.That(this test fails); /* any feature to do this? */
    }
}
If the test can fail and be classed as a pass under some circumstances, it sounds like a bad test. Split it out into individual tests with clear method names detailing what each one is achieving.
So instead of having one test with a conditional inside it, split it out into one test per scenario. That way, the scenario where it is supposed to fail can look something like
// do stuff
bool success = DoStuff();
Assert.IsFalse(success);
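Spelled out as two separate NUnit tests, the split might look something like this (a sketch; DoStuff and the arranged conditions stand in for whatever your test exercises):

[Test]
public void DoStuff_UnderNormalConditions_Succeeds()
{
    // Arrange the normal conditions here.
    bool success = DoStuff();
    Assert.IsTrue(success);
}

[Test]
public void DoStuff_UnderTheFailingCircumstance_Fails()
{
    // Arrange the specific circumstance that is expected to fail.
    bool success = DoStuff();
    Assert.IsFalse(success);
}

Each test now documents its scenario in its name, and neither needs a conditional.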
It's a little hard to understand your question. Are you wanting Assert.Fail() to force a failure? Like so...
[SetUp]
public void Init()
{
    if (condition)
    {
        Assert.Fail();
    }
}
If you want to check for a failure rather than cause one, you should follow Arran's advice and check for a specific fact about the work you're validating - such as a method's return value.
You can also wrap the code in an Action delegate and assert that invoking it throws. Look at FluentAssertions; they have lots of examples.
int number1 = 1;
int number0 = 0;
Action someAction = () => { int j = number1 / number0; };
someAction.ShouldThrow<DivideByZeroException>();
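If you'd rather not take the FluentAssertions dependency, plain NUnit can express the same check with Assert.Throws (a sketch of the equivalent assertion):

int number1 = 1;
int number0 = 0;
Assert.Throws<DivideByZeroException>(() => { int j = number1 / number0; });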
I want to test that a bunch of buttons on screen are responding properly. The number of buttons will increase dynamically throughout the project, and they will be used in more than a single test.
My first try was using the [TestCase] attribute
[TestCase("High action - Button")]
[TestCase("Low action - Button")]
[TestCase("Right action - Button")]
[TestCase("Left action - Button")]
public void WhenClickOnActionButtons_ActionStreamShouldIncrease (string buttonNameInScene)
But this means that every time buttons were added I would need to remember to come back to every test that uses the buttons and manually add the new test case.
Then I thought that having a container that controls the buttons would solve my problem. I would lose the ability to see which button caused the error in the test suite, but that can be fixed just by adding a simple message to the assert.
[UnityTest]
[Order(1)]
public IEnumerator WhenClickOnActionButtons_ActionStreamShouldIncrease()
{
    foreach (var battleActionButton in battleActionButtons) // <- the buttons container
    {
        // Arrange
        var changedWhenClicked = false;
        battleActionButton.GetComponent<Button>().onClick.AddListener(() => changedWhenClicked = true);

        // Act
        var positionOfButtonInScreen = Camera.main.WorldToScreenPoint(battleActionButton.transform.position);
        LeftClickOnScreenPosition(positionOfButtonInScreen);
        yield return null;

        // Assert
        Assert.True(changedWhenClicked, $"{battleActionButton.name} wasn't pressed correctly");
    }
}
But now if the button container is empty the test passes. So I thought: let's create a test that runs before it to check whether the container is empty.
[Test]
[Order(0)]
public void ActionButtonsGroup_IsGreaterThan0()
{
    // Arrange
    // Act
    // Assert
    Assert.That(battleActionButtons.Count, Is.GreaterThan(0));
}
But then, if the container is empty, this test fails and the next one passes. So I thought maybe I could make the tests stop running when a test fails, but then I discovered that a test should not rely on other tests. So my question is: how should I handle this kind of case?
This sounds like a case for using a proper collection as a data source for a parametrised test. In NUnit you can use TestCaseSource for this:
[TestCaseSource(typeof(BattleActionButtons))]
public void ButtonTest(BattleActionButton button)
{
    // Test body goes here...
}
where BattleActionButtons is a real class:
using System.Collections;
using System.Collections.Generic;

public sealed class BattleActionButtons : IReadOnlyCollection<BattleActionButton>
{
    private readonly List<BattleActionButton> buttons = new();

    public BattleActionButtons()
    {
        // Add buttons here...
    }

    public int Count => buttons.Count;

    public IEnumerator<BattleActionButton> GetEnumerator()
    {
        return buttons.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
Perhaps BattleActionButtons is just a test-specific class, but perhaps it would make sense in the production code as well(?) With TDD, it often happens that you introduce an abstraction to factor your tests in a nice way, only to discover that the abstraction is of general usefulness.
If you're concerned that BattleActionButtons is empty, write another test that verifies that this isn't the case:
[Test]
public void ButtonsAreNotEmpty()
{
    Assert.IsNotEmpty(new BattleActionButtons());
}
There's no need to order the tests. If BattleActionButtons is empty, ButtonsAreNotEmpty will fail. It's true that the data-driven parametrised test (here called ButtonTest) will pass when the collection is empty, but there's nothing wrong in that. It just becomes a vacuous truth.
(Unit tests are essentially predicates, and a parametrised test is a universal claim that states that for all (∀) values in the data source, the predicate holds. Thus, when the data source is empty, any test is trivially going to pass, which is exactly as it should be according to predicate logic.)
You can put the assertion from the second test before the foreach loop:
[UnityTest]
[Order(1)]
public IEnumerator WhenClickOnActionButtons_ActionStreamShouldIncrease()
{
    // Precondition
    Assert.That(battleActionButtons.Count, Is.GreaterThan(0));

    foreach (var battleActionButton in battleActionButtons) // <- the buttons container
    {
        // Arrange
        var changedWhenClicked = false;
        battleActionButton.GetComponent<Button>().onClick.AddListener(() => changedWhenClicked = true);

        // Act
        var positionOfButtonInScreen = Camera.main.WorldToScreenPoint(battleActionButton.transform.position);
        LeftClickOnScreenPosition(positionOfButtonInScreen);
        yield return null;

        // Assert
        Assert.True(changedWhenClicked, $"{battleActionButton.name} wasn't pressed correctly");
    }
}
I am trying to use Fluent Assertions in C# outside of test frameworks. Is there any way I could cast an FA check to bool? For example, I need to achieve something like:
bool result = a.Should().Be(0);
If it passes, then result = true;
if it fails, result = false.
Is there any way of casting or extracting a bool result from the assertion?
Fluent Assertions is designed to throw exceptions that testing frameworks catch, not to return values.
About the best you can do is to create a method that accepts an action and catches the exception when the action throws. In the catch you'd return false.
public bool Evaluate(Action a)
{
    try
    {
        a();
        return true;
    }
    catch
    {
        return false;
    }
}
You would use it like this:
bool result = Evaluate(() => a.Should().Be(0));
This will have terrible performance in the negative case; throwing exceptions is not cheap. You might get frowns from others because, generally, exceptions shouldn't be used for flow control.
That said, this does what you want.
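Depending on your FluentAssertions version, AssertionScope may be a cheaper alternative: it collects failure messages instead of throwing, and Discard() hands them back to you. A sketch, assuming that behaviour:

using FluentAssertions;
using FluentAssertions.Execution;

bool result;
using (var scope = new AssertionScope())
{
    a.Should().Be(0);
    // Discard() returns the accumulated failure messages and stops the scope from throwing on dispose.
    result = scope.Discard().Length == 0;
}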
It is possible to test if a method has been called using Moq and dependency injection. However, is it possible to test if one method in a class calls another within the same class?
For example, I want to test that if I log a certain exception, an information message is logged as well.
The method is:
public void Error(string message, Exception exception, long logId = 0)
{
    var int32 = (int)logId;
    Info("Id was converted to an int so that it would fit in the log: " + logId, int32);
    Error(message, exception, int32);
}
This was my attempt at unit testing it. The test fails; is there any way it can be done?
void logging_an_error_with_a_long_id_also_logs_info()
{
    var mock = new Mock<ILogger>();
    var testedClass = new Logger();
    var counter = 0;

    testedClass.Error("test" + counter++, new Exception("test" + counter), Int64.MaxValue);

    mock.Verify(m => m.Info(It.IsAny<string>(), It.IsAny<int>()));
}
Since the Info and Error methods are in the same class (ClassA), I don't believe I can pass ClassA as a dependency into ClassA. So does it not need to be tested?
The best you're going to be able to do is to make Info virtual. This will allow you to create a Mock<Logger>, set CallBase = true, and verify that Info was called.
var mock = new Mock<Logger>
{
    CallBase = true
};

mock.Object.Error("test" + counter++, new Exception("test" + counter), Int64.MaxValue);

mock.Verify(m => m.Info(It.IsAny<string>(), It.IsAny<int>()));
This way, you're still calling the actual implementation of Error, but you've used Moq to verify the Info method was called.
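For this to compile, Logger needs an accessible constructor and a virtual Info method. A minimal sketch of the shape Moq requires (names taken from the question; the comments are placeholders for the real logging):

public class Logger
{
    public virtual void Info(string message, int logId)
    {
        // Write the information message...
    }

    public void Error(string message, Exception exception, long logId = 0)
    {
        var int32 = (int)logId;
        Info("Id was converted to an int so that it would fit in the log: " + logId, int32);
        // Log the error itself...
    }
}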
It feels like you're trying to test the wrong thing. It's not really important that the Info method on your class is called from the Error method, what's important is that the behaviour of the Info method occurs. How it happens is an implementation detail of the class.
If I had a math class with two functions:
public int Mult(int x, int y) {
    return x * y;
}

public int Sqr(int x) {
    return Mult(x, x);
}
I wouldn't test that calling Sqr calls out to the Mult function; I would test that Sqr(4) == 16. It doesn't matter if that calculation takes place in the Sqr method or in another method of the class.
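In NUnit terms, that behavioural test is simply (a sketch, assuming the methods live on a class called MathHelper):

[Test]
public void Sqr_OfFour_ReturnsSixteen()
{
    var math = new MathHelper();
    Assert.AreEqual(16, math.Sqr(4));
}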
Whilst @Andrew's solution is probably what you're after, mocking the class you're testing tends to lead to tightly coupled, brittle tests.
If it's impractical to test the call by observing its side effects, that may be a sign that the implementation could use a bit of refactoring.
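For example, one common refactoring is to push the logging behaviour behind a collaborator, so the call can be observed with an ordinary mock instead of mocking the class under test (a sketch; ILogSink is a hypothetical name):

public interface ILogSink
{
    void Info(string message, int logId);
}

public class Logger
{
    private readonly ILogSink sink;

    public Logger(ILogSink sink)
    {
        this.sink = sink;
    }

    public void Error(string message, Exception exception, long logId = 0)
    {
        var int32 = (int)logId;
        sink.Info("Id was converted to an int so that it would fit in the log: " + logId, int32);
        // Log the error itself...
    }
}

Now a plain Mock<ILogSink> passed to the constructor can verify the Info call, with no CallBase tricks.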
I made a list (a so-called "test list" from Test-Driven Development by Example) from which I will pick a test to implement.
So I start up Visual Studio, create a new solution, add a project for the unit tests, and then... I need to come up with a class in which I will put the test method for the test I picked off the list.
Here's where I get stuck. How do I know which class I need, how to name it and how to know if it is correct? Is this something that needed to be thought of beforehand?
Have you read Kent Beck's TDD book? Stop trying to work it all out in advance. Dive in, do something, make it work, whatever it is; then you will have a better idea of what it should be, and you can change it. The principle is this: think about what you want done before you think about how to do it. Write a test that tests something you want done, then implement a solution. You will get it wrong the first time, and the second, and the third, but the process will bring you closer to the actual solution, and by the time you are done you should have a valuable test suite and a set of loosely coupled classes that get the job done.
EDIT IN RESPONSE TO COMMENT
No, not a random name. You need to perform a certain amount of design up front. I often start by coming up with the key types that I think my solution will need. I then start a test class (say, FooTest) in which I write a test for something that I want Foo to do. I use the process of writing the test to write the interface. ReSharper is great for this, as I can reference types and methods that do not yet exist and have ReSharper create them:
[TestFixture]
public class FooTest
{
    [Test]
    public void Bar()
    {
        var foo = (IFoo)null; // At this point I use ReSharper to create the IFoo interface
        Assert.IsTrue(foo.Bar()); // At this point I use ReSharper to create bool IFoo.Bar();
    }
}
Obviously the above will fail with a null reference exception, but I have a test and I have an interface with a method. I can carry on following this process to model my solution until I reach a point where I am ready to develop a concrete implementation. Following this process, I focus on the interfaces and the interaction between types, not the implementation of those types. Once I have built Foo, I simply change the above to var foo = new Foo(); and make all tests green. This process also means I have an interface for every class, which is essential when writing unit tests, as I can easily mock dependencies using a dynamic mock library like Moq.
Is this something that needed to be thought of beforehand?
Beforehand. That's the reason it's called Test-Driven Development...
You have to design the workflow before you start to implement.
A good idea would be to start thinking about this in terms of your domain rather than some vague test. For example, you need to develop Foo with functionality foo1 and foo2.
So you create a test class called FooTest with foo1Test and foo2Test. Initially these tests will fail, and you just work to make them pass.
What does your system do? You can start there.
Let's assume you're writing a feature that reads a document containing transactions for a given account and generates an aggregate summary of debits and credits.
Let's create a test:
import static org.junit.Assert.assertEquals;

import java.util.Scanner;

import org.junit.Test;

public class TransactionSummarizationTest {
    @Test
    public void summarizesAnEmptyDocument() {
        TransactionSummarization summarizer = new TransactionSummarization();
        Summary s = summarizer.summarizeTransactionsIn(new Scanner(""));

        assertEquals(0.00, s.debits, 0.0);
        assertEquals(0.00, s.credits, 0.0);
    }
}
Since the TransactionSummarization and Summary classes don't exist yet, you create them now. They would look like so:
TransactionSummarization.java
import java.util.Scanner;

public class TransactionSummarization {
    public Summary summarizeTransactionsIn(Scanner transactionList) {
        return null;
    }
}
Summary.java
public class Summary {
    public double debits;
    public double credits;
}
Now that you've taken care of all of the compile errors, you may run the test. It will fail with a NullPointerException due to your empty implementation of the summarizeTransactionsIn method. Return a Summary instance from the method and it passes.
public Summary summarizeTransactionsIn(Scanner transactionList) {
    return new Summary();
}
Run your test again and it passes.
Now that you've got your first test, what's next? I'm thinking that I would want to try the test with a single transaction.
@Test
public void summarizesDebit() {
    TransactionSummarization summarizer = new TransactionSummarization();
    Summary s = summarizer.summarizeTransactionsIn(new Scanner("01/01/12,DB,1.00"));

    assertEquals(1.00, s.debits, 0.0);
    assertEquals(0.00, s.credits, 0.0);
}
After running the test, we should see it fail because we aren't accumulating the values, simply returning a new Summary. Let's parse the line and pass the amount through:
public Summary summarizeTransactionsIn(Scanner transactionList) {
    String currentLine = transactionList.nextLine();
    String txAmount = currentLine.split(",")[2];
    double amount = Double.parseDouble(txAmount);

    return new Summary(amount);
}
After fixing the compile error in Summary and implementing the constructor, your tests should be passing again. What's the next test? What can we learn? Well, I'm curious about that debit/credit thing, so let's do that next.
@Test
public void summarizesCredit() {
    TransactionSummarization summarizer = new TransactionSummarization();
    Summary s = summarizer.summarizeTransactionsIn(new Scanner("01/01/12,CR,1.00"));

    assertEquals(0.00, s.debits, 0.0);
    assertEquals(1.00, s.credits, 0.0);
}
Run this test and we should see it fail because debits are 1.00 but credits are 0.00. Exactly the opposite of what we wanted, but it's entirely expected because we haven't examined the transaction type in any way. Let's do that now.
public Summary summarizeTransactionsIn(Scanner transactionList) {
    double debits = 0.0;
    double credits = 0.0;

    String currentLine = transactionList.nextLine();
    String[] data = currentLine.split(",");
    double amount = Double.parseDouble(data[2]);

    if("DB".equals(data[1]))
        debits += amount;

    if("CR".equals(data[1]))
        credits += amount;

    return new Summary(debits, credits);
}
Now all of the tests pass (after updating the Summary constructor to take both debits and credits), and we can move on to our next test. Now what? I'm thinking that processing only one line in a file won't help us much if we want this project to be successful. How about processing multiple records at the same time? Let's write a test!
@Test
public void summarizesDebitsAndCredits() {
    String transactions = "01/01/12,CR,1.75\n" +
                          "01/02/12,DB,3.00\n" +
                          "01/02/12,DB,2.50\n" +
                          "01/02/12,CR,1.25";

    TransactionSummarization summarizer = new TransactionSummarization();
    Summary s = summarizer.summarizeTransactionsIn(new Scanner(transactions));

    assertEquals(5.50, s.debits, 0.0);
    assertEquals(3.00, s.credits, 0.0);
}
Now, running all of our tests we see that this one fails in a predictable way. It tells us that our debits are 0.00, and credits are 1.75 since we've only processed the first record.
Let's fix that now. A simple while loop and we should be back in business:
public Summary summarizeTransactionsIn(Scanner transactionList) {
    double debits = 0.0;
    double credits = 0.0;

    while(transactionList.hasNextLine()) {
        String currentLine = transactionList.nextLine();
        String[] data = currentLine.split(",");
        double amount = Double.parseDouble(data[2]);

        if("DB".equals(data[1]))
            debits += amount;

        if("CR".equals(data[1]))
            credits += amount;
    }

    return new Summary(debits, credits);
}
All the tests pass, and I'll leave the rest up to you. Some things to think about are edge cases such as the file containing mixed case, for example "cr" versus "CR", or invalid/missing data, etc.
Also, I realized after I typed all that up that you were referring to C#. Unfortunately I did it in Java and am too lazy to convert it to C#, but I hope this helps anyway. :-)
Thanks!
Brandon
I'm curious to know if there's any built-in mechanism to retry tests in the Visual Studio 2008 unit testing framework for C#.
Case in point, I have a C# unit test which looks something like:
[TestMethod]
public void MyMethod() {
    DoSomething();
    Assert.Something();
}
Now, occasionally DoSomething() performs badly; in that case I would like to rerun the DoSomething() method before reaching the assertion. Obviously I can do something like:
...
do {
    Initialization();
    DoSomething();
} while (PerformedOK() == false);
Assert.Something();
...
Though this is a bit cumbersome because of the added loop and repeating the test initialization which would otherwise be completely handled by other methods / class constructor.
My question is whether there is a more convenient mechanism for retrying a test, something like:
DoSomething();
if (PerformedOK() == false) Retry();
else Assert.Something();
which will automatically retry the test without registering it as a failure, while performing all the regular initialization code as usual.
Seriously...
occasionally DoSomething() performs badly
A test should be green every time. If the tested code sometimes performs "badly", then you need to fix your code, isolating the different behaviours. You should have two tests: one that asserts the correct outcome when DoSomething fails (and is supposed to fail), and one that asserts the correct outcome when DoSomething is OK (and is supposed to be OK).
Having retry logic in a test is just wrong imo. You should always Assert on the expected outcome, and you should be able to isolate and instrument your code to return what you expect.
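A sketch of that split, reusing DoSomething and PerformedOK from the question (each test arranges its scenario explicitly instead of branching):

[TestMethod]
public void DoSomething_UnderNormalConditions_Succeeds()
{
    // Arrange the conditions under which DoSomething is supposed to work.
    DoSomething();
    Assert.IsTrue(PerformedOK());
}

[TestMethod]
public void DoSomething_UnderBadConditions_Fails()
{
    // Arrange the conditions under which DoSomething is supposed to fail.
    DoSomething();
    Assert.IsFalse(PerformedOK());
}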
[Edit - added some code which could be used for a retry loop]
You could create a loop wrapper which takes any method and calls it X number of times, or until it succeeds. You could also have the loop call your init method, or pass it as a separate argument. The method could also return a bool indicating success. Change the signature to fit your needs.
[TestMethod]
public void Something()
{
    Loop.LoopMe(DoSomething, 3);
    Assert.Something();
}
class Loop
{
    public static void LoopMe(Action action, int maxRetry)
    {
        Exception lastException = null;
        while (maxRetry > 0)
        {
            try
            {
                action();
                return; // success, stop retrying
            }
            catch (Exception e)
            {
                lastException = e;
                maxRetry--;
            }
        }
        throw lastException; // all retries failed
    }
}
Your second example is almost the same lines of code and the same complexity. There are tons of ways to skin it; you could, not that I am advocating it, use recursion.
[TestMethod]
public void MyMethod() {
    bool success = DoSomething();
    Assert.IsTrue(success);
}

public bool DoSomething() {
    // Do whatever
    if (performedOk) {
        return true;
    } else {
        // find a way to stop it.
        return false;
    }
}
But the point is that it is a unit test. If something is causing it to go wrong, you need to find a way to isolate your test so that it runs in a controlled environment.
Unless you have a requirement that says tests should eventually pass, the best retry logic is the one you apply after a failure: click the test and hit run again.