I have a simple test decorated with NUnit.Framework.TestCaseAttribute.
This unit test used to work, but for some reason running it now results in a "Parameter Mismatch" error, even though nothing is being passed to it and it has no parameters.
The version of NUnit I am using is 3.12.0.0.
[TestCase()]
public void TestGroupA_None()
{
}
For methods with no arguments, you can use the Test attribute instead of the TestCase attribute.
Sample Code:
[Test]
public void TestCaseTest_NoArgs()
{
}
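For comparison, TestCase is intended for parameterized tests: each attribute supplies one set of arguments, and the argument count must match the method signature. A minimal sketch of how it is normally used:
[TestCase(2, 3, 5)]
[TestCase(-1, 1, 0)]
public void Add_ReturnsSum(int a, int b, int expected)
{
    Assert.AreEqual(expected, a + b);
}
For a parameterless method, [Test] expresses the intent directly.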
Related
I have an issue with parameters. I am supposed to check the application login: if it is set to true, then execute the method LoginFirst. But every time I try to execute the test I get this error:
Message: Test method AppNameWebMultiMap.Bader.DeleteDomain.DeleteDomainTest threw exception:
System.Reflection.TargetParameterCountException: Parameter count mismatch.
Here is the method:
[TestMethod]
private void LoginFirst()
{
var login = new AppLogin();
login.AppLoginBySaTest();
}
Here is how I execute it:
[TestMethod]
public void DeleteDomainTest(bool loginFirst = true)
{
//Login
if (loginFirst)
{
LoginFirst();
}
//Execute delete domains function
}
The method DeleteDomainTest does the following:
first it logs in,
then it attempts to delete domains.
From the comments, the problem here is that the method:
[TestMethod]
public void DeleteDomainTest(bool loginFirst = true) {...}
is marked as a test method (via the attribute), and has a parameter, with the reason for the parameter being that it is used from "other methods in the application". The test framework wants the test method to be parameterless.
This suggests a fundamental misapplication of test methods. If a method is used by other code, then it isn't a test method. Test methods should always be standalone and top-level. You should be able to resolve this simply by refactoring slightly:
[TestMethod]
public void DeleteDomainTest() { DeleteDomainImpl(); }
internal void DeleteDomainImpl(bool loginFirst = true) {...}
Now we have a DeleteDomainImpl method that can be used from other tests as required, and a DeleteDomainTest that actually is the test that runs it in this case - using the default parameters.
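For illustration, other tests could then reuse the implementation with whatever arguments they need; a hypothetical second caller:
[TestMethod]
public void DeleteDomainWithoutLoginTest()
{
    // assumes a session already exists, so skip the login step
    DeleteDomainImpl(loginFirst: false);
}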
So I have this code:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
[Test]
public void TestCase12()
{
ExecuteTestCase(12);
}
[Test]
public void TestCase13()
{
ExecuteTestCase(13);
}
[Test]
public void TestCase14()
{
ExecuteTestCase(14);
}
}
The ExecuteTestCase method gets test parameters from my web server and executes the test case with those settings.
Each time I add new test case parameters on my web server, I need to add a new test in my C# code, pass in the ID of the test case parameters stored in my web server database, and recompile.
Is there any way to do this automatically? For example, could C# fetch the IDs of all test case parameters from my server and create tests for them on the fly?
Importantly, the test cases change frequently. I was thinking about running all test cases inside one test in a loop, but then I'd be unable to run my test cases separately, for example in the NUnit GUI.
So my question is: how do I run multiple test cases depending on data I receive at run time?
You can use the TestCaseSource attribute to get the parameters from your web service and have your test cases generated automatically:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
[Test, TestCaseSource(nameof(GetTestParameters))]
public void TestCase(int parameter)
{
ExecuteTestCase(parameter);
}
static int[] GetTestParameters()
{
//call web service and get parameters
return new[] { 1, 2, 3 };
}
}
See the NUnit documentation for TestCaseSource.
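A sketch of what the source method might look like when it actually calls the web service; the endpoint URL and JSON shape here are assumptions:
// requires: System.Collections.Generic, System.Net.Http, System.Text.Json, NUnit.Framework
static IEnumerable<TestCaseData> GetTestParameters()
{
    using (var client = new HttpClient())
    {
        // hypothetical endpoint returning a JSON array of test case IDs, e.g. [12, 13, 14]
        var json = client.GetStringAsync("https://my-server.example/api/testcase-ids")
                         .GetAwaiter().GetResult();
        foreach (var id in JsonSerializer.Deserialize<int[]>(json))
        {
            // SetName gives each generated case a stable, readable name in the runner
            yield return new TestCaseData(id).SetName("TestCase" + id);
        }
    }
}
Because NUnit evaluates the source at test discovery time, each ID shows up as a separate, individually runnable test.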
I have the challenge that, when using InlineAutoData, the test is also run with random values. The background is that I am testing a conversion whose input is required to follow the specification, so I am not interested in random data.
The following test runs twice: once with the InlineAutoData values and once with random strings. The test has been deliberately kept simple and made to fail on the random-data run:
[Theory, GeneralTransferTestConventions]
[InlineAutoData("Allowed", "Allowed")]
public void Testing(string test1Data, string test2Data)
{
Assert.Equal(test1Data, test2Data);
}
My question is whether there is a way to avoid the test run with random data, and how to do that.
The second, random run comes from the GeneralTransferTestConventions attribute, which is presumably an AutoFixture-based data attribute contributing its own set of generated arguments. Remove the AutoFixture integration:
[Theory]
[InlineData("Allowed", "Allowed")]
public void Testing(string test1Data, string test2Data)
{
Assert.Equal(test1Data, test2Data);
}
This is a pure xUnit.net test, and is entirely deterministic.
As a note, though, there's no reason to make a test parametrised if there's only going to be a single set of test cases, so either add more InlineData test cases:
[Theory]
[InlineData("Allowed", "Allowed")]
[InlineData("foo", "foo")]
[InlineData("bar", "bar")]
public void Testing(string test1Data, string test2Data)
{
Assert.Equal(test1Data, test2Data);
}
or make it a 'normal' test:
[Fact]
public void Testing()
{
var test1Data = "Allowed";
var test2Data = "Allowed";
Assert.Equal(test1Data, test2Data);
}
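If the AutoFixture integration is wanted elsewhere, note that InlineAutoData fixes the leading arguments and auto-generates only the remaining ones; a minimal sketch, assuming the AutoFixture.Xunit2 package:
[Theory]
[InlineAutoData("Allowed")] // test1Data is fixed; test2Data is auto-generated
public void Testing_Mixed(string test1Data, string test2Data)
{
    Assert.Equal("Allowed", test1Data);
    Assert.False(string.IsNullOrEmpty(test2Data)); // AutoFixture supplies a non-empty string
}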
I'm trying to use the TestContext.CurrentContext of NUnit 2.6.2 but it's always null.
What I would like is output with the results of the tests, but when I run the following code I always get a NullReferenceException in the TearDown method.
All the properties inside Test and Result throw the exception.
[TestFixture]
public class UtilitiesTests
{
[TearDown]
public void TearDown()
{
//using console here just for sake of simplicity.
Console.WriteLine(String.Format("{0}: {1}", TestContext.CurrentContext.Test.FullName, TestContext.CurrentContext.Result.Status));
}
[Test]
public void CleanFileName()
{
var cleanName = Utilities.CleanFileName("my &file%123$99\\|/?\"*:<>.jpg");
Assert.AreEqual("my-efile12399---.jpg", cleanName);
}
}
What am I doing wrong?
According to this discussion, you have to make sure you execute your tests with the correct version of the NUnit test runner; the version has to be NUnit 2.6.2.
Try to run your tests with an nunit-console of the correct version.
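As a stopgap, a defensive null check in TearDown avoids the NullReferenceException when the context isn't available under a given runner; a minimal sketch:
[TearDown]
public void TearDown()
{
    var context = TestContext.CurrentContext;
    if (context == null)
    {
        // older or mismatched runners don't populate the ambient test context
        Console.WriteLine("TestContext.CurrentContext is not available under this runner.");
        return;
    }
    Console.WriteLine(String.Format("{0}: {1}", context.Test.FullName, context.Result.Status));
}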
Update: I set up a new project in VS2012 and added NUnit 2.6.2 and NUnit.Runners 2.6.2 using NuGet. With the ReSharper test runner I got no error but also no console output, so I ran NUnit.exe from <project-folder>\packages\NUnit.Runners.2.6.2\tools\
The output I received looks OK. I ran your example code above; however, I had to modify it so I could run it:
using System;
using NUnit.Framework;
[TestFixture]
public class UtilitiesTests
{
[TearDown]
public void TearDown()
{
//using console here just for sake of simplicity.
Console.WriteLine(String.Format("{0}: {1}", TestContext.CurrentContext.Test.FullName, TestContext.CurrentContext.Result.Status));
}
[Test]
public void CleanFileName()
{
var cleanName = "my &file%123$99\\|/?\"*:<>.jpg";
Assert.AreEqual("my &file%123$99\\|/?\"*:<>.jpg", cleanName);
}
}
You should try to run your tests using NUnit.exe again, but first verify that you have the correct version under Help -> About NUnit ...
I want to ignore certain tests based on data I pulled from a configuration file during the TestFixtureSetUp. Is there a way to ignore running a test based on parameters?
[TestFixture]
public class MessagesTests
{
private bool isPaidAccount;
[TestFixtureSetUp]
public void Init () {
isPaidAccount = ConfigurationManager.AppSettings["IsPaidAccount"] == "True";
}
[Test]
//this test should run only if `isPaidAccount` is true
public void Message_Without_Template_Is_Sent()
{
//this tests an actual web api call.
}
}
If the account we are testing with is a paid account, the test should run fine; if not, the method will throw an exception.
Is there some extension of the attribute, along the lines of [Ignore(ReallyIgnore = isPaidAccount)]? Or should I write the check inside the method and handle the two cases, e.g.:
public void Message_Without_Template_Is_Sent()
{
if(isPaidAccount)
{
//test for return value here
}
else
{
//test for exception here
}
}
You can use Assert.Ignore() as Matthew states. You could also use Assert.Inconclusive() if you want to categorize the result differently.
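Applied to the example above, a minimal sketch:
[Test]
public void Message_Without_Template_Is_Sent()
{
    if (!isPaidAccount)
        Assert.Ignore("Requires a paid account; skipping.");

    // test the actual web API call here
}
Assert.Ignore() marks the test as ignored at run time, so it shows up as skipped rather than passed or failed.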
This Question/Answer is slightly similar: Programmatically skip an nunit test