I'm a QA intern at an insurance company, and I'm writing tests for the company's website. I have done many test cases, and now they're asking for data-driven tests, which I'm struggling with.
I have structured all my tests as one TestFixture containing a test for each page in the test case, like this:
[TestFixture]
public class Test : BaseClassForTheTest
{
    [Test, Order(1)]
    public void TestcodeForHomePage()
    {
        // ...
    }

    [Test, Order(2)]
    public void TestcodeForNextPage()
    {
        // ...
    }
}
So I need to run the full test for many rows of data from an Excel file. I'm using NUnit, as you might have noticed.
The real question is: how can I pass a DataTable into the TestFixture and make the test blocks run against it? On each run, the first test block should use the first row of the sheet named MyTable, and the second test block should use the first row of the sheet named SecondTable. Since each test is triggered by the previous test block, I can't attach a data source to the individual Test methods.
I've searched the internet but couldn't find anything about passing a DataTable into a TestFixture. Thanks in advance :)
There's nothing built into NUnit to read an Excel file. But you can use TestCaseSource or TestFixtureSource to generate data from anywhere you like.
Your source will have to be a method, which will read the Excel file and return the proper arguments.
Here's an outline using TestFixtureSource...
[TestFixtureSource("DataFromExcel")]
public class MyTestFixture : BaseClassForTheTest
{
    static IEnumerable<TestCaseData> DataFromExcel()
    {
        // Read the Excel file
        // For each row of data you want to use
        //     yield return new TestCaseData(/* test fixture args here */);
    }

    public MyTestFixture(/* your arg list */)
    {
        // Save each arg in a private member
    }

    [Test, Order(1)]
    public void TestcodeForHomePage()
    {
        // Code that uses the saved values from the constructor
    }

    [Test, Order(2)]
    public void TestcodeForNextPage()
    {
        // Code that uses the saved values from the constructor
    }
}
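To make the outline concrete, here is a hedged sketch of what DataFromExcel might look like if you read the workbook with a library such as ClosedXML. The file path, the sheet name MyTable and the two-column row layout are assumptions taken from the question, so adjust them to your project.

using System.Collections.Generic;
using System.Linq;
using ClosedXML.Excel;
using NUnit.Framework;

// Goes inside MyTestFixture; requires the ClosedXML NuGet package.
static IEnumerable<TestCaseData> DataFromExcel()
{
    // Assumed location of the workbook relative to the test assembly.
    using (var workbook = new XLWorkbook(@"..\..\TestData.xlsx"))
    {
        var sheet = workbook.Worksheet("MyTable");

        // Skip the header row, then create one fixture instantiation per data row.
        foreach (var row in sheet.RowsUsed().Skip(1))
        {
            yield return new TestCaseData(
                row.Cell(1).GetString(),   // first constructor argument
                row.Cell(2).GetString());  // second constructor argument
        }
    }
}

Each TestCaseData the method yields produces one instance of the whole fixture, so both ordered tests run once per Excel row, which is the behaviour the question asks for.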
I have a pack of smoke tests that run, randomly pull data from a table, search by that data in another method, and assert afterwards. The tests fail if no data exists. I have a reusable method called RandomVinSelect(). I want to stop the test if there is no data, and I have searched for a way to report a warning that the test could not be run instead of failing it. At this point I am stumped. This is the code I have; I do not want to run the lines after ui.RandomVinSelect() if no data is found. I am thinking there may not be a way to do this with xUnit and it would just be pass or fail...
public static string RandomVinSelect(this Browser ui, string table,
    string selector)
{
    //I want to stop test here if no data exist or create a dataexist
    //method that stops a test.
    int rows = ui.GetMultiple(table).Count;
    Random num = new Random();
    string randomnum = Convert.ToString(num.Next(1, rows));
    string newselector = selector.Replace("1", randomnum);
    string vin = ui.Get(newselector).Text;
    return vin;
}
Perhaps just put the smoke tests in a separate test package or collection and include a test that simply checks whether it can get data. Then, when you run this group of tests, if that first test fails you know it is just due to no data being available.
Not ideal but might be good enough?
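A hedged sketch of that idea using an xUnit trait to group the smoke tests; StartBrowser and GetPageCount stand in for the asker's own helpers:

using Xunit;

public class SmokeTestDataGuard
{
    [Fact]
    [Trait("Category", "Smoke")]
    public void TableHasAtLeastOneRow()
    {
        // Runs under the same "Smoke" trait as the real tests, so if this guard
        // fails you know the rest of the group failed for lack of data rather
        // than a real regression.
        Browser ui = StartBrowser();   // hypothetical setup helper
        Assert.NotEqual("0", ui.GetPageCount());
    }
}

With the newer runners you could then run the group with a trait filter, e.g. something like dotnet test --filter "Category=Smoke".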
I installed the Xunit.SkippableFact NuGet package and added [SkippableFact] to my smoke tests. Then I created a method that can be called to check whether data is available; it runs Skip.If(condition, "my message") and ends the test early if no data is present. The skipped test then shows a warning symbol in Test Explorer.
public static void IsDataAvailable(this Browser ui)
{
    bool dataExists = true;
    string pagecount = ui.GetPageCount();
    if (pagecount == "0") dataExists = false;

    // Skip (rather than fail) the test when no data is available.
    Skip.If(dataExists == false, "The test could not run because no data is available for validation.");
}
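For context, a hedged sketch of how the helper might be wired into one of the smoke tests; the selectors and the StartBrowser setup are placeholders for the asker's own code:

using Xunit;

public class VinSmokeTests
{
    [SkippableFact]
    public void RandomVinSearch_FindsMatchingRecord()
    {
        Browser ui = StartBrowser();   // hypothetical setup helper

        // Ends the test with a "skipped" warning instead of a failure
        // when the grid reports zero rows.
        ui.IsDataAvailable();

        string vin = ui.RandomVinSelect("#results tr", "#results tr:nth-child(1) .vin");
        // ...search by the VIN elsewhere and assert on the result...
    }
}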
I'm new to unit testing and I want to see output from my tests.
Let's assume I'm testing for the existence of certain objects:
List<MyObject> actual = target.GetMyObjects();
Assert.IsTrue(actual.Count > 0, String.Format("{0} objects fetched", actual.Count));
In the 'Test Results' window in VS2010 I want to see the result of String.Format("{0} objects fetched", actual.Count).
Is that possible?
Found it:
I added the Output (StdOut) column to the Test Results window.
I changed the end of my test method to this:
bool success = actual.Count > 0;
Assert.IsTrue(success, "No models in the database");
if (success)
{
    Console.Write(String.Format("{0} models fetched", actual.Count));
}
Yes, this is possible. If the test fails, whatever message you put in the second parameter might be useful. In your case, if the count value is important for debugging the error, go ahead with it.
Even though pass/fail evaluation is automated, this information might be helpful later when debugging. http://www.creatingsoftware.net/2010/03/best-practices-for-assert-statements-in.html
Alternatively you could use
Debug.Print("whatever");
And then when you run your test, you get a hyperlink "Output" in the success/fail window which will show all of your debug messages.
Obviously you need to add
using System.Diagnostics;
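Putting that together, a minimal sketch of a test that writes to the debug output; MyRepository, MyObject and GetMyObjects are stand-ins for the asker's own types, and the attributes are MSTest's to match the VS2010 Test Results window:

using System.Collections.Generic;
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RepositoryTests
{
    [TestMethod]
    public void GetMyObjects_ReportsHowManyWereFetched()
    {
        var target = new MyRepository();              // hypothetical class under test
        List<MyObject> actual = target.GetMyObjects();

        // Appears behind the "Output" link of the test result (Debug builds only).
        Debug.Print("{0} objects fetched", actual.Count);

        Assert.IsTrue(actual.Count > 0, "No objects in the database");
    }
}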
No, you don't want to see the output.
Each unit test must either succeed or fail. This enables the test runner to aggregate the test results into a single Fail/Pass test result. If human inspection is required, the point of unit testing is lost - it must be automated.
I'm looking at using NDbUnit to help with the unit testing of an application. As the question title states, I'm wondering if it is possible to keep NDbUnit test data in separate XML files. I have noticed already that my single test data XML file is quite big and could start to become unmanageable when I add a few more entities to it.
Now, having read this question it looks as if it's not possible but I would just like to be sure.
If it helps, this is sample code which illustrates the problem. The idea is that programs are associated with vendors. I have set up test data containing 3 vendors, the second of which has 3 programs. TestData.xml contains all of the test data for all of the vendors and programs. When I use it, the unit test passes as expected. If I try to read the individual XML files in separately, using multiple calls to db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity), it seems as if the second call overwrites whatever was done by the first.
private const string xmlSchema = @"..\..\schema.xsd";
// All of the test data in one file.
private const string xmlData = @"..\..\XML Data\TestData.xml";
// Individual test data files.
private const string vendorData = @"..\..\XML Data\Vendor_TestData.xml";
private const string programData = @"..\..\XML Data\Program_TestData.xml";
public void WorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);
    db.ReadXml(xmlData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}
public void NotWorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);
    db.ReadXml(vendorData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);
    db.ReadXml(programData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    // This line throws an ArgumentOutOfRangeException because there are no vendors in the collection.
    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}
Watch out for the meaning of the DbOperationFlag value you are using; the "Clean" part of "CleanInsertIdentity" means "clean out the existing records before performing the insert-identity part of the process".
See http://code.google.com/p/ndbunit/source/browse/trunk/NDbUnit.Core/DbOperationFlag.cs for more info on the possible enum values.
You might try the same process with either Insert or InsertIdentity to see if you can achieve what you are after, but by design CleanInsertIdentity isn't going to work for this scenario :)
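Following that suggestion, a hedged sketch of the split-file approach: let the first file do the clean-and-insert, then load the second file with a plain InsertIdentity so the rows from the first file survive. Whether the initial clean empties both tables depends on how NDbUnit treats tables that have no rows in the first file, so verify this against your schema.

INDbUnitTest db = new SqlDbUnitTest(connectionString);
db.ReadXmlSchema(xmlSchema);

// First file: clean out the existing records, then insert the vendors.
db.ReadXml(vendorData);
db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

// Second file: insert only, so the vendors loaded above are kept.
db.ReadXml(programData);
db.PerformDbOperation(DbOperationFlag.InsertIdentity);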
I have started using a TDD approach to develop a small app that reads data from Excel files. Using a repository-pattern-style approach, I have hit a hurdle that baffles me.
In order to read the Excel files, I am using the OpenXml SDK. Reading from an Excel file with the SDK typically requires quite a few steps to actually get the values you want.
The approach I have taken thus far is reflected in the following test and accompanying function.
[Test]
public void GetRateData_ShouldReturn_SpreadSheetDocument()
{
    //Arrange
    var fpBuilder = new Mock<IDirectoryBuilder>();
    fpBuilder.Setup(fp => fp.FullPath()).Returns(It.IsAny<string>());

    var doc = new Mock<IOpenXmlUtilities>();
    doc.Setup(d => d.OpenReadOnlySpreadSheet(It.IsAny<string>()))
       .Returns(Mock.Of<SpreadsheetDocument>());

    swapData = new SwapRatesRepository(fpBuilder.Object, doc.Object);

    //Act
    var result = swapData.GetRateData();

    //Assert
    doc.Verify();
    fpBuilder.Verify();
}
public class SwapRatesRepository : IRatesRepository<SwapRates>
{
    private const string SWAP_DATA_FILENAME = "DATE_MKT_ZAR_SWAPFRA1.xlsx";

    private IDirectoryBuilder builder;
    private IOpenXmlUtilities openUtils;

    public SwapRatesRepository(IDirectoryBuilder builder)
    {
        // TODO: Complete member initialization
        this.builder = builder;
    }

    public SwapRatesRepository(IDirectoryBuilder builder,
                               IOpenXmlUtilities openUtils)
    {
        // TODO: Complete member initialization
        this.builder = builder;
        this.openUtils = openUtils;
    }

    public SwapRates GetRateData()
    {
        // determine the path of the file based on the date
        builder.FileName = SWAP_DATA_FILENAME;
        var path = builder.FullPath();

        // open the excel file
        using (SpreadsheetDocument doc = openUtils.OpenReadOnlySpreadSheet(path))
        {
            //WorkbookPart wkBookPart = doc.WorkbookPart;
            //WorksheetPart wkSheetPart = wkBookPart.WorksheetParts.First();
            //SheetData sheetData = wkSheetPart.Worksheet
            //                                 .GetFirstChild<SheetData>();
        }

        return new SwapRates(); // ignore this class for now, design later
    }
}
However, the next step after the spreadsheet is open would be to actually start interrogating the Excel object model to retrieve the values. As noted above, I am making use of mocks for anything OpenXml related. However, in some cases the objects can't be mocked (or I don't know how to mock them, since they are static). That gave rise to IOpenXmlUtilities, which contains simple wrapper calls into the OpenXml SDK.
In terms of design, we know that reading data from Excel files is a short-term solution (6-8 months), so these tests only affect the repository/data access layer for the moment.
Obviously I don't want to abandon the TDD approach (as tempting as it is), so I am looking for advice and guidance on how to continue my TDD endeavours with the OpenXml SDK. The other aspect relates to mocking: I am confused as to when and how to use mocks in this case. I don't want to unknowingly write tests that test the OpenXml SDK itself.
*Side note: I know that the SOLIDity of my design can be improved, but I am leaving that for now. I have a set of separate tests that relate to the builder object. The other side effect that may occur is the design of an OpenXML-SDK wrapper library.
Edit: Unbeknown to me at the time, by creating these wrappers for the OpenXML SDK I had used a design pattern similar (or identical) to the Adapter pattern.
If you can't mock it and can't create a small unit test, it might be better to take it up a level and write a scenario test. You can use the [TestInitialize] and [TestCleanup] methods to create the setup for the test.
Using Pex and Moles might be another way to get it tested.
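A hedged sketch of what such a scenario test could look like, using the [TestInitialize]/[TestCleanup] hooks the answer mentions; DiskDirectoryBuilder, OpenXmlUtilities and the fixture workbook are hypothetical stand-ins for concrete implementations of the asker's interfaces and a small file checked into the test project:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SwapRatesRepositoryScenarioTests
{
    private SwapRatesRepository repository;

    [TestInitialize]
    public void SetUp()
    {
        // Real (non-mock) collaborators so the test exercises the OpenXml SDK
        // against a known workbook rather than re-testing the SDK through mocks.
        IDirectoryBuilder builder = new DiskDirectoryBuilder(@"..\..\Fixtures");   // hypothetical
        IOpenXmlUtilities openUtils = new OpenXmlUtilities();                      // hypothetical
        repository = new SwapRatesRepository(builder, openUtils);
    }

    [TestMethod]
    public void GetRateData_ReadsRatesFromKnownWorkbook()
    {
        SwapRates rates = repository.GetRateData();

        // Assert against values placed in the fixture workbook by hand.
        Assert.IsNotNull(rates);
    }

    [TestCleanup]
    public void TearDown()
    {
        // Nothing to clean up for a read-only workbook; the hook is here only
        // to mirror the answer's suggestion.
    }
}

The point is that the repository is exercised against a real workbook through the real wrappers, so the test verifies the wiring without asserting on the SDK's own behaviour.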
I'm writing unit tests with NUnit and the TestDriven.NET plugin. I'd like to provide parameters to a test method like this :
[TestFixture]
public class MyTests
{
    [Test]
    public void TestLogin(string userName, string password)
    {
        // ...
    }

    // ...
}
As you can see, these parameters are private data, so I don't want to hard-code them or put them in a file. Actually I don't want to write them anywhere, I want to be prompted each time I run the test.
When I try to run this test, I get the following message in the output window :
TestCase 'MyProject.MyTests.TestLogin' not executed: No arguments were provided
So my question is, how do I provide these parameters? I expected TestDriven.NET to display a prompt so that I could enter the values, but it didn't...
Sorry if my question seems stupid, the answer is probably very simple, but I couldn't find anything useful on Google...
EDIT: I just found a way to do it, but it's a dirty trick...
[Test, TestCaseSource("PromptCredentials")]
public void TestLogin(string userName, string password)
{
    // ...
}

static object[] PromptCredentials
{
    get
    {
        string userName = Interaction.InputBox("Enter user name", "Test parameters", "", -1, -1);
        string password = Interaction.InputBox("Enter password", "Test parameters", "", -1, -1);
        return new object[]
        {
            new object[] { userName, password }
        };
    }
}
I'm still interested in a better solution...
Use the TestCase attribute.
[TestCase("User1", "")]
[TestCase("", "Pass123")]
[TestCase("xxxxxx", "xxxxxx")]
public void TestLogin(string userName, string password)
{
// ...
}
Unit tests should normally not take any parameters; you create the necessary data within the test itself: set up the expected value, call the method you want to test with the necessary arguments, and compare the value returned by the tested method with the expected value.
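For example, a self-contained NUnit test along those lines (the Calculator class is just an illustration):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        // Arrange: the expected value and the object under test.
        var calculator = new Calculator();   // hypothetical class under test
        const int expected = 5;

        // Act: call the method under test with the necessary arguments.
        int actual = calculator.Add(2, 3);

        // Assert: compare the returned value with the expected value.
        Assert.AreEqual(expected, actual);
    }
}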
MS unit tests don't allow passing parameters to tests. Instead you need to create data-driven unit tests. Try the link, it may help you.
As I mentioned, I wouldn't call passing arguments to unit tests themselves good practice.
Update: I was young :). Consider Sarfraz's answer instead on how to pass parameters to NUnit tests.
I think you can solve this problem by using the RowTest plugin for NUnit found here http://www.andreas-schlapsi.com/2008/01/29/rowtest-extension-120/
You can create simple data-driven tests where the test data is provided by [Row] attributes. Here is an example of a test that is run over and over again with different parameters:
[TestFixture]
public class RowTestSample
{
    [RowTest]
    [Row( 1000, 10, 100.0000)]
    [Row(-1000, 10, -100.0000)]
    [Row( 1000, 7, 142.85715)]
    [Row( 1000, 0.00001, 100000000)]
    [Row(4195835, 3145729, 1.3338196)]
    public void DivisionTest(double numerator, double denominator, double result)
    {
        Assert.AreEqual(result, numerator / denominator, 0.00001);
    }
}
I agree with the other answers that passing arguments may not be best practice, but neither is hard-coding credentials or server addresses that may change at some point.
Inspired by the solution suggested in the question, I simply read console input instead of using input boxes. The arguments are saved in a file. When starting the tests, the file is redirected to standard input and read by an initialization function that runs before any test cases.
nunit tests.dll < test.config
This avoids user interaction and should be runnable by any automation script. The downside is that the password still has to be saved somewhere, but at least it can be saved locally on the tester's machine and is easy to change.
This was for a project where Excel sheets containing the tests (not unit tests by definition) were used to let others create test cases for a bigger server-side project without changing any code. It would have been bad if all test cases had to be forced into a single giant Excel sheet. Also, there was no CI, just many testing environments on different servers.
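A hedged sketch of what that initialization function might look like with NUnit 3; the [SetUpFixture] placement and the two-line file layout are assumptions, and whether standard input stays redirected for the test process depends on your runner:

using System;
using NUnit.Framework;

[SetUpFixture]
public class TestCredentials
{
    public static string UserName { get; private set; }
    public static string Password { get; private set; }

    [OneTimeSetUp]
    public void ReadCredentialsFromRedirectedInput()
    {
        // With `nunit tests.dll < test.config`, standard input comes from the
        // config file; it is assumed here to hold the user name and password
        // on two separate lines.
        UserName = Console.ReadLine();
        Password = Console.ReadLine();
    }
}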
Create a class and store the required values in const fields.
Note: if you declare the values as static fields instead of const, it won't work with the NUnit framework, because attribute arguments must be compile-time constants.
public class Credential
{
    public const string PUBLIC_USER = "PublicUser@gmail.com";
    public const string PASSWORD = "password123";
}

[Test]
[TestCase(Credential.PUBLIC_USER, Credential.PASSWORD)]
public void VerifyLogin(string username, string password)
{
    // ...
}