How do I run SpecFlow tests using NUnit 2.6.1 on a build server?
Also, how do you maintain and organize these tests so that they run successfully on the build server with multiple automation programmers coding separate tests?
I will try to answer the second part of your question for now:
"how do you maintainin and organize these tests to run successfully on the build server with multiple automation programmers coding seperate tests?"
These kinds of tests have a huge potential to turn into an unmaintainable mess, and here is why:
- steps are used to create contexts (for the scenario test)
- steps can be called from any scenario in any order
Because of these two simple facts, it's easy to compare this model with procedural/structured programming. The scenario context is no different from a set of global variables, and the steps are methods that can be called at any time, in any place.
What my team does to avoid a huge mess in the step files is to keep them as stupid as possible. All a step does is parse its arguments and call services that do the real, meaningful work and hold the context of the current test. We call these services 'xxxxDriver' (where xxxx is the domain object we are dealing with).
A stupid example:
[Given("a customer named (.*)")]
public void GivenACustomer(string customerName)
{
_customerDriver.CreateCustomer(customerName);
}
[Given("an empty schedule for the customer (.*)")]
public void GivenEmptySchedule(string customerName)
{
var customer = _customerDriver.GetCustomer(customerName);
_scheduleDriver.CreateForCustomer(customer);
}
The 'xxxxDriver' will contain all repositories, web gateways, stubs, mocks or anything else related to the corresponding domain object. Another important detail is that if you inject these drivers into the step files, SpecFlow will create one instance of each driver per scenario and share it among all the step files.
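A minimal sketch of what one of these drivers might look like (the repository interface and the Customer type here are hypothetical, for illustration only):

using System.Collections.Generic;

// Hypothetical driver: holds the scenario state for customers and talks to the
// real collaborators (repositories, gateways, stubs) so the step files don't have to.
public class CustomerDriver
{
    private readonly ICustomerRepository _customerRepository; // assumed repository abstraction
    private readonly Dictionary<string, Customer> _createdCustomers = new Dictionary<string, Customer>();

    public CustomerDriver(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public void CreateCustomer(string customerName)
    {
        var customer = new Customer { Name = customerName };
        _customerRepository.Save(customer);
        _createdCustomers[customerName] = customer; // remember it as scenario context
    }

    public Customer GetCustomer(string customerName)
    {
        return _createdCustomers[customerName];
    }
}

Because SpecFlow creates one driver instance per scenario, the dictionary above acts as the scenario context without resorting to global state.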
This was the best way we found to keep some consistency in how we maintain and extend the steps without stepping on each other's toes with a big team touching the same codebase. Another huge advantage is that it helps us find similar steps by navigating through the usages of the methods in the driver classes.
There's a clear example in the specflow codebase itself. (look at the drivers folder)
https://github.com/techtalk/SpecFlow/tree/master/Tests/TechTalk.SpecFlow.Specs
I am working on writing a tool which
- sets up a connection to Sql and runs a series of stored procedures
- Hits the file system to verify and also delete files
- Talks to other subsystems through exposed APIs
I am new to the concept of TDD but have been doing a lot of reading on it. I wanted to apply TDD to this development but I am stuck. There are a lot of interactions with external systems which need to be mocked/stubbed or faked. What I am finding difficult is the proper approach to take in doing this with TDD. Here is a sample of what I would like accomplished:
public class MyConfigurator
{
    public static void Start()
    {
        CheckSystemIsLicenced();            // will throw if it's not licenced; calls a library owned by the company
        CleanUpFiles();                     // clean up several directories
        CheckConnectionToSql();             // ensure a connection to SQL can be made
        ConfigureSystemToolsOnDatabase();   // runs a set of stored procedures; a range of checks is also implemented and will throw if something goes wrong
    }
}
After this I have another class which cleans up the system if things have gone wrong. For the purpose of this question it's not that relevant, but it essentially just clears certain tables and fixes up the database so that the tool can run again from scratch to do its configuration tasks.
It almost appears that when using TDD, the only tests I end up with are things like (assuming I am using FakeItEasy):
A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
It just ends up being a whole lot of tests which are nothing more than "MustHaveHappened" assertions. Am I doing something wrong? Is there a different way to start this project using TDD? Or is this a particular scenario where TDD is not really recommended? Any guidance would be greatly appreciated.
In your example, if the arrangement of the unit test shows lickey as the input, then it is reasonable to assert that the endpoint has been called with the proper value. In more complex scenarios, the input-to-assert flow covers more subsystems so that the test itself doesn't seem as trivial. You might set up an ID value as input and test that down the line you are outputting a value for an object that is deterministically related to the input ID.
One aspect of TDD is that the code changes while the tests do not - except for functionally equivalent refactoring. So your first tests would naturally arrange and assert data at the outermost endpoints. You would start with a test that writes a real file to the filesystem, calls your code, and then checks to see that the file is deleted as expected. Of course, the file system is a messy workspace for portable testing, so you might decide early on to abstract the file system by one step. Ditto with the database by using EF and mocking your DbContext or by using a mocked repository pattern. These abstractions can be pre-TDD application architecture decisions.
Something I do frequently is to use utility code that starts with an IFileSystem interface that declares methods that mimic a lot of what is available in System.IO.File. In production I use an implementation of IFileSystem that just passes through to File.XXX() methods. Then you can mock up and verify the interface instead of trying to setup and cleanup real files.
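A rough sketch of that pattern (the interface and class names are my own, not part of any standard library):

using System.IO;

// Thin abstraction over the parts of System.IO.File the code actually uses.
public interface IFileSystem
{
    bool FileExists(string path);
    void DeleteFile(string path);
    string ReadAllText(string path);
}

// Production implementation: just passes through to the static File.XXX() methods.
public class PhysicalFileSystem : IFileSystem
{
    public bool FileExists(string path) { return File.Exists(path); }
    public void DeleteFile(string path) { File.Delete(path); }
    public string ReadAllText(string path) { return File.ReadAllText(path); }
}

In the tests you then mock or fake IFileSystem and verify that DeleteFile was called, instead of touching the disk.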
In this particular method the only thing you can test is that the methods were called. It's ok to do what you are doing by asserting the mock classes. It's up to you to determine if this particular test is valuable or not. TDD assumes tests for everything, but I find it to be more practical to focus your testing on scenarios where it adds value. Hard for others to make that determination, but you should trust yourself to make the call in each specific scenario.
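For what it's worth, that kind of verification only works if the dependencies are injectable. A hedged sketch of how the licence check could be restructured and then asserted with FakeItEasy (the ILicenceChecker interface and the constructor injection are assumptions about a possible redesign, not the original tool's code):

using FakeItEasy;
using NUnit.Framework;

// Hypothetical seam around the company-owned licensing library.
public interface ILicenceChecker
{
    void CheckSystemIsLicenced(string licenceKey); // assumed to throw when not licenced
}

public class MyConfigurator
{
    private readonly ILicenceChecker _licenceChecker;

    public MyConfigurator(ILicenceChecker licenceChecker)
    {
        _licenceChecker = licenceChecker;
    }

    public void Start(string licenceKey)
    {
        _licenceChecker.CheckSystemIsLicenced(licenceKey);
        // ... CleanUpFiles(), CheckConnectionToSql(), ConfigureSystemToolsOnDatabase()
    }
}

[TestFixture]
public class MyConfiguratorTests
{
    [Test]
    public void Start_ChecksTheLicence()
    {
        var fakeLicenceChecker = A.Fake<ILicenceChecker>();

        new MyConfigurator(fakeLicenceChecker).Start("lickey");

        A.CallTo(() => fakeLicenceChecker.CheckSystemIsLicenced("lickey")).MustHaveHappened();
    }
}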
I think integration tests would add the most bang for the buck. Use the real DB and file system.
If you have complex logic in the tool, then you may want to restructure the tool's design to abstract out the DB and file system and write the unit tests with mocks. From the code snippet you posted, it looks like a simple script to me.
We are writing an app in MVC/C# on VS2013.
We use Selenium WebDriver for UI tests and have written a framework around it to make the process more declarative and less procedural. Here is an example:
[Test, Category("UITest")]
public void CanCreateMeasureSucceedWithOnlyRequiredFields()
{
ManageMeasureDto dto = new ManageMeasureDto()
{
Name = GetRandomString(10),
MeasureDataTypeId = MeasureDataTypeId.FreeText,
DefaultTarget = GetRandomString(10),
Description = GetRandomString(100)
};
new ManageScreenUITest<MeasureConfigurationController, ManageMeasureDto>(c => c.ManageMeasure(dto), WebDrivers).Execute(ManageScreenExpectedResult.SavedSuccessfullyMessage);
}
The Execute method takes the values in the DTO and calls Selenium's SendKeys and a bunch of other methods to actually perform the test, submit the form, and assert that the response contains what we want.
We have been happy with the results of this framework, but it occurs to me that something similar could also be used to create load-testing scenarios.
I.e., I could have another implementation of the UI test framework that uses an HttpWebRequest to issue the requests to the web server.
From there, I could essentially "gang up" my tests to create a scenario, e.g.:
public void MeasureTestScenario()
{
    CanListMeasures();
    CanSearchMeasures();
    measureId = CanCreateMeasure();
    CanEditMeasure(measureId);
}
The missing piece of the puzzle is: how do I run these "scenarios" under load? I.e., I'm after a test runner that can execute 1..N of these tests in parallel, and hopefully do things like ramp-up time, random waits in between each scenario, etc. Ideally the count of concurrent tests would also be configurable at run time, e.g. spawn 5 more test threads. Reporting isn't super important because I can log execution times as part of the web request.
Maybe there is another approach too, e.g. using C# to control a more traditional load-testing tool?
The real advantage of this method is handling more complicated screens, e.g. a screen that contains collections of objects. I can get a random parent record from the DB, use my existing AutoMapper code to create the nested DTO, have code that walks that DTO changing random values, and then use my test framework to submit that DTO's values as a web request. Much easier than hand-coding JMeter scripts.
cheers,
dave
While using NUnit as a runner for different tasks/tests is OK, I don't think it's the right tool for performance testing.
It uses reflection, it has overhead, it ends up using the predefined WebClient settings, etc. It's much better if a dedicated performance-testing tool is used. You can use NUnit to configure/start it :)
I disagree that NUnit isn't usable; with appropriate logging, any functional test runner can be used for performance testing. The overhead does not matter when the test code itself is providing data such as when requests are sent and responses are received, or when a page is requested and when it finishes loading. Parsing log files and generating some basic metrics is a pretty trivial task, IMO. If it becomes a huge project, that changes things, but more likely I could get by with a command-line app that takes an hour or two to write.
I think the real question is how much you can leverage by doing it with your normal test runner. If you already have a lot of logging in your test code and can easily invoke it, then using NUnit becomes pretty appealing. If you don't have sufficient logging, or a framework that would support some decent load scenarios, then using dedicated load-testing tools/frameworks would probably be easier.
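If you do end up writing that small command-line app, a minimal load driver over the scenario methods might look something like this (a sketch only; the thread count, ramp-up and think-time values are placeholders, and the Action you pass in stands in for something like MeasureTestScenario):

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class LoadRunner
{
    public static void Run(Action scenario, int concurrentUsers, TimeSpan rampUp, int iterationsPerUser)
    {
        var delayBetweenStarts = TimeSpan.FromMilliseconds(rampUp.TotalMilliseconds / concurrentUsers);
        var tasks = new Task[concurrentUsers];

        for (int user = 0; user < concurrentUsers; user++)
        {
            tasks[user] = Task.Factory.StartNew(() =>
            {
                var random = new Random(Guid.NewGuid().GetHashCode()); // per-thread think-time source
                for (int i = 0; i < iterationsPerUser; i++)
                {
                    var timer = Stopwatch.StartNew();
                    scenario();                                        // e.g. () => MeasureTestScenario()
                    timer.Stop();
                    Console.WriteLine("{0:O}\t{1} ms", DateTime.UtcNow, timer.ElapsedMilliseconds);
                    Thread.Sleep(random.Next(500, 2000));              // random wait between scenarios
                }
            });
            Thread.Sleep(delayBetweenStarts);                          // ramp up one simulated user at a time
        }

        Task.WaitAll(tasks);
    }
}

Parsing the timestamped console output afterwards gives you the basic throughput and latency numbers discussed above.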
We are just starting to write integration tests to test the database, data access layer, web service calls, etc.
Currently I have a few ideas about how to write integration tests, such as:
1) Always recreate the tables in the initialize function.
2) Always clear the data inside a function if you are saving new data inside the same function.
But I would like to know some more good practices.
As with all testing, it is imperative to start from a known state, and upon test completion, clear down to a clean state.
Also, test code often gets overlooked as not being "real" code and hence is not maintained properly... yet it is more important than the production code. At least as much design should go into the architecture of your tests. Plan reasonable levels of abstraction; for example, if you are testing a web app, consider having tiers like so: an abstraction of your browser interaction, an abstraction of the components on your pages, an abstraction of your pages, and your tests. Tests interact with pages and components, pages interact with components, components interact with the browser-interaction tier, and the browser-interaction tier interacts with your (possibly third-party) browser automation library.
If your test code is not maintained properly or well thought out, it becomes more of a hindrance than an aid to writing new code.
If your team is new to testing, there are many coding katas out there that aim to teach the importance of good tests (and, out of this, good code); they generally focus on the unit-testing level, but many of the principles are the same.
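As a rough illustration of the layering described above (all of the type names are hypothetical), tests talk to pages, pages talk to components, and only the lowest tier knows about the browser automation library:

// Lowest tier: the only place that touches the (possibly third-party) automation library.
public interface IBrowser
{
    void NavigateTo(string url);
    void Type(string elementId, string text);
    void Click(string elementId);
}

// Component tier: reusable widgets built on the browser abstraction.
public class LoginForm
{
    private readonly IBrowser _browser;
    public LoginForm(IBrowser browser) { _browser = browser; }

    public void SignIn(string user, string password)
    {
        _browser.Type("username", user);
        _browser.Type("password", password);
        _browser.Click("sign-in");
    }
}

// Page tier: composes components and knows page-level details such as URLs.
public class LoginPage
{
    private readonly IBrowser _browser;
    public LoginPage(IBrowser browser) { _browser = browser; }

    public LoginForm Open()
    {
        _browser.NavigateTo("http://localhost/login"); // placeholder URL
        return new LoginForm(_browser);
    }
}

// Test tier: reads like a user story, with no browser details at all:
// new LoginPage(browser).Open().SignIn("alice", "secret");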
In general, I would suggest you look into mocking your database access layer and web service call classes in order to make it more testable. There are lots of books on this subject, like Osherove's The Art of Unit Testing.
That having been said, integration tests should be repeatable. Thus, I would choose option 1: write a script that can recreate the database from scratch with test data. Option 2 is more difficult, because it can be hard to be sure that the cleaning function does not leave unwanted data residue.
When unit testing the DAL, I do it like this:
[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    // this grabs the data from the database using an XSD file to map the schema
    // and saves it as xml (backup.xml)
    DatabaseFixtureSetUp();
}

[SetUp]
public void SetUp()
{
    // this inserts test data into the database from xml (testdata.xml)
    // it clears the tables first so you have fresh data before every test
    DatabaseSetUp();
}

[TestFixtureTearDown]
public void TestFixtureTearDown()
{
    // this clears the tables and inserts the backup data into the database
    // to return it to the state it was in before running the tests
    DatabaseFixtureTearDown();
}
I am currently working on a C# project which makes use of an SQLite database. For the project I am required to do unit testing, but I was told that unit testing shouldn't involve external files, like database files, and that the tests should instead emulate the database.
If I have a function that checks whether something exists in the database, how could this sort of method be covered by a unit test?
In general it makes life easier if external files are avoided and everything is done in code. There is no rule that says "shouldn't", and sometimes it just makes more sense to have the external dependency. But only after you have considered how not to have it, and realized what the tradeoffs are.
Secondly, what Bryan said is a good option and the one I've used before.
In an application that uses a database, there will be at least one component whose responsibility is to communicate with that database. The unit test for that component could involve a mocked database, but it is perfectly valid (and often desirable) to test the component using a real database. After all, the component is supposed to encapsulate and broker communication with that database -- the unit test should test that. There are numerous strategies to perform such unit tests conveniently -- see the list of related SO questions in the sidebar for examples.
The general prescription to avoid accessing databases in unit tests applies to non-database components. Since non-database components typically outnumber database-related components by a wide margin, the vast majority of unit tests should not involve a database. Indeed, if such non-database components required a database to be tested effectively, there is likely a design problem present -- probably improper separation of concerns.
Thus, the principle that unit tests should avoid databases is generally true, but it is not an absolute rule. It is just a (strong) guideline that aids in structuring complex systems. Following the rule too rigidly makes it very difficult to adequately test "boundary" components that encapsulate external systems -- places in which bugs find it very easy to hide! So, the question that one should really be asking oneself when a unit test demands a database is this: is the component under test legitimately accessing the database directly or should it instead collaborate with another that has that responsibility?
This same reasoning applies to the use of external files and other resources in unit tests as well.
With SQLite, you could use an in-memory database. You can stage your database by inserting data and then run your tests against it.
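A small sketch of that approach using System.Data.SQLite (the table and queries are placeholders; the same idea works with other SQLite providers). Note that the in-memory database only lives as long as the connection stays open:

using System.Data.SQLite;
using NUnit.Framework;

[TestFixture]
public class CustomerExistsTests
{
    private SQLiteConnection _connection;

    [SetUp]
    public void SetUp()
    {
        // The database exists only for the lifetime of this open connection.
        _connection = new SQLiteConnection("Data Source=:memory:");
        _connection.Open();

        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "CREATE TABLE Customers (Name TEXT); INSERT INTO Customers VALUES ('Alice');";
            cmd.ExecuteNonQuery();
        }
    }

    [Test]
    public void Customer_Alice_Exists()
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM Customers WHERE Name = 'Alice'";
            Assert.AreEqual(1L, (long)cmd.ExecuteScalar());
        }
    }

    [TearDown]
    public void TearDown()
    {
        _connection.Dispose();
    }
}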
Once databases get involved, it always blurs the line between unit testing and integration testing. Having said that, it is always a nice and very useful test to be able to put something in a database (setup), do your test, and remove it at the end (cleanup). This lets you test one part of your application end to end.
Personally I like to do this in an attribute-driven fashion, by specifying the SQL scripts to run for each test as attributes, like so:
[PreSqlExecute("SetupTestUserDetails.sql")]
[PostSqlExecute("CleanupTestUserDetails.sql")]
public void UpdateUserNameTest()
{
The connection strings come from the app.config as usual and can even be a symbolic link to the app.config in your main project.
Unfortunately this isn't a standard feature of the MS test runner that ships with Visual Studio. If you are using PostSharp as your AOP framework, this is easy to do. If not, you can still get the same functionality with the standard MS test runner by using a feature of .NET called "Context Bound Objects". This lets you inject custom code into an object's creation chain to do AOP-like things, as long as your objects inherit from ContextBoundObject.
I did a blog post with more details and a small, complete code sample here.
http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/
I think it is a really bad idea to have unit tests that depend on database information.
I also think it is a bad idea to use SQLite for unit tests.
You need to test the objects' protocol, so if you need something in your tests you should create it somewhere in the tests (usually in setUp).
Since it is difficult to remove persistence entirely, the popular way to do it is to use SQLite, but always create what you need inside the unit tests themselves.
Check this link, Unit Tests And Databases; I think it will be more helpful.
It's best to use a mocking framework to mimic a database. In C# you can, for example, mock Entity Framework's DbContext. Even the use of SQLite is an outside dependency for your code.
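For example, instead of hitting SQLite, the component under test can depend on a small repository interface that is faked in the test. A sketch (the interface and service are hypothetical; FakeItEasy is used here only because it appears earlier on this page, and any mocking framework would do):

using FakeItEasy;
using NUnit.Framework;

// Hypothetical seam in front of the database.
public interface ICustomerRepository
{
    bool Exists(string customerName);
}

public class WelcomeService
{
    private readonly ICustomerRepository _repository;
    public WelcomeService(ICustomerRepository repository) { _repository = repository; }

    public string Greet(string customerName)
    {
        return _repository.Exists(customerName) ? "Welcome back!" : "Hello, new customer!";
    }
}

[TestFixture]
public class WelcomeServiceTests
{
    [Test]
    public void Greet_KnownCustomer_WelcomesThemBack()
    {
        var repository = A.Fake<ICustomerRepository>();
        A.CallTo(() => repository.Exists("Alice")).Returns(true);

        Assert.AreEqual("Welcome back!", new WelcomeService(repository).Greet("Alice"));
    }
}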
So I've written a class and I have the code to test it, but where should I put that code? I could make a static method Test() for the class, but that doesn't need to be there during production and clutters up the class declaration. A bit of searching told me to put the test code in a separate project, but what exactly would the format of that project be? One static class with a method for each of the classes, so if my class was called Randomizer, the method would be called testRandomizer?
What are some best practices regarding organizing test code?
EDIT: I originally tagged the question with a variety of languages to which I thought it was relevant, but it seems like the overall answer to the question may be "use a testing framework", which is language specific. :D
Whether you are using a test framework (I highly recommend doing so) or not, the best place for the unit tests is in a separate assembly (C/C++/C#) or package (Java).
You will only have access to public classes and methods (and protected members via subclassing); however, unit testing usually only tests public APIs anyway.
I recommend you add a separate test project/assembly/package for each existing project/assembly/package.
The format of the project depends on the test framework - for a .NET test project, use VS's built-in test project template, or NUnit if your version of VS doesn't support unit testing; for Java, use JUnit; for C/C++, perhaps CppUnit (I haven't tried that one).
Test projects usually contain one static class-initialization method, one static class tear-down method, one non-static init method that runs before every test, one non-static tear-down method that runs after every test, and one non-static method per test, plus any other helper methods you add.
The static methods let you copy DLLs, set up the test environment and clean up the test environment afterwards; the non-static shared methods are for reducing duplicate code; and the actual test methods are for preparing the test-specific input and expected output and comparing them.
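A skeleton of that layout using NUnit 2.x attribute names (the bodies are placeholders; MSTest has equivalent attributes such as [ClassInitialize], which must be static):

using NUnit.Framework;

[TestFixture]
public class RandomizerTests
{
    [TestFixtureSetUp]    // runs once, before any test in this fixture
    public void FixtureSetUp()
    {
        // copy dlls, set up the shared test environment
    }

    [TestFixtureTearDown] // runs once, after all tests in this fixture have finished
    public void FixtureTearDown()
    {
        // clean up the shared test environment
    }

    [SetUp]               // runs before every test
    public void SetUp()
    {
        // prepare per-test state shared by all tests, to reduce duplicate code
    }

    [TearDown]            // runs after every test
    public void TearDown()
    {
        // reset per-test state
    }

    [Test]
    public void Randomizer_ReturnsValueInRequestedRange()
    {
        // arrange the test-specific input and expected output, act, then compare them
    }
}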
Where you put your test code depends on what you intend to do with the code. If it's a stand-alone class that, for example, you intend to make available to others for download and use, then the test code should be a project within the solution. The test code would, in addition to providing verification that the class was doing what you wanted it to do, provide an example for users of your class, so it should be well-documented and extremely clear.
If, on the other hand, your class is part of a library or DLL, and is intended to work only within the ecosystem of that library or DLL, then there should be a test program or framework that exercises the DLL as an entity. Code coverage tools will demonstrate that the test code is actually exercising the code. In my experience, these test programs are, like the single class program, built as a project within the solution that builds the DLL or library.
Note that in both of the above cases, the test project is not built as part of the standard build process. You have to build it specifically.
Finally, if your class is to be part of a larger project, your test code should become a part of whatever framework or process flow has been defined for your greater team. On my current project, for example, developer unit tests are maintained in a separate source control tree that has a structure parallel to that of the shipping code. Unit tests are required to pass code review by both the development and test team. During the build process (every other day right now), we build the shipping code, then the unit tests, then the QA test code set. Unit tests are run before the QA code and all must pass. This is pretty much a smoke test to make sure that we haven't broken the lowest level of functionality. Unit tests are required to generate a failure report and exit with a negative status code. Our processes are probably more formal than many, though.
In Java you should use JUnit 4, either by itself or (I think better) with an IDE. We have used three environments: Eclipse, NetBeans and Maven (with and without an IDE). There can be some slight incompatibilities between these if they are not deployed systematically.
Generally all tests are in the same project but under a different directory/folder. Thus a class:
org.foo.Bar.java
would have a test
org.foo.BarTest.java
These are in the same package (org.foo) but would be organized in directories:
src/main/java/org/foo/Bar.java
and
src/test/java/org/foo/BarTest.java
These directories are universally recognised by Eclipse, NetBeans and Maven. Maven is the pickiest, whereas Eclipse does not always enforce strictness.
You should probably avoid calling other classes TestPlugh or XyzzyTest as some (old) tools will pick these up as containing tests even if they don't.
Even if you only have one test for your method (and most test authorities would expect more to exercise edge cases) you should arrange this type of structure.
EDIT Note that Maven is able to create distributions without tests even if they are in the same package. By default Maven also requires all tests to pass before the project can be deployed.
Most setups I have seen or use have a separate project that has the tests in them. This makes it a lot easier and cleaner to work with. As a separate project it's easy to deploy your code without having to worry about the tests being a part of the live system.
As testing progresses, I have seen separate projects for unit tests, integration tests and regression tests. One of the main ideas for this is to keep your unit tests running as fast as possible. Integration & regression tests tend to take longer due to the nature of their tests (connecting to databases, etc...)
I typically create a parallel package structure in a distinct source tree in the same project. That way your tests have access to public, protected and even package-private members of the class under test, which is often useful to have.
For example, I might have
myproject
  src
    main
      com.acme.myapp.model
        User
      com.acme.myapp.web
        RegisterController
    test
      com.acme.myapp.model
        UserTest
      com.acme.myapp.web
        RegisterControllerTest
Maven does this, but the approach isn't particularly tied to Maven.
This would depend on the testing framework you are using - JUnit, NUnit, or something else? Each one will document some way to organize the test code. Also, if you are using continuous integration, then that will also affect where and how you place your tests. For example, this article discusses some options.
Create a new project in the same solution as your code.
If you're working with C#, then Visual Studio will do this for you if you select Test > New Test... It has a wizard which will guide you through the process.
Hmm, you want to test a random number generator... it may be better to create a strong mathematical proof of the algorithm's correctness, because otherwise you must be sure that every sequence ever generated has the desired distribution.
Create separate projects for unit-tests, integration-tests and functional-tests. Even if your "real" code has multiple projects, you can probably do with one project for each test-type, but it is important to distinguish between each type of test.
For the unit-tests, you should create a parallel namespace-hierarchy. So if you have crazy.juggler.drummer.Customer, you should unit-test it in crazy.juggler.drummer.CustomerTest. That way it is easy to see which classes are properly tested.
Functional- and integration-tests may be harder to place, but usually you can find a proper place. Tests of the database-layer probably belong somewhere like my.app.database.DatabaseIntegrationTest. Functional-tests might warrant their own namespace: my.app.functionaltests.CustomerCreationWorkflowTest.
But tip #1: be strict about separating the various kinds of tests. In particular, be sure to keep the collection of unit tests separate from the integration tests.
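A minimal C# illustration of the parallel namespace idea (the project layout and names are hypothetical, borrowed from the example above):

// Production project: Crazy.Juggler.Drummer/Customer.cs
namespace Crazy.Juggler.Drummer
{
    public class Customer
    {
        public string Name { get; set; }
    }
}

// Test project: Crazy.Juggler.Drummer.Tests/CustomerTest.cs
// Same namespace, so the test sits "next to" the class it covers.
namespace Crazy.Juggler.Drummer
{
    using NUnit.Framework;

    [TestFixture]
    public class CustomerTest
    {
        [Test]
        public void Name_CanBeAssigned()
        {
            var customer = new Customer { Name = "Alice" };
            Assert.AreEqual("Alice", customer.Name);
        }
    }
}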
In the case of C# and Visual Studio 2010, you can create a test project from the built-in templates, and it will be included in your project's solution. Then you will be able to specify which tests to run when building your project. All tests will live in a separate assembly.
Otherwise, you can use the NUnit assembly: reference it in your solution and start creating test methods for all the objects you need to test. For bigger projects, I prefer to locate these tests in a separate assembly.
You could write your own tests from scratch, but I would strongly recommend using an existing framework.