I have some 200 unit tests for an app that run against a simulated piece of hardware with an asynchronous messaging api. All the tests run and pass individually but many of them will fail when run as a group, because of the asynchronous nature of the external calls.
Is there a way that I can set up test sequencing in VS 2012 so that I can add a small delay after each test, to give the externals a chance to clear their cache?
edit:
The order also seems to matter somewhat; certain tests lock up the external resources longer than others.
I understand that these are integration tests and unit tests mixed together. Much of this is unit testing, much of it uses mocked services, and much of it is API integration testing.
I inherited this suite of tests, so the separation of "integration tests" and "unit tests" is beyond my control.
A little context: we're interfacing with LabVIEW DLLs here, which come with their own set of eccentricities that we're trying to mitigate and instigate.
Yes, you can add a wait with something like this:
System.Threading.Thread.Sleep(5000);
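If the built-in MSTest runner in VS 2012 is what you're using, one way to get that delay after every test without editing each test body is to put the wait in a [TestCleanup] method. A minimal sketch, assuming MSTest (the class, test names and the 2-second value are just placeholders to tune):

    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class HardwareMessagingTests
    {
        // Runs after every test in this class; gives the simulated hardware
        // time to flush its asynchronous message cache before the next test.
        [TestCleanup]
        public void WaitForExternalsToSettle()
        {
            Thread.Sleep(2000); // tune to whatever your hardware actually needs
        }

        [TestMethod]
        public void SendsCommand()
        {
            // ... existing test body ...
        }
    }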
But I would instead recommend that you create a simple LabVIEW function that queries the status of all instruments inside a while loop and "waits" for each one to respond. For example, if you're using GPIB, the standard query is:
*IDN?
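To sketch that polling idea from the C# side (the queryInstrument delegate and the address list are hypothetical stand-ins for however you reach the LabVIEW layer), a wait-until-ready helper might look like:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    static class InstrumentSync
    {
        // Polls every instrument with *IDN? until it answers, or gives up at the
        // timeout. queryInstrument(address, query) is a placeholder for your
        // GPIB/VISA call and should return null or empty when there is no reply.
        public static bool AllInstrumentsReady(
            IEnumerable<string> addresses,
            Func<string, string, string> queryInstrument,
            TimeSpan timeout)
        {
            var deadline = DateTime.UtcNow + timeout;
            foreach (var address in addresses)
            {
                while (string.IsNullOrEmpty(queryInstrument(address, "*IDN?")))
                {
                    if (DateTime.UtcNow > deadline)
                        return false;      // this instrument never responded
                    Thread.Sleep(100);     // small back-off between polls
                }
            }
            return true;
        }
    }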
You could also look into NI's TestStand product which is specifically designed to coordinate complex asynchronous tests and I believe it has an API you can access from C#.
http://www.ni.com/teststand/whatis/
If you are using VS 2012 or later, there is also Microsoft Fakes, which is Microsoft's own mocking framework.
Otherwise, Rhino Mocks works fairly well, though it has a bit of a learning curve.
I have also broken up classes to separate unit tests that conflicted with each other.
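If Rhino Mocks is the route you take, a stub looks roughly like this (IMessageBus here is a made-up interface standing in for the asynchronous hardware API, just to show the shape of the calls):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;

    public interface IMessageBus
    {
        string Send(string command);
    }

    [TestClass]
    public class MessageHandlerTests
    {
        [TestMethod]
        public void ReturnsCannedReply()
        {
            // Stub the bus so the test never touches the real (or simulated) hardware.
            var bus = MockRepository.GenerateStub<IMessageBus>();
            bus.Stub(b => b.Send("PING")).Return("PONG");

            Assert.AreEqual("PONG", bus.Send("PING"));
        }
    }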
Related
I have a C# library that runs a couple of PowerShell scripts to manipulate the Windows hypervisor (e.g. turning VMs on and off, creating a VM with VHDs, getting switches, etc.). It's hard to mock the environment and control the output of the scripts. Scripts that check or validate things are easier, but the scripts used for operational purposes can be a headache because most of these methods are irreversible.
I found a good unit test framework for PowerShell, Pester. But my code consists of a great deal of C# code. Is there any good way to handle this unit testing problem gracefully?
Thanks in advance!
There are several points to make here:
When the code you test does multiple things (units), it is, by definition, an integration test, since it tests the integration of multiple units (see the comment by @Xerillio).
When the code you test interacts with infrastructure (file system, database, or in your case PowerShell and VMs, etc.), it is, by definition, an integration test.
You could try to extract small amounts of code into separate functions in C# and unit test those, giving you certainty that those work, and do the same in PowerShell (using Pester). But the real question is: what do you want to achieve? I would assume you want to be sure your application works now and keeps working in the future (does not regress), in which case you could look into a more end-to-end approach to testing.
Find the most outer point of your application with what you can interact from a test, so you test the biggest part of the stack you can.
Check which parts you can automatically test without causing havoc (you mention "irreversible methods")
Use a unit test framework for setting up the tests in C# (like xUnit), expressing clear setup and teardown phases that, respectively, set up everything needed before a test and clean up after a test; see the sketch at the end of this answer. (Keep in mind that tests might run in parallel by default, and this doesn't go well with shared resources on the file system etc.)
This avoids having to rewrite code just because you want to unit test it. On the other hand, extracting pure units makes testing those specific units way easier.
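A minimal xUnit sketch of those setup/teardown phases, where the constructor is the setup and Dispose is the teardown (PowerShellRunner, the cmdlet strings, and the VM name are all hypothetical placeholders for your own script-invoking code):

    using System;
    using Xunit;

    public class VmLifecycleTests : IDisposable
    {
        private const string TestVmName = "xunit-throwaway-vm"; // hypothetical

        public VmLifecycleTests()
        {
            // Setup: create the throw-away VM the test operates on.
            PowerShellRunner.Run($"New-VM -Name {TestVmName}");
        }

        [Fact]
        public void StartVm_ReportsRunning()
        {
            PowerShellRunner.Run($"Start-VM -Name {TestVmName}");
            string state = PowerShellRunner.Run($"(Get-VM -Name {TestVmName}).State");
            Assert.Equal("Running", state.Trim());
        }

        public void Dispose()
        {
            // Teardown: always remove the VM, even when the test failed.
            PowerShellRunner.Run($"Remove-VM -Name {TestVmName} -Force");
        }
    }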
Using Visual Studio 2017, NUnit and the ReSharper test runner, how do you maintain good unit test speed when doing TDD in a large C# project (5000+ tests)? Even if each of those tests takes only 5 ms, that's 25 seconds, which is quite slow for a TDD cycle.
Our tests don't call the database nor do they call external web services. They only test business logic.
I have found that using Moq, a single Mock.Setup() call alone takes almost 1 ms. Since we might have a few Moq setup calls per test, this is the primary culprit for our slow unit tests.
Is there any way to speed up the tests? Are there any mocking libraries faster than Moq? Or maybe another test runner that is faster?
You are going down the wrong rabbit hole: the overall runtime of all your unit tests is still in a very reasonable range!
While doing development (maybe using TDD) you don't care about all unit tests. You only care about those that are relevant to the current component/package/... !
As in: when you make a change in file A, you probably want to (manually) run all unit tests for the directory A lives in. You make another small change, you run these tests again.
Then, later on, when you think: "I am done for now", then you invoke all unit tests, to ensure you didn't break something on the other end of the building by rearranging the furniture in that room over here.
So, the answer is: you are fine, don't worry.
We have 5000+ Java unit tests. On our fastest build server, it can take about 10 minutes to run them all. But that is still OK. The backend build still comes back after 20 minutes and tells us "broken" or "all fine". Why? Because the build server only kicks in when I decide that my change set is complete and I push it to the server.
If those 25 seconds are a problem for you, it is because you are running all tests too often by triggering them manually. So rather spend your energy figuring out clever ways to run only the relevant tests, in an efficient way, when working on a specific problem. (In Java with JUnit it is easy: I click on the current package and go "run all tests in here".)
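With NUnit you can get a similar "run only this area" workflow by putting categories on your fixtures and filtering on them; a minimal sketch (the category name and the class under test are invented):

    using NUnit.Framework;

    [TestFixture]
    [Category("Billing")] // group tests by the component they cover
    public class InvoiceCalculatorTests
    {
        [Test]
        public void AppliesDiscount()
        {
            // InvoiceCalculator is a stand-in for whatever business logic you test here.
            Assert.AreEqual(90m, new InvoiceCalculator().Total(100m, discount: 0.10m));
        }
    }

The NUnit 3 console runner can then run just that slice with a filter such as --where "cat == Billing", and the ReSharper runner also lets you group and run tests by category.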
Consider getting rid of most tests you have.
While I don't have a lot of experience with unit testing and TDD, the limited experience I do have points to most unit tests being quite useless. And there are experienced people who support my opinion; as an example of this viewpoint, consider these two articles:
https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
https://rbcs-us.com/documents/Segue.pdf
I'm not sure about the continuity of this discussion (i.e. who was the first to bring it up), but here are a couple of people who reference the articles above and agree with them, and maybe add a couple of their own points:
https://medium.com/pacroy/why-most-unit-testing-is-waste-tests-dont-improve-quality-developers-do-47a8584f79ab
https://techbeacon.com/1-unit-testing-best-practice-stop-doing-it
Do note that the argument is not against testing as such; it's about extensive unit tests as opposed to integration and system-level testing.
I am trying to understand acceptance tests, but I am confused about where they start or what kinds of tests are involved.
Do I have to use automated GUI test frameworks, or do I have to use unit tests? What is the boundary of an acceptance test?
Edit: My question is about automated acceptance tests.
Acceptance testing is done after the entire application/software is developed and integrated.
Acceptance testing is done mainly to test whether the application meets the user requirements.
There are mainly 2 types of acceptance testing.
Alpha testing
Beta testing
Acceptance testing is mainly done by the client (person who asked for the software to be developed) and end users.
Alpha testing is done by the client. He is helped by the developer.
Here the client looks at the software to make sure all his requirements are fulfilled.
Beta testing is done after Alpha testing is completed.
Here the application is released to a set of people who behave as end users and use the applications.
Unit tests are not to be confused with acceptance tests.
Acceptance tests are basically requirements, written as tests so that:
It is clear when requirements are met;
Actual testing is easier to plan, and to run.
Unit tests are automated tests for small bits of code, used to keep an eye on all the little bits without the need for constant (and much hated) manual checks.
You can go down the UI road. Selenium or Watir are solid tools that can be used to run a UI-based test suite.
If you are a .NET developer, you can use WatiN, but the problem with it is that it seems to be pretty much dead, as it has had no new version since April 2011.
I did manage to get a decent test suite working for me a while back, integrating SpecFlow (more on that later) and WatiN, and it worked fine.
However, as time went by, I realized that when I was doing UI-based tests, all I was doing was loading a page, clicking something, and then checking the results in the DB. Sometimes I also checked that the screen showed me the expected message, but that's all. That conclusion drove me off UI-based testing.
What I started doing is making sure the UI is built on rules and idioms. The tooling nowadays (ASP.NET MVC, Razor templates, or better yet knockout.js) allows us to do so without too much pain. When the UI is built methodically, and not by everybody throwing whatever field they like on the page, most of the time all you need to test is the methods that build it, and not the result. Of course, if I do want to test it (and in some scenarios you will), it is much easier (and faster) to test it with tools like QUnit.
So my current way of practicing ATDD is:
Use SpecFlow to get business requirements into test code (a minimal sketch follows below).
Test only the "code behind".
Use knockout.js for the UI plumbing (using lots of custom bindings).
Create standards for the models that are returned to the view.
Treat UI behavior tests as unit tests.
Here is a good starting point for SpecFlow: http://www.infoq.com/articles/Spec-Flow
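For the SpecFlow step, a minimal sketch of a binding class looks like this (the scenario wording and RegistrationService are made up for illustration; the point is that the business requirement drives the test, and the step definitions call code behind rather than the UI):

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // The matching .feature file (Gherkin) would read roughly:
    //   Scenario: New user registers
    //     Given a visitor named "Alice"
    //     When she submits the registration form
    //     Then an account exists for "Alice"

    [Binding]
    public class RegistrationSteps
    {
        private readonly RegistrationService _service = new RegistrationService(); // hypothetical app service
        private string _name;

        [Given(@"a visitor named ""(.*)""")]
        public void GivenAVisitorNamed(string name)
        {
            _name = name;
        }

        [When(@"she submits the registration form")]
        public void WhenSheSubmitsTheRegistrationForm()
        {
            _service.Register(_name);
        }

        [Then(@"an account exists for ""(.*)""")]
        public void ThenAnAccountExistsFor(string name)
        {
            Assert.IsTrue(_service.AccountExists(name));
        }
    }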
I was browsing the internet and this site for a while, and instead of finding ways to unit test my existing code, the only finding was to separate the logic from the interaction with the user (the MVC approach). Although this is great for new projects, it is time-consuming and, as a result, too expensive for existing ones. Is there a way to create specific unit tests, ideally automated, for existing GUI projects that unfortunately connect directly to databases or other systems to get data, where the data is manipulated before it is shown? Currently we have two projects, one being MFC, the other C# .NET 2.0. Thanks a lot.
Unit testing won't cut it here, considering you can't change your existing code (not to mention you don't really unit test a UI). You should look for some kind of GUI test automation/scripting tool, like Sikuli. Quoting literally the first paragraph from their website:
Sikuli is a visual technology to automate and test graphical user interfaces (GUI) using images (screenshots).
It doesn't get any simpler than that. You "tell" the tool which parts of your UI it should observe and interact with, and it records and replays them. Skimming through this presentation will give you an idea of what exactly you can do (you might also check their video). It probably won't solve all your problems, but it might be an alternative worth considering.
Unit testing an already existing project is always a challenge. However, I can point you to some open source tools that will help you automate unit testing:
C++
Boost unit test framework
Google Mock
C#
NUnit
NMock
It is possible to produce some level of automated testing, but not unit tests.
Unit tests, by definition, test small units of logic decoupled from the system as a whole. I'd recommend new code be written in the way you described (mvc etc) to be unit testable.
With your existing code, unit testing will obviously require refactoring, which I appreciate is not in your timeframe. You will need to work with what you've got and look at a way to perform more whole-system automated testing, probably driven through the UI. The fact that these are not unit tests is by the by; they are helpful tests to have even if you also have unit tests. It's helpful to know the distinction, though, when you are searching for resources.
You are probably best searching for automated UI testing. With the .NET apps, you may find something like White useful.
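To give a rough idea of what a White-driven test looks like (this sketch assumes the TestStack.White fork; the executable path, window title and automation IDs are invented, so substitute your own):

    using NUnit.Framework;
    using TestStack.White;
    using TestStack.White.UIItems;
    using TestStack.White.UIItems.Finders;
    using TestStack.White.UIItems.WindowItems;

    [TestFixture]
    public class MainWindowTests
    {
        [Test]
        public void ClickingRefreshShowsLoadedStatus()
        {
            // Launch the existing executable as-is; no refactoring of the app needed.
            var app = Application.Launch(@"C:\MyApp\MyApp.exe");
            try
            {
                Window window = app.GetWindow("My App");
                var refresh = window.Get<Button>(SearchCriteria.ByAutomationId("RefreshButton"));
                refresh.Click();

                var status = window.Get<Label>(SearchCriteria.ByAutomationId("StatusLabel"));
                Assert.AreEqual("Loaded", status.Text);
            }
            finally
            {
                app.Close();
            }
        }
    }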
If you are lucky enough to have a Premium (at least) version of Visual Studio 2010, then you could consider writing Coded UI Tests.
A UI test is basically an automated sequence of actions (mouse, keyboard...) on a GUI. These are very high-level tests (or functional tests), not unit tests, but they can help in testing a GUI application that already exists.
For instance, you can easily automate CRUD actions (which imply a database) and check (assert) that the actions have produced the expected result in the UI (a newly created item in a list...).
Writing UI tests can be very time-consuming because there are various aspects you have to test. Thank God there are a lot of frameworks to achieve that result, but you always have to write some code.
I assume you already did unit testing (Visual Studio itself comes with a not-so-bad unit test framework), so what you want to check is not algorithms but UI automation/results. What does that mean? Everything that is code must be tested by code (database operations and algorithms, for example). Even some UI controls can somehow be tested by code (example: if I simulate a user click, I'll get that event fired when this condition is true). Trust me, UI testing is a black art, and often you'll get failed tests even if everything is OK.
Simple stress scenario
For a simple scenario, for example to stress your application to reproduce a bug by repeating the same operation many times, you can use a macro recorder (such as WinMacro). You record the user inputs and then you run that macro in a loop. If there's a subtle bug, you have many chances to reproduce (and/or to find) it when those actions are repeated 5000 times in a night. That done, you'll get data from your logs.
Simple scenario
If your application can be somehow automated (it may be easy for .NET applications using VSA), you can prepare some "good" macros to automate an operation, put the results in a file, and compare them with a known-good results data file.
Simple tip: for an MFC application you can write your own "macro" with a text file where each line is a Windows message with its parameters; read it, parse it, and SendMessage() the messages to your application to simulate user input, menu clicks and so on. Grab, for example, a text box value and compare it with something known. WinSpy++ is your friend.
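A bare-bones sketch of that message-script idea, driving the MFC app from a small C# harness via P/Invoke (the script format, window title and the WM_COMMAND example are illustrative; WinSpy++ gives you the real window names and control IDs):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    static class MessageScriptPlayer
    {
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        [DllImport("user32.dll")]
        static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        // Each script line: "<message-in-hex> <wParam> <lParam>",
        // e.g. "0x0111 101 0" sends WM_COMMAND with command ID 101 (a menu item, say).
        public static void Play(string windowTitle, string scriptPath)
        {
            IntPtr hwnd = FindWindow(null, windowTitle);
            if (hwnd == IntPtr.Zero)
                throw new InvalidOperationException("Target window not found: " + windowTitle);

            foreach (string line in File.ReadAllLines(scriptPath))
            {
                string[] parts = line.Split(' ');
                uint msg = Convert.ToUInt32(parts[0], 16);
                SendMessage(hwnd, msg,
                            new IntPtr(long.Parse(parts[1])),
                            new IntPtr(long.Parse(parts[2])));
            }
        }
    }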
Complex scenario
For anything else (is my custom control drawing everything the right way? when the user clicks that button, do the UI colors change?) you have to use a more complex tool. There are several tools to automate UI testing; what you need to create coded UI tests is built into Visual Studio 2010 (though not in every edition). What does that mean? You write code to automate your application and then you write more code to check its results (sometimes even comparing bitmaps with known results). It may be tedious and a lot of work, but you can test virtually everything, even if the application hasn't been designed for UI testing.
Start reading this from MSDN.
There is a plethora of commercial tools too (even if I have never used them in any project), so I won't post any links; I guess you'll get a lot of results on Google.
Mocking is usually the best approach for simulating integration points, but unfortunately most mocking frameworks fall short if the code is too interconnected and buries dependencies in private methods, etc.
It might be worth looking into a framework called Moles (made by Microsoft) that has very few limitations on what you can mock. It even handles private methods!
Perhaps you could use it to mock your DB calls to test your data manipulation?
There are several tutorials online.
Here's one that might get you started:
http://www.unit-testing.net/CurrentArticle/How-To-Mock-Code-Dependencies-Using-Moles.html
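For a flavour of what Moles detours look like, here is the classic DateTime.Now example from the Moles tutorials (your own DB-call moles would follow the same generated "M"-prefixed pattern; this assumes you've added an mscorlib.moles file to the test project so the mole types get generated):

    using System;
    using System.Moles; // generated moles for mscorlib, which contain MDateTime
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ClockDependentTests
    {
        [TestMethod]
        [HostType("Moles")] // Moles tests must run under the Moles host
        public void UsesFrozenClock()
        {
            // Detour the static DateTime.Now getter to a fixed value.
            MDateTime.NowGet = () => new DateTime(2011, 1, 1);

            Assert.AreEqual(2011, DateTime.Now.Year);
        }
    }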
I have read umpteen answers on SO while searching for NUnit + dependent methods + order of test execution. Every single answer suggests that forcing any sort of order on unit tests is extremely evil.
I am writing Selenium tests using NUnit.
So I am trying to write integration tests using a unit testing framework!
To cite an example of integration tests (and this is just one example): I need to create a valid account before proceeding with other tests. If the account creation fails, then I would like to abort the entire test run.
Since I don't want to rely on the alphabetic order of tests, and in the true spirit of NUnit, I decided to create an account before any further test. Though it does not look right to me, for two core reasons:
Unnecessary code duplication/execution
If the application's account creation is not working, all my tests would still try to create an account again and again, failing each time.
I am inclined to think that NUnit may not be the right fit for Selenium tests.
But if not NUnit, then what should I use?
Selenium Core itself comes with a TestRunner that is written in JavaScript, and you can run your tests directly from the browser.
For more see:
http://www.developerfusion.com/article/84484/light-up-your-development-with-selenium-tests/
Apart from that, tests written in C# using NUnit are much easier to write and maintain. Are you using SetUp and TearDown while writing your tests? That way you can avoid code duplication.
Regarding your second point, you can have a flag that is set on the first setup failure and skips the setup the next time, or the setup itself can track it and fail quickly the next time. And in NUnit, tests don't run if the setup fails.
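A rough sketch of that flag idea with NUnit (CreateAccount stands in for your actual Selenium-driven account-creation helper):

    using NUnit.Framework;

    [TestFixture]
    public class AccountDependentTests
    {
        private static bool _accountSetupFailed;

        [SetUp]
        public void SetUp()
        {
            // Fail fast instead of hammering a broken registration page
            // on every remaining test in the run.
            if (_accountSetupFailed)
                Assert.Fail("Skipping: account creation already failed earlier in this run.");

            try
            {
                CreateAccount(); // hypothetical helper that drives the registration pages
            }
            catch
            {
                _accountSetupFailed = true;
                throw;
            }
        }

        [Test]
        public void CanUpdateProfile()
        {
            // ... test that assumes a valid account exists ...
        }

        private void CreateAccount()
        {
            // ... Selenium steps to register a fresh account ...
        }
    }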
I run Selenium with NUnit all the time. It just depends on how you write your tests. To avoid code duplication, I make a library of helper functions that do common things, like log in or log out of my site, that the other tests use to get to the page they need to test. (I use the term 'library' in a loose sense; I don't actually split them into their own C# project.)
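For example, a login helper can be a small static method every test calls before navigating anywhere; a sketch assuming the WebDriver API (the URL and element IDs are placeholders):

    using OpenQA.Selenium;

    public static class SiteActions
    {
        // Shared helper so individual tests don't repeat the login clicks.
        public static void LogIn(IWebDriver driver, string user, string password)
        {
            driver.Navigate().GoToUrl("http://test-server/login"); // placeholder URL
            driver.FindElement(By.Id("username")).SendKeys(user);
            driver.FindElement(By.Id("password")).SendKeys(password);
            driver.FindElement(By.Id("loginButton")).Click();
        }
    }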
You are right that if the account creation function is broken, the other tests will fail. But personally, I don't see that as a problem, as the point of unit tests is to make sure that your changes didn't have unintended effects elsewhere in your project. If the account creation broke, clearly that affects a lot of things. Ditto if my login helper method fails: if you can't log in, you can't get to anything in the site. Effectively, the whole site is broken.
If you need to create new accounts on each test then the approach that I would take is to have that code moved into your SetUp code. If some of your tests don't require login, split them out into different files.
Any bits of duplication should be removed; test code should be as clean and robust as production code. Splitting the files with different tests helps maintain the idea of single responsibility.
Did you also look at PNUnit?
See one of the answers to this question:
Has anyone found a way to run C# Selenium RC tests in parallel?
I'm still not 100% sure how TestNG would work with Grid. Suppose you have a 3-step registration process and you divide this up into 3 tests. Is TestNG with Grid going to help you here? I suppose not, or will it detect that test C needs to have tests A and B run on the same thread?
PNUnit looks like it could provide a way to distribute dependent tests to the same machine, although it's probably quite complicated to set up.
Two approaches might help you with the problem you describe in your reply to AutomatedTester:
First, NUnit 2.4.4 defines a SuiteAttribute that lets you run tests in the order you want. It is very handy, but it has a major restriction: it is not compatible with TestCaseAttribute. That means all your tests have to be triggered only by TestAttribute, which is very annoying if you target coverage of value-based boundary tests (and thus several data-driven test cases). More info at http://www.nunit.org/index.php?p=suite&r=2.5.10
Another approach is to prepare an integration sample database tailored just for your test cases. Say you have a 15-step registration process: create a student record and push it to step one, then another student and push it to step two, and so on. Save your database and restore it in the test fixture setup. Then test each step with a different student.
It is perfectly valid in most cases to do integration tests on different records for each step, as it provides the same function-wise and code-wise coverage, and it follows the idea of integration testing because your records in the DB are true records (created by the UI, with all the flaws that come with the UI).
Of course it needs more time to run and storage space because of the DB copies you'll have to store. If your system can't afford that, then you'll probably want to look at the first solution.
It also gives you the advantage of being able to spot bugs in later steps even if earlier steps are unstable: all tests are run in each test campaign, which is not the case in the solution you ask for.
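If you go the prepared-database route, the restore can live in the fixture setup; a rough sketch for SQL Server (the connection string, database name and backup path are placeholders, and the attribute is the NUnit 2.x TestFixtureSetUp, called OneTimeSetUp in NUnit 3):

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class RegistrationStepTests
    {
        [TestFixtureSetUp]
        public void RestoreKnownDatabase()
        {
            // Restore the snapshot that already holds one student per registration step.
            using (var connection = new SqlConnection(
                @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true"))
            {
                connection.Open();
                var restore = new SqlCommand(
                    @"ALTER DATABASE RegistrationTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
                      RESTORE DATABASE RegistrationTest FROM DISK = 'C:\backups\RegistrationTest.bak' WITH REPLACE;
                      ALTER DATABASE RegistrationTest SET MULTI_USER;",
                    connection);
                restore.ExecuteNonQuery();
            }
        }

        [Test]
        public void Step2_AcceptsPayment()
        {
            // ... exercise the step-2 student record created by the UI ...
        }
    }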