How to unit test a C# library that runs PowerShell scripts

I have a C# library that runs a couple of PowerShell scripts to manipulate the Windows hypervisor (e.g. turning VMs on and off, creating a VM with VHDs, getting switches, etc.). It's hard to mock the environment and control the output of the scripts. Scripts that check or validate things would be easier, but the scripts used for operations could be a headache because most of these methods are irreversible.
I found a good unit test framework, Pester, for PowerShell. But my code consists of a great amount of C# code. Is there a good way to handle this unit testing problem gracefully?
Thanks in advance!

There are several points to make here:
When the code you test does multiple things (units), it is, by definition, an integration test, since it tests the integration of multiple units (per @Xerillio's comment).
When the code you test interacts with infrastructure (the file system, a database, or in your case PowerShell and VMs), it is, by definition, an integration test.
You could try to extract small amounts of code into separate functions in C# and unit test those, giving you certainty that they work, and do the same in PowerShell (using Pester). But the real question is: what do you want to achieve? I would assume you want to be sure your application works now and keeps working in the future (does not regress), in which case you could look into a more end-to-end approach to testing.
Find the outermost point of your application that you can interact with from a test, so you test the biggest part of the stack you can.
Check which parts you can automatically test without causing havoc (you mention "irreversible methods").
Use a unit test framework for setting up the tests in C# (like xUnit), expressing clear Setup and Teardown phases that, respectively, set up everything needed before a test and clean up after it (keep in mind that tests may run in parallel by default, which doesn't go well with shared resources on the file system etc.); a minimal sketch follows below this list.
This avoids having to rewrite code just because you want to unit test it. On the other hand, extracting pure units makes testing those specific units much easier.
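Here is that Setup/Teardown shape in xUnit, as a minimal sketch: xUnit runs the test class constructor before each test (setup) and IDisposable.Dispose after it (teardown). The VmFixtureClient type and its methods are hypothetical placeholders for your own C# wrapper around the PowerShell scripts.
using System;
using Xunit;

public class VmLifecycleTests : IDisposable
{
    // Hypothetical wrapper around your PowerShell/Hyper-V code.
    private readonly VmFixtureClient _client = new VmFixtureClient();
    private readonly string _vmName = "test-vm-" + Guid.NewGuid().ToString("N");

    public VmLifecycleTests()
    {
        // Setup phase: create everything the test needs.
        _client.CreateVm(_vmName);
    }

    [Fact]
    public void StartVm_ReportsRunningState()
    {
        _client.StartVm(_vmName);
        Assert.Equal("Running", _client.GetVmState(_vmName));
    }

    public void Dispose()
    {
        // Teardown phase: clean up even if the test failed.
        _client.RemoveVm(_vmName);
    }
}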

Related

How do I create integration/unit test sequencing in vs2012?

I have some 200 unit tests for an app that run against a simulated piece of hardware with an asynchronous messaging API. All the tests run and pass individually, but many of them fail when run as a group, because of the asynchronous nature of the external calls.
Is there a way that I can set up test sequencing in VS 2012 so that I can add a small delay after each test, to give the externals a chance to clear their cache?
edit:
The order also seems to matter somewhat; certain tests lock up the external resources longer than others.
I understand that these are integration tests and unit tests together. Much of this is unit testing, much of it is mocked services, and much of it is API integration testing.
I inherited this suite of tests, so the separation of "integration tests" and "unit tests" is beyond my control.
A little context: we're interfacing with LabVIEW DLLs here, which come with their own set of eccentricities that we're trying to mitigate and instigate.
Yes you can add a wait with something like this:
System.Threading.Thread.Sleep(5000);
But I would instead recommend that you create a simple LabVIEW function that queries the status of all instruments inside a while loop and "waits" for each one to respond. For example, if you're using GPIB, the standard query is:
*IDN?
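If you want that polling behaviour on the C# side instead, a rough sketch could look like this; QueryInstrument is a hypothetical stand-in for however you actually send the query to your hardware (VISA, your LabVIEW interop layer, etc.):
using System;
using System.Collections.Generic;
using System.Threading;

static class InstrumentSync
{
    // Poll each instrument until it answers "*IDN?" instead of
    // sleeping for a fixed amount of time.
    public static void WaitForInstrumentsReady(IEnumerable<string> addresses, TimeSpan timeout)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        foreach (string address in addresses)
        {
            while (string.IsNullOrEmpty(QueryInstrument(address, "*IDN?")))
            {
                if (DateTime.UtcNow > deadline)
                    throw new TimeoutException("Instrument " + address + " did not respond.");
                Thread.Sleep(100); // brief pause between polls
            }
        }
    }

    // Hypothetical: replace with your actual GPIB/VISA query call.
    static string QueryInstrument(string address, string query)
    {
        throw new NotImplementedException();
    }
}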
You could also look into NI's TestStand product which is specifically designed to coordinate complex asynchronous tests and I believe it has an API you can access from C#.
http://www.ni.com/teststand/whatis/
If you are using VS 2012 or later, there is also Fakes, which is Microsoft's mocking framework.
Otherwise RhinoMocks works fairly well, with a bit of a learning curve.
I have also broken classes up to separate unit tests that conflicted with each other.

Integrating tests written in Python and tests in C# in one solid solution

What I'm trying to do is to combine two approaches, two frameworks into one solid scope, process ...
I have a bunch of tests in Python, with a self-written TestRunner over the proboscis library, which gave me a good way to write my own TestResult implementation (in which I'm using jinja). This framework is now a solid thing. These tests are for testing the UI (using Selenium) on an ASP.NET site.
On the other hand, I have to write tests for the business logic. Apparently it would be right to use NUnit or TestDriven.NET for C#.
Could you please give me a tip, hint, or advice on how I should integrate these two approaches into one final solution? Maybe the answer is just to set up a CI server, I don't know...
Please note, the reason I'm using Python for the ASP.NET portal is its flexibility and the opportunity to build any custom TestRunner, TestLoader, TestDiscovery and so on...
P.S. Using IronPython is not an option for me.
P.P.S. For the sake of clarity: proboscis is the Python library which allows you to set the test order and the dependencies of a chosen test. And these two options are requirements!
Thank you in advance!
I don't know if you can fit them in one runner or process, and I'm not that familiar with Python. It seems to me that the Python-written tests are on a higher level, though: acceptance tests or integration tests or whatever you want to call them. The NUnit ones are at the unit test level. Therefore I would suggest that you first run the unit tests and, if they pass, run the Python ones. You should be able to integrate that into a build script. And as you already suggested, if you can run that on a CI server, that would be my preferred approach in your situation.

Unit Test existing UI code

I was browsing the internet and this site for a while, and instead of finding ways to unit test my existing code, the only finding was to separate the logic from the interaction with the user (the MVC approach). Although this is great for new projects, it is time-consuming and as a result too expensive for existing ones. Is there a way to create specific unit tests, ideally automated, for existing GUI projects that unfortunately connect directly to databases or other systems to get data, where the data is manipulated before it is shown? Currently we have two projects, one being MFC, the other C# .NET 2.0. Thanks a lot.
Unit testing won't cut it here, considering you can't change your existing code (not to mention you don't really unit test a UI). You should look for some kind of GUI testing automation/scripting tool, like Sikuli. Quoting literally the first paragraph from their website:
Sikuli is a visual technology to automate and test graphical user interfaces (GUI) using images (screenshots).
It doesn't get any simpler than that. You "tell" the tool which parts of your UI it should observe/interact with, and it records and replays them. Skimming through this presentation will give you an idea of what exactly you can do (you might also check their video). It probably won't solve all your problems, but it might be an alternative worth considering.
Unit testing an already existing project is always a challenge. However, I can point you to some open source tools that will help you automate unit testing:
C++
Boost unit test framework
Google Mock
C#
NUnit
NMock
It is possible to produce some level of automated testing, but not unit tests.
Unit tests, by definition, test small units of logic decoupled from the system as a whole. I'd recommend that new code be written in the way you described (MVC etc.) so that it is unit testable.
With your existing code, unit testing will obviously require refactoring, which I appreciate is not in your timeframe. You will need to work with what you've got and look at a way to perform more whole-system automated testing, probably driven through the UI. The fact that these are not unit tests is by the by; they are helpful tests to have even alongside unit tests. It's helpful to know the distinction, though, when you are searching for resources.
You are probably best searching for automated UI testing. With the .NET apps, you may find something like White useful.
If you are lucky enough to have a Premium (at least) version of Visual Studio 2010, then you could consider writing Coded UI Tests.
A UI test is basically an automated sequence of actions (mouse, keyboard...) on a GUI. These are very high-level tests (or functional tests), not unit tests, but they can help in testing a GUI application that already exists.
For instance, you can easily automate CRUD actions (which imply a database) and check (assert) that the actions have produced the expected result in the UI (a newly created item in a list...).
Writing UI tests can be very time-consuming because there are various aspects you have to test. Thank God there are a lot of frameworks to achieve that result, but you always have to write some code.
I assume you already did unit testing (Visual Studio itself comes with a not-so-bad unit test framework), so what you want to check is not algorithms but UI automation/results. What does that mean? Everything that is code must be tested by code (database operations and algorithms, for example). Even some UI controls can be somehow tested by code (example: if I simulate a user click, I'll get that event fired when this condition is true). Trust me, UI testing is a black art, and often you'll get failed tests even if everything is OK.
Simple stress scenario
For a simple scenario, for example stressing your application to reproduce a bug by repeating the same operation many times, you can use a macro recorder (such as WinMacro). You record user inputs and then run that macro in a loop. If there's a subtle bug, you have many chances to reproduce (and/or find) it when those actions are repeated 5000 times in a night. That done, you'll get data from your logs.
Simple scenario
If your application can be somehow automated (it may be easy for a .NET application using VSA), you can prepare some "good" macro to automate an operation, put the results in a file and compare them with a known-good results data file.
A simple tip: for an MFC application you can write your own "macro" with a text file where each line is a Windows message with its parameters; read it, parse it and SendMessage() them to your application to simulate user input, menu clicks and so on. Grab, for example, a text box value and compare it with something known. WinSpy++ is your friend.
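A rough sketch of that idea driven from C# via P/Invoke (the window title and the assumption that the target control is a standard "Edit" class are made up; use WinSpy++ to find the real class names and captions in your application):
using System;
using System.Runtime.InteropServices;

static class LegacyAppDriver
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindow(string className, string windowName);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindowEx(IntPtr parent, IntPtr after, string className, string windowName);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, string lParam);

    const uint WM_SETTEXT = 0x000C;

    static void Main()
    {
        // Find the application's main window by its (hypothetical) title.
        IntPtr main = FindWindow(null, "My Legacy App");
        if (main == IntPtr.Zero)
            throw new InvalidOperationException("Application window not found.");

        // Find the first standard edit control and set its text,
        // simulating user input.
        IntPtr edit = FindWindowEx(main, IntPtr.Zero, "Edit", null);
        SendMessage(edit, WM_SETTEXT, IntPtr.Zero, "automated input");
    }
}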
Complex scenario
For anything else (is my custom control drawing everything the right way? do the UI colors change when the user clicks that button?) you have to use a more complex tool. There are several tools to automate UI testing; what's built into Visual Studio 2010 (not in every edition) lets you create Coded UI tests. What does that mean? You write code to automate your application, and then you write more code to check its results (sometimes even comparing bitmaps with known results). It may be tedious and a lot of work, but you can test virtually everything, even if the application wasn't designed for UI testing.
Start reading this from MSDN.
There is a plethora of commercial tools too (though I have never used them in any project), so I won't post any links; I guess you'll get a lot of results on Google.
Mocking is usually the best approach for simulating integration points, but unfortunately most mocking frameworks fall short if the code is too interconnected and buries dependencies in private methods etc.
It might be worth looking into a framework called Moles (made by Microsoft) that has very few limitations on what you can mock. It even handles private methods!
Perhaps you could use it to mock your db calls to test your data manipulation?
There are several tutorials online.
Here's one that might get you started:
http://www.unit-testing.net/CurrentArticle/How-To-Mock-Code-Dependencies-Using-Moles.html

is NUnit bad choice for Selenium test?

I have read umpteen answers on SO while searching for NUnit + dependent methods + order of test execution. Every single answer suggests that forcing any sort of order on unit tests is extremely evil.
I am writing Selenium tests using NUnit.
So I am trying to write integration tests using a unit testing framework!
To cite an example of integration tests (and this is just one example): I need to create a valid account before proceeding with other tests. If creating the account fails, I would like to abort the entire test execution.
Since I don't want to rely on the alphabetic order of tests, and in the true spirit of NUnit, I decided to create an account before each further test. Though it does not look right to me, for two core reasons:
Unnecessary code duplication/execution
If account creation in the application is not working, all my tests would still try to create an account again and again, and fail
I am inclined to think that NUnit may not be the right fit for Selenium tests.
But if not NUnit, then what should I use?
Selenium Core itself comes with a TestRunner that is written in JavaScript, and you can run your tests directly from the browser.
For more see:
http://www.developerfusion.com/article/84484/light-up-your-development-with-selenium-tests/
Apart from that, tests written in C# using NUnit are much easier to write and maintain. Are you using SetUp and TearDown while writing your tests? That way you can avoid code duplication.
Regarding your second point, you can have a flag that is set on the first setup failure, so the setup skips (or quickly fails) the next time. And in NUnit, tests don't run if SetUp fails. A minimal sketch of the SetUp/TearDown pattern is below.
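This sketch uses NUnit and Selenium WebDriver; the URL, element IDs, page title and the fail-fast flag are illustrative assumptions, not a real site:
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class AccountTests
{
    private static bool _setupFailedOnce; // fail fast after the first SetUp failure
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        if (_setupFailedOnce)
            Assert.Fail("Skipping: a previous SetUp already failed.");
        try
        {
            _driver = new ChromeDriver();
            _driver.Navigate().GoToUrl("https://example.test/login"); // hypothetical URL
        }
        catch
        {
            _setupFailedOnce = true;
            throw;
        }
    }

    [Test]
    public void CanLogInWithValidAccount()
    {
        _driver.FindElement(By.Id("username")).SendKeys("testuser");
        _driver.FindElement(By.Id("password")).SendKeys("secret");
        _driver.FindElement(By.Id("login")).Click();
        StringAssert.Contains("Dashboard", _driver.Title); // hypothetical page title
    }

    [TearDown]
    public void TearDown()
    {
        _driver?.Quit(); // clean up the browser even when a test fails
    }
}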
I run Selenium with NUnit all the time. It just depends on how you write your tests. To avoid code duplication, I make a library of helper functions that do common things, like log in or log out of my site, that the other tests use to get to the page they need to test. (I use the term 'library' in a loose sense; I don't actually split them into their own C# project.)
You are right that if the account creation function is broken, the other tests will fail. But personally, I don't see that as a problem, as the point of unit tests is to make sure that your changes didn't have unintended effects elsewhere in your project. If the account creation broke, clearly that affects a lot of things. Ditto if my login helper method fails: if you can't log in, you can't get to anything in the site. Effectively, the whole site is broken.
If you need to create new accounts for each test, then the approach I would take is to move that code into your SetUp code. If some of your tests don't require login, split them out into different files.
Any bits of duplication should be removed; test code should be as clean and robust as production code. Splitting the files with different tests helps maintain the idea of Single Responsibility.
Did you also look at PNUnit?
See one of the answers to this question:
Has anyone found a way to run C# Selenium RC tests in parallel?
I'm still not 100% sure how TestNG would work with Grid. Suppose you have a 3-step registration process and you divide this up into 3 tests: is TestNG with Grid going to help you here? I suppose not. Or will it detect that test C needs tests A and B run on the same thread?
PNUnit looks like it could provide a way to distribute dependent tests to the same machine, although it's probably quite complicated to set up.
Two approaches might help you with the problem you describe in your answer to AutomatedTester:
First, NUnit 2.4.4 defines a SuiteAttribute that lets you run tests in the order you want. Very handy, but it has a major restriction: it is not compatible with TestCaseAttribute. That means all your tests have to be triggered only by TestAttribute, which is very annoying if you target coverage of value-based boundary tests (and thus several data-driven test cases). More info at http://www.nunit.org/index.php?p=suite&r=2.5.10
Another approach is to prepare an integration sample database tailored just for your test cases. Say you have a 15-step registration process: create a student record and push it to step one, then another student and push it to step two, and so on. Save your database and restore it in the test fixture setup. Then test each step with a different student.
It is perfectly valid in most cases to run integration tests on different records for each step, as it provides the same function-wise and code-wise coverage, and it follows the idea of integration testing because your records in the DB are true records (created by the UI, with all the flaws that come with the UI).
Of course it needs more time to run, and storage space for the DB copies you'll have to store. If your system can't afford that, then you'll probably want to look at the first solution.
It also gives you the advantage of being able to spot bugs in later steps even if earlier steps are unstable: all tests are run in each test campaign, which is not the case with the solution you ask for.

What is unit testing?

I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
What is it?
What does it do for me?
Why should I use it?
When should I use it (also when not)?
What are some common pitfalls and misconceptions?
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
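As a minimal illustration (NUnit syntax; the Calculator class is a made-up example), a unit test is just code that asserts a small, isolated piece of code behaves as expected:
using NUnit.Framework;

public static class Calculator
{
    // The unit under test: a small, pure function with no I/O.
    public static int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        Assert.AreEqual(5, Calculator.Add(2, 3));
    }
}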
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
The code you've written is always covered by tests
By writing the test first, you're forced into thinking about how you want to call the code, which usually improves the design of the code in the long run.
If you're not doing unit testing now, I recommend you get started on it. Get a good book, practically any xUnit-book will do because the concepts are very much transferable between them.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
Give developers confidence when adding behavior or making fixes.
Document your code
Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled
Provide a means to use your API and look for difficulties early on
Indicate methods and classes that aren't very cohesive
You should unit test because it's in your interest to deliver a maintainable and quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control; without it, your unit tests will always turn into end-to-end integration tests. Unit tests should test individual behaviors, and most methods have many behaviors.
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases: how code behaves when it's told to do things the programmer didn't intend, and the client can do acceptance testing: does the code do what the client paid for it to do?).
The main difference of unit testing, as opposed to "just opening a new project and testing this specific code", is that it's automated, and thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly, in its current state. But what about a week later, when you make a slight modification to it? Are you willing to retest it again by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
unit tests effectively document (your understanding of) the behaviour of your code
unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're now working on a completely different project, but a summer intern thinks it'll look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations which struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write and, critically, is a key part of your design process.
TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad/buggy. Your TEST could be bad/buggy. In TDD you are banking on the chances of BOTH being bad/buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
In a similar vein, these designer types can SEE what they're working on. They can wander off for a juice / cigarette / iPhone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
I would like to recommend the book xUnit Test Patterns by Gerard Meszaros. It's large, but it is a great resource on unit testing. Here is a link to his web site where he discusses the basics of unit testing: http://xunitpatterns.com/XUnitBasics.html
I use unit tests to save time.
When building business logic (or data access) testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something in my unit tests than to check in the code and have some front-end developer find a problem.
For data access testing I try to write tests that either make no changes or clean up after themselves. One common pattern for that is sketched below.
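The sketch wraps each test in a TransactionScope that is never completed, so everything rolls back when the test finishes. It assumes your data access enlists in ambient transactions; CustomerRepository and Customer are hypothetical types standing in for your own.
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryTests
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // Every data change made during the test joins this ambient transaction.
        _scope = new TransactionScope();
    }

    [Test]
    public void Insert_ThenRead_ReturnsSameCustomer()
    {
        var repository = new CustomerRepository(); // hypothetical repository
        repository.Insert(new Customer { Name = "Test" });
        Assert.IsNotNull(repository.FindByName("Test"));
    }

    [TearDown]
    public void TearDown()
    {
        // Dispose without Complete() => the transaction rolls back,
        // leaving the database unchanged.
        _scope.Dispose();
    }
}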
Unit tests aren’t going to be able to solve all the testing requirements. They will be able to save development time and test core parts of the application.
This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to. This started with JUnit in the Java world and has become a best practice in PHP as well, with SimpleTest and PHPUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, bug fixing or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script which sets up the environment for your code and then exercises it; you write assertions, specifying the intended output, and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back the success or failure of each test. PHPUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with Xdebug, PHPUnit can give you automated statistics for code coverage, which some people find very useful.
Some teams write hooks from their subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
Libraries like NUnit, xUnit or JUnit are mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" part of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how adept you are at TDD, you'll know if you've practiced it well enough :)
Unit testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on, i.e. testing it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
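A sketch of that split in C# with NUnit: the calculation is extracted into a pure function, so the unit test can feed it fixed data instead of a database (the numbers are the classic worked example whose population standard deviation is exactly 2):
using System;
using System.Linq;
using NUnit.Framework;

public static class Statistics
{
    // The calculation alone, decoupled from wherever the data comes from.
    public static double StdDev(double[] values)
    {
        double mean = values.Average();
        double variance = values.Average(v => (v - mean) * (v - mean));
        return Math.Sqrt(variance);
    }
}

[TestFixture]
public class StatisticsTests
{
    [Test]
    public void StdDev_OfKnownSeries_MatchesHandComputedValue()
    {
        // Mean of {2,4,4,4,5,5,7,9} is 5; population std dev is exactly 2.
        double[] data = { 2, 4, 4, 4, 5, 5, 7, 9 };
        Assert.AreEqual(2.0, Statistics.StdDev(data), 1e-9);
    }
}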
Unit testing is about writing code that tests your application code.
The "Unit" part of the name is about the intention to test small units of code (one method, for example) at a time.
The xUnit frameworks are there to help with this testing. Part of that is automated test runners that tell you which tests fail and which ones pass.
They also have facilities to set up common code that you need for each test beforehand and to tear it down when all tests have finished.
You can have a test to check that an expected exception has been thrown, without having to write the whole try catch block yourself.
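For example, in NUnit (BankAccount is an invented example class):
using System;
using NUnit.Framework;

public class BankAccount
{
    public void Withdraw(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount));
        // ... debit logic elided
    }
}

[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Withdraw_NonPositiveAmount_Throws()
    {
        var account = new BankAccount();
        // The framework runs the lambda and asserts on the exception type;
        // no hand-written try/catch is needed.
        Assert.Throws<ArgumentOutOfRangeException>(() => account.Withdraw(-10m));
    }
}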
I think the point you don't understand is that unit testing frameworks like NUnit (and the like) help you automate small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then, hopefully, watching the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description"), so if the two values are unequal you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
So in conclusion, unit testing makes testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code so that you don't break the build process of other developers on the same project.
Use Testivus. All you need to know is right there :)
Unit testing is the practice of making sure that the function or module which you are going to implement behaves as expected (per the requirements), and also of checking how it behaves in scenarios like boundary conditions and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you write these tests.
Test Driven Development has sort of taken over the term Unit Test. As an old timer I will mention the more generic definition of it.
A Unit Test also means testing a single component in a larger system. This single component could be a DLL, an EXE, a class library, etc. It could even be a single system in a multi-system application. So ultimately a Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
First of all, whether we're speaking about unit testing or any other kind of automated testing (integration, load, UI testing, etc.), the key difference from what you suggest is that it is automated, repeatable and doesn't require any human resources to be consumed (i.e. nobody has to perform the tests; they usually run at the press of a button).
I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
What do you do if you are given a pile of crap and seem to be stuck in a perpetual state of cleanup, where you know that adding any new feature or code can break the current set, because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
This answers why you should be doing unit testing.
The 3 videos below cover unit testing in JavaScript, but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct and there's more to it than what I'm describing here, but my basic understanding of unit testing is this: you write some test code (which is kept separate from your main code) that calls a function in your main code with the input (arguments) the function requires, and the test then checks whether it gets back a valid return value. If it does, the unit testing framework that you're using to run the tests shows a green light (all good); if the value is invalid, you get a red light, and you can fix the problem straight away, before you release the new code to production. Without testing, you might never have caught the error.
So you write tests for your current code, and create the code so that it passes the tests. Months later, you or someone else needs to modify the function in your main code. Because you already wrote test code for that function, you run the tests again, and they may fail because the coder introduced a logic error in the function, or it returns something completely different from what it is supposed to return. Again, without the test in place, that error might be hard to track down, as it can affect other code as well and go unnoticed.
Also, the fact that you have a computer program that runs through your code and tests it, instead of you manually doing it in the browser page by page, saves time (this is unit testing for JavaScript). Let's say you modify a function that is used by some script on a web page, and it works all well and good for its new intended purpose. But let's also say, for argument's sake, that there is another function somewhere else in your code that depends on the newly modified function to operate properly. This dependent function may now stop working because of the changes you've made to the first function. Without tests that are run automatically by your computer, you will not notice that there's a problem with that function until it is actually executed, and you'd have to manually navigate to a web page that includes the script which executes the dependent function; only then would you notice that there's a bug caused by the change you made to the first function.
To reiterate, having tests that run while you develop your application will catch these kinds of problems as you code. Without the tests in place, you'd have to manually go through your whole application, and even then the bug can be hard to spot; naively, you send it out into production, and after a while a kind user sends you a bug report (which won't be as good as the error messages in a testing framework).
It's quite confusing when you first hear of the subject, and you think to yourself: am I not already testing my code? And the code that you've written is working like it is supposed to already, so why do I need another framework?... Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of for you by the mighty CPU, instead of you having to manually check that all of your code still works whenever you make a change.
Also, you don't have to unit test your code if you don't want to, but it pays off as your project/code base starts to grow larger, since the chances of introducing bugs increase.
Unit testing, and TDD in general, enables you to have shorter feedback cycles about the software you are writing. Instead of having a large test phase at the very end of the implementation, you incrementally test everything you write. This increases code quality very much, as you immediately see where you might have bugs.
