The Value of Unit Testing - C#

Here are some typical answers (ranked in ascending order of corniness) I get from managers/bosses whenever I bring up the importance of having unit tests and code coverage as an integral part of the development cycle:
"That is the job of QA, just focus on features and development"
"The application is not mission critical, if there are some bugs it's not the end of the world"
"We can't afford to spend time on unit testing"
"Try not to get too fancy"
Despite having the best intentions of doing a good job, at the end of the day, when time comes for the blame game, the burden ultimately falls on the developer.
All too often I've seen things break in production, some of which could have been avoided by catching the bugs early with unit tests.
I just wanted to get a conversation going to see what people's experiences have been and what the best way to tackle this is.
UPDATE: Thanks everyone for a lot of insightful advice. There are several answers that I wish I could select as the right answer.

Introducing unit tests into the development process is like an investment: you have to put some money up front to get profit later. Management should be more receptive to this analogy if you follow through with it: describe what investments are required and then lay out a plan for the profits.
For example:
Investments:
time spent to implement test infrastructure (no serious product unit tests are possible without test-specific infrastructure code that streamlines product-specific test patterns, test data creation/removal, etc.);
time spent on writing actual tests;
time spent on reviewing and supporting tests;
etc.
Profits:
no bug ever re-appears without a sign;
no major features are released without unit tests passing;
the development-QA-fix bugs cycle is cut in half for the majority of bugs: it becomes development-unit test-fix bugs;
etc.

Most managers won't see the advantages of unit testing until they see it in action where it makes sense, so my advice, based on experience, is to take the following steps:
Apply unit tests to recurring bugs - This is the best use case to prove the value of unit tests. When you have bugs that just appear and reappear every other build, a unit test allows developers to see which changes caused the bug, aside from alerting them in advance that a fix is in order. It's quite easy to demonstrate to management as well (a small example follows this list).
Apply unit tests to regular bugs - With the usefulness of unit tests now clearly demonstrated, several instances of recurring bugs disappearing in the long term should be enough to encourage everyone to use unit tests to evaluate all bugs, to prevent them from becoming recurring bugs.
Apply unit tests to new functionality - With unit tests making sure that old bugs don't recur, and confirming that they are fixed in the first place, the next step is to apply them to new functionality to ensure that bugs are minimized. Make it clear that it is impossible to totally eliminate bugs.
Apply full blown TDD - The final step will be to apply unit testing even before coding, as a design tool that both helps in designing code and minimizing bugs.
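As a small illustration of step 1, a regression test that pins down a recurring bug can be this short. This is only a sketch in NUnit syntax; InvoiceCalculator and the rounding bug it describes are hypothetical names, not anything from the discussion above.

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceCalculatorRegressionTests
    {
        // Hypothetical recurring bug: totals were rounded per line item instead
        // of once at the end, so 3 x 0.333 came back as 0.99 instead of 1.00.
        [Test]
        public void Total_IsRoundedOnceAtTheEnd_NotPerLineItem()
        {
            var calculator = new InvoiceCalculator();

            decimal total = calculator.Total(new[] { 0.333m, 0.333m, 0.333m });

            Assert.AreEqual(1.00m, total);
        }
    }

Once a test like this is in the build, the bug cannot quietly reappear: the next time someone reintroduces it, the red test names the exact behaviour that broke.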
Of course I'm not saying that this is easy -- what I had stated above is an oversimplification which even I struggle with everyday -- it's difficult to convince everyone.
If later you decide to move on to a different company, you may want to explicitly look for a company that practices TDD.

People listen to their wallets. Show how much time you can save by catching bugs early on. Translate that to dollar-savings.

With regard to #3, spending time on unit testing will most likely decrease overall time to market. Great article - http://blog.scottbellware.com/2008/12/does-test-driven-development-speed-up.html

For me, the best advantage of adopting unit tests is that they change my coding behaviour to make the code more testable, in other words, more loosely coupled.
If you cannot practice unit testing in your real project because of management issues, I would practice on a small toy project, just to force yourself to find a way to write testable code even when there are NO unit tests.
My own 2 cents.

If unit testing (I'm assuming you're talking about TDD here) is that important to you, you should use your own time to write the tests (assuming you have the time). If you do, keep a record of how much time you actually spend writing them, and after they've been in place for a release cycle or two, go to your managers with some data.
If the answers you've posted are really what your managers are saying then you work for idiots and perhaps some hard data can sway them. Given the market, quitting isn't likely an option and playing office politics won't get you anywhere (or improve the quality of your code).
Until your managers understand that TDD IS NOT solely about preventing bugs or "testing" they will NEVER get it. TDD is about design and overall code quality.
You have to show them. If they can't be persuaded then I would start looking.
Quietly ;)

My short and incomplete advice would be to:
Just change jobs. A company whose managers give that sort of answers is going to fail anyway, and soon. Get out before it's too late.
Reverse the blame game. Every time something gets released without unit tests, make an official statement that this is the case and that you are not guaranteeing it is bug-free.
Write down the time you spend on tasks, separating bug fixing after failed deployments, and total it against a (potential) time allotment for writing unit tests.

Why don't you just write unit tests for your code? Do you know if there are some other developers having the same problem? Probably they will follow your example and write unit tests, too.
I don't think the problem is the technique or the cost of an integration server. The problem is management's attitude to unit testing. So convince them together with all the other developers.
There are lots of hints in this thread (Jon Limjap's answer), try it!

Don't sell management on a particular approach; that's just going to be difficult and isn't really going to buy you much. Whether or not your management chain appreciates unit tested code doesn't matter.
Sure, unit testing your code has a lot of benefits associated with it, but don't rely on management buy-off to write your tests. When people start seeing results, they'll flock towards The Right Thing.

One other thought to add to the other excellent comments on this thread (many of which I've upvoted): make sure that your management knows that Unit testing is very highly automated at this point. I find it very impressive to pop NUnit on the screen, hit the "Run All" button and see dozens of green-lighted tests being passed in seconds. Do that once, saying "this verifies that all of my older work is still correct despite all of my newer changes" and you just may win a few converts. In any event, they'll come to trust you - with your visible proof of quality - more than they trust others. That can only be good for your career.

Well, the classical response is that the earlier you catch a bug, the less expensive it is to fix; I think most managers can relate to that.
As Mark said showing something concrete is the best way to convince PHBs that something is good as they are so used to hearing talk and probably don't know the difference between unit testing and other testing.

Now there is a resource to help: a modern list of use cases and tangible evidence for TDD.
Do you need to convince your boss or teammates that TDD is actually being used? That it's not some theory? That it's not just hearsay?
Now you can: check out WeDoTDD.com, a list of companies that practice TDD and the stories behind those teams.
That's exactly why I created the site: to put to rest the arguments around "TDD proof", "does TDD work?", and "who is doing TDD?".
You can also learn a lot about the topic itself there by reading the stories behind these companies and teams practicing it.

Related

Best Option for Retrospective application of TDD into C# codebase

I have an existing framework consisting of 5 C# libraries. The framework has been well used since 2006 and is the main code base for the majority of my projects. My company wishes to roll out TDD for reasons of software quality; having worked through many tutorials and read the theory, I understand the benefits of TDD.
Time is not unlimited, so I need to plan a pragmatic approach to this. From what I know already, the options as I see them are:
A) One test project could be used in order to overlap objects from all 5 library components. A range of high level tests could be a starting point to what is first seen as a very large software library.
B) A test project for each of the 5 library components. The projects will be testing functions at the lowest level in isolation of the other library components.
C) As the code is widely regarded as working, only add unit tests to bug fixes or new features. Write a test that fails on the logic that has the bug in it, with the steps to reproduce the bug, then fix the code until the test passes. Now you can have confidence that the bug is fixed and also that it will not be reintroduced later on in the cycle.
Whichever option is chosen, "Mocking" may be needed to replace external dependencies such as:
Database
Web Service
Configuration Files
If anybody has any more input this would be very helpful.
I plan to use Microsoft's inbuilt MSTest in Visual Studio 2010.
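To make the mocking point concrete, here is a minimal MSTest sketch of testing against a hand-rolled fake instead of a real configuration file. ISettingsProvider, FakeSettingsProvider and ReportGenerator are invented names used purely to show the shape; substitute whatever your five libraries actually expose.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical seam: the production implementation reads a config file.
    public interface ISettingsProvider
    {
        string Get(string key);
    }

    // Hand-rolled fake so the test never touches the file system.
    internal class FakeSettingsProvider : ISettingsProvider
    {
        public string Get(string key) { return "EUR"; }
    }

    [TestClass]
    public class ReportGeneratorTests
    {
        [TestMethod]
        public void HeaderUsesCurrencyFromSettings()
        {
            var generator = new ReportGenerator(new FakeSettingsProvider());

            string header = generator.BuildHeader();

            Assert.IsTrue(header.Contains("EUR"));
        }
    }

The same pattern works for the database and web-service dependencies: put an interface in front of them and hand the code under test a fake.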
We have a million-and-a-half line code base. Our approach was to start by writing some integration tests (your option A). These tests exercise almost the whole system end-to-end: they copy database files from a repository, connect to that database, perform some operations on the data, and then output reports to CSV and compare them against known-good output. They're nowhere near comprehensive, but they exercise a large number of the things that our clients rely on our software to do.
These tests run very slowly, of course; but we still run all of them continuously, six years later (and now spread across eight different machines), because they catch things that we still don't have unit tests for.
Once we had a decent base of integration tests, we spent some time adding finer-grained tests around the high-traffic parts of the system (your option B). We were given time to do this because there was a perception of poor quality in our code.
Once we had improved the quality to a certain threshold, they started asking us to do real work again. So we settled into a rhythm of writing tests for new code (your option C). In addition, if we need to make changes to an existing piece of code that doesn't yet have unit tests, we might spend some time covering existing functionality with tests before we start making changes.
All of your approaches have their merits, but as you gain test coverage over time, the relative payoffs will change. For our code base, I think our strategy was a good one; integration tests will help catch any errors you make when trying to break dependencies to add unit tests.
Neither (A) nor (B) can properly be considered TDD. The code is already written; new tests will not drive its design. That does not mean there is not value in pursuing either of those paths, but it would be a mistake to consider them TDD. With respect to "the code is widely regarded to be working," I suspect that if you were to start (B) you would come to discover some holes in it. Untested code almost invariably contains bugs.
My advice would be to pursue (B), because I find greater value in unit tests than in integration tests (although much of that greater value lies in the design advantages for which you are too late). Integration tests are valuable too, and can tell you different important things about your code, but I like to start with unit tests. Pick one of the 5 components and start writing what we call characterization tests. Begin to discover the behaviors, build your experience at writing unit tests. Pick the easiest things to test first; build on what you learn with the easy methods to gradually ramp up to test the trickier bits. In writing these characterization tests you are almost certain to discover surprising behavior. Note it, for sure, and give some thought to whether it should be fixed (or whether the fixes are likely to break code that relies on the surprising behavior).
And of course, write tests for any new features or bug fixes before the code that implements them. Good luck!
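As a sketch of what such a characterization test might look like (MSTest syntax to match the question; PriceFormatter and the expected string are placeholders for whatever behaviour you actually observe in your library):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PriceFormatterCharacterizationTests
    {
        // Not "what should it do" but "what does it do today": run the method,
        // observe the output, and lock that observation in as the expected value.
        [TestMethod]
        public void Format_NegativeAmount_UsesParentheses()
        {
            var formatter = new PriceFormatter(); // hypothetical existing class

            string result = formatter.Format(-12.5m);

            // Value recorded from current behaviour, surprising or not.
            Assert.AreEqual("(12.50)", result);
        }
    }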
By definition, if you are creating tests for an existing code base, this is not TDD.
I would take C) as a given: whenever you have a bug, write a test that "proves" the bug, and quash it, forever.
I agree with Carl Manaster's advice. Another angle on the question is economics: writing tests for a legacy app can be expensive, so where will you get the most bang for the buck? Think about a) the classes and methods that are used the most, and b) the classes and methods that are most likely to have a bug (usually the ones with the highest code complexity).
Also consider using tools like Pex and Code Contracts, which together can help you think about tests you haven't thought about, and problems that may exist in your code.
I would go with option C. Trying to fit unit tests around code that wasn't designed for unit testing can be a major time suck. I would recommend only adding tests when you revisit parts of the code, and even then you may have to refactor that code to allow it to be unit tested.
Integration tests might be something to consider as well for legacy code, as I assume they would be easier to put in place than unit tests.
Options A and B don't fit the definition of TDD, and are both quite time-consuming. I would choose option C, because it's the most pragmatic solution.

Adding Unit Tests to code not designed for it

At work, we have a three-tier product. There is a client application which the users use and it queries data from a server which forwards those requests to a SQL database. We don't allow the client to have direct access to the SQL server.
The client product is what I'm wanting to unit test, but it has over 1.2 million lines of C# code and is a very old product. It was not designed with unit testing in mind and the lead developers for this product are generally opposed to unit testing mostly because of risk vs reward concerns, as well as how redesign would be required to reduce the amount of mocking that would need to be done. The redesign of these core, low-level client libraries and objects also has them concerned.
My philosophy is to certainly never neglect unit testing (because we'll always be too busy for it, and it'll always seem risky, and thus will never ever get done) and take an iterative approach to implementing unit tests.
I'm interested in hearing solutions to this situation. I'm sure many of you have encountered the situation of having to add unit testing into existing infrastructure. How could unit tests be added iteratively into the code base without hindering productivity and release cycles?
In situations like this (and we've in fact been undergoing the same process with an old WebForms-to-MVC transition), the approach is to simply start testing new code. Over time, the old code will eventually be rewritten or refactored.
Before 'new' code is considered valid, it must be unit tested and code reviewed. Over time, eventually you will find that more and more of your solution is now under test, and less and less old code is being called.
I found Michael Feathers' Working Effectively with Legacy Code useful when researching this topic.
My experiences:
Implement a way to run system (end-to-end) tests.
When you add new functionality, write system tests and design the new functionality with unit tests.
When you change existing functionality, write system tests beforehand.
Do NOT try to rewrite existing modules just to test them with unit tests.
This way you get unit tests for new functionality and create a (albeit wide-meshed) safety net for old functionality. Over time, more and more parts of your system get system tests and (in percentage terms) you get more unit test coverage. Redesigning old code just to get unit test coverage is too costly.
I'm with your lead developers. It's 1.2 million lines of code, which means there is PLENTY of room for error when testing and redesigning. It's not designed for unit testing, so it would probably take a non-trivial effort to rewrite the code so that it can be tested, and unit testing it would be very time consuming. Plus, it's an old product, which presumably means many of the bugs have already been found and fixed. If it ain't broke, don't fix it. Wouldn't you rather move on to the more interesting aspects of the project than testing and refactoring old code?
That said, if I absolutely thought it needed to happen, I would probably just write tests for the pieces I touch, as I touch them. If I don't touch them, I don't test them.

What is unit testing? [duplicate]

I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
What is it?
What does it do for me?
Why should I use it?
When should I use it (also when not)?
What are some common pitfalls and misconceptions?
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automate-able and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
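For example, a unit test in this sense touches nothing but the code under test; no file, database or network is involved. A minimal sketch in NUnit (TextUtil.Slugify is an invented method, used only to show the shape):

    using NUnit.Framework;

    [TestFixture]
    public class SlugifyTests
    {
        // Pure in-memory test: input goes in, the output is asserted on, and
        // the whole thing runs in milliseconds.
        [Test]
        public void Slugify_ReplacesSpacesWithDashesAndLowercases()
        {
            string slug = TextUtil.Slugify("Hello World");

            Assert.AreEqual("hello-world", slug);
        }
    }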
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
The code you've written is always covered by tests
By writing the test first, you're forced into thinking about how you want to call the code, which usually improves the design of the code in the long run.
If you're not doing unit testing now, I recommend you get started on it. Get a good book; practically any xUnit book will do, because the concepts are very much transferable between them.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
Give developers confidence when adding behavior or making fixes.
Document your code
Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled.
Provide a means to use your API and look for difficulties early on.
Indicate methods and classes that aren't very cohesive.
You should unit test because it's in your interest to deliver a maintainable, quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control - in which case your unit tests will always turn into end-to-end integration testing. Unit test should test individual behaviors - and most methods have many behaviors.
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases - how the code behaves when it's told to do things the programmer didn't intend - and the client can do acceptance testing - does the code do what the client paid for?).
The main difference of unit testing, as opposed to "just opening a new project and test this specific code" is that it's automated, thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly - in its current state. But what about a week later, when you made a slight modification in it? Are you willing to retest it again by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
unit tests effectively document (your understanding of) the behaviour of your code
unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're now working on a completely different project, but a summer intern thinks it'll look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations that struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write and, critically, it is a key part of your design process.
TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad/buggy. Your TEST could be bad/buggy. In TDD you are banking on the chances of BOTH being bad/buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
In a similar vein, these designer types can SEE what they're working on. They can wander off for a juice / cigarette / iphone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful, even if the rest of my code coverage is woefully incomplete.
TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
I would like to recommend the xUnit Testing Patterns book by Gerard Meszaros. It's large but is a great resource on unit testing. Here is a link to his web site where he discusses the basics of unit testing. http://xunitpatterns.com/XUnitBasics.html
I use unit tests to save time.
When building business logic (or data access) testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something in my unit tests than to check in the code and have some front-end developer find a problem.
For data access testing I try to write tests that either have no change or clean up after themselves.
Unit tests aren’t going to be able to solve all the testing requirements. They will be able to save development time and test core parts of the application.
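For the data-access tests that "clean up after themselves", one common pattern is to run each test inside a transaction that is never committed. A rough NUnit sketch, assuming a hypothetical CustomerRepository (strictly speaking this is an integration test by the definitions above, but the cleanup idea is the point):

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerRepositoryTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void BeginTransaction()
        {
            // Everything the test writes happens inside this transaction.
            _scope = new TransactionScope();
        }

        [TearDown]
        public void RollBack()
        {
            // Disposing without calling Complete() rolls the transaction back,
            // leaving the database exactly as the test found it.
            _scope.Dispose();
        }

        [Test]
        public void SaveThenFind_ReturnsTheSavedCustomer()
        {
            var repository = new CustomerRepository("test connection string"); // hypothetical

            repository.Save(new Customer { Name = "Ada" });

            Assert.IsNotNull(repository.FindByName("Ada"));
        }
    }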
This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to. This started with jUnit in the Java world and has become a best practice in PHP as well with SimpleTest and phpUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, bug fixing or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script which sets up the environment for your code and then exercises it; you write assertions, specify the intended output that you should receive and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back success or failure of each test. phpUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with xDebug, phpUnit can give you automated statistics for code coverage which some people find very useful.
Some teams write hooks from their subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
Libraries like NUnit, xUnit or JUnit are just mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" part of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how adept you are at TDD, you'll know whether you've practiced it well enough :)
Unit-testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on. i.e. test it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
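A sketch of that unit test: the numbers are handed to the calculation directly, so the database never enters the picture. Statistics.StdDev is an assumed method name; adjust it to whatever your code actually calls it.

    using NUnit.Framework;

    [TestFixture]
    public class StandardDeviationTests
    {
        [Test]
        public void StdDev_OfTwoFourSix_IsAboutOnePointSixThree()
        {
            // Known inputs, known answer: only the calculation is being tested.
            double result = Statistics.StdDev(new[] { 2.0, 4.0, 6.0 });

            // Population standard deviation of 2, 4, 6 is sqrt(8/3) ~= 1.633.
            Assert.AreEqual(1.633, result, 0.001);
        }
    }

The integration test would then be a separate, slower test that reads the real numbers from a test database and checks the end-to-end result.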
Unit testing is about writing code that tests your application code.
The Unit part of the name is about the intention to test small units of code (one method for example) at a time.
The xUnit frameworks are there to help with this testing. Part of that is automated test runners that tell you which tests fail and which ones pass.
They also have facilities to set up common code that you need in each test beforehand and tear it down when all tests have finished.
You can have a test to check that an expected exception has been thrown, without having to write the whole try catch block yourself.
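In NUnit those facilities look roughly like the sketch below; the Account class is invented for the example, but [SetUp] and Assert.Throws are the real framework pieces being described.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class AccountTests
    {
        private Account _account; // hypothetical class under test

        [SetUp]
        public void CreateAccount()
        {
            // Runs before every test, so the common setup lives in one place.
            _account = new Account(100m); // opening balance
        }

        [Test]
        public void Withdraw_ReducesBalance()
        {
            _account.Withdraw(40m);

            Assert.AreEqual(60m, _account.Balance);
        }

        [Test]
        public void Withdraw_MoreThanBalance_Throws()
        {
            // The framework checks the exception; no hand-written try/catch needed.
            Assert.Throws<InvalidOperationException>(() => _account.Withdraw(200m));
        }
    }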
I think the point that you don't understand is that unit testing frameworks like NUnit (and the like) will help you in automating small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then - hopefully - see the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test, you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description") - so if the two values are unequal you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
So as a conclusion unit testing will make testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code so that you don't break the build process of other developers on the same project.
Use Testivus. All you need to know is right there :)
Unit testing is a practice to make sure that the function or module you are going to implement behaves as expected (per the requirements), and also to see how it behaves in scenarios like boundary conditions and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you in writing the tests.
Test Driven Development has sort of taken over the term Unit Test. As an old timer I will mention the more generic definition of it.
Unit Test also means testing a single component in a larger system. This single component could be a dll, exe, class library, etc. It could even be a single system in a multi-system application. So ultimately Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
First of all, whether speaking about Unit testing or any other kinds of automated testing (Integration, Load, UI testing etc.), the key difference from what you suggest is that it is automated, repeatable and it doesn't require any human resources to be consumed (= nobody has to perform the tests, they usually run at a press of a button).
I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
What do you do if you are given a pile of crap and it seems like you are stuck in a perpetual state of cleanup, where you know that adding any new feature or code can break the current set, because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
This answers why you should be doing unit testing.
The three videos below cover unit testing in JavaScript, but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct and there's more to it than what I'm describing here, but my basic understanding of unit testing is that you write some test code (which is kept separate from your main code) that calls a function in your main code with the input (arguments) that the function requires, and the test then checks whether it gets back a valid return value. If it does, the unit testing framework that you're using to run the tests shows a green light (all good); if the value is invalid, you get a red light, and you can fix the problem straight away, before you release the new code to production. Without the test you might not have caught the error at all.
So you write tests for your current code and create the code so that it passes the tests. Months later, you or someone else needs to modify the function in your main code; because you already wrote test code for that function, you run it again, and the test may fail because the change introduced a logic error or the function now returns something completely different from what it is supposed to return. Again, without the test in place that error might be hard to track down, as it can affect other code as well and would go unnoticed.
Also, the fact that a computer program runs through your code and tests it, instead of you manually doing it in the browser page by page, saves time (unit testing for JavaScript, in this case). Let's say that you modify a function that is used by some script on a web page, and it works well for its new intended purpose. But let's also say, for argument's sake, that there is another function somewhere else in your code that depends on the newly modified function to operate properly. That dependent function may now stop working because of the change you made to the first function; without tests that your computer runs automatically, you would not notice that there's a problem until the dependent function is actually executed, and you'd have to manually navigate to a web page that includes the script which executes it. Only then would you notice that there's a bug caused by the change you made to the first function.
To reiterate, having tests that are run while developing your application will catch these kinds of problems as you're coding. Not having the tests in place you'd have to manually go through your whole application and even then it can be hard to spot the bug, naively you send it out into production and after a while a kind user sends you a bug report (which won't be as good as your error messages in a testing framework).
It's quite confusing when you first hear of the subject, and you think to yourself: am I not already testing my code? And the code you've written is working like it's supposed to already, so "why do I need another framework?"... Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of for you by the mighty CPU, instead of you having to manually check that all of your code still works whenever you make a change.
Also, you don't have to unit test your code if you don't want to but it pays off as your project/code base starts to grow larger as the chances of introducing bugs increases.
Unit testing and TDD in general enable you to have shorter feedback cycles about the software you are writing. Instead of having a large test phase at the very end of the implementation, you incrementally test everything you write. This increases code quality very much, as you immediately see where you might have bugs.

Best strategy to get coding prepared for unit testing

I have a solution that is missing a lot of code coverage. I need to refactor this code to decouple it so I can begin to create unit tests. What is the best strategy? My first thought is that I should push to decouple business logic from data access and from business objects, to get some organization in place first and then drill down from there. Since many of the classes don't follow the single responsibility principle, it's hard to begin testing them.
Are there are other suggestions or best practices from taking a legacy solution and getting it into shape to be ready for code coverage and unit testing?
Check out Working Effectively with Legacy Code.
One of the most important things to do, and one of the best ways to approach legacy code, is to start with defects. It is a process you will continue to follow with any code base you introduce unit testing to. Whenever a defect is reported, write a unit test that exposes the defect. You will quickly find that code that used to break on a regular basis (i.e., "Oh, yay. The plugh() method in the xyzzy class is broken again!") starts breaking less and less.
Really, just start doing it. You aren't going to have tremendous coverage in a legacy application overnight. Start by hitting the code that is more prone to breakage, and start branching out. Make sure that any new development within the code has a higher code coverage, as well.
Remember that the mantra of TDD is "red/green/refactor", and you might want to look into refactoring tools to help with some of the tedious tasks that go along with it. JetBrains' ReSharper is popular, and it's my personal choice.
I suggest creating tests for the existing code first, until you have good enough coverage. You need to test your code as it is to make sure you don't break anything when you refactor. Of course, you'll want to do this piece by piece, writing tests for a module, then refactoring it before moving on to the next one. Once you have good coverage you can decide if it's worth continuing with the refactoring just to make the code more testable. From your description, I suspect it will be.
Start by creating the tests. Refactor the code as needed to support the tests.
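A sketch of what that refactoring step often looks like in practice: pull the data access out behind an interface so the business logic can be handed a fake in the test. Every type below (IOrderRepository, OrderService, and so on) is invented to illustrate the shape of the change, not taken from the question.

    using NUnit.Framework;

    // Before the refactoring, OrderService created its own data-access object
    // internally; after it, the dependency arrives through an interface.
    public interface IOrderRepository
    {
        decimal GetOutstandingBalance(int customerId);
    }

    public class OrderService
    {
        private readonly IOrderRepository _orders;

        public OrderService(IOrderRepository orders) { _orders = orders; }

        public bool CanPlaceOrder(int customerId, decimal creditLimit)
        {
            return _orders.GetOutstandingBalance(customerId) <= creditLimit;
        }
    }

    // In-memory fake: no database needed to exercise the business rule.
    internal class FakeOrderRepository : IOrderRepository
    {
        public decimal GetOutstandingBalance(int customerId) { return 500m; }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void CustomerOverTheirCreditLimit_CannotPlaceOrder()
        {
            var service = new OrderService(new FakeOrderRepository());

            Assert.IsFalse(service.CanPlaceOrder(customerId: 42, creditLimit: 100m));
        }
    }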
These are generic guidelines I find useful for unit testing:
1) Identify Boundary Objects (Win/WebForms, CustomControls etc).
2) Identify Control Objects (Business layer objects)
3) Write unit tests only for the control objects' public methods that are invoked by boundary objects. This way you'll be sure you're covering the main functional aspects of your app.
In your case, if the business rules are tightly coupled with boundary objects, you're in trouble - in my opinion you should try to refactor your stuff, focusing on hot spots based on the functional requirements of your app.
The feasibility of this obviously depends highly on the specific case.
Try to start reading about TDD.
http://www.codeproject.com/KB/dotnet/tdd_in_dotnet.aspx

How can I help junior members gain confidence in their ability to refactor code? [closed]

I wanted to see what folks thought were the best ways to help junior members gain confidence in their ability to refactor code. I find that junior developers often simply follow the existing patterns and have an aversion to breaking anything because of the perceived risk. I am trying to get them to look at the requirements (new and existing) and have the code map onto the domain model, rather than "tweaking" or "bending" the existing code base to support new functionality. I just wanted to see if there were any folks who have had success here. We are currently thinking about pushing more pair programming, TDD, code reviews, etc., but wanted to check if there were other success stories...
The only reliable answer is unit tests, IMHO. Not necessarily TDD - just get them to write tests for the existing methods/classes they want to refactor, make sure the code does what they think it does, then let them refactor like crazy while relying on the tests to confirm the methods are still valid.
The confidence level and flexibility unit tests allow are immensely valuable in these kinds of activities.
It seems that pair programming with a senior developer in combination with writing tests would be the optimal situation. If the more experienced developer in the pair is leading, the testing will just be part of it.
Writing tests alone leaves room for improvement. Writing tests in a pair environment is the best of both worlds and therefore should build the confidence quicker.
I recommend having unit tests in place before starting a heavy refactoring. If you have good code coverage, then by simply running the tests you'll be able to see whether any test fails and which refactoring affected the desired program behavior.
Basically Unit Testing can give you the confidence your team needs to refactor.
No matter what, you should push for "more pair programming, TDD, code reviews etc". You should also make sure that your programmers (both Junior in years and in habits) are skilled in the fundamentals.
I recommend suggesting that they read McConnell's Code Complete and Fowler's Refactoring.
I agree - unit tests, and pair programming.
The first step would be to have them write tests for whatever they want to refactor FIRST and then refactor. I also think there is some merit in code reviews with more senior developers as well as pair programming.
My suggestion would be to take a system that will change quite a bit over time and let a junior developer put together a plan of what basics they want to apply to it: are unit tests missing, what design pattern might make sense for adding the new functionality, do you feel this is "good" code, and if not, what changes would you make to it? Part of this is getting them into the code and comfortable with it; when a developer doesn't know what anything in the system does, chances are they will want to make minimal changes for fear of breaking something and the subsequent negative fallout. If there can be a senior member who acts as a mentor and guides what the junior developers suggest into something that better matches what is being sought, that may be the big thing to get in there.
Note that for the above, the senior member may have to be quite familiar with the system and, in a way, have already planned how to make the changes that the junior developer will do, but the idea, as I see it, is to get the juniors more into the code. If junior developers can get into the habit of jumping into things and are encouraged to do so, then I can see some success there. The key is to have an idea of how to correct what the junior developer suggests, yet encourage them to contribute more to the overall process rather than be told what to do.
Some people are more likely to stand out and take a chance, the key is for the group to see how this turns out as what you want in the end is a group of junior developers all working on various solutions where the senior developer may have originally built the system or integrated various products together and thus can give an input on what should be done but act as a guide rather than a parent in getting stuff done.
Another way to view this is to simply visualize things from the junior developer's view. If they suggest something and get something, e.g. praise or better assignments, then this may get things rolling, though one has to be careful on what is given as to keep taking things up a notch can run into problems if it grows too high.
Ask them to write or investigate existing test cases.
Run those test cases and record the results.
Ask them to refactor the code.
Review the refactored code.
Run the test cases again and match the results against the previous observations.
Also, try doing some coding dojos. One pair sits and programs at the projector, rotating one developer in every five minutes. Have them talk about how they refactor, and why.
See: http://www.codingdojo.org/
I believe the question is not specific to C#, so I'll suggest giving it a try with Java using Eclipse. Eclipse has the best refactoring tools I have ever seen so far (although I never tried IntelliJ IDEA or Resharper). I have benefited a lot by learning refactoring through Eclipse, especially using the Preview window before performing any change.
I would recommend a combination books, tools, coding and mentoring.
Before anything else - get the candidate to buy or borrow Refactoring by Martin Fowler and read it.
If you have a good source control system - it should be trivial to create a separate branch to play with. Also if the end result of the exercise is useful you can easily merge it into the trunk.
Next, pick a specific task that you know would require them to comprehend the application structure. For example, ask them to instrument a part of the system. This provides a task within which to work instead of general directives (like read the framework documentation or read this code)
The next part is to ask that tests be written to support the functionality touched by this task. Review the tests and encourage writing of comprehensive tests. It is important to use a tool for this like NUnit or if your IDE supports unit testing, use that. Encourage the person to ask questions and find out why things are the way they are.
If you are using an IDE that supports it introduce the refactoring tools in the IDE and encourage them to refactor the code. Use pair programming or regular code reviews to mentor the person. The tests can be used to show how unit tests are a vital part of a good refactoring effort. Start small - may be change the names of things or extract fields into properties and then move to more complex ones.
Hopefully, by the end of the exercise, not only will the person be comfortable enough to question, tweak and change the application and you would have some useful functionality out of it as well :-)
