My project has 1000+ unit tests that all run in less than 10 seconds on a local machine. But when they run on TFS Build, some tests run significantly slower than others: 3 of them take about 1-2 minutes, another 4 take 5-30 seconds, and the rest run in fractions of a second. I've noticed that all of the slower tests use fakes from Microsoft Fakes, and each one of them is the first to run in its class. But many of the other tests also use fakes (some more intensively) and run in the normal time. I would like to know what may be causing that slowdown and how I can fix it.
Edit: I've noticed that every slower test runs right after a mock-less test. Maybe the slowdown is caused by the initialization of the ShimsContext. In my test classes, the ShimsContext is created and disposed in the TestInitialize and TestCleanup methods. Does that significantly affect performance?
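A minimal sketch of that pattern (the class and member names here are just for illustration):

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ReportGeneratorTests   // illustrative test class
{
    private IDisposable _shimsContext;

    [TestInitialize]
    public void TestInitialize()
    {
        // ShimsContext.Create() returns an IDisposable that scopes any shims
        // set up during the test; creating the context is the part I suspect
        // is expensive on the build server.
        _shimsContext = ShimsContext.Create();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Disposing removes the detours installed by the shims.
        _shimsContext.Dispose();
    }
}
```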
First of all, I would strongly recommend you move away from shims. They are a crutch and, aside from a very few scenarios, simply not needed. Design your code for testability, and you will find you can do without them.
Second, shims are not thread safe and cannot safely be used concurrently. That is likely why you are seeing the really slow run times.
Either your local machine is running things concurrently when it shouldn't (Microsoft says it isn't safe, but doesn't enforce it) while the build server isn't, or the build server is trying to run tests in parallel and that is causing issues.
Tweak your concurrency settings to disable parallel test execution and see what effect that has on your runtime.
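If the tests use MSTest V2 (an assumption; the original MSTest only parallelizes across assemblies, which is controlled by the MaxCpuCount setting in a .runsettings file), in-assembly parallel execution can be switched off with an assembly-level attribute:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Forces MSTest V2 to run all tests in this assembly serially,
// which rules parallelism in or out as the cause of the slowdown.
[assembly: DoNotParallelize]
```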
Refer to the following links:
https://softwareengineering.stackexchange.com/questions/184834/how-do-we-make-unit-tests-run-fast
http://arlobelshee.com/the-no-mocks-book/
The links say that making tests fast can be hard. Decoupling is key. Mocks/fakes are OK, but one can do better by refactoring to make mocks/fakes unnecessary.
I have a moderately sized, highly tangled, messy, big ball-of-mud project that I want to refactor. It is about 50k lines of code and I am the only maintainer. One class is 10k LOC long with line widths of 400 characters. Features are still being added, albeit slowly.

I feel good about the refactoring part. Ripping apart code is very comfortable to me. However, getting a set of tests around the application to ensure that I don't break it seems daunting. Huge object graphs and mocking out all the databases will be real buggers.

I probably could get away with doing massive refactoring without breaking things horribly. But prudence dictates some level of tests to ensure at least some level of safety. At the same time, I don't want to spend any more than a minimal amount of time getting a set of "behavior lock-down" tests around the code. I fully intend to add a full set of unit tests once I get things a bit decoupled. A rewrite is a non-starter.
Does anyone have any pointers or techniques? Has anyone done this sort of thing before?
Mindset
Generally, it can be tough going trying to write automated tests (even high-level ones) for applications that were not built with testability in mind.
The best thing is going to be to make sure you are disciplined in writing tests as you refactor (which it sounds like you intend to be). This will slowly turn that ball of code into an elegant dancing unicorn of well-encapsulated, testable classes.
Suggestions
Start by creating some manual high-level tests (e.g. user goes to page one, clicks on the red button, then a textbox appears...) to have a starting point. Depending on the technology the app is built in, there are a few frameworks out there that can help automate these high-level (often UI-driven) tests:
For web apps Selenium is a great choice, for WPF apps you can use the UI Automation framework, and for other applications, while it's a bit rudimentary, AutoIt can be a lifesaver.
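To make the web case concrete, a Selenium WebDriver version of the "click the red button, a textbox appears" check might look roughly like this in C# (the URL, element IDs, and test framework are illustrative):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class RedButtonTests
{
    [Test]
    public void ClickingRedButton_ShowsTextbox()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost/page-one");  // hypothetical URL

            driver.FindElement(By.Id("redButton")).Click();          // hypothetical element id

            IWebElement textbox = driver.FindElement(By.Id("resultTextbox"));
            Assert.IsTrue(textbox.Displayed);
        }
    }
}
```

Even a handful of these high-level checks gives you a safety net to refactor under, without having to untangle the object graph first.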
Here's how I do it in C++ (and C):
Create a directory to house the tests, cd to this directory
Create a directory to house mock objects, say mock-objs.
Create a makefile to compile all object files of interest.
Add the necessary include directories or mock .h files to make all the object files compile.
Congratulate yourself you are 90% done.
Add a test harness of your choice (e.g. cppunit, atf-tests, google test ..).
Add a null test case - just start, log and declare success.
Add necessary libraries and/or mock .c/.cpp files until the link is successful and the very first test passes. Note: every function in these mock .c/.cpp files should contain only a primitive that fails the test when called.
Congratulate yourself you are 99% done.
Add a primitive event scheduler: say, just a list of callbacks, so you can post a request and receive a response from an event callback.
Add a primitive timer: say, a timer wheel, or even a timer list if you only need a few timers.
Write an advance-time function: (a) process all queued events, (b) increment the current time to the next tick, (c) expire all timers waiting for this tick, (d) process all queued events repeatedly until none are left, (e) if the end of the time advance has not been reached, go to step (b). (A sketch of this loop appears after these steps.)
Congratulate yourself: now you can add tests with relative ease: add a test case, modify mock functions as required, repeat.
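The answer above is about C and C++, but the advance-time loop itself is language-agnostic; here is a minimal sketch of the idea (in C#, purely for illustration), with events as plain callbacks and timers kept in a sorted map keyed by expiry tick:

```csharp
using System;
using System.Collections.Generic;

public class SimulatedClock
{
    private readonly Queue<Action> _events = new Queue<Action>();
    private readonly SortedDictionary<long, List<Action>> _timers =
        new SortedDictionary<long, List<Action>>();

    public long CurrentTick { get; private set; }

    // Post a request; its callback runs the next time events are drained.
    public void Post(Action callback) => _events.Enqueue(callback);

    // Start a timer that fires ticksFromNow ticks in the future.
    public void StartTimer(long ticksFromNow, Action callback)
    {
        long expiry = CurrentTick + ticksFromNow;
        if (!_timers.TryGetValue(expiry, out var list))
            _timers[expiry] = list = new List<Action>();
        list.Add(callback);
    }

    // The advance-time function: (a) drain events, then repeatedly
    // (b) step the clock, (c) expire timers, (d) drain events,
    // (e) until the requested tick is reached.
    public void AdvanceTime(long untilTick)
    {
        DrainEvents();                                    // (a)
        while (CurrentTick < untilTick)                   // (e)
        {
            CurrentTick++;                                // (b)
            if (_timers.TryGetValue(CurrentTick, out var expired))
            {
                _timers.Remove(CurrentTick);
                foreach (var callback in expired)         // (c)
                    _events.Enqueue(callback);
            }
            DrainEvents();                                // (d)
        }
    }

    private void DrainEvents()
    {
        while (_events.Count > 0)
            _events.Dequeue()();
    }
}
```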
When working on larger projects, it can take at least 10 seconds to compile and start the unit test framework. Are there effective ways to reduce the feedback loop time? I intend to make just small changes in one unit test class and one other class between test runs.
I considered some other approaches. I do not see any way to compile and run a single test class and dependencies. I could increase the number of projects in the solution so that each assembly takes less time to compile, but that causes other issues. NCrunch appears to reduce the need to manually run tests, but it still compiles full assemblies.
Clarification:
The 10 seconds included time to compile the unit test class and the class under test. My issue with NCrunch may have been because of a less powerful computer.
You'd have to put each test class in a separate assembly - an assembly is effectively the unit of compilation. If it's taking 10 seconds to recompile after just a change to a test class, that suggests that either you've got too many tests in one assembly, or you've got a very slow machine. It could well be that getting a better machine (or improving the existing one with more memory or an SSD) is the best way forward.
I use NCrunch myself, and although it still compiles full assemblies, the fact that it's doing it in the background means that usually by the time I've taken a mental breath, the tests have rebuilt and are running. NCrunch works well if you've got multiple processors and a ramdisk, by the way - you can set where it builds, and also how many processors it can use.
If you've only considered NCrunch (or The Mighty Moose etc - similar stuff) but not actually tried it, you should give it a go before assuming it won't be fast enough for you.
You might check out AutoTest.Net, which is an add-on for Visual Studio that runs your unit tests in the background as you write your code.
This way you can treat your unit tests more like compiler errors / warnings and get relatively real-time feedback.
Declarative unit tests would effectively eliminate the compilation time, but only if your architecture permits them. For example, moving the unit tests into a database worked well for us in a large-scale project.
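As a sketch of what "declarative" can mean here (the file name, data format, and class under test are all invented), a single compiled theory enumerates cases stored as data, so adding or changing a case needs no rebuild:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Xunit;

public static class PricingRules   // trivial stand-in for the real system under test
{
    public static decimal Compute(decimal input) => decimal.Round(input * 1.2m, 2);
}

public class PricingRulesTests
{
    // Each line of the file is "input,expected". The file could just as well be a
    // database table; the point is that new cases require no recompilation.
    public static IEnumerable<object[]> Cases() =>
        File.ReadLines("pricing-cases.csv")
            .Select(line => line.Split(','))
            .Select(parts => new object[] { decimal.Parse(parts[0]), decimal.Parse(parts[1]) });

    [Theory]
    [MemberData(nameof(Cases))]
    public void Price_is_computed_as_expected(decimal input, decimal expected)
    {
        Assert.Equal(expected, PricingRules.Compute(input));
    }
}
```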
I want to monitor xunit.net tests that are running in CI (if you know of similar approaches for NUnit, that may also help).
My integration tests have started taking twice as long to run, and I would like to get information such as method call timings and resource usage.
I started Windows Performance Monitor, but it isn't clear to me where the time is being spent inside the tests or why their execution time doubled.
Any advice? Thanks!
For a high-tech option that might cost you a few dollars, look into a profiling tool. There is a really good one for .NET called ANTS Performance Profiler that can help you work out where some of your bottlenecks are. It will also link to your code to show you exactly where the problems are showing up.
The simplest free (as in only spending your time) approach that I can think of is to introduce a base test class and use the setup/teardown methods to log timing and other information to either the console or a file. Alternatively, you could create a test harness for use within each of your tests and add the logging there. A slightly more sophisticated approach would be to use some method of delegation to trigger the Arrange/Act/Assert stages of your tests and apply your logging there. Personally, I do all of my C# testing using a product called StoryQ. If I needed something logged, I would introduce a few extension methods to wrap my logging while still behaving as test steps.
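A minimal sketch of the base-class idea for xunit.net, which the question mentions (xunit has no setup/teardown attributes, so the constructor and Dispose play those roles; class names are illustrative):

```csharp
using System;
using System.Diagnostics;
using Xunit;
using Xunit.Abstractions;

// Tests that inherit from this get their wall-clock time written to the test output,
// which CI runners typically capture alongside the results.
public abstract class TimedTestBase : IDisposable
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    protected readonly ITestOutputHelper Output;

    protected TimedTestBase(ITestOutputHelper output) => Output = output;

    public void Dispose() =>
        Output.WriteLine($"Test finished in {_stopwatch.ElapsedMilliseconds} ms");
}

public class CheckoutIntegrationTests : TimedTestBase   // illustrative test class
{
    public CheckoutIntegrationTests(ITestOutputHelper output) : base(output) { }

    [Fact]
    public void Placeholder_test()
    {
        Assert.True(true);
    }
}
```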
If it were me though, I'd spend the bucks and choose the profiler. It will pay for itself time and again, and really helps you to determine where to invest your time optimizing your code.
My current position is this: if I thoroughly test my ASP.NET applications using web tests (in my case via the VS.NET'08 test tools and WatiN, maybe) with code coverage and a broad spectrum of data, I should have no need to write individual unit tests, because my code will be tested in conjunction with the UI through all layers. Code coverage will ensure I'm hitting every functional piece of code (or reveal unused code) and I can provide data that will cover all reasonably expected conditions.
However, if you have a different opinion, I'd like to know:
What additional benefit does unit testing give that justifies the effort to include it in a project? (Keep in mind, I'm doing the web tests anyway, so in many cases the unit tests would be covering code that web tests already cover.)
Can you explain your reasons in detail with concrete examples? Too often I see responses like "that's not what it's meant for" or "it promotes higher quality", which really doesn't address the practical question I have to face: how can I justify, with tangible results, spending more time on testing?
"Code coverage will ensure I'm hitting every functional piece of code"
"Hit" does not mean "testing".
The problem with only doing web testing is that it only ensures that you hit the code and that it appears to be correct at a high level.
Just because you loaded the page and it didn't crash doesn't mean that it actually works correctly. Here are some things I've encountered where 'web tests' covered 100% of the code, yet completely missed some very serious bugs that unit testing would not have missed:
The page loaded correctly from a cache, but the actual database was broken
The page loaded every item from the database, but only displayed the first one - it appeared to be fine even though it failed completely in production because it took too long
The page displayed a valid-looking number, which was actually wrong, but it wasn't picked up because 1000000 is easy to mistake for 100000
The page displayed a valid number by coincidence - 10x50 happens to give the same result as 25x20, but one of them is WRONG (see the sketch after this list)
The page was supposed to add a log entry to the database, but that's not visible to the user so it wasn't seen.
Authentication was bypassed to make the web-tests actually work, so we missed a glaring bug in the authentication code.
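To make the "valid number by coincidence" case concrete, a unit test pinned to deliberately asymmetric inputs (the calculation and names are invented) catches a swapped-operand bug that a "the page showed a plausible total" web test cannot:

```csharp
using Xunit;

public static class InvoiceMath   // stand-in for the real calculation code
{
    public static decimal LineTotal(int quantity, decimal unitPrice) => quantity * unitPrice;
}

public class InvoiceMathTests
{
    [Fact]
    public void LineTotal_multiplies_quantity_by_unit_price()
    {
        // Asymmetric inputs: if operands get swapped or mixed up somewhere,
        // the result changes, unlike the 10x50 vs 25x20 coincidence above.
        Assert.Equal(22.50m, InvoiceMath.LineTotal(3, 7.50m));
    }
}
```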
It is easy to come up with hundreds more examples of things like this.
You need both unit tests to make sure that your code actually does what it is supposed to do at a low level, and then functional/integration (which you're calling web) tests on top of those, to prove that it actually works when all those small unit-tested-pieces are chained together.
Unit testing is likely to be significantly quicker in turn-around than web testing. This is true not only in terms of development time (where you can test a method in isolation much more easily if you can get at it directly than if you have to construct a particular query which will eventually hit it, several layers later) but also in execution time (executing thousands of requests in sequence will take longer than executing short methods thousands of times).
Unit testing will test small units of functionality, so when they fail you should be able to isolate where the issue is very easily.
In addition, it's a lot easier to mock out dependencies with unit tests than when you hit the full front end - unless I've missed something really cool. (A few years ago I looked at how mocking and web testing could integrate, and at the time there was nothing appropriate.)
Unit testing does not generally prove that any given set of functionality works--at least it's not supposed to. It proves that your class contract works as you expect it to.
Acceptance tests are more oriented toward customer requirements. Every requirement should have an acceptance test, but there is really no fixed relationship between acceptance tests and unit tests--they might not even be in the same framework.
Unit testing can be used to drive code development, and speed of retesting is a significant factor. When unit testing, you often stub out the parts that the class under test relies on so that you can test it in isolation.
Acceptance tests exercise the system just as you would deliver it--from the GUI to the database. Sometimes they take hours or days (or weeks) to run.
If you start to think of it as two completely different beasts, you will be a much more effective tester.
Good, focused unit tests make it a lot faster to find and fix problems when they crop up. When a well-written unit test breaks, you know pretty much what the failure means and what caused it.
Also, they're typically faster to run, meaning that you're much more likely to run them during development as part of your edit-compile-test cycle (as opposed to only when you're about to check in).
When you write unit tests you will be forced to write your code in a better way: more loosely coupled and more object-oriented. That will lead to a better architecture and a more flexible system.
If you write unit tests in a TDD style, you probably won't write as much unnecessary code, because you will focus on tiny steps and only do what is necessary.
You will be more confident when refactoring to improve your code, increase maintainability, and reduce code smells.
And the unit tests themselves will serve as excellent documentation of what the system does and does not do.
Those are just a few examples of benefits I have noticed when applying TDD to my work.
One more aspect - forgive the somewhat idealized environment this scenario is situated in:
Suppose you have 3 components that finally have to work together. Each can individually be completely unit-tested (whatever that means) with 5 unit tests. This makes 5 + 5 + 5 = 15 unit tests for complete coverage.
Now if you have your integration/web/combined test that tests all components together, you'd need (remember the ideal world for this abstract scenario) 5 * 5 * 5 = 125 tests that test all permutations to give you the same result as the 15 test cases above (given that you can even trigger all permutations, otherwise there would be untested/unspecified behaviour that might bite you later when you extend your components).
Additionally the 125 test cases would have a significantly more complicated setup, higher turnaround time and greatly decreased maintainability should the feature ever change. I'd rather opt for 15 unit tests plus some basic web tests that ensure that the components are wired somewhat correctly.
Unit testing gives you speed and, most of all, pinpoint accuracy about where a failure or bug has been introduced. It enables you to test each component in isolation from every other component and be assured that it works as it should.
For example, suppose a web test sends an Ajax request to a service on the server, which then hits a database, and the test fails. Was it the JavaScript, the service, the business logic, or the database that caused the problem?
Whereas if you unit test each service on its own, stubbing/mocking out the database, or each business logic unit, then you are far more likely to know exactly where the bug is. Unit testing is less about coverage (although that is important) and more about isolation.
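A hedged sketch of that isolation in C# (the interface, class, and values are invented; the Moq library stands in for whichever mocking tool you use): the service depends on an abstraction rather than the database, so the test never touches SQL, the web service, or the UI.

```csharp
using Moq;
using Xunit;

public interface IOrderRepository
{
    decimal GetUnitPrice(string sku);
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) => _repository = repository;

    public decimal Total(string sku, int quantity) => _repository.GetUnitPrice(sku) * quantity;
}

public class OrderServiceTests
{
    [Fact]
    public void Total_uses_price_from_repository()
    {
        // The database is replaced by a mock, so a failure here points at
        // OrderService itself, not at the data layer, the service call, or the JavaScript.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetUnitPrice("ABC")).Returns(9.99m);

        var service = new OrderService(repository.Object);

        Assert.Equal(29.97m, service.Total("ABC", 3));
    }
}
```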
As Dijkstra (almost) put it: unit tests can only be used to show that software has defects, not to prove that it is defect-free. So, in general, hitting every code path once (and obtaining 100% coverage) has nothing to do with testing - it just helps to eliminate bit rot.
If you are playing it by the book, every serious bug should be eliminated only after a unit test has been written that triggers that bug. Fixing the bug then means that this particular unit test no longer fails, and from then on it checks that the bug stays fixed.
It is much easier to write a unit test that triggers a particular bug than to write an end-to-end (web) test that does ONLY that and doesn't run heaps of completely irrelevant code along the way (which could also fail and mess up root-cause analysis).
Unit Tests test that each component works. These are extremely helpful in finding defects close to the time they are created, which dramatically cuts down the cost to fix defects and dramatically reduces the number of defects which end up in your release. Additionally, good unit tests make refactoring a whole lot easier and more robust.
Integration tests (or "web" tests in this case) are also very important, but they are a later line of defense than unit tests. A single integration test covers such a huge swath of code that when one fails it requires a lot of investigation to determine the defect (or possibly the group of defects) that caused the failure. This is costly, especially when you are trying to test a release build to get it out the door. It is even more costly given that the chance of introducing a bug with the fix tends to be pretty high on average, and the failure may block further testing of the release (which is extremely expensive to the development cycle).
In contrast, when a unit test fails you know exactly where the defective code is and you usually know exactly what the code is doing wrong. Also, a unit test failure should only impact one developer at a time and be fixed before the code is checked in.
It's always more expensive to fix bugs later than earlier. Always.
Consider building an automobile. Would you wait until the entire vehicle rolls off the assembly line to test that each component works? At that point if you discover the CD player or the engine or the air conditioner or the cruise control doesn't work you have to take the whole vehicle off the line, fix the problem, then re-test everything (which hopefully doesn't reveal any new defects). This is obviously the wrong way to do it, just as it is obviously wrong to try to release software while only testing if it works at the end of the process rather than at every important step along the line from the ground up.
It depends on the ASP.NET application's architecture. If the web pages are merely hooking up an underlying business logic layer or data access layer, then unit tests working independently of the ASP.NET state model are faster to develop and run than similar WatiN tests.
I recently developed an area of a legacy ASP.NET application, in Visual Studio 2003, with NUnit as the test framework. Whereas previously testing involved working through UI tests to ensure functionality was implemented correctly, 90% of the testing occurred without requiring UI interaction.
The only problem I had was with time estimates - one of the tasks was planned in Trac as taking 1 day for the data access/business logic and 2 days for the UI creation and testing. With NUnit running over the data access/business logic, the time for that portion of the development went from 1 day to 2 days. UI development was reduced to a single half day.
This continued with other tasks within the new module being added to the application. The unit tests discovered bugs faster, and in a way that was less painful (for me), and I have more confidence in the application functioning as expected. Even better, the unit tests are very repeatable: they do not depend on any UI redesign, so they tend to be less fragile, as changes in design fail at compile time, not at runtime.
Unit testing allows for more rigorous performance testing and makes it much easier to determine where bottlenecks occur. For large applications, performance becomes a major issue when 10,000 users are hitting a method at the same time: if that method takes 1/100th of a second to execute, perhaps because of a poorly written database query, some of those users end up waiting up to 10 seconds for the page to load. I know I personally won't wait that long for a page to load, and will just move on to a different place.
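One hedged sketch of that kind of check (the class under test, the threshold, and the iteration count are all placeholders and environment-dependent): time a suspect method in a tight loop and fail if the average cost regresses.

```csharp
using System.Diagnostics;
using Xunit;

// Minimal stand-in so the sketch compiles; the real implementation would run the query.
public class PriceCatalog
{
    public decimal LookupPrice(string sku) => 9.99m;
}

public class PriceCatalogPerformanceTests
{
    [Fact]
    public void LookupPrice_stays_under_two_milliseconds_on_average()
    {
        const int iterations = 1000;
        var catalog = new PriceCatalog();

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            catalog.LookupPrice("ABC");
        stopwatch.Stop();

        double averageMs = stopwatch.ElapsedMilliseconds / (double)iterations;
        Assert.True(averageMs < 2.0, $"Average call took {averageMs} ms");
    }
}
```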
Unit testing allows you to focus on the logic of getting data, applying rules, and validating business processes before you add the intricacies of a front end. By the time the unit tests have been written and conducted, I know that the infrastructure for the app will support the data types that I pass back and forth between processes, that my transactions work appropriately, etc.
In other words, the potential points of failure are better isolated by the time you start your acceptance testing because you have already worked through the potential errors in your code with the unit tests. With the history of the unit tests, you'll know that only certain modules will throw certain errors. This makes tracking down the culprit code much easier.