Longform Question:
When running unit tests on object(s) whose purpose is to track various lengths of elapsed time, is there any way to speed up the process rather than having to sit through it? In essence, if there's a unit test that would take sixty or more seconds to complete, is there a way to simulate that test in one or two seconds? I don't want something that will cheat the test, as I still want the same comparable, accurate results, just without the minute of waiting before I get them. I guess you could say I'm asking if anyone knows how to implement a form of time warp.
Background Info:
I’m currently working with an object that can count up or down, and then does an action when the desired time has elapsed. All of my tests pass, so I’m completely fine on that front. My problem is that the tests require various lengths of time to pass for the tests to be completely thorough. This isn’t a problem for short tests, say five seconds, but if I wish to test longer lengths of time, say sixty seconds or longer, I have to wait that long before I get my result.
I’m using longer lengths of time on some tests to see how accurate the timing is, and if I’ve made sure the logic is correct so rollover isn’t an issue. I’ve essentially found that, while a short duration of time is fine for the majority of the tests, there are a few that have to be longer.
I’ve been googling and regardless of what search terms I’ve used, I can’t seem to find an answer to a question such as this. The only ones that seem to pop up are "getting up to speed with unit tests" and others of that nature. I’m currently using the MSTestv2 framework that comes with VS2017 if that helps.
Any help is greatly appreciated!
Edit:
Thanks for the responses! I appreciate the info I've been given so far and it's nice to get a fresh perspective on how I could tackle the issue. If anyone else has anything they'd like to / want to add, I'm all ears!
In 1998, John Carmack wrote:
If you don't consider time an input value, think about it until you do -- it is an important concept
The basic idea here being that your logic is going to take time as an input, and your boundary is going to have an element that can integrate with the clock.
In C#, the result is probably going to look like ports and adapters; you have a port (interface) that describes how you want to interact with the clock, and an adapter (implementation) that implements the interface and reads times off of the clock that you will use in production.
In your unit tests, you replace the production adapter with an implementation that you control.
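For illustration, a minimal sketch of what that port and its two adapters might look like (IClock, SystemClock and FakeClock are names invented for this example, not taken from any particular library):

using System;

// The port: the only way the timing logic is allowed to observe time.
public interface IClock
{
    DateTime UtcNow { get; }
}

// Production adapter: deliberately trivial, it just reads the real clock.
public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test adapter: the test owns the clock and can "warp" it forward instantly.
public sealed class FakeClock : IClock
{
    public DateTime UtcNow { get; private set; } = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public void Advance(TimeSpan delta) => UtcNow += delta;
}

A sixty-second test then boils down to a call like fakeClock.Advance(TimeSpan.FromSeconds(60)) followed by the assertion, and runs in microseconds instead of a minute.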
Key idea:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies
Your adapter implementation should be so simple (by design) that you can just look at it and evaluate its correctness. No logic, no data transformations, just the simplest thing that could possibly insulate you from the boundary.
Note that this might be a significant departure from your current design. That's OK, and part of the point of test driven design; the difficulties in testing are supposed to help you recognize the separable responsibilities in your code, and create the correct boundaries around them.
Cory Benfield's talk on building protocol libraries the right way describes this approach in the context of reading data from a socket; read data from IO, and just copy the bytes as is into a state machine that performs all of the logic.
So your "clock" might actually just be a stream/sequence of timestamp events, and your unit tests then document "given this sequence of timestamps, then that is the expected behavior of the functional core".
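As a hedged sketch of what such a test might look like with the MSTest v2 attributes mentioned in the question (CountdownTimer, Tick and HasElapsed are hypothetical stand-ins for your own object):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical minimal timer, just enough for the example to read end to end.
public sealed class CountdownTimer
{
    private readonly TimeSpan _duration;
    private DateTime? _firstTick;

    public CountdownTimer(TimeSpan duration) { _duration = duration; }

    public bool HasElapsed { get; private set; }

    public void Tick(DateTime now)
    {
        if (_firstTick == null) _firstTick = now;
        if (now - _firstTick.Value >= _duration) HasElapsed = true;
    }
}

[TestClass]
public class CountdownTimerTests
{
    [TestMethod]
    public void Fires_after_sixty_seconds_of_timestamps()
    {
        var start = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        var timer = new CountdownTimer(TimeSpan.FromSeconds(60));

        // Feed the functional core a sequence of timestamps instead of real elapsed time.
        for (int second = 0; second <= 60; second++)
        {
            timer.Tick(start.AddSeconds(second));
        }

        Assert.IsTrue(timer.HasElapsed);
    }
}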
The slower tests, that actually interact with the real clock, are moved from the unit test suite to the integration suite. They still have value, and you still want them to pass, but you don't want the delays they produce to interrupt the development/refactoring workflow.
We've got a large C# solution with multiple APIs, SVCs and so on.
Usual sort of enterprisey mess that you get after the same code has been worked on for years by multiple people.
Anyway! We have the ability to call an external service, and we have some unit tests in place that use a Moq-like stub implementation of the service's interface.
It so happens that there can be a large delay in calling the external service and it's not anything that we can control (it's a GDS interface).
We've been working on a way to streamline the user experience for this part of our platform.
The problem is, the stub doesn't actually do much at all - and of course, it is lightning fast compared to the real thing.
We want to introduce a random delay into one of the stubbed methods that will cause the call to take between 10 and 20 seconds to complete.
The naive approach is to do:
var random = new Random();
int sleepTimer = random.Next(10, 20); // note: the upper bound of Random.Next is exclusive, so this yields 10-19
Thread.Sleep(sleepTimer * 1000);
But something about this gives me a bad feeling.
What other ways do people have of solving this kind of scenario, or is Thread.Sleep actually OK to use in this context?
Thanks for your time!
-Russ
Edit, To answer some of the comments:
Basically, we don't want to call the live external service from our test suite, because it costs money and other business problems.
However, we want to test that our new processes work well, even when there's a variable delay in this essential call to the external service.
I would love to explain the exact process, but I'm not allowed to.
But yeah, the summary is that our test needs to ensure that a long running call to an external service doesn't obstruct the rest of the flow; and we need to ensure that other tasks don't get into any kind of race conditions, as they depend on the result of this call.
I agree that calling it a unit-test is somewhat incorrect now!
I have a moderately sized, highly tangled, messy, big ball-of-mud project that I want to refactor. It is about 50k lines of code and I am the only maintainer. One class is 10k LOC long with line widths of 400 characters. Features are still being added, albeit slowly.

I feel good about the refactoring part. Ripping apart code is very comfortable to me. However, getting a set of tests around the application to ensure that I don't break it seems daunting. Huge object graphs and mocking out all the databases will be real buggers.

I probably could get away with doing massive refactoring without breaking things horribly. But, prudence dictates some level of tests to ensure at least some level of safety. At the same time, I don't want to spend any more than a minimal amount of time getting a set of "behavior lock-down" tests around the code. I fully intend to add a full set of unit tests once I get things a bit decoupled. A rewrite is a non-starter.
Does anyone have any pointers or techniques? Has anyone done this sort of thing before?
Mindset
Generally it can be tough going trying to write automated tests (even high-level ones) for applications that were not built with testability in mind.
The best thing is going to be to make sure you are disciplined in writing tests as you refactor (which it sounds like you are intending to be). This will slowly turn that ball of code into an elegant dancing unicorn of well-encapsulated, testable classes.
Suggestions
Start with creating some manual high-level tests (e.g. user goes to page one, clicks on the red button, then a textbox appears...) to have a starting point. Depending on the technology the app is built in, there are a few frameworks out there that can help automate these high-level (often UI-driven) tests:
For web apps Selenium is a great choice, for WPF apps you can use the UI Automation framework, and for other applications, while it's a bit rudimentary, AutoIt can be a lifesaver.
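For instance, a minimal Selenium sketch of the "user clicks the red button, a textbox appears" style of check (the URL, element IDs and MSTest attributes here are placeholders and assumptions; any driver and test framework will do):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class HighLevelUiTests
{
    [TestMethod]
    public void Clicking_the_red_button_shows_the_textbox()
    {
        // Placeholder URL and element IDs - substitute your own pages and controls.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/page-one");
            driver.FindElement(By.Id("redButton")).Click();

            Assert.IsTrue(driver.FindElement(By.Id("resultTextbox")).Displayed);
        }
    }
}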
Here's how I do it in C++ (and C).
Create a directory to house the tests, cd to this directory
Create a directory to house mock objects, say mock-objs.
Create a makefile to compile all object files of interest.
Add the necessary include directories or mock .h files to make all the object files compile.
Congratulate yourself: you are 90% done.
Add a test harness of your choice (e.g. CppUnit, ATF, Google Test...).
Add a null test case - just start, log and declare success.
Add the necessary libraries and/or mock .c/.cpp files until the link succeeds and the very first test passes. Note: all functions in these mock .c/.cpp files should contain only a primitive that fails the test when called.
Congratulate yourself: you are 99% done.
Add a primitive event scheduler: say, just a list of callbacks, so you can post a request and receive a response from an event callback.
Add a primitive timer: say, a timer wheel, or even a timer list if you only need a few timers.
Write an advance-time function (see the sketch after this list): (a) process all queued events, (b) increment the current time to the next tick, (c) expire all timers waiting for this tick, (d) process all queued events over and over again until none are left, (e) if the end of the time advance has not been reached, go to step (b).
Congratulate yourself: now you can add tests with relative ease: add a test case, modify mock functions as required, repeat.
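A sketch of that advance-time idea, written in C# to match the rest of this thread (the class and method names are invented for the example; the same structure translates directly to a C/C++ harness):

using System;
using System.Collections.Generic;

public sealed class SimulatedScheduler
{
    private readonly Queue<Action> _events = new Queue<Action>();
    private readonly SortedDictionary<long, List<Action>> _timers = new SortedDictionary<long, List<Action>>();

    public long CurrentTick { get; private set; }

    // Post an event (request/response callback) to be processed on the next drain.
    public void Post(Action callback) => _events.Enqueue(callback);

    // Register a timer that expires a given number of ticks from now.
    public void StartTimer(long ticksFromNow, Action callback)
    {
        long due = CurrentTick + ticksFromNow;
        if (!_timers.TryGetValue(due, out List<Action> list))
        {
            _timers[due] = list = new List<Action>();
        }
        list.Add(callback);
    }

    // (a) process queued events, (b) step to the next tick, (c) expire timers for that tick,
    // (d) process queued events again, (e) repeat until the target tick is reached.
    public void AdvanceTime(long targetTick)
    {
        DrainEvents();
        while (CurrentTick < targetTick)
        {
            CurrentTick++;
            if (_timers.TryGetValue(CurrentTick, out List<Action> due))
            {
                _timers.Remove(CurrentTick);
                foreach (Action callback in due) callback();
            }
            DrainEvents();
        }
    }

    private void DrainEvents()
    {
        while (_events.Count > 0) _events.Dequeue()();
    }
}

A sixty-second scenario then becomes scheduler.AdvanceTime(60) and completes as fast as the callbacks can run.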
I want to monitor xunit.net tests that are running in CI (if you know similar approaches for NUnit, that may also help).
My integration tests have doubled in running time, and I would like to get information such as method call timings and resource usage.
I started Windows Performance Monitor, but it isn't clear to me where the time is being spent inside the tests or why the execution time has doubled.
Any advice? Thanks!
For a high-tech option that might cost you a few dollars, look into a profiling tool. There is a really good one for .NET called ANTS Performance Profiler that can help you work out where some of your bottlenecks are. It will also link to your code to show you exactly where the problems are showing up.
The simplest free (as in only spending your time) approach that I can think of is to introduce a base test class, and use the setup/teardown methods to log timing and other information to either the console or a file. Alternatively, you could create a test harness for use within each of your tests, and add the logging there. A slightly more sophisticated approach would be to use some method of delegation to trigger the Arrange/Act/Assert stages of your tests, and apply your logging there. Personally, I do all of my C# testing using a product called StoryQ. If I needed something logged, I would introduce a few extension methods to wrap my logging while still behaving as test steps.
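As a minimal sketch of that free approach (assuming xUnit, where the constructor and Dispose play the setup/teardown roles; the class names are illustrative):

using System;
using System.Diagnostics;
using Xunit;
using Xunit.Abstractions;

// Illustrative base class: xUnit runs the constructor before each test
// and Dispose after it, so the stopwatch brackets each test body.
public abstract class TimedTestBase : IDisposable
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();
    protected readonly ITestOutputHelper Output;

    protected TimedTestBase(ITestOutputHelper output)
    {
        Output = output;
    }

    public void Dispose()
    {
        _stopwatch.Stop();
        Output.WriteLine($"Finished in {_stopwatch.ElapsedMilliseconds} ms");
    }
}

public class OrderProcessingTests : TimedTestBase
{
    public OrderProcessingTests(ITestOutputHelper output) : base(output) { }

    [Fact]
    public void Some_slow_integration_scenario()
    {
        // ... test body under measurement ...
    }
}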
If it were me though, I'd spend the bucks and choose the profiler. It will pay for itself time and again, and really helps you to determine where to invest your time optimizing your code.
My current position is this: if I thoroughly test my ASP.NET applications using web tests (in my case via the VS.NET'08 test tools and WatiN, maybe) with code coverage and a broad spectrum of data, I should have no need to write individual unit tests, because my code will be tested in conjunction with the UI through all layers. Code coverage will ensure I'm hitting every functional piece of code (or reveal unused code) and I can provide data that will cover all reasonably expected conditions.
However, if you have a different opinion, I'd like to know:
What additional benefit does unit testing give that justifies the effort to include it in a project (keep in mind, I'm doing the web tests anyway, so in many cases, the unit tests would be covering code that web tests already cover).
Can you explain your reasons in detail with concrete examples? Too often I see responses like "that's not what it's meant for" or "it promotes higher quality" - which really doesn't address the practical question I have to face, which is: how can I justify - with tangible results - spending more time testing?
Code coverage will ensure I'm hitting every functional piece of code
"Hit" does not mean "Testing"
The problem with only doing web-testing is that it only ensures that you hit the code, and that it appears to be correct at a high-level.
Just because you loaded the page, and it didn't crash, doesn't mean that it actually works correctly. Here are some things I've encountered where 'web tests' covered 100% of the code, yet completely missed some very serious bugs which unit testing would not have.
The page loaded correctly from a cache, but the actual database was broken
The page loaded every item from the database, but only displayed the first one - it appeared to be fine even though it failed completely in production because it took too long
The page displayed a valid-looking number, which was actually wrong, but it wasn't picked up because 1000000 is easy to mistake for 100000
The page displayed a valid number by coincidence - 10x50 is the same as 25x20, but one is WRONG
The page was supposed to add a log entry to the database, but that's not visible to the user so it wasn't seen.
Authentication was bypassed to make the web-tests actually work, so we missed a glaring bug in the authentication code.
It is easy to come up with hundreds more examples of things like this.
You need both unit tests to make sure that your code actually does what it is supposed to do at a low level, and then functional/integration (which you're calling web) tests on top of those, to prove that it actually works when all those small unit-tested-pieces are chained together.
Unit testing is likely to be significantly quicker in turn-around than web testing. This is true not only in terms of development time (where you can test a method in isolation much more easily if you can get at it directly than if you have to construct a particular query which will eventually hit it, several layers later) but also in execution time (executing thousands of requests in sequence will take longer than executing short methods thousands of times).
Unit testing will test small units of functionality, so when they fail you should be able to isolate where the issue is very easily.
In addition, it's a lot easier to mock out dependencies with unit tests than when you hit the full front end - unless I've missed something really cool. (A few years ago I looked at how mocking and web testing could integrate, and at the time there was nothing appropriate.)
Unit testing does not generally prove that any given set of functionality works--at least it's not supposed to. It proves that your class contract works as you expect it to.
Acceptance tests are more oriented toward customer requirements. Every requirement should have an acceptance test, but there is really no direct correspondence between acceptance tests and unit tests--they might not even be in the same framework.
Unit testing can be used to drive code development, and speed of retesting is a significant factor. When testing, you often remove the parts that the class under test relies on, so that you can test it in isolation.
Acceptance tests exercise the system just as you would deliver it--from GUI to database. Sometimes they take hours or days (or weeks) to run.
If you start to think of it as two completely different beasts, you will be a much more effective tester.
Good, focused unit tests make it a lot faster to find and fix problems when they crop up. When a well-written unit test breaks, you know pretty much what the failure means and what caused it.
Also, they're typically faster to run, meaning that you're much more likely to run them during development as part of your edit-compile-test cycle (as opposed to only when you're about to check-in).
When you write unit tests you will be forced to write your code in a better way: more loosely coupled and more object-oriented. That will lead to better architecture and a more flexible system.
If you write unit tests in a TDD style, you probably won't write as much unnecessary code, because you will focus on tiny steps and only do what is necessary.
You will be more confident when refactoring to improve your code, increase maintainability and reduce code smells.
And the unit tests themselves will serve as excellent documentation of what the system does and does not do.
Those are just a few examples of benefits I have noticed when applying TDD to my work.
One more aspect - forgive the somewhat ideal environment that this is situated in:
Suppose you have 3 components that finally have to work together. Each can individually be completely unit-tested (whatever that means) with 5 unit tests. This makes 5 + 5 + 5 = 15 unit tests for complete coverage.
Now if you have your integration/web/combined test that tests all components together, you'd need (remember the ideal world for this abstract scenario) 5 * 5 * 5 = 125 tests that test all permutations to give you the same result as the 15 test cases above (given that you can even trigger all permutations, otherwise there would be untested/unspecified behaviour that might bite you later when you extend your components).
Additionally the 125 test cases would have a significantly more complicated setup, higher turnaround time and greatly decreased maintainability should the feature ever change. I'd rather opt for 15 unit tests plus some basic web tests that ensure that the components are wired somewhat correctly.
Unit testing gives you speed and, most of all, pinpoint accuracy about where a failure or bug has been introduced. It enables you to test each component, isolated from every other component, and be assured that it works as it should.
For example, suppose you had a web test that sent an Ajax request back to a service on the server, which then hit a database, and which then failed. Was it the JavaScript, the service, the business logic or the database that caused the problem?
Whereas if you unit test each of the services on its own, stubbing/mocking out the database, or each business logic unit, then you are more likely to know exactly where the bug is. Unit testing is less about coverage (although that is important) and more about isolation.
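For illustration, a hedged sketch of the kind of isolation being described (IOrderRepository, OrderService and the free-shipping rule are invented for the example, and Moq is just one way to supply the stub):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Hypothetical types, invented purely to illustrate the isolation.
public interface IOrderRepository
{
    decimal GetOrderTotal(int orderId);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public bool QualifiesForFreeShipping(int orderId)
    {
        return _repository.GetOrderTotal(orderId) >= 50m;
    }
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void Orders_of_fifty_or_more_ship_free()
    {
        // The database is stubbed away, so a failure here points at the business rule itself.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetOrderTotal(42)).Returns(50m);

        var service = new OrderService(repository.Object);

        Assert.IsTrue(service.QualifiesForFreeShipping(42));
    }
}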
Almost as Dijkstra put it: unit-tests could only be used to show that software has defects, not to prove that it's defect-free. So, in general, hitting every code path once (and obtaining 100% coverage) has nothing to do with testing - it just helps to eliminate bitrot.
If you are playing it by the book, every serious bug should be eliminated only after there is a unit test written that triggers that bug. Coincidentally, fixing the bug means that this particular unit-test is not failing anymore. In future, this unit test checks that this particular bug stays fixed.
It is much easier to write a unit test that triggers a particular bug than write an end-to-end (web) test that does ONLY that and doesn't run heaps of completely irrelevant code along the way (which could also fail and mess up with root cause analysis).
Unit Tests test that each component works. These are extremely helpful in finding defects close to the time they are created, which dramatically cuts down the cost to fix defects and dramatically reduces the number of defects which end up in your release. Additionally, good unit tests make refactoring a whole lot easier and more robust.
Integration tests (or "web" tests in this case) are also very important but are a later line of defense than unit tests. A single integration test covers such a huge swath of code that when one fails it requires a lot of investigation to determine the defect (or possibly the group of defects) which caused the failure. This is costly, especially when you are trying to test a release build to get it out the door. It is even more costly given that the chance of introducing a bug with the fix tends to be pretty high on average, and that the failure may well block further testing of the release (which is extremely expensive to the development cycle).
In contrast, when a unit test fails you know exactly where the defective code is and you usually know exactly what the code is doing wrong. Also, a unit test failure should only impact one developer at a time and be fixed before the code is checked in.
It's always more expensive to fix bugs later than earlier. Always.
Consider building an automobile. Would you wait until the entire vehicle rolls off the assembly line to test that each component works? At that point if you discover the CD player or the engine or the air conditioner or the cruise control doesn't work you have to take the whole vehicle off the line, fix the problem, then re-test everything (which hopefully doesn't reveal any new defects). This is obviously the wrong way to do it, just as it is obviously wrong to try to release software while only testing if it works at the end of the process rather than at every important step along the line from the ground up.
It depends on the ASP.NET application's architecture. If the web pages are merely hooking up an underlying business logic layer or data access layer, then unit tests working independently of the ASP.NET state model are faster to develop and run than similar WatiN tests.
I recently developed an area of a legacy ASP.NET application, in Visual Studio 2003, with NUnit as the test framework. Whereas previously testing involved working through UI tests to ensure functionality was implemented correctly, 90% of the testing occurred without requiring UI interaction.
The only problem I had was time estimates - one of the tasks was planned in Trac as taking 1 day for the data access/business logic, and 2 days for the UI creation and testing. With NUnit running over the data access/business logic, the time for that portion of the development went from 1 day to 2 days. The UI development was reduced to a single 1/2 day.
This continued with other tasks within the new module being added to the application. The unit tests discovered bugs faster, and in a way that was less painful (for me) and I have more confidence in the application functioning as expected. Even better the unit tests are very repeatable, in that they do not depend on any UI redesign, so tend to be less fragile as changes in design fail in compilation, not at runtime.
Unit testing allows for more rigorous performance testing and makes it much easier to determine where bottlenecks occur. For large applications, performance becomes a major issue when 10,000 users are hitting a method at the same time and that method takes 1/100th of a second to execute, perhaps because of a poorly written database query, because now some of the users have to wait up to 10 seconds for the page to load. I know I personally won't wait that long for pages to load, and will just move on to a different place.
Unit testing allows you to focus on the logic of getting data, applying rules, and validating business processes before you add the intricacies of a front end. By the time the unit tests have been written and conducted, I know that the infrastructure for the app will support the data types that I pass back and forth between processes, that my transactions work appropriately, etc.
In other words, the potential points of failure are better isolated by the time you start your acceptance testing because you have already worked through the potential errors in your code with the unit tests. With the history of the unit tests, you'll know that only certain modules will throw certain errors. This makes tracking down the culprit code much easier.