Unit testing - how to emulate a delay - c#

We've got a large C# solution with multiple APIs, SVCs and so on.
The usual sort of enterprisey mess that you get after the same code has been worked on for years by multiple people.
Anyway! We have the ability to call an external service, and we have some unit tests in place that use a Moq-like stub implementation of the service's interface.
It so happens that there can be a large delay in calling the external service and it's not anything that we can control (it's a GDS interface).
We've been working on a way to streamline the user experience for this part of our platform.
The problem is, the stub doesn't actually do much at all, and of course it is lightning fast compared to the real thing.
We want to introduce a random delay into one of the stubbed methods so that the call takes between 10 and 20 seconds to complete.
The naive approach is to do:
int sleepSeconds = random.Next(10, 21); // Next's upper bound is exclusive, so this yields 10-20
Thread.Sleep(sleepSeconds * 1000);
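In context, that would sit inside the stubbed method roughly like this (IGdsService, Search and the result types are placeholder names, not our real interface):

// Placeholder stub shape; IGdsService / Search / SearchResult are made-up names.
public class StubGdsService : IGdsService
{
    private static readonly Random random = new Random();

    public SearchResult Search(SearchRequest request)
    {
        // Simulate the slow external call: block for 10-20 seconds.
        int sleepSeconds = random.Next(10, 21); // upper bound is exclusive
        Thread.Sleep(TimeSpan.FromSeconds(sleepSeconds));
        return new SearchResult();
    }
}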
But something about this gives me a bad feeling.
What other ways do people have of solving this kind of scenario, or is Thread.Sleep actually OK to use in this context?
Thanks for your time!
-Russ
Edit: to answer some of the comments:
Basically, we don't want to call the live external service from our test suite, because it costs money and causes other business problems.
However, we want to test that our new processes work well, even when there's a variable delay in this essential call to the external service.
I would love to explain the exact process, but I'm not allowed to.
But yeah, the summary is that our test needs to ensure that a long-running call to an external service doesn't obstruct the rest of the flow, and that other tasks don't get into any kind of race condition, since they depend on the result of this call.
I agree that calling it a unit-test is somewhat incorrect now!

Related

C# stop async inheritance

I'm getting to grips with the whole async/await functionality in C# right now.
I think I know what it's good for, but I've encountered places where I don't want every method that calls a library function of mine to be forced to become "async" aware.
Consider this (rough pseudo-code, not really representing the real thing, it's just about the context):
string jokeOfTheHour;

public string GiveJokeOfTheHour()
{
    if (HourIsOver)
    {
        // Block synchronously on the third party's async call once per hour.
        jokeOfTheHour = thirdPartyLibrary.GetNewJoke().GetAwaiter().GetResult();
    }
    return jokeOfTheHour;
}
I have a web-back-end library function which is called up to a million times per hour (or even more).
Exactly one time of these million calls per hour, the logic within uses a third party library which just supports async calls for the methods I want to use from it.
I don't want the user of my library to even think that it would make any sense for them to run any code asynchronously when calling my library function, because it would only generate unnecessary overhead for their code and run time the vast majority of the time.
The reasons I would state here are:
Separation of concerns. I know how I work; my user does not need to.
Context is everything. As a developer, having background knowledge is how I know which cases I need to consider when writing code and which I don't. That enables me to omit writing hundreds of lines of code for stuff that should never happen.
Now, I want to know what general rules there are for doing this. But sadly, browsing the web I can't find simple statements or rules where anybody says "In this, this and this situation, you can stop the "async" keyword bubbling up your method call tree". I've just seen people (some of them Microsoft MVPs) stating that there absolutely are situations where this should be done, and that you should use .GetAwaiter().GetResult() as a best practice in those cases, but they are never specific about the situations themselves.
What I am looking for is a down-to-the-ground general rule in which I can say:
Even though I might call third-party functions which are async, I do not execute asynchronously, and do not want to appear as such. I'm a bottom-level function using caches 99.99999% of the time. I don't need my user to implement the async methodology all the way up to the point where they actually have to decide where the async execution stops (which makes the user who should actually benefit from my library write more code and pay more execution time).
I would really be thankful for your help :)
You seem to want your method to introduce itself with: "I'm fast". The truth is that from time to time it can actually be (very) slow. This potentially has serious consequences.
The statement
I'm a bottom-level function using caches 99.99999% of the time
is not correct if you call your method once an hour.
It is better for consumers of your method to see "I can be slow, but if you call me often, I cache the result, so I will return fast" (which would be GiveJokeOfTheHourAsync() with a comment.)
If you want your method to always be fast I would suggest one of these options:
Have an UpdateJokeAsync method that you call without awaiting it inside your if (HourIsOver) block; see the sketch below. This would mean returning a stale result until you have fetched a new one.
Update your joke using a timer.
Make 'get' always return the last known joke, and have UpdateJokeAsync update it.
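A rough sketch of the first option, assuming a slightly stale joke is acceptable (the refreshTask field and UpdateJokeAsync are illustrative, building on the pseudo-code from the question):

string jokeOfTheHour;
Task refreshTask;

public string GiveJokeOfTheHour()
{
    if (HourIsOver && (refreshTask == null || refreshTask.IsCompleted))
    {
        // Start the refresh without awaiting it; callers keep getting the
        // cached joke until the new one has arrived.
        refreshTask = UpdateJokeAsync();
    }
    return jokeOfTheHour;
}

async Task UpdateJokeAsync()
{
    jokeOfTheHour = await thirdPartyLibrary.GetNewJoke();
}

Note that this sketch ignores thread safety; in a real web back end the refresh kick-off would need some synchronization.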

When running unit tests on object(s) whose purpose is to track various lengths of elapsed time, is there any way to speed up the process?

Longform Question:
When running unit tests on object(s) whose purpose is to track various lengths of elapsed time, is there any way to speed up the process rather than having to sit through it? In essence, if there’s a unit test that would take sixty or more seconds to complete, is there a way to simulate that test in one or two seconds? I don’t want something that will cheat the test, as I still want the same comparable, accurate results, just without the minute of waiting before I get them. I guess you could say I’m asking if anyone knows how to implement a form of time warp.
Background Info:
I’m currently working with an object that can count up or down, and then does an action when the desired time has elapsed. All of my tests pass, so I’m completely fine on that front. My problem is that the tests require various lengths of time to pass for the tests to be completely thorough. This isn’t a problem for short tests, say five seconds, but if I wish to test longer lengths of time, say sixty seconds or longer, I have to wait that long before I get my result.
I’m using longer lengths of time on some tests to see how accurate the timing is, and if I’ve made sure the logic is correct so rollover isn’t an issue. I’ve essentially found that, while a short duration of time is fine for the majority of the tests, there are a few that have to be longer.
I’ve been googling and regardless of what search terms I’ve used, I can’t seem to find an answer to a question such as this. The only ones that seem to pop up are "getting up to speed with unit tests" and others of that nature. I’m currently using the MSTestv2 framework that comes with VS2017 if that helps.
Any help is greatly appreciated!
Edit:
Thanks for the responses! I appreciate the info I've been given so far and it's nice to get a fresh perspective on how I could tackle the issue. If anyone else has anything they'd like to / want to add, I'm all ears!
In 1998, John Carmack wrote:
If you don't consider time an input value, think about it until you do -- it is an important concept
The basic idea here being that your logic is going to take time as an input, and your boundary is going to have an element that can integrate with the clock.
In C#, the result is probably going to look like ports and adapters; you have a port (interface) that describes how you want to interact with the clock, and an adapter (implementation) that implements the interface and reads times off of the clock that you will use in production.
In your unit tests, you replace the production adapter with an implementation that you control.
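For instance, a port/adapter pair for the clock might be as small as this (IClock, SystemClock and FakeClock are names invented for the sketch, not from any framework):

// The port: how the rest of the code asks what time it is.
public interface IClock
{
    DateTime UtcNow { get; }
}

// Production adapter: trivially thin, just reads the real clock.
public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Test double: the test decides what time it is, and can "warp" forward.
public sealed class FakeClock : IClock
{
    public DateTime UtcNow { get; private set; } = new DateTime(2020, 1, 1, 0, 0, 0, DateTimeKind.Utc);

    public void Advance(TimeSpan delta) => UtcNow += delta;
}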
Key idea:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies
Your adapter implementation should be so simple (by design) that you can just look at it and evaluate its correctness. No logic, no data transformations, just the simplest thing that could possibly insulate you from the boundary.
Note that this might be a significant departure from your current design. That's OK, and part of the point of test driven design; the difficulties in testing are supposed to help you recognize the separable responsibilities in your code, and create the correct boundaries around them.
Cory Benfield's talk on building protocol libraries the right way describes this approach in the context of reading data from a socket; read data from IO, and just copy the bytes as is into a state machine that performs all of the logic.
So your "clock" might actually just be a stream/sequence of timestamp events, and your unit tests then document "given this sequence of timestamps, then that is the expected behavior of the functional core".
The slower tests, that actually interact with the real clock, are moved from the unit test suite to the integration suite. They still have value, and you still want them to pass, but you don't want the delays they produce to interrupt the development/refactoring workflow.
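To make that concrete for your MSTest setup, a test against a fake clock like the one sketched above might look something like this, where Countdown and Poll stand in for your own object and its API:

[TestClass]
public class CountdownTests
{
    [TestMethod]
    public void Countdown_FiresAction_AfterSixtySeconds()
    {
        var clock = new FakeClock();
        bool fired = false;

        // Countdown and Poll are hypothetical stand-ins for the object under test.
        var countdown = new Countdown(clock, TimeSpan.FromSeconds(60), () => fired = true);

        countdown.Poll();                        // no time has passed yet
        Assert.IsFalse(fired);

        clock.Advance(TimeSpan.FromSeconds(60)); // "wait" a minute instantly
        countdown.Poll();
        Assert.IsTrue(fired);
    }
}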

WCF PerCall inside-operation cross-caller event handling

With my WCF service, I am solving an issue that has both performance and design effects.
The service is a stateless RESTful PerCall service that does a lot of simple and common things, which all work fine and dandy.
But, there is one operation, that has started to scare me a lot recently, so there is the problem:
Clients make parameterized calls to the operation, and computing the result takes a lot of time. But the result of a call with identical parameters will always be the same until data on the server change. And clients make an awful LOT of calls with exactly the same parameters. The server, however, cannot predict which parameters the users will ask for, so sadly the results cannot be precomputed.
So I came up with caching layer and store the result object as a key-value pair, where key represents the parameters which lead to this result. And if the relevant data change, I just flush the cache. Still simple and no problems with this.
A client calls the service, the server receives the call, checks whether the result is already cached, and returns it if so. But if the result is not cached yet, that call starts the computation. The computation may take up to 2 minutes (average 10-15 seconds) to finish, and in that time other clients may come along; because the result is still not in the cache, each of them would start their own computation. Which is NOT what we really want, so there is a flag indicating that someone has already started the computation with these parameters. This is the place in the code where the other callers stop and wait for the computation to finish and be inserted into the cache, from where each of the invoked instances grabs the result, returns it to the client and disposes.
And this is the part, which I am really struggling with.
Right now, my solution looks something like this (before you read further, I want to warn you that my experience is nowhere near a decent level and I am still a big noob in all things C#, WCF and related... no need to tell me I'm a noob, because I am fully aware of that):
Stopwatch sw = new Stopwatch();
sw.Start();
while (true)
{
    // Poll until another caller's result lands in the cache, or we give up.
    if (Cache.Contains(parameters) || sw.Elapsed > threshold)
        break;
    Thread.Sleep(100);
}
// ...do relevant stuff here
As you see, there are more problems with this solution:
Having the loop, the check and all this stuff not only feels ugly; with many clients waiting this way, resource usage tends to jump up.
If the operation fails (the initial caller's computation fails to deliver within the threshold), I do not really know which client gets to be next up trying the computation, or how, or even whether I should run the operation again or return a fault to the client...
EDIT: This is not related to synchronization; I am aware of the need for locking in some parts of my application, so my concerns are not synchronization-related.
What should I do when the relevant server-side data change while the invoked code is still performing the computation (which would make that result a wrong one)? ... Moreover, this has some other horrible effects on the application, but yeah, I am getting to the question here:
So, as usual, I did my homework and googled around before asking, but did not succeed in finding guidance that I could either understand or that would suit my issues and domain.
I have a strong feeling that I need to introduce some kind of (static?) event-based and/or asynchronous class (call it a layer if you will) that does some tricks and organizes and manages all these things in some kind of register-to-me-and-I-will-give-you-a-poke / poke-all-registered-threads manner. But despite being able (to a certain extent) to use the newer tasks, the TPL, and async-await, I not only have very limited experience in this field; more sadly, I really need help understanding how it could come together with events (or do I even need them?). When I try and run little things in a test console application, I might succeed, but bringing it into the bigger environment of my WCF application, I struggle to get a clue.
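(To illustrate the kind of shape I have in mind, not a working solution: if the cache held one Lazy<Task<Result>> per parameter key, the first caller would start the computation and everyone else would simply await the same task. Result and ComputeAsync below are made-up names.)

// Hypothetical shape only: one Lazy<Task<Result>> per parameter key, so the first
// caller starts the computation and later callers await the same task.
private static readonly ConcurrentDictionary<string, Lazy<Task<Result>>> cache =
    new ConcurrentDictionary<string, Lazy<Task<Result>>>();

public static Task<Result> GetResultAsync(string parameters)
{
    var lazy = cache.GetOrAdd(
        parameters,
        key => new Lazy<Task<Result>>(() => ComputeAsync(key)));
    return lazy.Value;
}

static async Task<Result> ComputeAsync(string parameters)
{
    // Placeholder for the real, expensive computation.
    await Task.Delay(TimeSpan.FromSeconds(15));
    return new Result();
}

// Flushing when the relevant server-side data change stays as simple as before.
public static void Flush() => cache.Clear();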
So, I will gladly welcome any kind of relevant thoughts, advice, guidance, links, code and criticism touching on my topic.
I am aware that it might be confusing, and I will do my best to clear up all misunderstandings and tricky parts; just ask me to do that.
Thanks for help!

How do I add high level tests to lock down behavior before refactoring?

I have a moderately sized, highly tangled, messy, big-ball-of-mud project that I want to refactor. It is about 50k lines of code and I am the only maintainer. One class is 10k LOC long with line widths of 400 characters. Features are still being added, albeit slowly.

I feel good about the refactoring part. Ripping apart code is very comfortable to me. However, getting a set of tests around the application to ensure that I don't break it seems daunting. Huge object graphs and mocking out all the databases will be real buggers.

I probably could get away with doing massive refactoring without breaking things horribly. But prudence dictates some level of tests to ensure at least some level of safety. At the same time, I don't want to spend any more than a minimal amount of time getting a set of "behavior lock-down" tests around the code. I fully intend to add a full set of unit tests once I get things a bit decoupled. A rewrite is a non-starter.
Does anyone have any pointers or techniques? Has anyone done this sort of thing before?
Mindset
Generally it can be tough going trying to write automated tests (even high-level ones) for applications that were not built with testability in mind.
The best thing is going to be to make sure you are disciplined in writing tests as you refactor (which it sounds like you are intending to be). This will slowly turn that ball of code into an elegant dancing unicorn of well-encapsulated, testable classes.
Suggestions
Start by creating some manual high-level tests (e.g. the user goes to page one, clicks on the red button, then a textbox appears...) to have a starting point. Depending on the technology the app is built in, there are a few frameworks out there that can help automate these high-level (often UI-driven) tests:
For web apps Selenium is a great choice; for WPF apps you can use the UI Automation framework; for other applications, while it's a bit rudimentary, AutoIt can be a lifesaver.
Here's how I do it in C++ (and C).
Create a directory to house the tests, cd to this directory
Create a directory to house mock objects, say mock-objs.
Create a makefile to compile all object files of interest.
Add necessary include directories or mock .h files to make all object files compile.
Congratulate yourself you are 90% done.
Add a test harness of your choice (e.g. cppunit, atf-tests, google test ..).
Add a null test case - just start, log and declare success.
Add necessary libraries and/or mock .c/.cpp files until link is successful and the very first test passes. Note: all functions in these .c/.cpp mock files should contain only a primitive to fail the test when called.
Congratulate yourself you are 99% done.
Add a primitive event scheduler: say, just a list of callbacks, so that you can post a request and receive the response from an event callback.
Add a primitive timer: say, a timer wheel, or even a timer list if you need just a few timers.
Write an advance-time function: (a) process all queued events, (b) increment the current time to the next tick, (c) expire all timers waiting for this tick, (d) process all queued events again until no events are left, (e) if the end of the time advance has not been reached, go to step (b). A sketch of this loop follows the list below.
Congratulate yourself: now you can add tests with relative ease: add a test case, modify mock functions as required, repeat.
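To make that concrete, here is a rough sketch of such a scheduler, written in C# to match the rest of this page rather than C++; the class and member names are invented for illustration:

public sealed class TestScheduler
{
    private readonly Queue<Action> events = new Queue<Action>();
    private readonly SortedDictionary<long, List<Action>> timers = new SortedDictionary<long, List<Action>>();

    public long CurrentTick { get; private set; }

    // Post an event to be processed on the next drain.
    public void Post(Action callback) => events.Enqueue(callback);

    // Arrange for a callback to fire a given number of ticks from now.
    public void StartTimer(long ticksFromNow, Action callback)
    {
        long due = CurrentTick + ticksFromNow;
        if (!timers.TryGetValue(due, out List<Action> list))
            timers[due] = list = new List<Action>();
        list.Add(callback);
    }

    // The advance-time function from the list above.
    public void AdvanceTo(long targetTick)
    {
        DrainEvents();                                   // (a) process all queued events
        while (CurrentTick < targetTick)                 // (e) repeat until the target is reached
        {
            CurrentTick++;                               // (b) step to the next tick
            if (timers.TryGetValue(CurrentTick, out List<Action> due))
            {
                timers.Remove(CurrentTick);
                foreach (Action timer in due) timer();   // (c) expire timers for this tick
            }
            DrainEvents();                               // (d) process any events they queued
        }
    }

    private void DrainEvents()
    {
        while (events.Count > 0) events.Dequeue()();
    }
}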

Unit testing a timer based application?

I am currently writing a simple, timer based mini app in C# that performs an action n times every k seconds.
I am trying to adopt a test driven development style, so my goal is to unit test all parts of the app.
So, my question is: Is there a good way to unit test a timer based class?
The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute, since they must wait so and so long for the desired actions to happen.
Especially if one wants realistic data (seconds), instead of using the minimal time resolution allowed by the framework (1 ms?).
I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time.
What I have done is to mock the timer, and also the current system time, so that my events could be triggered immediately, but as far as the code under test was concerned the elapsed time was seconds.
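A minimal sketch of what that mocked timer might look like, assuming a hand-rolled ITimer abstraction (the interface and its members are illustrative, not from any framework):

public interface ITimer
{
    event EventHandler Tick;
    void Start(TimeSpan interval);
    void Stop();
}

// Test double: nothing actually runs on a schedule; the test decides when it "fires".
public sealed class FakeTimer : ITimer
{
    public event EventHandler Tick;

    public void Start(TimeSpan interval) { }
    public void Stop() { }

    // Called by the test to simulate an elapsed interval immediately.
    public void FireTick() => Tick?.Invoke(this, EventArgs.Empty);
}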
Len Holgate has a series of 20 articles on testing timer based code.
I think what I would do in this case is test the code that actually executes when the timer ticks, rather than the entire sequence. What you really need to decide is whether it is worthwhile for you to test the actual behaviour of the application (for example, if what happens after every tick changes drastically from one tick to another), or whether it is sufficient (that is to say, the action is the same every time) to just test your logic.
Since the timer's behaviour is guaranteed never to change, it's either going to work properly (i.e., you've configured it right) or not; it seems to me to be wasted effort to include that in your test if you don't actually need to.
I agree with Danny insofar as it probably makes sense from a unit-testing perspective to simply forget about the timer mechanism and just verify that the action itself works as expected. However, I would also say that I disagree that it's wasted effort to include the configuration of the timer in an automated test suite of some kind. There are a lot of edge cases when it comes to working with timing applications, and it's very easy to create a false sense of security by only testing the things that are easy to test.
I would recommend having a suite of tests that runs the timer as well as the real action. This suite will probably take a while to run and would likely not be something you would run all the time on your local machine. But setting these types of things up on a nightly automated build can really help root out bugs before they become too hard to find and fix.
So in short my answer to your question is don't worry about writing a few tests that do take a long time to run. Unit test what you can and make that test suite run fast and often but make sure to supplement that with integration tests that run less frequently but cover more of the application and its configuration.
