When should I use Debug.Assert over Code Contracts, or vice versa? I want to check a precondition for a method and I am confused about choosing one over the other. I have unit tests where I want to test failure scenarios and expect exceptions.
Is it good practice to use Debug.Assert and a Code Contract on the same method? If so, in what order should the code be written?
Debug.Assert(parameter != null);
Contract.Requires<ArgumentNullException>(parameter != null, "parameter");
or
Contract.Requires<ArgumentNullException>(parameter != null, "parameter");
Debug.Assert(parameter != null);
Is there any rationale behind it?
These are different things. A Debug.Assert is only executed when the code is compiled as debug, and will therefore only check/assert in debug builds. The idea is to use it for sanity checks in code you are developing. Code Contracts can be used in either debug or release. They assure that the pre- and post-conditions of methods comply with the method's expectations (meet the contract). There is also a testing framework that provides similar functionality, designed for checking test compliance.
Use Debug.Assert when you want ensure that certain things are as you expect when developing the code (and in later maintenance development).
Use code contracts when you want to assure that conditions are true in both debug and release. Contracts also allow certain forms of static analysis that can be helpful in verifying that your program is "correct".
Use the Testing framework assertions when creating unit tests.
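The first two can be contrasted in one method. This is a sketch with illustrative names; note that without the Code Contracts binary rewriter, the Contract calls compile but are not enforced:

```csharp
using System.Diagnostics;
using System.Diagnostics.Contracts;

public static class PriceCalculator
{
    // Contract: checked in debug AND release builds (when contract
    // runtime checking is enabled) and visible to the static checker.
    public static decimal ApplyDiscount(decimal price, decimal rate)
    {
        Contract.Requires(rate >= 0m && rate <= 1m);
        Contract.Ensures(Contract.Result<decimal>() <= price);

        decimal result = price - price * rate;

        // Sanity check: compiled only into DEBUG builds.
        Debug.Assert(result >= 0m, "Discounted price went negative");
        return result;
    }
}
```

A test-framework assertion would then live in a unit test that calls `ApplyDiscount` and asserts on the returned value.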
Personally, I wouldn't use both Debug.Assert AND Code Contracts to enforce preconditions in newly written code - IMO Code Contracts supersede Debug.Assert, as they offer a more comprehensive suite of checks, not to mention the benefit gained from the static checking which can be performed before the code ever runs. Maintaining duplicate precondition checks in both Debug.Assert and Contracts would be cumbersome.
Rationale:
You don't need to re-code any legacy preconditions you may have written as Debug.Assert or if/throw code - you can keep the existing precondition checks and terminate them with Contract.EndContractBlock()
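For instance, an existing if/throw precondition can be promoted to a contract without rewriting it (a sketch; the class and field names are illustrative):

```csharp
using System;
using System.Diagnostics.Contracts;

public class Repository
{
    private readonly string _connectionString;

    public Repository(string connectionString)
    {
        // Legacy precondition code, kept exactly as it was...
        if (connectionString == null)
            throw new ArgumentNullException(nameof(connectionString));

        // ...and terminated so the contract tools treat the preceding
        // if/throw block as this constructor's precondition.
        Contract.EndContractBlock();

        _connectionString = connectionString;
    }
}
```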
You can get the same unchecked 'release mode' behaviour that System.Diagnostics.Debug has when built without /d:DEBUG if you build with contract runtime checking set to None (see section 6.2.1 in the docs).
Contracts allow a developer to be more expressive in code about why an invalid state has been detected - e.g. whether it was directly caused by an out-of-band parameter (Contract.Requires). Beyond that, Contract.Assert or Contract.Assume can check general state, the "guaranteed correctness" of state on leaving a method can be expressed using Contract.Ensures, and invariants express state that must hold at all times.
And best of all, static checking can enforce these contracts as you build your code - this way you have the chance to pick up a bug through a design-time or compile-time warning instead of having to wait for run time. Contract checks can be added to your continuous integration to look for non-compliance.
One caveat: if you are going to write unit tests which deliberately violate contracts, you may need to deal with ContractException - Jon Skeet explains this well here. E.g. wire up a Contract.ContractFailed handler in your test setup to a handler which calls SetHandled and then throws a public exception which you can catch and assert on in your unit tests.
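A sketch of such a test-setup hook, following that approach (the exception type and class names here are illustrative):

```csharp
using System;
using System.Diagnostics.Contracts;

// Public exception type the unit tests can catch and assert on.
public class ContractViolationException : Exception
{
    public ContractViolationException(string message) : base(message) { }
}

public static class ContractTestSetup
{
    // Call once from the test fixture's setup.
    public static void Install()
    {
        Contract.ContractFailed += (sender, e) =>
        {
            e.SetHandled(); // suppress the default assert/failure behaviour
            throw new ContractViolationException(e.Message);
        };
    }
}
```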
Related
When writing tests, is it acceptable (or should I) use functionality from elsewhere in the application to assist in a test?
As an example, the application I am writing tests for uses the CQRS pattern. A lot of the existing tests make use of these commands, queries and handlers when performing the arrange part of a test. They all have their own test cases, so I should be OK to accept that they function as expected.
I am curious, though, whether this is best practice, or whether I should be performing the setup in the arrange part of a test manually (without using other application functionality). If one of the commands, queries or handlers breaks, then my 'unrelated' test breaks too. Is this good or bad?
When writing tests is it acceptable (or should I) to use functionality from elsewhere in the application to assist in a test.
There are absolutely circumstances where using functionality from elsewhere is going to have good trade-offs.
In my experience, it is useful to think about an automated check as consisting of two parts - a measurement that produces a value, and a validation that evaluates whether that value satisfies some specification.
Measurement actual = measurement(args)
assert specification.isSatisfiedBy(actual)
In the specification part, re-using code is commonplace. Consider
String actual = measurement(args)
assert specification.expected.equals(actual)
So here, we have introduced a dependency on String::equals, and that's fine, we have lots and lots of confidence that String::equals is correct, thanks to the robust distributed test program of everybody in the world using it.
Foo actual = measurement(args)
assert specification.expected.equals(actual)
Same idea here, except that instead of some general purpose type we are using our own bespoke equality check. If the bespoke equality check is well tested, then you can be confident that any assertion failures indicate a problem in the measurement. (If not, well then at least the check signals that measurement and specification are in disagreement, and you can investigate why.)
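A C# rendering of that bespoke check might look like this (Money is an illustrative type, not from the question):

```csharp
using System;

// Illustrative value type with its own bespoke, well-tested equality.
public readonly struct Money : IEquatable<Money>
{
    public string Currency { get; }
    public decimal Amount { get; }

    public Money(string currency, decimal amount)
    {
        Currency = currency;
        Amount = amount;
    }

    public bool Equals(Money other) =>
        Currency == other.Currency && Amount == other.Amount;
}

// The measurement produces a value; the specification validates it:
//   Money actual = invoice.Total();                    // measurement
//   Assert.True(new Money("EUR", 42m).Equals(actual)); // specification
```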
Sometimes, you'll want to have an explicit dependency on other parts of the system, because that's a better description of the actual requirements. For example, compare
int actual = foo("a")
assert 7 == actual
with
assert 7 == bar(0) // This check might be in a different test
assert bar(0) == foo("a")
At a fixed point in time, these spellings are essentially equivalent; but for tests that are expected to evaluate many generations of an evolving system, the verification is somewhat different:
// Future foo should return the same thing as today's foo
assert 7 == foo("a")
// Future foo should return the same thing as future bar
assert bar(0) == foo("a")
Within measurements, the trade-offs are a bit different, but because you mentioned CQRS I'll offer one specific observation: measurements are about reads.
(Sometimes what we read is "how many times did we crash?" or "what messages did we send?" but, explicit or implicit, we're evaluating the information that comes out of our system).
That means that including a read invocation in your measurement is going to be common, even in designs where you have decoupled reads from writes.
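To make that concrete, here is a minimal sketch (all type names are hypothetical, not from the question's codebase) of a test that arranges state through a command handler and measures through a query handler:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical, minimal CQRS pieces - for illustration only.
public class CreateOrderCommand
{
    public string ProductName;
    public CreateOrderCommand(string productName) { ProductName = productName; }
}

public class CreateOrderHandler
{
    private readonly Dictionary<Guid, string> _store;
    public CreateOrderHandler(Dictionary<Guid, string> store) { _store = store; }

    // The write side, reused in the test's arrange step.
    public Guid Handle(CreateOrderCommand cmd)
    {
        var id = Guid.NewGuid();
        _store[id] = cmd.ProductName;
        return id;
    }
}

public class OrderSummaryHandler
{
    private readonly Dictionary<Guid, string> _store;
    public OrderSummaryHandler(Dictionary<Guid, string> store) { _store = store; }

    // The read side: this is what the test's measurement invokes.
    public string Handle(Guid orderId) => _store[orderId];
}

// Test sketch:
//   var store = new Dictionary<Guid, string>();
//   var id = new CreateOrderHandler(store).Handle(new CreateOrderCommand("widget"));
//   Assert.Equal("widget", new OrderSummaryHandler(store).Handle(id));
```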
A lot of the existing tests make use of these commands, queries and handlers when performing the arrange part of a test.
Yup, and the answer is the same - we're still talking about trade-offs: does the test detect the problems you want it to? How expensive is it to track down a fault that was detected? How common are false positives (where the "fault" is in the test itself, not the test subject)? How much future work are you signing up for just to "maintain" the test during its useful lifetime (which is related, in part, to how "stable" the dependencies are)?
I have a lot of classes similar to the following one:
public class Foo
{
    private readonly Ba _ba;

    public Foo(Ba ba)
    {
        if (ba is null) throw new ArgumentNullException(nameof(ba));
        _ba = ba;
    }
}
In other classes' internals, I call this constructor of Foo, but since passing null would be unintended, ba is never null in any of those calls.
I wrote a lot of test methods for the rest of the framework, but I am unable to reach 100 % code coverage because the exception in the above code snippet is never thrown.
I see the following alternatives:
Remove the null check: this would work for the current project implementation, but if I ever accidentally add a call Foo(null), debugging will be more difficult.
Decorate the constructor with [ExcludeFromCodeCoverage]: this would work for the current Foo(Ba) implementation, but if I ever change the implementation, new code paths in the constructor could develop and accidentally go untested.
How would you solve the dilemma?
Notes
The code example is written in C#, but the question might address a general unit testing/exception handling problem.
C# 8 might solve this problem by introducing non-nullable reference types, but I am searching for a good solution until it is released as stable.
You have missed the most important alternative: Don't see it as a desirable goal to achieve 100% code coverage.
The robustness checks in your code are, strictly speaking, not testable in a sensible way. This happens in various other parts of code as well. It often happens in switch statements where all possible cases are explicitly covered and an extra default case is added just to throw an exception or otherwise handle this 'impossible' situation. Or think of assertion statements added to the code: since assertions should never fail, you will, strictly speaking, never be able to cover the else branch hidden inside the assertion statement - how do you test that the expression inside the assertion is actually able to detect the problem you want it to?
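For example, such an 'impossible' default branch might look like this (the enum and method are illustrative):

```csharp
using System;

public enum DayKind { Workday, Weekend }

public static class Schedule
{
    public static string Describe(DayKind kind)
    {
        switch (kind)
        {
            case DayKind.Workday: return "work";
            case DayKind.Weekend: return "rest";
            default:
                // Defensive branch for 'impossible' values (e.g. after a
                // future enum extension); unreachable by any sensible test.
                throw new InvalidOperationException($"Unknown kind: {kind}");
        }
    }
}
```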
Removing such robustness code and assertions is not a good idea, because they also protect you from undesired side effects of future changes. Excluding code from coverage analysis might be acceptable for the examples you have shown, but in most of the cases I have mentioned it would not be a good option. In the end you will have to make an informed decision (by looking at the coverage report in detail, not only the overall percentage) which statements/branches etc. of your code really need to be covered and which not.
And, as a final note, be aware that a high code coverage is not necessarily an indication that your test suite has a high quality. Your test suite has a high quality if it will detect the bugs in the code that could likely exist. You can have a test suite with 100% coverage that will not detect any of the potential bugs.
Please clarify: is Code Contracts similar to FxCop and StyleCop?
As per the online references, we need to add code implementing the contract conditions inside the functions of the existing code.
public void Initialize(string name, int id)
{
    Contract.Requires(!string.IsNullOrEmpty(name));
    Contract.Requires(id > 0);
    Contract.Ensures(Name == name);
    // Do some work
}
Usually with FxCop, the code we want to check is in one DLL and the class library containing the rules to check against is in a separate DLL.
Likewise, can we create a separate class library for Code Contracts to impose rules on the existing code?
Please confirm.
Disclaimer: you'd be better off taking their current docs, reading them through, writing down the features and then comparing them. What I wrote below is some facts I remembered from a long time ago about their core functionality, and I can't guarantee that they are not outdated and now wrong. For example, someone could write some complex and heavy rules for FxCop that behave as Contracts do. This is why I'm marking this as community wiki. Please correct me if I'm wrong anywhere.
No, they are not similar, although they share a common goal: helping you find bugs.
FxCop is a "static analyzer", which inspects your code and tries to find "bad patterns". You will not see any FxCop rules/effects during runtime. FxCop has a set of "rules" that will be used during inspection and it reports to you whenever it finds a rule to be broken. Rules can be very easy and nitpicking like you must initialize every variable or you must name classes with uppercase or complex ones like you shouldn't have more than one loop in a method. Some rules are available by the standard installation, and you can expand the ruleset with your own rules.
CodeContracts is two-sided. At the most basic level, it is a set of helper methods, like throw if argument 'foo' is null. At runtime, when someone passes a null, it will throw. Just that simple. However, if you use those helper methods correctly and widely in your code, you will be able to run an additional static analyzer. It will scan your code, find all usages of those helper methods, and will try to automatically detect any places where their contracts are not satisfied. So, with the "argument is null" example, it will try to find all usages of that function, check who calls it with what args, it will try to deduce (prove) if that arg can be null at all anytime, and will warn you if it finds such case. There are more types of such validators other than just not-null, but you can't add/write your own. I mean, you could add more such helper validators, but the static analyzer wouldn't pick them up (it's too hard to write a general theorem prover for any rule).
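To illustrate the kind of reasoning the static checker attempts (a sketch; the names are made up):

```csharp
using System.Diagnostics.Contracts;

public static class Parser
{
    public static int CountLines(string text)
    {
        // The "helper method" contract: throws at runtime when enabled,
        // and is visible to the static analyzer at build time.
        Contract.Requires(text != null);
        return text.Split('\n').Length;
    }

    public static void Caller(string maybeNull)
    {
        // The static checker tries to prove text != null at this call
        // site; with no guard here it warns that the Requires precondition
        // may be violated.
        Parser.CountLines(maybeNull);
    }
}
```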
CodeContracts is more powerful in its analyses than FxCop, but limited in diversity and scope. CodeContracts cannot check the structure of the code: it will not check the number of loops, code complexity, names of methods, code hierarchy, etc. It can only attempt to prove/disprove some contracts (runtime requirements) of some methods/interfaces. FxCop on the other hand can inspect your code, style, structure, etc, but it will not "prove" or "deduce" anything - it will just check for some bad patterns defined by rules.
While FxCop is used to verify code style and typical performance issues,
Code Contracts influences your code design, so it aims at higher-level goals. It is a .NET attempt to implement the contract programming methodology used in the Eiffel language. The methodology says that every type will behave correctly (fulfilling its postconditions and invariants) only if it receives input that satisfies its required preconditions.
You describe your types' preconditions, invariants and postconditions using the library's helper methods and attributes (Contract.Requires, etc.), and the Code Contracts static analyzer will be able to detect their failures at compile time.
(The last time I looked at it, the tool was rather slow and hard to use. It seems it hadn't been completed by the Microsoft Research team. Fortunately, a few days ago a new version was released with bug fixes for async/await as well as VS2015 support.)
I'm using Code Contracts in my C# application, together with unit tests. When I ask for the code coverage results of the unit tests, lines containing code contracts are reported as "not covered".
Let's take for example a method that has two parameters:
void MyMethod(object param1, object param2)
{
    Contract.Requires<ArgumentNullException>(param1 != null);
    Contract.Requires<ArgumentNullException>(param2 != null);
    // Other stuff covered explicitly by unit tests
}
Since the contracts fail if the conditions aren't met, shouldn't the code coverage tool report that the two parameters are covered?
To my understanding, code covered by contracts doesn't need to be unit tested again. Is this correct?
So, you are technically correct that Code Contracts will throw an ArgumentNullException if any one of your arguments is null.
However, it would still be a good idea to unit test your preconditions. It's not so much to ensure that Code Contracts is working correctly—it does! But it's to ensure that you actually specified the right contract!
I am saying this from personal experience. I was writing a blog post on Code Contracts and unit testing. And while writing the sample code, I stated a precondition on a method. I ran the unit tests and a few tests failed. I was a bit taken aback. What happened? Well, silly me, I reversed the Boolean condition I wanted to enforce. Oops. Thanks to unit tests, though, this was caught and easily corrected.
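A sketch of such a precondition test, shown here with a plain if/throw guard so it runs without the contract rewriter (all names are hypothetical):

```csharp
using System;

public class Calculator
{
    // Hypothetical method; the guard plays the role of Contract.Requires.
    public int Half(int value)
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value));
        return value / 2;
    }
}

public static class PreconditionTest
{
    // With a test framework this would be
    // Assert.Throws<ArgumentOutOfRangeException>(() => ...).
    public static bool GuardFires()
    {
        try { new Calculator().Half(-1); return false; }
        catch (ArgumentOutOfRangeException) { return true; }
    }
}
```

If the condition had been accidentally reversed, this test would fail immediately, which is exactly the kind of mistake described above.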
Also, Code Contracts is not only about guaranteeing that parameters don't come in null (in your example). Code Contracts also function as a form of communication to clients of your code. They tell consumers of your library that, if they meet the stated preconditions, you guarantee that the method will execute successfully and satisfy any object invariants and/or stated method postconditions.
While Code Contracts can go a long way in helping you to write code that doesn't fail, it's not a silver bullet. It can't catch all logic errors (Code Contracts won't help you avoid infinite loops, for example). So, unit tests are still very much an important part of the development process even when using Code Contracts.
I have seen a couple of posts regarding the usage of Debug.Assert in C#.
But I still have one doubt; it may be a repeat, but I need to ask.
Is there a strict rule that Debug.Assert should be used only for checking members of a class, or for checking parameters to a public method?
Or can I use Debug.Assert wherever I want, to check whichever condition?
Is there a strict rule that Debug.Assert should be used only for checking members of a class, or for checking parameters to a public method?
Do not use Debug.Assert() to check parameters to a public method. Parameters should be checked in both debug and release builds.
You should use an explicit if followed by throwing ArgumentNullException, ArgumentOutOfRangeException or ArgumentException for invalid parameters.
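A sketch of such guard clauses (the class and method here are illustrative):

```csharp
using System;

public class Account
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        // Explicit parameter check: runs in debug AND release builds.
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount),
                "Deposit amount must be positive.");

        Balance += amount;
    }
}
```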
Alternatively, use Code Contracts to express the parameter preconditions using Contract.Requires().
For further reference, see this thread: When should I use Debug.Assert()?
Other than that, you can use Debug.Assert() wherever you want, but be aware that it might take a little more setting up for ASP.NET: Is it worth using Debug.Assert in ASP.NET?
Also see here: http://gregbeech.com/blog/how-to-integrate-debug-assert-with-your-asp-net-web-application
You can use it wherever you want. Just be aware that it's a debug check, so it is evaluated only at development time while you test. If you need your program to actually change behaviour based on a condition, you still need additional ifs.
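The distinction can be sketched like this (the class is illustrative):

```csharp
using System.Collections.Generic;
using System.Diagnostics;

public class OrderProcessor
{
    // The assert documents a developer expectation and vanishes in release
    // builds; the if implements actual behaviour and is always present.
    public int Process(List<string> items)
    {
        Debug.Assert(items != null, "Caller should never pass null");

        if (items == null || items.Count == 0)
            return 0; // real behavioural branch, present in all builds

        return items.Count;
    }
}
```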
Read the coding guidelines over at Microsoft, and try to use tools like the static code analysis in Visual Studio (formerly FxCop) and StyleCop to have an automated way of checking your code quality and catching common mistakes.