I would like a way to get warnings when an object reference could potentially throw a Null Reference Exception, so that I can write defensive code for these.
I have looked at Resharper, but didn't see anything there that accomplishes this.
Code Contracts is probably a non-starter; the application is quite large, and it's written in .NET 3.5, before Code Contracts became officially available.
ReSharper does in fact accomplish something like this. Possible NullReferenceExceptions are highlighted in the IDE in blue, with tooltips when you hover over them.
ReSharper then keeps track of potential errors and warnings in its own inspection results window (separate from Visual Studio's compiler errors and warnings).
Generally speaking, unless you have explicitly initialized an object, any reference to it can potentially throw a NullReferenceException, at least as far as the compiler is concerned.
In order for an algorithm to check whether a reference to the object can potentially be null, it would have to traverse every possible path that your program can take, and that includes paths in any external libraries that you may be using. Even for the simplest of programs, such an algorithm would kill the performance of your compiler.
I'm against the idea of blindly defending against null for each field available in the code and inside each method.
The following considerations help me decide where to check for null values:
1- Who will be invoking your methods?
If a method is private and you control how it is accessed, I don't see how it makes sense to guard against null unless the method's logic specifically expects null values.
If a method is exposed to the public (such as an API), then of course null checks should be a huge concern; see the sketch after this list.
2- Software Design:
Imagine you are calling method1(fromAnimalToString(animal)); and for some reason fromAnimalToString() never returns null (though it might return an empty string instead).
In that case, it wouldn't make sense to check animal != null in method1()'s body.
3- Testing:
In software engineering, it's almost impossible to test all possible scenarios that can ever execute. However, test normal and alternative scenarios and make sure the flow is as expected.
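As a minimal, hypothetical sketch of point 1 (the names are made up, not from any particular codebase), a guard clause at a public boundary might look like this:
public void RegisterAnimal(Animal animal)
{
    // Fail fast at the public boundary; everything below can assume animal is not null.
    if (animal == null)
        throw new ArgumentNullException(nameof(animal));

    SaveInternal(animal);   // private helper, no further null checks needed
}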
Related
One of my classes has a horrible requirement: resolving one of its fields requires a service to be brought in by dependency injection, which is obviously not possible in a model within the standard Equals() and GetHashCode() calls. (Yes, I'd prefer it didn't, bad practice and so on, but I'm kind of stuck with it as a business requirement, unfortunately.)
I can solve this by creating a Comparer class using IEqualityComparer<T>, but this leaves me with the default Object.Equals() and GetHashCode() being implemented, which may give misleading results when called.
As the presence of the IEqualityComparer is kind of 'hidden' unless you know about it, is it reasonable practice to override Equals() and GetHashCode() to throw an exception saying that comparisons should use the Comparer? (Maybe just an Assert so that it only dies in debug/tests.)
Throwing an exception like NotSupportedException is better than giving an incorrect answer, although since this is a class, arguably reference equality would suffice as the default, just using the external equality comparer for the custom functionality. But if that is going to cause confusion (in particular with people accidentally using the default API when they should be using the custom one); I wouldn't hesitate. The main problem you'll see is things like Contains checks blowing up, since classes aren't often used as dictionary keys.
As for only doing this in DEBUG builds... well, if it is wrong: it is wrong. If there's a scenario you aren't currently testing but that is used in prod, IMO it is better to become aware of that fact than to not. Although perhaps you might use an environment variable or similar to disable it, in case you can't conveniently deploy a fixed build at short notice.
If I'm comparing two mutable objects, I would expect reference equality to be used by default. For records or structs I would expect value equality. For immutable objects I would probably expect value equality, but it depends a bit more on the context.
So I would only throw exceptions or use Debug.Asserts if I was sure reference equality is never the correct thing to use. And in that case I would be extra careful to document and highlight this unexpected behavior.
I would prefer exceptions over a Debug.Assert, since testing is usually done on release builds. And you want to find and fix these kinds of problems, since they most likely indicate a programming bug. There is also Trace.Assert, but I would probably not recommend it since it will make things like automated testing more difficult.
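To make the trade-off concrete, here is a rough sketch of the arrangement discussed above; Order, ISomeInjectedService and ResolveKey are invented names standing in for the real model, injected service and resolution logic:
using System;
using System.Collections.Generic;

public interface ISomeInjectedService
{
    string ResolveKey(Order order);   // hypothetical service the comparison depends on
}

public class Order
{
    public int Id { get; set; }

    // Deliberately refuse the default equality so accidental use fails loudly.
    public override bool Equals(object obj)
    {
        throw new NotSupportedException("Use OrderEqualityComparer for comparing Order instances.");
    }

    public override int GetHashCode()
    {
        throw new NotSupportedException("Use OrderEqualityComparer for hashing Order instances.");
    }
}

public class OrderEqualityComparer : IEqualityComparer<Order>
{
    private readonly ISomeInjectedService _service;   // the DI dependency mentioned in the question

    public OrderEqualityComparer(ISomeInjectedService service)
    {
        _service = service;
    }

    public bool Equals(Order x, Order y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return _service.ResolveKey(x) == _service.ResolveKey(y);   // hypothetical resolution
    }

    public int GetHashCode(Order obj)
    {
        return _service.ResolveKey(obj).GetHashCode();
    }
}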
I've recently joined a new project, and our data access layer has a utility method that takes a list of objects and throws if any of them are null, which is normally called at the top of the method. It's pretty handy, except that ReSharper has no idea what it does and thus shows a bunch of "Possible NullReferenceException" warnings in the methods that use it. Is there any way to configure it to know that this method ensures that the objects passed to it aren't null?
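For reference, such a guard helper might look roughly like the hypothetical sketch below (ThrowIfAnyNull is an invented name). As far as I know, ReSharper's flow analysis cannot see through a params-array helper like this, whereas a single-argument guard annotated with JetBrains.Annotations' [ContractAnnotation("value:null => halt")] (or an equivalent external annotation) is something it can understand, so that may be a direction worth checking:
// Hypothetical shape of the utility method described in the question.
public static void ThrowIfAnyNull(params object[] values)
{
    foreach (var value in values)
    {
        if (value == null)
            throw new ArgumentNullException(nameof(values), "At least one of the supplied values is null.");
    }
}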
I have a lot of classes similar to the following one:
public class Foo
{
    private readonly Ba _ba;

    public Foo(Ba ba)
    {
        if (ba is null) throw new ArgumentNullException(nameof(ba));
        _ba = ba;
    }
}
In other classes' internals, I call this constructor of Foo, but since passing null would be unintended, ba is never null in any of those constructor calls.
I wrote a lot of test methods for the existing framework, but I am unable to reach 100% code coverage because the exception in the above code snippet is never thrown.
I see the following alternatives:
Remove the null check: This would work for the current project implementation, but if I ever accidentally add a call to Foo(null), debugging will be more difficult.
Decorate the constructor with [ExcludeFromCodeCoverage] (sketched below): This would work for the current Foo(Ba) implementation, but if I ever change the implementation, new code paths in the constructor could develop and accidentally go untested.
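For illustration, alternative 2 would look something like this; the attribute lives in System.Diagnostics.CodeAnalysis and excludes the whole member, including any logic added to it later:
[ExcludeFromCodeCoverage]   // coverage tools skip this member entirely
public Foo(Ba ba)
{
    if (ba is null) throw new ArgumentNullException(nameof(ba));
    _ba = ba;
}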
How would you solve the dilemma?
Notes
The code example is written in C#, but the question might address a general unit testing/exception handling problem.
C# 8 might solve this problem by introducing non-nullable reference types, but I am looking for a good solution until a stable release is available.
You have missed the most important alternative: Don't see it as a desirable goal to achieve 100% code coverage.
The robustness checks in your code are, strictly speaking, not testable in a sensible way. This will happen in various other parts of your code as well - it often happens in switch statements where all possible cases are explicitly covered and an extra default case is added just to throw an exception or otherwise handle this 'impossible' situation. Or, think of assertion statements added to the code: Since assertions should never fail, you will strictly speaking never be able to cover the else branch that is hidden inside the assertion statement - how do you test that the expression inside the assertion is actually capable of detecting the problem you want it to catch?
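As a small, made-up illustration of that kind of 'impossible' branch (PaymentMethod and the handlers are invented names):
switch (paymentMethod)
{
    case PaymentMethod.Cash:
        return HandleCash(order);
    case PaymentMethod.Card:
        return HandleCard(order);
    default:
        // Robustness check only: unreachable as long as the enum stays complete,
        // so no test can ever cover this branch.
        throw new InvalidOperationException("Unhandled payment method: " + paymentMethod);
}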
Removing such robustness code and assertions is not a good idea, because they also protect you from undesired side effects of future changes. Excluding code from coverage analysis might be acceptable for the examples you have shown, but in most of the cases I have mentioned it would not be a good option. In the end you will have to make an informed decision (by looking at the coverage report in detail, not only the overall percentage) which statements/branches etc. of your code really need to be covered and which not.
And, as a final note, be aware that a high code coverage is not necessarily an indication that your test suite has a high quality. Your test suite has a high quality if it will detect the bugs in the code that could likely exist. You can have a test suite with 100% coverage that will not detect any of the potential bugs.
I read today about C# 4.0 code contracts. It seems like the common practice for validating a parameter to a method isn't null is as follows:
Contract.Requires(p != null);
However it seems quite unreasonable to me that I'd have to do this for every parameter of every interface method in my code. In the vast majority of cases, the parameters are expected not to be null. I'd expect there would be some sort of mechanism that allows defining that some specific parameters are allowed to be null (similarly to the "@Nullable" annotation in Java), and that the Contracts framework would automatically ensure the rest aren't null.
Besides saving a lot of time on these boilerplate checks (as well as on the many "contract classes", since often there simply aren't any conditions to verify other than non-null parameters), it would also make the contract code cleaner and more "logic-oriented".
My question is: is there any way to do this, and if not, why isn't there one, or possibly why is my approach here wrong?
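For context, the kind of boilerplate being described looks roughly like this; IRepository and Customer are placeholder names, and the attributes come from System.Diagnostics.Contracts:
using System.Diagnostics.Contracts;

[ContractClass(typeof(RepositoryContracts))]
public interface IRepository
{
    void Save(Customer customer);
}

[ContractClassFor(typeof(IRepository))]
internal abstract class RepositoryContracts : IRepository
{
    public void Save(Customer customer)
    {
        // The same non-null requirement has to be spelled out for every parameter.
        Contract.Requires(customer != null);
    }
}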
I don't agree. Null is very helpful when you need to check whether something hasn't been initialized yet, or whether data was not found, and sometimes you'll want to pass null to a method and that's fine. Code contracts are good for common methods that serve lots of classes, and for API definitions. If you write in a layered architecture, you just need to protect the interactions between the layers, and you are null safe inside each layer.
Your domain has nulls, and that's OK.
I've recently found the need to check at compile-time whether either: a) a certain assembly reference exists and can be successfully resolved, or b) a certain class (whose fully qualified name is known) is defined. These two situations are equivalent for my purposes, so being able to check for one of them would be good enough. Is there any way to do this in .NET/C#? Preprocessor directives initially struck me as something that might help, but it seems it doesn't have the necessary capability.
Of course, checking for the existence of a type at runtime can be done easily enough, but unfortunately that won't resolve my particular problem in this situation. (I need to be able to ignore the fact that a certain reference is missing and thus fall back to another approach in code.)
Is there a reason you can't add a reference and then use a typeof expression on a type from the assembly to verify it's available?
var x = typeof(SomeTypeInSomeAssembly);
If the assembly containing SomeTypeInSomeAssembly is not referenced and available this will not compile.
It sounds like you want the compiler to ignore one branch of code, which is really only doable by hiding it behind an #if block. Would defining a compiler constant and using #if work for your purposes?
#if MyConstant
.... code here that uses the type ....
#else
.... workaround code ....
#endif
Another option would be to not depend on the other class at compile-time at all, and use reflection or the .NET 4.0 dynamic keyword to use it. If it'll be called repeatedly in a perf-critical scenario in .NET 3.5 or earlier, you could use DynamicMethod to build your code on first use instead of using reflection every time.
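A rough sketch of the reflection-based variant of that idea (the type and method names are assumptions, not from the original post):
using System;

public static class OptionalFeature
{
    public static void Run()
    {
        // Type.GetType returns null instead of throwing when the type cannot be resolved.
        Type optionalType = Type.GetType("Some.Namespace.SomeType, SomeAssembly");

        if (optionalType != null)
        {
            object instance = Activator.CreateInstance(optionalType);
            optionalType.GetMethod("DoWork").Invoke(instance, null);   // assumed parameterless method
        }
        else
        {
            // Fall back to the alternative approach described in the question.
        }
    }
}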
I seem to have found a solution here, albeit not precisely for what I was initially hoping.
My Solution:
What I ended up doing is creating a new build configuration and then defining a precompiler constant, which I used in code to determine whether to use the reference, or to fall back to the alternative (guaranteed to work) approach. It's not fully automatic, but it's relatively simple and seems quite elegant - good enough for my purposes.
Alternative:
If you wanted to fully automate this, it could be done using a pre-build command that runs a batch script/small program to check the availability of a given reference on the machine and then updates a file containing precompiler constants. However, I considered this more effort than it was worth, though it may have been more useful if I had multiple independent references whose availability I needed to check.