Does anyone know how to add Code Contracts' Ensures to ReSharper ExternalAnnotations? It isn't in the latest v7.1.3, nor in the latest v8 EAP, nor in any of the custom XMLs floating around.
Specifically it should detect if a method does not return a null: Contract.Ensures(Contract.Result<T>() != null);
If you're attempting to simply appease the analysis engine, the simplest thing to use is [NotNull] in front of the method declaration. The Contract Annotations to which you posted a link above are a more powerful mechanism for defining relationships between input parameters and the return value, e.g., [ContractAnnotation("null => null")].
However, explicitly analyzing for a Contract.Ensures statement is an entirely different proposition, as no automatic analysis can be defined for this statement via [ContractAnnotation] or any other ReSharper annotation attribute.
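For example, a minimal sketch (Repository, Customer, GetCustomer, and Normalize are hypothetical; the attributes come from the JetBrains.Annotations package):
using System.Diagnostics.Contracts;
using JetBrains.Annotations;
public class Repository
{
    [NotNull] // ReSharper now assumes the result is never null at call sites
    public Customer GetCustomer(int id)
    {
        Contract.Ensures(Contract.Result<Customer>() != null);
        return new Customer();
    }
    // Return-value nullness expressed as a function of the input:
    [ContractAnnotation("null => null; notnull => notnull")]
    public string Normalize([CanBeNull] string input)
    {
        return input == null ? null : input.Trim();
    }
}
public class Customer { }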
I've started using nullable reference types in C# 8. So far, I'm loving the improvement except for one small thing.
I'm migrating an old code base, and it's filled with a lot of redundant or unreachable code, something like:
void Blah(SomeClass a) {
if (a == null) {
// this should be unreachable, since a is not nullable
}
}
Unfortunately, I don't see any warning settings that can flag this code for me! Was this an oversight by Microsoft, or am I missing something?
I also use ReSharper, but none of its warning settings appear to capture this either. Has anybody else found a solution to this?
Edit: I'm aware that technically this is still reachable because the nullability checks aren't bulletproof. That's not really the point. In a situation like this, where I declare a parameter as NOT nullable, it is usually a mistake to check if it's null. In the rare event that null gets passed in as a non-nullable type, I'd prefer to see the NullReferenceException and track down the offending code that passed in null by mistake.
It's really important to note that not only are the nullability checks not bulletproof, but while they're designed to discourage callers from passing null references, they do nothing to prevent it. Code that passes null to this method will still compile, and there isn't any runtime validation of the parameter values themselves.
If you’re certain that all callers will be using C# 8’s nullability context—e.g., this is an internal method—and you’re really diligent about resolving all warnings from Roslyn’s static flow analysis (e.g., you’ve configured your build server to treat them as errors) then you’re correct that these null checks are redundant.
As noted in the migration guide, however, any external code that isn’t using C# nullability context will be completely oblivious to this:
The new syntax doesn't provide runtime checking. External code might circumvent the compiler's flow analysis.
Given that, it’s generally considered a best practice to continue to provide guard clauses and other nullability checks in any public or protected members.
In fact, if you use Microsoft's Code Analysis package—which I'd recommend—it will warn you to use a guard clause in this exact situation (rule CA1062: Validate arguments of public methods). They considered removing this warning for code in C# 8's nullability context, but decided to keep it due to the above concerns.
When you get these warnings from Code Analysis, you can wrap your code in a null check, as you've done here. But you can also throw an exception. In fact, you could throw another NullReferenceException—though that's definitely not recommended. In a case like this, you should instead throw an ArgumentNullException, and pass the name of the parameter to the constructor:
void Blah(SomeClass a) {
if (a == null) {
throw new ArgumentNullException(nameof(a));
}
…
}
This is much preferred over throwing a NullReferenceException at the source because it communicates to callers what they can do to avoid this scenario by explicitly naming the exact parameter (in this case) that was passed as null. That's more useful than just getting a NullReferenceException—and, possibly, a reference to your internal code—where the exception occurred.
Critically, this exception isn't meant to help you debug your code—that's what Code Analysis is doing for you. Instead, it's demonstrating that you've already identified the potential dereference of a null value, and you've accounted for it at the source.
Note: These guard clauses can add a lot of clutter to your code. My preference is to create a reusable internal utility that handles this via a single line. Alternatively, a single-line shorthand for the above code is:
void Blah(SomeClass a) {
_ = a ?? throw new ArgumentNullException(nameof(a));
}
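As an aside, if you're on .NET 6 or later, the framework ships a built-in one-line guard, ArgumentNullException.ThrowIfNull, which captures the parameter name automatically:
void Blah(SomeClass a) {
    ArgumentNullException.ThrowIfNull(a); // throws ArgumentNullException with "a" as the parameter name
    …
}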
This is a really roundabout way of answering your original question, which is how to detect the presence of null checks made unnecessary by C#’s non-nullable reference types.
The short answer is that you can't; at this point, Roslyn's static flow analysis is focused on identifying the possibility of dereferencing null objects, not on detecting potentially extraneous checks.
The long answer, though, as outlined above, is that you shouldn’t; until Microsoft adds runtime validation, or mandates the nullability context, those null checks continue to provide value.
Please bear with me, this is not (quite!) a duplicate of any of these SO answers:
Conflicting overloaded methods with optional parameters
Overload Resolution and Optional Parameters in C# 4
Call overload method with default parameters
When trying to "influence" what overload the compiler will choose given conflicts arising from optional parameters, answers in the above posts reference the C# Programming Guide, which indicates the overload resolution (OR) heuristics at play here:
If two candidates are judged to be equally good, preference goes to a candidate that does not have optional parameters for which arguments were omitted in the call.
Fair enough. My question is, why doesn't (or why can't) the Obsolete attribute (or some other markup) influence the OR decision on judging two candidates to be equally good? For example, consider the following overloads:
[Obsolete("This method is deprecated.")]
[EditorBrowsable(EditorBrowsableState.Never)]
bool foo() { return true; }
bool foo(bool optional = false) { return optional; }
It seems OR should not judge these overloads to be equally good--the non-deprecated overload with an optional parameter should win. If this were the case in this over-simplified example, code previously compiled for foo() would be backwards compatible and happily continue to return true. Future code compiled for this library could also call foo(), but that resolved overload would return false.
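To make the current behavior concrete (a sketch, assuming the two overloads above live on a hypothetical MyClass):
var obj = new MyClass();
bool r1 = obj.foo();      // OR currently prefers the parameterless overload: returns true (plus a CS0618 obsolete warning)
bool r2 = obj.foo(false); // only the optional-parameter overload is applicable: returns false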
Is this a valuable/possible feature we are missing from the language? Or is there any other way to make this wish of mine work? Thanks for any insights,
-Mike
Overload resolution is a very tricky thing to get right, and language designers think long and hard about exactly what the rules should be. The more considerations and special cases you add, the more likely language users are to run into obscure gotchas where it doesn't quite work the way they'd expect. I don't believe adding this new condition to overload resolution would be a valuable addition to the language.
If OR did prefer the method without the ObsoleteAttribute, then the only way to invoke the obsolete method without reflection would be something like this:
((Func<bool>)obj.foo)(); // Func<bool>, not Action, since foo returns bool
And if this were a generic method, there would be no way to call it with an anonymous type as the type argument. This would definitely be surprising behavior (at least to me).
The ObsoleteAttribute (declared with IsError = false) is a way for you to provide a hint for your consumers to update their code without completely removing the previous capabilities. Typically you want to do this if you plan to remove the feature in a future version. If you want to prohibit them from calling this method entirely, you can either:
Set IsError = true with [Obsolete("This method is deprecated.", true)] so that if the code is re-compiled, it generates an error rather than a warning. It could still be called via reflection.
Remove the deprecated function entirely. It cannot be called via reflection.
It's an interesting question. Still, I prefer it the way it is for the following reasons:
Simplicity
OR depends on the method signatures compiled into the module, and it is intuitive at that. Depending on attributes as well would widen the dependency scope and start a snowball: should some other attribute be considered too? Attributes on arguments? And so on.
Change management
Method signatures and OR are a different concern from the deprecation-marker attribute. The latter is metadata, and if it started to affect OR at some point, it could break many existing applications, especially libraries, where the deprecation cycle is longer for a reason.
I would be very annoyed if functionally tested code started to behave differently after a soft decision that a certain part of the code will be phased out in an "undetermined future".
All in all having code like this ...
[Obsolete("This method is deprecated.")]
bool foo() { return true; }
bool foo(bool optional = false) { return optional; }
stinks. The compiler will apply an algorithm to select the right method when you call foo(), but any developer looking at the code will need to know (and apply) that algorithm to understand what the code does, so the maintainability of the code will decrease a lot.
Simply remove the optional parameter in this case, as it hurts more than it helps.
I have a piece of code which looks a little like this:
public TReturn SubRegion(TParam foo)
{
Contract.Requires(foo != null);
Contract.Ensures(Contract.Result<TReturn>() != null);
if (!CheckStuff(foo))
foo.Blah();
return OtherStuff(foo);
}
CC is giving me a warning:
Warning 301 CodeContracts: Consider adding the postcondition Contract.Ensures(Contract.Result<TReturn>() != null); to provide extra-documentation to the library clients
Which is obviously completely redundant! I have several such redundant warnings and it's becoming a problem (real warnings getting buried in a torrent of redundant suggestions).
So I have two questions:
1) Am I missing something which means this is not a redundant recommendation? In which case what do I need to do to fix this warning?
2) Alternatively, if this is just a quirk of CCCheck and cannot be fixed how can I hide or suppress this warning?
N.b. Just in case you think my example is missing something important, the full code is the SubRegion method here.
Regarding 2: The documentation is pretty good, take a look at 6.6.10 Filtering Warning Messages:
To instruct the static contract checker not to emit a particular class of warnings for a method (a type, an assembly), annotate the method (the type, the assembly) with the attribute:
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Contracts", warningFamily)]
where warningFamily is one of: Requires, Ensures, Invariant, NonNull, ArrayCreation, ArrayLowerBound, ArrayUpperBound, DivByZero, MinValueNegation.
If necessary, the static contract checker allows filtering a single warning message (instead of an entire family) as well. To do so you can annotate a method with the attribute
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Contracts", warningFamily-ILOffset-MethodILOffset)]
where warningFamily is as above, and ILOffset and MethodILOffset are used by the static contract checker to determine the program point the warning refers to. The offsets can be obtained from the static contract checker by providing the -outputwarnmasks switch in the "Custom Options" entry in the VS pane. Check the Build Output Window for the necessary information.
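Applied to the SubRegion method from the question, suppressing the entire Ensures family would look something like this (a sketch; for a single message you'd substitute the warningFamily-ILOffset-MethodILOffset string obtained with -outputwarnmasks):
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Contracts", "Ensures")]
public TReturn SubRegion(TParam foo)
{
    Contract.Requires(foo != null);
    Contract.Ensures(Contract.Result<TReturn>() != null);
    …
}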
I would like a ReSharper pattern to detect unhandled IDisposables, if possible. If I have a method
IDisposable Subscribe(...){....}
and call it without assigning and using that IDisposable I would like to be told about it. I have tried the following pattern
;$expr$;
where expr is of type IDisposable. The following happens: a bare, unassigned call is detected correctly, but a call whose result is assigned to an existing variable is matched as well, because a simple assignment is also an expression in C#, whereas a declaration using var is not. Is it possible to detect whether the return value is assigned via structural search?
I notice that ReSharper has some built-in code quality options along these lines, but I'm guessing they are built with something more sophisticated than the structural search parser.
Unfortunately, this can't be done with structural search and replace. For one thing, there is no construct to match against the absence of something, so there's no way to match against a method invocation that does NOT have an assignment of its return value.
As you note, there are inspections that track pure functions whose return value isn't used, and they're not implemented with SSR. You can make them apply to your own methods by applying the [Pure] attribute. However, that implies the method actually is pure, i.e. has no side effects, so it may be the wrong semantics in this instance.
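For example (a sketch; [Pure] here is JetBrains.Annotations.PureAttribute, though ReSharper honours System.Diagnostics.Contracts.PureAttribute as well):
using JetBrains.Annotations;
[Pure] // only appropriate if Subscribe really has no observable side effects
IDisposable Subscribe(IObserver<int> observer) { … }
void Caller(IObserver<int> observer) {
    Subscribe(observer); // ReSharper: "Return value of pure method is not used"
}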
Looking through System.Linq.Enumerable in DotPeek I notice that some methods are flavoured with a [__DynamicallyInvokable] attribute.
What role does this attribute play? Is it something added by DotPeek or does it play another role, perhaps informing the compiler on how best to optimise the methods?
It is undocumented, but it looks like one of the optimizations in .NET 4.5. It appears to be used to prime the reflection type info cache, making subsequent reflection code on common framework types run faster. There's a comment about it in the Reference Source for System.Reflection.Assembly.cs, RuntimeAssembly.Flags property:
// Each blessed API will be annotated with a "__DynamicallyInvokableAttribute".
// This "__DynamicallyInvokableAttribute" is a type defined in its own assembly.
// So the ctor is always a MethodDef and the type a TypeDef.
// We cache this ctor MethodDef token for faster custom attribute lookup.
// If this attribute type doesn't exist in the assembly, it means the assembly
// doesn't contain any blessed APIs.
Type invocableAttribute = GetType("__DynamicallyInvokableAttribute", false);
if (invocableAttribute != null)
{
Contract.Assert(((MetadataToken)invocableAttribute.MetadataToken).IsTypeDef);
ConstructorInfo ctor = invocableAttribute.GetConstructor(Type.EmptyTypes);
Contract.Assert(ctor != null);
int token = ctor.MetadataToken;
Contract.Assert(((MetadataToken)token).IsMethodDef);
flags |= (ASSEMBLY_FLAGS)token & ASSEMBLY_FLAGS.ASSEMBLY_FLAGS_TOKEN_MASK;
}
There are no further hints as to what a "blessed API" might mean, although it is clear from the context that this will only work on types in the framework itself. There ought to be additional code somewhere that checks for the attribute applied to types and methods. I have no idea where that is located, but given that it would need a view of all .NET types to have a shot at caching, I can only think of Ngen.exe.
I found that it's used in the Runtime*Info.IsNonW8PFrameworkAPI() suite of internal methods. Having this attribute placed on a member makes IsNonW8PFrameworkAPI() return false for it, thus making the member available in WinRT applications and silencing the "The API '...' cannot be used on the current platform." exception.
Profiler writers should place this attribute on members emitted by their profiler into framework assemblies, if they want to access them under WinRT.
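You can see which members carry it by matching on the attribute's type name via reflection (a sketch; the attribute type itself is non-public, so we compare names rather than types):
using System;
using System.Linq;
using System.Reflection;
class Program
{
    static void Main()
    {
        MethodInfo range = typeof(Enumerable).GetMethod("Range", BindingFlags.Public | BindingFlags.Static);
        bool blessed = range.GetCustomAttributesData()
            .Any(a => a.AttributeType.Name == "__DynamicallyInvokableAttribute");
        Console.WriteLine(blessed); // True on .NET Framework 4.5+; the attribute may be absent on other runtimes
    }
}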
Sorry if I answer late. This attribute is used to call unmanaged-code functions from managed code. It is used to separate the managed language from the unmanaged language, a barrier between two worlds. It is also used for security reasons, to make unmanaged code inaccessible to reflection.