ReSharper: how to remove "Possible 'System.NullReferenceException'" warning - c#

Here is a piece of code:
IUser user = managerUser.GetUserById(UserId);
if (user == null)
    throw new Exception(...);
Quote quote = new Quote(user.FullName, user.Email);
Everything is fine here. But if I replace "if" line with the following one:
ComponentException<MyUserManagerException>.FailIfTrue(user == null, "Can't find user with Id=" + UserId);
where the function implementation is as follows:
public abstract class ComponentException<T> : ComponentException
    where T : ComponentException, new()
{
    public static void FailIfTrue(bool expression, string message)
    {
        if (expression)
        {
            T t = new T();
            t.SetErrorMessage(message);
            throw t;
        }
    }
    //...
}
Then ReSharper gives me a warning: Possible 'System.NullReferenceException', pointing at the first usage of the 'user' object.
Q1. Why does it generate such a warning? As far as I can see, if user==null then an exception will be thrown and execution will never reach the usage point.
Q2. How to remove that warning? Please note:
1. I don't want to suppress this warning with comments (I will have a lot of similar pieces and don't want to turn my source code into 'commented garbage');
2. I don't want to change ReSharper settings to downgrade this problem from a warning to a 'suggestion' or 'hint'.
Thanks.
Any thoughts are welcome!
P.S. I am using ReSharper 5.1, Visual Studio 2008, C#.

Resharper only looks at the current method for its analysis, and does not recursively analyse other methods you call.
You can however steer ReSharper a bit and give it meta-information about certain methods. It knows, for example, about "Assert.IsNotNull(a)", and will take that information into account in its analysis. It is possible to make an external annotations file for ReSharper and give it extra information about a certain library to improve its analysis. This might offer a way to solve your problem.
More information can be found here.
An example showing how it's used for the library Microsoft.Contracts can be found here.
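As an alternative to a separate annotations file, ReSharper's assertion attributes can also be applied directly in source. Here is a rough sketch of what that could look like for the method from the question, assuming the JetBrains.Annotations attributes are available (either by referencing JetBrains.Annotations.dll or by copying the attribute definitions into your solution; exact attribute names can vary between ReSharper versions):

using JetBrains.Annotations;

public abstract class ComponentException<T> : ComponentException
    where T : ComponentException, new()
{
    [AssertionMethod]
    public static void FailIfTrue(
        [AssertionCondition(AssertionConditionType.IS_FALSE)] bool expression,
        string message)
    {
        if (expression)
        {
            T t = new T();
            t.SetErrorMessage(message);
            throw t;
        }
    }
}

With something like that in place, ReSharper should treat FailIfTrue(user == null, ...) the same way it treats Assert.IsNotNull(user).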

A new answer in old post...
Here is a little sample of my code showing how to use a code contract via ContractAnnotation with ReSharper:
[ContractAnnotation("value:null=>true")]
public static bool IsNullOrEmpty(this string value)
{
    return string.IsNullOrEmpty(value);
}
It is very simple... once you find the breadcrumb trail in the woods. You can check other cases too.
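Applied to the FailIfTrue method from the original question, the same idea might look like this (a sketch, assuming a reference to the JetBrains.Annotations attributes; "halt" tells ReSharper that the method never returns when the condition is true):

[ContractAnnotation("expression:true => halt")]
public static void FailIfTrue(bool expression, string message)
{
    // body unchanged from the question
}

ReSharper can then infer that user is not null on the lines following the FailIfTrue(user == null, ...) call.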
Have a nice day

Q1: Because ReSharper doesn't do this kind of path analysis across method calls. It just sees a possible null reference and flags it.
Q2: You can't, without doing one of the things you've already ruled out.

You do know (or expect) that this code will throw an exception if there is a null reference:
ComponentException<MyUserManagerException>.FailIfTrue([...]);
However, since there is no contract specifying this, ReSharper has to assume that this is just a normal method call which may return without throwing any exception in any case.
Either make this method carry a ReSharper contract, or, as a simple workaround (which only affects debug builds and therefore has no performance penalty in release builds), add the following just after the FailIfTrue call:
Debug.Assert(user != null);
That will get rid of the warning, and as an added bonus do a runtime check in debug mode to ensure that the condition assumed by you after calling FailIfTrue is indeed met.
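Put together with the code from the question, the workaround might look like this (a sketch; System.Diagnostics is needed for Debug.Assert):

using System.Diagnostics;

IUser user = managerUser.GetUserById(UserId);
ComponentException<MyUserManagerException>.FailIfTrue(user == null,
    "Can't find user with Id=" + UserId);
Debug.Assert(user != null); // runtime check in debug builds only; also satisfies ReSharper
Quote quote = new Quote(user.FullName, user.Email);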

This is caused by the ReSharper analysis engine. These "possible NullReferenceException" warnings appear because someone (probably at JetBrains) has declared/configured an annotation for the method somewhere.
Here is how it works: ReSharper NullReferenceException Analysis and Its Contracts
Unfortunately, sometimes these useful annotations are just wrong.
When you detect an error, you should report it to JetBrains and they will update the annotations on the next release. They're used to this.
Meanwhile, you can try to fix it by yourself. Read the article for more :)

Please check whether you have any user==null check above the given code. If there is, ReSharper thinks the variable "can be null" and recommends a check/assert before referencing it. In some cases, that's the only way ReSharper can guess whether a variable can be null or not.

Related

Get method body c#

I am trying to write a custom rule (code analysis) where, if the body of a method contains empty statements, an error is raised.
However, there is one problem. I can not seem to figure out how to get the body of a method (the text that is in the method).
How can I get the text inside a method, and assign it to a string?
Thanks in advance.
For reference: I use C# in Visual Studio, with FxCop to make the rule.
Edit: some code added for reference; this does NOT work.
using Microsoft.FxCop.Sdk;
using Microsoft.VisualStudio.CodeAnalysis.Extensibility;
public override ProblemCollection Check(Member member)
{
    Method method = member as Method;
    if (method == null)
    {
        return null;
    }
    if (method.Name.Name.Contains("{}"))
    {
        var resolution = GetResolution(member.Name.Name);
        var problem = new Problem(resolution, method)
        {
            Certainty = 100,
            FixCategory = FixCategories.Breaking,
            MessageLevel = MessageLevel.Warning
        };
        Problems.Add(problem);
    }
    return Problems;
}
FxCop doesn't analyse source code; it works on .NET assemblies built from any language.
You may be able to find out whether the method contains any statements using FxCop; I advise you to read the documentation and check the implementation of existing rules to understand how.
An empty statement in the middle of other code might be removed by the compiler, and you may not find it using FxCop. If you want to analyze source code you should take a look at StyleCop.
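If you do stay with FxCop, a rough sketch of the "inspect the compiled body" idea might look like the following. Note that the member names used here for reading the IL (Instructions, OpCode, OpCode.Nop, OpCode.Ret) are assumptions based on common FxCop SDK samples; verify them against the SDK version you are actually building against:

using Microsoft.FxCop.Sdk;

public override ProblemCollection Check(Member member)
{
    Method method = member as Method;
    if (method == null)
        return null;

    // Count IL instructions that are neither padding (nop) nor the final return.
    // If there are none, the method body is effectively empty.
    int meaningfulInstructions = 0;
    foreach (Instruction instruction in method.Instructions)
    {
        if (instruction.OpCode != OpCode.Nop && instruction.OpCode != OpCode.Ret)
            meaningfulInstructions++;
    }

    if (meaningfulInstructions == 0)
    {
        Problems.Add(new Problem(GetResolution(member.Name.Name), method));
    }

    return Problems;
}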
However, there is one problem. I can not seem to figure out how to get the body of a method (the text that is in the method).
You cannot. FxCop does not work from the source; it analyses the compiled bytecode.
What you can do is find the source, which is not totally trivial, but you have to do so outside the FxCop API. A starting point may be analysing the PDB files (not sure where to find documentation), as they can point you to the file that contains the method.

Does this Resharper fix for disposed closure warning make any sense?

I'm working on getting rid of some warnings from a static code analysis.
In one specific case there was no disposing being done on a ManualResetEvent.
The code in question executes a Func on the main thread and blocks the calling thread for a certain number of milliseconds. I realize this sounds like a weird thing to do, but it's outside the scope of this question, so bear with me.
Suppose I add a using statement like so:
object result = null;
using (var completedEvent = new ManualResetEvent(false))
{
    _dispatcher.BeginInvoke((Action)(() =>
    {
        result = someFunc();
        completedEvent.Set(); // Here be dragons!
    }));
    completedEvent.WaitOne(timeoutMilliseconds);
    return result;
}
Now, I realize that this is likely to cause problems. I also happen to use Resharper and it warns me with the message "Access to disposed closure".
Resharper proposes to fix this by changing the offending line to:
if (completedEvent != null)
{
    completedEvent.Set();
}
Now, the proposed solution puzzles me. Under normal circumstances, there would be no reason why a variable would be set to null by a using statement. Is there some implementation detail to closures in .NET that would guarantee the variable to be null after the variable that has been closed over is disposed?
As a bonus question, what would be a good solution to the problem of disposing the ManualResetEvent?
You are mixing up ReSharper's "quick fix" and "context action". When ReSharper proposes to fix something, you would most likely see a light bulb there. You don't see a bulb here, because there are no quick fixes for this warning.
But aside from quick fixes, ReSharper also has "context actions", where it can do some routine tasks for you (think of them as small refactorings). When ReSharper has a context action for the code under the cursor, it shows a pencil icon. Here you see a context action called "Check if something is not null". It has no relation to the warning, and there is no convention that a variable is set to null after the object it references is disposed.
Also, when you press Alt-Enter, you'd see a crossed-out bulb to give you the impression that ReSharper doesn't suggest any quick fixes for this warning, but that it can disable it with comments. In fact, that's the only way to make this warning go away easily. But I'd rewrite this piece of code instead.
I ran into exactly this only a few hours ago.
It's a false alarm. R# doesn't understand that execution will block until the event is set, even though this defers disposal to exactly the right moment.
IMO it's a good solution. Just ignore R#.
I suggest catching an ObjectDisposedException when you call completedEvent.Set(), in case the timeout has expired and the event has already been disposed. I don't think this will prevent the R# warning, but it's safe.
I think you should check for null, and moreover you should catch this exception.
Imagine what happens if someFunc runs for longer than timeoutMilliseconds.
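A rough sketch of the catch-the-exception suggestion applied to the code above (whether silently swallowing the exception is acceptable depends on what your timeout is supposed to mean):

_dispatcher.BeginInvoke((Action)(() =>
{
    result = someFunc();
    try
    {
        completedEvent.Set();
    }
    catch (ObjectDisposedException)
    {
        // The waiting thread timed out and already disposed the event;
        // there is nobody left to signal, so there is nothing more to do.
    }
}));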

C# Compiler should give warning but doesn't?

Someone on my team tried fixing a 'variable not used' warning in an empty catch clause.
try { ... } catch (Exception ex) { }
-> gives a warning about ex not being used. So far, so good.
The fix was something like this:
try { ... } catch (Exception ex) { string s = ex.Message; }
Seeing this, I thought "Just great, so now the compiler will complain about s not being used."
But it doesn't! There are no warnings on that piece of code and I can't figure out why. Any ideas?
PS. I know catch-all clauses that mute exceptions are a bad thing, but that's a different topic. I also know the initial warning is better removed by doing something like this, that's not the point either.
try { ... } catch (Exception) { }
or
try { ... } catch { }
In this case the compiler detects that s is written but not read, and deliberately suppresses the warning.
The reason is that C# is a garbage-collected language, believe it or not.
How do you figure that?
Well, consider the following.
You have a program which calls a method DoIt() that returns a string. You do not have the source code for DoIt(), but you wish to examine in the debugger what its return value is.
Now in your particular case you are using DoIt() for its side effects, not its return value. So you say
DoIt(); // discard the return value
Now you're debugging your program and you go to look at the return value of DoIt() and it's not there because by the time the debugger breaks after the call to DoIt(), the garbage collector might have already cleaned up the unused string.
And in fact the managed debugger has no facility for "look at the thing returned by the previous method call". The unmanaged C++ debugger has that feature because it can look at the EAX register where the discarded return value is still sitting, but you have no guarantee in managed code that the returned value is still alive if it was discarded.
Now, one might argue that this is a useful feature and that the debugger team should add a feature whereby returned values are kept alive if there's a debugger breakpoint immediately following a method execution. That would be a nice feature, but I'm the wrong person to ask for it; go ask the debugger team.
What is the poor C# developer to do? Make a local variable, store the result in the local variable, and then examine the local in the debugger. The debugger does ensure that locals are not garbage collected aggressively.
So you do that and then the compiler gives you a warning that you've got a local that is only written to and never read because the thing doing the reading is not part of the program, it's the developer sitting there watching the debugger. That is a very irritating user experience! Therefore we detect the situation where a non-constant value is being assigned to a local variable or field that is never read, and suppress that warning. If you change up your code so that instead it says string s = "hello"; then you'll start getting the warning because the compiler reasons, well, this can't possibly be someone working around the limitations of the debugger because the value is right there where it can be read by the developer already without the debugger.
That explains that one. There are numerous other cases where we suppress warnings about variables that are never read from; a detailed exegesis of all the compiler's policies for when we report warnings and when we do not would take me quite some time to write up, so I think I will leave it at that.
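A minimal illustration of the policy described above (the warning numbers are the ones the Microsoft C# compiler reports; compile with warnings enabled to see them):

void Demo(Exception ex)
{
    string a = ex.Message; // non-constant source: no warning, so you can inspect 'a' in the debugger
    string b = "hello";    // constant source: warning CS0219, variable assigned but never used
    string c;              // never assigned or read: warning CS0168, variable declared but never used
}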
The variable s is used... to hold a reference to ex.Message. If you had just string s; you would get the warning.
I think the person that answers this will need some insight into how the compiler works. However, something like FxCop would probably catch this.
Properties are just methods, and there's nothing stopping anyone from putting some code that does something in the ex.Message property. Therefore, while you may not be doing anything with s, calling ex.Message COULD potentially have value....
It's not really the job of a compiler to work out every single instance and corner case when a variable may or may not be used. Some are easy to spot, some are more problematic. Erring on the side of caution is the sensible thing to do (especially when warnings can be set to be treated as errors - imagine if software didn't compile just because the compiler thought you weren't using something you were). Microsoft Compiler team specifically say:
"...our guidance for customers who are
interested in discovering unused
elements in their code is to use
FxCop. It can discover unused fields
and much more interesting data about
your code."
-Ed Maurer, Development Lead, Managed Compiler Platform
Resharper would catch that
Static analysis is somewhat limited in what it can accomplish today. (Although, as Eric pointed out, in this case it's not that the compiler doesn't know.)
The new Code Contracts in .NET 4 enhances static checking considerably and one day I'm sure you'll get more help with obvious bugs like this.
If you've tried Code Contracts you'll know however that doing an exhaustive static analysis of your code is not easy - it can thrash for minutes after each compile. Will static analysis ever be able to find every problem like this at compile time? Probably not: see http://en.wikipedia.org/wiki/Halting_problem.

Why doesn't the C# compiler stop properties from referring to themselves?

If I do this I get a System.StackOverflowException:
private string abc = "";
public string Abc
{
get
{
return Abc; // Note the mistaken capitalization
}
}
I understand why -- the property is referencing itself, leading to an infinite loop. (See previous questions here and here).
What I'm wondering (and what I didn't see answered in those previous questions) is why doesn't the C# compiler catch this mistake? It checks for some other kinds of circular reference (classes inheriting from themselves, etc.), right? Is it just that this mistake wasn't common enough to be worth checking for? Or is there some situation I'm not thinking of, when you'd want a property to actually reference itself in this way?
You can see the "official" reason in the last comment here.
Posted by Microsoft on 14/11/2008 at 19:52
Thanks for the suggestion for Visual Studio!
You are right that we could easily detect property recursion, but we can't guarantee that there is nothing useful being accomplished by the recursion. The body of the property could set other fields on your object which change the behavior of the next recursion, could change its behavior based on user input from the console, or could even behave differently based on random values. In these cases, a self-recursive property could indeed terminate the recursion, but we have no way to determine if that's the case at compile-time (without solving the halting problem!).
For the reasons above (and the breaking change it would take to disallow this), we wouldn't be able to prohibit self-recursive properties.
Alex Turner
Program Manager
Visual C# Compiler
Another point in addition to Alex's explanation is that we try to give warnings for code which does something that you probably didn't intend, such that you could accidentally ship with the bug.
In this particular case, how much time would the warning actually save you? A single test run. You'll find this bug the moment you test the code, because it always immediately crashes and dies horribly. The warning wouldn't actually buy you much of a benefit here. The likelihood that there is some subtle bug in a recursive property evaluation is low.
By contrast, we do give a warning if you do something like this:
int customerId;
...
this.customerId = this.customerId;
There's no horrible crash-and-die, and the code is valid code; it assigns a value to a field. But since this is nonsensical code, you probably didn't mean to do it. Since it's not going to die horribly, we give a warning that there's something here that you probably didn't intend and might not otherwise discover via a crash.
A property referring to itself does not always lead to infinite recursion and a stack overflow. For example, this works fine:
int count = 0;
public string Abc
{
    get
    {
        count++;
        if (count < 5) return Abc; // recurses a few times, then terminates
        return "Foo";
    }
}
The above is a dummy example, but I'm sure one could come up with useful recursive code along similar lines. The compiler cannot determine whether infinite recursion will happen (halting problem).
Generating a warning in the simple case would be helpful.
They probably considered that it would unnecessarily complicate the compiler without any real gain.
You will discover this typo easily the first time you call this property.
First of all, you'll get a warning for unused variable abc.
Second, there is nothing bad about the recursion, provided that it's not endless recursion. For example, the code might adjust some inner variables and then call the same getter recursively. However, there is no easy way at all for the compiler to prove whether some recursion is endless or not (in general the problem is undecidable). The compiler could catch some easy cases, but then the consumers would be surprised that the more complicated cases get through the compiler's checks.
The other cases it checks for (except recursive constructors) are invalid IL.
In addition, all of those cases (even recursive constructors) are guaranteed to fail.
However, it is possible, albeit unlikely, to intentionally create a useful recursive property (using if statements).

Is there a way to get VS2008 to stop warning me about unreachable code?

I have a few config options in my application along the lines of
const bool ExecuteThis=true;
const bool ExecuteThat=false;
and then code that uses it like
if(ExecuteThis){ DoThis(); }
if(ExecuteThat){ DoThat(); } //unreachable code warning here
The thing is, we may make slightly different releases that do or don't ExecuteThis or ExecuteThat, and we want to be able to use consts so that we don't pay any speed penalty for these checks at run time. But I am tired of seeing warnings about unreachable code. I'm a person who likes to eliminate all warnings, but I can't do anything about these. Is there some option I can use to turn just these warnings off?
To disable:
#pragma warning disable 0162
To restore:
#pragma warning restore 0162
For more on #pragma warning, see MSDN.
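For example, wrapped around just the offending line from the question (the restore keeps the warning active for the rest of the file):

#pragma warning disable 0162 // unreachable code detected
if (ExecuteThat) { DoThat(); }
#pragma warning restore 0162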
Please note that the C# compiler is optimized enough to not emit unreachable code. This is called dead code elimination and it is one of the few optimizations that the C# compiler performs.
And you shouldn't willy-nilly disable the warnings. The warnings are a symptom of a problem. Please see this answer.
First of all, I agree with you, you need to get rid of all warnings. Every little warning you get, get rid of it, by fixing the problem.
Before I go on with what, on re-reading, amounts to what looks like a rant, let me emphasize that there doesn't appear to be any performance penalty to using code like this. Having used Reflector to examine the output, it appears that code "flagged" as unreachable isn't actually placed into the output assembly.
It is, however, checked by the compiler. This alone might be a good enough reason to disregard my rant.
In other words, the net effect of getting rid of that warning is just that, you get rid of the warning.
Also note that this answer is an opinion. You might not agree with my opinion, and want to use #pragma to mask out the warning message, but at least have an informed opinion about what that does. If you do, who cares what I think.
Having said that, why are you writing code that won't be reached?
Are you using consts instead of "defines"?
A warning is not an error. It's a note, for you, to go analyze that piece of code and figure out if you did the right thing. Usually, you haven't. In the case of your particular example, you're purposely compiling code that will, for your particular configuration, never execute.
Why is the code even there? It will never execute.
Are you confused about what the word "constant" actually means? A constant means "this will never change, ever, and if you think it will, it's not a constant". That's what a constant is. It won't, and can't, and shouldn't, change. Ever.
The compiler knows this, and will tell you that you have code, that due to a constant, will never, ever, be executed. This is usually an error.
Is that constant going to change? If it is, it's obviously not a constant, but something that depends on the output type (Debug, Release), and it's a "#define" type of thing, so remove it, and use that mechanism instead. This makes it clearer, to people reading your code, what this particular code depends on. Visual Studio will also helpfully gray out the code if you've selected an output mode that doesn't set the define, so the code will not compile. This is what the compiler definitions was made to handle.
On the other hand, if the constant isn't going to change, ever, for any reason, remove the code, you're not going to need it.
In any case, don't fall prey to the easy fix to just disable that warning for that piece of code, that's like taking aspirin to "fix" your back ache problems. It's a short-term fix, but it masks the problem. Fix the underlying problem instead.
To finish this answer, I'm wondering if there isn't an altogether different solution to your problem.
Often, when I see code that has the warning "unreachable code detected", they fall into one of the following categories:
Wrong (in my opinion) usage of const versus a compiler #define, where you basically say to the compiler: "This code, please compile it, even when I know it will not be used.".
Wrong, as in, just plain wrong, like a switch-case which has a case-block that contains both a throw + a break.
Leftover code from previous iterations, where you've just short-circuited a method by adding a return at some point, not deleting (or even commenting out) the code that follows.
Code that depends on some configuration setting (ie. only valid during Debug-builds).
If the code you have doesn't fall under any of the above settings, what is the specific case where your constant will change? Knowing that might give us better ways to answer your question on how to handle it.
What about using preprocessor statements instead?
#if ExecuteThis
DoThis();
#endif
#if ExecuteThat
DoThat();
#endif
Well, #pragma, but that is a bit grungy. I wonder if ConditionalAttribute would be better - i.e.
[Conditional("SOME_KEY")]
void DoThis() {...}
[Conditional("SOME_OTHER_KEY")]
void DoThat() {...}
Now calls to DoThis / DoThat are only included if SOME_KEY or SOME_OTHER_KEY are defined as symbols in the build ("conditional compilation symbols"). It also means you can switch between them by changing the configuration and defining different symbols in each.
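A small sketch of how that plays out (the symbol names SOME_KEY and SOME_OTHER_KEY are just placeholders; define them either with #define at the top of the file, as here, or under "Conditional compilation symbols" in the project's Build settings):

#define SOME_KEY
using System;
using System.Diagnostics;

class Config
{
    [Conditional("SOME_KEY")]
    public static void DoThis() { Console.WriteLine("this"); }

    [Conditional("SOME_OTHER_KEY")]
    public static void DoThat() { Console.WriteLine("that"); }

    static void Main()
    {
        DoThis(); // call is emitted: SOME_KEY is defined above
        DoThat(); // call is removed by the compiler: SOME_OTHER_KEY is not defined
    }
}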
The fact that you have the constants declared in code tells me that you are recompiling your code for each release you do; you are not using "constants" sourced from your config file.
So the solution is simple:
- set the "constants" (flags) from values stored in your config file
- use conditional compilation to control what is compiled, like this:
#define ExecuteThis
//#define ExecuteThat
public void myFunction() {
#if ExecuteThis
DoThis();
#endif
#if ExecuteThat
DoThat();
#endif
}
Then when you recompile you just uncomment the correct #define statement to get the right bit of code compiled. There are one or two other ways to declare your conditional compilation flags, but this just gives you an example and somewhere to start.
The quickest way to "Just get rid of it" without modifying your code would be to use
#pragma warning disable 0162
on the namespace, class or method where you want to suppress the warning.
For example, this won't raise the warning anymore:
#pragma warning disable 0162
using System;

namespace ConsoleApplication4
{
    public class Program
    {
        public const bool something = false;

        static void Main(string[] args)
        {
            if (something) { Console.WriteLine("Not something"); }
        }
    }
}
However, be warned that NO METHOD inside this namespace will raise the warning again... and, well, warnings are there for a reason (what if it happens when you did NOT plan the code to be unreachable?).
I guess a safer way would be to put the variables in a configuration file and read them from there at the beginning of the program; that way you don't even need to recompile to get your different versions/releases. Just change the app config file and go :D.
About the speed penalty... yes, doing it this way would incur a speed penalty compared to using const, but unless you are really worried about waiting 1/100 of a millisecond more, I would go for it.
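A minimal sketch of that idea, assuming an appSettings key named "ExecuteThis" in App.config (with the value "true" or "false") and a project reference to System.Configuration:

using System.Configuration;

// Read once at startup; defaults to false if the key is missing.
static readonly bool ExecuteThis =
    bool.Parse(ConfigurationManager.AppSettings["ExecuteThis"] ?? "false");

// ...
if (ExecuteThis) { DoThis(); } // no unreachable-code warning: the value is not a compile-time constant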
Here's a trick:
bool FALSE = false;
if (FALSE) { ...
This will prevent the warning. Wait, I know there's all this "You should not use that; why have code that is not executed?". Answer: because during development you often need to set up test code. Eventually you will get rid of it. It's important that the code is there during development, but not executed all the time.
Sometimes you want to remove code execution temporarily, for debugging purposes.
Sometimes you want to truncate execution of a function temporarily. You can use this to avoid warnings:
...code..
{ bool TRUE = true; if (TRUE) return; }
...more code...
These things avoid the warning. You might say that if it's temporary, one should keep the warning in. Yes... but again, maybe you should not check it in that way; temporarily, though, it is useful to get the warning out of the way, for example when spending all day debugging complex collision code.
So, you may ask, why does it matter? Well, these warnings get very annoying when I press F4 to go to the first error and instead get some damn ten warnings first while I'm knee deep in debugging.
Use the #pragma, you say. Well, that would be a good idea, except that I can't find a way to do that globally on my entire project.. or in a way that will work with Unity3D, which is what I code in C# for. Oh, how useful #include would be.
Never mind, use #if ! Well... yes... but sometimes they are not what you want. They make the code messy and unreadable. You have to bracket your code with them just right. Putting if(false) in front of a block is just so much easier... no need to delimit the block, the braces do that.
What I do is make a nice globally accessible FALSE and TRUE and use them as needed to avoid the warnings.
And if I want to check if I am actually using them, I can search for all references, or more crudely, but just as effectively, remove them and thus be forced to look at every occurrence of this little trick.
The easiest way is to stop writing unreachable code :D #DontDoThat
