C# Compiler should give warning but doesn't?

Someone on my team tried fixing a 'variable not used' warning in an empty catch clause.
try { ... } catch (Exception ex) { }
-> gives a warning about ex not being used. So far, so good.
The fix was something like this:
try { ... } catch (Exception ex) { string s = ex.Message; }
Seeing this, I thought "Just great, so now the compiler will complain about s not being used."
But it doesn't! There are no warnings on that piece of code and I can't figure out why. Any ideas?
PS. I know catch-all clauses that mute exceptions are a bad thing, but that's a different topic. I also know the initial warning is better removed by doing something like this, that's not the point either.
try { ... } catch (Exception) { }
or
try { ... } catch { }

In this case the compiler detects that s is written but not read, and deliberately suppresses the warning.
The reason, believe it or not, is that C# is a garbage-collected language.
How do you figure that?
Well, consider the following.
You have a program which calls a method DoIt() that returns a string. You do not have the source code for DoIt(), but you wish to examine in the debugger what its return value is.
Now in your particular case you are using DoIt() for its side effects, not its return value. So you say
DoIt(); // discard the return value
Now you're debugging your program and you go to look at the return value of DoIt() and it's not there because by the time the debugger breaks after the call to DoIt(), the garbage collector might have already cleaned up the unused string.
And in fact the managed debugger has no facility for "look at the thing returned by the previous method call". The unmanaged C++ debugger has that feature because it can look at the EAX register where the discarded return value is still sitting, but you have no guarantee in managed code that the returned value is still alive if it was discarded.
Now, one might argue that this is a useful feature and that the debugger team should add a feature whereby returned values are kept alive if there's a debugger breakpoint immediately following a method execution. That would be a nice feature, but I'm the wrong person to ask for it; go ask the debugger team.
What is the poor C# developer to do? Make a local variable, store the result in the local variable, and then examine the local in the debugger. The debugger does ensure that locals are not garbage collected aggressively.
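Concretely, the workaround is just (a minimal sketch, reusing the hypothetical DoIt from above):
// Instead of discarding the return value...
DoIt();

// ...store it in a local; the debugger keeps locals alive, so you can
// set a breakpoint on the next line and inspect 'result'.
string result = DoIt();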
So you do that, and then the compiler gives you a warning that you've got a local that is only written to and never read, because the thing doing the reading is not part of the program; it's the developer sitting there watching the debugger. That is a very irritating user experience!
Therefore we detect the situation where a non-constant value is being assigned to a local variable or field that is never read, and suppress that warning. If you change your code so that instead it says string s = "hello"; then you'll start getting the warning, because the compiler reasons: well, this can't possibly be someone working around the limitations of the debugger, because the value is right there where the developer can already read it without the debugger.
That explains that one. There are numerous other cases where we suppress warnings about variables that are never read from; a detailed exegesis of all the compiler's policies for when we report warnings and when we do not would take me quite some time to write up, so I think I will leave it at that.

The variable s is used... to hold a reference to ex.Message. If you had just string s; you would get the warning.

I think the person that answers this will need some insight into how the compiler works. However, something like FxCop would probably catch this.

Properties are just methods, and there's nothing stopping anyone from putting some code that does something in the ex.Message property. Therefore, while you may not be doing anything with s, calling ex.Message COULD potentially have value....
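To illustrate (a contrived sketch; this Chatty class is hypothetical, not from the question):
class Chatty
{
    private int _reads;

    // A property getter is just a method call in disguise; this one has
    // an observable side effect every time it is read.
    public string Message
    {
        get
        {
            _reads++;
            System.Console.WriteLine("Message has been read " + _reads + " times");
            return "hello";
        }
    }
}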

It's not really the job of a compiler to work out every single instance and corner case where a variable may or may not be used. Some are easy to spot, some are more problematic. Erring on the side of caution is the sensible thing to do (especially when warnings can be set to be treated as errors - imagine if software didn't compile just because the compiler thought you weren't using something you were). Microsoft's compiler team specifically says:
"...our guidance for customers who are interested in discovering unused elements in their code is to use FxCop. It can discover unused fields and much more interesting data about your code."
-Ed Maurer, Development Lead, Managed Compiler Platform

ReSharper would catch that.

Static analysis is somewhat limited in what it can accomplish today (although, as Eric pointed out, in this case it isn't a limitation: the compiler knows and suppresses the warning deliberately).
The new Code Contracts in .NET 4 enhances static checking considerably and one day I'm sure you'll get more help with obvious bugs like this.
If you've tried Code Contracts you'll know however that doing an exhaustive static analysis of your code is not easy - it can thrash for minutes after each compile. Will static analysis ever be able to find every problem like this at compile time? Probably not: see http://en.wikipedia.org/wiki/Halting_problem.


try-catch instead of if in edge cases

Would it be a good idea to replace the if statements with try-catch in the following usecases (performance and readability wise?):
Example 1
public static void AddInitializable(GameObject initializable)
{
    if (!HasInstance)
    {
        // this should only happen if I have forgotten to instantiate the GameManager manually
        Debug.LogWarning("GameManager not found.");
        return;
    }
    instance.initializables.Add(initializable);
    initializable.SetActive(false);
}

public static void AddInitializable2(GameObject initializable)
{
    try
    {
        instance.initializables.Add(initializable);
        initializable.SetActive(false);
    }
    catch
    {
        Debug.LogWarning("GameManager not found.");
    }
}
Example 2
public static void Init(int v)
{
    if (!HasInstance)
    {
        // this should happen only once
        instance = this;
    }
    instance.alj = v;
}

public static void Init2(int v)
{
    try
    {
        instance.alj = v;
    }
    catch
    {
        instance = this;
        Init(v);
    }
}
Edit:
Question 2: How many exceptions can be thrown before this stops being a performance win?
It depends.
Try-blocks are generally cheap, so when the exception is not thrown, that would be an acceptable solution. But: in your case, if the condition is not satisfied (meaning the thing was not initialized before the method was called), this is a programming error, not something that should ever happen in the finished program. It is perfectly valid for such errors to crash the program. That makes spotting and fixing the bugs much easier in development, and avoids silently hiding them (in Example 1, you silently do nothing, which might cause confusing behavior later).
So: If it would be a programming error, don't use an exception handler, nor a test (except maybe an Assert). Just let the program crash (with a NullReferenceException in this case).
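A sketch of the Assert variant (assuming Unity's Debug.Assert, which is compiled out of non-development builds, so it costs nothing in release):
public static void AddInitializable(GameObject initializable)
{
    // Fails loudly during development if the GameManager was never created.
    Debug.Assert(HasInstance, "GameManager not found.");

    instance.initializables.Add(initializable);
    initializable.SetActive(false);
}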
I would agree with PMF: Depends!
It depends on your specific use case, and in particular on whether something is your fault or something you can't control or predict.
So in general I'd say there are three ways to handle stuff that isn't behaving as expected:
A) Let an exception be thrown to indicate that this is really bad and there is no way to recover => and most probably crash your app.
This usually makes total sense at development time, because while debugging you explicitly want your app to crash so you can find and fix the issue.
This should happen for everything where the cause is basically something you messed up and can fix yourself. (In your case: instance not initialized correctly.)
B) Return something, e.g. false, to indicate that something went bad, but allow it to be handled by the calling code, which can e.g. try something else.
In my eyes this should be the preferred way of dealing with stuff you can't control yourself, like user input and other unpredictable conditions such as internet connectivity.
C) Just ignore it and do nothing at all.
This depends of course on what exactly you are doing, but it should happen almost never. For a user this can be extremely frustrating, and for you as a developer it makes debugging hard to impossible!
In combination with B it is of course valid, since something else will already have dealt with the issue.
And to add: in general, unless you work on some core / reused library, I would never throw exceptions myself, except when re-throwing caught ones to add additional debugging information. You can't control how others will use your library, so from your perspective that basically falls under user input ;)
Now all three options can be implemented with try-catch or with if checks, and it depends on your specific case which way you want to go.
Some thoughts of mine on this
Readability-wise I would prefer the if, if only because it makes clear exactly which condition is checked. When I see a try-catch, I can't tell at first glance at which point which exact exception might be thrown.
Thus using try-catch as a replacement for if just obscures what exactly is failing and makes debugging hard to impossible.
Exceptions are also quite expensive, so performance-wise I would say use if wherever possible.
There are cases though - and in my opinion these are the only ones where try - catch would be allowed - where you use a library and there simply is no way to prevent an exception.
Example: FileIO
The file you want to access does not exist.
-> You don't need try-catch for this (in my eyes that would be the lazy way). This is something you can and should check first, with if(!File.Exists(...)), so your program can correctly deal with and handle that case (e.g. you might want to tell the user instead of simply crashing or doing nothing).
The file is currently opened by another program, so you can't write to it.
-> There is no way to find this out beforehand. You will get an exception and can't avoid it. Here you do want try-catch, in order to still allow your code to deal with such a case (as before, e.g. tell the user instead of simply crashing).
But then how you actually deal with them again depends:
If you e.g. use some hardcoded paths and these files definitely should be there -> Exception, because it means you as the developer messed something up.
If the path comes from user input -> Catch, because this is something you as the developer can't control, but you don't want your app to just crash; rather, show the user a hint that they messed it up. A sketch combining both cases follows below.
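Putting the two FileIO cases together, a rough sketch (assuming System.IO and Unity-style logging; path and contents are placeholder names):
// Case 1 is predictable, so check it up front instead of catching.
if (!File.Exists(path))
{
    Debug.LogWarning("File not found: " + path);
    return;
}

// Case 2 (e.g. a sharing violation) can't be ruled out beforehand, so catch it.
try
{
    File.WriteAllText(path, contents);
}
catch (IOException e)
{
    Debug.LogWarning("Could not write " + path + ": " + e.Message);
}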
Now, in your use case, both of your Example 1 solutions seem pretty bad to me. You go with the last option, C, and just ignore the call - a user won't see the warning, and a developer might simply not notice it or might ignore it.
You definitely want an exception here if it means your app will not behave correctly, and you should not catch it at all!
In general there is no need for a special bool flag. I would rather go with
if (instance == null)
{
    Debug.LogError(...);
    return;
}
Because this is most probably a more severe error, not just a warning, so it at least gains visibility.
In your Example 2 you actually have a kind of lazy initialization anyway, so either way the call itself is basically valid.
In such a case, though, this is again something you can easily check, and I would not wait for an exception (especially not just any exception), because I already know that there will definitely be one at least once.
In my opinion this should rather be
if (instance == null)
{
    // I have put `???` because, well, in a "static" method there is no "this" so
    // I wonder where the instance should come from in that case ;)
    instance = ???;
}
instance.alj = v;
So you're kind of along the right lines here.
Unless you are in dire need of increasing performance, don't try to optimize; and if you do need to optimize, make sure you're doing it right (exceptions are more expensive than if statements, especially if you know they're going to happen).
The first example you've given, I can kind of get behind. You're making the assumption that something was initialized, and if it turns out it wasn't, throw an error. You're logging it, it's ok, you initialize it and you'll probably never have to worry about that exception again.
The second example you've given is a big no no. You should not use exceptions to fall into other logic in your application. Instead, in the Init() method, just always have the line 'instance = this', don't do the if statement. Once you know it's initialized, there should never be a reason for it to throw an exception when used.
Of course, don't go crazy with this, exceptions should only be used for exceptional circumstances. If you write your code and are thinking 'Hmm, so it could be either A scenario or B scenario, and in B scenario I want this to happen, so I'll throw an exception' that's completely the wrong line of thinking. Instead it should be 'Hmm, so all this will happen, but just in case something breaks, I'll put it in a try catch and log it, as who knows, I'm not infallible'
You can see how the above logic applies to your two examples below.
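Something along these lines (a sketch; GetOrCreateInstance is a hypothetical helper standing in for 'instance = this', since a static method has no this to assign):
public static void AddInitializable(GameObject initializable)
{
    try
    {
        instance.initializables.Add(initializable);
        initializable.SetActive(false);
    }
    catch (NullReferenceException e)
    {
        // Log it so the failure is visible instead of silently swallowed.
        Debug.LogWarning("GameManager not found: " + e);
    }
}

public static void Init(int v)
{
    // No if, no catch: initialization happens unconditionally, right here.
    instance = GetOrCreateInstance();
    instance.alj = v;
}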
From my point of view, this is not a good idea.
We usually use try-catch when we know what kind of exceptions can appear in the context, so a catch without an exception type is not good practice. Moreover, try-catch is only cheap if the exception rarely happens.
In your scenario, since you already know that the only problem is that the property HasInstance may be false, you can directly check it with an if statement. Using try-catch seems more like a cost here, although it works. It looks like you are expecting an error and just ignoring it because its message does not matter.
Besides, I see you are using Unity and creating a singleton GameManager, and I think the singleton pattern here might not be quite right.
For example, if you set up your scene and game objects properly, there is virtually no possibility that the instance does not exist :)
Exceptions are for when an "impossible" state occurs at your interfaces (not the keyword - just the word). If you have to try-catch inside your business logic, your design is compromised.
In this case, you would likely gain both extensibility and readability if you implement the slightly misnamed null-object pattern (it should really be called the default-object pattern), and simply make it impossible to pass a nulled interface to the method.
So:
public interface IGameObject
{
    void Activate(IGameObject initializable);
}

public class GameObjectDefault : IGameObject
{
    public void Activate(IGameObject initializable)
    {
        // Does nothing on purpose
    }
}

public class GameObjectReal : IGameObject
{
    private Instance _instance;

    public GameObjectReal(Instance instance)
    {
        _instance = instance;
    }

    public void Activate(IGameObject initializable)
    {
        _instance.initializables.Add(initializable);
        // Do whatever you need to do to the object
    }
}
This is pseudocode, because I can't see your whole system.
But this way, if you initialize all game objects as GameObjectDefault instances, Activate (or any other method) will simply do nothing.
Now there is no reason to check for null, and no reason for a try-catch. Your impossible state is now, literally, impossible.
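Usage might then look something like this (hypothetical wiring; ready, instance and other are placeholders):
IGameObject target = ready
    ? (IGameObject)new GameObjectReal(instance)
    : new GameObjectDefault();

// Always safe to call: it either does the work or deliberately no-ops.
target.Activate(other);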

Tiny stub of bogus code purely for the purpose of setting a breakpoint (that doesn't create a compiler warning)

This is a trivial question, but I find myself thinking about it all the time. Often when debugging, I want to break right after a certain line of code executes. Rather than putting the breakpoint on the next line of code (which may be a ways down due to large comment blocks), or putting it on the last line of code and hitting F10 to step over it after it breaks, I have this urge to put in a short stub line on which I can set my breakpoint.
In VBA I'd use DoEvents for this; the shortest thing in C# I've found that doesn't create an annoying compiler warning (the variable 'x' is declared but never used) is:
int x = 1; x++;
Is this about as good as you can get, or is there some other obvious approach to this I'm not aware of?
Note: I am aware of suppressing warnings via:
#pragma warning disable 0168 // get rid of 'variable is never used warning'
...but find it sporadically doesn't work.
For debugging purposes, what I always do is use System.Diagnostics.Debugger.Break(). In practice it's just like inserting a breakpoint on a statement, but it is much easier to determine its function after the fact, and it is maintained through source control between users and systems. It doesn't clutter your Breakpoints window. It also, helpfully enough, will not trigger when running in Release mode, allowing you to place these in areas of critical importance and leave them in without harming your customer releases.
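For example (a minimal sketch; balance is a placeholder):
if (balance < 0)
{
    // Pauses here whenever a debugger is attached, and documents intent
    // better than an anonymous breakpoint that lives only in the IDE.
    System.Diagnostics.Debugger.Break();
}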
As an alternate suggestion, following InBetween's comment:
If, instead, you are looking for a "harmless" statement that you can simply set a breakpoint on when you desire, Thread.Sleep(0) should be similar enough to your anecdotal VBA solution to suffice for your debugging purposes.
Thread.Sleep(0). This tells the system you want to forfeit the rest of the thread's timeslice and let another, waiting, thread run.
-- http://blogs.msmvps.com/peterritchie/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program/
This would be a less ideal solution in my mind than my first suggestion, but it may be more what you're looking for.
So you need to
write code which does nothing, but can accept a breakpoint, and
tell other programmers that it does nothing (so they don't try and "fix" it).
Make a function which does nothing:
// Call it anywhere, so you can put a breakpoint on the code.
// NoInlining keeps the JIT from optimizing the empty call away, so the
// breakpoint always binds (needs System.Runtime.CompilerServices).
[MethodImpl(MethodImplOptions.NoInlining)]
public static void DoNothing()
{
    // Any code which does nothing here
}
Then call it:
... some code ...
DoNothing();
... other code ...

Does this Resharper fix for disposed closure warning make any sense?

I'm working on getting rid of some warnings from a static code analysis.
In one specific case there was no disposing being done on a ManualResetEvent.
The code in question executes a Func on the main thread and blocks the calling thread for a certain number of milliseconds. I realize this sounds like a weird thing to do, but it's outside the scope of this question, so bear with me.
Suppose I add a using statement like so:
object result = null;
using (var completedEvent = new ManualResetEvent(false))
{
    _dispatcher.BeginInvoke((Action)(() =>
    {
        result = someFunc();
        completedEvent.Set(); // Here be dragons!
    }));
    completedEvent.WaitOne(timeoutMilliseconds);
    return result;
}
Now, I realize that this is likely to cause problems. I also happen to use Resharper and it warns me with the message "Access to disposed closure".
Resharper proposes to fix this by changing the offending line to:
if (completedEvent != null)
{
    completedEvent.Set();
}
Now, the proposed solution puzzles me. Under normal circumstances, there would be no reason why a variable would be set to null by a using statement. Is there some implementation detail to closures in .NET that would guarantee the variable to be null after the variable that has been closed over is disposed?
As a bonus question, what would be a good solution to the problem of disposing the ManualResetEvent?
You are mixing up ReSharper's "quick fixes" and "context actions". When ReSharper proposes to fix something, you'd most likely see a bulb there. You don't see a bulb here, because there are no quick fixes for this warning.
But aside from quick fixes, ReSharper also has "context actions", where it can do some routine tasks for you (think of them as small refactorings). When ReSharper has a context action for the code under the cursor, it shows you a pencil icon. Here you see a context action called "Check if something is not null". It has no relation to the warning, and there is no convention that a variable is set to null after the thing it refers to is disposed.
Also, when you press Alt-Enter, you'll see a crossed-out bulb to give you the impression that ReSharper doesn't suggest any quick fixes for this warning, but it can disable it with comments. In fact, that's the only way to make this warning go away easily. But I'd rewrite this piece of code instead.
I ran into exactly this only a few hours ago.
It's a false alarm. R# doesn't understand that execution will block until the event is set, even though this defers disposal to exactly the right moment.
IMO it's a good solution. Just ignore R#.
Suggest catching an ObjectDisposedException when you call completedEvent.Set() in case the timeout has expired and the event has been disposed. I don't think this will prevent the R# warning, but it's safe.
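Applied to the question's code, that suggestion might look like this (a sketch):
_dispatcher.BeginInvoke((Action)(() =>
{
    result = someFunc();
    try
    {
        completedEvent.Set();
    }
    catch (ObjectDisposedException)
    {
        // The wait timed out and the event was disposed before the
        // dispatcher got here; there is nothing left to signal.
    }
}));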
I think you must check for null, and moreover you must catch this exception.
Imagine what will happen if someFunc runs longer than timeoutMilliseconds.

Why doesn't the C# compiler stop properties from referring to themselves?

If I do this I get a System.StackOverflowException:
private string abc = "";

public string Abc
{
    get
    {
        return Abc; // Note the mistaken capitalization
    }
}
I understand why -- the property is referencing itself, leading to an infinite loop. (See previous questions here and here).
What I'm wondering (and what I didn't see answered in those previous questions) is why doesn't the C# compiler catch this mistake? It checks for some other kinds of circular reference (classes inheriting from themselves, etc.), right? Is it just that this mistake wasn't common enough to be worth checking for? Or is there some situation I'm not thinking of, when you'd want a property to actually reference itself in this way?
You can see the "official" reason in the last comment here.
Posted by Microsoft on 14/11/2008 at 19:52
Thanks for the suggestion for Visual Studio!
You are right that we could easily detect property recursion, but we can't guarantee that there is nothing useful being accomplished by the recursion. The body of the property could set other fields on your object which change the behavior of the next recursion, could change its behavior based on user input from the console, or could even behave differently based on random values. In these cases, a self-recursive property could indeed terminate the recursion, but we have no way to determine if that's the case at compile-time (without solving the halting problem!).
For the reasons above (and the breaking change it would take to disallow this), we wouldn't be able to prohibit self-recursive properties.
Alex Turner
Program Manager
Visual C# Compiler
Another point in addition to Alex's explanation is that we try to give warnings for code which does something that you probably didn't intend, such that you could accidentally ship with the bug.
In this particular case, how much time would the warning actually save you? A single test run. You'll find this bug the moment you test the code, because it always immediately crashes and dies horribly. The warning wouldn't actually buy you much of a benefit here. The likelihood that there is some subtle bug in a recursive property evaluation is low.
By contrast, we do give a warning if you do something like this:
int customerId;
...
this.customerId = this.customerId;
There's no horrible crash-and-die, and the code is valid code; it assigns a value to a field. But since this is nonsensical code, you probably didn't mean to do it. Since it's not going to die horribly, we give a warning that there's something here that you probably didn't intend and might not otherwise discover via a crash.
A property referring to itself does not always lead to infinite recursion and a stack overflow. For example, this works fine:
int count = 0;

public string Abc
{
    get
    {
        count++;
        if (count < 5) return Abc; // recurses a few times, then terminates
        return "Foo";
    }
}
The above is a dummy example, but I'm sure one could come up with useful recursive code that is similar. The compiler cannot determine whether infinite recursion will happen (halting problem).
Generating a warning in the simple case would be helpful.
They probably considered that it would unnecessarily complicate the compiler without any real gain.
You will discover this typo easily the first time you call this property.
First of all, you'll get a warning about the unused variable abc.
Second, there is nothing bad in the recursion, provided that it's not endless. For example, the code might adjust some inner variables and then call the same getter recursively. However, there is no easy way for the compiler to prove that a given recursion is or isn't endless (in general this is the halting problem). The compiler could catch some easy cases, but then consumers would be surprised that the more complicated cases get through the compiler's checks.
The other cases it checks for (except recursive constructors) are invalid IL.
In addition, all of those cases, even recursive constructors, are guaranteed to fail.
However, it is possible, albeit unlikely, to intentionally create a useful recursive property (using if statements).

Is there a way to get VS2008 to stop warning me about unreachable code?

I have a few config options in my application along the lines of
const bool ExecuteThis=true;
const bool ExecuteThat=false;
and then code that uses it like
if(ExecuteThis){ DoThis(); }
if(ExecuteThat){ DoThat(); } //unreachable code warning here
The thing is, we may make slightly different releases that don't ExecuteThis or ExecuteThat, and we want to be able to use consts so that we don't pay any speed penalty for these checks at run time. But I am tired of seeing warnings about unreachable code. I'm a person who likes to eliminate all warnings, but I can't do anything about these. Is there some option I can use to turn just these warnings off?
To disable:
#pragma warning disable 0162
To restore:
#pragma warning restore 0162
For more on #pragma warning, see MSDN.
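Applied narrowly to the lines from the question, that might look like:
if (ExecuteThis) { DoThis(); }
#pragma warning disable 0162
if (ExecuteThat) { DoThat(); } // CS0162 suppressed for just this statement
#pragma warning restore 0162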
Please note that the C# compiler is optimized enough to not emit unreachable code. This is called dead code elimination and it is one of the few optimizations that the C# compiler performs.
And you shouldn't willy-nilly disable the warnings. The warnings are a symptom of a problem. Please see this answer.
First of all, I agree with you, you need to get rid of all warnings. Every little warning you get, get rid of it, by fixing the problem.
Before I go on with what, on re-reading, amounts to what looks like a rant, let me emphasize that there doesn't appear to be any performance penalty to using code like this. Having used Reflector to examine the output, it appears that code "flagged" as unreachable isn't actually placed into the output assembly.
It is, however, checked by the compiler. This alone might be a good enough reason to disregard my rant.
In other words, the net effect of getting rid of that warning is just that: you get rid of the warning.
Also note that this answer is an opinion. You might not agree with my opinion, and want to use #pragma to mask out the warning message, but at least have an informed opinion about what that does. If you do, who cares what I think.
Having said that, why are you writing code that won't be reached?
Are you using consts instead of "defines"?
A warning is not an error. It's a note, for you, to go analyze that piece of code and figure out if you did the right thing. Usually, you haven't. In the case of your particular example, you're purposely compiling code that will, for your particular configuration, never execute.
Why is the code even there? It will never execute.
Are you confused about what the word "constant" actually means? A constant means "this will never change, ever, and if you think it will, it's not a constant". That's what a constant is. It won't, and can't, and shouldn't, change. Ever.
The compiler knows this, and will tell you that you have code, that due to a constant, will never, ever, be executed. This is usually an error.
Is that constant going to change? If it is, it's obviously not a constant, but something that depends on the output type (Debug, Release), and it's a "#define" type of thing, so remove it, and use that mechanism instead. This makes it clearer, to people reading your code, what this particular code depends on. Visual Studio will also helpfully gray out the code if you've selected an output mode that doesn't set the define, so the code will not compile. This is what the compiler definitions was made to handle.
On the other hand, if the constant isn't going to change, ever, for any reason, remove the code, you're not going to need it.
In any case, don't fall prey to the easy fix to just disable that warning for that piece of code, that's like taking aspirin to "fix" your back ache problems. It's a short-term fix, but it masks the problem. Fix the underlying problem instead.
To finish this answer, I'm wondering if there isn't an altogether different solution to your problem.
Often, when I see code that has the warning "unreachable code detected", it falls into one of the following categories:
Wrong (in my opinion) usage of const versus a compiler #define, where you basically say to the compiler: "This code, please compile it, even when I know it will not be used.".
Wrong, as in, just plain wrong, like a switch-case which has a case-block that contains both a throw + a break.
Leftover code from previous iterations, where you've just short-circuited a method by adding a return at some point, not deleting (or even commenting out) the code that follows.
Code that depends on some configuration setting (ie. only valid during Debug-builds).
If your code doesn't fall into any of the above categories, what is the specific case where your constant will change? Knowing that might give us better ways to answer your question on how to handle it.
What about using preprocessor statements instead?
#if ExecuteThis
    DoThis();
#endif
#if ExecuteThat
    DoThat();
#endif
Well, #pragma works, but it is a bit grungy. I wonder if ConditionalAttribute would be better - i.e.
[Conditional("SOME_KEY")]
void DoThis() {...}

[Conditional("SOME_OTHER_KEY")]
void DoThat() {...}
Now calls to DoThis / DoThat are only included if SOME_KEY or SOME_OTHER_KEY are defined as symbols in the build ("conditional compilation symbols"). It also means you can switch between them by changing the configuration and defining different symbols in each.
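A self-contained sketch of how the call sites drop out (the symbol names are arbitrary):
#define SOME_KEY // or set it under "conditional compilation symbols" in the build settings

using System;
using System.Diagnostics;

static class Features
{
    [Conditional("SOME_KEY")]
    static void DoThis() { Console.WriteLine("this"); }

    [Conditional("SOME_OTHER_KEY")]
    static void DoThat() { Console.WriteLine("that"); }

    static void Main()
    {
        DoThis(); // compiled in: SOME_KEY is defined above
        DoThat(); // call site removed: SOME_OTHER_KEY is not defined
    }
}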
The fact that you have the constants declared in code tells me that you are recompiling your code for each release you make; you are not using "constants" sourced from your config file.
So the solution is simple:
- set the "constants" (flags) from values stored in your config file
- use conditional compilation to control what is compiled, like this:
#define ExecuteThis
//#define ExecuteThat

public void myFunction()
{
#if ExecuteThis
    DoThis();
#endif
#if ExecuteThat
    DoThat();
#endif
}
Then when you recompile you just uncomment the correct #define statement to get the right bit of code compiled. There are one or two other ways to declare your conditional compilation flags, but this just gives you an example and somewhere to start.
The quickest way to "just get rid of it" without modifying your code would be to use
#pragma warning disable 0162
on the namespace, class, or method where you want to suppress the warning.
For example, this won't produce the warning anymore:
#pragma warning disable 0162
namespace ConsoleApplication4
{
    public class Program
    {
        public const bool something = false;

        static void Main(string[] args)
        {
            if (something) { Console.WriteLine("Not something"); }
        }
    }
}
However, be warned that NO METHOD inside this namespace will raise the warning again... and, well, warnings are there for a reason (what if it happens somewhere you did NOT plan to be unreachable?).
I guess a safer way would be to put the values in a configuration file and read them at the beginning of the program; that way you don't even need to recompile to get your different versions/releases! Just change the app file and go :D.
About the speed penalty: yes, doing it this way would incur a speed penalty compared to using const, but unless you are really worried about waiting 1/100 of a millisecond more, I would go for it that way.
Here's a trick:
bool FALSE = false;
if (FALSE) { ...
This will prevent the warning. Wait. I know there's all this "You should not use that; why have code that is not executed?". Answer: because during development you often need to set up test code. Eventually you will get rid of it. It's important that the code is there during development, but not executed all the time.
Sometimes you want to remove code execution temporarily, for debugging purposes.
Sometimes you want to truncate execution of a function temporarily. You can use this to avoid warnings:
...code..
{ bool TRUE = true; if (TRUE) return; }
...more code...
These things avoid the warning. You might say that, if it's temporary, one should keep the warning in. Yes... but again... maybe you should not check it in that way; temporarily, though, it is useful to get the warning out of the way, for example when spending all day debugging complex collision code.
So, you may ask, why does it matter? Well, these warnings get very annoying when I press F4 to go to the first error and instead get some damn 10 warnings first, while I am knee-deep in debugging.
Use the #pragma, you say. Well, that would be a good idea, except that I can't find a way to do it globally on my entire project... or in a way that works with Unity3D, which is what I code C# for. Oh, how useful #include would be.
Never mind, use #if! Well... yes... but sometimes they are not what you want. They make the code messy and unreadable, and you have to bracket your code with them just right. Putting if(false) in front of a block is just so much easier... no need to delimit the block; the braces do that.
What I do is make nice globally accessible FALSE and TRUE flags and use them as needed to avoid the warnings.
And if I want to check whether I am actually using them, I can search for all references, or, more crudely but just as effectively, remove them and be forced to look at every occurrence of this little trick.
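A minimal version of those globals (sketch; RunExperimentalChecks is a placeholder):
// Deliberately fields, not consts: the compiler can't prove the branch is
// dead, so no CS0162 "unreachable code" warning is produced.
public static class Dbg
{
    public static bool FALSE = false;
    public static bool TRUE = true;
}

// Elsewhere, parked test code:
if (Dbg.FALSE) { RunExperimentalChecks(); }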
The easiest way is to stop writing unreachable code :D #DontDoThat
