If you think there is a possibility of getting a null pointer exception, should you use an if statement to make sure the variable is not null, or should you just catch the exception?
I don't see any difference, since you can put your logic to deal with the null pointer in the if statement or in the catch block, so which one is best practice?
I would say ALWAYS use logic to prevent the exception, not try/catch.
Try/catch should be used for the case where you validate but some strange thing happens anyway and something causes an error, so you can handle it more gracefully.
There is no single answer that will suffice here, it depends.
Let's take a few scenarios so you can see what I mean.
Scenario: Method that takes a reference type parameter that does not accept null
You're defining a method, it takes a reference type parameter, say a stream object, and you don't want to accept null as a legal input parameter.
In this case, I would say that the contract is that null is not a valid input. If some code does in fact call that method with a null reference, the contract is broken.
This is an exception, more specifically, it's an ArgumentNullException.
Example:
public void Write(Stream stream)
{
    if (stream == null)
        throw new ArgumentNullException("stream");

    // ... write to the stream ...
}
I would definitely not just let the code execute until it tries to dereference the stream and crashes with a NullReferenceException, because at that point I have lost the ability to react at the point where I still know the cause.
Q. Why can't I return false instead of throwing an exception?
A. Because a return value is easy to silently ignore. Do you really want your "Write" methods to just silently skip writing because you made a snafu in the calling code, passing the wrong stream object or something that cannot be written to? I wouldn't!
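To illustrate the point, here is a minimal sketch; TryWrite is a hypothetical bool-returning variant, not part of the original example:
using System;
using System.IO;

class Demo
{
    // Hypothetical bool-returning variant: failures are easy to drop.
    static bool TryWrite(Stream stream)
    {
        if (stream == null) return false;   // silently reports failure
        // ... write to the stream ...
        return true;
    }

    static void Main()
    {
        Stream stream = null;
        TryWrite(stream);   // compiles and runs: the failure vanishes
        // A throwing Write(stream) would have stopped us right here instead.
    }
}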
Scenario: Method returns a reference to an object, sometimes there is no object
In this case the contract is that null is a legal result. In my opinion, null is something to avoid because it is quite hard to make sure you handle it correctly everywhere, but sometimes it is the best way.
In this case I would use if statements around the result, to ensure I don't crash when the null reference comes back.
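A minimal sketch of that pattern; the repository and Customer type are hypothetical names used for illustration:
using System;
using System.Collections.Generic;

class Customer
{
    public string Name { get; set; }
}

class CustomerRepository
{
    private readonly Dictionary<int, Customer> customers = new Dictionary<int, Customer>();

    // Contract: null is a legal result, meaning "no such customer".
    public Customer FindCustomer(int id)
    {
        Customer customer;
        return customers.TryGetValue(id, out customer) ? customer : null;
    }
}

class Program
{
    static void Main()
    {
        var repository = new CustomerRepository();
        Customer customer = repository.FindCustomer(42);

        if (customer != null)              // null is expected: guard with if
            Console.WriteLine(customer.Name);
        else
            Console.WriteLine("No such customer.");
    }
}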
Generalisation
If you take a close look at the above two scenarios, you'll note one thing:
In both cases it comes down to what is being expected, what the contract is.
If the contract says "not null", throw an exception. Don't fall back to the old-style API way of returning false, because an exceptional problem should not be silently ignored, and littering the code with if statements to ensure every method call succeeds does not make for readable code.
If the contract says "null is entirely possible", handle it with if statements.
Advertising
For getting a better grip on null problems, I would also urge you to get ReSharper for you and your team. Please note, though, that this answer can be applied to any type of exception and error handling; the same principles apply.
With it comes attributes you can embed into your project(s) to flag these cases, and then ReSharper will highlight the code in question.
public void Write([NotNull] Stream stream)
[CanBeNull]
public SomeObject GetSomeObject()
To read more about the contract attributes that ReSharper uses, see
ReSharper NullReferenceException Analysis and Its Contracts
Contract Annotations in ReSharper 7
Well. Exceptions are just that: exceptions. They are thrown when something unforeseen has happened and should not be part of the normal program flow.
And that's what is happening here. You expected the argument to be specified when it's not. That is unexpected, and you should therefore throw your own exception informing the user of that. If you want to get bonus points you can also include the reason WHY the argument must be specified (if it's not obvious).
I've written a series of posts about exceptions: http://blog.gauffin.org/2013/04/what-is-exceptions/
From a performance standpoint it really depends on what you're doing. The performance impact of a try/catch block when no exception is thrown is minimal (and if you really need those last few percent of performance, you should probably rewrite that part of your code in C++ anyway). Throwing exceptions does have a major impact on simpler operations such as string manipulation; but once you get file/database operations in the loop, they're so much slower that the exception again becomes a trivial penalty. Throwing across an AppDomain will have a non-trivial impact on just about anything, though.
Performance in operations/second:

Mode/operation               Empty   String   File  Database  Complex
No exception            17,748,206  267,300  2,461       877      239
Catch without exception 15,415,757  261,456  2,476       871      236
Throw                      103,456   68,952  2,236       864      236
Rethrow original            53,481   41,889  2,324       852      230
Throw across AppDomain       3,073    2,942    930       574      160
Additional test results, along with the source for the tests, are available from the article Performance implications of Exceptions in .NET.
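For context, here is a rough sketch of how numbers like these can be measured; the iteration count and the empty workload are assumptions on my part, and the real methodology is in the linked article:
using System;
using System.Diagnostics;

class ExceptionBenchmark
{
    const int Iterations = 100000;

    static void DoNothing() { }

    static void Main()
    {
        // Baseline: plain call, no try/catch.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) DoNothing();
        Console.WriteLine("No exception:    {0:N0} ops/s", Iterations / sw.Elapsed.TotalSeconds);

        // Catch in place, but nothing thrown.
        sw.Restart();
        for (int i = 0; i < Iterations; i++)
        {
            try { DoNothing(); } catch (Exception) { }
        }
        Console.WriteLine("Catch, no throw: {0:N0} ops/s", Iterations / sw.Elapsed.TotalSeconds);

        // Throw and catch on every iteration.
        sw.Restart();
        for (int i = 0; i < Iterations; i++)
        {
            try { throw new InvalidOperationException(); }
            catch (InvalidOperationException) { }
        }
        Console.WriteLine("Throw:           {0:N0} ops/s", Iterations / sw.Elapsed.TotalSeconds);
    }
}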
I would rather suggest you use an if statement for NullReferenceException. For other exceptions, try/catch should be good enough.
The reason I suggest an if statement for NullReferenceException is that C# will not tell you which variable is null. If that line has more than one object that could be null, you lose track. If you use an if statement, you can add better logging to capture enough information.
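For example (Order, Customer, and Address are hypothetical types used for illustration): a chained dereference produces one indistinguishable NullReferenceException, while explicit checks can report exactly which reference was null.
using System;

class Address { public string City; }
class Customer { public Address Address; }
class Order { public Customer Customer; }

class Program
{
    static string GetCity(Order order)
    {
        // A chained dereference gives one indistinguishable exception:
        //     return order.Customer.Address.City;
        // The NullReferenceException would not say which link was null.

        // Explicit checks pinpoint (and could log) the failing reference:
        if (order == null)
            throw new ArgumentNullException("order");
        if (order.Customer == null)
            throw new InvalidOperationException("order.Customer is null");
        if (order.Customer.Address == null)
            throw new InvalidOperationException("order.Customer.Address is null");

        return order.Customer.Address.City;
    }

    static void Main()
    {
        var order = new Order();   // Customer left null on purpose
        try { Console.WriteLine(GetCity(order)); }
        catch (Exception ex) { Console.WriteLine(ex.Message); }   // "order.Customer is null"
    }
}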
The main question is whether it is a good idea to have methods returning null at all. Personally, I do not have any problem with this, but as soon as you try to access members of an object returned from such a method and you forget to check whether it is assigned, it becomes an issue.
Ken has a good answer about this:
If you are always expecting to find a value then throw the exception
if it is missing. The exception would mean that there was a problem.
If the value can be missing or present and both are valid for the
application logic then return a null.
See this discussion about this issue:
Returning null is usually the best idea if you intend to indicate that
no data is available.
An empty object implies data has been returned, whereas returning null
clearly indicates that nothing has been returned.
Additionally, returning a null will result in a null exception if you
attempt to access members in the object, which can be useful for
highlighting buggy code - attempting to access a member of nothing
makes no sense. Accessing members of an empty object will not fail
meaning bugs can go undiscovered.
Some further reading:
No Null Beyond Method Scope
Should We Return Null From Our Methods?
Using try/catch for these statements is not a good idea, because wrapping code in try/catch merely gives the impression that errors will not terminate the application. If you are sure about what kind of error can occur, you can trap that error at that point, so that no unknown errors slip through. For example:
string name = null;
Here I am going to use the name variable, and I am sure that dereferencing it (for example via name.Length) will throw a NullReferenceException:
try
{
    Console.WriteLine("Length = {0}", name.Length);
}
catch (NullReferenceException nex)
{
    // handle the null reference
}
catch (Exception ex)
{
    // handle other errors to prevent the application from crashing
}
This is not good practice when you can handle this kind of error with a check and make your code more readable, like:
if (name != null)
    Console.WriteLine("Length = {0}", name.Length);
In my experience, using if is better, but only if you actually expect a null reference. Without any code or context it's difficult to say when one option is better than the other.
There's also a matter of optimization: code inside try/catch blocks may not be optimized as aggressively.
In general, try/catch blocks are great because they will break (move to the catch statement) whenever the exception occurs, whereas if/else blocks rely on you predicting when the error will happen.
Also, a catch block will keep your code from halting when an error is hit.
It's always better to use try/catch than if/else.
Exceptions are of two types, namely handled and unhandled exceptions.
Even when a function throws, you can handle the exception if you want to.
A handled exception always allows you to write some implementation inside the catch block,
e.g. an alert message, or a fallback function to run when such an exception occurs.
A lot of times when reading source code I see something like this:
public void Foo(Bar bar)
{
if (bar == null) return;
bar.DoSomething();
}
I do not like this, but I appear to be in the wrong, as this form of defensive programming is considered good. Is it though? For example, why is bar null to begin with? Isn't doing checks like this akin to applying a bandage to a problem rather than solving the real problem? Not only does it complicate functions with additional lines of code, but it also prevents the programmer from seeing potential bugs.
Here's another example:
public void Foo(int x)
{
int clientX = Math.Max(x, 0); // Ensures x is never negative
}
Others look at that and see defensive programming, but I see future bugs: a programmer accidentally passes a negative value, the program suddenly breaks, and no one knows why, because this little bit of logic swallowed the potentially revealing exception.
Now, please do not confuse checking if user input is valid versus what I am asking here. Obviously user input should be checked. What I am asking only pertains to code that does not interact with the user or his or her input.
This (int clientX = Math.Max(x, 0);) is NOT defensive programming; it is masking potential problems!
Defensive programming would be
if (x < 0)
    throw new Exception("whatever"); // or return false or...
And defensive programming is absolutely recommended... you never know how this code will be called in the future, so you make sure that it handles anything appropriately, i.e. things it is unable to handle must be filtered out as early as possible and the caller must be "notified" (for example by a meaningful exception).
You check for nulls because attempting to have a null object perform an operation will trigger an exception. "Nothing" cannot do "something." You code like this because you can never know 100% of the time what state your application will be in.
That being said, there are good and bad ways of coding against invalid states, and those examples you gave aren't exactly "good" ways, although it's hard to say when taken out of context.
A lot of the time you might be building a component that will be used by different applications, with input supplied by different programmers with different coding styles. Some may not perform thorough validation on data passed into your component/method, so defensive programming in this situation is a good way to catch such problems, regardless of what the user of the component does.
PS: one thing though, usually you would not just return from the method as shown above; you would throw an appropriate exception, maybe an ArgumentNullException or something like that.
Probably you see this more often:
public void Foo(Bar bar)
{
if (bar == null) throw new ArgumentNullException("bar");
bar.DoSomething();
}
This way, if someone did supply a null value as parameter (maybe as a result of some other method), you don't see a NullReferenceException from "somewhere" in your code, but an exception that states the problem more clearly.
Simply returning on invalid input is akin to a silent try/catch in my opinion.
I still validate data coming into my functions from other pieces of code, but always throw an appropriate exception when I encounter invalid input.
Something like
int clientX = Math.Max(x, 0)
could have really bad effects in some cases. Suppose x comes in negative because of a fault in some other part of the program; this clamp would let that error propagate silently. I would rather suggest you log and throw an exception (of some type specific to your business logic) when the undesirable situation occurs.
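A sketch of the log-and-throw alternative, assuming Trace logging is acceptable; the Widget class is a hypothetical host for the method:
using System;
using System.Diagnostics;

class Widget
{
    public void Foo(int x)
    {
        if (x < 0)
        {
            // Surface the fault instead of clamping it away with Math.Max.
            Trace.TraceError("Foo received a negative x: {0}", x);
            throw new ArgumentOutOfRangeException("x", x, "x must be non-negative");
        }

        int clientX = x; // safe to use: the invalid case was rejected above
        Console.WriteLine(clientX);
    }
}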
Is there any behavioural difference between:
if (s == null) // s is a string
{
throw new NullReferenceException();
}
And:
try
{
Console.WriteLine(s);
}
catch (NullReferenceException Ex)
{
    // logic in here
}
Both raise an exception for the null object if s is null. The first example is more readable, as it shows exactly where the error occurs (the exception is thrown right next to the check that detects the problem).
I have seen this coding style a lot on various blogs by various coders of all sorts of skill levels, but why not just perform the main logic after checking that s is not null, and thus save the exception from ever being raised? Is there a downside to this approach?
Thanks
No, Console.WriteLine(null) won't throw an exception. It will just print nothing out. Now assuming you meant something like:
Console.WriteLine(s.Length);
then it makes sense... and you should use the first form. Exceptions should occur when you can't predict them ahead of time with your current information. If you can easily work out that something's wrong, it makes no sense to try an operation which is bound to fail. It leads to code which is harder to understand and performs worse.
So NullReferenceException, ArgumentNullException and the like shouldn't be caught unless they're due to a nasty API which sometimes throws exceptions which you can handle, but which shouldn't really be being thrown in the first place. This is why in Code Contracts, the default behaviour for a failed contract is to throw an exception which you can't catch explicitly, other than by catching everything (which is typically somewhere at the top of the stack).
As Jon Skeet already mentioned, Console.WriteLine (null) won't throw an exception.
Next to that, I'd like to say that you should 'fail fast'. That means you have to put 'guard' clauses in your methods and check whether the arguments given to them can be considered valid.
This allows you to throw an exception yourself and give an additional message which will be helpful when debugging. The message can give a clear indication of what was wrong, which is much handier than being faced with a NullReferenceException that has been thrown without any good information in its Message property.
If you are writing a class library there may be occasions when you know that if a certain parameter contains a null value, that may cause trouble further down the line. In those cases I usually find it to be a good idea to throw an exception (even though I would probably use ArgumentNullException for that case) to make the user of the class library aware of this as early and clearly as possible.
Exceptions are not always a bad thing.
Jon Skeet is right, but more generally, it's all a question of semantics.
If the situation has some applicative meaning (number out of bound, date of birth in the future, etc) you may want to test for it before doing any operation and throw a custom exception (that is one with meaning for your application).
If the situation is truly "exceptional", just write the code as if the given value were correct. See, if you put in the test, you will pay for it every time, knowing that the runtime will do the check anyway when it needs to throw an exception. From a performance point of view, if the error has a statistically small occurrence, the extra test makes no sense.
If you're taking a Design By Contract type approach to things then a piece of code can specify that it throws exceptions in order to specify its contract and to enforce it. The other half is, of course, calling code recognising the contract and fulfilling it.
In this case it would mean that if you know a method will throw an exception if you pass in null (i.e. its contract is that you don't pass nulls) then you should check before calling it.
Jon Skeet says that the method won't throw an exception anyway. That may or may not be true, but the principle of guarding a method's contract stands (which I believe was the point of your question).
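A sketch of both halves of such a contract; Repository, Save, and Document are hypothetical names:
using System;

class Document { }

class Repository
{
    // Contract: doc must not be null; the method enforces it.
    public void Save(Document doc)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");
        // ... persist the document ...
    }
}

class Caller
{
    public void SaveIfPresent(Repository repo, Document doc)
    {
        // The caller knows the contract, so it checks before calling
        // rather than catching the ArgumentNullException afterwards.
        if (doc != null)
            repo.Save(doc);
    }
}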
On a regular basis, I validate my function arguments:
public static void Function(int i, string s)
{
Debug.Assert(i > 0);
Debug.Assert(s != null);
Debug.Assert(s.Length > 0);
}
Of course the checks are "valid" in the context of the function.
Is this common industry practice? What is common practice concerning function argument validation?
The accepted practice is as follows if the values are not valid or will cause an exception later on:
if (i <= 0)
    throw new ArgumentOutOfRangeException("i", "parameter i must be greater than 0");

if (string.IsNullOrEmpty(s))
    throw new ArgumentNullException("s", "the parameter s needs to be set ...");
So the list of basic argument exceptions is as follows:
ArgumentException
ArgumentNullException
ArgumentOutOfRangeException
What you wrote are preconditions, an essential element of Design by Contract. Google (or "StackOverflow":) that term and you'll find quite a lot of good information about it, and some bad information too. Note that the method also includes postconditions and the concept of the class invariant.
Let's leave it clear that assertions are a valid mechanism.
Of course, they're usually (not always) not checked in Release mode, so this means that you have to test your code before releasing it.
If assertions are left enabled and an assertion is violated, the standard behaviour in some languages that use assertions (and in Eiffel in particular) is to throw an assertion violation exception.
Assertions left unchecked are not a convenient or advisable mechanism if you're publishing a code library, nor (obviously) a way to validate possibly incorrect direct input. If you have "possibly incorrect input" you have to design, as part of the normal behaviour of your program, an input validation layer; but you can still freely use assertions in the internal modules.
Other languages, like Java, have more of a tradition of explicitly checking arguments and throwing exceptions if they're wrong, mainly because these languages don't have a strong "assert" or "design by contract" tradition.
(It may seem strange to some, but I find the differences in tradition respectable, and not necessarily evil.)
See also this related question.
You should not be using asserts to validate data in a live application. It is my understanding that asserts are meant to test whether the function is being used in the proper way, or that the function is returning the proper value, i.e. that the value you are getting is what you expected. They are used a lot in testing frameworks. They are meant to be turned off when the system is deployed, as they are slow. If you want to handle invalid cases, you should do so explicitly, as the poster above mentioned.
Any code that is callable over the network or via inter-process communication absolutely must have parameter validation, because otherwise it's a security vulnerability; and you have to throw an exception, because Debug.Assert just will not do: it only runs in debug builds.
Any code that other people on your team will use should also have parameter validation, just because it will help them know it's their bug when they pass you an invalid value; again you should throw exceptions, this time because you can add a nice description to the exception explaining what they did wrong and how to fix it.
Debug.Assert in your function is just to help YOU debug. It's a nice first line of defense, but it's not "real" validation.
For public functions, especially API calls, you should be throwing exceptions. Consumers would probably appreciate knowing that there was a bug in their code, and an exception is the guaranteed way of doing it.
For internal or private functions, Debug.Assert is fine (but not necessary, IMO). You won't be taking in unknown parameters, and your tests should catch any invalid values by expected output. But, sometimes, Debug.Assert will let you zero in on or prevent a bug that much quicker.
For public functions that are not API calls, or internal methods subject to other folks calling them, you can go either way. I generally prefer exceptions for public methods, and (usually) let internal methods do without exceptions. If an internal method is particularly prone to misuse, then an exception is warranted.
While you want to validate arguments, you don't want four levels of validation that you have to keep in sync (and pay the performance penalty for). So validate at the external interface, and trust that you and your co-workers are able to call functions appropriately and/or fix the bug that inevitably results.
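One way to sketch that layering (the Parser class and its methods are my own hypothetical names): the public entry point throws, and the internal helper merely asserts.
using System;
using System.Diagnostics;

public class Parser
{
    // External interface: full validation, real exceptions.
    public int CountWords(string text)
    {
        if (text == null)
            throw new ArgumentNullException("text");
        return CountWordsCore(text);
    }

    // Internal layer: input was already validated above, so an assert is enough.
    private int CountWordsCore(string text)
    {
        Debug.Assert(text != null, "caller should have validated text");
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}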
Most of the time I don't use Debug.Assert; I would do something like this:
public static void Function(int i, string s)
{
    if (i <= 0 || String.IsNullOrEmpty(s))
        throw new ArgumentException("blah blah");
}
WARNING: This is air code, I haven't tested it.
You should use Assert to validate programmatic assumptions; that is, for the situation where
you're the only one calling that method
it should be an impossible state to get into
The Assert statements will allow you to double check that the impossible state is never reached. Use this where you would otherwise feel comfortable without validation.
For situations where the function is given bad arguments, but you can see that it's not impossible for it to receive those values (e.g. when someone else could call that code), you should throw exceptions (a la #Nathan W and #Robert Paulson) or fail gracefully (a la #Srdjan Pejic).
I try not to use Debug.Assert; rather, I write guards. If the function parameter is not of the expected value, I exit the function, like this:
public static void Function(int i, string s)
{
if(i <= 0)
{
/*exit and warn calling code */
}
}
I find this reduces the amount of wrangling that needs to happen.
I won't speak to industry standards, but you could combine the bottom two asserts into a single line:
Debug.Assert(!String.IsNullOrEmpty(s));
I've just started skimming 'Debugging MS .Net 2.0 Applications' by John Robbins, and have become confused by his evangelism for Debug.Assert(...).
He points out that well-implemented Asserts store the state, somewhat, of an error condition, e.g.:
Debug.Assert(i > 3, "i > 3", "This means I got a bad parameter");
Now, personally, it seems crazy to me that he so loves restating his test without an actual sensible 'business logic' comment, perhaps "i <= 3 must never happen because of the flobittyjam widgitification process".
So, I think I get Asserts as a kind-of low-level "Let's protect my assumptions" kind of thing... assuming that one feels this is a test one only needs to do in debug - i.e. you are protecting yourself against colleague and future programmers, and hoping that they actually test things.
But what I don't get is, he then goes on to say that you should use assertions in addition to normal error handling; now what I envisage is something like this:
Debug.Assert(i > 3, "i must be greater than 3 because of the flibbity widgit status");
if (i <= 3)
{
throw new ArgumentOutOfRangeException("i", "i must be > 3 because... i=" + i.ToString());
}
What have I gained by the Debug.Assert repetition of the error condition test? I think I'd get it if we were talking about debug-only double-checking of a very important calculation...
double interestAmount = loan.GetInterest();
Debug.Assert(debugInterestDoubleCheck(loan) == interestAmount, "Mismatch on interest calc");
...but I don't get it for parameter tests which are surely worth checking (in both DEBUG and Release builds)... or not. What am I missing?
Assertions are not for parameter checking. Parameter checking should always be done (and precisely according to what pre-conditions are specified in your documentation and/or specification), and the ArgumentOutOfRangeException thrown as necessary.
Assertions are for testing for "impossible" situations, i.e., things that you (in your program logic) assume are true. The assertions are there to tell you if these assumptions are broken for any reason.
Hope this helps!
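A sketch of that split, using a hypothetical account example: parameters get hard checks in every build, while internal invariants get asserts.
using System;
using System.Diagnostics;

public class Account
{
    private decimal _balance;

    public decimal Deposit(decimal amount)
    {
        // Parameter check: enforced in both Debug and Release.
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", amount, "Deposit must be positive");

        decimal newBalance = _balance + amount;

        // "Impossible" by our own logic: a positive deposit cannot shrink the balance.
        Debug.Assert(newBalance >= _balance, "deposit decreased the balance");

        _balance = newBalance;
        return _balance;
    }
}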
There is a communication aspect to asserts vs exception throwing.
Let's say we have a User class with a Name property and a ToString method.
If ToString is implemented like this:
public string ToString()
{
Debug.Assert(Name != null);
return Name;
}
It says that Name should never be null and that there is a bug in the User class if it is.
If ToString is implemented like this:
public string ToString()
{
if ( Name == null )
{
throw new InvalidOperationException("Name is null");
}
return Name;
}
It says that the caller is using ToString incorrectly if Name is null and should check that before calling.
The implementation with both
public string ToString()
{
Debug.Assert(Name != null);
if ( Name == null )
{
throw new InvalidOperationException("Name is null");
}
return Name;
}
says that if Name is null there is a bug in the User class, but we want to handle it anyway. (The caller doesn't need to check Name before calling.) I think this is the kind of safety Robbins was recommending.
I've thought about this long and hard when it comes to providing guidance on debug vs. assert with respect to testing concerns.
You should be able to test your class with erroneous input, bad state, invalid order of operations and any other conceivable error condition and an assert should never trip. Each assert is checking something should always be true regardless of the inputs or computations performed.
Good rules of thumb I've arrived at:
Asserts are not a replacement for robust code that functions correctly independent of configuration. They are complementary.
Asserts should never be tripped during a unit test run, even when feeding in invalid values or testing error conditions. The code should handle these conditions without an assert occurring.
If an assert trips (either in a unit test or during testing), the class is bugged.
For all other errors -- typically down to environment (network connection lost) or misuse (caller passed a null value) -- it's much nicer and more understandable to use hard checks & exceptions. If an exception occurs, the caller knows it's likely their fault. If an assert occurs, the caller knows it's likely a bug in the code where the assert is located.
Regarding duplication: I agree. I don't see why you would replicate the validation with a Debug.Assert AND an exception check. Not only does it add noise to the code and muddy the waters regarding who is at fault, but it is a form of repetition.
I use explicit checks that throw exceptions on public and protected methods and assertions on private methods.
Usually, the explicit checks guard the private methods from seeing incorrect values anyway. So really, the assert is checking for a condition that should be impossible. If an assert does fire, it tells me there is a defect in the validation logic contained within one of the public routines of the class.
An exception can be caught and swallowed making the error invisible to testing. That can't happen with Debug.Assert.
No one should ever have a catch handler that catches all exceptions, but people do it anyway, and sometimes it is unavoidable. If your code is invoked from COM, the interop layer catches all exceptions and turns them into COM error codes, meaning you won't see your unhandled exceptions. Asserts don't suffer from this.
Also, when the exception would otherwise be unhandled, a still better practice is to take a mini-dump. One area where VB is more powerful than C# is that you can use an exception filter to snap a mini-dump while the exception is in flight, and leave the rest of the exception handling unchanged. Gregg Miskelly's blog post on exception filter injection provides a useful way to do this from C#.
One other note on asserts... they interact poorly with unit testing the error conditions in your code. It is worthwhile to have a wrapper to turn off the asserts for your unit tests.
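One possible shape for such a wrapper (the Check class and its flag are assumptions, not a standard API):
using System.Diagnostics;

public static class Check
{
    // Unit tests can set this to false so that deliberately-invalid
    // inputs don't trip an assert dialog during the test run.
    public static bool AssertsEnabled = true;

    [Conditional("DEBUG")]
    public static void Assert(bool condition, string message)
    {
        if (AssertsEnabled)
            Debug.Assert(condition, message);
    }
}
Production code calls Check.Assert wherever it would have called Debug.Assert, and test fixtures flip AssertsEnabled off while deliberately exercising error paths.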
IMO it's a loss of development time only. A properly implemented exception gives you a clear picture of what happened. I have seen too many applications showing obscure "Assertion failed: i < 10" errors. I see assertions as a temporary solution; in my opinion no assertions should be in the final version of a program. In my practice I use assertions for quick and dirty checks. The final version of the code should take erroneous situations into account and behave accordingly. If something bad happens you have two choices: handle it or leave it. A function should throw an exception with a meaningful description if wrong parameters are passed in. I see no point in duplicating validation logic.
Example of a good use of Assert:
Debug.Assert(flibbles.count() < 1000000, "too many flibbles"); // indicate something is awry
log.warning("flibble count reached " + flibbles.count()); // log in production as early warning
I personally think that Assert should only be used when you know something is outside desirable limits but you can be sure it's reasonably safe to continue. In all other circumstances (feel free to point out circumstances I haven't thought of) use exceptions to fail hard and fast.
The key tradeoff for me is whether you want to bring down a live/production system with an Exception to avoid corruption and make troubleshooting easier, or whether you have encountered a situation that should never be allowed to continue unnoticed in test/debug versions but could be allowed to continue in production (logging a warning of course).
cf. http://c2.com/cgi/wiki?FailFast
copied and modified from java question: Exception Vs Assertion
Here is my 2 cents.
I think that the best way is to use both assertions and exceptions. The main difference between the two, imho, is that Assert statements can easily be removed from the application text (defines, conditional attributes...), while thrown exceptions are (typically) dependent on conditional code which is harder to remove (multiline sections with preprocessor conditionals).
Every application exception shall be handled correctly, while assertions only need to be satisfied during algorithm development and testing.
If you pass a null object reference as a routine parameter and you use this value, you get a null pointer exception. Indeed: why should you write an assertion? It would be a waste of time in this case.
But what about private class members used in class routines? When these values are set somewhere, it is better to check with an assertion whether a null value is being set. Otherwise, when you later use the member you get a null pointer exception, but you don't know how the value was set; you would have to restart the program and break on every entry point used to set the private member.
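A sketch of asserting at the assignment site; Widget and Host are hypothetical types:
using System.Diagnostics;

class Widget { }

class Host
{
    private Widget _widget;

    public void SetWidget(Widget widget)
    {
        // Catch the bad value where it is assigned, not where it is
        // dereferenced much later with no clue as to who set it.
        Debug.Assert(widget != null, "widget must not be null here");
        _widget = widget;
    }
}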
Exceptions are more useful, but they can be (imho) very heavy to manage, and there is the possibility of using too many of them. They also require additional checks, which may be undesired when optimizing the code.
Personally, I use exceptions only when the code requires deep catch control (that is, the catch statements are very low in the call stack) or when the function parameters are not hardcoded in the code.