Validating function arguments? - C#

On a regular basis, I validate my function arguments:
public static void Function(int i, string s)
{
Debug.Assert(i > 0);
Debug.Assert(s != null);
Debug.Assert(s.Length > 0);
}
Of course the checks are "valid" in the context of the function.
Is this common industry practice? What is common practice concerning function argument validation?

The accepted practice is as follows if the values are not valid or will cause an exception later on:
if (i <= 0)
    throw new ArgumentOutOfRangeException("i", "parameter i must be greater than 0");
if (string.IsNullOrEmpty(s))
    throw new ArgumentNullException("s", "the parameter s needs to be set ...");
So the list of basic argument exceptions is as follows:
ArgumentException
ArgumentNullException
ArgumentOutOfRangeException
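One nuance worth noting: ArgumentNullException is strictly for null arguments; an empty (but non-null) string is conventionally reported with a plain ArgumentException. A rough sketch (method and messages made up here) of how the three map onto typical checks:
public static void Function(int i, string s, object o)
{
    if (i <= 0)
        throw new ArgumentOutOfRangeException("i", "i must be greater than 0");
    if (o == null)
        throw new ArgumentNullException("o");
    if (s != null && s.Length == 0)
        throw new ArgumentException("s must not be empty", "s");
}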

What you wrote are preconditions, an essential element of Design by Contract. Google (or search Stack Overflow) for that term and you'll find quite a lot of good information about it, and some bad information too. Note that the methodology also includes postconditions and the concept of class invariants.
Let's be clear: assertions are a valid mechanism.
Of course, they're usually (not always) not checked in Release mode, so this means that you have to test your code before releasing it.
If assertions are left enabled and an assertion is violated, the standard behaviour in some languages that use assertions (and in Eiffel in particular) is to throw an assertion violation exception.
Assertions left unchecked are not a convenient or advisable mechanism if you're publishing a code library, nor (obviously) a way to validate possibly incorrect direct input. If you have possibly incorrect input, you have to design an input validation layer as part of the normal behaviour of your program; but you can still freely use assertions in the internal modules.
Other languages, like Java, have more of a tradition of explicitly checking arguments and throwing exceptions if they're wrong, mainly because these languages don't have a strong "assert" or "design by contract" tradition.
(It may seem strange to some, but I find the differences in tradition respectable, and not necessarily evil.)
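As a rough illustration of all three concepts in C# (just a sketch with made-up names; Eiffel has first-class syntax for this, C# does not):
using System.Diagnostics;

public class Account
{
    private decimal balance;

    // Class invariant: the balance never goes negative.
    private void CheckInvariant()
    {
        Debug.Assert(balance >= 0, "invariant: balance >= 0");
    }

    public void Withdraw(decimal amount)
    {
        // Precondition
        Debug.Assert(amount > 0 && amount <= balance, "precondition violated");
        decimal before = balance;

        balance -= amount;

        // Postcondition
        Debug.Assert(balance == before - amount, "postcondition violated");
        CheckInvariant();
    }
}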
See also this related question.

You should not be using asserts to validate data in a live application. It is my understanding that asserts are meant to test whether the function is being used in the proper way, or that the function is returning the proper value, i.e. that the value you are getting is what you expected. They are used a lot in testing frameworks. They are meant to be turned off when the system is deployed, as they are slow. If you would like to handle invalid cases, you should do so explicitly, as the poster above mentioned.

Any code that is callable over the network or via inter-process communication absolutely must have parameter validation, because otherwise it's a security vulnerability. And you have to throw an exception; Debug.Assert just will not do, because it only checks debug builds.
Any code that other people on your team will use should also have parameter validation, just because it will help them know it's their bug when they pass you an invalid value. Again you should throw exceptions, this time because you can add a nice description to the exception explaining what they did wrong and how to fix it.
Debug.Assert in your function is just to help YOU debug, it's a nice first line of defense but it's not "real" validation.

For public functions, especially API calls, you should be throwing exceptions. Consumers would probably appreciate knowing that there was a bug in their code, and an exception is the guaranteed way of doing it.
For internal or private functions, Debug.Assert is fine (but not necessary, IMO). You won't be taking in unknown parameters, and your tests should catch any invalid values by expected output. But, sometimes, Debug.Assert will let you zero in on or prevent a bug that much quicker.
For public functions that are not API calls, or internal methods subject to other folks calling them, you can go either way. I generally prefer exceptions for public methods, and (usually) let internal methods do without exceptions. If an internal method is particularly prone to misuse, then an exception is warranted.
While you want to validate arguments, you don't want 4 levels of validation that you have to keep in sync (and pay the perf penalty for). So, validate at the external interface, and just trust that you and your co-workers are able to call functions appropriately and/or fix the bug that inevitably results.

Most of the time I don't use Debug.Assert; I would do something like this:
public static void Function(int i, string s)
{
if (i <= 0 || String.IsNullOrEmpty(s))
    throw new ArgumentException("blah blah");
}
WARNING: This is air code, I haven't tested it.

You should use Assert to validate programmatic assumptions; that is, for the situation where
you're the only one calling that method
it should be an impossible state to get into
The Assert statements will allow you to double check that the impossible state is never reached. Use this where you would otherwise feel comfortable without validation.
For situations where the function is given bad arguments, but you can see that it's not impossible for it to receive those values (e.g. when someone else could call that code), you should throw exceptions (a la @Nathan W and @Robert Paulson) or fail gracefully (a la @Srdjan Pejic).

I try not to use Debug.Assert; rather, I write guards. If a function parameter does not have an expected value, I exit the function, like this:
public static void Function(int i, string s)
{
    if (i <= 0)
    {
        /* warn calling code, then exit */
        return;
    }
}
I find this reduces the amount of wrangling that needs to happen.

I won't speak to industry standards, but you could combine the bottom two asserts into a single line:
Debug.Assert(!String.IsNullOrEmpty(s));

Related

Appropriate usage of assertions and exceptions

I've read a bit around, trying to figure out when to use assertions and exceptions appropriately, but there's still something I'm missing in the big picture. Probably I just need more experience, so I'd like to bring up some simple examples to better comprehend in what situations I should use what.
Example 1: let us start with the classical situation of an invalid value. For example I have the following class, in which both fields must be positive:
class Rectangle
{
    private int height;
    private int length;

    public int Height
    {
        get => height;
        set
        {
            // avoid setting negative heights
        }
    }

    // same thing for Length
}
Let me remark that I am not talking about how to deal with user input in this example, since I could handle that with simple control flow. Rather, I am facing the idea that somewhere else there may be some unexpected error, and I want it to be detected, since I don't want an object with invalid values. So I can:
Use Debug.Assert and just stop the program if that happens, so that I can correct possible errors when they come.
Throw an ArgumentOutOfRangeException to basically do the same thing? This feels wrong, so I should use it only if I know I'm going to handle it somewhere. Though, if I know where to handle the exception, shouldn't I fix the problem where it lies? Or maybe it is meant for things that may happen but that you cannot control directly in your code, like user input (well, that can be dealt with without an exception, but maybe something else can't) or loading data?
Question: did I get the meaning of assertions and exceptions right? Also, could you please give an example in which handling exceptions can be useful (because something you cannot control before happens)? I cannot figure out what else, beyond the cases I mentioned, can happen, but I clearly still lack experience here.
To expand my question a bit: I can think of a variety of reasons why an exception can be thrown, like a NullReferenceException, an IndexOutOfRangeException, the IO exceptions like DirectoryNotFoundException or FileNotFoundException, etc. Though, I cannot figure out situations in which handling them becomes useful, apart from simply stopping the program (in which case, shouldn't an assertion be used?) or giving a simple message of where the problem has occurred. I know even this is useful, and exceptions are also meant to categorize "errors" and give a clue to how to fix them. Though, is a simple message really all they are useful for? That sounds fishy, so I'll stick with the "I've never faced a proper situation, 'cause of experience" mantra.
Example 2: let us now talk about user input, using the first example. As I've anticipated, I won't use an exception just to check that the values are positive, as that's simple control flow. But what happens if the user inputs a letter? Should I handle an exception here (maybe a simple ArgumentException) and give a message in the catch block? Or can that be avoided, too, through control flow (check whether the input is of type int, or something like that)?
Thanks to anyone who will clear my lingering doubts.
Throw an ArgumentOutOfRangeException to basically do the same thing? This feels wrong, so I should use it only if I know I'm going to handle it somewhere. Though, if I know where to handle the exception, shouldn't I fix the problem where it lies?
Your reasoning is pretty good here, but not quite right. The reason you're struggling is because exceptions are used for four things in C#:
boneheaded exceptions. A boneheaded exception is something like "invalid argument" when the caller could have known that the argument is invalid. If a boneheaded exception is thrown, then the caller has a bug that should be fixed. You never have a catch (ArgumentException) outside of a test case, because it should never be thrown in production. These exceptions exist to help your callers write correct code by telling them very loudly when they've made a mistake.
vexing exceptions are boneheaded exceptions where the caller cannot know that the argument is invalid. These are design flaws in APIs and should be eliminated. They require you to wrap API calls with try-catches to catch what looks like an exception that should be avoided, not caught. If you find that you're writing APIs that require the caller to wrap calls in a try-catch, you're doing something wrong.
fatal exceptions are exceptions like thread aborted, out of memory, and so on. Something terrible has happened and the process cannot continue. There is very little point in catching these because there's not much you can do to improve the situation, and you might make it worse.
exogenous exceptions are things like "the network cable is unplugged". You expected the network cable to be plugged in; it is not, and there is no way you could have checked earlier to see if it was, because the time of checking and the time of using are different times; the cable could be unplugged between those two times. You have to catch these.
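To make the distinction concrete, here is a minimal C# sketch (the file path is made up): the exogenous failure is caught, the boneheaded one never is.
using System.IO;

string text;
try
{
    // Exogenous: the file could vanish between any earlier check and this call.
    text = File.ReadAllText(@"C:\data\settings.txt");
}
catch (IOException)
{
    // Expected environmental failure: fall back to a default.
    text = string.Empty;
}
// By contrast, you would never write catch (ArgumentNullException) here;
// if that fires, the fix belongs in the calling code, not in a handler.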
Now that you know what the four kinds of exceptions are, you can see what the difference is between an exception and an assertion.
An assertion is something that must logically be true always, and if it is not, then you have a bug that should be fixed. You never assert that the network cable is plugged in. You never assert that a caller-supplied value is not null. There should never be a test case that causes an assertion to fire; if there is, then the test case has discovered a bug.
You assert that after your in-place sort algorithm runs, the smallest element in a non-empty array is at the beginning. There should be no way that can be false, and if there is, you have a bug. So assert that fact.
A throw by contrast is a statement, and every statement should have a test case which exercises it. "This API throws when passed null by a buggy caller" is part of its contract, and that contract should be testable. If you find you're writing throw statements that have no possible test case that verifies that they throw, consider changing it to an assertion.
And finally, never pass invalid arguments and then catch a boneheaded exception. If you're dealing with user input, then the UI layer should be verifying that the input is syntactically valid, ie, numbers where numbers are expected. The UI layer should not be passing possibly-unvetted user code to a deeper API and then handling the resulting exception.
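For instance, vetting numeric text at the UI layer can look like this (a sketch; OnAgeEntered, person and ShowError are hypothetical names):
// Hypothetical UI-layer handler: vet the text before it reaches a deeper API.
void OnAgeEntered(string userInput)
{
    if (int.TryParse(userInput, out int age))
        person.Age = age;                           // only vetted values get through
    else
        ShowError("Please enter a whole number.");  // hypothetical UI helper
}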
I see it this way: assertions are for programmers, exceptions are for users. You can have places in your code where you expect a specific value. There you can put an assertion, for example:
public int Age
{
get { return age; }
set
{
age = value;
Debug.Assert(age == value);
}
}
This is just an example. So if age != value there is no exception, but: "hey programmer, something strange may have happened, look at this part of the code".
You use an exception when the application doesn't know how to react in a specific situation. For example:
public int Divide(int a, int b)
{
    // This assert is for you as the programmer; if something bad happens,
    // the user won't see it in a Release build.
    Debug.Assert(b != 0);
    // But the application also doesn't know what to do in a situation
    // like this, so you will add:
    if (b == 0)
        throw new SomeException();
    return a / b;
}
And SomeException might be handled somewhere else in your application.
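That is, somewhere up the call stack there is a handler that knows the application-level answer (a sketch; calculator, Display and SomeException are stand-in names):
try
{
    int result = calculator.Divide(total, count);
    Display(result);
}
catch (SomeException)
{
    // The application-level reaction to "count was 0".
    Display("n/a");
}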

Should I throw on null parameters in private/internal methods?

I'm writing a library that has several public classes and methods, as well as several private or internal classes and methods that the library itself uses.
In the public methods I have a null check and a throw like this:
public int DoSomething(int number)
{
if (number == null)
{
throw new ArgumentNullException(nameof(number));
}
}
But then this got me thinking, to what level should I be adding parameter null checks to methods? Do I also start adding them to private methods? Should I only do it for public methods?
Ultimately, there isn't a uniform consensus on this. So instead of giving a yes or no answer, I'll try to list the considerations for making this decision:
Null checks bloat your code. If your procedures are concise, the null guards at the beginning of them may form a significant part of the overall size of the procedure, without expressing the purpose or behaviour of that procedure.
Null checks expressively state a precondition. If a method is going to fail when one of the values is null, having a null check at the top is a good way to demonstrate this to a casual reader without them having to hunt for where it's dereferenced. To improve this, people often use helper methods with names like Guard.AgainstNull instead of having to write the check each time (see the sketch after this list).
Checks in private methods are untestable. By introducing a branch in your code which you have no way of fully traversing, you make it impossible to fully test that method. This conflicts with the point of view that tests document the behaviour of a class, and that that class's code exists to provide that behaviour.
The severity of letting a null through depends on the situation. Often, if a null does get into the method, it'll be dereferenced a few lines later and you'll get a NullReferenceException. This really isn't much less clear than throwing an ArgumentNullException. On the other hand, if that reference is passed around quite a bit before being dereferenced, or if throwing an NRE will leave things in a messy state, then throwing early is much more important.
Some libraries, like .NET's Code Contracts, allow a degree of static analysis, which can add an extra benefit to your checks.
If you're working on a project with others, there may be existing team or project standards covering this.
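A sketch of the Guard.AgainstNull helper mentioned above (one common shape; this is not a standard .NET API):
using System;

public static class Guard
{
    // Throws if the argument is null; returns it so calls can be chained.
    public static T AgainstNull<T>(T value, string paramName) where T : class
    {
        if (value == null)
            throw new ArgumentNullException(paramName);
        return value;
    }
}

// Usage: _repository = Guard.AgainstNull(repository, nameof(repository));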
If you're not a library developer, don't be defensive in your code
Write unit tests instead
In fact, even if you're developing a library, throwing is most of the time: BAD
1. Testing null on an int must never be done in C#:
it raises warning CS4072, because it's always false.
2. Throwing an Exception means it's exceptional: abnormal and rare.
It should never be raised in production code. Especially because exception stack-trace traversal can be a CPU-intensive task, and you'll never be sure where the exception will be caught, whether it's caught and logged, or just silently ignored (after killing one of your background threads), because you don't control the user code. There are no "checked exceptions" in C# (as in Java), which means you never know - if it's not well documented - what exceptions a given method could raise. By the way, that kind of documentation must be kept in sync with the code, which is not always easy to do (it increases maintenance costs).
3. Exceptions increase maintenance costs.
As exceptions are thrown at runtime and under certain conditions, they can be detected really late in the development process. As you may already know, the later an error is detected in the development process, the more expensive the fix will be. I've even seen exception-raising code make its way into production and not raise for a week, only to raise every day thereafter (killing production. Oops!).
4. Throwing on invalid input means you don't control input.
That's the case for the public methods of libraries. However, if you can check it at compile time with another type (for example, a non-nullable type like int), then that's the way to go. And of course, as they are public, it's their responsibility to check input.
Imagine the user who uses what he thinks is valid data, and then, by a side effect, a method deep in the stack trace throws an ArgumentNullException.
What will be his reaction?
How can he cope with that?
Will it be easy for you to provide an explanatory message?
5. Private and internal methods should never ever throw exceptions related to their input.
You may throw exceptions in your code because an external component (maybe Database, a file or else) is misbehaving and you can't guarantee that your library will continue to run correctly in its current state.
Making a method public doesn't mean that it should (only that it can) be called from outside of your library (Look at Public versus Published from Martin Fowler). Use IOC, interfaces, factories and publish only what's needed by the user, while making the whole library classes available for unit testing. (Or you can use the InternalsVisibleTo mechanism).
6. Throwing exceptions without any explanatory message is making fun of the user
No need to recall the feelings one has when a tool is broken without giving any clue on how to fix it. Yes, I know: you come to SO and ask a question...
7. Invalid input means it breaks your code
If your code can produce a valid output with the value then it's not invalid and your code should manage it. Add a unit test to test this value.
8. Think in user terms:
Do you like it when a library you use throws exceptions in your face? Like: "Hey, it's invalid, you should have known that!"
Even if, from your point of view - with your knowledge of the library internals - the input is invalid, how can you explain it to the user (be kind and polite):
Clear documentation (in Xml doc and an architecture summary may help).
Publish the xml doc with the library.
Clear error explanation in the exception if any.
Give the user the choice:
Look at the Dictionary class. What do you prefer? Which call do you think is the fastest? Which call can raise an exception?
Dictionary<string, string> dictionary = new Dictionary<string, string>();
string res;
dictionary.TryGetValue("key", out res);
or
var other = dictionary["key"];
9. Why not use Code Contracts?
It's an elegant way to avoid the ugly if-then-throw and to isolate the contract from the implementation, permitting you to reuse the contract for different implementations at the same time. You can even publish the contract to your library's users to further explain how to use the library.
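For example (a sketch; note that Contract.Requires<TException> only actually throws when the Code Contracts binary rewriter is enabled in the build):
using System.Diagnostics.Contracts;

public int Divide(int a, int b)
{
    // The precondition is declared up front, separate from the implementation,
    // instead of the usual "if (b == 0) throw ..." boilerplate.
    Contract.Requires<ArgumentOutOfRangeException>(b != 0, "b must not be zero");
    return a / b;
}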
As a conclusion: even if you can easily use throw, and even if you regularly see exceptions raised when you use the .NET Framework, that doesn't mean they can be used without caution.
Here are my opinions:
General Cases
Generally speaking, it is better to check for any invalid inputs before you process them in a method, for robustness reasons - be it in private, protected, internal, protected internal, or public methods. Although there is some performance cost to this approach, in most cases it is worth paying, rather than spending more time debugging and patching the code later.
Strictly Speaking, however...
Strictly speaking, however, it is not always necessary. Some methods, usually private ones, can be left without any input checking, provided you have a full guarantee that there isn't a single call to the method with invalid inputs. This may give you some performance benefit, especially if the method is called frequently to do some basic computation/action. For such cases, checking input validity may impair performance significantly.
Public Methods
Now the public method is trickier. This is because, strictly speaking, although the access modifier alone can tell you who can use the methods, it cannot tell you who will use them. Moreover, it also cannot tell you how the methods are going to be used (that is, whether they are going to be called with invalid inputs in the given scopes or not).
The Ultimate Determining Factor
Although access modifiers for methods in the code can hint at how the methods are meant to be used, ultimately it is humans who will use them, and it is up to the humans how they are going to use them and with what inputs. Thus, in some rare cases, it is possible to have a public method which is only called in some private scope, and in that private scope the inputs to the public method are guaranteed to be valid before it is called.
In such cases, even though the access modifier is public, there isn't any real need to check for invalid inputs, except for robust design reasons. And why is this so? Because there are humans who know completely when and how the method shall be called!
Here we can see, there is no guarantee either that public method always require checking for invalid inputs. And if this is true for public methods, it must also be true for protected, internal, protected internal, and private methods as well.
Conclusions
So, in conclusion, we can say a couple of things to help us make decisions:
Generally, it is better to check for any invalid inputs, for robust design reasons, provided that performance is not at stake. This is true for any type of access modifier.
The invalid-input check can be skipped if the resulting performance gain is significant, provided you can also guarantee that the scopes from which the methods are called always give them valid inputs.
private methods are usually where we skip such checking, but there is no guarantee that we cannot do the same for public methods as well.
Humans are the ones who ultimately use the methods. Regardless of how the access modifiers hint at the use of the methods, how the methods are actually used and called depends on the coders. Thus, we can only speak of general good practice, without restricting it to be the only way of doing things.
The public interface of your library deserves tight checking of preconditions, because you should expect the users of your library to make mistakes and violate the preconditions by accident. Help them understand what is going on in your library.
The private methods in your library do not require such runtime checking, because you call them yourself. You are in full control of what you are passing. If you want to add checks because you are afraid of messing up, use asserts: they will catch your own mistakes without impeding performance at runtime.
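In code, the split looks roughly like this (a sketch with made-up names):
using System;
using System.Diagnostics;

public class Tokenizer
{
    // Public surface: callers outside your control, so check and throw.
    public string[] Split(string text)
    {
        if (text == null)
            throw new ArgumentNullException(nameof(text));
        return SplitCore(text);
    }

    // Private: only we call this, so an assert documents the assumption
    // and costs nothing in Release builds.
    private string[] SplitCore(string text)
    {
        Debug.Assert(text != null, "Split should have validated text");
        return text.Split(' ');
    }
}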
Though you tagged this language-agnostic, it seems to me there probably isn't a fully general answer.
Notably, in your example you hinted at the argument's type: with a language that enforces type hints, an error will fire as soon as the function is entered, before you can take any action.
In such a case, the only solution is to have checked the argument before calling your function... but since you're writing a library, that makes no sense!
On the other hand, with no hinting, it remains realistic to check inside the function.
So at this point in the reasoning, I'd already suggest giving up on hinting.
Now let's go back to your precise question: at what level should it be checked?
For a given piece of data, it would happen only at the highest level where it can "enter" (there may be several entry points for the same data), so logically it would concern only public methods.
That's the theory. But maybe you're planning a huge, complex library, so it might not be easy to be certain you have registered all the "entry points".
In that case, I'd suggest the opposite: consider simply applying your checks everywhere, then omit them only where you clearly see they're duplicates.
Hope this helps.
In my opinion you should ALWAYS check for "invalid" data - independent of whether it is a private or public method.
Looked at the other way around: why should you be able to work with something invalid just because the method is private? That doesn't make sense, right? Always try to use defensive programming and you will be happier in life ;-)
This is a question of preference. But consider instead why you are checking for null, or rather checking for valid input. It's probably because you want to let the consumer of your library know when he/she is using it incorrectly.
Let's imagine that we have implemented a class PersonList in a library. This list can only contain objects of type Person. We have also implemented some operations on our PersonList, and therefore we do not want it to contain any null values.
Consider the two following implementations of the Add method for this list:
Implementation 1
public void Add(Person item)
{
if(_size == _items.Length)
{
EnsureCapacity(_size + 1);
}
_items[_size++] = item;
}
Implementation 2
public void Add(Person item)
{
if(item == null)
{
throw new ArgumentNullException("Cannot add null to PersonList");
}
if(_size == _items.Length)
{
EnsureCapacity(_size + 1);
}
_items[_size++] = item;
}
Let's say we go with implementation 1:
Null values can now be added to the list.
All operations implemented on the list will have to handle these null values.
If we check for and throw an exception in our operations, the consumer will be notified about the exception when he/she calls one of them, and at that point it will be very unclear what he/she has done wrong (it just wouldn't make any sense to go with this approach).
If we instead choose to go with implementation 2, we make sure input to our library has the quality that we require for our class to operate on it. This means we only need to handle this here and then we can forget about it while we are implementing our other operations.
It will also be much clearer to the consumer that he/she is using the library in the wrong way when he/she gets an ArgumentNullException on .Add instead of in .Sort or similar.
To sum up, my preference is to check for valid arguments when they are supplied by the consumer and not handled by the private/internal methods of the library. This basically means we have to check arguments in constructors/methods that are public and take parameters. Our private/internal methods can only be called from our public ones, and those have already checked the input, which means we are good to go!
Using Code Contracts should also be considered when verifying input.

Best practice for null testing [duplicate]

To avoid all the standard answers I could have Googled, I will provide an example you can all attack at will.
C# and Java (and too many others) have, for plenty of types, an 'overflow' behaviour I don't like at all (e.g. type.MaxValue + type.SmallestValue == type.MinValue; for example: int.MaxValue + 1 == int.MinValue).
But, given my vicious nature, I'll add some insult to this injury by extending this behaviour to, let's say, an overridden DateTime type. (I know DateTime is sealed in .NET, but for the sake of this example I'm using a pseudo-language that is exactly like C#, except for the fact that DateTime isn't sealed.)
The overridden Add method:
/// <summary>
/// Increments this date with a timespan, but loops when
/// the maximum value for datetime is exceeded.
/// </summary>
/// <param name="ts">The timespan to (try to) add</param>
/// <returns>The Date, incremented with the given timespan.
/// If DateTime.MaxValue is exceeded, the sum will 'overflow' and
/// continue from DateTime.MinValue.
/// </returns>
public override DateTime Add(TimeSpan ts)
{
try
{
return base.Add(ts);
}
catch (ArgumentOutOfRangeException)
{
// calculate how much the MaxValue is exceeded
// regular program flow
TimeSpan saldo = ts - (DateTime.MaxValue - this);
return DateTime.MinValue.Add(saldo);
}
catch (Exception)
{
    // 'real' exception handling.
    throw;
}
}
Of course an if could solve this just as easily, but the fact remains that I just fail to see why you couldn't use exceptions (logically, that is; I can see that when performance is an issue, exceptions should be avoided in certain cases).
I think in many cases they are more clear than if-structures and don’t break any contract the method is making.
IMHO the "never use them for regular program flow" reaction everybody seems to have is not as well founded as the strength of that reaction would justify.
Or am I mistaken?
I've read other posts dealing with all kinds of special cases, but my point is there's nothing wrong with it if you are both:
Clear
Honouring the contract of your method
Shoot me.
Have you ever tried to debug a program raising five exceptions per second in the normal course of operation?
I have.
The program was quite complex (it was a distributed calculation server), and a slight modification at one side of the program could easily break something in a totally different place.
I wish I could have just launched the program and waited for exceptions to occur, but there were around 200 exceptions during start-up in the normal course of operations.
My point: if you use exceptions for normal situations, how do you locate unusual (i.e. exceptional) situations?
Of course, there are other strong reasons not to use exceptions too much, especially performance-wise.
Exceptions are basically non-local goto statements with all the consequences of the latter. Using exceptions for flow control violates the principle of least astonishment and makes programs hard to read (remember that programs are written for programmers first).
Moreover, this is not what compiler vendors expect. They expect exceptions to be thrown rarely, and they usually let the throw code be quite inefficient. Throwing exceptions is one of the most expensive operations in .NET.
However, some languages (notably Python) use exceptions as flow-control constructs. For example, iterators raise a StopIteration exception if there are no further items. Even standard language constructs (such as for) rely on this.
My rule of thumb is:
If you can do anything to recover from an error, catch exceptions
If the error is a very common one (e.g. the user tried to log in with the wrong password), use return values
If you can't do anything to recover from an error, leave it uncaught (Or catch it in your main-catcher to do some semi-graceful shutdown of the application)
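The "use return values" case is essentially the familiar Try pattern (a sketch; Session and credentialStore are made-up names):
// Wrong password is common and expected, so signal it with a bool
// and keep exceptions for genuinely exceptional failures.
public bool TryLogin(string user, string password, out Session session)
{
    session = null;
    if (!credentialStore.IsValid(user, password))  // hypothetical store
        return false;
    session = new Session(user);                   // hypothetical type
    return true;
}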
The problem I see with exceptions is from a purely syntactic point of view (I'm pretty sure the performance overhead is minimal). I don't like try-blocks all over the place.
Take this example:
try
{
DoSomeMethod(); //Can throw Exception1
DoSomeOtherMethod(); //Can throw Exception1 and Exception2
}
catch(Exception1)
{
//Okay, something messed up, but was it DoSomeMethod or DoSomeOtherMethod?
}
Another example could be when you need to assign something to a handle using a factory, and that factory could throw an exception:
Class1 myInstance;
try
{
myInstance = Class1Factory.Build();
}
catch(SomeException)
{
// Couldn't instantiate class, do something else..
}
myInstance.BestMethodEver(); // Will produce a compile-time error, saying that myInstance is uninitialized, which it potentially is.. :(
So, personally, I think you should keep exceptions for rare error conditions (out of memory etc.) and use return values (value classes, structs or enums) to do your error checking instead.
Hope I understood your question correctly :)
A first reaction to a lot of answers:
you're writing for the programmers and the principle of least astonishment
Of course! But an if just isn't clearer all the time.
It shouldn't be astonishing, e.g.: divide (1/x), catch (DivisionByZero) is clearer than any if to me (at Conrad and others). The fact that this kind of programming isn't expected is purely conventional, and indeed, that convention is still relevant. Maybe in my example an if would be clearer.
But DivisionByZero and FileNotFound, for that matter, are clearer than ifs.
Of course, if it's less performant and needed a zillion times per second, you should avoid it, but I still haven't read any good reason to avoid the overall design.
As far as the principle of least astonishment goes: there's a danger of circular reasoning here. Suppose a whole community uses a bad design; that design will become expected! Therefore the principle cannot be a grail and should be considered carefully.
exceptions for normal situations, how do you locate unusual (i.e. exceptional) situations?
In many reactions something like this shines through. Just catch them, no? Your method should be clear, well documented, and honouring its contract. I don't get that question, I must admit.
Debugging on all exceptions: the same; that's just done sometimes because the design of not using exceptions is common. My question was: why is it common in the first place?
Before exceptions, in C, there were setjmp and longjmp that could be used to accomplish a similar unrolling of the stack frame.
Then the same construct was given a name: "exception". And most of the answers rely on the meaning of this name to argue about its usage, claiming that exceptions are intended to be used in exceptional conditions. That was never the intent in the original longjmp. There were just situations where you needed to break control flow across many stack frames.
Exceptions are slightly more general in that you can use them within the same stack frame too. This raises analogies with goto that I believe are wrong. Gotos are a tightly coupled pair (and so are setjmp and longjmp). Exceptions follow a loosely coupled publish/subscribe model that is much cleaner! Therefore using them within the same stack frame is hardly the same thing as using gotos.
The third source of confusion relates to whether they are checked or unchecked exceptions. Of course, unchecked exceptions seem particularly awful to use for control flow and perhaps a lot of other things.
Checked exceptions however are great for control flow, once you get over all the Victorian hangups and live a little.
My favorite usage is a sequence of throw new Success() in a long fragment of code that tries one thing after another until it finds what it is looking for. Each thing -- each piece of logic -- may have arbitrary nesting, so breaks are out, as is any kind of condition test. The if-else pattern is brittle. If I edit out an else or mess up the syntax in some other way, then there is a hairy bug.
Using throw new Success() linearizes the code flow. I use locally defined Success classes -- checked of course -- so that if I forget to catch it the code won't compile. And I don't catch another method's Successes.
Sometimes my code checks for one thing after the other and only succeeds if everything is OK. In this case I have a similar linearization using throw new Failure().
Using a separate function messes with the natural level of compartmentalization. So the return solution is not optimal. I prefer to have a page or two of code in one place for cognitive reasons. I don't believe in ultra-finely divided code.
What JVMs or compilers do is less relevant to me unless there is a hotspot. I cannot believe there is any fundamental reason why compilers could not detect locally thrown and caught exceptions and simply treat them as very efficient gotos at the machine-code level.
As far as using them across functions for control flow -- i.e. for common cases rather than exceptional ones -- I cannot see how they would be less efficient than multiple breaks, condition tests, and returns wading through three stack frames, as opposed to just restoring the stack pointer.
I personally do not use the pattern across stack frames and I can see how it would require design sophistication to do so elegantly. But used sparingly it should be fine.
Lastly, regarding surprising virgin programmers, it is not a compelling reason. If you gently introduce them to the practice, they will learn to love it. I remember C++ used to surprise and scare the heck out of C programmers.
The standard answer is that exceptions are not regular and should be used in exceptional cases.
One reason, which is important to me, is that when I read a try-catch control structure in software I maintain or debug, I try to find out why the original coder used exception handling instead of an if-else structure. And I expect to find a good answer.
Remember that you write code not only for the computer but also for other coders. There is a semantics associated with an exception handler that you cannot throw away just because the machine doesn't mind.
Josh Bloch deals with this topic extensively in Effective Java. His suggestions are illuminating and should apply to .NET as well (except for the details).
In particular, exceptions should be used for exceptional circumstances. The reasons for this are usability-related, mainly. For a given method to be maximally usable, its input and output conditions should be maximally constrained.
For example, the second method is easier to use than the first:
/**
* Adds two positive numbers.
*
* @param addend1 greater than zero
* @param addend2 greater than zero
* @throws AdditionException if addend1 or addend2 is less than or equal to zero
*/
int addPositiveNumbers(int addend1, int addend2) throws AdditionException{
if( addend1 <= 0 ){
throw new AdditionException("addend1 is <= 0");
}
else if( addend2 <= 0 ){
throw new AdditionException("addend2 is <= 0");
}
return addend1 + addend2;
}
/**
* Adds two positive numbers.
*
* @param addend1 greater than zero
* @param addend2 greater than zero
*/
public int addPositiveNumbers(int addend1, int addend2) {
if( addend1 <= 0 ){
throw new IllegalArgumentException("addend1 is <= 0");
}
else if( addend2 <= 0 ){
throw new IllegalArgumentException("addend2 is <= 0");
}
return addend1 + addend2;
}
In either case, you need to check to make sure that the caller is using your API appropriately. But in the second case, you require it (implicitly). The soft Exceptions will still be thrown if the user didn't read the javadoc, but:
You don't need to document it.
You don't need to test for it (depending upon how aggressive your unit testing strategy is).
You don't require the caller to handle three use cases.
The ground-level point is that Exceptions should not be used as return codes, largely because you've complicated not only YOUR API, but the caller's API as well.
Doing the right thing comes at a cost, of course. The cost is that everyone needs to understand that they need to read and follow the documentation. Hopefully that is the case anyway.
How about performance? While load testing a .NET web app, we topped out at 100 simulated users per web server until we fixed a commonly occurring exception, and that number increased to 500 users.
I think you can use exceptions for flow control. There is, however, a flip side to this technique: creating exceptions is costly, because they have to build a stack trace. So if you want to use exceptions more often than just for signalling an exceptional situation, you have to make sure that building the stack traces doesn't negatively influence your performance.
The best way to cut down the cost of creating exceptions is to override the fillInStackTrace() method like this:
public Throwable fillInStackTrace() { return this; }
Such an exception will have no stacktraces filled in.
Here are best practices I described in my blog post:
Throw an exception to state an unexpected situation in your software.
Use return values for input validation.
If you know how to deal with exceptions a library throws, catch them at the lowest level possible.
If you have an unexpected exception, discard current operation completely. Don’t pretend you know how to deal with them.
I don't really see how you're controlling program flow in the code you cited. You'll never see another exception besides the ArgumentOutOfRangeException (so your second catch clause will never be hit). All you're doing is using an extremely costly throw to mimic an if statement.
Also you aren't performing the more sinister of operations where you just throw an exception purely for it to be caught somewhere else to perform flow control. You're actually handling an exceptional case.
Apart from the reasons stated, one reason not to use exceptions for flow control is that it can greatly complicate the debugging process.
For example, when I'm trying to track down a bug in VS I'll typically turn on "break on all exceptions". If you're using exceptions for flow control then I'm going to be breaking in the debugger on a regular basis and will have to keep ignoring these non-exceptional exceptions until I get to the real problem. This is likely to drive someone mad!!
Let's assume you have a method that does some calculations. There are many input parameters it has to validate, and then it returns a number greater than 0.
Using return values to signal a validation error is simple: if the method returned a number less than 0, an error occurred. But how then to tell which parameter didn't validate?
I remember from my C days that a lot of functions returned error codes like this:
-1 - x less than MinX
-2 - x greater than MaxX
-3 - y less than MinY
etc.
Is it really less readable than throwing and catching an exception?
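Side by side, the two styles look roughly like this (a sketch; MinX, MaxX and MinY are made-up bounds):
// C-style error codes: the caller must memorize the magic numbers,
// and the codes must never collide with a legitimate result.
int CalculateWithCodes(int x, int y)
{
    if (x < MinX) return -1;
    if (x > MaxX) return -2;
    if (y < MinY) return -3;
    return x + y;
}

// Exception style: the failure names itself and the offending parameter.
int Calculate(int x, int y)
{
    if (x < MinX || x > MaxX)
        throw new ArgumentOutOfRangeException(nameof(x));
    if (y < MinY)
        throw new ArgumentOutOfRangeException(nameof(y));
    return x + y;
}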
Because the code is hard to read, you may have trouble debugging it, you will introduce new bugs when fixing bugs after a long time, it is more expensive in terms of resources and time, and it annoys you if you are debugging your code and the debugger halts at the occurrence of every exception ;)
If you are using exception handlers for control flow, you are being too general and lazy. As someone else mentioned, you know something happened if you are handling processing in the handler, but what exactly? Essentially you are using the exception for an else statement, if you are using it for control flow.
If you don't know what possible state could occur, then you can use an exception handler for unexpected states, for example when you have to use a third-party library, or you have to catch everything in the UI to show a nice error message and log the exception.
However, if you do know what might go wrong, and you don't put an if statement or something to check for it, then you are just being lazy. Allowing the exception handler to be the catch-all for stuff you know could happen is lazy, and it will come back to haunt you later, because you will be trying to fix a situation in your exception handler based on a possibly false assumption.
If you put logic in your exception handler to determine what exactly happened, then you would be quite stupid for not putting that logic inside the try block.
Exception handlers are the last resort, for when you run out of ideas/ways to stop something from going wrong, or things are beyond your ability to control. Like, the server is down and times out and you can't prevent that exception from being thrown.
Finally, having all the checks done up front shows what you know or expect will occur and makes it explicit. Code should be clear in intent. What would you rather read?
You can use a hammer's claw to turn a screw, just like you can use exceptions for control flow. That doesn't mean it is the intended usage of the feature. The if statement expresses conditions, whose intended usage is controlling flow.
If you are using a feature in an unintended way while choosing to not use the feature designed for that purpose, there will be an associated cost. In this case, clarity and performance suffer for no real added value. What does using exceptions buy you over the widely-accepted if statement?
Said another way: just because you can doesn't mean you should.
As others have mentioned numerous times, the principle of least astonishment will forbid using exceptions excessively for control-flow-only purposes. On the other hand, no rule is 100% correct, and there are always cases where an exception is "just the right tool" - much like goto itself, by the way, which ships in the form of break and continue in languages like Java and is often the perfect way to jump out of heavily nested loops, which aren't always avoidable.
The following blog post explains a rather complex but also rather interesting use-case for a non-local ControlFlowException:
http://blog.jooq.org/2013/04/28/rare-uses-of-a-controlflowexception
It explains how inside of jOOQ (a SQL abstraction library for Java), such exceptions are occasionally used to abort the SQL rendering process early when some "rare" condition is met.
Examples of such conditions are:
Too many bind values are encountered. Some databases do not support arbitrary numbers of bind values in their SQL statements (SQLite: 999, Ingres 10.1.0: 1024, Sybase ASE 15.5: 2000, SQL Server 2008: 2100). In those cases, jOOQ aborts the SQL rendering phase and re-renders the SQL statement with inlined bind values. Example:
// Pseudo-code attaching a "handler" that will
// abort query rendering once the maximum number
// of bind values was exceeded:
context.attachBindValueCounter();
String sql;
try {
// In most cases, this will succeed:
sql = query.render();
}
catch (ReRenderWithInlinedVariables e) {
sql = query.renderWithInlinedBindValues();
}
If we explicitly extracted the bind values from the query AST to count them every time, we'd waste valuable CPU cycles for those 99.9% of the queries that don't suffer from this problem.
Some logic is available only indirectly via an API that we want to execute only "partially". The UpdatableRecord.store() method generates an INSERT or UPDATE statement, depending on the Record's internal flags. From the "outside", we don't know what kind of logic is contained in store() (e.g. optimistic locking, event listener handling, etc.) so we don't want to repeat that logic when we store several records in a batch statement, where we'd like to have store() only generate the SQL statement, not actually execute it. Example:
// Pseudo-code attaching a "handler" that will
// prevent query execution and throw exceptions
// instead:
context.attachQueryCollector();
// Collect the SQL for every store operation
for (int i = 0; i < records.length; i++) {
try {
records[i].store();
}
// The attached handler will result in this
// exception being thrown rather than actually
// storing records to the database
catch (QueryCollectorException e) {
// The exception is thrown after the rendered
// SQL statement is available
queries.add(e.query());
}
}
If we had externalised the store() logic into a "re-usable" API that could be customised to optionally not execute the SQL, we'd be looking at creating a rather hard-to-maintain, hardly re-usable API.
Conclusion
In essence, our usage of these non-local gotos is just along the lines of what Mason Wheeler said in his answer:
"I just encountered a situation that I cannot deal with properly at this point, because I don't have enough context to handle it, but the routine that called me (or something further up the call stack) ought to know how to handle it."
Both usages of ControlFlowExceptions were rather easy to implement compared to their alternatives, allowing us to reuse a wide range of logic without refactoring it out of the relevant internals.
But the feeling of this being a bit of a surprise to future maintainers remains. The code feels rather delicate, and while it was the right choice in this case, we'd always prefer not to use exceptions for local control flow where they can easily be avoided with ordinary if-else branching.
Typically there is nothing wrong, per se, with handling an exception at a low level. An exception IS a valid message that provides a lot of detail for why an operation cannot be performed. And if you can handle it, you ought to.
In general, if you know there is a high probability of a failure that you can check for... you should do the check... e.g. if (obj != null) obj.Method();
In your case, I'm not familiar enough with the C# library to know if DateTime has an easy way to check whether a timestamp is out of bounds. If it does, just call it first, something like if (IsValid(ts));
otherwise your code is basically fine.
So basically it comes down to whichever way creates cleaner code: if the operation to guard against an expected exception is more complex than just handling the exception, then you have my permission to handle the exception instead of creating complex guards everywhere.
You might be interested in having a look at Common Lisp's condition system which is a sort of generalization of exceptions done right. Because you can unwind the stack or not in a controlled way, you get "restarts" as well, which are extremely handy.
This doesn't have anything much to do with best practices in other languages, but it shows you what can be done with some design thought in (roughly) the direction you are thinking of.
Of course there are still performance considerations if you're bouncing up and down the stack like a yo-yo, but it's a much more general idea than the "oh crap, let's bail" kind of approach that most catch/throw exception systems embody.
I don't think there is anything wrong with using Exceptions for flow-control. Exceptions are somewhat similar to continuations and in statically typed languages, Exceptions are more powerful than continuations, so, if you need continuations but your language doesn't have them, you can use Exceptions to implement them.
Well, actually, if you need continuations and your language doesn't have them, you chose the wrong language and you should rather be using a different one. But sometimes you don't have a choice: client-side web programming is the prime example – there's just no way to get around JavaScript.
An example: Microsoft Volta is a project to allow writing web applications in straightforward .NET, and let the framework take care of figuring out which bits need to run where. One consequence of this is that Volta needs to be able to compile CIL to JavaScript, so that you can run code on the client. However, there is a problem: .NET has multithreading, JavaScript doesn't. So, Volta implements continuations in JavaScript using JavaScript exceptions, then implements .NET threads using those continuations. That way, Volta applications that use threads can be compiled to run in an unmodified browser - no Silverlight needed.
But you won't always know what happens in the method(s) you call. You won't know exactly where the exception was thrown without examining the exception object in greater detail...
I feel that there is nothing wrong with your example. On the contrary, it would be a sin to ignore the exception thrown by the called function.
In the JVM, throwing an exception is not that expensive; only creating the exception with new xyzException(...) is, because the latter involves a stack walk. So if you have some exceptions created in advance, you may throw them many times without cost. Of course, this way you can't pass data along with the exception, but I think that is a bad thing to do anyway.
There are a few general mechanisms via which a language could allow for a method to exit without returning a value and unwind to the next "catch" block:
Have the method examine the stack frame to determine the call site, and use the metadata for the call site to find either information about a try block within the calling method, or the location where the calling method stored the address of its caller; in the latter situation, examine the metadata for the caller's caller in the same fashion, repeating until one finds a try block or the stack is empty. This approach adds very little overhead to the no-exception case (it does preclude some optimizations) but is expensive when an exception occurs.
Have the method return a "hidden" flag which distinguishes a normal return from an exception, and have the caller check that flag and branch to an "exception" routine if it's set. This approach adds 1-2 instructions to the no-exception case, but relatively little overhead when an exception occurs.
Have the caller place exception-handling information or code at a fixed address relative to the stacked return address. For example, with the ARM, instead of using the instruction "BL subroutine", one could use the sequence:
adr lr,next_instr
b subroutine
b handle_exception
next_instr:
To exit normally, the subroutine would simply do bx lr or pop {pc}; in case of an abnormal exit, the subroutine would either subtract 4 from LR before performing the return or use sub lr,#4,pc (depending upon the ARM variation, execution mode, etc.) This approach will malfunction very badly if the caller is not designed to accommodate it.
A language or framework which uses checked exceptions might benefit from having those handled with a mechanism like #2 or #3 above, while unchecked exceptions are handled using #1. Although the implementation of checked exceptions in Java is rather a nuisance, they would not be a bad concept if there were a means by which a call site could say, essentially, "This method is declared as throwing XX, but I don't expect it ever to do so; if it does, rethrow it as an 'unchecked' exception." In a framework where checked exceptions were handled in such fashion, they could be an effective means of flow control for things like parsing methods which in some contexts may have a high likelihood of failure, but where failure should return fundamentally different information than success. I'm unaware of any frameworks that use such a pattern, however. Instead, the more common pattern is to use the first approach above (minimal cost for the no-exception case, but high cost when exceptions are thrown) for all exceptions.
One aesthetic reason:
A try always comes with a catch, whereas an if doesn't have to come with an else.
if (PerformCheckSucceeded())
DoSomething();
With try/catch, it becomes much more verbose.
try
{
PerformCheckSucceeded();
DoSomething();
}
catch
{
}
That's 6 lines of code too many.

I usually don't check if function input is invalid. Am I wrong?

A lot of times when reading source code I see something like this:
public void Foo(Bar bar)
{
if (bar == null) return;
bar.DoSomething();
}
I do not like this, but I appear to be in the wrong, as this form of defensive programming is considered good. Is it, though? For example, why is bar null to begin with? Isn't doing checks like this akin to applying a bandage to a problem rather than fixing the real problem? Not only does it complicate functions with additional lines of code, but it also prevents the programmer from seeing potential bugs.
Here's another example:
public void Foo(int x)
{
int clientX = Math.Max(x, 0); // Ensures x is never negative
}
Others look at that and see defensive programming, but I see future bugs: a programmer accidentally passes a negative value, the program suddenly breaks, and no one knows why, because this little bit of logic swallowed the potentially revealing exception.
Now, please do not confuse checking if user input is valid versus what I am asking here. Obviously user input should be checked. What I am asking only pertains to code that does not interact with the user or his or her input.
this int clientX = Math.Max(x, 0); is NOT defensive programming - it is masking potential problems!
Defensive programming would be
if (x < 0)
    throw new Exception("whatever"); // or return false or...
and defensive programming is absolutely recommended... you never know how this code will be called in the future, so you make sure that it handles everything appropriately, i.e. things it is unable to handle must be filtered out as early as possible and the caller must be "notified" (for example by a meaningful exception)...
You check for nulls because attempting to have a null object perform an operation will trigger an exception. "Nothing" cannot do "something." You code like this because you can never know 100% of the time what state your application will be in.
That being said, there are good and bad ways of coding against invalid states, and those examples you gave aren't exactly "good" ways, although it's hard to say when taken out of context.
A lot of the time you might be building a component that will be used by different applications, with input supplied by different programmers with different coding styles. Some may not perform thorough validation on the data passed in to your component/method, so defensive programming here is a good way to catch bad input regardless of what the user of the component does.
PS: one thing though: usually you would not just return from the method as shown above; you would throw an appropriate exception, such as an ArgumentException or ArgumentNullException.
You probably see this more often:
public void Foo(Bar bar)
{
    if (bar == null) throw new ArgumentNullException("bar");
    bar.DoSomething();
}
This way, if someone did supply a null value as parameter (maybe as a result of some other method), you don't see a NullReferenceException from "somewhere" in your code, but an exception that states the problem more clearly.
Simply returning on invalid input is akin to a silent try/catch in my opinion.
I still validate data coming into my functions from other pieces of code, but always throw an appropriate exception when I encounter invalid input.
Something like,
int clientX = Math.Max(x, 0)
could have really bad effects in some cases: suppose x is negative because of a fault somewhere else in the program; this clamp lets the error propagate silently. I would rather suggest you log and throw an exception (some special type specific to your business logic) when the undesirable situation occurs.
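A minimal sketch of that log-and-throw approach (the Order class, InvalidQuantityException, and the use of Trace for logging are my own illustrative choices, not anything prescribed above):
using System;
using System.Diagnostics;

public class InvalidQuantityException : Exception
{
    public InvalidQuantityException(string message) : base(message) { }
}

public class Order
{
    private int quantity;

    public int Quantity { get { return quantity; } }

    public void SetQuantity(int x)
    {
        if (x < 0)
        {
            // Log first, so the faulty call is recorded even if the
            // exception gets swallowed higher up the stack.
            Trace.TraceError("SetQuantity received a negative value: {0}", x);
            throw new InvalidQuantityException(
                "Quantity must be non-negative, but was " + x);
        }
        quantity = x; // no silent clamping to 0, so the fault stays visible
    }
}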

Debug.Assert vs. Specific Thrown Exceptions

I've just started skimming 'Debugging MS .Net 2.0 Applications' by John Robbins, and have become confused by his evangelism for Debug.Assert(...).
He points out that well-implemented Asserts store the state, somewhat, of an error condition, e.g.:
Debug.Assert(i > 3, "i > 3", "This means I got a bad parameter");
Now, personally, it seems crazy to me that he so loves restating his test without an actual sensible 'business logic' comment, perhaps "i <= 3 must never happen because of the flobittyjam widgitification process".
So, I think I get Asserts as a kind of low-level "let's protect my assumptions" thing... assuming that one feels this is a test one only needs to do in debug - i.e. you are protecting yourself against colleagues and future programmers, and hoping that they actually test things.
But what I don't get is, he then goes on to say that you should use assertions in addition to normal error handling; now what I envisage is something like this:
Debug.Assert(i > 3, "i must be greater than 3 because of the flibbity widgit status");
if (i <= 3)
{
    throw new ArgumentOutOfRangeException("i", "i must be > 3 because... i=" + i.ToString());
}
What have I gained by the Debug.Assert repetition of the error condition test? I think I'd get it if we were talking about debug-only double-checking of a very important calculation...
double interestAmount = loan.GetInterest();
Debug.Assert(debugInterestDoubleCheck(loan) == interestAmount, "Mismatch on interest calc");
...but I don't get it for parameter tests which are surely worth checking (in both DEBUG and Release builds)... or not. What am I missing?
Assertions are not for parameter checking. Parameter checking should always be done (and precisely according to what pre-conditions are specified in your documentation and/or specification), and the ArgumentOutOfRangeException thrown as necessary.
Assertions are for testing for "impossible" situations, i.e., things that you (in your program logic) assume are true. The assertions are there to tell you if these assumptions are broken for any reason.
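To illustrate the distinction with a small example of my own (CustomerTier and GetDiscount are hypothetical names): throw for parameter checks, assert for paths you believe are unreachable:
using System;
using System.Diagnostics;

public enum CustomerTier { Standard, Gold, Platinum }

public static class Discounts
{
    public static decimal GetDiscount(CustomerTier tier)
    {
        // Parameter check: always active, even in Release builds.
        if (!Enum.IsDefined(typeof(CustomerTier), tier))
            throw new ArgumentOutOfRangeException("tier");

        switch (tier)
        {
            case CustomerTier.Standard: return 0.00m;
            case CustomerTier.Gold: return 0.05m;
            case CustomerTier.Platinum: return 0.10m;
            default:
                // "Impossible" given the check above; if this fires,
                // the enum and the switch have drifted apart.
                Debug.Assert(false, "Unhandled CustomerTier: " + tier);
                return 0.00m;
        }
    }
}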
Hope this helps!
There is a communication aspect to asserts vs exception throwing.
Let's say we have a User class with a Name property and a ToString method.
If ToString is implemented like this:
public override string ToString()
{
    Debug.Assert(Name != null);
    return Name;
}
It says that Name should never be null, and that there is a bug in the User class if it is.
If ToString is implemented like this:
public override string ToString()
{
    if (Name == null)
    {
        throw new InvalidOperationException("Name is null");
    }
    return Name;
}
It says that the caller is using ToString incorrectly if Name is null and should check that before calling.
The implementation with both
public override string ToString()
{
    Debug.Assert(Name != null);
    if (Name == null)
    {
        throw new InvalidOperationException("Name is null");
    }
    return Name;
}
says that if Name is null, there is a bug in the User class, but we want to handle it anyway. (The caller doesn't need to check Name before calling.) I think this is the kind of safety Robbins was recommending.
I've thought about this long and hard when it comes to providing guidance on exceptions vs. asserts with respect to testing concerns.
You should be able to test your class with erroneous input, bad state, invalid order of operations and any other conceivable error condition, and an assert should never trip. Each assert checks something that should always be true, regardless of the inputs or computations performed.
Good rules of thumb I've arrived at:
Asserts are not a replacement for robust code that functions correctly independent of configuration. They are complementary.
Asserts should never be tripped during a unit test run, even when feeding in invalid values or testing error conditions. The code should handle these conditions without an assert occurring.
If an assert trips (either in a unit test or during testing), the class is bugged.
For all other errors -- typically down to environment (network connection lost) or misuse (caller passed a null value) -- it's much nicer and more understandable to use hard checks & exceptions. If an exception occurs, the caller knows it's likely their fault. If an assert occurs, the caller knows it's likely a bug in the code where the assert is located.
Regarding duplication: I agree. I don't see why you would replicate the validation with a Debug.Assert AND an exception check. Not only does it add noise to the code and muddy the waters regarding who is at fault, it is also a form of repetition.
I use explicit checks that throw exceptions on public and protected methods and assertions on private methods.
Usually, the explicit checks guard the private methods from seeing incorrect values anyway. So really, the assert is checking for a condition that should be impossible. If an assert does fire, it tells me there is a defect in the validation logic contained within one of the public routines on the class.
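A minimal sketch of that split (AccountService and its members are hypothetical names of mine): the public method throws, the private method asserts:
using System;
using System.Diagnostics;

public class AccountService
{
    private decimal balance;

    public decimal Balance { get { return balance; } }

    // Public surface: hard checks that survive Release builds.
    public void Credit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", "amount must be positive");
        ApplyCredit(amount);
    }

    // Private helper: the public method has already filtered out bad
    // values, so if this assert fires, the validation logic is bugged.
    private void ApplyCredit(decimal amount)
    {
        Debug.Assert(amount > 0, "ApplyCredit called with a non-positive amount");
        balance += amount;
    }
}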
An exception can be caught and swallowed, making the error invisible to testing. That can't happen with Debug.Assert.
No one should ever have a catch handler that catches all exceptions, but people do it anyway, and sometimes it is unavoidable. If your code is invoked from COM, the interop layer catches all exceptions and turns them into COM error codes, meaning you won't see your unhandled exceptions. Asserts don't suffer from this.
Also, when the exception would otherwise go unhandled, a still better practice is to take a mini-dump. One area where VB is more powerful than C# is that you can use an exception filter to snap a mini-dump while the exception is in flight, and leave the rest of the exception handling unchanged. Gregg Miskelly's blog post on exception filter inject provides a useful way to do this from C#.
One other note on asserts... they interact poorly with unit testing the error conditions in your code. It is worthwhile to have a wrapper to turn off the assert for your unit tests.
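One way to build such a wrapper, sketched here under the assumption of the .NET Framework trace-listener model (Debug.Listeners), is to replace the default listener, which shows the assert dialog, with one that rethrows the failure:
using System;
using System.Diagnostics;

// Converts Debug.Assert failures into exceptions so a test run fails
// loudly instead of popping the assert dialog.
public class ThrowingTraceListener : TraceListener
{
    public override void Fail(string message, string detailMessage)
    {
        throw new InvalidOperationException(
            "Debug.Assert failed: " + message + " " + detailMessage);
    }

    public override void Write(string message) { }
    public override void WriteLine(string message) { }
}

// In the test fixture's setup (Debug and Trace share listeners):
// Debug.Listeners.Clear();
// Debug.Listeners.Add(new ThrowingTraceListener());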
IMO it's a loss of development time only. A properly implemented exception gives you a clear picture of what happened. I've seen too many applications showing obscure "Assertion failed: i < 10" errors. I see assertions as a temporary solution. In my opinion, no assertions should be in the final version of a program. In my practice I used assertions for quick and dirty checks. The final version of the code should take erroneous situations into account and behave accordingly. If something bad happens you have 2 choices: handle it or leave it. A function should throw an exception with a meaningful description if wrong parameters are passed in. I see no point in duplicating the validation logic.
Example of a good use of Assert:
Debug.Assert(flibbles.Count < 1000000, "too many flibbles"); // indicate something is awry
log.Warning("flibble count reached " + flibbles.Count); // log in production as early warning
I personally think that Assert should only be used when you know something is outside desirable limits, but you can be sure it's reasonably safe to continue. In all other circumstances (feel free point out circumstances I haven't thought of) use exceptions to fail hard and fast.
The key tradeoff for me is whether you want to bring down a live/production system with an Exception to avoid corruption and make troubleshooting easier, or whether you have encountered a situation that should never be allowed to continue unnoticed in test/debug versions but could be allowed to continue in production (logging a warning of course).
cf. http://c2.com/cgi/wiki?FailFast
copied and modified from the Java question: Exception Vs Assertion
Here are my 2 cents.
I think the best way is to use both assertions and exceptions. The main difference between the two methods, imho, is that Assert statements can be removed easily from the application text (defines, conditional attributes...), while thrown exceptions depend (typically) on conditional code that is harder to remove (multiline sections with preprocessor conditionals).
Every application exception shall be handled correctly, while assertions need to be satisfied only during algorithm development and testing.
If you pass a null object reference as a routine parameter and then use it, you get a null pointer exception. Indeed: why should you write an assertion? It would be a waste of time in this case.
But what about private class members used in class routines? Wherever such a value is set, it is better to check with an assertion that a null value is not being stored. That's because when you later use the member and get a null pointer exception, you don't know how the value was set; tracking it down means restarting the program with a breakpoint on every entry point that sets the private member.
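A small sketch of that idea (Pipeline and IProcessor are names I made up for illustration): assert at the point where the private member is assigned, so the faulty writer is caught immediately rather than at the distant read:
using System.Diagnostics;

public interface IProcessor
{
    void Process();
}

public class Pipeline
{
    private IProcessor processor; // read much later, in Run()

    public void Attach(IProcessor p)
    {
        // Catch the bad write here, not at the distant read in Run().
        Debug.Assert(p != null, "Attach called with a null processor");
        processor = p;
    }

    public void Run()
    {
        processor.Process(); // would otherwise be the first place to fail
    }
}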
Exceptions are more useful, but they can be (imho) very heavy to manage, and there is the possibility of using too many of them. They also require additional checks, which may be undesirable when optimizing the code.
Personally I use exceptions only when the code requires deep catch control (the catch statements are very low in the call stack) or when the function parameters are not hardcoded in the code.
