Can the (plain) throw statement in C# cause exceptions? - c#

Question: Can the plain throw statement in C# ever cause a new exception in itself?
Note that I ask this question out of curiosity, not because I have any practical or real-world situation where it would matter much. Also note that my gut feeling and experience tell me that the answer is "No", but I'm looking to validate that answer somehow (see further down on sources I've tried so far).
Here's some sample code to illustrate my question:
try
{
    int x = 0, y = 1 / x;
}
catch (Exception outerException)
{
    try
    {
        throw;
    }
    catch (Exception innerException)
    {
        // Q: Does this Assert ever fail??
        System.Diagnostics.Debug.Assert(outerException.Equals(innerException));
    }
}
I'm wondering if there's any way at all to alter the circumstances such that the Assert will fail, without touching the inner try/catch block.
What I've tried or was looking to try to answer this:
Read the throw (C# Reference) page on MSDN - no definitive answer;
Checked part 5.3.3.11 of the C# Language Specification - which is probably the wrong place to look for this kind of info;
Skimmed through the exceptions that I could try to trigger on the throw statement. The OutOfMemoryException comes to mind, but it is kind of hard to trigger at the time of the throw.
Opened up ILDASM to check the generated code. I can see that throw translates to a rethrow instruction, but I'm lost where to look further to check if that statement can or cannot throw an exception.
This is what ILDASM shows for the inner try bit:
.try
{
    IL_000d:  nop
    IL_000e:  rethrow
} // end .try
So, to summarize: can a throw statement (used to rethrow an exception) ever cause an exception itself?

In my honest opinion, theoretically the assert can 'fail' (practically I don't think so).
How?
Note: Below are just my 'opinion' on the basis of some research I earlier did on SSCLI.
An InvalidProgramException can occur. This is admittedly highly improbable but nevertheless theoretically possible (for instance, some internal CLR error may result in the throwable object becoming unavailable!).
If the CLR does not find enough memory to process the 're-throw' action, it will throw an OutOfMemoryException instead (the CLR's internal re-throw logic needs to allocate some memory when it is not dealing with 'pre-allocated' exceptions like OutOfMemoryException).
If the CLR is running under some other host (e.g. SQL Server, or even your own) and the host decides to terminate the exception re-throw thread (on the basis of some internal logic), a ThreadAbortException (known as a rude thread abort in this case) will be raised. Though, I am not sure if the Assert will even execute in this case.
A custom host may have applied an escalation policy to the CLR (ICLRPolicyManager::SetActionOnFailure). In that case, if you are dealing with an OutOfMemoryException, the escalation policy may cause a ThreadAbortException to occur (again a rude thread abort; not sure what happens if the policy dictates a normal thread abort).
Though @Alois Kraus clarifies that 'normal' thread abort exceptions are not possible, from my SSCLI research I am still unsure whether a (normal) ThreadAbortException can occur.
Edit:
As I said earlier, the assert can fail theoretically, but practically it is highly improbable; hence it is very hard to develop a POC for this.
In order to provide more 'evidence', the following are snippets from the SSCLI code for processing the rethrow IL instruction, which validate my points above.
Warning: Commercial CLR can differ very widely from SSCLI.
InvalidProgramException :
if (throwable != NULL)
{
    ...
}
else
{
    // This can only be the result of bad IL (or some internal EE failure).
    RealCOMPlusThrow(kInvalidProgramException, (UINT)IDS_EE_RETHROW_NOT_ALLOWED);
}
Rude Thread Abort :
if (pThread->IsRudeAbortInitiated())
{
    // Nobody should be able to swallow rude thread abort.
    throwable = CLRException::GetPreallocatedRudeThreadAbortException();
}
This means that if 'rude thread abort' has been initiated, any exception gets changed to rude thread abort exception.
Now most interesting of all, the OutOfMemoryException. Since the rethrow IL instruction essentially re-throws the same Exception object (i.e. object.ReferenceEquals returns true), it seems impossible that an OutOfMemoryException could occur on re-throw. However, the following SSCLI code shows that it is possible:
// Always save the current object in the handle so on rethrow we can reuse it. This is important as it
// contains stack trace info.
//
// Note: we use SafeSetLastThrownObject, which will try to set the throwable and if there are any problems,
// it will set the throwable to something appropiate (like OOM exception) and return the new
// exception. Thus, the user's exception object can be replaced here.
throwable = pThread->SafeSetLastThrownObject(throwable);
SafeSetLastThrownObject calls SetLastThrownObject and, if that fails, raises an OutOfMemoryException. Here is the snippet from SetLastThrownObject (with my comments added):
...
if (m_LastThrownObjectHandle != NULL)
{
    // We'll somtimes use a handle for a preallocated exception object. We should never, ever destroy one of
    // these handles... they'll be destroyed when the Runtime shuts down.
    if (!CLRException::IsPreallocatedExceptionHandle(m_LastThrownObjectHandle))
    {
        // Destroys the GC handle only but not the throwable object itself
        DestroyHandle(m_LastThrownObjectHandle);
    }
}
...
// This step can fail if there is no space left for a new handle
m_LastThrownObjectHandle = GetDomain()->CreateHandle(throwable);
The code snippets above show that the throwable object's GC handle is destroyed (i.e. a slot in the GC table is freed) and then a new handle is created. Since a slot has just been released, new handle creation will never fail, except of course in the highly rare scenario of a new thread getting scheduled at just the right time and consuming all the available GC handles.
Apart from this, all exceptions (including rethrows) are raised through the RaiseException Win32 API. The code that catches this exception to prepare the corresponding managed exception can itself raise an OutOfMemoryException.

Can the plain throw statement in C# ever cause a new exception in itself?
By definition it won't. The very point of throw; is to preserve the active exception (especially the stack-trace).
Theoretically an implementation could maybe clone the exception but what would be the point?

I suspect the bit you're missing may be the specification for rethrow, which is within ECMA-335, partition III, section 4.24:
4.24 rethrow – rethrow the current exception
Description:
The rethrow instruction is only permitted within the body of a catch handler (see
Partition I). It throws the same exception that was caught by this handler.
A rethrow does not change the stack trace in the object.
Exceptions:
The original exception is thrown.
(Emphasis mine)
So yes, it looks like your assertion is guaranteed to work according to the spec. (Of course this is assuming an implementation follows the spec...)
The relevant part of the C# specification is section 8.9.5 (C# 4 version):
A throw statement with no expression can be used only in a catch block, in which case that statement re-throws the exception that is currently being handled by that catch block.
Which again, suggests that the original exception and only that exception will be thrown.
(Section 5.3.3.11 which you referred to is just talking about definite assignment, not the behaviour of the throw statement itself.)
None of this invalidates Amit's points, of course, which are for situations which are somewhat outside the scope of what's specified in either place. (When hosts apply additional rules, it's hard for a language specification to take account of them.)

Your assertion will never fail because there is no code between the rethrow and the assertion. The only way the exception changes is if you catch it and cause another one, e.g. by having buggy code or a throw new in your catch clause.
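For contrast, here is a hedged sketch of the kind of change this answer means: swapping the plain throw; for a throw new ... inside the catch clause (exactly the sort of edit the question rules out).
try
{
    int x = 0, y = 1 / x;
}
catch (Exception outerException)
{
    try
    {
        // Instead of `throw;`, wrap and throw a *new* exception object:
        throw new InvalidOperationException("wrapped", outerException);
    }
    catch (Exception innerException)
    {
        // Now the Assert fails: innerException is the wrapper, not the original.
        System.Diagnostics.Debug.Assert(outerException.Equals(innerException));
    }
}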

Combined with recursion, a plain throw can easily cause a StackOverflowException on 64-bit platforms.
using System;

class Program
{
    // expect it to be 10 times less in real code
    static int max = 455;

    static void Test(int i)
    {
        try {
            if (i >= max) throw new Exception("done");
            Test(i + 1);
        }
        catch {
            Console.WriteLine(i);
            throw;
        }
    }

    static void Main(string[] args)
    {
        try {
            Test(0);
        }
        catch {
        }
        Console.WriteLine("Done.");
    }
}
In console:
...
2
1
0
Process is terminated due to StackOverflowException.
Some explanation may be found here.

Related

Programmatic check or try...catch

I have some code here:
public static void OpenConnection(IDbConnection connection)
{
    if (connection == null)
        throw new ArgumentNullException("connection", "The connection was null.");
    if (connection.State != ConnectionState.Closed)
        connection.Close();
}
The code has to be executed quite a lot since I open and close the connection every time I do something in the database. I wonder if the next code would be a better solution performance wise:
public static void OpenConnection(IDbConnection connection)
{
    try
    {
        connection.Close();
    }
    catch (NullReferenceException nullReferenceException) { throw; }
    catch (Exception exception) { } // This will occur if the connection was already closed so nothing should be done then.
}
PS. Is the catch (Exception exception) { } necessary?
EDIT: Replaced ArgumentNullException by NullReferenceException in the second code since that will be the exception when the connection == null.
I wonder if the next code would be a better solution performance wise
Consider what the performance and functional difference is in each case:
connection is null
You will get a NullReferenceException instead of an ArgumentNullException, which is a functional difference since you get a different exception type (and less context on why/where the exception occurs). If you decide to catch the NullReferenceException and throw an ArgumentNullException, then you have the overhead of throwing a new exception, so there's a performance hit.
The connection is not closed.
An attempt to close the connection is made - no real performance or functional difference here.
The connection is closed
You try to close the connection again. Probably not a huge functional difference here (since most providers probably don't get mad if you try to close a connection that's already closed), but it's unnecessary and may have some performance disadvantages depending on what the Close() method actually does.
So your second method has functional differences and may actually have a disadvantage performance wise.
Stick to the code that illustrates the expected behavior more cleanly - then only optimize if you have a measurable, correctable performance issue.
Apart from JDB's argument of exceptions being costly, take a close look at your code and tell me which of those is much easier to read/follow?
If you've never seen a method and it starts with a "try" you really need to think and take a close look. If it however starts with your guard clause (the if (connection == null) part), which by the way is a very common thing to do, you will see immediately without even having to think that if you pass null into the method you will get an exception. Take this guard clause as a contract. You never want null to be passed in there. It is much better design.
About the 'PS'. If you were to do this, remember that ALL other exceptions that might be thrown in connection.Close() will be caught and, unless done by you, never surface. Such things might get your application to incur bugs that are very hard to track down.
According to Microsoft, Exceptions are a huge performance hit. Try to avoid them whenever reasonable:
Throwing exceptions can be very expensive, so make sure that you don't throw a lot of them. Use Perfmon to see how many exceptions your application is throwing. It may surprise you to find that certain areas of your application throw more exceptions than you expected. For better granularity, you can also check the exception number programmatically by using Performance Counters.
Finding and designing away exception-heavy code can result in a decent perf win. Bear in mind that this has nothing to do with try/catch blocks: you only incur the cost when the actual exception is thrown. You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.
https://msdn.microsoft.com/en-us/library/ms973839.aspx#dotnetperftips_topic2
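As a rough illustration of the "control flow" point (not from the linked article; the method names are made up): prefer the Try pattern to catching a FormatException for input that is routinely bad.
// Exception-based control flow: pays the cost of a thrown exception
// every time the input is not a valid number.
static int ParseOrZeroSlow(string input)
{
    try { return int.Parse(input); }
    catch (FormatException) { return 0; }
}

// Try pattern: the expected "bad input" case throws nothing at all.
static int ParseOrZeroFast(string input)
{
    int value;
    return int.TryParse(input, out value) ? value : 0;
}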
Your second example is actually a worse scenario. You should hardly ever catch general exceptions. You may think you know what will be thrown, but it's very possible that something unexpected will be thrown instead, leading to system instability and possible data loss/corruption.
It's Still Wrong to Use Catch (Exception e)
Even though the CLR exception system marks the worst exceptions as CSE, it's still not a good idea to write catch (Exception e) in your code. Exceptions represent a whole spectrum of unexpected situations. The CLR can detect the worst exceptions—SEH exceptions that indicate a possibly corrupt process state. But other unexpected conditions can still be harmful if they are ignored or dealt with generically.
In the absence of process corruption, the CLR offers some pretty strong guarantees about program correctness and memory safety. When executing a program written in safe Microsoft Intermediate Language (MSIL) code you can be certain that all the instructions in your program will execute correctly. But doing what the program instructions say to do is often different from doing what the programmer wants. A program that is completely correct according to the CLR can corrupt persisted state, such as program files written to a disk.
https://msdn.microsoft.com/en-us/magazine/dd419661.aspx
Your second solution is not better for performance, because your application will work harder when the try block causes an exception and the catch block has to handle it. The second solution may look better from a logical point of view, though.
With your first solution you will get an error on the first check when the connection is null.
Try-Catch or Try-Catch-Finally are powerful tools to handle errors, but they are expensive. Check out this link to see what you can do with it: Using Try... Catch..., Finally!
For better performance I would use:
private static void OpenSqlConnection(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // Do some work
    }
}
The above example creates a SqlConnection, opens it, does some work. The connection is automatically closed at the end of the using block.
For your code for Try-catch I would do (to catch exception):
try
{
    conn.Close();
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.GetType().FullName);
    Console.WriteLine(ex.Message);
    // For an ASP.NET app:
    // Response.Write(ex.GetType().FullName);
    // Response.Write(ex.Message);
}
For try/catch best practices, please see this link: Best Practices for Exceptions

C# Re-throwing Exceptions

When throwing exceptions between multiple methods, should all methods re-throw the exception? For example:
Method1()
{
    Method2();
}

Method2()
{
    try
    {
        // Do something
    }
    catch
    {
        throw;
    }
}

try
{
    Method1();
}
catch
{
    // Do something about exception that was thrown from Method2()
}
Notice how in Method1(), I didn't need to wrap Method2() in a try block, should I be?
You don't need to wrap everything in try blocks.
You should only try when you want to catch something, and you should only catch something in the following cases:
You're ready to handle the exception (do whatever needs to be done and don't let it propagate up the stack),
You want to do something with the exception (e.g. log it) before rethrowing it (by using the parameterless form of throw),
You want to add details to the exception by wrapping it in another exception of your own (see Allon Guralnek's excellent comment below). A sketch of these last two cases follows.
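A minimal sketch of those last two cases; LoadReport, Log and ReportLoadFailedException are hypothetical names used only for illustration:
try
{
    LoadReport(path);
}
catch (IOException ex)
{
    Log(ex);    // case 2: do something with it, then rethrow unchanged
    throw;
}
catch (FormatException ex)
{
    // case 3: add detail by wrapping it in an exception of your own
    throw new ReportLoadFailedException("Report '" + path + "' is malformed.", ex);
}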
You do not need to try, catch, and rethrow exceptions unless you have some particular reason for catching them in the first place. Otherwise, they'll automatically get bubbled up from the lower level functions that throw them to the highest level function in your code. Essentially, you can think of them as getting "rethrown" all the way up, even though this isn't technically what is happening.
In fact, most of the time that you see a try/catch block written, it's incorrect. You should not catch exceptions unless you can actually handle them. It's utterly pointless (and in fact considered to be bad practice) to catch exceptions just to rethrow them. Do not wrap all of your code within try blocks.
Note that by "handle them", I mean that your code in the catch block will take some specific action based on the particular exception that was thrown that attempts to correct the exceptional condition.
For example, for a FileNotFoundException, you might inform the user that the file could not be found and ask them to choose another one.
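A rough sketch of that FileNotFoundException case (the dialog call and chosenPath variable are just for illustration, assuming a WinForms-style UI):
string text;
try
{
    text = File.ReadAllText(chosenPath);
}
catch (FileNotFoundException)
{
    MessageBox.Show("The file '" + chosenPath + "' could not be found. Please choose another one.");
    return;   // let the user pick a different file and try again
}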
See my answer here for more detail and a thorough discussion of "exception handling best practices".

Exception propagation in C#

Suppose I have three functions doA(), doB(), and doC() in a C# program where I know that doA() will call doB() which in turn calls doC().
Since doC() has to interact with a database, I know that it could very well generate exceptions that it won't be able to resolve and that need to be brought to the user's attention. At the moment, I have the code which might throw the error in a try / catch block in doC(), then the call to doC() in doB() in another try / catch, and similarly the call to doB() in doA() in a try / catch block. This allows me to just use throw; to kick the exception up to doA(), where something can reasonably be done to display it to the user.
This seems a little like overkill, though. I am wondering, since I don't plan on dealing with the exception in doB() or doC(), if I can just get rid of the try / catch blocks there.
Assuming there are no finally blocks involved, what is the best practice for dealing with situations like this?
If your catch blocks are just like this:
catch (Exception)
{
    throw;
}
then they are pointless indeed. You're not really handling the exception - don't bother with try/catch at all.
Personally I have very few try/catch blocks in my code - and although there are plenty of implicit try/finally blocks, most are due to using statements.
Yes I would get rid of the try/catch blocks - just let the exception propagate up to the top level and then catch it there. Catching an exception just to rethrow with throw; is simply not useful, although the following variation is actually harmful as it destroys the stack trace information:
catch (Exception exception)
{
    throw exception;
}
You only need to catch if you intend to do something (or are trying to stop propagation). If you don't catch, it goes to the catch in the caller. In your case, it seems like doA() (or possibly its caller, depending on where you can handle it) is the only function that needs try/catch.
Exceptions bubble up the call stack.
If the method where the exception happens doesn't handle it, the methods caller gets it. If the caller doesn't handle it, it goes further up the call stack until the framework handles it and crashes your application.
To answer your question: there is no need to rethrow an exception in your case.
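A hedged sketch of that arrangement for the doA()/doB()/doC() example from the question (SqlException stands in for the database failure, and ShowErrorToUser/RunDatabaseQuery are hypothetical helpers):
void doA()
{
    try
    {
        doB();
    }
    catch (SqlException ex)
    {
        // The only catch in the chain: surface the database problem to the user.
        ShowErrorToUser(ex.Message);
    }
}

void doB()
{
    doC();               // no try/catch: exceptions bubble up to doA()
}

void doC()
{
    RunDatabaseQuery();  // may throw SqlException; nothing to handle here
}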
The types of exceptions you'll be catching can be different at each level. I'm not sure what you are doing in the three levels, but at the top of the stack you may only be able to catch one broad type of exception, while the lower levels might throw different, more specific types; that kind of forces you to use a broad exception type rather than a specific one, which might not carry clear information.
So it depends on the types of exceptions you'll be throwing.
IMHO, an exception should be caught the fewest number of times possible; it's actually a rather expensive operation to catch an exception.
The case might come up where you're crossing application layers, and might want one layer to log/rethrow, and the next layer up also needs to catch it. But in your case, it's just one layer so I'd say at the highest place in the call stack where you can do something with the exception, log it and do your business logic.
In short the answer to your question is yes. The only reason to catch an exception is to do something with it. If you can't do anything useful with it in DoC() then just let it bubble up.
It is always a good practice to have try catch blocks at the entry points to your code (typically in event handlers in a win forms app) so that nothing goes uncaught. At that point what you can do with it is tell the user.
However, you may also want to put some lower level handlers in place as appropriate if they can take reasonable action. For example, in doC() you might want to catch exceptions that have to do with deadlocks and retry. At some level you may also want to catch constraint errors and throw more meaningful user targeted errors in their place. I have a blog post about that here.
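A rough sketch of the deadlock-and-retry idea mentioned above (SQL Server reports deadlock victims with error number 1205; the retry count and RunDatabaseQuery call are illustrative):
void doC()
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            RunDatabaseQuery();   // hypothetical data-access call
            return;
        }
        catch (SqlException ex)
        {
            // 1205 = chosen as deadlock victim: retry a few times, otherwise rethrow.
            if (ex.Number != 1205 || attempt == maxAttempts)
                throw;
        }
    }
}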

Eating Exceptions in c# om nom nom

Given that eating exceptions is always bad juju and re-throwing the exception loses the call stack, what's the proper way to re-factor the following?
Eating Exceptions:
try
{
    … do something meaningful
}
catch (SomeException ex)
{
    // eat exception
}
try
{
    ...
}
catch (SomeException e)
{
    // Do whatever is needed with e
    throw; // This rethrows and preserves call stack.
}
Catch and handle specific types of exceptions. Good practice is to not just catch System.Exception. A robust routine will strongly type the exceptions it knows how to handle.
Exceptions shouldn't be used for control flow, but there are often specific unwind procedures that need to be taken based on the type of exception.
Depending on the specific type, you may or may not choose to rethrow it. For example, an ASP parsing exception being thrown to an error page that USES the code causing the exception will cause an infinite loop.
try
{
}
catch (FileIOException)
{
    // unwind and re-throw as determined by the specific exception type
}
catch (UnauthorizedAccessException)
{
    // unwind and re-throw as determined by the specific exception type
}
catch (SomeOtherException)
{
    // unwind and re-throw as determined by the specific exception type
}
catch (Exception)
{
    // log and re-throw...add your own message, capture the call stack, etc.
    // throw original exception
    throw;

    // OR, throw your own custom exception that provides more specific detail and captures
    // the original exception as the inner exception
    throw new MyStronglyTypedException();
}
finally
{
    // always clean up
}
Most people think it's utterly evil to eat/suppress exceptions, especially with catch-alls. (Ironically, they use the catch all response of "don't use catch-alls, it's evil" :-). I don't understand the religious fervour with which people parrot this view, because if used sensibly, there is nothing wrong with this approach.
In my book, the worst case scenario is that my program catastrophically exits -> this creates a very unhappy customer with a total data loss situation. An unhandled exception is guaranteed to cause this every time. So failing to handle an exception is statistically more dangerous than any risk of instability that may occur if an exception is suppressed. In light of this, anything we can reasonably do to protect against an unhandled exception occurring is a good thing.
Many people seem to forget that catch alls can often handle any exception correctly, even if they don't know the details of what the exception was. By this I mean that they can guarantee that the program state remains stable, and the program continues to run within its design parameters. Or there may even be side effects such as the user finding a button unresponsive, but they still won't lose any data (i.e. graceful degradation is better than a fatal crash). For example, sometimes you want to return one value on success and a default if you fail for any reason. Part of designing code is knowing when to report errors to the user and when to fix a problem on their behalf so their program "just works". In this situation, a well designed catch-all is often the correct tool for the job.
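As a small illustration of that "default on failure" idea (the setting name, file and default value are made up):
// If anything at all goes wrong (missing file, bad contents, permissions),
// fall back to a safe default so the feature degrades gracefully instead of crashing.
static int LoadRefreshIntervalSeconds(string path)
{
    try
    {
        return int.Parse(File.ReadAllText(path).Trim());
    }
    catch
    {
        return 30;   // documented default; the program keeps running within its design parameters
    }
}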
Exceptions worry me. Fundamentally an exception is a guaranteed program crash if I don't handle it. If I only add specific exception handling for the exceptions I expect, my program is inherently fragile. Consider how easily it can be broken:
If a programmer forgets to document one exception they might throw, I won't know I need to catch it, and my code will have a vulnerability I'm not aware of.
If someone updates a method so that it throws a new exception type, that new exception could ripple up the call stack until it hits my code. But my code was not built to handle the exception. Don't tell me that the libraries I'm calling will never change.
Every exception type you specifically handle is another code path to be tested. It significantly multiplies the complexity of testing and/or the risks that a broken bit of handling code might go unnoticed.
The view underpinning the "suppression is evil" view is that all exceptions represent an instability or error - but in many cases programmers use exceptions to return little more than status information. For example, FileNotFound. The programmer writing file I/O code has decided on my behalf that a missing file is a fatal error. And it might be. It is up to me to catch this and decide that actually it's a common and perfectly normal, or expected, situation. A lot of the time, suppressing exceptions is necessary to simply stop someone else's "decision" taking out my application. The old approach of simply ignoring error return codes wasn't always a bad thing, especially given the amount of effort it takes to catch and suppress the myriad "status" exceptions that are bandied about.
The trick to silently eating/suppressing exceptions is just to be sure that this is the correct way to handle them. (And in many cases, it's not). So there may be no need to refactor your example code - it might not be bad juju.
That all depends on where the code lives.
In the depths of the system? If that is the case then I would gather some form of standard error handling should exist across the product, if not it needs to.
If it is on the presentation side for instance it may have no value to anyone except the code, and in that case additional logic may need to be placed in a finally block.
Or let it roll up hill altogether and don't wrap it in a try catch if you aren't going to do anything useful in the catch anyways.
To add to the excellent comments already provided.
There are three ways to "re-throw" an exception:
catch (Exception ex)
{
    throw;
}
The above preserves the call stack of the original exception.
catch (Exception ex)
{
    throw ex;
}
The above discards the original stack trace and begins a new one from the point of the rethrow.
catch (Exception ex)
{
    throw new MyException("blah", ex);
}
The above adds the original exception to the InnerException of a new chain. This can be the best of both worlds, but which one is correct is highly dependent on what you need.
Your code can be rewritten (to eat exception) like this
try
{
    … do something meaningful
}
catch
{
    // eat exception
}
But I don't understand what you want to do by refactoring!!
Edit:
Re-throwing using throw; doesn't always work. Read this:
http://weblogs.asp.net/fmarguerie/archive/2008/01/02/rethrowing-exceptions-and-preserving-the-full-call-stack-trace.aspx
In general, it's not a good idea to catch the general Exception unless you can actually handle it. I think the right answer is a combination of Tim's and Joshua's answers. If there are specific exceptions that you can handle and remain in a good state, for example FileNotFoundException you should catch it, handle it, and move on, as seen here:
try
{
    // do something meaningful
}
catch (FileNotFoundException)
{
    MessageBox.Show("The file does not exist.");
}
If you can't handle it and remain in a good state, don't catch it in the first place.
However, one case where you would want to catch the general Exception and re-throw it would be if you have any cleanup that you will need to do, for example aborting a database transaction, before the exception bubbles up. We can accomplish this by extending the previous example like so:
try
{
    BeginTransaction();
    // do something meaningful
    CommitTransaction();
}
catch (FileNotFoundException)
{
    MessageBox.Show("The file does not exist.");
    AbortTransaction();
}
catch (Exception)
{
    AbortTransaction();
    throw; // using "throw;" instead of "throw ex;" preserves the stack trace
}
Refactor it to:
// No try
{
    … do something meaningful
}
// No catch
and let the exception be handled at the main loop.
If the catch block only rethrows the exception and does not do any real exception handling, then you don't need try...catch at all.
Part of the problem with eating exceptions is that it's inherently unclear what they're hiding. So... the question of the proper refactoring isn't easily answered. Ideally, however, you'd remove the try...catch clause entirely; it's unnecessary in most cases.
Best practice is to avoid try...catch entirely wherever possible; if you must deal with exceptions, then do so as locally and specifically as possible and don't propagate them up the stack; finally, include a global unhandled exception handler that does the appropriate logging (and perhaps offers to restart the application if necessary).
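A hedged sketch of such a global handler for a console-style app (the AppDomain.UnhandledException event is real; RunApplication and the logging are placeholders, and WinForms/WPF apps have additional events of their own):
static void Main()
{
    AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
    {
        // Log whatever we can before the process terminates.
        var ex = e.ExceptionObject as Exception;
        Console.Error.WriteLine("Fatal: " + (ex != null ? ex.ToString() : e.ExceptionObject.ToString()));
        // ...optionally write a crash report or offer to restart...
    };

    RunApplication();   // hypothetical entry point for the real work
}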
Unless the catch block actually does something with the exception (e.g., logging it to a system error file), there is no need to even have the try/catch block.
That being said, if the exception is worth informing the user about (e.g, logging it), then by all means use a catch block to do so.
One particularly bad pitfall of ignoring exceptions is that certain (fatal) exceptions should cause the program to terminate. Such exceptions (e.g., failure to load a class) leave the program in an unstable state, which will only lead to disaster later on in the execution. In these cases, logging the exception and then gracefully terminating is the only reasonable thing to do.
The particular way in which exceptions are eaten is not important. Never eat exceptions by any means!
Only catch exceptions that are expected to occur and which you can do something about. Examples of this include file and network IO, security exceptions, etc. For those cases you can display an explaination of what happened to the user, and sometimes you can recover gracefully.
Do not catch exceptions that should never occur. Examples of these are null-reference exceptions, invalid operation exceptions, etc. The code should be written so that these exceptions never happen, so there is no need to catch them. If those exceptions are happening, then fix the bugs. Don't swallow the exceptions.
It is OK to log all exceptions, but this should be done with the unhandled exception handler on the program and any threads that are created. This is not done with a try/catch.
You can rethrow an exception without losing the call stack; just re-throw it like this:
catch (Exception e)
{
    throw;
}
Why would you need this?
Usage example:
Somewhere in your app you have 3rd-party code and you wrap it, so if it throws exceptions you throw a WrappingException.
When you execute some other code you might get an exception either from the 3rd party or from your own code, so you may need:
try
{
    // code that runs 3rd party
    // your code, but it may throw a null ref or any other exception
}
catch (WrappingException)
{
    throw;
}
catch (Exception e)
{
    throw new MyAppLayer3Exception("there was exception...", e);
}
In this case you do not wrap WrappingException with your MyAppLayer3Exception.
So, at the top level of your application you may catch all exceptions, and by knowing the type of exception you will know where it came from!
Hope it helps.
Eating exceptions is not always "bad juju". There is no magic here; just write code to do what you need to do. As a matter of hygiene, if you catch an exception and ignore it, add a comment explaining why you are doing so.
try
{
    .....
}
catch (something)
{
    // we can safely ignore ex because ....
}
Sometimes, it's just best not to deal with exceptions if you really don't want to take on the added responsibility that comes with them. For example, rather than catching a NullReferenceException, why not just make sure that the object exists before you try to do something with it?
if (yourObject != null)
{
    ... do something meaningful with yourObject ...
}
Exceptions are best reserved to handle those things you really have no control over, such as the sudden loss of a connection, or things which have the potential to kill a long-running process, such as a data import. When an exception is thrown, regardless of the reason, your application has reached a point of instability. You catch the exception to return the application to a point of stability by cleaning up the mess, e.g. disposing of the lost connection and creating a new one OR logging the line where the error occurred and advancing to the next line.
I've been dealing with exception handling for the last 15 years, starting with the first six versions of Delphi, up to (and including) .NET 1.0-4.0. It is a powerful tool, but it is a tool that is often overused. I have found consistently, during that time, the most effective exception handling process is deferring to if-then before try-catch.
One major problem with the exception hierarchy is that exceptions are categorized based upon what happened, rather than based upon the system state. Some exceptions mean "a function couldn't perform its task, but it didn't disturb the system state either". Others mean "Run for your lives! The whole system is melting down!" In many cases, it would be entirely proper for a routine which could handle the failure of a called method to swallow any and all exceptions of the former type; in other cases, such exceptions should be rethrown in a manner which indicates possible state corruption (e.g. because there was a failure in an operation necessary to reset the system state; even though the attempt to perform that operation didn't disturb anything, the fact that the state wasn't reset means it's corrupted).
It would be possible for one to manage one's own exceptions into such a hierarchy, but I don't know any good way to deal with other exceptions.

Catching specific vs. generic exceptions in c#

This question comes from a code analysis run against an object I've created. The analysis says that I should catch a more specific exception type than just the basic Exception.
Do you find yourself using just catching the generic Exception or attempting to catch a specific Exception and defaulting to a generic Exception using multiple catch blocks?
One of the code chunks in question is below:
internal static bool ClearFlags(string connectionString, Guid ID)
{
    bool returnValue = false;
    SqlConnection dbEngine = new SqlConnection(connectionString);
    SqlCommand dbCmd = new SqlCommand("ClearFlags", dbEngine);
    SqlDataAdapter dataAdapter = new SqlDataAdapter(dbCmd);
    dbCmd.CommandType = CommandType.StoredProcedure;
    try
    {
        dbCmd.Parameters.AddWithValue("@ID", ID.ToString());
        dbEngine.Open();
        dbCmd.ExecuteNonQuery();
        dbEngine.Close();
        returnValue = true;
    }
    catch (Exception ex)
    {
        ErrorHandler(ex);
    }
    return returnValue;
}
Thank you for your advice
EDIT: Here is the warning from the code analysis
Warning 351 CA1031 : Microsoft.Design : Modify 'ClearFlags(string, Guid)' to catch a more specific exception than 'Exception' or rethrow the exception
You should almost never catch the top level Exception.
In most cases you should catch and handle the most specific exception possible and only if there is something useful you can do with it.
The exception (haha) to this is if you are catching in order to log and re-throw: it is sometimes OK to catch a top-level Exception, log it, and rethrow it.
You should almost never catch a top-level Exception and swallow it. If you are catching a top-level exception, you don't really know what you are handling; absolutely anything could have caused it, so you will almost certainly not be able to do anything that handles every single failure case correctly. There probably are some failures that you may want to silently handle and swallow, but by swallowing top-level Exceptions you'll also be swallowing a whole bunch that really should have been thrown upwards for your code to handle higher up. In your code example, what you probably want to do is handle a SqlException and log+swallow that, and then for an Exception, log and rethrow it. This covers you: you're still logging all exception types, but you're only swallowing the fairly predictable SqlException, which indicates problems with your SQL/database.
A common practice is to only ever handle exceptions that you can actually resolve at that point; if you can't resolve it at that point in the code, allow it to bubble upwards. If you can't resolve it at the next level up, allow it to continue up. If it reaches the top unhandled, then display a polite apology to the user (perhaps attempt a quick autosave) and close the app. It's generally considered worse to allow an app to continue running after an unhandled exception, because you can't predict the state of the application once something exceptional has occurred. It's better just to shut down and restart the app to get back to an expected state.
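A hedged sketch of that suggestion applied to the ClearFlags method from the question (ErrorHandler is the question's own logging helper):
try
{
    dbCmd.Parameters.AddWithValue("@ID", ID.ToString());
    dbEngine.Open();
    dbCmd.ExecuteNonQuery();
    dbEngine.Close();
    returnValue = true;
}
catch (SqlException ex)
{
    // Predictable database failure: log it and swallow (the method returns false).
    ErrorHandler(ex);
}
catch (Exception ex)
{
    // Anything else is unknown territory: log it and let it bubble up.
    ErrorHandler(ex);
    throw;
}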
Have a look at this article by Krzysztof Cwalina, which I've found very helpful in understanding when to catch or ignore exceptions:
How to Design Exception Hierarchies
All the principles it describes about designing exception hierarchies are also applicable when deciding when to catch, throw, or ignore exceptions. He divides exceptions into three groups:
Usage errors, such as DivideByZeroException, which indicate errors in code; you shouldn't handle these because they can be avoided by changing your code.
Logical errors, such as FileNotFoundException, which you need to handle because you can't guarantee they won't happen. (Even if you check for the file's existence, it could still be deleted in that split-second before you read from it.)
System failures, such as OutOfMemoryException, which you can't avoid or handle.
You should read a general paper or google "Structured Exception Handling" and get a better big picture of what this topic is all about, but in general, catching every exception is considered bad practice because you have no idea what the exception was (Memory fault, out of memory error, Disk failure, etc.).
And for many unknown/unexpected exceptions, you should not be allowing the application to continue. In general, you "catch" and handle only those exceptions that you have determined, as a result of analysing the method you are writing the catch clause for, that the method can in fact raise and that you can do something about. The only time you should catch all exceptions (catch Exception x) is to do something like logging, in which case you should immediately rethrow the same exception (whatever it was) so that it can bubble up the stack to some general "unhandled exception handler" which can display an appropriate message to the user and then cause the application to terminate.
Yes,
You should catch from the most specific exception down to the least, so you can deal with things in an appropriate manner.
For example, if you were making a web request, you should catch things like timeouts and 404s first; then you can inform the end user that they should retry (timeout) and/or check the URL they entered.
Then you could catch something less general, in case something a bit more wacky goes wrong, then fall right back to just catching an Exception in the case that something ridiculous happens.
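A rough sketch of that ordering for a web request (WebException covers both timeouts and HTTP errors such as 404; the url variable and messages are illustrative):
try
{
    using (var client = new System.Net.WebClient())
    {
        string page = client.DownloadString(url);   // url entered by the user
    }
}
catch (System.Net.WebException ex)
{
    if (ex.Status == System.Net.WebExceptionStatus.Timeout)
        Console.WriteLine("The request timed out. Please retry.");
    else
        Console.WriteLine("The request failed. Check the URL you entered: " + ex.Message);
}
catch (Exception ex)
{
    // Something a bit more wacky went wrong.
    Console.WriteLine("Unexpected error: " + ex.Message);
}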
As a best practice, you should avoid catching Exception and using flags as return values.
Instead, you should design custom exceptions for expected exceptions and catch those directly. Anything else should bubble up as an unexpected exception.
In your example above, you may want to rethrow a more business specific Exception.
I agree that, in general, you should only catch exceptions you're expecting and understand how to handle. A few cases where I often don't do this:
As mentioned above, if I'm capturing some sort of useful information to log and then rethrow.
If I'm performing an asynchronous operation, such as handling queued messages or jobs in a worker thread, and I want to catch the exception for rethrowing in a different context. I also often use an ugly hack here that tricks the CLR into appending stack trace information, so that it's not lost when rethrowing in the new context (see the note after this list).
If I'm working with an isolated task or operation and I can handle the exception by shutting down the task, without shutting down the whole application. I often wish here that there were a top-level exception for truly fatal exceptions (like OutOfMemoryException), as I've been ignoring these. The proper way to handle this would be to run the isolated task in its own AppDomain, but I haven't had the available schedule time to implement this on a project yet.
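(This is not the hack that answer alludes to, but for reference: since .NET 4.5, ExceptionDispatchInfo offers a supported way to capture an exception in one context and rethrow it in another with the original stack trace intact. DoQueuedWork is a hypothetical work item.)
using System.Runtime.ExceptionServices;

ExceptionDispatchInfo captured = null;

// On the worker thread:
try
{
    DoQueuedWork();
}
catch (Exception ex)
{
    captured = ExceptionDispatchInfo.Capture(ex);
}

// Later, back in the original context:
if (captured != null)
    captured.Throw();   // rethrows while preserving the worker's stack trace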
I agree with the code analysis tool. My exception to the rule is that I catch the general exception in my event handlers and the user is given the option to report the error or to ignore it.
In the example you provide, I think the code analysis got it right. If you can't handle a specific exception right there, you shouldn't be catching anything at all and let it bubble up to the highest level. That way you'll have a much easier time recreating the issue when you try to fix it.
You could make your example better by adding the connection string and the ID value to the exception's Data property and be sure it is logged as well. That way you give yourself a fighting chance at reproducing the error.
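A small sketch of that suggestion (the Data keys are arbitrary; ErrorHandler is the question's logging helper):
catch (Exception ex)
{
    // Attach context so the log gives you a fighting chance at reproducing the failure.
    ex.Data["connectionString"] = connectionString;
    ex.Data["ID"] = ID;
    ErrorHandler(ex);   // make sure the handler logs ex.Data as well
    throw;              // let it bubble up rather than swallowing it
}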
