Azure Functions 2.x keeps throwing caught exceptions - C#

In my Azure Functions 2.x project, part of a function is a try-catch block (without finally) that looks more or less like this:
Dictionary<string, int> varDict = null;
Tuple<string, DateTime> varTupl = null;
try
{
    varDict = await Core.GetDict(lcLat.Value, lcLong.Value);
    varTupl = await Core.GetTupl(lcLat.Value, lcLong.Value);
}
catch (AggregateException ae)
{
    ae.Flatten().Handle(ex =>
    {
        // `log` is an ILogger, the standard Azure Functions parameter
        log.LogError(ex, ""); // writes the exception's error
        Debug.WriteLine(""); // writes the exception's error
        // the written content is omitted for readability's sake,
        // but is shown below
        return true;
    });
}
catch (Exception ex)
{
    // does exactly what Handle() does
}
if (varDict != null && varTupl != null)
{
    // The code never gets here and always returns HTTP 500 instead
}
else
{
    // Nor here
}
The Run method itself is an async Task<IActionResult>. Core is a public static class containing the GetDict() and GetTupl() methods; each is also a static async Task<T> with its respective T return type, and neither has a try-catch block, only using statements (which aren't supposed to throw any exceptions, right?).
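For context, a minimal sketch of the shape being described; the bodies, endpoint, and HTTP client are hypothetical, only the signatures and the use of using blocks follow the question:
public static class Core
{
    // Hypothetical sketch: the real bodies are not shown in the question.
    public static async Task<Dictionary<string, int>> GetDict(double lat, double lng)
    {
        using (var client = new HttpClient())
        {
            // ... call some backing service and build the dictionary ...
            await client.GetStringAsync("https://example.invalid/dict");
            return new Dictionary<string, int>();
        }
    }

    public static async Task<Tuple<string, DateTime>> GetTupl(double lat, double lng)
    {
        using (var client = new HttpClient())
        {
            // ... call some backing service and build the tuple ...
            await client.GetStringAsync("https://example.invalid/tupl");
            return Tuple.Create(string.Empty, DateTime.UtcNow);
        }
    }
}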
The problem is that even though (I assume) the raised exceptions bubble up into my try-catch block, and even though my catch block runs and prints the exception with my formatting (as shown in the screenshot), my Azure Function keeps returning HTTP error 500 and skips the rest of the code after the try-catch block.
What I have tried:
Disabled the 'Just My Code' debugging option in Visual Studio 2017
Added a catch for AggregateException; before this it only caught Exception
Flattened the AggregateException before calling Handle() on it
Is this common in a local development environment, or am I just handling everything incorrectly?
Also, the output window keeps printing out something like this
and this
even in the idle state (while the HTTP endpoint isn't being invoked; the function is just running in debug mode, idly waiting for an invocation).
Is this something I should be concerned about? Is it even related to my problem?

Related

Exception in loaded plugin crashes parent application

I'm writing a .NET 6 application for which users can create plugins. In some situations, however, when a plugin throws an unhandled exception, my own application crashes as well. That should not happen, no matter what. The plugin may stop working, it may unload, it may do whatever, as long as it leaves the parent app alive.
Loading happens like this:
public static ServiceInfo? LoadService(string relativePath)
{
    var loadContext = new ServiceLoadContext(relativePath);
    _alcs.Add(loadContext);
    try
    {
        var assembly = loadContext.LoadFromAssemblyName(new AssemblyName(Path.GetFileNameWithoutExtension(relativePath)));
        var shouldLoadDll = false;
        foreach (var type in assembly.GetTypes())
        {
            if (typeof(IMonitorableService).IsAssignableFrom(type))
            {
                var directoryName = new FileInfo(relativePath).Directory!.FullName;
                if (Activator.CreateInstance(type, new object[] { directoryName }) is IMonitorableService result)
                {
                    shouldLoadDll = true;
                    return new ServiceInfo
                    {
                        Category = Directory.GetParent(relativePath)!.Name,
                        Name = Path.GetFileNameWithoutExtension(relativePath),
                        AssemblyPath = relativePath,
                        Service = result
                    };
                }
            }
        }
        if (!shouldLoadDll)
        {
            loadContext.Unload();
        }
    }
    catch (Exception)
    {
        // This is handled, but this won't catch the exception in the plugin
    }
    return null;
}
I have my share of try/catch blocks, and since these IMonitorableServices are BackgroundServices, they're started like this:
public async Task StartAsync(CancellationToken cancellationToken)
{
    foreach (var service in _options.Services)
    {
        try
        {
            await service.Service.StartAsync(cancellationToken);
        }
        catch (Exception ex)
        {
            // This is handled, but it won't catch the exception in the plugin
        }
    }
}
Now I doubt that it's really relevant to provide the specific error, but just in case: it's a
'System.InvalidOperationException: 'Collection was modified; enumeration operation may not execute',
following an operation on event subscriptions. I know how to solve that in the plugin, but I could never trust my future plugin writers to always handle their exceptions (or prevent them from happening). I need some way to catch absolutely everything in my own application. I've been breaking my head over this and I can find many considerations on plugins loaded in AppDomains, but they're from the .NET Framework era...
Who has an idea how to solve this? I could hardly imagine this is something that has been overlooked in .NET Core/6 development.
Update: I find that other types of exceptions actually are caught within the StartAsync method. So it might have something to do with the exception being raised from an event in the plugin (I don't want to put you on the wrong track, though). I must add that the event is registered from within the StartAsync method, but it seems to bypass the regular catch.
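For what it's worth, that observation is consistent with the handler running on a later call stack: the try/catch around StartAsync only covers exceptions thrown while that awaited call is still executing. A minimal sketch of the situation described (the plugin type, event, and timing are hypothetical; only the StartAsync member is shown):
// Hypothetical plugin: the handler is registered inside StartAsync, but the
// event fires later on another thread, long after the host's
// `await service.Service.StartAsync(...)` (and its catch) has completed.
public class SamplePlugin : IMonitorableService
{
    public event EventHandler? SomethingChanged;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        SomethingChanged += (s, e) => throw new InvalidOperationException("boom");

        new Thread(() =>
        {
            Thread.Sleep(1000);
            // Unhandled here, on a raw thread: it never passes through the
            // host's catch and takes the whole process down.
            SomethingChanged?.Invoke(this, EventArgs.Empty);
        }).Start();

        return Task.CompletedTask; // StartAsync itself completes without error
    }
}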

Exceptions are just ignored in async code block

Before I used Nito.MVVM, I used plain async/await; it would throw an aggregate exception that I could read to know what I had. But since switching to Nito, my exceptions are ignored and the program jumps out of the async code block and continues executing. I know that it catches exceptions, because when I put a breakpoint on the catch (Exception ex) line it breaks there, but with ex = null. I know that NotifyTask has properties to check whether an exception was thrown, but where I put that check, it runs while the Task is still incomplete, not when I need it.
View model:
public FileExplorerPageViewModel(INavigationService navigationService)
{
    _navigationService = navigationService;
    _manager = new FileExplorerManager();
    Files = NotifyTask.Create(GetFilesAsync("UniorDev", "GitRemote/GitRemote"));
}
Private method:
private async Task<ObservableCollection<FileExplorerModel>> GetFilesAsync(string login, string reposName)
{
    return new ObservableCollection<FileExplorerModel>(await _manager.GetFilesAsync(login, reposName));
}
Manager method (where the exception is thrown):
public async Task<List<FileExplorerModel>> GetFilesAsync(string login, string reposName)
{
    //try
    //{
    var gitHubFiles = await GetGitHubFilesAsync(login, reposName);
    var gitRemoteFiles = new List<FileExplorerModel>();
    foreach (var file in gitHubFiles)
    {
        if (file.Type == ContentType.Symlink || file.Type == ContentType.Submodule) continue;
        var model = new FileExplorerModel
        {
            Name = file.Name,
            FileType = file.Type.ToString()
        };
        if (model.IsFolder)
        {
            var nextFiles = await GetGitHubFilesAsync(login, reposName);
            var count = nextFiles.Count;
        }
        model.FileSize = file.Size.ToString();
        gitRemoteFiles.Add(model);
    }
    return gitRemoteFiles;
    //}
    //catch (WebException ex)
    //{
    //    throw new Exception("Something wrong with internet connection, try to On Internet " + ex.Message);
    //}
    //catch (Exception ex)
    //{
    //    throw new Exception("Getting ExplorerFiles from github failed! " + ex.Message);
    //}
}
With or without the try/catch, it has the same effect. This behavior occurs anywhere I have a NotifyTask.
Update
There is no event that fires when an exception occurs, but there is a PropertyChanged event, so I used it and added this code:
private void FilesOnPropertyChanged(object sender, PropertyChangedEventArgs propertyChangedEventArgs)
{
    throw new Exception("EXCEPTION");
    bool failed;
    if (Files.IsFaulted)
        failed = true;
}
And the exception does not fire.
I added a throw in the App class (the main class) and it fired. When I have exceptions that come from XAML, they also fire. So maybe it doesn't fire when it comes from a view model, or something else; I have no idea. I would be very happy for some help with this.
Update
We dealt with the exception = null issue, but the question is still alive. What I want to add is that I rarely see this issue when the app is starting to launch on a physical device. I read some info about it, and it doesn't seem to be related, but maybe it is:
I'm not entirely sure what your desired behavior is, but here's some information I hope you find useful.
NotifyTask is a data-bindable wrapper around Task. That's really all it does. So, if its Task faults with an exception, then it will update its own data-bindable properties regarding that exception. NotifyTask is intended for use when you want the UI to respond to a task completing, e.g., show a spinner while the task is in progress, an error message if the task faults, and data if the task completes successfully.
If you want your application to respond to the task faulting (with code, not just a UI update), then you should use try/catch like you have commented out in GetFilesAsync. NotifyTask doesn't change how those exceptions work; they should work just fine.
I know that it catch exceptions because when I put a breakpoint on catch(Exception ex) line it breaks here but with ex = null.
That's not possible. I suggest you try it again.
I know that NotifyTask has properties to check if an exception was thrown but where I put it, it checks when Task is uncompleted, not when I need it.
If you really want to (asynchronously) wait for the task to complete and then check for exceptions, then you can do so like this:
await Files.TaskCompleted;
var ex = Files.InnerException;
Or, if you just want to re-raise the exception:
await Files.Task;
Though I must say this usage is extremely unusual. The much more proper thing to do is to have a try/catch within your GetFilesAsync.
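For example, a minimal sketch of that suggestion (the caught type and the fallback value are placeholders; narrow them to whatever the manager can actually throw):
private async Task<ObservableCollection<FileExplorerModel>> GetFilesAsync(string login, string reposName)
{
    try
    {
        return new ObservableCollection<FileExplorerModel>(await _manager.GetFilesAsync(login, reposName));
    }
    catch (Exception ex) // prefer a more specific exception type
    {
        // React here: log ex, surface an error message on the view model, etc.,
        // and return something the UI can bind to instead of faulting the task.
        return new ObservableCollection<FileExplorerModel>();
    }
}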

Catching exceptions with "catch, when"

I came across this new feature in C# which allows a catch handler to execute when a specific condition is met.
int i = 0;
try
{
    throw new ArgumentNullException(nameof(i));
}
catch (ArgumentNullException e)
when (i == 1)
{
    Console.WriteLine("Caught Argument Null Exception");
}
I am trying to understand when this may ever be useful.
One scenario could be something like this:
try
{
    DatabaseUpdate();
}
catch (SQLException e)
when (driver == "MySQL")
{
    // MySQL-specific error handling and wrapping up the exception
}
catch (SQLException e)
when (driver == "Oracle")
{
    // Oracle-specific error handling and wrapping up of the exception
}
..
but this is again something that I can do within the same handler and delegate to different methods depending on the type of the driver. Does this make the code easier to understand? Arguably no.
Another scenario that I can think of is something like:
try
{
    SomeOperation();
}
catch (SomeException e)
when (Condition == true)
{
    // some specific error handling that this layer can handle
}
catch (Exception e) // catch-all
{
    throw;
}
Again this is something that I can do like:
try
{
    SomeOperation();
}
catch (SomeException e)
{
    if (condition == true)
    {
        // some specific error handling that this layer can handle
    }
    else
        throw;
}
Does using the 'catch, when' feature make exception handling faster, because the handler is skipped and the stack unwinding can happen much earlier compared to handling the specific use cases within the handler? Are there any specific use cases that fit this feature better, which people could then adopt as good practice?
Catch blocks already allow you to filter on the type of the exception:
catch (SomeSpecificExceptionType e) {...}
The when clause allows you to extend this filter to generic expressions.
Thus, you use the when clause for cases where the type of the exception is not distinct enough to determine whether the exception should be handled here or not.
A common use case is exception types that are actually a wrapper for multiple, different kinds of errors.
Here's a case that I've actually used (in VB, which already has this feature for quite some time):
try
{
    SomeLegacyComOperation();
}
catch (COMException e) when (e.ErrorCode == 0x1234)
{
    // Handle the *specific* error I was expecting.
}
The same goes for SqlException, which also has an ErrorCode property. The alternative would be something like this:
try
{
    SomeLegacyComOperation();
}
catch (COMException e)
{
    if (e.ErrorCode == 0x1234)
    {
        // Handle error
    }
    else
    {
        throw;
    }
}
which is arguably less elegant and slightly breaks the stack trace.
In addition, you can mention the same type of exception twice in the same try-catch-block:
try
{
    SomeLegacyComOperation();
}
catch (COMException e) when (e.ErrorCode == 0x1234)
{
    ...
}
catch (COMException e) when (e.ErrorCode == 0x5678)
{
    ...
}
which would not be possible without the when condition.
From Roslyn's wiki (emphasis mine):
Exception filters are preferable to catching and rethrowing because they leave the stack unharmed. If the exception later causes the stack to be dumped, you can see where it originally came from, rather than just the last place it was rethrown.
It is also a common and accepted form of “abuse” to use exception filters for side effects; e.g. logging. They can inspect an exception “flying by” without intercepting its course. In those cases, the filter will often be a call to a false-returning helper function which executes the side effects:
private static bool Log(Exception e) { /* log it */ ; return false; }
… try { … } catch (Exception e) when (Log(e)) { }
The first point is worth demonstrating.
static class Program
{
    static void Main(string[] args)
    {
        A(1);
    }

    private static void A(int i)
    {
        try
        {
            B(i + 1);
        }
        catch (Exception ex)
        {
            if (ex.Message != "!")
                Console.WriteLine(ex);
            else throw;
        }
    }

    private static void B(int i)
    {
        throw new Exception("!");
    }
}
If we run this in WinDbg until the exception is hit, and print the stack using !clrstack -i -a, we'll see just the frame of A:
003eef10 00a7050d [DEFAULT] Void App.Program.A(I4)
PARAMETERS:
+ int i = 1
LOCALS:
+ System.Exception ex # 0x23e3178
+ (Error 0x80004005 retrieving local variable 'local_1')
However, if we change the program to use when:
catch (Exception ex) when (ex.Message != "!")
{
    Console.WriteLine(ex);
}
We'll see the stack also contains B's frame:
001af2b4 01fb05aa [DEFAULT] Void App.Program.B(I4)
PARAMETERS:
+ int i = 2
LOCALS: (none)
001af2c8 01fb04c1 [DEFAULT] Void App.Program.A(I4)
PARAMETERS:
+ int i = 1
LOCALS:
+ System.Exception ex # 0x2213178
+ (Error 0x80004005 retrieving local variable 'local_1')
That information can be very useful when debugging crash dumps.
When an exception is thrown, the first pass of exception handling identifies where the exception will get caught before unwinding the stack; if/when the "catch" location is identified, all "finally" blocks are run (note that if an exception escapes a "finally" block, processing of the earlier exception may be abandoned). Once that happens, code will resume execution at the "catch".
If there is a breakpoint within a function that's evaluated as part of a "when", that breakpoint will suspend execution before any stack unwinding occurs; by contrast, a breakpoint at a "catch" will only suspend execution after all finally handlers have run.
Finally, if lines 23 and 27 of foo call bar, and the call on line 23 throws an exception which is caught within foo and rethrown on line 57, then the stack trace will suggest that the exception occurred while calling bar from line 57 [location of the rethrow], destroying any information about whether the exception occurred in the line-23 or line-27 call. Using when to avoid catching an exception in the first place avoids such disturbance.
BTW, a useful pattern which is annoyingly awkward in both C# and VB.NET is to use a function call within a when clause to set a variable which can be used within a finally clause to determine whether the function completed normally, to handle cases where a function has no hope of "resolving" any exception that occurs but must nonetheless take action based upon it. For example, if an exception is thrown within a factory method which is supposed to return an object that encapsulates resources, any resources that were acquired will need to be released, but the underlying exception should percolate up to the caller. The cleanest way to handle that semantically (though not syntactically) is to have a finally block check whether an exception occurred and, if so, release all resources acquired on behalf of the object that is no longer going to be returned. Since cleanup code has no hope of resolving whatever condition caused the exception, it really shouldn't catch it, but merely needs to know what happened. Calling a function like:
bool CopySecondArgumentToFirstAndReturnFalse<T>(ref T first, T second)
{
    first = second;
    return false;
}
within a when clause will make it possible for the factory function to know that something happened.
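A rough sketch of how that might look in the factory scenario described above (the Wrapper and SomeResource types are made up for illustration):
public static Wrapper CreateWrapper()
{
    Exception caught = null;            // set by the `when` filter, inspected in `finally`
    var resource = new SomeResource();  // hypothetical resource acquired for the wrapper
    try
    {
        return new Wrapper(resource);   // may throw before the wrapper takes ownership
    }
    catch (Exception ex) when (CopySecondArgumentToFirstAndReturnFalse(ref caught, ex))
    {
        // Never reached: the filter always returns false, so nothing is caught here
        // and the original exception keeps propagating with its stack intact.
        throw;
    }
    finally
    {
        if (caught != null)
            resource.Dispose();         // clean up only when the method did not complete normally
    }
}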

How to not throw exception in ASP.NET Web Api service?

I am building an ASP.NET Web API service and I would like to create centralized exception-handling code.
I want to handle different types of exceptions in different ways. I will log all exceptions using log4net. For some types of exceptions I will want to notify an administrator via email. For some types of exceptions I want to rethrow a friendlier exception that will be returned to the caller. For some types of exceptions I want to just continue processing from the controller.
But how do I do that? I am using an Exception Filter Attribute. I have this code working. The attribute is registered properly and the code is firing. I just want to know how I can continue if certain types of exceptions are thrown. Hope that makes sense.
public class MyExceptionHandlingAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext actionExecutedContext)
    {
        var myException = actionExecutedContext.Exception;

        // Log all errors
        _log.Error(myException);

        if (myException is [one of the types I need to notify about])
        {
            // ...send out notification email
        }
        if (myException is [one of the types that we continue processing])
        {
            // ...don't do anything, return back to the caller and continue
            // ...Not sure how to do this. How do I basically not do anything here?
        }
        if (myException is [one of the types where we rethrow])
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.InternalServerError)
            {
                Content = new StringContent("Friendly message goes here."),
                ReasonPhrase = "Critical Exception"
            });
        }
    }
}
For some types of exceptions I want to just continue processing from the controller. But how do I do that?
By writing try..catch where you want this behaviour to occur. See Resuming execution of code after exception is thrown and caught.
To clarify, I assume you have something like this:
void ProcessEntries(entries)
{
    foreach (var entry in entries)
    {
        ProcessEntry(entry);
    }
}

void ProcessEntry(entry)
{
    if (foo)
    {
        throw new EntryProcessingException();
    }
}
And when EntryProcessingException is thrown, you actually don't care and want to continue execution.
If this assumption is correct: you can't do that with a global exception filter, as once an exception is caught, there's no returning execution to where it was thrown. There is no On Error Resume Next in C#, especially not when the exceptions are handled using filters as #Marjan explained.
So, remove EntryProcessingException from your filter, and catch that specific exception by changing the loop body:
void ProcessEntries(entries)
{
    foreach (var entry in entries)
    {
        try
        {
            ProcessEntry(entry);
        }
        catch (EntryProcessingException ex)
        {
            // Log the exception
        }
    }
}
And your loop will happily spin to its end, but it will still throw on all other exceptions, which will then be handled by your filter.

Additional try statement in catch statement - code smell?

Situation:
My application needs to process the first step in the business rules (the initial try-catch statement). If a certain error occurs when the process calls the helper method during that step, I need to switch to a second process in the catch statement. The backup process uses the same helper method. If the same error occurs during the second process, I need to stop the entire process and throw the exception.
Implementation:
I was going to insert another try-catch statement into the catch statement of the first try-catch statement.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    HelperMethod(s1);
}
catch (Exception e1)
{
    //backup information if first process generates an exception in the helper method
    string s2 = "value 2";
    //try catch statement for second process.
    try
    {
        HelperMethod(s2);
    }
    catch (Exception e2)
    {
        throw e2;
    }
}
What would be the correct design pattern to avoid code smells in this implementation?
I caused some confusion and left out that when the first process fails and switches to the second process, it will send different information to the helper method. I have updated the scenario to reflect the entire process.
If the HelperMethod needs a second try, there is nothing directly wrong with this, but your code in the catch tries to do way too much, and it destroys the stacktrace from e2.
You only need:
try
{
    //call helper method
    HelperMethod();
}
catch (Exception e1)
{
    // maybe log e1, it is getting lost here
    HelperMethod();
}
I wouldn't say it is bad, although I'd almost certainly refactor the second block of code into a second method to keep it comprehensible. And I'd probably catch something more specific than Exception. A second try is sometimes necessary, especially for things like Dispose() implementations that might themselves throw (WCF, I'm looking at you).
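A rough sketch of that refactoring (the exception type and the second method are made up for illustration):
try
{
    // call helper method with the initial information
    HelperMethod("value 1");
}
catch (SomeSpecificException e1) // something narrower than Exception
{
    // log e1 so the original failure isn't lost, then fall back
    RunBackupProcess();
}

// Hypothetical second method that keeps the fallback readable; if it throws,
// the exception simply propagates to the caller.
private void RunBackupProcess()
{
    HelperMethod("value 2");
}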
The general idea putting a try-catch inside the catch of a parent try-catch doesn't seem like a code-smell to me. I can think of other legitimate reasons for doing this - for instance, when cleaning up an operation that failed where you do not want to ever throw another error (such as if the clean-up operation also fails). Your implementation, however, raises two questions for me: 1) Wim's comment, and 2) do you really want to entirely disregard why the operation originally failed (the e1 Exception)? Whether the second process succeeds or fails, your code does nothing with the original exception.
Generally speaking, this isn't a problem, and it isn't a code smell that I know of.
With that said, you may want to look at handling the error within your first helper method instead of just throwing it (and, thus, handling the call to the second helper method in there). That's only if it makes sense, but it is a possible change.
Yes, a more general pattern is to have the basic method include an overload that accepts an int attempt parameter, and then conditionally call itself recursively.
private void MyMethod(ParameterList parameterList)
{
    MyMethod(parameterList, 0);
}

private void MyMethod(ParameterList parameterList, int attempt)
{
    try { HelperMethod(); }
    catch (SomeSpecificException)
    {
        if (attempt < MAXATTEMPTS)
            MyMethod(parameterList, ++attempt);
        else throw;
    }
}
It shouldn't be that bad. Just document clearly why you're doing it, and most DEFINITELY try catching a more specific Exception type.
If you need some retry mechanism, which it looks like, you may want to explore different techniques, looping with delays etc.
It would be a little clearer if you called a different function in the catch so that a reader doesn't think you're just retrying the same function, as is, over again. If there's state happening that's not being shown in your example, you should document it carefully, at a minimum.
You also shouldn't throw e2; like that: you should simply throw; if you're going to work with the exception you caught at all. If not, you shouldn't try/catch.
Where you do not reference e1, you should simply catch (Exception) or, better still, catch (YourSpecificException).
If you're doing this to try and recover from some sort of transient error, then you need to be careful about how you implement this.
For example, in an environment where you're using SQL Server Mirroring, it's possible that the server you're connected to may stop being the master mid-connection.
In that scenario, it may be valid for your application to try and reconnect, and re-execute any statements on the new master - rather than sending an error back to the caller immediately.
You need to be careful to ensure that the methods you're calling don't have their own automatic retry mechanism, and that your callers are aware there is an automatic retry built into your method. Failing to ensure this can result in scenarios where you cause a flood of retry attempts, overloading shared resources (such as Database servers).
You should also ensure you're catching exceptions specific to the transient error you're trying to retry. So, in the example I gave, SqlException, and then examining to see if the error was that the SQL connection failed because the host was no longer the master.
If you need to retry more than once, consider placing an 'automatic backoff' retry delay - the first failure is retried immediately, the second after a delay of (say) 1 second, then doubled up to a maximum of (say) 90 seconds. This should help prevent overloading resources.
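As a rough illustration of that back-off idea (the attempt count, the delays, and the IsTransient helper are placeholders):
const int MaxAttempts = 5;

var delay = TimeSpan.Zero; // first retry is immediate
for (var attempt = 1; ; attempt++)
{
    try
    {
        HelperMethod();
        break; // success
    }
    catch (SqlException ex) when (IsTransient(ex) && attempt < MaxAttempts)
    {
        Thread.Sleep(delay);
        // double the delay each time, capped at 90 seconds
        delay = delay == TimeSpan.Zero
            ? TimeSpan.FromSeconds(1)
            : TimeSpan.FromSeconds(Math.Min(delay.TotalSeconds * 2, 90));
    }
}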
I would also suggest restructuring your method so that you don't have an inner-try/catch.
For example:
bool helper_success = false;
bool automatic_retry = false;

//run initial process
try
{
    //call helper method
    HelperMethod();
    helper_success = true;
}
catch (Exception e)
{
    // check if e is a transient exception. If so, set automatic_retry = true
}

if (automatic_retry)
{   //try catch statement for second process.
    try
    {
        HelperMethod();
    }
    catch (Exception e)
    {
        throw;
    }
}
Here's another pattern:
// set up state for first attempt
if (!HelperMethod(false))
{
    // set up state for second attempt
    HelperMethod(true);
    // no need to try/catch since you're just throwing anyway
}
Here, HelperMethod is
bool HelperMethod(bool throwOnFailure)
and the return value indicates whether or not success occurred (i.e., false indicates failure and true indicates success). You could also do:
// could wrap in try/catch
HelperMethod(2, stateChanger);
where HelperMethod is
void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
where numberOfTries indicates the number of times to try before throwing an exception and StateChanger[] is an array of delegates that will change the state for you between calls (i.e., stateChanger[0] is called before the first attempt, stateChanger[1] is called before the second attempt, etc.)
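A possible sketch of that variant (the StateChanger delegate, the lookup call, and the exception type are made up for illustration):
delegate void StateChanger(); // hypothetical: mutates whatever state the next attempt needs

void HelperMethod(int numberOfTries, StateChanger[] stateChanger)
{
    for (var attempt = 0; attempt < numberOfTries; attempt++)
    {
        stateChanger[attempt]();    // prepare state for this attempt
        try
        {
            LookUpEmployee();       // hypothetical operation that may throw
            return;                 // success, stop retrying
        }
        catch (EmployeeLookupException) when (attempt < numberOfTries - 1)
        {
            // swallowed: fall through to the next attempt; the last attempt is
            // not filtered, so its exception propagates to the caller
        }
    }
}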
This last option indicates that you might have a smelly setup though. It looks like the class that is encapsulating this process is responsible for both keeping track of state (which employee to look up) as well as looking up the employee (HelperMethod). By SRP, these should be separate.
Of course, you need to a catch a more specific exception than you currently are (don't catch the base class Exception!) and you should just throw instead of throw e if you need to rethrow the exception after logging, cleanup, etc.
You could emulate C#'s TryParse method signatures:
class Program
{
    static void Main(string[] args)
    {
        Exception ex;
        Console.WriteLine("trying 'ex'");
        if (TryHelper("ex", out ex))
        {
            Console.WriteLine("'ex' worked");
        }
        else
        {
            Console.WriteLine("'ex' failed: " + ex.Message);
            Console.WriteLine("trying 'test'");
            if (TryHelper("test", out ex))
            {
                Console.WriteLine("'test' worked");
            }
            else
            {
                Console.WriteLine("'test' failed: " + ex.Message);
                throw ex;
            }
        }
    }

    private static bool TryHelper(string s, out Exception result)
    {
        try
        {
            HelperMethod(s);
            result = null;
            return true;
        }
        catch (Exception ex)
        {
            // log here to preserve stack trace
            result = ex;
            return false;
        }
    }

    private static void HelperMethod(string s)
    {
        if (s.Equals("ex"))
        {
            throw new Exception("s can be anything except 'ex'");
        }
    }
}
Another way is to flatten the try/catch blocks, useful if you're using some exception-happy API:
public void Foo()
{
    try
    {
        HelperMethod("value 1");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }

    try
    {
        HelperMethod("value 2");
        return; // finished
    }
    catch (Exception e)
    {
        // possibly log exception
    }

    // ... more here if needed
}
An option for retry (that most people will probably flame) would be to use a goto. Before C# had exception filters, this could be used in a similar manner.
const int MAX_RETRY = 3;

public static void DoWork()
{
    //Do Something
}

public static void DoWorkWithRetry()
{
    var @try = 0;
retry:
    try
    {
        DoWork();
    }
    catch (Exception)
    {
        @try++;
        if (@try < MAX_RETRY)
            goto retry;
        throw;
    }
}
In this case you know this "exception" will probably happen, so I would prefer a simple approach and leave exceptions for the unknown events.
//run initial process
try
{
    //initial information used in helper method
    string s1 = "value 1";
    //call helper method
    if (!HelperMethod(s1))
    {
        //backup information if first process generates an exception in the helper method
        string s2 = "value 2";
        if (!HelperMethod(s2))
        {
            return ErrorOfSomeKind;
        }
    }
    return Ok;
}
catch (ApplicationException ex)
{
    throw;
}
I know that I've done the above nested try/catch recently to handle decoding data where two third-party libraries throw exceptions on failure to decode (try JSON decode, then try base64 decode), but my preference is to have functions return a value which can be checked.
I generally only use the throwing of exceptions to exit early and notify something up the chain about the error if it's fatal to the process.
If a function is unable to provide a meaningful response, that is not typically a fatal problem (Unlike bad input data).
It seems like the main risk in nested try catch is that you also end up catching all the other (maybe important) exceptions that might occur.
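As a sketch of limiting that risk in the decode scenario mentioned above, catch only the decoder-specific exception types (Newtonsoft.Json and Convert.FromBase64String stand in here for the third-party libraries):
string DecodePayload(string raw)
{
    try
    {
        // first attempt: treat the payload as JSON
        return JsonConvert.DeserializeObject<string>(raw);
    }
    catch (JsonException)
    {
        try
        {
            // second attempt: treat the payload as base64 text
            return Encoding.UTF8.GetString(Convert.FromBase64String(raw));
        }
        catch (FormatException)
        {
            // signal failure with a return value instead of throwing;
            // any other exception type still propagates unswallowed
            return null;
        }
    }
}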
