I do not currently have this issue, but you never know, and thought experiments are always fun.
Ignoring the obvious problems that you would have to have with your architecture to even be attempting this, let's assume that you had some horribly-written code of someone else's design, and you needed to do a bunch of wide and varied operations in the same code block, e.g.:
WidgetMaker.SetAlignment(57);
contactForm["Title"] = txtTitle.Text;
Casserole.Season(true, false);
((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true;
Multiplied by a hundred. Some of these might work, others might go badly wrong. What you need is the C# equivalent of "on error resume next", otherwise you're going to end up copying and pasting try-catches around the many lines of code.
How would you attempt to tackle this problem?
public delegate void VoidDelegate();
public static class Utils
{
public static void Try(VoidDelegate v) {
try {
v();
}
catch {}
}
}
Utils.Try( () => WidgetMaker.SetAlignment(57) );
Utils.Try( () => contactForm["Title"] = txtTitle.Text );
Utils.Try( () => Casserole.Season(true, false) );
Utils.Try( () => ((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true );
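Side note: the custom VoidDelegate isn't strictly necessary; the built-in Action delegate does the same job. A sketch of the same helper using it:
public static class Utils
{
    // Same swallow-everything helper, using the built-in Action delegate.
    public static void Try(Action a)
    {
        try { a(); }
        catch { }
    }
}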
Refactor into individual, well-named methods:
AdjustFormWidgets();
SetContactTitle(txtTitle.Text);
SeasonCasserole();
Each of those is protected appropriately.
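For example, SetContactTitle might end up looking roughly like this (a sketch; the Trace call is just a placeholder for whatever logging you use):
private void SetContactTitle(string title)
{
    try
    {
        contactForm["Title"] = title;
    }
    catch (Exception ex)
    {
        // This one is allowed to fail without stopping the rest of the page.
        System.Diagnostics.Trace.WriteLine("SetContactTitle failed: " + ex.Message);
    }
}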
I would say do nothing.
Yup, that's right: do NOTHING.
You have clearly identified two things to me:
You know the architecture is borked.
There is a ton of this crap.
I say:
Do nothing.
Add a global error handler to send you an email every time it goes boom.
Wait until something falls over (or fails a test)
Correct that (Refactoring as necessary within the scope of the page).
Repeat every time a problem occurs.
You will have this cleared up in no time if it is that bad. Yeah, I know it sounds sucky, and you may be pulling your hair out with bugfixes to begin with, but it will let you fix the genuinely needy/buggy code first, before touching the (large) amount of code that may actually be working, no matter how crappy it looks.
Once you start winning the war, you will have a better handle on the code (due to all your refactoring), and a better idea of a winning design for it.
Trying to wrap all of it in bubble wrap is probably going to take just as long to do, and you will still not be any closer to fixing the problems.
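For what it's worth, step 2 can be a handful of lines (a sketch; the SMTP host and addresses are placeholders, and it requires System.Net.Mail):
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = (Exception)e.ExceptionObject;
    // Placeholder host and addresses.
    new SmtpClient("smtp.example.com").Send(
        "app@example.com", "dev@example.com",
        "It went boom", ex.ToString());
};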
It's pretty obvious that you'd write the code in VB.NET, which actually does have On Error Resume Next, compile it into a DLL, and call it from C#. Anything else is just being a glutton for punishment.
Fail Fast
To elaborate, I guess I am questioning the question. If an exception is thrown, why would you want your code to simply continue as if nothing has happened? Either you expect exceptions in certain situations, in which case you write a try-catch block around that code and handle them, or there is an unexpected error, in which case you should prefer your application to abort, or retry, or fail. Not carry on like a wounded zombie moaning 'brains'.
This is one of the things that having a preprocessor is useful for. You could define a macro that swallows exceptions, then with a quick script add that macro to all lines.
So, if this were C++, you could do something like this:
#define ATTEMPT(x) try { x; } catch (...) { }
// ...
ATTEMPT(WidgetMaker.SetAlignment(57));
ATTEMPT(contactForm["Title"] = txtTitle.Text);
ATTEMPT(Casserole.Season(true, false));
ATTEMPT(((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true);
Unfortunately, not many languages seem to include a preprocessor the way C/C++ do.
You could create your own preprocessor and add it as a pre-build step. If you felt like completely automating it, you could probably write a preprocessor that would take the actual code file and add the try/catch stuff on its own (so you don't have to add those ATTEMPT() blocks to the code manually). Making sure it only modified the lines it's supposed to could be difficult, though (it would have to skip variable declarations, loop constructs, etc., so that you don't break the build).
However, I think these are horrible ideas and should never be done, but the question was asked. :)
Really, you shouldn't ever do this. You need to find what's causing the error and fix it. Swallowing/ignoring errors is a bad thing to do, so I think the correct answer here is "Fix the bug, don't ignore it!". :)
On Error Resume Next is a really bad idea in the C# world. Nor would adding the equivalent of On Error Resume Next actually help you. All it would do is leave you in a bad state, which could cause more subtle errors, data loss and possibly data corruption.
But to give the questioner his due, you could add a global handler and check the TargetSite to see which method borked. Then you could at least know what line it borked on. The next part would be to try and figure out how to set the "next statement" the same way the debugger does it. Hopefully your stack won't have unwound at this point or you can re-create it, but it's certainly worth a shot. However, given this approach the code would have to run in Debug mode every time so that you would have your debug symbols included.
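The TargetSite part is straightforward (a sketch; the resume-the-next-statement part is the genuinely hard bit):
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = (Exception)e.ExceptionObject;
    // TargetSite is the method that threw; with debug symbols the
    // stack trace also carries file and line numbers.
    Console.Error.WriteLine("Borked method: " + ex.TargetSite);
    Console.Error.WriteLine(ex.StackTrace);
};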
As someone mentioned, VB allows this. How about doing it the same way in C#? Enter trusty Reflector:
This:
Sub Main()
On Error Resume Next
Dim i As Integer = 0
Dim y As Integer = CInt(5 / i)
End Sub
Translates into this:
public static void Main()
{
// This item is obfuscated and can not be translated.
int VB$ResumeTarget;
try
{
int VB$CurrentStatement;
Label_0001:
ProjectData.ClearProjectError();
int VB$ActiveHandler = -2;
Label_0009:
VB$CurrentStatement = 2;
int i = 0;
Label_000E:
VB$CurrentStatement = 3;
int y = (int) Math.Round((double) (5.0 / ((double) i)));
goto Label_008F;
Label_0029:
VB$ResumeTarget = 0;
switch ((VB$ResumeTarget + 1))
{
case 1:
goto Label_0001;
case 2:
goto Label_0009;
case 3:
goto Label_000E;
case 4:
goto Label_008F;
default:
goto Label_0084;
}
Label_0049:
VB$ResumeTarget = VB$CurrentStatement;
switch (((VB$ActiveHandler > -2) ? VB$ActiveHandler : 1))
{
case 0:
goto Label_0084;
case 1:
goto Label_0029;
}
}
catch (object obj1) when (?)
{
ProjectData.SetProjectError((Exception) obj1);
goto Label_0049;
}
Label_0084:
throw ProjectData.CreateProjectError(-2146828237);
Label_008F:
if (VB$ResumeTarget != 0)
{
ProjectData.ClearProjectError();
}
}
Rewrite the code. Try to find sets of statements which logically depend on each other, so that if one fails then the next ones make no sense, and hive them off into their own functions and put try-catches round them, if you want to ignore the result of that and continue.
This may help you in identifying the pieces that have the most problems.
@JB King
Thanks for reminding me. The Logging application block has an Instrumentation Event that can be used to trace events; you can find more info in the MS Enterprise Library docs.
Using (New InstEvent)
<series of statements>
End Using
All of the steps in this Using will be traced to a log file, and you can parse that out to see where the log breaks (an exception is thrown) and identify the high offenders.
Refactoring is really your best bet, but if you have a lot, this may help you pinpoint the worst offenders.
You could use goto, but it's still messy.
I've actually wanted a sort of single statement try-catch for a while. It would be helpful in certain cases, like adding logging code or something that you don't want to interrupt the main program flow if it fails.
I suspect something could be done with some of the features associated with linq, but don't really have time to look into it at the moment. If you could just find a way to wrap a statement as an anonymous function, then use another one to call that within a try-catch block it would work... but not sure if that's possible just yet.
If you can get the compiler to give you an expression tree for this code, then you could modify that expression tree by replacing each statement with a new try-catch block that wraps the original statement. This isn't as far-fetched as it sounds; for LINQ, C# acquired the ability to capture lambda expressions as expression trees that can be manipulated in user code at runtime.
This approach is not possible today with .NET 3.5 -- if for no other reason than the lack of a "try" statement in System.Linq.Expressions. However, it may very well be viable in a future version of C# once the merge of the DLR and LINQ expression trees is complete.
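(As it turned out, this did become viable: .NET 4 added Expression.TryCatch with the DLR merge. A sketch of the idea on that API, wrapping a throwing statement so the exception is swallowed; requires System and System.Linq.Expressions:)
Expression<Action> risky = () => Console.WriteLine(5 / int.Parse("0"));
// Wrap the original body in try { ... } catch (Exception) { }.
Action guarded = Expression.Lambda<Action>(
    Expression.TryCatch(
        risky.Body,
        Expression.Catch(typeof(Exception), Expression.Empty()))
    ).Compile();
guarded(); // the DivideByZeroException is swallowed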
Why not use reflection in C#? You could create a class that reflects on the code and uses line numbers as the hint for what to put in each individual try/catch block. This has a few advantages:
It's slightly less ugly, as it doesn't require you to mangle your source code, and you can use it only during debug builds.
You learn something interesting about C# while implementing it.
I would, however, recommend against any of this, unless of course you are taking over maintenance of someone else's work and you need to get a handle on the exceptions so you can fix them. Might be fun to write, though.
Fun question; very terrible.
It'd be nice if you could use a macro. But this is blasted C#, so you might solve it with some preprocessor work or some external tool to wrap your lines in individual try-catch blocks. Not sure if you meant you didn't want to manually wrap them or that you wanted to avoid try-catch entirely.
Messing around with this, I tried labeling every line and jumping back from a single catch, without much luck. However, Christopher uncovered the correct way to do this. There's some interesting additional discussion of this at Dot Net Thoughts and at Mike Stall's .NET Blog.
EDIT: Of course. The try-catch / switch-goto solution listed won't actually compile since the try labels are out-of-scope in catch. Anyone know what's missing to make something like this compile?
You could automate this with a compiler preprocess step or maybe hack up Mike Stall's Inline IL tool to inject some error-ignorance.
(Orion Adrian's answer about examining the Exception and trying to set the next instruction is interesting too.)
All in all, it seems like an interesting and instructive exercise. Of course, you'd have to decide at what point the effort to simulate ON ERROR RESUME NEXT outweighs the effort to fix the code. :-)
Catch the errors in the UnhandledException event of the application. That way, unhandled exceptions can even be logged as to the sender and whatever other information the developer would reasonably want.
Unfortunately you are probably out of luck. On Error Resume Next is a legacy option that is generally heavily discouraged, and does not have an equivalent to my knowledge in C#.
I would recommend leaving the code in VB (it sounds like that was the source, given your specific request for On Error Resume Next) and interfacing with or from a C# DLL or EXE that implements whatever new code you need. Then perform refactoring to make the code safe, and convert the safe code to C# as you go.
You could look at integrating the Enterprise Library's Exception Handling component for one idea of how to handle unhandled exceptions.
If this is for ASP.NET applications, there is a function in Global.asax called "Application_Error" that gets called in most cases, catastrophic failures usually being the other case.
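A minimal sketch of that hook (in Global.asax.cs; the Trace call is a placeholder for real logging):
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    // Placeholder: write to your log of choice.
    System.Diagnostics.Trace.WriteLine("Unhandled: " + ex);
}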
Ignoring all the reasons you'd want to avoid doing this...
If it were simply a need to keep # of lines down, you could try something like:
int totalMethodCount = xxx;
for(int counter = 0; counter < totalMethodCount; counter++) {
try {
if (counter == 0) WidgetMaker.SetAlignment(57);
if (counter == 1) contactForm["Title"] = txtTitle.Text;
if (counter == 2) Casserole.Season(true, false);
if (counter == 3) ((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true;
} catch (Exception ex) {
// log here
}
}
However, you'd have to keep an eye on variable scope if you try to reuse any of the results of the calls.
Highlight each line, one at a time, and use 'Surround with' try/catch. That avoids the copying and pasting you mentioned.
Related
I've never seen a for loop initialized this way, and I don't understand why it would be written this way.
I'm doing some research into connecting to an IMAP server in .NET and started looking at code from a library named ImapX. I found the for loop in a method that writes data to a NetworkStream and then appears to read the response within the funky for loop. I don't want to copy and paste someone else's code verbatim, but here's the gist:
public bool SendData(string data)
{
try
{
this.imapStreamWriter.Write(data);
for (bool flag = true; flag; flag = false)
{
var s = this.imapStreamReader.ReadLine();
}
}
catch (Exception)
{
return false;
}
return true;
}
Again, this isn't the exact code, but it's the general idea. That's all the method does; it doesn't use the server response, it just returns true if no exception was thrown. I just don't understand how or why the for loop is being used this way; can anyone explain what advantages initializing this offers, if any?
It's a horrible way of executing a loop once. If you changed the flag initializer to something which isn't always true, it might make slightly more sense, but not a lot. I've entirely-unseriously suggested this code before now:
Animal animal = ...;
for (Dog dog = animal as Dog; dog != null; dog = null)
{
// Use dog...
}
... as a way of using an as operator without "polluting" the outer scope. But it's language silliness, and not something I'd ever really use.
I had previously assumed you were looking at original source code, but from your comments it sounds like you're looking at Reflector's reconstituted C# based on the MSIL. It's important to understand that this does not necessarily bear a close resemblance to the original code as written.
At the MSIL level, there is no such thing as a for loop or a while loop. There are just conditional branch instructions. (See here: http://weblogs.asp.net/kennykerr/archive/2004/09/23/introduction-to-msil-part-6-common-language-constructs.aspx ) Any tool trying to reconstitute C# must make a number of guesses for how the code was originally structured. It seems probable that this wasn't originally written as a for loop, but that Reflector detected that the IL could have been generated by a for loop, and its heuristics made a bad guess that it should be a for loop and not something else.
If I look at the same code in ILSpy, it is rendered as a while loop. It's still redundant, but it looks a lot less weird. Furthermore, it's possible that the original code actually did something that has been optimized out, perhaps calls to [Conditional] methods or code marked with #if directives. Then again, perhaps the original code used to do something else but parts were commented out - the comments are not kept in the IL. Or maybe there was more code there in the past and it was just plain deleted.
In short, there are many ways what you see in Reflector can be very different from what was originally written. You should consider it a prettier presentation of IL, not an example of C# as it is written by humans.
I had this as a comment, but I suppose it's the answer as well.
The "loop" is pointless. It initializes flag to true and is declared to execute only while the flag is still true. However, flag is explicitly being set to false after the first iteration. So in this case it's guaranteed to run once and only once.
I suspect the author was trying to be sure that control flow would pause on ReadLine() - but it does that anyway until input is received.
The for-loop is pretty much a no-op, as noted by others.
It does do one thing, though, that may or may not matter: it introduces a new declaration scope. The scope of the variable s is the body of the for-loop; s goes out of scope once the for-loop is exited. You could get the same effect by simply creating a block, thus:
{
var s = this.imapStreamReader.ReadLine();
}
Which is still pretty silly.
No idea what the original author was trying to accomplish with this -- perhaps trying to ensure that his s was destroyed/garbage-collected/disposed -- not that this technique would work (it wouldn't).
When I ran ReSharper on my code, for example:
if (some condition)
{
Some code...
}
ReSharper gave me the above warning (Invert "if" statement to reduce nesting), and suggested the following correction:
if (!some condition) return;
Some code...
I would like to understand why that's better. I always thought that using "return" in the middle of a method problematic, somewhat like "goto".
It is not only aesthetic, but it also reduces the maximum nesting level inside the method. This is generally regarded as a plus because it makes methods easier to understand (and indeed, many static analysis tools provide a measure of this as one of the indicators of code quality).
On the other hand, it also makes your method have multiple exit points, something that another group of people believes is a no-no.
Personally, I agree with ReSharper and the first group (in a language that has exceptions I find it silly to discuss "multiple exit points"; almost anything can throw, so there are numerous potential exit points in all methods).
Regarding performance: both versions should be equivalent (if not at the IL level, then certainly after the jitter is through with the code) in every language. Theoretically this depends on the compiler, but practically any widely used compiler of today is capable of handling much more advanced cases of code optimization than this.
A return in the middle of the method is not necessarily bad. It might be better to return immediately if it makes the intent of the code clearer. For example:
double getPayAmount() {
double result;
if (_isDead) result = deadAmount();
else {
if (_isSeparated) result = separatedAmount();
else {
if (_isRetired) result = retiredAmount();
else result = normalPayAmount();
};
}
return result;
};
In this case, if _isDead is true, we can immediately get out of the method. It might be better to structure it this way instead:
double getPayAmount() {
if (_isDead) return deadAmount();
if (_isSeparated) return separatedAmount();
if (_isRetired) return retiredAmount();
return normalPayAmount();
};
I've picked this code from the refactoring catalog. This specific refactoring is called: Replace Nested Conditional with Guard Clauses.
This is a bit of a religious argument, but I agree with ReSharper that you should prefer less nesting. I believe that this outweighs the negatives of having multiple return paths from a function.
The key reason for having less nesting is to improve code readability and maintainability. Remember that many other developers will need to read your code in the future, and code with less indentation is generally much easier to read.
Preconditions are a great example of where it is okay to return early at the start of the function. Why should the readability of the rest of the function be affected by the presence of a precondition check?
As for the negatives about returning multiple times from a method - debuggers are pretty powerful now, and it's very easy to find out exactly where and when a particular function is returning.
Having multiple returns in a function is not going to affect the maintainance programmer's job.
Poor code readability will.
As others have mentioned, there shouldn't be a performance hit, but there are other considerations. Aside from those valid concerns, this also can open you up to gotchas in some circumstances. Suppose you were dealing with a double instead:
public void myfunction(double exampleParam){
if(exampleParam > 0){
//Body will *not* be executed if Double.IsNaN(exampleParam)
}
}
Contrast that with the seemingly equivalent inversion:
public void myfunction(double exampleParam){
if(exampleParam <= 0)
return;
//Body *will* be executed if Double.IsNaN(exampleParam)
}
So in certain circumstances what appears to be a correctly inverted if might not be.
The idea of only returning at the end of a function dates back to the days before languages had support for exceptions. It enabled programs to rely on being able to put clean-up code at the end of a method, and then be sure it would be called and some other programmer wouldn't hide a return in the method that caused the cleanup code to be skipped. Skipped cleanup code could result in a memory or resource leak.
However, in a language that supports exceptions, it provides no such guarantees. In a language that supports exceptions, the execution of any statement or expression can cause a control flow that ends the method. This means clean-up must be done with finally blocks or using statements.
Anyway, I'm saying I think a lot of people quote the 'only return at the end of a method' guideline without understanding why it was ever a good thing to do, and that reducing nesting to improve readability is probably a better aim.
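In C#, that guarantee comes from using/finally rather than from a single return. A sketch (the file name is a placeholder):
void PrintFirstLine()
{
    using (var reader = new System.IO.StreamReader("data.txt")) // placeholder file
    {
        if (reader.EndOfStream)
            return; // early return: Dispose still runs
        Console.WriteLine(reader.ReadLine());
    }
}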
I'd like to add that there is a name for those inverted ifs - Guard Clause. I use it whenever I can.
I hate reading code where there is an if at the beginning, two screens of code, and no else. Just invert the if and return. That way nobody will waste time scrolling.
http://c2.com/cgi/wiki?GuardClause
It doesn't only affect aesthetics, but it also prevents code nesting.
It can actually function as a precondition to ensure that your data is valid as well.
This is of course subjective, but I think it strongly improves on two points:
It is now immediately obvious that your function has nothing left to do if condition holds.
It keeps the nesting level down. Nesting hurts readability more than you'd think.
Multiple return points were a problem in C (and to a lesser extent C++) because they forced you to duplicate clean-up code before each of the return points. With garbage collection, the try/finally construct and using blocks, there's really no reason why you should be afraid of them.
Ultimately it comes down to what you and your colleagues find easier to read.
Guard clauses or pre-conditions (as you can probably see) check to see if a certain condition is met and then breaks the flow of the program. They're great for places where you're really only interested in one outcome of an if statement. So rather than say:
if (something) {
// a lot of indented code
}
You reverse the condition and break out if that reversed condition is fulfilled:
if (!something) return false; // or another value to show your other code the function did not execute
// all the code from before, save a lot of tabs
return is nowhere near as dirty as goto. It allows you to pass a value to show the rest of your code that the function couldn't run.
You'll see the best examples of where this can be applied in nested conditions:
if (something) {
do-something();
if (something-else) {
do-another-thing();
} else {
do-something-else();
}
}
vs:
if (!something) return;
do-something();
if (!something-else) return do-something-else();
do-another-thing();
You'll find few people arguing the first is cleaner but of course, it's completely subjective. Some programmers like to know what conditions something is operating under by indentation, while I'd much rather keep method flow linear.
I won't suggest for one moment that precons will change your life or get you laid but you might find your code just that little bit easier to read.
Performance-wise, there will be no noticeable difference between the two approaches.
But coding is about more than performance. Clarity and maintainability are also very important. And, in cases like this where it doesn't affect performance, it is the only thing that matters.
There are competing schools of thought as to which approach is preferable.
One view is the one others have mentioned: the second approach reduces the nesting level, which improves code clarity. This is natural in an imperative style: when you have nothing left to do, you might as well return early.
Another view, from the perspective of a more functional style, is that a method should have only one exit point. Everything in a functional language is an expression, so if statements must always have an else clause; otherwise the if expression wouldn't always have a value. So in the functional style, the first approach is more natural.
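(C#'s expression form of if is the conditional operator; on the earlier pay-amount example, the single-exit, always-has-a-value style looks like this:)
double getPayAmount() {
    return _isDead      ? deadAmount()
         : _isSeparated ? separatedAmount()
         : _isRetired   ? retiredAmount()
         :                normalPayAmount();
}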
There are several good points made here, but multiple return points can be unreadable as well, if the method is very lengthy. That being said, if you're going to use multiple return points just make sure that your method is short, otherwise the readability bonus of multiple return points may be lost.
Performance is in two parts. You have performance when the software is in production, but you also want to have performance while developing and debugging. The last thing a developer wants is to "wait" for something trivial. In the end, compiling this with optimization enabled will result in similar code. So it's good to know these little tricks that pay off in both scenarios.
The case in the question is clear, ReSharper is correct. Rather than nesting if statements, and creating new scope in code, you're setting a clear rule at the start of your method. It increases readability, it will be easier to maintain, and it reduces the amount of rules one has to sift through to find where they want to go.
Personally I prefer only 1 exit point. It's easy to accomplish if you keep your methods short and to the point, and it provides a predictable pattern for the next person who works on your code.
eg.
bool PerformDefaultOperation()
{
bool succeeded = false;
DataStructure defaultParameters;
if ((defaultParameters = this.GetApplicationDefaults()) != null)
{
succeeded = this.DoSomething(defaultParameters);
}
return succeeded;
}
This is also very useful if you just want to check the values of certain local variables within a function before it exits. All you need to do is place a breakpoint on the final return and you are guaranteed to hit it (unless an exception is thrown).
Avoiding multiple exit points can lead to performance gains. I am not sure about C# but in C++ the Named Return Value Optimization (Copy Elision, ISO C++ '03 12.8/15) depends on having a single exit point. This optimization avoids copy constructing your return value (in your specific example it doesn't matter). This could lead to considerable gains in performance in tight loops, as you are saving a constructor and a destructor each time the function is invoked.
But for 99% of the cases saving the additional constructor and destructor calls is not worth the loss of readability nested if blocks introduce (as others have pointed out).
Many good points have been made about how the code looks. But what about the results?
Let's take a look at some C# code and its compiled IL form:
using System;
public class Test {
public static void Main(string[] args) {
if (args.Length == 0) return;
if ((args.Length+2)/3 == 5) return;
Console.WriteLine("hey!!!");
}
}
This simple snippet can be compiled. You can open the generated .exe file with ildasm and check the result. I won't post all the assembler, but I'll describe the results.
The generated IL code does the following:
If the first condition is false, it jumps to the code where the second one is.
If it's true, it jumps to the last instruction. (Note: the last instruction is a return.)
For the second condition the same happens after the result is calculated: compare, and go to the Console.WriteLine if false, or to the end if true.
Print the message and return.
So it seems that the code will jump to the end. What if we do a normal if with nested code?
using System;
public class Test {
public static void Main(string[] args) {
if (args.Length != 0 && (args.Length+2)/3 != 5)
{
Console.WriteLine("hey!!!");
}
}
}
The results are quite similar in IL instructions. The difference is that before there were two jumps per condition: if false go to next piece of code, if true go to the end. And now the IL code flows better and has 3 jumps (the compiler optimized this a bit):
First jump: when Length is 0 to a part where the code jumps again (Third jump) to the end.
Second: in the middle of the second condition to avoid one instruction.
Third: if the second condition is false, jump to the end.
Anyway, the program counter will always jump.
In theory, inverting if could lead to better performance if it increases branch prediction hit rate. In practice, I think it is very hard to know exactly how branch prediction will behave, especially after compiling, so I would not do it in my day-to-day development, except if I am writing assembly code.
More on branch prediction here.
That is simply controversial. There is no "agreement among programmers" on the question of early return. It's always subjective, as far as I know.
It's possible to make a performance argument, since it's better to have conditions that are written so they are most often true; it can also be argued that it is clearer. It does, on the other hand, create nested tests.
I don't think you will get a conclusive answer to this question.
There are a lot of insightful answers here already, but still, I would like to point to a slightly different situation: instead of a precondition, which indeed should be put at the top of a function, think of a step-by-step initialization, where you have to check each step for success before continuing with the next. In this case, you cannot check everything at the top.
I found my code really unreadable when writing an ASIO host application with Steinberg's ASIO SDK, as I followed the nesting paradigm. It went like eight levels deep, and I cannot see a design flaw there, as mentioned by Andrew Bullock above. Of course, I could have packed some inner code into another function and then nested the remaining levels there to make it more readable, but this seems rather random to me.
By replacing nesting with guard clauses, I even discovered a misconception of mine regarding a portion of cleanup-code that should have occurred much earlier within the function instead of at the end. With nested branches, I would never have seen that, you could even say they led to my misconception.
So this might be another situation where inverted ifs can contribute to a clearer code.
It's a matter of opinion.
My normal approach would be to avoid single line ifs, and returns in the middle of a method.
You wouldn't want lines like the one it suggests everywhere in your method, but there is something to be said for checking a bunch of assumptions at the top of your method, and only doing your actual work if they all pass.
In my opinion an early return is fine if you are just returning void (or some useless return code you're never going to check), and it might improve readability because you avoid nesting and at the same time you make it explicit that your function is done.
If you are actually returning a returnValue, nesting is usually a better way to go, because you return your returnValue in just one place (at the end - duh), and it might make your code more maintainable in a whole lot of cases.
I'm not sure, but I think that R# tries to avoid far jumps. When you have an IF-ELSE, the compiler does something like this:
Condition false -> far jump to false_condition_label
true_condition_label:
instruction1
...
instruction_n
false_condition_label:
instruction1
...
instruction_n
end block
If the condition is true there is no jump and no L1 cache eviction, but the jump to false_condition_label can be very far, and the processor must reload its cache. Synchronising the cache is expensive. R# tries to replace far jumps with short jumps, and in that case there is a higher probability that all the instructions are already in the cache.
I think it depends on what you prefer; as mentioned, there's no general agreement as far as I know.
To reduce the annoyance, you can downgrade this kind of warning to "Hint".
My idea is that the return "in the middle of a function" shouldn't be so "subjective".
The reason is quite simple, take this code:
function do_something( data ){
if (!is_valid_data( data ))
return false;
do_something_that_take_an_hour( data );
instance = new object_with_very_painful_constructor( data );
if ( instance is not valid ) {
error_message( );
return ;
}
connect_to_database ( );
get_some_other_data( );
return;
}
Maybe the first "return" is not SO intuitive, but it really saves you something.
There are too many "ideas" about clean code that simply need more practice before they lose their "subjective" badness.
There are several advantages to this sort of coding, but for me the big win is that if you can return quickly, you can improve the speed of your application. I.e., I know that because of precondition X I can return quickly with an error. This gets rid of the error cases first and reduces the complexity of your code. In a lot of cases, because the CPU pipeline can now be cleaner, it can avoid pipeline stalls or switches. Secondly, if you are in a loop, breaking or returning out quickly can save you a lot of CPU. Some programmers use loop invariants to do this sort of quick exit, but that can trash your CPU pipeline and even create memory-seek problems, meaning the CPU needs to load from outside the cache. But basically I think you should do what you intended, that is, end the loop or function, not create a complex code path just to implement some abstract notion of correct code. If the only tool you have is a hammer, then everything looks like a nail.
I know it sounds weird, but I am required to put a wrapping try-catch block around every method to catch all exceptions. We have thousands of methods and I need to do it in an automated way. What do you suggest?
I am planning to parse all the .cs files, detect methods, and insert a try-catch block with an application. Can you suggest a parser that I can easily use? Or anything else that will help me...
Every method has its own unique number, like 5006:
public static LogEntry Authenticate(....)
{
LogEntry logEntry = null;
try
{
....
return logEntry;
}
catch (CompanyException)
{
throw;
}
catch (Exception ex)
{
logEntry = new LogEntry(
"5006",
RC.GetString("5006"), EventLogEntryType.Error,
LogEntryCategory.Foo);
throw new CompanyException(logEntry, ex);
}
}
I created this for that purpose:
http://thinkoutofthenet.com/index.php/2009/01/12/batch-code-method-manipulation/
DON'T DO IT. There is no good reason for Pokémon ("gotta catch 'em all") error handling.
EDIT: After a few years, a slight revision is in order. I would say instead "at least don't do this manually". Use an AOP tool or weaver like PostSharp or Fody to apply this to the end result code, but make sure to consider other useful tracing or diagnostic data points like capturing time of execution, input parameters, output parameters, etc.
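A rough sketch of the PostSharp flavour of that idea (names are from its OnMethodBoundaryAspect API; treat this as an outline under that assumption, not a drop-in):
[Serializable]
public class RaiseAlertAttribute : PostSharp.Aspects.OnMethodBoundaryAspect
{
    public override void OnException(PostSharp.Aspects.MethodExecutionArgs args)
    {
        // Logs which method blew up; by default the exception keeps propagating.
        Console.Error.WriteLine(args.Method.Name + " failed: " + args.Exception);
    }
}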
Well if you have to do it, then you must. However, you might try to talk whoever is forcing you to do it into letting you use the UnhandledException event of the AppDomain class. It will give you a notification about every uncaught exception in any method before it is reported to the user. Since you can also get a stack trace from the exception object, you'll be able to tell exactly where every exception occurs. It is a much better solution than rigging your code with exception handlers everywhere.
With that said, if I had to do it, I'd use some regular expressions to identify the begin and end of each method and use that to insert some exception handler everywhere. The trick to writing a regular expression for this case will be Balancing Group Definition explained more in the MSDN documentation here. There is also a relevant example of using balancing groups here.
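For reference, the classic balancing-group pattern for a balanced { ... } block looks like this (a sketch; reliably finding method boundaries needs more than a regex):
var methodBody = new System.Text.RegularExpressions.Regex(
    @"\{(?>[^{}]+|(?<open>\{)|(?<-open>\}))*(?(open)(?!))\}");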
Maybe whoever came up with the requirement doesn't understand that you can still catch all exceptions (at the top) without putting a try-catch in every single function. You can see an example of how to catch all unhandled exceptions here. I think this is a much better solution as you can actually do something with the exception, and report it, rather than blindly burying all exceptions, resulting in extremely hard to track down bugs.
This is similar to Scott's solution, but also adds an event handler to the Application.ThreadException exception which can happen if you are using threads. Probably best to use both in order to catch all exceptions.
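The wiring for both is short (WinForms; LogException is a hypothetical logging method):
Application.ThreadException += (s, e) =>
    LogException(e.Exception);                      // UI-thread exceptions
AppDomain.CurrentDomain.UnhandledException += (s, e) =>
    LogException((Exception)e.ExceptionObject);     // everything else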
See my answer here which describes some of the performance trade offs you will be forced to live with if you use "gotta catch em all" exception handling.
As Scott said, the best way to do pretty much the same thing is the UnhandledException event. I think Jeff actually discussed this very problem in an early SO podcast.
I'm with StingyJack, don't do it!
However if the Gods on high decree that must be done, then see my answer to this question Get a method’s contents from a cs file
First of all, I'm with StingyJack and Binary Worrier. There's a good reason exceptions aren't caught by default. If you really want to catch exceptions and die slightly nicer, you can put a try-catch block around the Application.Run() call and work from there.
When dealing with outside sources, (files, the Internet, etc), one should (usually) catch certain exceptions (bad connection, missing file, blah blah). In my book, however, an exception anywhere else means either 1) a bug, 2) flawed logic, or 3) poor data validation...
In summary, and to completely not answer your question, are you sure you want to do this?
I had to do something kinda sorta similar (add something to a lot of lines of code); I used regex.
I would create a regex script that finds the beginning of each function and inserts the try-catch block right after it. I would then create another regex script to find the end of the function (by finding the beginning of the function right after it) and insert the try-catch block at the end. This won't get you 100% of the way there, but it will hit maybe 80%, which would hopefully be close enough that you could do the edge cases without too much more work.
I wanted to write this as an answer to all the answers, so you can be aware of it via the question RSS.
The requirement comes from our technical leader;
here is his reason:
We need to find out which function has a problem in production code. Every problem is reported as an alert; we put a unique code in every catch block, and we see where the problem is. He knows there is global error handling, but it does not help in this scenario, and the stack trace is not accurate in release mode, so he requires a try-catch block like this:
everyMethod(...){
    try
    {
        ..
    }
    catch(Exception e)
    {
        RaiseAlert(e.Message.. blabla)
    }
}
If you put a RaiseAlert call in every method, the error stacks you receive will be very confusing, if not inaccurate, assuming that you reuse methods. The logging method should really only need to be called in events or the top-most method(s). If someone is pushing the issue that exception handling needs to be in every method, they don't understand exception handling.
A couple of years back we implemented a practice that exception handling had to be done in every event, and one developer read it as "every method." When they were finished, we had weeks' worth of undoing to do, because no exception reported was ever reproducible. I'm assuming they knew better, like you do, but never questioned the validity of their interpretation.
Implementing AppDomain.UnhandledException is a good backup but your only recourse in that method is to kill the application once you log the exception. You'd have to write a global exception handler to prevent this.
I guess you could use aspect-oriented programming, something I would like to get my hands dirty with. For example: http://www.postsharp.org/aopnet/overview
Although this sort of requirement is indeed evil.
If you really have to do it, why go to the trouble of modifying the source code when you can modify the compiled executable/library directly?
Have a look at Cecil (see its website). It's a library that allows you to modify the bytecode directly; using this approach, your entire problem could be solved in a couple of hundred lines of C# code.
Since you are posting a question here, I am sure this is one of those things you just have to do. So instead of banging your head against an unyielding wall, why not do what Scott suggested and use the AppDomain event handler? You'll meet the requirements without spending hours of billable time doing grunt work. I am sure once you tell your boss how much testing updating each and every file would entail, using the event handler will be a no-brainer!
So you are not really looking to put the same exact try-catch block in each function, right? You are going to have to tailor each try-catch to each function. Wow! Seems like a long way to "simplify" debugging.
If a user reports an error in production, why can't you just fire up Visual Studio and reproduce the steps and debug?
If you absolutely have to add the try/catch block to every method and Scott's answer (AppDomain.UnhandledException) is not sufficient, you can also look into interceptors. I believe the Castle project has a fairly good implementation of method interceptors.
If you really have to do it, an alternative to wrapping the exception each time would be to use Exception.Data to capture the additional information and then rethrow the original exception ...
catch (Exception ex)
{
logEntry = new LogEntry("5006",
RC.GetString("5006"), EventLogEntryType.Error,
LogEntryCategory.Foo);
ex.Data.Add("5006",logEntry);
throw;
}
Now at the top level you can just dump the contents of ex.Data to get all the additional information you might want. This allows you to put file names, useragents, ... and all manner of other useful information in the .Data collection to help understand why an exception occurred.
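A sketch of that top-level dump (DictionaryEntry lives in System.Collections):
catch (Exception ex)
{
    foreach (DictionaryEntry entry in ex.Data)
        Console.Error.WriteLine("{0}: {1}", entry.Key, entry.Value);
    // report/rethrow as appropriate
}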
I did some research work that required parsing C# code about two years ago and discovered that the SharpDevelop project has source code that does this really well. If you extract the SharpDevParser project (this was two years ago; I'm not sure whether the project name has stayed the same) from the source code base, you can then use the parser object like this:
CSharpBinding.Parser.TParser sharpParser = new CSharpBinding.Parser.TParser();
SIP.ICompilationUnitBase unitBase = sharpParser.Parse(fileName);
This gives you unitBase.Classes, which you can iterate through to visit each class and find the methods within it.
I was helping a friend find a memory leak in a C# XNA Game he was writing.
I suggested we try and examine how many times each method was getting invoked.
In order to keep count, I wrote a Python script that added 2 lines to update a Dictionary with the details.
Basically, I wrote a Python script to modify some 400-odd methods with the 2 required lines.
This code may help someone do more things, like the odd thing the OP wanted.
The code uses the path configured on the 3rd line, and iterates recursively while processing .cs files. It does sub-directories as well.
When it finds a .cs file, it looks for method declarations; it tries to be as careful as possible. MAKE A BACKUP - I AM NOT RESPONSIBLE IF MY SCRIPT VIOLATES YOUR CODE!!!
import os, re
path="D:/Downloads/Dropbox/My Dropbox/EitamTool/NetworkSharedObjectModel"
files = []
def processDir(path, files):
dirList=os.listdir(path)
for fname in dirList:
newPath = os.path.normpath(path + os.sep + fname)
if os.path.isdir(newPath):
processDir(newPath, files)
else:
if not newPath in files:
files.append(newPath)
newFile = handleFile(newPath)
if newPath.endswith(".cs"):
writeFile(newPath, newFile)
def writeFile(path, newFile):
f = open(path, 'w')
f.writelines(newFile)
f.close()
def handleFile(path):
out = []
if path.endswith(".cs"):
f = open(path, 'r')
data = f.readlines()
f.close()
inMethod = False
methodName = ""
namespace = "NotFound"
lookingForMethodDeclerationEnd = False
for line in data:
out.append(line)
if lookingForMethodDeclerationEnd:
strippedLine = line.strip()
                if strippedLine.find(")") > -1:  # find() returns -1 (truthy!) when absent, so compare explicitly
lookingForMethodDeclerationEnd = False
if line.find("namespace") > -1:
namespace = line.split(" ")[1][0:-2]
if not inMethod:
strippedLine = line.strip()
if isMethod(strippedLine):
inMethod = True
if strippedLine.find(")") == -1:
lookingForMethodDeclerationEnd = True
previousLine = line
else:
strippedLine = line.strip()
if strippedLine == "{":
methodName = getMethodName(previousLine)
out.append(' if (!MethodAccess.MethodAccess.Counter.ContainsKey("' + namespace + '.' + methodName + '")) {MethodAccess.MethodAccess.Counter.Add("' + namespace + '.' + methodName + '", 0);}')
out.append("\n" + getCodeToInsert(namespace + "." + methodName))
inMethod = False
return out
def getMethodName(line):
line = line.strip()
lines = line.split(" ")
for littleLine in lines:
index = littleLine.find("(")
if index > -1:
return littleLine[0:index]
def getCodeToInsert(methodName):
retVal = " MethodAccess.MethodAccess.Counter[\"" + methodName + "\"]++;\n"
return retVal
def isMethod(line):
if line.find("=") > -1 or line.find(";") > -1 or line.find(" class ") > -1:
return False
if not (line.find("(") > -1):
return False
if line.find("{ }") > -1:
return False
goOn = False
if line.startswith("private "):
line = line[8:]
goOn = True
if line.startswith("public "):
line = line[7:]
goOn = True
if goOn:
return True
return False
processDir(path, files)
When I ran ReSharper on my code, for example:
if (some condition)
{
Some code...
}
ReSharper gave me the above warning (Invert "if" statement to reduce nesting), and suggested the following correction:
if (!some condition) return;
Some code...
I would like to understand why that's better. I always thought that using "return" in the middle of a method problematic, somewhat like "goto".
It is not only aesthetic, but it also reduces the maximum nesting level inside the method. This is generally regarded as a plus because it makes methods easier to understand (and indeed, many static analysis tools provide a measure of this as one of the indicators of code quality).
On the other hand, it also makes your method have multiple exit points, something that another group of people believes is a no-no.
Personally, I agree with ReSharper and the first group (in a language that has exceptions I find it silly to discuss "multiple exit points"; almost anything can throw, so there are numerous potential exit points in all methods).
Regarding performance: both versions should be equivalent (if not at the IL level, then certainly after the jitter is through with the code) in every language. Theoretically this depends on the compiler, but practically any widely used compiler of today is capable of handling much more advanced cases of code optimization than this.
A return in the middle of the method is not necessarily bad. It might be better to return immediately if it makes the intent of the code clearer. For example:
double getPayAmount() {
double result;
if (_isDead) result = deadAmount();
else {
if (_isSeparated) result = separatedAmount();
else {
if (_isRetired) result = retiredAmount();
else result = normalPayAmount();
};
}
return result;
};
In this case, if _isDead is true, we can immediately get out of the method. It might be better to structure it this way instead:
double getPayAmount() {
if (_isDead) return deadAmount();
if (_isSeparated) return separatedAmount();
if (_isRetired) return retiredAmount();
return normalPayAmount();
};
I've picked this code from the refactoring catalog. This specific refactoring is called: Replace Nested Conditional with Guard Clauses.
This is a bit of a religious argument, but I agree with ReSharper that you should prefer less nesting. I believe that this outweighs the negatives of having multiple return paths from a function.
The key reason for having less nesting is to improve code readability and maintainability. Remember that many other developers will need to read your code in the future, and code with less indentation is generally much easier to read.
Preconditions are a great example of where it is okay to return early at the start of the function. Why should the readability of the rest of the function be affected by the presence of a precondition check?
As for the negatives about returning multiple times from a method - debuggers are pretty powerful now, and it's very easy to find out exactly where and when a particular function is returning.
Having multiple returns in a function is not going to affect the maintainance programmer's job.
Poor code readability will.
As others have mentioned, there shouldn't be a performance hit, but there are other considerations. Aside from those valid concerns, this also can open you up to gotchas in some circumstances. Suppose you were dealing with a double instead:
public void myfunction(double exampleParam){
if(exampleParam > 0){
//Body will *not* be executed if Double.IsNan(exampleParam)
}
}
Contrast that with the seemingly equivalent inversion:
public void myfunction(double exampleParam){
if(exampleParam <= 0)
return;
//Body *will* be executed if Double.IsNan(exampleParam)
}
So in certain circumstances what appears to be a a correctly inverted if might not be.
The idea of only returning at the end of a function came back from the days before languages had support for exceptions. It enabled programs to rely on being able to put clean-up code at the end of a method, and then being sure it would be called and some other programmer wouldn't hide a return in the method that caused the cleanup code to be skipped. Skipped cleanup code could result in a memory or resource leak.
However, in a language that supports exceptions, it provides no such guarantees. In a language that supports exceptions, the execution of any statement or expression can cause a control flow that causes the method to end. This means clean-up must be done through using the finally or using keywords.
Anyway, I'm saying I think a lot of people quote the 'only return at the end of a method' guideline without understanding why it was ever a good thing to do, and that reducing nesting to improve readability is probably a better aim.
I'd like to add that there is name for those inverted if's - Guard Clause. I use it whenever I can.
I hate reading code where there is if at the beginning, two screens of code and no else. Just invert if and return. That way nobody will waste time scrolling.
http://c2.com/cgi/wiki?GuardClause
It doesn't only affect aesthetics, but it also prevents code nesting.
It can actually function as a precondition to ensure that your data is valid as well.
This is of course subjective, but I think it strongly improves on two points:
It is now immediately obvious that your function has nothing left to do if condition holds.
It keeps the nesting level down. Nesting hurts readability more than you'd think.
Multiple return points were a problem in C (and to a lesser extent C++) because they forced you to duplicate clean-up code before each of the return points. With garbage collection, the try | finally construct and using blocks, there's really no reason why you should be afraid of them.
Ultimately it comes down to what you and your colleagues find easier to read.
Guard clauses or pre-conditions (as you can probably see) check to see if a certain condition is met and then breaks the flow of the program. They're great for places where you're really only interested in one outcome of an if statement. So rather than say:
if (something) {
// a lot of indented code
}
You reverse the condition and break if that reversed condition is fulfilled
if (!something) return false; // or another value to show your other code the function did not execute
// all the code from before, save a lot of tabs
return is nowhere near as dirty as goto. It allows you to pass a value to show the rest of your code that the function couldn't run.
You'll see the best examples of where this can be applied in nested conditions:
if (something) {
do-something();
if (something-else) {
do-another-thing();
} else {
do-something-else();
}
}
vs:
if (!something) return;
do-something();
if (!something-else) return do-something-else();
do-another-thing();
You'll find few people arguing the first is cleaner but of course, it's completely subjective. Some programmers like to know what conditions something is operating under by indentation, while I'd much rather keep method flow linear.
I won't suggest for one moment that precons will change your life or get you laid but you might find your code just that little bit easier to read.
Performance-wise, there will be no noticeable difference between the two approaches.
But coding is about more than performance. Clarity and maintainability are also very important. And, in cases like this where it doesn't affect performance, it is the only thing that matters.
There are competing schools of thought as to which approach is preferable.
One view is the one others have mentioned: the second approach reduces the nesting level, which improves code clarity. This is natural in an imperative style: when you have nothing left to do, you might as well return early.
Another view, from the perspective of a more functional style, is that a method should have only one exit point. Everything in a functional language is an expression. So if statements must always have an else clauses. Otherwise the if expression wouldn't always have a value. So in the functional style, the first approach is more natural.
There are several good points made here, but multiple return points can be unreadable as well, if the method is very lengthy. That being said, if you're going to use multiple return points just make sure that your method is short, otherwise the readability bonus of multiple return points may be lost.
Performance is in two parts. You have performance when the software is in production, but you also want to have performance while developing and debugging. The last thing a developer wants is to "wait" for something trivial. In the end, compiling this with optimization enabled will result in similar code. So it's good to know these little tricks that pay off in both scenarios.
The case in the question is clear, ReSharper is correct. Rather than nesting if statements, and creating new scope in code, you're setting a clear rule at the start of your method. It increases readability, it will be easier to maintain, and it reduces the amount of rules one has to sift through to find where they want to go.
Personally I prefer only 1 exit point. It's easy to accomplish if you keep your methods short and to the point, and it provides a predictable pattern for the next person who works on your code.
eg.
bool PerformDefaultOperation()
{
bool succeeded = false;
DataStructure defaultParameters;
if ((defaultParameters = this.GetApplicationDefaults()) != null)
{
succeeded = this.DoSomething(defaultParameters);
}
return succeeded;
}
This is also very useful if you just want to check the values of certain local variables within a function before it exits. All you need to do is place a breakpoint on the final return and you are guaranteed to hit it (unless an exception is thrown).
Avoiding multiple exit points can lead to performance gains. I am not sure about C# but in C++ the Named Return Value Optimization (Copy Elision, ISO C++ '03 12.8/15) depends on having a single exit point. This optimization avoids copy constructing your return value (in your specific example it doesn't matter). This could lead to considerable gains in performance in tight loops, as you are saving a constructor and a destructor each time the function is invoked.
But for 99% of the cases saving the additional constructor and destructor calls is not worth the loss of readability nested if blocks introduce (as others have pointed out).
Many good reasons about how the code looks like. But what about results?
Let's take a look to some C# code and its IL compiled form:
using System;
public class Test {
public static void Main(string[] args) {
if (args.Length == 0) return;
if ((args.Length+2)/3 == 5) return;
Console.WriteLine("hey!!!");
}
}
This simple snippet can be compiled. You can open the generated .exe file with ildasm and check what is the result. I won't post all the assembler thing but I'll describe the results.
The generated IL code does the following:
If the first condition is false, jumps to the code where the second is.
If it's true jumps to the last instruction. (Note: the last instruction is a return).
In the second condition the same happens after the result is calculated. Compare and: got to the Console.WriteLine if false or to the end if this is true.
Print the message and return.
So it seems that the code will jump to the end. What if we do a normal if with nested code?
using System;

public class Test {
    public static void Main(string[] args) {
        if (args.Length != 0 && (args.Length+2)/3 != 5)
        {
            Console.WriteLine("hey!!!");
        }
    }
}
The resulting IL instructions are quite similar. The difference is that before there were two jumps per condition (if false, go to the next piece of code; if true, go to the end), whereas now the IL flows better and has three jumps (the compiler optimized this a bit):
First jump: when Length is 0, to a spot near the end where the code jumps again (the third jump) to the very end.
Second: in the middle of the second condition, to skip one instruction.
Third: if the second condition is false, jump to the end.
Anyway, the program counter will always jump.
In theory, inverting if could lead to better performance if it increases branch prediction hit rate. In practice, I think it is very hard to know exactly how branch prediction will behave, especially after compiling, so I would not do it in my day-to-day development, except if I am writing assembly code.
More on branch prediction here.
That is simply controversial. There is no "agreement among programmers" on the question of early return. It's always subjective, as far as I know.
It's possible to make a performance argument, since it's better to have conditions that are written so they are most often true; it can also be argued that it is clearer. It does, on the other hand, create nested tests.
I don't think you will get a conclusive answer to this question.
There are a lot of insightful answers here already, but still, I would like to point to a slightly different situation: instead of a precondition, which indeed belongs at the top of a function, think of step-by-step initialization, where you have to check each step for success and then continue with the next. In that case, you cannot check everything at the top.
I found my code really unreadable when writing an ASIO host application with Steinberg's ASIOSDK, because I followed the nesting paradigm. The nesting went about eight levels deep, and I cannot see a design flaw there, as suggested by Andrew Bullock above. Of course I could have packed some of the inner code into another function and nested the remaining levels there to make it more readable, but that seems rather arbitrary to me.
By replacing the nesting with guard clauses, I even discovered a misconception of mine regarding a portion of cleanup code that should have run much earlier in the function rather than at the end. With nested branches I would never have seen that; you could even say the nesting led me to my misconception.
So this might be another situation where inverted ifs can contribute to a clearer code.
It's a matter of opinion.
My normal approach would be to avoid single line ifs, and returns in the middle of a method.
You wouldn't want lines like the ones it suggests everywhere in your method, but there is something to be said for checking a bunch of assumptions at the top of your method and only doing the actual work if they all pass.
In my opinion an early return is fine if you are just returning void (or some useless return code you're never going to check), and it can improve readability because you avoid nesting while making it explicit that your function is done.
If you are actually returning a value, nesting is usually the better way to go, because you return your value in just one place (at the end) and that can make your code more maintainable in a whole lot of cases.
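A small sketch of that distinction (all names invented for the example):

// Returning void: early returns read as "nothing to do here".
void Render(Widget w)
{
    if (w == null) return;
    if (!w.Visible) return;
    Draw(w);
}

// Returning a value: a single exit keeps the result in one place.
decimal ComputeTotal(Order order)
{
    decimal total = 0;
    if (order != null && order.Lines != null)
    {
        foreach (OrderLine line in order.Lines)
            total += line.Price * line.Quantity;
    }
    return total;
}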
I'm not sure, but I think R# tries to avoid far jumps. When you have an IF-ELSE, the compiler does something like this:
Condition false -> far jump to false_condition_label
true_condition_label:
    instruction1
    ...
    instruction_n
false_condition_label:
    instruction1
    ...
    instruction_n
end block
If the condition is true there is no jump and the L1 cache isn't disturbed, but the jump to false_condition_label can be very far, and then the processor may have to refill its instruction cache. Synchronising the cache is expensive, so R# tries to replace far jumps with short jumps; that way there is a higher probability that all the instructions are already in cache.
I think it depends on what you prefer; as mentioned, there's no general agreement AFAIK.
To reduce the annoyance, you can downgrade this kind of warning to "Hint".
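Alternatively, ReSharper honours per-occurrence suppression comments, so you can keep the nesting where you think it reads better. If I remember the inspection id correctly it is InvertIf (treat that id as an assumption):

// ReSharper disable once InvertIf
if (defaultParameters != null)
{
    succeeded = this.DoSomething(defaultParameters);
}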
My idea is that the return "in the middle of a function" shouldn't be so "subjective".
The reason is quite simple, take this code:
function do_something( data ) {

    if (!is_valid_data( data ))
        return false;   // cheap guard: skips the expensive work below

    do_something_that_takes_an_hour( data );

    instance = new object_with_very_painful_constructor( data );
    if ( instance is not valid ) {
        error_message( );
        return false;
    }

    connect_to_database( );
    get_some_other_data( );
    return true;
}
Maybe the first return is not SO intuitive, but it really saves you: it avoids the hour-long operation on invalid data.
There are too many "ideas" about clean code that are simply subjective; it takes more practice to shed the bad ones.
There are several advantages to this sort of coding, but for me the big win is that returning quickly can improve the speed of your application. For example, because of precondition X I know I can return quickly with an error. This gets the error cases out of the way first and reduces the complexity of your code, and in a lot of cases the CPU pipeline stays cleaner, avoiding pipeline stalls or flushes. Secondly, if you are in a loop, breaking or returning out quickly can save a lot of CPU. Some programmers use loop invariants to achieve this sort of quick exit, but doing that can wreck the CPU pipeline and even create memory-seek problems, meaning the CPU needs to load from outside the cache. Basically, I think you should do what you intended: end the loop or function, rather than create a complex code path just to implement some abstract notion of correct code. If the only tool you have is a hammer, then everything looks like a nail.
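A sketch of the loop case (names invented): exiting as soon as the answer is known, instead of dragging a flag through the remaining iterations:

// Flag-driven: every remaining iteration still runs and tests the flag.
Customer found = null;
bool searching = true;
foreach (Customer c in customers)
{
    if (searching && c.Id == targetId)
    {
        found = c;
        searching = false;
    }
}

// Early exit: stop the moment the work is done.
Customer match = null;
foreach (Customer c in customers)
{
    if (c.Id == targetId)
    {
        match = c;
        break;
    }
}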
The IT department of a subsidiary of ours had a consulting company write them an ASP.NET application. Now it's having intermittent problems with mixing up who the current user is and has been known to show Joe some of Bob's data by mistake.
The consultants were brought back to troubleshoot and we were invited to listen in on their explanation. Two things stuck out.
First, the consultant lead provided this pseudo-code:
void MyFunction()
{
    Session["UserID"] = SomeProprietarySessionManagementLookup();
    Response.Redirect("SomeOtherPage.aspx");
}
He went on to say that the assignment of the session variable is asynchronous, which seemed untrue. Granted the call into the lookup function could do something asynchronously, but this seems unwise.
Given that alleged asynchronousness, his theory was that the session variable was not being assigned before the redirect's inevitable ThreadAbort exception was raised. This failure then prevented SomeOtherPage from displaying the correct user's data.
Second, he gave an example of a coding best practice he recommends. Rather than writing:
int MyFunction(int x, int y)
{
    try
    {
        return x / y;
    }
    catch(Exception ex)
    {
        // log it
        throw;
    }
}
the technique he recommended was:
int MyFunction(int x, int y, out bool isSuccessful)
{
    isSuccessful = false;
    if (y == 0)
        return 0;
    isSuccessful = true;
    return x / y;
}
This will certainly work and could be better from a performance perspective in some situations.
However, from these and other discussion points it just seemed to us that this team was not well-versed technically.
Opinions?
Rule of thumb: If you need to ask if a consultant knows what he's doing, he probably doesn't ;)
And I tend to agree here. Obviously you haven't provided much, but they don't seem terribly competent.
I would agree. These guys seem quite incompetent.
(BTW, I'd check to see if in "SomeProprietarySessionManagementLookup," they're using static data. Saw this -- with behavior exactly as you describe on a project I inherited several months ago. It was a total head-slap moment when we finally saw it ... And wished we could get face to face with the guys who wrote it ... )
If the consultant has written an application that's supposed to be able to keep track of users and only show the correct data to the correct users and it doesn't do that, then clearly something's wrong. A good consultant would find the problem and fix it. A bad consultant would tell you that it was asynchronicity.
On the asynchronous part, the only way that could be true is if the assignment going on there is actually an indexer setter on Session that is hiding an asynchronous call with no callback indicating success/failure. This would seem to be a HORRIBLE design choice, and it looks like a core class in your framework, so I find it highly unlikely.
Usually asynchronous calls have a way to specify a callback so you can determine what the result is, or if the operation was successful. The documentation for Session should be pretty clear though on if it is actually hiding an asynchronous call, but yeah... doesn't look like the consultant knows what he is talking about...
The method call that is being assigned to the Session indexer cannot be asynch, because to get a value asynchronously, you HAVE to use a callback... no way around that, so if there is no explicit callback, it's definitely not asynch (well, internally there could be an asynchronous call, but the caller of the method would perceive it as synchronous, so it is irrelevant if the method internally for example invokes a web service asynchronously).
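To illustrate the shape difference (the method names here are hypothetical):

// Synchronous: the value exists as soon as the call returns.
Session["UserID"] = LookupUserId();

// Asynchronous (APM style): the result only exists inside the callback,
// which runs after the calling method has already moved on.
BeginLookupUserId(delegate(IAsyncResult ar)
{
    Session["UserID"] = EndLookupUserId(ar);
}, null);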
For the second point, I think this would be much better, and keep the same functionality essentially:
int MyFunction(int x, int y)
{
    if (y == 0)
    {
        // log it
        throw new DivideByZeroException("Divide by zero attempted!");
    }
    return x / y;
}
For the first point, that does indeed seem bizarre.
On the second one, it's reasonable to try to avoid division by 0 - it's entirely avoidable and that avoidance is simple. However, using an out parameter to indicate success is only reasonable in certain cases, such as int.TryParse and DateTime.TryParseExact - where the caller can't easily determine whether or not their arguments are reasonable. Even then, the return value is usually the success/failure and the out parameter is the result of the method.
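A minimal sketch of that convention (TryDivide is invented for the example):

// The return value reports success; the out parameter carries the result.
bool TryDivide(int x, int y, out int result)
{
    result = 0;
    if (y == 0)
        return false;
    result = x / y;
    return true;
}

// The call site makes the check hard to ignore:
int quotient;
if (TryDivide(10, 2, out quotient))
    Console.WriteLine(quotient);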
ASP.NET sessions, if you're using the built-in providers, won't accidentally give you someone else's session. SomeProprietarySessionManagementLookup() is the likely culprit; it's returning bad values or just not working.
Session["UserID"] = SomeProprietarySessionManagementLookup();
First of all, assigning the return value from an asynchronous SomeProprietarySessionManagementLookup() just won't work. The consultant's code probably looks like:
public void SomeProprietarySessionManagementLookup()
{
    // do some async lookup
    Action<object> d = delegate(object val)
    {
        LookupSession(); // long running thing that looks up the user.
        Session["UserID"] = 1234; // Setting session manually
    };
    d.BeginInvoke(null, null, null);
}
The consultant isn't totally full of BS, but they have written some buggy code. Response.Redirect() does throw a ThreadAbort, and if the proprietary method is asynchronous, asp.net doesn't know to wait for the asynchronous method to write back to the session before asp.net itself saves the session. This is probably why it sometimes works and sometimes doesn't.
Their code might work if the ASP.NET session is in-process, but with a state server or DB server it wouldn't be reliable. It's timing dependent.
I tested the following. We use state server in development. This code works because the session is written to before the main thread finishes.
Action<object> d = delegate(object val)
{
    System.Threading.Thread.Sleep(1000); // waits a little
    Session["rubbish"] = DateTime.Now;
};
d.BeginInvoke(null, null, null);

System.Threading.Thread.Sleep(5000); // waits a lot

object stuff = Session["rubbish"];
if (stuff == null) stuff = "not there";
divStuff.InnerHtml = Convert.ToString(stuff);
This next snippet of code doesn't work because the session was already saved back to state server by the time the asynchronous method gets around to setting a session value.
Action<object> d = delegate(object val)
{
    System.Threading.Thread.Sleep(5000); // waits a lot
    Session["rubbish"] = DateTime.Now;
};
d.BeginInvoke(null, null, null);

// wait removed - ends immediately.
object stuff = Session["rubbish"];
if (stuff == null) stuff = "not there";
divStuff.InnerHtml = Convert.ToString(stuff);
The first step is for the consultant to make their code synchronous, because their performance trick didn't work at all. If that fixes it, have the consultant properly implement the Asynchronous Programming design pattern.
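A sketch of the synchronous fix, assuming a variant of the lookup that returns the user id (the delegate target LookupSession here is hypothetical): block on the result before the session write and the redirect.

// If the lookup must stay asynchronous internally, wait for it to finish
// before touching Session; EndInvoke blocks until the delegate completes.
Func<int> lookup = LookupSession;
IAsyncResult ar = lookup.BeginInvoke(null, null);
Session["UserID"] = lookup.EndInvoke(ar);
Response.Redirect("SomeOtherPage.aspx");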
I agree with him in part -- it's definitely better to check y for zero rather than catching the (expensive) exception. The out bool isSuccessful seems really dated to me, but whatever.
re: the asynchronous sessionid buffoonery -- may or may not be true, but it sounds like the consultant is blowing smoke for cover.
Cody's rule of thumb is dead right. If you have to ask, he probably doesn't.
Point two seems patently incorrect. .NET's design guidelines say that if a method fails it should throw an exception, which is closer to the original than to the consultant's suggestion, assuming the exception accurately and specifically describes the failure.
The consultants created the code in the first place right? And it doesn't work. I think you have quite a bit of dirt on them already.
The asynchronous answer sounds like BS, but there may be something in it. Presumably they have offered a suitable solution as well as pseudo-code describing the problem they themselves created. I would be more tempted to judge them on their solution rather than their expression of the problem. If their understanding is flawed, their new solution won't work either; then you'll know they are idiots. (In fact, look around to see whether you already have similar proof in other areas of their code.)
The other one is a code style issue. There are a lot of different ways to cope with that. I personally don't like that style, but there will be circumstances under which it is suitable.
They're wrong on the async.
The assignment happens and then the page redirects. The function can start something asynchronously and return (and could even conceivably alter the Session in its own way), but whatever it does return has to be assigned in the code you gave before the redirect.
They're wrong on that defensive coding style in any low-level code and even in a higher-level function unless it's a specific business case that the 0 or NULL or empty string or whatever should be handled that way - in which case, it's always successful (that successful flag is a nasty code smell) and not an exception. Exceptions are for exceptions. You don't want to mask behaviors like this by coddling the callers of the functions. Catch things early and throw exceptions. I think Maguire covered this in Writing Solid Code or McConnell in Code Complete. Either way, it smells.
This guy does not know what he is doing. The obvious culprit is right here:
Session["UserID"] = SomeProprietarySessionManagementLookup();
I have to agree with John Rudy. My gut tells me the problem is in SomeProprietarySessionManagementLookup().
... and your consultants do not sound too sure of themselves.
Storing to Session is not async, so that isn't true unless that function is async. But even then, since it isn't making a Begin* call with a completion callback, the next line of code wouldn't execute until the Session line is complete.
For the second statement: while that could be used, it isn't exactly a best practice, and there are a few things to note about it. You save the cost of throwing an exception, but wouldn't you want to know that you are trying to divide by zero instead of just moving past it?
I don't think that is a solid suggestion at all.
Quite strange. On the second item it may or may not be faster. It certainly isn't the same functionality though.
Typical "consultant" bollocks:
The problem is with whatever SomeProprietarySessionManagementLookup is doing
Exceptions are only expensive if they're thrown. Don't be afraid of try..catch, but throws should only occur in exceptional circumstances. If variable y shouldn't be zero then an ArgumentOutOfRangeException would be appropriate.
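Something along these lines, for instance:

int MyFunction(int x, int y)
{
    if (y == 0)
        throw new ArgumentOutOfRangeException("y", "y must not be zero");
    return x / y;
}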
I'm guessing your consultant is suggesting that using a status variable instead of an exception is better practice for error handling? I don't agree. How often do people forget, or are too lazy, to check return values for errors? Also, a pass/fail variable is not informative. More things can go wrong than divide by zero: the integer division x/y can overflow, for instance, and floating-point operands can be NaN. When things go wrong, a status variable cannot tell you what went wrong, but an exception can. Exceptions are for exceptional cases, and dividing by zero is definitely one of them.
The session thing is possible. It's a bug, beyond doubt, but it could be that the write arrives at whatever custom session state provider you're using after the next read. The session state provider API accommodates locking to prevent this sort of thing, but if the implementor has just ignored all that, your consultant could be telling the truth.
The second issue is also kinda valid. It's not quite idiomatic - it's a slightly reversed version of things like int.TryParse, which are there to avoid performance issues caused by throwing lots of exceptions. But unless you're calling that code an awful lot, it's unlikely it'll make a noticeable difference (compared to say, one less database query per page etc). It's certainly not something you should do by default.
If SomeProprietarySessionManagementLookup() were doing an asynchronous assignment, it would more likely look like this:
SomeProprietarySessionManagementLookup(Session["UserID"]);
The very fact that the code is assigning the result to Session["UserID"] would suggest that it is not supposed to be asynchronous and the result should be obtained before Response.Redirect is called. If SomeProprietarySessionManagementLookup is returning before its result is calculated they have a design flaw anyway.
Throwing an exception versus using an out parameter is a matter of opinion and circumstance, and in actual practice it won't amount to a hill of beans whichever way you do it. For the performance hit of exceptions to become an issue, you would need to be calling the function a huge number of times, which would probably be a problem in itself.
If the consultants deployed their ASP.NET application on your server(s), then they may have deployed it in uncompiled form, which means there would be a bunch of *.cs files floating around that you could look at.
If all you can find is compiled .NET assemblies (DLLs and EXEs) of theirs, then you should still be able to decompile them into somewhat readable source code. I'll bet if you look through the code you'll find them using static variables in their proprietary lookup code. You'd then have something very concrete to show your bosses.
This entire answer stream is full of typical programmer attitudes. It reminds me of Joel's 'Things you should never do' article (rewrite from scratch.) We don't really know anything about the system, other than there's a bug, and some guy posted some code online. There are so many unknowns that it is ridiculous to say "This guy does not know what he is doing."
Rather than pile on the consultant, you could just as easily pile on the person who procured their services. No consultant is perfect, nor is a hiring manager; but at the end of the day the real direction you should be taking is very clear: instead of trying to find fault, you should expend energy on working collaboratively to find solutions. No matter how skilled someone is at their roles and responsibilities, they will certainly have deficiencies. If you determine there is a pattern of incompetencies, then you may choose to transition to another resource going forward, but assigning blame has never solved a single problem in history.
On the second point, I would not use exceptions here. Exceptions are reserved for exceptional cases.
However, division of anything by zero certainly does not equal zero (in math, at least), so this would be case specific.
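To make the risk concrete with the consultant's version from earlier:

bool ok;
int result = MyFunction(10, 0, out ok);
// result is 0 and ok is false, but nothing forces a caller to inspect ok.
// A caller that skips the check silently treats 10 / 0 as 0.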