Are lambda expressions/delegates in C# "pure", or can they be? - c#

I recently asked about functional programs having no side effects, and learned what this means for making parallelized tasks trivial. Specifically, that "pure" functions make this trivial as they have no side effects.
I've also recently been looking into LINQ and lambda expressions as I've run across examples many times here on StackOverflow involving enumeration. That got me to wondering if parallelizing an enumeration or loop can be "easier" in C# now.
Are lambda expressions "pure" enough to pull off trivial parallelizing? Maybe it depends on what you're doing with the expression, but can they be pure enough? Would something like this be theoretically possible/trivial in C#?:
Break the loop into chunks
Run a thread to loop through each chunk
Run a function that does something with the value from the current loop position of each thread
For instance, say I had a bunch of objects in a game loop (as I am developing a game and was thinking about the possibility of multiple threads) and had to do something with each of them every frame, would the above be trivial to pull off? Looking at IEnumerable it seems it only keeps track of the current position, so I'm not sure I could use the normal generic collections to break the enumeration into "chunks".
Sorry about this question. I used bullets above instead of pseudo-code because I don't even know enough to write pseudo-code off the top of my head. My .NET knowledge has been purely simple business stuff and I'm new to delegates and threads, etc. I mainly want to know whether the above approach is worth pursuing, and whether delegates/lambdas are something I don't have to worry about when it comes to parallelizing them.

First off, note that in order to be "pure" a method must not only have no side effects. It must also always return the same result when given the same arguments. So, for example, the "Math.Sin" method is pure. You feed in 12, it gives you back sin(12) and it is the same every time. A method GetCurrentTime() is not pure even if it has no side effects; it returns a different value every time you call it, no matter what arguments you pass in.
Also note that a pure method really ought not to ever throw an exception; exceptions count as observable side effects for our purposes.
Second, yes, if you can reason about the purity of a method then you can do interesting things to automatically parallelize it. The trouble is, almost no methods are actually pure. Furthermore, suppose you do have a pure method; since a pure method is a perfect candidate for memoization, and since memoization introduces a side effect (it mutates a cache!) it is very attractive to take what ought to be pure methods and then make them impure.
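To make the memoization point concrete, here is a minimal sketch (the Memoize helper is hypothetical, not part of the framework): the wrapped function gives the same observable answers as the pure one, but the wrapper now mutates a cache.

using System;
using System.Collections.Generic;

static class PurityDemo
{
    // Pure: same argument always produces the same result, no side effects.
    static double SinTwice(double x) => Math.Sin(x) * 2.0;

    // Hypothetical memoizer: observably the same function, but it now
    // mutates a cache (a side effect), and the Dictionary is not thread-safe.
    static Func<double, double> Memoize(Func<double, double> f)
    {
        var cache = new Dictionary<double, double>();
        return x =>
        {
            double result;
            if (!cache.TryGetValue(x, out result))
            {
                result = f(x);
                cache[x] = result; // the hidden side effect
            }
            return result;
        };
    }

    static void Main()
    {
        var fastSin = Memoize(SinTwice);
        Console.WriteLine(fastSin(12)); // computed
        Console.WriteLine(fastSin(12)); // served from the cache
    }
}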
What we really need is some way to "tame side effects" as Joe Duffy says. Some way to draw a box around a method and say "this method isn't side-effect-free, but its side effects are not visible outside of this box", and then use that fact to drive safe automatic parallelization.
I'd love to figure out some way to add these concepts to languages like C#, but this is all totally blue-sky open-research-problem stuff here; no promises intended or implied.

Lambdas should be pure. And then the framework offers automatic parallelization with a simple .AsParallel() addition to a LINQ query (PLINQ).
But purity is not automatic or guaranteed; the programmer is responsible for making and keeping them pure.
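A minimal sketch of what that looks like, assuming the projection really is pure:

using System;
using System.Linq;

class PlinqSketch
{
    static void Main()
    {
        // AsParallel lets PLINQ partition the source and run the Select on
        // multiple threads; this is only safe because the lambda is pure.
        // Note: without AsOrdered() the results may come back in any order.
        var squares = Enumerable.Range(0, 1000000)
                                .AsParallel()
                                .Select(n => (long)n * n)
                                .ToArray();

        Console.WriteLine(squares.Length);
    }
}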

Whether or not a lambda is pure is tied to what it is doing. As a concept it is neither pure nor impure.
For example: The following lambda expression is impure as it is reading and writing a single variable in the body. Running it in parallel creates a race condition.
var i = 0;
Func<bool> del = () => {
    if (i == 42) { return true; }
    else { i++; return false; }
};
By contrast, the following delegate is pure and has no race conditions.
Func<bool> del = () => true;

As for the loop part, you could also use Parallel.For and Parallel.ForEach for the example about the objects in a game. These are also part of .NET 4, but you can get them as a download for earlier versions.
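A rough sketch of what that could look like for the game-loop scenario (GameObject and Update are hypothetical names; each Update must not touch shared mutable state for this to be safe):

using System.Collections.Generic;
using System.Threading.Tasks;

class GameObject
{
    public void Update(float deltaTime) { /* per-object work */ }
}

class GameLoopSketch
{
    static void UpdateAll(List<GameObject> gameObjects, float deltaTime)
    {
        // Parallel.ForEach chunks the list and runs the delegate on worker
        // threads; it blocks until every object has been updated.
        Parallel.ForEach(gameObjects, obj => obj.Update(deltaTime));
    }
}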

There is a 13-part series that discusses the new parallelism support in .NET 4.0 here. It includes discussion of LINQ and PLINQ as well, in Part 7. It is a great read, so check it out.

Related

Can the C# compiler throw an error or warning if a certain method is called in a loop

Oftentimes a developer on my team writes code in a loop that makes a call that is relatively slow (i.e. database access, a web service call, or some other slow method). This is a super common mistake.
Yes, we practice code reviews, and we try to catch these and fix them before merging. However, failing early is better, right?
So is there a way to catch this mistake via the compiler?
Example:
Imagine this method
public ReturnObject SlowMethod(Something thing)
{
// method work
}
Below the method is called in a loop, which is a mistake.
public ReturnObject Call(IEnumerable<Something> things)
{
    foreach (var thing in things)
        SlowMethod(thing); // Should throw compiler error or warning in a loop
}
Is there any way to decorate the above SlowMethod() with an attribute or compiler statement so that it would complain if used in a loop?
No, there is nothing in regular C# to prevent a method being used in a loop.
Your options:
discourage usage in a loop by providing easier-to-use alternatives. Providing a second (or only) method that deals with collections will likely discourage people from writing calls in a loop enough that it is no longer a major concern.
try to write your own code analysis rule (starting tutorial - https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tutorials/how-to-write-csharp-analyzer-code-fix)
add run-time protection to the method if it is called more often than you'd like.
Obviously it makes sense to your callers to invoke those slow methods in a loop - you're trying to put work into preventing that, but that's putting work into something fundamentally negative. Why not do something positive instead? Evidently you've provided an API that's convenient to use in a loop. So, provide some alternatives that are easier to use correctly where an incorrect use in a loop would formerly have taken place (a rough sketch follows this list), like:
an iterable-based API that would make the loop implicit, to remove some of the latency since you'd have a full view of what will be iterated, and can hide the latency appropriately,
an async API that won't block the thread, with example code showing how to use it in the typical situations you've encountered thus far; remember that an API that's too hard to use correctly won't get used!
a lowest-common-denominator API: split the methods into a requester and a result provider, so that there'd naturally be two loops: one to submit all the requests, another to collect and process the results (I dislike this approach, since it doesn't make the code any nicer)
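A rough sketch of the collection-based and async alternatives mentioned above (SlowService is a hypothetical name, Something and ReturnObject are the types from the question, and the bodies are placeholders):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class Something { }
public class ReturnObject { }

public class SlowService
{
    // Existing per-item method: one database/web round-trip per call.
    public ReturnObject SlowMethod(Something thing)
    {
        throw new NotImplementedException();
    }

    // Batch overload: callers hand over the whole collection, so the slow
    // work can be done once (one query, one request) instead of per item.
    public IReadOnlyList<ReturnObject> SlowMethod(IEnumerable<Something> things)
    {
        throw new NotImplementedException();
    }

    // Async variant: still slow, but it no longer blocks the calling thread.
    public Task<ReturnObject> SlowMethodAsync(Something thing)
    {
        throw new NotImplementedException();
    }
}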

On using Publish().RefCount()

I find myself often wanting to use Publish().RefCount() to 'protect my sources'.
For example, when translating some incoming IObservable json into two IObservable properties:
var anon = source.Select(TranslateToAnonObject);
this.Xs = anon.Select(GetXFromAnonObject);
this.Ys = anon.Select(GetYFromAnonObject);
To avoid performing the translation twice, I'd be tempted to put a Publish().RefCount() behind the anon definition.
And same thing for both property values, to avoid performing the Get.. functions separately for each subscriber.
The thing is, it's getting to the point where I can't really see many situations where I wouldn't want this. But if that were right, it would surely be the default in Rx. Where is my thinking going wrong?
(Thinks: is it because I'm almost exclusively working with 'hot' observables?)
You do quite often want to do this. In fact, I wrote an article on this very point. The reason it isn't the default is simply that it isn't required all the time (and it is easier to omit than to switch off). There are many cases where it just adds overhead, and there are more than a few cases where Publish() with explicit connection control is needed, because the subscriber count can fall to zero and re-subscribing would have unintended side effects, particularly (as you said) when dealing with cold observables.
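For reference, the shared pipeline described in the question would look roughly like this (a fragment reusing the source and helper names from the question):

using System.Reactive.Linq;

// One underlying subscription to source feeds all subscribers of Xs and Ys;
// RefCount connects on the first subscription and disconnects on the last.
var anon = source
    .Select(TranslateToAnonObject)
    .Publish()
    .RefCount();

this.Xs = anon.Select(GetXFromAnonObject);
this.Ys = anon.Select(GetYFromAnonObject);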

How does C# async/await relate to more general constructs, e.g. F# workflows or monads?

The C# language design has always (historically) been geared towards solving specific problems rather than finding ways to address the underlying general problems: see for example http://blogs.msdn.com/b/ericlippert/archive/2009/07/09/iterator-blocks-part-one.aspx for "IEnumerable vs. coroutines":
We could have made it much more general. Our iterator blocks can be seen as a weak kind of coroutine. We could have chosen to implement full coroutines and just made iterator blocks a special case of coroutines. And of course, coroutines are in turn less general than first-class continuations; we could have implemented continuations, implemented coroutines in terms of continuations, and iterators in terms of coroutines.
or http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx for SelectMany as a surrogate for (some kind of) Monads:
The C# type system is not powerful enough to create a generalized abstraction for monads which was the primary motivator for creating extension methods and the "query pattern"
I do not want to ask why has been so (many good answers have been already given, especially in Eric's blog, which may apply to all these design decisions: from performance to increased complexity, both for the compiler and the programmer).
What I am trying to understand is which "general construct" the async/await keywords relate to (my best guess is the continuation monad - after all, F# async is implemented using workflows, which to my understanding is a continuation monad), and how they relate to it (how do they differ? what is missing? why is there a gap, if any?)
I'm looking for an answer similar to the Eric Lippert article I linked, but related to async/await instead of IEnumerable/yield.
Edit: besides the great answers, some useful links to related questions and blog posts were suggested; I'm editing my question to list them:
A starting point for bind using await
Implementation details of the state machine behind await
Other details on how await gets compiled/rewritten
Alternative, hypothetical implementation using continuations (call/cc)
The asynchronous programming model in C# is very similar to asynchronous workflows in F#, which are an instance of the general monad pattern. In fact, the C# iterator syntax is also an instance of this pattern, although it needs some additional structure, so it is not just a simple monad.
Explaining this is well beyond the scope of a single SO answer, but let me explain the key ideas.
Monadic operations.
The C# async essentially consists of two primitive operations. You can await an asynchronous computation and you can return the result from an asynchronous computation (in the first case, this is done using a new keyword, while in the second case, we're re-using a keyword that is already in the language).
If you were following the general pattern (monad) then you would translate the asynchronous code into calls to the following two operations:
Task<R> Bind<T, R>(Task<T> computation, Func<T, Task<R>> continuation);
Task<T> Return<T>(T value);
They can both be quite easily implemented using the standard task API - the first one is essentially a combination of ContinueWith and Unwrap and the second one simply creates a task that returns the value immediately. I'm going to use the above two operations, because they better capture the idea.
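For illustration, here is one way those two operations could be sketched on top of the Task API (error handling omitted; this shows the monadic pattern spelled out, not how the C# compiler actually does it):

using System;
using System.Threading.Tasks;

static class TaskMonad
{
    // Bind: when `computation` finishes, feed its result to `continuation`
    // and flatten the resulting Task<Task<R>> into a Task<R>.
    public static Task<R> Bind<T, R>(Task<T> computation, Func<T, Task<R>> continuation)
    {
        return computation.ContinueWith(t => continuation(t.Result)).Unwrap();
    }

    // Return: wrap an already-available value in a completed task.
    public static Task<T> Return<T>(T value)
    {
        var tcs = new TaskCompletionSource<T>();
        tcs.SetResult(value);
        return tcs.Task;
    }
}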
Translation. The key thing is to translate asynchronous code to normal code that uses the above operations.
Let's look at a case where we await an expression e, then assign the result to a variable x and evaluate an expression (or statement block) body (in C#, you can await inside an expression, but you could always translate that to code that first assigns the result to a variable):
[| var x = await e; body |]
= Bind(e, x => [| body |])
I'm using a notation that is quite common in programming languages. The meaning of [| e |] = (...) is that we translate the expression e (in "semantic brackets") to some other expression (...).
In the above case, when you have an expression with await e, it is translated to the Bind operation and the body (the rest of the code following await) is pushed into a lambda function that is passed as a second parameter to Bind.
This is where the interesting thing happens! Instead of evaluating the rest of the code immediately (or blocking a thread while waiting), the Bind operation can run the asynchronous operation (represented by e which is of type Task<T>) and, when the operation completes, it can finally invoke the lambda function (continuation) to run the rest of the body.
The idea of the translation is that it turns ordinary code that returns some type R to a task that returns the value asynchronously - that is Task<R>. In the above equation, the return type of Bind is, indeed, a task. This is also why we need to translate return:
[| return e |]
= Return(e)
This is quite simple - when you have a resulting value and you want to return it, you simply wrap it in a task that immediately completes. This might sound useless, but remember that we need to return a Task because the Bind operation (and our entire translation) requires that.
Larger example. If you look at a larger example that contains multiple awaits:
var x = await AsyncOperation();
return await x.AnotherAsyncOperation();
The code would be translated to something like this:
Bind(AsyncOperation(), x =>
    Bind(x.AnotherAsyncOperation(), temp =>
        Return(temp)));
The key trick is that every Bind turns the rest of the code into a continuation (meaning that it can be evaluated when an asynchronous operation is completed).
Continuation monad. In C#, the async mechanism is not actually implemented using the above translation. The reason is that if you focus just on async, you can do a more efficient compilation (which is what C# does) and produce a state machine directly. However, the above is pretty much how asynchronous workflows work in F#. This is also the source of additional flexibility in F# - you can define your own Bind and Return to mean other things - such as operations for working with sequences, tracking logging, creating resumable computations or even combining asynchronous computations with sequences (async sequence can yield multiple results, but can also await).
The F# implementation is based on the continuation monad which means that Task<T> (actually, Async<T>) in F# is defined roughly like this:
Async<T> = Action<Action<T>>
That is, an asynchronous computation is some action. When you give it Action<T> (a continuation) as an argument, it will start doing some work and then, when it eventually finishes, it invokes this action that you specified. If you search for continuation monads, then I'm sure you can find better explanation of this in both C# and F#, so I'll stop here...
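Translated into C#, that definition and the two operations look roughly like this (a toy sketch, ignoring error handling and synchronization contexts):

using System;

// Async<T> = Action<Action<T>>: a computation that, given a callback,
// eventually invokes it with the produced value.
delegate void Async<T>(Action<T> continuation);

static class ContinuationMonad
{
    public static Async<T> Return<T>(T value) =>
        cont => cont(value);

    public static Async<R> Bind<T, R>(Async<T> m, Func<T, Async<R>> f) =>
        // Run m; when it calls back with t, run f(t) with the original continuation.
        cont => m(t => f(t)(cont));
}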
Tomas's answer is very good. To add a few more things:
The C# language design has always (historically) been geared towards solving specific problems rather than finding ways to address the underlying general problems
Though there is some truth to that, I don't think it is an entirely fair or accurate characterization, so I'm going to start my answer by denying the premise of your question.
It is certainly true that there is a spectrum with "very specific" on one end and "very general" on the other, and that solutions to specific problems fall on that spectrum. C# is designed as a whole to be a highly general solution to a great many specific problems; that's what a general-purpose programming language is. You can use C# to write everything from web services to XBOX 360 games.
Since C# is designed to be a general-purpose programming language, when the design team identifies a specific user problem they always consider the more general case. LINQ is an excellent case in point. In the very early days of the design of LINQ, it was little more than a way to put SQL statements in a C# program, because that's the problem space that was identified. But quite soon in the design process the team realized that the concepts of sorting, filtering, grouping and joining data applied not just to tabular data in a relational database, but also hierarchical data in XML, and to ad-hoc objects in memory. And so they decided to go for the far more general solution that we have today.
The trick of design is figuring out where on the spectrum it makes sense to stop. The design team could have said, well, the query comprehension problem is actually just a specific case of the more general problem of binding monads. And the binding monads problem is actually just a specific case of the more general problem of defining operations on higher kinds of types. And surely there is some abstraction over type systems... and enough is enough. By the time we get to solving the bind-an-arbitrary-monad problem, the solution is now so general that the line-of-business SQL programmers who were the motivation for the feature in the first place are completely lost, and we haven't actually solved their problem.
The really major features added since C# 1.0 -- generic types, anonymous functions, iterator blocks, LINQ, dynamic, async -- all have the property that they are highly general features useful in many different domains. They can all be treated as specific examples of a more general problem, but that is true of any solution to any problem; you can always make it more general. The idea of the design of each of these features is to find the point where they can't be made more general without confusing their users.
Now that I've denied the premise of your question, let's look at the actual question:
What I am trying to understand is which "general construct" the async/await keywords relate to
It depends on how you look at it.
The async-await feature is built around the Task<T> type, which is, as you note, a monad. And of course if you talked about this with Erik Meijer he would immediately point out that Task<T> is actually a comonad; you can get the T value back out the other end.
Another way to look at the feature is to take the paragraph you quoted about iterator blocks and substitute "async" for "iterator". Asynchronous methods are, like iterator methods, a kind of coroutine. You can think of Task<T> as just an implementation detail of the coroutine mechanism if you like.
A third way to look at the feature is to say that it is a kind of call-with-current-continuation (commonly abbreviated call/cc). It is not a complete implementation of call/cc because it does not take into account the state of the call stack at the time the continuation is signed up. See this question for details:
How could the new async feature in c# 5.0 be implemented with call/cc?
I will wait and see if someone (Eric? Jon? maybe you?) can fill in more details on how C# actually generates code to implement await,
The rewrite is essentially just a variation on how iterator blocks are rewritten. Mads goes through all the details in his MSDN Magazine article:
http://msdn.microsoft.com/en-us/magazine/hh456403.aspx

Does inverting the "if" improve performance? [duplicate]

When I ran ReSharper on my code, for example:
if (some condition)
{
Some code...
}
ReSharper gave me the above warning (Invert "if" statement to reduce nesting), and suggested the following correction:
if (!some condition) return;
Some code...
I would like to understand why that's better. I always thought that using "return" in the middle of a method was problematic, somewhat like "goto".
It is not only aesthetic, but it also reduces the maximum nesting level inside the method. This is generally regarded as a plus because it makes methods easier to understand (and indeed, many static analysis tools provide a measure of this as one of the indicators of code quality).
On the other hand, it also makes your method have multiple exit points, something that another group of people believes is a no-no.
Personally, I agree with ReSharper and the first group (in a language that has exceptions I find it silly to discuss "multiple exit points"; almost anything can throw, so there are numerous potential exit points in all methods).
Regarding performance: both versions should be equivalent (if not at the IL level, then certainly after the jitter is through with the code) in every language. Theoretically this depends on the compiler, but practically any widely used compiler of today is capable of handling much more advanced cases of code optimization than this.
A return in the middle of the method is not necessarily bad. It might be better to return immediately if it makes the intent of the code clearer. For example:
double getPayAmount() {
    double result;
    if (_isDead) result = deadAmount();
    else {
        if (_isSeparated) result = separatedAmount();
        else {
            if (_isRetired) result = retiredAmount();
            else result = normalPayAmount();
        }
    }
    return result;
}
In this case, if _isDead is true, we can immediately get out of the method. It might be better to structure it this way instead:
double getPayAmount() {
    if (_isDead) return deadAmount();
    if (_isSeparated) return separatedAmount();
    if (_isRetired) return retiredAmount();
    return normalPayAmount();
}
I've picked this code from the refactoring catalog. This specific refactoring is called: Replace Nested Conditional with Guard Clauses.
This is a bit of a religious argument, but I agree with ReSharper that you should prefer less nesting. I believe that this outweighs the negatives of having multiple return paths from a function.
The key reason for having less nesting is to improve code readability and maintainability. Remember that many other developers will need to read your code in the future, and code with less indentation is generally much easier to read.
Preconditions are a great example of where it is okay to return early at the start of the function. Why should the readability of the rest of the function be affected by the presence of a precondition check?
As for the negatives about returning multiple times from a method - debuggers are pretty powerful now, and it's very easy to find out exactly where and when a particular function is returning.
Having multiple returns in a function is not going to affect the maintainance programmer's job.
Poor code readability will.
As others have mentioned, there shouldn't be a performance hit, but there are other considerations. Aside from those valid concerns, this also can open you up to gotchas in some circumstances. Suppose you were dealing with a double instead:
public void myfunction(double exampleParam){
    if(exampleParam > 0){
        // Body will *not* be executed if Double.IsNaN(exampleParam)
    }
}
Contrast that with the seemingly equivalent inversion:
public void myfunction(double exampleParam){
    if(exampleParam <= 0)
        return;

    // Body *will* be executed if Double.IsNaN(exampleParam)
}
So in certain circumstances what appears to be a correctly inverted if might not be.
The idea of only returning at the end of a function dates back to the days before languages had support for exceptions. It enabled programs to rely on being able to put clean-up code at the end of a method, and then be sure it would be called and that some other programmer wouldn't hide a return in the method that caused the clean-up code to be skipped. Skipped clean-up code could result in a memory or resource leak.
However, in a language that supports exceptions, it provides no such guarantees. In a language that supports exceptions, the execution of any statement or expression can cause a control flow that causes the method to end. This means clean-up must be done through using the finally or using keywords.
Anyway, I'm saying I think a lot of people quote the 'only return at the end of a method' guideline without understanding why it was ever a good thing to do, and that reducing nesting to improve readability is probably a better aim.
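To make that clean-up point concrete, here is a small sketch where an early return is harmless because the using block still disposes the reader:

using System.IO;

static class GuardWithCleanup
{
    public static string ReadFirstLineUpper(string path)
    {
        using (var reader = new StreamReader(path))
        {
            var line = reader.ReadLine();
            if (line == null)
                return string.Empty;   // early exit; Dispose still runs

            return line.ToUpperInvariant();
        }
    }
}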
I'd like to add that there is a name for those inverted ifs - the Guard Clause. I use it whenever I can.
I hate reading code where there is an if at the beginning, two screens of code, and no else. Just invert the if and return. That way nobody will waste time scrolling.
http://c2.com/cgi/wiki?GuardClause
It doesn't only affect aesthetics, but it also prevents code nesting.
It can actually function as a precondition to ensure that your data is valid as well.
This is of course subjective, but I think it strongly improves on two points:
It is now immediately obvious that your function has nothing left to do if the condition holds.
It keeps the nesting level down. Nesting hurts readability more than you'd think.
Multiple return points were a problem in C (and to a lesser extent C++) because they forced you to duplicate clean-up code before each of the return points. With garbage collection, the try/finally construct and using blocks, there's really no reason why you should be afraid of them.
Ultimately it comes down to what you and your colleagues find easier to read.
Guard clauses or pre-conditions (as you can probably see) check to see if a certain condition is met and then breaks the flow of the program. They're great for places where you're really only interested in one outcome of an if statement. So rather than say:
if (something) {
// a lot of indented code
}
You reverse the condition and break if that reversed condition is fulfilled
if (!something) return false; // or another value to show your other code the function did not execute
// all the code from before, save a lot of tabs
return is nowhere near as dirty as goto. It allows you to pass a value to show the rest of your code that the function couldn't run.
You'll see the best examples of where this can be applied in nested conditions:
if (something) {
    do-something();
    if (something-else) {
        do-another-thing();
    } else {
        do-something-else();
    }
}
vs:
if (!something) return;
do-something();
if (!something-else) return do-something-else();
do-another-thing();
You'll find few people arguing the first is cleaner but of course, it's completely subjective. Some programmers like to know what conditions something is operating under by indentation, while I'd much rather keep method flow linear.
I won't suggest for one moment that precons will change your life or get you laid but you might find your code just that little bit easier to read.
Performance-wise, there will be no noticeable difference between the two approaches.
But coding is about more than performance. Clarity and maintainability are also very important. And, in cases like this where it doesn't affect performance, it is the only thing that matters.
There are competing schools of thought as to which approach is preferable.
One view is the one others have mentioned: the second approach reduces the nesting level, which improves code clarity. This is natural in an imperative style: when you have nothing left to do, you might as well return early.
Another view, from the perspective of a more functional style, is that a method should have only one exit point. Everything in a functional language is an expression. So if statements must always have an else clause. Otherwise the if expression wouldn't always have a value. So in the functional style, the first approach is more natural.
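As an illustration of that expression-oriented view, the earlier getPayAmount example can be written as one expression with a single exit point using C#'s conditional operator (a sketch reusing the fields from that example):

double getPayAmount() {
    return _isDead      ? deadAmount()
         : _isSeparated ? separatedAmount()
         : _isRetired   ? retiredAmount()
         :                normalPayAmount();
}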
There are several good points made here, but multiple return points can be unreadable as well, if the method is very lengthy. That being said, if you're going to use multiple return points just make sure that your method is short, otherwise the readability bonus of multiple return points may be lost.
Performance is in two parts. You have performance when the software is in production, but you also want to have performance while developing and debugging. The last thing a developer wants is to "wait" for something trivial. In the end, compiling this with optimization enabled will result in similar code. So it's good to know these little tricks that pay off in both scenarios.
The case in the question is clear, ReSharper is correct. Rather than nesting if statements, and creating new scope in code, you're setting a clear rule at the start of your method. It increases readability, it will be easier to maintain, and it reduces the amount of rules one has to sift through to find where they want to go.
Personally I prefer only 1 exit point. It's easy to accomplish if you keep your methods short and to the point, and it provides a predictable pattern for the next person who works on your code.
eg.
bool PerformDefaultOperation()
{
    bool succeeded = false;
    DataStructure defaultParameters;

    if ((defaultParameters = this.GetApplicationDefaults()) != null)
    {
        succeeded = this.DoSomething(defaultParameters);
    }

    return succeeded;
}
This is also very useful if you just want to check the values of certain local variables within a function before it exits. All you need to do is place a breakpoint on the final return and you are guaranteed to hit it (unless an exception is thrown).
Avoiding multiple exit points can lead to performance gains. I am not sure about C# but in C++ the Named Return Value Optimization (Copy Elision, ISO C++ '03 12.8/15) depends on having a single exit point. This optimization avoids copy constructing your return value (in your specific example it doesn't matter). This could lead to considerable gains in performance in tight loops, as you are saving a constructor and a destructor each time the function is invoked.
But for 99% of the cases saving the additional constructor and destructor calls is not worth the loss of readability nested if blocks introduce (as others have pointed out).
Many good points have been made about how the code looks. But what about the results?
Let's take a look to some C# code and its IL compiled form:
using System;

public class Test {
    public static void Main(string[] args) {
        if (args.Length == 0) return;
        if ((args.Length + 2) / 3 == 5) return;
        Console.WriteLine("hey!!!");
    }
}
This simple snippet can be compiled. You can open the generated .exe file with ildasm and check the result. I won't post all the assembly, but I'll describe the results.
The generated IL code does the following:
If the first condition is false, it jumps to the code where the second one is.
If it's true, it jumps to the last instruction. (Note: the last instruction is a return.)
In the second condition the same happens after the result is calculated: compare, and go to the Console.WriteLine if false or to the end if true.
Print the message and return.
So it seems that the code will jump to the end. What if we do a normal if with nested code?
using System;

public class Test {
    public static void Main(string[] args) {
        if (args.Length != 0 && (args.Length + 2) / 3 != 5)
        {
            Console.WriteLine("hey!!!");
        }
    }
}
The results are quite similar in IL instructions. The difference is that before there were two jumps per condition: if false go to next piece of code, if true go to the end. And now the IL code flows better and has 3 jumps (the compiler optimized this a bit):
First jump: when Length is 0, to a part where the code jumps again (the third jump) to the end.
Second: in the middle of the second condition, to avoid one instruction.
Third: if the second condition is false, jump to the end.
Anyway, the program counter will always jump.
In theory, inverting if could lead to better performance if it increases branch prediction hit rate. In practice, I think it is very hard to know exactly how branch prediction will behave, especially after compiling, so I would not do it in my day-to-day development, except if I am writing assembly code.
More on branch prediction here.
That is simply controversial. There is no "agreement among programmers" on the question of early return. It's always subjective, as far as I know.
It's possible to make a performance argument, since it's better to have conditions that are written so they are most often true; it can also be argued that it is clearer. It does, on the other hand, create nested tests.
I don't think you will get a conclusive answer to this question.
There are a lot of insightful answers here already, but still, I would like to point to a slightly different situation: instead of a precondition, which indeed should be put at the top of a function, think of a step-by-step initialization, where you have to check each step for success and then continue with the next. In that case, you cannot check everything at the top.
I found my code really unreadable when writing an ASIO host application with Steinberg's ASIOSDK, as I followed the nesting paradigm. It went like eight levels deep, and I cannot see a design flaw there, as mentioned by Andrew Bullock above. Of course, I could have packed some inner code to another function, and then nested the remaining levels there to make it more readable, but this seems rather random to me.
By replacing nesting with guard clauses, I even discovered a misconception of mine regarding a portion of cleanup-code that should have occurred much earlier within the function instead of at the end. With nested branches, I would never have seen that, you could even say they led to my misconception.
So this might be another situation where inverted ifs can contribute to a clearer code.
It's a matter of opinion.
My normal approach would be to avoid single line ifs, and returns in the middle of a method.
You wouldn't want lines like it suggests everywhere in your method but there is something to be said for checking a bunch of assumptions at the top of your method, and only doing your actual work if they all pass.
In my opinion early return is fine if you are just returning void (or some useless return code you're never gonna check) and it might improve readability because you avoid nesting and at the same time you make explicit that your function is done.
If you are actually returning a returnValue - nesting is usually a better way to go because you return your returnValue in just one place (at the end - duh), and it might make your code more maintainable in a whole lot of cases.
I'm not sure, but I think that R# tries to avoid far jumps. When you have an IF-ELSE, the compiler does something like this:
Condition false -> far jump to false_condition_label
true_condition_label:
instruction1
...
instruction_n
false_condition_label:
instruction1
...
instruction_n
end block
If the condition is true there is no jump and no eviction of the L1 cache, but the jump to false_condition_label can be very far and the processor must refill its own cache. Synchronising the cache is expensive. R# tries to replace far jumps with short jumps, and in this case there is a bigger probability that all instructions are already in the cache.
I think it depends on what you prefer; as mentioned, there's no general agreement AFAIK.
To reduce the annoyance, you can downgrade this kind of warning to "Hint".
My idea is that the return "in the middle of a function" shouldn't be so "subjective".
The reason is quite simple, take this code:
function do_something( data ){
    if (!is_valid_data( data ))
        return false;

    do_something_that_takes_an_hour( data );

    instance = new object_with_very_painful_constructor( data );
    if ( instance is not valid ) {
        error_message( );
        return;
    }

    connect_to_database( );
    get_some_other_data( );
    return;
}
Maybe the first "return" isn't so intuitive, but it really saves you from waiting on the expensive code below.
There are too many "ideas" about clean code that simply need more practice to lose their "subjective" bad parts.
There are several advantages to this sort of coding, but for me the big win is that if you can return quickly you can improve the speed of your application. I.e. I know that because of precondition X I can return quickly with an error. This gets rid of the error cases first and reduces the complexity of your code. In a lot of cases, because the CPU pipeline can now be cleaner, it can avoid pipeline stalls or switches. Secondly, if you are in a loop, breaking or returning out quickly can save you a lot of CPU. Some programmers use loop invariants to do this sort of quick exit, but doing that can break your CPU pipeline and even create memory-seek problems, meaning the CPU needs to load from outside the cache. But basically I think you should do what you intended, that is, end the loop or function, rather than create a complex code path just to implement some abstract notion of correct code. If the only tool you have is a hammer, then everything looks like a nail.

C# foreach vs functional each [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Which one of these do you prefer?
foreach(var zombie in zombies)
{
    zombie.ShuffleTowardsSurvivors();
    zombie.EatNearbyBrains();
}
or
zombies.Each(zombie => {
    zombie.ShuffleTowardsSurvivors();
    zombie.EatNearbyBrains();
});
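(For reference: Each is not a built-in LINQ operator; the question presumably assumes an extension method along these lines.)

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // A hypothetical Each, equivalent to List<T>.ForEach but for any sequence.
    public static void Each<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source)
            action(item);
    }
}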
The first. It's part of the language for a reason.
Personally, I'd only use the second, functional approach to flow control if there is a good reason to do so, such as using Parallel.ForEach in .NET 4. It has many disadvantages, including:
It's slower. It's going to introduce a delegate invocation at each element, just as if you wrote foreach (..) { myDelegate(); }
It's non-standard, so will be more difficult to understand by most developers
If you close over any locals, you're going to force the compiler to make a closure. This can lead to strange issues if there's threading involved, plus adds completely unnecessary bloat to the assembly.
I see no reason to write your own syntax for a flow control construct that already exists in the language.
Here you're doing some very imperative things like writing a statement rather than an expression (as presumably the Each method returns no value) and mutating state (which one can only assume the methods do, as they also appear to return no value) yet you're trying to pass them off as 'functional programming' by passing a collection of statements as a delegate. This code could barely be further from the ideals and idioms of functional programming, so why try to disguise it as such?
As much as I like multi-paradigm languages such as C#, I think they are easiest to understand and maintain when paradigms are mixed at a higher level (e.g. an entire method written in either a functional or an imperative style) rather than when multiple paradigms are mixed within a single statement or expression.
If you're writing imperative code just be honest about it and use a loop. It's nothing to be ashamed of. Imperative code is not an inherently bad thing.
Second form.
In my opinion, the fewer language constructs and keywords you have to use, the better. C# has enough extraneous crud in it as it is.
Generally the less you have to type, the better. Seriously, how could you not want to use "var" in situations like this? Surely if being explicit was your only goal, you'd still be using hungarian notation... you have an IDE that gives you type information whenever you hover over... or of course Ctrl+Q if you're using Resharper...
@T.E.D. The performance implications of a delegate invocation are a secondary concern. If you're doing this a thousand times, sure, run dotTrace and see whether it's acceptable.
@Reed Copsey: re non-standard, if a developer can't work out what ".Each" is doing then you've got more problems, heh. Hacking the language to make it nicer is one of the great joys of programming.
The lambda version is actually not slower. I just did a quick test and the delegate version is about 30% faster.
Here is the codez:
class Blah {
public void DoStuff() {
}
}
List<Blah> blahs = new List<Blah>();
DateTime start = DateTime.Now;
for(int i = 0; i < 30000000; i++) {
blahs.Add(new Blah());
}
TimeSpan elapsed = (DateTime.Now - start);
Console.WriteLine(string.Format(System.Globalization.CultureInfo.CurrentCulture, "Allocation - {0:00}:{1:00}:{2:00}.{3:000}",
elapsed.Hours,
elapsed.Minutes,
elapsed.Seconds,
elapsed.Milliseconds));
start = DateTime.Now;
foreach(var bl in blahs) {
bl.DoStuff();
}
elapsed = (DateTime.Now - start);
Console.WriteLine(string.Format(System.Globalization.CultureInfo.CurrentCulture, "foreach - {0:00}:{1:00}:{2:00}.{3:000}",
elapsed.Hours,
elapsed.Minutes,
elapsed.Seconds,
elapsed.Milliseconds));
start = DateTime.Now;
blahs.ForEach(bl=>bl.DoStuff());
elapsed = (DateTime.Now - start);
Console.WriteLine(string.Format(System.Globalization.CultureInfo.CurrentCulture, "lambda - {0:00}:{1:00}:{2:00}.{3:000}",
elapsed.Hours,
elapsed.Minutes,
elapsed.Seconds,
elapsed.Milliseconds));
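As an aside, DateTime.Now only has roughly 10-15 ms resolution; Stopwatch is the usual tool for this kind of micro-benchmark. The same measurement could be sketched like this (reusing blahs from the snippet above):

var sw = System.Diagnostics.Stopwatch.StartNew();
blahs.ForEach(bl => bl.DoStuff());
sw.Stop();
Console.WriteLine("lambda - {0}", sw.Elapsed);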
OK, so I've run more tests and here are the results.
The order of the execution (foreach, lambda or lambda, foreach) didn't make much difference; the lambda version was still faster:
foreach - 00:00:00.561
lambda - 00:00:00.389
lambda - 00:00:00.317
foreach - 00:00:00.337
The difference in performance is a lot less for arrays of classes. Here are the numbers for Blah[30000000]:
lambda - 00:00:00.317
foreach - 00:00:00.337
Here is the same test but Blah being a struct:
Blah[] version
lambda - 00:00:00.676
foreach - 00:00:00.437
List version:
lambda - 00:00:00.461
foreach - 00:00:00.391
Optimized build, Blah is a struct using an array.
lambda - 00:00:00.426
foreach - 00:00:00.079
Conclusion: There is no blanket answer for the performance of foreach vs lambda. The answer is: it depends. Here is a more scientific test for List<T>. As far as I can tell it's pretty damn efficient. If you are really concerned with performance, use a for(int i...) loop. For iterating over a collection of a thousand customer records (for example) it really doesn't matter all that much.
As far as deciding between which version to use I would put potential performance hit for lambda version way at the bottom.
Conclusion #2: for T[] (where T is a value type), the foreach loop is about 5 times faster in this test in an optimized build. That's the only significant difference between a Debug and a Release build. So there you go: for arrays of value types use foreach; for everything else it doesn't matter.
This question contains some useful discussion, as well as a link to an MSDN blog post, on the philosophical aspects of the topic.
I think extension methods are cool, but I think break and edit-and-continue are cooler.
I'd think the second form would be tougher to optimize, as there's no way for the compiler to unroll the loop any differently for this one call than it does for anybody else's call to the Each method.
Since it was asked, I'll elaborate. The method's implementation is quite liable to be compiled separately from the code that invokes it. This means that the compiler does not know exactly how many loops it is going to have to perform.
If you use the "foreach" form then that information may be available to the compiler when it is creating the code for the loop (it also may not be available, in which case no difference).
For example, if the compiler happens to know (from previous code in the same file) that the list has exactly 20 items in it, it can replace the entire loop with 20 references.
However, when the compiler creates code for the "Each" method off in its source file, it has no idea how big the caller's list is going to be. It has to support any size. The best it can do is try to find some kind of optimum unrolling for its CPU, and add extra code to loop through that and do a proper loop if it is too small for the unrolling. For a typical small loop this might even end up being slower. Of course for small loops you don't care as much....unless they happen to be inside a big loop.
As another poster mentioned, this is (and should be) a secondary concern. The important thing is which is easier to read and/or maintain, but I don't see a huge difference there.
I don't prefer either, because of what I consider to be an unneeded use of 'var'. I would write it as:
foreach(Zombie zombie in zombies){
}
But as to the Functional or foreach, for me I most definitely prefer foreach, because there doesn't seem to be a good reason for the latter.
