The potential impact of an if statement inside a loop - C#

Assume we have a Boolean variable check whose value is either always true or always false, i.e. it never changes during the loop. Would the first snippet below be more computationally efficient than the second?
First:
// The value of the check variable never changes inside the loop.
if (check)
{
    for (int i = 0; i < array.Length; i++)
    {
        sb.Append(String.Format("\"{0}\"", array[i].ToString()));
    }
}
else
{
    for (int i = 0; i < array.Length; i++)
    {
        sb.Append(String.Format("{0}", array[i].ToString()));
    }
}
Second:
for (int i = 0; i < array.Length; i++)
{
    if (check)
    {
        sb.Append(String.Format("\"{0}\"", array[i].ToString()));
    }
    else
    {
        sb.Append(String.Format("{0}", array[i].ToString()));
    }
}

There is no general answer for that, especially on modern CPUs.
In theory
Theoretically, the fewer branches you have in your code, the better. Since the second variant repeats the if branch once per loop iteration, it needs more processing time and is therefore less efficient.
In practice
Modern CPUs do what is called branch prediction. That means they try to figure out in advance whether a branch is taken. If the prediction is correct, the branch is free (free as in 0 CPU cycles); if it is incorrect, the CPU has to flush its execution pipeline and the branch is very expensive (as in many more than 1 CPU cycle).
In your specific examples you have two branch types, the ones for the loop and the ones for the if. Since your condition for the if does not change and the loop has a fixed number of executions, both branches are trivial to predict for the branch prediction engine and you can expect both alternatives to perform the same.
In coding practice
Performance considerations rarely have an impact in practice (especially in this case because of branch prediction), so you should choose the better coding style. And I would consider the second alternative to be better in this respect.
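If you want to verify the branch prediction claim on your own hardware, a minimal Stopwatch sketch along these lines will do (the array size and the value of check are arbitrary assumptions for the test):

using System;
using System.Diagnostics;
using System.Text;

class BranchBenchmark
{
    static void Main()
    {
        int[] array = new int[1000000]; // arbitrary size, just for the test
        bool check = true;              // never changes during the loops

        var sb = new StringBuilder();
        var sw = Stopwatch.StartNew();
        if (check)
        {
            for (int i = 0; i < array.Length; i++)
                sb.Append(String.Format("\"{0}\"", array[i]));
        }
        else
        {
            for (int i = 0; i < array.Length; i++)
                sb.Append(String.Format("{0}", array[i]));
        }
        sw.Stop();
        Console.WriteLine("Branch outside the loop: {0} ms", sw.ElapsedMilliseconds);

        sb.Clear();
        sw.Restart();
        for (int i = 0; i < array.Length; i++)
        {
            if (check)
                sb.Append(String.Format("\"{0}\"", array[i]));
            else
                sb.Append(String.Format("{0}", array[i]));
        }
        sw.Stop();
        Console.WriteLine("Branch inside the loop: {0} ms", sw.ElapsedMilliseconds);
    }
}

In practice the String.Format calls dominate the cost so heavily that any branching difference disappears into the noise, which only reinforces the point.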

Sefe's answer is very interesting, but if you know in advance that the value will not change throughout the loop, then you really shouldn't be checking within the loop.
It is preferable to separate the decision from the loop entirely:
var template = check ? "\"{0}\"" : "{0}";
for (int i = 0; i < array.Length; i++)
{
    sb.Append(String.Format(template, array[i].ToString()));
}

Also, the whole code could be refactored as:
Func<int, string> getText;
if (array.Length > 2) getText = i => $@"A - ""{array[i]}""";
else getText = i => $@"B - ""{array[i]}""";
for (int i = 0; i < array.Length; i++) sb.Append(getText(i));
That is, you define the whole Func<int, string> based on some boolean check, and later you run the whole for loop against the pre-defined Func<int, string>, which no longer needs a check. Also, you don't repeat yourself!
See how I've used interpolated strings, which are syntactic sugar over regular string concatenation, and verbatim strings to escape quotes using doubled quotes.
In summary:
You avoid repeating yourself.
You avoid many calls to string.Format.
You avoid many calls to string.ToString().
You reduce code lines.
Compiled code is simpler, because each delegate ends up as a call to a method on a compiler-generated class, and the remaining operations are just syntactic sugar over regular string concatenation.
I know...
I know that my answer doesn't address the question at all, but I wanted to give the OP some hints on how to optimize the code from a high-level point of view, instead of focusing on low-level details.

The first one is more efficient, but the compiler can potentially transform the second into the first during compilation (an optimization known as loop unswitching).

Related

In C families, in a loop why is "less than or equal to" more preferred over just "less than" symbol? [closed]

Why is it that in the C family of languages, when we use a counter for a loop, the most preferred comparison is less-than-or-equal-to (<=) or its inverse? Please take a look at these three pieces of code:
for (var i = 0; i <= 5; i++)
{ ... } // loop1
for (var i = 0; i < 6; i++)
{ ... } // loop2
for (var i = 0; i != 6; i++)
{ ... } // loop3
I understand why loop3 should be discouraged, since something in the loop body could set i past 5, causing an infinite loop. But loop1 and loop2 are essentially the same, and loop2 may even be better performance-wise if only one comparison is done. So why is loop1 preferred? Is it just convention, or is there something more to it?
Note: I have no formal training in programming. I just picked up C when I needed better tools to program 8051s rather than using assembly language.
For loops are often used to iterate over arrays, and the limit is the length of the array. Since arrays are zero-based, the last valid element is length-1. So the choice is between:
for (int i = 0; i < length; i++)
and
for (int i = 0; i <= length-1; i++)
The first is simpler, so it is preferred. As a result, this idiom has become common even when the limit is not an array size.
We don't use != because we occasionally write loops where the index increments by variable steps, and sometimes it skips over the limit. It's safer to use a < comparison so these don't turn into infinite loops.
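For example, a hypothetical loop stepping by 3 shows the danger:

// With a variable step, != can jump over the limit and never terminate:
for (int i = 0; i != 10; i += 3) // i takes 0, 3, 6, 9, 12, ... and never equals 10
{
    // infinite loop
}

// The < form exits safely even when the step overshoots the limit:
for (int i = 0; i < 10; i += 3)  // i takes 0, 3, 6, 9, then the loop exits
{
    // four iterations
}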
This is generally a matter of contextual semantics, making it easier for 'those who come after you' to maintain the code.
If you need 10 iterations of something, this is usually written as starting from 0 with an end condition using < or !=, because it puts the 10 literally in the code, showing clearly that 10 iterations were intended. The non-inclusive notation is also more practical for zero-based arrays like C-style strings. Notation with != is generally discouraged because it can cause endless loops when the indexer isn't a straightforward increment, an unexpected overflow occurs, or the like.
On the other hand, if you need a loop from and to a specific value, it's also clearer to have the end condition literally in the code; for example, with for(var i = 1; i <= 5; i++) it is clear right away that it's an inclusive loop from 1 to 5.
These are just the common reasons cited for using one notation or the other; most good programmers decide which to use by context and situation. There is no reason, performance or otherwise, to prefer one over the other.
Less than or equal to is not preferred. Traditionally, in C, less than was preferred; in C++, not equals is by far the most idiomatic. Thus, in C:
#define N 100
int array[N];
for ( int i = 0; i < N; i ++ ) {
// ...
}
and in C++, either:
int const N = 100;
int array[N];
for ( int i = 0; i != N; ++ i ) {
// ...
}
or even more often, if there is only one container and the index isn't needed otherwise:
for ( int* p = std::begin( array ); p != std::end( array ); ++ p ) {
// ...
}
(In pre-C++11, of course, we used our own implementations of begin and end to do the same thing.)
Other forms are generally not idiomatic, and are only used in exceptional cases.
Almost all for loops have exactly the same header except for the upper bound. It is a useful convention that helps with quick understanding and making fewer mistakes. (And the convention is <, not <=. Not sure where you got that from.)
Programs that do the same thing are not necessarily equal when it comes to code quality. Coding style has an objective component to it in that it helps humans deal with the complexity of the task.
Consistency is an important goal. If you have the choice, prefer the alternative that the majority of team members is using.

C# equivalent to Delphi High() and Low() functions for arrays that maintains performance?

In Delphi there are the Low() and High() functions that return the lowermost and uppermost index bounds for an array. This helps eliminate error-prone for loops that might fall victim to an insidious +1/-1 array boundary error, like using <= when you meant < in the loop's terminating condition.
Here's an example for the Low/High functions (in Delphi):
for i := Low(ary) to High(ary) do
For now I'm using a simple for loop statement in C#:
for (int i = 0; i < ary.Length; i++)
I know there is the Array method GetUpperBound(N), but that has its own liabilities, since I could introduce an error by accidentally using the wrong dimension index. I guess I could do something with enumerators, but I worry that there would be a significant performance cost when scanning a large array compared to using a for loop. Is there an equivalent to High/Low in C#?
The C# equivalents to the intrinsic low(ary) and high(ary) functions are, respectively, 0 and ary.Length - 1. That's because C# arrays are zero-based. I don't see any reason why the Length property of an array should have performance characteristics that differ from Delphi's high().
In terms of performance, the big difference between a Pascal for loop and that used by C derived languages concerns evaluation of the termination test. Consider a classic Pascal for loop:
for i := 0 to GetCount()-1 do
....
With a Pascal for loop, GetCount() is evaluated once only, at the beginning of the loop.
Now consider the equivalent in a C derived language:
for (int i=0; i<GetCount(); i++)
....
In this loop, GetCount() is evaluated every time round the loop. So in a language like C#, you would need a local variable to avoid calling that function over and over.
int N = GetCount();
for (int i=0; i<N; i++)
....
In the case of an array, if the optimiser could be certain that ary.Length did not mutate during the loop, the comparison could be optimised by the compiler. I personally do not know whether or not the C# optimiser does that.
Before you start re-writing your loops to use local variables containing the length of the array, check whether or not it makes any difference. Almost certainly it won't. The difference between Pascal and C-like for loops that I outline above is probably more significant in semantic terms than performance.
The language that I am particularly envious of is D. Here you can use a foreach loop that presents each item in an array as a reference, and thus allows you to modify the contents of the array:
void IncArray(int[] array, int increment) {
foreach (ref e; array) {
e += increment;
}
}
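Since foreach over a C# array yields copies of value-type elements, the closest idiomatic C# counterpart to that D code is an index-based for loop; here is a sketch of the same increment function:

static void IncArray(int[] array, int increment)
{
    // foreach (int e in array) would only give us copies of the elements,
    // so use the index to modify the array in place:
    for (int i = 0; i < array.Length; i++)
    {
        array[i] += increment;
    }
}

(In C# 7.3 and later you can get closer to the D version with foreach (ref int e in array.AsSpan()), which iterates by reference.)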
In C# the lower boundary is always zero, so the equivalent of Low(ary) is just 0.
For a single dimension array, the equivalent of High(ary) is ary.Length - 1. (For multi dimensional arrays you would need more than one loop anyway.)
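If you miss the Delphi names themselves, a minimal sketch of extension methods would work (these are hypothetical helpers, not part of the framework):

public static class ArrayExtensions
{
    // Delphi-style helpers; C# single-dimension arrays are always zero-based.
    public static int Low<T>(this T[] ary) { return 0; }
    public static int High<T>(this T[] ary) { return ary.Length - 1; }
}

// Usage:
// for (int i = ary.Low(); i <= ary.High(); i++) { ... }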
Can you just use a foreach statement instead? Like:
foreach (int i in ary)
{
    // ...
}
The GetLowerBound(int dimension) and GetUpperBound(int dimension) methods provide the start and end points for the user-specified dimension of any array. A second dimension would be noted with a 1 instead of a zero, and so on.
for (int i = ary.GetLowerBound(0); i <= ary.GetUpperBound(0); i++)
{
}

Performance difference between for loops with i<n versus i<=n

And to a lesser extent, what about a for loop with i<(n+1)? Would (n+1) get evaluated once at start of loop or at every iteration?
for(int i=0; i<(n+1); i++){
// Do something
}
for(int i=0; i<=n; i++){
//Do something
}
UPDATE:
As suggested by nearly everyone, I ran a simple test with the three loop variations.
It would likely depend on whether or not the value of n was changing over the course of the loop. If not, I would think that any modern compiler would cache the value of n+1, rather than calculating it each iteration. Of course that's not a guarantee, and with no optimizations, n+1 would be evaluated each time.
EDIT: To answer the title question, i < n vs. i <= n would make no noticeable difference (other than one extra iteration, assuming the n's were equal in both cases). CPUs have single ops for both comparisons.
I really doubt that would make a measurable difference in the loop execution. Particularly < vs <=. If you're really concerned, you should measure it.
This is compiler-specific; no language standard defines it, though most compilers will try to cache the value (if it can be proven invariant).
Trust the compiler. Even though the C# compiler + JITter is not as good as the best C++ compilers, it is still pretty good. Unless you determine with a profiler that it is causing a problem, you shouldn't spend cycles worrying about these sorts of micro-optimizations. Instead, write what logically matches what you are doing.
(n+1) will get evaluated on every iteration, assuming it doesn't get optimized out by the compiler.
As for performance issues - this is very easy to measure for yourself using the Stopwatch class.
I would guess that unless your i is very high, the differences would be negligible.
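A minimal Stopwatch sketch for such a measurement might look like this (the bound and the trivial loop body are placeholder assumptions):

using System;
using System.Diagnostics;

class LoopComparison
{
    static void Main()
    {
        int n = 100000000; // arbitrary bound for the test
        long sum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < (n + 1); i++) sum += i;
        sw.Stop();
        Console.WriteLine("i < (n+1): {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);

        sum = 0;
        sw.Restart();
        for (int i = 0; i <= n; i++) sum += i;
        sw.Stop();
        Console.WriteLine("i <= n: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
    }
}

Both loops run exactly n + 1 iterations; printing the sums keeps the work observable so the JIT cannot eliminate the loops entirely.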

Is It Ever Good Practice To Modify The Index Variable Inside a FOR Loop?

Given the code:
for (int i = 1; i <= 5; i++)
{
// Do work
}
Is it ever acceptable to change the value of i from within the loop?
For example:
for (int i = 1; i <= 5; i++)
{
if( i == 2)
{
i = 4;
}
// Do work
}
In my opinion, it is too confusing. Better use a while loop in such case.
It is acceptable; however, I personally think it should be avoided. Since it creates code that most developers won't expect, it makes the result much less maintainable.
Personally, if you need to do this, I would recommend switching to a while loop:
int i = 1;
while (i <= 5)
{
    if (i == 2)
        i = 4;
    // Do work
    ++i;
}
This, at least, warns people that you're using non-standard logic.
Alternatively, if you're just trying to skip elements, use continue:
for (int i = 1; i <= 5; i++)
{
if (i == 2 || i == 3)
continue;
}
While this is, technically, a few more operations than just setting i directly, it will make more sense to other developers...
YES
You see that frequently in apps that parse data. For example, suppose I'm scanning a binary file, and I'm basically looking for certain data structures. I might have code that does the following:
int SizeOfInterestingSpot = 4;
int InterestingSpotCount = 0;
for (int currentSpot = 0; currentSpot < endOfFile; currentSpot++)
{
    if (IsInterestingPart(file[currentSpot]))
    {
        InterestingSpotCount++;
        // I know that I have one of what I need, and further, that this structure
        // in the file takes SizeOfInterestingSpot bytes, so...
        currentSpot += SizeOfInterestingSpot - 1; // Skip the rest of that structure.
    }
}
An example would be deleting items which match some criteria:
for (int i = 0; i < array.size(); /*nothing*/)
{
if (pred(array[i]))
i++;
else
array.erase(array.begin() + i);
}
However a better idea would be using iterators:
for (auto it = array.begin(); it != array.end(); /*nothing*/)
{
if (pred(*it))
++it;
else
it = array.erase(it);
}
EDIT
Oh sorry, my code is C++, and the question is about C#. But nevertheless the idea is the same:
for (int i = 0; i < list.Count; /*nothing*/)
{
if (pred(list[i]))
i++;
else
list.RemoveAt(i);
}
And a better idea might be of course just
list.RemoveAll(x => !pred(x));
Or in a slightly more modern style,
list = list.Where(pred);
(here list should be IEnumerable<...>)
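One caveat worth adding here (my note, not the answerer's): Where is lazily evaluated, so if you need a concrete list again downstream, you have to materialize it:

using System.Linq;

// Where() is lazy; ToList() materializes the filtered sequence back into a List<T>.
var filtered = list.Where(pred).ToList();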
I would say yes, but only in specific cases.
It may be a bit confusing - if I set i=4 will it be incremented before the next iteration or not?
It may be a sign of a code smell - maybe you should do a LINQ query before and only process relevant elements?
Use with care!
Yes, it can be. As there is an enormous number of possible situations, you're bound to find one exception where it would be considered good practice.
But leaving the theoretical side of things aside, I'd say: no. Don't do it.
It gets quite complicated and hard to read and/or follow. I would rather see something like the continue statement, although I'm not a big fan of that either.
Personally, I would say that if the logic of the algorithm called for a normally-linearly-iterating behavior, but skipping or repeating certain iterations, go for it. However, I also agree with most people that this is not normal for loop usage, so were I in your shoes, I'd make sure to throw in a line or two of comments stating WHY this is happening.
A perfectly valid use case for such a thing might be to parse a roman numeral string. For each character index in the string, look at that character and the next one. If the next character's numeric value is greater than the current character, subtract the current character's value from the next one's, add the result to the total, and skip the next char by incrementing the current index. Otherwise, just add the current character's value to the running total and continue.
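A rough C# sketch of that algorithm (the value table and the method name are my own illustration):

using System.Collections.Generic;

static int ParseRoman(string s)
{
    var values = new Dictionary<char, int>
    {
        { 'I', 1 }, { 'V', 5 }, { 'X', 10 }, { 'L', 50 },
        { 'C', 100 }, { 'D', 500 }, { 'M', 1000 }
    };

    int total = 0;
    for (int i = 0; i < s.Length; i++)
    {
        // If the next character is worth more, this is a subtractive pair ("IV", "CM", ...):
        if (i + 1 < s.Length && values[s[i + 1]] > values[s[i]])
        {
            total += values[s[i + 1]] - values[s[i]];
            i++; // skip the next char by incrementing the index inside the loop
        }
        else
        {
            total += values[s[i]];
        }
    }
    return total; // ParseRoman("MCMXCIV") == 1994
}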
An example could be a for loop where, under a certain condition, you want to repeat the current iteration, go back to a previous iteration, or even skip a certain number of iterations (instead of a numbered continue).
But these cases are rare. And even for those cases, consider that the for loop is just one tool among while, do, and the others; so consider this bad practice and try to avoid it, as your code will also be less readable that way.
In conclusion: it's achievable (though not in a foreach), but strive to avoid it by using while, do, etc. instead.
Quoting Petar Minchev:
In my opinion, it is too confusing.
Better use a while loop in such case.
And I would add that by doing this, you must be aware of things that could happen, such as infinite loops, prematurely cancelled loops, weird variable values or arithmetic based on your index, and, mainly (not excluding any of the others), execution-flow problems based on your index and other variables modified by the loop.
But if you got such a case, go for it.

Is it more memory-efficient to assign a variable to an expression?

What's more efficient?
decimal value1, value2, formula;
This:
for (long i = 0; i < 1000000000000; i++)
{
value1 = getVal1fromSomeWhere();
value2 = getVal2fromSomeWhere();
SendResultToA( value1*value2 + value1/value2);
SendResultToB( value1*value2 + value1/value2);
}
Or this:
for (long i = 0; i < 1000000000000; i++)
{
value1 = getVal1fromSomeWhere();
value2 = getVal2fromSomeWhere();
formula = value1*value2 + value1/value2;
SendResultToA(formula);
SendResultToB(formula);
}
Intuitively I would go for the latter...
I guess there's a tradeoff between having an extra assignment at each iteration (the decimal formula) and performing the computation over and over with no extra variable...
EDIT :
Uhhh. God... Do I have to go through this each time I ask a question?
If I ask it, it is because YES it DOES MATTER to me, fellows.
Everybody does not live in a gentle non-memory-critical world, WAKE-UP !
This was just an overly simple example. I am doing MILLIONS of scientific computations and cloud multithreaded stuff, do not take me for a noob :-)
So YES, DEFINITELY every nanosecond counts.
PS : I almost regret C++ and pointers. Automatic Memory Management and GC's definitely made developers ignorant and lazy :-P
First of all, profile first and only do such micro-optimizations if necessary; otherwise, optimize for readability. In your case, I think the second one is easier to read.
And your statement that the second code has an additional assignment isn't true anyway. The result of your formula needs to be stored in a register in both versions.
The concept of the extra variable isn't valid once the code is compiled. For example in your case the compiler can store formula in the register where value1 or value2 was stored before, since their lifetimes don't overlap.
I wouldn't be surprised if the first one gets optimized into the second one. This optimization is called common subexpression elimination. But of course it's only possible if the expression is free of side effects.
And inspecting the IL isn't always enough to see what gets optimized; the JITter optimizes too. I had some code that looked quite ugly and slow in IL, but was very short in the finally generated x86 code. And when inspecting the machine code, you need to make sure it's actually optimized; for example, if you run under VS, even release code isn't fully optimized.
So my guess is that they are equally fast if the compiler can optimize them; otherwise the second one is faster, since it doesn't need to evaluate your formula twice.
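To illustrate the side-effect caveat, here is a hypothetical sketch where the two expressions are textually identical but must not be merged:

class SideEffectExample
{
    static int counter = 0;

    // Hypothetical method with a side effect: every call mutates state.
    static int Next() { return ++counter; }

    static void Main()
    {
        // Folding the two identical right-hand sides into a single computed
        // value would change the program's behavior, so common subexpression
        // elimination is not allowed here:
        int a = Next() * 2; // counter becomes 1, a == 2
        int b = Next() * 2; // counter becomes 2, b == 4
        System.Console.WriteLine("{0} {1}", a, b); // prints "2 4"
    }
}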
Unless you're doing this tens of thousands of times a second, it doesn't matter at all. Optimize towards readability and maintainability!
Edit: Haters gonna hate, okay fine, here you go. My code:
static void MethodA()
{
for (int i = 0; i < 1000; i++) {
var value1 = getVal1fromSomeWhere();
var value2 = getVal2fromSomeWhere();
SendResultToA(value1 * value2 + value1 / value2);
SendResultToB(value1 * value2 + value1 / value2);
}
}
static void MethodB()
{
for (int i = 0; i < 1000; i++) {
var value1 = getVal1fromSomeWhere();
var value2 = getVal2fromSomeWhere();
var formula = value1 * value2 + value1 / value2;
SendResultToA(formula);
SendResultToB(formula);
}
}
And the actual x86 assembly generated by both of them:
MethodA: http://pastie.org/1532794
MethodB: http://pastie.org/1532792
These listings are very long because getVal[1/2]fromSomewhere and SendResultTo[A/B], which I wired up to Random and Console.WriteLine, got inlined. We can see that neither the CLR nor the JITter is smart enough to avoid duplicating the previous calculation, so we spend an additional 318 bytes of x86 machine code doing the extra math.
However, keep this in mind - any gains you make from these kinds of optimizations are immediately made irrelevant by even a single extra page fault or disk read/write. These days, CPUs are rarely the bottleneck in most applications - I/O and memory are. Optimize toward spatial locality (i.e. use contiguous arrays so you hit fewer page faults) and toward reducing disk I/O and hard page faults (i.e. loading code you don't need requires the OS to fault it in).
To the extent that it might matter, I think you're right. And both are equally readable (arguably).
Remember, the number of loop iterations has nothing to do with the local memory requirements. You're only talking about a few extra bytes (and the value is going to be put on the stack for passage to the function anyway), whereas the cycles you save* by caching the result of the calculation grow significantly with the number of iterations.
* That is, provided that the compiler doesn't do this for you. It would be instructive to look at the IL generated in each case.
You'd have to disassemble the bytecode and/or benchmark to be sure, but I'd argue these would probably be the same, since it's trivial for the compiler to see that formula (in the loop scope) does not change and can quite easily be 'inlined' (substituted) directly.
EDIT: As user CodeInChaos correctly comments, disassembling the bytecode might not be enough, since it's possible the optimisation is only introduced after jitting.
