Time elapsed between two functions - c#

I need to find the time elapsed between two functions that perform the same operation but are implemented with different algorithms, so that I can tell which of the two is faster.
Here is my code snippet
Stopwatch sw = new Stopwatch();
sw.Start();
Console.WriteLine(sample.palindrome()); // algorithm 1
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // tried sw.Elapsed and sw.ElapsedTicks
sw.Reset(); //tried with and without reset
sw.Start();
Console.WriteLine(sample.isPalindrome()); //algorithm 2
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
This should, in principle, give the time taken by each algorithm, and it reports that algorithm 2 is faster. But the result changes if I swap the order of the calls: if I call algorithm 2 first and algorithm 1 second, it says algorithm 1 is faster.
I don't know what I am doing wrong.

I assume your palindrome methods run extremely fast in this example, so in order to get a meaningful result you will need to run each of them many times and then decide which is faster.
Something like this:
int numberOfIterations = 1000; // you decide on a reasonable threshold.
sample.palindrome(); // Call this the first time and avoid measuring the JIT compile time
Stopwatch sw = new Stopwatch();
sw.Start();
for(int i = 0 ; i < numberOfIterations ; i++)
{
sample.palindrome(); // why console write?
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // or sw.ElapsedMilliseconds/numberOfIterations
Now do the same for the second method and you will get more realistic results.
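For example (a sketch, reusing the same Stopwatch, sample object and numberOfIterations from above; Restart requires .NET 4 or later, otherwise use Reset followed by Start):
sw.Restart();
for (int i = 0; i < numberOfIterations; i++)
{
    sample.isPalindrome();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds); // compare with the first result (or divide by numberOfIterations)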

What you must do is execute both methods once before the actual measured runs so that the code gets JIT-compiled. Then test with multiple tries. Here is a code mockup.
The compiled code, in CIL format, is JIT-compiled into machine code upon first execution, so timing that first call is inaccurate. Let the code be JIT'd before actually testing it.
sample.palindrome();
sample.isPalindrome();
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
sample.palindrome();
Console.WriteLine("palindrome test #{0} result: {1}", i, sw.ElapsedMilliseconds);
}
sw.Stop();
Console.WriteLine("palindrome test Final result: {0}", sw.ElapsedMilliseconds);
sw.Restart();
for (int i = 0; i < 1000; i++)
{
sample.isPalindrome();
Console.WriteLine("isPalindrome test #{0} result: {1}", i, sw.ElapsedMilliseconds);
}
sw.Stop();
Console.WriteLine("isPalindrome test Final result: {0}", sw.ElapsedMilliseconds);
Read more about CIL and JIT

Unless you provide the code of palindrome and isPalindrome function along with the sample class, I can't do much except speculate.
The most likely reason I can guess is that both your functions use the same class variables and other data. So when you call the first function, it has to allocate memory for those variables, whereas by the time you call the other function, those one-time costs have already been paid. If it is not variables, it could be some other matter, but along the same lines.
I suggest that you call both functions twice and record the duration only the second time each one is called, so that any resources they need have already been allocated and there is less chance of something behind the scenes skewing the result.
Let me know if this works. This is mere speculation on my part, and I may be wrong.
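For illustration, a minimal sketch of that suggestion, reusing the names from the question (ticks are used because a single call may take well under a millisecond):
// First calls: absorb JIT compilation and any one-time allocations.
sample.palindrome();
sample.isPalindrome();

// Second calls: these are the ones worth timing.
Stopwatch sw = Stopwatch.StartNew();
sample.palindrome();
sw.Stop();
Console.WriteLine("palindrome, 2nd call: {0} ticks", sw.ElapsedTicks);

sw.Restart();
sample.isPalindrome();
sw.Stop();
Console.WriteLine("isPalindrome, 2nd call: {0} ticks", sw.ElapsedTicks);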


Parallel.ForEach search doesn't find the correct value

This is my first attempt at parallel programming.
I'm writing a test console app before using this in my real app and I can't seem to get it right. When I run this, the parallel search is always faster than the sequential one, but the parallel search never finds the correct value. What am I doing wrong?
I tried it without using a partitioner (just Parallel.For); it was slower than the sequential loop and gave the wrong number. I saw a Microsoft doc that said for simple computations, using Partitioner.Create can speed things up. So I tried that but still got the wrong values. Then I saw Interlocked, but I think I'm using it wrong.
Any help would be greatly appreciated
Random r = new Random();
Stopwatch timer = new Stopwatch();
do {
// Make and populate a list
List<short> test = new List<short>();
for (int x = 0; x <= 10000000; x++)
{
test.Add((short)(r.Next(short.MaxValue) * r.NextDouble()));
}
// Initialize result variables
short rMin = short.MaxValue;
short rMax = 0;
// Do min/max normal search
timer.Start();
foreach (var amp in test)
{
rMin = Math.Min(rMin, amp);
rMax = Math.Max(rMax, amp);
}
timer.Stop();
// Display results
Console.WriteLine($"rMin: {rMin} rMax: {rMax} Time: {timer.ElapsedMilliseconds}");
// Initialize parallel result variables
short pMin = short.MaxValue;
short pMax = 0;
// Create list partioner
var rangePortioner = Partitioner.Create(0, test.Count);
// Do min/max parallel search
timer.Restart();
Parallel.ForEach(rangePortioner, (range, loop) =>
{
short min = short.MaxValue;
short max = 0;
for (int i = range.Item1; i < range.Item2; i++)
{
min = Math.Min(min, test[i]);
max = Math.Max(max, test[i]);
}
_ = Interlocked.Exchange(ref Unsafe.As<short, int>(ref pMin), Math.Min(pMin, min));
_ = Interlocked.Exchange(ref Unsafe.As<short, int>(ref pMax), Math.Max(pMax, max));
});
timer.Stop();
// Display results
Console.WriteLine($"pMin: {pMin} pMax: {pMax} Time: {timer.ElapsedMilliseconds}");
Console.WriteLine("Press enter to run again; any other key to quit");
} while (Console.ReadKey().Key == ConsoleKey.Enter);
Sample output:
rMin: 0 rMax: 32746 Time: 106
pMin: 0 pMax: 32679 Time: 66
Press enter to run again; any other key to quit
The correct way to do a parallel search like this is to compute local values for each thread used, and then merge the values at the end. This ensures that synchronization is only needed at the final phase:
var items = Enumerable.Range(0, 10000).ToList();
int globalMin = int.MaxValue;
int globalMax = int.MinValue;
Parallel.ForEach<int, (int Min, int Max)>(
items,
() => (int.MaxValue, int.MinValue), // Create new min/max values for each thread used
(item, state, localMinMax) =>
{
var localMin = Math.Min(item, localMinMax.Min);
var localMax = Math.Max(item, localMinMax.Max);
return (localMin, localMax); // return the new min/max values for this thread
},
localMinMax => // called one last time for each thread used
{
lock(items) // Since this may run concurrently, synchronization is needed
{
globalMin = Math.Min(globalMin, localMinMax.Min);
globalMax = Math.Max(globalMax, localMinMax.Max);
}
});
As you can see, this is quite a bit more complex than a regular loop, and it is not even doing anything fancy like partitioning. An optimized solution would work over larger blocks to reduce overhead, but that is omitted for simplicity, and it looks like the OP is aware of such issues already.
Be aware that multi-threaded programming is difficult. While it is a great idea to try out such techniques in a playground rather than in a real program, I would still suggest starting by studying the potential dangers of thread safety; good resources about this are fairly easy to find.
Not all problems will be as obviously wrong as this one, and it is quite easy to cause issues that break once in a million runs, or only when the CPU load is high, or only on single-CPU systems, or issues that are only detected long after the code is put into production. It is good practice to be paranoid whenever multiple threads may read and write the same memory concurrently.
I would also recommend learning about immutable data types, and pure functions, since these are much safer and easier to reason about once multiple threads are involved.
Interlocked.Exchange only makes the exchange itself atomic; each surrounding Math.Min and Math.Max call can still race. You should compute min/max for every batch separately and then join the results.
Using low-lock techniques like the Interlocked class is tricky and advanced. Considering that your experience with multithreading is not extensive, I would say go with a simple and trusty lock:
object locker = new object();
//...
lock (locker)
{
pMin = Math.Min(pMin, min);
pMax = Math.Max(pMax, max);
}
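Put together with the partitioned loop from the question, that looks roughly like this (a sketch rather than the exact original code; it needs System.Collections.Concurrent and System.Threading.Tasks, and assumes test is the populated List<short>):
object locker = new object();
short pMin = short.MaxValue;
short pMax = 0;

var rangePartitioner = Partitioner.Create(0, test.Count);
Parallel.ForEach(rangePartitioner, range =>
{
    // Per-range local values: no synchronization needed inside the hot loop.
    short min = short.MaxValue;
    short max = 0;
    for (int i = range.Item1; i < range.Item2; i++)
    {
        min = Math.Min(min, test[i]);
        max = Math.Max(max, test[i]);
    }
    // Merge once per range under the lock.
    lock (locker)
    {
        pMin = Math.Min(pMin, min);
        pMax = Math.Max(pMax, max);
    }
});
Console.WriteLine($"pMin: {pMin} pMax: {pMax}");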

<= does work slower than <

I found a few questions on SO regarding performance comparison of < and <= (this one was extremely downvoted) and I always found the same answer that there is no performance difference between the two.
I wrote a program for the comparison (not so working fiddle...copy to your machine to run it) in which I created two loops for (int i = 0; i <= 1000000000; i++ ) and for (int i = 0; i < 1000000001; i++ ) in two different methods.
I ran each method 100 times, took the average of the elapsed times, and found that the loop with the <= operator ran slower than the one with the < operator.
I ran the program multiple times and <= always took more time to complete.
My results (in ms) were:
3018.73, 2778.22
2816.87, 2760.62
2859.02, 2797.05
My question is: If neither one is faster, why do I see differences in the results? Is there anything wrong with my program?
Benchmarking is a fine art. What you describe is not physically possible; the <= and < operators just generate different processor instructions that execute at exactly the same speed. I altered your program slightly, running DoIt ten times and dropping two zeros from the for() loop so I wouldn't have to wait forever:
x86 jitter:
Less Than Equal To Method Time Elapsed: 0.5
Less Than Method Time Elapsed: 0.42
Less Than Equal To Method Time Elapsed: 0.36
Less Than Method Time Elapsed: 0.46
Less Than Equal To Method Time Elapsed: 0.4
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.33
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.35
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.31
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.31
Less Than Method Time Elapsed: 0.32
x64 jitter:
Less Than Equal To Method Time Elapsed: 0.44
Less Than Method Time Elapsed: 0.4
Less Than Equal To Method Time Elapsed: 0.44
Less Than Method Time Elapsed: 0.45
Less Than Equal To Method Time Elapsed: 0.36
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.38
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.33
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.42
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.31
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.35
The only real signal you get from this is the slow execution of the first DoIt(), also visible in your test results; that's jitter (JIT compiler) overhead. And the most important signal: it is noisy. The median value for both loops is about equal, and the standard deviation is rather large.
Otherwise this is the kind of signal you always get when you micro-optimize: code execution is not very deterministic. Apart from .NET runtime overhead, which is usually easy to eliminate, your program is not the only one that runs on your machine. It has to share the processor; just the WriteLine() call already has an effect. It is executed by the conhost.exe process and runs concurrently with your test while your code has already entered the next for() loop. And everything else that happens on your machine, kernel code and interrupt handlers, also gets its turn.
Codegen can play a role as well; one thing you should try, for example, is simply swapping the two calls. The processor itself in general executes code very non-deterministically: the state of the processor caches and how much historical data has been gathered by the branch-prediction logic matter a great deal.
When I benchmark, I consider differences of 15% or less not statistically significant. Hunting down differences smaller than that is quite difficult; you have to study the machine code very carefully. Silly things like a branch target being misaligned or a variable not being stored in a processor register can have big effects on execution time. Not something you can ever fix; the jitter does not have enough knobs to tweak.
First of all, there are many, many reasons to see variations in benchmarks, even when they're done right. Here are a few that come to mind:
Your computer is running a lot of other processes at the same time, switching things in and out of context, and so on. The operating system is constantly receiving and handling interrupts from various I/O devices, etc. All of these things can cause the computer to pause for periods of time that dwarf the running time for the actual code you're testing.
The JIT process can detect when a function has run a certain number of times, and apply additional optimizations to it based on that information. Things like loop unrolling can drastically reduce the number of jumps that the program has to make, which are significantly more expensive than typical CPU operations. Re-optimizing the instructions takes time when it first happens, and then speeds things up after that point.
Your hardware is trying to make additional optimizations, like branch prediction, to ensure that its pipeline is being used as efficiently as possible. (If it guesses correctly, it can basically pretend that it's going to do the i++ while it waits to see whether the < or <= comparison finishes, and then discard the result if it finds out it was wrong.) The impact of these optimizations depends on a lot of factors, and is not really easy to predict.
Secondly, it's actually really difficult to do benchmarking well. Here's a benchmark template that I've been using for a while now. It's not perfect, but it's pretty good at ensuring that any emerging patterns are unlikely to be based on order of execution or random chance:
/* This is a benchmarking template I use in LINQPad when I want to do a
* quick performance test. Just give it a couple of actions to test and
* it will give you a pretty good idea of how long they take compared
* to one another. It's not perfect: You can expect a 3% error margin
* under ideal circumstances. But if you're not going to improve
* performance by more than 3%, you probably don't care anyway.*/
void Main()
{
// Enter setup code here
var actions = new[]
{
new TimedAction("control", () =>
{
int i = 0;
}),
new TimedAction("<", () =>
{
for (int i = 0; i < 1000001; i++)
{}
}),
new TimedAction("<=", () =>
{
for (int i = 0; i <= 1000000; i++)
{}
}),
new TimedAction(">", () =>
{
for (int i = 1000001; i > 0; i--)
{}
}),
new TimedAction(">=", () =>
{
for (int i = 1000000; i >= 0; i--)
{}
})
};
const int TimesToRun = 10000; // Tweak this as necessary
TimeActions(TimesToRun, actions);
}
#region timer helper methods
// Define other methods and classes here
public void TimeActions(int iterations, params TimedAction[] actions)
{
Stopwatch s = new Stopwatch();
int length = actions.Length;
var results = new ActionResult[actions.Length];
// Perform the actions in their initial order.
for(int i = 0; i < length; i++)
{
var action = actions[i];
var result = results[i] = new ActionResult{Message = action.Message};
// Do a dry run to get things ramped up/cached
result.DryRun1 = s.Time(action.Action, 10);
result.FullRun1 = s.Time(action.Action, iterations);
}
// Perform the actions in reverse order.
for(int i = length - 1; i >= 0; i--)
{
var action = actions[i];
var result = results[i];
// Do a dry run to get things ramped up/cached
result.DryRun2 = s.Time(action.Action, 10);
result.FullRun2 = s.Time(action.Action, iterations);
}
results.Dump();
}
public class ActionResult
{
public string Message {get;set;}
public double DryRun1 {get;set;}
public double DryRun2 {get;set;}
public double FullRun1 {get;set;}
public double FullRun2 {get;set;}
}
public class TimedAction
{
public TimedAction(string message, Action action)
{
Message = message;
Action = action;
}
public string Message {get;private set;}
public Action Action {get;private set;}
}
public static class StopwatchExtensions
{
public static double Time(this Stopwatch sw, Action action, int iterations)
{
sw.Restart();
for (int i = 0; i < iterations; i++)
{
action();
}
sw.Stop();
return sw.Elapsed.TotalMilliseconds;
}
}
#endregion
Here's the result I get when running this in LINQPad:
So you'll notice that there is some variation, particularly early on, but after running everything backwards and forwards enough times, there isn't a clear pattern emerging to show that one way is much faster or slower than another.

How can I adjust Stopwatch to get the same values every time?

How can I adjust Stopwatch to get the same values every time?
For this code for example:
Stopwatch w = new Stopwatch();
for (int i = 0; i < 40; i++)
{
w.Start();
test();
w.Stop();
Console.WriteLine(w.ElapsedMilliseconds);
w.Reset();
}
I get different value each time.
That's because of interruptions and how many resources your process/thread was allocated during execution. You can't do anything about it.
You should run your measurement multiple times and do some statistical analysis on the results: the average, the median, or e.g. the 75th percentile.
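A sketch of what that could look like, applied to the loop from the question (System.Linq is needed for Average; test() is the method being measured):
var samples = new List<long>();
Stopwatch w = new Stopwatch();
for (int i = 0; i < 40; i++)
{
    w.Restart();
    test();
    w.Stop();
    samples.Add(w.ElapsedMilliseconds);
}
samples.Sort();
long median = samples[samples.Count / 2];
double average = samples.Average();
Console.WriteLine("median: {0} ms, average: {1} ms", median, average);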

Problem in calculating time taken to execute a function

I am trying to find the time taken to run a function. I am doing it this way:
SomeFunc(input) {
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
//some operation on input
stopWatch.Stop();
long timeTaken = stopWatch.ElapsedMilliseconds;
}
Now the "some operation on input" as mentioned in the comments takes significant time based on the input to SomeFunc.
The problem is when I call SomeFunc multiple times from the main, I get timeTaken correctly only for the first time, and the rest of the time it is being assigned to 0. Is there a problem with the above code?
EDIT:
There is a UI with multiple text fields, and when a button is clicked, it is delegated to the SomeFunc. The SomeFunc makes some calculations based on the input (from the text fields) and displays the result on the UI. I am not allowed to share the code in "some operation on input" since I have signed an NDA. I can however answer your questions as to what I am trying to achieve there. Please help.
EDIT 2:
As it seems that I am getting a weird value when the function is called the first time, and as @Mike Bantegui mentioned, there must be JIT optimization going on, the only solution I can think of now (to not get zero as the execution time) is to display the time in nanoseconds. How is it possible to display the time in nanoseconds in C#?
Well, you aren't outputting that data anywhere. Ideally you would do it something more like this.
void SomeFunc(input)
{
// Do stuff
}
main()
{
List<long> results = new List<long>();
Stopwatch sw = new Stopwatch();
for(int i = 0; i < MAX_TRIES; i++)
{
sw.Start();
SomeFunc(arg);
sw.Stop();
results.Add(sw.ElapsedMilliseconds);
sw.Reset();
}
// Perform analysis on the results
}
In fact you are getting the wrong time for the first call and the correct time for the remaining ones. You can't rely on just the first call to measure the time. However, it seems that the operation is too fast, so you get 0 as the result. To measure it correctly, call the function 1000 times, for example, to see the average cost:
Stopwatch watch = Stopwatch.StartNew();
for (int index = 0; index < 1000; index++)
{
SomeFunc(input);
}
watch.Stop();
Console.WriteLine(watch.ElapsedMilliseconds);
Edit:
How is it possible to display the time in nano seconds
You can get watch.ElapsedTicks and then convert it to nanoseconds : (watch.ElapsedTicks / Stopwatch.Frequency) * 1000000000
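Note that as written this is integer arithmetic, so for short runs the division truncates to zero before the multiplication; doing the division in floating point (or multiplying first) avoids that. A small sketch:
// Stopwatch.Frequency is the number of Stopwatch ticks per second.
double nanoseconds = watch.ElapsedTicks * (1000000000.0 / Stopwatch.Frequency);
Console.WriteLine("{0:F0} ns", nanoseconds);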
As a simple example, consider the following (contrived) example:
double Mean(List<double> items)
{
double mu = 0;
foreach (double val in items)
mu += val;
return mu / items.Count;
}
We can time it like so:
void DoTimings(int n)
{
Stopwatch sw = new Stopwatch();
long time = 0;
double dummy = 0;
for (int i = 0; i < n; i++)
{
List<double> items = new List<double>();
// populate items with random numbers, excluded for brevity
sw.Restart(); // restart each iteration so ElapsedMilliseconds below is per-call
dummy += Mean(items);
sw.Stop();
time += sw.ElapsedMilliseconds;
}
Console.WriteLine(dummy);
Console.WriteLine(time / n);
}
This works if the list of items is actually very large. But if it's too small, we'll have to do multiple runs under one timing:
void DoTimings(int n)
{
Stopwatch sw = new Stopwatch();
long time = 0;
double dummy = 0;
List<double> items = new List<double>(); // Reuse same list
// populate items with random numbers, excluded for brevity
sw.Start();
for (int i = 0; i < n; i++)
{
dummy += Mean(items);
}
sw.Stop();
time = sw.ElapsedMilliseconds; // one timing around all n runs
Console.WriteLine(dummy);
Console.WriteLine(time / n);
}
In the second example, if the size of the list is too small, we can still get an accurate idea of how long it takes by simply running it for a large enough n. Each approach has its advantages and flaws, though.
However, before doing either of these I would do a "warm up" calculation before hand:
// Or something smaller, just enough to let the compiler JIT
double dummy = 0;
for (int i = 0; i < 10000; i++)
dummy += Mean(data);
Console.WriteLine(dummy);
// Now do the actual timing
An alternative to both would be to do what @Rig did in his answer and build up a list of results to do statistics on. In the first case, you'd simply build up a list of each individual time. In the second case, you would build up a list of the average timing over multiple runs, since the time for a single calculation could be smaller than the finest-grained time your Stopwatch can measure.
With all that said, I would say there is one very large caveat in all of this: Calculating the time it takes for something to run is very hard to do properly. It's admirable to want to do profiling, but you should do some research on SO and see what other people have done to do this properly. It's very easy to write a routine that times something badly, but very hard to do it right.

In .NET, which loop runs faster, 'for' or 'foreach'?

In C#/VB.NET/.NET, which loop runs faster, for or foreach?
Ever since I read, a long time ago, that a for loop works faster than a foreach loop, I assumed it held true for all collections: generic collections, all arrays, etc.
I scoured Google and found a few articles, but most of them are inconclusive (read comments on the articles) and open ended.
What would be ideal is to have each scenario listed and the best solution for the same.
For example (just an example of how it should be):
for iterating an array of 1000+ strings - for is better than foreach
for iterating over IList (non generic) strings - foreach is better than for
A few references found on the web for the same:
Original grand old article by Emmanuel Schanzer
CodeProject FOREACH Vs. FOR
Blog - To foreach or not to foreach, that is the question
ASP.NET forum - NET 1.1 C# for vs foreach
[Edit]
Apart from the readability aspect, I am really interested in facts and figures. There are applications where the last mile of squeezed-out performance optimization does matter.
Patrick Smacchia blogged about this last month, with the following conclusions:
for loops on List are a bit more than 2 times cheaper than foreach loops on List.
Looping on array is around 2 times cheaper than looping on List.
As a consequence, looping on array using for is 5 times cheaper than looping on List using foreach (which, I believe, is what we all do).
First, a counter-claim to Dmitry's (now deleted) answer. For arrays, the C# compiler emits largely the same code for foreach as it would for an equivalent for loop. That explains why for this benchmark, the results are basically the same:
using System;
using System.Diagnostics;
using System.Linq;
class Test
{
const int Size = 1000000;
const int Iterations = 10000;
static void Main()
{
double[] data = new double[Size];
Random rng = new Random();
for (int i=0; i < data.Length; i++)
{
data[i] = rng.NextDouble();
}
double correctSum = data.Sum();
Stopwatch sw = Stopwatch.StartNew();
for (int i=0; i < Iterations; i++)
{
double sum = 0;
for (int j=0; j < data.Length; j++)
{
sum += data[j];
}
if (Math.Abs(sum-correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("For loop: {0}", sw.ElapsedMilliseconds);
sw = Stopwatch.StartNew();
for (int i=0; i < Iterations; i++)
{
double sum = 0;
foreach (double d in data)
{
sum += d;
}
if (Math.Abs(sum-correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("Foreach loop: {0}", sw.ElapsedMilliseconds);
}
}
Results:
For loop: 16638
Foreach loop: 16529
Next, validation that Greg's point about the collection type being important - change the array to a List<double> in the above, and you get radically different results. Not only is it significantly slower in general, but foreach becomes significantly slower than accessing by index. Having said that, I would still almost always prefer foreach to a for loop where it makes the code simpler - because readability is almost always important, whereas micro-optimisation rarely is.
foreach loops demonstrate more specific intent than for loops.
Using a foreach loop demonstrates to anyone using your code that you are planning to do something to each member of a collection irrespective of its place in the collection. It also shows you aren't modifying the original collection (and throws an exception if you try to).
The other advantage of foreach is that it works on any IEnumerable, whereas for only makes sense for IList, where each element actually has an index.
However, if you need to use the index of an element, then of course you should be allowed to use a for loop. But if you don't need to use an index, having one is just cluttering your code.
There are no significant performance implications as far as I'm aware. At some stage in the future it might be easier to adapt code using foreach to run on multiple cores, but that's not something to worry about right now.
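To illustrate the IEnumerable point above: a lazily produced sequence has no count and no indexer, so an index-based for loop cannot walk it directly, while foreach can. A small sketch (the file name is made up; needs System.IO and System.Collections.Generic):
static IEnumerable<string> ReadLines(string path)
{
    using (var reader = new StreamReader(path))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return line; // streamed one line at a time, no index exists
    }
}

// foreach consumes any IEnumerable<T>; no Count or indexer is required.
foreach (string line in ReadLines("input.txt"))
    Console.WriteLine(line);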
Any time there's arguments over performance, you just need to write a small test so that you can use quantitative results to support your case.
Use the Stopwatch class and repeat something a few million times for accuracy. (This might be hard without a for loop.)
using System.Diagnostics;
//...
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < 1000000; i++)
{
//do whatever it is you need to time
}
sw.Stop();
//print out sw.ElapsedMilliseconds
Fingers crossed the results of this show that the difference is negligible, and you might as well just do whatever results in the most maintainable code
It will always be close. For an array, sometimes for is slightly quicker, but foreach is more expressive, and offers LINQ, etc. In general, stick with foreach.
Additionally, foreach may be optimised in some scenarios. For example, a linked list might be terrible by indexer, but it might be quick by foreach. Actually, the standard LinkedList<T> doesn't even offer an indexer for this reason.
My guess is that it will probably not be significant in 99% of the cases, so why would you choose the faster instead of the most appropriate (as in easiest to understand/maintain)?
There are very good reasons to prefer foreach loops over for loops. If you can use a foreach loop, your boss is right that you should.
However, not every iteration is simply going through a list in order one by one. If he is forbidding for, yes that is wrong.
If I were you, what I would do is turn all of your natural for loops into recursion. That'd teach him, and it's also a good mental exercise for you.
There is unlikely to be a huge performance difference between the two. As always, when faced with a "which is faster?" question, you should always think "I can measure this."
Write two loops that do the same thing in the body of the loop, execute and time them both, and see what the difference in speed is. Do this with both an almost-empty body, and a loop body similar to what you'll actually be doing. Also try it with the collection type that you're using, because different types of collections can have different performance characteristics.
Jeffrey Richter on TechEd 2005:
"I have come to learn over the years the C# compiler is basically a liar to me." .. "It lies about many things." .. "Like when you do a foreach loop..." .. "...that is one little line of code that you write, but what the C# compiler spits out in order to do that is phenomenal. It puts out a try/finally block in there, inside the finally block it casts your variable to an IDisposable interface, and if the cast succeeds it calls the Dispose method on it, inside the loop it calls the Current property and the MoveNext method repeatedly inside the loop, objects are being created underneath the covers. A lot of people use foreach because it's very easy coding, very easy to do.." .. "foreach is not very good in terms of performance, if you iterated over a collection instead by using square bracket notation, just doing index, that's just much faster, and it doesn't create any objects on the heap..."
On-Demand Webcast:
http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032292286&EventCategory=3&culture=en-US&CountryCode=US
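To make that concrete, a foreach over a List<int> compiles into approximately the following shape (a simplification; arrays and strings get specialized code that does not go through an enumerator):
List<int> numbers = new List<int> { 1, 2, 3 };

// Roughly what "foreach (int n in numbers) Console.WriteLine(n);" becomes:
List<int>.Enumerator e = numbers.GetEnumerator();
try
{
    while (e.MoveNext())   // MoveNext is called once per iteration
    {
        int n = e.Current; // Current is read once per iteration
        Console.WriteLine(n);
    }
}
finally
{
    // The try/finally and Dispose call Richter describes; for a plain IEnumerable
    // the compiler also inserts the cast to IDisposable he mentions.
    e.Dispose();
}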
You can read about it in Deep .NET - part 1 Iteration.
It covers the results (without the first initialization) from the .NET source code all the way down to the disassembly.
For example - array iteration with a foreach loop, list iteration with a foreach loop, and the end results (charts omitted here).
In cases where you work with a collection of objects, foreach is better, but if you increment a number, a for loop is better.
Note that in the last case, you could do something like:
foreach (int i in Enumerable.Range(1, 10))...
But it certainly doesn't perform better, it actually has worse performance compared to a for.
This should save you:
public IEnumerable<int> For(int start, int end, int step) {
int n = start;
while (n <= end) {
yield return n;
n += step;
}
}
Use:
foreach (int n in For(1, 200, 4)) {
Console.WriteLine(n);
}
For a bigger win, you could take three delegates as parameters (see the sketch below).
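One possible reading of that (purely illustrative, not from the original answer) is to let the caller supply the seed, the condition and the step as delegates:
public static IEnumerable<int> For(Func<int> seed, Func<int, bool> condition, Func<int, int> step)
{
    for (int n = seed(); condition(n); n = step(n))
        yield return n;
}

// Usage:
foreach (int n in For(() => 1, n => n <= 200, n => n + 4))
    Console.WriteLine(n);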
The differences in speed in a for- and a foreach-loop are tiny when you're looping through common structures like arrays, lists, etc, and doing a LINQ query over the collection is almost always slightly slower, although it's nicer to write! As the other posters said, go for expressiveness rather than a millisecond of extra performance.
What hasn't been said so far is that when a foreach loop is compiled, it is optimised by the compiler based on the collection it is iterating over. That means that when you're not sure which loop to use, you should use the foreach loop - it will generate the best loop for you when it gets compiled. It's more readable too.
Another key advantage with the foreach loop is that if your collection implementation changes (from an int array to a List<int> for example) then your foreach loop won't require any code changes:
foreach (int i in myCollection)
The above is the same no matter what type your collection is, whereas in your for loop, the following will not build if you changed myCollection from an array to a List:
for (int i = 0; i < myCollection.Length; i++)
This has the same two answers as most "which is faster" questions:
1) If you don't measure, you don't know.
2) (Because...) It depends.
It depends on how expensive the "MoveNext()" method is, relative to how expensive the "this[int index]" method is, for the type (or types) of IEnumerable that you will be iterating over.
The "foreach" keyword is shorthand for a series of operations - it calls GetEnumerator() once on the IEnumerable, it calls MoveNext() once per iteration, it does some type checking, and so on. The thing most likely to impact performance measurements is the cost of MoveNext() since that gets invoked O(N) times. Maybe it's cheap, but maybe it's not.
The "for" keyword looks more predictable, but inside most "for" loops you'll find something like "collection[index]". This looks like a simple array indexing operation, but it's actually a method call, whose cost depends entirely on the nature of the collection that you're iterating over. Probably it's cheap, but maybe it's not.
If the collection's underlying structure is essentially a linked list, MoveNext is dirt-cheap, but the indexer might have O(N) cost, making the true cost of a "for" loop O(N*N).
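A concrete illustration of that worst case, using LinkedList<T>, which deliberately has no indexer, so index-style access has to fall back to something like LINQ's ElementAt (a sketch; needs System.Linq):
var linked = new LinkedList<int>(Enumerable.Range(0, 50000));

// foreach: MoveNext just follows the next pointer, O(N) in total.
long sumForeach = 0;
foreach (int value in linked)
    sumForeach += value;

// "Index" access has to walk from the head every time: O(N) per element, O(N*N) in total.
long sumFor = 0;
for (int i = 0; i < linked.Count; i++)
    sumFor += linked.ElementAt(i);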
"Are there any arguments I could use to help me convince him the for loop is acceptable to use?"
No, if your boss is micromanaging to the level of telling you what programming language constructs to use, there's really nothing you can say. Sorry.
Every language construct has an appropriate time and place for usage. There is a reason the C# language has four separate iteration statements - each is there for a specific purpose and has an appropriate use.
I recommend sitting down with your boss and trying to rationally explain why a for loop has a purpose. There are times when a for iteration block more clearly describes an algorithm than a foreach iteration. When this is true, it is appropriate to use them.
I'd also point out to your boss - performance is not, and should not be, an issue in any practical way - it's more a matter of expressing the algorithm in a succinct, meaningful, maintainable manner. Micro-optimizations like this miss the point of performance optimization completely, since any real performance benefit will come from algorithmic redesign and refactoring, not loop restructuring.
If, after a rational discussion, there is still this authoritarian view, it is up to you as to how to proceed. Personally, I would not be happy working in an environment where rational thought is discouraged, and would consider moving to another position under a different employer. However, I strongly recommend discussion prior to getting upset - there may just be a simple misunderstanding in place.
It probably depends on the type of collection you are enumerating and the implementation of its indexer. In general though, using foreach is likely to be a better approach.
Also, it'll work with any IEnumerable - not just things with indexers.
Whether for is faster than foreach is really besides the point. I seriously doubt that choosing one over the other will make a significant impact on your performance.
The best way to optimize your application is through profiling of the actual code. That will pinpoint the methods that account for the most work/time. Optimize those first. If performance is still not acceptable, repeat the procedure.
As a general rule I would recommend to stay away from micro optimizations as they will rarely yield any significant gains. Only exception is when optimizing identified hot paths (i.e. if your profiling identifies a few highly used methods, it may make sense to optimize these extensively).
It is what you do inside the loop that affects performance, not the actual looping construct (assuming your case is non-trivial).
The two will run almost exactly the same way. Write some code to use both, then show him the IL. It should show comparable computations, meaning no difference in performance.
In most cases there's really no difference.
Typically you always have to use foreach when you don't have an explicit numerical index, and you always have to use for when you don't actually have an iterable collection (e.g. iterating over a two-dimensional array grid in an upper triangle). There are some cases where you have a choice.
One could argue that for loops can be a little more difficult to maintain if magic numbers start to appear in the code. You would be right to be annoyed at not being able to use a for loop and having to build a collection or use a lambda to build a subcollection instead, just because for loops have been banned.
It seems a bit strange to totally forbid the use of something like a for loop.
There's an interesting article here that covers a lot of the performance differences between the two loops.
I would say personally I find foreach a bit more readable over for loops but you should use the best for the job at hand and not have to write extra long code to include a foreach loop if a for loop is more appropriate.
You can really screw with his head and go for a List<T>.ForEach closure instead:
myList.ForEach(c => Console.WriteLine(c.ToString()));
for has simpler logic to implement, so it's faster than foreach.
Unless you're in a specific speed optimization process, I would say use whichever method produces the easiest to read and maintain code.
If an iterator is already setup, like with one of the collection classes, then the foreach is a good easy option. And if it's an integer range you're iterating, then for is probably cleaner.
Jeffrey Richter talked the performance difference between for and foreach on a recent podcast: http://pixel8.infragistics.com/shows/everything.aspx#Episode:9317
I tested it a while ago, with the result that a for loop is much faster than a foreach loop. The cause is simple: the foreach loop first needs to instantiate an IEnumerator for the collection.
I found the foreach loop faster when iterating through a List. See my test results below. In the code below I iterate an array of size 100, 10000 and 100000 separately, using for and foreach loops, to measure the time.
private static void MeasureTime()
{
var array = new int[10000];
var list = array.ToList();
Console.WriteLine("Array size: {0}", array.Length);
Console.WriteLine("Array For loop ......");
var stopWatch = Stopwatch.StartNew();
for (int i = 0; i < array.Length; i++)
{
Thread.Sleep(1);
}
stopWatch.Stop();
Console.WriteLine("Time take to run the for loop is {0} millisecond", stopWatch.ElapsedMilliseconds);
Console.WriteLine(" ");
Console.WriteLine("Array Foreach loop ......");
var stopWatch1 = Stopwatch.StartNew();
foreach (var item in array)
{
Thread.Sleep(1);
}
stopWatch1.Stop();
Console.WriteLine("Time take to run the foreach loop is {0} millisecond", stopWatch1.ElapsedMilliseconds);
Console.WriteLine(" ");
Console.WriteLine("List For loop ......");
var stopWatch2 = Stopwatch.StartNew();
for (int i = 0; i < list.Count; i++)
{
Thread.Sleep(1);
}
stopWatch2.Stop();
Console.WriteLine("Time take to run the for loop is {0} millisecond", stopWatch2.ElapsedMilliseconds);
Console.WriteLine(" ");
Console.WriteLine("List Foreach loop ......");
var stopWatch3 = Stopwatch.StartNew();
foreach (var item in list)
{
Thread.Sleep(1);
}
stopWatch3.Stop();
Console.WriteLine("Time take to run the foreach loop is {0} millisecond", stopWatch3.ElapsedMilliseconds);
}
UPDATED
After @jgauffin's suggestion I used @johnskeet's code and found that the for loop with an array is faster than the following:
Foreach loop with array.
For loop with list.
Foreach loop with list.
See my test results and code below,
private static void MeasureNewTime()
{
var data = new double[Size];
var rng = new Random();
for (int i = 0; i < data.Length; i++)
{
data[i] = rng.NextDouble();
}
Console.WriteLine("Length of array: {0}", data.Length);
Console.WriteLine("No. of iteration: {0}", Iterations);
Console.WriteLine(" ");
double correctSum = data.Sum();
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
double sum = 0;
for (int j = 0; j < data.Length; j++)
{
sum += data[j];
}
if (Math.Abs(sum - correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("For loop with Array: {0}", sw.ElapsedMilliseconds);
sw = Stopwatch.StartNew();
for (var i = 0; i < Iterations; i++)
{
double sum = 0;
foreach (double d in data)
{
sum += d;
}
if (Math.Abs(sum - correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("Foreach loop with Array: {0}", sw.ElapsedMilliseconds);
Console.WriteLine(" ");
var dataList = data.ToList();
sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
double sum = 0;
for (int j = 0; j < dataList.Count; j++)
{
sum += dataList[j];
}
if (Math.Abs(sum - correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("For loop with List: {0}", sw.ElapsedMilliseconds);
sw = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
double sum = 0;
foreach (double d in dataList)
{
sum += d;
}
if (Math.Abs(sum - correctSum) > 0.1)
{
Console.WriteLine("Summation failed");
return;
}
}
sw.Stop();
Console.WriteLine("Foreach loop with List: {0}", sw.ElapsedMilliseconds);
}
A powerful and precise way to measure time is by using the BenchmarkDotNet library.
In the following sample, I did a loop on 1,000,000,000 integer records on for/foreach and measured it with BenchmarkDotNet:
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
public class Program
{
public static void Main()
{
BenchmarkRunner.Run<LoopsBenchmarks>();
}
}
[MemoryDiagnoser]
public class LoopsBenchmarks
{
private List<int> arr = Enumerable.Range(1, 1_000_000_000).ToList();
[Benchmark]
public void For()
{
for (int i = 0; i < arr.Count; i++)
{
int item = arr[i];
}
}
[Benchmark]
public void Foreach()
{
foreach (int item in arr)
{
}
}
}
And here are the results:
Conclusion
In the example above we can see that for loop is slightly faster than foreach loop for lists. We can also see that both use the same memory allocation.
I wouldn't expect anyone to find a "huge" performance difference between the two.
I guess the answer depends on whether the collection you are trying to access has a faster indexer access implementation or a faster IEnumerator access implementation. Since IEnumerator often uses the indexer and just holds a copy of the current index position, I would expect enumerator access to be at least as slow as or slower than direct index access, but not by much.
Of course this answer doesn't account for any optimizations the compiler may implement.
