I've tested iterating List<string> vs IEnumerable<string> with for and foreach loops. Is it possible that the List is much faster?
These are two of the few links I could find that publicly state that iterating an IEnumerable performs better than iterating a List:
Link1
Link2
My test loads 10K lines from a text file that holds a list of URLs.
I first loaded them into a List, then assigned the List to an IEnumerable:
List<string> StrByLst = ...method to load records from the file.
IEnumerable<string> StrsByIE = StrByLst;
So each holds 10K items of type string.
Looping over each collection 100 times (so 1,000,000 item reads in total) resulted in:
List<string> is faster than IEnumerable<string> by an amazing 50x.
Is that predictable?
Update
This is the code that runs the tests:
string WorkDirtPath = HostingEnvironment.ApplicationPhysicalPath;
string fileName = "tst.txt";
string fileToLoad = Path.Combine(WorkDirtPath, fileName);
List<string> ListfromStream = new List<string>();
ListfromStream = PopulateListStrwithAnyFile(fileToLoad) ;
IEnumerable<string> IEnumFromStream = ListfromStream ;
string trslt = "";
Stopwatch SwFr = new Stopwatch();
Stopwatch SwFe = new Stopwatch();
string resultFrLst = "",resultFrIEnumrable, resultFe = "", Container = "";
SwFr.Start();
for (int itr = 0; itr < 100; itr++)
{
for (int i = 0; i < ListfromStream.Count(); i++)
{
Container = ListfromStream.ElementAt(i);
}
// the Stop() call was here; I was making changes, so that was my mistake.
}
SwFr.Stop();
resultFrLst = SwFr.Elapsed.ToString();
// forgot to do this Reset() at first, though the List is still faster (x56??)
SwFr.Reset();
SwFr.Start();
for(int itr = 0; itr<100; itr++)
{
for (int i = 0; i < IEnumFromStream.Count(); i++)
{
Container = IEnumFromStream.ElementAt(i);
}
}
SwFr.Stop();
resultFrIEnumrable = SwFr.Elapsed.ToString();
Update ... final
I moved the count outside of the for loops:
int counter = ..count for both the IEnumerable and the List
then passed counter (an int) as the total item count, as suggested by @ScottChamberlain.
I rechecked that everything is in place; now the results are about 5% faster for the IEnumerable.
So the conclusion is: choose by scenario / use case... there is no real performance difference at all.
You are doing something wrong.
The times that you get should be very close to each other, because you are running essentially the same code.
IEnumerable is just an interface, which List implements, so when you call some method on the IEnumerable reference it ends up calling the corresponding method of List.
There is no code implemented in the IEnumerable - this is what interfaces are - they only specify what functionality a class should have, but say nothing about how it's implemented.
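For illustration, a minimal sketch: both variables below refer to the same List<string> object; only the compile-time type differs, so iterating either reference walks the same list.
// Both variables point at the same object in memory.
List<string> list = new List<string> { "a", "b", "c" };
IEnumerable<string> asEnumerable = list;          // same object, different static type
Console.WriteLine(object.ReferenceEquals(list, asEnumerable)); // True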
You have a few problems with your test. One is the IEnumFromStream.Count() inside the for loop: every time it wants that value it must enumerate over the entire sequence to get the count, and the value is not cached between loops. Move that call outside of the for loop, save the result in an int, and use that value as the loop bound; you will see a shorter time for your IEnumerable.
IEnumFromStream.ElementAt(i) behaves similarly to Count(): every call must iterate over the sequence up to i (e.g. the first time it visits 0, the second time 0,1, the third 0,1,2, and so on), whereas List can jump directly to the index it needs. You should be working with the IEnumerator returned from GetEnumerator() instead.
IEnumerables and for loops don't mix well. Use the correct tool for the job: either call GetEnumerator() and work with that, or use a foreach loop.
Now, I know a lot of you may be saying "But it is an interface, it will just map the calls, so it should make no difference", but there is a key point: IEnumerable<T> does not have a Count() or ElementAt() method! Those methods are extension methods added by LINQ, and the LINQ code does not know that the underlying collection is a List, so it does what it knows the underlying object can do, and that is iterating over the sequence every time the method is called.
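A minimal sketch of that first fix, caching the count once (reusing the variable names from the question):
// Count() is evaluated once instead of on every iteration of the loop.
int count = IEnumFromStream.Count();
for (int i = 0; i < count; i++)
{
    // ElementAt(i) still walks the sequence from the start each time;
    // the IEnumerator version below avoids that as well.
    Container = IEnumFromStream.ElementAt(i);
}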
IEnumerable using IEnumerator
using(var enu = IEnumFromStream.GetEnumerator())
{
    // You have to call MoveNext() once before reading Current the first time;
    // this is done so you can have a nice clean while loop like this.
    while(enu.MoveNext())
    {
        Container = enu.Current;
    }
}
The above code is basically the same thing as
foreach(var enu in IEnumFromStream)
{
Container = enu;
}
The important thing to remember is that IEnumerables do not have a length; in fact, they can be infinitely long. There is a whole field of computer science concerned with detecting an infinitely long IEnumerable.
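For illustration, a minimal sketch of a length-less IEnumerable: it yields values forever, so anything that tries to count it will never finish.
// An infinite sequence: MoveNext() always succeeds, so Count() would never return.
static IEnumerable<int> Naturals()
{
    int n = 0;
    while (true)
        yield return n++;
}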
Based on the code you posted, I think the problem is with your use of the Stopwatch class.
You declare two of these, SwFr and SwFe, but only use the former. Because of this, the last call to SwFr.Elapsed will get the total amount of time across both sets of for loops.
If you want to reuse that object in this way, place a call to SwFr.Reset() right after resultFrLst = SwFr.Elapsed.ToString();.
Alternatively, you could use SwFe when running the second test.
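A minimal sketch of that placement, reusing the variable names from the question:
SwFr.Stop();
resultFrLst = SwFr.Elapsed.ToString();
SwFr.Reset();   // clear the first measurement before timing the second set of loops
SwFr.Start();
// ... second set of loops ...
SwFr.Stop();
resultFrIEnumrable = SwFr.Elapsed.ToString();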
Related
There's an exercise I need to do: given a List, I need to sort its contents using ONLY recursive methods (no while, do while, for, foreach).
So... I've been struggling for over 2 hours now and I don't know how to even begin.
The function must be
List<int> SortHighestToLowest (List<int> list) {
}
I THINK I should check whether the previous number is greater than the current number and so on, but what if the last number is greater than the first number in the list? That's why I'm having a headache.
I appreciate your help, thanks a lot.
[EDIT]
I delivered the exercise, but then the teacher said I shouldn't use external variables like I did here:
List<int> _tempList2 = new List<int>();
int _actualListIndex = 0;
int _actualMaxNumber = 0;
int _actualMaxNumberIndex = 0;
List<int> SortHighestToLowest(List<int> list)
{
    if (list.Count == 0)
        return _tempList2;

    if (_actualListIndex == 0)
        _actualMaxNumber = list[0];

    if (_actualListIndex < list.Count - 1)
    {
        _actualListIndex++;
        if (list[_actualListIndex] > _actualMaxNumber)
        {
            _actualMaxNumberIndex = _actualListIndex;
            _actualMaxNumber = list[_actualListIndex];
        }
        return SortHighestToLowest(list);
    }

    _tempList2.Add(_actualMaxNumber);
    list.RemoveAt(_actualMaxNumberIndex);
    _actualListIndex = 0;
    _actualMaxNumberIndex = 0;
    return SortHighestToLowest(list);
}
The exercise is done and I passed (thanks to other exercises as well), but I was wondering if there's a way of doing this without external variables and without using System.Linq the way String.Empty's response does (I'm just curious; the community helped me solve my issue and I'm thankful).
I am taking your instructions to the letter here.
Only recursive methods
No while, do while, for, foreach
Signature must be List<int> SortHighestToLowest(List<int> list)
Now, I do assume you may use at least the built-in properties and methods of the List<T> type. If not, you would have a hard time even reading the elements of your list.
That said, any call to Sort or OrderBy would be beside the point here, since it would render the recursive method useless.
I also assume it is okay to use other lists in the process, since you didn't mention anything in that regard.
With all that in mind, I came up with the piece below, making use of the Max extension method (from System.Linq) and List<T>'s Remove method, plus a new list of integers for each recursive call:
public static List<int> SortHighestToLowest(List<int> list)
{
    // Recursion breaker.
    if (list.Count <= 1)
        return list;

    // Remove the highest item.
    var max = list.Max();
    list.Remove(max);

    // Put the highest item in front of the recursively sorted remainder of the list.
    var result = new List<int> { max };
    result.AddRange(SortHighestToLowest(list));
    return result;
}
To solve this problem, try to solve smaller subproblems. Consider the following list:
[1,5,3,2]
Let's take the last element (2) out of the list and consider the rest as already sorted, which gives [1,3,5] and 2. The problem now reduces to inserting this 2 into its correct position. If we can insert it in the correct position, the array becomes sorted. This can be applied recursively.
Every recursive problem needs a base condition with respect to the hypothesis we make. For the first problem (Sort), the base condition is an array with a single element: a single-element array is always sorted.
For the second problem (Insert), the base condition is an empty array, or the last element of the array being less than or equal to the element to be inserted. In both cases the element is inserted at the end.
Algorithm
---------
Sort(list)
    if (list.count <= 1)
        return
    temp = last element of list
    remove the last element from list
    Sort(list)
    Insert(list, temp)

Insert(list, temp)
    if (list.count == 0 || list[n-1] <= temp)
        append temp to list
        return
    insert_temp = last element of list
    remove the last element from list
    Insert(list, temp)
    append insert_temp to list
In Insert, after the base condition, the method keeps calling itself recursively, temporarily removing the last element, until it finds the correct position for the element being inserted; the removed elements are then put back at the end as the recursion unwinds.
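A possible C# sketch of that algorithm, adapted to sort highest to lowest as the exercise requires (the comparison is flipped), with no loops, no LINQ, and no external variables; it assumes a second recursive helper method is allowed:
// Recursive insertion sort, descending. No loops, no LINQ, no fields outside the methods.
static List<int> SortHighestToLowest(List<int> list)
{
    if (list.Count <= 1)
        return list;                     // base case: 0 or 1 element is already sorted

    int last = list[list.Count - 1];     // take the last element out...
    list.RemoveAt(list.Count - 1);
    SortHighestToLowest(list);           // ...sort the remainder...
    Insert(list, last);                  // ...and insert it into its correct position
    return list;
}

static void Insert(List<int> list, int value)
{
    // Base case: empty list, or the last element is already >= value (descending),
    // so value belongs at the end.
    if (list.Count == 0 || list[list.Count - 1] >= value)
    {
        list.Add(value);
        return;
    }

    // Otherwise pull the last element off, insert the value into the rest,
    // then put the last element back at the end.
    int last = list[list.Count - 1];
    list.RemoveAt(list.Count - 1);
    Insert(list, value);
    list.Add(last);
}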
I'm trying to obtain the last 10 objects in an ArrayList.
Case: an ArrayList full of [ChartObjectsInt] and [ChartObjectsReal] objects with indexes from 0 to N. I want to obtain the last 10 objects (N-10 to N), and on these last 10 objects I want to call functions of the object, like ChartObjectsInt.getLabelName().
Can anyone help?
The code I've got so far:
private void getLastTenObjects()
{
    foreach (ChartObjectsInt chartRecords in arraylistMonitor)
    {
        for (int i = 0; i < 10; i++)
        {
            arraylistMonitor.IndexOf(i);
        }
    }
}
Why don't you use List<T> rather than ArrayList? If you do, it is much easier to get the last 10 elements from the list.
Example:
var lastTenProducts = products.OrderByDescending(p => p.ProductDate).Take(10);
// here, products is the List
If you don't want to use LINQ at all
for (var i = Math.Max(arraylistMonitor.Count - 10, 0); i < arraylistMonitor.Count; i++)
{
YourFunctionCallHere(arraylistMonitor[i]);
}
The above code will loop through the last 10 items of the ArrayList by setting i to the appropriate starting index - the Math.Max call there is in case the ArrayList has 9 or fewer elements in it.
If you are willing to use LINQ
var last10 = arraylistMonitor.Cast<object>().Reverse().Take(10);
will do what you want. You may also wish to add ToList after Take(10), depending on how you wish to consume last10.
First it casts the ArrayList to an IEnumerable<object>, then goes through it backwards until it has (up to) 10 items.
If you specifically want last10 to be an ArrayList (which I wouldn't recommend) then use:
var last10 = new ArrayList(arraylistMonitor.Cast<object>().Reverse().Take(10).ToList());
As others have already said, I would use List<T>, as ArrayList is effectively deprecated; it exists from a time when C# didn't have generics.
With that said, you could write a function that works for a list of any size and takes however many items you want, like so:
public List<T> GetLastX<T>(List<T> list, int amountToTake)
{
    // Skip everything before the last 'amountToTake' items (requires System.Linq).
    // Math.Max guards against asking for more items than the list contains.
    return list.Skip(Math.Max(list.Count - amountToTake, 0)).ToList();
}
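Hypothetical usage, assuming the monitor data is kept in a List<ChartObjectsInt> (the GetMonitorRecords source is made up for the example):
// Hypothetical data source; replace with however you populate your list.
List<ChartObjectsInt> monitorList = GetMonitorRecords();
List<ChartObjectsInt> lastTen = GetLastX(monitorList, 10);
foreach (ChartObjectsInt record in lastTen)
{
    Console.WriteLine(record.getLabelName()); // method mentioned in the question
}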
There are a number of different ways to accomplish the same simple loop through the items of a collection in C#.
This has made me wonder whether there is any reason, be it performance or ease of use, to use one over the others, or whether it just comes down to personal preference.
Take a simple list:
var myList = new List<MyObject>();
Let's assume the list is filled and we want to iterate over the items.
Method 1.
foreach(var item in myList)
{
//Do stuff
}
Method 2
myList.ForEach(ml =>
{
//Do stuff
});
Method 3
while (myList.MoveNext())
{
//Do stuff
}
Method 4
for (int i = 0; i < myList.Count; i++)
{
//Do stuff
}
What I was wondering is: do each of these compile down to the same thing? Is there a clear performance advantage to using one over the others?
Or is this just down to personal preference when coding?
Have I missed any?
The answer, the majority of the time, is that it does not matter. The number of items in the loop (even what one might consider a "large" number of items, say in the thousands) isn't going to have a noticeable impact.
Of course, if you identify this as a bottleneck in your situation, by all means, address it, but you have to identify the bottleneck first.
That said, there are a number of things to take into consideration with each approach, which I'll outline here.
Let's define a few things first:
All of the tests were run on .NET 4.0 on a 32-bit processor.
TimeSpan.TicksPerSecond on my machine = 10,000,000
All tests were performed in separate unit test sessions, not in the same one (so as not to possibly interfere with garbage collections, etc.)
Here's some helpers that are needed for each test:
The MyObject class:
public class MyObject
{
public int IntValue { get; set; }
public double DoubleValue { get; set; }
}
A method to create a List<T> of any length of MyObject instances:
public static List<MyObject> CreateList(int items)
{
// Validate parameters.
if (items < 0)
throw new ArgumentOutOfRangeException("items", items,
"The items parameter must be a non-negative value.");
// Return the items in a list.
return Enumerable.Range(0, items).
Select(i => new MyObject { IntValue = i, DoubleValue = i }).
ToList();
}
An action to perform for each item in the list (needed because Method 2 uses a delegate, and a call needs to be made to something to measure impact):
public static void MyObjectAction(MyObject obj, TextWriter writer)
{
// Validate parameters.
Debug.Assert(obj != null);
Debug.Assert(writer != null);
// Write.
writer.WriteLine("MyObject.IntValue: {0}, MyObject.DoubleValue: {1}",
obj.IntValue, obj.DoubleValue);
}
A method to create a TextWriter which writes to a null Stream (basically a data sink):
public static TextWriter CreateNullTextWriter()
{
// Create a stream writer off a null stream.
return new StreamWriter(Stream.Null);
}
And let's fix the number of items at one million (1,000,000, which should be sufficiently high to enforce that generally, these all have about the same performance impact):
// The number of items to test.
public const int ItemsToTest = 1000000;
Let's get into the methods:
Method 1: foreach
The following code:
foreach(var item in myList)
{
//Do stuff
}
Compiles down into the following:
using (var enumerator = myList.GetEnumerator())
    while (enumerator.MoveNext())
    {
        var item = enumerator.Current;
        // Do stuff.
    }
There's quite a bit going on there. You have the method calls (and it may or may not be against the IEnumerator<T> or IEnumerator interfaces, as the compiler respects duck-typing in this case) and your // Do stuff is hoisted into that while structure.
Here's the test to measure the performance:
[TestMethod]
public void TestForEachKeyword()
{
// Create the list.
List<MyObject> list = CreateList(ItemsToTest);
// Create the writer.
using (TextWriter writer = CreateNullTextWriter())
{
// Create the stopwatch.
Stopwatch s = Stopwatch.StartNew();
// Cycle through the items.
foreach (var item in list)
{
// Write the values.
MyObjectAction(item, writer);
}
// Write out the number of ticks.
Debug.WriteLine("Foreach loop ticks: {0}", s.ElapsedTicks);
}
}
The output:
Foreach loop ticks: 3210872841
Method 2: .ForEach method on List<T>
The code for the .ForEach method on List<T> looks something like this:
public void ForEach(Action<T> action)
{
// Error handling omitted
// Cycle through the items, perform action.
for (int index = 0; index < Count; ++index)
{
// Perform action.
action(this[index]);
}
}
Note that this is functionally equivalent to Method 4, with one exception: the code that is hoisted into the for loop is passed as a delegate. This requires an indirect call to get to the code that needs to be executed. While the performance of delegates has improved from .NET 3.0 onwards, that overhead is there.
However, it's negligible. The test to measure the performance:
[TestMethod]
public void TestForEachMethod()
{
// Create the list.
List<MyObject> list = CreateList(ItemsToTest);
// Create the writer.
using (TextWriter writer = CreateNullTextWriter())
{
// Create the stopwatch.
Stopwatch s = Stopwatch.StartNew();
// Cycle through the items.
list.ForEach(i => MyObjectAction(i, writer));
// Write out the number of ticks.
Debug.WriteLine("ForEach method ticks: {0}", s.ElapsedTicks);
}
}
The output:
ForEach method ticks: 3135132204
That's actually ~7.5 seconds faster than using the foreach loop. Not completely surprising, given that it uses direct array access instead of using IEnumerable<T>.
Remember though, this translates to 0.0000075740637 seconds per item being saved. That's not worth it for small lists of items.
Method 3: while (myList.MoveNext())
As shown in Method 1, this is exactly what the compiler does (with the addition of the using statement, which is good practice). You're not gaining anything here by writing out yourself the code that the compiler would otherwise generate.
For kicks, let's do it anyways:
[TestMethod]
public void TestEnumerator()
{
// Create the list.
List<MyObject> list = CreateList(ItemsToTest);
// Create the writer.
using (TextWriter writer = CreateNullTextWriter())
// Get the enumerator.
using (IEnumerator<MyObject> enumerator = list.GetEnumerator())
{
// Create the stopwatch.
Stopwatch s = Stopwatch.StartNew();
// Cycle through the items.
while (enumerator.MoveNext())
{
// Write.
MyObjectAction(enumerator.Current, writer);
}
// Write out the number of ticks.
Debug.WriteLine("Enumerator loop ticks: {0}", s.ElapsedTicks);
}
}
The output:
Enumerator loop ticks: 3241289895
Method 4: for
In this particular case, you're going to gain some speed, as the list indexer is going directly to the underlying array to perform the lookup (that's an implementation detail, BTW, there's nothing to say that it can't be a tree structure backing the List<T> up).
[TestMethod]
public void TestListIndexer()
{
// Create the list.
List<MyObject> list = CreateList(ItemsToTest);
// Create the writer.
using (TextWriter writer = CreateNullTextWriter())
{
// Create the stopwatch.
Stopwatch s = Stopwatch.StartNew();
// Cycle by index.
for (int i = 0; i < list.Count; ++i)
{
// Get the item.
MyObject item = list[i];
// Perform the action.
MyObjectAction(item, writer);
}
// Write out the number of ticks.
Debug.WriteLine("List indexer loop ticks: {0}", s.ElapsedTicks);
}
}
The output:
List indexer loop ticks: 3039649305
However, the place where this can make a difference is arrays. Arrays can be unrolled by the compiler to process multiple items at a time.
Instead of doing ten iterations of one item in a ten-item loop, the compiler can unroll this into five iterations of two items.
However, I'm not positive that this is actually happening here (I'd have to look at the IL and at the compiled output of that IL).
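To illustrate what that unrolling would mean, here is a conceptual sketch only (using the same array, writer, and MyObjectAction as the tests above); it is not what the test below measures:
// Conceptual illustration of unrolling by a factor of two: the loop body
// handles two array elements per iteration, halving the loop overhead.
for (int i = 0; i < array.Length - 1; i += 2)
{
    MyObjectAction(array[i], writer);
    MyObjectAction(array[i + 1], writer);
}
// Handle a possible leftover element when the length is odd.
if (array.Length % 2 == 1)
    MyObjectAction(array[array.Length - 1], writer);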
Here's the test:
[TestMethod]
public void TestArray()
{
// Create the list.
MyObject[] array = CreateList(ItemsToTest).ToArray();
// Create the writer.
using (TextWriter writer = CreateNullTextWriter())
{
// Create the stopwatch.
Stopwatch s = Stopwatch.StartNew();
// Cycle by index.
for (int i = 0; i < array.Length; ++i)
{
// Get the item.
MyObject item = array[i];
// Perform the action.
MyObjectAction(item, writer);
}
// Write out the number of ticks.
Debug.WriteLine("Enumerator loop ticks: {0}", s.ElapsedTicks);
}
}
The output:
Array loop ticks: 3102911316
It should be noted that, out of the box, ReSharper offers a refactoring suggestion to change the above for statements to foreach statements. That's not to say this is right, but the aim is to reduce the amount of technical debt in the code.
TL;DR
You really shouldn't be concerned with the performance of these things, unless testing in your situation shows that you have a real bottleneck (and you'll have to have massive numbers of items to have an impact).
Generally, you should go for what's most maintainable, in which case, Method 1 (foreach) is the way to go.
In regards to the final bit of the question, "Have I missed any?": yes, and I feel I would be remiss not to mention this even though the question is quite old. While those four ways of doing it will execute in relatively the same amount of time, there is a way not shown above that runs faster than all of them; quite significantly, in fact, as the number of items in the iterated list increases.
It is exactly the same as the last method, but instead of reading .Count in the condition check of the loop, you assign this value to a variable before setting up the loop and use that instead. Which leaves you with something like this:
var countVar = list.Count;
for(int i = 0; i < countVar; i++)
{
//loop logic
}
By doing it this way, you're only looking up a variable value at each iteration, rather than resolving the Count or Length properties, which is considerably less efficient.
I would suggest an even better and not well-known approach for faster loop iteration over a list. I would recommend you first read about Span<T>. Note that you can use this if you are on .NET 5 or later.
List<MyObject> list = new();
foreach (MyObject item in CollectionsMarshal.AsSpan(list))
{
// Do something
}
Be aware of the caveats:
The CollectionsMarshal.AsSpan method is unsafe and should be used only if you know what you are doing. It returns a Span<T> over the private backing array of the List<T>. Iterating over a Span<T> is fast because the JIT uses the same tricks as for optimizing arrays. However, when you use this method, the list is not checked for modification during the enumeration.
This is a more detailed explanation of what it does behind the scenes and more, super interesting!
I am having an issue with the performance of a LINQ query, so I created a small simplified example to demonstrate it below. The code takes a random list of small integers and returns the list partitioned into several smaller lists, each of which totals 10 or less.
The problem is that (as I've written it) the code takes exponentially longer with N, even though this should only be an O(N) problem. With N = 2500, the code takes over 10 seconds to run on my PC.
I would greatly appreciate it if someone could explain what is going on. Thanks, Mark.
int N = 250;
Random r = new Random();
var work = Enumerable.Range(1,N).Select(x => r.Next(0, 6)).ToList();
var chunks = new List<List<int>>();
// work.Dump("All the work."); // LINQPad Print
var workEnumerable = work.AsEnumerable();
Stopwatch sw = Stopwatch.StartNew();
while(workEnumerable.Any()) // or .FirstOrDefault() != null
{
int soFar = 0;
var chunk = workEnumerable.TakeWhile( x =>
{
soFar += x;
return (soFar <= 10);
}).ToList();
chunks.Add(chunk); // Commented out makes no difference.
workEnumerable = workEnumerable.Skip(chunk.Count); // <== SUSPECT
}
sw.Stop();
// chunks.Dump("Work Chunks."); // LINQPad Print
sw.Elapsed.Dump("Time elapsed.");
What .Skip() does is create a new IEnumerable that loops over the source and only begins yielding results after the first N elements. You chain who knows how many of these after each other. Every time you call .Any(), you need to loop over all the previously skipped elements again.
Generally speaking, it's a bad idea to set up very complicated operator chains in LINQ and enumerating them repeatedly. Also, since LINQ is a querying API, methods like Skip() are a bad choice when what you're trying to achieve amounts to modifying a data structure.
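For illustration, a minimal index-based rewrite of the question's chunking loop that walks the source exactly once; it assumes, as in the question's data, that every single item fits in a chunk on its own:
// Chunk by running sum <= 10 using a plain index instead of chained
// Skip()/Any(), so the source list is enumerated exactly once.
var chunks = new List<List<int>>();
int pos = 0;
while (pos < work.Count)
{
    int soFar = 0;
    var chunk = new List<int>();
    // Assumes each individual item is <= 10 (true for r.Next(0, 6));
    // an oversized item would otherwise need special handling.
    while (pos < work.Count && soFar + work[pos] <= 10)
    {
        soFar += work[pos];
        chunk.Add(work[pos]);
        pos++;
    }
    chunks.Add(chunk);
}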
You effectively keep chaining Skip() onto the same enumerable. In a list of 250, the last chunk will be created from a lazy enumerable with ~25 'Skip' enumerator classes on the front.
You would find things become a lot faster already if you did:
workEnumerable = workEnumerable.Skip(chunk.Count).ToList();
However, I think the whole approach could be altered.
How about using standard LINQ to achieve the same:
See it live on http://ideone.com/JIzpml
using System;
using System.Collections.Generic;
using System.Linq;
public class Program
{
private readonly static Random r = new Random();
public static void Main(string[] args)
{
int N = 250;
var work = Enumerable.Range(1,N).Select(x => r.Next(0, 6)).ToList();
var chunks = work.Select((o,i) => new { Index=i, Obj=o })
.GroupBy(e => e.Index / 10)
.Select(group => group.Select(e => e.Obj).ToList())
.ToList();
foreach(var chunk in chunks)
Console.WriteLine("Chunk: {0}", string.Join(", ", chunk.Select(i => i.ToString()).ToArray()));
}
}
The Skip() method and others like it basically create a placeholder object, implementing IEnumerable, that references its parent enumerable and contains the logic to perform the skipping. Skips in loops, therefore, are non-performant, because instead of throwing away elements of the enumerable, like you think they are, they add a new layer of logic that's lazily executed when you actually need the first element after all the ones you've skipped.
You can get around this by calling ToList() or ToArray(). This forces "eager" evaluation of the Skip() method, and really does get rid of the elements you're skipping from the new collection you will be enumerating. That comes at an increased memory cost, and requires all of the elements to be known (so if you're running this on an IEnumerable that represents an infinite series, good luck).
The second option is to not use LINQ, and instead use the IEnumerable implementation itself to get and control an IEnumerator. Then, instead of Skip(), simply call MoveNext() the necessary number of times.
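A minimal sketch of that idea (the source sequence and skip count here are illustrative):
// Example data (illustrative); in the question this would be 'work'.
IEnumerable<int> source = Enumerable.Range(0, 20);

// Advance the enumerator directly instead of layering Skip() wrappers.
using (IEnumerator<int> e = source.GetEnumerator())
{
    int toSkip = 5;                      // however many items have already been consumed
    while (toSkip > 0 && e.MoveNext())   // discard the skipped items
        toSkip--;

    while (e.MoveNext())                 // continue from the first remaining item
        Console.WriteLine(e.Current);
}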
Suppose I have the following code:
List<SomeObject> someObjects = ReturnListWithThousandsOfObjects();
foreach(SomeObject someObject in someObjects)
{
DoSomething.With(someObject);
}
And also suppose that after a minute of running I put a breakpoint on DoSomething.With(someObject);.
The debugger breaks for me just fine. But now I want to know what point am I at in my iteration of the list (assume the list is unordered/has no key).
Is there a way for the debugger to say "the foreach has run 532 of 2321 iterations"?
As a debugging one-off, isn't there an IndexOf method?
i.e.
QuickWatch - someObjects.IndexOf(someObject);
Added - sorry if that was a bit brief.
As pointed out by Guffa, this will work best if the values are unique, or the default equality comparer (EqualityComparer<T>.Default) uses a unique value (such as a custom GetHashCode/Equals override).
public class ATest
{
public int number { get; set; }
public int boo { get; set; }
public ATest()
{
}
}
protected void Go()
{
List<ATest> list = new List<ATest>();
foreach(var i in Enumerable.Range(0,30)) {
foreach(var j in Enumerable.Range(0,100)) {
list.Add(new ATest() { number = i, boo = j });
}
}
var o = 0; // only for proving the concept
foreach (ATest aTest in list)
{
    DoSomething(aTest);
    // proof that this does work in this example
    System.Diagnostics.Debug.Assert(o == list.IndexOf(aTest));
    o++;
}
}
This is for Visual Studio, but other IDEs should have something similar:
When you set a breakpoint, you can right-click it and go to "Hit Count". You can set up some parameters there ("greater than or equal to" 0 will make it work like a regular breakpoint, as would "break always"). The interesting part is the Hit Count field (which can be reset).
This can solve the "number of iterations" part. For the total number, I'm afraid you're going to have to find it yourself, assuming the collection you use has such a number readily available.
You can also set the breakpoint to fire after a very large number of hits, say a few thousand or million (I don't know what the limit is).
Then, when the "real" breakpoint fires, the one where you want to know how many times the original breakpoint was hit, you can just examine it and reset it if needed.
In this case, if you really wanted to know the count, wouldn't you use a for loop?
List<SomeObject> someObjects = ReturnListWithThousandsOfObjects();
for (int i = 0; i < someObjects.Count; i++)
{
    Console.WriteLine(string.Format("{0} of {1} iterations", i + 1, someObjects.Count));
    DoSomething.With(someObjects[i]);
}
There will be no real difference in performance between the foreach and the for loop, so the for loop is a better alternative for what you want to achieve.
Unless you manually keep a count in a variable, you won't be able to easily determine this. As the loop iterates across your collection, it just uses the enumerator to grab the next element until there are no more elements, at which point it exits.
To manually keep a count you would just do:
int count = 0;
List<SomeObject> someObjects = ReturnListWithThousandsOfObjects();
foreach(SomeObject someObject in someObjects)
{
count++;
DoSomething.With(someObject);
}
Now, at any point, you can pause execution and see which iteration you are on.
Create an extension method on List<T> which accepts an Action fold and an Action sideFold, letting you accumulate side effects such as checking for the existence of a debugger and breaking on accumulated state.
Yes it can. Here's how (in VS 2017):
Set a breakpoint inside the foreach loop.
Right-click the breakpoint and select 'Actions'.
In the text box, enter the following: $FUNCTION {list.IndexOf(item)}, where 'list' is the name of your list and 'item' is the current loop variable.
Continue running the code and watch the Output window.