Generic list FindAll() vs. foreach - C#

I'm looking through a generic list to find items based on a certain parameter.
In general, what would be the best and fastest implementation?
1. Looping through each item in the list, saving each match to a new list, and returning that:
foreach (string s in list)
{
    if (s == "match")
    {
        newList.Add(s);
    }
}
return newList;
Or
2. Using the FindAll method and passing it a delegate:
newList = list.FindAll(delegate(string s) { return s == "match"; });
Don't they both run in ~ O(N)? What would be the best practice here?
Regards,
Jonathan

You should definitely use the FindAll method, or the equivalent LINQ method. Also, consider using the more concise lambda instead of your delegate if you can (requires C# 3.0):
var list = new List<string>();
var newList = list.FindAll(s => s.Equals("match"));

I would use the FindAll method in this case, as it is more concise and, IMO, more readable.
You are right that they will both perform in O(N) time, although the foreach statement should be slightly faster given it doesn't have to perform a delegate invocation (delegates incur a slight overhead compared to directly calling methods).
I have to stress how insignificant this difference is: it's more than likely never going to make a difference unless you are doing a massive number of operations on a massive list.
As always, test to see where the bottlenecks are and act appropriately.

Jonathan,
A good answer to this can be found in chapter 5 (performance considerations) of LINQ in Action.
They measure a foreach search executed about 50 times, which comes out to foreach = 68 ms per cycle vs. List.FindAll = 62 ms per cycle. Really, it would probably be in your interest to just create a test and see for yourself.
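If you do want to test it yourself, a minimal sketch using System.Diagnostics.Stopwatch might look like this (the list size and contents here are placeholders, not the book's benchmark):
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class FindAllBenchmark
{
    static void Main()
    {
        // Placeholder data: one match every ten items.
        List<string> list = Enumerable.Range(0, 1000000)
            .Select(i => i % 10 == 0 ? "match" : i.ToString())
            .ToList();

        var sw = Stopwatch.StartNew();
        var byLoop = new List<string>();
        foreach (string s in list)
        {
            if (s == "match")
            {
                byLoop.Add(s);
            }
        }
        sw.Stop();
        Console.WriteLine("foreach: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        var byFindAll = list.FindAll(s => s == "match");
        sw.Stop();
        Console.WriteLine("FindAll: " + sw.ElapsedMilliseconds + " ms");
    }
}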

List.FindAll is O(n) and will search the entire list.
If you want to run your own iterator with foreach, I'd recommend using the yield statement and returning an IEnumerable if possible. This way, if you end up needing only one element of your collection, it will be quicker, since the caller can stop without exhausting the entire collection.
Otherwise, stick to the BCL interface.
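For example, a rough sketch of such a yield-based iterator (FilterMatches is a made-up name; it needs System.Collections.Generic, and the FirstOrDefault call needs System.Linq):
// Lazily yields matches one at a time; a caller that only needs the first
// match can stop without scanning the rest of the list.
public static IEnumerable<string> FilterMatches(IEnumerable<string> source)
{
    foreach (string s in source)
    {
        if (s == "match")
        {
            yield return s;
        }
    }
}

// Only iterates until the first match is found:
string first = FilterMatches(list).FirstOrDefault();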

Any perf difference is going to be extremely minor. I would suggest FindAll for clarity, or, if possible, Enumerable.Where. I prefer using the Enumerable methods because it allows for greater flexibility in refactoring the code (you don't take a dependency on List<T>).
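For example, a Where-based version that doesn't tie you to List<T>:
// Works against any IEnumerable<string>, not just List<T>.
IEnumerable<string> matches = list.Where(s => s == "match");
var newList = matches.ToList(); // materialize only if you actually need a list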

Yes, both implementations are O(n). They need to look at every element in the list to find all matches. In terms of readability I would also prefer FindAll. For performance considerations, have a look at LINQ in Action (ch. 5.3). If you are using C# 3.0 you could also apply a lambda expression. But that's just the icing on the cake:
var newList = aList.FindAll(s => s == "match");

I'm with the lambdas:
List<String> newList = list.FindAll(s => s.Equals("match"));

Unless the C# team has improved the performance of LINQ and FindAll, the following article seems to suggest that for and foreach would outperform LINQ and FindAll on object enumeration: LINQ on Objects Performance.
This article dates back to March 2009, just before this question was originally asked.

Related

What's the quickest way to check that the size of an IEnumerable is greater than some given value?

I know that you can use enumerable.Any() instead of enumerable.Count() to efficiently check whether the collection has anything in it.
Is there an equivalent way to check that the size is at least some larger value?
For example, how would I efficiently do enumerable.Count() > 3?
The most efficient approach will unfortunately depend on the implementation. It's a leaky abstraction at that point.
If you're using a List<T> or similar, using Count() will be fastest. But for any lazily-evaluated sequence, that will evaluate the whole sequence.
For a lazily-evaluated sequence, using enumerable.Skip(3).Any() will be more efficient, because it can stop once it's found the fourth element. That's all you need to know; you don't care about the actual size.
Using the Skip()/Any() approach will be slightly less efficient than using Count() for some collections - but could be much more efficient for large lazy sequences. (It will also work even for infinite sequences, which Count() wouldn't.)
The difference in efficiency for lists will depend on how many items you're skipping, of course - if you need to see whether there are "at least a million" items then using Count() would be much more efficient for a list.
Sorry not to have an easy answer for you. If you really need this to be optimal in every case, you could perform the same kinds of optimization that the Count() method does. Something like this:
// FIXME: This name is horrible! Note that you'd call it with 4 in your case,
// as it's inclusive of minCount.
// Note this assumes C# 8 and its lovely switch expression support.
// It could be written with if/else etc. of course.
// (It needs to live in a static class, with using System.Collections;
// and using System.Collections.Generic; at the top of the file.)
public static bool HasAtMinElements<T>(this IEnumerable<T> source, int minCount) =>
    source switch
    {
        null => throw new ArgumentNullException(nameof(source)),
        ICollection<T> coll => coll.Count >= minCount,
        ICollection coll => coll.Count >= minCount,
        _ => source.Skip(minCount - 1).Any()
    };
That's annoying though :( Note that it doesn't optimize IIListProvider<T> like the real Count() method does, either - because that's internal.
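For the original question (enumerable.Count() > 3), usage would look like this:
// "More than 3" means "at least 4", since minCount is inclusive.
bool moreThanThree = enumerable.HasAtMinElements(4);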
The Enumerable.Count method is Microsoft's recommended way to return the number of elements in a sequence, which is what you are already doing, and it is the best option as far as I can see.

space complexity of a simple linq(to objects) query

I have;
var maxVal = l.TakeWhile(x => x < val).Where(x => Matches(x)).Max();
How much space does this need? Does LINQ build up a list of the items matching the Where() condition, or does Max() just iterate through the IEnumerable, keeping track of the current maximum?
And where can I find more info about this, besides asking on SO?
I have verified with Reflector that each of Enumerable.TakeWhile, Enumerable.Where and Enumerable.Max run in constant space. Consequently, this entire query should run in constant space.
Not surprising, considering TakeWhile and Where are specified to use deferred execution and streaming.
Max does not use deferred execution, but only needs to store 'max so far' and the enumerator on the source enumerable.
According to Reflector, the Max() method iterates through the enumerable.
And where can I find more info about this, besides asking on SO?
You can use Reflector to look at the implementation of any .NET assembly.
The only thing offered by Enumerable that I've found doesn't run in constant space is ToList(), for obvious reasons.
With some enumerations this is inefficient, in that you already have a space complexity above constant (typically O(n), as you are storing the items) and the collection in question offers a mechanism with lower time complexity. If you are creating such a collection yourself, it makes sense to offer your own versions of the extensions offered by Enumerable. For example, if you have a collection that is inherently sorted, you should be able to offer Min() and Max() in better than O(n) complexity (whether it is O(1), O(log n) or something else depends on how that sorting is maintained). Since instance methods take precedence over extension methods (when called on an expression of the concrete type rather than the interface type), you will offer better efficiency with no difference to the coder using your object.
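As a rough sketch of that idea (AscendingList is a made-up name, not a BCL type; Min/Max assume a non-empty collection for brevity):
using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical collection that keeps its items sorted ascending on insert.
public class AscendingList<T> : IEnumerable<T> where T : IComparable<T>
{
    private readonly List<T> _items = new List<T>();

    public void Add(T item)
    {
        int index = _items.BinarySearch(item);
        _items.Insert(index >= 0 ? index : ~index, item);
    }

    // These instance methods take precedence over Enumerable.Min()/Max()
    // when the compile-time type is AscendingList<T>: O(1) instead of O(n).
    public T Min() => _items[0];
    public T Max() => _items[_items.Count - 1];

    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}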
Reflector is your friend.
In particular, you can take a look at Linq to Objects extension methods in the Enumerable class in System.Linq.
The above use iterators, so they take up whatever space the enumerators need - usually O(1). Max() is O(1) space.
However, keep in mind that nothing stops a developer from writing an enumerator that takes up more than constant space. E.g. traversing a tree may require O(log n) space. This is the case e.g. for SortedDictionary<K,V> and SortedSet<T>.
So it partially depends on what l is in your code.

Which LINQ query is more effective?

I have a huge IEnumerable (suppose the name is myItems); which way is more efficient?
Solution 1: Filter it first then ForEach.
Array.ForEach(myItems.Where(FILTER-IT-HERE).ToArray(),MY-ACTION);
Solution 2: Return early from MY-ACTION if the item doesn't cut the mustard.
Array.ForEach(myItems.ToArray(),MY-ACTION-WITH-FILTER);
Is one of them always better than the other? Or are there any other good suggestions? Thanks in advance.
Did you do any measurements? Since we can't measure the run time of MY-ACTION, only you can. Measure and decide.
Sometimes one has to create benchmarks, because similar-looking activities can produce radically different and unexpected results.
You do not say what your data source is, so I'm going to assume it may be data on a SQL server, in which case filtering at the server side will likely always be the best approach, because you minimize the amount of data transferred. Memory access is always faster than data transfer from disk to memory, so whenever you can transfer fewer records you are likely to have better performance.
Well, both times, you're converting to an array, which might not be so efficient if the IEnumerable is very large (like you said). You could create a generic extension method for IEnumerable, like:
public static void ForEach<T>(this IEnumerable<T> current, Action<T> action)
{
    foreach (var i in current)
    {
        action(i);
    }
}
and then you could do this:
IEnumerable<int> ints = new List<int>();
ints.Where(i => i == 5).ForEach(i => Console.WriteLine(i));
If performance is a concern, it's unclear to me why you'd be bothering to construct an entire array in the first place. Why not just this?
foreach (var item in myItems.Where(FILTER-IT-HERE))
    MY-ACTION;
Or:
foreach (var item in myItems)
    MY-ACTION-WITH-FILTER;
I ask because, while the others are right that you can't really know without testing, I wouldn't expect there to be much difference between the above two options. I would expect there to be a difference, on the other hand, between creating/populating an array (seemingly for no reason) and not creating an array.
Everything else being equal, calling ToArray() first will impart a greater performance hit than calling it last. Although, as others have stated before me:
Why use ToArray() and Array.ForEach() at all?
We don't know that everything else actually is equal since you do not reveal the implementation details of your filter and action.
The idea of LINQ is to work on enumerable collections, so the best LINQ query is the one where you don't use Array.ForEach() and .ToArray() at all.
I would say that this falls into the category of premature optimization. If, after establishing benchmarks, you find that the code is too slow, you can always try each approach and pick the result that works better for you.
Since we don't know how the IEnumerable<> is produced it's hard to say which approach will perform better. We also don't know how many items will remain after you apply your predicate - nor do we know whether the action or iteration steps are going to be the dominant factor in the execution of your code. The only way to know for sure is to try it both ways, profile the results, and pick the best.
Performance aside, I would choose the version that is most clear - which (for me) is to first filter and then apply the projection to the result.

Find() vs. enumeration on lists

I'm working with a code base where lists need to be frequently searched for a single element.
Is it faster to use a Predicate and Find() than to manually do an enumeration on the List?
for example:
string needle = "example";
FooObj result = _list.Find(delegate(FooObj foo) {
    return foo.Name == needle;
});
vs.
string needle = "example";
foreach (FooObj foo in _list)
{
    if (foo.Name == needle)
        return foo;
}
While they are equivalent in functionality, are they equivalent in performance as well?
They are not equivalent in performance. The Find() method requires a method (in this case delegate) invocation for every item in the list. Method invocation is not free and is relatively expensive as compared to an inline comparison. The foreach version requires no extra method invocation per object.
That being said, I wouldn't pick one or the other based on performance until I actually profiled my code and found this to be a problem. I haven't yet found the overhead of this scenario to ever be a "hot path" problem for code I've written, and I use this pattern a lot with Find and other similar methods.
If searching your list is too slow as-is, you can probably do better than a linear search. If you can keep the list sorted, you can use a binary search to find the element in O(lg n) time.
If you're searching a whole lot, consider replacing that list with a Dictionary to index your objects by name.
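A sketch of the Dictionary approach (assuming Name values are unique, since ToDictionary throws on duplicate keys):
// Build once: O(n). Look up many times: O(1) each.
Dictionary<string, FooObj> byName = _list.ToDictionary(foo => foo.Name);

FooObj result;
if (byName.TryGetValue("example", out result))
{
    // found it
}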
Technically, the runtime performance of the delegate version will be slightly worse than the other version - but in most cases you'd be hard pressed to perceive any difference.
Of more importance (IMHO) is the code-time performance of being able to write what you want, rather than how you want it. This makes a big difference in maintainability.
This original code:
string needle = "example";
foreach (FooObj foo in _list)
{
    if (foo.Name == needle)
        return foo;
}
requires any maintainer to read the code and understand that you're looking for a particular item.
This code
string needle = "example";
return _list.Find(
    delegate(FooObj foo)
    {
        return foo.Name == needle;
    });
makes it clear that you're looking for a particular item - quicker to understand.
Finally, this code, using features from C# 3.0:
string needle = "example";
return _list.Find(foo => foo.Name == needle);
does exactly the same thing, but in one line that's even faster to read and understand (well, once you understand lambda expressions, anyway).
In summary, given that the performance of the alternatives is nearly equal, choose the one that makes the code easier to read and maintain.
"I'm working with a code base where lists need to be frequently searched for a single element"
It is better to change your data structure to be Dictionary instead of List to get better performance
A similar question was asked about List.ForEach vs. foreach iteration (foreach vs someList.ForEach(){}).
In that case, List.ForEach was a bit faster.
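For reference, the two forms compared there look like this:
someList.ForEach(x => Console.WriteLine(x));  // List<T>.ForEach with a delegate

foreach (var x in someList)                   // plain foreach iteration
{
    Console.WriteLine(x);
}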
As Jared pointed out, there are differences.
But, as always, don't worry unless you know it's a bottleneck. And if it is a bottleneck, that's probably because the lists are big, in which case you should consider using a faster find: a hash table or binary tree, or even just sorting the list and doing a binary search, will give you O(log n) performance, which will have far more impact than tweaking your linear case.

In-memory LINQ performance

More than about LINQ to [insert your favorite provider here], this question is about searching or filtering in-memory collections.
I know LINQ (or the searching/filtering extension methods) works on objects implementing IEnumerable or IEnumerable<T>. The question is: because of the nature of enumeration, is every query's complexity at least O(n)?
For example:
var result = list.FirstOrDefault(o => o.something > n);
In this case, every algorithm will take at least O(n) unless the list is ordered with respect to 'something', in which case the search should take O(log n): it should be a binary search. However, if I understand correctly, this query will be resolved through enumeration, so it will take O(n) even if the list was previously ordered.
Is there something I can do to solve a query in O(log(n))?
If I want performance, should I use Array.Sort and Array.BinarySearch?
Even with parallelisation, it's still O(n). The constant factor would be different (depending on your number of cores) but as n varied the total time would still vary linearly.
Of course, you could write your own implementations of the various LINQ operators over your own data types, but they'd only be appropriate in very specific situations - you'd have to know for sure that the predicate only operated on the optimised aspects of the data. For instance, if you've got a list of people that's ordered by age, it's not going to help you with a query which tries to find someone with a particular name :)
To examine the predicate, you'd have to use expression trees instead of delegates, and life would become a lot harder.
I suspect I'd normally add new methods which make it obvious that you're using the indexed/ordered/whatever nature of the data type, and which will always work appropriately. You couldn't easily invoke those extra methods from query expressions, of course, but you can still use LINQ with dot notation.
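As a sketch of the Array.Sort/Array.BinarySearch idea from the question (using a plain int array for simplicity; duplicates of n are ignored):
// One-time cost: O(n log n).
int[] values = list.ToArray();
Array.Sort(values);

// Each query: O(log n). Find the first element greater than n.
// When the value isn't found, BinarySearch returns the bitwise complement
// of the index of the next larger element.
int index = Array.BinarySearch(values, n);
int firstGreater = index >= 0 ? index + 1 : ~index;
int? result = firstGreater < values.Length ? values[firstGreater] : (int?)null;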
Yes, the generic case is always O(n), as Sklivvz said.
However, many LINQ methods special-case the situation where the object implementing IEnumerable actually implements e.g. ICollection. (I've seen this for Enumerable.Contains at least.)
In practice this means that, for example, LINQ's Contains calls the fast HashSet.Contains if the IEnumerable actually is a HashSet.
IEnumerable<int> mySet = new HashSet<int>();
// calls the fast HashSet.Contains because HashSet implements ICollection.
if (mySet.Contains(10)) { /* code */ }
You can use Reflector to check exactly how the LINQ methods are defined; that is how I figured this out.
Oh, and LINQ also contains the methods Enumerable.ToDictionary (maps a key to a single value) and Enumerable.ToLookup (maps a key to multiple values). The dictionary/lookup table can be created once and used many times, which can speed up some LINQ-intensive code by orders of magnitude.
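For example, a lookup built once can replace repeated linear scans (Order and CustomerId are made-up names):
// One O(n) pass builds the table...
ILookup<int, Order> ordersByCustomer = orders.ToLookup(o => o.CustomerId);

// ...then each query hits the key directly instead of rescanning the sequence.
foreach (Order order in ordersByCustomer[42])
{
    // process this customer's orders
}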
Yes, it has to be, because the only way of accessing any member of an IEnumerable is by using its methods, which means O(n).
It seems like a classic case in which the language designers decided to trade performance for generality.
