Aborting a linq query after finding x items? - c#

If I know there is only one matching item in a collection, is there any way to tell Linq about this so that it will abort the search when it finds it?
I am assuming that both of these search the full collection before returning one item?
var fred = _people.Where((p) => p.Name == "Fred").First();
var bill = _people.Where((p) => p.Name == "Bill").Take(1);
EDIT: People seem fixated on FirstOrDefault, or SingleOrDefault. These have nothing to do with my question. They simply provide a default value if the collection is empty. As I stated, I know that my collection has a single matching item.
AakashM's comment is of most interest to me. It would appear my assumption is wrong, but I'm interested in why.
For instance, when LINQ to Objects is running the Where() function in my example code, how does it know that there are further operations on its return value?

Your assumption is wrong. LINQ uses deferred execution and lazy evaluation a lot. What this means is that, for example, when you call Where(), it doesn't actually iterate the collection. Only when you iterate the object it returns, will it iterate the original collection. And it will do it in a lazy manner: only as much as is necessary.
So, in your case, neither query will iterate the whole collection: both will iterate it only up to the point where they find the first element, and then stop.
Actually, the second query (with Take()) won't do even that, it will iterate the source collection only if you iterate the result.
This all applies to LINQ to objects. Other providers (LINQ to SQL and others) can behave differently, but at least the principle of deferred execution usually still holds.
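A quick way to see this laziness in action is to give the source a side effect and watch how far it gets enumerated. This is just a sketch; the Person record and the sample data are made up for illustration:
using System;
using System.Collections.Generic;
using System.Linq;

record Person(string Name);   // hypothetical sample type

class Demo
{
    static IEnumerable<Person> People()
    {
        foreach (var p in new[] { new Person("Fred"), new Person("Bill"), new Person("Anna") })
        {
            Console.WriteLine($"yielding {p.Name}");   // log each element as it is pulled
            yield return p;
        }
    }

    static void Main()
    {
        var fred = People().Where(p => p.Name == "Fred").First();
        // Prints only "yielding Fred": the source is never enumerated past the first match.
    }
}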

I think First() will not scan the whole collection. It will return immediately after the first match. But I suggest using FirstOrDefault() instead.
EDIT:
Difference between First() and FirstOrDefault() (from MSDN):
The First() method throws an exception if source contains no elements. To instead return a default value when the source sequence is empty, use the FirstOrDefault() method.
Enumerable.First

Substitute .Where( with .SingleOrDefault(
This will find the first and only item for you.
But you can't do this for any given number. If you need 2 items, you'll need to get the entire collection.
However, you shouldn't worry too much about time. Most of the effort goes into opening a database connection and establishing the query. Executing the query doesn't take that much time, so there's no real reason to stop a query halfway :-)
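For example, a rough sketch (assuming a _people collection like the one in the question):
// Throws if there is no match or more than one match:
var fred = _people.Single(p => p.Name == "Fred");

// Returns null (the default) if there is no match, but still throws on more than one match:
var bill = _people.SingleOrDefault(p => p.Name == "Bill");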

which of these linq queries are more performant? [duplicate]

What is the difference between these two Linq queries:
var result = ResultLists().Where( c=> c.code == "abc").FirstOrDefault();
// vs.
var result = ResultLists().FirstOrDefault( c => c.code == "abc");
Are the semantics exactly the same?
If they are semantically equal, does the predicate form of FirstOrDefault offer any theoretical or practical performance benefit over Where() plus plain FirstOrDefault()?
Either is fine.
They both run lazily - if the source list has a million items, but the tenth item matches then both will only iterate 10 items from the source.
Performance should be almost identical and any difference would be totally insignificant.
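If you want to convince yourself, here is a small sketch (for LINQ to Objects) with a source that counts how many items are pulled; the numbers and the Source helper are made up for illustration:
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        int pulled = 0;
        IEnumerable<int> Source()                     // counts how many items are pulled
        {
            for (int i = 1; i <= 1_000_000; i++) { pulled++; yield return i; }
        }

        var a = Source().Where(x => x == 10).FirstOrDefault();
        Console.WriteLine(pulled);                    // 10
        pulled = 0;
        var b = Source().FirstOrDefault(x => x == 10);
        Console.WriteLine(pulled);                    // 10 - same amount of work
    }
}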
The second one. All other things being equal, the iterator in the second case can stop as soon as it finds a match, where the first one must find all that match, and then pick the first of those.
Nice discussion, all the above answers are correct.
I didn't run any performance tests, but in my experience FirstOrDefault() is sometimes faster and better optimized than Where().FirstOrDefault().
I recently fixed a memory/performance issue (in a neural-network algorithm), and the fix was changing Where(x => ...).FirstOrDefault() to simply FirstOrDefault(x => ...).
Until then I had been ignoring the editor's recommendation to make exactly that change.
So I believe the correct answer to the above question is
The second option is the best approach in all cases
Where uses deferred execution - the evaluation of an expression is delayed until its realized value is actually required. This greatly improves performance by avoiding unnecessary execution.
Where looks roughly like this, returning a new, lazily evaluated IEnumerable<T>:
// Simplified sketch of Enumerable.Where (the real one lives in a static class and validates its arguments)
static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
{
    foreach (var item in source)
    {
        if (predicate(item))
        {
            yield return item;
        }
    }
}
FirstOrDefault() returns a T; when there is no result it does not throw an exception but returns default(T) (null for reference types).
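A tiny sketch of that behaviour (assuming a Person class with a Name property, as in the earlier questions):
var people = new List<Person>();                               // empty list

var nobody = people.FirstOrDefault(p => p.Name == "Fred");     // null, no exception
var boom   = people.First(p => p.Name == "Fred");              // throws InvalidOperationException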

Empty HashSet - Count vs Any

I am only interested in knowing whether a HashSet hs is empty or not.
I am NOT interested in knowing exactly how many elements it contains.
So I could use this:
bool isEmpty = (hs.Count == 0);
...or this:
bool isEmpty = !hs.Any(x=>true);
Which one performs better (especially when the HashSet contains a large number of elements)?
On a HashSet you can use both, since HashSet internally manages the count.
However, if your data is in an IEnumerable<T> or IQueryable<T> object, using result.Any() is preferable over result.Count() (Both Linq Methods).
LINQ's .Count() will iterate through the whole enumerable, while .Any() only peeks to see whether any object exists in the enumerable.
Update:
Just small addition:
In your case with the HashSet, .Count may be preferable, as .Any() requires an IEnumerator to be created and returned, which is a small overhead if you are not going to use the enumerator anywhere else in your code (foreach, LINQ, etc.). But I think that would be considered a micro-optimization.
HashSet<T> implements ICollection<T>, which has a Count property, so a call to Count() will just call HashSet<T>.Count, which I'm assuming is an O(1) operation (meaning it doesn't actually have to count - it just returns the current size of the HashSet).
Any will iterate until it finds an item that matches the condition, then stop.
So in your case, it will just iterate one item, then stop, so the difference will probably be negligible.
If you had a filter that you wanted to apply (e.g. x => x.IsValid) then Any would definitely be faster, since Count(x => x.IsValid) would iterate over the entire collection, while Any would stop as soon as it finds a match.
For those reasons I generally prefer to use Any() rather than Count()==0 since it's more direct and avoids any potential performance problems. I would only switch to Count()==0 if it provided a significant performance boost over Any().
Note that Any(x=>true) is logically the same as calling Any(). That doesn't change your question, but it looks cleaner without the lambda.
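As a rough sketch of the difference on a plain IEnumerable<T> (as opposed to a HashSet<T>, which keeps its own count):
IEnumerable<int> numbers = Enumerable.Range(1, 1_000_000);

bool hasEven1 = numbers.Count(x => x % 2 == 0) > 0;   // iterates all 1,000,000 items
bool hasEven2 = numbers.Any(x => x % 2 == 0);         // stops at the first even number (2)

bool isEmpty = !numbers.Any();                        // the cheap emptiness check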
Depending on the type of collection, it may or may not matter performance-wise. So why not just use hs.Any() since that is designed for exactly what you need to know?
And the lambda expression x => true has no meaning here. You can leave that out.

Using Linq for copying/filtering produces a dynamic result in foreach loops?

So I create this projection of a dictionary of items I would like to remove.
var toRemoveList =
this.outputDic.Keys.Where(key =>
this.removeDic.ContainsKey(key));
Then I iterate through the result removing from the actual dictionary
foreach(var key in toRemoveList)
this.outputDic.Remove(key);
However, during that foreach an exception is thrown saying that the collection was modified during the loop. But how so? Is the LINQ query somehow dynamic, getting re-evaluated every time the dictionary changes? A simple .ToArray() call on the end of the query solves the issue, but IMO it shouldn't even occur in the first place.
So I create this projection of a dictionary of items I would like to remove.
var toRemoveList =
this.outputDic.Keys.Where(key =>
this.removeDic.ContainsKey(key));
As I have often said, if I can teach people one thing about LINQ it is this: the result of a query expression is a query, not the results of executing the query. You now have an object that means "the keys of a dictionary such that the key is... something". It is not the results of that query, it is that query. The query is an object unto itself; it does not give you a result set until you ask for one.
Then you do this:
foreach(var key in toRemoveList)
this.outputDic.Remove(key);
So what are you doing? You are iterating over the query. Iterating over the query executes the query, so the query is iterating over the original dictionary. But you then remove an item from the dictionary, while you are iterating over it, which is illegal.
imo, it shouldn't even occur in the first place.
Your opinion about how the world should be is a common one, but doing it your way leads to deep inefficiencies. Let us suppose that creating a query executes the query immediately rather than creates a query object. What does this do?
var query = expensiveRemoteDatabase
.Where(somefilter)
.Where(someotherfilter)
.OrderBy(something);
The first call to Where produces a query, which in your world is then executed, pulling down from the remote database all records which match that query. The second call to Where then says "oh, sorry, I meant to also apply this filter here as well, can we do that whole query again, this time with the second filter?" and so then that whole record set is computed, and then we say "oh, no, wait, I forgot to tell you when you built that last query object, we're going to need to sort it, so database, can you run this query for me a third time?"
Now perhaps you see why a query expression produces a query that does not execute until it needs to?
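The flip side of deferred execution is that the same query object re-reads the live source every time it is enumerated, which is exactly what bites the code in the question. A small sketch (the dictionary contents are made up):
var outputDic = new Dictionary<string, int> { ["a"] = 1, ["b"] = 2 };
var query = outputDic.Keys.Where(k => k == "a" || k == "b");

Console.WriteLine(query.Count());   // 2
outputDic.Remove("b");
Console.WriteLine(query.Count());   // 1 - the query sees the dictionary as it is now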
The reason is that toRemoveList does not contain a list of things to be removed, it contains a description of how to get a list of things that can be removed.
If you step through this in a debugger using F11 you can see this quite clearly for yourself. The first point it stops at is the foreach keyword, which is what you would expect.
Next you stop at toRemoveList (the one in foreach(var key in toRemoveList)). This is where it sets up the iterator.
When you step onto var key (with F11), however, it jumps into the original definition of toRemoveList, specifically the this.removeDic.ContainsKey(key) part. Now you get an idea of what is really happening.
The foreach is calling the iterator's MoveNext method to move to the next item in the dictionary's keys, while holding onto the live collection. When you call this.outputDic.Remove(key); the dictionary detects that an iterator is still in progress and stops you with this error.
As everybody here is saying, the correct way to solve this is to use ToArray()/ToList(), because these give you a separate copy of the list. Then you have one to step through and one to remove from.
The .ToArray() call solves the issue because it forces the entire enumeration to be evaluated and caches the values locally. Without it, when you enumerate, the enumerable computes the first element, returns it, then goes back to the collection and computes the next one. If the underlying collection changes while you are iterating over it, the enumeration can no longer guarantee that it will return the appropriate values.
In short: just force the evaluation with .ToArray() (or .ToList(), or whatever).
The LINQ query uses deferred execution. It streams the items one by one, returning them as they're requested. So yes, every time you remove a key you are modifying the source mid-enumeration, which is why it throws an exception. When you invoke ToArray() it forces execution of the query, which is why it works.
EDIT: This is somewhat in response to your comments. Check out iterator blocks on MSDN; that is the mechanism being used when your foreach executes. For LINQ to Objects the query is just a chain of iterators, and the filter, projection or other operation is applied to the elements one by one as they're retrieved, where possible.
The reason you are getting this error is LINQ's deferred execution. To explain fully: the data is actually fetched from the dictionary only when your loop runs. Thus the modification of outputDic happens at that point, and you are not allowed to modify the collection you are looping over. That is why you get this error. You can get rid of it by forcing the query to execute before you run the loop.
var toRemoveList =
this.outputDic.Keys.Where(key =>
this.removeDic.ContainsKey(key)).ToList();
Notice the ToList() in the above statement. It will make sure that your query has been executed and you have your list in toRemoveList.
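Putting it together, a minimal sketch of the failing and the working version (the dictionary contents are stand-ins for the ones in the question):
var outputDic = new Dictionary<string, int> { ["a"] = 1, ["b"] = 2, ["c"] = 3 };
var removeDic = new Dictionary<string, int> { ["b"] = 0, ["c"] = 0 };

// The version from the question: enumerating the query while removing from outputDic
// is what triggers the "collection was modified" error.
// foreach (var key in outputDic.Keys.Where(k => removeDic.ContainsKey(k)))
//     outputDic.Remove(key);

// Works: ToList() snapshots the matching keys first, so the loop iterates the copy.
foreach (var key in outputDic.Keys.Where(k => removeDic.ContainsKey(k)).ToList())
    outputDic.Remove(key);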

Is .Select<T>(...) to be prefered before .Where<T>(...)?

I got into a discussion with two colleagues regarding a setup for an iteration over an IEnumerable (the contents of which will not be altered in any way during the operation). There are three conflicting theories on which is the optimal approach. Both of the others (and I as well) are very certain, which made me unsure, so for the sake of clarity I want to check with an external source.
The scenario is as follows. We had the code below as a starting point and discovered that some of the hazaas need not be acted upon. So, starting with the code below, we set about adding a guard for the action.
foreach(Hazaa hazaa in hazaas) ;
My suggestion is as follows.
foreach(Hazaa hazaa in hazaas.Where(element => condition)) ;
One of the guys wants to resolve it in a more explicit form, claiming that LINQ is not appropriate in this case (not sure why that would be so, but he seems very convinced). His solution is this.
foreach(Hazaa hazaa in hazaas)
  if(condition)
    ;
The other counter-suggestion is supported by the claim that Where risks repeating the filtering process needlessly and that it's safer to minimize the computational workload by picking the appropriate elements once and for all with Select.
foreach(Hazaa hazaa in hazaas.Select(element => condition)) ;
I argue that the first is obsolete, since LINQ can handle data objects quite well.
I also believe that Select-ing is in this case equivalently fast to Where-ing and no needless steps will be taken (e.g. the evaluation of the condition on the elements will only be performed once). If anything, it should be faster using Where because we won't be creating an extra instance of anything.
Who's right?
Select is inappropriate. It doesn't filter anything.
if is a possible solution, but Where is just as explicit.
Where executes the condition exactly once per item, just as the if. Additionally, it is important to note that the call to Where doesn't iterate the list. So, using Where you iterate the list exactly once, just like when using if.
I think you are discussing with one person that didn't understand LINQ - the guy that wants to use Select - and one that doesn't like the functional aspect of LINQ.
I would go with Where.
The .Where() and the if(condition) approach will be the same.
But since LINQ is nicely readable, I'd prefer that.
The approach with .Select() is nonsense, since it will not return the Hazaa objects but an IEnumerable<Boolean>.
To be clear about the functions:
myEnumerable.Where(a => isTrueFor(a)) //This is filtering
myEnumerable.Select(a => a.b) //This is projection
Where() runs a function that returns a Boolean for each item of the enumerable and yields the item only when that function returns true.
Select() runs a function for every item in the list and returns the result of that function, without doing any filtering.
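A short sketch of the difference, using a hypothetical Hazaa type where a bool IsActive property stands in for the condition:
class Hazaa
{
    public string Name { get; set; }
    public bool IsActive { get; set; }     // stands in for "condition"
}

// inside some method:
var hazaas = new List<Hazaa>
{
    new Hazaa { Name = "a", IsActive = true },
    new Hazaa { Name = "b", IsActive = false }
};

IEnumerable<Hazaa> active = hazaas.Where(h => h.IsActive);   // filtering: yields only "a"
IEnumerable<bool>  flags  = hazaas.Select(h => h.IsActive);  // projection: yields true, false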

Why do we need Single() in LINQ?

What is the main purpose of the extension method Single()?
I know it will throw an exception if more than one element in the sequence matches the predicate, but I still don't understand in which context that could be useful.
Edit:
I do understand what Single is doing, so you don't need to explain in your answer what this method does.
It's useful for declaratively stating
I want the single element in the list and if more than one item matches then something is very wrong
There are many times when programs need to reduce a set of elements to the one that is interesting based on a particular predicate. If more than one matches, it indicates an error in the program. Without the Single method, a program would need to traverse parts of a potentially expensive list more than once.
Compare
Item i = someCollection.Single(thePredicate);
To
Contract.Requires(someCollection.Where(thePredicate).Count() == 1);
Item i = someCollection.First(thePredicate);
The latter requires two statements and iterates a potentially expensive list twice. Not good.
Note: Yes First is potentially faster because it only has to iterate the enumeration up until the first element that matches. The rest of the elements are of no consequence. On the other hand Single must consider the entire enumeration. If multiple matches are of no consequence to your program and indicate no programming errors then yes use First.
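A small sketch of the trade-off (the numbers are made up):
var numbers = new List<int> { 1, 2, 3, 2 };

var first = numbers.First(n => n == 2);    // 2 - stops as soon as it finds a match
var only  = numbers.Single(n => n == 3);   // 3 - scans the whole list to rule out duplicates
var dup   = numbers.Single(n => n == 2);   // throws InvalidOperationException: two matches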
Using Single allows you to document your expectations on the number of results, and to fail early, fail hard if they are wrong. Unless you enjoy long debugging sessions for their own sake, I'd say it's enormously useful for increasing the robustness of your code.
Most LINQ operators return a sequence, i.e. an IEnumerable<T>. To get an actual element, you need one of the First, Last or Single methods - you use the latter if you know for sure the sequence contains only one matching element. An example would be a 1:1 ID:Name mapping in a database.
Single will return a single instance of the class/object, not a collection. It is very handy when you fetch a single record by Id and never expect more than one row.
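For example, a sketch (User is a hypothetical type with a unique Id):
class User { public int Id { get; set; } public string Name { get; set; } }

var users = new List<User>
{
    new User { Id = 1, Name = "Fred" },
    new User { Id = 2, Name = "Bill" }
};

User bill = users.Single(u => u.Id == 2);   // exactly one match expected; anything else is a bug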
