When to force LINQ query evaluation?

What's the accepted practice on forcing evaluation of LINQ queries with methods like ToArray(), and are there general heuristics for composing optimal chains of queries? I often try to do everything in a single pass because I've noticed in those instances that AsParallel() does a really good job of speeding up the computation. In cases where the queries perform computations with no side effects, but several passes are required to get the right data out, is forcing the computation with ToArray() the right way to go, or is it better to leave the query in lazy form?

If you are not averse to using an 'experimental' library, you could use the EnumerableEx.Memoize extension method from the Interactive Extensions library.
This method provides a best-of-both-worlds option where the underlying sequence is computed on demand, but is not re-computed on subsequent passes. Another small benefit, in my opinion, is that the return type is not a mutable collection, as it would be with ToArray or ToList.
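For illustration, a minimal sketch, assuming the System.Interactive (Ix.NET) package is referenced (it puts EnumerableEx in the System.Linq namespace); the expensive projection runs at most once per element no matter how many passes follow:
using System;
using System.Linq; // EnumerableEx.Memoize comes from the System.Interactive package (assumption)

class MemoizeSketch
{
    static void Main()
    {
        // An expensive, side-effect-free projection we don't want to repeat.
        var expensive = Enumerable.Range(1, 5)
            .Select(n => { Console.WriteLine($"computing {n}"); return n * n; });

        // Memoize: still computed on demand, but each element is computed only once.
        var memoized = expensive.Memoize();

        Console.WriteLine(memoized.Sum()); // triggers the computation, prints "computing 1..5"
        Console.WriteLine(memoized.Max()); // served from the internal buffer, no more "computing" lines
    }
}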

Keep the queries in lazy form until you start to evaluate the query multiple times, or even earlier if you need them in another form or you are in danger of variables captured in closures changing their values.
You may want to evaluate when the query contains complex projections which you want to avoid performing multiple times (e.g. constructing complex objects for sequences with lots of elements). In this case evaluating once and iterating many times is much saner.
You may need the results in another form if you want to return them or pass them to another API that expects a specific type of collection.
You may want or need to prevent accessing modified closures if the query captures variables which are not local in scope. Until the query is actually evaluated, you are in danger of other code changing their values "behind your back"; when the evaluation happens, it will use these values instead of those present when the query was constructed. (However, this can be worked around by making a copy of those values in another variable that does have local scope).
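A small sketch of the closure pitfall (the variable names are made up for illustration):
using System;
using System.Collections.Generic;
using System.Linq;

class ClosureSketch
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        int threshold = 3;
        var lazy = numbers.Where(n => n > threshold); // captures the variable, not its current value

        int copy = threshold;                         // the workaround: copy into a local...
        var pinned = numbers.Where(n => n > copy);    // ...or evaluate eagerly with ToArray()/ToList()

        threshold = 4;                                // someone changes it "behind your back"

        Console.WriteLine(string.Join(",", lazy));    // "5"   -- evaluated with threshold = 4
        Console.WriteLine(string.Join(",", pinned));  // "4,5" -- still uses the original value 3
    }
}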

You would normally only use ToArray() when you need to use an array, like with an API that expects an array. As long as you don't need to access the results of a query more than once, and you're not confined to some kind of connection context (as may be the case in LINQ to SQL or LINQ to Entities), you might as well just keep the query in lazy form.
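For example (a made-up snippet), materialize exactly at the boundary where an array is genuinely required:
using System;
using System.Linq;

class ToArraySketch
{
    static void Main()
    {
        var evens = Enumerable.Range(1, 10).Where(n => n % 2 == 0); // stays lazy

        int[] buffer = evens.ToArray();              // an array is needed here, so materialize now
        Array.Reverse(buffer);                       // Array.Reverse works on arrays, not lazy sequences
        Console.WriteLine(string.Join(",", buffer)); // 10,8,6,4,2
    }
}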

Related

Using .Where() on a List

Assuming the two following possible blocks of code inside of a view, with a model passed to it using something like return View(db.Products.Find(id)):
Block 1:
List<UserReview> reviews = Model.UserReviews.OrderByDescending(ur => ur.Date).ToList();
if (myUserReview != null)
    reviews = reviews.Where(ur => ur.Id != myUserReview.Id).ToList();
Block 2:
IEnumerable<UserReview> reviews = Model.UserReviews.OrderByDescending(ur => ur.Date);
if (myUserReview != null)
    reviews = reviews.Where(ur => ur.Id != myUserReview.Id);
What are the performance implications between the two? By this point, is all of the product related data in memory, including its navigation properties? Does using ToList() in this instance matter at all? If not, is there a better approach to using Linq queries on a List without having to call ToList() every time, or is this the typical way one would do it?
Read http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx
Deferred execution is one of the many marvels intrinsic to linq. The short version is that your data is never touched (it remains idle in the source, be that in memory, in a database, or wherever). When you construct a linq query, all you are doing is creating an IEnumerable that is 'capable' of enumerating your data. The work doesn't begin until you actually start enumerating; then each piece of data comes all the way from the source, through the pipeline, and is serviced by your code one item at a time. If you break your loop early, you have saved some work - later items are never processed. That's the simple version.
Some linq operations cannot work this way. OrderBy is the best example. OrderBy has to know every piece of data, because it is possible that the last piece retrieved from the source could very well be the first piece you are supposed to get. So when an operation such as OrderBy is in the pipe, it will actually cache your dataset internally. All the data is pulled from the source and goes through the pipeline up to the OrderBy, and then the OrderBy becomes your new temporary data source for any operations that come afterwards in the expression. Even so, OrderBy tries as much as possible to follow the deferred execution paradigm by waiting until the last possible moment to build its cache. Including OrderBy in your query still doesn't do any work immediately; the work begins once you start enumerating.
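A small sketch (all names made up) that makes this visible by logging inside the pipeline; nothing is printed while the query is being composed, and OrderByDescending pulls everything before it can yield even the first item:
using System;
using System.Linq;

class DeferredSketch
{
    static void Main()
    {
        var source = Enumerable.Range(1, 5)
            .Select(n => { Console.WriteLine($"pulled {n}"); return n; });

        var query = source.OrderByDescending(n => n); // no output yet: nothing has run

        Console.WriteLine("enumerating...");
        foreach (var n in query.Take(1))              // OrderBy must pull all 5 before yielding the first
            Console.WriteLine($"got {n}");            // output: pulled 1..5, then got 5
    }
}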
To answer your question directly, your call to ToList is doing exactly that. OrderByDescending is caching the data from your datasource => ToList additionally persists it into a variable that you can actually touch (reviews) => Where starts pulling records one at a time from reviews, and for each match your final ToList call stores the result into yet another list in memory.
Beyond the memory implications, ToList additionally thwarts deferred execution because it STOPS the processing of your view at the time of the call in order to process the entire linq expression and build its in-memory representation of the results.
Now none of this is a real big deal if the number of records we're talking about is in the dozens. You'll never notice the difference at runtime because it happens so quickly. But when dealing with large-scale datasets, deferring as much as possible for as long as possible, in hopes that something will happen allowing you to cancel a full enumeration... in addition to the memory savings... gold.
In your version without ToList: OrderByDescending will still cache a copy of your dataset internally, as processed through the pipeline up to that point, sorted of course. That's ok, you gotta do what you gotta do. But that doesn't happen until you actually try to retrieve your first record later in your view. Once that cache is complete, you get your first record; for every subsequent record you pull from that cache and check it against the Where clause, keeping it or not, and you have saved a couple of in-memory copies and a lot of work.
Magically, I bet even your lead-in of db.Products.Find(id) doesn't even start spinning until your view starts enumerating (if not using ToList). If db.Products is a Linq2SQL datasource, then every other element you've specified will reduce into SQL verbiage, and your entire expression will be deferred.
Hope this helps! Read further on Deferred Execution. And if you want to know 'how' that works, look into c# iterators (yield return). There's a page somewhere on MSDN that I'm having trouble finding that contains the common linq operations, and whether they defer execution or not. I'll update if I can track that down.
/*edit*/ to clarify - all of the above is in the context of raw linq, or Linq2Objects. Until we find that page, common sense will tell you how it works. If you close your eyes and imagine implementing OrderBy, or any other linq operation, and you can't think of a way to implement it with 'yield return' or without caching, then execution is probably not deferred and a cached copy and/or a full enumeration is likely... OrderBy, Distinct, Count, Sum, etc. Linq2SQL is a whole other can of worms. Even in that context, ToList will still stop and process the whole expression and store the results, because a list is a list and is in memory. But Linq2SQL is uniquely capable of deferring many of those aforementioned clauses, and then some, by incorporating them into the generated SQL that is sent to the SQL server. So even OrderBy can be deferred in this way, because the clause will be pushed down into your original datasource and then ignored in the pipe.
Good luck ;)
Not enough context to know for sure.
But ToList() guarantees that the data has been copied into memory, and your first example does that twice.
The second example could involve queryable data or some other on-demand scenario. Even if the original data was all already in memory and even if you only added a call to ToList() at the end, that would be one less copy in-memory than the first example.
And it's entirely possible that in the second example, by the time you get to the end of that little snippet, no actual processing of the original data has been done at all. In that case, the database might not even be queried until some code actually enumerates the final reviews value.
As for whether there's a "better" way to do it, not possible to say. You haven't defined "better". Personally, I tend to prefer the second example...why materialize data before you need it? But in some cases, you do want to force materialization. It just depends on the scenario.

Why are ToLookup and GroupBy different?

.ToLookup<TSource, TKey> returns an ILookup<TKey, TSource>. ILookup<TKey, TSource> also implements interface IEnumerable<IGrouping<TKey, TSource>>.
.GroupBy<TSource, TKey> returns an IEnumerable<IGrouping<TKey, TSource>>.
ILookup has the handy indexer property, so it can be used in a dictionary-like (or lookup-like) manner, whereas GroupBy can't. GroupBy without the indexer is a pain to work with; pretty much the only way you can then reference the return object is by looping through it (or using another LINQ extension method). In other words, in any case where GroupBy works, ToLookup will work as well.
All this leaves me with the question why would I ever bother with GroupBy? Why should it exist?
why would I ever bother with GroupBy? Why should it exist?
What happens when you call ToLookup on an object representing a remote database table with a billion rows in it?
The billion rows are sent over the wire, and you build the lookup table locally.
What happens when you call GroupBy on such an object?
A query object is built; end of story.
When that query object is enumerated then the analysis of the table is done on the database server and the grouped results are sent back on demand a few at a time.
Logically they are the same thing, but the performance implications of each are completely different. Calling ToLookup means "I want a cache of the entire thing right now, organized by group." Calling GroupBy means "I am building an object to represent the question 'what would these things look like if I organized them by group?'"
In simple LINQ-world words:
ToLookup() - immediate execution
GroupBy() - deferred execution
The two are similar, but are used in different scenarios. .ToLookup() returns a ready-to-use object that has all of the groups (and their contents) eagerly loaded. On the other hand, .GroupBy() returns a lazily evaluated sequence of groups.
Different LINQ providers may have different behaviors for the eager and lazy loading of the groups. With LINQ-to-Objects it probably makes little difference, but with LINQ-to-SQL (or LINQ-to-EF, etc.), the grouping operation is performed on the database server rather than the client, so you may want to do additional filtering on the group key (which generates a HAVING clause) and then fetch only some of the groups instead of all of them. .ToLookup() wouldn't allow for such semantics, since all items are eagerly grouped.
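An in-memory sketch of the difference (LINQ-to-Objects, so both end up buffering, but the lookup is built immediately and exposes an indexer, while GroupBy does no work until it is enumerated):
using System;
using System.Linq;

class LookupVsGroupBy
{
    static void Main()
    {
        var words = new[] { "apple", "avocado", "banana", "cherry" };

        // ToLookup: executes right now; the result is indexable by key.
        var lookup = words.ToLookup(w => w[0]);
        Console.WriteLine(string.Join(",", lookup['a'])); // apple,avocado
        Console.WriteLine(lookup['z'].Count());           // 0 -- a missing key yields an empty sequence

        // GroupBy: just builds a query object; nothing runs until we enumerate it.
        var groups = words.GroupBy(w => w[0]);
        foreach (var g in groups)
            Console.WriteLine($"{g.Key}: {g.Count()}");   // a: 2, b: 1, c: 1
    }
}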

Does LINQ enhance the performance by eliminating looping?

I've used LINQ against some collection objects (Dictionary, List). If I want to select items based on some criteria, I write a LINQ query and then enumerate the LINQ object. So my question is: does LINQ eliminate looping over the main collection and, as a result, improve performance?
Absolutely not. LINQ to Objects loops internally - how else could it work?
On the other hand, LINQ is more efficient than some approaches you could take, by streaming the data only when it's required etc.
On the third hand, it involves extra layers of indirection (all the iterators etc) which will have some marginal effect on performance.
Probably not. LINQ lends itself to terse (hopefully) readable code.
Under the covers it's looping, unless the backing data structure supports a more efficient searching algorithm than scanning.
When you use the query directly, then you still loop over the whole collection.
You just don't see everything, because the query will only return elements that match your filter.
The overall performance will probably even take a hit, simply because of all those nested iterators that are involved.
If you called ToList() on your query result and then used that result several times, you'd be better off performance-wise.
No. In fact, if you are using LINQ to SQL, the performance will be a little worse, because LINQ, after all, is an additional layer on top of the ADO.NET stack.
If you are using LINQ over objects, there are optimizations done by LINQ; the most important one is yield, which starts to yield results from an IEnumerable as they are generated. This is better than the standard approach, which has to wait for a List to be filled and returned by the function before you can iterate over it.
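To make "it loops internally" concrete, here is a simplified sketch of how a Where-style operator can be written with an iterator. This is not the actual BCL source, just the general shape:
using System;
using System.Collections.Generic;

static class MyEnumerable
{
    // A Where-style operator: still a loop, just a deferred, streaming one.
    public static IEnumerable<T> WhereSketch<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (var item in source)   // the loop hasn't gone anywhere
            if (predicate(item))
                yield return item;     // items stream out one at a time, only when the caller asks
    }
}
Nothing inside the foreach runs until a consumer starts iterating the returned sequence, which is exactly the streaming behavior described above.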

Am I misunderstanding LINQ to SQL .AsEnumerable()?

Consider this code:
var query = db.Table
.Where(t => SomeCondition(t))
.AsEnumerable();
int recordCount = query.Count();
int totalSomeNumber = query.Sum();
decimal average = query.Average();
Assume the query takes a very long time to run. I need to get the record count, the total of the SomeNumbers returned, and the average at the end. I thought, based on my reading, that .AsEnumerable() would execute the query using LINQ-to-SQL, then use LINQ-to-Objects for the Count, Sum, and Average. Instead, when I do this in LINQPad, I see the same query run three times. If I replace .AsEnumerable() with .ToList(), it only gets queried once.
Am I missing something about what AsEnumerable is/does?
Calling AsEnumerable() does not execute the query, enumerating it does.
IQueryable is the interface that allows LINQ to SQL to perform its magic. IQueryable implements IEnumerable so when you call AsEnumerable(), you are changing the extension-methods being called from there on, ie from the IQueryable-methods to the IEnumerable-methods (ie changing from LINQ to SQL to LINQ to Objects in this particular case). But you are not executing the actual query, just changing how it is going to be executed in its entirety.
To force query execution and buffer the results so they are computed only once, call ToList().
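A runnable stand-in (no database involved; FetchRows is a made-up method that logs each time the underlying "query" actually executes) showing why the aggregate calls in the question triggered three executions, while ToList() triggers one:
using System;
using System.Collections.Generic;
using System.Linq;

class AsEnumerableCostSketch
{
    // Stand-in for the deferred database query: logs every time it is actually executed.
    static IEnumerable<int> FetchRows()
    {
        Console.WriteLine("-- query executed --");
        foreach (var n in new[] { 2, 4, 6 })
            yield return n;
    }

    static void Main()
    {
        var lazy = FetchRows();            // like AsEnumerable(): still deferred
        Console.WriteLine(lazy.Count());   // executes the "query"
        Console.WriteLine(lazy.Sum());     // executes it again
        Console.WriteLine(lazy.Average()); // and again: three executions in total

        var list = FetchRows().ToList();   // like ToList(): executes once and buffers the rows
        Console.WriteLine(list.Count);
        Console.WriteLine(list.Sum());
        Console.WriteLine(list.Average()); // no further executions
    }
}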
Yes. All that AsEnumerable will do is cause the Count, Sum, and Average functions to be executed client-side (in other words, it will bring back the entire result set to the client, then the client will perform those aggregates instead of creating COUNT() SUM() and AVG() statements in SQL).
Justin Niessner's answer is perfect.
I just want to quote an MSDN explanation here: .NET Language-Integrated Query for Relational Data
The AsEnumerable() operator, unlike ToList() and ToArray(), does not cause execution of the query. It is still deferred. The AsEnumerable() operator merely changes the static typing of the query, turning a IQueryable into an IEnumerable, tricking the compiler into treating the rest of the query as locally executed.
I hope this is what is meant by:
IQueryable-methods to the IEnumerable-methods (ie changing from LINQ to SQL to LINQ to Objects
Once it is LINQ to Objects we can apply object's methods (e.g. ToString()). This is the explanation for one of the frequently asked questions about LINQ - Why LINQ to Entities does not recognize the method 'System.String ToString()?
According to ASENUMERABLE - codeblog.jonskeet, AsEnumerable can be handy when:
you want to perform some aspects of the query in the database, and then a bit more manipulation in .NET – particularly if there are aspects you basically can’t implement in LINQ to SQL (or whatever provider you’re using).
It also says:
All we’re doing is changing the compile-time type of the sequence which is propagating through our query from IQueryable to IEnumerable – but that means that the compiler will use the methods in Enumerable (taking delegates, and executing in LINQ to Objects) instead of the ones in Queryable (taking expression trees, and usually executing out-of-process).
Finally, also see this related question: Returning IEnumerable vs. IQueryable
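Here is a small, purely in-memory sketch of that compile-time switch (AsQueryable is used only to fake a queryable source; a real provider would translate the first Where into SQL):
using System;
using System.Linq;

class AsEnumerableTypingSketch
{
    static void Main()
    {
        IQueryable<int> queryable = Enumerable.Range(1, 10).AsQueryable();

        var query = queryable
            .Where(n => n % 2 == 0)   // resolved against Queryable: built as an expression tree
            .AsEnumerable()           // from here on, the compiler picks the Enumerable methods
            .Where(n => n > 4);       // resolved against Enumerable: a plain delegate, runs locally

        Console.WriteLine(string.Join(",", query)); // 6,8,10 -- nothing ran until this enumeration
    }
}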
Well, you are on the right track. The problem is that an IQueryable (which is what the statement is before the AsEnumerable call) is also an IEnumerable, so that call is, in effect, a no-op. It requires forcing the query into a specific in-memory data structure (e.g., with ToList()) to force it to run.
I would presume that ToList forces LINQ to fetch the records from the database. When you then perform the subsequent calculations, they are done against the in-memory objects rather than involving the database.
Leaving the return type as an Enumerable means that the data is not fetched until it is called upon by the code performing the calculations. The knock-on effect of this is that the database is hit three times, once for each calculation, and the data is not persisted in memory.
Just adding a little more clarification:
I thought based on my reading that .AsEnumerable() would execute the query using LINQ-to-SQL
It will not execute the query right away, as Justin's answer explains. It will only be materialized (hit the database) later on.
Instead, when I do this in LINQPad, I see the same query is run three times.
Yes, and note that all three queries are exactly the same: basically fetching all rows matching the given condition into memory and then computing the count/sum/avg locally.
If I replace .AsEnumerable() with .ToList(), it only gets queried once.
But it still brings all the data into memory, with the advantage that now the query runs only once.
If performance is a concern, just remove .AsEnumerable() and the count/sum/avg will be translated correctly into their SQL equivalents. That way, three queries will run (probably faster, if there are indexes satisfying the conditions), but with a much smaller memory footprint.

IEnumerable<T> as return type

Is there a problem with using IEnumerable<T> as a return type?
FxCop complains about returning List<T> (it advises returning Collection<T> instead).
Well, I've always been guided by a rule "accept the least you can, but return the maximum."
From this point of view, returning IEnumerable<T> is a bad thing, but what should I do when I want to use "lazy retrieval"? Also, the yield keyword is such a goodie.
This is really a two part question.
1) Is there inherently anything wrong with returning an IEnumerable<T>
No, nothing at all. In fact, if you are using C# iterators this is the expected behavior. Converting it to a List<T> or another collection class pre-emptively is not a good idea. Doing so makes an assumption about the usage pattern of your caller, and I find it's not a good idea to assume anything about the caller. They may have good reasons why they want an IEnumerable<T>. Perhaps they want to convert it to a completely different collection hierarchy (in which case a conversion to List is wasted).
2) Are there any circumstances where it may be preferable to return something other than IEnumerable<T>?
Yes. While it's not a great idea to assume much about your callers, it's perfectly okay to make decisions based on your own behavior. Imagine a scenario where you had a multi-threaded object which was queueing up requests into an object that was constantly being updated. In this case, returning a raw IEnumerable<T> is irresponsible. As soon as the collection is modified, the enumerable is invalidated and will cause an exception to occur. Instead you could take a snapshot of the structure and return that value, say in List<T> form. In that case I would just return the object as the direct structure (or interface).
This is certainly the rarer case though.
No, IEnumerable<T> is a good thing to return here, since all you are promising is "a sequence of (typed) values". Ideal for LINQ etc, and perfectly usable.
The caller can easily put this data into a list (or whatever) - especially with LINQ (ToList, ToArray, etc).
This approach allows you to lazily spool back values, rather than having to buffer all the data. Definitely a goodie. I wrote up another useful IEnumerable<T> trick the other day, too.
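A small sketch of that "lazily spool back values" point (the method name is made up); the caller pulls values one at a time and can stop early, and nothing is buffered up front:
using System;
using System.Collections.Generic;

class LazySpoolSketch
{
    // Promises only a sequence; values are produced as the caller pulls them.
    static IEnumerable<int> ReadValues(int count)
    {
        for (int i = 0; i < count; i++)
            yield return i * i;       // stand-in for an expensive computation or read
    }

    static void Main()
    {
        foreach (var v in ReadValues(1_000_000))
        {
            Console.WriteLine(v);
            if (v > 100) break;       // the remaining values are never computed
        }
    }
}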
IEnumerable is fine by me but it has some drawbacks. The client has to enumerate to get the results. It has no way to check for Count etc.
List is bad because you expose too much control; the client can add/remove etc. from it and that can be a bad thing.
Collection seems the best compromise, at least in FxCop's view.
I always use what seems appropriate in my context (e.g. if I want to return a read-only collection, I expose Collection as the return type and return List.AsReadOnly(), or IEnumerable for lazy evaluation through yield, etc.). Take it on a case-by-case basis.
About your principle: "accept the least you can, but return the maximum".
The key to managing the complexity of a large program is a technique called information hiding. If your method works by building a List<T>, it's not often necessary to reveal this fact by returning that type. If you do, then your callers may modify the list they get back. This removes your ability to do caching, or lazy iteration with yield return.
So a better principle for a function to follow is: "reveal as little as possible about how you work".
Returning IEnumerable<T> is OK if you're genuinely only returning an enumeration, and it will be consumed by your caller as such.
But as others point out, it has the drawback that the caller may need to enumerate if he needs any other info (for example Count). The .NET 3.5 extension method IEnumerable<T>.Count will enumerate behind the scenes if the return value does not implement ICollection<T>, which may be undesirable.
I often return IList<T> or ICollection<T> when the result is a collection - internally your method can use a List<T> and either return it as-is, or return List<T>.AsReadOnly if you want to protect against modification (e.g. if you're caching the list internally). AFAIK FxCop is quite happy with either of these.
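A sketch of that pattern with made-up names: build a List<T> internally, and either expose it through an interface or wrap it so callers cannot mutate the cached copy:
using System.Collections.Generic;
using System.Collections.ObjectModel;

class CustomerRepository
{
    private readonly List<string> _cache = new List<string> { "Ada", "Linus", "Grace" };

    // Callers get Count and an indexer, but cannot Add/Remove on our cached copy.
    public ReadOnlyCollection<string> GetCustomers() => _cache.AsReadOnly();

    // Or expose an interface and keep the concrete type an implementation detail.
    public IList<string> GetCustomersAsList() => _cache.AsReadOnly();
}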
"accept the least you can, but return the maximum" is what I advocate. When a method returns an object, what justifications we have to not return the actual type and limit the capabilities of the object by returning a base type. This however raises a question how do we know what the "maximum" (actual type) will be when we design an interface. The answer is very simple. Only in extreme cases where the interface designer is designing an open interface, which will be implemented outside the application/component, they would not know what the actual return type may be. A smart designer should always consider what the method should be doing and what an optimal/generic return type should be.
E.g. If I am designing an interface to retrieve a vector of objects, and I know the count of returned objects are going to be variable, I'll always assume a smart developer will always use a List. If someone plans to return an Array, I'd question his capabilities, unless he/she is just returning the data from another layer that he/she doesn't own. And this is probably why FxCop advocates for ICollection (common base for List and Array).
The above being said, there are a couple of other things to consider:
whether the returned data should be mutable or immutable
whether the returned data will be shared across multiple callers
Regarding LINQ lazy evaluation, I am sure 95%+ of C# users don't understand its intricacies. It's so non-OO-ish. OO promotes concrete state changes on method invocations; LINQ lazy evaluation promotes runtime state changes based on an expression-evaluation pattern (not something non-advanced users always follow).
One important aspect is that when you return a List<T> you are actually returning a reference. That makes it possible for a caller to manipulate your list. This is a common problem, for instance with a business layer that returns a List<T> to a GUI layer.
Just because you say you're returning IEnumerable doesn't mean you can't return a List. The idea is to reduce unneeded coupling. All that the caller should care about is getting a list of things, rather than the exact type of collection used to contain that list. If you have something that's backed by an array, then getting something like Count is going to be fast anyway.
I think your own guidance is great -- if you are able to be more specific about what you're returning without a performance hit (you don't have to e.g. build a List out of your result), do so. But if your function legitimately doesn't know what type it's going to find, like if in some situations you'll be working with a List and in some with an Array, etc., then returning IEnumerable is the "best" you can do. Think of it as the "greatest common multiple" of everything you might want to return.
I can't accept the chosen answer. There are ways of dealing with the scenario described, but using a List or whatever else you're using isn't one of them. The moment the IEnumerable is returned, you have to assume that the caller might do a foreach. In that case it doesn't matter if the concrete type is List or spaghetti. In fact, just indexing is a problem, especially if items are removed.
Any returned value is a snapshot. It may be the current contents of the IEnumerable, in which case, if it's cached, it should be a clone of the cached copy; if it's supposed to be more dynamic (like the results of a SQL query), then use yield return. However, allowing the container to mutate at will while supplying methods like Count and an indexer is a recipe for disaster in a multithreaded world. I haven't even gotten into the ability of the caller to call Add or Delete on a container your code is supposed to be in control of.
Also, returning a concrete type locks you into an implementation. Today you may be using a list internally. Tomorrow maybe you become multithreaded and want to use a thread-safe container, or an array, or a queue, or the Values collection of a dictionary, or the output of a Linq query. If you lock yourself into a concrete return type, then you have to either change a bunch of code or do conversions before returning.
IEnumerable is cool because you can use the yield iterator, which gives the consumer just the data they need, but there is a cost hidden in the construct.
Let me explain it with an example. Let's say I am consuming this method:
IEnumerable GetFilesFromFolder(string path)
So, what do I get? To get all the files of my folder I have to iterate the enumeration, and that's fine, after all that's how enumerations work, but what if, for any reason, I have to enumerate it twice?
The second time, should I expect a refreshed result, or is the result idempotent? I do not know; I have to check the docs of the library/method.
The call to the enumeration's GetEnumerator method, done by the consumer, could in fact execute an I/O operation behind the scenes, or an HTTP call, or it could simply iterate an inner array; I cannot know for sure. I have to check the docs in the hope that this behavior is documented.
Does this detail matter? I think it does, at least from a performance perspective.
Even when the cost of iteration is small and CPU-bound, it is not zero, and it can get even worse with chains of enumerations, which often turn debugging sessions into a nightmare.
I prefer not to leave the consumer of my library in doubt, so whenever I know my API returns a few elements I use an array as the return type, and only when the data to return is huge do I use IEnumerable or IAsyncEnumerable.
Anyway, if you want to return enumerations, please document your API to tell consumers whether or not the result is a snapshot.
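To make the doubt concrete, a sketch with a hypothetical GetFilesFromFolder (Console.WriteLine stands in for the real I/O): if the consumer enumerates twice, the work behind the enumerable runs twice, which is why a small, already-known result is often better returned as an array:
using System;
using System.Collections.Generic;
using System.Linq;

class DoubleEnumerationSketch
{
    // Hypothetical: each enumeration re-does the underlying work.
    static IEnumerable<string> GetFilesFromFolder(string path)
    {
        Console.WriteLine($"scanning {path}...");
        yield return "a.txt";
        yield return "b.txt";
    }

    static void Main()
    {
        var files = GetFilesFromFolder(@"C:\temp");
        Console.WriteLine(files.Count()); // "scanning" is printed
        Console.WriteLine(files.First()); // "scanning" is printed again -- the work ran twice

        var snapshot = GetFilesFromFolder(@"C:\temp").ToArray();
        Console.WriteLine(snapshot.Length); // scanned once; the array can be reused freely
        Console.WriteLine(snapshot.First());
    }
}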
