Performance of Skip and Take in Linq to Objects - c#

"Searching for alternative functionalities for "Skip" and "Take" functionalities"
One of the links says: "Every time you invoke Skip() it will have to iterate your collection from the beginning in order to skip the number of elements you desire, which gives a loop within a loop (O(n²) behaviour)."
Conclusion: For large collections, don’t use Skip and Take. Find another way to iterate through your collection and divide it.
In order to access the last page of data in a huge collection, can you suggest a way other than the Skip and Take approach?

Looking at the source for Skip, you can see it enumerates over all the items, even the first n items you want to skip.
It's strange, though, because several LINQ methods have optimizations for collections, like Count and Last.
Skip apparently does not.
If you have an array or IList<T>, you can use the indexer to truly skip over them:
for (int i = skipStartIndex; i < list.Count; i++)
{
    yield return list[i];
}

Internally, it is implemented like this, which is correct for a plain IEnumerable<T>:
private static IEnumerable<TSource> SkipIterator<TSource>(IEnumerable<TSource> source, int count)
{
    using (IEnumerator<TSource> enumerator = source.GetEnumerator())
    {
        while (count > 0 && enumerator.MoveNext())
            --count;
        if (count <= 0)
        {
            while (enumerator.MoveNext())
                yield return enumerator.Current;
        }
    }
}
If all you have is an IEnumerable<T>, then this is the right behavior: there is no way other than enumeration to reach a specific element. But you can write your own extension method on IReadOnlyList<T> or IList<T> (if your collection implements one of those interfaces).
public static class IReadOnlyListExtensions
{
    public static IEnumerable<T> Skip<T>(this IReadOnlyList<T> collection, int count)
    {
        if (collection == null)
            throw new ArgumentNullException(nameof(collection));
        return YieldSkip(collection, count);
    }

    private static IEnumerable<T> YieldSkip<T>(IReadOnlyList<T> collection, int count)
    {
        // Jump straight to the first wanted index instead of enumerating.
        for (int index = count; index < collection.Count; index++)
        {
            yield return collection[index];
        }
    }
}
In addition, you can implement it for IEnumerable<T>, but include a type check inside as an optimization:

if (collection is IReadOnlyList<T>)
{
    // do optimized skip
}
Such a check is used in many places in the LINQ source code (but unfortunately not in Skip).
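Putting those pieces together, a minimal sketch of such an optimized method might look like this (SmartSkip is a made-up name; this is not the BCL implementation):

public static IEnumerable<T> SmartSkip<T>(this IEnumerable<T> source, int count)
{
    if (source is IReadOnlyList<T> list)
    {
        // Fast path: random access lets us jump straight to the target index.
        for (int i = count; i < list.Count; i++)
            yield return list[i];
    }
    else
    {
        // Fallback: plain enumeration, same behavior as LINQ's Skip.
        foreach (var item in source.Skip(count))
            yield return item;
    }
}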

It depends on your implementation, but it would make sense to use an indexed collection (an array or IList<T>) for the purpose instead.

Related

Accessing yield return collection

Is there any way to access the IEnumerable<T> collection being built up by yield return in a loop, from within the method building the IEnumerable itself?
Silly example:
Random random = new Random();

IEnumerable<int> UniqueRandomIntegers(int n, int max)
{
    while ([RETURN_VALUE].Count() < n)
    {
        int value = random.Next(max);
        if (![RETURN_VALUE].Contains(value))
            yield return value;
    }
}
There is no collection being built up. The sequence that is returned is evaluated lazily, and unless the caller explicitly copies the data to another collection, it will be gone as soon as it's been fetched.
If you want to ensure uniqueness, you'll need to do that yourself. For example:
IEnumerable<int> UniqueRandomIntegers(int n, int max)
{
    HashSet<int> returned = new HashSet<int>();
    for (int i = 0; i < n; i++)
    {
        int candidate;
        do
        {
            candidate = random.Next(max);
        } while (returned.Contains(candidate));
        yield return candidate;
        returned.Add(candidate);
    }
}
Another alternative for unique random integers is to build a collection of max items and shuffle it, which can still be done just-in-time. This is more efficient in the case where max and n are similar (as you don't need to loop round until you're lucky enough to get a new item) but inefficient in the case where max is very large and n isn't.
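A rough sketch of that shuffle approach, as a partial Fisher-Yates shuffle (my naming, assuming 0 <= n <= max):

static IEnumerable<int> UniqueRandomIntegersByShuffle(int n, int max, Random random)
{
    int[] pool = Enumerable.Range(0, max).ToArray();
    for (int i = 0; i < n; i++)
    {
        // Pick a random element from the not-yet-shuffled tail,
        // swap it into position i, then yield it.
        int j = random.Next(i, max);
        int temp = pool[i];
        pool[i] = pool[j];
        pool[j] = temp;
        yield return pool[i];
    }
}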
EDIT: As noted in comments, you can shorten this slightly by changing the body of the for loop to:
int candidate;
do
{
    candidate = random.Next(max);
} while (!returned.Add(candidate));
yield return candidate;
That uses the fact that Add will return false if the item already exists in the set.

Writing the implementation for List<T> using IEnumerable<T> from scratch

Out of boredom I decided to write the implementation of List from scratch using IEnumerable. I ran into a few issues that I honestly don't know how to solve:
How would you resize a generic array (T[]) when an index is nulled or set to default(T)?
Since you cannot null T, how do you overcome the numerical primitive problem with their values being 0 by default?
If nothing can be done regarding #2, how do you stop the GetEnumerator() method from yield returning 0 when utilizing a numerical data type?
Last but not least, what is the standard practice regarding downsizing an array? I know for certain that one of the best solutions for upsizing is to increase the current length by a power of 2; if and when do you downsize? Per Remove/RemoveAt or by the currently used length % 2?
Here's what I've done so far:
public class List<T> : IEnumerable<T>
{
    T[] list = new T[32];
    int current;

    public void Add(T item)
    {
        if (current + 1 > list.Length)
        {
            T[] temp = new T[list.Length * 2];
            Array.Copy(list, temp, list.Length);
            list = temp;
        }
        list[current] = item;
        current++;
    }

    public void Remove(T item)
    {
        for (int i = 0; i < list.Length; i++)
            if (list[i].Equals(item))
                list[i] = default(T);
    }

    public void RemoveAt(int index)
    {
        list[index] = default(T);
    }

    public IEnumerator<T> GetEnumerator()
    {
        foreach (T item in list)
            if (item != null && !item.Equals(default(T)))
                yield return item;
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        foreach (T item in list)
            if (item != null && !item.Equals(default(T)))
                yield return item;
    }
}
Thanks in advance.
Well, for starters, your Remove and RemoveAt methods do not implement the same behavior as List<T>. List<T> decreases in size by 1, whereas your list stays the same size. You should shift the values at higher indexes down by one, starting from the removed item.
Also, GetEnumerator should simply iterate over the items in use, regardless of their values.
I believe that will solve all of the issues you have. If someone adds a default(T) to the list, then a default(T) is what they will get back out again, regardless of whether T is an int (and thus 0) or a class type (and thus null).
Finally, on downsizing: some growable array implementations rationalize that, if the array had ever gotten so big, then it is more likely than usual to get that big again. For that reason, they specifically avoid downsizing.
The key problem you're running into is how to maintain the internal array and what Remove should do. List<T> does not leave gaps in its internal array, i.e. it does not support partial arrays. That doesn't mean you can't, but doing so is far more complicated. To exactly mimic List<T>, you want to keep an array plus a field for the number of elements actually in use (the list length, which is less than or equal to the array length).
Add is easy: you add an element to the end, as you did.
Remove is more complicated. If you are removing an element from the end, set that element to default(T) and decrease the list length. If you are removing an element from the beginning or middle, you need to shift the contents of the array and set the last slot to default(T). The reason we set the last element to default(T) is to clear the reference, not so we can tell whether or not it's "in use"; we know whether it's "in use" from its position relative to the stored list length.
Another key piece is the enumerator: loop through the elements from the start until you hit the list length, and don't skip nulls.
This is not a complete implementation, but it should be a correct implementation of the methods you started.
btw, I would not agree with
I know for certain that the best solution for upsizing is to increase the current length by a power of 2
This is the default behavior of List<T>, but it's not the best solution in all situations. That's exactly why List<T> allows you to specify a capacity. If you're loading a list from a source and know how many items you're adding, you can pre-initialize the capacity of the list to reduce the number of copies. Similarly, if you're creating hundreds or thousands of lists that are larger than the default size (or likely to be), it can benefit memory utilization to pre-initialize them all to the same size. That way the blocks they allocate and free are the same contiguous size and can be allocated and deallocated repeatedly with little overhead. For example, we have a reporting calculation engine that creates about 300,000 lists per run, with many runs a second. We know the lists are always a few hundred items each, so we pre-initialize them all to a capacity of 1024. This is more than most need, but since they're all the same length and are created and disposed of very quickly, it makes memory reuse efficient.
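As a quick illustration of that last point (the names and sizes here are made up):

// Pre-sizing avoids repeated grow-and-copy cycles when the final size
// is roughly known in advance; the list starts with capacity 1024 and count 0.
var results = new List<decimal>(1024);

With that aside out of the way, here is the corrected implementation: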
public class MyList<T> : IEnumerable<T>
{
    T[] list = new T[32];
    int listLength;

    public void Add(T item)
    {
        if (listLength + 1 > list.Length)
        {
            T[] temp = new T[list.Length * 2];
            Array.Copy(list, temp, list.Length);
            list = temp;
        }
        list[listLength] = item;
        listLength++;
    }

    public void Remove(T item)
    {
        // Only search the portion of the array that is in use.
        for (int i = 0; i < listLength; i++)
            if (Equals(list[i], item))
            {
                RemoveAt(i);
                return;
            }
    }

    public void RemoveAt(int index)
    {
        if (index < 0 || index >= listLength)
        {
            throw new ArgumentException("'index' must be between 0 and list length.");
        }
        if (index == listLength - 1)
        {
            list[index] = default(T);
            listLength = index;
            return;
        }
        // Shift everything after the removed element one slot to the left,
        // then clear the now-unused last slot to release any reference.
        Array.Copy(list, index + 1, list, index, listLength - index - 1);
        listLength--;
        list[listLength] = default(T);
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = 0; i < listLength; i++)
        {
            yield return list[i];
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
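A quick sanity check of the corrected Remove behavior (a hypothetical usage, with expected output in comments):

var list = new MyList<int>();
list.Add(1);
list.Add(2);
list.Add(3);
list.Remove(2);                              // shifts 3 left; length drops to 2
Console.WriteLine(string.Join(",", list));   // prints "1,3"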

Linq: The "opposite" of Take?

Using LINQ, how can I do the "opposite" of Take?
I.e. instead of getting the first n elements such as in
aCollection.Take(n)
I want to get everything but the last n elements. Something like
aCollection.Leave(n)
(Don't ask why :-)
Edit
I suppose I can do it this way: aCollection.TakeWhile((x, index) => index < aCollection.Count - n), or in the form of an extension method:
public static IEnumerable<TSource> Leave<TSource>(this IEnumerable<TSource> source, int n)
{
    return source.TakeWhile((x, index) => index < source.Count() - n);
}
But in the case of LINQ to SQL or NHibernate Linq it would have been nice if the generated SQL took care of it and produced something like (for SQL Server/T-SQL):

SELECT TOP(SELECT COUNT(*) - @n FROM ATable) * FROM ATable

or some other more clever SQL implementation.
I suppose there is nothing like it?
(But the edit was actually not part of the question.)
aCollection.Take(aCollection.Count() - n);
EDIT: Just as a piece of interesting information that came up in the comments: you may think that the IEnumerable extension method .Count() is slow, because it has to iterate through all elements. But if the actual object implements ICollection or ICollection<T>, it just uses the .Count property, which should be O(1). So performance will not suffer in that case.
You can see the source code of IEnumerable.Count() at TypeDescriptor.net.
I'm pretty sure there's no built-in method for this, but this can be done easily by chaining Reverse and Skip:
aCollection.Reverse().Skip(n).Reverse()
I don't believe there's a built-in function for this.
aCollection.Take(aCollection.Count - n)
should be suitable; taking the total number of items in the collection minus n should skip the last n elements.
Keeping with the IEnumerable philosophy, and going through the enumeration only once for cases where ICollection isn't implemented, you can use these extension methods:
public static IEnumerable<T> Leave<T>(this ICollection<T> src, int drop) => src.Take(src.Count - drop);

public static IEnumerable<T> Leave<T>(this IEnumerable<T> src, int drop)
{
    IEnumerable<T> IEnumHelper()
    {
        using (var esrc = src.GetEnumerator())
        {
            // Buffer the first 'drop' items.
            var buf = new Queue<T>();
            while (drop-- > 0)
                if (esrc.MoveNext())
                    buf.Enqueue(esrc.Current);
                else
                    break;
            // Yield with a lag of 'drop' items; the last 'drop' items
            // remain in the queue and are never returned.
            while (esrc.MoveNext())
            {
                buf.Enqueue(esrc.Current);
                yield return buf.Dequeue();
            }
        }
    }
    return (src is ICollection<T> csrc) ? csrc.Leave(drop) : IEnumHelper();
}
This will be much more efficient than the solutions with a double Reverse, since it buffers only drop items and enumerates the source only once.
public static class Extensions
{
    public static IEnumerable<T> Leave<T>(this IEnumerable<T> items, int numToSkip)
    {
        var list = items.ToList();
        // Assert numToSkip <= list.Count.
        list.RemoveRange(list.Count - numToSkip, numToSkip);
        return list;
    }
}
string alphabet = "abcdefghijklmnopqrstuvwxyz";
var chars = alphabet.Leave(10); // abcdefghijklmnop
Newer versions of .NET define a TakeLast(n) method, which takes elements from the end of a sequence.
See here: https://msdn.microsoft.com/en-us/library/hh212114(v=vs.103).aspx
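For example (my own illustration; both methods live in System.Linq on .NET Core 2.0+ / .NET Standard 2.1):

string alphabet = "abcdefghijklmnopqrstuvwxyz";
var allButLastTen = new string(alphabet.SkipLast(10).ToArray()); // "abcdefghijklmnop"
var lastTen = new string(alphabet.TakeLast(10).ToArray());       // "qrstuvwxyz"

On those frameworks, SkipLast(n) is the direct built-in answer to the question.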

In C#, when using List<T>, is it good to cache the Count property, or is the property fast enough?

In other words, which of the following would be faster, if any?
List<MyClass> myList;
...
...
foreach (Whatever whatever in SomeOtherLongList)
{
    ...
    if (i < myList.Count)
    {
        ...
    }
}
or
List<MyClass> myList;
...
...
int listCount = myList.Count;
foreach (Whatever whatever in SomeOtherLongList)
{
    ...
    if (i < listCount)
    {
        ...
    }
}
Thanks :)
Count is just an integer field. It doesn't get calculated when you ask for its value; it's "pre-calculated", so the two are the same. Option 1 is more readable :)
For List<T> there's really no need to cache it, as it is just a simple property.
However, the Count() extension method, which can be used on any IEnumerable, can be very expensive: it may need to enumerate the entire sequence in order to count it (for lists it just uses the property, but anything else is enumerated). Also, if you just need to know whether the count is non-zero, the Any() extension method is preferred.
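For instance (a trivial illustration, where sequence is any IEnumerable<T>):

bool hasItems = sequence.Any();           // stops after the first element
bool hasItemsSlow = sequence.Count() > 0; // may walk the entire sequence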
You could use Reflector to take a look at the implementation of Count:
public int Count
{
    get
    {
        return this._size;
    }
}
As we can see, Count is just a property returning the member _size, which is always updated when items are added to or removed from the list:
public void Add(T item)
{
    if (this._size == this._items.Length)
    {
        this.EnsureCapacity(this._size + 1);
    }
    this._items[this._size++] = item;
    this._version++;
}

public void RemoveAt(int index)
{
    if (index >= this._size)
    {
        ThrowHelper.ThrowArgumentOutOfRangeException();
    }
    this._size--;
    if (index < this._size)
    {
        Array.Copy(this._items, index + 1, this._items, index, this._size - index);
    }
    this._items[this._size] = default(T);
    this._version++;
}
so there is clearly no need to cache the property.
Caching it explicitly will be faster because you save the property-getter calls, even though the getter just returns a field.
Since the value could change between loop iterations, the compiler won't eliminate these calls, as that could change the semantics of the code.
The first one is more readable and the better option; you also avoid spending memory on an extra int (listCount).
There won't be any performance difference between the two.
Count is maintained by the List<T> itself as items are added and removed.

Can LINQ use binary search when the collection is ordered?

Can I somehow "instruct" LINQ to use binary search when the collection that I'm trying to search is ordered. I'm using an ObservableCollection<T>, populated with ordered data, and I'm trying to use Enumerable.First(<Predicate>). In my predicate, I'm filtering by the value of the field my collection's sorted by.
As far as I know, it's not possible with the built-in methods. However, it would be relatively easy to write an extension method that would allow you to write something like this:
var item = myCollection.BinarySearch(i => i.Id, 42);
(assuming, of course, that your collection implements IList<T>; there's no way to perform a binary search if you can't access the items randomly)
Here's a sample implementation:
public static T BinarySearch<T, TKey>(this IList<T> list, Func<T, TKey> keySelector, TKey key)
    where TKey : IComparable<TKey>
{
    if (list.Count == 0)
        throw new InvalidOperationException("Item not found");

    int min = 0;
    int max = list.Count;
    while (min < max)
    {
        int mid = min + ((max - min) / 2);
        T midItem = list[mid];
        TKey midKey = keySelector(midItem);
        int comp = midKey.CompareTo(key);
        if (comp < 0)
        {
            min = mid + 1;
        }
        else if (comp > 0)
        {
            max = mid - 1;
        }
        else
        {
            return midItem;
        }
    }
    if (min == max &&
        min < list.Count &&
        keySelector(list[min]).CompareTo(key) == 0)
    {
        return list[min];
    }
    throw new InvalidOperationException("Item not found");
}
(not tested... a few adjustments might be necessary) Now tested and fixed ;)
The fact that it throws an InvalidOperationException may seem strange, but that's what Enumerable.First does when there's no matching item.
The accepted answer is very good.
However, I needed BinarySearch to return the index of the first larger item (encoded as a bitwise complement when the key is not found), as List<T>.BinarySearch() does.
So I looked at its implementation using ILSpy, then modified it to take a selector parameter. I hope it will be as useful to someone as it is for me:
public static class ListExtensions
{
    public static int BinarySearch<T, U>(this IList<T> tf, U target, Func<T, U> selector)
    {
        var lo = 0;
        var hi = tf.Count - 1;
        var comp = Comparer<U>.Default;
        while (lo <= hi)
        {
            var median = lo + ((hi - lo) >> 1);
            var num = comp.Compare(selector(tf[median]), target);
            if (num == 0)
                return median;
            if (num < 0)
                lo = median + 1;
            else
                hi = median - 1;
        }
        // Not found: return the bitwise complement of the index of the
        // first element with a larger key (the insertion point).
        return ~lo;
    }
}
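Hypothetical usage, assuming people is a List<Person> sorted ascending by Id (Person and Id are placeholder names):

int index = people.BinarySearch(42, p => p.Id);
if (index < 0)
{
    int insertionPoint = ~index; // index of the first item with a larger Id
    // e.g. people.Insert(insertionPoint, newPerson) keeps the list sorted
}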
Well, you can write your own extension method over ObservableCollection<T>, but it would then be available for any ObservableCollection<T> in scope, without knowing whether it's sorted or not.
You'd also have to indicate in the predicate what you wanted to find, which would be better done with an expression tree... but that would be a pain to parse. Basically, the signature of First isn't really suitable for a binary search.
I suggest you don't try to overload the existing signatures, but write a new one, e.g.
public static TElement BinarySearch<TElement, TKey>(
    this IList<TElement> collection,
    Func<TElement, TKey> keySelector,
    TKey key)
(I'm not going to implement it right now, but I can do so later if you want.)
By providing a function, you can search by the property the collection is sorted by, rather than by the items themselves.
Enumerable.First(predicate) works on an IEnumerable<T>, which only supports enumeration and therefore has no random access to the items within.
Also, your predicate contains arbitrary code that eventually results in true or false, and so cannot indicate whether the tested item was too low or too high. This information would be needed in order to do a binary search.
Enumerable.First(predicate) can only test each item in order as it walks through the enumeration.
Keep in mind that all (or at least most) of the extension methods used by LINQ are implemented on IQueryable<T>, IEnumerable<T>, IOrderedEnumerable<T>, or IOrderedQueryable<T>.
None of these interfaces supports random access, and therefore none of them can be used for a binary search. One of the benefits of something like LINQ is that you can work with large datasets without having to load the entire dataset from the database. Obviously you can't binary search something if you don't even have all of the data yet.
But as others have said, there is no reason at all you can't write this extension method for IList<T> or other collection types that support random access.
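For instance, since ObservableCollection<T> implements IList<T>, an IList<T>-based extension like the BinarySearch sketched earlier applies to it directly (assuming the collection is kept sorted by Id):

// collection is an ObservableCollection<Item> sorted ascending by Id
var match = collection.BinarySearch(i => i.Id, 42);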
