A method to find common integers between 2 arrays - C#

I need to write a method to find the common elements between 2 arrays in C#, but I can't convert my old Python logic to C#.
It used to look like this in Python:
def commonfinder(list1, list2):
    commonlist = []
    for x in list1:
        for y in list2:
            if x == y:
                commonlist.append(x)
    return commonlist
but when I tried to convert it to C#:
public int[] Commons(int[] ar1, int[] ar2)
{
    int commoncount;
    int[] Commonslist = new int[commoncount];
    foreach (int x in ar1)
    {
        foreach (int y in ar2)
        {
            if (x == y)
            {
                commoncount++;
                // here I should add x to Commonlist
            }
        }
    }
    return Commonslist;
}
I couldn't find any method or function that would append x to my Commonslist,
and of course I got a lot of errors I couldn't solve.
Can I get a tip?

Your original algorithm has O(n * m) time complexity, which can be too slow:
imagine that you have lists of 1 million items each (1 trillion comparisons to perform). You can implement better code with only O(n + m) complexity:
Code: (let's generalize the problem)
using System.Collections.Generic;
using System.Linq;
...
public static T[] CommonFinder<T>(IEnumerable<T> left,
                                  IEnumerable<T> right,
                                  IEqualityComparer<T> comparer = null) {
    if (null == left || null == right)
        return new T[0]; // Or throw ArgumentNullException

    comparer = comparer ?? EqualityComparer<T>.Default;

    // Count how many times each item appears in "right"
    Dictionary<T, int> dict = right
        .GroupBy(item => item, comparer)
        .ToDictionary(group => group.Key, group => group.Count(), comparer);

    List<T> result = new List<T>();

    foreach (T item in left)
        if (dict.TryGetValue(item, out int count)) {
            result.Add(item);

            if (count <= 1)
                dict.Remove(item);
            else
                dict[item] = count - 1;
        }

    return result.ToArray();
}
Demo:
int[] left = new int[] { 1, 2, 3, 4, 5 };
int[] right = new int[] { 0, 3, 2, 6, 9};
var common = CommonFinder(left, right);
Console.WriteLine(string.Join(", ", common));
Outcome:
2, 3
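If you only need each common value once and don't care about duplicates, LINQ's built-in Intersect already does the job. A minimal sketch, reusing the sample arrays from the demo above:
using System;
using System.Linq;
...
int[] left = new int[] { 1, 2, 3, 4, 5 };
int[] right = new int[] { 0, 3, 2, 6, 9 };
// Intersect yields each common value exactly once, in the order of "left".
var common = left.Intersect(right);
Console.WriteLine(string.Join(", ", common)); // 2, 3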

Note: What I understood is that you want a method that takes 2 int arrays and yields 1 int array containing the unique intersecting values.
You can use a HashSet to speed up insert and lookup time (amortized O(1)). The running time is O(Max(n, m)), i.e. O(n + m), because both arrays have to be traversed once each. The memory usage is O(Min(n, m)), because we populate the set from the smaller array, and the rest of the logic can never hold more elements than that smaller array, since it is the intersection.
The Main method shows you how to utilize the method. CommonIntegers has the logic you are looking for.
using System;
using System.Collections.Generic;
using System.Linq;

namespace TestCode.StackOverflow
{
    public class So66935672
    {
        public static void Main(string[] args)
        {
            int[] intArray1 = new int[] { 9, 9, 1, 3, 5, 6, 10, 9 };
            int[] intArray2 = new int[] { 19, 17, 16, 5, 1, 6 };

            Console.Write(
                CommonIntegers(intArray1, intArray2)
                    .Select(i => $"{i}, ")
                    .Aggregate(string.Empty, string.Concat));
        }

        private static int[] CommonIntegers(int[] intArray1, int[] intArray2)
        {
            if (intArray1 == null || intArray1.Length == 0
                || intArray2 == null || intArray2.Length == 0)
            {
                return Array.Empty<int>();
            }

            var primaryArraySet = new HashSet<int>(); // Contains the unique values from the shorter array
            var intersectSet = new HashSet<int>();    // Contains unique values found in both arrays
            int[] secondarySet;

            // Fill primary set
            if (intArray1.Length > intArray2.Length)
            {
                foreach (var i in intArray2)
                    primaryArraySet.Add(i);
                secondarySet = intArray1;
            }
            else
            {
                foreach (var i in intArray1)
                    primaryArraySet.Add(i);
                secondarySet = intArray2;
            }

            // Fill intersect set
            foreach (var i in secondarySet)
                if (primaryArraySet.Contains(i))
                    intersectSet.Add(i);

            return intersectSet.ToArray();
        }
    }
}

You can try this one:
static List<int> CommonFinder(List<int> list1, List<int> list2)
{
    List<int> commonList = new List<int>();
    foreach (int x in list1)
        foreach (int y in list2)
            if (x == y)
                commonList.Add(x);
    return commonList;
}

static void Main()
{
    List<int> list1 = new List<int> { 1, 2, 3 };
    List<int> list2 = new List<int> { 2, 3, 4 };
    var common = CommonFinder(list1, list2);
    Console.WriteLine(string.Join(", ", common));
}

Related

How to filter members of more than one list with LINQ? [duplicate]

How do I select the unique elements from the list {0, 1, 2, 2, 2, 3, 4, 4, 5} so that I get {0, 1, 3, 5}, effectively removing all instances of the repeated elements {2, 4}?
var numbers = new[] { 0, 1, 2, 2, 2, 3, 4, 4, 5 };
var uniqueNumbers =
    from n in numbers
    group n by n into nGroup
    where nGroup.Count() == 1
    select nGroup.Key;
// { 0, 1, 3, 5 }
var nums = new int[] { 0, 1, 2, 2, 2, 3, 4, 4, 5 };
var distinct = nums.Distinct();
Make sure you're using LINQ (using System.Linq;) and .NET Framework 3.5 or later.
With lambda..
var all = new[] {0,1,1,2,3,4,4,4,5,6,7,8,8}.ToList();
var unique = all.GroupBy(i => i).Where(i => i.Count() == 1).Select(i=>i.Key);
C# 2.0 solution:
static IEnumerable<T> GetUniques<T>(IEnumerable<T> things)
{
    Dictionary<T, int> counts = new Dictionary<T, int>();
    foreach (T item in things)
    {
        int count;
        if (counts.TryGetValue(item, out count))
            counts[item] = ++count;
        else
            counts.Add(item, 1);
    }
    foreach (KeyValuePair<T, int> kvp in counts)
    {
        if (kvp.Value == 1)
            yield return kvp.Key;
    }
}
Here is another way that works if you have complex type objects in your List and want to get the unique values of a property:
var uniqueValues = myItems.Select(k => k.MyProperty)
                          .GroupBy(g => g)
                          .Where(c => c.Count() == 1)
                          .Select(k => k.Key)
                          .ToList();
Or to get distinct values:
var distinctValues = myItems.Select(p => p.MyProperty)
                            .Distinct()
                            .ToList();
If your property is also a complex type you can create a custom comparer for Distinct(), such as Distinct(new OrderComparer()), where OrderComparer could look like:
public class OrderComparer : IEqualityComparer<Order>
{
    public bool Equals(Order o1, Order o2)
    {
        return o1.OrderID == o2.OrderID;
    }

    public int GetHashCode(Order obj)
    {
        return obj.OrderID.GetHashCode();
    }
}
If LINQ isn't available to you because you have to support legacy code that can't be upgraded, then declare a Dictionary<int, int>, where the key is the number and the value is the number of occurrences. Loop through your List, loading up your Dictionary. When you're done, loop through your Dictionary, selecting only those elements where the number of occurrences is 1.
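A minimal sketch of that approach for a List<int> (the method and variable names here are illustrative, not from the original post):
static List<int> UniquesWithoutLinq(List<int> numbers)
{
    // First pass: count how often each number occurs.
    Dictionary<int, int> occurrences = new Dictionary<int, int>();
    foreach (int n in numbers)
    {
        int count;
        occurrences.TryGetValue(n, out count);
        occurrences[n] = count + 1;
    }

    // Second pass: keep only the numbers that occurred exactly once.
    List<int> uniques = new List<int>();
    foreach (KeyValuePair<int, int> kvp in occurrences)
    {
        if (kvp.Value == 1)
            uniques.Add(kvp.Key);
    }
    return uniques;
}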
I believe Matt meant to say:
static IEnumerable<T> GetUniques<T>(IEnumerable<T> things)
{
    Dictionary<T, bool> uniques = new Dictionary<T, bool>();
    foreach (T item in things)
    {
        if (!(uniques.ContainsKey(item)))
        {
            uniques.Add(item, true);
        }
    }
    return uniques.Keys;
}
There are many ways to skin a cat, but HashSet seems made for the task here.
var numbers = new[] { 0, 1, 2, 2, 2, 3, 4, 4, 5 };
HashSet<int> r = new HashSet<int>(numbers);
foreach (int i in r)
{
    Console.Write("{0} ", i);
}
The output:
0 1 2 3 4 5
Here's a solution with no LINQ:
var numbers = new[] { 0, 1, 2, 2, 2, 3, 4, 4, 5 };
// This assumes the numbers are sorted
var noRepeats = new List<int>();
int temp = numbers[0]; // Or .First() if using IEnumerable
var count = 1;
for (int i = 1; i < numbers.Length; i++) // Or foreach (var n in numbers.Skip(1)) if using IEnumerable
{
    if (numbers[i] == temp) count++;
    else
    {
        if (count == 1) noRepeats.Add(temp);
        temp = numbers[i];
        count = 1;
    }
}
if (count == 1) noRepeats.Add(temp);
Console.WriteLine($"[{string.Join(separator: ",", values: numbers)}] -> [{string.Join(separator: ",", values: noRepeats)}]");
This prints:
[0,1,2,2,2,3,4,4,5] -> [0,1,3,5]
In .NET 2.0 I'm pretty sure about this solution:
public IEnumerable<T> Distinct<T>(IEnumerable<T> source)
{
    List<T> uniques = new List<T>();
    foreach (T item in source)
    {
        if (!uniques.Contains(item)) uniques.Add(item);
    }
    return uniques;
}

Find all index numbers of a value in array [duplicate]

This question already has answers here:
c# Array.FindAllIndexOf which FindAll IndexOf
(10 answers)
Closed 8 years ago.
How can I find all positions of a value in an array?
class Program
{
    static void Main(string[] args)
    {
        int start = 0;
        int[] numbers = new int[7] { 2, 1, 2, 1, 5, 6, 5 };
    }
}
Something like this:
int[] numbers = new[] { 2, 1, 2, 1, 5, 6, 5 };
int toFind = 5;

// all indexes of "5": {4, 6}
int[] indexes = numbers
    .Select((v, i) => new {
        value = v,
        index = i
    })
    .Where(pair => pair.value == toFind)
    .Select(pair => pair.index)
    .ToArray();
List<int> indexes = new List<int>();
for (int i = 0; i < numbers.Length; i++)
{
    if (numbers[i] == yourNumber)
        indexes.Add(i);
}
Usage is: Array.IndexOf(array, value).
Please refer to the MSDN documentation below.
http://msdn.microsoft.com/en-us/library/system.array.indexof(v=vs.110).aspx
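Note that Array.IndexOf only returns the first matching index. To collect all of them with this API you can keep calling the overload that takes a start index; a small sketch of that idea (not from the original answer):
int[] numbers = new[] { 2, 1, 2, 1, 5, 6, 5 };
int toFind = 5;

List<int> indexes = new List<int>();
int pos = Array.IndexOf(numbers, toFind);
while (pos >= 0)
{
    indexes.Add(pos);
    // Continue the search just after the previous hit.
    pos = Array.IndexOf(numbers, toFind, pos + 1);
}
// indexes now contains 4 and 6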
You can make a really simple extension method for sequences to do this:
public static class SequenceExt
{
    public static IEnumerable<int> IndicesOfAllElementsEqualTo<T>
    (
        this IEnumerable<T> sequence,
        T target
    ) where T : IEquatable<T>
    {
        int index = 0;
        foreach (var item in sequence)
        {
            if (item.Equals(target))
                yield return index;
            ++index;
        }
    }
}
The extension method works with List<>, arrays, IEnumerable<T> and other collections.
Then your code would look something like this:
var numbers = new [] { 2, 1, 2, 1, 5, 6, 5 };
var indices = numbers.IndicesOfAllElementsEqualTo(5); // Use extension method.
// Make indices into an array if you want, like so
// (not really necessary for this sample code):
var indexArray = indices.ToArray();
// This prints "4, 6":
Console.WriteLine(string.Join(", ", indexArray));
LINQ could help:
var indexes = numbers
    .Select((x, idx) => new { x, idx })
    .Where(c => c.x == number)
    .Select(c => c.idx);

Combine entries from two lists by position using LINQ

Say I have two lists with following entries
List<int> a = new List<int> { 1, 2, 5, 10 };
List<int> b = new List<int> { 6, 20, 3 };
I want to create another List c where its entries are items inserted by position from two lists. So List c would contain the following entries:
List<int> c = {1, 6, 2, 20, 5, 3, 10}
Is there a way to do it in .NET using LINQ? I was looking at .Zip() LINQ extension, but wasn't sure how to use it in this case.
Thanks in advance!
To do it using LINQ, you can use this piece of LINQPad example code:
void Main()
{
    List<int> a = new List<int> { 1, 2, 5, 10 };
    List<int> b = new List<int> { 6, 20, 3 };

    var result = Enumerable.Zip(a, b, (aElement, bElement) => new[] { aElement, bElement })
        .SelectMany(ab => ab)
        .Concat(a.Skip(Math.Min(a.Count, b.Count)))
        .Concat(b.Skip(Math.Min(a.Count, b.Count)));

    result.Dump();
}
Output: 1, 6, 2, 20, 5, 3, 10
This will:
Zip the two lists together (which will stop when either runs out of elements)
Producing an array containing the two elements (one from a, another from b)
Using SelectMany to "flatten" this out to one sequence of values
Concatenate in the remainder from either list (only one or neither of the two calls to Concat should add any elements)
Now, having said that, personally I would've used this:
public static IEnumerable<T> Intertwine<T>(this IEnumerable<T> a, IEnumerable<T> b)
{
    using (var enumerator1 = a.GetEnumerator())
    using (var enumerator2 = b.GetEnumerator())
    {
        bool more1 = enumerator1.MoveNext();
        bool more2 = enumerator2.MoveNext();
        while (more1 && more2)
        {
            yield return enumerator1.Current;
            yield return enumerator2.Current;
            more1 = enumerator1.MoveNext();
            more2 = enumerator2.MoveNext();
        }
        while (more1)
        {
            yield return enumerator1.Current;
            more1 = enumerator1.MoveNext();
        }
        while (more2)
        {
            yield return enumerator2.Current;
            more2 = enumerator2.MoveNext();
        }
    }
}
Reasons:
It doesn't enumerate either a or b more than once
I'm skeptical about the performance of Skip
It can work with any IEnumerable<T> and not just List<T>
I'd create an extension method to do it.
public static List<T> MergeAll<T>(this List<T> first, List<T> second)
{
    int maxCount = (first.Count > second.Count) ? first.Count : second.Count;
    var ret = new List<T>();
    for (int i = 0; i < maxCount; i++)
    {
        if (i < first.Count)
            ret.Add(first[i]);
        if (i < second.Count)
            ret.Add(second[i]);
    }
    return ret;
}
This would iterate through both lists once. If one list is bigger than the other it will continue to add until it's done.
You could try this code:
List<int> c = a.Select((i, index) => new Tuple<int, int>(i, index * 2))
    .Union(b.Select((i, index) => new Tuple<int, int>(i, index * 2 + 1)))
    .OrderBy(t => t.Item2)
    .Select(t => t.Item1).ToList();
It makes a union of two collections and then sorts that union using index. Elements from the first collection have even indices, from the second - odd ones.
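For instance, with the lists from the question (a = { 1, 2, 5, 10 }, b = { 6, 20, 3 }), the paired indices are (1,0), (2,2), (5,4), (10,6) and (6,1), (20,3), (3,5), so ordering by the index yields:
// c = { 1, 6, 2, 20, 5, 3, 10 }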
Just wrote a little extension for this:
public static class MyEnumerable
{
    public static IEnumerable<T> Smash<T>(this IEnumerable<T> one, IEnumerable<T> two)
    {
        using (IEnumerator<T> enumeratorOne = one.GetEnumerator(),
                              enumeratorTwo = two.GetEnumerator())
        {
            bool twoFinished = false;
            while (enumeratorOne.MoveNext())
            {
                yield return enumeratorOne.Current;
                if (!twoFinished && enumeratorTwo.MoveNext())
                {
                    yield return enumeratorTwo.Current;
                }
            }
            if (!twoFinished)
            {
                while (enumeratorTwo.MoveNext())
                {
                    yield return enumeratorTwo.Current;
                }
            }
        }
    }
}
Usage:
var a = new List<int> { 1, 2, 5, 10 };
var b = new List<int> { 6, 20, 3 };
var c = a.Smash(b); // 1, 6, 2, 20, 5, 3, 10
var d = b.Smash(a); // 6, 1, 20, 2, 3, 5, 10
This will work for any IEnumerable so you can also do:
var a = new List<string> { "the", "brown", "jumped", "the", "lazy", "dog" };
var b = new List<string> { "quick", "fox", "over" };
var c = a.Smash(b); // the, quick, brown, fox, jumped, over, the, lazy, dog
You could use Concat and an anonymous type which you order by the index:
List<int> c = a
    .Select((val, index) => new { val, index })
    .Concat(b.Select((val, index) => new { val, index }))
    .OrderBy(x => x.index)
    .Select(x => x.val)
    .ToList();
However, since that's not really elegant and also less efficient than:
c = new List<int>(a.Count + b.Count);
int max = Math.Max(a.Count, b.Count);
int aMax = a.Count;
int bMax = b.Count;
for (int i = 0; i < max; i++)
{
    if (i < aMax)
        c.Add(a[i]);
    if (i < bMax)
        c.Add(b[i]);
}
I wouldn't use LINQ at all.
Sorry for adding a third extension method inspired by the other two, but I like it shorter:
static IEnumerable<T> Intertwine<T>(this IEnumerable<T> a, IEnumerable<T> b)
{
    using (var enumerator1 = a.GetEnumerator())
    using (var enumerator2 = b.GetEnumerator())
    {
        bool more1 = true, more2 = true;
        do
        {
            if (more1 && (more1 = enumerator1.MoveNext()))
                yield return enumerator1.Current;
            if (more2 && (more2 = enumerator2.MoveNext()))
                yield return enumerator2.Current;
        } while (more1 || more2);
    }
}
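A quick usage sketch with the question's lists (assuming the method is placed in a static class so it can be called as an extension):
var a = new List<int> { 1, 2, 5, 10 };
var b = new List<int> { 6, 20, 3 };
Console.WriteLine(string.Join(", ", a.Intertwine(b))); // 1, 6, 2, 20, 5, 3, 10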

Linq thenby running indefinitely

I have a function that is simply meant to print out a dictionary of frequent item sets in an easy-to-understand fashion. The goal is to order first by the size of the dictionary key and then by the lexicographical order of a list of numbers. The issue arises in the ThenBy statement as the commented out "hello" will get printed indefinitely. If I change the ThenBy to not use the comparer and simply use another int or string value, it works fine, so I'm clearly doing something wrong.
public static void printItemSets(Dictionary<List<int>, int> freqItemSet)
{
    List<KeyValuePair<List<int>, int>> printList = freqItemSet.ToList();
    printList = printList.OrderBy(x => x.Key.Count)
                         .ThenBy(x => x.Key, new ListComparer())
                         .ToList();
}
The code for the ListComparer is as follows:
public class ListComparer : IEqualityComparer<List<int>>, IComparer<List<int>>
{
    public int Compare(List<int> a, List<int> b)
    {
        int larger = a.Count > b.Count ? 1 : -1;
        for (int i = 0; i < a.Count && i < b.Count; i++)
        {
            if (a[i] < b[i])
            {
                return -1;
            }
            else if (a[i] > b[i])
            {
                return 1;
            }
            else { }
        }
        return larger;
    }
}
VERY simple test case:
int[] a = {1, 3, 5};
int[] b = { 2, 3, 5 };
int[] c = { 1, 2, 3, 5 };
int[] d = { 2, 5 };
int[] e = { 1, 3, 4 };
List<int> aL = a.ToList<int>();
List<int> bL = b.ToList<int>();
List<int> cL = c.ToList<int>();
List<int> dL = d.ToList<int>();
List<int> eL = e.ToList<int>();
Dictionary<List<int>, int> test = new Dictionary<List<int>, int>(new ListComparer());
test.Add(aL, 1);
test.Add(bL, 1);
test.Add(cL, 1);
test.Add(dL, 1);
test.Add(eL, 1);
The issue is that ListComparer does not check whether the two lists are the same. The sort can pass the same list in for both arguments, and because the comparer never returns 0 it reports the list as unequal to itself. Checking whether the two lists are equal (and returning 0) will resolve your issue.
Your comparer doesn't handle equal items. If the items are equal the order of the two items is what determines which is considered "larger". The comparer is thus not "reflexive". Being reflexive is a property sorting algorithms rely on.
The first line should be var larger = a.Count.CompareTo(b.Count); instead, so that truly equal lists will return 0 rather than either -1 or 1.
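A sketch of the Compare method with that fix applied (the element-by-element loop is unchanged from the question's code):
public int Compare(List<int> a, List<int> b)
{
    for (int i = 0; i < a.Count && i < b.Count; i++)
    {
        if (a[i] < b[i]) return -1;
        if (a[i] > b[i]) return 1;
    }
    // A shorter list sorts before a longer one; identical lists now compare as equal (0).
    return a.Count.CompareTo(b.Count);
}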

C# - elegant way of partitioning a list?

I'd like to partition a list into a list of lists, by specifying the number of elements in each partition.
For instance, suppose I have the list {1, 2, ... 11}, and would like to partition it such that each set has 4 elements, with the last set filling as many elements as it can. The resulting partition would look like {{1..4}, {5..8}, {9..11}}
What would be an elegant way of writing this?
Here is an extension method that will do what you want:
public static IEnumerable<List<T>> Partition<T>(this IList<T> source, Int32 size)
{
    for (int i = 0; i < (source.Count / size) + (source.Count % size > 0 ? 1 : 0); i++)
        yield return new List<T>(source.Skip(size * i).Take(size));
}
Edit: Here is a much cleaner version of the function:
public static IEnumerable<List<T>> Partition<T>(this IList<T> source, Int32 size)
{
    for (int i = 0; i < Math.Ceiling(source.Count / (Double)size); i++)
        yield return new List<T>(source.Skip(size * i).Take(size));
}
Using LINQ you could cut your groups up in a single line of code like this...
var x = new List<int>() { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };
var groups = x.Select((i, index) => new
{
    i,
    index
}).GroupBy(group => group.index / 4, element => element.i);
You could then iterate over the groups like the following...
foreach (var group in groups)
{
    Console.WriteLine("Group: {0}", group.Key);
    foreach (var item in group)
    {
        Console.WriteLine("\tValue: {0}", item);
    }
}
and you'll get an output that looks like this...
Group: 0
    Value: 1
    Value: 2
    Value: 3
    Value: 4
Group: 1
    Value: 5
    Value: 6
    Value: 7
    Value: 8
Group: 2
    Value: 9
    Value: 10
    Value: 11
Something like (untested air code):
IEnumerable<IList<T>> PartitionList<T>(IList<T> list, int maxCount)
{
    List<T> partialList = new List<T>(maxCount);
    foreach (T item in list)
    {
        if (partialList.Count == maxCount)
        {
            yield return partialList;
            partialList = new List<T>(maxCount);
        }
        partialList.Add(item);
    }
    if (partialList.Count > 0) yield return partialList;
}
This returns an enumeration of lists rather than a list of lists, but you can easily wrap the result in a list:
IList<IList<T>> listOfLists = new List<IList<T>>(PartitionList<T>(list, maxCount));
To avoid grouping, arithmetic, and re-iterating the source, the following method avoids unnecessary calculations, comparisons, and allocations. Parameter validation is included.
Here is a working demonstration on fiddle.
public static IEnumerable<IList<T>> Partition<T>(
    this IEnumerable<T> source,
    int size)
{
    if (size < 2)
    {
        throw new ArgumentOutOfRangeException(
            nameof(size),
            size,
            "Must be greater or equal to 2.");
    }

    T[] partition;
    int count;

    using (var e = source.GetEnumerator())
    {
        if (e.MoveNext())
        {
            partition = new T[size];
            partition[0] = e.Current;
            count = 1;
        }
        else
        {
            yield break;
        }

        while (e.MoveNext())
        {
            partition[count] = e.Current;
            count++;

            if (count == size)
            {
                yield return partition;
                count = 0;
                partition = new T[size];
            }
        }
    }

    if (count > 0)
    {
        Array.Resize(ref partition, count);
        yield return partition;
    }
}
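Since the method yields fixed-size arrays and only resizes the final one, a quick usage sketch (assuming the method sits in a static extension class) might be:
var parts = Enumerable.Range(1, 11).Partition(4).ToList();
// parts.Count == 3; the first two partitions hold 4 items each,
// and the last one holds 9, 10, 11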
var yourList = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };
var groupSize = 4;
// here's the actual query that does the grouping...
var query = yourList
    .Select((x, i) => new { x, i })
    .GroupBy(i => i.i / groupSize, x => x.x);
// and here's a quick test to ensure that it worked properly...
foreach (var group in query)
{
    foreach (var item in group)
    {
        Console.Write(item + ",");
    }
    Console.WriteLine();
}
If you need an actual List<List<T>> rather than an IEnumerable<IEnumerable<T>> then change the query as follows:
var query = yourList
    .Select((x, i) => new { x, i })
    .GroupBy(i => i.i / groupSize, x => x.x)
    .Select(g => g.ToList())
    .ToList();
Or in .Net 2.0 you would do this:
static void Main(string[] args)
{
    int[] values = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };
    List<int[]> items = new List<int[]>(SplitArray(values, 4));
}

static IEnumerable<T[]> SplitArray<T>(T[] items, int size)
{
    for (int index = 0; index < items.Length; index += size)
    {
        int remains = Math.Min(size, items.Length - index);
        T[] segment = new T[remains];
        Array.Copy(items, index, segment, 0, remains);
        yield return segment;
    }
}
public static IEnumerable<IEnumerable<T>> Partition<T>(this IEnumerable<T> list, int size)
{
    while (list.Any())
    {
        yield return list.Take(size);
        list = list.Skip(size);
    }
}
and for the special case of String
public static IEnumerable<string> Partition(this string str, int size)
{
    return str.Partition<char>(size).Select(AsString);
}

public static string AsString(this IEnumerable<char> charList)
{
    return new string(charList.ToArray());
}
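For example (a hypothetical quick check, assuming the extension methods above live in a static class; not from the original answer):
var pieces = "abcdefgh".Partition(3); // "abc", "def", "gh"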
Using ArraySegments might be a readable and short solution (casting your list to array is required):
var list = new List<int>() { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 }; // Added 0 in front on purpose in order to enhance simplicity.
int[] array = list.ToArray();
int step = 4;
List<int[]> listSegments = new List<int[]>();

for (int i = 0; i < array.Length; i += step)
{
    int[] segment = new ArraySegment<int>(array, i, step).ToArray();
    listSegments.Add(segment);
}
I'm not sure why Jochems answer using ArraySegment was voted down. It could be really useful as long as you are not going to need to extend the segments (cast to IList). For example, imagine that what you are trying to do is pass segments into a TPL DataFlow pipeline for concurrent processing. Passing the segments in as IList instances allows the same code to deal with arrays and lists agnostically.
Of course, that begs the question: Why not just derive a ListSegment class that does not require wasting memory by calling ToArray()? The answer is that arrays can actually be processed marginally faster in some situations (slightly faster indexing). But you would have to be doing some fairly hardcore processing to notice much of a difference. More importantly, there is no good way to protect against random insert and remove operations by other code holding a reference to the list.
Calling ToArray() on a million value numeric list takes about 3 milliseconds on my workstation. That's usually not too great a price to pay when you're using it to gain the benefits of more robust thread safety in concurrent operations, without incurring the heavy cost of locking.
You could use an extension method:
public static IList<HashSet<T>> Partition<T>(this IEnumerable<T> input, Func<T, object> partitionFunc)
{
    Dictionary<object, HashSet<T>> partitions = new Dictionary<object, HashSet<T>>();
    object currentKey = null;
    foreach (T item in input ?? Enumerable.Empty<T>())
    {
        currentKey = partitionFunc(item);
        if (!partitions.ContainsKey(currentKey))
        {
            partitions[currentKey] = new HashSet<T>();
        }
        partitions[currentKey].Add(item);
    }
    return partitions.Values.ToList();
}
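Note that this variant partitions by a key rather than by a fixed size. A hypothetical usage, splitting numbers by parity (assuming the extension method above is declared in a static class):
var byParity = new[] { 1, 2, 3, 4, 5, 6, 7 }.Partition(n => n % 2);
// Two sets: { 1, 3, 5, 7 } and { 2, 4, 6 }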
To avoid multiple checks, unnecessary instantiations, and repetitive iterations, you could use the code:
namespace System.Collections.Generic
{
    using Linq;
    using Runtime.CompilerServices;

    public static class EnumerableExtender
    {
        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        public static bool IsEmpty<T>(this IEnumerable<T> enumerable) => !enumerable?.GetEnumerator()?.MoveNext() ?? true;

        public static IEnumerable<IEnumerable<T>> Partition<T>(this IEnumerable<T> source, int size)
        {
            if (source == null)
                throw new ArgumentNullException(nameof(source));
            if (size < 2)
                throw new ArgumentOutOfRangeException(nameof(size));

            IEnumerable<T> items = source;
            IEnumerable<T> partition;
            while (true)
            {
                partition = items.Take(size);
                if (partition.IsEmpty())
                    yield break;
                else
                    yield return partition;
                items = items.Skip(size);
            }
        }
    }
}
