Context: C# 3.0, .NET 3.5
Suppose I have a method that generates random numbers (forever):
private static IEnumerable<int> RandomNumberGenerator() {
while (true) yield return GenerateRandomNumber(0, 100);
}
I need to group those numbers in groups of 10, so I would like something like:
foreach (IEnumerable<int> group in RandomNumberGenerator().Slice(10)) {
Assert.That(group.Count() == 10);
}
I have defined a Slice method, but I feel there should be one already defined. Here is my Slice method, just for reference:
private static IEnumerable<T[]> Slice<T>(IEnumerable<T> enumerable, int size) {
var result = new List<T>(size);
foreach (var item in enumerable) {
result.Add(item);
if (result.Count == size) {
yield return result.ToArray();
result.Clear();
}
}
}
Question: is there an easier way to accomplish what I'm trying to do? Perhaps LINQ?
Note: the above example is a simplification; in my program I have an iterator that scans a given matrix in a non-linear fashion.
EDIT: Why Skip+Take is no good.
Effectively what I want is:
var group1 = RandomNumberGenerator().Skip(0).Take(10);
var group2 = RandomNumberGenerator().Skip(10).Take(10);
var group3 = RandomNumberGenerator().Skip(20).Take(10);
var group4 = RandomNumberGenerator().Skip(30).Take(10);
without the overhead of regenerating the numbers (10 + 20 + 30 + 40 = 100 generations in total). I need a solution that will generate exactly 40 numbers and break them into 4 groups of 10.
Are Skip and Take of any use to you?
Use a combination of the two in a loop to get what you want.
So,
list.Skip(10).Take(10);
Skips the first 10 records and then takes the next 10.
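For an already-materialized list, the loop might look like the following sketch (groupSize and the loop bounds are illustrative; as the OP's edit points out, each Skip re-enumerates a lazy generator from the start, so this really only makes sense for an in-memory collection):
int groupSize = 10;
for (int i = 0; i < list.Count; i += groupSize)
{
    // each pass materializes the next group of (at most) groupSize items
    var group = list.Skip(i).Take(groupSize).ToList();
    // ... process group ...
}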
I have done something similar, but I would like it to be simpler:
//Remove "this" if you don't want it to be a extension method
public static IEnumerable<IList<T>> Chunks<T>(this IEnumerable<T> xs, int size)
{
var curr = new List<T>(size);
foreach (var x in xs)
{
curr.Add(x);
if (curr.Count == size)
{
yield return curr;
curr = new List<T>(size);
}
}
}
I think yours is flawed. You return the same array for all your chunks/slices, so only the last chunk/slice you take would have the correct data.
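To illustrate the kind of aliasing problem being described, here is a hypothetical sketch (not the code posted above) of a chunker that yields its internal buffer without copying it; every yielded slice is the same List instance, so a caller that materializes the slices later sees only the last-written contents:
private static IEnumerable<List<T>> BadSlice<T>(IEnumerable<T> source, int size)
{
    var buffer = new List<T>(size);
    foreach (var item in source)
    {
        buffer.Add(item);
        if (buffer.Count == size)
        {
            yield return buffer;   // no copy made here
            buffer.Clear();        // mutates the list that was just yielded
        }
    }
}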
Addition: Array version:
public static IEnumerable<T[]> Chunks<T>(this IEnumerable<T> xs, int size)
{
var curr = new T[size];
int i = 0;
foreach (var x in xs)
{
curr[i % size] = x;
if (++i % size == 0)
{
yield return curr;
curr = new T[size];
}
}
}
Addition: Linq version (not C# 2.0). As pointed out, it will not work on infinite sequences and will be a great deal slower than the alternatives:
public static IEnumerable<T[]> Chunks<T>(this IEnumerable<T> xs, int size)
{
return xs.Select((x, i) => new { x, i })
.GroupBy(xi => xi.i / size, xi => xi.x)
.Select(g => g.ToArray());
}
Using Skip and Take would be a very bad idea. Calling Skip on an indexed collection may be fine, but calling it on any arbitrary IEnumerable<T> is liable to result in enumeration over the number of elements skipped, which means that if you're calling it repeatedly you're enumerating over the sequence an order of magnitude more times than you need to be.
Complain of "premature optimization" all you want, but that is just ridiculous.
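A quick way to see the cost is to count how many times the generator actually runs (a sketch; the counter field is purely for illustration):
static int calls = 0;
static IEnumerable<int> Counted()
{
    while (true) { calls++; yield return calls; }
}

// Each Skip(n).Take(10) walks the sequence from the start again:
var g1 = Counted().Skip(0).Take(10).ToList();
var g2 = Counted().Skip(10).Take(10).ToList();
var g3 = Counted().Skip(20).Take(10).ToList();
var g4 = Counted().Skip(30).Take(10).ToList();
Console.WriteLine(calls);   // 100 (10 + 20 + 30 + 40), not 40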
I think your Slice method is about as good as it gets. I was going to suggest a different approach that would provide deferred execution and obviate the intermediate array allocation, but that is a dangerous game to play (i.e., if you try something like ToList on such a resulting IEnumerable<T> implementation, without enumerating over the inner collections, you'll end up in an endless loop).
(I've removed what was originally here, as the OP's improvements since posting the question have since rendered my suggestions here redundant.)
Let's see if you even need the complexity of Slice. If your random number generator is stateless, I would assume each call to it would generate unique random numbers, so perhaps this would be sufficient:
var group1 = RandomNumberGenerator().Take(10);
var group2 = RandomNumberGenerator().Take(10);
var group3 = RandomNumberGenerator().Take(10);
var group4 = RandomNumberGenerator().Take(10);
Each call to Take returns a new group of 10 numbers.
Now, if your random number generator re-seeds itself with a specific value each time it's iterated, this won't work. You'll simply get the same 10 values for each group. So instead, you would use:
var generator = RandomNumberGenerator();
var group1 = generator.Take(10);
var group2 = generator.Take(10);
var group3 = generator.Take(10);
var group4 = generator.Take(10);
This maintains an instance of the generator so that you can continue retrieving values without re-seeding the generator.
You could use the Skip and Take methods with any Enumerable object.
For your edit:
How about a function that takes a slice size and a slice number as parameters?
private static IEnumerable<T> Slice<T>(IEnumerable<T> enumerable, int sliceSize, int sliceNumber) {
return enumerable.Skip(sliceSize * sliceNumber).Take(sliceSize);
}
It seems like we'd prefer for an IEnumerable<T> to have a fixed position counter so that we can do
var group1 = items.Take(10);
var group2 = items.Take(10);
var group3 = items.Take(10);
var group4 = items.Take(10);
and get successive slices rather than getting the first 10 items each time. We can do that with a new implementation of IEnumerable<T> which keeps one instance of its Enumerator and returns it on every call of GetEnumerator:
public class StickyEnumerable<T> : IEnumerable<T>, IDisposable
{
private IEnumerator<T> innerEnumerator;
public StickyEnumerable( IEnumerable<T> items )
{
innerEnumerator = items.GetEnumerator();
}
public IEnumerator<T> GetEnumerator()
{
return innerEnumerator;
}
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
return innerEnumerator;
}
public void Dispose()
{
if (innerEnumerator != null)
{
innerEnumerator.Dispose();
}
}
}
Given that class, we could implement Slice with
public static IEnumerable<IEnumerable<T>> Slices<T>(this IEnumerable<T> items, int size)
{
using (StickyEnumerable<T> sticky = new StickyEnumerable<T>(items))
{
IEnumerable<T> slice;
do
{
slice = sticky.Take(size).ToList();
yield return slice;
} while (slice.Count() == size);
}
yield break;
}
That works in this case, but StickyEnumerable<T> is generally a dangerous class to have around if the consuming code isn't expecting it. For example,
using (var sticky = new StickyEnumerable<int>(Enumerable.Range(1, 10)))
{
var first = sticky.Take(2);
var second = sticky.Take(2);
foreach (int i in second)
{
Console.WriteLine(i);
}
foreach (int i in first)
{
Console.WriteLine(i);
}
}
prints
1
2
3
4
rather than
3
4
1
2
Take a look at Take(), TakeWhile() and Skip()
I think the use of Slice() would be a bit misleading. I think of that as a means of giving me a chunk of an array as a new array, without causing side effects. In this scenario you would actually move the enumerable forward by 10.
A possibly better approach is to just use the LINQ extension Take(). I don't think you would need to use Skip() with a generator.
Edit: Dang, I have been trying to test this behavior with the following code
Note: this wasn't really correct; I leave it here so others don't fall into the same mistake.
var numbers = RandomNumberGenerator();
var slice = numbers.Take(10);
public static IEnumerable<int> RandomNumberGenerator()
{
yield return random.Next();
}
but the Count() for slice is always 1. I also tried running it through a foreach loop, since I know that the LINQ extensions are generally lazily evaluated, and it only looped once. I eventually did the code below instead of the Take() and it works:
public static IEnumerable<int> Slice(this IEnumerable<int> enumerable, int size)
{
var list = new List<int>();
foreach (var count in Enumerable.Range(0, size)) list.Add(enumerable.First());
return list;
}
If you notice, I am adding First() to the list each time, but since the enumerable being passed in is the generator from RandomNumberGenerator(), the result is different every time.
So again, with a generator, using Skip() is not needed since the result will be different each time. Looping over an IEnumerable is not always side-effect free.
Edit: I'll leave the last edit just so no one falls into the same mistake, but it worked fine for me just doing this:
var numbers = RandomNumberGenerator();
var slice1 = numbers.Take(10);
var slice2 = numbers.Take(10);
The two slices were different.
I had made some mistakes in my original answer, but some of the points still stand. Skip() and Take() are not going to work the same with a generator as they would with a list. Looping over an IEnumerable is not always side-effect free. Anyway, here is my take on getting a list of slices.
public static IEnumerable<int> RandomNumberGenerator()
{
while(true) yield return random.Next();
}
public static IEnumerable<IEnumerable<int>> Slice(this IEnumerable<int> enumerable, int size, int count)
{
var slices = new List<IEnumerable<int>>();
foreach (var iteration in Enumerable.Range(0, count)){
var list = new List<int>();
list.AddRange(enumerable.Take(size));
slices.Add(list);
}
return slices;
}
I got this solution for the same problem:
int[] ints = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
IEnumerable<IEnumerable<int>> chunks = Chunk(ints, 2, t => t.Dump());
//won't enumerate, so won't do anything unless you force it:
chunks.ToList();
IEnumerable<T> Chunk<T, R>(IEnumerable<R> src, int n, Func<IEnumerable<R>, T> action){
IEnumerable<R> head;
IEnumerable<R> tail = src;
while (tail.Any())
{
head = tail.Take(n);
tail = tail.Skip(n);
yield return action(head);
}
}
If you just want the chunks returned and don't want to do anything with them, use chunks = Chunk(ints, 2, t => t). What I would really like is to have t => t as the default action, but I haven't found out how to do that yet.
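One way to get the t => t default is an overload that forwards to the three-argument version (a sketch, untested):
IEnumerable<IEnumerable<R>> Chunk<R>(IEnumerable<R> src, int n)
{
    // identity action by default: each chunk is returned as-is
    return Chunk(src, n, t => t);
}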
I have a C# Dictionary<DateTime, SomeObject> instance and the following code:
private Dictionary<DateTime, SomeObject> _containedObjects = ...;//Let's imagine you have ~4000 items in it
public IEnumerable<SomeObject> GetItemsList(HashSet<DateTime> requiredTimestamps){
//How to return the list of SomeObject contained in _containedObjects,
//knowing that rarely (< 5% of calls) one or several DateTimes of "requiredTimestamps" may not be in _containedObjects
}
I'm looking for how to return an IEnumerable<SomeObject> containing all the elements that are referenced by one of the provided keys. The only issue is that this method will be called very often, and we might not always have every given key present in the dictionary.
So is there something more efficient than this:
private Dictionary<DateTime, SomeObject> _containedObjects = ...;//Let's imagine you have ~4000 items in it
public IEnumerable<SomeObject> GetItemsList(HashSet<DateTime> requiredTimestamps){
List<SomeObject> toReturn = new List<SomeObject>();
foreach(DateTime dateTime in requiredTimestamps){
SomeObject found;
if(_containedObjects.TryGetValue(dateTime, out found)){
toReturn.Add(found);
}
}
return toReturn;
}
In general, there are two ways you can do this:
Go through requiredTimestamps sequentially and look up each date/time stamp in the dictionary. Dictionary lookup is O(1), so if there are k items to look up, it will take O(k) time.
Go through the dictionary sequentially and extract those with matching keys in the requiredTimestamps hash set. This will take O(n) time, where n is the number of items in the dictionary.
In theory, the first option--which is what you currently have--will be the fastest way to do it.
In practice, it's likely that the first one will be more efficient when the number of items you're looking up is less than some percentage of the total number of items in the dictionary. That is, if you're looking up 100 keys in a dictionary of a million, the first option will almost certainly be faster. If you're looking up 500,000 keys in a dictionary of a million, the second method might be faster because it's a whole lot faster to move to the next key than it is to do a lookup.
You'll probably want to optimize for the most common case, which I suspect is looking up a relatively small percentage of keys. In that case, the method you describe is almost certainly the best approach. But the only way to know for sure is to measure.
One optimization you might consider is pre-sizing the output list. That will avoid re-allocations. So when you create your toReturn list:
List<SomeObject> toReturn = new List<SomeObject>(requiredTimestamps.Count);
Method 1:
To make this significantly faster, don't change the algorithm; instead, make a local copy of the _containedObjects reference in your method and use the local copy for the lookup.
Example:
public static IEnumerable<int> GetItemsList3(HashSet<DateTime> requiredTimestamps)
{
var tmp = _containedObjects;
List<int> toReturn = new List<int>();
foreach (DateTime dateTime in requiredTimestamps)
{
int found;
if (tmp.TryGetValue(dateTime, out found))
{
toReturn.Add(found);
}
}
return toReturn;
}
Test data and times (on set of 5000 items with 125 keys found):
Your original method (milliseconds): 2.06032186895335
Method 1 (milliseconds): 0.53549626223609
Method 2:
One way to make this marginally quicker is to iterate through the smaller set and do the lookup on the bigger set. Depending on the size difference you will gain some speed.
You are using a Dictionary and HashSet, so your lookup on either of these will be O(1).
Example: if _containedObjects has fewer items than requiredTimestamps, we loop through _containedObjects (otherwise use your method for the converse):
public static IEnumerable<int> GetItemsList2(HashSet<DateTime> requiredTimestamps)
{
List<int> toReturn = new List<int>();
foreach (var dateTime in _containedObjects)
{
int found;
if (requiredTimestamps.Contains(dateTime.Key))
{
toReturn.Add(dateTime.Value);
}
}
return toReturn;
}
Test data and times (on set of 5000 for _containedObjects and set of 10000 items for requiredTimestamps with 125 keys found):
Your original method (milliseconds): 3.88056291367086
Method 2 (milliseconds): 3.31025939438943
You can use LINQ, but I doubt it is going to improve performance; even if there is any difference, it would be negligible.
Your method could be:
public IEnumerable<SomeObject> GetItemsList(HashSet<DateTime> requiredTimestamps)
{
return _containedObjects.Where(r => requiredTimestamps.Contains(r.Key))
.Select(d => d.Value);
}
One positive with this is lazy evaluation since you are not populating a list and returning it.
Here are some different ways to do it - performance is all pretty much the same so you can choose based on readability.
Paste this into LinqPad if you want to test it out - otherwise just harvest whatever code you need.
I think my personal favourite from a readability point of view is method 3. Method 4 is certainly readable but has the unpleasant feature that it does two lookups into the dictionary for every required timestamp.
void Main()
{
var obj = new TestClass<string>(i => string.Format("Element {0}", i));
var sampleDateTimes = new HashSet<DateTime>();
for(int i = 0; i < 4000 / 20; i++)
{
sampleDateTimes.Add(DateTime.Today.AddDays(i * -5));
}
var result = obj.GetItemsList_3(sampleDateTimes);
foreach (var item in result)
{
Console.WriteLine(item);
}
}
class TestClass<SomeObject>
{
private Dictionary<DateTime, SomeObject> _containedObjects;
public TestClass(Func<int, SomeObject> converter)
{
_containedObjects = new Dictionary<DateTime, SomeObject>();
for(int i = 0; i < 4000; i++)
{
_containedObjects.Add(DateTime.Today.AddDays(-i), converter(i));
}
}
public IEnumerable<SomeObject> GetItemsList_1(HashSet<DateTime> requiredTimestamps)
{
List<SomeObject> toReturn = new List<SomeObject>();
foreach(DateTime dateTime in requiredTimestamps)
{
SomeObject found;
if(_containedObjects.TryGetValue(dateTime, out found))
{
toReturn.Add(found);
}
}
return toReturn;
}
public IEnumerable<SomeObject> GetItemsList_2(HashSet<DateTime> requiredTimestamps)
{
foreach(DateTime dateTime in requiredTimestamps)
{
SomeObject found;
if(_containedObjects.TryGetValue(dateTime, out found))
{
yield return found;
}
}
}
public IEnumerable<SomeObject> GetItemsList_3(HashSet<DateTime> requiredTimestamps)
{
return requiredTimestamps
.Intersect(_containedObjects.Keys)
.Select (k => _containedObjects[k]);
}
public IEnumerable<SomeObject> GetItemsList_4(HashSet<DateTime> requiredTimestamps)
{
return requiredTimestamps
.Where(dt => _containedObjects.ContainsKey(dt))
.Select (dt => _containedObjects[dt]);
}
}
Given this code:
IEnumerable<object> FilteredList()
{
foreach( object item in FullList )
{
if( IsItemInPartialList( item ) )
yield return item;
}
}
Why should I not just code it this way?:
IEnumerable<object> FilteredList()
{
var list = new List<object>();
foreach( object item in FullList )
{
if( IsItemInPartialList( item ) )
list.Add(item);
}
return list;
}
I sort of understand what the yield keyword does. It tells the compiler to build a certain kind of thing (an iterator). But why use it? Apart from it being slightly less code, what's it do for me?
Using yield makes the collection lazy.
Let's say you just need the first five items. Your way, I have to loop through the entire list to get the first five items. With yield, I only loop through the first five items.
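As a small sketch of that (the Squares method and the range here are made up for illustration): with an iterator block, Take(5) pulls only five items through the loop, whereas a list-building version would process the entire range first.
static IEnumerable<int> Squares(IEnumerable<int> source)
{
    foreach (int n in source)
        yield return n * n;   // produced one at a time, on demand
}

// Only the first five source items are ever touched:
var firstFive = Squares(Enumerable.Range(1, 1000000)).Take(5).ToList();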
The benefit of iterator blocks is that they work lazily. So you can write a filtering method like this:
public static IEnumerable<T> Where<T>(this IEnumerable<T> source,
Func<T, bool> predicate)
{
foreach (var item in source)
{
if (predicate(item))
{
yield return item;
}
}
}
That will allow you to filter a stream as long as you like, never buffering more than a single item at a time. If you only need the first value from the returned sequence, for example, why would you want to copy everything into a new list?
As another example, you can easily create an infinite stream using iterator blocks. For example, here's a sequence of random numbers:
public static IEnumerable<int> RandomSequence(int minInclusive, int maxExclusive)
{
Random rng = new Random();
while (true)
{
yield return rng.Next(minInclusive, maxExclusive);
}
}
How would you store an infinite sequence in a list?
My Edulinq blog series gives a sample implementation of LINQ to Objects which makes heavy use of iterator blocks. LINQ is fundamentally lazy where it can be - and putting things in a list simply doesn't work that way.
With the "list" code, you have to process the full list before you can pass it on to the next step. The "yield" version passes the processed item immediately to the next step. If that "next step" contains a ".Take(10)" then the "yield" version will only process the first 10 items and forget about the rest. The "list" code would have processed everything.
This means that you see the most difference when you need to do a lot of processing and/or have long lists of items to process.
You can use yield to return items that aren't in a list. Here's a little sample that could iterate infinitely through a list until canceled.
public IEnumerable<int> GetNextNumber()
{
while (true)
{
for (int i = 0; i < 10; i++)
{
yield return i;
}
}
}
public bool Canceled { get; set; }
public void StartCounting()
{
foreach (var number in GetNextNumber())
{
if (this.Canceled) break;
Console.WriteLine(number);
}
}
This writes
0
1
2
3
4
5
6
7
8
9
0
1
2
3
4
...etc. to the console until canceled.
object jamesItem = null;
foreach(var item in FilteredList())
{
if (item.Name == "James")
{
jamesItem = item;
break;
}
}
return jamesItem;
When the above code is used to loop through FilteredList() and assuming item.Name == "James" will be satisfied on 2nd item in the list, the method using yield will yield twice. This is a lazy behavior.
Whereas the method using a list will add all n objects to the list and pass the complete list to the calling method.
This is exactly a use case where the difference between IEnumerable and IList can be highlighted.
The best real world example I've seen for the use of yield would be to calculate a Fibonacci sequence.
Consider the following code:
class Program
{
static void Main(string[] args)
{
Console.WriteLine(string.Join(", ", Fibonacci().Take(10)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(15).Take(1)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(10).Take(5)));
Console.WriteLine(string.Join(", ", Fibonacci().Skip(100).Take(1)));
Console.ReadKey();
}
private static IEnumerable<long> Fibonacci()
{
long a = 0;
long b = 1;
while (true)
{
long temp = a;
a = b;
yield return a;
b = temp + b;
}
}
}
This will return:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55
987
89, 144, 233, 377, 610
1298777728820984005
This is nice because it allows you to calculate out an infinite series quickly and easily, giving you the ability to use the Linq extensions and query only what you need.
why use [yield]? Apart from it being slightly less code, what's it do for me?
Sometimes it is useful, sometimes not. If the entire set of data must be examined and returned then there is not going to be any benefit in using yield because all it did was introduce overhead.
When yield really shines is when only a partial set is returned. I think the best example is sorting. Assume you have a list of objects containing a date and a dollar amount from this year and you would like to see the first handful (5) records of the year.
In order to accomplish this, the list must be sorted ascending by date, and then have the first 5 taken. If this was done without yield, the entire list would have to be sorted, right up to making sure the last two dates were in order.
However, with yield, once the first 5 items have been established the sorting stops and the results are available. This can save a large amount of time.
The yield return statement allows you to return one item at a time. With the list version you are collecting all the items in a list and then returning that list, which is a memory overhead.
What is the quickest (and least resource-intensive) way to compare two massive lists (>50,000 items each) and, as a result, have two lists like the ones below:
items that show up in the first list but not in the second
items that show up in the second list but not in the first
Currently I'm working with List or IReadOnlyCollection and solve this with a LINQ query:
var firstNotSecond = list1.Where(i => !list2.Contains(i)).ToList();
var secondNotFirst = list2.Where(i => !list1.Contains(i)).ToList();
But this doesn't perform as well as I would like.
Any idea how to make this quicker and less resource intensive, as I need to process a lot of lists?
Use Except:
var firstNotSecond = list1.Except(list2).ToList();
var secondNotFirst = list2.Except(list1).ToList();
I suspect there are approaches which would actually be marginally faster than this, but even this will be vastly faster than your O(N * M) approach.
If you want to combine these, you could create a method with the above and then a return statement:
return !firstNotSecond.Any() && !secondNotFirst.Any();
One point to note is that there is a difference in results between the original code in the question and the solution here: any duplicate elements which are only in one list will only be reported once with my code, whereas they'd be reported as many times as they occur in the original code.
For example, with lists of [1, 2, 2, 2, 3] and [1], the "elements in list1 but not list2" result in the original code would be [2, 2, 2, 3]. With my code it would just be [2, 3]. In many cases that won't be an issue, but it's worth being aware of.
Enumerable.SequenceEqual Method
Determines whether two sequences are equal according to an equality comparer.
MS.Docs
Enumerable.SequenceEqual(list1, list2);
This works for all primitive data types. If you need to use it on custom objects you need to implement IEqualityComparer
IEqualityComparer Interface
Defines methods to support the comparison of objects for equality.
MS.Docs for IEqualityComparer
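For custom objects, a comparer along these lines can be passed as the third argument (a sketch; the Person type and its properties are made up for illustration):
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class PersonComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.Name == y.Name && x.Age == y.Age;
    }

    public int GetHashCode(Person p)
    {
        return (p.Name ?? "").GetHashCode() ^ p.Age.GetHashCode();
    }
}

// usage:
bool equal = Enumerable.SequenceEqual(list1, list2, new PersonComparer());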
More efficient would be using Enumerable.Except:
var inListButNotInList2 = list.Except(list2);
var inList2ButNotInList = list2.Except(list);
This method is implemented by using deferred execution. That means you could write for example:
var first10 = inListButNotInList2.Take(10);
It is also efficient since it internally uses a Set<T> to compare the objects. It works by first collecting all distinct values from the second sequence, and then streaming the results of the first, checking that they haven't been seen before.
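Roughly, the deferred implementation behaves like this simplified sketch (not the actual framework source):
static IEnumerable<T> ExceptSketch<T>(IEnumerable<T> first, IEnumerable<T> second)
{
    var seen = new HashSet<T>(second);   // all distinct values of the second sequence
    foreach (var item in first)
    {
        if (seen.Add(item))              // true only the first time this value is encountered
            yield return item;           // streamed; nothing runs until the result is enumerated
    }
}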
If you want the results to be case insensitive, the following will work:
List<string> list1 = new List<string> { "a.dll", "b1.dll" };
List<string> list2 = new List<string> { "A.dll", "b2.dll" };
var firstNotSecond = list1.Except(list2, StringComparer.OrdinalIgnoreCase).ToList();
var secondNotFirst = list2.Except(list1, StringComparer.OrdinalIgnoreCase).ToList();
firstNotSecond would contain b1.dll
secondNotFirst would contain b2.dll
using System;
using System.Collections.Generic;
using System.Linq;
namespace YourProject.Extensions
{
public static class ListExtensions
{
public static bool SetwiseEquivalentTo<T>(this List<T> list, List<T> other)
where T: IEquatable<T>
{
if (list.Except(other).Any())
return false;
if (other.Except(list).Any())
return false;
return true;
}
}
}
Sometimes you only need to know if two lists are different, and not what those differences are. In that case, consider adding this extension method to your project. Note that your listed objects should implement IEquatable!
Usage:
public sealed class Car : IEquatable<Car>
{
public Price Price { get; }
public List<Component> Components { get; }
...
public override bool Equals(object obj)
=> obj is Car other && Equals(other);
public bool Equals(Car other)
=> Price == other.Price
&& Components.SetwiseEquivalentTo(other.Components);
public override int GetHashCode()
=> Components.Aggregate(
Price.GetHashCode(),
(code, next) => code ^ next.GetHashCode()); // Bitwise XOR
}
Whatever the Component class is, the methods shown here for Car should be implemented almost identically.
It's very important to note how we've written GetHashCode. In order to properly implement IEquatable, Equals and GetHashCode must operate on the instance's properties in a logically compatible way.
Two lists with the same contents are still different objects, and will produce different hash codes. Since we want these two lists to be treated as equal, we must let GetHashCode produce the same value for each of them. We can accomplish this by delegating the hashcode to every element in the list, and using the standard bitwise XOR to combine them all. XOR is order-agnostic, so it doesn't matter if the lists are sorted differently. It only matters that they contain nothing but equivalent members.
Note: the strange name is to imply the fact that the method does not consider the order of the elements in the list. If you do care about the order of the elements in the list, this method is not for you!
try this way:
var difList = list1.Where(a => !list2.Any(a1 => a1.id == a.id))
.Union(list2.Where(a => !list1.Any(a1 => a1.id == a.id)));
Not for this Problem, but here's some code to compare lists for equal and not! identical objects:
public class EquatableList<T> : List<T>, IEquatable<EquatableList<T>> where T : IEquatable<T>
{
/// <summary>
/// True, if this contains element with equal property-values
/// </summary>
/// <param name="element">element of Type T</param>
/// <returns>True, if this contains element</returns>
public new Boolean Contains(T element)
{
return this.Any(t => t.Equals(element));
}
/// <summary>
/// True, if list is equal to this
/// </summary>
/// <param name="list">list</param>
/// <returns>True, if instance equals list</returns>
public Boolean Equals(EquatableList<T> list)
{
if (list == null) return false;
return this.All(list.Contains) && list.All(this.Contains);
    }
}
If only a combined result is needed, this will work too:
var set1 = new HashSet<T>(list1);
var set2 = new HashSet<T>(list2);
var areEqual = set1.SetEquals(set2);
where T is the type of the lists' elements.
While Jon Skeet's answer is excellent advice for everyday practice with a small to moderate number of elements (up to a few million), it is nevertheless not the fastest approach and not very resource efficient. An obvious drawback is the fact that getting the full difference requires two passes over the data (even three if the elements that are equal are of interest as well). Clearly, this can be avoided by a customized reimplementation of the Except method, but it remains that the creation of a hash set requires a lot of memory and the computation of hashes requires time.
For very large data sets (in the billions of elements) it usually pays off to consider the particular circumstances. Here are a few ideas that might provide some inspiration:
If the elements can be compared (which is almost always the case in practice), then sorting the lists and applying the following zip approach is worth consideration:
/// <returns>The elements of the specified (ascendingly) sorted enumerations that are
/// contained only in one of them, together with an indicator,
/// whether the element is contained in the reference enumeration (-1)
/// or in the difference enumeration (+1).</returns>
public static IEnumerable<Tuple<T, int>> FindDifferences<T>(IEnumerable<T> sortedReferenceObjects,
IEnumerable<T> sortedDifferenceObjects, IComparer<T> comparer)
{
var refs = sortedReferenceObjects.GetEnumerator();
var diffs = sortedDifferenceObjects.GetEnumerator();
bool hasNext = refs.MoveNext() && diffs.MoveNext();
while (hasNext)
{
int comparison = comparer.Compare(refs.Current, diffs.Current);
if (comparison == 0)
{
// insert code that emits the current element if equal elements should be kept
hasNext = refs.MoveNext() && diffs.MoveNext();
}
else if (comparison < 0)
{
yield return Tuple.Create(refs.Current, -1);
hasNext = refs.MoveNext();
}
else
{
yield return Tuple.Create(diffs.Current, 1);
hasNext = diffs.MoveNext();
}
}
}
This can e.g. be used in the following way:
const int N = <Large number>;
const int omit1 = 231567;
const int omit2 = 589932;
IEnumerable<int> numberSequence1 = Enumerable.Range(0, N).Select(i => i < omit1 ? i : i + 1);
IEnumerable<int> numberSequence2 = Enumerable.Range(0, N).Select(i => i < omit2 ? i : i + 1);
var numberDiffs = FindDifferences(numberSequence1, numberSequence2, Comparer<int>.Default);
Benchmarking on my computer gave the following result for N = 1M:
| Method   | Mean      | Error    | StdDev   | Ratio | Gen 0     | Gen 1     | Gen 2     | Allocated  |
|----------|-----------|----------|----------|-------|-----------|-----------|-----------|------------|
| DiffLinq | 115.19 ms | 0.656 ms | 0.582 ms | 1.00  | 2800.0000 | 2800.0000 | 2800.0000 | 67110744 B |
| DiffZip  | 23.48 ms  | 0.018 ms | 0.015 ms | 0.20  | -         | -         | -         | 720 B      |
And for N = 100M:
| Method   | Mean     | Error    | StdDev   | Ratio | Gen 0      | Gen 1      | Gen 2      | Allocated    |
|----------|----------|----------|----------|-------|------------|------------|------------|--------------|
| DiffLinq | 12.146 s | 0.0427 s | 0.0379 s | 1.00  | 13000.0000 | 13000.0000 | 13000.0000 | 8589937032 B |
| DiffZip  | 2.324 s  | 0.0019 s | 0.0018 s | 0.19  | -          | -          | -          | 720 B        |
Note that this example of course benefits from the fact that the lists are already sorted and integers can be very efficiently compared. But this is exactly the point: If you do have favourable circumstances, make sure that you exploit them.
A few further comments: the speed of the comparison function is clearly relevant for the overall performance, so it may be beneficial to optimize it. The flexibility to do so is a benefit of the zipping approach. Furthermore, parallelization seems more feasible to me, although by no means easy and maybe not worth the effort and the overhead. Nevertheless, a simple way to speed up the process by roughly a factor of 2 is to split each list into two halves (if that can be done efficiently) and compare the parts in parallel, one pass processing from front to back and the other in reverse order.
I have used this code to compare two lists with millions of records.
This method does not take much time:
//Method to compare two list of string
private List<string> Contains(List<string> list1, List<string> list2)
{
List<string> result = new List<string>();
result.AddRange(list1.Except(list2, StringComparer.OrdinalIgnoreCase));
result.AddRange(list2.Except(list1, StringComparer.OrdinalIgnoreCase));
return result;
}
I compared 3 different methods for comparing different data sets. Tests below create a string collection of all the numbers from 0 to length - 1, then another collection with the same range, but with even numbers. I then pick out the odd numbers from the first collection.
Using Linq Except
public void TestExcept()
{
WriteLine($"Except {DateTime.Now}");
int length = 20000000;
var dateTime = DateTime.Now;
var array = new string[length];
for (int i = 0; i < length; i++)
{
array[i] = i.ToString();
}
Write("Populate set processing time: ");
WriteLine(DateTime.Now - dateTime);
var newArray = new string[length/2];
int j = 0;
for (int i = 0; i < length; i+=2)
{
newArray[j++] = i.ToString();
}
dateTime = DateTime.Now;
Write("Count of items: ");
WriteLine(array.Except(newArray).Count());
Write("Count processing time: ");
WriteLine(DateTime.Now - dateTime);
}
Output
Except 2021-08-14 11:43:03 AM
Populate set processing time: 00:00:03.7230479
2021-08-14 11:43:09 AM
Count of items: 10000000
Count processing time: 00:00:02.9720879
Using HashSet.Add
public void TestHashSet()
{
WriteLine($"HashSet {DateTime.Now}");
int length = 20000000;
var dateTime = DateTime.Now;
var hashSet = new HashSet<string>();
for (int i = 0; i < length; i++)
{
hashSet.Add(i.ToString());
}
Write("Populate set processing time: ");
WriteLine(DateTime.Now - dateTime);
var newHashSet = new HashSet<string>();
for (int i = 0; i < length; i+=2)
{
newHashSet.Add(i.ToString());
}
dateTime = DateTime.Now;
Write("Count of items: ");
// HashSet Add returns true if item is added successfully (not previously existing)
WriteLine(hashSet.Where(s => newHashSet.Add(s)).Count());
Write("Count processing time: ");
WriteLine(DateTime.Now - dateTime);
}
Output
HashSet 2021-08-14 11:42:43 AM
Populate set processing time: 00:00:05.6000625
Count of items: 10000000
Count processing time: 00:00:01.7703057
Special HashSet test:
public void TestLoadingHashSet()
{
int length = 20000000;
var array = new string[length];
for (int i = 0; i < length; i++)
{
array[i] = i.ToString();
}
var dateTime = DateTime.Now;
var hashSet = new HashSet<string>(array);
Write("Time to load hashset: ");
WriteLine(DateTime.Now - dateTime);
}
> TestLoadingHashSet()
Time to load hashset: 00:00:01.1918160
Using .Contains
public void TestContains()
{
WriteLine($"Contains {DateTime.Now}");
int length = 20000000;
var dateTime = DateTime.Now;
var array = new string[length];
for (int i = 0; i < length; i++)
{
array[i] = i.ToString();
}
Write("Populate set processing time: ");
WriteLine(DateTime.Now - dateTime);
var newArray = new string[length/2];
int j = 0;
for (int i = 0; i < length; i+=2)
{
newArray[j++] = i.ToString();
}
dateTime = DateTime.Now;
WriteLine(dateTime);
Write("Count of items: ");
WriteLine(array.Where(a => !newArray.Contains(a)).Count());
Write("Count processing time: ");
WriteLine(DateTime.Now - dateTime);
}
Output
Contains 2021-08-14 11:19:44 AM
Populate set processing time: 00:00:03.1046998
2021-08-14 11:19:49 AM
Count of items: Hosting process exited with exit code 1.
(Didn't complete. Killed it after 14 minutes.)
Conclusion:
Linq Except ran approximately 1 second slower on my device than using HashSets (n=20,000,000).
Using Where and Contains ran for a very long time
Closing remarks on HashSets:
Unique data
Make sure to override GetHashCode (correctly) for class types
May need up to 2x the memory if you make a copy of the data set, depending on implementation
HashSet is optimized for cloning other HashSets using the IEnumerable constructor, but it is slower to convert other collections to HashSets (see special test above)
First approach:
if (list1 != null && list2 != null && list1.Select(x => list2.SingleOrDefault(y => y.propertyToCompare == x.propertyToCompare && y.anotherPropertyToCompare == x.anotherPropertyToCompare) != null).All(x => x))
return true;
Second approach if you are ok with duplicate values:
if (list1 != null && list2 != null && list1.Select(x => list2.Any(y => y.propertyToCompare == x.propertyToCompare && y.anotherPropertyToCompare == x.anotherPropertyToCompare)).All(x => x))
return true;
Both Jon Skeet's and miguelmpn's answers are good. It depends on whether the order of the list elements is important or not:
// take order into account
bool areEqual1 = Enumerable.SequenceEqual(list1, list2);
// ignore order
bool areEqual2 = !list1.Except(list2).Any() && !list2.Except(list1).Any();
One line:
var list1 = new List<int> { 1, 2, 3 };
var list2 = new List<int> { 1, 2, 3, 4 };
if (list1.Except(list2).Count() + list2.Except(list1).Count() == 0)
Console.WriteLine("same sets");
I wrote a generic function for comparing two lists:
public static class ListTools
{
public enum RecordUpdateStatus
{
Added = 1,
Updated = 2,
Deleted = 3
}
public class UpdateStatu<T>
{
public T CurrentValue { get; set; }
public RecordUpdateStatus UpdateStatus { get; set; }
}
public static List<UpdateStatu<T>> CompareList<T>(List<T> currentList, List<T> inList, string uniqPropertyName)
{
var res = new List<UpdateStatu<T>>();
res.AddRange(inList.Where(a => !currentList.Any(x => x.GetType().GetProperty(uniqPropertyName).GetValue(x)?.ToString().ToLower() == a.GetType().GetProperty(uniqPropertyName).GetValue(a)?.ToString().ToLower()))
.Select(a => new UpdateStatu<T>
{
CurrentValue = a,
UpdateStatus = RecordUpdateStatus.Added,
}));
res.AddRange(currentList.Where(a => !inList.Any(x => x.GetType().GetProperty(uniqPropertyName).GetValue(x)?.ToString().ToLower() == a.GetType().GetProperty(uniqPropertyName).GetValue(a)?.ToString().ToLower()))
.Select(a => new UpdateStatu<T>
{
CurrentValue = a,
UpdateStatus = RecordUpdateStatus.Deleted,
}));
res.AddRange(currentList.Where(a => inList.Any(x => x.GetType().GetProperty(uniqPropertyName).GetValue(x)?.ToString().ToLower() == a.GetType().GetProperty(uniqPropertyName).GetValue(a)?.ToString().ToLower()))
.Select(a => new UpdateStatu<T>
{
CurrentValue = a,
UpdateStatus = RecordUpdateStatus.Updated,
}));
return res;
}
}
I think this is a simple and easy way to compare two lists element by element
x=[1,2,3,5,4,8,7,11,12,45,96,25]
y=[2,4,5,6,8,7,88,9,6,55,44,23]
tmp = []
for i in range(min(len(x), len(y))):
if x[i]>y[i]:
tmp.append(1)
else:
tmp.append(0)
print(tmp)
Maybe it's funny, but this works for me:
string.Join("",List1) != string.Join("", List2)
This is the best solution you'll find:
var list3 = list1.Where(l => list2.Contains(l));
Python's module 'random' has a function random.choice
random.choice(seq)
Return a random element from the non-empty sequence seq. If seq is empty, raises IndexError.
How can I emulate this in .NET?
public T RandomChoice<T> (IEnumerable<T> source)
Edit: I heard this as an interview question some years ago, but today the problem occurred naturally in my work. The interview question was stated with constraints
'the sequence is too long to save to memory'
'you can only loop over the sequence once'
'the sequence doesn't have a length/count method' (à la .NET IEnumerable)
To make a method that iterates the source only once, and doesn't have to allocate memory to store it temporarily, you count how many items you have iterated, and determine the probability that the current item should be the result:
public T RandomChoice<T> (IEnumerable<T> source) {
Random rnd = new Random();
T result = default(T);
int cnt = 0;
foreach (T item in source) {
cnt++;
if (rnd.Next(cnt) == 0) {
result = item;
}
}
return result;
}
When you are at the first item, the probability is 1/1 that it should be used (as that is the only item that you have seen this far). When you are at the second item, the probability is 1/2 that it should replace the first item, and so on.
This will naturally use a bit more CPU, as it creates one random number per item, not just a single random number to select an item, as dasblinkenlight pointed out. You can check if the source implements IList<T>, as Dan Tao suggested, and use an implementation that uses the capabilities to get the length of the collection and access items by index:
public T RandomChoice<T> (IEnumerable<T> source) {
IList<T> list = source as IList<T>;
if (list != null) {
// use list.Count and list[] to pick an item by random
} else {
// use implementation above
}
}
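Filled in, the combined method could look something like this (a sketch that just splices the two pieces together; the empty-sequence handling is my own addition):
public T RandomChoice<T> (IEnumerable<T> source) {
    Random rnd = new Random();
    IList<T> list = source as IList<T>;
    if (list != null) {
        if (list.Count == 0) throw new InvalidOperationException("Sequence was empty.");
        return list[rnd.Next(list.Count)];   // O(1): use the known count and the indexer
    }

    // fall back to the single-pass approach shown above
    T result = default(T);
    int cnt = 0;
    foreach (T item in source) {
        cnt++;
        if (rnd.Next(cnt) == 0) {
            result = item;
        }
    }
    if (cnt == 0) throw new InvalidOperationException("Sequence was empty.");
    return result;
}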
Note: You should consider sending the Random instance into the method. Otherwise you will get the same random seed if you call the method two times too close in time, as the seed is created from the current time.
The result of a test run, picking one number from an array containing 0 - 9, 1000000 times, to show that the distribution of the chosen numbers is not skewed:
0: 100278
1: 99519
2: 99994
3: 100327
4: 99571
5: 99731
6: 100031
7: 100429
8: 99482
9: 100638
To avoid iterating through the sequence two times (once for the count and once for the element) it is probably a good idea to save your sequence in an array before getting its random element:
public static class RandomExt {
private static Random rnd = new Random();
public static T RandomChoice<T> (this IEnumerable<T> source) {
var arr = source.ToArray();
return arr[rnd.Next(arr.Length)];
}
    public static T RandomChoice<T> (this IList<T> source) {
        return source[rnd.Next(source.Count)];
}
}
EDIT: Implemented a very good idea by Chris Sinclair.
private static Random rng = new Random();
...
return source.Skip(rng.Next(source.Count())).Take(1);
public T RandomChoice<T> (IEnumerable<T> source)
{
if (source == null)
{
throw new ArgumentNullException("source");
}
var list = source.ToList();
if (list.Count < 1)
{
throw new MissingMemberException();
}
var rnd = new Random();
return list[rnd.Next(0, list.Count)];
}
or extension
public static T RandomChoice<T> (this IEnumerable<T> source)
{
if (source == null)
{
throw new ArgumentNullException("source");
}
var list = source.ToList();
if (list.Count < 1)
{
throw new MissingMemberException();
}
var rnd = new Random();
return list[rnd.Next(0, list.Count)];
}
I'd go with dasblinkenlight's answer, with one small change: leverage the fact that source might already be an indexed collection, in which case you really don't need to populate a new array (or list):
public static class RandomExt
{
public static T Choice<T>(this Random random, IEnumerable<T> sequence)
{
var list = sequence as IList<T> ?? sequence.ToList();
return list[random.Next(list.Count)];
}
}
Note that I also modified the interface from the abovementioned answer to make it more consistent with the Python version you referenced in your question:
var random = new Random();
var numbers = new int[] { 1, 2, 3 };
int randomNumber = random.Choice(numbers);
Edit: I like Guffa's answer even better, actually.
Well, get a list of all the elements in the sequence, ask a random number generator for an index, and return the element at that index. Define what a sequence is: IEnumerable would be most obvious, but then you need to materialize it into a list so that you know the number of elements to give the random number generator.
This is, by the way, not emulating it, it is implementing it.
Is this a homework question from a beginner course?
Assuming one has an extension method IEnumerable.MinBy:
var r = new Random();
return source.MinBy(x=>r.Next())
The MinBy method doesn't save the sequence to memory; it works like Enumerable.Min, making one iteration (see MoreLinq or elsewhere).
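If you don't have MoreLinq handy, a minimal single-pass MinBy sketch would be enough for this purpose (the real library version handles more edge cases):
public static TSource MinBy<TSource, TKey>(this IEnumerable<TSource> source,
    Func<TSource, TKey> selector)
{
    var comparer = Comparer<TKey>.Default;
    using (var e = source.GetEnumerator())
    {
        if (!e.MoveNext())
            throw new InvalidOperationException("Sequence was empty.");
        TSource best = e.Current;
        TKey bestKey = selector(best);
        while (e.MoveNext())
        {
            TKey key = selector(e.Current);
            if (comparer.Compare(key, bestKey) < 0)
            {
                best = e.Current;
                bestKey = key;
            }
        }
        return best;
    }
}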
I've got a list of timestamps (in ticks), and from this list I'd like to create another one that represents the delta time between entries.
Let's just say, for example, that my master timetable looks like this:
10
20
30
50
60
70
What I want back is this:
10
10
20
10
10
What I'm trying to accomplish here is to detect that #3 in the output table is an outlier by calculating the standard deviation. I've not taken statistics before, but I think that if I look for the prevalent value in the output list and throw out anything outside of 1 sigma, this will work adequately for me.
I'd love to be able to create the output list with a single LINQ query, but I haven't figured it out yet. Currently I'm just brute forcing it with a loop.
If you are running .NET 4.0, this should work fine:
var deltas = list.Zip(list.Skip(1), (current, next) => next - current);
Apart from the multiple enumerators, this is quite efficient; it should work well on any kind of sequence.
Here's an alternative for .NET 3.5:
var deltas = list.Skip(1)
.Select((next, index) => next - list[index]);
Obviously, this idea will only be efficient when the list's indexer is employed. Modifying it to use ElementAt may not be a good idea: quadratic run-time will occur for non IList<T> sequences. In this case, writing a custom iterator is a good solution.
EDIT: If you don't like the Zip + Skip(1) idea, writing an extension such as this (untested) may be useful in these sorts of circumstances:
public class CurrentNext<T>
{
public T Current { get; private set; }
public T Next { get; private set; }
public CurrentNext(T current, T next)
{
Current = current;
Next = next;
}
}
...
public static IEnumerable<CurrentNext<T>> ToCurrentNextEnumerable<T>(this IEnumerable<T> source)
{
    if (source == null)
        throw new ArgumentNullException("source");
    using (var enumerator = source.GetEnumerator())
{
if (!enumerator.MoveNext())
yield break;
T current = enumerator.Current;
while (enumerator.MoveNext())
{
yield return new CurrentNext<T>(current, enumerator.Current);
current = enumerator.Current;
}
}
}
Which you could then use as:
var deltas = list.ToCurrentNextEnumerable()
.Select(c=> c.Next - c.Current);
You can use Ani's answer:
var deltas = list.Zip(list.Skip(1), (current, next) => next - current);
With a super-simple implementation of the Zip extension method:
public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
this IEnumerable<TFirst> first,
IEnumerable<TSecond> second,
Func<TFirst, TSecond, TResult> func)
{
var ie1 = first.GetEnumerator();
var ie2 = second.GetEnumerator();
while (ie1.MoveNext() && ie2.MoveNext())
yield return func(ie1.Current, ie2.Current);
}
That'll work with 3.5.
This should do the trick:
static IEnumerable<int> GetDeltas(IEnumerable<int> collection)
{
int? previous = null;
foreach (int value in collection)
{
if (previous != null)
{
yield return value - (int)previous;
}
previous = value;
}
}
Now you can call your collection like this:
var masterTimetable = GetMasterTimeTable();
var deltas = GetDeltas(masterTimetable);
It's not really LINQ, but will effectively do the trick.
It looks like there are sufficient answers to get you going already, but I asked a similar question back in the spring:
How to zip one ienumerable with itself
In the responses to my question, I learned about "Pairwise".
As I recall, explicitly implementing your own "Pairwise" enumerator means that you iterate through your list exactly once, whereas implementing "Pairwise" in terms of .Zip + .Skip(1) means that you will ultimately iterate over your list twice.
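For reference, a Pairwise written directly over the enumerator walks the source only once (a sketch; the names follow the usual convention):
public static IEnumerable<TResult> Pairwise<T, TResult>(this IEnumerable<T> source,
    Func<T, T, TResult> selector)
{
    using (var e = source.GetEnumerator())
    {
        if (!e.MoveNext())
            yield break;
        T previous = e.Current;
        while (e.MoveNext())
        {
            yield return selector(previous, e.Current);
            previous = e.Current;
        }
    }
}

// usage for the deltas in this question:
// var deltas = list.Pairwise((current, next) => next - current);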
In my post I also include several examples of geometry (operating on lists of points) processing code such as Length/Distance, Area, Centroid.
Not that I recommend this, but totally abusing LINQ the following would work:
var vals = new[] {10, 20, 30, 50, 60, 70};
int previous = 0;
var newvals = vals.Select(i =>
{
int dif = i - previous;
previous = i;
return dif;
});
foreach (var newval in newvals)
{
Console.WriteLine(newval);
}
One liner for you:
int[] i = new int[] { 10, 20, 30, 50, 60, 70 };
IEnumerable<int> x = Enumerable.Range(1, i.Count()-1).Select(W => i[W] - i[W - 1]);
LINQ is not really designed for what you're trying to do here, because it usually evaluates value by value, much like an extremely efficient combination of for-loops.
You'd have to know your current index, something you don't, without some kind of workaround.