I have a List<List<int>>. For example,
the List<List<int>> contains {{1,2,3}, {1,1,2}, {1,2,3}}.
I want to remove the duplicates in this:
Result should be: {{1,2,3}, {1,1,2}}
The problem is that the inner lists are reference types, so they have different hash codes and are treated as distinct objects.
I don't want to compare every list against every other list to find duplicates, as that isn't optimal.
Try this:
List<List<int>> lst = new List<List<int>>()
{
new List<int> {1,2,3},
new List<int> {1,1,2},
new List<int> {1,2,3}
};
var result = lst.GroupBy(c => String.Join(",", c)).Select(c => c.First().ToList()).ToList();
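As a quick sanity check (just a sketch, reusing the lst declared above), printing the result shows the two distinct inner lists:
foreach (var inner in result)
    Console.WriteLine("{" + String.Join(",", inner) + "}");
// {1,2,3}
// {1,1,2}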
You can implement an IEqualityComparer<List<int>> and use it with LINQ's Distinct method.
public class CustomEqualityComparer : IEqualityComparer<List<int>>
{
public bool Equals(List<int> x, List<int> y)
{
if (x.Count != y.Count)
return false;
for (int i = 0; i < x.Count; i++)
{
if (x[i] != y[i])
return false;
}
return true;
}
public int GetHashCode(List<int> obj)
{
return 0;
}
}
and use it like this
public static void Main(string[] args)
{
var list = new List<List<int>>() { new List<int> { 1, 1, 2 }, new List<int> { 1, 2, 3 }, new List<int> { 1, 1, 2 } };
var res = list.Distinct(new CustomEqualityComparer());
Console.WriteLine("Press any key to continue.");
Console.ReadLine();
}
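A note on the comparer: returning 0 from GetHashCode is valid, but it forces Distinct to call Equals for every candidate pair. If that matters for larger inputs, a content-based hash could be used instead; this is only a sketch and not part of the original answer:
public int GetHashCode(List<int> obj)
{
    // combine the element values so that equal lists produce equal hash codes
    unchecked
    {
        int hash = 17;
        foreach (int value in obj)
            hash = hash * 31 + value;
        return hash;
    }
}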
It's very simple:
List<List<int>> lst = new List<List<int>>()
{
new List<int> {1,2,3},
new List<int> {1,1,2,},
new List<int> {1,2,3},
};
var result =
lst
.Where((xs, n) =>
!lst
.Skip(n + 1)
.Any(ys => xs.SequenceEqual(ys)))
.ToList();
I get this result: {{1,1,2}, {1,2,3}} (the last occurrence of each duplicate is the one that is kept).
What you want is more complicated than a simple comparison.
In my view you should create a new type / class like
IntegerCollection : ICollection
Then you should implement Equals (for example via IEquatable<IntegerCollection>, so that Distinct picks it up) like this:
bool Equals(IntegerCollection col)
{
    if (this.Count() != col.Count())
        return false;
    if (this.Sum() != col.Sum())
        return false;
    for (int i = 0; i < this.Count(); i++)
    {
        if (this[i] != col[i])
            return false;
    }
    return true;
}
And finally
List<IntegerCollection> collections = new List<IntegerCollection>
{
    new IntegerCollection { 1, 2, 3 },
    new IntegerCollection { 1, 1, 2 },
    new IntegerCollection { 1, 2, 3 }
};
var distincts = collections.Distinct();
Using farid's CustomEqualityComparer from above, you can also make use of a HashSet:
List<List<int>> RemoveDuplicates(IEnumerable<List<int>> values)
{
return new HashSet<List<int>>(values,new CustomEqualityComparer()).ToList();
}
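Hypothetical usage, reusing the lst from the first answer:
var unique = RemoveDuplicates(lst);
// unique contains {1,2,3} and {1,1,2}; note that a HashSet does not guarantee ordering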
I have this code:
public List<int> Duplicates(List<int> sequence)
{
int[] countArr = new int[156];
foreach (int i in sequence)
{
countArr[i]++;
}
List<int> resultList = new List<int>();
for (var i = 0; i < countArr.Length; i++)
{
if (countArr[i] > 1)
{
resultList.Add(i);
}
}
return resultList;
}
This gets me the elements that are duplicated, but not how many times each element is duplicated.
Thanks in advance for any help provided.
EDIT
I do not want to use LINQ
Use GroupBy:
sequence.GroupBy(i => i).Select(g => new {Value = g.Key, Amount = g.Count()})
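If you only need the values that actually repeat, a possible filter on top of that query (a sketch, not from the original answer):
var duplicates = sequence.GroupBy(i => i)
    .Where(g => g.Count() > 1)
    .Select(g => new { Value = g.Key, Amount = g.Count() });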
If you don't want to use LINQ (why?), just collect the value and the count together in a Tuple:
List<Tuple<int,int>> resultList = new List<Tuple<int,int>>();
for (var i = 0; i < countArr.Length; i++)
{
if (countArr[i] > 1)
{
resultList.Add(Tuple.Create(i, countArr[i]));
}
}
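Reading the tuples back out could then look like this (just a sketch):
foreach (var pair in resultList)
    Console.WriteLine("{0} occurs {1} times", pair.Item1, pair.Item2);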
That's a rather complicated approach; I'd return a Dictionary<int, int> instead:
public static Dictionary<int, int> Duplicates(IEnumerable<int> sequence)
{
var duplicates = new Dictionary<int, int>();
foreach (int i in sequence)
{
if(duplicates.ContainsKey(i))
duplicates[i]++;
else
duplicates.Add(i, 1);
}
return duplicates;
}
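Note that this dictionary counts every value, not only the duplicated ones. If you want just the values that occur more than once, one way (a sketch that stays LINQ-free, as the question asks) is a second pass:
var onlyDuplicates = new Dictionary<int, int>();
foreach (var pair in Duplicates(sequence))
{
    if (pair.Value > 1)
        onlyDuplicates.Add(pair.Key, pair.Value);
}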
Your algorithm already produces the required counts, so all you need to do is to arrange returning them to the caller in some way. One approach is to change the return type to IList<KeyValuePair<int,int>>. The collection of pairs you return would contain the number in the Key property, and its count in the Value property:
IList<KeyValuePair<int,int>> Duplicates(List<int> sequence) {
var countArr = new int[156];
foreach (int i in sequence) {
countArr[i]++;
}
var resultList = new List<KeyValuePair<int,int>>();
for (var i = 0; i < countArr.Length; i++) {
if (countArr[i] > 1) {
resultList.Add(new KeyValuePair<int,int>(i, countArr[i]));
}
}
return resultList;
}
Simple answer with dictionary:
void Main()
{
List<int> intlist = new List<int>
{
1,
1,
1,
2,
2,
3,
4,
4,
4,
4
};
var dict = new Dictionary<int, int>();
foreach (var item in intlist)
{
if (!dict.ContainsKey(item)) // this checks for the existence of an item
{
dict.Add(item, 0); // this initialises the item in the dictionary
}
dict[item]++; // this will update the count of the item
}
// this is just for linqpad debug output and shows each value and their count
// this can be achieved with foreach
dict.Select(x => new { x.Key, x.Value}).Dump();
}
Yes, I know there is a Select at the bottom, but that has nothing to do with collecting the duplicates.
I've got a dictionary that for each key lists its dependencies:
parent[2] = 1 (2 depends on 1)
parent[3] = 1 (3 depends on 1)
parent[4] = {2,3} (4 depends on 2, or 4 depends on 3)
I want to build lists out of this dictionary:
[4,2,1]
[4,3,1]
I suspect I should use a recursive algorithm. Any hints?
EDIT: this is what I have so far:
How I call the recursive function:
var result = new List<List<Node<TData, TId>>>();
GetResult(parent, target, result);
return result;
And the recursive function itself:
private static List<Node<TData, TId>> GetResult<TData, TId>(Dictionary<Node<TData, TId>, List<Node<TData, TId>>> parent, Node<TData, TId> index,
List<List<Node<TData, TId>>> finalList)
where TData : IIdentifiable<TId>
where TId : IComparable
{
var newResult = new List<Node<TData, TId>> { index };
if (parent.ContainsKey(index))
{
if (parent[index].Count == 1)
{
return new List<Node<TData, TId>> { index, parent[index].First()};
}
foreach (var child in parent[index])
{
var temp = newResult.Union(GetResult(parent, child, finalList)).ToList();
finalList.Add(temp);
}
}
return newResult;
}
You could try to adapt for your needs the following code:
public static List<List<int>> FindParents(Dictionary<int, List<int>> parents, int index)
{
List<int> prefix = new List<int>();
List<List<int>> results = new List<List<int>>();
FindParentsInternal(parents, index, prefix, results);
return results;
}
private static void FindParentsInternal(Dictionary<int, List<int>> parents, int index,
List<int> prefix, List<List<int>> results)
{
var newPrefix = new List<int>(prefix) { index };
if (!parents.ContainsKey(index))
{
results.Add(newPrefix);
return;
}
parents[index].ForEach(i => FindParentsInternal(parents, i, newPrefix, results));
}
Usage:
Dictionary<int, List<int>> parents = new Dictionary<int, List<int>>
{
{ 2, new List<int> { 1 } },
{ 3, new List<int> { 1 } },
{ 4, new List<int> { 2, 3 } }
};
var t = FindParents(parents, 4);
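For the dictionary above, tracing the code suggests t will contain the two chains from the question: [4, 2, 1] and [4, 3, 1].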
You can benefit by keeping a dictionary of results - that way you won't need to keep recomputing them.
Dictionary<int, HashSet<int>> results = new Dictionary<int, HashSet<int>>();

HashSet<int> GetResult(int index)
{
    HashSet<int> dictResult;
    if (results.TryGetValue(index, out dictResult))
    {
        // result has already been computed
        return dictResult;
    }
    // compute result and store it in the dictionary
    HashSet<int> newResult = /* compute dependency set */;
    results[index] = newResult;
    return newResult;
}
As for the /* compute dependency set */ part, you can do something like the following:
HashSet<int> newResult = new HashSet<int> { index };
if (dict.ContainsKey(index))
{
    List<int> dependencies = dict[index];
    foreach (int subIndex in dependencies)
    {
        newResult.UnionWith(GetResult(subIndex));
    }
}
Your base case is when the index is not in dict (i.e. dict.ContainsKey returns false), e.g. 1 for the data you provided.
Say I have two lists with following entries
List<int> a = new List<int> { 1, 2, 5, 10 };
List<int> b = new List<int> { 6, 20, 3 };
I want to create another list c whose entries are the items taken alternately by position from the two lists. So list c would contain the following entries:
List<int> c = {1, 6, 2, 20, 5, 3, 10}
Is there a way to do it in .NET using LINQ? I was looking at .Zip() LINQ extension, but wasn't sure how to use it in this case.
Thanks in advance!
To do it using LINQ, you can use this piece of LINQPad example code:
void Main()
{
List<int> a = new List<int> { 1, 2, 5, 10 };
List<int> b = new List<int> { 6, 20, 3 };
var result = Enumerable.Zip(a, b, (aElement, bElement) => new[] { aElement, bElement })
.SelectMany(ab => ab)
.Concat(a.Skip(Math.Min(a.Count, b.Count)))
.Concat(b.Skip(Math.Min(a.Count, b.Count)));
result.Dump();
}
Output: 1, 6, 2, 20, 5, 3, 10
This will:
Zip the two lists together (which will stop when either runs out of elements)
Producing an array containing the two elements (one from a, another from b)
Using SelectMany to "flatten" this out to one sequence of values
Concatenate in the remainder from either list (only one or neither of the two calls to Concat should add any elements)
Now, having said that, personally I would've used this:
public static IEnumerable<T> Intertwine<T>(this IEnumerable<T> a, IEnumerable<T> b)
{
using (var enumerator1 = a.GetEnumerator())
using (var enumerator2 = b.GetEnumerator())
{
bool more1 = enumerator1.MoveNext();
bool more2 = enumerator2.MoveNext();
while (more1 && more2)
{
yield return enumerator1.Current;
yield return enumerator2.Current;
more1 = enumerator1.MoveNext();
more2 = enumerator2.MoveNext();
}
while (more1)
{
yield return enumerator1.Current;
more1 = enumerator1.MoveNext();
}
while (more2)
{
yield return enumerator2.Current;
more2 = enumerator2.MoveNext();
}
}
}
Reasons:
It doesn't enumerate a nor b more than once
I'm skeptical about the performance of Skip
It can work with any IEnumerable<T> and not just List<T>
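Hypothetical usage of Intertwine with the lists from the question:
var c = a.Intertwine(b).ToList(); // 1, 6, 2, 20, 5, 3, 10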
I'd create an extension method to do it.
public static List<T> MergeAll<T>(this List<T> first, List<T> second)
{
    int maxCount = (first.Count > second.Count) ? first.Count : second.Count;
    var ret = new List<T>();
    for (int i = 0; i < maxCount; i++)
    {
        if (i < first.Count)
            ret.Add(first[i]);
        if (i < second.Count)
            ret.Add(second[i]);
    }
    return ret;
}
This would iterate through both lists once. If one list is bigger than the other it will continue to add until it's done.
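With the question's sample lists, usage would look like this (a hypothetical example):
var c = a.MergeAll(b); // 1, 6, 2, 20, 5, 3, 10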
You could try this code:
List<int> c = a.Select((i, index) => new Tuple<int, int>(i, index * 2))
    .Union(b.Select((i, index) => new Tuple<int, int>(i, index * 2 + 1)))
    .OrderBy(t => t.Item2)
    .Select(t => t.Item1).ToList();
It makes a union of the two collections and then sorts that union by index. Elements from the first collection get even indices, elements from the second get odd ones.
Just wrote a little extension for this:
public static class MyEnumerable
{
public static IEnumerable<T> Smash<T>(this IEnumerable<T> one, IEnumerable<T> two)
{
using (IEnumerator<T> enumeratorOne = one.GetEnumerator(),
enumeratorTwo = two.GetEnumerator())
{
bool twoFinished = false;
while (enumeratorOne.MoveNext())
{
yield return enumeratorOne.Current;
if (!twoFinished && enumeratorTwo.MoveNext())
{
yield return enumeratorTwo.Current;
}
}
if (!twoFinished)
{
while (enumeratorTwo.MoveNext())
{
yield return enumeratorTwo.Current;
}
}
}
}
}
Usage:
var a = new List<int> { 1, 2, 5, 10 };
var b = new List<int> { 6, 20, 3 };
var c = a.Smash(b); // 1, 6, 2, 20, 5, 3, 10
var d = b.Smash(a); // 6, 1, 20, 2, 3, 5, 10
This will work for any IEnumerable so you can also do:
var a = new List<string> { "the", "brown", "jumped", "the", "lazy", "dog" };
var b = new List<string> { "quick", "fox", "over" };
var c = a.Smash(b); // the, quick, brown, fox, jumped, over, the, lazy, dog
You could use Concat and an anonymous type which you order by the index:
List<int> c = a
.Select((val, index) => new { val, index })
.Concat(b.Select((val, index) => new { val, index }))
.OrderBy(x => x.index)
.Select(x => x.val)
.ToList();
However, that's not really elegant, and it's also less efficient than:
c = new List<int>(a.Count + b.Count);
int max = Math.Max(a.Count, b.Count);
int aMax = a.Count;
int bMax = b.Count;
for (int i = 0; i < max; i++)
{
if(i < aMax)
c.Add(a[i]);
if(i < bMax)
c.Add(b[i]);
}
I wouldn't use LINQ at all.
Sorry for adding a third extension method inspired by the other two, but I like it shorter:
static IEnumerable<T> Intertwine<T>(this IEnumerable<T> a, IEnumerable<T> b)
{
using (var enumerator1 = a.GetEnumerator())
using (var enumerator2 = b.GetEnumerator()) {
bool more1 = true, more2 = true;
do {
if (more1 && (more1 = enumerator1.MoveNext()))
yield return enumerator1.Current;
if (more2 && (more2 = enumerator2.MoveNext()))
yield return enumerator2.Current;
} while (more1 || more2);
}
}
I have a list of lists that contain integers (this list can be any length and can contain any number of integers):
{{1,2}, {3,4}, {2,4}, {9,10}, {9,12,13,14}}
What I want to do next is combine the lists where any integer matches any integer from any other list, in this case:
result = {{1,2,3,4}, {9,10,12,13,14}}
I have tried many different approaches but am stuck for an elegant solution.
If you just mean "combine when there's an intersection", then maybe something like below, with output:
{1,2,3,4}
{9,10,12}
noting that it also passes the test in your edit, with output:
{1,2,3,4}
{9,10,12,13,14}
Code:
static class Program {
static void Main()
{
var sets = new SetCombiner<int> {
{1,2},{3,4},{2,4},{9,10},{9,12}
};
sets.Combine();
foreach (var set in sets)
{
// edited for unity: original implementation
// Console.WriteLine("{" +
// string.Join(",", set.OrderBy(x => x)) + "}");
StringBuilder sb = new StringBuilder();
foreach(int i in set.OrderBy(x => x)) {
if(sb.Length != 0) sb.Append(',');
sb.Append(i);
}
Console.WriteLine("{" + sb + "}");
}
}
}
class SetCombiner<T> : List<HashSet<T>>
{
public void Add(params T[] values)
{
Add(new HashSet<T>(values));
}
public void Combine()
{
int priorCount;
do
{
priorCount = this.Count;
for (int i = Count - 1; i >= 0; i--)
{
if (i >= Count) continue; // watch we haven't removed
int formed = i;
for (int j = formed - 1; j >= 0; j--)
{
if (this[formed].Any(this[j].Contains))
{ // an intersection exists; merge and remove
this[j].UnionWith(this[formed]);
this.RemoveAt(formed);
formed = j;
}
}
}
} while (priorCount != this.Count); // making progress
}
}
Build a custom comparer:
public class CusComparer : IComparer<int[]>
{
public int Compare(int[] x, int[] y)
{
x = x.OrderBy(i => i).ToArray();
y = y.OrderBy(i => i).ToArray();
for (int i = 0; i < Math.Min(x.Length, y.Length); i++ )
{
if (x[i] < y[i]) return -1;
if (x[i] > y[i]) return 1;
}
if (x.Length < y.Length) return -1;
if (x.Length > y.Length) return 1;
return 0;
}
}
Then, order by custom comparer first:
List<int[]> input = new List<int[]>()
{
new[] { 3, 4 }, new[] { 1, 2 }, new[] { 2, 4 },
new[] { 9, 10 }, new[] { 9, 12 }
};
var orderedInput = input.OrderBy(x => x, new CusComparer()).ToList();
Use Intersect.Any() to check:
List<int[]> output = new List<int[]>();
int[] temp = orderedInput[0];
foreach (var arr in orderedInput.Skip(1))
{
if (temp.Intersect(arr).Any())
temp = temp.Union(arr).ToArray();
else
{
output.Add(temp);
temp = arr;
}
}
output.Add(temp);
Here's a simple, flexible solution using LINQ's Aggregate:
void Main()
{
var ints = new []{new []{1,2},new []{3,4},new []{2,4},new []{9,10},new []{9,12}};
var grouped = ints.Aggregate(new List<HashSet<int>>(), Step);
foreach(var bucket in grouped)
Console.WriteLine(String.Join(",", bucket.OrderBy(b => b)));
}
static List<HashSet<T>> Step<T>(List<HashSet<T>> all, IEnumerable<T> current)
{
var bucket = new HashSet<T>();
foreach (var c in current)
bucket.Add(c);
foreach (var i in all.Where(b => b.Overlaps(bucket)).ToArray())
{
bucket.UnionWith(i);
all.Remove(i);
}
all.Add(bucket);
return all;
}
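For the sample input above, tracing the code gives the buckets 1,2,3,4 and 9,10,12.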
We maintain a list of resulting sets (1). For each source set, remove resulting sets that intersect it (2), and add a new resulting set (3) that is the union of the removed sets and the source set (4):
class Program {
static IEnumerable<IEnumerable<T>> CombineSets<T>(
IEnumerable<IEnumerable<T>> sets,
IEqualityComparer<T> eq
) {
var result_sets = new LinkedList<HashSet<T>>(); // 1
foreach (var set in sets) {
var result_set = new HashSet<T>(eq); // 3
foreach (var element in set) {
result_set.Add(element); // 4
var node = result_sets.First;
while (node != null) {
var next = node.Next;
if (node.Value.Contains(element)) { // 2
result_set.UnionWith(node.Value); // 4
result_sets.Remove(node); // 2
}
node = next;
}
}
result_sets.AddLast(result_set); // 3
}
return result_sets;
}
static IEnumerable<IEnumerable<T>> CombineSets<T>(
IEnumerable<IEnumerable<T>> src
) {
return CombineSets(src, EqualityComparer<T>.Default);
}
static void Main(string[] args) {
var sets = new[] {
new[] { 1, 2 },
new[] { 3, 4 },
new[] { 2, 4 },
new[] { 9, 10 },
new[] { 9, 12, 13, 14 }
};
foreach (var result in CombineSets(sets))
Console.WriteLine(
"{{{0}}}",
string.Join(",", result.OrderBy(x => x))
);
}
}
This prints:
{1,2,3,4}
{9,10,12,13,14}
OK, I LINQed this up! Hope this is what you wanted... it's a crazy one ;)
void Main()
{
var matches = new List<List<ComparissonItem>> { /*Your Items*/ };
var overall =
from match in matches
let matchesOne =
(from searchItem in matches
where searchItem.Any(item => match.Any(val => val.Matches(item) && !val.Equals(item)))
select searchItem)
where matchesOne.Any()
select
matchesOne.Union(new List<List<ComparissonItem>> { match })
.SelectMany(item => item);
var result = overall.Select(item => item.ToHashSet());
}
static class Extensions
{
public static HashSet<T> ToHashSet<T>(this IEnumerable<T> enumerable)
{
return new HashSet<T>(enumerable);
}
}
class ComparissonItem
{
public int Value { get; set; }
public bool Matches(ComparissonItem item)
{
/* Your matching logic*/
}
public override bool Equals(object obj)
{
var other = obj as ComparissonItem;
return other == null ? false : this.Value == other.Value;
}
public override int GetHashCode()
{
return this.Value.GetHashCode();
}
}
Suppose I have this number list:
List<int> nu = new List<int>();
nu.Add(2);
nu.Add(1);
nu.Add(3);
nu.Add(5);
nu.Add(2);
nu.Add(1);
nu.Add(1);
nu.Add(3);
Keeping the list items in the same order, is it possible in LINQ to group consecutive items so that each group's sum does not exceed 6? The results would be something like this:
2,1,3 - 5 - 2,1,1 - 3
Solving this with LINQ directly would be bothersome; instead you could make an extension method:
// Assumptions:
// (1) All non-negative, or at least you don't mind them in your sum
// (2) Items greater than the sum are returned by their lonesome
static IEnumerable<IEnumerable<int>> GroupBySum(this IEnumerable<int> source,
int sum)
{
var running = 0;
var items = new List<int>();
foreach (var x in source)
{
if (running + x > sum && items.Any())
{
yield return items;
items = new List<int>();
running = 0;
}
running += x;
items.Add(x);
}
if (items.Any()) yield return items;
}
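A hypothetical usage with the list from the question:
var groups = nu.GroupBySum(6);
foreach (var g in groups)
    Console.WriteLine(string.Join(",", g));
// 2,1,3
// 5
// 2,1,1
// 3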
You could do it with Aggregate, like this (side note: use LINQPad to test and write these kinds of queries; it makes it easy). For the list above it gives the groups 2,1,3 - 5 - 2,1,1 - 3:
class Less7Holder
{
public List<int> g = new List<int>();
public int mySum = 0;
}
void Main()
{
List<int> nu = new List<int>();
nu.Add(2);
nu.Add(1);
nu.Add(3);
nu.Add(5);
nu.Add(2);
nu.Add(1);
nu.Add(1);
nu.Add(3);
var result = nu.Aggregate(
new LinkedList<Less7Holder>(),
(holder,inItem) =>
{
if ((holder.Last == null) || (holder.Last.Value.mySum + inItem >= 7))
{
Less7Holder t = new Less7Holder();
t.g.Add(inItem);
t.mySum = inItem;
holder.AddLast(t);
}
else
{
holder.Last.Value.g.Add(inItem);
holder.Last.Value.mySum += inItem;
}
return holder;
},
(holder) => { return holder.Select((h) => h.g );} );
result.Dump();
}