I have 3 sets in Linq, like this:
struct Index
{
    string code;
    int indexValue;
}

List<Index> reviews;
List<Index> products;
List<Index> pages;
These lists contain different codes.
I want to merge these sets as following:
Take the first in reviews
Take the first in products
Take the first in pages
Take the second in reviews
... and so on. Note that these lists are not the same size.
How can I do this in Linq?
EDIT: Wait, is there a chance to do this without .NET 4.0?
Thank you very much
You could use Zip to do your bidding.
var trios = reviews
.Zip(products, (r, p) => new { Review = r, Product = p })
.Zip(pages, (rp, p) => new { rp.Review, rp.Product, Page = p });
Edit:
For .NET 3.5, it's possible to implement Zip quite easily, but there are a few gotchas. Jon Skeet has a great post series on how to implement the LINQ to Objects operators (for educational purposes), including this post on Zip. The source code of the whole series, edulinq, can be found on Google Code.
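For illustration, a minimal sketch of such an implementation might look like the following (this is a simplified version without the argument validation the edulinq article discusses, and ZipExtensions is just a placeholder class name):
public static class ZipExtensions
{
    public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
        this IEnumerable<TFirst> first,
        IEnumerable<TSecond> second,
        Func<TFirst, TSecond, TResult> resultSelector)
    {
        // Enumerate both sequences in lockstep and stop at the shorter one.
        using (var e1 = first.GetEnumerator())
        using (var e2 = second.GetEnumerator())
        {
            while (e1.MoveNext() && e2.MoveNext())
                yield return resultSelector(e1.Current, e2.Current);
        }
    }
}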
The simple answer
To merge them into a common list without any common data, using the order in which they appear, you can use the Zip method:
var rows = reviews
.Zip(products, (r, p) => new { Review = r, Product = p })
.Zip(pages, (rp, page) => new { rp.Review, rp.Product, Page = page });
The problem with this solution is that the lists must all be the same length, or the result will be truncated to the length of the shortest of the three original lists.
Edit:
If you can't use .NET 4, check out Jon Skeet's blog posts on a clean-room implementation of LINQ, and his article on Zip in particular.
If you're using .NET 2, then try his library, or possibly LinqBridge.
How to deal with different-lengthed lists
You can pre-pad the list to the desired length. I couldn't find an existing method to do this, so I'd use an extension method:
public static class EnumerableExtensions
{
    public static IEnumerable<T> Pad<T>(this IEnumerable<T> source,
        int desiredCount, T padWith = default(T))
    {
        // Note: not using source.Count() to avoid double-enumeration
        int counter = 0;
        using (var enumerator = source.GetEnumerator())
        {
            while (counter < desiredCount)
            {
                yield return enumerator.MoveNext()
                    ? enumerator.Current
                    : padWith;
                ++counter;
            }
        }
    }
}
You can use it like this:
var paddedReviews = reviews.Pad(desiredLength);
var paddedProducts = products.Pad(desiredLength,
    new Product { Value2 = DateTime.Now });
Full compiling sample and corresponding output
using System;
using System.Collections.Generic;
using System.Linq;
class Review
{
    public string Value1;
}

class Product
{
    public DateTime Value2;
}

class Page
{
    public int Value3;
}
public static class EnumerableExtensions
{
    public static IEnumerable<T> Pad<T>(this IEnumerable<T> source,
        int desiredCount, T padWith = default(T))
    {
        int counter = 0;
        using (var enumerator = source.GetEnumerator())
        {
            while (counter < desiredCount)
            {
                yield return enumerator.MoveNext()
                    ? enumerator.Current
                    : padWith;
                ++counter;
            }
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        var reviews = new List<Review>
        {
            new Review { Value1 = "123" },
            new Review { Value1 = "456" },
            new Review { Value1 = "789" },
        };
        var products = new List<Product>()
        {
            new Product { Value2 = DateTime.Now },
            new Product { Value2 = DateTime.Now.Subtract(TimeSpan.FromSeconds(5)) },
        };
        var pages = new List<Page>()
        {
            new Page { Value3 = 123 },
        };

        int maxCount = Math.Max(Math.Max(reviews.Count, products.Count), pages.Count);

        var rows = reviews.Pad(maxCount)
            .Zip(products.Pad(maxCount), (r, p) => new { Review = r, Product = p })
            .Zip(pages.Pad(maxCount), (rp, page) => new { rp.Review, rp.Product, Page = page });

        foreach (var row in rows)
        {
            Console.WriteLine("{0} - {1} - {2}"
                , row.Review != null ? row.Review.Value1 : "(null)"
                , row.Product != null ? row.Product.Value2.ToString() : "(null)"
                , row.Page != null ? row.Page.Value3.ToString() : "(null)"
            );
        }
    }
}
123 - 9/7/2011 10:02:22 PM - 123
456 - 9/7/2011 10:02:17 PM - (null)
789 - (null) - (null)
On use of the Join tag
This operation isn't a logical Join. This is because you're matching on index, not on any data out of each object. Each object would have to have other data in common (besides their position in the lists) to be joined in the sense of a Join that you would find in a relational database.
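For contrast, a join in the relational sense would need a shared key rather than a shared position. A sketch of what that would look like (assuming a hypothetical ProductId property, which the types above don't actually have):
// Hypothetical: only works if both types carried a common ProductId key.
var joined = from r in reviews
             join p in products on r.ProductId equals p.ProductId
             select new { Review = r, Product = p };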
The code below finds all indices of a string that may occur only once in an array, but it isn't very fast. Does somebody know a faster, more efficient way to find unique strings in an array?
using System;
using System.Collections.Generic;
using System.Linq;
public static class EM
{
    // Extension method, using LINQ to find indices.
    public static int[] FindAllIndicesOf<T>(this IEnumerable<T> values, T val)
    {
        return values.Select((b, i) => Equals(b, val) ? i : -1).Where(i => i != -1).ToArray();
    }
}
public class Program
{
    public static string FindFirstUniqueName(string[] names)
    {
        var results = new List<string>();
        for (var i = 0; i < names.Length; i++)
        {
            var matchedIndices = names.FindAllIndicesOf(names[i]);
            if (matchedIndices.Length == 1)
            {
                results.Add(names[matchedIndices[0]]);
                break;
            }
        }
        return results.Count > 0 ? results[0] : null;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("Found: " + FindFirstUniqueName(new[]
        {
            "James",
            "Bill",
            "Helen",
            "Bill",
            "Helen",
            "Giles",
            "James",
        }));
    }
}
Your solution has O(n^2) complexity. You can improve it to O(n) by using a hash map.
Consider a hash map in which each name maps to its number of occurrences in your original list. Now all you have to do is check every key in the dictionary (a.k.a. hash map) and return the ones whose count equals 1. Notice that checking every key in this dictionary is at most O(n), because it cannot hold more than n names.
To implement this dictionary in C#, you do as follows:
List<string> stuff = new List<string>();
var groups = stuff.GroupBy(s => s)
    .Select(s => new { Stuff = s.Key, Count = s.Count() });
var dictionary = groups.ToDictionary(g => g.Stuff, g => g.Count);
Taken from here, or as suggested by juharr.
O(n) is the minimum required, as you will have to go over all names at least once.
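Applied to the original problem, a minimal sketch of the O(n) version (keeping the FindFirstUniqueName signature from above) could look like this:
public static string FindFirstUniqueName(string[] names)
{
    // Count the occurrences of each name once (O(n)).
    var counts = names.GroupBy(n => n).ToDictionary(g => g.Key, g => g.Count());
    // Return the first name that occurs exactly once, or null if there is none.
    return names.FirstOrDefault(n => counts[n] == 1);
}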
Suppose I have 2 lists: one containing strings, one containing integers; they differ in length. The application I am building will use these lists to generate combinations of vehicles and coverage areas. The strings represent area names and the ints represent vehicle IDs.
My goal is to generate a list of all possible unique combinations used for further investigation. One vehicle can service many areas, but one area can't be served by multiple vehicles. Every area must receive service, and every vehicle must be used.
So to conclude the constraints:
Every area is used only once
Every vehicle is used at least once
No area can be left out.
No vehicle can be left out
Here is an example:
public class record
{
    public string areaId { get; set; }
    public int vehicleId { get; set; }
}
List<string> areas = new List<string>{ "A","B","C","D"};
List<int> vehicles = new List<int>{ 1,2};
List<List<record>> uniqueCombinationLists = retrieveUniqueCombinations(areas,vehicles);
I just have no clue how to write the retrieveUniqueCombinations function. Maybe I am just looking at it wrong or thinking too hard. I am stuck thinking about massive loops and other brute-force approaches. An explanation of a better approach would be much appreciated.
The results should resemble something like this; I think it contains all the possibilities for this example:
A1;B1;C1;D2
A1;B1;C2;D1
A1;B2;C1;D1
A2;B1;C1;D1
A2;B2;C2;D1
A2;B2;C1;D2
A2;B1;C2;D2
A1;B2;C2;D2
A2;B1;C1;D2
A1;B2;C2;D1
A2;B2;C1;D1
A1;B1;C2;D2
A2;B1;C2;D1
A1;B2;C1;D2
Here's something I threw together that may or may not work. Borrowing heavily from dtb's work on this answer.
Basically, I generate them all, then remove the ones that don't meet the requirements.
List<string> areas = new List<string> { "A", "B", "C", "D" };
List<int> vehicles = new List<int> { 1, 2 };

var result = retrieveUniqueCombinations(areas, vehicles);

result.ToList().ForEach((recordList) => {
    recordList.ToList().ForEach((record) =>
        Console.Write("{0}{1};", record.areaId, record.vehicleId));
    Console.WriteLine();
});
public IEnumerable<IEnumerable<record>> retrieveUniqueCombinations(IEnumerable<string> areas, IEnumerable<int> vehicles)
{
    var items = from a in areas
                from v in vehicles
                select new record { areaId = a, vehicleId = v };

    var result = items.GroupBy(i => i.areaId).CartesianProduct().ToList();

    result.RemoveAll((records) =>
        records.All(record =>
            record.vehicleId == records.First().vehicleId));

    return result;
}
public class record
{
    public string areaId { get; set; }
    public int vehicleId { get; set; }
}
static class Extensions
{
    public static IEnumerable<IEnumerable<T>> CartesianProduct<T>(
        this IEnumerable<IEnumerable<T>> sequences)
    {
        IEnumerable<IEnumerable<T>> emptyProduct = new[] { Enumerable.Empty<T>() };
        return sequences.Aggregate(
            emptyProduct,
            (accumulator, sequence) =>
                from accseq in accumulator
                from item in sequence
                select accseq.Concat(new[] { item }));
    }
}
This produces the following:
A1;B1;C1;D2;
A1;B1;C2;D1;
A1;B1;C2;D2;
A1;B2;C1;D1;
A1;B2;C1;D2;
A1;B2;C2;D1;
A1;B2;C2;D2;
A2;B1;C1;D1;
A2;B1;C1;D2;
A2;B1;C2;D1;
A2;B1;C2;D2;
A2;B2;C1;D1;
A2;B2;C1;D2;
A2;B2;C2;D1;
Note that these are not in the same order as yours, but I'll leave the verification to you. Also, there's likely a better way of doing this (for instance, by moving the logic from the RemoveAll step into the CartesianProduct function), but hey, you get what you pay for ;).
So let's use some helper methods to convert numbers into IEnumerable<int> digit enumerations in different bases. It may be more efficient to use List<>, but since we are trying to use LINQ:
public static IEnumerable<int> LeadingZeros(this IEnumerable<int> digits, int minLength) {
    var dc = digits.Count();
    if (dc < minLength) {
        for (int j1 = 0; j1 < minLength - dc; ++j1)
            yield return 0;
    }
    foreach (var j2 in digits)
        yield return j2;
}

public static IEnumerable<int> ToBase(this int num, int numBase) {
    IEnumerable<int> ToBaseRev(int n, int nb) {
        do {
            yield return n % nb;
            n /= nb;
        } while (n > 0);
    }

    foreach (var n in ToBaseRev(num, numBase).Reverse())
        yield return n;
}
Now we can create an enumeration that lists all the possible answers (and a few extras). I converted the Lists to Arrays for indexing efficiency.
var areas = new List<string> { "A", "B", "C", "D" };
var vehicles = new List<int> { 1, 2 };
var areasArray = areas.ToArray();
var vehiclesArray = vehicles.ToArray();
var numVehicles = vehiclesArray.Length;
var numAreas = areasArray.Length;
var NumberOfCombos = Convert.ToInt32(Math.Pow(numVehicles, numAreas));
var ansMap = Enumerable.Range(0, NumberOfCombos).Select(n => new { n, nd = n.ToBase(numVehicles).LeadingZeros(numAreas)});
Given the enumeration of the possible combinations, we can convert into areas and vehicles and exclude the ones that don't use all vehicles.
var ans = ansMap
    .Select(nnd => nnd.nd)
    .Select(m => m.Select((d, i) => new { a = areasArray[i], v = vehiclesArray[d] }))
    .Where(avc => avc.Select(av => av.v).Distinct().Count() == numVehicles);
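To print the combinations in the same A1;B1;... format as the question, a short sketch such as this should work:
// Each combo is a sequence of { a, v } pairs; join them as "A1;B1;C1;D2".
foreach (var combo in ans)
    Console.WriteLine(string.Join(";", combo.Select(av => av.a + av.v)));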
Take a look at this sample object,
public class Demo
{
    public string DisplayName { get; set; }
    public int Code1 { get; set; }
    public int Code2 { get; set; }
    ...
}
and let's say I want to put all codes (Code1, Code2) into one list (IEnumerable)... one way is this one:
var codes = demoList.Select(item => item.Code1).ToList();
codes.AddRange(demoList.Select(item => item.Code2));
//var uniqueCodes = codes.Distinct(); // optional
I know this is neither a nice nor an optimal solution, so I am curious to know what a better approach (best practice) would be.
How about with SelectMany:
var codes = demoList.SelectMany(item => new[] { item.Code1, item.Code2 });
By the way, the idiomatic way of doing a concatenation in LINQ is with Concat:
var codes = demoList.Select(item => item.Code1)
.Concat(demoList.Select(item => item.Code2));
LINQ is not a silver bullet for everything.
For your intent I'd propose the following:
var codes = new List<int>(demoList.Count * 2);
foreach (var demo in demoList)
{
    codes.Add(demo.Code1);
    codes.Add(demo.Code2);
}
BENCHMARK
I did a benchmark iterating lists of 1 million and 1 thousand instances with my solution and Ani's:
Amount: 1 million
Mine : 2ms
Ani's: 20ms
Amount: 1000 items
Mine : 1ms
Ani's: 12ms
The sample code:
List<MyClass> list = new List<MyClass>(1000);
for (int i = 0; i < 100000; i++)
{
    list.Add(new MyClass
    {
        Code1 = i,
        Code2 = i * 2,
    });
}

System.Diagnostics.Stopwatch timer1 = System.Diagnostics.Stopwatch.StartNew();
var resultLinq = list.SelectMany(item => new[] { item.Code1, item.Code2 }).ToList();
Console.WriteLine("Ani's: {0}", timer1.ElapsedMilliseconds);

System.Diagnostics.Stopwatch timer2 = System.Diagnostics.Stopwatch.StartNew();
var codes = new List<int>(list.Count * 2);
foreach (var item in list)
{
    codes.Add(item.Code1);
    codes.Add(item.Code2);
}
Console.WriteLine("Mine : {0}", timer2.ElapsedMilliseconds);
// this won't return duplicates so no need to use Distinct.
var codes = demoList.Select(i=> i.Code1)
.Union(demoList.Select(i=>i.Code2));
Edited just for completeness (see Ani's answer) after some comments:
// Optionally use .Distinct()
var codes = demoList.Select(i=>i.Code1)
.Concat(demoList.Select(i=>i.Code2))
.Distinct();
Even though the code you have written is perfect, I am just giving you another option.
Try this:
var output = Enumerable.Concat(
    demoList.Select(item => item.Code1).ToList(),
    demoList.Select(item => item.Code2).ToList()).ToList();
Luis' answer is good enough for me, but I refactored it, using an extension method that handles any number of fields... and the optimal result is still Luis' answer (example with 100,000 records):
Ani's: 21
Luis: 4
Jaider's: 15
Here is my extension method:
public static IEnumerable<T> SelectExt<R, T>(this IEnumerable<R> list, params Func<R, T>[] GetValueList)
{
    var result = new List<T>(list.Count() * GetValueList.Length);
    foreach (var item in list)
    {
        foreach (var getValue in GetValueList)
        {
            var value = getValue(item);
            result.Add(value);
        }
    }
    return result;
}
The usage will be:
var codes = demoList.SelectExt(item => item.Code1, item => item.Code2).ToList();
Right now, I have a class called TrainingPlan that looks like this:
public class TrainingPlan
{
    public int WorkgroupId { get; set; }
    public int AreaId { get; set; }
}
I'm given an array of these instances, and need to load the matching training plans from the database. The WorkgroupId and AreaId basically form a compound key. What I'm doing now is looping through each TrainingPlan like so:
foreach (TrainingPlan plan in plans)
    LoadPlan(pid, plan.AreaId, plan.WorkgroupId);
Then, LoadPlan has a LINQ query to load the individual plan:
var q = from tp in context.TPM_TRAININGPLAN.Include("TPM_TRAININGPLANSOLUTIONS")
        where tp.PROJECTID == pid && tp.AREAID == areaid &&
              tp.WORKGROUPID == workgroupid
        select tp;

return q.FirstOrDefault();
The Problem:
This works; however, it's very slow for a large array of plans. I believe this could be much faster if I could perform a single LINQ query to load every matching TPM_TRAININGPLAN at once.
My Question:
Given an array of TrainingPlan objects, how can I load every matching WorkgroupId/AreaId combination at once? This query should translate into similar SQL syntax:
SELECT * FROM TPM_TRAININGPLANS
WHERE (AREAID, WORKGROUPID) IN ((1, 2), (3, 4), (5, 6), (7, 8));
I've used Contains to run a bulk filter similar to WHERE IN. I set up a rough approximation of your scenario. The single-select queries actually ran quicker than Contains did. I recommend running a similar test on your end with the DB tied in to see how your results wind up; ideally see how it scales too. I'm running .NET 4.0 in Visual Studio 2012. I jammed in ToList() calls to push past potential lazy-loading problems.
public class TrainingPlan
{
    public int WorkgroupId { get; set; }
    public int AreaId { get; set; }

    public TrainingPlan(int workGroupId, int areaId)
    {
        WorkgroupId = workGroupId;
        AreaId = areaId;
    }
}
public class TrainingPlanComparer : IEqualityComparer<TrainingPlan>
{
    public bool Equals(TrainingPlan x, TrainingPlan y)
    {
        // Check whether the compared objects have the same key values.
        if (x.WorkgroupId == y.WorkgroupId && x.AreaId == y.AreaId)
            return true;
        return false;
    }

    public int GetHashCode(TrainingPlan trainingPlan)
    {
        if (ReferenceEquals(trainingPlan, null))
            return 0;
        int wgHash = trainingPlan.WorkgroupId.GetHashCode();
        int aHash = trainingPlan.AreaId.GetHashCode();
        return wgHash ^ aHash;
    }
}
internal class Class1
{
    private static void Main()
    {
        var plans = new List<TrainingPlan>
        {
            new TrainingPlan(1, 2),
            new TrainingPlan(1, 3),
            new TrainingPlan(2, 1),
            new TrainingPlan(2, 2)
        };
        var filter = new List<TrainingPlan>
        {
            new TrainingPlan(1, 2),
            new TrainingPlan(1, 3),
        };

        Stopwatch resultTimer1 = new Stopwatch();
        resultTimer1.Start();
        var results = plans.Where(plan => filter.Contains(plan, new TrainingPlanComparer())).ToList();
        resultTimer1.Stop();
        Console.WriteLine("Elapsed Time for filtered result {0}", resultTimer1.Elapsed);
        Console.WriteLine("Result count: {0}", results.Count());
        foreach (var item in results)
        {
            Console.WriteLine("WorkGroup: {0}, Area: {1}", item.WorkgroupId, item.AreaId);
        }

        resultTimer1.Reset();
        resultTimer1.Start();
        var result1 = plans.Where(p => p.AreaId == filter[0].AreaId && p.WorkgroupId == filter[0].WorkgroupId).ToList();
        var result2 = plans.Where(p => p.AreaId == filter[1].AreaId && p.WorkgroupId == filter[1].WorkgroupId).ToList();
        resultTimer1.Stop();
        Console.WriteLine("Elapsed time for single query result: {0}", resultTimer1.Elapsed); // single query is faster
        Console.ReadLine();
    }
}
It seems to me that using Intersect() may get this done the way that you want. But, I don't have an environment set up to test this myself.
var q = (from tp in context.TPM_TRAININGPLAN.Include("TPM_TRAININGPLANSOLUTIONS")
         where pid == tp.PROJECTID
         select tp)
        .Intersect
        (from tp in context.TPM_TRAININGPLAN.Include("TPM_TRAININGPLANSOLUTIONS")
         where plans.Any(p => p.AreaId == tp.AREAID)
         select tp)
        .Intersect
        (from tp in context.TPM_TRAININGPLAN.Include("TPM_TRAININGPLANSOLUTIONS")
         where plans.Any(p => p.WorkgroupId == tp.WORKGROUPID)
         select tp);
My only concern might be that Intersect could cause it to load more records into memory than you would want, but I'm unable to test to confirm whether that's the case.
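If that is a concern, another hedged option (a sketch, untested against a real context) is to over-fetch with two Contains filters, which Entity Framework can usually translate to SQL IN clauses, and then keep only the exact (AreaId, WorkgroupId) pairs in memory:
// Over-fetch: anything whose AREAID or WORKGROUPID appears anywhere in the plans array.
var areaIds = plans.Select(p => p.AreaId).ToList();
var workgroupIds = plans.Select(p => p.WorkgroupId).ToList();

var candidates = (from tp in context.TPM_TRAININGPLAN.Include("TPM_TRAININGPLANSOLUTIONS")
                  where tp.PROJECTID == pid
                        && areaIds.Contains(tp.AREAID)
                        && workgroupIds.Contains(tp.WORKGROUPID)
                  select tp).ToList();

// The broad filter can match unintended (area, workgroup) combinations,
// so filter down to the exact pairs on the client.
var wanted = new HashSet<string>(plans.Select(p => p.AreaId + "/" + p.WorkgroupId));
var result = candidates.Where(tp => wanted.Contains(tp.AREAID + "/" + tp.WORKGROUPID)).ToList();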
Often I find myself filling ASP.NET repeaters with items that need the CSS class set depending on index: 'first' for index 0, 'last' for index (length-1), and 'mid' in the middle:
_repeater.DataSource = from f in foos
                       select new
                       {
                           ...,
                           CssClass = MakeCssClass( foos, f )
                       };
private static string MakeCssClass( Foo[] foos, Foo f )
{
    var index = Array.IndexOf( foos, f );
    if( index == 0 )
    {
        return "first";
    }
    else if( index == foos.Length - 1 )
    {
        return "last";
    }
    else
    {
        return "mid";
    }
}
Is there a nicer way I can achieve this (e.g. using lambda functions)? If I try, I get CS0828, "Cannot assign lambda expression to anonymous type property".
You might be interested in my SmartEnumerable type in MiscUtil.
From the usage page, there's an example:
using System;
using System.Collections.Generic;
using MiscUtil.Collections;

class Example
{
    static void Main(string[] args)
    {
        List<string> list = new List<string>();
        list.Add("a");
        list.Add("b");
        list.Add("c");
        list.Add("d");
        list.Add("e");

        foreach (SmartEnumerable<string>.Entry entry in
                 new SmartEnumerable<string>(list))
        {
            Console.WriteLine ("{0,-7} {1} ({2}) {3}",
                               entry.IsLast ? "Last ->" : "",
                               entry.Value,
                               entry.Index,
                               entry.IsFirst ? "<- First" : "");
        }
    }
}
With implicitly typed variables and a bit more type inference the syntax could be tidied up quite easily. I must get round to that some time, but the basics are already there.
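For illustration, the tidied-up usage might look something like this (AsSmartEnumerable is a hypothetical extension-method name used purely to sketch the idea, not necessarily the actual MiscUtil API):
// Hypothetical extension method plus var: same Entry data, less ceremony.
foreach (var entry in list.AsSmartEnumerable())
{
    Console.WriteLine("{0,-7} {1} ({2}) {3}",
                      entry.IsLast ? "Last ->" : "",
                      entry.Value,
                      entry.Index,
                      entry.IsFirst ? "<- First" : "");
}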
Here's a clever way to get those mids - Skip, Reverse, Skip (what is this, UNO?).
List<SomeClass> myList = foos
    .Select(f => new SomeClass { ..., CssClass = string.Empty })
    .ToList();

if (myList.Any())
{
    myList.First().CssClass = "first";
    myList.Last().CssClass = "last";
    foreach (var z in myList.Skip(1).Reverse().Skip(1))
    {
        z.CssClass = "mid";
    }
}

_repeater.DataSource = myList;
Here's a better way for this problem statement.
List<SomeClass> myList = foos
    .Select(f => new SomeClass { ..., CssClass = "mid" })
    .ToList();

if (myList.Any())
{
    myList.First().CssClass = "first";
    myList.Last().CssClass = "last";
}

_repeater.DataSource = myList;
Of course, neither of these techniques will work if you are using anonymous types (their properties are read-only). Don't use anonymous types for query results.
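For completeness, a sketch of an alternative that avoids both Array.IndexOf and the mutation problem: compute the class inside the projection with the index-aware Select overload (an anonymous type is fine here, because CssClass never needs to change afterwards):
_repeater.DataSource = foos.Select((f, i) => new
{
    // ... other properties projected from f ...
    CssClass = i == 0 ? "first"
             : i == foos.Length - 1 ? "last"
             : "mid"
});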