I have three lists of different types:
List<Customer> customerList = new List<Customer>();
List<Product> productList = new List<Product>();
List<Vehicle> vehicleList = new List<Vehicle>();
I also have this list:
List<string> stringList = new List<string> { "AND", "OR" };
Since the first element of stringList is AND, I want to inner-join customerList and productList. Then, the second element being OR, I want to left-outer-join vehicleList with that result, like this:
from cust in customerList
join prod in productList on cust.ProductId equals prod.Id
join veh in vehicleList on prod.VehicleId equals veh.Id into v
from veh in v.DefaultIfEmpty()
select new {customerName = cust.Name, customerVehicle=veh.VehicleName}
I want to do this in an automated way: say I have N lists and N-1 ANDs and ORs; how can I join them? Also, there can be many lists of the same type. Is such a thing even possible? If not, what can I do to get closer to my need? Thanks in advance.
EDIT:
I'm holding the lists and their types in a Dictionary like this:
var listDict = new Dictionary<Type, object>();
So I can iterate inside this dictionary if necessary.
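For illustration, this is roughly how it gets populated (just a sketch; note that a dictionary keyed by Type can hold only one list per type, which conflicts with my wanting several lists of the same type):
// Sketch: keying each list by its element type.
var listDict = new Dictionary<Type, object>
{
    { typeof(Customer), customerList },
    { typeof(Product), productList },
    { typeof(Vehicle), vehicleList }
};
// Retrieval requires casting back to the concrete list type:
var customers = (List<Customer>)listDict[typeof(Customer)];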
UPDATE 5-15-17:
Just to recap, what I am proposing is an example where we want to:
Pass in a list of N number of Table objects.
Pass in a list of N-1 join clauses describing how to join them. E.g.: with 2 tables you need a single join, with 3 you need 2, and so on.
We want to be able to pass in a predicate to go up or down the chain to narrow the scope.
What I would propose is to do all of this in SQL and pass SQL an XML object that it can parse. However, to keep it a little simpler and avoid dealing with XML serialization too, let's stick with strings that are essentially one or many comma-separated values. Say we have a structure, going off of the above, like this:
/*
CREATE TABLE Customer ( Id INT IDENTITY, CustomerName VARCHAR(64), ProductId INT)
INSERT INTO Customer VALUES ('Acme', 1),('Widgets', 2)
CREATE TABLE Product (Id INT IDENTITY, ProductName VARCHAR(64), VehicleId INT)
Insert Into Product Values ('Shirt', 1),('Pants', 2)
CREATE TABLE VEHICLE (Id INT IDENTITY, VehicleName VARCHAR(64))
INSERT INTO dbo.VEHICLE VALUES ('Car'),('Truck')
CREATE TABLE Joins (Id INT IDENTITY, OriginTable VARCHAR(32), DestinationTable VARCHAR(32), JoinClause VARCHAR(32))
INSERT INTO Joins VALUES ('Customer', 'Product', 'ProductId = Id'),('Product', 'Vehicle', 'VehicleId = Id')
--Data as is if I joined all three tables
CustomerId CustomerName ProductId ProductName VehicleId VehicleName
1 Acme 1 Shirt 1 Car
2 Widgets 2 Pants 2 Truck
*/
This structure is pretty simplistic, and everything here is one-to-one key relationships, whereas real data could have other identifiers. The key to making things work is to maintain a table that describes HOW these tables relate; I called this table Joins. Now I can create a dynamic proc like so:
CREATE PROC pDynamicFind
(
@Tables VARCHAR(256)
, @Joins VARCHAR(256)
, @Predicate VARCHAR(256)
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQL NVARCHAR(MAX) =
'With x as
(
SELECT
a.Id
, {nameColumns}
From {joins}
Where {predicate}
)
SELECT *
From x
UNPIVOT (Value FOR TableName In ({nameColumns})) AS unpt
'
DECLARE @Tbls TABLE (id INT IDENTITY, tableName VARCHAR(256), joinType VARCHAR(16))
DECLARE @Start INT = 2
DECLARE @alphas VARCHAR(26) = 'abcdefghijklmnopqrstuvwxyz'
--Comma separated into temp table (realistically most people create a function to do this so you don't have to do it over and over again)
WHILE LEN(@Tables) > 0
BEGIN
IF PATINDEX('%,%', @Tables) > 0
BEGIN
INSERT INTO @Tbls (tableName) VALUES (RTRIM(LTRIM(SUBSTRING(@Tables, 0, PATINDEX('%,%', @Tables)))))
SET @Tables = SUBSTRING(@Tables, LEN(SUBSTRING(@Tables, 0, PATINDEX('%,%', @Tables)) + ',') + 1, LEN(@Tables))
END
ELSE
BEGIN
INSERT INTO @Tbls (tableName) VALUES (RTRIM(LTRIM(@Tables)))
SET @Tables = NULL
END
END
--Have to iterate over this one separately
WHILE LEN(@Joins) > 0
BEGIN
IF PATINDEX('%,%', @Joins) > 0
BEGIN
Update @Tbls SET joinType = (RTRIM(LTRIM(SUBSTRING(@Joins, 0, PATINDEX('%,%', @Joins))))) WHERE id = @Start
SET @Joins = SUBSTRING(@Joins, LEN(SUBSTRING(@Joins, 0, PATINDEX('%,%', @Joins)) + ',') + 1, LEN(@Joins))
SET @Start = @Start + 1
END
ELSE
BEGIN
Update @Tbls SET joinType = (RTRIM(LTRIM(@Joins))) WHERE id = @Start
SET @Joins = NULL
SET @Start = @Start + 1
END
END
DECLARE @Join VARCHAR(256) = ''
DECLARE @Cols VARCHAR(256) = ''
--Determine dynamic columns and joins
Select
@Join += CASE WHEN joinType IS NULL THEN t.tableName + ' ' + SUBSTRING(@alphas, t.id, 1)
ELSE ' ' + joinType + ' JOIN ' + t.tableName + ' ' + SUBSTRING(@alphas, t.id, 1) + ' ON ' + SUBSTRING(@alphas, t.id-1, 1) + '.' + REPLACE(j.JoinClause, '= ', '= ' + SUBSTRING(@alphas, t.id, 1) + '.' )
END
, @Cols += CASE WHEN joinType IS NULL THEN t.tableName + 'Name' ELSE ' , ' + t.tableName + 'Name' END
From @Tbls t
LEFT JOIN Joins j ON t.tableName = j.DestinationTable
SET @SQL = REPLACE(@SQL, '{joins}', @Join)
SET @SQL = REPLACE(@SQL, '{nameColumns}', @Cols)
SET @SQL = REPLACE(@SQL, '{predicate}', @Predicate)
--PRINT @SQL
EXEC sp_executesql @SQL
END
GO
I now have a mechanism for finding things, a stubbed query so to speak, in which I can replace the source of the FROM statement, what I query on, and what value I use to query on. I would get results from it like this:
EXEC pDynamicFind 'Customer, Product', 'Inner', 'CustomerName = ''Acme'''
EXEC pDynamicFind 'Customer, Product, Vehicle', 'Inner, Inner', 'VehicleName = ''Car'''
Now what about setting that up in EF and using it in code? Well, you can add procs to EF and get the data back through the context. What this addresses is that I am now essentially giving back a fixed object, no matter how many columns I may add. If my pattern is always going to be '(table)Name' for N tables, I can normalize my result by unpivoting, getting back N rows for however many tables I have. Performance may be worse as result sets get larger, but the potential to make however many joins you want, as long as a similar structure is used, is there.
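As a sketch of that last point, assuming an EF6 context (the context and DTO names here are illustrative, not from the original):
// Hypothetical DTO matching the proc's unpivoted output:
// one row per (Id, Value, TableName) triple.
public class DynamicFindRow
{
    public int Id { get; set; }
    public string Value { get; set; }
    public string TableName { get; set; }
}

// Calling the proc through a hypothetical EF6 DbContext;
// EF turns the positional values into @p0..@p2 parameters.
using (var ctx = new MyDbContext())
{
    var rows = ctx.Database.SqlQuery<DynamicFindRow>(
            "EXEC pDynamicFind @p0, @p1, @p2",
            "Customer, Product", "Inner", "CustomerName = 'Acme'")
        .ToList();
}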
The point I am making, though, is that SQL is ultimately getting your data, and the crazy joins that LINQ generates are at times more work than they're worth. If you have a small result set and a small db, you are probably fine. This is just an example of how you would get completely different objects back in SQL using dynamic SQL, and how fast it can run once the code for the proc is written. It is just one way to skin a cat, of which I am sure there are many. The problem is that whatever road you go down with dynamic joins is going to require some type of normalization standard, factory pattern, or something that says: I can have N inputs that always yield the same X object no matter what. I do this through a vertical result set, but if you want a different column than, say, 'Name', you are going to have to code more for that as well. The way I built this, though, if you wanted the description but also a predicate on a date field, that would be fine.
If you always want the same set of output columns, then write your query ahead of time:
select *
from
customerList c
inner join
productList p on c.ProductId = p.Id
inner join
vehicleList v on p.VehicleId = v.Id
Then append a dynamic WHERE. At its simplest, just replace 'CustomerCity:' with 'c.City' and so on, so that what the user wrote becomes valid SQL. (Danger, danger: if your user is not to be trusted then you absolutely must make your SQL injection-proof. At the very least scan it for DML, or limit the keywords they can provide. Better would be to parse it into fields, parameterise it properly, and add the values they provide as parameters.)
Simple (ugh): we let the SQL parser do some work:
string whereClause = userInput;
whereClause = whereClause.Replace("CustomerCity:", "c.City = '");
whereClause = whereClause.Replace("VehicleNumber:", "v.Number = ");
//and so on
whereClause = whereClause.Replace(" AND", "' AND");
//some logic here to go through the string and close up those apostrophes
Ugly, and fragile. And hackable (if you care).
Parsing would be better:
sqlCommand.CommandText = "SELECT ... WHERE ";
string[] whereBits = userInput.Split(' ');
var parameters = new Dictionary<string, string>();
parameters["customercity"] = "c.City";
parameters["vehiclenumber"] = "v.Number";
foreach(var token in whereBits){
var frags = token.Split(':');
string friendlyName = frags[0].ToLower();
//handle here the AND and OR -> append to sql command text and continue the loop
if(parameters.ContainsKey(friendlyName)){
sqlCommand.CommandText += parameters[friendlyName] + " = @" + friendlyName;
sqlCommand.Parameters.AddWithValue("@" + friendlyName, frags[1]);
}
}
//now you should have an sql that looks like
//SELECT ... WHERE customercity = @customercity ...
// and a params collection that looks like:
//sql.Params[0] => ("@customercity", "Seattle", varchar)...
One thing to consider: will your user be able to construct that query and get the results they want? What does CustomerCity:Seattle OR ProductType:Computer AND VehicleNumber:8 AND CustomerName:Jason mean in a user's mind, anyway? Everyone in Seattle, plus every Jason whose computer is in vehicle 8?
Everyone in Seattle or who has a computer, but they must have vehicle 8 and be called Jason?
Without precedence rules, queries could just turn out garbage in the user's hands.
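To make the ambiguity concrete, here are the two readings side by side (illustrative C#, not from the original answer):
string city = "Seattle", type = "Computer", name = "Jason";
int vehicleNumber = 8;

// Reading 1: strict left-to-right, no precedence.
bool leftToRight = ((city == "Seattle" || type == "Computer")
                    && vehicleNumber == 8) && name == "Jason";

// Reading 2: conventional precedence, AND binds tighter than OR.
bool conventional = city == "Seattle"
                    || (type == "Computer" && vehicleNumber == 8 && name == "Jason");

// The two readings can disagree on the same data; the query
// syntax has to pick one and make it obvious to the user.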
I think it would have been better if you just describe what the requirement is, instead of asking how to implement this strange design.
Performance isn't a problem... now. But that is how it always starts...
Anyways, I do not think performance has to be an issue, but that depends on the relations between the tables. In your example the lists have only one foreign key each: each customer has one product and each product has one vehicle, resulting in one record.
But what happens if one vehicle has multiple products, from multiple customers? If you allow combining tables in all kinds of ways, you're bound to create a Cartesian product somewhere, resulting in thousands of rows or more.
And how are you going to implement multiple relations between objects? Suppose there are users, and customer has the fields UpdatedByUser and CreatedByUser. How do you know which user maps to which field?
And what about numeric fields? It seems that you are treating all fields as strings.
If you want to allow users to build queries according to the relations in the database and the existing fields, the best thing to do may be to write (generic) code to build your own expression trees. Using reflection you can discover and present properties, etc. That may also result in the best queries.
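As a minimal sketch of that direction (assuming field names arrive as strings at runtime; the helper name is mine):
using System;
using System.Linq.Expressions;

public static class PredicateBuilder
{
    // Builds e => e.<propertyName> == value for a property
    // discovered at runtime via reflection.
    public static Expression<Func<T, bool>> PropertyEquals<T>(string propertyName, object value)
    {
        var param = Expression.Parameter(typeof(T), "e");
        var prop = Expression.Property(param, propertyName);
        var constant = Expression.Constant(value, prop.Type);
        return Expression.Lambda<Func<T, bool>>(Expression.Equal(prop, constant), param);
    }
}

// Usage against an IQueryable translates to SQL:
// var filtered = ctx.Customers.Where(PredicateBuilder.PropertyEquals<Customer>("Name", "Acme"));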
But you may also consider using MongoDB instead of Sql Server. If relations are not that important, then a relational database may not be the right place to store the data. You may also consider using the Full-Text Search feature in Sql Server.
If you want to use Sql Server then you should take advantage of the navigation properties that come with Entity Framework 6 (code-first). You may think that is not what you need, but I think it can be very easy.
First you'll need to create a model and entities. Please note that you should not use the [Required] attribute on foreign keys, because if you do, it will be translated to an inner join.
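For example, a sketch of such a code-first model (the nullable foreign keys are what keep the relationships optional, and hence the joins left joins):
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int? ProductId { get; set; }          // nullable: optional relation
    public virtual Product Product { get; set; } // navigation property
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int? VehicleId { get; set; }
    public virtual Vehicle Vehicle { get; set; }
}

public class Vehicle
{
    public int Id { get; set; }
    public string VehicleName { get; set; }
}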
Next take the table you want to query:
var ctx = new Model();
//ctx.Configuration.ProxyCreationEnabled = false;
var q = ctx.Customers.AsQueryable();
// parse the 'parameters' to build the query
q = q.Include("Product");
// You'll have to build the include string
q = q.Include("Product.Vehicle");
var res = q.FirstOrDefault();
This will get all the data you'll need, all using left joins. In order to 'convert' a left join to an inner join you filter the foreign key to be not null:
var res = q.FirstOrDefault(cust => cust.ProductId != null);
So all you need is the table where you want to start, and then you can build the query any way you like. You can even parse a string like Customer AND Product OR Vehicle instead of using separate lists.
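A rough sketch of what parsing such a string could look like (illustrative only; Model is the context class from above, and a real version would validate the tokens):
// Assumes: using System.Data.Entity; (for the Include extension).
// "Customer AND Product OR Vehicle": the first token names the
// root set, the rest alternate join type / navigation property.
public static IQueryable<Customer> BuildQuery(Model ctx, string spec)
{
    var tokens = spec.Split(' ');
    IQueryable<Customer> q = ctx.Customers;
    string path = null;
    for (int i = 1; i + 1 < tokens.Length; i += 2)
    {
        path = path == null ? tokens[i + 1] : path + "." + tokens[i + 1];
        q = q.Include(path); // e.g. "Product", then "Product.Vehicle"
        // An "AND" token would additionally need a not-null filter on
        // the corresponding foreign key (as shown above) to behave
        // like an inner join; that part is omitted here.
    }
    return q;
}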
The variable res contains the customer, which links to its Product. But usually res should be the result of a Select:
var res = q.Select(r => new { CustName = r.Name, ProductName = r.Product.Name }).FirstOrDefault();
In the question there is no mention of filters, but in the comments there is. In case you want to add filters you can also think of building your query like this:
q = q.Where(cust => cust.Name.StartsWith("a"));
if (someCondition)
q = q.Where(cust => cust.Product.Name.StartsWith("a"));
var res = q.ToList();
This is just to give you an idea of how you can take advantage of EF6 (code-first). You don't have to think about the joins, since these are already defined and are picked up automatically.
Decompose your LINQ/lambda expression using How to Convert LINQ Comprehension Query Syntax to Method Syntax using Lambda, and you will get:
customerList.Join(productList, cust => cust.ProductId, prod => prod.Id, (cust, prod) => new { cust = cust, prod = prod })
.GroupJoin(vehicleList, cp => cp.prod.VehicleId, veh => veh.Id, (cp, v) => new { cp = cp, v = v })
.SelectMany(cv => cv.v.DefaultIfEmpty(), (cv, veh) => new { customerName = cv.cp.cust.Name, customerVehicle = veh.VehicleName });
Besides listDict, you will need the following keyArr as well (pseudocode):
keyArr[0] = new { OuterKey = (Func<Customer, int>)(cust => cust.ProductId), InnerKey = (Func<Product, int>)(prod => prod.Id) };
keyArr[1] = ...
Then loop over listDict using the following code:
var result = customerList;
int i = 0;
foreach (var ld in listDict)
{
//use this
result = result.Join(ld, keyArr[i].OuterKey, keyArr[i].InnerKey, (cust, prod) => new { cust = cust, prod = prod });
//or this, or both, depending on the query
result = result.GroupJoin(ld, cp => cp.prod.VehicleId, veh => veh.Id, (cp, v) => new { cp = cp, v = v });
i++;
}
// need to define concrete class for each table
// and grouping result after each join
//and finally
result.SelectMany(cv => cv.v.DefaultIfEmpty(), (cv, veh) => new { customerName = cv.cp.cust.Name, customerVehicle = veh.VehicleName });
The following code solves your problem.
First we need data, so I build some sample lists of three different types. My solution can handle multiple tables of the same data type.
Then I build the list of join specifications, specifying the tables, join fields and join type:
Warning: The order of the specifications matters (it must follow a topological sort). The first join joins two tables; each subsequent join must join one new table to one of the already-joined tables.
var joinSpecs = new IJoinSpecification[] {
JoinSpecification.Create(list1, list2, v1 => v1.Id, v2 => v2.ForeignKeyTo1, JoinType.Inner),
JoinSpecification.Create(list2, list3, v2 => v2.Id, v3 => v3.ForeignKeyTo2, JoinType.LeftOuter)
};
Then you just execute the joins:
//Creating LINQ query
IEnumerable<Dictionary<object, object>> result = null;
foreach (var joinSpec in joinSpecs) {
result = joinSpec.PerformJoin(result);
}
//Executing the LINQ query
var finalResult = result.ToList();
The result is a list of dictionaries keyed by the source lists, so access looks like row.GetItemFor(table1).Column2. You can even have multiple tables of the same type - this system handles that easily.
Here is how you do the final projection of your joined data:
var resultWithColumns = (
from row in finalResult
let item1 = row.GetItemFor(list1)
let item2 = row.GetItemFor(list2)
let item3 = row.GetItemFor(list3)
select new {
Id1 = item1?.Id,
Id2 = item2?.Id,
Id3 = item3?.Id,
Value1 = item1?.Value,
Value2 = item2?.Value,
Value3 = item3?.Value
}).ToList();
The full code:
using System;
using System.Collections.Generic;
using System.Linq;
public class Type1 {
public int Id { get; set; }
public int Value { get; set; }
}
public class Type2 {
public int Id { get; set; }
public string Value { get; set; }
public int ForeignKeyTo1 { get; set; }
}
public class Type3 {
public int Id { get; set; }
public string Value { get; set; }
public int ForeignKeyTo2 { get; set; }
}
public class Program {
public static void Main() {
//Data
var list1 = new List<Type1>() {
new Type1 { Id = 1, Value = 1 },
new Type1 { Id = 2, Value = 2 },
new Type1 { Id = 3, Value = 3 }
//4 is missing
};
var list2 = new List<Type2>() {
new Type2 { Id = 1, Value = "1", ForeignKeyTo1 = 1 },
new Type2 { Id = 2, Value = "2", ForeignKeyTo1 = 2 },
//3 is missing
new Type2 { Id = 4, Value = "4", ForeignKeyTo1 = 4 }
};
var list3 = new List<Type3>() {
new Type3 { Id = 1, Value = "1", ForeignKeyTo2 = 1 },
//2 is missing
new Type3 { Id = 3, Value = "2", ForeignKeyTo2 = 2 },
new Type3 { Id = 4, Value = "4", ForeignKeyTo2 = 4 }
};
var joinSpecs = new IJoinSpecification[] {
JoinSpecification.Create(list1, list2, v1 => v1.Id, v2 => v2.ForeignKeyTo1, JoinType.Inner),
JoinSpecification.Create(list2, list3, v2 => v2.Id, v3 => v3.ForeignKeyTo2, JoinType.LeftOuter)
};
//Creating LINQ query
IEnumerable<Dictionary<object, object>> result = null;
foreach (var joinSpec in joinSpecs) {
result = joinSpec.PerformJoin(result);
}
//Executing the LINQ query
var finalResult = result.ToList();
//This is just to illustrate how to get the final projection columns
var resultWithColumns = (
from row in finalResult
let item1 = row.GetItemFor(list1)
let item2 = row.GetItemFor(list2)
let item3 = row.GetItemFor(list3)
select new {
Id1 = item1?.Id,
Id2 = item2?.Id,
Id3 = item3?.Id,
Value1 = item1?.Value,
Value2 = item2?.Value,
Value3 = item3?.Value
}).ToList();
foreach (var row in resultWithColumns) {
Console.WriteLine(row.ToString());
}
//Outputs:
//{ Id1 = 1, Id2 = 1, Id3 = 1, Value1 = 1, Value2 = 1, Value3 = 1 }
//{ Id1 = 2, Id2 = 2, Id3 = 3, Value1 = 2, Value2 = 2, Value3 = 2 }
}
}
public static class RowDictionaryHelpers {
public static IEnumerable<Dictionary<object, object>> CreateFrom<T>(IEnumerable<T> source) where T : class {
return source.Select(item => new Dictionary<object, object> { { source, item } });
}
public static T GetItemFor<T>(this Dictionary<object, object> dict, IEnumerable<T> key) where T : class {
return dict[key] as T;
}
public static Dictionary<object, object> WithAddedItem<T>(this Dictionary<object, object> dict, IEnumerable<T> key, T item) where T : class {
var result = new Dictionary<object, object>(dict);
result.Add(key, item);
return result;
}
}
public interface IJoinSpecification {
IEnumerable<Dictionary<object, object>> PerformJoin(IEnumerable<Dictionary<object, object>> sourceData);
}
public enum JoinType {
Inner = 1,
LeftOuter = 2
}
public static class JoinSpecification {
public static JoinSpecification<TLeft, TRight, TKeyType> Create<TLeft, TRight, TKeyType>(IEnumerable<TLeft> LeftTable, IEnumerable<TRight> RightTable, Func<TLeft, TKeyType> LeftKeySelector, Func<TRight, TKeyType> RightKeySelector, JoinType JoinType) where TLeft : class where TRight : class {
return new JoinSpecification<TLeft, TRight, TKeyType> {
LeftTable = LeftTable,
RightTable = RightTable,
LeftKeySelector = LeftKeySelector,
RightKeySelector = RightKeySelector,
JoinType = JoinType,
};
}
}
public class JoinSpecification<TLeft, TRight, TKeyType> : IJoinSpecification where TLeft : class where TRight : class {
public IEnumerable<TLeft> LeftTable { get; set; } //Must already exist
public IEnumerable<TRight> RightTable { get; set; } //Newly joined table
public Func<TLeft, TKeyType> LeftKeySelector { get; set; }
public Func<TRight, TKeyType> RightKeySelector { get; set; }
public JoinType JoinType { get; set; }
public IEnumerable<Dictionary<object, object>> PerformJoin(IEnumerable<Dictionary<object, object>> sourceData) {
if (sourceData == null) {
sourceData = RowDictionaryHelpers.CreateFrom(LeftTable);
}
return
from joinedRowsObj in sourceData
join rightRow in RightTable
on joinedRowsObj.GetItemFor(LeftTable).ApplyIfNotNull(LeftKeySelector) equals rightRow.ApplyIfNotNull(RightKeySelector)
into rightItemsForLeftItem
from rightItem in rightItemsForLeftItem.DefaultIfEmpty()
where JoinType == JoinType.LeftOuter || rightItem != null
select joinedRowsObj.WithAddedItem(RightTable, rightItem)
;
}
}
public static class FuncExtensions {
public static TResult ApplyIfNotNull<T, TResult>(this T item, Func<T, TResult> func) where T : class {
return item != null ? func(item) : default(TResult);
}
}
The code outputs:
{ Id1 = 1, Id2 = 1, Id3 = 1, Value1 = 1, Value2 = 1, Value3 = 1 }
{ Id1 = 2, Id2 = 2, Id3 = 3, Value1 = 2, Value2 = 2, Value3 = 2 }
P.S. The code deliberately lacks any error checking, to keep it compact and easy to read.
I think there are several reasons why you (and other answers and comments so far) are struggling with the solution. Primarily, as stated, you do not have enough meta information to successfully construct the complex relationship of the overall operation.
Absent Metadata
In looking at your inline LINQ example, specifically to quote:
from cust in customerList
join prod in productList on cust.ProductId equals prod.Id
join veh in vehicleList on prod.VehicleId equals veh.Id into v
from veh in v.DefaultIfEmpty()
select new {customerName = cust.Name, customerVehicle=veh.VehicleName}
... if we are to parse the knowledge that is inherently stated in the above code, we'll identify the following:
There are 3 separate data sets (of non-homogeneous types, which is more evident from your List<T> examples at the beginning of the question) that serve as sources of data. This meta information is available in the List<T> setups as sources to LINQ, and thus this part is not an issue.
The join order and type of join (i.e. AND implies .Join() and OR implies .GroupJoin()). This meta information is also more or less available in the list-approach setup.
The relationship between the types, and the key to be used to compare one type to another. That is, that customer relates to product (as opposed to vehicle) and that the customer-product relationship is defined as Customer.ProductId = Product.Id; or that vehicle relates to product (as opposed to customer) and that relationship is defined as Product.VehicleId = Vehicle.Id. This meta information, as the list setup is presented in your question, is NOT available.
Projection of the resulting (interim and final) data set members. The example is not specific about whether each data set is represented by a unique model (i.e. across all List<T>s each T is unique) or whether repeats are possible. Because inline LINQ lets you reference a specific data set, having two data sets of the same type is not an issue when defined statically: each data set is referenced by name, so the relationship is clear. If a type can appear more than once, and metadata is used to determine type relationships dynamically, the trouble creeps in that you don't know which of the multiple instances of the same type to relate to. In other words, if it is possible to have Person join Friends join Person join Car, it is not clear whether Car should be matched to the first Person or the second. One possibility is to assume that such cases resolve to the last instance of Person. Needless to say, your lists setup doesn't have this meta information either. For the purposes of this answer going forward, I'll assume that all types are unique and do not repeat.
Unlike the Intersect example you referenced in the comments, where Intersect is a parameter-less operator (besides the other set to intersect with), the Join operator requires parameters to identify the relationship by which to relate to the other data set; i.e. the parameters are the meta information described in point 3 above.
Metadata
To close the gaps identified above is not simple, but is not insurmountable either. One approach is to simply annotate the data model types with relationship meta data. Something along the lines of:
class Vehicle
{
public int Id;
}
// PrimaryKey="Id" - Id refers to Vehicle.Id, not Product.Id
[RelationshipLink(BelongsTo=typeof(Product), PrimaryKey="Id", ForeignKey="VehicleId")]
class Product
{
public int Id;
public int VehicleId;
}
// PrimaryKey="Id" - Id refers to Product.Id, not Customer.Id
[RelationshipLink(BelongsTo=typeof(Product), PrimaryKey="Id", ForeignKey="ProductId")]
class Customer
{
public int Id;
public int ProductId;
}
This way, as you loop through the data sets while setting up the joins, you can use reflection to examine what type each data set relates to and how, look up the previously processed data sets for the matching type, and, again using reflection, set up the .Join or .GroupJoin key selectors that express the relationship between instances of the data.
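A sketch of what that attribute and its discovery could look like (the attribute type itself is hypothetical, mirroring the annotation above):
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public class RelationshipLinkAttribute : Attribute
{
    public Type BelongsTo { get; set; }
    public string PrimaryKey { get; set; } // member on BelongsTo
    public string ForeignKey { get; set; } // member on the annotated type
}

public static class RelationshipDiscovery
{
    // Returns the relationship annotation for a model type,
    // or null if the type declares none (i.e. it is a root).
    public static RelationshipLinkAttribute GetLink(Type modelType)
    {
        return modelType.GetCustomAttribute<RelationshipLinkAttribute>();
    }
}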
Interim Projections
In static definitions of LINQ statements (be it inline join or the extension method .Join) you control what the result of the join looks like and how data is merged and transformed into a shape (aka another model) convenient for subsequent operations (usually by use of anonymous objects). With a dynamic setup this is very difficult, if not altogether impossible, because you'd need to know what to keep, what to drop, how to resolve name collisions between data models' properties, etc.
To solve this issue, you can propagate all interim results (aka projections) as a Dictionary<Type, object>, and simply carry the full models through, each tracked by its type. The reason you want to make it easy to track by type is that when you join a previous interim result with the next dataset and need to build the primary/foreign key functions, you have an easy means to look up the type that you discover from the [RelationshipLink] metadata.
The final projection of the result, again, is not really stated in your question, but you need some way of dynamically determining what part of the very wide result you want (or all of it), and how to transform its shape back into whatever the function consuming the results of the giant join expects.
Algorithm
Finally, we can put the whole thing together. The code below is just a high-level rendering of the algorithm in C#-style pseudocode, not full C#. See the footnotes.
var datasets = GetListsOfDatasets().ToArray(); // i.e. the function that returns customerList, productList, vehicleList, etc. as a set of List<T>'s
var joins = datasets.First().Select(item => new Dictionary<Type, object> {[item.GetType()] = item});
var joinTypes = stringList.ToQueue(); // the "AND"/"OR" that tells how to join the next one. Convert to a queue so we can pop off the top. Better to make it an enum rather than a string.
foreach(var dataset in datasets.Skip(1))
{
var outerKeyMember = GetPrimaryKeyMember(dataset.GetGenericEnumerableUnderlyingType());
var innerKeyMember = GetForeignKeyMember(dataset.GetGenericEnumerableUnderlyingType());
var joinType = joinTypes.Dequeue();
joins = joinType == "AND"
? joins.Join(
dataset,
outerKey => ReflectionGetValue(outerKeyMember.Member, outerKey[outerKeyMember.Type]),
innerKey => ReflectionGetValue(innerKeyMember.Member, innerKey),
(outer, inner) => {
outer[inner.GetType()] = inner;
return outer;
})
: joins.GroupJoin(/* similar key selection as above */)
.SelectMany(i => i); // Flatten the list from IGrouping<T> back to IEnumerable<T>
}
var finalResult = joins.Select(v => /* TODO: whatever you want to project out, and however you dynamically want to determine what you want out */);
/////////////////////////////////////
public Type GetGenericEnumerableUnderlyingType<T>(this IEnumerable<T> source)
{
return typeof(T);
}
public TypeAndMemberInfo GetPrimaryKeyMember(Type type)
{
// TODO
// Using reflection examine type, look for RelationshipLinkAttribute, and examine PrimaryKey specified on the attribute.
// Then reflect over BelongsTo declared type and find member declared as PrimaryKey
return new TypeAndMemberInfo { Type = __belongsToType, Member = __relationshipLinkAttribute.PrimaryKey.AsMemberInfo };
}
public TypeAndMemberInfo GetForeignKeyMember(Type type)
{
// TODO Very similar to GetPrimaryKeyMember, but for this type and this type's foreign key annotation marker.
}
public object ReflectionGetValue(MemberInfo member, object instance)
{
// TODO: using reflection on 'member', return the value belonging to 'instance'.
}
So the high-level idea: take the first data set and wrap each member of the set in a dictionary keyed by the member's type, holding the member instance itself. Then, for each subsequent dataset: discover the underlying model type of the dataset; using reflection, look up the relationship metadata that tells you how it relates to another type (one that should already have been exposed in a previously processed dataset, or the code will blow up because the join will have nothing to get key values from); look up the instance of that type in the outer enumerable's dictionary and read the discovered key member's value as the outer key; similarly reflect and read the inner's foreign-key member value; and let .Join do the rest of the joining. Keep looping to the end, with each iteration's projection carrying the full instances of each model.
Once done with all datasets, define what you want out of it using .Select with whatever definition you want, and execute the complex LINQ to pump the data.
Performance Considerations
Performing a join means that at least one data set must be read in full, so that key membership can be probed against it while the other data set is processed for matches.
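Conceptually that is what LINQ-to-Objects does: .Join buffers the inner sequence into a hash lookup, then streams the outer one. A minimal sketch of the idea (not the actual BCL implementation):
using System;
using System.Collections.Generic;
using System.Linq;

public static class HashJoinSketch
{
    // The inner sequence is materialized into a lookup (the side
    // that must fit in memory); the outer sequence is streamed.
    public static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
        IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
        Func<TOuter, TKey> outerKey, Func<TInner, TKey> innerKey,
        Func<TOuter, TInner, TResult> resultSelector)
    {
        var lookup = inner.ToLookup(innerKey); // full inner set in memory
        foreach (var o in outer)               // outer streamed once
            foreach (var i in lookup[outerKey(o)])
                yield return resultSelector(o, i);
    }
}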
Modern DB engines like SQL Server are able to process joins of extremely large data sets because they go the extra step of persisting interim results rather than building everything up in memory, pulling from disk as needed. As such, billions of items joined to billions of items do not blow up from memory starvation - once memory pressure is detected, the interim data and matched results are temporarily persisted to tempdb (or whatever disk storage backs memory).
Here, the default LINQ .Join is an in-memory operator. A large enough data set will exhaust memory and cause an OutOfMemoryException. If you foresee many joins producing very large result sets, you may need to write your own implementations of .Join and .GroupJoin that use some sort of disk paging to store one data set in a format that can be probed cheaply for membership while matching items from the other set, relieving memory pressure by using disk as memory.
Voila!
Footnotes
First, because your question (sans comments) is asked in the domain of plain LINQ (meaning IEnumerable<T>, not IQueryable, and not SQL or stored procs), I have limited the scope of the answer strictly to that domain, to follow the spirit of the question. This is not to say that at a higher level this problem doesn't lend itself well to a solution in some other domain.
Second, even though SO convention favors good, compilable, working code in answers, the reality of this solution is that it would run to at least a few hundred lines of code, much of it reflection. How to do reflection in C# is, obviously, beyond the scope of the question. Thus the code presented is pseudocode focused on the algorithm, reducing non-pertinent parts to comments describing what happens and leaving the implementation to the OP (or to those finding this useful in the future).
But let's say I have an integer weight where, for example, an element with weight 10 has a 10 times higher probability of being selected than an element with weight 1.
var ws = db.WorkTypes
.Where(e => e.HumanId != null && e.SeoPriority != 0)
.OrderBy(e => /*????*/ * e.SeoPriority)
.Select(e => new
{
DescriptionText = e.DescriptionText,
HumanId = e.HumanId
})
.Take(take).ToArray();
How do I solve getting random records in LINQ when I need the result to be weighted?
I need something like Random Weighted Choice in T-SQL, but in LINQ, and not only getting one record.
If I didn't have the weighted requirement, I'd use the NEWID approach; can I adapt that somehow?
partial class DataContext
{
[Function(Name = "NEWID", IsComposable = true)]
public Guid Random()
{
throw new NotImplementedException();
}
}
...
var ws = db.WorkTypes
.Where(e => e.HumanId != null && e.SeoPriority != 0)
.OrderBy(e => db.Random())
.Select(e => new
{
DescriptionText = e.DescriptionText,
HumanId = e.HumanId
})
.Take(take).ToArray();
My first idea was the same as Ron Klein's - create a weighted list and select randomly from that.
Here's a LINQ extension method to create the weighted list from the normal list, given a lambda function that knows the weight property of the object.
Don't worry if you don't get all the generics stuff right away... The usage below should make it clearer:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication1
{
public class Item
{
public int Weight { get; set; }
public string Name { get; set; }
}
public static class Extensions
{
public static IEnumerable<T> Weighted<T>(this IEnumerable<T> list, Func<T, int> weight)
{
foreach (T t in list)
for (int i = 0; i < weight(t); i++)
yield return t;
}
}
class Program
{
static void Main(string[] args)
{
List<Item> list = new List<Item>();
list.Add(new Item { Name = "one", Weight = 5 });
list.Add(new Item { Name = "two", Weight = 1 });
Random rand = new Random(0);
list = list.Weighted<Item>(x => x.Weight).ToList();
for (int i = 0; i < 20; i++)
{
int index = rand.Next(list.Count());
Console.WriteLine(list.ElementAt(index).Name);
}
Console.ReadLine();
}
}
}
As you can see from the output, the results are both random and weighted as you require.
I'm assuming that the weight is an integer. Here's an approach which joins to a dummy table to increase the row count per the weight; first, let's prove it just in TSQL:
SET NOCOUNT ON
--DROP TABLE [index]
--DROP TABLE seo
CREATE TABLE [index] ([key] int not null) -- names for fun ;-p
CREATE TABLE seo (url varchar(10) not null, [weight] int not null)
INSERT [index] values(1) INSERT [index] values(2)
INSERT [index] values(3) INSERT [index] values(4)
INSERT [index] values(5) INSERT [index] values(6)
INSERT [index] values(7) INSERT [index] values(8)
INSERT [index] values(9) INSERT [index] values(10)
INSERT [seo] VALUES ('abc',1) INSERT [seo] VALUES ('def',2)
INSERT [seo] VALUES ('ghi',1) INSERT [seo] VALUES ('jkl',3)
INSERT [seo] VALUES ('mno',1) INSERT [seo] VALUES ('mno',1)
INSERT [seo] VALUES ('pqr',2)
DECLARE @count int, @url varchar(10)
SET @count = 0
DECLARE @check_rand TABLE (url varchar(10) not null)
-- test it lots of times to check distribution roughly matches weights
WHILE @count < 11000
BEGIN
SET @count = @count + 1
SELECT TOP 1 @url = [seo].[url]
FROM [seo]
INNER JOIN [index] ON [index].[key] <= [seo].[weight]
ORDER BY NEWID()
-- this to check distribution
INSERT @check_rand VALUES (@url)
END
SELECT ISNULL(url, '(total)') AS [url], COUNT(1) AS [hits]
FROM @check_rand
GROUP BY url WITH ROLLUP
ORDER BY url
This outputs something like:
url hits
---------- -----------
(total) 11000
abc 1030
def 1970
ghi 1027
jkl 2972
mno 2014
pqr 1987
Showing that we have the correct overall distribution. Now let's bring that into LINQ-to-SQL; I've added the two tables to a data-context (you will need to create something like the [index] table to do this) - my DBML:
<Table Name="dbo.[index]" Member="indexes">
<Type Name="index">
<Column Name="[key]" Member="key" Type="System.Int32" DbType="Int NOT NULL" CanBeNull="false" />
</Type>
</Table>
<Table Name="dbo.seo" Member="seos">
<Type Name="seo">
<Column Name="url" Type="System.String" DbType="VarChar(10) NOT NULL" CanBeNull="false" />
<Column Name="weight" Type="System.Int32" DbType="Int NOT NULL" CanBeNull="false" />
</Type>
</Table>
Now we'll consume this; in the partial class for the data-context, add a compiled-query (for performance) in addition to the Random method:
partial class MyDataContextDataContext
{
[Function(Name = "NEWID", IsComposable = true)]
public Guid Random()
{
throw new NotImplementedException();
}
public string GetRandomUrl()
{
return randomUrl(this);
}
static readonly Func<MyDataContextDataContext, string>
randomUrl = CompiledQuery.Compile(
(MyDataContextDataContext ctx) =>
(from s in ctx.seos
from i in ctx.indexes
where i.key <= s.weight
orderby ctx.Random()
select s.url).First());
}
This LINQ-to-SQL query is very similar to the key part of the TSQL we wrote; let's test it:
using (var ctx = CreateContext()) {
// show sample query
ctx.Log = Console.Out;
Console.WriteLine(ctx.GetRandomUrl());
ctx.Log = null;
// check distribution
var counts = new Dictionary<string, int>();
for (int i = 0; i < 11000; i++) // obviously a bit slower than inside db
{
if (i % 100 == 1) Console.WriteLine(i); // show progress
string s = ctx.GetRandomUrl();
int count;
if (counts.TryGetValue(s, out count)) {
counts[s] = count + 1;
} else {
counts[s] = 1;
}
}
Console.WriteLine("(total)\t{0}", counts.Sum(p => p.Value));
foreach (var pair in counts.OrderBy(p => p.Key)) {
Console.WriteLine("{0}\t{1}", pair.Key, pair.Value);
}
}
This runs the query once to show the TSQL is suitable, then (like before) 11k times to check the distribution; output (not including the progress updates):
SELECT TOP (1) [t0].[url]
FROM [dbo].[seo] AS [t0], [dbo].[index] AS [t1]
WHERE [t1].[key] <= [t0].[weight]
ORDER BY NEWID()
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.4926
which doesn't look too bad at all - it has both tables and the range condition, and the TOP 1, so it is doing something very similar; data:
(total) 11000
abc 939
def 1893
ghi 1003
jkl 3104
mno 2048
pqr 2013
So again, we've got the right distribution, all from LINQ-to-SQL. Sorted?
Your suggested solution, as it seems from the question, is bound to Linq/Linq2Sql.
If I understand correctly, your main goal is to fetch at most X records from the database, that have a weight of more than 0. If the database holds more than X records, you'd like to choose from them using the record's weight, and have a random result.
If all is correct so far, my solution is to clone each record by its weight: if a record's weight is 5, make sure you have it 5 times. This way the random choice takes into account the weight.
However, cloning the records makes, well, duplications. So you can't just take X records, you should take more and more records until you have X distinct records.
So far I described a general solution, not related to the implementation.
I think it's harder to implement my solution using only Linq2Sql. If the total record count in the DB is not huge, I suggest reading the entire table and doing the cloning and random selection outside SQL Server.
If the total count is huge, I suggest you take, say, 100,000 records (or less) chosen at random (via Linq2Sql), and apply the implementation as above. I believe it's random enough.
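A sketch of the cloning idea in LINQ-to-Objects, once the candidate rows have been materialized (names follow the question; 'candidates' and 'take' are assumed to exist):
var rand = new Random();
var picked = candidates
    .SelectMany(e => Enumerable.Repeat(e, e.SeoPriority)) // clone each row by its weight
    .OrderBy(_ => rand.Next())                            // rough in-memory shuffle
    .Distinct()                                           // clones are the same reference, so duplicates drop out
    .Take(take)
    .ToList();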
Try using the RAND() SQL function - it'll give you a 0-to-1 float.
The downside is that I am not sure whether it would cause a full table scan on the SQL Server side, i.e. whether the resulting query would be optimized so that once it has the top N records it ignores the rest of the table.
var rand = new Random();
var ws = db.WorkTypes
.Where(e => e.HumanId != null && e.SeoPriority != 0)
.OrderByDescending(e => rand.Next() * e.SeoPriority)
.Select(e => new
{
DescriptionText = e.DescriptionText,
HumanId = e.HumanId
})
.Take(take).ToArray();
The reason the GUID (NEWID) function was being used in the SQL example you are looking at is simply that SQL Server's RAND function is only evaluated once per statement, so it is useless for randomising a select.
But as you're using LINQ, a quick and dirty solution is to create a Random object and replace your order-by statement.
Random rand = new Random(DateTime.Now.Millisecond);
var ws = db.WorkTypes
.Where(e => e.HumanId != null && e.SeoPriority != 0)
.OrderByDescending(e => rand.Next(10) * e.SeoPriority)
.Select(e => new{ DescriptionText = e.DescriptionText, HumanId = e.HumanId})
.Take(take).ToArray();
The rand.Next(10) assumes your SeoPriority scales from 0 to 10.
It's not 100% accurate, but it's close; adjusting the Next value can tweak it.