Could C# LINQ do combinatorics?

I have this data structure:
class Product
{
    public string Name { get; set; }
    public int Count { get; set; }
}
var list = new List<Product> { new Product { Name = "Book", Count = 40 }, new Product { Name = "Car", Count = 70 }, new Product { Name = "Pen", Count = 60 }, ... }; // 500 Product objects
var productsUpTo100SumCountPropert = list.Where(.....) ????
// productsUpTo100SumCountPropert output:
// { { Name = "Book", Count = 40}, { Name = "Pen", Count = 60} }
I want to sum the Count property over the collection and return only the product objects whose Count values sum to at most 100.
If this is not possible with LINQ, what better approach can I use?

Judging from the comments you've left on other people's answers and your gist (link), it looks like what you're actually trying to solve is the Knapsack Problem - in particular, the 0/1 Knapsack Problem (link).
The Wikipedia page on this topic (that I linked to) has a short dynamic programming solution for you. It has pseudo-polynomial running time ("pseudo" because the complexity depends on the capacity W you choose for your knapsack).
A good preprocessing step to take before running the algorithm is to find the greatest common divisor (GCD) of all of your item weights (w_i) and then divide each weight, and the capacity, by it:
d <- GCD({w_1, w_2, ..., w_N})
w_i' <- w_i / d //for each i = 1, 2, ..., N
W' <- W / d //integer division here
Then solve the problem using the modified weights and capacity instead (w_i' and W').
The greedy algorithm you use in your gist won't work very well. This better algorithm is simple enough that it's worth implementing.
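For illustration, here is a minimal sketch of that DP plus the GCD preprocessing, applied to the question's Product list - my own sketch rather than the Wikipedia pseudocode, with illustrative names (BestSubset, Gcd), assuming the Product class from the question and System.Linq. Since weight and value are both Count here, it maximizes the total Count without exceeding the capacity:
static List<Product> BestSubset(List<Product> items, int capacity)
{
    // Preprocessing: divide all weights and the capacity by their GCD
    static int Gcd(int a, int b) => b == 0 ? a : Gcd(b, a % b);
    int d = items.Select(p => p.Count).Aggregate(Gcd);
    int w = capacity / d; // integer division, as above

    // best[j] = max total (scaled) Count achievable with capacity j;
    // take[i, j] remembers whether item i improved capacity j
    var best = new int[w + 1];
    var take = new bool[items.Count, w + 1];
    for (int i = 0; i < items.Count; i++)
    {
        int wi = items[i].Count / d;
        for (int j = w; j >= wi; j--) // downwards, so each item is used at most once
        {
            if (best[j - wi] + wi > best[j])
            {
                best[j] = best[j - wi] + wi;
                take[i, j] = true;
            }
        }
    }

    // Walk the table backwards to recover the chosen items
    var chosen = new List<Product>();
    for (int i = items.Count - 1, j = w; i >= 0; i--)
    {
        if (take[i, j])
        {
            chosen.Add(items[i]);
            j -= items[i].Count / d;
        }
    }
    return chosen;
}
For the sample data {40, 70, 60} and a capacity of 100 this returns Book and Pen, matching the question's expected output.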

You need the Count extension method
list.Count(p => p.Count <= 100);
EDIT:
If you want the sum of the items, Where and Sum extension methods could be utilized:
list.Where(p => p.Count <= 100).Sum(p => p.Count);

list.Where(p=> p.Count <= 100).ToList();

Related

Shortest list from a two dimensional array

This question is more about an algorithm than actual code, but example code would be appreciated.
Let's say I have a two-dimensional array such as this:
    A B C D E
    ---------
1 | 0 2 3 4 5
2 | 1 2 4 5 6
3 | 1 3 4 5 6
4 | 2 3 4 5 6
5 | 1 2 3 4 5
I am trying to find the shortest list that would include a value from each row. Currently, I am going row by row and column by column, adding each value to a SortedSet and then checking the length of the set against the shortest set found so far. For example:
Adding cells {1A, 2A, 3A, 4A, 5A} would add the values {0, 1, 1, 2, 1} which would result in a sorted set {0, 1, 2}. {1B, 2A, 3A, 4A, 5A} would add the values {2, 1, 1, 2, 1} which would result in a sorted set {1, 2}, which is shorter than the previous set.
Obviously, adding {1D, 2C, 3C, 4C, 5D} or {1E, 2D, 3D, 4D, 5E} would be the shortest sets, having only one item each, and I could use either one.
I don't have to include every number in the array. I just need to find the shortest set while including at least one number from every row.
Keep in mind that this is just an example array, and the arrays that I'm using are much, much larger. The smallest is 495x28. Brute force will take a VERY long time (28^495 passes). Is there a shortcut that someone knows, to find this in the least number of passes? I have C# code, but it's kind of long.
Edit:
Posting current code, as per request:
// Set an array of counters; add enough values to create the largest possible initial set
int ListsCount = MatrixResults.Count();
int[] Counters = new int[ListsCount];
SortedSet<long> CurrentSet = new SortedSet<long>();
for (long X = 0; X < ListsCount; X++)
{
    Counters[X] = 0;
    CurrentSet.Add(X);
}
while (true)
{
    // Compile sequence list from MatrixResults[]
    SortedSet<long> ThisSet = new SortedSet<long>();
    for (int X = 0; X < ListsCount; X++)
    {
        ThisSet.Add(MatrixResults[X][Counters[X]]);
    }
    // If sequence length is less than the current low, set ThisSet as Current
    if (ThisSet.Count() < CurrentSet.Count())
    {
        CurrentSet.Clear();
        long[] TSI = ThisSet.ToArray();
        for (int Y = 0; Y < ThisSet.Count(); Y++)
        {
            CurrentSet.Add(TSI[Y]);
        }
    }
    // Increment counters, odometer style: reset a full position and carry into the next
    int Index = 0;
    bool EndReached = false;
    while (true)
    {
        Counters[Index]++;
        if (Counters[Index] < MatrixResults[Index].Count()) break;
        Counters[Index] = 0;
        Index++;
        if (Index >= ListsCount)
        {
            EndReached = true;
            break;
        }
    }
    // If all counters are fully incremented, then break
    if (EndReached) break;
}
With all computation there is always a tradeoff, and several factors are in play, like whether you will get paid for getting it perfect (in this case, for me: no). This is a case of the best being the enemy of the good. How long can we afford to spend on solving a problem, and is close enough sufficient for the use case? When we can get the idea of a key across without hand-painting pixels in UHD resolution, let's!
So my choice is an approach which will get a covering set which is small and, ehem... sometimes will be the smallest :) To be spot on you would have to iterate between different strategies and compare the lengths of the resulting sets, but for this evening of fun I chose a single strategy which I find defensibly close or equal to the minimal set.
The strategy is to view the multidimensional array as a sequence of lists, each reduced to its distinct value set. Then, iteratively, reduce the total collection of lists using the smallest list in the remainder, weeding out of that smallest list any values that were not used in the reduction. This gives a path which is close enough to the ideal to be effective, and it completes in milliseconds.
An up-front critique of this approach: the direction in which you pass through your minimal list really ought to be varied as well - left to right, right to left, in position sequences X, Y, Z, ... - because the reduction potential is not equal in all of them. To get close to the ideal, those orderings would have to be tried on every iteration, until all combinations were covered, choosing the most-reducing sequence each time. Right - but I chose left to right, only!
I chose not to benchmark against your code, because you instantiate your MatrixResults as an array of int arrays rather than as a multidimensional array, which is what your drawing shows; so I went by your drawing and couldn't share a data source with your code. No matter, you can make that conversion if you wish. Onwards, to generating sample data:
private int[,] CreateSampleArray(int xDimension, int yDimensions, Random rnd)
{
    Debug.WriteLine($"Created sample array of dimensions ({xDimension}, {yDimensions})");
    var array = new int[xDimension, yDimensions];
    for (int x = 0; x < array.GetLength(0); x++)
    {
        for (int y = 0; y < array.GetLength(1); y++)
        {
            array[x, y] = rnd.Next(0, 4000);
        }
    }
    return array;
}
The overall structure, with some logging - I'm using xUnit to run the code:
[Fact]
public void SetCoverExperimentTest()
{
    var rnd = new Random((int)DateTime.Now.Ticks);
    var sw = Stopwatch.StartNew();
    int[,] matrixResults = CreateSampleArray(rnd.Next(100, 500), rnd.Next(100, 500), rnd);
    //So the first requirement is that you must have one element per row, so let's get our unique rows
    var listOfAll = new List<List<int>>();
    List<int> listOfRow;
    for (int y = 0; y < matrixResults.GetLength(1); y++)
    {
        listOfRow = new List<int>();
        for (int x = 0; x < matrixResults.GetLength(0); x++)
        {
            listOfRow.Add(matrixResults[x, y]);
        }
        listOfAll.Add(listOfRow.Distinct().ToList());
    }
    var setFound = new HashSet<int>();
    List<List<int>> allUniquelyRequired = GetDistinctSmallestList(listOfAll, setFound);
    // This set now has all rows that are either distinctly different
    // or are a reordering of the distinct values of lists of that length;
    // our HashSet has the unique value range.
    // Meaning: for any combination of sets with those values, grabbing any
    // one for each set, preferring already chosen ones, should give a covering total set
    var leastSet = new LeastSetData
    {
        LeastSet = setFound,
        MatrixResults = matrixResults,
    };
    List<Coordinate>? minSet = leastSet.GenerateResultsSet();
    sw.Stop();
    Debug.WriteLine($"Completed in {sw.Elapsed.TotalMilliseconds:0.00} ms");
    Assert.NotNull(minSet);
    //There is one for each row
    Assert.False(minSet.Select(s => s.y).Distinct().Count() < minSet.Count());
    //We took less than 25 milliseconds
    var timespan = new TimeSpan(0, 0, 0, 0, 25);
    Assert.True(sw.Elapsed < timespan);
    //Outputting to debugger for the fun of it
    var sb = new StringBuilder();
    foreach (var coordinate in minSet)
    {
        sb.Append($"({coordinate.x}, {coordinate.y}) {matrixResults[coordinate.x, coordinate.y]},");
    }
    var debugLine = sb.ToString();
    debugLine = debugLine.Substring(0, debugLine.Length - 1);
    Debug.WriteLine("Resulting set: " + debugLine);
}
Now for the meatier, iterative bits:
private List<List<int>> GetDistinctSmallestList(List<List<int>> listOfAll, HashSet<int> setFound)
{
    // Our smallest set must draw its value range from the distinct union
    // of all our smallest lists, plus unknowns
    var listOfShortest = new List<List<int>>();
    int shortest = int.MaxValue;
    foreach (var list in listOfAll)
    {
        if (list.Count < shortest)
        {
            listOfShortest.Clear();
            shortest = list.Count;
            listOfShortest.Add(list);
        }
        else if (list.Count == shortest)
        {
            if (listOfShortest.Contains(list))
                continue;
            listOfShortest.Add(list);
        }
    }
    var setFoundAddition = new HashSet<int>(setFound);
    foreach (var list in listOfShortest)
    {
        foreach (var item in list)
        {
            if (setFound.Contains(item))
                continue;
            if (setFoundAddition.Contains(item))
                continue;
            setFoundAddition.Add(item);
        }
    }
    //Now we can remove all rows with those found, we'll add the smallest later
    var listOfAllRemainder = new List<List<int>>();
    bool foundInList;
    List<int> consumedWhenReducing = new List<int>();
    foreach (var list in listOfAll)
    {
        foundInList = false;
        foreach (int item in list)
        {
            if (setFound.Contains(item))
            {
                //Covered by data from last iteration(s)
                foundInList = true;
                break;
            }
            else if (setFoundAddition.Contains(item))
            {
                consumedWhenReducing.Add(item);
                foundInList = true;
                break;
            }
        }
        if (!foundInList)
        {
            listOfAllRemainder.Add(list); //adding the lists that had no found elements
        }
    }
    //Remove from these smallest-set lists anything that did not get consumed in the pass before
    if (consumedWhenReducing.Count == 0)
    {
        throw new Exception($"Shouldn't be possible to remove the row itself without using one of its values, please investigate");
    }
    var removeArray = setFoundAddition.Where(a => !consumedWhenReducing.Contains(a)).ToArray();
    setFoundAddition.RemoveWhere(x => removeArray.Contains(x));
    foreach (var value in setFoundAddition)
    {
        setFound.Add(value);
    }
    if (listOfAllRemainder.Count != 0)
    {
        //Do the whole thing again until there is no list left
        listOfShortest.AddRange(GetDistinctSmallestList(listOfAllRemainder, setFound));
    }
    return listOfShortest; //Here we will ultimately have the sum of shortest lists per iteration
}
To conclude: I hope to have inspired you. At least I had fun coming up with a best approximation, and should you feel like completing the code, you're very welcome to grab whatever you like.
Obviously we should really track the sequence in which we go through the shortest lists; after all, it matters whether we start reducing the total distinct lists by the element at position 0 or at 0+N, and which one we reduce with after that. We must use one of those values, but each value consumed removes a different part of the total list, and the consumption sequence matters to the later iterations - a position we never reached, because nothing was left, could potentially have removed more than some of the ones that were covered. You get the picture, I'm sure.
And this is just one strategy. One may as well have chosen the largest distinct list within the same framework, and if you do not iteratively cover enough strategies, there is only brute force left.
Anyway, you'd want an AI to act just like a human: not to contemplate the existence of the universe first - after all, with silicon brains we can reconsider pretty often, as long as we can do so fast.
With any moving object at least, I'd much rather be 90% on target, correcting every second while taking 14 ms to get there, than spend 2 seconds reaching 99% or the illusive 100% - meaning we should stop the vehicle before the concrete pillar or the pram, or conversely buy the equity when it is a good time to do so, not figure out that we should have stopped when we are already on the other side of the obstacle, or that we should've bought 5 seconds ago, when by then the spot price has already jumped again...
Thus the defense rests on the notion that it is a matter of opinion whether this solution is good enough or simply incomplete at best :D
I realize it's pretty random, but just to say: although this sketch is not entirely indisputably correct, it is easy to read and maintain, and anyway the question is wrong B-] We will very rarely need the absolute minimal set, and when we do, the answer will be much longer :D
... woopsie, forgot the support classes
public struct Coordinate
{
    public int x;
    public int y;
    public override string ToString()
    {
        return $"({x},{y})";
    }
}
public struct CoordinateValue
{
    public int Value { get; set; }
    public Coordinate Coordinate { get; set; }
    public override string ToString()
    {
        return string.Concat(Coordinate.ToString(), " ", Value.ToString());
    }
}
public class LeastSetData
{
    public HashSet<int> LeastSet { get; set; }
    public int[,] MatrixResults { get; set; }
    public List<Coordinate> GenerateResultsSet()
    {
        HashSet<int> chosenValueRange = new HashSet<int>();
        var chosenSet = new List<Coordinate>();
        for (int y = 0; y < MatrixResults.GetLength(1); y++)
        {
            var candidates = new List<CoordinateValue>();
            for (int x = 0; x < MatrixResults.GetLength(0); x++)
            {
                if (LeastSet.Contains(MatrixResults[x, y]))
                {
                    candidates.Add(new CoordinateValue
                    {
                        Value = MatrixResults[x, y],
                        Coordinate = new Coordinate { x = x, y = y }
                    });
                }
            }
            if (candidates.Count == 0)
                throw new Exception($"OMG, something's wrong! (this row did not have any of the derived range [y: {y}])");
            var done = false;
            foreach (var c in candidates)
            {
                if (chosenValueRange.Contains(c.Value))
                {
                    chosenSet.Add(c.Coordinate);
                    done = true;
                    break;
                }
            }
            if (!done)
            {
                var firstCandidate = candidates.First();
                chosenSet.Add(firstCandidate.Coordinate);
                chosenValueRange.Add(firstCandidate.Value);
            }
        }
        return chosenSet;
    }
}
This problem is NP hard.
To show that, we have to take a known NP hard problem, and reduce it to this one. Let's do that with the Set Cover Problem.
We start with a universe U of things and a collection S of sets that covers the universe. Assign each thing a row and each set a number; in each thing's row, write the numbers of the sets that contain it. Rows will have different numbers of columns filled, so pad them out to a rectangle with fresh numbers that appear nowhere else.
Now solve your problem.
For each new number in your solution that didn't come from a set in the original problem, we can replace it with another number in the same row that did come from a set.
And now we turn numbers back into sets and we have a solution to the Set Cover Problem.
The transformations from set cover to your problem and back again are both O(number_of_elements * number_of_sets) which is polynomial in the input. And therefore your problem is NP hard.
Conversely if you replace each number in the matrix with the set of rows covered, your problem turns into the Set Cover Problem. Using any existing solver for set cover then gives a reasonable approach for your problem as well.
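As a concrete sketch of that last suggestion (my own illustration with hypothetical names, assuming System.Linq): map each value to the set of rows it covers, then run the classic greedy set-cover heuristic, which repeatedly picks the value covering the most still-uncovered rows. This is the standard ln(n)-approximation, not an exact solver:
static List<int> GreedyCover(int[][] rows)
{
    // value -> indices of the rows that contain it
    var covers = new Dictionary<int, HashSet<int>>();
    for (int r = 0; r < rows.Length; r++)
    {
        foreach (int v in rows[r])
        {
            if (!covers.TryGetValue(v, out var s))
                covers[v] = s = new HashSet<int>();
            s.Add(r);
        }
    }

    var uncovered = new HashSet<int>(Enumerable.Range(0, rows.Length));
    var chosen = new List<int>();
    while (uncovered.Count > 0)
    {
        // pick the value whose rows overlap the uncovered set the most
        var best = covers.OrderByDescending(kv => kv.Value.Count(uncovered.Contains)).First();
        chosen.Add(best.Key);
        uncovered.ExceptWith(best.Value);
    }
    return chosen; // a small (not necessarily minimal) set of values
}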
The code is not particularly tidy or optimised, but it illustrates the approach I think @btilly is suggesting in his answer (E&OE), using a bit of recursion (I was going for intuitive rather than ideal for scaling, so you may have to work out an iterative equivalent).
From the rows with their values, make a "values with the rows that they appear in" counterpart. Now pick a value, eliminate all rows in which it appears, and solve again for the reduced set of rows. Repeat recursively, keeping only the shortest solutions.
I know this is not terribly readable (or well explained) and I may come back to tidy it up in the morning, so let me know if it does what you want (i.e. whether it's worth a bit more of my time ;-).
// Setup
var rowValues = new Dictionary<int, HashSet<int>>
{
    [0] = new() { 0, 2, 3, 4, 5 },
    [1] = new() { 1, 2, 4, 5, 6 },
    [2] = new() { 1, 3, 4, 5, 6 },
    [3] = new() { 2, 3, 4, 5, 6 },
    [4] = new() { 1, 2, 3, 4, 5 }
};

Dictionary<int, HashSet<int>> ValueRows(Dictionary<int, HashSet<int>> rv)
{
    var vr = new Dictionary<int, HashSet<int>>();
    foreach (var row in rv.Keys)
    {
        foreach (var value in rv[row])
        {
            if (vr.ContainsKey(value))
            {
                if (!vr[value].Contains(row))
                    vr[value].Add(row);
            }
            else
            {
                vr.Add(value, new HashSet<int> { row });
            }
        }
    }
    return vr;
}

List<int> FindSolution(Dictionary<int, HashSet<int>> rAndV)
{
    if (rAndV.Count == 0) return new List<int>();
    var bestSolutionSoFar = new List<int>();
    var vAndR = ValueRows(rAndV);
    foreach (var v in vAndR.Keys)
    {
        var copyRemove = new Dictionary<int, HashSet<int>>(rAndV);
        foreach (var r in vAndR[v])
            copyRemove.Remove(r);
        var solution = new List<int> { v };
        solution.AddRange(FindSolution(copyRemove));
        if (bestSolutionSoFar.Count == 0 || solution.Count > 0 && solution.Count < bestSolutionSoFar.Count)
            bestSolutionSoFar = solution;
    }
    return bestSolutionSoFar;
}

var solution = FindSolution(rowValues);
Console.WriteLine($"Optimal solution has values {{ {string.Join(',', solution)} }}");
Output: Optimal solution has values { 4 }

alternative to else if for random number C#.net [duplicate]

This question already has answers here:
Select a random item from a weighted list
(4 answers)
Closed 4 years ago.
Is there a shorter way to check my random number from 1 to 100 (catNum) against this table of animals? This one doesn't look so bad, but I have several larger tables to work through, and I would like to use fewer lines than the statement below requires:
if (catNum < 36) { category = "Urban"; }
else if (catNum < 51) { category = "Rural"; }
else if (catNum < 76) { category = "Wild"; }
else if (catNum < 86) { category = "Wild Birds"; }
else { category = "Zoo"; }
Example of further tables: (image not included)
I prefer to use something like this instead of many if/else branches.
A category class:
class Category
{
    public int Min { get; set; }
    public int Max { get; set; }
    public string Name { get; set; }
}
Initialise the categories once and fill them with your values
var categories = new List<Category>();
and finally a method to resolve the category
public static string Get(int currentValue)
{
    var last = categories.Last(m => m.Min < currentValue);
    //if the list is ordered
    //or
    //var last = categories.FirstOrDefault(m => m.Min <= currentValue && m.Max >= currentValue);
    return last?.Name;
}
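For the table in the question, the list might be filled like this - my own sketch, not part of the original answer; Min is treated as an exclusive lower bound so that it matches the m.Min < currentValue predicate above, for rolls of 1 to 100:
// Hypothetical initialisation mirroring the question's if/else chain:
// 1-35 Urban, 36-50 Rural, 51-75 Wild, 76-85 Wild Birds, 86-100 Zoo
var categories = new List<Category>
{
    new Category { Min = 0,  Max = 35,  Name = "Urban" },
    new Category { Min = 35, Max = 50,  Name = "Rural" },
    new Category { Min = 50, Max = 75,  Name = "Wild" },
    new Category { Min = 75, Max = 85,  Name = "Wild Birds" },
    new Category { Min = 85, Max = 100, Name = "Zoo" }
};
// Get(40) == "Rural", Get(86) == "Zoo"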
One alternative is to build up a full list of the items, then you can just select one, at random, by index:
var categories =
Enumerable.Repeat("Urban", 35)
.Concat(Enumerable.Repeat("Rural", 15))
.Concat(Enumerable.Repeat("Wild", 25))
.Concat(Enumerable.Repeat("Wild Birds", 10))
.Concat(Enumerable.Repeat("Zoo", 15))
.ToArray();
var category = categories[45]; //Rural;
Yes, this is a well-studied problem and there are solutions that are more efficient than the if-else chain that you've already discovered. See https://en.wikipedia.org/wiki/Alias_method for the details.
My advice is: construct a generic interface type which represents the probability monad -- say, IDistribution<T>. Then write a discrete distribution implementation that uses the alias method. Encapsulate the mechanism work into the distribution class, and then at the use site, you just have a constructor that lets you make the distribution, and a T Sample() method that gives you an element of the distribution.
I notice that in your example you might have a Bayesian probability, i.e., P(Dog | Urban). A probability monad is the ideal mechanism to represent these things, because we reformulate P(A|B) as Func<B, IDistribution<A>>. So what have we got? We've got an IDistribution<Location>, we've got a function from Location to IDistribution<Animal>, and we then recognize that we put them together via the bind operation on the probability monad. Which means that in C# we can use LINQ: SelectMany is the bind operation on sequences, but it can also be used as the bind operation on any monad!
Now, given that, an exercise: What is the conditioned probability operation in LINQ?
Remember the goal is to make the code at the call site look like the operation being performed. If you are logically sampling from a discrete distribution, then have an object that represents discrete distributions and sample from it.
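As a rough sketch of the shape being described (illustrative names, not an existing library; the alias-method sampler itself is omitted), the monad's bind can look like this:
interface IDistribution<T>
{
    T Sample();
}

static class Distribution
{
    // SelectMany is the bind operation: sample from the prior, then sample
    // from the distribution conditioned on that outcome - P(Animal | Location)
    public static IDistribution<TR> SelectMany<TA, TR>(
        this IDistribution<TA> prior, Func<TA, IDistribution<TR>> likelihood)
        => new Bound<TA, TR>(prior, likelihood);

    private class Bound<TA, TR> : IDistribution<TR>
    {
        private readonly IDistribution<TA> prior;
        private readonly Func<TA, IDistribution<TR>> likelihood;
        public Bound(IDistribution<TA> p, Func<TA, IDistribution<TR>> l)
        {
            prior = p;
            likelihood = l;
        }
        public TR Sample() => likelihood(prior.Sample()).Sample();
    }
}
Given an IDistribution<Location> called locations and a hypothetical Func<Location, IDistribution<Animal>> called animalsIn, sampling an animal is then locations.SelectMany(animalsIn).Sample().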
string[] animals = new string[] { "Urban", "Rural", "Wild", "Wild Birds", "Zoo" };
int[] table = new int[] { 35, 50, 75, 85 };
for (int catNum = 10; catNum <= 100; catNum += 10)
{
    int index = Array.BinarySearch(table, catNum);
    if (index < 0) index = ~index;
    Console.WriteLine(catNum + ": " + animals[index]);
}
Run online: https://dotnetfiddle.net/yMeSPB

Find keys with min difference in dictionary

Say I have this collection; it is a generic dictionary:
var items = new Dictionary<int, SomeData>
{
    { 1, new SomeData() },
    { 5, new SomeData() },
    { 23, new SomeData() },
    { 22, new SomeData() },
    { 2, new SomeData() },
    { 7, new SomeData() },
    { 59, new SomeData() }
};
In this case the min distance (difference) between keys = 1, for instance between 23 and 22 or between 1 and 2:
23 - 22 = 1 or 2 - 1 = 1
Question: how do I find the min difference between keys in a generic Dictionary? Is there a one-line LINQ solution for this?
Purpose: if there are several matches then I need only one - the smallest. This is needed to fill missing keys (gaps) between items.
I don't know how to do it in one line of LINQ, but here is a multiline solution for this problem.
var items = new Dictionary<int, string>();
items.Add(1, "SomeData");
items.Add(5, "SomeData");
items.Add(23, "SomeData");
items.Add(22, "SomeData");
items.Add(2, "SomeData");
items.Add(7, "SomeData");
items.Add(59, "SomeData");
var sortedArray = items.Keys.OrderBy(x => x).ToArray();
int minDistance = int.MaxValue;
for (int i = 1; i < sortedArray.Length; i++)
{
    var distance = Math.Abs(sortedArray[i] - sortedArray[i - 1]);
    if (distance < minDistance)
        minDistance = distance;
}
Console.WriteLine(minDistance);
I'm not sure LINQ is the most appropriate tool, but something (roughly) along these lines should work:
var smallestDiff = (from key1 in items.Keys
                    from key2 in items.Keys
                    where key1 != key2
                    group new { key1, key2 } by Math.Abs(key1 - key2) into grp
                    orderby grp.Key
                    from keyPair in grp
                    orderby keyPair.key1
                    select keyPair).FirstOrDefault();
I won't give you a LINQ query because there already is an answer.
I know this is not what you are asking for, but I want to show you how to solve it in a very fast and easy-to-understand/maintain way, if performance and legibility are of any concern to you.
int[] keys;
int i, d, min;
keys = items.Keys.ToArray();
Array.Sort(keys); // leverage fastest possible implementation of sort
min = int.MaxValue;
for (i = 0; i < keys.Length - 1; i++)
{
    d = keys[i + 1] - keys[i]; // d is always non-negative after sort
    if (d < min)
    {
        if (d == 2)
        {
            return 2; // minimum 1-gap already reached
        }
        else if (d > 2) // ignore non-gap
        {
            min = d;
        }
    }
}
return min; // min contains the minimum difference between keys
Because there is only one sort, this non-LINQ solution performs pretty quickly. I don't say this is the best way, only that you should measure both solutions and compare performance.
EDIT: based on your purpose I've added this piece:
if (d == 2)
{
    return 2; // minimum 1-gap already reached
}
else if (d > 2) // ignore non-gap
{
    min = d;
}
Now what does this mean?
Say the PROBABILITY of having 1-gaps is high; then it is probably faster to check, at every change of min, whether you've reached that minimum gap. This may happen when you are 1% or 10% through the for loop, depending on the probability. So, for very large sets (say, above 1 million or 1 billion) and once you know the probability to expect, this probabilistic approach may give you huge performance gains.
On the contrary, for small sets or when the probability of 1-gaps is low, these extra CPU cycles are wasted and you are better off without that check.
As with very large databases (think of probabilistic indexing), probabilistic reasoning becomes relevant.
The problem is that you'll have to estimate beforehand if and when the probabilistic effect kicks in, and that's a pretty complex topic.
EDIT 2: a 1-gap actually has an index difference of 2. Furthermore, an index difference of 1 is a non-gap (there is no room to insert an index in between).
So the previous solution was simply wrong, because as soon as two indices are contiguous (say 34, 35) the minimum will be 1, which is not a gap at all.
Because of this gap problem the internal if() is necessary, and at that point the overhead of the probabilistic approach is nullified. You'll be better off with both the correct code and the probabilistic approach!
I think LINQ is simplest.
First, make the pairs of distinct keys from your dictionary:
var allPair = items.SelectMany((l) => items.Select((r) => new { l, r }).Where((pair) => l.Key != r.Key));
Then find the pair with the min difference:
allPair.OrderBy((pair) => Math.Abs(pair.l.Key - pair.r.Key)).FirstOrDefault();
But you may have multiple pairs with the same difference value, so you may need to use GroupBy before OrderBy and then handle the multiple pairs yourself, as sketched below.
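A rough sketch of that GroupBy variant (my addition, building on the allPair query above):
// Group pairs by their absolute difference; after ordering, the first group
// holds every pair that attains the minimum difference
var smallestDiffPairs = allPair
    .GroupBy(pair => Math.Abs(pair.l.Key - pair.r.Key))
    .OrderBy(g => g.Key)
    .First()
    .ToList();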
A one-line solution not listed in the other answers:
items.Keys.OrderBy(x => x).Select(x => new { CurVal = x, MinDist = int.MaxValue }).Aggregate((ag, x) => new { CurVal = x.CurVal, MinDist = Math.Min(ag.MinDist, x.CurVal - ag.CurVal) }).MinDist

Ordered Unique Combinations [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I have the following class:
internal class Course
{
    public int CourseCode { get; set; }
    public string DeptCode { get; set; }
    public string Name { get; set; }
}
and the following code is the two-dimensional array I have:
Course[][] courses = new Course[3][];
courses[0] = new Course[] {
    new Course() { CourseCode = 100, DeptCode = "EGR", Name = "EGR A" },
    new Course() { CourseCode = 100, DeptCode = "EGR", Name = "EGR B" }
};
courses[1] = new Course[] {
    new Course() { CourseCode = 200, DeptCode = "EN", Name = "EN A" }
};
courses[2] = new Course[] {
    new Course() { CourseCode = 300, DeptCode = "PHY", Name = "PHY A" }
};
What I want to do is get the different combinations that each item in a group could make with the items in the other groups; for example, with the previous code the results would be:
1. EGR A - EN A - PHY A
2. EGR B - EN A - PHY A
Answers:
To get the number of possible combinations, we can use the rule of product: in the above case the count is (2 * 1 * 1) = 2, which is indeed the 2 combinations I wrote above.
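That count can also be computed directly from the jagged array - a one-line sketch of the rule of product:
// Rule of product: multiply the group sizes; here 2 * 1 * 1 == 2
int combinationCount = courses.Aggregate(1, (n, g) => n * g.Length);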
LordTakkera has given the perfect answer, thank you so much!
You could use a nested for loop:
for (int i = 0; i < courses[0].Length; i++)
{
    for (int j = 0; j < courses[1].Length; j++)
    {
        for (int k = 0; k < courses[2].Length; k++)
        {
            //Do whatever you need with the combo, accessed like:
            //courses[0][i], courses[1][j], courses[2][k]
        }
    }
}
Of course, this solution gets really messy the more nesting you need. If you need to go any deeper, I would use some kind of recursive function to traverse the collection and generate the combinations.
It would be something like:
class CombinationHelper
{
    public List<List<Course>> GetAllCombinations(Course[][] courses)
    {
        return GetCourseCombination(courses, 0);
    }

    public List<List<Course>> GetCourseCombination(Course[][] courses, int myIndex)
    {
        List<List<Course>> combos = new List<List<Course>>();
        for (int i = 0; i < courses[myIndex].Length; i++)
        {
            if (myIndex + 1 < courses.GetLength(0))
            {
                foreach (List<Course> combo in GetCourseCombination(courses, myIndex + 1))
                {
                    combo.Add(courses[myIndex][i]);
                    combos.Add(combo);
                }
            }
            else
            {
                List<Course> newCombination = new List<Course>() { courses[myIndex][i] };
                combos.Add(newCombination);
            }
        }
        return combos;
    }
}
I tested this (substituting "int" for "Course" to make verification easier) and it produced all 8 combinations (though not in order; recursion tends to do that - if I come up with the ordering code I'll post it, but it shouldn't be too difficult).
Recursive functions are hard enough for me to come up with, so my explanation won't be very good. Basically, we start by kicking the whole thing off with the "0" index (so that we start from the beginning). Then we iterate over the current array. If we aren't the last array in the "master" array, we recurse into the next sub-array. Otherwise, we create a new combination, add ourselves to it, and return.
As the recursive stack "unwinds" we add the generated combinations to our return list, add ourselves to it, and return again. Eventually the whole thing "unwinds" and you are left with one list of all the combinations.
Again, I'm sure that was a very confusing explanation, but recursive algorithms are (at least for me) inherently confusing. I would be happy to attempt to elaborate on any point you would like.
Take a look at your second index - the 0 or 1. If you look only at that, you see a binary number from 0 to 7.
Count from 0 to 7, translate each count to bits, and you get the combination pattern you need.
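Generalizing that counting idea to groups of unequal sizes means counting in mixed radix rather than binary. A hedged sketch (AllCombinations is an illustrative name, assuming the Course[][] from the question and System.Linq):
static IEnumerable<Course[]> AllCombinations(Course[][] groups)
{
    long total = groups.Aggregate(1L, (n, g) => n * g.Length); // rule of product
    for (long k = 0; k < total; k++)
    {
        var combo = new Course[groups.Length];
        long rest = k;
        for (int g = 0; g < groups.Length; g++)
        {
            // digit g of k in mixed radix selects the item from group g
            combo[g] = groups[g][rest % groups[g].Length];
            rest /= groups[g].Length;
        }
        yield return combo;
    }
}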

c# lambda expression complexity

A simple question regarding lambda expressions.
I wanted to get an average of all trades in the following code. The formula I am using is (price_1*qty_1 + price_2*qty_2 + ... + price_n*qty_n) / (qty_1 + qty_2 + ... + qty_n).
In the following code, I use the Sum function to calculate the total of (price*qty), which is O(n), and then once more to add up all qty, which is O(n) again. Is there a way to get both summations in a single O(n) pass - a single lambda expression which can calculate both results?
Using a for loop, I can calculate both results in one O(n) pass.
class Program
{
    static void Main(string[] args)
    {
        List<Trade> trades = new List<Trade>()
        {
            new Trade() { price = 2, qty = 2 },
            new Trade() { price = 3, qty = 3 }
        };
        // using lambda
        int price = trades.Sum(x => x.price * x.qty);
        int qty = trades.Sum(x => x.qty);
        // using for loop
        int totalPriceQty = 0, totalQty = 0;
        for (int i = 0; i < trades.Count; ++i)
        {
            totalPriceQty += trades[i].price * trades[i].qty;
            totalQty += trades[i].qty;
        }
        Console.WriteLine("Average {0}", qty != 0 ? price / qty : 0);
        Console.Read();
    }
}
class Trade
{
    public int price;
    public int qty;
}
Edit: I know that constant coefficients don't count. Let me rephrase the question: with the lambdas we go through each element in the list twice, while with the for loop we go through each element only once. Is there a solution with a lambda that does not have to go through the list elements twice?
As mentioned, the Big-O is unchanged whether you iterate once or twice. If you want to use LINQ to iterate just once, you could use a custom aggregator (since you are reducing to the same properties, we can just use an instance of Trade for aggregation):
var ag = trades.Aggregate(new Trade(),
    (agg, trade) =>
    {
        agg.price += trade.price * trade.qty;
        agg.qty += trade.qty;
        return agg;
    });
int price = ag.price;
int qty = ag.qty;
At this point personally I would just use a foreach loop or the simple lambdas you already have - unless performance is crucial here (measure it!)
Big-O complexity does not take constant coefficients into consideration: O(n) + O(n) still gives O(n).
If you're determined to do it with lambdas, here's an example using the Aggregate operator. It looks contrived, and I wouldn't recommend it over a traditional for loop.
var result = trades.Aggregate(
    Tuple.Create(0, 0),
    (acc, trade) =>
        Tuple.Create(acc.Item1 + trade.price * trade.qty, acc.Item2 + trade.qty));
int totalPrice = result.Item1;
int totalQuantity = result.Item2;
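On newer C# versions the same single-pass fold reads a little less contrived with value tuples (an equivalent sketch, my addition):
// Same O(n) single pass, accumulating both sums in one value tuple
var (totalPrice2, totalQty2) = trades.Aggregate(
    (price: 0, qty: 0),
    (acc, t) => (acc.price + t.price * t.qty, acc.qty + t.qty));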
To expand on BrokenGlass's answer, you could also use an anonymous type as your aggregator like so:
var result = trades.Aggregate(
    new { TotalValue = 0L, TotalQuantity = 0L },
    (acc, trade) => new
    {
        TotalValue = acc.TotalValue + trade.price * trade.qty, // price * qty, as in the other answers
        TotalQuantity = acc.TotalQuantity + trade.qty
    }
);
There are two small upsides to this approach:
If you're doing this kind of calculation on a large number of trades (or on large trades), it's possible that you would overflow the ints that keep track of the total value of trades and the total shares. This approach lets you specify long as your data type (so it would take longer to overflow).
The object you get back from this aggregation will have more meaningful properties than you would if you simply returned a Trade object.
The big downside is that it's kinda weird to look at.
