Suppose there is an Item that a customer is ordering - in this case it turns out they are ordering 176 (totalNeeded) of this Item.
The database has 5 records associated with this item, describing the pack sizes it can be stored in:
{5 pack, 8 pack, 10 pack, 25 pack, 50 pack}.
A rough way of packing this would be:
Sort the array from biggest to smallest.
While (totalPacked < totalNeeded) // 176
{
1. Maintain an <int, int> dictionary which contains Keys of pack id's,
and values of how many needed
2. Add the largest pack, which is not larger than the amount remaining to pack,
increment totalPacked by the pack size
3. If any remainder is left over after the above, add the smallest pack to reduce
waste
e.g., 4 needed, smallest size is 5, so add one 5; one extra item packed
}
Based on the above logic, the outcome would be:
You need: 3 x 50 packs, 1 x 25 pack, 1 x 5 pack
Total Items: 180
Excess = 4 items; 180 - 176
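For illustration, a rough C# sketch of that greedy pass might look like this (my own rendering of the pseudo-code above; it assumes packSizes is sorted from biggest to smallest and uses pack sizes as dictionary keys for simplicity, where the pseudo-code uses pack ids):
// Requires using System.Linq and System.Collections.Generic.
static Dictionary<int, int> GreedyPack(int[] packSizes, int totalNeeded)
{
    var counts = new Dictionary<int, int>();   // pack size -> how many of that pack
    int totalPacked = 0;
    while (totalPacked < totalNeeded)
    {
        int remaining = totalNeeded - totalPacked;
        // Largest pack not larger than the remaining amount; if none fits, fall back to the smallest pack.
        int pick = packSizes.FirstOrDefault(p => p <= remaining);
        if (pick == 0) pick = packSizes[packSizes.Length - 1];
        if (!counts.ContainsKey(pick)) counts[pick] = 0;
        counts[pick]++;
        totalPacked += pick;
    }
    return counts;
}
For packSizes = { 50, 25, 10, 8, 5 } and totalNeeded = 176, this produces 3 x 50, 1 x 25, 1 x 5 = 180, as described above.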
The above is not too difficult to code; I have it working locally. However, it is not truly the best way to pack this item. Note: "best" means the smallest amount of excess.
Thus ... we have an 8-pack available and we need 176. 176 / 8 = 22, so send the customer 22 x 8-packs and they get exactly what they need. Checking for this is even simpler than the pseudo-code I wrote ... see if the total needed is evenly divisible by any of the packs in the array - if so, at the very least we know we can fall back on an exact packing (here, 22 x 8-packs).
In the case that the number is not evenly divisible by any array value, I am attempting to determine the possible ways the array values can be combined to reach at least the number we need (176), and then score the different combinations by the total number of packs needed.
If anyone has some reading that can be done on this topic, or advice of any kind to get me started it would be greatly appreciated.
Thank you
This is a variant of the Subset Sum Problem (optimization version).
While the problem is NP-Complete, there is a pretty efficient pseudo-polynomial time Dynamic Programming solution to it, based on the following recurrence:
D(x, i) = false                                  if x < 0
D(0, i) = true
D(x, 0) = false                                  if x != 0
D(x, i) = D(x, i-1) OR D(x - arr[i], i)          otherwise
The Dynamic Programming Solution will build up a table, where an element D[x][i]==true iff you can use the first i kinds of packs to establish sum x.
Needless to say, D[x][n] == true iff there is a solution using all available pack kinds that sums to x (where n is the number of pack kinds you have).
To get the "closest higher number", you just need to create a table of size W+pack[0]-1 (pack[0] being the smallest available pack, W being the sum you are looking for), and choose the value that yields true which is closest to W.
If you wish to give different values to the different pack types, this becomes the Knapsack Problem, which is very similar - but stores values instead of a simple true/false.
Getting the actual "items" (packs) chosen is then done by going back over the table and retracing your steps. This thread and this thread elaborate on how to achieve it in more detail.
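For illustration, a minimal C# sketch of that DP (reachability only; because packs can be reused, the two-dimensional D(x,i) table collapses into a one-dimensional reachability array - the method name and signature here are my own):
// Returns the closest reachable sum at or above W, or -1 if none exists in the window.
// packs is assumed non-empty, with packs[0] the smallest pack size.
public static int ClosestReachableAtOrAbove(int[] packs, int W)
{
    int limit = W + packs[0] - 1;      // enough headroom to find the closest sum >= W
    var reachable = new bool[limit + 1];
    reachable[0] = true;               // D(0) = true
    foreach (int p in packs)
        for (int x = p; x <= limit; x++)
            if (reachable[x - p])
                reachable[x] = true;   // x is reachable by adding one more pack of size p
    for (int x = W; x <= limit; x++)
        if (reachable[x])
            return x;
    return -1;
}
For the example, ClosestReachableAtOrAbove(new[] { 5, 8, 10, 25, 50 }, 176) returns 176 (22 x 8-packs reach it exactly); recording which pack was used to set each entry lets you retrace an actual packing, as described above.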
If this example problem is truly representative of the actual problem you are solving, it is small enough to try every combination with brute force using recursion. For example, I found exactly 6,681 unique packings that are locally maximized, with a total of 205 that have exactly 176 total items. The (unique) solution with minimum number of packs is 6, and that is { 2-8, 1-10, 3-50 }. Total runtime for the algorithm was 8 ms.
public static List<int[]> GeneratePackings(int[] packSizes, int totalNeeded)
{
var packings = GeneratePackingsInternal(packSizes, 0, new int[packSizes.Length], totalNeeded);
return packings;
}
private static List<int[]> GeneratePackingsInternal(int[] packSizes, int packSizeIndex, int[] packCounts, int totalNeeded)
{
    if (packSizeIndex >= packSizes.Length) return new List<int[]>();
    var currentPackSize = packSizes[packSizeIndex];
    var currentPacks = new List<int[]>();
    // Last pack size: take as many whole packs as still fit and close off this packing.
    if (packSizeIndex + 1 == packSizes.Length) {
        var lastOptimal = totalNeeded / currentPackSize;
        packCounts[packSizeIndex] = lastOptimal;
        return new List<int[]> { packCounts };
    }
    // Otherwise try every count of the current pack size that does not exceed what is still needed.
    for (var i = 0; i * currentPackSize <= totalNeeded; i++) {
        packCounts[packSizeIndex] = i;
        currentPacks.AddRange(GeneratePackingsInternal(packSizes, packSizeIndex + 1, (int[])packCounts.Clone(), totalNeeded - i * currentPackSize));
    }
    return currentPacks;
}
The algorithm is pretty straightforward:
Loop through every combination of number of 5-packs.
Loop through every combination of number of 8-packs, from remaining amount after deducting specified number of 5-packs.
etc to 50-packs. For 50-pack counts, directly divide the remainder.
Collect all combinations together recursively (so it dynamically handles any set of pack sizes).
Finally, once all the combinations are found, it is pretty easy to find the packings with the least waste and the least number of packages:
var packSizes = new int[] { 5, 8, 10, 25, 50 };
var totalNeeded = 176;
var result = GeneratePackings(packSizes, totalNeeded);
Console.WriteLine(result.Count());
var maximal = result.Where (r => r.Zip(packSizes, (a, b) => a * b).Sum() == totalNeeded).ToList();
var min = maximal.Min(m => m.Sum());
var minPacks = maximal.Where (m => m.Sum() == min).ToList();
foreach (var m in minPacks) {
Console.WriteLine("{ " + string.Join(", ", m) + " }");
}
Here is a working example: https://ideone.com/zkCUYZ
This partial solution is specifically for your pack sizes of 5, 8, 10, 25, 50, and only for order sizes of at least 40. There are a few gaps at smaller sizes that you'll have to fill another way (specifically at values like 6, 7, 22, 27, etc.).
Clearly, the only way to reach a total that isn't a multiple of 5 is to use the 8-packs.
Determine the number of 8-packs needed with modular arithmetic. Since 8 % 5 == 3, the number of 8-packs needed to cover an order remainder of 0, 1, 2, 3, 4 (mod 5) is 0, 2, 4, 1, 3 respectively. Something like
public static int GetNumberOf8Packs(int orderCount) {
    // Maps order remainders 0, 1, 2, 3, 4 (mod 5) to 0, 2, 4, 1, 3 eight-packs respectively.
    int remainder = (orderCount % 5);
    return ((remainder % 3) * 5 + remainder) / 3;
}
In your example of 176: 176 % 5 == 1, which means you'll need two 8-packs.
Subtract the value of the 8-packs to get the number of multiples of 5 you need to fill. At this point you still need to deliver 176 - 16 == 160.
Fill all the 50-packs you can by integer dividing. Keep track of the leftovers.
Now just fit the 5, 10, 25 packs as needed. Obviously use the larger values first.
Altogether, your code might look like this:
public static Order MakeOrder(int orderSize)
{
if (orderSize < 40)
{
throw new NotImplementedException("You'll have to write this part, since the modular arithmetic for 8-packs starts working at 40.");
}
var order = new Order();
order.num8 = GetNumberOf8Packs(orderSize);
int multipleOf5 = orderSize - (order.num8 * 8);
order.num50 = multipleOf5 / 50;
int remainderFrom50 = multipleOf5 % 50;
while (remainderFrom50 > 0)
{
if (remainderFrom50 >= 25)
{
order.num25++;
remainderFrom50 -= 25;
}
else if (remainderFrom50 >= 10)
{
order.num10++;
remainderFrom50 -= 10;
}
else if (remainderFrom50 >= 5)
{
order.num5++;
remainderFrom50 -= 5;
}
}
return order;
}
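The Order type isn't shown above; a minimal container consistent with the fields the method uses might look like this (my guess at the shape, not necessarily the original class):
public class Order
{
    // One counter per available pack size.
    public int num5;
    public int num8;
    public int num10;
    public int num25;
    public int num50;
}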
A DotNetFiddle
My goal is for the user to enter a number, for the program to divide it by each element of an array, and then to say whether the number is prime or not.
This is the best I have come up with so far:
Console.Write("Enter a number: ");
int number = Convert.ToInt32(Console.ReadLine());
int i = 0;
int[] div = {1, 2, 3, 5, 7, 11 };
int result;
do
{
result = number / div[i];
i++;
} while ((result % 1) == 0);
if ((result % 1) != 0)
{
Console.WriteLine("The number seems to be prime for now");
Console.ReadLine();
}
I know there's a problem but I can't figure it out. Also I might change the code, based on my progress and research.
When you get stuck on a programming problem, break it into parts.
First, write a function that can tell you if a number is divisible by another number.
bool IsDivisibleBy(int dividend, int divisor)
{
return (dividend % divisor) == 0;
}
Now write a function that will accept a list of divisors and do the same thing.
bool IsDivisibleByAny(int dividend, int[] divisors)
{
foreach (var divisor in divisors)
{
bool isDivisible = IsDivisibleBy(dividend, divisor);
if (isDivisible) return true;
}
return false;
}
Or in one line with LINQ (if you have gotten that far yet):
bool IsDivisibleByAny(int dividend, IEnumerable<int> divisors) =>
divisors.Any( divisor => IsDivisibleBy(dividend, divisor) );
Now your main program is very easy to write:
var divisors = new int[] { 2, 3, 5, 7, 11 }; // leave out 1 - every number is divisible by 1
Console.Write("Enter a number: ");
int number = Convert.ToInt32(Console.ReadLine());
bool isDivisible = IsDivisibleByAny(number, divisors);
if (!isDivisible)
{
Console.WriteLine("The number seems to be prime for now");
}
This will probably not be the answer you expect.
If we discard the first number (1), it seems your array div is a (very) partial list of primes. If you already know the prime numbers, you could simply check whether the number entered is in that list. If it is, the number is prime.
While it could look like cheating, it is actually a widely used technique (look-up tables for sine/cosine in signal processors, for instance).
That could be implemented in several ways.
As a LINQ query:
int Candidate; // The variable in which you store the read number.
var Primes = new List<int> { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, [...]}; // Could be obtained by parsing an external file.
var IsPrime = Primes.Any(p => p == Candidate);
Or, for better lookup efficiency, in a HashSet (a less commonly used data structure; the more familiar Dictionary would also work):
int Candidate; // The variable in which you store the read number.
var Primes = new HashSet<int> { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, [...]}; // Could be obtained by parsing an external file.
var IsPrime = Primes.Contains(Candidate);
If you need a file with prime numbers, see https://primes.utm.edu/lists/small/millions/
You'll notice the interesting comment on the page:
In this directory I have the first fifty million primes in blocks of
one million. Usually it is faster to run a program on your own
computer than to download them, but by popular demand, here they are!
I hope it will motivate you to find out how he does it!
This leads to another path: actually calculating whether a number is prime - but then your div array should not contain all these values (you want to calculate them, right?).
This is another subject altogether, and I won't expand on it here. For the record, the first known method for computing prime numbers is the Sieve of Eratosthenes.
See: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
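For the "calculate them yourself" path, a minimal sketch of the sieve in C# could look like this (method name and signature are my own):
// Sieve of Eratosthenes: returns all primes up to and including limit.
static List<int> SievePrimes(int limit)
{
    var isComposite = new bool[limit + 1]; // isComposite[i] == true: i was crossed out as a multiple
    var primes = new List<int>();
    for (int p = 2; p <= limit; p++)
    {
        if (isComposite[p]) continue; // p was never crossed out, so it is prime
        primes.Add(p);
        // Cross out multiples of p, starting at p*p (smaller multiples were already crossed out).
        for (long m = (long)p * p; m <= limit; m += p)
            isComposite[(int)m] = true;
    }
    return primes;
}
The result could then be loaded into the HashSet from the earlier snippet for fast membership checks.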
Notice that checking whether a number is prime and computing the primes are different problems (even if the first can be solved by means of the second).
A last comment:
The two general methods I outlined - looking up an already computed list, and calculating on the fly - are good examples of a tradeoff:
the first one generally requires a large amount of memory, in exchange for very little computation;
the second one requires far less memory, but trades it for a lot more computation.
In the case of prime numbers, neither tradeoff is clearly better than the other, but in the sine/cosine case I mentioned previously, using precomputed values is tremendously advantageous because of the cyclic nature of those functions: you only need a finite amount of memory to store all the needed values (assuming you don't need infinite precision, of course).
I've got a problem I can't seem to crack. It's not that easy to explain, so I'll do my best:
I have a really big array of values I have already sorted :
[ 2, 8, 26, ..., 1456, 1879, ..., 7812, 9614, ..., 20408, 26584 ]
I have 8 empty "silos" (kind of like an array) with unlimited capacity.
My goal is to fill those silo with all my values in order to make them as balanced as possible.
When I say "balanced", I mean that I want the sum of all the values in a silo to be almost equal to one another.
For example, if my silo 1 has a total sum of 52 000 and my silo 8 has a sum of 30 000 then it's not good. I'd rather have something like 41 000 and 43 500 if possible.
I've already tried a round-robin distribution, but it doesn't seem precise enough: I get quite a big difference between my silos.
I've looked up bin-packing also but it doesn't seem appropriate to me.
Thank you for any help or advice you may provide!
Optimization problems are never easy, but since you want to minimize the deviation from the average, a simple greedy approach works well: iterate your values from highest to lowest and always add the current value to the silo with the lowest sum so far. That way you don't spend much time on optimization and should (depending on your values) get a relatively good result.
// Generate some random data
int[] values = new int[1000];
Random r = new Random();
for (int i = 0; i < values.Length; i++)
{
values[i] = r.Next(1, 30000);
}
// Initialize silos
const int siloCount = 8;
List<int>[] result = new List<int>[siloCount];
for (int i = 0; i < siloCount; i++)
{
result[i] = new List<int>();
}
int[] sums = new int[siloCount];
int[] indices = Enumerable.Range(0, siloCount).ToArray();
// Iterate all values in descending order
foreach (int value in values.OrderByDescending(i => i).ToList())
{
// Find the silo with the lowest sum so far (or the first of them)
int minSum = sums.Min();
int siloIndex = indices.First(i => sums[i] == minSum);
// Add to collection and sum - so we don't have to call .Sum() every iteration
sums[siloIndex] += value;
result[siloIndex].Add(value);
}
Debug.WriteLine(String.Join("\r\n", result.Select(s => s.Sum())));
.Net Fiddle
I'm trying to solve a problem on code wars and the unit tests provided make absolutely no sense...
The problem is as follows, and it sounds simple enough to have something working in 5 minutes.
Consider a sequence u where u is defined as follows:
The number u(0) = 1 is the first one in u.
For each x in u, then y = 2 * x + 1 and z = 3 * x + 1 must be in u too.
There are no other numbers in u.
Ex: u = [1, 3, 4, 7, 9, 10, 13, 15, 19, 21, 22, 27, ...]
1 gives 3 and 4, then 3 gives 7 and 10, 4 gives 9 and 13, then 7 gives 15 and 22 and so on...
Task:
Given parameter n the function dbl_linear (or dblLinear...) returns the element u(n) of the ordered (with <) sequence u.
Example:
dbl_linear(10) should return 22
At first I used a SortedSet with a LINQ query, as I didn't really care about efficiency. I quickly learned that this operation has to handle values of n up to ~100000 in under 12 seconds.
So this abomination was born, then butchered time and time again, since a for loop caused issues for some reason. It was then "upgraded" to a while loop, which passed slightly more unit tests (4 -> 8).
public class DoubleLinear {
public static int DblLinear(int n) {
ListSet<int> table = new ListSet<int> {1};
for (int i = 0; i < n; i++) {
table.Put(Y(table[i]));
table.Put(Z(table[i]));
}
table.Sort();
return table[n];
}
private static int Y(int y) {
return 2 * y + 1;
}
private static int Z(int z) {
return 3 * z + 1;
}
}
public class ListSet<T> : List<T> {
public void Put(T item) {
if (!this.Contains(item))
this.Add(item);
}
}
With this code it still fails the calculation for n in excess of 75000, but it passes up to 8 tests.
I've checked whether other people have passed this, and they have. However, I cannot see what they wrote in order to learn from it.
Can anyone provide insight into what could be wrong here? I'm sure the answer is blatantly obvious and I'm being dumb.
Also, is using a custom list in this way a bad idea? Is there a better way?
ListSet is slow for sorting, and you constantly pay for memory reallocation as you build the set. I would start by allocating the table at its full size up front; honestly, a bare array of the size you need is best for performance.
If you know you need n = 75,000+, allocate a ListSet (or an array!) with that capacity. If the unit tests start taking you into the stratosphere, there is a binary segmentation technique we can discuss, but that's a bit involved and logically tougher to build.
I don't see anything logically wrong with the code. The numbers it generates are correct from where I'm standing.
EDIT: Since 2x+1 and 3x+1 are both increasing in x, u can be generated already in sorted order by merging the two sub-sequences 2*u[i]+1 and 3*u[j]+1 with two read pointers into the list you are building. Besides the list itself, you only have to maintain:
Target index in u
Read index into u for the 2x+1 stream
Read index into u for the 3x+1 stream
The current candidate value from each stream
public static int DblLinear(int target) {
    if (target < 1)
        return 1;
    // u is built in ascending order; iy and iz are read positions into u for the
    // 2*x+1 and 3*x+1 sub-sequences respectively.
    var u = new List<int>(target + 1) { 1 };
    int iy = 0, iz = 0;
    while (u.Count <= target) {
        int valY = 2 * u[iy] + 1;
        int valZ = 3 * u[iz] + 1;
        int next = Math.Min(valY, valZ);
        u.Add(next);
        // Advance whichever pointer(s) produced the value, so duplicates
        // (e.g. 2*3+1 == 3*2+1 == 7) are only added once.
        if (valY == next) iy++;
        if (valZ == next) iz++;
    }
    return u[target];
}
Because the list is preallocated to its final size, there is no repeated reallocation, and the branch in the loop is cheap - even if people want to (incorrectly) bellyache about branch prediction in such an easily predictable case.
Also, did you turn optimization on in your Visual Studio project? If you're submitting a binary and not a code file, then that can also shave quite a bit of time.
I want to have all combinations of the elements in a list, for a result like this:
List: {1,2,3}
1
2
3
1,2
1,3
2,3
My problem is that I have 180 elements, and I want all combinations of up to 5 elements. In my tests with 4 elements it took a long time (2 minutes), but all went well. With 5 elements, I get an out-of-memory exception.
My code presently is this:
public IEnumerable<IEnumerable<Rondin>> getPossibilites(List<Rondin> rondins)
{
var combin5 = rondins.Combinations(5);
var combin4 = rondins.Combinations(4);
var combin3 = rondins.Combinations(3);
var combin2 = rondins.Combinations(2);
var combin1 = rondins.Combinations(1);
return combin5.Concat(combin4).Concat(combin3).Concat(combin2).Concat(combin1).ToList();
}
Using this function (taken from this question: Algorithm to return all combinations of k elements from n):
public static IEnumerable<IEnumerable<T>> Combinations<T>(this IEnumerable<T> elements, int k)
{
return k == 0 ? new[] { new T[0] } :
elements.SelectMany((e, i) =>
elements.Skip(i + 1).Combinations(k - 1).Select(c => (new[] { e }).Concat(c)));
}
I need to search the list for a combination whose elements add up to a target value (within a certain precision), and do this for each element of another list. Here is all my code for this part:
var possibilites = getPossibilites(opt.rondins);
possibilites = possibilites.Where(p => p.Sum(r => r.longueur + traitScie) < 144);
foreach(BilleOptimisee b in opt.billesOptimisees)
{
var proches = possibilites.Where(p => p.Sum(r => (r.longueur + traitScie)) < b.chute && Math.Abs(b.chute - p.Sum(r => r.longueur)) - (p.Count() * 0.22) < 0.01).OrderByDescending(p => p.Sum(r => r.longueur)).ElementAt(0);
if(proches != null)
{
foreach (Rondin r in proches)
{
opt.rondins.Remove(r);
b.rondins.Add(r);
possibilites = possibilites.Where(p => !p.Contains(r));
}
}
}
With the code I have, how can I limit the memory taken by my list? Or is there a better solution for searching a very big set of combinations?
Please, if my question is not good, tell me why and I will do my best to learn and ask better questions next time ;)
Your output list for combinations of 5 elements alone will have ~1.5*10^9 (that's billion with a b) sublists of size 5. Even neglecting list overhead and assuming perfectly packed 32-bit integers, that is already ~30GB of raw data - and far more once .NET object overhead is counted!
You should reconsider whether you actually need to generate the list the way you do; an alternative would be streaming the combinations - i.e. generating them on the fly.
That can be done by writing a function which takes the last combination as an argument and outputs the next one. (To see how it is done, think about incrementing a number by one: you go from the last digit to the first, carrying over until you are done.)
A streaming example for choosing 2 out of 4:
start: {4,3}
curr = start {4, 3}
curr = next(curr) {4, 2} // reduce last by one
curr = next(curr) {4, 1} // reduce last by one
curr = next(curr) {3, 2} // cannot reduce more, reduce the first by one, and set the follower to maximal possible value
curr = next(curr) {3, 1} // reduce last by one
curr = next(curr) {2, 1} // similar to {3,2}
done.
Now, you need to figure out how to do this for lists of size 2, then generalize it to arbitrary size - and program your streaming combination generator. One possible sketch follows.
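Here is a sketch of such a next-combination step, using 0-based indices stored in descending order (the walkthrough above uses 1-based values); the method name and exact conventions are my own choices:
// Advances curr (a k-combination of the indices 0..n-1, stored in descending order)
// to the next combination in the streaming order shown above. Returns false when done.
static bool NextCombination(int[] curr)
{
    int k = curr.Length;
    for (int i = k - 1; i >= 0; i--)
    {
        // The smallest value position i may hold is (k - 1 - i), so that the positions
        // to its right can still be filled with strictly smaller values.
        if (curr[i] > k - 1 - i)
        {
            curr[i]--;
            // Reset everything to the right to the largest values still allowed ("carry over").
            for (int j = i + 1; j < k; j++)
                curr[j] = curr[j - 1] - 1;
            return true;
        }
    }
    return false; // we were already at the last combination
}
For "choose 2 out of 4" you would start from { 3, 2 } and call NextCombination until it returns false, which reproduces the sequence above in 0-based form.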
Good Luck!
Let your precision be defined in the imaginary spectrum.
Use a real index to access the leaf and then traverse the leaf with the required precision.
See PrecisLise # http://net7mma.codeplex.com/SourceControl/latest#Common/Collections/Generic/PrecicseList.cs
While the implementation as linked is not 100% complete, you can find where I used a similar concept here:
http://net7mma.codeplex.com/SourceControl/latest#RtspServer/MediaTypes/RFC6184Media.cs
Using this concept I was able to re-order h.264 Access Units and their underlying Network Access Layer Components in what I consider a very interesting way... beyond being interesting, it also has the potential to be more efficient while using close to the same amount of memory.
For example, 0 can be followed by 0.1 or 0.01 or 0.001; depending on the type of the key in the list (double, float, Vector, inter alia) you may have the added benefit of using the FPU or possibly intrinsics if supported by your processor, thus making sorting and indexing much faster than would be possible on normal sets, regardless of the underlying storage mechanism.
Using this concept allows for very interesting ordering... especially if you provide a mechanism to filter the precision.
I was also able to find several bugs in the bit-stream parser of quite a few well known media libraries using this methodology...
I found my solution. I'm writing it here so that other people with a similar problem have something to work with...
I made a recursive function that looks for a fixed number of possibilities that fit the conditions. When that number of possibilities is found, I return the list, do some calculations with the results, and can restart the process. I added a timer to stop the search when it takes too long. Since my condition is based on the sum of the elements, I only use distinct values, and search for a small number of possibilities each time (like 1).
So the function returns a possibility with a very high precision; I do what I need to do with it, remove its elements from the original list, and call the function again with the same precision until nothing is returned, at which point I continue with another precision. After several precisions have been processed, there are only about 30 elements left in my list, so I can ask for all the possibilities (that still fit the maximum sum), and this part is much easier than the beginning.
There is my code:
public List<IEnumerable<Rondin>> getPossibilites(IEnumerable<Rondin> rondins, int nbElements, double minimum, double maximum, int instance = 0, double longueur = 0)
{
if(instance == 0)
timer = DateTime.Now;
List<IEnumerable<Rondin>> liste = new List<IEnumerable<Rondin>>();
//Get all distinct rondins that can fit into the maximal length
foreach (Rondin r in rondins.Where(r => r.longueur < (maximum - longueur)).DistinctBy(r => r.longueur).OrderBy(r => r.longueur))
{
//Check the current length
double longueur2 = longueur + r.longueur + traitScie;
//If the current length is under the maximal length
if (longueur2 < maximum)
{
//Get all the possibilities with all rondins except the current one, and add them to the list
foreach (IEnumerable<Rondin> poss in getPossibilites(rondins.Where(rondin => rondin.id != r.id), nbElements - liste.Count, minimum, maximum, instance + 1, longueur2).Select(possibilite => possibilite.Concat(new Rondin[] { r })))
{
liste.Add(poss);
if (liste.Count >= nbElements && nbElements > 0)
break;
}
//If the current length is higher than the minimum, add it to the list
if (longueur2 >= minimum)
liste.Add(new Rondin[] { r });
}
//If we have enough possibilities, stop the search
if (liste.Count >= nbElements && nbElements > 0)
break;
//If the search is taking too long, stop and return the list
if (DateTime.Now.Subtract(timer).TotalSeconds > 30)
break;
}
return liste;
}
I am currently trying to write C# code that finds multiple arrays of integers that equal a specified total when summed up. I would like to find these combinations where each integer in the array is constrained to a given range.
For example, if our total is 10 and we have an int array of size 3 where the first number can be between 1 and 4, the second 2 and 4, and the third 3 and 6, some possible combination are [1, 3, 6], [2, 2, 6], and [4, 2, 4].
What sort of algorithm would help solve a problem like this in the most efficient amount of time? Also, what other things should I keep in mind when translating this problem into C# code?
I would do this using recursion. You can simply iterate over all possible values and see if they give a required sum.
Input
Let's suppose we have the following input pattern:
N S
min1 min2 min3 ... minN
max1 max2 max3 ... maxN
For your example
if our total is 10 and we have an int array of size 3 where the first
number can be between 1 and 4, the second 2 and 4, and the third 3 and
6
it will be:
3 10
1 2 3
4 4 6
Solution
We have read our input values. Now, we just try to use each possible number for our solution.
We will have a List which will store the current path:
static List<int> current = new List<int>();
The recursive function is pretty simple:
private static void Do(int index, int currentSum)
{
if (index == length) // Termination
{
if (currentSum == sum) // If result is a required sum - just output it
Output();
return;
}
// try all possible solutions for current index
for (int i = minValues[index]; i <= maxValues[index]; i++)
{
current.Add(i);
Do(index + 1, currentSum + i); // pass new index and new sum
current.RemoveAt(current.Count() - 1);
}
}
If all values are non-negative, we can also add the following condition. This pruning cuts off a huge number of useless iterations: if currentSum is already greater than sum, there is no point continuing down this recursion branch:
if (currentSum > sum) return;
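For completeness, here is a guess at the scaffolding the snippet assumes (the fields length, sum, minValues, maxValues and the Output helper; the actual versions are in the linked demo below):
// Hypothetical scaffolding; the field names simply match those used by Do() above.
static int length, sum;
static int[] minValues, maxValues;

static void Output()
{
    Console.WriteLine(string.Join(" ", current));
}

static void Main()
{
    length = 3;
    sum = 10;
    minValues = new[] { 1, 2, 3 };
    maxValues = new[] { 4, 4, 6 };
    Do(0, 0);
}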
Actually, this algorithm is a simple solution to the "find combinations that give a sum S" problem, with one difference: the loop at each index runs from minValues[index] to maxValues[index].
Demo
Here is the working IDEOne demo of my solution.
You cannot do much better than nested for loops/recursion. Though if you are familiar with the 3SUM problem you will know a little trick to reduce the time complexity of this sort of algorithm! If you have n ranges then you know what number you have to pick from the nth range after you make your first n-1 choices!
I will use an example to walk through my suggestion.
if our total is 10 and we have an int array of size 3 where the first number can be between 1 and 4, the second 2 and 4, and the third 5 and 6
First of all, let's process the data to be a bit nicer to deal with. I personally like the idea of working with ranges that start at 0 instead of arbitrary numbers! So we subtract the lower bounds from the upper bounds:
(1 to 4) -> (0 to 3)
(2 to 4) -> (0 to 2)
(5 to 6) -> (0 to 1)
Of course now we need to adjust our target sum to reflect the new ranges. So we subtract our original lower bounds from our target sum as well!
TargetSum = 10-1-2-5 = 2
Now we can represent our ranges with just the upper bound since they share a lower bound! So a range array would look something like:
RangeArray = [3,2,1]
Let's sort this (it will become clear why later). So we have:
RangeArray = [1,2,3]
Great! Now on to the beef of the algorithm... the summing! For now I will use for loops, as they are easier for example purposes; you will have to use recursion. Yeldar's code should give you a good starting point.
result = []
for i from 0 to RangeArray[0]:
SumList = [i]
newSum = TargetSum - i
for j from 0 to RangeArray[1]:
if (newSum-j)>=0 and (newSum-j)<=RangeArray[2] then
finalList = SumList + [j, newSum-j]
result.append(finalList)
Note the inner loop. This is what was inspired by the 3SUM algorithm. We take advantage of the fact that we know what value we have to pick from the third range (since it is defined by our first 2 choices).
From here you of course have to re-map the results back to the original ranges by adding the original lower bounds to the values that came from the corresponding ranges.
Notice that we can now see why it was a good idea to sort RangeArray: the last range gets absorbed into the second-last range's loop, and we want the largest range to be the one that does not loop.
I hope this helps to get you started! If you need any help translating my pseudocode into c# just ask :)
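In case it helps, one possible C# rendering of the pseudocode above (a sketch using the zero-based ranges and target from the walkthrough; variable names are mine):
// Zero-based ranges [1, 2, 3] and TargetSum = 2, as derived in the walkthrough.
int[] rangeArray = { 1, 2, 3 };
int targetSum = 2;
var result = new List<int[]>();
for (int i = 0; i <= rangeArray[0]; i++)
{
    int newSum = targetSum - i;
    for (int j = 0; j <= rangeArray[1]; j++)
    {
        // The value taken from the last (largest) range is forced by the first two choices.
        int k = newSum - j;
        if (k >= 0 && k <= rangeArray[2])
            result.Add(new[] { i, j, k });
    }
}
// Remember to add the original lower bounds back (minding the re-ordering from the sort)
// to map each combination onto the original ranges.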