I have a list of Offers, from which I want to create "chains" (e.g. permutations) with limited chain lengths.
I've gotten as far as creating the permutations using the Kw.Combinatorics project.
However, the default behavior creates permutations whose length equals the list count. I'm not sure how to limit the chain length to 'n'.
Here's my current code:
private static List<List<Offers>> GetPerms(List<Offers> list, int chainLength)
{
List<List<Offers>> response = new List<List<Offers>>();
foreach (var row in new Permutation(list.Count).GetRows())
{
List<Offers> innerList = new List<Offers>();
foreach (var mix in Permutation.Permute(row, list))
{
innerList.Add(mix);
}
response.Add(innerList);
innerList = new List<Offers>();
}
return response;
}
Implemented by:
List<List<AdServer.Offers>> lst = GetPerms(offers, 2);
I'm not locked into Kw.Combinatorics if someone has a better solution to offer.
Here's another implementation which I think should be faster than the accepted answer (and it's definitely less code).
public static IEnumerable<IEnumerable<T>> GetVariationsWithoutDuplicates<T>(IList<T> items, int length)
{
if (length == 0 || !items.Any()) return new List<List<T>> { new List<T>() };
return from item in items.Distinct()
from permutation in GetVariationsWithoutDuplicates(items.Where(i => !EqualityComparer<T>.Default.Equals(i, item)).ToList(), length - 1)
select Prepend(item, permutation);
}
public static IEnumerable<IEnumerable<T>> GetVariations<T>(IList<T> items, int length)
{
if (length == 0 || !items.Any()) return new List<List<T>> { new List<T>() };
return from item in items
from permutation in GetVariations(Remove(item, items).ToList(), length - 1)
select Prepend(item, permutation);
}
public static IEnumerable<T> Prepend<T>(T first, IEnumerable<T> rest)
{
yield return first;
foreach (var item in rest) yield return item;
}
public static IEnumerable<T> Remove<T>(T item, IEnumerable<T> from)
{
var isRemoved = false;
foreach (var i in from)
{
if (!EqualityComparer<T>.Default.Equals(item, i) || isRemoved) yield return i;
else isRemoved = true;
}
}
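For the question's scenario, usage could look something like this (a sketch; offers and the Offers type come from the question, and the chains are materialized into lists to match the original signature):
// Hypothetical usage, mirroring the question's GetPerms call:
List<List<Offers>> lst = GetVariationsWithoutDuplicates(offers, 2)
    .Select(chain => chain.ToList())
    .ToList();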
On my 3.1 GHz Core 2 Duo, I tested with this:
public static void Test(Func<IList<int>, int, IEnumerable<IEnumerable<int>>> getVariations)
{
var max = 11;
var timer = System.Diagnostics.Stopwatch.StartNew();
for (int i = 1; i < max; ++i)
for (int j = 1; j < i; ++j)
getVariations(MakeList(i), j).Count();
timer.Stop();
Console.WriteLine("{0,40}{1} ms", getVariations.Method.Name, timer.ElapsedMilliseconds);
}
// Make a list that repeats to guarantee we have duplicates
public static IList<int> MakeList(int size)
{
return Enumerable.Range(0, size/2).Concat(Enumerable.Range(0, size - size/2)).ToList();
}
Unoptimized
GetVariations 11894 ms
GetVariationsWithoutDuplicates 9 ms
OtherAnswerGetVariations 22485 ms
OtherAnswerGetVariationsWithDuplicates 243415 ms
With compiler optimizations
GetVariations 9667 ms
GetVariationsWithoutDuplicates 8 ms
OtherAnswerGetVariations 19739 ms
OtherAnswerGetVariationsWithDuplicates 228802 ms
You're not looking for a permutation, but for a variation. Here is a possible algorithm. I prefer iterator methods for functions that can potentially return very many elements; this way, the caller can decide whether they really need all of them:
IEnumerable<IList<T>> GetVariations<T>(IList<T> offers, int length)
{
var startIndices = new int[length];
var variationElements = new HashSet<T>(); //for duplicate detection
while (startIndices[0] < offers.Count)
{
var variation = new List<T>(length);
var valid = true;
for (int i = 0; i < length; ++i)
{
var element = offers[startIndices[i]];
if (variationElements.Contains(element))
{
valid = false;
break;
}
variation.Add(element);
variationElements.Add(element);
}
if (valid)
yield return variation;
//Count up the indices
startIndices[length - 1]++;
for (int i = length - 1; i > 0; --i)
{
if (startIndices[i] >= offers.Count)
{
startIndices[i] = 0;
startIndices[i - 1]++;
}
else
break;
}
variationElements.Clear();
}
}
The idea behind this algorithm is to count with a number in base offers.Count. For three offers, all digits are in the range 0-2. We then increment this number step by step and return the offers that reside at the indices given by its digits. If you want to allow duplicates, you can remove the check and the HashSet<T>.
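For example, with three offers and a length of 2, startIndices counts 00, 01, 02, 10, 11, 12, 20, 21, 22, exactly like a two-digit base-3 number, and the HashSet check then discards combinations that would reuse an element (such as 00, 11 and 22). A tiny illustration of just the counting part (illustrative only, not part of the method above):
// Fixed-length special case of the index counting done by the while loop above.
for (int first = 0; first < 3; first++)
    for (int second = 0; second < 3; second++)
        Console.WriteLine($"{first}{second}");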
Update
Here is an optimized variant that does the duplicate check on the index level. In my tests it is a lot faster than the previous variant:
IEnumerable<IList<T>> GetVariations<T>(IList<T> offers, int length)
{
var startIndices = new int[length];
for (int i = 0; i < length; ++i)
startIndices[i] = i;
var indices = new HashSet<int>(); // for duplicate check
while (startIndices[0] < offers.Count)
{
var variation = new List<T>(length);
for (int i = 0; i < length; ++i)
{
variation.Add(offers[startIndices[i]]);
}
yield return variation;
//Count up the indices
AddOne(startIndices, length - 1, offers.Count - 1);
//duplicate check
var check = true;
while (check)
{
indices.Clear();
for (int i = 0; i <= length; ++i)
{
if (i == length)
{
check = false;
break;
}
if (indices.Contains(startIndices[i]))
{
var unchangedUpTo = AddOne(startIndices, i, offers.Count - 1);
indices.Clear();
for (int j = 0; j <= unchangedUpTo; ++j )
{
indices.Add(startIndices[j]);
}
int nextIndex = 0;
for(int j = unchangedUpTo + 1; j < length; ++j)
{
while (indices.Contains(nextIndex))
nextIndex++;
startIndices[j] = nextIndex++;
}
break;
}
indices.Add(startIndices[i]);
}
}
}
}
int AddOne(int[] indices, int position, int maxElement)
{
//returns the index of the last element that has not been changed
indices[position]++;
for (int i = position; i > 0; --i)
{
if (indices[i] > maxElement)
{
indices[i] = 0;
indices[i - 1]++;
}
else
return i;
}
return 0;
}
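Usage could look like this (assuming the offers list from the question; the chains are printed with the default ToString just for illustration):
foreach (var chain in GetVariations(offers, 2))
    Console.WriteLine(string.Join(", ", chain));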
If I understood you correctly, here is what you need.
This will create permutations based on the specified chain limit:
public static List<List<T>> GetPerms<T>(List<T> list, int chainLimit)
{
if (list.Count() == 1)
return new List<List<T>> { list };
return list
.Select((outer, outerIndex) =>
GetPerms(list.Where((inner, innerIndex) => innerIndex != outerIndex).ToList(), chainLimit)
.Select(perms => (new List<T> { outer }).Union(perms).Take(chainLimit)))
.SelectMany<IEnumerable<IEnumerable<T>>, List<T>>(sub => sub.Select<IEnumerable<T>, List<T>>(s => s.ToList()))
.Distinct(new PermComparer<T>()).ToList();
}
class PermComparer<T> : IEqualityComparer<List<T>>
{
public bool Equals(List<T> x, List<T> y)
{
return x.SequenceEqual(y);
}
public int GetHashCode(List<T> obj)
{
return (int)obj.Average(o => o.GetHashCode());
}
}
and you'll call it like this
List<List<AdServer.Offers>> lst = GetPerms<AdServer.Offers>(offers, 2);
I made this function pretty generic, so you may use it for other purposes too, e.g.
List<string> list = new List<string>(new[] { "apple", "banana", "orange", "cherry" });
List<List<string>> perms = GetPerms<string>(list, 2);
result
I have an array of n integers, and I need to divide any of its elements by 2 (taking the ceiling of the result) k times such that the resulting sum is minimal. The value of k can be very large compared to n.
I am using this code:
private static int GetMaxSum(int[] array, int k)
{
int n = array.Length;
for (int i = 0; i < k; i++)
{
var indexAtMax = GetMaxIndex(array);
if (array[indexAtMax] == 1) break;
array[indexAtMax] = array[indexAtMax] / 2 + array[indexAtMax] % 2;
}
return array.Sum();
}
private static int GetMaxIndex(int[] array)
{
int maxIndex = 0;
int max = array[0];
for (int i=1; i<array.Length;i++)
{
if (array[i] > max)
{
max = array[i];
maxIndex = i;
}
}
return maxIndex;
}
How can we improve the performance further probably by using max heap or some other data structure?
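For reference, here is a rough sketch of the max-heap idea mentioned above (it assumes .NET 6+'s PriorityQueue<TElement, TPriority>; the method name is made up). It repeats the same greedy step, halving the current maximum and rounding up, but without rescanning the array each time, so it runs in roughly O(n + k log n):
private static int GetMinSumWithHeap(int[] array, int k)
{
    // A reverse comparer turns the built-in min-heap into a max-heap.
    var heap = new PriorityQueue<int, int>(Comparer<int>.Create((a, b) => b.CompareTo(a)));
    foreach (var value in array)
        heap.Enqueue(value, value);
    for (int i = 0; i < k && heap.Count > 0 && heap.Peek() > 1; i++)
    {
        int max = heap.Dequeue();
        int halved = max / 2 + max % 2; // ceiling of max / 2
        heap.Enqueue(halved, halved);
    }
    int sum = 0;
    while (heap.Count > 0)
        sum += heap.Dequeue();
    return sum;
}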
Unless I'm misunderstanding your requirements, your solution seems way too complicated (and apparently wrong, according to comments).
I can't really think this through right now, but wouldn't it be the case that the global solution is made up of optimal intermediate steps? The order in which you divide is irrelevant and the problem is linear.
If that is the case, you simply have to evaluate the optimal division in each step and that is not very hard to do:
static void Minimize(int[] arr, int k)
{
for (var j = 0; j < k; j++)
{
var maxGainIndex = -1;
var maxGain = int.MinValue;
for (var i = 0; i < arr.Length; i++)
{
var gain = arr[i] - (arr[i]/2 + arr[i] % 2);
if (gain > maxGain)
{
maxGain = gain;
maxGainIndex = i;
}
}
arr[maxGainIndex] -= maxGain;
}
}
If I'm not wrong, the asymptotic behavior of this algorithm is O(k·n).
UPDATE:
Based on the claims that the posted code is far less optimal, I've taken the liberty of benchmarking both algorithms, with these results on my machine:
Input array: 100;120;80;55;75;115;125;150;90;35;65;77;89;10;11;113;200;300
Number of divisions: 20
Running benchmarks in Release mode without debugger attached.
1000000 of GetMimimum finished in 584 ms with result 704.
1000000 of GetMimimum2 finished in 8846 ms with result 704.
Benchmarking code can be found here: https://dotnetfiddle.net/ITx53q
The performance gain of my proposed algorithm is rather staggering (about 15x), which was expected because your solution is, as evaluated initially, overcomplicated at best for such a simple problem.
As the assumption was that k >> n, the simpler algorithms are of order O(kn), which can be too many iterations.
I wrote this code thinking about the problem and how to limit the sorting and min/max calculations. I have divided the array into subarrays so that the operations can be performed on the subarrays without worrying about the order of operations.
private static int GetMinSum(int[] array, int k)
{
int n = array.Length;
var sum = 0;
k = GetOptimizedListAndK(array, n, k, out var lists);
//If more sublists are needed
if (k > 0)
{
var count = lists.CountSum;
var key = lists.Key;
if (key > 0)
{
var poweroftwo = 1 << key;
sum += count * poweroftwo - k * poweroftwo / 2;
var dictionary2 = GetDictionary(array, lists, poweroftwo);
key = dictionary2.Keys.Last();
while (k > 0 && key > 0)
{
var list2 = dictionary2[key];
count = list2.Count;
if (k >= count)
{
list2.ForEach(
index => array[index] = array[index] / 2 + array[index] % 2);
dictionary2.Remove(key);
key = dictionary2.Keys.LastOrDefault();
k -= count;
}
else
{
if (k <= Log2(count))
{
for (int i = 0; i < k; i++)
{
var indexAtMax = GetMaxIndex(list2, array);
array[indexAtMax] = array[indexAtMax] / 2 + array[indexAtMax] % 2;
}
k = 0;
}
if (count - k <= Log2(count))
{
var minIndexes = GetMinIndexes(list2, array, count - k);
foreach (var i in list2)
{
if (!minIndexes.Contains(i))
{
array[i] = array[i] / 2 + array[i] % 2;
}
}
k = 0;
}
if (k > 0)
{
poweroftwo = 1 << key;
sum += list2.Count * poweroftwo - k * poweroftwo / 2;
dictionary2 = GetDictionary(array, list2, poweroftwo);
key = dictionary2.Keys.Last();
}
}
}
}
}
return array.Sum() + sum;
}
private static int GetOptimizedListAndK(int[] array, int n, int k, out Lists lists)
{
lists = null;
Dictionary<int, Lists> dictionary = new Dictionary<int, Lists>();
PopulatePowerBasedDictionary(array, n, dictionary);
var key = dictionary.Keys.Max();
while (key > 0 && k > 0)
{
lists = dictionary[key];
var count = lists.CountSum;
if (k >= count)
{
lists.ForEach(list => list.ForEach(index => array[index] = array[index] / 2 + array[index] % 2));
if (key > 1)
{
if (dictionary.TryGetValue(key - 1, out var lowerlists))
{
lowerlists.AddRange(lists);
lowerlists.CountSum += count;
}
else dictionary.Add((key - 1), lists);
}
dictionary.Remove(key);
key--;
k -= count;
}
else
{
if (k < Log2(count))
{
for (int i = 0; i < k; i++)
{
var indexAtMax = GetMaxIndex(lists, array);
array[indexAtMax] = array[indexAtMax] / 2 + array[indexAtMax] % 2;
}
k = 0;
}
if (count - k < Log2(count))
{
var minIndexes = GetMinIndexes(lists, array, count - k);
foreach (var list in lists)
{
foreach (var i in list)
{
if (!minIndexes.Contains(i))
{
array[i] = array[i] / 2 + array[i] % 2;
}
}
}
k = 0;
}
break;
}
}
return k;
}
private static void PopulatePowerBasedDictionary(int[] array, int n, Dictionary<int, Lists> dictionary)
{
for (int i = 0; i < n; i++)
{
if (array[i] < 2) continue;
var log2 = Log2(array[i]);
if (dictionary.TryGetValue(log2, out var lists))
{
lists[0].Add(i);
lists.CountSum++;
}
else
{
lists = new Lists(1,log2) { new List<int> { i } };
dictionary.Add(log2, lists);
}
}
}
private static int GetMaxIndex(List<int> list, int[] array)
{
var maxIndex = 0;
var max = 0;
foreach (var i in list)
{
if (array[i] > max)
{
maxIndex = i;
max = array[i];
}
}
return maxIndex;
}
private static SortedDictionary<int, List<int>> GetDictionary(int[] array, Lists lists, int poweroftwo)
{
SortedDictionary<int, List<int>> dictionary = new SortedDictionary<int, List<int>>();
foreach (var list in lists)
{
foreach (var i in list)
{
array[i] = array[i] - poweroftwo;
if (array[i] < 2)
{
continue;
}
var log2 = Log2(array[i]);
if (dictionary.TryGetValue(log2, out var list2))
{
list2.Add(i);
}
else
{
list2 = new List<int> { i };
dictionary.Add(log2, list2);
}
}
}
return dictionary;
}
private static SortedDictionary<int, List<int>> GetDictionary(int[] array, List<int> list, int poweroftwo)
{
SortedDictionary<int, List<int>> dictionary = new SortedDictionary<int, List<int>>();
foreach (var i in list)
{
array[i] = array[i] - poweroftwo;
if (array[i] < 2)
{
continue;
}
var log2 = Log2(array[i]);
if (dictionary.TryGetValue(log2, out var list2))
{
list2.Add(i);
}
else
{
list2 = new List<int> { i };
dictionary.Add(log2, list2);
}
}
return dictionary;
}
private static int GetMaxIndex(Lists lists, int[] array)
{
var maxIndex = 0;
var max = 0;
foreach (var list in lists)
{
foreach (var i in list)
{
if (array[i]>max)
{
maxIndex = i;
max = array[i];
}
}
}
return maxIndex;
}
private static HashSet<int> GetMinIndexes(Lists lists, int[] array, int k)
{
var mins = new HashSet<int>();
var minIndex = 0;
var min = int.MaxValue;
for (int j = 0; j < k; j++)
{
foreach (var list in lists)
{
foreach (var i in list)
{
if (array[i] < min && !mins.Contains(i))
{
minIndex = i;
min = array[i];
}
}
}
mins.Add(minIndex);
min = int.MaxValue;
}
return mins;
}
private static HashSet<int> GetMinIndexes(List<int> list, int[] array, int k)
{
var mins = new HashSet<int>();
var minIndex = 0;
var min = int.MaxValue;
for (int j = 0; j < k; j++)
{
foreach (var i in list)
{
if (array[i] < min && !mins.Contains(i))
{
minIndex = i;
min = array[i];
}
}
mins.Add(minIndex);
min = int.MaxValue;
}
return mins;
}
private static int Log2(int n)
{
return BitOperations.Log2((uint)n);
}
Lists Class:
public class Lists:List<List<int>>
{
public int Key { get; set; }
public int CountSum { get; set; }
public Lists(int countSum, int key):base()
{
CountSum = countSum;
Key = key;
}
}
I'm dealing with a HackerRank algorithm problem.
It works for all test cases except 6-7-8-9, which fail with a timeout error. I've spent a lot of time at this level. Does anyone see where the problem is?
static long[] climbingLeaderboard(long[] scores, long[] alice)
{
//long[] ranks = new long[scores.Length];
long[] aliceRanks = new long[alice.Length]; // same length with alice length
long lastPoint = 0;
long lastRank;
for (long i = 0; i < alice.Length; i++)
{
lastPoint = scores[0];
lastRank = 1;
bool isIn = false; // if never drop in if statement
for (long j = 0; j < scores.Length; j++)
{
if (lastPoint != scores[j]) //if score is not same, raise the variable
{
lastPoint = scores[j];
lastRank++;
}
if (alice[i] >= scores[j])
{
aliceRanks[i] = lastRank;
isIn = true;
break;
}
aliceRanks[i] = !isIn & j + 1 == scores.Length ? ++lastRank : aliceRanks[i]; //drop in here
}
}
return aliceRanks;
}
This problem can be solved in O(n) time, no binary search needed at all. First, we need to extract the most useful piece of data given in the problem statement, which is,
The existing leaderboard, scores, is in descending order.
Alice's scores, alice, are in ascending order.
An approach that makes this useful is to create two pointers: one at the start of the alice array, call it i, and one at the end of the scores array, call it j. We then loop until i reaches the end of the alice array, checking three conditions at each iteration. If alice[i] is less than scores[j], we increment i, because the next element of alice may also be less than the current element of scores. If alice[i] is greater than scores[j], we decrement j, because we are sure that the remaining elements of alice are also greater than the scores discarded so far. Finally, if alice[i] == scores[j], we only increment i.
I solved this question in C++; my goal here is to help you understand the algorithm, and I think you can easily convert it to C# once you do. If anything is confusing, please tell me. Here is the code:
// Complete the climbingLeaderboard function below.
vector<int> climbingLeaderboard(vector<int> scores, vector<int> alice) {
int j = 1, i = 1;
// this is to remove duplicates from the scores vector
for(i =1; i < scores.size(); i++){
if(scores[i] != scores[i-1]){
scores[j++] = scores[i];
}
}
int size = scores.size();
for(i = 0; i < size-j; i++){
scores.pop_back();
}
vector<int> ranks;
i = 0;
j = scores.size()-1;
while(i < alice.size()){
if(j < 0){
ranks.push_back(1);
i++;
continue;
}
if(alice[i] < scores[j]){
ranks.push_back(j+2);
i++;
} else if(alice[i] > scores[j]){
j--;
} else {
ranks.push_back(j+1);
i++;
}
}
return ranks;
}
I think this may help you too:
vector is like an array list that resizes itself.
push_back() is inserting at the end of the vector.
pop_back() is removing from the end of the vector.
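For reference, a rough C# translation of the same two-pointer idea might look like this (an untested sketch; the List-based signature follows the newer HackerRank template, so adjust it to your own):
static List<int> climbingLeaderboard(List<int> ranked, List<int> player)
{
    // Collapse duplicate scores so that index + 1 is the dense rank.
    var distinct = ranked.Distinct().ToList();
    var result = new List<int>();
    int j = distinct.Count - 1;       // points at the lowest remaining score
    foreach (var score in player)     // player scores are ascending
    {
        while (j >= 0 && score >= distinct[j])
            j--;                      // discard every score the player has reached or beaten
        result.Add(j + 2);            // j + 1 scores are still above, so the rank is j + 2
    }
    return result;
}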
Here is my solution in C#:
public static List<int> climbingLeaderboard(List<int> ranked, List<int> player)
{
List<int> result = new List<int>();
ranked = ranked.Distinct().ToList();
var pLength = player.Count;
var rLength = ranked.Count-1;
int j = rLength;
for (int i = 0; i < pLength; i++)
{
for (; j >= 0; j--)
{
if (player[i] == ranked[j])
{
result.Add(j + 1);
break;
}
else if(player[i] < ranked[j])
{
result.Add(j + 2);
break;
}
else if(player[i] > ranked[j]&&j==0)
{
result.Add(1);
break;
}
}
}
return result;
}
Here is a solution that utilizes BinarySearch. This method returns the index of the searched number in the array, or if the number is not found then it returns a negative number that is the bitwise complement of the index of the next element in the array. Binary search only works in sorted arrays.
public static int[] GetRanks(long[] scores, long[] person)
{
var defaultComparer = Comparer<long>.Default;
var reverseComparer = Comparer<long>.Create((x, y) => -defaultComparer.Compare(x, y));
var distinctOrderedScores = scores.Distinct().OrderBy(i => i, reverseComparer).ToArray();
return person
.Select(i => Array.BinarySearch(distinctOrderedScores, i, reverseComparer))
.Select(pos => (pos >= 0 ? pos : ~pos) + 1)
.ToArray();
}
Usage example:
var scores = new long[] { 100, 100, 50, 40, 40, 20, 10 };
var alice = new long[] { 5, 25, 50, 120 };
var ranks = GetRanks(scores, alice);
Console.WriteLine($"Ranks: {String.Join(", ", ranks)}");
Output:
Ranks: 6, 4, 2, 1
I was bored, so I gave this a go with LINQ and commented it heavily for you.
Given
public static IEnumerable<int> GetRanks(long[] scores, long[] person)
// Convert scores to a tuple
=> scores.Select(s => (scores: s, isPerson: false))
// convert persons score to a tuple and concat
.Concat(person.Select(s => (scores: s, isPerson: true)))
// Group by scores
.GroupBy(x => x.scores)
// order by score
.OrderBy(x => x.Key)
// select into an indexable tuple so we know everyone's rank
.Select((groups, i) => (rank: i, groups))
// Filter the person
.Where(x => x.groups.Any(y => y.isPerson))
// select the rank
.Select(x => x.rank);
Usage
static void Main(string[] args)
{
var scores = new long[]{1, 34, 565, 43, 44, 56, 67};
var alice = new long[]{578, 40, 50, 67, 6};
var ranks = GetRanks(scores, alice);
foreach (var rank in ranks)
Console.WriteLine(rank);
}
Output
1
3
6
8
10
Based on the given constraint brute-force solution will not be efficient for the problem.
you have to optimize your code and the key part here is to look up for exact place which can be effectively done by using binary search.
Here is the solution using binary search:-
static int[] climbingLeaderboard(int[] scores, int[] alice) {
int n = scores.length;
int m = alice.length;
int res[] = new int[m];
int[] rank = new int[n];
rank[0] = 1;
for (int i = 1; i < n; i++) {
if (scores[i] == scores[i - 1]) {
rank[i] = rank[i - 1];
} else {
rank[i] = rank[i - 1] + 1;
}
}
for (int i = 0; i < m; i++) {
int aliceScore = alice[i];
if (aliceScore > scores[0]) {
res[i] = 1;
} else if (aliceScore < scores[n - 1]) {
res[i] = rank[n - 1] + 1;
} else {
int index = binarySearch(scores, aliceScore);
res[i] = rank[index];
}
}
return res;
}
private static int binarySearch(int[] a, int key) {
int lo = 0;
int hi = a.length - 1;
while (lo <= hi) {
int mid = lo + (hi - lo) / 2;
if (a[mid] == key) {
return mid;
} else if (a[mid] < key && key < a[mid - 1]) {
return mid;
} else if (a[mid] > key && key >= a[mid + 1]) {
return mid + 1;
} else if (a[mid] < key) {
hi = mid - 1;
} else if (a[mid] > key) {
lo = mid + 1;
}
}
return -1;
}
You can refer to this link for a more detailed video explanation.
static int[] climbingLeaderboard(int[] scores, int[] alice) {
int[] uniqueScores = IntStream.of(scores).distinct().toArray();
int [] rank = new int [alice.length];
int startIndex=0;
for(int j=alice.length-1; j>=0;j--) {
for(int i=startIndex; i<=uniqueScores.length-1;i++) {
if (alice[j]<uniqueScores[uniqueScores.length-1]){
rank[j]=uniqueScores.length+1;
break;
}
else if(alice[j]>=uniqueScores[i]) {
rank[j]=i+1;
startIndex=i;
break;
}
else{continue;}
}
}
return rank;
}
My solution in JavaScript for the Climbing the Leaderboard HackerRank problem. The time complexity can be O(i + j), where i is the length of scores and j is the length of alice. The space complexity is O(1).
// Complete the climbingLeaderboard function below.
function climbingLeaderboard(scores, alice) {
const ans = [];
let count = 0;
// the alice array is arranged in ascending order
let j = alice.length - 1;
for (let i = 0 ; i < scores.length ; i++) {
const score = scores[i];
for (; j >= 0 ; j--) {
if (alice[j] >= score) {
// if higher than score
ans.unshift(count+1);
} else if (i === scores.length - 1) {
// if smallest
ans.unshift(count+2);
} else {
break;
}
}
// actual rank of the score in leaderboard
if (score !== scores[i-1]) {
count++;
}
}
return ans;
}
Here is my solution
List<int> distinct = null;
List<int> rank = new List<int>();
foreach (int item in player)
{
ranked.Add(item);
ranked.Sort();
ranked.Reverse();
distinct = ranked.Distinct().ToList();
for (int i = 0; i < distinct.Count; i++)
{
if (item == distinct[i])
{
rank.Add(i + 1);
break;
}
}
}
return rank;
This can also be modified to remove the inner for loop:
List<int> distinct = null;
List<int> rank = new List<int>();
foreach (int item in player)
{
ranked.Add(item);
ranked.Sort();
ranked.Reverse();
distinct = ranked.Distinct().ToList();
var index = distinct.FindIndex(x => x == item); // search the de-duplicated list so the index matches the rank
rank.Add(index + 1);
}
return rank;
This is my solution in C# for HackerRank's Climbing the Leaderboard, based on the C++ answer here.
public static List<int> climbingLeaderboard(List<int> ranked, List<int> player)
{
List<int> _ranked = new List<int>();
_ranked.Add(ranked[0]);
for(int a=1; a < ranked.Count(); a++)
if(_ranked[_ranked.Count()-1] != ranked[a])
_ranked.Add(ranked[a]);
int j = _ranked.Count()-1;
int i = 0;
while(i < player.Count())
{
if(j < 0)
{
player[i] = 1;
i++;
continue;
}
if(player[i] < _ranked[j])
{
player[i] = j+2;
i++;
}
else
if(player[i] == _ranked[j])
{
player[i] = j+1;
i++;
}
else
{
j--;
}
}
return player;
}
My solution in Java for the Climbing the Leaderboard HackerRank problem.
// Complete the climbingLeaderboard function below.
static int[] climbingLeaderboard(int[] scores, int[] alice) {
Arrays.sort(scores);
HashSet<Integer> set = new HashSet<Integer>();
int[] ar = new int[alice.length];
int sc = 0;
for(int i=0; i<alice.length; i++){
sc = 1;
set.clear();
for(int j=0; j<scores.length; j++){
if(alice[i] < scores[j] && !set.contains(scores[j])){
sc++;
set.add(scores[j]);
}
}
ar[i] = sc;
}return ar;
}
What I mean is that if I have a set like
{ 1, 2, 3 }
then all the subsets are
{ },
{ 1 },
{ 2 },
{ 3 },
{ 1, 2 },
{ 2, 3 },
{ 1, 3 },
{ 1, 2, 3 }
which is 2^3 of them because the set has size 3. Any solution I think of requires "remembering" the previous subsets up to size n - 1 where n is the length of the set. For example, I have a solution I wrote that looks like
public static IEnumerable<IEnumerable<T>> AllSubcollections<T>(this IEnumerable<T> source)
{
// The empty set is always a subcollection
yield return new List<T>() { };
T[] arr = source.ToArray();
IEnumerable<int> indices = Enumerable.Range(0, arr.Length);
var last = new List<HashSet<int>>(new List<HashSet<int>>() { new HashSet<int>() });
for(int k = 1; k < arr.Length; ++k)
{
var next = new List<HashSet<int>>(new List<HashSet<int>>());
foreach(HashSet<int> hs in last)
{
foreach(int index in indices)
{
if(!hs.Contains(index))
{
var addition = hs.Concat(new List<int> { index });
yield return addition.Select(i => arr[i]);
next.Add(new HashSet<int>(addition));
}
}
}
}
}
Notice how that assumes that source can fit in an array, and assumes that a HashSet can hold the previous subcollections.
Given that IEnumerable<T> can yield an arbitrary number of results (even an infinite amount), is it possible to write a solution that does this for the subset problem?
You point out the flaw yourself in your question: we can have IEnumerables that never terminate.
In fact this is still possible if you carefully write your set enumerator so that it returns another IEnumerable. It won't terminate if the source doesn't, but that's not that bad a problem: enumerating all subsets of a billion rows would run too long anyway, so you'll probably never hit that OutOfMemoryException.
public static IEnumerable<IReadOnlyList<T>> AllSubcollections<T>(this IEnumerable<T> source)
{
    // The empty set is always a subcollection
    yield return new List<T>();
    List<T> priors = new List<T>();
    List<bool> includes = new List<bool>(); // Need a bitvector to get past 64 elements.
    // IEnumerable<T> has no MoveNext itself; walk it through its enumerator.
    using (var enumerator = source.GetEnumerator())
    {
        while (enumerator.MoveNext())
        {
            for (int i = 0; i < includes.Count; ++i)
                includes[i] = false;
            // Always include the newest item in every set yielded this round. This avoids dupes.
            priors.Add(enumerator.Current);
            includes.Add(true);
            bool breakout;
            do
            {
                List<T> set = new List<T>();
                for (int i = 0; i < priors.Count; ++i)
                    if (includes[i])
                        set.Add(priors[i]);
                yield return set;
                // Bitvector increment over the flags of the older elements
                breakout = true;
                for (int i = 0; i < priors.Count; ++i)
                {
                    if (!includes[i])
                    {
                        for (int j = 0; j < i; ++j)
                            includes[j] = false;
                        includes[i] = true;
                        breakout = false;
                        break;
                    }
                }
            } while (!breakout);
        }
    }
}
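A quick way to try it (hypothetical usage, assuming the method sits in a static class so it works as an extension):
foreach (var subset in new[] { 1, 2, 3 }.AllSubcollections())
    Console.WriteLine("{ " + string.Join(", ", subset) + " }");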
I'm creating a program which needs to check every possible permutation. Let's say we have 1,2,3: the program works just fine and shows me all the possible ones: 1,3,2 2,1,3 2,3,1 3,1,2 and 3,2,1. However, I also want it to be able to try combinations such as
1,1,1
2,2,2
1,2,2
3,3,2
i.e. include absolutely every possible combination. Here's my code:
public static bool NextPermutation<T>(T[] elements) where T : IComparable<T>
{
// More efficient to have a variable instead of accessing a property
var count = elements.Length;
// Indicates whether this is the last lexicographic permutation
var done = true;
// Go through the array from last to first
for (var i = count - 1; i > 0; i--)
{
var curr = elements[i];
// Check if the current element is less than the one before it
if (curr.CompareTo(elements[i - 1]) < 0)
{
continue;
}
// An element bigger than the one before it has been found,
// so this isn't the last lexicographic permutation.
done = false;
// Save the previous (bigger) element in a variable for more efficiency.
var prev = elements[i - 1];
// Have a variable to hold the index of the element to swap
// with the previous element (the to-swap element would be
// the smallest element that comes after the previous element
// and is bigger than the previous element), initializing it
// as the current index of the current item (curr).
var currIndex = i;
// Go through the array from the element after the current one to last
for (var j = i + 1; j < count; j++)
{
// Save into variable for more efficiency
var tmp = elements[j];
// Check if tmp suits the "next swap" conditions:
// Smallest, but bigger than the "prev" element
if (tmp.CompareTo(curr) < 0 && tmp.CompareTo(prev) > 0)
{
curr = tmp;
currIndex = j;
}
}
// Swap the "prev" with the new "curr" (the swap-with element)
elements[currIndex] = prev;
elements[i - 1] = curr;
// Reverse the order of the tail, in order to reset its lexicographic order
for (var j = count - 1; j > i; j--, i++)
{
var tmp = elements[j];
elements[j] = elements[i];
elements[i] = tmp;
}
// Break since we have got the next permutation
// The reason to have all the logic inside the loop is
// to prevent the need of an extra variable indicating "i" when
// the next needed swap is found (moving "i" outside the loop is a
// bad practice, and isn't very readable, so I preferred not doing
// that as well).
break;
}
// Return whether this has been the last lexicographic permutation.
return done;
}
This is a simple example of how I use it:
var arr = new[] {0, 1, 2,};
var conditions = new[] {true, false, true};
int count = 0;
while (!NextPermutation(arr))
{
List<bool> tempConditions = new List<bool>();
for (int i = 0; i < arr.Length; i++)
{
tempConditions.Add(conditions[arr[i]]);
Console.Write(tempConditions[i] + " ");
}
count++;
Console.WriteLine();
}
Console.WriteLine("count : {0}", count);
You can do this with a method that returns IEnumerable<IEnumerable<T>> like so:
using System;
using System.Collections.Generic;
using System.Linq;
namespace Demo
{
public static class Program
{
public static void Main()
{
string[] test = {"A", "B", "C", "D"};
foreach (var perm in PermuteWithRepeats(test))
Console.WriteLine(string.Join(" ", perm));
}
public static IEnumerable<IEnumerable<T>> PermuteWithRepeats<T>(IEnumerable<T> sequence)
{
return permuteWithRepeats(sequence, sequence.Count());
}
private static IEnumerable<IEnumerable<T>> permuteWithRepeats<T>(IEnumerable<T> sequence, int count)
{
if (count == 0)
{
yield return Enumerable.Empty<T>();
}
else
{
foreach (T startingElement in sequence)
{
IEnumerable<T> remainingItems = sequence;
foreach (IEnumerable<T> permutationOfRemainder in permuteWithRepeats(remainingItems, count - 1))
yield return new[]{startingElement}.Concat(permutationOfRemainder);
}
}
}
}
}
1,1,2 and 2,2,2 and such aren't permutations; they are variations (with repetition). There will be count^count of them.
You can generate them like this:
// you can do precise powering if needed
double number_of_variations = Math.Pow(count, count);
T[] result = new T[count];
for (int i = 0; i < number_of_variations; ++i) {
int x = i;
for (int j = 0; j < count; ++j) {
result[j] = elements[x % count];
x /= count;
}
// do something with one of results
}
I have this array of integers:-
int[] numbers = new int[] { 10, 20, 30, 40 };
I am trying to create an array which will have the first element, the last element, the second element, the second-to-last element, and so on.
So, my resulting output will be:
int[] result = {10,40,20,30};
This was my approach: in one loop, start from the first element and go to the middle; in a second loop, start from the last element and go back to the middle, selecting items accordingly. But I totally messed it up. Here is my attempted code:
private static IEnumerable<int> OrderedArray(int[] numbers)
{
bool takeFirst = true;
if (takeFirst)
{
takeFirst = false;
for (int i = 0; i < numbers.Length / 2; i++)
{
yield return numbers[i];
}
}
else
{
takeFirst = true;
for (int j = numbers.Length; j < numbers.Length / 2; j--)
{
yield return numbers[j];
}
}
}
Need Help.
You might try this:
int[] result = numbers.Zip(numbers.Reverse(), (n1,n2) => new[] {n1, n2})
.SelectMany(x =>x)
.Take(numbers.Length)
.ToArray();
Explanation: This approach basically pairs up the elements of the original collection with the elements of its reverse ordered collection (using Zip). So you get a collection of pairs like [first, last], [second, second from last], etc.
It then flattens those collection of pairs into a single collection (using SelectMany). So the collection becomes [first, last, second, second from last,...].
Finally, we limit the number of elements to the length of the original array (n). Since we are iterating through twice as many elements (normal and reverse), it works out that iterating through n elements allow us to stop in the middle of the collection.
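Concretely, for the sample array the stages look like this:
numbers           : 10, 20, 30, 40
numbers.Reverse() : 40, 30, 20, 10
Zip (pairs)       : [10, 40] [20, 30] [30, 20] [40, 10]
SelectMany        : 10, 40, 20, 30, 30, 20, 40, 10
Take(4)           : 10, 40, 20, 30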
As a different approach, this is a modification on your existing method:
private static IEnumerable<int> OrderedArray(int[] numbers)
{
var count = (numbers.Length + 1) / 2;
for (int i = 0; i < count; i++)
{
yield return numbers[i];
int reverseIdx = numbers.Length - 1 - i;
if(i != reverseIdx)
yield return numbers[reverseIdx];
}
}
ok,
public static class Extensions
{
public static IEnumerable<T> EndToEnd<T>(this IReadOnlyList<T> source)
{
var length = source.Count;
var limit = length / 2;
for (var i = 0; i < limit; i++)
{
yield return source[i];
yield return source[length - i - 1];
}
if (length % 2 > 0)
{
yield return source[limit];
}
}
}
Which you could use like this,
var result = numbers.EndToEnd().ToArray();
more optimally,
public static class Extensions
{
public static IEnumerable<T> EndToEnd<T>(this IReadOnlyList<T> source)
{
var c = source.Count;
for (int i = 0, f = 0, l = c - 1; i < c; i++, f++, l--)
{
yield return source[f];
if (++i == c)
{
break;
}
yield return source[l];
}
}
}
no divide or modulus required.
With a simple for loop:
int len = numbers.Length;
int[] result = new int[len];
for (int i = 0, f = 0, l = len - 1; i < len; f++, l--)
{
result[i++] = numbers[f];
if (f != l)
result[i++] = numbers[l];
}
Based on Selman22's now deleted answer:
int[] numbers = new int[] { 10, 20, 30, 40 };
int[] result = numbers
.Select((x,idx) => idx % 2 == 0
? numbers[idx/2]
: numbers[numbers.Length - 1 -idx/2])
.ToArray();
result.Dump();
(The last line is LinqPad's way of outputting the results)
Or in less LINQy form as suggested by Jeppe Stig Nielsen
var result = new int[numbers.Length];
for (var idx = 0; idx < result.Length; idx++) {
result[idx] = idx % 2 == 0 ? numbers[idx/2] : numbers[numbers.Length - 1 -idx/2];
}
The principle is that you have two sequences, one for even elements (in the result) and one for odd. The even numbers count the first half of the array and the odds count the second half from the back.
The only modification to Selman's code is adding the /2 to the indexes to keep it counting one by one in the right half while the output index (which is what idx basically is in this case) counts on.
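Tracing the formula for the sample array (length 4):
idx = 0 -> even -> numbers[0] = 10
idx = 1 -> odd  -> numbers[4 - 1 - 0] = numbers[3] = 40
idx = 2 -> even -> numbers[1] = 20
idx = 3 -> odd  -> numbers[4 - 1 - 1] = numbers[2] = 30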
Came up with this
static void Main(string[] args)
{
List<int> numbers = new List<int>() { 10, 20, 30, 40, 50, 60, 70};
List<int> numbers2 = new List<int>();
int counter1 = 0;
int counter2 = numbers.Count - 1;
int remainder = numbers.Count % 2 == 0 ? 1: 0;
while (counter1-1 < counter2)
{
if ((counter1 + counter2) % 2 == remainder) // parity decides which end to take next
{
numbers2.Add(numbers[counter1]);
counter1++;
}
else
{
numbers2.Add(numbers[counter2]);
counter2--;
}
}
string s = "";
for(int a = 0; a< numbers2.Count;a++)
s+=numbers2[a] + " ";
Console.Write(s);
Console.ReadLine();
}
This late answer steals a lot from the existing answers!
The idea is to allocate the entire result array at once (since its length is known). Then fill out all even-indexed members first, from one end of source. And finally fill out odd-numbered entries from the back end of source.
public static TElement[] EndToEnd<TElement>(this IReadOnlyList<TElement> source)
{
var count = source.Count;
var result = new TElement[count];
for (var i = 0; i < (count + 1) / 2; i++)
result[2 * i] = source[i];
for (var i = 1; i <= count / 2; i++)
result[2 * i - 1] = source[count - i];
return result;
}
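Usage is the same shape as the earlier extension methods, assuming this method also lives in a static class:
int[] result = numbers.EndToEnd();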
Came up with this
public int[] OrderedArray(int[] numbers)
{
int[] final = new int[numbers.Length];
var limit=numbers.Length;
int last = numbers.Length - 1;
var finalCounter = 0;
for (int i = 0; finalCounter < numbers.Length; i++)
{
final[finalCounter] = numbers[i];
final[((finalCounter + 1) >= limit ? limit - 1 : (finalCounter + 1))] = numbers[last];
finalCounter += 2;
last--;
}
return final;
}