I have a list and my goal is to determine how many times the values in that list go above a certain value.
For instance if my list is:
List = {0, 0, 3, 3, 4, 0, 4, 4, 4}
I'd like to know that there were two instances where the values in the list went above 2 and stayed above 2. So in this case there were 2 instances, since the values dropped below 2 at one point and went above it again.
private void Report_GeneratorButton_Click(object sender, EventArgs e)
{
//Lists
var current = _CanDataGraph._DataPoints[CanDataGraph.CurveTag.Current].ToList();
var SOC = _CanDataGraph._DataPoints[CanDataGraph.CurveTag.Soc].ToList();
var highcell = _CanDataGraph._DataPoints[CanDataGraph.CurveTag.HighestCell].ToList();
var lowcell = _CanDataGraph._DataPoints[CanDataGraph.CurveTag.LowestCell].ToList();
//Separates current list into charging, discharging, and idle
List<double> charging = current.FindAll(i => i > 2);
List<double> discharging = current.FindAll(i => i < -2);
List<double> idle = current.FindAll(i => i < 2 && i > -2);
//High cell
List<double> overcharged = highcell.FindAll(i => i > 3.65);
int ov = overcharged.Count;
if (ov > 1)
{
Console.WriteLine("This Battery has gone over Voltage!");
}
else
{
Console.WriteLine("This battery has never been over Voltage.");
}
//Low cell
List<double> overdischarged = lowcell.FindAll(i => i > 3.65);
int lv = overdischarged.Count;
if (lv > 1)
{
Console.WriteLine("This Battery has been overdischarged!");
}
else
{
Console.WriteLine("This battery has never been overdischarged.");
}
//Each value is 1 second
int chargetime = charging.Count;
int dischargetime = discharging.Count;
int idletime = idle.Count;
Console.WriteLine("Charge time: " + chargetime + "s" + "\n" + "Discharge time: " + dischargetime + "s" + "\n" + "Idle time: " + idletime);
}
My current code is shown above and it outputs:
This battery has never been over Voltage.
This battery has never been overdischarged.
Charge time: 271s
Discharge time: 0s
Idle time: 68
There are a great many ways to solve this problem; my suggestion is that you break it down into a number of smaller problems and then write a simple method that solves each problem.
Here's a simpler problem: given a sequence of T, give me back a sequence of T with "doubled" items removed:
public static IEnumerable<T> RemoveDoubles<T>(
this IEnumerable<T> items)
{
T previous = default(T);
bool first = true;
foreach(T item in items)
{
if (first || !item.Equals(previous)) yield return item;
previous = item;
first = false;
}
}
Great. How is this helpful? Because the solution to your problem is now:
int count = myList.Select(x => x > 2).RemoveDoubles().Count(x => x);
Follow along.
If you have myList as {0, 0, 3, 3, 4, 0, 4, 4, 4} then the result of the Select is {false, false, true, true, true, false, true, true, true}.
The result of the RemoveDoubles is {false, true, false, true}.
The result of the Count is 2, which is the desired result.
Try to use off-the-shelf parts when you can. If you cannot, try to solve a simple, general problem that gets you what you need; now you have a tool you can use for other tasks that require you to remove adjacent duplicates in a sequence.
This solution should achieve the desired result.
List<int> lsNums = new List<int>() {0, 0, 3, 3, 4, 0, 4, 4, 4};
public void MainFoo(){
int iChange = GetCriticalChangeNum(lsNums, 2);
Console.WriteLine("Critical change = {0}", iChange);
}
public int GetCriticalChangeNum(List<int> lisNum, int iCriticalThreshold) {
int iCriticalChange = 0;
int iPrev = 0;
lisNum.ForEach( (int ele) => {
if(iPrev <= iCriticalThreshold && ele > iCriticalThreshold){
iCriticalChange++;
}
iPrev = ele;
});
return iCriticalChange;
}
You can create an extension method as shown below.
public static class ListExtensions
{
public static int InstanceCount(this List<double> list, Predicate<double> predicate)
{
int instanceCount = 0;
bool instanceOccurring = false;
foreach (var item in list)
{
if (predicate(item))
{
if (!instanceOccurring)
{
instanceCount++;
instanceOccurring = true;
}
}
else
{
instanceOccurring = false;
}
}
return instanceCount;
}
}
And use your newly created method like this:
current.InstanceCount(p => p > 2)
public static int CountOverLimit(IEnumerable<double> items, double limit)
{
int overLimitCount = 0;
bool isOverLimit = false;
foreach (double item in items)
{
if (item > limit)
{
if (!isOverLimit)
{
overLimitCount++;
isOverLimit = true;
}
}
else if (isOverLimit)
{
isOverLimit = false;
}
}
return overLimitCount;
}
Here's a fairly concise and readable solution. Hopefully this helps. If the limit is variable, just put it in a function and take the list and the limit as parameters.
int [] array = new int [9]{0, 0, 3, 3, 4, 0, 4, 4, 4};
List<int> values = array.ToList();
int overCount = 0;
bool currentlyOver2 = false;
for (int i = 0; i < values.Count; i++)
{
if (values[i] > 2)
{
if (!currentlyOver2)
overCount++;
currentlyOver2 = true;
}
else
currentlyOver2 = false;
}
Another way to do this using System.Linq is to walk through the list, selecting both the item itself and its index, and return true for each item where the item is greater than the value and the previous item is less than or equal to the value, then count the true results. Of course there's a special case for index 0, where we don't check the previous item:
public static int GetSpikeCount(List<int> items, int threshold)
{
return items?
.Select((item, index) =>
index == 0
? item > threshold
: item > threshold && items[index - 1] <= threshold)
.Count(x => x == true) // '== true' is here for readability, but it's not necessary
?? 0; // return '0' if 'items' is null
}
Sample usage:
private static void Main()
{
var myList = new List<int> {0, 0, 3, 3, 4, 0, 4, 4, 4};
var count = GetSpikeCount(myList, 2);
// count == 2
}
Let us say we have a List<int> with content like [0,0,0,0,1,1,1,1,0,0,0,1,2,2,0,0,2,2] and we want to have the index of the nth number that is not zero.
For example, GetNthNotZero(3) should return 6.
It would be easy with a for loop, but I feel there should be a LINQ way to accomplish it. Is that possible with a LINQ statement?
There isn't an out-of-the-box method, but have you considered writing your own extension method to provide something similar to List<T>.FindIndex()?
class Program
{
static void Main(string[] args)
{
var list = new List<int>{ 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 2, 2, 0, 0, 2, 2 };
var index = list.FindNthIndex(x => x > 0, 3);
}
}
public static class IEnumerableExtensions
{
public static int FindNthIndex<T>(this IEnumerable<T> enumerable, Predicate<T> match, int count)
{
var index = 0;
foreach (var item in enumerable)
{
if (match.Invoke(item))
count--;
if (count == 0)
return index;
index++;
}
return -1;
}
}
Actually, you can do that with standard LINQ; you can use:
List<int> sequence = new List<int>{0,0,0,0,1,1,1,1,0,0,0,1,2,2,0,0,2,2};
int index = sequence.Select((x, ix) => (Item:x, Index:ix))
.Where(x => x.Item != 0)
.Skip(2) // you want the 3rd, so skip 2
.Select(x => x.Index)
.DefaultIfEmpty(-1) // if there is no third matching condition you get -1
.First(); // result: 6
This is certainly possible, but the Linq approach will make it much more complicated. This is one of those cases where an explicit loop is much better.
Two significant complications arising from using Linq are:
Handling an empty sequence or a sequence with no zeros.
Synthesizing an index to use.
A Linq solution might look like this (but note that there are probably many different possible approaches using Linq):
using System;
using System.Collections.Generic;
using System.Linq;
public static class Program
{
public static void Main()
{
var ints = new List<int> { 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 2, 2, 0, 0, 2, 2 };
Console.WriteLine(IndexOfNthNotZero(ints, 3)); // 6
Console.WriteLine(IndexOfNthNotZero(Enumerable.Repeat(0, 10), 3)); // -1
Console.WriteLine(IndexOfNthNotZero(ints, 100)); // -1
Console.WriteLine(IndexOfNthNotZero(Array.Empty<int>(), 0)); // -1
}
public static int IndexOfNthNotZero(IEnumerable<int> sequence, int n)
{
return sequence
.Select((v, i) => (value:v, index:i)) // Synthesize the value and index.
.Where(item => item.value != 0) // Choose only the non-zero value.
.Skip(n-1) // Skip to the nth value.
.FirstOrDefault((value:0, index:-1)).index; // Handle missing data by supplying a default index of -1.
}
}
Note that this implementation returns -1 to indicate that a suitable value was not found.
Compare that with a simple loop implementation and I think you'll agree it's better to use a simple loop!
public static int IndexOfNthNotZero(IReadOnlyList<int> sequence, int n)
{
for (int i = 0; i < sequence.Count; ++i)
if (sequence[i] != 0 && --n == 0) // If element matches, decrement n and return index if it reaches 0.
return i;
return -1;
}
Or alternatively if you prefer (avoiding predecrement):
public static int IndexOfNthNotZero(IReadOnlyList<int> sequence, int n)
{
for (int i = 0, numberOfMatches = 0; i < sequence.Count; ++i)
{
if (sequence[i] != 0) // If condition matches
if (++numberOfMatches == n) // Increment number of matches, and if it reaches n
return i; // then return the current index
}
return -1;
}
So there's this blog that gives "Five programming problems every Software Engineer should be able to solve in less than 1 hour" and I'm just revisiting some of the concepts.
The first question reads
Write three functions that compute the sum of the numbers in a given list using a for-loop, a while-loop, and recursion.
Obviously the for- and while-loops are easy, but I started out with
int[] l = { 1, 2, 3, 4, 5, 6, 7, 8, 9};
Is it at all possible to pop an item off the list and then pass the shortened list every time?
An attempt I saw in python:
numbers = [1,2,3,4,5,6,7,8,9]
def recurse_count(lst):
    if len(lst) == 1:
        return lst[0]
    else:
        i = len(lst) - 1
        subtotal = lst[i] + lst[i - 1]
        lst.pop()  # into the void with you
        lst[-1] = subtotal
        return recurse_count(lst)
Would it be possible with an int[] in C#?
A very elegant solution would be:
static public int sumThisUp(IEnumerable<int> list)
{
return list.FirstOrDefault() + (list.Any() ? sumThisUp(list.Skip(1)) : 0);
}
Yes. I do believe the List class has a simple RemoveAt(int) method. A recursive method would look like this:
public int sumThisUp(List<int> list) {
int result = list[0];
list.RemoveAt(0);
return (list.Count > 0) ? result + sumThisUp(list) : result;
}
Alternatively, if you don't want to edit the original list, this would do:
public int sumThisUp2(List<int> list, int index = 0) {
int result = list[index++];
return (list.Count > index) ? result + sumThisUp2(list, index) : result;
}
Yes, it is possible in C#.
But first I want to introduce a trick: instead of modifying the source list, we can just pass the start index. It will be much faster:
private static int Sum(int[] array, int startIndex)
{
if (startIndex >= array.Length)
{
return 0;
}
return array[startIndex] + Sum(array, startIndex + 1);
}
static void Main(string[] args)
{
int[] array = new int[] { 1, 2, 3, 4 };
int result = Sum(array, 0);
Console.WriteLine(result);
}
This should do it:
public int Sum(int[] numbers, int startAt = 0)
{
if (startAt == numbers.Length)
return 0;
return numbers[startAt] + Sum(numbers, startAt + 1);
}
Length = input long (can be 2550, 2880, 2568, etc.)
List<long> = {618, 350, 308, 300, 250, 232, 200, 128}
The program takes a long value; for that particular value we have to find the possible combinations from the above list which, when added, give the input result (the same value can be used twice). There can be a difference of +/- 30.
The largest numbers have to be used the most.
Ex: Length = 868
For this, the combinations can be
Combination 1 = 618 + 250
Combination 2 = 308 + 232 + 200 + 128
The correct combination would be Combination 1,
but the other combinations should also be produced.
public static void Main(string[] args)
{
//subtotal list
List<int> totals = new List<int>(new int[] { 618, 350, 308, 300, 250, 232, 200, 128 });
// get matches
List<int[]> results = KnapSack.MatchTotal(2682, totals);
// print results
foreach (var result in results)
{
Console.WriteLine(string.Join(",", result));
}
Console.WriteLine("Done.");
}
internal static List<int[]> MatchTotal(int theTotal, List<int> subTotals)
{
List<int[]> results = new List<int[]>();
while (subTotals.Contains(theTotal))
{
results.Add(new int[1] { theTotal });
subTotals.Remove(theTotal);
}
if (subTotals.Count == 0)
return results;
subTotals.Sort();
double mostNegativeNumber = subTotals[0];
if (mostNegativeNumber > 0)
mostNegativeNumber = 0;
if (mostNegativeNumber == 0)
subTotals.RemoveAll(d => d > theTotal);
for (int choose = 0; choose <= subTotals.Count; choose++)
{
IEnumerable<IEnumerable<int>> combos = Combination.Combinations(subTotals.AsEnumerable(), choose);
results.AddRange(from combo in combos where combo.Sum() == theTotal select combo.ToArray());
}
return results;
}
public static class Combination
{
public static IEnumerable<IEnumerable<T>> Combinations<T>(this IEnumerable<T> elements, int choose)
{
return choose == 0 ?
new[] { new T[0] } :
elements.SelectMany((element, i) =>
elements.Skip(i + 1).Combinations(choose - 1).Select(combo => (new[] { element }).Concat(combo)));
}
}
I have used the above code; can it be simplified further? Again, here I also only get unique values. A value can be used any number of times, but the largest number has to be given the most priority.
I have a validation to check whether the total of the sum is greater than the input value; the logic fails even there.
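To make the "largest first, reuse allowed, +/- 30" requirement concrete, here is a hypothetical greedy sketch (it only illustrates the stated preference, not a full search; greedy can miss valid combinations, and the method and variable names are made up):
// Illustrative greedy sketch: always take the largest value that still fits
// within the +/- 30 tolerance, and allow the same value to be reused.
// Assumes all values are positive; requires System, System.Collections.Generic, System.Linq.
static List<long> GreedyLargestFirst(List<long> values, long target, long tolerance = 30)
{
    var sorted = values.OrderByDescending(v => v).ToList();
    var picked = new List<long>();
    long remaining = target;
    while (Math.Abs(remaining) > tolerance)
    {
        // Largest value that does not overshoot the target by more than the tolerance.
        long next = sorted.FirstOrDefault(v => remaining - v >= -tolerance);
        if (next == 0)
            return null; // nothing fits; a full search (as in the answers below) is needed
        picked.Add(next);
        remaining -= next;
    }
    return picked;
}
// Example: GreedyLargestFirst(new List<long> { 618, 350, 308, 300, 250, 232, 200, 128 }, 868)
// returns { 618, 250 }, i.e. "Combination 1" above.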
The algorithm you have shown assumes that the list is sorted in ascending order. If it is not, then you will first have to sort the list in O(n log n) time and then run the algorithm.
Also, it assumes that you are only considering combinations of pairs and you exit on the first match.
If you want to find all combinations, then instead of "break", just output the combination and increment startIndex or decrement endIndex.
Moreover, you should check for ranges (targetSum - 30 to targetSum + 30) rather than just the exact value because the problem says that a margin of error is allowed.
In my opinion this is the best solution, because its complexity is O(n log n + n) including the sorting.
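A sketch of that idea over an ascending sorted list, with the range check and without breaking on the first match, could look like this (the names are illustrative, not taken from the original code):
// Two-pointer pair search with a +/- tolerance window (illustrative sketch).
// Assumes 'values' is sorted ascending.
static void PrintPairsWithinTolerance(List<long> values, long targetSum, long tolerance)
{
    int startIndex = 0;
    int endIndex = values.Count - 1;
    while (startIndex < endIndex)
    {
        long sum = values[startIndex] + values[endIndex];
        if (sum < targetSum - tolerance)
            startIndex++;      // sum too small: move the lower pointer up
        else if (sum > targetSum + tolerance)
            endIndex--;        // sum too large: move the upper pointer down
        else
        {
            // Within range: output the combination, then keep searching instead of breaking.
            Console.WriteLine("{0} + {1} = {2}", values[startIndex], values[endIndex], sum);
            startIndex++;
        }
    }
}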
V4 - Recursive Method, using Stack structure instead of stack frames on thread
It works (tested in VS), but there could be some bugs remaining.
static int Threshold = 30;
private static Stack<long> RecursiveMethod(long target)
{
Stack<long> Combination = new Stack<long>(establishedValues.Count); //Can grow bigger, as big as (target / min(establishedValues)) values
Stack<int> Index = new Stack<int>(establishedValues.Count); //Can grow bigger
int lowerBound = 0;
int dimensionIndex = lowerBound;
long fail = -1 * Threshold;
while (true)
{
long thisVal = establishedValues[dimensionIndex];
dimensionIndex++;
long afterApplied = target - thisVal;
if (afterApplied < fail)
lowerBound = dimensionIndex;
else
{
target = afterApplied;
Combination.Push(thisVal);
if (target <= Threshold)
return Combination;
Index.Push(dimensionIndex);
dimensionIndex = lowerBound;
}
if (dimensionIndex >= establishedValues.Count)
{
if (Index.Count == 0)
return null; //No possible combinations
dimensionIndex = Index.Pop();
lowerBound = dimensionIndex;
target += Combination.Pop();
}
}
}
Maybe V3 - Suggestion for Ordered solution trying every combination
Although it wasn't chosen as the answer to the related question, I believe this is a good approach: https://stackoverflow.com/a/17258033/887092 (otherwise you could try the chosen answer, although its output only ever sums 2 items, rather than up to n items). It will enumerate every option, including multiples of the same value. V2 works, but would be slightly less efficient than an ordered solution, as the same failing attempt will likely be tried multiple times.
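For reference, a minimal sketch of enumerating ordered combinations with reuse (not the code from the linked answer, just an illustration) is a recursive iterator that never steps backwards in the sorted value list:
// Sketch: enumerate combinations that may reuse the same value, in order,
// by never moving backwards through the (sorted) value list.
static IEnumerable<List<long>> CombinationsWithRepetition(IList<long> values, int maxLength, int startAt = 0)
{
    if (maxLength == 0)
        yield break;
    for (int i = startAt; i < values.Count; i++)
    {
        yield return new List<long> { values[i] };
        foreach (var tail in CombinationsWithRepetition(values, maxLength - 1, i))
        {
            var combo = new List<long> { values[i] };
            combo.AddRange(tail);
            yield return combo;
        }
    }
}
// Each combination can then be summed and compared against the target +/- Threshold.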
V2 - Random Selection - Will be able to reuse the same number twice
I'm a fan of using randomness for "intelligence", allowing the computer to brute-force the solution. It's also easy to distribute, as there is no state dependence between two threads trying at the same time, for example.
static int Threshold = 30;
public static List<long> RandomMethod(long Target)
{
List<long> Combinations = new List<long>();
Random rnd = new Random();
//Assuming establishedValues is sorted
int LowerBound = 0;
long runningSum = Target;
while (true)
{
int newLowerBound = FindLowerBound(LowerBound, runningSum);
if (newLowerBound == -1)
{
//No more beneficial values to work with, reset
runningSum = Target;
Combinations.Clear();
LowerBound = 0;
continue;
}
LowerBound = newLowerBound;
int rIndex = rnd.Next(LowerBound, establishedValues.Count);
long val = establishedValues[rIndex];
runningSum -= val;
Combinations.Add(val);
if (Math.Abs(runningSum) <= 30)
return Combinations;
}
}
static int FindLowerBound(int currentLowerBound, long runningSum)
{
//Adjust lower bound, so we're not randomly trying a number that's too high
for (int i = currentLowerBound; i < establishedValues.Count; i++)
{
//Factor in the threshold, because an end aggregate which exceeds by 20 is better than underperforming by 21.
if ((establishedValues[i] - Threshold) < runningSum)
{
return i;
}
}
return -1;
}
V1 - Ordered selection - Will not be able to reuse the same number twice
Add this very handy extension function (it uses a binary counter as a bit mask to enumerate all combinations):
//Make sure you put this in a static class inside System namespace
public static IEnumerable<List<T>> EachCombination<T>(this List<T> allValues)
{
var collection = new List<List<T>>();
for (int counter = 0; counter < (1 << allValues.Count); ++counter)
{
List<T> combination = new List<T>();
for (int i = 0; i < allValues.Count; ++i)
{
if ((counter & (1 << i)) == 0)
combination.Add(allValues[i]);
}
if (combination.Count == 0)
continue;
yield return combination;
}
}
Use the function
static List<long> establishedValues = new List<long>() {618, 350, 308, 300, 250, 232, 200, 128, 180, 118, 155};
//Return is a list of the values which sum to equal the target. Null if not found.
List<long> FindFirstCombination(long target)
{
foreach (var combination in establishedValues.EachCombination())
{
//if (combination.Sum() == target)
if (Math.Abs(combination.Sum() - target) <= 30) //Plus or minus tolerance for difference
return combination;
}
return null; //Or you could throw an exception
}
Test the solution
var target = 858;
var result = FindFirstCombination(target);
bool success = (result != null && result.Sum() == target);
//TODO: for loop with random selection of numbers from the establishedValues, Sum and test through FindFirstCombination
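One possible reading of that TODO, purely as an illustration (it samples distinct values, because V1 cannot reuse a number):
// Illustrative test loop: build targets from randomly chosen known values,
// then check that FindFirstCombination finds a sum within the +/- 30 tolerance.
var rnd = new Random();
for (int test = 0; test < 100; test++)
{
    var picked = establishedValues.OrderBy(_ => rnd.Next()).Take(rnd.Next(1, 4)).ToList();
    long expected = picked.Sum();
    var found = FindFirstCombination(expected);
    bool ok = found != null && Math.Abs(found.Sum() - expected) <= 30;
    Console.WriteLine("target {0}: {1}", expected, ok ? "found " + string.Join("+", found) : "not found");
}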
What is the fastest way to union 2 sets of sorted values? Speed (big-O) is important here; not clarity - assume this is being done millions of times.
Assume you do not know the type or range of the values, but have an efficient IComparer<T> and/or IEqualityComparer<T>.
Given the following set of numbers:
var la = new int[] { 1, 2, 4, 5, 9 };
var ra = new int[] { 3, 4, 5, 6, 6, 7, 8 };
I am expecting 1, 2, 3, 4, 5, 6, 7, 8, 9. The following stub may be used to test the code:
static void Main(string[] args)
{
var la = new int[] { 1, 2, 4, 5, 9 };
var ra = new int[] { 3, 4, 5, 6, 6, 7, 8 };
foreach (var item in UnionSorted(la, ra, Int32Comparer.Default))
{
Console.Write("{0}, ", item);
}
Console.ReadLine();
}
class Int32Comparer : IComparer<Int32>
{
public static readonly Int32Comparer Default = new Int32Comparer();
public int Compare(int x, int y)
{
if (x < y)
return -1;
else if (x > y)
return 1;
else
return 0;
}
}
static IEnumerable<T> UnionSorted<T>(IEnumerable<T> sortedLeft, IEnumerable<T> sortedRight, IComparer<T> comparer)
{
}
The following method returns the correct results:
static IEnumerable<T> UnionSorted<T>(IEnumerable<T> sortedLeft, IEnumerable<T> sortedRight, IComparer<T> comparer)
{
var first = true;
var continueLeft = true;
var continueRight = true;
T left = default(T);
T right = default(T);
using (var el = sortedLeft.GetEnumerator())
using (var er = sortedRight.GetEnumerator())
{
// Loop until both enumeration are done.
while (continueLeft | continueRight)
{
// Only if both enumerations have values.
if (continueLeft & continueRight)
{
// Seed the enumeration.
if (first)
{
continueLeft = el.MoveNext();
if (continueLeft)
{
left = el.Current;
}
else
{
// left is empty, just dump the right enumerable
while (er.MoveNext())
yield return er.Current;
yield break;
}
continueRight = er.MoveNext();
if (continueRight)
{
right = er.Current;
}
else
{
// right is empty, just dump the left enumerable
if (continueLeft)
{
// there was a value when it was read earlier, let's return it before continuing
do
{
yield return el.Current;
}
while (el.MoveNext());
} // if continueLeft is false, then both enumerable are empty here.
yield break;
}
first = false;
}
// Compare them and decide which to return.
var comp = comparer.Compare(left, right);
if (comp < 0)
{
yield return left;
// We only advance left until they match.
continueLeft = el.MoveNext();
if (continueLeft)
left = el.Current;
}
else if (comp > 0)
{
yield return right;
continueRight = er.MoveNext();
if (continueRight)
right = er.Current;
}
else
{
// The both match, so advance both.
yield return left;
continueLeft = el.MoveNext();
if (continueLeft)
left = el.Current;
continueRight = er.MoveNext();
if (continueRight)
right = er.Current;
}
}
// One of the lists is done, don't advance it.
else if (continueLeft)
{
yield return left;
continueLeft = el.MoveNext();
if (continueLeft)
left = el.Current;
}
else if (continueRight)
{
yield return right;
continueRight = er.MoveNext();
if (continueRight)
right = er.Current;
}
}
}
}
The space is O(1) (about six locals) and the time is O(n + m) (where m is the size of the second set).
This will make your UnionSorted function a little less versatile, but you can make a small improvement by making an assumption about types. If you do the comparison inside the loop itself (rather than calling the Int32Comparer) then that'll save on some function call overhead.
So your UnionSorted declaration becomes this...
static IEnumerable<int> UnionSorted(IEnumerable<int> sortedLeft, IEnumerable<int> sortedRight)
And then you do this inside the loop, getting rid of the call to comparer.Compare()...
//var comp = comparer.Compare(left, right); // too slow
int comp = 0;
if (left < right)
comp = -1;
else if (left > right)
comp = 1;
In my testing this was about 15% faster.
I'm going to give LINQ the benefit of the doubt and say this is probably as fast as you are going to get without writing excessive code:
var result = la.Union(ra);
EDITED:
Thanks, I missed the sorted part.
You could do:
var result = la.Union(ra).OrderBy(i => i);
I would solve the problem this way. (I am making an assumption which lightens the difficulty of this problem significantly, only to illustrate the idea.)
Assumption: All numbers contained in sets are non-negative.
Create a bit field of at least n+1 bits, where n is the largest value you expect. (If the largest value you expect is 12, a 16-bit word will do.)
Iterate through both sets. For each value val, OR the val-th bit with 1.
Once done, count the number of bits set to 1 and create an array of that size.
Go through each bit one by one, adding n to the new array if the n-th bit is set.
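A minimal sketch of that idea, assuming small non-negative values and using a BitArray plus a List instead of pre-counting the bits:
// Sketch of the bit-field union described above (non-negative values only).
static int[] UnionViaBits(int[] left, int[] right)
{
    int max = Math.Max(left.Max(), right.Max()); // assumes both arrays are non-empty
    var bits = new System.Collections.BitArray(max + 1);
    foreach (int v in left) bits[v] = true;   // "OR the val-th bit with 1"
    foreach (int v in right) bits[v] = true;
    var result = new List<int>();
    for (int v = 0; v <= max; v++)
        if (bits[v]) result.Add(v);           // walk the bits in order
    return result.ToArray();
}
// UnionViaBits(la, ra) on the sample input yields 1, 2, 3, 4, 5, 6, 7, 8, 9.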
I have been stumped on this one for a while. I want to take a List and order it such that the Products with the largest Price end up in the middle of the list. I also want to do the opposite, i.e. make sure that the items with the largest price end up on the outer boundaries of the list.
Imagine a data structure like this: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
In the first scenario I need to get back 1,3,5,7,9,10,8,6,4,2
In the second scenario I need to get back 10,8,6,4,2,1,3,5,7,9
The list may have upwards of 250 items, the numbers will not be evenly distributed, and they will not be sequential, and I wanted to minimize copying. The numbers will be contained in Product objects, and not simple primitive integers.
Is there a simple solution that I am not seeing?
Any thoughts?
So for those of you wondering what I am up to, I am ordering items based on calculated font size. Here is the code that I went with...
The Implementation...
private void Reorder()
{
var tempList = new LinkedList<DisplayTag>();
bool even = true;
foreach (var tag in this) {
if (even)
tempList.AddLast(tag);
else
tempList.AddFirst(tag);
even = !even;
}
this.Clear();
this.AddRange(tempList);
}
The Test...
[TestCase(DisplayTagOrder.SmallestToLargest, Result=new[]{10,14,18,22,26,30})]
[TestCase(DisplayTagOrder.LargestToSmallest, Result=new[]{30,26,22,18,14,10})]
[TestCase(DisplayTagOrder.LargestInTheMiddle, Result = new[] { 10, 18, 26, 30, 22, 14 })]
[TestCase(DisplayTagOrder.LargestOnTheEnds, Result = new[] { 30, 22, 14, 10, 18, 26 })]
public int[] CalculateFontSize_Orders_Tags_Appropriately(DisplayTagOrder sortOrder)
{
list.CloudOrder = sortOrder;
list.CalculateFontSize();
var result = (from displayTag in list select displayTag.FontSize).ToArray();
return result;
}
The Usage...
public void CalculateFontSize()
{
GetMaximumRange();
GetMinimunRange();
CalculateDelta();
this.ForEach((displayTag) => CalculateFontSize(displayTag));
OrderByFontSize();
}
private void OrderByFontSize()
{
switch (CloudOrder) {
case DisplayTagOrder.SmallestToLargest:
this.Sort((arg1, arg2) => arg1.FontSize.CompareTo(arg2.FontSize));
break;
case DisplayTagOrder.LargestToSmallest:
this.Sort(new LargestFirstComparer());
break;
case DisplayTagOrder.LargestInTheMiddle:
this.Sort(new LargestFirstComparer());
Reorder();
break;
case DisplayTagOrder.LargestOnTheEnds:
this.Sort();
Reorder();
break;
}
}
The appropriate data structure is a LinkedList because it allows you to efficiently add to either end:
LinkedList<int> result = new LinkedList<int>();
int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
Array.Sort(array);
bool odd = true;
foreach (var x in array)
{
if (odd)
result.AddLast(x);
else
result.AddFirst(x);
odd = !odd;
}
foreach (int item in result)
Console.Write("{0} ", item);
No extra copying steps, no reversing steps, ... just a small overhead per node for storage.
C# Iterator version
(Very simple code to satisfy all conditions.)
One function to rule them all! It doesn't use an intermediate storage collection (see the yield keyword). It orders the large numbers either to the middle or to the sides, depending on the argument. It's implemented as a C# iterator:
// Pass forward sorted array for large middle numbers,
// or reverse sorted array for large side numbers.
//
public static IEnumerable<long> CurveOrder(long[] nums) {
if (nums == null || nums.Length == 0)
yield break; // Nothing to do.
// Move forward every two.
for (int i = 0; i < nums.Length; i+=2)
yield return nums[i];
// Move backward every other two. Note: Length%2 makes sure we're on the correct offset.
for (int i = nums.Length-1 - nums.Length%2; i >= 0; i-=2)
yield return nums[i];
}
Example Usage
For example with array long[] nums = { 1,2,3,4,5,6,7,8,9,10,11 };
Start with forward sort order, to bump high numbers into the middle.
Array.Sort(nums); //forward sort
// Array argument will be: { 1,2,3,4,5,6,7,8,9,10,11 };
long[] arrLargeMiddle = CurveOrder(nums).ToArray();
Produces: 1 3 5 7 9 11 10 8 6 4 2
Or, start with reverse sort order, to push high numbers to the sides.
Array.Reverse(nums); //reverse sort
// Array argument will be: { 11,10,9,8,7,6,5,4,3,2,1 };
long[] arrLargeSides = CurveOrder(nums).ToArray();
Produces: 11 9 7 5 3 1 2 4 6 8 10
Significant namespaces are:
using System;
using System.Collections.Generic;
using System.Linq;
Note: The iterator leaves the decision up to the caller about whether or not to use intermediate storage. The caller might simply be issuing a foreach loop over the results instead.
Extension Method Option
Optionally, change the static method header to use the this modifier, public static IEnumerable<long> CurveOrder(this long[] nums), and put it inside a static class in your namespace.
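For example, the wrapper could look like this (the class name CurveOrderExtensions is arbitrary):
public static class CurveOrderExtensions
{
    // Same logic as CurveOrder above, exposed as an extension method on long[].
    public static IEnumerable<long> CurveOrder(this long[] nums)
    {
        if (nums == null || nums.Length == 0)
            yield break;
        for (int i = 0; i < nums.Length; i += 2)
            yield return nums[i];
        for (int i = nums.Length - 1 - nums.Length % 2; i >= 0; i -= 2)
            yield return nums[i];
    }
}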
Then call the order method directly on any long[ ] array instance like so:
Array.Reverse(nums); //reverse sort
// Array argument will be: { 11,10,9,8,7,6,5,4,3,2,1 };
long[] arrLargeSides = nums.CurveOrder().ToArray();
Just some (unneeded) syntactic sugar to mix things up a bit for fun. This can be applied to any answers to your question that take an array argument.
I might go for something like this
static T[] SortFromMiddleOut<T, U>(IList<T> list, Func<T, U> orderSelector, bool largestInside) where U : IComparable<U>
{
T[] sortedArray = new T[list.Count];
bool add = false;
int index = (list.Count / 2);
int iterations = 0;
IOrderedEnumerable<T> orderedList;
if (largestInside)
orderedList = list.OrderByDescending(orderSelector);
else
orderedList = list.OrderBy(orderSelector);
foreach (T item in orderedList)
{
sortedArray[index] = item;
if (add)
index += ++iterations;
else
index -= ++iterations;
add = !add;
}
return sortedArray;
}
Sample invocations:
int[] array = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int[] sortedArray = SortFromMiddleOut(array, i => i, false);
foreach (int item in sortedArray)
Console.Write("{0} ", item);
Console.Write("\n");
sortedArray = SortFromMiddleOut(array, i => i, true);
foreach (int item in sortedArray)
Console.Write("{0} ", item);
With it being generic, it could be a list of Foo and the order selector could be f => f.Name or whatever you want to throw at it.
The fastest (but not the clearest) solution is probably to simply calculate the new index for each element:
Array.Sort(array);
int length = array.Length;
int middle = length / 2;
int[] result2 = new int[length];
for (int i = 0; i < array.Length; i++)
{
result2[middle + (1 - 2 * (i % 2)) * ((i + 1) / 2)] = array[i];
}
Something like this?
public IEnumerable<int> SortToMiddle(IEnumerable<int> input)
{
var sorted = new List<int>(input);
sorted.Sort();
var firstHalf = new List<int>();
var secondHalf = new List<int>();
var sendToFirst = true;
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current);
}
else
{
secondHalf.Add(current);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
secondHalf.Reverse();
return firstHalf.Concat(secondHalf);
}
For your specific (general) case (assuming unique keys):
public static IEnumerable<T> SortToMiddle<T, TU>(IEnumerable<T> input, Func<T, TU> getSortKey)
{
var sorted = new List<TU>(input.Select(getSortKey));
sorted.Sort();
var firstHalf = new List<TU>();
var secondHalf = new List<TU>();
var sendToFirst = true;
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current);
}
else
{
secondHalf.Add(current);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
secondHalf.Reverse();
sorted = new List<TU>(firstHalf.Concat(secondHalf));
//This assumes the sort keys are unique - if not, the implementation
//needs to use a SortedList<TU, T>
return sorted.Select(s => input.First(t => s.Equals(getSortKey(t))));
}
And assuming non-unique keys:
public static IEnumerable<T> SortToMiddle<T, TU>(IEnumerable<T> input, Func<T, TU> getSortKey)
{
var sendToFirst = true;
var sorted = new SortedList<TU, T>(input.ToDictionary(getSortKey, t => t));
var firstHalf = new SortedList<TU, T>();
var secondHalf = new SortedList<TU, T>();
foreach (var current in sorted)
{
if (sendToFirst)
{
firstHalf.Add(current.Key, current.Value);
}
else
{
secondHalf.Add(current.Key, current.Value);
}
sendToFirst = !sendToFirst;
}
//to get the highest values on the outside just reverse
//the first list instead of the second
return firstHalf.Concat(secondHalf.Reverse()).Select(kvp => kvp.Value);
}
Simplest solution: order the list descending, create two new lists, put every odd-indexed item into the first and every even-indexed item into the other, reverse the first list, then append the second to the first.
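A sketch of that description (using 0-based positions in the descending list; the method name is illustrative):
// Sketch: split a descending sort by alternating positions, reverse one half, concatenate.
static List<int> LargestInMiddle(IEnumerable<int> source)
{
    var descending = source.OrderByDescending(x => x).ToList();
    var firstHalf = new List<int>();   // items at odd positions (2nd, 4th, ...)
    var secondHalf = new List<int>();  // items at even positions (1st, 3rd, ...)
    for (int i = 0; i < descending.Count; i++)
        (i % 2 == 1 ? firstHalf : secondHalf).Add(descending[i]);
    firstHalf.Reverse();
    firstHalf.AddRange(secondHalf);
    return firstHalf;                  // 1..10 becomes 1, 3, 5, 7, 9, 10, 8, 6, 4, 2
}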
Okay, I'm not going to question your sanity here since I'm sure you wouldn't be asking the question if there weren't a good reason :-)
Here's how I'd approach it. Create a sorted list, then simply create another list by processing the keys in order, alternately inserting before and appending, something like:
sortedlist = list.sort (descending)
biginmiddle = new list()
state = append
foreach item in sortedlist:
    if state == append:
        biginmiddle.append (item)
        state = prepend
    else:
        biginmiddle.insert (0, item)
        state = append
This will give you a list where the big items are in the middle. Other items will fan out from the middle (in alternating directions) as needed:
1, 3, 5, 7, 9, 10, 8, 6, 4, 2
To get a list where the larger elements are at the ends, just replace the initial sort with an ascending one.
The sorted and final lists can just be pointers to the actual items (since you state they're not simple integers) - this will minimise both extra storage requirements and copying.
Maybe it's not the best solution, but here's a nifty way...
Let Product[] parr be your array.
Disclaimer: it's Java, my C# is rusty.
Untested code, but you get the idea.
int plen = parr.length;
int [] indices = new int[plen];
for(int i = 0; i < (plen/2); i ++)
indices[i] = 2*i + 1; // Line1
for(int i = (plen/2); i < plen; i++)
indices[i] = 2*(plen-i); // Line2
for(int i = 0; i < plen; i++)
{
if(i != indices[i])
swap(parr[i], parr[indices[i]]);
}
For the second case, something like this?
int plen = parr.length;
int [] indices = new int[plen];
for(int i = 0; i <= (plen/2); i ++)
indices[i] = (plen^1) - 2*i;
for(int i = 0; i < (plen/2); i++)
indices[i+(plen/2)+1] = 2*i + 1;
for(int i = 0; i < plen; i++)
{
if(i != indices[i])
swap(parr[i], parr[indices[i]]);
}