Return the next whole number - c#

I want to pass a number and have the next whole number returned.
I've tried Math.Ceiling(3), but it returns 3.
Desired output:
double val = 9.1 => 10
double val = 3 => 4
Thanks

There are two ways I would suggest doing this:
Using Math.Floor():
return Math.Floor(input + 1);
Using casting (to lose precision):
return (int)input + 1;

Using just the floor or ceiling won't give you the next whole number in every case, for example if you input negative numbers. A better way is to create a function that does that.
public class Test
{
    public int NextWholeNumber(double n)
    {
        if (n < 0)
            return 0;
        else
            return Convert.ToInt32(Math.Floor(n) + 1);
    }

    // Main method
    static public void Main()
    {
        Test o = new Test();
        Console.WriteLine(o.NextWholeNumber(1.254));
    }
}

Usually "whole number" refers to non-negative integers only. But if you require negative integers as well, then you can try this; the code will return 3.0 => 4, -1.0 => 0, -1.1 => -1.
double doubleValue = double.Parse(Console.ReadLine());
int wholeNumber = 0;
if (doubleValue - Math.Floor(doubleValue) > 0)
{
    wholeNumber = int.Parse(Math.Ceiling(doubleValue).ToString());
}
else
{
    wholeNumber = int.Parse((doubleValue + 1).ToString());
}

Related

Nearest value from user input in an array C#

In my application I read some files and ask the user for a number. These files contain a lot of numbers, and I am trying to find the nearest value when the number they enter is not in the file. So far I have the following:
static int nearest(int close_num, int[] a)
{
    foreach (int bob in a)
    {
        if ((close_num -= bob) <= 0)
            return bob;
    }
    return -1;
}
Console.WriteLine("Enter a number to find out if is in the selected Net File: ");
int i3 = Convert.ToInt32(Console.ReadLine());
bool checker = false;
//Single nearest = 0;
//linear search#1
for (int i = 0; i < a.Length; i++)//looping through array
{
if(a[i] == i3)//checking to see the value is found in the array
{
Console.WriteLine("Value found and the position of it in the descending value of the selected Net File is: " + a[i]);
checker = true;
}
else
{
int found = nearest(i3,a);
Console.WriteLine("Cannot find this number in the Net File however here the closest number to that: " + found );
//Console.WriteLine("Cannot find this number in the Net File however here the closest number to that : " + nearest);
}
}
When a value that is in the file is entered, the output is fine, but when it comes to the nearest value I cannot figure out a way. I can't use something like BinarySearchArray for this. a is the array, whilst i3 is the value the user has entered. Would a binary search algorithm just be simpler for this?
Any help would be appreciated.
You need to make a pass over all the elements of the array, comparing each one in turn to find the smallest difference. At the same time, keep a note of the current nearest value.
There are many ways to do this; here's a fairly simple one:
static int nearest(int close_num, int[] a)
{
    int result = -1;
    long smallestDelta = long.MaxValue;
    foreach (int bob in a)
    {
        long delta = (bob > close_num) ? (bob - close_num) : (close_num - bob);
        if (delta < smallestDelta)
        {
            smallestDelta = delta;
            result = bob;
        }
    }
    return result;
}
Note that delta is calculated so that it is the absolute value of the difference.
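For reference, a quick usage sketch (the sample values are arbitrary, chosen just for illustration):
int[] values = { 3, 7, 12, 20 };
// 12 is nearest to 10 (delta of 2, versus 3 for the value 7), so this prints 12
Console.WriteLine(nearest(10, values));
// an empty array leaves result untouched, so this prints -1
Console.WriteLine(nearest(10, new int[0]));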
Well, first we should define what nearest means. Assuming that
the nearest int for a given int number is the item of int[] a such that Math.Abs(nearest - number) is the smallest possible value,
we can put it as:
static int nearest(int number, int[] a)
{
    long diff = -1;
    int result = 0;
    foreach (int item in a)
    {
        // actual = Math.Abs((long)item - number);
        long actual = (long)item - number;
        if (actual < 0)
            actual = -actual;
        // if item is the very first value or better than result
        if (diff < 0 || actual < diff) {
            result = item;
            diff = actual;
        }
    }
    return result;
}
The only tricky part is the long type for diff: item - number can exceed the int range (and will either throw an overflow exception in a checked context or silently produce an invalid answer), e.g.
int[] a = new int[] {int.MaxValue, int.MaxValue - 1};
Console.Write(nearest(int.MinValue, a));
Note that the expected result is 2147483646, not 2147483647.
What about LINQ?
var nearestNumber = a.OrderBy(x => Math.Abs(x - i3)).First();
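If you are on .NET 6 or later (an assumption about your target framework), MinBy avoids sorting the whole array just to take its first element, and the cast to long guards against the overflow mentioned in the previous answer:
var nearestNumber = a.MinBy(x => Math.Abs((long)x - i3));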
Just iterate through the array and find the minimal delta between close_num and the array members:
static int nearest(int close_num, int[] a)
{
    // initialize with the largest possible delta
    int min_delta = int.MaxValue;
    int result = -1;
    foreach (int bob in a)
    {
        if (Math.Abs(bob - close_num) <= min_delta)
        {
            min_delta = Math.Abs(bob - close_num);
            result = bob;
        }
    }
    return result;
}

Array.Sort / IComparable sometimes doesn't sort correctly on first call

I am trying to solve 11321 - Sort! Sort!! and Sort!!! using C#, directly translating my solutions (as well as other people's) from cpp and Java.
My problem is that listName.Sort() or Array.Sort(listName, ...Comparer.Create()...) does not correctly sort the output during the first pass. I have to call it twice to be able to sort it properly.
In some cases, I manually set breakpoints inside CompareTo() when calling Array.Sort, and add a reference to the list inside the closure on purpose so I can watch the values as it sorts. It sorts properly until the Array.Sort() method returns, at which point I see some values go back to an incorrect order.
I am using Morass' test cases from uDebug to test, and one example of an incorrect sort result I get is on line 10919 of the output:
Line     Accepted   My Output
10919    457        461
10920    461        457
As you can see, the numbers 457 and 461 should be sorted in ascending order of their modulo 500 values (457 and 461 respectively). If I call the sort method a second time in my code below, then I finally get a correct output.
I guess my question is, why is this happening? Is there anything wrong with my implementation? My implementations are almost 1-to-1 translations of accepted Java or cpp code. Note that I've also tried using LINQ's OrderBy(), which produces different results but ultimately a correct one when called enough times.
I have the following Number class with the corresponding IComparable implementation:
class Number : IComparable<Number>
{
    public int Value { get; }
    public int Mod { get; }
    public bool IsOdd { get; }

    public Number(int val, int mod)
    {
        Value = val;
        Mod = mod;
        IsOdd = val % 2 != 0;
    }

    public int CompareTo(Number other)
    {
        var leftVal = Value;
        var leftMod = Mod;
        var rightVal = other.Value;
        var rightMod = other.Mod;
        var leftOdd = IsOdd;
        var rightOdd = other.IsOdd;

        if (leftMod < rightMod) return -1;
        else if (leftMod > rightMod) return 1;
        else
        {
            if (leftOdd && rightOdd)
            {
                return leftVal > rightVal ? -1 : 1;
            }
            else if (!leftOdd && !rightOdd)
            {
                return leftVal > rightVal ? 1 : -1;
            }
            else if (leftOdd)
            {
                return -1;
            }
            else // (rightOdd)
            {
                return 1;
            }
        }
    }
}
And my main method:
public static void Main(string[] args)
{
    while (true)
    {
        var settings = Console.ReadLine().Split(' ');
        var N = int.Parse(settings[0]);
        var M = int.Parse(settings[1]);
        if (N == 0 && M == 0) break;
        Console.WriteLine($"{N} {M}");
        var output = new List<Number>();
        var i = 0;
        while (i < N)
        {
            var line = Console.ReadLine();
            var val = int.Parse(line);
            var mod = val % M;
            output.Add(new Number(val, mod));
            i++;
        }
        output.Sort();
        // uncomment to produce acceptable answer
        // output.Sort();
        foreach (var line in output)
        {
            Console.WriteLine(line.Value);
        }
    }
    Console.WriteLine("0 0");
}
Edit 1:
Just a note that I am redirecting stdin and stdout from a file/to a StringBuilder, so I can automate testing.
static void Main(string[] args)
{
    var builder = new StringBuilder();
    var output = new StringWriter(builder);
    Console.SetOut(output);
    var solution = File.ReadAllText("P11321_Outputs");
    var problem = new StreamReader("P11321_Inputs");
    Console.SetIn(problem);
    P11321_1.Main(args);
}
Edit 2:
Here is a portion of the test case where the weird behavior occurs. A concrete repro step: if you change the test case to only have 38 items and remove 11 from the input, then 457 and 461 sort correctly.
Input:
39 500
-121
582
163
457
-86
-296
740
220
-867
-333
-773
11
-446
-259
-238
782
461
756
-474
-21
-358
593
548
-962
-411
45
-604
-977
47
-561
-647
926
578
516
382
-508
-781
-322
712
0 0
Output:
39 500
-977
-474
-962
-446
-411
-867
-358
-333
-322
-296
-781
-773
-259
-238
-647
-121
-604
-86
-561
-21
-508
11
516
45
47
548
578
582
593
163
712
220
740
756
782
382
926
457
461
0 0
You managed to check all the cases in your boolean tests except for when the values are equal. A sort algorithm not only needs to know whether elements are greater than or less than each other, it also needs to know when they are equal to each other.
if (leftMod < rightMod)
    return -1;
else if (leftMod > rightMod)
    return 1;
else
{
    if (leftVal == rightVal)
    {
        return 0; // need this so you don't orphan an element when tested against itself
    }
    if (leftOdd && rightOdd)
    {
        return leftVal > rightVal ? -1 : 1;
    }
    else if (!leftOdd && !rightOdd)
    {
        return leftVal > rightVal ? 1 : -1;
    }
    else if (leftOdd)
    {
        return -1;
    }
    else // (rightOdd)
    {
        return 1;
    }
}
Building on Mark Benningfield's answer, I would like to present a proof of concept of why including equality in the implementation of a custom comparer is important. Not only is there a risk of incorrect results, there is also a risk of never getting any results at all!
Trying to sort just two numbers (2, 1) with a buggy comparer:
class BuggyComparer : IComparer<int>
{
    public int Compare(int x, int y) => x < y ? -1 : 1; // Equality?
}

var source = new int[] { 2, 1 };
var sorted = source.OrderBy(n => n, new BuggyComparer());
Console.WriteLine(String.Join(", ", sorted)); // Infinite loop
The program does not terminate, because the sort can never complete.
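For contrast, a minimal fix along the lines suggested above, returning 0 on equality, lets the same sort complete (a sketch, using the same usings as the snippet above):
class FixedComparer : IComparer<int>
{
    public int Compare(int x, int y) => x < y ? -1 : (x > y ? 1 : 0); // equal elements now compare as equal
}

var source = new int[] { 2, 1 };
var sorted = source.OrderBy(n => n, new FixedComparer());
Console.WriteLine(String.Join(", ", sorted)); // prints "1, 2"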

How to optimize a loop going through arrays?

I've been going through www.testdome.com to test my skills and opened a list of public questions. One of the practice questions was:
Implement function CountNumbers that accepts a sorted array of
integers and counts the number of array elements that are less than
the parameter lessThan.
For example, SortedSearch.CountNumbers(new int[] { 1, 3, 5, 7 }, 4)
should return 2 because there are two array elements less than 4.
And my answer was:
using System;
public class SortedSearch
{
    public static int CountNumbers(int[] sortedArray, int lessThan)
    {
        int count = 0;
        int l = sortedArray.Length;
        for (int i = 0; i < l; i++) {
            if (sortedArray[i] < lessThan)
                count++;
        }
        return count;
    }

    public static void Main(string[] args)
    {
        Console.WriteLine(SortedSearch.CountNumbers(new int[] { 1, 3, 5, 7 }, 4));
    }
}
It seems that I've failed on two counts:
Performance test when sortedArray contains lessThan: Time limit exceeded
and
Performance test when sortedArray doesn't contain lessThan: Time limit exceeded
To be honest, I'm not sure what to optimize there. Maybe I'm using the wrong method and there is a similar way to speed up the calculation?
If someone could point out my mistake or explain where I'm going wrong, I'd really appreciate it!
Because the array is sorted, you can stop counting as soon as you reach or exceed the lessThan parameter.
else break would probably do it.
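A minimal sketch of that change, reusing the variables from the question's loop:
for (int i = 0; i < l; i++)
{
    if (sortedArray[i] < lessThan)
        count++;
    else
        break; // the array is sorted, so no later element can be smaller
}
This still scans linearly when every element is smaller than lessThan; the binary-search answer below avoids that.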
Does it really have to be a loop? You could use a lambda expression for that:
public static int CountNumbers(int[] sortedArray, int lessThan)
{
    return sortedArray.ToList().Where(x => x < lessThan).Count();
}
Harold's answer and approach is spot on.
Find below another code sample in case you're practicing for technical interviews. It handles cases when the array is null or empty, when lessThan is present in the array (including duplicates), etc.
private static int CountNumbers(int[] sortedArray, int lessThan)
{
    if (sortedArray == null)
    {
        throw new ArgumentNullException("Sorted array cannot be null.");
    }
    if (sortedArray.Length == 0)
    {
        throw new ArgumentException("Sorted array cannot be empty.");
    }
    int start = 0;
    int end = sortedArray.Length; // exclusive upper bound
    int middle = int.MinValue;
    while (start < end)
    {
        middle = (start + end) / 2;
        if (sortedArray[middle] == lessThan)
        {
            break; // Found the "lessThan" number in the array, we can stop and move left
        }
        else if (sortedArray[middle] < lessThan)
        {
            start = middle + 1;
        }
        else
        {
            end = middle; // keep the bound exclusive so smaller elements are never skipped
        }
    }
    // Adjust the middle pointer based on the "current" and "lessThan" numbers in the sorted array
    while (middle >= 0 && sortedArray[middle] >= lessThan)
    {
        middle--;
    }
    // +1 because middle is calculated through 0-based (e.g. start)
    return middle + 1;
}
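A quick sanity check against the example from the question, plus a duplicate-heavy case (the second input is made up for illustration):
Console.WriteLine(CountNumbers(new int[] { 1, 3, 5, 7 }, 4));       // 2
Console.WriteLine(CountNumbers(new int[] { 1, 3, 3, 3, 5, 7 }, 3)); // 1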

What's a good algorithm for checking if a given list of values alternates up and down?

Assuming the function takes in a list of doubles and an index to perform the check from, I need to check if the values alternate up and down consecutively.
For example, a list of [14.0,12.3,13.0,11.4] alternates consecutively but a list of [14.0,12.3,11.4,13.0] doesn't.
The algorithm doesn't have to be fast, but I'd like it to be compact to write (LINQ is totally fine). This is my current method, and it looks way too crude for my taste:
enum AlternatingDirection { Rise, Fall, None };

public bool CheckConsecutiveAlternation(List<double> dataList, int currDataIndex)
{
    /*
     * Result True  : Fail
     * Result False : Pass
     */
    if (!_continuousRiseFallCheckBool)
        return false;
    if (dataList.Count < _continuousRiseFallValue)
        return false;
    if (currDataIndex + 1 < _continuousRiseFallValue)
        return false;

    AlternatingDirection direction = AlternatingDirection.None;
    int startIndex = currDataIndex - _continuousRiseFallValue + 1;
    double prevVal = 0;

    for (int i = startIndex; i <= currDataIndex; i++)
    {
        if (i == startIndex)
        {
            prevVal = dataList[i];
            continue;
        }
        if (prevVal > dataList[i])
        {
            prevVal = dataList[i];
            switch (direction)
            {
                case AlternatingDirection.None:
                    direction = AlternatingDirection.Fall;
                    continue;
                case AlternatingDirection.Rise:
                    direction = AlternatingDirection.Fall;
                    continue;
                default:
                    //Two falls in a row. Not a signal.
                    return false;
            }
        }
        if (prevVal < dataList[i])
        {
            prevVal = dataList[i];
            switch (direction)
            {
                case AlternatingDirection.None:
                    direction = AlternatingDirection.Rise;
                    continue;
                case AlternatingDirection.Fall:
                    direction = AlternatingDirection.Rise;
                    continue;
                default:
                    //Two rise in a row. Not a signal.
                    return false;
            }
        }
        return false;
    }
    //Alternated n times until here. Data is out of control.
    return true;
}
Try this:
public static bool IsAlternating(double[] data)
{
    var d = GetDerivative(data);
    var signs = d.Select(val => Math.Sign(val));
    bool isAlternating =
        signs.Zip(signs.Skip(1), (a, b) => a != b).All(isAlt => isAlt);
    return isAlternating;
}

private static IEnumerable<double> GetDerivative(double[] data)
{
    var d = data.Zip(data.Skip(1), (a, b) => b - a);
    return d;
}
The idea is:
If the given list of values alternates up and down, mathematically it means that its derivative keeps changing sign.
So this is exactly what this piece of code does:
Get the derivative.
Check for sign fluctuations.
And as a bonus, it will not evaluate all of the derivative / sign sequences unless it is necessary.
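For instance, with the two lists from the question (assuming the IsAlternating method above is in scope):
Console.WriteLine(IsAlternating(new[] { 14.0, 12.3, 13.0, 11.4 })); // True
Console.WriteLine(IsAlternating(new[] { 14.0, 12.3, 11.4, 13.0 })); // False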
I'd do it with a couple of consecutive zips, bundled in an extension method:
public static class AlternatingExtensions
{
    public static bool IsAlternating<T>(this IList<T> list) where T : IComparable<T>
    {
        var diffSigns = list.Zip(list.Skip(1), (a, b) => b.CompareTo(a));
        var signChanges = diffSigns.Zip(diffSigns.Skip(1), (a, b) => a * b < 0);
        return signChanges.All(s => s);
    }
}
Edit: for completeness, here's how you'd use the feature:
var alternatingList = new List<double> { 14.0, 12.3, 13.0, 11.4 };
var nonAlternatingList = new List<double> { 14.0, 12.3, 11.4, 13.0 };
alternatingList.IsAlternating(); // true
nonAlternatingList.IsAlternating(); // false
I also changed the implementation to work on more types, making use of generics as much as possible.
Here is some small pseudocode, assuming no repeated elements (they can be handled easily with a few tweaks).
The idea is to have a sign variable that alternates between 1 and -1; the difference of each consecutive pair, multiplied by this sign variable, must always be positive. If at some point it is not, return false.
isUpAndDown(l):
    if size(l) < 2:   // empty or singleton list is always good
        return true
    int sign = (l[0] < l[1] ? 1 : -1)
    for i from 0 to n-1 (exclusive):
        if sign * (l[i+1] - l[i]) < 0:
            return false // not alternating
        sign = sign * -1
    end for
    return true // all good
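A rough C# rendering of that pseudocode (still assuming no repeated elements, as stated above):
static bool IsUpAndDown(IList<double> l)
{
    if (l.Count < 2)
        return true; // an empty or singleton list is always good
    int sign = l[0] < l[1] ? 1 : -1;
    for (int i = 0; i + 1 < l.Count; i++)
    {
        if (sign * (l[i + 1] - l[i]) < 0)
            return false; // not alternating
        sign = -sign; // flip the expected direction each step
    }
    return true;
}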
You may create a kind of sign array first:
double previous = 0;
var sign = myList.Select(x => {
    int s = Math.Sign(x - previous);
    previous = x;
    return s;
});
This gives you a list similar to
{ -1, 1, -1, ... }
Now you can take a similar approach to the previous Select statement to check whether a -1 follows a 1:
var result = sign.All(x => {
    bool b = x == -previous;
    previous = x;
    return b;
});
Now result is true if your list alternates, false otherwise.
EDIT: To ensure that the very first check within the second query also passes add previous = -sign[0]; before the second query.
Assuming that two equal values in a row are not acceptable (if they are, just skip over equal values):
if (dataList[0] == dataList[1])
    return false;

bool nextMustRise = dataList[0] > dataList[1];
for (int i = 2; i < dataList.Count; i++) {
    if (dataList[i - 1] == dataList[i] || (dataList[i - 1] < dataList[i]) != nextMustRise)
        return false;
    nextMustRise = !nextMustRise;
}
return true;
public double RatioOfAlternations(double[] dataList)
{
    double Alternating = 0;
    double Total = (dataList.Count() - 2);
    for (int i = 0; i < Total; i++)
    {
        // If the previous change has the opposite sign to the current change, this product is negative
        if (((dataList[i + 1] - dataList[i]) * (dataList[i + 2] - dataList[i + 1])) < 0)
        {
            Alternating++;
        }
    }
    return (Alternating / Total);
}

selection based on percentage weighting

I have a set of values, and an associated percentage for each:
a: 70% chance
b: 20% chance
c: 10% chance
I want to select a value (a, b, c) based on the percentage chance given.
How do I approach this?
my attempt so far looks like this:
r = random.random()
if r <= .7:
    return a
elif r <= .9:
    return b
else:
    return c
I'm stuck coming up with an algorithm to handle this. How should I approach this so it can handle larger sets of values without just chaining together if-else statements?
(any explanation or answers in pseudo-code are fine. a python or C# implementation would be especially helpful)
Here is a complete solution in C#:
public class ProportionValue<T>
{
    public double Proportion { get; set; }
    public T Value { get; set; }
}

public static class ProportionValue
{
    public static ProportionValue<T> Create<T>(double proportion, T value)
    {
        return new ProportionValue<T> { Proportion = proportion, Value = value };
    }

    static Random random = new Random();

    public static T ChooseByRandom<T>(
        this IEnumerable<ProportionValue<T>> collection)
    {
        var rnd = random.NextDouble();
        foreach (var item in collection)
        {
            if (rnd < item.Proportion)
                return item.Value;
            rnd -= item.Proportion;
        }
        throw new InvalidOperationException(
            "The proportions in the collection do not add up to 1.");
    }
}
Usage:
var list = new[] {
    ProportionValue.Create(0.7, "a"),
    ProportionValue.Create(0.2, "b"),
    ProportionValue.Create(0.1, "c")
};
// Outputs "a" with probability 0.7, etc.
Console.WriteLine(list.ChooseByRandom());
For Python:
>>> import random
>>> dst = 70, 20, 10
>>> vls = 'a', 'b', 'c'
>>> picks = [v for v, d in zip(vls, dst) for _ in range(d)]
>>> for _ in range(12): print random.choice(picks),
...
a c c b a a a a a a a a
>>> for _ in range(12): print random.choice(picks),
...
a c a c a b b b a a a a
>>> for _ in range(12): print random.choice(picks),
...
a a a a c c a c a a c a
>>>
General idea: make a list where each item is repeated a number of times proportional to the probability it should have; use random.choice to pick one at random (uniformly), this will match your required probability distribution. Can be a bit wasteful of memory if your probabilities are expressed in peculiar ways (e.g., 70, 20, 10 makes a 100-items list where 7, 2, 1 would make a list of just 10 items with exactly the same behavior), but you could divide all the counts in the probabilities list by their greatest common factor if you think that's likely to be a big deal in your specific application scenario.
Apart from memory consumption issues, this should be the fastest solution -- just one random number generation per required output result, and the fastest possible lookup from that random number, no comparisons &c. If your likely probabilities are very weird (e.g., floating point numbers that need to be matched to many, many significant digits), other approaches may be preferable;-).
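The same repeat-and-pick idea in C# might look roughly like this, using the reduced integer weights 7, 2, 1 mentioned above (the names are illustrative; Enumerable.Repeat needs System.Linq):
var picks = new List<string>();
picks.AddRange(Enumerable.Repeat("a", 7));
picks.AddRange(Enumerable.Repeat("b", 2));
picks.AddRange(Enumerable.Repeat("c", 1));

var rng = new Random();
string chosen = picks[rng.Next(picks.Count)]; // a uniform pick over the expanded list gives the weighted distribution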
Knuth references Walker's method of aliases. Searching on this, I find http://code.activestate.com/recipes/576564-walkers-alias-method-for-random-objects-with-diffe/ and http://prxq.wordpress.com/2006/04/17/the-alias-method/. This gives the exact probabilities required in constant time per number generated with linear time for setup (curiously, n log n time for setup if you use exactly the method Knuth describes, which does a preparatory sort you can avoid).
Take the list and find the cumulative totals of the weights: 70, 70+20, 70+20+10. Pick a random number greater than or equal to zero and less than the total. Iterate over the items and return the first value for which the cumulative sum of the weights is greater than this random number:
def select( values ):
    variate = random.random() * sum( values.values() )
    cumulative = 0.0
    for item, weight in values.items():
        cumulative += weight
        if variate < cumulative:
            return item
    return item # Shouldn't get here, but just in case of rounding...
print select( { "a": 70, "b": 20, "c": 10 } )
This solution, as implemented, should also be able to handle fractional weights and weights that add up to any number so long as they're all non-negative.
Let T = the sum of all item weights
Let R = a random number between 0 and T
Iterate the item list subtracting each item weight from R and return the item that causes the result to become <= 0.
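In C# that subtraction loop might look like this (a sketch; the tuple-based signature is just one possible shape, and Sum needs System.Linq):
static T PickWeighted<T>(IList<(T Value, double Weight)> items, Random rng)
{
    double r = rng.NextDouble() * items.Sum(i => i.Weight); // R in [0, T)
    foreach (var (value, weight) in items)
    {
        r -= weight;
        if (r <= 0)
            return value; // this item pushed the remainder to or below zero
    }
    return items[items.Count - 1].Value; // guard against floating-point rounding
}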
def weighted_choice(probabilities):
    random_position = random.random() * sum(probabilities)
    current_position = 0.0
    for i, p in enumerate(probabilities):
        current_position += p
        if random_position < current_position:
            return i
    return None
Because random.random will always return < 1.0, the final return should never be reached.
import random
def selector(weights):
    i = random.random() * sum(x for x, y in weights)
    for w, v in weights:
        if w >= i:
            break
        i -= w
    return v
weights = ((70,'a'),(20,'b'),(10,'c'))
print [selector(weights) for x in range(10)]
it works equally well for fractional weights
weights = ((0.7,'a'),(0.2,'b'),(0.1,'c'))
print [selector(weights) for x in range(10)]
If you have a lot of weights, you can use bisect to reduce the number of iterations required
import random
import bisect

def make_acc_weights(weights):
    acc = 0
    acc_weights = []
    for w, v in weights:
        acc += w
        acc_weights.append((acc, v))
    return acc_weights

def selector(acc_weights):
    # the last accumulated value is the total weight
    i = random.random() * acc_weights[-1][0]
    return acc_weights[bisect.bisect(acc_weights, (i,))][1]
weights = ((70,'a'),(20,'b'),(10,'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
Also works fine for fractional weights
weights = ((0.7,'a'),(0.2,'b'),(0.1,'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
Today, an update of the Python documentation gives an example of making a random.choice() with weighted probabilities:
If the weights are small integer ratios, a simple technique is to build a sample population with repeats:
>>> weighted_choices = [('Red', 3), ('Blue', 2), ('Yellow', 1), ('Green', 4)]
>>> population = [val for val, cnt in weighted_choices for i in range(cnt)]
>>> random.choice(population)
'Green'
A more general approach is to arrange the weights in a cumulative distribution with itertools.accumulate(), and then locate the random value with bisect.bisect():
>>> choices, weights = zip(*weighted_choices)
>>> cumdist = list(itertools.accumulate(weights))
>>> x = random.random() * cumdist[-1]
>>> choices[bisect.bisect(cumdist, x)]
'Blue'
One note: itertools.accumulate() needs Python 3.2, or you can define it using the equivalent code given in the documentation.
I think you can have an array of small objects. I implemented it in Java; although I know a little C#, I'm afraid I would write wrong code, so you may need to port it yourself. The C# code will be much smaller with struct and var, but I hope you get the idea:
class PercentString {
    double percent;
    String value;
    // Constructor for 2 values
}

ArrayList<PercentString> list = new ArrayList<PercentString>();
list.add(new PercentString(70, "a"));
list.add(new PercentString(20, "b"));
list.add(new PercentString(10, "c"));

double random = Math.random() * 100; // assumed: a uniform random value in [0, 100)
double percent = 0;
for (int i = 0; i < list.size(); i++) {
    PercentString p = list.get(i);
    percent += p.percent;
    if (random < percent) {
        return p.value;
    }
}
If you really care about speed and want to generate the random values quickly, the Walker's algorithm mcdowella mentioned in https://stackoverflow.com/a/3655773/1212517 is pretty much the best way to go (O(1) time for random(), and O(N) time for preprocess()).
For anyone who is interested, here is my own PHP implementation of the algorithm:
/**
* Pre-process the samples (Walker's alias method).
* @param array key represents the sample, value is the weight
*/
protected function preprocess($weights){
$N = count($weights);
$sum = array_sum($weights);
$avg = $sum / (double)$N;
//divide the array of weights to values smaller and geq than sum/N
$smaller = array_filter($weights, function($itm) use ($avg){ return $avg > $itm;}); $sN = count($smaller);
$greater_eq = array_filter($weights, function($itm) use ($avg){ return $avg <= $itm;}); $gN = count($greater_eq);
$bin = array(); //bins
//we want to fill N bins
for($i = 0;$i<$N;$i++){
//At first, decide for a first value in this bin
//if there are small intervals left, we choose one
if($sN > 0){
$choice1 = each($smaller);
unset($smaller[$choice1['key']]);
$sN--;
} else{ //otherwise, we split a large interval
$choice1 = each($greater_eq);
unset($greater_eq[$choice1['key']]);
}
//splitting happens here - the unused part of interval is thrown back to the array
if($choice1['value'] >= $avg){
if($choice1['value'] - $avg >= $avg){
$greater_eq[$choice1['key']] = $choice1['value'] - $avg;
}else if($choice1['value'] - $avg > 0){
$smaller[$choice1['key']] = $choice1['value'] - $avg;
$sN++;
}
//this bin comprises of only one value
$bin[] = array(1=>$choice1['key'], 2=>null, 'p1'=>1, 'p2'=>0);
}else{
//make the second choice for the current bin
$choice2 = each($greater_eq);
unset($greater_eq[$choice2['key']]);
//splitting on the second interval
if($choice2['value'] - $avg + $choice1['value'] >= $avg){
$greater_eq[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
}else{
$smaller[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
$sN++;
}
//this bin comprises of two values
$choice2['value'] = $avg - $choice1['value'];
$bin[] = array(1=>$choice1['key'], 2=>$choice2['key'],
'p1'=>$choice1['value'] / $avg,
'p2'=>$choice2['value'] / $avg);
}
}
$this->bins = $bin;
}
/**
* Choose a random sample according to the weights.
*/
public function random(){
    $bin = $this->bins[array_rand($this->bins)];
    $randValue = (lcg_value() < $bin['p1']) ? $bin[1] : $bin[2];
    return $randValue;
}
Here is my version that can apply to any IList and normalizes the weights. It is based on Timwi's solution: selection based on percentage weighting.
/// <summary>
/// return a random element of the list or default if list is empty
/// </summary>
/// <param name="e"></param>
/// <param name="weightSelector">
/// return chances to be picked for the element. A weight of 0 or less means 0 chance to be picked.
/// If all elements have weight of 0 or less they all have equal chances to be picked.
/// </param>
/// <returns></returns>
public static T AnyOrDefault<T>(this IList<T> e, Func<T, double> weightSelector)
{
    if (e.Count < 1)
        return default(T);
    if (e.Count == 1)
        return e[0];

    var weights = e.Select(o => Math.Max(weightSelector(o), 0)).ToArray();
    var sum = weights.Sum(d => d);
    var rnd = new Random().NextDouble();
    for (int i = 0; i < weights.Length; i++)
    {
        //Normalize weight
        var w = sum == 0
            ? 1 / (double)e.Count
            : weights[i] / sum;
        if (rnd < w)
            return e[i];
        rnd -= w;
    }
    throw new Exception("Should not happen");
}
I have my own solution for this:
public class Randomizator3000
{
    public class Item<T>
    {
        public T value;
        public float weight;

        public static float GetTotalWeight<T>(Item<T>[] p_itens)
        {
            float __toReturn = 0;
            foreach(var item in p_itens)
            {
                __toReturn += item.weight;
            }
            return __toReturn;
        }
    }

    private static System.Random _randHolder;
    private static System.Random _random
    {
        get
        {
            if(_randHolder == null)
                _randHolder = new System.Random();
            return _randHolder;
        }
    }

    public static T PickOne<T>(Item<T>[] p_itens)
    {
        if(p_itens == null || p_itens.Length == 0)
        {
            return default(T);
        }

        float __randomizedValue = (float)_random.NextDouble() * (Item<T>.GetTotalWeight(p_itens));
        float __adding = 0;
        for(int i = 0; i < p_itens.Length; i ++)
        {
            float __cacheValue = p_itens[i].weight + __adding;
            if(__randomizedValue <= __cacheValue)
            {
                return p_itens[i].value;
            }
            __adding = __cacheValue;
        }

        return p_itens[p_itens.Length - 1].value;
    }
}
And using it should look something like this (this is in Unity3D):
using UnityEngine;
using System.Collections;
public class teste : MonoBehaviour
{
Randomizator3000.Item<string>[] lista;
void Start()
{
lista = new Randomizator3000.Item<string>[10];
lista[0] = new Randomizator3000.Item<string>();
lista[0].weight = 10;
lista[0].value = "a";
lista[1] = new Randomizator3000.Item<string>();
lista[1].weight = 10;
lista[1].value = "b";
lista[2] = new Randomizator3000.Item<string>();
lista[2].weight = 10;
lista[2].value = "c";
lista[3] = new Randomizator3000.Item<string>();
lista[3].weight = 10;
lista[3].value = "d";
lista[4] = new Randomizator3000.Item<string>();
lista[4].weight = 10;
lista[4].value = "e";
lista[5] = new Randomizator3000.Item<string>();
lista[5].weight = 10;
lista[5].value = "f";
lista[6] = new Randomizator3000.Item<string>();
lista[6].weight = 10;
lista[6].value = "g";
lista[7] = new Randomizator3000.Item<string>();
lista[7].weight = 10;
lista[7].value = "h";
lista[8] = new Randomizator3000.Item<string>();
lista[8].weight = 10;
lista[8].value = "i";
lista[9] = new Randomizator3000.Item<string>();
lista[9].weight = 10;
lista[9].value = "j";
}
void Update ()
{
Debug.Log(Randomizator3000.PickOne<string>(lista));
}
}
In this example each value has a 10% chance of being displayed as a debug message =3
Based loosely on python's numpy.random.choice(a=items, p=probs), which takes an array and a probability array of the same size.
public T RandomChoice<T>(IEnumerable<T> a, IEnumerable<double> p)
{
    IEnumerator<T> ae = a.GetEnumerator();
    Random random = new Random();
    double target = random.NextDouble();
    double accumulator = 0;
    foreach (var prob in p)
    {
        ae.MoveNext();
        accumulator += prob;
        if (accumulator > target)
        {
            break;
        }
    }
    return ae.Current;
}
The probability array p must sum to (approx.) 1. This is to keep it consistent with the numpy interface (and mathematics), but you could easily change that if you wanted.
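Usage with the weights from the question might look like this (assuming the method above is accessible, e.g. defined on the same class):
var value = RandomChoice(new[] { "a", "b", "c" }, new[] { 0.7, 0.2, 0.1 });
Console.WriteLine(value); // "a" roughly 70% of the time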
