I am receiving values that are constantly changing. Let's say I receive 1, then 6, then 9, then 3, then 10, and so on. This makes the graphics on my control jump all around. I would like to smooth this transition. After some research I found an example, but it seems to be for Unity; the Mathf class, for example, is not available outside Unity.
This:
private IEnumerator ChangeSpeed(float v_start, float v_end, float duration)
{
    float speed = 0.0f;
    float elapsed = 0.0f;
    while (elapsed < duration)
    {
        speed = Mathf.Lerp(v_start, v_end, elapsed / duration);
        elapsed += Time.deltaTime;
        yield return null;
    }
    speed = v_end;
}
I have been trying to work it out by somehow dividing values into smaller ones to get the change in decimals. So if I got 1 and 6, I would remember these two values and then feed my property 1.1, 1.2, 1.3, 1.4 and so on up to 6. This would decrease the step. However, I am having a hard time understanding how this can be done in principle.
What I have tried so far, not working, no smoothing:
private void StartRandomizer()
{
    // Create a timer with a two second interval.
    Timer timer = new System.Timers.Timer(2000);
    // Hook up the Elapsed event for the timer.
    timer.Elapsed += OnTimedEvent;
    timer.AutoReset = true;
    timer.Enabled = true;
}
private void OnTimedEvent(Object source, ElapsedEventArgs e)
{
    double oldValue = 0;
    Random random = new Random();
    double number = random.Next(1, 20) * 5;
    this.SmoothValue(number, oldValue);
    //ExpFilter expFilter = new ExpFilter(number, 1, 0.1);
    this.Speed = number;
    oldValue = number;
}

private double SmoothValue(double newValue, double oldValue)
{
    double difference = Math.Abs(newValue - oldValue) / 100;
    return difference;
}
The Speed property is a double. Any ideas how to get this working?
You can create a helper method which returns the intermediate values for two given values and a given step size. The method can look like this:
public static IList<double> BuildIntermediateValues(double start, double end, double stepSize)
{
    if (stepSize <= 0)
    {
        throw new ArgumentException("The step size must be positive", nameof(stepSize));
    }
    IList<double> result = new List<double>();
    if (Math.Abs(start - end) < double.Epsilon)
    {
        return result;
    }
    if (start < end)
    {
        // go up
        for (double d = start + stepSize; d < end; d += stepSize)
        {
            result.Add(d);
        }
    }
    else
    {
        // go down
        for (double d = start - stepSize; d > end; d -= stepSize)
        {
            result.Add(d);
        }
    }
    return result;
}
You can use it to get the intermediate values between two input values. The usage can look like this:
// Requires: using System; using System.Collections.Generic; using System.Linq;
public static void Main(string[] args)
{
    IList<double> inputs = new double[] { 1, 6, 9, 3, 10 };
    for (int i = 0; i < inputs.Count - 1; i++)
    {
        double inputValue = inputs[i];
        double nextInputValue = inputs[i + 1];
        Console.WriteLine($"Input value is: {inputValue}");
        IList<double> intermediateValues = BuildIntermediateValues(inputValue, nextInputValue, 1.25);
        foreach (double intermediate in intermediateValues)
        {
            Console.WriteLine($"Intermediate: {intermediate}");
        }
    }
    Console.WriteLine($"Last input: {inputs.Last()}");
}
This will generate the following output:
Input value is: 1
Intermediate: 2.25
Intermediate: 3.5
Intermediate: 4.75
Input value is: 6
Intermediate: 7.25
Intermediate: 8.5
Input value is: 9
Intermediate: 7.75
Intermediate: 6.5
Intermediate: 5.25
Intermediate: 4
Input value is: 3
Intermediate: 4.25
Intermediate: 5.5
Intermediate: 6.75
Intermediate: 8
Intermediate: 9.25
Last input: 10
Adjust the stepSize value to your needs.
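To actually smooth the control, the intermediate values then have to be applied to the Speed property over time rather than all at once. Here is a minimal sketch of one way to do that, assuming the BuildIntermediateValues method above; the 0.5 step size and the idea of a second, short-interval timer draining a queue are illustrative choices, not the only option:

private readonly Queue<double> pendingValues = new Queue<double>();
private double lastTarget;

// Call this whenever a new raw value arrives (e.g. from OnTimedEvent).
private void OnNewValue(double newValue)
{
    lock (pendingValues)
    {
        foreach (double v in BuildIntermediateValues(lastTarget, newValue, 0.5))
        {
            pendingValues.Enqueue(v);
        }
        pendingValues.Enqueue(newValue);
        lastTarget = newValue;
    }
}

// A second System.Timers.Timer with a short interval (e.g. 100 ms) drains
// the queue, so Speed moves in many small steps instead of one big jump.
private void OnSmoothingTick(object source, System.Timers.ElapsedEventArgs e)
{
    lock (pendingValues)
    {
        if (pendingValues.Count > 0)
        {
            this.Speed = pendingValues.Dequeue();
        }
    }
}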
It's very easy to generate normally distributed data with a desired mean and standard deviation:
IEnumerable<double> sample = MathNet.Numerics.Distributions.Normal.Samples(mean, sd).Take(n);
However with a sufficiently large value for n you will get values miles away from the mean. To put it into context I have a real world data set with mean = 15.93 and sd = 6.84. For this data set it is impossible to have a value over 30 or under 0, but I cannot see a way to add upper and lower bounds to the data that is generated.
I can remove data that falls outside of this range as below, but this results in the mean and SD for the generated sample differing significantly (in my opinion, probably not statistically) from the values I requested.
Normal.Samples(mean, sd).Where(x => x is >= 0 and <= 30).Take(n);
Is there any way to ensure that the values generated fall within a specified range without affecting the mean and SD of the generated data?
The following proposed solution relies on a specific formula for calculating the standard deviation relative to the bounds: the standard deviation has to be a third of the difference between the mean and the required minimum or maximum.
This first code block is the TruncatedNormalDistribution class, which encapsulates MathNet's Normal class. The main technique for making a truncated normal distribution is in the constructor. Note the resulting workaround that is required in the Sample method:
using MathNet.Numerics.Distributions;

public class TruncatedNormalDistribution {

    public TruncatedNormalDistribution(double xMin, double xMax) {
        XMin = xMin;
        XMax = xMax;
        double mean = XMin + (XMax - XMin) / 2; // Halfway between minimum and maximum.
        // If the standard deviation is a third of the difference between the mean and
        // the required minimum or maximum of a normal distribution, 99.7% of samples
        // should be in the required range.
        double standardDeviation = (mean - XMin) / 3;
        Distribution = new Normal(mean, standardDeviation);
    }

    private Normal Distribution { get; }
    private double XMin { get; }
    private double XMax { get; }

    public double CumulativeDistribution(double x) {
        return Distribution.CumulativeDistribution(x);
    }

    public double Density(double x) {
        return Distribution.Density(x);
    }

    public double Sample() {
        // Constrain results lower than XMin or higher than XMax
        // to those bounds.
        return Math.Clamp(Distribution.Sample(), XMin, XMax);
    }
}
And here is a usage example. For a visual representation of the results, open each of the two output CSV files in a spreadsheet, such as Excel, and map its data to a line chart:
// Put the path of the folder where the CSVs will be saved here.
const string chartFolderPath = @"C:\Insert\chart\folder\path\here";
const double xMin = 0;
const double xMax = 100;
var distribution = new TruncatedNormalDistribution(xMin, xMax);

// Densities
var dictionary = new Dictionary<double, double>();
for (double x = xMin; x <= xMax; x += 1) {
    dictionary.Add(x, distribution.Density(x));
}
string csvPath = Path.Combine(
    chartFolderPath,
    $"Truncated Normal Densities, Range {xMin} to {xMax}.csv");
using var writer = new StreamWriter(csvPath);
foreach ((double key, double value) in dictionary) {
    writer.WriteLine($"{key},{value}");
}

// Cumulative Distributions
dictionary.Clear();
for (double x = xMin; x <= xMax; x += 1) {
    dictionary.Add(x, distribution.CumulativeDistribution(x));
}
csvPath = Path.Combine(
    chartFolderPath,
    $"Truncated Normal Cumulative Distributions, Range {xMin} to {xMax}.csv");
using var writer2 = new StreamWriter(csvPath);
foreach ((double key, double value) in dictionary) {
    writer2.WriteLine($"{key},{value}");
}
I'm working on a project with NAudio 1.9 and I want to compute an FFT for an entire song, i.e. split the song into chunks of equal size and compute the FFT for each chunk. The problem is that the NAudio FFT function returns really small and equal values for every frequency in the spectrum.
I searched for previous related posts but none seemed to help me.
The code that computes FFT using NAudio:
public IList<FrequencySpectrum> Fft(uint windowSize) {
    IList<Complex[]> timeDomainChunks = this.SplitInChunks(this.audioContent, windowSize);
    return timeDomainChunks.Select(this.ToFrequencySpectrum).ToList();
}

private IList<Complex[]> SplitInChunks(float[] audioContent, uint chunkSize) {
    IList<Complex[]> splittedContent = new List<Complex[]>();
    for (uint k = 0; k < audioContent.Length; k += chunkSize) {
        long size = k + chunkSize < audioContent.Length ? chunkSize : audioContent.Length - k;
        Complex[] chunk = new Complex[size];
        for (int i = 0; i < chunk.Length; i++) {
            // I've tried windowing here but it didn't seem to help me.
            chunk[i].X = audioContent[k + i];
            chunk[i].Y = 0;
        }
        splittedContent.Add(chunk);
    }
    return splittedContent;
}

private FrequencySpectrum ToFrequencySpectrum(Complex[] timeDomain) {
    int m = (int) Math.Log(timeDomain.Length, 2);
    // true = forward fft
    FastFourierTransform.FFT(true, m, timeDomain);
    return new FrequencySpectrum(timeDomain, 44100);
}
The FrequencySpectrum:
public struct FrequencySpectrum {

    private readonly Complex[] frequencyDomain;
    private readonly uint samplingFrequency;

    public FrequencySpectrum(Complex[] frequencyDomain, uint samplingFrequency) {
        if (frequencyDomain.Length == 0) {
            throw new ArgumentException("Argument value must be greater than 0", nameof(frequencyDomain));
        }
        if (samplingFrequency == 0) {
            throw new ArgumentException("Argument value must be greater than 0", nameof(samplingFrequency));
        }
        this.frequencyDomain = frequencyDomain;
        this.samplingFrequency = samplingFrequency;
    }

    // Returns the magnitude for freq.
    public float this[uint freq] {
        get {
            if (freq >= this.samplingFrequency) {
                throw new IndexOutOfRangeException();
            }
            // Find the corresponding bin.
            float k = freq / ((float) this.samplingFrequency / this.FftWindowSize);
            Complex c = this.frequencyDomain[checked((uint) k)];
            return (float) Math.Sqrt(c.X * c.X + c.Y * c.Y);
        }
    }
}
For a file that contains a sine wave of 440 Hz:
Expected output: values like 0.5 for freq = 440 and 0 for the others.
Actual output: values like 0.000168153987f for every frequency in the spectrum.
It seems that I made 4 mistakes:
1) Here I'm assuming that the sampling frequency is 44100. This was not the reason my code wasn't working, though:
return new FrequencySpectrum(timeDomain, 44100);
2) Always make a visual representation of your output data! I must learn this lesson... It seems that for a file containing a 440Hz sine wave I'm getting the right result but...
3) The frequency spectrum is a little shifted from what I was expecting because of this:
int m = (int) Math.Log(timeDomain.Length, 2);
FastFourierTransform.FFT(true, m, timeDomain);
timeDomain is an array of size 44100 because that's the value of windowSize (I called the method with windowSize = 44100), but the FFT method expects a window size that is a power of 2. I was effectively saying: "Here, NAudio, compute the FFT of this array that has 44100 elements, but take into account only the first 32768." I didn't realize that this was going to have serious implications for the result:
float k = freq / ((float) this.samplingFrequency / this.FftWindowSize);
Here this.FftWindowSize is a property based on the size of the array, not on m. So, after visualizing the result I found out that magnitude of 440Hz freq was actually corresponding to the call:
spectrum[371]
instead of
spectrum[440]
So, my mistake was that the FFT window size (m) did not correspond to the actual length of the array (FrequencySpectrum.FftWindowSize).
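For mistake 3, a minimal sketch of one possible fix, assuming the Fft method from the question: round the requested window size down to a power of two before splitting, so that m and the chunk length agree (the last, shorter chunk would still need to be zero-padded or dropped):

// Rounds e.g. 44100 down to 32768, so the FFT order m matches the chunk length (1 << m).
private static uint LargestPowerOfTwoAtMost(uint value) {
    uint result = 1;
    while (result * 2 <= value) {
        result *= 2;
    }
    return result;
}

public IList<FrequencySpectrum> Fft(uint windowSize) {
    uint fftSize = LargestPowerOfTwoAtMost(windowSize);
    IList<Complex[]> timeDomainChunks = this.SplitInChunks(this.audioContent, fftSize);
    return timeDomainChunks.Select(this.ToFrequencySpectrum).ToList();
}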
4) The small values that I was getting for the magnitudes came from the fact that the audio file I was testing my code on wasn't recorded with enough gain.
OK the title is ugly but the problem is quite straightforward:
I have a WPF control where I want to display plot lines. My "viewport" has its limits, and these limits (for example, bottom and top value in object coordinates) are doubles.
So I would like to draw lines at every multiple of, say, 5. If my viewport goes from -8.3 to 22.8, I would get [-5, 0, 5, 10, 15, 20].
I would like to use LINQ; it seems the natural candidate, but I cannot find a way...
I imagine something along these lines:
int nlines = (int)((upper_value - lower_value)/step);
var seq = Enumerable.Range((int)(Math.Ceiling(magic_number)), nlines).Select(what_else);
Given values are (double)lower_value, (double)upper_value and (int)step.
Enumerable.Range should do the trick, once the double bounds are converted to integers:
int start = (int)Math.Ceiling(lower_value);
int count = (int)Math.Floor(upper_value) - start + 1;
var lines = Enumerable.Range(start, count).Where(x => x % step == 0);
Try this code:
double lower_value = -8.3;
double upper_value = 22.8;
int step = 5;
int low = (int)lower_value / step;
int up = (int)upper_value / step;
var tt = Enumerable.Range(low, up - low + 1).Select(i => i * step);
EDIT
This code works for all negative values of lower_value, and for positive values that are divisible by the step. To make it work for the other positive values as well, the following correction should be applied:
if (lower_value > step * low)
low++;
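With the correction applied, a quick check for the values from the question prints the expected multiples:

Console.WriteLine(string.Join(", ", tt)); // -5, 0, 5, 10, 15, 20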
The first problem is to determine the nearest multiple of your step value from your starting point. Some simple arithmetic can deduce this value:
public static double RoundToMultiple(double value, double multiple)
{
    return value - value % multiple;
}
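For example, with the values from the question (note that in C# the % remainder keeps the sign of the dividend, so a negative value rounds up toward zero):

RoundToMultiple(-8.3, 5); // -5
RoundToMultiple(22.8, 5); // 20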
To then create a sequence of all multiples of a given value within a range, an iterator block is well suited:
public static IEnumerable<double> FactorsInRange(
    double start, double end, double factor)
{
    var current = RoundToMultiple(start, factor);
    while (current < end)
    {
        yield return current;
        current = current + factor;
    }
}
If you have the Generate method from MoreLinq, then you could write this without an explicit iterator block:
public static IEnumerable<double> FactorsInRange(
    double start, double end, double factor)
{
    return Generate(RoundToMultiple(start, factor),
            current => current + factor)
        .TakeWhile(current => current < end);
}
To avoid having to enumerate every number, you'll have to go outside of LINQ:
List<int> steps = new List<int>();
int currentStep = (int)(lower_value / step) * step; // This takes advantage of integer division to "floor" the factor.
steps.Add(currentStep);
while (currentStep + step <= upper_value)
{
    currentStep += step;
    steps.Add(currentStep);
}
I made some adjustments to the code.
private List<int> getMultiples(double lower_value, double upper_value, int step) {
    List<int> steps = new List<int>();
    int currentStep = (int)(lower_value / step) * step; // This takes advantage of integer division to "floor" the factor.
    while (currentStep <= upper_value) {
        steps.Add(currentStep);
        currentStep += step;
    }
    return steps;
}
I have a set of values, and an associated percentage for each:
a: 70% chance
b: 20% chance
c: 10% chance
I want to select a value (a, b, c) based on the percentage chance given.
How do I approach this?
My attempt so far looks like this:
r = random.random()
if r <= .7:
    return a
elif r <= .9:
    return b
else:
    return c
I'm stuck coming up with an algorithm to handle this. How should I approach it so it can handle larger sets of values without just chaining together if-else flows?
(any explanation or answers in pseudo-code are fine. a python or C# implementation would be especially helpful)
Here is a complete solution in C#:
public class ProportionValue<T>
{
    public double Proportion { get; set; }
    public T Value { get; set; }
}

public static class ProportionValue
{
    public static ProportionValue<T> Create<T>(double proportion, T value)
    {
        return new ProportionValue<T> { Proportion = proportion, Value = value };
    }

    static Random random = new Random();

    public static T ChooseByRandom<T>(
        this IEnumerable<ProportionValue<T>> collection)
    {
        var rnd = random.NextDouble();
        foreach (var item in collection)
        {
            if (rnd < item.Proportion)
                return item.Value;
            rnd -= item.Proportion;
        }
        throw new InvalidOperationException(
            "The proportions in the collection do not add up to 1.");
    }
}
Usage:
var list = new[] {
    ProportionValue.Create(0.7, "a"),
    ProportionValue.Create(0.2, "b"),
    ProportionValue.Create(0.1, "c")
};
// Outputs "a" with probability 0.7, etc.
Console.WriteLine(list.ChooseByRandom());
For Python:
>>> import random
>>> dst = 70, 20, 10
>>> vls = 'a', 'b', 'c'
>>> picks = [v for v, d in zip(vls, dst) for _ in range(d)]
>>> for _ in range(12): print random.choice(picks),
...
a c c b a a a a a a a a
>>> for _ in range(12): print random.choice(picks),
...
a c a c a b b b a a a a
>>> for _ in range(12): print random.choice(picks),
...
a a a a c c a c a a c a
>>>
General idea: make a list where each item is repeated a number of times proportional to the probability it should have; use random.choice to pick one at random (uniformly), and this will match your required probability distribution. It can be a bit wasteful of memory if your probabilities are expressed in peculiar ways (e.g., 70, 20, 10 makes a 100-item list, where 7, 2, 1 would make a list of just 10 items with exactly the same behavior), but you could divide all the counts in the probabilities list by their greatest common factor if you think that's likely to be a big deal in your specific application scenario.
Apart from memory consumption issues, this should be the fastest solution: just one random number generation per required output result, and the fastest possible lookup from that random number, with no comparisons etc. If your required probabilities are very weird (e.g., floating point numbers that need to be matched to many, many significant digits), other approaches may be preferable.
Knuth references Walker's method of aliases. Searching on this, I find http://code.activestate.com/recipes/576564-walkers-alias-method-for-random-objects-with-diffe/ and http://prxq.wordpress.com/2006/04/17/the-alias-method/. This gives the exact probabilities required in constant time per number generated with linear time for setup (curiously, n log n time for setup if you use exactly the method Knuth describes, which does a preparatory sort you can avoid).
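For reference, here is a minimal C# sketch of the alias method described in those links (Vose's variant, which avoids the preparatory sort); the class and member names are illustrative, not taken from those pages:

using System;
using System.Collections.Generic;
using System.Linq;

// Walker/Vose alias method: O(n) setup, O(1) work per sample.
class AliasSampler
{
    private readonly double[] prob;
    private readonly int[] alias;
    private readonly Random rng = new Random();

    public AliasSampler(double[] weights)
    {
        int n = weights.Length;
        prob = new double[n];
        alias = new int[n];
        double sum = weights.Sum();
        // Scale the weights so that their average is exactly 1.
        double[] scaled = weights.Select(w => w * n / sum).ToArray();
        var small = new Stack<int>();
        var large = new Stack<int>();
        for (int i = 0; i < n; i++)
            (scaled[i] < 1.0 ? small : large).Push(i);
        // Pair each under-full column with an over-full one.
        while (small.Count > 0 && large.Count > 0)
        {
            int s = small.Pop(), l = large.Pop();
            prob[s] = scaled[s];
            alias[s] = l;
            scaled[l] -= 1.0 - scaled[s];
            (scaled[l] < 1.0 ? small : large).Push(l);
        }
        // Whatever is left is (up to rounding) exactly full.
        while (large.Count > 0) prob[large.Pop()] = 1.0;
        while (small.Count > 0) prob[small.Pop()] = 1.0;
    }

    public int Next()
    {
        int column = rng.Next(prob.Length);
        return rng.NextDouble() < prob[column] ? column : alias[column];
    }
}

With weights { 70, 20, 10 }, Next() returns index 0 with probability 0.7, index 1 with probability 0.2, and index 2 with probability 0.1.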
Take the list of weights and find the cumulative totals: 70, 70+20, 70+20+10. Pick a random number greater than or equal to zero and less than the total. Iterate over the items and return the first value for which the cumulative sum of the weights is greater than this random number:
def select(values):
    variate = random.random() * sum(values.values())
    cumulative = 0.0
    for item, weight in values.items():
        cumulative += weight
        if variate < cumulative:
            return item
    return item  # Shouldn't get here, but just in case of rounding...

print select({"a": 70, "b": 20, "c": 10})
This solution, as implemented, should also be able to handle fractional weights and weights that add up to any number so long as they're all non-negative.
Let T = the sum of all item weights.
Let R = a random number between 0 and T.
Iterate the item list, subtracting each item weight from R, and return the item that causes the result to become <= 0.
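A minimal C# sketch of that pseudocode, assuming non-negative double weights and using System.Linq (the tuple names and the PickWeighted helper are illustrative):

// Subtraction method: draw R in [0, T) and walk the list,
// subtracting weights until R drops to zero or below.
static T PickWeighted<T>(IReadOnlyList<(T Item, double Weight)> items, Random rng)
{
    double total = items.Sum(x => x.Weight);
    double r = rng.NextDouble() * total;
    foreach (var (item, weight) in items)
    {
        r -= weight;
        if (r <= 0)
            return item;
    }
    return items[items.Count - 1].Item; // Guard against floating-point rounding drift.
}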
def weighted_choice(probabilities):
    random_position = random.random() * sum(probabilities)
    current_position = 0.0
    for i, p in enumerate(probabilities):
        current_position += p
        if random_position < current_position:
            return i
    return None
Because random.random will always return < 1.0, the final return should never be reached.
import random

def selector(weights):
    i = random.random() * sum(x for x, y in weights)
    for w, v in weights:
        if w >= i:
            break
        i -= w
    return v

weights = ((70, 'a'), (20, 'b'), (10, 'c'))
print [selector(weights) for x in range(10)]
It works equally well for fractional weights:
weights = ((0.7, 'a'), (0.2, 'b'), (0.1, 'c'))
print [selector(weights) for x in range(10)]
If you have a lot of weights, you can use bisect to reduce the number of iterations required:
import random
import bisect

def make_acc_weights(weights):
    acc = 0
    acc_weights = []
    for w, v in weights:
        acc += w
        acc_weights.append((acc, v))
    return acc_weights

def selector(acc_weights):
    i = random.random() * acc_weights[-1][0]
    return acc_weights[bisect.bisect(acc_weights, (i,))][1]

weights = ((70, 'a'), (20, 'b'), (10, 'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
It also works fine for fractional weights:
weights = ((0.7, 'a'), (0.2, 'b'), (0.1, 'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
Today, the Python documentation gives an example of how to make a random.choice() with weighted probabilities:
If the weights are small integer ratios, a simple technique is to build a sample population with repeats:
>>> weighted_choices = [('Red', 3), ('Blue', 2), ('Yellow', 1), ('Green', 4)]
>>> population = [val for val, cnt in weighted_choices for i in range(cnt)]
>>> random.choice(population)
'Green'
A more general approach is to arrange the weights in a cumulative distribution with itertools.accumulate(), and then locate the random value with bisect.bisect():
>>> choices, weights = zip(*weighted_choices)
>>> cumdist = list(itertools.accumulate(weights))
>>> x = random.random() * cumdist[-1]
>>> choices[bisect.bisect(cumdist, x)]
'Blue'
One note: itertools.accumulate() needs Python 3.2; on earlier versions, define it with the equivalent code from its documentation.
I think you can have an array of small objects (I implemented it in Java; I know a little bit of C# but I am afraid I would write wrong code), so you may need to port it yourself. The code in C# would be much smaller with struct and var, but I hope you get the idea:
class PercentString {
    double percent;
    String value;

    // Constructor for the 2 values.
    PercentString(double percent, String value) {
        this.percent = percent;
        this.value = value;
    }
}

ArrayList<PercentString> list = new ArrayList<PercentString>();
list.add(new PercentString(70, "a"));
list.add(new PercentString(20, "b"));
list.add(new PercentString(10, "c"));

double random = Math.random() * 100; // Uniform in [0, 100), matching the percent scale.
double percent = 0;
for (int i = 0; i < list.size(); i++) {
    PercentString p = list.get(i);
    percent += p.percent;
    if (random < percent) {
        return p.value;
    }
}
If you really care about speed and want to generate the random values quickly, the Walker's algorithm mcdowella mentioned in https://stackoverflow.com/a/3655773/1212517 is pretty much the best way to go (O(1) time for random(), and O(N) time for preprocess()).
For anyone who is interested, here is my own PHP implementation of the algorithm:
/**
 * Pre-process the samples (Walker's alias method).
 * @param array $weights key represents the sample, value is the weight
 */
protected function preprocess($weights){
    $N = count($weights);
    $sum = array_sum($weights);
    $avg = $sum / (double)$N;

    // Divide the array of weights into values smaller than, and geq to, sum/N.
    $smaller = array_filter($weights, function($itm) use ($avg){ return $avg > $itm; });
    $sN = count($smaller);
    $greater_eq = array_filter($weights, function($itm) use ($avg){ return $avg <= $itm; });
    $gN = count($greater_eq);

    $bin = array(); // bins
    // We want to fill N bins.
    for($i = 0; $i < $N; $i++){
        // At first, decide on a first value for this bin.
        // If there are small intervals left, we choose one.
        if($sN > 0){
            $choice1 = each($smaller);
            unset($smaller[$choice1['key']]);
            $sN--;
        } else { // Otherwise, we split a large interval.
            $choice1 = each($greater_eq);
            unset($greater_eq[$choice1['key']]);
        }
        // Splitting happens here - the unused part of the interval is thrown back into the array.
        if($choice1['value'] >= $avg){
            if($choice1['value'] - $avg >= $avg){
                $greater_eq[$choice1['key']] = $choice1['value'] - $avg;
            }else if($choice1['value'] - $avg > 0){
                $smaller[$choice1['key']] = $choice1['value'] - $avg;
                $sN++;
            }
            // This bin comprises only one value.
            $bin[] = array(1=>$choice1['key'], 2=>null, 'p1'=>1, 'p2'=>0);
        }else{
            // Make the second choice for the current bin.
            $choice2 = each($greater_eq);
            unset($greater_eq[$choice2['key']]);
            // Splitting on the second interval.
            if($choice2['value'] - $avg + $choice1['value'] >= $avg){
                $greater_eq[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
            }else{
                $smaller[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
                $sN++;
            }
            // This bin comprises two values.
            $choice2['value'] = $avg - $choice1['value'];
            $bin[] = array(1=>$choice1['key'], 2=>$choice2['key'],
                'p1'=>$choice1['value'] / $avg,
                'p2'=>$choice2['value'] / $avg);
        }
    }
    $this->bins = $bin;
}

/**
 * Choose a random sample according to the weights.
 */
public function random(){
    $bin = $this->bins[array_rand($this->bins)];
    return (lcg_value() < $bin['p1']) ? $bin[1] : $bin[2];
}
Here is my version, which can be applied to any IList and normalizes the weights. It is based on Timwi's solution: selection based on percentage weighting.
/// <summary>
/// Returns a random element of the list, or default if the list is empty.
/// </summary>
/// <param name="e"></param>
/// <param name="weightSelector">
/// Returns the chance of the element being picked. A weight of 0 or less means 0 chance to be picked.
/// If all elements have a weight of 0 or less, they all have equal chances to be picked.
/// </param>
/// <returns></returns>
public static T AnyOrDefault<T>(this IList<T> e, Func<T, double> weightSelector)
{
    if (e.Count < 1)
        return default(T);
    if (e.Count == 1)
        return e[0];
    var weights = e.Select(o => Math.Max(weightSelector(o), 0)).ToArray();
    var sum = weights.Sum(d => d);
    var rnd = new Random().NextDouble();
    for (int i = 0; i < weights.Length; i++)
    {
        // Normalize the weight.
        var w = sum == 0
            ? 1 / (double)e.Count
            : weights[i] / sum;
        if (rnd < w)
            return e[i];
        rnd -= w;
    }
    throw new Exception("Should not happen");
}
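A possible usage sketch (the strings and weights here are illustrative):

var items = new List<string> { "a", "b", "c" };
// Weight "a" at 70, "b" at 20, "c" at 10; AnyOrDefault normalizes the weights itself.
string chosen = items.AnyOrDefault(s => s == "a" ? 70.0 : s == "b" ? 20.0 : 10.0);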
I have my own solution for this:
public class Randomizator3000
{
    public class Item<T>
    {
        public T value;
        public float weight;

        public static float GetTotalWeight(Item<T>[] p_itens)
        {
            float __toReturn = 0;
            foreach (var item in p_itens)
            {
                __toReturn += item.weight;
            }
            return __toReturn;
        }
    }

    private static System.Random _randHolder;
    private static System.Random _random
    {
        get
        {
            if (_randHolder == null)
                _randHolder = new System.Random();
            return _randHolder;
        }
    }

    public static T PickOne<T>(Item<T>[] p_itens)
    {
        if (p_itens == null || p_itens.Length == 0)
        {
            return default(T);
        }

        float __randomizedValue = (float)_random.NextDouble() * Item<T>.GetTotalWeight(p_itens);
        float __adding = 0;
        for (int i = 0; i < p_itens.Length; i++)
        {
            float __cacheValue = p_itens[i].weight + __adding;
            if (__randomizedValue <= __cacheValue)
            {
                return p_itens[i].value;
            }
            __adding = __cacheValue;
        }

        return p_itens[p_itens.Length - 1].value;
    }
}
And using it should look something like this (this is in Unity3D):
using UnityEngine;
using System.Collections;

public class teste : MonoBehaviour
{
    Randomizator3000.Item<string>[] lista;

    void Start()
    {
        lista = new Randomizator3000.Item<string>[10];
        lista[0] = new Randomizator3000.Item<string>();
        lista[0].weight = 10;
        lista[0].value = "a";
        lista[1] = new Randomizator3000.Item<string>();
        lista[1].weight = 10;
        lista[1].value = "b";
        lista[2] = new Randomizator3000.Item<string>();
        lista[2].weight = 10;
        lista[2].value = "c";
        lista[3] = new Randomizator3000.Item<string>();
        lista[3].weight = 10;
        lista[3].value = "d";
        lista[4] = new Randomizator3000.Item<string>();
        lista[4].weight = 10;
        lista[4].value = "e";
        lista[5] = new Randomizator3000.Item<string>();
        lista[5].weight = 10;
        lista[5].value = "f";
        lista[6] = new Randomizator3000.Item<string>();
        lista[6].weight = 10;
        lista[6].value = "g";
        lista[7] = new Randomizator3000.Item<string>();
        lista[7].weight = 10;
        lista[7].value = "h";
        lista[8] = new Randomizator3000.Item<string>();
        lista[8].weight = 10;
        lista[8].value = "i";
        lista[9] = new Randomizator3000.Item<string>();
        lista[9].weight = 10;
        lista[9].value = "j";
    }

    void Update()
    {
        Debug.Log(Randomizator3000.PickOne<string>(lista));
    }
}
In this example each value has a 10% chance to be displayed in the debug log =3
Based loosely on Python's numpy.random.choice(a=items, p=probs), which takes an array and a probability array of the same size.
public T RandomChoice<T>(IEnumerable<T> a, IEnumerable<double> p)
{
    IEnumerator<T> ae = a.GetEnumerator();
    Random random = new Random();
    double target = random.NextDouble();

    double accumulator = 0;
    foreach (var prob in p)
    {
        ae.MoveNext();
        accumulator += prob;
        if (accumulator > target)
        {
            break;
        }
    }

    return ae.Current;
}
The probability array p must sum to (approx.) 1. This is to keep it consistent with the numpy interface (and mathematics), but you could easily change that if you wanted.
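A usage sketch, mirroring the weights from the question:

// Picks "a" about 70%, "b" about 20%, and "c" about 10% of the time.
string picked = RandomChoice(new[] { "a", "b", "c" }, new[] { 0.7, 0.2, 0.1 });
Console.WriteLine(picked);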