I'm following Sebastian Lague's Procedural Landmass Generation tutorial. He colors the landmass by height value, but I want to separate the landmass into an array or list of areas grouped by their color, because the way Sebastian does it produces too many water or mountain areas and I want to reduce that. I tried editing his code, but my version takes 2-3 minutes to do the separation. Does anyone have an idea how to make it faster?
The class I use to separate regions and the areas in every region:
public class Positions
{
public int regionID;
public int areaID;
public int x;
public float y;
public int z;
}
List<Positions> positions = new List<Positions>();
The code I use to find and list them:
for (int y = 0; y<mapChunkSize; y++)
{
for (int x = 0; x<mapChunkSize; x++)
{
float currentHeight = noiseMap[x, y];
for (int i = 0; i<regions.Length; i++)
{
if (currentHeight <= regions[i].height)
{
int areaID = 0;
if (positions.Where(p => p.regionID == i).Count() != 0)
{
areaID = getNeighourIndex(i, x, y);
}
Positions p = new Positions { regionID = i, areaID = areaID, x = x, y = noiseMap[x, y], z = y };
positions.Add(p);
break;
}
}
}
}
int getNeighourIndex(int regionID, int x, int y)
{
List<Positions> pos = positions.Where(p => p.regionID == regionID).ToList();
if (pos.Where(q => Vector2Int.Distance(new Vector2Int(q.x, q.z), new Vector2Int(x, y)) <= 1).Count() > 0)
return pos.Find(q => Vector2Int.Distance(new Vector2Int(q.x, q.z), new Vector2Int(x, y)) <= 1).areaID;
return (pos.Select(q => q.areaID).OrderBy(a => a).LastOrDefault()) + 1;
}
As always when doing optimizing, the first step should be to measure, ideally with a profiler since this can hint at what it is that takes most time.
But I would guess that the majority of the time is spent in positions.Where(x => x.regionID == regionID). To solve this you could use a multi value dictionary, i.e. a dictionary where each key can map to multiple values. It is fairly easy to make your own wrapper around a Dictionary<TKey, List<TValue>>, or you could use one from Microsoft.Experimental.Collections.
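For illustration, a minimal sketch of such a wrapper might look like this (the class and member names are my own, not the Microsoft.Experimental.Collections API); keying positions by regionID turns the repeated Where scans into O(1) lookups:
using System;
using System.Collections.Generic;

public class MultiValueDictionary<TKey, TValue>
{
    private readonly Dictionary<TKey, List<TValue>> _inner =
        new Dictionary<TKey, List<TValue>>();

    public void Add(TKey key, TValue value)
    {
        if (!_inner.TryGetValue(key, out List<TValue> list))
        {
            list = new List<TValue>();
            _inner[key] = list;
        }
        list.Add(value);
    }

    // Returns an empty collection when the key has no entries yet.
    public IReadOnlyList<TValue> Get(TKey key)
    {
        return _inner.TryGetValue(key, out List<TValue> list)
            ? (IReadOnlyList<TValue>)list
            : Array.Empty<TValue>();
    }
}
With something like this, positions.Where(p => p.regionID == i) becomes a single Get(i) call, and you could add a second dictionary keyed by (x, z) if the neighbour lookup is still the bottleneck.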
You could also consider using a hierarchical search structure for the points, like a kd-tree, quad-tree or similar.
Also, if performance is of high importance, using LINQ is likely not the best option. LINQ is convenient, but the abstraction adds some overhead. In most cases this overhead is irrelevant, but in tight loops like this it may very well be a significant factor.
An STL file may contain two or more 3D models. Is there any way I can detect whether two or more models are stored in one STL file?
My current code can detect that there are two models in the example, but there are cases where it detects many models even though the file only contains one.
The Triangle class structure has Vertices, which contains 3 points (x, y, z).
Sample STL File:
EDIT: Using @Gebb's answer, this is how I implemented it:
private int GetNumberOfModels(List<TopoVertex> vertices)
{
Vertex[][] triangles = new Vertex[vertices.Count() / 3][];
int vertIdx = 0;
for(int i = 0; i < vertices.Count() / 3; i++)
{
Vertex v1 = new Vertex(vertices[vertIdx].pos.x, vertices[vertIdx].pos.y, vertices[vertIdx].pos.z);
Vertex v2 = new Vertex(vertices[vertIdx + 1].pos.x, vertices[vertIdx + 1].pos.y, vertices[vertIdx + 1].pos.z);
Vertex v3 = new Vertex(vertices[vertIdx + 2].pos.x, vertices[vertIdx + 2].pos.y, vertices[vertIdx + 2].pos.z);
triangles[i] = new Vertex[] { v1, v2, v3 };
vertIdx += 3;
}
var uniqueVertices = new HashSet<Vertex>(triangles.SelectMany(t => t));
int vertexCount = uniqueVertices.Count;
// The DisjointUnionSets class works with integers, so we need a map from vertex
// to integer (its id).
Dictionary<Vertex, int> indexedVertices = uniqueVertices
.Zip(
Enumerable.Range(0, vertexCount),
(v, i) => new { v, i })
.ToDictionary(vi => vi.v, vi => vi.i);
int[][] indexedTriangles =
triangles
.Select(t => t.Select(v => indexedVertices[v]).ToArray())
.ToArray();
var du = new XYZ.view.wpf.DisjointUnionSets(vertexCount);
// Iterate over the "triangles" consisting of vertex ids.
foreach (int[] triangle in indexedTriangles)
{
int vertex0 = triangle[0];
// Mark the 0-th vertex's connected component as connected to those of all other vertices.
foreach (int v in triangle.Skip(1))
{
du.Union(vertex0, v);
}
}
var connectedComponents =
new HashSet<int>(Enumerable.Range(0, vertexCount).Select(x => du.Find(x)));
return connectedComponents.Count;
}
In some cases it produces the correct output, but for the example image above it outputs 3 instead of 2. I am now trying to adapt the snippet @Gebb gave to use float values, since I believe the floating-point values matter for the comparisons. Does anyone have a way to do that as well? Maybe I need another perspective.
You could do this by representing vertices and connections between them as a graph and finding the number of connected components of the graph with the help of the Disjoint-set data structure.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using Vertex = System.ValueTuple<double,double,double>;
namespace UnionFindSample
{
internal class DisjointUnionSets
{
private readonly int _n;
private readonly int[] _rank;
private readonly int[] _parent;
public DisjointUnionSets(int n)
{
_rank = new int[n];
_parent = new int[n];
_n = n;
MakeSet();
}
// Creates n sets with single item in each
public void MakeSet()
{
for (var i = 0; i < _n; i++)
// Initially, all elements are in
// their own set.
_parent[i] = i;
}
// Finds the representative of the set
// that x is an element of.
public int Find(int x)
{
if (_parent[x] != x)
{
// if x is not the parent of itself, then x is not the representative of
// his set.
// We do the path compression by moving x’s node directly under the representative
// of this set.
_parent[x] = Find(_parent[x]);
}
return _parent[x];
}
// Unites the set that includes x and
// the set that includes y
public void Union(int x, int y)
{
// Find representatives of two sets.
int xRoot = Find(x), yRoot = Find(y);
// Elements are in the same set, no need to unite anything.
if (xRoot == yRoot)
{
return;
}
if (_rank[xRoot] < _rank[yRoot])
{
// Then move x under y so that depth of tree remains equal to _rank[yRoot].
_parent[xRoot] = yRoot;
}
else if (_rank[yRoot] < _rank[xRoot])
{
// Then move y under x so that depth of tree remains equal to _rank[xRoot].
_parent[yRoot] = xRoot;
}
else
{
// if ranks are the same
// then move y under x (doesn't matter which one goes where).
_parent[yRoot] = xRoot;
// And increment the result tree's
// rank by 1
_rank[xRoot] = _rank[xRoot] + 1;
}
}
}
internal class Program
{
private static void Main(string[] args)
{
string file = args[0];
Vertex[][] triangles = ParseStl(file);
var uniqueVertices = new HashSet<Vertex>(triangles.SelectMany(t => t));
int vertexCount = uniqueVertices.Count;
// The DisjointUnionSets class works with integers, so we need a map from vertex
// to integer (its id).
Dictionary<Vertex, int> indexedVertices = uniqueVertices
.Zip(
Enumerable.Range(0, vertexCount),
(v, i) => new {v, i})
.ToDictionary(vi => vi.v, vi => vi.i);
int[][] indexedTriangles =
triangles
.Select(t => t.Select(v => indexedVertices[v]).ToArray())
.ToArray();
var du = new DisjointUnionSets(vertexCount);
// Iterate over the "triangles" consisting of vertex ids.
foreach (int[] triangle in indexedTriangles)
{
int vertex0 = triangle[0];
// Mark the 0-th vertex's connected component as connected to those of all other vertices.
foreach (int v in triangle.Skip(1))
{
du.Union(vertex0, v);
}
}
var connectedComponents =
new HashSet<int>(Enumerable.Range(0, vertexCount).Select(x => du.Find(x)));
int count = connectedComponents.Count;
Console.WriteLine($"Number of connected components: {count}.");
var groups = triangles.GroupBy(t => du.Find(indexedVertices[t[0]]));
foreach (IGrouping<int, Vertex[]> g in groups)
{
Console.WriteLine($"Group id={g.Key}:");
foreach (Vertex[] triangle in g)
{
string tr = string.Join(' ', triangle);
Console.WriteLine($"\t{tr}");
}
}
}
private static Regex _triangleStart = new Regex(@"^\s+outer loop");
private static Regex _triangleEnd = new Regex(@"^\s+endloop");
private static Regex _vertex = new Regex(@"^\s+vertex\s+(\S+)\s+(\S+)\s+(\S+)");
private static Vertex[][] ParseStl(string file)
{
double ParseCoordinate(GroupCollection gs, int i) =>
double.Parse(gs[i].Captures[0].Value, CultureInfo.InvariantCulture);
var triangles = new List<Vertex[]>();
bool isInsideTriangle = false;
List<Vertex> triangle = new List<Vertex>();
foreach (string line in File.ReadAllLines(file))
{
if (isInsideTriangle)
{
if (_triangleEnd.IsMatch(line))
{
isInsideTriangle = false;
triangles.Add(triangle.ToArray());
triangle = new List<Vertex>();
continue;
}
Match vMatch = _vertex.Match(line);
if (vMatch.Success)
{
double x1 = ParseCoordinate(vMatch.Groups, 1);
double x2 = ParseCoordinate(vMatch.Groups, 2);
double x3 = ParseCoordinate(vMatch.Groups, 3);
triangle.Add((x1, x2, x3));
}
}
else
{
if (_triangleStart.IsMatch(line))
{
isInsideTriangle = true;
}
}
}
return triangles.ToArray();
}
}
}
I'm also using the fact that System.ValueTuple implements Equals and GetHashCode in an appropriate way, so we can easily compare vertices (this is used implicitly by HashSet) and use them as keys in a dictionary.
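Note that ValueTuple equality is exact, so two vertices that differ only by floating-point noise will be treated as distinct and can split one model into several components. A hedged sketch of one possible workaround (my own addition, not part of the original answer): snap each coordinate onto a tolerance grid before building the HashSet and dictionary keys. The tolerance value below is an assumption and should be chosen well below the mesh resolution.
using System;
using Vertex = System.ValueTuple<double, double, double>;

static class VertexQuantizer
{
    // Snap each coordinate onto a grid of size 'tolerance' so nearly equal
    // vertices produce the same hash-set / dictionary key.
    public static Vertex Quantize(double x, double y, double z, double tolerance = 1e-6)
    {
        return (Math.Round(x / tolerance) * tolerance,
                Math.Round(y / tolerance) * tolerance,
                Math.Round(z / tolerance) * tolerance);
    }
}
Grid snapping can still split a pair of vertices that straddles a cell boundary; in practice a small tolerance usually suffices, but a true spatial search would be needed for a fully robust merge.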
Assuming the function takes in a list of doubles and an index to perform the check from, I need to check whether the values alternate up and down consecutively.
For example, a list of [14.0,12.3,13.0,11.4] alternates consecutively but a list of [14.0,12.3,11.4,13.0] doesn't.
The algorithm doesn't have to be fast, but I'd like it to be compact to write (LINQ is totally fine). This is my current method, and it looks way too crude to my taste:
enum AlternatingDirection { Rise, Fall, None };
public bool CheckConsecutiveAlternation(List<double> dataList, int currDataIndex)
{
/*
* Result True : Fail
* Result False : Pass
*/
if (!_continuousRiseFallCheckBool)
return false;
if (dataList.Count < _continuousRiseFallValue)
return false;
if (currDataIndex + 1 < _continuousRiseFallValue)
return false;
AlternatingDirection direction = AlternatingDirection.None;
int startIndex = currDataIndex - _continuousRiseFallValue + 1;
double prevVal = 0;
for (int i = startIndex; i <= currDataIndex; i++)
{
if (i == startIndex)
{
prevVal = dataList[i];
continue;
}
if (prevVal > dataList[i])
{
prevVal = dataList[i];
switch (direction)
{
case AlternatingDirection.None:
direction = AlternatingDirection.Fall;
continue;
case AlternatingDirection.Rise:
direction = AlternatingDirection.Fall;
continue;
default:
//Two falls in a row. Not a signal.
return false;
}
}
if (prevVal < dataList[i])
{
prevVal = dataList[i];
switch (direction)
{
case AlternatingDirection.None:
direction = AlternatingDirection.Rise;
continue;
case AlternatingDirection.Fall:
direction = AlternatingDirection.Rise;
continue;
default:
//Two rise in a row. Not a signal.
return false;
}
}
return false;
}
//Alternated n times until here. Data is out of control.
return true;
}
Try this:
public static bool IsAlternating(double[] data)
{
var d = GetDerivative(data);
var signs = d.Select(val => Math.Sign(val));
bool isAlternating =
signs.Zip(signs.Skip(1), (a, b) => a != b).All(isAlt => isAlt);
return isAlternating;
}
private static IEnumerable<double> GetDerivative(double[] data)
{
var d = data.Zip(data.Skip(1), (a, b) => b - a);
return d;
}
The idea is:
If the given list of values alternates up and down, mathematically it means that its derivative keeps changing its sign.
So this is exactly what this piece of code does:
Get the derivative.
Checks for sign fluctuations.
A bonus is that the derivative and sign sequences are evaluated lazily, so they are not fully computed unless necessary.
I'd do it with a couple of consecutive zips, bundled in an extension method:
public static class AlternatingExtensions {
public static bool IsAlternating<T>(this IList<T> list) where T : IComparable<T>
{
var diffSigns = list.Zip(list.Skip(1), (a,b) => b.CompareTo(a));
var signChanges = diffSigns.Zip(diffSigns.Skip(1), (a,b) => a * b < 0);
return signChanges.All(s => s);
}
}
Edit: for completeness, here's how you'd use the feature:
var alternatingList = new List<double> { 14.0, 12.3, 13.0, 11.4 };
var nonAlternatingList = new List<double> { 14.0, 12.3, 11.4, 13.0 };
alternatingList.IsAlternating(); // true
nonAlternatingList.IsAlternating(); // false
I also changed the implementation to work on more types, making use of generics as much as possible.
Here is a small piece of pseudocode, assuming no repeated elements (that case can be handled with a few tweaks).
The idea is to keep a sign variable that alternates between 1 and -1 and is multiplied by the difference of each consecutive pair; that product must always be positive. If it is not at some point, return false. (A C# translation is sketched after the pseudocode below.)
isUpAndDown(l):
if size(l) < 2: // empty,singleton list is always good.
return true
int sign = (l[0] < l[1] ? 1 : -1)
for i from 0 to n-1 (exclusive):
if sign * (l[i+1] - l[i]) < 0:
return false //not alternating
sign = sign * -1
end for
return true //all good
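For completeness, here is a hedged C# translation of that pseudocode (same assumption of no equal neighbouring elements; the method name is mine):
using System.Collections.Generic;

public static bool IsUpAndDown(IReadOnlyList<double> l)
{
    if (l.Count < 2)
        return true; // an empty or singleton list is always good

    int sign = l[0] < l[1] ? 1 : -1;
    for (int i = 0; i < l.Count - 1; i++)
    {
        if (sign * (l[i + 1] - l[i]) < 0)
            return false; // not alternating
        sign = -sign;
    }
    return true; // all good
}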
You could first create a kind of sign array:
double previous = 0;
var sign = myList.Select(x => {
int s = Math.Sign(x - previous);
previous = x;
return s;
});
This gives you a list similar to
{ -1, 1, -1, ... }
Now you can take a similar approach as in the previous Select statement to check whether a -1 follows a 1:
var result = sign.All(x => {
bool b = x == -previous;
previous = x;
return b;
});
Now result is true if your list alternates, false otherwise.
EDIT: To ensure that the very first check within the second query also passes, set previous = -sign.First(); before the second query.
Assuming that two equal values in a row are not acceptable (if they are, just skip over equal values):
if (dataList[0] == dataList[1])
return false;
bool nextMustRise = dataList[0] > dataList[1];
for (int i = 2; i < dataList.Count; i++) {
if (dataList[i - 1] == dataList[i] || (dataList[i - 1] < dataList[i]) != nextMustRise)
return false;
nextMustRise = !nextMustRise;
}
return true;
public double RatioOfAlternations(double[] dataList)
{
double Alternating = 0;
double Total = dataList.Length - 2;
for (int i = 0; i < Total; i++)
{
// If the previous change has the opposite sign to the current change, the product is negative
if ((dataList[i+1] - dataList[i]) * (dataList[i+2] - dataList[i+1]) < 0)
{
Alternating++;
}
}
return (Alternating/Total);
}
I have a set of values, and an associated percentage for each:
a: 70% chance
b: 20% chance
c: 10% chance
I want to select a value (a, b, c) based on the percentage chance given.
How do I approach this?
My attempt so far looks like this:
r = random.random()
if r <= .7:
return a
elif r <= .9:
return b
else:
return c
I'm stuck coming up with an algorithm to handle this. How should I approach this so it can handle larger sets of values without just chaining together if-else flows?
(Any explanation or answer in pseudocode is fine; a Python or C# implementation would be especially helpful.)
Here is a complete solution in C#:
public class ProportionValue<T>
{
public double Proportion { get; set; }
public T Value { get; set; }
}
public static class ProportionValue
{
public static ProportionValue<T> Create<T>(double proportion, T value)
{
return new ProportionValue<T> { Proportion = proportion, Value = value };
}
static Random random = new Random();
public static T ChooseByRandom<T>(
this IEnumerable<ProportionValue<T>> collection)
{
var rnd = random.NextDouble();
foreach (var item in collection)
{
if (rnd < item.Proportion)
return item.Value;
rnd -= item.Proportion;
}
throw new InvalidOperationException(
"The proportions in the collection do not add up to 1.");
}
}
Usage:
var list = new[] {
ProportionValue.Create(0.7, "a"),
ProportionValue.Create(0.2, "b"),
ProportionValue.Create(0.1, "c")
};
// Outputs "a" with probability 0.7, etc.
Console.WriteLine(list.ChooseByRandom());
For Python:
>>> import random
>>> dst = 70, 20, 10
>>> vls = 'a', 'b', 'c'
>>> picks = [v for v, d in zip(vls, dst) for _ in range(d)]
>>> for _ in range(12): print random.choice(picks),
...
a c c b a a a a a a a a
>>> for _ in range(12): print random.choice(picks),
...
a c a c a b b b a a a a
>>> for _ in range(12): print random.choice(picks),
...
a a a a c c a c a a c a
>>>
General idea: make a list where each item is repeated a number of times proportional to the probability it should have; use random.choice to pick one at random (uniformly), and this will match your required probability distribution. It can be a bit wasteful of memory if your probabilities are expressed in peculiar ways (e.g., 70, 20, 10 makes a 100-item list where 7, 2, 1 would make a list of just 10 items with exactly the same behavior), but you could divide all the counts in the probabilities list by their greatest common factor if you think that's likely to be a big deal in your specific application scenario.
Apart from memory consumption issues, this should be the fastest solution -- just one random number generation per required output result, and the fastest possible lookup from that random number, no comparisons &c. If your likely probabilities are very weird (e.g., floating point numbers that need to be matched to many, many significant digits), other approaches may be preferable;-).
Knuth references Walker's method of aliases. Searching on this, I find http://code.activestate.com/recipes/576564-walkers-alias-method-for-random-objects-with-diffe/ and http://prxq.wordpress.com/2006/04/17/the-alias-method/. This gives the exact probabilities required in constant time per number generated with linear time for setup (curiously, n log n time for setup if you use exactly the method Knuth describes, which does a preparatory sort you can avoid).
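For reference, here is a hedged C# sketch of the alias method (my own illustration based on the description above, not code from the linked recipes; it assumes non-negative weights with a positive sum). Sample() returns index i with probability weights[i] / sum, in O(1) per call after O(n) setup:
using System;
using System.Collections.Generic;

public class AliasSampler
{
    private readonly double[] _prob;   // probability of keeping column i
    private readonly int[] _alias;     // fallback column when we do not keep i
    private readonly Random _rng = new Random();

    public AliasSampler(IReadOnlyList<double> weights)
    {
        int n = weights.Count;
        _prob = new double[n];
        _alias = new int[n];

        double total = 0;
        foreach (double w in weights) total += w;

        // Scale weights so the average column height is exactly 1.
        var scaled = new double[n];
        for (int i = 0; i < n; i++) scaled[i] = weights[i] * n / total;

        var small = new Stack<int>();
        var large = new Stack<int>();
        for (int i = 0; i < n; i++)
            (scaled[i] < 1.0 ? small : large).Push(i);

        // Pair each under-full column with an over-full one.
        while (small.Count > 0 && large.Count > 0)
        {
            int s = small.Pop(), l = large.Pop();
            _prob[s] = scaled[s];
            _alias[s] = l;
            scaled[l] = scaled[l] + scaled[s] - 1.0;
            (scaled[l] < 1.0 ? small : large).Push(l);
        }
        while (large.Count > 0) _prob[large.Pop()] = 1.0;
        while (small.Count > 0) _prob[small.Pop()] = 1.0; // numerical leftovers
    }

    public int Sample()
    {
        int column = _rng.Next(_prob.Length);
        return _rng.NextDouble() < _prob[column] ? column : _alias[column];
    }
}
For the question's weights you could construct it as new AliasSampler(new[] { 0.7, 0.2, 0.1 }) and map the returned index back to a, b, c.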
Take the list of weights and find the cumulative totals: 70, 70+20, 70+20+10. Pick a random number greater than or equal to zero and less than the total. Iterate over the items and return the first value for which the cumulative sum of the weights is greater than this random number:
def select( values ):
variate = random.random() * sum( values.values() )
cumulative = 0.0
for item, weight in values.items():
cumulative += weight
if variate < cumulative:
return item
return item # Shouldn't get here, but just in case of rounding...
print select( { "a": 70, "b": 20, "c": 10 } )
This solution, as implemented, should also be able to handle fractional weights and weights that add up to any number so long as they're all non-negative.
Let T = the sum of all item weights
Let R = a random number between 0 and T
Iterate the item list subtracting each item weight from R and return the item that causes the result to become <= 0.
def weighted_choice(probabilities):
random_position = random.random() * sum(probabilities)
current_position = 0.0
for i, p in enumerate(probabilities):
current_position += p
if random_position < current_position:
return i
return None
Because random.random will always return < 1.0, the final return should never be reached.
import random
def selector(weights):
i=random.random()*sum(x for x,y in weights)
for w,v in weights:
if w>=i:
break
i-=w
return v
weights = ((70,'a'),(20,'b'),(10,'c'))
print [selector(weights) for x in range(10)]
it works equally well for fractional weights
weights = ((0.7,'a'),(0.2,'b'),(0.1,'c'))
print [selector(weights) for x in range(10)]
If you have a lot of weights, you can use bisect to reduce the number of iterations required
import random
import bisect
def make_acc_weights(weights):
acc=0
acc_weights = []
for w,v in weights:
acc+=w
acc_weights.append((acc,v))
return acc_weights
def selector(acc_weights):
i=random.random()*acc_weights[-1][0]
return acc_weights[bisect.bisect(acc_weights, (i,))][1]
weights = ((70,'a'),(20,'b'),(10,'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
Also works fine for fractional weights
weights = ((0.7,'a'),(0.2,'b'),(0.1,'c'))
acc_weights = make_acc_weights(weights)
print [selector(acc_weights) for x in range(100)]
Today, an update to the Python documentation gives an example of making a random.choice() with weighted probabilities:
If the weights are small integer ratios, a simple technique is to build a sample population with repeats:
>>> weighted_choices = [('Red', 3), ('Blue', 2), ('Yellow', 1), ('Green', 4)]
>>> population = [val for val, cnt in weighted_choices for i in range(cnt)]
>>> random.choice(population)
'Green'
A more general approach is to arrange the weights in a cumulative distribution with itertools.accumulate(), and then locate the random value with bisect.bisect():
>>> choices, weights = zip(*weighted_choices)
>>> cumdist = list(itertools.accumulate(weights))
>>> x = random.random() * cumdist[-1]
>>> choices[bisect.bisect(cumdist, x)]
'Blue'
One note: itertools.accumulate() needs Python 3.2, or you can define it with the equivalent code given in its documentation.
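A C# analogue of the accumulate + bisect approach (my own hedged sketch, using Array.BinarySearch in place of bisect.bisect):
using System;

static class WeightedPick
{
    private static readonly Random Rng = new Random();

    // Returns one of 'choices' with probability proportional to its weight,
    // mirroring the cumulative-distribution recipe above.
    public static T Choose<T>(T[] choices, double[] weights)
    {
        // Cumulative distribution, e.g. {3, 2, 1, 4} -> {3, 5, 6, 10}.
        double[] cumdist = new double[weights.Length];
        double running = 0;
        for (int i = 0; i < weights.Length; i++)
        {
            running += weights[i];
            cumdist[i] = running;
        }

        double x = Rng.NextDouble() * cumdist[cumdist.Length - 1];

        // BinarySearch returns the bitwise complement of the insertion point
        // when the value is not found, which plays the role of bisect.bisect.
        int idx = Array.BinarySearch(cumdist, x);
        if (idx < 0) idx = ~idx;
        return choices[idx];
    }
}

// e.g. WeightedPick.Choose(new[] { "Red", "Blue", "Yellow", "Green" },
//                          new[] { 3.0, 2.0, 1.0, 4.0 });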
I think you can use an array of small objects (I implemented it in Java; I know a little C#, but I'm afraid I would write wrong code, so you may need to port it yourself). The C# code would be much smaller with struct and var, but I hope you get the idea:
class PercentString {
double percent;
String value;
// Constructor for 2 values
}
ArrayList<PercentString> list = new ArrayList<PercentString>();
list.add(new PercentString(70, "a"));
list.add(new PercentString(20, "b"));
list.add(new PercentString(10, "c"));
double random = Math.random() * 100; // the weights above add up to 100
double percent = 0;
for (int i = 0; i < list.size(); i++) {
PercentString p = list.get(i);
percent += p.percent;
if (random < percent) {
return p.value;
}
}
If you really care about speed and want to generate the random values quickly, Walker's algorithm, which mcdowella mentioned in https://stackoverflow.com/a/3655773/1212517, is pretty much the best way to go (O(1) time for random(), and O(N) time for preprocess()).
For anyone who is interested, here is my own PHP implementation of the algorithm:
/**
* Pre-process the samples (Walker's alias method).
* @param array key represents the sample, value is the weight
*/
protected function preprocess($weights){
$N = count($weights);
$sum = array_sum($weights);
$avg = $sum / (double)$N;
//divide the array of weights to values smaller and geq than sum/N
$smaller = array_filter($weights, function($itm) use ($avg){ return $avg > $itm;}); $sN = count($smaller);
$greater_eq = array_filter($weights, function($itm) use ($avg){ return $avg <= $itm;}); $gN = count($greater_eq);
$bin = array(); //bins
//we want to fill N bins
for($i = 0;$i<$N;$i++){
//At first, decide for a first value in this bin
//if there are small intervals left, we choose one
if($sN > 0){
$choice1 = each($smaller);
unset($smaller[$choice1['key']]);
$sN--;
} else{ //otherwise, we split a large interval
$choice1 = each($greater_eq);
unset($greater_eq[$choice1['key']]);
}
//splitting happens here - the unused part of interval is thrown back to the array
if($choice1['value'] >= $avg){
if($choice1['value'] - $avg >= $avg){
$greater_eq[$choice1['key']] = $choice1['value'] - $avg;
}else if($choice1['value'] - $avg > 0){
$smaller[$choice1['key']] = $choice1['value'] - $avg;
$sN++;
}
//this bin comprises of only one value
$bin[] = array(1=>$choice1['key'], 2=>null, 'p1'=>1, 'p2'=>0);
}else{
//make the second choice for the current bin
$choice2 = each($greater_eq);
unset($greater_eq[$choice2['key']]);
//splitting on the second interval
if($choice2['value'] - $avg + $choice1['value'] >= $avg){
$greater_eq[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
}else{
$smaller[$choice2['key']] = $choice2['value'] - $avg + $choice1['value'];
$sN++;
}
//this bin comprises of two values
$choice2['value'] = $avg - $choice1['value'];
$bin[] = array(1=>$choice1['key'], 2=>$choice2['key'],
'p1'=>$choice1['value'] / $avg,
'p2'=>$choice2['value'] / $avg);
}
}
$this->bins = $bin;
}
/**
* Choose a random sample according to the weights.
*/
public function random(){
$bin = $this->bins[array_rand($this->bins)];
return (lcg_value() < $bin['p1']) ? $bin[1] : $bin[2];
}
Here is my version, which can be applied to any IList and normalizes the weights. It is based on Timwi's solution: selection based on percentage weighting.
/// <summary>
/// return a random element of the list or default if list is empty
/// </summary>
/// <param name="e"></param>
/// <param name="weightSelector">
/// return chances to be picked for the element. A weight of 0 or less means 0 chance to be picked.
/// If all elements have weight of 0 or less they all have equal chances to be picked.
/// </param>
/// <returns></returns>
public static T AnyOrDefault<T>(this IList<T> e, Func<T, double> weightSelector)
{
if (e.Count < 1)
return default(T);
if (e.Count == 1)
return e[0];
var weights = e.Select(o => Math.Max(weightSelector(o), 0)).ToArray();
var sum = weights.Sum(d => d);
var rnd = new Random().NextDouble();
for (int i = 0; i < weights.Length; i++)
{
//Normalize weight
var w = sum == 0
? 1 / (double)e.Count
: weights[i] / sum;
if (rnd < w)
return e[i];
rnd -= w;
}
throw new Exception("Should not happen");
}
I have my own solution for this:
public class Randomizator3000
{
public class Item<T>
{
public T value;
public float weight;
public static float GetTotalWeight<T>(Item<T>[] p_itens)
{
float __toReturn = 0;
foreach(var item in p_itens)
{
__toReturn += item.weight;
}
return __toReturn;
}
}
private static System.Random _randHolder;
private static System.Random _random
{
get
{
if(_randHolder == null)
_randHolder = new System.Random();
return _randHolder;
}
}
public static T PickOne<T>(Item<T>[] p_itens)
{
if(p_itens == null || p_itens.Length == 0)
{
return default(T);
}
float __randomizedValue = (float)_random.NextDouble() * (Item<T>.GetTotalWeight(p_itens));
float __adding = 0;
for(int i = 0; i < p_itens.Length; i ++)
{
float __cacheValue = p_itens[i].weight + __adding;
if(__randomizedValue <= __cacheValue)
{
return p_itens[i].value;
}
__adding = __cacheValue;
}
return p_itens[p_itens.Length - 1].value;
}
}
And using it would look something like this (this is in Unity3D):
using UnityEngine;
using System.Collections;
public class teste : MonoBehaviour
{
Randomizator3000.Item<string>[] lista;
void Start()
{
lista = new Randomizator3000.Item<string>[10];
lista[0] = new Randomizator3000.Item<string>();
lista[0].weight = 10;
lista[0].value = "a";
lista[1] = new Randomizator3000.Item<string>();
lista[1].weight = 10;
lista[1].value = "b";
lista[2] = new Randomizator3000.Item<string>();
lista[2].weight = 10;
lista[2].value = "c";
lista[3] = new Randomizator3000.Item<string>();
lista[3].weight = 10;
lista[3].value = "d";
lista[4] = new Randomizator3000.Item<string>();
lista[4].weight = 10;
lista[4].value = "e";
lista[5] = new Randomizator3000.Item<string>();
lista[5].weight = 10;
lista[5].value = "f";
lista[6] = new Randomizator3000.Item<string>();
lista[6].weight = 10;
lista[6].value = "g";
lista[7] = new Randomizator3000.Item<string>();
lista[7].weight = 10;
lista[7].value = "h";
lista[8] = new Randomizator3000.Item<string>();
lista[8].weight = 10;
lista[8].value = "i";
lista[9] = new Randomizator3000.Item<string>();
lista[9].weight = 10;
lista[9].value = "j";
}
void Update ()
{
Debug.Log(Randomizator3000.PickOne<string>(lista));
}
}
In this example each value has a 10% chance to be displayed in the debug log =3
Based loosely on python's numpy.random.choice(a=items, p=probs), which takes an array and a probability array of the same size.
public T RandomChoice<T>(IEnumerable<T> a, IEnumerable<double> p)
{
IEnumerator<T> ae = a.GetEnumerator();
Random random = new Random();
double target = random.NextDouble();
double accumulator = 0;
foreach (var prob in p)
{
ae.MoveNext();
accumulator += prob;
if (accumulator > target)
{
break;
}
}
return ae.Current;
}
The probability array p must sum to (approx.) 1. This is to keep it consistent with the numpy interface (and mathematics), but you could easily change that if you wanted.
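Hypothetical usage with the weights from the question, assuming RandomChoice is in scope:
// Returns "a" roughly 70% of the time, "b" 20%, "c" 10%.
string pick = RandomChoice(new[] { "a", "b", "c" }, new[] { 0.7, 0.2, 0.1 });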
I've got a list of 'double' values. I need to select every 6th record. It's a list of coordinates, where I need to get the minimum and maximum value of every 6th value.
List of coordinates (sample): [2.1, 4.3, 1.0, 7.1, 10.6, 39.23, 0.5, ... ]
with hundreds of coordinates.
Result should look like: [x_min, y_min, z_min, x_max, y_max, z_max]
with exactly 6 coordinates.
The following code works, but it takes too long to iterate over all coordinates. I'd like to use LINQ instead (maybe faster?):
for (int i = 0; i < 6; i++)
{
List<double> coordinateRange = new List<double>();
for (int j = i; j < allCoordinates.Count(); j = j + 6)
coordinateRange.Add(allCoordinates[j]);
if (i < 3) boundingBox.Add(coordinateRange.Min());
else boundingBox.Add(coordinateRange.Max());
}
Any suggestions?
Many thanks! Greets!
coordinateRange.Where( ( coordinate, index ) => (index + 1) % 6 == 0 );
The answer from Webleeuw was posted prior to this one, but IMHO it's clearer to use the index as an argument instead of using the IndexOf method.
Something like this could help:
public static IEnumerable<T> Every<T>(this IEnumerable<T> source, int count)
{
int cnt = 0;
foreach(T item in source)
{
cnt++;
if (cnt == count)
{
cnt = 0;
yield return item;
}
}
}
You can use it like this:
int[] list = new []{1,2,3,4,5,6,7,8,9,10,11,12,13};
foreach(int i in list.Every(3))
{ Console.WriteLine(i); }
EDIT:
If you want to skip the first few entries, you can use the Skip() extension method:
foreach (int i in list.Skip(2).Every(6))
{ Console.WriteLine(i); }
There is an overload of the Where method with lets you use the index directly:
coordinateRange.Where((c,i) => (i + 1) % 6 == 0);
Any particular reason you want to use LINQ to do this?
Why not write a loop that steps by 6 each iteration and accesses the values directly?
To find a faster solution, start by profiling!
Measure how long every step in your for loop takes and try to eliminate the biggest bottleneck.
After a second look at your code, it seems the problem is that you run over your big list six times, so the time needed is always six times the list size.
Instead, you should run over the whole list once and put every item into the correct slot.
To run a performance test yourself, compare these two approaches:
Sample class to hold data
public class Coordinates
{
public double x1 { get; set; }
public double x2 { get; set; }
public double y1 { get; set; }
public double y2 { get; set; }
public double z1 { get; set; }
public double z2 { get; set; }
}
Initializing of the value holder
Coordinates minCoordinates = new Coordinates();
//Cause we want to hold the minimum value, it will be initialized with
//value that is definitely greater or equal than the greatest in the list
minCoordinates.x1 = Double.MaxValue;
minCoordinates.x2 = Double.MaxValue;
minCoordinates.y1 = Double.MaxValue;
minCoordinates.y2 = Double.MaxValue;
minCoordinates.z1 = Double.MaxValue;
minCoordinates.z2 = Double.MaxValue;
Using a for loop if the index operator is O(1)
for (int i = 0; i < allCoordinates.Count; i++)
{
switch (i % 6)
{
case 0:
minCoordinates.x1 = Math.Min(minCoordinates.x1, allCoordinates[i]);
break;
case 1:
minCoordinates.x2 = Math.Min(minCoordinates.x2, allCoordinates[i]);
break;
case 2:
minCoordinates.y1 = Math.Min(minCoordinates.y1, allCoordinates[i]);
break;
case 3:
minCoordinates.y2 = Math.Min(minCoordinates.y2, allCoordinates[i]);
break;
case 4:
minCoordinates.z1 = Math.Min(minCoordinates.z1, allCoordinates[i]);
break;
case 5:
minCoordinates.z2 = Math.Min(minCoordinates.z2, allCoordinates[i]);
break;
}
}
Using foreach if IEnumerator is O(1)
int count = 0;
foreach (var item in allCoordinates)
{
switch (count % 6)
{
case 0:
minCoordinates.x1 = Math.Min(minCoordinates.x1, item);
break;
case 1:
minCoordinates.x2 = Math.Min(minCoordinates.x2, item);
break;
case 2:
minCoordinates.y1 = Math.Min(minCoordinates.y1, item);
break;
case 3:
minCoordinates.y2 = Math.Min(minCoordinates.y2, item);
break;
case 4:
minCoordinates.z1 = Math.Min(minCoordinates.z1, item);
break;
case 5:
minCoordinates.z2 = Math.Min(minCoordinates.z2, item);
break;
}
count++;
}
Well, it's not LINQ, but if you are worried about performance, this might help.
public static class ListExtensions
{
public static IEnumerable<T> Every<T>(this IList<T> list, int stepSize, int startIndex)
{
if (stepSize <= 0)
throw new ArgumentException();
for (int i = startIndex; i < list.Count; i += stepSize)
yield return list[i];
}
}
Suggestion:
coordinateRange.Where(c => (coordinateRange.IndexOf(c) + 1) % 6 == 0);
I stand corrected, thanks for the comments.
As stated by codymanix, the correct answer is indeed:
coordinateRange.Where((c, i) => (i + 1) % 6 == 0);
The best way to do this would be to refactor the data structure so that every dimension is its own array; that way x1_max would be just x1.Max(). If you cannot change the input data structure, the next best way is to iterate over the array once and do all accesses locally, which helps with staying within the L1 cache (a sketch of the separate-arrays option follows the loop below):
var minValues = new double[] { Double.MaxValue, Double.MaxValue, Double.MaxValue };
var maxValues = new double[] { Double.MinValue, Double.MinValue, Double.MinValue };
for (int i = 0; i < allCoordinates.Length; i += 6)
{
for (int j = 0; j < 3; j++)
{
if (allCoordinates[i+j] < minValues[j])
minValues[j] = allCoordinates[i+j];
if (allCoordinates[i+j+3] > maxValues[j])
maxValues[j] = allCoordinates[i+j+3];
}
}
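For the first suggestion (one collection per slot), a minimal sketch, assuming the flat list repeats its six slots in order as in the question:
// My own illustration: split the flat list into one array per slot,
// then each bound is a single Min()/Max() call.
// Requires System.Linq and System.Collections.Generic.
double[][] slots = Enumerable.Range(0, 6)
    .Select(offset => allCoordinates.Where((value, index) => index % 6 == offset).ToArray())
    .ToArray();

var boundingBox = new List<double>
{
    slots[0].Min(), slots[1].Min(), slots[2].Min(),
    slots[3].Max(), slots[4].Max(), slots[5].Max(),
};
Note that this still enumerates the list once per slot, so the single-pass loop above remains the better choice for very large lists.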
Use Skip in combination with Take.
coordinateRange.Skip(6).Take(1).First();
I recommend you move this into an extension method.