I'm creating a game in which someone opens a chest and the chest gives them a random prize. The maximum I can give out is 85,000,000 across 10,000 chests, which is an average of 8,500 per chest. However, I want some chests to be below this value and some above, and to be able to set a minimum loss of 2,500 and a maximum win of 250,000, while still reaching the total value of 85,000,000.
I'm really struggling to come up with an algorithm for this using my C# knowledge.
Here goes some OOP. You have a Player class, which stores some info: the amount of gold the player has, the number of chests left to open, and the total amount of gold remaining in the chests he will find.
public class Player
{
    private int gold = 0;
    private int goldLeftInChests = 85000000;
    private int chestsToOpen = 10000;
    private Random random = new Random();

    public void OpenChest()
    {
        if (chestsToOpen == 0)
            return; // or whatever you want after 10000 chests.

        int goldInChest = CalculateGoldInNextChest();
        goldLeftInChests -= goldInChest;
        chestsToOpen--;
        gold += goldInChest;
    }

    private int CalculateGoldInNextChest()
    {
        if (chestsToOpen == 1)
            return goldLeftInChests;

        var average = goldLeftInChests / chestsToOpen;
        return random.Next(average);
    }
}
When the next chest is opened, the gold in the chest is calculated and the player data is adjusted: we add some gold to the player and reduce the total amount of gold left in chests and the number of chests left to open.
Calculating the gold in a chest is very simple. We get the average amount left and pick a random number between 1 and that average. The first time this value will always be below 8,500. But the next time the average will be a little bit bigger, so the player will have a chance to find more than 8,500. If he is unlucky again, the average will grow further, or it will shrink if the player gets a lot of gold.
UPDATE: As #Hans pointed out, I didn't account for the min and max restrictions on gold per chest. There is also a problem in #Hans's solution: you have to move gold between the 10,000 chests many times before some chests get close to the 250,000 value, and you have to fill and keep all 10,000 values. The next problem I thought about was the distribution of random numbers in .NET: values have equal probability over the whole interval we are using. So if we are generating values from 2,500 to 250,000, the chance of getting a value around 8,500 (the average) is roughly a 12,000-wide window (8,500 ± 6,000) against the remaining 235,500 (250,000 - 12,000 - 2,500). That means generating default random numbers from the given range will give you lots of big numbers in the beginning, and then you will get stuck near the lowest boundary (2,500). So you need random numbers with a different distribution - Gaussian variables. We still want 8,500 gold to have the highest probability and 250,000 the lowest. Something like that:
And the last part - the calculation. We need to update only one method :)
private int CalculateGoldInNextChest()
{
    const int mean = 8500;
    var goldPerChestRange = new Range(2500, 250000);
    var averageRange = new Range(mean - 2500, mean + 2500);

    if (chestsToOpen == 1)
        return goldLeftInChests;

    do
    {
        int goldInChest = (int)random.NextGaussian(mu: mean, sigma: 50000);
        int averageLeft = (goldLeftInChests - goldInChest) / (chestsToOpen - 1);
        if (goldPerChestRange.Contains(goldInChest) && averageRange.Contains(averageLeft))
            return goldInChest;
    }
    while (true);
}
Note: I used a Range helper to make the code more readable. Running the test several times produces nice top values of more than 200,000.
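NextGaussian and Range are not part of .NET itself; the code above assumes helpers along these lines (a minimal sketch using the Box-Muller transform, with names chosen to match the snippet - in newer C# you may want a different name to avoid clashing with System.Range):

public readonly struct Range
{
    private readonly int min, max;
    public Range(int min, int max) { this.min = min; this.max = max; }
    public bool Contains(int value) => value >= min && value <= max;
}

public static class RandomExtensions
{
    // Box-Muller transform: turns two uniform samples into one normally
    // distributed sample with the given mean (mu) and standard deviation (sigma).
    public static double NextGaussian(this Random random, double mu, double sigma)
    {
        double u1 = 1.0 - random.NextDouble(); // avoid log(0)
        double u2 = random.NextDouble();
        double standardNormal = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
        return mu + sigma * standardNormal;
    }
}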
Pseudocode algorithm:
use an array of chests
index of array is chest number; length of array is amount of chests
value in array is amount in chest at that index
initial value is total amount divided by number of chests
now repeat a number of times (say: 10 times the number of chests)
get two random chests
work out the maximum amount you can transfer from chest 1 to chest 2, so that 1 doesn't get below the minimum and 2 doesn't get above the maximum
get a random value below that maximum and transfer it
Now try and implement this in C#.
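For reference, a rough sketch of that redistribution idea (untested, and the variable names are my own, not from the question):

Random rand = new Random();
int chestCount = 10000, total = 85000000, minPrize = 2500, maxPrize = 250000;

// Start with every chest holding the average amount.
int[] chests = new int[chestCount];
for (int i = 0; i < chestCount; i++) chests[i] = total / chestCount;

// Repeatedly move a random amount between two random chests,
// never pushing the source below the minimum or the target above the maximum.
for (int step = 0; step < 10 * chestCount; step++)
{
    int from = rand.Next(chestCount);
    int to = rand.Next(chestCount);
    if (from == to) continue;

    int maxTransfer = Math.Min(chests[from] - minPrize, maxPrize - chests[to]);
    if (maxTransfer <= 0) continue;

    int transfer = rand.Next(maxTransfer + 1);
    chests[from] -= transfer;
    chests[to] += transfer;
}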
This should be a good starting point. Each chest gets filled randomly with the limits adapting to make sure the remaining chests can also get valid values.
Random rand = new Random();
int[] chests = new int[numOffChests];
long remaining = TotalValue; // long: 9,999 * 250,000 would overflow int below
for (int i = 0; i < numOffChests; i++)
{
    long chestsLeftAfterThis = numOffChests - i - 1;
    // Lower bound: don't leave behind more than the remaining chests can hold at the maximum.
    int minB = (int)Math.Max(remaining - chestsLeftAfterThis * maxChestValue, minChestValue);
    // Upper bound: leave at least enough for every remaining chest to reach the minimum.
    int maxB = (int)Math.Min(remaining - chestsLeftAfterThis * minChestValue, maxChestValue);
    int val = rand.Next(minB, maxB + 1); // Next's upper bound is exclusive
    remaining -= val;
    chests[i] = val;
}
The distribution has to be heavily skewed to get that range of values with that mean. Try an exponential formula, X=exp(a*U+b)+c where U is uniform on [0,1]. Then the conditions are
-2,500 = exp(b)+c
250,000 = exp(a+b)+c
8,500 = integral(exp(a*u+b)+c, u=0..1)
= exp(b)/a*(exp(a)-1)+c
= 252,500/a+c
which gives the two equations
250,000+2,500*exp(a) = c*(1-exp(a))
8,500 = 252,500/a+c
A bit of graphical and numerical solution gives the "magic" numbers
a = 22.954545,
b = -10.515379,
c = -2500.00002711621
Now fill 10,000 chests according to that formula, compute the sum over the chest prizes, and distribute the excess (which with high probability is small) in any pattern you like.
If you want to hit the upper and lower bounds more regularly, increase the bounds at the basis of the computation and cut the computed value if outside the original bounds.
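A sketch of that filling step in C# (the constants are the ones derived above; how the excess gets spread over random chests here is just one possible pattern, and the bounds check follows the constants rather than anything prescribed in the answer):

const double a = 22.954545, b = -10.515379, c = -2500.00002711621;
const int chestCount = 10000;
const long targetTotal = 85000000;

var rand = new Random();
long[] chests = new long[chestCount];
long sum = 0;
for (int i = 0; i < chestCount; i++)
{
    double u = rand.NextDouble();                         // U uniform on [0,1)
    chests[i] = (long)Math.Round(Math.Exp(a * u + b) + c);
    sum += chests[i];
}

// Distribute the excess (or shortfall) over random chests in small steps,
// keeping every chest inside the bounds implied by the constants above.
long excess = sum - targetTotal;
while (excess != 0)
{
    int i = rand.Next(chestCount);
    long adjust = Math.Sign(excess) * Math.Min(Math.Abs(excess), 100);
    long adjusted = chests[i] - adjust;
    if (adjusted < -2500 || adjusted > 250000) continue;
    chests[i] = adjusted;
    excess -= adjust;
}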
I assume that a probability function gives the chance of a win/lose value V occurring. Let's say that the probability of V is proportional to (250000-V)**2, giving fewer chances of getting high prizes.
To simplify some rounding issues, let's also assume that win/lose amounts are multiples of 100. You may then make the following (untested) computations:
int minwin = -2500;
int maxwin = 250000;
int chestcount = 10000;
long maxamount = 85000000;

// ----------- get probabilities for each win/lose amount to occur in all chests ----------
long probtotal = 0;
List<double> prob = new List<double>();
for (long i = minwin; i <= maxwin; i++) if (i % 100 == 0)
{ long ii = (maxwin - i) * (maxwin - i); prob.Add(ii); probtotal += ii; }
for (int i = 0; i < prob.Count; i++) prob[i] = prob[i] / probtotal;
for (int i = 0; i < prob.Count; i++)
    Console.WriteLine("Win/lose amount " + (minwin + i * 100).ToString() + " probability=" + (prob[i] * 100).ToString("0.000"));

// Transform "prob" so as to indicate the fractional count of chests corresponding to each win/lose amount
for (int i = 0; i < prob.Count; i++) prob[i] = prob[i] * chestcount;

// ---------- Set the 10000 chest values, starting from the most probable (lowest) amount -------------
List<int> chestvalues = new List<int>();
double remainder = 0;
long totalwin = 0;
for (int i = 0; i < prob.Count; i++)
{
    int n = (int)(prob[i] + remainder); // only the integer part of the float
    remainder = prob[i] + remainder - n;
    // set the win/lose amount on n chests
    int curwin = minwin + i * 100;
    for (int j = 0; j < n && chestvalues.Count < chestcount; j++)
    { chestvalues.Add(curwin); totalwin += curwin; }
}
// if chestvalues.Count is lower than chestcount, create the missing chest values
for (int i = chestvalues.Count; i < chestcount; i++) chestvalues.Add(0);

// --------------- due to float computations, we perhaps want to make some adjustments --------------
// i.e. if totalwin>maxamount (not sure if it may happen), decrease some chestvalues
...

// --------------- We have now a list of 10000 chest values to be randomly sorted --------------
Random rnd = new Random();
SortedList<int, int> randomchestvalues = new SortedList<int, int>();
for (int i = 0; i < chestcount; i++)
{
    int key;
    do { key = rnd.Next(0, 99999999); } while (randomchestvalues.ContainsKey(key)); // avoid duplicate keys
    randomchestvalues.Add(key, chestvalues[i]);
}
// display the first chest amounts
for (int i = 0; i < chestcount && i < 234; i++)
{ int chestamount = randomchestvalues.Values[i]; Console.WriteLine(i.ToString() + ":" + chestamount); }
Related
List<int> NPower = new List<int>();
List<double> list = new List<double>();
try
{
    for (int i = 1; i < dataGridView1.Rows.Count; i++)
    {
        for (int n = 0; n < i + 30; n++)
        {
            NPower.Add(Convert.ToInt32(dataGridView1.Rows[i + n].Cells[6].Value));
        }
    }
    average = NPower.Average();
    total = Math.Pow(average, 4);
    NPower.Clear();
}
catch (Exception)
{
    average = NPower.Average();
    NP = Convert.ToInt32(Math.Pow(average, (1.0 / 3.0)));
    label19.Text = "Normalised Power: " + NP.ToString();
    NPower.Clear();
}
Hi, so I'm trying to calculate the normalized power for a Polar cycling ride. I know that for the normalized power you need to:
1) starting at the 30 s mark, calculate a rolling 30 s average (of the preceding time points, obviously).
2) raise all the values obtained in step #1 to the 4th power.
3) take the average of all of the values obtained in step #2.
4) take the 4th root of the value obtained in step #3.
I think I have done that, but the normalized power comes out as 16, which isn't correct. Could anyone look at my code to see if they can figure out a solution? Thank you, and sorry for my code; I'm still quite new to this, so it might be in the incorrect format.
I'm not sure that I understand your requirements or code completely, but a few things I noticed:
Since you're supposed to start taking the rolling average after 30 seconds, shouldn't i be initialized to 30 instead of 1?
Since it's a rolling average, shouldn't n be initialized to the value of i instead of 0?
Why is the final result calculated inside a catch block?
Shouldn't it be Math.Pow(average, (1.0 / 4.0)) since you want the fourth root, not the third?
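Putting those points together, a minimal sketch of the four steps from the question (assuming the per-second power readings are already in a List<double> called power - a name I've made up, not from the original code):

using System;
using System.Collections.Generic;
using System.Linq;

static double NormalizedPower(List<double> power)
{
    var fourthPowers = new List<double>();
    // 1) Starting at the 30 s mark, take a rolling 30 s average of the preceding samples.
    for (int i = 29; i < power.Count; i++)
    {
        double avg30 = power.Skip(i - 29).Take(30).Average();
        // 2) Raise each rolling average to the 4th power.
        fourthPowers.Add(Math.Pow(avg30, 4));
    }
    // 3) Average those values and 4) take the 4th root.
    return Math.Pow(fourthPowers.Average(), 1.0 / 4.0);
}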
I was trying to create my own factorial function when I found that the calculation is twice as fast if it is calculated in pairs. Like this:
Groups of 1: 2*3*4 ... 50000*50001 = 4.1 seconds
Groups of 2: (2*3)*(4*5)*(6*7) ... (50000*50001) = 2.0 seconds
Groups of 3: (2*3*4)*(5*6*7) ... (49999*50000*50001) = 4.8 seconds
Here is the C# I used to test this.
Stopwatch timer = new Stopwatch();
timer.Start();

// Separate the calculation into groups of this size.
int k = 2;

BigInteger total = 1;

// Iterates from 2 to 50002, but instead of incrementing 'i' by one, it increments it 'k' times,
// and the inner loop calculates the product of 'i' to 'i+k', and multiplies 'total' by that result.
for (var i = 2; i < 50000 + 2; i += k)
{
    BigInteger partialTotal = 1;
    for (var j = 0; j < k; j++)
    {
        // Stops if it exceeds 50000.
        if (i + j >= 50000) break;
        partialTotal *= i + j;
    }
    total *= partialTotal;
}

Console.WriteLine(timer.ElapsedMilliseconds / 1000.0 + "s");
I tested this at different levels and put the average times over a few tests in a bar graph. I expected it to become more efficient as I increased the number of groups, but 3 was the least efficient and 4 had no improvement over groups of 1.
Link to First Data
Link to Second Data
What causes this difference, and is there an optimal way to calculate this?
BigInteger has a fast case for numbers of 31 bits or less. When you do a pairwise multiplication, this means a specific fast-path is taken, that multiplies the values into a single ulong and sets the value more explicitly:
public void Mul(ref BigIntegerBuilder reg1, ref BigIntegerBuilder reg2) {
  ...
  if (reg1._iuLast == 0) {
    if (reg2._iuLast == 0)
      Set((ulong)reg1._uSmall * reg2._uSmall);
    else {
      ...
    }
  }
  else if (reg2._iuLast == 0) {
    ...
  }
  else {
    ...
  }
}

public void Set(ulong uu) {
  uint uHi = NumericsHelpers.GetHi(uu);
  if (uHi == 0) {
    _uSmall = NumericsHelpers.GetLo(uu);
    _iuLast = 0;
  }
  else {
    SetSizeLazy(2);
    _rgu[0] = (uint)uu;
    _rgu[1] = uHi;
  }
  AssertValid(true);
}
A 100% predictable branch like this is perfect for a JIT, and this fast-path should get optimized extremely well. It's possible that _rgu[0] and _rgu[1] are even inlined. This is extremely cheap, so effectively cuts down the number of real operations by a factor of two.
So why is a group of three so much slower? It's obvious that it should be slower than for k = 2; you have far fewer optimized multiplications. More interesting is why it's slower than k = 1. This is easily explained by the fact that the outer multiplication of total now hits the slow path. For k = 2 this impact is mitigated by halving the number of multiplies and the potential inlining of the array.
However, these factors do not help k = 3, and in fact the slow case hurts k = 3 a lot more. The second multiplication in the k = 3 case hits this case
if (reg1._iuLast == 0) {
  ...
}
else if (reg2._iuLast == 0) {
  Load(ref reg1, 1);
  Mul(reg2._uSmall);
}
else {
  ...
}
which allocates
EnsureWritable(1);

uint uCarry = 0;
for (int iu = 0; iu <= _iuLast; iu++)
  uCarry = MulCarry(ref _rgu[iu], u, uCarry);

if (uCarry != 0) {
  SetSizeKeep(_iuLast + 2, 0);
  _rgu[_iuLast] = uCarry;
}
Why does this matter? Well, EnsureWritable(1) causes
uint[] rgu = new uint[_iuLast + 1 + cuExtra];
so rgu becomes length 3. The number of passes in total's code is decided in
public void Mul(ref BigIntegerBuilder reg1, ref BigIntegerBuilder reg2)
as
for (int iu1 = 0; iu1 < cu1; iu1++) {
  ...
  for (int iu2 = 0; iu2 < cu2; iu2++, iuRes++)
    uCarry = AddMulCarry(ref _rgu[iuRes], uCur, rgu2[iu2], uCarry);
  ...
}
which means that we have a total of len(total._rgu) * 3 operations. This hasn't saved us anything! There are only len(total._rgu) * 1 passes for k = 1 - we just do it three times!
There is actually an optimization on the outer loop that reduces this back down to len(total._rgu) * 2:
uint uCur = rgu1[iu1];
if (uCur == 0)
  continue;
However, they "optimize" this optimization in a way that hurts far more than before:
if (reg1.CuNonZero <= reg2.CuNonZero) {
  rgu1 = reg1._rgu; cu1 = reg1._iuLast + 1;
  rgu2 = reg2._rgu; cu2 = reg2._iuLast + 1;
}
else {
  rgu1 = reg2._rgu; cu1 = reg2._iuLast + 1;
  rgu2 = reg1._rgu; cu2 = reg1._iuLast + 1;
}
For k = 2, that causes the outer loop to be over total, since reg2 contains no zero values with high probability. This is great because total is way longer than partialTotal, so the fewer passes the better. For k = 3, the EnsureWritable(1) will always cause a spare space because the multiplication of three numbers no more than 15 bits long can never exceed 64 bits. This means that, although we still only do one pass over total for k = 2, we do two for k = 3!
This starts to explain why the speed increases again beyond k = 3: the number of passes per addition increases slower than the number of additions decreases, as you're only adding ~15 bits to the inner value each time. The inner multiplications are fast relative to the massive total multiplications, so the more time spent consolidating values, the more time saved in passes over total. Further, the optimization is less frequently a pessimism.
It also explains why odd values take longer: they add an extra 32-bit integer to the _rgu array. This wouldn't happen so cleanly if the ~15 bits weren't so close to half of 32.
It's worth noting that there are a lot of ways to improve this code; the comments here are about why, not how to fix it. The easiest improvement would be to chuck the values in a heap and multiply only the two smallest values at a time.
The time required to do a BigInteger multiplication depends on the size of the product.
Both methods take the same number of multiplications, but if you multiply the factors in pairs, then the average size of the product is much smaller than it is if you multiply each factor with the product of all the smaller ones.
You can do even better if you always multiply the two smallest factors (original factors or intermediate products) that have yet to be multiplied together, until you get to the complete product.
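As a sketch of that smallest-two-first idea (my own code, not from either answer; requires .NET 6 or later for PriorityQueue):

using System;
using System.Collections.Generic;
using System.Numerics;

static BigInteger Factorial(int n)
{
    // Always multiply the two smallest remaining factors (or intermediate
    // products), so most multiplications involve small numbers.
    var queue = new PriorityQueue<BigInteger, BigInteger>();
    for (int i = 2; i <= n; i++)
        queue.Enqueue(i, i);

    if (queue.Count == 0)
        return 1;

    while (queue.Count > 1)
    {
        BigInteger a = queue.Dequeue();
        BigInteger b = queue.Dequeue();
        BigInteger product = a * b;
        queue.Enqueue(product, product);
    }
    return queue.Dequeue();
}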
I think you have a bug ('+' instead of '*').
partialTotal *= i + j;
Good to check that you are getting the right answer, not just interesting performance metrics.
But I'm curious what motivated you to try this. If you do find a difference, I would expect it to have to do with how registers and/or memory get allocated. And I would expect it to be 0-30% or something like that, not 50%.
In trying to test whether knowing the history of a random number generator could help predict future results, I found a strong, unexpected correlation between the average of the numbers generated and the number of correct guesses.
The test was supposed to simulate flipping a coin (heads = 0, tails = 1) and if previous attempts were biased towards heads then guess tails and vice versa.
Why is the sum of the generated numbers always nearly equal to the number of correct guesses in the following LinqPad program?
void Main()
{
    var rnd = new Random();
    var attempts = 10000000;
    var correctGuesses = 0;
    long sum = 0;
    decimal avg = 0.5m;

    for (int i = 0; i < attempts; i++)
    {
        var guess = avg < 0.5m ? 1 : 0;
        var result = rnd.Next(0, 2);
        if (guess == result)
        {
            correctGuesses += 1;
        }
        sum += result;
        avg = (decimal)sum/(decimal)attempts;
    }

    attempts.Dump("Attempts");
    correctGuesses.Dump("Correct Guesses");
    avg = (decimal)sum / (decimal)attempts;
    avg.Dump("Random Number Average");
}
Have I made an error in the code? Is this a natural relationship? I expected the average to converge to 0.5 as I increased the number of attempts because the distribution is fairly even - I tested this with 10bn calls to Random.Next(0,2) - but I did not expect the sum of the generated numbers to correlate with the number of correct guesses.
Your error is this line:
avg = (decimal)sum/(decimal)attempts;
It makes no sense to divide the sum (which is accumulated only up to iteration i at that point) by attempts. Divide by i (EDIT: more precisely i + 1) instead for avg to give you something meaningful.
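That is, inside the loop (i + 1 because i is zero-based):

avg = (decimal)sum / (i + 1);   // running average of the results seen so far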
The Random class, without a seed, generates a random number using the current time as seed, meaning that a call of the rnd.Next method in your cycle will result in the same number several times over, depending on how fast your machine goes through the cycle.
I want to generate random numbers, but in a controlled way: the numbers should be well separated from each other and spread through the range.
As an example, if the bounds were 1 and 50 and the first generated number is 40, then the next number should not be close to 40. Suppose it's 20; then 30 would be an acceptable third number.
Please help.
Rather than completely random numbers, you might want to look at noise functions like Perlin Noise to generate superficially random data in a predictable fashion.
http://en.wikipedia.org/wiki/Perlin_noise
There are a few variations out there - definitely worth researching if you can describe your segmentation of data algorithmically.
It's used a lot in gaming to smooth and add interest to otherwise randomly generated terrain textures.
There's a few sample implementations in C# out there, this one is used to generate a bitmap but could easily be adapted to fill a 2d array:
http://www.gutgames.com/post/Perlin-Noise.aspx
There's also plenty of questions here on SO about Perlin Noise too:
https://stackoverflow.com/search?q=perlin+noise
You may do something like this:
randomSpaced[interval_, mindistance_, lastone_] :=
(While[Abs[(new = RandomReal[interval])-lastone] < mindistance,];
Return[new];)
Randomnicity test drive:
For[i = 1, i < 500000, i++,
rnd[i] = randomSpaced[{0, 40}, 10, rnd[i - 1]];
];
Histogram[Table[rnd[i], {i, 500000}]]
You can see that the frequencies accumulate at the borders.
Moreover, if you are not cautious and ask for too large a distance, the results will be something like:
For[i = 1, i < 50000, i++,
 AppendTo[rnd, randomSpaced[{0, 40}, 25, Last[rnd]]];];
Histogram[rnd]
because you are not allowing points at the center.
Define a separation distance d that the new number must keep from the last one. If the last number was, say, 20, the next random number should not be from 20-d to 20+d. That means the random interval should be [1, 20-d) and (20+d, 50].
Since you cannot call random.Next() with two intervals, you need to call it with an interval reduced by 2d and then map the random number back to your original [1, 50] interval.
static class RandomExcludingSurrounding
{
    static Random random = new Random();

    public static int Next(int x, int d, int min, int max)
    {
        int next = random.Next(min, max - 2*d);
        if (next > x - d)
            next += 2*d;
        return next;
    }
}
int min = 1;
int max = 50;

Random random = new Random();
int next = random.Next(min, max);

while (true)
{
    next = RandomExcludingSurrounding.Next(next, 20, min, max); // reuse the previous value as 'x'
    Console.WriteLine(next);
}
Does anyone know of an algorithm that can generate unique bingo card faces? I'm looking to implement this algorithm in C#.
Thanks,
get 5 sets containing 15 numbers each (1-15 for set 1, 16-30 for set 2...)
select 5 different numbers in sets 1,2,4,5
select 4 different numbers in set 3
To check if that card already exists
Check each existing card for top-left correspondence with the new card
if both numbers are equal, then move to the second number
if you get the same number at the same place 24 times, then both cards are equal and the new card must be rejected
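A quick sketch of the generation steps above (my own code; the middle column gets only 4 numbers because of the free centre square):

using System;
using System.Linq;

static int[][] GenerateCard(Random random)
{
    var card = new int[5][];
    for (int col = 0; col < 5; col++)
    {
        int count = col == 2 ? 4 : 5;                   // middle column has the free square
        card[col] = Enumerable.Range(col * 15 + 1, 15)  // 1-15, 16-30, 31-45, ...
                              .OrderBy(_ => random.Next())
                              .Take(count)
                              .ToArray();
    }
    return card;
}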
This is an interesting problem, but as Michael Madsen reported, given the number of possibilities, you would probably be better off generating them randomly and then checking for duplicates. (Unless you want to generate all 111 quadrillion possibilities, which I hope you have the data storage space for!)
Here's a function for generating a random subset of integers from a given range which you might find useful:
private static IEnumerable<int> RandomSubsetOfRange(int min, int max, int count)
{
    Random random = new Random();
    int size = max - min + 1;
    for (int i = 0; i < size; i += 1)   // i < size, not i <= size
    {
        // Selection sampling: pick each remaining item with probability
        // (items still needed) / (items still available).
        if (random.NextDouble() <= ((double)count / (size - i)))
        {
            yield return min + i;
            count -= 1;
        }
    }
}
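For example, the columns of a card could then be drawn like this (hypothetical usage, following the 5x15 column layout described in the other answer; ToList() needs System.Linq):

var column1 = RandomSubsetOfRange(1, 15, 5).ToList();   // first column: 5 numbers from 1-15
var column3 = RandomSubsetOfRange(31, 45, 4).ToList();  // middle column: 4 numbers, free centre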