List<int> NPower = new List<int>();
List<double> list = new List<double>();
try
{
    for (int i = 1; i < dataGridView1.Rows.Count; i++)
    {
        for (int n = 0; n < i + 30; n++)
        {
            NPower.Add(Convert.ToInt32(dataGridView1.Rows[i + n].Cells[6].Value));
        }
    }
    average = NPower.Average();
    total = Math.Pow(average, 4);
    NPower.Clear();
}
catch (Exception)
{
    average = NPower.Average();
    NP = Convert.ToInt32(Math.Pow(average, (1.0 / 3.0)));
    label19.Text = "Normalised Power: " + NP.ToString();
    NPower.Clear();
}
Hi, I'm trying to calculate the normalized power for a Polar cycling file. I know that for the normalized power you need to:
1) Starting at the 30 s mark, calculate a rolling 30 s average (of the preceding time points, obviously).
2) Raise all the values obtained in step #1 to the 4th power.
3) Take the average of all of the values obtained in step #2.
4) Take the 4th root of the value obtained in step #3.
I think I have done that, but the normalized power comes up as 16, which isn't correct. Could anyone look at my code to see if they can figure out a solution? Thank you, and sorry about my code; I'm still quite new to this, so it might not be in the correct format.
I'm not sure that I understand your requirements or code completely, but a few things I noticed:
Since you're supposed to start taking the rolling average after 30 seconds, shouldn't i be initialized to 30 instead of 1?
Since it's a rolling average, shouldn't n be initialized to the value of i instead of 0?
Why is the final result calculated inside a catch block?
Shouldn't it be Math.Pow(average, (1.0 / 4.0)) since you want the fourth root, not the third?
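Putting those fixes together, a minimal sketch might look like this (assuming, as your code does, that column 6 holds the power values and that each row is a 1-second sample; System.Linq is needed for Average()):
var rolling4th = new List<double>();

// 1) Starting at the 30 s mark, average the preceding 30 samples.
for (int i = 30; i < dataGridView1.Rows.Count; i++)
{
    double windowSum = 0;
    for (int n = i - 30; n < i; n++)
    {
        windowSum += Convert.ToDouble(dataGridView1.Rows[n].Cells[6].Value);
    }

    // 2) Raise each rolling average to the 4th power.
    rolling4th.Add(Math.Pow(windowSum / 30.0, 4));
}

// 3) Average the 4th powers, then 4) take the 4th root.
double np = Math.Pow(rolling4th.Average(), 1.0 / 4.0);
label19.Text = "Normalised Power: " + Convert.ToInt32(np).ToString();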
I am trying to generate the probability of getting a specific number from n dice, with no guarantee of them having the same number of sides (e.g., 1d6 + 2d10).
I know there is a really expensive way of doing it (with recursion), but if there is a mathematical way of determining the chance of an event happening, that would be way better.
One way to do this:
Create an output array count with length sum(sides of all dice) + 1, i.e. so that the maximum total that can possibly be rolled works as an index.
This represents the number of ways that each index can be rolled. Initialise this with [0] = 1.
For each die of N sides, enumerate the results of each possible rolled value:
Copy the existing count array into prev, say, and create a new empty count array.
for roll = 1 to N, for total = 0 to count.length-1-roll, count[total+roll] += prev[total]
Now the probability of rolling value = count[value] / sum(count)
Notes:
This isn't, as you feared, really expensive, and it doesn't need recursion. It's O(N^2), where N is the total number of faces on all dice.
This will compute the probability of all outputs not just the one output that you're interested in, which may be an issue if the total faces is extremely large and the value you're interested in small. You could cap the count array at length (value you're interested in) + 1, if necessary, and compute the total number of rolls as the product of each die face as you process it rather than from sum(count) as I've suggested above.
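For what it's worth, a minimal C# sketch of the approach above (the sides array, e.g. { 6, 10, 10 } for 1d6 + 2d10, and the method name are my own illustration):
using System;
using System.Linq;

class DiceProbability
{
    // count[t] = number of ways to roll a total of t with the dice seen so far.
    static double Probability(int[] sides, int value)
    {
        long[] count = new long[sides.Sum() + 1];
        count[0] = 1; // one way to roll a total of 0 with no dice

        foreach (int n in sides)
        {
            long[] prev = count;
            count = new long[prev.Length];
            for (int roll = 1; roll <= n; roll++)
                for (int total = 0; total + roll < count.Length; total++)
                    count[total + roll] += prev[total];
        }

        // sum(count) is the total number of equally likely outcomes.
        return (double)count[value] / count.Sum();
    }

    static void Main()
    {
        // e.g. the probability of rolling a total of 12 with 1d6 + 2d10
        Console.WriteLine(Probability(new[] { 6, 10, 10 }, 12));
    }
}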
@Rup already gave one standard solution, the bottom-up dynamic programming method.
The top-down approach is to write your recursive function... and then memoize it. That is, when your function is called, you first check whether you have seen this input before (i.e. you look into a dictionary to see if you have a "memo" to yourself about the answer), and if you haven't, you calculate the answer and save it. Then you return the memoized answer.
The usual tradeoffs apply:
Top down is easier to figure out and write.
Bottom up lets you see that you don't need to store 2 dice answers when you have the 3 dice ones, and therefore reduces working memory requirements.
Therefore it is good to know both approaches, but I always reach for a top down approach first.
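A sketch of the top-down version in C#, using the same hypothetical sides array as above; the "memo" here is a dictionary keyed on (dice used, total):
using System;
using System.Collections.Generic;

class DiceMemo
{
    static Dictionary<(int dice, int total), long> memo =
        new Dictionary<(int dice, int total), long>();

    // Number of ways to reach 'total' using the first 'dice' entries of 'sides'.
    static long Ways(int[] sides, int dice, int total)
    {
        if (total < 0) return 0;
        if (dice == 0) return total == 0 ? 1 : 0;
        if (memo.TryGetValue((dice, total), out long cached)) return cached; // seen before

        long ways = 0;
        for (int roll = 1; roll <= sides[dice - 1]; roll++)
            ways += Ways(sides, dice - 1, total - roll);

        memo[(dice, total)] = ways; // save the "memo" to ourselves
        return ways;
    }

    static void Main()
    {
        int[] sides = { 6, 10, 10 }; // 1d6 + 2d10
        Console.WriteLine(Ways(sides, sides.Length, 12)); // ways out of 6*10*10 outcomes
    }
}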
Here I simulate rolling two dice:
1. Random() generates a value from n faces.
2. The dice are rolled n times.
3. The sum over the n rolls is displayed.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Dicerolling
{
    class Program
    {
        static void Main(string[] args)
        {
            Random x = new Random();
            int throw_times = 1;
            int sum = 0;
            int[] dice = new int[2];

            Console.WriteLine("enter the number of rolls:");
            var n = int.Parse(Console.ReadLine());
            for (int i = 1; i <= n; i++)
            {
                // roll two six-sided dice
                dice[0] = x.Next(1, 7);
                dice[1] = x.Next(1, 7);
                int total_var = dice[0] + dice[1];
                sum += total_var; // accumulate the total across all throws
                Console.Write("Throw " + throw_times + ": " + dice[0] + " d " + dice[1] + " = ");
                Console.WriteLine(total_var);
                throw_times++;

                // print the two dice in descending order
                Array.Sort(dice);
                for (int a = dice.Length - 1; a >= 0; a--)
                {
                    Console.WriteLine("#" + dice[a]);
                }
            }
            Console.WriteLine("Total sum: " + sum); // sum of all rolls across the n throws
            Console.ReadLine();
        }
    }
}
I have a piece of code in my C# Windows Forms application that looks like this:
List<string> RESULT_LIST = new List<string>();
int[] arr = My_LIST.ToArray();
string s = "";
Stopwatch sw = new Stopwatch();
sw.Start();
for (int i = 0; i < arr.Length; i++)
{
    int counter = i;
    for (int j = 1; j <= arr.Length; j++)
    {
        counter++;
        if (counter == arr.Length)
        {
            counter = 0;
        }
        s += arr[counter].ToString();
        RESULT_LIST.Add(s);
    }
    s = "";
}
sw.Stop();
TimeSpan ts = sw.Elapsed;
string elapsedTime = String.Format("{0:00}", ts.TotalMilliseconds * 1000);
MessageBox.Show(elapsedTime);
I use this code to get all the cyclic combinations of the numbers in my list; I treat My_LIST as circular, wrapping around at the end. The image below demonstrates my purpose very clearly:
All I need to do is:
Make a formula to calculate the approximate run time of these two nested for loops, so I can predict the run time for any length and let the user know approximately how long he/she must wait.
I have used a C# Stopwatch (Stopwatch sw = new Stopwatch();) to measure the run time, and the results are below. (Note that, to reduce the chance of error, I repeated the measurement three times for each length; the numbers show the time in microseconds for the first, second and third attempt respectively.)
arr.Length = 400; 127838 - 107251 - 100898
arr.Length = 800; 751282 - 750574 - 739869
arr.Length = 1200; 2320517 - 2136107 - 2146099
arr.Length = 2000; 8502631 - 7554743 - 7635173
Note that there are only one-digit numbers in My_LIST, to make the time of adding numbers to the list approximately equal.
How can I find out the relation between arr.Length and run time?
First, let's suppose you have examined the algorithm and noticed that it appears to be quadratic in the array length. This suggests to us that the time taken to run should be a function of the form
t = A + B·n + C·n²
You've gathered some observations by running the code multiple times with different values for n and measuring t. That's a good approach.
The question now is: what are the best values for A, B and C such that they match your observations closely?
This problem can be solved in a variety of ways; I would suggest to you that the least-squares method of regression would be the place to start, and see if you get good results. There's a page on it here:
www.efunda.com/math/leastsquares/lstsqr2dcurve.cfm
UPDATE: I just looked at your algorithm again and realized it is cubic because you have a quadratic string concat in the inner loop. So this technique might not work so well. I suggest you use StringBuilder to make your algorithm quadratic.
Now, suppose you did not know ahead of time that the problem was quadratic. How would you determine the formula then? A good start would be to graph your points on log scale paper; if they roughly form a straight line then the slope of the line gives you a clue as to the power of the polynomial. If they don't form a straight line -- well, cross that bridge when you come to it.
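As a concrete sketch of the StringBuilder suggestion in the UPDATE, the inner loop's repeated concatenation could be rewritten along these lines (using the variable names from the question; requires using System.Text;):
var sb = new StringBuilder();
for (int j = 1; j <= arr.Length; j++)
{
    counter++;
    if (counter == arr.Length)
    {
        counter = 0;
    }
    sb.Append(arr[counter]); // O(1) amortized, instead of re-copying the whole prefix
    RESULT_LIST.Add(sb.ToString());
}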
Well, you're going to have to do some math here.
The total number of inner-loop iterations is exactly n² — not just O(n²), but exactly n² times.
So you could keep a counter variable for the number of items processed and use a little math to produce an estimate:
int numItemProcessed;
int timeElapsed; // read from the stopwatch
int totalItems = n * n;
int remainingEstimate = (int)((float)(totalItems - numItemProcessed) / numItemProcessed * timeElapsed);
Don't assume the algorithm is necessarily N^2 in time complexity.
Take the averages of your numbers, plot them on a log-log plot, and measure the gradient of the best-fit line. This will give you an idea as to the largest term in the polynomial. (see wikipedia log-log plot)
Once you have that, you can do a least-squares regression to work out the coefficients of the polynomial of the correct order. This will allow an estimate from the data, of the time taken for an unseen problem.
Note: As Eric Lippert said, it depends on what you want to measure - averaging may not be appropriate depending on your use case - the first run time might be more correct.
This method will work for any polynomial algorithm. It will also tell you if the algorithm is polynomial (non-polynomial running times will not give straight lines on the log-log plot).
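A quick sketch of that gradient measurement, using the first-run timings from the question:
using System;

class SlopeEstimate
{
    static void Main()
    {
        // (n, time) pairs from the question's first runs.
        double[] n = { 400, 800, 1200, 2000 };
        double[] t = { 127838, 751282, 2320517, 8502631 };

        // A polynomial t = C * n^p is a straight line on a log-log plot,
        // with gradient p; estimate p from the endpoints.
        double p = Math.Log(t[3] / t[0]) / Math.Log(n[3] / n[0]);
        Console.WriteLine("Estimated exponent: " + p.ToString("F2")); // ~2.6 for this data
    }
}
The ~2.6 exponent sits between quadratic and cubic, consistent with Eric Lippert's observation about the string concatenation.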
I was trying to create my own factorial function when I found that the calculation is twice as fast if it is done in pairs. Like this:
Groups of 1: 2*3*4 ... 50000*50001 = 4.1 seconds
Groups of 2: (2*3)*(4*5)*(6*7) ... (50000*50001) = 2.0 seconds
Groups of 3: (2*3*4)*(5*6*7) ... (49999*50000*50001) = 4.8 seconds
Here is the C# I used to test this.
Stopwatch timer = new Stopwatch();
timer.Start();

// Separate the calculation into groups of this size.
int k = 2;

BigInteger total = 1;

// Iterates from 2 up to 50001, but instead of incrementing 'i' by one, it increments it by 'k';
// the inner loop calculates the product of 'i' to 'i + k - 1' and multiplies 'total' by that result.
for (var i = 2; i < 50000 + 2; i += k)
{
    BigInteger partialTotal = 1;
    for (var j = 0; j < k; j++)
    {
        // Stops if it exceeds 50000.
        if (i + j >= 50000) break;
        partialTotal *= i + j;
    }
    total *= partialTotal;
}

Console.WriteLine(timer.ElapsedMilliseconds / 1000.0 + "s");
I tested this at different group sizes and put the average times over a few tests in a bar graph. I expected it to become more efficient as I increased the group size, but 3 was the least efficient, and 4 had no improvement over groups of 1.
Link to First Data
Link to Second Data
What causes this difference, and is there an optimal way to calculate this?
BigInteger has a fast case for numbers of 31 bits or less. When you do a pairwise multiplication, this means a specific fast path is taken that multiplies the values into a single ulong and sets the value more directly:
public void Mul(ref BigIntegerBuilder reg1, ref BigIntegerBuilder reg2) {
    ...
    if (reg1._iuLast == 0) {
        if (reg2._iuLast == 0)
            Set((ulong)reg1._uSmall * reg2._uSmall);
        else {
            ...
        }
    }
    else if (reg2._iuLast == 0) {
        ...
    }
    else {
        ...
    }
}

public void Set(ulong uu) {
    uint uHi = NumericsHelpers.GetHi(uu);
    if (uHi == 0) {
        _uSmall = NumericsHelpers.GetLo(uu);
        _iuLast = 0;
    }
    else {
        SetSizeLazy(2);
        _rgu[0] = (uint)uu;
        _rgu[1] = uHi;
    }
    AssertValid(true);
}
A 100% predictable branch like this is perfect for a JIT, and this fast-path should get optimized extremely well. It's possible that _rgu[0] and _rgu[1] are even inlined. This is extremely cheap, so effectively cuts down the number of real operations by a factor of two.
So why is a group of three so much slower? It's obvious that it should be slower than for k = 2; you have far fewer optimized multiplications. More interesting is why it's slower than k = 1. This is easily explained by the fact that the outer multiplication of total now hits the slow path. For k = 2 this impact is mitigated by halving the number of multiplies and the potential inlining of the array.
However, these factors do not help k = 3, and in fact the slow case hurts k = 3 a lot more. The second multiplication in the k = 3 case hits this case
if (reg1._iuLast == 0) {
    ...
}
else if (reg2._iuLast == 0) {
    Load(ref reg1, 1);
    Mul(reg2._uSmall);
}
else {
    ...
}
which allocates
EnsureWritable(1);

uint uCarry = 0;
for (int iu = 0; iu <= _iuLast; iu++)
    uCarry = MulCarry(ref _rgu[iu], u, uCarry);

if (uCarry != 0) {
    SetSizeKeep(_iuLast + 2, 0);
    _rgu[_iuLast] = uCarry;
}
Why does this matter? Well, EnsureWritable(1) causes
uint[] rgu = new uint[_iuLast + 1 + cuExtra];
so rgu becomes length 3. The number of passes in total's code is decided in
public void Mul(ref BigIntegerBuilder reg1, ref BigIntegerBuilder reg2)
as
for (int iu1 = 0; iu1 < cu1; iu1++) {
    ...
    for (int iu2 = 0; iu2 < cu2; iu2++, iuRes++)
        uCarry = AddMulCarry(ref _rgu[iuRes], uCur, rgu2[iu2], uCarry);
    ...
}
which means that we have a total of len(total._rgu) * 3 operations. This hasn't saved us anything! There are only len(total._rgu) * 1 passes for k = 1 - we just do it three times!
There is actually an optimization on the outer loop that reduces this back down to len(total._rgu) * 2:
uint uCur = rgu1[iu1];
if (uCur == 0)
    continue;
However, they "optimize" this optimization in a way that hurts far more than before:
if (reg1.CuNonZero <= reg2.CuNonZero) {
    rgu1 = reg1._rgu; cu1 = reg1._iuLast + 1;
    rgu2 = reg2._rgu; cu2 = reg2._iuLast + 1;
}
else {
    rgu1 = reg2._rgu; cu1 = reg2._iuLast + 1;
    rgu2 = reg1._rgu; cu2 = reg1._iuLast + 1;
}
For k = 2, that causes the outer loop to be over total, since reg2 contains no zero values with high probability. This is great because total is way longer than partialTotal, so the fewer passes the better. For k = 3, the EnsureWritable(1) will always cause a spare space because the multiplication of three numbers no more than 15 bits long can never exceed 64 bits. This means that, although we still only do one pass over total for k = 2, we do two for k = 3!
This starts to explain why the speed increases again beyond k = 3: the number of passes per addition increases slower than the number of additions decreases, as you're only adding ~15 bits to the inner value each time. The inner multiplications are fast relative to the massive total multiplications, so the more time spent consolidating values, the more time saved in passes over total. Further, the optimization is less frequently a pessimism.
It also explains why odd values take longer: they add an extra 32-bit integer to the _rgu array. This wouldn't happen so cleanly if the ~15 bits weren't so close to half of 32.
It's worth noting that there are a lot of ways to improve this code; the comments here are about why, not how to fix it. The easiest improvement would be to chuck the values in a heap and multiply only the two smallest values at a time.
The time required to do a BigInteger multiplication depends on the size of the product.
Both methods take the same number of multiplications, but if you multiply the factors in pairs, then the average size of the product is much smaller than it is if you multiply each factor with the product of all the smaller ones.
You can do even better if you always multiply the two smallest factors (original factors or intermediate products) that have yet to be multiplied together, until you get to the complete product.
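A sketch of that smallest-two-first strategy, using .NET's PriorityQueue (available since .NET 6):
using System;
using System.Collections.Generic;
using System.Numerics;

class HeapFactorial
{
    static void Main()
    {
        // Always multiply the two smallest remaining values, so the operands
        // stay close in size and intermediate products stay as small as possible.
        var heap = new PriorityQueue<BigInteger, BigInteger>();
        for (int i = 2; i <= 50001; i++)
            heap.Enqueue(i, i);

        while (heap.Count > 1)
        {
            BigInteger product = heap.Dequeue() * heap.Dequeue();
            heap.Enqueue(product, product);
        }

        Console.WriteLine(heap.Dequeue().ToString().Length + " digits");
    }
}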
I think you have a bug ('+' instead of '*').
partialTotal *= i + j;
Good to check that you are getting the right answer, not just interesting performance metrics.
But I'm curious what motivated you to try this. If you do find a difference, I would expect it to have to do with optimizations in register and/or memory allocation. And I would expect the difference to be 0-30% or something like that, not 50%.
In trying to test whether knowing the history of a random number generator could help predict future results, I found a strong, unexpected correlation between the average of the numbers generated and the number of correct guesses.
The test was supposed to simulate flipping a coin (heads = 0, tails = 1) and if previous attempts were biased towards heads then guess tails and vice versa.
Why is the sum of the generated numbers always nearly equal to the number of correct guesses in the following LinqPad program?
void Main()
{
    var rnd = new Random();
    var attempts = 10000000;
    var correctGuesses = 0;
    long sum = 0;
    decimal avg = 0.5m;
    for (int i = 0; i < attempts; i++)
    {
        var guess = avg < 0.5m ? 1 : 0;
        var result = rnd.Next(0, 2);
        if (guess == result)
        {
            correctGuesses += 1;
        }
        sum += result;
        avg = (decimal)sum / (decimal)attempts;
    }
    attempts.Dump("Attempts");
    correctGuesses.Dump("Correct Guesses");
    avg = (decimal)sum / (decimal)attempts;
    avg.Dump("Random Number Average");
}
Have I made an error in the code? Is this a natural relationship? I expected the average to converge on 0.5 as I increased the number of attempts, because the distribution is fairly even (I tested this with 10bn calls to Random.Next(0,2)), but I did not expect the sum of the generated numbers to correlate with the number of correct guesses.
Your error is this line:
avg = (decimal)sum/(decimal)attempts;
It makes no sense to divide the sum (which is only accumulated up to iteration i at that point) by attempts. Divide by i (EDIT: more precisely, i + 1) instead for avg to give you something meaningful.
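That is, the last line of the loop body should read:
avg = (decimal)sum / (i + 1); // divide by the results seen so far, not by attempts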
The Random class, when constructed without a seed, uses the current time as its seed. That mostly matters when several Random instances are created in quick succession: they can end up with the same seed and therefore produce the same sequence of numbers.
I'm a beginner in C#. I'm trying to write an application to get the primes between two numbers entered by the user. The problem is: at large numbers (valid numbers are in the range from 1 to 1000000000), getting the primes takes a long time, and according to the problem I'm solving, the whole operation must be carried out within a small time limit. This is the problem link, for more explanation:
SPOJ-Prime
And here's the part of my code that's responsible of getting primes:
public void GetPrime()
{
    int L1 = int.Parse(Limits[0]);
    int L2 = int.Parse(Limits[1]);
    if (L1 == 1)
    {
        L1++;
    }
    for (int i = L1; i <= L2; i++)
    {
        for (int k = L1; k <= L2; k++)
        {
            if (i == k)
            {
                continue;
            }
            else if (i % k == 0)
            {
                flag = false;
                break;
            }
            else
            {
                flag = true;
            }
        }
        if (flag)
        {
            Console.WriteLine(i);
        }
    }
}
Is there any faster algorithm?
Thanks in advance.
I remember solving the problem like this:
Use the Sieve of Eratosthenes to generate all primes below sqrt(1000000000) = ~32,000 in an array primes.
For each number x between m and n, test whether it's prime only by testing for divisibility against the numbers <= sqrt(x) from the array primes. So for x = 29 you will only test whether it's divisible by 2, 3 and 5.
There's no point in checking for divisibility against non-primes: if x is divisible by a non-prime y, then there exists a prime p < y such that x is divisible by p, since we can write y as a product of primes. For example, 12 is divisible by 6, but 6 = 2 * 3, which means that 12 is also divisible by 2 and by 3. By generating all the needed primes in advance (there are very few in this case), you significantly reduce the time needed for the actual primality testing.
This will get accepted and doesn't require any optimization or modification to the sieve, and it's a pretty clean implementation.
You can do it faster by generalising the sieve to generate primes in an interval [left, right], not [2, right] like it's usually presented in tutorials and textbooks. This can get pretty ugly however, and it's not needed. But if anyone is interested, see:
http://pastie.org/9199654 and this linked answer.
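A C# sketch of this approach (sieve the small primes once, then trial-divide each candidate in [m, n]; the range values below are placeholders):
using System;
using System.Collections.Generic;

class PrimeRange
{
    // Sieve of Eratosthenes for all primes below 'limit'.
    static List<int> SmallPrimes(int limit)
    {
        var composite = new bool[limit];
        var primes = new List<int>();
        for (int i = 2; i < limit; i++)
        {
            if (composite[i]) continue;
            primes.Add(i);
            for (long j = (long)i * i; j < limit; j += i)
                composite[j] = true;
        }
        return primes;
    }

    static void Main()
    {
        List<int> primes = SmallPrimes(32000); // just above sqrt(1e9)
        long m = 999999000, n = 1000000000;    // example range, n - m <= 1e6

        for (long x = Math.Max(m, 2); x <= n; x++)
        {
            bool isPrime = true;
            foreach (int p in primes)
            {
                if ((long)p * p > x) break;                 // only divisors <= sqrt(x)
                if (x % p == 0) { isPrime = false; break; }
            }
            if (isPrime) Console.WriteLine(x);
        }
    }
}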
You are doing a lot of extra divisions that are not needed - if you know a number is not divisible by 3, there is no point in checking if it is divisible by 9, 27, etc. You should try to divide only by the potential prime factors of the number. Cache the set of primes you are generating and only check division by the previous primes. Note that you do need to generate the initial set of primes below L1.
Remember that no number will have a prime factor that's greater than its own square root, so you can stop your divisions at that point. For instance, you can stop checking potential factors of the number 29 after 5.
You also can increment by 2, so you can skip checking even numbers for primality altogether (special-casing the number 2, of course).
I used to ask this question during interviews - as a test I compared an implementation similar to yours with the algorithm I described. With the optimized algorithm, I could generate hundreds of thousands of primes very fast - I never bothered waiting around for the slow, straightforward implementation.
You could try the Sieve of Eratosthenes. The basic difference would be that you start at L1 instead of starting at 2.
Let's change the question a bit: How quickly can you generate the primes between m and n and simply write them to memory? (Or, possibly, to a RAM disk.) On the other hand, remember the range of parameters as described on the problem page: m and n can be as high as a billion, while n-m is at most a million.
IVlad and Brian gave most of a competitive solution, even if it is true that a slower solution could be good enough. First generate or even precompute the prime numbers less than sqrt(a billion); there aren't very many of them. Then do a truncated Sieve of Eratosthenes: make an array of length n-m+1 to keep track of the status of every number in the range [m,n], with every such number initially marked as prime (1). Then for each precomputed prime p, run a loop that looks like this:
for(k=ceil(m/p)*p; k <= n; k += p) status[k-m] = 0;
This loop marks all of the numbers in the range m <= x <= n as composite (0) if they are multiples of p. If this is what IVlad meant by "pretty ugly", I don't agree; I don't think that it's so bad.
In fact, almost 40% of this work is just for the primes 2, 3, and 5. There is a trick to combine the sieve for a few primes with initialization of the status array. Namely, the pattern of divisibility by 2, 3, and 5 repeats mod 30. Instead of initializing the array to all 1s, you can initialize it to a repeating pattern of 010000010001010001010001000001. If you want to be even more cutting edge, you can advance k by 30*p instead of by p, and only mark off the multiples in the same pattern.
After this, realistic performance gains would involve steps like using a bit vector rather than a char array to keep the sieve data in on-chip cache. And initializing the bit vector word by word rather than bit by bit. This does get messy, and also hypothetical since you can get to the point of generating primes faster than you can use them. The basic sieve is already very fast and not very complicated.
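For reference, the marking loop above translates to C# along these lines (without the mod-30 wheel, for clarity; primes holds the precomputed primes below sqrt(n), as described, e.g. from a small-primes sieve like the one sketched earlier):
// Truncated sieve for the range [m, n].
static bool[] SieveRange(long m, long n, List<int> primes)
{
    var status = new bool[(int)(n - m + 1)];
    for (int i = 0; i < status.Length; i++) status[i] = true; // initially all prime (1)

    foreach (int p in primes)
    {
        // First multiple of p in [m, n] is ceil(m/p)*p, but start no lower
        // than p*p so that p itself is never marked off.
        long start = Math.Max((m + p - 1) / p * p, (long)p * p);
        for (long k = start; k <= n; k += p)
            status[k - m] = false; // composite (0)
    }

    if (m == 1) status[0] = false; // 1 is not prime
    return status;
}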
One thing no one's mentioned is that it's rather quick to test a single number for primality. Thus, if the range involved is small but the numbers are large (ex. generate all primes between 1,000,000,000 and 1,000,100,000), it would be faster to just check every number for primality individually.
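For example, a deterministic Miller-Rabin test with the witness set {2, 7, 61} is known to be exact for every n below 4,759,123,141, which comfortably covers this problem's 10^9 range (a sketch):
using System;

class SingleNumberPrimality
{
    // Deterministic Miller-Rabin for 32-bit inputs: witnesses {2, 7, 61}
    // are known to be sufficient for all n < 4,759,123,141.
    static bool IsPrime(uint n)
    {
        if (n < 2) return false;
        foreach (uint p in new uint[] { 2, 3, 5, 7 })
        {
            if (n == p) return true;
            if (n % p == 0) return false;
        }

        uint d = n - 1;
        int r = 0;
        while ((d & 1) == 0) { d >>= 1; r++; } // n - 1 = d * 2^r with d odd

        foreach (uint a in new uint[] { 2, 7, 61 })
        {
            if (n == a) return true;
            ulong x = PowMod(a, d, n);
            if (x == 1 || x == n - 1) continue;
            bool composite = true;
            for (int i = 1; i < r; i++)
            {
                x = x * x % n; // safe: x < n < 2^32, so x*x fits in a ulong
                if (x == n - 1) { composite = false; break; }
            }
            if (composite) return false;
        }
        return true;
    }

    static ulong PowMod(ulong b, ulong e, ulong m)
    {
        ulong result = 1;
        b %= m;
        while (e > 0)
        {
            if ((e & 1) == 1) result = result * b % m;
            b = b * b % m;
            e >>= 1;
        }
        return result;
    }

    static void Main()
    {
        Console.WriteLine(IsPrime(1000000007)); // True
        Console.WriteLine(IsPrime(1000000008)); // False
    }
}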
There are many algorithms finding prime numbers. Some are faster, others are easier.
You can start by making some of the easiest optimizations. For example:
why are you testing whether every number is prime? In other words, given a range of 411 to 418, is there really a need to check whether the numbers 412, 414, 416 and 418 are prime? Numbers divisible by 2 and 3 can be skipped with very simple code modifications.
if the number is not 5 but ends with the digit '5' (1405, 335), it is not prime. (On reflection, this one is a bad idea: the extra check will make the search slower.)
what about caching the results? You can then divide by primes rather than by every number. Moreover, only primes less than the square root of the number you're testing need to be considered.
If you need something really fast and optimized, taking an existing algorithm instead of reinventing the wheel can be an alternative. You can also try to find some scientific papers explaining how to do it fast, but it can be difficult to understand and to translate to code.
int ceilingNumber = 1000000;
int myPrimes = 0;
BitArray myNumbers = new BitArray(ceilingNumber, true);

for (int x = 2; x < ceilingNumber; x++)
    if (myNumbers[x])
    {
        // x is prime; mark all of its multiples as composite
        for (int y = x * 2; y < ceilingNumber; y += x)
            myNumbers[y] = false;
    }

for (int x = 2; x < ceilingNumber; x++)
    if (myNumbers[x])
    {
        myPrimes++;
        Console.Out.WriteLine(x);
    }

Console.Out.WriteLine("======================================================");
Console.Out.WriteLine("There are {0} primes between 0 and {1}", myPrimes, ceilingNumber);
Console.In.ReadLine();
I think I have a very fast and efficient algorithm for getting prime numbers (it generates all primes, even when using type BigInteger). It is much faster and simpler than any other I know, and I use it to solve almost every problem related to prime numbers in Project Euler, with just a few seconds for a complete brute-force solution.
Here is the Java code:
public boolean checkprime(int value){ //Using for loop if need to generate prime in a
    int n, limit;
    boolean isprime;
    isprime = true;
    limit = value / 2;
    if(value == 1) isprime = false;
    /*if(value >100)limit = value/10; // if 1 number is not prime it will generate
    if(value >10000)limit = value/100; //at lest 2 factor (not 1 or itself)
    if(value >90000)limit = value/300; // 1 greater than average 1 lower than average
    if(value >1000000)limit = value/1000; //ex: 9997 =13*769 (average ~ sqrt(9997) is 100)
    if(value >4000000)limit = value/2000; //so we just want to check divisor up to 100
    if(value >9000000)limit = value/3000; // for prime ~10000
    */
    limit = (int)Math.sqrt(value); //General case
    for(n=2; n <= limit; n++){
        if(value % n == 0 && value != 2){
            isprime = false;
            break;
        }
    }
    return isprime;
}
import java.io.*;
import java.util.Scanner;

class Test{
    public static void main(String args[]){
        Test tt = new Test();
        Scanner obj = new Scanner(System.in);
        int m, n;
        System.out.println("Enter the lower and upper limits:");
        m = obj.nextInt();
        n = obj.nextInt();
        tt.IsPrime(n, m);
    }

    public void IsPrime(int num, int k)
    {
        boolean[] isPrime = new boolean[num+1];
        // initially assume all integers are prime
        for (int i = 2; i <= num; i++) {
            isPrime[i] = true;
        }

        // mark non-primes <= N using Sieve of Eratosthenes
        for (int i = 2; i*i <= num; i++) {
            // if i is prime, then mark multiples of i as nonprime
            // suffices to consider multiples i, i+1, ..., N/i
            if (isPrime[i]) {
                for (int j = i; i*j <= num; j++) {
                    isPrime[i*j] = false;
                }
            }
        }

        for (int i = k; i <= num; i++) {
            if (isPrime[i])
            {
                System.out.println(i);
            }
        }
    }
}
List<int> prime(int x, int y)
{
    List<int> a = new List<int>();
    for (int m = x; m < y; m++)
    {
        int b = 0; // reset for each candidate
        for (int i = 2; i <= m / 2; i++)
        {
            if (m % i == 0)
            {
                b = 1;
                break;
            }
        }
        if (b == 0) a.Add(m);
    }
    return a;
}