I need to calculate a value based on a formula I found (shown in the screenshot below).
I've tried to implement this equation in my C# app, but I don't get the expected values.
For example, I've created a basic console app:
public static void Calculate()
{
    var values = new double[] { 1, 0.75, 0.5, 0.25, 0 };
    // y = - ((4/3) * x^3) + (3 * x^2) - ((2/3) * x)
    foreach (var value in values)
    {
        var calcul1 = - ((4 / 3) * Math.Pow(value, 3))
                      + (3 * Math.Pow(value, 2))
                      - ((2 / 3) * value);
        var calcul2 = ((-4 / 3) * (value * value * value))
                      + (3 * (value * value))
                      + ((-2 / 3) * value);
        Console.WriteLine($"value: {value} - calcul1: {calcul1} / calcul2: {calcul2}");
    }
}
I get these results, which are not close to the expected ones:
value: 1 - calcul1: 2 / calcul2: 2
value: 0.75 - calcul1: 1.265625 / calcul2: 1.265625
value: 0.5 - calcul1: 0.625 / calcul2: 0.625
value: 0.25 - calcul1: 0.171875 / calcul2: 0.171875
value: 0 - calcul1: 0 / calcul2: 0
What's wrong? Is it related to the use of double?
I refactored your code in order to obtain the correct values.
If you perform a division between two integers without an explicit cast, C# does integer division and discards the decimal part:
public static void Calculate()
{
    var values = new double[] { 1, 0.75, 0.5, 0.25, 0 };
    // y = - ((4/3) * x^3) + (3 * x^2) - ((2/3) * x)
    foreach (var value in values)
    {
        double firstFraction = (double)4 / 3;
        double secondFraction = (double)2 / 3;
        var calcul1 = - (firstFraction * Math.Pow(value, 3))
                      + (3 * Math.Pow(value, 2))
                      - (secondFraction * value);
        var calcul2 = ((firstFraction * -1) * (value * value * value))
                      + (3 * (value * value))
                      + ((secondFraction * -1) * value);
        Console.WriteLine($"value: {value} - calcul1: {calcul1} / calcul2: {calcul2}");
    }
}
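As a sanity check (my own arithmetic, not part of the original answer), evaluating y = -(4/3)x^3 + 3x^2 - (2/3)x by hand at x = 0.5 gives
-(4/3)(0.125) + 3(0.25) - (2/3)(0.5) = -1/6 + 3/4 - 1/3 = 0.25
which is what the corrected code prints (up to floating-point rounding); the integer-division version printed 0.625 instead.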
You need to be more careful with your data types.
As a test, try to show the output for
var calc = (4 / 3) * 0.5;
Console.WriteLine(calc); //-> shows 0.5
because 4 and 3 are integers, a data type without a decimal point, 4 / 3 evaluates to 1.
You can force 4 and 3 to be evaluated as doubles by using the D literal suffix (see the floating-point types documentation), like this:
var calc = (4D / 3D) * 0.5;
Console.WriteLine(calc); //-> shows 0.666666666666667
see dotnetfiddle for an online example
I have a problem that I've been stuck on for three days.
You want to play the 6/49 lottery with a single simple ticket, and you want to know your odds of winning:
-at category I (6 numbers)
-at category II (5 numbers)
-at category III (4 numbers)
Write a console app which reads from input the total number of balls, the number of extracted balls, and the category, then prints the odds of winning with a precision of 10 decimals if you play with one simple variant.
Inputs:
40
5
II
Result I must print:
0.0002659542
static void Main(string[] args)
{
    int numberOfBalls = Convert.ToInt32(Console.ReadLine());
    int balls = Convert.ToInt32(Console.ReadLine());
    string line = Console.ReadLine();
    int theCategory = FindCategory(line);
    double theResult = CalculateChance(numberOfBalls, balls, theCategory);
    Console.WriteLine(theResult);
}

static int FindCategory(string input)
{
    int category = 0;
    switch (input)
    {
        case "I":
            category = 1;
            break;
        case "II":
            category = 2;
            break;
        case "III":
            category = 3;
            break;
        default:
            Console.WriteLine("Wrong category.");
            break;
    }
    return category;
}

static int CalculateFactorial(int x)
{
    int factorial = 1;
    for (int i = 1; i <= x; i++)
        factorial *= i;
    return factorial;
}

static int CalculateCombinations(int x, int y)
{
    int combinations = CalculateFactorial(x) / (CalculateFactorial(y) * CalculateFactorial(x - y));
    return combinations;
}

static double CalculateChance(int a, int b, int c)
{
    double result = c / CalculateCombinations(a, b);
    return result;
}
Now my problems: I'm pretty sure I have to use combinations. For combinations I need factorials, but the combinations formula involves pretty big factorials, so my numbers get truncated. My second problem is that I don't really understand what I have to do with those categories, and I'm pretty sure I'm getting that method wrong as well. I'm new to programming, so please bear with me. For this problem I can only use basic stuff, like conditions, methods, primitives, and arrays.
Let's start with the combinatorics; first, let's agree on terms:
a - all possible numbers (40 in your test case)
t - all taken numbers (5 in your test case)
c - category (2 in your test case)
So we have t - c + 1 numbers which win and c - 1 numbers which lose. Let's count the combinations:
All combinations: take t from a possible ones:
A = a! / t! / (a - t)!
Winning numbers' combinations: take t - c + 1 winning number from t possible ones:
W = t! / (t - c + 1)! / (t - t + c - 1)! = t! / (t - c + 1)! / (c - 1)!
Lost numbers' combinations: take c - 1 losing numbers from a - t possible ones:
L = (a - t)! / (c - 1)! / (a - t - c + 1)!
All combinations with category c, i.e. with exactly t - c + 1 winning and c - 1 losing numbers:
C = L * W
Probability:
P = C / A = L * W / A =
t! * t! * (a - t)! * (a - t)! / (t - c + 1)! / (c - 1)! / (c - 1)! / (a - t - c + 1)! / a!
Ugh! Now let's implement some code for it:
Code:
// double: note that int is too small for 40! and similar values
private static double Factorial(int value) {
    double result = 1.0;
    for (int i = 2; i <= value; ++i)
        result *= i;
    return result;
}

private static double Chances(int a, int t, int c) =>
    Factorial(a - t) * Factorial(t) * Factorial(a - t) * Factorial(t) /
    Factorial(t - c + 1) /
    Factorial(c - 1) /
    Factorial(c - 1) /
    Factorial(a - t - c + 1) /
    Factorial(a);
Test:
Console.Write(Chances(40, 5, 2));
Outcome:
0.00026595421332263435
Edit:
In terms of combinations, if C(x, y) means "take y items from x", we have
A = C(a, t); W = C(t, t - c + 1); L = C(a - t, c - 1)
and
P = W * L / A = C(t, t - c + 1) * C(a - t, c - 1) / C(a, t)
Code for Combinations is quite easy; the only trick is that we return double:
// Let's get rid of the noisy "Compute" prefix: what else can we do but compute?
// Just "Combinations".
static double Combinations(int x, int y) =>
    Factorial(x) / Factorial(y) / Factorial(x - y);

private static double Chances(int a, int t, int c) =>
    Combinations(t, t - c + 1) *
    Combinations(a - t, c - 1) /
    Combinations(a, t);
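As a quick check (my own call, mirroring the test above), the combinations-based version returns the same value:
Console.Write(Chances(40, 5, 2)); // 0.00026595421332263435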
You can fiddle the solution
I am trying to turn the BBP formula (Bailey-Borwein-Plouffe) into C# code. It is a digit-extraction (spigot) algorithm for pi in base 16: given the index/decimal place you want, it returns that single digit. Let's say I want the digit at decimal place/index 40000 (in base 16), without having to calculate pi to 40000 decimals, because I don't care about the other digits.
Anyhow, here is the math formula (it doesn't look like it should be too much code?).
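(The formula image is not reproduced here; for reference, the standard BBP series is pi = sum for n = 0 to infinity of (1/16^n) * (4/(8n+1) - 2/(8n+4) - 1/(8n+5) - 1/(8n+6)).)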
I can't say I understand 100% what the formula means; if I did, I could probably turn it into code. But here is my understanding from looking at it.
Is this correct?
pseudo code
Pi = SUM = (for int n = 0; n < infinity;n++) { SUM += ((4/((8*n)+1))
- (2/((8*n)+4)) - (1/((8*n)+5)) - (1/((8*n)+6))*((1/16)^n)) }
Capital sigma is basically like a "for loop" that sums a sequence together?
example
and in C# code:
static int CapSigma(int _start, int _end)
{
    int sum = 0;
    for (int n = _start; n <= _end; n++)
    {
        sum += n;
    }
    return (sum);
}
Code so far (not working):
static int BBPpi(int _precision)
{
    int pi = 0;
    for (int n = 0; n < _precision; n++)
    {
        pi += ((16 ^ -n) * (4 / (8 * n + 1) - 2 / (8 * n + 4) - 1 / (8 * n + 5) - 1 / (8 * n + 6)));
    }
    return (pi);
}
I'm not sure how to turn this into actual code, or whether my pseudo-code math is even correct.
How do you sum from 0 to infinity? You can't do that in a for loop. Also, where in the formula is the part (the "input") that specifies which nth (index) digit you want to get out? Is it the start n (n = 0)? So to get digit 40000, would it be n = 40000?
You need to do the math in double and use a real power function (^ is the XOR operator in C#, not exponentiation):
class Program
{
    static void Main(string[] args)
    {
        double sum = 0;
        for (int n = 0; n < 100; n++)
        {
            sum += BBPpi(n);
            Console.WriteLine(sum.ToString());
        }
        Console.ReadLine();
    }

    static double BBPpi(int n)
    {
        // one term of the BBP series: 16^-n * (4/(8n+1) - 2/(8n+4) - 1/(8n+5) - 1/(8n+6))
        double pi = Math.Pow(16, -n) * (4.0 / (8.0 * n + 1.0) - 2.0 / (8.0 * n + 4.0) - 1.0 / (8.0 * n + 5.0) - 1.0 / (8.0 * n + 6.0));
        return pi;
    }
}
I have 3 very large signed integers.
long x = long.MaxValue;
long y = long.MaxValue - 1;
long z = long.MaxValue - 2;
I want to calculate their truncated average. Expected average value is long.MaxValue - 1, which is 9223372036854775806.
It is impossible to calculate it as:
long avg = (x + y + z) / 3; // 3074457345618258600
Note: I read all those questions about average of 2 numbers, but I don't see how that technique can be applied to average of 3 numbers.
It would be very easy with the usage of BigInteger, but let's assume I cannot use it.
BigInteger bx = new BigInteger(x);
BigInteger by = new BigInteger(y);
BigInteger bz = new BigInteger(z);
BigInteger bavg = (bx + by + bz) / 3; // 9223372036854775806
If I convert to double, then, of course, I lose precision:
double dx = x;
double dy = y;
double dz = z;
double davg = (dx + dy + dz) / 3; // 9223372036854780000
If I convert to decimal, it works, but also let's assume that I cannot use it.
decimal mx = x;
decimal my = y;
decimal mz = z;
decimal mavg = (mx + my + mz) / 3; // 9223372036854775806
Question: Is there a way to calculate the truncated average of 3 very large integers only with the usage of long type? Don't consider that question as C#-specific, just it is easier for me to provide samples in C#.
This code will work, but isn't that pretty.
It first divides all three values (it floors the values, so you 'lose' the remainder), and then divides the remainder:
long n = x / 3
+ y / 3
+ z / 3
+ ( x % 3
+ y % 3
+ z % 3
) / 3
Note that the above sample does not always work properly when having one or more negative values.
As discussed with Ulugbek, since the number of comments below is exploding, here is the current best solution for both positive and negative values.
Thanks to answers and comments of Ulugbek Umirov, James S, KevinZ, Marc van Leeuwen, gnasher729 this is the current solution:
static long CalculateAverage(long x, long y, long z)
{
    return (x % 3 + y % 3 + z % 3 + 6) / 3 - 2
           + x / 3 + y / 3 + z / 3;
}

static long CalculateAverage(params long[] arr)
{
    int count = arr.Length;
    return (arr.Sum(n => n % count) + count * (count - 1)) / count - (count - 1)
           + arr.Sum(n => n / count);
}
NB - Patrick has already given a great answer. Expanding on this you could do a generic version for any number of integers like so:
long x = long.MaxValue;
long y = long.MaxValue - 1;
long z = long.MaxValue - 2;
long[] arr = { x, y, z };
var avg = arr.Select(i => i / arr.Length).Sum()
+ arr.Select(i => i % arr.Length).Sum() / arr.Length;
Patrick Hofman has posted a great solution, but if needed it can still be implemented in several other ways. Using the algorithm here I have another solution. If implemented carefully it may be faster than the multiple divisions on systems with slow hardware dividers. It can be further optimized by using the divide-by-constants technique from Hacker's Delight.
public class int128_t
{
    public long H;   // high 64 bits
    public ulong L;  // low 64 bits

    public int128_t(long h, ulong l)
    {
        H = h;
        L = l;
    }

    public int128_t add(int128_t a)
    {
        ulong l = L + a.L;
        long h = H + a.H + (l < a.L ? 1 : 0); // carry out of the low word
        return new int128_t(h, l);
    }

    private int128_t rshift2() // right shift 2
    {
        return new int128_t(H >> 2, (L >> 2) | ((ulong)(H & 0x3) << 62));
    }

    public int128_t divideby3() // assumes a non-negative value
    {
        var sum = new int128_t(0, 0);
        var num = new int128_t(H, L);
        while (num.H != 0 || num.L > 3)
        {
            int128_t n_sar2 = num.rshift2();
            sum = sum.add(n_sar2);
            num = n_sar2.add(new int128_t(0, num.L & 3));
        }
        if (num.H == 0 && num.L == 3)
        {
            sum = sum.add(new int128_t(0, 1));
        }
        return sum;
    }
};

int128_t t = new int128_t(0, (ulong)x);
t = t.add(new int128_t(0, (ulong)y));
t = t.add(new int128_t(0, (ulong)z));
t = t.divideby3();
long average = (long)t.L;
In C/C++ on 64-bit platforms it's much easier with __int128
int64_t average = ((__int128)x + y + z)/3;
You can calculate the mean of numbers based on the differences between the numbers rather than using the sum.
Let's say x is the max, y is the median, z is the min (as you have). We will call them max, median and min.
Conditional checker added as per #UlugbekUmirov's comment:
long tmp = median + ((min - median) / 2);            // Average of min 2 values
if (median > 0) tmp = median + ((max - median) / 2); // Average of max 2 values
long mean;
if (min > 0) {
    mean = min + (long)((tmp - min) * (2.0 / 3));    // Average of all 3 values
} else if (median > 0) {
    mean = min;
    while (mean != tmp) {
        mean += 2;
        tmp--;
    }
} else if (max > 0) {
    mean = max;
    while (mean != tmp) {
        mean--;
        tmp += 2;
    }
} else {
    mean = max + (long)((tmp - max) * (2.0 / 3));
}
Patching Patrick Hofman's solution with supercat's correction, I give you the following:
static Int64 Avg3(Int64 x, Int64 y, Int64 z)
{
    UInt64 flag = 1ul << 63;
    UInt64 x_ = flag ^ (UInt64)x;
    UInt64 y_ = flag ^ (UInt64)y;
    UInt64 z_ = flag ^ (UInt64)z;
    UInt64 quotient = x_ / 3ul + y_ / 3ul + z_ / 3ul
                      + (x_ % 3ul + y_ % 3ul + z_ % 3ul) / 3ul;
    return (Int64)(quotient ^ flag);
}
And the N element case:
static Int64 AvgN(params Int64[] args)
{
    UInt64 length = (UInt64)args.Length;
    UInt64 flag = 1ul << 63;
    UInt64 quotient_sum = 0;
    UInt64 remainder_sum = 0;
    foreach (Int64 item in args)
    {
        UInt64 uitem = flag ^ (UInt64)item;
        quotient_sum += uitem / length;
        remainder_sum += uitem % length;
    }
    return (Int64)(flag ^ (quotient_sum + remainder_sum / length));
}
This always gives the floor() of the mean, and eliminates every possible edge case.
Because C uses truncated division (rounding toward zero) rather than Euclidean division, it may be easier to compute a properly-rounded average of three unsigned values than of three signed ones. Simply add 0x8000000000000000UL to each number before taking the unsigned average, subtract it after taking the result, and use an unchecked cast back to Int64 to get a signed average.
To compute the unsigned average, compute the sum of the top 32 bits of the three values. Then compute the sum of the bottom 32 bits of the three values, plus the sum from above, plus one [the plus one is to yield a rounded result]. The average will be 0x55555555 times the first sum, plus one third of the second.
Performance on 32-bit processors might be enhanced by producing three "sum" values each of which is 32 bits long, so that the final result is ((0x55555555UL * sumX)<<32) + 0x55555555UL * sumH + sumL/3; it might possibly be further enhanced by replacing sumL/3 with ((sumL * 0x55555556UL) >> 32), though the latter would depend upon the JIT optimizer [it might know how to replace a division by 3 with a multiply, and its code might actually be more efficient than an explicit multiply operation].
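Here is a minimal sketch of that approach (my own code and naming, not supercat's; it computes the floored average, so the rounding "+1" is omitted):

static long Average3(long x, long y, long z)
{
    const ulong bias = 0x8000000000000000UL;
    // bias each value so the whole computation is unsigned (order-preserving)
    ulong ux = (ulong)x ^ bias, uy = (ulong)y ^ bias, uz = (ulong)z ^ bias;

    // split into 32-bit halves so the three-way sum cannot overflow 64 bits
    ulong sumHigh = (ux >> 32) + (uy >> 32) + (uz >> 32);
    ulong sumLow = (ux & 0xFFFFFFFFUL) + (uy & 0xFFFFFFFFUL) + (uz & 0xFFFFFFFFUL) + sumHigh;

    // 2^32 = 3 * 0x55555555 + 1, so floor(total / 3) = 0x55555555 * sumHigh + sumLow / 3
    ulong avg = 0x55555555UL * sumHigh + sumLow / 3;
    return unchecked((long)(avg ^ bias)); // remove the bias again
}

Console.WriteLine(Average3(long.MaxValue, long.MaxValue - 1, long.MaxValue - 2)); // 9223372036854775806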
If you know you have N values, can you just divide each value by N and sum them together?
long GetAverage(long[] arrayVals, int n)
{
    long avg = 0;
    long rem = 0;
    for (int i = 0; i < n; ++i)
    {
        avg += arrayVals[i] / n;
        rem += arrayVals[i] % n;
    }
    return avg + (rem / n);
}
You could use the fact that you can write each of the numbers as y = ax + b, where x is a constant. Each a would be y / x (the integer part of that division). Each b would be y % x (the rest/modulo of that division). If you choose this constant in an intelligent way, for example by choosing the square root of the maximum number as a constant, you can get the average of x numbers without having problems with overflow.
The average of an arbitrary list of numbers can be found by finding:
( ( sum( all A's ) / length ) * constant ) +
( ( sum( all A's ) % length ) * constant / length ) +
( sum( all B's ) / length )
where % denotes modulo and / denotes the 'whole' part of division.
The program would look something like:
class Program
{
    static void Main()
    {
        List<long> list = new List<long>();
        list.Add(long.MaxValue);
        list.Add(long.MaxValue - 1);
        list.Add(long.MaxValue - 2);

        long sumA = 0, sumB = 0;
        long res1, res2, res3;
        // You should calculate the following dynamically
        long constant = 1753413056;

        foreach (long num in list)
        {
            sumA += num / constant;
            sumB += num % constant;
        }

        res1 = (sumA / list.Count) * constant;
        res2 = ((sumA % list.Count) * constant) / list.Count;
        res3 = sumB / list.Count;

        Console.WriteLine(res1 + res2 + res3);
    }
}
I also tried it and came up with a faster solution (although only by a factor of about 3/4). It uses a single division:
public static long avg(long a, long b, long c) {
    long quarterSum = (a >> 2) + (b >> 2) + (c >> 2);
    long lowSum = (a & 3) + (b & 3) + (c & 3);
    long twelfth = quarterSum / 3;
    long quarterRemainder = quarterSum - 3 * twelfth;
    long adjustment = smallDiv3(lowSum + 4 * quarterRemainder);
    return 4 * twelfth + adjustment;
}
where smallDiv3 is division by 3 using multiplication, working only for small arguments:
private static long smallDiv3(long n) {
    System.Diagnostics.Debug.Assert(-30 <= n && n <= 30);
    // Constants found rather experimentally.
    return (64 / 3 * n + 10) >> 6;
}
Here is the whole code including a test and a benchmark, the results are not that impressive.
This function computes the result in two divisions. It should generalize nicely to other divisors and word sizes.
It works by computing the double-word addition result, then working out the division.
static Int64 average(Int64 a, Int64 b, Int64 c)
{
    // constants: 0x10000000000000000 (2^64) div/mod 3
    const Int64 hdiv3 = (Int64)(UInt64.MaxValue / 3); // 0x5555555555555555
    const Int64 hmod3 = 1;

    // compute the signed double-word addition result in hi:lo
    UInt64 lo = (UInt64)a; Int64 hi = a >= 0 ? 0 : -1;
    lo += (UInt64)b; hi += b >= 0 ? (lo < (UInt64)b ? 1 : 0) : -(lo >= (UInt64)b ? 1 : 0);
    lo += (UInt64)c; hi += c >= 0 ? (lo < (UInt64)c ? 1 : 0) : -(lo >= (UInt64)c ? 1 : 0);

    // divide, doing a correction when the high/low modulos add up
    return hi >= 0
        ? (Int64)(lo / 3) + hi * hdiv3 + ((Int64)(lo % 3) + hi * hmod3) / 3
        : (Int64)(lo / 3) + 1 + hi * hdiv3 + ((Int64)(lo % 3) - 3 + hi * hmod3) / 3;
}
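A quick check with the values from the question (my own test call):
Console.WriteLine(average(long.MaxValue, long.MaxValue - 1, long.MaxValue - 2)); // 9223372036854775806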
Math
(x + y + z) / 3 = x/3 + y/3 + z/3
(a[1] + a[2] + .. + a[k]) / k = a[1]/k + a[2]/k + .. + a[k]/k
Code
long calculateAverage(long[] a)
{
    double average = 0;
    foreach (long x in a)
        average += Convert.ToDouble(x) / Convert.ToDouble(a.Length);
    return Convert.ToInt64(Math.Round(average));
}

long calculateAverage_Safe(long[] a)
{
    double average = 0;
    double b = 0;
    foreach (long x in a)
    {
        b = Convert.ToDouble(x) / Convert.ToDouble(a.Length);
        if (b >= (Convert.ToDouble(long.MaxValue) - average))
            throw new OverflowException();
        average += b;
    }
    return Convert.ToInt64(Math.Round(average));
}
Try this:
long n = Array.ConvertAll(new[]{x,y,z},v=>v/3).Sum()
+ (Array.ConvertAll(new[]{x,y,z},v=>v%3).Sum() / 3);
I'm trying to implement the recursive definition of B-splines in C# but I can't get it right. Here's what I've done:
public static Double RecursiveBSpline(int i, Double[] t, int order, Double x)
{
    Double result = 0;
    if (order == 0)
    {
        if (t[i] <= x && x < t[i + 1])
        {
            result = 1;
        }
        else
        {
            result = 0;
        }
    }
    else
    {
        Double denom1, denom2, num1, num2;
        denom1 = t[i + order + 1] - t[i + 1];
        denom2 = t[i + order] - t[i];
        if (denom1 == 0)
        {
            num1 = 0;
        }
        else
        {
            num1 = t[i + order + 1] - x / denom1;
        }
        if (denom2 == 0)
        {
            num2 = 0;
        }
        else
        {
            num2 = x - t[i] / denom2;
        }
        result = num1 * RecursiveBSpline(i + 1, t, order - 1, x)
                 + num2 * RecursiveBSpline(i, t, order - 1, x);
    }
    return result;
}
And here is how I call the function:
Double[] vect = new Double[] { 0, 1, 2, 3 };
MessageBox.Show(BSpline.RecursiveBSpline(0,vect,2,0.5).ToString());
I should see 0.125 on the screen; instead I get 0.25. The two denominator variables are used to check whether they equal 0, and if they do, the corresponding term should be set to 0 by definition. Can someone point out where I'm going wrong?
Bear in mind that the mathematical and logical operators in C# have a precedence order. Your code works fine if you put the right terms in parentheses (explanation follows). This line:
num2 = x - t[i] / denom2;
should be changed to:
num2 = (x - t[i]) / denom2;
and so on. Then the result is as desired: 0.125
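For completeness (making the "and so on" explicit), the other numerator needs the same parenthesization:
num1 = (t[i + order + 1] - x) / denom1;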
The division operator has higher precedence than the addition operator. To affect the evaluation order, use parentheses (everything inside parentheses is evaluated first):
var r1 = 2 + 2 / 2; // Step1: 2 / 2 = 1 Step2: 2 + 1 Output: 3
var r2 = (2 + 2) / 2; // Step1: (2 + 2) = 4 Step2: 4 / 2 = 2 Output: 2