Average function without overflow exception - C#

.NET Framework 3.5.
I'm trying to calculate the average of some pretty large numbers.
For instance:
using System;
using System.Linq;
class Program
{
static void Main(string[] args)
{
var items = new long[]
{
long.MaxValue - 100,
long.MaxValue - 200,
long.MaxValue - 300
};
try
{
var avg = items.Average();
Console.WriteLine(avg);
}
catch (OverflowException ex)
{
Console.WriteLine("can't calculate that!");
}
Console.ReadLine();
}
}
Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected with .NET Reflector, is:
public static double Average(this IEnumerable<long> source)
{
if (source == null)
{
throw Error.ArgumentNull("source");
}
long num = 0L;
long num2 = 0L;
foreach (long num3 in source)
{
num += num3;
num2 += 1L;
}
if (num2 <= 0L)
{
throw Error.NoElements();
}
return (((double) num) / ((double) num2));
}
I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5).
But I still wonder if there's a pretty straightforward implementation for calculating the average of integers without an external library. Do you happen to know of such an implementation?
Thanks!!
UPDATE:
The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating an average of any set of numbers which might sum to a large number that exceeds the type's max value. Sorry about this confusion. I also changed the question's title to avoid additional confusion.
Thanks all!!

This answer used to suggest storing the quotient and remainder (mod count) separately. That solution is less space-efficient and requires more code.
In order to accurately compute the average, you must keep track of the total. There is no way around this, unless you're willing to sacrifice accuracy. You can try to store the total in fancy ways, but ultimately you must be tracking it if the algorithm is correct.
For single-pass algorithms, this is easy to prove. Suppose that, given the algorithm's entire state after processing some items, you could not reconstruct the total of those items. But we can simulate the algorithm receiving a series of 0 items until we finish off the sequence, then multiply the result by the count to get the total. Contradiction. Therefore a single-pass algorithm must be tracking the total in some sense.
Therefore the simplest correct algorithm will just sum up the items and divide by the count. All you have to do is pick an integer type with enough space to store the total. Using a BigInteger guarantees no issues, so I suggest using that.
var total = BigInteger.Zero;
long count = 0;
foreach (var value in values)
{
count += 1;
total += value;
}
return (double)total / count; // warning: possible loss of accuracy, maybe return a Rational instead?

If you're just looking for an arithmetic mean, you can perform the calculation like this:
public static double Mean(this IEnumerable<long> source)
{
if (source == null)
{
throw new ArgumentNullException("source");
}
double count = (double)source.Count();
double mean = 0D;
foreach(long x in source)
{
mean += (double)x/count;
}
return mean;
}
Edit:
In response to comments, there definitely is a loss of precision this way, due to performing numerous divisions and additions. For the values indicated by the question, this should not be a problem, but it should be a consideration.

You may try the following approach:
Let the number of elements be N, and the numbers be arr[0], ..., arr[N-1].
You need to define 2 variables:
mean and remainder.
Initially mean = 0 and remainder = 0.
At step i you need to change mean and remainder in the following way:
mean += arr[i] / N;
remainder += arr[i] % N;
mean += remainder / N;
remainder %= N;
After N steps you will get the correct answer in the mean variable, and remainder / N will be the fractional part of the answer (I am not sure you need it, but anyway).
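A minimal C# sketch of those steps (my code, not part of the answer; it assumes the values are non-negative, since C#'s % operator takes the sign of the dividend):
public static long MeanWithoutOverflow(long[] arr)
{
    long n = arr.Length;
    long mean = 0, remainder = 0;
    for (int i = 0; i < arr.Length; i++)
    {
        mean += arr[i] / n;      // whole part of arr[i] / N
        remainder += arr[i] % n; // accumulate the fractional parts
        mean += remainder / n;   // carry whole units out of the remainder
        remainder %= n;          // keep the remainder below N
    }
    return mean;                 // remainder / (double)n is the fractional part
}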

If you know approximately what the average will be (or, at least, that all pairs of numbers will have a max difference < long.MaxValue), you can calculate the average difference from that value instead. I take an example with low numbers, but it works equally well with large ones.
// Let's say numbers cannot exceed 40.
List<int> numbers = new List<int>() { 31, 28, 24, 32, 36, 29 }; // Average: 30
List<int> diffs = new List<int>();
// This can probably be done more effectively in linq, but to show the idea:
foreach(int number in numbers)
{
diffs.Add(number - numbers.First());
}
// diffs now contains { 0, -3, -7, 1, 5, -2 }
var avgDiff = diffs.Sum() / diffs.Count(); // the average is -1
// To get the average value, just add the average diff to the first value:
var totalAverage = numbers.First()+avgDiff;
You can of course implement this in some way that makes it easier to reuse, for example as an extension method to IEnumerable<long>.
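A sketch of what that might look like (hypothetical names; it assumes every difference from the chosen reference value, and their running sum, fit in a long):
public static long AverageNear(this IEnumerable<long> source, long reference)
{
    long diffSum = 0;
    long count = 0;
    foreach (long value in source)
    {
        diffSum += value - reference; // assumed not to overflow for values near the reference
        count++;
    }
    if (count == 0)
        throw new InvalidOperationException("Sequence contains no elements");
    return reference + diffSum / count;
}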

Here is how I would do it if given this problem. First let's define a very simple RationalNumber class, which contains two properties - Dividend and Divisor - and an operator for adding two rational numbers. Here is how it looks:
public sealed class RationalNumber
{
public RationalNumber()
{
this.Divisor = 1;
}
public static RationalNumber operator +( RationalNumber c1, RationalNumber c2 )
{
RationalNumber result = new RationalNumber();
Int64 nDividend = ( c1.Dividend * c2.Divisor ) + ( c2.Dividend * c1.Divisor );
Int64 nDivisor = c1.Divisor * c2.Divisor;
Int64 nRemainder = nDividend % nDivisor;
if ( nRemainder == 0 )
{
// The number is whole
result.Dividend = nDividend / nDivisor;
}
else
{
Int64 nGreatestCommonDivisor = FindGreatestCommonDivisor( nDividend, nDivisor );
if ( nGreatestCommonDivisor != 0 )
{
nDividend = nDividend / nGreatestCommonDivisor;
nDivisor = nDivisor / nGreatestCommonDivisor;
}
result.Dividend = nDividend;
result.Divisor = nDivisor;
}
return result;
}
private static Int64 FindGreatestCommonDivisor( Int64 a, Int64 b)
{
Int64 nRemainder;
while ( b != 0 )
{
nRemainder = a % b;
a = b;
b = nRemainder;
}
return a;
}
// a / b - a is the dividend, b is the divisor
public Int64 Dividend { get; set; }
public Int64 Divisor { get; set; }
}
Second part is really easy. Let's say we have an array of numbers. Their average is given by Sum(Numbers)/Length(Numbers), which is the same as Number[ 0 ] / Length + Number[ 1 ] / Length + ... + Number[ n ] / Length. To be able to calculate this, we will represent each Number[ i ] / Length as a whole number plus a rational part ( remainder ). Here is how it looks:
Int64[] aValues = new Int64[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
List<RationalNumber> list = new List<RationalNumber>();
Int64 nAverage = 0;
for ( Int32 i = 0; i < aValues.Length; ++i )
{
Int64 nRemainder = aValues[ i ] % aValues.Length;
Int64 nWhole = aValues[ i ] / aValues.Length;
nAverage += nWhole;
if ( nRemainder != 0 )
{
list.Add( new RationalNumber() { Dividend = nRemainder, Divisor = aValues.Length } );
}
}
RationalNumber rationalTotal = new RationalNumber();
foreach ( var rational in list )
{
rationalTotal += rational;
}
nAverage = nAverage + ( rationalTotal.Dividend / rationalTotal.Divisor );
At the end we have a list of rational numbers, and a whole number which we sum together to get the average of the sequence without an overflow. The same approach can be taken for any type without overflowing it, and there is no loss of precision.
EDIT:
Why this works:
Define A: a set of numbers.
if Average( A ) = SUM( A ) / LEN( A ) =>
Average( A ) = A[ 0 ] / LEN( A ) + A[ 1 ] / LEN( A ) + A[ 2 ] / LEN( A ) + ..... + A[ N ] / LEN( A ) =>
if we define An to be a number that satisfies An = X + ( Y / LEN( A ) ), which holds because if you divide A by B you get X with a remainder that is a rational number ( Y / B ).
=> so
Average( A ) = A1 + A2 + A3 + ... + AN = X1 + X2 + X3 + X4 + ... + Remainder1 + Remainder2 + ...;
Sum the whole parts, and sum the remainders by keeping them in rational number form. In the end we get one whole number and one rational number, which summed together give Average( A ). Depending on what precision you'd like, you apply this only to the rational number at the end.

Simple answer with LINQ...
var data = new[] { int.MaxValue, int.MaxValue, int.MaxValue };
var mean = (int)data.Select(d => (double)d / data.Count()).Sum();
Depending on the size of the data set, you may want to force data with .ToList() or .ToArray() before you process this method so it doesn't re-query the count on each pass. (Or you can capture the count before the .Select(..).Sum().)
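For example (a small sketch; source stands in for whatever sequence you start from):
var data = source.ToArray(); // materialise once so the count isn't re-evaluated
var mean = (int)data.Select(d => (double)d / data.Length).Sum();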

If you know in advance that all your numbers are going to be 'big' (in the sense of 'much nearer long.MaxValue than zero'), you can calculate the average of their distances from long.MaxValue; the average of the numbers is then long.MaxValue minus that.
However, this approach will fail if (m)any of the numbers are far from long.MaxValue, so it's horses for courses...

I guess there has to be a compromise somewhere or the other. If the numbers are getting really large, then the last few digits of lower orders (say the lower 5 digits) might not affect the result that much.
Another issue is when you don't really know the size of the dataset coming in, especially in streaming/real-time cases. Here I don't see any solution other than
(previousAverage * oldCount + newValue) / (oldCount + 1)
Here's a suggestion:
*LargestDataTypePossible* currentAverage;
*SomeSuitableDatatypeSupportingRationalValues* newValue;
*int* count;
addToCurrentAverage(value){
newValue = value/100000;
count = count + 1;
currentAverage = (currentAverage * (count-1) + newValue) / count;
}
getCurrentAverage(){
return currentAverage * 100000;
}

Averaging numbers of a specific numeric type in a safe way while also only using that numeric type is actually possible, although I would advise using the help of BigInteger in a practical implementation. I created a project for Safe Numeric Calculations that has a small structure (Int32WithBoundedRollover) which can sum up to 2^32 int32s without any overflow (the structure internally uses two int32 fields to do this, so no larger data types are used).
Once you have this sum you then need to calculate sum/total to get the average, which you can do (although I wouldn't recommend it) by creating another instance of Int32WithBoundedRollover and repeatedly incrementing it by the total. After each increment you can compare it to the sum until you find the integer part of the average. From there you can peel off the remainder and calculate the fractional part. There are likely some clever tricks to make this more efficient, but this basic strategy would certainly work without needing to resort to a bigger data type.
That being said, the current implementation isn't built for this (for instance there is no comparison operator on Int32WithBoundedRollover, although it wouldn't be too hard to add). The reason is that it is just much simpler to use BigInteger at the end to do the calculation. Performance-wise this doesn't matter too much for large averages since it will only be done once, and it is just too clean and easy to understand to worry about coming up with something clever (at least so far...).
As far as your original question which was concerned with the long data type, the Int32WithBoundedRollover could be converted to a LongWithBoundedRollover by just swapping int32 references for long references and it should work just the same. For Int32s I did notice a pretty big difference in performance (in case that is of interest). Compared to the BigInteger only method the method that I produced is around 80% faster for the large (as in total number of data points) samples that I was testing (the code for this is included in the unit tests for the Int32WithBoundedRollover class). This is likely mostly due to the difference between the int32 operations being done in hardware instead of software as the BigInteger operations are.

How about BigInteger in Visual J#?

If you're willing to sacrifice precision, you could do something like:
long num2 = 0L;
foreach (long num3 in source)
{
num2 += 1L;
}
if (num2 <= 0L)
{
throw Error.NoElements();
}
double average = 0;
foreach (long num3 in source)
{
average += (double)num3 / (double)num2;
}
return average;

Perhaps you can reduce every item, calculate the average of the adjusted values, and then multiply it by the number of elements in the collection. However, the result will differ slightly because each item is truncated by the integer division.
var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
var avg = items.Average(i => i / items.Count()) * items.Count();

You could keep a rolling average which you update once for each large number.

Use the IntX library on CodePlex.

NextAverage = CurrentAverage + (NewValue - CurrentAverage) / (CurrentObservations + 1)
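In C#, that incremental update might look like this (a sketch; as with the other floating-point approaches on this page, it trades some precision for overflow safety):
double currentAverage = 0;
long observations = 0;
foreach (long value in items)
{
    observations++;
    currentAverage += (value - currentAverage) / observations;
}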

Here is my version of an extension method that can help with this.
public static long Average(this IEnumerable<long> longs)
{
long mean = 0;
long count = longs.Count();
foreach (var val in longs)
{
mean += val / count;
}
return mean;
}

Let Avg(n) be the average of the first n numbers, and data[n] the nth number.
Avg(n)=(double)(n-1)/(double)n*Avg(n-1)+(double)data[n]/(double)n
This avoids value overflow; however, it loses precision when n is very large.

For two positive numbers (or two negative numbers) , I found a very elegant solution from here.
where the average computation (a+b)/2 can be replaced with a + (b-a)/2.
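A sketch in C# (my naming):
// Overflow-safe midpoint: the subtraction cannot overflow when a and b have the same sign.
static long Midpoint(long a, long b)
{
    return a + (b - a) / 2;
}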

Related

Double every time brings different values

This is my generating algorithm; it generates random double elements for an array whose sum must be 1:
public static double [] GenerateWithSumOfElementsIsOne(int elements)
{
double sum = 1;
double [] arr = new double [elements];
for (int i = 0; i < elements - 1; i++)
{
arr[i] = RandomHelper.GetRandomNumber(0, sum);
sum -= arr[i];
}
arr[elements - 1] = sum;
return arr;
}
And the method helper
public static double GetRandomNumber(double minimum, double maximum)
{
Random random = new Random();
return random.NextDouble() * (maximum - minimum) + minimum;
}
My test cases are:
[Test]
[TestCase(7)]
[TestCase(5)]
[TestCase(4)]
[TestCase(8)]
[TestCase(10)]
[TestCase(50)]
public void GenerateWithSumOfElementsIsOne(int num)
{
Assert.AreEqual(1, RandomArray.GenerateWithSumOfElementsIsOne(num).Sum());
}
And the thing is - when I'm testing, it returns a different value every time, as in these cases:
Expected: 1
But was: 0.99999999999999967d
Expected: 1
But was: 0.99999999999999989d
On the next run, sometimes all of them pass, sometimes not.
I know these are rounding troubles, and I'm asking for some help, dear experts :)
https://en.wikipedia.org/wiki/Floating-point_arithmetic
In computing, floating-point arithmetic is arithmetic using formulaic
representation of real numbers as an approximation so as to support a
trade-off between range and precision. For this reason, floating-point
computation is often found in systems which include very small and
very large real numbers, which require fast processing times. A number
is, in general, represented approximately to a fixed number of
significant digits (the significand) and scaled using an exponent in
some fixed base; the base for the scaling is normally two, ten, or
sixteen.
In short, this is what floats do: they don't hold every value exactly and approximate instead. If you would like more precision, try using a Decimal instead, or add tolerance with an epsilon (an upper bound on the relative error due to rounding in floating point arithmetic):
var ratio = a / b;
var diff = Math.Abs(ratio - 1);
return diff <= epsilon;
Round-off errors are frequent with floating point types (like Single and Double). For example, let's compute an easy sum:
// 0.1 + 0.1 + ... + 0.1 = ? (100 times). Is it 0.1 * 100 == 10? No!
Console.WriteLine((Enumerable.Range(1, 100).Sum(i => 0.1)).ToString("R"));
Outcome:
9.99999999999998
That's why, when comparing floating point values with == or !=, you should add a tolerance:
// We have at least 8 correct digits
// i.e. the absolute value of the (round-off) error is less than the tolerance
Assert.IsTrue(Math.Abs(RandomArray.GenerateWithSumOfElementsIsOne(num).Sum() - 1.0) < 1e-8);

How to know the repeating decimal in a fraction?

I already know how to tell when a fraction produces a repeating decimal. Here is the function.
public bool IsRepeatingDecimal
{
get
{
if (Numerator % Denominator == 0)
return false;
var primes = MathAlgorithms.Primes(Denominator);
foreach (int n in primes)
{
if (n != 2 && n != 5)
return true;
}
return false;
}
}
Now, I'm trying to get the repeated number. I'm checking this web site: http://en.wikipedia.org/wiki/Repeating_decimal
public decimal RepeatingDecimal()
{
if (!IsRepeatingDecimal) throw new InvalidOperationException("The fraction is not producing repeating decimals");
int digitsToTake;
switch (Denominator)
{
case 3:
case 9: digitsToTake = 1; break;
case 11: digitsToTake = 2; break;
case 13: digitsToTake = 6; break;
default: digitsToTake = Denominator - 1; break;
}
return MathExtensions.TruncateAt((decimal)Numerator / Denominator, digitsToTake);
}
But then I realized that some numbers have a finite decimal part followed by an infinitely repeating part. For example: 1/28
Do you know a better way to do this? Or an Algorithm?
A very simple algorithm is this: implement long division. Record every intermediate division you do. As soon as you see a division identical to the one you've done before, you have what's being repeated.
Example: 7/13.
1. 13 goes into 7 0 times with remainder 7; bring down a 0.
2. 13 goes into 70 5 times with remainder 5; bring down a 0.
3. 13 goes into 50 3 times with remainder 11; bring down a 0.
4. 13 goes into 110 8 times with remainder 6; bring down a 0.
5. 13 goes into 60 4 times with remainder 8; bring down a 0.
6. 13 goes into 80 6 times with remainder 2; bring down a 0.
7. 13 goes into 20 1 time with remainder 7; bring down a 0.
8. We have already seen 13 into 70 on line 2; so lines 2-7 give the repeating part
The algorithm gives us 538461 as the repeating part. My calculator says 7/13 is 0.538461538. Looks right to me! All that remains are implementation details, or to find a better algorithm!
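A sketch of that algorithm in C# (my names; requires System.Text and System.Collections.Generic). It records each remainder and the output position where it first appeared, so it also handles a non-repeating prefix such as the one in 1/28:
// Decimal expansion of numerator/denominator (0 < numerator < denominator),
// with the repetend wrapped in parentheses, e.g. 1/28 -> "0.03(571428)".
static string Expand(int numerator, int denominator)
{
    var sb = new StringBuilder("0.");
    var seen = new Dictionary<int, int>(); // remainder -> position where it first occurred
    int remainder = numerator;
    while (remainder != 0 && !seen.ContainsKey(remainder))
    {
        seen[remainder] = sb.Length;
        remainder *= 10;
        sb.Append(remainder / denominator);
        remainder %= denominator;
    }
    if (remainder == 0)
        return sb.ToString(); // terminating expansion, no repetend
    int start = seen[remainder];
    return sb.ToString(0, start) + "(" + sb.ToString(start, sb.Length - start) + ")";
}
For 7/13 this yields "0.(538461)", and for 1/28 it yields "0.03(571428)".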
If you have a (positive) reduced fraction numerator / denominator, the decimal expansion of the fraction terminates if and only if denominator has no prime factor other than 2 or 5. If it has any other prime factor, the decimal expansion will be periodic. However, the cases where the denominator is divisible by at least one of 2 and 5 and where it isn't give rise to slightly different behaviour. We have three cases:
denominator = 2^a * 5^b, then the decimal expansion terminates max {a, b} digits after the decimal point.
denominator = 2^a * 5^b * m where m > 1 is not divisible by 2 or by 5, then the fractional part of the decimal expansions consists of two parts, the pre-period of length max {a, b} and the period, whose length is determined by m and independent of the numerator.
denominator > 1 is not divisible by 2 or by 5, then the decimal expansion is purely periodic, meaning the period starts immediately after the decimal point.
The treatment of cases 1. and 2. has a common part, let c = max {a, b}, then
numerator / denominator = (numerator * 2^(c-a) * 5^(c-b)) / (10^c * m)
where m = 1 for case 1. Note that one of the factors 2^(c-a) and 5^(c-b) with which we multiply the numerator is 1. Then you get the decimal expansion by expanding
(numerator * 2^(c-a) * 5^(c-b)) / m
and shifting the decimal point c places to the left. In the first case (m = 1) that part is trivial.
The treatment of cases 2. and 3. also has a common part, the calculation of a fraction
n / m
where n and m have no common prime factor (and m > 1). We can write n = q*m + r with 0 <= r < m (division with remainder, r = n % m), q is the integral part of the fraction and rather uninteresting.
Since the fraction was assumed reduced, we have r > 0, so we want to find the expansion of a fraction r / m where 0 < r < m and m is not divisible by 2 or by 5. As mentioned above, such an expansion is purely periodic, so finding the period means finding the complete expansion.
Let's go about finding the period heuristically. So let k be the length of the (shortest) period and p = d_1d_2...d_k the period. So
r / m = 0.d_1d_2...d_kd_1d_2...d_kd_1...
= (d_1d_2...d_k)/(10^k) + (d_1d_2...d_k)/(10^(2k)) + (d_1d_2...d_k)/(10^(3k)) + ...
= p/(10^k) * (1 + 1/(10^k) + 1/(10^(2k)) + 1/(10^(3k)) + ...)
The last term is a geometric series, 1 + q + q^2 + q^3 + ... which, for |q| < 1 has the sum 1/(1-q).
In our case, 0 < q = 1/(10^k) < 1, so the sum is 1 / (1 - 1/(10^k)) = 10^k / (10^k-1). Thus we have seen that
r / m = p / (10^k-1)
Since r and m have no common factor, that means there is an s with 10^k - 1 = s*m and p = s*r. If we know k, the length of the period, we can simply find the digits of the period by calculating
p = ((10^k - 1)/m) * r
and padding with leading zeros until we have k digits. (Note: it is that simple only if k is sufficiently small or a big integer type is available. To calculate the period of for example 17/983 with standard fixed-width integer types, use long division as explained by @Patrick87.)
So it remains to find the length of the period. We can revert the reasoning above and find that if m divides 10^u - 1, then we can write
r / m = t/(10^u - 1) = t/(10^u) + t/(10^(2u)) + t/(10^(3u)) + ...
= 0.t_1t_2...t_ut_1t_2...t_ut_1...
and r/m has a period of length u. So the length of the shortest period is the minimal positive u such that m divides 10^u - 1, or, put another way, the smallest positive u such that 10^u % m == 1.
We can find it in O(m) time with
u = 0;
a = 1;
do {
++u;
a = (10*a) % m;
} while(a != 1);
Now, finding the length of the period that way is not more efficient than finding the digits and length of the period together with long division, and for small enough m that is the most efficient method.
int[] long_division(int numerator, int denominator) {
if (numerator < 1 || numerator >= denominator) throw new IllegalArgumentException("Bad call");
// now we know 0 < numerator < denominator
if (denominator % 2 == 0 || denominator % 5 == 0) throw new IllegalArgumentException("Bad denominator");
// now we know we get a purely periodic expansion
int[] digits = new int[denominator];
int k = 0, n = numerator;
do {
n *= 10;
digits[k++] = n / denominator;
n = n % denominator;
}while(n != numerator);
int[] period = new int[k];
for(n = 0; n < k; ++n) {
period[n] = digits[n];
}
return period;
}
That works as long as 10*(denominator - 1) doesn't overflow, of course int could be a 32-bit or 64-bit integer as needed.
But for large denominators that is inefficient; one can find the period length, and also the period itself, faster by considering the prime factorisation of the denominator. Regarding the period length,
If the denominator is a prime power, m = p^k, the period length of r/m is a divisor of (p-1) * p^(k-1)
If a and b are coprime and m = a * b, the period length of r/m is the least common multiple of the period lengths of 1/a and 1/b.
Taken together, the period length of r/m is a divisor of λ(m), where λ is the Carmichael function.
So to find the period length of r/m, find the prime factorisation of m and for all prime power factors p^k, find the period of 1/(p^k) - equivalently, the multiplicative order of 10 modulo p^k, which is known to be a divisor of (p-1) * p^(k-1). Since such numbers haven't many divisors, that is quickly done.
Then find the least common multiple of all these.
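A rough sketch of that computation in C# (my helper names, not from the answer; it assumes m > 1 is coprime to 10 and small enough that the modular multiplications below don't overflow a long):
static long PeriodLength(long m) // length of the period of 1/m, m coprime to 10
{
    long result = 1;
    for (long p = 3; p * p <= m; p += 2) // factor m into odd prime powers (2 cannot divide m)
    {
        if (m % p != 0) continue;
        long pk = 1;
        while (m % p == 0) { m /= p; pk *= p; }
        result = Lcm(result, OrderOfTen(pk, (p - 1) * (pk / p)));
    }
    if (m > 1) // a single prime factor is left over
        result = Lcm(result, OrderOfTen(m, m - 1));
    return result;
}

// The multiplicative order of 10 mod n divides 'bound', so test bound's divisors.
static long OrderOfTen(long n, long bound)
{
    long best = bound;
    for (long d = 1; d * d <= bound; d++)
    {
        if (bound % d != 0) continue;
        if (PowMod(10, d, n) == 1 && d < best) best = d;
        long e = bound / d;
        if (PowMod(10, e, n) == 1 && e < best) best = e;
    }
    return best;
}

static long PowMod(long b, long e, long mod) // b^e mod 'mod' by binary exponentiation
{
    long result = 1 % mod;
    b %= mod;
    while (e > 0)
    {
        if ((e & 1) == 1) result = result * b % mod;
        b = b * b % mod;
        e >>= 1;
    }
    return result;
}

static long Gcd(long a, long b) { while (b != 0) { long t = a % b; a = b; b = t; } return a; }
static long Lcm(long a, long b) { return a / Gcd(a, b) * b; }
For example, PeriodLength(7) and PeriodLength(21) both return 6, matching 1/7 = 0.(142857) and 1/21 = 0.(047619).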
For the period itself (the digits), if a big integer type is available and the period isn't too long, the formula
p = (10^k - 1)/m * r
is a quick way to compute it. If the period is too long or no big integer type is available, efficiently computing the digits is messier, and off the top of my head I don't remember how exactly that is done.
One way would be to repeat the way that you do long division by hand, and keep note of the remainder at each stage. When the remainder repeats, the rest of the process must repeat as well. E.g. the digits of 1.0/7 are 0.1 remainder 3, then 0.14 remainder 2, then 0.142 remainder 6, then 0.1428 remainder 4, then 0.14285 remainder 5, then 0.142857 remainder 1, which is the 1 that starts it off again, and so you get 0.1428571 remainder 3 and it repeats again from there.
The long division algorithm is pretty good, so I have nothing to add there.
But note that your algorithm IsRepeatingDecimal may not work and is inefficient.
It will not work if your fraction is not irreducible, that is, if there exists an integer larger than 1 that divides both your numerator and your denominator. For example, if you feed 7/14 then your algorithm will return true when it should return false.
To reduce your fraction, find the gcd between both numerator and denominator and divide both by this gcd.
If you assume that the fraction is irreducible, then your test
if (Numerator % Denominator == 0)
can simply be replaced with
if (Denominator == 1)
But that is still unnecessary since if Denominator is 1, then your list 'primes' is going to be empty and your algorithm will return false anyway.
Finally, calling MathAlgorithms.Primes(Denominator) is going to be expensive for large numbers and can be avoided. Indeed, all you need to do is divide your denominator by 5 (respectively 2) until it is no longer divisible by 5 (resp. 2). If the end result is 1, then return false, otherwise return true.
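Put together with the reduction step, the check could look something like this (a sketch; it assumes Numerator and Denominator are positive):
public bool IsRepeatingDecimal
{
    get
    {
        int g = Gcd(Numerator, Denominator); // reduce first, so e.g. 7/14 is treated as 1/2
        int d = Denominator / g;
        while (d % 2 == 0) d /= 2;           // strip factors of 2
        while (d % 5 == 0) d /= 5;           // strip factors of 5
        return d != 1;                       // anything left means the expansion repeats
    }
}
private static int Gcd(int a, int b)
{
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}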
I came here expecting to be able to copy & paste the code to do this, but it didn't exist. So after reading @Patrick87's answer, I went ahead and coded it up. I spent some time testing it thoroughly and giving things a nice name. I thought I would leave it here so others don't have to waste their time.
Features:
If the decimal terminates, it handles that. It calculates the period and puts that in a separate variable called period, in case you want to know the length of the reptend.
Limitations:
It will fail if the transient + reptend is longer than can be represented by a System.Decimal.
public static string FormatDecimalExpansion(RationalNumber value)
{
RationalNumber currentValue = value;
string decimalString = value.ToDecimal().ToString();
int currentIndex = decimalString.IndexOf('.');
Dictionary<RationalNumber, int> dict = new Dictionary<RationalNumber, int>();
while (!dict.ContainsKey(currentValue))
{
dict.Add(currentValue, currentIndex);
int rem = currentValue.Numerator % currentValue.Denominator;
int carry = rem * 10;
if (rem == 0) // Terminating decimal
{
return decimalString;
}
currentValue = new RationalNumber(carry, currentValue.Denominator);
currentIndex++;
}
int startIndex = dict[currentValue];
int endIndex = currentIndex;
int period = (endIndex - startIndex); // The period is the length of the reptend
if (endIndex >= decimalString.Length)
{
throw new ArgumentOutOfRangeException(nameof(value),
"The value supplied has a decimal expansion that is longer" +
$" than can be represented by value of type {nameof(System.Decimal)}.");
}
string transient = decimalString.Substring(0, startIndex);
string reptend = decimalString.Substring(startIndex, period);
return transient + $"({reptend})";
}
And for good measure, I will include my RationalNumber struct.
Note: It implements IEquatable<RationalNumber> so that it works correctly with the dictionary:
public struct RationalNumber : IEquatable<RationalNumber>
{
public int Numerator;
public int Denominator;
public RationalNumber(int numerator, int denominator)
{
Numerator = numerator;
Denominator = denominator;
}
public decimal ToDecimal()
{
return Decimal.Divide(Numerator, Denominator);
}
public bool Equals(RationalNumber other)
{
return (Numerator == other.Numerator && Denominator == other.Denominator);
}
public override int GetHashCode()
{
return new Tuple<int, int>(Numerator, Denominator).GetHashCode();
}
public override string ToString()
{
return $"{Numerator}/{Denominator}";
}
}
Enjoy!

how to always round up to the next integer [duplicate]

This question already has answers here:
How can I ensure that a division of integers is always rounded up?
(10 answers)
Closed 6 years ago.
I am trying to find the total number of pages when building a pager on a website (so I want the result to be an integer). I get a list of records and I want to split them into 10 per page (the page size).
when i do this:
list.Count() / 10
or
list.Count() / (decimal)10
and list.Count() = 12, I get a result of 1.
How would I code it so I get 2 in this case (the remainder should always add 1)?
Math.Ceiling((double)list.Count() / 10);
(list.Count() + 9) / 10
Everything else here is either overkill or simply wrong (except for bestsss' answer, which is awesome). We do not want the overhead of a function call (Math.Truncate(), Math.Ceiling(), etc.) when simple math is enough.
OP's question generalizes (pigeonhole principle) to:
How many boxes do I need to store x objects if only y objects fit into each box?
The solution:
derives from the realization that the last box might be partially empty, and
is (x + y - 1) ÷ y using integer division.
You'll recall from 3rd grade math that integer division is what we're doing when we say 5 ÷ 2 = 2.
Floating-point division is when we say 5 ÷ 2 = 2.5, but we don't want that here.
Many programming languages support integer division. In languages derived from C, you get it automatically when you divide int types (short, int, long, etc.). The remainder/fractional part of any division operation is simply dropped, thus:
5 / 2 == 2
Replacing our original question with x = 5 and y = 2 we have:
How many boxes do I need to store 5 objects if only 2 objects fit into each box?
The answer should now be obvious: 3 boxes -- the first two boxes hold two objects each and the last box holds one.
(x + y - 1) ÷ y =
(5 + 2 - 1) ÷ 2 =
6 ÷ 2 =
3
So for the original question, x = list.Count(), y = 10, which gives the solution using no additional function calls:
(list.Count() + 9) / 10
A proper benchmark or how the number may lie
Following the argument about Math.ceil(value/10d) and (value+9)/10 I ended up coding a proper non-dead code, non-interpret mode benchmark.
I've been saying that writing a micro benchmark is not an easy task. The code below illustrates this:
00:21:40.109 starting up....
00:21:40.140 doubleCeil: 19444599
00:21:40.140 integerCeil: 19444599
00:21:40.140 warming up...
00:21:44.375 warmup doubleCeil: 194445990000
00:21:44.625 warmup integerCeil: 194445990000
00:22:27.437 exec doubleCeil: 1944459900000, elapsed: 42.806s
00:22:29.796 exec integerCeil: 1944459900000, elapsed: 2.363s
The benchmark is in Java since I know well how Hotspot optimizes and ensures it's a fair result. With such results, no statistics, noise or anything can taint it.
Integer ceil is insanely much faster.
The code
package t1;
import java.math.BigDecimal;
import java.util.Random;
public class Div {
static int[] vals;
static long doubleCeil(){
int[] v= vals;
long sum = 0;
for (int i=0;i<v.length;i++){
int value = v[i];
sum+=Math.ceil(value/10d);
}
return sum;
}
static long integerCeil(){
int[] v= vals;
long sum = 0;
for (int i=0;i<v.length;i++){
int value = v[i];
sum+=(value+9)/10;
}
return sum;
}
public static void main(String[] args) {
vals = new int[7000];
Random r= new Random(77);
for (int i = 0; i < vals.length; i++) {
vals[i] = r.nextInt(55555);
}
log("starting up....");
log("doubleCeil: %d", doubleCeil());
log("integerCeil: %d", integerCeil());
log("warming up...");
final int warmupCount = (int) 1e4;
log("warmup doubleCeil: %d", execDoubleCeil(warmupCount));
log("warmup integerCeil: %d", execIntegerCeil(warmupCount));
final int execCount = (int) 1e5;
{
long time = System.nanoTime();
long s = execDoubleCeil(execCount);
long elapsed = System.nanoTime() - time;
log("exec doubleCeil: %d, elapsed: %.3fs", s, BigDecimal.valueOf(elapsed, 9));
}
{
long time = System.nanoTime();
long s = execIntegerCeil(execCount);
long elapsed = System.nanoTime() - time;
log("exec integerCeil: %d, elapsed: %.3fs", s, BigDecimal.valueOf(elapsed, 9));
}
}
static long execDoubleCeil(int count){
long sum = 0;
for(int i=0;i<count;i++){
sum+=doubleCeil();
}
return sum;
}
static long execIntegerCeil(int count){
long sum = 0;
for(int i=0;i<count;i++){
sum+=integerCeil();
}
return sum;
}
static void log(String msg, Object... params){
String s = params.length>0?String.format(msg, params):msg;
System.out.printf("%tH:%<tM:%<tS.%<tL %s%n", new Long(System.currentTimeMillis()), s);
}
}
This will also work:
c = (count - 1) / 10 + 1;
I think the easiest way is to divide two integers and increase by one :
int r = list.Count() / 10;
r += (list.Count() % 10 == 0 ? 0 : 1);
No need of libraries or functions.
edited with the right code.
You can use Math.Ceiling
http://msdn.microsoft.com/en-us/library/system.math.ceiling%28v=VS.100%29.aspx
Xform to double (and back) for a simple ceil?
list.Count()/10 + (list.Count()%10 > 0 ? 1 : 0) - this is bad, div + mod
edit 1st:
on a 2nd thought, that's probably faster (depends on the optimization): div + mul (mul is faster than div and mod)
int c=list.Count()/10;
if (c*10<list.Count()) c++;
edit 2: scrap all that. Forgot the most natural (adding 9 ensures rounding up for integers):
(list.Count()+9)/10
Check by using mod - if there is a remainder, simply increment the value by one.

Evenly divide a dollar amount (decimal) by an integer

I need to write an accounting routine for a program I am building that will give me an even division of a decimal by an integer. So that for example:
$143.13 / 5 =
28.62
28.62
28.63
28.63
28.63
I have seen the article here: Evenly divide in c#, but it seems like it only works for integer divisions. Any idea of an elegant solution to this problem?
Calculate the amounts one at a time, and subtract each amount from the total to make sure that you always have the correct total left:
decimal total = 143.13m;
int divider = 5;
while (divider > 0) {
decimal amount = Math.Round(total / divider, 2);
Console.WriteLine(amount);
total -= amount;
divider--;
}
result:
28,63
28,62
28,63
28,62
28,63
You can solve this (in cents) without constructing an array:
int a = 100 * amount;
int low_value = a / n;
int high_value = low_value + 1;
int num_highs = a % n;
int num_lows = n - num_highs;
It's easier to deal with cents. I would suggest that instead of 143.13, you divide 14313 into 5 equal parts. Which gives you 2862 and a remainder of 3. You can assign this remainder to the first three parts or any way you like. Finally, convert the cents back to dollars.
Also notice that you will always get a remainder less than the number of parts you want.
First of all, make sure you don't use a floating point number to represent dollars and cents (see other posts for why, but the simple reason is that not all decimal numbers can be represented as floats, e.g., $1.79).
Here's one way of doing it:
decimal total = 143.13m;
int numberOfEntries = 5;
decimal unadjustedEntryAmount = total / numberOfEntries;
decimal leftoverAmount = total - (unadjustedEntryAmount * numberOfEntries);
int numberOfPenniesToDistribute = (int)(leftoverAmount * 100);
int numberOfUnadjustedEntries = numberOfEntries - numberOfPenniesToDistribute;
So now you have the unadjusted amounts of 28.62, and then you have to decide how to distribute the remainder. You can either distribute an extra penny to each one starting at the top or at the bottom (looks like you want from the bottom).
for (int i = 0; i < numberOfUnadjustedEntries; i++) {
Console.WriteLine(unadjustedEntryAmount);
}
for (int i = 0; i < numberOfPenniesToDistribute; i++) {
Console.WriteLine(unadjustedEntryAmount + 0.01m);
}
You could also add the entire remainder to the first or last entries. Finally, depending on the accounting needs, you could also create a separate transaction for the remainder.
If you have a float that is guaranteed exactly two digits of precision, what about this (pseudocode):
amount = amount * 100 (convert to cents)
int[] amounts = new int[divisor]
for (i = 0; i < divisor; i++) amounts[i] = amount / divisor
extra = amount % divisor
for (i = 0; i < extra; i++) amounts[i]++
and then do whatever you want with amounts, which are in cents - you could convert back to floats if you absolutely had to, or format as dollars and cents.
If not clear, the point of all this is not just to divide a float value evenly but to divide a monetary amount as evenly as possible, given that cents are an indivisible unit of USD. To the OP: let me know if this isn't what you wanted.
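In C#, that pseudocode might look like this (a sketch; it assumes the amount has exactly two decimal places):
decimal total = 143.13m;
int parts = 5;
int cents = (int)(total * 100);   // 14313 cents
int baseCents = cents / parts;    // 2862
int extra = cents % parts;        // 3 parts get one extra cent
var amounts = new decimal[parts];
for (int i = 0; i < parts; i++)
    amounts[i] = (baseCents + (i < extra ? 1 : 0)) / 100m;
// amounts: 28.63, 28.63, 28.63, 28.62, 28.62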
You can use the algorithm in the question you're referencing by multiplying by 100, using the integer evenly-divide function, and then dividing each of the results by 100 (assuming you only want to handle 2 dp; if you want 3 dp, multiply by 1000, etc.)
It is also possible to use C# iterator generation to make Guffa's answer more convenient:
public static IEnumerable<decimal> Divide(decimal amount, int numBuckets)
{
while(numBuckets > 0)
{
// determine the next amount to return...
var partialAmount = Math.Round(amount / numBuckets, 2);
yield return partialAmount;
// reduce the remaining amount and #buckets
// to account for previously yielded values
amount -= partialAmount;
numBuckets--;
}
}

Project Euler #16 - C# 2.0

I've been wrestling with Project Euler Problem #16 in C# 2.0. The crux of the question is that you have to calculate and then iterate through each digit in a number that is about 300 digits long. You then add up these digits to produce the answer.
This presents a problem: C# 2.0 doesn't have a built-in datatype that can handle this sort of calculation precision. I could use a 3rd party library, but that would defeat the purpose of attempting to solve it programmatically without external libraries. I can solve it in Perl; but I'm trying to solve it in C# 2.0 (I'll attempt to use C# 3.0 in my next run-through of the Project Euler questions).
Question
What suggestions (not answers!) do you have for solving project Euler #16 in C# 2.0? What methods would work?
NB: If you decide to post an answer, please prefix your attempt with a blockquote that has ###Spoiler written before it.
A number is a series of digits. A 32 bit unsigned int is 32 binary digits. The string "12345" is a series of 5 digits. Digits can be stored in many ways: as bits, characters, array elements and so on. The largest "native" datatype in C# with complete precision is probably the decimal type (128 bits, 28-29 digits). Just choose your own method of storing digits that allows you to store much bigger numbers.
As for the rest, this will give you a clue:
2^1 = 2
2^2 = 2^1 + 2^1
2^3 = 2^2 + 2^2
Example:
The sum of digits of 2^100000 is 135178
Ran in 4875 ms
The sum of digits of 2^10000 is 13561
Ran in 51 ms
The sum of digits of 2^1000 is 1366
Ran in 2 ms
SPOILER ALERT: Algorithm and solution in C# follows.
Basically, as alluded to a number is nothing more than an array of digits. This can be represented easily in two ways:
As a string;
As an array of characters or digits.
As others have mentioned, storing the digits in reverse order is actually advisable. It makes the calculations much easier. I tried both of the above methods. I found strings and the character arithmetic irritating (it's easier in C/C++; the syntax is just plain annoying in C#).
The first thing to note is that you can do this with one array. You don't need to allocate more storage at each iteration. As mentioned, you can find a power of 2 by doubling the previous power of 2. So you can find 2^1000 by doubling 1 one thousand times. The doubling can be done in place with the general algorithm:
carry = 0
foreach digit in array
sum = digit + digit + carry
if sum >= 10 then
carry = 1
sum -= 10
else
carry = 0
end if
digit = sum
end foreach
This algorithm is basically the same for using a string or an array. At the end you just add up the digits. A naive implementation might add the results into a new array or string with each iteration. Bad idea. Really slows it down. As mentioned, it can be done in place.
But how large should the array be? Well that's easy too. Mathematically you can convert 2^a to 10^f(a) where f(a) is a simple logarithmic conversion and the number of digits you need is the next higher integer from that power of 10. For simplicity, you can just use:
digits required = ceil(power of 2 / 3)
which is a close approximation and sufficient.
Where you can really optimise this is by using larger digits. A 32-bit signed int can store a number between approximately +/- 2 billion. Since 9 digits fit within a billion, you can use a 32-bit int (signed or unsigned) as basically a base-one-billion "digit". You can work out how many ints you need, create that array, and that's all the storage you need to run the entire algorithm (around 130-ish bytes), with everything being done in place.
Solution follows (in fairly rough C#):
static void problem16a()
{
const int limit = 1000;
int ints = limit / 29;
int[] number = new int[ints + 1];
number[0] = 2;
for (int i = 2; i <= limit; i++)
{
doubleNumber(number);
}
String text = NumberToString(number);
Console.WriteLine(text);
Console.WriteLine("The sum of digits of 2^" + limit + " is " + sumDigits(text));
}
static void doubleNumber(int[] n)
{
int carry = 0;
for (int i = 0; i < n.Length; i++)
{
n[i] <<= 1;
n[i] += carry;
if (n[i] >= 1000000000)
{
carry = 1;
n[i] -= 1000000000;
}
else
{
carry = 0;
}
}
}
static String NumberToString(int[] n)
{
int i = n.Length;
while (i > 0 && n[--i] == 0)
;
String ret = "" + n[i--];
while (i >= 0)
{
ret += String.Format("{0:000000000}", n[i--]);
}
return ret;
}
I solved this one using C# also, much to my dismay when I discovered that Python can do this in one simple operation.
Your goal is to create an adding machine using arrays of int values.
Spoiler follows
I ended up using an array of int values to simulate an adding machine, but I represented the number backwards - which you can do because the problem only asks for the sum of the digits, so order is irrelevant.
What you're essentially doing is doubling the value 1000 times, so you can double the value 1 stored in the 1st element of the array, and then continue looping until your value is over 10. This is where you will have to keep track of a carry value. The first power of 2 that is over 10 is 16, so the elements in the array after the 5th iteration are 6 and 1.
Now when you loop through the array starting at the 1st value (6), it becomes 12 (so you keep the last digit, and set a carry bit on the next index of the array) - which, when that value is doubled, gives you 2 ... plus the 1 for the carry bit, which equals 3. Now you have 2 and 3 in your array, which represents 32.
Continue this process 1000 times and you'll have an array with roughly 300 elements that you can easily add up.
I have solved this one before, and now I re-solved it using C# 3.0. :)
I just wrote a Multiply extension method that takes an IEnumerable<int> and a multiplier and returns an IEnumerable<int>. (Each int represents a digit, and the first one is the least significant digit.) Then I just created a list with the item { 1 } and multiplied it by 2 a thousand times. Adding the items in the list is simple with the Sum extension method.
19 lines of code, which runs in 13 ms. on my laptop. :)
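The answer doesn't show its code, but such a Multiply extension method might look roughly like this (my sketch, not the original 19 lines):
static class DigitExtensions
{
    // Little-endian digits: { 1 } is 1, { 0, 1 } is 10, and so on.
    public static IEnumerable<int> Multiply(this IEnumerable<int> digits, int multiplier)
    {
        int carry = 0;
        foreach (int d in digits)
        {
            int product = d * multiplier + carry;
            yield return product % 10;
            carry = product / 10;
        }
        for (; carry > 0; carry /= 10)
            yield return carry % 10;
    }
}
// Usage: start from { 1 }, multiply by 2 a thousand times, then sum the digits.
IEnumerable<int> number = new List<int> { 1 };
for (int i = 0; i < 1000; i++)
    number = number.Multiply(2).ToList(); // materialise each step
Console.WriteLine(number.Sum());          // 1366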
Pretend you are very young, with square paper. To me, that is like a list of numbers. Then to double it you double each number, then handle any "carries", by subtracting the 10s and adding 1 to the next index. So if the answer is 1366... something like (completely unoptimized, rot13):
hfvat Flfgrz;
hfvat Flfgrz.Pbyyrpgvbaf.Trarevp;
pynff Cebtenz {
fgngvp ibvq Pneel(Yvfg<vag> yvfg, vag vaqrk) {
juvyr (yvfg[vaqrk] > 9) {
yvfg[vaqrk] -= 10;
vs (vaqrk == yvfg.Pbhag - 1) yvfg.Nqq(1);
ryfr yvfg[vaqrk + 1]++;
}
}
fgngvp ibvq Znva() {
ine qvtvgf = arj Yvfg<vag> { 1 }; // 2^0
sbe (vag cbjre = 1; cbjre <= 1000; cbjre++) {
sbe (vag qvtvg = 0; qvtvg < qvtvgf.Pbhag; qvtvg++) {
qvtvgf[qvtvg] *= 2;
}
sbe (vag qvtvg = 0; qvtvg < qvtvgf.Pbhag; qvtvg++) {
Pneel(qvtvgf, qvtvg);
}
}
qvtvgf.Erirefr();
sbernpu (vag v va qvtvgf) {
Pbafbyr.Jevgr(v);
}
Pbafbyr.JevgrYvar();
vag fhz = 0;
sbernpu (vag v va qvtvgf) fhz += v;
Pbafbyr.Jevgr("fhz: ");
Pbafbyr.JevgrYvar(fhz);
}
}
If you wish to do the primary calculation in C#, you will need some sort of big integer implementation (Much like gmp for C/C++). Programming is about using the right tool for the right job. If you cannot find a good big integer library for C#, it's not against the rules to calculate the number in a language like Python which already has the ability to calculate large numbers. You could then put this number into your C# program via your method of choice, and iterate over each character in the number (you will have to store it as a string). For each character, convert it to an integer and add it to your total until you reach the end of the number. If you would like the big integer, I calculated it with python below. The answer is further down.
Partial Spoiler
10715086071862673209484250490600018105614048117055336074437503883703510511249361
22493198378815695858127594672917553146825187145285692314043598457757469857480393
45677748242309854210746050623711418779541821530464749835819412673987675591655439
46077062914571196477686542167660429831652624386837205668069376
Spoiler Below!
>>> val = str(2**1000)
>>> total = 0
>>> for i in range(0,len(val)): total += int(val[i])
>>> print total
1366
If you've got ruby, you can easily calculate "2**1000" and get it as a string. Should be an easy cut/paste into a string in C#.
Spoiler
In Ruby: (2**1000).to_s.split(//).inject(0){|x,y| x+y.to_i}
spoiler
If you want to see a solution check
out my other answer. This is in Java but it's very easy to port to C#
Here's a clue:
Represent each number with a list. That way you can do basic sums like:
[1,2,3,4,5,6]
+ [4,5]
_____________
[1,2,3,5,0,1]
One alternative to representing the digits as a sequence of integers is to represent the number in base 2^32 as a list of 32-bit integers, which is what many big integer libraries do. You then have to convert the number to base 10 for output. This doesn't gain you very much for this particular problem - you can write 2^1000 straight away, then have to divide by 10 many times instead of multiplying 2 by itself 1000 times (or, as 1000 is 0b1111101000, calculating the product of 2^8, 2^32, 2^64, 2^128, 2^256 and 2^512, each obtained by repeated squaring, e.g. 2^8 = ((2^2)^2)^2, which requires more space and a multiplication method, but is far fewer operations). It is, however, closer to normal big integer use, so you may find it more useful in later problems (if you try to calculate the last ten digits of 28433×2^7830457+1 using the digit-per-int method and repeated addition, it may take some time, though in that case you could use modular arithmetic rather than adding strings of millions of digits).
Working solution that I have posted it here as well: http://www.mycoding.net/2012/01/solution-to-project-euler-problem-16/
The code:
import java.math.BigInteger;
public class Euler16 {
public static void main(String[] args) {
int power = 1;
BigInteger expo = new BigInteger("2");
BigInteger num = new BigInteger("2");
while(power < 1000){
expo = expo.multiply(num);
power++;
}
System.out.println(expo); //Printing the value of 2^1000
int sum = 0;
char[] expoarr = expo.toString().toCharArray();
int max_count = expoarr.length;
int count = 0;
while(count<max_count){ //While loop to calculate the sum of digits
sum = sum + (expoarr[count]-48);
count++;
}
System.out.println(sum);
}
}
Euler problem #16 has been discussed many times here, but I could not find an answer that gives a good overview of possible solution approaches, the lay of the land as it were. Here's my attempt at rectifying that.
This overview is intended for people who have already found a solution and want to get a more complete picture. It is basically language-agnostic even though the sample code is C#. There are some usages of features that are not available in C# 2.0 but they are not essential - their purpose is only to get boring stuff out of the way with a minimum of fuss.
Apart from using a ready-made BigInteger library (which doesn't count), straightforward solutions for Euler #16 fall into two fundamental categories: performing calculations natively - i.e. in a base that is a power of two - and converting to decimal in order to get at the digits, or performing the computations directly in a decimal base so that the digits are available without any conversion.
For the latter there are two reasonably simple options:
repeated doubling
powering by repeated squaring
Native Computation + Radix Conversion
This approach is the simplest and its performance exceeds that of naive solutions using .Net's builtin BigInteger type.
The actual computation is trivially achieved: just perform the moral equivalent of 1 << 1000, by storing 1000 binary zeroes and appending a single lone binary 1.
The conversion is also quite simple and can be done by coding the pencil-and-paper division method, with a suitably large choice of 'digit' for efficiency. Variables for intermediate results need to be able to hold two 'digits'; dividing the number of decimal digits that fit in a long by 2 gives 9 decimal digits for the maximum meta-digit (or 'limb', as it is usually called in bignum lore).
class E16_RadixConversion
{
const int BITS_PER_WORD = sizeof(uint) * 8;
const uint RADIX = 1000000000; // == 10^9
public static int digit_sum_for_power_of_2 (int exponent)
{
var dec = new List<int>();
var bin = new uint[(exponent + BITS_PER_WORD) / BITS_PER_WORD];
int top = bin.Length - 1;
bin[top] = 1u << (exponent % BITS_PER_WORD);
while (top >= 0)
{
ulong rest = 0;
for (int i = top; i >= 0; --i)
{
ulong temp = (rest << BITS_PER_WORD) | bin[i];
ulong quot = temp / RADIX; // x64 uses MUL (sometimes), x86 calls a helper function
rest = temp - quot * RADIX;
bin[i] = (uint)quot;
}
dec.Add((int)rest);
if (bin[top] == 0)
--top;
}
return E16_Common.digit_sum(dec);
}
}
I wrote (rest << BITS_PER_WORD) | bin[i] instead of using operator + because that is precisely what is needed here; no 64-bit addition with carry propagation needs to take place. This means that the two operands could be written directly to their separate registers in a register pair, or to fields in an equivalent struct like LARGE_INTEGER.
On 32-bit systems the 64-bit division cannot be inlined as a few CPU instructions, because the compiler cannot know that the algorithm guarantees quotient and remainder to fit into 32-bit registers. Hence the compiler calls a helper function that can handle all eventualities.
These systems may profit from using a smaller limb, i.e. RADIX = 10000 and uint instead of ulong for holding intermediate (double-limb) results. An alternative for languages like C/C++ would be to call a suitable compiler intrinsic that wraps the raw 32-bit by 32-bit to 64-bit multiply (assuming that division by the constant radix is to be implemented by multiplication with the inverse). Conversely, on 64-bit systems the limb size can be increased to 19 digits if the compiler offers a suitable 64-by-64-to-128 bit multiply primitive or allows inline assembler.
Decimal Doubling
Repeated doubling seems to be everyone's favourite, so let's do that next. Variables for intermediate results need to hold one 'digit' plus one carry bit, which gives 18 digits per limb for long. Going to ulong cannot improve things (there's 0.04 bit missing to 19 digits plus carry), and so we might as well stick with long.
On a binary computer, decimal limbs do not coincide with computer word boundaries. That makes it necessary to perform a modulo operation on the limbs during each step of the calculation. Here, this modulo op can be reduced to a subtraction of the modulus in the event of carry, which is faster than performing a division. The branching in the inner loop can be eliminated by bit twiddling but that would be needlessly obscure for a demonstration of the basic algorithm.
class E16_DecimalDoubling
{
const int DIGITS_PER_LIMB = 18; // == floor(log10(2) * (63 - 1)), b/o carry
const long LIMB_MODULUS = 1000000000000000000L; // == 10^18
public static int digit_sum_for_power_of_2 (int power_of_2)
{
Trace.Assert(power_of_2 > 0);
int total_digits = (int)Math.Ceiling(Math.Log10(2) * power_of_2);
int total_limbs = (total_digits + DIGITS_PER_LIMB - 1) / DIGITS_PER_LIMB;
var a = new long[total_limbs];
int limbs = 1;
a[0] = 2;
for (int i = 1; i < power_of_2; ++i)
{
int carry = 0;
for (int j = 0; j < limbs; ++j)
{
long new_limb = (a[j] << 1) | carry;
carry = 0;
if (new_limb >= LIMB_MODULUS)
{
new_limb -= LIMB_MODULUS;
carry = 1;
}
a[j] = new_limb;
}
if (carry != 0)
{
a[limbs++] = carry;
}
}
return E16_Common.digit_sum(a);
}
}
This is just as simple as radix conversion, but except for very small exponents it does not perform anywhere near as well (despite its huge meta-digits of 18 decimal places). The reason is that the code must perform (exponent - 1) doublings, and the work done in each pass corresponds to about half the total number of digits (limbs).
Repeated Squaring
The idea behind powering by repeated squaring is to replace a large number of doublings with a small number of multiplications.
1000 = 2^3 + 2^5 + 2^6 + 2^7 + 2^8 + 2^9
x^1000 = x^(2^3 + 2^5 + 2^6 + 2^7 + 2^8 + 2^9)
x^1000 = x^2^3 * x^2^5 * x^2^6 * x^2^7 * x^2^8 * x^2^9
x^2^3 can be obtained by squaring x three times, x^2^5 by squaring five times, and so on. On a binary computer the decomposition of the exponent into powers of two is readily available because it is the bit pattern representing that number. However, even non-binary computers should be able to test whether a number is odd or even, or to divide a number by two.
The multiplication can be done by coding the pencil-and-paper method; here I'm using a helper function that computes one row of a product and adds it into the result at a suitably shifted position, so that the rows of partial products do not need to be stored for a separate addition step later. Intermediate values during computation can be up to two 'digits' in size, so that the limbs can be only half as wide as for repeated doubling (where only one extra bit had to fit in addition to a 'digit').
Note: the radix of the computations is not a power of 2, and so the squarings of 2 cannot be computed by simple shifting here. On the positive side, the code can be used for computing powers of bases other than 2.
class E16_DecimalSquaring
{
const int DIGITS_PER_LIMB = 9; // language limit 18, half needed for holding the carry
const int LIMB_MODULUS = 1000000000;
public static int digit_sum_for_power_of_2 (int e)
{
Trace.Assert(e > 0);
int total_digits = (int)Math.Ceiling(Math.Log10(2) * e);
int total_limbs = (total_digits + DIGITS_PER_LIMB - 1) / DIGITS_PER_LIMB;
var squared_power = new List<int>(total_limbs) { 2 };
var result = new List<int>(total_limbs);
result.Add((e & 1) == 0 ? 1 : 2);
while ((e >>= 1) != 0)
{
squared_power = multiply(squared_power, squared_power);
if ((e & 1) == 1)
result = multiply(result, squared_power);
}
return E16_Common.digit_sum(result);
}
static List<int> multiply (List<int> lhs, List<int> rhs)
{
var result = new List<int>(lhs.Count + rhs.Count);
resize_to_capacity(result);
for (int i = 0; i < rhs.Count; ++i)
addmul_1(result, i, lhs, rhs[i]);
trim_leading_zero_limbs(result);
return result;
}
static void addmul_1 (List<int> result, int offset, List<int> multiplicand, int multiplier)
{
// it is assumed that the caller has sized `result` appropriately before calling this primitive
Trace.Assert(result.Count >= offset + multiplicand.Count + 1);
long carry = 0;
foreach (long limb in multiplicand)
{
long temp = result[offset] + limb * multiplier + carry;
carry = temp / LIMB_MODULUS;
result[offset++] = (int)(temp - carry * LIMB_MODULUS);
}
while (carry != 0)
{
long final_temp = result[offset] + carry;
carry = final_temp / LIMB_MODULUS;
result[offset++] = (int)(final_temp - carry * LIMB_MODULUS);
}
}
static void resize_to_capacity (List<int> operand)
{
operand.AddRange(Enumerable.Repeat(0, operand.Capacity - operand.Count));
}
static void trim_leading_zero_limbs (List<int> operand)
{
int i = operand.Count;
while (i > 1 && operand[i - 1] == 0)
--i;
operand.RemoveRange(i, operand.Count - i);
}
}
The efficiency of this approach is roughly on par with radix conversion but there are specific improvements that apply here. Efficiency of the squaring can be doubled by writing a special squaring routine that utilises the fact that ai*bj == aj*bi if a == b, which cuts the number of multiplications in half.
Also, there are methods for computing addition chains that involve fewer operations overall than using the exponent bits for determining the squaring/multiplication schedule.
Helper Code and Benchmarks
The helper code for summing decimal digits in the meta-digits (decimal limbs) produced by the sample code is trivial, but I'm posting it here anyway for your convenience:
internal class E16_Common
{
internal static int digit_sum (int limb)
{
int sum = 0;
for ( ; limb > 0; limb /= 10)
sum += limb % 10;
return sum;
}
internal static int digit_sum (long limb)
{
const int M1E9 = 1000000000;
return digit_sum((int)(limb / M1E9)) + digit_sum((int)(limb % M1E9));
}
internal static int digit_sum (IEnumerable<int> limbs)
{
return limbs.Aggregate(0, (sum, limb) => sum + digit_sum(limb));
}
internal static int digit_sum (IEnumerable<long> limbs)
{
return limbs.Select((limb) => digit_sum(limb)).Sum();
}
}
This can be made more efficient in various ways but overall it is not critical.
All three solutions take O(n^2) time where n is the exponent. In other words, they will take a hundred times as long when the exponent grows by a factor of ten. Radix conversion and repeated squaring can both be improved to roughly O(n log n) by employing divide-and-conquer strategies; I doubt whether the doubling scheme can be improved in a similar fashion, but then it was never competitive to begin with.
All three solutions presented here can be used to print the actual results, by stringifying the meta-digits with suitable padding and concatenating them. I've coded the functions as returning the digit sum instead of the arrays/lists with decimal limbs only in order to keep the sample code simple and to ensure that all functions have the same signature, for benchmarking.
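For instance, a small helper along the following lines (the name to_decimal_string is mine, not part of the posted code) would turn the limb list produced by E16_DecimalSquaring into the full decimal string; the limbs are stored least significant first, and all but the topmost one need zero-padding to DIGITS_PER_LIMB digits:
static string to_decimal_string (List<int> limbs)
{
var sb = new System.Text.StringBuilder();
sb.Append(limbs[limbs.Count - 1]); // most significant limb, printed without leading zeroes
for (int i = limbs.Count - 2; i >= 0; --i)
sb.Append(limbs[i].ToString("D9")); // remaining limbs padded to DIGITS_PER_LIMB (9) digits
return sb.ToString();
}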
In these benchmarks, the .Net BigInteger type was wrapped like this:
static int digit_sum_via_BigInteger (int power_of_2)
{
return System.Numerics.BigInteger.Pow(2, power_of_2)
.ToString()
.ToCharArray()
.Select((c) => (int)c - '0')
.Sum();
}
Finally, the benchmarks for the C# code:
# testing decimal doubling ...
1000: 1366 in 0,052 ms
10000: 13561 in 3,485 ms
100000: 135178 in 339,530 ms
1000000: 1351546 in 33.505,348 ms
# testing decimal squaring ...
1000: 1366 in 0,023 ms
10000: 13561 in 0,299 ms
100000: 135178 in 24,610 ms
1000000: 1351546 in 2.612,480 ms
# testing radix conversion ...
1000: 1366 in 0,018 ms
10000: 13561 in 0,619 ms
100000: 135178 in 60,618 ms
1000000: 1351546 in 5.944,242 ms
# testing BigInteger + LINQ ...
1000: 1366 in 0,021 ms
10000: 13561 in 0,737 ms
100000: 135178 in 69,331 ms
1000000: 1351546 in 6.723,880 ms
As you can see, the radix conversion is almost as slow as the solution using the built-in BigInteger class. The reason is that the runtime is of the newer kind that performs certain standard optimisations only for signed integer types but not for unsigned ones (here: implementing division by a constant as multiplication with the inverse).
I haven't found an easy means of inspecting the native code for existing .Net assemblies, so I decided on a different path of investigation: I coded a variant of E16_RadixConversion for comparison where ulong and uint were replaced by long and int respectively, and BITS_PER_WORD decreased by 1 accordingly. Here are the timings:
# testing radix conv Int63 ...
1000: 1366 in 0,004 ms
10000: 13561 in 0,202 ms
100000: 135178 in 18,414 ms
1000000: 1351546 in 1.834,305 ms
More than three times as fast as the version that uses unsigned types! Clear evidence of numbskullery in the compiler...
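For reference, here is a sketch of that signed-limb variant, modelled on the C++ version shown further down rather than copied from the benchmarked class; the class name is made up and the structure is condensed into a single method:
class E16_RadixConversionInt63
{
const int BITS_PER_WORD = 31; // one bit fewer than with uint limbs, so all values stay non-negative
const long RADIX = 1000000000; // 10^9, i.e. nine decimal digits per produced limb
public static int digit_sum_for_power_of_2 (int e)
{
Trace.Assert(e > 0);
// represent 2^e in base 2^31, least significant word first
var bin = new int[e / BITS_PER_WORD + 1];
bin[bin.Length - 1] = 1 << (e % BITS_PER_WORD);
var result = new List<int>();
int live = bin.Length; // number of binary words still in play
while (live > 0)
{
// one long division by 10^9; the remainder is the next decimal limb
long rest = 0;
for (int i = live; i-- > 0; )
{
long temp = (rest << BITS_PER_WORD) | (long)bin[i];
long quot = temp / RADIX;
rest = temp - quot * RADIX;
bin[i] = (int)quot;
}
result.Add((int)rest);
if (bin[live - 1] == 0)
--live;
}
return E16_Common.digit_sum(result);
}
}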
In order to showcase the effect of different limb sizes I templated the solutions in C++ on the unsigned integer types used as limbs. The timings are prefixed with the byte size of a limb and the number of decimal digits in a limb, separated by a colon. There is no timing for the often-seen case of manipulating digit characters in strings, but it is safe to say that such code will take at least twice as long as the code that uses double digits in byte-sized limbs.
# E16_DecimalDoubling
[1:02] e = 1000 -> 1366 0.308 ms
[2:04] e = 1000 -> 1366 0.152 ms
[4:09] e = 1000 -> 1366 0.070 ms
[8:18] e = 1000 -> 1366 0.071 ms
[1:02] e = 10000 -> 13561 30.533 ms
[2:04] e = 10000 -> 13561 13.791 ms
[4:09] e = 10000 -> 13561 6.436 ms
[8:18] e = 10000 -> 13561 2.996 ms
[1:02] e = 100000 -> 135178 2719.600 ms
[2:04] e = 100000 -> 135178 1340.050 ms
[4:09] e = 100000 -> 135178 588.878 ms
[8:18] e = 100000 -> 135178 290.721 ms
[8:18] e = 1000000 -> 1351546 28823.330 ms
For the exponent of 10^6 there is only the timing with 64-bit limbs, since I didn't have the patience to wait many minutes for full results. The picture is similar for radix conversion, except that there is no row for 64-bit limbs because my compiler does not have a native 128-bit integral type.
# E16_RadixConversion
[1:02] e = 1000 -> 1366 0.080 ms
[2:04] e = 1000 -> 1366 0.026 ms
[4:09] e = 1000 -> 1366 0.048 ms
[1:02] e = 10000 -> 13561 4.537 ms
[2:04] e = 10000 -> 13561 0.746 ms
[4:09] e = 10000 -> 13561 0.243 ms
[1:02] e = 100000 -> 135178 445.092 ms
[2:04] e = 100000 -> 135178 68.600 ms
[4:09] e = 100000 -> 135178 19.344 ms
[4:09] e = 1000000 -> 1351546 1925.564 ms
The interesting thing is that simply compiling the code as C++ doesn't make it any faster - i.e., the optimiser couldn't find any low-hanging fruit that the C# jitter missed, apart from not penalising unsigned integers the way the jitter does. That's the reason why I like prototyping in C#: performance in the same ballpark as (unoptimised) C++ and none of the hassle.
Here's the meat of the C++ version (sans reams of boring stuff like helper templates and so on) so that you can see that I didn't cheat to make C# look better:
template<typename W>
struct E16_RadixConversion
{
typedef W limb_t;
typedef typename detail::E16_traits<W>::long_t long_t;
static unsigned const BITS_PER_WORD = sizeof(limb_t) * CHAR_BIT;
static unsigned const RADIX_DIGITS = std::numeric_limits<limb_t>::digits10;
static limb_t const RADIX = detail::pow10_t<limb_t, RADIX_DIGITS>::RESULT;
static unsigned digit_sum_for_power_of_2 (unsigned e)
{
std::vector<limb_t> digits;
compute_digits_for_power_of_2(e, digits);
return digit_sum(digits);
}
static void compute_digits_for_power_of_2 (unsigned e, std::vector<limb_t> &result)
{
assert(e > 0);
unsigned total_digits = unsigned(std::ceil(std::log10(2) * e));
unsigned total_limbs = (total_digits + RADIX_DIGITS - 1) / RADIX_DIGITS;
result.resize(0);
result.reserve(total_limbs);
std::vector<limb_t> bin((e + BITS_PER_WORD) / BITS_PER_WORD);
bin.back() = limb_t(limb_t(1) << (e % BITS_PER_WORD));
while (!bin.empty())
{
long_t rest = 0;
for (std::size_t i = bin.size(); i-- > 0; )
{
long_t temp = (rest << BITS_PER_WORD) | bin[i];
long_t quot = temp / RADIX;
rest = temp - quot * RADIX;
bin[i] = limb_t(quot);
}
result.push_back(limb_t(rest));
if (bin.back() == 0)
bin.pop_back();
}
}
};
Conclusion
These benchmarks also show that this Euler task - like many others - seems designed to be solved on a ZX81 or an Apple ][, not on our modern toys that are a million times as powerful. There's no challenge involved here unless the limits are increased drastically (an exponent of 10^5 or 10^6 would be much more adequate).
A good overview of the practical state of the art can be found in GMP's overview of algorithms. Another excellent overview is chapter 1 of "Modern Computer Arithmetic" by Richard Brent and Paul Zimmermann; it contains exactly what one needs to know for coding challenges and competitions, although its depth does not match Donald Knuth's treatment in "The Art of Computer Programming".
The radix conversion solution adds a useful technique to one's code challenge toolchest, since the given code can be trivially extended for converting any old big integer instead of only the bit pattern 1 << exponent. The repeated squaring solution can be similarly useful, since changing the sample code to raise a base other than 2 to a power is again trivial.
The approach of performing computations directly in powers of 10 can be useful for challenges where decimal results are required, because performance is in the same ballpark as native computation but there is no need for a separate conversion step (which can require similar amounts of time as the actual computation).
