Here's a bit of a puzzler: Random.Next() has an overload that accepts a minimum value and a maximum value. This overload returns a number that is greater than or equal to the minimum value (inclusive) and less than the maximum value (exclusive).
I would like to include the entire range including the maximum value. In some cases, I could accomplish this by just adding one to the maximum value. But in this case, the maximum value can be int.MaxValue, and adding one to this would not accomplish what I want.
So does anyone know a good trick to get a random number from int.MinValue to int.MaxValue, inclusively?
UPDATE:
Note that the lower range can be int.MinValue but can also be something else. If I know it would always be int.MinValue then the problem would be simpler.
The internal implementation of Random.Next(int minValue, int maxValue) generates two samples for large ranges, like the range between Int32.MinValue and Int32.MaxValue. For the NextInclusive method I had to use another large range Next, totaling four samples. So the performance should be comparable with the version that fills a buffer with 4 bytes (one sample per byte).
public static class RandomExtensions
{
public static int NextInclusive(this Random random, int minValue, int maxValue)
{
if (maxValue == Int32.MaxValue)
{
if (minValue == Int32.MinValue)
{
var value1 = random.Next(Int32.MinValue, Int32.MaxValue);
var value2 = random.Next(Int32.MinValue, Int32.MaxValue);
return value1 < value2 ? value1 : value1 + 1;
}
return random.Next(minValue - 1, Int32.MaxValue) + 1;
}
return random.Next(minValue, maxValue + 1);
}
}
Some results:
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue
new Random(24917099).NextInclusive(int.MinValue, int.MaxValue); // returns int.MinValue
var random = new Random(784288084);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue); // returns int.MaxValue
Update: My implementation has mediocre performance for the largest possible range (Int32.MinValue to Int32.MaxValue), so I came up with a new one that is 4 times faster. It produces around 22,000,000 random numbers per second on my machine. I don't think that it can get any faster than that.
public static int NextInclusive(this Random random, int minValue, int maxValue)
{
if (maxValue == Int32.MaxValue)
{
if (minValue == Int32.MinValue)
{
var value1 = random.Next() % 0x10000;
var value2 = random.Next() % 0x10000;
return (value1 << 16) | value2;
}
return random.Next(minValue - 1, Int32.MaxValue) + 1;
}
return random.Next(minValue, maxValue + 1);
}
Some results:
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // = int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // = int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // = int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // = int.MinValue
new Random(1655705829).NextInclusive(int.MinValue, int.MaxValue); // = int.MaxValue
var random = new Random(1704364573);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue);
random.NextInclusive(int.MinValue, int.MaxValue); // = int.MinValue
No casting, no long, all boundary cases taken into account, best performance.
static class RandomExtension
{
private static readonly byte[] bytes = new byte[sizeof(int)];
public static int InclusiveNext(this Random random, int min, int max)
{
if (max < int.MaxValue)
// can safely increase 'max'
return random.Next(min, max + 1);
// now 'max' is definitely 'int.MaxValue'
if (min > int.MinValue)
// can safely decrease 'min'
// so get ['min' - 1, 'max' - 1]
// and move it to ['min', 'max']
return random.Next(min - 1, max) + 1;
// now 'max' is definitely 'int.MaxValue'
// and 'min' is definitely 'int.MinValue'
// so the only option is
random.NextBytes(bytes);
return BitConverter.ToInt32(bytes, 0);
}
}
Well, I have a trick. I'm not sure I'd describe it as a "good trick", but I feel like it might work.
public static class RandomExtensions
{
public static int NextInclusive(this Random rng, int minValue, int maxValue)
{
if (maxValue == int.MaxValue)
{
var bytes = new byte[4];
rng.NextBytes(bytes);
return BitConverter.ToInt32(bytes, 0);
}
return rng.Next(minValue, maxValue + 1);
}
}
So, basically an extension method that will simply generate four bytes if the upper-bound is int.MaxValue and convert to an int, otherwise just use the standard Next(int, int) overload.
Note that if maxValue is int.MaxValue it will ignore minValue. Guess I didn't account for that...
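A sketch of how the same idea could also honour minValue, falling back to raw bytes only when the requested range really is the full Int32 range (the class name here is just illustrative, and this borrows the shift trick used elsewhere on this page):
public static class RandomInclusiveExtensions
{
    // Sketch: use raw bytes only for the one range Next cannot express,
    // otherwise shift the range so the ordinary exclusive overload applies.
    public static int NextInclusive(this Random rng, int minValue, int maxValue)
    {
        if (minValue == int.MinValue && maxValue == int.MaxValue)
        {
            var bytes = new byte[4];
            rng.NextBytes(bytes);                     // every 32-bit pattern is a valid result here
            return BitConverter.ToInt32(bytes, 0);
        }
        if (maxValue == int.MaxValue)
            return rng.Next(minValue - 1, maxValue) + 1;  // shift the range down, then back up
        return rng.Next(minValue, maxValue + 1);          // safe to widen the upper bound by one
    }
}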
Split the ranges in two, and compensate for the MaxValue:
r.Next(2) == 0 ? r.Next(int.MinValue, 0) : (1 + r.Next(-1, int.MaxValue))
If we make the ranges of equal size, we can get the same result with different math. Here we rely on the fact that int.MinValue = -1 - int.MaxValue:
r.Next(int.MinValue, 0) - (r.Next(2) == 0 ? 0 : int.MinValue)
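Packaged as an extension method for the full-range case (a sketch; the class and method names are just illustrative):
public static class FullRangeRandomExtensions
{
    // Both halves contain exactly 2^31 values, so a fair coin flip between them
    // keeps the overall result uniform over [int.MinValue, int.MaxValue].
    public static int NextFullRange(this Random r)
    {
        return r.Next(2) == 0
            ? r.Next(int.MinValue, 0)           // [int.MinValue, -1]
            : 1 + r.Next(-1, int.MaxValue);     // [0, int.MaxValue]
    }
}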
I'd suggest using System.Numerics.BigInteger like this:
class InclusiveRandom
{
private readonly Random rnd = new Random();
public byte Next(byte min, byte max) => (byte)NextHelper(min, max);
public sbyte Next(sbyte min, sbyte max) => (sbyte)NextHelper(min, max);
public short Next(short min, short max) => (short)NextHelper(min, max);
public ushort Next(ushort min, ushort max) => (ushort)NextHelper(min, max);
public int Next(int min, int max) => (int)NextHelper(min, max);
public uint Next(uint min, uint max) => (uint)NextHelper(min, max);
public long Next(long min, long max) => (long)NextHelper(min, max);
public ulong Next(ulong min, ulong max) => (ulong)NextHelper(min, max);
private BigInteger NextHelper(BigInteger min, BigInteger max)
{
if (max <= min)
throw new ArgumentException($"max {max} should be greater than min {min}");
return min + RandomHelper(max - min);
}
private BigInteger RandomHelper(BigInteger bigInteger)
{
byte[] bytes = bigInteger.ToByteArray();
BigInteger random;
do
{
rnd.NextBytes(bytes);
bytes[bytes.Length - 1] &= 0x7F;
random = new BigInteger(bytes);
} while (random > bigInteger);
return random;
}
}
I tested it with sbyte.
var rnd = new InclusiveRandom();
var frequency = Enumerable.Range(sbyte.MinValue, sbyte.MaxValue - sbyte.MinValue + 1).ToDictionary(i => (sbyte)i, i => 0ul);
var count = 100000000;
for (var i = 0; i < count; i++)
frequency[rnd.Next(sbyte.MinValue, sbyte.MaxValue)]++;
foreach (var i in frequency)
chart1.Series[0].Points.AddXY(i.Key, (double)i.Value / count);
chart1.ChartAreas[0].AxisY.StripLines
.Add(new StripLine { Interval = 0, IntervalOffset = 1d / 256, StripWidth = 0.0003, BackColor = Color.Red });
Distribution is OK.
This is guaranteed to work with negative and non-negative values:
public static int NextIntegerInclusive(this Random r, int min_value, int max_value)
{
if (max_value < min_value)
{
throw new InvalidOperationException("max_value must be greater than min_value.");
}
long offsetFromZero =(long)min_value; // e.g. -2,147,483,648
long bound = (long)max_value; // e.g. 2,147,483,647
bound -= offsetFromZero; // e.g. 4,294,967,295 (uint.MaxValue)
bound += Math.Sign(bound); // e.g. 4,294,967,296 (uint.MaxValue + 1)
return (int) (Math.Round(r.NextDouble() * bound) + offsetFromZero); // e.g. -2,147,483,648 => 2,147,483,647
}
It's actually interesting that this isn't the implementation for Random.Next(int, int), because you can derive the behavior of exclusive from the behavior of inclusive, but not the other way around.
public static class RandomExtensions
{
private const long IntegerRange = (long)int.MaxValue - int.MinValue;
public static int NextInclusive(this Random random, int minValue, int maxValue)
{
if (minValue > maxValue)
{
throw new ArgumentOutOfRangeException(nameof(minValue));
}
var buffer = new byte[4];
random.NextBytes(buffer);
var a = BitConverter.ToInt32(buffer, 0);
var b = a - (long)int.MinValue;
var c = b * (1.0 / IntegerRange);
var d = c * ((long)maxValue - minValue + 1);
var e = (long)d + minValue;
return (int)e;
}
}
new Random(0).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue
new Random(1).NextInclusive(int.MaxValue - 1, int.MaxValue); // returns int.MaxValue - 1
new Random(0).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue + 1
new Random(1).NextInclusive(int.MinValue, int.MinValue + 1); // returns int.MinValue
new Random(-451732719).NextInclusive(int.MinValue, int.MaxValue); // returns int.MinValue
new Random(-394328071).NextInclusive(int.MinValue, int.MaxValue); // returns int.MaxValue
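To illustrate the remark above about deriving exclusive behaviour from inclusive behaviour, a sketch (the name NextExclusive is just for illustration; it builds on the NextInclusive shown above):
public static int NextExclusive(this Random random, int minValue, int maxValue)
{
    // Matches Random.Next's behaviour for an empty range.
    if (minValue == maxValue)
        return minValue;
    // Excluding maxValue is just an inclusive draw with the upper bound pulled in by one.
    return random.NextInclusive(minValue, maxValue - 1);
}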
As I understand it you want Random to put out a value between -2,147,483,648 and +2,147,483,647. But the problem is that Random given those values will only give values from -2,147,483,648 to +2,147,483,646, as the maximum is exclusive.
Option 0: Take the thing away and learn to do without it
Douglas Adams was not a Programmer AFAIK, but he has some good advice for us: "The technology involved in making anything invisible is so infinitely complex that nine hundred and ninety-nine billion, nine hundred and ninety-nine million, nine hundred and ninety-nine thousand, nine hundred and ninety-nine times out of a trillion it is much simpler and more effective just to take the thing away and do without it."
This might be such a case.
Option 1: We need a bigger Random!
Random.Next() uses Int32 as its argument. One option I can think of would be to use a different random function which can take the next bigger kind of integer (Int64) as input; an Int32 is implicitly converted to an Int64, so the upper bound could be Int64 number = (Int64)Int32.MaxValue + 1;
But AFAIK you would have to go outside of the .NET libraries to do this. At that point, you might as well look for a Random that is inclusive of the max.
But I think there is a mathematical reason it had to exclude one value.
Option 2: Roll more
Another way is to use two calls of Random - each for one half of the range - and then add them.
Number1 = rng.Next(int.MinValue, 0);
Number2 = rng.Next(0, int.MaxValue);
result = Number1 + Number2;
However, I am 90% certain that will ruin the random distribution. My P&P RPG experience gave me some familiarity with dice chances, and I know for a fact that rolling 2 dice (or the same die twice) gives you a very different result distribution than one specific die. If you do not care too much about the distribution, though, it is worth a try.
Option 3: Do you need the full range? Or do you just care for min and max to be in it?
I assume you are doing some form of testing and you need both Int.MaxValue and Int.MinValue to be in the range. But do you need every value in between as well, or could you do without one of them?
If you have to lose a value, would you prefer losing 4 rather than int.MaxValue?
Number = rng.Next(int.MinValue, int.MaxValue);
if (Number > 3)
    Number = Number + 1;
Code like this would get you every number between MinValue and MaxValue except 4. But in most cases, code that can deal with 3 and 5 can also deal with 4; there is no need to explicitly test 4.
Of course, that assumes 4 is not some important test number that has to be run (I avoided 1 and 0 for those reasons). You could also decide the number to "skip" Randomly:
skipAbleNumber = rng.Next(int.MinValue + 1, int.MaxValue);
And then use > skipAbleNumber rather than > 4.
You cannot use Random.Next() alone to achieve what you want, because you cannot map a sequence of N numbers onto N+1 numbers without missing one :). Period.
But you can use Random.NextDouble(), which returns a double result x:
0 <= x < 1, aka [0, 1)
where [ is the inclusive sign and ) the exclusive one.
How do we map N numbers onto [0, 1)?
You need to split [0, 1) into N equal segments:
[0, 1/N), [1/N, 2/N), ... [(N-1)/N, 1)
And here is where it becomes important that one border is inclusive and the other exclusive - all N segments are absolutely equal!
Here is my code: I made it as a simple console program.
class Program
{
private static Int64 _segmentsQty;
private static double _step;
private static Random _random = new Random();
static void Main()
{
InclusiveRandomPrep();
for (int i = 1; i < 20; i++)
{
Console.WriteLine(InclusiveRandom());
}
Console.ReadLine();
}
public static void InclusiveRandomPrep()
{
_segmentsQty = (Int64)int.MaxValue - int.MinValue;
_step = 1.0 / _segmentsQty;
}
public static int InclusiveRandom()
{
var randomDouble = _random.NextDouble();
var times = randomDouble / _step;
var result = (Int64)Math.Floor(times);
return (int)result + int.MinValue;
}
}
This method can give you a random integer within any integer limits. If the maximum limit is less than int.MaxValue, it uses the ordinary Random.Next(Int32, Int32) but adds 1 to the upper limit to include its value. If not, but the lower limit is greater than int.MinValue, it lowers the lower limit by 1 to shift the range down by one, then adds 1 to the result. Finally, if the limits are int.MinValue and int.MaxValue, it generates a random integer 'a' that is either 0 or 1 with 50% probability each, then generates two other integers: the first between int.MinValue and -1 inclusive (2,147,483,648 values) and the second between 0 and int.MaxValue inclusive (2,147,483,648 values as well), and uses 'a' to select between them, so every integer is chosen with equal probability.
private int RandomInclusive(int min, int max)
{
if (max < int.MaxValue)
return Random.Next(min, max + 1);
if (min > int.MinValue)
return Random.Next(min - 1, max) + 1;
int a = Random.Next(2);
return Random.Next(int.MinValue, 0) * a + (Random.Next(-1, int.MaxValue) + 1) * (1 - a);
}
What about this?
using System;
public class Example
{
public static void Main()
{
Random rnd = new Random();
int min_value = 10;
int max_value = 21; // Next's upper bound is exclusive, so 21 lets 20 be returned
Console.WriteLine("\n20 random integers from 10 to 20:");
for (int ctr = 1; ctr <= 20; ctr++)
{
Console.Write("{0,6}", rnd.Next(min_value, max_value));
if (ctr % 5 == 0) Console.WriteLine();
}
}
}
You can try this. A bit hacky but can get you both min and max inclusive.
static void Main(string[] args)
{
int x = 0;
var r = new Random();
for (var i = 0; i < 32; i++)
x = x | (r.Next(0, 2) << i);
Console.WriteLine(x);
Console.ReadKey();
}
You can randomly add 1 to the generated number, so the result is still random and covers the full integer range.
public static class RandomExtension
{
public static int NextInclusive(this Random random, int minValue, int maxValue)
{
var randInt = random.Next(minValue, maxValue);
var plus = random.Next(0, 2);
return randInt + plus;
}
}
Will this work for you?
int random(Random rnd, int min, int max)
{
return Convert.ToInt32(rnd.NextDouble() * (max - min) + min);
}
On the CodinGame learning platform, one of the questions used as an example in a C# tutorial is this one:
The aim of this exercise is to check the presence of a number in an
array.
Specifications: The items are integers arranged in ascending order.
The array can contain up to 1 million items. The array is never null.
Implement the method boolean Answer.Exists(int[] ints, int k) so that
it returns true if k belongs to ints, otherwise the method should
return false.
Important note: Try to save CPU cycles if possible.
Example:
int[] ints = {-9, 14, 37, 102};
Answer.Exists(ints, 102) returns true.
Answer.Exists(ints, 36) returns false.
My proposal was to do that:
using System;
using System.IO;
public class Answer
{
public static bool Exists(int[] ints, int k)
{
foreach (var i in ints)
{
if (i == k)
{
return true;
}
if (i > k)
{
return false;
}
}
return false;
}
}
The result of the test was:
✔ The solution works with a 'small' array (200 pts) - Problem solving
✔ The solution works with an empty array (50 pts) - Reliability
✘ The solution works in a reasonable time with one million items (700 pts) - Problem solving
I don't get the last point. It seems there is code more optimal than what I suggested.
How can I optimize this piece of code? Is binary search the actual solution (given that the values in the array are already ordered), or is there something simpler that I missed?
Yes, I think that binary search, with O(log(N)) complexity versus O(N) complexity, is the solution:
public static bool Exists(int[] ints, int k) {
return Array.BinarySearch(ints, k) >= 0;
}
since Array.BinarySearch returns a non-negative value if the item (k) has been found:
https://msdn.microsoft.com/en-us/library/2cy9f6wb(v=vs.110).aspx
Return Value Type: System.Int32 The index of the specified value in
the specified array, if value is found; otherwise, a negative number.
Here is a fast method for an ordered array
public static class Answer
{
public static bool Exists( int[] ints, int k )
{
var lower = 0;
var upper = ints.Length - 1;
if ( k < ints[lower] || k > ints[upper] ) return false;
if ( k == ints[lower] ) return true;
if ( k == ints[upper] ) return true;
do
{
var middle = lower + ( upper - lower ) / 2;
if ( ints[middle] == k ) return true;
if ( lower == upper ) return false;
if ( k < ints[middle] )
upper = Math.Max( lower, middle - 1 );
else
lower = Math.Min( upper, middle + 1 );
} while ( true );
}
}
Takes around 50 ticks on my CPU (with 90,000,000 items in the array)
Sample on dotnetfiddle
class Answer
{
public static bool Exists(int[] ints, int k)
{
int index = Array.BinarySearch(ints, k);
if (index > -1)
{
return true;
}
else
{
return false;
}
}
static void Main(string[] args)
{
int[] ints = { -9, 14, 37, 102 };
Console.WriteLine(Answer.Exists(ints, 14)); // true
Console.WriteLine(Answer.Exists(ints, 4)); // false
}
}
Apparently, the task intends for us to use the standard library's binary search method instead of implementing one. I was also somewhat surprised that this is what the third test evaluates: "The solution uses the standard library to perform the binary search (iterating on ints)"
Which is kind of confusing; they could have mentioned this in the exercise instead of giving some 15-20 minutes to solve it, which is another reason for the confusion.
This is what I wrote for that question: divide the array in half and search only the relevant half, a more rudimentary way of implementing it...
public static bool Exists(int[] ints, int k)
{
    int size = ints.Length;
    if (size == 0) return false;
    int half = size / 2;
    if (k < ints[half])
    {
        for (int i = 0; i < half; i++)
        {
            if (k == ints[i])
                return true;
        }
    }
    else
    {
        for (int i = half; i < size; i++)
        {
            if (k == ints[i])
                return true;
        }
    }
    return false;
}
public static bool Exists(int[] ints, int k)
{
var d = 0;
var f = ints.Length - 1;
if (d > f) return false;
if (k > ints[f] || k < ints[d]) return false;
if (k == ints[f] || k == ints[d]) return true;
return (BinarySearch(ints, k, d, f) > 0);
}
public static int BinarySearch(int[] V, int Key, int begin, int end)
{
if (begin > end)
return -1;
var MiddleIndex = (begin + end) / 2;
if (Key == V[MiddleIndex])
    return MiddleIndex;
else
{
    if (Key > V[MiddleIndex])
    {
        begin = MiddleIndex + 1;
        return BinarySearch(V, Key, begin, end);
    }
    else
    {
        end = MiddleIndex - 1;
        return BinarySearch(V, Key, begin, end);
}
}
}
I saw all the solutions; by the way, I created and tested the following recursive approach and got the complete points:
public static bool Exists(int[] ints, int k)
{
if (ints.Length > 0 && ints[0] <= k && k <= ints[ints.Length - 1])
{
if (ints[0] == k || ints[ints.Length - 1] == k) return true;
return SearchRecursive(ints, k, 0, ints.Length - 1) != -1;
}
return false;
}
private static int SearchRecursive(int[] array, int value, int first, int last)
{
int middle = (first + last) / 2;
if (array[middle] == value)
{
return middle;
}
else if (first >= last)
{
return -1;
}
else if (value < array[middle])
{
return SearchRecursive(array, value, first, middle - 1);
}
else
{
return SearchRecursive(array, value, middle + 1, last);
}
}
Yes, BinarySearch would be faster than most algorithms you can write manually. However, if the intent of the exercise is to learn how to write an algorithm, you are on the right track. Your algorithm, though, makes an unnecessary check with if (i > k) ... why do you need this?
Below is my general algorithm for simple requirements like this. A while loop like this is slightly more performant than a for loop and easily outperforms a foreach.
public class Answer
{
public static bool Exists(int[] ints, int k)
{
var i = 0;
var hasValue = false;
while(i < ints.Length && !hasValue)
{
hasValue = ints[i] == k;
++i;
}
return hasValue;
}
}
If you are trying to squeeze every ounce of speed out of it... consider that your array has 1..100 and you want to search for 78. Your algorithm needs to search and compare 78 items before you find the right one. How about instead you search the first item and it's not there, so you jump to array size / 2 and find 50? Now you've skipped 50 iterations. You know that 78 MUST be in the top half of the array, so you can again split it in half and jump to 75, etc. By continuously splitting the array in half, you do far fewer iterations than with your brute-force approach.
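An iterative sketch of that halving idea (essentially the same binary search as the other answers; the name ExistsHalving is just illustrative, and ints is assumed to be sorted ascending as the exercise states):
public static bool ExistsHalving(int[] ints, int k)
{
    int lower = 0, upper = ints.Length - 1;
    while (lower <= upper)
    {
        int middle = lower + (upper - lower) / 2;   // written this way to avoid overflow
        if (ints[middle] == k) return true;
        if (ints[middle] < k)
            lower = middle + 1;                     // k can only be in the upper half
        else
            upper = middle - 1;                     // k can only be in the lower half
    }
    return false;
}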
I wrote code that sorts strings based on their decimal values, using the quicksort algorithm.
But my issue is that trailing zeros are removed from some decimals, e.g. 723.1000 becomes 723.1. The zeros are important to me because I want the numbers displayed and stored exactly as entered.
How can I do this without removing the zeros in the decimal numbers?
We may see numbers like these in an array; the zeros must not be removed:
10.20 ==> 10.2
or
50.60000 ==> 50.6
or
145698.154780 ===> 145698.15478
The number of trailing zeros is not fixed,
and this is my problem.
internal class Sorter
{
public string[] QuickSort(string[] array, int low, int high)
{
if (low < high)
{
int p = Partition(array, low, high);
QuickSort(array, low, p - 1);
QuickSort(array, p + 1, high);
}
return array;
}
private static int Partition(string[] array, int low, int high)
{
int left = low + 1, right = high;
string temp;
double pivot = double.Parse(array[low]);
int piv;
while (left <= right)
{
while (left <= right && Convert.ToDouble(array[left]) <= pivot)
left++;
while (left <= right && Convert.ToDouble(array[right]) >= pivot)
right--;
if (left < right)
{
temp = array[left];
array[left] = array[right];
array[right] = temp;
}
}
if (right == high)
piv = high;
else if (right == low)
piv = low;
else
piv = left - 1;
array[low] = array[piv];
array[piv] = pivot.ToString(); // converting the parsed double back to a string here is where the trailing zeros get lost
return piv;
}
}
You can try with Double.ToString():
double d=723.1000;
string s = d.ToString("0.0");
Check here for more formatting options.
Update
You can try this :
string s = d.ToString().TrimEnd('0');
Update 2:
As discussed here :
double doesn't keep insignificant digits - there's no difference between 1.5 and 1.50000 as far as double is concerned.
If you want to preserve insignificant digits, use decimal instead. It
may well be more appropriate for you anyway, depending on your exact
context. (We have very little context to go on here...)
So you can use decimal instead of double:
decimal d = 723.1000M;
string s = d.ToString();
Try Custom Numeric Format Strings, like this:
decimal d = 0.00000000000010000000000m;
string custom = d.ToString("0.#########################");
// gives: 0,0000000000001
string general = d.ToString("G29");
// gives: 1E-13
You may try some LINQ to sort, something like:
double[] myDubleNumber = { 15.00003, 1758.4856, 20.123, 1478.0214,120.0223 };
var result = myDubleNumber.OrderBy(x => x).Select(x => Math.Round(x, 2));
foreach (var d in result)
{
Console.WriteLine(d);
}
Console.ReadLine();
It would be better for the signature to be public double[] QuickSort(double[] array, int low, int high).
There should be 2 parts in your program:
Input a numeric array and output the sorted one;
Format and show the result.
In the first step, don't worry about the removal of the zeros.
In the second step, use .ToString() to format each number.
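A sketch of that two-step idea, assuming the strings can be parsed as decimal: decimal preserves the trailing zeros it was parsed with through sorting and ToString (culture handling is simplified here).
using System;
using System.Globalization;
using System.Linq;

class SortKeepingZeros
{
    static void Main()
    {
        string[] input = { "723.1000", "10.20", "50.60000", "145698.154780" };

        // Step 1: parse to decimal and sort numerically.
        // Step 2: format with ToString, which keeps the parsed scale (the trailing zeros).
        var sorted = input
            .Select(s => decimal.Parse(s, CultureInfo.InvariantCulture))
            .OrderBy(d => d)
            .Select(d => d.ToString(CultureInfo.InvariantCulture));

        foreach (var s in sorted)
            Console.WriteLine(s);   // 10.20, 50.60000, 723.1000, 145698.154780
    }
}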
I am trying to write a method to calculate the sum of the odd numbers in all the numbers less than the given number. so eg. CalcOdd(7) would return 5 + 3 + 1 = 9. CalcOdd (10) would return 9 + 7 + 5 + 3 + 1 = 25 etc
The method needs to take in a number, subtract 1, then recursively work backwards adding all odd numbers until it reaches 0. This is what I have so far.
private static int CalcOdd(int n)
{
if (n <= 1)
return 1;
else
if (n % 2 == 0)
n--;
return n + CalcOdd(n - 2);
}
It doesn't work so well: it includes the number passed in in the addition, which is not what I want. Can anyone suggest a better way of doing this? I would also like to be able to port the answer to work for even numbers, and to add the option of including the original passed-in number in the result.
Many thanks
Why would you use recursion here? Just loop; or better, figure out the math to do it in a simple equation...
The fact is that C# doesn't make for excellent deep recursion for things like maths; the tail-call isn't really there at the moment.
Loop approach:
private static int CalcOdd(int n)
{
int sum = 0, i = 1;
while (i < n)
{
sum += i;
i += 2;
}
return sum;
}
You could do this with recursion as you say, but if you wish to do it quicker, then I can tell you that the sum of the first n odd numbers is equal to n*n.
private static int CalcOdd(int n) {
if (n<=1)
return 0;
if (n%2 == 1)
n--;
int k = n/2;
return k*k;
}
The reason this works is:
Every even number is of the form 2k, and the odd number before it is 2k-1.
Because 2*1-1 = 1, there are k odd numbers below 2k.
If n is odd, we don't want to include it, so we simply go down to the even number below it and we automatically have what we want.
Edited to fix broken code.
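In symbols, the identity the code relies on is:
\sum_{k=1}^{K} (2k - 1) = 2 \cdot \frac{K(K+1)}{2} - K = K^2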
The sum of the odd numbers less than a given number is a perfect square.
Take the whole part of n/2 to get the count of odd numbers less than n.
Square that and voila!
private static int CalcSumOdd(int n)
{
    int i = n / 2; // integer division already gives the whole part
    return i * i;
}
For even numbers it's:
int i = n/2;
return i*(i+1);
Correction: the above "even number sum" includes the original number n, i.e. fn(12) = 42 = 2 + 4 + 6 + 8 + 10 + 12.
If you want to exclude it, you should either unilaterally exclude it or remove it with logic based on a passed-in parameter.
Here is a correction,
int CalcOdd(int n)
{
n--; // <----
if (n <= 1)
return 0; // <----
else
if (n % 2 == 0)
n--;
return n + CalcOdd(n); // <----
}
I'm new here, but this seems like a silly recursion exercise, given that it can be done with a simple equation:
int sum(int n, bool isEven, bool notFirst)
{
    int c = 1; // skip the else
    if (isEven) c = 2;
    if (notFirst) n -= 2;
    return ((n + c) * ((n + c) / 2)) / 2;
}
Classic discrete math sum series: the sum from 1 to 100 (odds and evens) is ((100+1)*(100/2)) = 5050.
Edit: in my code here, if you're calculating the sum of odds with n being even, or vice versa, it doesn't work, but I'm not going to put the work into that (and clutter the code) right now. I'll assume your code will take care of that by the time it hits the function; for example, 7/2 isn't an int (obviously).
Why use recursion?
private Int32 CalcOdd(Int32 value)
{
Int32 r = 0;
{
while (value >= 1)
{
value--;
if (value % 2 != 0)
{
r += value;
}
}
}
return r;
}
Use a helper function. CalcOdd consists of testing n to see if it is even or odd; if it is even, return helper(n); if it is odd, return helper(n-2).
The helper function must handle three cases:
1) n is less than 1; in this case return 0.
2) n is even, in this case return helper(n-1).
3) n is odd, in this case return n+helper(n-1).
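A direct translation of that description into code might look like this (a sketch; Helper is just an illustrative name):
public static int CalcOdd(int n)
{
    // n itself must never be included, so step past it before summing.
    return n % 2 == 0 ? Helper(n) : Helper(n - 2);
}

private static int Helper(int n)
{
    if (n < 1) return 0;                   // case 1: nothing left to add
    if (n % 2 == 0) return Helper(n - 1);  // case 2: skip the even number
    return n + Helper(n - 1);              // case 3: add the odd number and keep going
}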
public static int CalcOdd(int n) {
// Find the highest even number. (Either n, or n-1.)
// Divide that by 2, and the answer should be the square of that number.
n = (n & 0x3FFFFFFE) >> 1;
return (int)Math.Pow(n, 2);
}
private static int CalcOdd(int n) {
n -= 1;
if ((n & 1) == 0) n--;
if (n <= 1) return 1;
return n + CalcOdd(n - 1);
}
But I would say doing loops is better and cleaner.
private static int CalcOdd(int n) {
int i, r = 1;
for (i = 3; i < n; i+=2)
r += i;
return r;
}
Since you want the option of including or excluding the first answer (and, keeping your "recursion" constraint in mind):
int calcOdd(int n, bool includeN)
{
if( !includeN )
return calcOdd(n-1, true);
if(n<=1)
return 1;
else
if(n%2 == 0)
n--;
return n+calcOdd(n-1, true);
}
The includeN parameter, if passed as true, will include n in the calculations. Otherwise, the next layer down will start "including n".
Granted, as others have said, this is a horribly inefficient use of recursion, but... If you like recursion, try Haskell. It's a language built almost entirely on the concept.
int CalcOdd(int n)
{
n -= 1;
if (n <= 0)
return 0;
if (n % 2 == 0)
n--;
return n + CalcOdd(n);
}
This function is also recursive, and it has parameters that let you decide whether to sum even or odd numbers and whether to include the first number. If you are confused as to how it works, remember that bools can also be seen as 1 (true) and 0 (false).
int Calc(int n, bool even = false, bool includeFirst = false)
{
n -= !includeFirst;
if (n <= 0)
return 0;
if (n % 2 == even)
n--;
return n + Calc(n - includeFirst, even);
}
Håkon, I have ported your code to C# in VS 2008 as follows:
static int Calc(int n, bool bEven, bool bIncludeFirst)
{
int iEven = Bool2Int(bEven);
int iIncludeFirst = Bool2Int(bIncludeFirst);
n -= 1 - iIncludeFirst;
if (n <= 0)
return 0;
if (n % 2 == iEven)
n--;
return n + Calc(n - iIncludeFirst, bEven, bIncludeFirst);
}
private static int Bool2Int(bool b)
{
return b ? 1 : 0;
}
It seems to be working. Now, is there anything I can do to optimise? I.e., I don't want to have to convert those bools to ints every time, etc.
I'd isolate the 'make it odd' part from the 'sum every other descending number' part: (forgive the Python)
def sumEveryTwoRecursive(n):
if n <= 0:
return 0
return n + sumEveryTwoRecursive(n - 2)
def calcOdd(n):
return sumEveryTwoRecursive(n - (2 if n % 2 else 1))
Just because there isn't one here yet, I've decided to use the LINQ hammer on this nail...
(borrowed from Nick D and Jason's pair programmed answer here)
void Main()
{
GetIterator(7, true, false).Sum().Dump();
// Returns 9
GetIterator(10, true, false).Sum().Dump();
// Returns 25
}
public IEnumerable<int> GetIterator(int n, bool isOdd, bool includeOriginal)
{
if (includeOriginal)
n++;
if (isOdd)
return GetIterator(n, 1);
else
return GetIterator(n, 0);
}
public IEnumerable<int> GetIterator(int n, int odd)
{
n--;
if (n < 0)
yield break;
if (n % 2 == odd)
yield return n;
foreach (int i in GetIterator(n, odd))
yield return i;
}
#include <iostream>
using namespace std;
int sumofodd(int num);
int main()
{
int number,res;
cin>>number;
res=sumofodd(number);
cout<<res;
return 0;
}
int sumofodd(int num)
{ if(num<1) return 0;
if (num%2==0) num--;
return num+sumofodd(num-1);
}
All numbers that divide evenly into x.
If I put in 4 it should return: 4, 2, 1
edit: I know it sounds homeworky. I'm writing a little app to populate some product tables with semi random test data. Two of the properties are ItemMaximum and Item Multiplier. I need to make sure that the multiplier does not create an illogical situation where buying 1 more item would put the order over the maximum allowed. Thus the factors will give a list of valid values for my test data.
edit++:
This is what I went with after all the help from everyone. Thanks again!
edit#: I wrote 3 different versions to see which I liked better and tested them against factoring small numbers and very large numbers. I'll paste the results.
static IEnumerable<int> GetFactors2(int n)
{
return from a in Enumerable.Range(1, n)
where n % a == 0
select a;
}
private IEnumerable<int> GetFactors3(int x)
{
for (int factor = 1; factor * factor <= x; factor++)
{
if (x % factor == 0)
{
yield return factor;
if (factor * factor != x)
yield return x / factor;
}
}
}
private IEnumerable<int> GetFactors1(int x)
{
int max = (int)Math.Ceiling(Math.Sqrt(x));
for (int factor = 1; factor < max; factor++)
{
if(x % factor == 0)
{
yield return factor;
if(factor != max)
yield return x / factor;
}
}
}
In ticks.
When factoring the number 20, 5 times each:
GetFactors1-5,445,881
GetFactors2-4,308,234
GetFactors3-2,913,659
When factoring the number 20000, 5 times each:
GetFactors1-5,644,457
GetFactors2-12,117,938
GetFactors3-3,108,182
pseudocode:
Loop from 1 to the square root of the number, call the index "i".
if number mod i is 0, add i and number / i to the list of factors.
realocode:
public List<int> Factor(int number)
{
var factors = new List<int>();
int max = (int)Math.Sqrt(number); // Round down
for (int factor = 1; factor <= max; ++factor) // Test from 1 to the square root, or the int below it, inclusive.
{
if (number % factor == 0)
{
factors.Add(factor);
if (factor != number/factor) // Don't add the square root twice! Thanks Jon
factors.Add(number/factor);
}
}
return factors;
}
As Jon Skeet mentioned, you could implement this as an IEnumerable<int> as well - use yield instead of adding to a list. The advantage with List<int> is that it could be sorted before return if required. Then again, you could get a sorted enumerator with a hybrid approach, yielding the first factor and storing the second one in each iteration of the loop, then yielding each value that was stored in reverse order.
You will also want to do something to handle the case where a negative number is passed into the function.
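A sketch of that hybrid approach: yield each small factor as soon as it is found, stash its cofactor, then replay the stash in reverse so the whole sequence comes out in ascending order (assumes number >= 1 and the usual using System.Collections.Generic):
public static IEnumerable<int> FactorsSorted(int number)
{
    var larger = new Stack<int>();                   // cofactors arrive in descending order
    for (int factor = 1; factor * factor <= number; factor++)
    {
        if (number % factor != 0) continue;
        yield return factor;                         // small factors stream out ascending
        if (factor != number / factor)
            larger.Push(number / factor);            // stash the matching large factor
    }
    while (larger.Count > 0)
        yield return larger.Pop();                   // popping reverses the stash: ascending again
}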
The % (remainder) operator is the one to use here. If x % y == 0 then x is divisible by y. (Assuming 0 < y <= x)
I'd personally implement this as a method returning an IEnumerable<int> using an iterator block.
Very late, but the accepted answer (from a while back) didn't give the correct results.
Thanks to Merlyn, I now see the reason for the square as a 'max'; below is the corrected sample, although the answer from Echostorm seems more complete.
public static IEnumerable<uint> GetFactors(uint x)
{
for (uint i = 1; i * i <= x; i++)
{
if (x % i == 0)
{
yield return i;
if (i != x / i)
yield return x / i;
}
}
}
As extension methods:
public static bool Divides(this int potentialFactor, int i)
{
return i % potentialFactor == 0;
}
public static IEnumerable<int> Factors(this int i)
{
return from potentialFactor in Enumerable.Range(1, i)
where potentialFactor.Divides(i)
select potentialFactor;
}
Here's an example of usage:
foreach (int i in 4.Factors())
{
Console.WriteLine(i);
}
Note that I have optimized for clarity, not for performance. For large values of i this algorithm can take a long time.
Another LINQ style, trying to keep the O(sqrt(n)) complexity:
static IEnumerable<int> GetFactors(int n)
{
Debug.Assert(n >= 1);
var pairList = from i in Enumerable.Range(1, (int)(Math.Round(Math.Sqrt(n) + 1)))
where n % i == 0
select new { A = i, B = n / i };
foreach(var pair in pairList)
{
yield return pair.A;
yield return pair.B;
}
}
Here it is again, only counting to the square root, as others mentioned. I suppose that people are attracted to that idea if you're hoping to improve performance. I'd rather write elegant code first, and optimize for performance later, after testing my software.
Still, for reference, here it is:
public static bool Divides(this int potentialFactor, int i)
{
return i % potentialFactor == 0;
}
public static IEnumerable<int> Factors(this int i)
{
foreach (int result in from potentialFactor in Enumerable.Range(1, (int)Math.Sqrt(i))
where potentialFactor.Divides(i)
select potentialFactor)
{
yield return result;
if (i / result != result)
{
yield return i / result;
}
}
}
Not only is the result considerably less readable, but the factors come out of order this way, too.
I did it the lazy way. I don't know much, but I've been told that simplicity can sometimes imply elegance. This is one possible way to do it:
public static IEnumerable<int> GetDivisors(int number)
{
var searched = Enumerable.Range(1, number)
.Where((x) => number % x == 0)
.Select(x => number / x);
foreach (var s in searched)
yield return s;
}
EDIT: As Kraang Prime pointed out, this function cannot exceed the limit of an integer and is (admittedly) not the most efficient way to handle this problem.
Wouldn't it also make sense to start at 2 and head towards an upper limit value that's continuously being recalculated based on the number you've just checked? See N/i (where N is the Number you're trying to find the factor of and i is the current number to check...) Ideally, instead of mod, you would use a divide function that returns N/i as well as any remainder it might have. That way you're performing one divide operation to recreate your upper bound as well as the remainder you'll check for even division.
Math.DivRem
http://msdn.microsoft.com/en-us/library/wwc1t3y1.aspx
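A sketch of that idea using Math.DivRem, which returns the quotient and hands back the remainder in one call; the quotient doubles as the shrinking upper bound (assumes number >= 1, method name just mirrors the ones above):
public static IEnumerable<int> GetFactors(int number)
{
    for (int i = 1; ; i++)
    {
        int quotient = Math.DivRem(number, i, out int remainder);
        if (i > quotient)
            yield break;                 // crossed the square root; every factor pair is already emitted
        if (remainder != 0)
            continue;
        yield return i;
        if (i != quotient)
            yield return quotient;       // the cofactor, which is also the new effective upper bound
    }
}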
If you use doubles, the following works: use a for loop iterating from 1 up to the number you want to factor. In each iteration, divide the number to be factored by i. If (number / i) % 1 == 0, then i is a factor, as is the quotient of number / i. Put one or both of these in a list, and you have all of the factors.
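A minimal sketch of that doubles approach as described (it loops all the way up to the number, so it stays O(n); the whole-number check works for int-sized values):
public static List<int> GetFactorsViaDoubles(int number)
{
    var factors = new List<int>();
    for (int i = 1; i <= number; i++)
    {
        // A quotient with no fractional part means i divides number evenly.
        if ((number / (double)i) % 1 == 0)
            factors.Add(i);
    }
    return factors;
}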
And one more solution. Not sure if it has any advantages other than being readable..:
List<int> GetFactors(int n)
{
var f = new List<int>() { 1 }; // adding trivial factor, optional
int m = n;
int i = 2;
while (m > 1)
{
if (m % i == 0)
{
f.Add(i);
m /= i;
}
else i++;
}
// f.Add(n); // adding trivial factor, optional
return f;
}
I came here just looking for a solution to this problem for myself. After examining the previous replies I figured it would be fair to toss out an answer of my own even if I might be a bit late to the party.
No factor of a number (other than the number itself) will be more than half of that number. There is no need to deal with floating-point values or transcendental operations like a square root. Additionally, finding one factor of a number automatically finds another: just find one and you can return both by dividing the original number by the one you found.
I doubt I'll need to use checks for my own implementation but I'm including them just for completeness (at least partially).
public static IEnumerable<int> Factors(int Num)
{
int ToFactor = Num;
if(ToFactor == 0)
{ // Zero has only itself and one as factors, but this can't be discovered through division,
// obviously.
yield return 0;
yield return 1;
yield break;
}
if(ToFactor < 0)
{// Negative numbers are simply being treated here as just adding -1 to the list of possible
// factors. In practice it can be argued that the factors of a number can be both positive
// and negative, i.e. 4 factors into the following pairings of factors:
// (-4, -1), (-2, -2), (1, 4), (2, 2) but normally when you factor numbers you are only
// asking for the positive factors. By adding a -1 to the list it allows flagging the
// series as originating with a negative value and the implementer can use that
// information as needed.
ToFactor = -ToFactor;
yield return -1;
}
int FactorLimit = ToFactor / 2; // A good compiler may do this optimization already.
// It's here just in case;
for(int PossibleFactor = 1; PossibleFactor <= FactorLimit; PossibleFactor++)
{
if(ToFactor % PossibleFactor == 0)
{
yield return PossibleFactor;
yield return ToFactor / PossibleFactor;
}
}
}
A program to get the prime factors of whole numbers, in JavaScript:
function getFactors(num1){
var factors = [];
var divider = 2;
while(num1 != 1){
if(num1 % divider == 0){
num1 = num1 / divider;
factors.push(divider);
}
else{
divider++;
}
}
console.log(factors);
return factors;
}
getFactors(20);
In fact, we don't have to check on every iteration that the factor isn't the square root, as in the accepted answer proposed by chris and fixed by Jon; that extra Boolean check and division could slow the method down when the integer is large. Just keep max as a double (don't cast it to an int) and change the loop to be exclusive rather than inclusive.
private static List<int> Factor(int number)
{
var factors = new List<int>();
var max = Math.Sqrt(number); // (store in a double, not an int)
if (max % 1 == 0)
factors.Add((int)max); // perfect square: add the root once, up front
for (int factor = 1; factor < max; ++factor) // (Exclusive) - Test from 1 up to, but not including, the square root.
{
if (number % factor == 0)
{
factors.Add(factor);
//if (factor != number / factor) // (Don't need check anymore) - Don't add the square root twice! Thanks Jon
factors.Add(number / factor);
}
}
return factors;
}
Usage
Factor(16)
// 4 1 16 2 8
Factor(20)
//1 20 2 10 4 5
And this is the extension version of the method for int type:
public static class IntExtensions
{
public static IEnumerable<int> Factors(this int value)
{
// Return 2 obvious factors
yield return 1;
yield return value;
// Return the square root if the number is a perfect square
var max = Math.Sqrt(value);
if (max % 1 == 0)
yield return (int)max;
// Return rest of the factors
for (int i = 2; i < max; i++)
{
if (value % i == 0)
{
yield return i;
yield return value / i;
}
}
}
}
Usage
16.Factors()
// 4 1 16 2 8
20.Factors()
//1 20 2 10 4 5
Linq solution:
IEnumerable<int> GetFactors(int n)
{
Debug.Assert(n >= 1);
return from i in Enumerable.Range(1, n)
where n % i == 0
select i;
}